ADF Amazon Redshift linked service produces SSL error

Vincents Goldmanis 20 Reputation points
2025-11-20T12:15:04.1133333+00:00

I have an Amazon Redshift v1.0 linked service in Azure Data Factory that connects successfully when I test the connection. When the service is used in a dataset, it fails to fetch data with error code 27809:

Failed while reading data from source during connector transparent migration

ERROR [HY000] [Redshift][ODBC Driver][Server]SSL error: certificate verify failed

I can only use the Azure Integration Runtime; I cannot use a self-hosted integration runtime.

What are the recommended steps to solve this issue?


Azure Data Factory

2 answers

  1. Jerald Felix 9,840 Reputation points
    2025-11-20T16:40:32.5433333+00:00

    Hello Vincents Goldmanis,

Thanks for raising this question in the Q&A forum.

    I understand that you are using an Amazon Redshift v1.0 linked service with the Azure Integration Runtime (Auto-Resolve IR), and you are encountering the error SSL error: certificate verify failed when trying to read data, even though the "Test Connection" succeeds. You also mentioned that you cannot use a Self-Hosted IR.

    This error typically occurs because the legacy Redshift v1.0 connector in Azure Data Factory is based on an older ODBC driver that may not trust the newer CA certificates used by AWS Redshift, or it is failing during the "Transparent Migration" phase where ADF attempts to optimize the copy command.

    Here are the recommended steps to resolve this, specifically for Azure IR:

    1. Use the Amazon Redshift V2 Connector (Recommended):
      • The V1.0 connector is legacy. Microsoft strongly recommends using the Amazon Redshift V2 connector for new workloads.
      • Action: Create a new Linked Service using the Amazon Redshift V2 type. This connector is built on top of the newer drivers and handles SSL/TLS certificates much better natively within the Azure IR environment.
      • Note: The V2 connector creates a dataset that appears as "Amazon Redshift" in the UI, but the underlying type in the JSON definition is different. If you edit your existing dataset, check whether you can switch its Linked Service reference to the new V2 one.
    2. Bypass Certificate Validation (If staying on V1):
      • If you must use V1 and cannot use a Self-Hosted IR to install custom certificates, you can try to disable strict SSL verification in the connection string, though this is less secure.
      • In your Linked Service, look for "Encryption Method" or "Additional Connection Properties".
      • Try adding EncryptionMethod=1;ValidateServerCertificate=0; to the connection string or additional properties. This tells the driver to use encryption but not to validate the server's certificate chain (which is failing because the Azure IR doesn't have the specific Redshift CA trusted).
    3. EncryptionMethod Property:
      • Ensure your connection string explicitly sets EncryptionMethod=1 (SSL). Sometimes the default negotiation fails if not explicitly forced.
    4. Check "Unload" Settings:
      • The error mentions "connector transparent migration". This implies ADF might be trying to use the UNLOAD command to S3 for performance (which is the default efficient copy method).
      • Ensure the S3 bucket you are using for staging (if configured) is accessible and that the Redshift cluster has permission to write to it. Sometimes SSL errors mask underlying permission issues during the UNLOAD handshake.
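
    For reference, here is a minimal sketch of what the V1 linked service definition looks like in JSON. The server, database, and credential values are placeholders, and whether the legacy connector honors extra driver properties such as `ValidateServerCertificate=0` depends on what the authoring UI exposes — verify this in your environment:

    ```json
    {
      "name": "AmazonRedshiftV1LinkedService",
      "properties": {
        "type": "AmazonRedshift",
        "typeProperties": {
          "server": "examplecluster.abc123.us-east-1.redshift.amazonaws.com",
          "port": 5439,
          "database": "dev",
          "username": "adf_user",
          "password": {
            "type": "AzureKeyVaultSecret",
            "store": { "referenceName": "MyKeyVault", "type": "LinkedServiceReference" },
            "secretName": "redshift-password"
          }
        },
        "connectVia": {
          "referenceName": "AutoResolveIntegrationRuntime",
          "type": "IntegrationRuntimeReference"
        }
      }
    }
    ```

    If the authoring UI exposes an "Additional connection properties" field, that is where a string such as `EncryptionMethod=1;ValidateServerCertificate=0;` would go. Validate with a dataset preview or a pipeline run rather than only "Test connection", since (as seen in this issue) the two code paths can behave differently.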

    Summary: The most robust fix is to switch your Linked Service to the Amazon Redshift V2 connector, which is designed to fix these exact driver obsolescence issues on the Azure IR.

    If this helps, please accept the answer.

    Best Regards,

    Jerald Felix


  2. VRISHABHANATH PATIL 2,305 Reputation points Microsoft External Staff Moderator
    2025-12-03T06:35:27.6833333+00:00

    Hi @Vincents Goldmanis,

    Thank you for contacting Microsoft Q&A. Below are a few mitigation steps that may help you resolve the issue.

    Why the SSL Error Occurs

    • ADF Redshift v1.0 uses an older built-in ODBC driver. During “transparent migration,” ADF routes data through this driver.
    • Newer AWS certificate chains or mismatched hostnames (e.g., using a CNAME) often cause SSL verification failures.
    • AWS now issues Redshift certificates via ACM. If the client doesn’t trust the updated root bundle or the hostname doesn’t match the certificate, the error appears.

    Recommended Fix (Azure Integration Runtime, No SHIR)

    1. Use the cluster's actual endpoint
      • In Linked Service → Server, specify the native xxx.redshift.amazonaws.com endpoint.
      • Avoid CNAMEs, Route 53 aliases, or private IPs, so that the hostname matches the certificate CN/SAN.
      • This alone resolves many verify-full failures.
    2. Relax strict certificate validation in v1.0
      • In Linked Service → Advanced → Additional connection properties, add:

    EncryptionMethod=1;ValidateServerCertificate=0;ssl=true;sslmode=require

      • EncryptionMethod=1 together with ssl=true keeps traffic encrypted.
      • ValidateServerCertificate=0 bypasses the chain/hostname checks that fail on the Azure IR.
      • sslmode=require enforces TLS without strict certificate matching.


    Additional Checks

    • Retry the dataset preview or pipeline run.
    • If the error persists, confirm:
      o The exact endpoint is used.
      o No gateway/proxy is intercepting TLS.

    Future-Proof Option

    • Upgrade to Redshift v2 connector (creates a new Linked Service).
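
    As a sketch of this future-proof option, a V2 linked service definition might look like the following. The type name AmazonRedshiftV2 and the property names are assumptions based on ADF's naming convention for other V2 connectors — create the linked service in the authoring UI and inspect its JSON view to confirm the exact schema:

    ```json
    {
      "name": "AmazonRedshiftV2LinkedService",
      "properties": {
        "type": "AmazonRedshiftV2",
        "typeProperties": {
          "server": "examplecluster.abc123.us-east-1.redshift.amazonaws.com",
          "port": 5439,
          "database": "dev",
          "username": "adf_user",
          "password": { "type": "SecureString", "value": "<store in Key Vault instead>" }
        }
      }
    }
    ```

    Existing datasets and copy activities that reference the V1 linked service need to be repointed to the new one after it is created.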

