Jason
01/30/2023, 4:35 PM
persistent volumes
but if I'm wrong can you share any insights?
Currently, I have all of the pods running in the cluster except minio (I'm using external Postgres). No errors in my pods. I'm able to log in and set up a Snowflake destination and a source, but when I try to run a sync, all 3 attempts fail. No logs except the one below.
2023-01-30 16:20:32 - Additional Failure Information: message='io.temporal.serviceclient.CheckedExceptionWrapper: java.util.concurrent.ExecutionException: java.lang.RuntimeException: io.airbyte.workers.exception.WorkerException: Running the launcher replication-orchestrator failed', type='java.lang.RuntimeException', nonRetryable=false
It works fine locally, so I'm guessing my last hope is trying to get Infra to set me up with an EC2 server to run docker compose
but then I'll have to deal with Dagster in K8s getting access to EC2
Jason
01/30/2023, 4:37 PM
Adam Bloom
01/30/2023, 4:39 PM
Adam Bloom
01/30/2023, 4:40 PM
Adam Bloom
01/30/2023, 5:20 PM
Jason
01/30/2023, 5:23 PM
CONTAINER_ORCHESTRATOR_ENABLED: ""
Adam Bloom
01/30/2023, 5:24 PM
Adam Bloom
01/30/2023, 5:25 PM
Jason
01/30/2023, 5:27 PM
Adam Bloom
01/30/2023, 5:28 PM
STATE_STORAGE_MINIO_ENDPOINT
- just totally delete that from the airbyte-worker deployment.
Then, you need to add the following env vars to use S3 for state storage:
• STATE_STORAGE_S3_BUCKET_REGION
• STATE_STORAGE_S3_REGION
(yes, looks like a duplicate, but different parts of the airbyte code check different variables right now)
• STATE_STORAGE_S3_BUCKET_NAME
• STATE_STORAGE_S3_ACCESS_KEY
• STATE_STORAGE_S3_SECRET_ACCESS_KEY
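A minimal sketch of what the airbyte-worker container's env section might look like after this change — the variable names are from the list above, but the bucket name, region values, and the Secret name/keys are placeholders, not values from this conversation:

```yaml
# Sketch: env section of the airbyte-worker Deployment after removing
# STATE_STORAGE_MINIO_ENDPOINT and adding the S3 state-storage variables.
# All values below are placeholders -- substitute your own.
env:
  - name: STATE_STORAGE_S3_BUCKET_NAME
    value: "my-airbyte-state-bucket"   # placeholder bucket name
  - name: STATE_STORAGE_S3_BUCKET_REGION
    value: "us-east-1"                 # placeholder region
  - name: STATE_STORAGE_S3_REGION
    value: "us-east-1"                 # same value; different code paths check each
  - name: STATE_STORAGE_S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: airbyte-s3-creds         # hypothetical Secret holding the credentials
        key: access-key
  - name: STATE_STORAGE_S3_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: airbyte-s3-creds
        key: secret-access-key
```

Pulling the credentials from a Secret via `valueFrom.secretKeyRef` is standard Kubernetes practice; plain `value:` strings would also work for the keys, but would leave them readable in the Deployment spec.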
Adam Bloom
01/30/2023, 5:29 PM
Jason
01/30/2023, 5:31 PM
CONTAINER_ORCHESTRATOR_ENABLED: ""
got me past the error, and syncing seems to work on the first attempt, but no S3 logs.
I'll follow your above instructions (and re-enable the orchestrator).
Adam Bloom
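Re-enabling the orchestrator would then mean restoring that env var on the worker. A sketch, assuming (as the thread suggests) that the empty string above is what disabled it:

```yaml
# Sketch: re-enable the container orchestrator on the airbyte-worker Deployment.
# Assumption based on this thread: "" parses as false and disables it.
env:
  - name: CONTAINER_ORCHESTRATOR_ENABLED
    value: "true"
```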
01/30/2023, 5:32 PM
Adam Bloom
01/30/2023, 5:32 PM
Jason
01/30/2023, 5:41 PM
Adam Bloom
01/30/2023, 5:51 PM