Zach
11/15/2023, 4:39 PMCraig Austin
11/16/2023, 7:53 AMIvan Tsarev
11/16/2023, 5:22 PM#13 60.68 INFO: pip is looking at multiple versions of json5 to determine which version is compatible with other requirements. This could take a while.
#13 60.70 Downloading json5-0.9.4-py2.py3-none-any.whl (17 kB)
#13 62.51 Downloading json5-0.9.3-py2.py3-none-any.whl (17 kB)
#13 64.31 Downloading json5-0.9.2-py2.py3-none-any.whl (27 kB)
#13 66.11 Downloading json5-0.9.1-py2.py3-none-any.whl (27 kB)
#13 67.91 Downloading json5-0.9.0-py2.py3-none-any.whl (27 kB)
#13 69.70 INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See <https://pip.pypa.io/warnings/backtracking> for guidance. If you want to abort this run, press Ctrl + C.
#13 69.70 INFO: pip is looking at multiple versions of httptools to determine which version is compatible with other requirements. This could take a while.
#13 69.71 Collecting httptools>=0.5.0
#13 69.73 Downloading httptools-0.6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (441 kB)
#13 69.74 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 441.7/441.7 kB 63.9 MB/s eta 0:00:00
#13 93.11 Downloading httptools-0.5.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (427 kB)
#13 93.12 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 427.8/427.8 kB 73.1 MB/s eta 0:00:00
#13 116.4 INFO: pip is looking at multiple versions of h11 to determine which version is compatible with other requirements. This could take a while.
#13 116.5 Collecting h11>=0.8
#13 116.5 Downloading h11-0.13.0-py3-none-any.whl (58 kB)
#13 116.5 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.2/58.2 kB 18.3 MB/s eta 0:00:00
#13 186.9 Downloading h11-0.12.0-py3-none-any.whl (54 kB)
#13 186.9 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.9/54.9 kB 18.0 MB/s eta 0:00:00
#13 233.5 INFO: pip is looking at multiple versions of httptools to determine which version is compatible with other requirements. This could take a while.
#13 257.0 Downloading h11-0.11.0-py2.py3-none-any.whl (54 kB)
#13 257.0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.6/54.6 kB 17.2 MB/s eta 0:00:00
#13 327.9 Downloading h11-0.10.0-py2.py3-none-any.whl (53 kB)
#13 327.9 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.7/53.7 kB 15.9 MB/s eta 0:00:00
#13 351.4 INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See <https://pip.pypa.io/warnings/backtracking> for guidance. If you want to abort this run, press Ctrl + C.
#13 398.8 Downloading h11-0.9.0-py2.py3-none-any.whl (53 kB)
#13 398.8 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.6/53.6 kB 19.0 MB/s eta 0:00:00
#13 469.7 Downloading h11-0.8.1-py2.py3-none-any.whl (55 kB)
#13 469.7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 55.8/55.8 kB 17.6 MB/s eta 0:00:00
#13 540.8 Downloading h11-0.8.0-py2.py3-none-any.whl (55 kB)
#13 540.9 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 55.6/55.6 kB 17.4 MB/s eta 0:00:00
#13 611.4 INFO: pip is looking at multiple versions of h11 to determine which version is compatible with other requirements. This could take a while.
#13 611.4 INFO: pip is looking at multiple versions of graphene to determine which version is compatible with other requirements. This could take a while.
#13 611.4 Collecting graphene>=3
#13 611.4 Downloading graphene-3.2.2-py2.py3-none-any.whl (125 kB)
#13 611.4 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.6/125.6 kB 36.3 MB/s eta 0:00:00
#13 965.5 INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See <https://pip.pypa.io/warnings/backtracking> for guidance. If you want to abort this run, press Ctrl + C.
#13 1178.8 Downloading graphene-3.2.1-py2.py3-none-any.whl (125 kB)
#13 1178.9 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.1/125.1 kB 5.8 MB/s eta 0:00:00
#13 1728.2 Downloading graphene-3.2-py2.py3-none-any.whl (124 kB)
#13 1728.3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.6/124.6 kB 6.4 MB/s eta 0:00:00
#13 2279.0 Downloading graphene-3.1.1-py2.py3-none-any.whl (121 kB)
#13 2279.0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.4/121.4 kB 6.0 MB/s eta 0:00:00
#13 2831.6 Downloading graphene-3.1-py2.py3-none-any.whl (114 kB)
#13 2831.7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 114.2/114.2 kB 6.0 MB/s eta 0:00:00
#13 3389.9 Downloading graphene-3.0-py2.py3-none-any.whl (112 kB)
#13 3389.9 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 112.9/112.9 kB 5.9 MB/s eta 0:00:00
#13 3390.0 INFO: pip is looking at multiple versions of gql[requests] to determine which version is compatible with other requirements. This could take a while.
#13 3390.0 Collecting gql[requests]>=3.0.0
#13 3390.0 Downloading gql-3.4.0-py2.py3-none-any.whl (65 kB)
#13 3390.0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.2/65.2 kB 17.5 MB/s eta 0:00:00
#13 5062.8 INFO: pip is looking at multiple versions of graphene to determine which version is compatible with other requirements. This could take a while.
#13 7319.6 Downloading gql-3.3.0-py2.py3-none-any.whl (63 kB)
#13 7319.6 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.4/63.4 kB 3.7 MB/s eta 0:00:00
#13 8449.9 INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See <https://pip.pypa.io/warnings/backtracking> for guidance. If you want to abort this run, press Ctrl + C.
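The backtracking above is pip churning through old releases of json5, httptools, h11, graphene, and gql; pip's own hint is to hand the resolver stricter constraints. Purely as a sketch — the pins below are illustrative, not a recommendation — tightening the ranges in setup.py usually cuts the resolve time dramatically:
# setup.py (excerpt) -- illustrative pins only; choose versions that actually
# fit the rest of the dependency set before using anything like this.
from setuptools import setup

setup(
    name="my_project",  # hypothetical project name
    install_requires=[
        "json5>=0.9.5,<1.0",
        "httptools>=0.5.0,<0.7",
        "h11>=0.13,<0.15",
        "graphene>=3.2,<3.3",
        "gql[requests]>=3.4,<3.5",
    ],
)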
Charlie Bini
11/16/2023, 7:13 PMdagster_cloud_cli.core.errors.GraphQLStorageError: Error in GraphQL response: [{'message': 'Internal Server Error (Trace ID: 7836303090216615385)', 'locations': [{'line': 4, 'column': 13}], 'path': ['eventLogs', 'getMaterializationCountByPartition']}]
Eliran Shem Tov
11/20/2023, 7:51 AMdagster dev
and is configured like this in dagster.yaml
run_coordinator:
  module: run_coordinator
  class: CustomRunCoordinator
  config:
    tag_concurrency_limits:
      - key: "my_tag"
        value:
          applyLimitPerUniqueValue: true
        limit: 1
I explored almost every bit of the web trying to figure out how to register my CustomRunCoordinator (defined in project-root/run_coordinator.py) as my Dagster Cloud run coordinator, and still haven't figured it out.
How can I specify a concrete run coordinator for cloud deployments? (Preferably via dagster_cloud_staging.yaml / dagster_cloud_prod.yaml or other config files, not through the UI/CLI.)
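For context, the coordinator class itself is roughly the following — a minimal sketch that assumes subclassing Dagster's built-in QueuedRunCoordinator is acceptable; the override shown is illustrative and doesn't by itself answer the Cloud-deployment question:
# project-root/run_coordinator.py -- minimal sketch, not Cloud-specific
from dagster._core.run_coordinator import QueuedRunCoordinator, SubmitRunContext


class CustomRunCoordinator(QueuedRunCoordinator):
    def submit_run(self, context: SubmitRunContext):
        # Custom logic (e.g. adjusting tags or priority) would go here
        # before delegating to the normal queuing behavior.
        return super().submit_run(context)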
Thank you very much, folks! 🙏Ivan Tsarev
11/20/2023, 8:23 AMStill waiting for agent to sync changes to globaldata_pipeline, dataset_export_for_longevity_map, and article_ingestions. This can take a few minutes.
(The same status message repeated several more times.)
Error: Some locations failed to load after being synced by the agent:
Error loading globaldata_pipeline: {'__typename': 'PythonError', 'message': 'Exception: Timed out after waiting 315s for server globaldatapipeline-fe2bfeea94c5a62adf63e1027a37f39372-b8a41f.serverless-agents-namespace-1:4000.\n\nTask logs:\n raise _InactiveRpcError(state) # pytype: disable=not-instantiable\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\ngrpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:\n\tstatus = StatusCode.UNAVAILABLE\n\tdetails = "failed to connect to all addresses; last error: UNKNOWN: unix:/tmp/tmpn1b8it8p: No such file or directory"\n\tdebug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: unix:/tmp/tmpn1b8it8p: No such file or directory {grpc_status:14, created_time:"2023-11-20T07:50:49.169683184+00:00"}"\n>\n{"time": "20/Nov/2023:07:50:51 +0000", "log": "registry - Ignoring failure to clean up local unused path /tmp/pex-files/source-948ebbb361e2d784a6ec5b1c21065161f43621a3.pex\\nTraceback (most recent call last):\\n File \\"/dagster-cloud/dagster_cloud/pex/grpc/server/registry.py\\", line 193, in cleanup_unused_files\\n os.remove(path)\\nFileNotFoundError: [Errno 2] No such file or directory: \'/tmp/pex-files/source-948ebbb361e2d784a6ec5b1c21065161f43621a3.pex\'", "status": "ERROR", "logger": "root"}\n{"time": "20/Nov/2023:07:50:54 +0000", "log": "_server - Exception calling application: <_InactiveRpcError of RPC that terminated with:\\n\\tstatus = StatusCode.UNAVAILABLE\\n\\tdetails = \\"failed to connect to all addresses; last error: UNKNOWN: unix:/tmp/tmpn1b8it8p: No such file or directory\\"\\n\\tdebug_error_string = \\"UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: unix:/tmp/tmpn1b8it8p: No such file or directory {grpc_status:14, created_time:\\"2023-11-20T07:50:49.169683184+00:00\\"}\\"\\n>\\nTraceback (most recent call last):\\n File \\"/usr/local/lib/python3.11/site-packages/grpc/_server.py\\", line 552, in _call_behavior\\n response_or_iterator = behavior(argument, context)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/dagster-cloud/dagster_cloud/pex/grpc/server/server.py\\", line 172, in Ping\\n return self._query(\\"Ping\\", request, context)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/dagster-cloud/dagster_cloud/pex/grpc/server/server.py\\", line 137, in _query\\n return client_or_error._get_response(api_name, request) # noqa: SLF001\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/dagster/dagster/_grpc/client.py\\", line 140, in _get_response\\n return getattr(stub, method)(request, metadata=self._metadata, timeout=timeout)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/usr/local/lib/python3.11/site-packages/grpc/_channel.py\\", line 1161, in __call__\\n return _end_unary_response_blocking(state, call, False, None)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/usr/local/lib/python3.11/site-packages/grpc/_channel.py\\", line 1004, in _end_unary_response_blocking\\n raise _InactiveRpcError(state) # pytype: disable=not-instantiable\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\ngrpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:\\n\\tstatus = StatusCode.UNAVAILABLE\\n\\tdetails = \\"failed to connect to all addresses; last error: UNKNOWN: unix:/tmp/tmpn1b8it8p: No such file or directory\\"\\n\\tdebug_error_string = \\"UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: unix:/tmp/tmpn1b8it8p: No such file or directory {grpc_status:14, 
created_time:\\"2023-11-20T07:50:49.169683184+00:00\\"}\\"\\n>", "status": "ERROR", "logger": "grpc._server"}\n{"time": "20/Nov/2023:07:50:59 +0000", "log": "registry - Ignoring failure to clean up local unused path /tmp/pex-files/deps-f3b8ee6b2caf2a7277819e380fa17e3221a7fd25.pex\\nTraceback (most recent call last):\\n File \\"/dagster-cloud/dagster_cloud/pex/grpc/server/registry.py\\", line 193, in cleanup_unused_files\\n os.remove(path)\\nFileNotFoundError: [Errno 2] No such file or directory: \'/tmp/pex-files/deps-f3b8ee6b2caf2a7277819e380fa17e3221a7fd25.pex\'", "status": "ERROR", "logger": "root"}\n\nMost recent connection error: dagster._core.errors.DagsterUserCodeUnreachableError: Could not reach user code server. gRPC Error code: UNAVAILABLE\n\nStack Trace:\n File "/dagster-cloud/dagster_cloud/workspace/user_code_launcher/user_code_launcher.py", line 1710, in _wait_for_server_process\n client.ping("")\n File "/dagster/dagster/_grpc/client.py", line 200, in ping\n res = self._query("Ping", api_pb2.PingRequest, echo=echo)\n File "/dagster/dagster/_grpc/client.py", line 167, in _query\n self._raise_grpc_exception(\n File "/dagster/dagster/_grpc/client.py", line 150, in _raise_grpc_exception\n raise DagsterUserCodeUnreachableError(\n\nThe above exception was caused by the following exception:\ngrpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:\n\tstatus = StatusCode.UNAVAILABLE\n\tdetails = "failed to connect to all addresses; last error: UNKNOWN: ipv4:10.0.115.152:4000: connection attempt timed out before receiving SETTINGS frame"\n\tdebug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:10.0.115.152:4000: connection attempt timed out before receiving SETTINGS frame {grpc_status:14, created_time:"2023-11-20T07:53:53.102027102+00:00"}"\n>\n\nStack Trace:\n File "/dagster/dagster/_grpc/client.py", line 165, in _query\n return self._get_response(method, request=request_type(**kwargs), timeout=timeout)\n File "/dagster/dagster/_grpc/client.py", line 140, in _get_response\n return getattr(stub, method)(request, metadata=self._metadata, timeout=timeout)\n File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 1161, in __call__\n return _end_unary_response_blocking(state, call, False, None)\n File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 1004, in _end_unary_response_blocking\n raise _InactiveRpcError(state) # pytype: disable=not-instantiable\n\n', 'stack': [' File "/dagster-cloud/dagster_cloud/workspace/user_code_launcher/user_code_launcher.py", line 1304, in _reconcile\n self._wait_for_new_server_ready(\n', ' File "/dagster-cloud/dagster_cloud/workspace/ecs/launcher.py", line 461, in _wait_for_new_server_ready\n self._wait_for_dagster_server_process(\n', ' File "/dagster-cloud/dagster_cloud/workspace/user_code_launcher/user_code_launcher.py", line 1688, in _wait_for_dagster_server_process\n self._wait_for_server_process(\n', ' File "/dagster-cloud/dagster_cloud/workspace/user_code_launcher/user_code_launcher.py", line 1723, in _wait_for_server_process\n raise Exception(\n']}
Any ideas what it could be? It does seem to be connected with our code, but I'm not quite sure.Ivan Tsarev
11/20/2023, 10:25 AMJonathan Williams
11/21/2023, 3:40 AMYoshi Gillaspie
11/21/2023, 7:03 PMdagster-cloud serverless deploy-python-executable . \
--location-name ciro \
--package-name etl_pipelines \
--python-version=3.11
Going to deploy location ciro
Building Python executable for ciro from directory /Users/XXXX/etl and Python 3.11.
Building project dependencies for Python 3.11, writing to /var/folders/pj/zxnmgbk523ldprrb4bm9rgbw0000gn/T/tmp5yu67pe7
Warning: Failed to build dependencies in current environment:ValueError: No pre-built wheel was available for pendulum 2.1.2.No pre-built wheel was available for pendulum 2.1.2.
Warning: Falling back to build in a docker environment
Running docker ghcr.io/dagster-io/dagster-manylinux-builder:latest
Mapped folders:
- /var/folders/pj/zxnmgbk523ldprrb4bm9rgbw0000gn/T/tmp5yu67pe7 -> /output
Please let me know what other information might be helpful to figure out what’s going on, and any help regarding this problem would be greatly appreciated.Venky Iyer
11/21/2023, 9:53 PMMuhammad Jarir Kanji
11/22/2023, 11:50 PMdbt deps
to work; however, it seems the dbt repo is not being copied into the Docker container, so this step fails. Any help would be appreciated!
(My workflow files and logs are shown in the thread.)Muhammad Jarir Kanji
11/25/2023, 3:42 PMActivating serverless deployment
in the Deployment > Agents
tab every few minutes) and my runs are therefore unable to execute, constantly giving a Still waiting for compute resources to spin up.
message and then eventually giving the following error and failing:
Run dequeue failed to reach the user code server after 1 attempts, failing run
dagster._core.errors.DagsterUserCodeUnreachableError: Timed out waiting for call to user code LAUNCH_RUN
I don't see any mention of outages on the Dagster Cloud Status page. Why is the agent so...unstable(?) and constantly redeploying? Is this a common occurrence for serverless users?Brendan Jackson
11/27/2023, 1:48 PMdagster-cloud
CLI. The deployment hasn't changed. I suspect it's related to importing a local package as a library in the setup.py:
install_requires=[
    # Local dependencies
    "dagster-lib @ file://"
    + os.path.join(os.path.dirname(__file__), "../libs/dagster_lib"),
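For context, a fuller (hypothetical) version of that block — the dependency name and path are taken from the snippet above, everything else is illustrative:
# setup.py -- hypothetical reconstruction, for illustration only
import os

from setuptools import find_packages, setup

setup(
    name="data_pipelines",  # illustrative; matches the module named in the error below
    packages=find_packages(),
    install_requires=[
        # Local dependency resolved relative to this setup.py
        "dagster-lib @ file://"
        + os.path.join(os.path.dirname(__file__), "../libs/dagster_lib"),
    ],
)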
I get:
dagster._core.errors.DagsterImportError: Encountered ImportError: `cannot import name 'SqlIOManager' from 'dagster_lib.resources.io_managers' (/venvs/1a7e8cb1c6a5/lib/python3.10/site-packages/dagster_lib/resources/io_managers.py)` while importing module data_pipelines. Local modules were resolved using the working directory `/venvs/4c52386641a1/lib/python3.10/site-packages/working_directory/root`. If another working directory should be used, please explicitly specify the appropriate path using the `-d` or `--working-directory` for CLI based targets or the `working_directory` configuration option for workspace targets.
In the deployment logs I notice:
Reusing cached dependencies: deps-dabf9cc84304f18af0dce6c9b35b3c8730259878.pex
Is it possible this dependency is cached and not updated? Is there anything else that might cause this?John
11/27/2023, 10:12 PMdagster-cloud serverless deploy-python-executable ./dagster_university \
--location-name dagster_university \
--package-name dagster_university
I get a ModuleNotFoundError like this:
Going to deploy location dagster_university
Building Python executable for dagster_university from directory /Users/john/Projects/dagster_essentials/dagster_university and Python 3.8.
<<I snipped the traceback>>
ValueError: Error running setup.py egg_info: Traceback (most recent call last):
File "/Users/john/Projects/dagster_essentials/dagster_university/setup.py", line 1, in <module>
from setuptools import find_packages, setup
ModuleNotFoundError: No module named 'setuptools'
Dennis Schwartz (he/him)
11/29/2023, 12:18 PMdagster_cloud.yaml
However, the example reference specifies a docker image using the latest
tag, which is not best practice in Docker as far as I know.
If I wanted to specify e.g. a git commit hash as the docker image tag, is there a mechanism for that?
Basically I want the code the agent uses to update whenever I make a new commit.
I can't set the dagster_cloud.yaml
to specify the current commit as the tag because I don't know the commit hash before committing, so it's a circular dependency.
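One way people square that circle (a sketch only — the file location, the locations/image layout, and the GIT_SHA variable are all assumptions) is to have CI rewrite the image tag in dagster_cloud.yaml from the commit SHA just before deploying:
# ci_set_image_tag.py -- hypothetical CI helper, run before the deploy step
import os

import yaml

sha = os.environ["GIT_SHA"]  # however your CI exposes the commit hash

with open("dagster_cloud.yaml") as f:
    config = yaml.safe_load(f)

for location in config.get("locations", []):
    repo = location["image"].rsplit(":", 1)[0]  # strip the existing tag
    location["image"] = f"{repo}:{sha}"

with open("dagster_cloud.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)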
I've looked for about an hour in various docs and couldn't find any good examples. Could someone point me in the right direction?Olivier Chancé
11/29/2023, 12:57 PMreturn 1
, takes more than 5 minutes to initiate the Run, and half the time, it ends in a timeout with the error dagster._core.errors.DagsterUserCodeUnreachableError: Timed out waiting for call to user code LAUNCH_RUN
after several minutes of running (even though no other job or sensor is running in parallel). Launching other partitioned jobs results in 504 errors before even a single Run is initiated.
How can we debug this kind of issue?Alex Chisholm
11/29/2023, 7:46 PMAustin Bailey
11/29/2023, 11:53 PMNicolas Nguyen
11/30/2023, 10:27 AMscott simpson
12/02/2023, 1:40 AMJoe Naso
12/03/2023, 10:01 PM<instance>.dagster.cloud/prod/overview/jobs
when attempting to launch a backfill run with a specific partition configurationHugo Vandernotte
12/05/2023, 1:49 PMdagster-cloud serverless deploy --location-name=my_location --python-version=3.9
However, when trying to run it, it seems that the --python-version argument is not available anymore. Could you help me?Hugo Vandernotte
12/05/2023, 4:53 PMsecret and access
keys and would like to know whether it would be possible for Dagster to assume a specific role in our account? Or if it is planned to be implemented in the near future?Leah Padgett
12/05/2023, 7:45 PMBrandon Freeman
12/05/2023, 8:16 PMPhuoc Nguyen
12/06/2023, 12:14 PMOlivier Chancé
12/06/2023, 5:33 PMAlberto Vila Tena
12/08/2023, 10:58 AMdagster-datadog
just works within ops, and the existence of this GitHub issue doesn't leave me very hopeful on this matter.Kevin
12/09/2023, 12:21 PM'Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [<OpenSSLError(code=109052027, lib=13, reason=123, reason_text=header too long)>]
The keypair does work when I use it to connect via the SnowSQL CLI (referencing the key file rather than passing the private key directly).
Currently I have the private key set as an environment variable in Dagster Cloud, which I think is the reason I'm facing the error above. I should instead refer to the key's location.
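For what it's worth, the usual culprit with keys in environment variables is mangled newlines; below is a minimal sketch of deserializing a PEM key held (base64-encoded) in an env var and producing the DER bytes the connector expects — the variable names and the base64 step are assumptions, not the actual setup:
# Hypothetical sketch -- env var names are illustrative
import base64
import os

from cryptography.hazmat.primitives import serialization

pem_bytes = base64.b64decode(os.environ["SNOWFLAKE_PRIVATE_KEY_B64"])
passphrase = os.environ.get("SNOWFLAKE_PRIVATE_KEY_PASSPHRASE")
private_key = serialization.load_pem_private_key(
    pem_bytes,
    password=passphrase.encode() if passphrase else None,
)
# DER/PKCS8 bytes are what the Snowflake connector's private_key
# connection parameter expects.
der_key = private_key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)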
If that's the case, my question is: is it possible to have the encrypted private key stored in a location on the Dagster Cloud instance so that I can reference the path?Rytis Zolubas
12/09/2023, 1:52 PM