William Reed
08/12/2021, 12:11 AM
Is there a way to add a command to the Job resource that is created from pipeline executions when using K8sRunLauncher? Thank you.

George Pearse
08/12/2021, 7:47 AM

Navneet Sajwan
08/12/2021, 8:15 AM

George Pearse
08/12/2021, 8:40 AMchrispc
08/12/2021, 1:57 PM
Exception: Timed out waiting for tail process to start
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\execution\api.py", line 756, in pipeline_execution_iterator
    for event in pipeline_context.executor.execute(pipeline_context, execution_plan):
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\executor\in_process.py", line 38, in execute
    yield from iter(
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\execution\api.py", line 835, in __iter__
    yield from self.iterator(
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\execution\plan\execute_plan.py", line 75, in inner_plan_execution_iterator
    active_execution.verify_complete(pipeline_context, step.key)
  File "C:\Users\***\Anaconda3\envs\borrar\lib\contextlib.py", line 120, in __exit__
    next(self.gen)
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\storage\compute_log_manager.py", line 56, in watch
    yield
  File "C:\Users\***\Anaconda3\envs\borrar\lib\contextlib.py", line 120, in __exit__
    next(self.gen)
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\storage\local_compute_log_manager.py", line 51, in _watch_logs
    yield
  File "C:\Users\***\Anaconda3\envs\borrar\lib\contextlib.py", line 120, in __exit__
    next(self.gen)
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\execution\compute_logs.py", line 31, in mirror_stream_to_file
    yield pids
  File "C:\Users\***\Anaconda3\envs\borrar\lib\contextlib.py", line 120, in __exit__
    next(self.gen)
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\execution\compute_logs.py", line 75, in tail_to_stream
    yield pids
  File "C:\Users\***\Anaconda3\envs\borrar\lib\contextlib.py", line 120, in __exit__
    next(self.gen)
  File "C:\Users\***\Anaconda3\envs\borrar\lib\site-packages\dagster\core\execution\compute_logs.py", line 104, in execute_windows_tail
    raise Exception("Timed out waiting for tail process to start")
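This Windows timeout comes from the local compute log manager failing to spawn its tail process. A common workaround (a sketch, not an official fix) is to disable compute log capture in dagster.yaml via the no-op manager; the module/class path below matches the dagster 0.12.x layout and may differ in other versions:

```yaml
# dagster.yaml -- sketch, assumes dagster 0.12.x module layout
compute_logs:
  module: dagster.core.storage.noop_compute_log_manager
  class: NoOpComputeLogManager
```

Step stdout/stderr will no longer appear in Dagit with this setting, but runs stop failing on the tail process.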
chrispc
08/12/2021, 2:21 PM

Jordan W
08/12/2021, 9:09 PMdagster-prometheus
?David
08/13/2021, 4:44 AM

Arun Kumar
08/13/2021, 10:08 AM
I am using the dagster_snowflake resource in my pipelines. What is the recommended way to set the password in the resource config? The password is available in the user-deployment as an env var. Should I just set the env var name in the resource config and make sure that the env var is set in the K8s job container?

Chris Le Sueur
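On the password question above: Dagster resource config supports env-var indirection with the {env: VAR_NAME} source, so the secret itself never appears in run config, only the env var name does. A sketch of run config for the snowflake resource (the resource key and env var names are illustrative, not from the thread):

```yaml
resources:
  snowflake:
    config:
      account: { env: SNOWFLAKE_ACCOUNT }
      user: { env: SNOWFLAKE_USER }
      password: { env: SNOWFLAKE_PASSWORD }  # resolved from the container's environment at run time
      database: MY_DB
      warehouse: MY_WH
```

The env vars then just need to be present in the K8s job container, e.g. injected from a Secret in the pod spec.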
08/13/2021, 10:57 AM
We could use the run_key of a Sensor, but let's suppose (I'm not sure this is actually the case...) we want to keep to a proper hourly schedule rather than specify a minimum interval of about an hour and allow it to drift. Is there a good way to accomplish this?

George Pearse
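On keeping a strict hourly cadence: a schedule with a cron expression (0 * * * *) rather than a sensor interval is the usual answer, because cron ticks are anchored to the wall clock instead of to the previous run. The drift-free part of that logic can be sketched in plain Python (an illustrative helper, not a Dagster API):

```python
from datetime import datetime, timedelta

def next_hourly_tick(now: datetime) -> datetime:
    """Return the next top-of-the-hour tick strictly after `now`.

    Anchoring ticks to the wall clock (instead of `last_run + interval`)
    is what keeps a schedule from drifting when a run starts late.
    """
    floored = now.replace(minute=0, second=0, microsecond=0)
    return floored + timedelta(hours=1)

# A tick evaluated at 10:57 still targets 11:00, not 11:57.
print(next_hourly_tick(datetime(2021, 8, 13, 10, 57)))  # 2021-08-13 11:00:00
```

This is exactly what a cron-based schedule does for you; the sensor-with-run_key approach dedupes runs but cannot re-anchor their timing.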
08/13/2021, 10:59 AMDylan Hunt
08/13/2021, 12:13 PMGeorge Pearse
08/13/2021, 12:38 PMLily Grier
08/13/2021, 4:55 PMJean-Pierre M
08/13/2021, 8:25 PMDonny Winston
08/13/2021, 9:47 PMWilliam Reed
08/13/2021, 10:38 PMDaniel Kim
08/14/2021, 1:35 AM
I am getting Exception: Timed out waiting for tail process to start when running a pipeline on Windows 10, dagster version 0.12.6:

sumanta baral
08/14/2021, 9:25 PM

Xu Zhang
08/15/2021, 12:13 AM

Dean Jackson
08/16/2021, 1:17 AM
I found the event_log_storage and compute_logs settings. But I don't know if these represent 'all dagster logging output', or how to configure each to log to multiple locations: a place convenient for dagit/dagster-daemon (postgres) AND a place that a logging agent can read them.

Xu Zhang
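On the two settings mentioned above: they cover different things and are configured independently in dagster.yaml. event_log_storage holds the structured event stream that Dagit and the daemon read, while compute_logs captures raw stdout/stderr of step execution. A sketch of a dagster.yaml combining both destinations (the URL env var and directory path are illustrative):

```yaml
# dagster.yaml (sketch)
event_log_storage:
  module: dagster_postgres.event_log
  class: PostgresEventLogStorage
  config:
    postgres_url: { env: DAGSTER_PG_URL }

compute_logs:
  module: dagster.core.storage.local_compute_log_manager
  class: LocalComputeLogManager
  config:
    base_dir: /var/log/dagster/compute  # point a logging agent at this directory
```

Python logger output from inside ops is a third stream again; it only reaches these stores if emitted through Dagster's context logger.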
08/16/2021, 1:20 AM/export/apps/python/3.7/bin/python3.7 -E -s -c from multiprocessing.semaphore_tracker import main;main(19)
George Pearse
08/16/2021, 10:17 AM

cvb
08/16/2021, 10:20 AM
I want at most one concurrent run per (pipeline, input-class-N): I don't mind if runs for (pipeline, input-class-I) and (pipeline, input-class-J) run in parallel, but there should be only one (pipeline, input-class-I) at any time.
I can't really figure out how to do that. I can do it with QueuedRunCoordinator by specifying tagConcurrencyLimits for each input class, but it looks ugly.
Wouldn't it be much better if I could define something like
type: QueuedRunCoordinator
config:
  queuedRunCoordinator:
    tagConcurrencyLimits:
      - key: non-concurrent
        limit: 1
and then for each run define tags like tags={"non-concurrent": "pipeline-input-class-I"}?
It looks like it wouldn't be that hard to do in _TagConcurrencyLimitsCounter, but that would change the API, unfortunately, so maybe it would be better to be able to use a custom RunCoordinatorDaemon?
What do you think, are there any plans for that? Does any of this sound good enough that I could make a pull request?

zafar mahmood
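On the per-input-class concurrency question above: QueuedRunCoordinator's tag limits support an applyLimitPerUniqueValue form, which caps each distinct value of a tag key separately rather than the key as a whole — exactly the semantics asked for. A sketch in the same Helm-values shape as the snippet above (whether it is available depends on your Dagster version):

```yaml
type: QueuedRunCoordinator
config:
  queuedRunCoordinator:
    tagConcurrencyLimits:
      - key: non-concurrent
        value:
          applyLimitPerUniqueValue: true
        limit: 1
```

Each run then tags itself with tags={"non-concurrent": "pipeline-input-class-I"}; runs sharing a tag value queue behind each other, while runs with different values proceed in parallel.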
08/16/2021, 10:21 AM

George Pearse
08/16/2021, 12:55 PM

Utkarsh
08/16/2021, 1:17 PM

Jordan W
08/16/2021, 2:05 PM

Huib
08/16/2021, 2:56 PM

Huib
08/16/2021, 3:00 PM