# announcements
d
I'm calling the graphql endpoint over HTTP with a
launchPipelineExecution
mutation to start a pipeline. Looking to have it start in process on the dagit instance. I can't figure out the new 0.8 selector I need to specify. Here's what I got so far:
```graphql
mutation {
  launchPipelineExecution(
    executionParams: {
      selector: {
        repositoryLocationName: "<<in_process>>"
        repositoryName: "my_repository"
        pipelineName: "my_pipeline"
      }
      mode: "default"
    }
  ) {
    __typename
  }
}
```
Getting this error:
```
RESPONSE >>> {'data': {'launchPipelineExecution': {'__typename': 'PipelineNotFoundError', 'message': 'Could not find Pipeline <<in_process>>.my_repository.my_pipeline', 'pipelineName': 'my_pipeline'}}}
```
I'm trying to copy how the dagster_graphql test suite does it, but not succeeding. On a side note, it might be useful to incorporate the "execute pipelines over graphql" machinery into the main dagster_graphql API; it's very useful for use cases like mine, where pipelines launch other pipelines, and it's a shame all that useful client code is buried in tests.
Btw I copied the
<<in_process>>
part from https://github.com/dagster-io/dagster/blob/fb351600fb814c681f4c6305bded3dc67548dfe8/python_modules/dagster-graphql/dagster_graphql_tests/graphql/setup.py#L116. Not sure if that's correct, but I also tried the filename and the path+filename of my repo.py file which contains
```python
from dagster import repository

@repository
def my_repository():
    return [my_pipeline]
```
and none worked.
a
ya this stuff is still very rough. The best thing to do is in your
workspace.yaml
you can specify
location_name
on your
load_from
targets and then you use that name for
repositoryLocationName
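Concretely, a sketch of what that workspace.yaml entry can look like under the 0.8 workspace format ("my_location" and "repo.py" are example names, not from this thread):

```yaml
# workspace.yaml -- sketch; "my_location" is an arbitrary example name
load_from:
  - python_file:
      relative_path: repo.py
      location_name: my_location
```

With this in place, the GraphQL selector's repositoryLocationName would be "my_location" rather than "<<in_process>>".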
d
@alex worked, thank you! šŸ™
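Putting the fix together: with a workspace.yaml location named, say, my_location (an assumed name), the selector that resolves presumably looks along these lines:

```graphql
mutation {
  launchPipelineExecution(
    executionParams: {
      selector: {
        repositoryLocationName: "my_location"
        repositoryName: "my_repository"
        pipelineName: "my_pipeline"
      }
      mode: "default"
    }
  ) {
    __typename
  }
}
```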
a
If you want some context on the architectural changes that these rough edges fell out of, they are covered best in this video:

https://www.youtube.com/watch?v=OzF0Vt4BBIo

d
Watched it, very informative
@alex with multiprocess gone temporarily as you mentioned yesterday, in my use case where a pipeline launches another pipeline over graphql, what are my options for getting multiple pipelines to run at the same time on a central dagit "server"?
a
oh you can have multiple - just no way to bound how many
d
Actually I hacked something up for this in 0.7:
```python
import time

from dagster import DagsterInstance
from dagster.core.storage.pipeline_run import PipelineRunsFilter, PipelineRunStatus


def wait_for_run_slot(pipeline_name, max_runs=1):
    instance = DagsterInstance.get()

    def get_active_runs(pipeline_name):
        # Runs currently executing plus runs queued to start
        started_runs = len(instance.get_runs(
            PipelineRunsFilter(pipeline_name=pipeline_name, status=PipelineRunStatus.STARTED)))
        not_started_runs = len(instance.get_runs(
            PipelineRunsFilter(pipeline_name=pipeline_name, status=PipelineRunStatus.NOT_STARTED)))
        return started_runs + not_started_runs

    # Poll until a run slot frees up
    while get_active_runs(pipeline_name) >= max_runs:
        time.sleep(2)
```
This worked nicely when called in the parent pipeline:
```python
wait_for_run_slot("child_pipeline", 10)
```
a
oh great ya i was just thinking through how to suggest something like this
very nice!
d
Thx
But what I'm currently seeing is that the parent and child pipelines, which my graphql requests all run in process on the dagit server, block me from navigating dagit in my browser until all pipelines finish
Which I'm guessing is due to multiprocess being removed
a
hmmm - you are not calling
executeRunInProcess
at all right? and you are just using the default run launcher?
d
I don't even have a way to verify whether my parent and children pipelines are in fact being run in parallel
Yes, no run launcher defined in dagster.yaml, and not calling that method. Just a
launchPipelineExecution
mutation request
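A launch like that can be sketched as a small client. This is illustrative only, not code from the thread: the endpoint URL, the "my_location" location name, and the function names are all assumptions; the mutation shape follows the 0.8 selector discussed above.

```python
import json
import urllib.request

LAUNCH_MUTATION = """
mutation Launch($executionParams: ExecutionParams!) {
  launchPipelineExecution(executionParams: $executionParams) {
    __typename
  }
}
"""


def build_launch_payload(location_name, repository_name, pipeline_name, mode="default"):
    # Assemble the JSON body for a launchPipelineExecution request.
    return {
        "query": LAUNCH_MUTATION,
        "variables": {
            "executionParams": {
                "selector": {
                    "repositoryLocationName": location_name,
                    "repositoryName": repository_name,
                    "pipelineName": pipeline_name,
                },
                "mode": mode,
            }
        },
    }


def launch_pipeline(pipeline_name, url="http://localhost:3000/graphql"):
    # POST the mutation to a running dagit instance (URL is an assumption).
    body = json.dumps(
        build_launch_payload("my_location", "my_repository", pipeline_name)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```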
a
just to verify you see
CliApiRunLauncher
if you look at the ā€œInstance Detailsā€ in
dagit
d
Yup
```yaml
Run Launcher:
     module: dagster.core.launcher.cli_api_run_launcher
     class: CliApiRunLauncher
     config:
       {}
```
a
and dagit locks up - very peculiar
d
Yes, until the pipelines complete. Then I can review everything and see no errors in the pipeline results.
a
Can you describe in more detail what you are seeing? Does the web page not load at all ? Just certain views?
d
Let me gather some more clues and get back
a
appreciated
d
Looks like it was an issue on my end, dagit is fine. My laptop kept freezing up while my parent pipeline spawned a whole bunch of children, and with each child being a noop that returned immediately (hacking on a 0.8 prototype currently) it was constantly spawning new ones. Looked like dagit was dead, but putting a sleep into the children revealed it was fine. My bad!
šŸ˜Œ 1
@alex really appreciate the help and thanks to you and the team for an awesome 0.8 release!
ā¤ļø 1