# ask-community
Hello again :blob-wave: I am using a factory pattern to generate my pipelines (I've also tried the same arrangement with one pipeline and multiple presets). I need this because the pipelines' configurations aren't known upfront, but they can be determined automatically. In development the pipelines all refresh fine, though I suppose that is the dagster-daemon doing code hot-reloading, or a byproduct of it using a multiprocess executor that starts a new interpreter each time. In production I'm using the recommended gRPC server pattern:
```yaml
load_from:
  - grpc_server:
      host: executor
      port: 4000
```
Clicking the "Reload repository" button in the UI indicates that something is happening, but the pipelines are never updated. Previously I suspected the 'lazy repository' pattern (I was returning a callable, as per the docs), but even when returning JobDefinition objects directly it still doesn't work. It requires a full container restart to force a reload:
```python
from dagster import repository

@repository
def dev_repository():
    # pipelines_factory builds JobDefinitions from automatically
    # determined configuration
    return list(pipelines_factory(mode=mode_development))
```
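(For reference, the 'lazy repository' form mentioned above returns a dict of names to zero-argument callables rather than the definitions themselves. A minimal sketch, where `make_job` is a hypothetical per-name factory standing in for whatever `pipelines_factory` does per pipeline:)
```python
from dagster import repository

@repository
def dev_repository_lazy():
    # Lazy form: Dagster invokes the callable only when the named job
    # is actually requested, rather than constructing everything upfront.
    return {
        "jobs": {
            "my_generated_job": lambda: make_job("my_generated_job"),
        }
    }
```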
My questions are:
• what am I doing wrong?
• how can I have a containerised deployment that supports reloading (manually, or preferably automatically via a 'sensor')?
• or perhaps I've misunderstood the functionality - what is the act of 'reloading repository' for, if not to re-fetch repository content?
TIA! 😀 (0.13.0, but I couldn't get it to work prior to that either)
Hi David - you're running into a rough edge with the 'reload' experience that's high on our list of things we want to improve soon. The way to reload your code right now is to restart the server (this is as much a Python restriction as a Dagster one: the process ultimately needs to restart in order to safely reload the module), and dagit doesn't currently have a great way to control that when the server is running as an external gRPC server.

We have plans in the future to manage your gRPC servers fully within Dagster instead of via a docker-compose file, which would make a single-click reload possible, but we're not quite there yet - the reload button is more like 'refresh dagit's copy of the pipeline data', which isn't particularly useful unless you restart the server as well.

In the meantime, as a workaround, we do have a GraphQL API that will shut down the server, which you could trigger periodically from a sensor like you said: https://docs.dagster.io/_apidocs/libraries/dagster-graphql#dagster_graphql.DagsterGraphQLClient.shutdown_repository_location - and if your docker-compose is set up to automatically restart services on failure, that would be enough to trigger a periodic reload. We're definitely planning to make this more seamless though; apologies for the difficulty.
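(For anyone reading along, a minimal sketch of that workaround using the documented `DagsterGraphQLClient` - the hostname, port, and repository location name below are assumptions about this particular deployment, not values confirmed in the thread:)
```python
from dagster_graphql import DagsterGraphQLClient

# Point the client at dagit's GraphQL endpoint. "dagit" and 3000 are
# assumed values; use whatever your dagit service is reachable at.
client = DagsterGraphQLClient("dagit", port_number=3000)

# Ask the gRPC server behind the named repository location to shut down.
# With a docker-compose restart policy (e.g. `restart: on-failure`) on
# that service, the container comes back up and re-imports your code,
# picking up any newly generated pipelines.
client.shutdown_repository_location("executor")
```
This could live in a small script run on a schedule, or inside a sensor's evaluation function as suggested above.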
Thanks Daniel, I'll have a look into the GraphQL options and keep an eye out for upcoming changelogs 🙂