# deployment-kubernetes
Arun:
Hi team, I have a question about deploying user code changes. Context: for our use case, we have a set of entities stored in a DB that need to be processed, and I am currently creating a separate pipeline to process each entity by dynamically generating the DAG (instead of creating a single pipeline and creating a new run for each entity) to better track each process in Dagit. I was planning to update the user code deployment on a daily basis to include the new entities added the previous day. In our case, the Helm chart and the user code live in different repositories, and the Helm chart is managed with Flux. I think the recommended way to update the deployment is to create a new user-code image tag, update the tag in the Helm chart, and do a helm upgrade. But this requires us to create a PR in the Flux repo every day, which looks quite inefficient. I just wanted to see if you guys have seen a similar pattern before and have a recommended way to automate the upgrade process?
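For reference, the tag-bump flow described above might look roughly like the following (the registry, image name, tag, and values path are all placeholders; the PR step into the Flux repo is what makes this tedious to do daily):

```shell
# Build and push a freshly tagged user code image (names are placeholders).
docker build -t registry.example.com/dagster-user-code:2021-06-08 .
docker push registry.example.com/dagster-user-code:2021-06-08

# Bump the image tag in the Helm values of the Flux-managed chart repo and
# open a PR so Flux reconciles the change. Without Flux, this would just be:
helm upgrade dagster dagster/dagster -f values.yaml
```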
c
cc @rex, who added Helm support for deploying the Helm chart and user code separately
Trevor:
I’m not sure if this is applicable in your situation, but I use a particular Docker tag, specifically `latest-rc`, on the user code images, and then have a cron job that simply restarts the user code deployments on a daily basis via `kubectl`, which will then pull the most recent `latest-rc` image as the pods roll over. Dagit will then notice the change and automatically reload that repository, and things keep chugging along with the newest version.
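A minimal sketch of that cron-restart approach as an in-cluster CronJob, assuming a user code Deployment named `user-code` in namespace `dagster` and a ServiceAccount `deployment-restarter` with RBAC permission to patch deployments (all names are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-user-code
  namespace: dagster
spec:
  schedule: "0 5 * * *"   # daily at 05:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployment-restarter
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command:
                - kubectl
                - rollout
                - restart
                - deployment/user-code
                - --namespace=dagster
```

Note that the user code pods should set `imagePullPolicy: Always`, since `latest-rc` is not the literal `latest` tag and would otherwise default to `IfNotPresent`, meaning the rollout could restart onto the cached image instead of pulling the new one.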
Arun:
Thanks Trevor. I am planning to do something similar 🙂 Just wanted to check if there are any other recommended approaches
rex:
Hey Arun, Trevor’s approach should work for your use case if you don’t want to keep changing the image tag associated with your user code.
Though I am curious - is each entity processed using the same transformation? In other words, the only thing different about each pipeline is what entity is processed? In that case, it may be more slick to use run tags on the pipeline as a way to better track each process
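A sketch of the run-tags idea: one pipeline, many runs, distinguished by a tag per entity rather than by dynamically generated per-entity DAGs. The pipeline name and tag key are hypothetical; in real code each `(pipeline, tags)` pair would be submitted through Dagster (e.g. via the `tags` argument when launching a run), and Dagit can then filter and track runs by that tag.

```python
def run_tags_for(entity_id: str) -> dict:
    """Tags attached to a single entity's run; filterable in Dagit."""
    return {"entity": entity_id}

def plan_entity_runs(entity_ids):
    """One run of the same pipeline per entity, each carrying its own tag."""
    return [("process_entity", run_tags_for(e)) for e in entity_ids]

# Each entity gets its own tagged run of the one shared pipeline.
runs = plan_entity_runs(["order-123", "order-456"])
```

This keeps the deployed code static, so the daily image rebuild goes away entirely: new entities show up as new tagged runs, not new pipelines.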
Arun:
Thanks a lot @rex. I am currently exploring exactly that approach using tags. Here is the separate thread on that discussion: https://dagster.slack.com/archives/C01U954MEER/p1622751974387600