# ask-community
Hi everyone, I'm Vivek from Hyderabad, India. I recently deployed Dagster using the Helm chart on Red Hat OpenShift Container Platform 4.7 and ran into an error while executing the example user code (*dagster/user-code-example:0.11.16*). The run coordinator job pods fail with a permission-denied error when they try to `mkdir /opt/dagster/dagster_home/storage`. Can anyone kindly help with this issue?

Run configuration:

solids:
  multiply_the_word:
    config:
      factor: 2
    inputs:
      word:
        value: test
This may be an issue with your container storage permissions. You should ensure that your pods’ containers have the ability to write to their local filesystem.
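For example, one way to make the storage path writable is a pod-level security context plus a writable volume mounted over it. A minimal sketch, assuming the default `DAGSTER_HOME` path from the error above (the `fsGroup` value is a placeholder; OpenShift normally assigns a project-specific GID):

```yaml
# Illustrative pod spec fragment, not taken from this thread.
securityContext:
  fsGroup: 1000          # hypothetical GID so the container group can write to volumes
volumes:
  - name: dagster-storage
    emptyDir: {}
containers:
  - name: dagster
    volumeMounts:
      - name: dagster-storage
        mountPath: /opt/dagster/dagster_home/storage
```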
Thank you @rex. Yes, but how can I provide a persistent (read/write) volume to the run coordinator pods? They run as Kubernetes Jobs in OpenShift, not as a Deployment/DeploymentConfig, so there's no obvious place to configure storage for them. Any suggestions here?
Hello @Noah Sanor, were you able to configure storage for the run coordinator?
The storage in question here is the compute log storage. By default, the Helm chart uses the `LocalComputeLogManager`, which stores the stdout/stderr for each compute step in your pipeline on disk. See https://docs.dagster.io/deployment/dagster-instance#default-local-behavior for more information. A Kubernetes Job is an abstraction over a Kubernetes pod, so you should be able to edit the security permissions (in the pod template) to allow it to write to the container filesystem explicitly. If you want to do this per pipeline, you can follow https://docs.dagster.io/deployment/guides/kubernetes/customizing-your-deployment#solid-or-pipeline-kubernetes-configuration. If you don't want to go through all that, you could also just enable the `NoOpComputeLogManager` in the Helm chart:
  # Type can be one of [
  #   LocalComputeLogManager,
  #   AzureBlobComputeLogManager,
  #   GCSComputeLogManager,
  #   S3ComputeLogManager,
  #   CustomComputeLogManager,
  # ]
  type: CustomComputeLogManager
  config:
    customComputeLogManager:
      module: dagster.core.storage.noop_compute_log_manager
      class: NoOpComputeLogManager
      config: {}
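If you do want the per-pipeline route instead, the customizing-your-deployment guide linked above applies Kubernetes settings through the `dagster-k8s/config` tag on the pipeline. A minimal sketch of that tag's payload (the keys follow the guide; the numeric IDs are placeholders, not values from this thread):

```python
# Sketch of the "dagster-k8s/config" tag that dagster-k8s reads when building
# the run/step Job. Fields use the snake_case Kubernetes client model names;
# pass this dict as `tags=` on the @pipeline definition.
K8S_CONFIG_TAG = {
    "dagster-k8s/config": {
        "pod_spec_config": {
            # Hypothetical GID: lets the container group write to mounted volumes.
            "security_context": {"fs_group": 1000},
        },
        "container_config": {
            # Hypothetical UID for the container process.
            "security_context": {"run_as_user": 1000},
        },
    }
}
```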
Thanks, @rex. NoOpComputeLogManager did the job. I also tried the AzureBlobComputeLogManager and put the necessary configuration in my dagster.yaml file, but my job failed with this error (I'm using dagster 0.11.16):

ModuleNotFoundError: No module named 'dagster_azure'
dagster.check.CheckError: Failure condition: Couldn't import module dagster_azure.blob.compute_log_manager when attempting to load the configurable class dagster_azure.blob.compute_log_manager.AzureBlobComputeLogManager

I assume I don't need to write an import statement in my user deployment code.
Here is my YAML block in dagster.yaml:

compute_logs:
  module: dagster_azure.blob.compute_log_manager
  class: AzureBlobComputeLogManager
  config:
    container: REDACTED
    local_dir: /tmp/cool
    prefix: dagster-test-
    secret_key: REDACTED
    storage_account: REDACTED
> I assume I don’t need to write an import statement in my user deployment code.
This is an incorrect assumption: you need this dependency in your user code image. https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#build-docker-image-for-user-code
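Concretely, that usually means installing `dagster-azure` into the user-code image. A sketch, assuming a pip-based image (the base image name is illustrative):

```dockerfile
# Hypothetical user-code image fragment: dagster_azure must be importable
# inside this image so AzureBlobComputeLogManager can be loaded. Pin it to
# the Dagster version you are running.
FROM python:3.8-slim
RUN pip install dagster==0.11.16 dagster-azure==0.11.16
```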
Thanks, @rex. It's working fine now.