# ask-community
Is the concept of a 'dynamically partitioned asset' still not around? I have a pipeline that does some data extraction -- lets say I have 100 users, my bucket would be segmented by each user's unique id, ie:
```
- big bucket
  - id_1
     - folder_for_certain_type_of_doc
        - unprocessed_doc.txt
  - id_2
  - id_3
```
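Given that layout, each S3 key encodes the user id and the doc type. A minimal sketch of pulling those back out of a key (the folder names here are just the illustrative ones from the tree above):

```python
# Parse 'id_1/folder_for_certain_type_of_doc/unprocessed_doc.txt'
# into (user_id, doc_type, filename). Returns None for keys that
# don't match the expected three-level depth.
def parse_key(key: str):
    parts = key.split("/")
    if len(parts) != 3:
        return None
    user_id, doc_type, filename = parts
    return user_id, doc_type, filename

print(parse_key("id_1/folder_for_certain_type_of_doc/unprocessed_doc.txt"))
# → ('id_1', 'folder_for_certain_type_of_doc', 'unprocessed_doc.txt')
```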
I basically want to sense when an unprocessed doc/dir is added under a user_id key in S3, so I can take it, run it through Dagster, and put it back into the same folder. Am I thinking about this incorrectly? Is this possible?
Is this "sensable", or do I have to manually call the API and kick off a run with a pointer to the S3 dir?
(or, even better segmenting the users into different buckets etc)
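The "sensing" part is essentially cursor-tracking: each poll, compare the keys under the prefix against the last key you processed. A stdlib-only sketch of that logic (in practice a Dagster sensor would store the cursor for you, and `dagster-aws` ships a `get_s3_keys` helper that does roughly this against a real bucket; this assumes key names sort in arrival order):

```python
# Given the keys currently under a prefix and the last-seen key (the
# cursor), return only the new keys plus the updated cursor.
def new_keys_since(keys, cursor=None):
    keys = sorted(keys)  # assumes lexicographically increasing key names
    if cursor is None:
        fresh = keys
    else:
        fresh = [k for k in keys if k > cursor]
    new_cursor = fresh[-1] if fresh else cursor
    return fresh, new_cursor

# First poll sees everything; later polls only see additions.
keys = ["id_1/docs/a.txt", "id_1/docs/b.txt"]
fresh, cursor = new_keys_since(keys)
fresh, cursor = new_keys_since(keys + ["id_1/docs/c.txt"], cursor)
print(fresh)  # → ['id_1/docs/c.txt']
```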
Hi Josh, we’re actively designing and building this feature right now. Here’s the tracking issue for it: cc @sandy
Ok thanks! I'll probably hack something together with Lambdas and the GraphQL API in the meantime, I think.
It will probably be something like: upload to S3 -> Lambda calls the GraphQL API to kick off a job -> Dagster uses its internal S3 library to upload to the output location.
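For the middle step, the Lambda would POST a launch mutation to Dagster's GraphQL endpoint. A sketch of building that payload, assuming the `launchPipelineExecution` mutation (check the field names against your Dagster version); the repo/job names and the `s3_key` run-config path are hypothetical:

```python
import json

# Mutation shape based on Dagster's launchPipelineExecution GraphQL API;
# verify against your deployed Dagster version.
LAUNCH_MUTATION = """
mutation LaunchRun($executionParams: ExecutionParams!) {
  launchPipelineExecution(executionParams: $executionParams) {
    __typename
  }
}
"""

def build_launch_payload(s3_key: str) -> str:
    variables = {
        "executionParams": {
            "selector": {
                "repositoryLocationName": "my_location",  # hypothetical
                "repositoryName": "my_repo",              # hypothetical
                "pipelineName": "process_doc_job",        # hypothetical
            },
            # Pass the triggering S3 key through as run config.
            "runConfigData": {
                "ops": {"process_doc": {"config": {"s3_key": s3_key}}}
            },
        }
    }
    return json.dumps({"query": LAUNCH_MUTATION, "variables": variables})

payload = build_launch_payload(
    "id_1/folder_for_certain_type_of_doc/unprocessed_doc.txt"
)
```

The Lambda would then send `payload` as the body of an HTTP POST to the Dagster GraphQL endpoint (with whatever auth your deployment requires).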