this table has a nice overview of how to think about it:
https://docs.dagster.io/guides/dagster/enriching-with-software-defined-assets#when-should-i-use-software-defined-assets
In practice, I tend to think of one asset as a table or a discrete "dataset". While you can dynamically generate asset metadata through ops, the recommended path is to define all your permutations in code.
If you're looking at a streaming/continuous use case, Dagster may not be the best fit, although you can certainly make it work: you could use small partitions (say, 5 minutes), which lets you take advantage of backfill capabilities.
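The core idea behind small partitions is just slicing time into fixed windows, each of which can be materialized (or backfilled) independently. A plain-Python sketch of that slicing, with no Dagster dependency; the `partition_windows` helper is hypothetical:

```python
from datetime import datetime, timedelta

def partition_windows(start, end, minutes=5):
    """Yield (window_start, window_end) pairs covering [start, end)."""
    step = timedelta(minutes=minutes)
    t = start
    while t < end:
        yield (t, min(t + step, end))
        t += step

# one hour of data becomes twelve 5-minute partitions,
# each independently runnable or re-runnable in a backfill
hour = list(partition_windows(datetime(2024, 1, 1, 0, 0),
                              datetime(2024, 1, 1, 1, 0)))
```

In Dagster itself you'd express the same windows with a time-window partitions definition, and a backfill is then just "run the asset for every missing window".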
In my mind, the value of assets scales with the number of processes you're managing through Dagster, so it really pays off when you have multiple separate workloads that depend on each other (an ingestion pipeline, a transform layer, and a serving layer). For point solutions, using ops / jobs might make a ton of sense!