Anton Abilov
05/31/2023, 4:24 PM
Is it possible to set maxBytesPerTrigger dynamically?
I would like my Spark streaming job to process large batches (e.g. 50 GB) when there is a lot of data to backfill; once it has caught up, it should process smaller batches.
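(Editor's note: maxBytesPerTrigger is read when a streaming query starts and cannot be changed while it runs, so one common workaround is to stop the query and restart it from the same checkpoint with a different limit. A minimal PySpark sketch of that pattern, assuming a Delta source; the paths and app name are placeholders, not from this thread.)

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dynamic-batch-size").getOrCreate()

def start_stream(max_bytes: str):
    """Start the stream with a given maxBytesPerTrigger soft limit."""
    return (
        spark.readStream.format("delta")
        .option("maxBytesPerTrigger", max_bytes)   # fixed for the query's lifetime
        .load("/data/source")                      # placeholder source path
        .writeStream.format("delta")
        .option("checkpointLocation", "/chk/job")  # placeholder checkpoint
        .start("/data/sink")                       # placeholder sink path
    )

# Backfill pass: large batches until the backlog is drained.
backfill = start_stream("50g")
backfill.processAllAvailable()   # blocks until all available data is processed
backfill.stop()

# Steady state: restart from the same checkpoint with smaller batches.
steady = start_stream("1g")
steady.awaitTermination()
```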
JosephK (exDatabricks)
05/31/2023, 5:25 PM

Anton Abilov
06/01/2023, 1:21 PM
availableNow : bool, optional
    if set to True, set a trigger that processes all available data in multiple batches then terminates the query. Only one trigger can be set.
This is a continuous query with new data arriving; we don't want it to terminate.
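(Editor's note: one way to reconcile a continuous job with the terminating availableNow trigger, sketched here as an assumption rather than anything suggested in this thread, is to re-run the availableNow query in a scheduled loop. Each pass drains the backlog in micro-batches bounded by maxBytesPerTrigger, then exits; the loop keeps the job effectively continuous. Paths and the poll interval are placeholders.)

```python
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("available-now-loop").getOrCreate()

# Each pass drains the current backlog in bounded micro-batches, then the
# query terminates; looping restarts it so the job behaves continuously.
while True:
    query = (
        spark.readStream.format("delta")
        .option("maxBytesPerTrigger", "1g")        # bound each micro-batch
        .load("/data/source")                      # placeholder source path
        .writeStream.format("delta")
        .option("checkpointLocation", "/chk/job")  # placeholder checkpoint
        .trigger(availableNow=True)                # stop once caught up
        .start("/data/sink")                       # placeholder sink path
    )
    query.awaitTermination()  # returns when the backlog is drained
    time.sleep(60)            # poll interval before the next pass
```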
JosephK (exDatabricks)
06/01/2023, 11:53 PM