[In-Order Processing of Micro-Batches] ❓
Let's say I'm streaming from Kafka to Delta table 1 via append-only writes. I then open a stream on Delta table 1 to write to Delta table 2.
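For concreteness, here's a minimal sketch of the two-hop pipeline I mean, assuming PySpark Structured Streaming with the Delta connector; the broker address, topic name, table paths, and checkpoint locations are all placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("two-hop-delta").getOrCreate()

# Hop 1: Kafka -> Delta table 1, append-only.
kafka_stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .selectExpr(
        "partition AS kafka_partition",
        "offset AS kafka_offset",
        "CAST(value AS STRING) AS payload",
    )
)

(kafka_stream.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/chk/delta1")  # placeholder
    .start("/tables/delta1"))                     # placeholder path

# Hop 2: Delta table 1 -> Delta table 2, capped at 10 files per micro-batch.
delta1_stream = (
    spark.readStream
    .format("delta")
    .option("maxFilesPerTrigger", 10)
    .load("/tables/delta1")
)
```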
The current Delta version has 100 Parquet files in the initial snapshot, but the stream can only process 10 files per micro-batch. Is there a way to guarantee the order of processing? It's vitally important that we process the records in order of the columns (kafka_partition, kafka_offset), because between Delta table 1 and Delta table 2 we "reduce" the rows based on the operation in each row, and that reduction is order-sensitive: e.g. (2 * 3) + 4 = 10 is not the same as (4 + 3) * 2 = 14 (see the toy sketch below).
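To make the ordering concern concrete, here's a standalone toy sketch of the kind of order-sensitive fold I'm describing; the op column and the apply_op helper are hypothetical stand-ins for our actual reduction logic, not the real job:

```python
from functools import reduce

# Each row carries an operation applied to a running accumulator.
# Replaying the same rows in a different order changes the result.
rows = [("mul", 3), ("add", 4)]  # applied to an initial value of 2

def apply_op(acc, row):
    op, operand = row
    return acc * operand if op == "mul" else acc + operand

print(reduce(apply_op, rows, 2))            # (2 * 3) + 4 = 10
print(reduce(apply_op, reversed(rows), 2))  # (2 + 4) * 3 = 18
```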