Hi, Kilic. I think this answers your question:
https://docs.delta.io/latest/delta-storage.html#multi-cluster-setup. If I understood correctly, you have two different Spark workloads, one consuming from S3 and the other from Kafka. If they are managed by different Spark driver processes, I believe you will be in the multi-cluster write scenario, where you need to configure the DynamoDB integration to guarantee write-after-write consistency; something like the sketch below.
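In case it helps, here's a rough PySpark sketch of what that DynamoDB LogStore setup looks like; both jobs would need the same settings so they commit through the same table. The package versions, table name, region, and bucket path below are placeholders I made up, so check the linked docs for the coordinates matching your Delta/Spark versions:

```python
# Hedged sketch: both writers (the S3 job and the Kafka job) need this same
# config so their commits are serialized through one DynamoDB table.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-multi-cluster-writer")
    # delta-storage-s3-dynamodb provides io.delta.storage.S3DynamoDBLogStore;
    # versions here are placeholders, match them to your setup
    .config("spark.jars.packages",
            "io.delta:delta-spark_2.12:3.2.0,"
            "io.delta:delta-storage-s3-dynamodb:3.2.0")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    # Route s3a:// paths through the DynamoDB-coordinated LogStore
    .config("spark.delta.logStore.s3a.impl",
            "io.delta.storage.S3DynamoDBLogStore")
    # DynamoDB table used to coordinate commits across clusters
    # ("delta_log" and "us-east-1" are example values)
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.tableName",
            "delta_log")
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.region",
            "us-east-1")
    .getOrCreate()
)

# Example write; "s3a://my-bucket/events" is a made-up path
spark.range(10).write.format("delta").mode("append").save("s3a://my-bucket/events")
```

If I remember right, the docs say the DynamoDB table is created automatically on the first commit if it doesn't already exist, but double-check the provisioned-throughput defaults there.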
If both workloads are managed by the same Spark driver process, you'll be in the single-cluster scenario instead, https://docs.delta.io/latest/delta-storage.html#single-cluster-setup-default, and nothing needs to be done; you already get write-after-write consistency out of the box. Also, I'm assuming that when you mentioned "Iceberg" you meant Delta; otherwise I didn't understand the question and you can disregard this answer 😅.