We're performing frequent writes (with concurrent writers) into a Delta table, and we've noticed that although the amount of data in the table (the data files) is small, the Delta log is growing out of hand. This makes queries take longer and longer, because Trino has to load a snapshot of the table from the log before running each query. We have an optimization mechanism in place, but it can't keep up with the ingestion rate due to commit conflicts.
Is there a way to mitigate this issue with such an architecture?
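For context, these are the kinds of table properties we've been considering tuning (the table name is hypothetical; the property names are taken from Delta Lake's documented defaults, and whether Trino respects them when it reads the log is part of what we're unsure about):

```sql
-- Write a checkpoint more often so readers can skip replaying many small JSON commits
-- (Delta's default is every 10 commits).
ALTER TABLE events SET TBLPROPERTIES (
  'delta.checkpointInterval' = '10'
);

-- Shorten how long old log entries are kept before log cleanup can remove them
-- (default is 30 days; must stay compatible with any time-travel needs).
ALTER TABLE events SET TBLPROPERTIES (
  'delta.logRetentionDuration' = 'interval 7 days'
);
```

This is only a sketch of the knobs we know about, not something we've validated against our workload.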
A second question: is there a way to reset the Delta log entirely and start from scratch after a complete, successful VACUUM?