Hello! My team was facing an interesting data corruption issue with Spark/Delta, and we reported it to the Apache Spark maintainers. They added a new setting that is supposed to address it. I wanted to see if anyone has other thoughts on why this issue might be happening, or ideas for other ways the problem could be addressed.
https://issues.apache.org/jira/browse/SPARK-43816