I have a question regarding the maximum file size produced by the Z-Order optimization command. I have a partitioned Delta Lake table into which I write streaming data using a MERGE inside a foreachBatch operation, and I run the Z-Order optimization on every 100th batch. Each time it runs, it creates a very large file; the current one is 6.3 GB, and it keeps growing.

I didn't set the 'spark.databricks.delta.optimize.maxFileSize' property, so it should be the default of 1 GB, right? Do you have any insights into why such large files are being created? I'm using Spark 3.4.1 and Delta Lake 2.4.0, and I'm not observing this behavior on non-partitioned tables, where the maximum file size stays at 1 GB.
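
To make the setup concrete, here is a minimal sketch of what the foreachBatch logic looks like. It is not my exact job: the table name `events`, the columns `id` and `event_date`, the rate source, and the checkpoint path are placeholders, and it assumes an existing SparkSession `spark` configured with Delta Lake.

```python
from pyspark.sql import functions as F
from delta.tables import DeltaTable

# Assumes a SparkSession `spark` with Delta Lake configured and an
# existing partitioned Delta table named "events" (placeholder name).

def upsert_and_optimize(micro_batch_df, batch_id):
    session = micro_batch_df.sparkSession
    target = DeltaTable.forName(session, "events")

    # MERGE the micro-batch into the partitioned target table
    # ("id" and "event_date" are placeholder key/partition columns).
    (target.alias("t")
           .merge(micro_batch_df.alias("s"),
                  "t.id = s.id AND t.event_date = s.event_date")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

    # Run Z-Order compaction on every 100th micro-batch.
    if batch_id > 0 and batch_id % 100 == 0:
        session.sql("OPTIMIZE events ZORDER BY (id)")

# Placeholder streaming source for illustration; the real job reads
# from an external system.
source_df = (spark.readStream.format("rate").load()
                  .withColumnRenamed("value", "id")
                  .withColumn("event_date", F.to_date("timestamp")))

query = (source_df.writeStream
                  .foreachBatch(upsert_and_optimize)
                  .option("checkpointLocation", "/path/to/checkpoint")
                  .start())
```

The Z-Order step is just the SQL `OPTIMIZE ... ZORDER BY` statement issued from inside foreachBatch; I don't override 'spark.databricks.delta.optimize.maxFileSize' anywhere in the session.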