I am using Spark 3.3.0 and ran into a problem when counting a DataFrame:
with PySpark (Python), the count scans 900 MB and is slow;
with Spark (Scala), the same count scans only 145 MB and runs about 5 times faster.
Why is there a difference between PySpark and Scala Spark?
The statement is: df = spark.read.format("delta").load("path").count()
Any help would be appreciated — it would save my day!