We built a multi-protocol frontend that aggregates data into S3, and we use Auto Loader to feed it into our Spark-based ETL pipeline. That is actually the original use case for Auto Loader. The way things are going, though, is more like Kafka-style ingest: a service layer that receives data from sources and commits directly to Delta Lake tables, with Spark-based processing for the deeper transformations, relations, and aggregations.
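
For context, a minimal sketch of the Auto Loader leg of that setup could look like the following (assuming Databricks Auto Loader with PySpark; the bucket paths, schema/checkpoint locations, and table name are placeholders, not our actual config):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Auto Loader (cloudFiles) incrementally discovers new files that the
# frontend lands in S3 and streams them into the pipeline.
raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")  # whatever format the frontend writes
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/events")  # placeholder
    .load("s3://my-bucket/landing/events/")  # placeholder landing path
)

# Light enrichment before handing off; heavier transformations, joins,
# and aggregations happen in downstream Spark jobs.
bronze = raw.withColumn("ingested_at", F.current_timestamp())

# Commit the stream to a Delta table for downstream processing.
(
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/events")  # placeholder
    .trigger(availableNow=True)  # drain whatever is new, then stop
    .toTable("bronze.events")  # placeholder table name
)
```

In the Kafka-style direction described above, that first hop would instead be a service writing straight to the Delta table, and the Spark side would only pick up from there.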