"Best" is hard to pin down without knowing more about your needs / requirements. But assuming you mean the standard way to ingest data handled by multiple separate clusters into a single Delta Table sink in S3... then you might want to take a look at
this blog. You can have each Spark cluster write to the same Delta Table in S3 as long as you have a service that handles maintaining atomicity of writes. There are standard DynamoDB implementations, delta docs
here! Hope that helps 😁
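For reference, here's a minimal sketch of what that setup can look like in PySpark, assuming Delta Lake 1.2+ with the `delta-storage-s3-dynamodb` module on the classpath; the DynamoDB table name, region, and S3 path below are placeholders:

```python
from pyspark.sql import SparkSession

# Sketch: multiple clusters writing to one Delta table on S3, with
# commits coordinated through the DynamoDB-backed LogStore.
# Assumes Delta Lake >= 1.2 and io.delta:delta-storage-s3-dynamodb
# is available on the classpath alongside delta-core.
spark = (
    SparkSession.builder
    .appName("multi-cluster-delta-writer")
    # Standard Delta session configs.
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    # Route s3a:// Delta log writes through the DynamoDB LogStore,
    # which supplies the atomic put-if-absent that S3 itself lacks.
    .config("spark.delta.logStore.s3a.impl",
            "io.delta.storage.S3DynamoDBLogStore")
    # Placeholder table name and region -- every cluster writing to
    # the same Delta table must point at the same DynamoDB table.
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.tableName",
            "delta_log")
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.region",
            "us-east-1")
    .getOrCreate()
)

# Each cluster can now append safely; commits are serialized via DynamoDB.
df = spark.range(100)  # stand-in for your real ingestion DataFrame
df.write.format("delta").mode("append").save("s3a://my-bucket/my-delta-table")
```

One note on that config: if memory serves, Delta will auto-create the DynamoDB table on first use (given the right IAM permissions), but you can also pre-create it yourself to control the provisioned capacity. The key requirement is just that every cluster uses the same DynamoDB table so their commits contend on one lock store.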