There is a lot to this question. Serializing a very large binary blob into a column value of a row is probably not wise. A traditional database pattern for this use case is to store the blob in an object store and, once that write is confirmed, insert the metadata and the path to the blob into the database. You can do the same thing with Delta Lake. If you later want to delete the blob, you will have to delete the metadata record and the blob independently. Delta Lake does have a convention that directories under the table root whose names start with an underscore won't have their contents deleted by VACUUM, so you can place the blobs there.

You can handle versioning in one of two ways: write each new blob version to a different path and update the metadata record, so the history is addressable through Delta time travel; or, if you are using S3 or a similar object store that supports versioning, rely on that and pin the object version IDs in the metadata record. Either way, you will need some frontend service/application logic to perform these steps; a minimal sketch of the write path is below.
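For example, here is a rough sketch of the write path, assuming boto3 for the object store and the `deltalake` (delta-rs) Python package for the metadata table. The bucket name, key layout, and table URI are all hypothetical placeholders, not anything prescribed by Delta Lake:

```python
# Sketch only: upload a blob to S3, then append a metadata row to a Delta table.
# Assumes a versioning-enabled bucket and credentials available in the environment.
import boto3
import pyarrow as pa
from datetime import datetime, timezone
from deltalake import write_deltalake

BUCKET = "my-blob-bucket"                            # hypothetical bucket
TABLE_URI = "s3://my-blob-bucket/tables/blob_metadata"  # hypothetical Delta table

def store_blob_with_metadata(blob_id: str, payload: bytes) -> None:
    """Upload the blob first; only record metadata once the write succeeds."""
    s3 = boto3.client("s3")
    key = f"blobs/{blob_id}"
    resp = s3.put_object(Bucket=BUCKET, Key=key, Body=payload)

    # Pin the object version in the metadata record so time travel on the
    # metadata table lines up with the exact blob version that was written.
    record = pa.table({
        "blob_id": [blob_id],
        "blob_path": [f"s3://{BUCKET}/{key}"],
        "s3_version_id": [resp.get("VersionId")],
        "size_bytes": [len(payload)],
        "created_at": [datetime.now(timezone.utc)],
    })
    write_deltalake(TABLE_URI, record, mode="append")
```

Deletion is the mirror image: remove the object (or the specific object version) from S3 and delete the matching row from the metadata table. A Delta DELETE or VACUUM on the metadata table will never touch the blob itself, which is why the two cleanups have to happen independently.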