I’m currently working on an open-source alternative to Google Photos. Since I want it to be self-hostable but also able to run in the cloud, I decided to support both S3 and local filesystem storage. It also needs to support per-user quota limits.
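Whichever option I pick, I'm planning to hide both backends behind a small storage interface plus separate quota bookkeeping. Here is a rough sketch of what I have in mind (all the names like `StorageBackend` and `QuotaStore` are just placeholders, nothing is implemented yet):

```typescript
// Rough sketch of the backend abstraction (all names are placeholders).
import { Readable } from "node:stream";

export interface StorageBackend {
  // Stream the object body to the backend; resolves with the stored size in bytes.
  put(key: string, body: Readable): Promise<number>;
  // Resolves with a readable stream of the object body.
  get(key: string): Promise<Readable>;
  delete(key: string): Promise<void>;
}

// Quota bookkeeping lives outside the backend so S3 and the local filesystem
// can share the same logic.
export interface QuotaStore {
  // How many bytes the user may still upload.
  remainingBytes(userId: string): Promise<number>;
  // Record usage after a successful upload (negative delta on delete).
  addUsage(userId: string, deltaBytes: number): Promise<void>;
}
```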
I came up with several ways to implement such a system, but I'm stuck on deciding which of them is best:
- Build a microservice that proxies S3 or the local filesystem but is isolated from the rest of the microservices (basically a standalone service). Presigned URLs are therefore required, and they need to be created by another microservice, which also has to check the quota (see the sketch after this list).
- Build a microservice that proxies S3 or the local filesystem but is connected to the rest of the microservices. A JWT auth token containing the user ID can then be used, and the microservice handles auth and quota itself.
- Build a microservice that only returns presigned URLs, and the client communicates directly with S3 or local storage (via a REST API wrapper). This seems pretty complicated, since the client needs different behaviour depending on the storage backend, especially for multipart uploads, which are required for larger files and differ quite a lot between backends.
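To make the presigned-URL variants a bit more concrete, this is roughly what I imagine the URL-issuing service doing for the S3 case, using the AWS SDK v3 presigner and the `QuotaStore` sketched above (bucket name, import path and function name are placeholders, and this only covers a plain single-part PUT, not multipart):

```typescript
// Sketch of the presigned-URL path: quota check first, then hand out the URL.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import type { QuotaStore } from "./storage"; // interface from the sketch above

// Region and credentials are picked up from the environment.
const s3 = new S3Client({});

export async function presignUpload(
  quota: QuotaStore,
  userId: string,
  key: string,
  declaredSizeBytes: number,
): Promise<string> {
  // The quota check happens before the URL is handed out. A plain presigned
  // PUT URL does not strictly enforce the uploaded size, so the declared size
  // would still have to be reconciled with the real object size afterwards.
  if (declaredSizeBytes > (await quota.remainingBytes(userId))) {
    throw new Error("quota exceeded");
  }
  const command = new PutObjectCommand({
    Bucket: "photos", // placeholder bucket
    Key: `${userId}/${key}`,
  });
  return getSignedUrl(s3, command, { expiresIn: 900 }); // URL valid for 15 minutes
}
```

For the local filesystem backend, the same endpoint would have to mint its own signed, expiring URL against the REST wrapper instead, which is exactly where the per-backend client behaviour from the last option starts to hurt.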
Do these options make sense or can you think of anything better?
Thanks for your support!