What is the preferred way to load images from S3 into TorchServe for inference?
I have an image classifier model that I plan to deploy via TorchServe. My question is: what is the ideal way to load and write images from/to S3 buckets, rather than the local filesystem, for inference? Should this logic live in the model handler file? Or should it be a separate worker that sends images to the inference endpoint, like this example, with the resulting image piped into an `aws s3 cp` command, for instance?
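For reference, here is roughly what I'm imagining for each option. First, the handler-side approach: a minimal sketch of a custom handler whose `preprocess` downloads the image from S3 via boto3 instead of reading request bytes. I'm assuming the request body carries an S3 location as JSON; the class name `MyS3ImageHandler` and the `bucket`/`key` field names are placeholders of my own, not anything TorchServe prescribes.

```python
import io
import json

import boto3
import torch
from PIL import Image
from torchvision import transforms
from ts.torch_handler.base_handler import BaseHandler


class MyS3ImageHandler(BaseHandler):
    """Custom handler that fetches the input image from S3 during preprocess."""

    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    def __init__(self):
        super().__init__()
        # Credentials are expected to come from the instance/task IAM role
        self.s3 = boto3.client("s3")

    def preprocess(self, data):
        images = []
        for row in data:
            payload = row.get("data") or row.get("body")
            if isinstance(payload, (bytes, bytearray)):
                payload = json.loads(payload)
            # Download the object from S3 instead of reading a local file
            obj = self.s3.get_object(Bucket=payload["bucket"], Key=payload["key"])
            image = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB")
            images.append(self.transform(image))
        return torch.stack(images)
```

A request would then look something like `curl -X POST http://localhost:8080/predictions/my_classifier -H "Content-Type: application/json" -d '{"bucket": "my-bucket", "key": "images/cat.jpg"}'` (model name assumed).

And second, the separate-worker approach: a sketch of a script outside TorchServe that pulls the object from S3, posts the raw bytes to the inference endpoint, and writes the result back to S3 with boto3 rather than shelling out to `aws s3 cp`. The bucket names, key, and endpoint URL are placeholders.

```python
import boto3
import requests

s3 = boto3.client("s3")
ENDPOINT = "http://localhost:8080/predictions/my_classifier"  # assumed model name


def classify_object(in_bucket: str, key: str, out_bucket: str) -> None:
    # Pull the image bytes from S3
    body = s3.get_object(Bucket=in_bucket, Key=key)["Body"].read()
    # TorchServe's stock image handlers accept raw image bytes in the request body
    resp = requests.post(ENDPOINT, data=body)
    resp.raise_for_status()
    # Persist the prediction back to S3 instead of piping through the CLI
    s3.put_object(Bucket=out_bucket, Key=f"{key}.json", Body=resp.content)


if __name__ == "__main__":
    classify_object("my-input-bucket", "images/cat.jpg", "my-output-bucket")
```

Is one of these patterns clearly preferable, or is there a more idiomatic way to wire TorchServe to S3 that I'm missing?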