How can I host large custom AI models on a serverless cloud platform?
I have multiple models used in combination – separate models for preprocessing and inference, along with a separate training pipeline. The models include Stable Diffusion models, and I serve inference through FastAPI.