When training with large-scale data, how is the data processed?
I am faced with the challenge of training on large-scale data, specifically about 19 terabytes of video. Building the model itself isn't difficult, but I'm not sure where to store such a massive dataset or how to feed it into training. Since we don't have high-performance machines, it seems we may need to rent some. I'm curious how AI developers who work with large-scale data typically handle situations like this; a rough sketch of what I'm imagining follows below.
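To make the question more concrete, here is the kind of setup I'm picturing: keep the video data as shards in rented object storage and stream batches during training instead of copying all 19 TB to local disk. The bucket name, key layout, and shard format below are just placeholder assumptions, and I don't know whether this is how people actually do it in practice.

```python
import io

import boto3
import torch
from torch.utils.data import DataLoader, IterableDataset

BUCKET = "my-video-dataset"   # hypothetical bucket name
PREFIX = "train/shards/"      # hypothetical key prefix for the video shards


class S3VideoShardDataset(IterableDataset):
    """Streams (clip, label) pairs shard-by-shard from object storage."""

    def __init__(self, bucket: str, prefix: str):
        self.bucket = bucket
        self.prefix = prefix

    def __iter__(self):
        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=self.bucket, Prefix=self.prefix):
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=self.bucket, Key=obj["Key"])["Body"].read()
                # Assumes each shard is a torch-saved list of (clip, label) pairs;
                # the real format would depend on how the videos are preprocessed.
                for clip, label in torch.load(io.BytesIO(body)):
                    yield clip, label


loader = DataLoader(S3VideoShardDataset(BUCKET, PREFIX), batch_size=8)
```

Is streaming like this reasonable for data at this scale, or do people usually copy everything onto attached storage of a rented machine first?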