Understanding the Performance Benefits of CPU Caches When Cache Line Fetch Latency Equals Main Memory Access Time
I’m trying to grasp the performance advantages of CPU caches, particularly in the scenario where fetching a cache line (e.g., 64 bytes) from main memory into the cache incurs the same latency as accessing main memory directly, since in both cases we are ultimately fetching from memory. In that situation I have real difficulty understanding the benefit of the cache (spatial locality).
(To be clear, I do see the advantage of the cache for frequently accessed data, i.e. temporal locality; what I don’t get is the advantage of caching the next block of data or instructions, i.e. spatial locality.)