The performance of any computing system depends on its memory hierarchy, a tiered structure that manages the vast speed disparity between the CPU core and the slowest forms of storage. The hierarchy keeps the most frequently accessed data closest to the CPU, trading capacity for access speed. At the core level sit the L1, L2, and L3 caches, built from fast, expensive Static Random-Access Memory (SRAM). The L1 cache, split into instruction (L1i) and data (L1d) halves, is the smallest and fastest, accessible in just a few clock cycles, and private to each core. The L2 cache is larger and somewhat slower, acting as the next level of filtering. The L3 cache, or Last Level Cache (LLC), is the largest cache tier and is typically shared among all cores, serving as the final intermediary before main memory.

Below the cache subsystem is main memory (DRAM), which is an order of magnitude or more slower than the caches but provides the bulk of the volatile storage capacity, measured in gigabytes, needed to hold the active “working set” of all running programs.

The integrity of this multi-level system is maintained by cache coherence protocols such as MESI, which ensure that when one core modifies data in its private cache, any other cache holding a now-stale copy of that cache line is invalidated.