Computer architecture is fundamentally defined by the intricate relationship between the Central Processing Unit (CPU) and the various layers of memory. These two components work in concert to execute all system tasks, process data, and ensure system integrity.

The Central Processing Unit (CPU): Execution and Control
The CPU is the system’s execution core, a complex microprocessor fabricated with billions of transistors. Its primary responsibility is the interpretation and execution of machine code instructions, a process governed by the continuous fetch-decode-execute cycle (often described with a fourth write-back, or store, stage). Within this cycle, the Control Unit (CU) acts as the orchestrator, fetching instructions from memory and generating the control signals that drive the other components. The Arithmetic Logic Unit (ALU) performs the fundamental data manipulation, handling integer arithmetic and logical comparisons, supplemented by the Floating-Point Unit (FPU) for real-number calculations.

Operands and results are temporarily held in ultra-fast registers, the smallest form of memory, located directly within the CPU, which provide effectively instantaneous access during active computation. Modern CPUs employ a multi-core architecture, in which a single physical chip houses multiple independent processing units, enabling parallel execution of multiple threads and vastly increasing multitasking capability and system throughput. The CPU’s communication with the rest of the system is managed by its integrated memory controller, which dictates data flow over high-speed buses to and from main memory.
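The fetch-decode-execute loop described above can be sketched in miniature. The three-instruction machine below (its opcodes, accumulator register, and memory layout are all invented for illustration; real instruction sets are far richer) shows the control flow of the cycle: the program counter drives the fetch, a branch on the opcode is the decode, and the arithmetic or memory operation is the execute/store stage.

```python
# A toy fetch-decode-execute loop for a hypothetical 3-instruction machine.
# Opcodes (LOAD, ADD, STORE) and the single accumulator are illustrative only.

def run(program, memory):
    """Execute a list of (opcode, operand) pairs against a memory dict."""
    acc = 0   # accumulator register: the ALU's working value
    pc = 0    # program counter: index of the next instruction
    while pc < len(program):
        opcode, operand = program[pc]   # fetch the instruction, decode its fields
        if opcode == "LOAD":            # execute: memory -> register
            acc = memory[operand]
        elif opcode == "ADD":           # execute: ALU integer addition
            acc += memory[operand]
        elif opcode == "STORE":         # store: register -> memory
            memory[operand] = acc
        pc += 1                         # advance to the next instruction
    return memory

# Compute memory[0] + memory[1] and store the sum at address 2.
mem = {0: 2, 1: 3, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
print(mem[2])  # 5
```

A real CPU pipelines these stages so that one instruction is being fetched while another is decoded and a third executed, but the logical sequence per instruction is the same.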
The Memory Hierarchy: Speed, Capacity, and Volatility
Memory is structured into a precise hierarchy based on three key metrics: speed, capacity, and cost per bit. The closer a memory type sits to the CPU, the faster, smaller, and more expensive per bit it is.
The highest tier is Cache Memory (L1, L2, L3), which utilizes specialized, high-speed Static RAM (SRAM). This volatile memory serves as a crucial buffer by exploiting the principle of locality of reference: it keeps recently accessed data and its neighbors close to the CPU on the expectation that they will be needed again soon, thereby minimizing the time the processor spends waiting for data from slower layers.
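Locality of reference can be made concrete with a small model. The direct-mapped cache below (its line count and line size are illustrative; real L1 caches hold hundreds of much wider lines and use associativity) shows how a sequential scan, with strong spatial locality, turns most memory accesses into cache hits because neighboring addresses share a line.

```python
# A minimal direct-mapped cache model with hypothetical, tiny dimensions.
NUM_LINES = 4    # cache lines (illustrative; real caches have far more)
LINE_SIZE = 8    # bytes per line, so 8 neighboring addresses share a line

cache = {}       # line index -> tag of the block currently stored there
hits = misses = 0

def access(addr):
    """Look up one byte address, counting a hit or a miss-and-fill."""
    global hits, misses
    block = addr // LINE_SIZE    # which memory block the byte belongs to
    index = block % NUM_LINES    # which cache line that block maps to
    tag = block // NUM_LINES     # distinguishes blocks sharing that line
    if cache.get(index) == tag:
        hits += 1                # data already cached: fast path
    else:
        misses += 1              # fetch from the next level and fill the line
        cache[index] = tag

# Sequential scan of 32 bytes: spatial locality means one miss per line.
for a in range(32):
    access(a)
print(hits, misses)  # 28 4
```

Only the first byte of each 8-byte line misses; the other seven accesses to that line hit, which is exactly the behavior the locality principle predicts for array traversals.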
Next is Primary Memory, consisting primarily of Dynamic RAM (DRAM), or main system memory. DRAM is also volatile and serves as the primary execution workspace, holding the operating system kernel and data for actively running applications. DRAM requires continuous electrical refresh cycles to maintain its charge and, thus, its stored data. Also essential is Read-Only Memory (ROM), which is non-volatile and stores essential firmware—such as the BIOS or UEFI—required for hardware initialization upon system power-on.
Memory Management and Persistent Storage
A modern operating system utilizes Virtual Memory to efficiently manage and protect the finite resource of physical RAM. The hardware-based Memory Management Unit (MMU) within the CPU translates the logical addresses used by applications into physical addresses in RAM, isolating processes from one another to enhance system stability. Virtual memory extends the perceived size of physical RAM by using a dedicated portion of secondary storage (known as the swap file or paging file) as an overflow area. When a process touches data that resides in this file rather than in RAM, the operating system performs a slow operation called paging, or swapping, moving pages between RAM and secondary storage. This lets the system run more than physical RAM alone could hold, but at the cost of significant access latency on each fault.
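The MMU's translation step can be sketched as a page-table lookup. In the model below, the page size is a common real-world value, but the page-table contents and the fault behavior are hypothetical: a virtual address is split into a page number and an offset, the page number indexes the table, and a missing mapping models a page that has been swapped out, triggering the slow paging path.

```python
# Sketch of virtual-to-physical address translation as an MMU performs it.
# The page table contents and fault handling here are illustrative only.

PAGE_SIZE = 4096   # 4 KiB pages, a common page size

# Hypothetical per-process page table: virtual page number -> physical frame,
# or None if the page currently lives in the swap/paging file.
page_table = {0: 7, 1: 3, 2: None}

def translate(vaddr):
    """Map a virtual byte address to its physical address, or fault."""
    vpn = vaddr // PAGE_SIZE       # virtual page number (high bits)
    offset = vaddr % PAGE_SIZE     # byte offset within the page (low bits)
    frame = page_table.get(vpn)
    if frame is None:
        # Page fault: the OS must page the data in from secondary storage,
        # a path that is orders of magnitude slower than a RAM access.
        raise LookupError(f"page fault on virtual page {vpn}")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1042)))  # virtual page 1 -> frame 3: 0x3042
```

Note that the offset passes through unchanged; only the page number is remapped, which is why two processes can use the same virtual address while touching entirely different physical frames.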
Secondary Memory provides the foundational non-volatile storage for persistent data retention. This includes mechanical Hard Disk Drives (HDDs), which rely on magnetic storage, and the much faster Solid State Drives (SSDs), which use NAND flash memory and have no moving parts. SSDs leverage modern protocols such as NVMe to achieve high throughput and low latency, serving as the repository for the operating system, applications, and all user data, ensuring that system state is preserved when power is removed.