architecture: develop notes on memory

thomasabishop 2023-09-21 07:17:30 +01:00
parent d1a2885ecb
commit d94d8f29e3
2 changed files with 80 additions and 28 deletions


tags: [memory, motherboard]
# Memory
## Why do we need memory?
When a [CPU](/Computer_Architecture/CPU/CPU_architecture.md) executes a program, it needs a place to store the program's **instructions** and **related data**.
> A CPU is just an operator on memory. It reads its instructions and data from the memory and writes back out to the memory. (Ward 2021)
## What memory is
A program's data is a series of bits. The basic unit of memory storage is a **memory cell**: a circuit that can store a single bit.
### Memory types
There are two types of memory: SRAM and DRAM. Both types of RAM are _volatile_: the memory is only retained whilst the computer has a power supply and is wiped when the computer is rebooted. This contrasts with the memory of the harddisk, which is non-volatile and is retained after a reboot.
Programs that are executing are loaded into memory because the chips that comprise memory can read and store data much faster than the harddisk. It would be possible to run a program from the harddisk but it would be 500-1000 times slower than memory.
#### DRAM
When we think of memory we generally think of the _main_ memory: the 8GB or 16GB+ slots of volatile, non-permanent storage that is utilised by the CPU during the runtime of programs. This is DRAM: Dynamic Random Access Memory.
DRAM uses capacitors to create the memory cell:
> a **capacitor** is an electronic component that stores electrical energy in an electric field: a device which can accumulate and release electrical charge.
In a DRAM cell, each bit of data is stored as a charge in a capacitor. The presence of charge represents a 1 and its absence a 0.
However capacitors lose [charge](/Electronics_and_Hardware/Analogue_circuits/Current.md) over time due to leaks. As a result DRAM is memory that needs to be refreshed (recharged) frequently. For this reason, and because it only uses one transistor and capacitor per bit, DRAM is the less expensive form of volatile memory.
#### SRAM
SRAM (Static Random Access Memory) is also volatile memory but its electronic implementation is different. In contrast to DRAM it doesn't use capacitors, so there is no charge leakage and the memory cells do not need to be refreshed; hence SRAM is _static_ and DRAM is _dynamic_.
SRAM uses [flip flops](/Electronics_and_Hardware/Digital_circuits/Flip_flops.md) to store the bits. It also uses multiple transistors per bit. This makes it faster than DRAM but more expensive. DRAM is at least ten times slower than SRAM.
### Memory addresses
We can think of the internals of RAM as grids of memory cells.
Each single-bit cell in the grid can be identified using two-dimensional coordinates: the location of that cell in the grid. Handling one bit at a time isn't very efficient, so RAM accesses multiple grids of 1-bit memory cells in parallel. This allows reads or writes of multiple bits at once, such as a whole byte.
The location of a set of bits in memory is known as a **memory address**.
### Demonstration
## References
Ward, Brian. 2021. _How Linux works_. No Starch Press.

### Relative speeds and placement of memory types
SRAM is used as [cache memory](/Computer_Architecture/Memory/Role_of_memory_in_computation.md#the-role-of-the-cache) on the [motherboard](/Electronics_and_Hardware/Motherboard.md). There are two types of cache: L1 (on the processor chip) and L2 (separate from the processor).
The table below details the relative speeds of the different types of memory and those of other types of motherboard storage.
| Storage type | Access speed (clock cycles) | Relative times slower |
| ------------ | --------------------------- | --------------------- |
| CPU register | 2 | |
| L1 cache | 4 | 2x |
| L2 cache | 6-20 | 3-10x |
| DRAM memory | 50 | 25x |
| Harddisk | 2000 | 1000x |
## The memory hierarchy
The diagram below compares the different forms of memory within a computing device in terms of speed, monetary cost and capacity:
![](/_img/Memory-Hierarchy.jpg)
# The role of memory in computation
The following steps outline the way in which memory interacts with the processor during computational cycles, once the [bootstrapping](/Operating_Systems/Boot_process.md) process has completed and the OS kernel is itself loaded into memory.
1. A file is loaded from the harddisk into memory.
2. The instruction at the first address is sent to the CPU, travelling across the data bus part of the [system bus](/Computer_Architecture/Bus.md).
3. The CPU processes this instruction and then sends a request for the next instruction across the address bus part of the system bus to the memory controller within the [chipset](/Computer_Architecture/Chipset_and_controllers.md).
4. The chipset finds where this instruction is stored within the [DRAM](/Computer_Architecture/Memory/Memory.md#dram) and issues a request to have it read out and sent to the CPU over the data bus.
> This is a simplified account; it is not the case that only single requests are passed back and forth. This would be inefficient and time-wasting. The kernel sends to the CPU not just the first instruction in the requested file but also a number of instructions that immediately follow it.
![](/_img/memory-flow.svg)
Every part of the above process - the journey across the bus, the lookup in the controller, the operations on the DRAM, the journey back across the bus - takes multiple CPU clock cycles.
## The role of the cache
The cache is SRAM memory that is separate from the DRAM memory which comprises the main memory. It exists in order to boost performance when executing the read/request cycles of the steps detailed above.
There are two types of cache memory:
- L1 cache
- Situated on the CPU chip itself
- L2 cache
- Situated outside of the CPU on its own chip
The L1 cache is the fastest since the data has less distance to travel when moving to and from the CPU. This said, the L2 cache is still very fast when compared to the main memory, both because it is SRAM rather than DRAM and because it is closer to the processor than the main memory.
Cache controllers use complex algorithms to determine what should go into the cache to facilitate the best performance, but generally they work on the principle that what has been previously used by the CPU will be requested again soon. If the CPU has just asked for an instruction at memory location 555 it's very likely that it will next ask for the one at 556, and after that the one at 557 and so on. The cache's controller circuits therefore go ahead and fetch these from slow DRAM to fast SRAM.
## Relation between cache and buffers
The terms _cache_ and _buffer_ are often used interchangeably because both are types of temporary storage used to speed up CPU operations, and both are mechanisms for avoiding writing data to a storage device in the midst of active computation. They are different, however:
- A cache is used to store a subset of data (typically transient in nature) from a more permanent or slower storage location. In the context of the CPU, the L1 is a cache whereas the DRAM is the more permanent storage location.
- A buffer is a temporary storage area for data while it is being transferred from one place to another. It helps with "smoothing out" data transfers, ensuring that the sending and receiving entities (which might operate at different speeds) can handle the data transfer effectively.
Whereas a CPU cache is a **physical** part of the processor, a buffer is more of a **logical** concept implemented within the system software. However, a buffer does use physical memory: it is a portion of RAM set aside for temporary storage.
[Registers](/Computer_Architecture/CPU/CPU_architecture.md#registers) should not be confused with caches. Unlike the caches, registers are a part of the CPU itself. They are much quicker but hold less data than the caches.