[From the last episode: We saw how gaps in heap memory (a block of working memory that programs request and release as they run, so where specific data ends up is unpredictable) can be eliminated through garbage collection (defragmenting memory by moving multiple free "holes" together so that the memory can be allocated more effectively).]
So far we’ve seen several aspects of the hierarchy of computer memory. We know that the fastest memory is SRAM ("static random access memory" – temporary memory that’s very fast, but expensive and energy-hungry, so we don’t have enough of it to hold everything). So we keep more stuff in DRAM ("dynamic random access memory" – temporary working memory that loses its contents when the power goes off; not super fast, but very cheap, so there’s lots of it), which is slower – and we cache what we’re currently using in… cache (a place, built with SRAM, to park data coming from slow memory so that you can access it quickly; it’s organized so that items that haven’t been used in a while can be replaced with new ones, and any change to cached data has to make its way back into the original slow storage).
But there’s one more level to this hierarchy. We’ve talked about flash memory (memory that retains its contents even with the power off – thumb drives use flash memory) as the permanent storage for many programs, and that’s definitely true for IoT-device computing. But what about in the cloud (large numbers of computers located somewhere far away and accessed over the internet)? Those programs may be huge, and the traditional storage for such big programs is the hard disk. We can fit way more on a hard drive (a type of persistent, non-volatile memory built from rotating platters and "read heads" that sense the data on the platters) than we can in DRAM in most systems.
That said, the hard drive – an old-fashioned memory with a spinning platter and a read head, all of which resembles an old vinyl record player with a needle – is, in many cases, being replaced by the so-called solid-state drive (a memory that acts like a hard drive, but is built out of flash memory instead), or SSD.
And what’s an SSD? It’s basically a large batch of flash memory built to look like a hard drive. The computer, when reading from or writing to an SSD, shouldn’t care whether it’s an SSD or a true hard drive.
Hard Drives and DRAM
So, in addition to DRAM, we have these hard drives – and they’re bigger than the DRAM we have in the system. So what happens when we have a program that’s too big to fit entirely in DRAM? Well, it’s sort of like the cache situation with SRAM: the part of the program we need right now is loaded from the hard drive into the DRAM. Unlike cache, however, the chunks that are brought into DRAM are usually pretty big. They’re referred to as pages.
You might think, “Well, then, this is just like cache!” – but there’s an important difference from cache: you don’t directly address cache memory. The program always tries to address DRAM. If the cache happens to have what you need, then you get the result a whole lot faster. But the program still thinks it got what it needed directly from the DRAM. The cache is sort of invisible that way.
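To make that invisibility concrete, here’s a toy sketch (not any real hardware, and the sizes are made up) of a direct-mapped cache sitting between a program and DRAM. Notice that the program only ever calls `read(addr)` with a DRAM address – it never talks to the cache directly, and it gets the same answer whether the access was a hit or a miss:

```python
# Toy model: a direct-mapped cache that is invisible to the program.
DRAM = {addr: f"data@{addr}" for addr in range(64)}  # pretend main memory
CACHE_LINES = 4
cache = {}  # line index -> (tag, value)

def read(addr):
    """Return the value at addr; the caller can't tell hit from miss."""
    line = addr % CACHE_LINES      # which cache line this address maps to
    tag = addr // CACHE_LINES      # identifies which address occupies the line
    entry = cache.get(line)
    if entry and entry[0] == tag:
        return entry[1]            # fast path: cache hit
    value = DRAM[addr]             # slow path: go all the way to DRAM...
    cache[line] = (tag, value)     # ...and keep a copy for next time
    return value

print(read(5))  # miss: fetched from DRAM
print(read(5))  # hit: served from cache, but the answer is identical
```

The second `read(5)` is much cheaper, but from the program’s point of view nothing changed – which is exactly what “the cache is invisible” means.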
But the same invisibility holds for the hard drive too. The program may try to execute parts of itself that aren’t in DRAM yet, so a part of the circuit has to notice that the program wants something that’s not yet available, and then load the needed page (a chunk of memory on a hard drive or SSD that can be brought into DRAM for use or for execution) from the hard drive into DRAM before it can proceed. This process is what we’ll talk about in the next blog.
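As a rough mental model – a toy sketch only, with made-up page sizes and no real OS machinery – the “notice it’s missing, then load it” step looks like this. DRAM can hold only two pages at a time; touching an address whose page isn’t resident triggers a “page fault” that fetches the page from the pretend disk:

```python
# Toy model of demand paging: load pages from "disk" only when touched.
PAGE_SIZE = 4
DRAM_PAGES = 2                      # DRAM can hold only two pages at once

disk = {p: [f"p{p}w{w}" for w in range(PAGE_SIZE)] for p in range(8)}
resident = {}                       # page number -> page contents now in DRAM
faults = 0

def access(addr):
    global faults
    page, offset = divmod(addr, PAGE_SIZE)
    if page not in resident:        # page fault: the page isn't in DRAM yet
        faults += 1
        if len(resident) >= DRAM_PAGES:
            victim = next(iter(resident))  # evict the oldest resident page
            resident.pop(victim)
        resident[page] = disk[page]        # load the needed page from "disk"
    return resident[page][offset]   # now it behaves like an ordinary access

access(0); access(1)   # both land in page 0: one fault, then a plain hit
access(9)              # lands in page 2: a second fault
print(faults)          # -> 2
```

Two faults for three accesses, because the second access found its page already resident – the program itself never knew the difference.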
(Hard-drive image credit: Evan-Amos [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)])