[From the last episode: We saw that converters are needed around an analog memory to convert between digital and analog parts of the circuit.]
We’ve seen that we can modify a digital memory in a number of ways to make it do math for us. Those modifications include:
- Using the bit cell as a resistor;
- Programming the specific resistance value of each bit cell;
- Engaging more than one word line at a time;
- Applying specific voltages to the word lines, individually (rather than just “high” or “low”);
- Using DACs to convert digital values to analog values on the word line; and
- Using ADCs to convert the analog results to digital results.
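To see how those pieces add up to math, here’s a rough sketch in Python (the numbers and names are mine, purely for illustration): each bit cell contributes a current equal to its conductance (one over its resistance) times its word-line voltage, and the bit line adds all those currents together.

```python
# Sketch of the analog math an in-memory-compute array performs.
# Each bit cell's conductance encodes a stored value; each word-line
# voltage encodes an input. Ohm's law gives each cell's current, and
# the shared bit line sums them (Kirchhoff's current law).

def bit_line_current(conductances, word_line_voltages):
    """Total current on one bit line: I = sum(G_i * V_i)."""
    return sum(g * v for g, v in zip(conductances, word_line_voltages))

# Example: stored values 0.5, 1.0, 2.0 (siemens); inputs 0.25, 0.5, 0.75 (volts)
weights = [0.5, 1.0, 2.0]
inputs = [0.25, 0.5, 0.75]
print(bit_line_current(weights, inputs))  # 0.125 + 0.5 + 1.5 = 2.125
```

That’s a multiply-and-accumulate done by physics rather than by logic gates, which is the whole appeal of the idea.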
Here’s where the caveats I’ve mentioned come in. We’re looking at precise voltage values on the word line, precise resistance values in the bit cells, and precise measurement of resulting currents in the sense amps.
The first caveat is that you can only be so precise. We saw some reasons last week why you get some rounding when converting back and forth between digital and analog. And there’s always going to be a little bit of fudge in those conversions anyway. In fact, some worry that super-precise converters will be needed. We can build those, but they take more circuitry, which takes more space on the chip, which makes the chip more expensive.
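To put a number on that rounding, here’s a toy Python model of an n-bit converter (my own illustration; real converters are more involved). It snaps every value to the nearest of its 2ⁿ levels, so more bits means smaller rounding error – and more circuitry.

```python
# Toy model of converter rounding: an n-bit converter can represent
# only 2**n distinct levels, so every value gets rounded to the
# nearest step. More bits -> finer steps -> less error -> more circuitry.

def quantize(value, n_bits, full_scale=1.0):
    step = full_scale / (2 ** n_bits)
    return round(value / step) * step

x = 0.333
for bits in (4, 8, 12):
    q = quantize(x, bits)
    print(f"{bits:2d} bits -> {q:.6f} (error {abs(q - x):.6f})")
```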
But there’s another, tougher set of caveats, and they fall under the general category of variation.
Fuzzy Numbers
It used to be that designers could assume that, more or less, the numbers they designed with were “reliable.” In other words, if the width of some feature was supposed to be 500 nm, then you could bank on that. Yeah, it might be plus or minus a couple nanometers, but that didn’t really matter.
Features are much smaller now. For comparison, 1 nm of “slop” when you’re dealing with 500 nm is 0.2%. So you know you’re going to have some variation, but it’s well under 1% (making up easy numbers to illustrate the concept). If you are dealing with a 10-nm feature, that 1 nm of slop is now 10%. While before you didn’t really care about that slop, now you have to make sure that your design works across that whole range.
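If you want to play with those percentages yourself, the arithmetic is simple:

```python
# The same 1 nm of manufacturing "slop" matters far more on a small
# feature than on a large one. (Illustrative numbers from the text.)

def slop_percent(slop_nm, feature_nm):
    return 100.0 * slop_nm / feature_nm

print(slop_percent(1, 500))  # 0.2  -> well under 1%
print(slop_percent(1, 10))   # 10.0 -> now you have to design for it
```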
When building chips, like most products, you build them by the lot. Each lot has some number of wafers, usually held together in a container. It used to be that one lot might have somewhat different numbers from another lot. Then it progressed to one wafer possibly varying from another wafer within the same lot.
Encroaching Variation
You may remember that each wafer has lots of dice on it. These days, a die on one side of the wafer may vary from a die on the other side of the same wafer. Worse than that, it’s possible that, within a single die, one part of the die may vary from another part of the die. And there are many reasons for the variation.
So here we are trying to create this precise, accurate math engine inside a memory. If you use a normal CPU to do that math, then the numbers are separate from the circuit doing the calculation. As long as the CPU is working, then it can handle the math on any numbers.
But with in-memory compute, the circuit IS the math. If you get variation in the word-line voltages, the bit-cell resistances, and the accuracy of the sense amps, then, if you do nothing else, you’re going to create a bunch of these circuits that will all yield different results. That’s because they’ll all create slightly different voltages and resistances, and the sense amps will read them slightly differently.
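Here’s a little thought experiment in Python – with made-up variation amounts – showing how chips that are nominally identical can give different answers for the same calculation:

```python
import random

# Thought experiment: the same nominal dot product computed on several
# "chips," each with small random variation in its cell conductances
# and word-line voltages. Every chip returns a slightly different
# answer. (The 2% variation figure is made up for illustration.)

def noisy_dot(weights, inputs, rel_var=0.02, rng=random):
    total = 0.0
    for w, v in zip(weights, inputs):
        g = w * rng.gauss(1.0, rel_var)   # conductance varies chip to chip
        vv = v * rng.gauss(1.0, rel_var)  # so does the word-line voltage
        total += g * vv
    return total

weights = [0.5, 1.0, 2.0]
inputs = [0.25, 0.5, 0.75]
exact = sum(w * v for w, v in zip(weights, inputs))  # 2.125
for chip in range(3):
    print(f"chip {chip}: {noisy_dot(weights, inputs):.4f} vs exact {exact}")
```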
If the variations are super slight, then it probably doesn’t matter. But these days, variation is anything but slight. So this is an important consideration for designers.
Gotta Keep ’Em Calibrated
So what do you do? Details are slim, since this is a very new thing and many companies are stingy with information – they don’t want competitors to know what they’re doing. But the general idea is that they need to calibrate each chip. Calibration is an old idea, and it’s been widely applied in the mechanical world for… probably centuries. Many mechanical devices need calibration before they leave the factory.
That’s likely what will happen with these circuits. When tested, they’ll also be calibrated, and that calibration will be stored on-chip somehow.
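As a sketch of what such a calibration might look like – the scheme and numbers here are hypothetical, a simple two-point gain/offset correction – you measure the chip against known values during test, compute a correction, and store it for use at run time:

```python
# Hypothetical per-chip calibration sketch: during test, feed the chip
# two known values, fit a gain/offset correction for its particular
# variation, and store that correction on-chip for run time.

def two_point_calibration(measured_lo, measured_hi, true_lo, true_hi):
    """Fit gain/offset so that: corrected = gain * measured + offset."""
    gain = (true_hi - true_lo) / (measured_hi - measured_lo)
    offset = true_lo - gain * measured_lo
    return gain, offset

# During test: this chip reads 0.11 for a true 0.10, and 0.98 for a true 1.00.
gain, offset = two_point_calibration(0.11, 0.98, 0.10, 1.00)

def correct(reading, gain, offset):
    """At run time, apply the stored correction to every raw reading."""
    return gain * reading + offset

print(correct(0.11, gain, offset))  # ~0.10
print(correct(0.98, gain, offset))  # ~1.00
```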
There’s also concern about variation when the temperature changes or if there are slight changes in the power supply voltage. Those can’t be calibrated in. Instead, designers must do a careful job of making sure that their circuits compensate for temperature and voltage variations.
And even if the signals are solid, there might be “noise” on them – effectively, electrical vibrations – that could make for a wobbly result.
So the in-memory compute idea is cool and interesting. But there are some challenges, and it remains to be seen whether it really takes off. The biggest problem would be if all of the things necessary to give the right precision – big DACs and ADCs, calibration circuits, and calibration during testing – add so much to the cost of the chip that it becomes economically unviable.
This isn’t to pooh-pooh the idea; it’s just that it’s a tad too early to break out the champagne just yet.
It also gives you a good look at some of the real-world issues that chip designers have to deal with.