[From the last episode: We looked at some of the caveats around the whole in-memory compute idea.]
OK, that’s been quite a slog over the last many months. We’re going to come up for air and review what we talked about in the world of machine learning.
- First we looked at different types of artificial intelligence (AI). The popular ones today don’t actually behave like our brains.
- We then looked at the basic structure of artificial neural networks.
- Then we introduced the two main AI functions: training and inference.
- We then dug into machine vision.
- That got us next into looking at different kinds of artificial neural networks, like CNNs.
- From there we went to the mathematical notion of convolution, the “C” in “CNN.” (No, not that CNN…)
- From there we looked at what it is that makes multiply-accumulate (MAC) so important. (There’s a quick code sketch after this list showing convolution as nothing but MACs.)
- And then we discussed why multiplication by weights is important.
- We then looked at why sparsity is useful in a machine-learning model.
- Then we dipped our toes into activation functions.
- After that, we talked about what it means to do inference “at the edge.”
- For that, we then discussed ways of making models smaller for the edge.
- From there we launched into a series of posts motivating in-memory compute. We had to start with some electrical basics in the form of Ohm’s Law.
- We then needed to see how memories work, starting with some basic memory building blocks.
- Then we looked at how memories are structured.
- And then we assembled a basic memory.
- We saw how we could isolate a single memory bit using masking or multiplexors.
- We then saw how resistors can be memory cells (or memory cells can be resistors).
- That took us to using a memory bit cell to do multiplication.
- From that, we were able to do in-memory computing – MACs in the memory. (The second sketch after this list shows that idea in code as well.)
- I then took a quick aside to ruminate on intuition vs. math for new ideas.
- We then looked at some other things we need for an analog memory that can compute.
- Finally, we looked at some of the caveats around in-memory compute. It has yet to prove itself completely.
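Two of the recurring ideas above are concrete enough to sketch in a few lines of Python. First, convolution as nothing more than a pile of multiply-accumulates. This is my own minimal illustration, not code from any of the episodes; the image, kernel, and function name are made up for the example.

```python
# A toy 2D convolution written as nested multiply-accumulate (MAC) operations.
# Plain Python lists only; the image and kernel values are made up for illustration.

def conv2d(image, kernel):
    """Slide the kernel over the image; each output pixel is one MAC sequence."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    output = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for m in range(kh):
                for n in range(kw):
                    acc += image[i + m][j + n] * kernel[m][n]  # multiply, then accumulate
            output[i][j] = acc
    return output

if __name__ == "__main__":
    image = [[1, 2, 3, 0, 1],
             [0, 1, 2, 3, 1],
             [3, 0, 1, 2, 2],
             [2, 3, 0, 1, 0],
             [1, 2, 3, 0, 1]]
    kernel = [[0, 1, 0],
              [1, -4, 1],
              [0, 1, 0]]  # a simple edge-detecting kernel
    for row in conv2d(image, kernel):
        print(row)
```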
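Second, the in-memory MAC idea itself: weights stored as conductances, inputs applied as word-line voltages, Ohm’s Law doing the multiplies, and the bit line summing the resulting currents. This is only a toy digital simulation of that analog behavior, with made-up values; real in-memory compute has all the caveats we just discussed.

```python
# A toy model of one resistive-memory column doing a MAC.
# Each cell's conductance G encodes a weight, each word-line voltage V encodes an
# input; Ohm's Law gives a per-cell current I = G * V, and the bit line sums those
# currents. The numbers below are made up for illustration.

def analog_column_mac(voltages, conductances):
    """Total bit-line current: I_total = sum(G_i * V_i)."""
    return sum(g * v for g, v in zip(conductances, voltages))

def digital_mac(inputs, weights):
    """The same dot product done the conventional digital way, for comparison."""
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc += x * w
    return acc

if __name__ == "__main__":
    inputs = [0.5, 1.0, 0.25, 0.0]   # activations, encoded as word-line voltages
    weights = [0.2, 0.8, 0.4, 0.6]   # weights, encoded as cell conductances
    print(analog_column_mac(inputs, weights))  # 1.0 (summed bit-line current)
    print(digital_mac(inputs, weights))        # 1.0 (same answer, digital MAC)
```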
Yeah… that was a long way to go without a break. We’re next going to look at a couple more machine-learning notions before moving on to other things.