[From the last episode: We took a review break on all of the AI stuff we’ve discussed so far.]
Before we leave AI for a while, there are a couple of other things we should circle back on. (Yeah, we could have done this before the break, but… man, I needed a break, didn’t you?) Way at the beginning, we talked about artificial neural networks (ANNs) and the fact that they don’t work the way the brain works (hence “artificial”). So… what about networks that work the way the brain does? Is anything happening there?
The answer is “Yes,” kind of. (Folks working on it might object to the “kind of” qualifier…)
Compute Everything
The brain works in a fundamentally different way from an ANN. I’ve heard some disagreement on this, so please keep your mind open to new developments. For now, we’re going to go with what appears to be the most popular view.
Let’s think for a second about how an ANN processes video. What looks to us like a smooth moving image is, in fact, a quickly-moving series of still images, each of which is called a frame. If they move fast enough, then our eyes perceive them as continuous rather than choppy. That speed is referred to as the frame rate, and it’s measured in frames per second.
An ANN takes each of those frames and calculates the whole thing. If, say, it detects a dog in one frame and that dog is still there in the next frame, it’s going to completely recompute the dog anyway. It’s brute force, and it consumes a lot of energy, but it works well enough.
You might well ask, “If there’s a dog in each of several successive frames – especially if it’s sitting still – why are we recomputing it for each frame?” And that’s a really good question, one that some have answered with, “We shouldn’t be – and we don’t need to.”
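To make that concrete, here’s a toy sketch of the frame-based approach in Python. The `model` function and the fake frames are stand-ins I made up; the point is just that the full computation runs on every frame, whether or not anything changed.

```python
import numpy as np

def model(frame):
    """Stand-in for a full neural network: it touches every pixel, every time."""
    return float(frame.sum())  # placeholder for real (expensive) inference

rng = np.random.default_rng(0)
video = [rng.random((480, 640)) for _ in range(10)]  # ten made-up frames

# Frame-based processing: the whole network runs on the whole frame,
# every frame, even if the "dog" hasn't moved at all.
results = [model(frame) for frame in video]
```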
Using Events to Drive Computing
The other way of approaching these things is sometimes called event-based. You don’t process everything; you process when events occur. In the video example, an event would be that something changed from one frame to another.
Something usually changes between frames in a video, but often it’s not the whole image. If someone is walking past a static background, then that background hasn’t changed. Only the person has. You could even argue that, if the camera were panning around so that the background also moved, then, while all of the pixels might have changed, the image itself mostly hasn’t: it’s just shifted in some direction.
So the idea here is, “Only process things that changed.” And then you can do a lot less work.
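Here’s a minimal sketch of that idea, assuming a simple per-pixel change threshold (the threshold value is arbitrary): compare the new frame to the previous one, and only the pixels that changed become events.

```python
import numpy as np

rng = np.random.default_rng(0)
prev_frame = rng.random((480, 640))
next_frame = prev_frame.copy()
next_frame[100:120, 200:240] += 0.5   # only a small region "moved"

THRESHOLD = 0.1  # arbitrary: how big a pixel change counts as an event
changed = np.abs(next_frame - prev_frame) > THRESHOLD

# Only the changed pixels generate events; the rest of the frame is reused.
events = np.argwhere(changed)  # (row, column) of each event
print(f"{len(events)} of {changed.size} pixels changed "
      f"({100 * len(events) / changed.size:.2f}%)")
```

In this toy case, well under one percent of the pixels would need any processing.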
How do we do that physically? Well, there are variations. The brain is an analog thing. At least for parts of the brain, it works by sending electrical pulses down nerves, and they may cause a neuron to “fire” – that is, send a burst of information across the gap in a synapse.
Analog SNNs
I’m not going to get into the details of how that happens – partly because I’m no biology expert (and I’m not sure how much the experts know today – it’s still a research area as far as I know). But let’s go with this much: we have a mechanism whereby a spike arrives at a synapse and it may or may not cause something to fire.
This has given rise to what are called spiking neural networks (SNNs). And some of them try really hard to replicate the analog spiking behavior. In particular, you may see references to one of the following two kinds of circuits:
- Integrate-and-fire: this accepts a series of spikes, and, after accumulating enough of them, it fires.
- Leaky integrate-and-fire: this is like the first one, but, between spikes, some of the accumulation “leaks” away, so that, if there’s a big gap in time between spikes, the older spikes have less impact.
What does “fire” mean in this case? Clearly we’re not squirting chemicals across a synaptic gap. In this case, it means we create a spike downstream, which then travels to the next… layer, for lack of a better term. But here’s the thing: these spikes are events, and SNNs respond to events. If there are enough events, then they create another event as a response.
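Here’s a minimal sketch of a leaky integrate-and-fire neuron. The leak and threshold numbers are made up for illustration; set the leak to 1.0 and you get the plain integrate-and-fire version.

```python
LEAK = 0.9        # fraction of accumulated potential kept each time step
THRESHOLD = 1.0   # potential at which the neuron fires

def run_lif(input_spikes, leak=LEAK, threshold=THRESHOLD):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    fire_times = []
    for t, spike in enumerate(input_spikes):
        potential *= leak           # "leaky": old inputs fade between steps
        potential += spike          # "integrate": accumulate incoming spikes
        if potential >= threshold:  # "fire": emit an event downstream...
            fire_times.append(t)
            potential = 0.0         # ...and reset
    return fire_times

# Closely spaced spikes accumulate and fire; widely spaced ones leak away.
print(run_lif([0.4, 0.4, 0.4, 0, 0, 0, 0.4, 0, 0, 0, 0.4]))  # -> [2]
```

The three back-to-back spikes pile up and trigger a firing; the later, isolated spikes decay before they can.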
Digital SNNs
Most of these analog approaches are (to my knowledge) university research projects. There are also companies that are doing SNNs in a digital manner. Some refer to this as emulating SNNs. Here, instead of voltages and spikes and such, each event is simply a digital packet of information that travels down a bus. All of the calculations are done digitally.
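As a toy illustration of the digital version, here’s a sketch where each spike is just a small packet of data, and computation happens only when a packet arrives at a neuron. The packet format, wiring, and weight are all invented for this example; real chips define their own.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class SpikeEvent:
    timestep: int
    source: int  # which neuron fired

# Invented wiring: neuron 0 feeds neurons 1 and 2; neuron 1 also feeds 2.
connections = {0: [1, 2], 1: [2]}
WEIGHT = 0.6      # same weight on every connection, for simplicity
THRESHOLD = 1.0

potentials = defaultdict(float)

def deliver(event, queue):
    """Update only the neurons this packet reaches; every other neuron idles."""
    for target in connections.get(event.source, []):
        potentials[target] += WEIGHT
        if potentials[target] >= THRESHOLD:
            potentials[target] = 0.0
            print(f"neuron {target} fired at t={event.timestep + 1}")
            queue.append(SpikeEvent(event.timestep + 1, target))

queue = deque([SpikeEvent(0, 0), SpikeEvent(1, 0)])  # two input spikes
while queue:
    deliver(queue.popleft(), queue)
```

The energy story falls out naturally: when no packets are in flight, nothing computes.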
There are a few start-ups using the digital SNN model. BrainChip and GrAI Matter come to mind. But the biggest and most-watched project appears to be Intel’s Loihi project. This is a massive effort, and yet it’s still research. The leader of the project has indicated hope for true products sometime in the next five years. In other words, there’s lots of work to do.
The big gain from these, if they prove out their promise, is to do more work with less energy. Pretty simple notion, even if the technology itself isn’t simple.
I’ve covered this topic in more detail in Semiconductor Engineering. It’s written for engineers, but it may still be of interest if you’re looking to learn more. It doesn’t go into crazy detail.