Inside the Internet of Things - Bryon Moyer Blog
[From the last episode: We looked at some ways of optimizing neural networks so that they run better at the edge.] We’re going to cover one more interesting development in the world of AI, but in order for it to make any sense, we’re going to have to start by covering […]
Read More

[From the last episode: We looked at what it means to do machine learning “at the edge,” and some of the compromises that must be made.] When doing ML at the edge, we want two things: less computing (for speed and, especially, for energy) and smaller hardware that requires less […]
Read More

[From the last episode: We looked at activations and what they’re for.] We’ve talked about the structure of machine-learning (ML) models and much of the hardware and math needed to do ML work. But there are some practical considerations that mean we may not directly use the pristine model as […]
Read More

[From the last episode: We looked at the notion of sparsity and how it helps with the math.] We saw before that there are three main elements in a CNN: the convolution, the pooling, and the activation. Today we focus on activation. I’ll start by saying that the […]
Read More

[From the last episode: We looked at the reason for doing all the multiply-accumulate math in machine learning.] Before we leave this matrix-multiplication topic, there’s one other notion floating about a lot these days. Anyone listening to conversations about how to make machine-learning (ML) algorithms faster will eventually hear […]
Read More
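The multiply-accumulate (MAC) math mentioned in the last excerpt can be sketched in a few lines: every output element of a neural-network layer’s matrix multiply is one long chain of multiplies whose products are accumulated into a single sum. This is a minimal illustrative sketch, not code from the posts; the function names `mac_dot` and `matmul` are made up for the example.

```python
def mac_dot(weights, activations):
    """One MAC chain: accumulate the products of paired weights and activations."""
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a  # a single multiply-accumulate step
    return acc

def matmul(a, b):
    """Naive matrix multiply: each output element is one MAC chain."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[mac_dot(a[i], [b[k][j] for k in range(inner)])
             for j in range(cols)]
            for i in range(rows)]
```

For a 2x2 example, `matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` performs four MAC chains of two steps each, which is why hardware that speeds up MACs speeds up ML as a whole.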