Realising the vision for enabling adaptive learning at the edge: In conversation with Gordon Wilson, CEO, Rain Neuromorphics

One of the bottlenecks to enabling ever-increasing intelligence at the edge of the network is the cost of enabling machine-learning capabilities. For a number of reasons (ranging from the size of the learning models and the software, to the available hardware, the network bandwidth for moving data between cloud and edge, and the difficulty of training at the edge), enabling machine learning at the edge is currently too costly, and so most hardware players are focused on mechanising the inference stage of intelligence. Gordon, however, has a different take that goes something like this: “Well, training is too expensive, yes. Until you have the hardware.”

The ability for machines to learn from the data they are absorbing is something the scientific community is growing increasingly comfortable with – it is only a matter of time before advanced learning capabilities at the edge are mass-deployed. Currently, the work done around training AI models is largely reserved for the digital hardware sitting in servers. These trained models are then broadcast over networks to edge devices, which increasingly run inference workloads on them. This of course works for applications that require a low level of general intelligence, such as a smart hearing aid that can also monitor heartbeat – such a hearing aid would run a pre-trained model that infers “when there is no heartbeat detected, the user is most likely experiencing a bout of bad fortune”. The inference done at the hearing aid would run the collected data through this pre-trained model to produce an output along the lines of “no news is good news” or “something seems out of normal boundaries, let’s send a message back to HQ”. The key takeaway here is that the same trained model is deployed across all the hearing aids, so training can be done on the GPU cloud, followed by uploading the model onto an edge circuit (e.g. a mixed-signal circuit) that requires further digital calibration.
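To make the cloud-train / edge-infer split concrete, here is a minimal, hypothetical sketch of the hearing-aid example. The names, thresholds and “model” (a simple normal range for heart rate fitted offline) are illustrative assumptions, not Rain Neuromorphics’ actual pipeline.

```python
# Hypothetical sketch: the "model" is fitted once in the cloud, then the same
# frozen bounds are shipped to every hearing aid, which only runs inference.
import numpy as np

def train_bounds(heart_rates: np.ndarray, k: float = 3.0) -> tuple[float, float]:
    """Cloud-side 'training': learn a normal range from historical heartbeat data."""
    mu, sigma = heart_rates.mean(), heart_rates.std()
    return mu - k * sigma, mu + k * sigma

def infer(reading: float, bounds: tuple[float, float]) -> str:
    """Edge-side inference: compare a live reading against the fixed, pre-trained bounds."""
    low, high = bounds
    if low <= reading <= high:
        return "no news is good news"
    return "something seems out of normal boundaries, send a message back to HQ"

bounds = train_bounds(np.random.normal(72, 8, 10_000))  # done once, centrally
print(infer(74.0, bounds))  # runs on every device, with the identical model
print(infer(20.0, bounds))
```

The point of the sketch is simply that nothing on the device ever changes the model – which is exactly the limitation the next paragraph pushes against.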

But what if each device needs to adapt independently, constantly updating its models and assumptions about the environment around it? Starting to sound like a brain? Well, that’s exactly the inspiration behind this architecture, and it has a name: Neuromorphic Computing. I’ll leave it to the hyperlinks to explain just what neuromorphic computing is, but Rain Neuromorphics are looking to somewhat leapfrog existing approaches to brain-inspired computing. Gordon and the team at Rain Neuromorphics are innovating on edge hardware with a constant vision to enable fields and capabilities such as robotics, autonomy, continuous learning, adaptive learning, personalisation, and local/secure compute. You can imagine true autonomy of learning in machines being used for military robotics, space exploration, self-driving capabilities and practically any application that requires a self-adaptive learning mechanism to exist in isolation. It’s here that we begin to see the shape of an end vision: isolated, intelligent machines learning independently.

To achieve this vision, Gordon says, “There are important clues to take from the brain”. Some of these clues include:

  • A physics-based, gradient learning rule that is also local – we see this in the brain, where neurotransmitters diffuse across the synapse so that the next neuron activates only if it picks up enough of a signal from the one before it. Yes, this can now be achieved on a microchip.
  • A way to scale this through sparse connectivity – for the purpose of explanation, sparse neural networks can be thought of as “compressed” neural networks in which each “neuron” connects to only a select number of other neurons. We also see this in the brain, e.g. the neurons that detect a tickle on the neck being wired to the neurons associated with laughter. A tickle thus equals laughter (it would be pretty strange for a human to be rushed into A&E because a light neck tickle triggered the extreme pain neurons in the brain). A toy sketch of both clues follows this list.
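The sketch below is a hypothetical NumPy illustration of those two clues together: a local, Hebbian-style weight update (each connection changes using only the activity of the two neurons it joins) applied over a fixed sparse connectivity mask. It is an assumption-laden toy, not Rain Neuromorphics’ actual circuit or the training rule from Bengio’s lab.

```python
# Toy illustration only: a local learning rule over sparse connectivity.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 8

# Sparse connectivity: each neuron is wired to only a small subset of the others.
mask = (rng.random((n_neurons, n_neurons)) < 0.2).astype(float)
np.fill_diagonal(mask, 0.0)
weights = rng.normal(0, 0.1, (n_neurons, n_neurons)) * mask

def step(pre_activity: np.ndarray, weights: np.ndarray, lr: float = 0.01):
    """One pass of activity plus a local, Hebbian-style update on existing connections."""
    post_activity = np.tanh(weights @ pre_activity)  # signal crosses the "synapses"
    # Local rule: the change to weights[i, j] depends only on neurons i and j.
    weights += lr * np.outer(post_activity, pre_activity) * mask
    return post_activity, weights

activity = rng.random(n_neurons)
for _ in range(5):
    activity, weights = step(activity, weights)
```

Because every update is local and the connectivity is sparse, nothing in this loop needs a global view of the network – which is what makes the idea plausible to implement directly in analogue hardware.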

There are a number of other basics required to enable Neuromorphic Computing, but where Rain Neuromorphics’ USP really lies is in the fact that they have chosen to train their models on the analogue circuitry itself, right at the start of the process of enabling intelligence at the edge. The work comes from Yoshua Bengio‘s laboratory, and the challenge, as Gordon very rightly put it, is that “training on analogue is hard, otherwise everyone would be doing it”. In a light analogy, this means that instead of using a pencil to sketch out how a brain works, then figuring out how to make that look good in watercolour, and then figuring out how to turn that watercolour into a clay model of the brain, Rain Neuromorphics is looking to do all its intelligent work straight on the clay brain model. It’s an oversimplified analogy, but it captures the leapfrogging that Gordon envisions will enable true autonomous and adaptive learning on edge devices with their own brains.

The hope is that not only is Rain Neuromorphics’ technology inspired by the brain, but it will also help us to understand the brain itself further by creating a functional informational model of the sparse networks found in our own intelligent organ. You can find more on this here.

Published by Prab Jaswal

https://anthroconomy.wordpress.com/about/
