04-21-2020, 04:18 AM
(This post was last modified: 04-21-2020, 04:19 AM by Gabrielle of Gaussia.)
One of the notable limitations of ANNs over the last decade has been their dependence on extremely large quantities of training data to get good results on a task. One reason for this is that they rely on brute-force training methods like gradient descent to optimize the network in parameter space. Another is that traditional ANNs can't really learn on the job: with the exception of RNNs, each layer of the network only outputs once during a given task, so there's not much room for learning based on interaction between activity and synaptic weights.

There's another type of ANN, the Spiking Neural Network (SNN), that doesn't work this way. Unlike traditional ANNs, SNNs are asynchronous: each neuron continually fires spikes and receives input spikes from other neurons, which allows for interaction between input spikes and synaptic weights. The main problem with SNNs is that they're much more computationally intensive to run on traditional hardware than the currently fashionable ANN types, so SNN research hasn't been done at anywhere near the same scale. Recently, though, Intel developed neuromorphic ASICs (Application-Specific Integrated Circuits), code-named Loihi, that run SNNs extremely efficiently, and a new SNN research community has grown up around these chips. They've been developing new learning algorithms for SNNs, including ones that take advantage of the interaction between activity and synaptic weights. Using one such learning algorithm, researchers recently created an artificial olfactory system that could actively learn new smells after one or a few exposures. In my opinion that's far more impressive than things like AlphaGo, because this sort of one-shot learning is often cited as one of the essential qualities of AGI, since it's something we're able to do without much effort.
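To make the spike/weight interaction idea concrete, here's a minimal toy sketch of a leaky integrate-and-fire (LIF) neuron with a simple spike-timing-dependent plasticity (STDP)-style weight update. To be clear, this is a textbook-style illustration, not the algorithm from the Nature paper or anything Loihi-specific; all parameter values and function names are made up for the example.

```python
# Toy sketch: leaky integrate-and-fire (LIF) neuron with a simple
# STDP-style update. Illustrative only; not the paper's algorithm,
# and all parameter values here are arbitrary.

def simulate(input_spikes, weight=0.5, steps=100):
    """Run one LIF neuron driven by a single input spike train.

    input_spikes: set of time steps at which the input neuron fires.
    Returns the final synaptic weight and the output spike times.
    """
    v = 0.0            # membrane potential
    tau = 0.9          # leak factor per step
    threshold = 1.0    # firing threshold
    lr = 0.05          # learning rate for the weight update
    last_pre = None    # time of the most recent input (presynaptic) spike
    output_spikes = []

    for t in range(steps):
        v *= tau                      # membrane potential leaks over time
        if t in input_spikes:
            v += weight               # input spike injects weighted current
            last_pre = t
        if v >= threshold:
            output_spikes.append(t)   # neuron fires a postsynaptic spike
            v = 0.0                   # reset after firing
            # STDP-style rule: if the input fired shortly before this
            # neuron did (pre-before-post), strengthen the synapse.
            if last_pre is not None and t - last_pre <= 5:
                weight += lr * (1.0 - weight)
    return weight, output_spikes

w, spikes = simulate(input_spikes={2, 4, 6, 8, 10, 30, 32, 34})
print(w, spikes)  # weight grows because input spikes keep preceding output spikes
```

The point of the sketch is that learning happens *during* operation: every spike the neuron emits can immediately modify its synapses based on recent input timing, which is exactly the kind of activity-weight interaction a feed-forward ANN trained offline by gradient descent doesn't have.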
Anyway, this sort of thing makes me much more optimistic that SNNs will be able to demonstrate other qualities of AGI, like unsupervised learning, the ability to learn entirely new skills without direction, and the ability to combine multiple disparate skills for a new task that hasn't been encountered before. The model ran on a single Loihi chip, which can only simulate about 130k spiking neurons at a time, but Intel recently built a rack-mounted system, Pohoiki Springs, which can simulate SNNs with 100 million neurons on a power budget of only 300 W, so I expect much more impressive research in the next 1-2 years. I really wouldn't be surprised if technology like this allows for the creation of AGI in roughly the same timeframe as it occurs in OA.
TLDR: Researchers working on unorthodox learning algorithms for Spiking Neural Networks have demonstrated one quality of AGI, one-shot learning, in recent research such as this artificial olfactory SNN.
https://www.nature.com/articles/s42256-020-0159-4