Overview
Even though the basic concept of learning is the same, the brain has some abilities that we have not been able to replicate well in neural networks.
(1) Ability to learn from very few samples – For example, some of us have seen the Leaning Tower of Pisa in person, but most of us have only seen it in pictures. How many pictures might you have seen? Very few, perhaps hundreds or at most thousands, and most likely not from many different angles. Even so, our brain can recognise the Leaning Tower of Pisa from any angle, in any lighting, in a different colour, or with minor changes to its structure. Many neuroscientists believe that the brain constructs many forms of one object across different angles, colours, and spatial positions, or that its learning is invariant to colour, shape, size, and position.
(2) Ability to forget learning – Depending on its capacity to store information, the brain keeps what it has learned. The brain generally forgets in three situations: (a) if we don't practise what we learned, the brain eventually forgets it; (b) if we learn something incompletely, the brain tends to forget it; (c) if we learn beyond capacity, the brain tends to forget some information to make room for new information. The rule for forgetting is not very clear, but one thing is sure: it is not simply discarding the oldest learning. I believe the decision of what to forget is much more complicated.
(3) Hard-coded learning – Some learnings are hard-coded in the brain; even without practice, they are never forgotten. Driving and swimming are examples: once we learn to drive, we never forget. Neuroscientists believe these learnings are complete, and that is why they are hard-coded.
| | Human Brain | Neural Network |
| --- | --- | --- |
| Size | 80–100 billion neurons | A few thousand neurons |
| Signal type | Nerve impulses | Floating-point values |
| Training | Cognitive, unstructured data | Structured data |
| Power | Low power | High power |
| Network topology | Complex, asynchronous connections | Layered patterns, serial connections |

Source: Wikipedia
I will discuss a few more important capabilities of the brain and then move on to the basic details of artificial neural networks. Those additional capabilities are:
1. The human brain aggregates the computations involved, which accelerates inference. In almost all cases our view frame contains multiple objects, and we can easily focus on one object while classifying all the objects in the scene; sometimes we miss some objects entirely. The assumption is that this capability is dynamic, and how it is dynamic is mostly unknown.
2. The human brain connects one object with other objects, words, sounds, or emotions – a superior kind of multi-modal output. The word "blue" connecting with the sky, the image of a bird with its chirping, or a smell with a flower are examples of this.
3. The human brain can correlate multiple inferences from multiple senses and build emotions from them. This high-level correlation is key to the brain's capability. When you read a paragraph aloud while tracing the words with your finger, and then write the paragraph out once you finish reading, you can observe that quick learning happens. This is because the brain receives multiple inputs and can correlate them, and that correlation amounts to learning.
4. The human brain gets better and better through continuous learning. Take driving: when you start learning, you don't have clear control of the steering wheel, yet at the same time you have to handle the brake, the clutch, and the gears. Initially this is a really difficult task, but the brain eventually learns it and keeps improving. After a few months you can drive without much difficulty, and your mind needs little focus until an important or abrupt decision has to be made.
5. The human brain can filter noise exceptionally well. We can dynamically focus on one object among many, one voice among many, or one smell within a combination of smells. Once we know what to focus on, the brain filters out similar data and keeps attention on the data that matters.
A good article that explains these concepts in a little more detail can be found here:
Human Brain vs Deep Learning
I will explain the whole training process in a simple way. A detailed post can be found here.
Additional components of an ANN
To complete the process of learning, any neural network has to adjust its weights and biases in such a way that the cost is minimised. The remaining important concepts are activation functions, loss/cost functions, and optimisers. Short explanations of each are given below, followed by a minimal code sketch.
- Activation function: This function translates a neuron's weighted input into its output and brings non-linearity into the network. Its output typically lies in the range 0 to 1 or -1 to 1 and can be interpreted as the probability that the cell will fire. Making it part of the ANN helps ensure that one set of neurons fires for a particular input and a different set fires for another.
- Loss/cost function: Loss functions compare the model's results with the ground truth. A simple loss function is Mean Squared Error (MSE), the average of the squared differences between predictions and targets.
- Optimisers: Used to minimise the error. Optimisers are controlled by the learning rate, and their main purpose is to reach the global minimum of the loss.
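As a rough illustration, here is a minimal NumPy sketch of these three components. The function names, the choice of sigmoid and MSE, and the learning-rate value are illustrative assumptions, not a reference to any particular library:

```python
import numpy as np

# Activation: squashes any real-valued input into (0, 1),
# introducing non-linearity into the network.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loss: Mean Squared Error between predictions and ground truth.
def mse(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

# Optimiser: one plain gradient-descent step, controlled by the learning rate.
def gradient_descent_step(weights, gradient, learning_rate=0.1):
    return weights - learning_rate * gradient
```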
A neural network learns by passing the input forward, converting each neuron's output through a non-linearity, calculating the error, and back-propagating that error through the network to adjust the weights and biases until the optimiser reaches the global minimum.
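To make this loop concrete, below is a minimal sketch that trains a single sigmoid neuron with plain gradient descent on toy data. The data, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy data: 8 samples with 2 features; the targets come from a known
# weight vector, so the loop has something learnable to recover.
X = rng.normal(size=(8, 2))
y_true = sigmoid(X @ np.array([1.5, -2.0]) + 0.5)

w = rng.normal(size=2)  # weights
b = 0.0                 # bias
lr = 0.5                # learning rate

for epoch in range(2000):
    # Forward pass: weighted sum, then the non-linear activation.
    z = X @ w + b
    y_pred = sigmoid(z)

    # Error between prediction and ground truth (MSE).
    loss = np.mean((y_pred - y_true) ** 2)

    # Backpropagation: chain rule through the MSE and the sigmoid.
    dloss_dpred = 2.0 * (y_pred - y_true) / len(y_true)
    dpred_dz = y_pred * (1.0 - y_pred)  # derivative of the sigmoid
    dz = dloss_dpred * dpred_dz

    # Gradients of the loss with respect to the weights and bias.
    grad_w = X.T @ dz
    grad_b = dz.sum()

    # Optimiser step: adjust weights and bias to reduce the loss.
    w -= lr * grad_w
    b -= lr * grad_b

print("learned weights:", w, "bias:", b, "final loss:", loss)
```

With each pass the loss shrinks, and the learned weights drift toward the values that generated the data, which is the whole training process in miniature.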