Human Brain vs Deep Learning – the very beginning


Overview

This article covers the basic operations of a neural network, how it simulates the human brain to a large extent, and what the shortcomings of neural networks are. The write-up is divided into multiple sections for easy readability. It starts with a very simplified overview of deep learning and then attempts to compare it with the human brain.

Graphic by Simon Thorpe describing how the human brain detects and understands objects, and the latencies involved.

This is a great graphic for understanding how the human brain detects objects, makes decisions, issues motor commands, and so on. It also explains what the visual cortex learns in each of the V1, V2, and V4 areas (V3 is missing from this diagram). My understanding is that the latencies shown are averages over multiple observations and can vary from person to person. For example, a football player may be capable of reacting faster than these numbers suggest (just an assumption; I have done no experiments). Another thought is that these stages operate in parallel, so the total time equals the maximum latency along one of the pipelines (again an assumption, with no experiments done).

Even though the basic concept of learning is the same for the brain and a neural network, the brain has some skills that we have not been able to replicate well in neural networks.
(1) Ability to learn from very few samples – For example, some of us have seen the Leaning Tower of Pisa in person, but most of us have only seen it in pictures. However many pictures you may have seen, it is a small number, perhaps hundreds or thousands, and it is unlikely you have seen it from many different angles. Even so, our brain can recognise the Leaning Tower of Pisa from any angle, in any lighting, in a different colour, or with minor changes to its structure. Many neuroscientists believe the brain can construct many forms of one object at different angles, colours, or spatial positions, or that the brain's learning is invariant to colour, shape, size, and position.
(2) Ability to forget – Depending on the brain's capacity to store information, it retains what it learns. The brain generally forgets in three situations: (a) if we don't practise what we learned, the brain eventually forgets it; (b) if we learn something incompletely, the brain tends to forget it; (c) if we learn beyond capacity, the brain tends to forget some information to make room for new information. The rules of forgetting are not very clear, but one thing is sure: it is not just a matter of discarding the oldest learning. I believe the decision of what to forget is much more complicated.
(3) Hard-coded learning – Some learnings are hard-coded in the brain and are never forgotten, even without practice, e.g. driving and swimming: once we learn to drive, we never forget. Neuroscientists believe these learnings are complete, and that is why they are hard-coded.

|                | Size                    | Signal type           | Training                      | Power      | Network topology                      |
|----------------|-------------------------|-----------------------|-------------------------------|------------|---------------------------------------|
| Human brain    | 80–100 billion neurons  | Nerve impulses        | Cognitive, no structured data | Low power  | Complex, with asynchronous connections |
| Neural network | A few thousand neurons  | Floating-point values | Structured data               | High power | Layered patterns, serial connections  |

Source: Wikipedia

Single neuron of a human brain.

Now, let's try to understand this in a little more depth by looking at one brain neuron. A single neuron consists of dendrites, synapses, the soma (cell body), and the axon with its terminals.

  • Dendrites: Receive signals from other neurons. The axon terminals of the previous neuron connect to the dendrites of the current neuron.
  • Synapse: Holds the computational responsibility. Dendrites issue signal spikes or voltage signals that enable the synapse to pass data from the input to the cell body. It does not simply add the signals together and pass them on; it performs substantial additional computational steps.
  • Soma/cell body: Responsible for actions as well as for the synthesis of neural proteins.
  • Axon: Responsible for accelerating signals and carrying them over long distances.
  • Axon terminals: The output of a neuron. They generally connect to the dendrites of other neurons, triggering action in the receiving cell body and thereby passing signals from neuron to neuron.

Let us discuss a few more important capabilities of the brain before moving on to the basic details of artificial neural networks. Those additional capabilities are:
1. The human brain aggregates the computations involved, which actually accelerates inference. In almost all cases our view frame contains multiple objects, and we can easily focus on one object while still classifying all the objects in the scene; sometimes we miss some objects in the scene entirely. The assumption is that this capability of the brain is dynamic, and how it is dynamic is mostly unknown.
2. The human brain connects one object with other objects, words, sounds, or emotions – a superior kind of multimodal output. The word "blue" connects with the sky, the image of a bird connects with its chirping, a smell connects with flowers, and so on.
3. The human brain can correlate inferences from multiple sensors and build emotions from them. This high-level correlation is the key to the brain's capability. When you read a paragraph aloud, trace the text with your fingers as you read, and then write the paragraph out once you finish, you can observe that learning happens quickly. This is because the brain receives multiple inputs and can correlate them, and that correlation translates into learning.
4. The human brain gets better and better through continuous learning, e.g. driving. When you start learning to drive, you don't have clear control of the steering wheel while also having to handle the brake, clutch, and gears. Initially this is a really difficult task, but eventually the brain learns it and gets better and better. After a few months of learning, you can drive the car without much difficulty, and your mind needs less focus until an important or abrupt decision has to be made.
5. The human brain can filter noise exceptionally well. We can dynamically focus on one object among multiple objects, one voice among multiple voices, or one smell within a combination of smells. Once we know what to focus on, the brain filters out similar data and keeps focused attention on it.
A good article that explains these concepts in more detail can be found here.

Human Brain vs Deep Learning

Single neuron in AI

The architecture of an artificial neuron is similar to that of the human brain – or, I should say, somewhat similar. Most of the (known) components of a brain neuron are present here as well. Those components are:

  • Input, equivalent to the axon of a previous neuron: receives input values from external sources or from another neuron.
  • Weights, equivalent to synapses: weights are how an ANN learns; the ANN learns by adjusting the weights.
  • Cell body, equivalent to dendrites + cell body: computes the weighted input plus a bias. The output is passed to an activation function to bring in non-linearity.
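The mapping above can be sketched in a few lines of Python. This is a minimal illustration of a single artificial neuron, not any particular library's API; the input values, weights, and bias below are made-up numbers chosen only for the example.

```python
import math

def sigmoid(z):
    """Sigmoid activation: squashes the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a non-linear activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Three made-up inputs and weights, just for illustration
out = neuron([0.5, 0.1, -0.2], weights=[0.4, 0.7, 0.2], bias=0.1)
```

Adjusting `weights` and `bias` so that `out` moves closer to a desired target is, at its core, what training means.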

I will explain the whole training process in a simple way. A detailed post can be found here.

Additional components of ANN

To complete the process of learning, any neural network has to adjust its weights and biases in such a way that the cost is minimised. The remaining important concepts are activation functions, loss/cost functions, and optimisers.

Short explanations of activation functions, loss/cost functions, and optimisers are given below.

  • Activation function: This function translates a neuron's input into its output and brings non-linearity into the network. The output value of an activation function typically lies in the range 0 to 1 or -1 to 1 and represents the probability that the cell will fire. Making this part of the ANN helps ensure that one set of neurons fires for a particular input and a different set fires for another input.
  • Loss/cost function: Loss functions compare the model's results with the ground truth. A simple loss function is the mean squared error (MSE).
  • Optimisers: Used to minimise the error. Optimisers are controlled by the learning rate; their major purpose is to reach the global minimum.
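As a minimal sketch of two of the concepts above, here are plain-Python versions of ReLU (a common activation function, shown in the figure below) and the mean squared error loss. A real framework adds much more machinery, but the core arithmetic is just this.

```python
def relu(z):
    """Rectified linear unit: outputs z when positive, otherwise 0."""
    return max(0.0, z)

def mse(predictions, targets):
    """Mean squared error between model outputs and the ground truth."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

# ReLU passes positive values through and blocks negative ones
print(relu(2.5), relu(-1.0))        # 2.5 0.0
# A perfect prediction gives zero loss
print(mse([1.0, 2.0], [1.0, 2.0]))  # 0.0
```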

A neural network learns by passing the input forward, applying a non-linearity to each neuron's output, calculating the error, and back-propagating that error through the network to adjust the weights and biases until the optimiser reaches the global minimum.
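The loop just described can be sketched for the simplest possible case: a one-weight linear model fitted by gradient descent on the MSE loss. The toy data (generated from y = 2x), the learning rate, and the epoch count are made-up values for illustration only.

```python
def train(xs, ys, lr=0.1, epochs=100):
    """Fit a one-weight linear model y = w * x by gradient descent on MSE."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of the MSE loss with respect to w (the back-propagated error)
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        # Optimiser step: move w against the gradient, scaled by the learning rate
        w -= lr * grad
    return w

# Toy data from y = 2x; the learned weight should approach 2
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

For this convex, one-parameter loss the "global minimum" is easy to reach; in a deep network with millions of weights the same loop runs over many layers, and the loss surface is far less forgiving.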

Rectified linear unit (ReLU)
Source: Wikipedia

Gradient descent
Source: Wikipedia

Conclusion

This article gave an introduction to how the human brain works and how similarly an AI works. Even though AI has a lot of drawbacks, state-of-the-art AI algorithms can perform individual tasks much better than the human brain. The absolute difference is the dynamic and multimodal nature of the brain, which helps it aggregate and associate complex, higher-order decision-making capability.