Recently, the artificial neurons used in deep learning models have often been compared to biological neurons, and the two are indeed somewhat related in how they function. The differences between them, however, are numerous and distinct, so let us explore them. In this post we'll look at a few key characteristics of biological neurons and how they are simplified to obtain artificial neurons. We'll then try to understand how these differences impose limits on deep learning networks, and how moving toward a model closer to the biological neuron could improve AI as we know it.

Biological Neurons

Neurons are the basic functional units of the nervous system. They generate electrical signals called action potentials, which allow them to transmit information quickly over long distances. Almost all neurons have three basic functions essential to the normal operation of the nervous system.

These are to:

1. Receive signals from sensory organs or from other neurons.
2. Process the incoming signals and determine whether or not the information should be passed along.
3. Communicate signals to target cells, which might be other neurons, muscles, or other parts of the body.

Now let us look at the basic parts of a neuron to understand how it actually works. A neuron is composed of three main parts:

Neuron; Source: Wikipedia

1. Dendrite

Dendrites are thin filaments that receive incoming signals and propagate the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron.

2. Cell Body/Soma

The soma is the cell body, which processes the input signals and decides whether the neuron should fire an output signal. It contains the cell's nucleus.

3. Axon

The axon conducts electrical impulses, known as action potentials, away from the cell body. It typically ends in a number of synapses connecting to the dendrites of other neurons.

Working of the parts

Dendrites receive the incoming information, and processing usually takes place in the soma. Incoming signals can be either excitatory, meaning they tend to make the neuron fire (generate an electrical impulse), or inhibitory, meaning they tend to keep the neuron from firing.

Most neurons receive many input signals throughout their dendritic trees. A single neuron may have more than one set of dendrites and can receive thousands of input signals. Whether or not a neuron fires an impulse depends on the sum of all the excitatory and inhibitory signals it receives; this summation happens in the soma, the neuron's cell body. When a neuron does fire, the nerve impulse, or action potential, is conducted down the axon. A toy sketch of this summation-and-threshold idea appears at the end of this section.

Toward its end, the axon divides into many branches with large swellings known as axon terminals (or nerve terminals). These axon terminals make contact with the target cells.
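To make the summation-and-threshold idea concrete, here is a toy Python sketch, not a biophysical model: excitatory signals are positive numbers, inhibitory signals are negative, and the neuron "fires" only if their sum crosses a made-up threshold.

# Toy illustration (not a biophysical model): a neuron "fires" only when
# its combined excitatory (+) and inhibitory (-) input crosses a threshold.
# The signal values and the threshold below are made up for demonstration.

def neuron_fires(signals, threshold=1.0):
    return sum(signals) > threshold

excitatory = [0.9, 0.6]   # inputs pushing the neuron toward firing
inhibitory = [-0.4]       # inputs holding the neuron back

print(neuron_fires(excitatory + inhibitory))  # 0.9 + 0.6 - 0.4 = 1.1 > 1.0 -> True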

Artificial Neurons

An artificial neuron, also known as a perceptron, is the basic unit of a neural network. In simple terms, it is a mathematical function based on a simplified model of the biological neuron; it can also be seen as a simple logic gate with binary output. A single perceptron constitutes a one-layer neural network, while a multi-layer perceptron is called a neural network or deep neural network.

Each artificial neuron has the following main functions:

  1. Takes inputs from the input layer.
  2. Weighs each input separately and sums them.
  3. Passes this sum through a nonlinear function to produce the output.
Perceptron; Source: missinglink.ai

How a perceptron processes its inputs:

  1. Input values
    This layer passes the input values to the neuron. It can be as simple as an array of values, and it plays a role similar to a dendrite in a biological neuron.
  2. Weights and Bias
    Weights are an array of values, each of which is multiplied by the corresponding input value. The sum of all these products is called the weighted sum. Next, a bias value is added to the weighted sum to get the neuron's pre-activation value. The bias is a technical device that shifts the activation function curve up or down, or left or right, on the graph, making it possible to fine-tune the numeric output of the perceptron.
  3. Activation Function
    The activation function decides whether or not the neuron fires. It maps the pre-activation value to the required output range and, for a binary perceptron, decides which of the two output values the neuron should produce.
  4. Output Layer
    The output layer gives the final output of the neuron, which can then be passed to other neurons in the network or taken as the final output value.

Now, let us ground all this theory with a practical example of an artificial neuron at work.

Consider a neuron with four inputs (x1, x2, x3, x4):

# Example of a dot product using numpy
import numpy as np

# Sample inputs to the perceptron
inputs = [1.2, 2.2, 3.3, 2.5]

# Weights of the perceptron
weights = [0.4, 0.6, -0.7, 1.1]

# Bias of this particular perceptron
bias = 2

# Take the dot product of the weights and inputs
# and add the bias to the summation value
output = np.dot(weights, inputs) + bias
print(output)

# Output: 4.24

Here the numpy.dot() function computes the dot product of the weights and the inputs. Internally, the calculation works as follows:

output = np.dot(weights, inputs) + bias
       = np.dot([0.4, 0.6, -0.7, 1.1], [1.2, 2.2, 3.3, 2.5]) + 2
       = ((0.4 * 1.2) + (0.6 * 2.2) + (-0.7 * 3.3) + (1.1 * 2.5)) + 2
       = (0.48 + 1.32 - 2.31 + 2.75) + 2
       = 2.24 + 2
       = 4.24

This output value is then fed to an activation function to compute the final value of the perceptron.
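The example above stops at the weighted sum, so as a minimal sketch let us finish the computation. The classic perceptron uses a step function; the sigmoid shown alongside it is a common smooth alternative, chosen here purely for illustration.

import numpy as np

def step(x):
    # Classic perceptron activation: fire (1) if the input is positive.
    return 1 if x > 0 else 0

def sigmoid(x):
    # Smooth alternative that squashes any input into (0, 1).
    return 1 / (1 + np.exp(-x))

weighted_sum = 4.24           # the value computed above
print(step(weighted_sum))     # 1 -> the neuron fires
print(sigmoid(weighted_sum))  # ~0.986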

Conclusion

Hopefully, this article gives a basic understanding of the most basic unit of a neural network. In the real world, perceptrons work under the hood: we build neural networks with deep learning frameworks such as TensorFlow, Keras, and PyTorch. These frameworks take hyperparameters such as the number of layers, the activation function, and the network type, and construct the network of perceptrons automatically. When we work on real, production-scale deep learning projects, we find that the operations side of things can become a bit daunting.
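For instance, a single artificial neuron like the one above can be expressed in Keras in a few lines. This is a minimal sketch assuming TensorFlow 2.x; the sigmoid activation and SGD optimizer are arbitrary illustrative choices.

# Minimal sketch (assumes TensorFlow 2.x): a single Dense unit is
# essentially the perceptron above: weights, a bias, and an activation.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,)),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")
model.summary()  # 5 parameters: 4 weights + 1 bias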

Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.
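To give a flavor of what such learning computes, here is a minimal sketch of one gradient-descent step for the single sigmoid neuron from earlier. This is the one-neuron special case of backpropagation; the target value and learning rate are made up for illustration.

import numpy as np

x = np.array([1.2, 2.2, 3.3, 2.5])   # inputs from the earlier example
w = np.array([0.4, 0.6, -0.7, 1.1])  # weights from the earlier example
b, target, lr = 2.0, 0.0, 0.1        # bias, made-up target, made-up learning rate

pred = 1 / (1 + np.exp(-(np.dot(w, x) + b)))  # forward pass (sigmoid neuron)
error = pred - target                          # gradient of squared error w.r.t. the prediction
grad_z = error * pred * (1 - pred)             # chain rule back through the sigmoid
w -= lr * grad_z * x                           # nudge the weights downhill...
b -= lr * grad_z                               # ...and the bias
print(pred, w, b)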

But real brains are highly unlikely to rely on the same algorithm. It's not just that "brains are able to generalize and learn better and faster than the state-of-the-art AI systems," as Yoshua Bengio, a computer scientist at the University of Montreal and scientific director of the Quebec Artificial Intelligence Institute, has said. For a variety of reasons, backpropagation isn't compatible with the brain's anatomy and physiology, particularly in the cortex.

References:
1. Relation between Artificial and Biological Neuron
2. Perceptrons and MLP
3. Quanta Magazine