Brian Muinde P101/0928G/17
CAT 1
TYPED ONLY
QUESTION ONE
i. Learning (2 Marks)
Learning is the ability of a network to learn from its environment and to improve its performance through that learning.
Artificial neurons are crude approximations of the neurons found in real brains. They may be physical devices or purely mathematical constructs.
Can computers make decisions (even smarter ones) just like humans?
The learning element uses the feedback to improve the action next time.
The problem generator suggests new experiences for the learning agent to learn from.
i. Soma
The soma processes the incoming activations and converts them into output activations.
ii. Dendrite
Dendrites are fibres that emanate from the cell body and provide the receptive zone that receives activations from other neurons.
iii. Axon hillock
The axon hillock is a specialized part of the cell body (or soma) of a neuron that connects to the axon. It can be identified under light microscopy by its appearance and location in the neuron and by its sparse distribution of Nissl substance.
iv. Myelin sheath
The myelin sheath is an insulating layer, or sheath, that forms around nerves, including those in the brain and spinal cord. It is made up of protein and fatty substances.
v. Nodes of Ranvier
Nodes of Ranvier are periodic gaps in the insulating myelin sheath on the axon of certain neurons that serve to facilitate the rapid conduction of nerve impulses.
e) Briefly explain how information flow in a neural cell (6 Marks)
As shown in the diagram above, a typical neuron consists of the following four parts, with the help of which we can explain its working:
Dendrites − tree-like branches responsible for receiving information from the other neurons the neuron is connected to; in other words, they act as the ears of the neuron.
Soma − the cell body of the neuron, responsible for processing the information received from the dendrites.
Axon − works like a cable through which the neuron sends information.
Synapses − the connections between the axon and the dendrites of other neurons.
f) Describe the significance of synapse strength between any two neurons (2 Marks).
The strength of the synapse between two neurons determines its capacity for synaptic plasticity, the fundamental property of neurons that confers on the human brain its capacity for memory, learning and intelligence, which in turn forms the basis of all higher intellectual functions.
g) Briefly explain three applications that are performed by artificial neural networks. (3 Marks)
QUESTION TWO
This vastly simplified model of real neurons is also known as a Threshold Logic Unit:
1. A set of synapses (i.e. connections) brings in activations from other neurons.
2. A processing unit sums the inputs and then applies a nonlinear activation function.
How it works:
• Each input to the neuron is either:
– an external input, or
– the output of some other neuron.
• The activation level is then transformed by an activation function to produce the neuron’s output, Yi.
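The Threshold Logic Unit described above can be sketched in a few lines of Python. This is a minimal illustration (the weights, threshold and gate chosen here are illustrative, not from the question):

```python
# A minimal sketch of a Threshold Logic Unit: sum the weighted inputs,
# then apply a hard threshold to produce the output (illustrative values).

def tlu(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum reaches the threshold, else output 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: a 2-input AND gate, a classic McCulloch-Pitts demonstration.
print(tlu([1, 1], [1, 1], threshold=2))  # fires: 1
print(tlu([1, 0], [1, 1], threshold=2))  # silent: 0
```

With unit weights and a threshold of 2, the unit fires only when both inputs are active, reproducing logical AND.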
b) State any three features that are missing in the McCulloch-Pitts neuron model (3 Marks)
- Non-linear summation
- Smooth thresholding
- Stochasticity
c) An artificial neuron consists of four basic components. Describe each of these components. Use a diagram to illustrate your answer (6 Marks)
Dendrites − tree-like branches responsible for receiving information from the other neurons the neuron is connected to; in other words, they act as the ears of the neuron.
Soma − the cell body of the neuron, responsible for processing the information received from the dendrites.
Axon − works like a cable through which the neuron sends information.
Synapses − the connections between the axon and the dendrites of other neurons.
d) There are two types of threshold functions. State and explain each of them. Use a diagram to
illustrate your answer (6 Marks)
Different threshold functions include: (a) the hard threshold function; (b) the soft threshold function; (c) the half-soft threshold function; and (d) the improved half-soft threshold function.
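The two basic threshold functions named above can be sketched directly; this is an illustrative comparison, using the unit step for the hard threshold and the logistic sigmoid for the soft threshold:

```python
import math

# Sketch of the two basic threshold (activation) functions:
# a hard threshold (step) and a soft threshold (logistic sigmoid).

def hard_threshold(x):
    """Hard threshold: the output jumps abruptly from 0 to 1 at x = 0."""
    return 1.0 if x >= 0 else 0.0

def soft_threshold(x):
    """Soft threshold: a smooth, differentiable transition from 0 to 1."""
    return 1.0 / (1.0 + math.exp(-x))

print(hard_threshold(-0.5), hard_threshold(0.5))  # 0.0 1.0
print(round(soft_threshold(0.0), 2))              # 0.5
```

The soft threshold is differentiable everywhere, which is why it (rather than the hard step) is used when training by gradient descent.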
e) Explain the significance of synapse strength in a biological neuron (1 Mark)
QUESTION THREE
Backpropagation is the method of fine-tuning the weights of a neural net based on the error rate obtained in the previous epoch (i.e., iteration). Proper tuning of the weights reduces error rates and makes the model reliable by increasing its generalization.
Training involves the feedforward of data signals to generate the output, followed by the backpropagation of errors for gradient-descent optimization.
An epoch is when an ENTIRE dataset is passed forward and backward through the neural network only ONCE. Since one epoch is usually too big to feed to the computer at once, we divide it into several smaller batches.
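The epoch/batch split described above can be sketched as follows (the dataset and batch size here are toy values):

```python
# Sketch of splitting a dataset into minibatches, so that one full
# pass over all the batches equals one epoch (illustrative only).

def minibatches(dataset, batch_size):
    """Yield successive slices of the dataset of at most batch_size samples."""
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]

dataset = list(range(10))          # a toy dataset of 10 samples
batches = list(minibatches(dataset, batch_size=4))
print(len(batches))                # 3 batches: sizes 4, 4, 2
print(batches[-1])                 # [8, 9]
```

Processing one batch at a time is what makes training on datasets too large for memory feasible.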
Time. You need time to achieve any satisfying results, and planning is difficult.
Data is not free at all. To train a machine learning model, you need big sets of data. It may seem that this is not a problem anymore, since everyone can afford to store and process petabytes of information.
Talent deficit. Although many people are attracted to the machine learning industry, there are still very few specialists who can develop this technology.
The black box problem. The early stages of machine learning belonged to relatively simple, shallow methods.
d) Neural network has three main types of layers. State and explain each of these layers.
Input layer — the first layer, which receives the raw input data and passes it into the network.
Hidden layers — intermediate layers between the input and output layers, where all the computation is done.
Output layer — the final layer, which produces the network's result.
An activation function in a neural network defines how the weighted sum of the input is transformed into an output from a node or nodes in a layer of the network. Sometimes the activation function is called a “transfer function.”
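How the activation (transfer) function turns a node's weighted sum into its output can be sketched like this; the weights, bias and input values are illustrative:

```python
import math

# Sketch of a single node: the weighted sum of its inputs (plus bias)
# is passed through an activation function to give the node's output.

def node_output(inputs, weights, bias, activation):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(weighted_sum)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

# Same weighted sum (here 0.0), two different transfer functions:
print(node_output([1.0, 2.0], [0.5, -0.25], 0.0, sigmoid))  # 0.5
print(node_output([1.0, 2.0], [0.5, -0.25], 0.0, relu))     # 0.0
```

The choice of activation function changes what the same weighted sum produces, which is why it is a design decision per layer.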
QUESTION FOUR
i. Bias (2 Marks).
Bias is like the intercept added in a linear equation. It is an additional parameter in the neural network used to adjust the output along with the weighted sum of the inputs to the neuron. Thus, bias is a constant that helps the model fit the given data as well as possible.
Convergence describes a progression towards a network state in which the network has learned to properly respond to a set of training patterns within some margin of error.
A self-organizing map, or self-organizing feature map, is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map; it is therefore a method of dimensionality reduction.
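The self-organizing map idea can be sketched in one dimension: for each input, the best-matching unit (BMU) and its neighbours are pulled toward that input, so the units gradually organize themselves over the input space. All values below are toy numbers:

```python
import random

# A minimal 1-D self-organizing map sketch (illustrative): each step pulls
# the best-matching unit and its neighbours toward the current input.

random.seed(0)
weights = [random.random() for _ in range(5)]    # 5 map units, 1-D inputs

def som_step(weights, x, lr=0.5, radius=1):
    bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    for i in range(len(weights)):
        if abs(i - bmu) <= radius:               # neighbourhood of the BMU
            weights[i] += lr * (x - weights[i])  # pull the unit toward x
    return bmu

for x in [0.1, 0.9, 0.1, 0.9] * 50:              # two alternating input clusters
    som_step(weights, x)

print([round(w, 2) for w in weights])            # units settle near 0.1 and 0.9
```

After training, different regions of the map respond to different input clusters, which is the discretized low-dimensional representation the definition describes.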
Handwriting Recognition – The idea of Handwriting recognition has become very important.
This is because handheld devices like the Palm Pilot are becoming very popular. Hence, we can
use Neural networks to recognize handwritten characters.
Traveling Salesman Problem – Neural networks can also solve the traveling salesman problem.
But this is to a certain degree of approximation only.
Image Compression – Vast amounts of information are received and processed at once by neural networks, which makes them useful in image compression. With the Internet explosion and more sites using more images, using neural networks for image compression is worth a look.
Stock Exchange Prediction – The day-to-day business of the stock market is very complicated.
Many factors influence whether a given stock will go up or down on any given day.
c) State any three parameters that are set in neural networks (3 Marks)
Learning rate: this hyperparameter sets the step size used during backpropagation, when parameters are updated according to an optimization function.
Momentum: a technique used during the backpropagation phase. As with the learning rate, parameters are updated so that they converge towards the minimum of the loss function.
Minibatch size: when you are facing billions of data points, it can be inefficient (as well as counterproductive) to feed your NN all of them at once.
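How the learning rate and momentum enter the parameter update can be sketched on a toy loss; the loss L(w) = w**2 and all constants below are illustrative stand-ins for a real network:

```python
# Toy sketch of a gradient-descent update with momentum: the velocity
# accumulates past gradients, and the learning rate scales each step.

def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w, v = 5.0, 0.0
for _ in range(100):
    grad = 2 * w                     # gradient of the toy loss L(w) = w**2
    w, v = sgd_momentum_step(w, grad, v)

print(abs(w) < 0.1)                  # the weight has settled near the minimum at 0
```

A larger learning rate takes bigger steps (risking overshoot); momentum smooths the trajectory by carrying velocity from previous updates.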
Unsupervised learning. This revolves around the model learning complex relationships within data that you have not been able to determine yet, through tasks such as clustering data points, which gives insight into the structure of the data.
Supervised learning. The model learns from labelled examples, mapping inputs to known outputs so that it can predict the output for new, unseen inputs.
Self-Optimization – includes features for network and radio optimization during network operation.
Self-Healing – relates to features that are utilized during network operation upon cell failure.
QUESTION FIVE
a) Briefly explain the meaning of the term ‘Hopfield network’. Use a diagram to illustrate your answer. (3 Marks)
A Hopfield network is a special kind of neural network whose response differs from that of other neural networks. Its output is calculated by a converging iterative process. It has just one layer of neurons, whose size matches that of the input and output, which must be the same. When such a network recognizes, for example, digits, we present a list of correctly rendered digits to the network; afterwards, the network can transform a noisy input into the corresponding perfect output.
b) Describe four properties of a Hopfield network (4 Marks)
This model consists of neurons with one inverting and one non-inverting output.
The output of each neuron should be an input of the other neurons but not an input of itself.
Connections can be excitatory as well as inhibitory: a connection is excitatory if the output of the neuron is the same as the input, and inhibitory otherwise.
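These properties can be illustrated with a toy Hopfield sketch: one layer of ±1 neurons, Hebbian weights with no self-connections (no neuron feeds itself), and iterative updating that converges to a stored pattern. The stored pattern and noisy input below are illustrative:

```python
# Toy Hopfield-network sketch: Hebbian training on one pattern,
# then iterative recall that corrects a corrupted input.

def train(patterns, n):
    """Hebbian learning: w[i][j] = sum of p[i]*p[j] over patterns; no self-loops."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronous threshold updates until the state settles."""
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1 for i in range(len(state))]
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train([stored], 6)
noisy = [1, -1, -1, -1, 1, -1]       # one bit flipped
print(recall(w, noisy))              # recovers the stored pattern
```

The iterative recall is the "converging iterative process" mentioned in part (a): the corrupted bit is pulled back to the stored value.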
c) Use back propagation algorithm to compute one training pass on the following neural
network. (6 Marks)
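The network diagram referred to in the question is not reproduced here, so the following is a hypothetical 2-2-1 sigmoid network with illustrative weights, showing the shape of one training pass: forward propagation, error computation, and one gradient-descent weight update.

```python
import math

# Hypothetical 2-2-1 sigmoid network: one forward pass, then one
# backpropagation update (all weights and values are illustrative).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w_hidden = [[0.15, 0.20], [0.25, 0.30]]   # 2 hidden neurons, 2 inputs each
w_out = [0.40, 0.45]                       # 1 output neuron, 2 hidden inputs
x, target, lr = [0.05, 0.10], 0.01, 0.5

# Forward pass: hidden activations, then the output.
h = [sigmoid(sum(wij * xi for wij, xi in zip(row, x))) for row in w_hidden]
y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)))
error = 0.5 * (target - y) ** 2

# Backward pass: output delta, hidden deltas, then weight updates.
delta_out = (y - target) * y * (1 - y)
delta_h = [delta_out * wo * hi * (1 - hi) for wo, hi in zip(w_out, h)]
w_out = [wo - lr * delta_out * hi for wo, hi in zip(w_out, h)]
w_hidden = [[wij - lr * dh * xi for wij, xi in zip(row, x)]
            for row, dh in zip(w_hidden, delta_h)]

print(round(error, 4))   # squared error before the update
```

Repeating this pass over the training data is what drives the error down epoch by epoch.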
d) State and explain any two types of radial basis functions (4 Marks)
ART1 – It is the simplest and the basic ART architecture. It is capable of clustering binary input
values.
ARTMAP – It is a supervised form of ART learning in which one ART module learns based on a previous ART module. It is also known as predictive ART.