Introduction to Deep Learning
What is the difference between AI, Machine Learning and Deep Learning?
1. Artificial Intelligence (A.I.)
A field of study with the goal of creating machines that exhibit intelligence.
2. Machine Learning (ML)
This involves AI algorithms that learn patterns from data in order to perform inference on new data. For example, given many photographs labelled as either of person A or not of person A, machine learning involves developing an algorithm that, when given a new photo, can identify whether it is of person A.
Not all AI algorithms use machine learning; they can also be rule-based. For instance, to identify names in text, a rule-based algorithm could rely on capitalization.
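As a sketch of what such a rule might look like, the toy function below (the function name and the rule itself are illustrative, not from any real system) flags capitalised words that are not sentence-initial:

```python
def find_candidate_names(sentence):
    # Toy rule-based approach: a capitalised word that is not the
    # first word of the sentence is treated as a candidate name.
    words = sentence.split()
    return [w.strip(".,") for w in words[1:] if w[:1].isupper()]

print(find_candidate_names("Yesterday Alice met Bob in Paris."))
# ['Alice', 'Bob', 'Paris']
```

No data or learning is involved: the behaviour is fixed by the hand-written rule, which is exactly what distinguishes this from machine learning.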
3. Deep Learning (DL)
Deep Learning is a particular technique within Machine Learning. It uses an Artificial Neural Network (ANN) with many layers, along with clever techniques for learning the optimal model parameters. Each node in the network represents some “feature” that helps represent the input or assign a class to it. For example, if the inputs are faces, a node might correspond to “nose width” or “average skin tone”. Crucially, the people building the network do not need to design these features themselves: the algorithm automatically discovers which features work best for its task.
Deep Learning in detail
The word “Deep” in Deep Learning (DL) refers to successive layers of representations. Modern DL models have tens or even hundreds of such layers.
It is a multistage way of learning representations of the data.
For example, the following four-layer network has two hidden layers:
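A minimal NumPy sketch of such a four-layer network (the layer sizes are arbitrary, chosen only for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer (4 features) -> hidden layer 1 (8 units)
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(8, 2))   # hidden layer 2 -> output layer (2 classes)

def forward(x):
    h1 = relu(x @ W1)          # first representation of the input
    h2 = relu(h1 @ W2)         # a deeper, more abstract representation
    return h2 @ W3             # raw scores for each of the 2 classes

x = rng.normal(size=(1, 4))    # one example with 4 features
print(forward(x).shape)        # (1, 2)
```

Each matrix multiplication followed by an activation is one layer of representation; stacking them is what makes the network “deep”.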
Machine Learning is fundamentally about mapping inputs (such as images) to targets (labels such as ‘cat’ or ‘dog’). This mapping is learned by observing many examples of inputs together with their targets.
What are weights, and what is learning, in deep learning neural networks?
Weights specify what a layer does with its inputs. They are stored in the layer itself and are nothing more than a bunch of numbers. Initially the weights are assigned random values. In simple terms, a weight gives an input its importance.
For example:
Suppose we need to predict the price of a laptop from its specifications.
- Features like the processor, GPU, and memory have more importance (higher weights) in predicting the price.
- Features like colour and keyboard have less importance (lower weights) in predicting the price.
A deep learning model learns how much each feature contributes to the price of the laptop. It then uses these weights to predict the price when the features (inputs) of a new laptop are given.
A weight is therefore a number attached to a feature (input), denoting how much that feature matters in the model.
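A minimal sketch of this idea as a weighted sum, where every feature value and weight below is made up purely for illustration:

```python
# Hypothetical feature values for one laptop.
features = {"cpu_score": 8.5, "gpu_score": 7.0, "memory_gb": 16, "color_score": 0.3}

# Hypothetical learned weights: CPU/GPU/memory matter a lot, colour barely at all.
weights = {"cpu_score": 55.0, "gpu_score": 40.0, "memory_gb": 12.0, "color_score": 1.0}

# The predicted price is each feature multiplied by its importance, summed up.
price = sum(features[name] * weights[name] for name in features)
print(round(price, 2))  # 939.8
```

In a real network the weights are not hand-picked like this; they are found automatically by the learning process described next.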
Learning is the process of finding the weights of all layers in the network such that the network correctly maps inputs to their associated targets (outputs).
What is a Loss Function?
To control something, we normally need to be able to observe it.
Similarly, to control the output of a neural network, we need to measure how far that output is from what we expected.
A loss function takes the network’s prediction and the true target (what you want the network to output) and computes a distance score capturing how well the network has performed.
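One common loss function for regression, shown here as a minimal sketch, is the mean squared error: the average squared distance between predictions and targets.

```python
def mse_loss(predictions, targets):
    # Mean squared error: average of the squared differences between
    # the network's predictions and the true targets.
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

print(mse_loss([2.5, 0.0], [3.0, -0.5]))  # 0.25
```

A score of 0 means the predictions match the targets exactly; the further off the network is, the larger the score.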
What is an optimizer?
An optimizer adjusts the values of the weights in the direction that lowers the loss score for the current examples.
It relies on the backpropagation algorithm to compute that direction.
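A minimal sketch of the basic gradient-descent update an optimizer performs (the loss (w − 3)² and the learning rate here are purely illustrative):

```python
def sgd_step(weight, gradient, learning_rate=0.1):
    # Move the weight a small step against the gradient,
    # which lowers the loss for the current examples.
    return weight - learning_rate * gradient

w = 0.0
for _ in range(50):
    grad = 2 * (w - 3.0)   # gradient of the toy loss (w - 3)**2
    w = sgd_step(w, grad)

print(round(w, 3))  # 3.0 — the weight converges to the loss minimum
```

Real optimizers (SGD with momentum, Adam, etc.) refine this basic step, but all of them move the weights downhill on the loss.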
What is Back Propagation algorithm?
Initially, all the edge weights are randomly assigned. For every input in the training dataset, the ANN is activated and its output is observed. This output is compared with the desired output, which we already know, and the error is “propagated” back to the previous layers. The weights are then “adjusted” accordingly. This process is repeated until the output error falls below a predetermined threshold.
Once the algorithm terminates, we have a “learned” ANN that we consider ready to work on new inputs. The ANN is said to have learned from many examples (labelled data) and from its mistakes (error propagation).
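To make the steps above concrete, here is a minimal NumPy sketch of that loop for a tiny one-hidden-layer network on a made-up regression task (the architecture, data, and learning rate are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 2))        # toy training inputs
y = (X[:, :1] + X[:, 1:]) ** 2              # toy targets to learn

# Edge weights are randomly assigned at the start.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

lr = 0.1
losses = []
for step in range(500):
    # Activate the network and observe its output.
    h = np.tanh(X @ W1)
    pred = h @ W2
    error = pred - y                         # compare with the desired output
    losses.append(float(np.mean(error ** 2)))

    # Propagate the error back through the layers.
    grad_pred = 2 * error / len(X)
    grad_W2 = h.T @ grad_pred
    grad_h = grad_pred @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h ** 2))  # tanh derivative

    # Adjust the weights accordingly.
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(losses[-1] < losses[0])  # True: the error shrinks as training repeats
```

The loop is exactly the algorithm described: forward pass, compare with the target, propagate the error backwards, adjust the weights, and repeat.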
So why is Deep Learning so hyped?
It makes problem solving much easier because it completely automates feature engineering: the most crucial step in the machine learning workflow.