
History Of Deep Learning

Learning seems simple, doesn’t it? In reality it was not easy at all: we were taught, and slowly we learned and upgraded ourselves. Likewise, we teach a machine by configuring features and algorithms. The idea behind deep learning is to build learning algorithms that mimic the brain.


What is Deep learning?

Deep learning is a subset of machine learning that employs layered learning algorithms. Much like a newborn baby that cannot speak at first but, within a few years, learns from the people around it, a deep learning system learns from the data it is exposed to. Here comes the role of the neural network, whose structure loosely resembles the network of neurons in the brain.

The term “deep learning” was first used in the context of Artificial Neural Networks by Igor Aizenberg and colleagues in or around 2000. Deep learning uses multiple layers to progressively extract higher-level features from raw input. In image processing, for example, the lower layers may identify only edges, while the higher layers identify concepts more meaningful to a human, such as faces or letters.
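To make the “multiple layers” idea concrete, here is a minimal sketch (assuming PyTorch is available; the layer sizes and names are purely illustrative, not from any specific historical model). Earlier convolutional layers tend to respond to simple patterns such as edges, while later layers combine them into higher-level concepts.

```python
import torch
import torch.nn as nn

# A small stack of layers: each layer transforms the output of the previous one,
# gradually turning raw pixels into higher-level features and, finally, a prediction.
feature_extractor = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level features (edges, blobs)
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # mid-level features (corners, textures)
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # higher-level features (parts of objects)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                           # final prediction, e.g. 10 digit classes
)

x = torch.randn(1, 1, 28, 28)      # one dummy 28x28 grayscale image
print(feature_extractor(x).shape)  # torch.Size([1, 10])
```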



The neurons at each level make predictions and pass that information on to the next level, all the way to the final output. Learning itself can be classified as supervised, unsupervised, or semi-supervised.

OK, maybe you find this a bit confusing; well, you are not alone.

Evolution of deep learning


Artificial Intelligence (AI) has a subset called Machine Learning (ML), and ML in turn contains another subset called Deep Learning (DL). Deep learning carries out machine learning using artificial neurons inspired by the ones in our brains. These neurons are organized in layers, and different layers perform different kinds of transformations on their inputs. Neural networks handle a wide variety of tasks, including speech recognition, social network analysis, and more. Deep learning developed into its own specialized field alongside the broader work in Artificial Intelligence and Machine Learning.

History of deep learning

Deep learning is one of the most valuable developments in the world of artificial intelligence right now. Instead of trying to grasp every detail of the field, which would lengthen this article a little too much, let’s just take a look at some of the major developments in the history of deep learning.

Although the study of the human brain is thousands of years old, the first step towards neural networks took place in 1943.

  • In 1943

Warren McCulloch, a neurophysiologist, and Walter Pitts, a young mathematician, wrote a paper on how neurons might work. They modelled a simple neural network with electrical circuits.

  • In 1958

Frank Rosenblatt created the perceptron, an algorithm for pattern recognition based on a two-layer neural network using simple addition and subtraction. The perceptron computes a weighted sum of its inputs, subtracts a threshold, and outputs one of two possible values as the result (see the sketch below).
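Here is a minimal sketch of that decision rule in Python; the weights, threshold, and example data are illustrative, not from Rosenblatt’s original work.

```python
import numpy as np

def perceptron_output(weights, threshold, inputs):
    """Weighted sum of the inputs, minus a threshold, mapped to one of two values."""
    weighted_sum = np.dot(weights, inputs) - threshold
    return 1 if weighted_sum >= 0 else 0

# Example: a perceptron whose weights make it behave like a logical AND gate.
weights = np.array([1.0, 1.0])
threshold = 1.5
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), "->", perceptron_output(weights, threshold, np.array([a, b])))
```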

  • In 1980

Kunihiko Fukushima proposed the Neocognitron, a hierarchical, multilayered artificial neural network. It has been used for handwriting recognition and other pattern recognition problems.

  • In the 1980s-1990s

In 1982, John Hopfield presented a paper to the National Academy of Sciences introducing what is now known as the Hopfield network; his approach was to create useful devices rather than simply model the brain.

At the US–Japan Joint Conference on Cooperative/Competitive Neural Networks, Japan announced its Fifth-Generation effort, which left the US worrying about being left behind. Soon, funding was flowing once again.

The term “Deep Learning” was introduced to the machine learning community by Rina Dechter in 1986.

Yann LeCun built a machine that could read handwritten digits. The invention flew beneath the wider world’s radar: although the algorithm worked, it required about three days of training.

Around this time the second AI winter kicked in, which also affected research on neural networks and Deep Learning. Overly optimistic individuals had exaggerated the “immediate” potential of Artificial Intelligence, breaking expectations and angering investors. Luckily, some people continued to work on AI and DL, and significant advances were made. In 1995, Corinna Cortes and Vladimir Vapnik developed the support vector machine.

In 1997, Sepp Hochreiter and Jürgen Schmidhuber published a milestone paper on “Long Short-Term Memory” (LSTM), a type of recurrent neural network (RNN) architecture that would go on to revolutionize deep learning in the decades to come.

  • In 2006

Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh published the paper “A fast learning algorithm for deep belief nets”, in which they stacked multiple Restricted Boltzmann Machines (RBMs) in layers and called the result a Deep Belief Network. Training the layers one at a time in this way is much more efficient for large amounts of data (a simplified sketch of a single RBM layer follows).
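Below is a minimal numpy sketch of one RBM layer trained with contrastive divergence (CD-1), the building block that gets stacked to form a Deep Belief Network. It is an illustrative simplification (no bias terms, toy data), not the exact procedure from the 2006 paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) update on a batch of binary visible vectors v0."""
    h_prob0 = sigmoid(v0 @ W)                                   # visible -> hidden probabilities
    h_sample = (rng.random(h_prob0.shape) < h_prob0).astype(float)
    v_prob1 = sigmoid(h_sample @ W.T)                           # hidden -> reconstructed visible
    h_prob1 = sigmoid(v_prob1 @ W)                              # reconstruction -> hidden
    W += lr * (v0.T @ h_prob0 - v_prob1.T @ h_prob1) / len(v0)  # approximate gradient step
    return W

# Toy binary data: 6 visible units, 4 hidden units.
data = rng.integers(0, 2, size=(32, 6)).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
for _ in range(100):
    W = cd1_step(data, W)

# In a Deep Belief Network, the hidden activations of this trained layer become the
# "visible" input used to train the next RBM layer, one layer at a time.
print(sigmoid(data @ W)[:2])
```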

  • In 2008

Andrew Ng’s group at Stanford started advocating for the use of GPUs to train Deep Neural Networks, speeding up training time many times over. This brought practicality to the field of Deep Learning, making it feasible to train on huge volumes of data efficiently.

  • In 2009

Finding enough labeled data has always been a challenge for the Deep Learning community. In 2009, Fei-Fei Li, an AI professor at Stanford, launched ImageNet, a free database of more than 14 million labeled images. It would serve as a benchmark for deep learning researchers, who would participate in the ImageNet competition (ILSVRC) every year.

  • In 2012

AlexNet, a GPU-implemented CNN model designed by Alex Krizhevsky, won ImageNet’s image classification contest with an accuracy of about 84%, a huge jump over the roughly 75% accuracy that earlier models had achieved. This win triggered a new deep learning boom globally.

  • In 2014

Ian Goodfellow created the GAN, or Generative Adversarial Network. GANs opened a whole new door for applications of deep learning in fashion, art, science, and more.

  • In 2016

DeepMind’s deep reinforcement learning model, AlphaGo, beat the human champion in the complex game of Go, a game far more complex than chess. This feat captured everyone’s imagination and took the promise of deep learning to a whole new level.

 

  • In 2019

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun won the 2018 Turing Award for their immense contributions to advancements in deep learning and artificial intelligence. It was a defining moment for those who had worked relentlessly on neural networks.

By 2012, deep learning had already been used to help people turn left at Albuquerque (Google Street View) and to answer questions about the estimated average airspeed velocity of an unladen swallow (Apple’s Siri). In June 2012, Google linked 16,000 computer processors, gave them Internet access, and watched as the machines taught themselves to identify…cats. What may seem laughably simplistic was, in fact, earth-shattering as scientific progress goes.

The Cat Experiment worked about 70% better than its forerunners at processing unlabeled images. However, it recognized less than 16% of the objects used for training and did even worse with objects that were rotated or moved.

Today, both the processing of Big Data and the evolution of Artificial Intelligence depend on Deep Learning. Deep Learning is still evolving and in need of creative ideas.
