Have you ever wondered how virtual assistants like Alexa or Siri understand our speech and language? How does Google's translator app translate an entire document into our preferred language in a matter of seconds? How do Amazon and YouTube provide us with such relevant recommendations? All of these are the results of Deep Learning and Artificial Neural Networks. Want to know more? Keep reading!

What is Deep Learning?

Deep Learning is a part of Machine Learning (ML), which is itself a part of Artificial Intelligence (AI). Artificial Intelligence allows machines to be programmed in such a way that they can think and act like humans. Machine Learning is a set of algorithms that learn from the data they are given, so that the machine can carry out such tasks on its own, without human intervention. Deep Learning is a type of machine learning based on structures modelled on the human brain, called Artificial Neural Networks (ANNs).

The adjective "deep" refers to the use of a multi-layered structure of algorithms: each layer transforms the output of the one before it, and together the layers are arranged so that the network can carry out tasks much the way humans do.


Evolution of Deep Learning

If you think deep learning was invented in the 21st century, that is not quite right. The history of deep learning goes back to 1943, when Warren McCulloch and Walter Pitts created a computer model based on the neural networks of the human brain. They used a combination of mathematics and algorithms, which they called threshold logic, to mimic the thought process. From there, deep learning kept evolving steadily over the years.
Most of us are unaware of this history because deep learning was unpopular at the time: it had various flaws, was inefficient, and therefore was not very useful. But every groundbreaking technology goes through certain breakthroughs before it becomes popular, and deep learning was no exception. It went through three major advancements:


In 1960, Henry J. Kelley developed the basics of the Back Propagation Model. Stuart Dreyfus developed a simpler version of it, based only on the chain rule, in 1962. Although the concept of Back Propagation existed in the 1960s, it did not become practical until around 1985 because of its inefficiency.
Alexey Grigoryevich Ivakhnenko (who developed the Group Method of Data Handling) and Valentin Grigorʹevich Lapa (author of Cybernetics and Forecasting Techniques) came up with the earliest deep learning algorithms in 1965, using models with polynomial activation functions. During the 1970s, there was a slowdown in AI research due to a lack of funding.
Kunihiko Fukushima was the first to design neural networks with multiple convolutional and pooling layers. In 1979, he developed an Artificial Neural Network called the Neocognitron, which used a multi-layered, hierarchical design and is considered a forerunner of today's Convolutional Neural Networks (CNNs). Many concepts of the Neocognitron are still used today. Between 1970 and 1985, the concepts behind the Back Propagation Model were also developed significantly. Then another AI setback, from roughly 1985 to 1990, drastically affected deep learning research.

Figure: The Neocognitron proposed by Fukushima. Digit features of increasing complexity are extracted in a hierarchical feed-forward neural network.

In 1999, computers started becoming much faster at processing data, and GPUs (graphics processing units) were developed. This was a remarkable step in the evolution of deep learning. The Vanishing Gradient Problem came to attention around 2000. It was not a problem for all neural networks, only for those that used gradient-based learning. It arose from certain activation functions, and it was eventually addressed in two ways: layer-by-layer pre-training and the development of Long Short-Term Memory (LSTM) networks. In 2001, the META Group (now Gartner) published a research report describing the challenges and opportunities of three-dimensional data growth, that is, the increasing volume, velocity, and variety of data. This report marked the arrival of Big Data as we know it today. By 2011, GPUs had become fast enough to train CNNs without layer-by-layer pre-training.
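
To see why certain activation functions cause this problem, here is a minimal sketch, assuming PyTorch is installed; the depth and width are arbitrary illustrative choices, and ReLU is included only as a contrast (the remedies mentioned above are layer-by-layer pre-training and LSTM).

```python
# Minimal sketch: how much gradient survives a deep stack of layers.
import torch
import torch.nn as nn

def gradient_at_input(activation, depth=20, width=64):
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width), activation()]
    net = nn.Sequential(*layers)

    x = torch.randn(1, width, requires_grad=True)
    net(x).sum().backward()            # backpropagate through every layer
    return x.grad.abs().mean().item()  # average gradient reaching the input

# A saturating activation such as sigmoid shrinks the gradient at every layer,
# so the value printed for sigmoid is typically many orders of magnitude
# smaller than the one printed for ReLU.
print("sigmoid:", gradient_at_input(nn.Sigmoid))
print("relu:   ", gradient_at_input(nn.ReLU))
```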

In 2012, Google Brain published the results of an unusual, free-spirited project called the Cat Experiment, which explored the difficulties of unsupervised learning: a network trained on unlabeled video frames learned to recognize cats on its own. Unsupervised learning has remained a notable area of deep learning ever since. From 2018 onwards, AI and Big Data have seen extraordinary development, and deep learning remains a growing field that calls for continuous improvement and creative ideas.

Deep Learning vs. Machine Learning

Deep Learning is a specialized version of Machine Learning. A machine learning workflow starts by manually extracting relevant features from images; these features are then used to classify the objects in the images. A deep learning workflow, by contrast, extracts the relevant features from the images automatically. It also performs "end-to-end learning": the network is given raw data and a task, such as classification, and it learns how to carry out that task on its own.
Another key difference is that deep learning algorithms keep improving as the amount of data grows, whereas traditional machine learning methods tend to hit a performance plateau even when more data is added.
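
To make the contrast concrete, here is a minimal sketch, assuming NumPy, scikit-learn, and PyTorch are installed; the images, labels, and hand-picked features are toy stand-ins chosen purely for illustration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

images = np.random.rand(100, 28, 28)        # toy grayscale images
labels = np.random.randint(0, 2, size=100)  # toy binary labels

# --- Classical machine learning: a human designs the features ---
def handcrafted_features(imgs):
    # e.g. mean brightness and pixel variance, chosen by hand
    return np.stack([imgs.mean(axis=(1, 2)), imgs.var(axis=(1, 2))], axis=1)

classifier = LogisticRegression().fit(handcrafted_features(images), labels)

# --- Deep learning: features and classifier are learned end to end ---
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),   # convolutional layer learns the features...
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 2),        # ...and a final layer classifies them
)
logits = model(torch.randn(100, 1, 28, 28))  # raw pixels in, class scores out
```

In the first workflow, changing the task usually means redesigning the features; in the second, the same network simply relearns them from the raw data.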


Why is Deep Learning so popular?

It mostly comes down to accuracy. Deep learning provides recognition accuracy at levels never reached before. This helps electronic devices meet user expectations, which is crucial for safety-critical applications. Deep learning has reached the point where machines can outperform humans at certain tasks, such as image recognition and classification.

  1. We can perform feature extraction and classification in one go, which means we only have to design a single model.
  2. Models can be trained much faster than with earlier methods, thanks to the availability of large amounts of labeled data and GPUs that process that data at high speed.
  3. Deep networks can learn highly complex mappings thanks to their layered structure and large number of parameters, so complex features no longer have to be designed by hand.
  4. They are quite easy to implement using high-level open-source libraries such as TensorFlow and PyTorch, as the sketch after this list shows.
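
As a concrete illustration of point 4, here is a minimal sketch, assuming TensorFlow 2.x is installed; the random arrays are hypothetical stand-ins for a real labeled image dataset. Defining, compiling, and training a small image classifier takes only a few lines.

```python
# Minimal sketch: a tiny end-to-end image classifier with the Keras API.
import numpy as np
import tensorflow as tf

x = np.random.rand(200, 28, 28, 1).astype("float32")   # toy grayscale images
y = np.random.randint(0, 10, size=200)                  # toy class labels (0-9)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),     # 10-way classifier
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x, y, epochs=2, batch_size=32)   # training is a single call
```

On real data, only the loading step changes; the model definition and training stay this short, which is a large part of why these libraries have made deep learning so accessible.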

Applications

We live in a world where we can hardly imagine living without machines, and wouldn't it be great if they could make complex decisions on our behalf? So, let's look at seven practical examples of deep learning.


Translations

One of the most amazing applications of deep learning is Image-Text Translation. With the Google Translate app, we can translate photographed text into our preferred language. This is very useful for business people, travellers, historians, linguists, and others.

Virtual Assistants

Be it Alexa, Siri, Cortana, or Google Assistant, all of them use deep learning to interact with users. They understand our speech and language and give us an experience close to talking with another person. They serve up our favorite playlists or suggest dining spots based on our past preferences, and they can also send emails, take notes, book appointments, and much more. Virtual assistants are literally at our beck and call for everything we do.

Vision for autonomous vehicles

Autonomous vehicles use deep learning to understand the realities of the road, such as traffic signals, a person crossing the street, or another vehicle ahead. The more data the algorithms receive, the more human-like the decisions they can make.

Entertainment

Life without Netflix or Amazon is unimaginable nowadays. Ever wondered what would happen if they stopped suggesting what to watch or buy? We would probably be stuck with a small handful of movies, web series, or handpicked items: only what we already know of, with no variety and nothing new. Thankfully, we do not face that problem, and all the credit goes to deep learning. We can watch, buy, or listen to things recommended on the basis of our previous searches. It makes life easy and interesting, and this field only seems set to grow more popular.

Medicine and pharmaceuticals

Though it is still a developing field, deep learning has shown great promise in diagnosing everything from common diseases to tumors, and it is even being used to design medicines based on an individual's genomic structure. Once fully developed, it could be a breakthrough for science and medicine.

Facial recognition

Deep learning is used for facial recognition not only for security purposes; in the future it may also let us pay in stores with our faces. The main challenge is to identify a person's face even if they have changed their hairstyle, grown or shaved off a beard, or the image was taken in poor lighting.

Image colorization

Previously, turning black-and-white pictures into colored ones by hand was a tedious job. Now, deep learning algorithms can recreate a black-and-white image in color, and the results are quite impressive.

Conclusion

Even though deep learning is a boon to mankind, it also deeply affects our daily lives. People can lose the motivation to work, which leads to unemployment and laziness; this affects not only our minds but also our health. Our imaginative powers may eventually fade as we grow more dependent on machines to perform our tasks. Our primary role is to make sure that we put these technologies to good use and, at the same time, use them in a controlled manner.