

# Efficient processing of deep neural networks manual
The history of deep learning can be traced back to 1943, when Walter Pitts and Warren McCulloch created a computer model based on the neural networks of the human brain. They used a combination of algorithms and mathematics they called “threshold logic” to mimic the thought process. Since that time, deep learning has evolved steadily, with only two significant breaks in its development, both tied to the infamous Artificial Intelligence winters.

The earliest efforts to develop deep learning algorithms came from Alexey Grigoryevich Ivakhnenko (who developed the Group Method of Data Handling) and Valentin Grigorʹevich Lapa (author of Cybernetics and Forecasting Techniques) in 1965. They used models with polynomial (complicated equation) activation functions, which were then analyzed statistically. From each layer, the best statistically chosen features were forwarded on to the next layer, a slow, manual process (a toy sketch of this layer-by-layer selection appears at the end of this section).

Henry J. Kelley is given credit for developing the basics of a continuous backpropagation model in 1960, and in 1962 a simpler version based only on the chain rule was developed by Stuart Dreyfus. While the concept of backpropagation (the backward propagation of errors for purposes of training) existed in the early 1960s, it was clumsy and inefficient, and would not become useful until 1985.

Feature extraction is another aspect of deep learning. Normally a data scientist, or a programmer, is responsible for feature extraction; in deep learning, an algorithm automatically constructs meaningful “features” of the data for purposes of training, learning, and understanding. It is used for pattern recognition and image processing.
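To make that contrast concrete, here is a minimal sketch (NumPy only; the stripe-classification task, the hand-crafted variance statistic, and the learning rate are all illustrative assumptions, not part of any standard library). It compares a feature a person chooses by hand with one whose weights a simple gradient-descent learner constructs automatically from the raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(vertical: bool) -> np.ndarray:
    """Build a noisy 8x8 image with vertical or horizontal stripes."""
    img = np.zeros((8, 8))
    if vertical:
        img[:, ::2] = 1.0   # bright columns
    else:
        img[::2, :] = 1.0   # bright rows
    return img + 0.1 * rng.standard_normal((8, 8))

images = [make_image(v) for v in (True, False) * 50]
labels = np.array([1, 0] * 50, dtype=float)   # 1 = vertical, 0 = horizontal

# Hand-crafted feature: a person decides that comparing column-wise and
# row-wise variance should separate the two classes.
def manual_feature(img: np.ndarray) -> float:
    return img.var(axis=0).mean() - img.var(axis=1).mean()

# Automatically constructed feature: the weights of a single linear unit,
# found by logistic-regression-style gradient descent on the raw pixels.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.stack([img.ravel() for img in images])   # shape (100, 64)
w, b, lr = np.zeros(64), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)                       # predicted P(vertical)
    w -= lr * (X.T @ (p - labels)) / len(labels)
    b -= lr * (p - labels).mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == labels).mean()
print(f"accuracy of the automatically learned feature: {accuracy:.2f}")
print(f"manual feature, vertical image:   {manual_feature(images[0]):+.3f}")
print(f"manual feature, horizontal image: {manual_feature(images[1]):+.3f}")
```

In a deep network the same idea is applied layer after layer, so the learned features become progressively more abstract without anyone specifying them by hand.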
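Returning to Ivakhnenko and Lapa's method described above, the sketch below is a simplified, assumption-laden rendering of the idea rather than a faithful reproduction of the published Group Method of Data Handling: the regression data set is made up, each unit is a quadratic polynomial in two inputs, and mean-squared validation error stands in for the statistical selection criterion. Each layer fits many small polynomial models, keeps only the best-scoring ones, and forwards their outputs as inputs to the next layer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: the target depends nonlinearly on a few of eight inputs.
X = rng.standard_normal((200, 8))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + 0.1 * rng.standard_normal(200)

# Training data is used to fit units; validation data is used to select them.
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

def fit_pair(xi_tr, xj_tr, xi_va, xj_va, y_tr):
    """Fit a quadratic polynomial in two features by least squares and
    return its validation-set and training-set predictions."""
    def design(xi, xj):
        return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coef, *_ = np.linalg.lstsq(design(xi_tr, xj_tr), y_tr, rcond=None)
    return design(xi_va, xj_va) @ coef, design(xi_tr, xj_tr) @ coef

def gmdh_layer(F_tr, F_va, y_tr, y_va, keep=4):
    """Build one layer: try every feature pair, score each polynomial unit on
    validation error, and forward only the best `keep` units' outputs."""
    candidates = []
    n = F_tr.shape[1]
    for i in range(n):
        for j in range(i + 1, n):
            pred_va, pred_tr = fit_pair(F_tr[:, i], F_tr[:, j],
                                        F_va[:, i], F_va[:, j], y_tr)
            err = np.mean((pred_va - y_va) ** 2)   # statistical selection step
            candidates.append((err, pred_tr, pred_va))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]
    new_tr = np.column_stack([c[1] for c in best])
    new_va = np.column_stack([c[2] for c in best])
    return new_tr, new_va, best[0][0]

F_tr, F_va = X_tr, X_va
for layer in range(3):
    F_tr, F_va, best_err = gmdh_layer(F_tr, F_va, y_tr, y_va)
    print(f"layer {layer + 1}: best validation MSE = {best_err:.4f}")
```

In the historical procedure this selection was done by hand, layer by layer, which is why the passage above describes it as slow and manual; modern deep learning replaces it with end-to-end training.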
