Neural Nets need to be deep before they work well

I'm reading Neural Networks and Deep Learning and it drifts off into algebra a little more than my intuitive brain wants. So I google around to try to understand what's happening with backpropagation, sigmoid neurons, and perceptrons, and find the following video. It's a decent survey of the journey we've taken to get neural nets working.

Turns out that, after many years of puzzling over why they weren't working so well, people figured out that neural nets needed many more layers than first expected, and needed to train on much more data than they started with. Once that breakthrough was made, neural nets suddenly started performing amazingly well.
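None of this next bit is from the book or the video; it's just a toy sketch I put together to convince myself what "deep" actually means here: stacking many sigmoid layers, each one a weight-multiply followed by a squash. The layer sizes and random weights are made up (a real network would learn them with backpropagation), but the shape of the thing is the point.

    import numpy as np

    def sigmoid(z):
        # the squashing nonlinearity used by the classic sigmoid neuron
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # made-up layer sizes: 784 inputs (say, a 28x28 image), a few hidden layers, 10 outputs
    layer_sizes = [784, 256, 128, 64, 10]

    # random weights and biases for each layer; training would tune these
    weights = [rng.normal(0, 0.1, (n_out, n_in))
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

    def forward(x):
        # "deep" just means repeating weight-multiply + sigmoid many times
        a = x
        for W, b in zip(weights, biases):
            a = sigmoid(W @ a + b)
        return a

    x = rng.random(784)    # a fake input vector
    print(forward(x))      # 10 activations, one per output "class"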

The Recurrent Neural Network section went a little over my head, but it looks like something I will learn. The video is oddly edited: the talk finishes about halfway through and then the whole thing starts repeating, so it's really only about 15 minutes long. Repeating footage like that is sometimes a trick to keep copyrighted-material-detection algorithms from delisting a video, so it's odd to see it here. Anyway, here's the video:

"Google's AI Chief Geoffrey Hinton How Neural Networks Really Work" https://www.youtube.com/watch?v=EInQoVLg_UY (that URL stopped working, so here is another version of the same speech, but maybe the original was only an excerpt? This appears to be a slightly longer video than the original. "Geoff Hinton: Neural Networks for Language and Understanding" https://www.youtube.com/watch?v=o8otywnWwKc)
