Finally understand backpropagation for neural nets

Well, I cannot speak highly enough of this guide to neural nets, written for people who already understand software programming: Hacker's guide to Neural Networks. I wouldn't suggest it for people who don't program, but it's a very good example of how to target a reader like me. He starts with a very basic intuitive example, then goes into a slightly more complex example of the same basic idea, then a third slightly more complex example, and finally a fourth example, where backpropagation is at last shown in all its humble glory.

The first time I read it, I understood the first example okay. The second time I read it, I understood the second example, and so forth. Throughout, he uses a simplified form of neural "gate" that is simply a mathematical function (add, multiply, etc.), instead of fiddling around with logic gates -- which would have added another layer of complexity to a subject he was making as simple as possible.
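To give a flavor of what I mean by a "gate", here's a minimal sketch in Python (the guide itself uses JavaScript, and these function names are my own): a multiply gate is just a function with a forward pass that computes the output, and a backward pass that reports how sensitive the output is to each input.

```python
def forward_multiply(x, y):
    # forward pass: just compute the function
    return x * y

def backward_multiply(x, y, grad_output):
    # backward pass: d(x*y)/dx = y and d(x*y)/dy = x,
    # each scaled by the gradient flowing in from above
    return y * grad_output, x * grad_output

out = forward_multiply(-2.0, 3.0)            # -6.0
dx, dy = backward_multiply(-2.0, 3.0, 1.0)   # dx = 3.0, dy = -2.0
```

That backward pass is the whole trick: it tells you which way to nudge each input to make the output go up.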

I did have to take a couple of days out to learn what a derivative is, and there are some other calculus concepts, like the chain rule, that he touches on and that I don't fully understand. I will also say that I only understand backpropagation at a high level for now; it will take more study before I can write the code for it myself. But I get it well enough to move on in the journey, and to settle on a better name for it in my own inner language: neural net echo calculation, or maybe calculation echo.
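For anyone else at the same stage, the chain rule part can be sketched in a few lines of Python. This is my own toy rendering of the idea, not code from the guide: chain two gates together, f(x, y, z) = (x + y) * z, and "echo" the gradient backwards through each gate in turn.

```python
x, y, z = -2.0, 5.0, -4.0

# forward pass through two gates
q = x + y    # add gate: 3.0
f = q * z    # multiply gate: -12.0

# backward pass: the gradient echoes back through each gate
df_dq = z            # multiply gate: d(q*z)/dq = z
df_dz = q            # multiply gate: d(q*z)/dz = q
df_dx = df_dq * 1.0  # add gate passes the gradient through unchanged
df_dy = df_dq * 1.0  # (d(x+y)/dx = d(x+y)/dy = 1)
```

The chain rule is just that multiplication of local gradients as the signal travels back, which is why "calculation echo" feels like the right name to me.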

Anyway, I've now linked to this article three times, and though I've read a number of other articles, I kept coming back to this one because it really speaks my language.

Thanks, Andrej Karpathy.
