Learning to see the hidden layer in neural nets

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Could be? It definitely is. I've thought about this problem for a long time, ever since it turned up in my private meditations on Cybernetic Intelligence years ago (back when I was inventing ideas like the Parableizer Engine, long before I knew such things were already being discussed theoretically by others). Now people are writing about it, and it really does present a problem with no obvious answer yet. As the article asks: "How well can we get along with machines that are unpredictable and inscrutable?"

The Dark Secret at the Heart of AI

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

The article starts with some background illustrations to frame things and then gets into the problem:

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
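
To make Jaakkola's point concrete, here is a rough sketch (assuming PyTorch, purely for illustration): a two-unit network's weights can be printed and reasoned about directly, but the parameter count explodes once the network is thousands of units wide and hundreds of layers deep.

```python
# A rough illustration of the scale problem, assuming PyTorch (illustrative only).
import torch.nn as nn

# A very small network: its handful of weights can be printed and reasoned about.
tiny = nn.Sequential(nn.Linear(2, 2), nn.Tanh(), nn.Linear(2, 1))
for name, p in tiny.named_parameters():
    print(name, p.data)

# The same idea scaled to "thousands of units per layer and maybe hundreds of
# layers" -- counted arithmetically rather than instantiated.
def param_count(n_inputs, width, depth):
    first = n_inputs * width + width                 # weights + biases into layer 1
    hidden = (depth - 1) * (width * width + width)   # the remaining hidden layers
    output = width + 1                               # a single output unit
    return first + hidden + output

print(param_count(n_inputs=1000, width=1000, depth=100))  # roughly 100 million parameters
```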

The problem may turn out to be so hard that people take the easy way out and ignore it, which might work for a while, perhaps even indefinitely, but is also potentially dangerous. What we really need is a way to ensure that what is happening under the hood of an AI is ethically and morally sound. Modeling on human behavior is not necessarily the best approach, since even the most virtuous humans have all kinds of minor character flaws that an AI could misinterpret. We need a way to model on principles that guide behavior toward the betterment of all while respecting core attributes like free agency. So I am pleased to see this in the article:

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface...

I'm writing this post today partly because I was surprised to discover I hadn't written it already; I read the article when it came out and knew I should comment on it. As usual, I'm posting this here for the dual purpose of expressing some fragmentary thoughts on the subject and leaving a bookmark I can come back to when I have the time to bring all these threads together into a larger narrative, like a book or a website.

Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.
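
For what it's worth, here is a minimal sketch of the general idea behind generating such images: gradient ascent on an input to maximize a chosen class score, often called activation maximization. It assumes a torchvision classifier and the standard ImageNet class index for "dumbbell"; it is not Google's actual pipeline, and real feature-visualization work adds regularization so the result looks like an image rather than noise.

```python
# A minimal activation-maximization sketch, assuming PyTorch + torchvision.
# Not Google's actual code; real feature visualization adds priors/regularizers.
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 543                                      # "dumbbell" in the ImageNet mapping (assumed)
img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise

optimizer = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    score = model(img)[0, target_class]
    (-score).backward()                                 # gradient *ascent* on the class score
    optimizer.step()

# `img` now drifts toward whatever the model associates with "dumbbell";
# in Google's experiments, a disembodied arm often came along for the ride.
```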

I'm also writing this post today because I've been visualizing how to measure this hidden layer, and I think I have a few ideas about how it could be done. It will take lots of math, lots of data, and some things that are over my head, but I think it can be done.
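
As a first, very rough step, here is the kind of measurement I have in mind, sketched with a forward hook on a toy PyTorch network (the architecture and numbers are placeholders): capture a hidden layer's activations over a batch of inputs and summarize them statistically.

```python
# A sketch of measuring a hidden layer, assuming PyTorch; toy network, synthetic data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),    # <- the hidden layer we want to look at
    nn.Linear(64, 2),
)

captured = {}
def save_output(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

model[3].register_forward_hook(save_output("hidden"))  # hook the second ReLU

x = torch.randn(256, 20)               # a batch of synthetic inputs
model(x)

acts = captured["hidden"]
print(acts.shape)                      # (256, 64): one row per input, one column per unit
print(acts.mean(dim=0))                # average activation of each unit over the batch
print((acts > 0).float().mean(dim=0))  # fraction of inputs on which each unit fires
```

From there, the real work is figuring out which of these summaries actually say something about what the layer has learned, rather than just describing its numbers.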
