Explanation-AI Machine Learning for AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

When my son and daughter were young, I thought it would be handy for them to learn something about how cars work. My father had taught me the basics and had me do things like change the oil on our family cars and sometimes do more complex activities such as changing the spark plugs. I admit freely that I was never trained to be a car mechanic and so please don’t ask me to overhaul your car’s engine. Let’s just say that I understood and was able to explain the essentials of how a car engine worked and could also reasonably carry on discourse about the fundamentals of cars.  By the way, my dad said it would impress girls, and so that was quite a motivator.

Anyway, I figured that my own children ought to also know something about how cars work. Kind of like passing along knowledge from one generation to the next. Of course, it’s harder these days to do much on a car engine because of the automation now integrated into the engine and also because the engine parts are so well protected from tinkering by everyday consumers. So, I resolved that my intent was just to familiarize them with cars and especially the foundation of how conventional engines work.

I opened the hood of the car (which is more than most kids seem to see these days!), and pointed out the essentials of what the engine consisted of. I realized that it was not making a tremendous impression upon them. This was partly because the engine wasn’t running at that moment, so they couldn’t see any action happening, and also because there is so much shielding that you really cannot see the inner areas of the engine. Even when starting the car so that the engine was running, there still wasn’t a lot that the naked eye could directly see. I decided that I somehow needed to take them inside the engine.

Well, besides potentially using a super shrinking ray that would make us really tiny (think of “Honey, I Shrunk the Kids”), I remembered that when I was young I had gotten one of those model engine kits. It allows you to put together plastic parts that represent various elements of a car engine. Once you’ve put it together, you hook it to a small battery and watch as the engine does its thing. The plastic parts are mainly transparent, so you can see a little piston going up and down, you can see the timing belt linking the crankshaft and the camshaft, and so on. It was a blast when I was a kid and I thought it would be a handy tool for teaching my kids about car engines.

Fortunately, I found that they still make those plastic engine model kits, bought one, and with the help of my children we put it together as a family project (better than doing jigsaw puzzles, so they said). When we finally got it to work, which wasn’t as easy as I had anticipated, we watched in amazement as the little engine ran. Exciting! It allowed me to further explain how engines work, and made it seem more real by looking at the components of the model kit that we had put together (I realize it might seem odd that a model helped to make something seem more real, but it does make sense).

Now, the kids came up with an interesting idea that I had not considered. They decided to disconnect some elements to see what impact it had on the engine. This was a clever way of ensuring that they understood the nature and purpose of each of the elements. By disconnecting one item, they could see what would happen, and it reinforced or at times changed their thinking about how things worked in the engine. They reached a point where they eventually could predict what would occur if you disconnected any piece of the engine. I realized it was a clever way to get a sense of whether they understood not only the individual components, but also the whole picture of how everything worked together. Double exciting!

Let’s switch topics for the moment and discuss machine learning for AI self-driving cars. Don’t worry, we’ll revisit my story about the plastic model car engine. You’ll see.

With many of the budding AI self-driving cars, machine learning is a key aspect of creating the ability for the AI to drive a car. Usually making use of artificial neural networks, the developers of AI self-driving cars collect a large amount of data and use machine learning so that the system becomes able to drive a car. Normally, the neural network consists of layers of nodes that are suggestive of neurons, though much more simplistic, and the neural network finds patterns in the collected data. The data might be, for example, images from the cameras on a self-driving car, and the neural network is trained to discern pedestrians, street signs, dogs, fire hydrants, and so on.
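To make this a bit more concrete, here is a minimal sketch of how such an image classifier might be wired up and trained. To be clear, this is not anyone’s production self-driving car code; the class labels, the tiny network, and the data are all hypothetical placeholders chosen purely for illustration (using PyTorch here as the example framework).

```python
# A minimal, illustrative sketch of training an image classifier on camera data.
# The class names, network size, and data are hypothetical placeholders.
import torch
import torch.nn as nn

CLASSES = ["pedestrian", "street_sign", "dog", "fire_hydrant"]  # illustrative labels

class TinyPerceptionNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Two small convolutional layers to find visual patterns in camera frames.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyPerceptionNet(num_classes=len(CLASSES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a (hypothetical) batch of camera images and labels.
images = torch.randn(8, 3, 64, 64)             # stand-in for real camera frames
labels = torch.randint(0, len(CLASSES), (8,))  # stand-in ground-truth labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

In practice the real networks involved are vastly larger and trained on enormous amounts of labeled sensor data, which is exactly what makes the next point about explanation so thorny.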

Sometimes the neural network will consist of thousands upon thousands of neurons. The number of layers can be small, such as a few dozen, or can run into the hundreds for a neural network with millions of neurons. In a general way, this is somewhat akin to how the human brain seems to operate, though the human brain is many orders of magnitude larger and more complex. We still don’t really know how the human brain is able to do everything from driving a car to composing music, and in the case of artificial neural networks we are simplistically borrowing from what we think the human brain might partially be doing.

The thing that’s somewhat disconcerting is that when putting together a sizable artificial neural network, it does some incredible mathematical patterning of the data, but by-and-large if we looked closely at the neurons, their weights, the synapses or connections, and the layers, we would not have any idea of why it is able to detect the patterns that it has found. In a sense, the typical artificial neural network of a sizable nature becomes a kind of mathematical black box. Sure, you can look at a particular neuron and see that it has such-and-such numeric connections and weights, but you could not directly say that it is there for a specific purpose, such as identifying the body of a dog in an image captured by the self-driving car’s camera.
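To see that black box point in the small, consider the toy snippet below. Even if you print out the raw weights feeding into a single neuron, all you get is a list of unlabeled numbers; nothing in them announces a purpose such as “detects the body of a dog.” The layer size here is just a made-up example.

```python
# Illustration of the black box point: a neuron's weights are just unlabeled numbers.
import torch.nn as nn

layer = nn.Linear(128, 64)                  # one hidden layer of a larger (imagined) network
neuron_weights = layer.weight.detach()[0]   # the 128 incoming weights of a single neuron
print(neuron_weights[:8])                   # just eight unlabeled numbers -- no purpose visible
```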

Some say that we don’t need to know why the neural network is able to work and that we should just be happy that the advances in developing neural networks have gotten us a lot further along in being able to develop AI self-driving cars. If it works, leave it be. What ain’t broken, no need to fix. That might seem like an alluring notion, but it really is not very satisfying. Besides our innate human curiosity, it could also be vital to know why the various aspects of the neural network work the way they do. Suppose the neural network falsely identifies a dog when the image really shows a fire hydrant? You won’t be able to say why the neural network goofed-up. Those that say no explanation is needed would argue that you should just retrain the neural network, and they continue to insist that your best bet is to treat the neural network as a black box.

Not Buying Into the Black Box Approach

At the Cybernetic Self-Driving Car Institute, we don’t buy into the black box approach and instead say that we all should be able to interrogate an artificial neural network and be able to explain why it does what it does.

Indeed, as an expert witness in court cases involving computer-related disputes, I am already predicting that once we have AI self-driving cars getting into accidents, which, mark my words, is going to happen, there will be a big backlash against AI self-driving cars, and people are going to want to know why things went awry. There will be lawsuits, for sure, and there will be experts called upon to explain what the system was doing and why it got involved in a car accident. If the experts just shrug their shoulders and say that the black box neural network did it, and can’t explain how or why, it isn’t going to be good times for the AI self-driving car future.

As such, we are proponents of recent efforts to try and get toward explanations of how and why a neural network works. Again, I don’t mean the mathematical aspects. I mean instead the logical aspects of which parts of the neural network do what. As an example of what I mean, there’s a handy research paper entitled “Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks” that was done by MIT Computer Science and AI Lab members and that provides an indication of some of the state-of-the-art on devising explanations of neural networks.

Their approach is similar to others that are trying to crack open the inner secrets of sizable neural networks. Generally, one of the more promising approaches consists of taking the sizable neural network and reviewing each layer, one layer at a time. You can either turn on a means of essentially watching the layer operate, or, more cleverly, extract the layer and put it into a new neural network. You then pump data into the new neural network and see what this isolated layer does. From this, you may be able to figure out the logical purpose of that specific layer. You do this with each layer and perhaps end up with an understanding of each layer of the sizable neural network.
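Here is a rough sketch of that layer-probing idea: freeze a trained network, pull out the activations of one chosen layer, and train a small “probe” classifier on those activations to see what information the layer captures. The network, the property being predicted, and the sizes below are all illustrative assumptions, not the actual code from the MIT research.

```python
# An illustrative sketch of probing one layer of a (pretend) trained network.
import torch
import torch.nn as nn

# Pretend this is a trained network whose inner layers we want to explain.
trained_net = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),   # modules 0-1
    nn.Linear(256, 256), nn.ReLU(),   # modules 2-3  <- we probe the output of module 3
    nn.Linear(256, 10),               # original output head (module 4)
)
for p in trained_net.parameters():
    p.requires_grad = False           # freeze: we study the network, we don't retrain it

def layer_activations(x, upto=4):
    """Run the input through the frozen network up to (not including) module `upto`."""
    for module in list(trained_net)[:upto]:
        x = module(x)
    return x

# A fresh, simple probe that tries to predict some property of interest
# (for example, a tag) purely from the chosen layer's activations.
probe = nn.Linear(256, 5)             # 5 = number of hypothetical property classes
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 100)                 # stand-in input data
property_labels = torch.randint(0, 5, (32,))  # stand-in labels for the property
activations = layer_activations(inputs)       # what the chosen layer "knows"
optimizer.zero_grad()
loss = loss_fn(probe(activations), property_labels)
loss.backward()
optimizer.step()
# If the probe ends up predicting the property well, that layer plausibly encodes it.
```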

This has kind of worked for neural networks that are set up to do speech recognition or that do language translation of, say, English to French. It appears that the layers often carry out their translation functions in a hierarchical fashion. One layer, for example, might be down at the lowest level, identifying a particular sound and the word it fits to; the next layer might be a step higher, looking at the words in a sentence; the next layer, higher still, might start to peg the word to a targeted translation word; and so on. Another way to describe this is that the layers are arranged from encoders to decoders, and the raw input passes through the layers and becomes more semantically enriched in terms of the translation effort as it goes from one layer to the next.

Notice that the key trick involved here consists of isolating a layer and trying to figure out what the isolated layer does. Okay, you’ll like this, now let’s tie this to my story about my kids and the plastic model car engine. Remember how they opted to disconnect a particular element of the model car engine and see what would happen? This is the same concept being used by today’s AI and Machine Learning researchers that are trying to explain the nature and function of sizable neural networks. Disconnect a piece and take a close look at that piece.

Now, in the case of my kids, they weren’t so much focused on the element that was disconnected as they were on what impact the disconnected element had on the rest of the model car engine.
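For neural networks, the analogous move is often called ablation: disconnect one piece and watch what happens to the behavior of the whole. Below is a hedged sketch of the idea, using a placeholder network in which one layer is swapped for an identity pass-through and the before-and-after accuracy is compared; the network and evaluation data are stand-ins, not anything from a real self-driving car.

```python
# An illustrative sketch of layer ablation: "disconnect" a layer and measure the impact.
import copy
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def accuracy(model, inputs, labels):
    with torch.no_grad():
        preds = model(inputs).argmax(dim=1)
    return (preds == labels).float().mean().item()

inputs = torch.randn(200, 100)               # stand-in evaluation data
labels = torch.randint(0, 10, (200,))        # stand-in ground-truth labels

baseline = accuracy(net, inputs, labels)

# "Disconnect" the second linear layer by replacing it with an identity mapping.
ablated = copy.deepcopy(net)
ablated[2] = nn.Identity()
after_ablation = accuracy(ablated, inputs, labels)

print(f"baseline: {baseline:.2f}, with layer 2 disconnected: {after_ablation:.2f}")
# A large drop would suggest that layer mattered a lot to the overall behavior.
```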

Efforts to Understand What the Black Box is Doing

This brings us to the efforts underway right now to try and understand what the black box of artificial neural networks is doing.

The divide and conquer strategy to figure out a neural network is handy. Some say, though, that this might not lead to discovering the logical basis for the parts of the neural network. If you look at just one layer, they say, it might be that you cannot figure out its logical purpose. Maybe that layer isn’t really just one cohesive comprehensible thing. The layers as a form of grouping of the neural network might be like taking a car engine, arbitrarily slicing parts of it, and then trying to figure out from those random slices what the car engine is doing. It won’t get you anywhere toward a logic-based explanation, they say.

Indeed, some say that it’s the classic case of the sum being greater than the parts. Maybe the neural network on an isolated, layer-by-layer basis does not have any logic that we can discern. It might go beyond our human-derived sense of logic. The mathematical operation is not necessarily going to become a cleanly discernible logical explanation, and in fact the totality of the neural network is really the key. If you accept this argument, you then are back to the notion that it doesn’t make sense to try and get into the parts of the neural network to figure out what it does logically.

Another viewpoint is to try and slice the neural network in some other manner. Maybe you could find a virtual layer that encompasses multiple actual layers of the neural network. The virtual layer could collect together otherwise disparate and seemingly dispersed parts of the neural network and thus be able to logically explain things.
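One way to picture that virtual layer notion is to take the activations from several actual layers, concatenate them, and treat the combined result as a single unit to be probed. The sketch below is purely illustrative; the layer choices and sizes are assumptions.

```python
# An illustrative "virtual layer": concatenate the activations of several actual layers.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(100, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

def virtual_layer(x, indices=(1, 3)):
    """Collect activations after the modules at `indices` and concatenate them."""
    collected = []
    for i, module in enumerate(net):
        x = module(x)
        if i in indices:
            collected.append(x)
    return torch.cat(collected, dim=1)    # a "virtual layer" of size 128 + 128

inputs = torch.randn(16, 100)
combined = virtual_layer(inputs)           # shape: (16, 256)
probe = nn.Linear(combined.shape[1], 5)    # probe the combined representation instead
print(probe(combined).shape)               # (16, 5)
```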

There are some that believe size is the key. For them, the larger the neural network, the less likely it is that you will be able to find a logic to it. It has become an unruly overgrown tree, and there’s no point in trying to look at particular limbs and find logic, nor does it make sense to collect limbs together into something we might consider layers and hope that it is logically purposed.

Others claim that the larger the neural network, the greater the chance of identifying the logical elements. Their view is that a smaller neural network must cram together things that don’t logically belong together, while a larger neural network leads to specialization that lends itself to logical explanation.

Another viewpoint is that the human designer of the neural network, who decided how many layers to use, obviously had a logic in mind, and so in the end all you are going to do is come back to that logic for why the neural network operates as it does.

In whatever manner you consider the matter, those that say we aren’t going to find a means to logically explain a sizable neural network are kind of throwing in the towel before we’ve even tried.

Our viewpoint is that we do need to try a myriad of avenues to get toward logical explanations of neural networks. And, as an aside, we suggest that doing so will ultimately likely help us to explain the true “wetware” neural network of them all, the human brain.

In the meantime, let’s all keep trying to crack open the mystery of sizable neural networks, and especially when they are used in AI self-driving cars. Please keep in mind that if the general public knew that there were AI self-driving cars on the roadways that had these AI-based black boxes in them, and for which no one really knew logically why they worked, I would think it would diminish the overall fondness for AI self-driving cars.

I suppose you could say that the Uber driver or Lyft driver is in the same boat, namely that we cannot say for sure how their brains are operating and thus how they are able to drive a car for us when we do ridesharing, but I think we as humans accept that aspect about other humans, while for AI self-driving cars we have a different set of expectations, fair or not. We all want to know what the AI knows.

Copyright 2018 Dr. Lance Eliot

This content was originally posted on AI Trends.