Don’t Be Scared of AI (Artificial Intelligence)

For a while now, I’ve been thinking about writing something about AI (Artificial Intelligence). This isn’t because I’m very well versed in it (I’m not – the work I’ve done related to AI is some simple investigation of HMMs and GMMs – hidden Markov models and Gaussian mixture models, pattern-recognition techniques that very well might be how our neurons are organized). But I feel it is important to talk a bit about it because (a) AI is always an interesting topic in science fiction, and (b) one of the most amazing technical and business minds of our era, Elon Musk – the real-life Tony Stark – has been on record saying he is a bit scared of it. Even Stephen Hawking has expressed a level of fear about it.

So, anyway, I figured I would give you a bit of perspective as a guy who, for the last 20-plus years (man, I’m getting old), has designed and architected silicon chips. You probably don’t know me, but if you used a PC between 1995 and the first decade of the 2000s, and that PC had a hard drive or CD/DVD drive connected over “IDE”, you used hardware that I built. And even today, if you use Serial ATA (you do), PCI-Express (you do), and your PC generates interrupts (it does), then you are using hardware that I personally designed or architected, or that came out of a team I was a member of.

Now, I am nowhere near as smart as either Elon Musk or Stephen Hawking, but one thing I do have that they do not is a lot of experience in the computer industry. And I think that perspective is an important counterweight to their fears. Just as I shouldn’t second-guess Hawking on whether a black hole is going to kill us all, I don’t think Hawking should be commenting on an industry he doesn’t know.

With that, let’s begin.

As I said, I have been building silicon chips for over 20 years. In that time, I’ve seen some amazing advances. The first chip I built contained around 100,000 transistors, of which about a third were things I designed. Now I work on chips that contain tens of millions, where each transistor is about 1% of the size of the transistors I started with. I’ve used tools ranging from what were basically drafting programs that connected symbols (logic gates) to each other, all the way to highly complex languages that look like C++ and run to thousands of lines of code. I’ve designed at a very low level (drawing out things called K-maps for logic truth tables) and at very high levels (flow charts with lots of decision making, loopback, memory, etc.).
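
To give a flavor of what the low-level end looks like, here is a toy sketch in Python (purely illustrative – nobody designs chips in Python): it enumerates the truth table for a 3-input “majority” function, exactly the kind of table you would then minimize by hand on a K-map.

    from itertools import product

    # Toy example: a 3-input "majority" function - the output is 1 when at
    # least two of the three inputs are 1. This is the kind of truth table
    # a designer would then minimize by hand on a Karnaugh map (K-map).
    def majority(a, b, c):
        return int(a + b + c >= 2)

    minterms = []
    for a, b, c in product([0, 1], repeat=3):
        out = majority(a, b, c)
        print(f"a={a} b={b} c={c} -> out={out}")
        if out:
            minterms.append((a, b, c))

    # K-map simplification of those minterms gives the familiar
    # sum-of-products: out = a*b + b*c + a*c
    print("minterms (rows where out = 1):", minterms)

At the high end of the tooling, that same function is a single line in a C++-looking hardware language; the work is the same, the tools are just better.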

And in all that time, while the tools were different and more advanced, while the problems being solved were more complex, in the end… nothing has changed. It’s all the same stuff.

In fact, the stuff I was doing, while incredibly advanced at the time, was, for the most part, stuff that had already been done. Technology had simply advanced such that something that used to require a mainframe computer, and perhaps dozens of logic boards, could now be done by me, for a desktop PC, with a few lines of C-like code.

A Personal Example

One example of this was a thing called “message signaled interrupts” (MSIs). Now, bear with me here. I’ll try not to get too technical, but I need to get a bit technical to show you how most everything new is, in fact, old.

In the dark old days of PCs used with monochrome green-screen (or, if you were lucky, amber!) monitors, a piece of hardware would interrupt the task of the computer by changing the voltage on a wire (from a logic 1 to a logic 0 or vice versa). An example of this would be you pressing a key on the keyboard. The act of pulling on the wire caused the CPU to “jump” to a new set of instructions, which caused it to read another piece of hardware to find out why the wire was pulled. And then that would cause the CPU to read other hardware and service the interrupt (in the keyboard case, finding out you pressed the letter B, so it could then draw the B on the screen).
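
To make that concrete, here is a minimal sketch in Python that simulates the idea (the devices and registers are made up; real hardware does this with voltages, not dictionaries): the wire gets pulled, the CPU jumps to a handler, and the handler has to go poking around to find out who pulled it and why.

    # Simulated "status registers" - the CPU must poll these to find out
    # which device pulled the interrupt wire and why (made-up devices).
    device_status = {"keyboard": None, "disk": None}
    irq_line = False  # the single shared interrupt wire

    def press_key(key):
        """A device asserts the wire and leaves a reason in its register."""
        global irq_line
        device_status["keyboard"] = key
        irq_line = True  # "pull on the wire"

    def interrupt_handler():
        """The CPU 'jumps' here, then reads each device to find the cause."""
        global irq_line
        for name, status in device_status.items():
            if status is not None:
                print(f"servicing {name}: {status}")
                device_status[name] = None  # acknowledge / clear
        irq_line = False

    press_key("B")
    if irq_line:             # the wire changed state...
        interrupt_handler()  # ...so the CPU stops what it was doing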

MSIs, however, were different. Rather than yanking on a wire to say “service something”, the device doing the interrupt would, instead, send a message over a pair of wires (later a message over the same bus used for memory), and this message would say “I’m the one who interrupted you, so talk to me over here”. This saved time (what we call latency), and thus improved the overall performance. Removing latency is a major component in making your PC or phone appear “faster”, as most of the time these devices are sitting around waiting for you to do something.
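
Here is the same simulation reworked in the MSI style (again a toy Python sketch, not real bus traffic): the device posts a message that already says who it is, so the CPU skips the “who pulled the wire?” hunt – and that skipped hunt is the latency saving.

    from collections import deque

    # With message signaled interrupts, the device writes a small message
    # (effectively a memory write) that identifies itself up front, so the
    # CPU doesn't have to go asking every device what happened.
    msi_queue = deque()

    def device_sends_msi(device, data):
        """Device posts "I'm the one who interrupted you, talk to me here"."""
        msi_queue.append({"source": device, "data": data})

    def cpu_handles_msi():
        while msi_queue:
            msg = msi_queue.popleft()
            print(f"servicing {msg['source']} directly: {msg['data']}")

    device_sends_msi("keyboard", "B")
    cpu_handles_msi()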

OK, so MSIs were great! It took quite a bit of logic in the device to enable this. The device went from a simple decision (“pull wire!”) to a more complex decision (“send this message to that place with this data”). This was a major advance! The controller I helped design for this was itself about 100,000 transistors and was to be part of a component that was over 1 million transistors. But it was much easier to design than the 30,000-transistor design I had done just a few years earlier, because the tools were better. This controller, in fact, wasn’t even the only thing I was working on at the time. It was a low-priority task.

Still, this was a major advance, right?

Well, yes, but, well, it wasn’t a new concept. It was quite an old concept, actually. Mainframes had been doing things like this for at least two decades, maybe longer.

What was new wasn’t the idea; what was new was that the idea could now be brought into a machine you could put on your desk or on your lap. What was new was that it could be done basically for “free”, because Moore’s Law – the observation that the number of transistors you can put on a chip doubles roughly every 18 months to two years – allowed me to do it.
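
As a back-of-the-envelope sketch (Python; the starting count and the two-year doubling period are just the rough figures above, not precise data), that compounding adds up fast:

    # Rough Moore's Law compounding: the transistor budget roughly doubles
    # every 18-24 months. Starting from ~100,000 transistors (the kind of
    # chip described above), see where a 2-year doubling period gets you.
    start = 100_000
    doubling_period_years = 2

    for years in (4, 8, 12, 16, 20):
        budget = start * 2 ** (years / doubling_period_years)
        print(f"after {years:2d} years: ~{budget:,.0f} transistors")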

The point of this example is that, for the most part, this is all AI is. We are taking pieces of software and/or hardware that required big, heavy, expensive machines to run, and putting them on your desk, or increasingly, in the palm of your hand or hidden in your car.

Real Life Examples You Might Not Know Have AI

Take something like Siri or Google’s speech recognition. It seems super smart, because you talk to your device, and it answers you with its voice. But… not really. What is happening is no different from what happened 10 years earlier. You are putting information into the computer, and it is spitting out a result. It’s just that 10 years ago you had to type the question instead of saying it, and you got back a list of results instead of having the top hit spoken to you.
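
One way to see it: strip out the voice and the pipeline is the same lookup it always was. A toy sketch in Python (the functions and the fake index are stand-ins I made up, not anything Apple or Google actually ships):

    # Toy stand-ins for a search pipeline. Only the first and last steps
    # changed between "type a query, read a list" and "say a query, hear
    # the top answer" - the middle is the same lookup it always was.
    FAKE_INDEX = {"weather tomorrow": ["Sunny, high of 72"]}

    def transcribe(audio):   # newer front end (speech-to-text)
        return audio.strip().lower()

    def search(query):       # the same old lookup in the middle
        return FAKE_INDEX.get(query, ["no results"])

    def speak(text):         # newer back end (text-to-speech)
        print(f"(spoken) {text}")

    # 10 years ago: results = search(typed_query); print(results)
    # Today:
    results = search(transcribe("Weather tomorrow"))
    speak(results[0])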

While it’s a bit more complicated than that, it’s not really that different. It just looks “smart” because it is so transparent and easy (and sometimes humans have intervened so it looks like Siri understands and accepts that “I love you”).

To take another example, we are adding the ability for cars to parallel park themselves. That seems super “intelligent”. But again, not really. We’ve long had vehicles that can move themselves, sense objects, and take action. But those vehicles were in a lab or a factory, attached to huge mainframes (probably with cables, or following easy-to-identify markings on the floor), and had big, bulky cameras to do object recognition. All that has changed is that the cameras have gotten incredibly small and high resolution (which is not AI), the computers have gotten smaller and lower power (which is not AI), and the software to do the recognition can now fit in memory because memory has gotten denser and smaller (again, not AI). This only seems like an amazing advance in “artificial” intelligence because now the general public can see it.
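
Underneath, parallel parking is the same sense–decide–act loop those lab and factory vehicles ran decades ago, just in a smaller package. A wildly simplified Python sketch, with made-up sensor numbers:

    # A wildly simplified sense -> decide -> act loop of the kind a
    # self-parking system runs many times a second. The sensor readings
    # here are invented; a real system fuses cameras, ultrasonics, etc.
    def decide(rear_gap_cm, curb_angle_deg):
        if rear_gap_cm < 30:
            return "stop"
        if curb_angle_deg > 5:
            return "steer toward curb"
        return "reverse slowly"

    for rear_gap_cm, curb_angle_deg in [(120, 12), (80, 6), (45, 2), (25, 1)]:
        action = decide(rear_gap_cm, curb_angle_deg)
        print(f"gap={rear_gap_cm}cm angle={curb_angle_deg}deg -> {action}")

Every line of that is “if this, then that” – which, as we’ll see, is the whole point.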

Even Those Scared of AI Use AI

For all of Elon Musk’s worry about AI, I think even he uses it a lot. Take the recent satellite launch, where he, by the closest of margins, was able to land the first stage on a floating platform in the sea.

While I know literally nothing about what they are doing, I can speculate that there is a lot of AI going on. The computers on board the first stage are making decisions about when to extend the flaps, how hard to fire the engine to slow the descent, and how to steer to the pad – this isn’t just a guy with a joystick, or even if it is, it is a “fly-by-wire” system where the joystick controls are creating all kinds of complex movements in those parts to compensate for things like wind. Additionally, the rocket is landing on a platform that is autonomous – it maintains its position by throttling engines on board based upon GPS coordinates and sea conditions.
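
For a flavor of that kind of fly-by-wire decision making, here is a minimal sketch in Python – a toy proportional controller with invented numbers and gains, nothing to do with SpaceX’s actual software:

    # Toy proportional controller: pick an engine throttle that pushes the
    # measured descent rate toward a target. Real guidance software is
    # vastly more involved; the gains and numbers here are invented.
    target_rate = 5.0    # m/s, the descent rate we want near touchdown
    base_throttle = 0.6  # rough throttle that would hold the target rate
    gain = 0.01          # how aggressively we correct the error

    for measured_rate in (40.0, 25.0, 12.0, 7.0, 5.5):
        error = measured_rate - target_rate  # too fast -> positive error
        throttle = min(1.0, max(0.0, base_throttle + gain * error))
        print(f"descending at {measured_rate:4.1f} m/s -> throttle {throttle:.2f}")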

AI in the Future

In the future, we are going to see more AI in our lives. And we won’t even know it. You can already see it with drone helicopters, for example. When I first played with them, around 2006, they were little, fragile things that you couldn’t operate outside due to the wind, and you were operating two joysticks (thrust and direction) with the tiniest margins for failure. Now you have devices that, yes, you still operate with joysticks (or the equivalent on a phone app), but they can compensate for wind, don’t veer around wildly and become unstable, and are therefore used to take some incredible shots of local landscapes. That improvement is AI – we didn’t all of a sudden become amazing drone pilots.

You can see it in smart thermostats like the Nest, which can get information from your phone about where you are and make sure the house turns down the heat when you aren’t around, and can figure out what you really want the temperature to be when you are around based upon your history. That also is AI.
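
That “smart” behavior can be sketched as a couple of plain rules plus a history lookup. A toy Python sketch (the temperatures and the averaging rule are invented, not how the Nest actually works):

    # Toy "smart thermostat": one geofence rule plus a lookup into the
    # history of what you set at this hour in the past. The numbers and
    # the averaging rule are invented for illustration.
    history_by_hour = {7: [68, 69, 68], 18: [71, 72, 72], 23: [65, 66, 65]}

    def target_temp(anyone_home, hour):
        if not anyone_home:
            return 60  # setback temperature while the house is empty
        past = history_by_hour.get(hour, [68])
        return round(sum(past) / len(past))  # "what you usually want"

    print(target_temp(anyone_home=False, hour=18))  # 60
    print(target_temp(anyone_home=True, hour=18))   # ~72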

My phone, right now, seems to know that it is a weekday, and that I’m at home, and that on weekdays I travel to another city to work, and if I leave right now, it will take me 15 minutes to get there. That, again, is AI. There isn’t some person at a desk somewhere that punched that in – that’s all “smart” algorithms.

But Is That Really AI?

The examples I gave above are considered a form of artificial intelligence by most people. It is spooky in some sense that your house knows you are coming home and changes the temperature, or that your phone knows you drive to some city every Wednesday but not on Saturday, or that you searched for a camera lens and now camera lenses show up as ads in your Facebook feed.

To be fair to those most scared of AI, that isn’t what is considered “classic” AI. It isn’t using some specific piece of hardware or software that looks like a brain neuron. And since it isn’t a neuron, it can’t form a new “connection”, and thus it can’t become “sentient”. True AI is something different.

What I’m saying is: no, no it isn’t. We’ve had examples of how to do “true” AI for decades now. We could absolutely have built something like the WOPR in the movie WarGames (and, in fact, we have). We have built machines that in some ways surprised us with how quickly they interpreted our voice to know we said one specific word instead of another. But in the end, everything we have built we can decode. Everything Watson did on Jeopardy! could be deconstructed so we could find out exactly “how” it did it. And it didn’t do it in some new way – it did it in a way we, as humans, told it to do it. Watson didn’t become “self-aware”.

In Conclusion

Guys like me will keep building the same kind of hardware that was invented in the 60s and 70s. We will also build hardware out of things that were software from the 80s and 90s to today, once the transistors become small enough to allow it. Algorithms will run on that hardware and make decisions, which will all come down to “if this, then that”, just like they did 15 years ago, and 15 years before that, and 30 years before that. We can just do many more “if this, then that” decisions per second.

Nothing is new under the sun. SkyNet is not going to blow up the planet. You aren’t going to get dumped by your OS girlfriend with the sexy voice who has evolved past human existence. HAL will always open the pod bay doors.