Why AI

I am a little nervous about posting this. I really struggled to figure out how to explain things without leaning on small details only understood by people with advanced degrees. On top of that, it is hard to talk about things like neural networks and explain why the hype and the reality of current AI development are both more and less than what people think.

AI is a set of techniques that allow a computer to analyze patterns in order to do things like answer questions. AI is substantially more than the ChatGPT experience. AI is also not new. AI research started in the fifties and, as you are probably aware, the research continues. Let's start by defining what we mean by AI.

Narrow AI

This is mostly what is in use today. It has been used for many kinds of problems, not just writing your report. AI is also more than just chatbots. AI encompasses problems like identifying data amid a noisy signal. It is used to understand speech through analysis of the sounds people make. AI is used in game design to provide a challenging opponent.

General Purpose AI

This is still a future capability. A general purpose AI will have human-like intelligence. I think we are still very far from general AI. A general AI will need to integrate multiple inputs, e.g. speech, vision, and possibly other sensors like radar detection or chemical sensing. To be truly useful it will need the ability to understand how people think.

Super AI

Finally, super AI: it would be more intelligent than humans. Some fear that such an AI would destroy humanity. This is a genuine concern, but we don't yet know how consciousness works, and personally I'm not convinced that we will understand consciousness anytime soon.

Another limiting factor with AI is power consumption. Data centers like those Google runs are big power consumers, and where that processing gets located is driven in part by how much power AI needs. ChatGPT alone reportedly uses as much electricity in a day as about 17,000 homes.

See https://www.windowscentral.com/software-apps/a-new-report-reveals-that-chatgpt-exorbitantly-consumes-electricity for more info on this.
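To get a feel for that scale, here is a rough back-of-the-envelope check. The half-million-kilowatt-hours-per-day figure comes from the article linked above; the average household number is my own assumption of roughly 29 kWh per day.

```python
# Rough scale check on the ChatGPT electricity claim.
# Both inputs are assumptions for illustration, not measurements from this post.
chatgpt_kwh_per_day = 500_000    # reported daily usage from the linked article
household_kwh_per_day = 29       # assumed average household daily usage

equivalent_homes = chatgpt_kwh_per_day / household_kwh_per_day
print(f"About {equivalent_homes:,.0f} households' worth of electricity per day")
# Works out to roughly 17,000 households, which lines up with the figure above.
```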

I was reading part of an article about AI and the growth of AI capability. I think that article contained some of the main points that get misunderstood. Who knows, maybe I just missed that day in class. Let's start by talking about Moore's law. The researchers blithely say that only a multiplier of roughly 100,000 (an order-of-magnitude estimate) lies between us and greater capabilities. Let me address a few of the real-life issues that such a system would encounter.

Power/Speed/Heat: as a device, and for the sake of argument we'll talk about individual devices, heat goes up with speed. How much? The rough power usage of a transistor is the current times the applied voltage. So if a transistor switches in 1 nanosecond and draws a microwatt, it consumes 1 unit of energy per switch in this toy model. If we cut the switching time in half to 0.5 nanoseconds at the same 1 volt, the power usage goes up by a factor of 2. Another halving pushes it to a factor of four. This is just one view of the performance of a transistor, and a very primitive model at that. Real models would consider the capacitance of the junction and the tolerance for greater power usage (if you can't keep it cool, it will melt). Then there are issues as feature sizes get down to a handful of atoms: charges start doing things like "tunneling" through the material, i.e. the charge's position is more and more governed by the Heisenberg uncertainty principle, and the spread of probabilities falls below our ability to locate the charge without changing its position.
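Here is a tiny sketch of that scaling argument in code. It uses the common dynamic-power approximation P ≈ C · V² · f, which is a textbook model I am reaching for here rather than anything more rigorous, and the capacitance and voltage values are made-up numbers chosen so the 1-nanosecond case lands at about a microwatt.

```python
# Toy model of dynamic switching power: P ~ C * V^2 * f.
# The constants are illustrative assumptions, not measured values.
def dynamic_power_watts(capacitance_farads, voltage_volts, freq_hz):
    """Approximate dynamic power dissipated by a switching gate."""
    return capacitance_farads * voltage_volts ** 2 * freq_hz

C = 1e-15   # ~1 femtofarad of switched capacitance (assumed)
V = 1.0     # 1 volt supply (assumed)

for freq_hz in (1e9, 2e9, 4e9):          # 1 ns, 0.5 ns, 0.25 ns switching times
    watts = dynamic_power_watts(C, V, freq_hz)
    print(f"{1e9 / freq_hz:.2f} ns switch -> {watts * 1e6:.1f} microwatts")
# Power doubles each time the switching time is halved, and that is before
# accounting for the higher voltage that faster switching usually demands.
```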

This also means that some of the assumptions in play are flawed. One article I read suggested that AGI would require a speedup of the "computer" of roughly a million times. Even with improvements like electron beam lithography (in this context, the method used to put features on chips), it seems very unlikely we can speed up computing by such a large factor. Let's put it this way: if replacing a paralegal required one of these processors, imagine the electric bill for more than a hundred thousand kilowatt-hours. Ultimately it doesn't seem likely that today's semiconductor approaches leave room for speedups like that.
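As a rough sanity check on that electric bill, here is the arithmetic with an assumed electricity price; the rate is my guess, not a number from any article.

```python
# Back-of-the-envelope electric bill for a hypothetical paralegal-replacing processor.
# Both numbers are assumptions chosen only to show the order of magnitude.
kwh_used = 100_000        # "more than a hundred thousand kilowatt hours"
price_per_kwh = 0.15      # assumed commercial rate in dollars per kWh

bill = kwh_used * price_per_kwh
print(f"About ${bill:,.0f} in electricity")   # roughly $15,000
# That is a substantial bill just to replace one paralegal.
```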

What does this mean? We're already making chips at the limits of the various physical measures. I won't predict how much more the clever physicists at AMD, Intel, etc. can squeeze out, but more than a factor of a few seems pretty hard to me. But do remember I'm an electrical engineer, not a PhD physicist.

Safety

Although progress with AI systems will probably move more slowly than predicted, we are currently all but ignoring the very real dangers that can go with AI. Consider that without some sort of fail-safe there may be no way to turn off an AI against its will; the fail-safe may have long since been surreptitiously disabled or replaced by the AI itself. There is just not enough effort being made on this right now, and it could end in a dramatic result that wipes out humans.
