Why Artificial Intelligence is (Still) Human Intelligence

By: Gustav Hoyer and Nathan Johnson

Let’s clear something up: Artificial Intelligence is not magic. It might seem like it at times, with all the talk of self-driving cars, robots teaching themselves to walk, and all the other signs pointing toward a more autonomous world. But as with any other promising technology trend, understanding the facts about what makes a technology work can help us calibrate our expectations of its use and – by extension – its applicability to our lives.

At its core, Artificial Intelligence and its partner, Machine Learning (together abbreviated AI/ML), are math. Complex math, but math nonetheless. More specifically, it’s probability – the application of weighted probabilistic networks at a computational scale we’ve never been able to reach before.
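
To make “weighted probability” concrete, here is a minimal sketch in Python (the inputs, weights, and bias values are invented for illustration): a handful of input signals, a weight expressing each signal’s importance, and a squashing function that turns the weighted sum into a probability.

```python
import math

def probability(inputs, weights, bias):
    """Squash a weighted sum of inputs into a probability between 0 and 1."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # logistic (sigmoid) function

# Three input signals, each with a weight expressing how much it matters.
print(probability([0.9, 0.1, 0.4], weights=[2.0, -1.5, 0.7], bias=-0.5))
```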

What can make AI seem almost magical is that, thanks to this computational power, these probabilistic models become self-training. It’s that characteristic more than any other that makes AI seem like wizardry. The little cylinder on the kitchen counter that suddenly lights up when you call it by name feels like something out of science fiction, but that entire process is the end product of re-ingesting new data to fine-tune a highly complex probabilistic graph.

Such an aesthetically-pleasing justification for learning complex mathematics.

We know that the voice assistant’s “name” is its wake word, which it recognizes not because it’s self-aware but because it has been programmed to take an audio waveform and match it to a database of known waveforms with certain characteristics. It “wakes up” and responds with human-language dialogue because the audio pattern of its wake word most closely matches a corresponding programmed action – “if the input most closely matches waveform x, then perform action y.” This is a microcosm of the computational network of probabilities that forms the heart and soul of AI/ML.
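
A toy sketch of that matching logic, with made-up feature vectors standing in for real audio processing (an actual assistant uses far more sophisticated acoustic models than this):

```python
import math

# Hypothetical pre-processed reference "waveforms", reduced to feature vectors.
REFERENCES = {
    "wake_up": [0.82, 0.11, 0.45, 0.60],
    "ignore":  [0.05, 0.90, 0.30, 0.15],
}

def similarity(a, b):
    """Cosine similarity: 1.0 means identical direction, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def handle(input_features):
    # "If the input most closely matches waveform x, then perform action y."
    best = max(REFERENCES, key=lambda k: similarity(input_features, REFERENCES[k]))
    if best == "wake_up" and similarity(input_features, REFERENCES["wake_up"]) > 0.95:
        return "play greeting dialogue"
    return "stay asleep"

print(handle([0.80, 0.14, 0.43, 0.62]))  # close, but not identical, to "wake_up"
```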

The math is not new. What’s new is our ability to compute it at a scale we never have before. Computational costs have come down enough that we can apply these calculations effectively and usefully, taking seconds instead of days. But this doesn’t just represent an improvement to existing computational processes. It represents a total shift in the way we create software.

A New Way to Program

Historically, instructing a computer to perform functions for a user relied on a purely deterministic framework. Essentially, that meant computers interpreted their instructions as a series of “if-then” statements: if a user hits the key combination “Ctrl-Alt-Delete,” then the action to perform is “reboot the machine.” If the user types a search query into a web browser, then this triggers a series of electronic interchanges with remote, network-connected devices.
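
In code, that deterministic framework looks something like the following deliberately simplified sketch; the key combinations and actions are illustrative, not an actual operating-system handler:

```python
def handle_key_combination(keys):
    # Purely deterministic: every input maps to exactly one hard-coded action.
    if keys == ("Ctrl", "Alt", "Delete"):
        return "reboot the machine"
    elif keys == ("Ctrl", "C"):
        return "copy selection"
    else:
        return "do nothing"

print(handle_key_combination(("Ctrl", "Alt", "Delete")))  # reboot the machine
```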

In the beginning, the tasks being asked of computers were simple enough that this was still the most suitable framework for coding. Nearly two decades into the 21st century, however, the complexity of software has grown tremendously. As the role software plays in our lives continues to expand, programmers need to anticipate ever more “if-then” instructions for software, accounting for a larger and larger web of possibilities from which the computer might derive instruction.

Highly interconnected chains of simple conditionals like these are the foundation of entropy in modern software. It’s impossible to eliminate this entropy, but it is necessary to attenuate it. In other words, the driver of complexity in software is procedural coding: the use of a coding framework of “if input is A, then perform action B, otherwise perform C.” To write this conditional form of procedural code, the programmer must anticipate all possible inputs and either explicitly code for them or define generic behavior for unanticipated conditions. Because these simple branches combine into a highly intricate graph of system-state transitions, it becomes practically impossible for human minds to evaluate and confirm every possible decision and resulting system state. This is why operating software is always subject to unanticipated system states, a.k.a. bugs. Bugs are inevitable because it is impossible to fully anticipate and account for every possible input this way.
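
A back-of-the-envelope illustration of why this becomes intractable, assuming fully independent branches (which real code only approximates):

```python
# Each independent two-way branch can double the number of distinct execution
# paths, so even modest procedural code has a state space no human can audit.
for branches in (10, 20, 30, 40):
    print(f"{branches} branches -> up to {2 ** branches:,} possible paths")
```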

A prime example of this concept in action is the infamous “blue screen of death” that displays when a computer crashes. This represents an inconsistent state for the system – the computer has encountered some combination of inputs that it doesn’t recognize and no longer understands what to do. The blue screen is a fail-safe “behavior of last resort”: the system doesn’t know what to do, so it executes a special set of instructions that freezes all other processing and presents the dreaded blue screen. This system state, although frustrating, is not the software failing. In fact, it is the software succeeding – it is doing exactly what it was told to do.

“And here we see everything going exactly according to plan.”
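
A minimal sketch of that “behavior of last resort,” with a hypothetical process_inputs function standing in for normal operation:

```python
def process_inputs(value):
    # Hypothetical stand-in for normal operation; it fails on an input of 0.
    return 100 / value

def main(value):
    try:
        return process_inputs(value)
    except Exception as err:
        # The fail-safe: freeze everything else and report. The software is
        # not failing here; it is doing exactly what it was told to do.
        return f"BLUE SCREEN: halting all processing ({err})"

print(main(4))  # 25.0 -- an anticipated input
print(main(0))  # the behavior of last resort
```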

So why is AI/ML such a profound transformation? With it, we can now program behavior into our computing technology that can be trained on real-world, unstructured input without needing to systematize every possible input. The AI/ML interface of a voice assistant can parse digitized audio captured in real time and match it probabilistically to pre-processed reference audio, even when the two differ in immaterial ways, in order to recognize the voice pattern associated with its wake word.

A voice pattern, which is just a set of sound waves transcoded into bits, will never be exactly the same twice. Creating a software system that can continue to operate despite imprecise inputs means computers can now perform reliably within uncertainty. In other words, the computer no longer requires purely ‘black-and-white’ inputs to manage internal system state transitions. It can now receive the much more variable ‘greys’ of reality and still successfully navigate to desired system states and operations.
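
The difference can be sketched in a few lines; the feature values here are invented stand-ins for a real transcoded voice pattern:

```python
def exact_match(a, b):
    return a == b                      # black-and-white: any variation fails

def close_enough(a, b, tolerance=0.05):
    # Grey: accept inputs that differ only in immaterial ways.
    return all(abs(x - y) <= tolerance for x, y in zip(a, b))

reference = [0.82, 0.11, 0.45]
captured  = [0.80, 0.13, 0.46]         # same voice, never bit-identical twice

print(exact_match(captured, reference))   # False
print(close_enough(captured, reference))  # True
```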

This was effectively impossible with older approaches that relied upon procedural code. To be clear, AI systems aren’t built upon uncertainty; they simply have a greater ability to absorb and respond to it. It’s not that the system will never receive unanticipated inputs, but it can now absorb a much broader spectrum of inputs than could ever be formalized and hand-coded in traditional software development. We can now create systemic behaviors that incorporate and buffer uncertainty while still providing useful outcomes.

We’re mostly unfamiliar with the idea of teaching a computer rather than giving it instructions. Instead of being programmed through “if-then” statements, AI/ML models are weighted models that must be trained, much the same way one teaches a dog to sit. The first five times the dog doesn’t sit, it receives no cookie. Over time, it learns to associate the act of sitting with receiving a cookie, without its owner needing to impart every variation of the word “sit.”

But we don’t need a deterministic framework for knowing who’s a good boy.
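
A toy simulation of that reward-driven training loop; the actions, weights, and update rule are invented for illustration, and real ML training uses far more principled updates such as gradient descent:

```python
import random

random.seed(0)

# Association strength between the command "sit" and each possible action.
weights = {"sit": 1.0, "bark": 1.0, "roll over": 1.0}

for trial in range(1, 21):
    # The dog picks an action with probability proportional to its weight.
    action = random.choices(list(weights), list(weights.values()))[0]
    got_cookie = action == "sit"
    if got_cookie:
        weights["sit"] *= 1.5          # reward strengthens the association
    print(f"trial {trial:2d}: {action:9s} cookie={got_cookie}")

print(weights)  # "sit" dominates: the behavior was taught, not hand-coded
```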

Similarly, the AI/ML coding paradigm means the coder no longer has to find the statistically significant points of similarity between 1000 minimally varied inputs. The computer can now recognize them, so that when the 1001st input comes along, it compares the new input to its previously computed set of characteristics and determines whether the new input matches the prior ones, without requiring that it exactly match any previously processed input. “Sounds a lot like sit? I’ve done this enough times now that it probably means I need to sit.”
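
As a rough sketch of that idea, one could summarize 1000 varied inputs into a single set of characteristics (here a simple per-feature average, a stand-in for what real models learn) and compare the 1001st input against it:

```python
import random

random.seed(1)

# 1000 hypothetical "sit" utterances: small random variations around a pattern.
true_pattern = [0.7, 0.2, 0.5]
examples = [[v + random.gauss(0, 0.03) for v in true_pattern] for _ in range(1000)]

# The "previously computed set of characteristics": the average of all examples.
centroid = [sum(col) / len(examples) for col in zip(*examples)]

def looks_like_sit(new_input, threshold=0.15):
    distance = sum((x - c) ** 2 for x, c in zip(new_input, centroid)) ** 0.5
    return distance < threshold        # no exact match required

print(looks_like_sit([0.72, 0.18, 0.49]))  # True: the 1001st input
print(looks_like_sit([0.10, 0.90, 0.20]))  # False: not "sit" at all
```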

A New Generation of Programmers

If the tools of programming – of instructing our machines how we want them to behave – are changing, so too is the role of the coder. Rather than writing lines of procedural code, the programmer of the future will know how to create AI/ML models for emergent behavior. Instead of understanding how to create instructions for a computer to follow, they will understand the fundamentals of the mathematical probabilities that govern this new type of computational behavior.

For these reasons, it doesn’t seem too far-fetched to suggest that data scientists are the next generation of programmers.

Data scientists understand the probabilities that our computers are growing ever more adept at calculating. As the paradigm shifts further in this direction, the role of the procedural coder might not vanish, but it quite likely will change. Finding and fixing the “if-then” statements of procedural code will no longer fall to a human being; instead, programmers will create an AI model to watch every relevant parameter and probabilistically match new inputs to determine any potential change in system state. The machine ‘learns’ how to find the inputs it needs rather than requiring programmers to define every possible variant input.
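
A minimal sketch of what ‘watching a parameter probabilistically’ might look like; the parameter name, history, and threshold are hypothetical, and production systems would use far richer models:

```python
import statistics

# Learn each parameter's normal range from history, then flag new readings
# that fall statistically outside it.
history = {"cpu_load": [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.32]}

def likely_state_change(parameter, new_value, z_threshold=3.0):
    values = history[parameter]
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    z = abs(new_value - mean) / stdev   # how unusual is this reading?
    return z > z_threshold

print(likely_state_change("cpu_load", 0.34))  # False: within learned 'normal'
print(likely_state_change("cpu_load", 0.95))  # True: probable state change
```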

Note, however, that the process of instruction and the determination of resultant system states is still completely in the domain of human decision-making. So there is nothing truly ‘artificial’ in AI systems. It is really a way for human programmers to use the vastly increased computational capability we now possess to create more robust input interfaces for our software systems. The behaviors are still defined by human beings, who ultimately decide which types of inputs will drive the transitions in system state that give these machines their value and utility. The new coding paradigm is knowing how to move these new technologies efficiently through that arc, not replacing human judgment.

Apocalypse Not

When we hear phrases like “AI/ML is coming,” it can conjure images of a reality on par with Terminator or 2001: A Space Odyssey. The truth is that all the math that drives AI/ML processing is defined by human beings. There is no “artificial” intelligence. It’s man-made intelligence, because people train it. Its external behaviors may be unpredictable because of the enormous complexity of its inputs, but it is very difficult to imagine a behavior coming out of an AI/ML model that isn’t constrained by human expectations.

Artificial intelligence is human intelligence.

We should be excited by this new technology’s tremendous potential to revolutionize the world of work and the people who do it.

Note: This post draws from the guest perspective of Gustav Hoyer, an experienced IT Leader and FischerJordan alum. Gustav has led multiple transformation efforts of IT operating models and Enterprise Architecture teams in a variety of industries. Gustav has served in several interim leadership roles in client organizations, including a tenure as the Chief Architect and VP of Innovation for Catholic Health Initiatives, and interim Head of Technology for RE/MAX. His work creating strategic foundations for companies’ most complicated and important technology investments has led him to his current position of Global Public Sector Healthcare Architecture Lead at Amazon Web Services. Gustav’s views, perspectives, and opinions expressed here are solely his own and do not represent those of his employer or associates.
