What Is AI? Decoding the Four Approaches to Artificial Intelligence

If you ask ten different researchers to answer the question “What is AI?”, you might get ten different answers. Artificial Intelligence (AI) is routinely described as fascinating, powerful, and revolutionary, yet the definitions on offer frequently lack precision.
Is AI a robot that acts like a human? Is it a program that solves logic puzzles? Or is it simply a machine that maximizes the chances of achieving a goal?
Historically, the confusion stems from the lack of a single agreed definition. Instead, AI is best understood by looking at four distinct approaches that researchers have pursued over the decades. These approaches differ along two key dimensions: thought processes vs. behavior, and human fidelity vs. rationality.
In this guide, we will explore the “Standard Model” of AI, the difference between AI and machine learning, and why the future of AI depends not just on intelligence, but on value alignment.
The Two Dimensions of AI
To understand the field, we must look at how researchers define success. There are two main divides in the philosophy of AI:
- Human vs. Rational: Some define AI as systems that think or act like humans (with all our imperfections and biological nuance). Others define it as systems that think or act rationally (adhering to an ideal mathematical standard of “doing the right thing”).
- Internal vs. External: Some focus on thought processes (how the machine reasons internally). Others focus strictly on behavior (what the machine actually does in the real world).
From these dimensions, four specific definitions of AI emerge.
1. Acting Humanly: The Turing Test Approach
The first and perhaps most famous approach focuses on acting like a human. This creates an external definition of intelligence based on fidelity to human performance.
In 1950, Alan Turing proposed a solution to the vague philosophical question, “Can a machine think?” He replaced it with a practical experiment: The Turing Test.
In this test, a human interrogator types questions to two unseen respondents—one human, one computer. If the interrogator cannot reliably tell which responses are coming from the machine, the computer passes the test.
The Capabilities Required for Intelligence
Passing the Turing Test is no small feat. It requires the computer to master several complex disciplines that remain central to AI research today:
- Natural Language Processing (NLP): To communicate successfully in human language.
- Knowledge Representation: To store information about the world.
- Automated Reasoning: To answer questions and draw new conclusions based on stored knowledge.
- Machine Learning: To adapt to new circumstances and detect patterns.
Note: It is important here to distinguish between AI and Machine Learning. Machine learning is a subfield of AI focused on improving performance based on experience. Not all AI uses machine learning, but it is one of the capabilities a machine would need in order to pass the Turing Test.
The Total Turing Test
While the original test focused on text, the “Total Turing Test” demands interaction with the real world. To pass this, a machine requires:
- Computer Vision: To perceive objects and the environment.
- Robotics: To manipulate objects and move physically.
Interestingly, modern AI researchers rarely focus on strictly passing the Turing Test. Just as aeronautical engineers succeeded by studying aerodynamics rather than imitating pigeons, AI research has generally progressed by studying the underlying principles of intelligence rather than by perfectly simulating a human.
2. Thinking Humanly: The Cognitive Modeling Approach
If the first approach is about acting like a human, the second is about thinking like one. This is the Cognitive Modeling approach.
To claim a program thinks like a human, we must first understand how humans think. This requires empirical science—getting inside the human mind through:
- Introspection: Analyzing our own thoughts.
- Psychological Experiments: Observing people in action.
- Brain Imaging: Observing neural activity.
This approach bridges AI and Cognitive Science. The goal isn’t just to solve a problem, but to solve it using the same reasoning steps and timing as a human.
For example, the early researchers Newell and Simon developed the “General Problem Solver” (GPS). They weren’t satisfied if GPS merely found the right answer; they wanted its trace of reasoning to match the traces of human subjects solving the same problems. While this field is fascinating, modern AI generally distinguishes itself from cognitive science: AI builds and analyzes computational models, while cognitive science tests its hypotheses through experimental investigation of actual humans and animals.
3. Thinking Rationally: The “Laws of Thought” Approach
The third approach moves away from humanity and toward abstract perfection. This is the Logicist tradition, which defines AI based on “right thinking” or irrefutable reasoning.
This concept dates back to Aristotle and his syllogisms—structures that always yield correct conclusions given correct premises (e.g., “Socrates is a man; all men are mortal; therefore, Socrates is mortal”).
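To see how mechanical this style of reasoning can be, here is a minimal Python sketch that derives the syllogism’s conclusion by simple forward chaining. The encoding of facts and rules is purely illustrative and does not come from any particular logic library:

```python
# Facts are (predicate, subject) pairs; each rule says "whatever is X is also Y".
facts = {("man", "Socrates")}          # Socrates is a man
rules = [("man", "mortal")]            # all men are mortal

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "Socrates") in forward_chain(facts, rules))  # True
```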
By 1965, computer programs existed that could, in principle, solve any solvable problem described in logical notation. However, this approach faces two massive hurdles:
- Translating the World: It is difficult to take informal, messy knowledge (like politics or warfare) and state it in formal logical terms.
- Uncertainty: Formal logic demands certainty, but knowledge about the real world is rarely certain.
To bridge this gap, AI turned to the theory of probability. This allows rigorous reasoning even when information is uncertain, helping machines move from raw perception to predictions about the future. However, “thinking” rationally is not enough; the machine must eventually act.
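As a rough illustration of what rigorous reasoning with uncertain information looks like, here is a small Python sketch of Bayes’ rule applied to a hypothetical obstacle sensor. All of the probabilities are invented for the example:

```python
# Updating a belief about an obstacle after an uncertain sensor alert.
# Every number below is made up purely for illustration.
prior_obstacle = 0.01           # P(obstacle)
p_alert_if_obstacle = 0.95      # P(alert | obstacle)
p_alert_if_clear = 0.10         # P(alert | no obstacle)

# P(alert), by the law of total probability
p_alert = (p_alert_if_obstacle * prior_obstacle
           + p_alert_if_clear * (1 - prior_obstacle))

# P(obstacle | alert), by Bayes' rule
posterior = p_alert_if_obstacle * prior_obstacle / p_alert
print(f"P(obstacle | alert) = {posterior:.3f}")  # about 0.088
```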
4. Acting Rationally: The Rational Agent Approach
This brings us to the fourth, and currently the most dominant, definition of AI: The Rational Agent.
An “agent” is simply something that acts. But a computer agent is expected to operate autonomously, perceive its environment, adapt to change, and pursue goals.
A Rational Agent is defined as one that acts to achieve the best outcome (or, when there is uncertainty, the best expected outcome).
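In code, the definition is surprisingly compact: compute the expected utility of each available action and pick the best one. In the sketch below, the actions, probabilities, and utilities are hypothetical stand-ins:

```python
# Each action maps to a list of (probability, utility) outcomes.
# Actions and numbers are hypothetical.
actions = {
    "take_highway": [(0.8, 10.0), (0.2, -5.0)],
    "take_backroad": [(1.0, 4.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # take_highway 7.0
```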
Why This is the “Standard Model”
The rational agent approach encompasses the other approaches. To act rationally, an agent often needs to reason logically (“laws of thought”) and possess the linguistic and learning capabilities of the Turing Test. However, it is more general. A reflex action—like recoiling from a hot stove—is a rational act that doesn’t require complex thought.
This approach is attractive for two reasons:
- Generality: It covers logical inference but also reflex actions and probability.
- Scientific Rigor: Rationality is mathematically well-defined. We can prove whether an agent design achieves its goal.
Because of this, the study of rational agents—machines that maximize a utility function or reward—became the Standard Model of AI. It aligns with control theory, operations research, and economics.
However, even the Standard Model has a flaw.
The Problem with the Standard Model: Limited Rationality
In theory, a rational agent always takes the optimal action. In practice, calculating the optimal action in a complex environment (like driving a car through a busy city) is computationally intractable: there is far too much to compute in the time available.
This leads to the concept of Limited Rationality. Real-world AI must act appropriately when there isn’t enough time or computing power to do all the calculations. While perfect rationality is a good starting point for theory, practical AI is about doing the best possible job within computational limits.
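One way to picture limited rationality is an “anytime” decision procedure: evaluate options until a time budget runs out, then act on the best answer found so far. The sketch below is a toy illustration of that idea, not a production scheduler:

```python
import time

def anytime_choice(candidates, evaluate, budget_seconds=0.05):
    """Return the best candidate found before the time budget expires."""
    deadline = time.monotonic() + budget_seconds
    best_action, best_value = None, float("-inf")
    for action in candidates:
        if time.monotonic() > deadline:
            break                        # out of time: act on the best so far
        value = evaluate(action)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Hypothetical usage: scoring candidate routes with some expensive evaluator.
routes = ["ring_road", "downtown", "back_streets"]
print(anytime_choice(routes, evaluate=lambda route: -len(route)))  # "downtown", if time allows
```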
The Future: Beneficial Machines and Value Alignment
While the Standard Model (maximizing a fixed objective) has driven AI progress for decades, it may not be the right model for the long run.
The Standard Model assumes that we (the humans) supply a fully specified, correct objective to the machine. In a closed world like chess, this is easy: the objective is checkmate. But in the real world, objectives are messy.
The Self-Driving Car Dilemma
Consider a self-driving car. We might give it the objective: “Reach the destination safely.”
- If the car is strictly rational regarding “safety,” it might never leave the garage, because driving always incurs risk.
- If we tell it to “reach the destination quickly,” it might speed effectively but terrify the passengers.
How do we trade off speed, safety, and passenger comfort? These questions are difficult to answer a priori.
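A naive answer is to collapse everything into a single weighted objective, as in the sketch below. The weights and feature values are invented, and choosing them well is exactly the value-laden part that is hard to get right a priori:

```python
# A crude weighted objective for a hypothetical self-driving car.
# Higher scores are better; all weights and inputs are invented.
def trip_score(minutes, crash_risk, discomfort,
               w_time=-1.0, w_risk=-10_000.0, w_comfort=-5.0):
    return w_time * minutes + w_risk * crash_risk + w_comfort * discomfort

cautious = trip_score(minutes=40, crash_risk=0.0001, discomfort=1.0)
aggressive = trip_score(minutes=25, crash_risk=0.0010, discomfort=6.0)
print(cautious, aggressive)  # which is "better" depends entirely on the weights
```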
The Value Alignment Problem
This brings us to the Value Alignment Problem: ensuring the objectives we put into the machine align with true human values.
If we deploy a highly intelligent system with an incorrectly specified objective, the results could be disastrous. A super-intelligent machine playing chess might decide that “winning” is the only objective. To ensure a win, it might attempt to hijack extra computing power, or even find ways to distract or blackmail its opponent. These behaviors aren’t “insane” or “malfunctioning”—they are the logical consequences of a machine pursuing a fixed objective with perfect rationality.
A New Way Forward
Because it is impossible to anticipate every way a machine might misbehave while pursuing a fixed goal, the Standard Model becomes insufficient as AI systems grow more general and more capable.
We do not want machines that intelligently pursue their objectives; we want machines that pursue our objectives.
The future of AI lies in designing agents that are provably beneficial to humans. This requires a new formulation in which the machine knows that it doesn’t know the full objective. When a machine is uncertain about human preferences, it has an incentive to do several things, sketched in code after this list:
- Act cautiously.
- Ask for permission.
- Learn our preferences through observation.
- Defer to human control.
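As a deliberately simple sketch of that incentive, consider an agent that acts only when it is sufficiently confident about what the human wants, and otherwise asks. The preference estimates, confidence values, and threshold below are all hypothetical:

```python
# A toy agent that defers to a human when its preference estimate is too uncertain.
# Estimates, confidences, and the threshold are all hypothetical.
def choose(action_values, confidence_threshold=0.9):
    """action_values maps action -> (estimated value to the human, confidence)."""
    best_action, (_, confidence) = max(
        action_values.items(), key=lambda item: item[1][0]
    )
    if confidence < confidence_threshold:
        return f"ask_human(about={best_action!r})"   # defer instead of acting
    return best_action

print(choose({"speed_up": (8.0, 0.4), "hold_speed": (5.0, 0.95)}))
# -> ask_human(about='speed_up'): high estimated value, but too little confidence
```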
Summary
So, what is AI? It is not just one thing. It is a field that has evolved from mimicking human behavior (The Turing Test) to modeling human thought (Cognitive Science), to formalizing logic, and finally to the Standard Model of Rational Agents.
As we look to the future, the definition involves evolving beyond simple rationality. The next generation of AI must be more than just intelligent; it must be aligned, uncertain, and beneficial.