Ch 1 — What Is Artificial Intelligence?

Defining AI, its branches, and how it differs from traditional software — from Dartmouth 1956 to today
High Level: Define AI → Origins → Types → Approaches → Turing Test → Today
What Is Artificial Intelligence?
The quest to make machines that think, learn, and act
The Core Idea
Artificial intelligence is the field of computer science dedicated to creating systems that can perform tasks normally requiring human intelligence — recognizing speech, understanding language, making decisions, and learning from experience. Unlike traditional software that follows explicit if/then rules, AI systems learn patterns from data and generalize to new situations they haven’t seen before.
AI vs Traditional Software
Traditional Software
Programmer writes explicit rules for every scenario. The program does exactly what it’s told — nothing more. Adding new capabilities means writing more code.
AI System
Programmer provides data and a learning algorithm. The system discovers rules and patterns on its own. It can handle scenarios the programmer never anticipated.
Key distinction: Traditional software is programmed; AI systems are trained. A spam filter doesn’t have a list of spam words — it learned to recognize spam by seeing millions of examples.
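To make the distinction concrete, here is a minimal sketch in Python. The word lists and the tiny labeled dataset are invented for illustration; a real spam filter would use a proper statistical model trained on millions of messages. The rule-based filter only catches words a programmer listed in advance, while the trained one scores words by how often they appeared in example spam.

```python
from collections import Counter

# Rule-based filter: a programmer enumerates spam words by hand.
SPAM_WORDS = {"winner", "free", "prize"}  # hypothetical hand-coded list

def rule_based_is_spam(message: str) -> bool:
    return any(word in SPAM_WORDS for word in message.lower().split())

# Trained filter: learn word evidence from labeled examples instead.
def train(examples):
    """Count how often each word appears in spam vs. ham messages."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, is_spam in examples:
        (spam_counts if is_spam else ham_counts).update(text.lower().split())
    return spam_counts, ham_counts

def trained_is_spam(message, spam_counts, ham_counts):
    """Classify by comparing total spam vs. ham evidence per word."""
    words = message.lower().split()
    spam_score = sum(spam_counts[w] for w in words)
    ham_score = sum(ham_counts[w] for w in words)
    return spam_score > ham_score

# Toy training data (invented):
data = [
    ("claim your free prize now", True),
    ("exclusive offer just for you", True),
    ("meeting moved to 3pm", False),
    ("lunch tomorrow?", False),
]
spam_c, ham_c = train(data)

# "exclusive offer" was never in any hand-coded rule, but the trained
# filter flags it because those words appeared in training spam.
print(trained_is_spam("exclusive offer inside", spam_c, ham_c))  # True
print(rule_based_is_spam("exclusive offer inside"))              # False
```

Note that neither filter was told that "offer" is suspicious; the trained one discovered it from the data, which is exactly the point of the distinction above.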
The Birth of AI: Dartmouth 1956
The summer workshop that launched a field
The Founding Moment
In the summer of 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon organized a workshop at Dartmouth College in Hanover, New Hampshire. They proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” McCarthy coined the term “Artificial Intelligence” specifically for this proposal, choosing it for its neutrality over existing terms like “cybernetics.”
The Dartmouth Proposal (Aug 1955)
Organizers: John McCarthy (coined "AI"), Marvin Minsky (neural nets), Nathaniel Rochester (IBM), Claude Shannon (information theory)
Duration: ~6-8 weeks, summer 1956
Budget: $13,500 requested from the Rockefeller Foundation
Location: Dartmouth College, NH
Legacy: called the "Constitutional Convention of AI"
Historical note: The workshop didn’t produce a breakthrough result, but it established AI as a distinct field of study and brought together the researchers who would define its first two decades.
Narrow AI vs. General AI
The spectrum of artificial intelligence capabilities
Narrow AI (Weak AI)
All AI systems that exist today are narrow AI — they excel at one specific task but cannot transfer that ability to other domains. A chess engine that beats grandmasters cannot hold a conversation. A language model that writes essays cannot drive a car. Narrow AI is powerful within its domain but brittle outside it.
Examples of Narrow AI
Image recognition: Classifying photos (Google Photos)
Language models: ChatGPT, Claude, Gemini
Recommendation engines: Netflix, Spotify, YouTube
Game playing: AlphaGo, Stockfish
Self-driving: Tesla Autopilot, Waymo
General AI (AGI)
Artificial General Intelligence would match human-level intelligence across all cognitive tasks — reasoning, learning, creativity, social understanding — without being limited to a specific domain. AGI could learn any new task the way humans do, transferring knowledge between domains. AGI does not yet exist and remains an active research goal with no consensus on when (or if) it will be achieved.
Superintelligence (ASI)
A hypothetical level beyond AGI where machine intelligence surpasses all human cognitive abilities. Purely theoretical — no serious researcher claims we are close to ASI.
Reality check: Despite impressive demos, today’s LLMs are sophisticated narrow AI. They predict the next token in a sequence — they don’t “understand” in the human sense. The gap between narrow AI and AGI remains vast.
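The "predict the next token" mechanism can be illustrated with a toy bigram model, a drastic simplification of an LLM (the corpus here is invented, and real models condition on thousands of prior tokens rather than a single preceding word): it simply picks the word that most often followed the current one in training text.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count which word follows which in the training text."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word: str) -> str:
    """Return the most frequent successor of `word` seen in training."""
    return following[word.lower()].most_common(1)[0][0]

# Invented toy corpus; an actual LLM trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" — "the cat" occurs twice
```

The model produces fluent-looking continuations without any notion of what a cat *is*, which is the heart of the "prediction vs. understanding" debate.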
Four Functional Types of AI
Arend Hintze’s classification by operational capability
Type I — Reactive Machines
No memory, no learning from past experience. The same input always produces the same output. IBM’s Deep Blue (1997) evaluated 200 million chess positions per second but couldn’t remember previous games or improve over time.
Type II — Limited Memory
Can use past data and recent experience to inform decisions. This is where most modern AI lives. Self-driving cars observe other vehicles over time. LLMs use context windows. These systems learn during training but have limited ability to update in real-time.
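A minimal sketch of the "limited memory" idea, using a rolling buffer of recent observations (the `LimitedMemoryAgent` class and the driving events are invented for illustration; a real context window holds tokens, not sentences):

```python
from collections import deque

class LimitedMemoryAgent:
    """Keeps only the last `window` observations, like a context window."""
    def __init__(self, window: int = 3):
        self.memory = deque(maxlen=window)  # oldest items fall off automatically

    def observe(self, event: str) -> None:
        self.memory.append(event)

    def context(self) -> list:
        return list(self.memory)

agent = LimitedMemoryAgent(window=3)
for event in ["car ahead brakes", "lane clear",
              "pedestrian waiting", "light turns green"]:
    agent.observe(event)

print(agent.context())  # the first event has been forgotten
```

Everything outside the buffer is simply gone, which is why Type II systems can use recent experience but cannot accumulate knowledge the way a learning human does.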
Type III — Theory of Mind
Would understand emotions, beliefs, and intentions of other agents. Could predict behavior based on mental states. Does not yet exist — though some researchers argue LLMs show early, limited signs of modeling others’ perspectives.
Type IV — Self-Aware
Would possess consciousness and self-awareness — an understanding of its own existence and internal states. Purely theoretical. No AI system has demonstrated anything approaching genuine self-awareness.
Current status: Only Types I and II exist today. Types III and IV remain aspirational goals that may require fundamental breakthroughs we haven’t yet achieved.
Two Paradigms: Symbolic AI vs. Connectionism
The fundamental debate that shaped the field
Symbolic AI (Classical AI)
Represents knowledge using explicit symbols and rules. Intelligence emerges from logical reasoning over structured representations. Dominated AI from the 1950s through the 1980s. Produced expert systems like MYCIN (medical diagnosis, 1976) and DENDRAL (chemical analysis, 1965).
Strengths & Limits
Strengths: Transparent, interpretable, works with small datasets, excellent at logical reasoning
Limits: Requires hand-coded rules, brittle in ambiguous situations, doesn’t scale to complex real-world problems
Connectionism (Neural Networks)
Represents knowledge through weighted connections between artificial neurons, inspired by the brain. Intelligence emerges from learning patterns in data. Struggled in the 1970s–80s but exploded after 2012 with deep learning. Powers all modern AI breakthroughs: image recognition, language models, game playing.
Strengths & Limits
Strengths: Learns from raw data, scales to massive problems, handles ambiguity and noise
Limits: Requires huge datasets, computationally expensive, “black box” — hard to interpret why it makes specific decisions
Modern trend: The field is increasingly exploring neuro-symbolic AI — combining neural networks’ pattern recognition with symbolic reasoning’s interpretability and logical rigor.
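The two paradigms can be contrasted in a few lines of Python (a toy sketch: the target function here is logical OR, and the perceptron update rule shown is the classic 1958 Rosenblatt rule, not how modern deep networks are trained). The symbolic version states the rule explicitly; the connectionist version starts with zero weights and learns the same behavior from labeled examples.

```python
# Symbolic approach: the rule is written explicitly by a programmer.
def symbolic_or(a: int, b: int) -> int:
    return 1 if (a == 1 or b == 1) else 0

# Connectionist approach: a single artificial neuron learns the same
# function from labeled examples via the perceptron update rule.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in examples:
            out = 1 if w1 * a + w2 * b + bias > 0 else 0
            err = target - out
            w1 += lr * err * a      # nudge weights toward the target
            w2 += lr * err * b
            bias += lr * err
    return w1, w2, bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w1, w2, b = train_perceptron(data)
learned = lambda a, c: 1 if w1 * a + w2 * c + b > 0 else 0

print([learned(a, c) for (a, c), _ in data])  # [0, 1, 1, 1] — matches symbolic_or
```

The symbolic version is transparent but fixed; the learned version is adaptable but its weights are just numbers, foreshadowing the "black box" limits listed above.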
The Turing Test & the Chinese Room
Can machines think? Two landmark thought experiments
The Turing Test (1950)
In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing proposed the “Imitation Game”: a human interrogator communicates via text with two hidden entities — one human, one machine. If the interrogator cannot reliably tell which is which, the machine demonstrates intelligence. Turing argued this behavioral test sidesteps the “meaningless” question of whether machines truly “think.”
The Chinese Room (1980)
John Searle challenged the Turing Test with a thought experiment: imagine a person in a room who follows rulebooks to manipulate Chinese symbols, producing correct responses without understanding Chinese. Searle argued that passing the Turing Test doesn’t prove understanding — a system can manipulate symbols syntactically without any semantic comprehension.
Why it matters today: LLMs like GPT-4 can pass many Turing-style tests, yet the Chinese Room argument remains relevant. Do these models “understand” language, or are they sophisticated symbol manipulators? This debate shapes AI safety, ethics, and policy.
The Subfields of AI
A map of the major branches
Core Subfields
Machine Learning: systems that learn from data (Ch 3-6; supervised, unsupervised, RL)
Computer Vision: understanding images and video (Ch 7; CNNs, object detection, segmentation)
Natural Language Processing: understanding and generating text (Ch 8-10; RNNs, transformers, LLMs)
Robotics: physical agents in the real world (perception, planning, control)
Reinforcement Learning: learning by trial and reward (Ch 12; games, robotics, RLHF)
Emerging Subfields
Generative AI: creating new content (Ch 11; GANs, diffusion models, LLMs)
AI Safety & Alignment: ensuring AI behaves as intended (Ch 13; bias, fairness, value alignment)
Multimodal AI: processing text, images, and audio together (GPT-4V, Gemini, Claude 3)
Agentic AI: autonomous AI that plans and acts (Ch 14; tool use, multi-step reasoning)
Overlap is the norm: These subfields aren’t isolated. Modern systems like GPT-4 combine NLP, reasoning, and generative capabilities. A self-driving car uses computer vision, RL, and planning simultaneously.
Where AI Stands Today
The current state and what’s ahead in this course
The Current Moment
We are in an era of narrow AI that is extraordinarily capable within specific domains. Large language models can write code, translate languages, and pass professional exams. Diffusion models generate photorealistic images. RL agents master complex games. Yet every one of these systems is narrow — brilliant at its task, unable to generalize beyond it.
Key Takeaways
1. AI is the field of building machines that perform tasks requiring human-like intelligence
2. All current AI is narrow (task-specific), not general
3. Two paradigms: symbolic (rules) vs. connectionist (learning)
4. Connectionism (neural networks) dominates modern AI
5. The Turing Test measures behavior, not understanding
What’s Next in This Course
Ch 2: History of AI — The full timeline from Turing to transformers

Ch 3: ML Paradigms — Supervised, unsupervised, and reinforcement learning

Ch 5: Perceptrons — How neural networks actually work

Ch 9: Transformers — The architecture behind modern AI

Ch 10: LLMs — How ChatGPT and Claude actually work
Foundation matters: Every concept in this chapter — narrow vs. general, symbolic vs. connectionist, the Turing Test — will resurface throughout the course. Understanding these foundations makes everything else click.