GPT-3 and the Scaling Hypothesis (2020)
OpenAI released GPT-3 with 175 billion parameters, demonstrating that scaling up transformer models produces emergent capabilities: abilities the model was never explicitly trained for, elicited simply by prompting. GPT-3 could write essays, generate code, translate languages, and answer questions, all from a single model trained on internet text. The “scaling hypothesis”, the idea that bigger models trained on more data yield qualitatively better results, reshaped the industry’s investment thesis.
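The quantitative version of this claim comes from the scaling-laws work at OpenAI that preceded GPT-3 (Kaplan et al., 2020): test loss falls as a smooth power law in parameter count. As a rough sketch, with the published constants treated as approximate rather than exact:

\[ L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}, \qquad \alpha_N \approx 0.076,\ N_c \approx 8.8 \times 10^{13} \]

Each order-of-magnitude increase in parameter count N buys a predictable reduction in loss, which is why “bigger is better” became an investment thesis rather than a research curiosity.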
ChatGPT (November 30, 2022)
OpenAI released ChatGPT, built on GPT-3.5 with reinforcement learning from human feedback (RLHF). It reached 100 million users in two months — the fastest-growing consumer application in history at the time. For the first time, the general public could interact with a capable AI system through natural conversation. This was AI’s “iPhone moment.”
The Current Landscape (2023–Present)
The field has exploded, with multiple frontier models from competing organizations: GPT-4, Claude, Gemini, Llama, Mistral. Image generation (DALL-E, Midjourney), video generation (Sora), and code assistants (GitHub Copilot, Cursor) have entered mainstream use. AI agents that plan and use tools are the current frontier. Gartner projects $2.52 trillion in global AI spending by 2026.
The pattern to watch: Every previous era of AI followed the same arc — breakthrough, hype, overpromise, correction. The technology was always real; the timelines were always wrong. The executives who navigate this era successfully will be those who invest based on demonstrated value, not projected potential.