
AI literacy - lecture 1: Types of AI

Intelligence Is Broader Than We Think — And That Matters for AI

Kenny Easwaran opens his AI literacy lecture series with a move that most technologists skip entirely: defining intelligence itself before talking about the artificial kind. The result is a surprisingly philosophical foundation for what could have been a dry taxonomy. Rather than rushing to explain neural networks or large language models, Easwaran spends considerable time arguing that intelligence is not a single measurable trait but a constellation of capacities — and that this broader definition changes how we should think about every AI system we encounter.

His working definition is deliberately expansive. Intelligence, he argues, involves three components: taking in and processing information, learning from it, and using it to make progress toward goals. By this standard, frogs are intelligent. Plants and bacteria possess rudimentary intelligence. Rocks do not. The definition is useful precisely because it avoids the anthropocentric trap of equating intelligence with human cognition — a trap that leads people to dismiss current AI systems as "artificial stupidity" simply because they lack consciousness or general reasoning.

It just isn't always the case that people who tend to have good ideas about how to solve math problems are the same people who tend to have good ideas about how to manage people's emotions or are the same people who tend to have good ideas about how to cook interesting and delicious food.

This point deserves more attention than it typically receives in AI discourse. The IQ-style conception of intelligence as a single scalar value has done enormous damage to how the public evaluates AI systems. When people ask whether ChatGPT is "intelligent," they are usually asking whether it matches some imagined threshold of general human capability — a question that Easwaran's framework reveals as poorly formed. The more productive question is: what kinds of information can this system process, what can it learn, and toward what goals can it make progress?


A Taxonomy That Actually Clarifies

The lecture's most practical contribution is its classification of AI systems by what they do: identify, recommend, do, or generate. This is not a novel taxonomy in the academic literature, but Easwaran deploys it with unusual effectiveness by populating each category with examples that range from the mundane to the sophisticated. Post office zip code scanners sit alongside facial recognition. Thermostats share a category with self-driving cars. Netflix recommendation algorithms neighbor Google's ad-targeting systems.

The pedagogical value here is in demolishing the popular notion that "AI" refers only to chatbots and image generators. Easwaran makes the case that artificial intelligence has been woven into daily life for decades — autofocus cameras, chess-playing programs, spam filters — and that the recent explosion of generative AI is an expansion of a long-existing phenomenon, not a sudden break from the past.

I want to help show you there's a lot of different things that already exist that really count as artificial intelligence even though you might not be thinking of them.

Where the taxonomy falls slightly short is in addressing hybrid systems. Modern AI products increasingly blur these categories. A large language model can identify, recommend, do, and generate within a single conversation. Easwaran acknowledges this overlap but perhaps underestimates how rapidly the boundaries are dissolving. A system like Claude or GPT-4 is simultaneously a classifier (it identifies intent), a recommender (it suggests approaches), an actor (it executes code), and a generator (it produces text). The four-category framework is a useful starting point, but students should be cautioned that real-world systems increasingly resist neat classification.

The AGI Question: Appropriate Caution

Easwaran handles the artificial general intelligence debate with admirable restraint. He notes the range of expert opinion — from those who believe large language models are nearly there to those who think AGI is a century or more away — without committing to a dramatic prediction in either direction. His observation about Google's early AGI claims is particularly apt:

In the early 2000s the founders of Google had argued that the search engine was actually close to general intelligence because they said the internet already contains so much information that all you really need for general intelligence is an effective way of finding the information relevant to a goal. But that hasn't yet panned out either.

This historical parallel is valuable because it illustrates a recurring pattern in the field: each major breakthrough in narrow AI capability triggers a wave of AGI speculation that inevitably proves premature. Search did not become general intelligence. Neither did expert systems, deep learning for image recognition, or game-playing AI. Whether large language models will break this pattern remains genuinely uncertain, but the pattern itself should inspire caution.

A counterpoint worth raising, however, is that Easwaran may understate how qualitatively different the current moment is. Previous AI breakthroughs operated in narrow domains with limited transfer learning. Large language models, by contrast, demonstrate remarkable cross-domain competence. They can write code, analyze legal documents, compose poetry, and explain quantum mechanics — all without domain-specific training. The argument that LLMs still lack goals of their own and can only respond to prompts is technically accurate but may prove less important than expected if agentic architectures continue to develop. The gap between "responds to prompts" and "pursues goals" may be smaller than it appears.

Symbolic vs. Neural: The Deepest Divide

The lecture's treatment of the symbolic-versus-neural distinction is brief but effective. Easwaran uses the Arbor Day Foundation tree identification guide as an analogy for symbolic AI — follow explicit rules, arrive at an answer you can justify step by step — and contrasts it with the intuitive, inexplicable process of recognizing a friend's face. This maps neatly onto the explainability problem that dominates current AI ethics debates.

These neural networks are much less explainable or understandable.
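The contrast can be made concrete with a toy sketch. Below is a minimal rule-based identifier in the spirit of the dichotomous tree guide: every answer comes with the explicit chain of rules that produced it, which is exactly what a trained neural network cannot provide. The species names and rules are illustrative assumptions, not taken from the lecture or the Arbor Day guide.

```python
# Toy symbolic classifier in the spirit of a dichotomous tree guide.
# Every answer carries the explicit chain of rules that produced it.
# (Species and rules are illustrative, not from the lecture.)

def identify_tree(leaf_type, leaf_shape):
    """Return (species, trace): the answer plus its step-by-step justification."""
    trace = []
    if leaf_type == "needle":
        trace.append("leaf is a needle -> conifer branch of the guide")
        if leaf_shape == "clustered":
            trace.append("needles grow in clusters -> pine")
            return "pine", trace
        trace.append("needles grow singly -> spruce")
        return "spruce", trace
    trace.append("leaf is broad -> deciduous branch of the guide")
    if leaf_shape == "lobed":
        trace.append("leaf has lobes -> oak")
        return "oak", trace
    trace.append("leaf edge is smooth -> magnolia")
    return "magnolia", trace

species, steps = identify_tree("needle", "clustered")
print(species)          # pine
for step in steps:      # the full justification, one rule at a time
    print(" -", step)
```

A neural classifier would map the same inputs to the same label through millions of learned weights, with no comparable trace to print; that gap is what the explainability debate is about.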

What the lecture does not address — reasonably, given its introductory scope — is the growing movement toward hybrid approaches. Neurosymbolic AI, which attempts to combine the pattern recognition of neural networks with the logical reasoning of symbolic systems, represents one of the most active areas of current research. Students would benefit from knowing that the symbolic-neural divide, while historically important, may not represent the permanent architecture of advanced AI systems.

The supervised-versus-unsupervised learning distinction receives similarly brief treatment, but the forest analogy works well: learning to classify trees alone versus learning with a guide who names each species. What goes unmentioned is the third paradigm — reinforcement learning from human feedback — which is arguably the technique most responsible for making large language models useful in practice. This omission is understandable in a first lecture but should be flagged for students who want to understand why modern chatbots behave so differently from their predecessors.
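The forest analogy maps directly onto code. The sketch below runs both paradigms on the same made-up leaf-width measurements: the supervised learner gets each training example with its species name (the guide who names each tree), while the unsupervised learner sees only the numbers and must group them itself. The data and the 1-nearest-neighbor / k-means choices are assumptions for illustration only.

```python
# Supervised vs. unsupervised learning on the same toy data:
# leaf widths in cm (made-up numbers for illustration).

# Supervised: a "guide" names the species of each training example,
# so a new leaf is labeled by its nearest named example (1-NN).
labeled = [(1.0, "pine"), (1.2, "pine"), (6.8, "oak"), (7.1, "oak")]

def classify(width):
    return min(labeled, key=lambda ex: abs(ex[0] - width))[1]

print(classify(1.1))  # pine
print(classify(6.5))  # oak

# Unsupervised: no names at all -- just group the widths into two
# clusters with a simple 1-D k-means (k = 2).
widths = [1.0, 1.2, 6.8, 7.1]
centers = [min(widths), max(widths)]
for _ in range(10):  # alternate assignment and re-centering
    groups = [[], []]
    for w in widths:
        groups[abs(w - centers[0]) > abs(w - centers[1])].append(w)
    centers = [sum(g) / len(g) for g in groups]

print(centers)  # two cluster centers -- but the clusters have no names
```

Note that the unsupervised learner recovers the same two groups but cannot call them "pine" and "oak"; the labels are precisely what the guide supplies. RLHF adds a further ingredient not shown here: a reward signal learned from human preference judgments.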

Bottom Line

Easwaran's opening lecture succeeds at what introductory material must do: it provides a framework capacious enough to hold everything that follows without oversimplifying. The three-part definition of intelligence (process information, learn, pursue goals) and the four-part taxonomy of AI systems (identify, recommend, do, generate) give students genuine conceptual tools rather than buzzword familiarity. The lecture's philosophical grounding — drawing on Plato, Aristotle, and Turing rather than just Silicon Valley product launches — sets it apart from most AI literacy efforts. Where it could be stronger is in acknowledging how rapidly the boundaries between its neat categories are blurring, particularly as large language models and agentic systems collapse the distinctions between identification, recommendation, action, and generation into single unified systems. Still, as a foundation for thinking clearly about AI, it is substantially better than the breathless hype or reflexive fear that characterizes most public discourse on the subject.


Sources

AI literacy - lecture 1: Types of AI

by Kenny Easwaran

Okay, this lecture is about the general concept of artificial intelligence and some of the ways that we might usefully categorize the many different kinds of things that have been said to fall under this general umbrella. So the first question is: what do we even mean by artificial intelligence? There's a lot of things that people sometimes mean by using this phrase, and some people want to use the term in a very specific way such that even the very impressive systems of the past few years don't count (maybe they're just artificial stupidity), but I think it's much more useful to consider the term very broadly.

So let's start by thinking about what is intelligence. We probably first encounter this concept when we think about the difference between a good idea and a bad idea. If you can't find your keys, it's probably a good idea to check the pockets of the pants that you were wearing yesterday, and it's probably a bad idea to just ignore the keys and go out all night with your friends, hoping your roommate doesn't lock you out before you get home. Note that a good idea, an intelligent idea, isn't necessarily one that actually works; the keys might not be in your pants pocket, but it's one that makes effective use of the information that you have. Similarly, a bad idea isn't necessarily one that goes wrong; maybe your roommate doesn't lock you out, maybe you discover that you had your keys all along in the bottom of your bag, but it's one that doesn't make effective use of the information that you have.

We sometimes talk as though some people are more intelligent and other people are less intelligent, thinking that some people systematically make more intelligent decisions and have more intelligent ideas and other people less so, but as soon as we try to make this precise we discover that it's much more complex than we thought. There have been various attempts to develop a measure of intelligence, often called IQ, but even if you haven't studied the history of this concept very closely, you're probably aware of some problems with it. It just isn't always the case that people who tend to have good ideas about how to solve math problems are the same people who tend to have good ideas about how to manage people's emotions ...