← Back to Library

AI literacy - lecture 6.1: Using LLMs effectively

In a field drowning in superficial prompt engineering tips, Kenny Easwaran offers a startlingly human-centric thesis: the most critical variable in interacting with artificial intelligence isn't the algorithm, but the user's own cognitive state. This lecture cuts through the noise of technical parameters to argue that effective AI use requires you to be the boss of your own ideas before the machine ever speaks. For busy professionals, this reframing is essential—it suggests that the bottleneck in AI productivity isn't the model's intelligence, but our failure to clear our own mental decks first.

The Human Bottleneck

Easwaran begins by dismantling the common assumption that the conversation should start with the machine. He says, "in any conversation with a large language model you are the most important conversation you are the one whose interests matter." This is a provocative stance in an era where users often treat these tools as oracle-like entities that should do the heavy lifting immediately. The author argues that if you let the AI lead the brainstorming, its suggestions will inevitably replace your own, diluting your unique perspective. "You need to be the boss not it," Easwaran insists, a directive that feels less like technical advice and more like a warning against intellectual atrophy.

The core of his argument rests on the idea that professional creativity requires a "blank slate." He notes that writers and mathematicians often step away from projects for weeks to forget their initial thoughts and return with fresh eyes. Since most of us cannot afford to pause our work for that long, Easwaran suggests a tactical compromise: "if it matters what's going to come out you should always start by doing some of your own brainstorming about what issues you want the conversation to cover and write down that checklist." This is a crucial distinction. By forcing the user to articulate their own checklist first, the AI becomes a tool for expansion rather than a substitute for thought.
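Easwaran's checklist-first workflow can be made concrete with a small sketch. Everything here is invented for illustration, not taken from the lecture: the task, the checklist contents, and the `build_prompt` helper are hypothetical, and merely show the ordering he recommends, with your own points written down first and the model's expansion invited second.

```python
# The user's own brainstorming happens BEFORE the model sees anything.
my_checklist = [
    "budget constraints for Q3",
    "vendor lock-in risks",
    "team training time",
]

def build_prompt(task, checklist):
    """Fold a pre-written checklist into the prompt so the AI
    expands the user's thinking rather than replacing it."""
    points = "\n".join(f"- {p}" for p in checklist)
    return (
        f"{task}\n\n"
        f"Make sure the discussion covers these points I care about:\n{points}\n"
        "Then suggest any important issues I may have missed."
    )

prompt = build_prompt("Help me evaluate a new project-management tool.", my_checklist)
print(prompt)
```

The point of the ordering is the one Easwaran makes: because the checklist exists in writing before the conversation starts, the model's suggestions can be compared against it instead of quietly displacing it.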

Critics might argue that this approach is inefficient for users who lack domain expertise and genuinely need the AI to generate the initial framework. However, Easwaran's point holds up when considering the long-term value of the output; an AI-generated idea is often generic, whereas a user-generated idea is grounded in specific context.

You need to be the boss not it.

Architecture and the Illusion of Control

Moving from psychology to mechanics, Easwaran explains the underlying architecture without getting bogged down in jargon. He describes the large language model as a device for "guessing the probability of what word comes next on the basis of a large amount of training data." He clarifies that the system doesn't just pick the most likely word every time; it uses a mechanism called "temperature" to introduce controlled chaos. "At the lowest temperature of zero... it always picks the word with the highest score but if temperature goes up it might become a bit more likely to pick words that are slightly lower score," he explains.
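The sampling behavior Easwaran describes can be illustrated with a toy sketch. Real models score tens of thousands of candidate tokens; the three-word vocabulary and the scores below are invented for illustration, and the softmax-with-temperature formula is the standard implementation of this mechanism, though the lecture itself doesn't spell out the math.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick the next token from raw scores (logits).

    At temperature 0 this is greedy: always the highest-scoring token.
    As temperature rises, lower-scoring tokens become more likely.
    """
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature: divide each score by T before exponentiating,
    # so a higher T flattens the distribution toward uniform.
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    probs = {tok: w / total for tok, w in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Toy scores for three candidate next words.
scores = {"the": 3.0, "a": 2.5, "banana": 0.1}

print(sample_next_token(scores, temperature=0))    # always "the"
print(sample_next_token(scores, temperature=1.5))  # occasionally "a", rarely "banana"
```

This is exactly the trade-off behind "creative" versus "precise" modes: the zero-temperature call is deterministic and repeatable, while the higher-temperature call can surface less likely, and sometimes more interesting, continuations.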

This technical detail is vital for the busy reader because it explains why the same prompt can yield wildly different results. Easwaran points out that while users rarely see the numerical temperature setting, they often see it disguised as "creative" versus "precise" modes. He argues that understanding this allows users to strategically select the right mode: "higher temperature for things that you're trying to create more unusual and interesting responses for and lower temperature for responses that are supposed to be more precise and authoritative." This reframes the user's role from a passive consumer to a strategic operator who understands the trade-off between novelty and accuracy.

He also touches on the hierarchy of models, noting that "higher end models have bigger neural Nets with more layers... and so very likely for almost any purpose you'll get better and more informed and more interesting responses." Yet, he acknowledges the cost: "the higher end model takes more energy and computation power... and that doesn't show up directly to us but the companies generally won't let us use the highest end model unless we pay them." This is a pragmatic admission that quality often comes with a price tag, whether in dollars or in the computational resources required to run the model.

Context as a Double-Edged Sword

Perhaps the most sophisticated part of Easwaran's commentary is his treatment of the "context window." He explains that unlike humans, who forget, these models remember everything within their vast memory limits. "Once you mention an idea in the conversation that idea will remain and will keep influencing what it's thinking and what it's saying in future rounds," he warns. This creates a unique danger: if the conversation goes off the rails, the model will try to justify or continue down that wrong path.

Easwaran suggests a radical solution that many users overlook: the strategic reset. "Sometimes it's better valuable to have multiple separate conversations going tracking different versions of the context," he advises. If the AI makes a mistake or adopts a tone you dislike, the best move is not to argue with it, but to "start a new conversation with a large language model so that it has no context." This turns the act of starting a new chat from a sign of failure into a deliberate tool for quality control.
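A minimal sketch of what this reset looks like in practice, assuming a chat-style API in which the full message history is resent on every turn; the `send` function below is a placeholder standing in for a real API call, not any particular vendor's interface.

```python
def send(history):
    """Placeholder for a real chat API call; it just reports how much
    context would shape the reply."""
    return f"(reply informed by {len(history)} prior messages)"

# A conversation is just a list of messages: the model "remembers" only
# what is resent in this list on each turn.
main_thread = [{"role": "user", "content": "Draft an outline for my report."}]
reply = send(main_thread)
main_thread.append({"role": "assistant", "content": reply})

# A bad idea entered the context? Don't argue with it in-thread.
# Branch a fresh conversation that carries no memory of the misstep.
fresh_thread = [{"role": "user", "content": "Draft an outline for my report."}]

print(send(main_thread))   # shaped by the accumulated context
print(send(fresh_thread))  # a true blank slate
```

Keeping several such lists alive at once is Easwaran's "multiple separate conversations" tactic: each list is an independent version of the context, and discarding one costs nothing.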

He also emphasizes that the user is a "context machine" too. "Unlike the large language model you can't just click a button to forget everything that you've heard," he says. This creates an asymmetry where the user must be more disciplined than the machine. If the AI suggests a bad idea, the user might unconsciously adopt it and carry it forward, whereas the machine can be wiped clean instantly. This insight is a powerful reminder that the human mind is the most fragile link in the chain.

Bottom Line

Kenny Easwaran's strongest contribution is the shift from technical prompt engineering to cognitive discipline; he proves that the best way to use AI is to first ensure your own thinking is clear and structured. The argument's greatest vulnerability is its assumption that users have the time and mental bandwidth to do this pre-work, which may not be true in high-pressure, real-time scenarios. Ultimately, the piece serves as a necessary corrective to the hype: the machine is powerful, but it is the human who must remain the architect of the conversation.

Sources

AI literacy - lecture 6.1: Using LLMs effectively

by Kenny Easwaran

This is a lecture about how to use conversational AIs, or virtual assistants, more effectively. These conversational AIs have a core large language model, which is a device for guessing the probability of what word comes next on the basis of a large amount of training data. There are two major steps in between this large language model and the conversational AI or virtual assistant itself. First, there's reinforcement learning from human feedback, or RLHF, which is supposed to make the responses more helpful, harmless, and honest. And then, furthermore, there's a system prompt built into it, so that the LLM doesn't just continue as though it was finishing a text that you started, but instead responds to you like another person in a conversation with you. Thinking about all of this can help you understand both what other people might be doing to get more use out of these, and what you can do for whatever it is that you're trying to do with one of these virtual assistants or conversational AIs.

At the time I'm recording this video, August 2024, it's just over a year and a half since OpenAI released ChatGPT, the first one of these systems, but every few months in that time there have been new systems released by competitors, updates to the existing systems, changes in the interface, and so on, both for paid users and for free users. So I won't focus too much on all the details, but instead on more of the conceptual issues of how to use these systems more effectively.

Now, one thing I want to emphasize: in any conversation with a large language model, you are the most important conversation; you are the one whose interests matter. Your strategies should be structured just as much to get good results out of you as out of the large language model. If the conversation gets off on the wrong foot and goes in the wrong direction, it's easy enough to reset the LLM so that it forgets the whole thing and gets started better the second time from a blank slate, but you can't do that yourself. This is why professional writers and mathematicians and puzzle solvers and all often put down their project for a long time, whether it's hours or weeks, depending on what they need and what it is, so that they can ...