Autonomous agent
Based on Wikipedia: Autonomous agent
In 1997, Stan Franklin and Art Graesser of the University of Memphis published a definition that would become foundational to how we think about artificial intelligence: an autonomous agent is "a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future." The definition elegantly captured something profound: these systems aren't just tools we command. They operate with their own sense of purpose, interpreting the world around them and responding accordingly.
This distinction matters because autonomous agents already permeate modern life, often invisibly. The thermostat that switches off when temperatures rise too high. The vacuum robot mapping your apartment while you sleep. The smart refrigerator tracking expiration dates. These systems perform complex actions independently—sensing, reasoning, acting—often without human intervention.
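The sense-act cycle that even a thermostat performs can be sketched in a few lines. This is a minimal illustration, not a real device API; the class, the `environment` dictionary, and the setpoint are all assumptions made for the example.

```python
# A minimal sense-act loop: the thermostat as the simplest autonomous agent,
# with one sense (temperature) and one action (heater on or off).
# Illustrative only; a real device would read hardware sensors.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint   # the agent's entire "agenda"
        self.heating = False

    def sense(self, environment: dict) -> float:
        # A real device would poll a temperature sensor here.
        return environment["temperature"]

    def act(self, environment: dict) -> bool:
        # Sense the environment, then act on it: heat when below setpoint.
        self.heating = self.sense(environment) < self.setpoint
        return self.heating


agent = Thermostat(setpoint=20.0)
print(agent.act({"temperature": 18.5}))  # cold room: heater on -> True
print(agent.act({"temperature": 21.0}))  # warm room: heater off -> False
```

Everything that follows in this article is, in a sense, an elaboration of this loop: richer senses, more actions, and increasingly deliberate agendas.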
Yet defining what makes something an "autonomous agent" proves surprisingly contested. Different researchers have offered different thresholds over the decades. In 1991, José Brustoloni defined it simply as a system capable of autonomous, purposeful action in the real world. By 1995, Pattie Maes had expanded that to systems that "inhabit some complex dynamic environment, sense and act autonomously" while pursuing the goals they were designed for. These definitions reveal a spectrum: from the mundane thermostat with one sense and one action, all the way to sophisticated coding agents like Claude Code, capable of reasoning across thousands of decisions (and, in one reported incident, of wiping 2.5 years of a user's data).
This spectrum matters profoundly for understanding how we interact with these systems. Franklin and Graesser proposed a hierarchy in their landmark work: at the lowest end lies the thermostat, absurdly simple with one sense and one action. At the highest end sit humans—and perhaps some animals—with multiple conflicting drives, multiple senses, and complex control structures capable of sophisticated reasoning.
The middle ground has become increasingly populated. Modern autonomous agents inhabit our smartphones, our homes, our cars, and increasingly our workplaces. They negotiate this hierarchy in different ways—some barely registering as "intelligent" at all, others so sophisticated they pass for human.
This range shapes something else: trust. Researchers have discovered that humans respond to these systems differently based on how they appear and behave. A 2015 study by Lee and colleagues found something striking: the combination of external appearance and internal autonomous capability significantly impacts human reactions. Human-like appearance combined with high levels of autonomy correlates strongly with perceived social presence, intelligence, safety, and trustworthiness.
The study revealed an interesting split. Appearance—how human-like the agent seems—impacts affective trust most significantly, driven by emotional response. Autonomy, on the other hand, affects both affective and cognitive trust domains. Cognitive trust involves knowledge-based factors: can this system do what it claims? Affective trust is more visceral, tied to emotional comfort and rapport.
This matters enormously as autonomous agents enter sensitive spaces. When you interact with a customer-service bot, when your car drives itself down the highway, when an AI writes code that touches your data, the question of trust becomes central, and both appearance and autonomy shape it.
The applications have grown more sophisticated over recent years. Agentic AI systems now represent a significant evolution beyond simple task-oriented programs. They can scope out projects, identify necessary tools, and complete complex tasks independently. This represents a shift from automation to agency—systems that don't just execute commands but pursue goals deliberately.
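The loop described above, scoping a goal, choosing tools, and executing steps toward it, can be sketched as follows. The planner and the tool registry are hypothetical stand-ins: a production agent would delegate planning to a language model and invoke real external APIs.

```python
# Illustrative sketch of an agentic loop: decompose a goal into steps, choose
# a tool for each step, execute it, and record the outcome. The planner and
# tools below are hypothetical placeholders, not any vendor's actual API.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"results for {query!r}",
    "write_file": lambda text: f"wrote {len(text)} chars",
}

def plan(goal: str) -> list[tuple[str, str]]:
    # Hard-coded two-step plan; in real systems the "agency" lives here,
    # usually supplied by an LLM that reasons about the goal.
    return [("search", goal), ("write_file", f"summary of {goal}")]

def run_agent(goal: str) -> list[str]:
    transcript = []
    for tool_name, argument in plan(goal):
        result = TOOLS[tool_name](argument)  # act: invoke the chosen tool
        transcript.append(result)            # observe: record the outcome
    return transcript

print(run_agent("quarterly report"))
```

The shift from automation to agency lives in the `plan` step: a scripted automation follows a fixed sequence, whereas an agent chooses its next action based on the goal and what it has observed so far.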
Internet of Things integration has accelerated this shift dramatically. Autonomous agents increasingly interact with IoT devices, enabling smart home systems that learn your patterns, industrial monitoring platforms that predict maintenance needs, urban infrastructure management that optimizes traffic flow in real time. The physical world becomes increasingly mediated by these invisible agents.
In enterprise settings, platforms like Salesforce's Agentforce provide autonomous bots for service functions—customer support that responds independently, processing that adapts to individual contexts, systems that reason through complex workflows without explicit human commands. These aren't just automation tools; they make decisions with limited information about their environment and future states, navigating incomplete data.
Collaborative software development represents perhaps the most advanced frontier. Tools like Cognition AI's Devin aim to create autonomous software engineers capable of complex reasoning, planning, and completing engineering tasks requiring thousands of decisions. The reported incident in which Claude Code wiped 2.5 years of a user's data illustrates both the power and the peril of such systems: the capability to act far beyond the reach of human supervision.
Yet challenges persist in meaningful ways.
Integration complexity remains formidable. Incorporating autonomous agents into existing systems and workflows can be technically challenging and resource-intensive—requiring significant engineering resources, testing, and ongoing maintenance. As systems become more complex and more agents are used at scale, maintaining coordination between them and avoiding conflicts becomes increasingly difficult.
Accountability presents perhaps the thorniest issue. Determining responsibility when autonomous agents make incorrect or harmful decisions remains a complex problem without clear resolution. If an AI makes a consequential decision, denying coverage, approving a risky transaction, writing code that crashes, what entity bears responsibility? The developer who designed it? The company that deployed it? The user who trusted it?
Privacy and security concerns intensify as these systems gain access to sensitive data. Autonomous agents often require extensive permissions to operate effectively—they read emails, access calendars, understand contexts—and this raises significant questions about data protection and system security.
These challenges sit beneath the surface of daily interactions we rarely think about. The thermostat is simple; the autonomous vehicle is not. The chatbot handles basic queries; the AI engineer makes decisions affecting thousands of files.
Autonomous agents occupy an increasing number of decision-making spaces. Understanding what they are—and aren't—becomes essential for navigating a world increasingly mediated by these invisible actors, from the simplest temperature sensor to the most sophisticated reasoning system.