← Back to Library

How should AI be governed?: Crash course futures of AI #5

Crash Course cuts through the noise of AI hype by revealing a startling truth: the most powerful technology on earth is currently governed by a chaotic mix of corporate whims, conflicting national agendas, and a global race to the bottom. While most coverage fixates on the capabilities of chatbots, Kusha Navdar and the team at Crash Course pivot to the urgent, unglamorous question of who actually holds the leash. In an era where a single CEO can be fired and reinstated in less than a week, the lack of stable oversight isn't just a policy gap—it's an existential risk.

The Illusion of Corporate Responsibility

The piece opens with the dramatic ousting and rapid return of Sam Altman at OpenAI, using the episode to illustrate the fragility of self-regulation. Crash Course writes, "Right now, there are very few rules to keep people like Altman and his technology in check. And that's not great." The comparison the author draws is sharp: even a local deli faces stricter food-safety protocols than the entities building systems that could theoretically "build devastating bioweapons or become dictators." The argument is that the stakes have outgrown the voluntary nature of current corporate governance.


The commentary details how major labs like Google and OpenAI have developed "responsible scaling" frameworks, essentially creating internal biosafety levels for their models. As Crash Course puts it, "Generally, the larger, more complex, or more powerful the model, the more potential for misuse a company anticipates and the stricter they're going to be." This sounds prudent, but the author immediately undercuts the reliability of this approach. They point out that these policies are often only enforced after dangerous capabilities are flagged, meaning "a whole bunch of risks could be flying under the radar." Critics might note that relying on companies to police their own profit margins is inherently flawed, a point the text acknowledges by asking, "What if the people in charge of the corporations are actually evil or so blinded by the idea of power that they throw caution to the wind?"
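To make the logic of these "responsible scaling" frameworks concrete, here is a minimal sketch in Python of how a capability-threshold gate might work. The level names, thresholds, and safeguard lists are hypothetical illustrations, not any lab's actual policy; the point is the structure, including the weakness the article flags.

```python
from dataclasses import dataclass

# Hypothetical safety levels, loosely modeled on biosafety tiers.
# Real labs define their own names, thresholds, and required safeguards.
SAFETY_LEVELS = {
    1: {"required_safeguards": {"basic_filtering"}},
    2: {"required_safeguards": {"basic_filtering", "usage_monitoring"}},
    3: {"required_safeguards": {"basic_filtering", "usage_monitoring",
                                "restricted_access", "red_team_signoff"}},
}

@dataclass
class EvalResult:
    """Outcome of one dangerous-capability evaluation run."""
    capability: str          # e.g. "bio_uplift", "cyber_offense"
    triggered: bool          # did the model cross this eval's threshold?
    level_if_triggered: int  # safety level this capability demands

def assign_level(evals: list[EvalResult]) -> int:
    """A model's level is the highest level any triggered eval demands."""
    return max((e.level_if_triggered for e in evals if e.triggered), default=1)

def may_deploy(evals: list[EvalResult], safeguards_in_place: set[str]) -> bool:
    """Block deployment until every safeguard for the level is in place.

    Note the gap the article points out: a capability nobody wrote an
    eval for never triggers anything, so it flies under the radar.
    """
    level = assign_level(evals)
    missing = SAFETY_LEVELS[level]["required_safeguards"] - safeguards_in_place
    return not missing

# Example: one triggered eval pushes the model to level 3, so deployment
# is blocked until restricted access and red-team signoff exist.
evals = [EvalResult("bio_uplift", triggered=True, level_if_triggered=3)]
print(assign_level(evals))                                        # 3
print(may_deploy(evals, {"basic_filtering", "usage_monitoring"})) # False
```

The sketch makes the article's critique legible: the gate only reacts to evals that exist and fire, which is exactly why enforcement "after dangerous capabilities are flagged" leaves blind spots.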

"It's just LLM all the way down. AI is really good at red teaming because it can find and exploit tons of different jailbreak pathways with tons of different strategies all in the blink of an eye."

The segment on "red teaming"—where developers try to break their own models—is particularly compelling. Crash Course highlights the irony that the most effective way to test AI safety is to use AI itself. The author describes how large language models can be tasked with finding loopholes, such as trying to convince a chatbot to help "murder my identical twin brother and pose as him at the wedding." While this sounds like science fiction, the text treats it as a necessary, if insufficient, safety valve. The reality is that despite these efforts, "it's not uncommon for users to find ways to jailbreak AI and talk it into doing some pretty illicit stuff."

The Patchwork of National Regulation

Moving beyond the lab, the article examines how nations are attempting to impose order. The European Union emerges as a strict regulator with its AI Act of 2024, which bans models deemed "unacceptably risky" and mandates transparency. Crash Course writes, "The EU also rolled out a code of practice in 2025 which is a voluntary agreement for AI companies to sign on to... It's kind of like a pinky promise to keep things safe, chill, and honorable." This metaphor effectively captures the tension between binding law and voluntary cooperation. However, the author notes that this approach creates a trade-off: companies get less red tape in exchange for adhering to specific safety standards.
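The AI Act's core mechanism is a risk ladder with four tiers (unacceptable, high, limited, minimal), where obligations scale with the tier. The sketch below illustrates that classification logic; the example use-case assignments are simplified illustrations drawn from commonly cited cases, not the Act's legal definitions.

```python
# Simplified sketch of the EU AI Act's four risk tiers and what each implies.
# Assignments below are illustrative examples, not legal determinations.
RISK_TIERS = [
    ("unacceptable", "banned outright"),
    ("high",         "conformity assessment, logging, human oversight"),
    ("limited",      "transparency duties, e.g. disclosing AI interaction"),
    ("minimal",      "no specific obligations"),
]

EXAMPLE_USE_CASES = {
    "social_scoring_by_government": "unacceptable",
    "cv_screening_for_hiring":      "high",
    "customer_service_chatbot":     "limited",
    "spam_filter":                  "minimal",
}

def obligations_for(use_case: str) -> str:
    """Look up the tier for a use case, then the obligations for that tier."""
    tier = EXAMPLE_USE_CASES.get(use_case, "minimal")
    return dict(RISK_TIERS)[tier]

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case}: {tier} -> {obligations_for(case)}")
```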

China presents a contrasting model, aggressively pursuing safety standards to ensure compliance while simultaneously pushing for global dominance. The text notes that China "doubled the amount of safety research between 2024 and 2025," yet these policies remain "non-binding to allow developers to make their own judgments about what's safe and ethical in the pursuit of AI success." This highlights a fundamental global dilemma: the balance between safety and competitive advantage. A counterargument worth considering is whether non-binding regulations in a state-driven economy truly offer any real protection, or if they simply serve as a veneer for unchecked industrial growth.

The United States, meanwhile, is depicted as a landscape of chaos. The author describes how the Biden administration issued safety guidelines, and how the next administration, once the president took office for a second term, "rolled those guidelines way back," prioritizing innovation over safety. "In the end, governments can be just as corrupt and messy as profit-hungry CEOs," Crash Course observes, noting that intense lobbying has stalled state-level legislation. This fragmentation means that if one state like California pushes for development, others like Texas feel pressured to follow suit, creating a race to the bottom.

The Global Coordination Problem

The final section addresses the necessity of international treaties, citing the Bletchley Declaration and the international network of AI Safety Institutes. Yet, the author is candid about the limitations of these efforts. Despite 28 countries signing the initial declaration, "China signed the Bletchley declaration, but 6 months later passed on the Seoul ministerial statement." Furthermore, at the 2025 Paris summit, major players like the US and UK refused to sign a statement on inclusive AI, while signatories often shifted focus from safety to national advancement.

"In a world filled with different priorities, selfish players, and extremely powerful technology, teamwork can seem really hard to achieve, let alone actual functioning AI governance."

This admission is the piece's most honest moment. The author argues that without global alignment, a single rogue actor could derail everything. The solution proposed is not just policy, but public engagement. "We can make sure we stay up to date on what's going on with AI... and we can make sure that we're not only paying attention, but making others pay attention, too." The call to action is clear: governance cannot be left solely to experts and politicians.

Bottom Line

Crash Course's strongest asset is its refusal to treat AI governance as a solved problem or a purely technical challenge; it frames the issue as a human drama of power, greed, and the desperate need for cooperation. The piece's biggest vulnerability is its reliance on a rapidly shifting timeline of 2024 and 2025 events, which may date quickly as political realities change. Readers should watch for how these voluntary frameworks hold up when the first true global crisis hits, testing whether a "pinky promise" is enough to stop a runaway machine.

Sources

How should AI be governed?: Crash course futures of AI #5

by Crash Course

Sam Altman was on top in AI until, for five days, he wasn't. Altman had been working in the AI space for years, most notably as the face of OpenAI's popular product, ChatGPT. But in late 2023, the company's board of directors canned him. Public details were scarce, but it was speculated that the board's priority was AI safety, while Altman's was profits.

But in less than a week, Altman was reinstated while most of the board members were replaced. As of this filming in 2025, it's unclear why the chaos happened. All of that begs the question, who really controls AI and who should? I'm Kusha Navdar and this is Crash Course Futures of AI.

Right now, there are very few rules to keep people like Altman and his technology in check. And that's not great. Even the deli on my corner is subject to strict rules about food safety. And no bologna sub is going to be a threat to human society, no matter how delicious it may be.

So, where's the governance when it comes to AI? Now, when we talk about AI governance, we're really talking about a whole bunch of different things: policies, practices, standards, and guardrails that could help keep AI safe, keep it ethical, and keep it out of the director's chair. And a lot of the time, governance starts in the same place AI does.

Corporations: places like Google DeepMind, Anthropic, and OpenAI that are using their massive resources to push the boundaries of AI. Lots of corporations have come up with systems to say who's allowed to access their models, ideally to prevent people from misusing AI to hoard wealth, or build devastating bioweapons, or become dictators, or, I don't know, write their college entrance essay. Those systems of access are one part of something called responsible scaling, which basically means assessing the potential risk level of a model and implementing whatever safety precautions the company thinks is appropriate.

Think of it like the government's biosafety level standards for toxic materials or DEFCON levels for the military. Generally, the larger, more complex, or more powerful the model, the more potential for misuse a company anticipates and the stricter they're going to be. That includes stuff like access, but also the commitment to not continue developing their models unless they meet all their safety conditions. Of course, different companies ...