← Back to Library

A practical roadmap to AI fluency for hardware designers

Vikram Sekar cuts through the noise of artificial intelligence hype to deliver a sobering reality check for hardware engineers: the industry's future belongs not to those who can simply prompt a chatbot, but to those who understand the physics and architecture underpinning the models. In a field often paralyzed by the dichotomy of blind excitement and total indifference, Sekar argues that true career differentiation lies in bridging the gap between semiconductor design and machine learning logic. This is not a guide to becoming an AI expert overnight, but a strategic roadmap for the engineer who realizes that the "infinite money glitch" of current funding is building a world where their specific domain knowledge is the only thing standing between them and obsolescence.

The Shift from Hype to Hardware

Sekar begins by dismantling the superficial engagement many professionals have with large language models. He observes that while early adopters used these tools for emails and brainstorming, "an AI chat box is not the same as a search bar," and for those doing highly technical work, the utility remains unclear. He notes that the current landscape is fraught with executives seeking an "AI checkbox" to signal trendiness, creating a pressure cooker for individual contributors who must decide whether to dive in or risk irrelevance. The author's framing is distinct because it rejects the surface-level "prompt engineering" trend in favor of deep structural understanding. He writes, "My belief is that everyone working in semiconductors and chip design should develop a foundational understanding of this technology; beyond just the chat box and prompt engineering methods - because that is where true differentiation lies in a career."


This argument lands with significant weight because it addresses the specific vulnerability of hardware engineering: unlike software, where millions of lines of code exist for models to learn from, "the actual skills required to get a chip to work isn't written down anywhere." Sekar correctly identifies that the undocumented intuition of a veteran engineer is the missing data set for AI. A counterargument worth considering is that the pace of tool evolution might render deep architectural knowledge less critical than adaptability; however, Sekar's point stands that without understanding the "why" and "how" of the hardware, any AI integration will be brittle. He posits that the most effective path forward is not to teach a machine learning expert about chips, but for the "semiconductor expert to bring some machine learning experience into the role."

The actual skills required to get a chip to work isn't written down anywhere. Building that undocumented intelligence into an AI-enabled workflow requires your domain expertise to be applied to machine learning.

A Roadmap for the Curious Engineer

Moving from theory to practice, Sekar outlines a non-linear, exploratory approach to fluency. He rejects the idea of a rigid "5-step process," instead encouraging engineers to follow a "goat trail" through the jungle of concepts. The first pillar of his roadmap is understanding the "Transformer" architecture, the foundational model proposed by Google researchers in 2017. He admits that reading the original paper is difficult but insists that a qualitative grasp is essential, suggesting visual resources like 3Blue1Brown's series to demystify the math. "If there was ever a fundamental concept you need to learn this decade, it's the idea of the 'Transformer'," he writes, emphasizing that this knowledge is the prerequisite for any meaningful application.
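To make the Transformer concept concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the architecture. The function name is my own, and real implementations add learned projections, multiple heads, and masking; this only shows the qualitative idea that each token's output is a similarity-weighted mixture of every token's value.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key; the output is a softmax-weighted
    average of the values. This is the heart of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True) # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V

# Toy example: 3 tokens with 4-dimensional embeddings, attending to themselves
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one context-mixed representation per token
```

The 3Blue1Brown series Sekar recommends walks through exactly this computation visually, which is a good check that the qualitative picture has sunk in.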

The second, and perhaps most critical, step involves a deep dive into the hardware itself. Sekar challenges engineers to ask simple but profound questions about the ecosystem they inhabit: "Why is high bandwidth memory so important for AI, and why is it so hard to manufacture?" and "How is power delivered from the utility service to the GPU?" He argues that the ability to explain these complexities "over a beer or coffee" is the true measure of expertise. This section is particularly effective because it grounds abstract AI concepts in tangible engineering constraints like power delivery, cooling, and interconnects. Critics might argue that this level of breadth is impossible for a single engineer to master, but Sekar's intent is not mastery of every sub-field, but rather the ability to ask the right questions and understand the system's bottlenecks.
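A back-of-the-envelope calculation shows why Sekar's power-delivery question has teeth. The numbers below are illustrative round figures, not specs of any particular product, but they capture why multi-stage voltage conversion close to the die is unavoidable.

```python
# Why getting power from the utility to the GPU is hard: a rough sketch.
# All numbers are illustrative, not from any specific product.
gpu_power_w = 1000.0   # modern AI accelerators draw on the order of a kilowatt
core_voltage = 0.8     # logic rails sit well below 1 V

current_a = gpu_power_w / core_voltage
print(f"current at the die: {current_a:.0f} A")   # 1250 A

# I^2 * R loss in a mere 0.1 milliohm of distribution resistance:
r_ohm = 0.0001
loss_w = current_a**2 * r_ohm
print(f"loss in 0.1 mOhm: {loss_w:.1f} W")        # 156.3 W
```

Losing over 150 W in a tenth of a milliohm is why power is distributed at high voltage and stepped down as late as possible, and it is the kind of bottleneck Sekar wants engineers to be able to explain "over a beer or coffee."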

Tooling and the Human Element

The final leg of Sekar's roadmap focuses on getting hands dirty with the actual tooling. He recommends starting with Python and accessible frameworks like Scikit-learn or PyCaret before tackling the "whole enchilada" of neural networks with PyTorch. He encourages running local open-source models on consumer hardware to understand the mechanics of fine-tuning without the prohibitive cost of cloud infrastructure. "Experiment with running local models, especially open source ones like GPT-OSS which runs just fine on my MacBook Pro," he advises, turning the abstract concept of model training into a tangible, accessible activity. He also highlights the utility of AI coding assistants like Cursor or Claude Code for automating repetitive tasks, noting that these tools can "speed up your excel work" or generate data processing scripts.
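As a taste of what that first Scikit-learn step looks like, here is a minimal sketch: train a small classifier on one of the library's built-in datasets and measure held-out accuracy. The dataset and model choice are illustrative, not Sekar's specific recommendation.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Built-in toy dataset: 8x8 grayscale images of handwritten digits
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

The whole exercise fits in a dozen lines, which is the point: the fit/predict/score loop learned here carries over directly when an engineer later swaps in their own measurement or validation data.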

However, Sekar's most insightful contribution is his warning against the "AI hammer" mentality. He cautions that the industry is currently "wielding an AI hammer looking for a nail to hit," often rebranding basic optimization techniques as artificial intelligence to stay relevant. His advice is to first identify a genuine pain point in chip design or validation before forcing a solution. "Maybe LLMs are not the answer: that is just fine," he writes, urging engineers to evaluate the right method for the problem rather than the other way around. This pragmatic stance is a necessary antidote to the feverish investment climate, where capital is flowing so heavily that comparisons are being made to the "2008 subprime mortgage crisis." Sekar reminds readers that while the funding may fluctuate, the technology is here to stay, much like the internet survived the dot-com bust.

Today, everyone is wielding an AI hammer looking for a nail to hit - most definitely in the hardware world.

Bottom Line

Vikram Sekar's strongest argument is his insistence that the value of AI in hardware design comes from the engineer's ability to inject their undocumented, tacit knowledge into machine learning workflows, rather than passively relying on chatbots. The piece's greatest vulnerability lies in its assumption that individual engineers have the bandwidth to self-teach complex AI architectures while maintaining their primary design duties. Ultimately, this is a call for a new hybrid competency: the engineer who speaks both the language of silicon and the language of algorithms.

Sources

A practical roadmap to AI fluency for hardware designers

by Vikram Sekar · Vik's Newsletter

Today’s post is fully free. If you’re new, start here! On Sundays, I write deep-dive posts on critical semiconductor technology for the AI-age in an accessible manner for paid subscribers.


Read time: 10 mins

Here is a question on many minds right now: what role does AI play in the work of a hardware engineer today?

The word “AI” elicits two opposite reactions in people: excitement or indifference.

Those in the excitement camp generally embrace AI quickly. I know people who subscribed to the paid ChatGPT version early and have used it ever since, well before most companies formalized AI policies in the workplace. These early adopters I know used it for writing emails, product documentation, and brainstorming marketing ideas.

On the opposite end, many engineers tell me they don’t quite see where AI fits in hardware design. In the early days of ChatGPT, some tried to use it simply as a better Google search. But models weren’t mature and their prompts often missed the mark. Over time we’ve learned that an AI chat box is not the same as a search bar. Today, prompting is seemingly an art form. People now build meta-prompts (prompts to generate prompts).

But for the engineer doing highly technical work, the question remains: where does AI realistically help? If your day is spent in semiconductor tools without AI features, how do LLMs or ML fit into your workflow?

Start-ups are exploring this, but it’s not yet standard. Meanwhile, executives want an “AI checkbox” to show they’re on the trend. Between hype and practicality, what should an individual contributor do to stay relevant?

In today’s post, we will answer these questions and, at the end, provide a practical roadmap you can start using today if you are a hardware engineer still on the fence about learning and using AI. I tend to err on the side of thorough understanding. My recommendations in this post will go beyond just the surface of learning to prompt chatbots.

Tectonic shifts in semiconductors: The case for learning about AI

I remember the time not so long ago when communication technology captured global attention. In 2018, the U.S. blocked Qualcomm’s acquisition by Broadcom over 5G national interest concerns. Since my own background is in radio frequency engineering, the 2010s were an exciting time to have that particular skill set. But if ...