The Promise of an AI Research Machine
Imagine turning Claude Code—the already powerful AI assistant—into something that behaves like a well-trained research monster. That's not hypothetical anymore. Chase H has built exactly that, and he's showing others how to do it too.
The Core Workflow
What happens when you combine three tools? You get what Chase calls "research on steroids." The workflow uses Claude Code as the central hub, NotebookLM for deep analysis and deliverable creation, and Obsidian as a personal knowledge base that trains Claude to speak and think the way you prefer.
The process starts with YouTube search capabilities built through Claude's Skill Creator. From there, data flows to NotebookLM—which lacks a public API but can be accessed through a GitHub repository called notebooklm-pi. Once analyzed, results return to Claude Code, which records everything in Obsidian markdown files.
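The YouTube-search step can be sketched in code. This is an illustrative sketch of what a yt-dlp-backed search skill might do, not Chase's actual skill: the function names and the markdown layout are assumptions, though the `ytsearchN:` search prefix and flat-extraction options are real yt-dlp features.

```python
def search_youtube(query: str, limit: int = 5) -> list[dict]:
    """Return structured metadata for the top `limit` YouTube search results."""
    import yt_dlp  # requires `pip install yt-dlp`

    opts = {"quiet": True, "extract_flat": True}  # metadata only, no downloads
    with yt_dlp.YoutubeDL(opts) as ydl:
        # "ytsearchN:QUERY" is yt-dlp's built-in YouTube search syntax
        info = ydl.extract_info(f"ytsearch{limit}:{query}", download=False)
    return [
        {"title": e.get("title"), "url": e.get("url"), "views": e.get("view_count")}
        for e in info.get("entries", [])
    ]

def to_markdown(results: list[dict]) -> str:
    """Format search results as an Obsidian-friendly markdown list."""
    lines = [
        f"- [{r['title']}]({r['url']}) - {r['views'] or '?'} views"
        for r in results
    ]
    return "\n".join(lines)
```

The same two-step shape (structured fetch, then markdown formatting) is what lets the results flow cleanly into NotebookLM and, later, into Obsidian notes.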
Critically, this isn't locked into YouTube research only. The template adapts to PDFs, articles, text files, or any information source you need.
How It Works
The Skill Creator tool allows users to describe what they want in plain language—"create a skill that searches YouTube and returns structured video results using yt-dlp"—and Claude builds it automatically. Users don't need technical backgrounds.
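Skills built this way typically land as a folder containing a SKILL.md file with YAML frontmatter telling Claude when and how to use them. A hedged sketch of what a generated YouTube-search skill might look like (the wording is illustrative; the `--flat-playlist`, `--print`, and `ytsearchN:` pieces are real yt-dlp syntax):

```markdown
---
name: youtube-search
description: Search YouTube and return structured video results (title, URL, view count) using yt-dlp.
---

# YouTube Search

When the user asks for video research, run yt-dlp in search mode:

    yt-dlp "ytsearch5:QUERY" --flat-playlist --print "%(title)s %(webpage_url)s %(view_count)s"

Return the results as a markdown list, one video per line.
```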
For NotebookLM specifically, the process involves installing notebooklm-pi via terminal commands, authenticating through a browser, then asking Claude to use Skill Creator to build a NotebookLM skill. This gives access to everything NotebookLM offers: creating notebooks with up to 50 sources from Google Drive, YouTube, or text files, and generating deliverables including audio podcasts, infographics, slide decks, mind maps, and flashcards.
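Since notebooklm-pi's interface isn't documented here, the hand-off to NotebookLM can only be sketched in shape. Everything below is hypothetical (`create_notebook`, `request_deliverable`, and the constants are stand-ins), but it encodes the two facts the source does state: the 50-source cap per notebook and the available deliverable types.

```python
# Shape-only sketch of the NotebookLM step; the real notebooklm-pi API may differ.
MAX_SOURCES = 50  # NotebookLM's per-notebook source limit
DELIVERABLES = {"audio_podcast", "infographic", "slide_deck", "mind_map", "flashcards"}

def create_notebook(title: str, sources: list[str]) -> dict:
    """Validate inputs before handing off to the NotebookLM skill."""
    if len(sources) > MAX_SOURCES:
        raise ValueError(f"NotebookLM notebooks cap at {MAX_SOURCES} sources")
    return {"title": title, "sources": sources, "deliverables": []}

def request_deliverable(notebook: dict, kind: str) -> dict:
    """Queue one of NotebookLM's deliverable types for generation."""
    if kind not in DELIVERABLES:
        raise ValueError(f"unknown deliverable type: {kind}")
    notebook["deliverables"].append(kind)
    return notebook
```

Validating the source count client-side like this is a cheap way to fail fast before an unofficial API call that might otherwise error opaquely.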
The final step combines individual skills into one super-skill using the same Skill Creator. Users simply describe what they want in a single pipeline—search for videos, send to NotebookLM, create analysis and infographic—and Claude executes everything at once.
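The super-skill's single-request flow can be sketched as a pipeline of the three steps, with each real skill invocation replaced by a stub. The function names and stub bodies are hypothetical; only the search → analyze → record-in-Obsidian ordering comes from the source.

```python
from pathlib import Path

def run_pipeline(query: str, vault: Path) -> Path:
    """Hypothetical one-shot pipeline: search, analyze, record in the vault."""
    # 1. Search (stub standing in for the YouTube-search skill)
    videos = [{"title": f"Top result for {query}", "url": "https://example.com"}]
    # 2. Analyze (stub standing in for the NotebookLM skill)
    analysis = f"# Analysis: {query}\n\n" + "\n".join(
        f"- [{v['title']}]({v['url']})" for v in videos
    )
    # 3. Record the deliverable as a markdown note in the Obsidian vault
    note = vault / f"{query.replace(' ', '-')}.md"
    note.write_text(analysis)
    return note
```

Because each stage reads the previous stage's structured output, Claude can run the whole chain from one natural-language request instead of three.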
The Obsidian Secret
Obsidian serves double duty. First, it provides human-readable records of every workflow run: you can see how files link together, click through related documents, and view graph visualizations. More importantly, all of those markdown files are also readable by Claude Code itself. Over time, the claude.md file within the Obsidian vault trains Claude on your conventions: how you like deliverables formatted, what tone you prefer, and which analysis styles work best.
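What accumulates in claude.md is plain markdown that Claude reads on each run. An illustrative (not actual) excerpt of the kind of conventions it might hold:

```markdown
## Research conventions

- Deliverables: start every research note with a "Key Takeaways" section.
- Tone: concise and direct; no marketing language.
- Linking: wiki-link every video note back to its topic page, e.g. [[MCP Servers]].
- Analysis style: include view counts and flag outliers explicitly.
```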
This creates a self-improving loop. Each workflow run refines Claude's understanding of your preferences. The more documents accumulate, the better Claude becomes at predicting exactly what you want—and it happens naturally without explicit retraining.
Chase demonstrates this by asking for research on the top five MCP servers, with analysis of view counts, outliers, and gaps, plus an infographic. After six minutes, NotebookLM delivers a full markdown file of key takeaways covering Context7, Supabase, Figma, Sentry, PostHog, and Playwright, and generates the requested infographic.
The Flexibility Problem
A counterargument worth considering: this workflow assumes users already know what they want from research. People with ill-defined research goals may struggle to benefit from automation that optimizes for specific outputs. The template adapts, but you still need to know how to adapt it.
Claude Code becomes a well-trained personal assistant that executes this workflow on your behalf—and that's super powerful.
Bottom Line
The strongest part of this argument is its practical implementation: real commands, real setup steps, and a working demonstration. Chase proves the workflow works with actual output. The biggest vulnerability is the dependency on multiple tools—each requires authentication, installation, and configuration—which creates friction for casual users. For serious researchers willing to invest thirty minutes in setup, this combination genuinely represents something most people aren't doing yet.