
Toward computer psychology

Some Guy offers a rare, unfiltered look inside the mind of an early AI practitioner, arguing that the next breakthrough in large language models won't come from bigger datasets, but from understanding the "psychology" of the machine itself. While the tech industry obsesses over compute power, this piece suggests that the key to reducing errors lies in how we linguistically structure data to match the model's internal token-based reality.

The Psychology of the Machine

The author opens with a disarming admission of social anxiety and a self-perceived inability to communicate, framing their technical insights as emerging from a unique, almost neurotic way of processing the world. "I always assume that I'm the dumbest person in the room until proven otherwise," Some Guy writes, describing this not as humility but as a "neurosis" that fuels their work. This personal vulnerability serves as a setup for a counter-intuitive technical claim: that the most effective way to interact with an AI is to treat it not as a database, but as a creature with a specific, weird cognitive style.

Toward computer psychology

The core of the argument rests on the idea that Large Language Models (LLMs) experience reality through tokens, units roughly analogous to words or word fragments. Some Guy proposes that the naming conventions of data structures, often dismissed by human programmers as irrelevant, actually dictate how well an AI can "grab onto" relationships and avoid hallucinations. "If you're using an LLM to classify things and return values out of a longer list this might give you a few percentage points of increased performance just by changing the stupid key name," they argue. This is a radical departure from standard computer science methodology, which prioritizes functional efficiency over semantic aesthetics.
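The kind of renaming the author has in mind can be sketched in a few lines. Everything below is a hypothetical illustration, not code from the article: the field names and the sample customer message are invented, and the two payloads are functionally identical, differing only in whether the keys are cryptic abbreviations or whole English words.

```python
import json

# Hypothetical sketch of the author's claim: two semantically identical
# classification payloads. The first uses abbreviated keys that a tokenizer
# splits into opaque fragments; the second uses whole-word keys that map to
# tokens the model has seen in ordinary English context.

cryptic = {
    "cat_id": None,               # abbreviation -> opaque token fragments
    "conf_pct": None,
    "src_txt": "Refund my order, it arrived broken.",
}

descriptive = {
    "category": None,             # whole words carry semantic relationships
    "confidence_percent": None,
    "customer_message": "Refund my order, it arrived broken.",
}

def to_prompt(schema: dict) -> str:
    """Render a fill-in-the-nulls classification prompt from a JSON schema."""
    return ("Fill in every null field and return only the completed JSON:\n"
            + json.dumps(schema, indent=2))

print(to_prompt(descriptive))
```

On the author's hypothesis, sending the second prompt to a model should cost nothing extra yet leave the classifier with more linguistic surface to "grab onto."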

The whole thing is a mind crystal made out of the relationships between words!

The author illustrates this with a bizarre yet plausible experiment: testing whether JSON keys named with rhyming words, alliteration, or whole-word strings outperform cryptic, abbreviated keys. Critics might note that this approach feels anecdotal and lacks the rigor of a controlled academic study, yet the author's track record of success in an industry bottlenecked by talent lends the hypothesis weight. The argument is that if an AI "understands" the relationship between words, then feeding it data that respects those relationships—like rhyming keys—creates a more stable cognitive environment for the model.
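The experiment described above can be framed as a small A/B harness that holds the task fixed and varies only the key names. The sketch below is entirely hypothetical: `call_model` is a trivial keyword stub standing in for a real LLM request, and the key variants, labels, and dataset are invented for illustration.

```python
import json

# A/B harness in the spirit of the author's experiment: same task, same
# data, different JSON key names. call_model is a stub; a real run would
# swap in an LLM client and a properly labeled dataset.

KEY_VARIANTS = {
    "cryptic":    {"cat": None, "txt": None},
    "whole_word": {"category": None, "customer_message": None},
}

def build_prompt(variant: str, text: str) -> str:
    """Fill the text field of the chosen schema and render the prompt."""
    schema = dict(KEY_VARIANTS[variant])
    text_key = list(schema)[-1]          # by convention, last key holds input
    schema[text_key] = text
    return "Classify this message and return JSON:\n" + json.dumps(schema)

def call_model(prompt: str) -> str:
    # Stub: a real experiment would send `prompt` to an LLM here.
    return "returns" if "refund" in prompt.lower() else "other"

def run_experiment(dataset, variant: str) -> float:
    """Fraction of labeled examples the (stubbed) model classifies correctly."""
    hits = sum(call_model(build_prompt(variant, text)) == label
               for text, label in dataset)
    return hits / len(dataset)

labeled = [("Refund my order, it arrived broken.", "returns"),
           ("Where is my package?", "other")]
for variant in KEY_VARIANTS:
    print(variant, run_experiment(labeled, variant))
```

With a real model behind `call_model`, the few-percentage-point gap the author reports would show up as a difference between the two variants' accuracy scores; that is exactly the controlled comparison critics say is missing.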

The Human Cost of Innovation

Beyond the technical theory, the piece offers a stark portrait of the human toll of being at the bleeding edge of AI implementation. Some Guy describes a life consumed by the job, with roughly twelve-hour workdays and a constant struggle to explain their insights to colleagues who view their methods as "stupid." The author notes, "I keep having to host back to back meetings where I say something like, 'Yeah, remember that thing I told you a month ago? The reason we did that is so we could do this other thing. I forgot to tell you.'" This highlights a significant friction in the industry: the gap between early adopters who intuitively grasp the new paradigm and the broader workforce that is still trying to apply old rules to new tools.

The author acknowledges that while their "dumb bag of tricks" might seem trivial to others, they are among the very first to figure out how to make these systems work at scale in a corporate environment. "It takes a bit for industries to figure things out and for the 'one dumb trick' solutions to be broadly shared," Some Guy observes. The implication is that the current chaos and confusion in the sector are temporary, bridging the gap between the initial hype and the eventual, mundane reality of functional AI tools.

LLMs are not useless. They're just not magical.

This grounding of the technology is perhaps the piece's most valuable contribution. In an era of breathless marketing, the author insists on a pragmatic view where AI is a tool that requires specific, human-centric tuning. The author speculates on the future impact, suggesting that if these techniques are adopted broadly, "everyone would get much better and much faster customer service with many fewer errors," even if it means some jobs disappear. The vision is one of a world where health insurance and bureaucratic processes become navigable, not through magic, but through a better understanding of the machine's "psychology."

Bottom Line

Some Guy's argument is compelling precisely because it rejects the standard engineering playbook in favor of a more empathetic, psychological approach to machine learning. The strongest part of the piece is the assertion that data structure is not just a technical detail but a cognitive interface for the AI. However, the biggest vulnerability lies in the lack of empirical data to back up the specific claims about rhyming keys and alliteration; the argument relies heavily on the author's personal reputation rather than peer-reviewed results. As the industry matures, the real test will be whether these "dumb tricks" can be systematized or if they remain the idiosyncratic secret of a few overworked pioneers.

Sources

Toward computer psychology

by Some Guy

The State of Me, Computer Psychologist.

I always assume that I’m the dumbest person in the room until proven otherwise. This is better understood as a neurosis rather than humility. If you ask me if it’s likely I’m the dumbest person in the room, as a purely intellectual question, I would do some hemming and hawing before answering with some form of, “Probably not.”

Whenever I open my mouth, however, I expect people to look at me like I’m stupid.

I’m a bad talker, especially when sleep deprived. I’ll say something and people will immediately disregard it and I’ll think, there I go again, opening my big stupid mouth. It can be really bad. Sometimes someone will repeat my actual words back to me and then I’ll fall over myself trying to correct… myself.

This is an exaggerated example of me explaining how to count to ten:

In my mind: 1,2,3,4,5,6,7,8,9,10 but done immediately before moving onto another topic so it’s no longer stored in my brain that way.

Out of my mouth: “Seven. Oh sorry, you’re right you need six first. I’m just saying don’t forget the reason you’re doing that is to get to nine and then ten. Does anyone remember that cartoon Voltron? That music went so hard. It’s incredible how that was the level of quality for a children’s cartoon a few decades ago. But you feel this sense of progression listening to that music just like when you’re counting, yeah? The music is going somewhere. Oh yeah yeah, start with one. I’m not saying don’t start with one. One is essential. One sets up the whole thing. That’s a given. Duck Tales is another banger, it’s been almost four decades and I can still remember almost all the lyrics. ‘Whoo-oo’ just speaks to me somehow but it starts by analogizing life in the city of Duckburg to a hurricane. Why’d I start at seven then? Well, the tricky part is seven. Seven is how you bridge between six and eight and that’s where our core value is at. It’s the last odd number in the single digits that’s prime. Seven ate nine? Don’t you see? Um… I’m not saying this right. Oh man! You know what I haven’t thought about for years? Darkwing Duck! I guess maybe that’s why I’m skipping steps because it lives in the same place in my head as Darkwing Duck, ...