Computers Finally Speak Human

AI as the intelligence layer between what you mean and what machines do

Technology · Product

I've never been more excited about technology.

For the first time in my career, anything I want to build in the software world feels fundamentally within my grasp. Not because I've suddenly become a better engineer. Not because tools have gotten marginally more convenient. Something has shifted at a deeper level—something about the relationship between what I can imagine and what I can create.

I'm not an AI researcher. I couldn't explain the mathematics behind a transformer architecture or walk you through the computer science of attention mechanisms. But I can tell you what I see when I look at this technology, and I think the framing matters. How we understand what AI is will shape how we use it—and who gets to use it.

The simplest way I can describe AI, and large language models in particular, is this: AI is the new keyboard.

The Stack You Never Think About

Every time you use a computer, you're navigating a tower of abstraction layers—most of which are invisible to you. At the very bottom sits the hardware: silicon chips executing instructions as electrical signals. Above that, firmware like the BIOS boots the machine, and the kernel manages its fundamental operations. The rest of the operating system builds on the kernel, translating your high-level requests into something it can process. Applications sit on top of that, each with its own interface and logic. And finally, at the top of the stack, there's you—pressing keys, moving a mouse, tapping a screen.

[Diagram: Human Intent → Input Layer (keyboard, mouse, touch, voice) → Applications (you learn their syntax) → Operating System → BIOS]
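One way to picture the stack is as a chain of translations, each layer converting the output of the layer above into something the layer below can act on. The sketch below is a toy model—the function names and string representations are invented for illustration, not how a real driver or OS works:

```python
# Toy model of the computing stack: each layer translates the layer
# above it into something the layer below can act on. The names and
# representations here are illustrative, not how a real OS works.

def input_layer(keypress: str) -> str:
    """Keyboard driver: a physical key becomes a character code."""
    return f"char:{keypress}"

def application(event: str) -> str:
    """An app interprets the character inside its own grammar."""
    return f"insert {event} into document"

def operating_system(request: str) -> str:
    """The OS turns the app's request into a system call."""
    return f"syscall(write, '{request}')"

def press_key(key: str) -> str:
    # The user's intent descends the whole stack, one translation at a time.
    return operating_system(application(input_layer(key)))

print(press_key("a"))
# syscall(write, 'insert char:a into document')
```

Notice that the user appears only at the very top—everything below the first call is translation the machine does on your behalf, which is exactly why the input layer is the one place where the burden still falls on you.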

That input layer—keyboard, mouse, touchscreen, voice—has always been the bridge between human intention and machine execution. But here's the thing about that bridge: it requires you to do most of the translation work.

When you type a document, you've already structured your thoughts into sentences. When you write code, you've already translated your logic into syntax. When you navigate an application, you've already learned its particular grammar of menus and buttons and gestures. The computer receives your input, but you've already done the heavy lifting of converting your raw intention into something structured enough for the machine to process.

This has been the fundamental bargain of computing since the beginning: you learn the machine's language, and in exchange, the machine does what you want. Entire industries have been built around making this translation easier—better programming languages, more intuitive interfaces, no-code tools that abstract away complexity. But the burden has always been on you to meet the computer partway.

The Intelligence Layer

What large language models introduce is something genuinely new: an intelligence layer that sits beneath your input. You still type. You still speak. You still interact through the familiar interfaces you've always used. But now there's something listening that can do more than faithfully record what you've already figured out how to say—it can help you figure out what to say in the first place.


This creates a virtuous loop. You express an intention—messy, incomplete, in plain language. The intelligence layer interprets it, translates it into structured output, and shows you what it understood. You see the gap between what you meant and what it produced. You refine. It responds. The process of clarifying your own thinking, which used to happen entirely in your head before you touched a keyboard, now happens in dialogue.

This loop changes everything. A keyboard records. The intelligence layer collaborates. It can take your half-formed idea and translate it into whatever the layers below require—code, formatted documents, database queries, API calls—while you iterate toward what you actually meant.
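The loop above can be sketched in code. In this toy version, `interpret()` is a stub standing in for the language model, and the "structured output" is a fake SQL query—in practice you would call a real model API here, which this sketch deliberately does not assume:

```python
# A toy sketch of the intent -> interpret -> refine loop. interpret()
# is a stub standing in for the intelligence layer; the SQL it emits
# is invented for illustration.

def interpret(intent: str, feedback: list[str]) -> str:
    """Stub 'model': maps plain-language intent plus accumulated
    feedback to structured output (here, a fake SQL query)."""
    columns = "name, email" if "email" in " ".join(feedback) else "name"
    return f"SELECT {columns} FROM users;"

def refine_loop(intent: str, acceptable) -> str:
    feedback: list[str] = []
    draft = interpret(intent, feedback)
    while not acceptable(draft):
        # You see the gap between what you meant and what it produced,
        # say so in plain language, and the layer tries again.
        feedback.append("also include the email column")
        draft = interpret(intent, feedback)
    return draft

result = refine_loop("list my users", acceptable=lambda q: "email" in q)
print(result)
# SELECT name, email FROM users;
```

The shape is the point: the intent stays in plain language the whole time, the feedback stays in plain language, and only the output is structured. Clarifying your own thinking happens inside the loop rather than before it.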

[Diagram: Human Intent → Input Layer (keyboard, mouse, touch, voice) → Intelligence Layer (your thought partner for the machine) → Applications (you learn their syntax) → Operating System → BIOS]

This is why calling AI a “tool” undersells what's happening. Tools extend your capabilities. This extends your ability to communicate with machines at all.

It's a new kind of interface—one that translates intent into implementation, not just keystrokes into characters.

What This Means for Builders

If you think of yourself as a builder—someone who wants to create things that live digitally—this shift changes the fundamental constraints of what's possible.

In the old model, having an idea was the easy part. The hard part was the translation chain: learning a programming language, understanding how databases work, figuring out the architecture that would support your vision, debugging the gap between what you imagined and what you actually built. For many ideas, the translation cost was simply too high. Either you needed to invest enormous time learning the technical skills, or you needed to spend money hiring someone who had them, or you gave up and added the idea to the pile of things you'd get to someday.

The intelligence layer collapses that translation cost. Not to zero—but to something dramatically lower. You can describe what you want in plain English, have a conversation about how it should work, and iterate your way to something functional. The syntax, the boilerplate, the implementation details that used to require years of specialized knowledge—the AI handles that.

This doesn't mean the ideas themselves get easier. If anything, the opposite is true. When the constraint isn't “can I build this” but “should I build this,” different skills become important. Understanding your users. Knowing what's actually valuable. Having taste about what deserves to exist. The bottleneck shifts from technical feasibility to judgment and prioritization.

[Diagram: the full stack, with the Intelligence Layer (your thought partner for the machine) fanning out to an iOS app, a website, and a desktop app]

Take Mise, a recipe discovery app I'm building for iOS. The idea has lived in my head for years: a way to swipe through recipes like a dating app, build a weekly meal plan, and generate a grocery list automatically. Simple enough conceptually—but I'd never carved out the months it would take to learn Swift, understand iOS frameworks, and navigate Apple's ecosystem. The translation cost was too high for a personal project. With the intelligence layer, I described what I wanted, iterated through conversations about architecture and interaction patterns, and started building. I'm not an iOS engineer now, but I don't need to be. The app exists, and it works—not because the idea got easier, but because the distance between imagining it and making it finally collapsed.
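The core of that idea fits in a few lines. Here is a toy sketch of the loop the text describes—swipe through recipes, keep the ones you like, merge their ingredients into one grocery list. The recipe data and function names are invented for illustration; this is not Mise's actual code:

```python
# Toy sketch of the Mise flow described above: swipe, plan, generate
# a grocery list. Recipe data is invented for illustration.
from collections import Counter

recipes = {
    "tacos": ["tortillas", "beef", "onion", "cheese"],
    "pasta": ["pasta", "tomato", "onion", "garlic"],
    "salad": ["lettuce", "tomato", "cheese"],
}

def swipe(recipe: str, liked: bool, plan: list[str]) -> None:
    """Swiping right adds the recipe to the weekly plan."""
    if liked:
        plan.append(recipe)

def grocery_list(plan: list[str]) -> Counter:
    # Tally ingredients across the week's plan so duplicates merge
    # into a single line item with a quantity.
    needed: Counter = Counter()
    for recipe in plan:
        needed.update(recipes[recipe])
    return needed

plan: list[str] = []
swipe("tacos", True, plan)
swipe("pasta", True, plan)
swipe("salad", False, plan)
print(grocery_list(plan))
```

Conceptually that's the whole app—which is the point. The hard part was never the logic; it was the Swift, the frameworks, and the ecosystem around it.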

All those small ideas—the ones that weren't worth starting a company for, weren't valuable enough to hire an engineer, weren't important enough to spend six months learning a framework—those are now accessible. Not just the big, fundable, venture-scale ideas. The personal tools. The weird side projects. The solutions to problems only you have. You can build those now.

The Honest Caveat

I can already hear the objection: “You still need to understand what you're building to build it well. AI just obscures the complexity rather than eliminating it.”

This is true, partially. There are domains where deep engineering knowledge remains essential. Systems that need to scale to millions of users. Safety-critical applications. Complex architectures where the wrong abstraction choice compounds into catastrophic technical debt. If you're building infrastructure that the world depends on, you probably need to understand what's happening under the hood.

But here's what I think the objection gets wrong: it overestimates how many problems actually require that depth. Most software isn't operating at world-scale. Most applications don't need perfect architectural decisions. Most ideas just need to work well enough, for a specific set of users, solving a specific problem. For that universe of problems—which is far larger than the set of problems that require deep engineering—the intelligence layer is transformative.

And even where deeper knowledge is required, the barrier to acquiring it has also collapsed. You now have an expert tutor on demand, available to explain any concept at whatever level of abstraction makes sense for you. Want to understand how a database actually works? Ask. Want to know why one architectural pattern is better than another for your use case? Have that conversation. The intelligence layer doesn't just help you build—it helps you learn, at exactly the pace and depth you need.

Just talk to it.

This is a rallying cry for anyone who's ever had an idea for something they wanted to build and filed it away as “too hard” or “not worth the effort.”

The barriers are lower than they've ever been. Not gone—but low enough that the constraint is no longer whether you can build something, but whether you should. That's a profound shift. It means the ideas that matter, the understanding of what's actually valuable, the taste to know what deserves to exist—those are the bottlenecks now.

It's not perfect. The technology is still evolving. There are things it gets wrong, limitations it runs into, cases where you'll hit a wall. But if you want to create something that lives digitally, you now have a collaborator who will listen to your half-formed thoughts and help you translate them into reality.

Just talk to it. You'll be surprised what you can build.

[Diagram: Human Intent → Input Layer (keyboard, mouse, touch, voice) → Intelligence Layer (your thought partner for the machine)]