Software Isn't Dying, But It's Certainly Changing

Software isn’t going to die anytime soon. But “static” software might. When AI can generate a custom tool for a specific workflow in minutes, nobody’s going to pay a monthly subscription for a one-size-fits-all version of the same thing. The SaaS model depended on the fact that writing, deploying, and maintaining code was hard. That’s becoming less true by the day.

So what will people actually pay for? Complexity that AI can’t just generate on the fly. I think the answer is something people are starting to call malleable software—software that has AI baked in, that can reshape itself around what the user is actually trying to do.

But the software itself isn’t really the point. The point is the guardrails.

The Platform Model

Think about how macOS works. You want a dialog box? You call NSAlert(), and you get a dialog box that looks and feels like every other Mac dialog box. That’s the whole reason macOS has a consistent style—every app is calling into the same shared APIs.

The future of software looks like this, except the thing calling those APIs won’t be a developer. It’ll be an LLM. You build a platform that exposes a set of tools and APIs, and an agent calls them to render UI, manage state, and handle interactions. It’s closer to building something like SwiftUI than a traditional app.

And here’s the key shift: this happens at runtime. In traditional software, NSAlert() gets called because a developer wrote an if-statement somewhere that deterministically leads there. Same input, same code path, same dialog box, every time. In malleable software, nothing is baked in. The LLM decides at runtime what to call based on what the user is doing and what makes sense in context. The UI isn’t predetermined—it’s emergent.
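To make the shift concrete, here's a minimal sketch of that runtime loop. The tool names (`showDialog`, `renderTable`) and the dispatch shape are hypothetical, invented for illustration rather than taken from any real product's API:

```typescript
// A toy platform tool surface. Each entry is a capability the platform
// exposes; the agent picks which one to invoke based on context, not a
// hard-coded code path.
type ToolResult = { ok: boolean; detail: string };

const tools: Record<string, (args: Record<string, unknown>) => ToolResult> = {
  showDialog: (args) => ({ ok: true, detail: `dialog: ${args.message}` }),
  renderTable: (args) => ({ ok: true, detail: `table with ${args.rows} rows` }),
};

// The platform dispatches whatever the model chose at runtime, and
// rejects calls to anything outside the exposed surface.
function dispatch(toolName: string, args: Record<string, unknown>): ToolResult {
  const tool = tools[toolName];
  if (!tool) return { ok: false, detail: `unknown tool: ${toolName}` };
  return tool(args);
}

// Same user action could produce either call, depending on context:
const r1 = dispatch("showDialog", { message: "Unsaved changes" });
const r2 = dispatch("openFile", {}); // not exposed, so it's refused
console.log(r1.detail); // dialog: Unsaved changes
console.log(r2.detail); // unknown tool: openFile
```

The important design choice is that the registry, not the model, is the source of truth for what's possible. The agent proposes; the platform disposes.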

What you’re providing as a developer is the framework that keeps that emergence from going off the rails. And there are two kinds of guardrails that matter.

Data Guardrails

The straightforward one first. Say you’re building a spreadsheet platform with an AI agent. A user asks it to restructure their financial data, and the agent accidentally wipes a column. In a dumb system, that data is gone. In a well-built platform, every data mutation is staged before it’s committed. The platform catches the destructive action, holds it, and gives the agent enough context to understand what went wrong and roll it back—without the user ever losing a single cell.

The user didn’t have to hit undo. The agent didn’t have to be perfect. The guardrails handled it.
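A minimal sketch of the staged-mutation idea, assuming a toy spreadsheet keyed by column name. The names here (`Ledger`, `stage`, `commit`, `rollback`) are illustrative, not a real API:

```typescript
type Mutation = { column: string; newValues: number[] | null };

class Ledger {
  private data = new Map<string, number[]>();
  private staged: { column: string; previous: number[] | undefined }[] = [];

  setColumn(column: string, values: number[]) {
    this.data.set(column, values);
  }

  // Every agent mutation is staged: the previous value is recorded
  // before the change is applied, so nothing is ever silently lost.
  stage(m: Mutation) {
    this.staged.push({ column: m.column, previous: this.data.get(m.column) });
    if (m.newValues === null) this.data.delete(m.column); // destructive action
    else this.data.set(m.column, m.newValues);
  }

  // Commit makes staged changes permanent; rollback restores the
  // recorded previous values, cell for cell.
  commit() { this.staged = []; }
  rollback() {
    for (const s of this.staged.reverse()) {
      if (s.previous === undefined) this.data.delete(s.column);
      else this.data.set(s.column, s.previous);
    }
    this.staged = [];
  }

  get(column: string) { return this.data.get(column); }
}

const ledger = new Ledger();
ledger.setColumn("Q1 revenue", [100, 200, 300]);
ledger.stage({ column: "Q1 revenue", newValues: null }); // agent wipes a column
ledger.rollback(); // guardrail catches it; the column comes back intact
```

The real version needs durability, concurrency, and a policy for when to auto-commit, but the shape is the same: mutations pass through a layer that remembers enough to reverse them.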

This matters, but it’s the less interesting half of the problem.

UI Guardrails

The bigger idea is that the agent isn’t just doing tasks behind the scenes—it’s reshaping the interface itself.

Think about GitHub PR review. You open a PR with 47 changed files and GitHub gives you a flat list of diffs. That’s it. That’s what everyone gets, regardless of how they actually think about code.

But people think about code review in fundamentally different ways. One person starts with the test files to understand the intent of the change before reading any implementation. Another reads commit-by-commit because they want the narrative of how the author got there. And then there’s the most common frustration: you’re reading a diff, you see a function was modified, that function calls some other function, and you need to understand how they fit together to evaluate the change. But that other function predates this PR, so it’s not in the diff. Now you’re opening tabs, spelunking through the repo, trying to build a mental model that the tool should be helping you build. GitHub has some LSP-like features for code navigation now, but they’re bolted on—not integrated into the review workflow in any meaningful way.

Everyone bends their review process to fit GitHub’s interface. Not because it matches how they think, but because it’s what they’re given. Malleable software inverts that. The agent reshapes the review UI around how you think about code. Tests first? Done. Commit narrative? Done. Inline navigation to referenced functions with context panels? The agent sets it up because it’s learned that’s how you review.

Of course, customizability without constraints is how you get Jira—infinitely configurable and universally despised. So the platform’s job isn’t just to allow customization. It’s to choose the right set of primitives—diffs, comments, file trees, navigation panels, commit views—and provide a substrate by which they can be composed into arrangements that always make sense. The agent can build wildly different review experiences for different users, but the platform ensures that every possible composition is valid, navigable, and reversible.

This is the same design problem as building something like SwiftUI: you define composable primitives, you constrain how they combine, and the result is that invalid states are unrepresentable. The art of building malleable software is going to be the art of selecting and constraining those primitives. It’s the same kind of hard, interesting design work that makes a tool like neovim so powerful—just without requiring your users to break their brains on vimscript or Lua to benefit from it.
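Here's what that constraint-by-construction idea might look like for a review surface, as a sketch. The primitive names (`diff`, `commits`, `context`) and the layout shape are invented; the point is that the types only admit valid compositions:

```typescript
// The primitives the platform chooses to expose for code review.
type Pane =
  | { kind: "diff"; file: string }
  | { kind: "commits"; order: "oldest-first" | "newest-first" }
  | { kind: "context"; symbol: string };

// A layout is either a single pane or a split of exactly two layouts.
// There is no way to express an empty split or a dangling pane, so
// invalid arrangements are unrepresentable, the SwiftUI-style trick.
type Layout =
  | { kind: "pane"; pane: Pane }
  | { kind: "split"; direction: "horizontal" | "vertical"; first: Layout; second: Layout };

// The agent can compose wildly different layouts per user...
const testsFirst: Layout = {
  kind: "split",
  direction: "vertical",
  first: { kind: "pane", pane: { kind: "diff", file: "auth_test.go" } },
  second: { kind: "pane", pane: { kind: "context", symbol: "validateToken" } },
};

// ...but every layout the types admit can be walked, rendered, and
// navigated by the same machinery.
function countPanes(l: Layout): number {
  return l.kind === "pane" ? 1 : countPanes(l.first) + countPanes(l.second);
}
console.log(countPanes(testsFirst)); // 2
```

Picking the primitives is the hard part; once they're right, the composition rules fall out of the type structure.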

We’re Already Seeing Prototypes

You can see early versions of this pattern today. OpenAI’s ChatGPT has Canvas, and Anthropic’s Claude has Artifacts—in both, the LLM calls tools inside the app, those tools produce visible artifacts the user can interact with, and you iterate back and forth. It’s very limited right now, but the shape of the thing is there. The agent manipulates the UI, the user responds, the agent adapts.

Dotfiles Without the Dotfiles

Here’s why this matters even beyond developer tools.

Developers have always had deeply customized machines. Dotfiles, keybindings, shell aliases, window management—the whole deal. The gap between how a developer uses a computer and how everyone else does is enormous. But even for developers, building and maintaining that level of customization takes real effort. For non-technical people, it’s always been a non-starter.

Malleable software closes that gap. When your tools have AI baked in, customization stops being something you configure and starts being something you describe. “Next time I open this file, also open that other file and put them side by side.” And it just works.
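A toy sketch of what "describe, don't configure" could mean under the hood: the agent parses the user's sentence into a persisted rule that the platform replays later. The rule shape and function names are invented for illustration:

```typescript
// A rule the agent stores on the user's behalf, instead of the user
// editing a config file. (Filenames here are hypothetical examples.)
type Rule = { whenOpening: string; alsoOpen: string; layout: "side-by-side" };

const rules: Rule[] = [];

// The agent, having understood "next time I open this file, also open
// that other file", records the preference once.
rules.push({ whenOpening: "schema.sql", alsoOpen: "migrations.md", layout: "side-by-side" });

// From then on, the platform consults the rules on every file open —
// deterministic replay of a preference that was expressed in English.
function filesToOpen(file: string): string[] {
  const extra = rules.filter((r) => r.whenOpening === file).map((r) => r.alsoOpen);
  return [file, ...extra];
}
console.log(filesToOpen("schema.sql")); // ["schema.sql", "migrations.md"]
```

Note the division of labor: the LLM runs once, at description time, to produce the rule; the cheap deterministic part runs on every interaction.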

The only real blocker right now is cost. Running an LLM for every micro-interaction is still too expensive. But the price is dropping fast. Open-weight models—primarily Chinese ones at the moment—are already bargain-basement cheap and roughly as capable as frontier models were six months ago, at a quarter to an eighth of the cost. If that curve continues, we’ll have models cheap enough for ambient, always-on AI integration very soon.

That’s when software gets really interesting. Not when AI can write code for you—that’s already here. When your tools start to think about how you think, and reshape themselves accordingly.