Everyone hears “OpenAI has no moat” and files it under investor drama. That’s a mistake.

If the core product is just “tokens through an API,” then the company you built your workflow, your agency offer, or your internal automation job around can get swapped like a payment processor. And when that happens, your “AI role” can vanish even if AI itself keeps growing.

OpenAI became the default stack for startups, agencies, and corporate pilots. I’m not here to say “AI is dead.” I’m here to map how commoditized models plus shaky economics make AI jobs fragile—and the one bet most people make that backfires.


What “No Moat” Actually Means in 2025

The surface-level story is simple: OpenAI is the leader, therefore OpenAI is the platform, therefore if you learn “GPT workflows” you’re future-proof.

You see this everywhere—job listings that basically translate to “be good at prompting,” agencies branding themselves as “GPT implementation partners,” internal teams that treat an OpenAI API key like it’s a long-term infrastructure decision.

But “no moat” means the opposite of “platform.” It means the advantage doesn’t stick. It means the thing you’re betting your career on—model superiority—gets competed away faster than your org can update a Jira ticket.

In 2025, frontier model advantage is transient because the product is weirdly compressible. It’s not like building a new iPhone factory or a global logistics network. It’s closer to shipping a new version of a compiler: impressive, hard, expensive—but once the technique exists, competitors converge. Sometimes they don’t copy weights, they copy methods. Sometimes they distill. Sometimes they just throw more compute and data at the same basic playbook.

The outcome for you—the person trying to build stable work on top—is parity. Not perfect parity, but enough parity that procurement starts asking questions.

You can see the multi-model reality already. Teams don’t just say “use GPT.” They route. They compare. They evaluate. Claude might be the coding choice this week. Gemini might win on a benchmark tomorrow. A smaller open-source model might be “good enough” for support tickets, contract summarization, or internal search.

So the question becomes: if five models can do 90% of the job, what exactly is the moat?

This is where switching costs matter—and they’re lower than most people want to admit.

A lot of modern AI stacks are designed to make models interchangeable. You’ve got abstraction layers, wrappers, orchestration frameworks, and prompt templates that are basically portable text files. Even “agent” systems usually boil down to: call model, call tools, store results, repeat. If you’ve built clean interfaces—input, output schema, tool calls, eval harness—you can swap vendors without rewriting the whole product.
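
A "clean interface" in this sense can be sketched in a few lines of Python. Everything here is illustrative, not any vendor's SDK: `ModelClient`, `Completion`, and `EchoClient` are hypothetical names standing in for whatever adapter wraps a real API.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Completion:
    """Vendor-neutral result shape (illustrative)."""
    text: str
    input_tokens: int
    output_tokens: int

class ModelClient(Protocol):
    """Any vendor adapter that satisfies this interface is swappable."""
    def complete(self, system: str, user: str) -> Completion: ...

def summarize_ticket(client: ModelClient, ticket_text: str) -> str:
    # Business logic depends only on the interface, never on the vendor.
    result = client.complete(
        system="Summarize the support ticket in two sentences.",
        user=ticket_text,
    )
    return result.text

# A stub adapter stands in for a real vendor SDK; swapping vendors means
# writing another adapter, not rewriting summarize_ticket.
class EchoClient:
    def complete(self, system: str, user: str) -> Completion:
        return Completion(text=user[:80], input_tokens=len(user.split()), output_tokens=20)

print(summarize_ticket(EchoClient(), "Customer cannot log in after password reset."))
```

If every workflow in your stack looks like `summarize_ticket`, a vendor swap is a new adapter class, not a rewrite.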

And if you haven’t built it cleanly?

That’s not a moat. That’s tech debt.

People push back and say, “Brand is a moat.” And sure—brand matters. ChatGPT is the consumer verb right now. Distribution matters too—defaults are powerful.

But notice what “real distribution” actually means: being baked into the operating system, the browser, the enterprise suite. That’s the moat. Owning the workflow. Owning the surface where the user already lives.

If you don’t own that layer, you’re fighting on price and performance. And price/performance gets arbitraged.

That’s the part nobody’s mentioning when they casually say “OpenAI will always be fine.” Fine compared to what? If your main product is tokens, and tokens are becoming a commodity, then “fine” starts to look like: lower margins, higher churn risk, and constant pressure to spend more just to stay in the lead.

So what would a real moat look like?

  • Proprietary distribution competitors can’t replicate quickly
  • Being the default inside regulated workflows where switching requires audits and re-approvals
  • Owning unique datasets you can legally leverage at scale
  • Controlling a choke point: identity, device, payments, enterprise admin, or the system of record

If you don’t have that, you’re not a platform.

You’re a supplier.

And when suppliers don’t have moats, pricing pressure isn’t a possibility—it’s the whole game. The moment a CFO believes models are interchangeable, your vendor becomes a line item to cut. The moment your vendor can be swapped, your “GPT-native” role stops being a role and starts being a temporary implementation project.


Here’s where people get weirdly optimistic: “Inference is getting cheaper, therefore everyone wins.”

Cheaper tokens are great when you’re building. They let you prototype faster, ship faster, and automate more without begging finance for budget. But cheaper intelligence also changes who gets paid—and that’s the punchline most AI workers don’t want to sit with.

When the core product gets cheaper every quarter, the value doesn’t disappear. It migrates. It moves away from the model vendor and toward whoever controls:

  • the integration layer
  • the distribution layer
  • the compute layer

Think about electricity. Power got cheaper and more reliable over time. That didn’t create a permanent monopoly for the company that invented the turbine. It created a utility industry: capital intensive, regulated, low margin, constantly squeezed between fixed costs and political pressure to keep prices down.

“Utility” sounds boring. That’s the point. Utilities don’t get valued like unstoppable platforms. They get valued like infrastructure: necessary, yes. Sexy, no.

Now apply that to tokens. If models converge and routing becomes normal, “intelligence” starts to look like a commodity input: like bandwidth, storage, or compute cycles. You don’t pick your cloud provider because you’re emotionally attached to S3. You pick them because pricing, reliability, compliance, and tooling fit. And if a competitor is 20% cheaper for the same performance, you switch—or you threaten to switch—and you negotiate.

That’s what “too cheap to meter” really implies. It’s a marketing line that sounds generous, but it’s economically hostile to the business selling the meter.

Because the metering is the revenue.

The only reason a token business works is that you can count usage and charge per unit. So when the story becomes “intelligence will be basically free,” you’re telling the market: “Our pricing power is temporary.” Markets take you seriously when you say that—even if you meant it as a vision statement.

There’s also a brutal mismatch between marginal cost and fixed cost.

  • Marginal cost: what it costs to generate one more token (falls with better hardware, kernels, distillation, quantization)
  • Fixed cost: the stuff you’ve already committed to (compute contracts, data center buildouts, training runs, staffing, frontier R&D)

So you get a trap: the product gets cheaper at the edge, but the company has to keep spending like it’s building a moonshot. That pushes the whole business toward utility economics: tons of capital, lots of volume, thin margins, and very little forgiveness when growth slows.
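
The trap can be put in toy numbers. All figures below are invented for the sketch, not real OpenAI economics: if price halves each generation while fixed commitments stay flat, volume has to double just to keep net income at zero.

```python
# Toy illustration of the marginal-vs-fixed-cost trap.
# Every number here is made up for the sketch.

fixed_cost = 10e9  # annual commitments: compute contracts, buildouts, frontier R&D

scenarios = [
    # (price per token, marginal cost per token, tokens sold per year)
    (6.0e-6, 1.00e-6, 2e15),
    (3.0e-6, 0.50e-6, 4e15),  # price halves, so volume must double...
    (1.5e-6, 0.25e-6, 8e15),  # ...and double again, just to stand still
]

for price, marginal, volume in scenarios:
    gross = (price - marginal) * volume  # contribution after marginal costs
    net = gross - fixed_cost             # fixed commitments don't shrink
    print(f"price ${price * 1e6:.2f}/M tokens: gross ${gross / 1e9:.1f}B, net ${net / 1e9:.1f}B")
```

In each row the company runs twice as hard to end up in the same place: that's the utility-economics shape the paragraph describes.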

People counter with: “Scale fixes it. Sell enough tokens and you make it up in volume.”

Scale helps—but it helps the landlords more than the tenant.

The landlords are the cloud platforms and hardware suppliers. They get paid for capacity: chips, power, racks, hosting. They amortize costs across many customers and workloads.

The pure model vendor is fighting both directions: customers demanding lower prices and higher reliability, while competitors and open models cap how much you can charge for “intelligence” at all.

And once intelligence starts behaving like a utility, procurement starts treating it like one. They negotiate. They dual-source. They build fallback routes. They ask for exit clauses. They plan for outages. They want the option to swap models without rewriting the business.

Which leads to the uncomfortable connective tissue: if the economics push model vendors into a pricing war, they need constant financing to keep running at frontier scale.

That’s where the “drama” stops being drama and starts being a structural risk you can’t prompt your way out of.



Financing Reality: Why Growth Narratives Turn Into Restructurings

Most of the OpenAI conversation gets stuck on “Is the model good?” when the real question is: Can the company afford to keep being that good?

The valuation story around frontier AI isn’t built like a normal software company. It’s built like an AGI lottery ticket.

When investors buy the lottery, they aren’t underwriting steady margins. They’re underwriting an outcome. The implicit deal is: “Burn whatever you need to burn, because if you hit the jackpot first, the rest won’t matter.”

That narrative is powerful. It also creates a funding treadmill, because the company has to keep making the next promise to justify the next check.

And online discourse turns “someone said X” into “X is verified,” so let’s keep this grounded: you hear claims about enormous annual compute spend, giant multi-year purchase commitments, and dependency on partners and financiers to keep the whole machine running. Whether the exact number is $20B or $100B isn’t the point.

The structure is the point: fixed costs are gigantic, and the product is trending toward commodity pricing.

The loop looks like this:

  1. You promise the next leap (agents, AGI, whatever the market will fund).
  2. You raise at a valuation that assumes you’re already the winner.
  3. You buy the compute to chase the promise.
  4. You lock in obligations that don’t care how pricing power evolves.
  5. You need growth to cover the obligations, so you push into enterprise, education, consumers—anything that expands volume.
  6. Competition pushes prices down; customers demand redundancy; unit economics get squeezed.
  7. You return to step one and promise the next leap again.

This is why “no moat” and “financing” are the same story. If you don’t have durable pricing power, the only way to keep funding frontier-scale burn is to keep convincing someone that the next version changes the rules.

That works right up until it doesn’t.

And when it doesn’t, companies don’t just “slow down.” They restructure. Markets tolerate losses when losses are framed as investment on the way to monopoly margins. But if intelligence is turning into metered electricity, monopoly margins stop being a believable destination.

It’s also messy because the failure modes aren’t clean. They’re political.

  • Partner extraction: If you rely on a partner for distribution, compute, or capital, they can demand better terms, preferential access, rights, or simply shift priorities and starve you without “acquiring” you. It’s not even villain behavior. It’s leverage.
  • Commitment renegotiation: If growth slows under massive purchase commitments, you renegotiate, cut scope, or sell parts—not because the product is worthless, but because cash flow doesn’t match obligations.
  • Hype-cycle exit: If private markets priced you as inevitable, the incentives to market like perfection are real—IPO talk, platform announcements, “next model” hype—anything that keeps the story ahead of the spreadsheets.

The obvious counterpoint is: “But demand is real.”

Yes. Demand is absolutely real. People want more AI, not less.

But real demand doesn’t guarantee stable economics for one supplier. Airlines have real demand. They still go bankrupt. Telecom has real demand. Same story. Demand doesn’t cancel bad economics.

And if you’re thinking, “Even if OpenAI wobbles, AI keeps going,” you’re right.

That’s why this is a career story, not a doomer story. If the supplier wobbles, the blast radius isn’t “AI disappears.” The blast radius is: contracts get repriced, vendors get swapped, teams get consolidated, and the work you thought was a long-term role turns into a short-term migration project.

So when you hear “OpenAI drama,” don’t treat it like celebrity gossip for tech people. Treat it like an early warning signal that the market is trying to turn frontier AI into a utility before the frontier players have utility-grade balance sheets.

And when that tension snaps, the first thing companies do is reduce risk.

The easiest risk to reduce is headcount tied to one vendor.



The Hidden Job Risk: AI Work Built on a Vendor That Can Be Swapped

This is where the conversation stops being about balance sheets and starts being about your calendar invite.

The market keeps telling people “AI jobs are future-proof,” and a lot of workers heard that as: “If I’m the GPT person, I’m safe.”

But most of the AI roles that exploded over the last two years aren’t durable professions yet. They’re vendor-dependent operations roles—and operations roles live and die by procurement.

You can see it in the hiring wave: “AI leads,” “prompt engineers,” “automation specialists,” “LLM product managers,” and a mini-economy of agencies selling “ChatGPT for your business.”

The pitch is always the same: wire an LLM into your workflows and save time. And honestly, plenty of these projects work.

That’s the trap.

When a project works, leadership assumes the magic is the vendor, not the design.

And a depressing amount of “AI implementation” in 2025 is thin: take an existing workflow, slap a model call in the middle, ship a UI that looks like a chat box. Maybe add retrieval. Maybe add a tool call.

The business value is still fragile because it’s riding on an assumption: your chosen vendor is the best combination of capability, price, and risk.

That assumption expires constantly.

Where the risk concentrates:

  • Prompt-only roles. If your job is “I’m really good at getting GPT to do the thing,” you’re competing with three forces at once: other models reaching parity, the model itself getting easier to use, and your company realizing they can standardize prompts and reduce headcount.
  • Chatbot wrappers and thin automations. If your product is “we built a bot for your internal knowledge base,” the buyer can choose between five vendors, buy a vertical tool, or spin up an open model behind the firewall. If your differentiation is a nicer prompt chain, you’re not a company—you’re a temporary configuration.
  • Agencies selling ‘one-model implementations.’ The second an enterprise says “We need Gemini for compliance,” “We’re standardizing on Azure,” or “Legal won’t approve that data flow,” your offering gets reframed as migration work. Migration work is lower margin, less defensible, and usually shorter-term.

And here’s the silent killer: enterprise procurement and compliance aren’t emotional. They’re not impressed by your demo. They run on policy memos, risk registers, and vendor questionnaires about data flow, retention, training policy, incident response, regional hosting, and whether the vendor will sign the contracts big companies require.

One day your LLM workflow is celebrated as innovation. The next day security says, “We can’t send this data there,” and your system becomes an exception that must be justified every quarter.

That’s when “AI roles” turn into consolidation targets.

Leadership doesn’t want ten teams each running their own model stack. They want one approved pipeline, one set of logs, one budget, one vendor list.

And even if you can switch models, that doesn’t mean your job survives the switch. Often it means the opposite. When tools get standardized and models get interchangeable, the organization needs fewer specialists. The work becomes repeatable. Easier to fold into an existing platform team.

The company doesn’t say, “Great, now we can do even more prompting.”

They say, “Great, now we can reduce the number of people touching this.”

This is the second-order layoff most people don’t see coming: startups built on one API get margin-crushed and pivot or die; agencies lose retainers when buyers choose bundled solutions; internal teams get merged when procurement forces a single stack.

None of this requires AI demand to fall.

It just requires switching to get easier.

So if your identity at work is “I’m the OpenAI person,” you’re not building a moat. You’re building a single point of failure.

And the minute leadership believes models are swappable, they’ll treat the entire role as swappable too.



Who Wins in the “Solar Age” (And What That Means for Your Career)

Everyone argues model versus model like it’s a heavyweight title fight—GPT versus Gemini, Claude versus whatever dropped this week.

Meanwhile, economic gravity is pulling value away from the ring entirely.

The real question isn’t “Which model is smartest?”

It’s: Where does money land when intelligence gets cheap and interchangeable?

We’re moving from centralized frontier dependence to a world where intelligence is embedded everywhere: cloud, edge devices, enterprise stacks, open-source deployments.

Call it the “solar age”—not because it sounds cool, but because it captures the pattern: you stop buying electricity from one massive power plant and start generating it everywhere, then you fight over the grid and the appliances.

Here are the winners.

1) Hardware and on-device ecosystems

If AI runs on your phone, laptop, car, headset—whoever owns the silicon and the OS gets default distribution. They don’t have to beg you to choose their model. They ship an update and suddenly a billion devices have an assistant, local summarization, transcription, image tools.

Career implication: “AI” stops being a standalone product and becomes a device feature. Opportunities shift to edge optimization, privacy, deployment, performance constraints—not “which prompt gets the best answer.”

2) Cloud grid landlords

AWS, Azure, GCP—anyone who can host many models, route between them, and sell reliability, governance, and enterprise integration.

If models are interchangeable, the premium shifts to uptime, compliance, regional hosting, billing, audit logs, and integration with everything the enterprise already runs.

Career implication: “multi-model” becomes a keyword, and the job becomes designing the grid—cost controls, observability, latency budgets, failovers, access policies.

3) The electricians

High-leverage people who wire cheap intelligence into messy reality.

Integration isn’t glamorous, but it’s where budgets live: data plumbing, identity and permissions, evals that prove a model didn’t quietly break, monitoring that catches hallucinations before a customer does, change management that gets humans to actually adopt the workflow.

And when an LLM makes a mistake, the model vendor doesn’t get yelled at.

You do.

4) Vertical appliance makers

Domain-specific products that use cheap intelligence to widen margins—legal drafting inside controlled workflows, healthcare note summarization with guardrails and audits, finance reconciliation inside policy frameworks.

The model isn’t the product. The product is the workflow, data, integrations, and liability management.

That’s the moat: not “we have the smartest model,” but “we own the outcome inside a regulated, repeatable process.”

So what does this mean if you still need to pay rent?

The safest AI careers sit closest to three things:

  • data rights
  • workflow ownership
  • measurable outcomes

Not token access. Not “I know the best prompt.”

If you can tie AI work to a system of record—CRM, ERP, ticketing, finance, legal case management—you become harder to replace because the hard part is not generation. It’s trust, traceability, and fitting how the business actually operates.

And here’s what should calm you down: commoditization doesn’t delete work. It reallocates it. When intelligence gets cheaper, more processes become automatable.

But the people who win are the ones building the plumbing and controls—not the ones worshipping the power plant.


The Anti-Fragile AI Job Playbook

Steal this rule: never be the “Model X person.” Be the person who can make any model deliver a measurable outcome inside a real organization.

If models are commodities, the durable advantage is portability plus accountability.

1) Build portability on purpose

Don’t judge models by vibes. Build a simple evaluation harness: real tasks your business cares about, a scoring rubric, a repeatable way to run it across vendors.

If you can’t compare outputs, latency, and cost side by side, you’re not doing engineering—you’re doing fan culture.
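
A harness doesn't need to be fancy to beat vibes. Here's a minimal sketch, assuming a crude keyword rubric and stub "vendors" in place of real API calls (all names and tasks are illustrative):

```python
# Minimal eval harness sketch: real tasks, a scoring rubric, repeatable
# runs per vendor. The rubric and vendor stubs are placeholders.

def keyword_score(output: str, required: list[str]) -> float:
    """Crude rubric: fraction of required facts present in the output."""
    hits = sum(1 for kw in required if kw.lower() in output.lower())
    return hits / len(required)

TASKS = [
    {"prompt": "Refund policy for damaged goods?", "required": ["30 days", "receipt"]},
    {"prompt": "How do I reset my password?", "required": ["email", "link"]},
]

def run_eval(model_fn, tasks):
    """Average rubric score for one model across the task set."""
    scores = [keyword_score(model_fn(t["prompt"]), t["required"]) for t in tasks]
    return sum(scores) / len(scores)

# Stub "vendors" for demonstration; real ones would wrap API calls
# behind the same single-argument callable shape.
vendor_a = lambda p: "Refunds within 30 days with a receipt. Reset via the email link."
vendor_b = lambda p: "Please contact support."

for name, fn in [("vendor_a", vendor_a), ("vendor_b", vendor_b)]:
    print(name, run_eval(fn, TASKS))
```

Swap the stubs for real adapters and the same loop becomes a vendor comparison you can rerun every time pricing or models change.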

Once you have evals, you can do routing and fallbacks: cheap model for easy tasks, strong model for hard tasks, backup when the primary vendor rate-limits or changes pricing.

That skill turns you from “prompt person” into “platform person.”
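
The routing-and-fallback pattern above can be sketched in a few lines. Model names and the failure simulation are hypothetical; a real version would wrap actual vendor SDKs:

```python
# Routing/fallback sketch: cheap model for easy tasks, strong model for
# hard ones, backup when the primary fails. All model names are made up.

def call_model(name: str, task: str) -> str:
    # Placeholder for a real vendor call. Here the "strong" model
    # simulates a rate-limit failure so the fallback path is exercised.
    if name == "strong-model":
        raise RuntimeError("429 rate limited")
    return f"[{name}] {task}"

def route(task: str, difficulty: str) -> str:
    primary = {"easy": "cheap-model", "hard": "strong-model"}[difficulty]
    try:
        return call_model(primary, task)
    except RuntimeError:  # rate limit, outage, or a pricing kill-switch
        return call_model("backup-model", task)

print(route("triage this ticket", "easy"))   # cheap model handles it
print(route("draft the contract", "hard"))   # primary fails, backup answers
```

The interesting engineering is in the routing table and the failure policy, not in any single model call: that's the platform work.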

2) Own the hard parts that don’t commoditize

Data rights and pipelines. Retrieval that’s actually correct. Governance: who can access what, where data goes, how long it’s retained. Monitoring and incident response. Audit trails.

If you’ve ever tried explaining to legal why an output is trustworthy, you already know the moat isn’t the text generator.

It’s the controls around it.

3) Speak ROI like a grown-up

Executives don’t approve tokens. They approve dollars.

Translate usage into measurable outcomes: reduced handle time, fewer escalations, faster close rates, fewer compliance misses, fewer hours of manual reconciliation.

Design experiments that survive a skeptical CFO. Pre-register success metrics. Run A/B tests when you can.

If you can say, “This workflow saves 400 hours a month and costs $2,300 in inference,” you become hard to cut because you’re not a cost center—you’re a margin lever.
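
The arithmetic behind that sentence is worth showing explicitly. Using the example's own figures plus one assumption (a $50/hour loaded labor cost, which is illustrative, not from the article):

```python
# Back-of-envelope ROI using the example numbers above plus one
# labeled assumption: loaded labor cost of $50/hour.

hours_saved_per_month = 400
loaded_hourly_rate = 50    # assumption, illustrative only
inference_cost = 2300      # monthly inference spend from the example

labor_value = hours_saved_per_month * loaded_hourly_rate  # dollars/month
net_savings = labor_value - inference_cost
roi_multiple = labor_value / inference_cost

print(f"Net monthly savings: ${net_savings:,}")
print(f"Return per inference dollar: {roi_multiple:.1f}x")
```

That's the shape of the pitch a CFO actually approves: dollars in versus dollars out, with the assumptions visible.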

4) Pick a domain and a wedge

Domain gives you context and trust. Wedge gives you leverage.

Domain: finance ops, healthcare admin, legal review, sales enablement, customer support.
Wedge: evaluation, security, integration, data engineering, cost/performance tuning.

“General AI person” is a temporary title. “The person who makes AI safe and profitable in this domain” is a career.

5) De-risk vendor lock-in—quietly

If your company is all-in on one vendor, don’t panic. De-risk.

Ask: could we swap models in a week without breaking production?
If the answer is no, build the exit ramp:

  • add abstraction layers
  • store prompts/schemas in version control
  • separate business logic from model calls
  • introduce a second model in a non-critical workflow
  • run a vendor-swap drill the way security runs incident drills
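
A concrete way to make the drill cheap is a version-controlled config that names the model and its fallback, so a swap is a one-line change rather than a code change. Everything below is a sketch with hypothetical names:

```python
# Sketch of a version-controlled model config that makes a vendor-swap
# drill a one-line edit. All vendor/model names are illustrative.

import json

CONFIG = json.loads("""
{
  "workflow": "ticket-summarizer",
  "model": "vendor_a/general-v2",
  "fallback": "vendor_b/general-v1"
}
""")

# Adapters stand in for real SDK wrappers, keyed by config name.
ADAPTERS = {
    "vendor_a/general-v2": lambda p: "summary from vendor A",
    "vendor_b/general-v1": lambda p: "summary from vendor B",
}

def run(config, prompt):
    # Swapping vendors means editing the config, not the business logic.
    return ADAPTERS[config["model"]](prompt)

print(run(CONFIG, "Customer reports double billing."))
# Drill: point "model" at the fallback and re-run the same workflow.
CONFIG["model"] = CONFIG["fallback"]
print(run(CONFIG, "Customer reports double billing."))
```

If that second run works in staging, you've proven the exit ramp exists before you ever need it.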

You’re not being disloyal.

You’re being resilient.

Which brings us back to the hook: “no moat” isn’t doom. It’s a warning label telling you where to stand if the floor starts moving.


The Real Moat Is You

If AI is commoditizing, the job losses won’t come from smarter models—they’ll come from cheaper switching and fewer people needed to run the same workflow.

So here’s the question: could your company swap models in a week—and if they did, what role would disappear first?
