The Moat Is Scar Tissue

Meta reportedly delayed Avocado. Apple pushed Siri. Neither has a compute problem. Frontier AI capability is organizational, not computational — and it doesn't transfer from a hiring spree.


Meta reportedly delayed "Avocado" — its internal frontier model — from March to May 2026. Apple pushed a more capable LLM-powered Siri to later in 2026. Neither company has a compute problem.

This isn't a scheduling slip. It's a signal about what frontier AI actually requires.

The conventional read is that AI development is ultimately a capital game: rent enough H100s, hire enough PhDs, and you'll converge on OpenAI or Anthropic's performance. Meta and Apple have run that experiment. The results are coming back negative — and product pivots or safety holds don't fully account for delays this persistent across organizations this well-funded.

What capital actually buys

Capital buys infrastructure. It buys talent. What it doesn't buy is the accumulated judgment of having run hundreds of large-scale RLHF pipelines and learned what fails.

RLHF — reinforcement learning from human feedback — sounds procedural. In practice it's closer to craft. Knowing which annotators to trust, how to structure reward models, what failure modes to watch for, when to act on a reward signal that's drifting: this isn't documented knowledge. It lives in the people who have done it and in the systems those people built around repeated failure.
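One example of that craft is knowing when a drifting reward signal is noise and when it's a problem. Here is a deliberately toy sketch (not any lab's actual tooling) of the kind of monitor a team might build after being burned: flag training batches where the recent rolling mean of reward-model scores pulls away from the long-run mean. The function name and thresholds are illustrative assumptions.

```python
from collections import deque

def detect_reward_drift(scores, window=100, threshold=0.5):
    """Toy drift monitor: flag batch indices where the rolling mean of
    recent reward-model scores deviates from the long-run mean by more
    than `threshold`. Real pipelines use far richer diagnostics."""
    flagged = []
    recent = deque(maxlen=window)  # last `window` scores
    total, count = 0.0, 0          # running sum/count for long-run mean
    for i, score in enumerate(scores):
        if count >= window:
            rolling = sum(recent) / len(recent)
            overall = total / count
            if abs(rolling - overall) > threshold:
                flagged.append(i)
        recent.append(score)
        total += score
        count += 1
    return flagged
```

The hard part isn't this code; it's the judgment encoded in `window` and `threshold`, which only repeated failure teaches you to set.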

Data curation compounds the same way. The difference between a model trained on a well-curated dataset and one trained on a mass scrape shows in the long tail — edge cases, rare domains, adversarial inputs. Getting this right requires judgment that accumulates across training run after training run. It doesn't transfer cleanly from a whitepaper or a new hire's LinkedIn.
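To make that concrete, here is a minimal sketch of a curation pass, assuming nothing about any real lab's pipeline: exact deduplication plus two crude quality heuristics. Every name and threshold here is a hypothetical stand-in; production pipelines stack dozens of such filters, each one tuned by what went wrong in an earlier run.

```python
import hashlib

def curate(docs, min_len=200, max_symbol_ratio=0.3):
    """Toy curation pass: drop exact duplicates, very short documents,
    and documents dominated by non-alphanumeric characters (a crude
    proxy for markup or binary junk)."""
    seen = set()
    kept = []
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate
        seen.add(digest)
        if len(doc) < min_len:
            continue  # too short to carry signal
        clean = sum(c.isalnum() or c.isspace() for c in doc)
        if 1 - clean / len(doc) > max_symbol_ratio:
            continue  # likely markup or binary junk
        kept.append(doc)
    return kept
```

Any one of these filters is trivial; knowing which hundred to apply, in what order, and where each one silently damages the long tail is the part that doesn't transfer.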

OpenAI has run large-scale RLHF pipelines longer than almost any organization on Earth — and at higher stakes. Anthropic was founded by people who brought that institutional memory with them from OpenAI. Google DeepMind has been doing large-scale neural network training since before transformers existed. Meta and Apple have engineers of the same caliber on paper. What they don't have is the scar tissue.

The TSMC parallel

This pattern has a cleaner historical analogy than most people reach for.

When chip companies tried to close the gap with TSMC on advanced process nodes, the equipment was available. The basic process recipes could be approximated. What couldn't be transferred was yield optimization: the institutional responses to edge cases, the defect analysis workflows, the judgment calls that turn a theoretical process into a manufacturable one at scale. TSMC built that over decades. It doesn't compress.

AI training is running the same dynamic, faster. The organizations that have run the most training runs have developed what amounts to an organizational immune system. They know which anomalies to act on and which to ignore. They know which pre-training decisions propagate into alignment problems six steps downstream. They know which architectural choices are load-bearing versus cargo-culted. That knowledge takes time to build, and it accumulates in teams, not in documents.

The structural implication

If Meta and Apple can't close this gap with near-unlimited resources, then frontier AI providers are sticky in a way conventional software vendors are not. You're dependent not just on their compute or their APIs but on organizational capabilities that are genuinely scarce and genuinely non-fungible.

The organizations building this operational knowledge now have a compounding advantage that doesn't reset with the next funding round. Their model weights may or may not be ahead at any given moment, but their judgment about how to make the next weights better is improving faster than that of anyone who started later, and that gap compounds.

The real moat isn't compute. It's knowing what to do with it after a thousand failures.