By Thorsten Meyer, 28 June 2025

A headline that was too good to be true

When Fortune ran with the speculation that Meta was dangling US $100 million signing bonuses to woo OpenAI scientists, it instantly became a talking point across Silicon Valley. Within hours, former OpenAI researcher Lucas Beyer—one of the engineers joining Meta’s new Super‑Intelligence group—logged onto X to set the record straight:

“Yes, we will be joining Meta. No, we did not get 100M sign‑on. That’s fake news.” 

The denial did little to quell speculation, but it offered a revealing glimpse into an uncomfortable truth: if a nine‑figure bonus sounds plausible, the market for elite AI talent must be extraordinarily tight.

How scarce is “frontier‑model” talent?

Recent reporting paints a picture that borders on an NHL free‑agency frenzy:

  • Meta’s secret “List.” The Wall Street Journal describes an internal spreadsheet of roughly 350 “Tier‑A” researchers—mostly PhDs from MIT, Berkeley, Oxford and the like—who are being courted with packages “sometimes reaching $100 million.”  
  • Industry vs. academia drain. According to the 2024 AI Index (Stanford HAI), 70.7 % of newly minted AI PhDs in the U.S. and Canada went straight to industry in 2022, compared with just 20 % who took academic posts—a 30‑point swing over the past decade.  
  • Corporate anxiety. In McKinsey’s January 2025 CxO survey, 46 % of executives who say their Gen‑AI roll‑outs are “too slow” blame pure skill gaps in the workforce.  

Put differently: the world is producing just over 2,100 computer‑science PhDs a year in North America—and far fewer have the deep‑learning chops needed to push the boundaries of large‑scale models. That feedstock is nowhere near the level required for the thousands of foundation‑model projects now under way.

Demand is exploding far faster than supply

LinkedIn’s own job‑posting data shows a 61 % year‑on‑year surge in roles that require AI skills, making them the hardest jobs to fill in enterprise IT—ahead of long‑standing pain‑points such as cybersecurity. 

Meanwhile, generative‑AI platforms have moved from prototype to production in under 24 months, creating an urgent need for:

  • Model engineers who understand distributed‑training tricks that can shave millions off a GPU bill.
  • Applied‑AI product leads who can translate lofty research into valuable user experiences.
  • Governance specialists who can keep regulators—on both sides of the Atlantic—onside.

These are not interchangeable software developers; they are rare “T‑shaped” professionals who blend deep math, systems engineering, and domain know‑how. Companies can’t reskill fast enough.

Why the pipeline is struggling to keep up

  • Lengthy training cycle. A PhD pipeline can take 5‑7 years, and AI specialisations only modestly increased doctoral output to ~2,105 grads in 2022. Consequence: supply grows linearly while model complexity (and compute budgets) grow exponentially.
  • Brain‑drain to industry. Seven out of ten AI PhDs now opt for commercial labs immediately after graduation. Consequence: universities lose mentors, slowing the next cohort and narrowing basic‑research diversity.
  • Visa and mobility frictions. The international share of CS PhD grads dipped to 66 % in 2022 after policy and pandemic headwinds. Consequence: many would‑be researchers choose Canada, the U.K. or China instead of the U.S., deepening regional gaps.
  • Compute, not cash, is king. Even Meta’s richest packages compete less on salary and more on guaranteed access to bespoke H100 clusters. Consequence: start‑ups and universities without access to frontier‑scale infrastructure struggle to attract talent.

Collateral damage: academia, start‑ups—and ultimately safety

The market distortion is already visible:

  • Academic labs report shelved projects and fewer grad‑level teaching assistants.
  • Seed‑stage start‑ups with promising ideas cannot match Big Tech hardware budgets, so founders sell early or join a Big Tech lab.
  • Safety research risk. When only a handful of firms control frontier expertise, it raises concerns that profit motives may outrun open scientific scrutiny.

What can be done?

  1. Expand the funnel. Fast‑track visas for advanced AI degrees and create more publicly funded doctoral fellowships tied to open‑source deliverables.
  2. Democratise compute. National research clouds (the EU’s planned AI Factories, the U.S. NSF’s NSFCloud) can give universities equal footing.
  3. Incentivise academic tenure. Matching grants for professors who stay, plus royalty‑sharing schemes when IP is licensed to industry.
  4. Corporate upskilling. Companies like Deloitte and NVIDIA now run internal “AI academies” to retrain thousands of software engineers into LLM fine‑tuning and RLHF experts—often in under six months.
  5. Talent‑sharing compacts. Similar to medical residencies, labs could loan researchers to regulatory agencies or safety‑auditing bodies for one year, broadening oversight without permanently losing head‑count.

The bottom line

Lucas Beyer’s dismissive “fake‑news” tweet doesn’t mean AI researchers aren’t commanding eye‑watering paychecks; it merely highlights the mythic proportions the market assumes when demand so dramatically outstrips supply. Until the pipeline of advanced talent widens—and until we make it attractive for researchers to stay in academia, safety labs, and mission‑driven start‑ups—the seven‑figure bidding wars will continue, rumours and all. And that could slow not just innovation but the responsible stewardship of one of the century’s defining technologies.
