Nvidia has spent years ruling the discrete graphics market, but the company’s N1X system‑on‑chip (SoC) signals a strategic leap into consumer ARM computing. Developed with MediaTek and borrowing heavily from Nvidia’s Grace Blackwell GB10 “Superchip”, the N1X integrates a CPU, GPU and AI accelerator in one package. Recent leaks on Geekbench and FurMark have revealed early specifications and performance numbers, while reports from the supply chain suggest the product’s release has been pushed into 2026 because of Microsoft’s operating‑system roadmap and ongoing chip revisions. This article synthesises what is currently known about the N1X, evaluates early benchmarks, and assesses its potential impact on the PC industry.

Technical Highlights

Architecture and core configuration

  • 20‑core ARM CPU: Geekbench listings show the N1X with 20 cores arranged into two 10‑core clusters (tomshardware.com). Sources assume the performance cluster uses Arm Cortex‑X925 cores at up to 4 GHz, while the efficiency cluster uses Arm Cortex‑A725 cores (pokde.net). This configuration mirrors the GB10 Superchip used in Nvidia’s DGX Spark AI mini‑PCs.
  • Blackwell‑based GPU: The same Geekbench entry reports 48 streaming multiprocessors (SMs), translating to 6,144 CUDA cores (tomshardware.com), the same count as Nvidia’s GeForce RTX 5070. However, the prototype runs its GPU at only around 1.05 GHz and relies on system LPDDR5X memory rather than dedicated GDDR, indicating a focus on efficiency over raw throughput (tomshardware.com).
  • Memory subsystem: The N1X is expected to support up to 128 GB of LPDDR5X memory on a 256‑bit bus. This trade‑off sacrifices the high bandwidth of GDDR memory to provide large capacity and low power draw, which is valuable for AI models and creative workloads.
  • Power envelope: Reports indicate the entire SoC operates at around 120 W TDP (tomshardware.com). For comparison, discrete desktop GPUs like the RTX 5070 draw about 250 W, highlighting Nvidia’s attempt to deliver desktop‑class graphics performance within a laptop‑friendly power budget.
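To put the memory trade‑off in numbers, here is a back‑of‑envelope bandwidth comparison. The LPDDR5X‑8533 speed grade is an assumption (the leaks confirm the 256‑bit bus but not the transfer rate), while the RTX 5070 figures (28 Gbps GDDR7 on a 192‑bit bus) are from Nvidia’s public specifications:

```python
# Back-of-envelope peak memory bandwidth comparison.
# LPDDR5X-8533 for the N1X is an ASSUMPTION; the RTX 5070's
# 28 Gbps GDDR7 / 192-bit bus is from public specs.

def bandwidth_gbs(transfer_rate_mtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: transfers per second x bytes per transfer."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

n1x = bandwidth_gbs(8533, 256)        # ~273 GB/s
rtx_5070 = bandwidth_gbs(28000, 192)  # ~672 GB/s

print(f"N1X (assumed LPDDR5X-8533, 256-bit): {n1x:.0f} GB/s")
print(f"RTX 5070 (GDDR7 28 Gbps, 192-bit):   {rtx_5070:.0f} GB/s")
print(f"Ratio: {n1x / rtx_5070:.0%}")
```

Even at this fast LPDDR5X grade, the SoC would see roughly 40 % of the 5070’s memory bandwidth, at a fraction of the power and with far more capacity.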

AI and platform integration

  • Grace/Blackwell synergy: The N1X combines the Grace CPU architecture with a cut‑down Blackwell GPU, bringing Nvidia’s server‑class GB10 architecture to the consumer market (tomshardware.com). The platform aims to deliver 180–200 TOPS of AI compute (tomshardware.com), rivalling dedicated accelerators from AMD, Intel and Qualcomm.
  • MediaTek partnership: Nvidia partnered with MediaTek to design the CPU cluster and integrate the SoC. MediaTek will also be permitted to sell the chip independently, according to Nvidia CEO Jensen Huang (reuters.com).
  • LPDDR5X unified memory: The shared memory architecture gives the GPU direct access to system memory, simplifying platform design but limiting memory bandwidth. Early benchmarks show this constraint reduces graphics performance relative to discrete RTX cards, but capacity may be more important for on‑device AI.

Early Benchmarks

CPU performance

Geekbench 6 results for the N1X show promising single‑core performance but only moderate multi‑core scaling. An HP development board equipped with the N1X, running Ubuntu 24.04 with 128 GB of RAM, scored 3,096 points in the single‑core test and 18,837 points in the multi‑core test (tomshardware.com). The chip was clocked at about 4 GHz (tomshardware.com). For context:

| Processor | Single‑core score | Multi‑core score | Notes |
|---|---|---|---|
| Nvidia N1X (prototype) | 3,096 | 18,837 | Ubuntu 24.04; 20 cores (10 × Cortex‑X925 + 10 × Cortex‑A725) |
| AMD Ryzen AI MAX+ 395 | 3,125 | 21,035 | 16 cores; AMD’s Strix Halo architecture |
| Intel Core Ultra 9 285HX | 3,078 | 22,104 | Windows 11; x86 architecture |
| Apple M4 Max | 4,054 | 25,913 | macOS; custom ARM design |
| Qualcomm Snapdragon X Elite | 2,693 | 13,950 | Windows 11; Oryon V1 cores |

All scores via tomshardware.com.

The N1X’s single‑core score edges out Qualcomm’s Snapdragon X Elite and competes closely with AMD’s and Intel’s x86 parts, but its multi‑core score trails leading chips such as AMD’s Strix Halo and Apple’s M4 Max. This suggests the N1X’s big.LITTLE design may not yet fully exploit its core count, potentially due to immature firmware or conservative clock scaling.
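One way to quantify that scaling gap is to divide each chip’s multi‑core score by its single‑core score, then by its core count. Treat this as a rough sketch: efficiency cores naturally score below the fastest core, so no big.LITTLE chip approaches 100 %, and the Snapdragon X Elite’s 12‑core count is taken from Qualcomm’s public specs rather than the leak.

```python
# Effective multi-core scaling from the Geekbench 6 scores quoted above.
# scaling = multi-core / single-core; per-core efficiency = scaling / cores.
chips = {
    "Nvidia N1X (prototype)":      (3096, 18837, 20),
    "AMD Ryzen AI MAX+ 395":       (3125, 21035, 16),
    "Qualcomm Snapdragon X Elite": (2693, 13950, 12),
}

for name, (single, multi, cores) in chips.items():
    scaling = multi / single
    print(f"{name}: {scaling:.1f}x scaling across {cores} cores "
          f"({scaling / cores:.0%} per-core efficiency)")
```

On these figures the N1X scales about 6.1× across 20 cores (roughly 30 % per core), versus around 42–43 % for AMD and Qualcomm, which is consistent with the immature‑firmware hypothesis above.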

Integrated‑graphics performance

While the N1X’s CPU scores are competitive, its integrated GPU is the more disruptive element. A Geekbench OpenCL test revealed that the early sample’s 6,144‑CUDA‑core GPU scored 46,361 points (tomshardware.com). That places the N1X iGPU ahead of AMD’s Radeon 890M and Intel’s Arc integrated parts, and even surpasses Apple’s M3 Max integrated graphics, which tops out around 37,500 points (tomshardware.com). However, the N1X’s graphics performance remains only about 25 % of a desktop GeForce RTX 5070 (which has the same core count) because the SoC runs at lower clock speeds and relies on LPDDR5X memory (tomshardware.com). A FurMark run under Windows 11 produced an OpenGL score of 4,286 and showed the GPU operating at just 63 % utilisation and ~59 °C (tomshardware.com). Analysts note this under‑utilisation likely stems from driver immaturity, power limits and firmware safeguards, meaning the current benchmarks do not represent the chip’s final potential (tomshardware.com).
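Since shader throughput scales roughly with core count × clock, the leaked clocks alone account for much of that gap. A back‑of‑envelope sketch (the RTX 5070’s ~2.51 GHz boost clock is from Nvidia’s public specs; the N1X’s ~1.05 GHz is from the leaked runs):

```python
# Rough FP32 throughput estimate: TFLOPS ~= cores x clock(GHz) x 2 (FMA).
# N1X clock is from leaked prototype runs; RTX 5070 boost clock is public.
def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * clock_ghz * 2 / 1e3

n1x = fp32_tflops(6144, 1.05)   # ~12.9 TFLOPS
rtx = fp32_tflops(6144, 2.51)   # ~30.8 TFLOPS
print(f"N1X ~{n1x:.1f} TFLOPS vs RTX 5070 ~{rtx:.1f} TFLOPS "
      f"(ratio {n1x / rtx:.0%})")
```

Clock speed alone predicts roughly 42 % of the 5070’s FP32 throughput; that the observed figure is nearer 25 % points to the additional constraints the benchmarks expose: memory bandwidth, power limits and immature drivers.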

Release Timeline and Supply‑Chain Dynamics

Delay to 2026

Initial rumours suggested Nvidia would unveil the N1X at Computex 2025 and release it later that year. However, multiple reports now agree that the launch has shifted to Q1 2026. DigiTimes sources cited by Tom’s Hardware note that MediaTek and Nvidia pushed the schedule back because Microsoft’s next‑generation Windows OS (often referred to as “Windows 12”) is running behind, and the N1X platform depends on features in that OS (tomshardware.com). The same report points to a weakened notebook market and ongoing chip revisions as further reasons for the delay (tomshardware.com). Tom’s Guide similarly reports that the N1X may debut at CES 2026, citing the same Windows delays (tomsguide.com).

Hardware revisions and integration issues

Earlier in 2025, SemiAccurate and TechPowerUp reported that engineering flaws in the N1X forced Nvidia to respin the silicon, pushing the launch to late 2026 (techpowerup.com). Later reports from DigiTimes present a more nuanced picture: Nvidia and MediaTek are still revising the chip, but the delay may also be strategic, aimed at aligning with Microsoft’s OS rollout and stabilising demand (tomshardware.com). Tom’s Hardware notes that the N1X is expected to deliver 180–200 TOPS of AI compute and that major OEMs such as Dell, HP, Lenovo, Asus and MSI are preparing designs (tomshardware.com). Even so, the chips may not reach mass volumes until late 2026 because of unresolved integration problems with endpoint devices (tomshardware.com).

Windows on ARM dependency

Another factor underlying the delay is the maturity of Windows on ARM. Nvidia’s N1X aims to run both Linux and Windows; early benchmarks on Windows 11 show progress but highlight driver gaps, as seen in the FurMark test, where the GPU utilised only 63 % of its resources (tomshardware.com). Tom’s Hardware stresses that Nvidia is aggressively validating the chip on Windows 11, a necessary step before it can compete in mainstream laptops (tomshardware.com). The platform will likely launch alongside Microsoft’s next major Windows release to ensure seamless support for ARM hardware and AI features.

Strategic Implications and Market Impact

Competitive positioning

The N1X enters a rapidly evolving landscape of AI‑enabled laptop chips. Qualcomm’s Snapdragon X Elite has already brought high‑performance ARM cores to Windows laptops, while AMD’s Ryzen AI series (Strix Halo and Strix Halo 2) and Intel’s Core Ultra (Meteor Lake and Panther Lake) integrate substantial AI acceleration. Apple continues to dominate the ARM laptop segment with its M‑series chips. According to early benchmarks, the N1X’s single‑core performance nearly equals AMD’s and Intel’s leading laptop CPUs (tomshardware.com), and its integrated GPU already surpasses current iGPUs (tomshardware.com). If Nvidia can refine its drivers and OS support, the N1X could deliver a desktop‑class graphics experience without a separate GPU, appealing to content creators, AI researchers and gamers seeking portable power.

Ecosystem leverage

Nvidia’s strength lies in its CUDA ecosystem and software stack. By bringing CUDA cores to a consumer ARM platform, Nvidia could entice developers to build AI and graphics applications that run efficiently on both laptops and servers. MediaTek’s involvement ensures a robust supply chain and the potential for the chip to appear in both premium and mid‑tier devices. However, Nvidia must overcome software compatibility hurdles—many Windows applications and games remain x86‑centric. Apple succeeded by controlling both hardware and software; Nvidia will depend on Microsoft’s translation layers and on developers to optimise for ARM.

Risks and uncertainties

  • Driver and OS maturity: The under‑utilisation seen in FurMark suggests the software stack is immature (tomshardware.com). Achieving stable, high‑performance drivers on both Linux and Windows will be critical.
  • Power and thermal limits: Delivering a 6,144‑CUDA‑core GPU within a 120 W envelope could lead to throttling and limited boost frequencies; early tests show low clock speeds (tomshardware.com). Efficient thermal design will be essential.
  • Market timing: With AMD and Intel planning advanced AI‑enhanced CPUs for late 2025–2026, and Qualcomm’s next‑generation Snapdragon X 2 on the horizon, Nvidia must launch on time to remain competitive. Further delays could erode the advantage of its large integrated GPU.
  • Pricing: Nvidia’s discrete GPUs are expensive; if the N1X commands a high premium, adoption could be limited in a price‑sensitive PC market.

Potential impact

If Nvidia executes its strategy successfully, the N1X could:

  • Catalyse broader adoption of Windows on ARM: A powerful Nvidia‑branded chip could persuade OEMs to invest in ARM laptop ecosystems, complementing Qualcomm’s efforts and giving Microsoft more incentive to optimise Windows for ARM.
  • Redefine integrated graphics expectations: The N1X’s integrated GPU, with desktop‑class core counts, may push competitors to increase iGPU performance and memory bandwidth. Apple’s M‑series chips set a precedent, and Nvidia could further raise the bar.
  • Unify AI development pipelines: With a single SoC combining CPU, GPU and AI engines, developers could deploy models across Nvidia’s desktop, laptop and server platforms without major rewrites, strengthening the CUDA ecosystem.
  • Pressure on x86 incumbents: Intel and AMD will face increased pressure to deliver energy‑efficient, AI‑capable chips. Nvidia’s entry may accelerate their adoption of advanced packaging (chiplets), 3D stacking and AI accelerators.

Outlook

The N1X is not yet a finished product, but the leaks to date hint at a bold attempt to bridge the gap between discrete graphics cards and laptop SoCs. Its 20‑core ARM CPU and desktop‑class Blackwell GPU set it apart from existing Windows‑on‑ARM chips, and early single‑core and iGPU benchmarks show real promise (tomshardware.com). However, the delays into 2026, the need for driver maturation, and the challenge of OS compatibility temper expectations (tomshardware.com). By the time the N1X arrives, Apple’s M5, AMD’s Strix Halo 2 and Intel’s Panther Lake could be raising the bar even higher.

In the best case, the N1X will energise the Windows‑on‑ARM ecosystem, deliver strong integrated graphics for portable AI workloads, and broaden the reach of Nvidia’s CUDA platform. In the worst case, extended delays and high pricing could relegate it to niche enterprise solutions. Either way, the N1X represents a significant strategic move by Nvidia, and its progress will be closely watched as the 2026 release window approaches.
