AI RACE (PART 1) - BEFORE ASKING WHO IS WINNING, WE MUST FIRST DEFINE WHAT THE RACE IS


A Facebook troll of mine simply threw this "one chart that beats them all" on his page, aping Michael Burry's conviction that China's massive power-generation capacity alone determines the eventual winner. There is no doubt AI is an electricity guzzler, but this is a black-and-white world view manifested in a black-and-white chart.

Much of the debate about the AI Race is confused because it assumes a single finish line. In reality, AI development spans multiple dimensions - intelligence depth and deployment scale - each with different constraints, costs and political requirements.

What does winning even mean? The answer is not forthcoming unless you have defined the parameters of the race.


100M, 100M HURDLES, 100M RELAY RACES

An analogy with track races will make the point of this article clear. We do not decide the winner by inventorying resources, but by seeing who crosses the finishing line first under the rules of the event.
Inputs describe potential. Outcomes describe performance.
The inputs of the AI race are chips, power, materials (rare earths), talent and sanctions. The equivalent inputs for track races are an athlete's physiology, training facilities, coaches, equipment, funding and sponsorship, and track conditions.

Even with all the inputs known, you still cannot say who will win.
Things you do not know -- who false-starts, who paces correctly, who sustains speed in the final 30m, who races often enough to improve.
Things you should have asked -- what race is being run, how long is it, is it a sprint, a marathon or a relay, and is the goal speed, endurance or consistency?

Counting chips, talent, rare earths in the AI Race is like counting shoes and coaches in a track meet. These inputs matter, but they do not define the race. Inputs do not win races. They qualify you to run.


AI INPUTS
AI Input dominance is not systems dominance
Most debates on the AI Race are framed around inputs. Many simply point to the "one chart that beats them all" - power (thus China wins). Others are chip-centric, or rare-earth-centric.

Chips
Chips determine ceiling performance. They affect how far up the "authority-intelligence" depth you can go. But they exhibit diminishing returns fast. This goes right to the micro-economics of the AI Race, and here is where a lot of public discussion goes wrong.

The classical law of diminishing marginal returns says the first units of a scarce input produce large gains; subsequent units produce smaller incremental gains. Applied to AI chips: the first few GPUs were transformative. Going from weak GPUs to modern GPUs is huge. Going from very good GPUs to slightly better GPUs is incremental, not decisive. You do not get linear power from linear additions.
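The diminishing-returns point can be sketched numerically. This is a minimal illustration, not a measured scaling law: it assumes, purely for demonstration, that capability grows with the logarithm of compute, so equal additions of chips buy ever-smaller gains.

```python
import math

def capability(chips):
    # Illustrative assumption: capability ~ log2(compute).
    # The shape (flattening returns), not the exact curve, is the point.
    return math.log2(chips)

# Each step below adds the SAME number of chips (1,000)...
for n in (1000, 2000, 3000, 4000):
    marginal = capability(n + 1000) - capability(n)
    print(n, round(marginal, 3))
# ...yet each successive 1,000 chips buys a smaller capability gain.
```

The exact curve real frontier models follow is debated; what matters for the argument is only that the marginal gain per chip shrinks.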

"Compute" - a term you hear all the time in AI - is used as a noun to mean, roughly, "effective computational capacity" to train or run models. It is important to understand that compute is not just chips. It bundles chips (GPUs/accelerators), interconnects (NVLink, InfiniBand), memory bandwidth, software stack efficiency, and power and cooling uptime. Thus two countries with the same chips can have very different compute.

This is what the public does not understand. The returns on chips diminish fast because AI performance is not linearly chip-bound. Modern AI systems hit constraints long before they hit "chip starvation": data quality ceilings, algorithmic bottlenecks, coordination and latency costs, training instability and inference efficiency limits. Once you clear a minimum compute threshold, progress depends not so much on chips as on architecture design, training regime, systems engineering and deployment feedback loops. This is why DeepSeek can punch above its compute weight, and why OpenAI's gains are no longer proportional to GPU spend.

Parallelization penalties grow non-linearly. As chips are added, synchronisation overheads rise, communication costs rise, failure probability rises and utilisation efficiency drops. At frontier scale, many GPUs are idle, waiting, or throttled, that is, "not thinking".
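A toy model makes the penalty visible. The sketch below combines Amdahl's law (a fixed serial fraction limits speedup) with a per-GPU communication penalty; both parameter values are invented for illustration and stand in for real, workload-dependent numbers.

```python
def effective_throughput(n_gpus, serial_frac=0.05, comm_cost=0.002):
    # Amdahl's law speedup, discounted by a communication/sync
    # overhead that grows with cluster size. Parameters are illustrative.
    speedup = 1 / (serial_frac + (1 - serial_frac) / n_gpus)
    overhead = 1 + comm_cost * n_gpus
    return speedup / overhead

for n in (8, 64, 512, 4096):
    print(n, round(effective_throughput(n), 1))
# Throughput peaks somewhere in the middle: beyond that point,
# adding GPUs makes the cluster slower, not faster.
```

The peak's location depends entirely on the assumed constants; the qualitative lesson is that utilisation eventually falls as the cluster grows.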

Frontier chips mostly help with speed, not capacity. Better chips mostly mean faster training cycles and more experiments per month. They do not automatically mean new cognitive abilities or qualitative breakthroughs. So chips shift the tempo, not the direction.

"Chip starvation" occurs when an AI actor cannot obtain enough compute to cross critical thresholds. It is not about not having the latest NVIDIA chips. It is about being unable to train or deploy models, locked below a minimum viable scale and unable to iterate sufficiently to improve. In the race track analogy, think of not being able to turn up at the starting blocks. 

"Chip starvation" is threshold-based, of which there are three:
- Entry threshold. If you don't have enough compute to train modern deep models, you are excluded from the game. This is where sanctions hit hardest. But China has long gone past this threshold.
- Frontier threshold. You must have enough compute to train models competitive with global SOTA (state of the art). If not, you lag in headline benchmarks. This is where media attention is fixated on the impact of sanctions on China. 
- Iteration threshold. This is most important. The AI actor must have enough compute to run many experiments, fail repeatedly, rapidly refine architecture. This determines long term dominance. 

Only the first threshold is fatal. The second and third thresholds are often substitutable by other inputs. And this is why China is not chip-starved, despite sanctions. China lost access to top-end GPUs but avoided true starvation because (a) it had crossed the entry threshold; (b) it substituted horizontally, not vertically - instead of getting the best chips, it used more chips x longer training, better parallel efficiency, sparsity, MoE (mixture of experts), compression and system-level co-design; and (c) it compensated with cheaper, abundant electricity and scale.

All of the above is just to explain that sanctions overestimate chip leverage. In the context of the AI Race, this is something most people do not see: it is US policy misreading the technology. The sanctions assume "if China lacks top-tier chips, it falls behind". What the sanctions achieved was partial denial, which reorients development rather than stopping it.

Rare earths
These affect manufacturing resilience and matter for long-term supply chains. But they do not directly constrain AI deployment once infrastructure exists.

Rare earth elements are critical for high-performance motors, permanent magnets, and precision actuators and sensors. They are upstream industrial inputs. As far as AI is concerned, rare earths are not consumed in:
- Model training
- Inference
- Software iteration
- Deployment at scale
Once chips, servers and grids exist, REEs largely exit the picture. After an AI system is deployed - trained, installed in data centres, and connected to power and networks - its continued operation depends on electricity, cooling, maintenance and software updates.

REEs affect the rate and resilience of hardware build-out, not the functioning of AI systems already deployed. Thus rare earth leverage is strategic, not tactical. 

Rare earths are upstream, slow-acting inputs. AI is downstream, system-level infrastructure. Strategic advantage accrues to whoever builds first, deploys widest and integrates deepest. Rare earths can delay the future, but they cannot turn off the present.

Sanctions
These affect pace, cost and access to frontier inputs. But they also encourage substitution, which may lower efficiency thresholds. Either way, they push the system toward "good enough at scale" strategies.

Sanctions often reinforce trajectories rather than stop them.

Talent
Talent usually means top researchers, model architects, algorithm designers and systems engineers. Talent is essential for breakthroughs, new paradigms, pushing intelligence depth, and reducing compute per unit of capability. Talent gets us frontier models and redefines what AI can do.

Talent gets over-weighted because analysts fixate on pushing frontier intelligence. But once AI becomes infrastructure, the scene changes.

The discovery phase is talent-intensive. Diffusion is not. Breakthroughs come from small elite groups, but the societal impact comes from mundane deployment. History offers many examples: electrification, the internet, automobiles, railways.

Put another way: talent creates capability, but institutions create saturation. This is why Silicon Valley can be brilliant and narrow, while large systems can be mediocre and pervasive.

The opposing behavior of talent and scale:
Talent concentrates -- it clusters geographically, prefers autonomy, avoids bureaucratic friction and moves easily across borders.
Scale on the other hand, requires coordination, standardisation, compliance, political alignment.
This creates tension. Systems optimised for elite talent are often poorly suited for mass deployment. This does not mean failure; it means trade-offs.

Talent is indispensable for breakthroughs in AI, but it is not sufficient for dominance. Talent determines how far a system can advance toward elite or general intelligence, but it does not determine how widely that intelligence is deployed. Discovery is talent-intensive; diffusion is institution-intensive. As AI becomes infrastructure, the binding constraint shifts from who can invent best to who can deploy most.

Power is unique among inputs
Power is continuous, not discrete. It is non-substitutable at scale. It cannot easily be imported; that is, it is locally binding. Compared to power, chips, rare earths and talent are discrete, can be stockpiled and traded, and are substitutable beyond certain thresholds.

This makes power a structural limiter, while the other inputs are rate limiters.

Summary of inputs (chips, rare earths, power)
In relation to the AI Race, the key distinction is how fast each input bites, how substitutable it is, and whether its effects compound.
- Rare earths: They delay new fabs, increase capex and weaken manufacturing resilience. But they cannot stop existing AI systems, degrade deployed models or reduce inference output tomorrow. Rare earths shape the slope of future capacity.
- Chips: They can slow frontier model training, reduce iteration speed and increase cost. But once thresholds are crossed, they cannot prevent AI development entirely, stop deployment or guarantee long-term advantage. Chips affect tempo, not trajectory.
- Electricity: It hard-limits inference scale, limits deployment density and determines a nation's AI ceiling. Unlike the other inputs, you cannot optimise around power shortages, cannot stockpile power at scale and cannot substitute it with software.

Bottom line --
Chips and rare earths influence how fast you grow.
Electricity determines how big you grow.
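The rate-versus-ceiling distinction can be made concrete with a toy growth model, in which chips and talent set the annual growth rate while electricity sets a hard cap. All numbers below are invented purely for illustration.

```python
def deployed_capacity(years, growth_rate, power_cap):
    # Chips/talent determine growth_rate (how fast you grow);
    # electricity determines power_cap (how big you can grow).
    capacity = 1.0
    for _ in range(years):
        capacity = min(capacity * (1 + growth_rate), power_cap)
    return capacity

fast_but_capped = deployed_capacity(20, growth_rate=0.8, power_cap=50)
slow_but_roomy  = deployed_capacity(20, growth_rate=0.3, power_cap=500)
# The faster grower stalls at its power ceiling within a few years;
# the slower grower eventually overtakes it because its ceiling is higher.
```

The specific rates and caps are arbitrary; the structural point is that a binding ceiling dominates a faster growth rate over a long enough horizon.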


DEFINING THE AI RACE

To understand what kind of "AI race" a country is actually running, we need to separate two often-conflated dimensions: "authority" and "scale".

Authority (Decision and control depth)
"Authority" describes who decides (man or machine?), how decisions are made, and how much autonomy an AI system is allowed to exercise.
At the low end:
- AI performs narrow, predefined tasks.
- Humans retain full control.
- Systems assist, but do not direct.
At the high end:
- AI integrates information across domains.
- Recommends or initiates actions.
- Shapes outcomes, not just inputs.

"Authority" is not about smartness alone. It is about permission to act. High-authority AI changes decision hierarchies and challenges institutions, which is why it is politically and socially constrained.

"Scale" (Breadth of deployment)
"Scale" describes how widely AI is deployed across an economy or society.
At the low end:
- AI is concentrated in elite users or pilot projects.
- High cost, limited access
- Impact is localised
At the high end:
- AI is embedded across industries and governance.
- Low marginal cost per deployment.
- Impact is systemic and cumulative.

"Scale" depends less on model sophistication and more on infrastructure, power availability, and deployment permission.


OUTPUT-FRAMED AI QUADRANT

By plotting the two dimensions of AI using Authority (intelligence) for the X-axis, and Scale (deployment) for the Y-axis, we have an output-based Quadrant. The 4 zones basically ask which AI Race one is talking about in terms of output:
- Low Scale + Low Authority (Elite & Narrow)
- Low Scale + High Authority (Elite & General)
- High Scale + Low Authority (Broad & Narrow)
- High Scale + High Authority (Broad & General)
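The four labels follow mechanically from the two axes. A minimal sketch of the mapping (the 0.5 thresholds are arbitrary; in reality both axes are qualitative, not numeric):

```python
def quadrant(authority, scale):
    # scale (deployment breadth) picks Elite vs Broad;
    # authority (decision/control depth) picks Narrow vs General.
    breadth = "Broad" if scale >= 0.5 else "Elite"
    depth = "General" if authority >= 0.5 else "Narrow"
    return f"{breadth} & {depth}"

print(quadrant(0.2, 0.1))  # Elite & Narrow  (Quadrant I)
print(quadrant(0.8, 0.2))  # Elite & General (Quadrant II)
print(quadrant(0.3, 0.9))  # Broad & Narrow  (Quadrant III)
print(quadrant(0.9, 0.9))  # Broad & General (Quadrant IV)
```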

Inputs answer:
What can a country potentially do?
If a country has the inputs relevant to a particular Quadrant (or race), then it has the potential to achieve AI supremacy at that level.

The output Quadrant answers:
What does a country actually want to end up doing at scale?
If a country chooses to aim for AI supremacy at one of the four levels, it must seek to possess the inputs (resources) appropriate for that level.

As mentioned above, power is a structural limiter, while the other inputs are rate limiters. That is why power appears in the axis logic itself. In the Quadrant shown below, power demand slopes roughly upward to the right, with Quadrant IV being the most power-hungry.

Let's look at the Quadrant and decide which race you want to be in. Each race has different constraints, different inputs.

Quadrant I - Elite & Narrow
This is an important level to be in, but it is not decisive. It is currently where most of the world sits. The finish line, or objective, is operational advantage in niches. What gets deployed is specialised and experimental AI - high in reliability but relatively low in imagination, mainly internal analytics:
- Military systems
- Corporate tools
Returns here are minimal.

Quadrant II - Elite & General
This is the US's strength.
This is the zone of frontier intelligence of:
- Breakthrough models
- GPT-class models.
- Reasoning, planning, creativity capabilities.
- Ad Targeting
- Recommendation engines
Used by/for researchers, engineers, militaries, strategic planners, medical AI, drug discovery, legal reasoning.
The finish line here is all about breakthrough capability - something qualitatively new (AGI-adjacent, paradigm shifts).
This is where the US is racing, with:
- Smarter models
- Fewer players
- High cost
- High fragility
The big constraint is that this quadrant does not scale easily. Returns here are moderate.

Quadrant III - Broad & Narrow
This is China's strength.
This is where we see civilisational deployment - AI is employed everywhere:
- Manufacturing with AI-run factories
- AI-run dynamic traffic systems
- Surveillance
- Autonomous logistics
- Energy with AI-run grids
- Governance.
The finish line here is system-wide productivity and control.
This zone is all about coverage. But the glitzy publicity makes most people fail to see that coverage is not brilliance.
That is why China does not need genius AI. All it needs are reliable and cheap inputs, always-on systems, running everywhere.
The payoff is exponential.

Here is what most people do not understand about why China beats the US. This Quadrant rewards:
- Power surplus
- Permission
- Tolerance for inefficiency
- Political economy

(I will cover this in Part 2. Stay tuned because this part is real knowledge. This is what the China advantage is all about).

Quadrant IV - Broad & General
This zone is the Holy Grail of AI. It is the mythical "AI wins everything" zone. This is where we will have:
- General intelligence (almost like omniscience)
- Mass deployment
- Low cost
- High Authority (machine over man)
The finish line here is Civilisation Transformation.
But this Quadrant is:
- Power-hungry
- Politically destabilising
- Institutionally dangerous.
No country is truly here, although everyone talks about it as if China will achieve it. It is an existential risk in the vein of "Skynet" from the Terminator movies.


INPUT VS OUTPUT FRAMEWORK OF VIEWING AI RACE
Chips, talent and materials determine ceilings; power determines persistence
Chips, rare earths, sanctions and talent absolutely matter, but they matter differently depending on where the country is in the Quadrant.

Output-based Quadrant prevents the common mistake of treating input dominance as synonymous with system dominance.
Input-heavy analysis misleads at this moment in history
An outcome-based framework does not deny the importance of inputs such as chips, rare earths, talent, sanctions. It simply recognises that these factors shape the boundaries of what is possible, not the form of what ultimately emerges. Power is treated differently because it is the only input that remains continuously binding at scale.

Assessing the AI Race with input-centric thinking comes from industrial war logic, platform competition and short-cycle innovation economics. These lenses are often misapplied to the AI Race.

- Industrial war logic: This is the policymakers' view. Power comes from controlling critical inputs; deny the enemy those inputs to weaken them. Hence export controls, sanctions and "chokepoint" strategies.
Industrial war logic assumes linear scaling, stockpiling and slow adaptation. But AI is not consumed like ammunition; it adapts via software, substitutes inputs rapidly and compounds after deployment. Denying some inputs only slows, it does not disable. Resulting error -- overestimating chip denial, underestimating system adaptation.

- Platform competition: This is the Silicon Valley mindset. Power comes from locking in users. This treats AI like past platform battles -- iOS vs Android, Windows vs Linux, etc. The core assumptions: first movers win, network effects dominate, control the platform to control rents, lock-in beats performance. Platform logic assumes voluntary adoption, market-based lock-in and consumer choice. But AI at scale is imposed institutionally, is embedded in state systems, is driven by mandate not preference, and can be forked, copied or nationalised. China does not play this game. Resulting error -- overweighting branding, APIs and developer ecosystems; underweighting deployment power and coercive adoption.

- Short-cycle innovation economics: This is the Wall Street/VC/start-up world view. Power comes from iterating faster - whoever innovates faster wins. This treats AI like a fast-moving tech product, governed by Moore's Law and driven by rapid iteration and disruption. It assumes faster innovation wins, capital flows to the best ideas, incumbents are vulnerable, and agility beats scale. Resulting error -- overvaluing brilliance, undervaluing endurance.

AI at scale behaves more like electrification, bureaucratic automation and national infrastructure. AI behaves like infrastructure when it becomes the underlying platform that other tasks run on. If you have been observing closely, this is the Chinese mindset as regards AI.

In those transitions, the best components didn't win, the most deployable systems did.

The Quadrant is not saying "Inputs do not matter". It is saying "Inputs matter differently once intelligence becomes infrastructure."

And infrastructure success depends on:
- Permission
- Endurance
- Tolerance for inefficiency
- Political economy.
These are not input variables. They are system properties. This brings us back to the same points in Quadrant III above for China.

To sum up, the output framework does not ignore the importance of inputs. It puts them in their proper place -- upstream, conditional and insufficient on their own. That is not downplaying their importance. That is analytical discipline.


Tune in for 
AI Race (Part 2) - China wins but it's not what you think. China wins by accident. 


This platform has withdrawn its subscriber widget. If you like blogs like this and wish to know whenever there is a new post, click the button to my FB page and follow me there. I usually introduce my new blogs there. Thanks.