The Race to Put AI Data Centers in Space

In the span of eight weeks, every major technology company on Earth announced plans to put computing infrastructure in orbit.

SpaceX filed with the FCC for up to one million data center satellites. Google unveiled Project Suncatcher, a research program exploring 81-satellite compute clusters powered by solar arrays. Blue Origin announced Project Sunrise, a constellation of 51,600 orbital data center nodes. Nvidia revealed the Vera Rubin Space-1 Module, its first chip system designed specifically for space. And former Google CEO Eric Schmidt acquired rocket manufacturer Relativity Space in a move widely interpreted as positioning for orbital infrastructure.

The convergence is not a coincidence. It traces back to a single constraint on Earth that none of these companies can engineer around: power.

The Terrestrial Energy Problem

AI data centers now consume approximately 4% of total U.S. electricity, and that share is rising faster than the grid can accommodate. Goldman Sachs projects a 165% increase in data center power demand by 2030. Some facilities under design would require as much as 10 gigawatts — ten times the output of a standard nuclear power plant.

The bottleneck is not money. It is physics, permitting, and time. Transmission infrastructure takes years to build. Utilities face permitting delays, supply chain constraints, and aging grid architecture. In parts of the country, AI-driven demand already exceeds available capacity, forcing companies to delay projects, contract directly with private power producers, or install banks of natural gas generators as interim solutions.

Eric Schmidt, testifying before Congress, framed the situation starkly: data centers could need an additional 29 gigawatts within a few years, and up to 67 gigawatts more by 2030. His proposed solution — harvesting solar energy directly in orbit, where panels are up to eight times more productive than on Earth and can generate power nearly continuously — led him to acquire Relativity Space.

Elon Musk went further. "You can mark my words," he said. "In 36 months but probably closer to 30 months, the most economically compelling place to put AI will be space." His prediction for five years out: "We will launch and be operating every year more AI in space than the cumulative total on Earth."

Who Is Building What

The landscape of companies pursuing orbital compute has expanded rapidly. Each is approaching the problem from a different direction, with different timelines and different assumptions about what's feasible.

SpaceX / xAI

Following their $1.25 trillion all-stock merger in February 2026, SpaceX and xAI filed with the FCC for authority to launch a constellation of up to one million satellites operating between 500 km and 2,000 km altitude. The filing describes an "optically linked constellation of solar-powered satellites with unprecedented computing capacity to power advanced artificial intelligence models."

SpaceX projects that launching one million tonnes of satellites annually would generate 100 kilowatts of compute power per tonne, yielding 100 gigawatts of AI compute capacity per year. The company is targeting an IPO at $1.75 trillion valuation in mid-2026, with proceeds expected to fund orbital data center development.
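SpaceX's headline figure reduces to straightforward arithmetic. A quick sketch, using only the numbers from the filing as reported above:

```python
# SpaceX's projected orbital compute throughput, per its FCC filing.
# The 100 kW-per-tonne ratio is SpaceX's own estimate, not a measured figure.
mass_per_year_tonnes = 1_000_000   # tonnes of satellites launched annually
compute_per_tonne_kw = 100         # kilowatts of AI compute per tonne

compute_per_year_kw = mass_per_year_tonnes * compute_per_tonne_kw
compute_per_year_gw = compute_per_year_kw / 1_000_000  # 1 GW = 1,000,000 kW

print(f"{compute_per_year_gw:.0f} GW of AI compute added per year")
# → 100 GW of AI compute added per year
```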

The scale of the proposal is unprecedented. For context, approximately 14,000 active satellites currently orbit Earth.

Nvidia

At GTC 2026 on March 16, CEO Jensen Huang announced the Vera Rubin Space-1 Module — Nvidia's first computing platform designed specifically for orbital deployment. The system combines IGX Thor and Jetson Orin components, engineered for the size, weight, and power constraints of spacecraft. Nvidia claims it delivers up to 25 times the AI inference performance of the H100 GPU.

Partners include Axiom Space, Starcloud, and Planet Labs. The module targets satellite constellations with onboard AI processing, future orbital data centers, and autonomous space operations.

What makes Nvidia's announcement distinct from the others is specificity. This is not a concept or a filing — it is a chip on a product roadmap, designed for customers who are already building hardware to fly. Huang acknowledged the central engineering challenge in the same breath: "In space, there's no convection, there's just radiation, and so we have to figure out how to cool these systems."

Google — Project Suncatcher

Google's approach is the most technically detailed. Project Suncatcher, developed in partnership with Planet Labs, envisions fleets of interconnected satellites in sun-synchronous orbit at approximately 650 km altitude. Each cluster would contain 81 satellites within a 1 km radius, equipped with Google's Trillium v6e Cloud TPU accelerators and connected via optical inter-satellite links.

The program has already produced specific results. A bench-scale demonstrator achieved 800 Gbps each-way transmission (1.6 Tbps total) using a single transceiver pair. Radiation testing of the TPU hardware showed irregularities only after 2 krad(Si) cumulative dose — nearly three times the expected five-year shielded mission dose of 750 rad(Si), with no hard failures up to the 15 krad(Si) testing limit.
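The radiation margins reduce to simple dose ratios, using the figures from Google's reported results above:

```python
# Radiation test margins for Google's TPU hardware (doses reported above).
first_irregularity_rad = 2_000   # 2 krad(Si): first observed irregularities
mission_dose_rad = 750           # expected 5-year shielded mission dose, rad(Si)
no_failure_limit_rad = 15_000    # 15 krad(Si): test limit, no hard failures

print(f"Margin to first irregularity: {first_irregularity_rad / mission_dose_rad:.1f}x")
print(f"Margin to test limit:         {no_failure_limit_rad / mission_dose_rad:.0f}x")
# → Margin to first irregularity: 2.7x
# → Margin to test limit:         20x
```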

Google plans to launch two prototype satellites with Planet Labs by early 2027 to validate TPU performance in orbit and test optical inter-satellite links for distributed machine learning tasks.

CEO Sundar Pichai has said data centers in space "will be the new normal in the next decade."

Blue Origin — TeraWave and Project Sunrise

Jeff Bezos entered the race on March 19-20, 2026 with two announcements. TeraWave is a communications constellation of 5,280 low-Earth orbit satellites and 128 medium-Earth orbit satellites, targeting symmetrical data speeds of 6 terabits per second — 6,000 times faster than Amazon's existing Kuiper satellite network. Deployment begins Q4 2027.

Project Sunrise, filed with the FCC on March 19, proposes 51,600 satellites designed to host data centers in orbit. Blue Origin describes them as spacecraft that will "ease mounting pressure on U.S. communities and natural resources by shifting energy- and water-intensive compute away from terrestrial data centers." TeraWave would serve as the high-throughput communications backbone for the compute nodes.

Eric Schmidt — Relativity Space

Schmidt's acquisition of Relativity Space gives him access to one of the few independent aerospace companies still developing new rocket technology. His strategic logic is explicit: the energy demands of AI data centers are outpacing terrestrial power infrastructure, and solar energy in orbit is the most scalable long-term solution. Acquiring a launch provider ensures he doesn't depend on SpaceX or Blue Origin — companies controlled by rivals with their own orbital data center ambitions.

Starcloud

Starcloud has the strongest claim to operational reality. On November 2, 2025, it launched the first data-center-class GPU ever deployed in orbit: an Nvidia H100, carried aboard a SpaceX rocket. On December 10, 2025, the 60-kilogram satellite trained an LLM in space and ran and queried Google's Gemma model in orbit.

The company designed a thermal management system relying entirely on radiative cooling, using large specialized panels to dissipate the 700 watts generated by the H100. Starcloud plans to scale to a "Hypercluster" architecture by October 2026, which will require deployable radiators to manage 100 times the power generation of its current satellite.

The Rest of the Field

Aetherflux, founded by Robinhood co-founder Baiju Bhatt, combines orbital computing with power-beaming technology — transmitting energy to Earth via infrared laser. A 2026 demonstration satellite will attempt to beam one kilowatt from orbit to ground stations, with commercial data center nodes targeted for Q1 2027.

Axiom Space launched its first orbital data center prototype, the AxDCU-1, to the International Space Station in 2025, featuring "thermal tiles" co-developed with Spacebilt to reject heat directly into the cosmic microwave background.

Lonestar Data Holdings is pursuing a dual LEO-and-lunar strategy, targeting first commercial LEO service by Q4 2026 and eventual installations inside lunar lava tubes — which naturally shield hardware from temperature swings and cosmic radiation.

The Cooling Problem

The single largest engineering obstacle is thermal management. On Earth, data centers use air conditioning, liquid cooling, and massive water systems to remove heat from processors. In the vacuum of space, none of these work. There is no air for convection. There is no water to evaporate. Heat can only be removed through radiation — emitting infrared photons into the void.

The physics of radiative cooling are governed by the Stefan-Boltzmann law: radiated power increases with the fourth power of temperature. This creates a fundamental constraint. To dissipate one megawatt of heat while keeping electronics at a stable 20°C, an orbiting data center would require approximately 1,200 square meters of radiator surface — roughly four tennis courts.

Running radiators hotter reduces the required area. At 60°C, the surface can be halved. But this pushes silicon to its thermal limits, trading hardware longevity against system mass in a tradeoff that has no terrestrial equivalent.
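The temperature-versus-area tradeoff follows directly from the Stefan-Boltzmann law. The sketch below is mine, not any company's design: it assumes an idealized two-sided radiator with emissivity 0.9, a deep-space sink near absolute zero, and no solar or Earth-infrared heating, all of which real spacecraft must account for.

```python
# Rough radiator sizing via the Stefan-Boltzmann law.
# Assumptions (illustrative only): two-sided panel, emissivity 0.9,
# deep-space sink ~0 K, no solar or albedo heat load.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w, temp_c, emissivity=0.9, sides=2):
    """Panel area needed to reject heat_w watts at the given temperature."""
    temp_k = temp_c + 273.15
    flux_w_per_m2 = sides * emissivity * SIGMA * temp_k ** 4
    return heat_w / flux_w_per_m2

print(f"1 MW at 20 C: {radiator_area_m2(1e6, 20):,.0f} m^2")
print(f"1 MW at 60 C: {radiator_area_m2(1e6, 60):,.0f} m^2")
```

Under these assumptions the 20°C case comes out around 1,300 m², the same order of magnitude as the figure above; the exact area depends strongly on emissivity, panel sidedness, and how much solar and albedo heating the radiators absorb, which is why published estimates vary.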

Jensen Huang acknowledged this at GTC 2026 when he said Nvidia is working with partners on orbital data centers but "we have to figure out how to cool these systems." The company that announced space-grade AI chips in the same keynote simultaneously disclosed that the thermal problem remains unsolved at scale.

Different companies are approaching this differently. Starcloud uses passive radiative panels. Axiom Space is developing thermal tiles. Sophia Space integrates solar cells and passive radiators across the entire spacecraft surface as a unified heat exchanger. By 2027, the industry expects to move toward active thermal control, including space-rated heat pumps that can boost radiator temperatures to increase dissipation efficiency.

The Bandwidth Gap

Terrestrial data centers operate with 100 Gbps rack-to-rack interconnects as a baseline, with many deployments running at 400 Gbps or higher. Google's TPU supercomputers use custom optical interconnects delivering hundreds of gigabits per second per chip.

Current satellite optical inter-satellite links offer data rates between 1 and 100 Gbps. Google's Project Suncatcher bench test achieved 1.6 Tbps total, but this is a laboratory result, not an orbital deployment.

The scale of the gap is quantifiable. Novaspace (formerly EuroConsult) estimates total global satellite capacity will reach 50 Tbps by 2026. Total subsea cable capacity for the same year is projected at 8,750 Tbps — 175 times more bandwidth through fiber than through all satellites combined.
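The 175x figure is a direct ratio of the two capacity projections cited above:

```python
# Bandwidth gap between fiber and satellite capacity (2026 projections above).
subsea_fiber_tbps = 8750   # projected total subsea cable capacity
satellite_tbps = 50        # projected total global satellite capacity (Novaspace)

gap = subsea_fiber_tbps / satellite_tbps
print(f"Fiber carries {gap:.0f}x the bandwidth of all satellites combined")
# → Fiber carries 175x the bandwidth of all satellites combined
```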

For AI training workloads, which require massive data movement between processors, this gap matters. Inference workloads — where a trained model processes individual requests — are more tolerant of bandwidth constraints and latency, making them a more realistic near-term application for orbital compute.

The Hardware Replacement Problem

GPUs and AI accelerators follow rapid development cycles. Nvidia has released new architectures every one to two years: Hopper (H100), Blackwell (B200), and now Vera Rubin. Each generation delivers substantial performance improvements. A chip launched into orbit today could be two architectural generations behind by the time it reaches its expected operational life.

On Earth, data center operators routinely replace hardware on 3-5 year cycles. In orbit, replacement requires a new launch. The economics of depreciation look fundamentally different when swapping a GPU means booking a rocket.

This favors a model where orbital hardware handles workloads that are less sensitive to absolute performance — edge processing for satellite data, inference for specific applications, geospatial analysis — rather than competing with terrestrial facilities for frontier AI training.

The Environmental Question

The environmental case for orbital data centers rests on displacing terrestrial energy consumption. The case against them involves the environmental costs of getting there.

Rocket launches emit greenhouse gases and black carbon. Over 300 successful orbital launches occurred in 2025. Research indicates a tenfold increase in launches could begin measurably damaging the ozone layer.

Satellite reentry poses a less visible but potentially significant concern. Between 2020 and 2024, atmospheric particulates from burning space debris more than doubled, from 366 to 887 tons annually. Data center microchips contain PFAS compounds and transition metals like copper and titanium that act as atmospheric catalysts. One geophysicist warned that "small amounts could be sufficient to induce measurable changes in the atmosphere."

The collision risk is also growing. Princeton University's CRASH Clock research found that the timeframe for likely satellite collision during solar storms has shrunk from 164 days in 2018 to 3.8 days in January 2026. SpaceX's proposal for one million satellites would increase the orbital population by roughly 70 times. Astronomers have warned this density is "completely unsafe for collisions."

A cascading collision event — known as Kessler syndrome — could render entire orbital bands unusable, threatening not just data center satellites but the communications, weather monitoring, and defense systems the world already depends on.

The Regulatory Vacuum

No international framework governs orbital data centers. Existing space treaties address state sovereignty over celestial bodies and liability for space objects, but they were not written for commercial computing infrastructure processing civilian data in orbit.

This creates specific governance questions. If citizen data is processed in orbit, which jurisdiction applies — the country where the data originated, the country that launched the satellite, the country where the operating company is incorporated, or the country where the ground station sits? Professor Payal Arora has noted that "sovereignty becomes ambiguous" under these conditions.

Nations with data localization laws — regulations requiring citizen data to be stored domestically — could find those laws rendered moot by orbital processing. As infrastructure researcher Colin Thakur observed, orbital compute could bypass domestic bargaining power entirely.

The geopolitical dimension is not theoretical. Both the United States and China are actively pursuing orbital data center programs. The regulatory vacuum means whoever builds first operates in a space where the rules have not yet been written.

The Market Projections

Analysts project the orbital data center market will reach $1.77 billion by 2029 and $39.1 billion by 2035, representing a compound annual growth rate of approximately 67%. Broader projections suggest the combined market for orbital infrastructure, computing, and related satellite services could exceed $700 billion over the next decade.
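The implied growth rate can be checked directly from the two endpoint projections above:

```python
# Implied compound annual growth rate from the market projections above.
start_billion = 1.77   # projected market size in 2029, $B
end_billion = 39.1     # projected market size in 2035, $B
years = 2035 - 2029

cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

This yields roughly 67% per year, matching the figure analysts cite.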

These projections carry significant uncertainty. They assume sustained declines in launch costs: Google's analysis suggests costs may fall below $200 per kilogram by the mid-2030s, at which point the cost of launching and operating space-based compute could become "roughly comparable" to the annual energy cost of an equivalent terrestrial data center. Current launch costs range from $1,500 to $2,900 per kilogram. Market shifts this large have multiple causes, and projections in emerging space markets have historically diverged widely from actual outcomes.

By 2035, some analysts suggest 20-30% of new data center capacity could be orbital, complementing rather than replacing terrestrial infrastructure.

Where the Announcements Meet the Engineering

The space data center race is real in the sense that real money is being committed, real hardware is being designed, and one company — Starcloud — has already trained an AI model in orbit. The first H100 ran successfully in space. Google's TPU passed radiation testing. Nvidia has chips on a product roadmap.

It is also real in the sense that the engineering obstacles are not speculative. They are documented, quantified, and acknowledged by the companies making the announcements. The cooling problem is Stefan-Boltzmann, not an opinion. The bandwidth gap is 175x, not a rounding error. The hardware replacement economics are a function of orbital mechanics, not market dynamics.

The expert consensus, such as it exists, clusters around a timeline of small pilot projects by 2030 and meaningful commercial scale sometime after that. Jeff Thornburg of Portal Space Systems estimates "a minimum of three to five years before you see something that's actually working properly, and you're beyond 2030 for mass production." Georgetown's Kathleen Curlee is more direct: proposed timelines of 2030-2035 "are unrealistic."

The companies investing billions hold a different view, for reasons that are neither irrational nor proven. The terrestrial energy crisis is real. The solar power advantage in orbit is real. The launch cost trajectory is declining. The question is whether the engineering gap between "one H100 on a 60-kilogram satellite" and "gigawatt-scale orbital compute" can be closed on any timeline that justifies the investment being made today.

Jensen Huang's GTC keynote captured the duality precisely: "Space computing, the final frontier, has arrived" — and in the same address — "we have to figure out how to cool these systems." Both statements are true. The article lives in the distance between them.