It is time for the age of abundance in AI to begin.
Today we are launching Radiant. Radiant merges the software and compute assets of Ori with the capital, land, power and compute assets of Brookfield. This combination paves the way for an age of AI abundance, where efficient software pairs with cost-advantaged capital, the world's largest bank of powered land, direct power relationships and the NVIDIA DSX reference design for AI infrastructure to create the utility model of compute.
The utility model of compute is ubiquitous, inexpensive, reliable and always on. This low-friction, high-availability implementation of AI is what Radiant is building, from the infrastructure layer to the software layer. For developers, that means a cloud that is faster, fairer and more reliable. For sovereigns and hyperscalers, it means a foundational partner for the next century.
Power
For more than a decade, from the early days of AI to the arrival of ChatGPT, AI infrastructure has been built downstream from power. Data centers chase electrons through congested grids, facing transmission losses, volatile pricing and regulatory hurdles. Yes, there is generation scarcity, but more importantly there is a proximity problem: the grids built in prior centuries weren't designed for 100 MW training clusters or continuous inference workloads. As demand compounds, power procurement has become the first and most immovable bottleneck.
Abundance begins when compute meets generation. Locating data infrastructure at the source - hydro, wind, solar, nuclear - collapses cost and latency simultaneously. It converts stranded or intermittent energy into productive intelligence. The leverage isn’t in megawatts reserved, but in megawatts within reach.
Powered Land > Land
Land itself isn’t scarce. Permitted, powered, fiber-connected land is. The average hyperscale site takes 24–36 months from acquisition to activation, with lead times driven by substation interconnection and transformer delivery. Local permitting, cooling logistics, and power density constraints amplify cost and delay.
Abundance comes from inventory - land pre-positioned with both power and interconnection. "Powered land" is the new yield asset: immediate capacity that shortens time-to-deploy from years to months. The shift mirrors the renewable industry's maturation - owning shovel-ready sites, as Radiant does, becomes the foundation for compounding infrastructure scale.
Capital
The cost of compute is fundamentally a function of the cost of capital. Venture funding models price infrastructure at roughly a 20% cost of capital; project-finance structures price it at 5% or less. If that 15-percentage-point spread feels massive to you, it should.
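To make that spread concrete, here is a minimal sketch of how the discount rate alone moves the effective hourly cost of a GPU, amortized with a standard capital recovery factor. The capex figure and asset life are hypothetical round numbers for illustration, not Radiant pricing:

```python
def capital_recovery_factor(rate, years):
    """Annual payment per unit of capital, amortized over the asset life."""
    return rate / (1 - (1 + rate) ** -years)

def hourly_cost(capex, rate, years, hours_per_year=8760):
    """Effective cost per hour to recover capex at a given cost of capital."""
    return capex * capital_recovery_factor(rate, years) / hours_per_year

CAPEX = 30_000  # hypothetical all-in cost of one GPU slot, USD
LIFE = 5        # hypothetical depreciation horizon, years

venture = hourly_cost(CAPEX, 0.20, LIFE)  # ~20% venture-style capital
project = hourly_cost(CAPEX, 0.05, LIFE)  # ~5% project-finance capital

print(f"venture-backed:  ${venture:.2f}/GPU-hour")   # ~$1.15
print(f"project finance: ${project:.2f}/GPU-hour")   # ~$0.79
```

Under these assumptions, cheaper capital alone cuts the break-even price per GPU-hour by roughly a third before any operational efficiency is counted.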
Abundance requires long-dated, low-cost capital that behaves like infrastructure investment, not speculation. Radiant is backed by Brookfield, which blends real-asset discipline with data-era growth logic. The result is that Radiant is turning AI infrastructure into an asset class - yield-bearing, collateralized and stable.
That changes the economics for everyone. And it starts today.
Infrastructure
Global demand for usable data-center shells runs years ahead of supply. Transformers, chillers and skilled engineering, procurement and construction talent are the gating items. Scarcity here is temporal: even with capital and hardware, you can't build time.
Abundance means pre-positioning. Ready shells, such as those Radiant is building - physically complete, power-enabled and modular - become inventory that can be activated on demand. They de-risk deployments, compress lead times and turn compute expansion into a logistics problem instead of a construction problem. The new metric isn't PUE (power usage effectiveness) - it's TTI: Time-to-Intelligence.
Compute
AI systems are manufactured according to demand - but customers continue to underforecast and end up doubling or tripling their orders six months in. There are limits to how well manufacturers can absorb that.
More important, and more solvable, is the fact that many clusters run at 50–60% effective occupancy because scheduling, orchestration and software fragmentation leave capacity stranded. If you aren't running a business on your GPUs, your utilization might be below 20%. The gap between theoretical and realized performance is now the biggest cost center in AI infrastructure.
Abundance begins when software fixes utilization.
Radiant's platform - built on Ori's AI Cloud stack - drives higher sustained GPU utilization through intelligent job packing, queue optimization and workload-aware orchestration. The Ori AI Cloud runs at 85% utilization - a relative improvement of 70% or more in most cases, even against organizations solely focused on the utilization challenge. For end users, that means more GPU availability, shorter queue times and lower cost per trained parameter.
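As a back-of-the-envelope check on that utilization math (illustrative list price, not a quote for any specific cluster), moving from a 50% baseline to 85% sustained utilization is a 70% relative gain, and it lowers the effective cost of every useful GPU-hour proportionally:

```python
def effective_cost(list_price_per_hour, utilization):
    """Cost per *useful* GPU-hour: idle time still gets paid for."""
    return list_price_per_hour / utilization

PRICE = 2.00  # hypothetical list price per GPU-hour, USD

baseline = effective_cost(PRICE, 0.50)  # typical fleet, ~50% occupancy
improved = effective_cost(PRICE, 0.85)  # well-orchestrated fleet

relative_gain = (0.85 - 0.50) / 0.50    # relative utilization improvement

print(f"baseline: ${baseline:.2f} per useful GPU-hour")  # $4.00
print(f"improved: ${improved:.2f} per useful GPU-hour")  # ~$2.35
print(f"utilization gain: {relative_gain:.0%}")          # 70%
```

The point of the sketch is that utilization acts as a divisor on cost: every idle hour is paid for but produces nothing, so the same hardware at 85% occupancy delivers useful compute at a little over half the effective price of the 50% baseline.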
Software is key, but relationships matter too. Radiant integrates NVIDIA's full stack - from NVIDIA CUDA and the NVIDIA Collective Communications Library (NCCL) to NVIDIA computing platforms and network fabrics. By designing to NVIDIA reference architectures, Radiant can build rapidly, and customers can scale models and applications across the broad NVIDIA ecosystem, supporting workloads in every industry, including healthcare, manufacturing, financial services, retail and robotics.
Software
Power, land, capital and compute already exist; the challenge is that they often operate in silos. What has been missing is the intelligence to connect them - which makes orchestration essential. Radiant leverages Ori's control plane as the connective tissue of abundance.
It unifies power, hardware and capital into a single deterministic system where workloads are scheduled against real energy, real topology and real economics. It is dependency-free, built entirely in-house and engineered to run on multiple architectures.
Ori's software does not just manage infrastructure; it optimizes it, continuously driving higher GPU utilization, energy efficiency and cost predictability across every node. It turns fragmented resources into a coherent global fabric.
This is the layer that transforms scarcity into scale. It is where AI infrastructure stops being managed and starts being governed by intelligence itself.
Welcome to the Age of AI Abundance
Abundance is more than efficiency - it’s freedom.
When compute is scarce, only the largest players can afford intelligence. When it’s abundant, creation compounds everywhere.
The next era of AI won’t be defined by who owns the most GPUs, but by who can make intelligence accessible without compromise - sovereign, affordable, and always on. That is the world Radiant exists to build - abundant intelligence, everywhere, for everyone who can use it.
