This Time is Different: Why Decentralized Compute is Finally Ready


Decentralized compute is one of crypto’s oldest dreams. As early as 2017, projects like Golem and iExec emerged with the vision of becoming decentralized cloud providers—resilient, permissionless marketplaces for general-purpose compute. It was a romantic idea. And like most romantic ideas in crypto, it never came to fruition.

While there were many reasons for this, the core issue was assuming startups and enterprises would abandon hyperscalers like AWS or Azure. Over the last decade, these platforms built massive switching costs through ecosystem lock-in. Managed services, proprietary tooling, and deep integrations made them nearly impossible to leave. Betting on a mass exodus to a cheaper, unproven alternative misunderstood the nature of the cloud entirely.

Then came ChatGPT. As the arms race to train larger models kicked off, GPU shortages followed, and just like that, decentralized compute got a second wind in 2023. Founders and VCs rushed in, hungry to build new compute networks for the coming tsunami of AI inference. Networks launched. Supply showed up. But demand didn't, because the best models were all closed-source. AI compute networks had compute to offer but nothing worth running on it.

But now a third opportunity is emerging, and this time the conditions are different. That's why we've taken a position in Targon (Subnet 4 on Bittensor), an emerging decentralized AI compute network.

What’s Different This Time

Over the past year, a set of structural macro, geopolitical, and technological shifts has aligned, creating real tailwinds for decentralized compute networks (DCNs). The biggest:

  • Open-source inference is booming, led by Chinese labs releasing high-performance models that people actually want to use and customize.

  • As models commoditize, Inference-as-a-Service (IaaS) platforms like Fireworks and Together AI are capitalizing by serving intelligence through open-source models at lower prices than closed alternatives from OpenAI or Anthropic.

Business is surging for these open inference providers, but so is the competition. As more IaaS platforms race to serve similar open-source models, differentiation collapses and margins follow. This forces inference providers to optimize costs, and their biggest lever is GPU spend.
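To see why GPU spend is the lever, consider the unit economics. The numbers below are purely hypothetical, but the shape of the math holds: once competition sets the price per token, the GPU rate is what determines margin.

```python
# Toy margin model for an IaaS platform. Every number here is a
# hypothetical assumption, not a figure reported by any provider.

price_per_m_tokens = 0.90        # $ charged per 1M tokens served
gpu_cost_per_hour = 2.00         # $ per GPU-hour rented from an AI cloud
tokens_per_gpu_hour = 4_000_000  # model throughput on one GPU

gpu_cost_per_m = gpu_cost_per_hour / (tokens_per_gpu_hour / 1_000_000)
margin = price_per_m_tokens - gpu_cost_per_m
print(f"GPU cost: ${gpu_cost_per_m:.2f} per 1M tokens")  # $0.50
print(f"Margin:   ${margin:.2f} per 1M tokens")          # $0.40

# A rival sourcing GPUs 25% cheaper can undercut on price while keeping
# the same margin. GPU spend, not the model, decides who wins on price.
rival_price = gpu_cost_per_m * 0.75 + margin
print(f"Rival's price at the same margin: ${rival_price:.2f}")
```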

This is where DCNs provide a structural advantage to inference platforms. They’re designed to surface true GPU prices by operating as open, permissionless markets. And for the first time, they have a real customer: IaaS platforms under mounting pressure to cut costs.

Competitive, Open Models Now Exist

The most important open-source models today aren't coming out of Silicon Valley; they're coming out of Shenzhen, Beijing, and Shanghai. In just the last year, Chinese labs like DeepSeek, Moonshot AI, and MiniMax have begun releasing models that are commercially usable and increasingly state-of-the-art.

According to Artificial Analysis’s latest Intelligence Index, the top three open-source AI models all come from China: Qwen3, DeepSeek R1, and GLM-4.5. And these models are actually getting used: DeepSeek alone now accounts for nearly 5% of all generative AI traffic.

China will likely remain at the frontier of open-source models due to various structural advantages. Game theory suggests they’ll keep open-sourcing as a strategic play to build international credibility, narrow the performance gap with U.S. leaders, and extend soft power through global adoption. Essentially, there will always be one side open-sourcing to catch up.

Adding to that momentum, the Trump administration's recent AI Action Plan encourages more open-source and open-weight model development from U.S. labs.

These are all massive tailwinds for open-source inference providers, because they ensure a continuous supply of usable models that people actually want to run.

Inference Platforms Are the Perfect Customer for DCNs

IaaS platforms are quietly becoming the backend of the AI economy. Startups like Fireworks, Together AI, DeepInfra, and Replicate don't pre-train their own models. Instead, they host, fine-tune, and serve open-source models through simple APIs, letting developers plug in and start building without managing infrastructure.

Thanks to a recent wave of high-quality open-source models, business is booming:

  • Together AI hit $100M in annualized revenue as of February 2025, up from $30M a year earlier.

  • Fireworks serves 5 trillion tokens every day.

  • DeepInfra is serving tens of billions of tokens/day.

IaaS platforms' edge is cost. Closed models from OpenAI, Anthropic, and xAI may be slightly better, but for most applications they're overkill, leaving companies paying for intelligence they don't actually need. The latest Qwen3 model, for example, ranks 6th overall in the Intelligence Index, just four points behind Grok-4, yet Grok-4 is roughly five times more expensive.

IaaS platforms focus purely on serving inference, skip the R&D costs of training models, and run cheap, open alternatives at scale. That lets them offer prices 5–10x lower than closed systems.

The trend is obvious: open models are getting better, and more companies are adopting them through IaaS platforms. But as models commoditize and pricing becomes the primary axis of competition, IaaS platforms are being forced to optimize their biggest cost: GPUs.

And that’s the problem. They don’t own the hardware. Instead, they rent from the same AI cloud providers—CoreWeave, Lambda, and Crusoe—all competing on how cheaply they can acquire the same infrastructure.

That’s not sustainable. To survive, IaaS platforms will need access to GPUs at their true market cost. That’s why we believe DCNs are the natural next step for IaaS platforms.

Powering the Inference Economy with Decentralized Networks

At their core, DCNs are open GPU marketplaces that enable dynamic price discovery. Instead of fixed pricing associated with bespoke, one-off rental contracts, supply and demand determine the cost of compute in DCNs, letting it converge toward its true marginal cost. That’s never existed before for GPUs.
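For intuition, here's a minimal sketch of how a double-sided GPU market clears: suppliers post asks, buyers post bids, and the clearing price emerges from matching rather than from a rate card. This illustrates price discovery in general, not any specific DCN's matching engine, and all prices are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Order:
    price: float  # $ per GPU-hour
    qty: int      # number of GPUs

def match(asks: list[Order], bids: list[Order]) -> list[tuple[float, int]]:
    """Fill the highest bids from the cheapest asks; trades happen
    wherever a buyer's bid meets or exceeds a supplier's ask."""
    asks = sorted(asks, key=lambda o: o.price)                 # cheapest supply first
    bids = sorted(bids, key=lambda o: o.price, reverse=True)   # highest demand first
    fills = []
    while asks and bids and bids[0].price >= asks[0].price:
        qty = min(asks[0].qty, bids[0].qty)
        fills.append((asks[0].price, qty))                     # clear at the ask
        asks[0].qty -= qty
        bids[0].qty -= qty
        if asks[0].qty == 0:
            asks.pop(0)
        if bids[0].qty == 0:
            bids.pop(0)
    return fills

# Idle supply undercuts a hypothetical $2.50/hr cloud rate card:
asks = [Order(1.40, 8), Order(1.75, 16), Order(2.60, 32)]
bids = [Order(2.00, 20)]
print(match(asks, bids))  # [(1.4, 8), (1.75, 12)]: price discovered, not posted
```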

Within DCNs, token incentives bring new suppliers online, solving the cold-start problem. At the same time, trustless validation mechanisms verify that workloads run correctly, preserving their integrity even on untrusted hardware. Together, these features unlock access to a vast pool of idle or underutilized GPUs, including:

  • Companies that overcommitted to long-term rental contracts and no longer need the capacity;

  • Datacenter and hardware owners stuck with last-generation GPUs with limited demand; 

  • GPUs sitting idle between jobs for short stretches.

And because DCNs cut out the middleman, there’s no AI cloud provider marking up costs, bundling services, or hiding real prices. They create the first truly open, liquid, and flexible compute market. Suppliers can monetize their GPUs without contracts, and buyers can access compute at prices that reflect actual market dynamics.

That’s why the cheapest GPUs will live on DCNs.

Targon: A New AI Compute Base Layer

Among the new generation of DCNs, Targon stands out as one that's already delivering and capitalizing on the trends outlined above.

Targon is a DCN on Bittensor (Subnet 4), designed to become the most liquid and accessible secondary market for GPU compute. The network aggregates compute, allowing anyone from individuals to datacenters to supply GPUs and set an ask price.

Manifold Labs, the subnet's operator, is also its first value-added reseller (VAR), running a full-stack IaaS platform at Targon.com. They productize the supply by delivering inference-as-a-service through a clean API and owning the full customer experience.

This structure has worked. Targon has built a liquid order book for pricing compute, enabling it to aggregate around 1,600 H200s—more than any DCN outside of Bittensor. That's nearly $50 million in deployed hardware, offered at some of the lowest inference prices on the market.

Using that compute, in the last month, Targon.com has processed over 639 billion tokens and is consistently running above 20 billion tokens per day, with more than 80% of usage paid.

Early usage came through OpenRouter, but today, most demand is from enterprise customers directly. With vendor lock-in gone and switching providers often as simple as changing a base URL, Targon.com has landed accounts by offering ~10% savings on inference bills. For AI-native businesses where every API call carries a $/token cost, the decision is easy. If they don’t like the service, they can switch back.
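To make the switching-cost point concrete: most inference providers expose OpenAI-compatible APIs, so a migration can be as small as the sketch below. The endpoint URL and model ID here are placeholders, not documented Targon values.

```python
from openai import OpenAI

# Switching providers with an OpenAI-compatible client often means
# changing one line. The base_url below is a placeholder, not a
# documented Targon endpoint.
client = OpenAI(
    base_url="https://api.example-inference.com/v1",  # swap this line to switch providers
    api_key="YOUR_PROVIDER_KEY",
)

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",  # an open model; exact ID varies by provider
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```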

What has enabled enterprise adoption is Targon’s breakthrough in confidential compute. The team built the Targon Virtual Machine (TVM), a secure runtime combining hardware-backed attestation with NVIDIA’s nvTrust SDK for GPU verification. In short, it ensures that compute jobs are run honestly and privately, even on untrusted hardware—a milestone few, if any, DCNs have achieved.
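Conceptually, the flow looks something like the sketch below: challenge the node, verify the hardware-signed report, and only then release the workload. Every name here is an illustrative stand-in, not the actual TVM or nvTrust interface.

```python
import os
from dataclasses import dataclass

# Hypothetical sketch of an attestation gate, the pattern TVM-style
# systems use before trusting remote hardware. All names are
# illustrative stand-ins, not the real TVM or nvTrust API.

EXPECTED_MEASUREMENT = "sha256:abc123"  # hash of the approved runtime image

@dataclass
class Report:
    nonce: bytes        # echoes the challenge, preventing replay
    measurement: str    # what runtime the node actually booted
    signed_by_hw: bool  # stand-in for a signature chained to a hardware root of trust

def dispatch(node, job):
    """Refuse to send work unless the node proves what it is running."""
    nonce = os.urandom(32)
    r = node.attest(nonce)
    if not (r.nonce == nonce and r.signed_by_hw
            and r.measurement == EXPECTED_MEASUREMENT):
        raise RuntimeError("attestation failed: untrusted node")
    return node.run(job)  # safe: identity, freshness, and integrity all checked

class HonestNode:
    def attest(self, nonce):
        return Report(nonce, EXPECTED_MEASUREMENT, signed_by_hw=True)
    def run(self, job):
        return f"ran {job} privately"

print(dispatch(HonestNode(), "inference batch"))
```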

Manifold Labs is just one VAR. Any other entity can tap into the network’s compute and build:

  • A competing IaaS, similar to Targon.com;

  • A fine-tuning service for enterprises wanting custom models;

  • Reinforcement learning pipelines for developing production-grade agents, similar to OpenPipe.

Manifold Labs has already shown this is possible by providing bare-metal GPU access to smaller AI labs and researchers actively training their own models.

The Bittensor Network Effect

When it comes to aggregating supply, Targon has a structural advantage we call the Bittensor network effect. Unlike stand-alone compute networks, Targon inherits Bittensor's native incentive flywheel. When any of the 100+ subnets gains traction, demand for TAO rises, since TAO is required to acquire subnet tokens. As TAO appreciates, every subnet benefits, because all subnet tokens are priced in TAO.

For Targon, a higher TAO price means a larger USD-denominated emission budget to distribute to compute providers, letting the network spend more on GPUs. Over time, we believe this creates a black-hole effect, pulling in more GPUs and more demand and reinforcing Targon's dominance.
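A back-of-the-envelope version of that flywheel, with purely hypothetical numbers:

```python
# The flywheel in one line of arithmetic. Both inputs are hypothetical,
# and subnet emissions are simplified to TAO terms for illustration.

daily_emissions_tao = 1_000  # subnet's daily emission budget, in TAO terms
tao_price_usd = 400          # market price of TAO

gpu_budget_usd = daily_emissions_tao * tao_price_usd
print(f"Daily USD budget for compute providers: ${gpu_budget_usd:,}")

# If TAO appreciates 50%, the same emission schedule buys 50% more GPUs:
print(f"After a 50% TAO rally: ${gpu_budget_usd * 1.5:,.0f}")
```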

Our Bet on Targon

Decentralized compute has waited years for the right conditions. Today, those conditions are here. Models are commoditizing. An increasing share of inference is powered by open-source models. And the platforms serving those models are competing on price, without being locked into vendors. Add infrastructure that guarantees trust and privacy, and DCNs finally have real customers with strong economic incentives to plug in.

That’s why we invested in Targon’s token. The market conditions are finally right, and Rob Myers, founder of Manifold Labs, is the kind of operator we believe can be the first to truly execute. He’s unconventional, magnetic, and obsessed. Exactly the profile we want exposure to.

With a fresh $10.5M Series A raised by Manifold Labs, Targon is positioned to become the largest secondary marketplace for compute, and the place where GPUs could be priced the cheapest. The team is already shipping real volume and scaling fast. At this pace, we expect them to reach $20M ARR by year-end.

This content is provided for informational purposes only and does not constitute investment advice or a recommendation to buy or sell any security. Unsupervised Capital holds a position in TAO and may hold positions in the subnet tokens or other digital assets discussed herein and may buy, sell, or change positions at any time. Past performance is not indicative of future results. Digital assets involve substantial risk, including potential total loss of capital. Consult your own advisers regarding any investment decisions.
