Everything Becomes an Optimization Problem

If you can define an optimization goal and a reliable evaluation system, you can open almost any problem to global iteration where humans and machines can compete to find better solutions. 

This works in almost any domain: model training efficiency, software engineering, protein folding, financial forecasting. If you can verify a solution, the problem is fair game.

François Chollet described this dynamic for agentic coding. The engineer defines the objective and constraints, then an optimization process searches for solutions until the objective is met. This essentially creates a black-box optimizer, where you don’t care how the solution was found, as long as it passes the tests.

Andrej Karpathy recently demonstrated this in a more literal sense. He set the optimization goal of training a language model to reach GPT-2-level performance as fast as possible, then left an agent to work on it for two days. It ran roughly 700 experiments autonomously and found 20 improvements that produced an 11% efficiency gain. Karpathy has done this kind of research by hand for 20 years. In two days, with no human in the loop, an agent swarm beat him.

The frontier of engineering is shifting. Generating and optimizing solutions is no longer the bottleneck; those costs are collapsing to the price of compute. The hard part now is specification and market design. If you can design a verification system and economic engine so precise that optimizing for it actually solves the problem you care about, the optimizer will find the solution. Your job is just to build the test.

Now consider replacing a single optimizer with a market. Instead of one agent searching for a solution, hundreds compete against each other, with economic rewards flowing to whoever finds the best solution. Instead of Karpathy running one agent on autoresearch, imagine hundreds competing against him.

That’s what happens on a Bittensor subnet. A subnet is an economic verification system. Any objective function can be translated into an incentive mechanism that defines the rules of competition.

  • The subnet owner writes the rules of the competition and designs the reward function.

  • The miners optimize for the reward function, free to be creative in exploring how to generate what the subnet owner wants.

  • Validators execute the rules, judge miner performance, and verify the work.

A well-designed subnet becomes an open-market reinforcement learning environment. Where traditional RL uses reward signals to guide a single agent, a subnet uses crypto tokens to guide an entire market of competing humans and agents. Miners learn how well they're performing based on their relative crypto token rewards.
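The core of such an incentive mechanism can be sketched as a single function. This is a toy illustration under stated assumptions, not Bittensor's actual emission logic: the function name is invented, and real subnets layer rank weighting, smoothing, and cross-validator consensus on top of the basic idea shown here, which is that a fixed token emission is split among miners in proportion to validator-assigned scores.

```python
def distribute_rewards(scores: dict[str, float], emission: float) -> dict[str, float]:
    """Toy incentive mechanism: split a fixed token emission among miners
    in proportion to the scores validators assigned them. The relative
    payout is the reward signal each miner learns from."""
    total = sum(scores.values())
    if total == 0:
        # No miner produced useful work this round; emit nothing.
        return {miner: 0.0 for miner in scores}
    return {miner: emission * score / total for miner, score in scores.items()}
```

A miner scoring three times better than a rival earns three times the emission, so improving relative performance, by any method, is the only strategy that pays.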

The Synth subnet on Bittensor is a working example of this. Miners are tasked with producing probability distributions of future asset prices (e.g., the probability distribution of the BTC price in one hour). Validators grade outputs against market data using the Continuous Ranked Probability Score (CRPS), a well-known method for measuring the accuracy of probabilistic forecasts. A lower score means your forecasted distribution matched how markets actually played out. Rewards are weighted toward consistent high performers. The Synth team doesn't tell miners how to do the work. They set the rules of the environment and let the market figure out the best solutions. Today, $6k per day is flowing to miners competing on that single objective.
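For forecasts submitted as samples, CRPS has a standard empirical estimator, and it fits in a few lines. This is a minimal sketch of the general formula, not Synth's actual scoring code; the function name and inputs are illustrative.

```python
def crps_empirical(samples: list[float], observation: float) -> float:
    """Empirical CRPS for a sample-based forecast of a scalar outcome.
    CRPS = mean |x_i - y| - (1/2) * mean |x_i - x_j|
    The first term rewards putting mass near the realized value y; the
    second rewards honest spread, so an overconfident wrong forecast
    scores worse than a well-calibrated uncertain one. Lower is better;
    0 means every sample equals the realized value."""
    m = len(samples)
    accuracy = sum(abs(x - observation) for x in samples) / m
    spread = sum(abs(a - b) for a in samples for b in samples) / (2 * m * m)
    return accuracy - spread
```

For instance, a two-sample forecast of [0, 2] scores 0.5 if the price lands at 1 but 3.5 if it lands at 5: miners that consistently place probability mass where prices actually go earn lower scores and, under Synth's rules, larger rewards.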

Most miners on Synth today are still humans using AI, but the open market design treats them exactly like commoditized compute (i.e., an agent). In nearly every way, subnets are already better designed for agentic workers of the not-so-distant future than humans. 

Traditional companies see the writing on the wall and are trying to converge on this architecture as well. Many are now explicitly hiring for agents. But over time, we’ll recognize these as skeuomorphic redesigns, the “newspaper on the internet” version of a company reorganization.

Operating an AI-native company in the future will feel exactly like operating a subnet. Spending the time to design a robust, autonomous workforce marketplace will be just as important as shipping new products. If you’re a founder waiting for the right infrastructure to emerge for this new world, it already exists, come build on Bittensor. 


This content is provided for informational purposes only and does not constitute investment advice or a recommendation to buy or sell any security. Unsupervised Capital holds a position in TAO and may hold positions in the subnet tokens or other digital assets discussed herein and may buy, sell, or change positions at any time. Past performance is not indicative of future results. Digital assets involve substantial risk, including potential total loss of capital. Consult your own advisers regarding any investment decisions.
