[REQUEST FOR FEEDBACK] Allo Arenas

Allo Arenas

A Bittensor-inspired version of Allo.capital that evolves the best possible capital allocation tooling

TLDR:

Bittensor is a decentralized network that incentivizes development through a market-based system using its TAO token and Proof of Intelligence. Allo.capital could apply a similar model to capital allocation innovation by creating “Allo Arenas”—competitive subnets where novel funding mechanisms are tested, evaluated, and rewarded based on performance. While a dedicated ALLO token could enhance alignment and governance, a tokenless or hybrid approach leveraging existing infrastructure is also viable.

What is Bittensor?

Bittensor was recommended to us by Juan Benet in February as one of the most innovative blockchain plays out there. It is a groundbreaking decentralized network that sits at the intersection of blockchain technology and artificial intelligence, creating a peer-to-peer market for digital services (trading, compute networks) where participants can collaborate, train, share, and monetize intelligence.

The network operates through a unique consensus mechanism called Proof of Intelligence (PoI), which rewards participants based on the value of their contributions to the collective intelligence. Unlike traditional blockchain networks that use Proof of Work or Proof of Stake, Bittensor evaluates the quality and value of machine learning outputs.

At the core of Bittensor’s ecosystem is TAO, its native cryptocurrency, which serves multiple functions:

  • Rewarding miners who contribute computational resources and AI models
  • Compensating validators who evaluate model quality
  • Enabling users to access and extract information from the network
  • Facilitating governance through staking

Bittensor’s architecture is organized into subnets: specialized domains where miners contribute computational resources to solve tasks defined by each subnet while validators evaluate their performance. This structure creates a competitive environment that drives continuous innovation and improvement in capabilities.

By democratizing access to development and creating an incentivized framework for collaboration, Bittensor accelerates the advancement of intelligence generating technology while ensuring that rewards are distributed based on the value contributed to the network.

What is Allo.capital?

Allo.capital is a venture focused on building and supporting the capital allocation layer of the tokenized internet. It’s a spinout of Gitcoin, a funding festival that solves Ethereum’s biggest problems by running $1m/quarter grant rounds for the Ethereum community.

Allo.capital specifically focuses on catalyzing a network of hackers, thinkers, and doers to innovate on on-chain capital allocation mechanisms. In the past it pioneered Quadratic Funding, and has also explored Retro Funding and Deep Funding. Over the next 2-10 years, we believe that trillions of dollars of assets will be tokenized. We aim to build the capital allocation layer for this next generation of the internet by supporting builders of funding infrastructure with the resources they need to succeed.

Allo Arenas: How Allo.capital Could Create a Bittensor-style Arena for Capital Allocation Tools

Bittensor has successfully created an ecosystem where AI models compete and collaborate to continuously improve, with rewards distributed based on the value contributed. Allo.capital could apply this model to capital allocation by creating a similar competitive arena for evolving optimal capital allocation tools (Allo Arenas). Here’s how:

Proposed Implementation:


  1. Subnet Model for Capital Allocation
  • Create specialized subnets focused on different capital allocation strategies (e.g., grant distribution, investment evaluation, risk assessment)
  • Each subnet would contain miners developing and deploying novel capital allocation algorithms and validators evaluating their performance

  2. Proof of Allocation Intelligence (Proof of AI)
  • Develop a consensus mechanism that evaluates capital allocation strategies based on predefined metrics (value flowed, ROI, distribution efficiency, community satisfaction)
  • A simple mechanism could be “proof of flow”
  • Alternatively, have each subnet tokenize and judge success by how well that token performs
  • Allocate rewards to strategies that consistently outperform others

  3. Synthetic Capital Markets
  • Create simulated environments where allocation strategies can be tested using historical data or synthetic scenarios
  • Allow real-time competition between strategies to identify optimal approaches for different contexts

  4. Progressive Learning System
  • Enable successful strategies to build upon one another through a knowledge-sharing framework
  • Create incentives for continuous improvement and adaptation to changing market conditions
  • Identify and reduce friction points for Allo builders, enabling faster evolution

  5. Real-world Implementation Track
  • Once strategies prove successful in simulated environments, provide pathways to deploy them with actual capital (perhaps via GG)
  • Create a feedback loop where real-world performance influences future development
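To make the Proof of Allocation Intelligence step concrete, here is a minimal sketch of a composite scoring function for allocation strategies. Everything in it is an assumption for illustration: the metric weights, the `StrategyResult` fields, and the toy numbers are placeholders, not a real Allo or Bittensor API.

```python
from dataclasses import dataclass

@dataclass
class StrategyResult:
    """Hypothetical outcome of one allocation strategy in a simulated round."""
    value_flowed: float   # total capital routed to projects
    roi: float            # return on allocated capital (0.10 = 10%)
    gini: float           # distribution inequality, 0 (equal) to 1 (concentrated)
    satisfaction: float   # community satisfaction score in [0, 1]

# Assumed weights -- in practice these could be set by governance vote.
WEIGHTS = {"flow": 0.3, "roi": 0.3, "fairness": 0.2, "satisfaction": 0.2}

def score(result: StrategyResult, max_flow: float) -> float:
    """Combine the predefined metrics into a single reward-eligibility
    score in [0, 1]; higher scores earn a larger share of subnet rewards."""
    return (
        WEIGHTS["flow"] * (result.value_flowed / max_flow)
        + WEIGHTS["roi"] * max(0.0, min(result.roi, 1.0))
        + WEIGHTS["fairness"] * (1.0 - result.gini)  # lower inequality scores higher
        + WEIGHTS["satisfaction"] * result.satisfaction
    )

# Toy comparison of two hypothetical strategies in one round.
qf = StrategyResult(value_flowed=900_000, roi=0.15, gini=0.35, satisfaction=0.8)
direct = StrategyResult(value_flowed=1_000_000, roi=0.05, gini=0.60, satisfaction=0.5)
max_flow = 1_000_000
print(score(qf, max_flow), score(direct, max_flow))
```

In practice each metric would come from a subnet's validators rather than hard-coded values, and the weights themselves could be a governance surface.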

Benefits:

  • Evolutionary Improvement: By creating competitive pressure between different capital allocation mechanisms, the system would naturally evolve toward increasingly effective strategies
  • Context-aware Solutions: Different strategies could emerge for different scenarios (early-stage funding, mature project governance, emergency response)
  • Transparency and Trust: All allocation decisions would be traceable and explainable, increasing confidence in the system
  • Community Governance: The community could vote on which metrics should be prioritized in evaluating allocation strategies

Conclusion

By applying Bittensor’s competitive intelligence model to the challenge of capital allocation, Allo.capital could create an arena where diverse strategies compete, collaborate, and continuously improve, ultimately developing more efficient and effective ways to distribute resources across the Web3 ecosystem.

Appendix A - The ALLO Token Question: Necessary or Not?

Potential Role of an ALLO Token

While Allo Capital currently operates without its own dedicated token, launching an ALLO token could provide specific advantages for a Bittensor-style arena:

  1. Incentive Alignment: A native token could directly incentivize participants who develop and improve capital allocation strategies, similar to how TAO rewards AI model contributions. (TAO is a top-50 token.)
  2. Specialized Governance: An ALLO token could enable weighted voting on which capital allocation strategies should receive more resources and which metrics should be prioritized.
  3. Value Capture: The token could capture value generated by successful allocation strategies, distributing it to developers, validators, and other ecosystem participants.
  4. Network Effects: A token could help bootstrap the network by attracting early participants through token incentives.

ALLO Token vs. TAO: A Comparison

If implemented, an ALLO token would differ from TAO in several key ways:

| Feature | Potential ALLO Token | TAO (Bittensor) |
| --- | --- | --- |
| Primary Purpose | Incentivize capital allocation innovation | Incentivize AI model contribution |
| Value Metric | Capital efficiency, ROI, distribution fairness | Intelligence contribution quality |
| Economic Model | Could use a different supply model focused on sustainable funding | Fixed supply of 21 million with Bitcoin-like halvings |
| Staking Dynamics | Would likely stake to validate allocation strategies | Stakes to run validators that evaluate AI models |
| Target Participants | DeFi developers, economists, governance experts | AI/ML developers, compute providers |

The Non-Token Alternative

A compelling alternative would be to create a tokenless system that leverages existing cryptocurrencies:

  1. Multi-token Support: Accept various tokens for participation (ETH, GTC, DAI, etc.) without creating a new token.
  2. Fee-based Model: Charge small fees on successful capital allocations, which fund the system’s ongoing development.
  3. Reputation System: Use non-transferable badges or scores to track participant contributions instead of financial tokens.
  4. Integration with Gitcoin: Utilize the existing Gitcoin infrastructure and GTC token for governance decisions.

Recommendation

For Allo.capital to successfully implement a Bittensor-style arena for capital allocation tools, a dedicated token is not strictly necessary but could provide advantages for network growth and alignment.

The most practical approach might be a hybrid model:

  • Begin without a token, leveraging existing infrastructure
  • Measure adoption and effectiveness of the arena
  • If the system proves valuable and a token would enhance its utility, introduce an ALLO token with careful tokenomics designed specifically for capital allocation optimization
  • Ensure the token has genuine utility beyond speculation, with mechanics that directly tie its value to the effectiveness of the allocation strategies it supports
3 Likes

This is cool. I like the idea of starting with synthetic capital markets tested against historical data.

One way to do this: ask participants to submit agents before the project set and scoring metrics are revealed. Then use a simple randomization function to select the projects and the evaluation criteria.

Agents get access to real historical data as of the starting point, and must allocate capital based on that snapshot. Then you “fast-forward” 6–12 months and score each allocator based on how well their decisions align with actual outcomes.

You could repeat this 80,000 times with different combinations of projects and metrics, creating a leaderboard of strategies that generalize well.
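That submit-then-reveal backtest could be sketched roughly like this. The agent interface, the random project sampling, and the outcome-weighted scoring are all assumptions for illustration; a real run would use historical funding data for the outcomes.

```python
import random

def run_round(agents, projects, horizon_outcomes, rng):
    """One backtest round: each agent allocates a unit budget across a
    randomly selected project set, then is scored against realized outcomes."""
    sample = rng.sample(projects, k=min(5, len(projects)))
    scores = {}
    for name, allocate in agents.items():
        alloc = allocate(sample)  # {project: fraction of budget}, sums to 1.0
        # Score = outcome-weighted allocation ("did capital go where value emerged?")
        scores[name] = sum(alloc.get(p, 0.0) * horizon_outcomes[p] for p in sample)
    return scores

def leaderboard(agents, projects, horizon_outcomes, n_rounds=1000, seed=42):
    """Repeat rounds with different project combinations and rank agents
    by total score, surfacing strategies that generalize well."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in agents}
    for _ in range(n_rounds):
        for name, s in run_round(agents, projects, horizon_outcomes, rng).items():
            totals[name] += s
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Toy data: projects and their (later-revealed) 6-12 month outcome scores.
projects = [f"proj_{i}" for i in range(10)]
outcomes = {p: i / 10 for i, p in enumerate(projects)}

agents = {
    "uniform": lambda ps: {p: 1 / len(ps) for p in ps},
    "top_heavy": lambda ps: {max(ps): 1.0},  # bets everything on one project
}
print(leaderboard(agents, projects, outcomes, n_rounds=100))
```

The key property is that agents only see the pre-snapshot data (`sample` here), while scoring uses outcomes revealed after the fact.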

2 Likes

x-posting some feedback from @drnick from x

source

https://x.com/DrNickA/status/1920929661279719490

Interesting idea. It sounds like the goal is to discover the efficiency of new mechanisms by providing an environment where:

  • Devs can compare the smart contract they’re working on against a suite of mechanisms to see how it performs
  • Researchers can compare mechanisms with different datasets and explore why and where some outperform others

Using AI and counterfactuals (this was explored last year for checking the impact of funding: https://counterfactually.org) could help estimate what-if scenarios. Shapley values might also be useful for attributing the impact of individual features in each mechanism.

What could a concrete example of an arena round between, let’s say, DirectGrants, QF, and RetroFunding look like? What do you imagine the output to be? If you were to manually do this evaluation, how would it be presented?

For example, maybe you simulate each mechanism across the same dataset of project applicants and contributors. Each mechanism’s outcomes could then be evaluated against predefined metrics like:

  • Total capital efficiency
  • Distribution fairness (Gini coefficient?)
  • Contributor sentiment or post-round impact
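For the fairness metric, the Gini coefficient over per-project funding amounts is straightforward to compute; a minimal sketch:

```python
def gini(amounts):
    """Gini coefficient of a funding distribution: 0 = perfectly equal,
    approaching 1 = all capital concentrated in one project."""
    xs = sorted(amounts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula via the rank-weighted sum of the sorted amounts.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([100, 100, 100, 100]))  # 0.0 -- equal split
print(gini([0, 0, 0, 400]))        # 0.75 -- highly concentrated
```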

Now, let’s say someone creates a QF mechanism with the addition of a yield vault on the matching pool - what would change? Would it change the incentives, increase funds available, or create new risks? The arena seems like a great space to model and stress-test these variations before deploying in the real world.

Also, something to consider is that many mechanisms have different input structures, for example:

  • Direct Grants has a fixed amount allocated per decision, often by a centralized team or DAO.
  • QF relies on user donations plus a matching pool.
  • Retro Funding distributes a pool based on votes from a review panel, often informed by past performance.

It would probably help to normalize these structures as much as possible.
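One way to normalize those structures is a shared interface every mechanism implements: given a pool and per-project "signals" (donations for QF, committee votes for Direct Grants, reviewer scores for Retro Funding), return an allocation. A hypothetical sketch, not any existing Allo interface:

```python
from typing import Callable, Dict, List

# A mechanism, normalized: (pool, per-project signals) -> {project: amount}.
# "Signals" abstracts over donations (QF), committee votes (Direct Grants),
# or reviewer scores (Retro Funding).
Mechanism = Callable[[float, Dict[str, List[float]]], Dict[str, float]]

def quadratic_funding(pool: float, signals: Dict[str, List[float]]) -> Dict[str, float]:
    """QF: match proportional to the square of the sum of square roots of donations."""
    weights = {p: sum(d ** 0.5 for d in ds) ** 2 for p, ds in signals.items()}
    total = sum(weights.values()) or 1.0
    return {p: pool * w / total for p, w in weights.items()}

def direct_grants(pool: float, signals: Dict[str, List[float]]) -> Dict[str, float]:
    """Direct grants: split the pool proportionally to committee vote totals."""
    votes = {p: sum(ds) for p, ds in signals.items()}
    total = sum(votes.values()) or 1.0
    return {p: pool * v / total for p, v in votes.items()}

# Same signal totals, different crowd shape: QF favors the broad-based project.
signals = {"a": [1.0, 1.0, 1.0, 1.0], "b": [4.0]}
for mech in (quadratic_funding, direct_grants):
    print(mech.__name__, mech(10_000, signals))
```

With a common signature like this, an arena round can swap mechanisms in and out against the same dataset and compare their allocations directly.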

Yes. Efficiency and efficacy.

I’m not sure yet; it feels like something we could ask ourselves during the Gitcoin 3.0 Design Sprint.

Yeah the apples-to-oranges of these different structures presents a design challenge, but maybe also an opportunity to climb multiple local maxima.

1 Like

was talking to someone about allo arenas last night, and they made a great point about a benefit of allo arenas that i’ve not yet listed. most dapp founders in ethereum don’t have go-to-market skills. with allo arenas they just have to write the software, and the distribution is taken care of for them by gitcoin/allo arenas. it allows them to focus on a narrower subset of things they can do, instead of having to go the whole startup founder route and learn distribution, branding, and all that entails

Reminds me of the strategy/mechanism marketplace ideas we’ve discussed and this arena would be the early OpenAI gym (GitHub - openai/gym: A toolkit for developing and comparing reinforcement learning algorithms.) for the mechanisms.

Love the high-level concept, creating a competitive environment to evolve capital allocation tooling is exciting and definitely worth exploring.

That said, I think we need to recognize how much messier this gets in practice. In Bittensor, the outputs being judged are machine learning models—you can (more or less) quantify and rank their performance. But capital allocation is way more subjective. It’s not just about how much value flows, it’s about:

  • Who you’re funding
  • How they raise
  • What kind of impact they generate
  • And over what time horizon.

And honestly, sometimes the “best” outcomes aren’t even visible for years.

If we’re going to build Allo Arenas, we’ll need to think really hard about how strategies are evaluated. Otherwise, we risk optimizing for what’s measurable instead of what actually matters, which often isn’t measurable. I’m super into the idea of evolving funding mechanisms in the open but we can’t pretend it’s as simple as outperforming a benchmark. There’s a ton of nuance here.

Still, AI has a lot of potential to be used to mitigate some of these issues… and I’m hyped to see this direction. We absolutely need competition to evolve this field & if anyone’s gonna pull it off, it’s Allo!

Oh, and for the token.. I would say stick with GTC and bring it home for the long term hodlers! The believers that stuck with Gitcoin through all the ups and downs.

2 Likes

In what context for machine learning?

Would this be applied to all subnets, or can each have different metrics and levels they’re focused on?

Does this have any legal ramifications if a subnet tokenizes?

Would love some examples here.

What could this knowledge sharing framework look like? There’s many potential frameworks to pull from that the community has proposed.

Can you provide an example of some incentives?

What do you feel are the current friction points? Would love to reconcile this with the conversations we’ve been having as Allo builders. I hope the call this upcoming Tuesday to discuss the protocol will provide clarity and alignment here.

Why is it GG not Allo Capital?

Not sure what this means, who’s paying the fee?

100% The initial experiment will help ground the token in strong data and alignment with the goals of Allo.

Overall I feel the Arena approach is intriguing and, with a strong architecture and more clarity in a couple of areas, seems like a viable path to capital mechanism building and validation.

A couple of high-level thoughts. One is how subnets are created, updated, and destroyed. I don’t see details there and would self-plug my Fa-Fo-Fu-Fi proposal, with an emphasis on the Fo (Form) and Fu (Function) aspects of the proposal and how they have a lifecycle.

The last one is around the token. I think taking another look at Paul Gavin’s proposal for a 2-token model might be interesting, where this initial token is pure governance.


Where I feel AI can help the most is compressing information without losing quality and then from there a mix of subjective and objective measurement can be placed.

✅ Summary of Feedback from REDACTED private chat

High-Level Alignment

  • Shared Goal: Both parties want to build capital allocation mechanisms that are:

    • High-quality in outcomes
    • Simple in design
    • Decentralized (minimize reliance on central decision-makers)
    • Resistant to harmful social dynamics/games
  • REDACTED is framing this as a way to embed funding mechanisms directly into L2s/DeFi protocols.

  • You framed it as an evolutionary container to breed and select better mechanisms—he agrees the direction is similar.

Key Feedback Points

  • Start with what’s missing, not market demand:
    REDACTED argues that with limited capital, it’s inefficient to be unopinionated or overly responsive to current market needs. Instead, focus on high-leverage primitives the world doesn’t yet have.

  • Deep Funding > Metrics:
    REDACTED prefers deep funding over retro metrics-based funding, because it avoids Goodhart’s Law (the corruption of metrics once they become targets). Deep funding encourages constant evolution of how “impact” is measured.

  • Indirection is key:
    Both deep funding and metric-based retro funding use indirection (voting on metrics or mechanisms instead of directly on projects), which reduces the surface area for manipulation or social games.

Your Synthesis

  • You recognized deep funding as a good example of your confusion/diffusion framework.
  • You acknowledged the need to integrate this thinking more deeply into Gitcoin 3.0, especially the idea of indirect, evolutionary design rather than fixed metric targets.

More anon feedback:

I think the article punted on the hard question: what determines whether a subnet was successful in its capital allocation strategy or not?
That would ultimately determine the rewards, so until there’s a good answer to this we can’t move ahead fully.
How do you know if value flowed well, distribution is efficient, or the community is satisfied? Defining the “resolution criteria” for these things is the hard question to answer that can make this whole thing viable.

more

It looks sound. The thing about allocation mechanisms is that they can be hard and complex to evaluate before there is actual performance, unlike other digital commodities.

So I would think like, what would the requirements be to make sure these mechanisms are rigorously and correctly evaluated

Bittensor uses the Yuma consensus, which in modest words computes rewards for miners by favoring value evaluations that are in agreement with the subjective evaluations produced by other subnet validators.

So how we weight objective vs. subjective evaluations is another question.
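As a rough intuition for the consensus mechanism mentioned above, here is a toy stake-weighted-median clipping sketch. This is an illustrative stand-in for the idea of pulling outlier evaluations toward consensus, not Bittensor’s actual Yuma algorithm.

```python
def weighted_median(values, weights):
    """Stake-weighted median: the value where cumulative stake crosses 50%."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    acc = 0.0
    for value, weight in pairs:
        acc += weight
        if acc >= total / 2:
            return value
    return pairs[-1][0]

def consensus_clip(scores_by_validator, stakes):
    """For each strategy, clip every validator's score to the stake-weighted
    median across validators, so out-of-consensus evaluations lose influence."""
    validators = list(scores_by_validator)
    strategies = scores_by_validator[validators[0]].keys()
    clipped = {v: {} for v in validators}
    for strat in strategies:
        median = weighted_median(
            [scores_by_validator[v][strat] for v in validators],
            [stakes[v] for v in validators],
        )
        for v in validators:
            clipped[v][strat] = min(scores_by_validator[v][strat], median)
    return clipped

# Three equal-stake validators score one strategy; the outlier (0.9) is
# pulled down to the consensus value.
scores = {"v1": {"qf": 0.9}, "v2": {"qf": 0.5}, "v3": {"qf": 0.6}}
stakes = {"v1": 1.0, "v2": 1.0, "v3": 1.0}
print(consensus_clip(scores, stakes))
```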

Can we start by looking at total value flowed, then evolve to more complex metrics?

I mean, but total value flowed doesn’t tell you how effective/fair that system is, right? It just tells you the first preference of the community

Which could be led by narrative, etc..

Won’t speak to the to-token-or-not-to-token question.

  1. Fun experiment, run it!
  2. Will this face all the same engagement issues as other DAO plays? If validators are human, they’re going to be a bottleneck. If not, they’re effectively evals, and then who builds those evals (it’s a pretty specific and highly valuable skill that typically does not come for free).
  3. Either way, it seems like the kind of thing you’d game if you knew the reward structure.
  4. This is 100% happening outside of Web3 (honestly prob before even the current AI boom, feels like Delphia maybe tried some version of this a while ago, probably more in-house versions), so a good question to ask is how Web3 primitives could do this better. More broadly, better understand / do some background on how these evo approaches compare in this domain with end-to-end learning methods or analytic Black Scholes style equation modeling.
1 Like

More anon feedback:

I like the direction! The main thing that I’m skeptical about is the ability to evaluate capital allocation strategies.

e.g. “Develop a consensus mechanism that evaluates capital allocation strategies based on predefined metrics”

We have thought about this a lot and I think the general conclusion is: the more short-term and tightly scoped a grant allocation mechanism is, the easier it is to evaluate. For anything long term it becomes incredibly hard. There’s just a ton of noise, it becomes almost impossible to imply any causality, etc.

It would be a dream to have a “testnet” type of model to compare different mechanisms, but I currently can’t see how that’d be possible.

The simpler approach of comparing allocation mechanisms based on community satisfaction with the outputs becomes incredibly susceptible to bias.