Hey all, wanted to share a litepaper we just published on ADA (Allo Domain Allocations), a framework we’ve been working on that aims to make on-chain fund distribution more targeted, transparent, and scalable.
TL;DR: ADA is a grants platform for managing funding pools around specific goals (like “clean air”, “open science”, “rural connectivity”, “transactions per month”), with expert-led governance, AI-assisted application screening, and reusable KYC attestations. It’s designed to reduce the manual overhead in grant programs, while improving traceability and impact measurement.
Domain Allocations: Instead of broad, catch-all funding pots, resources are split into purpose-driven pools with tightly scoped goals, keeping funds aligned with their intended mission (see the sketch after this list).
Expert Governance via Safe multisig: Each pool is governed by a set of domain experts, using Gnosis Safe to ensure decisions are multi-signed, on-chain, and fully auditable.
AI-Powered Screening: AI models help filter, score, and improve applications — both before and after expert review — so reviewers can focus on high-signal proposals.
KYC/Compliance Attestations: One-time, on-chain identity/tax attestations that can be reused across pools, reducing compliance friction for both grantees and funders.
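To make the moving parts above concrete, here’s a minimal sketch of what a domain pool and an application could look like as data. Every name and field below is an illustrative assumption, not something from the litepaper:

```typescript
// Hypothetical shapes for a domain pool and a grant application.
// All field names are assumptions for illustration only.
interface DomainPool {
  id: string;
  goal: string;              // tightly scoped, e.g. "clean air"
  safeAddress: string;       // the Safe that governs this pool's funds
  experts: string[];         // domain experts = signers on the Safe
  threshold: number;         // signatures required to approve a decision
  balanceWei: bigint;        // funds currently held by the pool
}

interface Application {
  poolId: string;
  applicant: string;         // grantee address
  kycAttestationId?: string; // reusable on-chain KYC attestation, if any
  aiScore?: number;          // AI screening score (0..1), pre- and post-review
  status: "submitted" | "screened" | "approved" | "rejected" | "funded";
}
```

The key design point is that the expert set and the signing threshold live on the Safe itself, so every approval is already on-chain and auditable.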
Would love to know what kind of validated customer demand we’ve seen for ADA.
I have a few specific questions:
Why would a customer use ADA over impact-metrics focused retro funding?
The AI / impact-metric focused domain allocation seems to add a fair bit of complexity. Gall’s Law states that any complex system that works starts as a simpler system that works. Would it make sense to start with domain allocation and then add the AI/impact-metrics portion on top of that?
In ADA, projects request funding and a panel of domain experts approves each application. AI-assisted review is used to increase the quality of applications and the bandwidth of reviewers.
Application approval is more similar to DirectGrants, while Impact-Metrics RF is about voting on the metrics themselves, with funding calculated from those votes.
Domain Allocation doesn’t have to be metrics-driven. It can simply be a category (like Dev Tooling).
You’re absolutely right: it can start simple with just Domain Allocation, and AI, KYC, Impact Metrics, etc. can be added when needed (see the sketch after this list).
AI can be added when the admin overhead to review applications becomes overwhelming
KYC when the time and effort it takes to manually do compliance checks is too high
Impact Metrics to assist reviewers in sense-making and decision-making
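As a rough sketch of that incremental path (module names here are my own, not ADA’s actual configuration surface):

```typescript
// Optional modules layered onto a plain domain-allocation pool.
// Names are hypothetical; the point is the Gall's-Law-style rollout.
interface PoolModules {
  aiScreening: boolean;     // enable when review overhead becomes overwhelming
  kycAttestations: boolean; // enable when manual compliance checks cost too much
  impactMetrics: boolean;   // enable to aid reviewer sense-making
}

// v1: the simple system that works - just Domain Allocation.
const v1: PoolModules = {
  aiScreening: false,
  kycAttestations: false,
  impactMetrics: false,
};

// Later: flip modules on without touching the core allocation flow.
const v2: PoolModules = { ...v1, aiScreening: true };
```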
Hi Carl,
Domain Allocation is a stimulating direction for the space to move toward. With your example of “Clean Air”, I am reminded somewhat of the typical charity model. Organizations (charities) decide what is important to them, and use domain experts (either internal to the organization, or consultants) to allocate resources toward moving the needle on this goal. Outside domain experts can rate how effective they are, and regular people (non-experts) can decide for themselves how important this issue is to them, review the organization’s rating by domain experts, and thus reach a decision on whether they would like to donate to the charity or not. I see parallels to this model in what you are describing, though obviously generalized.
One of the key differences in my mind between a charity and a grants program is proactiveness - charities are typically a “push” model whereas grants programs are “pull.” We’ve seen limited ability to date for on-chain programs to proactively find and support capable recipients. This is an area I’m hoping to see progress on over the next two quarters, with some organizations sharing intent to support DAOIP-5, but I was curious what your expectations were:
Does the ‘push’ model serve the idea of Allo Domain Allocations better than the ‘pull’ model?
If so, how do you see the ‘push’ model coming to Allo?
The push vs. pull distinction you describe is interesting. Here’s how I see it:
Pull model (traditional grants): projects write up a proposal and request funding. The domain experts wait for those applications to roll in, then review and “pull” the money over if everything checks out.
Push model (charity or scout-style): the funder goes out and finds promising projects - using on-chain data signals, expert recommendations, or scouts - and “pushes” funds to them without a formal application process.
Right now, ADA is built around the pull model: teams submit applications, experts and AI review them, and then funds flow once the multisig signs off.
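Concretely, the pull flow is a linear status progression per application. A tiny sketch (status names are my own, not ADA’s actual states):

```typescript
// The pull flow as a linear status progression per application.
type PullStatus = "submitted" | "ai_screened" | "expert_approved" | "funded";

const PULL_FLOW: PullStatus[] = [
  "submitted",       // team writes a proposal and requests funding
  "ai_screened",     // AI filters and scores before expert review
  "expert_approved", // domain experts review and approve
  "funded",          // funds flow once the Safe multisig executes
];

// Advance an application one step; returns null once it is funded.
function nextStatus(s: PullStatus): PullStatus | null {
  const i = PULL_FLOW.indexOf(s);
  return i >= 0 && i < PULL_FLOW.length - 1 ? PULL_FLOW[i + 1] : null;
}
```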
That said, I love the idea of exploring push capabilities. A few thoughts:
On-chain signal monitoring: Automatically monitor accounts (addresses) or DAOs hitting certain impact metrics (e.g., public-goods dApps with high usage or research labs posting results on-chain); see the sketch after this list.
Expert/AI nominating: Let domain pros or AI “scouts” flag addresses or teams for follow-on grants, putting them in a list that receives funding (Splits contract?).
Pre-approved grantee registry: Maintain a vetted list of past awardees who can get regular “top-ups” as they hit milestones - no new applications needed each time.
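Here’s a rough sketch of how the first and third ideas could combine - the signal source, threshold, and names are all hypothetical:

```typescript
// Hypothetical push-model nomination: only push funds to addresses that
// are both in the vetted registry and past an impact threshold.
interface ImpactSignal {
  address: string; // account or DAO being monitored on-chain
  metric: string;  // e.g. "monthly_active_users" (placeholder metric)
  value: number;
}

const THRESHOLD = 10_000; // illustrative cutoff for nomination

function nominateForTopUp(
  signals: ImpactSignal[],
  registry: Set<string>, // pre-approved grantee registry
): string[] {
  // Anything that fails either check stays in the normal pull flow
  // (submit an application, get reviewed).
  return signals
    .filter((s) => registry.has(s.address) && s.value >= THRESHOLD)
    .map((s) => s.address);
}
```

The nominated list could then feed a Splits contract or a batch of Safe transactions, so the multisig still signs off on every push.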
And the Clean Air example? Just a way to show different impact measures - like sensor data on air quality or tree-planting campaigns improving local readings.
What makes ADA stand out vs other grant platforms:
Continuous allocation pools: Put money in, vote on applications (disbursements) as needed - no rigid “rounds”, just ongoing funding (see the sketch at the end of this list).
Safe multisig governance: On-chain, multi-signed decisions for total transparency. Using existing building blocks for simplicity and flexibility.
Built-in AI reviews: Speed up application screening and give experts better insight for decision-making. Directly integrated into the platform.
Seamless KYC flows: One-time attestations that projects and funders can reuse across pools.
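To show what “continuous” means in practice, here’s a sketch of turning one approved application into one Safe transaction, using ethers v6 to encode an ERC-20 transfer. The token address and decimals are placeholders:

```typescript
import { Interface, parseUnits } from "ethers";

// Placeholder token address - substitute the pool's actual funding token.
const TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000";

const erc20 = new Interface([
  "function transfer(address to, uint256 amount)",
]);

// Each approved application becomes one transaction proposed to the
// pool's Safe (to, value, data), so disbursements happen continuously
// rather than in rigid rounds.
function encodeDisbursement(grantee: string, amount: string) {
  return {
    to: TOKEN_ADDRESS,
    value: "0",
    data: erc20.encodeFunctionData("transfer", [
      grantee,
      parseUnits(amount, 6), // assuming a 6-decimal token like USDC
    ]),
  };
}
```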
One more thought I’m excited about: integrating Privy + Bridge.xyz under the hood to abstract away all the Web3 plumbing - so non-Web3 users can use ADA like any other Web2 grants platform.