Decentralized Research Funding Using Cookie Jar

Executive Summary

This proposal introduces a lightweight, autonomous research funding mechanism using a Cookie Jar model. The system ensures continuous funding for decentralized research efforts while maintaining governance-aligned research priorities.


Research Funding Model & Process

Flow of Research Funding Using Cookie Jar

  1. Allo Studio Funds Research → Funds are sent to the Cookie Jar contract.
  2. Researchers Are Whitelisted → AlloDAO governance pre-approves a list of researchers.
  3. Research Topics Are Voted In → The Allo Studio community selects priority research areas each quarter using Allo Research Prompts.
  4. Researchers Withdraw Funds and Leave a Note → Researchers claim their monthly payment, leaving a simple note with a reference link (paper, report, GitHub, IPFS) to the research artifact.
  5. Governance Adjusts Whitelist & Research Priorities Periodically.

This ensures that funding flows continuously, reducing approval friction.


Smart Contract & Implementation Details

The Cookie Jar contract supports:
:white_check_mark: Whitelist Functionality → Only approved researcher addresses can withdraw, maintaining security.
:white_check_mark: Fixed Withdrawal Amount → Each researcher can withdraw a pre-defined amount per cycle.
:white_check_mark: Emergency Withdrawal Option → If governance needs to reallocate funding, funds can be retrieved.
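
To make the mechanics concrete, here is a minimal Solidity sketch of such a jar. This is an illustration only, not the actual Cookie Jar contract: the contract name, the `claim` and `emergencyWithdraw` functions, the `CookieNote` event, and the 30-day period logic are all assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Minimal illustrative sketch of a research Cookie Jar.
/// Not the production Cookie Jar contract; names and period logic are assumed.
contract ResearchCookieJar {
    address public governance;                // DAO address that manages the jar
    uint256 public claimAmount;               // fixed ETH amount per researcher per period
    uint256 public constant PERIOD = 30 days; // one claim per researcher per period

    mapping(address => bool) public whitelisted;    // approved researcher addresses
    mapping(address => uint256) public lastClaimAt; // timestamp of each researcher's last claim

    // Every withdrawal leaves an onchain note pointing at the research artifact.
    event CookieNote(address indexed researcher, string note, string referenceLink);

    modifier onlyGovernance() {
        require(msg.sender == governance, "not governance");
        _;
    }

    constructor(uint256 _claimAmount) {
        governance = msg.sender;
        claimAmount = _claimAmount;
    }

    // The jar is refilled with plain ETH transfers from the research budget.
    receive() external payable {}

    function setWhitelisted(address researcher, bool allowed) external onlyGovernance {
        whitelisted[researcher] = allowed;
    }

    /// Claim the fixed amount, leaving a note and a link (paper, repo, IPFS, forum post).
    function claim(string calldata note, string calldata referenceLink) external {
        require(whitelisted[msg.sender], "not whitelisted");
        require(block.timestamp >= lastClaimAt[msg.sender] + PERIOD, "already claimed this period");
        lastClaimAt[msg.sender] = block.timestamp;
        emit CookieNote(msg.sender, note, referenceLink);
        (bool ok, ) = msg.sender.call{value: claimAmount}("");
        require(ok, "transfer failed");
    }

    /// Emergency option: governance can retrieve remaining funds for reallocation.
    function emergencyWithdraw(address to) external onlyGovernance {
        (bool ok, ) = to.call{value: address(this).balance}("");
        require(ok, "transfer failed");
    }
}
```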

Fund Distribution & Dynamic Adjustments

:small_blue_diamond: Monthly Research Funding

  • Every three months, the Cookie Jar is refilled from the AlloDAO research budget.
  • Researchers withdraw a fixed monthly amount, giving them predictable funding while tying every withdrawal to a research artifact.

Research Submission & Minimal Verification

:small_blue_diamond: Research artifacts can include:

:white_check_mark: Research papers or reports (Google Docs, IPFS, Notion)
:white_check_mark: GitHub repositories (code-based research)
:white_check_mark: Forum posts (community discussions, theoretical work)

Example:

**Research Topic:** Decentralized Capital Allocation

**Researcher:** 0x1234...abcd

**Date:** MM/DD/YYYY

**Summary:**

This research explores funding efficiency models for DAOs.

**Reference:** https://github.com/allo-research/grant-allocation-paper

Research Funding Alignment & Value Capture

| Stakeholder | Key Operations | How the Model Aligns Them |
|---|---|---|
| Allo Fund and other orgs | Fund impactful research on products, software integrations, and markets | Directs Allo Studio research toward relevant topics |
| Allo Studio Governance | Ensure research and researcher alignment | Researchers and artifacts are evaluated frequently |
| Researchers | Predictable compensation for ongoing research | Reduces friction in research funding and creates a contribution track record |

Implementation Plan & Next Steps

Phase 1: Deploy Cookie Jar Contract → Deploy cookie-jar-withblacklist.sol; configure the whitelist and funding limits.
Phase 2: Define Research Whitelist and Topics → Allo Studio votes on approved researchers and research topics.
Phase 3: First Research Cycle Launches → Contributors withdraw funds with the research output attached.
Phase 4: Governance Optimization → Review and evaluate research, adjusting Cookie Jar parameters based on research impact.


Why This Model Works for Allo Capital

:white_check_mark: Removes friction from research funding (whitelist-based, no approvals).
:white_check_mark: Ensures ongoing knowledge production without excessive oversight.
:white_check_mark: Allows governance to adjust research priorities dynamically.
:white_check_mark: Creates a scalable model for decentralized research funding.

Open Questions

  • Is this model simple and efficient enough for implementation?

  • How should researcher whitelists be updated over time?

  • How should inactive researchers or low-quality outputs be handled?

  • How should research storage be handled (IPFS, Notion, GitHub, etc.)?

  • How can we review and evaluate research outputs (EAS, Hypercerts, Impact Frameworks, etc.)?


I think this is a simple and possibly powerful idea.

Ideal flow to me

  1. We publish what we think is hot here.
  2. People start showing up to intelligence pillar calls.
  3. Once we vet them for characteristics that make a good researcher (for me: intelligence, character, talent) and confirm there's an active task they want to deliver on… we add them to the cookie jar.
  4. They can just show up to research calls and deliver research.

@deltajuliet what do you think

One question I have is around group research:
Will an individual pull funds on behalf of a group, or would a multisig be added for the group, with either the group or individual researchers pulling funds?

Hey @Coi,
Thanks for taking the time to put this proposal together! Today I braindumped a bunch of questions and concerns I have about using a Cookie Jar mech in this manner. Please understand that I’m providing these points to be generative, not to try to block or discourage this initiative. Voila:

TW’s Critique of the Cookie Jar Raid Proposal

Problems and Issues

  • Attack Vector: Removing the proposal process creates a significant risk that users will withdraw funds without completing the associated research work, as there is no mechanism to ensure funds are used appropriately.

  • The proposal attempts to mitigate this risk by suggesting small withdrawal amounts and relying on the social reputation of the researchers, but this may not be sufficient to prevent abuse.

  • Accountability: The system encourages researchers to pursue any prompt as they see fit, which potentially discourages rigorous scrutiny of their work and reduces the likelihood that the research will align with the organization’s needs.

  • Inequity: By assuming that all contributions and contributors are of equal value, the Cookie Jar mechanism undervalues the contributions of highly skilled and experienced knowledge workers, potentially discouraging them from participating.

  • Lack of Evaluation: The Cookie Jar mechanism lacks a built-in system for peer review, revision, and editing, which are essential processes for ensuring the quality and rigor of research outputs.

  • Incentivizes Low-Quality Work: The absence of robust evaluation and accountability measures may incentivize researchers to produce fast, poorly considered, and non-peer-reviewed contributions, as the system prioritizes speed over quality.

  • No Collaboration Incentive: Whitelisted researchers have no inherent incentive to collaborate with one another, which could limit the potential for synergy and the development of more comprehensive and impactful research findings.

  • Intellectual Property: The organization may end up owning all research ideas for a low, fixed price, while the researcher may have little connection to the work, which raises concerns about fair compensation and the long-term motivation of researchers.

  • Market Inefficiency: The system lacks a mechanism to match researcher skill and experience with the value of their research, preventing fair compensation and potentially discouraging high-quality output, as researchers may not be rewarded for producing exceptional work.

  • The proposal touches on some of these issues in the “Open Questions” section.

How Might We Imagine an Ideal Research System?

  • An ideal research system incentivizes contribution at every stage of idea development, not just the finished product, recognizing that valuable insights can emerge throughout the research process.
  • An ideal research system recognizes the value of contributions at all stages, from initial, nascent ideas to fully formed research, acknowledging that different types of contributions are valuable.
  • An ideal research system includes a process for culling ideas that are not viable, ensuring that resources are not wasted on unproductive avenues of inquiry.
  • An ideal research system emphasizes scrutiny of ideas by a community of researchers, fostering a culture of peer review and constructive criticism.
  • An ideal research system incentivizes knowledge sharing and collaboration among researchers, promoting the exchange of ideas and the development of shared understanding.
  • An ideal research system supports attribution, attestation, and the formation of trust networks, giving researchers credit for their contributions and establishing a record of their expertise.
  • An ideal research system allows researchers to retain ownership of their content, protecting their intellectual property rights and incentivizing them to produce high-quality work.
  • An ideal research system incorporates a fair market matching mechanism that aligns compensation with the researcher’s skill and the value of their output.

Specific Proposal Concerns

  • What are the metrics for success or failure of this experiment, and how will those metrics be tracked and evaluated to inform future iterations of the research funding model?
  • Why was the Cookie Jar mechanism chosen over alternatives like Yeeter or a small, self-governing research DAO, and what are the relative advantages and disadvantages of each approach?
  • What is the justification for removing the proposal and voting processes, and while governance fatigue is anticipated, is this a significant concern if governance is limited to the researchers directly involved in the research process?
  • The proposal does not include metrics for success or failure, which makes it difficult to assess the effectiveness of the Cookie Jar mechanism and to make informed decisions about its future use.

What About Communication Channels?

  • Where do research priorities originate, how is the research topic list generated, and what processes are in place to ensure that these priorities align with the organization’s strategic goals?
  • How are researchers chosen and curated for inclusion in the Cookie Jar system, and what criteria are used to evaluate their qualifications and expertise?
  • How will submitted research be scrutinized, and what mechanisms are in place to incentivize both researchers and the funding organization to actively engage with and evaluate the research outputs after submission?
  • The proposal mentions research priorities being selected by the Allo Studio community using Allo Research Prompts, and governance adjusting whitelists and priorities.

Organizational Decision-Making

  • How will the organization justify the research expenditure, and what is the expected return on this investment in terms of tangible outcomes or intangible benefits?
  • What are the KPIs for evaluating the success of the research program, and how will the research outputs be translated into revenue-generating products or other strategic goals?
  • How does the curation of the project board relate to the Cookie Jar funding, and what are the organization’s expectations for return on investment in terms of the value and impact of the research produced?
  • It is crucial for the organization to define its research priorities, the market value it is willing to pay for research on those topics, the acceptable loss it can tolerate, and the expected return on investment before initiating the Cookie Jar experiment to ensure that the research program aligns with its strategic objectives and provides a positive return.

Hey Travis — I’ve been sitting with your feedback for a few days now. You raised exactly the kind of foundational questions I hoped would emerge, and your critique helped me see that my original proposal was trying to do too much, too fast, with too little structure. But it was also always meant to be something built with others — so thank you again for stepping into that.


After sharing the initial version and speaking with a few folks, I decided to reshape the path of the proposal.

One major shift: we started receiving research proposals — clear, scoped, strategic, and directly aligned with Allo’s goals. Things like:

  • Afo’s Global Mapping: mapping entry points for capital coordination across public and DAO systems
  • Matter Patterns: building a semantic, interoperable layer between grants and preference expression
  • EcoDrips: experimenting with programmable semantic logic for eco-credit funding

So the real question became:
How do we support and structure what’s already happening — without getting in the way?

That’s where this new Research Jars proposal comes in.


:white_check_mark: Proposals Are Centered

Your core critique — that the original model removed the proposal process — was spot on. That version leaned too hard on trust and informality.

This new design flips that:

:white_check_mark: Each researcher submits an individual proposal
:white_check_mark: Governance (1p1v on Snapshot or Gardens, starting with Allominatis) votes to approve them
:white_check_mark: If accepted, researchers are added to a Cookie Jar (via Hats Protocol, or manually in v1)
:white_check_mark: They can claim a fixed monthly amount by posting a short public “cookie note”
:white_check_mark: At the end of 3 months, we reflect together on what was produced and learned

This keeps it simple, visible, and trackable — a structured rhythm that doesn’t try to control everything.


:gear: Governance & Group Coordination

To keep it lean, I propose:

  • Minimal engineering — use the current Cookie Jar contract
  • Hats Protocol for managing access (in v1 as a social layer, potentially onchain in v2; see the sketch after this list)
  • Proposal and voting via Snapshot or Gardens
  • Prompt curation + evaluation by Allominati (or another trusted working group)
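
As a rough sketch of the v2 idea, the jar's whitelist check could be swapped for a Hats Protocol hat-wearer check. The `IHats` interface fragment and the hat ID below are assumptions for illustration, not a confirmed integration:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Fragment of the Hats Protocol interface used for gating (assumed here).
interface IHats {
    function isWearerOfHat(address _user, uint256 _hatId) external view returns (bool);
}

/// Sketch: gate jar claims by a "researcher" hat instead of a manual whitelist.
contract HatsGatedJar {
    IHats public immutable hats;              // deployed Hats Protocol instance
    uint256 public immutable researcherHatId; // hat minted to approved researchers

    constructor(IHats _hats, uint256 _researcherHatId) {
        hats = _hats;
        researcherHatId = _researcherHatId;
    }

    modifier onlyResearcher() {
        require(hats.isWearerOfHat(msg.sender, researcherHatId), "not a researcher");
        _;
    }

    // claim(...) would go here, gated by onlyResearcher and otherwise
    // mirroring the whitelist-based jar sketched earlier in the thread.
}
```

The upside is that adding or removing a researcher becomes a Hats operation rather than a direct call on the jar itself.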

I’ve also been thinking alongside Allo’s 6-month focus:

:white_check_mark: Revenue-generating builds → some proposals are exploring monetizable patterns
:white_check_mark: Structure that directs capital to what matters → this system does that
:white_check_mark: Build the “onchain capital” category → this frames the research side of it
:white_check_mark: Empower working groups → this activates that layer
:white_check_mark: Manage treasury wisely → predictable caps, clear cycles, visible evaluation

To start, I suggest initial prompts focus on:

  1. Refining existing Allo tools
  2. Exploring revenue-generation models (e.g. the DeFi Cookie Jar proposal)
  3. Building shared knowledge infrastructure for Allo as a category

Two Jars, Clear Scopes

To start, I’m proposing two themed Cookie Jars, each aligned to a different kind of contribution:


1. Ontology Jar

For contributors working on shared language, schemas, and mental models that help us understand and evolve capital coordination.

Examples:

  • Preference alignment frameworks (Matter Patterns)
  • Theory of Change logic
  • Interoperable grant schemas
  • Mapping mechanism interdependencies

Guiding Lens:
Network Shared Value (NSV) — Does this work increase the coordination capacity of the Allo network?

→ Up to 5 contributors
→ 0.08 ETH/month
→ 3-month cycle


2. Mechanism Experiments Jar

For contributors testing mechanisms and tools in the field, using Allo or adjacent protocols.

Examples:

  • AlloIRL deployments
  • QF variants like Tunable QF
  • Revenue-generating Cookie Jar structures
  • Local coordination tools in Greenpill-style chapters

→ Up to 3 contributors
→ 0.2 ETH/month
→ 3-month cycle
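
For transparency on treasury impact, here is the upper-bound arithmetic these parameters imply per cycle, as a small Solidity snippet (assuming every seat is filled and every monthly claim is made):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Upper-bound spend per 3-month cycle (all seats filled, all claims made):
//   Ontology Jar:  5 contributors x 0.08 ETH x 3 months = 1.2 ETH
//   Mechanism Jar: 3 contributors x 0.20 ETH x 3 months = 1.8 ETH
//   Total:                                                 3.0 ETH
uint256 constant ONTOLOGY_JAR_CAP  = 5 * 0.08 ether * 3;             // 1.2 ether
uint256 constant MECHANISM_JAR_CAP = 3 * 0.2 ether * 3;              // 1.8 ether
uint256 constant CYCLE_TOTAL = ONTOLOGY_JAR_CAP + MECHANISM_JAR_CAP; // 3.0 ether
```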


Evaluation — Now and Later

Right now:

  • Researchers are selected by governance
  • They publish cookie notes (with links + reflections)
  • We host an end-of-cycle community review

Later:

  • Attestations
  • Impact badges
  • Futarchy-style PASS/FAIL tokens
  • Attribution graphs

Deep Funding as a Future Layer

NSV gives us a compass — but we also need maps.

That’s where Deep Funding comes in.

It helps us see what work other work depends on — and fund upstream contributors accordingly.

In the Ontology Jar, someone might:

  • Map the dependency graph of past and current proposals
  • Assign credit weights to foundational work
  • Propose recursive funding models or reputation overlays

This gives us a long-term pathway:
Fund the research → Track its use → Reward its foundations.


What I’d Still Love Input On

  • What’s the right structure-to-flow ratio for early-stage research?
  • Are monthly notes enough, or do we need peer checkpoints?
  • What feedback loops would help make this more useful and aligned?

Next Steps (If Supported)

  1. Launch the initial proposal round
  2. Vote to select contributors
  3. Use Hats to manage roles (manual or onchain)
  4. Run a 3-month pilot
  5. Document learnings + iterate

Thanks again — your input has been instrumental in getting to this version.

Excited to hear your thoughts on what’s missing, unclear, or still questionable.

I’m curious—where does applied research fall in this model? Specifically, the kind that informs design strategy, identifies product gaps, supports DAO design, and addresses existing challenges like growth, retention, and contributor support.

Maybe framing research around current problem spaces could ensure stronger strategic alignment and direct value-add. As the space evolves, focusing on solving shared, present challenges could better align research with its role as an intelligence pillar. I agree with the prerequisites; I'm just wondering if we should add another category (which, if it's related to product development, might warrant a different rate). Also, would the rates be the same for all or depend on experience? And how would the shortlist be determined? (I don't really like the vote-only approach because I find it's not the most equitable.)

Hey Feems,
Your message has really stuck with me, and I'm realizing I haven't done enough to give applied research its place in the model yet.

I’ve started mapping out some flexible tracks (like Growth, Governance, Infra, etc.) and a simple review process that could help shift attention and resources where they’re most needed. But your message made me pause and wonder whether that’s actually enough.

Do we need a dedicated track for applied work?
Would different funding tiers based on type of contribution make sense or just overcomplicate things?
And how can we structure shortlisting or review in a way that feels more representative than just voting?

I’ve been updating the doc with some of this thinking — still in motion, and far from perfect — but I’d love for you to take a look if you’re open to it. Even just a quick impression or nudge would mean a lot. This is the doc I’m referring to: Research Jars

I think so, but I'm not sure a cookie jar is appropriate for applied research. What I mean by applied research is research used for product development or DAO development, assessment, or validation of an initiative that requires allocation. This could work more as a bounty style (although it was tried with Dework in early 2022 and the quality was not good). Also, are you looking at experienced researchers or general contributors? I think a cookie jar is an interesting idea; my advice is to research some examples and validate whether research as a cookie jar works. I would also look at the problem space you are trying to address: "Gov and Group Coordination" seems a bit vague. What's the problem, and how is the research cookie jar going to solve it? Also, given the market, 0.08 ETH/month doesn't seem competitive enough to attract experience. Maybe a cookie jar is better suited for upskilling or attending research-based events? I think TW made some good points. Here is some advice to refine it (though some research is required): Why now? Why for Allo? Why a cookie jar for research? What problem is it solving today? How can it be assessed as an initiative? Who are you trying to attract (what experience level)? How are they selected based on experience/output? Also, start small and keep it basic, then build once it's validated with background research. Definitely research how Cookie Jar is used in the ecosystem.


A good example is Build Guild; it's not really a cookie jar, but it does automatic recurring payments. It's in the capital allocation book (I interviewed Austin, so I know it well). Also check out the GPT for it, because Cookie Jar is in there too: https://allobook.gitcoin.co/src/pdf/onchain-capital-allocation-v2.pdf ; GPT: ChatGPT - AlloBook - Explorers Edition GPT


having multiple tracks of cookie jars makes sense to me!

let's keep it simple and get the MVP launched :slight_smile: