Funding Readiness Framework (ARQ C1) Week 1 Update

This is the first update (Week 1) in our quest to design and build a funding readiness framework. See the initial proposal here: Gardens - Funding Readiness Framework

What we planned to do

  • Build a research plan and task list

  • Brainstorm ideas for framework

  • Identify open questions

  • Establish a timeline for research phase

What we did

Research plan and task list

  • Established goals of the research phase

    • Identify the first iteration of the evaluation criteria that we can start testing

    • Create the list of projects that will be tested against our criteria

  • Enumerated task list

    • Brainstorm potential evaluation criteria ideas

    • Enumerate interview target list and reach out

    • Develop interview questions

    • Build a project list to be used for testing

  • Agreed on update process and cadence

    • Tasks (and granular task updates) on Charmverse board

    • Weekly updates on forum

  • Identified initial list of data sources

    • CARBON Copy’s project database + fundraising data

    • Interview subjects

    • Grant platforms

    • Blockchain for Good Alliance Incubation program

  • Created list of potential interview subjects

    • Grant round operators

    • ReFi power users

    • Impact investors

    • DeSci sages

    • Blockchain for Good Alliance

Framework ideas

  • Brainstormed evaluation criteria

  • Discussed potential framework formatting options

    • Numeric vs. text

    • Narrow (grant-round-specific) vs. wide (a living, breathing score that can be updated over time and used as an ongoing signal of funding readiness)

    • Projects submit text responses; evaluators score them on a numeric scale (see the sketch after this list)

    • Tracking all evaluations and making them publicly available

    • Strict word limits plus proof (no extraneous information)

  • Specified a target project profile

    • Not brand-new projects, but not VC-funded projects either

    • Projects needing north of $50,000 to take the next step

  • Discussed scale options

    • A 1-5 scale (strongly agree → strongly disagree) seemed the most suitable, especially in the context of an evaluator judging a statement provided by the project
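
To make the submit-then-evaluate format concrete, here is a minimal sketch of what a project response, an evaluation record, and a "wide" (living) readiness signal could look like. This is purely illustrative: the type names, fields, example criterion, and scoring function are our assumptions for discussion, not a settled schema.

```typescript
// Illustrative sketch only: names, fields, and the scoring function are
// placeholders for discussion, not a settled schema.

// 1 = strongly agree ... 5 = strongly disagree, matching the scale above.
type LikertScore = 1 | 2 | 3 | 4 | 5;

// A project's word-limited text response to one criterion, with proof links.
interface CriterionResponse {
  criterionId: string;   // e.g. "team-track-record" (hypothetical criterion)
  statement: string;     // project-written text, under a strict word limit
  proofLinks: string[];  // URLs backing up the statement
  updatedAt: string;     // ISO date, since responses may be updatable
}

// One evaluator's judgment of one response; storing every evaluation
// keeps the full history publicly available.
interface Evaluation {
  criterionId: string;
  evaluator: string;     // address or handle
  score: LikertScore;
  evaluatedAt: string;   // ISO date
}

interface ProjectProfile {
  projectId: string;
  responses: CriterionResponse[];
  evaluations: Evaluation[];  // append-only and public
}

// A "wide" (living) readiness signal: the running average of all scores.
// Lower is better under the 1 = strongly agree mapping.
function readinessSignal(profile: ProjectProfile): number | null {
  if (profile.evaluations.length === 0) return null;
  const total = profile.evaluations.reduce((sum, e) => sum + e.score, 0);
  return total / profile.evaluations.length;
}
```

Under this shape, the narrow vs. wide question becomes whether readinessSignal is computed once per grant round or recomputed continuously as new evaluations arrive.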

Open questions

  • How should the paradoxes fit into the evaluation criteria, or should they even be considered?

  • Is the best approach to have projects submit text responses according to the criteria, then have evaluators (grant round operators, et al.) assess those responses as needed?

  • Should a project be able to update their responses over time?

  • Should we track all evaluations and use them as a sort of rating system?

Timeline for research phase

  • Set the expected duration of the research phase at 2 weeks

What we learned

  • The ideal approach may be to have projects fill in a text-based set of questions tied to our chosen criteria, which anyone can then evaluate on a “strongly agree → strongly disagree” scale

Challenges encountered

  • Speed vs. depth is a difficult needle to thread

    • We’re aware that both too much depth and not enough depth may render the framework unusable

  • Decentralization vs. coordination

    • Such a framework would be best if it were decentralized, but decentralization risks it never achieving adoption or even getting off the ground

  • Identifying the most relevant and meaningful criteria given the target project profile

    • We need to make sure that the framework isn’t overwhelming for projects or evaluators

    • We can’t necessarily rely on existing frameworks used by VCs

Next steps

  • Commence research

  • Build lists of test projects

  • Answer open questions
