After reading Network Coherence through Impact Metrics (TVF, etc.), I’ve recognized that while TVF gives us a shared target, an alignment toward effective, tangible, externalized value created by the network, we also need other metrics that help us get there efficiently, while setting performance standards.
Introducing WEAVE – Weighted Effort & Added Value Exchange
In any network where trust and creativity are the primary currencies, relational capital is foundational. Have you ever started a project, or generated impact, with no more resources than your own skills and those of the people in your network? How many times have you tried to account for the expertise and sweat put into that common effort?
WEAVE is a proposed metric that aims to quantify the human and relational energy invested by individuals and teams — from time and effort, to care, coordination, and peer support.
It serves as a primitive for later-stage efficiency analysis, offering insight into where sweat equity is being directed — and how aligned those investments are with our shared outcomes.
How WEAVE is Measured: CollabBerry’s teamPoints (TPs)
At CollabBerry, we’ve been building an ecosystem where value is surfaced, seen, and shared — even (especially) when it isn’t financial at first glance. Our teamPoints (TPs) system is a protocol for measuring WEAVE across collaborative projects.
Here’s how it works:
- Contributors agree on and log the efforts they will perform across a range of categories: strategy, facilitation, design, code, conflict holding, governance, and beyond.
- Peers reflect and validate those contributions, surfacing collective perception of their resonance and relevance.
- The result is a shared map of relational investment — who showed up, where energy was given, and how that aligned with the group’s intentions.
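The flow above could be sketched in code. This is a minimal illustration only; the class, field names, and the scoring rule (hours weighted by mean peer resonance) are my assumptions, not CollabBerry's actual teamPoints implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Contribution:
    contributor: str
    category: str            # e.g. "facilitation", "code", "governance"
    hours: float
    validations: list = field(default_factory=list)  # (peer, resonance 0.0-1.0)

    def log_validation(self, peer: str, resonance: float) -> None:
        # Peers reflect on how relevant and resonant the contribution felt.
        self.validations.append((peer, resonance))

    def validated_weight(self) -> float:
        # Hypothetical rule: logged hours scaled by mean peer resonance.
        if not self.validations:
            return 0.0
        mean = sum(r for _, r in self.validations) / len(self.validations)
        return self.hours * mean

c = Contribution("ana", "facilitation", hours=10)
c.log_validation("ben", 1.0)
c.log_validation("cara", 0.5)
print(c.validated_weight())  # 7.5  (10h at mean resonance 0.75)
```

The point of the sketch is the second step: no single contribution "counts" until peers have reflected on it, which is where the intersubjective agreement described above enters the data.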
The magic of this isn’t just the individual data — it’s the intersubjective agreement about what mattered.
WEAVE Before Efficiency
By accounting for invested human labour (WEAVE), we might be creating a foundation for evaluating what was returned (TVF or other yield metrics). This opens space to explore:
- Return on Collaboration – What tangible outcomes did our collective energy generate?
- Alignment Insights – Did our sweat flow toward what we actually care about?
- Capital Allocation Efficacy – How might resources better follow effort next cycle?
WEAVE doesn’t try to force impact into a linear model. Instead, it prepares us to reflect together on the patterns that emerge over time — especially when the results aren’t immediate or monetary.
Market Rates, Multipliers, Agreements, and the Value of Difference
Of course, not all sweat is equal. Some contributions multiply others. Some are catalytic, others consistent. This is where community agreements come in:
- Market Rates are each contributor’s personal statement (agreed by the collective) of their individual rate of income. An easy way to estimate this is the opportunity cost of working at full effort and capacity for a corporation that would hire you to do what you do.
- Multipliers in CollabBerry’s system can reflect the value of our investment (e.g. a x2 for receiving TP tokens instead of a monetary salary) and can increase through time to incentivize long-term commitment.
- These are never fixed — they’re meant to be negotiated transparently, as part of the Compensation as a Conversation approach we steward.
- In this way, WEAVE doesn’t just track inputs — it helps teams practice shared discernment about what matters.
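Under such agreements, a contribution's teamPoints could be derived roughly as follows. The x2 token multiplier echoes the example above; everything else (function shape, default values) is an assumption for illustration, not CollabBerry's formula:

```python
def team_points(hours: float, market_rate: float,
                paid_in_tokens: bool, token_multiplier: float = 2.0) -> float:
    """Hypothetical TP rule: sweat is valued at the collectively agreed
    market rate, and multiplied when compensation is deferred into TP
    tokens instead of a monetary salary."""
    base = hours * market_rate
    return base * token_multiplier if paid_in_tokens else base

# A contributor at an agreed $80/h rate, logging 10 hours:
print(team_points(10, 80, paid_in_tokens=True))   # 1600.0 (x2 for tokens)
print(team_points(10, 80, paid_in_tokens=False))  # 800.0
```

Because the rate and multiplier are negotiated rather than fixed, both would be inputs to this function from the team's live agreements, not constants.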
The Challenge Inside WEAVE: Agreements, Consent, and Scaling Trust
If WEAVE is to serve as a meaningful metric, then the agreements underneath it must be coherent, legible, and adaptive.
This is not trivial.
CollabBerry’s current framework asks every contributor — and their collaborators — to navigate and align on four key parameters for each contribution:
- Role & Responsibility – What are you holding? What’s expected of you?
- % Commitment – How much of your working energy (time/attention) is flowing here?
- Reference Market Rate – If we were to benchmark your role, what would it be worth?
- Monetary Compensation (if any) – What are you actually being paid? Now, later, or never?
Together, these shape a transparent and intersubjective picture of each contributor’s “sweat equity” — forming the basis for teamPoints (TPs), and thus for WEAVE.
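One way to picture these four parameters is as a single record per contributor. The field names and the sweat-equity formula below (benchmarked value scaled by commitment, minus actual pay) are my assumptions, sketched only to make the structure concrete:

```python
from dataclasses import dataclass

@dataclass
class Agreement:
    role: str                # Role & Responsibility
    commitment_pct: float    # % of working energy flowing here (0-100)
    market_rate: float       # reference benchmark, e.g. $/month full-time
    monetary_comp: float     # actual pay this cycle; 0 if fully deferred

    def sweat_equity(self) -> float:
        # Hypothetical: the gap between the benchmarked value of the
        # committed energy and the money actually received.
        benchmarked = self.market_rate * self.commitment_pct / 100
        return max(benchmarked - self.monetary_comp, 0.0)

a = Agreement("facilitator", commitment_pct=50,
              market_rate=6000, monetary_comp=1000)
print(a.sweat_equity())  # 2000.0
```

Whatever the exact formula, the key property is that all four fields are explicit and visible to collaborators, so the resulting teamPoints rest on consented data rather than inferred worth.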
But here’s the rub:
These parameters require consent. And consent is a relational process — not just a data input.
Right now, CollabBerry relies on trust at the team level. Agreements are proposed, validated, and adjusted through conversation, reflection, and mutual recognition. It’s beautiful, and it works — in small, aligned teams.
Scaling Trust is the challenge.
As networks grow, how do we keep this consent-based approach from becoming bureaucratic? How do we avoid calcifying into rigid roles or extractive evaluations? How do we honor complexity without drowning in process?
Introducing AI Agents for Agreement Facilitation
As our networks grow, the challenge isn’t just measuring contributions but ensuring that the agreements underpinning those measurements remain coherent, legible, and adaptive. As we scale beyond small, aligned teams, maintaining that level of trust and alignment becomes increasingly complex.
To address this, we are exploring the design of AI agents to assist in the facilitation of agreements by:
- Monitoring and suggesting updates to role definitions and responsibilities based on evolving project needs.
- Analyzing commitment levels to ensure equitable distribution of workload and prevent burnout.
- Benchmarking market rates to provide fair and competitive compensation suggestions.
- Facilitating transparent discussions around monetary compensation, ensuring all voices are heard and considered.
- Building cross-org reputational references: AI Agents can anonymously aggregate and synthesize agreement data across fractal networks — helping contributors and teams understand:
- What kinds of compensation are being normalized in similar roles/orgs
- How roles are defined elsewhere
- Where there’s room to push for equity, stretch into generosity, or adjust expectations
This becomes a living pattern library of agreements — a kind of inter-org social oracle — that helps new teams bootstrap trust faster, while still allowing for localized adaptation.

By integrating AI Agents, we aim to augment human decision-making, not replace it. These agents serve as tools to support our community in maintaining trust and alignment as we grow, ensuring that our collaborative efforts remain grounded in shared values and mutual respect.
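A minimal sketch of what such anonymized aggregation might look like. The record layout, sample figures, and the use of medians are all assumptions; the only point carried over from the text is that an agent surfaces role-level norms without exposing who reported what:

```python
from statistics import median

# Anonymized agreement records contributed across orgs (illustrative data).
records = [
    {"role": "facilitator", "market_rate": 6000, "monetary_comp": 1000},
    {"role": "facilitator", "market_rate": 6500, "monetary_comp": 2000},
    {"role": "facilitator", "market_rate": 5500, "monetary_comp": 0},
]

def role_benchmark(records: list[dict], role: str) -> dict:
    """Aggregate what similar roles are being normalized at across orgs,
    without tying any figure back to a specific contributor or team."""
    rates = [r["market_rate"] for r in records if r["role"] == role]
    comps = [r["monetary_comp"] for r in records if r["role"] == role]
    return {"median_rate": median(rates), "median_comp": median(comps)}

print(role_benchmark(records, "facilitator"))
# {'median_rate': 6000, 'median_comp': 1000}
```

Medians are used here deliberately: a robust statistic resists one outlier org skewing what a "normal" agreement looks like, which matters if the oracle is to inform negotiation rather than dictate it.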
WEAVE + TVF = A Fuller Picture
- TVF shows us what value has flowed outward — into markets, communities, ecosystems.
- WEAVE reveals what value has been woven inward — the human and relational threads that make the outward flow possible.
Together, they help us understand the story of our labor — not just what it earned or how much flowed, but what it meant and what can be done better in the next cycle.
This might not be clear yet, but it starts with organization nodes measuring their WEAVE.
Would love to hear others’ thoughts on where and how a metric like WEAVE could fold into broader measurement frameworks — and how we might continue evolving tools that reflect the richness of our shared creative economies and capital allocation mechanisms.