WEAVE Growth SPE: Updates

Hey everyone,

Wanted to share some updates on WEAVE as we get into Month 1 execution. There’s a team change, a budget adjustment, and some governance work happening.

SPE Naming

To align SPE naming with mission objectives, the WEAVE proposal deliverables now operate under the WEAVE Growth SPE. Embody SPE remains a separate entity focused on embodied agent workloads, while the broader roadmap lives under WEAVE Growth SPE.

Team Structure Updates

Dane decided to move on from WEAVE for personal reasons, and we parted on good terms.

Dane built the Unreal Engine avatars and most of the game UI that powered our PixelStreaming workloads. He was dedicated, hardworking, and consistently showed up when it counted. He overdelivered on his initial Embody deliverables and brought the game to a mature, production-ready state. The game builds he produced are what the Embody workloads ran on. We’re genuinely grateful for his contribution and wish him all the best going forward.

On the IP side, Dane has committed to granting the WEAVE Growth SPE entity a perpetual, royalty-free license to the source code with sub-licensing rights, allowing us to use, modify, and sublicense it as needed. The source can't be MIT-licensed because of third-party plugin restrictions, but the license covers us fully. The packaged game goes MIT, same as all future Embody releases. We will follow up with Dane on the necessary actions and paperwork to make sure this is properly finalized. This ensures that the source code remains an asset for the growth of Livepeer and that Dane's departure does not compromise our ability to create future workloads using Unreal Engine.

Budget

We will actively discuss with the community what should happen to the funds that were earmarked for Dane's compensation, to make sure the SPE acts according to stakeholder intent.

Governance and Transparency

We are currently discussing with the team the creation of a lightweight legal entity with non-profit operating provisions, where financial reporting and management obligations are written into the founding documents themselves. The entity will operate under strict non-profit provisions for the benefit of the whole Livepeer ecosystem and will own the SPE's financial assets (incentives) and IP assets (the Unreal Engine game, the WEAVE website, etc.).

What that looks like: budget, spending, and compensation reported publicly. Governance processes published online: how decisions get made, how workloads get evaluated, how incentive packets get managed. All incentive programs will run in the open with published criteria, schedules, and outcomes.

We’re still working out the exact structure, reporting format, and cadence with the team. Once that’s decided, we’ll share it publicly.

Month 1 Deliverables

You can find the Month 1 deliverables spec below; we are currently finalizing it before posting the final version. Any feedback is welcome.

Deliverables spec doc

WEAVE Growth SPE — Month 1 Deliverables

Version: v0.2 (April 2026)
Owner: DeFine
Technical Reviewer: Rick
Completion Rule: No deliverable is complete until Rick signs off on its evidence pack.


Month 1 Operating Logic

The dependency chain is:

  1. M1-D1 builds the open-source WEAVE Tool.
  2. M1-D2 exposes that capability through the DAO-operated hosted WEAVE App + API.
  3. M1-D3 activates the incentive loop that drives workflow creation and app creation.
  4. M1-D4 proves publicly what users did, what converted, and what became monetization-ready.

Month 1 completion is based on whether this machine exists, is live, and is measurable. Raw demand volume is a KPI outcome, not the sole approval condition.


M1-D1 — Open-source WEAVE Tool

Description: A public, open-source, self-hostable WEAVE Tool that lets any technical user create Scope .json workflows, run viability research, run QA, and extend the system independently. This is the core product layer, not the hosted WEAVE service.

Outcome: By end of Month 1, a technical user can take the WEAVE Tool repo, follow the docs, generate valid workflows, inspect viability + QA outputs, and adapt the tool for their own API or webapp if they choose.

Excludes: Hosted uptime, DAO-run API reliability, user onboarding/account systems, and incentive logic.

Sub-deliverables

M1-D1.1 — Public repo and licensing live

Criteria:
  • Repo is public
  • License file is present
  • README covers setup, architecture, usage, and extension points
  • Example assets or example outputs are included or linked

Evidence: Public repo URL, license file, README, architecture note.

M1-D1.2 — Workflow creation engine works

Criteria:
  • Tool can generate valid Scope .json workflows
  • At least 3 materially distinct workload classes are supported
  • Each class has a reproducible example output
  • Export path for generated workflows is documented

Evidence: 3 exported .json outputs, supported workload list, reproduction notes.
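
For intuition, a sketch of what an exported workflow artifact and a round-trip check might look like. The actual Scope schema is defined by the WEAVE Tool; every field name here is a hypothetical placeholder for illustration only.

```python
import json

# Hypothetical example of an exported Scope workflow file.
# All field names below are assumptions, not the real schema.
example_workflow = {
    "name": "example-upscale-workflow",
    "workload_class": "video-upscale",  # one of the 3+ supported classes
    "steps": [
        {"id": "ingest", "type": "input"},
        {"id": "upscale", "type": "model", "params": {"scale": 2}},
        {"id": "export", "type": "output"},
    ],
}

# Round-trip through json to confirm the exported artifact is valid JSON,
# which is the baseline the QA module (M1-D1.4) builds on.
exported = json.dumps(example_workflow, indent=2)
assert json.loads(exported)["workload_class"] == "video-upscale"
```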

M1-D1.3 — Commercial viability module integrated

Criteria:
  • Workflow creation includes an explicit viability-assessment step
  • Viability output is generated inside the tool
  • Output includes target user, use case, monetization path, market rationale, and build/no-build reasoning
  • Viability output is saved or exportable alongside the workflow artifact

Evidence: UI or CLI artifact showing the viability step, 3 example analyses, output schema.

M1-D1.4 — QA module integrated

Criteria:
  • QA stage exists inside the tool flow
  • QA returns pass/fail or a structured issue list
  • Checks cover valid .json structure, required fields, runnable shape, and broken-config detection
  • At least 1 failed QA case is demonstrated

Evidence: QA checklist/spec, pass artifacts, fail artifact, output examples.
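
A minimal sketch of the kind of structural QA pass described above, assuming a hypothetical workflow schema (the required field names are illustrative, not the actual Scope format):

```python
# Placeholder required fields; the real QA spec defines the actual set.
REQUIRED_FIELDS = {"name", "workload_class", "steps"}

def qa_check(workflow: dict) -> dict:
    """Return a pass/fail verdict plus a structured issue list."""
    issues = []
    # Required-field check.
    for field in sorted(REQUIRED_FIELDS - workflow.keys()):
        issues.append({"field": field, "error": "missing required field"})
    # Runnable-shape check: steps must be a non-empty list.
    if not isinstance(workflow.get("steps"), list) or not workflow.get("steps"):
        issues.append({"field": "steps", "error": "must be a non-empty list"})
    return {"passed": not issues, "issues": issues}

# A broken config should fail QA, matching the
# "at least 1 failed QA case is demonstrated" criterion.
result = qa_check({"name": "broken"})
```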

M1-D1.5 — Self-hostability proven

Criteria:
  • Fresh setup can be completed from the published docs
  • Tool runs without requiring the hosted WEAVE App/API
  • A technical user can extend or wrap the tool for their own API or webapp
  • No WEAVE-managed account is required to use the core tool

Evidence: Self-host setup note, fresh-run proof, extension or wrapping example.

Evidence Pack for Rick Review

  • Public repo URL
  • License file
  • README + architecture note
  • 3 sample workflow outputs
  • 3 viability outputs
  • QA pass/fail artifacts
  • Self-host proof

M1-D2 — Hosted WEAVE App + API Access Layer

Description: The DAO-operated, free hosted access layer for WEAVE. This is the managed WEAVE App and public API that use the WEAVE Tool underneath so users can access WEAVE services without self-hosting.

Outcome: By end of Month 1, a user can complete the workflow-creation path through the hosted WEAVE App or API, view viability + QA outputs, and move from workflow creation into app creation without local deployment.

Excludes: Open-source licensing proof for the core tool, standalone self-host proof, and incentive policy itself.

Sub-deliverables

M1-D2.1 — Free hosted WEAVE webapp live

Criteria:
  • Public URL loads successfully
  • User can complete workflow creation end-to-end through the webapp
  • Viability and QA outputs are visible in the hosted flow
  • Workflow output is viewable and exportable

Evidence: Public URL, screenshots, end-to-end web demo recording.

M1-D2.2 — Public API live

Criteria:
  • API endpoint(s) are publicly reachable
  • API docs are published
  • Workflow creation, viability, and QA are available via API
  • At least 1 successful API-created workflow is demonstrated

Evidence: API base URL, API docs, example request/response, API demo artifact.
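
To make the API criterion concrete, here is a hypothetical request/response shape for an API-created workflow. The URL, path, and every field name are assumptions; the published API docs from this sub-deliverable are authoritative.

```python
# Hypothetical request body for creating a workflow via the hosted API,
# e.g. POST https://api.weave.example/v1/workflows (placeholder URL).
request_body = {
    "prompt": "generate a video-upscale workflow",
    "include_viability": True,
    "include_qa": True,
}

# Plausible response envelope: the workflow artifact plus the viability
# and QA outputs that M1-D2.2 requires to be available via API.
# Field names are illustrative only.
response_body = {
    "workflow": {"name": "video-upscale", "steps": []},
    "viability": {"build_decision": "build", "target_user": "creators"},
    "qa": {"passed": True, "issues": []},
}

# The three surfaces (workflow, viability, QA) travel together,
# mirroring the tool-side outputs from M1-D1.
assert set(response_body) == {"workflow", "viability", "qa"}
```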

M1-D2.3 — Hosted surface is clearly downstream of D1

Criteria:
  • Hosted outputs use the same core workflow format as the WEAVE Tool
  • Relationship between D1 and D2 is documented publicly
  • Hosted access policy is stated
  • Free access terms or limits are stated

Evidence: Public note on tool vs hosted surface, access policy doc, parity note.

M1-D2.4 — Workflow-to-app creation path live

Criteria:
  • User can move from workflow creation into an app-creation path through the hosted surface
  • This path is exposed in the webapp and documented in the API flow
  • At least 1 example app-creation flow is demonstrated
  • Output from this step can be used in the incentive system

Evidence: App-creation demo, UI/API docs, example artifact.

M1-D2.5 — No-self-hosting path demonstrated

Criteria:
  • A user can complete the hosted flow without local deployment
  • A web-based end-to-end demo exists
  • An API-based end-to-end demo exists
  • The hosted path is usable by a non-technical participant

Evidence: Web demo, API demo, hosted-path walkthrough.

Evidence Pack for Rick Review

  • Public webapp URL
  • Public API docs URL
  • Tool-to-hosted parity note
  • Web demo recording
  • API demo recording
  • Example workflow artifact
  • Example app-creation artifact

M1-D3 — Incentive Program Launch

Description: The live WEAVE incentive system that moves users from registration to workflow creation to app creation, with identity-gated enrollment to reduce bot/sybil spam. Phase 1 rewards support workflow and app creation. Phase 2 rewards unlock only after an app proves quality, monetization, and electronic payment readiness.

Outcome: By end of Month 1, incentive registration is live, users can enroll from both the WEAVE Tool and the WEAVE App, Phase 1 rewards are active, and the Phase 2 unlock rules are explicit and operational.

Sub-deliverables

M1-D3.1 — Incentive rules and governance published

Criteria:
  • Public incentive rules document exists
  • Phase 1 and Phase 2 are defined separately
  • Participant types and eligible actions are defined
  • Calculation method and cadence are defined
  • Dispute and exception handling path is defined

Evidence: Public rules URL, version/date, example calculation, dispute policy.
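
An illustrative example of what a published calculation method could look like. The reward sizes, qualifying actions, and cap below are placeholders, not the actual published rules:

```python
# Placeholder Phase 1 reward schedule; real values come from the
# public incentive rules document.
REWARD_PER_ACTION = {"workflow_created": 10, "app_created": 25}
WEEKLY_CAP = 100  # placeholder per-participant cap per cycle

def weekly_reward(actions: list[str]) -> int:
    """Sum rewards for qualifying actions in one cycle, capped per participant."""
    total = sum(REWARD_PER_ACTION.get(a, 0) for a in actions)
    return min(total, WEEKLY_CAP)

# Two workflows plus one app in a cycle: 10 + 10 + 25 = 45, under the cap.
reward = weekly_reward(["workflow_created", "workflow_created", "app_created"])
```

Publishing the rules as something this mechanical is what makes the first-cycle calculation artifact (M1-D3.6) reproducible by anyone.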

M1-D3.2 — Worldcoin-gated registration live

Criteria:
  • Incentive enrollment path is live
  • Worldcoin verification is required before incentive participation
  • Anti-spam / anti-sybil rules are published
  • At least 1 successful registration is evidenced

Evidence: Enrollment URL, verification screenshots, policy note, first successful registration artifact.

M1-D3.3 — Dual onboarding paths live

Criteria:
  • User can enter the incentive system from the WEAVE Tool path
  • User can enter the incentive system from the hosted WEAVE App path
  • Differences between self-hosted and hosted participants are documented
  • Both paths feed the same KPI/reporting surface

Evidence: Tool-path walkthrough, app-path walkthrough, onboarding documentation.

M1-D3.4 — Phase 1 rewards active

Criteria:
  • Phase 1 rewards are live during Month 1
  • Workflow creation is a qualifying action
  • App creation is a qualifying action
  • Incentive window and cadence are publicly stated

Evidence: Launch announcement, rules excerpt, active incentive window proof.

M1-D3.5 — Phase 2 activation gate defined and testable

Criteria:
  • Phase 2 activation criteria are published
  • Criteria require proof of quality standards
  • Criteria require proof of monetization
  • Criteria require ability to accept electronic payments
  • A qualifying evidence template or checklist exists

Evidence: Phase 2 rules section, qualification checklist, sample evidence packet.

M1-D3.6 — First live cycle operational

Criteria:
  • Weekly calculation or review runbook exists
  • First cycle calculation artifact is produced
  • At least 1 real participant moved through registration → action → calculation
  • Exception/dispute handling is usable

Evidence: Runbook, first cycle artifact, participant flow proof, exception log template.

Evidence Pack for Rick Review

  • Public incentive rules URL
  • Worldcoin-gated registration proof
  • Tool-path and app-path onboarding proofs
  • Phase 1 live-window proof
  • Phase 2 qualification checklist
  • Weekly runbook
  • First cycle calculation artifact

M1-D4 — Public KPI / Reporting Surface

Description: The public reporting layer that shows whether WEAVE is actually acquiring users, helping them create workflows, converting those workflows into apps, and moving those apps toward quality and monetization readiness.

Outcome: By end of Month 1, the public can verify acquisition, production, quality, monetization-readiness, and workflow-to-app stickiness from a live KPI/reporting surface.

Sub-deliverables

M1-D4.1 — Public KPI surface live

Criteria:
  • Public URL exists
  • Metric definitions are visible
  • Update cadence is stated
  • Current snapshot is visible

Evidence: Public dashboard/report URL, screenshots, metric definitions.

M1-D4.2 — Acquisition metrics live

Criteria:
  • New users are counted publicly
  • Incentive-verified registrations are counted publicly
  • Tool-path vs app-path acquisition can be distinguished or derived
  • Reporting window is stated

Evidence: KPI surface screenshot, acquisition methodology note.

M1-D4.3 — Production and completion metrics live

Criteria:
  • Workflows started are counted publicly
  • Workflows completed are counted publicly
  • Incentive completions are counted publicly
  • Apps created are counted publicly

Evidence: KPI surface screenshot, metric definitions, sample snapshot.

M1-D4.4 — Quality and monetization progression metrics live

Criteria:
  • Apps crossing the quality threshold are counted publicly
  • Apps that are monetization-ready are counted publicly
  • Apps that can accept electronic payments are counted publicly
  • Phase 2-eligible apps are counted publicly

Evidence: KPI surface screenshot, methodology note, status definitions.

M1-D4.5 — Stickiness / progression metrics live

Criteria:
  • Workflow-to-app conversion metric is defined
  • Post-action stickiness metric is defined
  • Repeat builder activity or return behavior metric is defined
  • Measurement window and method are published

Evidence: KPI definitions, methodology note, example snapshot.
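
A sketch of the conversion metric, assuming it is defined as app-path starters divided by workflow completers within the measurement window (the published methodology note is authoritative):

```python
def workflow_to_app_conversion(workflow_completers: int, app_path_starters: int) -> float:
    """Fraction of workflow completers who started an app path in the window."""
    if workflow_completers == 0:
        return 0.0  # avoid division by zero before any completions exist
    return app_path_starters / workflow_completers

# Against the Month 1 target of 25%+: 2 of 5 completers starting an
# app path would be 40%, clearing the target.
rate = workflow_to_app_conversion(5, 2)
```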

M1-D4.6 — Public reporting artifacts published

Criteria:
  • Weekly report template exists
  • First public KPI snapshot/report is published
  • Caveats and methodology are published
  • Reporting owner/update cadence is named

Evidence: Report template, first snapshot/report, methodology doc.

Evidence Pack for Rick Review

  • Public KPI/dashboard URL
  • Metric definitions
  • Acquisition, production, and conversion screenshots
  • First public report or snapshot
  • Methodology / caveat note

Month 1 KPI Targets

These are operating targets, not the sole completion gate. Month 1 approval depends first on whether the system is live, reviewable, and measurable.

KPI targets:
  • New verified incentive users: 10+
  • Workflows started: 10+
  • Workflows completed: 5+
  • Apps created: 3+
  • Phase 1 incentive completions: 3+
  • Apps reaching quality review: 1+
  • Apps reaching monetization readiness: 1+
  • Workflow-to-app stickiness: 25%+ of workflow completers start an app path
  • Public KPI snapshots/reports published: 1+ live snapshot and weekly cadence defined

Rick Review Gate

Rick signs off separately on each deliverable:

  1. M1-D1 technical completeness of the open-source WEAVE Tool
  2. M1-D2 hosted operability of the WEAVE App + API
  3. M1-D3 correctness and operability of the incentive mechanism
  4. M1-D4 correctness and integrity of the public KPI/reporting surface
Sign-off tracker (per deliverable: Evidence Submitted / Rick Status / Final):
  • M1-D1 Open-source WEAVE Tool
  • M1-D2 Hosted WEAVE App + API
  • M1-D3 Incentive Program Launch
  • M1-D4 Public KPI / Reporting Surface