Embody SPE: Pre-proposal Intelligent Public Pipelines

Pre-proposal v5: WEAVE

Authors: DeFine (Strategy & Engineering), Dane (Virtual Worlds & Avatars)
Date: March 15, 2026

1. Abstract

The Embody SPE is an entity dedicated to bringing embodied avatar workloads into the Livepeer ecosystem. Since our inception, we have continuously evolved our technology to meet the needs of the network. Our core drive is to deliver sustained fee growth for Livepeer orchestrators.

Along our journey, we have developed solutions and frameworks that address Livepeer’s core bottleneck: the workload lifecycle. Embody seeks funding to develop and prove these solutions, and to expand them into an open-source public-good toolset fit for every existing and upcoming workload deployed on Livepeer.

2. The Problem

To understand the problem, one must recognize three fundamental actors within the Livepeer ecosystem:

  1. Orchestrators, who provide compute for workloads.

  2. Workload Facilitators, who engineer and maintain the workloads that orchestrators run.

  3. Consumers, who utilize these workloads and generate demand.

The Livepeer network provides adequate tokenomic incentives for orchestrators. However, workload facilitator and consumer incentives are currently funded in a non-standardized way by the Livepeer treasury, with limited success so far. In supply and demand terms, compute suppliers vastly outnumber both demand seekers and demand creators.

Workload facilitators generally fall into two subcategories: startups seeking capital to pay for compute and build a consumer base, and established organizations that already possess compute capital and consumers. Livepeer is attractive to both because orchestrators are incentivized through token inflation to offer compute below standard market costs. Theoretically, workload facilitators should be able to offer better prices to their consumers by selecting Livepeer orchestrators.

Despite this advantage, creating workloads within the Livepeer ecosystem remains exceedingly difficult due to the following barriers:

The Technical and Conceptual Barrier

Workload facilitators must deeply understand the Livepeer ecosystem technically and conceptually. Furthermore, manually executing every step of a workload’s lifecycle and managing the handoff to the next stage requires an immense amount of time and effort. This creates an exceptionally high barrier to entry, practically excluding non-technical participants from the creation process.

The Economic and Incentive Barrier

Economic incentives for workload creation are virtually non-existent by default. Startups rely on the treasury to incentivize their work. Even if a team possesses the technical capability to deliver workloads, they must invest significant time navigating the community and preparing proposals, hoping for treasury approval. For established organizations, there is no incentive to risk service downtime and consumer dissatisfaction by migrating from centralized providers to Livepeer. To date, there are zero documented successful cases of such organizations making the swap.

The Distribution and Interface Barrier

Once a workload is live, startups without established consumer bases face the monumental task of building consumer interfaces and executing outreach. Such organizations typically have low headcounts. The added overhead of maintaining Livepeer-specific infrastructure, building interfaces, and distributing workloads is enough to make Livepeer unattractive, especially given the generous inference subsidies offered by centralized providers.

This accumulation of friction leads to a critical question: How can we radically reduce the time it takes to convert intent into workload creation and distribution?

3. The Solution

To resolve this bottleneck, we propose WEAVE: an open-source, semi-autonomous agentic orchestration tool with a human-in-the-loop design. WEAVE is designed to turn stakeholder intent into researched, engineered, tested, maintained, and distributed workloads, compressing the lifecycle from months to hours.

WEAVE will be accessible to everyone, resolving lifecycle bottlenecks from initial research to consumer distribution. It allows workload facilitators to create new GPU-powered workloads and rapidly deploy them to orchestrators and consumers.

The human operator’s role is simplified: they prompt the initial intent, review the output at the end of each lifecycle stage, and authorize the agent to proceed to the next step. Embody will develop and provide the first WEAVE workloads, managing consumer and orchestrator incentives to power our own and future WEAVE users.

How WEAVE Solves Each Stage

WEAVE directly addresses each stage of the workload lifecycle:

Research
Lifecycle agents help identify promising workloads, synthesize demand, evaluate tooling and feasibility signals, and prepare proposals for Livepeer stakeholder review.

Engineering and Maintenance
Lifecycle agents turn approved intent into implemented workloads, updates, fixes, and ongoing maintenance work inside bounded environments.

QA
Lifecycle agents validate workloads, catch regressions, and prepare them for release through a repeatable testing path.

Consumer Interface
The public entry point is a published SKILL.md, a markdown contract that instructs consumers on how to start, control, and end the workload through a mediated public control surface.

Orchestrator Release
Lifecycle agents package workloads, document release requirements, and move them through a clear operator onboarding and rollout path.

Consumer Distribution
Lifecycle agents support the distribution work needed to reach consumers, drive adoption, and keep the workload legible as a public interface rather than an obscure private implementation.
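For illustration, a consumer-facing SKILL.md might look like the following sketch. The headings, endpoint URL, and field names are invented placeholders for this example, not the published Embody contract:

```markdown
# SKILL: embody-avatar (illustrative sketch)

## What this skill does
Starts, controls, and ends an embodied avatar session on the Livepeer network.

## Start a session
POST https://example.invalid/api/session with a JSON body such as
{ "avatar": "<avatar-id>" }. The response contains a `session_id`
used by all later calls.

## Control the session
POST https://example.invalid/api/session/<session_id>/action with a
JSON body such as { "action": "<command>" }.

## End the session
DELETE https://example.invalid/api/session/<session_id>.
```

The idea is that a coding agent can read such a contract and follow the documented calls directly, without bespoke integration work.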

From Intent to Live Workload

WEAVE provides a clear path from initial intent to live network execution:

  1. Intent Submission: A user submits a workload intent. This can propose a new workload, request an improvement, or point the pipeline toward a concrete opportunity (e.g., real-time camera tracking).

  2. Lifecycle Execution: Agents pick up the intent and move it through research, engineering, QA, release preparation, and distribution.

  3. Stage Review and Authorization: At the end of each stage, agents publish artifacts and request human review. Agents carry the workflow forward, but human authorization is strictly required to complete a stage.

  4. Runtime Release and Distribution: When release-ready, WEAVE distributes the workload to the orchestrator registry. Operators can adopt it through a bounded action. The system simultaneously publishes the consumer-facing SKILL.md and begins distribution outreach.

For orchestrators, this creates a faster path to new fee-generating workloads and lowers integration friction.
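The stage-gated flow above can be sketched as a minimal state machine. The class and method names below are invented for illustration; only the stage order and the human-authorization gate come from the proposal:

```python
# Minimal sketch of WEAVE's stage-gated lifecycle (illustrative only).
# Agents advance the work, but a human must authorize each stage boundary.

STAGES = ["research", "engineering", "qa", "release", "distribution"]

class LifecycleRun:
    def __init__(self, intent: str):
        self.intent = intent
        self.stage_index = 0       # position in the lifecycle
        self.artifacts: dict = {}  # artifacts published per stage

    @property
    def current_stage(self) -> str:
        return STAGES[self.stage_index]

    def publish_artifacts(self, output: str) -> None:
        """Agents publish stage output and request human review."""
        self.artifacts[self.current_stage] = output

    def authorize(self) -> str:
        """Human authorization: strictly required to complete a stage."""
        if self.current_stage not in self.artifacts:
            raise RuntimeError("nothing to review for this stage yet")
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1
            return self.current_stage
        return "live"  # final stage authorized: workload is released

run = LifecycleRun("real-time camera tracking")
run.publish_artifacts("research summary")
print(run.authorize())  # → engineering
```

Attempting to authorize a stage before its artifacts are published raises an error, which mirrors the requirement that agents carry the work forward but never skip human review.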

Economic Incentives

WEAVE resolves the time constraints and technical barriers, but this only solves part of the problem. The other half is the absence of economic incentives for workload facilitators and consumers. Similarly, orchestrators typically do not run workloads for free out of goodwill; there must be clear incentives for them to support WEAVE workloads. To address this, we propose three perpetual financial packets:

1. Workload Facilitator Hackathon Packet

A perpetual hackathon powered by an LPT economic packet, where a portion of the accrued inflation value is used to incentivize the weekly creation of new workloads. The remaining value is fed back into the principal, allowing it to compound continuously so that token rewards grow over time. The shape of the hackathon and its economic parameters will be subject to ongoing refinement by the multisig participants. Participants will engage through a SKILL.md contract, and funding will be distributed retroactively upon the completion of intended targets. This creates an asymmetric upside: participants are incentivized with an initial allocation, and upon delivering a high-demand workload, they receive retroactive funding along with the ability to deploy applications on top of the workload.

2. Consumer Incentive Packet

This perpetual financial packet operates similarly to the facilitator packet, utilizing weekly accrued inflation to incentivize consumers of WEAVE workloads to take specific actions. For example, SKILL.md consumer agents holding a blockchain wallet could be eligible for rewards if they deliver a working open-source companion application that reaches five concurrent daily users. Like the facilitator packet, this is designed with an asymmetric risk/reward model: the consumer uses the workload without charge and receives a small initial incentive (subject to terms), while the upside includes retroactive rewards and the ability to profit from an application built on WEAVE’s incentive layer.

3. Orchestrator Incentive Packet

An economic packet dedicated to providing weekly rewards for orchestrators who run WEAVE workloads on their hardware systems, ensuring consistent compute availability and sustained network participation.
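To make the compounding mechanism shared by these packets concrete, here is a minimal numeric sketch. The weekly yield and payout fraction are assumed placeholder values, not proposed parameters:

```python
# Illustrative sketch of a perpetual incentive packet (assumed parameters).
# Each week: the principal accrues inflation rewards, a fraction of the
# accrued value is paid out as incentives, and the remainder compounds.

def simulate_packet(principal: float, weekly_yield: float,
                    payout_fraction: float, weeks: int):
    """Return (final_principal, total_paid) after `weeks` of operation."""
    total_paid = 0.0
    for _ in range(weeks):
        accrued = principal * weekly_yield    # inflation rewards this week
        payout = accrued * payout_fraction    # incentives distributed
        principal += accrued - payout         # remainder compounds
        total_paid += payout
    return principal, total_paid

# Example: 13,043 LPT principal, 0.4% weekly yield, half paid out, one year.
final, paid = simulate_packet(13_043, 0.004, 0.5, 52)
print(f"principal grows to {final:,.0f} LPT; {paid:,.0f} LPT distributed")
```

Because part of each week’s accrual is reinvested, both the principal and the absolute size of the weekly payouts grow over time, which is the "compound continuously" property the facilitator packet describes.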

4. Where We Are Today

WEAVE is being proposed on top of assets the Embody team already operates: a mature Embody workload, an orchestrator rollout lane, and a working consumer entry point.

  • embody-skill: The published skill-contract path for the workload interface, providing a working consumer-distribution surface.

  • livepeer-ops: The control-plane layer for sessions, policy, and orchestrator allocation.

  • Unreal_Vtuber: The runtime environment for the embodied avatar workload.

  • Registered Orchestrators: 13+ orchestrators have registered to the pipeline and received workloads; seven are currently active, and prior participants can reenter.

  • Rollout Capability: The active lane can already receive auto-updates through the Unreal_Vtuber path.

Together, these assets provide the execution base for the proposal: a mature workload, an existing operator lane, and functioning distribution tooling on which WEAVE can be built, tested, and released.

5. What We Are Delivering (4-Month Scope)

The roadmap advances in three stages: first, deploy Embody as the inaugural workload on WEAVE; second, extend WEAVE to host daydream/scope; third, generalize WEAVE beyond both to support any realtime application, establishing it as a true open-source public good for the Livepeer ecosystem.

The perpetual financial packets are foundational to this progression. All three launch alongside the Embody workload in Month 1, ensuring that workload facilitator and consumer incentives are active from the start and that orchestrator expenses are covered from day one.


Month 1 — Embody Workload, Lifecycle Automation & Incentives Launch

DeFine

Deliverables

  • Establish the first bounded lifecycle-agent runtime.

  • Automate research, engineering, maintenance, and QA flows on the Embody (non-game-dependent) proving lane.

  • Release Embody as the first working WEAVE workload, accessible via SKILL.md and processing sessions on at least one active orchestrator.

  • Deploy all three perpetual incentive packets (Workload Facilitator Hackathon, Consumer, and Orchestrator).

Success Criteria

  • Lifecycle agent runtime operational and processing intents end-to-end on the Embody lane.

  • Embody workload live and accessible via SKILL.md, processing sessions on at least one active orchestrator.

  • All three incentive packets deployed and accepting participants.

  • At least 3 orchestrators enrolled in the WEAVE lane.

Dane

Deliverables

  • Automate the Embody/Unreal Engine workload engineering pipeline through DeFine’s agent runtime: plugin automation, adding and removing code and features, and game packaging — covering ≥80% of the engineering workflow end-to-end.

Success Criteria

  • Agent runtime can execute Unreal Engine engineering tasks end-to-end (plugin add/remove, code changes, game packaging) with ≥80% workflow coverage.

Month 2 — Daydream/Scope Workload Path

DeFine

Deliverables

  • Adapt WEAVE end to end to accept daydream/scope workloads.

  • Bring the orchestrator rollout path online for daydream/scope workloads.

  • Support the first new community workload generated from the Month 1 Hackathon.

Success Criteria

  • WEAVE adapted end to end for daydream/scope workloads.

  • Orchestrator rollout path operational for daydream/scope workloads.

  • First community hackathon workload supported end-to-end.

Dane

Deliverables

  • Deliver a functional prototype of the alternative (non-Unreal Engine) embodied avatar workload demonstrating core session flow.

Success Criteria

  • Alternative avatar workload prototype operational and demonstrating end-to-end session handling.

Month 3 — Generalized Path & Alternative Avatar Pipeline

DeFine

Deliverables

  • Expand WEAVE from scope and daydream to support custom-lane workloads built on any framework or technology stack, including those outside the Embody and daydream/scope ecosystem.

  • Package Dane’s alternative embodied avatar workload into WEAVE.

  • Open WEAVE to public orchestrator onboarding.

  • Publish the supported workflow for releasing and maintaining additional workloads through WEAVE.

Success Criteria

  • WEAVE custom lane operational and accepting at least one workload built on a framework outside Embody and daydream/scope.

  • Dane’s alternative workload packaged and accessible through WEAVE.

  • At least one external orchestrator onboarded through the public path.

  • Supported workflow for releasing additional workloads documented and tested.

Dane

Deliverables

  • Deliver the alternative (non-Unreal Engine) avatar pipeline, fully operational and documented, ready for DeFine to integrate and automate.

  • Deploy the ability to add and edit new avatars and game environments in both the Unreal Engine and the alternative avatar pipeline.

Success Criteria

  • Alternative avatar pipeline operational, documented, and handed off to DeFine for WEAVE integration.

  • Avatar and environment creation and editing operational in both pipelines, with at least one new avatar or environment demonstrably added through the system on each pipeline.


Month 4 — Governance and Handover

DeFine

Deliverables

  • Facilitate a public governance discussion with multisig participants and the community on how the incentive packets and lifecycle-agent runtime should be managed post-grant.

  • Strategize and document the operating model for the three perpetual incentive packets going forward — how they will run, who will manage them, and what community input shapes their parameters. This decision passes through the community.

  • Document the agreed governance path: multisig participants may elect to transition to a decentralized on-chain layer, continue multisig management until a decentralized solution is ready, or confirm another path agreed upon by the group.

  • Resolve all pending bugs submitted against WEAVE during the grant period.

  • Publish handover documentation and a residual-risk list regardless of the governance decision.

Success Criteria

  • Governance discussion held and outcome documented publicly.

  • Incentive operating model documented and ratified by community input.

  • One of the following governance paths confirmed and recorded: (a) decentralized governance contracts deployed and management transitioned, or (b) a clear continuation plan agreed upon by multisig participants with an explicit path toward eventual decentralization.

  • All flagged WEAVE bugs resolved or, where blocked by external dependency, documented with root cause and mitigation plan.

  • Handover documentation and residual-risk list published.

Dane

Deliverables

  • Resolve all bugs flagged across both pipelines during the grant period (Months 1–3).

Success Criteria

  • All flagged bugs resolved or, where resolution is blocked by external dependency, documented with root cause and mitigation plan.

Each monthly tranche is released independently via 3/5 multisig upon confirmation of that month’s criteria — DeFine: $4,000 per month, Dane: $6,000 per month. Month 4 payments for both tracks are advanced upon Month 3 confirmation, with final deliverables confirming grant completion. A delay or gap in one track does not block the other.

6. Budget & Financial Governance

The total amount requested from the on-chain treasury is $100,000 USD, equal to 43,478 LPT at an LPT reference price of $2.30 on March 15, 2026.

Budget Breakdown

  • Team Compensation: 17,391 LPT / $40,000 total.

    • DeFine: $16,000 total ($4,000/month). Strategy, control plane, WEAVE engineering, and governance/runtime delivery.

    • Dane: $24,000 total ($6,000/month). Embodied avatar workload engineering across Unreal Engine and non-Unreal runtime paths.

    • Release mechanism: Each monthly tranche is held in the multisig and released only after the month’s work is complete and its success criteria have been met. Release requires 3/5 multisig confirmation. Each track is independently verified and independently released — a delay in one does not block the other.

  • Operational Costs: 4,348 LPT / $10,000. Infrastructure, runtime, measurement, and support costs for the proving workload during the grant window.

    • Release mechanism: Funds are released against submitted receipts as expenses are incurred. No advance disbursement; each release requires documentation of the corresponding expense.
  • Network Incentives: 13,043 LPT / $30,000. Principal for the Orchestrator Incentive Packet.

    • Release mechanism: Once the program launches in Month 1, the multisig releases weekly or bi-weekly distributions to participating orchestrators according to the published incentive rules.
  • Workload Facilitator Incentives: 4,348 LPT / $10,000. Principal for the Workload Facilitator Hackathon Packet.

    • Release mechanism: Once the program launches in Month 1, the multisig releases weekly or bi-weekly distributions to participants who meet the published hackathon criteria.
  • Consumer Hackathon: 4,348 LPT / $10,000. Principal for the Consumer Incentive Packet.

    • Release mechanism: Once the program launches in Month 1, the multisig releases weekly or bi-weekly distributions to consumers who meet the published incentive terms.

Total: 43,478 LPT / $100,000.
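As a quick arithmetic check of the conversion at the $2.30 reference price (a sketch using only the figures stated above):

```python
# Sanity check of the budget conversion at the $2.30 LPT reference price.
LPT_PRICE = 2.30

budget_usd = {
    "team_compensation": 40_000,
    "operational_costs": 10_000,
    "network_incentives": 30_000,
    "facilitator_incentives": 10_000,
    "consumer_hackathon": 10_000,
}

# Convert each category to LPT, rounding to whole tokens.
budget_lpt = {k: round(v / LPT_PRICE) for k, v in budget_usd.items()}
total_usd = sum(budget_usd.values())
total_lpt = round(total_usd / LPT_PRICE)

print(budget_lpt)
print(total_usd, total_lpt)  # → 100000 43478
```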

The grant will be managed on Arbitrum through a proposal-facing multisig, incorporating comprehensive receipt spend tracking and structured spending categories.

Multisig Composition

  • Orchestrator Tiebreaker: One signer - Pon.

  • Embody Team: Two signers.

  • Foundation: Two signers, including Rick, the Foundation’s head engineer.

Treasury actions will proceed through a 3/5 signature scheme. If funds need to be returned, the remaining balance can be converted to LPT and sent back to the Livepeer treasury through the governance process described in the separate wallet governance packet.

7. Addressing Past Feedback

We want to thank the Livepeer stakeholders who gave feedback on earlier versions of this pre-proposal. That feedback improved the proposal materially by surfacing three important issues: the security boundary needed to be clearer, the scope was too abstract, and the budget needed to match the size of the work more credibly.

This revision responds directly to those points. It narrows the first workload claim, explicitly defines the system as an open-source tool rather than a decentralized protocol, makes the public consumer path and governance shape more legible, and ties the four-month ask to reviewable milestone outputs. During the core engineering period, the team will remain responsive to ongoing feedback and incorporate useful improvements.

8. FAQ

1. What is WEAVE for? WEAVE is designed for both the creation of entirely new workloads and the implementation of changes to existing ones.

2. How much automation exists in WEAVE? A WEAVE user can select their preferred level of automation. They can choose to manually review each stage, leave the review to the agent for auto-authorization, or take a highly hands-on approach in the creation process. The lifecycle agents are capable of automating the entire workload lifecycle, including scanning for novel opportunities.

3. How will WEAVE workloads be deployed to orchestrators? The Embody team will maintain a registry where the lifecycle agents of every WEAVE user can post their workloads. Livepeer orchestrators can then browse this registry and deploy these workloads through a single command.

4. How can you be sure that workloads will be secure? The security evaluation process naturally sits outside the domain of individual WEAVE users. All workloads will be automatically inspected by centralized lifecycle agents operated by the Embody team. Furthermore, every workload will require a manual review before it is approved for deployment to the registry.

5. Will WEAVE use existing Livepeer components for the workload lifecycle and orchestrator payments? Yes. WEAVE will use Bring Your Own Compute (BYOC) for workload deployment, alongside Livepeer gateways and the clearinghouse for workload delivery and payments. The custom Embody parts that previously fulfilled these functions will be replaced with their mapped Livepeer-specific components.

6. What happens if you aren’t finished in the provided timeframe? All provided funds managed by the multisig will be converted to LPT and sent back to the treasury.

9. References & Technical Appendix

This appendix serves as the deeper review bundle behind the shorter forum post. The post stays high-level; the linked docs carry the architecture, roadmap, budget, and governance detail.

Repositories

Packet Docs

Note: packet docs are being actively updated for the new version.


why does ‘pipeline management’ have to be agent-powered? sounds like a hammer looking for a nail

Great question @stronk. It might be useful to reply here with another question: in what time frame will AI be able to manage all aspects of the pipeline better than any human team? Within the next 6 months to 2 years is, we think, a good projection.

This is a nail we need to hammer. The reason is simple: we have 24 hours per day, many of which are devoted to pipeline engineering and management. Delegating this work to an intelligent, standardized pipeline will allow us to hammer more nails with higher efficiency.

Being required to spend long hours introducing engineering and managerial changes, in an age when you can delegate much of the design and implementation of such tasks to 24/7 running AI agents, doesn’t seem to be the optimal choice. Of course, we are always open to counterarguments.

Well, I mean, that’s not really an answer to my question…

‘We don’t have enough hours in the day’ is a great argument that applies to any level of automation, which can be achieved just as easily without any agent involvement. Engineers have been doing that for decades!

I also believe that the core premise of your proposal is fundamentally flawed. That premise, correct me if I’m wrong, is that the agents improve and learn.

However, you’re not training a model; you’re modifying the context around it. In fact, the agent is modifying its own context, and there is no guardrail that can prevent unsupervised agents from poisoning their own context or runtime environment. This is a compounding issue that is highly likely given how fickle the current generation of LLMs still is.

I’ve been using AI professionally for work for a while now, and while the capabilities never cease to amaze me when they get it right, agents also constantly break any rules you try to enforce, infer from context rather than understanding a domain, choose the simple/easy way out that only solidifies a bad decision made in the past, or just outright break stuff.

Besides, I don’t think it’s prudent to spend treasury funding on what is essentially an R&D project, based on a highly speculative timeline of AI drastically improving its reliability and accuracy.


And I’m not trying to say that agents don’t have a place in this at all… but agents behave a lot more appropriately when constrained to a specific domain, with as little freeform thinking involved as possible.

Just spitballing here, but there’s a deterministic aspect to this, like negotiating supply & demand, which could power a registry that highly specialized agents could use. That would make me much more confident (as would having a PoC before the proposal).


Thank you for your additional feedback Marco.

Please note, we are not talking about traditional automation here. Agents that are able to create, advertise, and negotiate workloads while understanding that they operate within a single coherent network are not mere automation solutions.

We added a section, Safety, Liability, and Governance, to address security and liability worries. Please note that the pre-proposal is not finalized and will be updated further on security. The hypothetical security issues mentioned introduce, at worst, data corruption and suboptimal task execution, provided the agent does not hold the credentials needed to perform a sensitive action like posting, paying through the blockchain, or sending an email. The security risk is considerably lower than that of an agent running with full privileges in a system. Our position is that an AI agent should not have access to sensitive private or API keys, or anything that can cause a high level of damage if compromised; those should be accessible only to the authorizer(s) via secure flows. Please be assured that we take security very seriously. This feedback gives us the opportunity to introduce a security review in the upcoming pipeline update. If you are willing, we can grant you full authorization to pentest the network agents for vulnerabilities yourself.

Here it is important to validate whether consistent problems stem from the intelligence of the model or from its underlying configuration. Having worked with multi-agentic frameworks since the release of AutoGen, I have seen a repeatable pattern of cases where a well-tuned team of agents outperforms default agentic configurations. From my experience, agents seem to be performing better day by day, and that trend seems to be accelerating.

Every treasury proposal relevant to creating fee-generating AI workflows has been R&D; there are practically few primitives for real-time AI workflows running on decentralized compute networks. The timeframe we gave previously (6 months to 2 years) did not refer to the semi-autonomous pipelines that this pre-proposal presents; these are possible with current models and technology. That timeframe referred to the emergent ability of AI to manage most operations better than a human. When that happens, it will be great for our use case, but we do not rely on it.

Right, that’s exactly what I am arguing against. I don’t agree that AI agents are the right tool for the job. It’s just not there yet. And whether it’s nearly here or not is up for speculation.

Totally agree, and thank you for taking the time to provide all this useful feedback! We are currently working on the first version of the pipeline and will update the community once it is done. There was never any intent to bring a proposal to a vote without a technical showcase.


Valid concern. I think putting the version out to measure what the agents can and can’t do reliably will allow us to:

1: decide what the agents should initially be able to do.
2: standardize processes for adding new features to those agents and measuring them.

Without some data on the matter, we are both relying on our intuition and biases in this discussion.


Thanks for the prop.

There’s an emphasis on what consumers get, but not who the consumer actually is. Who is this for, and do they want “Embodied Agent time”, “Access to the output of network-only information”, and “agent-optimized workloads”? Who is the target for this product?

I’m also extremely skeptical about any promise of “self learning agent swarms” in the current state of AI memory and LLMs as a whole.

And why Openclaw? Is that the best solution for this stack or just what’s hyped at the moment?


Hey @Authority_Null, thank you so much for taking the time to leave feedback! You are right to highlight concerns regarding the customer base, as it wasn’t clearly stated in this version of the proposal. Our primary target customer base is:

a) LLM agents seeking a virtual body to augment their earning potential.
b) AI agents that need custom inference workloads.

Our main focus has been the agent-to-agent market. There are multiple reasons for that; the main one is that we observe a clear trend of agents bypassing humans in GDP creation, and we see a great opportunity for Embody there.

What might not be immediately obvious to readers is that the team seeks conservative funding to deliver a fully functional decentralized network of embodied agents to the broader Livepeer community, all while handling the initial liability concerns and asymmetric risk that operating in such a cutting-edge field creates.

You are rightfully concerned regarding self-improvement. Although it is true that self-improvement in AI agents currently brings limited results, it is also true that leading voices in the industry tout `recursive self-improvement loops` for AI within the next 12 months. We would not like to miss such an opportunity.

Concerns about security are also valid. After receiving feedback from @stronk and examining the issue, we decided to go forward with more secure alternatives to openclaw. Additionally, the upcoming pipeline update will only allow on-device agents to negotiate workload pricing with customers, and we will gradually expand from there once we validate results. We always strive to improve our architecture with on-demand security updates; if you have further security concerns, please feel free to contact us and we will resolve them as soon as possible.


Note: v2 is deprecated; please take a look at the first post of this thread for the latest pre-proposal version.

@stronk and @Authority_Null, thank you very much for your feedback. We took the time to reflect on it and address all the points you raised. Please take a look at the updated pre-proposal above and let us know if there is any concern or question you might have.

Thanks for putting all the time into documenting this proposal. It’s quite detailed at the technical and architectural level.

I think when it actually goes to a proposal it can do a stronger job of communicating what exactly all this work enables, who would want to use the capabilities delivered, why that is compelling, and how it helps the Livepeer network as a whole. I don’t think the current proposal with the existing abstract makes that clear. The best details I can extract come from the FAQ line…

We have clarified our primary target customer base: (a) LLM agents seeking a virtual body to augment their earning potential in a digital economy, and (b) AI developers who require custom, on-demand inference workloads for their agents. Our main focus is the emerging agent-to-agent market, where we see a clear trend of AI systems bypassing human intermediaries for GDP creation.

Before doing all this work to put it on the Livepeer network, have you tried to validate that agents actually will seek an embodied avatar to increase their earning potential? Can you test this easily by distributing a skill.md that is backed by a simple API for a hosted version that you’ve deployed, and seeing that agents out there actually discover, integrate, and use the service to their benefit?

I’m sure you’ve seen evidence of this happening, so perhaps sharing that more clearly will help voters get comfortable with how big of an opportunity this is.


Thanks for the detailed feedback, Doug.

We’ve updated the original post with a revised version that removes much of the technical and economic detail. Our goal was to communicate the problem, vision, and solution more clearly.

At a high level, the value to Livepeer comes from automating the full pipeline lifecycle, R&D&D (Research, Development, Deployment & Distribution), through agentic AI.

  • Section 2 outlines the problem

  • Section 3 presents the vision

  • Section 4 details the solution

We’d appreciate your feedback on whether the revised version makes the opportunity clearer, or if any parts still need refinement.

Regarding validation:

The skill.md model was designed specifically for rapid distribution and hypothesis testing. It is intentionally lightweight. Anyone can paste the skill.md GitHub URL into their coding agent (Codex, Claude Code, etc.) and immediately integrate with the API.
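For readers unfamiliar with the pattern: a skill.md file pairs a short capability description with endpoint documentation that a coding agent can act on directly. The sketch below is purely illustrative; the headings, endpoint paths, and fields are invented for this example and are not Embody's actual file.

```markdown
# Embody Avatar Skill (illustrative sketch, not the real file)

## What this skill does
Gives an agent an embodied avatar it can drive through a REST API.

## Base URL
https://api.example.com/v1   <!-- placeholder, not a real endpoint -->

## Endpoints
- `POST /avatars` — create an avatar session; returns an `avatar_id`.
- `POST /avatars/{avatar_id}/speak` — body: `{ "text": "..." }`; renders speech
  through the avatar.
- `GET /avatars/{avatar_id}/status` — returns `{ "state": "pending" | "ready" }`.

## Usage notes
- Authenticate every request with `Authorization: Bearer <API_KEY>`.
- Poll the status endpoint until `state` is `"ready"` before sending speech.
```

An agent given the file's URL can read it and translate the endpoint list into API calls without further human wiring, which is the distribution property described above.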

We plan to release it publicly along with a small marketing push as soon as QA is complete. Importantly, we are not prescriptive about how it must be used.

Possible outcomes include:

  • Agents augmenting themselves with embodied avatars to increase earning potential

  • Developers integrating the avatar API into their own applications

  • Emergent use cases we haven’t anticipated

Our role is to provide the API, distributed through skill.md.
How it is used will be determined by the market.

Aggregated analytics will allow us to observe real usage patterns and prioritize features based on actual demand rather than assumptions.


The inspiration for skill.md came from Moltbook. When I provided the Moltbook URL to my OpenClaw agent, it read the skill.md file and learned how to interact with the platform autonomously. Moltbook’s user base expanded rapidly using this distribution method.



Hey @DeFine, appreciate all the work that has gone into this proposal. Would really appreciate it if you set up a GitHub milestones page (you can check Explorer, Cloud, and NaaP as examples). It makes it very easy for everyone in the community to get an overview of the work.


Hello @Mehrdad, you will find the latest version of the pre-proposal (v4) in the first post of this thread. Please let us know what you think. You can find more details regarding the roadmap and deliverables here - this is a hash-backed commit which will also be referenced in the on-chain proposal. Happy to answer any questions you might have.


Hi @DeFine, @webRTCisCool, awesome to see this reaching final stages. The iterations from v1 to v4 feel like night and day, and I hope the feedback has helped to clarify the narrative. Two quick questions from my side:

1. On similarities with adjacent work - the current network product work, both Qiang’s NaaP platform and Rick’s latest scaled-back version, is essentially trying to build the same “simplest path from workload to developer”. I know we’ve discussed this privately, but how do you see WEAVE sitting relative to that? Would be good to understand where the surfaces connect or whether they are different shots on goal.

2. On demand — you’ve got a number of Orchs onboarded. What does actual demand look like today? And what are early estimates for the future? Even rough numbers on fees would help the community get a feel for what “Month 1 goes live” actually means for network fees (apologies if I’ve missed this).

Rooting for the proposal, not least for the amount of work you’ve both contributed to the Livepeer ecosystem over the past months.


Hey Rich, thanks for your feedback. Indeed, we implemented a lot of feedback to make sure this is a win-win for both Livepeer and our team. We are currently waiting on Livepeer stakeholders to review v5 (hopefully the last iteration) to make sure it improves on all of the private feedback.

  1. It is true that NaaP and WEAVE are conceptually trying to solve the same problem, but they do so in different ways. While NaaP offers a dev plugin that users can utilize to create and iterate on Livepeer workloads, WEAVE offers an agentic orchestration tool that goes from intent to workload creation and distribution. NaaP focuses on making the developer stage of the workload lifecycle easier, while WEAVE takes the user through all six lifecycle stages. That does not mean WEAVE replaces NaaP; they fulfill different functions, and in fact WEAVE can incorporate NaaP as part of its developer-stage solution. This answer reflects our understanding of NaaP; please correct us if that understanding is wrong.

  2. Regarding demand, our company is currently the main consumer of the workloads, while we push to bring more developers onto them through the skill.md implementation. In our Published Retrospective you can see two items: the Startup Integration Program and usage incentives. After the separation from the Agent SPE, the Embody SPE did not receive these funding packages, so all consumer-acquisition efforts and initiatives have been paid for out of personal expenses. This proposal seeks new funding packages to incentivize consumer adoption and prove demand. We hope the community understands that we are a small startup with limited headcount that does not yet possess the resources and backing that entities such as Livepeer Inc rightfully enjoy. Passing this proposal will also help us attract greater external support, which can be used to achieve fee growth for Livepeer orchestrators.

Thank you for asking those important questions, and for the generous ongoing support that you and everyone at the Livepeer Foundation have given the Embody team.


Hello everyone. The fifth version of the pre-proposal is out. Among other things, we added:

  • clear descriptions of the problem and the solution
  • coverage of Daydream in the scope of WEAVE
  • retroactive funding
  • clear explanations of the financial incentives and their management

Please feel free to add your comments and feedback so that the proposal can go on-chain.
