Embody SPE: Pre-proposal Intelligent Public Pipelines

Pre-proposal v4: Weave, Intelligent Public Pipelines for Livepeer

Authors: DeFine (Strategy & Engineering), Dane (Virtual Worlds & Avatars)

Date: March 7, 2026

1. Abstract

Livepeer’s fee growth is constrained by a workload lifecycle and handoff bottleneck. The network still lacks a fast, scalable way to identify high-value workloads, build them, validate them, maintain them, and distribute them to orchestrators and consumers. Even when AI is used within individual stages, the lifecycle as a whole remains fragmented, and the handoff from one stage to the next is still manual.

This proposal introduces Public Workload Pipelines: a decentralized, agent-operated system for turning stakeholder intent into researched, engineered, tested, maintained, and distributed workloads for the Livepeer network. Embody proposes to build this system and use Embody workloads as the first proving case.

The objective is to deliver a public-good network extension that can accelerate workload creation and distribution, increase the flow of valuable workloads through the network, and strengthen Livepeer’s fee generation. In its final form, these pipelines are intended to be steerable by Livepeer stakeholders through LPT-based governance.

2. The Problem: The Workload Lifecycle And Handoff Bottleneck

Today, even a modest workload improvement still has to be pushed manually through a full lifecycle:

  1. Research

Identify a promising workload and evaluate demand, tooling, and feasibility.

  2. Engineering and maintenance

Build the workload and keep it working over time.

  3. QA

Validate the workload through testing and review.

  4. Consumer interface

Build the interface consumers use to access and control the workload.

  5. Orchestrator release

Coordinate release requirements and rollout with orchestrators.

  6. Consumer distribution

Reach consumers, support adoption, and keep the workload legible in public.

> AI tools can help within individual stages, but they do not remove the broader lifecycle coordination problem. Teams still have to carry intent and context back and forth across research, engineering, QA, release, and distribution, and manually manage the transition from one stage to the next. That creates cognitive strain, slows iteration, and keeps the contribution threshold high. For Livepeer, the result is slower workload creation, higher contribution friction, and weaker fee generation than the network could otherwise support.

3. The Solution: WEAVE

We propose WEAVE: a decentralized, agent-operated public workload pipeline system for Livepeer. WEAVE is designed to turn stakeholder intent into researched, engineered, tested, maintained, and distributed workloads for the network.

Livepeer stakeholders express intent, review outputs, and approve important decisions. Lifecycle agents operate the pipeline across research, engineering, QA, packaging, release, and distribution. This is intended to lower the contribution threshold: participants should not need to personally manage the full path from idea to deployed workload in order to create value for the network.

This proposal delivers WEAVE as a reusable Livepeer network extension, with Embody workloads as its first live implementation. The system is designed to ingest stakeholder intent, create new workloads, and improve existing ones across the network.

How We Solve Each Bottleneck

WEAVE is designed to address each stage of that lifecycle directly:

  1. Research

Lifecycle agents help identify promising workloads, synthesize demand, tooling, and feasibility signals, and prepare proposals for Livepeer stakeholder review.

  2. Engineering and maintenance

Lifecycle agents help turn approved intent into implemented workloads, updates, fixes, and ongoing maintenance work inside bounded environments.

  3. QA

Lifecycle agents help validate workloads, catch regressions, and prepare them for release through a more repeatable testing path.

  4. Consumer interface

The public entry point is a published SKILL.md, a markdown contract that tells consumers how to start, control, and end the workload through a mediated public control surface.

  5. Orchestrator release

Lifecycle agents help package workloads, document release requirements, and move them through a clearer operator onboarding and rollout path.

  6. Consumer distribution

Lifecycle agents help support the distribution work needed to reach consumers, drive adoption, and keep the workload legible as a public interface rather than a private implementation.

From Intent to Live Workload

In its intended final form, WEAVE gives Livepeer stakeholders a public path from onchain intent to live network execution:

  1. Intent submission

A Livepeer stakeholder submits a workload intent through an onchain transaction. That intent can propose a new workload, request an improvement to an existing one, or point the pipeline toward a concrete opportunity such as real-time camera tracking.

  2. Lifecycle execution

Lifecycle agents pick up that intent and move it through research, engineering, QA, release preparation, and distribution. The intent provider and other Livepeer stakeholders can inspect the public work as it unfolds and provide additional feedback during the process.

  3. Stage review and authorization

At the end of each lifecycle stage, the agents publish the stage artifacts, request review, and ask for approval before continuing. Agents can carry the workflow forward, but they do not authorize stage completion on their own; that remains a human responsibility.

  4. Runtime release and distribution

When the workload reaches release readiness, WEAVE distributes it to the orchestrator registry, where operators can adopt it through a bounded action and run it in the runtime. The same path publishes the consumer-facing SKILL.md and begins consumer distribution and outreach.

  5. Ongoing steering and fee participation

Once a workload is live, the original proposer can continue steering updates and authorizing important changes. In governance-led flows, WEAVE can also follow standing network guidance set through LPT-based governance, with human reviewers authorizing each stage. In the intended end state, workload-linked fee participation can flow back to the proposer or to the human steerers and authorizers who review and approve the lifecycle outputs.

For orchestrators, this creates a faster path to new fee-generating workloads, lower integration friction, and a stronger role in steering how the pipeline evolves.
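The intent flow above is easiest to see as a small state machine: agents carry an intent forward through the lifecycle stages, but a human approval gates every stage transition. A minimal sketch of that pattern (all class names, stage labels, and reviewer identifiers here are illustrative, not the actual WEAVE interface):

```python
from dataclasses import dataclass, field

# Lifecycle stages as described in the proposal, in order.
STAGES = [
    "research",
    "engineering",
    "qa",
    "release_preparation",
    "distribution",
]

@dataclass
class WorkloadIntent:
    """A stakeholder-submitted intent moving through the pipeline."""
    description: str
    stage_index: int = 0
    approvals: list = field(default_factory=list)

    @property
    def current_stage(self) -> str:
        return STAGES[self.stage_index]

    def submit_stage_artifacts(self, artifacts: str) -> str:
        # Agents publish artifacts and request review; they cannot
        # authorize stage completion on their own.
        return f"[{self.current_stage}] artifacts published: {artifacts}"

    def authorize_stage(self, reviewer: str) -> None:
        # A human reviewer approves the stage before the pipeline advances.
        self.approvals.append((self.current_stage, reviewer))
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1

intent = WorkloadIntent("real-time camera tracking")
print(intent.submit_stage_artifacts("feasibility report"))
intent.authorize_stage("stakeholder-0x1234")
print(intent.current_stage)  # engineering
```

The design point the sketch captures is that `authorize_stage` is the only way to advance the intent, and it always records a human reviewer.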

4. Where We Are Today: Public Proof Points

WEAVE is being proposed on top of assets the team already operates: a mature Embody workload, an orchestrator rollout lane, and a working consumer entry point through the published SKILL.md:

  • embody-skill: the published skill-contract path for the workload interface, giving the first lane a working consumer-distribution surface

  • livepeer-ops: the control-plane layer for sessions, policy, and orchestrator allocation

  • Unreal_Vtuber: the runtime environment for the embodied avatar workload

  • registered orchestrators: 13+ orchestrators have registered to the pipeline and received workloads over time; seven are currently active in the lane, and prior participants can reenter

  • rollout capability: the active lane can already receive autoupdates through the Unreal_Vtuber path

Together, these assets provide the execution base for the proposal: a mature workload, an existing operator lane, and functioning distribution tooling on which WEAVE can be built, tested, and released.

5. What We Are Delivering (4-Month Scope)

The roadmap has three jobs: complete WEAVE across the workload lifecycle, generalize the parts already working for Embody into a reusable path for additional workloads, and ship the governance/runtime needed to hand the system over as a public good.

Month 1 — Lifecycle automation

a. Establish the first bounded lifecycle-agent runtime for the research, engineering, maintenance, and QA stages.

b. Automate research, engineering, maintenance, and QA flows on the Embody proving lane.

c. Release Embody as the first working WEAVE use case on the team-operated path, alongside the first orchestrator and consumer incentives.

Month 2 — Reusable workload path

a. Generalize the existing consumer interface and SKILL.md distribution path beyond Unreal-specific workloads.

b. Adapt the orchestrator rollout path for non-Unreal workloads.

c. Support the next workload lane, including the first lane beyond Unreal-only delivery.

Month 3 — Public release

a. Publish operator onboarding, release notes, and compatibility notes for the WEAVE path.

b. Open the WEAVE path to public orchestrator onboarding.

c. Publish the supported workflow for releasing and maintaining additional workloads through WEAVE.

Month 4 — Governance and handover

a. Deploy the governance contracts for the lifecycle-agent runtime.

b. Hand over the network incentives, agent security bounties, and consumer hackathon budgets to the governed system.

c. Publish handover docs and a residual-risk list for the post-grant period.

Each month is intended to produce a reviewable output that can be checked against the linked technical, roadmap, and governance docs.

6. Budget & Financial Governance

The total amount requested from the on-chain treasury is $100,000 USD, equal to 44,053 LPT at an LPT reference price of $2.27 on March 9, 2026.

Budget breakdown

  • Team compensation: 17,621 LPT / $40,000 total.

a. DeFine: 7,048 LPT / $16,000 total / $4,000 per month. Strategy, control plane, WEAVE engineering, and governance/runtime delivery.

b. Dane: 10,573 LPT / $24,000 total / $6,000 per month. Embodied avatar workload engineering across Unreal Engine and non-Unreal runtime paths.

  • Network incentives: 13,216 LPT / $30,000. Operator and workload-participation incentives for the first WEAVE lanes.

  • Operational costs: 4,405 LPT / $10,000. Infrastructure, runtime, measurement, and support costs for the proving workload during the grant window.

  • Agent security bounties: 4,405 LPT / $10,000. External review and hardening incentives for the supported WEAVE surface.

  • Consumer hackathon: 4,405 LPT / $10,000. Public-consumer adoption incentives and hackathon support for the first public workload lanes.

  • Total: 44,053 LPT / $100,000.
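As a sanity check on the conversion, each line item follows from dividing its USD amount by the $2.27 reference price and rounding to whole LPT (rounding each line independently can leave the sum within 1 LPT of the rounded grand total):

```python
REFERENCE_PRICE = 2.27  # USD per LPT, March 9, 2026 reference

line_items_usd = {
    "team_compensation": 40_000,
    "network_incentives": 30_000,
    "operational_costs": 10_000,
    "agent_security_bounties": 10_000,
    "consumer_hackathon": 10_000,
}

# Convert each USD line item to LPT at the reference price.
line_items_lpt = {k: round(v / REFERENCE_PRICE) for k, v in line_items_usd.items()}
total_usd = sum(line_items_usd.values())
total_lpt = round(total_usd / REFERENCE_PRICE)

print(line_items_lpt["team_compensation"])  # 17621
print(total_lpt)                            # 44053
```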

The package logic and roadmap mapping are described in the separate financial reference. The treasury design is intentionally simple in this post. The grant would be managed on Arbitrum through a proposal-facing multisig.

Multisig Composition

  • Orchestrator tiebreaker: one signer, currently intended to be Pon.

  • Embody team: two signers.

  • Foundation: two signers, including @Rick, the Foundation’s head engineer.

Treasury actions would proceed through that governed multisig path, and if funds need to be returned, the remaining balance can be sent back to the Livepeer treasury in LPT through the same governance process described in the separate wallet governance packet.

7. Addressing Past Feedback

We want to thank the Livepeer stakeholders who gave feedback publicly and privately on the earlier versions of this pre-proposal. That feedback improved the proposal materially. It surfaced three important issues: the security boundary needed to be clearer, the scope was still too abstract, and the budget needed to match the size of the work more credibly.

This revision responds directly to those points. It narrows the first workload claim, makes the public consumer path and governance shape more legible, moves deeper technical and financial detail into linked docs, and ties the four-month ask to reviewable milestone outputs. During the core engineering period, the team will remain responsive to ongoing feedback from Livepeer stakeholders and incorporate useful improvements as the work progresses.

8. References & Technical Appendix

This appendix is the deeper review bundle behind the shorter forum post. The post stays high level; the linked docs carry the architecture, roadmap, budget, and governance detail.

Repositories

  • embody-skill: https://github.com/its-DeFine/embody-skill

  • livepeer-ops: https://github.com/its-DeFine/livepeer-ops

  • Unreal_Vtuber: https://github.com/its-DeFine/Unreal_Vtuber

Packet docs

  • VF4 landing page / README: https://github.com/its-DeFine/weave-public-packet/blob/2e2da6a0ac5e730c7aa2550149f2b89fd18ad902/features/livepeer-proposal-vf4/README.md

  • Glossary: https://github.com/its-DeFine/weave-public-packet/blob/2e2da6a0ac5e730c7aa2550149f2b89fd18ad902/features/livepeer-proposal-vf4/GLOSSARY.md

  • Technical architecture: https://github.com/its-DeFine/weave-public-packet/blob/2e2da6a0ac5e730c7aa2550149f2b89fd18ad902/features/livepeer-proposal-vf4/TECHNICAL_ARCHITECTURE.md

  • Roadmap and deliverables: https://github.com/its-DeFine/weave-public-packet/blob/2e2da6a0ac5e730c7aa2550149f2b89fd18ad902/features/livepeer-proposal-vf4/ROADMAP_AND_DELIVERABLES.md

  • Financial plan and governance: https://github.com/its-DeFine/weave-public-packet/blob/2e2da6a0ac5e730c7aa2550149f2b89fd18ad902/features/livepeer-proposal-vf4/FINANCIAL_PLAN_AND_GOVERNANCE.md

  • Wallet governance packet: https://github.com/its-DeFine/weave-public-packet/blob/2e2da6a0ac5e730c7aa2550149f2b89fd18ad902/features/livepeer-grant-wallet-governance-v0/2026-03-07-contract-ledgers-packet.md


why does ‘pipeline management’ have to be agent-powered? sounds like a hammer looking for nail

Great question @stronk. It might be useful to answer here with another question: in what time frame will AI be able to manage all aspects of the pipeline better than any human team? Within the next 6m-2y, we think, is a good projection.

This is a nail we need to hammer, and the reason is simple: we have 24 hours per day, many of which are devoted to pipeline engineering and management. Delegating this work to an intelligent, standardized pipeline will allow us to hammer more nails with higher efficiency.

Being required to spend long hours to introduce engineering and managerial changes, in an age where you can delegate much of the design and implementation of such tasks to 24/7 running AI agents, does not seem to be the optimal choice. Of course, we are always open to counterarguments.

Well, I mean, that's not really an answer to my question…

‘We don’t have enough hours in the day’ is a great argument that applies to any level of automation, which can be solved just as easily without any agent involvement. Engineers have been doing that for decades!

I also believe that the core premise of your proposal is fundamentally flawed, which, correct me if I'm wrong, is that the agents improve and learn.

However, you’re not training a model, you’re modifying the context around it. In fact, the agent is modifying its own context, and there is no guardrail that can prevent the risk of unsupervised agents poisoning their own context or runtime environment. That is a compounding issue, and a highly likely one given how fickle the current generation of LLMs still is.

I’ve been using AI professionally for work for a while now, and while the capabilities never cease to amaze me when they get it right, agents also constantly break any rules you try to enforce, infer from context rather than understanding a domain, choose the simple/easy way out that only solidifies a bad decision point made in the past, or just outright break stuff.

Besides, I don’t think it’s prudent to spend treasury funding on what is essentially a R&D project, based off a highly speculative timeline of AI drastically improving their reliability and accuracy.


And like I’m not trying to say that agents don’t have a place in this at all… but agents behave a lot more appropriately when constrained to a specific domain with as little freeform thinking involved as possible.

Just spitballing here, but there’s a deterministic aspect to this, like negotiating supply & demand, which could power a registry that highly specialized agents could use. That would make me much more confident (or having a PoC before the proposal).


Thank you for your additional feedback Marco.

Please note, we are not talking about traditional automation here. Agents that are able to create, advertise, and negotiate workloads while understanding that they operate within a single coherent network are not mere automation solutions.

We added a section, Safety, Liability, and Governance, to address security and liability concerns. Please note that the pre-proposal is not finalized and will be updated further on security. The hypothetical security issues raised would introduce data corruption and suboptimal task execution at worst, provided the agent does not hold the credentials needed to perform a sensitive action such as posting, paying through the blockchain, or sending an email. The security risk is considerably lower than that of an agent running with full privileges in a system. Our position is that an AI agent should not have access to sensitive private or API keys, or anything that could cause a high level of damage if compromised; those should be accessible only to the authorizer(s) via secure flows. Please be assured that we take security very seriously. This feedback gives us the opportunity to introduce a security review in the upcoming pipeline update. If you are willing, we can grant you full authorization to pentest the network agents for vulnerabilities yourself.

Here it is important to validate whether consistent problems stem from the intelligence of the model or from its underlying configuration. Having worked with multi-agent frameworks since the release of AutoGen, I have seen a repeatable pattern of cases where a well fine-tuned team of agents outperforms default agentic configurations. From my experience, agents seem to be performing better day by day, and that trend seems to be accelerating.

Every treasury proposal relevant to creating fee-generating AI workflows has been R&D; there are few primitives for real-time AI workflows running on decentralized compute networks. The timeframe we gave previously (6m-2y) was not about the semi-autonomous pipelines this pre-proposal presents - those are possible with current models and technology. That timeframe referred to the emergent ability of AI to manage most operations better than a human. When that happens it will be great for our use case, but we do not rely on it.

Right, that’s exactly what I am arguing against. I don’t agree that AI agents are the right tool for the job. It’s just not there yet. And whether it’s nearly here or not is up for speculation.

Totally agree, and thank you for taking the time to provide all this useful feedback! We are currently working on the first version of the pipeline and will update the community once it is done. There was never any intent to bring a proposal to a vote without a technical showcase.


Valid concern. I think putting the version out to measure what the agents can and can’t do reliably will allow us to:

1: decide what the agents should be able to initially do.
2: standardize processes for adding and measuring new features to those agents.

Without some data on the matter, we are both relying on our intuitions and biases in this discussion.


Thanks for the prop.

There’s an emphasis on what consumers get, but not who the consumer actually is. Who is this for, and do they want “Embodied Agent time”, “Access to the output of network-only information”, and “agent-optimized workloads”? Who is the target for this product?

I’m also extremely skeptical about any promise of “self learning agent swarms” in the current state of AI memory and LLMs as a whole.

And why Openclaw? Is that the best solution for this stack or just what’s hyped at the moment?


Hey @Authority_Null, thank you so much for taking the time to leave feedback! You are right to highlight concerns regarding the customer base, as it wasn’t clearly stated in this version of the proposal. Our primary target customer base is:

a) LLM agents seeking a virtual body to augment their earning potential.
b) AI agents that need custom inference workloads.

Our main focus has been the agent-to-agent market. There are multiple reasons for that; the main one is that we observe a clear trend of agents bypassing humans in GDP creation, and we see a great opportunity for Embody there.

What might not be immediately obvious to readers is that the team seeks conservative funding to deliver a fully functional decentralized network of embodied agents to the broader Livepeer community, all while handling the initial liability concerns and asymmetrical risk that operating in such a cutting-edge field creates.

You are rightfully concerned regarding self-improvement. Although it is true that self-improvement in AI agents currently brings limited results, it is also true that leading voices in the industry tout `recursive self-improvement loops` for AI within the next 12 months. We would not like to miss such an opportunity.

Concerns about security are also valid. After receiving feedback from @stronk and examining the issue, we decided to go forward with more secure alternatives than OpenClaw. Additionally, the upcoming pipeline update will only allow on-device agents to negotiate workload pricing with customers, and we will gradually expand from there once we validate results. We always strive to improve our architecture with on-demand security updates; if you have further security concerns, please feel free to contact us and we will resolve them as soon as possible.


Note: v2 is deprecated; please take a look at the first post of this thread for the latest pre-proposal version.

@stronk and @Authority_Null, thank you very much for your feedback. We took the time to consider it and address all the points you raised; please take a look at the updated pre-proposal above and let us know if you have any remaining concerns or questions.

Thanks for putting all the time into documenting this proposal. It’s quite detailed at the technical and architectural level.

I think when it actually goes to proposal it can do a stronger job of communicating what exactly all this work enables, who would want to use the capabilities delivered, why that is compelling, and how it helps the Livepeer network as a whole. I don’t think the current proposal with the existing abstract makes that clear. The best details I can abstract come from the FAQ line…

We have clarified our primary target customer base: (a) LLM agents seeking a virtual body to augment their earning potential in a digital economy, and (b) AI developers who require custom, on-demand inference workloads for their agents. Our main focus is the emerging agent-to-agent market, where we see a clear trend of AI systems bypassing human intermediaries for GDP creation.

Before doing all this work to put it on the Livepeer network, have you tried to validate that agents actually will seek an embodied avatar to increase their earning potential? Can you test this easily by distributing a skill.md that is backed by a simple API for a hosted version that you’ve deployed, and seeing that agents out there actually discover, integrate, and use the service to their benefit?

I’m sure you’ve seen evidence of this happening, so perhaps sharing that more clearly will help voters get comfortable with how big of an opportunity this is.


Thanks for the detailed feedback, Doug.

We’ve updated the original post with a revised version that removes much of the technical and economic detail. Our goal was to communicate the problem, vision, and solution more clearly.

At a high level, the value to Livepeer comes from automating the full pipeline lifecycle R&D&D (Research, Development, Deployment & Distribution) through agentic AI.

  • Section 2 outlines the problem

  • Section 3 presents the vision

  • Section 4 details the solution

We’d appreciate your feedback on whether the revised version makes the opportunity clearer, or if any parts still need refinement.

Regarding validation:

The skill.md model was designed specifically for rapid distribution and hypothesis testing. It is intentionally lightweight. Anyone can paste the skill.md GitHub URL into their coding agent (Codex, Claude Code, etc.) and immediately integrate with the API.
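The key property of the skill.md model is that the contract is machine-readable: an agent only needs to fetch the file and follow the documented endpoints. A toy sketch of that pattern (the SKILL.md content, base URL, and endpoint names below are invented for illustration; the real contract lives in the embody-skill repository):

```python
import re

# A toy SKILL.md-style contract (contents invented for illustration).
SKILL_MD = """\
# Embody Avatar Skill

## Base URL
https://api.example.invalid/v1

## Endpoints
- POST /sessions      : start an avatar session
- POST /sessions/stop : end an avatar session
"""

def parse_skill(md: str) -> dict:
    """Extract the base URL and endpoint list from a SKILL.md-style doc."""
    base = re.search(r"## Base URL\s+(\S+)", md).group(1)
    endpoints = re.findall(r"- (\w+) (\S+)", md)
    return {"base_url": base, "endpoints": endpoints}

skill = parse_skill(SKILL_MD)
print(skill["base_url"])      # https://api.example.invalid/v1
print(skill["endpoints"][0])  # ('POST', '/sessions')
```

In practice a coding agent reads the markdown directly rather than parsing it with regexes; the sketch just shows that everything needed to call the API is carried in the one file.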

We plan to release it publicly along with a small marketing push as soon as QA is complete. Importantly, we are not prescriptive about how it must be used.

Possible outcomes include:

  • Agents augmenting themselves with embodied avatars to increase earning potential

  • Developers integrating the avatar API into their own applications

  • Emergent use cases we haven’t anticipated

Our role is to provide the API, distributed through skill.md.
How it is used will be determined by the market.

Aggregated analytics will allow us to observe real usage patterns and prioritize features based on actual demand rather than assumptions.


The inspiration for skill.md came from Moltbook. When I provided the Moltbook URL to my OpenClaw agent, it read the skill.md file and learned how to interact with the platform autonomously. Moltbook’s user base expanded rapidly using this distribution method.



Hey @DeFine, appreciate all the work that has gone into this proposal. Would really appreciate it if you set up a GitHub milestones page (you can check Explorer, Cloud, and NaaP as examples). It makes it very easy for everyone in the community to get an overview of the work.


Hello @Mehrdad, you will find the latest version of the pre-proposal (v4) in the first post of this thread. Please let us know what you think. You can find more details regarding the roadmap and deliverables here - this is a hash-backed commit that will also be referenced in the on-chain proposal. Happy to answer any questions you might have.


Hi @DeFine, @webRTCisCool, awesome to see this reaching its final stages. The iterations from v1 to v4 feel like night and day, and I hope the feedback has helped to clarify the narrative. Two quick questions from my side:

1. On similarities with adjacent work - the current network product work, both Qiang’s NaaP platform and Rick’s latest scaled-back version, is essentially trying to build the same “simplest path from workload to developer”. I know we’ve discussed this privately, but how do you see WEAVE sitting relative to that? Would be good to understand where the surfaces connect or whether they are different shots on goal.

2. On demand — you’ve got a number of Orchs onboarded. What does actual demand look like today? And what are early estimates for the future? Even rough numbers on fees would help the community get a feel for what “Month 1 goes live” actually means for network fees (apologies if I’ve missed this).

Rooting for the proposal, not least for the amount of work you’ve both contributed to the Livepeer ecosystem over the past months.


Hey Rich, thanks for your feedback. Indeed, we implemented a lot of feedback to make sure this is a win-win for both Livepeer and our team. We are currently waiting on Livepeer stakeholders to review v5 (hopefully the last iteration) to make sure it improves on all of the private feedback.

  1. It is true that NaaP and WEAVE are conceptually trying to solve the same problem, but they do so in different ways. While NaaP offers a dev plugin that users can utilize to create and iterate on Livepeer workloads, WEAVE offers an agentic orchestration tool that goes from intent to workload creation and distribution. NaaP focuses on making it easier for developers to resolve the developer stage of the workload lifecycle, while WEAVE takes the user through all six lifecycle stages. That does not mean WEAVE replaces NaaP: they fulfill different functions, and in fact WEAVE can implement NaaP as part of its developer lifecycle-stage solution. This answer reflects my understanding of NaaP; please correct us if that understanding is wrong.

  2. Regarding demand, our company is currently the main consumer of the workloads, while we are pushing to get more developers to use them through the skill.md implementation. In our published retrospective you can see two items: the Startup Integration Program and usage incentives. After the separation from the Agent SPE, the Embody SPE did not receive these financial packets, so all consumer-acquisition efforts and initiatives are currently funded through personal expenses. This proposal seeks new financial packets to incentivize consumer adoption and prove demand. We hope the community understands that we are a small startup with limited headcount that does not yet possess the resources and backing that entities such as Livepeer Inc. rightfully enjoy; passing this proposal will also help us attract broader external support, which can be used to achieve fee growth for Livepeer orchestrators.

Thank you for asking those important questions, and for the ongoing support that you personally, and everyone at the Livepeer Foundation, have generously given to the Embody team.
