AI Video SPE Phase 4 Proposal
Abstract
The AI Video SPE requests 56,560 LPT from the Livepeer Treasury to continue scaling real-time and custom media AI workloads on the network. This phase builds on three successful funding rounds and focuses on maturing ComfyStream as a real-time AI engine while productionizing the Bring Your Own Container (BYOC) pipeline to support broader, permissionless developer adoption. These efforts directly serve orchestrators, creators, and infrastructure builders — and further establish Livepeer as the leading decentralized platform for AI-powered media compute.
Mission
Since January 2024, the AI SPE has worked alongside the Livepeer community, Ecosystem team, and broader contributors to validate the impact, potential, and competitiveness of bringing AI-based video compute jobs onto the Livepeer network. Across three successful, treasury-funded phases, we’ve built foundational infrastructure, launched batch and generative pipelines, supported early demand partners, and led backend development and GenAI pipeline R&D enabling the emergence of real-time AI capabilities under the Cascade roadmap.
As adoption grows, two critical challenges remain:
- Real-time AI infrastructure still requires refinement to support scalable, production-grade workloads.
- Developers exploring new media use cases lack a clear, permissionless path to deploy on Livepeer.
In Phase 4, the AI SPE will address both. We’ll deepen our contributions to the real-time AI ecosystem, further optimizing ComfyStream and developer tooling, while onboarding broader forms of media AI compute through continued development of the BYOC initiative.
This phase continues our community-funded mission: to establish Livepeer as the premier decentralized platform for media-related AI compute — empowering developers, orchestrators, and creators to run real-world AI workloads on an open, resilient, and permissionless network.
Rationale
As outlined in this past April’s AI SPE Strategic Update, we will continue to work closely with our first design partner application (Livepeer Inc’s Daydream.live, still in beta) for many reasons, not least to show the world concrete evidence that the Livepeer network is capable of performing real-time media AI workloads.
As we found in the previous phase, the current state of Daydream.live and our ComfyStream backend does not yet demonstrate this sufficiently. Our solution combines R&D on foundational GenAI model inference acceleration, the incorporation of modern GenAI techniques for visual quality and real-time control, and, equally important, improvements to the latency, performance, stability, and resiliency of the Go-Livepeer + AI-Worker runtime stack. Much of this work translates directly into learnings now being applied to the development of BYOC, an exciting new entry point for deploying new media AI use cases onto the Livepeer network.
BYOC (Bring Your Own Container)
The BYOC initiative emerged in response to increasing demand from design partners—particularly enterprise clients—who want to run their AI jobs on the Livepeer network while maintaining control over their containerized environments.
Many of these partners already have optimized containers suited to their unique workflows and infrastructure. The intent of BYOC is to allow them to integrate directly into the network without having to significantly modify their existing systems.
This aligns with one of the key lessons we’ve learned from previous SaaS-style efforts: the Livepeer network’s strength lies in being a decentralized compute protocol that distributes workloads, not in acting as a vertically integrated SaaS provider. Additionally, BYOC is poised to solve the coordination challenge that arises because the Livepeer network combines characteristics of spot and on-demand GPU markets.
At its core, BYOC is designed to be a generalized and decentralized media processing infrastructure. It offers a streamlined, template-driven onboarding process for containerized AI workloads, dramatically lowering the barrier for entry.
This facilitates open experimentation, enabling a broader community of developers and researchers to build and deploy innovative real-time AI pipelines on Livepeer. With support for complex and composable workflows, BYOC expands the range of supported use cases beyond creative video generation, including applications like AI avatars, video intelligence, transcription, and more.
Ultimately, building out our BYOC solution is a key pillar in our strategy to diversify demand and compute capabilities on the Livepeer network. By enabling a broader set of use cases and making it easier for external partners to experiment and contribute, we foster ecosystem growth, encourage community-led innovation, and position Livepeer as the preferred infrastructure for real-time, containerized AI media processing at scale.
ComfyStream
ComfyStream, the tool that the AI SPE has developed and integrated into the existing Livepeer network stack, enables fast iteration and serving of creative real-time media AI workflows. The AI SPE continues to work closely with the infrastructure team at Livepeer Inc to build resilient and performant real-time video AI capabilities on the network using Go-Livepeer and the AI SPE’s AI-Worker, leveraging ComfyStream as a flexible, runtime-configurable pipeline.
We will continue rolling out support for public Orchestrators to run this stack, which has driven a marked increase in fees on the AI subnet, while generating and collecting data that we use to further ensure a reliable and smooth operational experience for Orchestrators.
This foundational work will enable Livepeer Inc, the Livepeer community, and the creative media community at large to use ComfyStream and the familiar ComfyUI workflow builder to quickly and reliably deploy and serve their own highly performant real-time video AI products on the Livepeer network.
The ComfyUI Stream Pack is a set of modular components for use in ComfyUI (and, by extension, ComfyStream) that we have shared with the community to further enable rapid development. With support from Livepeer Inc and partners in the form of meetups and hackathons, we can accelerate innovation far beyond what any single team could achieve alone.
In addition to the generative AI demonstrations seen to date, we are also developing a performant, real-time data channel alongside streaming video, enabling use cases such as transcription and other video intelligence models. This real-time data channel further diversifies our demand-generating capabilities in real-time AI far beyond creative video generation and augmentation. Combined with BYOC, it will allow developers to leverage ComfyStream as a task-agnostic execution engine for building and deploying their own creative AI media pipelines on Livepeer.
Our research team is continually integrating and accelerating open source models and developing new modular components which are released to the community via the ComfyUI Stream Pack, as well as deployed in our current partner app (Daydream) to receive direct feedback for quick iterative research.
This keeps us at the forefront of real-time AI by fostering a strong research community with access to real-world datasets, user feedback, and infrastructure for open experimentation. Supporting applied research in the open will drive early adoption and position Livepeer as a meaningful platform for real-time media AI innovation.
SPE Governance Structure
The SPE is a collaborative effort coordinated among its engineers, with input from the community and existing demand partners, and is led by Peter Schroedl.
Since the initial proposals, the AI SPE has evolved from collaborating closely with the Ecosystem team to kickstart Livepeer’s AI efforts into a more independent, mission-driven engineering unit. Collaboration with and guidance from the Livepeer Foundation, as well as the Network and Growth Advisory boards, will be incorporated into the SPE’s direction as those initiatives are realized.
The Livepeer AI SPE now supports demand-generating partners across real-time, batch, and BYOC use cases. Despite this increased independence, collaboration with Livepeer Inc.’s engineering and product teams has deepened — particularly around the Cascade real-time AI roadmap.
The group continues to receive operational support from both Livepeer Inc. and the Livepeer Foundation.
Core Contributors (Funded by SPE)
- Peter – Lead AI Engineer & SPE Proposer (Full-time)
- Prakarsh – AI Researcher (Full-time)
- John (Elite Encoder) – AI Engineer, ComfyStream Lead (Full-time)
- Brad (Ad Astra Video) – Software Engineer, leads onboarding of new demand-side use cases (Part-time)
- Jason (Everest Node) – Junior Software Engineer (Full-time, recently joined)
- Senior AI Engineer – Integration of new demand-side driven AI models (Full-time)
- Senior AI Engineer – Focuses on infrastructure engineering for real-time AI integration (Full-time)
Ecosystem Support (Funded by Livepeer Inc. or Livepeer Foundation)
- Rick – Vision & Growth Advisor (Funded by Livepeer Foundation)
  Offers strategic guidance on key questions to align initiatives with the broader ecosystem’s long-term goals.
- Ben – Finance Support (Funded by Livepeer Foundation)
  Delivers financial administration expertise to ensure efficient resource allocation and secure treasury management.
- Xilin – Product Management Support (Funded by Livepeer Inc.)
  Provides expertise in roadmap development and guides impactful deliverables. Facilitates collaboration with the Livepeer Inc. engineering team to coordinate resources in pursuit of Daydream-focused deliverables.
- Mariyana – HR Support (Funded by Livepeer Inc.)
  Assists with hiring, team coordination, and administrative processes to enable smooth day-to-day operations.
Deliverables
In Phase 4, we aim to build on the work started in Phase 3 — kickstarting Livepeer’s real-time AI journey and expanding the network’s demand surface through the new BYOC pipeline. This dual focus strengthens Livepeer’s position in real-time AI, as outlined in the Cascade roadmap, while enabling a broader range of developers to deploy custom use cases and contribute to the network’s long-term growth.
- BYOC Pipeline Stabilization and Expansion:
  In the last phase, we launched a working prototype of the Bring-Your-Own-Container (BYOC) pipeline and successfully processed the first custom workloads with the Agent SPE. This phase will focus on maturing that prototype into a stable, production-ready pipeline capable of supporting a broad range of media-centric AI use cases. The initiative is only now growing beyond the prototype stage and will require significant planning and development to become a turnkey solution. We will continue working with new and existing design partners to identify MVP use cases; investigating, planning, and incorporating partner and community feedback on these findings will be a central part of our Phase 4 BYOC work.
- ComfyStream Optimization and Expansion:
  In the last phase, ComfyStream was rebuilt into a modular, community-friendly interface for real-time AI workflows with native ComfyUI integration, full documentation, and reusable nodes. This phase will focus on hardening it as the production backbone for real-time media compute on Livepeer by improving core performance, stability, and reliability.
- ComfyUI Stream Pack Expansion via Applied Research:
  To grow the ecosystem, we’ll continue expanding the Stream Pack and improve onboarding resources to support a broader community of workflow creators building on ComfyUI. Building on the initial Stream Pack released last phase, we will expand its capabilities through applied research and share the results with the community via new nodes, accompanying documentation, and reports on how we achieved them. Our goal is to make the Stream Pack a go-to toolkit for building production-ready real-time AI pipelines on Livepeer, while fostering a deeper feedback loop between researchers and real-world builders.
  Key Deliverables:
  - New real-time media nodes for high-impact use cases such as video understanding, transcription, segmentation, and avatar animation.
For more details on deliverables, milestones, and resource allocation, please refer to our Mission and Roadmap page. This page also outlines our long-term vision, the complete 2025 roadmap, and opportunities for community contributions to decentralized AI compute on Livepeer. While these deliverables represent our primary objectives, we remain flexible in adapting to emerging opportunities and evolving needs.
Milestones and Timelines
By October 2025, the AI Video SPE will aim to have:
- Onboarded three (3) additional demand partners to the BYOC alpha program
  Early demand partners such as the Agent SPE have already provided invaluable feedback. Onboarding more partners will allow us to identify use cases and MVP requirements. It will also serve as a valuable engineering feedback loop, helping us continually expand the capabilities of the BYOC stack while ensuring a secure and robust solution.
- Technical RFC:
  Publish a Request for Comments that defines the end-to-end BYOC architecture(s) based on requirements identified in collaboration with demand partners. The RFC will serve as the canonical engineering spec and unblock parallel implementation work.
- Public BYOC Roadmap:
  Publish a comprehensive roadmap detailing critical enhancements identified through close collaboration with design partners, as well as the core functionality required to bring BYOC to parity with the existing go-livepeer/ai-runner stack.
- Validated Production-Ready Pipeline:
  We are committed to shipping the first stable BYOC pipeline on the public Livepeer AI network to showcase generalized AI job onboarding at scale, driving fee generation (supported by partners and our own budget) to eligible Orchestrators.
- Onboarded ten (10) public orchestrators onto the ComfyStream and BYOC stacks
  Leveraging our existing and planned design partners’ on-chain staging and production environments, continued work to onboard public orchestrators will produce more diverse fee-generating opportunities for the Livepeer Orchestrator and Delegator communities.
- Released three (3) new data-channel-enabled models as ComfyStream nodes
  The addition of a performant, real-time data channel alongside streaming video enables use cases such as transcription and other video intelligence models, deepening the diversification of the ComfyStream stack’s capabilities far beyond creative video generation and augmentation.
- Released three (3) new real-time foundational video model nodes via the ComfyUI Stream Pack
  These nodes enable the community to leverage a more diverse set of models in their own custom creative real-time workflows on the Livepeer network and to share their work publicly, boosting visibility and engagement while continuing to drive fee generation for node operators.
- Enabled five (5) compelling new ComfyStream-enabled pipelines
  Demonstrating capabilities driven by our research team, such as temporal consistency, motion tracking, segmentation, and other video- and data-channel-driven capabilities, these pipelines (delivered as ComfyStream workflows) can be used by the growing ComfyStream creator community, in publications of our research, by new design partners, and/or by Daydream.live.
Budget Breakdown
The total amount requested from the on-chain treasury is 56,560 LPT to cover operational expenses through September 2025. This request factors in a deficit remaining from Stage 3 funds due to price volatility. Our funding assumption uses a price of $6.86 per LPT, following the 30-day SMA from TradingView. If the fourth-round funding is not fully utilized by November 30, 2025, the SPE will either: (1) roll over the remaining funds into a new proposal; or (2) return the unused funds to the treasury.
Projected June-September 2025 Spending:
| Category | LPT | USD | Description |
|---|---|---|---|
| SPE Contributors | 51,020 | $350,000 | Compensation for 5 existing core contributors, new hires to accelerate real-time AI workflows and support the BYOC initiative, and the Stage 3 deficit. |
| Infrastructure & Software | 2,915 | $20,000 | Cloud services, server costs, and software tools required to support development. |
| ETH Gateway & Gas Fees | 1,458 | $10,000 | On-chain transaction fees related to development and operational activities. |
| Travel | 1,166 | $8,000 | Subsidizes SPE members’ attendance at the Livepeer community summit, a critical opportunity for face-to-face working time and strengthening connections. |
| Total | 56,560 | $388,000 | |
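As a quick sanity check, the total LPT request follows from the USD line items at the stated $6.86 price. The sketch below (illustrative only; line-item names mirror the table above) shows that per-row LPT figures are rounded individually, so their sum can differ from the rounded total by one LPT:

```python
# Sanity check of the Phase 4 budget conversion at the stated 30-day SMA price.
LPT_PRICE_USD = 6.86  # price assumption used in the proposal

line_items_usd = {
    "SPE Contributors": 350_000,
    "Infrastructure & Software": 20_000,
    "ETH Gateway & Gas Fees": 10_000,
    "Travel": 8_000,
}

total_usd = sum(line_items_usd.values())            # 388,000
total_lpt = round(total_usd / LPT_PRICE_USD)        # 56,560 (rounded from the USD total)

# Rounding each row separately reproduces the table's LPT column...
per_row_lpt = {k: round(v / LPT_PRICE_USD) for k, v in line_items_usd.items()}
# ...but the rows sum to 56,559, one LPT under the total, due to per-row rounding.
rows_sum_lpt = sum(per_row_lpt.values())
```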
This funding will ensure the AI SPE can continue delivering impactful contributions to the Livepeer network, supporting growth in AI-powered video workflows and infrastructure.
Transparency and Accountability
Similar to previous phases, AI SPE team members will provide regular updates to the community during the community calls. Milestones will be tracked through Karma GAP. Additionally, a comprehensive retrospective report will be delivered at the four-month mark, covering deliverables, key learnings, a budget overview, KPIs, and success metrics.
Note: Final price calculations are based on an LPT price of $6.86, derived from the 30-day Simple Moving Average as reported on TradingView (COINBASE:LPTUSD) on 6/9/25 at 9:41 PM.