Abstract
The AI Video SPE requests 97,500 LPT from the Livepeer Treasury to continue scaling real-time and custom media AI workloads on the network. This phase builds on three successful funding rounds and focuses on maturing ComfyStream as a real-time AI engine while productionizing the Bring Your Own Container (BYOC) pipeline to support broader, permissionless developer adoption. These efforts directly serve orchestrators, creators, and infrastructure builders — and further establish Livepeer as the leading decentralized platform for AI-powered media compute.
Mission
Since January 2024, the AI SPE has worked alongside the Livepeer community, Ecosystem team, and broader contributors to validate the impact, potential, and competitiveness of bringing AI-based video compute jobs onto the Livepeer network. Across three successful, treasury-funded phases, we’ve built foundational infrastructure, launched batch and generative pipelines, supported early demand partners, and led the backend development and GenAI pipeline R&D that enabled the emergence of real-time AI capabilities under the Cascade roadmap.
As adoption grows, two critical challenges remain:
- Real-time AI infrastructure still requires refinement to support scalable, production-grade workloads.
- Developers exploring new media use cases lack a clear, permissionless path to deploy on Livepeer.
In Phase 4, the AI SPE will address both. We’ll deepen our contributions to the real-time AI ecosystem, further optimizing ComfyStream and developer tooling, while onboarding broader forms of media AI compute through continued development of the BYOC initiative.
This phase continues our community-funded mission: to establish Livepeer as the premier decentralized platform for media-related AI compute — empowering developers, orchestrators, and creators to run real-world AI workloads on an open, resilient, and permissionless network.
Rationale
As outlined in this past April’s AI SPE Strategic Update, we will continue to work closely with our first design partner application, Livepeer Inc’s Daydream.live (still in beta), for many reasons, not least to show the world concrete evidence that the Livepeer network is capable of performing real-time media AI workloads.
As we found in the previous phase, the current state of Daydream.live and our ComfyStream backend does not yet demonstrate this sufficiently. Our solution combines R&D on foundational GenAI model inference acceleration, the incorporation of modern GenAI techniques for visual quality and real-time control, and, equally important, improvements to the latency, performance, stability, and resiliency of the Go-Livepeer + AI-Worker runtime stack. Many of these efforts translate directly into learnings being applied to the development of an exciting new entrypoint for deploying new media AI use cases onto the Livepeer network: BYOC.
BYOC (Bring Your Own Container)
The BYOC initiative emerged in response to increasing demand from design partners—particularly enterprise clients—who want to run their AI jobs on the Livepeer network while maintaining control over their containerized environments.
Many of these partners already have optimized containers suited to their unique workflows and infrastructure. BYOC allows them to integrate directly into the network without having to significantly modify their existing systems.
This aligns with one of the key lessons we’ve learned from previous SaaS-style efforts: the Livepeer network’s strength lies in being a decentralized compute protocol that distributes workloads, not in acting as a vertically integrated SaaS provider.
At its core, BYOC is designed to be a generalized and decentralized media processing infrastructure. It offers a streamlined, template-driven onboarding process for containerized AI workloads, dramatically lowering the barrier for entry.
This facilitates open experimentation, enabling a broader community of developers and researchers to build and deploy innovative real-time AI pipelines on Livepeer. With support for complex and composable workflows, BYOC expands the range of supported use cases beyond creative video generation, including applications like AI avatars, video intelligence, transcription, and more.
Ultimately, BYOC is a key pillar in our strategy to diversify demand and compute capabilities on the Livepeer network. By enabling a broader set of use cases and making it easier for external partners to experiment and contribute, we foster ecosystem growth, encourage community-led innovation, and position Livepeer as the preferred infrastructure for real-time, containerized AI media processing at scale.
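To make the template-driven onboarding idea concrete, here is a minimal sketch of what a partner's BYOC container entrypoint might look like: a small HTTP service that accepts a media AI job and returns a result. The `/process` endpoint, payload shape, and port are illustrative assumptions for this sketch, not the actual BYOC interface.

```python
# Hypothetical sketch of a BYOC-style container entrypoint (assumed
# endpoint name, payload shape, and port -- not the real BYOC interface).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def process_job(payload: dict) -> dict:
    """Stand-in for the partner's own model inference (assumption):
    here it just upper-cases the input text."""
    text = payload.get("input", "")
    return {"output": text.upper(), "model": "example-echo-v1"}


class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/process":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(process_job(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch's logs quiet


def main():
    # In the container, the Dockerfile's CMD would run this entrypoint.
    HTTPServer(("0.0.0.0", 8000), JobHandler).serve_forever()
```

Because the partner's logic lives entirely behind `process_job`, they can swap in their own optimized inference code without changing how the network invokes the container.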
Comfystream
ComfyStream, the tool the AI SPE has developed and integrated into the existing Livepeer network stack, enables rapid prototyping, fast iteration, and serving of creative real-time media AI workflows. The AI SPE continues to work closely with the infrastructure team at Livepeer Inc to build resilient and performant real-time video AI capabilities on the network using Go-Livepeer and the AI SPE’s AI-Worker, leveraging ComfyStream as a flexible, runtime-configurable pipeline.
We will continue rolling out support for public Orchestrators to run this stack, which has already driven a marked increase in fees on the AI subnet, while generating and collecting the data we use to ensure a reliable and smooth operational experience for Orchestrators.
This foundational work will enable Livepeer Inc, the Livepeer community, and the creative media community at large to use ComfyStream and the familiar ComfyUI workflow builder to quickly and reliably deploy and serve their own highly performant real-time video AI products on the Livepeer network.
The ComfyUI Stream Pack is a set of modular components for use in ComfyUI (and, by extension, ComfyStream) that we have shared with the community to further enable rapid development. With support from Livepeer Inc and partners in the form of meetups and hackathons, we can accelerate innovation far beyond what any single team could achieve alone.
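As an illustration of how Stream Pack components plug into ComfyUI, here is a hedged sketch of a custom node. The node name, category, and behavior are hypothetical; the class layout (`INPUT_TYPES` / `RETURN_TYPES` / `FUNCTION` / `CATEGORY` and the `NODE_CLASS_MAPPINGS` registry) follows ComfyUI's custom-node convention.

```python
# Illustrative ComfyUI custom node in the style a Stream Pack component
# might use. The node itself (GainNode) is hypothetical.
class GainNode:
    """Multiplies an incoming IMAGE by a gain factor."""

    @classmethod
    def INPUT_TYPES(cls):
        # Declares the sockets/widgets ComfyUI renders for this node.
        return {
            "required": {
                "image": ("IMAGE",),
                "gain": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 4.0}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"               # method ComfyUI calls at execution
    CATEGORY = "StreamPack/examples"  # hypothetical menu category

    def apply(self, image, gain):
        # ComfyUI IMAGE inputs are tensors; a scalar multiply scales
        # every pixel value uniformly. Outputs are returned as a tuple.
        return (image * gain,)


# ComfyUI discovers nodes through this mapping in the package's __init__.py.
NODE_CLASS_MAPPINGS = {"GainNode": GainNode}
```

Because each node is a small, self-describing class, hackathon participants can contribute a new real-time effect by writing one file and registering it, which is what makes the Stream Pack a practical vehicle for community development.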
In addition to the generative AI demonstrations seen to date, we are also adding a performant, real-time data channel alongside streaming video, enabling use cases such as transcription and other video intelligence models. This data channel diversifies our demand-generating capabilities in real-time AI far beyond creative video generation and augmentation.
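A data channel like this would carry structured messages alongside the video track. As a minimal sketch, transcription output could be sent as one timestamped JSON message per recognized segment; the field names below are assumptions for illustration, not ComfyStream's actual wire format.

```python
# Hypothetical message format for transcription results on the real-time
# data channel (field names are illustrative assumptions).
import json
from dataclasses import dataclass, asdict


@dataclass
class TranscriptSegment:
    start_ms: int      # segment start, relative to stream start
    end_ms: int        # segment end
    text: str          # recognized speech
    confidence: float  # model confidence in [0, 1]


def encode_segment(seg: TranscriptSegment) -> str:
    """Serialize one segment for transmission over the data channel."""
    return json.dumps({"type": "transcript", **asdict(seg)})


def decode_segment(raw: str) -> TranscriptSegment:
    """Parse a received data-channel message back into a segment."""
    msg = json.loads(raw)
    assert msg.pop("type") == "transcript"
    return TranscriptSegment(**msg)
```

Keeping the payload as small, typed JSON messages lets any viewer-side client (captions overlay, search index, moderation tool) consume the same stream without coupling to a specific model.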
Our research team is continually integrating new open source models and developing new modular components which are released to the community via the ComfyUI Stream Pack, as well as deployed in our current partner app (Daydream) to receive direct feedback for quick iterative research.
This ensures that we stay at the forefront of real-time AI by fostering and enabling a strong research community by offering access to real-world datasets, user feedback, and infrastructure for open experimentation. Supporting applied research in the open will foster early adoption and position Livepeer as a meaningful platform for real-time media AI innovation.
SPE Governance Structure
The SPE is a collaborative effort coordinated between SPE engineers, with input from the community and existing demand partners. It is led by Peter Schroedl.
Since the initial proposals, the AI SPE has evolved from collaborating closely with the Ecosystem team to kickstart Livepeer’s AI efforts into a more independent, mission-driven engineering unit. Collaboration with and guidance from the Livepeer Foundation, as well as the Network and Growth Advisory boards, will be incorporated into the direction of the SPE as those initiatives are realized.
The Livepeer AI SPE now supports demand-generating partners across real-time, batch, and BYOC use cases. Despite this increased independence, collaboration with Livepeer Inc.’s engineering and product teams has deepened — particularly around the Cascade real-time AI roadmap.
The group continues to receive operational support from both Livepeer Inc. and the Livepeer Foundation.
Core Contributors (Funded by SPE)
- Peter – Lead AI Engineer & SPE Proposer (Full-time)
- Prakarsh – AI Researcher (Full-time)
- John (Elite Encoder) – AI Engineer, ComfyStream Lead (Full-time)
- Brad (Ad Astra Video) – Software Engineer, leads onboarding of new demand-side use cases (Part-time)
- Jason (Everest Node) – Junior Software Engineer (Full-time, recently joined)
- Senior AI Engineer – Integration of new demand-side driven AI models (Full-time)
- Senior AI Engineer – Focuses on infrastructure engineering for real-time AI integration (Full-time)
Ecosystem Support (Funded by Livepeer Inc. or Livepeer Foundation)
- Xilin – Product Management Support (Funded by Livepeer Inc.)
Provides expertise in roadmap development, identifies fee-generating opportunities, and guides impactful deliverables. Facilitates collaboration with the Livepeer Inc. engineering team to optimize resources and avoid duplication of effort.
- Ben – Finance Support (Funded by Livepeer Foundation)
Delivers financial administration expertise to ensure efficient resource allocation and secure treasury management.
- Mariyana – HR Support (Funded by Livepeer Inc.)
Assists with hiring, team coordination, and administrative processes to enable smooth day-to-day operations.
- Rick – Vision & Growth Advisor (Funded by Livepeer Foundation)
Offers strategic guidance on key questions to align initiatives with the broader ecosystem’s long-term goals.
Milestones and Timelines
By October 2025, the AI Video SPE will aim to have:
- Onboarded two (2) additional demand partners to the BYOC pipeline
Early demand partners such as the Agent SPE have already provided invaluable feedback. In addition to supporting them through to production, onboarding two more partners will validate early interest in and usage of BYOC, and demonstrate the capability to diversify demand across the network. It will also provide valuable engineering feedback and prove real-world utility as we continue to stabilize and optimize the pipeline. The bandwidth necessary to support additional partners should decrease as the solution matures.
- Onboarded five (5) additional public orchestrators onto the ComfyStream and BYOC stacks
Leveraging our existing and planned design partners’ on-chain staging and production environments, continued work to onboard public orchestrators will produce more diverse fee-generating opportunities for the Livepeer Orchestrator and Delegator communities.
- Released three (3) new data-channel-enabled models as ComfyStream nodes
The addition of a performant, real-time data channel alongside streaming video enables use cases such as transcription and other video intelligence models. This deepens the diversification of the ComfyStream stack’s capabilities far beyond creative video generation and augmentation.
- Released three (3) new real-time foundational video model nodes via the ComfyUI Stream Pack
These enable the community to leverage a more diverse set of models in their own custom creative real-time workflows on the Livepeer network and to share their work with the public, boosting visibility and engagement while continuing to drive fee generation for node operators.
- Enabled five (5) compelling new ComfyStream-enabled pipelines
Demonstrating capabilities driven by our research team, such as temporal consistency, motion tracking, segmentation, and other video- and data-channel-driven capabilities, these pipelines (as ComfyStream workflows) will be used by the growing ComfyStream creator community, in publications of our research, by new design partners, and/or in Daydream.live.
Deliverables
In Phase 4, we aim to build on the work started in Phase 3 — kickstarting Livepeer’s real-time AI journey and expanding the network’s demand surface through the new BYOC pipeline. This dual focus strengthens Livepeer’s position in real-time AI, as outlined in the Cascade roadmap, while enabling a broader range of developers to deploy custom use cases and contribute to the network’s long-term growth.
- BYOC Pipeline Stabilization and Expansion:
In the last phase, we launched a working prototype of the Bring-Your-Own-Container (BYOC) pipeline and successfully processed the first custom workloads with the Agent SPE. This phase will focus on maturing that prototype into a stable, production-ready pipeline capable of supporting a broad range of media-centric AI use cases. We’ll improve core elements like container loading, orchestrator security, job selection logic, and payment mechanisms to ensure reliability, extensibility, and fair value distribution. In parallel, we’ll work closely with new demand partners to validate improvements and ship the first stable BYOC pipeline to support generalized AI job onboarding at scale.
- ComfyStream Optimization and Expansion:
In the last phase, ComfyStream was rebuilt into a modular, community-friendly interface for real-time AI workflows with native ComfyUI integration, full documentation, and reusable nodes. This phase will focus on hardening it as the production backbone for real-time media compute on Livepeer by improving core performance, stability, and reliability. We’ll also add a real-time data channel to unlock new use cases like transcription, overdubbing, video understanding, and object detection. To grow the ecosystem, we’ll continue expanding the Stream Pack and improve onboarding resources to support a broader community of workflow creators building on ComfyUI.
- ComfyUI Stream Pack Expansion via Applied Research:
Building on the initial Stream Pack released last phase, we will expand its capabilities through applied research and direct collaboration with creators and academic partners. This includes contributing new nodes for real-time media understanding, optimizing node performance, and curating reusable workflows for high-impact use cases like transcription, segmentation, and avatar animation. Our goal is to make the Stream Pack a go-to toolkit for building production-ready real-time AI pipelines on Livepeer, while fostering a deeper feedback loop between researchers and real-world builders.
For more details on deliverables, milestones, and resource allocation, please refer to our Mission and Roadmap page. This page also outlines our long-term vision, the complete 2025 roadmap, and opportunities for community contributions to decentralized AI compute on Livepeer. While these deliverables represent our primary objectives, we remain flexible in adapting to emerging opportunities and evolving needs.
Budget Breakdown
The total amount requested from the on-chain treasury is 97,500 LPT to cover operational expenses through September 2025. This request factors in a deficit remaining from Stage 3 funds due to price volatility. Our funding assumption uses a price of $4 per LPT, though we are closely watching current price action and will adjust accordingly. If the price of LPT improves after this proposal passes, funding could extend beyond September and potentially cover operations through the rest of 2025; however, this is highly dependent on LPT market performance. If the fourth-round funding is not fully utilized by November 30, 2025, the SPE will either: (1) roll the remaining funds into a new proposal; or (2) return the unused funds to the treasury.
Projected June-September 2025 Spending:
| Category | LPT | USD | Description |
|---|---|---|---|
| SPE Contributors | 87,500 | $350,000 | Compensation for 5 existing core contributors, new hires to accelerate real-time AI workflows and support the BYOC initiative, and the Stage 3 deficit. |
| Infrastructure & Software | 5,000 | $20,000 | Cloud services, server costs, and software tools required to support development. |
| ETH Gateway & Gas Fees | 3,000 | $10,000 | Funding for on-chain transaction fees related to development and operational activities. |
| Travel | 2,000 | $8,000 | Subsidizes SPE members attending the Livepeer community summit, a critical opportunity for face-to-face working time and strengthening connections. |
| Total | 97,500 | $388,000 | |
This funding will ensure the AI SPE can continue delivering impactful contributions to the Livepeer network, supporting growth in AI-powered video workflows and infrastructure.
Transparency and Accountability
Similar to previous phases, AI SPE team members will provide regular updates to the community during the community calls. Additionally, a comprehensive retrospective report will be delivered at the four-month mark, covering deliverables, key learnings, a budget overview, KPIs, and success metrics.