AI SPE Phase 3: Retrospective

Leadership Transition

Hey all :waving_hand:,

With the launch of the Livepeer Foundation, I’ll be stepping down as Lead Engineer of the AI SPE to fully focus on the Foundation’s mission: stewarding Livepeer’s long-term vision, coordinating core development efforts, growing and diversifying the ecosystem, and progressively decentralizing network governance into the hands of the community.

In my new role as Lead Engineer at the Foundation, I’ll be managing the Network Advisory Board, coordinating engineering across the ecosystem, identifying technical talent to support core initiatives, and supporting collaboration between teams building on Livepeer. My focus will be on ensuring that groups like the SPEs and founders have the tools, guidance, and resources they need to succeed, while also championing the technical vision that positions Livepeer as the open infrastructure layer for decentralized media AI.

As part of this transition, I’m excited to announce that Peter Schroedl will be stepping into the Lead role for the AI SPE over the next month. Peter has been a critical contributor to the AI SPE, bringing deep expertise in software engineering and distributed computation at scale. His strategic thinking and steady focus on long-term goals have been instrumental in our successes. In addition to his contributions within the SPE, Peter has been a long-time orchestrator and open-source contributor to the broader Livepeer ecosystem, with a clear passion for the community. I have full confidence that Peter will lead and grow the AI SPE team as Livepeer’s AI capabilities continue to scale and mature. I’ll be working with him during the handover, starting with this retrospective we prepared together, before Peter leads the next AI SPE proposal, building on the foundations we’ve laid over the past year.

I’m extremely grateful for the trust the community placed in me to spearhead Livepeer’s AI journey last year. It’s been incredible to see what we’ve achieved across the ecosystem in such a short time, from initial batch pipelines running on public orchestrators, to real-time AI pipelines developed with Livepeer Inc, and most recently, custom container jobs like the new Agent SPE Vtuber (see updates below). It’s been exciting to watch the evolution of Livepeer’s AI capabilities. It’s equally inspiring to see so many groups joining the AI and transcoding journeys, helping grow the network through public goods funding. I’m proud of what we’ve built together and excited for what’s ahead.

Finally, I’ll still be around in an advisory capacity from the Foundation side — similar to how Doug supported the AI SPE throughout past phases — and will continue to help the group as needed. I’m excited for what’s ahead — for the Livepeer ecosystem, the Foundation’s mission, and for the AI SPE’s continued growth under Peter’s leadership.

With gratitude,
Rick

Introduction from Peter

Fellow Orchestrators, Delegators, and Core Contributors,

I’m honored to step into the Lead role for the AI SPE and humbled by the opportunity to carry forward the momentum this team has built.

My passion for AI and machine learning started early: discovering a port of the original Eliza chatbot on an early Macintosh sparked a curiosity that later led me to spend nearly four years researching and building a deep-learning chess engine under the guidance of an excellent professor.

Prior to joining the AI SPE, I had been laser-focused on building scalable big-data pipelines for an ML observability platform that helped Fortune 500 companies monitor, evaluate, and improve their models in production. Over the past 11 months with Rick and the team, I’ve been deeply grateful for the opportunity to work hands-on with bleeding-edge ML models and techniques, from building more traditional pipelines like lip-syncing, text-to-speech, and image segmentation, to real-time video ComfyUI nodes and reproducible, Dockerized TensorRT builds for accelerated inference.

The momentum we’re seeing around ComfyStream, BYOC (bring-your-own-container), and AI agents is very promising. I look forward to continuing to support and grow these projects, and to expanding on a vision where Livepeer becomes the backbone for an even wider range of generative AI as well as more traditional ML workloads applied to streaming media - giving orchestrators more ways to differentiate, offer specialized compute, and maximize fees earned on the network.

I’m extremely excited to continue this work with Livepeer Inc, the community and now the Livepeer Foundation, to scale what we’ve built and push the boundaries of what the Livepeer Network, generative AI, streaming video, and machine learning can create together.

Peter

Phase 3 Retrospective

Introduction

At the start of last year, the AI SPE — together with the Ecosystem team at Livepeer Inc — began working toward the mission of validating the impact, potential, and competitiveness of bringing AI-based video compute jobs onto the Livepeer network by driving adoption through real-world deployments, as outlined in this treasury proposal.

We began by enabling basic generative batch jobs on the network, then focused on stabilizing the software and delivering more media-centric pipelines. After two successful phases (retrospectives here and here), this document reflects on everything we achieved during Phase 3 — where we joined Livepeer Inc under the Cascade initiative to kickstart real-time AI jobs on the network, a domain where Livepeer has a natural edge thanks to its experience in decentralized video compute.

In this third phase, we executed on the three core objectives outlined in our proposal: (1) launching foundational real-time AI pipelines and supporting infrastructure — laying the groundwork for a scalable, media-centric developer experience; (2) enabling a vibrant developer ecosystem through documentation, support, and tools like ComfyStream and the ComfyUI StreamPack; and (3) generalizing and optimizing batch and generative pipelines to diversify demand, support partner use cases, and promote long-term sustainable growth.

Key Milestones

These milestones reflect the most meaningful progress made in Phase 3, each backed by key deliverables. For a complete overview of technical contributions, see the commit history and release notes of the main Livepeer repositories.

  1. Successfully completed the first real-time AI hacker program:
    The Ecosystem team–led AI Builder Cohort, run in collaboration with Encode Club, brought together 15–20 ComfyUI workflow creators to design and showcase real-time AI workflows using ComfyStream. The AI SPE supported the program by providing documentation, onboarding, and technical support — surfacing critical insights that directly improved the ComfyStream experience and shaped the real-time AI GTM strategy.

    Contributors: Encode Club, En, Ria, Rich, John, Hunter, Ryan, Xilin, and collaborators across the AI SPE and Livepeer Inc. teams

  2. Onboarded the first real-time AI orchestrators:
    We dedicated engineering resources to support Livepeer Inc’s Infra Pod in stabilizing the real-time network stack — contributing to performance improvements and resolving key bottlenecks. This collaboration enabled the onboarding of the first public orchestrators capable of serving real-time AI jobs, marking a major milestone in the network’s production readiness.

    Contributors: INC engineering, Peter, John

  3. Launched ComfyStream as a standalone project:
    Under John Mull’s leadership, ComfyStream was rebuilt into a modular, community-friendly tool with native ComfyUI integration and its own branding. The project is now available as a custom node with full documentation, along with a companion ComfyUI Stream Pack developed by the research team — providing a suite of core nodes and example workflows that developers use to compose live video pipelines. Expanded with ongoing feature improvements and optimizations, ComfyStream now enables a growing community of creators to build and experiment with real-time AI workflows, laying the groundwork for future adoption on the Livepeer network.

    Contributors: John, Ryan, Jason, Peter, Varshith, Amrit, Marco, Brad, Rick, Victor, Yondon, Pawel, and open-source contributors

  4. Formed a real-time AI research team:
    To bridge cutting-edge research with real-world builder needs, we formed a dedicated research team in collaboration with Livepeer Inc. In this first phase, the team established early collaborations with leading research groups and contributed the initial research-driven nodes to the ComfyUI Stream Pack. These foundations set the stage for a full feedback loop — where academic advances inform creative workflows, and researchers gain insight through real-world usage, datasets, and deployment on the Livepeer network.

    Contributors: Prakarsh, Varshith, Marco, Ryan, Rick, Qiang, Eric, Jeff Liang, Chenfeng Xu

  5. Improved job routing for batch and generative AI workloads:
    Longstanding issues affecting orchestrator discovery and job routing for batch and generative AI jobs were addressed by refining the selection algorithm, suspension behavior, and error handling. Developed in collaboration with Livepeer Inc — and shaped by testing and feedback from the Agent and Cloud SPEs — these improvements significantly boosted network stability and job reliability, laying the foundation for more scalable and diverse AI workloads in future phases. Additional support was provided to the Agent SPE to ensure the generative jobs they depended on could run reliably on Livepeer, with targeted enhancements made to the pipelines they used.

    Contributors: Brad, Rafal, Rick, Agent SPE, Cloud SPE

  6. Delivered first prototype of Bring-Your-Own-Container (BYOC):
    We delivered a working proof-of-concept for BYOC support, allowing developers to run custom compute containers on the Livepeer network. This work was informed by early feedback from partners like the Agent SPE, who were interested in deploying their own containerized workloads. As part of this track, we also developed a lightweight JavaScript payment implementation that lets developers request compute jobs directly from the network using just a wallet — bypassing the need for a centralized gateway. These tools enabled the Agent SPE to successfully build and test their custom container on the Livepeer network, marking an early but important step toward enabling generalized, permissionless compute on Livepeer.

    Contributors: Brad, Agent SPE

Supporting Initiatives & Cross-Ecosystem Contributions

Orchestrator operations improvements

In our Phase 3 proposal, we committed to improving orchestrator operations through features like automated container updates, efficient model downloading, and simplified setup. As development progressed, we shifted focus toward priorities that more directly supported near-term ecosystem growth — including improvements to orchestrator discovery and job routing, prototyping BYOC (Bring Your Own Container) infrastructure, and laying foundational support for real-time AI jobs. These efforts significantly improved job reliability and network extensibility, enabling support for more diverse workloads. While some operational enhancements were deferred, we shipped incremental features such as the Voting CLI and an early implementation of auto-reward call retries (currently under review). Full container and model lifecycle improvements will be revisited in the next phase, now that foundational systems are in place.

Expanded hardware metrics

Initial work on orchestrator hardware metrics began in the previous phase, with core functionality implemented and merged on the ai-runner side and a working proof of concept in place on the go-livepeer side. During Phase 3, additional improvements were required to ensure performance, reliability, and compatibility with the Kafka-based messaging layer. We focused on refining this integration to enable accurate and consistent reporting between orchestrators and gateways. These updates have now been merged and rolled out — allowing gateways to display detailed information about orchestrator hardware capabilities, including GPU type, vRAM, and supported AI job types. However, due to the time invested in finalizing this rollout, we did not yet extend the metrics to include additional specs like CPU or RAM. That expansion remains on the roadmap for a future phase.

AI network reliability initiative

During the second phase, we deployed an AI orchestrator with base GPU capacity alongside a dedicated AI gateway to manage peak traffic and ensure a rapid feedback loop with startups. Initially, we planned to continue these services as needed into the next phase. However, with approximately 50 orchestrators now active on the AI network, we were able to significantly scale down batch processing capacity. This optimization reduced costs from $16K per month at the outset of the batch network to just $5K, now allocated solely for development servers. As a result, we will discontinue this initiative in the next phase and instead request funding only for internal development infrastructure, while continuing to collaborate with orchestrators to expand model variety and availability across the network.

Bounty program & ecosystem contributions

We began this phase aiming to expand the bounty program to support real-time AI work, particularly around ComfyStream. However, the fast-moving development timeline, limited reviewer capacity, and early maturity of the stack made broad open-source contributions difficult.

To support progress in this area, we onboarded Jason Stone (everest-node), who had previously managed our bounty program, as a core contributor via a consolidated ComfyStream bounty. Thanks to our prior collaboration and minimal onboarding overhead, Jason delivered several key improvements. As the stack stabilizes, we plan to open more areas of ComfyStream development to the wider community.

We also redirected bounty resources to other high-impact areas — most notably, the under-maintained Explorer. Jason and other contributors completed bounties to improve its reliability and usability. In parallel, we began working with an external group through the bounty and grants program to explore forming a dedicated team for long-term Explorer maintenance — a track the Livepeer Foundation is now continuing.

In total, five bounties were completed during this phase, with a combined payout of 1,800 LPT.

Deferred Scope for Future Phases

On-chain fee differentiation

This deliverable aimed to enable job-level fee classification directly on-chain, improving transparency and helping orchestrators, developers, and delegators make more informed decisions. It was designed to replace the manual, gateway-based fee tracking system with a scalable, protocol-native mechanism for distinguishing between AI, real-time, live, and VOD jobs.

While we developed an initial technical specification and held exploratory discussions with other core contributors, we lacked the engineering capacity to fully implement, audit, and shepherd the proposal through governance. This initiative will now be taken over by the Livepeer Foundation, which is well-positioned to steward protocol-level improvements and coordinate the necessary resources to advance this upgrade in a future release.

AI video startup program II (TBD)

The second cohort of the AI video startup program — originally planned to onboard new startups building real-time AI applications on Livepeer — was postponed. In coordination with the Ecosystem team, we determined that the real-time AI stack was still too early in its development to reliably support external builders, lacking the necessary stability, documentation, and tooling (e.g., SDKs, APIs).

Given the promising outcomes from the first cohort and the potential to diversify network demand through startup engagement, this initiative remains an area of interest. The team will continue to evaluate when the real-time infrastructure is mature and developer-ready enough to support a successful second cohort.

Network impact

Since launching the AI SPE, we’ve focused on building the core infrastructure, developer tooling, and orchestration support needed to run diverse AI workloads on the Livepeer network — from batch and generative pipelines to BYOC and, more recently, real-time AI in collaboration with Livepeer Inc. While we initially built pipelines ourselves, our role has shifted toward core network development and supporting other teams in building and scaling their own.

Since the start of our AI journey, the network has seen:

  • 50 AI node orchestrators onboarded
  • 10 new AI job types added
  • 32,000 winning AI tickets redeemed by orchestrators
  • 42 ETH distributed for AI job execution
  • Highest weekly orchestrator payout of $12,000

These milestones represent the compounding effect of infrastructure laid down by the AI SPE and the collective contributions of the growing Livepeer AI ecosystem.

Key Learnings

  1. ComfyUI greatly lowers the barrier to entry for AI creators:
    While earlier batch and generative pipelines were often built through deep integration with the core network stack, newer approaches like BYOC and ComfyUI have opened more flexible pathways for developing and deploying new AI workloads — without direct reliance on the core team. ComfyUI-based workflows, in particular, allow creators to build using intuitive visual interfaces and a growing library of reusable and customizable nodes, without requiring deep technical expertise. While Livepeer’s infrastructure still needs improvement to support the deployment of a broader range of these workflows at scale, this model shows strong potential: enabling more AI creators to experiment, prototype, and run their workflows on the network. This shift unlocks a wider range of creative applications powered by Livepeer.

  2. The ComfyUI community brings powerful momentum to real-time AI:
    Through participants in the AI Hacker Program and collaboration with key contributors like Ryan Fosdick and Marco Tundo, we saw firsthand the depth of domain expertise and creative energy within the ComfyUI ecosystem — particularly in real-time AI workflows. While our core team focused on foundational tooling, the broader community brought a level of scale and specialization that enabled faster, more diverse experimentation. This revealed a key insight: by equipping the ComfyUI community with tools like ComfyStream and the ComfyUI Stream Pack, we can accelerate innovation far beyond what any single team could achieve alone. In the next phase, we plan to deepen our support for this ecosystem to further grow the network’s real-time AI capabilities.

  3. Open, research-led development may define long-term differentiation:
    This phase marked our first step toward building a research ecosystem, with a small team collaborating directly with academic groups to port early research into the Livepeer stack. While this hands-on approach yielded initial results, it proved difficult to scale and offered limited value back to the researchers themselves. Moving forward, we believe Livepeer’s edge lies in enabling the research community directly — by offering access to real-world datasets, user feedback, and infrastructure for open experimentation. Supporting applied research in the open will foster early adoption and position Livepeer as a meaningful platform for real-time media AI innovation.

  4. Extensibility and expertise will help scale diverse AI workloads:
    While real-time AI may offer Livepeer a unique advantage, this phase also surfaced interest from teams like the Agent SPE exploring job types that didn’t align with existing batch or real-time pathways. To support them, we worked closely to understand their needs and introduced a BYOC pipeline — enabling containerized workloads to run on the network without embedding application-specific logic into the core protocol. This experience reinforced the importance of keeping Livepeer’s core software lean yet extensible, ensuring it can support a wide range of use cases without being tied to any single product. As more builders join the network, a modular architecture — paired with accessible core expertise — will help support broader adoption across an expanding set of use cases.

  5. Lessons in decentralized collaboration:
    This phase highlighted that decentralized, cross-organizational collaboration — while not always easy — can work well in practice. The collaboration between our open-source group and Livepeer Inc engineers, guided by shared goals, strong engineering practices, and clear communication, enabled meaningful progress toward the Livepeer Cascade roadmap. Special thanks to Xilin for helping bridge product and engineering across teams. This experience underscored the potential for decentralized contributors to work productively toward a shared vision and reinforced the importance of effective coordination as the ecosystem continues to grow.

Finance & Holdings

The AI SPE received 35,000 LPT in funding from the Livepeer Treasury for Phase 3, bringing total funding since the first proposal in 2024 to 75,000 LPT. In the first quarter of 2025, the SPE spent over $257,000 (USD), bringing cumulative expenditures to nearly $947,000.

Phase 3 Key Spending Highlights

  • SPE Contributors: $217,827 — The largest allocation went to AI-focused contributors, reflecting the core development effort driving the initiative forward.
  • Infrastructure & Software: $19,878 — Directed toward infrastructure and software primarily supporting developer needs, network scaling, and early orchestration efforts.
  • Bounty Program: $19,433 — Used to fund community-driven bounties that foster decentralized participation.

We anticipate incurring additional expenses to sustain operations as we finalize and submit the Phase 4 proposal.

Holdings

As of May 15, 2025, the AI SPE holds the following assets on its balance sheet:

AI SPE Balance Sheet

  Asset                                # Tokens    $ Value/Token    $ Value
  LPT                                  29,028      $5.79            $168,077
  ETH                                  3.91        $2,534           $9,908
  USDC                                 7,890       $1.00            $7,890
  Staked LPT                           1,004.38    $5.79            $5,815
  Incurred expenses still to be paid                                -$155,091
  Total Assets                                                      $36,599
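
The stated dollar figures reconcile cleanly. As a quick arithmetic check using only the values from the table above (no new data):

```javascript
// Reconcile the May 15, 2025 balance sheet: the stated $ value of
// each holding, minus incurred-but-unpaid expenses, should equal
// the reported total assets.
const holdings = {
  LPT: 168077,      // 29,028 LPT at $5.79
  ETH: 9908,        // 3.91 ETH at $2,534
  USDC: 7890,       // 7,890 USDC at $1.00
  stakedLPT: 5815,  // 1,004.38 LPT staked, at $5.79
};
const unpaidExpenses = 155091;

const gross = Object.values(holdings).reduce((a, b) => a + b, 0);
const totalAssets = gross - unpaidExpenses;

console.log(gross);       // 191690
console.log(totalAssets); // 36599 — matches the reported Total Assets
```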

The majority of our remaining assets are held in unstaked LPT, with a smaller portion currently staked to the AI SPE orchestrator. As we wind down this node, the staked LPT will be unstaked and may be redeployed to another node to continue earning rewards on otherwise idle assets. Throughout the year, our holdings in LPT, ETH, and USDC were actively managed to support operational needs and maintain liquidity amid volatile market conditions.

Conclusion & What’s Next

Phase 3 marked a turning point in Livepeer’s AI journey — moving beyond initial experimentation, early validation, and foundational network stabilization toward broader ecosystem expansion and the beginning of product-market fit discovery across multiple AI compute tracks.

In close collaboration with Livepeer Inc, we took the first concrete steps toward the Cascade roadmap: launching real-time AI pipelines, improving job discovery and distribution for real-time workloads, enhancing orchestrator reliability, and delivering essential tooling like ComfyStream, the ComfyUI Stream Pack, and developer documentation. Together, these efforts laid the groundwork for a scalable, developer-friendly platform — one designed to power a new wave of real-time AI applications and foster a vibrant community of creators and researchers around it.

In parallel, we expanded the AI demand landscape through Bring-Your-Own-Container (BYOC) support — enabling builders like the Agent SPE to deploy custom, media-centric jobs on the network. This shift toward a modular, permissionless, and product-agnostic architecture reflects a broader vision: making Livepeer the open infrastructure layer for all media-related AI compute, from batch to generative to real-time. While still early, we’ve begun to see promising signs of interest, with initial ecosystem activity around real-time and agent-driven workloads, and new partners exploring how to bring their compute jobs onto the Livepeer network.

Looking ahead, Phase 4 will focus on accelerating this momentum — deepening ComfyStream’s integration with ComfyUI, expanding support for new data types and media workloads (e.g., real-time transcription, segmentation, and labeling), scaling BYOC adoption, and hardening the infrastructure to support AI jobs at greater scale. As we double down on real-time capabilities, we’ll also continue investing in applied research, modular pipeline extensibility, and open experimentation — all in service of making Livepeer the leading decentralized platform for media AI.

A detailed proposal with updated goals, milestones, and budget will follow shortly. Thank you for your continued support — we’re just getting started. :rocket:
