Introduction
At the start of this year, the AI SPE set out with a mission to validate the impact, potential, and competitiveness of bringing AI-based video compute jobs onto the Livepeer network by driving adoption through real-world deployments, as outlined in this treasury proposal. Our initial focus was on proving the cost-efficiency and performance of new generative AI video capabilities on the network. Since then, we’ve collaborated closely with the community, design partners, and startups to accelerate adoption and demonstrate the potential of AI jobs on Livepeer.
During the first phase, we harnessed Livepeer’s decentralized payment and compute infrastructure to deploy generative AI pipelines, enabling orchestrators to process AI jobs and earn initial fees while allowing pioneering builders to experiment and innovate on an alpha-stage network. This effort resulted in a stable network and a successful prototype launch at ETH Berlin. As outlined in our first-phase retrospective, these pipelines were a crucial first step toward our broader mission of establishing Livepeer as the leading decentralized platform for media-related AI compute jobs within three years.
Building on this foundation, the second phase of Livepeer’s AI journey, as detailed in this treasury proposal, focused on enhancing AI subnet performance, improving developer experience, scaling the network, and expanding service offerings. We prioritized delivering media-centric AI pipelines, such as Whisper, Text-to-Speech, and Segment Anything 2, enabling expanded GPU support and implementing key network upgrades for scalability and reliability. We also released developer tools, including SDKs and documentation, while strengthening open-source collaboration through bounty programs, hackathons, and ecosystem initiatives.
Shifting from the foundational approach of the first phase, the second phase embraced a demand-driven strategy, collaborating closely with startups from the Startup Program and with Hackathon participants. Real-world use cases directly informed our development roadmap, driving increased demand and higher fees on the network. Along the way, we addressed practical challenges and gathered valuable insights that shaped the 2025 roadmap, as outlined in the Livepeer Cascade post. These insights, supported by extensive market research conducted by the INC product and leadership teams, ultimately led to a strategic focus on real-time AI video—a domain where our existing network is uniquely positioned to excel and potentially become a market leader.
In this forum post, we reflect on the key milestones, share critical learnings, provide a financial overview, and outline next steps as we transition into the third phase of the AI SPE journey.
Achieved Milestones
Since our second treasury proposal passed in May 2024, we have steadily progressed toward its deliverables and milestones. Despite some engineering tasks being deprioritized due to shifting strategic priorities, we collaborated closely with the community to drive significant improvements in the Livepeer network’s scalability, stability, and capabilities, while enhancing developer accessibility through robust SDKs and improved documentation—positioning Livepeer AI for success in 2025.
In parallel, the ecosystem team effectively showcased these advances through events, competitions, hackathons, and programs, boosting community engagement, social interactions, and open-source contributions. Originally scheduled to conclude on September 15th, this phase was extended to December thanks to Sarah’s exceptional financial leadership, which enabled additional deliverables. We also want to acknowledge Doug, Rich, and Shannon for their invaluable guidance on the project vision and roadmap, as well as the rest of the ecosystem team for their continued operational support. Lastly, we thank Livepeer.cloud for providing the public gateway.
The sections below detail all engineering and ecosystem key deliverables accomplished during this phase, together with the main stakeholders. For a more comprehensive overview of engineering progress and individual contributions, please refer to our changelog or review the release notes in the associated Livepeer AI repositories.
Main Deliverables
Enhanced AI Subnet Performance, Metrics, and Orchestrator-Gateway Operations
We implemented core infrastructure upgrades to enhance scalability, reliability, and operational transparency across the Livepeer AI network. The introduction of the AI Remote Worker allowed orchestrators to attach multiple GPU worker nodes to a single main orchestrator, boosting scalability and enabling larger GPU pools to join the network. We also upgraded the Docker Manager to load pipeline-specific containers, reducing dependency conflicts and improving job scalability by removing the need for a single consolidated container. Further, we added support for multiple containers per capability and released External Container support, enabling orchestrators to manage compute jobs using custom logic while maximizing GPU utilization through container stacking.
On the operational side, we improved the Gateway and Orchestrator Documentation and introduced auto-pulling containers on startup, simplifying orchestrator setup. To enhance network visibility, we shipped an AI Network Fee Dashboard and implemented Orchestrator and Gateway Metrics, supported by community-built Grafana dashboards, enabling real-time monitoring of job activity, requested models, pricing, GPU configurations, and job latencies.
We also introduced capability-based pricing, allowing gateways to set individual prices for specific job types, enabling more precise cost management based on processing demands. In addition, we provided new API Endpoints, offering real-time access to orchestrator details such as supported models, GPU availability, and pricing configurations.
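To make capability-based pricing concrete, it pairs naturally with the orchestrator's per-model configuration. The fragment below is an illustrative sketch in the style of go-livepeer's `aiModels.json`; the field names and values here are assumptions for illustration only—consult the Livepeer AI orchestrator documentation for the authoritative schema:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 4456,
    "warm": true
  },
  {
    "pipeline": "audio-to-text",
    "model_id": "openai/whisper-large-v3",
    "price_per_unit": 1000,
    "warm": false
  }
]
```

In a setup like this, a per-pipeline price field lets the orchestrator advertise a distinct price for each job type, while a warm flag controls whether the model container stays loaded to reduce cold-start latency.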
Contributors: Rick, John, Brad, Peter, JoinHive (Livepool), Mike (Xodeapp), Marco (stronk-tech), and other open-source contributors
Provided a Clear Entry Point for App Developers and Expanded Service Offerings
We significantly improved the developer onboarding experience by releasing client SDKs for Python, JavaScript, and Go, simplifying integration with the Livepeer AI network. Additionally, we enhanced API error handling and published comprehensive, developer-focused documentation and API references.
On the service side, we introduced new pipelines, including Text-to-Speech, Audio-to-Text, and Segment Anything 2, while optimizing both new and existing pipelines for better inference speed, processing stability, and output quality.
We also expanded service capabilities by supporting additional models in existing pipelines and introducing dynamic model loading, enabling orchestrators to serve hundreds of custom styles through LoRA-based fine-tuning for pipelines such as text-to-image (T2I) and image-to-image (I2I).
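As a rough illustration of the dynamic model loading idea—not the actual go-livepeer worker code—the sketch below keeps a bounded number of models resident and evicts the least recently used one when a new style is requested. Names like `ModelCache` and `loader` are hypothetical:

```python
# Hypothetical sketch of dynamic model loading: keep at most `capacity`
# models resident, load on demand, and evict the least recently used
# model when the pool is full. Illustrative only -- the real worker
# must also account for VRAM size, not just model count.
from collections import OrderedDict


class ModelCache:
    def __init__(self, capacity, loader):
        self.capacity = capacity      # max models kept resident
        self.loader = loader          # callable: model_id -> model object
        self._cache = OrderedDict()   # model_id -> model, in LRU order

    def get(self, model_id):
        if model_id in self._cache:
            self._cache.move_to_end(model_id)   # mark as recently used
            return self._cache[model_id]
        if len(self._cache) >= self.capacity:
            self._cache.popitem(last=False)     # evict least recently used
        model = self.loader(model_id)           # load on demand
        self._cache[model_id] = model
        return model


# Usage: pretend "loading" a LoRA style is just recording its id.
cache = ModelCache(capacity=2, loader=lambda mid: f"weights:{mid}")
cache.get("style-a")
cache.get("style-b")
cache.get("style-a")          # cache hit, refreshes style-a
cache.get("style-c")          # evicts style-b (least recently used)
print(sorted(cache._cache))   # ['style-a', 'style-c']
```

The same LRU pattern generalizes to evicting by VRAM footprint rather than model count, which is closer to what a GPU worker would actually need.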
Contributors: Rick, John, Brad, Peter, Karbon, Marco (stronk-tech), Jason (Everest node), and open-source contributors in the bounty program.
Launched the First Livepeer Software Bounty Program
To expand the open-source community and reward contributors, we launched the Livepeer Software Bounty Program. This initiative encouraged developers to contribute to Livepeer’s open-source ecosystem, resulting in over 23 completed bounties with a total reward value of 1,300 LPT.
While the program generated significant community engagement, limited engineering team capacity caused delays in reviewing completed bounties. To streamline the process, we transitioned program management to the Ecosystem team. Moving forward, our team will continue to post new bounties, while the Ecosystem team oversees program operations, ensuring a smoother and more scalable experience for contributors.
Contributors: Rick, Rich, John, Brad, Peter, Jason (Everest node) and open-source contributors in the bounty program.
Launch of the First Cohort in the Livepeer AI Startup Program
As part of Phase 2, the ecosystem team launched a three-month startup program, onboarding eight innovative startups building products on the Livepeer AI network. The program fostered a close feedback loop with the engineering team, enabling continuous product refinement and driving technical improvements based on real user feedback.
The program concluded with a successful Demo Day in October, where all eight startups showcased their products. Key takeaways from the program helped identify areas for enhancing network performance, SDK quality, and documentation, directly informing the implementation of new AI pipelines mentioned earlier.
Contributors: En, Karbon, Rich, Rick, John, Brad, Peter, Jacopo, along with the innovative startups participating in the program.
Hosted the First Livepeer AI Hackathon and Multiple Successful Side Events
Between September and October, the ecosystem team collaborated with Encode Club to host the inaugural Livepeer AI Video Hackathon. This virtual event attracted over 50 hackers who developed a diverse range of applications and tools leveraging the Livepeer AI network, competing for exciting prizes. A detailed event summary is available here.
In addition, we partnered with Flipguard to host smaller competitions, encouraging creators to complete challenges and win LPT prizes. These initiatives further boosted community engagement and strengthened participation within the Livepeer ecosystem.
Contributors: Encode Club, En, Karbon, Flipguard, and Jacopo.
Extra Deliverable: Implemented the First Real-Time AI Pipelines
In the final two months of Phase 2, we worked closely with Livepeer INC to launch Livepeer’s real-time AI journey, as detailed in the Livepeer Cascade post. With Xilin acting as the bridge between teams, we delivered the first real-time AI pipelines, tested and improved core network changes to support real-time AI, developed comprehensive developer documentation, and created a developer environment to facilitate adoption.
As a result, we successfully deployed the first real-time pipelines on the Livepeer network. These include real-time artistic video animation with Stream Diffusion, avatar creation using Liveportrait, video depth map overlays with Depth Anything, and selective object animation with Segment Anything 2 and Florence.
Contributors: Rick, John, Brad, Peter, Xilin, Ryan (ryanontheinside), Yondon and INC engineering.
Postponed Initiatives
As we executed our roadmap, we made strategic adjustments in response to emerging challenges and evolving opportunities. While prioritizing core infrastructure upgrades and operational improvements, some features outlined in our Back-end Deliverables were deprioritized due to resource reallocation.
The most notable of these was the Compute-Based Pricing Mechanism, intended to enable orchestrators to set prices based on actual compute usage rather than per pipeline. Similarly, On-Chain Ticket Distinction, aimed at improving the accuracy and reliability of the Network Fee Dashboard, was deferred due to its complexity and resource needs.
Initial efforts toward VRAM Optimization revealed trade-offs, such as slower processing speeds, leading to a temporary pause for further research. Additionally, work on a Data Aggregator for centralized job metrics and an Efficient Job Distribution Mechanism based on VRAM estimation and availability was postponed to ensure focus on higher-priority network upgrades.
These features remain strategically important and are planned for inclusion in future proposals by our group or have been handed over to Livepeer INC. We remain committed to revisiting these initiatives, building on the robust technical foundation established during this phase, and aligning them with our roadmap to ensure continued progress.
Network Impact
During the second phase of Livepeer’s AI journey, several promising signals emerged, highlighting the potential of running AI-powered workloads on the Livepeer network:
- 41 AI Node Orchestrators actively processed jobs, contributing to network growth and stability.
- Hundreds of thousands of jobs were successfully completed across 8 distinct AI pipelines, showcasing the network’s capacity to handle diverse workloads.
- 8 Startups were onboarded onto the AI network, accompanied by multiple smaller applications launched through hackathons and ecosystem initiatives, reflecting growing interest and adoption.
- Over 7,000 winning tickets were paid out to orchestrators, amounting to approximately 9 ETH (>$24K at the current price).
- The highest-ever weekly payout was recorded, highlighting the increasing economic opportunities for orchestrators and validating Livepeer’s viability as a decentralized AI compute platform.
Although the network is still in its early stages and has not yet achieved exponential adoption, these milestones highlight its strong potential for scaling AI workloads on Livepeer.
Key Learnings
Scaling supply to match demand remains a challenge
Despite significant infrastructure upgrades that improved the reliability and scalability of the Livepeer AI network, challenges persist with the current Docker-based manager: inefficient container lifecycle management, inadequate out-of-memory handling, limited hardware resource management, and scalability bottlenecks in metrics communication and dynamic job scheduling. During development, we found that established open-source platforms, supported by robust ecosystems, have already solved many of these problems.
Moving forward, we plan to explore technologies like Ray Serve for efficient hardware management, dynamic resource allocation, and scalable inference serving. We will also evaluate Docker Swarm and Kubeflow for their container orchestration, isolation, dependency management, and security benefits. By integrating these platforms, we can tackle our scaling challenges while focusing engineering effort on Livepeer-specific network innovations, leveraging proven, industry-standard solutions to build a scalable and reliable AI infrastructure.
Real Demand Generation Requires a Unique Differentiating Factor
While there is promising interest from startups leveraging Livepeer’s decentralized AI network for generative and batch AI jobs, these areas face intense competition from established providers. As highlighted in the Livepeer Cascade Vision, focusing on real-time AI video offers a clear opportunity for differentiation.
Livepeer’s native media streaming capabilities and decentralized infrastructure uniquely position it to lead in real-time AI video processing. By launching real-time AI services, we can attract a vibrant community of builders, foster open-source innovation, and unlock new funding opportunities—accelerating adoption across both real-time and batch AI workloads.
As highlighted in our Mission and Roadmap, significant resources will be allocated in early 2025 to support Livepeer INC in advancing real-time AI services. At the same time, we will continue to invest in batch job capabilities by generalizing pipelines and enhancing developer resources, including comprehensive documentation and tooling. This dual approach ensures that Livepeer can capitalize on the real-time AI opportunity while maintaining strong support for batch workloads. Core infrastructure improvements, such as better job distribution mechanisms, will benefit both areas.
This strategic focus positions Livepeer to become the leading decentralized platform for media-related AI compute within the next three years.
Finance
The AI SPE received 25K LPT in funding from the Livepeer Treasury for Phase 2, bringing the total funding for 2024 to 50K LPT. Over the second half of 2024, the SPE spent over $435K in USD terms, bringing the total spend for the year to nearly $690K. The financial breakdown demonstrates a strategic allocation across contributors, infrastructure, and programmatic initiatives to drive forward the objectives outlined in the Phase 2 pre-proposal.
Phase 2 Key Spending Highlights
- SPE Contributors: The largest share of the budget — nearly $300K — went to AI-focused contributors, reflecting the core development effort.
- Infrastructure & Software: Investments of nearly $58K enabled scaling and network improvements during H2 2024.
- Hackathon and Bounties: Programs fostering decentralized participation, such as bounties and hackathons, totaled nearly $60K for Phase 2.
We expect ~$25K in additional expenses in the second half of December. For a detailed breakdown of historical spending, please refer to the AI SPE public financials.
Holdings
As of December 16, 2024, the AI SPE holds the following assets on its balance sheet:
| AI SPE BALANCE SHEET (as of December 16, 2024) | # Tokens | $ Value/Token | $ Value |
|---|---|---|---|
| **Arbitrum** | | | |
| LPT | 690 | $16.50 | $11,382 |
| ETH | 3.9 | $4,000 | $15,684 |
| USDC | 13,777 | $1.00 | $13,777 |
| Staked LPT | 7,270 | $16.50 | $119,955 |
| **Ethereum** | | | |
| ETH | 0.49 | $4,000 | $1,960 |
| USDC | 1 | $1.00 | $1 |
| **Total Assets** | | | $162,760 |
The majority of our remaining assets are in the form of LPT staked toward the AI SPE Orchestrator to maintain its position in the top 100, support the network, and earn rewards on our idle assets. Throughout the year, holdings in LPT, ETH, and USDC were actively managed to support ongoing operations.
Conclusion
The second phase of the AI SPE has showcased the Livepeer network’s growing potential as a leading platform for AI-powered video compute. Through strategic infrastructure upgrades, robust developer tools, and dynamic ecosystem initiatives—including hackathons, the startup program, and other demand-focused efforts—we’ve built a solid foundation while gaining valuable insights to guide the next phase.
Looking ahead, we will focus on scaling real-time AI offerings, enhancing network scalability and performance, and empowering developers to seamlessly integrate both real-time and batch media-related compute jobs into our permissionless network. These advancements will unlock new revenue opportunities for orchestrators, diversify service offerings, and strengthen Livepeer’s competitiveness in the decentralized AI landscape. A detailed proposal with key milestones and our 2025 roadmap will be shared soon, charting the path toward establishing Livepeer as the premier decentralized platform for media-related AI compute.
With several core contributor groups, many open-source developers, and Livepeer INC united behind this common goal, we are better positioned than ever to succeed. The lessons learned and the groundwork laid in earlier phases have paved the way for broader adoption, innovative applications, and sustained growth in the years ahead.
We extend our heartfelt gratitude to the Livepeer community for your unwavering support, insightful feedback, and invaluable contributions. Your dedication drives our progress, and we look forward to building the future of decentralized AI—together.