Hello Livepeer Community,
We are excited to present our proposal for AI Metrics and Visibility. Our plan is to enhance the visibility of AI capabilities within the Livepeer network by developing a service hosted by Livepeer.Cloud SPE. This service will test the network, analyze the results, and publish the insights to the Livepeer Explorer, providing valuable metrics to all stakeholders in the community.
Thank you for considering our proposal. We look forward to your feedback and questions.
@MikeZupper - xodeapp.xyz
@speedybird - speedybird.eth
@papa_bear - solarfarm.papa-bear.eth
Livepeer.Cloud SPE
Our Mission
The Livepeer Cloud Special Purpose Entity (SPE) has successfully completed its first project, achieving key milestones such as launching the Livepeer Cloud Gateway, integrating Owncast, simplifying documentation, and launching a Gateway dashboard for public use. These accomplishments have laid a solid foundation, and we now propose to build on this success to further our mission of increasing accessibility and adoption of the Livepeer network. Leveraging our experience with Livepeer.Cloud Gateway operations and our active participation in the AI subnet, we have identified gaps in metrics that need addressing.
Our proposal aims to increase visibility into the AI subnet by implementing an AI Job Tester and a Livepeer Explorer AI Performance Leaderboard. This project is designed in close collaboration with the AI SPE to ensure it complements their ongoing efforts. Once completed, these enhancements will provide insights comparable to the existing Transcoding features.
Approach / Strategy
Key Components
Below are the components we envision as necessary for the final solution. Based on the current Livepeer “Stream Tester” design, our strategy will focus on updating existing applications and components where feasible, and building new ones where existing applications cannot be updated to meet the needs of AI workloads.
AI Job Tester
A tester application will be developed to execute test jobs against the active set of AI Orchestrators. The existing stream tester application will be used to support requirements analysis; however, due to its current technical debt, a new application will be developed for AI job testing.
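To illustrate the general shape of such a tester, here is a minimal sketch of a test loop that sends a small request to each active Orchestrator's AI endpoint, times the round trip, and records success or failure. The endpoint path, payload, and orchestrator list are placeholders for this proposal, not the final design.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// TestResult captures the outcome of a single AI test job against one Orchestrator.
type TestResult struct {
	Orchestrator string        // Orchestrator service URI (placeholder)
	RoundTrip    time.Duration // total request/response time
	StatusCode   int           // HTTP status returned; 0 on transport error
	Err          error         // transport-level error, if any
}

// runTestJob sends a minimal test payload to a hypothetical AI pipeline endpoint
// and measures the round-trip time. The path and payload are illustrative only.
func runTestJob(client *http.Client, orch string) TestResult {
	payload := []byte(`{"model_id":"test-model","prompt":"smoke test"}`)
	start := time.Now()
	resp, err := client.Post(orch+"/ai/text-to-image", "application/json", bytes.NewReader(payload))
	rtt := time.Since(start)
	if err != nil {
		return TestResult{Orchestrator: orch, RoundTrip: rtt, Err: err}
	}
	defer resp.Body.Close()
	return TestResult{Orchestrator: orch, RoundTrip: rtt, StatusCode: resp.StatusCode}
}

func main() {
	// Placeholder list of active AI Orchestrators; in practice this would be
	// discovered from the network rather than hard-coded.
	orchestrators := []string{
		"https://orchestrator-a.example.com:8935",
		"https://orchestrator-b.example.com:8935",
	}
	client := &http.Client{Timeout: 60 * time.Second}
	for _, o := range orchestrators {
		r := runTestJob(client, o)
		fmt.Printf("%s rtt=%s status=%d err=%v\n", r.Orchestrator, r.RoundTrip, r.StatusCode, r.Err)
	}
}
```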
Vercel Leaderboard Data
The AI Job Tester’s output will be captured and stored in the same fashion as the output of the Transcoding “Stream Tester” application. This will allow the AI test data to be exposed via an API for use by the Livepeer Explorer. The current leaderboard-serverless Vercel application will be enhanced to support both AI and Transcoding data.
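For illustration only, the sketch below shows one way aggregated AI results could be served for the Explorer. The route, field names, and in-memory data here are assumptions made for this example, not the actual leaderboard-serverless API or storage layout.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// AIStat is an illustrative aggregated record for one Orchestrator and pipeline.
type AIStat struct {
	Orchestrator string  `json:"orchestrator"`   // Orchestrator ETH address
	Pipeline     string  `json:"pipeline"`       // e.g. text-to-image
	Model        string  `json:"model"`          // model identifier
	SuccessRate  float64 `json:"success_rate"`   // fraction of passing test jobs
	AvgRoundTrip float64 `json:"avg_round_trip"` // seconds
	AvgInference float64 `json:"avg_inference"`  // seconds
}

func main() {
	// Placeholder data; in practice this would be read from the leaderboard datastore.
	stats := []AIStat{
		{
			Orchestrator: "0x0000000000000000000000000000000000000000",
			Pipeline:     "text-to-image",
			Model:        "example-model",
			SuccessRate:  0.98,
			AvgRoundTrip: 4.2,
			AvgInference: 3.1,
		},
	}
	http.HandleFunc("/api/ai/aggregated_stats", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(stats)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```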
Explorer AI Leaderboard
Livepeer’s Explorer will be updated to add an AI Performance Leaderboard. The AI Performance Leaderboard will read data from the Vercel Leaderboard API and present it in a similar manner to the current Transcoding Performance Leaderboard.
AI Tester Architecture
Expected Impact
Building these new capabilities will have an immediate impact on Livepeer: the Livepeer Explorer will be able to display the top-performing Orchestrators running on the AI subnet.
Orchestrators
- Orchestrators will have access to their performance data via the “Vercel Leaderboard Data.” This allows them to create Grafana dashboards to show their performance and availability.
- Orchestrators can use the Livepeer Explorer AI Performance Leaderboard to see how they rank among all active orchestrators.
- Orchestrators can use the AI Job Tester data to determine which models to load warm and which pipelines to support.
Gateway Operators
- Similar to the transcoding selection process, Gateway Node operators can use the AI Job Tester performance data to exclude orchestrators with consecutive failed tests (a simple example of this rule is sketched below).
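As an illustration of that selection rule, this sketch filters out Orchestrators whose most recent test jobs failed a configurable number of times in a row. The data structures and threshold are placeholders, not a prescribed gateway policy.

```go
package main

import "fmt"

// OrchestratorHistory pairs an Orchestrator address with its most recent
// AI test job outcomes, ordered oldest to newest (true = passed).
type OrchestratorHistory struct {
	Address string
	Results []bool
}

// excludeFailing returns only the Orchestrators whose trailing run of
// consecutive failures is shorter than maxConsecutiveFailures.
func excludeFailing(histories []OrchestratorHistory, maxConsecutiveFailures int) []OrchestratorHistory {
	var eligible []OrchestratorHistory
	for _, h := range histories {
		failures := 0
		for i := len(h.Results) - 1; i >= 0 && !h.Results[i]; i-- {
			failures++
		}
		if failures < maxConsecutiveFailures {
			eligible = append(eligible, h)
		}
	}
	return eligible
}

func main() {
	// Placeholder histories: the second Orchestrator has three failures in a row
	// and would be excluded with a threshold of three.
	histories := []OrchestratorHistory{
		{Address: "0xOrchestratorA", Results: []bool{true, true, true}},
		{Address: "0xOrchestratorB", Results: []bool{true, false, false, false}},
	}
	for _, h := range excludeFailing(histories, 3) {
		fmt.Println("eligible:", h.Address)
	}
}
```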
Delegators
- The Livepeer Explorer will provide visibility into AI Node Operator performance, helping delegators make informed decisions by identifying those who significantly contribute to the AI subnet’s performance. This should result in stake moving to high-performing nodes.
- The AI Job Tester is key to establishing node operators’ reputations and proving their credibility within the AI subnet. This will help attract stake and entice real demand on the network.
App Builders
- Application builders will have better visibility into which models are supported on the network, their usage, and performance metrics.
Key Metrics
Here are some of the potential metrics that will be collected and stored as part of this project; an illustrative record layout follows the list:
- Timestamp
- Region
- Model
- Pipeline
- Upload Time
- Download Time
- Round Trip Time (RTT)
- Inference Time
- Error Codes/Conditions
- Orchestrator ETH Address
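For illustration only, one way these metrics could be grouped into a single stored record is sketched below. The field names, types, and units are assumptions made for this example, not a finalized schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// AITestRecord is an illustrative shape for one stored AI test job result.
type AITestRecord struct {
	Timestamp     time.Time `json:"timestamp"`      // when the test job ran
	Region        string    `json:"region"`         // tester region, e.g. a city or airport code
	Model         string    `json:"model"`          // model identifier used for the job
	Pipeline      string    `json:"pipeline"`       // e.g. text-to-image, image-to-video
	UploadTime    float64   `json:"upload_time"`    // seconds to send the request payload
	DownloadTime  float64   `json:"download_time"`  // seconds to receive the result
	RoundTripTime float64   `json:"round_trip"`     // total seconds, request to response
	InferenceTime float64   `json:"inference_time"` // seconds the Orchestrator spent on inference
	ErrorCode     string    `json:"error_code"`     // empty on success
	Orchestrator  string    `json:"orchestrator"`   // Orchestrator ETH address
}

func main() {
	// Example record with placeholder values.
	rec := AITestRecord{
		Timestamp:     time.Now().UTC(),
		Region:        "FRA",
		Model:         "example-model",
		Pipeline:      "text-to-image",
		UploadTime:    0.4,
		DownloadTime:  1.2,
		RoundTripTime: 5.1,
		InferenceTime: 3.5,
		Orchestrator:  "0x0000000000000000000000000000000000000000",
	}
	out, _ := json.MarshalIndent(rec, "", "  ")
	fmt.Println(string(out))
}
```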
Milestones
The work will be sequenced to deliver the AI Job Tester and Leaderboard data before making changes to the Explorer, though all capabilities will be delivered as a single milestone at the project’s end. The entire solution is expected to take up to three months from the date of approved funding, depending on the team’s development velocity and potential scope changes. The timeline could also be shortened by adding development resources to accelerate the effort.
Transparency
To foster transparency, the Livepeer.Cloud SPE team will deliver all source code and documentation to their respective GitHub repositories. The contents will be publicly available and licensed under the permissive MIT license.
Additionally, the core team will all be compensated through this proposal. The team members are:
Mike Zupper - Architect and Technical Lead
papabear - Community Lead, Testing, and Documentation
Speedy Bird - Technical Implementation and Documentation
All three members have previously made open and consistent contributions to this community. They are experts in enterprise software systems, from web applications and video processing to advanced analytics and AI applications, and are frequently present at community events and forums.
Livepeer Inc. and the AI SPE will NOT be providing any funding to Livepeer.Cloud SPE for this effort. Our work is funded solely by this Livepeer Treasury proposal.
Governance
The SPE team will distribute the funds among themselves after payment. The community’s input before and during the project will be collected via:
- A Livepeer forum post seeking feedback on the proposal prior to submission.
- Attendance at the following events to discuss the proposal, collect feedback, and reshape (if necessary) the proposal prior to funding: Weekly Water Cooler Chat & Treasury Chat
- After approval, the team will continue to attend the following events to present progress: Weekly Water Cooler Chat & Treasury Chat
If the team finds, based on community feedback, that additional milestones are desirable, the SPE will produce a revised set of milestones and/or another proposal to fund such initiatives. In fact, the SPE team believes this proposal naturally lends itself to building out additional capabilities.
Funding
The funding requests will cover the following expenses:
- SPE Team Member Activities for Requirements, Development, Testing, and Documentation
- Community and AI SPE Collaboration and Engagement
- SPE Hardware, DevOps, and Network Fees
The cost breakdown for these efforts is as follows:
| Component | Cost |
|---|---|
| Livepeer Explorer Modification | $37,000 |
| Leaderboard Data Collection and API Modifications | $46,000 |
| AI Job Tester & Test Routines | $49,000 |
| Infrastructure, DevOps, Network Fees | $21,000 |
| Total | $153,000 USD |
Note: funding will be converted to LPT at the time of proposal submission.