AI Video SPE Stage 2: Pre-Proposal


After a successful first stage, the AI Video SPE is now requesting a second round of funding to advance the subnet’s development and meet the growing demand for AI processing jobs.

The AI Video SPE received their first round of funding on January 22nd, following a pre-proposal and a successful treasury proposal. This first stage aimed to build a stable V1 of the AI subnet that could leverage the existing protocol and network architecture to perform generative AI jobs and support a small number of consumer applications.

Today, users can run supported diffusion models within the three implemented pipelines: text-to-image, image-to-image, and image-to-video. These AI inference jobs are efficiently routed to Orchestrators connected to the AI Subnet. Leveraging the released documentation, early builders have already begun creating the first decentralized AI applications on the Livepeer network.
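To illustrate how a builder might submit one of these inference jobs, here is a minimal sketch of calling a self-hosted gateway's text-to-image pipeline. The endpoint path, port, parameter names, and model identifier below are assumptions for illustration only, not the confirmed gateway API; consult the released documentation for the actual interface.

```python
# Hypothetical sketch of submitting a text-to-image job to a self-hosted
# AI gateway. Endpoint path, port, and field names are assumptions.
import json
import urllib.request

GATEWAY_URL = "http://localhost:8935"  # assumed gateway address

def build_text_to_image_request(prompt: str, model_id: str) -> urllib.request.Request:
    """Construct the HTTP request for an assumed /text-to-image endpoint."""
    payload = json.dumps({"model_id": model_id, "prompt": prompt}).encode()
    return urllib.request.Request(
        f"{GATEWAY_URL}/text-to-image",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_text_to_image_request(
    prompt="a sunset over the ocean",
    model_id="stabilityai/sd-turbo",  # hypothetical model identifier
)
# urllib.request.urlopen(req) would send the job to the gateway, which
# routes it to an Orchestrator connected to the AI Subnet.
```

The same request shape would apply to the image-to-image and image-to-video pipelines, with the input image supplied alongside the prompt.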

As we advance into the second stage of our AI Video Subnet journey, our next goal is to enhance the demand-side user experience. This phase focuses on refining the AI subnet to bolster efficiency, enhance speed, improve output quality, and increase GPU support, all aimed at delivering superior service quality. Additionally, it includes developing an easy-to-operate gateway for self-hosted infrastructure and user-friendly language-specific SDKs for building against gateway services. Observing the performance and challenges within the subnet across different stakeholder groups—node operators, gateway operators, end-developers, etc.—will be essential for achieving these objectives.

Looking ahead to stage 3, the network expansion phase (not included in this proposal), users will be able to build and specify their own pipelines, bringing their models and workflows to the network for processing. Throughout the rest of the year, we will focus on researching how the network marketplace can evolve to keep these workflows and models warm and performant. Additionally, this stage involves exploring provenance verification methods to ensure that Orchestrators perform the expected jobs.


The mission of the SPE has remained unchanged: to validate the impact, potential, and competitiveness of bringing AI-based video compute jobs onto the Livepeer network.

The AI Video SPE will focus primarily on enhancing the technical and product capabilities of the network, including improving both the Orchestrator and developer experience, providing new pipelines, analytics, and easy-to-run gateways, and enhancing network performance. The SPE will work closely with the Livepeer Ecosystem team, which will focus on generating more demand on the network.


Core Contributors (paid for by SPE)
Lead Engineer & SPE Proposer - Rick (full time)
AI Engineer - John | Elite Encoder (part time)
AI Engineer - to recruit (full time)
Solutions Engineer - to recruit (part time)
Research & Development - Yondon (part time)

Livepeer Ecosystem Team (not paid for by SPE)
Vision & Roadmap - Doug
Project Management - Rich
BD & Growth - Shannon
Marketing - Jacopo
Finances - Sarah
Technical Product Manager - coming soon
Developer Relations - coming soon

Livepeer Cloud SPE (not paid for by SPE)
Gateway Providers - Mike | Xodeapp, Papa Bear | Solar Farm, Speedybird


In this second stage, our goal from the backend side is to significantly enhance our AI subnet core software and pipelines across multiple dimensions:

  1. Enhance AI Subnet Performance and Service Quality: Our objective is to substantially improve the efficiency, speed, and quality of outputs within the AI subnet.
  2. Clear Entry Point for dApp Developers: Provide a well-defined access point for dApp developers who are looking to start building on the network.
  3. Expand GPU Compatibility: Enable GPUs with lower VRAM and server GPUs to connect to our subnet, broadening our hardware support and increasing network capacity.
  4. Refine Pricing Structure and Enhance Transparency: We aim to optimize how we structure pricing and communicate it to users, ensuring both clarity and fairness.
  5. Provide Clear Metrics: Provide transparent AI Subnet job metrics, encompassing model usage and latency. This will empower Orchestrators to select supported models wisely and assist (decentralized) applications in choosing the most suitable models for their needs.
  6. Expand Model and Pipeline Offerings: We plan to introduce new models and develop additional pipelines, thereby enhancing our service offerings and accommodating a more comprehensive range of applications.
  7. Foster Open-Source Collaboration: Establish clear and accessible pathways for open-source contributors to effectively engage with and contribute to the AI subnet development.

For further information on these deliverables, please see our public Notion page here. Note that the details are open to change as the SPE progresses.

Simultaneously, the AI Video SPE will work with the Ecosystem Team to expand the number and size of design partners during the Startup Program.


  1. Subnet Launched Successfully - May 24th
    The fast-approaching milestone is focused on ensuring that orchestrators can be onboarded smoothly and that (decentralized) applications have a straightforward pathway to begin development on the AI subnet. This has largely been achieved in stage 1 of the AI Video SPE.
  2. Mainnet Launched Successfully - August 1st
    This phase will concentrate on enhancing the quality of service within the AI subnet. Efforts will include refining the onboarding process for dApp developers, expanding GPU compatibility, optimizing the selection algorithm, improving communication within the subnet, reducing container load times, and addressing edge cases for greater reliability.
  3. Hackathon Completed - September 15th
    This demand-focused milestone targets builders: ensuring the documentation is in order, gateways are easy to build on, and there are enough pipelines to integrate a wide variety of use cases into different applications.


By September 15th, the AI Video SPE will aim to have:

(1) 50 active nodes with high-performance scores on the AI subnet
The software + documentation supports Orchestrators in easily onboarding to the Livepeer AI Subnet and successfully earning rewards by performing AI inference jobs on their GPUs. Analytics will also be set up to ensure we can track the most efficient Orchestrators.

(2) <5 cents for 1s of generative video
The subnet network delivers a reliable, cost-effective, and performant experience to gateways requesting the subset of available AI jobs on the network. This will enable us to validate cost-effectiveness against other compute networks and centralised cloud providers.

(3) Capacity to support 80 1s video requests per minute
The subnet will support a high throughput of requests across different applications and gateways.
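As a rough sanity check, the two numeric targets above can be combined into a back-of-the-envelope capacity ceiling. The arithmetic below uses only the figures stated in targets (2) and (3); it is an illustration, not an official forecast.

```python
# Rough capacity arithmetic derived from the stage-2 targets above:
# <= $0.05 per 1s of generative video, and 80 one-second requests per minute.
PRICE_PER_SECOND_USD = 0.05   # upper bound from target (2)
REQUESTS_PER_MINUTE = 80      # throughput target (3), 1s of video each

# Seconds of video the subnet could produce per day at full capacity.
seconds_per_day = REQUESTS_PER_MINUTE * 60 * 24   # = 115,200

# Corresponding daily spend ceiling at the target price.
max_daily_spend = seconds_per_day * PRICE_PER_SECOND_USD

print(seconds_per_day)          # 115200
print(round(max_daily_spend))   # 5760
```

In other words, hitting both targets implies roughly 115,200 seconds of generative video per day at a spend ceiling of about $5,760/day, a useful yardstick when comparing against other compute networks and centralised cloud providers.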


Bounty Program (led by SPE, paid by SPE)
The SPE will run a bounty program to streamline contributions from Orchestrators and core developers. This will broaden development efforts and enable the team to move faster with specific tasks set to the community. Bounties will be focused on testing harnesses, payments and subnet observability, and supporting additional pipelines.

Startup Program (led by Ecosystem team, paid by Grants)
The Ecosystem team will work with 4-8 key design partners to continue optimising the subnet, receive regular feedback from design partners and add use cases and pipelines. Each startup will be given a small grant plus up to $20k of infrastructure credits to use on the AI subnet. The SPE will work closely with the Ecosystem Team to ensure the program yields maximum value to the subnet.

Hackathon (led by SPE, paid by SPE)
Following Mainnet launch, the SPE and Ecosystem team will run a hackathon to test how developers can build on the subnet and attract new builders to the Livepeer ecosystem. This will enable us to have a clear call to action for developers interested in building on the subnet.


The AI Video SPE has requested funds to accelerate the next stage of development. If the second-round funding is not fully used by September 15th, the SPE will either: (a) roll over the funds; or (b) return the funds to the treasury.

The total amount requested from the on-chain treasury is an additional 25,000 LPT.

Additionally, the AI Video SPE requests that the remaining amounts from the first funding round be rolled over into the second round. We expect 6,980 LPT and 12,495 USDC to roll over from the first round to the second. This will cover the following costs with an adequate buffer given price volatility:

  • Bounty Program - 5,000 LPT
  • Hackathon Program - 5,000 LPT
  • AI Video SPE focused wages & recruitment - 15,000 LPT
  • Marketing & Communications Budget - 2,000 LPT
  • Infrastructure Costs - 3,000 LPT

For transparency, the following will be paid for by Grants, and not via the SPE / onchain treasury: Startup Program (12,000 LPT), Event Sponsorship (3,000 LPT).

And the following will be paid for by Livepeer Inc, and not via the SPE / onchain treasury: ecosystem team member salaries (Doug, Rich, Shannon, Sarah, Jacopo, Technical Product Manager, Developer Relations). Similarly, the Livepeer Cloud SPE will use their own funding.


I appreciate the thoroughness of the AI Video SPE proposal and the clear focus on developing innovative AI-driven video solutions. However, I believe it would be beneficial to include more explicit plans for collaboration with existing companies and industries (e.g. OpenAI, Pika AI, Microsoft…).

While the design partner program is a strong starting point for refining the subnet and gathering feedback, establishing partnerships with established companies would further validate the use cases and enhance the adoption of the technology. Integrating the expertise and needs of current businesses could also provide valuable insights into real-world applications, aligning the network’s capabilities with market demands.

I suggest incorporating a section in future proposals that outlines strategies for engaging with existing enterprises and leveraging their expertise to create more robust and market-ready solutions.


Hi nhis, thanks for your patience in getting a response and for raising a great point. Collaborations and partnerships are key to expanding reach to new developers and ecosystems through integrations with third parties.

Having worked closely with Rick on the AI SPE, I know this will be a priority for the next stage of the AI SPE (~Q4 2024-Q1 2025). We will then look to expand after validating the cost effectiveness and reliability of the network. As with any marketplace, you have to be careful to not over-supply or over-demand too soon.


Hi @nhis, thanks for your patience. @honestly_rich, thanks for jumping in on this point. As @honestly_rich mentioned, these collaborations are key to expanding our reach and are reserved for the next phase (approximately Q4 2024 - Q1 2025). This aligns perfectly with our vision of validating the cost-effectiveness and reliability of the network before expanding further.