Pre-proposal: Livepeer FrameWorks SPE Pilot phase

Livepeer FrameWorks SPE: Pilot phase

Author(s): The MistServer Team
Contact: developers@mistserver.org


Abstract

The FrameWorks SPE proposes to strengthen the Livepeer protocol by bridging the gap between transcoding infrastructure and real-world media applications.

This includes:

  • Providing dedicated engineering resources for stability work, feature enhancements, automated testing, and clear & complete documentation.
  • Providing onboarding and infrastructure support for teams building on Livepeer.
  • Operating an independent, Livepeer-powered E2E media pipeline that validates new transcoding features in real-world deployments.

By focusing on low operating costs, easy integration and strategic partnerships, FrameWorks aims to provide a viable, scalable alternative to traditional full-service video providers.

This first phase serves as a pilot to build trust and credibility within the Livepeer ecosystem while keeping the initial funding request modest.


Mission

The media industry is highly complex: building reliable, scalable streaming applications requires intricate deployments and deep industry expertise.

Livepeer Inc has laid a robust foundation for decentralized video infrastructure. However, there remains a gap between what the network offers and what video applications need.

This proposal builds on their achievements by addressing key areas where we can contribute our expertise.

The MistServer team proposes a Special Purpose Entity to take ownership of maintaining and enhancing the transcoding pipeline, ensuring that node operators have the support, documentation, and features they need.

FrameWorks will also bridge the DevOps gap by offering an E2E media pipeline that businesses can directly integrate or self-host, rather than relying on Livepeer Studio for infrastructure, support or custom features.

Instead of replicating Studio’s high-complexity, full-service model, FrameWorks aims to build a scalable, low-overhead alternative designed for easy integration with external business logic.

Result: A more stable, performant, and accessible transcoding pipeline for node operators and Livepeer-powered media applications.


Terminology

Some of these terms do not appear in this pre-proposal, but they can be helpful when browsing the feature board.

  • E2E media pipeline: The core infrastructure for a full media pipeline, including (but not limited to) ingesting, processing, storing, and delivering video.
  • Transcoding: The process of decoding a video stream, transforming it (resolution, bitrate, etc.), and encoding it for delivery or storage.
  • Segmented Workflow: The process of breaking video streams into discrete segments for transcoding; a full segment is required before the network can transcode it (see the sketch after this list).
  • Streaming Workflow: A continuous processing method where the video stream is sent in small chunks and immediately transcoded.
  • Intel QSV: Intel’s “Quick Sync Video” hardware for video encoding/decoding, allowing efficient transcoding on Intel CPUs and GPUs.
  • AV1: A royalty-free, high-efficiency video coding format which is gaining in popularity.
  • Latency: In the context of Livepeer, the delay between stream ingest at a Gateway node and receiving the transcoded frames back.
  • Netint: Specialized hardware device for video transcoding.
  • LPMS: Livepeer Media Server, the core transcoding library used within the Livepeer stack.
  • FTE: Full-Time Equivalent, the amount of working hours dedicated to a project. 1 FTE equals one full-time employee, but could also be two people each contributing 0.5 FTE worth of effort.
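
To make the difference between the segmented and streaming workflows concrete, here is a minimal, purely illustrative Go sketch. The transcodeSegment function is a hypothetical stand-in for dispatching work to the network; it is not an actual go-livepeer or LPMS API.

    package main

    import (
        "fmt"
        "time"
    )

    // transcodeSegment is a hypothetical stand-in for dispatching one complete
    // segment to the network and waiting for the transcoded renditions; it is
    // not a real go-livepeer call.
    func transcodeSegment(segment []byte) ([]byte, error) {
        time.Sleep(50 * time.Millisecond) // pretend this is the network round-trip
        return segment, nil
    }

    func main() {
        // Segmented workflow: the ingest side buffers a *full* segment
        // (typically a few seconds of video) before handing it off, so
        // latency is at least one segment duration plus the round-trip.
        segments := [][]byte{
            []byte("segment-0"), // would be ~2s of encoded video in a real pipeline
            []byte("segment-1"),
            []byte("segment-2"),
        }
        for i, seg := range segments {
            out, err := transcodeSegment(seg)
            if err != nil {
                fmt.Printf("segment %d failed: %v\n", i, err)
                continue
            }
            fmt.Printf("delivered transcoded segment %d (%d bytes)\n", i, len(out))
        }

        // A streaming workflow would instead forward much smaller chunks
        // (individual frames or fractions of a second) as soon as they
        // arrive, removing the "wait for a full segment" step entirely.
    }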

Structure

The MistServer Team leads the SPE, with Marco (stronk-tech.eth) acting as the primary decision-maker and point of contact.
The SPE is open to expanding the core team with additional applicants from outside the MistServer team.

Roles

  • Lead: Marco, long-time Orchestrator and MistServer maintainer.
  • Core Team: MistServer maintainers with expertise in transcoding and live streaming.
  • Open-Source Contributors: Developers in the Livepeer community who take on bounties.
  • Advisors:
    • Rick (transcode.eth): AI SPE Lead.
    • Brad (ad-astra-video.eth): AI SPE Engineer, also familiar with the transcoding codebase.
    • Josh: Livepeer Inc engineer with extensive experience with relevant code repositories (LPMS and go-livepeer).
    • Rich: Livepeer Ecosystem Growth Team.

Responsibilities

  • Core Team: Scoping out tasks, assigning bounties, conducting code reviews, and core development of the transcoding pipeline.
  • DDVTech: The business entity of the MistServer team, responsible for hiring, team coordination, and administrative processes.
  • Advisors: Provide strategic & operational guidance and brainstorming about potential solutions.

About the MistServer Team

The MistServer team is composed of experts in live streaming and media server technology. Our journey began in 2009 when we set out to build a better media server following the failure of a media-related project due to unreliable software. Since then, MistServer has become our core business, and we’ve dedicated our professional lives to advancing live streaming technology.

We bring over half a century of combined hands-on experience in live streaming and media server development, including experience managing streaming infrastructure (like Picarto).

This hands-on expertise positions us uniquely to lead the FrameWorks SPE and contribute meaningfully to the ecosystem.


Scope

The core responsibilities of this SPE include:

- Making the Livepeer transcoding pipeline more robust and competitive.

Adding codecs and device support, reducing latency, and enhancing transcoding jobs with more parameters.

- Supporting node operators.

Identifying and addressing common pain points, like replacing the static session limit with smarter GPU load balancing (a rough sketch of the idea follows below) and improving Gateway documentation.
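
As an illustration of what smarter GPU load balancing could look like, here is a rough Go sketch that admits new transcode sessions based on measured GPU utilization instead of a static -maxSessions count. This is not how go-livepeer currently works; the 80% threshold and the nvidia-smi polling are our own assumptions (a real implementation would more likely watch NVENC/NVDEC utilization via NVML).

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    // gpuUtilization polls the current GPU load via nvidia-smi instead of
    // relying on a static, pre-benchmarked session count.
    func gpuUtilization() (int, error) {
        out, err := exec.Command("nvidia-smi",
            "--query-gpu=utilization.gpu",
            "--format=csv,noheader,nounits").Output()
        if err != nil {
            return 0, err
        }
        // With multiple GPUs there is one value per line; take the least loaded.
        lowest := 101
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            u, err := strconv.Atoi(strings.TrimSpace(line))
            if err != nil {
                return 0, err
            }
            if u < lowest {
                lowest = u
            }
        }
        return lowest, nil
    }

    // canAcceptSession admits a new transcode session only while the least-loaded
    // GPU stays under an (assumed) utilization threshold, rather than counting
    // sessions against a fixed -maxSessions value.
    func canAcceptSession() bool {
        const maxUtilization = 80 // illustrative threshold, not a tuned value
        u, err := gpuUtilization()
        if err != nil {
            return false // fail closed if we cannot measure load
        }
        return u < maxUtilization
    }

    func main() {
        fmt.Println("accept new session:", canAcceptSession())
    }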

- Ongoing core maintenance

Core maintenance also sees ownership from existing core contributors. The FrameWorks SPE intends to jump in wherever required, assisting with tasks like keeping the build pipelines up to date, rebasing the LPMS FFmpeg patches, and fixing bugs or crashes.

- Research & integrations

The media landscape evolves slowly but steadily. WebRTC and SRT are now common methods to transport media, but Gateway nodes do not support them.
These kinds of features could also be supported by side-running applications, as WebRTC already is (in limited form) through go-livepeer’s MediaMTX integration.
This topic covers exploring enhancements to the Gateway with additional protocols, as well as improving integrations with external applications.

- Expanding testing & QA practices

Implementing automated testing to ensure network stability and prevent regressions.
This includes writing feature-specific tests for each change we make, while also expanding coverage with additional regression or benchmark tests.
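
To give a flavour of the feature-specific tests we have in mind, here is a minimal table-driven Go test. The transcode helper, profile struct, and test fixture path are hypothetical placeholders rather than the actual LPMS bindings; the point is the shape of the regression check.

    package transcode_test

    import "testing"

    // profile and transcode are illustrative placeholders standing in for the
    // real LPMS bindings; only the table-driven structure matters here.
    type profile struct {
        Name          string
        Width, Height int
        Bitrate       int
    }

    func transcode(input string, p profile) (width, height int, err error) {
        // A real test would run the segment through the pipeline and probe
        // the output; here we simply echo the requested dimensions.
        return p.Width, p.Height, nil
    }

    func TestRenditionDimensions(t *testing.T) {
        cases := []profile{
            {Name: "720p", Width: 1280, Height: 720, Bitrate: 2_000_000},
            {Name: "480p", Width: 854, Height: 480, Bitrate: 1_000_000},
            {Name: "360p", Width: 640, Height: 360, Bitrate: 600_000},
        }
        for _, p := range cases {
            t.Run(p.Name, func(t *testing.T) {
                w, h, err := transcode("testdata/bbb_2s.ts", p)
                if err != nil {
                    t.Fatalf("transcode failed: %v", err)
                }
                if w != p.Width || h != p.Height {
                    t.Errorf("got %dx%d, want %dx%d", w, h, p.Width, p.Height)
                }
            })
        }
    }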

- Bridging the DevOps gap for media applications

Providing support to entities looking to build on the network as well as setting up an E2E media pipeline, making it easier for applications to integrate Livepeer-powered streaming without reliance on Livepeer Studio.

All development work will, of course, be published as open source under the Unlicense.


Phase 1: Pilot

Goals

  • Gather pain points from Gateway and Orchestrator operators.
  • Prioritize roadmap items to address critical gaps in the transcoding pipeline.
  • Set up a transcoding bounty program.
  • Scope out the E2E media pipeline.
  • Cross off the first few items from the roadmap.

Timeline

March 2025:

  • Set up operations and governance structure.
  • Identify and re-prioritize key tasks for this quarter.
  • Launch a bounty board for community contributions.
  • Initial discussions on FrameWorks infrastructure.

April – July 2025:

  • Engineering work on Q2 key tasks, explained in the roadmap below.
  • Next-phase planning: identifying FTE needs and defining the FrameWorks infrastructure roadmap.

Roadmap

We have published a feature board where anyone can request items, vote on priorities, and comment on issues.
The roadmap will be prioritized based on continuous conversations with node operators, as well as the needs of inbound opportunities for our E2E media pipeline.

Initial Q2 goals are:

  • Improve documentation for Gateway operators.
  • Pull Netint integration over the finish line.
  • Pull AV1 codec support over the finish line.
  • Add Intel QSV support.
  • Smarter session limits & load balancing for transcoders.

This roadmap indicates our short-term goals; not all of these features may be completed in Q2.

Future Phases

If the pilot phase succeeds, future requests may include:

  • Expanding the core SPE team to increase engineering capacity.
  • Addressing long-term goals and more complex tasks, including transitioning to a streaming workflow or expanding the Transcoding job type with more parameters (for example: allowing non-realtime, high quality transcodes for VoD processing).
  • Further development & deployment of the FrameWorks E2E media pipeline.

Budget Breakdown

Funding Period: March – June 2025
Total Ask: 15,000 LPT

Breakdown:

  • March: SPE Kickoff & Onboarding (free): Structuring the SPE, setting up communication channels, onboarding contributors, gathering feedback from Gateway & Orchestrator operators, and initial design work on the E2E media pipeline.
  • April – June: Core Development (12,000 LPT): Managing bounties, active community participation, feature development, testing, and infrastructure work.
  • Community Incentives (3,000 LPT): Open-source contributor incentives to drive external contributions.

Rate Justification

For 4,000 LPT per month, the MistServer team operates the SPE while providing 1 FTE of dedicated engineering work. At $5 per LPT, this equates to $20,000 a month (roughly $115/hr over a ~173-hour FTE month), which is a reduced bulk rate for long-term commitments. This ensures that the developers assigned to the SPE remain fully committed, without being pulled into other commercial projects.

If future proposals require additional engineers, we can use the DDVTech entity to hire freelancers or full-timers. This approach allows us to:

  • Offer security and guarantees to SPE hires through an established legal entity, including access to our office and the team’s expertise for onboarding.
  • Provide additional engineering capacity at a lower cost, ensuring efficient use of treasury funds.

We are open to adjusting the LPT request or FTE commitment based on community feedback, but believe the amounts are fair given the technical expertise required and in comparison with common rates in the media industry.


Success Metrics:

Defining success metrics for a broad core-development SPE like this is difficult. We encourage feedback on what we can do to measure impact and ensure accountability.

  • Core Contributions:

    • Completed bounties.
    • PRs submitted and merged into the Livepeer codebase.
    • Increase in test coverage.
  • Feedback & Adoption:

    • Positive feedback from Gateway & Orchestrator operators.
    • Growth in the number of Gateway operators on the network, onboarded through the FrameWorks SPE.
    • Transcoded minutes on the E2E media pipeline.

Transparency and Accountability

Engagement with protocol participants is an important part of this SPE. We will work closely with Gateway & Orchestrator operators to collect issues or requests to put on the feature board. We gather community input through multiple channels:

  • Discord threads & forum discussions.
  • A Canny task board to track development progress, request items, or discuss tasks.
  • Titan’s weekly water cooler sessions.

Leftover LPT will roll over into future proposals or return to the treasury if the SPE disbands for any reason.

Reporting:

  • Quarterly progress reports will include:
    • Amount of LPT spent, staked, or held.
    • Progress on any of the success metrics.

4 Likes

Looks fantastic, how are you gonna spend those LPT? Are they gonna be staked or dumped immediately? Any plans for a gateway UI similar to MistServer so devs could set up all needed parameters via UI instead of diving into CLI and other blockers?

3 Likes

Hi Marco - I am excited to see this proposal and I am wholly aligned with its goals.

As you know from your work in livestreaming, stability is absolutely crucial. With this in mind, I would like to see funding and personnel specifically allocated for building out a highly comprehensive automated testing suite, including regression testing with every new feature launch. This will not only ensure stability for the new features on this ambitious roadmap but also flush out issues in the existing transcoding pipeline.

I believe this should be included in the proposal itself so that the community gains visibility into QA practices. This visibility will be important for building mainstream confidence in the quality of the network.

With that addition, this proposal would have my support.

4 Likes

Looks fantastic, how are you gonna spend those LPT? Are they gonna be staked or dumped immediately?

MistServer certainly intends to build ownership in the protocol by staking the LPT that are allocated to the team through this SPE.
On the other hand, allocating an FTE to this SPE comes with a real ‘opportunity cost’ to us (we have to pull resources from Mist’s roadmap of expanding into the ‘traditional broadcasting’ market, or hire & train a new core MistServer engineer).
So it will be a combination of staking the tokens, periodically selling smaller chunks, and building a small fiat buffer.

If required, we can add a section to the proposal regarding fund/wallet management. Even though the core team only consists of MistServer at this moment, arguably we should have a structured approach from the start with the goal of expanding the core SPE team.

Any plans for a gateway UI similar to MistServer so devs could set up all needed parameters via UI instead of diving into CLI and other blockers?

tl;dr: not in this phase, however:

There are two approaches which we could consider:

Embedded web UI in go-livepeer (like MistServer)

There is certainly merit in having this. However, I want to point out that running Gateways specifically doesn’t involve a lot of initial setup or config; they mostly get their parameters from the entity making the job request and pass them forward.

Now, an Orchestrator/Worker sees a lot more CLI usage and config changes. In any case, it would certainly be nice to change a node’s config or perform other on-chain actions directly from an interface. There are security concerns that go along with a (remote) web interface, so careful consideration (or an authentication mechanism) is required.

Orchestration platform

While some protocol customers are only looking for transcoding, most are looking for a more full-featured solution.

The biggest complexities do not really come from managing the Gateway nodes, but from the sheer amount of extra work that goes along with a pipeline requiring transcoding. Without video-specific expertise it is challenging to get to an actual product.

This is a big strength of Livepeer Studio: it has spent years abstracting away most complexities and providing a nice SDK.
Their software stack is open-source, but quite complex. Studio needs to be a generic platform with high scalability.

There is GWID, which has made some good steps toward creating a platform to deploy infrastructure, but it still leaves a lot to be desired. At the moment it only handles Gateway deployments, and only on cloud providers.

We have March planned to gather feedback from Gateway operators to balance out our roadmap (which is currently heavily focused on O/T operations). We will definitely ask them about this topic and may use part of our ‘community incentives’ to move GWID forward, or pursue a broader collaboration after the pilot phase.

I also just want to note that people who want to build directly on the network benefit a lot from example open-source Livepeer-powered stacks, like the one StreamPlace (formerly known as Aquareum) will provide.

3 Likes

I’m excited about this SPE, as more core video development within the ecosystem and more development in support of Gateway operators and Orchestrator operators who support video-specific jobs can only be a plus. I’ve shared some feedback with @stronk and team, and I want to surface a couple of things for consideration, so that this SPE can best orient its roadmap and deliver impact for the community here.

1. The technical and feature development work to support things like AV1, QSV, and Netint will not bring demand to the network unless this SPE (or some other group) is actually taking them to market, working with users who want them, and “selling” these capabilities.

Livepeer Studio has not seen any demand for these capabilities amongst its existing or target user base. They’re focused on realtime video AI at the moment (where AI fees have already begun to surpass transcoding fees network-wide), and are therefore unlikely to go out and try to bring them to market. As such, this SPE’s technical work would be more effective if it were also building these capabilities on behalf of users it was serving. It should therefore also consider supporting users end to end. While all the software may not be put together perfectly already to do this out of the gate, this team does have significant experience supporting end-to-end streaming and transcoding users in custom workflows, and is up for the challenge - especially on new and experimental use cases.

If the SPE doesn’t have interest in doing that, then perhaps it’s not worth funding the transcoding feature improvements, and instead focusing more on the other aspects mentioned, like gateway and transcoding node operations on behalf of the other network operators as users.

2. Newer or experimental features in the transcoding pipelines will likely need to exist on client forks for a while, until battle-hardened by real production users, before being prioritized to be merged into the stable codebase.

As highlighted in the discussion above, production-grade reliability is paramount for many of the end-to-end streaming users. The existing transcoding pipeline on the network is the most mature and battle-hardened element of the Livepeer stack, and any regressions to it due to experimental features that are untested in production workflows put a lot of the usage at risk. However, with much of the core team focused on realtime AI, spending the time to test, review, verify, and merge the new transcoding features would be a big distraction.

As such, the best approach would be to start with experimental forked clients that demonstrate the capabilities. O’s can run these nodes if they want (much like they ran the forked AI nodes initially), in support of any real demand that these capabilities are generating. When actually battle-hardened and provably stable, they can be merged into the core codebase. I think this is a viable path forward that should work for everyone, but it does once again shine a light on point 1: without real demand, there won’t be high motivation for testing/merging/implementing these on the main branch.


A side note about the broader core development in the ecosystem: I realize there’s a big weakness/gap here, in that technical improvements should be able to be reviewed and merged from all parties, regardless of the priorities of a core team within Livepeer Inc. The ecosystem team is working on addressing this through our governance, treasury, and SPE creation - to ensure that in the medium term we have core dev teams that serve all these groups in the ecosystem, rather than just the current realtime AI priorities. It is important that we close this gap!


Anyway, as mentioned up top, even though I highlight these two points of input to help shape the direction of this SPE, I am still very optimistic about finding the right framework for these types of contribution, from this team of longtime contributors. I have no doubt that many of the contributions listed here will be delivered soundly, and form a solid basis for the tech and operations of our network. I just want to make sure we’re also considering how and when they’ll be used and how and when they can be deployed, before we start off on a lot of development for development’s sake.

5 Likes

As some of you might have noticed, the proposal has been delayed a bit. Taking into account the received feedback, we’ve decided to re-focus the SPE to better align with the needs of the ecosystem.

This new approach takes lessons from Studio but doesn’t try to replicate it; Studio already has an awesome video offering with a mature SDK to go along with it. FrameWorks will focus on Livepeer’s core advantage: cost-effectiveness. We can supplement the ecosystem with an E2E service that is very lean and help entities replicate this setup, minimizing operating costs even further.

We’ve updated the proposal to reflect this change in scope. Let us know your thoughts!


Re: the technical roadmap & initial priorities

We totally agree that proper testing and an experimental fork make sense for some of these features before merging them into mainline. Running our own Gateway nodes will also give us better insights into Orchestrator selection and swapping, which we’re keen to improve.

Doug also raised some good points about ensuring that our roadmap supports network demand and operator usability. The initial development roadmap remains the same; here is a brief explanation of why these initial items were chosen:

  • Gateway Documentation: Many operators don’t realize how flexible the HTTP endpoint is for transcode jobs (a rough sketch follows after this list). Also, we want to provide a short crash course on the media landscape, helping new Gateway operators understand core concepts like codecs, containers, frame types, and transport protocols.

  • AV1 codec support: Even though this might not see a lot of demand out of the gate and the network is currently not that optimized for VoD workflows, AV1 is just awesome for that use case and nice to start supporting for future-proofing. Among the major video codecs (H264, H265, VP9, and AV1), it’s the only one currently missing.
    AV1 hardware decoding has been standard on devices from 2023 onward, and browsers only started adding broad support for it as late as 2024. These were both major barriers to adoption, so we fully expect this codec to grow in demand. Since Brad already completed most of the work, this is low-hanging fruit to round out network capabilities.

  • Netint support: Probably the item with the least impact, as not a lot of people run Netint cards… However, adding support is low effort now that LPMS runs on a more modern FFmpeg base. The feature was already completed & merged in 2022, except for updates to the build pipelines. We’ll be ordering one of the newer-generation Netint cards to do some testing and see if it’s compatible. The first generations were a real PITA to work with, but these cards are very cost-effective for providing transcode capacity at scale.

  • QSV: Enables ‘proper’ CPU transcoding! Very nice for transcoding pools or any Orchestrator looking for cheap remote machines. It’s also great at AV1 transcodes; QSV has been really impressive in capacity & quality on newer devices (2022+?).

  • Smarter session limits: A common pain point for O/Ts. The static session limit and benchmarking tool aren’t well suited to real workloads, sometimes overloading the decoder/encoder chips.
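
To illustrate the flexibility of the HTTP endpoint mentioned in the Gateway Documentation item above, here is a rough Go sketch of pushing a single MPEG-TS segment into a Gateway with per-stream transcode profiles. The URL layout, default port, and the Livepeer-Transcode-Configuration header reflect our understanding of go-livepeer’s HTTP push ingest; treat the exact names, fields, and the file path as illustrative assumptions and check the documentation before relying on them.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        // Read one already-encoded MPEG-TS segment from disk (hypothetical path).
        segment, err := os.ReadFile("segments/demo_0.ts")
        if err != nil {
            panic(err)
        }

        // Per-stream transcode profiles, JSON-encoded. Field names follow the
        // go-livepeer transcoding profile format as we understand it.
        profiles := `[{"name":"720p","width":1280,"height":720,"bitrate":2000000,"fps":30}]`

        // Push segment 0 of stream "demo" to a local Gateway node
        // (assumed default HTTP ingest port 8935).
        req, err := http.NewRequest(http.MethodPost,
            "http://127.0.0.1:8935/live/demo/0.ts",
            bytes.NewReader(segment))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "video/mp2t")
        req.Header.Set("Livepeer-Transcode-Configuration", profiles)
        // Asking for multipart lets the Gateway return the renditions in the
        // response body instead of writing them elsewhere.
        req.Header.Set("Accept", "multipart/mixed")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("gateway responded:", resp.Status)
    }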

Note that, with the added scope for the E2E media pipeline, development on some of these items might roll over into the follow-up proposal.

4 Likes

This is awesome to see, particularly around Gateway documentation and educational material. Exciting to see the scope expand, and I can see this fast becoming a crucial SPE in the ecosystem.

Further to the content, are there plans to provide any ongoing dev support for developers / builders operating gateways?

I realise this may be time intensive, but having actual humans to turn to when creating new gateways could provide outsized value to new devs entering the ecosystem.

2 Likes

Hi Rich, thanks for taking the time to share your thoughts!

Yes, we definitely want to be a point of contact for anyone looking to build directly with Gateway nodes. Supporting developers and tinkerers for free is something we already do with MistServer and we’d extend that same approach to this SPE.
This is indeed time-consuming, so this might be one of the areas where the SPE would want to hire someone in the future. Hopefully anyone we help get set up will also stick around in the community to share their knowledge, much like Xeenon, who was very active when working with Studio and running Catalyst nodes.

We also think there’s room for a generous free tier for those who don’t want to self-host: Picarto for example runs its entire infrastructure for well under $10K/month, so we don’t need heavy subsidization to provide a solid but limited free CDN for people to try out.

For enterprise use cases (SLAs, strict capacity requirements, guaranteed turnaround times, building custom integrations, proactive monitoring/alerting, managed deployments, …), we’ll have DDVTech to offer proper support contracts.

1 Like

I believe this is a crucial and needed SPE due to the deprioritization of Livepeer Studio.

I do think this SPE should be in talks with the Studio team to take over its current customers as early as possible to understand their needs - it will be a win-win for the ecosystem: Studio can fully focus on Daydream, and apps that interact with the Livepeer network via Studio will get S-tier support from a team of live-streaming experts (the Mist team is god tier when it comes to live-streaming tech).

From my side, I am happy to be an early user of this initiative and provide as much feedback as possible. I’ve built many apps on Studio over the past 4 years and understand the pain points that an app like Studio needs to solve perfectly.

From the Harmonic SPE side, we can also help onboard new users into self-hosting / using gateways.

2 Likes

Appreciate the very kind words!

I agree that talks with the Studio team are going to be very important moving forward.

There are two immediate roadblocks that come to mind when thinking of taking over any customers:

1. Compatibility issues with Studio

  • While we see some similarities between Catalyst’s components and our planned pipeline, we’re likely to start from scratch.

  • At least one major component (ironically, our own LoadBalancer) will not be used, nor will the Catabalancer. Instead, we’re building Foghorn, which includes other nice clustering features like config sync, self-healing, NAT traversal (hole punching / proxying), and built-in GeoIP support.

  • We did arrive at similar supporting utilities: what we will call the Helmsman API (equivalent to catalyst-api) and the Periscope utility (equivalent to livepeer-data), though our approach to inter-node communication and feature set will differ (i.e. we’re unlikely to use Serf, gossip protocols, or BigQuery).

TL;DR we won’t be compatible with Studio’s current SDK, docs, etc. This makes any customer handover challenging.

2. Product side too early

We’re happy to do early partnerships, but at the current stage we only have 1 FTE available. Since core development is also a big component of the SPE, we might not have the resources to jump head-first into the product side of things out of the gate.


So there’s still a lot of work to do on our end before we can think of supporting users directly.

For now we have a scope document that describes some personas and gives a high-level overview of all the layers of a media pipeline, its core features, and the components we’d be using/building. This will be our starting point before making further commitments on the E2E pipeline. Realistically, it will take a few months to flesh out an MVP before we’re ready for any onboarding.

3 Likes

This proposal meets all the vote readiness criteria outlined in the Livepeer Governance Hub:

  • It has been live in the forum for 7+ days.
  • It follows the standard SPE pre-proposal template.
  • All feedback received during the discussion phase has been addressed.

It’s ready to proceed to a vote! :rocket:

1 Like