Decentralized Metrics and SLA Foundations for Livepeer

Pre-Proposal: Decentralized Metrics & SLA Foundations for NaaP

Executive Summary

The Cloud Special Purpose Entity (Cloud SPE) proposes a focused effort to improve the reliability, transparency, and usability of the Livepeer Network through the delivery of decentralized metrics and SLA foundations. This work provides the core infrastructure required to support the Livepeer Foundation’s and Livepeer Inc’s stated Network-as-a-Product (NaaP) initiative, informed by recent Livepeer ecosystem deployments and feedback from the Transformation SPE’s research and focus groups.

Livepeer is evolving from a low-latency, decentralized transcoding network into a productized infrastructure layer capable of delivering predictable, auditable, and programmable services. This evolution requires decentralized metrics and SLA signals. Without them, Livepeer cannot offer verifiable performance guarantees, enable transparent service differentiation, or support intelligent, reliability-aware routing.

Furthermore, for Livepeer to succeed as a core infrastructure layer for AI video, robust metrics and SLAs are foundational. Prospective users must be able to evaluate clear performance expectations and costs for common workloads and compare them directly against alternative networks. At the same time, the Livepeer network itself must observe real performance data across popular job types to support effective node selection, job negotiation, and reliability enforcement.

This proposal addresses these requirements by delivering the minimum viable decentralized metrics and SLA primitives necessary to unlock the early phases of the NaaP initiative and support Livepeer’s transition toward production-grade infrastructure.

Cloud SPE has already delivered multiple high-impact public goods to the ecosystem, including the free Cloud SPE Gateway, AI job tester, serverless API infrastructure, orchestrator support tooling, and public transparency dashboards. Building on this proven foundation, the proposed work enables real-time performance visibility and objective SLA measurement across transcoding, AI batch processing, and real-time AI video inference.

This proposal delivers the missing metrics foundation required to move Livepeer from “network” to “networked product,” aligning protocol capabilities with NaaP requirements while preserving decentralization, transparency, and open access. Cloud SPE has the delivery experience and operational context required to execute this work as durable public goods for the Livepeer ecosystem.


Prior Treasury-Funded Work, Operator Experience, & Public Goods

Cloud SPE has completed multiple Livepeer Treasury–funded initiatives delivering protocol-adjacent infrastructure, including public gateways, AI job testing systems, and metrics and transparency tooling that operate in production and interface directly with Livepeer Gateways, Orchestrators, and Workers.

Cloud SPE also operates Livepeer infrastructure as a Gateway, Orchestrator, and Worker operator, including SpeedyBird and Xodeapp. This direct operational experience informs the design of this proposal, ensuring that metrics and SLA primitives reflect real-world performance variability, deployment constraints, and operational trade-offs.

All infrastructure, APIs, schemas, dashboards, and reference implementations delivered through prior and proposed work are operated as public goods and remain openly accessible to the Livepeer ecosystem. Many of these were direct, freely available contributions made outside of Treasury proposals, with open-source code that is heavily utilized by Livepeer participants. These public goods have materially improved both accessibility and adoption across the Livepeer ecosystem. The Cloud SPE will continue to maintain a free public gateway, AI job testing infrastructure, and public metrics dashboards as part of this commitment.


Motivation

Livepeer Inc and the Livepeer Foundation are preparing the future of the network with the NaaP initiative, where the network becomes a product that can be consumed programmatically with:

  • Predictable performance
  • Transparent metrics
  • SLA-backed quality guarantees
  • Ability to monitor and deploy workloads

Cloud SPE’s proposed work is instrumental to enabling NaaP. Specifically, this proposal delivers:

  • Decentralized events required for SLA and performance calculations
  • Canonical SLA scoring primitives for objective evaluation
  • Metrics calculation and publishing required for auditability and transparency

When combined, these features make network performance data directly observable and consumable by any participant without intermediaries. This data stream also provides a durable foundation for additional network analytics, including performance diagnosis, capacity planning, and future protocol capabilities built on shared, decentralized metrics.


Problem Statement

In the current Livepeer ecosystem, determining where to send transcoding or AI jobs requires an understanding of node operator capabilities and performance characteristics. The network does not currently provide sufficient real-time, durable data to support this evaluation. Developers are forced to rely on incomplete queries, time-consuming sampling, or centralized measurement, resulting in stale or misleading insights.

This limits the network’s ability to present itself as a predictable and reliable platform for developers and integrators. Without decentralized metrics and SLA signals:

  • Performance claims remain unverifiable
  • Service differentiation is opaque
  • Intelligent routing decisions lack objective grounding
  • The network remains best-effort rather than productized

This proposal resolves these gaps by delivering a standardized, decentralized metrics pipeline and SLA foundations that operate across transcoding and AI workloads while surviving node restarts and heterogeneous deployments. These data points in turn can be leveraged to demonstrate the Livepeer Network as a go-to platform with clear service levels and related pricing. In addition, the data will enable Livepeer to objectively evolve towards providing cost-effective scaling and industry-leading latency for real-time AI workloads.


Proposed Work

Below is a focused roadmap aligned with near-term NaaP milestones and targeted for delivery in the first months of 2026. A metrics catalog, currently under development, enumerates the specific signals addressed within this timeframe.

While no benchmarking or performance-testing system can fully capture a network with rapidly evolving models, workflows, job types, and geographies, this public-goods effort is designed to provide a neutral, transparent, and publicly auditable “universal gateway” view. The goal is to offer a broadly representative baseline for third-party users, rather than relying on the perspective of any single company or narrowly defined production workload.

Milestone 1: Decentralized Metrics Collection & Aggregation

Key Deliverables:

  • Evaluate reliability, scalability, and operational suitability of Streamr for decentralized metrics publishing
  • Initial standardized event taxonomy and data schema published for community review (an illustrative schema sketch follows this list)
  • Initial decentralized metrics ingestion and aggregation pipeline
  • Correlation of gateway, orchestrator, worker, transcoding, and AI inference metrics
  • Validation of durability, partitioning, and access control strategies
  • Technical feasibility assessment of Runner <> Orchestrator data sharing
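
To make the schema discussion concrete, here is a minimal sketch of what a standardized, versioned job event might look like. All field names, the version scheme, and the role taxonomy are assumptions for illustration; the real schema is exactly what would go through community review.

```go
package events

import "time"

// NodeRole identifies which network actor emitted the event.
type NodeRole string

const (
	RoleGateway      NodeRole = "gateway"
	RoleOrchestrator NodeRole = "orchestrator"
	RoleWorker       NodeRole = "worker"
)

// JobEvent is an illustrative event envelope. The shared JobID is what
// allows gateway, orchestrator, and worker events to be correlated and
// cross-validated downstream.
type JobEvent struct {
	SchemaVersion string    `json:"schema_version"` // e.g. "v0.1"; enables safe evolution
	EventID       string    `json:"event_id"`       // unique per event, enables dedup after restarts
	JobID         string    `json:"job_id"`         // correlates events across node types
	Role          NodeRole  `json:"role"`
	NodeAddr      string    `json:"node_addr"`      // on-chain address of the emitter
	JobType       string    `json:"job_type"`       // e.g. "transcode", "ai-batch", "ai-realtime"
	Timestamp     time.Time `json:"timestamp"`
	LatencyMS     int64     `json:"latency_ms"`     // role-specific measurement
	Success       bool      `json:"success"`
	ErrorCode     string    `json:"error_code,omitempty"`
}
```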

Success Metrics:

  • Verified durability across node restarts and heterogeneous deployments
  • Community-reviewed event schema
  • Demonstrated correlation between gateway, orchestrator, and worker events

This first milestone delivers a validated technical foundation demonstrating that decentralized metrics and data processing can operate reliably across diverse network actors. This milestone emphasizes feasibility and architectural readiness, rather than full production completeness.


Milestone 2: SLA Computation & Observability

Key Deliverables:

  • Community-reviewed SLA scoring architecture and data model
  • Prototype SLA score computation using historical and near-real-time performance data
  • Public SLA metrics API and documentation
  • Basic public dashboards for SLA and performance visibility

Success Metrics:

  • Deterministic SLA scores reproducible from published data
  • SLA metrics successfully published via an API for evaluation purposes

This milestone delivers objective SLA signals for visibility and evaluation, without enforcing routing or selection changes, and prioritizes correctness and usability over completeness.
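
As an illustration of what "deterministic and reproducible from published data" could mean in practice, here is a minimal scoring sketch. The window aggregates, weights, and latency target are placeholder assumptions, not the final scoring model; the point is that a pure function over published aggregates lets anyone recompute and verify the score.

```go
package sla

// WindowStats are aggregates over a fixed scoring window, derived from
// published events. Fields and weights below are illustrative placeholders.
type WindowStats struct {
	JobsAttempted int64
	JobsSucceeded int64
	P95LatencyMS  int64
	TargetLatMS   int64 // per-job-type latency target
}

// Score returns a value in [0, 100]. Because it is a pure function of
// published aggregates, any consumer can independently reproduce it.
func Score(s WindowStats) float64 {
	if s.JobsAttempted == 0 {
		return 0
	}
	successRate := float64(s.JobsSucceeded) / float64(s.JobsAttempted)

	latencyScore := 1.0
	if s.P95LatencyMS > s.TargetLatMS && s.P95LatencyMS > 0 {
		latencyScore = float64(s.TargetLatMS) / float64(s.P95LatencyMS)
	}

	// Placeholder weighting: 70% reliability, 30% latency.
	return 100 * (0.7*successRate + 0.3*latencyScore)
}
```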


Future Possibilities & Follow-On Work

The following capabilities are intentionally deferred to future phases once decentralized metrics and SLA foundations have been validated:

  • SLA-informed selection logic integrated into go-livepeer
  • Open Pool enhancements based on SLA signals
  • Orchestrator and worker onboarding workflow improvements
  • Enforcement mechanisms or policy-level SLA guarantees
  • Advanced metrics exploration (e.g., CUDA Utilization Efficiency [CUE])
  • Expanded dashboards and developer tooling

These items represent a natural and well-defined progression path for Livepeer in its journey towards being a reliable and scalable platform.


Technical Architecture Overview

The AI Analytics architecture is a modular, real-time metrics pipeline designed to observe, stream, aggregate, and present performance data across Livepeer. It cleanly separates job execution from analytics, enabling scalable monitoring without impacting network operations.

The approach embraces decentralization and data democratization by publishing data directly from nodes for public consumption. This allows data to be consumed without solving service discovery across all Orchestrators, which today is limited by manual processes and network topology.

As nodes operate through their lifecycle, including while processing a job, they emit events describing that activity to a configurable set of channels. For the proposed solution, we will focus on leveraging Streamr, a decentralized data network, as the channel. A data consumer then pulls this data from a Streamr stream, applies various transformations, and publishes the results to a data warehouse. This consumer can be hosted by anyone (enabling other Gateway providers to host it privately and adapt it as needed), and the Cloud SPE will host an instance for public use.

During day-to-day operation of Livepeer, different node types will produce data to ensure a holistic, trustworthy data set: Gateways, Orchestrators, and Worker Nodes will all contribute. This data can then be correlated for deep insights and cross-validation, limiting bad actors and accurately measuring jobs as they pass through the network. Lastly, an API will be built so that Gateways can leverage these data points and so that a public dashboard can provide visibility into the network.
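
A rough sketch of the publishing side, under the assumption that the transport (Streamr or otherwise) is hidden behind a small interface so node code is not coupled to any specific channel. The interface and type names here are hypothetical, not part of go-livepeer or the Streamr client API:

```go
package metrics

import (
	"context"
	"encoding/json"
)

// Channel abstracts the transport (e.g. a Streamr stream). Keeping the
// transport behind an interface is a design assumption of this sketch;
// it allows the channel to be swapped without touching node code.
type Channel interface {
	Publish(ctx context.Context, payload []byte) error
}

// Publisher fans events out to a configurable set of channels.
type Publisher struct {
	channels []Channel
}

func NewPublisher(channels ...Channel) *Publisher {
	return &Publisher{channels: channels}
}

// Emit serializes an event and publishes it to every configured channel.
// A delivery failure on one channel does not block the others.
func (p *Publisher) Emit(ctx context.Context, event any) error {
	payload, err := json.Marshal(event)
	if err != nil {
		return err
	}
	var firstErr error
	for _, ch := range p.channels {
		if err := ch.Publish(ctx, payload); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}
```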

Core Components:

  1. Job Tester/Gateway - Responsible for validating the network based on public test scenarios
  2. Metrics Publisher - Integrated with go-livepeer; pushes data between and from nodes to a decentralized event stream
  3. Metrics Access Controller - Gates publishing to ensure only active Orchestrators can publish data (a sketch follows this list)
  4. ETL Pipeline - Data transformation engine that reads event logs and produces metrics (e.g., SLA score)
  5. Data Warehouse - Metric storage
  6. API Layer - Serves scoring, dashboards, and workflows
  7. Analytics Dashboard - Minimal, publicly accessible, fed by the ETL engine
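
For component 3, a hedged sketch of how publish authorization might work: a signed payload is accepted only if the signer is in the current active orchestrator set. The verification helper and the active-set lookup shown here are hypothetical; in practice the active set would be backed by on-chain registry data.

```go
package access

import (
	"context"
	"errors"
)

// ActiveSet answers whether an address is currently an active
// orchestrator. This interface is an assumption of the sketch.
type ActiveSet interface {
	IsActive(ctx context.Context, addr string) (bool, error)
}

// Verifier recovers the signer address from a payload and signature
// (e.g. via ECDSA recovery). Its shape here is illustrative.
type Verifier func(payload, sig []byte) (addr string, err error)

var ErrNotAuthorized = errors.New("publisher is not an active orchestrator")

// Authorize gates a publish request: the payload must be signed by an
// address currently in the active orchestrator set.
func Authorize(ctx context.Context, set ActiveSet, verify Verifier, payload, sig []byte) error {
	addr, err := verify(payload, sig)
	if err != nil {
		return err
	}
	active, err := set.IsActive(ctx, addr)
	if err != nil {
		return err
	}
	if !active {
		return ErrNotAuthorized
	}
	return nil
}
```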

The architectural components and data flow can be visualized in the accompanying diagram. A detailed breakdown of this diagram can be found here.


Deliverables Summary

We will perform extensive testing to ensure accuracy and reliability. This will include automation as well as manual testing of metrics generation, data stream publishing, data stream consumption, metrics processing, APIs, and dashboards. Based on the results of our testing as well as insights gained from the current Livepeer Cloud usage, we will optimize where appropriate.

Each milestone will conclude with:

  • A public demo
  • Code proposed for merge or published to a public repository
  • Documentation and community review

After completion of the proposal, all code will be put up for review and inclusion into several key GitHub repositories including go-livepeer and others as needed. All of this code will be available to the public to deploy for their own use cases.

Livepeer.Cloud (SPE) infrastructure will be updated to run the latest analytics solutions and will serve as a reference implementation for current and future Gateway Operators. Cloud SPE will operate this infrastructure and address issues identified through ongoing use; operational findings may result in minor corrective updates necessary to ensure correctness and reliability.

Core Deliverables

  • Metrics: Decentralized publishing, taxonomy, durability, storage
  • SLA: Computation, API, scoring models, documentation
  • Dashboard: Metrics explorer, SLA summary
  • Community Tools: APIs, repo docs, SDK examples, Docker templates
  • Public Goods: AI testers, APIs, Streamr datasets

Delivery Approach & Iteration

This proposal reflects an ambitious but focused scope intended to deliver meaningful foundations. Cloud SPE will prioritize delivery of the core decentralized metrics and SLA primitives described above, and will iterate on implementation details as development progresses to ensure correctness, usability, and reliability.

As with prior work, scope and technical decisions will be informed by real-world testing, operational feedback, and ongoing collaboration with the community. Where trade-offs are required, preference will be given to delivering durable, extensible primitives rather than incomplete or brittle features. This approach ensures the work remains aligned with available resources while producing outputs that can be confidently extended in future phases.


Governance

The SPE team will distribute the funds among themselves after payment. The community’s input before and during the SPE will be collected via:

  1. A Livepeer forum post seeking feedback on the proposal prior to submission.
  2. Attendance at the Weekly Water Cooler Chat to discuss the proposal, collect feedback, and reshape (if necessary) the proposal prior to funding.
  3. After approval, the team will continue to attend the Water Cooler to present progress and a monthly update in the Livepeer forum.

Funding

The requested funding reflects a focused scope across two milestones that establish the benchmarks Livepeer requires to credibly position its cost and performance against other providers. This includes:

  • Requirements, Development, Testing, and Documentation
  • Community, Livepeer Inc, Livepeer Foundation and AI SPE Collaboration and Engagement
  • Enhancements to the Livepeer metrics framework, Streamr integration, AI Job Tester improvements, analytics consumers, and a public ‘reference implementation’ API
  • SPE Hardware and DevOps (Development and Production)
  • Milestone 1 (Decentralized metrics foundation): $100,000
  • Milestone 2 (SLA computation & observability): $80,000
  • DevOps & Infrastructure (hosting, hardware, and job testing costs): $20,000
  • Total: $200,000

Timeline

Delivery is anticipated to take approximately six months; work is already underway as of November 2025. The schedule depends on the team’s development velocity and is subject to change. Preliminary design and validation work has begun to reduce delivery risk.

  • November 2025 – Work begins on Milestone 1
  • February 2026 – Milestone 1: Decentralized Metrics Foundation
  • April 2026 – Milestone 2: SLA Computation & Observability

Conclusion

This proposal aims to implement a real-time analytics foundation for Livepeer node operators, leveraging the Streamr Network to democratize performance monitoring and decision-making. By publishing real-time metrics, we will provide valuable insights into node performance, reliability, and capabilities, ultimately improving the overall efficiency and robustness of the Livepeer protocol through the enablement of the Network as a Product initiative (NaaP) and programmable SLAs.

Cloud SPE has delivered significant public goods, transparency tooling, and gateway infrastructure that have meaningfully improved the Livepeer ecosystem. With the direct engagement of Livepeer Inc and the Livepeer Foundation as our design partners, this next project will unlock a critical requirement for NaaP: decentralized metrics and SLA scoring.

We respectfully request Treasury funding to execute this roadmap and provide the foundational building blocks for Livepeer’s transition into a robust, productized decentralized video and AI infrastructure network.

Cloud SPE has the delivery experience and operational context required to execute this work.


I’m struggling to see the value proposition here relative to the cost.

Spending ~$200k to measure ~1k/day worth of traffic does not seem justified. If the network does not yet have meaningful traffic, investing heavily in tooling to measure that traffic data feels premature. At the end of the day, this proposal delivers statistics—not direct growth, not new demand, and not clear revenue impact.

We’ve already seen a similar issue with the previous AI job tester, which ended up providing little real-world value. I’m concerned we are heading down the same path again, potentially spending ~13% of the current treasury on something that does not materially move the network forward.

I also have serious concerns about the implied working hours and pricing. If the justification is something like “2,000 hours at $100/hour,” that’s one interpretation—but another developer might argue the same scope could be completed in 200 hours. Who is validating that the proposed effort and cost are reasonable? What external benchmark or reviewer is confirming that this level of spend is justified?

The only scenario where this might make sense is if major gateways believe this work will unlock insights that drive a 10× increase in Livepeer traffic. But realistically, there are fewer than five active gateways on the network today, so that outcome seems highly unlikely.

Finally, there’s a broader concern around treasury management. LPT price continues to decline, and large treasury expenditures add further dilution pressure. If we continue spending aggressively without clear ROI, we risk depleting the treasury while its value is falling. When that happens, how will we fund genuinely critical contributions—those that drive organic traffic, real usage, and sustainable revenue for the network?

I’m not opposed to spending, but I strongly believe treasury funds should be prioritized toward initiatives that directly grow demand and usage, rather than expensive tooling that measures a network that hasn’t yet reached meaningful scale.

We already spent treasury funds on AI job metrics, which turned out to be useless. (What real value did that $150k add?)


Hey @speedybird and @MikeZupper, great to see this proposal live.

As outlined in the updated network vision and the roadmap, building these foundations now feels necessary to move from simply having the technical ability to onboard new use cases to reliably supporting production workloads at scale and being accountable to clear, measurable service expectations. With BYOC fully released, the runner stack generalized, and further communication improvements coming from pytrickle, the network is increasingly capable of supporting a broad range of demand-driven realtime applications. Without shared, network-wide observability and SLA signals, though, it’s hard to turn that capability into sustained, real demand.

Potential demand partners quickly move from “what cost reduction can you provide?” to “what performance and reliability can you commit to, and how do we measure it?” Today, that data is fragmented across gateways and orchestrators, or missing entirely, which limits experimentation and slows the feedback loop that comes from real usage. Putting these data foundations in place gives concrete answers to those questions, shifts engineering from experimentation toward production-driven improvement, and makes it clear which supply providers are performing well, where bottlenecks exist, and what needs to change to meet customer requirements. Paired with the protocol group’s work on ticket distinction, this also makes it easier to see which treasury-funded efforts are actually bringing in real demand and which areas are worth continued investment.

Looking forward to seeing you continue the work you already started and to the first data endpoints that help gateways and engineering teams onboard customers and improve the network.


Thanks for the pre-proposal!

I’m highly skeptical of using Streamr as the proposed solution. A decentralized data network sounds great, but:

  • Not everything needs to be decentralized. To me, the benefits of decentralized metrics are way smaller than the disadvantage of higher development and operational costs
  • While the data might end up on a decentralized network, the collection (Tester, Access Controller) is still highly centralized. So why not just keep the data in a redundant way on 2-3 servers?
  • Streamr’s numbers don’t look good: Streamr 2025 Q3 Transparency Report. Their MC/token price is down 90% this year alone, they’re now at a ~$4M MC. The only way to keep the network up is to mint even more tokens. This could end up in a downward spiral pretty quickly.

Regarding cost: I can see @rickstaa’s argumentation, but I also agree with @Karolak. Is there a way to scale down the project and costs (e.g. by removing the decentralized aspect or certain non-critical metrics) while keeping the critical deliverables?


Thanks for this proposal. The work here represents an important step on the roadmap to ensure the network is competitive as an infrastructure for running realtime AI video workflows.

I see that a bunch of the early feedback revolves around the estimations for the amount of hours and money required to deliver on this work (relative to the value provided). I think those questions are very fair and should be debated. I also think it’s very fair to discuss whether a decentralized storage infrastructure is necessary to deliver on the value props offered here. But I wanted to chime in why this work is important and how it differs from previous “job tester” frameworks.

In my opinion, there have been two different use cases for job testing and metrics.

  1. The first was aimed at providing useful information to delegators about orchestrator performance. Delegators needed to know which O’s were performing well on transcoding (or AI jobs) so that they could stake towards them, help route work via the selection algorithms, and help reward high performing O’s through reward cuts so they could bootstrap hardware. While hard to do, this was accomplished via the transcoding tester, and a little less successfully through the AI tester (because it is hard to keep up with the latest models and test all workflows). But it still generally provided information about which O’s were dedicating capacity and expertise.

  2. The second use case is providing network level performance data to apps and enterprises about how the network performs as a whole. To choose Livepeer over alternative GPU clouds, users would need to know for popular workflows, how Livepeer performs in terms of reliability, scale, performance, and price. Establishing this is not easy and requires significant engineering, testing frameworks, design decisions in terms of how to package up the data in a consumable way and communicate it to the world.

The work proposed here in this proposal is far more aimed at the latter use case - in service of establishing PMF for the network as a leading and competitive realtime AI infra - something that will be very important if we’re looking to get adoption at scale.

Anyway, as mentioned, lots of good questions about the appropriate proposal size and hours required and incremental steps to get there. But I’m excited about the Cloud SPE team taking on the work to establish this important data for the network.


Thanks for the prop 🙂

Transparent metrics is extremely important and I know the Cloud SPE is capable of building out the infra. My main concern is where this goes after it’s built, which appears to be relatively out of scope for the prop. My questions are, what applications will use it? How will it be presented on the Explorer? How will devs coming in from outside the ecosystem know it exists?


Thanks for putting together this proposal, especially with all the supporting materials.

A dashboard with some high-level metrics will be useful to understand what is happening on the Livepeer network, e.g. a supercharged explorer. It’s always nice to be able to point to graphs and drill down on a few things. For that goal, it feels like the project scope and architecture can be tightened up to minimize cost and risk.

The go-livepeer node (and tools derived from it) already has most of the data that I imagine might be useful; it’d be a matter of extracting what’s there and presenting it in a nice format. I’m very happy to provide more specific guidance on this and other technical points in Discord to help narrow down the proposal.

That being said, the target customer for this data is also a little unclear to me. Showing network-level numbers to potential adopters is very different from monitoring your own workloads running on the network. The former is essentially marketing data and only needs a snapshot every now and then. Monitoring for the purposes of observability has very different requirements. Understanding this difference would help with scoping.


Intro

Thank you to everyone who took the time to read the pre-proposal and share thoughtful feedback in the forum and during the Watercooler. We want to acknowledge upfront that many of the concerns raised are valid and important, particularly around scope, cost, architectural complexity, and clarity on how this work will ultimately be used.

This response is intended to clarify intent, address the major points of feedback, and share how we have refined the architecture and framing as a result.


1. Building Robust Network Infrastructure

This proposal is intentionally framed as a network-level capability, not a one-off dashboard, script, or testing utility.

Treasury-funded work carries an implicit responsibility: anyone must be able to operate, audit, evolve, and hand off what is built without fragility. That sets a different bar than demonstrating feasibility with ad-hoc scripts or local tooling. While simpler approaches can validate ideas, they do not meet the requirements for long-term ownership, operator safety, or public accountability.

The goal here is to fund durable metrics and SLA foundations that Livepeer can safely rely on over time as part of the Network-as-a-Product (NaaP) initiative—aligned with feedback and direction from both the Foundation and Livepeer, Inc.

Importantly, feasibility is no longer the open question. The harder and more consequential problem is building something the network can depend on after feasibility has already been demonstrated.


2. Themes from Community Feedback

Across the discussion, several themes came up repeatedly:

  • Cost vs current network scale
  • Concern about over-engineering or premature productionization
  • Risk and viability of Streamr / decentralization
  • Unclear post-build adoption (“who uses this and how?”)

We agree these are the right questions to ask. The proposal should not be justified simply because something is “nice to have,” but because it establishes a responsible foundation the network can grow into.


3. Proposal Updates

Based on feedback from the community and the Foundation, we refined both the scope and architecture to reduce risk, dependencies, and cost while preserving the core objectives.

Specifically, we narrowed the problem to a focused, pull-based metrics foundation and removed components causing concern.

The following provides a high-level summary of the adjustments we will be making to our proposal:

  • Removed reliance on decentralized pub/sub infrastructure (Streamr removed)
  • Shifted to a pull-based metrics model, where gateways and analytics engines retrieve bounded, versioned metrics from orchestrators on a fixed interval (see the sketch after this list)
  • Orchestrators persist job-level metrics locally (instead of an event log) and expose them for scraping
  • Eliminated the need for global publish permissions or event access controls
  • Service discovery will be handled via existing manual mechanisms
  • Custom jobs (BYOC) may integrate job-level metrics incrementally using documented interfaces
  • Focused scope on Video AI workloads and the metrics catalog; transcoding extensions are deferred
What did not change:

  • The need for job-level observability, not just point-in-time metrics
  • Support for requested metrics that Livepeer does not generate today
  • Support for multiple consumers (Gateways, Explorer, researchers)
  • The ability to compute deterministic, reproducible SLA signals
  • Public APIs and dashboards for transparency
  • Alignment with NaaP milestones and future protocol evolution

This refinement reflects community input—particularly around risk, complexity, and operator safety—while preserving the core goal of delivering network-level observability rather than isolated metrics.


4. MVP Does Not Mean Throwaway

A recurring concern was whether this is “too much, too soon.”

We want to be explicit about the distinction we are making:

  • Feature scope is intentionally minimal for MVP
  • Engineering responsibility is intentionally production-grade from day one

In other words, we are minimizing what is built, not how responsibly it is built.

This allows anyone to:

  • Rely on the system
  • Evolve it incrementally
  • Avoid replacing fragile MVPs later at significantly higher cost

5. Target Audience

To address questions regarding who will use this solution: this proposal responds to needs expressed by the Livepeer Foundation and Livepeer Inc. The primary audiences for this work are:

  1. Public Orchestrators, who provide essential GPU computing capacity to the network by meeting SLAs

  2. Gateway Providers, who provide permissionless Orchestrator management and workload management utilities

  3. Inference service and tooling providers (such as Daydream), who deploy workloads to the network and manage services for their users, typically through a self-hosted Gateway and serviceable APIs


6. Responsibly Using the Treasury

We understand concerns around treasury usage and ROI, especially in the current market environment. Treasury funding must be applied strategically, resulting in long-term, durable solutions that reflect what consumers would expect of a scalable network. We focus on delivering solutions that meet production needs and scale, rather than one-offs.

The revised scope reflects a deliberate reduction in design surface while preserving the rigor required for long-term use. The requested funding covers analysis, design, testing, documentation, and operation of a public reference implementation. As Doug stated in the above feedback, “establishing this requires significant engineering, testing, and design decisions”.

This work is intended to avoid replacement cost later, when the network is larger, more visible, and harder to change. Cloud SPE is committing to operate this as a public reference implementation, as we have done with prior infrastructure.

We remain open to continued dialogue, provided the resulting system is something that anyone can responsibly depend on. Cloud SPE has already invested significant upfront design and validation effort into this work and will continue to do so as part of our commitment to delivering a high-quality foundation.


7. Closing

We appreciate the candid feedback and the time the community invested in reviewing this proposal. The architectural refinements shared here are a direct result of that engagement.

Our intent is not to optimize for the lowest-effort implementation, but to fund infrastructure that is durable, accountable, and aligned with Livepeer’s long-term direction as a production-grade network. The upcoming proposal revisions will reflect a deliberate narrowing of scope and architecture to reduce risk, complexity, and cost while preserving the core objective: establishing a trustworthy metrics and SLA foundation the network can build on. It is not intended to solve every future use case, but to deliver a responsible baseline that can be evaluated, adopted, and extended as the network grows.

We welcome further feedback and look forward to moving forward with the community and the Foundation.

— Cloud SPE Team
