Proposal for a Fairer and More Decentralized Stream Distribution System

I think I’m the biggest proponent of using sensible metrics to determine where work goes (and have been trying to get some things moving in the right direction for quite some time).

I actually fully agree with @stronk on this point - and disagree that this would make the network more centralized.
Broadcasters are the only ones in a position to gauge the quality of a transcode - by whatever metrics they consider important - and it makes sense to continuously monitor these metrics and switch to the orchestrators/transcoders that give the best “bang per buck”, so to speak. In some cases, one might prefer the fastest transcode possible (low-latency use cases), while in others you might want the most efficiently stored or highest-quality transcode possible (long-term storage of video assets), or some balance between those (or even optimize for a specific algorithm known to be used by a specific node, based on content in the source video).
Broadcasters have to track these metrics because they want the best transcode, and the staking system is not enough to provide guidance on which nodes are the best ones to send work to (it’s not specific enough and only provides a simple ranking based on a single, highly ambiguous metric).
It makes sense then, that broadcasters publicize in some way which nodes they consider to perform well and poorly, based on their own interests. For Livepeer-Inc-operated broadcasters, this can simply mean staking towards orchestrators that provide high quality transcodes by metrics their customers find important, but I’d go even further and encourage publishing the data those staking decisions are based on. This also allows those running orchestrators/transcoders to understand what metrics are important and improve their nodes in meaningful ways that add more value to the network by actually providing more valuable services.
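To make “bang per buck” concrete, here’s a minimal sketch of the kind of client-side scoring a broadcaster could run. All metric names, weights, and numbers here are illustrative assumptions on my part, not anything the protocol defines:

```python
from dataclasses import dataclass

@dataclass
class TranscodeSample:
    orchestrator: str
    latency_ms: float      # measured segment round-trip time
    vmaf: float            # perceptual quality score, 0-100
    price_per_pixel: int   # advertised price in wei

def bang_per_buck(s: TranscodeSample, latency_weight: float = 0.5) -> float:
    # Higher is better: quality per unit of price, penalized by latency.
    # A low-latency profile would raise latency_weight; an archival/VOD
    # profile would lower it and lean on quality instead.
    return s.vmaf / (s.price_per_pixel * (1 + latency_weight * s.latency_ms / 1000))

samples = [
    TranscodeSample("O1", latency_ms=40, vmaf=93, price_per_pixel=1200),
    TranscodeSample("O2", latency_ms=250, vmaf=95, price_per_pixel=900),
]
best = max(samples, key=bang_per_buck)
print(best.orchestrator)
```

Publishing samples like these (rather than just the final staking decision) is what would let node operators see which metric they lost on.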

I actually want to go even further than just this, since there are some key problems with the media flow going source → broadcaster → orchestrator → transcoder → orchestrator → broadcaster → destination. The clearest problem here is latency: making a total of 6 hops between different nodes will, by itself, add about a second or so of latency even under the most ideal conditions (and usually significantly more). However, that’s actually not the key reason this needs to change: the fact that more than one transcoder often operates behind an orchestrator means that its weakest and strongest transcoders determine the performance of the orchestrator as a whole, and that orchestrator is also the single geographic point of entry and exit for a stream. This penalizes transcoder pools, since it discourages having geographically distributed transcoders (even though that is a highly valuable property), and it means a poorly picked transcoder for one job can hurt the pool’s ability to receive work that would suit it better.
Ideally, the orchestrator acts more as a representative for transcoders, rather than as an actual man in the middle. It can help broadcasters find good transcoders for the work they have, and act as the trusted payment gateway for the transcoders so they can focus on running an optimal system rather than needing to worry about anything on-chain. Of course, nothing stops anyone from doing both at the same time (or even all three, including running a broadcaster themselves) - but my hope is that some of the community will specialize in these separate tasks rather than everyone doing everything. It just makes sense, as one person can’t be good at everything at once, and specializing in something is just plain more fun sometimes.

There are way more reasons I would encourage certain changes - I plan to do a proper write-up about this somewhere in the coming month. :slight_smile:

5 Likes

I just want to make it clear that this is not a personal attack on you, but it’s impossible not to mention your orchestrator by name, since the stake discrepancy is solely between your node and everyone else’s. How else do we address the issue? It’s the elephant in the room.

I don’t believe anyone is saying you don’t deserve to be where you are now. We simply want to understand how to compete.

My Pixel price is 900 so that I can support the network’s VOD implementation btw. But if that’s resulting in fewer winning tickets overall, maybe I’ll change it back. My price has been 900 for months now and I really don’t think it makes a difference.

3 Likes

This is a great response. Maybe I should refrain from adding input to this thread as I am unable to see the problem clearly from a technical point and cannot provide technical solutions. All I can do is call the issue as I see it, and maybe this isn’t the place for that. I don’t want to make anyone feel singled out, but I also don’t know how to address the problems without doing exactly that.

1 Like

Yeah you’re right, centralization is the wrong term here… since we basically only have one B anyways :slight_smile:
Fully agree with your post. My point was more about how “gameable” a metric is. If latency were the primary (or only) metric, everybody who can afford it would just set up an O in the same datacenter. The current setup (a latency threshold, then distribution by stake with a random factor) avoids this.

Well, the math is quite simple: unless VOD brings you more than 30% more work, it’s not worth having the pixel price 30% lower from a payout perspective.
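For anyone who wants to check that math, here’s the break-even relationship, assuming payout is simply price × pixels (the 1200 reference price below is a hypothetical for illustration; the thread only mentions 900):

```python
def breakeven_extra_work(reference_price: float, discounted_price: float) -> float:
    # Fraction of additional work needed for the discounted price to
    # match the payout of the reference price (payout = price * pixels).
    return reference_price / discounted_price - 1.0

# Hypothetical: if the going rate were 1200 wei/pixel and you charge 900
# (25% lower), you need ~33% more work just to break even on payout.
print(f"{breakeven_extra_work(1200, 900):.0%}")  # 33%
```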

3 Likes

It is true that, among the top 10 orchestrators by LPT stake, only Vires transcodes today.

This means that at least 6 million LPT are staked on orchestrators that do not transcode at all. It would certainly be good to see these LPT staked on orchestrators that deserve it more.

But I suppose that in most cases, these LPT belong to these orchestrators (or their customers).

They would only need to add a GTX 1070 to their server to collect ETH that they clearly have no use for today.
And as a result, there would be no work left for orchestrators ranked below this top 10.

I don’t have the solution to this problem.

1 Like

Hi everyone,
Vires,
Firstly, I wanted to apologize if it seemed like we were fighting against you. That was not our intention. We were simply using your case as an example to illustrate the issues that small orchestrators are facing within the Livepeer network. We are not against you, and we are not saying that you don’t deserve to be our own whale within the network.

Another problem that we have noticed is that some orchestrators are simply leeching LPT rewards without actually contributing any value to the network. This is a problem that affects all orchestrators, but it is particularly harmful to smaller players who may not have as many resources to invest in their operations.

We believe that it is important for the Livepeer community to work together to address these issues and create a more fair and sustainable network for everyone involved. We hope that by bringing attention to these challenges, we can encourage a more collaborative and inclusive approach to network governance and help to build a stronger, more decentralized Livepeer ecosystem.

4 Likes

Yeah @vires-in-numeris I also mentioned you specifically in my post, but didn’t mean to imply you are not a valued member of the orchestrator community or don’t deserve to have some fat stake (How many orchestrators are there even left from the period when reward calls were $100 per round?)

OP’s point was to make changes to job distribution to make it a bit easier for smaller orchestrators to compete, but this forum post has escalated to also cover fee distribution and stake concentration.
We might need to split this up into separate topics since each of these topics warrants their own discussion on how this can be tackled and how much of an actual difference it would make.
I think we are all aligned here on the goal to attract and retain orchestrators and allow them to dedicate themselves to the Livepeer ecosystem (including helping on the demand side).

4 Likes

Hi everyone,

This is a discussion that has been going on in the Livepeer community for a long time now.

I will outline a potential solution at a high level below that I briefly discussed with some Livepeer team members in the past, but it would require some serious protocol-level changes. Maybe it’s not that bad though; Livepeer used to be at the frontier of protocol development and research, so let’s get back into that mindset.

I would like to start with saying that there have been attempts at addressing this in the past:

  • Explorer improvements
  • Playing with alternative selection (e.g. increasing the pinged set of 8 Os)
  • Arbitrum Migration (lower cost for reward and moving stake around)

I’m no longer part of the Livepeer team so I can’t speak to any efforts after I left.

I don’t think an orchestrator like @vires-in-numeris is the problem. Yes, he runs a big service, but he has also been a community member for a very long time, is very active, and has contributed a lot to the ecosystem. This means he has a recognisable and valuable “orchestrator brand”.

I experienced the same when building Livepool: it was a very valuable tool for the community, and I believe many of the existing orchestrators first got their feet wet through Livepool. @dob also wrote a Twitter post on building value-adding orchestrators earlier: https://twitter.com/petkanics/status/1644078470635352068 . However, I do think the effects of this can be very slow and are not guaranteed.


However, I don’t think this is sufficient to build a healthy network for the long term.
I also would like to point out that there is a separate problem: there are orchestrators calling reward consistently without having ever transcoded real streams.

This issue was first discussed during the discussions of LIP-34 (InflationChange Parameter Update · Issue #34 · livepeer/LIPs · GitHub), which was a proposal to reduce the amount by which inflation would adjust per round. Inflation was quite high at the time and the participation rate was well above 50%. LIP-34 passing meant prolonging inflation.

In that topic I warned about the status quo that existed at the time and still exists today: there are orchestrators getting big chunks of LPT rewards without running actual GPU capacity, while the whole idea of inflation was to provide a subsidy to bootstrap the network when demand was low. Not a free lunch.

Rewards in Livepeer from inflation are not tied to any ability to actually perform work. This is in contrast to block rewards in PoS, where even if there is no demand you produce empty blocks, for which you get a reward. An analogy for Livepeer would be a proof-of-transcode, or, when there is demand, something like distributing inflation according to productivity (The Graph tries to do this using the Cobb-Douglas production function).
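As a rough illustration of the Cobb-Douglas idea (a sketch only; the alpha value and normalization details here are assumptions, not The Graph’s actual implementation): a node’s reward share grows with both the work it performed and its stake, but zero work means zero share, however large the stake:

```python
def cobb_douglas_share(fees: float, stake: float,
                       total_fees: float, total_stake: float,
                       alpha: float = 0.75) -> float:
    # Reward share ~ (fee fraction)^alpha * (stake fraction)^(1 - alpha).
    # A node that performed no work (fees == 0) gets nothing, regardless
    # of stake. In practice shares would be normalized across all nodes.
    if total_fees == 0 or total_stake == 0:
        return 0.0
    return (fees / total_fees) ** alpha * (stake / total_stake) ** (1 - alpha)

# A high-stake idle node vs. a small but productive node:
print(cobb_douglas_share(0, 2_400_000, 100, 10_000_000))   # 0.0
print(cobb_douglas_share(40, 100_000, 100, 10_000_000))    # ~0.16
```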

This brings us to problem 1:
Rewards should be distributed to those performing real work or having the capacity to perform real work

The fact that orchestrators with a lot of stake who don’t perform actual work are also getting inflation is reducing the pie for nodes that are providing an actual service to the network. Solving this issue would already increase the network ownership % of orchestrators who provide valuable work.


Once we have a solution for fairer stake distribution, this can already lead to an improvement in work distribution, but there’s one big elephant in the room: the competitive advantage to win work is currently based only on orchestrator latency and stake.

There’s been a long-running discussion in the community about including price per pixel in the selection process, which is a great way to increase competitiveness for smaller node operators. While this would mean reduced earnings from fees, combined with the idea above of distributing rewards according to production, it would grant you more LPT from rewards if you are able to win more work by being more competitive on price.

This brings us to problem 2:
There should be additional parameters for selection beyond stake and initial ping latency
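Purely as a sketch of what that could look like (this is not the protocol’s actual selection formula; all weights and numbers below are made up), a selection weight could combine the three factors so that stake still helps, but price and latency push back:

```python
import random

def selection_weight(stake: float, latency_ms: float, price_per_pixel: float,
                     w_stake: float = 1.0, w_latency: float = 1.0,
                     w_price: float = 1.0) -> float:
    # More stake raises the weight; higher latency and a higher price
    # per pixel both lower it. Exponents tune each factor's influence.
    return stake ** w_stake / (latency_ms ** w_latency * price_per_pixel ** w_price)

# (stake, latency_ms, price_per_pixel) per orchestrator, illustrative numbers:
orchestrators = {"big-O": (2_400_000, 40, 1200), "small-O": (100_000, 25, 900)}
weights = {name: selection_weight(*v) for name, v in orchestrators.items()}
winner = random.choices(list(weights), weights=list(weights.values()))[0]
print(winner, weights)
```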


While all of the above requires a lot of additional thought and research, I have been a firm believer for years now that this is a solid way forward to create a more durable protocol at the core layer. And while priorities were rightfully shifted elsewhere in the past, maybe now is a good time to revisit some of the workings of the protocol.

6 Likes

I have a slightly different take here. The crux of the problem is that LP is not gaining enough traction, which makes us - the orchestrators - more susceptible to a power law (a winner-takes-all kind of situation). At this stage, enforcing heavy artificial measures could put the protocol at risk, because it’s like sacrificing your best players at an extremely critical stage of the game. There is no single solution, but my mental model is that whatever mechanism we use should optimize for the performance/reliability of the overall network, not for a “fair” distribution of rewards. I’m saying this as a 2-month-old orchestrator (joincluster.eth) with a stake from one delegator and the LP grant, but I’m trying to unbias myself.

Enforcing artificial distribution of rewards is a slippery slope where poorly performing orchestrators might get rewarded, which would impact the overall performance of LP, leading to less work for all of us and potentially discouraging high-performing orchestrators from continuing to invest in their infrastructure.

It’s a tricky matter, because you also don’t want to discourage good smaller orchestrators from investing in their pools and risk becoming a centralized system. I don’t know exactly how the rewarding works, but it feels to me that LP is already using the right mix of metrics. We can certainly tweak the weights of those metrics, but by no means should it come at the expense of network performance. A good next step could be to build different financial models and evaluate them.

Where does LP Inc stand? It is not clear to me from the discussion.

2 Likes

All the risks you mentioned are already happening. Rewards already go to poorly performing Orchestrators. Making changes to avoid that is not artificial. In my opinion, the slippery slope is to keep avoiding this problem as it grows larger and larger while demand remains low.

2 Likes

To this point: if stake is not for the security of the network, a soft cap on the weight of stake is a valid implementation. If, above x LPT, the importance of staked LPT in the selection process is reduced by a set percentage, it would elevate the other metrics used in the calculation without eliminating the benefit of having more stake past that limit.

Say the cap was 100k LPT and everything above it counted at 20%: a node with 2.4M LPT would scale as if it had 560k (100,000 + (2,300,000 × 0.20)) LPT staked, bringing the ultra-high-stake nodes a little closer to the soft cap.
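A minimal sketch of that soft cap, using the numbers from the example above (the cap, factor, and function name are just illustrative):

```python
def effective_stake(stake: float, cap: float = 100_000, factor: float = 0.20) -> float:
    # Stake up to the cap counts fully; everything beyond it counts at
    # the reduced factor, so more stake still helps, just less steeply.
    return stake if stake <= cap else cap + (stake - cap) * factor

print(effective_stake(2_400_000))  # 560000.0, matching the worked example
print(effective_stake(80_000))    # 80000.0, below the cap nothing changes
```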

If the goal is transcoding stability, putting the highest emphasis on performance is what is best for the network and would reward orchestrators who make that a priority.

2 Likes

I think there’s a little misunderstanding here @mamood

I suggest you re-read my post about the status quo that has existed for years now; this status quo can be summarised as:

There are large LPT holders that have been receiving enormous amounts of subsidies by running an orchestrator and not performing any real work (e.g. 0x9c106 and 0x4f47 )

No one is talking about artificially driving rewards; we’re talking about a change in how rewards are distributed so they flow more naturally to those actually performing valuable work on the network.


It’s not an issue for the network functionality/security

@vires-in-numeris While I agree poor stake distribution is not a huge issue in the network today, it can definitely become one over time. Current security issues are limited to governance and the ability to sway votes; one mitigating factor here is that delegators do retain their voting power if they choose to use it. But if there are ever protocol changes that do require some form of better stake distribution (e.g. fraud proofs, fishermen, …), then it could pose a security issue down the line.

A better stake distribution is always better for a distributed network even though the impact can vary heavily from protocol to protocol.

1 Like

This point is not a direct response to any particular post in this thread, but is important background related to this discussion. It’s been on my mind that this should be re-highlighted somewhere:

Many of the proposed changes, frustrations, etc…sort of get at critiques of the whole work token model, which in many ways was invented by Livepeer, and adapted by many additional work protocols since. The question, “why token”, comes up often when people are trying to understand blockchain coordinated networks, and the work token model provides a firm answer to that question. It is completely fair to learn from the experiences and question this model to try to improve it, by the way. At its core, the work token model can be simplified as:

  • One staked (or delegated) token offers the opportunity to compete for one unit of work - and hence, one unit of fees.
  • As demand on the network grows, there becomes more fees available per token. Hence there is demand for the token in order to do the work to access these fees.
  • The staked token, combined with an unbonding period, serves as a security mechanism to ensure the worker nodes don’t cheat or attack the network. This security has value → the more stake a node stands to lose, the more jobs it can secure for various parties and the higher-value jobs (premium content, for example) it can be “trusted” to perform. Much like when a construction company works on a skyscraper, it has to put down a much larger surety bond than when it works on an individual home. Hence the justification for more delegated stake → more jobs.
  • Creating this security and work allocation within a protocol-controlled network (LPT) rather than any other representation of value (ETH?) is critical for the incentives and alignment. (Protocol can be owned by those doing the work, protocol can’t issue ETH as incentive for bootstrapping, governance actions can update incentives and solve practical issues, strong aligned community around LPT, etc.)
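To make the first bullet concrete, here’s a minimal sketch of the baseline allocation the model implies, assuming jobs are assigned by pure stake-weighted random selection (illustrative only; the real selection pipeline layers latency and other factors on top):

```python
import random

def pick_orchestrator(stakes: dict[str, float]) -> str:
    # One staked token = one "ticket" in the draw for the next unit of
    # work: selection probability is proportional to delegated stake.
    nodes = list(stakes)
    return random.choices(nodes, weights=[stakes[n] for n in nodes])[0]

stakes = {"big-O": 2_400_000, "small-O": 100_000}
# big-O wins ~96% of jobs under pure stake weighting.
print(pick_orchestrator(stakes))
```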

Of course there are practical challenges to this when the rubber hits the road on reliable global video transcoding: performance, location, latency, reliability, Os’ inability to simply buy more tokens at the scale required to win more work, supply exceeding demand (this is what the inflationary LPT rewards are meant to offset during bootstrapping), inefficient delegators who don’t act in their own self-interest, ongoing research on the verification and slashing security mechanisms, and more.

At its core, I am a believer in the fundamentals of the work token model. And we should always make iterative updates to address the practical issues to ensure the network is delivering on its potential. But I wanted everyone to see those fundamental assumptions and keep them in mind as they are proposing changes that would break the model, and potentially render the superpowers of LPT void. Or if you have an alternative model, by all means share for consideration.

3 Likes

From this perspective, an idea immediately jumps to mind for me: if tokens represent an opportunity to compete for work, perhaps they should also represent an obligation to perform work.
The problem most discussed in the thread so far is that there are orchestrators that perform no work yet have a lot of stake, which allows them to grab inflationary rewards without any risk or effort.
If inflation is not distributed based on stake, but based on work successfully performed (which is, in turn, highly influenced by stake, as it determines the pecking order), then the nodes performing well would be rewarded the most from inflation, while nodes that just hold stake and perform no work would receive no reward at all.
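Sketching that idea (illustrative only, not a worked-out proposal): mint each round’s inflationary LPT pro rata to verified work, instead of pro rata to stake:

```python
def distribute_inflation(mintable_lpt: float,
                         verified_work: dict[str, float]) -> dict[str, float]:
    # Split a round's inflationary rewards in proportion to the work each
    # node verifiably performed (e.g. pixels transcoded). Nodes that did
    # no work get nothing, however much stake sits behind them.
    total = sum(verified_work.values())
    if total == 0:
        return {node: 0.0 for node in verified_work}
    return {node: mintable_lpt * w / total for node, w in verified_work.items()}

# Stake still matters indirectly, since it influences who is selected for work:
print(distribute_inflation(10_000, {"O1": 8e6, "O2": 2e6, "idle-whale": 0}))
```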
This would bring a currently “missing” incentive to those that stake: the incentive to stake only towards nodes they believe will do a good job. After all, their stake then brings them no rewards if they stake towards a node that doesn’t perform.
:thinking:

1 Like

This is great in spirit. But as far as the protocol is aware, work performed == fees earned through PM tickets. And these are trivial to self-deal: just set up a B and send a PM ticket to your own O. (This is a current weakness of even looking at fees earned on the explorer, though not often exploited.)

It would be a bad solution to just trust the fees earned from the Inc B alone. But maybe there’s some governance solution where governance approves a set of Bs whose tickets are included in the protocol-observed dataset?

This all depends on the client implementation, though. In the past, the community has presented evidence of the Binance O receiving real PM tickets from Livepeer Inc Bs (not self-dealing) despite not transcoding.

(I’m having trouble finding the Discord post but I bet @Authority_Null or @Wiser have the info)

2 Likes

The last big discussion around this happened a few months ago.
Not sure if this link will take you to the right place in Discord but…

2 Likes

Sorry @dob, but trying to frame this as us criticising the “work token model” is short-sighted, and frankly wrong. It’s time to accept that Livepeer’s current model, while at the frontier of protocol design back then, might not completely suffice anymore today, nor will it at scale.

I already warned the community in 2019 that prolonging inflation without changing the core protocol mechanics would have bad outcomes. Yet the vote was swayed by bad actors such as Bison Trails and Multiclown capital, who would happily take more tokens. What I said would happen has literally played out: four years later, Bison Trails got more tokens without having run a single GPU, and so did Multiclown, who has since dumped on retail; it has by now come out that they were one of the main grifters in crypto.

In spirit, @Thulinma is very right: stake is an obligation to perform work, not a one-way street. So let’s not shoot down those arguments with implementation details that come from thinking within the constraints of the current system. That only further indicates that change is needed to improve those constraints and create a better model. (In fact, I’ve already proposed a reasonably sybil-resistant mechanism in the past, i.e. fishermen.)

1 Like

@NicoV @dob

If I’m understanding this conversation correctly, the crux is that the Livepeer protocol does not currently enforce economic penalties for not doing work, because it’s not really aware of what “work” is. This is not working, because economically powerful entities can maintain a slot in the top 100, earn LPT, and do nothing.

It also depends on broadcasters to police Os who don’t do work, by not sending them streams. This is working (the questions about Binance’s O notwithstanding); it will become a more powerful mechanism in a many-B world.

Re: the first problem (no economic penalties for not doing work), I thiiink I agree that this is a protocol-level problem. Slashing is a critical component of PoS systems, and in most systems you can clearly measure the validation work that’s done. I fully agree that we can’t rely on Livepeer Inc test streams to determine whether an O is doing work (lots of problems there, including the possibility that someone configures an O to accept only test streams) - but this feels like a clear problem with the economics of the protocol.

4 Likes