50% delegation threshold

How would you decide which orchestrators don't get rewards? This would have a massive side effect, like delegators leaving because their LPT rewards aren't guaranteed, or all the stake moving to a couple of orchestrators. In this scenario I can see a massive sell-off of LPT.

Sell pressure is okay when it's needed. If there is low demand, it's normal that LPT would be, and should be, sold off somewhat. Unnecessarily inflating the token, however, because you want to keep all 100 orchestrators alive in a low-demand world, is more hurtful to the project in the long run, imo.
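For context on the "50% delegation threshold" being discussed, here is a rough sketch of a participation-targeted inflation rule as I understand Livepeer's to work: each round, the per-round inflation rate steps up or down depending on which side of the target bonding rate the actual participation sits. The step size and starting rate below are illustrative placeholders, not the on-chain values.

```python
TARGET_BONDING_RATE = 0.50   # the "50% delegation threshold"
INFLATION_CHANGE = 0.00005   # illustrative per-round step, not the real parameter

def next_inflation(current_inflation: float, bonding_rate: float) -> float:
    """Return the next round's per-round inflation rate."""
    if bonding_rate > TARGET_BONDING_RATE:
        # Enough stake is bonded: rewards can shrink.
        return max(0.0, current_inflation - INFLATION_CHANGE)
    elif bonding_rate < TARGET_BONDING_RATE:
        # Too little stake bonded: raise rewards to attract delegation.
        return current_inflation + INFLATION_CHANGE
    return current_inflation

# Example: with 60% of supply staked, inflation ticks down one step.
print(next_inflation(0.02, 0.60))  # ~0.01995
```

The debate in this thread is essentially about whether the target (or the resulting issuance) should also respond to demand, not just to the bonding rate.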

Look, ok, let’s say the current situation is something like

O1: 1000 infl. LPT
O2: 990 infl. LPT
O99: 20 infl. LPT
O100: 10 infl. LPT

When there is low demand, if we decrease the threshold from 50% to some other number (which would be determined in the code as a function of demand), inflation is decreased, and let's say it goes:

O1: 500 infl. LPT
O2: 445 infl. LPT
O99: 10 infl. LPT
O100: 5 infl. LPT

Now, the lower-ranked O's probably wouldn't survive on a lower amount of LPT and would need to shut down temporarily, and as you said, there could be sell pressure on LPT, or there could be more staking with the higher-ranked O's. That's okay though. Demand is low anyway; fewer O's would be able to perform the same work. When demand picks back up, the inflationary threshold is increased and we go back to the first scenario, in which we slowly get 70 or 80 or 100 O's again.
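The scenario in the two tables above can be sketched as total inflationary LPT scaling with a demand factor, with each orchestrator's share proportional to its stake. All numbers are the illustrative ones from this post, not real network values.

```python
def rewards(stakes: dict, total_inflation: float) -> dict:
    """Split a round's total inflationary LPT pro rata by stake."""
    total_stake = sum(stakes.values())
    return {o: total_inflation * s / total_stake for o, s in stakes.items()}

# Illustrative stakes matching the example tables above.
stakes = {"O1": 1000, "O2": 990, "O99": 20, "O100": 10}

# High demand: full issuance, matching the first table.
high_demand = rewards(stakes, total_inflation=2020)

# Low demand: issuance halved, so every orchestrator's reward halves too.
low_demand = rewards(stakes, total_inflation=1010)
```

The point of contention in the thread is the second call: halving issuance halves everyone's rewards proportionally, and the smallest orchestrators (O99, O100) are the ones pushed below their operating costs first.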

Why are we trying to keep the number of O’s constant at 100? Why is that number not variable depending on market conditions?

Once again you are:

  • Ignoring Delegators entirely in your proposals
  • Pretending like Orchestrators are the ones receiving all of these inflationary LPT rewards
  • Not explaining how having more inflation when there is more demand is a good thing, given that increasing demand is the entire goal here
  • Pretending like the only value Orchestrators add is transcoding capacity

I’ve gone over all these points in my earlier post


No, I’m not. If a delegator’s orchestrator shut down because of lower rewards in my scenario (please see my reply to Pon above), then he might decide to stake with a higher-ranked O, or he might decide not to delegate at all and sell. That’s what Pon said in his post above (that there would be sell pressure), and I agreed and said that’s okay if there is sell pressure when there is low demand; there would probably be fewer delegators in that scenario as well.

No I’m not. Same answer as above.

Of course higher demand is the ultimate goal. What we are discussing here, though, is the compensation in a low-demand vs. high-demand environment. When demand is low, fewer O’s would be enough. When demand increases, the network would need more GPU power, right? So instead of, say, 40, more orchestrators (maybe even 100 again) would be needed to come into the network and do work. So more LPT would need to be generated for them and their delegators. Please see my example scenario in the post above.

Look, as much as there might be reasons not to do my suggestion (I don’t really see them), I believe it would be healthier for the network. The only obvious trade-off would be more centralization. That’s it. Otherwise, the network doesn’t need as much inflationary LPT in a bear market (low demand) as it does in a bull market, and if that meant fewer orchestrators would be online for the network until demand picks up, then so be it. That’s what I think.

Look, as much as there might be reasons not to do my suggestion (I don’t really see them)

I think that says enough then, doesn’t it? Your proposal is the opposite of why LPT inflationary rewards are a feature of the protocol in the first place. You would be fine running the risk of alienating a significant chunk of the stakeholders in the Livepeer protocol (whom you would still need to get on your side to approve any changes to the smart contracts) and losing the knowledge, experience, and effort they are putting into actively making a difference in generating demand. Pandering to people who only hold LPT for speculation is not the way to get there.

That’s a big trade-off for a decentralized platform. It’s an even bigger trade-off if, as you seem to be proposing, the network runs at close to 100% capacity. If a single node goes down, the percentage of capacity lost is significantly more detrimental to the network.

I have been in the streaming media business for 25+ years, and I can’t stress enough how important a robust, reliable, and redundant network is to successful media streaming on the Internet. Depending on the job and client, we generally have 2-3x the expected capacity required available and live.


Your proposal, in essence, suggests reducing the complexity of the network in order to mitigate inflation. While it may be agreed upon that continuous minting of LPT is not the optimal approach, operating the network at a minimal level would be highly detrimental. In such a scenario, stake would become centralized among a select few entities, granting them significant control over the voting process and the lion’s share of rewards, while others would struggle to participate.

The primary objective for all stakeholders is to enhance the demand for the network. It would be exceedingly challenging to convince potential users to test the network for their applications if we lack the necessary capacity and redundancy. I can envision a state of panic as everyone scrambles to scale up their operations or join the network as new nodes, attempting to accommodate or compensate for the increased demand. This approach is clearly illogical from a business standpoint. Furthermore, as noted by @papa_bear, if one node were to fail, the entire network would collapse in a cascading fashion due to overwhelming strain on the remaining nodes.

Perhaps a more viable alternative would be to introduce a burn mechanism, whereby a small fraction of LPT is permanently removed from circulation with each transaction. For instance, an arbitrary amount like 1 LPT could be burned for every transaction.
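A minimal sketch of the burn idea above: remove a fixed amount of LPT from circulating supply on every transaction. The 1 LPT figure is the arbitrary example from the post, not a calibrated number, and a real mechanism would need to decide who pays the burn and how it interacts with issuance.

```python
BURN_PER_TX = 1  # LPT burned per transaction (illustrative, not calibrated)

def apply_burns(circulating_supply: float, tx_count: int) -> float:
    """Circulating supply after tx_count transactions, floored at zero."""
    return max(0.0, circulating_supply - BURN_PER_TX * tx_count)

# e.g. 30M LPT circulating, 10k transactions -> 29,990,000 LPT remain.
remaining = apply_burns(30_000_000, 10_000)
```

A flat per-transaction burn offsets inflation only to the degree that transaction volume tracks demand, which loops back to the thread's open question of how to measure demand in the first place.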

I strongly recommend that you engage in further research and strive to gain a deeper understanding of the intricacies of the system.

We all recognize that high inflation is bad for the token. The current system is designed to have LPT fill the gap left by the lack of ETH earnings, with the goal of reducing inflation over time as demand increases and the need for LPT inflation diminishes.

This proposal would increase inflation in the long term. What do we do when the protocol matures and we find mainstream adoption? Wouldn’t endless amounts of inflation be an even bigger problem then? We want less LPT inflation when there is actual demand, and OP conveniently keeps ignoring the role of LPT inflation in the first place and giving his own take on what it’s supposed to be doing. It is not meant to be the main source of income, but sadly, at the moment, that is the case due to the lack of ETH earnings.

It’s just not thought out that well. Drastically changing the inflation will cause big issues, as existing Orchestrators will raise their reward cut to stay viable. Rewards for participants of the network will plummet, and so will participation, along with the token price. Smaller Orchestrators, who again are a very minor ‘cost’ but bring tons of benefits to the protocol, will not be viable and will quit.
Why are we trying so hard to unload all of the human capital acquired over the years?

In layman’s terms:

  • We currently have two carrots, LPT and ETH earnings
  • ETH earnings still need time to grow and cannot sustain the community
  • LPT earnings are poisonous in the long term

The proposal would:

  • Cause starvation, without proper consideration of those side effects
  • Increase the size of the poisonous LPT carrot together with demand, rather than the other way around

My only question is: how do we objectively evaluate “market conditions” or the “amount of work on the network”?

Market conditions as in price? In that case, would we build an AMM oracle and adjust inflation based on trading pairs?

And as for network usage, all transcoding is done off-chain, so how can we determine how much work is on the network?

I have an idea for an answer to this, but I need to know more. For example, how do orchestrators charge for their work in ETH terms? Is it something like “I’ll charge xxx Ethers for 1 million pixels”, or how exactly do you quote your price? Each orchestrator is free to charge whatever they’d like, correct? And is it like a free market? A dapp can choose whichever O it wants to work with, correct? And it’s also about geographical proximity, right? I also remember something like more LPT staked to an O gets it more work? Is this correct?
Thank you.
So where I’m trying to get to here is: if we know how much each O charged for their work, we could divide the payments they received (which are available on-chain) by their price and maybe approximate the number of pixels they transcoded.

We charge per pixel. For instance, Titan Node charges 899 wei per pixel. You can view each Orch’s price on the Livepeer Explorer. However, this is only what is publicly displayed to the explorer node based in NY. Some Orchs may have different prices per region or prices per Broadcaster.
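The estimation idea above can be sketched directly: divide an orchestrator's on-chain fee income by its advertised price per pixel (e.g. the 899 wei/pixel figure mentioned for Titan Node) to approximate pixels transcoded. This is only a rough approximation, since prices can differ per region or per Broadcaster, and winning-ticket payouts are probabilistic, so short windows will be noisy.

```python
def estimate_pixels(fees_wei: int, price_wei_per_pixel: int) -> int:
    """Approximate pixels transcoded from on-chain fees at an advertised price."""
    return fees_wei // price_wei_per_pixel

# e.g. 1 ETH (10**18 wei) of fees at 899 wei/pixel -> roughly 1.1e15 pixels
pixels = estimate_pixels(10**18, 899)
```

As the later replies point out, even a perfect estimate of paid pixels can't distinguish organic demand from an orchestrator paying itself for fake work.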

Yes total control of the price they charge.


Yes Broadcasters can select specific Orchs, a list of Orchs, or use the entire list of 100 Orchs.

Technically no, it’s about performance: you need to be faster than real time. However, proximity will likely play a large role in this. But if you’re transcoding on a machine made in 1999, then it won’t matter how close you are :wink:

If the Broadcaster uses the default selection algorithm that is shipped with the public Livepeer repo then yes stake is a large factor in selection.

So would you think this would work? @Titan-Node

Perhaps, but you would need to regularly ping all the nodes on the network, ask for updated prices, and keep track of all that data. I’m not sure if a node will tell you what its price per Broadcaster is, though. @0xB79 can we poke a node and extract all the Price Per Broadcaster settings from it?

All this is great but we still run into the issue of people just sending themselves work and claiming “Hey I just did a billion hours of transcoding and won 100 ETH” to which they just paid themselves.

This is the main issue with tying any incentives to “work done”. How do we know any is organic?

If inflation was tied to ETH payments I would just recycle 10 ETH in payments every 10 seconds and increase inflation to infinity just for the lolz. :stuck_out_tongue:

I see. I guess there is no real way of calculating the actual demand then. Which means even the O that appears to do the least work may actually be doing a lot of transcoding (but perhaps getting paid little ETH because it’s the bear season) and may indeed need to be supported by LPT rewards so that it can continue doing what it’s doing.

One thing though: I remember you saying payments are down but the number of transcoded pixels is stable. How do you know that? I guess you were just talking about your own node, correct?

Making it even more complicated is the probabilistic micropayment system. @obodur you can read more about it here… Streamflow: Probabilistic Micropayments | by Yondon Fu | Livepeer | Medium. So it’s possible that an orch can do a lot of work and receive no payment, or the opposite: do very little work and receive several payments. In theory, and mostly in practice, it usually works out that orchs are paid close to their statistically expected amount over time, but it’s common to be either ahead of or behind this value for periods of several months.
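The noisiness described above can be illustrated with a toy simulation of the probabilistic-micropayment idea from the linked Streamflow post: each unit of work earns a lottery "ticket" with face value V that pays out only with probability p, so expected pay per ticket is p * V, but any given period can land well above or below that. The numbers below are purely illustrative.

```python
import random

def simulate_earnings(tickets: int, face_value: float, win_prob: float,
                      seed: int = 42) -> float:
    """Sum the payouts from winning tickets in a toy micropayment lottery."""
    rng = random.Random(seed)
    return sum(face_value for _ in range(tickets) if rng.random() < win_prob)

# Illustrative parameters: 10,000 tickets, 0.5 ETH face value, 0.1% win chance.
tickets, face_value, win_prob = 10_000, 0.5, 0.001
expected = tickets * face_value * win_prob   # 5.0 expected
actual = simulate_earnings(tickets, face_value, win_prob)
# 'actual' will typically deviate from 'expected' by a few tickets' worth,
# which is why short-window fee totals are a noisy proxy for work done.
```

Over many periods the realized total converges toward the expected value, but as the post notes, an orchestrator can run ahead of or behind expectation for months at a time.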