Transcoder Campaign: Captain Stronk

Captain Stronk: the Orchestrator with community values and a mission to make the Livepeer network more robust


:wrench:   In need of open source contributions?
:globe_with_meridians:   Looking for a global high performing orchestrator?
:chart_with_upwards_trend:   On the hunt for low commission rates?
:rocket:   Captain Stronk comes to the rescue!


Stake with Captain Stronk


About Stronk

Hi! I’m a full stack video engineer at OptiMist Video. My main responsibility is working on MistServer, an open source, full-featured, next-generation streaming media toolkit.
Besides running this Orchestrator, I have a passion for making music, maxing out my level on Steam and cuddling with my cats on the couch.

Why stake with Captain Stronk?

  • We have been active since December 2021 and have never missed a reward call
  • To support our contributions to the Livepeer ecosystem
  • We have a professional setup with monitoring, alerts and high uptime. We have consistently been in the top 10 performers and will scale operations as needed to keep it that way
  • Our goal is to run at an 8% reward cut and 30% fee cut. This way our delegators earn high returns, while we earn enough revenue to fund our operations and community contributions. We will lower our reward commission over time as our stake increases
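As an illustration of how these two cuts split earnings between the node and its delegators, here is a minimal sketch. The totals are hypothetical numbers, not actual node figures:

```python
# Sketch of how Livepeer reward and fee cuts split earnings.
# All amounts below are hypothetical illustrations.

def split_rewards(total_reward_lpt, reward_cut):
    """Orchestrator keeps reward_cut; delegators share the rest pro rata."""
    orchestrator = total_reward_lpt * reward_cut
    delegators = total_reward_lpt - orchestrator
    return orchestrator, delegators

def split_fees(total_fees_eth, fee_cut):
    """Orchestrator keeps fee_cut of the transcoding fees earned."""
    orchestrator = total_fees_eth * fee_cut
    delegators = total_fees_eth - orchestrator
    return orchestrator, delegators

# Example: 100 LPT of minted rewards at an 8% reward cut
orch_lpt, deleg_lpt = split_rewards(100.0, 0.08)  # 8 LPT node / 92 LPT delegators
# Example: 1 ETH of fees at a 30% fee cut
orch_eth, deleg_eth = split_fees(1.0, 0.30)       # 0.3 ETH node / 0.7 ETH delegators
```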

:rocket: In short: we are setting a gold standard for reliability and performance, and setting an example of how orchestrators can contribute back to the Livepeer ecosystem


Stake with Captain Stronk


Our setup

We are located in Leiden, with a dedicated Livepeer machine and gigabit up/down connection. We are also cooperating with other high performing orchestrators to ensure high availability across the globe while keeping our costs as low as possible.

We have the following GPU’s connected and ready to transcode 24/7:
1x GTX 1070    @ Boston
1x RTX 3080    @ Las Vegas
3x GTX 1070Ti  @ Leiden
1x Tesla T4    @ Singapore

:bulb: Tip: You can visit our public Grafana page for advanced insights


Stake with Captain Stronk


Published Projects

As part of our mission to make the Livepeer network more robust, we maintain the following projects:

:wrench: Tip: Feel free to request custom development work if you have need of something which can benefit the Livepeer network as a whole

Orchestrator API and supplementary explorer - link

A useful data indexing tool which provides information on events happening on Livepeer's smart contracts. The API aggregates data from the Livepeer subgraph, blockchain data, smart contract events and more. This data gets cached so that, even if one of these data sources has issues, the API can always return the latest known info without interruption to service.
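The "serve the last known data" pattern described above can be sketched as a small caching wrapper. This is a minimal illustration, not the actual API code; `fetch_fn` stands in for any upstream source (subgraph, RPC node, price API):

```python
import time

# Minimal sketch of a cache that keeps serving stale data when the
# upstream source has issues. fetch_fn is any callable returning data.

class CachedSource:
    def __init__(self, fetch_fn, max_age=60.0):
        self.fetch_fn = fetch_fn
        self.max_age = max_age
        self.value = None
        self.fetched_at = 0.0

    def get(self):
        now = time.monotonic()
        if now - self.fetched_at < self.max_age and self.value is not None:
            return self.value                # fresh cache hit
        try:
            self.value = self.fetch_fn()     # refresh from upstream
            self.fetched_at = now
        except Exception:
            pass                             # upstream down: keep stale value
        return self.value

# Usage: a flaky source still returns its last good answer
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] > 1:
        raise RuntimeError("source offline")
    return {"lpt_price": 6.1}

src = CachedSource(flaky, max_age=0.0)   # max_age=0 forces a refresh attempt
assert src.get() == {"lpt_price": 6.1}   # first call hits the source
assert src.get() == {"lpt_price": 6.1}   # source now fails; stale value served
```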

:construction: Status: The prototype has been running for over a year now. A major refactor is currently in progress, as well as a redesign of the frontend.
The new version will be implemented generically (extensible to projects other than Livepeer) and will buffer state over time (allowing queries of the state at a specific point in time)

Stronk Broadcaster - preview

We are working on turning our Livepeer machines into a CDN and providing a simple website where users can trial the Livepeer network. The website allows visitors to stream into the network from the browser and to view and share these streams with anyone. It includes basic statistics, like the delay of the transcoded video tracks versus the source track.

:construction: Status: Global ingest and delivery of live media is enabled. Side-by-side comparison of WebRTC streamed from the browser versus what viewers receive is also functional. Next up: enable streaming through a canvas to allow compositing video tracks and adding overlays or other custom artwork

Dune Livepeer Dashboard - link

Provides statistics on events happening on Livepeer's smart contracts

:ballot_box_with_check: Status: Finished, exploring new stats to add

Orchestrator Linux setup guide - link

Closes the knowledge gap for Windows users who want to improve their setup by running their Orchestrator operations on a Linux machine

:construction: Status: First public release. We have received plenty of feedback from orchestrators, which we will use to cover more information

Orchestrator Discovery Tracker - preview

A pretend Broadcaster which runs discovery checks against all active Orchestrators and makes the response times publicly available. These are the same requests a go-livepeer Broadcaster makes when populating its list of viable Orchestrators
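A sketch of such a discovery check: time a probe per Orchestrator and rank the reachable nodes by latency. `probe()` here is a stand-in for the real discovery request, and the node names are made up:

```python
import time

# Hedged sketch of the discovery-check idea: time a probe to each
# Orchestrator and rank reachable nodes by response time.

def time_probe(probe):
    start = time.monotonic()
    try:
        probe()
        ok = True
    except Exception:
        ok = False
    return ok, time.monotonic() - start

def rank_orchestrators(probes):
    """Return (name, latency) pairs for reachable nodes, fastest first."""
    results = []
    for name, probe in probes.items():
        ok, latency = time_probe(probe)
        if ok:
            results.append((name, latency))
    return sorted(results, key=lambda r: r[1])

def unreachable():
    raise OSError("connection refused")

# Usage with dummy probes standing in for real network calls
probes = {
    "orch-a": lambda: time.sleep(0.01),
    "orch-b": lambda: time.sleep(0.05),
    "orch-down": unreachable,
}
ranking = rank_orchestrators(probes)
assert [name for name, _ in ranking] == ["orch-a", "orch-b"]
```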

:raised_hand: Status: Prerelease. On hold while we work on the above projects


Stake with Captain Stronk


Shoutouts

The Livepeer network has an awesome community of Orchestrators. In case you are not yet convinced to stake with Captain Stronk, the following Orchestrators are also a good spot to stake your LPT:

titan-node.eth: the face of Livepeer and also a public transcoding pool. Hosts a weekly water cooler chat in the Livepeer Discord to talk Livepeer, Web3 or tech in general
video-miner.eth: transcoding pool run by a group of Orchestrators (including myself)
xeenon.eth: a web3 media platform and promising new broadcaster on the network

6 Likes

Stonk is stronk. Stake with stronk :handshake:

1 Like
Update: progress report on the Stronk Broadcaster project

Besides global transcoding using our hosted Orchestrators and Transcoders, captain-stronk.eth is now capable of global ingest and delivery of live media.


Up next

Create a webpage to manage streams, stream into the network using WebRTC and view statistics

4 Likes

Update to the Stronk Broadcaster project


A few weeks ago the Stronk Broadcaster experiment was established: a CDN powered by MistServer, combined with Livepeer nodes to enable low latency livestreaming from the browser

Powered by the broadcast-in-browser component of StreamCrafter, a new project from the OptiMist Video team (the maintainers of MistServer), you can now visit a prototype to ‘craft’ livestreams from the browser

Disclaimer: This is a very early release, so expect some unfinished UI elements or instability. The supporting infrastructure is rolled out on a small scale and can only support a small number of streams and viewers.
Note that switching to another tab during a broadcast can cause the browser to pause the broadcast to save resources

Features Included

  • Input
    • Camera + microphone
    • ScreenShare + audio
  • Edit
    • Processing input tracks into a new stream (size, position, opacity)
  • Output
    • Broadcasting using WebRTC to the Stronk CDN
  • View
    • Share a link to your stream

Short Term Roadmap

  • UI upgrades
  • Input
    • Camera modes: facing, environment
    • images/gifs
  • Edit:
    • add overlays or logos
    • simple free hand drawing
  • Output
    • Ingesting as MKV via HTTP PUT requests
    • Streaming to custom targets (rather than just my CDN)
    • Record to local file

Long Term Roadmap

  • More UI upgrades
  • Mobile compatible UI
  • Input:
    • Other live streams
    • Tracks emitted from WebRTC peers
  • Edit:
    • Generate audio effects

Other areas being researched

  • web3 integration: login with metamask, dstorage, dcdn
  • supporting a VoD workflow rather than live only
  • add (local) audio clippings as a (loopable) input source
  • transcode audio between e.g. Opus/AAC before broadcasting
  • Using low-level WebCodecs API to support more inputs and outputs

If there are any feature requests, feedback, ideas or questions on how this all works, ask away on Discord or in this Forum post

4 Likes

Introducing the latest expansion of the renowned Stronk operation! We are excited to share our latest venture: operating a hive of Swarm Bees

Our integrated stack combines the strengths of MistServer (CDN, Ingest), Livepeer (Transcoding), Swarm (Storage, Messaging), and an innovative browser-based broadcaster.

Stake to captain-stronk.eth and support us on an exciting journey as we merge cutting-edge technologies and explore the potential of this groundbreaking combination. Together, they create an unparalleled ‘Media Gateway’ that will redefine the future of media experiences.

1 Like

Hey there, readers and delegators! Today, we’re (again) diving into the realm of MistServer, Livepeer, and Swarm.

First of all, while combining MistServer, Livepeer and Swarm opens up a world of possibilities, building on cutting-edge technologies also brings risks. Are all potential use cases viable at this point in time? For instance, can we use Swarm for live media delivery or will it introduce too much delay compared to a traditional CDN? What about storing segments for extended periods of time on Swarm? Is it worth the cost when we can self-host S3 storage using MinIO?

These are complex questions which will be explored over time, but let's focus on one exciting aspect which is a bit easier to manage: Swarm's messaging capabilities, which allow you to send data or messages directly to another ETH address

For the MVP, we will bring together all individual parts of our work over the last few months and make our first steps towards a Web3-powered Media Gateway

Operation QuickLink

Running infrastructure for live media and video on demand is a lot of work. We’re talking about deploying nodes, topping up Swarm stamps, balancing server load, etc. Thankfully, Livepeer node operators have plenty of surplus capacity. Wouldn’t it be great if our delegators could use some of that capacity to publish their own media?

Well, how do we make this happen? Let’s start by simplifying things. Instead of relying on a traditional database, we’re going to make use of the blockchain directly. The delegator’s ETH address becomes their account, and their LPT stake determines their pecking order. No more hassle with password resets or lengthy registration forms.
When a delegator wants to view or publish using our infrastructure, they are free to do so as long as there is capacity for more streams or more viewers. If there is no capacity, they have to wait for a slot to open up.
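The admission logic described above (ETH address as account, LPT stake as pecking order, capacity as the gate) could be sketched as follows. Addresses, stake values and the capacity limit are all illustrative:

```python
# Sketch of blockchain-as-database admission: the delegator's ETH
# address is the account and stake decides priority. Numbers are made up.

def admit(requests, stakes, capacity):
    """Admit up to `capacity` addresses, highest LPT stake first."""
    ranked = sorted(requests, key=lambda addr: stakes.get(addr, 0.0), reverse=True)
    return ranked[:capacity], ranked[capacity:]   # (admitted, waiting)

stakes = {"0xAAA": 500.0, "0xBBB": 2000.0, "0xCCC": 50.0}
admitted, waiting = admit(["0xAAA", "0xBBB", "0xCCC"], stakes, capacity=2)
assert admitted == ["0xBBB", "0xAAA"]   # highest stake wins the slots
assert waiting == ["0xCCC"]             # waits for a slot to open up
```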

Our initial MVP follows a controller-worker architecture. The node operator deploys a media-cluster, which consists of a central media-oracle node and one or more media-workers. These workers play a vital role, combining MistServer, a Swarm Bee, and a Livepeer Broadcaster. The Livepeer Broadcaster takes care of transcoding, while MistServer handles ingest, stream replication, and content delivery. The Swarm Bee enables secure communication and control within the media-cluster.

The media-oracle acts as the gateway for both viewers and publishers. It hosts a user-friendly webpage that makes publishing a breeze. Publishers can easily composite inputs, add overlays, mix audio, and publish directly from their browsers.
It’s all about providing a smooth and secure experience for everyone involved

Stay tuned for more info on the development of the MVP

1 Like

Integration with Swarm for content delivery

Standards for building a media pipeline are quite high nowadays. People are spoiled by platforms like YouTube, Twitch and Netflix providing them with content which loads instantly, on any device and at low latency.

To meet modern standards, MistServer is used for ingest and content delivery. This way we can:

  • Do realtime audio transcoding to support low latency streaming.

WebRTC uses Opus audio, while most ingests use AAC audio. This requires realtime audio transcoding. Similarly, if WebRTC is used for ingest, audio needs to be transcoded to AAC to support delivery using HLS

  • Support tons of transport protocols

Depending on the network stability and device type, different transport protocols are more suited for content delivery. By leveraging the full potential of MistServer we can provide a pleasant viewing experience under any condition

  • Transmux to different containers

A transport protocol only supports a limited set of containers. For example, HLS only supports TS or MKV segments. Having a proper media server in the stack means we can transmux our content before delivery to enable any kind of transport protocol
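The codec and container constraints above boil down to a compatibility lookup when deciding whether a stream needs transmuxing or audio transcoding. A minimal sketch; the support table is illustrative and far smaller than a real media server's matrix:

```python
# Illustrative (simplified) compatibility table for the delivery paths
# discussed above; real support matrices are much larger.

SUPPORTED = {
    "HLS":    {"containers": {"TS", "MKV"}, "audio": {"AAC"}},
    "WebRTC": {"containers": {"RTP"},       "audio": {"Opus"}},
    "WS/MP4": {"containers": {"MP4"},       "audio": {"AAC"}},
}

def plan_delivery(protocol, source_container, source_audio):
    """Decide whether a stream needs transmuxing and/or audio transcoding."""
    caps = SUPPORTED[protocol]
    return {
        "transmux": source_container not in caps["containers"],
        "transcode_audio": source_audio not in caps["audio"],
    }

# An RTMP-style ingest (FLV container, AAC audio) delivered over WebRTC
# needs both a repackage and an Opus audio transcode:
assert plan_delivery("WebRTC", "FLV", "AAC") == {"transmux": True, "transcode_audio": True}
# The same source over HLS only needs transmuxing into TS/MKV segments:
assert plan_delivery("HLS", "FLV", "AAC") == {"transmux": True, "transcode_audio": False}
```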

A big part of this project is exploring the viability of a fully decentralised workflow, with a CDN powered by Swarm alone. Since Swarm is not built for delivering media content, there are lots of unknowns: Can we get the latency low enough? How does the cost compare to simply self-hosting S3 compatible storage using MinIO? How easy is it to integrate? What is the device support like without being able to transmux content?

There are two approaches we can take here:

File based approach

The most obvious option is a file based approach: segmenting the media into MKV or TS segments and writing out an HLS playlist, which can then be shared with viewers. This method inherently has a high latency and might come with significant storage costs, but would be a fairly robust solution with good device support
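The file based approach can be illustrated with a minimal HLS media playlist generator. The `bzz://` segment URIs are hypothetical placeholders for Swarm references, not a real addressing scheme:

```python
# Minimal sketch of the file-based approach: segment references written
# into a live HLS media playlist. Segment URIs are placeholders.

def hls_playlist(segments, target_duration=6, media_sequence=0):
    """segments: list of (uri, duration_seconds) tuples."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"

playlist = hls_playlist([
    ("bzz://feedhash/seg0.ts", 6.0),   # hypothetical Swarm reference
    ("bzz://feedhash/seg1.ts", 6.0),
])
assert playlist.startswith("#EXTM3U")
assert "#EXTINF:6.000," in playlist
```

A live playlist like this is re-published as new segments land, which is where the inherent latency of the approach comes from: a viewer is always at least a segment or two behind.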

Streaming workflow

Since Swarm also supports feeds, we want to explore streaming protocols, like WebM or MP4 over WebSocket, and deliver those using Swarm feeds. In theory this would enable low latency streaming, but since we would be the first to try this approach there are no guarantees this method is stable enough. One thing we want to measure with this approach is the jitter: high jitter would mean that players still need to keep a fairly significant distance from the live point so no interruptions in playback occur
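The jitter measurement could be as simple as looking at the spread of chunk inter-arrival times, which directly suggests how far from the live point a player should buffer. A sketch with hypothetical arrival timestamps:

```python
import statistics

# Sketch of the jitter measurement: compare chunk inter-arrival times
# against the expected cadence. Arrival times (seconds) are made up.

def interarrival_jitter(arrival_times):
    """Standard deviation of the gaps between consecutive chunk arrivals."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return statistics.pstdev(gaps)

def min_safe_buffer(arrival_times, sigmas=3.0):
    """A rough buffer target: mean gap plus a few standard deviations."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return statistics.mean(gaps) + sigmas * statistics.pstdev(gaps)

steady = [0.0, 1.0, 2.0, 3.0, 4.0]   # ideal 1 s cadence: zero jitter
bursty = [0.0, 0.5, 2.1, 2.4, 4.0]   # same average rate, high jitter
assert interarrival_jitter(steady) == 0.0
assert interarrival_jitter(bursty) > interarrival_jitter(steady)
assert min_safe_buffer(bursty) > min_safe_buffer(steady)
```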

Both of these approaches are being implemented in MistServer initially by directly interfacing with a local Swarm Bee, as this would allow us to get started quickly and get some hard data on the viability of Swarm as a CDN. If the results are good, the browser based media studio StreamCrafter will also be integrated with Swarm directly. This would enable publishing and viewing directly from one webapp, which can be hosted on dStorage as well for a fully decentralised media platform

TL;DR

In conclusion, with MistServer we can achieve low latency streaming, support multiple transport protocols, and seamlessly transmux content. We are working towards enhancing the media pipeline by integrating Swarm for content delivery. However, we can’t achieve this vision alone. We invite you to support our research and development efforts by staking to our node. Join us in shaping the future of media delivery today.

2 Likes

Web2 vs Web3: Exploring Storage and Messaging Solutions

In earlier posts we talked about the potential of Swarm as a storage and messaging solution for web3 applications. However, as this is still cutting edge technology there are unknowns in terms of availability, data persistence, price, features and performance.

In order to draw meaningful conclusions about the viability of Swarm as a platform for media pipelines, comparisons have to be made with traditional solutions. To that end we have added two pieces to our media-cluster:

Storage

Storage is required to record livestreams and upload videos. MistServer supports replication between nodes and creating clips from streams effortlessly, but in order to store content for an indefinite duration, a dedicated storage solution is required

MinIO is an S3 compatible storage layer which can easily be self-hosted and provides redundancy over multiple regions. There's a cost to this, as you have to host a server with plenty of storage in each region. However, once it is running, it is trivial to read from and write to this storage layer at no additional cost, and to manage access control.

This will provide us with a baseline to which we can compare Swarm’s storage layer and allows us to offer multiple options depending on the users’ preferences

Messaging

Communication between media-workers and the media-oracle in a cluster can be sensitive. In order to make deploying and managing nodes easier we want a solution where media-workers can broadcast their availability to the oracle, send triggers to the API and receive trigger responses from the API. media-workers can also sync their config between each other using this architecture

Messaging over Swarm requires premining trojan chunks and has a cost, as a stamp has to be attached to each message. We expect this to be more robust than relying on a small set of self-hosted MQTT brokers, and it can be very useful in the discovery process. At the same time, it's important to have a fallback solution to manage costs for high volume channels, like the triggers coming from media-workers

For our stack we opted to roll out Eclipse Mosquitto, an open source message broker that implements the MQTT protocol. This will give our stack an additional layer of communication besides Swarm and direct HTTP requests between nodes
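As a sketch of how availability broadcasts, triggers and responses could be routed over a broker like Mosquitto, here is MQTT-style topic matching applied to an illustrative topic layout (the topic names are assumptions, not the actual Stronk scheme):

```python
# Hedged sketch of a possible media-cluster topic layout, plus
# MQTT-style wildcard matching as a broker performs it.

def topic_matches(pattern, topic):
    """MQTT matching: '+' matches one level, '#' matches the remainder."""
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True
        if i >= len(t_parts):
            return False
        if p != "+" and p != t_parts[i]:
            return False
    return len(p_parts) == len(t_parts)

# The oracle could subscribe to all worker traffic at once:
assert topic_matches("cluster/+/trigger", "cluster/worker-eu-1/trigger")
assert topic_matches("cluster/#", "cluster/worker-us-2/availability")
# A worker only listens for responses addressed to itself:
assert topic_matches("cluster/worker-eu-1/response", "cluster/worker-eu-1/response")
assert not topic_matches("cluster/worker-eu-1/response", "cluster/worker-us-2/response")
```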

TL;DR

In conclusion, we rolled out MinIO for storage and Mosquitto for messaging in our media-cluster. These additions provide a baseline for comparing Swarm with traditional solutions.
We invite you to stake to our node and support our research and development as we explore the options for a fully-featured, web3 powered media pipeline

:movie_camera: Status update on the Stronk Broadcaster project :film_projector:


Hey everyone! Here to share a brief update on the Stronk Broadcaster project, which has been progressing nicely over the past weeks

Setting up a media pipeline consists of many different parts which all need to come together. You'd want a stack which is easily deployable, maintainable and scalable. It needs to support features like recording, low latency livestreaming, access control and modern transport protocols.

But, even if you manage to create the best media pipeline in the world, that is not enough in and of itself to generate more demand for the network. We believe that the most important piece is the interface where creators and their followers can find each other and interact

A couple of months ago we introduced the StreamCrafter, an in-browser broadcasting studio to compete with the likes of OBS studio.
Powered by the Stronk Media pipeline we’ve been building up over the past few months, this is the place where content creators can let their imagination run wild and livestream from any device straight from the browser
When doing user testing with the prototype, we got tons of valuable feedback on what worked well and what didn’t. People loved how easy it was to do low latency streaming from the browser and how you can composite multiple screenshares, cameras and other assets onto a single canvas to broadcast.
What people didn't like was the interface. This has been our main focus for the past few weeks, and our internal release now has a completely rewritten interface and an additional TCP-based ingest method, which will provide an even more robust experience for broadcasters and viewers using the StreamCrafter

But wait, there’s more!
Since we are fast approaching a releasable state of the StreamCrafter, we want to maintain this momentum and immediately move on to the next milestone of the Stronk Broadcaster project.
We've been in contact with the Swarm foundation and are excited to announce we've been accepted into their grants program to write a research paper on various methods to store and deliver media content on Swarm. This paper will be an invaluable resource for any builder that wants to integrate Swarm into their stack, giving insights into how to achieve the lowest cost, the widest device/player support and the quickest time to load and seek

In the next post we will provide a full overview of what features and subprojects are part of the Stronk Broadcaster project, how far along they are and an estimate of when we feel confident to move from the prototyping stage into the first alpha release
Stay tuned for more info on the StreamCrafter and the infrastructure that powers it, and don’t forget to stake to our node to support our development efforts

2 Likes

:movie_camera: Status update on the Stronk Broadcaster project :film_projector:


Hey everyone!

Development of the Stronk pipeline originally started as an experiment to put myself in the shoes of a new broadcaster, with the goal of setting up a global, load balanced media pipeline capable of low latency livestreaming
After that, plenty of detours have been made to dive deeper into specific integrations that could be done to make the life of a video builder or system integrator easier.

Now, some of you might be wondering what the actual progress is on the project, so today we are here for a quick overview of how our work is coming together:

:wrench: What are we making?

We’re bringing together all individual parts of our work over the last months to develop a reference implementation of a fully-featured web3-powered media pipeline for both live and video-on-demand content. This stack is designed to be transparent, easily deployable and modifiable to power your own solutions. We want to cover all bases, from deploying the software to your infrastructure to delivering the media to a sample webapp.

The Stronk Media Pipeline:

  • Ingest: SRT/RTSP/RTMP, WebRTC, MKV [1] and more
  • Transcoding: local audio transcoding and Livepeer video transcoding
  • Inter-node communication and system health: Mosquitto
  • Global load balancing
  • Recording/VOD: Swarm & S3 (MinIO)
  • Delivery: HLS, WebRTC, WS/MP4, MP4, WebM and more
  • Interface: StreamCrafter

[1] NEW browser based protocol as a supplement to WebRTC ingest. By streaming MKV data through a TCP socket or WebSocket you still get a realtime streaming experience, but with much more resilience against poor network conditions or packet loss. Since MKV has wide codec support, you can immediately deliver tracks without requiring transcoding to enable wider device support. This method of ingesting has been added to MistServer specifically to provide a better browser-based broadcasting experience

:dart: Why?

We want to be prepared for the next wave of video builders and prospective Broadcasters looking to get started with live media or VOD. The idea is to provide a barebones package which exposes the full potential of MistServer and provides all the base functionality a new Broadcaster would need, ready to be tailored to their specific use cases.
By having a reference implementation in place, we can easily onboard new prospects and provide them with the necessary tools to get going as quickly as possible

:package: Deliverables:

  • A browser-based broadcasting studio for seamless ingest and viewing using the media pipeline
  • An overview of the various components that make up the media pipeline, explaining how they interact with each other
  • Easily deployable packages, simplifying setup and scaling the operation

:muscle: What’s working:

We’ve made significant progress on the following core components:

  • Ingest: Smooth ingest of any of MistServer's supported protocols
  • Transcoding: Local surround-to-stereo audio downmixing using gstreamer, as well as conversion between AAC and Opus formats. Any video transcoding tasks are handed off to the Livepeer Broadcaster node, with dynamic transcode profiles based on the source stream
  • Content delivery: Seamless, globally load-balanced delivery to viewers using any of MistServer's supported protocols
    With realtime delay on the source track and ~3-5 seconds delay on the transcoded tracks
  • Recording: Recording or clipping live content
  • StreamCrafter: Browser-based broadcasting studio

:hourglass_flowing_sand: What’s in progress:

We’re actively working on the following features:

  • Storage on Swarm: Integrating Swarm for efficient and decentralized content storage
    We have a running Grant from the Swarm foundation for integrating and researching various methods for storage and delivery of media content on Swarm
  • API integration: Implementing access control, database connectivity, and the ability to read on-chain data
  • MQTT integration: Enabling config synchronization, stream health, and inter-node communication
  • VOD uploading

Once the above is finished we will have the full media pipeline proof-of-concept up and running. The next steps would be:

  • A long roadmap of new features and enhancements to the StreamCrafter
  • Packaging the stack into media-oracle and media-worker packages
  • Deployment guides: Comprehensive documentation to assist users in deploying their own media pipelines.
  • Wallet management: Establishing a single escrow wallet to efficiently manage Broadcasters (B’s) and Swarm Bees.

:new: There are other experiments in progress, like deploying the entire stack to a single-board computer to box it up in an easily scalable and portable package with local video transcoding. More on that later

:date: Timeline

The StreamCrafter will be unveiled at IBC2023 on Saturday 16 September at 17:00. This will be the initial public Beta release. A code repository and StreamCrafter-specific roadmap will follow soon after that

After that the focus will be on Swarm integration: being able to record to and stream through Swarm. We estimate 4-8 weeks to complete the Grant, which will unlock full streaming and recording capacity in MistServer.

To conclude the year we want to finish up API integration. Here the goal is to make a limited number of streams, recordings and transcodes available to Delegators, with overcapacity free for anyone to trial


We’re thrilled about the progress we’ve made so far and can’t wait to bring the Stronk Pipeline to the video builder community! Stay tuned for more updates and don’t forget to stake to support our research and development as we explore the options for a fully-featured, web3 powered media pipeline! :muscle::rocket::new_moon:

3 Likes

To all video builders,

Building a platform using live media or VOD is tough. Most of you are already familiar with Livepeer Studio, which offers the full feature set of a media pipeline, including transcoding on the Livepeer network, for an affordable price

In this post however I want to highlight MistServer: an open-source media toolkit for developers who want to build and self-host their media pipeline

MistServer provides you with all the tools you need: support for any protocol and transport method, DVR, an embeddable player with telemetry and, most importantly, integration options which give you exact control over the flow of media and who is allowed to access your content
MistServer is tightly integrated with Livepeer and is able to provide transcoding using Livepeer Studio or your own set of Broadcaster nodes
Note that building your own pipeline is not for the faint-hearted - do check out Livepeer Studio to see if that fits your use case

Our software is used for a wide variety of use cases: from ingest and CDN for livestreaming platforms like picarto.tv to self-driving cars which need realtime transport of a video feed

Today I want to highlight our latest milestone: a complete revamp of the documentation [1] [2]
This is just one step in our journey to reposition MistServer, with a completely rewritten website, API and interface following over the coming months

Our next release for MistServer is just around the corner, which will feature:
Clipping streams and VODs into any output format, rewritten HLS input support, tons of fixes, and the first public release of the StreamCrafter (which required some of these fixes)

If you need help to get started or have feedback, don’t hesitate to ask me or contact us @ info@mistserver.org

[1] https://docs.mistserver.org/
[2] Docker
(The docker image will get some more love soon-ish with pre-configured deployments, but this image can get you started quickly already)

:link: Status update on the Stronk API & Explorer project :unlock:


While we anxiously await the next MistServer release for the official alpha release of the StreamCrafter, progress has been made on the Stronk API & Explorer project (formerly known as the ‘Orchestrator API and supplementary explorer’)

This project was once started at the request of Ryan (NightNode), who wanted insight into what was happening on the Livepeer smart contracts. Back then Livepeer (and Web3 in general) was new to me, but something functional has been up and running since March 2022

The Stronk API & Explorer have been a useful tool to power Grafana dashboards, websites and even other APIs, providing them with reliable and quick access to coin prices, smart contract events, ENS data and more

There are three phases to reach the end vision we have in mind for this project:

Phase 1: Feature parity

The current API contains a lot of ‘prototype grade’ code. This makes it tough to maintain, extend or deploy. We are currently in the process of rewriting the entire API to open the way for the next two phases. Once the new API offers all of the features of the old API, the Stronk explorer will switch over and the old repository will be archived.

Phase 2: Subgraph substitute

The Stronk API processes smart contract events just like the Livepeer subgraph. However, we still rely on the subgraph for specifics like the Delegators or the total stake of an Orchestrator.

This phase will focus on adding more complicated reducer functions to our event parser until we can remove the subgraph as a dependency

Why?

There are two main reasons why we’re not skipping phase 2:

  • The subgraph has been unreliable at times. We want our API to have recent data available without delay or interruption
  • Querying the subgraph is not free: someone is paying for those queries. The Stronk API runs on any free tier offered by RPC providers

This does not mean we don't like the Graph protocol. Their Subgraph Studio and hosted service make it way easier for any project to get started. But since there is a clear path to giving our API similar utility by porting over the Livepeer subgraph reducer functions, why not take the time to remove this fairly significant dependency

Phase 3: Buffered state

Lastly, being able to query the state at a specific point in time is an interesting possibility. The idea is to encode the state similarly to video data, allowing us to rewind to a specific point in time. Since the state is much easier to encode than a video frame, emitted events are not that frequent and the data is immutable, we expect no issues with query speed or the amount of storage required to store the entire state over time.
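The "encode state like video" idea can be sketched as periodic full snapshots (keyframes) plus per-event deltas, replayed to reconstruct the state at any timestamp. Event names and values below are hypothetical, not actual contract events:

```python
import bisect

# Sketch of buffered state: keyframe snapshots plus event deltas,
# rewindable to any timestamp. Events are appended in timestamp order.

class BufferedState:
    def __init__(self, snapshot_every=100):
        self.snapshot_every = snapshot_every
        self.events = []     # (timestamp, key, value) deltas
        self.snapshots = []  # (timestamp, full state dict) "keyframes"

    def apply(self, timestamp, key, value):
        self.events.append((timestamp, key, value))
        if len(self.events) % self.snapshot_every == 0:
            self.snapshots.append((timestamp, dict(self.state_at(timestamp))))

    def state_at(self, timestamp):
        # Start from the latest snapshot at or before `timestamp`...
        state, start = {}, 0
        for ts, snap in self.snapshots:
            if ts <= timestamp:
                state = dict(snap)
                start = bisect.bisect_right([e[0] for e in self.events], ts)
        # ...then replay only the event deltas after it.
        for ts, key, value in self.events[start:]:
            if ts > timestamp:
                break
            state[key] = value
        return state

s = BufferedState(snapshot_every=2)
s.apply(100, "rewardCut", 0.10)
s.apply(200, "rewardCut", 0.08)
s.apply(300, "feeCut", 0.30)
assert s.state_at(150) == {"rewardCut": 0.10}   # rewind to before the change
assert s.state_at(300) == {"rewardCut": 0.08, "feeCut": 0.30}
```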

1 Like

Update on the Stronk Orchestrator


Due to recent pricing changes for transcode work, ETH fees earned from transcoding have decreased significantly. As a result we have been taking a good look at our income and expenses to see where we could offset some of these losses

First of all, Delegators might have noticed that the reward commission was changed to 15% and the fee commission was set to 0%. This way the Stronk operation receives a steady, predictable income stream while any excess revenue goes straight to our Delegators. The idea is to adjust this reward commission once every few months, just like we've done in the past

Another change is to our transcoding setup: the Stronk Orchestrator is a global operation, trying to cover any area where transcode demand comes in from Broadcaster nodes. When we started 23 months ago, we had three machines covering the US (Las Vegas, Chicago, New York). Eventually this turned into one cloud GPU in Las Vegas and one colo machine in Michigan

This setup was still quite costly, had some odd GPUs in the mix (P4s) and still provided excess capacity, especially since traffic on the west coast would only really rise if there were issues with Broadcaster nodes on the east coast. Pon-node found a great deal for running colo machines in Kansas, and through testing with Stronk Broadcasters we found that the location was suitable for replacing the Las Vegas and Michigan machines

So effective immediately, transcodes in the USA will go through our new machine in Kansas, giving:

  • realtime transcoding to broadcasters located anywhere in the USA
  • a better GPU, so no reduction in capacity
  • a low cost, especially due to subletting spare colo machines to other Orchestrators

This should give us the required cost reductions which, together with the recent stake increase, means the reward commission can stay at 15%. The current commission rates will stay in effect until the next major stake change or LPT price change

1 Like