Livepeer Inc Daydream Product Update

Hi all. I wanted to provide a product update from Livepeer Inc so that the community could see all that’s being built as we focus on Livepeer’s Realtime AI infrastructure opportunity.

Much of this information is available by following along on the Daydream X Account, the Daydream Discord, and the Scope release notes, but is summarized here for the Livepeer community.

Daydream

As you know, Livepeer Inc is primarily focused on finding demand for the Livepeer Network’s realtime AI capabilities through the Daydream brand. Realtime AI video is an early space where Livepeer is well suited to offer unique value because of its GPUs, low latency streaming stack, expertise, and ecosystem alignment - however, the arrival of the market remains dependent on when open source models ship with capabilities acceptable for enterprise-grade commercial use cases.

Because of the early state of the market, Daydream has developed Scope, an open source, community-oriented workflow management tool.

Scope is a local-first, open-source engine for real-time AI generation across video, audio, and computer vision. Creators compose custom pipelines visually, plug into their existing tools, and run inference on their machine or in the cloud via the Livepeer Network.

It incorporates the latest realtime AI models and lets users configure them into custom workflows with pre- and post-processors, LoRAs, audio and video input sources and outputs, and more. The early tinkerers and researchers in this market are using Scope to get access to, and control over, the latest innovations in the space. And as more mainstream-usable models arrive, this foundation will be the best control plane for getting the outputs that users want. Here are some of the latest features to ship.

  • Real-time video compositing powered by VACE. Edit and transform live video with inpainting, outpainting, and style transfer running at interactive framerates. This is continuous inference, not asynchronous generation. Every frame goes through the pipeline.
  • Node-based workflow builder. Build custom AI pipelines by connecting nodes visually, spanning generative video, audio, and computer vision. A new scheduler node lets you fire timed triggers across the graph, and the canvas UX is designed to get you from zero to running workflow fast.
  • Remote inference via the Livepeer Network. Scope offloads GPU-heavy workloads to the cloud rather than requiring a local 4090. Artists and developers without high-end hardware become direct network consumers. This is the link between Scope adoption and Livepeer demand.
  • Plugin architecture for community-built nodes. The v0.2.4 release introduced a node abstraction system that lets anyone ship a custom Scope plugin. Initial examples include a YouTube input source and a local LLM prompt enhancer. The ecosystem is just getting started.
  • LoRA style control. Apply and blend LoRAs for fine-grained aesthetic control, from subtle adjustments to full style transforms. Scope handles LoRA manifest resolution as part of workflow configuration.
  • Shareable, importable workflows. Export and import full pipeline configurations, including LoRA manifests and dependency resolution. This makes community sharing practical and lowers onboarding friction for new users finding Scope through Daydream.live.
  • Beat-synced parameter modulation. Parameters sync to Ableton Link and MIDI clock, making Scope viable for live performance where timing matters. OSC support extends reactive control to any tool that speaks the protocol (see the OSC sketch after this list).
  • Integrations for professional creative tools. Scope connects to TouchDesigner (including the StreamDiffusionTD plugin by dotsimulate’s Lyell Hintz), Resolume, and other creative software via NDI, Spout, Syphon, OSC, and MIDI. This is where Scope reaches the communities most likely to run persistent, high-throughput inference on the network.
  • Audio generation alongside video. Pipelines can return audio alongside video output, streamed over WebRTC. Scope is no longer a video-only tool.
  • LTX-2 video model with ~18x faster loading. The latest LTX-2 integration ships with real-time pacing controls, faster model loading, faster prompt changes, and Ampere GPU compatibility. New users get text-to-video out of the box.
  • MCP server for agentic workflows. Scope exposes an MCP server interface, connecting it to AI agent frameworks. Early infrastructure for a category of agentic, real-time AI applications that could drive sustained inference demand on the network.
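
To make the OSC control above concrete, here’s a minimal sketch of driving a Scope parameter on the beat from Python using the python-osc library. The port and the /workflow/denoise_strength address are placeholders invented for this example - the real values depend on how the OSC input in your workflow is configured.

```python
# pip install python-osc
import math
import time

from pythonosc.udp_client import SimpleUDPClient

# Placeholder values: check your Scope workflow's OSC input settings
# for the actual port and parameter addresses.
SCOPE_HOST, SCOPE_PORT = "127.0.0.1", 9000
BPM = 120.0

client = SimpleUDPClient(SCOPE_HOST, SCOPE_PORT)
beat_period = 60.0 / BPM
start = time.time()

while True:
    # Phase within the current beat, sweeping 0.0 -> 1.0 every beat
    phase = ((time.time() - start) / beat_period) % 1.0
    # Ease a parameter up and down in time with the beat
    strength = 0.5 + 0.5 * math.sin(2 * math.pi * phase)
    client.send_message("/workflow/denoise_strength", strength)  # hypothetical address
    time.sleep(1.0 / 60.0)  # ~60 parameter updates per second
```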

Realtime Audio Generation

While realtime AI video and world models remain interesting early prototypes, many of the users getting value out of Scope today overlap with the audio space. Oftentimes performers, VJs, and musicians are using Scope to generate visuals for live performances, concerts, or event installations, because Scope enables the AI outputs to respond to the audio inputs themselves. Naturally, this is leading to a lot of interest in integrations with audio tools (Resolume), and even in realtime audio generation itself.

More to come on this in a future update (or join the upcoming water cooler chats to see live demos), but the team has made some research breakthroughs to get audio generating in realtime using one of the previously batch-only open source models. This is a big breakthrough that lets creators actually configure AI as an instrument that plays along with or modifies songs in real time, responding to many different input or prompt changes instantly.
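
To sketch the chunked-inference idea at a toy level - this is not the team’s actual implementation, and the model call below is a stand-in - the pattern is to generate audio block by block and crossfade at the boundaries, so a prompt change takes effect on the very next block:

```python
import numpy as np

SAMPLE_RATE = 48_000
CHUNK_SECONDS = 0.5
CROSSFADE = 1_024  # samples of overlap blended between adjacent chunks

def generate_chunk(prompt: str, n_samples: int) -> np.ndarray:
    """Stand-in for a model call; a real pipeline runs streaming inference here."""
    freq = 220.0 + 10.0 * (hash(prompt) % 20)  # toy: the prompt picks a pitch
    t = np.arange(n_samples) / SAMPLE_RATE
    return (0.2 * np.sin(2 * np.pi * freq * t)).astype(np.float32)

def stream(prompts):
    """Yield audio continuously, crossfading each chunk into the next."""
    n = int(SAMPLE_RATE * CHUNK_SECONDS)
    tail = np.zeros(CROSSFADE, dtype=np.float32)
    fade_in = np.linspace(0.0, 1.0, CROSSFADE, dtype=np.float32)
    for prompt in prompts:  # the prompt is free to change every chunk
        chunk = generate_chunk(prompt, n + CROSSFADE)
        chunk[:CROSSFADE] = tail * (1.0 - fade_in) + chunk[:CROSSFADE] * fade_in
        tail = chunk[-CROSSFADE:].copy()
        yield chunk[:-CROSSFADE]

for block in stream(["warm pads", "warm pads", "acid bassline"]):
    pass  # hand each block to an audio device or a WebRTC audio track
```

A real pipeline would replace generate_chunk with the model’s inference step; the streaming structure is what turns a batch-only model into something playable.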

Video is early and requires some model and workflow breakthroughs before it is widely adopted; however, this realtime audio generation is compelling and high quality enough to make an impact today. Look for some proof-of-concept applications coming soon to test driving demand to Livepeer through these channels.

The Path to the Livepeer Network

As with all new capabilities developed within Inc, we first work on validating demand with early users through prototypes and early productionization of our products. The fastest path here is typically using cloud infrastructure at low volumes, so that the community doesn’t have to incur the work of adding support for new experimental capabilities if there’s no actual demand or market fit.

The next phase typically consists of Livepeer Network pilots, using self-run infrastructure on the network and working with select orchestrators willing to run these job types early as they’re developed, so that they can be built the right way.

Next you typically see scaled load testing and redundancy/backup service provided by the network. The Inc products use the cloud for some traffic and duplicate it to the network to evaluate the network’s performance and reliability, eventually shifting more and more traffic to the network as the primary source. Why? Because the Livepeer network is typically far more cost effective than the cloud - and ultimately needs to be as reliable or better.
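
As a sketch of what that duplication step can look like (the endpoints and payload here are placeholders, not real Daydream or gateway URLs), the product serves the user from the cloud path while mirroring the identical request to the network and logging the comparison:

```python
# pip install aiohttp
import asyncio
import time

import aiohttp

CLOUD_URL = "https://cloud.example.com/infer"      # placeholder cloud endpoint
NETWORK_URL = "https://gateway.example.com/infer"  # placeholder network gateway

async def timed_post(session: aiohttp.ClientSession, url: str, payload: dict):
    """POST the payload and return (status, latency in seconds)."""
    start = time.monotonic()
    try:
        async with session.post(url, json=payload) as resp:
            await resp.read()
            return resp.status, time.monotonic() - start
    except aiohttp.ClientError as exc:
        return f"error: {exc}", time.monotonic() - start

async def shadow(payload: dict):
    """Answer from the cloud; mirror to the network for evaluation only."""
    async with aiohttp.ClientSession() as session:
        cloud = asyncio.create_task(timed_post(session, CLOUD_URL, payload))
        network = asyncio.create_task(timed_post(session, NETWORK_URL, payload))
        cloud_result = await cloud      # the user-facing response path
        network_result = await network  # the shadow path, logged and compared
        print("cloud:", cloud_result, "| network:", network_result)
        return cloud_result

asyncio.run(shadow({"prompt": "a neon city at night"}))
```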

Finally, Inc deprecates the cloud infrastructure and relies entirely on the network. This is the path that Studio went through in the early days, before relying solely on the Livepeer network for transcoding traffic. And it is the path that Daydream and Scope are undergoing as well. Currently Scope is routing traffic through the Livepeer network, and is onboarding select Orchestrators to troubleshoot the integrations for popular workflows.

Livepeer Studio

With the focus on Daydream and the Realtime AI opportunity, Livepeer Studio continues to support existing customers, but is not focused on generating further demand through transcoding or streaming. As previously communicated, we’re working with the Frameworks team to carry the torch on transcoding and a modern low latency streaming stack on top of the network, and to pursue demand generation efforts on that track. Orchestrators will likely see transcoding volumes from Studio ramp down in the coming months.

Orchestrator guidance from a Daydream perspective

Daydream is trying its hardest to find product-market fit and drive scaled demand to the Livepeer Network! But as we remain early in that process across our ambitious bets, it’s important to note that many of these GTM efforts are experiments. New models and capabilities drop, each often requiring different hardware. We are committed to bringing demand to the Livepeer network, and the best way for Orchestrators to support this is to flexibly provision hardware in support of these experiments. That may mean spinning capacity up and down as experiments ebb and flow. It likely does not mean over-investing in purchased dedicated hardware until it is clear there is scaled demand for a job. Orchestrators should share openly and honestly what pricing they need to charge in support of these experiments, and consider the impact of delegated stake and reward cuts on their P&L when doing so.
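
On that last point, a back-of-envelope P&L along these lines (all numbers are invented for illustration, and inflationary rewards and reward cuts are left out for brevity) can help an orchestrator decide what a given experimental job needs to pay before committing hardware:

```python
# Illustrative orchestrator P&L for one experimental job type.
# Every number below is made up for the example.
gpu_cost_per_hour = 0.60   # cost of keeping one GPU provisioned (USD)
hours = 24 * 30            # one month of availability
fees_earned = 500.0        # total fees the job paid out over the month (USD)
fee_cut = 0.30             # share of fees the orchestrator keeps;
                           # the remainder flows to delegators

orchestrator_fees = fees_earned * fee_cut
delegator_fees = fees_earned * (1.0 - fee_cut)
hardware_cost = gpu_cost_per_hour * hours

net = orchestrator_fees - hardware_cost
print(f"orchestrator fees:    ${orchestrator_fees:,.2f}")
print(f"passed to delegators: ${delegator_fees:,.2f}")
print(f"hardware cost:        ${hardware_cost:,.2f}")
print(f"net for the month:    ${net:,.2f}")
```

With these sample numbers the job runs at a loss, which is exactly the kind of result worth surfacing openly when discussing pricing for experimental workloads.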


Thanks everyone. Looking forward to showing off some of the latest Scope and realtime AI capabilities on the upcoming water cooler chats.


Thanks for the detailed update! I really appreciate the clear guidance for Orchestrators regarding hardware investment. We will stay flexible, adapt our resources dynamically, and be ready to support the Daydream experiments as they scale.

That being said, I really don’t think classic transcoding is a lost cause! :innocent: I strongly believe it still has a lot of untapped potential; it just needs a sales team. Exciting times ahead for the network!
