Agent SPE: Phase 2 Proposal

Revision 2.0, last edited March 30 2025
Proposal Author(s): Titan (Team Lead), Phoenix (Secretary & Treasurer), DeFine (Engineer)

Abstract

After successfully completing our pilot Phase 1 with results well beyond expectations, the Agent SPE is requesting 30,000 LPT from the Livepeer treasury for Phase 2 (6 months) to develop, test, and incentivize the use of our recently released VTuber streaming agents and unleash the next generation of autonomous agents onto the Livepeer network.

Mission

The Agent SPE works to position Livepeer as the essential infrastructure backbone for the autonomous agent revolution. Imagine a streaming landscape dominated by AI avatars—not just passive entertainment, but intelligent entities that can interact with platforms, execute transactions, and generate revenue autonomously. Our continuously evolving VTuber Agent is just the beginning of what could become a multi-trillion dollar market.

Once these agents attain legal recognition similar to businesses—a trend already emerging in academic and regulatory discussions—they’ll require robust, decentralized infrastructure for their operations.

We’re not just building a different flavor of AI Agents; we’re building into the future of autonomous commerce. These AI agents will soon be ordering products, commissioning services from other agents, managing investment portfolios, and monetizing content 24/7—all while generating network fees for their infrastructure providers. Currently, major AI frameworks default to centralized resources, but as agents become more sophisticated and regulation evolves, decentralized networks will be essential for compliance, reliability, and scale. Phase 2 ensures Livepeer becomes the go-to infrastructure for the autonomous agent economy, positioning us to capture substantial network fees from what could become the next major technological paradigm shift.

Rationale

Originally, we set out to replicate simplified, 2D VTuber avatars as an open-source counterpoint to commercial solutions—a goal that was both ambitious and necessary for laying our technical foundation. However, we have rapidly evolved beyond basic 2D models and are now deploying full 3D “Metahumans” capable of humanlike realism, expressive facial tracking, and fully customizable environments. This leap in sophistication not only enriches the viewer experience but also catalyzes a host of new creative opportunities for content creators and aspiring AI developers.

By harnessing the capabilities of Unreal Engine’s MetaHuman technology alongside Anima VR Neurosync LLM repositories, we have achieved a level of immersion that bridges the gap between artificial and human performance. This breakthrough positions us squarely at the intersection of AI-driven entertainment and decentralized streaming infrastructure—where we can offer a uniquely powerful solution that benefits both creators and infrastructure providers.

Looking ahead, our refined focus on 3D avatars opens up a broader technological roadmap: interactive environments, modular plugins, and advanced AI integrations that push beyond simple VTuber presentations. Through Phase 2, we intend to elevate our product to a new level of realism, integrate deeper with Livepeer’s decentralized architecture, and expand our partnerships with AI frameworks that demand efficient, scalable video solutions. In doing so, we aim to anchor Livepeer as the backbone for future autonomous agent ecosystems, advancing us from initial experimentation into tangible, next-generation AI-driven commerce.

Market Opportunity

Our research into the streaming market identified a broad opportunity at the intersection of live video streaming and autonomous virtual agents.

Market Size: The global streaming market was valued at approximately $473.39 billion in 2024 and is projected to reach $1,690.35 billion by 2030, growing at a CAGR of 21.2%.
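As a quick sanity check (assuming simple annual compounding between the two quoted endpoints; the report's own CAGR may use a different base year), the implied growth rate can be recomputed:

```python
# Implied CAGR from the quoted market-size endpoints (2024 -> 2030).
start_usd_bn = 473.39   # 2024 valuation, USD billions
end_usd_bn = 1690.35    # 2030 projection, USD billions
years = 6

cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 23.6%, slightly above the quoted 21.2%
```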

Streaming market by content type:
• Video Streaming (55%)
• Audio Streaming (25%)
• Gaming (15%)
• Other Content Types (5%)

We estimate that about two-thirds of all streaming tasks could be performed by autonomous agents. Dividing the streaming market into verticals also lets us assess each industry vertical's potential for conversion to autonomous agents.

We have identified the existing industries with the highest potential to be converted to autonomous virtual agents. Additionally, we expanded our research to identify future markets where visual agents will be integrated by default, without competing with their human counterparts.

Streaming Verticals Most Replaceable by AI:

  1. Existing Industries
    • Content Discovery & Recommendation (85% automation potential)
    • Customer Support & Chatbots (80% automation potential)
    • Gaming & Interactive Media (75% automation potential)
    • Content Moderation (70% automation potential)
    • Analytics & Business Intelligence (65% automation potential)
  2. New Industries
    • Vertical-Specific AI Agents (90-95% automation potential)
    • Autonomous Virtual Performers (95-100% automation potential)
    • Simulation and Digital Twin Management (90-95% automation potential)

Creating our VTuber, and by extension our autonomous virtual agents, with a modular and extensible design allows us to serve all the identified industry verticals with a single software solution. We expect our VTuber and agent solutions not only to bring more revenue directly from tool and pipeline usage, but also to introduce inference revenue coming directly from the autonomous agents. Furthermore, directing inference credits toward autonomous agents so that they can generate positive ROI, and thus spend even more on inference, creates a positive feedback loop with minimal downside, since the inference costs return directly to the network.

SPE Governance Structure

  • Hierarchy: Titan (Team Lead) manages overall direction. Phoenix (Secretary & Treasurer) oversees budget and reporting. DeFine (Engineer) handles technical development.
  • Decision-Making: Internal votes occur for major decisions.
  • Reporting: Monthly updates are posted, referencing deliverables and budget usage.

Milestones and Timeline

Three-Point Roadmap: The Future of AI Autonomous Agents

  • Months 1–2, Elevating the VTuber Product: Technical Excellence, Livepeer Integration, Streamlined Onboarding, Comprehensive Analytics, Dynamic Customization
  • Months 3–4, Expanding Adoption: Multi-Platform Deployment, Protocol Partnerships, Startup Integration Program, Usage Incentives, Use Case Demonstrations
  • Months 5–6, Forging Autonomous Virtual Entities: Financial Sovereignty Infrastructure, Diversified Revenue Frameworks, Self-Governing Economic Protocols, Economic Viability Demonstration

1. Elevating the VTuber Product (Months 1–2)

  • Technical Excellence: Achieve unprecedented avatar realism with industry-leading sub-100ms latency and emotional expression capabilities indistinguishable from human performers.
  • Livepeer Integration: Develop direct integration pathways connecting our VTuber framework with Livepeer’s core infrastructure.
  • Streamlined Onboarding: Develop an intuitive zero-to-stream process requiring less than 5 minutes setup, democratizing access to professional-quality streaming for creators of all technical backgrounds.
  • Comprehensive Analytics: Implement a robust statistics framework to capture and analyze VTuber usage patterns, performance metrics, and audience engagement.
  • Dynamic Customization: Create an extensive personalization system allowing users to modify environments, attire, and facial characteristics of their VTuber avatars through an intuitive interface.

2. Expanding VTuber Adoption (Months 3–4)

  • Multi-Platform Deployment: Test and verify our VTuber technology across all major streaming platforms (Twitch, YouTube, TikTok) and Web3 environments to ensure consistent reliability and performance, all powered by Livepeer’s infrastructure.
  • Protocol Partnerships: Offer our inference pipelines to more protocols and frameworks across the ecosystem. With ElizaOS, we’re deepening our technical integration by implementing their technology directly into our VTuber system, while simultaneously pursuing a stronger partnership given our shared interest in autonomous agents.
  • Startup Integration Program: Launch a dedicated program offering up to $10,000 in Livepeer streaming credits to early-stage startups building with our VTuber technology. Selected startups will receive technical support, integration assistance, and promotional opportunities through our network. This initiative aims to foster innovation while establishing Livepeer as the infrastructure of choice for the next generation of AI-powered streaming applications.
  • Usage Incentives: Create a rewards program with up to $15,000 in incentives for creators and developers who regularly use our VTuber solution and contribute to our usage analytics database.
  • Use Case Demonstrations: Develop and showcase practical VTuber implementations across diverse sectors (gaming, education, customer service, entertainment) to inspire wider adoption and creative implementation.

3. Forging Autonomous Virtual Entities (Months 5–6)

  • Financial Sovereignty Infrastructure: Implement ElizaOS to create an elegant wallet architecture where AI agents independently manage finances, provision resources, and execute transactions with minimal friction and maximum security.
  • Diversified Revenue Frameworks: Design clean, modular monetization systems through ElizaOS Web utilities, enabling agents to seamlessly participate in subscription services, patronage networks, and digital marketplaces with transparent value flows.
  • Self-Governing Economic Protocols: Craft refined economic governance protocols using ElizaOS, allowing agents to intelligently balance operational costs, strategic investments, and infrastructure fees through principled algorithmic decision-making.
  • Economic Viability Demonstration: Showcase autonomous agents achieving ROI within 30 days, presenting clear metrics and visualization tools that elegantly communicate the commercial potential of self-sustaining AI entities on the Livepeer network.

This strategic roadmap elevates our agents from experimental prototypes to economically viable autonomous participants in digital marketplaces—progressing toward commercial maturity while establishing Livepeer as the essential infrastructure for autonomous virtual agents.

Budget Allocation

  • Wages: 15,000 LPT (team compensation)
  • Expenses: 5,000 LPT (operational costs, servers, inference, entity)
  • Subcontractors: 5,000 LPT (specialized expertise)
  • Reward Programs: 5,000 LPT (incentivizing adoption)
  • Total: 30,000 LPT

Detailed Budget Allocation

  • Wages (15,000 LPT): Core team compensation.

  • Expenses (5,000 LPT): Operational expenses covering servers, inference costs, and other entity costs.

  • Subcontractors (5,000 LPT): Payment terms for specialized contractors to be determined at engagement start, with explicit agreements on LPT payments or conversions to USD.

  • Reward Programs (5,000 LPT): Dedicated programs to incentivize VTuber adoption. Participants must enable statistics aggregation to qualify.

Funds will be distributed over a 6-month period with monthly or quarterly reporting. Retroactive rewards may also be provided to external contributors aiding in plugin development or specialized tasks.

The team commits to fulfilling the roadmap regardless of token price fluctuations.
We are also discussing measures to reduce the operational impact that fluctuating token prices may have on our ability to deliver.

Deliverables

  • Fully Customizable Open-Source VTuber: Comprehensive, configurable AI avatar system integrated into the Livepeer ecosystem, featuring industry-leading latency and expressive capabilities.

  • Templates and Code Support: Various deployable frameworks including Game Streamer templates, Commentator templates, Twitch integration templates, interactive livestream frameworks, and autonomous broadcasting scenarios.

  • Educational Resources and Analytics: Extensive tutorials, comprehensive documentation, and open-source statistics on VTuber agent usage, alongside Profit and Loss (PnL) data, supporting startups entering the ecosystem.

  • Creative Incentive Programs: Workshops and structured rewards aimed at stimulating open-source development and innovation within the Livepeer community.

  • Enhanced Inference Consumption Pipelines: Infrastructure improvements designed to substantially boost Livepeer inference utilization network-wide.

  • Autonomous Agent Infrastructure: Groundbreaking developments positioning Livepeer as a leading platform in autonomous AI agent markets, establishing the framework for self-sustaining AI entities within video streaming.

All deliverables will be referenced in monthly progress tracking reports.

Transparency and Accountability

We will publish monthly reports detailing:

  • Spending: Breakdown of LPT usage, including wages, operational costs, and reward programs.
  • Technical Progress: Updates on plugin features, streaming metrics, and performance testing.
  • Community Engagement: Summaries of hackathons, user feedback, and marketing traction.

We maintain a rigorous cadence of communications:

  • Technical updates at least twice weekly, with one update consistently delivered during the community watercooler chat.
  • Comprehensive monthly reports detailing progress, challenges, and financial activity.
  • Complete transparency of all blockchain transactions from our multisig wallet, meticulously recorded in the SPE books.

We will implement robust statistics and metrics tracking systems directly integrated into the VTuber product during Phase 2 that will allow us to:

  • Measure the quality and impact of our development work through real-time usage data.
  • Identify areas for continuous improvement based on user interaction patterns.
  • Provide data-driven insights to the community about VTuber adoption and streaming metrics.
  • Establish quantifiable benchmarks for success across all deliverables with opt-in telemetry.
  • Maintain complete transparency with the community through regular reporting of these metrics.

Agent SPE

3 Likes

Hey there, here is my opinion on this SPE:

I believe this is a valuable product overall and worth supporting.

That said, I would encourage the team to focus more on bringing this to prosumers. If the MVP is ready, it could be beneficial to start getting real-world usage and feedback.

Regarding some of the budget items:

  • I’m not convinced hackathons provide significant value in this context. Having worked with multiple companies in the Web3 space, I’ve seen that the ROI is often quite low.
  • The team currently lacks a dedicated business development, marketing, or product lead—I’d recommend considering someone to help drive these efforts.
  • It may be worthwhile to set aside funds for creative activations around the vTuber to generate excitement and engagement.

On a separate note, from the Harmonic Spe side, we’d be happy to help connect the team with ecosystem partners who can help raise awareness for the project.

3 Likes

Hello.

  1. We are open to replacing the hackathon with other incentives if they prove more efficient. Hackathons can be great when they are well structured and driven by a clear target. We will publish more details on the hackathon structure tomorrow; we are always open to additional feedback.

  2. This is still under discussion; one option could be to outsource some parts to different SPEs. What is unique in our case is that we can market directly via our VTuber avatar, so we can also use part of the marketing allocation to purchase inference from the network.

  3. Yes, totally! As part of this proposal we are working on a list of incentives for VTuber use cases. What is of paramount importance to us is the ability to gather analytics from genuine VTuber usage. The incentives should align with that and reward the product users who have the highest viewership for the amount of data they are contributing.

We really appreciate the feedback @pablov ! And we will be happy to connect with you and Nick and discuss further.

2 Likes

I'd love to see the current spending report with details for the two phases. I'd also like to see monetization ideas around that: the cost of running one hour of VTuber stream versus the potential price for streamers, etc.

4 Likes

Spending Report and Phase Two Budget Overview

Hello! The 1st month spending report has already been published, and the second one is currently being produced. These reports cover our purchases of 3rd-party services and the use of funds to buy network inference.

The SPE books, maintained by the treasurer, include member compensations. Notably, these incurred a ~50% reduction compared to what was proposed earlier—yet we still managed to deliver more than promised within the 3-month timeframe.


Phase Two Financial Management

We’re introducing enhanced financial management strategies to ensure liquidity across all budget items, making them resilient to token value fluctuations. (Budget allocations are subject to change.)

Detailed Budget Allocation

  • Wages (15,000 LPT): Core team compensation, converted to USD at proposal approval time to maintain stability.
  • Expenses (5,000 LPT): To be staked; staking rewards will cover operational costs. The principal will only be liquidated for inference costs in emergencies.
  • Subcontractors (5,000 LPT): Payment terms defined upon engagement, with options for staked LPT or USD equivalents.
  • Reward Programs (5,000 LPT):
    • 3,000 LPT for VTuber adoption incentives
    • 2,000 LPT for a hackathon focused on PVP Profit and Loss (PnL) performance

Note on Wage Compensation

Wage values for SPE members will be converted to USD upon proposal approval. If the LPT price fluctuates by more than 20% from proposal posting to token release, members may:

  1. Liquidate to USD with an agreed pay cut
  2. Wait to liquidate at a later date
  3. Reconstruct the spending budget to prevent pay cuts
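The three options above can be sketched as a simple decision rule (a hedged illustration only; the function and option names are hypothetical, not an actual SPE policy implementation):

```python
def wage_release_options(price_at_posting: float, price_at_release: float,
                         threshold: float = 0.20) -> list[str]:
    """Return the member options that become available when LPT moves
    more than `threshold` between proposal posting and token release.
    Option names are illustrative labels for the three choices above."""
    change = abs(price_at_release - price_at_posting) / price_at_posting
    if change <= threshold:
        return []  # no adjustment needed; the standard USD conversion applies
    return [
        "liquidate_to_usd_with_agreed_pay_cut",
        "wait_to_liquidate_later",
        "restructure_spending_budget",
    ]
```

For example, a drop from 10.0 to 7.5 USD/LPT is a 25% move and triggers all three options, while a drop to 9.0 (10%) stays within the threshold.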

Technical Clarification: GPU Usage

@Karolak — the pricing depends on the setup. Titan, for example, ran it on a GTX 1060. GPU usage comes from:

A. LLMs
B. The Game (renders VTuber model)

  • The system uses three LLM pipelines. Two can be remote.
  • The third pipeline (audio to face animation) can run efficiently on CPU.
  • You can:
    • Run the game on a lower-end GPU
    • Offload LLM inference to the network
    • Host everything locally

Cost varies significantly. We’ll share setup cost details next week.
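The placement choices described above can be summarized in a small sketch (key names and values here are illustrative assumptions, not the product's actual configuration schema):

```python
# Hypothetical placement plan for the three LLM pipelines described above.
# "remote" = offloaded to the Livepeer network, "local_*" = streamer's machine.
DEFAULT_PLACEMENT = {
    "text_llm": "remote",         # chat/response generation
    "text_to_voice": "remote",    # speech synthesis
    "audio_to_face": "local_cpu", # face animation runs efficiently on CPU
}

def llm_gpu_needed(placement: dict) -> bool:
    """True if any LLM stage is placed on a local GPU.
    Note: the game render itself still needs a local GPU regardless."""
    return any(v == "local_gpu" for v in placement.values())
```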

Demo Specs:

  • GPU: Nvidia Tesla T4
  • Text LLM: Remote (Livepeer pipeline)
  • Text-to-Voice LLM: ElevenLabs
  • Audio-to-Face LLM: Local GPU
  • Game: Local GPU

Monetisation Strategy: High-Leverage Focus

We’re concentrating efforts on Autonomous Video Agents—AI agents with real-time visual presence, blockchain wallets, and task execution capabilities. These agents can serve as:

  1. Personalised Teachers
  2. Entertainers/Streamers
  3. Personal Assistants
  4. Freelancers
  5. Researchers

Limited only by inference supply and imagination.

These agents represent potential new streams of GDP production. They can:

  • Operate 24/7 across the internet
  • Maintain multiple parallel engagements
  • Rely on:
    • Persistent Memory
    • Data-driven learning
    • On-chain financial transactions

This creates a self-sustaining value loop: agents generate income → spend on inference → improve output → earn more → reinvest. In this model, Livepeer serves as the core infrastructure provider powering all inference pipelines for these agents.

@pablov There are some references to the hackathon structure in the Detailed Budget Allocation section above. We are still working on the full hackathon structure (full details in the next proposal update), while staying focused on creating a competitive and fun experience for potential participants rather than following traditional hackathon formats.

2 Likes

Good read. Can't the text-to-voice LLM be replaced with a text-to-audio pipeline within the Livepeer network?

2 Likes

Absolutely. Locally, I can engineer that soon. For integration with Livepeer, this also needs to be discussed with @rickstaa and the AI SPE to find the most optimal approach.

Ideally, we want to combine the pipelines.

For Example:

A. LLM (Large Language Model)
B. Text-to-Audio Synthesis

If we combine pipelines A and B, we can create a unified LLM-to-Voice pipeline, streamlining the process from text generation to natural speech synthesis.

Taking this a step further, the ultimate goal would be to evolve this into a comprehensive LLM-to-Face pipeline, integrating all 3 pipelines into one:

  1. Text Generation (LLM)
  2. Audio Synthesis (Text-to-Speech)
  3. Facial Animation (Voice-to-Face)
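The chaining described above can be sketched as a simple composition (the stage functions here are stubs standing in for the real models, purely to show the data flow):

```python
def text_llm(prompt: str) -> str:
    return f"reply({prompt})"            # stub for stage 1: text generation

def text_to_speech(text: str) -> bytes:
    return f"audio({text})".encode()     # stub for stage 2: audio synthesis

def voice_to_face(audio: bytes) -> list:
    return [f"frame:{audio.decode()}"]   # stub for stage 3: facial animation

def llm_to_face(prompt: str) -> list:
    """Unified pipeline: LLM -> speech -> face animation in one call."""
    return voice_to_face(text_to_speech(text_llm(prompt)))
```

Merging the stages into one pipeline avoids round-tripping intermediate text and audio through separate network requests.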
2 Likes

How was the 10–30% increase in network fees calculated? We understand that many fees were sent to the SPE's own orchestrator and recirculated, so it seems the fees are getting inflated … Is there any way to see what works

1 Like

I had understood that the free credits allocated for development by the treasury were also meant to encourage orchestrators to maintain inference pipelines.
However, since the vast majority of the funds return to your own orchestrator, I don’t really see the impact of this point.

But perhaps it is a misunderstanding on my part.

3 Likes

Hello. That is a great question. We have a detailed plan for collecting high-quality, relevant data.

The network fees from Phase 1 came mostly from Eliza LLM usage.
The majority of the credit funds are still unused, because they need to be spent as VTuber inference credits.

The Eliza pipeline is limited to very basic tracking: we can only track whether a request was for text or image generation. For VTuber inference tracking, we are building an analytics aggregation component on top of the software solution, which will be on by default unless turned off in the configuration. Direct VTuber pipelines are expected to be equipped with the same analytics feature. Additionally, the new incentives are designed to reward usage only if the aggregation component is turned on.
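The on-by-default, opt-out behavior can be sketched in a few lines (the key name is an illustrative assumption, not the actual configuration schema):

```python
# Minimal sketch of the opt-out analytics toggle described above.
DEFAULT_CONFIG = {"analytics_enabled": True}

def should_report(config: dict) -> bool:
    # Aggregation is on by default unless explicitly disabled in the config.
    return config.get("analytics_enabled", True)
```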

We will post an update on Monday with exact details on the tracking system and its planned implementation, if you would like to review that further.

2 Likes

Hello Utima. I explained in the post above how the credits have been used so far for the LLM pipeline, and the differences in the way we plan to use them for VTuber inference. The majority of credits are unspent and will be spent to incentivize orchestrators once the analytics module is out; this way we can make sure that we incentivize genuine inference demand.

Another issue up for discussion is how to avoid price discrepancies between orchestrators when providing inference credits. Should a credit cover a constant price for task X executed on GPU Y? We definitely need to take care of such details to minimise potential abuse and guarantee maximum capital efficiency.

2 Likes

Is there any possibility of introducing a paid LLM endpoint API at some point? This seems like a compelling idea, especially considering the decentralized nature of the ecosystem. Anyone could run their own Gateway<>Orchestrator setup, and with a crypto-based payment system, such an API could be offered to Eliza OS users at minimal cost. No centralized company would be required—everything could run fully on-chain. Or, even better, Eliza OS users could easily set up their own Livepeer Gateway, load it with an ETH deposit, and reserve resources.

The concept is straightforward: I want to use your (or my own) Gateway endpoint, so I send ETH to address X, and in return, I receive access to the API with X tokens. This would allow for seamless monetization of inference work. For instance, I could say, “Hey Orchestrators, I’ve set up a Gateway and I’m paying this amount for access to this specific model, as I need it for my Eliza AI Agent.”

To support this, a Gateway marketplace explorer would be necessary, where Gateways can announce their models and pricing in comparison to available models online. Something like this: https://www.media.network/marketplace/cdn. Additionally, integrating Eneche support for one-click self-deployment of Gateways, with a deposit feature available through a WebUI, could make the process much simpler. Users would only need to rent a server and input their address/password/user details, after which scripts would automatically install everything needed, including Livepeer Gateway, through Docker. This would fully decentralize Gateways, while keeping the process easy enough for anyone to set up with just a few clicks, enabling them to start buying inference or at least beaconing demand to the supply side of the common marketplace.
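The deposit-for-access flow proposed above can be sketched as a toy ledger (class, method names, and the conversion rate are all illustrative assumptions, not a real Livepeer or Gateway API):

```python
class GatewayLedger:
    """Toy sketch of the flow above: send ETH to an address, receive
    API access tokens, spend them per inference request."""

    def __init__(self, tokens_per_eth: int = 1000):
        self.tokens_per_eth = tokens_per_eth
        self.balances: dict[str, int] = {}

    def deposit(self, address: str, eth: float) -> int:
        """Credit access tokens for an ETH deposit; return the new balance."""
        credited = int(eth * self.tokens_per_eth)
        self.balances[address] = self.balances.get(address, 0) + credited
        return self.balances[address]

    def consume(self, address: str, tokens: int) -> bool:
        """Deduct tokens for a request; False if the balance is too low."""
        if self.balances.get(address, 0) < tokens:
            return False
        self.balances[address] -= tokens
        return True
```

In a real deployment the deposit would be observed on-chain and the token balance enforced by the Gateway, but the accounting shape is the same.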

Once monetized and with real money flowing, I could reinvest in better hardware—enabling me to run more powerful models, like DeepSeek 70B, for example.

For reference, I’ve had zero errors over the past month, with thousands of successful requests completed on my machines. This demonstrates the reliability of such setups and the potential for scaling.

I’d love to hear others’ thoughts on whether this could be a viable addition to the ecosystem!

Partnering with Moxie could also be a potential option. They could handle selling API access, allowing ETH to flow into the network. While the system works well, there must be a starting point for monetizing it—otherwise, it risks stalling once the treasury funds run out, which would be a real shame. I believe Orchestrators are ready to invest in proper GPUs for larger LLMs, if such a deal is possible. Right now, it feels like nobody will take the risk to buy hardware if we continue to rely solely on treasury funds.

1 Like

I’m confused by this too… The SPE seems like it’s paid itself to pay itself to pay itself in a closed loop that doesn’t benefit the network any more than surface level marketing that “fees have increased”, when in reality most O’s are earning less than before. I’ve been a bit out of the loop on this but when is the traffic going to be sent to the whole network, and is the pipeline ready for use by all O’s?

1 Like

The payments going to the Agent SPE are for the VTuber pipeline that no one else is running right now.

All the other Eliza OS inference, which accounts for the 10–30% network increase, goes to the public network, including our payments for inference to the Cloud SPE Gateway as well as payments going out from the Agent SPE Gateway. Unfortunately, the VTuber pipeline is mixed in with the public inference; maybe we should move it to a separate Gateway/Orchestrator setup to differentiate the two.

Also, I don't think Os are earning less than before. Everyone eligible for LLM requests has likely seen a bump in income over the past couple of months.

2 Likes

I haven’t seen any orch announcements on how to run these pipelines, or any streamlined info on how to get involved. I mentioned this to @brad-ad-astra-video and @rickstaa but Livepeer could really use a central hub for users, gateways and orchs to see what’s available and get involved quickly, otherwise how are all the individual SPE’s going to offer a practical use case for Livepeer?

1 Like

I’ve definitely seen a reasonable amount of LLM requests on my AI Orch during the past few weeks.

And to be fair, the AI Docs (for basic AI Orch setup) together with a few minutes of Discord research will give you enough info on how to run the LLM pipelines. And as long as supply is not an issue, the focus should be on bringing in demand rather than one-click onboarding for Orchs :slight_smile:

2 Likes

We could even take it a step further: ElizaOS could just read the on-chain registry, pick the Orch, and send the job request directly to it. The Eliza developer would only need to use a web UI to deposit funds and enter their private key into the Eliza .env file (which they already do with their EVM config).

Basically it would just be a Gateway built into ElizaOS in typescript. No need for a separate service or server. Just Eliza Agent <> Livepeer Orchs.

2 Likes

How would micropayments be managed in this setup? Typically, a gateway sends tickets, but I was considering embedding the gateway directly into Eliza OS. I think we’re aligned on the concept, though I don’t have a detailed technical overview—just experimenting with the idea.

In my view, every user should automatically function as a gateway. This is similar to stream.place, where each node effectively acts as a gateway, distributing work across the open network. It integrates the Livepeer library, allowing users to choose between local transcoding or open network transcoding. A user can simultaneously act as a Gateway, Orchestrator, and Transcoder—without even needing to understand those roles. If someone wants to transcode remotely, all they need to do is load an ETH address—boom, done.

This approach enables rapid scaling through user participation, eliminating the bottlenecks of a single centralized gateway struggling with high traffic. Instead of one gateway handling massive loads, each user runs a small-scale instance, and collectively, these distributed gateways scale efficiently. However, for this to work seamlessly, the UI/UX needs to be frictionless—users shouldn’t have to deal with technical details. Ideally, they should just toggle a few options, and scripts should handle everything in the background via .env config variables.

I think the biggest blocker for Livepeer is its aspiration to be a fully decentralized network while still relying on centralized gateways. While Orchestrators and Transcoders are decentralized, this asymmetry limits growth. That’s why I’m excited about what stream.place is doing—if Eliza follows a similar path and gains traction, we might finally see the long-awaited breakthrough.

1 Like

This was the idea we had for Gwid, where we see Livepeer becoming a gateway-agnostic network. Every app or user will use their own gateway instead of relying on a centralized one. Like you said, for this to happen the technicalities of setting up the gateway, along with its other management tooling, must be stripped away. And this is what we have managed to achieve: deployment to a cloud provider (with analytics scraping, Prometheus, Grafana), a funding deposit panel, settings configuration, and a webUI shell to interact with the Livepeer CLI instead of SSHing into your Gateway's remote server, all from a web app.
Where we are currently is allowing users to side-load other apps to use with their Gateway: so far Owncast (low-hanging fruit), and next a Load Balancer. We see possibilities of adding AI agents to this integration, which will bring more usage.

Where we see this going is a “Self-service Gateway marketplace”: every capable stakeholder (Os, builders, etc.) could take the initiative to source demand themselves, even on a small scale. There are numerous API marketplaces where Gateway endpoints like LLM or image-gen could be listed and monetized as paid APIs, even offering AI agents as APIs too.

All you have to do is rewrite the gateway in TypeScript :sweat_smile:
But in seriousness, the ticket management and selection algo can be written in whatever language is compatible with the product you are using. For Eliza, we can write it in TypeScript and handle all tickets, orchestrator selection, and payments through Eliza OS via a TypeScript function. This would likely be easier than the go-livepeer implementation anyway, because we are not dealing with streaming video segments, only JSON requests, which in my experience is a lot simpler.

If we wanted a quick implementation just to get started, we could also attach our go-livepeer binary to the docker-compose Eliza OS script and have it point to itself. That way, if someone wanted to launch an Eliza Agent with Docker (not sure how many do), we could just spin up a Livepeer gateway in the same container and send inference to localhost.

Yes we are on the same page. And with the movement towards truly decentralized hosting of Agents, the agent would technically need a decentralized source of inference. What better option than getting 100 inference providers on-chain and dealing with them directly, rather than some centralized API endpoint (which other “Decentralized” services are doing).

1 Like