Pre-proposal - E² - Education & Experience Focused SPE

FINAL REPORT – PHASE ONE (PILOT)
Part One

E² – Education & Experience Focused SPE
.originstory
ORIGIN STØRIES | LIVEPEER
Date: May 11, 2025

Executive Summary

Over the past six months, the E² – Education & Experience Focused SPE has explored the practical integration of AI technologies into creative sectors through research, prototyping, and public engagement.

This pilot was designed to investigate and map AI’s creative and technical potential across diverse cultural verticals. Working with design partners in film, theatre, broadcast, music, drag, sports, and food helped us to better identify real-world opportunities, challenges, and emerging workflows.

Our guiding intention was twofold:

  1. Two-way Knowledge Transfer – to better understand where value can be added through AI in cultural production.

  2. Framework & Foundation Building – to seed initiatives that are replicable and scalable beyond the pilot.

Example framework exercise for partners - here is XI XI Productions

Key Verticals & Partner Engagement

We chose our verticals strategically and conducted more than 60 hours of workshops, deep dives, and brainstorm sessions. These are verticals with potential for both creative and technical uses of AI.

For instance:

  • Broadcast – Searching for ways to extend creativity to viewers during live events
  • Film – AI dubbing solutions to allow for English distribution of films
  • Music Events – Looking for new experiences to offer guests
  • Food & Hospitality – Real-time visuals and personalization in dining experiences
  • Sports – Early concepting for in-stadium and fan experiences

Deliverables Overview

  1. Educational Video Series
  • Six long-form episodes will be released as part of the series, with the first two launching 12 May
  • Short-form video content tied to each episode for social distribution (12 pieces total)

Each episode is around 7 minutes and has a different focus:

  1. A Brief History of AI
  2. Prompt Design and Pipelines for Visuals
  3. Everyday Prompting: How to System Message
  4. Music Visuals and AI
  5. Non-creative AI use cases for Media and Entertainment
  6. Open Source: What is it, why is it important, and how does it work?

You can watch a sneak preview of the first two episodes here -

1- A Brief History of AI
2- Prompt Design and Pipelines

2. Experiential Prototypes

We conducted three key experiences:

  • Amsterdam: A dance music experience using AI pipelines and real-time ComfyUI generation via Daydream, with a spotlight on local talent (February).

The success of this event showed the opportunity and value that co-created visuals can add to a dance music event. During this event we further developed our framework for live events, which we will now begin enhancing for venues, including a Comfystream/Daydream integration.

Framework for events (GitHub)

  • Mall of Scandinavia (Sweden): Collaboration with music duo 7000 Apart, testing real-time live visuals on a band in a high-traffic space. The concept is interesting; however, it still has some limitations, including adoption by venues.

  • YouTube Streaming Pilot: We are testing real-time visuals with The Sound Group YouTube channel, building toward a bi-weekly livestream leading up to Amsterdam Dance Event (ADE).

  3. Demand
  • 10,000+ generations executed, primarily using real-time ComfyUI streams, totalling 12+ hours of streaming across live events and livestreams (two test streams and one publicly promoted stream).
  • We also plan to run another 10k generations alongside the Miss South Africa demo with XI:XI Productions. We are currently discussing archival assets and footage.
  • We will deliver a final count on 20 May.
  • We now have two clear real-world use cases for generation demand, with the potential for these to become mid- to long-term opportunities: music events (live/livestream) and broadcast. Our goal is to continue to home in on these over the next six months, including targeting larger broadcast events (10M+ viewership) and pushing opportunities forward with venues.
  4. Audience Reach
  • 500,000+ social impressions reached through a combination of email newsletters (e.g. MTF Labs, 8,000+ subscribers), partner YouTube channels, internal distribution, and social media content.
  • As mentioned, we will begin rolling out the series, which will add to our existing impressions, currently around 200k. We expect to exceed this, especially with some of the upcoming announcements and localised press across partner territories.

Funding Amount: 4,000 LPT

Development - 17.5%
Operational + Administrative - 7.5%
Workshops + Education - 30%
Public Experiences - 30%
Marketing & Partnerships - 15%

While we were able to accomplish our goals, this budget was very tight. We knew this going in and wanted first to really test the potential here and showcase what we were capable of beginning to build. Working with media and entertainment companies, it is clear that larger funding changes not only the quality of who you can work with but also the conversations that can be had. Nevertheless, this was a great first pilot for this SPE, as well as the initial groundwork needed to begin operations.

Key Outcomes

  • Established Design Partnerships: Deep collaborations with six organizations across Europe, Africa, and Australia, who will continue testing workflows and participating in educational and experiential pilots until 2026.

  • Word-of-Mouth Momentum: Generated strong grassroots interest in Livepeer technology by centering conversations around real-world creative tools. This included mentions from the stage at Eurosonic Noorderslag, Google x Nordic Music Tech, Music Tech Netherlands Meet-Up, as well as countless conversations with potential partners and collaborators.

  • Lip-sync & Localization Tooling: Our film partner will begin shooting a feature on 9 June. We have the opportunity to prototype live dubbing (Swedish → English), with potential for replication across multiple projects; there is already interest from their co-production team. We spoke with @dob, as well as Ben and Rick, about this and would also like to collaborate with the Cloud SPE and @rickstaa's AI SPE on the pipelines, workflows, and templates for replication.

It is based on the classic Swedish novel Doctor Glas and is directed by the director of Ondskan/Evil, whose team also includes the film's producers.

  • Educational Workshops: Confirmed the opportunity for a workshop/experience at the end of September, partnering with Nordic Music Tech. Roughly 20% of Stockholm's workforce works in tech, and there is a large push for innovation at the moment, especially with recent regional breakouts like Lovable.

  • AI Reader: It is estimated that more than 40,000 actor self-tapes are filmed each day. Relying on a human reader is not only costly but time-inefficient; AI provides a solution. This concept was developed with our theatre design partner Adam, and there is an opportunity to continue bringing it forward, as there are only a few competitors in this space. It has the potential to increase demand for LLM and conversational AI pipelines.
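To make the AI Reader concept concrete, here is a minimal sketch of the core flow. All names, the scene text, and the structure are hypothetical illustrations, not part of any shipped tool: given a scene and the actor's character, the reader performs every other character's lines as cues. In a full pipeline those cues would be delivered through a conversational AI / TTS model; here they are simply collected so the logic is easy to follow.

```python
# Illustrative sketch of an AI "reader" for actor self-tapes.
# All names and scene text are hypothetical; delivery via a
# conversational AI / TTS model is out of scope for this sketch.
from dataclasses import dataclass

@dataclass
class Line:
    character: str
    text: str

def reader_cues(scene: list[Line], actor_character: str) -> list[str]:
    """Return the lines the AI reader should perform (everyone except the actor)."""
    return [line.text for line in scene if line.character != actor_character]

scene = [
    Line("GLAS", "You wanted to see me, Reverend?"),
    Line("GREGORIUS", "Yes, Doctor. It concerns my wife."),
    Line("GLAS", "Go on."),
]

cues = reader_cues(scene, actor_character="GLAS")
```

The same selection logic generalizes to multi-character scenes, which is where an automated reader saves the most coordination effort.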

Frameworks & Strategic Insights

  • Evaluation Models Created:
    Including adaptations of the Walt Disney Method, and a “Problem-Opportunity” deck for deciding when and how to develop ideas. All frameworks will be published along with the subsector reports on 16 May.

  • AI Literacy Remains a Priority:
    While younger or experimental creatives adopt AI quickly, many decision-makers (venues, producers, institutions) lack contextual understanding. This gap continues to limit innovation and adoption.

  • Educational Contexts Drive Use:
    Workshops and experiences remain a uniquely effective way to bridge the theory/practice gap. Participants move quickly from passive interest to experimentation when given hands-on exposure.

  • Pipelines Aren't Consistent:
    While we have Livepeer pipelines built into VPM, the event generator template, and the playground for partners, calls are not consistent with output. We want to look into this to better understand how to keep models warm for users at any given time.
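One common way to address cold-start inconsistency is a keep-warm loop: periodically issue a lightweight request so the model stays loaded and users don't hit first-call latency. The sketch below is a generic illustration under that assumption, not a real Livepeer/Daydream API; the `ping` callable stands in for a tiny inference request against the pipeline.

```python
# Sketch of a "keep-warm" loop for a real-time generation pipeline.
# The `ping` callable is injected; in practice it would be a small
# inference call (illustrative only, not an actual Livepeer endpoint).
import threading
import time

def keep_warm(ping, interval_s: float, stop: threading.Event) -> int:
    """Call ping() every interval_s seconds until stop is set.

    Returns the number of pings sent (handy for metrics/tests).
    """
    count = 0
    while not stop.wait(interval_s):  # wait() returns True once stop is set
        ping()
        count += 1
    return count

# Demo with a fake ping that just records timestamps.
pings = []
stop = threading.Event()
worker = threading.Thread(
    target=keep_warm, args=(lambda: pings.append(time.monotonic()), 0.01, stop)
)
worker.start()
time.sleep(0.06)  # let a few pings happen
stop.set()
worker.join()
```

In production the interval would be tuned against the orchestrator's model-eviction window, trading a small amount of idle compute for predictable latency.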

In Progress & Next Steps

Experiences & Content

  • Testing livestream mixes on YouTube with real-time AI visuals with TSG Music, week of 12 May.

  • Distribution of video series dropping week of 12 May

  • By-invitation musician workshop with the City of Adelaide (UNESCO City of Music) – 22 May

Market Integration

  • Preparation underway for a demo with the South African Broadcasting Corporation, with potential for integration into their national broadcast workflows.

  • Pre-production for the dubbing prototype, including conversations with PhD candidates at KTH (Royal Institute of Technology) and planning with the film production team.

  • Preparations in place for a real-time visual co-creation experience during ADE, alongside partners such as Thijs Verhulst from Angle of Impact.

Expanded Testing

  • Strong commitments from venues and partners for Phase Two testing and knowledge-transfer events.
  • Continued evaluation of promising sub-markets like food experiences and live television production.

Conclusion

This pilot has shown that creative sectors are eager to explore the possibilities of AI—if given meaningful tools, educational resources, and context-sensitive design.

By combining public experiences, educational storytelling, and technical prototyping, we’ve laid the groundwork for sustainable integration of Livepeer’s infrastructure into real-world cultural production. The relationships and frameworks seeded during this phase will continue to evolve—and we are already engaged in planning for further collaborations across film, music, and live media.

We look forward to continuing this work in the months ahead and contributing to the growth of the Livepeer AI Ecosystem!
