AI Video Compute Technical Update 3/18/24

I agree — I think there's an opportunity on the network for nodes to advertise key info such as available VRAM, and to price differently based on those requirements. I'd imagine apps with low-latency generation requirements would pay a lot more for fast response times, while apps without that requirement would rather pay less to have lower-end cards do the work async.
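To make that concrete, here's a minimal sketch of what capability-based selection could look like. All of the names here (`NodeAd`, `pickNode`, the fields) are hypothetical illustrations, not existing Livepeer APIs: a node advertises its VRAM, price, and typical latency, and an app picks the cheapest node meeting its requirements.

```go
package main

import "fmt"

// NodeAd is a hypothetical advertisement a node might publish.
type NodeAd struct {
	ID          string
	VRAMGiB     int     // VRAM available for generation jobs
	PricePerSec float64 // higher for low-latency hardware
	AvgLatency  float64 // typical seconds to first result
}

// pickNode returns the cheapest advertised node satisfying both the
// VRAM floor and the latency ceiling, or nil if none qualifies.
func pickNode(ads []NodeAd, minVRAM int, maxLatency float64) *NodeAd {
	var best *NodeAd
	for i := range ads {
		ad := &ads[i]
		if ad.VRAMGiB < minVRAM || ad.AvgLatency > maxLatency {
			continue
		}
		if best == nil || ad.PricePerSec < best.PricePerSec {
			best = ad
		}
	}
	return best
}

func main() {
	ads := []NodeAd{
		{ID: "fast-4090", VRAMGiB: 24, PricePerSec: 0.020, AvgLatency: 0.5},
		{ID: "budget-3060", VRAMGiB: 12, PricePerSec: 0.004, AvgLatency: 3.0},
	}
	// A low-latency app accepts the higher price for the fast card.
	fmt.Println(pickNode(ads, 12, 1.0).ID)
	// An async batch app tolerates latency and takes the cheaper card.
	fmt.Println(pickNode(ads, 12, 5.0).ID)
}
```

The point is just that the same advertisement lets both app profiles find the right node without the network imposing a single price.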

Your questions about orchestrator (O) timeouts, failovers, and the eventual designs for bringing this to the one unified Livepeer network (after learning on a more experimental subnetwork first) are probably all future work. And I know @rickstaa will be heavily involved in experimenting with that stuff, even on the subnet.