Transcoding Verification Improvements: Fast & Full Verification

Does the verification happen on the GPU, and how resource-intensive is the process? How much (V)RAM should an O leave available for verification?

Just to clarify - the machine learning verifier mentioned in the quote can currently be used by a broadcaster and runs on a CPU, but it is not part of the fast and full verification design.

The fast verification process described in the OP relies on an orchestrator computing perceptual hashes (the upcoming node software release will use MPEG-7 video signatures) for the transcoded results that it returns to a broadcaster. If the orchestrator is transcoding on GPU(s), this additional computation will also run on the GPU(s). The computation itself should be relatively lightweight (at least compared to video decoding and encoding), so while we expect GPU utilization to increase, we don’t think the increase will be substantial.

Additionally, the computation will run directly on video frames that are already decoded by the NVDEC chip on a GPU, which should result in little to no additional VRAM usage. Based on the latest benchmarks, we do not expect the session capacity of orchestrators to be meaningfully impacted by the introduction of this additional computation. With that being said, we’ll definitely be looking to do more benchmarks and testing after the node software release is rolled out.
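For readers curious what the perceptual hash step looks like, here is a rough Go sketch that computes an MPEG-7 video signature for a rendition by shelling out to FFmpeg's built-in `signature` filter. This is an illustration only, not the node's actual code path: the node computes signatures natively inside its GPU transcoding pipeline rather than invoking the ffmpeg CLI, and the file names below are hypothetical.

```go
// Conceptual sketch: compute an MPEG-7 video signature (the perceptual hash
// used by fast verification) for a transcoded rendition via the ffmpeg CLI.
// The real node does this inside its transcoding pipeline, not by exec'ing ffmpeg.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// computeVideoSignature writes an MPEG-7 video signature for renditionPath
// to sigPath using ffmpeg's "signature" filter.
func computeVideoSignature(renditionPath, sigPath string) error {
	args := []string{
		"-i", renditionPath,
		// The signature filter analyzes decoded frames and emits an
		// MPEG-7 video signature; no encoded output is needed, so the
		// result is discarded via the null muxer.
		"-vf", fmt.Sprintf("signature=format=xml:filename=%s", sigPath),
		"-f", "null", "-",
	}
	return exec.Command("ffmpeg", args...).Run()
}

func main() {
	// Hypothetical file names for illustration.
	if err := computeVideoSignature("rendition_720p.ts", "rendition_720p_sig.xml"); err != nil {
		log.Fatalf("signature computation failed: %v", err)
	}
	fmt.Println("wrote MPEG-7 video signature for rendition_720p.ts")
}
```

Because the signature is derived from frames that are already decoded for transcoding, the extra work is mostly the signature analysis itself, which is why the additional VRAM cost described above is expected to be minimal.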
