Conference Proceedings
Taking the headache out of timed metadata for live video
Description
Merging video and timed metadata is now mainstream, but inherent challenges remain when it comes to combining multiple live video feeds with multiple sources of timed metadata in the media and entertainment (M&E) space: captioning, digital rights management, and syncing multiple live streams from multiple cameras, among others. These barriers make it harder for live betting, sports, and event broadcasters to create better viewing experiences for their end users.
Why is it a challenge? Today, many live video operators use HTTP-based OTT workflows that send video feeds from the camera to the Content Delivery Network (CDN). These workflows are subject to latency of seven seconds or more, and they do not let operators process live streams and leverage their data without encoding and transcoding them, which raises the cost and overall complexity of the workflow. In addition, workflows generally use the SDI VITC timestamp rather than UTC for each frame, creating synchronization discrepancies across multiple metadata sources and camera feeds in different locations, which degrades the overall viewing experience.

How did we solve this? KLV, a SMPTE data encoding standard also used by the military to embed data in live video feeds, combines metadata with geospatial visualization, offering a new way to enhance the user experience and enabling new use cases such as precise synchronization and timestamping of event highlights across multiple live video streams. As a practical use case, a Precision Time Stamped wall clock embedded in live video streams can enable effective sports adjudication, betting, gamification….

Why choose this topic? Timed metadata has always been a pain in the ass. Right? Well, we solved that using a military standard for good. Our mission is to positively impact society by simply moving media.

This talk was presented at Demuxed ’22, a conference for video nerds in San Francisco featuring amazing talks like this one. Demuxed ’22 was made possible by sponsors like our Platinum sponsor Daily (https://daily.co) and organized by people from Mux (https://mux.com). For more information about the conference and community, see https://2022.demuxed.com.
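To make the KLV idea concrete, here is a minimal sketch of how a key-length-value triplet is laid out on the wire: a 16-byte key, a BER-encoded length, and the value bytes. The 16-byte `DEMO_KEY` below is a placeholder, not a registered SMPTE Universal Label, and the 8-byte microsecond timestamp payload is only an illustration of the precision-timestamp concept described in the talk, not the talk's actual implementation.

```python
import struct
import time

def ber_length(n: int) -> bytes:
    """Encode a length using BER: short form below 128, long form otherwise."""
    if n < 128:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def klv_packet(key: bytes, value: bytes) -> bytes:
    """Concatenate key, BER length, and value into one KLV triplet."""
    assert len(key) == 16, "KLV keys are 16-byte SMPTE Universal Labels"
    return key + ber_length(len(value)) + value

# Hypothetical 16-byte key, used only for illustration.
DEMO_KEY = bytes(range(16))

# 8-byte big-endian count of microseconds since the Unix epoch,
# a common shape for precision timestamps carried in KLV.
ts_value = struct.pack(">Q", int(time.time() * 1_000_000))

packet = klv_packet(DEMO_KEY, ts_value)
print(len(packet))  # 16-byte key + 1-byte BER length + 8-byte value = 25
```

Because the length is self-describing, a downstream demuxer can skip triplets whose keys it does not recognize without parsing their contents, which is what makes KLV practical to interleave with video essence.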
Other Proceedings
Here are some other proceedings that you might find interesting.
- What Codec Should I Use? (Alan Resnick)
- Doing Server-Side Ad Insertion on Live Sports for 25.3M Concurrent Users (Ashutosh Agrawal)
- Is now the time to solve the deepfake threat? (Roderick Hodgson)
- Super Resolution: The scaler of tomorrow, here today! (Nick Chadwick)
- The do's and don'ts about Streaming security (Javier Brines Garcia)
- Modeling the conceptual structure of FFmpeg in JavaScript (Ryan Harvey)
- Objectionable Uses of Objective Quality Metrics (Richard Fliam)
- RTMP: web video innovation or Web 1.0 hack… how did we get to now? (Sarah Allen)
- Large-Scale Media Archive Migration to the Cloud (Konstantin Wilms)
- HEVC Upload Experiments (Chris Ellsworth)