SIGCOMM 2020
August 11, 2020, Online, New York, NY, USA
Neural Enhanced Live Streaming: Improving Live Video Ingest via Online Learning

About the talk

Live video accounts for a significant volume of today's Internet video. Despite a large number of efforts to enhance user quality of experience (QoE) at both the ingest and the distribution side of live video, the fundamental limitation remains that the streamer's upstream bandwidth and computational capacity constrain the quality of experience for thousands of viewers.
To overcome this limitation, we design LiveNAS, a new live video ingest framework that enhances the origin stream's quality by leveraging computation at ingest servers. Our ingest server applies neural super-resolution to the original stream while imposing minimal overhead on ingest clients. LiveNAS employs online learning to maximize the quality gain and dynamically adjusts its resource use to the real-time quality improvement. LiveNAS delivers high-quality live streams up to 4K resolution, outperforming WebRTC by 1.96 dB on average in peak signal-to-noise ratio (PSNR) on real video streams and network traces, which leads to a 12%-69% QoE improvement for live stream viewers.
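The PSNR figure quoted above is the standard fidelity metric, derived from the mean squared error between a reference frame and its reconstruction. A minimal sketch (the frame data here is illustrative, not from the paper):

```python
import math

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return math.inf  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [128] * 16    # flattened 4x4 reference frame
out = [130] * 16    # every pixel off by 2
print(round(psnr(ref, out), 2))  # 42.11
```

A gain of 1.96 dB, as reported above, means the enhanced stream's PSNR exceeds the WebRTC baseline's by that margin on average.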

About speaker

Jaehong Kim
Master's student at KAIST
Transcript

Hi, I'm Jaehong Kim from KAIST. In this video, I'm going to talk about neural-enhanced live streaming: improving live video ingest via online learning. This is a short video, and if you want to know the details, please watch the 20-minute video. Live video traffic has experienced rapid growth, driven by online streaming services such as YouTube and Twitch. With the number of streamers and viewers on the rise, supporting high-quality live video has become ever more critical. A live streaming system consists of two main components.

The first is the ingest side, which concerns the delivery of live video from the original streamer to a media server. The streamer uploads a high-quality live video to the media server, which transcodes the video into multiple different qualities. The second is the distribution side, where streaming systems optimize the quality of experience delivered across viewers. However, there still exist key limitations in end-to-end live video delivery. The stream quality is fundamentally constrained by the

streamer's uplink bandwidth and its computational capacity. For example, if the network bandwidth between the streamer and the media server is scarce, or if the streamer lacks the computing power to encode high-quality video in real time, the ingest quality suffers. This limits the transcoding options and restricts the quality of the entire delivery downstream, potentially depriving thousands of viewers of the opportunity to enjoy a high-quality live stream. Therefore, in this

work, we focus on the first-hop delivery between the streamer and the media server, namely the ingest side, and enhance the video quality of the ingested stream at the ingest server. By leveraging the computing power at the ingest server, LiveNAS applies super-resolution on the original stream at the ingest side, before its distribution to the end viewers. Since the high-quality video is now available at the media server, this enables thousands of viewers to download a high-quality live stream.
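The server-side flow described above (enhance at ingest, then distribute) can be sketched as a simple loop. All three helper names below are hypothetical stand-ins for the media server's I/O and model, not LiveNAS's actual API:

```python
def ingest_loop(receive_frame, super_resolve, transcode_and_publish):
    """Enhance the origin stream at the ingest server before distribution.

    receive_frame yields decoded frames from the streamer (None when the
    stream ends), super_resolve is the neural super-resolution model, and
    transcode_and_publish hands the enhanced frame to the distribution side.
    All three are illustrative stand-ins.
    """
    while True:
        frame = receive_frame()            # low-quality frame from the streamer
        if frame is None:                  # stream ended
            break
        enhanced = super_resolve(frame)    # neural enhancement at the server
        transcode_and_publish(enhanced)    # viewers download the enhanced stream
```

Because viewers fetch from the media server rather than the streamer, the streamer's uplink no longer caps the quality the server can distribute.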

However, there are two primary challenges in applying super-resolution to live streams. First, a content-aware DNN cannot be prepared on the fly for new live content. It is known that super-resolution DNNs provide the greatest benefit when trained and used on the same content, which is the content-aware approach. In fact, preparing a content-aware DNN per video requires about ten minutes of training for a one-minute video, so prior approaches cannot be easily adapted to live video delivery and its stringent delay requirements. Moreover, a DNN trained

on videos from past live sessions cannot deliver reliable quality gains compared to a content-aware DNN, because the live content can be very new and different from past sessions. To address this challenge, our solution is to employ online training while the live stream is in progress. Second, training a DNN in real time requires powerful computing devices, and the computing power at the streamer side is heterogeneous. Instead, we leverage the computing power at the ingest server, which is typically well provisioned. However, the challenge is that

ground-truth labels for online training are not available at the media server. To address this challenge, we rely on a key observation: even a fraction of ground-truth labels is sufficient for online training. First, because video frames share large redundancy, training with only a subset of frames still produces almost the same quality gain as having all frames. Second, in addition to sampling frames, training with a fraction of ground-truth patches, rather than entire frames, still provides significant training gain. Combining these two factors, we can transmit ground-truth

patches to the media server using a very small amount of bandwidth: sending 5% of frames every 2 seconds with JPEG compression requires 124 kilobytes per second. Based on this, our solution is that the streamer sends partial training data to the ingest server. Let me explain in detail how LiveNAS operates at both the streamer and the server side. At the streamer side, the camera captures the high-quality stream before its compression, and LiveNAS samples partial high-resolution data, which is used for training. Then

the streamer sends the sampled training data patches along with the compressed live video frames. At the media server, online training and super-resolution inference on video frames operate in parallel. Online training learns new features of the video stream online from the streamer's patches, and the freshly trained DNN is delivered to the super-resolution processor. The online training approach, however, involves solving non-trivial challenges. First, the high-quality patches that a client transmits share the uplink with the live video. Allocating large

bandwidth to training patches can improve the training gain; however, it leaves less bandwidth for the live video, which can cause quality degradation. To address this challenge, we introduce the quality-optimizing scheduler, which effectively balances the allocation of bandwidth between the training patches and the live video. Second, the ingest server must support a large number of streams simultaneously; for example, there are on average 90,000 concurrent streams on Twitch. A server with limited GPU resources can only provide the benefit of online training to a limited

number of concurrent streams. To address this challenge, we introduce the content-adaptive trainer. Now, I'll briefly explain the two design components. The goal of the quality-optimizing scheduler is to allocate the bandwidth budget between the video and the training patches in a way that results in the maximum video quality. Since we observe that the quality gain is a concave function of the patch rate, the scheduler uses gradient ascent to find the patch rate that leads to maximum video quality. Then the scheduler updates the target bitrates of the patches and the video

respectively. The goal of the content-adaptive trainer is to improve the resource efficiency of online training. The content-adaptive trainer suspends training when it detects saturation in the training gain, and resumes training when it detects a scene or content change. In this way, LiveNAS is able to adapt the amount of training to the real-time quality gain. Putting the design components all together, here is the overview of the LiveNAS system. There are two additional design components that I have not talked about in this video, but they are explained in the

20-minute video. LiveNAS is implemented on top of the state-of-the-art open-source ingest framework WebRTC, so it is agnostic to the video codec and the transport layer. We evaluate LiveNAS by answering the following three questions; I will not go into the details, as they are explained in the 20-minute video. To summarize the results: first, LiveNAS's PSNR gain over WebRTC is 1.96 dB on average under constrained network environments. Second, LiveNAS delivers a similar quality gain using only 25% of the GPU resources compared to continuous training. And finally, LiveNAS

improves QoE by 12% to 69% for live stream viewers. Before ending this talk, I would like to show you a demo: the left half is what WebRTC delivers under a network-constrained environment, and the right half is what LiveNAS delivers under the same environment. You can see that LiveNAS delivers noticeably better quality. To sum up, I have briefly talked about three things in this talk. First, LiveNAS is a new live video ingest framework that enhances the origin stream's quality via online learning.

Second, LiveNAS introduces novel design components, including the quality-optimizing scheduler and the content-adaptive trainer, and it produces significant QoE gains for live stream viewers. More information can be found on our website. Thank you for your attention, and if you have any questions, I'll be happy to take them during the live Q&A session.

Thanks, Jaehong. The first question, on Slack, is from an attendee at Facebook, and it has two parts: first, what was the breakdown of the bandwidth usage for

the live video stream as opposed to the training patches? And the second is: when the trainer suspends the training, do you continue uploading samples, or do you suspend the uploading to save bandwidth? First of all, thank you for the question. The ratio of bandwidth used between the live video stream and the training patches depends on the content and the available bandwidth; in our evaluation, we typically allocate to training patches about five to ten percent of the available bandwidth. When the

system suspends the training, our system keeps sending a small number of patches, with a minimum rate of 25 kilobits per second, to validate the DNN's quality and detect content changes; when a change is detected, the system resumes training and starts to send more patches. The second question is from Jonny Kang, on Zoom. The question is: the content-aware super-resolution network is essentially doing video compression, by training and testing on the same video. In this case, how is it conceptually different from DNN-based video compression?

Actually, that's a really good point. The question notes that our approach is in effect performing video compression, since it trains on the video it transmits. The difference between DNN-based video compression and ours is that our system still processes the video frames with a standard codec; the super-resolution neural network is not a new codec, but a component on top of it that upscales the low-resolution video. Okay, let me take one more question, also on Zoom, from Marco. The question is: for the presented PSNR

gains, did you use the same or different videos to train, and how much training did you need to get these gains? So, we didn't use the same video to train in advance; we train in a real-time environment, at the same time as the stream, so the training is online. And the second question was a measure of the training required: as I introduced in the video, we have a scheduler that allocates the resources required for the training at run time, so it changes depending on the content. Yes, there was a follow-up question on the training itself, which was:

did you do the online training from scratch, or do some fine-tuning of some pre-trained model? So, to answer the question: our system basically starts from a generic DNN model, which is trained on a standard benchmark, and starting from that model, our online training still provides a gain compared to the generic model. Thank you. There are a few more questions, actually, on Slack, and there are also some questions on the Zoom Q&A.

One question says: interesting approach and nice talk. The question is, does one need to be careful in sampling the high-resolution patches? Is it done simply periodically, or driven by motion events in the video stream? Thank you for the question. LiveNAS samples and uploads the patches periodically, whenever the sender queue is empty. So, I guess the question is whether there is anything different if the system detects scene changes; we use the scene-change information when sampling the patches for the

video. I think that answers the question. Another question, from Billy Mang: are there any statistics about the additional operating expenses of LiveNAS, for introducing new servers and GPUs? We report the amount of GPU resources used in the paper, and you can derive the cost from that amount. There was a follow-up question on that: so LiveNAS tries to save the training cost by adapting to content changes, but if the content changes frequently, you would not save much.

How might you reduce the GPU usage of the super-resolution inference? That's a really good point. Since nearby video frames share large redundancy, we could reuse the super-resolution results across adjacent frames; in this way, we can further reduce the resources used for inference. Okay, there is one more follow-up question, which I think has been asked several times, from Facebook. The question is: did you measure how much bandwidth was used for live streaming versus the training patches? Is there anything done

to the training patches to reduce their bandwidth consumption? [There is a connection problem.] Are you guys there? Okay, this is the last question. Is there a problem? Can you hear me? Yes, I hear you; we hadn't heard anything from you. Should I answer the question again? Yes, please. The training patches are JPEG-compressed before upload, and the bandwidth they use depends on what is available; on average, they take up about five to ten percent of the available bandwidth.
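The patch-bandwidth figures quoted in the talk (5% of frames every 2 seconds at roughly 124 KB/s, and 5-10% of the uplink) can be sanity-checked with simple arithmetic. The frame rate and per-frame JPEG size below are assumptions chosen for illustration, not numbers from the paper:

```python
def patch_bandwidth_kbytes(fps=30, sample_fraction=0.05, interval_s=2.0,
                           jpeg_bytes_per_frame=83_000):
    """Uplink cost (KB/s) of uploading JPEG-compressed training frames.

    fps and jpeg_bytes_per_frame are illustrative assumptions: at 30 fps,
    5% of the frames in a 2-second interval is 3 frames.
    """
    frames_per_interval = fps * interval_s * sample_fraction
    bytes_per_second = frames_per_interval * jpeg_bytes_per_frame / interval_s
    return bytes_per_second / 1000

print(round(patch_bandwidth_kbytes(), 1))  # 124.5 -- near the quoted 124 KB/s
```

Whether this fits within the 5-10% patch budget depends on the available uplink, which is exactly what the quality-optimizing scheduler adapts at run time.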




Similar talks

Tong Meng
Research Assistant at UIUC
Kuntai Du
PhD Candidate at University of Chicago
Daehyeok Kim
Senior Researcher at Microsoft

