When you’re thinking about streaming media, you probably fall into one of two camps: Either you already know something about transcoding, or you’re wondering why you keep hearing about it. If you aren’t sure you need it, bear with me for a couple of paragraphs. I’ll clarify what transcoding is (and isn’t), and why it can be critical to your streaming success, especially if you want to deliver adaptive streams to any device.

So, What Is Transcoding?

First, the word transcoding is commonly used as an umbrella term that covers a number of digital media tasks:

Transcoding, at a high level, is taking already-compressed (or encoded) content, decompressing (decoding) it, and then altering and recompressing it in some way. For instance, you might change the audio and/or video format (codec) from one to another, such as converting from an MPEG-2 source (commonly used in broadcast television) to H.264 video and AAC audio (the most popular codecs for streaming). Other basic tasks might include adding watermarks, logos, or other graphics to your video.
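The codec conversion described above is commonly done with the open-source FFmpeg tool. The sketch below assembles (but does not run) such a command in Python; the file names are hypothetical, and FFmpeg would need to be installed separately to execute it.

```python
# Sketch: assembling an FFmpeg command that transcodes an MPEG-2 source
# (broadcast.ts) into H.264 video and AAC audio. File names are
# hypothetical examples; the command is built but not executed here.
import shlex

def build_transcode_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg",
        "-i", src,          # already-compressed input (FFmpeg decodes it)
        "-c:v", "libx264",  # recompress video as H.264
        "-c:a", "aac",      # recompress audio as AAC
        dst,
    ]

cmd = build_transcode_cmd("broadcast.ts", "stream.mp4")
print(shlex.join(cmd))
```

Note how the decode step is implicit: FFmpeg decompresses the input internally before re-encoding with the requested codecs.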

Transrating refers specifically to changing bitrates, such as taking a 4K video input stream at 13 Mbps and converting it into one or more lower-bitrate streams (also known as renditions): HD at 6 Mbps, or other renditions at 3 Mbps, 1.8 Mbps, 1 Mbps, 600 Kbps, etc.

Transsizing refers specifically to resizing the video frame; say, from a resolution of 3840×2160 (4K UHD) down to 1920×1080 (1080p) or 1280×720 (720p).
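Transrating and transsizing usually go hand in hand: each rung of a rendition ladder pairs a smaller frame size with a lower bitrate. The sketch below is illustrative only, with rungs echoing the figures in the text; real ladders are tuned per content and audience.

```python
# Sketch: an illustrative ABR ladder combining transrating (bitrate) and
# transsizing (frame size). Values mirror the article's examples and are
# not a recommendation.

def scale_to_height(src_w: int, src_h: int, target_h: int) -> tuple[int, int]:
    """Scale a frame to a target height, preserving aspect ratio."""
    target_w = round(src_w * target_h / src_h)
    # Keep the width even, as most codecs require even dimensions.
    return (target_w - target_w % 2, target_h)

src_w, src_h = 3840, 2160  # 4K UHD source
ladder = [(1080, 6000), (720, 3000), (720, 1800), (480, 1000), (360, 600)]

for height, kbps in ladder:
    w, h = scale_to_height(src_w, src_h, height)
    print(f"{w}x{h} @ {kbps} Kbps")
```

Each rendition keeps the source's 16:9 aspect ratio, so the player can switch between them without the picture changing shape.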

So, when you say “transcoding,” you might be referring to any combination of the above tasks, and often people are. Video conversion is computationally intensive, so transcoding often requires more powerful hardware, including faster CPUs or graphics acceleration capabilities.

What Transcoding Is Not

Transcoding should not be confused with transmuxing, which is also referred to as repackaging, packetizing, or rewrapping. Transmuxing is when you take compressed audio and video and, without altering the actual audio or video content, (re)package it into different delivery formats.

For example, you might have H.264/AAC content, and by changing the container it’s packaged in, you can deliver it as HTTP Live Streaming (HLS), Smooth Streaming, HTTP Dynamic Streaming (HDS), or Dynamic Adaptive Streaming over HTTP (DASH). The computational overhead for transmuxing is far smaller than for transcoding.
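FFmpeg exposes transmuxing through its stream-copy mode: the sketch below builds (but does not run) a command that rewraps existing H.264/AAC streams into an HLS playlist and segments. Because no decoding or re-encoding happens, the CPU cost stays low. File names are hypothetical.

```python
# Sketch: transmuxing with FFmpeg's stream-copy mode. "-c copy" repackages
# the compressed streams as-is; "-f hls" selects the HLS container output.
# The command is assembled but not executed here.
import shlex

transmux_cmd = [
    "ffmpeg",
    "-i", "stream.mp4",  # existing H.264/AAC input
    "-c", "copy",        # copy both streams: no decode or re-encode
    "-f", "hls",         # rewrap into HLS playlist + segments
    "playlist.m3u8",
]
print(shlex.join(transmux_cmd))
```

Swapping `-f hls` for another muxer would target a different delivery format without touching the encoded media.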

When Is Transcoding Critical?

Simply put: Transcoding is critical if you want your content to reach more end users.

For example, let’s say you want to do a live broadcast using a camera and encoder. You might be compressing your content with an RTMP encoder, choosing the H.264 video codec at 1080p.

This needs to be delivered to online viewers. But if you try to stream it directly, you’ll have a few problems. First, viewers without sufficient bandwidth won’t be able to view the stream. Their players will buffer continuously as they wait for packets of that 1080p video to arrive. Second, the RTMP protocol is no longer widely supported for playback. Apple’s HLS is far more widely used. Without transcoding and transmuxing the video, you’ll exclude nearly anybody with slower data speeds, as well as viewers on tablets, mobile phones, and connected TV devices.

Using transcoding software or a transcoding service, you can simultaneously create a set of time-aligned video streams, each with a different bitrate and frame size, while converting the codecs and protocols to reach additional viewers. This set of internet-friendly streams can then be packaged into several adaptive streaming formats (e.g., HLS), allowing playback on virtually any screen on the planet.
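The adaptive package is what ties those time-aligned renditions together. In HLS, that glue is a master playlist: the player reads it and switches between renditions as bandwidth changes. The sketch below generates a minimal one; the URIs, resolutions, and bandwidth figures are illustrative.

```python
# Sketch: generating a minimal HLS master playlist that advertises three
# renditions. Figures and URIs are illustrative, not a recommendation.

def master_playlist(renditions: list[tuple[int, str, str]]) -> str:
    """renditions: (peak bandwidth in bps, WxH resolution, playlist URI)."""
    lines = ["#EXTM3U"]
    for bandwidth_bps, resolution, uri in renditions:
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth_bps},RESOLUTION={resolution}"
        )
        lines.append(uri)
    return "\n".join(lines) + "\n"

playlist = master_playlist([
    (6_000_000, "1920x1080", "1080p/index.m3u8"),
    (3_000_000, "1280x720", "720p/index.m3u8"),
    (600_000, "640x360", "360p/index.m3u8"),
])
print(playlist)
```

Because the renditions are time-aligned, the player can jump from the 1080p entry to the 360p entry mid-stream without a visible discontinuity.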

Another common example is broadcasting live streams using an IP camera, as might be the case with surveillance cameras and traffic cams. Again, to reach the largest number of viewers with the best quality their bandwidth and devices allow, you’d want to support adaptive streaming. You’d deliver one HD H.264/AAC stream to your transcoder (typically located on a server in the cloud), which in turn would create multiple H.264/AAC renditions at different bitrates and resolutions. Then you’d have your media server (which might be the same server as your transcoder) package those renditions into one or more adaptive streaming formats before delivering them to end users.
