Glossary of Terms
Compression / Encoding
Video information is encoded to transport it over the internet and to deliver it to various kinds of video players. Along the way, it is often compressed to make the file smaller and easier to transport. The more compressed the video is, the less quality is retained, and the more degraded the image usually becomes. Compression is always a compromise between quality and file size. Video compression is also often known as video encoding. More here on Wikipedia.
Codec
The word “codec” is short for coder-decoder (or compressor-decompressor). A codec is a compression algorithm used to compress the video information at one end with an encoding program, and to decompress it at the other end for playback, whether in a video player such as VLC or QuickTime, or by a hardware decoder chip inside a DVD player. Basically, it is the piece of software that makes your video readable by your computer, allowing you to play it. Without the correct codec, you won’t be able to play the audio, the video, or both. More here on Wikipedia.
Format
A format, or “container format,” is used to bind together video and audio information, along with other information such as metadata or even subtitles. You’ve probably heard of things like .mp4, .mov, .wmv, etc. These are all container formats that put the audio and the video together. For example, an .mp4 file might use the AAC audio codec together with the H264 video codec, while an .avi container might use MP3 audio with an Xvid video codec. More here on Wikipedia.
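If you want to see which codecs a particular container actually holds, a tool such as ffprobe (part of the FFmpeg project, and not something this guide otherwise covers) can list them. Here is a minimal sketch in Python; the file name is just an example:

    # Sketch: list the codecs inside a container file using ffprobe.
    # Assumes ffprobe (from FFmpeg) is installed; "example.mp4" is a placeholder.
    import json
    import subprocess

    def list_codecs(path):
        # Ask ffprobe for machine-readable (JSON) information about each stream.
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
            capture_output=True, text=True, check=True,
        )
        for stream in json.loads(result.stdout)["streams"]:
            print(stream["codec_type"], "->", stream.get("codec_name"))

    list_codecs("example.mp4")   # e.g. prints "video -> h264" and "audio -> aac"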
Standard
A standard, such as the MPEG standards set by the Moving Picture Experts Group, is a set of rules that video codecs and formats are designed to adhere to. This standardisation allows manufacturers and software designers to anticipate the kind of video, audio, and other information that their software or microchips will have to deal with.
For example, MPEG1 is used in VCDs, while MPEG2 is used in DVDs. A newer, more advanced part of the MPEG4 standard, known as H.264, went on to become the codec used by Blu-ray, HD DVD, and many other video formats. More here on Wikipedia.
Bitrate
The bitrate is the amount of data a file uses per second to store the audio and picture information. Video bitrates are much higher than audio bitrates, as they must describe the highly complex visual information in each frame. Bitrates are usually measured in kilobits per second, also known as kbit/s or kbps.
Bitrates can be constant or variable; with a variable bitrate, the codec adjusts the amount of data based on how complex the video and audio are at the time. We use a constant bitrate in the recommended settings listed here, as many systems react better to a constant bitrate than to spikes in the data throughout the file. It also makes it easier to predict the resulting file size. More here on Wikipedia.
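One reason a constant bitrate makes file sizes predictable is that the size is simply the bitrate multiplied by the duration. A rough back-of-the-envelope sketch in Python, using example numbers only:

    # Rough file-size estimate for a constant-bitrate encode.
    # Bitrates are in kilobits per second (kbit/s); 8 bits make 1 byte.
    def estimated_size_mb(video_kbps, audio_kbps, duration_seconds):
        total_kilobits = (video_kbps + audio_kbps) * duration_seconds
        return total_kilobits / 8 / 1024       # kilobits -> kilobytes -> megabytes

    # Example: a 10-minute clip at 1000 kbit/s video plus 128 kbit/s audio
    print(round(estimated_size_mb(1000, 128, 10 * 60), 1), "MB")   # about 82.6 MB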
Frame Rate
Frame rate is the number of video frames per second. PAL formats usually use 25 frames per second (fps) and NTSC formats usually use 29.97 or 30 fps. We tend to round a 29.97 frame rate to 30.
Some cameras produce lower frame rates. If your source file has a lower fps, you can compress your video at that rate instead, but it’s usually better to avoid frame rates higher than 30. More here on Wikipedia.
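If you are not sure what frame rate your source file uses, a tool such as ffprobe can report it before you choose a setting. A minimal sketch in Python; the file name is only an example:

    # Sketch: read the frame rate of the first video stream with ffprobe.
    # ffprobe reports it as a fraction, e.g. "30000/1001" for 29.97 fps.
    import subprocess

    def frame_rate(path):
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=r_frame_rate",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        numerator, denominator = out.split("/")
        return float(numerator) / float(denominator)

    print(round(frame_rate("example.mp4"), 2))   # e.g. 25.0 or 29.97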
Deinterlacing / Decombing
Older video technologies such as Standard Definition PAL or NTSC video for broadcast television used two interlaced fields per frame of video. When these files are played on a computer, or if other systems re-encode them, what are known as interlacing artifacts can appear. This often looks like a “comb” effect of horizontal lines across the screen. If you have interlaced content, you need to de-interlace it. However, de-interlacing can produce its own problems, including loss of fine resolution.
Handbrake uses a sophisticated filter called Decomb as part of the encoding process. You can leave it on all the time, and it will only deinterlace frames that actually show interlacing. The settings we recommend above use the Decomb filter. See more here.
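For readers who prefer the command line, HandBrakeCLI (the command-line version of Handbrake) exposes the same Decomb filter. A sketch of what that can look like, called from Python; the file names and bitrate are only example values, not recommendations:

    # Sketch: encode an interlaced source with Handbrake's command-line version,
    # leaving the Decomb filter on so only frames with interlacing are treated.
    # File names and the bitrate are illustrative placeholders.
    import subprocess

    subprocess.run([
        "HandBrakeCLI",
        "-i", "interlaced_source.mpg",   # input file
        "-o", "output.mp4",              # output container
        "-e", "x264",                    # H264 video via the x264 encoder
        "-b", "1000",                    # average video bitrate in kbit/s
        "--decomb",                      # deinterlace only where needed
    ], check=True)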
Audio Sample Rate
The audio sample rate refers to how many times per second the audio signal is measured, or “sampled,” throughout the file. CDs are usually sampled at 44.1kHz, and video audio is usually sampled at 48kHz. It is usually better to keep the original sample rate, or to change it to 48kHz, rather than choosing 44.1kHz for video. More here on Wikipedia.
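To make those numbers concrete, here is a quick sketch of how many samples a 48kHz recording contains; the clip length is just an example:

    # A 48kHz sample rate means the audio is measured 48,000 times per second,
    # per channel. Example: a 3-minute stereo clip.
    sample_rate_hz = 48_000
    duration_seconds = 3 * 60
    channels = 2                        # stereo

    samples = sample_rate_hz * duration_seconds * channels
    print(f"{samples:,} samples")       # 17,280,000 samples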
H264
H264 (or MPEG4-AVC) is a modern, fast, and very efficient video codec, and it’s currently the best choice for web video in 2014. It is part of the Moving Picture Experts Group’s MPEG4 standard and is also supported by the HTML5 standard, which means many web browsers can play this codec natively. See more on H264 on Wikipedia.
H265
Also known as High-Efficiency Video Coding (HEVC), H265 is a video compression standard designed to be the successor to the widely used H.264. In comparison, H265 offers from 25% to 50% better data compression at the same level of video quality or substantially improved video quality at the same bit rate. See more about H265 on Wikipedia.
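To put that 25% to 50% figure in concrete terms, here is a rough sketch; the starting H264 bitrate is an arbitrary example:

    # Rough illustration of the quoted 25-50% bitrate saving of H265 over H264
    # at a comparable level of quality. The starting bitrate is only an example.
    h264_kbps = 2000                                 # example H264 bitrate, kbit/s

    for saving in (0.25, 0.50):
        h265_kbps = h264_kbps * (1 - saving)
        print(f"{int(saving * 100)}% saving -> about {int(h265_kbps)} kbit/s")
    # 25% saving -> about 1500 kbit/s
    # 50% saving -> about 1000 kbit/s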
MP4
MP4 is a file container format (see above) developed by the Moving Picture Experts Group. It refers to the wrapper around the video and audio codecs, such as H264 and AAC. It usually has the file extension “.mp4,” but closely related files may use “.m4v,” “.mov,” and other extensions. More here on Wikipedia.
Compression Artefacts
This refers to areas of the picture that can look grainy or blocky, or that contain other distortions of colour and movement, in a video file that has been highly compressed. These artefacts are created when there is not enough information to describe that part of the picture accurately. When compressing video, the aim is to create the smallest file with the least amount of compression artefacts. More here on Wikipedia.
Ripping
Ripping refers to the process of copying a movie from a DVD and then compressing it to a new video file. Sometimes this requires working around anti-copying technologies. Handbrake was designed for this purpose. More here on Wikipedia.
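As a rough sketch of what ripping can look like with HandBrakeCLI, the command-line version of Handbrake (the device path, title number, and output name are only examples, and copy-protected discs may need extra decryption libraries):

    # Sketch: rip title 1 from a DVD with HandBrakeCLI and compress it to H264.
    # The device path, title number, and output name are placeholder values.
    import subprocess

    subprocess.run([
        "HandBrakeCLI",
        "-i", "/dev/dvd",        # the DVD drive, or a path to a DVD folder/ISO
        "-t", "1",               # title number on the disc (often the main feature)
        "-o", "movie.mp4",       # output file in an MP4 container
        "-e", "x264",            # H264 video via the x264 encoder
    ], check=True)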
There are many good sources of information about compression on the web, but be careful: some articles contain misinformation because few really understand the technology and terminology involved. Below are links to information about open source video codecs, open standards, and a glossary of terms.
Guides and Tutorials
Try Handbrake’s own wiki for a guide to all of Handbrake’s features and settings:
This page lists video compression software available for Linux:
Video Tutorials
2-Minute Handbrake H264 Video Tutorial
This is a very quick 2-minute tutorial on using Handbrake to make a video with the settings we have recommended.
50+ H264 Video Tutorials
Vimeo has resources on how to compress video with the settings we have recommended, using about 50 different programs, including iMovie, Toast, Avid, Vegas, Premiere, QuickTime, and Final Cut Pro.
In-Depth Handbrake H264 Tutorial (One hour)
This is an incredibly in-depth 10-part series on using Handbrake to make H264 video. If you watch this whole series, you will have a very thorough understanding of all the settings that are selected. It is a little out of date, as it was produced in 2010, but most of the information is still relevant today.
DVD-Ripping Guide for Handbrake
Here is a great step-by-step manual for ripping DVDs using Handbrake on FLOSSManuals.
Forums
This is a follow-up to the doom9 website and forums, which hark back to the early days of Xvid. You can find threads here about popular current codecs, compression software on the horizon, video players, and DVD rippers. The site’s users are mainly people interested in ripping and compressing DVD or Blu-ray movies.
Here you can check the Video Conversion topic for threads about compression, but there are also other topics such as authoring disc-based media, video streaming, camcorders, DVD ripping, editing, programming multimedia applications, and video issues specific to Mac and Linux.
Another forum without listed topics, but you can search for solutions to many issues and read what other users have to say (or ask for help yourself).
Open Standards and FOSS Codecs
It is worth mentioning that Handbrake actually uses a free and open-source software (FOSS) version of H.264 called x264, and Handbrake itself is FOSS too. FOSS codecs are released under free software licenses such as the GPL, which ensure community ownership by allowing others to freely distribute, modify, and contribute to them. H264 can be considered an open standard. Open standards are technical definitions for video formats and codecs that are publicly released and have had an open process in their design.
It’s important to invest in FOSS and open standards to prevent video technology from falling into exclusively corporate hands, which may introduce serious concerns about affordability, security, and access. As community media practitioners, it’s easy to see why we need to support independent media infrastructures such as radio and TV stations – the same principles apply when it comes to media software and internet video technologies.