Codecs and File Types – A Crash Course

On more than a few jobs this last year, I’ve run into a recurring nightmare – the lack of a specific delivery requirement. As an editor, when I ask “What kind of output do you need?”, the response “Whatever you think is best” is not exactly useful. Only very slightly more useful is “a good quality MOV file”.

It seems to me that people feel obligated to tell you something, *anything*, so as to avoid those most terrifying words: “I don’t know”. I can understand that. So here’s my attempt to provide a quick crash course in video files.

It’s important to know this as early in the process as possible, because you set some of these attributes (frame rate, pixel size) at the beginning of the editing process, and it becomes a much bigger job to alter them later than simply getting them right in the first place! It also has an impact on rendering and exporting times. For example, if you know you’re going to be producing an output for standard-def TV, you might want to start your project at that resolution, with a timeline that accommodates interlaced material. But we’ll get to that later.

File Type;
This refers more to how the file is played back than to how it is encoded. Another way to think of it is with the analogy of a mail delivery: you could get a letter or a box containing the item being sent, and the file type is the equivalent of the type of box or envelope.

If you’re making a file for playback on a computer, the three most often used types are QuickTime (MOV), Windows Media (WMV), and Flash (FLV). If you’re making a disc, you have DVD or BluRay, but we need not concern ourselves with the technicalities of those formats for the moment.

Which of these you’ll need depends on the system the file will be played on. For a PC, WMV files are best, being compatible with anything that runs Windows. For a Mac, QuickTime is best, as it’s bundled with the operating system. Flash is often used on the web (YouTube and Vimeo, for example, take users’ videos in almost any format and transcode them to Flash for their players, though they have begun to use the alternative HTML5 web standard player as well), although WMV and QuickTime can also be used.

Codec;
This is the most complicated part. The codec is the means by which a video is compressed and decompressed (hence the abbreviation co-dec). The tricky part is that you can sometimes use the same codec with different file types. A Flash FLV file can use the H.264 codec, though that codec is more commonly used for making MOV files. Videos from the iTunes store and trailers on the Apple website both use this codec, though they are wrapped in the .m4v file type – confused yet?

For the most part, if you know what file type you want, the most common consumer delivery codecs (Windows Media 9, H.264 or Flash codec) will suffice.

Pixel Resolution;
HD TVs are 1920×1080 (just 1080 for short), Apple’s HD Movie rentals are at 1280×720 (720 for short). Standard def TV in the UK is 720×576, in the USA it’s 720×480. People get extremely hung up on pixel sizes, mostly because it’s easy for someone to understand that a TV that is capable of “Full HD” at 1920×1080 pixels is better than an SD TV at 720×576. But it’s only one element of making a movie file.

If you’re making something for the web, for example, even if you’ve shot 1920×1080, you might want to make the movie smaller so that users can download it faster, or to reduce the processor load if you know the movie will be played on an older computer.
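If you do shrink a frame for the web, the arithmetic is just a scale factor. Here’s a minimal sketch (the function name and the even-height rounding rule are my own; most codecs do prefer even dimensions):

```python
def downscale(width, height, target_width):
    """Scale a frame to `target_width`, preserving aspect ratio and
    rounding the height up to an even number (many codecs require it)."""
    scale = target_width / width
    h = round(height * scale)
    return target_width, h + (h % 2)  # bump odd heights to even

# 1920x1080 shrunk to a web-friendly 640-wide frame:
print(downscale(1920, 1080, 640))  # (640, 360)
```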

If you’re going to be projecting the file on a big screen, you’ll want as much resolution as possible, so usually you’ll try to keep the maximum resolution you shot at.

If you’re showing your film on a monitor, then it’ll be whatever the maximum resolution of the monitor is.

The RED ONE camera shoots in 2k (2048×1152) or 4k (4096×2304), which can technically be uploaded to YouTube, but is extremely (and somewhat prohibitively) processor-intensive to view. Digital cinema projectors can handle 2k & 4k, but my knowledge in this area is very limited, so I won’t get into it here.

Bit-rate;
The bit-rate is what determines the size of the file, speaking in terms of megabytes and gigabytes. There is no right answer about what bit-rate your file should be; it depends entirely on the end user. Often it’s just as useful to find out how big (in megabytes) the file can be, then work out the bit-rate that will meet that requirement (based on the duration of the clip in question).
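That back-calculation – from an allowed file size and a duration to a bit-rate – is simple arithmetic. A minimal sketch (the function name and the 5% audio/container overhead figure are my own assumptions):

```python
def bitrate_for_target_size(size_mb, duration_s, overhead=0.05):
    """Rough video bit-rate (in Mbps) that fits a clip of
    `duration_s` seconds into `size_mb` megabytes, leaving
    `overhead` as headroom for audio and container data."""
    total_megabits = size_mb * 8  # 1 megabyte = 8 megabits
    usable = total_megabits * (1 - overhead)
    return usable / duration_s

# e.g. a 3-minute clip that must fit in a 100 MB limit:
print(round(bitrate_for_target_size(100, 180), 2))  # 4.22 (Mbps)
```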

For reference though, as a guide, here are some common bit-rates.
The maximum bit-rate of DVD is 7 Mbps (that’s megabits per second – distinct from megabytes, helpfully).
The maximum bit-rate of BluRay is 40 Mbps.
Apple’s iTunes HD movies and TV shows are 5 Mbps.

What I tend to use is something like 500 kbps for client approval versions, as they don’t need to be as beautiful as the final output, and the low bit-rate keeps the file size small for sending over the Internet.
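Running the same arithmetic forwards shows why a low preview bit-rate matters for internet delivery. A sketch (video-only, no audio or container overhead; the 5-minute duration is just illustrative, and 40 Mbps is the BluRay maximum mentioned above):

```python
def file_size_mb(bitrate_kbps, duration_s):
    """Approximate file size in megabytes for a clip encoded
    at `bitrate_kbps` kilobits per second (video only)."""
    kilobits = bitrate_kbps * duration_s
    return kilobits / 8 / 1000  # kilobits -> kilobytes -> megabytes

# A 5-minute approval cut at 500 kbps vs. a BluRay-grade 40 Mbps stream:
print(round(file_size_mb(500, 300), 1))     # 18.8 (MB)
print(round(file_size_mb(40_000, 300), 1))  # 1500.0 (MB)
```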

Interlaced or Progressive;
In the dark ages, to squeeze more resolution from a standard TV, a system was created whereby two half-frames (called fields) were recorded in the same time as one full frame would be, but slightly offset from each other – upper field and lower field.

For playback on a computer, the file needs to be progressive, because computer monitors work in a different way to old televisions. HD files, whilst capable of being interlaced (often for capturing sporting events, for example, where you want smoother motion), tend to be progressive, as this more closely replicates the look of film, which is of course a “progressive”, frame-by-frame system. Interlacing also still exists as an option on BluRay, and as the default for DVDs.

I hope this blog entry has been informative and not too technical. Comments and corrections are always welcome, and I’ll update this guide as they come in, and technology marches inevitably forward!