Often the projects we work on require that our clients send us files. Usually these are logos, sometimes graphical branding elements, and sometimes existing footage, such as what’s called running footage from a car manufacturer. There’s a problem we have to address with almost every client: While most people know what jpgs and pngs are, these aren’t the best way to send us images. And rarely does anyone know the difference between 4:2:0 and 4:2:2 footage (it’s an important one). With that in mind, here are a few of the basics when it comes to file formats of both images and video. Pull on your suspenders and grab your pocket protectors—it’s about to get real geeky up in here.
Image Filetypes
Images are easier, so I’ll start with those.
- Some filetypes, such as jpg, are lossy, meaning not all of the original data is preserved when you save the file. That makes for really small filesizes, but the quality suffers, and it suffers a little more every time the file is re-saved.
- Most filetypes don’t support transparency. For logos, which usually need to sit on top of other artwork, this is obviously important. (There’s a short sketch after this list that shows both the lossy and transparency points in practice.)
- Most filetypes are raster images, meaning the image is stored as a fixed grid of pixels. These files can be low or high resolution, but they can’t be scaled up without losing quality. Vector filetypes (eps, ai, svg) store points and curves rather than individual pixel data, so they can be resized to any scale without any loss.
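To make the lossy and transparency points concrete, here’s a minimal sketch in Python using the Pillow imaging library (assuming it’s installed; the filenames are just placeholders). It shows that a PNG keeps the exact pixels and the transparency, while saving to JPEG means flattening the transparency away and accepting a lossy quality setting.

```python
# A minimal sketch with the Pillow library ("pip install Pillow").
# "logo.png" is a placeholder for a logo that has transparency.
from PIL import Image

logo = Image.open("logo.png")
print(logo.mode)  # 'RGBA' -> the A is the alpha (transparency) channel

# PNG: lossless, transparency preserved exactly.
logo.save("logo_copy.png")

# JPEG: no alpha channel allowed, so Pillow refuses to save RGBA directly.
# We have to flatten to RGB first, and the quality setting discards detail
# to shrink the file -- detail that never comes back.
logo.convert("RGB").save("logo_copy.jpg", quality=75)
```

You don’t need to run any of this yourself; the point is that the limitation lives in the format itself, not in whatever software you used to export it.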
Here’s what you need to know:
For logos and graphics, these are the best formats: eps, ai, svg, pdf, indd, or a high resolution png with transparency.
Don’t send us: low resolution pngs, jpgs, gifs, docs, or any other format you see lying around.
Video Filetypes
Videos are way more complex than images, so bear with me. Unlike an image, a video isn’t defined by a single filetype and a basic quality setting. Several things together determine what type of video you’re looking at and how good it looks.
- A video’s extension (mov, mp4, mpg) tells you its container. The container doesn’t directly determine the quality of the video, but different containers can hold different codecs, audio tracks, and metadata.
- A video’s codec, the scheme used to compress the actual picture data, is the most important piece. Most web videos are encoded in H.264 or in VP8/VP9, the codecs inside WebM files. In fact, most web videos you watch have both available, since different browsers prefer different formats.
- (Usually) independent of container and codec is the video’s resolution. Common resolutions today are HD (1920 x 1080 pixels) and UHD (3840 x 2160).
- Then comes bitrate, which is simply how many bits of information are stored for each second of video. For a given codec and resolution, the higher the bitrate, the higher the quality, but the larger the filesize. (The sketch after this list shows how to check all four of these for any file you have on hand.)
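If you’re curious what’s actually in a file someone sent you, here’s a small sketch that asks ffprobe (a command-line tool that ships with FFmpeg, which I’m assuming you have installed) for exactly those four things. The filename is a placeholder.

```python
import json
import subprocess

def video_summary(path):
    """Ask ffprobe for the container, codec, resolution, and bitrate of a video."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",  # look at the first video stream only
            "-show_entries",
            "stream=codec_name,width,height,bit_rate:format=format_name,bit_rate",
            "-of", "json",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    stream, container = info["streams"][0], info["format"]
    return {
        "container": container.get("format_name"),  # e.g. "mov,mp4,m4a,3gp,3g2,mj2"
        "codec": stream.get("codec_name"),           # e.g. "prores" or "h264"
        "resolution": f'{stream.get("width")}x{stream.get("height")}',
        # some codecs only report a bitrate at the container level
        "bitrate": stream.get("bit_rate") or container.get("bit_rate"),
    }

print(video_summary("some_footage.mov"))  # placeholder filename
```

A ProRes mezzanine file will typically report a bitrate well over a hundred megabits per second, while a web delivery file is usually in the single digits.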
There are generally three categories applied to video formats: source footage from the camera, mezzanine footage sometimes used in the edit process, and videos for delivery. Source footage will always have the best quality, since it hasn’t been modified. Mezzanine footage (typically encoded in ProRes or DNxHD, ending in mov or mxf) is encoded at a very high quality, but can take up significant space on your hard drive. Delivery footage for web (including YouTube, et al) has a very small filesize, but much lower quality. Most people wouldn’t notice much difference between a very high quality web file and a mezzanine file, but, unlike mezzanine footage, web files don’t have the latitude necessary to go through an edit. Think of it as the difference between a bolt of cloth and a finished shirt. Sure, you could cut the shirt to pieces to make a new one, but would you want to?
So why the confusing array of formats? Why not use a single standard, or set of standards? The answer, as you might imagine, comes down to filesize. On a typical shoot on our Canon Cinema cameras, we film between 20 and 50 gigabytes of footage. That generally adds up to a couple hours of footage, or roughly 3 megabytes per second. Our cameras do a pretty good job of keeping the filesize down. We’ve been testing out the Blackmagic Ursa with an eye toward upgrading, and depending on what format you’re shooting in, its data rate runs from 27.5MB/s to 180MB/s. Filming for an hour straight at the top end, that adds up to over 632 gigabytes! Not something you want to try to post to YouTube. A typical 60 second commercial in ProRes is around 2 gigabytes. Still not down to size. Encoded for web in H.264, that file drops to roughly 60 megabytes, less than 3% of the mezzanine size and roughly 0.15% of the total amount of footage we might shoot in a day. A large part of that difference is bitrate: simply, how much information is there for each second of video. To break that down, you have to look first at chroma subsampling.
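The arithmetic behind those numbers is worth seeing once, so here’s a quick back-of-the-envelope sketch. The data rates are the ones quoted above; everything else is just multiplication.

```python
# Back-of-the-envelope filesize math, using the figures quoted above.

def filesize_gb(megabytes_per_second, seconds):
    """Data rate (MB/s) times duration (s), converted to gigabytes (1 GB = 1024 MB)."""
    return megabytes_per_second * seconds / 1024

# One hour of Ursa footage at its heaviest setting (180 MB/s):
print(filesize_gb(180, 60 * 60))   # ~632.8 GB

# A 60-second ProRes commercial at ~2 GB works out to a data rate of:
print(2 * 1024 / 60)               # ~34 MB/s

# The same 60 seconds delivered for web at ~60 MB is about 1 MB/s,
# or roughly 8 megabits per second of H.264:
print(60 / 60, 60 / 60 * 8)        # 1.0 MB/s, 8.0 Mbit/s
```

That last figure, around 8 megabits per second, is in the same ballpark as what the big video sites recommend for 1080p uploads.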
The codec also determines the chroma subsampling of a video. 4:2:0 footage stores chroma (color) information for only one out of every four pixels. Most mezzanine footage is 4:2:2, which stores chroma information for every other pixel. 4:4:4 footage stores chroma information for every pixel. Many professional-grade cameras go a step further and shoot raw footage, which stores not just information for every pixel but more information than can even be displayed at one time, giving you a lot of latitude to play with what the footage looks like in post-production. Believe it or not, web video is almost invariably 4:2:0! Gotta keep that filesize down.
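To put rough numbers on that, here’s a small sketch that counts how many brightness (luma) and color (chroma) samples each scheme keeps for a single 1920 x 1080 frame, before the codec does any further compression. The frame size and scheme names come from the discussion above; the rest is just counting.

```python
# Raw sample counts per frame for the common chroma subsampling schemes.
# Color lives in two chroma planes (Cb and Cr); subsampling shrinks those planes.

CHROMA_FRACTION = {
    "4:4:4": 1.0,    # a chroma sample for every pixel
    "4:2:2": 1 / 2,  # chroma for every other pixel (half horizontal resolution)
    "4:2:0": 1 / 4,  # chroma for one pixel in four (half horizontal and vertical)
}

def samples_per_frame(width, height, scheme):
    luma = width * height                                  # one brightness sample per pixel
    chroma = 2 * width * height * CHROMA_FRACTION[scheme]  # two chroma planes, subsampled
    return int(luma + chroma)

full = samples_per_frame(1920, 1080, "4:4:4")
for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    total = samples_per_frame(1920, 1080, scheme)
    print(f"{scheme}: {total:>9,} samples per frame ({total / full:.0%} of 4:4:4)")
```

Before the codec has done any real work, 4:2:0 has already thrown away half the raw data relative to 4:4:4, which is a big part of how delivery files stay so small.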
Here’s what you need to know:
If you have the source footage straight from the camera, send us that! If not, we really need mezzanine footage to work with. The files we need usually end in mov or mxf and are almost always encoded in ProRes or DNxHD (so if you see one of those two words in the filename, you’ve found it!). If you’re looking at an mp4, that’s probably just for web use and not for editing.
So if you ever happen to send us footage or graphics and we tell you it’s in the wrong format, now hopefully you have a better understanding as to why. If you’re unsure of what you have, send it over! We can tell you pretty quickly whether it’ll work for whatever it is we need it for.