Apple recently announced that its new iPhone 13 Pro and Pro Max can record in the ProRes format, news that was met by industry professionals with a mixture of excitement and incredulity. Some cinematographers and colorists questioned the logic because they saw ProRes as "old" and "inefficient," while others claimed it wouldn't be that much better than HEVC. But ProRes is not just another format: it greatly accelerated the professional industry's adoption of digital, and that history is one of the primary reasons it is still relevant today.
Combined with a new set of lenses (or cameras, to be more accurate) that add a genuinely useful close-up focal length and surprisingly good dynamic range and color response, this starts to look a little more "pro" than "phone".
When Filmic Pro got in before Apple and released its implementation of ProRes capture for the iPhone 13 Pro, there was a collective gasp from many in the industry as test images started to appear and the quality difference became immediately apparent. However, that really only scratches the surface of why ProRes matters.
Now that the ProRes-enabled iOS update is available and Apple has also announced the M1 Pro and M1 Max with ProRes hardware acceleration in the new 14" and 16" MacBook Pro models, it's perhaps worth revisiting some of the reasons why ProRes was, and still is, such a big deal for the industry and why Apple would be doubling down on this family of codecs.
ProRes basically began as Apple's second attempt to solve the problem of editing HDV. Apple and its Final Cut Pro application had ridden the wave of accessible desktop video editing in the late 1990s, and one of the key developments that made this possible was the DV video format. DV was one of those things that punched above its weight from day one. The small, lightweight, and affordable consumer and prosumer cameras delivered picture quality far closer to traditional broadcast quality than had ever been possible outside of traditional TV camera systems. And the FireWire interface built into the cameras and into the Apple Macs of the day made it possible to get the pictures and sound into a computer with no additional hardware and, just as importantly, no loss of quality. The data traveled straight from the tape along the FireWire cable to the Mac. Apple's new Final Cut Pro app then made it possible to edit this footage with frame-accurate precision while maintaining the "broadcast-ish" picture and sound quality.
It is hard to put into perspective now what a big deal all of this was back then. Tape-to-tape edit suites were extremely expensive and, to start with, digital NLE systems were even more so. Suddenly individuals and businesses could go to their local electronics store and get everything they needed to shoot and edit video to a standard that, until very recently, had only been possible for TV stations and major facilities. Schools, universities, drama clubs, and even advertising agencies could suddenly create video content in-house to a standard that simply wasn't possible before.
Around the same time, HDTV was starting to make an impact at the high end of the industry. Initially, Sony's HDCAM tape format made shooting HD 1920 x 1080 a similar process to shooting on the industry-standard Digital Betacam format, and over the next few years, post-production also became manageable, particularly with the introduction of Blackmagic Design's HD DeckLink cards that allowed HD-SDI video sources to be captured directly into a computer.
So when the HDV format was announced in 2003, it promised to bring these two huge leaps in video technology together: the affordable and accessible DV with the higher resolution of HD. However, HDV actually delivered something closer to the original intention of DV, which was a high-end home video format rather than an entry-level professional format. The compromise needed to get a 1920 x 1080 image recording onto the little tape format involved a completely different approach to compression. Where DV compressed each standard-definition frame as a separate still image at a data rate of 25 Mbps, HDV used MPEG-2 to compress Groups of Pictures (GOPs), the same system used by DVDs, digital broadcast TV, streaming services, and so on.
It is a very efficient system for the delivery of images because it makes it possible to encode visually similar picture quality at drastically lower data rates. One big downside is that it is much more processor-intensive to edit with, because editing requires that the whole group of pictures be decoded as you play, scrub, or step frame by frame… the exact things that you do constantly while editing. While this has become much less of an issue with more powerful computers, in the mid-2000s it was a big barrier, and a lot of people wanted to edit HDV footage with the easy processes they expected from DV.
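A toy model makes the decode-cost difference concrete. This is a deliberate simplification (a simple closed GOP where reaching any frame means decoding everything back to the previous keyframe; the GOP size of 15 is an illustrative assumption, and real MPEG-2 structures with B-frames are more complex), but the trend it shows is the real one:

```python
# Toy model of random-access cost in long-GOP vs intra-frame footage.
# Assumption: a simple closed GOP where decoding frame n requires decoding
# every frame back to the previous keyframe (I-frame).

def frames_to_decode(frame_index: int, gop_size: int) -> int:
    """Frames that must be decoded to display one arbitrary frame."""
    if gop_size <= 1:                      # intra-frame codec: every frame stands alone
        return 1
    return frame_index % gop_size + 1      # distance back to the last I-frame, inclusive

# Seeking independently to each of the first 25 frames (one second at 25 fps):
intra = sum(frames_to_decode(n, gop_size=1) for n in range(25))      # DV/ProRes-style
long_gop = sum(frames_to_decode(n, gop_size=15) for n in range(25))  # HDV-style

print(intra, long_gop)  # → 25 175
```

Seven times the decode work for the same second of footage is exactly the kind of overhead that made HDV scrubbing painful on mid-2000s hardware.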
Apple solved this problem by developing the Apple Intermediate Codec, or AIC. Like HDV, AIC was 8-bit 4:2:0, but at around four times the data rate and, most importantly, using intra-frame rather than inter-frame (long-GOP) compression. By converting the HDV footage to AIC it was possible to edit smoothly on the Macs of the day with only minimal quality loss.
The Pros Demand More
While the AIC solution worked for consumers, adding further loss to a format whose picture quality was already pretty shaky was not really acceptable to professionals. Because of this, when forced to work with HDV material, say for additional footage in documentaries, many post professionals would convert it to uncompressed HD video, leading to massive file sizes that didn't improve the footage but simply made it practical to work with and prevented further quality loss.
In 2007 Apple addressed these concerns with a new intermediate format for professional use, which they called "ProRes". The family of variants was all 10-bit 4:2:2 or better and allowed post facilities to convert lower-quality footage to a format that didn't significantly degrade the quality and allowed for a much more practical post workflow than either the highly compressed originals or the uncompressed conversions.
Like the DV format before it, ProRes massively over-delivered. It quickly became accepted as a robust and practical format for broadcast mastering and for exchanging material between facilities. While the data rates were significantly higher than HDV or AIC, they were still a lot smaller than uncompressed video. This meant much more manageable storage, and a lot of off-the-shelf hard drives became usable for high-end editing and finishing in place of custom-built RAID arrays.
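Some back-of-the-envelope arithmetic puts those data rates in perspective. The 220 Mbps target for ProRes 422 HQ at 1080p30 and the 25 Mbps HDV rate are published figures; the uncompressed number is simple arithmetic for 10-bit 4:2:2:

```python
# Approximate data rates for 1920x1080 at 30 fps.

def uncompressed_mbps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Raw video data rate in megabits per second."""
    return width * height * samples_per_pixel * bits_per_sample * fps / 1e6

# 10-bit 4:2:2: one luma sample plus one (alternating Cb/Cr) chroma sample per pixel.
uncompressed = uncompressed_mbps(1920, 1080, 30, 10, 2)   # ~1244 Mbps

prores_hq = 220   # Apple's published target for ProRes 422 HQ at 1080p30
hdv = 25          # HDV's fixed MPEG-2 transport rate

# Storage for one hour of footage, in decimal gigabytes:
for name, mbps in [("uncompressed", uncompressed), ("ProRes HQ", prores_hq), ("HDV", hdv)]:
    gb_per_hour = mbps / 8 * 3600 / 1000
    print(f"{name}: {gb_per_hour:.0f} GB/hour")
```

Roughly 560 GB/hour uncompressed versus about 99 GB/hour for ProRes HQ: the latter is what made off-the-shelf drives of the era viable.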
As well as being intra-frame and relatively easy to encode and decode, all of the ProRes variants were both frame-rate and resolution-independent. This was also a big deal in the late 2000s, as Sony's HDCAM-SR tape format had become the de facto mastering format for HD broadcast and even some feature films. While HDCAM-SR was similar in quality to ProRes and also came in YUV 4:2:2 and RGB 4:4:4 formats with 10-bit color depth, it was locked to 1920 x 1080 and specific frame rates. Because ProRes wasn't tied to a physical tape format, these limitations were not the issue they were for HDCAM-SR.
When RED introduced the RED ONE, the industry's first 4K Super 35mm sensor camera to record compressed RAW files, shortly after the introduction of ProRes, many facilities found that decoding the 4K RAW files put a huge amount of stress on the performance of their finishing systems, and it quickly became common to transcode the RED RAW files; ProRes provided the perfect intermediate format for that as well. Any changes to the RAW settings could be made during the conversion, preserving the value of the RAW format, and post could then continue easily and smoothly with the 10- or 12-bit ProRes files retaining the kind of quality latitude for color grading and VFX that people were used to from film scans.
This reinforced the utility of ProRes even more and it became more and more common to use it as a mastering format and to transfer footage between facilities or systems. By 2010 the phrase “we’ll just use ProRes” was already in common usage and generally made it possible for people to relax a little about post-production workflow.
So when the venerable old film camera manufacturer ARRI decided to make a serious attack on the digital cinema camera market with their new ALEXA camera system, they were able to guarantee instant acceptance from the post side of the industry by having it record internally straight to the ProRes format. While there are lots of other reasons we all love the ARRI cameras, this little practical decision was a critical component that allowed large sections of the industry to rapidly and easily transition from shooting film or earlier digital formats to the new ARRI system. Once the files were delivered to the post house they slotted seamlessly into the existing systems and workflows. Although uncompressed ARRIRAW eventually became popular and practical, the initial reputation of the ALEXA was built largely on the ProRes files, and many films, and particularly TV series, continue to use ProRes on the ALEXA because of the more manageable file sizes and easy workflow.
As the ALEXA became the de facto standard for high-end cameras, it further entrenched ProRes as the de facto choice for a practical, high-end digital codec in cameras and in post, with even Sony and RED eventually giving in to demand and offering it as an option.
Under The Hood
While Apple has always been a little tight-lipped about exactly how ProRes is able to outperform so many other codecs, the basics are pretty straightforward. All of the ProRes variants are at least 10-bit, giving them four times the tonal levels per channel (and 64 times the total number of representable colors) of an 8-bit recording. They use intra-frame compression based on the well-established DCT (discrete cosine transform) method, with a smartly managed variable bit rate scheme and a well-chosen range of target data rates.
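The bit-depth numbers are easy to verify with a line or two of arithmetic:

```python
# 8-bit vs 10-bit color depth, per channel and overall.
levels_8 = 2 ** 8     # 256 tonal steps per channel
levels_10 = 2 ** 10   # 1024 tonal steps per channel

print(levels_10 // levels_8)                 # → 4   (4x the steps per channel)
print((levels_10 ** 3) // (levels_8 ** 3))   # → 64  (64x the representable RGB colors)
```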
One of the key advantages of ProRes in the post world is how well it resists generation loss. The concept of generation loss was everywhere in the analog world. Every time you copied a tape or a film print, there would be a loss of quality. The better the quality of the format and the system, the smaller that loss would be, but it would always be there, and if you made a copy of a copy, the quality losses would pile up. In a world where this quality loss was an inherent part of the tape-to-tape editing process, part of the magic of the DV format was that you could import via FireWire, edit natively in the DV codec, then record back to DV tape with no loss of quality.
You can copy a digital file as many times as you want without generational quality loss, but once you de-compress and then re-compress, generation loss is back. Any time you want to color correct, add effects, or even add a simple title, compressed video images need to be de-compressed, processed, and then re-compressed. Somehow Apple's engineers managed to make that process as transparent as possible for ProRes, and this makes it a great format for things like transferring files between facilities. It is still a common workflow for many productions to edit natively in the camera source codec(s) but then media-manage to ProRes to send to the finishing facility. The ubiquity and reliability of ProRes mean that for things like documentaries that use footage from a variety of sources, it is a much smoother and safer way of moving a film from edit to finishing.
An Old Format?
The suggestion that ProRes is past its use-by date is more than a little misplaced. Firstly, on a practical level, the near-universal acceptance of ProRes across the post-production industry makes it one of the easiest formats, if not the easiest format, to work with. Secondly, it works: the picture quality is high and consistent while the file handling is easy. Thirdly, it's not actually that old by comparison. Almost all of the compressed video formats we use today belong to a very small number of extended families, with newer formats like XAVC actually being variants of the H.264 family. While HEVC/H.265 is slightly newer and a great delivery and consumer-level camera format, ProRes still has very real advantages for professional use.
But Why Put It In A Phone?
The short answer is that you want to take the pictures shot by that phone into a professional workflow and control them properly in post. H.264 and HEVC pictures often look great playing from the source file, but it's when you try to adjust them or convert them to other formats that you often get into trouble, and this is exactly what you need to do in a professional context. Years ago this didn't matter much, as the cameras were pretty basic and didn't have much tonal subtlety or dynamic range to preserve. But as the cameras have gotten better and better over recent years, it has become more and more obvious that the quality bottleneck is actually now the recording codec. ProRes solves this problem with spectacular simplicity.
The Proof In The Pudding
After years of pleading with Apple to put ProRes into an iPhone, there was a niggling bit of doubt when I heard the news that they had actually done it. What if the theory didn't hold up? What if it actually didn't look much better than 10-bit HEVC? Luckily, ProRes has yet again not let me down.
As the test footage that has been doing the rounds has already shown, there is a visible difference straight out of the camera with the gentler compression. But when the iPhone ProRes images come into post and start getting manipulated in FCP or Resolve, the difference is mind-blowing. In fine details, there is a visible lack of "shimmering" as the temporal inter-frame compression of H.264 or HEVC tries to hang onto details between keyframes. Even more noticeable is how smooth gradations like skies and walls can be controlled using power windows, with silky-smooth results showing no banding or blocking. Another issue with highly compressed formats is that they behave less predictably with specialized LUTs such as film emulations, which distort the dynamic range in a non-linear way and differently across the color channels. Finally, a highly compressed original will almost always suffer more from the high levels of compression required for streaming delivery, and almost all content created now will be streamed at some point, to a variety of devices, at a variety of data rates and in a variety of codecs.
Of course, when all of these factors are combined, the effects multiply, so footage with a film LUT applied, fine detail, high dynamic range, and gradients across skies or walls is one of the most difficult things for a compressed image to survive, especially when rendered back to a format like H.264. So of course that's one of the first things I wanted to test. In these test shots, captured with the standard camera app in UHD 4K ProRes HQ, you can see how easily it handles all of these potential pitfalls. The fine details are preserved consistently over time, even in dark shadows, the sky is smooth and natural, and the film emulations behave exactly as I would expect from a professional camera.
On the feature film DARK NOISE I recently spent a lot of time traveling around shooting scenery footage. Some of this required special lenses like extreme telephotos, but a lot of the time it was just a matter of keeping the camera kit in the car in case we got a great sunset. A conventional camera, though, requires a lot of supporting equipment: tripod, lenses, and so on. My scenic kit ended up filling most of the back of my SUV. The iPhone wouldn't replace all of that for the dedicated shoots, but for pulling over by the side of the road and shooting a sunset it would be fantastic. In a similar way, with three different focal lengths built in and very good stabilization, it would be amazing for grabbing a quick cutaway or close-up, knowing the result can actually intercut with the big cameras when required.
Moreover, ProRes has the latitude to allow images from different cameras to be graded to match, something that is simply not possible with more highly compressed formats.
I’m not going to retire my pro cameras because I’ve got the iPhone 13 Pro Max. But will I sometimes leave them at home and still be able to get useful shots? Absolutely.
I have in the past shot footage on earlier-model iPhones that I would have loved to use in serious projects. The footage would have been usable except for the compression artifacts. ProRes changes this completely.
Cinematic mode is a wonderful tool for personal use, instantly giving footage a beautifully flattering and dramatic look to shots of family and friends. But ProRes is what takes it to a whole new level for a filmmaker.
They say the best camera is the one you have with you but with ProRes recording, the one we carry in our pockets is suddenly a whole lot more useful as a professional tool.