Adam Westbrook, a judge in the new Digital Film section of the PPY annual awards, gives his take on the winners, alongside those of fellow judges – ‘24’ Director of Photography Rodney Charters ASC and Dslrnewsshooter’s Dan Chung.
As more and more photographers, often armed with video-capable DSLR cameras, move into the world of multimedia, we’re seeing a new and exciting visual style emerge.
Combining a photographer’s eye for a fantastic image with a commitment to creative storytelling, it is one of the most exciting areas in modern journalism. This was highlighted for Dan Chung, Rodney Charters and myself as we judged this year’s ‘Digital Cinema’ category in the Press Photographer’s Year awards.
Watching a selection of entries I really sensed a desire to move away from the look and feel of traditional video and television journalism. We see more character stories, more creative shots and braver treatments – all still rooted firmly within photojournalism.
This year’s winning entry, which particularly grabbed me, was a uniquely styled piece of storytelling from The Guardian in London:
Part of Laura Barton’s Barton’s Britain series, it was shot by Felix Clay and edited by Elliot Smith & Shehani Fernando. You might think, at first glance, that there’s not much to it, especially compared to other submissions, which included films shot in Afghanistan. Yet members of the panel were captivated by it – here’s my personal comment on the entry:
“Barton’s Britain: The Bridgewater Canal is the result of a videographer clearly in love with images and a reporter in love with words. Every shot is framed with a photographer’s eye, and Laura Barton’s script guides the viewer into an enchanting world of pastoral England.
“Worthy of note are the wonderful colours, the subtle musical elements and, in particular, a clear separation from the traditional, formulaic ways of doing video and television. Sound has not been overlooked either, and the audio of water rippling along the canal edge goes a long way to rounding off the scenes.
“Even though this is not hard news as we know it, as a piece of digital cinema, ‘The Bridgewater Canal’ is the only film which took me somewhere and made me feel something.”
The affordability of high-quality gear and the ability to publish for free should lead to more passion pieces like this – the internet and the style lend themselves perfectly to this kind of storytelling.
Heathcliff’s entry, shot in Afghanistan and his first attempt at video film-making with a DSLR, is extremely brave and ambitious, and it particularly impressed Rodney:
“Given that they were all about to come under fire, I am not surprised he didn’t wander about looking for great angles. However, I sense he is a great shooter, and he is exactly the kind of guy who needs encouragement in what is a brave new field of hybrid war cinematography on the front lines.”
In an area that sometimes sees more debate than anything else, it’s great to see some solid examples of pioneering video storytelling. The Press Photographer’s Year proves that not only do photojournalists have a real desire to move into multimedia, they have a great skill for it too.
Felix, Laura and Heathcliff’s work should act as an inspiration to any journalist looking to expand into video storytelling.
The Press Photographer’s Year competition is sponsored by Canon, and a full list of winners, along with a gallery of the fantastic images, can be seen here.
You can read more of Adam’s musings on the world of new media over on his blog, and he has published an e-book titled ‘Next Generation Journalist’.
Cameraman and editor Simon Lee test-drives Avid Media Composer 5 using Canon 5DmkII footage.
Avid is competing more head-on with Apple Final Cut Pro these days, which hopefully means better, cheaper tools for editors as the competition intensifies. Whilst the newly released Avid Media Composer 5 is not a total small-studio solution, it’s an elegant and powerful piece of software. A major selling point is the newly improved Avid Media Access (AMA), which Avid claims now works natively with file-based media from XDCAM and RED cameras as well as with QuickTime codecs like Apple’s ProRes and the H.264 recorded by HDDSLRs. The ability to edit natively in these formats should mean new workflow possibilities and valuable time saved.
Avid's user interface
Linking media as an AMA volume
Avid set the standard in non-linear editing and once dominated the market. In recent years Final Cut Pro slowly knocked Avid off the top spot in terms of market share, due in part to its lower price point coupled with its openness to third-party hardware and an ever-expanding toolset. It now seems Avid has been making concerted efforts to regain some lost ground. Avid has not only dropped its once-high price – MC5 starts at $2,295 US – but it has also started to free users from expensive proprietary hardware, allowing economical HDMI monitoring with the Matrox MXO2 Mini. Add the fact that Avid is now releasing upgrades in a matter of months rather than years and you’ve got a behemoth on the move.
The differences between editing software packages are continuously diminishing – a good storyteller and able craftsman should be able to use pretty much any system and create credible work. Each has its own strengths, of course, and the choice often depends on budget, workflow needs and individual preference. Avid’s editing tools are some of the finest around. Notable functions include the indispensable top and tail, which accomplishes in one keystroke an editorial task that takes FCP three or more. Then there is the ability to load an entire sequence into the source window, view it in the timeline and edit it into the record sequence – very useful when working with hundreds of clips from file-based media. There are also some excellent effects included, like Timewarp with Fluid Motion, which is just excellent for speed changes, and since Avid already harnesses the GPU there’s more real-time performance. Avid also takes organizational headaches away from the editor with its well-developed media management. I’m just scratching the surface here; those who’ve worked with Avid know they get a high-performance editing solution designed to meet tight deadlines and work well in collaborative environments.
The MC5 color correction tools
For the purposes of this blog post, I tested 23.976p material from the Canon 5DmkII with a trial version of Avid MC5 on a MacBook Pro: Core i7 2.66GHz, 4GB 1067MHz DDR3, Nvidia GeForce GT 330M, 500GB 7200rpm HDD.
To begin with, I set up a new 1080p/23.976 project, plugged in a USB CF card reader with footage from the 5DmkII, asked Avid to link to the AMA volume and selected the DCIM folder. The footage shows up in a bin bearing the name of the formatted CF card, in this case EOS_DIGITAL. Real-time playback from the card in the source window was stuttery, so I copied the H.264 footage to the internal hard disk. That improved things somewhat, but playback still wasn’t slick and remained problematic; at 2x speed or more the problem was even more pronounced. Whilst I could, technically speaking, edit and finish with the native footage, it certainly wasn’t a straightforward or smooth operation. Editing H.264 is processor intensive, so I’m very interested to find out how this works on more powerful workstations.
For good performance I found it necessary to transcode to one of Avid’s DNxHD intermediate codecs. For the 10-bit DNxHD 175 X variant, the storage requirement is similar to that of Apple’s ProRes 422 (HQ). Transcode times came out at roughly 2x the video length for DNxHD 115, 1.5x for DNxHD 175 and 1.7x for DNxHD 175 X. Not exactly great if you’re on a tight deadline, but slightly better than transcoding on the same machine with Final Cut Pro’s Log & Transfer tool, which came out at roughly 2.2x the video length for ProRes 422 and 2x for ProRes 422 (HQ).
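For planning purposes those multipliers are easy to turn into rough wall-clock estimates. A trivial, purely illustrative Python sketch using the figures measured above (they are specific to this MacBook Pro and will differ on other hardware):

```python
# Rough transcode-time estimator using the multipliers measured above.
# These ratios are from my own tests; treat them as ballpark figures only.
MULTIPLIERS = {
    "DNxHD 115": 2.0,
    "DNxHD 175": 1.5,
    "DNxHD 175 X": 1.7,
    "ProRes 422": 2.2,       # via FCP Log & Transfer
    "ProRes 422 (HQ)": 2.0,  # via FCP Log & Transfer
}

def transcode_minutes(footage_minutes: float, codec: str) -> float:
    """Estimated wall-clock time to transcode a given amount of footage."""
    return footage_minutes * MULTIPLIERS[codec]

# An hour of 5DmkII H.264 rushes to DNxHD 175 X:
print(transcode_minutes(60, "DNxHD 175 X"))  # 102.0 minutes
```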
Once transcoded, everything works as it should, and playback is exceptionally smooth at all speeds. Poking around in MC5’s interface revealed some new features which are similar to Final Cut’s. Most surprising for me, since I’ve been away from Avid for a while, is the ability to select everything left or right of the playhead. You can now also drag, drop, ripple and roll without needing to select individual tools. Other features include the ability to bring up audio waveforms on individual tracks in the timeline and to insert up to five RTAS Pro Tools audio plug-ins per track, which is very similar to adding effects in Soundtrack Pro.
Applying RTAS audio filters in Avid
On the downside, it’s still not possible to adjust audio levels in the mixer on the fly, and color correction hasn’t really improved. The scopes don’t react in real time when you make adjustments, and there isn’t a great deal of grading power. In this regard you might want extra plug-ins to compensate, or you may need to move to another application for more creative control.
An Avid timeline
In conclusion, editing native H.264 is a let-down on the MacBook Pro, but once transcoded to DNxHD it’s smooth sailing, and it’s a joy working with the newly improved interface and toolset. On the AMA front, I briefly tested some XDCAM EX footage in MC5 and was floored by how quickly, easily and smoothly it worked – very similar to working with media in Final Cut. To enjoy Avid’s famous media-management capabilities you still need to import or transcode to one of Avid’s codecs. I’m sure Avid will continue to improve AMA and add features that will appeal to Final Cut users and a broader market. It’s going to be interesting to see how Apple responds to Avid’s and Adobe’s recent offerings with its next Final Cut Pro release.
I’ve just been chatting to Bruce Sharpe, the creator of the popular audio-syncing software Pluraleyes. We discussed the new beta 1.2 version of Pluraleyes for Final Cut Pro, which is available today. The software allows you to easily sync high-quality sound from an external audio recorder, like the popular Zoom H4n, with the audio track recorded by a DSLR camera. If you are new to Pluraleyes, have a look at what it does here.
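Singular Software hasn’t published the algorithm, but the basic principle – find the time offset at which the camera’s scratch audio best lines up with the recorder’s track – can be sketched with a simple cross-correlation. A minimal, purely illustrative Python/NumPy example, assuming mono tracks at the same sample rate:

```python
import numpy as np

def find_offset(camera_track: np.ndarray, recorder_track: np.ndarray,
                sample_rate: int) -> float:
    """Offset in seconds to shift the recorder track so it lines up
    with the camera's scratch audio, found via cross-correlation."""
    # Normalise so level differences between the two recordings don't dominate
    a = (camera_track - camera_track.mean()) / (camera_track.std() + 1e-9)
    b = (recorder_track - recorder_track.mean()) / (recorder_track.std() + 1e-9)
    # np.correlate is O(n^2); a real tool would use an FFT-based correlation
    corr = np.correlate(a, b, mode="full")
    lag = int(corr.argmax()) - (len(b) - 1)
    return lag / sample_rate
```

A positive result means the recorder track starts earlier and needs shifting later by that many seconds; real products also have to cope with noise, level changes and clock drift, which is where the clever engineering lives.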
According to Bruce, in addition to stabilising the features added in the last beta, this release adds a new one. Previously, the sequence you were trying to sync had to be named ‘pluraleyes’, which was slightly annoying; now you can simply choose the sequence you want to sync from a list.
The other features of beta 1.2 are:
Option for single output sequence
Option to replace audio (great for dual-system applications)
Select sequence to be synced
Support for locked tracks
You can download a free trial of Pluraleyes for Final Cut beta 1.2 here.
It also works with an existing license and a new full license costs $149. The beta can be installed and run alongside the older version 1.1 release for now.
I’ve been using Pluraleyes regularly on my productions when shooting DSLR video on the 1DmkIV, 7D and 550D. Indeed, even though we now have manual audio controls on the 5DmkII and the Juicedlink DT454, I still prefer to use external audio recorders when I can. As Bruce pointed out to me, with Pluraleyes you can get great audio with just a recorder and a camera, and no cables at all to get in the way. I frequently use a small shotgun mic on top of the camera for run-and-gun shooting, and Bruce says he does this too – but instead of plugging it into the camera or an audio box like the Juicedlink, he plugs it into a Zoom recorder on his belt and uses Pluraleyes to sync the sound later.
Bruce also intimated that the sync drift correction found in its standalone sister software, Dualeyes, may show up in Pluraleyes soon. A version of Pluraleyes for Sony Vegas has been around for a while, an Adobe Premiere version is in beta right now, and Bruce told me that a version for Avid is ‘on the roadmap’.
I was introduced to Akira Hasegawa and his art during the filming of “Children of Enlightenment”, my documentary about Japanese youth counterculture. The renowned director of over 4,000 commercials and the pioneer of the Digital Kakejiku (D-K) art form, Akira is a true renaissance man. He is deeply philosophical and even brews spectacular sake. He is such a fascinating subject that I immediately decided to devote an entirely separate documentary to Akira in order to fully capture the essence of the man and his work. Based on the strength of my photography and film projects, he entrusted me to tell his story and bring his art to American audiences. Working closely with Christopher Frey at Cross Media International, we developed a new television series featuring Akira’s art.
The first step was deciding how to film it. Viewing one of my favorite films, “Baraka”, on Blu-ray was a revelation. It was shot on 70mm film, and I had seen it several times in 70mm, but I was startled by the image quality at 1080p. It turns out they did an 8K scan from the original negative! Downconverting to HD from 4K or 8K makes a big difference – the picture is perceptibly sharper and richer – not to mention future-proofing your footage to some extent: 4K displays already exist and are the wave of the future. While our finishing format would be 1080i for broadcast and 1080p for Blu-ray, I wanted to start with the largest, highest-resolution format possible.
Obviously, we did not have the budget for 70mm film. For the timelapse sequences, we needed a camera that could capture Akira’s art at night. We quickly settled on the Canon 5DmkII due to its huge full-frame sensor, low-light capabilities, and its overall image quality. We would capture the timelapse sequences at 5.6K RAW – more than ten times the resolution of standard 1920×1080 HD and near IMAX quality. For the real-time cinematography, we decided to shoot with the Red One at 4K RAW. We would use the Birger Canon EF mount and fast Canon L lenses to allow us to maintain a consistent look. I brought DP Paul Leeming onboard to handle the Red One, while I tackled the timelapse photography.
Our first shoot was at the Grand Ise Shrine, Japan’s most revered and sacred site. Akira’s D-K Live art installation took place at the entrance to the shrine – the historic Uji Bridge – as part of the re-opening and re-dedication ceremonies of the traditional wooden structure, which is rebuilt every 20 years. The event coincided with “Bunka no hi”, the Japanese national holiday celebrating culture and the arts, and was watched by a crowd of 250,000 on the banks of the Isuzu River.
It was a daunting shoot that lasted a day and a half. Akira’s D-K art had never before been captured in timelapse, let alone with this kind of technology. The D-K imagery changes once a minute, synchronized to the human heartbeat. It changes so slowly that if you are looking at it in real time, you may not notice it changing at all. It was critical to figure out proper exposures and intervals prior to the D-K Live installation. The only problem – Akira does not do test runs. We finally persuaded him to allow us a quick dry run the evening before, despite a light rain.
Setting up a timelapse shot on the 5DmkII
After a few scant hours of sleep, we were back out at the shrine at 4:30am capturing the sunrise. It was an honor to be given access to film at the shrine, so we wanted to respect the sanctity of the site and its environment. With a very small crew we were able to operate unobtrusively while remaining agile and mobile – critical, since we had a lot of ground to cover in a short time. Timelapse intervals ranged from 1 to 4 seconds.
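The interval you choose directly determines how much screen time a run yields, so the arithmetic is worth doing before you press go. A quick illustrative sketch in Python – the 24fps playback rate here is an assumption for the example, not a statement of our delivery spec:

```python
def screen_seconds(shoot_minutes: float, interval_seconds: float,
                   playback_fps: float = 24.0) -> float:
    """Finished footage (in seconds) that a timelapse run yields.
    The 24fps playback rate is an assumption for illustration."""
    frames = shoot_minutes * 60 / interval_seconds
    return frames / playback_fps

# A 20-minute run at a 4-second interval captures 300 frames:
print(screen_seconds(20, 4))  # 12.5 seconds of footage
```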
One thing both Akira and I wanted to capture was the transition from day to night – as the sun sets, Akira’s D-K shines and rises. Having had no opportunity to test this, it was going to be a bit of a nail-biting shot in the dark, and I would only have one chance at it. Rather than going with aperture priority and then removing the flicker in post, I decided to use multiple cameras shooting in manual, starting the shots one stop over, halting one stop under, then repeating. Exposures ranged from 1/3 to 1.3 seconds at 3-second intervals.
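That shutter range spans exactly two stops, which is the scale the over-to-under bracketing works in: each stop doubles the exposure time. As a quick illustration (a hypothetical helper, not part of our actual workflow):

```python
def shutter_for(base: float, stops: float) -> float:
    """Shutter time after opening up by a number of stops
    (each stop doubles the exposure time)."""
    return base * 2 ** stops

# The 1/3 s to 1.3 s range above is a two-stop spread:
print(shutter_for(1 / 3, 0))  # 0.33 s
print(shutter_for(1 / 3, 1))  # 0.67 s
print(shutter_for(1 / 3, 2))  # 1.33 s
```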
Peter H. Chang films Akira Hasegawa setting up projectors for D-K
During the D-K Live exhibition, I set up three cameras to shoot simultaneously at 3-5 second intervals. I found myself sprinting back and forth between them, as two were placed on opposing river banks. The CF cards filled up quickly shooting RAW timelapse. Luckily I had a Nexto DI eXtreme on hand, which let me offload and back up cards immediately and reuse them. It got a bit frantic at times – next time I will bring more cards!
The shoot culminated with the spellbinding sight of thousands of people carrying candlelit red lanterns across the bridge. I wish we’d had more cameras rolling! It was tough to maintain a steady timelapse shot with the crowds of people swarming past since the tripod legs would constantly get bumped and the wooden bridge vibrated with all the pedestrians. For this scene, I used a 1/2s shutter to allow for a bit of motion blur and 1 second intervals for the timelapse. I attempted several pans, but due to interfering foot traffic they were unsuccessful.
Paul Leeming operates the Red One
Coupled with fast Canon L lenses (17, 24, 35, 85, 135, 70-200), the 5D mark II’s high-ISO capabilities excelled at capturing Akira’s kaleidoscopic D-K display with crystal clarity and minimal noise. One of our biggest challenges was capturing the low-light real-time footage at night on the Red One. With the D-K art projection, candlelit lanterns and a bit of moonlight as the only light sources, it was difficult to gather enough light to give the images clarity and color depth. However, the dynamic range of the Red sensor and shooting in 4K allowed us to downconvert the image to HD in post, resulting in less noise and a sharper image. While the Ise Shrine episode was shot on Red’s first-generation sensor, subsequent episodes will be shot on the new Mysterium-X sensor, capable of greater dynamic range and much improved low-light performance.
Paul Leeming films D-K in front of Uji bridge
In addition to the timelapse photography, I also captured 1080p video with the 5D mark II throughout the shoot for the upcoming standalone Akira Hasegawa documentary. Altogether, for this first episode, we captured nearly 9 hours of footage on the Red, and over 15,000 still timelapse frames on the Canon 5D Mark II.
To post-process the timelapse stills I used Adobe Lightroom 3 beta. This was a somewhat slow and cumbersome process – it took a week to edit and then export 15,000 RAW files. I then brought those sequences into Adobe Premiere Pro CS4 alongside the R3D files, which Premiere handles natively. As luck would have it, Paul was on the beta team for Adobe Premiere Pro CS5, so we were able to use CS5 for color correction – it works much better with Red footage than CS4 because it incorporates the REDspace color science we shot with.
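A practical aside for anyone repeating this workflow: Premiere imports numbered stills as an image sequence, and gaps in the numbering can trip up the import. With 15,000 frames it’s worth a quick sanity check first – a hypothetical Python example (the folder path and filename pattern are placeholders, not from our actual project):

```python
import re
from pathlib import Path

def find_gaps(folder: str, pattern: str = r"(\d+)\.(?:jpg|tif|dng)$") -> list[int]:
    """Return any frame numbers missing from an exported timelapse sequence."""
    numbers = sorted(
        int(m.group(1))
        for p in Path(folder).iterdir()
        if (m := re.search(pattern, p.name, re.IGNORECASE))
    )
    if not numbers:
        return []
    present = set(numbers)
    return [n for n in range(numbers[0], numbers[-1] + 1) if n not in present]

# An empty list means the sequence is contiguous and safe to import:
print(find_gaps("/exports/ise_shrine_cam1"))
```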
With the premiere episode of Lightscapes, I am thrilled to help bring both the Ise Shrine and the art of Akira Hasegawa to American audiences.
LIGHTSCAPES premieres Monday, June 21 on Discovery HD Theater at 7:30 AM ET / PT.
LIGHTSCAPES is produced for Discovery HD Theater by Cinefugue Productions and Cross Media International. http://lightscapes.tv
Here he tells of the technique he used to create some stunning time-lapse sequences with just regular equipment and some lateral thinking.
I was shooting photos at the Shanghai Expo on assignment for Getty Images editorial and after the opening weekend I decided to stick around for a bit longer and shoot moving images of some of the more interesting buildings there. My friend Dan O’Connor and I had previously worked on a time-lapse piece shot in Beijing and I had been looking for opportunities to stretch my horizons – shooting in some different settings. Of all the buildings I had visited the Danish pavilion seemed like the most challenging – so I chose to try that one first.
First I did some general time-lapses of the whole structure, then I started to play with the idea of following the building’s spiral – its dominant element. I wanted to shoot a time-lapse moving the camera along the spiral, shooting towards the building’s centre. I first tried sliding the camera down the spiral on a clamp around the railing, with a towel between the clamp and the rail. This turned out to be very unstable and I had to rethink. A better option was to place the camera on a tripod (Gitzo GT3541XLS with Manfrotto 405 head) and move it manually down the spiral, shooting a still every four seconds using a remote timer switch. It took me a couple of test runs, but as I practiced the bumps gradually disappeared. Once I knew I could do it well, I made a double run following the infinite spiral shape of the building’s top deck.
For most of the time-lapses I set the camera to manual exposure, occasionally adding an ND filter to slow the shutter speed. If there was significant movement or a light change, I opted for aperture priority mode instead.
Seppe's 5DmkII ready to shoot timelapse
The main challenge at the Danish pavilion – and across the entire Expo – was working with such a big crowd around you. The constantly changing subject in frame, combined with the tough conditions behind the camera, such as the bustle and the heat, made shooting hard work. But these conditions force you to think harder, and I found that to be as much a positive as a negative.
Each time-lapse sequence consisted of about 300 images – at 24p playback, roughly 12 seconds of footage. I assembled the video and time-lapses in Final Cut Pro and used the music to bring out the best of the images. Composer Wim Mertens’ music is ideal for time-lapses and this type of editing – I like to use the structure of the music to organize the images and determine pace and mood.
The regular video was shot on the Canon 5DmkII at 24p. Most of the lenses used were Canon L primes (24, 50, 100), together with some L zooms (16-35 and 70-200). Unfortunately I don’t have a rig or shoulder support, so most handheld video was done using a tripod with its legs folded together, often held upside down for extra stability.
Outside the Pavillion
I have only been doing time-lapses for a couple of months, but it has been a very intense learning process in which I have explored the format from scratch. I believe it doesn’t have to be all clouds and traffic. There do seem to be certain rules, though – a time-lapse needs either dramatic change, a strong sense of direction or a strong rhythm.
In total I spent a week and a half on the Expo site in between my other assignments and managed to shoot around fifty time-lapse sequences. We have completed another Shanghai Expo piece, this time from the UK pavilion, shot in a very similar way with the same idea and setup. I hope you enjoy it.
The Canon EOS 7D ‘Franken-camera’ rig used by Travis Fox
Recently, a couple of people have asked me about my (and these are not my words) ghetto-fabulous or Franken-camera Canon EOS 7D video system. In some ways, the DSLR system for me simply represents a better camera, not a fundamental shift in video storytelling. Over the years I’ve changed cameras as technology changed and quality improved, but my style has more or less remained constant.
The biggest reason I had for not adopting DSLRs sooner was ergonomics. I could deal with the lack of timecode, the audio fixes and the overheating, but I simply couldn’t handhold the thing steady and interact with my characters at the same time. I wanted a DSLR built like the Sony Z1U, which I used lovingly for years. I checked out all the standard “rigs”, the Zacuto and the Redrockmicro, but they seemed to push me towards holding the camera like a Betacam, not cradling it like a baby or a football as is my practice.
In the end I saved money and put together a kit I could deal with. An $8 bracket (literally the cheapest flash bracket B&H stocks) holds the Ikan V5600 monitor out in front, to the left of the lens, just like the Sony Z1U’s screen. I splurged on another bracket (a hundred bucks) which holds the audio gear and balances the camera by moving weight to the back.
The Canon 7D rig 'in the field'
When setting up a DSLR rig, it always seems to be two steps forward and one step back. I was concerned about weight, so I opted for the M-Audio Microtrack recorder instead of the other options with XLR inputs (I had long ago abandoned trying to deal with the camera’s audio). But the battery life on the Microtrack sucks, so I have to use a separate USB battery to charge the thing when its non-replaceable battery dies in the middle of the day. Even so, the weight of both units is considerably less than the alternatives.
With the ergonomics worked out, one of the first assignments I had was a series of stories with NPR’s Adam Davidson in Haiti for PBS/FRONTLINE. As soon as I headed out in the hot Haitian sun I was confronted with new issues to work out. The biggest surprise was the overheating. I had worked with a 5DmkII in the Chihuahuan desert in July, so I thought I was prepared, but in Haiti the 7D would shut down sometimes after only 30 minutes of shooting in the heat of the day. I quickly changed the way I work in order to minimize this DSLR flaw.
The other big headache was one I expected. My plan was to sync the audio at the end of each day of shooting. I used Pluraleyes, and it worked about 90 percent of the time. Great – but that other 10 percent had a detrimental effect on my sleep during the 12 days I was in Haiti. After syncing the audio, I exported the whole day’s shoot into one file in XDCAM422 30p format (a standard at FRONTLINE, chosen so the material would integrate more easily with their Avid systems). That exported file became, for all practical purposes, my raw file; I imported it into a new project to edit the piece. I also sent these files to FRONTLINE for them to prep the films for television broadcast.
Since the Haiti trip, the newest headache has been keeping pace with changes to the DSLR system. Manual audio control on the 5DmkII has made me question my decision to go with the 7D, and makes me wonder whether a similar audio fix will come to the 7D or whether I should switch back to the 5DmkII. And with the release of Avid Media Composer 5, the technical folks at FRONTLINE report that they will soon be able to deal with native 5DmkII/7D files, which will (thank god!) save me tons of time converting everything to other formats.
Travis with a Gyro stabilised 7D rig on his latest project
I’m a huge proponent of the DSLR system for documentary films. I ran out to buy the Kessler Pocket Dolly after I saw Khalid Mohtaseb’s amazing work from Haiti. I shoot everything ‘wide open’ to minimize depth of field. Yet I don’t really get what is meant by “cinematic journalism.”
Dan Chung has asked me to chime in on the debate. So at the risk of being dragged into a blog screaming match, here goes… My understanding of the term is that these new tools – the film-like DSLR cameras and lenses mixed with new lightweight cinematic tools such as the pocket dolly and mini-jib – have created a new form of journalism, dubbed cinematic.
For me, it’s simple. The journalism part of videojournalism or documentary film is about the story. The story is made up of several aspects: the visuals, the writing, the characters, the editing, etc. So if the visuals change – let’s say improve – how does that alone change the story, the journalism? I think a better term might be cinematic videography, or what about the old-fashioned cinematography?
I believe Khalid’s piece from Haiti is what sparked this debate, so here are my thoughts. I found the cinematography incredibly beautiful, even inspiring, even if the pocket-dolly shots were overused. I wish I could be such a great shooter. That said, it’s not a story, at least in the traditional journalistic school of thought. It’s a montage, a visual essay, and that’s OK. There’s a precedent and a tradition for these kinds of pieces in the history of documentary film, with Koyaanisqatsi perhaps being the benchmark.
I enjoy watching Koyaanisqatsi, but if I watch it late at night I find myself dozing off, while a great documentary with a riveting story (even with average cinematography) tends to keep me awake. The notion I’m poking at here is that the story trumps all else, that focusing wholly on any one aspect – be it the visuals, narration, etc – is no substitute for great storytelling.
I’m currently co-producing and shooting an hour-long documentary for PBS/FRONTLINE. It’s the first time the show has commissioned a film on the DSLR system. I’m using all the cinematic techniques I can: the Pocket Dolly, fast prime lenses and a gyro stabilization system for tracking shots. But this doesn’t mean FRONTLINE has gone cinematic; it’s just the best way, at this juncture, to get the best visuals for the incredible storytelling FRONTLINE is famous for. The film is an outgrowth of the Law and Disorder project, a joint venture between FRONTLINE, ProPublica and the Times-Picayune. It airs on PBS and online August 25th.