INCEPTION: Paul Franklin – VFX Supervisor – Double Negative

After beginning his career at Digital Film and MPC, Paul Franklin helped to found the studio Double Negative in 1998. Since then he has supervised projects such as THE LEAGUE OF EXTRAORDINARY GENTLEMEN and HARRY POTTER AND THE ORDER OF THE PHOENIX, and has supervised all of Christopher Nolan’s films since BATMAN BEGINS. In the following interview, he talks to us about his work on INCEPTION.

What is your background?
I originally studied sculpture at university in the 80s, which is where I first started experimenting with computer graphics. I combined this with the student theatre and magazine work that I was doing at the time, which then led me into film making and animation. I worked in video games for a while as an animator/designer and then moved into film and television in the early 90s. In 1998 I helped to set up Double Negative VFX.

How was your collaboration with Christopher Nolan, with whom you had already worked on BATMAN BEGINS and THE DARK KNIGHT?
Chris is a fantastic director to work with – he is very demanding, always pushing you to raise the bar in every area, but he also gives you a lot of feedback and involves you in the creative discussion which makes you feel a part of the whole movie making process. Chris told me at the beginning of INCEPTION that it would be an all-consuming experience, and he was right!

Can you explain how you created the sequence in which Paris is folded over itself?
Returning to the Paris environment, Ariadne, played by Ellen Page, demonstrates her new-found ability to control the dreamworld by folding the streets in on themselves to form a giant « cube city ».

The Dneg VFX team spent a week documenting the Paris location where main unit was scheduled to shoot. Seattle-based Lidar VFX Services did a great job scanning all the buildings and then delivering highly detailed data from which Double Negative built a series of Parisian apartment blocks. It wasn’t possible to get above the buildings, so the Dneg VFX modellers sourced photographs of typical Paris rooftops to fill in the missing areas. We implemented the new Ptex texture mapping techniques in RenderMan to allow the CG team to avoid the laborious UV coordinate mapping that is usually associated with models of this type. The final folded streets featured fully animated cars and people – anything that’s not on the flat in the final images is CG.

How did you create the impressive scene of the cafe in Paris?
Early on in INCEPTION, Ariadne is taken into a dreamworld version of Paris by Cobb, played by Leonardo DiCaprio. When Ariadne realises that she is actually dreaming she panics and the fabric of the dream starts to unravel, disintegrating violently and flying apart in all directions.

Special Effects Supervisor Chris Corbould created a series of in-camera explosions using air mortars to blast lightweight debris into the Paris street location. Whilst giving an extremely dynamic and violent effect on film, the system was safe enough that Leo and Ellen were able to actually sit in the middle of the blasts as the cameras rolled. Director of Photography Wally Pfister used a combination of high-speed film and digital cameras to capture the blasts at anything up to 1000 frames a second, which had the effect of making the turbulent debris look like it was suspended in zero gravity, giving the impression that the very physics of the dreamworld were failing.
Starting with a rough cut of the live action, the Double Negative VFX animation team used the in-house Dynamite dynamics toolset to extend the destruction to encompass the whole street. The compositors retimed the high-speed photography to create speed ramps so that all explosive events started in real-time before ramping down to slow motion which further extended the idea of abnormal physics. As the destruction becomes more widespread the team added secondary interaction within the dense clouds of debris to sell the idea of everything being suspended in a strange weightless fluid medium.

What did you do in the scene in which Ellen Page turns large mirrors on a bridge in Paris?
Ariadne continues her exploration of the limits of the dreamworld by creating a bridge out of the echoing reflections between two huge mirrors.

We had scouted a bridge over the River Seine in Paris (the Bir-Hakeim bridge, previously featured in LAST TANGO IN PARIS) which had a really interesting structure: a Metro rail deck overhead with a pedestrian walkway underneath, framed by a series of cast-iron arches. Chris wanted this bridge to reveal itself in an interesting way as part of Ariadne’s playful exploration of her new-found ability to control the dreamworld. During preproduction we worked up various concept animations of the bridge assembling itself in a blur of stop-frame construction, but it always ended up looking slightly twee and overly magical – Chris was interested in something elegant that, whilst simple in concept, would defy easy analysis by the viewer. In an early discussion I mentioned that from certain angles the arches resembled the infinite reflections generated by two opposed mirrors – Chris thought that this was an interesting idea and eventually asked the question « what could you do on set with a really big mirror? ».
I got together with Special Effects Supervisor Chris Corbould, who got his team to build an eight-foot by sixteen-foot mirror that could be swung shut on a hinge, effectively forming a huge reflecting door. Dneg then got to work on a series of animations that explored the range of what we might be able to get in camera with this rig, and we arrived at a series of camera setups which then formed the basis for Chris and Wally’s shooting plan. This gave us a great start in VFX, but as big as the mirrored door was (its size being limited mainly by weight, as it was already up to 800 lbs) we still needed to do a lot of work. The compositing team set about removing the support rig and crew reflections and then adding in the infinite secondary reflections as well as the surrounding environment. The result is a series of shots so subtle in their execution that you’re not really aware of any digital intervention until the very last moments of the sequence.
In fact most of what you’re looking at is digital with only the actors being real – even their reflections are digital doubles in many cases.

Were you involved in the extreme slow-motion shots, and what did you do on them?
We shot slow motion using both the Photosonics 4ER (which uses standard 35mm film) and the Phantom digital camera. Slow-motion photography involves a trade-off between speed and quality – the faster the camera runs (and thus the slower the resulting image), the lower the quality of the picture. We came up with a solution to this by shooting at as high a frame rate as possible whilst still maintaining the quality, and then slowing the footage down even more in post using various respeed tools inside Dneg’s in-house version of Shake. Some things were impossible to shoot in slow motion, such as the falling rain in the wide shots of the van coming off the bridge, so instead we created all of the falling rain as VFX animation.
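The speed ramps described here (shots that begin in real time and settle into extreme slow motion) amount to remapping output frames onto fractional source-frame times. The sketch below is a minimal, hypothetical illustration of that idea – it assumes a linear ease and invented names, and shows nothing of Dneg's actual Shake respeed tools:

```python
def speed_ramp(n_out, start_speed=1.0, end_speed=0.1):
    """Map output frame indices to fractional source-frame times.

    The playback speed eases linearly from start_speed (real time)
    down to end_speed (10x slow motion), so early output frames
    advance about one source frame each, while late ones advance
    only a tenth of a frame -- the 'ramp down' into slow motion.
    """
    times, t = [], 0.0
    for i in range(n_out):
        times.append(t)
        f = i / max(n_out - 1, 1)  # progress 0..1 through the ramp
        t += start_speed + f * (end_speed - start_speed)
    return times

# 100 output frames: the gaps between consecutive source times
# shrink from 1.0 toward 0.1 as the ramp takes hold.
ramp = speed_ramp(100)
```

The fractional times would then be fed to a frame-interpolation or optical-flow retimer to synthesize the in-between images.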

Can you explain to us the shooting of the train that attacks the heroes? What did you do on this sequence?
The train is pretty much all in camera, in other words we really had a full size train on the street crashing through the cars. Special effects and art department built the shell of the train on a truck body – Double Negative then removed the truck’s wheels and added metal train wheels. The fractured road surface was created in CG and additional work was done in compositing to add shadows to the building facades, increasing the overcast rainy-day look.

How was the amazing corridor fight shot, and how did you create this sequence?
The « spinning corridor and hotel room » sequence was all in-camera. Chris Corbould’s special effects team built a huge rotating set to create the effect of Arthur (Joseph Gordon-Levitt) and the sub-security guards running over the walls and ceilings. The same principle applied for the scene inside the spinning hotel room. The only VFX work was a simple removal of a camera rig from the background of the final shot in the sequence.

How did you create the zero-gravity effect?
The zero-g look was achieved through the use of cleverly designed stunt and special effects rigs which were then removed digitally by Double Negative in post. For the zero-g fight, where Arthur grapples with the security agent, a vertical version of the hotel corridor set was built and the performers were dropped into it on wires, with the camera filming them from the bottom end of the set. For the most part the actors hid their own wires, but when they became visible they were painted out, with CG set extensions being used to fill in any gaps that were left. Much of the rest of the zero-g sequences, such as in the elevator shaft, was achieved on a horizontal set with Joseph Gordon-Levitt being held in a special « see-saw » rig or suspended from a small crane. Once again, Dneg removed all the rigs and repaired the backgrounds where necessary.

Did you create digital doubles for the corridor and zero-gravity sequences?
The only digital double work in the zero-g sequences is a brief moment when Arthur is tying up the sleeping dreamers where we replaced the heads of two of the stunt actors with CG heads of Cillian Murphy (Fischer) and Ken Watanabe (Saito). Everything else is done with real people!

During the mountain sequence, was the landscape real or was it all CG?
The landscapes are all real save for a small bit of terrain at the base of the Fortress when seen in the wide shots. The location of the snow scenes (Kananaskis County in Alberta, Canada) was absolutely spectacular – the only thing we had to do was add the digital Fortress in the wide shots and paint out the odd building in the background.

How did you create the avalanche?
The avalanche is for real. The special effects team collaborated with the local mountain patrol to trigger avalanches with strategically placed dynamite charges. We added the Fortress in the background and the little falling figures on the cliff face, but otherwise it’s all the real deal.

How was the collaboration with New Deal Studios?
New Deal are a great bunch of guys. I’ve worked with them directly before on LEAGUE OF EXTRAORDINARY GENTLEMEN and of course on THE DARK KNIGHT, and Double Negative’s relationship with New Deal goes right back to PITCH BLACK, our first movie in 1998. Ian Hunter, New Deal’s VFX supervisor, did a fantastic job with his team, creating a one-sixth-scale version of the central section of the Fortress and then rigging it for a dynamic collapse and pyrotechnic destruction.

Can you tell us about the sequence at the edge of the ocean? How did you create this city that is falling apart?
The Limbo City shoreline is, perhaps, the scene that has the most obvious symbolism of any of the dream environments. The sea represents Cobb’s subconscious mind and the city is the mental construct that he built within it – having once been beautiful and pristine, the city is now mutating and crumbling back into the subconscious sea, symbolising Cobb’s state of mental collapse. Chris wanted the city to take on the aspect of a glacier, slowly sliding out into the sea with giant architectural « icebergs » splitting off and drifting away in the water.

During pre-production both Art Dept. and VFX worked on concept designs for Limbo City devoting particular attention to the decaying shoreline. However, even after several weeks’ work we weren’t getting anything that Chris felt happy about – everything was just a little bit too literal. We discussed the idea of Limbo having started out as an idealistic modernist city that has started to collapse back into the sea of Cobb’s subconsciousness. We started with the basic concepts: a city of modern buildings and a glacier. We took a simple polygonal model of the glacier, built from photographic reference, and developed a Maya-based space-filling routine that populated the interior with basic architectural blocks with the height of each block being determined by the elevation of the glacier at that point. We then began to develop a series of increasingly complex rules that added street divisions or varied the scale of the buildings or added damage, all determined by samples taken from the glacial model. After each new rule was added we reviewed the resulting structure and then refined the process.
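The core of the space-filling pass described above – occupy the glacier's footprint with blocks whose height follows the glacier's elevation – can be sketched in a few lines. This is a hypothetical miniature of the first rule only, with invented names, not Dneg's Maya routine:

```python
import random

def fill_city(elevation, cell=1.0, seed=7):
    """Populate a glacier footprint with architectural blocks.

    elevation: 2D grid of glacier heights (0 = outside footprint).
    Each occupied cell gets a block whose height is driven by the
    glacier's elevation there, with a little jitter so rows of
    buildings don't look machine-stamped.
    """
    rng = random.Random(seed)
    blocks = []
    for y, row in enumerate(elevation):
        for x, h in enumerate(row):
            if h <= 0:
                continue  # outside the glacier footprint
            jitter = rng.uniform(0.85, 1.15)
            blocks.append({
                "pos": (x * cell, y * cell),
                "height": h * jitter,
            })
    return blocks

# A tiny 3x3 glacier slice: six cells carry elevation, three don't.
grid = [
    [0, 2, 3],
    [1, 4, 5],
    [0, 0, 2],
]
city = fill_city(grid)
```

The later rules the team mentions (street divisions, scale variation, damage) would be further passes over the same grid, each sampling the glacial model the same way.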

Once we had reached a certain level of complexity our VFX art director developed a series of paintings from the CG renders provided by the procedural system and these then fed back into the development of the rules. In this way we arrived at a city layout that had familiar features such as squares, streets and intersections, but which had a totally unique structure that felt more like a natural landform – a cliff being washed into the waves with architectural « icebergs » floating out to sea. The VFX animation team then used Houdini to create the collapsing architecture which was primarily referenced from natural history footage of glaciers rather than from building demolitions, adding giant splashes with Dneg’s proprietary Squirt fluids system. The hero shot from the sequence, featured in many of the online trailers, was developed from a helicopter plate that we shot with the INCEPTION aerial unit in Morocco – that’s actually Leo and Ellen walking through the waves. The final look of the city shoreline was created by using lots of reference of derelict housing developments as well as bomb damaged buildings in Iraq and other war zones.

Can you explain the final sequence and its gigantic city that mixes old houses and skyscrapers? Was it all shot in front of a greenscreen?
We shot inside the actual house (an early 20th century « craftsman house » in the San Gabriel Valley in Pasadena, California) and used the location for both the scenes in Cobb’s memory and Limbo. For the Limbo shots we built a large greenscreen, supported on a platform, outside of the windows. The cityscape was created from the same CG setup used for the scenes of Cobb and Ariadne walking through the deserted city. Great attention was paid to the compositing, with a lot of time spent on getting the depth of field and exposure right.

How was the top shot? Was it CG or real?
The top was shot for real; there was no CG for it.

How long did you work on that show?
I first read the script in February 2009 and then started on the show properly in April of that year. Our final delivery was at the end of May 2010 – so, in all about 13 or 14 months.

How many shots did you do, and what was the size of your team?
We worked on 560 shots of which 500 are in the final film. In total we had about 230 people working on the visual effects over the duration of the show.

What is your next project?
Right now I’m taking a bit of a break – we’ll see what comes later this year!

What are the four films that gave you a passion for cinema?
My favourite films are not necessarily visual effects films, but they all feature visual innovation and take a rigorous approach to story telling. I love David Lynch’s films, in particular THE STRAIGHT STORY which I think is a powerfully emotional film about a very singular man’s journey across rural America. My favourite film is Alexander Korda’s 1940 version of THE THIEF OF BAGDAD, which features some of the earliest use of bluescreen – I love its totally consistent sense of fantasy and powerful drama and it also looks absolutely incredible. That film, perhaps more than any other, is what got me interested in making films and visual effects myself.

A big thanks for your time.

// WANT TO KNOW MORE?
Meet the Filmmakers: Podcast of Paul Franklin at the Apple Store in London.
Double Negative: Dedicated INCEPTION’s page on Double Negative website.
fxguide: Paul Franklin’s podcast and New Deal Studios work on fxguide website.

© Vincent Frei – The Art of VFX – 2010

JONAH HEX: Ara Khanikian – Lead Compositing – Rodeo FX

Ara Khanikian has been working in Montreal’s visual effects industry for nearly 10 years, passing through studios such as Buzz Image, Hybride and Rodeo FX. He has worked on projects such as THE FOUNTAIN, 300, THE X-FILES, TERMINATOR SALVATION and TWILIGHT: ECLIPSE.

What is your background?
I studied 2d/3d animation in 1999, worked freelance for a couple of years, then joined the team at Buzz for about 5 years, did 2 years at Hybride, and then joined the team at Rodeo FX around 2 years ago.

How was the collaboration with the director and production VFX supervisor?
We had a very good relationship with the director and vfx supervisor. We would touch base very regularly using video-conferencing and cinesync sessions to discuss the progress of the shots.

What did Rodeo do on this show?
Our workload changed a lot over the period of time that we worked on this project. We were initially awarded around 20 shots, which included a fair amount of matte paintings composited with greenscreen footage. We had some shots where we had to create CG crows – some hero ones that occupied a large portion of the screen, and also large flocks. We also did a lot of tests and R&D for CG fire and smoke that would be used to enhance live-action elements. Unfortunately, for story-telling purposes, these shots never made it into the final cut of the film. We ended up delivering 5 shots. They’re the shots where Jonah Hex arrives at Independance Harbour, Virginia. We created 2 matte paintings and CG crows for these shots.

Can you explain to us the creation of a matte-painting shot from scratch to final image?
We always start with research; we like getting a lot of photographs. We’ll go out and take photographs of anything and everything that could help us, and we’ll get visual references from movies and paintings. We usually even shoot practical elements like smoke and fire, crowd elements, flags, etc. – anything that will help us take a shot to the next level. Then we do some concept work and try to nail the overall mood, the basic layout of the matte painting and, of course, the lighting and composition. We present it to the director and when he’s happy with the result, we start on the actual matte painting work. In our pipeline, the matte painters will usually work in Photoshop while the matte painting TDs start prepping a 3D scene with the correct camera info and start the modeling that will be used for the camera projections. Once that’s approved, it gets rendered and handed off in a couple of hi-rez layers to the compositor, where it gets composited with all the other elements for the shot, which usually involves practical elements (like smoke, fog, and anything that would give more life to the shot). In one of our shots, we even had crowds of people walking around, shot on greenscreen, that were used to populate the streets of the matte painting.

What references did the director give you?
For this project, we looked a lot at old photographs of New Orleans. The style and architecture of that city were a great match for what the director wanted to see. We also photographed some of the older buildings in Old Montreal that looked great because of the moody, dimmed lighting that’s present there.

Did you create CG crows?
Yes, we did. All the crows in our shots are CG. They were all done in XSI and rendered with Mental Ray in OpenEXR. We worked with another Montreal-based facility for the crows. We had 2 hero shots of crows where we would see them « up close and personal ». These 2 crows were hand-animated and they look and feel really awesome. We also used a more procedural approach for the flock of flying crows that we added in another shot.

Did you use a lot of 3D projections for your mattes?
It depends. If the shot is static or has very slight camera movements, we usually don’t need to do any 3D projections; the matte painting is simply exported to the compositor, who will take care of the tracking. For some other shots, we will need to match-move (usually in 3DEqualizer), create a 3D scene, and then use camera projections.
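A camera projection assigns each vertex of the proxy geometry the texture coordinate where the matchmoved camera "sees" the painting. A toy pinhole version of that mapping – a fixed camera looking down -Z, hypothetical names, none of the 3DEqualizer or Nuke machinery – looks like this:

```python
def project_uv(point, cam_pos, focal=1.0):
    """Project a world-space point through a simple pinhole camera
    at cam_pos looking down -Z, returning (u, v) image-plane
    coordinates -- the spot on the painting that this vertex
    should sample when the projection is baked to the geometry.
    """
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Standard perspective divide: offsets shrink with distance.
    u = focal * x / -z
    v = focal * y / -z
    return (u, v)

# A vertex twice as far from the camera lands at half the offset
# in the image plane -- which is why projected paintings only hold
# up under parallax where real proxy geometry backs them.
uv_near = project_uv((1.0, 0.0, -2.0), (0.0, 0.0, 0.0))
uv_far = project_uv((1.0, 0.0, -4.0), (0.0, 0.0, 0.0))
```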

What was your margin of creativity on this project?
We actually had a lot of creative input on our shots and the director was very open to our suggestions.

How long did you work on this movie, and what was the size of your team?
We worked for about 3 months, and our team varied between 5 and 10 artists.

What was the challenge on this film and how did you overcome it?
One of the biggest challenges we had was a technical one. This film was shot entirely with anamorphic lenses, and one of the shots we worked on used a very wide-angle lens that created a huge amount of lens distortion. The shot had a fairly complex and long camera move (I believe it was a 21-second shot) that showed Jonah Hex on his horse arriving on a bridge, with the camera gradually pulling out to reveal the city of Independance Harbour, Virginia. Only part of this bridge was built on set and was filmed in front of a greenscreen. We created a CG extension to this bridge, the matte painting of the city and a flock of CG crows. Dealing with this much lens distortion in a big establishing shot with a lot of perspective and parallax change had its share of technical challenges. The matchmove and tracking were fairly complex because of this. In the end, it worked out beautifully.
We had also done a lot of R&D and tests on a sequence where Jonah Hex would get his face burned with a branding iron by his nemesis, Quentin Turnbull. We had to create a very hot-looking branding iron, with fumes and smoke effects coming out of the tip, and especially find a convincing look for Jonah’s skin melting as soon as the brand touched his skin. It was very gory! His skin was melting and burning, smoke was coming out of everywhere! The director decided to go with a much more graphic look, one that looked a lot like the comic book, for that whole scene.

What software did you use?
For compositing, we used Flame and Nuke. The matte painting department mostly used Photoshop and Softimage|XSI.

What do you take away from this project?
It was a very interesting and fun project! It allowed me to discover the dark and mysterious universe of Jonah Hex which I really didn’t know.

What are your next projects?
We just finished our work on RESIDENT EVIL: AFTERLIFE, and are starting SOURCE CODE and THE IMMORTALS.

What are the four films that gave you a passion for cinema?
I guess I’m part of a whole generation that really got inspired by classics such as STAR WARS, E.T., CLOSE ENCOUNTERS OF THE THIRD KIND, INDIANA JONES and BACK TO THE FUTURE trilogies.

A big thanks for your time.

// WANT TO KNOW MORE?
Rodeo FX: Official website of Rodeo FX.

////

Rodeo FX – credits list

Visual Effects Supervisor
Sébastien Moreau

Visual Effects Producers
Nina Fallon
Benoit Touchette

Visual Effects Coordinator
Josiane O’Rourke

Compositors
Ara Khanikian
Laurent Spillemaecker
Vincent Poitras
Simon Devault
Christophe Chabot-Blanchet

Art Director, Matte Paintings
Mathieu Raynault

Matte Painters
Frédéric St-Arnaud
Sithiriscient Khay

3D Artists
Jeremy Boissinot
Moïka Sabourin
Marilyne Fleury
Daniel Rhein

Camera Matchmove
Jean-François Morissette

System Administrator
Curtis Linstead

© Vincent Frei – The Art of VFX – 2010

THE A-TEAM: Bill Westenhofer – VFX Supervisor – Rhythm & Hues

At Rhythm & Hues for nearly 15 years, Bill Westenhofer has overseen many projects such as BABE, STUART LITTLE, CATS & DOGS, MEN IN BLACK 2 and THE CHRONICLES OF NARNIA. In 2008, he received an Academy Award® for Achievement in Visual Effects for his work on THE GOLDEN COMPASS.

What is your background?
I’ve been working in the visual effects industry for over 15 years. I have a master’s degree in computer science from The George Washington University in Washington DC, specializing in graphics algorithms. My formal training is technical, but I’ve been drawing, painting, and animating on my own since I was very young. My current role as Visual Effects Supervisor combines both disciplines. I have to creatively direct the team of artists while helping to develop the technical approaches to achieve the looks we need.

How was your collaboration with director Joe Carnahan and production visual effects supervisor James Price?
From the start, Joe and Jamie emphasized the « fun » factor of THE A-TEAM. They wanted high energy, dynamic action which meant a lot of objects close to the lens and fast moving cameras. I thought our collaboration worked very well. We were able to bring a lot of ideas to the table and they likewise were great in crafting fun sequences and in helping us whenever an action or ‘gag’ wasn’t working.

What sequences have been made by Rhythm & Hues?
We worked on two sequences in the film: « The Tank Drop » and « Long Beach Harbor ».

Can you tell us about the design and the creation of the crazy freefall tank sequence?
This sequence was both the most fun and the one that caused the most « sweat » at the studio. The challenge was the sheer insanity of a tank falling through the sky and redirecting itself with its main gun. Whenever you push the believability of physics you run the risk of the whole thing falling apart. I really think we were able to walk the fine line in telling the story of what the tank was doing and yet maintaining just enough weight that it worked with a degree of plausibility.

The sequence was prevized before we came on board. The previs established most of the cuts that you see in the final product and nailed down the details of the action. R&H created several shots for a very early teaser trailer and based them very closely on this initial previs. Once those were out the door, we reconsidered the action with believability in mind and made the adjustments that finally made it into the film. It was interesting to see how your perception of whether something was working or not changed as the rendering of the clouds and the tank became more realistic. A lot of the early previs animation proved to be too ‘light’, with the tank responding too strongly to its main gun, for example.

How did you create such realistic clouds?
The clouds were, by far, the most challenging part of the sequence for our R&D folks. We didn’t have any aerial photography and we knew we would be flying right up to, and sometimes through, the clouds. This meant we would have to create fully rendered volumetric clouds. The clouds were also going to be very important in the shots compositionally, and to provide a sense of speed, so we needed an efficient way to visualize how they would work in the animation stage. The technique we settled on was to make a library of predefined cloud ‘caches’. Analogous to the pre-light stage for a regular 3D object (like the plane or tank), we set up turntables so we could adjust the characteristics of each cloud – the amount of ‘wispiness’, designing areas with smooth detail next to clumpy cumulus puffs, etc. This was done in Side Effects’ Houdini. We then took these caches and made low-res iso-surfaces which were handed to layout artists who composed the ‘cloud landscape’. The iso-surfaces were light enough to be interactive during the animation stage, and the animators, in fact, had the ability to add or move them to help the sense of speed, etc.

Once they got to the render stage, cloud lighters placed scene lights to represent the sun, simulate bounce lighting from cloud to cloud, and also simulate some of the complicated internal light scattering in the cloud. We did try to simulate that within the volume renderer, but it proved to be very expensive. To make up for that, one of our TDs, Hideki Okano, developed a tool to place internal lights where there would be the most internal scattering in a full simulation. He also developed a feature we called ‘crease lighting’, which mimics a phenomenon in cumulus clouds where the ‘creases’ between lumps are actually brighter than the lumps because of an increase in water vapor density as you move in from the edges.

For the actual render, Houdini’s Mantra dealt with the actual cloud visibility calculations and was the framework for a ‘ray-march’ render. At each ‘march-step’, however, a custom volumetric calculator called « FELT » (Field Expression Language Toolkit), written by Jerry Tessendorf, was used, which had the ability to add additional multi-scattering terms. After initial renders, we had the ability to add more detail by ‘advecting’ the volume caches – increasing the ‘wispy’ quality. We also added realism by mixing clouds with different levels of ‘sharpness’ together, often within the same 3D space.
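Stripped of multi-scattering and lighting, the ray-march at the heart of such a renderer is a Beer-Lambert accumulation of opacity along each ray. The sketch below is a deliberately simplified, hypothetical illustration of that core loop – nothing of FELT or Mantra, and the names are invented:

```python
import math

def ray_march(density_fn, origin, direction, step=0.1, n_steps=100,
              extinction=1.5):
    """Accumulate cloud opacity along a ray with a fixed march step.

    At each step, transmittance is attenuated by the local density
    via Beer-Lambert absorption; whatever light is absorbed in that
    slab contributes to the accumulated opacity of the ray.
    """
    transmittance = 1.0
    opacity = 0.0
    for i in range(n_steps):
        t = i * step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        sigma = density_fn(p)  # sample the volume cache here
        absorb = math.exp(-extinction * sigma * step)
        opacity += transmittance * (1.0 - absorb)
        transmittance *= absorb
    return opacity

# A unit-density spherical puff centred five units down the ray:
def puff(p):
    r2 = p[0] ** 2 + p[1] ** 2 + (p[2] - 5.0) ** 2
    return 1.0 if r2 < 1.0 else 0.0

alpha = ray_march(puff, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), n_steps=120)
```

A production renderer would add scattered lighting at each step (which is where the internal-light placement and 'crease lighting' tools mentioned above come in), but the march structure is the same.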

As a final touch, in a few specific shots where a plane passes through a cloud, we added the ability to animate the clouds from the plane’s airflow. This achieved the wing ‘vortex’ effects you see as it emerges from the cloud.

This sequence presented major challenges, especially with the particles and parachutes. How did you achieve them?
We used Houdini extensively for all sorts of explosions, missile trails, burning engines, etc. For the most detailed explosions we used Houdini’s fluid simulation with thermal heat propagation, combined with traditional particle effects and a few flame cards. One relatively simple effect that was harder than it looked was the tracers. In animation, they simply used straight ribbons to suggest where the bullets should go from a story point of view. Once we had to realize them with more realistic ‘ballistic’ flight, our effects animators had to actually « aim the guns », leading the targets etc. to achieve a similar effect. While a little bit of cheating was possible (bending their flight paths, for example), you could only push this so far before it looked wrong. The effects animators ended up with their own mini ‘shooting gallery’.

As for the parachutes, one of the effects I’m most happy with is a shot where you see the canopies being strafed by the aforementioned tracers. The effects artist worked with his « aim » until we were happy with the amount of impacts and the choreography of the bullet paths. He then created geometry markers that noted where each bullet entered and exited the canopy. This was handed back to modeling, who punched varying-sized tears in the right places. Finally, a « technical animator » went back and animated impact waves on the surface that corresponded to the hits. It was a lot of hand work, but I thought it worked beautifully in the end.

The sequence at the dock in Long Beach is another crazy one. Can you talk to us about the shooting of this sequence? Was it shot entirely on bluescreen, or were some parts shot on the real dock?
Much of it was shot for real at a dock in Vancouver, Canada. For the most part, during the first half of the sequence you are seeing a real dock and a CG ship with containers. A few shots were added later and evolved as the edit came together and these were blue-screen set pieces with CG backgrounds created with photo-mapped geometry. One interesting bit involves the first two establishing shots of the ship on the water. Live plates were photographed (over the ocean and at the dock), but the task of perfectly matchmoving ship wakes and reflections proved so difficult that we ended up replacing the water completely. The digital water ended up being such a good match that it worked perfectly. Once the ship starts to explode, a lot of the shots were blue screen pieces with digital backgrounds.

The sequence turns into the massive destruction of the dock. How did you handle all of these elements colliding and destroying each other?
Again we used Houdini for a rigid body simulation of the ship and containers. Once the rigid body sim was run, a damage pass was run to add gross deformations to the containers based on where they impacted with other objects. A fully detailed simulation of the damage proved cost prohibitive, so for the most part, wherever more specific detail was needed (or when the containers were close to camera), animators went back with deformation tools and blend shapes to hand craft the damage. Another detail was added when container doors opened and contents started to spill on the dock. This was also done with a combination of rigid body simulation and hand animation.
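A "damage pass" in the sense described above – gross deformation driven by impact points recorded during the rigid-body sim – might be sketched as follows. The falloff, names and data layout are all hypothetical, not R&H's Houdini setup:

```python
def damage_pass(vertices, impacts, radius=1.0, depth=0.3):
    """Dent container geometry around recorded impact points.

    impacts: list of (point, normal) pairs reported by the rigid-body
    sim. Vertices within `radius` of an impact are pushed inward
    along the impact normal, falling off linearly with distance --
    a gross deformation only; hero damage is left for hand work.
    """
    dented = []
    for v in vertices:
        vx, vy, vz = v
        for (ix, iy, iz), (nx, ny, nz) in impacts:
            d = ((vx - ix) ** 2 + (vy - iy) ** 2 + (vz - iz) ** 2) ** 0.5
            if d < radius:
                amt = depth * (1.0 - d / radius)  # linear falloff
                vx -= nx * amt
                vy -= ny * amt
                vz -= nz * amt
        dented.append((vx, vy, vz))
    return dented

# One impact at the origin, striking along +X: the vertex at the
# impact point is dented inward; the distant vertex is untouched.
verts = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
hits = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))]
out = damage_pass(verts, hits)
```

As the interview notes, close-to-camera containers would bypass this automated pass in favor of hand-crafted blend shapes.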

Weta Digital has also participated in this sequence. How was the collaboration?
In the original sequence, the ship is hit by the missile, lists, and the containers spill onto the dock. We then went back to have the initial missile hit trigger a series of secondary explosions that ultimately split the ship in half. Unfortunately for us (R&H) we didn’t have the capacity to take on the additional shots and effects work that it would require, so Fox asked Weta to step in and tackle those. We gave them all of our assets – ship, containers, dock gantries, etc – and they created several new shots to depict the additional explosions. Once the ship starts to list, we had a few shots (even before the cut change) that were blue screen shots of the actors, on the ground or hanging from partial set pieces. For these we used our CG simulations and photomapped environments. In a few cases, the new continuity required us to abandon aerial plates and make fully synthetic shots for some of the wides. Weta handled the majority of these, but in the few cases where we had done significant work and the continuity impacts were manageable, we finished them.

How long have you worked on this project?
I actually came on the project in January, taking over for another supervisor who had to leave for personal reasons.

Was there something that prevented you from sleeping on this show?
Fortunately, the futon couch in my office allowed me to sleep well – hehe.
Actually, the hardest part was just working with the complex material in the ever-shortening timelines of post production. Studios want to see finished renders much earlier in the process than ever before.

What was the size of your team?
We had about 120 artists on the show.

What is your software pipeline?
We used Houdini and Mantra for much of the effects work. We also used Maya for modeling. The rest of the work was done with our in-house proprietary tools, including our renderer ‘wren’ and compositing software ‘icy’.

What will you take away from this experience?
This project pushed our pipeline, which had been tailored for 3D character films. It showed where we needed improvements – many of which are being implemented as we speak. The same goes for my career. This was a welcome change from digital lions and creatures and was a lot of fun. I’m very happy with the clouds and the tank sequence in general, and many of the ship shots in the Long Beach sequence – especially the ones where the ship takes up part of the background looked absolutely convincing. There is of course the obvious effects work once the ship starts to explode, but I think people might be surprised how much work was done in many of the « in-between » shots.

What is your next project?
I’ll let you know once I do (laughs).

What are the four films that gave you the passion for cinema?
STAR WARS and RAIDERS OF THE LOST ARK as a kid…
JURASSIC PARK was the one that made me rush out to California…
THE GODFATHER though is still one of my favorite films.

A big thanks for your time.

// WANT TO KNOW MORE?
Rhythm & Hues: Official website of Rhythm & Hues.
fxguide: Complete article about THE A-TEAM on fxguide.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Ben Morris – VFX Supervisor – Framestore

After working several years at MillFilm on films such as BABE 2, GLADIATOR and LARA CROFT, Ben Morris joined Framestore in 2000 and participated in projects such as TROY and CHARLIE AND THE CHOCOLATE FACTORY, and served as visual effects supervisor on THE GOLDEN COMPASS and PRINCE OF PERSIA.

What is your background?
I studied at Art College and then did a Mechanical Engineering degree. Having left university, I joined Jim Henson’s Creature Shop designing and developing computer based Performance Animation Control Systems. I moved into CG during post-production on BABE 2 at MillFilm and moved to Framestore in 2000, where I have worked to the present day.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
I really enjoyed working with both of them. Tom, in particular, was a very creative and inspiring VFX Supervisor to work with. He comes from a facility background and has an invaluable practical knowledge of how shots are put together. He also has a great sense of design and visual style, which shows through in all the work he supervised on PRINCE OF PERSIA.

What are the sequences made by Framestore?
The Hassansin Vipers and the Sandroom at the end of the film.

Were there real snakes on the set or are they all in CG?
There is one brief shot of a real python at the beginning of the Hassansin’s Den sequence – all the Vipers are CG.

How did you create the CG sand?
(Answer by Alex Rothwell, Lead FX artist)
Before starting the work, we first needed to be clear in our minds about how we thought that much sand would move. There was no reference for a moving body of sand the size of a football field, so we had to imagine what we thought it would look like with the help of our concept artists and try to realize that. Fast-moving sand exhibits some fluid-like properties, but there are also key aspects of the movement that are not fluid-like. We contemplated doing a lot of fluid simulation work to model the movement of the sand, but large simulations are extremely time consuming and are not as directable as other solutions. Above everything we wanted a system that could be exactly controlled by an artist reacting to the director’s or supervisor’s comments.

The whole sequence was blocked out by the animators using geometric surfaces to represent the sand’s surface; we were able to get most of the key movement of the sand signed off in this way before an FX artist became involved. Once the layout of a shot had been finalized, a custom plugin in Maya took the animated geometric surfaces representing the sand and produced a flow of particles that replaced the geometric surface in the final render. The plugin created particle movement that appeared fluid-like and was dictated by the gradient of the underlying surface. Any additional flow detail could be controlled via maps, allowing the artist to quickly and visually paint the sand flow direction, including any turbulence and spray. The number of semi-simulated particles was increased at render time via a custom particle management system dubbed pCache. This system allowed us to generate the number of particles needed to produce a convincing render without the overhead of the extra processing and storage. The sand artists were able to write shader-like scripts that gave complete control over the upscaling process and could also be used to produce additional surface detailing and displacement. In some of the wide shots over a billion points are being rendered.
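The two core ideas here, particles advected down the gradient of the animated blocking surface, and a sparse "parent" set amplified at render time, might be sketched like this. The height field, constants and upscale scheme are illustrative stand-ins for the Maya plugin and pCache, whose internals are not described.

```python
import random

def surface_height(x, z):
    # hypothetical animated sand surface standing in for the
    # animators' blocking geometry
    return 0.1 * x * x + 0.05 * z * z

def gradient(x, z, eps=1e-4):
    # finite-difference gradient of the height field
    dhdx = (surface_height(x + eps, z) - surface_height(x - eps, z)) / (2 * eps)
    dhdz = (surface_height(x, z + eps) - surface_height(x, z - eps)) / (2 * eps)
    return dhdx, dhdz

def advect(p, dt=0.04, speed=1.0):
    # move a particle downhill along the surface gradient
    x, z = p
    gx, gz = gradient(x, z)
    return x - speed * gx * dt, z - speed * gz * dt

def upscale(parents, children_per_parent=10, jitter=0.05, seed=1):
    # render-time amplification: spawn jittered children around each
    # semi-simulated parent so only the parents are simulated and stored
    rng = random.Random(seed)
    kids = []
    for x, z in parents:
        for _ in range(children_per_parent):
            kids.append((x + rng.uniform(-jitter, jitter),
                         z + rng.uniform(-jitter, jitter)))
    return kids

parents = [advect((2.0, 1.0))]
dense = upscale(parents)
```

With a ratio of hundreds or thousands of children per parent, this is how a few million directable particles can become the billion points mentioned for the wide shots.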

Can you tell us about the shooting of the final scene in which the sand flows into the void?
Dastan is a mixture of real Jake Gyllenhaal and the odd digi-double. Jake really threw himself into the challenge and worked very hard to do most of the stunts himself. It really paid off in post, as we only had to do one face replacement in the entire sequence.

Can you tell us about your collaboration with Double Negative for the Oasis sequence?
The collaboration worked very well. For a few shots we needed to animate and render Vipers which were caught in the time-freezing effect created by Dastan releasing the dagger’s sand. Both companies worked on the same backplates, some of which had ‘virtual’ camera moves created by DNeg. Once we got approval for our elements in a shot, we would package up a bundle of data for Dneg (reference animated geometry, 3D render elements and the approved comp).

What was the biggest challenge on this show?
Creating the epic scale of the environment and destruction required in the Sandroom. We constantly referenced the early concept work created by our VFX Art Director Kevin Jenkins, which perfectly captured the ‘look and feel’ of the sequence before we started working on it.

How many shots have you done and what was the size of your team?
We worked on approx. 220 shots and completed 125 for the film. We had 60 crew working on the project over a period of 2 years.

Were there some shots that prevented you from sleeping?
We had a couple of trailer shots involving complex sand simulation and rendering which delivered pretty close to the wire, but that’s the great thing about trailers – they flush out all the bugs before final delivery.

What will you take away from this experience?
Working with Tom Wood was an absolute pleasure and our relatively small crew created some really outstanding visuals from concept design through to final delivery. So I guess we’ll all keep some beautiful pictures …

What is your next project?
I have started working on a great project with a very good director, but sadly I can’t talk about it right now.

What are the four films that gave you the passion for cinema?
STAR WARS, BLADE RUNNER, DUNE and DARK CRYSTAL.

A big thanks for your time.

// WANT TO KNOW MORE?
Framestore: PRINCE OF PERSIA dedicated page on Framestore website.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Stephane Ceretti – VFX Supervisor – MPC

Stephane Ceretti worked for nearly 12 years at BUF in Paris on films such as ALEXANDER, MATRIX 2 and 3, HARRY POTTER 4 and BATMAN BEGINS. He then moved to MPC in London as VFX supervisor on PRINCE OF PERSIA. After this film, he joined Method Studios, also in London, where he oversaw THE SORCERER’S APPRENTICE.

What is your background?
First of all, I am French. I spent the first 12 years of my career at Buf Compagnie in Paris, where I had the chance to work as VFX supervisor on films such as ALEXANDER, MATRIX 2 and 3, HARRY POTTER 4, BATMAN BEGINS, THE PRESTIGE, SILENT HILL and BABYLON AD. I joined MPC in 2008 as VFX supervisor on PRINCE OF PERSIA.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
Very good! On this kind of production we usually spend most of our time with the VFX supervisor. Tom Wood used to work at MPC, which helped break the ice very quickly. I ended up going to Morocco and to Pinewood Studios, where we shot most of the battle sequence that happens at the beginning of the film. That period of time spent on the shoot is essential to understanding what the director is after, as well as getting a visual sense of the universe that the production designer and Tom Wood wanted to depict in the movie.

What did MPC do on this show?
MPC was in charge of developing the look of the City of Alamut and its surroundings. We also had to create a CG Persian army for the opening battle sequence. Our work ranged from simple set extensions to complement the sets built in Morocco, to full CG views of the city of Alamut for wide opening shots. Our biggest task was to create the environment and armies attacking the Eastern Gate of Alamut. Considering this was mostly shot in Pinewood Studios, we had a big task in front of us to make it look like it was shot on location and to give the sequence the scope it needed.

What references did you have for the city of Alamut?
The production designer did a layout and design of the city: the walls, the inner city with the various palaces and gardens, the big white and gold temple at the base of the super high tower in the middle of the city. We then had to extrapolate from that and create the entire city. We did a lot of research, and based on stills that Wolf Krueger gave us from Rajasthan in India, we created a map of Indian locations we would have to visit to build a library of buildings, trees, villages and cities. We then sent our digital photographer James Kelly there for 3 weeks to shoot as many textures and references as he could. James also came to Morocco to shoot stills of the sets, as we would have to extend these and mix and match them with the Indian locations. We also took stills of Moroccan locations for the surroundings of the city, as well as Indian locations, and as I was away in Corsica for a break I took some other stills of Corsican mountains, which ended up being the perfect match for what we wanted. Again, we ended up using a mix of all these sources to create the city surroundings.

In terms of the look of the city and the light ambience, we spent a lot of time looking at reference stills in books and on the net, but Tom showed us paintings from the Orientalist painters. These were stunning and gave us a good sense of the style of light and the levels of haze, mist and dust we would have to put in the city.

Were you in contact with Jordan Mechner and Ubisoft?
Not really, no.

Can you explain to us how you recreated the city in CG?
It was a big undertaking. Based on the thousands of stills that James brought back from India and Morocco, we created a library of buildings sorted by styles and sizes. We then took the layout from the Art department and created some layout tools based on ALICE, our crowd system, which we customized to accommodate buildings and city props, to design the city space. This first interactive pass allowed us to do quick modifications that we could show to Tom to get approval. Once the main squares, gardens, streets, palace and markets had been laid out from key views of the city, we could get into the minutiae of customising some pieces of the city by hand to match specific shot needs. We had to create management tools to allow us to decide what kind of props would be used and where to put the trees (we created a huge library of trees, with particle leaves that we could render while keeping the memory usage manageable)…
The city was a huge asset to work with but Renderman handled the renders pretty well.
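At its simplest, the layout pass described above amounts to instancing assets from a style- and size-sorted library onto the art department's plan. The toy sketch below illustrates that idea only; the data model and all names are hypothetical, not MPC's customized ALICE tools.

```python
import random

def lay_out_city(layout, library, seed=7):
    # fill each lot in the art-department layout with an instance from
    # the building library, matched by district style and lot size
    rng = random.Random(seed)
    placed = []
    for pos, style, max_size in layout:
        candidates = [b for b in library[style] if b[1] <= max_size]
        name, _ = rng.choice(candidates)
        placed.append((pos, name))
    return placed

# hypothetical mini-library sorted by style and size
library = {"market": [("stall_a", 1), ("stall_b", 1)],
           "palace": [("dome_small", 2), ("dome_large", 3)]}
layout = [((0, 0), "market", 1), ((5, 0), "palace", 3)]
city = lay_out_city(layout, library)
```

Because the layout is regenerated rather than hand-modelled, a change from the director only means editing the plan or the seed and re-running the pass, which is what made the quick approval rounds possible.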

During the shooting of scenes in the city of Alamut, can you tell us the size of the real sets?
The sets built in Morocco were huge, but it was never big enough! So we ended up having to extend quite a lot of them. But the East Gate set that was built inside the 007 stage at Pinewood Studios was really huge; it took up most of the space in the studio and we were really close to the ceiling, making it difficult to light and operate. Blue screen coverage was also difficult. It was one of the biggest interior sets I’d ever seen.

What was the proportion of extras and digital doubles made with ALICE during the attack of the Persian army?
It depends on the shots, but sometimes we had about 50 to 100 extras and ended up making it 20 to 50 times bigger. I think we had a total of 300 to 400 extras on big wide shots, but again we ended up having many, many more in CG. PRINCE OF PERSIA was not a huge crowd show for us in that sense; all the crowd shots we had ended up being fairly simple.

After ROBIN HOOD, this new project is an opportunity to admire ALICE’s work on an army. How did you ensure that the rendering of these shots did not take years?
Compared to the city shots, the army shots were a real piece of cake, I can tell you!

How did you make the beautiful shot that rotates 360 degrees around Prince Dastan before his jump?
We shot Jake in front of a green screen on a set in Pinewood, with the wooden beam on which he stands… and all the rest is CG. The East Gate on which he is standing is a CG representation of the set in Pinewood, the close surroundings are a 3D reconstruction of the Moroccan sets with extra top-ups, and in the back you can see our 3D city and the surrounding CG mountains based on stills from Corsica. So it’s a big collage of many techniques, locations and CG. We also have CG armies and city crowds in the shots. It was one of the most complex shots to get right, as we do a lot of work with atmospherics, the light coming from the sun…

Can you tell us about the shots for the giant sandstorm that destroyed Alamut?
We did Alamut in these shots, but we did not do the sandstorm and the destruction. These were shared with another facility.

What was the biggest challenge on this project?
Getting the city to render, and the Golden Palace extensions to look real!

How many shots have you done and what was the size of your team?
Around 300 shots in the end, and maybe 80 to 100 people worked on it, but not all at the same time.

Is there a shot or a sequence that made you lose your hair?
They all did!

What will you take away from this experience?
It was great working with MPC for the first time, as well as working with Tom and Mike. Also, being on a Bruckheimer production is really demanding but extremely rewarding and quite fun, they are really passionate about the work and always push for more, which is cool from an artist’s point of view.

What is your next project?
Well, I just finished working on another Bruckheimer production called « THE SORCERER’S APPRENTICE » for Method Studios in London. And I am starting on another project from Marvel shooting in the UK.

What are the four films that gave you the passion for cinema?
I can’t choose, they all give me a passion for Cinema, even the ones that don’t have visual effects in them ! I am quite eclectic in my tastes, so I can enjoy a movie like PRINCE OF PERSIA or STAR WARS or a Chris Nolan movie or a french movie but not for the same reasons. I could not really choose just 4 movies …

A big thanks for your time.

// WANT TO KNOW MORE?
MPC Breakdown: VFX Breakdown for PRINCE OF PERSIA.
The Moving Picture Company: PRINCE OF PERSIA dedicated page on MPC website.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Sue Rowe – VFX Supervisor – Cinesite

Sue Rowe is one of the few female VFX supervisors in the business. She has worked at Cinesite for over 10 years and oversaw movies such as TROY, CHARLIE AND THE CHOCOLATE FACTORY, X-MEN 3 and THE GOLDEN COMPASS. She just finished the visual effects of PRINCE OF PERSIA.

What is your background?
I have a degree in traditional animation and worked as a commercials animator for a couple of years, before retraining in computer animation and taking an MA at Bournemouth University.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
Mike was very enthusiastic on set and we worked closely with Tom Wood, who we had worked with previously. Tom comes from a facility background (Note: MPC and Cinesite), so he understands the technology involved and the dynamic worked well.

What sequences did Cinesite contribute to in the film?
We created over 280 shots. The key sequences we worked on were the Avrat parkour jump sequence and the establishing views of the city of Nasaf, the prince’s home town.
The most exciting sequence for us was the Hassassins’ attack, where we created five separate weapons which were hand animated in an exciting, fast moving battle using whips, blades and fire. We also created the ‘youthening’ of the king Sharaman (Ron Pickup) and his brother (Sir Ben Kingsley), the death of the king and the lion hunt sequence.

What references did you have for the town of Nasaf and how did you recreate it?
We had a good start using the locations in Morocco; it was a real privilege to visit these historic sites and simply augment them. We also visited an exhibition on Ancient Persia at the Tate Gallery in London, called « The Lure of the East », on British Orientalist paintings, and carried out internet and photographic research for generic Arabic art, clothing, tiles and architecture.

As I was on set for the duration of the shoot in Morocco, I was able to bring home high resolution stills which captured the real lighting conditions. In addition to the usual camera data, we were supplied with topographical Lidar scans of the environments.

Can you tell us a typical day on the shooting in Morocco?
We would usually start at 5.30am when the sun came up and drive to the various desert locations. Filming was in general 12 hours per day. Whilst we were filming it was Ramadan, which when combined with the daily temperature of over 40 degrees centigrade, proved to be a challenging environment to work in.

During the shoot we took high dynamic range photography, which would provide us with a lighting environment, to which we would match our computer generated cities.

What was the size of the real set?
There were several full-sized elaborate sets, which were initially filmed in Morocco, then recreated at Pinewood. The backgrounds were shot with blue screens, so that we could replace the set environments with sky domes taken in Morocco. This allowed for wire removal on parkour jumping sequences, in particular for some of the more difficult stunt work.

What did you do on the chase scene in the Avrat market?
We created 3D set extensions, as the real locations were just not big enough for the scope of the film. We also created 3D and 2D arrows for the action sequences. In some cases we did wire and rig removals on the stunt doubles, and to make it more dramatic we added a digital face replacement over Jake’s stunt double. We also added general atmosphere to shots using 2D smoke and dust elements, augmented and composited into the shots to convey a city environment. Additionally, we composited digital matte paintings into the background and added sky replacements for look consistency throughout the sequence.

Can you tell us about Hassassin weapons? How did you create and animate them?
The Hassassins sequence was great fun to work on. We choreographed the sequence with CG Supervisor Artemis Oikonomopoulou and our Animation Director Quentin Miles. Although we shot references for the whips on set, the stunt team only had a handle in their hands, so we had some freedom regarding where the whip would fall. Add to that a few swings and dust hits and it’s a pretty dynamic sequence. Tom shot the stunt guys on set practicing with a real whip and we researched the way the whip recoils in some detail. When the whip cracks it’s because the tip is moving faster than the speed of sound, creating a sonic boom.

The Hassassins are always surrounded by a mysterious looking cloud. Again this ended up as a visual effect, as it’s impossible to control smoke on an exterior set. As the shots developed, the cloud became more ominous. Both the cloud and the sand trails were done in Houdini. We used Autodesk Maya to create and animate the weapons.

How did you make Sir Ben Kingsley younger?
The director, Mike Newell, didn’t want to cast younger actors to play the king and his brother in the flashbacks. What we did was show him a test which made him look 20 years younger. Mike really liked it, as it meant he could get the performances of the real actors, which is what he wanted.

However, there are conditions to this approach: it’s a 2D effect, so it relies on good data being gathered during the shoot. We cast two youth doubles who stood in straight after the take with the original actors, so we could take high res digital stills in the same lighting conditions. This needed to be timed well, as no one likes holding up a film set, but the end result was worth it. Tom Wood comes from a facility background, so he knew that 10 minutes extra on set can save months of work later down the line in post.

What we did was to take photos of a younger person’s skin textures, like the pores and skin surface glow. We added darkened eyelashes and thickened hair, and removed wrinkles and age spots. These were then tracked onto the actors’ skin using our in-house software Motion Analyser, which basically sticks the new skin on top of the old skin – like a digital skin graft.
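At its most basic, the "digital skin graft" combines a 2D track with a composite: follow a facial feature from frame to frame, then blend the younger-skin patch over the plate at the tracked position. The toy sketch below only illustrates that principle; Motion Analyser is proprietary, and all names and values here are invented.

```python
def track_offset(ref_point, cur_point):
    # per-frame translation of a tracked facial feature
    return (cur_point[0] - ref_point[0], cur_point[1] - ref_point[1])

def graft(frame, patch, anchor, offset, opacity=0.6):
    # composite the younger-skin patch onto the frame at the tracked
    # position; frame/patch are {(x, y): gray_value} dicts standing in
    # for real images
    out = dict(frame)
    ax, ay = anchor[0] + offset[0], anchor[1] + offset[1]
    for (px, py), v in patch.items():
        key = (ax + px, ay + py)
        if key in out:
            out[key] = (1 - opacity) * out[key] + opacity * v
    return out

frame = {(0, 0): 100.0, (1, 0): 100.0}   # two pixels of "old skin"
patch = {(0, 0): 200.0}                  # one pixel of "young skin"
offset = track_offset((0, 0), (1, 0))    # the feature moved one pixel right
result = graft(frame, patch, (0, 0), offset)
```

A production version would of course warp the patch to follow the skin's deformation rather than just translate it, which is why good on-set reference stills in matching light mattered so much.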

Can you explain how you created the lioness?
Using Autodesk Maya, the lioness was generated to reflect a creature that looked starved and malnourished. We really wanted to present a lioness which was bordering on emaciated, to emphasise her need to hunt. To achieve this look we graded the lioness to have washed-out fur and deeply emphasised her bone structure around the rib cage and hips. As the hunt scene progresses the lioness is speared through the mouth by a CGI spear, which was also created using Maya.

What was the biggest challenge on this project?
Just the pure variety of visual effects needed for the show. We had many little sequences that each needed to be designed and have look development signed off. Working on a Bruckheimer film means everything needs to be bigger and better than usual, so we used to say “what would JB do?” and make ourselves give it that extra 100%!

What is your pipeline at Cinesite?
For the cities, firstly, after concept work we built a number of buildings in 3D that could be placed to recreate a town layout. These were all unique and could be manipulated individually. The basic town layout was left to a dedicated “town planner”, who would get the buildings in roughly the right position, then render them. These would all be tweaked on a shot by shot basis for general aesthetics, but adhered to the basic town structure.
These layouts would be passed to a lighter, who controlled displacements, ageing of the buildings as well as lighting situations. They then passed from lighter to compositor, where additional layering techniques were used, such as adding smoke, 2D props, to give the city an animated, “lived in” feel.

For the whips, we would initially try to use the hand gestures and moves from the plate as a starting point for animation. Often, we would need to warp or varispeed the plates to add dynamism to the shots. Once we had whip animation signed off, these would go through lighting to compositing, where “depthing”, glints, collision impacts would all be added to give a heightened sense of danger.

The smoke, which was used in the Death of Sharaman sequence, had its own effects pipeline. We had to body track the dying king and use his skin as a smoke emitter, as well as the cloak that was killing him. This smoke followed Sharaman around and appeared to be emanating from him.

How many shots have you done and what was the size of your team?
285 of our shots made the final film, but we produced 320 in total. The team size was 60 artists.

Is there a shot or a sequence that prevented you from sleeping?
The lion hunt weighed heavily on my shoulders, as I had convinced Tom we could do a better job with a CG lion than the real lioness. She was a fat and contented animal who really didn’t want to roar, so we replaced all the real footage with a hungrier, wilder animal to give the sequence the edge it needed.

What will you take away from this experience?
That lighting a scene is the key; adding real atmospherics over the top makes it photo-real.

What is your next project?
I am currently supervising Cinesite’s work on JOHN CARTER OF MARS, for Disney, which is due out at the cinema in 2012.

What are the four films that gave you the passion for cinema?
ERASERHEAD, David Lynch
LUXO JR, John Lasseter
DIMENSIONS OF DIALOGUE, Jan Svankmajer
BLADE RUNNER, Ridley Scott

A big thanks for your time.

// WANT TO KNOW MORE?
Cinesite: PRINCE OF PERSIA dedicated page on Cinesite website.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Michael Bruce Ellis – VFX Supervisor – Double Negative

Michael Bruce Ellis has worked for over 10 years at Double Negative. He began in the studio’s roto department, then quickly rose to become visual effects supervisor on movies such as WORLD TRADE CENTER and CLOVERFIELD. He recently completed the visual effects of PRINCE OF PERSIA.

What is your background?
I began my career as a graphic designer in TV, working on Channel Identities, Promos and Title Sequences. I switched career in 1999 to join Double Negative’s 2D department as a Roto Artist. Apart from a short stint at Mill Film to work on one of the HARRY POTTER movies, I’ve been at Dneg ever since.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
Mike Newell is a great director who is very focused on storytelling and the actors’ performances; he’s not so concerned with the minutiae of visual effects. Tom Wood had a great deal of input in coming up with creative solutions, and we had a lot of scope to try out ideas and concepts, although rule number one was always that the storytelling is crucial and cannot be obscured by the images, however beautiful!

What did Double Negative do on this show?
Dneg were asked to work on 4 main scenes in the movie, which involve the « magical » aspects of the story: the three rewinding scenes when the dagger of time is activated, and the climactic Sandglass end sequence.

We had around 200 shots, which took us 18 months to complete.

Can you tell us about the visual design of the slow motion effect?
Early on in the project we’d discussed creating a very photographic, open-shutter look for the “rewind” effect. Tom Wood had given us reference on long exposure photography, in which a moving subject creates a long smear effect as it moves through frame. This had been done before with static objects frozen in time, using an array of several stills cameras with long exposures, which were then cut together to make a consecutive sequence. But this gives the appearance of a camera moving around a frozen object; we wanted the camera moving around a moving human form that had a frozen long exposure. This, as far as we knew, had not been done before, so we needed a new technique in order to achieve it. Lead TD Chris Lawrence began exploring a technique called Event Capture to see if it could help us achieve the look we wanted.

Can you explain to us what it is and what it can do?
We’d done some work previously on “Event Capture”. The QUANTUM OF SOLACE freefall sequence used the technique, then we developed it further for PRINCE OF PERSIA; it allowed us to achieve something that couldn’t be done any other way.

This is a technique which records a live scene using multiple cameras, then reconstructs the entire scene in 3D, allowing us to create new camera moves, slip timing of the actors, change lighting, reconstruct the environment and pretty much mess around with whatever we wanted.

The technique works by shooting the action with an array of locked cameras set in roughly the path that you plan your final camera to move along. We ultimately used a maximum of 9 cameras at a time. Precise calibration of camera positions, lens data and set details allows us to combine all 9 cameras to reconstruct a 3D scene which has original moving photographic textures.

As our new 3D camera moved around the scene, we transitioned between each of our 9 cameras to give the most appropriate texture. One problem we found with this technique is that, as our photographic textures are derived from locked camera positions, specular highlights tend to jump over an image rather than smoothly roll over a surface as they do in real photography. We had to correct this by manually painting out such problems.

The great advantage of this technique was that it answered all of our technical requirements while giving us great creative freedom. With some restrictions based on texture coverage, we could essentially redesign live action shots after they’d been shot. The camera is independent from the action. A camera move can be created after the shot has been filmed, actors’ timing can be slipped and they can be manipulated to break them apart or change them as if they were conventional 3D.
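The camera-transition step described above resembles view-dependent texture mapping: weight each locked source camera's texture by how closely its viewing direction matches the virtual camera's, and cross-fade as the virtual camera moves. The sketch below is a generic illustration of that idea under assumed geometry, not Dneg's actual implementation.

```python
import math

def blend_weights(virtual_dir, camera_dirs, power=4.0):
    # weight each locked source camera by how closely its viewing
    # direction matches the virtual camera's, so textures come mostly
    # from the nearest camera and cross-fade as the virtual camera moves
    raw = []
    for d in camera_dirs:
        cos = sum(a * b for a, b in zip(virtual_dir, d))
        raw.append(max(cos, 0.0) ** power)
    total = sum(raw)
    return [w / total for w in raw] if total else raw

def arc_camera(i, n=9):
    # hypothetical half-circle arc of n locked cameras around the subject
    a = math.pi * i / (n - 1)
    return (math.cos(a), 0.0, math.sin(a))

cams = [arc_camera(i) for i in range(9)]
# virtual camera sitting exactly on camera 4: weight concentrates there
weights = blend_weights(arc_camera(4), cams)
```

Because the textures come from locked positions, view-dependent cues like specular highlights cannot be reproduced by blending alone, which is exactly the artifact the interview says had to be painted out by hand.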

Can you explain to us how the shooting went for the slow motion sequences?
Each rewind scene is constructed so that we see a regular piece of action leading up to the dagger being pressed; the action then rewinds to an earlier part of the scene, then plays forward again with an alternative outcome.

The rewind effects work had to fit seamlessly into a regular forward action scene and we’d need the actors to repeat everything as closely as possible. It seemed like the most logical thing to do was to shoot the rewinds straight after the forward action as it appears in the movie. The actors still had the moves and performances fresh in their minds and we could shoot with the same sets and keep the lighting set-ups as similar as possible.

The technique we were employing required clean, crisp photography with a minimum of motion blur but a maximum depth of field. This gave us a better result when projecting our 9 cameras onto 3D geometry and was valuable in creating convincing new camera moves as it meant that we could apply our own motion blur and depth of field.

This was a problem because we’d need a lot of light hitting our subjects and all of our rewind scenes occurred at night or indoors. John Seale and our VFX DoP Peter Talbot came up with a way of boosting the scene lighting universally by 2 to 4 stops. It meant that the rewinds could keep the same lighting feel with shadows and highlights matching the forward action but give us the best possible images to work with.

So it was really the transition in the shoot schedule from forward action to rewind action that took the longest time to set up, because we had to accommodate this boost to the lighting. As soon as we had the first rewind set-up in the can, the others followed much more quickly. We'd carefully planned the position of each camera and marked up the set accordingly, so we were quickly able to set up our cameras for each shot.

Did you create digital doubles for these sequences?
Yes but not in the conventional sense. Event capture gave us a digital human form for each of the actors. But the process is not perfect and we still had to do a lot of body tracking. We ended up with grayscale Digi-doubles onto which we projected moving textures from our 9 cameras, giving us real photographic textures on very accurate 3D human forms.

Can you tell us how you created those beautiful particles?
Our effects 3D supervisor Justin Martin and 3D leads Eugenie Von Tunzelmann, Adrian Thompson and Christoph Ammann developed a look and technical approach for the particles. All of our sequences revolved around the magic sand, and we wanted the viewer to feel that they were seeing the same substance in the intimate rewind shots as in the wide sandglass chamber shots. Once we'd created a 3D figure with full photographic textures and a new camera move, we were free to try numerous creative ideas for both the rewind trail effect and the ghost particle effect. We did some work streamlining our particle set-up, running tests that pushed the number of particles we could render up to a billion. In the end we found that we didn't need that many, settling on about 30 million particles on the ghost body and 200 million airborne particles. We found that we could create very organic, magical particles using Squirt (our own fluid sim) and Houdini.
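Production fluid sims like Squirt and Houdini operate at vastly larger scale, but the core operation, advecting particles through a velocity field frame by frame, can be sketched in a few lines. The swirl field below is made up for illustration and is not the production setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# A small cloud of particles; the production shots used tens of millions.
pos = rng.uniform(-1.0, 1.0, size=(10_000, 3))

def velocity(p):
    """A simple divergence-free swirl around the y axis plus a gentle lift,
    standing in for a real fluid solver's velocity field."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([-z, 0.05 * np.ones_like(y), x], axis=1)

dt = 1.0 / 24.0        # one film frame at 24 fps
for _ in range(24):    # advect one second of motion (forward Euler)
    pos += velocity(pos) * dt
```

Running the same advection with time reversed is one simple way to get particles that appear to flow backwards, in the spirit of a rewind effect.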

How did you work with Framestore (which made the snakes) for the Oasis sequence?
Maya scene files and rendered elements were passed back and forth between the facilities. In some of the shared shots it proved more efficient for Dneg to take the shot to final; in others it was Framestore. We just kept an open dialogue between facilities to keep work on shared shots flowing as smoothly as possible. For the rewind shots it made sense for Dneg to create a camera move, then pass that over to Framestore to render a snake, which we'd then get back both comped and as an element in order to create rewind trails.

The final scene is really very complex. How did you achieve it?
The brief for the sandglass scene at the end of the movie was to create a digital environment that felt 100% real yet had an enormous light-emitting crystal tower in the centre, filled with moving, twisting sand. The sand at times needed to present images from the past inside the crystal. And the chamber had to be collapsing all around the actors. If that wasn't enough, the sand inside the crystal also had to escape and start destroying everything, barreling into walls and knocking down stalactites.

We knew we could create the underground rock cavern but the crystal was a bigger challenge.
What does a 300 foot crystal filled with light emitting sand look like?
We looked for reference but there really isn't anything; it wasn't ice. The closest thing we found was some giant underground crystals, but they just looked like Photoshopped images.

In the end we went out and bought a load of crystals from a local New Age store. We shone lasers through them, lit them with different lights and played around with them, copying what it was that made them feel like crystal: the refraction, the flaws and so on. Peter Bebb, Maxx Leong and Becky Graham used this and built an enormous version of it.

The biggest challenge of this sequence was achieving the scale, the crystal is such a crazy object. We went to a quarry in the UK and took lots of photos. We reconstructed the rock surface in 3D and projected textures onto the geometry, so that it became a very real rock surface built to a real scale.

Another thing that helped us with scale was adding all the falling material. Christoph Ammann and Mark Hodgkins spent a lot of time working on the way that rocks would fall from the roof and break up and how they would drag dust and debris with them. Getting the speed of falling material right really helped with our scale, adding atmosphere also helped, we added floating dust particles which are barely readable but which kind of subconsciously add a feeling of space and distance.

What was the biggest challenge on this show?
Our most challenging role on PRINCE OF PERSIA was to create the Dagger Rewind effect.

Our brief for the Dagger effect consisted of three main requirements, which were needed to tell this complex story point.

The person who activated the dagger needed to detach from the world so that they could view themselves and everything around them rewinding. We as the viewers needed to detach with them so that we could see the rewind too. We needed a way of treating the detached figure to tell the viewer that he is no longer part of our world. We called this the “ghost” effect.

The world that the ghost sees rewinding needed to have a signifying effect which would show us that it was the magical dagger that was rewinding time. When the dagger is activated we needed to see people moving in reverse in a magical way. We called this the “rewind” effect.

The dagger needed to change the whole environment in some way when time is rewinding so that we could clearly tell the difference between rewinding shots and regular forward action shots.

So we needed an approach to the Dagger Rewind effect which could achieve all of these things. The same actor would need to appear twice in many shots moving both forward and in reverse simultaneously with 2 distinctly different looks, the “ghost” and the “rewind” effects. We’d need to freeze and rewind some aspects of the same shots. We’d need to relight scenes.

On top of all of this, we knew that we'd need an approach that was very flexible. We knew that the choreography of each shot was going to be very complicated, with inevitable changes to actors' positions or camera moves needed to help convey the story as clearly as possible. Who was standing where, which direction they were moving in, whether they were in regular time, frozen or in reverse: these were all questions that could be answered at the previs stage, but we knew that with the addition of the looks and effects we wanted, this choreography would probably need to change a little after shooting.

How many shots have you done and what was the size of your team?
200 shots with a small team, which ramped up to around 100 artists at our busiest time.

Was there anything in particular that prevented you from sleeping?
The most difficult shot occurs when Dastan activates the dagger for the first time, bursting out of his body as a particle ghost and watching himself rewind in time. The shot travels from a mid shot to an extreme close-up, then back out to a wide. We had to design everything about the shot. It's fully CG, and we get very close to Dastan's face, which had to be completely recognizable. It's also absolutely covered in particles, which the camera passes through. Editorially we had to tell a crucially important story point, creatively it had to look magnificent, and it was a huge technical challenge. Yep… a little lost sleep on that one!

What are the four films that gave you the passion for cinema?
JAWS – I love everything about it…particularly the rubber shark.
ALIEN – Giger and Scott made this movie feel like it came from another planet!
BLUE VELVET – Lynch really gets under the skin
THREE COLORS trilogy – beautiful movies
SOME LIKE IT HOT – can’t stop at 4

A big thanks for your time.

// WANT TO KNOW MORE?
Double Negative: PRINCE OF PERSIA dedicated page on Double Negative website.

© Vincent Frei – The Art of VFX – 2010

THE CRAZIES: Josh Comen and Tim Carras – VFX producer and VFX supervisor – Comen VFX

Founded in 2006 by Josh Comen, Comen VFX has participated in many projects, including TV series like THE SOPRANOS and WEEDS and movies such as A PERFECT GETAWAY, NEXT, RISE and THE SPY NEXT DOOR.

In the following interview, they talk about their work on THE CRAZIES.

JC= Josh Comen, VFX Producer // TC= Tim Carras, VFX Supervisor

What is your background?
JC: For the past eight years I have worked as a visual effects producer on feature films, television, music videos, and commercials. Comen VFX was founded in 2006. It is part of Picture Lock Media, parent company to Comen VFX and Picture Lock Post.

TC: I first became involved in visual effects at the University of Southern California, where a group of us organized a student-run VFX studio. I subsequently worked as a freelance compositor, designer and effects supervisor before joining Comen VFX as visual effects supervisor in 2007.

Can you explain to us the creation of Comen VFX?
JC: I created Comen VFX for the sole purpose of having a company that could quickly adapt to the needs of both the director and the production. Visual Effects and the methods to complete them on budget and on time are always changing. I thrive on navigating those waves, and charting our course!

What kind of effects have you made on this movie?
TC: We did a range of shots on THE CRAZIES, including compositing, set extensions, bullet hits, and paint work. In addition, we designed and composited a graphical user interface for the Sheriff’s computer.

What were the challenges on this show?
TC: Designing the computer user interface was the biggest creative challenge we faced on THE CRAZIES. It had to be visually simple, but efficient at conveying specific information to the audience at a glance. It had to feel organic, but we couldn’t borrow any design elements from familiar Mac or PC systems. It’s amazing how much of the visual language of computing comes from the two main operating systems in use today, and how much R&D is required to generate original artwork that feels natural. And of course, we had to create all that on a tight schedule and with finite resources.

How was your collaboration with the director?
JC: We would receive feedback from the director directly and via editorial.

TC: Breck Eisner has a keen sense of what he wants in his movie, but he also understands the utility of visual reference material. Even for shots that might be taken for granted in another context, Breck was always interested in seeing sample images we'd prepare to help communicate the look of the shot, or in sending samples of his own. Having pictures to look at allowed us to communicate in a much more visual way than words alone.

What is your software pipeline?
TC: This show occurred while we were in the middle of transitioning from Shake to Nuke, so the compositing was split about half and half between those platforms. We also used Photoshop for computer UI design, and Motion for particle systems.

What did you keep from this experience?
TC: Good communication is everything. When everyone involved is working toward the same goal, things tend to fall into place organically.

What is your next project?
We are currently working on THE FIGHTER, HOLLYWOOD DON’T SURF and YOUNG AMERICANS.

What are the 4 movies that gave you the passion for cinema?

JC: There are certainly many movies that have given me a passion for cinema. At the top of that list for me would be RISKY BUSINESS because I am all for the messages it gives: Life is about taking risks, you gotta risk big to win big. I thrive on taking risks!

TC: I think THE MATRIX and DARK CITY were the first films in the digital age that really got me thinking about visual effects as a tool that could really change the way we tell stories. Peter Jackson’s LORD OF THE RINGS trilogy extended that concept into bigger and brighter environments and characters. But setting VFX aside, what grabs my attention is films like THE SHAWSHANK REDEMPTION, where a fascinating story is told in a way that is unique to cinema.

Thanks for your time.

// WANT TO KNOW MORE?
Comen VFX: Official website of Comen VFX.

© Vincent Frei – The Art of VFX – 2010

ROBIN HOOD: Richard Stammers – VFX Supervisor – The Moving Picture Company

After starting his career in 1992 at Animal Logic, Richard Stammers joined MPC in 1995. He has participated in many of the studio's projects and worked as VFX supervisor on movies such as WIMBLEDON, THE DA VINCI CODE and its sequel ANGELS & DEMONS, and ELIZABETH: THE GOLDEN AGE.

What is your background?
I trained as a graphic designer in 1991, but my final year at university was spent predominantly doing traditional animation. My first job in the industry was at Animal Logic in 1992, where I was employed as a junior designer creating television graphics and animation. Whilst I was able to learn the VFX tools of the trade in Australia, I kept my design roots, and upon returning to London I split my time between VFX compositing and designing/directing TV graphics and commercials. I joined MPC in 1995 to focus entirely on creating visual effects, initially in commercials and later making the transition to features in 2002.

What did MPC do on this movie?
One of MPC’s main challenges was to create the invading French Armada and the ensuing battle with the English army. A CG fleet of 200 ships and 6000 soldiers was added to the 8 practical boats and 500 extras used in principal photography. MPC used Alice, its proprietary crowd generation software, to simulate the rowing and disembarkation of French soldiers and horses, with all water interactions being generated using Flowline software. The defending English archers and cavalry were also replicated with CG Alice-generated clips and animated digital doubles. MPC relied predominantly on its existing motion capture library for much of ROBIN HOOD, but a special mo-cap shoot was organised to gather additional motion clips of rowing and disembarking troops and horses.

MPC’s digital environment work was centred on two main locations: London and the beach setting for the French invasion and final battle. A combination of matte painting and CG projections was used to recreate the medieval city, which featured the Tower of London and included the original St. Paul’s Cathedral and old London Bridge under construction in the city beyond. The production’s football-field-sized set provided the starting point for MPC to extend vertically and laterally, and in post production alternate digital extensions were also created to reuse the set three times as different castle locations. Each extension was a montage of existing castles chosen by Ridley Scott and production designer Arthur Max. For the beach environment, MPC had to create the cliffs that surround the location, which were added to 75 shots. Once approved in concept, the cliff geometry was modelled using Maya, and interchangeable cliff textures were projected depending on the lighting conditions.

MPC was also responsible for creating the arrows for various sequences in the film. Practical blunt arrows were used in production wherever possible, but most shots presented safety issues, so digital arrows were animated instead. Arrows were added to over 200 shots, with 90% of these being handled by the compositing team using Shake and Nuke. MPC developed proprietary 2D and 3D arrow animation tools to assist with the volume of arrows required, which included automatically generating the correct trajectory and speed, and controls for oscillation on impact.

How was your collaboration with Ridley Scott?
Very good. He’s always very clear and concise about what he wants and also takes an interest in the financial implications of his requirements, and will spend the VFX budget where he feels it’s most suited. He usually would brief me with a quick sketch and would often follow up with more detail by drawing over a printout of a shot. I’d get my team at MPC to interpret this into the 3D realm as simple untextured Maya geometry over the plate, and re-present this to Ridley for approval. Where there was any ambiguity over a shot’s requirements I’d present a few options to choose from, so we had a clear brief before starting any detailed work on a VFX shot.

Can you explain to us the shooting of the French Armada and its landing?
The location for this shoot was a beach called Freshwater West in Pembrokeshire, Wales. The crew of up to 1000 people were there filming for 3 weeks, in order to capture enough footage for the 20 minutes of screen time the battle was edited to. Further time was scheduled at Pinewood Studios’ Paddock Tank and Underwater Stage to complete some of the shots that were considered impractical or too dangerous to achieve on location. The production were able to create 4 real working landing craft and 4 rowboats to represent the armada, with as many as 500 extras on some days. Ridley’s shooting style for this battle involved staging large-scale performances, each lasting 4-5 minutes, and getting as many cameras as possible covering the shots he needed. This would take some time to set up and rehearse, and then it would be frenetic for a few minutes whilst they shot. He’d do several takes then move on to the next key stage of the battle.

The shooting conditions were extremely difficult and varied, which caused great continuity problems. Changing light and weather created the usual inconsistencies, but the tide moved at 1 meter per minute, so the size of the beach was constantly fluctuating and the shooting crew had to be equally mobile, with all equipment on 4×4s or trailers. For the VFX crew this meant the 10 cameras Ridley was using were moving constantly, so wrangling all the camera data and tracking markers, essential for our matchmove department, was a huge task. We overcame much of this by capturing all camera locations with Leica Total Station surveying equipment, and later incorporated the data into a Maya scene with a LIDAR scan of the beach location. All cameras were armed with zoom lenses to deal with Ridley’s constant requests to reframe for particular compositions he wanted, and often we’d find takes that had been shot at half a dozen different focal lengths. Despite me reminding Ridley that we needed to avoid zooming during takes (because of the added complexity of the matchmove process), inevitably some of the shots later turned over to MPC were incredibly difficult to work with.

How did you create those shots and what was the part of CG in the plates?
During the end battle most of MPC’s work was supporting what was already present in the plates; in some cases the number of extras was sufficient and we’d only be adding a few boats into the background. But with 10 cameras filming and only 8 practical boats, most shots needed MPC’s digital armada, CG soldiers or environment work to augment the background. There were also a handful of wider shots where MPC created the entire invasion or battle, and much of the background landscape too. Each CG shot went through the same basic pipeline: first the film scans would go to the matchmove department for camera tracking and to the comp department for colour balancing to create a ‘neutral’ grade for consistent CG lighting.

The prep team would also handle any clean-up, such as marker or camera crew removal, at this stage. Once a Maya camera was available, the environment department would handle creating the cliff, and the layout team would place the armada, starting from a master boat formation with animation cycles that could be scaled or offset to suit the conditions of the sea. We’d usually go through a couple of rounds of refinement to make it work compositionally and in context to the cut. Once I had approved the boat layout, the crowd and layout teams set to work with our ALICE software to place all the soldiers in the boats and on the beach with the appropriate animation. At this stage we’d send a temp version to the editorial team to cut in, so Ridley and Pietro Scalia, the editor, had a chance to comment. By this point we’d know the CG content of each shot and could accurately identify the rotoscoping requirements to create all the mattes necessary to place the CG behind the foreground live action. Whilst we waited for feedback on our layouts we continued into lighting and rendering, and got the effects team working on the water interactions for the boats and crowds. Once we’d established a few key shots this process worked well. There was generally little or no feedback from Ridley, so we could progress into comp quickly and get the shots looking more final.

Can you explain the creation of a crowd shot with your software Alice?
The first stage of preparing for a large crowd show like ROBIN HOOD is to identify the motions that are going to be required. ALICE has a very sophisticated underlying motion synthesis engine that can take multiple inputs from any combination of motion capture clips, keyframe animation cycles and physics simulations, which it can manipulate to give us the resulting simulations we see on screen. This gives us a great deal of freedom when deciding how to tackle a show.

For ROBIN HOOD we relied predominantly on MPC’s existing mo-cap library but extended it with new mo-cap data captured over a 2-day shoot, specifically targeted towards the disembarkation of soldiers and mounted cavalry, along with the rowing motions for the boat crews in each of the different boats. Once all the new motions arrived at MPC they were processed into the existing library through our motion capture pipeline, where our crowd team started to create the motion clip setups and motion trees which would drive the agents for the whole show.

With ALICE being fully proprietary, it allows us to quickly write anything from a new behaviour, such as inheriting motion from the boat the crowd agent is occupying, to simple tools that automate and simplify tasks for other departments. For the first time, ALICE was used by our layout department, who took on the challenge of populating the whole armada.

The crowd team produced a large number of different caches for each of the rowing motions and disembarkations required for the various boats. We then wrote a simple interface which the layout team could use to rapidly set up, randomize, change and offset the caches to populate all of the boats in a few simple steps.
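The kind of layout interface described might look something like this sketch, which assigns each boat's crew a random cache variant and frame offset so no two boats row in lockstep. The cache names, crew sizes and parameters are invented for illustration and are not ALICE's actual API.

```python
import random

# Hypothetical pre-simulated rowing caches (names are illustrative).
ROW_CACHES = ["row_slow_a", "row_slow_b", "row_fast_a", "row_fast_b"]

def populate_boat(boat_id, crew_size, max_offset=12, seed=None):
    """Assign each agent in a boat a randomized cache and frame offset,
    so repeated animation cycles don't read as synchronized."""
    rng = random.Random(seed if seed is not None else boat_id)
    crew = []
    for agent in range(crew_size):
        crew.append({
            "agent": f"boat{boat_id}_agent{agent}",
            "cache": rng.choice(ROW_CACHES),           # randomize the variant
            "frame_offset": rng.randrange(max_offset),  # desynchronize cycles
        })
    return crew

# Populate a 200-boat armada in one pass.
armada = [populate_boat(b, crew_size=8) for b in range(200)]
```

Seeding per boat keeps the layout deterministic between runs, which matters when a shot has to be re-rendered without the crowd visibly re-shuffling.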

Once the first pass had gone through layout, the crowd team would take over any of the shots which required more complex simulations to top up the action. This generally involved tweaking or adding to the disembarking to make it feel more chaotic, ranging from people being dynamically hit by arrows to stumbling through the water, whilst providing the data required for the FX team to add in the interactions.

Once I was happy with the combined work of crowd and layout, the next stage was to do the cloth simulations for all of the agents. Most agents only required the looser cloth of the lower body, and any flags being carried, to be simulated; this was handled by ALICE’s inbuilt cloth solver before the resulting caches automatically flowed into the FX and lighting departments.

There are a very large number of arrows in this movie. How did you manage this?
Knowing that we had a large number of arrow shots on the show meant we needed an efficient process to deal with them. I’d had great success on a past show, WIMBLEDON (2004), animating tennis balls to mimed rallies, much of which was achieved as a 2D-only compositing solution in Shake. I felt that we could do the same on ROBIN HOOD, as the trajectories were similar but even simpler. One of the show’s compositing leads, Axel Bonami, took the process further by developing a series of Shake macros which only required the artists to place the start and end positions of an arrow. The macro would use a still of a real arrow at the most appropriate perspective for the shot and then automate the animation process. He added further controls for impact oscillation so the artists could dial this in if necessary. Arrows were added to over 200 shots, with 90% of these being handled by the compositing team using Shake and Nuke. MPC also developed proprietary 3D arrow animation tools to assist with large volumes of arrows where the 2D solution was unproductive. This was essentially a Maya particle system, but it could be tied into the ALICE pipeline to allow crowd agents to fire arrows or be killed by them.
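A 2D screen-space arrow tool of the sort described can be approximated as a shallow parabolic arc between artist-placed start and end positions, plus a damped oscillation after impact. Everything below (frame counts, arc height, wobble parameters) is invented for illustration and is not the actual Shake macro.

```python
import math

def arrow_position(t, start, end, flight_frames=12, arc_height=20.0):
    """Screen-space (x, y) of the arrow at frame t; y grows downward,
    so the arc term is subtracted to lift the arrow mid-flight."""
    if t >= flight_frames:
        return end  # stuck in the target; wobble is handled separately
    u = t / flight_frames
    x = start[0] + (end[0] - start[0]) * u
    y = start[1] + (end[1] - start[1]) * u - arc_height * 4 * u * (1 - u)
    return (x, y)

def impact_wobble(t, flight_frames=12, amplitude=8.0, freq=1.2, damping=0.25):
    """Damped angular oscillation (degrees) applied after the arrow hits."""
    dt = t - flight_frames
    if dt < 0:
        return 0.0
    return amplitude * math.exp(-damping * dt) * math.sin(2 * math.pi * freq * dt)

# Animate one arrow over 16 frames.
path = [arrow_position(t, (0.0, 100.0), (300.0, 120.0)) for t in range(16)]
```

The `4 * u * (1 - u)` term is a unit parabola peaking at mid-flight, which is why only a start, an end and a height are needed per arrow; that economy of inputs is what made the per-shot workload manageable.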

How long have you been working on this project?
I started in March 2009, and we delivered our final shot on 12th April 2010, so around 13 months in all, which seems to be about the minimum these days for VFX supervising a show right the way through.

What was the biggest challenge?
There’s a sequence of shots where the merry men return to England in King Richard’s ship. The production weren’t able to shoot this boat at sea, and Ridley wanted it to be windy and rough, so the chances of shooting the right kind of sea plate were slim. It was storyboarded as one wide shot only, so we looked into stock footage to use, but Ridley wasn’t happy with any of the options. Instead he turned to a previous film of his, WHITE SQUALL, and cut in a sequence of shots from there, which featured a modern sailing ship and included insert shots of the sails. The 5 shots we created involved replacing this ship with a medieval CG replacement. There were no similarities between the 2 styles of boat, and furthermore it was so close to camera that we had to completely rebuild our asset to a higher level of detail, and populate the deck with CG sailors, horses and windy canopies. We had no camera information for the plates, and they were ‘scope anamorphic’, so the matchmoves were tricky too. The finals were beautiful – a real testament to the teams that made it all work – a great example of just dealing with whatever gets thrown our way!

Were there shots that made you lose your hair?
Well, things never got that bad, but there were a few shots that I did worry over. One was an arrow POV shot that represents the moment at the end of the film when Robin Hood fires a deadly blow at Godfrey, Mark Strong’s character, as he escapes the battle on horseback. Pietro felt that this was an important moment in the film that mirrored a similar moment nearer the beginning, when Robin wounds Godfrey in a similar attempt to kill him. There were many discussions about how we could shoot it, but no clear solution worked with the limitations of the beach location. MPC created a previs of the shot, as it was important to visualise the key elements required and how we could break the shot down into achievable chunks. The first half of the shot sees mostly sky and digital environments that we were already creating, but the second half flies right into the back of Godfrey’s neck whilst he gallops along the shoreline. As the shot took place in the shallow waters of the beach, this was something I did not want to attempt as a full CG shot, because of the complexity of recreating the sea. I opted to shoot a moving plate of the beach and sea to match the previs as best as possible, and separately shoot Mark Strong’s stunt double as a bluescreen element, so we could manipulate it to work for the shot.

The practical and cost-effective solution was to shoot the background plate with a miniature helicopter and the foreground stunt man riding a partial mechanical horse, with MPC creating a full replacement CG horse. 20 mph winds hampered the plate shoot and left us with only a few usable takes requiring significant stabilisation, and the camera’s proximity to Godfrey’s neck required a slow Super Technocrane move to avoid injuring him. As we had to speed-ramp the shot much faster in post, we compensated by having the stunt man perform his riding actions in slow motion. It was an uncomfortable set of elements to work with, and required a lot of manipulation to piece together. The final solution involved creating a BG almost entirely in CG but retaining the live action sea, which was camera-projected back through our previs camera. Godfrey’s element was successfully pinned to a hero CG galloping horse and we started getting something that worked. But the nature of a smooth arrow trajectory made the shot look so clean and out of context with the surrounding shots, and this is where most of my concern lay. It was always going to be delivered late in the schedule, the last week in fact, and there would be no time to re-conceive the shot in another way if Ridley didn’t like it. So we set about adding as many of the attributes of the surrounding shots as we could. We changed the sky to something less pretty, added camera shake and layers of smoke to pass through, dirtied up the beach by matte painting extra detail like clumps of seaweed, and added more depth hazing overall. And with the shot carefully graded to match the shot it cut to, we had success. It took it far enough away from the feel of the previs, and worked really well in the cut – it’s a great moment in the film.

What will you remember about this experience?
The shot exceeded my expectations, which is always great. As a VFX supervisor you have to be a jack-of-all-trades, but you work with teams of artists who are masters at their disciplines, so you take for granted high expectations – exceeding them is always a bonus.

What is your next project?
Well, nothing confirmed. I’m busy at MPC pitching on possible new shows, but nothing I can talk about yet.

What are the four films that have given you the passion for cinema?
As a student, sequences created by Ray Harryhausen and Terry Gilliam are what inspired me to take up animation. TERMINATOR 2 and JURASSIC PARK both had jaw-dropping moments, which to me pushed the boundaries of VFX at a time when I was quite junior to the industry. They inspired me to do better. I always loved David Lynch’s DUNE and Ridley’s ALIEN, I’m happy to watch these again and again – few films have that effect on me these days.

Thanks so much for your time.

DETAILED SHOT BREAKDOWN

Robin and Merry men leaving the Tower of London.
The foreground live action plate was shot on the backlot of Shepperton Studios. MPC created a digital matte painting of the castle walls, the Tower and the river. The element used for the river was taken from a plate shot at Virginia Water, Surrey. Ridley wanted the town of London to be full of life and the river bank to be busy like a market, so MPC bolstered the limited number of extras with around 200 CG people in the town, CG guards on the castle and cloned live action boats on the river. In the foreground, additional huts were created to increase the housing density, and multiple layers of smoke were added. When reviewing the final version of this shot at MPC, Ridley said he liked it so much he wanted to live there! This is one of the London environment shots MPC created for ROBIN HOOD, alongside 14 others.

Robin and Merry arriving at the Tower of London in King Richard’s ship.
The live action helicopter plate was shot on location at a lake in Virginia Water, Surrey. The aerial unit used a Panavision Genesis camera for their photography. MPC created a CG environment where much of the original backplate was replaced with the Tower, the surrounding city of London and the landscapes beyond. The design of the Tower and its immediate surroundings was a collaboration between the Visual Effects and Art Departments, with the final layout and orientation coming from meetings with Ridley Scott, production designer Arthur Max and visual effects supervisor Richard Stammers. Whilst quite a substantial set was constructed as a riverside entrance to the Tower, the jetty, wall and archways occupied only a small part of the plate in this case, but provided MPC with the ‘anchor point’ for their digital extensions. Environment lead Vlad Holst built the city in Maya with basic geometry to represent all the key features. This was presented to Ridley for comments and some adjustments were made before all the matte painted projections were started. The final DMPs created by matte painter Olivier Pron extended the city to the horizon and incorporated the original stone London Bridge under construction, and old St Paul’s Cathedral in the distance. The lake was extended to become a river as a rendered CG element, in order to incorporate all the reflections of the new digital environment. The banks were populated with CG boats and CG crowds gathered to witness what they believe to be King Richard’s return from the crusades. King Richard’s ship and some of the foreground rowboats were in the original plate, but these were added to with 2D replications, and the motor wake of Richard’s ship was removed.

The combined armies of King Phillip and the Northern Barons approach the beach where the French Armada has begun landing.
The live action helicopter plate was shot on location at Freshwater West, Pembrokeshire in Wales and was captured using a Panavision Genesis camera.

This shot was turned over to MPC early in the schedule and became a key development shot, used to test the look of our CG assets. It was used to conceptualise the digital environment work, which required the creation of cliffs surrounding this location – a necessary story point to create a tactical advantage for the English archers. The shot was also used to determine the layout and number of boats in the French Armada and the number of soldiers on the beach. It paved the way for over 150 other shots that required views of the cliffs or the French Armada.

For the design of the cliffs, MPC’s environment lead Vlad Holst created some Photoshop concepts for Ridley. Initially these were based on the white chalk cliffs of Dover, as this was the scripted location of the French invasion. The final design, however, was based on the practical necessity of having a real cliff location for non-VFX shots in close proximity to the main beach location in Wales. These cliffs, whilst quite different from the concepts, were a good geological match to the beach, and ultimately provided a better blend to the sand dunes behind the beach. Textures of the cliffs captured by the aerial unit were tiled, graded and projected onto simple Maya geometry that blended into a Lidar scan of the beach location. The cliff geometry went through a number of shape variations for Ridley’s comments, with the approved version including a wide access path to the beach for the bulk of the cavalry and a narrow gorge from which Marion could join the battle later.
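Projecting tiled, graded photographs onto simple stand-in geometry avoids hand-authored UVs. As a minimal illustration of the idea (not MPC’s actual tools – the function and its parameters are hypothetical), a planar projection derives tiled UV coordinates for any point on the proxy geometry directly from world space:

```python
# Illustrative sketch of planar texture projection onto proxy geometry.
# A point's UVs come from its position relative to a projection plane,
# so no explicit UV layout is needed on the model itself.

def planar_uv(point, origin, u_axis, v_axis, tile=1.0):
    """Project a 3D point onto the plane spanned by u_axis/v_axis
    and return tiled UV coordinates wrapped into [0, 1)."""
    rel = [p - o for p, o in zip(point, origin)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    u = dot(rel, u_axis) * tile  # distance along the plane's U direction
    v = dot(rel, v_axis) * tile  # distance along the plane's V direction
    return (u % 1.0, v % 1.0)    # wrap so the texture tiles

# Example: a point 2.5 units along U and 3.0 along V, tiled at half scale.
print(planar_uv((2.5, 3.0, 0.0), (0, 0, 0), (1, 0, 0), (0, 1, 0), tile=0.5))
```

In a production renderer the same lookup would happen per shading sample, with the graded cliff photograph bound as the tiled texture.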

Ridley wanted the end battle to feel as though it involved around 2,000 soldiers on each side. The French Armada was made up of 200 CG boats, and this shot featured about half the visible fleet and 1,500 disembarked French soldiers. The practical photography provided a good guide for scale and lighting, with four landing craft, four rowboats, over a hundred extras on the beach and 25 cavalry in the foreground. Ultimately much of this was replaced with CG when the beach was widened in order to maintain continuity of the tide position throughout the sequence. Boat layout and animation were handled in two stages, divided by a period in which matchmove artists roto-animated the waves in the backplate. This allowed detailed animation and interaction with the ocean surface to be achieved.

MPC’s crowd simulation software ‘Alice’ provided digital artists with the tools to handle the number of CG soldiers required. Alice utilised MPC’s motion capture library for most of the animations, but specific actions like rowing and disembarking soldiers and horses were realised through a dedicated mo-cap shoot. Digital effects elements such as wakes and splashes were created for the boats and CG soldiers in the water using pre-cached Flowline simulations, which were automatically placed with each Alice crowd agent at render time.
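The trick described above – attaching pre-cached simulations to crowd agents at render time rather than simulating per agent – can be sketched as follows. This is a hypothetical illustration, not MPC’s Alice API; all names and data shapes are invented:

```python
# Illustrative sketch: each crowd agent picks one of a small library of
# pre-cached FX elements (e.g. splashes), which is simply offset to the
# agent's position at render time instead of being simulated per agent.

import random

def attach_cached_fx(agents, fx_caches, seed=0):
    """Assign a cached FX element to each agent and move it into place."""
    rng = random.Random(seed)  # seeded so placement is repeatable per render
    placed = []
    for agent in agents:
        cache = rng.choice(fx_caches)
        ax, ay, az = agent["pos"]
        # translate every cached point by the agent's world position
        points = [(x + ax, y + ay, z + az) for (x, y, z) in cache["points"]]
        placed.append({"agent": agent["id"], "cache": cache["name"], "points": points})
    return placed
```

Reusing a handful of cached simulations across thousands of agents keeps render-time cost low while still giving each soldier plausible water interaction.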

The small number of foreground cavalry were multiplied with the addition of full CG riders. Safety regulations prevented the helicopter’s camera from getting close enough to the live action cavalry, so Ridley requested that MPC add the additional CG characters right into the foreground and under the camera. For this task, ‘Alice’ crowd agents, which are inherently suited to being small in frame, were promoted to a high level of detail. Additional modelling, texturing, animation, cloth and fur simulations were required to provide the extra details and nuances for what became almost full frame CG renders. The effects team again provided interaction elements for the horses’ hooves, in the form of mud clumps, grass and dust, augmented further in the final composite with additional live action dust elements.

// WANT TO KNOW MORE?
The Moving Picture Company: Dedicated ROBIN HOOD page on MPC’s website.

© Vincent Frei – The Art of VFX – 2010

IRON MAN 2: Ged Wright – VFX Supervisor – Double Negative

After working at Mill Film on HARRY POTTER AND THE CHAMBER OF SECRETS, Ged Wright joined Double Negative in 2002 and worked on HARRY POTTER AND THE GOBLET OF FIRE, HARRY POTTER AND THE ORDER OF THE PHOENIX and 10,000 BC. He has just finished overseeing IRON MAN 2.

What is your background?
I worked in Australia doing commercial work for a number of years before relocating to the U.K. in 2001. I joined Mill Film for HARRY POTTER AND THE CHAMBER OF SECRETS, then moved to Double Negative in 2002 and have been here ever since.

How was your collaboration with director Jon Favreau and production visual effects supervisor Janek Sirrs?
We worked closely with Janek throughout the project, making sure we gathered enough reference information and data whilst in Monaco and at Downey in L.A.
Jon Favreau became much more involved as we moved into the animation and postviz stage of the project and got into the beats and details of how to tell the story of the Monaco fight sequence.

Which sequences did Double Negative work on?
We were responsible for the Historic Grand Prix race in Monaco, which culminates in an on-track battle between Whiplash and Iron Man in his suitcase suit.

Can you tell us about the shooting of the Monaco sequence? Which elements were real and which were CG?
2nd unit photography took place in Monaco, without the actors or any of the art department cars. Initially Janek was looking to shoot at least some real race cars in Monaco, but the logistics of shooting on location proved too much to overcome.
Production was able to obtain permission to shut off areas of the racetrack, which in Monaco means functioning city streets, on the lead up to the race in the early hours of the morning.
A hotted-up Porsche with Vista cameras mounted at the front and rear was driven as quickly as possible through these areas, and the plates served as the basis for the in-car driving shots.
In addition to the work 2nd unit was doing, we shot 180° panoramas from either side of the track, about every 15 feet along the track, which provided reflection and plate reconstruction information.
All of the race cars are CG up until the point they are cut in half, which was handled practically in L.A. with CG enhancements.

How did you recreate Monaco in CG?
Most of Monaco was a combination of matte painting and reprojections using the photography we had taken; we ended up with around 7 TB of photographic data.
The fight area, which was built as a set in L.A., was also built digitally by us for when we could not use the photography or needed to extend it.

Can you tell us more about the race cars cut in half?
The cars were rigged by SFX to cut in certain ways and tumble down the track. We added the whip contact effects, with Monaco and the crowd behind them.

How did you recreate the lighting of the shoot?
Our lighting pipeline is HDR based, we shoot as much HDR information as possible onset.
This was complicated for the fight sequence, as the lighting in Downey was unfortunately overcast and coming from the wrong direction, and there was a very large green screen where the harbour should be. So we rebuilt the lighting environment from stills and painted out the sun and any additional lights to allow more flexibility when lighting the shots.
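Rebuilding a lighting environment from stills typically starts by merging bracketed exposures into a single high-dynamic-range radiance estimate. As a minimal sketch of that step (not Dneg’s actual pipeline – the function and its weighting are illustrative), each bracket’s pixel value is divided by its exposure time and combined with a hat-shaped weight that distrusts clipped values:

```python
# Illustrative sketch: merge bracketed exposures of one pixel into an
# HDR radiance estimate. Mid-range values are trusted most; values near
# black or white (likely clipped) contribute little.

def merge_hdr(samples):
    """samples: list of (pixel_value in [0, 1], exposure_time).
    Returns a weighted estimate of scene radiance."""
    def weight(v):
        # hat function: 0 at the clipped extremes, 1 at mid-grey
        return 1.0 - abs(2.0 * v - 1.0)
    num = den = 0.0
    for value, exposure in samples:
        w = weight(value)
        num += w * (value / exposure)  # this bracket's radiance estimate
        den += w
    return num / den if den > 0 else 0.0
```

A real HDR merge also accounts for the camera’s response curve, but the weighted average above is the core of recovering radiance that can then be edited – for example, painting out the sun – before relighting.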

Have you developed specific tools (for lightning or fire) for this sequence?
We used a number of in-house tools and relied heavily on Houdini for our FX work.

How did you collaborate with Legacy Effects on the Iron Man armor and Ivan Vanko?
The whip FX were designed and implemented at Dneg. The suit Ivan wears was practical and handled by Legacy.

Regarding Iron Man’s mobile armor: how did you design and build it? Did you receive elements from ILM?
The MKV armour was separate from the work ILM did and there was no overlap of the work on this project.
Legacy built a 1/3 size model which we used as a starting point; this was then refined and added to throughout the project. We were modelling the suit and suitcase until quite late in the project, with the MKV ultimately made up of over 3,000 individual pieces.

Can you explain how you animated the deployment of the suitcase into the mobile armor and its choreography?
We began with a lot of concept art which resembled comic book frames; this was very useful but could only take us so far.
In 3D, the first step was to take the fully formed armour and try to fit it into the suitcase, which it does… just.
Jon wanted the armour to move in a consistent and mechanically believable manner, which was a challenge considering what we needed the individual pieces to do.
In the end, focusing on what each shot of the suit-up sequence needed to most clearly communicate was the key to solving this problem.

What direction did Jon Favreau give you for Iron Man’s animation?
Jon has a very clear idea of how Iron Man should move and had established a language in the first film, so there was a lot of catching up for us to do. One of the key challenges was the interaction with Whiplash, as they are connected for half the sequence and there was only so much we could do to alter the performance, transition of weight, etc.

How did you achieve such a realistic metal look for the armour?
The shaders were built with the latest version of Dneg’s in-house shader set-up, which allows extensive use of co-shaders. This allowed the lookdev artists to build and experiment with shaders in a more intuitive way.
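The appeal of co-shaders is composing a look out of small reusable shading components rather than one monolithic shader. As a loose, hypothetical illustration of that idea in plain Python (not Dneg’s shader system – the function names are invented, and the Fresnel term uses Schlick’s well-known approximation):

```python
# Illustrative sketch of co-shader-style composition: small shading
# components (a Fresnel term, a coat layer) combined into a final look.

def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation: reflectance rises toward 1 at grazing
    angles, starting from the base reflectance f0 at normal incidence."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def layered_metal(base_reflectance, cos_theta, coat_tint=1.0):
    """Compose the Fresnel 'component' with a simple tinted coat layer,
    the way co-shaders let lookdev mix and match building blocks."""
    return coat_tint * schlick_fresnel(base_reflectance, cos_theta)
```

The benefit is that each component can be swapped or tweaked independently during lookdev, which is the intuition behind the workflow described above.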

What was the biggest challenge on this film?
The suit-up sequence gave us the most sleepless nights.

What was the most difficult shot to do? And how did you achieve it?
There is no stand-out shot in this case; most of the shots in the sequence had a large number of disciplines working on them, so in a sense one of the more difficult challenges was keeping track of such complex work.

How many shots have you done and what was the size of your team?
We finalled 250 shots, with around 200 crew touching the shots over the course of the project.

What did you keep from this experience?
I learnt a great deal and am pleased with the result and how hard everyone worked. I’m not sure you are ever completely happy with the final result, which helps when embarking on the next show.

What is your next project?
I’m currently in between shows.

What are the four films that gave you the passion for cinema?
IN THE MOOD FOR LOVE, TERMINATOR 2, WITHNAIL AND I and HOWARD THE DUCK…

A big thanks for your time.

// WANT TO KNOW MORE?
Double Negative: Dedicated IRON MAN 2 page on Dneg’s website.

© Vincent Frei – The Art of VFX – 2010