THE A-TEAM: Bill Westenhofer – VFX Supervisor – Rhythm & Hues

At Rhythm & Hues for nearly 15 years, Bill Westenhofer has overseen many projects such as BABE, STUART LITTLE, CATS & DOGS, MEN IN BLACK 2 and THE CHRONICLES OF NARNIA. In 2008, he received an Academy Award® for Achievement in Visual Effects for his work on THE GOLDEN COMPASS.

What is your background?
I’ve been working in the visual effects industry for over 15 years. I have a master’s degree in computer science from The George Washington University in Washington DC, specializing in graphics algorithms. My formal training is technical, but I’ve been drawing, painting, and animating on my own since I was very young. My current role as Visual Effects Supervisor combines both disciplines. I have to creatively direct the team of artists while helping to develop the technical approaches to achieve the looks we need.

How was your collaboration with director Joe Carnahan and production visual effects supervisor James Price?
From the start, Joe and Jamie emphasized the “fun” factor of THE A-TEAM. They wanted high-energy, dynamic action, which meant a lot of objects close to the lens and fast-moving cameras. I thought our collaboration worked very well. We were able to bring a lot of ideas to the table, and they likewise were great at crafting fun sequences and at helping us whenever an action or ‘gag’ wasn’t working.

Which sequences did Rhythm & Hues work on?
We worked on two sequences in the film: “The Tank Drop” and “Long Beach Harbor”.

Can you tell us about the design and creation of the crazy freefall tank sequence?
This sequence was both the most fun and the one that caused the most “sweat” at the studio. The challenge was the sheer insanity of a tank falling through the sky and redirecting itself with its main gun. Whenever you push the believability of physics you run the risk of the whole thing falling apart. I really think we were able to walk the fine line in telling the story of what the tank was doing and yet maintaining just enough weight that it worked with a degree of plausibility.

The sequence was previsualized before we came on board. The previs established most of the cuts that you see in the final product and nailed down the details of the action. R&H created several shots for a very early teaser trailer and based them very closely on this initial previs. Once those were out the door, we reconsidered the action with ‘believability’ in mind and made the adjustments that finally made it into the film. It was interesting to see how your perception of whether something was working or not changed as the rendering of the clouds and the tank became more realistic. A lot of the early previs animation proved to be too ‘light’, with the tank responding too strongly to its main gun, for example.

How did you create such realistic clouds?
The clouds were, by far, the most challenging part of the sequence for our R&D folks. We didn’t have any aerial photography, and we knew we would be flying right up to and sometimes through the clouds. This meant we would have to create fully rendered volumetric clouds. The clouds were also going to be very important in the shots compositionally, and for providing a sense of speed, so we needed an efficient way to visualize how they would work at the animation stage. The technique we settled on was to make a library of predefined cloud ‘caches’. Analogous to the pre-light stage for a regular 3D object (like the plane or tank), we set up turntables so we could adjust the characteristics of each cloud – the amount of ‘wispiness’, areas of smooth detail next to clumpy cumulus puffs, etc. This was designed in Side Effects’ Houdini. We then took these caches and made low-res iso-surfaces, which were handed to layout artists who composed the ‘cloud landscape’. The iso-surfaces were light enough to be interactive during the animation stage, and the animators, in fact, had the ability to add or move them to help the sense of speed.

Once they got to the render stage, cloud lighters placed scene lights to represent the sun, simulate bounce lighting from cloud to cloud, and also simulate some of the complicated internal light scattering in the cloud. We did try to simulate that within the volume renderer, but it proved to be very expensive. To make up for it, one of our TDs, Hideki Okano, developed a tool to place internal lights where there would be the most internal scattering in a full simulation. He also developed a feature we called ‘crease lighting’, which mimics a phenomenon in cumulus clouds where the ‘creases’ between lumps are actually brighter than the lumps, because of an increase in water vapor density as you move in from the edges.

For the final renders, Houdini’s Mantra dealt with the cloud visibility calculations and was the framework for a ‘ray-march’ render. At each march step, however, we used a custom volumetric calculator called FELT (Field Expression Language Toolkit), written by Jerry Tessendorf, which had the ability to add additional multi-scattering terms. After initial renders, we could add more detail by ‘advecting’ the volume caches – increasing their ‘wispy’ quality. We also added realism by mixing clouds with different levels of ‘sharpness’ together, often within the same 3D space.
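For readers curious about the mechanics, a ray-march render of the kind described above steps along each camera ray, sampling density and accumulating light and transmittance. This is a minimal single-scattering sketch in plain Python – the names and parameters are illustrative stand-ins, not the Mantra/FELT setup R&H actually used:

```python
import math

def ray_march(density, origin, direction, step=0.1, n_steps=100,
              extinction=1.5, light=1.0):
    """Accumulate radiance and transmittance along one ray through a volume.

    `density` is any callable returning a scalar density at a 3D point.
    A production renderer would add the multi-scattering terms mentioned
    above; here a single light term stands in for them.
    """
    transmittance = 1.0
    radiance = 0.0
    pos = list(origin)
    for _ in range(n_steps):
        d = density(tuple(pos))
        if d > 0.0:
            # Beer-Lambert attenuation over one march step
            attenuation = math.exp(-extinction * d * step)
            # light scattered toward the eye at this sample
            radiance += transmittance * (1.0 - attenuation) * light
            transmittance *= attenuation
            if transmittance < 1e-4:  # early exit once effectively opaque
                break
        for i in range(3):
            pos[i] += direction[i] * step
    return radiance, transmittance
```

With `light=1.0` the accumulated radiance telescopes to exactly `1 - transmittance`, which makes a handy sanity check when wiring up a marcher like this.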

As a final touch, in a few specific shots where a plane passes through a cloud, we added the ability to animate the clouds from the plane’s airflow. This achieved the wing ‘vortex’ effects you see as it emerges from the cloud.

This sequence presented major challenges, especially with particles and parachutes. How did you achieve them?
We used Houdini extensively for all sorts of explosions, missile trails, burning engines, etc. For the most detailed explosions we used Houdini’s fluid simulation with thermal heat propagation, combined with traditional particle effects and a few flame cards. One relatively simple effect that was harder than it looked was the tracers. In animation, they simply used straight ribbons to suggest where the bullets should go from a story point of view. Once we had to realize them with more realistic ‘ballistic’ flight, our effects animators had to actually “aim the guns”, leading the targets, etc., to achieve a similar effect. While a little bit of cheating was possible (bending their flight paths, for example), you could only push this so far before it looked wrong. The effects animators ended up with their own mini ‘shooting gallery’.
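“Aiming the guns” at a moving target is the classic intercept problem: solve for the time of flight at which a constant-speed bullet meets the target’s predicted position. Here is a hedged sketch of that calculation – the function name and interface are hypothetical, not R&H’s in-house tool:

```python
import math

def lead_target(shooter, target, target_vel, bullet_speed):
    """Return the aim point for a constant-velocity target, or None.

    Solves |target + v*t - shooter| = bullet_speed * t for the smallest
    positive time of flight t, then returns the intercept point.
    """
    rx = target[0] - shooter[0]
    ry = target[1] - shooter[1]
    rz = target[2] - shooter[2]
    vx, vy, vz = target_vel
    # quadratic in t: (|v|^2 - s^2) t^2 + 2 (r . v) t + |r|^2 = 0
    a = vx * vx + vy * vy + vz * vz - bullet_speed ** 2
    b = 2.0 * (rx * vx + ry * vy + rz * vz)
    c = rx * rx + ry * ry + rz * rz
    if abs(a) < 1e-12:            # bullet and target speeds match
        if abs(b) < 1e-12:
            return None
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:            # no real intercept exists
            return None
        roots = [(-b - math.sqrt(disc)) / (2.0 * a),
                 (-b + math.sqrt(disc)) / (2.0 * a)]
        positive = [t for t in roots if t > 0.0]
        if not positive:
            return None
        t = min(positive)         # earliest intercept
    if t <= 0.0:
        return None
    return (target[0] + vx * t, target[1] + vy * t, target[2] + vz * t)
```

Bending the flight paths afterward – the “little bit of cheating” mentioned above – would simply move this aim point off the true ballistic solution.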

As for the parachutes, one of the effects I’m most happy with is a shot where you see the canopies being strafed by the aforementioned tracers. The effects artist worked with his “aim” until we were happy with the number of impacts and the choreography of the bullet paths. He then created geometry markers that noted where each bullet entered and exited the canopy. This was handed back to modeling, who punched varying-sized tears in the right places. Finally, a “technical animator” went back and animated impact waves on the surface that corresponded to the hits. It was a lot of hand work, but I thought it worked beautifully in the end.

The dock sequence in Long Beach is another crazy sequence. Can you talk to us about the shooting of this sequence? Was it shot entirely on bluescreen, or were some parts shot on a real dock?
Much of it was shot for real at a dock in Vancouver, Canada. For the most part, during the first half of the sequence you are seeing a real dock and a CG ship with containers. A few shots were added later and evolved as the edit came together and these were blue-screen set pieces with CG backgrounds created with photo-mapped geometry. One interesting bit involves the first two establishing shots of the ship on the water. Live plates were photographed (over the ocean and at the dock), but the task of perfectly matchmoving ship wakes and reflections proved so difficult that we ended up replacing the water completely. The digital water ended up being such a good match that it worked perfectly. Once the ship starts to explode, a lot of the shots were blue screen pieces with digital backgrounds.

The sequence builds to the massive destruction of the dock. How did you handle all these elements colliding and destroying each other?
Again we used Houdini for a rigid body simulation of the ship and containers. Once the rigid body sim was run, a damage pass was run to add gross deformations to the containers based on where they impacted with other objects. A fully detailed simulation of the damage proved cost prohibitive, so for the most part, wherever more specific detail was needed (or when the containers were close to camera), animators went back with deformation tools and blend shapes to hand craft the damage. Another detail was added when container doors opened and contents started to spill on the dock. This was also done with a combination of rigid body simulation and hand animation.
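A gross-deformation damage pass like the one described can be approximated by pushing vertices near each recorded impact along the impact normal, with a smooth falloff. This toy version assumes impacts logged by the rigid-body sim as point/normal pairs; it is an illustration of the idea, not production code:

```python
def damage_pass(vertices, impacts, radius=1.0, strength=0.3):
    """Dent mesh vertices near recorded impact points.

    `impacts` are (point, normal) pairs; vertices within `radius` of an
    impact are pushed along the impact normal with smoothstep falloff.
    """
    out = []
    for v in vertices:
        vx, vy, vz = v
        for (px, py, pz), (nx, ny, nz) in impacts:
            d2 = (vx - px) ** 2 + (vy - py) ** 2 + (vz - pz) ** 2
            if d2 < radius * radius:
                # smoothstep falloff: full strength at the hit, zero at the edge
                t = 1.0 - (d2 ** 0.5) / radius
                f = t * t * (3.0 - 2.0 * t) * strength
                vx += nx * f
                vy += ny * f
                vz += nz * f
        out.append((vx, vy, vz))
    return out
```

For hero containers close to camera, the interview notes that artists replaced this kind of automatic pass with hand-sculpted deformation and blend shapes.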

Weta Digital has also participated in this sequence. How was the collaboration?
In the original sequence, the ship is hit by the missile, lists, and the containers spill onto the dock. We then went back to have the initial missile hit trigger a series of secondary explosions that ultimately split the ship in half. Unfortunately for us (R&H), we didn’t have the capacity to take on the additional shots and effects work that it would require, so Fox asked Weta to step in and tackle those. We gave them all of our assets – ship, containers, dock gantries, etc. – and they created several new shots to depict the additional explosions. Once the ship starts to list, we had a few shots (even before the cut change) that were blue screen shots of the actors, on the ground or hanging from partial set pieces. For these we used our CG simulations and photo-mapped environments. In a few cases, the new continuity required us to abandon aerial plates and make fully synthetic shots for some of the wides. Weta handled the majority of these, but in the few cases where we had done significant work and the continuity impacts were manageable, we finished them.

How long have you worked on this project?
I actually came on the project in January, taking over for another supervisor who had to leave for personal reasons.

Was there something that prevented you from sleeping on this show?
Fortunately, the futon couch in my office allowed me to sleep well – hehe.
Actually, the hardest part was just working with the complex material on the ever-shortening timelines of post-production. Studios want to see finished renders much earlier in the process than ever before.

What was the size of your team?
We had about 120 artists on the show.

What is your software pipeline?
We used Houdini and Mantra for much of the effects work. We also use Maya for modeling. The rest of the work was done with our in-house proprietary tools, including our renderer ‘wren’ and compositing software ‘icy’.

What did you take away from this experience?
This project pushed our pipeline, which had been tailored for 3D character films. It showed where we needed improvements – many of which are being implemented as we speak. The same goes for my career. This was a welcome change from digital lions and creatures, and was a lot of fun. I’m very happy with the clouds and the tank sequence in general, and many of the ship shots in the Long Beach sequence – especially the ones where the ship takes up part of the background, which looked absolutely convincing. There is of course the obvious effects work once the ship starts to explode, but I think people might be surprised there was work done in many of the ‘in-between’ shots.

What is your next project?
I’ll let you know once I do (laughs).

What are the four films that gave you the passion for cinema?
STAR WARS and RAIDERS OF THE LOST ARK as a kid…
JURASSIC PARK was the one that made me rush out to California…
THE GODFATHER though is still one of my favorite films.

A big thanks for your time.

// WANT TO KNOW MORE?
Rhythm & Hues: Official website of Rhythm & Hues.
fxguide: Complete article about THE A-TEAM on fxguide.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Ben Morris – VFX Supervisor – Framestore

After working several years at MillFilm on films such as BABE 2, GLADIATOR and LARA CROFT, Ben Morris joined Framestore in 2000 and participated in projects such as TROY and CHARLIE AND THE CHOCOLATE FACTORY, and served as visual effects supervisor on THE GOLDEN COMPASS and PRINCE OF PERSIA.

What is your background?
I studied at art college and then did a Mechanical Engineering degree. After leaving university, I joined Jim Henson’s Creature Shop, designing and developing computer-based performance animation control systems. I moved into CG during post-production on BABE 2 at MillFilm, and moved to Framestore in 2000, where I have worked to the present day.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
I really enjoyed working with both of them. Tom, in particular, was a very creative and inspiring VFX supervisor to work with. He comes from a facility background and has an invaluable practical knowledge of how shots are put together. He also has a great sense of design and visual style, which shows through in all the work he supervised on PRINCE OF PERSIA.

What are the sequences made by Framestore?
The Hassansin Vipers and the Sandroom at the end of the film.

Were there real snakes on the set or are they all in CG?
There is one brief shot of a real python at the beginning of the Hassansin’s Den sequence – all the Vipers are CG.

How did you create the CG sand?
(Answer by Alex Rothwell, Lead FX artist)
Before starting the work, we first needed to be clear in our minds about how we thought that much sand would move. There was no reference for a moving body of sand the size of a football field, so we had to imagine what we thought it would look like, with the help of our concept artists, and try to realize that. Fast-moving sand exhibits some fluid-like properties, but there are also key aspects of the movement that are distinctly un-fluid. We contemplated doing a lot of fluid simulation work to model the movement of the sand, but large simulations are extremely time-consuming and are not as directable as other solutions. Above everything, we wanted a system that could be exactly controlled by an artist reacting to the director’s or supervisor’s comments.

The whole sequence was blocked out by the animators using geometric surfaces to represent the sand’s surface; we were able to get most of the key movement of the sand signed off this way before an FX artist became involved. Once the layout of a shot had been finalized, a custom Maya plugin took the animated geometric surfaces representing the sand and produced a flow of particles that replaced the geometric surface in the final render. The plugin created particle movement that appeared fluid-like and was dictated by the gradient of the underlying surface. Any additional flow detail could be controlled via maps, allowing the artist to quickly and visually paint the sand flow direction, including any turbulence and spray. The number of these semi-simulated particles was increased at render time via a custom particle management system dubbed pCache. This system allowed us to generate the number of particles needed to produce a convincing render without the overhead of the extra processing and storage. The sand artists were able to write shader-like scripts that gave complete control over the upscaling process and could also be used to produce additional surface detailing and displacement. In some of the wide shots, over a billion points are being rendered.
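The render-time upscaling idea behind a system like pCache can be sketched very simply: each coarse, simulated particle deterministically spawns a cloud of jittered children, so the heavy point count never has to be simulated or stored on disk. All names and parameters below are illustrative, not Framestore’s actual API:

```python
import random

def upscale_particles(parents, children_per_parent=8, jitter=0.05, seed=0):
    """Expand a coarse particle cache into render-time detail particles.

    Each simulated parent spawns a fixed number of jittered children.
    Seeding the generator makes the expansion deterministic, so the same
    cache always renders the same way from frame to frame.
    """
    rng = random.Random(seed)
    children = []
    for px, py, pz in parents:
        for _ in range(children_per_parent):
            children.append((px + rng.uniform(-jitter, jitter),
                             py + rng.uniform(-jitter, jitter),
                             pz + rng.uniform(-jitter, jitter)))
    return children
```

In the production system, the shader-like scripts mentioned above would drive this expansion per particle, adding surface detail and displacement rather than simple uniform jitter.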

Can you tell us about the shooting of the final scene in which the sand flows into the void?
Dastan is a mixture of real Jake Gyllenhaal and the odd digi-double. Jake really threw himself into the challenge and worked very hard to do most of the stunts himself. It really paid off in post, as we only had to do one face replacement in the entire sequence.

Can you tell us about your collaboration with Double Negative for the Oasis sequence?
The collaboration worked very well. For a few shots we needed to animate and render Vipers caught in the time-freezing effect created by Dastan releasing the dagger’s sand. Both companies worked on the same backplates, some of which had ‘virtual’ camera moves created by DNeg. Once we got approval for our elements in a shot, we would package up a bundle of data for DNeg (reference animated geometry, 3D render elements and the approved comp).

What was the biggest challenge on this show?
Creating the epic scale of the environment and destruction required in the Sandroom. We always referenced back to the early concept work created by our VFX art director Kevin Jenkins, which perfectly captured the ‘look and feel’ of the sequence before we started working on it.

How many shots have you done and what was the size of your team?
We worked on approx. 220 shots and completed 125 for the film. We had 60 crew working on the project over a period of 2 years.

Were there some shots that prevented you from sleeping?
We had a couple of trailer shots involving complex sand simulation and rendering which delivered pretty close to the wire, but that’s the great thing about trailers – they flush out all the bugs before final delivery.

What did you take away from this experience?
Working with Tom Wood was an absolute pleasure and our relatively small crew created some really outstanding visuals from concept design through to final delivery. So I guess we’ll all keep some beautiful pictures …

What is your next project?
I have started working on a great project with a very good director, but sadly I can’t talk about it right now.

What are the four films that gave you the passion for cinema?
STAR WARS, BLADE RUNNER, DUNE and DARK CRYSTAL.

A big thanks for your time.

// WANT TO KNOW MORE?
Framestore: PRINCE OF PERSIA dedicated page on Framestore website.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Stephane Ceretti – VFX Supervisor – MPC

Stephane Ceretti worked for nearly 12 years at BUF in Paris on films such as ALEXANDER, MATRIX 2 and 3, HARRY POTTER 4 and BATMAN BEGINS. He then moved to MPC in London as VFX supervisor on PRINCE OF PERSIA. Since this film, he has joined Method Studios, also in London, where he oversaw THE SORCERER’S APPRENTICE.

What is your background?
First of all, I am French. I spent the first 12 years of my career at Buf Compagnie in Paris, where I had the chance to work as VFX supervisor on films such as ALEXANDER, MATRIX 2 and 3, HARRY POTTER 4, BATMAN BEGINS, THE PRESTIGE, SILENT HILL and BABYLON AD. I joined MPC in 2008 as VFX supervisor on PRINCE OF PERSIA.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
Very good! On this kind of production, we usually spend most of our time with the VFX supervisor. Tom Wood used to work at MPC, which helped break the ice very quickly. I ended up going to Morocco and to Pinewood Studios, where we shot most of the battle sequence that opens the film. That time spent on the shoot is essential to understanding what the director is after, as well as getting a visual sense of the universe that the production designer and Tom Wood wanted to depict in the movie.

What did MPC do on this show?
MPC was in charge of developing the look of the City of Alamut and its surroundings. We also had to create a CG Persian army for the opening battle sequence. Our work ranged from simple set extensions, complementing the sets built in Morocco, to full-CG views of the city of Alamut for wide opening shots. Our biggest task was to create the environment and armies attacking the Eastern Gate of Alamut. Considering this was mostly shot at Pinewood Studios, we had a big job in front of us to make it look like it was shot on location and to give the sequence the scope it needed.

What references did you have for the city of Alamut?
The production designer did a layout and design of the city: the walls, the inner city with its various palaces and gardens, and the big white and gold temple at the base of the super-high tower in the middle of the city. We then had to extrapolate from that and create the entire city. We did a lot of research, and based on stills that Wolf Krueger gave us from Rajasthan in India, we created a map of Indian locations we would have to visit to build a library of buildings, trees, villages and cities. We then sent our digital photographer James Kelly there for three weeks to shoot as many textures and references as he could. James also came to Morocco to shoot stills of the sets, as we would have to extend these and mix and match them with the Indian locations. We also took stills of Moroccan locations for the surroundings of the city, and as I was away in Corsica for a break, I took some stills of Corsican mountains, which ended up being the perfect match for what we wanted. Again, we ended up using a mix of all these sources to create the city surroundings.

In terms of the look of the city and its light ambience, we spent a lot of time looking at reference stills in books and on the net, but Tom also showed us paintings from the Orientalist painters. These were stunning and gave us a good sense of the style of light and the levels of haze, mist and dust we would have to put in the city.

Were you in contact with Jordan Mechner and Ubisoft?
Not really, no.

Can you explain to us how you recreated the city in CG?
It was a big undertaking. Based on the thousands of stills that James took back from India and Morocco, we created a library of buildings sorted by style and size. We then took the layout from the art department and created some layout tools – based on ALICE, our crowd system, which we customized to accommodate buildings and city props – to design the city space. This first interactive pass allowed us to make quick modifications that we could show to Tom to get approval. Once the main squares, gardens, streets, palace and markets had been laid out from key views of the city, we could get into the minutiae of customizing pieces of the city by hand to match specific shot needs. We had to create management tools to let us decide what kind of props would be used and where to put the trees (we created a huge library of trees, with particle leaves that we could render while keeping the memory usage manageable).
The city was a huge asset to work with, but RenderMan handled the renders pretty well.
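The layout idea – repurposing a crowd system to scatter library assets over a city plan while keeping squares, gardens and the palace clear for hand placement – can be illustrated with a toy lot-assignment function. ALICE itself is proprietary; everything here is a stand-in:

```python
import random

def lay_out_city(lots, library, reserved=(), seed=42):
    """Assign a building type from a library to each free city lot.

    `lots` are grid coordinates, `reserved` marks plots (squares, gardens,
    palace) left empty for hand customization. Seeding keeps the layout
    deterministic, so quick interactive passes are repeatable.
    """
    rng = random.Random(seed)
    reserved = set(reserved)
    return {lot: rng.choice(library) for lot in lots if lot not in reserved}
```

A production tool would additionally weight building styles by district and orient each asset to the street, but the key property shown here – a fast, repeatable first pass that a supervisor can approve before hand work begins – is the one the interview describes.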

For the scenes shot in the city of Alamut, can you tell us the size of the real sets?
The sets built in Morocco were huge, but they were never big enough! So we ended up having to extend quite a lot of them. The East Gate set built inside the 007 stage at Pinewood Studios was really huge, though; it took up most of the space in the studio and we were really close to the ceiling, making it difficult to light and operate. Blue screen coverage was also difficult. It was one of the biggest interior sets I’d ever seen.

What was the proportion of extras and digital doubles made with ALICE during the attack of the Persian army?
It depends on the shot, but sometimes we had about 50 to 100 extras and ended up making the crowd 20 to 50 times bigger. I think we had a total of 300 to 400 extras on the big wide shots, but again we ended up with many, many more in CG. PRINCE OF PERSIA was not a huge crowd show for us in that sense; all the crowd shots we had ended up being fairly simple.

After ROBIN HOOD, this new project is another opportunity to admire ALICE’s work on an army. How did you ensure that rendering these shots did not take years?
Compared to the city shots, the army shots were a real piece of cake, I can tell you!

How did you make the beautiful shot that rotates 360 degrees around Prince Dastan before his jump?
We shot Jake in front of a green screen on a set at Pinewood, with the wooden beam on which he stands… and all the rest is CG. The East Gate on which he is standing is a CG representation of the set at Pinewood, the close surroundings are a 3D reconstruction of the Moroccan sets with extra top-ups, and then in the back you can see our 3D city and the surrounding CG mountains based on stills from Corsica. So it’s a big collage of many techniques, locations and CG. We also have CG armies and city crowds in the shot. It was one of the most complex shots to get right, as we did a lot of work with atmospherics and the light coming from the sun…

Can you tell us about the shots for the giant sandstorm that destroyed Alamut?
We did Alamut in these shots, but we did not do the sandstorm and the destruction. These were shared with another facility.

What was the biggest challenge on this project?
Getting the city to render, and making the Golden Palace extensions look real!

How many shots did you do, and what was the size of your team?
Around 300 shots in the end, and maybe 80 to 100 people worked on it, though not all at the same time.

Was there a shot or a sequence that made you lose your hair?
They all do!

What did you take away from this experience?
It was great working with MPC for the first time, as well as working with Tom and Mike. Also, being on a Bruckheimer production is really demanding but extremely rewarding and quite fun, they are really passionate about the work and always push for more, which is cool from an artist’s point of view.

What is your next project?
Well, I just finished working on another Bruckheimer production, THE SORCERER’S APPRENTICE, for Method Studios in London. And I am starting on another project, from Marvel, shooting in the UK.

What are the four films that gave you the passion for cinema?
I can’t choose; they all give me a passion for cinema, even the ones that don’t have visual effects in them! I am quite eclectic in my tastes, so I can enjoy a movie like PRINCE OF PERSIA or STAR WARS, or a Chris Nolan movie, or a French movie, but not for the same reasons. I could not really choose just four movies…

A big thanks for your time.

// WANT TO KNOW MORE?
MPC Breakdown: VFX Breakdown for PRINCE OF PERSIA.
The Moving Picture Company: PRINCE OF PERSIA dedicated page on MPC website.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Sue Rowe – VFX Supervisor – Cinesite

Sue Rowe is one of the few female VFX supervisors in the business. She has worked at Cinesite for over 10 years and has overseen movies such as TROY, CHARLIE AND THE CHOCOLATE FACTORY, X-MEN 3 and THE GOLDEN COMPASS. She just finished the visual effects of PRINCE OF PERSIA.

What is your background?
I have a degree in traditional animation and worked as a commercials animator for a couple of years before retraining in computer animation and taking an MA at Bournemouth University.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
Mike was very enthusiastic on set and we worked closely with Tom Wood, who we had worked with previously. Tom comes from a facility background (Note: MPC and Cinesite), so he understands the technology involved and the dynamic worked well.

What sequences did Cinesite contribute to in the film?
We created over 280 shots. The key sequences we worked on were the Avrat parkour jump sequence and the establishing views of the city of Nasaf, the prince’s home town.
The most exciting sequence for us was the Hassassins’ attack, where we created five separate weapons which were hand-animated in an exciting, fast-moving battle using whips, blades and fire. We also created the ‘youthening’ of King Sharaman (Ron Pickup) and his brother (Sir Ben Kingsley), the death of the king, and the lion hunt sequence.

What references did you have for the town of Nasaf and how did you recreate it?
We had a good start using the locations in Morocco; it was a real privilege to visit these historic sites and simply augment them. We also visited an exhibition at the Tate gallery in London called “The Lure of the East”, on British Orientalist paintings, and carried out internet and photographic research for generic Arabic art, clothing, tiles and architecture.

As I was on set for the duration of the shoot in Morocco, I was able to bring home high-resolution stills which captured the real lighting conditions. In addition to the usual camera data, we were supplied with topographical LIDAR scans of the environments.

Can you tell us a typical day on the shooting in Morocco?
We would usually start at 5.30am, when the sun came up, and drive to the various desert locations. Filming was in general 12 hours per day. It was Ramadan whilst we were filming, which, when combined with daily temperatures of over 40 degrees centigrade, proved to be a challenging environment to work in.

During the shoot we took high dynamic range photography, which provided us with a lighting environment to which we could match our computer-generated cities.

What was the size of the real set?
There were several full-sized, elaborate sets, which were initially filmed in Morocco and then recreated at Pinewood. The backgrounds were shot with blue screens so that we could replace the set environments with sky domes captured in Morocco. This also allowed for wire removal on the parkour jumping sequences, in particular for some of the more difficult stunt work.

What did you do on the chase scene in the Avrat market?
We created 3D set extensions, as the real locations were just not big enough for the scope of the film. We also created 3D and 2D arrows for the action sequences. In some cases we did wire and rig removals on the stunt doubles, and to make it more dramatic we added a digital face replacement over Jake’s stunt double. We also added general atmosphere using 2D smoke and dust elements composited into the shots to convey a city environment. Additionally, we composited digital matte paintings into the backgrounds and added sky replacements for look consistency throughout the sequence.

Can you tell us about Hassassin weapons? How did you create and animate them?
The Hassassins sequence was great fun to work on. We choreographed the sequence with CG supervisor Artemis Oikonomopoulou and our animation director Quentin Miles. Although we shot references for the whips on set, the stunt team only had a handle in their hands, so we had some freedom regarding where the whip would fall. Add to that a few swings and dust hits, and it’s a pretty dynamic sequence. Tom shot the stunt guys practicing with a real whip on set, and we researched the way the whip recoils in some detail. When the whip cracks, it’s because the tip moves faster than the speed of sound and creates a sonic boom.

The Hassassins are always surrounded by a mysterious-looking cloud. Again, this ended up as a visual effect, as it’s impossible to control smoke on an exterior set. As the shots developed, the cloud became more ominous. Both the cloud and the sand trails were done in Houdini. We used Autodesk Maya to create and animate the weapons.

How did you make Sir Ben Kingsley younger?
The director, Mike Newell, didn’t want to cast younger actors to play the king and his brother in the flashbacks. What we did was show him a test which made them look 20 years younger. Mike really liked it, as it meant he could get the performances of the real actors, which is what he wanted.

However, there are conditions to this approach: it’s a 2D effect, so it relies on good data being gathered during the shoot. We cast two younger doubles who stood in straight after the takes with the original actors, so we could take high-res digital stills in the same lighting conditions. This needed to be timed well, as no one likes holding up a film set, but the end result was worth it. Tom Wood comes from a facility background, so he knew that 10 minutes extra on set can save months of work later down the line in post.

What we did was take photos of a younger person’s skin textures – the pores and the skin’s surface glow. We added darkened eyelashes and thickened hair, and removed wrinkles and age spots. These were then tracked onto the actors’ skin using our in-house software Motion Analyser, which basically sticks the new skin on top of the old skin – like a digital skin graft.
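Conceptually, the “digital skin graft” is a tracked 2D composite: a patch of younger-skin texture follows a per-frame track and is alpha-blended over the plate. A toy greyscale version, representing images as sparse pixel dictionaries – an illustration of the idea only, not Cinesite’s Motion Analyser:

```python
def graft_patch(base, patch, offset, alpha):
    """Composite a 'younger skin' patch over a base image at a tracked offset.

    `base` and `patch` map (x, y) -> grey value; `offset` is the 2D track
    for this frame and `alpha` blends new skin over old. Patch pixels that
    land outside the base image are simply ignored.
    """
    ox, oy = offset
    out = dict(base)
    for (x, y), v in patch.items():
        key = (x + ox, y + oy)
        if key in out:
            # linear blend: alpha=0 keeps the plate, alpha=1 is full graft
            out[key] = (1.0 - alpha) * out[key] + alpha * v
    return out
```

In practice the track would be a dense per-pixel motion analysis rather than a single offset, and the blend would be feathered at the patch edges, but the plate-plus-tracked-texture structure is the same.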

Can you explain how you created the lioness?
Using Autodesk Maya, the lioness was generated to reflect a creature that looked starved and malnourished. We really wanted to present a lioness which was bordering on emaciated, to emphasise her need to hunt. To achieve this look we graded the lioness to have washed-out fur and deeply emphasised her bone structure around the rib cage and hips. As the hunt scene progresses the lioness is speared through the mouth by a CGI spear, which was also created using Maya.

What was the biggest challenge on this project?
Just the pure variety of visual effects needed for the show. We had many little sequences that each needed to be designed and have their look development signed off. Working on a Bruckheimer film means everything needs to be bigger and better than usual, so we used to say "what would JB do?" and make ourselves give it that extra 100%!

What is your pipeline at Cinesite?
For the cities, firstly, after concept work we built a number of buildings in 3D that could be placed to recreate a town layout. These were all unique and could be manipulated individually. The basic town layout was left to a dedicated “town planner”, who would get the buildings in roughly the right position, then render them. These would all be tweaked on a shot by shot basis for general aesthetics, but adhered to the basic town structure.
These layouts would be passed to a lighter, who controlled displacements and ageing of the buildings as well as lighting situations. They then passed from lighter to compositor, where additional layering techniques were used, such as adding smoke and 2D props, to give the city an animated, "lived-in" feel.

For the whips, we would initially try to use the hand gestures and moves from the plate as a starting point for animation. Often, we would need to warp or varispeed the plates to add dynamism to the shots. Once we had whip animation signed off, these would go through lighting to compositing, where “depthing”, glints, collision impacts would all be added to give a heightened sense of danger.

The smoke, which was used in the Death of Sharaman sequence, had its own effects pipeline. We had to body track the dying king and use his skin as a smoke emitter, as well as the cloak that was killing him. This smoke followed Sharaman around and appeared to be emanating from him.

How many shots have you done and what was the size of your team?
285 of our shots made the final film, but we produced 320 in total. The team size was 60 artists.

Is there a shot or a sequence that prevented you from sleeping?
The lion hunt weighed heavily on my shoulders, as I had convinced Tom we could do a better job with a CG lion than with the real lioness. She was a fat and contented animal who really didn't want to roar, so we replaced all the real footage with a hungrier, wilder animal to give the sequence the edge it needed.

What did you keep about this experience?
That lighting a scene is the key; adding real atmospherics over the top makes it photo-real.

What is your next project?
I am currently supervising Cinesite’s work on JOHN CARTER OF MARS, for Disney, which is due out at the cinema in 2012.

What are the four films that gave you the passion for cinema?
ERASERHEAD, David Lynch
LUXO JR, John Lasseter
DIMENSIONS OF DIALOGUE, Jan Svankmajer
BLADE RUNNER, Ridley Scott

A big thanks for your time.

// WANT TO KNOW MORE?
Cinesite: PRINCE OF PERSIA dedicated page on Cinesite website.

© Vincent Frei – The Art of VFX – 2010

PRINCE OF PERSIA: Michael Bruce Ellis – VFX Supervisor – Double Negative

Michael Bruce Ellis has worked at Double Negative for over 10 years. He began in the studio's roto department, rose quickly, and became visual effects supervisor on movies such as WORLD TRADE CENTER and CLOVERFIELD. He recently completed the visual effects of PRINCE OF PERSIA.

What is your background?
I began my career as a graphic designer in TV, working on Channel Identities, Promos and Title Sequences. I switched career in 1999 to join Double Negative’s 2D department as a Roto Artist. Apart from a short stint at Mill Film to work on one of the HARRY POTTER movies, I’ve been at Dneg ever since.

How was the collaboration with Mike Newell and the production VFX supervisor Tom Wood?
Mike Newell is a great director who is very focused on storytelling and the actors' performances; he's not so concerned with the minutiae of visual effects. Tom Wood had a great deal of input in coming up with creative solutions, and we had a lot of scope to try out ideas and concepts, although rule number one was always that the storytelling is crucial and cannot be obscured by the images, however beautiful!

What did Double Negative make on this show?
Dneg were asked to work on 4 main scenes in the movie, which involve the "magical" aspects of the story: the three rewinding scenes when the dagger of time is activated, and the climactic Sandglass end sequence.

We had around 200 shots, which took us 18 months to complete.

Can you tell us about the visual design of the slow motion effect?
Early on in the project we'd discussed creating a very photographic open-shutter look for the "rewind" effect. Tom Wood had given us reference on long-exposure photography, in which a moving subject creates a long smear effect as it moves through frame. This had been done before with static objects frozen in time, using an array of several stills cameras with long exposures which were then cut together to make a consecutive sequence. But that gives the appearance of a camera moving around a frozen object; we wanted the camera moving around a moving human form that had a frozen long exposure. This, as far as we knew, had not been done before, so we needed a new technique in order to achieve it. Lead TD Chris Lawrence began exploring a technique called Event Capture to see if it could help us achieve the look we wanted.

Can you explain to us what it is and what it can do?
We'd done some work previously on "Event Capture": the QUANTUM OF SOLACE freefall sequence used the technique, and then we developed it further for PRINCE OF PERSIA. It allowed us to achieve something that couldn't be done any other way.

This is a technique which records a live scene using multiple cameras, then reconstructs the entire scene in 3D, allowing us to create new camera moves, slip timing of the actors, change lighting, reconstruct the environment and pretty much mess around with whatever we wanted.

The technique works by shooting the action with an array of locked cameras set in roughly the path that you plan your final camera to move along. We ultimately used a maximum of 9 cameras at a time. Precise calibration of camera positions, lens data and set details allows us to combine all 9 cameras to reconstruct a 3D scene which has original moving photographic textures.

As our new 3D camera moved around the scene, we transitioned between each of our 9 cameras to give the most appropriate texture. One problem we found with this technique is that, as our photographic textures are derived from locked camera positions, specular highlights tend to jump over an image rather than smoothly roll over a surface as they do in real photography. We had to correct this by manually painting out such problems.

The great advantage of this technique was that it answered all of our technical requirements while giving us great creative freedom. With some restrictions based on texture coverage, we could essentially redesign live action shots after they’d been shot. The camera is independent from the action. A camera move can be created after the shot has been filmed, actors’ timing can be slipped and they can be manipulated to break them apart or change them as if they were conventional 3D.
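As a rough illustration of the camera-transition idea described above, the virtual camera's texture could be blended from its nearest locked cameras by inverse distance. This is a hypothetical sketch; Dneg's actual projection and blending pipeline is proprietary, and all names and weighting choices here are assumptions.

```python
import math

def blend_weights(camera_positions, virtual_cam, top_k=2):
    """Weight each locked camera by inverse distance to the virtual
    camera, keeping only the top_k nearest so distant views don't
    bleed in. Illustrative only -- not the actual Dneg blend."""
    dists = [math.dist(c, virtual_cam) for c in camera_positions]
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:top_k]
    inv = {i: 1.0 / max(dists[i], 1e-6) for i in nearest}
    total = sum(inv.values())
    return [inv.get(i, 0.0) / total for i in range(len(camera_positions))]

# Nine locked cameras spaced along a quarter-circle arc (hypothetical rig),
# with the virtual camera partway between two of them.
rig = [(math.cos(i * math.pi / 16), math.sin(i * math.pi / 16)) for i in range(9)]
weights = blend_weights(rig, (0.6, 0.8))
```

Only the two nearest cameras contribute, so as the virtual camera travels along the rig the texture cross-fades from one locked view to the next, which is the behaviour the interview describes.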

Can you explain how the shooting went for the slow motion sequences?
Each rewind scene is constructed so that we see a regular piece of action leading up to the dagger being pressed; the action then rewinds to an earlier part of the scene, and then plays forward again with an alternative outcome.

The rewind effects work had to fit seamlessly into a regular forward action scene and we’d need the actors to repeat everything as closely as possible. It seemed like the most logical thing to do was to shoot the rewinds straight after the forward action as it appears in the movie. The actors still had the moves and performances fresh in their minds and we could shoot with the same sets and keep the lighting set-ups as similar as possible.

The technique we were employing required clean, crisp photography with a minimum of motion blur but a maximum depth of field. This gave us a better result when projecting our 9 cameras onto 3D geometry and was valuable in creating convincing new camera moves as it meant that we could apply our own motion blur and depth of field.

This was a problem because we’d need a lot of light hitting our subjects and all of our rewind scenes occurred at night or indoors. John Seale and our VFX DoP Peter Talbot came up with a way of boosting the scene lighting universally by 2 to 4 stops. It meant that the rewinds could keep the same lighting feel with shadows and highlights matching the forward action but give us the best possible images to work with.

So it was really the transition in the shoot schedule from Forward action to Rewind Action that took the longest time to set up because we had to accommodate this boost to the lighting. As soon as we had the first rewind set-up in the can, the others followed much more quickly. We’d carefully planned the position of each camera and marked up the set accordingly so we were quickly able to set-up our cameras for each shot.
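Because the plates were shot sharp with maximum depth of field, motion blur could be reintroduced synthetically once the new camera move existed. A common way to do this is to average sub-frame samples across a virtual shutter interval; the sketch below illustrates that idea only, with `render(t)` standing in for re-rendering the reconstructed scene at time t (a hypothetical stand-in, not an actual Dneg tool).

```python
def motion_blur(render, frame, shutter=0.5, samples=8):
    """Average evenly spaced sub-frame renders across the shutter
    interval to add synthetic motion blur to a sharply shot plate.
    Parameter names and defaults are assumptions for illustration."""
    t0 = frame - shutter / 2.0
    acc = None
    for i in range(samples):
        sub = render(t0 + shutter * i / (samples - 1))
        acc = sub if acc is None else [a + b for a, b in zip(acc, sub)]
    return [a / samples for a in acc]

# Toy "image": two pixel values that vary linearly with time, so the
# symmetric shutter average equals the value at the frame centre.
blurred = motion_blur(lambda t: [t, 2.0 * t], frame=10.0)
```

The same sampling approach extends to depth of field: because the reconstructed scene has real 3D depth, defocus can be dialled in per shot rather than being baked into the photography.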

Did you create digital doubles for these sequences?
Yes but not in the conventional sense. Event capture gave us a digital human form for each of the actors. But the process is not perfect and we still had to do a lot of body tracking. We ended up with grayscale Digi-doubles onto which we projected moving textures from our 9 cameras, giving us real photographic textures on very accurate 3D human forms.

Can you tell us how you created those beautiful particles?
Our effects 3D supervisor Justin Martin and 3D leads Eugenie Von Tunzelmann, Adrian Thompson and Christoph Ammann developed a look and technical approach for the particles. All of our sequences revolved around the magic sand, and we wanted the viewer to feel that they were seeing the same substance in the intimate rewind shots as in the wide sandglass chamber shots. When we'd created a 3D figure with full photographic textures and a new camera move, we were free to try numerous creative ideas for both the rewind trail effect and the ghost particle effect. We did some work in streamlining our particle set-up, running tests that pushed the number of particles we could render up to a billion. In the end we found that we didn't need that many: about 30 million particles on the ghost body and 200 million airborne particles. We found that we could create very organic, magical particles using Squirt (our own fluid sim) and Houdini.

How did you work with Framestore (which made the snakes) for the Oasis sequence?
Maya scene files and rendered elements were passed backwards and forwards between the facilities. In some of the shared shots it proved more efficient for Dneg to take the shot to final; in others it was Framestore. We just kept an open dialogue between facilities to keep work on shared shots flowing as smoothly as possible. For the rewind shots it made sense for Dneg to create a camera move, then pass that over to Framestore to render a snake, which we'd then get back both comped and as an element in order to create rewind trails.

The final scene is really very complex. How did you achieve it?
The brief for the sandglass scene at the end of the movie was to create a digital environment that felt 100% real yet had an enormous light-emitting crystal tower in the centre, filled with moving, twisting sand. The sand at times needed to present images from the past inside the crystal, and the chamber had to be collapsing all around the actors. Also, if that wasn't enough, the sand inside the crystal had to escape and start destroying everything, barreling into walls and knocking down stalactites.

We knew we could create the underground rock cavern but the crystal was a bigger challenge.
What does a 300 foot crystal filled with light emitting sand look like?
We looked for reference, but there really isn't anything; it wasn't ice. The closest thing we found was some giant underground crystals, but they just looked like photoshopped images.

In the end we went out and bought a load of crystals from a local New Age store; we shone lasers through them, lit them with different lights, and played around with them, copying what it was that made them feel like crystal: the refraction, the flaws, etc. Peter Bebb, Maxx Leong and Becky Graham used this and built an enormous version of it.

The biggest challenge of this sequence was achieving the scale, the crystal is such a crazy object. We went to a quarry in the UK and took lots of photos. We reconstructed the rock surface in 3D and projected textures onto the geometry, so that it became a very real rock surface built to a real scale.

Another thing that helped us with scale was adding all the falling material. Christoph Ammann and Mark Hodgkins spent a lot of time working on the way that rocks would fall from the roof and break up, and how they would drag dust and debris with them. Getting the speed of falling material right really helped with our scale. Adding atmosphere also helped: we added floating dust particles which are barely readable, but which kind of subconsciously add a feeling of space and distance.

What was the biggest challenge on this show?
Our most challenging role on PRINCE OF PERSIA was to create the Dagger Rewind effect.

Our brief for the Dagger effect consisted of three main requirements, which were needed to tell this complex story point.

The person who activated the dagger needed to detach from the world so that they could view themselves and everything around them rewinding. We as the viewers needed to detach with them so that we could see the rewind too. We needed a way of treating the detached figure to tell the viewer that he is no longer part of our world. We called this the “ghost” effect.

The world that the ghost sees rewinding needed to have a signifying effect which would show us that it was the magical dagger that was rewinding time. When the dagger is activated we needed to see people moving in reverse in a magical way. We called this the “rewind” effect.

The dagger needed to change the whole environment in some way when time is rewinding so that we could clearly tell the difference between rewinding shots and regular forward action shots.

So we needed an approach to the Dagger Rewind effect which could achieve all of these things. The same actor would need to appear twice in many shots moving both forward and in reverse simultaneously with 2 distinctly different looks, the “ghost” and the “rewind” effects. We’d need to freeze and rewind some aspects of the same shots. We’d need to relight scenes.

On top of all of this, we knew that we'd need an approach that was very flexible. We knew that the choreography of each shot was going to be very complicated, with inevitable changes to actors' positions or camera moves needed to help convey the story as clearly as possible. Who was standing where, which direction they were moving in, and whether they were in regular time, frozen or in reverse were all questions that could be answered at the previs stage, but we knew that with the addition of the looks and effects we wanted, this choreography would probably need to change a little after shooting.

How many shots have you done and what was the size of your team?
200 shots, with a small team which ramped up to around 100 artists at our busiest time.

Was there anything in particular that prevented you from sleeping?
The most difficult shot occurs when Dastan activates the dagger for the first time, bursting out of his body as a particle ghost and watching himself rewind in time. The shot travels from a mid shot to extreme close-up, then back out to a wide. We had to design everything about the shot. It's fully CG and we get very close to Dastan's face, which had to be completely recognizable. It's also absolutely covered in particles, which the camera passes through. Editorially we had to tell a crucially important story point; creatively it had to look magnificent; and it was a huge technical challenge. Yep… a little lost sleep on that one!

What are the four films that gave you the passion for cinema?
JAWS – I love everything about it…particularly the rubber shark.
ALIEN – Giger and Scott made this movie feel like it came from another planet!
BLUE VELVET – Lynch really gets under the skin
THREE COLORS trilogy – beautiful movies
SOME LIKE IT HOT – can’t stop at 4

A big thanks for your time.

// WANT TO KNOW MORE?
Double Negative: PRINCE OF PERSIA dedicated page on Double Negative website.

© Vincent Frei – The Art of VFX – 2010

THE CRAZIES: Josh Comen and Tim Carras – VFX producer and VFX supervisor – Comen VFX

Founded in 2006 by Josh Comen, Comen VFX has participated in many projects including TV series like THE SOPRANOS or WEEDS and on movies such as A PERFECT GETAWAY, NEXT, RISE or THE SPY NEXT DOOR.

In the following interview, they talk about their work on THE CRAZIES.

JC= Josh Comen, VFX Producer // TC= Tim Carras, VFX Supervisor

What is your background?
JC: For the past eight years I have worked as a visual effects producer on feature films, television, music videos, and commercials. Comen VFX was founded in 2006. It is part of Picture Lock Media, parent company to Comen VFX and Picture Lock Post.

TC: I first became involved in visual effects at the University of Southern California, where a group of us organized a student-run VFX studio. I subsequently worked as a freelance compositor, designer and effects supervisor before joining Comen VFX as visual effect supervisor in 2007.

Can you explain to us the creation of Comen VFX?
JC: I created Comen VFX for the sole purpose of having a company that could quickly adapt to the needs of both the director and the production. Visual Effects and the methods to complete them on budget and on time are always changing. I thrive on navigating those waves, and charting our course!

What kind of effects have you made on this movie?
TC: We did a range of shots on THE CRAZIES, including compositing, set extensions, bullet hits, and paint work. In addition, we designed and composited a graphical user interface for the Sheriff’s computer.

What were the challenges on this show?
TC: Designing the computer user interface was the biggest creative challenge we faced on THE CRAZIES. It had to be visually simple, but efficient at conveying specific information to the audience at a glance. It had to feel organic, but we couldn’t borrow any design elements from familiar Mac or PC systems. It’s amazing how much of the visual language of computing comes from the two main operating systems in use today, and how much R&D is required to generate original artwork that feels natural. And of course, we had to create all that on a tight schedule and with finite resources.

How was your collaboration with the director?
JC: We would receive feedback from the director directly and via editorial.

TC: Breck Eisner has a keen sense of what he wants in his movie, but he also understands the utility of visual reference material. Even for shots that might be taken for granted in another context, Breck was always interested in seeing sample images we'd prepare to help communicate the look of the shot, or sending samples of his own. Having pictures to look at allowed us to communicate in a much more visual way than words alone.

What is your software pipeline?
TC: This show occurred while we were in the middle of transitioning from Shake to Nuke, so the compositing was split about half and half between those platforms. We also used Photoshop for computer UI design, and Motion for particle systems.

What did you keep from this experience?
TC: Good communication is everything. When everyone involved is working toward the same goal, things tend to fall into place organically.

What is your next project?
We are currently working on THE FIGHTER, HOLLYWOOD DON’T SURF and YOUNG AMERICANS.

What are the 4 movies that gave you the passion for cinema?

JC: There are certainly many movies that have given me a passion for cinema. At the top of that list for me would be RISKY BUSINESS because I am all for the messages it gives: Life is about taking risks, you gotta risk big to win big. I thrive on taking risks!

TC: I think THE MATRIX and DARK CITY were the first films in the digital age that really got me thinking about visual effects as a tool that could really change the way we tell stories. Peter Jackson’s LORD OF THE RINGS trilogy extended that concept into bigger and brighter environments and characters. But setting VFX aside, what grabs my attention is films like THE SHAWSHANK REDEMPTION, where a fascinating story is told in a way that is unique to cinema.

Thanks for your time.

// WANT TO KNOW MORE?
Comen VFX: Official website of Comen VFX.

© Vincent Frei – The Art of VFX – 2010

ROBIN HOOD: Richard Stammers – VFX Supervisor – The Moving Picture Company

After starting his career in 1992 at Animal Logic, Richard Stammers joined MPC in 1995. He has participated in many of the studio's projects and worked as VFX supervisor on movies such as WIMBLEDON, THE DA VINCI CODE and its sequel ANGELS & DEMONS, and ELIZABETH: THE GOLDEN AGE.

What is your background?
I trained as a graphic designer in 1991, but my final year at university was spent predominantly doing traditional animation. My first job in the industry was at Animal Logic in 1992, where I was employed as a junior designer creating television graphics and animation. Whilst I was able to learn the VFX tools of the trade in Australia, I kept my design roots, and upon returning to London I split my time between VFX compositing and designing/directing TV graphics and commercials. I joined MPC in 1995 to focus entirely on creating visual effects, initially in commercials, later making the transition to features in 2002.

What did MPC do on this movie?
One of MPC's main challenges was to create the invading French Armada and the ensuing battle with the English army. A CG fleet of 200 ships and 6000 soldiers was added to the 8 practical boats and 500 extras used in principal photography. MPC used Alice, its proprietary crowd generation software, to simulate the rowing and disembarkation of French soldiers and horses, with all water interactions being generated using Flowline software. The defending English archers and cavalry were also replicated with CG Alice-generated clips and animated digital doubles. MPC relied predominantly on its existing motion capture library for much of ROBIN HOOD, but a special mo-cap shoot was organised to gather additional motion clips of rowing and disembarking troops and horses.

MPC's digital environment work was centred on two main locations: London and the beach setting for the French invasion and final battle. A combination of matte painting and CG projections was used to recreate the medieval city, which featured the Tower of London and included the original St. Paul's Cathedral and old London Bridge under construction in the city beyond. The production's football-field-sized set provided the starting point for MPC to extend vertically and laterally, and in post-production alternate digital extensions were also created to reuse the set three times as different castle locations. Each extension was a montage of existing castles chosen by Ridley Scott and production designer Arthur Max. For the beach environment, MPC had to create the cliffs that surround the location, which were added to 75 shots. Once approved in concept, the cliff geometry was modelled in Maya and interchangeable cliff textures were projected depending on the lighting conditions.

MPC was also responsible for creating the arrows for various sequences on the film. Practical blunt arrows were used in production wherever possible, but most shots presented safety issues, so digital arrows were animated instead. Arrows were added to over 200 shots, with 90% of these being handled by the compositing team using Shake and Nuke. MPC developed proprietary 2D and 3D arrow animation tools to assist with the volume of arrows required, which included automatically generating the correct trajectory and speed, and controls for oscillation on impact.

How was your collaboration with Ridley Scott?
Very good. He's always very clear and concise about what he wants, and also takes an interest in the financial implications of his requirements; he will spend the VFX budget where he feels it's most suited. He usually would brief me with a quick sketch and would often follow up with more detail by drawing over a printout of a shot. I'd get my team at MPC to interpret this into the 3D realm as simple un-textured Maya geometry over the plate, and re-present this to Ridley for approval. Where there was any ambiguity over a shot's requirements, I'd present a few options to choose from, so we had a clear brief before starting any detailed work on a VFX shot.

Can you explain to us the shooting of the French Armada and its landing?
The location for this shoot was a beach called Freshwater West in Pembrokeshire, Wales. A crew of up to 1000 people were there filming for 3 weeks, in order to capture enough footage for the 20 minutes of screen time the battle was edited to. Further time was scheduled at Pinewood Studios' Paddock Tank and Underwater Stage to complete some of the shots that were considered impractical or too dangerous to achieve on location. The production was able to create 4 real working landing craft and 4 rowboats to represent the armada, and had as many as 500 extras on some days. Ridley's shooting style for this battle involved staging large-scale performances, each lasting 4-5 minutes, and getting as many cameras as possible covering the shots he needed. This would take some time to set up and rehearse, and then it would be frenetic for a few minutes whilst they shot. He'd do several takes, then move on to the next key stage of the battle.

The shooting conditions were extremely difficult and varied, which caused great continuity problems. Changing light and weather created the usual inconsistencies, but the changing tide moved at 1 metre per minute, so the size of the beach was constantly fluctuating, and the shooting crew had to be equally mobile, with all equipment on 4×4s or trailers. For the VFX crew this meant the 10 cameras Ridley was using were moving constantly, so wrangling all the camera data and tracking markers, essential for our matchmove department, was a huge task. We overcame much of this by capturing all camera locations with Leica Total Station surveying equipment, and later incorporated the data into a Maya scene with a LIDAR scan of the beach location. All cameras were armed with zoom lenses to deal with Ridley's constant requests to reframe for particular compositions he wanted, and often we'd find takes that had been shot at half a dozen different focal lengths. Despite me reminding Ridley that we needed to avoid zooming during takes (because of the added complexity of the matchmove process), inevitably some of the shots later turned over to MPC were incredibly difficult to work with.

How did you create those shots and what was the part of CG in the plates?
During the end battle most of MPC's work was supporting what was already present in the plates; in some cases the number of extras was sufficient and we'd only be adding a few boats into the background. But with 10 cameras filming and only 8 practical boats, most shots needed MPC's digital armada, CG soldiers or environment work to augment the background. There were also a handful of wider shots where MPC created the entire invasion or battle, and much of the background landscape too. Each CG shot went through the same basic pipeline: first the film scans would go to the matchmove department for camera tracking and to the comp department for colour balancing, to create a 'neutral' grade for consistent CG lighting.

The prep team would also handle any clean-up, such as marker or camera crew removal, at this stage. Once a Maya camera was available, the environment department would handle creating the cliff, and the layout team would place the armada, which started from a master boat formation, with animation cycles that could be scaled or offset to suit the conditions of the sea. We'd usually go through a couple of versions of refinement to make it work compositionally and in context with the cut. Once I had approved the boat layout, the crowd and layout teams set to work with our ALICE software to place all the soldiers in the boats and on the beach with the appropriate animation. At this stage we'd send a temp version to the editorial team to cut in, so Ridley and Pietro Scalia, the editor, had a chance to comment. We'd then know the CG content of each shot and could accurately identify the rotoscoping requirements to create all the mattes necessary to place the CG behind the foreground live action. Whilst we waited for feedback on our layouts we continued into lighting and rendering, and got the effects team working on the water interactions for the boats and crowds. Once we'd established a few key shots this process worked well. There was generally little or no feedback from Ridley, so we could progress into comp quickly and get the shots looking more final.

Can you explain the creation of a crowd shot with your software Alice?
The first stage of preparing for a large crowd show like ROBIN HOOD is to identify the motions that are going to be required. ALICE has a very sophisticated underlying motion synthesis engine that can take multiple inputs from any combination of motion capture clips, keyframe animation cycles and physics simulations, which it can manipulate to give us the resulting simulations we see on screen. This gives us a great deal of freedom when deciding how to tackle a show.

For ROBIN HOOD we relied predominantly on MPC's existing mo-cap library, but extended it with new mo-cap data captured over a 2-day shoot, specifically targeted at the disembarkation of soldiers and mounted cavalry, along with the rowing motions for the boat crews in each of the different boats. Once all the new motions arrived at MPC they were processed into the existing library through our motion capture pipeline, where our crowd team started to create the motion clip setups and motion trees which would drive the agents for the whole show.

With ALICE being fully proprietary it allows us to quickly write anything from a new behaviour, such as inheriting motion from the boat the crowd agent is occupying, to simple tools that automate and simplify tasks for other departments. For the first time ALICE was used by our Layout department who took on the challenge of populating the whole Armada.

The crowd team produced a large number of different caches for each of the different rowing motions and disembarkations required for the various different boats. We then wrote a simple interface, which the Layout team could then use to rapidly set-up, randomize, change, and offset the caches to populate all of the boats in a few simple steps.
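A toy stand-in for that layout interface might look like the following sketch: each boat gets a rowing cache, a random frame offset, and a mirror flag so no two crews row in lockstep. All names and parameters here are hypothetical; ALICE itself is proprietary.

```python
import random

def populate_boats(boat_ids, cache_names, max_offset=24, seed=7):
    """Assign each boat a rowing-cycle cache, a random frame offset and
    a mirror flag, so crews don't all row in sync. A toy illustration of
    the randomize/offset idea, not the actual ALICE interface."""
    rng = random.Random(seed)  # seeded, so layouts are repeatable
    layout = {}
    for boat in boat_ids:
        layout[boat] = {
            "cache": rng.choice(cache_names),
            "frame_offset": rng.randrange(max_offset),
            "mirror": rng.random() < 0.5,
        }
    return layout

# Populate a 200-boat armada from three hypothetical rowing caches.
armada = populate_boats([f"boat_{i:03d}" for i in range(200)],
                        ["row_slow", "row_fast", "row_ragged"])
```

Seeding the generator matters in production contexts like this: the same layout can be regenerated identically for every downstream department.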

Once the first pass had gone through layout, the crowd team would take over any of the shots that required more complex simulations to top up the action. This generally involved tweaking or adding to the disembarking to make it feel more chaotic, ranging from people being dynamically hit with arrows to stumbling through the water, whilst providing the data required for the FX team to add in the interactions.

Once I was happy with the combined work of crowd and layout the next stage was to do the cloth simulations for all of the agents. Most agents only required the looser cloth of the lower body and any flags that were being carried to be simulated and this was handled by ALICE’s inbuilt cloth solver, before the resulting caches automatically flowed into FX and lighting departments.

There are a very large number of arrows that are drawn in this movie. How did you manage this?
Knowing that we had a large number of arrow shots on the show meant we needed an efficient process to deal with them. I’d had great success on a past show, Wimbledon (2004), animating tennis balls to mimed rallies, much of which was achieved as a 2D-only compositing solution in Shake. I felt that we could do the same on Robin Hood, as the trajectories were similar, but even simpler. One of the show’s compositing leads, Axel Bonami, took the process further by developing a series of Shake macros which only required the artists to place the start or end position of an arrow. The macro would use a still of a real arrow at the most appropriate perspective for the shot and then automate the animation process. He added further controls for impact oscillation so the artists could dial this in if necessary. Arrows were added to over 200 shots, with 90% of these being handled by the compositing team using Shake and Nuke. MPC also developed proprietary 3D arrow animation tools to assist with large volumes of arrows where the 2D solution was unproductive. This was essentially a Maya particle system, but it could be tied into the ALICE pipeline to allow crowd agents to fire arrows or be killed by them.
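The math underneath a 2D macro like that is simple: linear travel between two screen points plus a parabolic lift, and a damped sine for the impact wobble. A minimal Python sketch of those two pieces (illustrative only; Axel Bonami’s actual Shake macros are not public, and these function names are invented):

```python
import math

def arrow_position(start, end, t, arc_height=80.0):
    """Screen-space position of an arrow at parameter t in [0, 1],
    flying on a simple parabolic arc between two 2D points.
    A sketch of what such a compositing macro might compute."""
    x = start[0] + (end[0] - start[0]) * t
    y = start[1] + (end[1] - start[1]) * t
    # Parabolic lift: zero at both endpoints, peaking mid-flight.
    y -= 4.0 * arc_height * t * (1.0 - t)
    return x, y

def impact_wobble(frames_since_hit, amplitude=6.0, damping=0.25, freq=1.8):
    """Damped oscillation an artist could dial in after the arrow sticks."""
    return (amplitude * math.exp(-damping * frames_since_hit)
            * math.sin(freq * frames_since_hit))
```

With the arrow still oriented along the local tangent of the arc, the artist only ever places the two endpoints, which matches the "start or end position" workflow described above.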

How long have you been working on this project?
I started in March 2009, and we delivered our final shot on 12th April 2010, so around 13 months in all, which seems to be about the minimum these days for VFX supervising a show right the way through.

What was the biggest challenge?
There’s a sequence of shots where the merry men return to England in King Richard’s ship. The production weren’t able to shoot this boat at sea, and Ridley wanted it to be windy and rough, so the chances of shooting the right kind of sea plate were slim. It was storyboarded as one wide shot only, so we looked into stock footage to use, but Ridley wasn’t happy with any of the options. Instead he turned to a previous film of his, ‘White Squall’, and cut in a sequence of shots from there, which featured a modern sailing ship and included insert shots of the sails. The 5 shots we created involved replacing this ship with a medieval CG replacement. There were no similarities between the two styles of boat, and furthermore it was so close to camera that we had to completely rebuild our asset to a higher level of detail and populate the deck with CG sailors, horses and windy canopies. We had no camera information for the plates, and they were ‘scope anamorphic’, so the matchmoves were tricky too. The finals were beautiful – a real testament to the teams that made it all work – a great example of just dealing with whatever gets thrown our way!

Were there shots that made you lose your hair?
Well, things never got that bad, but there were a few shots that I did worry over. One was an arrow POV shot that represents the moment at the end of the film when Robin Hood fires a deadly blow at Godfrey, Mark Strong’s character, as he escapes the battle on horseback. Pietro felt that this was an important moment in the film that mirrored a similar moment near the beginning, when Robin wounds Godfrey in a similar attempt to kill him. There were many discussions on how we could shoot it, but no clear solution that worked with the limitations of the beach location. MPC created a previs of the shot, as it was important to visualise the key elements required and how we could break the shot down into achievable chunks. The first half of the shot sees mostly sky and digital environments that we were already creating, but the second half flies right into the back of Godfrey’s neck whilst he gallops along the shoreline. As the shot took place in the shallow waters of the beach, this was something I did not want to attempt as a full CG shot, because of the complexity of recreating the sea. I opted to shoot a moving plate of the beach and sea to match the previs as best as possible, and separately shoot Mark Strong’s stunt double as a bluescreen element, so we could manipulate it to work for the shot.

The practical and cost-effective solution was to shoot the background plate with a miniature helicopter and the foreground stunt man riding a partial mechanical horse, with MPC creating a full replacement CG horse. 20 mph winds hampered the plate shoot and left us with only a few usable takes requiring significant stabilisation, and the camera’s proximity to Godfrey’s neck required a slow Super Technocrane move to avoid injuring him. As we had to speed-ramp the shot much faster in post, we compensated by having the stunt man perform his riding actions in slow motion. It was an uncomfortable set of elements to work with, and required a lot of manipulation to piece together. The final solution involved creating a BG almost entirely in CG but retaining the live action sea, which was camera-projected back through our previs camera. Godfrey’s element was successfully pinned to a hero CG galloping horse and we started getting something that was working. But the nature of a smooth arrow trajectory made the shot look so clean and out of context with the surrounding shots, and this is where most of my concern lay. It was always going to be delivered late in the schedule, the last week in fact, and there would be no time to re-conceive the shot in another way if Ridley didn’t like it. So we set about adding as many of the attributes of the surrounding shots as we could. We changed the sky to something less pretty, added camera shake and layers of smoke to pass through, dirtied up the beach by matte painting extra detail like clumps of seaweed, and added more depth hazing overall. And with the shot carefully graded to match the shot it cut to, we had success. It took it far enough away from the feel of the previs, and worked really well in the cut – it’s a great moment in the film.

What will you remember about this experience?
The shot exceeded my expectations, which is always great. As a VFX supervisor you have to be a jack-of-all-trades, but you work with teams of artists who are masters at their disciplines, so you take for granted high expectations – exceeding them is always a bonus.

What is your next project?
Well, nothing confirmed. I’m busy at MPC pitching on possible new shows, but nothing I can talk about yet.

What are the four films that have given you the passion for cinema?
As a student, sequences created by Ray Harryhausen and Terry Gilliam are what inspired me to take up animation. TERMINATOR 2 and JURASSIC PARK both had jaw-dropping moments, which to me pushed the boundaries of VFX at a time when I was quite junior to the industry. They inspired me to do better. I always loved David Lynch’s DUNE and Ridley’s ALIEN, I’m happy to watch these again and again – few films have that effect on me these days.

Thanks so much for your time.

DETAILED SHOT BREAKDOWN

Robin and Merry men leaving the Tower of London.
The foreground live action plate was shot on the backlot of Shepperton Studios. MPC created a digital matte painting of the castle walls, the Tower and the river. The element used for the river was taken from a plate shot at Virginia Water, Surrey. Ridley wanted the town of London to be full of life and the river bank to be busy like a market, so MPC bolstered the limited number of extras with around 200 CG people in the town, CG guards on the castle and cloned live action boats on the river. In the foreground, additional huts were created to increase the housing density, and multiple layers of smoke were added. When reviewing the final version of this shot at MPC, Ridley said he liked it so much that he wanted to live there! This is one of 14 other London environment shots that MPC created for Robin Hood.

Robin and Merry arriving at the Tower of London in King Richard’s ship.
The live action helicopter plate was shot on location at a lake at Virginia Water, Surrey. The aerial unit used a Panavision Genesis camera for their photography. MPC created a CG environment where much of the original backplate was replaced with the Tower, the surrounding city of London and the landscapes beyond. The design of the Tower and its immediate surroundings was a collaboration between the Visual Effects and Art Departments, with the final layout and orientation coming from meetings with Ridley Scott, production designer Arthur Max and visual effects supervisor Richard Stammers. Whilst quite a substantial set was constructed as a riverside entrance to the Tower, the jetty, wall and archways occupied only a small part of the plate in this case, but provided MPC with the ‘anchor point’ for their digital extensions. Environment lead Vlad Holst built the city in Maya with basic geometry to represent all the key features. This was presented to Ridley for comments, and some adjustments were made before all the matte painted projections were started. The final DMPs, created by matte painter Olivier Pron, extended the city to the horizon and incorporated the original stone London Bridge under construction and old St Paul’s Cathedral in the distance. The lake was extended to become a river as a rendered CG element, in order to incorporate all the reflections of the new digital environment. The banks were populated with CG boats and CG crowds gathered to witness what they believe to be King Richard’s return from the crusades. King Richard’s ship and some of the foreground rowboats were in the original plate, but these were added to with 2D replications, and the motor wake of Richard’s ship was removed.

The combined armies of King Phillip and the Northern Barons approach the beach where the French Armada have begun landing.
The live action helicopter plate was shot on location at Freshwater West, Pembrokeshire in Wales and was captured using a Panavision Genesis camera.

This shot was turned over to MPC early in the schedule and became a key development shot to test the look of our CG assets. It was used to conceptualise the digital environment work, which required the creation of cliffs surrounding this location – a necessary story point to create a tactical advantage for the English archers. The shot was also used to determine the layout and number of boats in the French Armada and the number of soldiers on the beach. It paved the way for over 150 other shots that required views of the cliffs or the French Armada.

For the design of the cliffs, MPC’s environment lead Vlad Holst created some Photoshop concepts for Ridley. Initially these were based on the white chalk cliffs of Dover, as this was the scripted location of the French invasion. The final design, however, was based on the practical necessity of having a real cliff location for non-VFX shots in close proximity to the main beach location in Wales. These cliffs, whilst quite different from the concepts, were a good geological match to the beach, and ultimately provided a better blend to the sand dunes behind the beach. Textures of the cliffs captured by the aerial unit were tiled, graded and projected onto simple Maya geometry that blended into a Lidar scan of the beach location. The cliff geometry went through a number of shape variations for Ridley’s comments, with the approved version including a wide access path to the beach for the bulk of the cavalry and a narrow gorge from which Marion could join the battle later.

Ridley wanted the end battle to feel like it involved around 2,000 soldiers on each side. The French Armada was made up of 200 CG boats, and this shot featured about half the visible fleet and 1,500 disembarked French soldiers. The practical photography provided a good guide for scale and lighting, with 4 landing craft, 4 rowboats, over a hundred extras on the beach and 25 cavalry in the foreground. Ultimately much of this was replaced with CG when the beach was widened in order to maintain continuity of the tide position throughout the sequence. Boat layout and animation was handled in two stages, divided by a period where matchmove artists would roto-animate the waves in the backplate. This allowed detailed animation and interaction with the ocean surface to be achieved.

MPC’s crowd simulation software ‘Alice’ provided digital artists with the tools to handle the number of CG soldiers required. Alice utilised MPC’s motion capture library for most of the animations, but specific actions like rowing, disembarking soldiers and horses were realised through a dedicated mo-cap shoot. Digital effects elements such as wakes and splashes were created for the boats and CG soldiers in the water using pre-cached Flowline simulations, which were automatically placed with each Alice crowd agent at render time.
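Conceptually, "automatically placed with each crowd agent at render time" is a per-agent lookup: every agent flagged as being in the water gets one of the pre-simulated caches placed at its position. A toy version of that idea (the data layout and names are invented; the real Alice/Flowline hookup is proprietary):

```python
def assign_splash_caches(agents, splash_caches):
    """Attach a pre-simulated splash cache to every agent flagged as
    being in the water, cycling through the available caches for
    variation. Illustrative sketch only, not MPC's pipeline code."""
    placements = []
    for i, agent in enumerate(agents):
        if agent["in_water"]:
            placements.append({
                "agent": agent["name"],
                "cache": splash_caches[i % len(splash_caches)],
                "position": agent["position"],
            })
    return placements

# A few example agents wading ashore.
agents = [
    {"name": "soldier_001", "in_water": True,  "position": (0.0, 0.0)},
    {"name": "soldier_002", "in_water": False, "position": (2.0, 0.0)},
    {"name": "soldier_003", "in_water": True,  "position": (4.0, 0.0)},
]
placed = assign_splash_caches(agents, ["splash_a", "splash_b"])
```

The appeal of this pattern is that the expensive fluid simulation runs once per cache, while thousands of agents reuse the results at render time for almost no extra cost.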

The small number of foreground cavalry were multiplied with the addition of full CG riders. Safety regulations prevented the helicopter’s camera from getting close enough to the live action cavalry, so Ridley requested that MPC add the additional CG characters right into the foreground and under the camera. For this task, ‘Alice’ crowd agents, which are inherently suited to being smaller in frame, were promoted to a higher level of detail. Additional modelling, texturing, animation, cloth and fur simulations were required to provide the extra details and nuances for what became almost full-frame CG renders. The effects team again provided interaction elements for the horses’ hooves, in the form of mud clumps, grass and dust, augmented further in the final composite with additional live action dust elements.

// WANT TO KNOW MORE?
The Moving Picture Company: Dedicated page to ROBIN HOOD on MPC website.

© Vincent Frei – The Art of VFX – 2010

IRON MAN 2: Ged Wright – VFX Supervisor – Double Negative

After working at Mill Film on HARRY POTTER AND THE CHAMBER OF SECRETS, Ged Wright joined Double Negative in 2002 and worked on HARRY POTTER AND THE GOBLET OF FIRE, HARRY POTTER AND THE ORDER OF THE PHOENIX and 10,000 BC. He has just finished overseeing IRON MAN 2.

What is your background?
I worked in Australia doing commercial work for a number of years before relocating to the UK in 2001. I joined Mill Film for HARRY POTTER AND THE CHAMBER OF SECRETS, then moved to Double Negative in 2002 and have been here ever since.

How was your collaboration with director Jon Favreau and production visual effects supervisor Janek Sirrs?
We worked closely with Janek throughout the project, making sure we gathered enough reference information and data whilst in Monaco and at Downey in L.A.
The process became more involved, with much more input from Jon Favreau, as we moved into the animation and postviz stage of the project and got into the beats and details of how to tell the Monaco fight sequence.

What are the sequences made at Double Negative?
We were responsible for the Historic Grand Prix race in Monaco, which culminates in an on-track battle between Whiplash and Iron Man in his suitcase suit.

Can you tell us about the shooting of the Monaco sequence? Which elements were real and which were CG?
2nd unit photography took place in Monaco, without the actors or any of the art department cars. Initially Janek was looking to shoot at least some real race cars in Monaco, but the logistics of shooting on location proved too much to overcome.
Production was able to obtain permission to shut off areas of the racetrack – which in Monaco means functioning city streets – in the lead-up to the race in the early hours of the morning.
A hotted-up Porsche with Vista cameras mounted at the front and rear was driven as quickly as possible through these areas, and the plates served as the basis for the in-car driving shots.
In addition to the work 2nd unit was doing, we shot 180 panoramas from either side of the track, about every 15 feet along the track, which served for reflection and plate reconstruction information.
All of the race cars are CG up until they are cut in half, which was handled practically in L.A. with CG enhancements.

How did you recreate Monaco in CG?
Most of Monaco was a combination of matte painting and reprojections using the photography we had taken; we ended up with around 7 TB of photographic data.
The fight area, which was built as a set in L.A., was also built digitally by us for when we could not use the photography or needed to extend it.

Can you tell us more about the race cars cut in half?
The cars were rigged by SFX to cut in certain ways and tumble down the track. We added whip contact effects, with Monaco and the crowd behind them.

How did you recreate the lighting of the shoot?
Our lighting pipeline is HDR based, we shoot as much HDR information as possible onset.
This was complicated for the fight sequence, as the lighting in Downey was unfortunately overcast and coming from the wrong direction, and there was a very large green screen where the harbour should be. So we rebuilt the lighting environment from stills and painted out the sun and any additional lights to allow more flexibility when lighting the shots.

Have you developed specific tools (for lightning or fire) for this sequence?
We used a number of inhouse tools and relied heavily on Houdini for our FX work.

How did you collaborate with Legacy Effects on the Iron Man armor and Ivan Vanko?
The whip FX were designed and implemented at dneg. The suit Ivan wears was practical and handled by Legacy.

About Iron Man’s mobile armor: how did you design and build it? Did you receive elements from ILM?
The MKV armour was separate from the work ILM did and there was no overlap of the work on this project.
Legacy built a 1/3-size model, which we used as a starting point and then refined and added to throughout the project. We were modelling the suit and suitcase until quite late in the project, with the MK5 being made up of over 3,000 individual pieces.

Can you explain how you animated the deployment of the suitcase into the mobile armor and its choreography?
We began with a lot of concept art which resembled comic book frames; this was very useful but could only take us so far.
In 3D the first step was to take the fully formed armour and try and fit it into the suitcase, which it does….just.
Jon wanted the armour to move in a consistent and mechanically believable manner which was a challenge considering what we need the individual pieces to do.
In the end focusing on what each shot of the suit up sequence needed to most clearly communicate was the key to solving this problem.

What information did Jon Favreau give you for Iron Man’s animation?
Jon has a very clear idea of how Iron Man should move and had established a language in the first film so there was a lot of catching up for us to do. One of the key challenges for us was the interaction with Whiplash as they are connected for half the sequence and there is only so much we could do to alter the performance, transition of weight etc.

How did you achieve such a realistic metal look for the armour?
The shaders were built with the latest version of dneg’s inhouse shader set-up which allows extensive use of co-shaders. This allowed the lookdev artists to build and experiment with shaders in a more intuitive way.

What was the biggest challenge on this film?
The suitup sequence gave us the most sleepless nights.

What was the most difficult shot to do? And how did you achieve it?
There is no stand-out shot in this case; most of the shots in the sequence had a large number of disciplines working on them, so in a sense one of the more difficult challenges was keeping track of such complex work.

How many shots have you done and what was the size of your team?
We finalled 250 shots with around 200 crew touching the shots over the course of the project.

What did you take away from this experience?
I learnt a great deal and am pleased with the result and with how hard everyone worked. I’m not sure you are ever completely happy with the final result, which helps when embarking on the next show.

What is your next project?
I’m currently in between shows.

What are the four films that gave you the passion for cinema?
IN THE MOOD FOR LOVE, TERMINATOR 2, WITHNAIL AND I and HOWARD THE DUCK…

A big thanks for your time.

// WANT TO KNOW MORE?
Double Negative: Dedicated page IRON MAN 2 on Dneg’s website.


IRON MAN 2: Danny Yount – Creative Director – Prologue Films

Danny Yount started as a self-taught designer and creates film title sequences at Prologue Films for projects such as IRON MAN and ROCKnROLLA.

What is your background?
I started as a self-taught designer who loved using computers at a time when the industry did not take them seriously as a graphic design tool. In 1988 the Mac did not yet have color; when the color models came out in 1990, everything changed. I feel lucky – I was also there when the shift to digital video started happening, as well as the beginning of web publishing. Watching the computing industry transform the arts was very inspiring – it finally seemed that anything was possible for the individual, even someone like myself who did not go through art school. I’m now a Director at Prologue Films – I create main titles and direct live action sequences for film, TV promos and commercials.

What did Prologue make on this show?
We designed the visual effects for all the holographic and computer screen interfaces and interactions that occur in Tony Stark’s lab. We also shot live action to supplement the main title edit as well as designed the animated typography for the credits.

How was your collaboration with director Jon Favreau and the production visual effects supervisor Janek Sirrs?
Great. We had worked with Jon on the first movie so it was a thrill to be asked again. It was our first time working with Janek. What was exciting is that we were called in very early to help them visualize everything before shooting principal photography. That level of collaboration is what made everything work so well – it really gave all of us the leverage to explore everything deeply. Jon and Janek had so many ideas about things that were so inspiring to think about. We came back with additional ideas and tests that really pushed us to our limit creatively. Much of it was not used but it was really a joy to explore them and push ourselves.

Can you tell us how you designed the main title?
The filmmakers wanted the opening credits to be type over picture – starting with the intro of Mickey Rourke and his father as he passed away. Rourke then begins the task of building his own RT in a makeshift lab in his apartment. Underlying this sequence is the narrative of Tony’s successes and stardom via insert shots of newspapers and magazines hung on the wall. The problem was that they had not shot enough supportive material for this so we needed to set up our own film shoot at our studio – essentially recreating the wall and workbench. We matched the film stock and camera and got to work. We also did a few macro insert shots like the RT lighting up and sparks falling to the floor, etc. I thought it would be a good way of intensifying the edit a little.

How was the shooting of the sequences with holograms?
We attended the shoot, but the director shot all material for that and we were handed the plates.

Did you create previz to help the crew and Robert Downey Jr. for the shoot?
We did a little at first, but mostly we developed the ideas with the team at Marvel and provided a few camera tests. They then hired a company that specializes in previz to work with the director in-house to nail down the many story points in development. We used that study as a basic editorial template and went from there. The scene that required previz was where Tony Stark discovers the secret of the molecule from the expo model.

Can you walk us through the creation of a shot like those in Tony Stark’s lab, from scratch to the final result?
After a long discussion with Janek Sirrs about the kinds of things he and the director wanted for the film, we went away for a while and rallied a small group of designers to create motion tests and style frames. We also had a few VFX-based designers working on motion tests – Adam Swaab, Troy Barsness, Sean Werhli and Jesse Jones. Most of the aesthetic was created by Ilya Abulhanov and myself, but our talented team of designers and animators inspired us to push it even further as we saw what was possible. The expo sequence, led by Paul Mitchell, is a good example of that. We came up with a look that was very intricate, using a combination of contour shading and particles attached to vertices.

Aside from the standard procedures of matchmoving, tracking, roto, painting and wire removal, the most challenging aspects involved the interactions with Robert Downey Jr. We really wanted everything to flow naturally but not come across as too gimmicky or canned, like many interfaces in movies can be. We are first and foremost graphic designers who pay a lot of attention to visual communication and details like that – so every motion and animation in these complex scenes has a very specific purpose and acts precisely on cue with the actor’s performance and Jarvis, the computer system.

Did you receive elements from ILM or supervisor Janek Sirrs, like 3D scans of the sets?
Yes, we got LIDAR scans from them that informed us of the correct spatial relationships. We also shot a lot of reference photography of the set in case we needed to pull a few rabbits out of our hats. One scene in particular where this was useful was the 3D photogrammetry reconstruction of the “hall of armor” shot. It started as a locked-off plate, but the director wanted a more dynamic move applied to it.

The shots of the sequence with the city model hologram are rather long. Did that cause you any problems, technically or artistically?
Paul Mitchell (the designer who led the team on that sequence) can best describe the challenges:

“The Expo hologram sequence posed a number of issues, both technically and artistically. Artistically we had to handle a lot of detailed visual information on screen at one time, so we had to find the balance between the right level of complexity and too much confusion. We needed to make sure Tony wasn’t overwhelmed by the information presented to him. We also had to make sure we enhanced his performance, with holograms matching his eye line and arm’s reach.

Technically, the challenges were getting this long sequence to feel like one coherent piece and getting it rendered for our weekly reviews with Marvel, so it took a long time to finalize the subtleties in each shot. Getting the holographic look right and consistent was a challenge, as it needed to feel like the same hologram in the wide and close-up shots; each shot had its own issues which affected the light and transparency of the hologram.”

How did you ensure shot continuity?
That’s one of the most challenging things with so many moving parts – getting everything to sync perfectly from scene to scene. And many times scenes were shortened or lengthened editorially, which provided more challenges as time went on. But once the edit was locked we were all good – it’s just a little bumpy at times before that happens. Also, viewing details every step of the way is extremely important. We ran all our sequences from a FrameThrower system to a large plasma display for our internal reviews with the teams involved.

Which software did you use?
AfterEffects, Maya, Nuke, Shake and the Flame.

How did you design the screens’ content and their animations?
Ilya Abulhanov and Clarisa Valdez designed all the screens and directed a team of terrific animators and designers to execute the many detailed and complex interactions and data. Daniel Kloehn and Takayuki Sato spent countless hours animating the insanely detailed interactive components. It was definitely a labor of love – and fear – but mostly love.

How many shots have you done and what was the size of your team?
We did around 120 shots – many of them were omitted from the final film as the run length was reduced at the request of the studio. I think in the end we delivered around 90 finals. At the height of production we had over 30 people working on the film in three separate teams led by Paul, Ilya and myself.

How long did you work on the show?
A little over a year.

What did you take away from this experience?
We learned more about what we are capable of as a studio. We started as a small motion studio in Kyle’s house, so to see us working on this scale was very exciting. The director called us to work on this because he liked the level of detail and aesthetic we bring to the table. And we have so many talented designers coming up with crazy next-level stuff so the energy and stamina we had as a group really showed for this one. We could have all used a little more rest in the end but an opportunity like this only comes around once in a while. We gave it everything we have and I think it shows.

What is your next project?
Right now we are working on a couple of sequences for the remake of TRON as well as the end credits. The footage I saw looked amazing and I admire the director’s perspective and design sensibilities so this should be a lot of fun. I’m also a huge fan of the original film.

What are the four films that gave you the passion for cinema?
BLADE RUNNER, STAR WARS, LOGAN’S RUN and ROLLERBALL. By today’s standards the latter two would be silly, but as a 12-year-old they melted my brain. But Joseph Kosinski is doing the remake of LOGAN’S RUN in addition to TRON, so maybe there’s a chance to reconcile that. I hope I can do something for that also, but we’ll see.

To extend the list I would add ALIENS, ROBOCOP and T2.

Thanks a lot for your time.

// WANT TO KNOW MORE?
Prologue Films: Dedicated page IRON MAN 2 on Prologue’s website.

IRON MAN 2 – PROLOGUE FILMS – VFX BREAKDOWN

Prologue Films – credits list

Sequence Designers
Ilya Abulhanov
Paul Mitchell
Danny Yount

Executive Producer
Kyle Cooper

VFX Supervisor/Producer
Ian Dawson

VFX Associate Producer
Elizabeth Newman

VFX Coordinator
John Campuzano

Technical Directors
Jose Ortiz
Miles Lauridsen

Design
Clarisa Valdez
Chris Sanchez

Animators
Alasdair Wilson
Jorge Alameida
Troy Barsness
Kevin Clark
Morris May
Darren Sumich
Jonny Sidlo
Takayuki Sato
Kyung Park
Joey Park
Alvaro Segura
Man Louk Chin
Daniel Kloehn

Compositors
Chad Buehler
Christopher DeCristo
Sam Edwards
Christopher Moore
Brett Reyenger
Matt Trivan
Renee Tymn
Bob Wiatr


HOW TO TRAIN YOUR DRAGON: Simon Otto – Head of character animation – Dreamworks

After banking studies in Switzerland, Simon Otto passed through Les Gobelins animation school before being hired by DreamWorks Animation. That was in 1997, and Simon has not left DreamWorks since, working on almost all of the studio’s projects, such as THE PRINCE OF EGYPT, SPIRIT, SINBAD and FLUSHED AWAY, as an animator and lead animator. He recently completed HOW TO TRAIN YOUR DRAGON, on which he was head of character animation.

Can you tell us about your background?
I wanted to become a cartoonist at an early age. I was completely enchanted by the old Disney movies and admired the stories Franquin, Uderzo or Herge were able to create with their incredible draftsmanship. In rural Switzerland, working in such an artistic field was as unrealistic as becoming an astronaut, so I ended up doing a banking apprenticeship in my hometown.
I knew quite quickly that being a banker was not going to make me happy, so I started pursuing my artistic career with determination. After my military service I worked as a snow sculptor for about a year and shortly thereafter managed to get into the F&F Schule fur experimentelle Gestaltung in Zurich. After a summer internship at an animation studio in Lausanne (Animagination), I found my way to Les Gobelins animation school in Paris, which is one of the most prestigious animation schools in the world.

In my year, there were about 900 applicants and after three rounds of testing, they finally accepted 20 students into the program. Lucky for me, the big Hollywood studios knew about Les Gobelins, since a lot of their talent were former students. I had a contract offer in my hands about a year later. I moved to Los Angeles right after my graduation and started working as a 2D animator on THE PRINCE OF EGYPT in the Summer of 1997. I’ve been at the studio ever since and have worked as an Animator and Supervising Animator on many of the Studio’s animated features both in 2D and in CG, including THE ROAD TO EL DORADO, SPIRIT: STALLION OF THE CIMARRON, SINBAD: LEGEND OF THE SEVEN SEAS and SHARK TALE. I worked as a character designer on OVER THE HEDGE and, before HOW TO TRAIN YOUR DRAGON, I did some additional animation on BEE MOVIE and KUNG FU PANDA and was one of the Supervising Animators on the Aardman co-production, FLUSHED AWAY.

How long did you work on HOW TO TRAIN YOUR DRAGON?
I spent three and a half years on this production. For the first two years, I oversaw the entire process of bringing Nico Marlet’s character designs into the digital world. This meant doing a lot of design work myself and working closely with the modelers, riggers and the surfacing department. During this time we also had to figure out a lot of systems for the complex dragon rigs, such as simulations, flap cycle systems and pose libraries for example. Once we achieved the desired look, I took these “digital puppets” and fleshed out their personalities along with a few of our senior animators.

Once production started, my job was to make sure the style, quality and clarity of the animation was as desired while making sure we executed efficiently. I consulted the directors on everything related to character animation and, along with our seven Supervising Animators, oversaw the entire character animation team.


What is an Animation Supervisor?
On previous CG films at DreamWorks, a Supervising Animator used to oversee the animation of an entire sequence and animate all the character animation with a team of about 5-10 animators. On HOW TO TRAIN YOUR DRAGON, I was lucky to have a team of Supervising Animators available that were incredibly experienced and a good part of them had been Supervising Animators on 2D films prior to working on DRAGON. With this kind of firepower and considering the complexity of the characters – we had realistic human characters as well as complex dragon rigs – I felt strongly about approaching the movie using a hybrid casting system where each Supervising Animator was in charge of a specific character. So, depending on which character was leading the shot, we cast the shot out to the specific character teams. The advantage was that animators became tremendous experts in their characters and we were able to track very specific performance ideas throughout the movie.


How was the collaboration with the directors?
Animation directors usually spend a great deal of time with the character animation department. Since the directors are at the studio all the time, we manage to get about 1½ hours of dailies in the morning and 1½ hours of walk-around time in the afternoon, where the directors walk from desk to desk to spend one-on-one time with individual artists. These times increase as production nears its end. DRAGON was a bit different, since our directors came to the production fairly late and they still had a lot of writing to do. We mostly handled all the big issues in dailies and then I did walk-arounds with the Head of Layout and each specific Supervising Animator.
The Head of Character Animation and the Supervising Animators have a lot of exposure to the directors early on in a production as they discuss characters and sequences. Once sequences get launched into animation, we get all the animators together and discuss one shot after another. Animators usually come prepared and already bring ideas to the launch.
Chris Sanders and Dean DeBlois have a lot of experience with animation. They knew what seasoned animators could bring to the characters. Throughout the production there was a tremendous sense of respect and appreciation between the animators and the directors, which created an environment of great creativity. The directors presented their vision with competence and clarity. Particularly when it came to giving feedback, they were very grateful for good ideas and respected quality work. Fear wasn’t a factor, which allowed animators to even make fools of themselves when presenting their ideas.

Were you able to propose ideas and, if so, which ones?
Most great ideas came out of some sort of collaboration, and as a group, the animation team brought a lot of those to the film. Personally, I’m most proud of the creation of Toothless as a character that had to walk the line between a fierce creature and an adorable companion. We were looking for ways to bring the audience into the experience and have them be reminded of their own pets.
I also particularly like the decision to make the last fireball shot after the giant explosion a slow-motion shot, which I had suggested. This shot gives me chills every time I see it.

What was the main challenge on this movie?
The biggest challenge lay in the style of the animation and the creation of a believable world. You can’t make a movie about Vikings without having beards, hair and fur constantly interacting. And because of our story’s dramatic undertone, we felt it required an animation style that was fairly realistic, so a purely graphic or cartoony solution to that problem wasn’t going to cut it.
Soon enough, we also realized that we had a tremendous opportunity at hand. All movies about dragons that had been created up to this point, had either hand drawn cartoony dragons in them or featured CG creatures that had to match a live action plate. We, on the other hand, had a world of dragons to create that could seem realistic but that didn’t need to match a photorealistic image. We were able to really have fun with them, make them colorful, create different species and make them funny and dangerous at the same time.

What were the animation references for the dragons?
We looked at what had been done before in movies, but soon realized that we needed to dig deeper. So, we looked everywhere in nature for references. For each of the characters we created a movie playlist that mixed all of the references together and we would update this list as we went through production, collecting more and more inspirational material as we went along.
The Gronckle for example is a cross-breed between a crocodile, a bumble bee and a Harley-Davidson. Toothless was inspired by the stare of a wolf, the behavior pattern and overall look of a black panther and the wing beat of a bat. For shot specific actions, we ended up studying wombats, small birds of prey and especially cats and dogs.

How was the collaboration with the other departments (concept, rigging, …)?
It starts out very linearly, where an approved design gets handed from one department to the next and the characters go through the standard approval process with the directors, production designer and VFX supervisor. This usually only lasts for a short while, because the show is generally still searching for its final story and production design, so things have to get adjusted, remodeled and sometimes even re-rigged. The “Terrible Terror,” the little gecko-like dragon, used to be the original Toothless, for example.
We were a fairly small group of people for quite some time. Nico Marlet was the main designer and we had worked closely together before on OVER THE HEDGE. We put a lot of effort into maintaining the graphic quality of the original ideas that were on paper. There was a lot of talent and willingness coming from the modeling and rigging departments and we ended up being very satisfied with the results.

What was the most complicated character to animate?
The two-headed dragon was the most complex character to animate, but luckily we didn’t have that many shots of him. Toothless was very tricky, because he could look different quite quickly depending on the camera angle and of course because he is a quadruped with two sets of wings and a tail with a fin at the end. His rig is approximately 8 times as complex as the dragon rig in the original SHREK.

Hiccup was the hardest character to make interesting. This is a common hurdle in animation, because the hero is the character with the least amount of caricature, so that everyone in the audience can identify with him.

What was a typical day on DRAGON?
At the peak of production, which lasted around 8 months, I would usually get in around 7:30 am and animate on my shot until about 9:30 am. After that I would have a series of sequence meetings, scheduling meetings and other discussions before I’d do rounds with the animators around 11:00 am. After a short lunch I would try and reply to e-mails and notes and do some draw-overs for animators before I had to go to more task-specific meetings. At around 4:30 pm we usually started dailies that could last up to three hours. After a quick bite I would go back to my desk and try and animate if possible. I usually tried to finish my day before 10:00 pm.
DRAGON had a very aggressive production schedule, due to the late arrival of Chris & Dean. It was quite atypical for DreamWorks, as animators usually manage to get their work done in a regular 8-10 hour day.

What is your pipeline at Dreamworks?
We use a lot of proprietary software, except for previz and layout, which are done in Maya. Our animation software is developed in house and is called Emo. We have significant development resources and are currently working very closely with Intel and HP in developing our next generation software packages.

What were the shots that prevented you from sleeping?
Fine tuning Stoick’s beard simulation was a huge undertaking and demanded several months of rigging time.
One of the biggest frustrations, though, was the work we did on the flying shots. In order to create a real sense of speed, some of our shots had to travel at 700-1000 km/h. So, if there was even the slightest imperfection in the camera curve, our characters would jump around as if they were really badly animated. This caused some tense nerves amongst the animators.
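To see why such tiny curve imperfections matter at these speeds, a bit of back-of-the-envelope arithmetic helps (this is a hypothetical numerical sketch, not DreamWorks' actual tooling): at 900 km/h a camera covers roughly 10.4 metres per frame at 24 fps, so even a few centimetres of keyframe noise shows up as a visible frame-to-frame velocity pop.

```python
# Hypothetical sketch: why small camera-curve noise reads as jitter at speed.
speed_kmh = 900.0
fps = 24.0
step = speed_kmh / 3.6 / fps  # metres travelled per frame (~10.4 m)

# Ideal camera positions along a straight path, one sample per frame.
ideal = [i * step for i in range(6)]

# The same curve with a tiny 5 cm keyframe imperfection on frame 3.
noisy = list(ideal)
noisy[3] += 0.05

# Frame-to-frame velocities (m/frame): the noise appears twice, as a
# speed-up then a slow-down -- the "badly animated" pop animators saw.
vel = [b - a for a, b in zip(noisy, noisy[1:])]
print([round(v, 3) for v in vel])
# → [10.417, 10.417, 10.467, 10.367, 10.417]
```

The key point is that a positional error affects two consecutive velocities in opposite directions, which the eye reads as a stutter even when each individual frame position looks fine.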

What was your best moment on this show?
Showing the directors and producers our first pass of the sequence where Hiccup befriends Toothless. It was a truly emotional moment. We knew we had a film in our hands that was going to be special.

What is your next project?
I’m not sure yet, but due to its outstanding run at the U.S. box office so far, everything is pointing towards a sequel to HOW TO TRAIN YOUR DRAGON.

What are the four films that gave you the passion for cinema?
THE JUNGLE BOOK, THE ARISTOCATS, BACK TO THE FUTURE and INDIANA JONES – RAIDERS OF THE LOST ARK.

// WANT TO KNOW MORE?
HOW TO TRAIN YOUR DRAGON: Official website for HOW TO TRAIN YOUR DRAGON.
