PIRATES OF THE CARIBBEAN – ON STRANGER TIDES: Stephane Paris – CG Supervisor – Cinesite

Stephane Paris began his career in Paris at studios such as Duran Duboi, working on films like ASTERIX & OBELIX: MISSION CLEOPATRE and IMMORTAL (AD VITAM). He then worked at MPC and Weta Digital before joining Cinesite in 2006. He has supervised the CG on movies such as CLASH OF THE TITANS, PRINCE OF PERSIA: THE SANDS OF TIME and THE CHRONICLES OF NARNIA: THE VOYAGE OF THE DAWN TREADER.

What is your background?
I began my career in Paris where I worked as a digital modeller on numerous feature film, television and commercial projects. I joined Cinesite in 2006 and have supervised an array of blockbuster work for the company, including CLASH OF THE TITANS (2010), PRINCE OF PERSIA: THE SANDS OF TIME (2010) and THE CHRONICLES OF NARNIA: THE VOYAGE OF THE DAWN TREADER (2010). I was also on the GENERATION KILL (2008) team, for which our work was awarded an Emmy and VES Award nomination.

How was your collaboration with director Rob Marshall and production VFX supervisor Charlie Gibson?
I never actually met with Rob Marshall; Charlie Gibson was our main contact throughout the production. I hadn’t worked with him before, but he has a long and prestigious track record in effects. I found him very easy to work with and we found creative solutions for shots by working together.

Can you explain the different steps in creating the huge environment for the London chase sequence?
It was quite a task because the final shots had lots of composited layers. We relied heavily on our proprietary tool, csPhotoMesh, which enabled us to build CG sets for environmental CG extensions. We used camera projection techniques set up in Maya and transferred for final application in Nuke. We also created fully shaded, photo-realistic models of the buildings to extend the Greenwich location and make it look like a convincing 19th century city.
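Camera projection of this kind comes down to computing, for each vertex of the stand-in geometry, where it lands in the projecting camera's image plane, and sampling the photographed plate there. The sketch below is an illustration of that core math with a simple pinhole camera, not Cinesite's actual tooling; the function name and matrix are hypothetical:

```python
import numpy as np

def project_points(points, cam_matrix):
    """Project 3D points through a pinhole camera to get the 2D image
    coordinates a camera projection would sample the plate at.
    `points` is (N, 3); `cam_matrix` is a 3x4 projection matrix."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4) homogeneous
    proj = homo @ cam_matrix.T                             # (N, 3)
    return proj[:, :2] / proj[:, 2:3]                      # perspective divide

# A trivial camera at the origin looking down +Z, focal length 1.
cam = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
pts = np.array([[0.5, 0.25, 2.0]])
uv = project_points(pts, cam)  # a point at depth 2 lands at (0.25, 0.125)
```

In a pipeline like the one described, the same camera would be exported from Maya so the equivalent projection could be rebuilt for final application in Nuke.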

What references did you receive from the production for the buildings and streets?
We mainly managed the references for the sequence on our own – Cinesite has a team dedicated to that task. Our VFX Photographer Aviv Yaron went out on location on the streets of Greenwich and on set at Pinewood Studios, taking thousands of photographs which were key to the creation of the environments. These photos were used with our proprietary tool csPhotoMesh to create 3D scan-like geometries. We also used the photos to create high resolution textures to reapply to our models.

Can you explain in detail the creation of digital doubles?
We didn’t create any digidoubles, but we composited elements of the crowd using Nuke to populate the shots in the areas of the city we extended.

How did you create the FX elements such as coal burning?
In the sequence where Jack Sparrow is escaping through the streets on a horse and cart, burning coal spills onto the street. We used rigid body simulations to create the breakage as the coal hits the ground and Houdini to create the fire emanating from the pieces as well as the smoke it generated.

Can you explain the creation of the frog?
The clients provided us with photographs of poisonous tree frogs. We completed look development in Renderman, then created a model using the photographic reference, which we rigged and animated using Maya – this included subtle and realistic movements like blinking and breathing.
The main frog was red, but we created several coloured versions, most of which populate a jar which Barbossa holds up to inspect. The trickiest part was getting the shading right, but in the end we found a good balance between subsurface scattering and specular passes, which I think makes it pretty believable.

Can you tell us in detail the creation of the wooden leg of Barbossa?
We initially took photos of the wooden leg prop on location and the production provided us with a 3D scan. We used the photos and scan to create a full CG leg, from trousers and straps down to the wooden peg. Creating the entirety of the leg was easier, in many cases, than just adding the lower section. The top of the leg was rigged and animated to match the actor’s movement.
In some shots, where the actor’s performance had him touching the leg, we needed to recreate and animate his hand and sleeve digitally, to achieve a good interaction.

Did you develop specific tools for this project?
Yes, our Head of Visual Effects Technology, Michele Sciolette, led the efforts to build the stereo production pipeline and develop a number of new tools to meet these challenges. These included csStereoColourMatcher, a fully-automated tool designed to compensate for colour differences between stereoscopic image pairs, and csStereoReSpeed to determine the best respeed methodology for any given shot. We also used csPhotoMesh, which already existed, but we certainly developed it further throughout the course of this project.
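csStereoColourMatcher is proprietary, but the basic idea of compensating for colour differences between a stereo eye pair can be sketched as a per-channel statistics match. This is a simplified illustration of the principle, not Cinesite's implementation:

```python
import numpy as np

def match_colour(src, ref):
    """Shift and scale each channel of `src` so its mean and standard
    deviation match `ref` -- a crude, fully automatic way to pull one
    eye's colour toward the other in a stereoscopic pair."""
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        scale = r.std() / max(s.std(), 1e-8)   # guard against flat channels
        out[..., c] = (s - s.mean()) * scale + r.mean()
    return out
```

A production tool would be far more robust (local matching, outlier rejection, temporal stability), but this captures the idea of automatically regrading one eye to match the other.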

What was the biggest challenge on this project?
The whole project went pretty smoothly from a CG perspective and there was very little to be concerned about. We had good direction from Charlie Gibson and our in-house VFX Supervisor Simon Stanley-Clamp. I’m pretty pleased with how it all went.

How long did you work on this film?
I became involved in the testing around July 2010 and we delivered at the end of March 2011, so around 8 months in total.

What was the size of your team?
The size of the team fluctuated, but I’d say we had around 50 people, including everyone from tracking through to lighting.

What did you take away from this experience?
This was my first native stereo project. Many aspects of the CG are greatly affected by stereo, so it was definitely a learning experience for me.

What is your next project?
I’m working on a really exciting project, but I’m not allowed to say what it is at the moment!

What are the four movies that gave you a passion for cinema?
THE THING (John Carpenter)
ROBOCOP
ALIENS
RAIDERS OF THE LOST ARK.

A big thanks for your time.

// WANT TO KNOW MORE?

Cinesite: Official website of Cinesite.
fxguide: Article about PIRATES OF THE CARIBBEAN – ON STRANGER TIDES on fxguide.

© Vincent Frei – The Art of VFX – 2011

THOR: Paul Butterworth – VFX Supervisor & Co-founder – Fuel VFX

Paul Butterworth joined Animal Logic in 1993 as a compositing artist and later became the first Australian VFX supervisor! He has also directed numerous commercials and created many matte paintings. In 1998, he oversaw 1,300 shots in eight months for the TV series FARSCAPE. In 2000, he founded the studio Fuel VFX with Dave Morley, Andrew Hellen, Simon Maddison and Jason Bath. He oversaw the effects of X-MEN ORIGINS: WOLVERINE and is currently working on Ridley Scott’s PROMETHEUS.

What is your background?
I’m originally from England, and graduated with Honors in Graphic Design from Middlesex University. In the early 90s I travelled a lot and worked for various creative companies in Europe and the US in photography, illustration, animation, graphic design and commercial directing. Then I landed in Australia and immediately fell in love with it. My first jobs were in compositing and design with Zap and Animal Logic and I became the first VFX Supervisor in Australia. I continued building my portfolio in matte painting and art directing which kept me very busy on commercials and feature films. In 1998 when I was at Garner MacLennan Design, my concept art and style-frame work won the pitch for the visual effects work on the Sci-Fi Channel’s FARSCAPE. I took on the VFX Supervisor and VFX Art Director role for that show and we delivered 1300 shots in about 8 months. In those years, I worked closely with the people that were to become my business partners in Fuel VFX – Dave Morley, Andrew Hellen, Simon Maddison and Jason Bath – and we formed the company in 2000. Since then, we’ve continued to grow and now have about 65 full-time staff and about 30 freelancers working for us. I continue to direct commercials and VFX Supervise some of our feature film work.

How was your collaboration with director Kenneth Branagh and production VFX supervisor Wesley Sewell?
Working with Ken and Wesley was a fantastic experience, very collaborative, open and honest. As with any working relationship, it’s not just about doing good work, it’s about building trust – the trust that we can handle complicated problems, that we can interpret the brief and that we can deliver world-class work. Both Ken and Wesley were prepared to listen to our ideas and input and gave us great opportunities to build on the brief.

How did Fuel VFX get involved on this film?
It was really a combination of having been involved in the initial test phase of THOR back in the early part of 2009 and Marvel being happy with our subsequent work on IRON MAN 2.

What are the sequences made by Fuel VFX?
On THOR we worked on some of the ‘Bifrost’ shots – this is the name, from Norse mythology, of the rainbow bridge used to travel between worlds. In the film, this is interpreted as a wormhole that traverses the universe. The other main part of our work was creating effects within Odin’s chamber, chiefly the energy field that Odin sleeps beneath, as well as Thor smashing through a wall.

Can you explain in detail how you created Bifrost? Have you received any specific references from the director for the Bifrost?
Kenneth had indicated that he wanted the Bifrost to be a ‘scientific trip’ where we traversed space in an interesting, unique way. In reality, if you were traveling through this wormhole at that speed, everything around you would be black, which isn’t very interesting! So I worked closely with our Head of Design Brendan Savage and gathered a lot of reference based on natural phenomena, such as the polar auroras and imagery from the Hubble telescope. Then we went through several iterations of concepts and testing to achieve the right look, consulting with Kenneth, Wesley and Marvel throughout the process. At the same time, our technical team led by CG Supervisor Roy Malhi spent many months programming a detailed fluid system to help the CG artists interpret the energy ribbons and nebula in the 3D space. Finally, our compositing team led by Tim Walker did a great job to piece the scores of elements together.

How did you handle so many particles? Did your render farm take a big hit?
Roy Malhi, CG Supervisor // We developed in-house tools to deal with the mass amounts of data that we were required to fly through, taking advantage of the fact that Renderman is exceptionally good at handling a large number of points at render time. We went through the creative process in Maya, developing the look with small amounts of data until we were happy with the look. Our render farm didn’t take a big hit, because we split the data-heavy processes across many of our older machines and then assembled it back together for rendering.
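The divide-and-reassemble approach described here – develop the look on small amounts of data, then split the heavy processing across many machines – can be illustrated with a minimal chunking helper. This is hypothetical code, not Fuel's actual in-house tools:

```python
import numpy as np

def process_in_chunks(points, fn, chunk_size=1_000_000):
    """Apply `fn` to a huge point array in fixed-size chunks, so each
    piece could be dispatched to a separate (older) farm machine and
    the results reassembled afterwards for rendering."""
    return np.concatenate([fn(points[i:i + chunk_size])
                           for i in range(0, len(points), chunk_size)])

# Look development happens on a small array; the farm run just raises N.
pts = np.random.rand(10_000, 3)
remapped = process_in_chunks(pts, lambda p: p * 2.0 - 1.0, chunk_size=2_048)
```

Because each chunk is independent, the per-chunk calls can be farmed out in parallel; only the final concatenation needs all the results in one place.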

Can you explain how the Bifrost departure and arrival scenes were shot? Were the actors hung on wires?
For all shots, except one where Chris Hemsworth was shot on a wire, we created digital doubles. The shots required for the Bifrost were all CG. So we workshopped with Kenneth, VFX Supervisor Wes Sewell and Marvel, and did a lot of work to develop what the Bifrost wormhole would look like.

About the Chamber of Odin. How did you create the energy field on his bed? And what references have you received for this energy field?
For Odin’s Chamber, we developed a dome and curtain of light rays that hover over Odin’s bed. This dome of light suggests harnessed power and energy that revitalizes him as he sleeps. We took a lot of reference from the natural world such as the corona of the sun and gave the sleep effect plenty of volume and space. CG Supervisor Pawel Olas used both fluid and particle simulations in shifting shades of gold and bronze to reference the sun and give it its majestic look.


What was the biggest challenge on this project?
Perhaps the most demanding task on this film was to successfully interpret what the Bifrost would look like. You can’t Google what these things look like – they are totally imagined and within the heads of the stakeholders. So to extract that and interpret it for the big screen was an interesting challenge creatively. Technically, probably creating fluid simulations that could be art-directed and used for both the bifrost and Odin’s chamber shots. Part of the difficulty with solving these is that we had to ensure they would work in stereo.

How long did you work on this film?
We started working on it in 2009. Over that time, we conducted several tests especially for the Bifrost shots.

How many shots have you made and what was the size of your team?
We had about 25 people working on THOR and we delivered 55 shots.

What is your next project?
We’re currently working on CAPTAIN AMERICA: THE FIRST AVENGER for Marvel as well as Ridley Scott’s PROMETHEUS for 20th Century Fox.

A big thanks for your time.

// WANT TO KNOW MORE?

Fuel VFX: Dedicated page about THOR on Fuel VFX website.

© Vincent Frei – The Art of VFX – 2011

X-MEN FIRST CLASS: Laurens Ehrmann – VFX Supervisor – Plug Effects

After telling us about his work on THE PREY, Laurens Ehrmann of Plug Effects is back on The Art of VFX to tell us about their contribution to X-MEN: FIRST CLASS.

How was your collaboration with director Matthew Vaughn and Production VFX designer John Dykstra?
At our level, we were in direct contact with Stephane Ceretti, who was based in Soho as Production VFX Supervisor.

Can you tell us how Plug Effects got involved in this film?
Stephane Ceretti, with whom we had worked for a long time at BUF, contacted us to work on a small sequence that included only 11 shots. In the end, we worked on 46 shots spread throughout the film.

What have you done on this film?
Our work focused mainly on simple compositing effects. We worked on the bar sequence early in the film, during which Charles Xavier makes the acquaintance of a young woman with heterochromia.
We also did some clean-up work in various other sequences (wire removal, prop removal, reflections in Magneto’s helmet, etc.).

Can you explain in detail the creation of the eyes for the bar sequence?
On most shots, we simply color corrected the eyes, taking care to add a sense of material and a little edge around the pupil. The eyes were very dark to begin with, so on some shots we had to add detail with textures.

What indications or references did you receive?
The indications were very simple: one green eye, one blue eye. We then refined the colors over the course of the reviews.

How did you proceed for the clean-up shots?
Each shot is different; this kind of work does not call for one specific technique. Often it is a mixture of small matte painting and roto-paint fixes.

Can you explain the removal of the crew reflections on Sebastian Shaw’s helmet?
To remove the crew reflections in the helmet, we simply cleaned up a still frame, which we then matchmoved onto the helmet.

Did you create a 3D helmet?
No. It’s purely 2D.

What was the biggest challenge on this project?
Technically, there was no real challenge. However, as we are a young studio, it was important for us to cultivate a trusting relationship with the FOX studio, as well as all the vendors involved in the project.

Was there a shot or a sequence that prevented you from sleeping?
Not really.

How long did you work on this film?
We worked six weeks on this film.

How many shots did you make and what was the size of your team?
We worked on 46 shots. The team consisted of four artists.

What did you take away from this experience?
The great feeling of bringing our modest contribution to a movie of this scale.

What is your next project?
We are currently working on the effects of two French features.

A big thanks for your time.

// WANT TO KNOW MORE?

Plug Effects: Official website of Plug Effects.

© Vincent Frei – The Art of VFX – 2011

THOR: Jonathan Harb – VFX Supervisor & Founder – Whiskytree

Jonathan Harb began his career as a production assistant in the art department at ILM in 1996; he later became a concept artist, matte painter, supervisor of the matte painting department and VFX supervisor. At ILM, he worked on such films as MEN IN BLACK I and II, the STAR WARS prequel trilogy, THE PERFECT STORM and A.I. In 2007, he founded Whiskytree and oversaw the effects of movies like TERMINATOR SALVATION and TRON: LEGACY.

What is your background?
I grew up in the South of the United States, and earned a degree in industrial design from the College of Design at North Carolina State University. Although I began college as a mechanical engineer, I discovered industrial design as a freshman. After transferring to the College of Design there was no looking back. Straight out of college I started as a production assistant in the art department of ILM. I worked at ILM for just over 11 years, along the way gaining experience as a concept artist, matte painter, supervisor of the matte department and visual effects supervisor. I opened Whiskytree 4 years ago, and have been running the company, and supervising & producing its work, since. My credits include THOR, TRON: LEGACY, TERMINATOR SALVATION, all 3 STAR WARS prequels and so on, all the way back to the first MEN IN BLACK film.

How was your collaboration with director Kenneth Branagh and VFX supervisor Wesley Sewell during production?
Terrific. Wes was generous on-set in making sure that we reviewed work together with Ken. Ken’s approach to Asgard struck me as very matter-of-fact from the outset. One would think that the subject matter would be hard to get one’s head around, but Ken had clear feelings from the start as to what worked and what didn’t. Wes produced more research than I thought one person could possibly amass. He also worked with a very collaborative style, and the whole effort ended up as a very close creative partnership.

How did Whiskytree get involved on this film?
I received a call from Di Giorgiutti, Visual Effects Producer extraordinaire, who was looking for a vendor to take on designing and building Asgard for the film. That call was one I will remember.

What are the sequences made by Whiskytree?
Whiskytree’s work for THOR is exclusively on Asgard. The establishing shots and wide vistas of Asgard seen throughout the film are ours, as are shots in the coronation sequence, exterior of Odin’s vault, banquet hall sequence, healing room sequence, and a few other places where we see interior and exterior views of Asgard.

Can you tell us what you’ve received from production to create these magnificent views of Asgard?
Bo Welch and his team had created many production paintings as a starting point for the look of Asgard. They also had some geometry, and this formed the basis for our work on some pieces of Asgard architecture. For example, the art department had some pretty refined geo of Odin’s Tower that we drew heavily from in building the version you see in the film. They also sent us pics of statue maquettes, and even sent us a few wall tiles from the throne room set. The story of Patrick Bareis, the VFX PA, managing to ship a stack of 7-foot-long tiles via FedEx is a good one…

What were the indications of Kenneth Branagh for Asgard?
Asgard is meant to be very old, yet very advanced. Ken’s direction, and that of the folks at Marvel, who were also very involved in developing the look of Asgard, was always toward a minimal, yet very sophisticated look. Creating the details of Asgard’s architecture was a very challenging design problem, and on many occasions required several passes at individual buildings or architectural details.

On so many projects it is tempting to resolve a design by adding details and intricate structures to flesh them out, and initially we proceeded this way. Not so with Asgard. Ken et al. were very clear in steering us toward less bitty details, with a more refined sense of larger structures. Very few structures on Asgard contain greeblies that are so common in so many depictions of other fantastical places as seen in other films. This sense of refined minimalism sets Asgard apart.

Can you explain in detail the creation of one of these shots?
Several of our team worked for many months on creating potential camera angles to use for the film. We even sent our Art Director, Joe Ceballos, and one of our matte painters, Juan Pablo Monroy, down for a few days to work directly with the production in creating unique angles to use for the film. Wes and Di also spent time with us here at Whiskytree to help drum up exciting angles of Asgard.

Simultaneously, Susumu Yukuhiro, one of our matte painting supervisors, headed up a team of artists who created Asgardian buildings for months on end. We knew that we needed a myriad of structures to populate the wide exterior vistas in the film, and as we would rarely see the same structure more than once, we created dozens and dozens of detailed buildings to be used throughout the film.

In other development efforts, our computer graphics supervisor, Votch Levi, worked with our tools developer, Paul Hudson, to create tools and techniques to simplify and automate the processes we would use in manipulating the very large scene files required to build wide views of Asgard. We used Arnold for rendering, and its existing integration into Softimage was a boon for us. Votch and Paul built tools for distributing trees with ease, made laying out scenes a snap with our asset manager, Distill, and optimized scene files and tracked down bugs where needed. One of our TD’s, Sam Cuttriss, built an in-house crowd system using ICE, and a few other tools came into use along the way. Also, our Compositing Supervisor, Brandon McNaughton, contributed final polish to many shots throughout our work on the film.

Susumu Yukuhiro, Matte Painting Supervisor // All of the Asgard establishing shots were fully 3D matte paintings, as all the shots had dynamic camera moves, and this was a stereo show. We built a huge Asgard building library, and made a general Asgard city plan. This helped all the artists to understand where things were and what you were supposed to see in each direction.

For the very first establishing shot of Asgard, where the camera comes out from the water and flies through the cliffs to reveal Odin’s Tower, we went through many versions of pre-viz to find the best possible dramatic introduction. Once the camera was approved, it was a matter of assembling the layout with published CG assets such as buildings, cliffs, and trees. Our asset management system enabled us to assemble the complex 3D environment scenes much more easily than the traditional way. We used Softimage to create all the 3D elements, and rendered with Arnold. Employing Arnold was one of the best decisions we made for the show, and it allowed us to render with all of the expensive calculations such as GI, glossy reflection, and 3D motion blur, without costing too much render time.

Additionally, all the effects work such as cloud/mist, flags/banners, water, and crowds was done with ICE in Softimage, and these libraries of effects also helped toward recycling for use in other shots. Also, while we originally planned to replace all of the CG cliff renders with image-based projections at the end, the sophisticated lighting and quality of the Arnold renders meant only about 10% of touch-up painting was needed in the end.

Did you use special software to create the environment, including water and clouds?
We used ICE in Softimage for these effects.

Did the shiny Asgard structures cause you any trouble?
Yes, many troubles. Glossy surfaces are notorious for heavy renders and sizzling artifacts, and we experienced both in great quantity. Marcos at Solid Angle was great in tuning up Arnold’s sampling part of the way through the production, and that greatly reduced many of our render times.

How was the collaboration between different VFX vendors?
Excellent. We created a layout of Asgard from a distance for BUF to use in their distant views from Heimdahl’s Observatory, and successfully exported incredibly heavy scenes to them that you see in much of the film. Even though BUF uses their own proprietary tools for much of their own work, we found common technical ground, and Nix, their VFX Supervisor, was great to work with. It helped that he and I met and had a beer during our time on set, it’s always easier when you have a face for a name. We also had a large shared shot where we created the opening of the shot, and BUF handled the end of a very large pan as Thor and the Warriors 3 and Sif ride out of Asgard. No one would guess that two different houses created renders that contributed to the final shot, such was the success of our collaboration. Jake Morrison, an additional VFX supervisor on the film, also did a lot to ensure that we were able to seamlessly sync scenes with BUF.

Can you describe the thought process and development behind creating the fantasy realm of Asgard?
Joe Ceballos, Art Director // We set out to make Kirby’s vision of Asgard a reality. The goal was to create a utopian realm in an extraterrestrial setting. We used curvilinear forms and manipulated scale to construct a golden city which reflected an ideal balance of both organic and industrial influences. The city layout was based on fractal geometry to embody the idea of organized chaos. It was important for Whiskytree to build a world that felt as though Odin designed each detail with intent; that every building, interior, and outdoor space was perfect in both form and function.

How did you manage this huge environment?
Joshua Ong, Matte Painting Supervisor // Whiskytree’s proprietary asset publishing system, Distill, allowed artists to easily share cameras, models, HDRI environment spheres, shaders, light-rigs, and animation rigs and provided a solid system to control asset versioning. When creating shot layouts, artists imported individual referenced models, cameras and light rigs from Distill and composed the shots. The sheer number of models needed to populate Asgard required artists to combine models into manageable clusters that were then published into a shot’s layout library. A master lighting/rendering scene referenced layout clusters and contained all pass information. Using Distill in concert with XSI’s model referencing paradigm allowed the ultimate flexibility for making changes to model meshes or materials that could be propagated all the way up to the master lighting scene at any stage.
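The publish-and-reference pattern described here reduces to two operations: write a new immutable version of an asset, and resolve either a pinned version or the latest one. Below is a toy file-based sketch of that idea; Distill's real interface is not public, so every name here is hypothetical:

```python
from pathlib import Path

def publish(root, asset, payload):
    """Write the next numbered version of `asset` under `root`
    and return its path. Versions are immutable once written."""
    adir = Path(root) / asset
    adir.mkdir(parents=True, exist_ok=True)
    version = len(list(adir.glob("v*"))) + 1   # next version number
    path = adir / f"v{version:03d}"
    path.write_text(payload)
    return path

def resolve(root, asset, version=None):
    """Return the requested published version of an asset,
    or the latest one when no version is pinned."""
    versions = sorted((Path(root) / asset).glob("v*"))
    return versions[version - 1] if version else versions[-1]
```

A shot layout would then store only (asset, version) references, so re-publishing a building mesh or material can propagate up to the master lighting scene, exactly the flexibility the answer describes.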

What was the biggest challenge on this project?
Finding the look of the establishing shots of Asgard.

Has there been a shot or a sequence that prevented you from sleeping?
Yes, the shot at the end of the movie that takes us back to Asgard as we move above the clouds off of earth. This shot emerges over the rainbow bridge and is a long slow orbit around Odin’s Tower. The scene was full of a very large number of glossy materials, trees, and so on. Several hundred frames of this scene, rendered from each eye for stereo, amounted to an immense amount of proc time, right at the final crunch of delivering our last shots on the film. This shot was in fact the last one we delivered, and required a few sleepless nights for several of us.

What software do you use at Whiskytree?
Votch Levi, Computer Graphics Supervisor // The Whiskytree VFX pipeline is designed with an artist’s workflow in mind. We use an in-house asset manager named Distill, coupled with Shotgun, to organize all of our 3D assets and tasks. Distill is accessed via the NetView browser in Softimage and provides a graphical interface for artists to publish their scenes and reference shared models. Distill handles the publishing of all scene components including cameras, rigs, models, materials, textures, and animation data. Callbacks within Distill handle post-conversion of published assets, allowing for automated conversion of Softimage assets to Nuke and other DCC applications.

Through Distill, shots are divided into disciplines and individual tasks, allowing artists to work in parallel and continually develop individual shot components. If a shot demands more resources or an accelerated schedule, we use Jonathan’s “Force Multiplication” technique and divide the shot into smaller pieces for additional artists to develop and progress. Distill acts as the hub bringing the artists together, allowing the team to easily share shot components.

Once assets are published we take full advantage of Softimage’s referencing system and Arnold’s ability to render ultra dense scenes. During publishing, Softimage assets are converted into the Arnold Scene Source (ASS) format. ASS files act like a proxy in Softimage and defer loading of complex geometry until rendering. The Distill/Softimage/Arnold combo gives our lighters and matte painters the ability to create and render scenes with hundreds of millions of triangles and thousands of models while focusing on shot direction and aesthetics without the worry of software limitations and technical challenges.
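The key property of the ASS-proxy workflow described above is deferred loading: the artist's scene holds only a lightweight stand-in, and the heavy geometry is read only when the renderer asks for it. A minimal, generic illustration of that pattern (plain Python, not Arnold's or Softimage's actual API):

```python
class GeoProxy:
    """Stand-in for a heavy published asset: stores only the file path
    and defers loading the geometry until something actually needs it,
    the way an .ass archive defers loading until render time."""
    def __init__(self, path, loader):
        self.path = path
        self._loader = loader   # function that does the expensive read
        self._geo = None

    def geometry(self):
        if self._geo is None:           # load lazily, and only once
            self._geo = self._loader(self.path)
        return self._geo
```

Because nothing is loaded at construction time, a layout containing thousands of such proxies stays responsive in the interactive session, while the renderer still sees the full-density geometry.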

How long did you work on this film?
We worked on the film for 15 months, principal photography through post.

How did you organize the show and what was the size of your team?
Sumriti Bhogal, Production Manager // We had developed a very tightly knit group through the successful completion of our work on TERMINATOR SALVATION and TRON. Jonathan and I worked with our coordinator and PA to quickly facilitate all client feedback to our discipline Supervisors. We primarily utilized Shotgun during production of THOR to schedule and track our progress. We made sure to have fluid communication with our client and had a strong familial approach that can sometimes be overlooked on large projects.

What did you take away from this experience?
Projects like this one are rare opportunities.

What is your next project?
CAPTAIN AMERICA.

What are the four movies that gave you a passion for cinema?
BIG TROUBLE IN LITTLE CHINA, UNFORGIVEN, FORREST GUMP and THE GODFATHER.

A big thanks for your time.

// WANT TO KNOW MORE?

Whiskytree: Official website of Whiskytree.

© Vincent Frei – The Art of VFX – 2011

THOR: Eric Fernandes – CG Supervisor – Digital Domain

Eric Fernandes has worked at many studios, including Industrial Light & Magic, Weta Digital, DreamWorks Animation and Digital Domain. He has worked on such films as STAR WARS EPISODE II: ATTACK OF THE CLONES, THE LORD OF THE RINGS: THE TWO TOWERS and THE RETURN OF THE KING, as well as KUNG FU PANDA and AVATAR.

What’s your background and what was your position on this show?
My name is Eric Fernandes and I was the CG Supervisor on THOR for Digital Domain, working out of the Vancouver office. Since the VFX Supervisors on THOR were primarily working out of Los Angeles, it was my job to oversee the pipeline and maintain the aesthetic and technical direction for the show from Vancouver. We would present our work to the VFX Supervisor at DD, Kelly Port, on a daily basis for feedback and approval, and then that would be shown to the overall VFX Supervisor, Wesley Sewell, usually once a week in a Cinesync session. Cinesync allows multiple people in different locations to view streaming video simultaneously, as well as draw or annotate on the image in real-time.

What’s your software and hardware pipeline?
We are almost entirely Linux based at DD, with HP Workstations, primarily HP 8600s with Nvidia Quadro graphics cards running the CentOS operating system. For modeling and texturing we do occasionally use Windows machines for specific non-Linux applications like ZBrush or Photoshop. Our software pipeline consists of a wide variety of off-the-shelf tools, which are supplemented by in-house tools and custom development efforts. For THOR, modeling was done in ZBrush, Mudbox, and Maya. Texturing was done primarily using Mari from The Foundry. We also used Mudbox for displacements, as well as Photoshop for specific additional painting work. Animation and rigging were done almost entirely in Maya with custom DD plug-ins for the rigging system, and tools which animators and TDs wrote as well. FX elements were generated 90% in Houdini, using DD’s proprietary Rigid Body Dynamics system called “Drop”, as well as its volumetric renderer, “Storm”, for all snow and breath effects. Some particle fx work and miscellaneous elements were generated straight out of Maya and rendered through Mental Ray as well. Our lighting pipeline on this film was entirely Renderman, using the latest Renderman Studio software, including Slim and Renderman for Maya. Our render farm is managed through Qube in Vancouver, and as with all DD productions, Nuke was used for compositing.

What did Digital Domain do on this project?
DD was responsible for the Jotunheim sequence, which is the home world of the Frost Giants. We also did the work involving the Frost Giants in other locations in the film, such as in the prologue for Earth, or when the Frost Giants raid Asgard to attempt to steal the Casket of Winters. DD’s work consisted of approximately 350 shots, ranging from relatively simple set extensions all the way to 90 fully CG, stereo-rendered creature shots with hundreds of Frost Giants, hero creatures, and ridiculous amounts of rigid body dynamics. Internally we referred to the Jotunheim sequence as « AVATAR » meets « 2012 », based on the combination of full-CG creature work with huge destruction elements.

Can you tell us how many assets you have built for the movie? What was the size of the team and how did you organize such a big production?
We built the entire world of Jotunheim, from the wide scale shots of the planet, to each building and rock that you see on the screen that wasn’t practically shot on set. The actual set was pretty minimal, so I would say 80-90% of what you see in the environment on Jotunheim is entirely CG. We built about 30 to 40 buildings or structures, a dozen or so mountains and cliffs, piles of rubble and debris, and countless other small architectural elements like bridges and walkways. On the creature side we modeled 12 variants of the hero Frost Giants with different markings, scar patterns, and costumes, along with a Frost Beast creature, which Marvel asked us to help them design about halfway through production. We also created 7 digi-doubles, one for each of the main characters in the film who appeared in Jotunheim. Ultimately we reached a crew size of about 200 people on this film, with about 80% of the crew in Vancouver and 20% in Venice, California. DD has a long history of working on large projects, so there is a legacy of strong support systems and procedures for everything from staffing and hiring artists, to asset tracking and scheduling. A wide variety of tools are used to organize and manage a film like this, from commercial products such as Shotgun and Qube, to in-house tools like our asset management or daily viewing system.

Were there new challenges on this project? Did you adapt any asset or tool from your previous works?
We did not necessarily face new challenges on this film, but the large amount of work combined with a wide variety of skills – photorealistic environments, creature animation, destruction fx, stereo 3D, etc. – proved to be the biggest challenge. We basically had to create hero creatures that could stand in for their live-action counterparts, and do some very convincing and challenging environments and fx development. From a hero creature standpoint, we had to create creatures that could hold up full screen, match their on-set versions, and populate scenes ranging from one Frost Giant to hundreds of them. And the Frost Beast was an entirely new creature we created from scratch, spearheaded by Miguel Ortega and Chris Nichols, which popped up as a new character halfway through production on the film. For FX development we had to come up with so many different and unique looks and systems that it was a big challenge in and of itself. We had to do CG snow and breath in almost every shot, lightning for Thor’s hammer, as well as fx for the hammer spinning, huge cloud vortexes that the Aesir travel through, growing ice weapons, blood and gore effects, and huge dynamic simulations of collapsing buildings and terrain. In short, we had just about every fx challenge on this film except fire. So it was a daunting logistical, technical, and artistic challenge to do all of that work in a 9-month period of time and maintain the quality that both Marvel and DD were expecting. Our FX Supervisor Ryo Sakaguchi did a great job with a team of approximately 12 fx artists in Vancouver, along with some initial development and support from Venice-based Houdini developers.

The other challenge we had on this film was doing it from a new facility in Vancouver, which was set up in January 2010 to work on TRON: LEGACY. So one of our big tasks was integrating the pipeline from the main studio in Venice, training the artists, and building a cohesive group of people to work on this film from scratch. It was also the first film at DD Vancouver on which we dealt with the front end of the pipeline, including asset development such as modeling, texturing, and lookdev. So it was a great learning experience for us to get up to speed quickly and experience a “trial by fire,” and thankfully we were given the freedom to recruit and hire a very talented group of experienced artists from around the world to make that happen. And of course we leaned heavily on the long history of development and software at DD, from its in-house Houdini tools, to a rich and deep history with Nuke for compositing.

Did you use previs for your shots? Did you change shots from the storyboard to the final result?
Previs was done by « Third Floor, » who did a great job and worked directly with Marvel on that element. But we certainly did a lot of all-CG shots and helped Marvel conceptualize some of the more complex camera moves, such as the shot where the Frost Beast runs underneath the ice and the camera goes upside down. But yes, in general there were ongoing changes between the camera department and animation and layout to refine the cameras and layout of the shots throughout the film.

What’s the most challenging shot for you?
There were probably 2-3 shots that were the most challenging for different reasons. Late in the film we were asked to do a shot for the prologue, which had the Frost Giant army lined up against Odin’s guards. The previs showed thousands of characters lining up for battle, and our pipeline up to that point was primarily hero-character driven, so we hadn’t implemented any crowd systems like Massive, and it was honestly too late to go that route. So our Pipeline Supervisor, Tim Belsher, and our lead shader writer, Mark Davies, came up with a custom, instance-driven delayed geometry approach for this one shot that they worked on for the final month of the film. Animation baked out predetermined movements of about 10-20 characters in simple cycles with cape/cloth sims. This was then combined into a delayed geometry rib archive with the shaders and pre-rendered occlusion maps, which were called procedurally through Renderman.
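The instancing idea described above can be sketched in Python. This is only an illustration of the concept, not DD's actual tool: each crowd member is reduced to a reference to one of a handful of pre-baked animation cycles, plus a transform and a frame offset, so the renderer only loads full geometry on demand (analogous to delayed-read RIB archives). All names and values here are hypothetical.

```python
import random

# Hypothetical cycle archives baked out by animation (cape/cloth sims included).
BAKED_CYCLES = ["idle_a", "idle_b", "shift_weight", "raise_weapon"]

def build_crowd(num_characters, per_row, spacing, seed=0):
    """Lay out crowd members on a grid; each is just a cycle reference."""
    rng = random.Random(seed)
    crowd = []
    for i in range(num_characters):
        row, col = divmod(i, per_row)
        crowd.append({
            # Which pre-baked cycle archive this instance reads at render time.
            "cycle": rng.choice(BAKED_CYCLES),
            # Desynchronise identical cycles so the army doesn't move in unison.
            "frame_offset": rng.randint(0, 99),
            # Grid placement with slight jitter for a natural-looking line-up.
            "translate": (col * spacing + rng.uniform(-0.3, 0.3),
                          0.0,
                          row * spacing + rng.uniform(-0.3, 0.3)),
            "rotate_y": rng.uniform(-5.0, 5.0),
        })
    return crowd

def resolve_frame(instance, frame, cycle_length=100):
    """Map a shot frame to a frame inside this instance's baked cycle."""
    return (frame + instance["frame_offset"]) % cycle_length

crowd = build_crowd(2000, per_row=50, spacing=2.5)
```

The payoff is that thousands of on-screen characters cost only as much unique geometry as the handful of baked cycles, which is what made the shot feasible without a crowd system like Massive.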

Also challenging were any of the shots where the Frost Beast is chasing the Asgardians out of the city and you see buildings and ground collapsing beneath them. This involved a huge amount of back and forth between layout, fx, anim, and lighting right down to the final days of us working on the film. Shots like these had complex rigid body dynamics simulations for the ground and buildings, volumetric smoke, particles, mocap animation, custom animation, and soft body, rag-doll dynamics on characters flying through the air after interacting with the RBD simulation. Integrating all of the different data coming from multiple packages like Maya and Houdini, along with the myriad of internal tools used at DD like our volumetric renderer, Storm, and doing it for both left and right eye in stereo 3D was quite a challenge, to say the least.

Can you explain to us how you designed and created Jotunheim?
Jotunheim went through a number of design changes along the way, and ultimately we designed the planet with large chunks taken out of it to imply decay, along with giant icy canyons and crevasses. Some of the shots we spent a long time trying to get right were the establishing shots at the landing area where the Asgardians first arrive, and where they flee back to when chased by the Frost Beast. Think of the Grand Canyon times 100 for size.

And as we worked on that, the primary ways to express the distance and scale of a canyon are the horizon line and whatever fog/atmosphere you add in between the viewer and the farthest object. It turned out to be very difficult for us to get the sense of scale that Marvel was looking for with this super-sized canyon. If we added a lot of fog or atmosphere it looked like clouds or a low-lying fog layer, because over that large a distance it accumulates to full opacity very quickly. You can’t visually discern the difference between clouds that are 100 miles away or 10,000 miles away if they stack up horizontally on each other. And no matter how far we pushed back the far wall of the canyon there is still a vanishing point on the screen that also doesn’t change a whole lot in proportion to the distance you’re moving it in world space. Again, it’s really hard to represent the difference between a cliff wall that is 100 miles away and one that is 10,000 miles away if you don’t have something in between to establish distance and scale cues. After a couple months of iterations we ended up with something everyone was relatively happy with, although it probably works better in the 3D space than anything else we did on the film.
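The saturation problem described here falls straight out of Beer-Lambert attenuation: fog opacity approaches 1 exponentially with distance, so beyond a certain range extra distance adds almost nothing. A quick numerical sketch (the extinction coefficient is an illustrative value, not a production number):

```python
import math

def fog_opacity(distance_miles, sigma=0.05):
    """Beer-Lambert accumulation: accumulated opacity of uniform fog
    over a given distance. sigma is an illustrative extinction
    coefficient per mile."""
    return 1.0 - math.exp(-sigma * distance_miles)

near = fog_opacity(100)      # 1 - e^-5, roughly 0.993 — nearly opaque already
far  = fog_opacity(10_000)   # effectively 1.0
# The two readings differ by well under 1%, so atmosphere alone gives
# the eye no usable depth cue between a 100-mile and 10,000-mile wall.
```

This is exactly why the far canyon wall read the same regardless of how far it was pushed back in world space: without intermediate objects to stage the depth, the fog curve has already flattened out.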

As for what Jotunheim is, at the time that Thor arrives early in the film the planet is a decayed, crumbling world. Buildings have aged or toppled over, ice shelves have moved around, and the world is dying and inhospitable. A lot of this was driven initially by what they shot on set, and what the art department conceptualized, which were these crystalline, snowflake patterns, and hexagonal tubes made from granite and ice. The buildings are mostly vertical, simplistic construction designs meant to imply that the Frost Giants would have created them with the materials available to them organically. Marvel asked us to get away from buildings with a lot of architectural details or ornate construction motifs. At one time we had concepts for statues, busts, and very Gothic-looking burned-out structures, which we eventually moved away from. So as we expanded the world and built it out beyond the small area of the set we tried to keep in mind that these enormous buildings and palaces were made of a similar combination of granite and ice. Even though the buildings are large, they’re all composed of the same, fairly basic building blocks and materials. At the same time, we tried very hard to avoid anything resembling the « Fortress of Solitude » look, since that’s a completely different superhero altogether. So we avoided diagonally crossing columns, or anything that implied a cleaner, very crystalline look. We went with grungy, dirty mixtures of granite and ice.

What were your references for Jotunheim?
As with any film we looked at a ton of reference, from Mayan/Inca pyramid designs to Gothic architecture reminiscent of bombed out churches and other structures in World War II. We looked at anything that evoked primitive yet powerful societies, as well as reference of cities that have decayed over time. Additionally we had some very talented concept artists on the film, and our matte painters David Woodland and Mat Gilson did a lot of concept art to establish the look of the world along with Claas Henke. It was a very collaborative process with director Kenneth Branagh and all of the folks at Marvel, which was a lot of fun for us. They gave us a lot of leeway to pitch ideas and concepts to them and really treated us as a partner in helping to design the world and figure out what worked and what didn’t in the context of the story.

How did you create the right look for the ice effect?
I would say the big difference in this film with the ice look is that Jotunheim is a decayed, crumbling world, and the Jotuns built their structures out of a combination of what you might call granite and ice. So the good part for us is that there is actually very little of the typical real-world properties of ice evident, with things like refraction or reflection, which is expensive to compute. Believe it or not, we did an entire film on an ice planet without a single reflection, which saved us a great deal of time and look development as we didn’t need to rely on ray-tracing or faking in reflections. We achieved the look of the ice and ground with a combination of fairly simple diffuse shaders, good specular maps, and subsurface scattering. It was really driven by the look needed in the shots, and the story Marvel wanted to tell rather than any real-world simulation of ice properties. We also benefited from the fact that Marvel was trying to avoid comparisons to another superhero property that has a very ice-like, crystalline look.
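The recipe described — diffuse shading, painted specular maps, and a subsurface term, with no reflection or refraction at all — can be sketched as a toy single-light shading function. This is generic graphics-textbook shading for illustration only; DD's production shaders were of course far more involved.

```python
def shade_ice(n_dot_l, n_dot_h, albedo, spec_map, sss_amount,
              spec_exponent=40.0):
    """Toy shading in the spirit described: Lambert diffuse, map-driven
    Blinn-Phong specular, and a cheap subsurface term that lifts the
    shadowed side — no ray-traced reflection or refraction anywhere."""
    # Plain Lambert diffuse.
    diffuse = max(n_dot_l, 0.0) * albedo
    # Specular driven by a painted map rather than ray-traced reflection.
    specular = spec_map * (max(n_dot_h, 0.0) ** spec_exponent)
    # Fake subsurface scattering: wrap light around the terminator so the
    # material reads as translucent ice without any actual light transport.
    sss = sss_amount * max((n_dot_l + 0.5) / 1.5, 0.0)
    return diffuse + specular + sss
```

Because every term is a local computation on surface maps, the cost per sample stays trivial compared to ray tracing, which is the saving the interview points out.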

Can you explain in detail the collapse effects?
The collapsing effects were some of the most difficult shots we did on Thor. They were all done in Houdini, using a number of proprietary rigid body dynamics tools that DD has written over the years like « Drop ». As you probably can tell from films like THE DAY AFTER TOMORROW and 2012, DD has always had a very robust in-house set of tools for this type of work. The workflow itself is fairly straightforward. Modelers create a surface, whether it’s a building or the ground plane, and make sure not to build it with any overlapping vertices or intersecting geometry. This is vital for the RBD simulation to work properly. The fx department then pre-scores the model into chunks where they would like it to break. After that they run the simulation through the RBD system and it breaks the geometry, creates any needed interior faces, and spits out a per-frame geometry sequence in the bgeo format. This is then brought into lighting, either as live geometry, or through a proprietary DD tool called geoTor, which is a bgeo-to-Renderman delayed geometry workflow. Once that’s rendered out it’s up to comp to integrate it with the actors, add camera shake, as well as any additional elements which help integrate the cg to the live action plate like dust or lens flares.
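The "no overlapping vertices" requirement mentioned above is the kind of thing a pre-flight check can catch before a model ever reaches the fracturing step. A minimal sketch of such a check (illustrative only, not DD's actual validation tool; grid-hashing reliably catches exact duplicates, and near-duplicates that fall in the same cell):

```python
def find_overlapping_vertices(points, tolerance=1e-5):
    """Flag vertex pairs that coincide within tolerance — the model
    problem that breaks RBD fracturing. Hashing points into grid cells
    keeps this roughly O(n) instead of O(n^2); note that near-duplicates
    straddling a cell boundary can slip through this simple version."""
    cell = {}
    overlaps = []
    for i, (x, y, z) in enumerate(points):
        key = (round(x / tolerance), round(y / tolerance), round(z / tolerance))
        if key in cell:
            overlaps.append((cell[key], i))   # (earlier index, this index)
        else:
            cell[key] = i
    return overlaps

# Two of these four vertices coincide, so the check reports one pair.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
assert find_overlapping_vertices(pts) == [(1, 2)]
```

Running a check like this at model publish time means the failure shows up to the modeler immediately rather than as a mysteriously exploding simulation downstream.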

How did you make the frozen effects on Heimdall?
Actually DD did not do the frozen effects on Heimdall, so I can’t answer this. I believe either BUF or Luma did the Heimdall shots. We only added the Frost Giants for these.

Can you tell us about the features of the Frost Giants and the Frost Beast?
Eric Petey, animation supervisor // The Frost Giants are large humanoids, standing approx 3 to 3.5 meters in height. They are very strong – they have a lot of mass but unlike large humans on Earth they are not slow as a result. Blue, battle-scarred skin and piercing red eyes complete the look of these fierce warriors. The Frost Beast measures around 8 meters at the shoulder – enough to dwarf even the Frost Giants. It’s a very solid monster, built to bash and slash with a battering ram-like head complete with large blade-shaped tusks and gnarled teeth. Every part of it is dangerous: powerful muscular arms and paws are armed with huge claws, and a very long tail carries sharp spikes on its tip like a medieval mace.

How did you animate the Frost Giants and the Beast?
Eric Petey, animation supervisor // For the Frost Giants we used a combination of motion capture and key-frame animation, and the motion capture was often modified with key-frame action on top. There were some live-action Frost Giants in the sequence as well; a number of times it’s a mix even within the same shot. CG Frost Giants were used from deep background right up to close foreground. There was no use of Massive or other crowd simulators. The Frost Beast is purely key-frame animation.

How did you create their skeletons and their rigs?
Eric Petey, animation supervisor // The Frost Giant skeletons are quite complex, using the same physically accurate skeleton system developed by Walt Hyneman and his rigging team for another one of our recent films. The animation puppets were complex but not unwieldy – it was possible to reduce the level of control for blocking, and ramp it up for more detailed work. The deformation rig used a combination of pre-simulated muscle and cloth deformations, with additional dynamic muscle jiggle tools. It was complex but allowed us to save a lot of time on the back end by not having to run dynamics passes on every shot.

Can you tell us how you made them interact with the real actors?
Eric Petey, animation supervisor // Sometimes actions that involved interaction with actors were motion captured, using the shot footage as reference. Of course some modification is necessary to make contact points and eye lines work. But often it’s just a matter of the animators and supervisory staff keeping a close watch on all the subtle things that make interaction with performers believable. In that way it was not unlike most films that combine live action and CG characters.

A big thanks for your time.

// WANT TO KNOW MORE?

Digital Domain: Official website of Digital Domain.

© Vincent Frei – The Art of VFX – 2011

THOR: Kelly Port – VFX Supervisor – Digital Domain

Kelly Port began his career in 1995 at Digital Domain on APOLLO 13. He participated in many of the studio’s projects, such as TITANIC, RED PLANET and THE LORD OF THE RINGS: THE FELLOWSHIP OF THE RING. He went on to oversee the effects of movies like WE OWN THE NIGHT, STAR TREK and THE A-TEAM.

Can you tell us at what point did Digital Domain enter the project?
We certainly had a presence on the set supporting Wes Sewell, the overall visual effects supervisor, especially in regards to gathering on-set reference photography and camera data. We set up a texture photography booth and shot all the Frost Giants in costume – as well as the villagers for the prologue sequence – and we covered a lot of the sets. These are all critical elements that come into play later in post-production, when we are developing assets and need the ability to match camera moves. Scott Edelstein was critically involved on set, and he eventually went on to lead our digital environments team.

How did you design the environment and the character?
Many of the original concepts and designs came from Marvel’s art department and production design team. Of course, we had to then take these concepts and run with them, and develop them in 3D and with much higher detail. In terms of character work, we had Nick Lloyd and Miguel Ortega finalize the Frost Giant designs. When the Frost Beast was being designed, we presented dozens of concepts. Marvel combined some of those ideas with their own concepts, and ended up incorporating everything into a hybrid creature that became the final result. So, as with everything, it was a very collaborative approach.

What was the real size of the set?
The only set that was actually built was the lower area of Laufey’s palace, which is where they have a conversation with Laufey prior to the battle itself. It’s open on one end, like a horseshoe. So anything looking out into that open area was a digital environment, and anything above the set also required extension. We also added more refined architectural elements to the set here and there as needed.

Were there some full CG shots?
In the Jotunheim sequence, we had almost 90 shots that were all-CG. The stereo on those shots, of course, didn’t need to be ‘dimensionalized’ since we provided both “eyes” on all of those.

What were the lighting challenges in these sequences?
The biggest challenge for lighting lead Betsy Mueller was to create not only the mood and tone of the sequence, but also to create the scope of the world our heroes found themselves in. By using depth perspective, layers of light and shadow, and of course atmospheric snow and fog, we were able to address all of those challenges.

The battle includes huge amounts of shattering and breaking up of the ground and structures. Can you tell us your approach and tools and the challenges in producing these effects?
Ryo Sakaguchi supervised the fx animation on the Jotunheim sequence. The really big shots for the fx team were those that involved complex rigid body dynamics (RBD) simulations. All of the structures and the terrain had to be modeled according to some very precise specifications in order for our fracturing algorithms to work correctly. For example, the models couldn’t have any overlapping vertices. Once the models were fractured, we’d run the sim, and after adjusting the parameters over the course of many iterations, we’d get something that looked good. One of the biggest challenges, though, was getting this now broken-up geometry back into our lighting pipeline with proper UVs, textures and displacements. While the initial modeling and UV mapping were all done out of Maya, all of the fracturing and simulations were done out of Houdini with our proprietary tools, then back to Maya for lighting out of PRMan.

What role did matte paintings play in the environments and how did you create them in 3D space?
Our lead matte painters and conceptual artists, like Matt Gilson and Minoru Sasaki, played a critical role in finessing the final look of many of the shots. They not only came up with amazing concepts, they took those ideas into the final picture by using techniques that utilized multiple projection cameras within Nuke and Maya.

There are a lot of atmospheric effects like fog, snow and other particles. What were your techniques to create them?
All of our snow and atmospheric effects were done out of Houdini. We simulated a variety of turbulent wind conditions and densities and saved them out as different libraries. Ideally, we could load a particular library for a given shot and it would work out great; but often we’d have to adjust the simulation slightly. For any given shot, at a minimum we would generate a background, mid-ground and foreground layer of snow and fog. This proved to be enormously helpful to Stereo D in converting the 2D final composite to stereo 3D.
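The layered approach can be sketched in miniature: a "library" is just a reusable particle distribution, and a shot loads separate background, mid-ground and foreground layers at different depth ranges and densities. Parameter names and values below are purely illustrative.

```python
import random

# Hypothetical layer specs: closer layers have fewer, larger, faster-
# drifting flakes; distant layers are dense but nearly static.
SNOW_LAYERS = {
    "background": {"depth": (200.0, 1000.0), "count": 5000, "drift": 0.2},
    "midground":  {"depth": (50.0, 200.0),   "count": 2000, "drift": 0.5},
    "foreground": {"depth": (1.0, 50.0),     "count": 400,  "drift": 1.0},
}

def generate_layer(name, width=100.0, height=60.0, seed=0):
    """Scatter flakes for one named layer within its depth band."""
    spec = SNOW_LAYERS[name]
    rng = random.Random(seed)
    near, far = spec["depth"]
    flakes = []
    for _ in range(spec["count"]):
        flakes.append({
            "pos": (rng.uniform(-width, width),
                    rng.uniform(0.0, height),
                    rng.uniform(near, far)),
            # Closer layers drift faster across frame, giving the
            # parallax cues that also help a stereo conversion.
            "drift": spec["drift"] * rng.uniform(0.8, 1.2),
        })
    return flakes

background = generate_layer("background")
```

Keeping each depth band as its own render element is what let Stereo D offset the layers independently when building the stereo version.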

What are your software and pipeline? Did you create specific tools for this project? I’m especially interested in the compositing tools, and any particular compositing challenges in the project.
In terms of what software we used on the project, it was primarily Maya, Houdini and Nuke. For developing assets, we also used Z-Brush, Mudbox, Mari and Photoshop.

How did you bring Frost Giants to life ? Did you use motion capture?
Eric Petey supervised animation on the project, and yes many Frost Giants were fully animated CG characters. We used motion capture extensively for the Frost Giants, and those motion clips were captured at Giant Studios in Los Angeles. Giant has a great virtual production setup where you could have two 6’ tall performers fighting each other in the mocap volume, for example, but then on the monitor you would see a 6’4” Thor fighting a 10’ tall Frost Giant composited and animated in real time. This was very useful when it came down to physical contact. For example, if Fandral stabbed a Frost Giant in the chest, we needed the motion capture performer to aim higher, like around the neck or head, because the CG character would be much taller than its human counterpart performing in the mocap volume. Ultimately the cleaned-up motion capture worked quite well for many of the shots, but in cases where the director wanted to change the animation and we didn’t have the proper clip, we would have to animate those by hand. And, of course, the Frost Beast was always animated by hand.
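The "aim higher" adjustment described here is simple ratio math: the contact has to land at the right world-space height on the taller CG character, so the performer aims at where that height falls on their shorter scene partner. A sketch with illustrative numbers (the interview gives Frost Giants as roughly 3 m tall and performers around 6 feet):

```python
def aim_height(contact_fraction, character_height_m):
    """World-space height the performer must hit so the contact lands at
    the given fraction of the CG character's height (e.g. 0.7 = chest).
    Pure ratio sketch; the fraction chosen here is illustrative."""
    return contact_fraction * character_height_m

# A ~3 m Frost Giant's chest (taken here as 70% of height) sits at
# ~2.1 m — above head height for a ~1.83 m (6 ft) performer, which is
# why the performer had to aim at the neck or head rather than the chest.
giant_chest_height = aim_height(0.70, 3.0)
performer_height = 1.83
```

The real-time composited view at the mocap stage made this correction visible immediately, instead of being discovered weeks later in animation.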

Can you explain how you create the Frost Beast? What were the main challenges with him?
The Frost Beast was a true “monster mash” of various ideas. Creature designer Nick Lloyd came up with dozens of concepts that we presented to Marvel, and ultimately they came up with a final look utilizing various elements of our designs (as well as their own creative touches). Miguel Ortega built a digital model that was textured by Christopher Nichols using elephant skin as a major reference for its exterior look. The face resembles that of a turtle crossed with the rancor from Star Wars, something Marvel specifically referenced in their design notes. Obviously nothing like the Frost Beast exists in nature, so Eric Petey and his team animated this creature without the benefit of mocap – it was all done by hand to mimic the motion of a large cat, but of course it’s powerful like a rhinoceros and can burst through anything in its path. That created challenges for computer graphics supervisor Eric Fernandes and the fx animation team, which had to shatter ice and rock as the Frost Beast chases after our heroes. It was a real team effort to put such a menacing creature on screen for this film.

Did you enhance some camera moves?
Almost all the camera moves with a live-action component were match-moved from what the production camera was doing, although sometimes we would extend the move out higher or alter it to some degree, in order to fit the shot better. For the all-CG shots, they typically started out as previs from Third Floor or Digital Domain’s internal previs team. We would take that and incorporate it into our world scale and refine it if necessary.

Can you explain how was the collaboration with the stereo team?
Working together with production and post-conversion company Stereo D, we devised four types of stereo shots. Type 1 would be where we didn’t have to do anything, which was equivalent to what would happen on all the non-visual effects shots. Type 2 was an all-CG shot, where we would render a left and right eye, resulting in a “true stereo” shot; we did about 90 of these. Type 3 was where we would provide Stereo D with any layers that would help make their job easier – mattes, z-depth information from the renders, snow and atmospheric layers, etc. Type 4 was where Stereo D would post-convert the plate and provide us with a camera, and then we would render out both eyes for “true stereo” CG elements.
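The four stereo workflows described map cleanly onto a small classifier; this is an illustrative sketch of the taxonomy, not Stereo D's or DD's actual shot-tracking code.

```python
from enum import Enum

class StereoType(Enum):
    """The four stereo workflows devised with Stereo D."""
    NO_WORK = 1          # untouched plate, handled like any non-VFX shot
    TRUE_STEREO = 2      # all-CG shot: render left and right eyes directly
    CONVERSION_AIDS = 3  # hand Stereo D mattes, z-depth, atmos layers
    CONVERTED_PLATE = 4  # Stereo D converts the plate and returns a camera,
                         # then CG elements are rendered in true stereo

def classify_shot(has_vfx, all_cg, plate_converted_first):
    """Route a shot to one of the four workflows from three flags."""
    if not has_vfx:
        return StereoType.NO_WORK
    if all_cg:
        return StereoType.TRUE_STEREO
    if plate_converted_first:
        return StereoType.CONVERTED_PLATE
    return StereoType.CONVERSION_AIDS

# The ~90 all-CG Jotunheim shots fall into the TRUE_STEREO bucket.
assert classify_shot(True, True, False) is StereoType.TRUE_STEREO
```

A shared taxonomy like this is valuable mostly as a contract: both vendors know, per shot, exactly who renders which eye and which deliverables change hands.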

A big thanks for your time.

// WANT TO KNOW MORE?

Digital Domain: Official website of Digital Domain.

© Vincent Frei – The Art of VFX – 2011

KUNG FU PANDA 2: Rodolphe Guenoden – Supervising Animator & Fight Choreographer – Dreamworks

Rodolphe Guenoden has been working in animation for over 20 years. At Dreamworks Animation, he has participated in projects such as THE PRINCE OF EGYPT, THE ROAD TO EL DORADO, SINBAD: LEGEND OF THE SEVEN SEAS and KUNG FU PANDA.

What is your background? What was your role(s) on this film?
I have been a traditional animator for the past 22 years, but also tried my hand at CG animation on the first KFP movie. I am also a storyboard artist. On KUNG FU PANDA 2, as with the first one, I served as a story artist, a supervising animator, and fight choreographer.

You are a true Swiss Army Knife, how did you manage so many different roles on this film?
Well, I like taking movies from beginning to end, and especially with these movies. The studio has always been great at letting me express myself in story and animation. Since I had animated the fight scenes in SINBAD: LEGEND OF THE SEVEN SEAS, the studio let me try my hand at choreographing the fight scenes on the Panda movies too, as well as supervising some of the more « talking » sequences. As an animator, my love for acting couldn’t be satisfied with only action scenes, of course.

What was it like collaborating with Director Jennifer Yuh Nelson?
Jen and I have been working together for years now, since SINBAD, so it was a great pleasure to have her direct this movie. We always brainstormed ideas for action scenes in order for those scenes to serve the characters emotionally and make the story progress. An action scene without a tone or character moments becomes very quickly tiring and lacks interest. Jen always pushed to get the best out of those moments.

Can you explain how you approached the script for this new adventure?
In the storyboarding process, the script is never « locked ». As we draw the scenes, the plot and the characters evolve. The script remains a foundation from which we are able to explore. Jennifer has always been fantastic at keeping her focus towards the story she wanted to tell, and still giving us some freedom to explore our scenes.

The film mixes different styles of animation. How did you choose these different approaches and what were your references?
It quickly became part of the pedigree for the Panda franchise to mix animation techniques. The first movie started with Po’s dream, and we wanted it to be treated differently from his « real » world. For this new movie, Jennifer thought it’d be nice to explore Po’s memories in a similar manner. That decision allowed me to animate in 2D again, which I love. The opening of the movie, about Lord Shen’s past, needed to be told as a legend or a folk story people would tell kids. That’s when it seemed like the right choice to use traditional Chinese shadow puppets. The production designer Raymond Zibach and the art director Tang Heng succeeded at giving this beautiful look.

How many seconds of animation were made per week?
Animators are encouraged to produce about 5 seconds of animation per week, depending also on how many characters are on screen. It can vary because of the scenes’ difficulty.

Can you explain in detail the choreography creation of the many battles in the film?
Having been practicing martial arts for over 20 years, it was somewhat natural for me to imagine and draw those scenes. I drew most of the scenes in thumbnails, and also animated some others in 2D, to explore the movement more deeply and satisfy my own animation hunger. On those thumbnail pages, I also wrote specific notes explaining the breakdown of the movements. It was easier and faster for the CG animators to work from those drawings.

What are your references and inspiration for the fight choreography?
The inspiration is guided by the tone and purpose of the scenes. Then I had to imagine how to serve those scenes. Of course, since the first movie, we’ve all been watching numerous martial arts movies of all origins and eras. We also studied the original animal forms the Furious Five are based on through books and videos. But we had to take some liberties too in order for the personalities of the characters to be expressed in their way of moving and fighting.

What was the biggest challenge with Lord Shen?
Ha! I think the biggest challenge was having done Tai Lung in the first movie! We had to create a villain for this new episode that was unique. Since it was decided he was going to be a peacock, we wanted him to still be surprisingly dangerous, even lethal. We also wanted him to be a great sword master. As the Beijing Olympics were going on, I stumbled upon some clips of rhythmic gymnastics and was awed by the flexibility and coordination. I tried to implement that in the way he moves, mixed with academic sword fighting. It makes him graceful and unpredictable.

How have you created and animated the huge plumes of Lord Shen?
His tail was a great challenge, but I wanted him to use it as a limb of sorts while fighting. To explore the possibilities during the development process, I did a 2D test that the Head of Animation Dan Wagner translated into CG. It was important to try things out early enough to help the rigging department visualize what we wanted to accomplish.

Did the wolves cause you any problems?
The problem they caused was their numbers. They were always so numerous on screen that, on some occasions, it slowed the animators’ process of rendering their scenes. But it was also great to have so many « punching bags » around for our heroes!

Were there any shots or sequences that prevented you from sleeping?
Having to help so many animators at the same time for those big battle sequences was quite intimidating. It was a constant juggle, making sure I could feed everybody what they needed while supervising some other sequences. It seems to have been smooth for everybody though.

How long did you work on the film?
I started working on the sequel in 2008, almost right after the release of the first movie, which had started in 2003. So it all adds up to quite a few years with those characters. Because the world created and the characters are so appealing, it never became boring or redundant. It was a fantastic experience for me.

What was the size of your team?
The size of the animation team always varied depending on the sequences we were working on. The battle sequences demanded more hands, obviously. The full animation crew was 50 or 60 animators at the end of the production, but each supervisor generally leads 5 to 8 animators.

What did you take away from this experience?
I love this movie for having pushed things farther. It’s more epic, more ambitious and more emotional. It is great that we get to witness Po grow as a character, see part of his past and see how his relationship with the Furious Five evolves. On a personal level, it was a fantastic professional experience to work with Jennifer as the director. She is a visionary.

What is your next project?
We’ll see….

What are the 4 movies that gave you a passion for cinema?
I guess all of Spielberg’s movies in the 70s and 80s, among others, got me hooked on movie making. BASIL OF BAKER STREET made me want to mix movie making with drawing and become an animator.

A big thanks for your time.

// WANT TO KNOW MORE ?

Dreamworks Animation: Official website of Dreamworks Animation.

© Vincent Frei – The Art of VFX – 2011

X-MEN FIRST CLASS: Vincent Cirelli – VFX Supervisor – Luma Pictures

Vincent Cirelli and his brilliant team at Luma Pictures are back on The Art of VFX. After giving life to the impressive Destroyer in THOR, Vincent discusses his work on the superheroes of X-MEN FIRST CLASS.

How was your collaboration with director Matthew Vaughn and VFX designer John Dykstra?
Vincent Cirelli, VFX Supervisor // We worked primarily with John Dykstra on the film. This was actually our second time working with him, as we also teamed up for HANCOCK a couple years ago. John is one of the top minds in VFX for a reason and it goes beyond pure technical capability and creative vision. He has such an understanding of both the studio side of VFX (managing expectations, etc.) and the way in which effects houses have to work that he kept the relationship very comfortable throughout the process. He would always see the greater good in how to execute workflows and find viable solutions to get around certain roadblocks. We were also impressed with how John was always on the same page with the director.

Payam Shohadai, Executive VFX Supervisor // Dykstra really went out of his way to articulate his ideas in a way that was easily understood. In VFX, concepts can be very nebulous at times and John did an excellent job finding the best way to communicate what he needed from us.

How did Luma get involved on this film?
Vincent Cirelli, VFX Supervisor // Luma was tasked by FOX to work on X-MEN ORIGINS: WOLVERINE a few years ago and we picked up 120 shots, including character work, for the film. The high quality of the work and quick turnaround (about 5 weeks) must have made an impression, because FOX came back to us again to animate the snakes for Uma Thurman’s Medusa character in PERCY JACKSON & THE OLYMPIANS: THE LIGHTNING THIEF. The relationship has kept growing from there.

What are the sequences made by Luma?
Vincent Cirelli, VFX Supervisor // Luma worked on several sequences for the film: character design and animation, green screen work and also set extensions. We designed three characters for the film, Banshee, Havok and Darwin, working with the studio, John Dykstra and our in-house designer, Loic Zimmermann.

Richard Sutherland, CG Supervisor // One of the larger set extensions that we worked on was a giant radio telescope. We were able to reuse this asset across multiple scenes taking place near the X-mansion.

Can you explain to us the creation of the Havok effect?
Raphael A. Pimentel, Animation Supervisor // We were a part of the design process from the beginning, long before receiving any plate photography. We hired a body double that resembled the actor playing Havok in order to get a representation of the physical movements involved during his energy ring attack. By looking at that early on we could realize the full impact and pitch the concept to the studio in the preliminary stages.

What references have you received for the Havok effect?
Vincent Cirelli, VFX Supervisor // John Dykstra was integral to the development of Havok’s energy rings. He suggested we add heat spots to the rings, which would spin and create a feeling of motion as they left Havok’s chest. This element turned out to be a cornerstone of the effect.

How did you create the gigantic dish?
Richard Sutherland, CG Supervisor // We looked at radio telescopes from around the world. The foundation of the design came from scopes we found in New Mexico. However, the film needed something much larger than most of the reference we came upon, so we ended up taking design cues from the New Mexico dish, but looking to large scale structures like bridges for proper architectural scale modeling of the dish that you see in the film.

How did you create the scenes of flying Banshee?
Richard Sutherland, CG Supervisor // For Banshee’s flying scenes, we had to take two different tacks. For shots where the character is at a distance from the camera, we created a digital double for him that we could then manipulate. For shots that were closer in, we worked with green screen elements and altered plate photography to maintain the flying effect.

Can you explain the creation of Banshee’s sound wave?
Vincent Cirelli, VFX Supervisor // The waves of sound that Banshee creates had a few different inspirations. The primary influence came from physics renderings of sound wave movement through space. Luma then developed in-house toolsets to create the geometry for the effect and combined it with fluid simulations inside of Maya. We also looked to nature for the scenes where Banshee is underwater, drawing insight from the way in which dolphins blow out rings of bubbles.
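The physics-rendering inspiration can be illustrated with a toy sketch, purely hypothetical and not Luma’s actual toolset: the base geometry of a sound wavefront is simply a ring whose radius grows linearly with time, the kind of shape that fluid simulations would then be layered onto.

```python
import math

def ring_points(t, speed=340.0, segments=16, origin=(0.0, 0.0, 0.0)):
    """Vertices of an expanding circular wavefront at time t.

    The ring's radius grows linearly with time, the way a sound
    wavefront expands through space (speed defaults to roughly the
    speed of sound in air, in metres per second).
    """
    radius = speed * t
    ox, oy, oz = origin
    points = []
    for i in range(segments):
        angle = 2.0 * math.pi * i / segments
        points.append((ox + radius * math.cos(angle),
                       oy + radius * math.sin(angle),
                       oz))
    return points
```

A production version would emit renderable geometry per frame; the sketch only shows the radius-versus-time relationship.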

Have you created digital doubles for your mutants?
Raphael A. Pimentel, Animation Supervisor // We created a digital double for Banshee, but the more interesting double from an artistic standpoint was Darwin, a shape-shifter whose ability is that of reactive evolution, or almost instantaneous mutation into various forms to protect himself from harm. Because of the changes in Darwin’s physical appearance (growing gills, sprouting a tortoise shell and turning into molten metal), every scene required a new, precise digital double and plate replacement. We had to pay special attention to the actor’s facial expressions to make sure they translated in a believable way through the myriad effects.

Can you explain in detail the creation of the magnificent shot of Darwin’s death?
Richard Sutherland, CG Supervisor // Working with Darwin, our lighting department generated a complex system of 3D procedural textures to transition between a digital human, fire, metal and molten rock while maintaining the actor’s subtle performance as he morphed from one property to the next. In order for the Darwin effect to feel more realistic and complex than a simple wipe across his surface, we developed a series of data passes which allowed us to transition between varying materials in a truly 3D, volumetric way. Since the final image is created using multiple 3D data passes inside of Nuke, it gave us the latitude to make timing changes and design tweaks in a very efficient way.
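The idea of a data-pass-driven transition, as opposed to a screen-space wipe, can be sketched in a few lines of illustrative Python (a hypothetical toy, not the actual Nuke setup): each pixel carries a world-space position from a data pass, and the blend weight between two rendered materials comes from comparing that position with an advancing front.

```python
def transition_weight(world_y, front_y, softness=0.1):
    """0..1 blend weight driven by a world-position data pass.

    Pixels whose world-space height is below the advancing front are
    fully the new material (1.0); a soft band eases the edge.
    """
    t = (front_y - world_y) / softness
    return max(0.0, min(1.0, t))

def blend_materials(skin, molten, world_pos, front_y):
    """Per-pixel mix of two rendered passes, driven by the 3D data
    pass rather than by screen position, so the front follows the body."""
    out = []
    for s, m, wy in zip(skin, molten, world_pos):
        w = transition_weight(wy, front_y)
        out.append(tuple(sc * (1.0 - w) + mc * w for sc, mc in zip(s, m)))
    return out
```

Because the weight depends on 3D position, retiming the effect is just a matter of animating the front, which is the kind of latitude the data-pass approach gives in comp.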

What was the biggest challenge on this project?
Vincent Cirelli, VFX Supervisor // The Darwin character was by far the biggest (but also most rewarding) challenge, as he took the most effort due to all his transformations. However, we also had the most leeway to have fun with that character and play with his look and the effects more than with the other characters.

How long have you worked on this film?
Steven Swanson, Senior Producer // Because of how involved we were in character design, Luma was working on this for about eight months in all. Five months were spent on character renditions, concept art and lighting before the heavy lifting started. Once shot execution began, it was roughly three to four months.

How many shots have you made and what was the size of your team?
Steven Swanson, Senior Producer // 95 shots, 60 crew.

What do you keep from this experience?
Vincent Cirelli, VFX Supervisor // Everything we work on brings an evolution in the techniques we use from a creative standpoint. What we find though is that due to advancing technology, the changing art directors, etc., we are usually forced to develop new tools on every project, creating a new way of working that best fits the needs of the client and our own. That being said, from a technical standpoint, our arsenal of weapons continues to grow. We have a robust software development team at Luma that is expanding our toolbox with every project, inventing new workflows that help us execute shots faster and better than on the previous one. So we can always take that with us into the next project.

What is your next project?
We have just delivered CAPTAIN AMERICA: THE FIRST AVENGER and are currently in production on IN TIME and UNDERWORLD 4: NEW DAWN.

A big thanks for your time.

// WANT TO KNOW MORE ?

Luma Pictures: Dedicated page about X-MEN FIRST CLASS on Luma Pictures website.

© Vincent Frei – The Art of VFX – 2011

X-MEN FIRST CLASS: Stephane Ceretti – Production VFX Supervisor

After his great work on PRINCE OF PERSIA at MPC, Stephane Ceretti joined the team at Method Studios. He oversaw the second unit shoot for CAPTAIN AMERICA. Having worked with VFX legend John Dykstra, Stephane talks about his work as production supervisor on X-MEN FIRST CLASS.

How was your collaboration with director Matthew Vaughn?
Initially, as I joined the project in post, I had very little interaction with him, especially since the movie was still shooting when I was in LA for a few weeks, and I was spending most of my time in the office catching up with the movie and preparing the London office. Visual effects designer John Dykstra was the main point of contact with Matthew. Then, as the project was getting close to finishing, Matthew came back to London, and our Soho-based offices became the hub for 2K reviews and for cineSync sessions with the studios. I could then spend a bit more time with him and get to know him better. I showed him the shots for final approval, projected in 2K, and communicated the notes back to LA, where they were discussed with John over cineSync.

What was his approach about visual effects?
Matthew looks very much at what VFX can do to help tell the story. He does not use VFX just as an aesthetic device: it needs to serve a purpose, which makes it all the more interesting for us. He always questioned shots in terms of where they were in the cut: is this animation working in context, what can we do to this shot to make it more exciting yet meaningful… He also had complete trust in John Dykstra, and who wouldn’t? I really think that made everything much simpler. With the kind of schedule we had to complete the movie, and the number of shots involved and their complexity, you must have this trust. The shots sometimes come together very late in the process, and a director must trust his supervisor that everything will work in the end. John and Matthew were on the same wavelength, and the bond that John created with him from the beginning of the movie made everything much more efficient at the end.


How did you get involved on this film?
I was the 2nd unit VFX supervisor on CAPTAIN AMERICA from April to December last year, and was only supposed to do the shoot, which took place in the UK where I am based, as they were going back to LA for the post. In November, X-MEN was looking for an additional supervisor as they were about to move into post. I knew Denise Davis, the VFX producer, and she knew I was available, so we got in touch. I met John and we discussed what he expected from me, and I told him that the first movie I worked on was under his supervision (BATMAN & ROBIN, done at BUF in Paris, where I started in 1996). It looked like we had a few stories to share! X-MEN was also shooting in London (and most of the time next to us!), and they were supposed to stay in Soho for the post, so it was ideal to jump onto that show once Cap’s shoot ended. I have always liked the X-MEN universe (in fact the whole Marvel Universe) and it felt natural to move from CAPTAIN AMERICA to a group of mutants! I am a VFX supervisor for Method Studios in London, but my agreement with them allowed me to be loaned out to Marvel and Fox on these productions. Method Studios London got involved on the show doing 23 comp shots during Havok’s training montage sequence and in the club where Erik and Charles meet Angel for the first time. These were supervised by Sean Danischevsky.

Can you tell us about your collaboration with a VFX legend like John Dykstra?
As you said, John is an absolute legend. I was so pleased to be working alongside him. First of all, John is a great guy to be with, not just from a professional point of view, but simply as a human being. He is good fun: very serious about things but still trying to have fun while surrounded by all the madness. From a professional point of view, I was in London while they were in LA. I was in charge of taking care of all the smaller vendors on the show that were in the UK and Europe, about 7 of them, with somewhere around 300 shots. I supervised their work to get them to a certain point before their shots could be shown to John for final feedback, and ultimately to Matthew and the studio. I was also in charge of tech-checking shots from most of the vendors, meaning I would scrutinize the shots once they got to 2K and make sure they had no technical flaws. So I was in constant communication with John to ensure that we were always on the same page regarding these shots. We used cineSync a lot internally between us to discuss submissions and establish turnovers and work descriptions for vendors.


What was a typical day like?
Very busy! Our London office was just 2 people: Michael Cheung, my VFX coordinator, who did an amazing job, and myself. We had a very military organization, and that was the only way to get the job done, to be honest. We started each day by gathering all the feedback coming back from LA for our vendors. We set up cineSyncs with all our vendors in Europe and the UK, and would do the calls after downloading their new submissions. We would also download all the overnight submissions from the US, as well as from New Zealand. We had a huge number of submissions every day, so it was essential to get all the submission forms put together so I could review everything before LA woke up. Towards the end of the production we also had to prepare 2K reviews for Matthew while he was in London.
I had some reviews at Cinesite in the afternoon and sometimes at MPC. Around 5PM, cineSyncs between John and the team in LA and our 2 major UK studios: MPC and Cinesite. We had 2K reviews with Matthew around 6/6:30PM and then at 7:30/8PM we had cineSyncs between Matthew and John/Editorial/Studio following the 2K reviews.
After that, we would gather all the latest submissions from the European/UK studios for my reviews and all the potential finals would be sent to LA for reviews with John. And that’s it … Quite intense. But lots of fun!

Were you often on set, and how did that work?
Not really. Rob Hodgson joined the show a few weeks after I arrived, and he took care of all the second unit (and some of the first unit, with John) shooting that happened in and around LA. I was never really needed on set, but I did pay a few visits while I was in LA to talk with John.

You have worked for many years in various VFX studios; how did it feel to be on the client side this time?
I thought it was really interesting to be on this side. I had already supervised entire shows, but always within a facility. The funny thing about it is that I have stood on both sides (vendor and studio). I think I had a lot of empathy for our vendor supervisors! The show was really demanding but good fun, and I really enjoyed working with all the supervisors from all the facilities. I am really pleased with the work all our smaller facilities did on the show; it was really good.


Did you go to the different vendors or all was done via teleconference?
Most of our reviews were done with cineSync. Our vendors were in Germany, France and London. Even for the ones that are close to you, sometimes it is quicker to get on a cineSync, because your day is so tight that you don’t want to spend time in transit.
On the other hand, being in Soho, where all the companies are so close to each other, was really good. It meant we could get to Cinesite and MPC in less than a 5 minute walk. It helped tremendously at the end, when we had 2K projections for Matthew of all the vendors’ work, which took place at MPC. Matthew would come out of a DI, reel or mix review and we could jump to MPC quickly for a finals review at very short notice. That was brilliant.

Ensuring the cohesion of visual effects from so many different VFX vendors must have been a real challenge. Can you explain how you were organized and how you met this challenge?
It took a lot of work. Towards the end of the film, the big battle involves all the mutants, and a lot of facilities were involved on every shot, with interaction between them. It was a bit of a puzzle initially, but big kudos to Denise Davis, our VFX producer, Serge Riou, our digital producer, and Alexa Hale, our VFX coordinator in LA, as well as Rob and John: we came up with a plan for each shot, and with the help of all our vendors it went pretty well! We were all very focused on these “crossover” shots, and they went quite smoothly in my opinion, considering their complexity.

How did the collaboration with the various VFX supervisors go?
It went well. John was the driving force; Rob and I were there mainly to make sure he could focus on the main vendors, the big shots and his interactions with Editorial, Matthew and the studio. Rob did a fantastic job jumping onto the fast-moving train of the LA shoot, and I was there to help make sure the VFX department was open 24 hours a day and that no time was wasted with the time difference and feedback for all our European-based vendors. We were all pushing towards the same goal: finishing the movie in time and making sure we could push the shots until the very last minute.

How did it feel to participate in the X-MEN saga?
It was great fun, especially since it was a prequel, which made it all the more exciting. Seeing all the characters you are used to being played by different actors, with a different style, all of it taking place in the 40s and 60s, was a lot of fun. I was really keen to work with John and Matthew as well.

What was the biggest challenge on this project?
Time and Quantity!

Has there been a shot or a sequence that prevented you from sleeping?
All of them!

How long have you worked on this film?
5 months.

How many VFX shots are in this movie?
Around 1100, but considering the shared shots and the shots that were cut, I think it must have been 1200/1300 …

What do you keep from this experience?
A lot of good memories. I worked with wonderful people in the VFX department and with the vendors as well. It was exhausting and stressful but so rewarding in the end. I like the movie a lot, and I am very proud to have worked on it. I think it’s really fun and fresh. Working with a legend like John Dykstra was a big thing for me, and I have also been very impressed by Matthew Vaughn and Fox’s relentless commitment to push the film towards the success it is today.

What is your next project?
I am back at Method Studios at the moment, in London. We are preparing work on a big movie with Gods and DemiGods in which I will be involved as well as some other exciting projects coming after the summer.

A big thanks for your time.

// WANT TO KNOW MORE ?

Method Studios: Official website of Method Studios.

© Vincent Frei – The Art of VFX – 2011

X-MEN FIRST CLASS: Matt Johnson – VFX Supervisor – Cinesite

After explaining Cinesite’s impressive work on THE CHRONICLES OF NARNIA: THE VOYAGE OF THE DAWN TREADER, Matt Johnson is back on The Art of VFX. He talks about his work on X-MEN FIRST CLASS.

How was your collaboration with director Matthew Vaughn and VFX designer John Dykstra?
Working with both Matthew Vaughn and John Dykstra was terrific. VFX is such a collaborative process, because we are all working together to get the best looking images on screen. The added bonus for me is that like for many of your readers, John Dykstra is a bit of a hero figure. I kept pinching myself on set, thinking, “I’m having conversations with the STAR WARS guy!” Which, to be honest, is the reason I got into visual effects in the first place. It’s even better to know that John Dykstra is a nice guy.

How did Cinesite get involved in this film?
Cinesite are lucky to have worked on a bunch of Fox projects, and indeed previous X-MEN movies. I guess a combination of our track record and saying the right things in the bidding meetings helped!

What are the sequences made by Cinesite?
It was great that we had a broad cross section of work on the show, ranging from redesigning the retro sixties version of the Cerebro Room effect, through to creating fully CG environments like Moscow’s Red Square and a post-apocalyptic Washington DC. Additionally, we created the Azazel effects, which included a bunch of cool fight sequences.

How was the shooting of the shots in which Azazel disappears?
We were keen to limit the number of locked-off disappearances, where it’s obvious to the audience how the clean plate has been created. There are, of course, a few of those, but during the fight sequences we tried to allow the camera moves to be as dynamic as possible, going back and shooting tiled clean plates, which we tracked and blended into the principal photography.

Can you tell us how you create the awesome fight sequence between Azazel and CIA agents?
I’m glad you thought it was awesome! As I said, it was a very free-form multi-camera shoot. We had everything from hand-held cameras and Steadicams to the occasional shot on a tripod. We didn’t want the visual effects process to hamper the dynamic nature of the action, so we utilised the clean plate technique I previously described. All credit goes to the stunt guys, and indeed Jason Flemyng (Azazel), who was clearly revelling in the opportunity to become a martial arts star! One of the cool things about Azazel is the fact that he can use his tail as a weapon, which added another dimension to the fight choreography. We applied several techniques, everything from getting the stunt performers to stand on tiptoe, to physically yanking them from side to side with a rope, to give the impression that they were being throttled by Azazel’s tail.

Azazel is, of course, influenced by Nightcrawler from X2, so we were tasked with creating a teleportation effect that is redolent of that movie. However, Azazel’s devilish persona required a more pyrotechnic disappearance. Steve Shearston, our lead FX TD, created complex fluid simulations for both the smoke and fire effects. These simulations interacted with rotomated digi-doubles, which allowed the performers to appear to interact with the smoke and fire elements. For example, as Azazel teleports into frame, the fluid effects dynamically move and interact with the actor’s performance. The 3D artists rendered several different passes that the compositing team could tweak to blend the effects into the plate.

Did you create previz for this impressive scene to help prepare the choreography and the shooting?
We didn’t use previz in this case. We wanted to allow the stunt co-ordinators free rein in creating a cool fight sequence. We worked closely alongside the stunt guys to make sure we took full advantage of the opportunities that Azazel’s character presented.

Did you create a digital double for Azazel and agents?
We did create a fully CG Azazel, however the live action performances were so dynamic that we ended up not having to use it. It’s a cool turntable though!

What references or indications have you received for the teleportation effects?
Like I mentioned earlier, we looked at the Nightcrawler effect in X2. We also referenced a lot of live action pyrotechnic elements in order to make our fluid simulations feel as realistic as possible. We didn’t want the effect to look like a genie appearing in a puff of smoke!

What was it like to recreate Cerebro with a retro look?
One of the things that I liked most about this X-MEN movie was the James Bond 1960s feel. Everything from the costume and production design through to the anamorphic cinematography. All that stuff just looks cool. For the Cerebro effect, Matthew was keen to reference the smoky environment from the first X-MEN movie. What we tried to do in this film, to keep a sense of the old school “optical” nature of the production was to get the smoke or atmosphere to look as real as possible. We shot several blue screen figures, both as groups with simple camera moves and also as static full frame images. We used the camera move as a basis for the different shots and then underwent a complex process of retiming the action to give it the Cerebro sense of travel through Xavier’s mind.

We created the scenes using the 3D capabilities of Nuke and were able to populate these plates with other blue screen elements, which were all positioned correctly within Z space, using a Nuke-created 3D camera to rephotograph the scene. These characters could be made to appear on cue as the camera whooshed past them. The atmosphere in this environment used a combination of elements, both full 3D smoke fluid simulations and live action smoke elements, reprojected onto Nuke geometry in order to impart the correct sense of depth and parallax as the camera pushed through the frame. We took this effect to extremes in the transition through to the Hellfire Club, even subtly adding matching cigarette smoke to our composite, which blended through into the live action photography as a guy brings a cigarette to his lips. The audience will never see it, but it was kind of fun to do!
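Why reprojecting smoke elements onto geometry at the right depth gives correct parallax can be seen from simple pinhole-camera arithmetic (a generic illustration, not the production Nuke script): a sideways camera move shifts an element on screen by an amount inversely proportional to its distance, so near cards slide more than far ones.

```python
def screen_x(point_x, point_z, cam_x, focal=50.0):
    """Horizontal screen coordinate of a 3D point seen by a pinhole
    camera at x = cam_x looking down +Z (point_z is depth from camera)."""
    return focal * (point_x - cam_x) / point_z

def parallax_shift(point_z, cam_move, focal=50.0):
    """How far a point at depth point_z slides on screen when the
    camera translates sideways by cam_move: proportional to 1/depth."""
    return screen_x(0.0, point_z, cam_move, focal) - screen_x(0.0, point_z, 0.0, focal)
```

With this focal constant, a 1-unit camera move shifts an element at depth 10 by 5 screen units but one at depth 100 by only 0.5, which is exactly the depth cue the reprojection preserves.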

How did you create the set extension for Cerebro?
In keeping with the 1960s James Bond style, the inspiration for this set came from the opening sequence of YOU ONLY LIVE TWICE. The brief for this movie was to keep everything very simple. The guys had only just invented the process, so it needed to look a little more low-tech, rather than the shiny splendour of the later movies. So basically, we created a CG geodesic dome, which we then carefully lit and tracked in to the live action green screen photography.


How did you create the shot showing Washington DC devastated?
For the Washington DC scene, the hero mutants were shot on a plinth against a green screen. We camera tracked the action and created an over-sized panoramic matte painting of a post-apocalyptic Capitol Hill. This painting was projected onto basic 3D geometry in Nuke. Within this Nuke environment we added a bunch of live action pyrotechnic smoke and fire effects. We also added scintillating smoke and flame effects for the distant burning buildings on the horizon. In front of the Capitol building, we created a CG mutant army in Massive, based upon scans and texture photographs of five mutants in costume.


What references did you receive for the creation of the various establishing shots?
For the Washington DC view we designed some concepts ourselves and presented these to production, refining them until Matthew was happy with the layout. Google Image Search has proved very useful on this show and I rapidly became an expert on 1960s Cold War hardware and locations.

Can you explain the creation of the impressive shot showing the military parade on Red Square in Moscow?
The Red Square shot, for me personally, was the most fun part of the production. Aviv Yaron, our Head of Visual Effects Photography, and I spent a week in Moscow texture photographing in Red Square. It was a terrific experience and I learnt an awful lot about Russian bureaucracy!

Aviv utilised a robot-controlled pan and tilt head, which enabled us to shoot very high resolution, overlapping tiled panoramas from various locations at ground level in the Square itself. We were also lucky enough to obtain permission to photograph from various windows along the façade of the GUM department store. This building was the closest to the actual angle required in the plate. These high resolution images were used to create both fully 3D buildings, based upon photomesh photogrammetry techniques, and projection elements that formed part of a vast Nuke layout.
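As a back-of-the-envelope illustration of such a tiled shoot (the numbers here are hypothetical, not the actual rig settings), the frame count for one panorama row follows from the per-frame field of view and the chosen overlap fraction:

```python
import math

def tiles_needed(total_fov_deg, frame_fov_deg, overlap=0.3):
    """Frames needed to cover total_fov_deg in one panorama row when
    consecutive frames overlap by the given fraction: after the first
    frame, each new frame adds frame_fov * (1 - overlap) of coverage."""
    if frame_fov_deg >= total_fov_deg:
        return 1
    step = frame_fov_deg * (1.0 - overlap)
    return 1 + math.ceil((total_fov_deg - frame_fov_deg) / step)
```

For example, a 40-degree horizontal field of view with 30% overlap needs 13 frames to cover a full 360-degree row.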

David Sewell, our Compositing Supervisor, worked extensively on this shot, along with Anthony Zwartouw, our CG Supervisor, in order to blend the 2D projections and CG buildings successfully. As I mentioned earlier, I became an expert on Soviet military hardware, finding footage online of the actual 1962 military parade; we matched the vehicles exactly to the types used in that period. The various vehicles were all modelled in 3D, including rocket launchers, tanks and armoured personnel carriers. These vehicles were augmented by a marching Soviet army, cheering crowds and the more static members of the Politburo. All the people were Massive simulations, with marching based upon multi-camera reference video I took of an actual Russian soldier. We modelled and textured a variety of soldiers and Russian citizens and used them to populate the scene.

What was the biggest challenge on this project?
The biggest challenge was also what made it most interesting: the diversity of the work. In any given dailies session I could be looking at anything from the latest iteration of a Soviet tank to the dynamic fluid simulations of the teleportation effect. It was nice to have such a wide variety of work, and a team who were so keen on getting their work to look cool.

Has there been a shot or a sequence that prevented you from sleeping?
I never get to sleep during post production. I’m one of those people who will wake up at 3am and instantly start thinking about work. I think some of my best ideas come in this sleep deprived state.

How long have you worked on this film?
I was involved in the movie throughout shooting until the end of post production, which totalled about nine months.

How many shots have you made and what was the size of your team?
We worked on 115 shots.

What do you keep from this experience?
Apart from a great trip to Moscow, I met lots of good people, some of whom I’m working with on my next project. Plus, as I said earlier, meeting a childhood hero and finding out that he’s a nice bloke was also gratifying.

What is your next project?
I’m already hard at work on the next project, but apparently I can’t tell you what it is yet!

A big thanks for your time.

// WANT TO KNOW MORE ?

Cinesite: Dedicated page about X-MEN FIRST CLASS on Cinesite website.

© Vincent Frei – The Art of VFX – 2011