KICK ASS: Mattias Lindahl – VFX Supervisor – Double Negative

Mattias Lindahl, from Sweden, began his career at LEGO before moving to London, where he worked at Jim Henson’s Creature Shop and then joined Double Negative in 2001. After nearly ten years at Double Negative, he returned to Sweden this year and now works at Fido.

What is your background?
After graduating from “Computer Graphics for Media Applications” in Skellefteå, Sweden in 1997, I started my 3D career in the digital department at LEGO in Denmark. I then moved to the UK, where I joined Jim Henson’s Creature Shop in 2000. I started working for Double Negative in 2001 on the submarine film BELOW. I stayed at Double Negative for a long time, up to February 2010. I returned to Sweden this year and joined Sweden’s largest VFX firm, Fido, in Stockholm.

How was your collaboration with director Matthew Vaughn?
I already knew Matthew from having worked with him on STARDUST. I worked closely with him from the very beginning of previs when we began KICK ASS, back in summer 2008, all the way through shooting and to the very last day before the film was shot out in February 2010. It was a long job…!

What was the challenge on this project?
The challenge was creating nearly 850 shots for an independent film on a tight budget. We always had to come up with cost-effective solutions.

Can you explain in detail the opening scene?
This was the first sequence that we prevised. It was great for me since the shots that we created in previs (compositions and actions) were pretty much what ended up on film. All the plates were shot in Toronto, so a big part of the job was to make it look like New York City. Sam Schweir (Double Negative) and I spent a week on top of skyscrapers in New York taking thousands of stills (for various scenes) to be used as backgrounds and 2.5D projected matte paintings. All the stills were taken at three exposures, stitched together and baked into OpenEXR format. The stuntman was later shot on greenscreen. For the shot where he leaps over the edge, we ended up replacing the legs in CG since the real ones didn’t really work for the action. The shot where he crashes into the car is built up out of several elements: an SFX wired car on location, a greenscreen crowd, a CG building and a greenscreen stuntman that we dropped into a stack of boxes.
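For readers curious about the bracketed-stills workflow mentioned above: the interview doesn’t describe Double Negative’s actual tools, but a minimal sketch of merging three exposures into a linear HDR image and writing it as OpenEXR might look like this (file names and exposure times are invented for illustration).

```python
# Minimal sketch, not the Double Negative pipeline: merge three bracketed
# stills into a linear HDR radiance map and write it out as OpenEXR.
import cv2
import numpy as np

paths = ["rooftop_under.jpg", "rooftop_normal.jpg", "rooftop_over.jpg"]  # hypothetical
exposure_times = np.array([1/250.0, 1/60.0, 1/15.0], dtype=np.float32)

images = [cv2.imread(p) for p in paths]  # 8-bit LDR brackets

# Recover the camera response curve, then merge to a linear HDR image
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)  # float32, linear

# Recent OpenCV builds need OPENCV_IO_ENABLE_OPENEXR=1 set to write EXR
cv2.imwrite("rooftop_merged.exr", hdr)
```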

Can you tell us how you have conceived and achieved the animated sequence at Fido?
A 2.5D technique was used throughout to realise the Comic Book sequence. From the very beginning, the idea behind the sequence was that you should be able to stop it on any frame and it should look like a frame from the graphic novel. So it became apparent early on that we had to work together with John Romita Jr and his team (Tom Palmer and Dean White).

John created a set of storyboards based on the lines from the script. We then took the boards and created an animated previs. A lot of work went into the storytelling and the use of three-dimensional moves through the comic book world. Once the previs had been approved by Matthew, I flew over to see John in New York. Final tweaks were done to the compositions of each frame in conjunction with John. Once John and his team had finished the artwork, the team at Fido built and tweaked the geometry to fit John’s drawings. The artwork was then projected onto the 3D geometry to allow us to travel around it in 3D space.
This sequence was what first got me excited about this project. It really was great to work on this piece together with John. He is such a legend in his field and getting the opportunity to bring his iconic artwork into a cinematic experience was a great honour.
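As a toy illustration of the camera-projection idea behind this kind of 2.5D work (the actual Fido setup is not described here), the sketch below assigns UVs to geometry vertices by projecting them through a notional camera; the camera parameters and vertices are made up.

```python
# Toy sketch of camera projection ("2.5D"): assign each vertex a UV by
# projecting it through the camera that notionally holds the flat artwork.
import numpy as np

def project_uvs(verts, cam_pos, focal, width, height):
    """Project world-space vertices into normalized image coordinates (UVs)."""
    rel = verts - cam_pos                      # camera at cam_pos, looking down -Z
    x = focal * rel[:, 0] / -rel[:, 2]         # perspective divide
    y = focal * rel[:, 1] / -rel[:, 2]
    u = x / width + 0.5                        # map image plane to 0..1 UV space
    v = y / height + 0.5
    return np.stack([u, v], axis=1)

verts = np.array([[-1.0, 0.5, -4.0], [1.0, 0.5, -5.0], [0.0, -0.5, -6.0]])
uvs = project_uvs(verts, cam_pos=np.array([0.0, 0.0, 0.0]),
                  focal=35.0, width=36.0, height=24.0)
print(uvs)  # sample the artwork at these UVs to texture the geometry
```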

How did you design the sequence of first person view?
The idea was to create a Doom-style effect with Mindy’s POV shots. It was created and shot by 2nd unit. Tim Maurice-Jones and Peter Wignall were very much instrumental in the making of this sequence. Wyld Stallyons designed the IR goggles interface and I put the shots together at The Senate.

What were your references and influences to the building and the apartment of Frank D’Amico?
Matthew wanted Frank’s apartment to be a 1930s-style high-rise. The idea was that this building would have been one of the taller buildings in Manhattan, but many much taller and more modern buildings have been built since, so whenever you look at the building it is dwarfed by the new enormous skyscrapers. The building itself was modeled on “Commerce Court North” in Toronto, which was the first high-rise to be built in Toronto, back in 1930.

How did you create them?
For the exterior, plenty of reference images were taken on location in Toronto. We also carried out an extensive survey of the building, using a TPS Total Station. We actually ended up doubling the width of the building, since it became clear that the interior design of Frank’s apartment would not fit inside it. All the views out of the apartment were created using the photography acquired in New York. The team at Double Negative (headed up by 2D Supervisor Peter Jopling and 3D Supervisor Stuart Farley) did the look development, and Lipsync Post composited the bulk of the shots.

What did you do on the final confrontation in the apartment?
When Mindy runs wild in the corridor, all the blood hits are composited practical elements. I headed up a four-day elements shoot. We shot loads of blood hits, exit and entry wounds, fire, smashing glass, muzzle flashes, bullet hits, etc. We also created a CG knife, rope, gun clips and gun.
When Dave shoots the crap out of the apartment there were greenscreen comps of the New York exterior, tracer fire from the gatling guns, CG shells from the gatling guns, CG breaking glass, jet-pack effects, set extension on Frank’s apartment and additional smoke and debris.

Can you tell us about the flight scenes of the final sequence?
It was all shot motion control. We shot helicopter plates in Toronto and New York. These plates were then tracked and previs characters were animated to get sign-off on the flight of the characters. This animation then formed the basis for the motion control shoot. There were some big fly-bys in the sequence and the moves were far too big for us to be able to shoot them at 1:1 on the greenscreen stage. So we used a system called aim-cam that we developed at Double Negative. It basically allows you to take the z-depth out of a move. This is what we shot. We then reverse engineered the moves and put the z-depth back in the comp with 3D information passed on from Maya to Shake using a proprietary tool called dnplane-it.
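Aim-cam and dnplane-it are proprietary Double Negative tools and are not documented in this interview. Purely as a conceptual sketch of the idea of removing the depth component of a move, the toy example below splits a camera path into the part that travels toward the subject (the “z-depth”) and the residual, much smaller move; all numbers are invented.

```python
# Conceptual sketch only, not aim-cam: split a camera move into the component
# toward the subject ("z-depth") and the residual move that could be shot at
# 1:1 on a stage, with the depth re-applied later in the comp.
import numpy as np

def split_depth(cam_positions, subject):
    """Split each camera offset into depth (toward subject) + residual move."""
    offsets = cam_positions - cam_positions[0]
    aim = subject - cam_positions[0]
    aim_dir = aim / np.linalg.norm(aim)
    depth = offsets @ aim_dir                  # scalar travel toward the subject
    residual = offsets - np.outer(depth, aim_dir)
    return depth, cam_positions[0] + residual

# Hypothetical fly-by: three frames of a camera closing 90 m on a subject
cams = np.array([[0.0, 10.0, 100.0], [2.0, 10.0, 60.0], [4.0, 10.0, 10.0]])
subject = np.array([0.0, 10.0, 0.0])
depth, stage_move = split_depth(cams, subject)
print(depth)       # the travel to put back in the comp as z-depth
print(stage_move)  # the much smaller move that fits on the greenscreen stage
```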

In some flight shots, the motion blur is sometimes very strong so we don’t see the characters. Did you have some problems with those shots?
I suppose it is just the nature of an object flying past camera at high speed shot at 24fps. If we had not used the right amount of motion blur, we would have ended up with a strobing effect.

In the final sequence, the light goes from night to day. That must have been quite a puzzle to match everything up, right?
Yep, you could say that again. It was tough, but it was a great challenge set by Ben Davis (DOP). Ben had this idea that it’s just before dawn when Mindy arrives at the apartment. When Dave turns up on his jet-pack there is a little bit of light in the sky. Magic hour arrives when Dave and Mindy enter Frank’s study and the full sunrise happens as our heroes escape on the jet-pack. Because of this elaborate change of light throughout the sequence, I had to make sure that we were covered when we did our stills shoot. This meant that we had to photograph each location we went to in New York in daylight, at dusk or dawn, and at night. Because we took all the stills bracketed (three different exposures), we then had a huge range to grade the backgrounds to fit all these subtle light changes throughout the sequence.

As the production VFX supervisor on the film, how did you choose the sequences that would be made by other studios?
I had to look at the various vendors’ strengths and give them the work that I felt comfortable they would finish to the highest standard. As with everything in this world, it’s also about the money. So Andy Taylor (VFX Producer) had to make sure they could deliver on budget.

Can you explain how the sequences were distributed among the other studios?

Double Negative:
The Armenian opening sequence, all Frank’s apartment exterior shots (CG building), the Atomic Comics street extension, Dave hit by the car, look development for Rasul’s interior and rooftop, look development for the views out of Frank’s apartment, the warehouse set extension for the New York background, the Russian getting nuked in the microwave, Dave shooting up the apartment on his jet-pack, Frank shot by the bazooka, Dave and Mindy escaping on the jet-pack, and Mindy’s rooftop. Dneg also did all the previs.

The Senate:
Atomic Comics exterior views, Rasul’s interior and rooftop, warehouse on fire, Mindy’s POV fight in warehouse and Big Daddy on fire.

Lipsync Post:
Dexter Fletcher in car crush, Mistmobile interior driving shots, Mindy goes wild in Frank’s apartment, Exterior views out of Frank’s apartment.

Ghost:
Screen inserts, cinema sign, car crush New York extension.

Fido:
The Comic Book Sequence: a 1.5-minute-long full-CG sequence that tells the backstory of how Damon and Mindy became Big Daddy and Hit-Girl.

Wyld Stallyons:
Screen insert artwork designs for computers, mobile phones and security cameras.

How many shots were done by Double Negative?
150ish

What was the sequence that prevented you from sleeping?
The Comic Book Sequence. It was such an important sequence to get right, both visually and from a storytelling point of view. It went through many different iterations before we got something that everyone was happy with.

What did you keep from this experience?
Never give your phone number to John Romita Jr. (laughs).

What is your next project?
Some very interesting high-end projects at Fido. But it’s too early to discuss them.

What are the four films that gave you the passion for cinema?
THE GRADUATE, THE BIG BLUE, THE ABYSS and NATURAL BORN KILLERS.

Thanks a lot for your time.

// WANT TO KNOW MORE?
Double Negative: Dedicated page to KICK ASS on Double Negative website.
Fido: Dedicated page to KICK ASS on Fido website.

© Vincent Frei – The Art of VFX – 2010

AVATAR: Interview Neil Huxley – Art Director – Prime Focus

Neil Huxley worked for more than five years at Digital Pictures Iloura as a Flame operator before coming to the United States, where he worked as an art director at yU+co on movies such as GAMER and WATCHMEN. He is the one who created the latter’s beautiful opening title sequence. In 2009, he joined Prime Focus.

Hi, can you explain your career path in VFX?
My first job after graduating was UI design in interactive media production. In 2002 I started in VFX as a Flame op at Digital Pictures Iloura in Melbourne, and then moved more into VFX design after art directing and designing the SALEM’S LOT title sequence for TNT. I moved to LA in 08, where I worked as an art director for yU+co. There I directed some cool broadcast projects, idents, title sequences etc… and art directed the title sequence to Zack Snyder’s WATCHMEN. The Mark Neveldine and Brian Taylor-directed movie GAMER in 2008/09 was the first project where I really tackled interface design in a film context. That project then led me to AVATAR.

How did Prime Focus get involved on AVATAR?
I think it was Chris Bond and Mike Fink’s relationship with the VFX producer Joyce Cox, and our showreel, that landed us the gig. We also had experience with stereoscopic movies like JOURNEY TO THE CENTER OF THE EARTH. Originally I think we only had the Ops Center at the start of production, but as the project progressed we got more shots from other vendors who had too much on their plates, plus James Cameron really liked what he was seeing from us.

What are the sequences made at Prime Focus?
We worked on over 200 shots which included the Ops Centre, Biolab, and Hells Gate exteriors.

What elements did you receive from the production and Weta?
Well we would receive a number of assets depending on the shot and the sequence. Everyone had a look to match to so we would share assets as much as production would allow. We were sent everything from on set photography, concept art, in progress renders from other vendors, 3D models, textures etc etc. It was very exciting for us to see what other vendors were working on.

How did you design the hologram? And the one with Home Tree in particular?
We worked with Jim on the basis that this table would display multiple satellite scans orbiting Pandora, so we looked at LIDAR imagery. We wanted the table projections to be particle-based to mimic LIDAR mapping, so we used our in-house particle renderer Krakatoa. A lot had to be modeled in house, or at least reworked, since it was previs-quality MotionBuilder files and not high-res enough. The Home Tree had to be rebuilt so we could generate Krakatoa PRT Geo Volumes, a particle grid (level set) representing geometry, to mimic the LIDAR-scan look Jim wanted. Home Tree in particular was re-modeled based on production’s concept art. We then added projection beams, icons, glows and dust motes for added detail.
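Krakatoa and its PRT Geo Volumes are Prime Focus/Thinkbox tools and are not detailed here. As a minimal illustration of the underlying idea of turning geometry into a particle cloud for a scan-like look, the sketch below scatters points uniformly over triangles; the mesh data is made up.

```python
# Minimal sketch (not Krakatoa itself): scatter points over a triangle mesh
# so geometry can be rendered as particles, roughly mimicking a LIDAR-scan look.
import numpy as np

def scatter_points(tris, points_per_tri=1000, rng=np.random.default_rng(0)):
    """Uniformly sample points on each triangle via barycentric coordinates."""
    out = []
    for a, b, c in tris:
        r1 = np.sqrt(rng.random(points_per_tri))
        r2 = rng.random(points_per_tri)
        # uniform barycentric sampling over the triangle a, b, c
        pts = ((1 - r1)[:, None] * a
               + (r1 * (1 - r2))[:, None] * b
               + (r1 * r2)[:, None] * c)
        out.append(pts)
    return np.concatenate(out)

tris = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
    [[1, 0, 0], [1, 1, 0], [0, 1, 0]],
], dtype=float)
cloud = scatter_points(tris)
print(cloud.shape)  # (2000, 3) points ready to render as a particle pass
```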

Were you able to propose ideas or did the artistic team of James Cameron already determine everything?
Jim and the production were always open to ideas – some of the screens and animation designs were nailed the first time, other elements took a few variations and revisions – it was great to work with a director with such a strong creative vision; you know exactly the direction in which the captain is steering the ship, so to speak.

Can you explain to us the creation of an Operations Room shot?
The Ops Center and Bio Lab scenes in AVATAR included interactive holographic displays for dozens of screens and a ‘holotable,’ each comprising up to eight layers, rendered in different passes and composited. The Ops Center itself had over 30 practical plexes alone. To enable easy replacement of revised graphics across the massive screen replacement task, we developed a custom screen art graphic script, SAGI. This enabled us to limit the need for additional personnel to manage data, deliver the most current edit consistently, reduce error by limiting manual data entry and minimize the need for artists to assemble shots.

Our pipeline department built a back-end database to associate screen art layers with shot, screen and edit information, and a front-end interface to enable users to interact with it. The UI artists could update textures in layers, adjust the timing of a layer, select shots that required rendering, manage depth layers by adding and deleting as necessary and view shot continuity — while checking the timing of screen art animation across multiple shots.
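SAGI and the database described above are proprietary to Prime Focus and aren’t specified further in the interview. Purely as a toy sketch of the kind of association it describes (screen-art layers keyed by shot, screen and edit information), here is a minimal SQLite example with invented table and column names.

```python
# Toy sketch of a screen-art back-end: associate layers with shot, screen and
# edit info so queries like "which layers belong to this shot?" are trivial.
# Not the actual SAGI schema; all names are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE shots   (shot_id TEXT PRIMARY KEY, edit_version TEXT);
CREATE TABLE screens (screen_id TEXT PRIMARY KEY,
                      shot_id TEXT REFERENCES shots,
                      screen_type TEXT);            -- e.g. 'plex', 'immersive'
CREATE TABLE layers  (layer_id INTEGER PRIMARY KEY,
                      screen_id TEXT REFERENCES screens,
                      depth INTEGER, texture_path TEXT, start_frame INTEGER);
""")
db.execute("INSERT INTO shots VALUES ('OPS_0120', 'v014')")
db.execute("INSERT INTO screens VALUES ('holotable_A', 'OPS_0120', 'holotable')")
db.execute("INSERT INTO layers VALUES (1, 'holotable_A', 0, 'scan_bg.exr', 1001)")

# e.g. list the layers for one shot, nearest depth first
rows = db.execute("""
    SELECT l.layer_id, l.texture_path FROM layers l
    JOIN screens s ON l.screen_id = s.screen_id
    JOIN shots  sh ON s.shot_id  = sh.shot_id
    WHERE sh.shot_id = 'OPS_0120' ORDER BY l.depth""").fetchall()
print(rows)
```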

The immersive screens were treated as a special case because of the sheer size of the practical plex glass element. In the case of creating graphics for the immersive screens, there were several unique factors and challenges to consider:

–The large size and prominent placement of these screens
–Their curved, semi-circular shape that we see from both sides
–The background layer is displayed as a “window to the world”, behaving like a world-space environment instead of a localized overlay

For the background layers, virtual environments, flight trajectories and icons were modeled and animated in 3D animation software, then rendered with a stereo camera rig. Due to the extreme curvature of the immersive screens, special UV pre-distortion algorithms were applied to the source imagery to ensure that the After Effects animation graphics would appear correctly once mapped onto the 3D geometry modeled to match the practical immersive screens. The resulting sequence was then processed via the SAGI/ASAR pipeline, with special attributes associated with the immersive screen type invoking a scripted UV mapping system to emulate a “virtual periscope” effect as the immersives rotated to match the action of the practical screens in the plates.
Additional passes were created by the lighting and rendering team to help better integrate the screens into the photography, such as reflections, lighting and clean plate elements.
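The actual UV pre-distortion algorithms used on the show aren’t described, so the following is only a toy sketch of the general idea: if the curved screen geometry was UV’d by a flat planar projection across its chord, a flat graphic can be pre-warped so it reads uniformly along the arc. The half-angle, image size and the planar-UV assumption are all illustrative.

```python
# Toy sketch of UV pre-distortion for a semi-circular screen (not the AVATAR
# implementation).  Assumes the geometry's UVs come from a planar projection
# across the chord, while the graphic should be uniform in arc length: each
# output column is looked up at the arc-length position the planar UV maps to.
import numpy as np

def predistort_columns(image, half_angle=np.radians(80)):
    """Resample image columns so a planar-projected UV shows uniform arc length."""
    h, w = image.shape[:2]
    u = np.linspace(-1.0, 1.0, w)                  # planar UV across the chord
    theta = np.arcsin(u * np.sin(half_angle))      # angle on the arc for that chord position
    src_u = theta / half_angle                     # uniform-in-arc-length source column
    src_cols = np.clip(((src_u + 1.0) * 0.5 * (w - 1)).round().astype(int), 0, w - 1)
    return image[:, src_cols]

flat_graphic = np.tile(np.linspace(0, 1, 512), (64, 1))   # dummy 64x512 gradient
warped = predistort_columns(flat_graphic)
print(warped.shape)
```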

Did the stereo cause you trouble?
The stereo pipeline was already set up from the team’s work on JOURNEY TO THE CENTER OF THE EARTH, so we were ready. We had stereo dailies at least three times a day, which really helped in pushing these shots through. The stereo problems were, I’m sure, the same as for anyone else doing stereo projects – left-eye/right-eye discrepancies, convergence issues etc. – which all get ironed out as you go.

Have you worked with other studios like Hybride for the screens or Framestore for the Hell Gate exteriors?
We matched a look that was established in one of Dylan Cole’s amazing concept matte paintings for the Hells Gate exterior for a particular shot. I think Framestore did the bulk of that work so they provided us with some great reference too.

Prime Focus has many branches worldwide. Do you allocate sequences between them or all was centralized in Vancouver or Los Angeles?
Most of the work was done in LA under the guidance of Chris Bond, with some additional support from the Winnipeg studio.

How was the collaboration with James Cameron and Jon Landau?
Working with James Cameron and Jon Landau was an amazing experience for all of us. One I would repeat again without hesitation.

What did you keep from this experience?
We were a part of one of the biggest, most spectacular films of all time, and I got to live out a schoolboy dream of working with James Cameron.

What are the 4 movies that gave you the passion for cinema?
Tough question! There are so many films that have inspired me over the years. My brother and I would sit all day in front of the TV in our underpants on summer break and watch movies religiously. I think one summer we watched BIG TROUBLE IN LITTLE CHINA like 30 times! I think the movies that really affected me as a kid were BLADE RUNNER directed by Ridley Scott, THE TERMINATOR and ALIENS from James Cameron and John Carpenter’s THE THING.

Thanks for your time.

// WANT TO KNOW MORE?
Prime Focus: Dedicated page to AVATAR on Prime Focus website.

© Vincent Frei – The Art of VFX – 2010

OCEANS: Arno Fouquet – VFX Supervisor – L’E.S.T.

Arno Fouquet and Christian Guillon from L’E.S.T. were responsible for overall visual effects supervision on OCEANS, the new documentary feature by Jacques Perrin. The visual effects were distributed among BUF, Mikros Image and Def2Shoot.

What is your background?
After audio-visual studies at a college in Valenciennes, I started working at Excalibur, as an intern at the beginning. It was a famous on-set special effects company. We used rear projection, motion control shoots, models, matte paintings (not digital, painted on glass)… it was exciting. Then I did my military service in the photo department of the Air Force; that’s where I discovered “digital effects” by faking photos in Photoshop for the base’s internal newspaper. When I left the army, Excalibur had begun to equip itself with machines for digital effects. The machines were available… and so was I… I started with lots of tutorials and trained myself on effects software.
When a first movie with digital effects arrived at Excalibur, Francis Vagnon, the VFX supervisor, asked me to help make the effects. A few years later, Christian Guillon and Francis Vagnon created L’E.S.T. and asked me to be part of the adventure.

Can you tell us about L’E.S.T.?
Since L’E.S.T.’s creation, Christian Guillon has proposed a new approach to visual effects for film, based on the general idea of engineering: what is essential is not to own the tools, since anyone can do that, but to design, organize and coordinate their use.
L’E.S.T. is a traditional structure, focused on the job of visual effects supervision and limited to a narrow scope: the feature film.
This approach leads us to produce in-house only a small part of the effects entrusted to us, and to outsource the larger part according to their nature and/or quantity, while retaining overall responsibility.
What I like about this philosophy is that we no longer see the other visual effects companies only as competitors but as partners.

How was your collaboration with Jacques Perrin?
Pure happiness. Mr Perrin is a director and producer who commands respect. I admit I started the project with some apprehension. Jacques Perrin is not what you would call an effects filmmaker, and OCEANS is primarily a documentary film, whose shooting had started four years earlier.
It was clear that we were not going to work with Jacques Perrin in the same way as we did with Frederic Forestier, director of the latest ASTERIX.
With Christian Guillon, we set up various tools and working steps so that Jacques Perrin and Jacques Cluzaud (the co-director) would not feel they were losing control of the effects sequences. In the end, our two directors were like fish in water in the middle of the VFX.

How did you decide the sequence’s attribution to the different VFX studios?

From the preparation onwards, we separated the effects into three parts:
– We gave Mikros the “gallery” sequence, in which actors walk through a set that is 90% CG. We have been working with Mikros for a very long time. We had just finished PARIS 36, on which Mikros did a very good job with digital set extensions. It seemed pretty obvious that this sequence was for them.

– The second big piece was for BUF: the “Planetarium” sequence, a shot that is an upward camera move going from a stormy sea to a wide shot of the Earth with a satellite in the foreground. For these two sequences there was graphic research, but also a certain technical prowess, because the shot had to cut directly against a sequence of a real storm at sea. Remember that this film is called OCEANS; Jacques was more than attentive to the continuity and the veracity of the raging sea. There was a lot of R&D on this sea.

– The third piece was all the effects we call compositing effects, like the rocket launch sequence. These effects were entrusted to Def2Shoot, which has the advantage, among others, of being in the same place as the film lab (Digimage). It was a very good first collaboration with them.

How was the collaboration with the different supervisors?
I must be lucky; so far I have always gotten along very well with all the supervisors I have worked with. On this film, we worked with Hugues Namur (Mikros), Nicolas Chevalier (BUF), Frederic Moreau and Bastien Chauvet (D2S). There was always a true collaborative effort between us. I saw them from the preparation onwards, and we discussed possible approaches (each company has its own way of working and its own methodologies). They were usually present on set, and obviously they were there for all stages of post-production. I was fortunate to work with very good supervisors, who have nothing to prove and therefore no ego problems. Our common point is that we all do this job with the same passion.

Did L’E.S.T. only do supervision, or did you also make some effects?
We actually made some effects on a small sequence. I knew that this sequence would require a lot of round trips with editorial and then with the digital grading. These round trips take a long time. By mutual agreement with Christian Guillon, we decided to do this sequence internally, to avoid wasting one of our vendors’ precious time.

Can you explain in detail the creation of the sequences of the Aquarium, Gallery and the Planet?
Aquarium:
For this sequence, we first shot the aquarium in Atlanta. This aquarium is one of the largest in the world, with a glass window 19 meters long and 9 meters high. The moving shot was filmed with a “Revolver” head. The actors were filmed a few weeks later on a greenscreen stage in Paris, again using a motion control head to reproduce the movement of the selected take.

Gallery:
This sequence was filmed in the old ferry terminal, “The City of the Sea,” in Cherbourg. The gallery sequence is the result of a very close collaboration with Jean Rabasse (production designer and art director). This sequence is a mixture of real set and digital set extension, and is also a mix of real taxidermy animals produced by the art department and full-CG animals.

A first rough model of the set (and all the animals) was made from blueprints provided by the art department, in order to start working on the framing of the shots directly in this virtual set. We then held working sessions with the directors, the director of photography (Luciano Tovoli) and the camera operator (Luke Drion). During these sessions, they could unleash their imagination and test all the moves and framings they wanted. All these shots were then taken to the edit and treated like “classic” rushes.
We then took this setup on set, and we asked Mikros to come with a workstation loaded with the scene modeled in Maya. This allowed us to mix, through Cinesoft, the shots that had just been filmed with the 3D scene. Post-production was done in 4K, with the 3D in Maya and compositing in Nuke.

For the planet sequence, which was done at BUF, we followed the same methodology.

What kinds of challenges did OCEANS present and how did you meet them?
OCEANS is primarily a documentary film, with shots and images never seen before. The visual effects of the fiction part must not discredit the veracity of these incredible images. The effects had to be invisible and perfect; it was out of the question for OCEANS to be spoken of as an effects movie.

Did you encounter any particular difficulties?
For scheduling reasons, we had to start many effects before the filming of the documentary was completed. The film was still under construction while we were doing the effects. We therefore had to be as flexible and responsive as possible to the changes requested on some big effects.

How many VFX shots are there in OCEANS?
We made 150 shots.

What was the most complex sequence to do?
I think the most complex shot is the satellite shot. As I said, the shot is edited directly after a storm sequence, so the full-CG sea had to connect perfectly with the real sea. The other difficulty involved the modeling of the satellite, because obviously it had to be a real satellite (OCEANS is a documentary film, so it was inconceivable to make a fantasy satellite). Obviously it was not easy to get blueprints for a top-secret satellite that ESA was launching a few months later.

How long have you worked on this project?
More than a year.

What do you remember about this experience?
Meeting incredible people and collaborating with passionate and exciting people.

What is your next project?
For the moment it’s too early; I can’t tell you anything yet.

What are the 4 movies that gave you the passion for cinema?
Many films of the 80s like BLADE RUNNER, DUNE, THE COOK THE THIEF HIS WIFE & HER LOVER, PARIS TEXAS.

Thanks for your time.

// WANT TO KNOW MORE?
BUF: Dedicated page to OCEANS on BUF’s website.
Mikros Image: Dedicated page to OCEANS on Mikros’ website.

© Vincent Frei – The Art of VFX – 2010

GREEN ZONE: Charlie Noble – VFX Supervisor – Double Negative

Charlie Noble has worked in visual effects for over 15 years. He joined Double Negative at its beginnings, working on PITCH BLACK. He has since contributed to numerous projects such as ENEMY AT THE GATES, FLYBOYS and THE DARK KNIGHT. He also supervised THE BOURNE ULTIMATUM, directed by Paul Greengrass, whom he met again on GREEN ZONE.

What is your background?
2D. Film opticals, Parallax Matador, Cineon, Shake

The VFX of GREEN ZONE are almost all invisible. What have you done on this movie?
Thank you. Only “almost”? (laughs)
I think of all the work that I’ve been involved with over the past 20-odd years, I am most proud of what Double Negative delivered for GREEN ZONE. There are around 650 visual effects shots in the movie, a handful being all CG. Our task was to take the Spanish, Moroccan and UK locations and root them firmly in the Iraq of April 2003. All the damaged buildings, aircraft (apart from one Puma) and tanks were ours, as were a lot of the palm trees.

What references did you have to rebuild Baghdad?
We did a considerable amount of internet trawling for reference images from the first half of 2003. As quite a few of the landmark Green Zone buildings have changed since the war, it was important for us to start out from as accurate a place as possible. Using an in-house photogrammetry tool, “dnPhotofit”, we could extrapolate the dimensions of a building using just a few images.

Most textures were hand painted, derived from web-sourced images. To better support the narrative, we subsequently increased the level of damage on a few key buildings to really underline the destruction that was wrought in the Shock and Awe bombing campaign that began in earnest on 21 March 2003.

Have you used previz on the set to help the director and his cameramen?
A few big establisher shots were previzzed but for the most part, the nature of the show didn’t really lend itself to that level of shot planning from a vfx point of view. What we did do was to take previz-quality versions of all our buildings on location with us to use as a live virtual set. Courtesy of Stein Gausereide, at any given location we would set up our camera and snap our CG model to the environment. The camera had IMUs (accelerometers) attached to all three axes, with this data being fed into the Maya camera. Paul and the camera operators could then pick up our camera and wave it around the location and see live what our additions were going to be, with the Maya output laid over the feed from the camera.

The shooting style of Paul Greengrass is very frenetic. Your matchmove and roto artists must have pulled their hair out. How did you manage these aspects?
GREEN ZONE was our third picture with Paul Greengrass, after UNITED 93 and THE BOURNE ULTIMATUM, so we knew what we were up against with regards to matchmoving. The main thing that we have learnt over the years is to record as much camera info as physically possible when shooting. As 90% of the film was shot with zoom lenses, it was vital that we knew the focal length of each frame. To assist here we built encoders that the very helpful camera dept allowed us to mount on their matte rails. These were basically toothed wheels that locked into the focal length ring; whenever the focal length changed, the ring turned, turning our wheel, which sent its data down a line to a laptop that our matchmovers carried behind the camera. After a couple of weeks the equally helpful Dragon Grips kindly offered to carry the laptops for us (mainly to get us out of the way!), before we modified the systems to operate wirelessly. Once back in the UK we shot grids for each lens moving through the zoom range. This gave us accurate focal length measurements for any given position on the ring and a corresponding distortion solve, to enable us to apply the exact lens distortion to the CG for any given focal length.
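To make the calibration idea concrete: the lens-grid shoot gives focal length (and a distortion solve) at a handful of zoom-ring positions, and the per-frame encoder readings are then interpolated against that table. The sketch below is only an illustration of that step, with invented numbers, not Double Negative’s actual tools.

```python
# Hedged sketch: interpolate per-frame focal length and a distortion value
# from a per-lens calibration table built during the grid shoot.
import numpy as np

# calibration from the grid shoot: encoder ticks -> (focal mm, distortion k1)
cal_ticks = np.array([0, 250, 500, 750, 1000])
cal_focal = np.array([24.0, 35.0, 50.0, 85.0, 135.0])
cal_k1    = np.array([-0.12, -0.08, -0.04, -0.01, 0.005])

# per-frame encoder readings recorded on set during a zoom
frame_ticks = np.array([120, 180, 340, 610, 905])

focal_curve = np.interp(frame_ticks, cal_ticks, cal_focal)
k1_curve    = np.interp(frame_ticks, cal_ticks, cal_k1)

for f, (fl, k1) in enumerate(zip(focal_curve, k1_curve), start=1001):
    print(f"frame {f}: focal {fl:.1f} mm, k1 {k1:+.3f}")
```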

In addition to one matchmover per camera on set, we had another matchmover with a Leica TotalStation, surveying each set and the start/end camera positions for each take. Once shots came in, a focal length curve was derived from the on-set data to produce an undistorted plate with an accompanying zoom curve. The matchmovers then typically used dnPhotofit to snap the photography to our surveyed sets to achieve the matchmove. This all makes it sound much easier than, of course, it was, given the low-light conditions and extreme motion blur in some shots! Hats off to all the matchmove crew, led by Dan Baldwin. They did a terrific job.
Once over the matchmove hurdle, the next task was to split the live action up into its relevant layers to enable it to be sat into the virtual environment. One of the fairly atypical aspects of the set extension work on this show was that there is never a clear line between photography and CG; the CG often starts right by camera and extends to the background, with the live action rotoed in and amongst it.
We were fortunate to have some highly talented roto artists on our crew, a number of whom have now gone on to form the core of our new office in Singapore. All rotoscoping on the show used Noodle, our in-house roto tool.
Whilst essential to the process, roto only gets you a certain part of the way and it’s up to the compositors to bring back all the edge detail from the original plate, using all manner of keys, filters and, sometimes, painting. There was nothing easy about any step of the process and even with the best pipelines in the world, it still comes down to some very talented artists working very hard.

How did you recreate the « Shock and Awe » sequence?
The final high wide in the Shock and Awe sequence was an extremely important shot to really demonstrate the enormous scale and unimaginable force of the missiles that rained down on Baghdad around 21 March 2003. We can all remember the footage beamed from the Palestine Hotel where most foreign journalists had been corralled.
The shot starts as Al-Rawi’s convoy of 4×4s pulls out of his gate at speed and the camera rises up from 5ft to 200ft to witness Baghdad under bombardment. We shot a plate of the cars coming out of the gate with a 50ft crane up, on location in Morocco.
This served as invaluable reference for the shot which ended up being entirely CG, due to the need to extend the camera move up and to wash the foreground with light from the mid ground CG explosions.

The huge smoke plumes that we see were created using our in-house fluid solver dnSquirt and some Maya fluids, rendered with our renderer, dnB. We actually sculpted all these vast plumes to match the size and shapes that we saw in the footage from the real thing. Exploding buildings were achieved with our in-house rigid body solver “dynamite”. Hundreds of passes were rendered out: volumetric atmospherics, lighting passes, smoke, fog, explosions, fire, tracer, buildings, exploding buildings, trees (lots of trees, all gently animating), the Tigris river, foreground cars, exhausts, dust kicked up from the road, etc. There were about 150 layers, each with secondary lighting passes and IDs, all expertly knitted together by Sean Stranks in comp, with CG by Dan Neal and FX by Mike Nixon. In addition, on this sequence, our work actually started inside the house as Al-Rawi prepares to leave. We added falling dust and damage to the walls, then when we come outside, we added tracer arcing up into the sky, illuminating CG trees which we added around the courtyard. We also added reflections into car windows of what we were about to see in the high wide. All quite subtle stuff but vital to keep up the energy of this opening scene. We used Houdini L-systems for all the trees, which provided a useful layout tool. The L-system trees were baked out as a series of RIB archives with varying levels of dynamics to simulate anything from gentle wind to chopper wash, so that the animators and the layout artists could pick up pre-animated trees and place them into their Maya scenes using simple bounding boxes for placement. At render time the curves become trunks, fronds and leaves using custom in-house shaders which created displaced surfaces from the curves.
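For readers unfamiliar with L-systems: they grow structure by repeatedly rewriting symbols according to rules, and the expanded string then drives curve or geometry generation. The tiny sketch below only illustrates that rewriting step; the axiom and rules are invented, not taken from the Houdini setups used on the show.

```python
# Minimal L-system rewriter, illustrating the rule-based branching idea
# behind the tree generation (not the actual Houdini setup).
def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# F = draw a branch segment, [ ] = push/pop a branch, + - = turn
rules = {"F": "FF-[-F+F+F]+[+F-F-F]"}
print(lsystem("F", rules, 2))
# Each iteration adds a level of branching detail; in production the expanded
# string would drive curve generation, with detail levels baked out per tree.
```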

Can you tell us more about the airport’s sequence?
The airport scene really encapsulates our work in GREEN ZONE. We’re taking the Moroccan location and plonking it right slap airside in front of what was Saddam International Airport in Baghdad. The arrival of Zubaidi was shot the same way Paul likes to shoot all his dialogue scenes: the action is allowed to play out for nice long takes with two cameras on short zooms in and amongst the action and one long lens off to one side, with cameras sometimes leapfrogging – one re-loading while the other carries on. The scene was shot on an open expanse of tarmac at an airbase north of Rabat. Faced with this style of filming, multiple cameras all looking 360 degrees throughout takes, it was clearly not an option to rig a few miles of bluescreen, half a mile high, round the action, so the roto and comp artists really came into their own here. Matt Smith built the airport and was the CG lead for the sequence. Again, the internet was trawled for reference images from the time and these plus dnPhotofit were used as layout and modelling guides. Everything was modelled from scratch and textures hand painted. There had been a fire-fight at the airport but we added extra damage to the terminal building to support the narrative. This damage was achieved by using the last frame of a rigid body simulation. This scene features the only real chopper in the movie – the Puma that Zubaidi lands in. All other aircraft in the film are CG: Blackhawks, Chinooks, C-130s etc. The opening aerial shot is all CG and has all our CG hardware on the tarmac and unloading from C-130s: fork-lifts, 5-tonners, Humvees, HEMTs, Bradley tanks, Abrams M1A1 tanks, diggers, Iraqi civilian cars, US government SUVs and Land Rovers, along with a few hundred soldiers.
Once on the ground, we have a shot of Zubaidi’s Puma landing, escorted by a CG Chinook in extreme foreground and another behind it.
This sequence was another matchmove challenge, with handheld zooming cameras and just tarmac and sky and 30 or so milling journalists. We had the immediate tarmac, the Puma, the Journalists, a couple of Humvees and 2 tents, everything else is CG. Comp lead was George Zwier who did a great job.

How did you create the Assassins’ Gate?
This shot was re-purposed slightly in post. We shot on a broad-leaf tree-lined avenue in Rabat. It was actually the road that leads up to the Royal residence. We knew where we were going to place the CG Assassin’s Gate, and construction had positioned two containers, one on top of the other, either side of the road with a black drape hung between them to cast the correct shadow onto anything that was to pass underneath the Gate. We marked out the footprint of our model on the road and positioned 20ft green poles at each of the corners. We also had our virtual set mix/overlay system with us so everyone could see what we’d be adding. The Gate itself was beautifully modelled and textured by Tom Edwards from web-sourced images and hand painting. Paul really wanted to underline the damage that had been inflicted on the key government buildings in the Green Zone and so we stripped out the real trees, added huge damaged buildings back from the road and dressed in palm trees lining the avenue. We also added foreground tanks, razor wire, the Green Zone sign and Bradley tanks complete with driver – we had the framework for the tank constructed from 4×2s painted blue so the driver was standing at the correct height. Another lovely composite from Sean Stranks.

Can you tell us how you created the shots inside the Green Zone in particular?
We were keen to be as accurate as possible to the geography of the Green Zone. Our CG co-supervisor, Julian Foddy, became something of our tour guide, making sure that the key buildings were laid out correctly. We started modelling key government buildings way before the shoot and because we weren’t too sure of shot design they were all modelled and textured to a high LOD.
When it came to using them we would typically take a digital wrecking ball to them to match reference photography, or dress in specific art-directed additional damage to better suit the composition of the shot. There’s a great Miller POV shot on the way to the Palace where we start out looking out of the side of his Humvee before panning round to look out the front. We shot on a nice open boulevard in Rabat with a road dressed with sand, but apart from the immediate foreground Humvee and the road, everything outside is CG: all the damaged buildings, trees, the passing Humvee on the other side of the road and the distant Palace. Here’s an explanation from Julian describing how he approached the damage: “We used custom MEL scripts, coupled with a Dneg proprietary boolean plug-in, that allowed us to use a geometry plane as a ‘knife’ on other geometry. Firstly the building was modelled intact. The modeller would then take a high resolution plane, and add noise and undulations to the plane, similar to the line of a crack or shatter. This plane was then positioned intersecting the wall where we wanted the damage to be, and the scripts run to slice the wall into two pieces. This process was repeated over and over, until walls had been ‘blown to pieces’. Another in-house tool, ‘dynamite’, was then used to dynamically scatter chunks of the wall and other associated debris onto the ground.”
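The real tools were custom MEL scripts plus a proprietary Dneg boolean plug-in, which aren’t available to show here. As a heavily simplified stand-in for the “plane as knife” idea, the sketch below only classifies a wall’s point samples against a cutting plane perturbed by noise, producing two ragged chunks; a production tool would perform a true boolean split of the mesh.

```python
# Simplified stand-in for the "plane as knife" idea: split a wall's points
# into two chunks across a cutting plane whose cut line undulates like a crack.
import numpy as np

rng = np.random.default_rng(1)
wall = rng.random((5000, 3)) * np.array([4.0, 3.0, 0.2])   # points on a 4x3 m wall

plane_point  = np.array([2.0, 1.5, 0.0])
plane_normal = np.array([1.0, 0.3, 0.0])
plane_normal = plane_normal / np.linalg.norm(plane_normal)

# signed distance to the plane, perturbed so the cut undulates like a crack
signed = (wall - plane_point) @ plane_normal
crack  = 0.15 * np.sin(wall[:, 1] * 7.0) + 0.05 * rng.standard_normal(len(wall))
side   = signed + crack > 0.0

chunk_a, chunk_b = wall[side], wall[~side]
print(len(chunk_a), len(chunk_b))   # two pieces ready to hand to a rigid-body solver
```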

What about the shots of the Palace and the poolside?
The shots outside the front of the Republican Palace were shot in southern Spain on an airbase. The art department constructed the front door and portico in front of a large white building of approximately the same height as the real thing in Baghdad. We marked up the driveway on the location and any other CG additions that the action would come close to (or was in danger of intersecting with). We also had our virtual mix/overlay system with us. This was really important as a framing guide for this sequence, as for one of the angles we had the CG gardens in the foreground with the live action in the mid-ground and the CG Palace behind. So, in addition to the Palace, we have CG grass, trees, shrubs and fountains in the foreground, laid out as per the real thing using Google Earth and the copious reference photography that exists for this location, as well as CG mid-ground vehicles and CG soldiers.

Was the flyover shot over Baghdad and the Green Zone 100% CG?
A helicopter plate filmed in Morocco for the beginning of the shot served as the only live action element. The shot begins with a ground rush of urban Baghdad, before the camera tilts up to reveal the expanse of the Green Zone. The camera travels over the crossed swords of Qadisiya, past the UFO-like tomb of the Unknown Soldier and bomb-damaged ministerial complexes before homing in on Saddam’s Republican Palace. Apart from the first 150 frames, which are mainly a retimed and re-projected version of the Moroccan plate, the shot is entirely CG. Julian Foddy lit and rendered all the CG for this shot (he modelled a lot of it too and did a fair amount of the texturing) – pretty Herculean. Graham Page composited it. I did ask him how many layers he had to deal with and I’m told it was around a hundred, but with each of those layers being made up of auto-precomped layers, it’s pretty hard to put a figure on it. There’s in excess of 500,000 frames of CG in the shot, amounting to 5 TB of data. The vast ground expanse was textured in camera space by Tim Warnock. Jules used a “brick mapping” approach to overcome the challenge posed by the 38,000 trees in the shot. This meant we were rendering high detail point clouds rather than actual geometry, so we moved seamlessly through varying levels of detail as the trees grew nearer camera. Tonnes of extra detail was added with the addition of buses, people, cars, choppers, smoke from smouldering buildings and FX atmospherics passes from Federico Frassinelli.
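The brick-mapping approach itself is a renderer-side point-cloud technique and isn’t detailed in the interview, but the distance-based level-of-detail idea it serves can be illustrated very simply: draw nearby trees with many points and distant ones with few. The sketch below is only that illustration, with invented numbers (apart from the 38,000 tree count mentioned above).

```python
# Toy sketch of distance-based LOD: each tree is drawn as a point cloud whose
# point count falls off with distance to camera.  Not RenderMan brick maps.
import numpy as np

def points_for_tree(distance, near=10.0, far=2000.0,
                    max_points=200_000, min_points=200):
    """More points for near trees, fewer for distant ones (log falloff)."""
    d = np.clip(distance, near, far)
    t = (np.log(d) - np.log(near)) / (np.log(far) - np.log(near))
    return int(round(max_points * (1.0 - t) + min_points * t))

rng = np.random.default_rng(2)
tree_distances = rng.uniform(15.0, 1800.0, size=38_000)   # 38,000 trees, as in the shot
budget = sum(points_for_tree(d) for d in tree_distances)
print(f"total points this frame: {budget:,}")
```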

Why and how did you recreate the Black Hawk helicopters?
We couldn’t get hold of the real thing, and even if we had you can understand the authorities being reluctant to let us fly them around over their capital city, but as we had shots of Special Forces teams getting in, out, taking off and landing in real helicopters we had to film with something practically. Fortunately for us, the opening on a Huey is of a similar size and the Moroccan air force kindly made two available to us. Having a real helicopter was really the only way to do these shots. You have something the approximate size that kicks up all the right atmos – it has the right presence and forces a level of authenticity that would otherwise have been hard to achieve. They were modelled from scratch using rough dimensions gleaned from the internet and using photogrammetry coupled with loads of internet stills. We finished them to a very high level of detail as in some shots we run right up to the door – 3ft away from camera. They were lit by Bruno Baron, who is a genius. The choppers look amazing. The compositors had to work hard to integrate the CG choppers into the shots, keeping as much of the real dust/atmos as possible but also using CG FX dust from Federico and Mike Noxon to bed them into the plates. The FX work is also critical to these shots, matching their sims perfectly to the practical. The choppers also have CG pilots up front, animated by Will Correia. The daytime scene when the SF team land on the dusty soccer pitch and snatch Miller’s prisoners was one big CG chopper moment. Another takes place at night as they race from their base, across the tarmac and into their Blackhawks. For this scene we just had the one Huey, one light and our six foreground SF team. Everything else was CG: the base, the perimeter fence, background vehicles, the hero Blackhawk that we run up to and the two other Blackhawks complete with digi-doubles running across the tarmac. It’s fair to say that these shots were testing for all departments, particularly matchmove – not much to track – and roto – black shapes against black.

Were the night shots, like the market, easier to create?
The crane up at the end of the market-place stand-off was modelled, lit and rendered by co-CG supervisor Dan Neal and composited by Walter Gilbert. The only real thing in the shot is the rotoed element of Miller walking away from camera. Tom Edwards surveyed the set and took a number of high-res tiled panoramas of the marketplace and surrounding buildings as well as a number of HDR spherical lighting images. What you see is the result of a fairly immense job from Dan, from all the market-place structure – loads of boxes, wooden posts, tarpaulins, fires, smoke and buildings – to the arriving Humvees and troops that get out and run into the square, the choppers, explosions and tracer. It’s another kitchen sink shot!

What was the most complex sequence?
The most complex shot has to be the flyover. As far as sequences go they were all complex.

What do you remember about this movie?
A really great crew – both shooting and in post.

What is your next project?
My garden. I’m helping out round the company doing bits and pieces at present, a bit of ATTACK THE BLOCK, a bit of HARRY POTTER, some INCEPTION, some bidding.

What are the 4 movies that have given to you the passion for cinema?
That’s a tough one.
CASABLANCA, KES, MARY POPPINS, UNFORGIVEN, TOY STORY. I’ve given you one extra.

Thanks for your time.

// WANT TO KNOW MORE?
Double Negative: Dedicated page to GREEN ZONE on Double Negative’s website.

© Vincent Frei – The Art of VFX – 2010

SHERLOCK HOLMES: Jonathan Fawkner – VFX Supervisor – Framestore

In the following interview, Jonathan Fawkner talks with us about his work on SHERLOCK HOLMES, which he did in parallel with AVATAR. Note that Framestore received the VES Award for Outstanding Supporting Visual Effects in a Feature Motion Picture for SHERLOCK HOLMES.

Hello Jonathan, you’ve had a busy year. First AVATAR and then SHERLOCK HOLMES. How did you manage those projects at the same time?
Well, it was certainly an interesting time. I shared the AVATAR supervision duties with Tim Webber, so that certainly helped. The key to the whole thing was a great production team; I couldn’t have done it without them. The AVATAR day actually ran a bit later than the SHERLOCK day and I was able to juggle the demands quite easily once we got the days planned out and everyone knew where I would be and when. But SHERLOCK was the “day job” and it delivered first, even though it came out later.

How was your collaboration with Guy Ritchie and Chas Jarrett?
Both Chas and Guy made the whole project really enjoyable. Both had a great attitude to the VFX. Chas had a meticulous plan from the outset and I was really impressed with his execution of the shoot, attention to detail and creative methods of achieving shots and elements. Guy trusted Chas on most of the VFX work and had a very instinctive reaction to most shots. His mind was made up, more often than not, in the first few frames. If something did not sit right with him, he would let us know; otherwise we just got on with it. They were both a lot of fun to work with.

What are the sequences made at Framestore?
Framestore took on the lion’s share of the work, including the “Wharf explosion” and “Shipyard” sequences. About 450 shots in all.

I guess you had to hide many elements of the current London?
We called it “past-ification”. It was a key strategy to most of the material as Guy wanted to shoot on location wherever possible. We approached it in a number of ways but they ranged from simply painting something away, to replacing large sections of background with matte painting, to replacing whole foreground elements in CG, particularly on the river Thames sequences.

Can you explain in detail about the sequence of the fight beside the ship?
There was actually a quarter of a ship built on a location in a real Victorian shipyard at Chatham, Kent, in keeping with Guy’s desire to shoot as much on location as possible. Enough of the ship was built to enable the close-up shots of the actors in front of the hull to be shot for real. Our work there was limited to “past-ification” of areas that could not be dressed and the addition of atmosphere and any views out of the doors at either end of the slipway. This included the river Thames and a 19th-century river wharf beyond.

Whenever we went for a wide shot we extended the set ship with our 3D lit and textured version, including all the ropes, chains, platforms and shipbuilding paraphernalia that were needed. When the ship started to move, though, the whole thing was CG and all the aforementioned materials were simulated to react accordingly.

Chas shot the ship on three separate occasions. With the practical ship, without the practical ship but with the destruction detritus in place, and with a completely clean shed for background plates. In the end we ended up replacing the shed with a CG projection mapped version as so much of it was being “pastified” including the whole of the roof.
As the ship is released by the Dredger character, it rips a capstan from the ground which does a lot of damage. We used a proprietary rigid body simulator called fBounce to very efficiently simulate hundreds of destructions of platforms, slipways, barrels, ladders and everything needed to devastate the area. With the addition of multiple smoke and atmos layers and a complicated chain simulation we were able to complete the effect. The water impact was enhanced by a trip to a lifeboat station where we filmed multiple launches to gather the elements we needed to composite our much larger, uncontrolled ship launch.

What did you do on the big slow motion sequence with explosions?
This was a sequence that was not originally in the script but Guy had the idea of putting his actors in very real and tangible danger. There were no digital doubles or stuntmen used in the sequence. It is all Robert, Jude and Rachel. The whole thing was shot on a location in Liverpool that could not use any pyrotechnics so we were presented with quite a clean plate. Chas had storyboarded the sequence and established an order for the explosions and where they would happen. These were lit with small flambos and the actors bombarded by air mortars, which added some interactive lighting and gave the actors something to react to. But all of the explosions and destruction were added by us.

Chas executed a high-speed motion control greenscreen shoot on a one-to-one scale mockup of the location. It was an ingenious modular construction that allowed each explosion to be seated in the correct scale surroundings. We shot each ignition separately on a Frog moco rig travelling at nearly 70mph in order that we could match or exceed the shutter speed of the plate. All of the interaction with the characters and set was then achieved with painstaking compositing and lighting bled from the explosions themselves. We added a CG brick wall collapsing and some CG fire to a mimed performance from Robert during one shot which was about 30 seconds long, and also debris, embers and smoke to each explosion. Guy wanted the characters to be bombarded wherever possible and to be fully engulfed, and yet to be sure that it was our actors who were in the shot and not stunt men. In a couple of shots we rip a hole through clothes to further emphasise their proximity.

What was the biggest challenge on this show?
I knew the wharf explosion would go to the wire and it did. Guy was very attached to the sequence and he kept adding more and more. They were some of the first shots started in earnest and the last shots finalled.

How long did the post-production take?
We worked for about a year.

Did you encounter some difficulties?
Due to a very well executed shoot with a good, amenable crew and a down-to-earth director, the whole show passed relatively untroubled. We had a good time.

I read that Framestore Reykjavik worked on the show. What have they done?
They helped us out on the animation side. There was a lot of simulated animation in SHERLOCK but the ship and bouncing capstan required a more creative touch. The Iceland team were free and able to help out.

Why open a branch in Reykjavik?
We had a lot of talented Icelandic crew. They wanted to be at home and we wanted to work with them still. We have crew in New York as well. The talent is pooled, and technology means we all get to collaborate pretty seamlessly.

What is your next project?
I am now on NARNIA: THE VOYAGE OF THE DAWN TREADER.

Thanks for your time.

// WANT TO KNOW MORE?
Framestore: Dedicated page to SHERLOCK HOLMES on Framestore’s website.

© Vincent Frei – The Art of VFX – 2010

PERCY JACKSON AND THE OLYMPIANS: THE LIGHTNING THIEF: Guillaume Rocheron – VFX Supervisor – MPC

Guillaume Rocheron began his career at BUF in 2000. Having worked on films such as PANIC ROOM, ALEXANDER and BATMAN BEGINS, he left to join MPC in London. There he worked on SUNSHINE, 10,000 BC and HARRY POTTER AND THE HALF-BLOOD PRINCE. In 2009, he moved to MPC Vancouver to work on PERCY JACKSON.

What is your background?
I have worked for MPC since May 2005. Before that, I spent 5 years at BUF Compagnie, working on a number of commercials and some film projects including ALEXANDER for Oliver Stone, BATMAN BEGINS and THE MATRIX RELOADED. I started at MPC London towards the end of production on HARRY POTTER AND THE GOBLET OF FIRE, then moved on to lead lighting TD on X-MEN 3 and ELIZABETH: THE GOLDEN AGE, and CG Supervisor on 10,000 BC, HARRY POTTER AND THE HALF-BLOOD PRINCE, GI JOE: RISE OF COBRA, SHANGHAI and NIGHT AT THE MUSEUM 2.

How did you get involved on PERCY JACKSON?
Discussions about moving to the Vancouver studio started in January 2009. At that time, the studio was still pretty small, but I was interested as the visual effects market was growing very fast in Vancouver. PERCY JACKSON being awarded to MPC was the perfect opportunity for me, as the type of work was right up my street. Two months later, I was on a plane to Vancouver; four days later, I was on set! In the following weeks, most of the rest of the key team members arrived from MPC London. By the end, around 85 people had worked on the project in the Vancouver studio.

How was the collaboration with the director and the production supervisor Kevin Mack?
After the shoot, Chris Columbus, Kevin Mack and the movie production team were based in San Francisco and we were in Vancouver. We reviewed the work with Kevin on a daily basis via Cinesync. Kevin knew what the director was after and was giving us notes and comments to make sure what we were doing was fitting within the context of the film.

What are the sequences made at MPC?
MPC worked on 9 sequences, the main ones being the Minotaur Attack, the Hades Bonfire and the Hades Mansion where we took care of the Hellhounds and the Lost Souls.

Hades is awesome. How did you create and animate it?
Hades was by far the most difficult character, for two reasons: firstly, he has to perform, and secondly, he is a character made entirely of charcoal and fire. When he is not transformed into a demon, Hades is played by Steve Coogan. One of our challenges was to integrate characteristics of his acting into our CG character. To achieve this, we captured dialogue scenes and a library of facial expressions using the Mova Contour capture system, which allowed us to create a very high resolution animated reconstruction of Steve Coogan’s face in 3D. We then used and improved our in-house motion blending tools and facial rigs to manage and manipulate the pretty dense data. The point cloud was around 600 tracking markers per frame, which meant we could capture all the subtleties of the performance, even including the small vibrations under the eyelids, which are generally filtered out as noise by standard motion capture solutions.

The perception of a performance can change a lot once transferred onto a 12-foot character like Hades. Because of the size and camera angle difference, and the fact that his face is made of charcoal and lava cracks instead of human flesh, we had to slightly tweak the performance. The challenge was to do this in a non-destructive way, paying attention not to change or remove key elements that defined the original performance Chris Columbus wanted to capture.
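MPC's motion-blending and retargeting tools are proprietary, so purely as an illustration of the data flow, here is a minimal sketch that assumes a simple nearest-marker binding (an assumption, not MPC's method) for driving a face mesh from a dense capture cloud like the ~600 markers described above:

```python
# Minimal sketch: drive face-mesh vertices from a dense capture point cloud by
# binding each vertex to its closest marker in the neutral pose, then applying
# that marker's per-frame offset. A stand-in for a real retargeting pipeline.
import numpy as np

def bind_to_markers(neutral_verts, neutral_markers):
    """Index of the closest neutral-pose marker for every mesh vertex."""
    d = np.linalg.norm(neutral_verts[:, None, :] - neutral_markers[None, :, :], axis=2)
    return d.argmin(axis=1)

def apply_frame(neutral_verts, neutral_markers, frame_markers, binding):
    """Offset each vertex by the motion of its bound marker for one capture frame."""
    return neutral_verts + (frame_markers - neutral_markers)[binding]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    verts = rng.random((5000, 3))        # stand-in face mesh
    markers0 = rng.random((600, 3))      # ~600 markers, as in the capture described
    markers1 = markers0 + 0.01 * rng.random((600, 3))
    binding = bind_to_markers(verts, markers0)
    print(apply_frame(verts, markers0, markers1, binding).shape)
```

In production this would sit behind stabilisation, filtering and a proper facial rig; the sketch only shows how dense marker data can move a mesh without any hand-keyed intermediate.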

The second challenge was to create the fire in which Hades appears, which is also emitted from his giant wings. The fire was an important component in the style and acting of the character: the flames are slow and gentle when the dialog is quiet, they get bigger and faster when he suddenly gets angry, and they become really fast and explosive when he wants to show his power by throwing fireballs. They required such precise control that we quickly took the decision to use CG fire instead of elements. In general, CG fire is used for quick events like fireballs, explosions, bursts of flames, etc. In Hades' case, the fire is always there, but sometimes it's not doing anything spectacular, so you have time to really look at it and analyse its movement and details. It was impossible to use the standard technique of creating a fairly low resolution simulation and artificially enhancing the visual detail by mixing it with 3D noises or fractals, because these don't contribute to the quality of the movement of the simulation. For a few years now, we have worked with Flowline, the fluid simulator from Scanline, and we pushed it as far as we could within the time we had for the project. Every voxel of the fluid simulation was around 1 millimeter in size, which ensured that almost every pixel of visible fire on screen was contributing to the quality of the movement. This is around 50 times the resolution at which we had simulated fire previously. Each wing took around 15 hours of simulation per shot, split across 3 machines, which seems long but is actually pretty reasonable if you take into account the resolution they were done at. Fluid simulations are by nature very difficult to control. Our FX team really did a fantastic job in finding methods and rules that would allow us to control that fire according to what Hades would do in every shot, without compromising the quality of the movement and details.
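To give a sense of what a roughly 1-millimetre voxel size implies, here is a hypothetical back-of-the-envelope sketch; the domain dimensions and per-voxel channel count are illustrative assumptions, not figures from the show:

```python
# Rough, hypothetical estimate of voxel count and raw memory for a dense fluid grid.
# Only the ~1 mm voxel size comes from the interview; everything else is assumed.

def grid_stats(domain_m, voxel_m=0.001, floats_per_voxel=6, bytes_per_float=4):
    """Return (voxel count, raw memory in GB) for a box domain given in metres."""
    nx, ny, nz = (int(round(s / voxel_m)) for s in domain_m)
    voxels = nx * ny * nz
    gigabytes = voxels * floats_per_voxel * bytes_per_float / 1e9
    return voxels, gigabytes

if __name__ == "__main__":
    # Assume a single wing fits in roughly a 4 m x 3 m x 1 m box (illustrative only).
    voxels, gb = grid_stats((4.0, 3.0, 1.0))
    print(f"{voxels:,} voxels, ~{gb:.0f} GB of raw grid data")
```

Even allowing for sparse storage and the fact that the fire only fills part of any such box, numbers of this order explain why each wing was split across several machines.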

Can you explain to us the conception of the Lost Souls?
For the Lost Souls, the challenge was different from Hades' fire. We had to create a supernatural fire inferno in which hundreds of creatures are trying to escape. Kevin Mack shot movement and action references with 3 HD cameras, which we used to create a library of movements for our CG creatures. We then enlarged the fireplace by destroying its edges using PAPI, MPC's rigid body dynamics software, so the fire inferno could get a lot bigger and the effect could look a lot more threatening. Using the same technology developed for Hades' fire, we created different layers of simulation: a fire vortex inside the fireplace, huge flames rolling on the exterior walls, as well as a fire element for each creature within the fireplace. We then spent a lot of time integrating the fire into the plate, adding flying embers and fine-tuning the heat distortion.

MPC created a whole bestiary for this movie. What references did you have?
For all the characters, we started from concept artwork by Aaron Sims as approved by the director. For Hades, the Hellhounds and the Minotaur, we spent time converting the 2D concepts into 3D sculpts in ZBrush. It was important to make a version that would work in 3D and to get it approved by the director before starting the time-consuming task of making a "production ready" character. We used hyena anatomy references for the Hellhounds and a mix of human and bull references for the Minotaur body.

How did you create these mythical creatures?
After getting our ZBrush 3D concepts approved, we modelled the creatures in Maya, paying attention to muscle groups. We then laid out a skeleton and muscles using MPC's in-house solutions. The rendering was done in PRMan via Tickle, MPC's rendering tool, and ShaderBuilder, MPC's look development tool. Fur grooming and fur dynamics were done using MPC's fur tool, Furtility.

What was the most complicated creature to do?
Without hesitation, Hades. On top of the CG fire and the facial performance, we spent time defining how we would light him, with Hades being his own light source. We wrote tools to convert our fluid caches into RenderMan point clouds so the fire would illuminate Hades using the color bleeding technique.
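The conversion tools mentioned here are MPC's own. Purely to illustrate the idea, this sketch turns emissive voxels into a flat list of shading points that a point-based color bleeding pass could consume; the plain-text output and threshold are assumptions, not RenderMan's actual point cloud format:

```python
# Hypothetical sketch: flatten an emissive voxel grid into points (position, radius,
# radiance) that could later feed an indirect-illumination / color bleeding pass.
import numpy as np

def voxels_to_points(density, radiance, origin, voxel_size, threshold=0.01):
    """Yield (position, radius, rgb) for every voxel bright enough to matter."""
    for i, j, k in np.argwhere(density > threshold):
        pos = origin + (np.array([i, j, k]) + 0.5) * voxel_size
        yield pos, voxel_size * 0.5, radiance[i, j, k]

def write_points(path, points):
    with open(path, "w") as f:
        for pos, radius, rgb in points:
            f.write(" ".join(f"{v:.5f}" for v in (*pos, radius, *rgb)) + "\n")

if __name__ == "__main__":
    density = np.random.random((8, 8, 8))        # stand-in fire density cache
    radiance = np.random.random((8, 8, 8, 3))    # stand-in emitted colour per voxel
    write_points("fire_points.txt", voxels_to_points(density, radiance, np.zeros(3), 0.05))
```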

How did you add the legs of Chiron and Grover?
We roto-animated each actor in 3D so we could have their exact position in each shot, and connected the CG legs or horse body perfectly to their waistline. We then animated the CG body so it would work with the actor's performance. The most time-consuming task was in fact painting out the actor's legs from the plates. It is a pretty straightforward task when the background is fairly empty, but it can be very difficult when you have a full crowd behind the actors!
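As a generic illustration of the attachment step (a hypothetical sketch, not MPC's rig), the CG body root can simply follow the roto-animated waist transform on every frame:

```python
# Toy sketch: snap a CG horse-body root to the actor's roto-animated waist transform
# per frame, so the digital lower body follows the plate performance. Matrices are
# plain 4x4 arrays here; in production this would live inside the animation package.
import numpy as np

def attach(root_local_offset, waist_world_per_frame):
    """World matrix of the CG root for every frame of the roto-animation."""
    return [waist @ root_local_offset for waist in waist_world_per_frame]

if __name__ == "__main__":
    offset = np.eye(4)
    offset[1, 3] = -0.1                         # root sits just below the waistline
    waists = [np.eye(4) for _ in range(24)]     # stand-in tracked waist matrices
    for frame, m in enumerate(attach(offset, waists)):
        print(frame, m[:3, 3])
```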

How many shots did you work on?
160

Did you use specific in-house software (for hair and fire)?
MPC uses a wide range of in-house software. On top of the shot pipeline and asset management tools, Tickle is our rendering interface with PRMan, ShaderBuilder is our look development tool, Alice is our crowd software (which we use to process motion capture clips), and we use Flowline by Scanline for large-scale fluid simulations.

What is the shot that prevented you from sleeping?
There's been more than one!

Did you encounter any difficulties, such as unplanned things?
The unexpected always happens as you move through production on a project: some ideas end up working great and others need changes. For example, we were asked to make the Lost Souls look more threatening than what was defined in the original idea, so we had to change a few things to make the fire bigger, show more of the creatures, etc. In the end, I think these changes were right as they really give a nice momentum to the sequence.

What did you keep from this experience?
Being involved on a project like PERCY JACKSON has been a great opportunity, mainly because of the range of effects we had to do: 5 different creatures, a couple of transformation shots, some crowd and destruction work, and a lot of fire!

What is your next project?
SUCKER PUNCH by Zack Snyder…

What are the 4 movies that gave you the passion for cinema?
It is hard to lock it down to 4 films only. I really love movies by Brian De Palma and David Fincher.

Thanks for your time.

WANT TO KNOW MORE?
The Moving Picture Company: Dedicated page for PERCY JACKSON on MPC’s website.

AVATAR: Daniel Leduc – VP & Visual Effects Supervisor – Hybride

Daniel Leduc is a true pioneer of visual effects in Canada. With three other partners, he founded Hybride, which has created the effects for many films such as the SPY KIDS trilogy, SIN CITY, 300 and JOURNEY TO THE CENTER OF THE EARTH. They have just completed AVATAR.

What is your background?
I started working in this field 30 years ago as an online video editor, specialized in special effects for commercials. My three associates and I founded Hybride in 1991, and at that time we worked mostly on TV shows, commercials and R&D. Our offices are located outside the urban centers, mostly for confidentiality reasons. It is the way we approach each project that has allowed us to build a solid reputation in this field, and over the years I went from operator to Visual Effects Producer. Hybride now comprises 85 full-time employees.

Did you follow any specific training?
Actually, I haven't followed any specific training or taken any special courses. Early in my career, in the 1970s, I wanted to work as a cameraman, so I was advised to follow a technician's course. After that, I started out as a colorist/editor and, with time, I gradually learned my job and acquired new qualifications as I went along.

Can you explain to us how Hybride came to work on one of the most anticipated films of the last decade?
Weta Digital (New Zealand) had been working on the project for years, and when they realized the workload had grown much larger than the initial predictions, the production company decided to call other vendors with stereoscopic experience to help complete the missing shots.

Since Hybride has experience in stereoscopic VFX (JOURNEY TO THE CENTER OF THE EARTH, FINAL DESTINATION 4, SPY KIDS 3D: GAME OVER), they decided to give us the mandate of creating VFX shots for the Link Room.

How was the collaboration with James Cameron and Jon Landau?
All of our meetings with Mr. Cameron were done via videoconference, so it wasn't necessary for him to physically visit our studios. With the different tools available on the Internet, there's no reason for anyone to travel anywhere; everything can be done over the Web!

Before beginning a scene, James Cameron would first explain how he saw things and then proceed to explain what he expected from us in terms of research and management. We'd then hold daily videoconferences with the team's 2D and 3D supervisors.

What are the sequences made at Hybride?
In concrete terms, Hybride's contribution included set extensions, animating on-screen computer data, and designing and animating virtual characters, mostly for the Link Room. It's the room where the humans go to take control of their avatars. We also produced a number of shots for the interior of Dr. Grace's mobile laboratory.

For the contents of screens, were you able to suggest things or was everything already done by James Cameron’s art department?
Actually, they provided us with a definition of the approach they wanted to take, a technical guideline. They described to us how they imagined the computers in the world of Pandora, what they thought they'd look like and how they'd work. We had to create screen content respecting their predefined style. Since the images that we received were generally still frames, we had to create animated content, variations of colors and movements for each of the screens. Color was a concern for James Cameron; he needed to see a cut of several shots to get an idea of the general appearance.

Have you worked with Prime Focus who has also created screen animations?
We all worked together, but indirectly. Lightstorm Entertainment took care of the management aspect and they would decide when to send our material to other studios, and vice versa. For the screens however, we have to keep in mind that each room had a different purpose, so of course the screens did not all have the same function.

Who was the link between the different studios?
We worked mainly with Steve Quale who was the Visual Effects Supervisor on a daily basis, but we also worked with Yuri Bartoli, who was the Graphics Supervisor.

How have you designed the digital extension of the Link Room?
There had been a huge amount of preparation on the production's part, since all the sets had already been scanned. They also provided us with content for the still screens, even though everything changed during postproduction. The actual set of the Link Room included only the center console and the right part of the set – we had to generate the other part. We also created content for a dozen monitors, as well as for the tablets the technicians held. We therefore had to recreate these missing elements in a credible manner, all the while making sure they were aesthetically pleasing.

As for the scanners, we replaced on-set items such as the rings at the head of the scanner beds. These rings lit up and turned when a person took control of their avatar. However, there were no actual moving parts on the set: we simply replaced the pieces of green cloth with our virtual scan rings. Finally, we designed and animated virtual characters to fill in empty spaces in some of the shots.

Have you encountered any difficulties with the stereoscopy?
Not really, we already knew the Pace camera systems. We had an intermediate version of them on SPY KIDS 3D: GAME OVER and another variation for JOURNEY TO THE CENTER OF THE EARTH so we already knew what to expect from these cameras.

What was the biggest challenge on this project?
What was interesting is that, in his futuristic world, James Cameron wanted all the screens to project a stereoscopic image. We therefore tested simulating projections that would or would not interact with the camera's rotation, and we also tested with the operator watching the screen. We then tested projections with 2D content that would only appear in stereo when the camera moved sideways. In the end, we delivered screen content which was in stereo but cut off by the limits of the screen, thanks to the combination of several techniques.
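Tests like these essentially come down to how much horizontal parallax a piece of screen content picks up at a given depth. As a hedged illustration, using the usual converged-stereo relationship and made-up interaxial and convergence values (not figures from the shoot):

```python
# Sketch of the standard stereo parallax relationship used when deciding how far
# in front of or behind the screen plane a "holographic" element should sit:
# parallax = interaxial * (depth - convergence) / depth. Values are illustrative.

def parallax(depth, interaxial=0.06, convergence=4.0):
    """Horizontal parallax (scene units at the convergence plane) of a point at `depth`."""
    return interaxial * (depth - convergence) / depth

if __name__ == "__main__":
    for z in (2.0, 4.0, 8.0, 40.0):
        print(f"depth {z:5.1f} m -> parallax {parallax(z) * 1000:6.1f} mm")
```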

How many shots did you work on?
We produced 114 stereoscopic shots and also provided numerous graphic and technical elements used in a large number of shots produced by other vendors involved in the project.

What is your pipeline?
For 3D, we mainly work with Softimage XSI, but we also use 3DEqualizer, Maya, ZBrush and Unfold3D. For compositing, we use Flame and we also have a few Fusion stations.

Have you encountered any difficulties in particular?
We haven’t encountered any technical difficulties in particular – we loved working on this project.

What did you keep from this experience?
We are very grateful to have the opportunity to be part of Pandora’s universe and more importantly, we feel very privileged to have had the chance to play a role in creating one of the largest blockbusters of the decade.

What is your next project?
We are presently working on 2 productions, but we unfortunately cannot mention the names for the moment, for confidentiality reasons.

What are the 4 movies that gave you the passion for cinema?
I always loved 1950s and 1960s science fiction movies such as JOURNEY TO THE CENTER OF THE EARTH, THE CLASH OF THE TITANS and SINBAD – those kinds of animation movies. I also loved STAR WARS, which was really impressive for the time, and I enjoyed THE MATRIX for the technical angle.

Thank you for your time.
It was a pleasure, thank you.

© Vincent Frei – The Art of VFX – 2010

DAYBREAKERS: James Rogers – VFX Supervisor – Postmodern

James Rogers has worked in Australian visual effects for over 15 years. After a stint at Square USA, where he worked on FINAL FANTASY, he was VFX supervisor on THE ANIMATRIX: FINAL FLIGHT OF THE OSIRIS. He joined Postmodern in 2003, where he has supervised the visual effects of many commercials and movies such as DAYBREAKERS, AUSTRALIA and KNOWING.

How did you get involved on this movie?
We were approached to do this movie some time ago. A commercials director we work closely with was developing a different film with the DAYBREAKERS producers. He introduced us, and we started talking. We had just finished DEATH DEFYING ACTS for director Gillian Armstrong, which was a lot of fairly complicated set extensions. Our initial involvement with DAYBREAKERS was to provide set extensions, so we were in a good position to get going. In the end it became a lot bigger than that.

The visual effects were divided between Kanuka Studio, the directors and you. How were the effects distributed?
I think (and we finished this film in 2008, so forgive my poor memory) we did about 350 shots. I think Kanuka did about 15, and the Spierig Brothers did all their wishlist shots themselves, which was no small list, something like 300 shots as well.

Why did the directors want to do some of the visual effects themselves?
They had a lot of shots which didn’t fit into their budget, and they are pretty good VFX operators, so it made sense that they would do things which weren’t regarded as “primary” fx shots. I don’t think they would have it any other way.

How was your collaboration with the Spierig brothers?
It was great. They are very, very digitally savvy. They had already created 3D animatics for most of the film themselves. They knew exactly what they wanted—which is not to say they weren't very collaborative—but we didn't have to spend a lot of time explaining why something couldn't or shouldn't be done. Sometimes we experimented with things which failed initially, but they had patience, they could see through the rough versions, and understood where stuff was going. You don't often experience that.

What are the sequences done at Postmodern?
We did a lot of stuff. We did almost all the modernisation and extensions of cities; we burnt Ethan Hawke (a lot); we created a digital subsider (a deformed vampire); we did the car chase sequences; we did the blood farm shots; and we cut Sam Neill's head off.

What references were given by the directors for the blood farm?
A lot of reference came from George Liddle, the production designer. The brothers had certainly given him some pretty clear direction, too. So we worked it up based on input from both George and the Brothers.

How did you design and achieve the blood farm?
Initially, to contain the budget, we were just going to duplicate a single live action element. In the end, we combined the live action element with a lot of matte painting and 3D people. Once we started going 3D, we started adding camera moves. It was a fairly organic process, but it was nice to be able to add something more to the shots the further we got into the production process.

What indications did you have to create the futuristic city?
Again it was George who provided us with some great reference. There was a definite design, a language if you like, that they wanted to use in this film. So we followed that as closely as we could. We also had rules, such as how and when the vampires got around, how they had to retrofit their cities to be sun-proof, and so on. It was quite fun to create a universe where the vampires were the dominant species, and not one where they were the hunted.

What did you do on the subsiders?
The subsiders were originally going to be a full body prosthetic. We were originally just going to add digital "wings" and claws for the attack sequence in Edward Dalton's (Ethan Hawke) house, as well as some rig removal. There was an issue on set where the wire harness under the prosthetics ended up making it look like the subsider had a diaper on… so in the course of working out a solution to that, we ended up replacing the subsider with a digital version. We had already been working on 3D skin and sub-surface scattering techniques, so it was a logical evolution. The exciting thing for us is that it only took 2 weeks to get it to a position where it was almost done – a fully rigged character, with muscle and vein systems. We were pretty pleased with that. Of course, this was a low-budget film, so we had to do it within our budget. Luckily it was a risk which paid off.

Can you explain to us how you put fire on Ethan?
The fire was a combination of live action elements, which we shot using a model of a torso, painted black and filled with gas pipes. We also shot other fire elements and then used a lot of manipulation in Flame to get them to stick to the plates of Ethan. Something which made a big difference was adding the heat haze; but the big difficulty was balancing the fire elements so that they would read well against the back plates. We deliberately avoided using CG flames for these shots, because I think sometimes you can get easily sidetracked trying to create something "real" when you can just as easily go and shoot it. Another nice thing about fire is that you can warp it and manipulate it quite extensively before it looks "wrong". So a combination of approaches worked well for this, and in the end it was rather lo-fi compared to other vampire films. However, it worked.
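As a toy illustration of the heat-haze idea (the real work was warps and manual manipulation in Flame; this generic sketch, with made-up noise and parameters, only shows the principle of offsetting the plate by animated noise weighted by a matte):

```python
# Toy heat-haze: distort an image horizontally with animated noise, scaled by a
# matte of where the hot air should be. A stand-in for hand-tuned warps in Flame.
import numpy as np

def heat_haze(image, matte, time, amplitude=3.0, scale=0.05):
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    wobble = np.sin(yy * scale + time * 6.0) * np.cos(xx * scale - time * 4.0)
    dx = np.round(amplitude * wobble * matte).astype(int)
    src_x = np.clip(xx.astype(int) + dx, 0, w - 1)
    return image[yy.astype(int), src_x]

if __name__ == "__main__":
    plate = np.random.random((240, 320, 3))   # stand-in for a fire plate
    matte = np.zeros((240, 320))
    matte[40:200, 100:220] = 1.0              # region above and around the flames
    print(heat_haze(plate, matte, time=0.5).shape)
```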

Can you explain to us how you proceeded for the vampire explosions?
We shot a whole lot of explosion elements and buckets of blood and pig intestines. The vampires were warped in compositing and we stuck it all together. It was good fun.

Have you participated in the beautiful opening credits and if so, what did you do?
We designed the credit sequence and worked on matte paintings for the backgrounds. Ben Nott (cinematographer) shot some awesome stuff and the brothers did some great work there too. It came together well.

What is the shot that prevented you from sleeping?
All of them. In a production like this you tend not to sleep too much. I think the hardest sequence was the car chase where the 3 hummers (don’t) cross the bridge. There’s no good reason for it to be hard, it was just one of those things. It was difficult logistically to shoot, it was difficult to get right and make it work with what we had. But usually it is not the shots you think will be hardest in the beginning that cause you the most pain… it’s usually the simplest ones. As usual, the devil is in the details.

What is your software pipeline?
3D was mainly Maya. For compositing we used Shake and Flame. The film was shot on the Panavision Genesis, and we worked mainly in PanaLog. The high-speed material was shot on film. It was nice to have a combination of high-speed and node-based compositing, because it meant we could push stuff around between machines and use the best tool for the job. There was no point punishing someone to get something done in Shake that could be done in Flame much faster. Likewise, there was no point going the other way either. I was very pleased with how smoothly our pipeline ran on this show, even though we faced a pretty short schedule to finish. It helped us hone a lot of our own tools, but it was also quite a nimble way to do things. We applied the same pipeline to KNOWING (which, like DAYBREAKERS, was shot digitally), and found that we could really pump out stuff and give ourselves time to experiment and try to add something more. I think that is a really important aspect of a good pipeline… it has to give you options, and the time and ability to be creative.

What did you keep from this experience?
How nice it was to work with directors like these. There was a sense of collaboration and a freedom to experiment that you don’t often get.

What is your next project?
We actually finished DAYBREAKERS a long time ago – in 2008. Since then we did AUSTRALIA and KNOWING, and quite a few commercials. We have other projects coming up, but as usual, I am sworn to secrecy for now!

What are the 4 movies that gave you the passion for cinema?
I have to admit to the first STAR WARS which was one of those childhood touchstone experiences; but my other inspirations are maybe more eclectic: THE UNBELIEVABLE TRUTH by Hal Hartley; THE COOK, THE THIEF, HIS WIFE & HER LOVER; and 2001. I could watch them all many times over, but perhaps STAR WARS is just a chance to relive a moment from my childhood.

© Vincent Frei – The Art of VFX – 2010

LEGION: Jeff Campbell – VFX Supervisor – Spin VFX

Jeff Campbell has worked in visual effects for 17 years as an animator, compositor and VFX Supervisor. He has brought his unique artistic vision to numerous feature film projects such as FIGHT CLUB, X-MEN and THE CELL. He joined Spin in 2003. His most recent VFX Supervisor credits include 20th Century Fox's MAX PAYNE and now Sony Screen Gems' LEGION.

Hi, can you explain your career path in VFX?
I started as an animator in 1992 at Stargate Studios in Toronto. Working in Strata Studio Pro, I was intrigued by the creativity and control of bringing emotion to a character, but frustrated by the slow productivity of the hardware at that time – especially the time it took to do a radiosity lighting render. I soon discovered high-speed compositing when I was introduced to the Quantel Harry, a strictly 2D finishing system that could play standard definition in real time. It's not much by today's standards, but being able to finish shots fast impressed me. Still, I really missed the third dimension. Soon enough, the answer came: Discreet Logic came out with the Flame, a 2D/3D multi-resolution high-speed finishing system. I had to get on one.

I applied for a job as a Flame operator at Command Post in Toronto. They told me that Quantel was where it was at and that Discreet systems probably wouldn't last long, but they gave me a shot. Soon enough, I was doing complex and very cool 2D and 3D commercials, all in the Flint. After a few years, I was the Senior Inferno Artist at Command Post. In 1999, Command Post changed its name to Toybox and dove into film.

I led a crew on David Fincher's FIGHT CLUB. In one sequence, I had been asked to create the impact and destruction of a forty-ton ball crashing through a commercial plaza and into a coffee shop. This was accomplished using 3D camera projection techniques, displacements and textured geometry, all built and finished entirely in a Discreet Inferno with no outside resources.
I demonstrated these techniques at NAB at the request of Discreet Logic (now Autodesk). I was also Lead Inferno Artist and Sequence Supervisor on New Line Cinema's THE CELL, bringing to life the rich, unique imagery of director Tarsem Singh's vibrant vision.

In 2003, I joined Spin in Toronto as a Senior Inferno Artist and Partner. I liked Spin's "art meets technology" philosophy. I had a broad commercial client base but soon became dissatisfied with the lack of creativity, mainly tied to budget constraints, so I decided to focus strictly on film. Because of the boutique size of Spin, my duties can involve being not only VFX Supervisor, but also Animation Director and Lead Compositor.
My most recent VFX Supervisor credits include 20th Century Fox's MAX PAYNE and now Sony Screen Gems' LEGION.

Spoiler warning if you haven’t seen the movie yet.

How did you get involved on LEGION?
The guys from the Orphanage had seen our work on MAX PAYNE, which featured a winged demon character, and they liked what they saw. The director, Scott Stewart, and VFX Supervisor Jonathan Rothbart were partners at the Orphanage. We began work in January 2009, and in February the Orphanage closed their doors. It was a little tense at the time because we had put a lot of work into the film and did not have a contract. Eventually Sony Screen Gems honored the same contract that we had with the Orphanage, and it was business as usual.

How was your collaboration with the film director Scott Stewart and the VFX supervisor Jonathan Rothbart?
It's great dealing with guys who have such a vast background in visual effects. They know exactly what they want and talk our language, which helps because we receive concise answers and direction. This allows us to really focus without any guesswork. I had the pleasure of working with Scott as on-set supervisor for the end sequence, and it was all about trust and respect for VFX. He gave me everything I needed to get my elements and data.
It really results in a much better and more efficient VFX-driven production. I'm a big fan of the Orphanage and ILM, so it was a great experience seeing how they work.

What sequences were made at SPIN?
Our primary task on LEGION was adding CG wings to Kevin Durand's character Gabriel and later to Michael, played by Paul Bettany. The client was so happy and confident in our work that we were awarded other difficult shots, including a big fluid simulation that needed to be done on the "Howard explodes" shot and the full CG shot of Gladys, the crazy granny (Jeanette Miller), who climbs the wall like a cockroach. We were also called upon for several matte paintings and set extensions.

Can you explain to us the Granny sequence?
That was an all-CG granny that could climb walls like a cockroach. The CG granny had to interact with window blinds and other surfaces, so we ended up building the whole shot in CG. This also enabled multiple camera angle options to better complement the performance. Scott did ask to put Audrey, played by Willa Holland, in the shot for continuity, so I found a take of her from another plate and managed to morph her to work with the shot.

What references were you given for the angels' wings?
The director referenced eagle wings.
The look of the feathers was a difficult task. At first we were given the reference of a harder, leathery look to match the angels' armor. The theory was that these wings had to look tough in order to repel bullets and have knife-edge primaries. When we added feathers to the upper marginal covert area, it gave the more realistic quality we all see in wings. As a result, our task was to go with a more feathery feel, which involved a lot of re-texturing. I think it's hard to expect people to deviate from their perceptions of reality. If people don't see feathery wings as they know them to be, they are not going to buy it.

Your wings are really beautiful. How did you conceive them?
The guys at the Orphanage modeled them and we did the rest.
Generally speaking, we spent a lot of time getting the wings to feel natural and to really match the live action photography. It was difficult at times because these are no ordinary wings. They’re armored and have blades at the feather tips. But they’re also part of the character. They needed to become an extension of the actor’s performance. There are some nice subtle moments when the wings reflect the character’s emotions that work really well.
We are lucky to have the wing experience under our belt, as wings are very challenging to do properly. They're very difficult to rig, as every feather must slide over its neighbours during opening and closing. The geometry is also very heavy, as our feathers were built double-sided so they don't look paper-thin. Then, in lighting, it is very expensive to render the proper transparencies needed in each feather. This all adds up to one very big asset.
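The sliding behaviour described here is largely a staggering problem: each feather's rotation has to be distributed along the span so that neighbours stay overlapped as the wing folds and spreads. A toy, generic sketch of that staggering (made-up angles, not Spin's rig):

```python
# Toy feather staggering: as the wing fold parameter goes from 0 (closed) to 1
# (spread), the fan of feather angles widens, so each feather slides across its
# neighbour instead of the whole fan rotating as one rigid piece.
import math

def feather_angles(n_feathers, fold, folded_fan=5.0, spread_fan=80.0):
    """Per-feather rotation (radians) for a given fold amount in [0, 1]."""
    fan = folded_fan + (spread_fan - folded_fan) * fold
    return [math.radians(fan * i / (n_feathers - 1)) for i in range(n_feathers)]

if __name__ == "__main__":
    for fold in (0.0, 0.5, 1.0):
        degrees = [round(math.degrees(a), 1) for a in feather_angles(6, fold)]
        print(f"fold {fold:.1f}: {degrees}")
```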

Did you use digital doubles for the angels Gabriel and Michael?
Yes, for the flying shots. We ended up replacing the live stunt wire stuff with our digital doubles in order to get away from the "hey, that guy still looks like he is on a wire" look. We would usually start or end with the live action and do digi-double takeovers.
These doubles were also used to aid tracking and animations as we needed them in every wing shot in order to cast and receive shadows.
Grandma Gladys was also a Digital Double.

How many shots did you work on?
We finished 260 shots with a crew of around 50, mainly over the course of a year, but the end sequence was shot in October 2009 so we had 100 shots to finish in 8 weeks.

What software did you use at SPIN?
Layout used primarily PFTrack for tracking. Modeling used 3ds Max, Maya and ZBrush; texturing was done in Photoshop CS4 and Deep Paint 3D; animation in Maya; SiTex Graphics' AIR for rendering; Eyeon Fusion for compositing. I used an Autodesk Inferno.
Our pipeline has changed drastically since: we now have a stereoscopic pipeline with 3DEqualizer for tracking, Nuke for finishing, and rendering with Pixar's RenderMan.

What was the most complicated shot on this show?
There were many complex shots in this film. But one that stands out is a moment when an army of angels descends from the heavens. The shot begins in a medium close-up of Michael silhouetted against a sunny sky. The camera then tilts up to reveal thousands of angels in formation. The angels break formation and dive in a steep swirling vortex before flying by very close to camera. The shot combines greenscreen footage with matte painting, crowd simulation, effects animation, and hero keyframe animation by Lead Animator Marc Schreiber. It was a challenge to render and pull together so many elements but in the end it turned out very nice.

Did you encounter any difficulties in particular?
Our wing shots were done over and over, so you achieve an "economy of scale" by doing lots of similar shots. But we were also asked to do the blue light shots where the angel Michael disappears. This shot took a lot of our R&D and artist resources to complete in a difficult stretch of time, when we were trying to complete 100 shots in the end sequence. The light was to radiate within the body and eventually take over. To get this, we built a CG version of Michael to get a subsurface light effect. The CG Michael was also used to drive expensive particle simulations. A lot of work for one shot, but we pulled it off.

How did you conceive the landscape at the end of the movie?
The ending sequence was shot in October 2009, all on stage at Sony. I was on-set VFX Supervisor along with Joe Bauer, the production VFX Supervisor taking over for Jonathan Rothbart. Jon was not available as he was working with Scott Stewart on PRIEST, but he managed to squeeze in our week-long shoot. Because of the greenscreen stage setup, we ended up doing around 100 shots involving set extensions, full CG environments and, of course, more wings.
There are some beautiful matte paintings by Juan Jesus Garcia near the end of the film, which turned out really nice. It's a slow crane shot overlooking a tent city, a camp at the foot of the mountains housing survivors of the apocalypse. We had greenscreen footage of the actors on a foreground set piece, and everything else you see is CG.

As on any production, there are unplanned things that happen in post. What were those things on this show?
Helping the look of the set rock in the end sequence. We had to roto all the actors from the plates in order to control the look of the set in compositing. A lot of work, but it really helps set the proper atmosphere.

What memory did you keep of this experience?
All great memories. When doing two features back to back involving wings, you tend to see them in your sleep. I’m not complaining, we are lucky to have these opportunities.

What is your next project?
We’ve got an exciting slate of projects lined up for 2010! Including another really cool project with director Scott Stewart called PRIEST. It’s going to be a great year for all of us at Spin.

What are the 4 movies that gave you a passion for the cinema?
DARK CITY, THE MATRIX, FIGHT CLUB, THE CELL, BRAZIL, BLADE RUNNER. Sorry, I gave more than 4 but I have a passion for dark movies. To me, these kinds of movies really convey emotion and creativity.

Thanks for your time.

WANT TO KNOW MORE?
Spin VFX: Official Website of Spin.
Defect: Blog of Eric Doiron, compositing supervisor at Spin, where he talks in detail about the compositing of the wings.

© Vincent Frei – The Art of VFX – 2010

HARRY POTTER AND THE HALF-BLOOD PRINCE: Nicolas “Casquette” Aithadi – VFX Supervisor – The Moving Picture Company

After a few years in videogames (FINAL FANTASY X, TOMB RAIDER 5) and French movies (VIDOCQ, ASTERIX: MISSION CLEOPATRA), Nicolas Aithadi took on the London adventure, joining The Moving Picture Company and working on such projects as TROY, ALEXANDER and CHARLIE AND THE CHOCOLATE FACTORY. He is now working on the next installment of the HARRY POTTER franchise.

What got you where you are?
I started as an illustrator when I was younger, working for newspapers like Force Ouvrière Hebdo. I jumped to digital when I met people who offered me work on Science et Vie Micro, a French magazine dedicated to computers. I created the interactive CDs that they gave away with the magazine. From there I worked my way towards commercials and later on to film work. In 2002, The Moving Picture Company contacted me and offered me a job as a TD. I gladly accepted and have worked there ever since. My first job was Lead Animator on THE MEDALLION, then Sequence Supervisor on TROY, CG Supervisor on ALEXANDER, and finally I arrived at my goal: VFX Supervisor on the last X-MEN.

How did you become VFX Supervisor on Harry Potter?
MPC has been part of the Potter series since the beginning, so it was just a matter of time for me to get involved. My first experience was on THE GOBLET OF FIRE as a CG Supervisor. I like the kind of effects we have to create for these films: it's always different and often very challenging, and there is no time to get bored. After Roland Emmerich's 10,000 BC, I was offered the opportunity to work on THE HALF-BLOOD PRINCE. As I had always wanted to get back to the Potter world, I took the job.

Almost every studio in London worked on THE HALF-BLOOD PRINCE; which sequences was MPC in charge of?
For this Potter we were primarily in charge of the two Quidditch sequences, as well as various other effects across the film.

Can you tell us a bit more about it?
Other effects included an ice-skating snowman, a floating burning newspaper and shots of the Hogwarts Express, including a camera travelling from inside the train, out of a window and back in two carriages away – a shot that took something like six months to finalise. Another massive shot was the apparating: the idea was that Harry and Dumbledore use magic to travel, and their bodies are deformed, stretched and fused with one another. We had to build digital doubles of Harry and Dumbledore that could work very close to camera and find a technique to melt and fuse the bodies. We were pressed for time, so we decided to make this shot the old-school way and model pretty much everything. It was like making stop-motion animation with CG: the shot was modeled every five frames.

Did you use a lot of digital doubles for the Quidditch sequences?
Pretty much all the shots in both sequences have some kind of digital double.
The idea was that from the moment a character moved, we would switch to a digi-double. The shot where Ginny does a barrel roll, for instance, was entirely digital, from the environment to the characters. We did a big job on the shading and facial animation to be able to create the best digi-doubles possible. We used what we called videogrammetry: we had an actor sitting on a chair with four cameras pointed at him or her – two in front, one aiming low and the other aiming high, and one on each side.
The actors had tracking markers on their faces and we had a very flat lighting setup. Once we'd shot those elements, we used the four plates to create animated UV textures that were perfectly synched to the facial tracking points we were using to animate the CG faces. The lighting being so flat, we were able to light and shade them like traditional textures.
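As a rough, hypothetical illustration of the projection step behind such animated UV textures (MPC's actual videogrammetry tools are proprietary; the pinhole camera model and per-vertex splatting below are deliberate simplifications):

```python
# Hypothetical sketch: for one frame, project the mesh through a calibrated camera,
# sample the plate at each projected vertex and write the colour into the mesh's UV
# texture, giving one frame of an animated UV texture.
import numpy as np

def project(verts, cam_matrix):
    """Project Nx3 world-space vertices with a 3x4 camera matrix to pixel coordinates."""
    homo = np.hstack([verts, np.ones((len(verts), 1))])
    p = homo @ cam_matrix.T
    return p[:, :2] / p[:, 2:3]

def bake_frame(verts, uvs, cam_matrix, plate, tex_size=512):
    """Splat per-vertex plate colours into a tex_size x tex_size UV texture."""
    tex = np.zeros((tex_size, tex_size, 3))
    pix = np.round(project(verts, cam_matrix)).astype(int)
    h, w = plate.shape[:2]
    for (u, v), (x, y) in zip(uvs, pix):
        if 0 <= x < w and 0 <= y < h:
            tex[int(v * (tex_size - 1)), int(u * (tex_size - 1))] = plate[y, x]
    return tex

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    verts = rng.uniform(-0.5, 0.5, (1000, 3))                 # stand-in head mesh
    uvs = rng.uniform(0.0, 1.0, (1000, 2))                    # its UV layout
    K = np.array([[500.0, 0.0, 160.0], [0.0, 500.0, 120.0], [0.0, 0.0, 1.0]])
    cam = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])   # camera 5 units back
    plate = rng.random((240, 320, 3))                         # one frame from one camera
    print(bake_frame(verts, uvs, cam, plate).shape)
```

A real version would blend the four cameras, fill the gaps between splats and stay synced to the tracked markers, but the data flow is the same.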

Did you use miniatures?
Not for the Quidditch sequences, except for the school in the distance, but even then we re-projected it on CG models for better control.

What’s a typical day on Harry Potter?
My days very quickly became the same routine. In the morning we have a production meeting where my VFX Producer, CG Supervisor, 2D Supervisor and I meet and discuss the things to be achieved during the day. After that, pretty much for the rest of the day, I do what we call dailies, which are reviews of the work that the various departments have done the previous day or week, depending on the department. The purpose of these reviews, and of my job, is to ensure that we are on track in terms of deadline, budget and artistic quality.

What were the challenges on this particular Potter?
As I said before, the challenge was the digital double work for the Quidditch sequences. Not only did the facial animation and rendering have to be improved, we also had to deal with CG hair and CG cloth of extremely high resolution. In addition to that, we had to create full CG environments. It was intense: 100% of the shots have some kind of CG in them and maybe 90% are entirely CG. What we wanted to achieve was to make the Quidditch sequence more dynamic than it had ever been, and for that the solution was to free up the camera.

What was the size of your team?
A little under a hundred, maybe 80 or 90 artists.

How many shots did MPC work on?
We were in charge of about 250 shots. Not a massive number in comparison to what MPC handles on a regular basis, but they were complicated shots.

What were the main technical changes since the last Potter?
We worked on the characters a lot, developing skin shaders and facial animation tools. We developed FX animation as well, with massive work on fire and water. MPC is always developing proprietary tools: we have our fur system called Furtility, which was started on 10,000 BC and improved on PRINCE CASPIAN, and which we used for the Quidditch players' hair. We have an amazing rigid body dynamics tool called PAPI and our rendering pipeline, Tickle.

How long did the project last?
It usually takes a year of work for a Potter. A big part of that is shooting and prep.

Other challenges on this show?
As I said earlier, Ginny's barrel roll gave us some grief. We worked on this shot for about six months to create the most realistic character possible, working out all the details from the eyes to the hair. For a while she looked good, but she didn't quite look like herself. We ended up making it, but it was a battle. The other very complex shot was the train shot. It was a 3,000-frame shot, and it was complex not in terms of technical challenges but more in terms of the number of elements to put together: about 50 motion-control plates had to be combined with a CG train interior, a CG train exterior and a CG environment.

What was your best memory on the show?
My best memory? I have a lot, but if I must choose one, it was the day I was shown the first shot of the match sequence. I asked Guillaume Rocheron, my CG Supervisor at the time, when exactly we would replace the actor with the CG version, and he told me that it was already the case. I looked again: it was the CG double on screen and I hadn't noticed. It was a good sign; it was when we realised that we would make it. The client didn't notice either.

Why did you leave Paris to go to London?
It was after September 11. There wasn't much work in Paris, and when there was, it wasn't very challenging. I had always wanted to work in London and get involved in the big Hollywood projects. But mainly, it was because they called me (laughs).

Have you had offers from Paris since you've been in London?
No, not one. They don’t like me anymore. (Laughs).

Which are the four movies that inspired you to work in this industry?
SEVEN SAMURAI has always been my favorite film of all time. When I was a kid, we watched loads of old French and American classics. It would be difficult to choose specific movies; I like films in general, from SEVEN SAMURAI to FERRIS BUELLER'S DAY OFF by way of BACK TO THE FUTURE…

Thank you for your time.
It was nice answering your questions. Thanks.

WANT TO KNOW MORE?
The Moving Picture Company: Dedicated page for HARRY POTTER AND THE HALF-BLOOD PRINCE on MPC’s website.

© Vincent Frei – The Art of VFX – 2010