BARNEY’S VERSION: Yanick Wilisky – VFX Supervisor & Co-founder – Modus FX

Yanick Wilisky is back on The Art of VFX. After explaining his work on SOURCE CODE, he now tells us about his work on BARNEY’S VERSION.

How did Modus get involved on this show?
We were working with Louis Morin, VFX supervisor on THE AMERICAN, at the time, and he was starting this film with Richard Lewis and Robert Lantos, based on the amazing novel by Mordecai Richler. We just had to help bring this film to life.

How was the collaboration with director Richard J. Lewis?
Richard was sincerely amazing. His directions were very precise yet he was open to suggestions. He knew exactly what he wanted and was expressing his thoughts impressively fast. I loved it!

What was his approach about VFX?
His focus was definitely on storytelling during the whole process. As long as the visual effects were invisible and every action was clear, he was satisfied. He had amazing confidence in us throughout the whole process.

How was the collaboration with Production VFX Supervisor Louis Morin?
Again, as always, wonderful! We have done 5 films with Louis over the past few years. Louis is one of those visionary VFX supervisors who is able to extract the best out of anyone. He has an amazing rapport with artists and is a great communicator, and he is very savvy about the full spectrum of film production, which makes him an amazing creative addition when it comes to finding solutions.

What have you done on this show?
We generated a CG plane, digital prosthetic additions, green screen compositing, digital matte paintings and some monitor inserts.

Can you tell us more about the shot in which we see New York from a park? How did you create this matte painting and where was it filmed?
It was filmed in Parc Lafontaine in Montreal. Louis Morin then went to New York with a small crew and filmed the New York Central Park skyscraper background using the same camera settings, for Modus to roto and combine both environments seamlessly.

What was the real size of the set for the shots with Barney and Miriam in front of New York at night?
I do not have the exact dimensions but it was a very small set.

How were the backgrounds filmed and composited?
The actors were filmed on green screen in Montreal. Modus generated the water in CG to match perspective and Louis filmed a night skyline in New York.

Can you tell us, step by step, how you created the impressive CG plane?
The plane was a CL415, which is a very well-documented and popular model. We started with footage we acquired from Bombardier’s archives. We were able to get references from several points of view, which was enough to build an accurately scaled model of the plane. Mathieu Phaneuf, lead modeler at Modus, built the plane over a few weeks. Martin Pelletier, our lead look development artist, spent another few weeks bringing the final, decisive touches to the plane. It goes without saying that we are all very happy with the result.

How did you take your references?
Some came from the Bombardier footage. We took more references from the web, and Louis Morin had captured impressive HDRI data during the second unit shoot.

How was the interaction with the lake shot?
We asked Louis if it was possible to get a small plane or boat during second unit, which he was able to get for us. We used the practical wake for the tip of our CG plane and enhanced the rest of the wake using Houdini, our CG solution package. We then recreated reflections and additional waves emitted from the plane for the final touch.

Can you tell us in detail about the creation of the CG water?
Francois Duchesneau was our lead visual effects artist on the project. The amount of water that gets released from these planes is very impressive. We realized, looking at the references, that we needed that feeling of light being absorbed by the thick mass of water particles as it gets closer to the core. He came up with an amazing recipe, again using Houdini, based on a volume generation technique which gave a lot of thickness and depth to the particles and allowed us to hit our goal.
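The interview doesn’t detail Modus’ actual Houdini setup, but the core idea of a volume pass that lets light be absorbed by the thick core of the particle mass can be sketched in a few lines. This toy Python version (all names and parameters are illustrative, not from the production) rasterizes particles into a coarse voxel density grid and applies Beer–Lambert extinction:

```python
import math

def splat_density(particles, grid_size=8, radius=1.5):
    """Rasterize particle positions into a coarse voxel density grid.
    A toy stand-in for a Houdini volume-from-particles step."""
    grid = [[[0.0] * grid_size for _ in range(grid_size)]
            for _ in range(grid_size)]
    for p in particles:
        for x in range(grid_size):
            for y in range(grid_size):
                for z in range(grid_size):
                    d = math.dist((x, y, z), p)
                    if d < radius:
                        # linear falloff kernel around each particle
                        grid[x][y][z] += 1.0 - d / radius
    return grid

def transmittance(grid, x, y, absorption=0.8):
    """March along +z through one voxel column and accumulate
    Beer-Lambert extinction: T = exp(-absorption * sum(density))."""
    optical_depth = sum(grid[x][y][z] for z in range(len(grid)))
    return math.exp(-absorption * optical_depth)
```

Columns through the dense core return a low transmittance (darker, thicker-looking water), while empty columns stay fully lit, which is exactly the depth cue described above.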

How did you create the distant smoke for the fire?
Since it was distant smoke, we came up with a simple yet very efficient technique using detailed matte painting elements of smoke, which were then distorted into the plate throughout the length of the shot.

Can you tell us more about the other invisible effects you created for this show?
We changed the hockey team’s name on the card that Miriam gave to Barney before leaving the wedding. We changed a shot of Miriam and Barney coming down an alley in a taxi to a shot of a Rabbi coming down an alley in a taxi: we erased Miriam and added a CG hat to Barney to complete his transformation.

How did you create the final shot with the wide view over Montreal?
Louis provided two different plates that we then rotoed and combined before adding CG snow as a final touch.

What was the biggest challenge on this project and how did you achieve it?
Definitely the CG plane. We achieved it with a combination of HDRI, references, and small, precise details (specular, heat distortion, etc.) that blend the whole integration seamlessly.

Was there a shot or a sequence that kept you up at night?
The whole post-production went very smoothly. Thanks mostly to the precise feedback from Richard and the confidence Louis and Richard placed in us, we were able to complete the work on schedule and to the level of quality the film deserved.

What do you keep from this experience?
It’s always an amazing experience, and it leaves you with that great feeling you get when you work on a film you love and would watch over and over again. Being surrounded by Richard, Louis and Susan was wonderful. The energy in the studio was high. All the reasons why we love this business so much were there to make the best film possible.

How long have you worked on this film?
We have worked on this film for 5 months.

How many shots have you done?
65 shots

What was the size of your team?
42 people (technicians & artists) worked on the film.

What is your next project?
We have recently completed work on TWILIGHT: BREAKING DAWN – PART 1, IMMORTALS and BIG MIRACLE. We are presently hard at work on MIRROR, MIRROR for Tarsem Singh, on another VFX-intensive project that we cannot mention for now, and we are completing our first full-CG stereo animated film that delivers in a few months.

A big thanks for your time.

// WANT TO KNOW MORE?

Modus FX: Dedicated Page about BARNEY’S VERSION on Modus FX website.

// BARNEY’S VERSION – VFX BREAKDOWN – MODUS FX

 

© Vincent Frei – The Art of VFX – 2011

THE THREE MUSKETEERS: Ara Khanikian – Lead Compositing – Rodeo FX

Ara Khanikian is back for the third time on The Art of VFX. After telling us about his work on JONAH HEX and RESIDENT EVIL: AFTERLIFE 3D, he explains the work of Rodeo FX on THE THREE MUSKETEERS.

How did Rodeo get involved on this show?
Mr. X approached us to work on this project. Having worked with Mr. X in the past on projects like RESIDENT EVIL: AFTERLIFE, DEATH RACE and REPO MEN, we had a very strong relationship established and the matte paintings and environments that were required for this project were a perfect match with our expertise.

How was your collaboration with Production VFX Supervisor Dennis Berardi?
Our collaboration with Dennis Berardi was very good. We communicated mainly through weekly cineSync sessions with Sebastien Moreau, Rodeo FX’s president and VFX supervisor, and the department leads.

What have you done on this show?
Rodeo FX mainly worked on the establishing shots of London and Venice.
We also worked on the shots establishing the house of the Musketeers in Paris and the attack by Aramis in night-time Venice as seen in the first few minutes of the film. We also worked on the Port of Calais shots.

How did you proceed, step by step, to create those different establishing shots? Did the production give you sketches or previs?
The way we like to approach these kinds of establishing shots is by doing a lot of visual research. We go through a lot of historical books, pictures and sketches for reference, and we search the web for images and pictures (old and new) to have as much reference material as possible.
We also received a lot of sketches from production which gave us a very clear idea on what the director liked and wanted to see. All these shots were already previz-ed by Mr. X which was a crucial step for these kinds of shots.
Before starting work on our shots, we did a lot of R&D and concept work to get a clear idea of the look for these shots. Some of them were very tricky because of the very ambient lighting of London; it’s not an easy task creating a full CG environment in a very overcast lighting setup. While this was being perfected, all the shots were going through our camera matchmove and layout departments. Being a stereo project, a great deal of attention was given to the layouts and matchmove of these shots. FBX scenes were exported and heavily used in comp on our Flame and Nuke systems. Matte painters were creating these beautiful environments while the CG department was creating the massive asset for the Tower of London and its environment.
By having FBX scenes that were updated frequently, the compositors were able to start work early in the pipeline, carefully placing smoke from chimney stacks, adding torch-lights and projecting water plates, all while the matte paintings and CG layers were still at a very embryonic stage.

How did you recreate the beautiful shots showing Venice (both night and day) and the fireworks?
Production provided us with helicopter footage of Venice shot on a RED camera. We matchmoved these shots and created a stereo rig in layout, because these shots were shot mono and had to be converted to stereo. These aerial plates were shot with a long zoom, so in layout we decided to modify the lens to simulate what we would have seen if the helicopter had been a lot lower and closer to the ground, while still keeping the same framing. This gave us much more interesting depth and stereo by introducing a little bit of parallax and perspective change in all the Venice buildings.
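The lens trick described here, moving the virtual camera closer while widening the lens so the reference framing is preserved, can be illustrated with a little pinhole-projection math (a simplified sketch with made-up numbers, not values from the show):

```python
def apparent_size(real_size, dist_to_camera, dist_to_ref_plane):
    """Projected size, normalized so an object sitting on the reference
    plane fills the same framing no matter where the camera is
    (i.e. the lens is re-fit to the plane for each camera position)."""
    return real_size * dist_to_ref_plane / dist_to_camera

# A building 50 units behind the reference plane:
far = apparent_size(1.0, 1000 + 50, 1000)   # distant helicopter, long zoom
near = apparent_size(1.0, 100 + 50, 100)    # lowered camera, wider lens
```

With the far camera the building stays near full size (about 0.95), but with the near camera it shrinks to about 0.67 of the framing: objects off the reference plane separate much more, which is the extra parallax and perspective change the stereo layout was after.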
Once we had an approved stereo layout, our matte painters started painting over the principal photography plates and transformed present-day Venice into what it would have looked like in the 17th century. We painted out everything that looked modern, including all the light sources, and replaced them with torch lights and candles. Having a convincing moonlit feeling with torch-lights in windows and streets was a big part of the look for these shots.
These plates were all shot in daytime, so we gave them a night look (and a dawn look to one of them). This look was a combination of matte painting and color correction in comp. We also created CG fireworks and crowd simulations in Maya and animated CG ships and gondolas in Softimage XSI.
Live action elements like smoke from chimney stacks, torch-lights, practical fireworks and water (canal and sea) were all added in comp.

Can you tell us more about the introducing shots of Athos and Aramis?
In these shots, we created the aerial views of Venice with all the fireworks and a very festive feeling, with crowd simulation and boat animation. We composited Aramis, who was shot on green screen, on top of a bridge looking down on a canal in Venice just before his attack on a gondola.
We also worked on Athos’ attack on the guards, creating set extensions and a CG crossbow and arrows.

How was the shot of D’Artagnan arriving in Paris filmed?
This shot was filmed on a bridge in Wurzburg, Germany. A green-screen was used for the first part of the shot, behind D’Artagnan. The second part of the shot on the bridge was shot with a lot of costumed stand-ins.

Can you tell us in detail about the creation of this great shot?
We were provided with concept work for this shot showing us what the director wanted to see. A farmer’s market and a rural scene were to be created in the A-part of the shot, and 17th century Paris in the B-part. The bridge and its actors were rotoscoped out of the plate and integrated into a full CG / matte-painting Paris environment.
The whole shot was a blend of matte-paintings and CG elements with live-plate elements. Crowd elements were shot on green-screens and composited into the digital environment. Live smoke elements and little pit-fires were added. The Seine was created by projecting live water plates in comp. Mr. X provided us with their asset of Notre-Dame which was integrated into our matte-painting.

How did you create the Musketeers’ house?
The Musketeers house was actually plate photography. We repo’ed it down and created the environment around the house.

What did you do on the shot in which Milady opens a secret room in the Queen’s chamber?
We animated the fireplace and the mirror to back up approximately 4 feet to reveal the secret vault in the Queen’s chamber. We recreated the reflections in the mirror with set photos.

How did you recreate Calais?
We created the port of Calais with a matte painting. Production provided us with beautiful concept work, and we projected photographed period ships onto geometry along with projected ocean plates. We decided, for these shots, to use Nuke to do all the projections of the matte painting, so the matte painters exported their PSD files and the projection onto geo was handled entirely in Nuke. We were quite satisfied with the result.

Can you tell us more about the Tower of London?
The Tower of London was a very big asset that we created. It took us 4 months to complete it and it had a ton of detail!

How did you share assets with the Mr. X teams?
We shared a couple of assets with Mr. X, mainly the Notre-Dame that we needed to use in two of our shots, and the airship that we integrated into our Tower of London shots. We received the airship as a rendered element with all the passes needed to do the final comp. For the Notre-Dame asset, we only needed a specific angle of it, so we sent Mr. X a 3D scene and they used it to render their asset.

What was the biggest challenge on this project and how did you achieve it?
For this project, our biggest challenge was creating huge assets and a lot of different environments in a very limited amount of time. Another challenge was creating very ambient, overcast lighting for the Tower of London shots. A bright sunny day would have made the task easier; achieving the photo-realism we were aiming for in this kind of lighting was a bit of a challenge.

Was there a shot or a sequence that kept you up at night?
No not really… with great artists and enough time, anything is possible (laughs).

What do you keep from this experience?
This project was a very fun one, we had a lot of variety in our shots and the sets and costumes were amazing. Working on a period film like this one is always great fun.

How long have you worked on this film?
Around 4 months.

How many shots have you done?
We delivered 29 shots.

What was the size of your team?
We were a team of about 15 artists.

What is your next project?
We’re currently working on MIRROR MIRROR, Tarsem Singh’s adaptation of Snow White, as well as UNDERWORLD 4 and JACK AND THE GIANT KILLER.

A big thanks for your time.

// WANT TO KNOW MORE?

Rodeo FX: Official website of Rodeo FX.
Mr. X: Official website of Mr. X.
Mr.X Interview: Interview of Eric Robinson about THE THREE MUSKETEERS.

© Vincent Frei – The Art of VFX – 2011

REAL STEEL: Erik Nash – VFX Supervisor – Digital Domain

Erik Nash began his career in 1979 on the film STAR TREK: THE MOTION PICTURE. He joined Digital Domain in 1995 and worked on projects like APOLLO 13, TITANIC and KUNDUN. He then became a VFX supervisor and took charge of the effects of films like RED PLANET, I, ROBOT and PIRATES OF THE CARIBBEAN: AT WORLD’S END. In the following interview, he talks about his work on REAL STEEL.

What is your background?
I started my career as a visual effects camera assistant on STAR TREK: THE MOTION PICTURE and spent eight seasons as the lead motion control cameraman on the STAR TREK TV series THE NEXT GENERATION and DEEP SPACE NINE. I joined Digital Domain and worked as a visual effects director of photography on APOLLO 13, TITANIC and a few other features, then began working as a visual effects supervisor – which is what I do today. I have been fortunate to supervise shows such as I, ROBOT, PIRATES OF THE CARIBBEAN: AT WORLD’S END and, most recently, REAL STEEL. Having started my career behind the camera, it was particularly thrilling for me to work in the virtual production realm, where it’s now possible to direct CG characters as if they are actually present on set – just like you’d shoot a scene with human actors.

How did Digital Domain get involved on this show?
My own experience and thinking for this project sort of began back in 2008, when I had the chance to get an in-depth look at the virtual production process on AVATAR. When I read the script for REAL STEEL a year later, I saw this material as a perfect fit for a similar virtual production paradigm. I didn’t know it at the time, but one of AVATAR’s producers – Josh McLaglen – was also attached to REAL STEEL and had the exact same thoughts for this project. The topic came up in an early meeting and, as I made the case for virtual production I could see Josh nodding in agreement. Dreamworks awarded Digital Domain the visual effects contract and we launched into a six-month preproduction phase.

How was the collaboration with director Shawn Levy?
Shawn was very open and collaborative. We had a really terrific working relationship.

What was his approach to the visual effects?
His sole interest in visual effects was in how it could help him tell the story. He wholeheartedly trusted the VFX team to help him fulfill his vision.

Can you tell us about your collaboration with Legacy Effects?
We had a great working relationship with Legacy Effects. Their supervisor John Rosengrant and I were in complete agreement regarding how the work would be best broken up. They built three practical robots – Ambush, Noisy Boy and Atom – that were used extensively throughout production for shots requiring human contact and upper body animation. We worked with Legacy and with the Real Steel production art department to finalize robot designs and mechanics that could be applied to practical robots and CG models alike, so we could create a seamless transition between the two methodologies as needed on screen.

Can you explain to us in detail the creation of the different robots?
The practical Legacy ‘bots were invaluable for lighting and texture data, as they provided a tangible point of reference for digital characters that needed to be indistinguishable from the real thing. Digital Domain modeled, textured, and rigged eight unique, hero robots for the fight sequences: Ambush, Noisy Boy, Midas, Atom, Metro, Blacktop, Twin Cities, and Zeus, in addition to numerous background robots that appear throughout the film.

How did you rig them?
Early conversations between our rigging team and the Giant Studios team enabled us to get the skeletons and naming conventions worked out. Defining our nomenclature and hierarchies in advance allowed us to streamline the handoff. After principal photography, we would get a live-action plate from production and a Maya file from Giant, and we could go straight into lighting and comp, without having to worry about tracking or layout or any sort of file translation. Because our rigs and their rigs were exactly the same, there was no retargeting needed on the back end.
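A handoff like this only works if the vendor’s skeleton and the facility’s rig agree joint for joint. As a purely hypothetical sketch (the actual naming conventions shared by Giant Studios and Digital Domain aren’t public), a pre-flight check might compare the two hierarchies before any Maya file lands in lighting:

```python
# Hypothetical joint hierarchy: {joint_name: parent_name_or_None}
FACILITY_RIG = {
    "root": None, "spine_01": "root", "spine_02": "spine_01",
    "arm_L": "spine_02", "arm_R": "spine_02",
}

def handoff_report(vendor_rig, facility_rig):
    """List every discrepancy that would force retargeting:
    joints missing, renamed, reparented, or extra."""
    problems = []
    for joint, parent in facility_rig.items():
        if joint not in vendor_rig:
            problems.append(f"missing joint: {joint}")
        elif vendor_rig[joint] != parent:
            problems.append(f"reparented joint: {joint}")
    for joint in vendor_rig:
        if joint not in facility_rig:
            problems.append(f"extra joint: {joint}")
    return problems
```

An empty report means the animation loads straight onto the facility rig with no retargeting, which is the zero-translation handoff described above.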

What were your references for their different animations?
Each robot had a different mocap performer assigned to it. This gave each robot a distinct personality and way of moving. Each performer had a very clear understanding of the physicality of their particular robot counterpart, because they were familiar with the concept art and could see their own performance applied to the Giant Studios version of their robot as they were performing on the mocap stage.

How was the presence of the robots simulated on set?
Once we had all of our fights mapped out in the virtual realm during pre-production, we brought our motion capture system to Detroit and installed it in each of our fight locations. The on-set mocap system allowed us to replay the pre-captured robot action as a live composite through the motion picture camera with complete spatial accuracy. Through the cameraman’s eyepiece and on the director’s monitor, CG boxing robots were visible in the real world set. The actors could then see playback of the recorded take with visible robots in the shot. This was a groundbreaking use of virtual production technology – it had never before been used to this extent on a film that featured so many real-world locations.

Did you create some previs to help the shooting?
For the Metal Valley stunt sequence, Previs Supervisor Casey Schatz modeled all of the camera cranes, lighting rigs, rain bars and stunt rigging that would have to be fit into a very confined area around the set. This allowed all of the department heads to see how all of this huge equipment had to interrelate. We used the previs to figure out how we were going to set up everything on location. When we actually got there, everything went in exactly as we’d planned, and we were able to shoot that very complicated setup in three short nights.

Digital Domain worked on the TRANSFORMERS trilogy. How did this experience serve you on this show?
The biggest help our TRANSFORMERS experience gave us was organizational and managerial. The greatest similarities between the projects were the quantity and complexity of the work. Stylistically, the projects were very different.

How did you make such realistic renders for the robots?
DD’s “Light Kit” system, which was originally developed for BENJAMIN BUTTON, produces near-final lighting renders right out of the box using high dynamic range (HDR) images captured on set during principal photography. Once the compositing, integration and lighting teams properly format the HDRs, the resulting 360° digital worldview of each location is ingested by our software pipeline for lighting CG characters within that environment. Typical HDRs are used as an infinite ray dome; on REAL STEEL we projected the HDRs onto 3D geometry and actually ray-traced them in 3D space, making the lighting much more physically accurate. As our CG character moved through the set, it would pick up the influence of the HDR in the proper three-dimensional space.
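The difference between the two approaches is easy to show with a toy lookup (illustrative Python, not DD’s Light Kit): with an infinite dome the result depends only on the ray direction, while with the HDR projected onto geometry the character’s position changes what it sees.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dome_lookup(direction):
    """Classic IBL: index the HDR purely by ray direction (infinite dome).
    Toy 'HDR' whose brightness varies with azimuth only."""
    x, y, z = normalize(direction)
    return 0.5 + 0.5 * math.cos(math.atan2(z, x))

def projected_lookup(point, direction, radius=10.0):
    """HDR projected onto real geometry (here a sphere of finite radius):
    intersect the ray with the geometry, then index the HDR by where the
    hit point sits on the dome -- so position now matters."""
    d = normalize(direction)
    # ray-sphere intersection centred at the origin: |p + t*d|^2 = r^2
    b = 2 * sum(p * c for p, c in zip(point, d))
    c = sum(p * p for p in point) - radius * radius
    t = (-b + math.sqrt(b * b - 4 * c)) / 2
    hit = tuple(p + t * di for p, di in zip(point, d))
    return dome_lookup(hit)  # direction from origin to the hit point
```

Sampling the same direction from two different positions returns the same value from the dome but different values from the projected lookup, which is why a character walking through the set picks up spatially varying light.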

Can you tell us more about your work on the futuristic city?
There really wasn’t any “futurization” done through VFX. All of the cityscapes were achieved in-camera on location in Detroit, Michigan.

How did you create the huge arena for the final fight?
Production Designer Tom Meyer and his art department delivered concept art and a 3D model of a fictional 20,000-seat sports arena. Our environments team, led by Geoff Baumann and Justin van der Lek, modified the design to fit the footprint of Cobo Arena in Detroit, where the sequence was shot. Then the entire interior was modeled in detail, textured and lit to match the practical, partial set. We then populated the arena using a crowd system developed specifically for REAL STEEL.

Did you develop specific tools for this show?
The use of virtual production technology – a process we tailored specifically for this film – was the biggest innovation. Virtual production had never before been used this extensively for a location-based feature.
Also, our digital effects supervisor Swen Gillberg and our environments team devised a new crowd technique dubbed “Swen’s Kids.” Eighty-five extras were replicated photographically at very high resolution to achieve crowds as large as 20,000 spectators in the arena fight sequences. The process started with individual extras against green screen who would go through a scripted series of movements – seated, standing, clapping, arms raised, etc. – that were captured by three digital cameras positioned at different angles. The three cameras ultimately captured over 80 terabytes of HD footage. A software tool was written within Nuke to manage the footage and allow for easy replication of crowds depending on the needs of each shot, including factors such as crowd size, proximity to camera, enthusiasm level and camera angle, among others. This system was used to produce anywhere from a handful of background spectators to a crowd of thousands, all generated through an easily managed system that required fewer computing resources than a full 3D solution.
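The interview describes the “Swen’s Kids” Nuke tool only at a high level, but the core replication idea can be sketched in plain Python (the clip library, attribute names and parameters here are invented for illustration):

```python
import random

# Hypothetical library of pre-filmed green-screen sprite clips.
CLIPS = [
    {"id": "extra01_clap",   "angle": "low",  "enthusiasm": 2},
    {"id": "extra01_cheer",  "angle": "low",  "enthusiasm": 3},
    {"id": "extra02_seated", "angle": "high", "enthusiasm": 1},
    {"id": "extra03_cheer",  "angle": "high", "enthusiasm": 3},
]

def build_crowd(size, angle, min_enthusiasm, seed=0):
    """Replicate a small sprite library into a crowd: filter clips by
    camera angle and enthusiasm level, then sample with replacement and
    randomize start frames so duplicated extras don't move in sync."""
    pool = [c for c in CLIPS
            if c["angle"] == angle and c["enthusiasm"] >= min_enthusiasm]
    rng = random.Random(seed)
    return [{"clip": rng.choice(pool)["id"],
             "frame_offset": rng.randrange(0, 100)}
            for _ in range(size)]
```

Because clips are sampled with replacement and offset in time, a modest set of filmed extras can plausibly fill thousands of seats without obvious synchronized duplicates, and far more cheaply than a full 3D crowd.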

Was Digital Domain Vancouver involved on this show?
100% of Digital Domain’s work on REAL STEEL was done in Venice, CA.

What was the biggest challenge on this project and how did you achieve it?
The biggest challenge was figuring out a way to create eight-foot tall working robots that could do battle in a boxing ring and look entirely believable doing so. Our virtual production process was a key front-end solution to meeting that challenge.

Was there a shot or a sequence that kept you up at night?
The scene with the bull at the beginning of the movie was the most difficult. Flesh and fur is much harder to achieve digitally than metal.

What do you keep from this experience?
REAL STEEL stands out for me as by far the most rewarding project I’ve been involved with. Working with Shawn was incredibly collaborative and fulfilling. The work that our team at DD produced is as close to seamless as I could ever hope. It is a movie that I will always be very proud of.

How long have you worked on this film?
We worked on REAL STEEL for about 19 months total – starting with early prep work in December of 2009, through to final delivery in June of 2011.

How many shots have you done?
Digital Domain produced 626 VFX shots for Real Steel.

What was the size of your team?
Digital Domain’s team reached a maximum of 250 people, including artists and production support.

What is your next project?
We’re currently looking at several projects, but don’t know what will be next.

What are the four movies that gave you the passion for cinema?
2001: A SPACE ODYSSEY, CHINATOWN, BRAZIL and THE BIG LEBOWSKI.

A big thanks for your time.

// WANT TO KNOW MORE?

Digital Domain: Dedicated Page about REAL STEEL on Digital Domain website.

© Vincent Frei – The Art of VFX – 2011

TOWER HEIST: Mark Russell – Production VFX Supervisor

After telling us about his work on THE ADJUSTMENT BUREAU, Mark Russell is back on The Art of VFX. He talks about his work on the film TOWER HEIST.

How did you get involved on this project?
I met Brett Ratner just as I was finishing THE ADJUSTMENT BUREAU. The producer Bill Carraro was the same for both films. They started pre-production right next to my office, so it made it pretty easy to communicate.

How was your collaboration with director Brett Ratner?
Brett really relies on his team to bring their ideas to the table. He really allowed me to create some interesting things and realize my ideas.

What was the real size of the set for the penthouse and the pool on top of it?
Kristie Zea, the production designer, and Nick Lundy, the art director, modeled the penthouse set on a real penthouse apartment in the Trump International Hotel and Tower. The scale and shape of our set were nearly the same. We did have to make it a bit smaller because we couldn’t find an available stage in NY that could hold such a large apartment. The pool was a complete fabrication, so we made a pool that seemed big enough to be a lap pool and would look good on the roof.

How did you take the references and the elements for the environment?
The BG elements were captured a number of different ways. First, we shot a very high-res panoramic image for use as a printed translight backing during photography. That same image was then used as the digital background in most of the scenes from the apartment. We supplemented the translight image with images captured by an array of Canon 7D cameras shooting both stills and video from the same position. Also, we shot a number of plates during the Thanksgiving Day Parade, on film and again with the Canon array.

Did you consider doing the background shots with the rear-projection technique?
During pre-production, we (Brett, Kristie, Dante Spinotti (cinematographer) and I) decided that we wanted the flexibility of a greenscreen for the background for a number of reasons. First, we felt that the amount of parallax we would get from the multiple Steadicam shots would look much better with a 2-1/2D image rather than a flat backing or a projected image. Also, the sheer scale and scope of the views was too large to realistically cover the set, so we would have been supplementing much of the BG anyway, and I felt it would be cleaner to replace the BG as a whole. Finally, Dante really wanted each of the scenes to feel like different times of day, and we knew it would be much more difficult to achieve that in camera with one backing.

Can you tell us more about the sequence in which Eddie Murphy is moving through the elevator shaft?
The elevator shaft was tricky in that it’s not a very long set of shots, yet it was an enormous physical and CG build. The art department built a three-story section of the elevator shaft with a hotel hallway attached (20 feet in the air) that was used in the shots of the car getting dropped into the shaft and the guys arriving at the bottom and top of the shaft. It was also used in the shots of Eddie climbing out of the elevator, which we extended to add peril. John Bair and his team at Phosphene built a CG version of the shaft and the elevator cars that we used in all of the shots of the elevator moving and anywhere we saw more than two floors of elevator in the BG.

How did you design and create the shots in which the Ferrari is suspended in the void?
These shots were story-boarded by Brett and an artist named Dave Cooney. I took the boards to Gravity, and we made a 3D previs of the entire sequence with the car and the guys outside the building. In doing that, we found that we needed some additional shots, so we created a number of helicopter-type shots that we could do in full CG which gave the scene a bit more scale and danger. On the set, we used the 3D previs as our shooting guide on a shot by shot basis. It was very helpful.

Can you tell us more about the shots with the team and the Ferrari in the elevator shaft?
Essentially the scenes were divided into two categories: things we could shoot in the practical shaft and things we needed to shoot on greenscreen. We shot all of the actors on the top of an elevator cab against green screen for anything where the cab was traveling up or down the shaft. Shots where the elevator cab was slowing to a stop or not moving at all were done on the physical set. Steve Kershoff, our amazing special effects coordinator, built a moving platform that could hold the actors and the Ferrari, so the shots where they are slowly approaching the roof of the shaft are mostly in camera. The platform didn’t move very fast, so anything that needed to appear fast was shot on the greenscreen.

Did you create digital doubles for the wide shots showing actors on the Ferrari?
Yuval Levy and the 3D team at Gravity made digital doubles for a number of the wide shots with Ben, Eddie and Matthew hanging on the Ferrari.

Have you done something on the Thanksgiving Day Parade?
We had a team of 7 cameras shooting various elements and shots during the Thanksgiving Day Parade. It was a crazy day, because we really had to rely on what each team could capture on their own. We could communicate by radio, but we couldn’t really get to one another once the parade had started. I was very specific about the types of shots I was hoping to get from each of the positions. The most important pieces for me on the day were the ones shot from the roof of The Tower. It was difficult because we were shooting the BG plates before we shot any of the FG pieces, so I was guessing at a lot of what we were going to need when we got into post, and of course, we only had one chance.

Can you tell us how you chose the different vendors and distributed the shots?
I tried to choose vendors on their skills as well as their capacities. Each of our vendors has specific strengths, and I really tried to play to those as best as possible. It was a very big show for NY, and I wanted to make sure that no one vendor got too bogged down. It was a bit challenging in the end because Brett was requesting a lot of minute changes up until the very end that really cramped our pipeline. We had to move some things around at the end in order to make sure that we could address all of Brett’s notes.

How was the collaboration with the different VFX Supervisors?
I really enjoyed working with everyone. It’s a great experience for me to see how different people approach similar work. Jim Rider, Greg Liegey, John Bair and Randy Balsmeyer and the other facility guys each came at things from a slightly different perspective, and it’s exciting for me to work with them. I also had the pleasure of working with Adam Howard, Mike Fink and John Bruno, who consulted for a few weeks on the film. Each of us had a slightly different focus, and it was great to work together with all of them.

What was the biggest challenge on this project and how did you achieve it?
The biggest single challenge was the opening shot of the film. We start close on the face of Benjamin Franklin from a $100 bill, then the camera pulls up to reveal a swimmer in a pool with the bill on the bottom. It continues to pull up until we see that the pool is on the roof of The Tower, until we finally tilt up to show the city of New York. This shot was also the most fun for me. Working again with John Bair and Phosphene, we pre-visualized the entire scene first. I planned with the art and camera departments which elements would need to be built and shot. We built a real pool with a 100 dollar bill printed on the bottom, and we shot it on a stage. Then Phosphene combined that with a CG pool and rooftop that hooked into some aerial photography that I was able to shoot above the Tower. It was one of the shots where the planning and fate were all working for us. Everything came together beautifully.

Was there a shot or a sequence that prevented you from sleeping?
There were a number of shots and sequences that kept me from sleeping. The penthouse apartment scenes were the ones that caused me the most stress. We had a very difficult time locking down the balance between the FG and the BG to a place where Brett was happy with it. Ultimately the film looks beautiful and I’m very proud of the work, but it was not an easy process finding the right balance of brightness, contrast, focus and sky detail.

What do you keep from this experience?
I learned a great deal from TOWER HEIST, but mostly I take away huge respect for the artists in NY. They are incredibly dedicated and driven. They did whatever it took to get the job done.

How long have you worked on this film?
I worked on TOWER HEIST for about 15 months.

How many shots have you done?
There are 535 shots that ended up in the film.

What was the size of your team?
I had a producer, data wrangler and coordinator. Then two VFX editors. There were six of us in all.

What is your next project?
I don’t know yet. I’ve been enjoying some down time.

A big thanks for your time.

// WANT TO KNOW MORE?

Mark Russell: Official website of Mark Russell.

© Vincent Frei – The Art of VFX – 2011

IN TIME: Justin Johnson – VFX Supervisor – Luma Pictures

Justin Johnson has been working in visual effects for over 10 years. At Luma Pictures, he has worked on projects like SKY CAPTAIN AND THE WORLD OF TOMORROW, THE BOOK OF ELI, TRUE GRIT, or THOR. In the following interview, he talks about his work on IN TIME.

How did Luma Pictures get involved on this show?
Roger Deakins looked to Luma for this project, as we’ve worked with him several times in the past on many of the Coen Brothers’ films. We’ve got a great working relationship with him – it’s always a pleasure.

How was the collaboration with director Andrew Niccol?
Andrew was fantastic to work with, and we had a great rapport in achieving his vision: a very clean, neat, corporate feel. That feel contrasts nicely with the main characters, providing a juxtaposition of values. It helps the characters stand out and gives the viewer the uncomfortable experience of living in this world.

Can you tell us what you have done on this show?
Luma Pictures provided sixty-five shots for the film in total, including set extensions of several prominent building interiors and exteriors, removal and replacement of structural features and robust matte painting work, including several cityscapes.

Luma was also tapped to add digital demolition, animating the destruction of a checkpoint booth as a limo crashes through it, and creating larger chunks of debris, smoke and dust in Maya.

How did you create the various set extensions for the different checkpoints?
Initially the effect was to be an extension added on to physically constructed bases before the call was made to have Luma do a complete digital build of the gateway, which included the concrete structure plus animated signs and cement barricades. The work was created in Maya.

What was the real size and locations of those checkpoints?
There were three locations for these checkpoints in downtown Los Angeles. Two on the 6th Street bridge and one on Lower Grand St. These were temporary structures and couldn’t be built to look like the impenetrable barriers they needed to be.

Can you tell us more about the creation of the Wasteland?
The final wasteland sequence required a vast amount of digital augmentation. Los Angeles is literally littered with identifying characteristics – palm trees, billboards, etc. – all of which give it a unique feel. We had to remove these identifiers so that the film would have a neutral look in the city. We also extended the landscape to infinity, replacing an entire ocean in the background with the rubble of this future society.

What was the most challenging aspect on this show?
The final shot of the film – the Wasteland – took up most of our focus during the project. We digitally removed all of the items mentioned earlier: billboards, road signs, the roads themselves. We also had to extend the background over what originally was the ocean, so that took a good amount of doing.

How long have you worked on this film?
8 weeks in earnest (not including time on set).

What was the size of your team?
There were about 20 people at any one time.

What is your next project?
We are currently in production on UNDERWORLD: AWAKENING and THE AVENGERS.

A big thanks for your time.

// WANT TO KNOW MORE?

Luma Pictures: Official website of Luma Pictures.

© Vincent Frei – The Art of VFX – 2011

IMMORTALS: Simon Hughes – VFX Supervisor – Image Engine

Simon Hughes began his career in visual effects in 1997 at Cinesite. For over 10 years, he worked in various studios like Double Negative, Rainmaker UK or Clear and on movies like KINGDOM OF HEAVEN, UNITED 93 or SLUMDOG MILLIONAIRE. In 2009, he joined the teams of Image Engine to work on DISTRICT 9. He subsequently oversaw the effects of films like LAW ABIDING CITIZEN, THE LOSERS or THE FACTORY. In the following interview, he talks (with Gustavo Yamin, Jordan Benwick and Janeen Elliott) about his work on IMMORTALS.

What is your background?
My background was in fine art and audiovisual technology, but I began working in the industry in London at Cinesite in 1997. I started as a runner, then moved to editorial back in the days when we still cut film, on to projection and database management for VFX, and finally to scanning and recording, where I became the S&R supervisor.

I made a jump to the side after a few years. I had been thoroughly trained in Shake during my time in S&R, so I moved over to compositing, as I really got a taste for my creative urges again! After just over 6 years I left Cinesite to take a job at Clear Film, which soon became a part of Prime Focus. A small team and I were in charge of setting up the film department, and we had some great experiences, such as working directly with Danny Boyle and on high-end productions such as KINGDOM OF HEAVEN.

Again after a few years I moved on to Double Negative working on films like UNITED 93, HARRY POTTER and THE REAPING. After this I moved on to Rainmaker UK, where I was again involved in the early days of a startup vfx company and also transitioned into comp supervision.

As I am a Canadian citizen I have always had my eye on the industry in Canada, and after a couple of years at Rainmaker I took a job at Image Engine for two reasons: they had made the complete transition to Nuke, and they were due to start on DISTRICT 9, which was just an incredible-sounding project. Once I finished on DISTRICT 9, I received a VES award for compositing and moved into VFX supervision more or less straight after, working on LAW ABIDING CITIZEN, THE FACTORY, THE LOSERS, IMMORTALS and most recently SAFE HOUSE (2012), where I have been the supervisor for the show, working directly with Universal.

How did Image Engine get involved on this show?
Image Engine had worked on a number of shows with Raymond Gieringer. I had worked with him on LAW ABIDING CITIZEN with Visual Effects Executive Producer Shawn Walsh, and we had developed a good working relationship, which led Raymond to contact us. Tarsem was also a big fan of DISTRICT 9.

How long have you worked on this film?
We worked on this for around a year. Work began in 2010 when I went on set in Montréal.

How many shots have you done?
There were around 130 shots in total.

How was the collaboration with director Tarsem?
Working with Tarsem was a fantastic experience; his artistic sensibilities are what drive his films to become the grand spectacles that they are. So it was a real challenge to try to live up to those standards.

What was his approach to VFX?
Tarsem seemed to encourage creative freedom; we were expected to drive the imagery forward ourselves to a point where the film could be viewed holistically, and this is where he really got involved. This was great, as he understood that the process takes time – allowing us to develop our ideas and techniques first so that he could then direct them further.

How was the collaboration with Production VFX Supervisor Raymond Gieringer?
Working with Raymond has always been a good experience. He is incredibly calm and focused, and as an ex-facility supervisor he thoroughly understands the challenges that VFX facilities have to overcome to complete high-end VFX.

What have you done on this show?
Image Engine provided a full range of visual effects work on IMMORTALS: computer-generated characters, character transitions and heavily stylized digital blood and gore. But the main challenge for the company was definitely the digital environment work.

The main environment was the cliff, which stretches for roughly half a kilometer and houses three of the key sets: the village, the tree-bluff and the checkpoint, which are all carved into the rock.

Can you tell us more about the village and the cliff?
Essentially at the start we did a lot of concept art ourselves; working out how the cliff should look and how the village could conceivably extend from the original 3 stories.

The concepts only got us so far, so it essentially became more of a case of pushing it forward and trying to visualize it in broad strokes as we went along, so we could see it in context. The cliff was a design challenge, as it is such a vast and simplified structure. It was all about rock, and the details within the rock surface, which is a bigger challenge than it may sound.

The village extension was also a very creative challenge. We had to find a way to demonstrate depth and scale, and find a way for people to be able to travel between the levels. We also wanted to create a sense of life and organic growth that you see in medieval towns and cities throughout history, where the structures grow around each other over time.

The trick was scale, how to make these things seem vast.

How did you create the first reveal of Athena?
To start with we created a basic human form in Maya that was used to body track Athena, focused mainly on the head and neck. From this we projected a painted version of her head onto the geo, and also created a selection of ink blot style images that were used as a basic texture on the geo. From this we were able to supply comp with a CG head and a selection of alpha channels they could use to drive the effect.

The lower half of the body was taken from footage of the painted Athena and warped to match the practical in Nuke. The reveal was a combination of displaced roto shapes and the ink blot textures that gradually reveal the unpainted plate version. The goal was to create a fluid transition that looked like it was seeping into her body.

How did you design and create the magical arrows?
The arrows started as a CG asset that was essentially silver, with little chips and dents along the body, a sharp, well-defined head and feathers on the tail. Using 3Delight to render, we were able to give comp a solid arrow as a base and then a selection of AOVs that were used to drive the effect – namely Pref, position, spec, reflection and Z-depth.

The glisten was created by making the spec sharper and crisper and by adjusting contrast to bring out just the small, finer details – which is one of the reasons why we made lots of dents and details in the body of the arrow in the first place. This gave us a basic sparkle that we treated heavily through combinations of chromatic aberration, noise patterns, and convolving with glitter imagery.

The transition was driven in part by mapping noise patterns to the arrow using Pref, transitioning from the tail and tip down to the middle using the depth pass and manipulating its values over time. Essentially we created a method that could be applied to multiple shots.
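The tail-and-tip-to-middle wipe described above can be sketched as a per-pixel function of a normalized position along the arrow (as a depth- or Pref-style pass would provide) and time. This is an illustrative reconstruction only, not Image Engine's actual Nuke expression; the function name, easing curve and softness value are all assumptions.

```python
def transition_mask(arrow_depth: float, t: float) -> float:
    """Return a 0..1 mask value for one pixel.

    arrow_depth: position along the arrow, 0.0 = tail, 1.0 = tip
                 (e.g. derived from a normalized depth/Pref AOV).
    t:           animation time, 0.0 = untouched, 1.0 = fully revealed.
    """
    # Distance from the middle of the arrow (0 at the ends, 0.5 in the middle).
    dist_from_ends = min(arrow_depth, 1.0 - arrow_depth)
    # The reveal front sweeps from both ends (dist 0) toward the middle (dist 0.5).
    front = 0.5 * t
    softness = 0.05  # soft edge so the wipe isn't a hard line (assumed value)
    # Smoothstep-style falloff around the moving front.
    x = (front - dist_from_ends + softness) / (2.0 * softness)
    x = max(0.0, min(1.0, x))
    return x * x * (3.0 - 2.0 * x)
```

In a comp this value would be multiplied against the treated "magic" layer per pixel; because it depends only on the passes, the same expression transfers between shots, as the interview describes.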

How did you create the Hawk?
This started as a much simpler build as we were only originally expecting to see it far from camera, but over time it was brought closer and closer until it became a hero character. The build was in Maya, textures in Photoshop and some sculpting in ZBrush. The animation was in Maya.

Can you tell us more about your work on gore shots?
The gore work went from the simple addition of practical gore from an element shoot to creating CG limbs and arms and chopping them off. In addition there was a collection of weapons created, ranging from daggers to swords to spears, and in one instance a spear is snapped into three different sections and used to brutally maim soldiers as Theseus rampages through the tunnel.

The gore work was a lot of fun; really it was a case of more more more!!

Was there a shot or a sequence that prevented you from sleeping?
The environment was a challenge, so as a whole this is the part of the show that left its mark on me, and taught me a lot more about how to do this kind of work. I think it kept us all awake at night dreaming about rock surfaces!

What do you keep from this experience?
How to create something of such a huge scale, but at the same time keep it flexible enough to handle the creative process. We work in an industry that calls more and more for well defined procedural approaches, that don’t always lend themselves well to creativity, so it is a difficult balancing act, and IMMORTALS taught me a lot about this.

// SPECIFIC SHOTS IN FOCUS BY IMAGE ENGINE ARTISTS

// Gustavo Yamin, Senior Digital Artist

Can you tell us how you designed and built the cliff?
The task of creating this massive set extension was a multi-faceted challenge that consumed over 6 months of planning, setup, sculpting, rendering, matte painting and compositing.

The production sketch we received from the client provided a rough guideline to the overall « look », and it was clear right from the start that Tarsem wanted stone formations that looked realistic but definitely epic in shape and scale.

Production used a special setup that would allow the director to shoot these sets against green screens and immediately comp the image over a 3D model of the cliff, placing the real set in proper visual context on the face of the virtual cliff. In this way, Tarsem could choose angles that would frame the real set and the rest of the cliff as it should be seen from each of the three key locations. This 3D model was sent to Image Engine as the basis for the cliff structure we would build, as envisioned by Tarsem, along with photos of the rock structures built on-set.

I proceeded to build a more detailed « blocked » version of the same volume in Maya and tagged parts of it with real images of rock features that I thought would fit specific parts of the cliff – broader areas and also the ones closer to the actual sets. The initial blocking of the cliff in Maya was done by simply scaling and piling dozens of polygon cubes together. These served as a « volume guide » for the formations that would bridge each set area and matched the 3D model of the cliff used by Tarsem in his shots. The model ended up spanning roughly half a kilometer within Maya, and it was obvious we would need to segment it to be able to manage the high-resolution version. The cliff face alone was broken up into 20 parts.

Once we decided how the whole cliff should be segmented, I built cages (simple low-resolution geometry) in Maya that surrounded the cubes that formed each cliff chunk and sent both cage parts and blocks to Zbrush. I shrink-wrapped the cages onto the grouped cubes creating a single mesh that matched the intended volume and could be refined further. For each block, I pushed the subdivisions up to anywhere between 2 and 3 million polygons for initial sculpting. For parts that required the highest amount of detail, I would sculpt further using the HD Geometry feature in Zbrush. I would then export 4K 32-bit displacements of each block and reassemble the whole cliff in Maya to be rendered in 3Delight.

One interesting challenge we had not anticipated came with the realization that the model used on set as in-camera reference for Tarsem had been deformed and re-arranged almost on a per-shot basis to fit his framing and composition requests. So our cliff model did not initially match any of the reference plates we received from production – even though it had been built from that same 3D model they used on set! In the end, we had to rig all 20 cliff parts, individually and as a whole, to be able to re-shape the entire thing to match the per-shot distortions.

The end result was a clever mix of 3D and matte painting – to tackle the intractable close-ups that were just too extreme for the 3D build to handle (without further cliff segmentation and sculpting), and to handle tweaks and last-minute structural changes requested by the client fast enough to meet the deadlines.

// Jordan Benwick: Lead Compositor

Can you tell us more about the big impressive pull-out shot that starts at the village and finishes in Athena’s eye?
We knew the shot was going to be massive in scope right from the start of the project. We didn’t know just how much of the vista we were going to see, how much would be covered by cloud, etc. We did a lot of concepts and back and forth with the clients. In the end not very much stayed the same as the look of the cliff and terrain was still being worked out.

While that was going on we started on the transitions & plate work. The 1st plate of the village set was shot on a zoom lens, pulling out from long to ~18mm. We had to transition from the zoom out to a dolly out which would fly us up into the heavens, with all the parallax that implies. So in Nuke I projected the plate on the village/cliff geo, but had to get roto of all the soldiers to stand them up on little cards all around the set, and a clean plate underneath them.

We did a lot of rounds of cliff shape and look, so there was a full 3D render of the cliff, but in the end the closer parts of the cliff were largely augmented by matte paintings projected onto the geo. Because we started so close and ended seeing the whole near cliff, there were four matte paintings at 4-6k each inset within the next. A matte painting was also needed for the terrain on top of the cliff.

The base for the ocean was a wave pattern displaced 3D plane, which we re-lit in Nuke based on a sky dome. To that was added a lot of 2d elements, many layers of crashing waves, whitecaps, rocky formations, and more paintings (did you see a trend here?), to really amp up the interest and break up the clean cg feeling. Janeen Elliott did a lot in comp to bring everything up a few levels.

As we were developing the shot, it was clear we would see the same clouds from below as above, which would clearly never work just using photographed clouds on cards. It would also be difficult to find enough aerial views of clouds with the correct lighting and type of cloud. I came up with a method to create clouds using noise patterns and faked lighting, so the shape and lighting could be controlled, and then put them on a stack of cards for each cloud to get a cheap volumetric effect, and even shadows onto the terrain. It was a real hack, and kinda worked! They were limited in that they could only be used for cumulus clouds and not-so-close up.
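The noise-and-faked-lighting trick can be illustrated in miniature: layer a few octaves of lattice noise into a density field, threshold it into a cloud alpha, and fake the shading by comparing the density with a sample offset toward the light. This is a from-scratch sketch of the general idea, not the production Nuke graph; all function names and constants are assumptions.

```python
import math

def _hash_noise(x: int, y: int) -> float:
    """Deterministic pseudo-random value in [0, 1] for a lattice point."""
    n = (x * 374761393 + y * 668265263) & 0xFFFFFFFF
    n = (n ^ (n >> 13)) * 1274126177 & 0xFFFFFFFF
    return (n ^ (n >> 16)) / 0xFFFFFFFF

def value_noise(x: float, y: float) -> float:
    """Bilinearly interpolated lattice noise in [0, 1]."""
    xi, yi = math.floor(x), math.floor(y)
    fx, fy = x - xi, y - yi
    # Smoothstep the interpolants for soft, cloud-like blobs.
    fx = fx * fx * (3 - 2 * fx)
    fy = fy * fy * (3 - 2 * fy)
    top = _hash_noise(xi, yi) * (1 - fx) + _hash_noise(xi + 1, yi) * fx
    bot = _hash_noise(xi, yi + 1) * (1 - fx) + _hash_noise(xi + 1, yi + 1) * fx
    return top * (1 - fy) + bot * fy

def cloud_density(x: float, y: float, octaves: int = 4) -> float:
    """Fractal sum of noise octaves, remapped so open sky reads as 0."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        norm += amp
        amp *= 0.5
        freq *= 2.0
    d = total / norm
    return max(0.0, (d - 0.45) / 0.55)  # assumed threshold: below 0.45 is clear sky

def fake_lit_cloud(x: float, y: float, light_dir=(0.7, -0.7)):
    """Return (alpha, brightness); brightness comes from the density gradient
    toward the light, standing in for real volumetric shading."""
    a = cloud_density(x, y)
    toward_light = cloud_density(x + 0.1 * light_dir[0], y + 0.1 * light_dir[1])
    brightness = max(0.0, min(1.0, 0.5 + (a - toward_light) * 2.0))
    return a, brightness
```

Stacking several such slices on cards at slightly different depths, as described above, gives the cheap volumetric parallax without ever running a true volume render.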

By far the best clouds were cooked up by Greg Massie in the fx dept, using Houdini. Those are the giant hero thunderhead clouds. The final shot included some of each kind of cloud.

The last piece is the eye transition, which was done in Nuke using the plate and, yep, another matte painting. The iris was broken out into several layers placed at several depths for a bit of parallax, to get a sense of overlapping fibers of muscle.

How did you create the set extensions for the tunnel and the monastery?
The monastery shots came to us very late in the production, so we decided we had to be as efficient as possible. We came up with a hybrid 3D/matte painting/comp technique, which worked out very well, as we were able to turn around changes quickly with only 2 artists.

Our 3D artist, Ben Stern, textured and lit the scene as usual, but only rendered 2-5 key frames for each shot that showed the extremes of the camera moves.

In Nuke, I then re-projected the key renders, using the cameras they were rendered through, onto the geo of the monastery. The projections covered all of the monastery that could be seen through the shot camera, so the full frame range could be rendered out of Nuke. It also meant that we could paint and re-texture the monastery in comp. The candelabra flames were 2D elements I shot in the back room of the studio, with some comp tricks to create the light interaction with the candelabras themselves.
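The re-projection trick above rests on simple camera math: given the camera a key frame was rendered through, every visible point on the monastery geometry maps back to a pixel in that render, which comp can then sample as a texture for the whole frame range. Here is a minimal pinhole-camera sketch with assumed parameters (camera at the origin looking down -Z); Nuke's Project3D setup additionally handles camera transforms, filmback offsets and so on.

```python
def project_to_uv(point, focal_mm=50.0, h_aperture_mm=24.576, aspect=16 / 9):
    """Project a camera-space 3D point to a normalized (u, v) in [0, 1].

    point: (x, y, z) in camera space; z < 0 is in front of the camera.
    Returns None if the point is behind the camera or outside the frame.
    The focal/aperture defaults are illustrative, not production values.
    """
    x, y, z = point
    if z >= 0:
        return None  # behind the camera plane
    # Perspective divide onto the image plane at the focal distance.
    px = focal_mm * x / -z
    py = focal_mm * y / -z
    v_aperture_mm = h_aperture_mm / aspect
    u = px / h_aperture_mm + 0.5
    v = py / v_aperture_mm + 0.5
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None  # outside the rendered key frame
    return u, v
```

As long as the shot camera never sees geometry that was occluded in every key render, sampling the key frames through these (u, v) lookups reproduces the fully textured set on every frame, which is why a handful of rendered stills could cover the whole move.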

// Janeen Elliott: Senior Compositor

Can you tell us in detail the creation of the great shot in which we follow 4 arrows in the air?
I began working on the fly-by shot (where we follow the four flying arrows to the village along the cliff) after it had been initially set up by another artist.

One of the first things I had to do was to tweak the original foreground plate of the hero shooting the final arrow from the bow. The camera move was baked into the plate, and we needed the actor (Henry Cavill) to release the arrow sooner than he did in the plate, so the hand needed to be adjusted. We also needed to lower his arm, since the trajectory of the arrow needed to be lower than he was aiming in the plate. Once that was accomplished, the majority of the work I focused on was the cliff face.

We were working with a mix of some CG areas along the cliff face, and some matte painting patches and photographic patches which were projected on to 3D geometry in Nuke. This was a bit of a tricky process in that the geometry onto which we projected our textures needed to simulate the cliff face geometry as best as possible in order to avoid pinching and stretching as the camera went past. We found that we couldn’t use the full CG geo that 3D had used as it was simply too heavy in Nuke.

Instead, it was more efficient to use cards in areas that we were updating. Also, we had to use higher resolution matte paintings and photographs the closer the camera got to the cliff face, so there were quite a few cards used along the fly-by. Projections in Nuke also helped us where we had notes to adjust the look of the village. Certain buildings needed to be changed for color, or to add fire scorch marks, and we were able to use Nuke’s projections with rotoshapes to easily track the rotoshapes to the desired building to make the change quickly.

Also of course, this helped in blending the CG buildings with the live action ones of the plate where the actors were shot. Projections were used again in the water area as well. We used them to project clips of live action waves crashing along the cliff face and the rocky outcroppings.

I also applied the established magic arrow look to the four flying arrows as well as created the look for the impact that those arrows would have upon their targets. Of course there was quite a bit of overall adjustment to color to all aspects of the shot, and the final god rays were also applied as a final touch.

A big thanks for your time.

// WANT TO KNOW MORE?

Image Engine: Official website of Image Engine.

© Vincent Frei – The Art of VFX – 2011

IMMORTALS: Jay Randall – VFX Supervisor & Founder – BarXseven

Jay Randall started his career in 1997 working on GODZILLA. He then worked on films like THE FIFTH ELEMENT, PEARL HARBOR or TITANIC. In 2005, he founded the studio BarXseven in Montreal and has worked as a VFX supervisor on films like TRANSPORTER 2 or STRANGER THAN FICTION.

What is your background?
I have a degree in psychology from the University of Ottawa, then I went to Vancouver Film School. I launched BarXseven in 2005 after working as a character animator and FX TD on many film and TV projects such as THE FIFTH ELEMENT, PEARL HARBOR, GODZILLA, and TITANIC.

How did BarXseven get involved on this show?
Raymond Gieringer and I have worked together in the past. He was brought onto IMMORTALS by Relativity as VFX supervisor and then hired me to supervise the VFX for 2nd unit. BarXseven then became one of the main VFX vendors.

How was the collaboration with director Tarsem?
Tarsem has great ideas and is a very energetic person. It was a great experience to work with him and watch him create. When it comes to VFX, he is a veteran director who understands our needs, and he was very approachable and open to input.

How was the collaboration with Production VFX Supervisor Raymond Gieringer?
Raymond and I have worked on a few movies together and always collaborate very well. He really knows his stuff and has great creative vision. Raymond has put his trust in BarXseven on many occasions and we always make sure to repay him by giving him excellent work.

What did you do on this show?
We did over 100 shots, but the ones that stand out occur when Ares comes down from Olympus and smashes heads. The heads explode in slow motion as Ares clobbers the bad guys with his war hammer. Another fun sequence of shots that we did occurs when Zeus lands on earth and punishes Ares by whipping him with a fire whip, also in slow motion.

How was this big impressive continuous shot filmed?
It looks continuous on the screen but it was not filmed continuously. It took 3 days to shoot the sequence in many small, carefully planned steps. We shot Ares first, hitting targets on green screen from different camera angles. We then switched to the Phantom camera and shot the Heraklions at 500fps from the same angles.

Can you tell us how you approached this huge shot?
We spent a lot of time previzing the shots and figuring out the best approach. We broke the sequence into small details and shot each element separately. We had a very precise plan to create a smooth flowing sequence that looks continuous.

Did you create previs or an animatic for the choreography of the fight?
I worked with Jean Frenette the fight coordinator, along with stuntman Alain Moussi to choreograph the sequence. We motion captured Alain’s movements and used that as a basis for the previz. We built the entire sequence in Maya before we shot it.

How did you manage the super slow-motion aspect of the shot?
We shot Ares with a Genesis camera at 48fps and then shot the Heraklions at 500fps on a Phantom camera. The special effects guys built us some dummy heads filled with hamburger and fake blood that we exploded while filming with the Phantom for reference. We simulated the heads exploding based on that footage.

Did you create digi-doubles especially the soldier who crosses the entire room in the air?
No digi-doubles were used. The soldier that flew through the air was stuntman Max Savarias rigged up with wires. He is a big guy and he had to fly all the way across the room and hit a specific target, so it was a challenging shot.

How did you create the heads exploding?
We spent a lot of time studying slow motion liquid explosions. We shot some reference of heads exploding on set and also watched every clip we could find on YouTube.
We tracked the stuntmen in each shot and put CG helmets on them. We then used those helmets as CG fluid containers for the simulations. We used Maya to create the exploding helmets and also for brain and bone chunks. We used RealFlow for the viscous liquid and then topped it off with spray that was created in Houdini. It was a tricky thing to get all the simulations to work together. Steve Elphick was the main guy for the simulations. He did the RealFlow and Maya work. Mike Lyndon did the Houdini particles, Charles LeGuen did the lighting and rendering and Rob Rossello comped the shots in Nuke. There is one guy that gets split down the middle. Matt Evans did that guy.

What was the biggest challenge on this project and how did you achieve it?
Once we had the simulations working and we were happy with our look we had to get them to render. The scenes and particle caches were so huge that it was taking forever to render all the layers, when it rendered at all. Charles LeGuen worked tirelessly to optimize the scenes and get them to render without losing any quality.

Was there a shot or a sequence that prevented you from sleeping?
All of it was quite challenging and there was always something that needed sorting, but the shots that were the most difficult were the fire whip shots. Mitch Deoudes did the fire using Maya fluids. There is one shot that was about 700 frames long and shot at 500fps. Getting the fire to look great throughout the entire shot was very difficult. Mitch worked through the entire Christmas break, even on Christmas Day, on that one. I think he wanted to go in on New Year’s Day as well, but his activities the night before made that impossible.

How many shots did you do?
BarXseven did a little over 100 shots.

What is your next project?
We are currently working on Tarsem’s Snow White movie, MIRROR, MIRROR.

A big thanks for your time.

// WANT TO KNOW MORE?

BarXseven: Dedicated Page about IMMORTALS on BarXseven website.

© Vincent Frei – The Art of VFX – 2011

TOWER HEIST: Greg Liegey – VFX Supervisor – Method Studios

After telling us about his work on ABDUCTION, Greg Liegey is back on The Art of VFX to explain his work on TOWER HEIST.

How was your collaboration with director Brett Ratner and Production VFX Supervisor Mark Russell?
Mark has a deep understanding of the process, which helped us zero in on hero looks very quickly after the previews were delivered. He worked hard to keep all the vendors in sync with each other and get us consistent in terms of the big picture. That allowed us to get into the fine-tuning stage earlier in the schedule.

Brett is a force of nature – he works on an instinctual level. He doesn’t over-analyze, he reacts. When the rest of us diverged into talk about technical aspects of the project, Brett would make sure to bring it all back to the visuals. He helped us concentrate on the story-telling.

What have you done on this show?
Method worked on six sequences and two stand-alone shots within Arthur Shaw’s apartment. We also completed 10 shots of digital alchemy (gold enhancement) using 2D color treatments or complete 3D replacements for gold objects in the scenes.

What was the size of the greenscreen for the penthouse?
The greenscreens wrapped around three sides of the apartment set and ran about 70 feet from end to end. They were about 26 feet tall and set back 50 feet from the glass.

Did you need to do some extensive roto work?
There were some angles where the actors ended up in front of multiple layers of apartment windows. The windows in those shots acted like ND filters which made the greenscreen very difficult to key cleanly. So for those, our roto team of David Marte and Alejandro Monzon did lots of articulate roto including reconstructing hair detail to a very convincing degree.

Many shots also required roto to limit areas of the reflective surfaces such as the floor where the greenscreen would have keyed cleanly even though the surface was opaque. We roto’d those areas & used the key to comp in an appropriate reflection.

The gold shots depended on solid roto for their success. The color correction shots had the objects roto’d from the scene and the 3D gold shots required articulate roto for the people holding the objects.

Which elements and assets did you receive from production?
We received all of our NYC background tiles – HDRI stills and moving video footage – from Production. Other CG assets like the Ferrari model and the CG NYC were shared amongst the vendors.

Can you tell us more about the creation of this huge and well known background?
In the earliest days of the project, Peter Marin, the Method VFX supervisor who started the job in New York, and two senior composite artists, Aleksander Djordjevic and David Piombino, teamed up to construct a NYC skyline “bubble” in Nuke. The bubble was a sphere mapped with a stitched panorama of NYC skyline stills. Our matte painter, Amy Paskow, cleaned up the stitches and enhanced sky detail for particular views. Also, the BG plates were shot with trees in full summer greenery, but since the movie takes place around Thanksgiving, we replaced the trees with more autumnal versions. In order to align the perspective from inside the virtual apartment, we used the Google maps satellite view of Manhattan to place the bubble in relation to the apartment geometry.
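Placing the bubble from a satellite view amounts to matching compass bearings from the apartment to known landmarks. A toy sketch of that idea, with invented flat-map coordinates (not the actual Manhattan data):

```python
import math

def bearing_deg(from_xy, to_xy):
    """Compass-style bearing in degrees (0 = +Y, i.e. 'north') from one
    map position to another, on a flat local map."""
    dx, dy = to_xy[0] - from_xy[0], to_xy[1] - from_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

# Hypothetical coordinates in metres: the apartment and a landmark due east.
b = bearing_deg((0.0, 0.0), (100.0, 0.0))
```

Rotating the bubble so that each landmark's bearing in the panorama matches its bearing on the map is what locks the skyline into the correct orientation.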

After our preview delivery in May, we wanted to go for a more precise skyline placement. CG artist Justin Maynard lightened up a CG model of NYC so that it could be loaded into Nuke as a placement guide for the compositors. The CG NYC allowed for an exact placement of the BG bubble to match the view from the real-life penthouse apartment at the southwest corner of Central Park.

The Time Warner towers are huge reflective buildings which dominate the southwest views from Shaw’s apartment. We felt they needed special treatment for more realistic reflections and parallax. The panoramic still images of those buildings had frozen reflections and, as stills, would have had no parallax on moving camera shots. Since all the apartment shots were shot with a Steadicam, we knew that it would pay off to add live reflections and some level of parallax. We modeled the Time Warner towers and projected cleaned-up (reflection-less) textures onto that geometry. Then we positioned cards at the surfaces of the Time Warner faces to reflect the bubble environment. The reflected bubble on standalone geometry created a great sense of movement and life for what would otherwise have been much more dull and static.
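The payoff of putting near buildings on standalone geometry rather than leaving them in the distant bubble comes down to the basic parallax relationship. As a rough illustration (the distances and focal length below are invented, not production numbers):

```python
def screen_shift_mm(camera_move_m, subject_dist_m, focal_mm):
    """Lateral image shift on the film back (in mm) caused by a sideways
    camera move, for a subject at a given distance.
    Small-angle approximation: shift = focal * move / distance."""
    return focal_mm * camera_move_m / subject_dist_m

# A 0.5 m Steadicam drift on a 35 mm lens:
near = screen_shift_mm(0.5, 300.0, 35.0)    # foreground tower a few blocks away
far = screen_shift_mm(0.5, 3000.0, 35.0)    # skyline bubble, effectively kilometres away
```

The near geometry shifts an order of magnitude more than the bubble, which is exactly the relative movement a static panorama cannot provide.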

As the show progressed, we placed several additional front row buildings on cards to further enhance feelings of parallax.

Due to the number of window shots, did you create some automatic setups and scripts?
We created and maintained the BG bubble for use in all apartment shots. Eventually, we devised individual panoramas for each of the different scenes – varying the skies & lighting of the city for variation to match the different times of day for each sequence.

Did you share assets with other vendors?
We did share with Gravity since some of our sequences were so similar. They gave us their Ferrari model so we could use it as the basis for exterior reflections which we used to replace set reflections on the live-action Ferrari in our shots.

Can you tell us more about the gold enhancements shots?
Talking about the gold shots will be a spoiler for anyone who hasn’t seen the movie. At the finale of the film, the defrauded workers each get a piece of Shaw’s solid gold Ferrari. The props used for those shots had various gold treatments – some of which photographed better than others. Once the filmmakers saw the footage in an early DI session, they realized that the gold props weren’t as convincingly gold as they could be.

Starting with the simplest-solution-is-best mindset, we attempted 2D color corrections contained by articulate roto. For certain objects, that method worked very well – we gave the dull props a much more lustrous look.
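In principle, a roto-contained color treatment is just a matte-weighted blend between the original plate and a graded copy. A minimal numpy sketch of the idea (the gold gain values are invented for illustration, not the grades used on the show):

```python
import numpy as np

def gold_grade(plate, matte, gain=(1.25, 1.05, 0.55)):
    """Blend a warm, gold-biased grade into the plate only where the
    roto matte is on. plate: HxWx3 float in [0,1]; matte: HxW in [0,1]."""
    graded = np.clip(plate * np.asarray(gain), 0.0, 1.0)
    m = matte[..., None]                     # broadcast matte over RGB
    return plate * (1.0 - m) + graded * m

plate = np.full((4, 4, 3), 0.5)              # flat grey test plate
matte = np.zeros((4, 4))
matte[1:3, 1:3] = 1.0                        # roto covers the centre only
out = gold_grade(plate, matte)
```

Articulate roto makes the matte; everything outside it is guaranteed untouched, which is why this route is so safe when it works.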

Other objects proved to be tougher cases and didn’t respond as well to the 2D route. On those shots, involving the steering wheel and the grille, we resorted to a CG replacement of the props. The actual props were delivered to us so we could model them photogrammetrically. Justin Maynard modeled the pieces and matchmoved them to the actors’ action in the shots. Jaemin Lee textured and lit the pieces to match the plates. Flame artist Chris Hunt composited the objects into place – painstakingly adjusting the lighting balances and textural feel to give the objects the same reality as the props in the footage – only more golden…

What was the biggest challenge on this project and how did you achieve it?
The biggest challenge on the show was keeping the fine edge detail of the FG plates while compositing them onto bright-to-blown-out backgrounds. We also had to balance the window reflections to always give a sense that the glass surface was present.

Andy Jones wrote a tool for Nuke which helped us get smooth, clean keys for the greenscreens and still maintain a high level of fidelity to the original plates for extra-fine edge detail. Retaining that detail made the difference in selling the shots.
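We don't know the internals of that tool, but the core of any greenscreen key is a matte derived from how much green dominates each pixel. A toy sketch of that general principle, not Andy Jones's actual Nuke tool:

```python
import numpy as np

def green_key(rgb):
    """Toy greenscreen matte: foreground alpha from how much the green
    channel exceeds the other two. rgb: HxWx3 float in [0,1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    spill = np.clip(g - np.maximum(r, b), 0.0, 1.0)
    return 1.0 - spill      # 1 = keep foreground, 0 = pure green screen

bg = np.zeros((2, 2, 3))
bg[..., 1] = 1.0            # a patch of pure green screen
alpha = green_key(bg)
```

Production keyers layer edge processing, spill suppression and despill on top of this core; preserving sub-pixel hair detail lives in those extra stages.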

Was there a shot or a sequence that prevented you from sleep?
Aside from the number of shots & the time we had available (i.e., the standard problems!), David Piombino had a tricky shot featuring the tail-light of the Ferrari. The anisotropic reflections in the glass had to have the exact right refractions of the skyline bubble in order to be convincing…it was touch and go…but we all had faith that he could pull it off.

What do you keep from this experience?
This job was my first job at Method Studios after the merger with my former company CIS Hollywood. I was away from my home base in LA and working with a group at Method NY whom I had never met before. Luckily, in addition to being warm and welcoming, they are an excellent team of resourceful artists. We got the job done and had some fun along the way.

How long have you worked on this film?
Method Studios started work on the film in January 2011 and completed work in early October.

How many shots have you done?
Method Studios worked on 160+ shots, of which 138 are in the final edit.

What was the size of your team?
We had five artists working on CG and matchmove; five composite artists for the main show (and four-to-five more during crunch times); two roto artists; two paint artists; one matte painter and a production team of three.

What is your next project?
I moved directly onto Garry Marshall’s NEW YEAR’S EVE which is delivering now for an early December release.

A big thanks for your time.

// WANT TO KNOW MORE?

Method Studios: Official website of Method Studios.

© Vincent Frei – The Art of VFX – 2011

THE THING: Jesper Kjolsrud – VFX Supervisor – Image Engine

Jesper Kjolsrud began his career on the film THE BORROWERS, then spent 10 years in London with MPC and Double Negative on films such as PITCH BLACK, STALINGRAD, THE CHRONICLES OF RIDDICK and 10,000 BC. In 2009, Jesper joined the teams of Image Engine in Vancouver, where he worked on DISTRICT 9, GREEN ZONE and THE LOSERS.

What is your background?

I got into computer graphics through a university course in the north of Sweden. It was unique in using industry-standard hardware and software, something that was very hard to get experience with at the time unless you already worked in a facility. That led to an animation job in Gothenburg and, after a few months, a position at MPC in London through one of the guest lecturers from the course, Paul Franklin. One of the first jobs I worked on was THE BORROWERS, which was supervised by Peter Chiang, and when he approached some of us a year and a half later to set up a new facility for his latest job, PITCH BLACK, I moved on to what became Double Negative. I stayed there for over 10 years until my family and I felt like a break from London, which ultimately led me to Image Engine in Vancouver.

How did Image Engine get involved on this show?

We got a call from the director, Matthijs (Van Heijningen), and the VFX producer Petra (Holtorf) towards the end of 2009. They had seen and liked our work on DISTRICT 9 and thought we’d be a good fit for the job.

What was the Director’s approach to visual effects?

Matthijs is a very experienced commercials director who has done big projects with complicated VFX, so knowing that, we tried to fit the job around him. In commercials you tend to work with artists directly, so we tried to do the same without bothering Matthijs with the processes and pipelines of a project of this scale.

What was the real size of the exterior set of the base?

The base itself was built as a 1:1 scale set based on the production design, with only certain areas like the back of the building missing as it was never featured. It was built in a quarry in Toronto against a slope that would fit into the geometry of the mountain location in BC.

What was your feeling to give your contribution to the great and scary creatures that Rob Bottin and John Carpenter have created?

One of my favorite parts in Carpenter’s version is when the severed head grows legs. I for one wanted to see more of that. And the gruesome double-headed monster the Americans find. The way it was lit and filmed worked great but it was dead. I would have loved to have seen it move. With the techniques of the early 80’s that was extremely difficult to pull off but that’s something we can do quite well today.

Was there a shot or a sequence that prevented you from sleep?

If there’s one thing I’ve learned so far, you can’t let work interfere with sleep. Sleep is far too nice for that.

What do you keep from this experience?

Although the project was a huge challenge I’m very pleased how well it turned out. There are always curve balls being thrown at you. All shows have their challenges but now I’m more confident than ever that we can handle it.

How long have you worked on this film?

I started prepping just before Christmas 2009 and we finished the show at the end of May this year so about a year and a half.

How many shots have you done?

We ended up doing around 550 shots.

What was the size of your team?

I think we peaked at around 100 people.

What is your next project?

I’m currently working on another Universal project called R.I.P.D. It’s another creature show so in a sense we’re carrying on with what we were doing on ‘The Thing’.

// Neil Eskuri – Digital Effects Supervisor

How was the collaboration with director Matthijs van Heijningen Jr.?

Collaboration with the director was very good. Matthijs had a lot of great ideas and disgusting images that he would send us for the look of the different creatures: their maw, or mouth, what he thought the feet and toenails might be, the gruesome inside tentacles made of swarming snakes and worms. He sent close-up shots of insect feet and bird feet along with nature footage of cuttlefish and squid hunting their prey. He often said he wanted the creatures to be ‘horrifically beautiful’.

How did you recreate the helicopter?

There was a ‘shell’ of a helicopter on set, so we had good reference images from those shots. The only times we used the actual set copter were when it was on the ground and for the interior shots. Whenever it was flying, it was our model.

We photographed the actual set piece and used those images to build and texture the CG model. The lighting was then generated from HDRI images taken on set.

The look of the rotors went through several variations. Because of the rotational speed of the rotors and the shutter speed of the camera, the look can be different. Finding that right mix with motion blur took a lot of trial and error.
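That trial and error revolves around one relationship: the arc a blade sweeps while the shutter is open. A back-of-the-envelope sketch (the rotor speed and shutter angle here are assumed, typical values, not figures from the show):

```python
def blur_arc_degrees(rotor_rpm, fps, shutter_deg=180.0):
    """Degrees of rotation a rotor blade sweeps while the shutter is
    open. A 180-degree shutter is open for half the frame interval."""
    open_time_s = (shutter_deg / 360.0) / fps
    return rotor_rpm * 360.0 / 60.0 * open_time_s

# A full-size helicopter main rotor around 400 rpm, filmed at 24 fps:
arc = blur_arc_degrees(400, 24)
```

Matching the CG motion blur to that swept arc, rather than eyeballing streak length, is what keeps the rotors reading at the right speed.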

Can you tell us more about the impressive transformation shots and especially the one with the woman?

Juliette’s transformation was always a question mark because, although we knew what she was going to look like once she was fully transformed, how she got there was unknown. We had our concept artist, Juliana Kolakis, go through several different ideas and stages of how her chest would change. Matthijs constantly asked what we would see, how much of her human-ness remained? How much alien is pushing through the skin?

A ‘beat sheet’ was created based on the cut, which would explain what would be seen in each shot. This changed as the cut changed. Matthijs then suggested that the alien should already be pushing through the chest before her sweater falls off.

Since tentacles are a big part of the alien look, this gave us the idea of a ball of snakes trying to push through the skin, which is what you see in the first shot. Then, as the teeth of the alien mouth rip the skin, we see the inside tentacles push through while Juliette’s head is forced towards her back. Her proportions constantly made animation difficult: now that we had this huge mouth and claw on her thin legs, how was she going to move and look powerful? Like most things, a lot of trial and error and many versions were needed to bring her to life.

Can you tell us more about the huge ice cavern and the spaceship?

The Spaceship was to be 1000 ft. in diameter and over 60 ft. high. A good portion was buried and covered in ice and snow so you wouldn’t see it all, but still have an idea of how huge it was. After we got the original model, certain design elements had to change once shots were being laid out to convey the vast size of the ship and the cave and still keep the framing Matthijs wanted for the shots.

Since the actors were shot on small mock-ups of the ship, when the designs began to change we quickly knew that very little, if any, of the set pieces could be utilized in the shots. We only kept the very edge of the ship and the hatch from the original photography.

Like most elements, the Ice Cave went through a lot of different versions. There were several meetings to discuss how long the cave had been there, the evolution of the cave over the centuries, if ice columns had been created from the ceiling to the ship, and a schematic was created to show the types of ice and snow in different areas of the cave. Again, armed with hundreds of images of different types of ice caverns, several concept art pieces were created to give the sense of a vast expansive ice cave and sub-caves housing the alien spaceship. It was probably the most difficult look to achieve.

Was there a shot or a sequence that prevented you from sleep?

They all kept me awake at night. Often, I’d wake up at 2 in the morning with some part of the project on my mind. Each had its own trials and tribulations, but the stretch from the louvre sequence through to blowing up the alien kept me awake the most.

Those sequences came at the end of production with very few weeks left. We still didn’t have the final designs for the ice cave or the spaceship and the Sanders alien was going through a complete overhaul. We knew it was going to be a tough finish, but with the talent of the crew and the flexibility of the Image Engine pipeline, we were able to deliver.

// Fred Chapman – Lead Character Technical Director

Can you tell us more about the design of the creatures?

The initial creature designs were provided to us by production. These designs go through several stages of approval before they reach us, but it is still possible for us to suggest tweaks.

Most creature designers work mainly on aesthetics; they want to design something that looks cool and original. However, what works for a static design won’t always work when it starts moving. We need to imagine what’s going on under the skin: where are the muscles, bones and tendons, and how are the joints structured to give the range of motion required while conveying sufficient strength. To convey a sense of realism, a creature needs to appear to interact with real-world forces, so each body part must look able to withstand the stresses it would be under were it really moving in that environment. For example, we requested bulkier muscles in the front limbs and stronger-looking shoulder joints for the Edvard-Adam Thing so it would be more able to support the mass of the torso, trying to keep the essence of the design while adding just enough reality to make it work in the scene.

Once we’ve modeled, rigged and animated a creature, the clients get to see it move for the first time; that’s when the next round of design tweak requests starts. These requested changes can sometimes go right through to the week of final delivery. At least one of the creatures was totally unrecognizable by the end of the show compared to what they had in mind on the day they shot the sequence.

What was your feeling to give your contribution to the great and scary creatures that Rob Bottin and John Carpenter have created?

The creature effects in the 1982 film were amazing for their time; I’ve always been a huge fan. It’s such a great honor and challenge to be involved in creating the modern equivalent, and I’m really proud of what we achieved. Audiences are much less forgiving now than they were in the 80s, so the approach used then would simply not have worked this time around. That said, we were very conscious of remaining faithful to the style and character of the creatures in John Carpenter’s film, so I really hope that shows in our work.

Clients no longer have to accept that whatever they get in camera on the day of the shoot is the final version they’re forced to use in the film. They now have the option to come to us afterwards and describe what they really want or how they’d like to take it further. As long as the time and resources are available, we can keep working on it until we achieve a result they’re happy with. That’s why in the final release of the film there are almost no practical creatures visible. I take it as a back-handed compliment when I read reviews saying the CG creatures were not as good as the practical ones, because I know the ones they thought were practical were ours too.

Can you tell us more about their creation and the challenges you had to overcome with them?

There were a number of huge challenges we faced on this show. The visceral, organic nature of the creatures is always a difficult look to re-create. From a rigging point of view we work from the inside out: even though you never see it, the viewer has to get the sense that there is a complex internal anatomy to the creature. We don’t have the time or resources to try to recreate all of that underlying anatomy, so much of what we do is about understanding the complexity and trying to mimic it as efficiently and simply as we can.

The rigging for those creatures must have been difficult. Can you tell us more about it?

For me the trick to rigging for VFX is keeping the node graph as clean as possible. However good your initial planning, things will always change, so you need to keep the rigs adaptable right through the show. There were three main ways we achieved this: making use of custom nodes to keep the number of nodes and connections low, using a modular rigging system for consistency across each asset, and not requiring too much from any single rig.

Most things we need to do in rigging can be achieved using existing nodes and a bit of ingenuity in how to use them. Creating complex behavior often requires layering one bit of rigging on top of another to reach the final solution. This is both computationally inefficient and messy when you need to debug something that isn’t working right or make requested changes. In these cases I like to think about the data flow: what are the inputs an animator needs to have control over, and what are the final outcomes that the rig needs to achieve. Then I work out the most efficient calculation to get from A to B and we create a single node to process this calculation.
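As a toy illustration of collapsing layered rigging into one calculation: a two-bone IK solve can be written as a single inputs-to-outputs function using the law of cosines, rather than a chain of utility nodes. This is a generic textbook sketch, not Image Engine's custom node:

```python
import math

def two_bone_ik(target_dist, len_a, len_b):
    """Single-node style IK: from the distance to the target and the two
    bone lengths, return (shoulder_angle, elbow_angle) in radians.
    The distance is clamped to the reachable range, so the chain
    straightens at full extension instead of breaking."""
    d = max(abs(len_a - len_b), min(target_dist, len_a + len_b))
    # Law of cosines for the elbow's interior angle (pi = straight):
    cos_elbow = (len_a**2 + len_b**2 - d**2) / (2 * len_a * len_b)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    cos_shoulder = (len_a**2 + d**2 - len_b**2) / (2 * len_a * d)
    shoulder = math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder, elbow

shoulder, elbow = two_bone_ik(2.0, 1.0, 1.0)   # fully extended chain
```

The animator's inputs (target distance, lengths) go in, joint transforms come out, and there is nothing in between to debug.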

Our modular rigging system, named “riglets” allows us to position pivots and controls in a template and connect up a rig from its component parts, a torso, clavicle, arm, hand, etc. As soon as we have a stable solution for how we want a body part to behave, we make it a riglet and the riggers should never have to think about that again, freeing them up to spend more time on making the deformations look better. It also allows us to mix and match body parts, so we can put a tentacle coming out of a chest or an alien foot on the end of an arm, etc.
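The mix-and-match idea could be sketched as a registry of riglet builders assembled from a template. The names and data structures here are purely illustrative, not the real riglet API:

```python
# Hypothetical sketch of modular "riglet" assembly, not the actual system.
RIGLETS = {}

def riglet(name):
    """Decorator that registers a riglet builder under a name."""
    def register(fn):
        RIGLETS[name] = fn
        return fn
    return register

@riglet("arm")
def build_arm(pivot):
    return {"type": "arm", "pivot": pivot, "controls": ["fk", "ik"]}

@riglet("tentacle")
def build_tentacle(pivot):
    return {"type": "tentacle", "pivot": pivot, "controls": ["spline_ik"]}

def assemble(template):
    """template: list of (riglet_name, pivot) pairs, e.g. a tentacle
    placed where an arm would normally go."""
    return [RIGLETS[name](pivot) for name, pivot in template]

# A tentacle coming out of a chest, next to a regular arm:
rig = assemble([("arm", (0.0, 1.4, 0.0)), ("tentacle", (0.2, 1.2, 0.0))])
```

Because each body part is a stable, registered unit, swapping a foot for a claw is a template change rather than a rebuild from scratch.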

A general rule I like to work to is to build rigs which are capable of doing around 70-80% of what the asset needs to do. It’s very common when you start building a rig to think about the extremes of what will be required and to try to factor everything into a single solution. This can result in an overly complex and slow rig. I prefer to look at the core things a rig has to do in every shot and build the rig to do that. If in a single shot it needs to do something different, then we treat that as a special case and may build a custom rig asset or just fix it in the shot.

The most critical thing about any rig is how the animator feels about using it. Ultimately rigs are just tools for the animators, which puts the rigger in the role of servicing the animators’ requirements. We need to strike the right balance between speed and detail, sometimes having different levels of detail of the same rig for use at different stages of shot development. It’s a constant challenge to stay one step ahead of what they may require and ensure that they are able to achieve what they need as smoothly and efficiently as we can.

Can you tell us more about the impressive transformation shots and especially the one with the woman?

There is no set process to achieving the transformations, each one was handled as a one-off and in most cases required a number of different effects to be layered up to create the final shot. It was also an iterative process in narrowing down what the clients wanted. Initially we’re given quite a vague concept like “these two heads merge together” or  “the skin on her chest rips open to reveal a giant mouth”. Exactly how we achieve that depends on the very specific details. Does it happen fast or slow? Does it start at the top and work down, or in the middle and work out? What happens to the extra skin? Does it shrink back, fall off, dissolve away? Until we’ve answered all these questions we don’t know for certain which parts we’ll do in rigging, animation, cloth, fx or comp. These transformation shots are often the most collaborative between the different disciplines, each having to adapt to what works best for the others.

How did you create the terrifying arm that attacks one of the characters?

The arm creatures were actually some of the easiest rigs to build and maintain. They’re good examples of why a modular rigging system was a good approach for this show. Through the show, the redesigns meant the number of legs changed, human fingers were merged together and the mouth areas changed significantly. Each time we could easily go back to our modular template, make some adjustments and rebuild. One of the reasons these rigs were so simple was the proprietary multi-bone IK and spline IK nodes we created. Compared to standard out-of-the-box solutions, these nodes have many more features for controlling the exact behavior of each bone and the shaping of the leg. For example, in a segmented 5-bone leg the animators can control the weighting of each bone to adjust how much automatic movement goes into each bone, with an additional animated override. It also took care of IK/FK blending. All of the control values feed into a single node which outputs transform values directly into a single joint chain. This single proprietary node gives complete control and flexibility with the absolute minimum number of nodes and connections. The same is true for our IK spline tool, which gives us a huge amount of extra control over twisting, aiming and scaling of joints along the chain. We had a multi-bone setup for each leg and the claws around the mouth, and our IK splines for the spine, mouth sphincters and muscles. That’s pretty much the whole rig.
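The per-bone weighting described above amounts to splitting a total bend across the chain by normalized weights, with an animated override layered per bone. A much-simplified stand-in for the proprietary node:

```python
def distribute_bend(total_angle, weights, overrides=None):
    """Split a total bend angle across a multi-bone chain.
    weights: relative share per bone (normalized internally);
    overrides: optional additive animated offset per bone, in degrees."""
    overrides = overrides or [0.0] * len(weights)
    total_w = sum(weights)
    return [total_angle * w / total_w + o
            for w, o in zip(weights, overrides)]

# 5-bone segmented leg, more bend near the body, animator adds 5 degrees
# of override on the third bone:
angles = distribute_bend(90.0, [3, 2, 2, 2, 1], [0.0, 0.0, 5.0, 0.0, 0.0])
```

The animator drives one value plus optional per-bone offsets; the node turns that into the whole chain's rotations.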

What are your software and pipeline at Image Engine?

For rigging we use Maya as our main package with a significant amount of proprietary software on top. That’s everything from our cross-platform asset management tool to our custom constraint node.

What do you keep from this experience?

That I’m so lucky to be working on awesome films, for an awesome company, with some awesome colleagues. Image Engine is small enough that we can really be responsive to a client’s needs, we can make bold decisions and be really creative and efficient. At the same time we’re big enough that we have some world-class talent, especially in the RnD team who work closely with the artists to create tools and pipelines that enable us to create top quality work.

We developed some great tools and techniques on THE THING that we will definitely be using again but we’re not complacent, there’s always plenty of room for improvement.

A big thanks for your time.

// WANT TO KNOW MORE?

Image Engine: Dedicated page about THE THING on Image Engine website.

© Vincent Frei – The Art of VFX – 2011

ABDUCTION: Greg Liegey – VFX Supervisor – Method Studios

Greg Liegey began his career in visual effects in 1992 at Cinesite L.A., where he worked on such films as UNDER SIEGE 2, CONTACT and SPACE JAM. He then worked freelance at Sony Imageworks and ILM. In 2002 he joined the team of CIS Hollywood (now Method Studios) and worked on films like MATRIX REVOLUTIONS and CONSTANTINE. As a VFX supervisor, he has worked on films like THE MUMMY: TOMB OF THE DRAGON EMPEROR, I AM LEGEND and FAST AND FURIOUS 4.

What is your background?
I’m from NYC and went to Pomona College in Claremont, CA for English Literature & Art History. I started working in Hollywood for Paramount Pictures in a development office and soon took a job at a fledgling Cinesite LA in 1992 through a friend’s connection, and have stuck with VFX ever since. I spent seven years at Cinesite working as a composite artist, then I freelanced at Sony Imageworks, Manex and ILM for three years. In 2002, I landed at CIS Hollywood working first as composite artist, then composite supervisor, and eventually visual effects supervisor… where I continue, though now we merged with Method & took on the Method moniker, Method Studios.

How did Method Studios get involved on this show?
As CIS Hollywood (at the time), we acquired the show because of the unfortunate circumstances of Asylum’s closing. Lionsgate brought us the show because of previous work with them we did in conjunction with our sister company Efilm.

How was the collaboration with director John Singleton?
Working with John is a pleasure. I’m happy to say that this was my second opportunity after working on 2 FAST 2 FURIOUS. John always keeps focus on the storytelling aspect of every shot. The details he wants in VFX shots are the ones which move the story along, even if that means stripping things out and simplifying.

What have you done on this show?
We had two main sequences and some miscellaneous shots.
The major sequence involved the catastrophic explosion of the protagonist Nathan Harper’s (Taylor Lautner) house. The background plates of the exploding house were shot as miniatures at Kerner Optical.

Can you explain to us the shooting of the miniature explosion?
// Note: This question is answered by Andy Foster, VFX Producer at Method Studios. He worked for Asylum during the miniature shoot.

From what I remember the miniature explosion was shot with 4 cameras at 72 fps. The camera positions for the miniature shoot were matched as closely as possible to the live action footage shot at the real location for the house.

For the aftermath we shot both vista plates and motion control matched to the angles and moves of the previously shot live action plates of the actors. The motion control moves were based on camera tracking data from the live action plates themselves. The miniature house was at 1/4 scale. We shot multiple passes of the “aftermath” camera setups with varying degrees of smoke and residual fire to provide a wider assortment of elements for the comps.

The kitchen was shot at 48 fps on a full-size set piece that consisted of the stove, its surrounding cabinets and a hold-out shape for the bar at the entry to the kitchen. After the house and kitchen setups were complete, we shot various generic smoke and falling fire ember elements.

VFX and Kerner worked back and forth seamlessly to lock in where all the camera positions would need to be placed on set. To help accomplish this, a 3D scene of the set was built from a combination of the miniature blueprints and provided measurements. Cameras were placed within the scene and then lined up to match the tracked live action shots that were currently in the cut. This allowed everyone involved to see the potential outcome of how the shots would be comped together before anything was actually shot. It also helped in spotting any potential difficulties with the suggested camera placements.

What was your feeling to work with miniatures? It’s something that is unfortunately becoming increasingly rare in VFX.
I’ve always liked working with miniatures. The physicality of miniatures gives them a presence which is often hard to duplicate using CG, especially with slow camera movement or variable lighting such as fire. The way light interacts with an actual surface is both subtle and complex, and very hard to duplicate exactly. The brain excels at discerning the small anomalies in surface texture which are born out of necessary CG shortcuts. That is not to say you can’t create absolutely wonderful CG (you can), but the best results require lots of time spent creating various depths of reality to fool the eye.

Can you tell us in detail how you put the miniature elements on live action plates?
We lined up the miniature plates with the live action plates for time sync which took a bit of give and take. In some shots, the miniature explosion was a couple frames too fast in its acceleration and in others a couple frames too slow. Nonetheless, it’s a tribute to the experience and expertise of Kerner Optical that the elements were tuned to the live action within that small range!
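Nudging an element a couple of frames fast or slow is, at its simplest, a linear frame remap. A sketch of the idea (the speed ratio below is an invented example, not a value from the show):

```python
def retime(frame, speed=1.0, offset=0.0):
    """Map a comp frame to a source frame on the element.
    speed > 1 plays the element faster; offset slips it in time."""
    return frame * speed + offset

# An explosion element running about 2 frames fast over a 48-frame range
# gets slowed fractionally so the peaks line up with the live action:
src = [retime(f, speed=46 / 48) for f in (0, 24, 48)]
```

In practice the remap is dialed per shot until the explosion's acceleration sits on the live-action timing.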

With timing sorted out, next we established the basic splits – the areas of transition from miniature to live action. The house miniature was detailed and a good match to the live action set, so that made things a lot easier. The trick is finding lines of demarcation which are a bit unexpected, so that the transitions can be meshed together without jarring segues. We had to pay special attention to blending the shading variations between miniature & live action as the explosion develops.

Lastly, we added additional layers of atmospheric elements to further blend the miniature & live-action worlds. Smoke, fires and floating embers bridge the elements and help connect them to each other by creating layers of depth which encompass everything.

How did you manage the lighting and camera movements challenges?
Camera movement was tracked for both live-action and the miniature. Then we registered one to the other and locked them together. Using the 3D cameras, composite artist Kama Moiha placed additional layers of deep background on cards at the appropriate distance so that the parallax would sell the feeling of environment and distance for the shots where Nathan and Karen sat at pool’s edge while the house smoldered behind them.
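The reason cards at different distances sell depth is simple geometry: for a sideways camera move, a card's screen shift falls off with its distance. A hedged illustration under a pinhole model, with all numbers invented:

```python
# Sketch of why background cards at different depths produce parallax.

def parallax_shift(cam_move, focal, card_distance):
    """Horizontal screen shift of a card when the camera
    translates sideways by cam_move (simple pinhole model)."""
    return focal * cam_move / card_distance

focal = 50.0
move = 0.3                                       # camera slide, scene units
near_card = parallax_shift(move, focal, 20.0)    # e.g. a smoke layer
far_card = parallax_shift(move, focal, 200.0)    # e.g. a distant treeline
print(near_card > far_card)                      # nearer layers shift more
```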

Lighting was merged by hand placement of soft rotos & animated color corrections. We also added spot fires and their lighting effects on top of transition areas to help break up any obvious divisions between the two worlds.
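An animated color correction like those mentioned can be thought of as a gain value keyframed over time and interpolated per frame. A minimal sketch of that idea; the keyframe values and pixel value are invented for illustration, not taken from the show.

```python
# Hypothetical sketch of an animated colour correction: a keyframed
# gain, linearly interpolated, applied to a pixel value per frame.

def animated_gain(keys, frame):
    """keys: sorted list of (frame, gain); linear interp between keys."""
    for (f0, g0), (f1, g1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return g0 + (g1 - g0) * t
    return keys[-1][1] if frame > keys[-1][0] else keys[0][1]

# Brighten a transition area as a fire flares, then settle back (invented).
keys = [(0, 1.0), (12, 1.4), (24, 1.1)]
pixel = 0.5
graded = pixel * animated_gain(keys, 6)
print(round(graded, 3))
```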

Did you enhance the explosion with CG elements like fire and props?
There were instances where composite artist David Rey added fireball elements to the miniature explosion footage to hide rigging, obscure undesirable debris or heighten the level of danger for the actors in the set piece. We also created specific, hero CG debris pieces in a number of shots to supplement the existing miniature debris. The hand-placed CG debris allowed us to fill in gaps and create a sense of increased menace to the live-action actors.

How did you create the water simulation?
We used two different types of effects simulations.

During the initial explosion, Aaron Schultz used RealFlow to create impact splash simulations. The CG splashes were needed to churn the practical pool surface to show impacts from various debris coming from the miniature house. They are subtle in the scheme of things, but go a long way in extending the feeling of destruction from the BG miniature to the FG live action plate. Without them, there were areas of relative calm in a frame full of action. We wanted to amp up the action everywhere.

In another instance, David Santiago used Houdini to create bubble simulations to enhance a large piece of the house plunging into the water. The original photography had a large practical chunk of debris chasing the actors into the pool, but since it was built to protect the actors, it looked a bit too safe and innocuous. David ran multiple iterations to get the velocity & acceleration feeling right – the difficulty was a routine one: the CG effects tended to look orderly & regimented compared to the chaotic reality we needed. Using more than one pass helped mix up and obscure the systematic look of the CG and allowed the compositor, Kama Moiha, to match the reality we wanted to portray.
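This is not the Houdini setup itself, just a sketch of the layering idea: several differently seeded passes of the same idealized motion, combined so that no single regular pattern reads as CG. The bubble model and all values are invented for illustration.

```python
# Sketch of breaking up a "too orderly" sim by layering seeded passes.

import random

def bubble_pass(seed, steps=5):
    """One rise of a bubble: steady ascent plus seeded wobble."""
    rng = random.Random(seed)
    return [i * 1.0 + rng.uniform(-0.3, 0.3) for i in range(steps)]

# Layer three passes; their wobbles decorrelate, hiding the pattern.
passes = [bubble_pass(seed) for seed in (1, 2, 3)]
combined = [sum(col) / len(col) for col in zip(*passes)]
print(len(combined))
```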

What was the biggest challenge on this project and how did you achieve it?
The biggest challenge was getting representative temp versions of all the house explosion shots to editorial as soon as possible. We could tell early on that the sequence was extraordinarily difficult to edit since the effects were such a large part of the action in every shot. Without a sense of how the effects would play out, it was tough to know how the shots would work together. Once we got temps together, we still had to work hard to improve certain aspects of timing to show that particular shots could work in the edit. Working closely with everyone in editorial, we were able to fine tune the effects shots to allow more latitude & possibilities in the cutting room.

Was there a shot or a sequence that prevented you from sleeping?
Haha – only everything at the start of the show…but once we started working down our ‘to do’ list, things got clearer and we knew we could do what needed to be done…

What do you keep from this experience?
We worked with a great team of people – on the Production side and at our company. Those relationships are the true value of any job.

How long have you worked on this film?
We started in November 2010 and finished in May 2011.

How many shots have you done?
We worked on 48 shots.

What was the size of your team?
We had six compositors, three CG artists and a matte painter… plus myself and the production staff of two.

What is your next project?
I moved directly onto TOWER HEIST at Method NY until September and then back to Method LA for NEW YEAR’S EVE which wraps today!

What are the four movies that gave you the passion for cinema?
Movies I watched as a kid: THE ADVENTURES OF ROBIN HOOD, SHE WORE A YELLOW RIBBON (any of the Ford/Wayne westerns during the « 4:30 Movie » Western Week), LAWRENCE OF ARABIA and STAR WARS.

A big thanks for your time.

// WANT TO KNOW MORE?

Method Studios: Official website of Method Studios.

© Vincent Frei – The Art of VFX – 2011