SPACE BATTLESHIP YAMATO: Takashi Yamazaki – Director & VFX Supervisor – Shirogumi

Takashi Yamazaki started his career in 1986 at Shirogumi as a model artist, then worked as a VFX designer before becoming a director. He has directed movies such as JUVENILE, RETURNER and DEMON WARRIOR 3. He is the VFX supervisor on most of his films.

What is your background?
I have been strongly interested in monster films since childhood; the behind-the-scenes work especially drew my attention, and I kept thinking how much I wished I could make monster movies when I grew up. When I was in junior high school, I watched two revolutionary films, STAR WARS and CLOSE ENCOUNTERS OF THE THIRD KIND, and my inclination toward SFX films grew stronger. I may add that so many people have been influenced by these two fantastic movies, and I think many people involved in VFX and CG are close to my age.
A few years later, I watched the making of STAR WARS EPISODE V: THE EMPIRE STRIKES BACK on television and made up my mind to have a job like that, so I started to widen my knowledge and improve my skills with technical books.

In 1986, I entered Shirogumi, a Japanese VFX production company, as a miniature maker. Even though I had worked as a VFX designer on many films, commercials, and event footage, I realized that there was almost no opportunity in Japan to find the kind of VFX job I had dreamed of when I was younger. So I decided to create new demand myself.
I wrote a script full of aliens, spaceships, robots, and time travel, called JUVENILE. This script caught the attention of the Japanese production company Robot, and I made my debut as a film director. After a few films, I won several prizes with ALWAYS: SUNSET ON THIRD STREET.

Can you tell us your approach on this space opera movie?
At first, screenwriter Shimako Sato joined this project. I have worked with her many times, and I told her that I would like to help with this film in any position, because I was a big fan of the original SPACE BATTLESHIP YAMATO animation. It would be my pleasure if they used my ideas in any scene.

One day, I got a call that the production crew was looking for a director, and they offered me the position based on my experience on JUVENILE, RETURNER and DEMON WARRIOR 3.

To be honest, I hesitated to take the position because I did not feel ready to direct such a dream project. However, I realized it was a chance to make my childhood dream come true, so I accepted with honor.

How did you create the shot that starts from the eye of Yuki Mori and stops in the middle of a huge space battle?
The shot of Yuki Mori's eye was filmed with a macro lens on a Canon 5D Mark II DSLR. The scene starts from a close-up of her face and pulls back to her full body, so I shot a track-back with a crane camera while the actor rode a six-axis motion ride. The six-axis ride gives the shot its sense of reality. We switched to CG actors and aircraft at the moment the crane stops in its final position.

How did you film the shots showing the pilots in their cockpits?
We shot the actors on a six-axis motion ride set. The motion ride was only part of the set; we created the full cockpit in CG. It was difficult to composite the cockpit glass because of its complex reflections, so we shot with only the cockpit frames and replaced the glass with CG.

Did you create previz for the choreography of the space battle?
Pre-visualization (previz) is important for a VFX film like this one.
For the actors, it helps them imagine the background of the scene and understand what surrounds the shot.
The stage team needs it to design the stage size for each scene, and the miniature team designs and builds props based on the camera positions in the previz.
For me, I needed it to find the rhythm and tempo for editing the film.
For editing, the editor needed the previz as a stand-in for unfinished VFX shots.

Therefore, I started working on the previz a few months before shooting, and the CG team also created previz for their own VFX scenes.

Can you tell us more about the creation of the huge set extensions? How did you create them?
It was created in an ordinary way. We tracked the camera on minimal sets in front of a green screen backdrop with 2d3's boujou matchmove software and composited them with the CG set.
But there was a problem with the stage size: the actor couldn't walk through the left side of the frame. We switched him to a digital actor at the moment he passes the other actor in the middle.
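The matchmove-then-composite approach described here boils down to solving a camera per frame and re-projecting CG geometry through it so the digital set lines up with the plate. As a rough illustration only (the simple pinhole model, names and numbers below are our own, not boujou's actual solve data), a minimal projection looks like this:

```python
# Toy pinhole projection: once a matchmove solve gives a per-frame camera
# position and focal length, CG set points can be projected into the plate
# so the digital extension lines up with the live-action minimal set.

def project(point, cam_pos, focal=35.0, filmback=36.0, width_px=1920):
    """Project a 3D point (camera at cam_pos, looking down -Z) to pixel x/y."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z >= 0:
        return None                       # point is behind the camera
    px_per_mm = width_px / filmback       # filmback width in mm -> pixels
    u = (focal * x / -z) * px_per_mm + width_px / 2
    v = (focal * y / -z) * px_per_mm + width_px / 2  # square pixels, centered
    return (u, v)
```

A point straight ahead of the lens lands at the frame center; once every CG vertex projects through the solved camera, the compositing step is a straight merge over the green-screen plate.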

How did you create the impressive shot in which the Yamato rises out of the ground?
This scene is the defining image of SPACE BATTLESHIP YAMATO, so I considered it very special. However, it was a very hard task for my team, who did not have enough effects experience with breaking huge objects and with smoke. We started by researching software for breaking objects.
After the research, we decided to use the RayFire plug-in. But RayFire is made for 3ds Max, and our team was using Maya. We started learning the basic operations from the 3ds Max team, and we were able to use RayFire in 3ds Max by exporting FBX from Maya. In the end, we finished the sequence by the deadline. So, in effect, we used 3ds Max as an expensive effects plug-in.

Can you explain to us the creation of the shockwave and explosion that destroy the enemy armada?
For the explosion of the Gamilas carrier, I told my broad intention to Kei Yoneoka, the CG artist whose specialty is visual effects. The intention was: « The engine of the Gamilas ship forms a micro black hole if it gets damaged. At that moment, everything around it is drawn in, but when it reaches the critical point, the engine triggers a large explosion. »
I left everything else to Yoneoka after expressing the ideal implosion and the timing of the release, making effective use of onomatopoeia. For more specific information, please go to his website.

SPACE BATTLESHIP YAMATO mother ship destruction making from Kei Yoneoka on Vimeo.

Kei Yoneoka: “This shot can be divided mainly into the destruction of the mother ship by the black hole and the huge explosion. Most of it was made with 3ds Max. I put keyframe animation on a sphere object to control the timing at which the black hole begins to form and its speed of rotation. Then I used FumeFX's velocity information, emitted from the sphere object, to give the particles an organic motion. After that I rendered those tens of millions of particles using Krakatoa. For the center core of the black hole, I used RealFlow, computing vorticity, then extracted that information and rendered it using Krakatoa. The destruction of the mother ship was done with Thinking Particles.
For the huge explosion after the shrinking of the black hole, I also put keyframe animation on a sphere object to control the timing of the expansion. Then I used FumeFX's velocity information coming from the sphere object to give the particles an organic motion. I mapped pictures of plants and microorganisms onto the tens of millions of particles as textures, then rendered with Krakatoa. I had sculpted a bumpy emitter object in modo in advance in order to achieve an organic style.”
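The setup Yoneoka describes — a keyframed control sphere whose growth and shrinkage drive the motion of enormous particle counts — can be sketched in plain Python. This is only a toy stand-in for the FumeFX/Krakatoa pipeline, with purely illustrative keyframe values:

```python
import math
import random

# Keyframes for the control sphere's radius, as (frame, radius) pairs —
# a stand-in for the keyframe animation Yoneoka puts on the sphere object.
KEYS = [(0, 5.0), (30, 0.5), (31, 0.5), (60, 20.0)]  # implode, then explode

def sphere_radius(frame):
    """Linearly interpolate the sphere radius between keyframes."""
    for (f0, r0), (f1, r1) in zip(KEYS, KEYS[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return r0 + t * (r1 - r0)
    return KEYS[-1][1]

def advect(points, frame, dt=1.0):
    """Push each particle radially, following the sphere's growth or
    shrinkage (a crude analogue of sampling a fluid sim's velocity field)."""
    dr = sphere_radius(frame + 1) - sphere_radius(frame)  # inward if negative
    out = []
    for x, y, z in points:
        d = math.sqrt(x * x + y * y + z * z) or 1e-9
        s = 1.0 + dt * dr / d             # scale toward/away from the center
        out.append((x * s, y * s, z * s))
    return out

random.seed(1)
cloud = [(random.uniform(-4, 4), random.uniform(-4, 4), random.uniform(-4, 4))
         for _ in range(1000)]
cloud0 = list(cloud)
for frame in range(30):                   # run the implosion phase
    cloud = advect(cloud, frame)
```

During the implosion keys the radius delta is negative, so the cloud is drawn toward the center; after frame 31 the same mechanism would expel it outward for the explosion.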

How did you create all those pyro elements such as the missile trails, lasers and explosions?
We mostly used Maya's nParticles for the pyro elements. I also combined them with a lot of live-action explosion footage.
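Layering shot explosion footage over a plate, as mentioned here, is commonly done with a screen blend, so bright pyro pixels add energy while black stays invisible. A one-pixel sketch of that generic compositing formula (not Shirogumi's actual comp setup):

```python
def screen(base, element):
    """Screen-blend one pixel pair (values in 0..1): bright explosion pixels
    brighten the plate, black element pixels leave it untouched — a common
    way to layer shot pyro elements without mattes."""
    return 1.0 - (1.0 - base) * (1.0 - element)
```

For example, a mid-gray plate pixel under a mid-gray fire pixel comes out at 0.75, while a fully black element leaves the plate value unchanged.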

What was the real size of the set for the sequence with the heroes on a truck?
It was a very tiny studio. There was only a truck in front of a green screen. We shot with a crane camera to make it look like we were following a moving truck, and we surrounded the truck with CG elements.

How did you create all the huge environment around them?
We created the strange rocks using Pixologic ZBrush. We also used matte paintings and photos of miniatures for the backgrounds.

Through the scene, the actors perform arriving in the enemy's territory, fighting the Gamilas fighters, reaching the innermost place and escaping from there. But since we shot only the truck and some rocks on green screen, we had to create whole backgrounds digitally.

This whole scene was quite a difficult challenge for us, creating so many kinds of backgrounds, but we are confident that audiences did not realize how small the actual stage was. This is VFX.

Can you tell us about the creation of the particles coming off Yuki Mori?
We used Maya nCloth for them. We simply processed the particles, but I paid particular attention to how her jacket and helmet were blown by the shock of the particles.

How did you create the beautiful final explosion?
The final explosion was also made by the CG artist Kei Yoneoka, whose specialty is visual effects.
The concept for this scene was to recreate the sad but beautiful atmosphere depicted in one of the original works, FAREWELL TO SPACE BATTLESHIP YAMATO, and to make a 2010 version of it. Yoneoka created a mysterious and beautiful explosion, worthy of capping this film, by incorporating a marvelous nebula-like gaseous image, reminiscent of a supernova, into the aftermath of the blast.

SPACE BATTLESHIP YAMATO Final Explosion making from Kei Yoneoka on Vimeo.

Kei Yoneoka: « In this shot, I also put keyframe animation on a sphere object to control the timing of the expansion. Then I used FumeFX's velocity information coming from the sphere object to give the particles an organic motion. For the emitter object of the dark-nebula-like element, I used grayscale textures made from modified pictures of clouds and whitecaps so the particles would have a natural distribution. Those tens of millions of particles were rendered with Krakatoa. »

Did you create specific tools for this show?
No, we did not invent any tools. I focused on employing what we had efficiently.

What was the biggest challenge on this project?
The biggest challenge was that we, as Japanese filmmakers, were depicting full-scale outer space in a science fiction film. Compared with Hollywood films, the budgets are small (I would guess one-tenth or one-twentieth).
And we did not have enough knowledge. I had never done anything like it before, so there were a lot of challenges.
We worked hard to prove that the Japanese can produce a feature film set in outer space.
I hope this experience helps me with the next challenge.
The difference between Hollywood and us is not only the budget.
It is experience, I believe.

Has there been a shot or a sequence that prevented you from sleeping?
We always prepare the knowledge and skills before starting a project. But this project was started without enough skill to create these kinds of special effects. It was an unprecedented challenge to produce the key shot of a theatrically released film while learning the basic operation of unfamiliar software.

We spent a lot of time on this scene, so I felt really relieved when we finished the shot.

How long have you worked on this film?
We worked 3 months on stage and 6 months on the digital work.

What are your software and pipeline at Shirogumi?
We used Maya as our main CG software, with 3ds Max and RayFire for certain scenes, Nuke for compositing, ZBrush for special modeling, Photoshop for matte paintings and Thinking Particles for explosion effects. We used DPX files from the DI room of IMAGICA, the post-production company.

How many shots have you made and what was the size of your team?
The total number of shots was 436.
There were 34 core staff members, 21 people for support and 12 people for motion-capture-related work.

What do you keep from this experience?
I realized how difficult it is to produce films set in outer space in Japan.
However, I believe we will improve through this kind of project, so I would like to keep taking on new challenges.

What is your next project?
One of my next projects is called FRIENDS NAKI IN MONONOKE ISLAND.
It is a digital animation film based on an old children's tale (or fairy tale), created with miniature backgrounds and CG character technology.
http://www.friends-movie.jp/

The other project is called ALWAYS SUNSET ON THIRD STREET ’64.
This is the third film in the ALWAYS series. We shot with a real 3D camera, so it is a true 3D film, not a 2D-to-3D conversion. My idea is to take audiences into the three-dimensional world of 1960s Tokyo.
http://www.always3.jp/

What are the 4 movies that gave you the passion of cinema?
CLOSE ENCOUNTERS OF THE THIRD KIND
STAR WARS series
SEVEN SAMURAI
IT’S A WONDERFUL LIFE

A big thanks for your time.

// WANT TO KNOW MORE?

Shirogumi: Dedicated page about SPACE BATTLESHIP YAMATO on Shirogumi’s website.

// SPACE BATTLESHIP YAMATO – TRAILER

© Vincent Frei – The Art of VFX – 2011

14 BLADES (JIN YI WEI): Eui Park – VFX Supervisor – Next Visual Studio

After studying in Korea and Canada, Eui Park returned to Korea and worked at Next Visual Studio on over 40 films including THE CHASER, THREE KINGDOMS and THE MURDERER.

What is your background?
Hong-Ik University (Seoul-Korea) / Product Design.
Centennial College (Toronto-Canada) / Film & Video Production.
A.I (Vancouver-Canada) / Visual Effect & Digital Animation (Master grade).
For the complete filmography, click here.

How did Next Visual Studio get involved on this show?
Next Visual Studio worked on Daniel Lee’s THREE KINGDOMS in 2008. He was very satisfied with the VFX work for the film and it led to a great relationship between the two parties and we have been working on his films including 14 BLADES ever since.

How was the collaboration with director Daniel Lee?
Daniel is full of passion for films and always tries his best for the best results. He’s always attentive to the details while never losing sight of the big picture. Frankly, at times it was difficult on our end because Daniel likes trying out new things.
But he always had a clear picture of what he exactly wanted which led to good communication. As a result, we ended up with an output that the director wanted to express onto the screen. At times his strenuous efforts to share ideas with the other teams and staff slowed things down a bit but we all eventually learned that it all paid off as everyone was always on the same page. So even in the midst of all the fatigue of being on set, I was able to share that same passion that Daniel showed.

What was his approach about visual effects?
Daniel came to me for most of the CG work and shots that needed to be expressed using CG. Daniel focused more on the overall beauty of the shots more than a blockbuster shot overloaded with CG. So I set the foundational guideline for the shots while I tried my best to reflect Daniel’s ideas and opinions onto them. Again, because Daniel had such a clear picture of what he wanted, we had a very smooth overall work flow.

Can you tell us more about the main title?
In the trailer we used « Fluid CG » and created a dragon using smoke. I’d like to mention this because the reviews were great. Hollywood has used Fluid CG to create many elements of different shapes and forms in films such as THE MUMMY and PIRATES OF THE CARIBBEAN but it hasn’t been so widely used in Asia. But NEXT Visual Studio has done a lot of R & D work on Fluid CG, which has been popular in fantasy and natural disaster films, and through many in house tests, we’ve been successful in utilizing it on screen. We didn’t use it on the actual film itself but the trailer helped add a fantasy feel to it. We have recently been using Fluid CG in trailers for Korean films and Titles for Korean TV Series.

How did you create the super slow-motion of the warrior girl?
We shot the background and the characters in High Speed and created a digital double for the girl.

How did you create her dress?
We ran many simulations of the action of putting on and taking off the dress. We customized Maya with a simulation tool called Qualoth to make this possible. The movement of the dress itself didn't give us much trouble. But the character wore 3 different layers of dress, and the act of taking off each of those layers caused problems like interpenetration of the different layers, unnatural behavior of the fabric, and difficulty in naturally rendering the semi-transparent fabric. We needed to put in extra hours to solve these problems.

Did you create some digital doubles for this show?
Yes, the part where the Girl Warrior fights as she puts on and takes off her dresses was done entirely with a digital double.

Can you explain to us the creation of the city in the desert?
The desert city doesn't exist in reality. We used an ancient Silk Road concept, so some of it was shot on set while the establishing shots were done in full CGI.

What were your references and indications for the city?
We researched reference pictures of ancient Islamic cities keeping in mind that these towns were probably built around the Silk Road economy. We focused on creating a worn out look of the city and details such as the trail marks of wagon wheels. We also used pictures of towns we believed were similar to the ones we were creating.

What was the size of the sets for this sequence?
The set in Yin Chuan is a wonderfully designed set and provides great space for us to shoot in. However, due to the large number of tourists visiting the sets, the space allowed for the actual shooting was very limited. Furthermore, there wasn’t even a main building, so for each building, we had to set up a wall and create the rest.

How did you create the fireworks?
The fireworks were relatively easy CG work compared to the rest. We shot real fireworks on camera first, but they looked too modern, so we went ahead and created the entire fireworks scene in CG. The fireworks on location were eventually used for recreational purposes, as our crew lit up the Chinese night sky over wine and beer. We used particle effects to give the fireworks a more old-fashioned look.

Did you help the stunts to be more impressive?
We erased hundreds of wires and added weapons when necessary.

About the final fight. How did you create the dress in fire?
In the beginning we tried to capture as much live source material as possible, but the costume prepared was not made of a material that burned to ashes; it melted under extreme heat instead. We couldn't get the look that Daniel wanted, so we used particle effects to express the burning dress.

Can you tell us more about the warrior girl whip?
The whip around the girl’s waist was used as a belt then it was whipped out and it moved around like a snake. So we spent a lot of time animating it. We shot most of the scenes that involved whips without the actual whip.

How did you enhance the blades of Donnie Yen's character?
Donnie Yen used a sword attached to his wrist. It was too dangerous to use an actual blade, so we used only the hilt when shooting live, then added the blade in CG.

Did you develop specific tools for this show?
We didn’t develop any new tools but for cloth simulation, we customized a software called Qualoth.

What was the biggest challenge on this project and how did you achieve it?
The most difficult shots were those involving Toto, the girl warrior, putting on and taking off her dresses. Hundreds of simulations were run, and only after much trial and error were we able to come up with the final output.

Was there a shot or a sequence that prevented you from sleep?
The action scene in Toto and Chenglong’s last scene was the most challenging. This scene robbed the entire CG Team of sleep. The city’s Full 3D shot also gave us a tough time as we tried harder and harder for a better quality.

What are your softwares and pipeline at Next Visual Studio?
There aren't huge differences in software amongst studios. For our pipeline, we use in-house software called AINEXT. It's a web-based program, so our clients can check and confirm shots anywhere and anytime. Recently we developed an application for iPhone and iPad so that our clients can check and confirm our daily work on mobile devices.

What do you keep from this experience?
I realized once again how important VFX design and concept work during the PRE-PRODUCTION stage is. With sufficient preparations in these areas, we were able to enhance the quality of CG and try different things. And through such work process, we were able to give the director what he wanted from the beginning. This boosted the overall confidence of our CG team.

How long have you worked on this film?
The actual shooting period was 3 and a half months. CG work in the post production took another 4 months.

How many shots have you done?
We worked on a total of 700 shots.

What was the size of your team?
Around 10 Supervisors and Producers and 30 compositors, 20 3D Artists, and 10 Art team members.

What is your next project?
We are currently working on Daniel Lee’s next film, WHITE VENGEANCE. This project involves around 700 cuts and most of them involve crowd simulation, Battle scenes, and work involving castles and other architectures.

What are the four movies that gave you the passion for cinema?
I have loved films since I was very young. There are way too many great films in this world, so it's difficult to pick out a few. But the film that helped me grasp the essence of film was GHOSTBUSTERS. Honestly, I was shocked when I first watched it. I don't think there has ever been another film, based on a strong story, that did a better job of visually expressing what all human beings have at some point imagined but were never able to see with their own eyes. I was also inspired by films like BACK TO THE FUTURE and KINGDOM OF HEAVEN. All these films helped me choose this career as a filmmaker.

A big thanks for your time.

// WANT TO KNOW MORE ?

Next Visual Studio: Official website of Next Visual Studio.

// 14 BLADES (JIN YI WEI) – VFX BREAKDOWN – NEXT VISUAL STUDIO

// 14 BLADES (JIN YI WEI) – TRAILER

© Vincent Frei – The Art of VFX – 2011

SHAOLIN (XIN SHAO LIN SI): Eddy Wong – VFX Supervisor and Founder – Menfond Electronic Art

Eddy Wong founded Menfond Electronic Art in 1990 with his brother Victor. They were the first to create full computer graphics in China, for the EPRO Paging commercial. They have also worked on cinematics for video games like PARASITE EVE and FINAL FANTASY VIII. In 2001, they created the first full-CG animated feature, MASTER Q, with director Tsui Hark. Eddy Wong has supervised the VFX of many films including the INFERNAL AFFAIRS trilogy, LEGEND OF ZU, HOUSE OF FLYING DAGGERS and NEW POLICE STORY.

What is your background?
I graduated from the China Central Academy of Fine Arts in 1989 and created Menfond in Hong Kong in September of the same year, and I have been doing computer animation and special effects ever since. The company expanded to Suzhou in June 2007. Over the past 22 years, we have created over 850 TV commercials and thousands of visual effects shots for 90 feature films.

How did Menfond get involved on this show?
Benny approached me in 2010. After reading the script, I was really interested, because it was the very first time that Andy Lau would play a Buddhist monk, with a number of fighting scenes. Also, I believed that working with Benny, with his careful and earnest attitude, we would definitely learn a lot from it.

How was the collaboration with director Benny Chan?
I worked with director Benny Chan in 2004 on NEW POLICE STORY, so I know his working style well. Through detailed and extensive discussions, we knew he always desired new effects with a strong sense of reality; thus, we did a lot of research to find the best solutions to realize the director's ideas.

What was his approach about visual effects?
Benny has his own thought on visual effects. “Don’t let the exaggerated special effects influence the drama development” was his direction on visual effects.

The location of the Temple is beautiful. Can you tell us where it is, and did you enhance it?
The temple was filmed in Zhejiang, China. We researched various temples from that region and era, then proceeded to add our own creative enhancement to portray the Temple in a more grand and magnificent manner.

Have you made some set extensions for the Temple and how did you create them?
Yes, we extended part of the temple. From our research and various reference photos, we built a 3D model in Maya. With camera-tracking software we tracked the live footage and incorporated the solve into our 3D scene, then added texture and lighting to match the live footage. Finally, our compositing team put all the elements together and fine-tuned the image for output.

Can you tell us what you have done on the Restaurant Ambush sequence?
A lot of the flying axes, broken glass, and table debris were CG simulated for the actors’ safety and for us to have more control to frame the shot properly.

How did you make the shots in which Nan is hit by the horses?
We shot the cart and the girl as separate elements. First we filmed the girl crossing the road; via a wire rig attached to her, we were able to simulate her being hit by the horse carriage. Then we filmed the carriage part of the shot with a stunt crew, and in After Effects we rotoscoped and composited the two elements together.

How did you create the huge canyon for the chase sequence?
We had an expert stunt driver drive the cart next to a white line we drew on a country dirt road, and then we built the canyon in 3D. After camera tracking the live footage, we brought the camera into the 3D software and textured the canyon accordingly.

Then our compositing department rotoscoped out everything beyond the white line in the live footage and composited the CG canyon into the plate to make it look like the carriage was driving precariously on the edge of the cliff.

Can you tell us more about the Temple destruction?
A life size set of the Temple was constructed and blown up for that scene. Then we filmed the soldier’s reaction and cannons separately and composited the elements together.

Did you use models for this sequence?
It was determined that a 1:1 set piece of the temple was to be built and blown up to properly portray the scale of the explosion.

In many shots of the final sequence there are big explosions really close to the actors. How did you mix those?
We tried to film as many practical special effects on location as we could, so that the lighting of the environment would be as realistic as possible, but for obvious safety reasons all the actors, explosions, and bombs were shot separately. With the actors filmed separately, the pyrotechnics crew was able to create big cinematic explosions. In fact, only the cannons were computer generated. All the elements were then composited in After Effects.

How did you create the different environments and especially the wide shot of the Temple with the British soldiers?
Research is a very important part of our work, so we tried to find as many photos of Shaolin in the 1920s as we could. In addition to the photos from the various location shoots, our creative team was able to paint a detailed matte painting to composite into the background of the live footage.

How was the beautiful shot in which Andy Lau falls onto the Buddha's hands filmed? Did you work on it?
That was mostly done on set. Andy Lau was suspended with a wire rig and the stunt crew were able to simulate him falling on the Buddha. The wires were then removed in post production and the blood was simulated in 3D and composited in After Effects.

What was the biggest challenge on this project and how did you achieve it?
The biggest challenge was generating new ideas. Let's take the huge canyon in the chase sequence as an example. As we said before, it took an abundance of time and effort to create the scene. Thus, we had to strike a balance between time and production cost.

Was there a shot or a sequence that prevented you from sleep?
It would be the huge canyon in the chase scene, as it had to create a really tense atmosphere that makes the audience feel excited.

What are your softwares and pipeline at Menfond?
We mainly use After Effects, Autodesk Maya, various 3D tracking software for matchmoving live footage, and Photoshop. First we gather reference for the elements we have to model for the project at hand; then our modeling department creates the elements for our motion department to animate, followed by texture and lighting, who realistically render out the elements to composite into the live footage. During this time the live footage is also rotoscoped and has any wire rigs removed as necessary. Finally, all the elements are put together in compositing and output to be scanned to film.

What do you keep from this experience?
This was a really great opportunity to work with Benny and his professional team again. We learned how to cooperate with a director and deal with emergencies, which was a really valuable experience for me and my team.

How long have you worked on this film?
We worked for about a year.

How many shots have you done?
Around 350 CG shots

What was the size of your team?
Over 100 artists.

What is your next project?
We are now working in Shanghai with famous Hong Kong movie star Cecilia Cheung, Zhang Ziyi and Korean star Jang Donggun on our new movie DANGEROUS LIAISONS. We will make Shanghai's former glory come alive again in this movie.

What are the four movies that gave you the passion for cinema?
TERMINATOR 2, JURASSIC PARK, TRON (1982) and THE ABYSS (1989).

A big thanks for your time.

// WANT TO KNOW MORE?

Menfond: Official website of Menfond.

// SHAOLIN (XIN SHAO LIN SI) – TRAILER

© Vincent Frei – The Art of VFX – 2011

THE THING: Kyle Yoneda – Effects Supervisor – Mr. X

Kyle Yoneda has worked in visual effects for over 10 years. In 2005 he joined the team at Mr. X and has worked on projects such as SHOOT ‘EM UP, RESIDENT EVIL: EXTINCTION, WHITEOUT and SCOTT PILGRIM.

What is your background?
I'm the Effects Supervisor at Mr. X Inc. For THE THING, I was the in-house CG supervisor. I have 12 years of experience in VFX work, the last 7 of which I have spent here at Mr. X.

How did Mr. X get involved on this show?
We’ve worked with Universal on a number of projects over the years. They approached us to bid on THE THING and we were really excited to get on board. Because we have a lot of experience with CG vehicles, digi-doubles and CG environments we were a great fit for the project.

How was the collaboration with director Matthijs van Heijningen Jr. and Production VFX Supervisor Jesper Kjolsrud and his team of Image Engine?
The collaboration with Matthijs van Heijningen Jr. went mostly through the VFX supervisor, Jesper Kjolsrud. We worked together on getting the look of the sequence to be what Matthijs wanted. With us in Toronto and Jesper in Vancouver, we communicated mostly over the phone, with previewing tools to close the distance. It was a great creative process, and working with both was exciting and rewarding as we all watched the shots build the story of the opening sequence.

What have you done on this show?
The sequence Mr. X worked on is the opening sequence, up to the title shot of the movie.

How did you recreate the vehicle?
Although many of the shots featured an actual Spryte that we shot on set here in Toronto, we did make a digital double for some of the more harrowing moments in the sequence. We modeled the Spryte in Maya, textured it with Mari, and lit and rendered it in V-Ray.

What was the real size of the sets?
The sets we worked with for our sequence were really only the Spryte itself, so in that case they were all real world scale.

Can you explain to us in details the ice breaks?
Ice breaking was something we originally planned to animate by hand, with that being the driving force of our effects work. But we had just finished our physics tool for Houdini that uses the Bullet solver, and it was giving us very fast, detailed results on other projects, so we moved forward with a full physical simulation of how the ice would break away in real life.

The real challenge with the ice breaking away is that it was ice with a foot of snow on top of it. We had to try to sell the idea of a huge amount of soft snow under the Spryte's treads and have it simulate in a directable and convincing way.

We had large chunks that drove the simulation. Once we had the physics feeling just right, we broke up all of their edges and ran another simulation pass on the broken edges. Once these simulations were done, we added a pass of falling particles that collided with the two simulations. We then seeded this particle animation with millions of additional particle simulations to fill in the density needed for the amount of snow falling with the ice.
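This layered approach — a coarse driving pass, then finer passes seeded from it — can be illustrated with a toy two-pass setup (illustrative numbers only; the real setup used Houdini and millions of particles):

```python
import random

GRAVITY = -9.8  # m/s^2

def primary_sim(start_height, frames, dt=1 / 24):
    """Coarse driving pass: one ice chunk falling under gravity,
    returning its height per frame."""
    y, v, path = start_height, 0.0, []
    for _ in range(frames):
        v += GRAVITY * dt
        y += v * dt
        path.append(y)
    return path

def seed_secondary(path, per_frame=50, jitter=0.3, seed=7):
    """Detail pass: scatter light snow particles around the chunk's path,
    standing in for the millions of seeded particles in the real setup."""
    rng = random.Random(seed)
    flakes = []
    for y in path:
        for _ in range(per_frame):
            flakes.append(y + rng.uniform(-jitter, jitter))
    return flakes

path = primary_sim(20.0, frames=24)   # one second of coarse motion
flakes = seed_secondary(path)          # secondary detail driven by it
```

The key property is that the cheap primary pass is the only thing that needs directing; the expensive detail passes inherit its motion automatically.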

How did you animate the ice breaks and the fall of the vehicle?
The Spryte was also simulated with our Bullet solver and was actually used as a very heavy weight to begin the breaking of the ice. Once we had the initial simulation feeling right, we replaced the proxy Spryte object used in the simulation with the rigged Spryte and keyframed it from there.

How did you create this huge environment both outside and inside?
Both environments were actually very tricky to make look believable. On top of the ice, the only things that could really give a sense of scale were very small details, such as the wavelet patterns that wind produces in packed snow, and the way the light plays off the snow and sparkles as the eye moves across it. We spent a great deal of time texturing and lighting it. The treads were what helped us most in giving a feeling of scale in such a barren environment.

The ice crevasse itself was a huge challenge, primarily because for a large duration of the sequence we could only use one light source: the sky. It was also very important to achieve that subsurface scattering effect that ice has. While we were tasked to mimic real world lighting conditions, we also had to exaggerate reality to produce shots that were frightening as well as beautiful, just like an actual ice crevasse. We ended up using Houdini to light the ice, snow, and chunks of ice with point clouds for subsurface properties and custom shaders for refraction and reflections. V-Ray was used to light the Spryte and, on occasion, digi-doubles.

Can you tell us more about the digi-doubles?
The digi-doubles were scanned on set, then modeled and textured based on those scans, rigged and animated in Maya, and lit and rendered with V-Ray. Considering how dark the environment was when digi-doubles were needed, they ended up being a straightforward process, though one that added to the violence of the situation the characters were going through.

How did you create the ice and the snow?
The ice chunks that fall through the crevasse were a mix of rigid body dynamics simulated in Houdini’s DOPs and our own Bullet solver. We had them splash and trail snow whenever they made contact with a wall of the crevasse or the Spryte.

Did you develop specific tools for those?
We created an OTL to help distribute all the chunks and direct their animation. We could then decide how fast, how brittle, or how wet the chunks were, as well as when they would hit a wall and explode. Other tools included the aforementioned Bullet solver we developed, as well as our multi-particle simulator that we used to produce 100+ million particles per scene.

What was the biggest challenge on this project and how did you achieve it?
I believe the lighting inside the crevasse was the most difficult. Often with simulations of practical events, the heavy lifting is done by the physics behind the solvers, the talent of your artists, and the directability of your tools. The look of the crevasse was such a subjective thing to each viewer that it was difficult to find the desired look.

Was there a shot or a sequence that prevented you from sleep?
The entire sequence was a consistent challenge. Each shot had its own unique needs, and they were all very meaty CG shots. The end result is something that I believe is a convincing and unique glimpse into what it could be like to fall into a crevasse in a glacier.

What do you keep from this experience?
The importance of craftsmanship, and the story of what you are trying to portray visually.

How long have you worked on this film?
Full production for our work was scheduled at 8 months.

How many shots have you done?
38 shots for the opening sequence.

What was the size of your team?
Throughout the production it would fluctuate with the other projects Mr. X is involved with, but at its peak a team of 20 was working on this sequence.

What is your next project?
We usually have quite a few projects on the go at one time. Currently we are working on RESIDENT EVIL: RETRIBUTION, MAMA, COSMOPOLIS and SILENT HILL: REVELATION 3D.

What are the four movies that gave you the passion for cinema?
For me, visually complex movies were always my weakness, as well as a huge inspiration pushing me into a career in effects for film.

AKIRA – directed by Katsuhiro Otomo
THE CITY OF LOST CHILDREN – directed by Marc Caro and Jean-Pierre Jeunet
BLADE RUNNER – directed by Ridley Scott
GHOST IN THE SHELL – directed by Mamoru Oshii

A big thanks for your time.

// WANT TO KNOW MORE?

Mr. X: Official website of Mr. X.

© Vincent Frei – The Art of VFX – 2011

JOHNNY ENGLISH REBORN: Rob Duncan – VFX Supervisor – Framestore

Rob Duncan has worked for nearly 15 years at Framestore. He has participated in projects like LOST IN SPACE, THE BEACH or STALINGRAD. As a VFX supervisor, he worked on films such as BLADE II, UNDERWORLD, AUSTRALIA or WHERE THE WILD THINGS ARE.

What is your background?
I did a graphic design degree at college, but fell into digital effects when it was still in its infancy in the UK. I started off working on a variety of television shows, commercials and music videos, and when the technology became available I moved into feature films.

How did Framestore get involved on this show?
We have a good relationship with Working Title and they couldn’t think of anyone better to hand the work to! This was never going to be a mega-budget production and, without trying to sound like a furniture salesman, they trusted us to deliver quality work at an acceptable price.

How was the collaboration with director Oliver Parker?
It was great – he was always very available and approachable, and took on board my suggestions where appropriate. I would extend that to the producers and also to Rowan Atkinson, who was involved throughout the process. My producer Richard Ollosson and I could have a very open and honest conversation with them about what would be possible within the budget constraints, and we would all move ahead on that basis with an agreed plan of action.

What was his approach towards visual effects?
I’m sure he would be the first to admit that VFX is not really his area of expertise, so he was happy to leave many of the creative decisions to me.
Because this was first and foremost a comedy film, it was important that the VFX supported the story, and didn’t overwhelm or distract from the comedy moments. When I spoke to him after we had finished, he pointed out that he could easily have gone down the cheesy route and not cared too much about the VFX, but decided that having good quality invisible effects would be a better choice, since it would help to reinforce the thriller aspects of the story, leaving the comedy to come from the performances. No-one wanted the laughs to come in the wrong places.

Can you tell us more about the Tibet environment? How did you create it?
I was sent solo to Argentina to obtain material that could be used for digi-matte paintings for the « Himalayas » scene. This was done with a local film crew while the main unit continued in the UK.
Over the course of one day’s shooting, we got some fantastic helicopter footage of the Andes for the establishing shot of the monastery, and when we landed we trekked off into the foothills to get some more controllable tiled plates that we would need for the ground level monastery panoramas. In post, we also had to adapt the original footage to make a winter scene, to sell the idea of the changing seasons.
The monastery courtyard you see in the film is a full-size set built at Ealing Studios which we had to top up only slightly, otherwise our task was to create the view beyond the walls.
There was another scene set in the Himalayas which we had almost finished the VFX for, but it got cut for editorial reasons.

What references and indications did you receive from the director?
Because the Ealing-for-Tibet scenes had been shot relatively early in the schedule, there was time to get a local Argentinian scout to offer up some options and send them to us. We were then able to go through and narrow them down to 2 or 3 possible locations before I set off for South America. The production then left it up to me to get the right stuff – we didn’t need anything too specific to fit the brief; as long as it could pass for the Himalayas and felt remote from civilisation, everyone would be happy.

About the London sequences, how did you create the different panoramas we can see outside the office of MI-7 boss?
The boss’s office was a real location in the City of London, meaning that some of the window views are real. Like a lot of VFX work in this film, we were merely extending, extrapolating or replicating what could be shot for real rather than creating from scratch – I prefer this approach wherever possible since the basis of the scene is real and therefore it is much harder to stray off course when adding in the digital work.
In this case, where we needed to invent the exterior view, it was created from tiled plates filmed from a different vantage point on the same building, from a window cleaner’s cradle suspended at the same height as the main unit footage. Unfortunately we were unable to shoot the plates on the same day, which meant that our lead compositor Bruce Nelson had to completely relight the material to make it match (sorry Bruce!).
The plates were then split up into different layers according to depth so we could create some parallax when the camera was translating inside the office. We built some very basic geometry of the interior so that we could be sure the various camera views had some consistency.
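The depth split works because, under a pinhole camera model, a flat layer's image shift for a sideways camera move is inversely proportional to its depth. A minimal sketch of that relationship (the focal length, sensor width and resolution below are illustrative values, not taken from the production):

```python
def parallax_shift(camera_dx_m, depth_m, focal_mm=35.0,
                   sensor_mm=24.576, image_width_px=2048):
    """Horizontal image shift in pixels of a flat layer at a given depth
    for a lateral camera translation, using a simple pinhole model.
    Focal length, sensor width and resolution are illustrative values.
    """
    # Pinhole projection: image displacement = focal * translation / depth.
    shift_mm = focal_mm * (camera_dx_m * 1000.0) / (depth_m * 1000.0)
    return shift_mm * image_width_px / sensor_mm

# A 10 cm camera slide: the near layer moves far more than a distant one,
# which is exactly the parallax the depth split is meant to recreate.
shifts = [parallax_shift(0.1, depth) for depth in (2.0, 10.0, 100.0)]
```

Splitting the plate into a handful of such layers, each offset by its own shift, recreates convincing parallax without any detailed geometry.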
The reason we had to provide an exterior view at all was because we needed a false wall and window out of which the unsuspecting cat could be pushed, meaning it only had 1 metre to fall rather than 30 metres – it certainly cut down on the re-set time and the number of cats used.

What did you do on the roof chase sequence in Hong Kong?
Ultimately, very little. In pre-production this was potentially one of our most difficult sequences, since the buildings chosen to stage the action were due to be under refurbishment while we were shooting, meaning that they were going to be covered in bamboo scaffolding – although this would have been very photogenic, it would have rendered redundant the key action beats written into the script.

At this time we were preparing to digitally reconstruct all the buildings in a two-block radius in order to remove the scaffolding. On the recce trip, I took thousands of digital stills of the buildings’ facades and other details, so we could use them as textures. However, agreement was reached with the local authorities to remove the bamboo where it would otherwise be in shot, so we were able to film the sequence unencumbered – I cite this as an example of solving a problem in pre-production rather than leaving it until post, when it would have been considerably more expensive to fix and, perhaps just as importantly, wouldn’t have contributed to the storytelling in any meaningful way.
Luckily, the bamboo which remained on the blindside of the building became a story point because it was used for the henchman’s final escape.

Can you tell us more about the helicopter sequence? Which is the balance between real stunts and CG?
When I originally read the script, it seemed to suggest that an extensive amount of CG helicopter shots would be needed to achieve the required action. However the production were nervous about committing to such a large VFX sequence (understandably since it would have eaten up the whole digital budget, and then some) and therefore found a largely practical solution by shooting most of the action on a test track closed to the public. This allowed the pilot to fly very low to the ground and close to cars driven by trained stunt people.

Although this reduced the real-to-CG balance considerably, it didn’t eliminate it entirely, so we still had to create a digital replica. Our CG supervisor John-Peter Li and our lead compositor Adrian Metzelaar did a brilliant job of matching it, using a bespoke side-by-side setup that I had shot in between main unit takes. This enabled us to make it completely indistinguishable from the real helicopter long before we knew which shots would be required for the film, and acted as a proof of concept should there be any scepticism about the believability of the digital vehicle. The shots in the film were approved by the client on version 1. Elsewhere, I was sent up in another helicopter to grab aerial plates for some of the action which takes place inside the cockpit – the actors were filmed safely on the ground against greenscreen first, so that we knew which angles and lens sizes to shoot when airborne.

How did you create the take off of the helicopter and the cutting of the trees?
The takeoff was performed by the real helicopter (this time on a private golf course which had its own helipad, so it was perfectly safe to do so). We had the digital helicopter on standby for this shot, but it was ultimately not required.
Apart from the fact that it would have been too dangerous to attempt, the stunt would have required a huge re-set time if we had tried to chop down the trees in situ. Instead, we shot a plate of the trees standing up to achieve the composition, and then they were laid flat on the ground for the pilot to determine the correct flightpath as he passed safely overhead, causing no destruction.

Some weeks later, the trees were shot upright (one at a time) against bluescreen on the Pinewood backlot, with the ‘slicing’ created by placing explosive charges on the trunks. There was some discussion in the planning stages about using an industrial tree-logging machine, but the slicing would not have been instantaneous and the device itself would have interfered with a lot of the flying debris. In order to avoid obvious repetition, we blew up 10 trees, all at different perspectives to match the original plate layout. They were shot individually so that we could get perfect synchronisation with the helicopter’s path. Even this wasn’t chaotic enough, so we shot some additional debris which would help to fill up the frame with fine leafy material.

About the final sequence, can you tell us about the creation of the building on top of the mountain and its cable car?
This started as an Art Department concept drawing, which we then adapted/developed to make it fit on to the chosen location of l’Aiguille du Midi near Chamonix, France.
In the story, le Bastion is meant to be a Swiss government fortress, inaccessible except by the single cable car which has a key role at the end of the film.
The Production Designer Jim Clay was very keen for the building to have a strong minimalist look, so of course it was impossible to find anything which fit this brief on top of a mountain. Because l’Aiguille du Midi actually has an observatory on the summit (the highest in Europe) serviced by a cable car system, it proved to be an excellent aid in terms of composing the shots – we were mostly able to cover up the real buildings with our digital creation and then erase what little was left.
The full-size cable car (which was used to stage the fight sequence) could not be used in the big establishing shots, because of the camera moves involved, so we built a digital replica with a combination of lidar scans and photographic textures. Whenever we saw the cable car in a wide shot, we stripped out the actors from the real prop and placed them in the digital version.

Can you explain to us in detail how you created the huge environment?
The environment was always going to be a combination of real locations and invented ones, in order to facilitate the journey mapped out in the script. For instance, the summit existed so we didn’t need to create that (apart from the aforementioned le Bastion building), and when Johnny falls out of the cable car this was all shot on the slopes in Megeve, France and Surrey, England.
The major part of the digital build would be confined to the cable car journey, but that brought with it its own challenges since the total distance was quite substantial. During the sequence the camera would be looking in all directions, so we had an environment which potentially covered hundreds of square kilometers.

We knew it would be infeasible to build the whole environment, so we decided to take a modular approach whereby we built a smaller section of the mountain slope which could be bolted together in different configurations to prevent it looking like a repeat. This took care of the close- to mid-ground terrain and beyond that, where parallax was less of an issue, we were able to resort to layered-up digi-matte paintings.
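The modular bolt-together approach can be illustrated with a toy layout routine. The variant count, the 90-degree spins, and the no-adjacent-repeat rule are my assumptions for the sketch, not details of the studio's actual tool:

```python
import random

def layout_tiles(n_slots, n_variants=4, seed=7):
    """Bolt modular slope tiles together along the route, never placing
    the same variant twice in a row so the repetition isn't obvious.
    Variant and slot counts are illustrative, not production values."""
    rng = random.Random(seed)
    layout = []
    for _ in range(n_slots):
        # Exclude whichever variant filled the previous slot.
        choices = [v for v in range(n_variants)
                   if not layout or v != layout[-1][0]]
        variant = rng.choice(choices)
        # A random 90-degree spin further disguises the reuse.
        rotation = rng.choice((0, 90, 180, 270))
        layout.append((variant, rotation))
    return layout

route = layout_tiles(12)
```

A small set of tiles covers an arbitrarily long cable car journey this way, with only the distant, low-parallax backdrop left to matte paintings.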

Can you tell us how you get your references for this environment?
As explained before, the real l’Aiguille du Midi is a working observatory, open to the public. We discussed taking the main unit crew up there, but due to time and equipment constraints this was ruled out, so I went as a tourist along with my colleague Dominic Ridley. We spent two sessions (different times of day) capturing the environment as digital stills which could later be worked up into digi-matte paintings and also used as textures for the CG parts of the mountain.
Because we were at such a high vantage point we had spectacular and uninterrupted views of the French Alps which didn’t require much modification to work for the story.
This worked perfectly for tiled panoramas (because we were on solid ground it was completely controllable), but we still needed some moving aerial plates for the establishers of le Bastion, so it was back in the helicopter again.
I also used the opportunity to shoot travelling aerial reference for the cable car’s journey by flying up and down a generic slope on a nearby mountain range – I had mapped out in advance about 10 generic angles that I thought would be good for later reference. Although these passes would be unusable in the final film – it would have been impossible to matchmove them to work with the greenscreen fight footage – they proved invaluable for laying out the sequence and were used in the early temp screenings before any CG backgrounds were available.

How did you create the free fall shots?
In the usual fashion – stringing the actor up on wires, throwing him around, and blowing air at him to make it seem as if he is travelling very fast.
Although this would be a typical studio setup because of the rigging involved, I insisted that we shoot outside so that we got natural daylight, which is almost impossible to recreate on stage and which tends to blow the illusion straight away. Rowan was hanging from a crane arm, so rather than the rigging moving past the camera, the camera moved past the actor to get a sense of movement.

These greenscreen shots were intercut with a stuntman doing the skydive and parachute drop for real in the Alps, which was sometimes made to look higher and more consistent with our digital environment by replacing the live action backgrounds.

Can you tell us more about the big explosion of the cable car?
This shot was a real hybrid – there was the full-size cable car suspended on a crane so that we could get the villain’s performance from the right perspective; there was a third-scale practical miniature built by Mark Holt’s SFX crew rigged with explosives; and then there was the digital replica which was used up to and slightly beyond the explosion.

The miniature was blown up in the Paddock Tank area at Pinewood and was photographed at 125fps to achieve the right scale. Three miniatures had been built in case repeats were needed – we got the best action on take 2 and so used the third one for alternative angles.
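The 125fps figure is in line with standard miniature practice: a common square-root rule of thumb says to overcrank by at least sqrt(1/scale) so gravity-driven motion reads at full size on playback. A quick sketch of that rule (the rule is general practice, not something stated in the interview):

```python
import math

def miniature_frame_rate(base_fps=24.0, scale=1.0 / 3.0):
    """Classic square-root rule of thumb for shooting miniatures:
    overcrank by sqrt(1/scale) so gravity-driven motion reads at
    full size when the footage is played back at base speed."""
    return base_fps * math.sqrt(1.0 / scale)

minimum = miniature_frame_rate(24.0, 1.0 / 3.0)  # about 41.6 fps for third scale
```

Shooting well above that minimum, as here, is common for pyrotechnic events, where the extra slow motion helps sell mass and scale.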
Our lead compositor Mark Bakowski then added a CG missile and smoke trail (mixing in some real smoke), more explosion and debris elements from our general library, and then combined everything with a plate shot in the Alps, even making the branches react to the shockwave created by the blast.

What was the biggest challenge on this project and how did you achieve it?
The biggest challenge was definitely the cable car fight sequence, not only because of the scale of the environment that it took place in, but more specifically because of the sheer number of photorealistic CG trees needed to cover it. Our lead TD Dan Canfora took some off-the-shelf tree generating software and refined it, at the same time as developing a tool to quickly and easily populate a mountainside with low res instances for layout purposes, and then replacing them at render time with the full res versions. He had to ensure that we would be able to render thousands of trees, some very close to camera, and he came up with a number of tricks to accomplish this such as simplifying the geometry depending on distance to camera and/or splitting into layers, etc.
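The proxy-to-full-res swap described above can be sketched as a simple distance-based level-of-detail pick. The thresholds and LOD names here are hypothetical, chosen only to illustrate the idea:

```python
def pick_lod(distance_m):
    """Choose a tree level of detail from distance to camera. The idea:
    layout runs with cheap proxies, and render time swaps in expensive
    geometry only where the camera can tell the difference. The
    thresholds are illustrative, not the production values."""
    if distance_m <= 20.0:
        return "full"   # hero geometry near camera
    if distance_m <= 80.0:
        return "mid"    # simplified geometry
    return "card"       # flat card in the far distance

# A hypothetical scatter of trees down the slope: only the nearest
# few ever pay the full render cost.
distances = [5.0, 15.0, 40.0, 75.0, 200.0, 1500.0]
lods = [pick_lod(d) for d in distances]
```

Because the far majority of instances resolve to cheap representations, thousands of trees stay renderable even with a few hero trees close to camera.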
We were also aware that the edit was frequently changing, so we had to remain flexible in terms of background layout without foregoing a sense of continuity when the edit was finally locked – it meant that the majority of backgrounds were only rendered in the last 3 weeks of the schedule.

Was there a shot or a sequence that prevented you from sleep?
I slept like a baby – I would often wake up in the middle of the night, screaming. Actually, because we were in constant contact with the cutting room, we were able to anticipate problems before they took root and could come up with a plan to deal with them. I would like to think that as the VFX industry has matured somewhat over the years, it has become easier to deal with a changing and expanding workload during a (more-or-less) normal working day. Hopefully you won’t have to ask that question for much longer!

What do you keep from this experience?
That I really don’t like helicopters.

How long have you worked on this film?
About a year in total, if you include the pre-production period.

How many shots did you do?
We ended up with about 230 shots in the film, plus a few dozen omits. Baseblack, our friends over the road, also did about 60-70 shots.

What was the size of your team?
About 50 I think.

What is your next project?
I’m not at liberty to say.

What are the four movies that gave you the passion for cinema?
It would be unfair to try and narrow it down to only four, since there have been so many that have contributed in different ways. Having said that, STAR WARS hit me at just the right time in my youth and really opened my eyes to what was possible – most of us in the VFX industry owe a great deal to that film.

A big thanks for your time.

// WANT TO KNOW MORE?

Framestore: Dedicated page about JOHNNY ENGLISH REBORN on Framestore website.

© Vincent Frei – The Art of VFX – 2011

CAPTAIN AMERICA – THE FIRST AVENGER: Alessandro Cioffi – VFX Supervisor – Trixter

I had the pleasure to interview Alessandro Cioffi in 2010 for his work on NINJA ASSASSIN at Trixter. Since then he has worked on films like PERCY JACKSON, IRON MAN 2 or X-MEN FIRST CLASS.

How did Trixter get involved on this show?
We first collaborated with CAPTAIN AMERICA’s VFX Supervisor Christopher Townsend on NINJA ASSASSIN, in late 2008, and then on PERCY JACKSON. We had the pleasure of striking up a nice friendship over the years, and many times we’ve looked forward to working together again. Eventually, around the end of March or beginning of April 2011, we received an informal telephone call from Chris, who was by that time extremely busy finishing the visual effects on the show, in which he asked us if we felt up to embracing the challenge of a tricky sequence, the so-called Flashback sequence, which presented some complexity for being somewhat different from the rest of the show. And guess what? Two hours later we set up our first Cinesync session for an initial briefing.

How was the collaboration with director Joe Johnston and Production VFX supervisors Christopher Townsend and Stephane Ceretti?
It has been an excellent collaboration: rewarding, effective and mostly easygoing, even in moments of hectic hard work. Throughout the entire production we mainly had contact with Chris Townsend, from whom we received briefings and feedback and with whom, as said, we had frequent Cinesync sessions. We also received Editorial comments almost on a daily basis, as the Flashback sequence, for instance, is all based on a visually vehement concept made of frequent and multiple dissolves over a variety of elements, creatively combined. And naturally we received director Joe Johnston’s comments as well every time one or more of our shots were screened for him; he always had good words of encouragement and recognition. Very supportive, I was glad to find, and so was the team.

Funnily enough, I didn’t collaborate with Stephane on CAPTAIN AMERICA directly, but as we started our work on it we had just wrapped on another international show, X-MEN: FIRST CLASS, on which Stephane Ceretti was VFX Supervisor, so for over two months I had the opportunity to work with him anyway.

What sequences did you work on for this show?
We worked on the above-mentioned Flashback sequence, describing how Schmidt descends into madness and eventually becomes the Red Skull, and on the train sequence where Captain America tries to hijack the train and where he fatally loses his friend ‘Bucky’ Barnes. Plus a handful of shots from other sequences. In total we worked on over 115 shots.

What references and indications did you receive for the flashback sequence?
For the Flashback sequence we were provided with an editorial version of the sequence for timing reference and naturally plenty of scans. Many backgrounds were then recreated and enhanced by us in a creative way, and the picture compositions were rebalanced according to the story that every shot had to tell. As said, we were given the opportunity to be proactive in the look development of this sequence, and the process naturally involved many long talks and Cinesync sessions with Chris, which we approached with great enthusiasm!

How did you create the shield?
As mentioned, we came on board pretty late, so time was raging against us from the beginning. Production was fortunately very supportive, so we were provided with a rough model and a vast array of pictures of the shield at different stages of deterioration. From there it was fairly straightforward to re-texture it and shade it according to the sequence we were working on. For the animation, we tried to anticipate our client’s wishes and preferred to present two or three versions at a time, so that Joe and Chris could foresee eventual changes and possibly streamline the approval process. That was the same approach we had on the digital double of Bucky falling from the train. There’s one shot in particular, which is sort of pivotal in the storyline, a POV of Captain America where he sees his friend dying in a crevasse. Joe wanted to extend the fall substantially, so we switched from a live action element of the actor, shot on green screen, to a digital double of him. Here we tried to present several alternatives per session in order to speed up and facilitate their decisions.

What references or materials did you receive from production for the blue bolt FX, and how did you create them?
This was one of the main challenges, as we had to seamlessly « plug » into other vendors’ work on the same sequences, in this case Double Negative, which meant that we had to duplicate some of their effects along with creating our own, but in the same style. Naturally we were provided with plenty of references, QT movies and still images, which were of great help. After one or two weeks of playing around in Houdini, and long sessions with Chris Townsend discussing every detail of the effects, we were ready to present our first shots with the bolts in them.

Did you share some assets with the other vendors?
Never directly, everything was coordinated and organized by the Production.

Can you tell us more about your greenscreen composites?
On the train sequence we mostly worked on interior shots. For those shots involving exteriors, like the green screens or the Bucky sequence, BG plates were provided; we then enhanced the scene by adding atmospherics, clouds and so on, taken from our vast library of live action elements. It’s my firm belief that using the « real thing », like actual snow or true wispy clouds shot on black, has to be the primary way to go, using CG simulated elements only when absolutely necessary.

How did you manage the stereo conversion?
To be precise, we only converted the flashback sequence, as involving a third party for that specific sequence would have been too complex. The original idea was to give it a sort of disturbing atmosphere. Chris asked us to come up with a few ideas for grading the sequence and adding lensing artifacts in an artistic but almost retro way. After a couple of layouts, we went for the look of the « three strip Technicolor » process. In this sequence, where the cut is very dynamic and uses a number of alternating cross dissolve transitions, Townsend’s idea was to play with the depth during the stereo conversion process to create a disorienting feeling for the audience.

Was there a shot or a sequence that prevented you from sleep?
More than a shot or a sequence in particular, it was the whole schedule that felt kind of scary! Anyway, we managed to finish over 115 shots in a little less than 6 weeks with a team of 20 people between artists and producers. We tried to optimize our resources and our time by keeping the production very tidy and by asking for and receiving constant feedback. We tried to stay well organized. And cool, most of all!

What do you keep from this experience?
Great excitement and loads of adrenaline!

What is your next project?
We’re currently doing CG creatures for JOURNEY 2: THE MYSTERIOUS ISLAND, providing FX work for James McTeigue’s THE RAVEN, and we’ve just finished DEATH OF A SUPERHERO, a German-Irish co-production which is going to be screened at the Toronto Film Festival in September. There are more exciting projects in the pipeline, but I can’t disclose details yet.

A big thanks for your time.

// WANT TO KNOW MORE?

Trixter: Official website of Trixter.

© Vincent Frei – The Art of VFX – 2011

THE THREE MUSKETEERS: Eric Robinson – Digital Effects Supervisor – Mr. X

After working several years at IMAX on projects such as BATMAN BEGINS, V FOR VENDETTA or 300, Eric Robinson joined the team of Mr. X in 2010. He has participated in films such as RESIDENT EVIL: AFTERLIFE and HANNA.

What is your background?
I come from an artist background in both animation and compositing. I joined Mr. X as a Production Manager 2 years ago after spending the previous 4 years learning everything about 3D stereo at IMAX. I am now a Digital Effects Supervisor and resident stereographer at Mr. X and was Digital Effects Supervisor on THE THREE MUSKETEERS.

Mr. X is used to working with director Paul Anderson. How was this new collaboration?
The growing rapport with Paul has been terrific for us at Mr. X. Over the years we have grown to understand his vision and now he trusts us to take his ideas and run with them. With both Three Musketeers and Paul’s most recent project, Resident Evil: Retribution, we have been pulled in from the very onset of pre-production helping to establish the look and feel of his films from the initial concept design stage. Paul and the producers, Robert Kultzer of Constantin Film AG and Jeremy Bolt of Impact Pictures have been tremendously supportive through so many projects.

What was his approach about visual effects for this show?
VFX were paramount in telling this story. From the fantastical airships to 17th century Paris, this was a story that couldn’t be told without visual effects. On the other hand we needed to achieve a level of realism, so that the VFX complemented the story, rather than being the story.

Can you tell us more about the main title and this big camera movement?
At one point when reviewing a version of the opening title shot with Paul, he said, « This is every boy’s dream come true. » The idea was to create a stylized treatment that would complement the narrative storytelling and set the tone for the film. Paul wanted to start with a 17th century parchment paper styled map that would implement a dynamic use of 3D stereo. As we open the shot, the camera drops into the map bringing to life the stereo feel with text that first appears to sit on the surface until the camera move reveals that it is actually hovering above. Topographical details of the map begin to emerge out of the parchment paper and as we fly through the scene the camera begins to reveal the battling soldiers, ships in the harbour and palaces scattered across the landscape. We see the Spanish army advancing towards France, then we pass over a burning village to a gauntlet of musket and cannon fire between the English and French armies until we end up at the golden effigies of King Louis and Queen Anne being confronted by Cardinal Richelieu and his guards. As the camera pulls back up, the screen is filled with 3D smoke and then the 3D Three Musketeers logo bursts through the haze.

Initially we began by blocking out the camera move so we would know where everyone should be placed. Utilizing the soldier assets that were created for the film we tweaked the levels of detail down and swapped out their uniforms to give that old-school lead-painted toy soldier look. We also repurposed the assets of the ships, Louvre, Notre Dame and Tower of London. The countryside was modeled in 3D using displacement maps of actual European landscapes and bump maps were used to increase the realism of the parchment paper textures. 2D and 3D rendered FX elements of cannon blasts, explosions, smoke and fire were also created. Then all the elements were brought into Nuke utilizing its 3D capabilities. Setting up a shallow depth of field through Nuke gave the miniaturization effect that sold the toy soldier look that we were going for.
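The miniaturization trick relies on an unnaturally shallow depth of field: at normal shooting distances, only macro photography of tiny objects produces so narrow a focus plane, so the eye reads the soldiers as toys. A rough sketch of the thin-lens circle-of-confusion math behind that choice (generic Python, not Mr. X’s actual Nuke setup):

```python
def circle_of_confusion(focal_mm, f_number, focus_mm, subject_mm):
    """Blur-spot diameter (mm on the sensor) for a point at subject_mm
    when the lens is focused at focus_mm (thin-lens approximation)."""
    aperture = focal_mm / f_number  # aperture diameter in mm
    return (aperture * abs(subject_mm - focus_mm) / subject_mm
            * focal_mm / (focus_mm - focal_mm))

# A background 4 m behind a subject focused at 2 m, on a 50 mm lens:
wide_open = circle_of_confusion(50.0, 2.0, 2000.0, 6000.0)
stopped_down = circle_of_confusion(50.0, 16.0, 2000.0, 6000.0)
```

Opening the aperture (dropping the f-number from 16 to 2) blurs the same background eight times more, which is the effect a defocus node in Nuke emulates from a rendered depth channel.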

How did you recreate Venice?
We started by going to Venice and shooting aerial plates on RED. It’s such a beautiful city that it almost plays as period today. Changing modern lights to lamps and torches goes a long way to making it look right. Projected matte paintings were used extensively here so that complex camera moves could be utilized and the parallax would work correctly.

What were your references for the recreation of Paris?
The production designer, Paul Austerberry, had numerous period maps and books with etchings of 17th century Paris. We used these as a starting point for our layout.

How did you take your references for the different locations in Paris?
We sent a team to Paris for a photo reference shoot, concentrating on Notre Dame.

The movie features a lot of set extensions. Can you tell us more about their creation?
The three main set extensions we had to do were for the two airships and then the roof of Notre Dame. These were all treated as highly detailed assets. Our team was responsible for creating seamless set extensions and entirely digital environments. Specifically for the Notre Dame roof fight sequence, which was shot green screen on a small set piece, our team was responsible for creating a CG replica of Notre Dame and the surrounding 360 degrees vantage point of Paris below.
We started this process by taking a lot of photo surveys. We sent a team to Paris to build a photo library containing thousands of meticulous pictures of Notre Dame for texture and modeling purposes. We created a 17 million polygon model of the cathedral that adhered to every single spec, down to the centimeter. The level of texture detail that was used to achieve the perfect amount of weathering, lead oxidization levels in the roof and soot induced damage was immense.
To accommodate for the amount of RAM required for each asset our team developed a texture caching system that would allow us to choose very specific resolutions for each portion of an asset ensuring that we were not wasting RAM space unnecessarily. We also implemented a proxying workflow system, which allowed for an efficient use of memory by re-using assets in multiple shots without the need to re-translate for rendering.
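The idea of the texture caching system described above can be illustrated with a toy resolution picker: load each portion of a texture only as large as its on-screen footprint requires. This is a hypothetical sketch of the general technique, not Mr. X’s proprietary code:

```python
def cached_resolution(full_res, screen_px, floor=64):
    """Pick the smallest power-of-two downscale of a texture that still
    covers its projected screen footprint (in pixels)."""
    res = full_res
    while res // 2 >= max(screen_px, floor):
        res //= 2
    return res

def ram_saving(full_res, chosen_res):
    """How many times less texture memory the chosen resolution uses."""
    return (full_res / chosen_res) ** 2

# A 4K map on an asset that covers only ~300 px of the frame:
res = cached_resolution(4096, 300)  # picks 512, a 64x RAM saving
```

Assets filling the frame still get their full 4K maps, while distant copies of the same building load a fraction of the data, which is also the point of the proxying workflow: one cached asset serves many shots.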

Can you tell us more about the creation of the air ships?
There were three paths of design we had to work with for the airships. There were initial concepts to work from that the art department created, led by Paul Austerberry. From these drawings a team of model builders began creating scale models of the two airships. With the aggressive schedule on the film, the airship set piece was built concurrently with the model builds. We did extensive photo surveys of both the scale model and the set pieces.

What was the biggest challenge with the air ships?
Sheer size! There are so many moving parts which are connected and need to interact with the various motions of the airships. The rigging needed to influence the balloon when the airship was turning or in high winds. We wanted to be able to get really close to the CG airships. To achieve this, a tremendous amount of detail went into their texturing. The balloon on the Cardinal’s airship had over 60 4K textures. The final poly count was in the 50 million range. With the airships playing in about 100 shots, they needed to work from many viewing angles, which didn’t allow for any shortcuts or compromises with them.
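To put those numbers in perspective, here is a back-of-the-envelope memory footprint for the Cardinal’s balloon textures alone (assuming uncompressed 8-bit RGBA, which is my assumption; actual production formats vary):

```python
textures = 60        # 4K maps on the balloon, per the interview
side = 4096          # "4K" square resolution
bytes_per_px = 4     # 8-bit RGBA (assumed)

per_texture_mib = side * side * bytes_per_px / 2**20  # 64 MiB each
total_gib = textures * per_texture_mib / 1024         # 3.75 GiB
```

Nearly 4 GiB for one asset’s colour maps alone is why the texture caching and proxying systems mentioned earlier mattered so much.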

Did you create some previs for this show?
We created all the previs for the show! We initially started with the ASxseq, the first reveal of Buckingham’s airship. This had both technical and artistic requirements as we needed to see how the airship scale played with regards to our « Louvre » (in fact, Herrenchiemsee). We start off from storyboards and then make shots that have a good feel to them. From there we need to make sure each is shoot-able, refining lens and framing, so that the DP, Glen McPherson, would have a guide of how to accomplish the shot. There was one shot in particular where D’Artagnan is arriving in Paris: it starts on D’Artagnan with rural France behind him. We then do a 180° pan as the camera rises to show Paris. This was shot on a small bridge in Bavaria. We created a virtual 50′ Technocrane and placed this on a mockup of the actual bridge so that we could work out an elegant move within the confines of a very small bridge.

Can you tell us more about the big storm in the air battle?
The storm was a place for the Musketeers to hide from Rochfort and his stronger airship. It had to be scary enough for a helmsman to refuse to fly into it, which resulted in his summary execution! We had to use this storm as both a dramatic storytelling device and a clue to the geography of the route the airships were taking. Our skies began as a hemispherical dome that could be rotated to properly align to the CG set. Inside of that dome we have midground and background clouds that were modeled and then converted to volumes to be rendered out of Houdini. In addition to those layers, we felt that to properly immerse the audience in the shot, hero foreground layers of mist were needed.

How did you create and manage the huge environment of Notre Dame and the two air ships?
We had to render the elements for these shots in multiple layers, with holdout mattes so the compositing artist could seamlessly blend the elements together. Notre Dame and the island it resides on were a huge endeavour. We really wanted a rich, detailed environment. By layering on more and more detail, from birds and laundry hanging from windows to apple carts, barrels and sacks of cloth, we were able to get a fully formed world that helped sell the scale of 17th century Paris. Creating the stained glass for Notre Dame was a particular challenge. In the modern world, massive stained glass windows are protected by Plexiglas or metal grating as well as being illuminated from within by electric light. In our Notre Dame we had none of this! We approached this by using very subtle colour and amounts of light to transmit through the windows. Bump maps and reflections provide the feeling of relief in the window surface.
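Rendering in layers with holdout mattes means each layer carries an alpha for whatever occludes it, so the compositor can stack everything with the standard premultiplied « over » operation. A minimal single-pixel sketch of that operation (an illustration, not the studio’s actual Nuke graph):

```python
def over(fg, bg):
    """Premultiplied-alpha 'over': fg occludes bg by fg's alpha."""
    return tuple(f + b * (1.0 - fg[3]) for f, b in zip(fg, bg))

# (r, g, b, a) premultiplied pixels for three render layers:
airship = (0.4, 0.3, 0.2, 1.0)  # opaque foreground layer
smoke   = (0.2, 0.2, 0.2, 0.5)  # semi-transparent mid layer
city    = (0.1, 0.2, 0.5, 1.0)  # background environment layer

frame = over(airship, over(smoke, city))
```

Because a holdout matte pre-cuts each layer where closer geometry sits, the layers slot together with no double exposure at the seams.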

About the final sequence. What was the real size of the set for the fight on top of Notre Dame?
There were three main sets for the Notre Dame fight. This included two levels of roof height for the actors to work on. There was a lower level, about one meter high for the bulk of the sword fighting. In part this was for actor safety. Rochfort, played so villainously by Mads Mikkelsen, wears an eye patch, making the complex fight sequences and sword work even more dangerous. The second area had a 5 meter drop from peak to ground. There is action where D’Artagnan, played by Logan Lerman, falls from the peak to a lower roof-line. The third location was where the end of the sword fight takes place. This was basically a 2×2 meter patch of landing. We did a set extension here for a shot that looks up at Rochfort and D’Artagnan fighting.

How did you split the work between Toronto and Montreal?
The Montreal office is a branch of Mr. X that specializes in compositing work. They did the lion’s share of work that didn’t require large amounts of CG. The Toronto branch has an extensive render farm and was more suited to handling the massive amounts of data required for many of the all CG shots in the film.

Did you develop specific tools for this show?
Our cloth pipeline was completely overhauled for 3M. The sails for the airships had to perform realistically so that the audience would believe in their flight. The same pipeline was used for Milady’s hair and dress over multiple shots. We also developed our own GPU-based fire tool called Torch to allow for real-time fire simulations, and a custom physics solver built on Bullet to help with real-time destruction simulations. Our V-Ray pipeline was also significantly expanded, and we introduced Mari to handle the sheer volume of textures required.

What was the biggest challenge on this project and how did you achieve it?
The Paris environment was killer! At the start of production various locations were scouted as possible stand-ins for Paris but nothing was found that really suited the needs of the film. It was established that the city of Paris had to be created as a full 3D environment to allow flexibility for the unpredictable camera movements. Due to the stereo nature of the film we were unable to use many projection cheats to create the city. Instead we projected matte paintings into a true 3D space rather than a traditional 2.5D space. All the paintings were baked down and brought into lighting where they were augmented with specular and dirt textures supplied by our texturing crew. In addition we had 40 extremely detailed hero houses that were mixed into the landscape of the scenes. These assets had to share the same look and style of the deep background matte paintings.
Adding ground level details such as trees, puddles, and people milling around gave a sense of chaos and complexity, while flags, banners and smoke billowing chimney stacks that riddled the cityscape helped sell the authenticity of 17th century Paris.
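Projecting a matte painting into true 3D space amounts to reprojection: each vertex of the city geometry is transformed into the paint camera’s view, and its screen position becomes its texture coordinate, so the painting stays glued to the geometry under stereo and free camera moves. A simplified axis-aligned pinhole version (an illustration of the technique, not Mr. X’s pipeline code):

```python
def project_uv(point, cam_pos, focal=1.0):
    """UV in [0,1]^2 where a world-space point lands for a pinhole
    projector at cam_pos looking down +Z (axis-aligned for simplicity)."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    assert z > 0, "point must be in front of the projector"
    # perspective divide, then map screen space [-1, 1] -> [0, 1]
    return (0.5 + 0.5 * focal * x / z, 0.5 + 0.5 * focal * y / z)

# A vertex straight down the projector axis hits the painting's centre:
uv = project_uv((0.0, 0.0, 10.0), (0.0, 0.0, 0.0))
```

Baking these UVs down per asset is what lets the lighting department treat a painting like any other texture map.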

Was there a shot or a sequence that prevented you from sleep?
The airships’ approach to Notre Dame was extremely involved. It was a massive collaborative undertaking with all departments working together toward a unified goal. From water sims for the Seine, which included boats and boatmen, to the matte-painted sky and countryside, CG buildings, thousands of people on the streets and birds in the air, there was no limit to how much we felt we needed to add to make this environment come to life.

How long have you worked on this film?
It was about 15 months of work on the film. Starting with script reads with Paul Anderson and the production team, through previs, shooting, editing and final DI. Mr. X. was involved in this project from start to finish!

How many shots have you done?
The final shot count of 274 doesn’t speak to the volume of work that was needed to final the show. There were many full CG shots, with huge environments.

What was the size of your team?
At peak production we were just over 100 staff on the show. This was split between Assets, Animation, Lighting, Effects, Cloth and Matte Painting, and TDs, with various production support staff and technical support.

What is your next project?
Another Paul Anderson movie, RESIDENT EVIL: RETRIBUTION. It’s filming now in Toronto, Canada and opens September 14th, 2012.

What are the four movies that gave you the passion for cinema?
CASABLANCA, SEVEN SAMURAI, STAR WARS 1977 and LAWRENCE OF ARABIA.

A big thanks for your time.

// WANT TO KNOW MORE?

Mr. X: Official website of Mr. X.

// THE THREE MUSKETEERS – VFX BREAKDOWN – MR. X

© Vincent Frei – The Art of VFX – 2011

Autodesk supports The Art of VFX!

I am very pleased to announce the exclusive partnership between The Art of VFX and Autodesk, starting today.
Autodesk’s support and confidence in my work is an important step for me, and I am delighted with this new collaboration.

Click here for the Autodesk announcement.

DON’T BE AFRAID OF THE DARK: Glenn Melenhorst – VFX Supervisor – Iloura

After working as a cel animator, Glenn Melenhorst joined the team at Iloura. He worked on films such as THE BANK JOB and CHARLOTTE’S WEB, then supervised the effects of AUSTRALIA, the TV series THE PACIFIC, PRIEST and KILLER ELITE.

What is your background?
After graduating from Film school, my first job was as a traditional Cel animator until 1987 when I moved into the emerging field of computer graphics. Having worked in commercials for many years, I changed streams and began to work in film as Iloura’s focus broadened to encompass feature film VFX.

How did Iloura get involved on this show?
We were invited to pitch on the show with several other vendors. I don’t think we rated our chances as very high given that some of the other vendors had already been involved in talks for some time so we were thrilled to secure the work. Following our pitch to Troy and Guillermo, we embarked on a small animation test in order to convey our understanding of the brief.

How was the collaboration with director Troy Nixey?
Having such a strong background in graphic novels, Troy’s visual design for the film was very strong.

Can you explain to us the creation of the beautiful main title?
A wealth of illustration existed that explained a little of the back story of the film, and Guillermo wanted us to explore ways to use these illustrations in the title. We then began designing a journey using that artwork and our own 3D modelled assets and, through a process of refinement, we ended up with what is on screen.

How did you design the creatures?
By the time we joined the film, the creatures had been designed and sculpted by Spectral Motion.
Once we had the maquettes scanned and remodelled in our systems, we further refined and adjusted the models, and added fine detail like wrinkles and pores, as well as surface textures and fur.

How did you simulate their presence on set?
Our on-set supervisor, Julian Dimsey, took care to survey all of the sets, which we rebuilt in 3D for previz and interactions with the characters. We had life-sized versions of the creatures sculpted for size and lighting reference, which we puppeteered through each shot. We scanned a lot of the ornate set pieces and converted them to models for shots in which the homunculus needed to crawl over carvings and the like, and back at Iloura, we rebuilt many pieces of the set for accurate shadow casting and for reflections back onto the creatures or props they carried.

Can you tell us more about their rigging?
The homunculus were rigged in Maya by Avi Goodman. Apart from their basic rigged skeleton, we had systems in place for muscle deformations and skin sliding. The characters were quite taut so a lot of the deformation was subtle. The creatures also had quite loose skin around their armpits and some had fleshy jowls or throats, so we simulated the loose skin as a series of cloth solutions that were refined and corrected on a per-shot basis.
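Treating loose skin as cloth typically means a mass-spring solve: points carry inertia via Verlet integration, and distance constraints pull the skin back toward its rest shape each frame. A tiny one-dimensional sketch of that scheme (generic, not Iloura’s solver):

```python
def verlet_step(pos, prev, accel, dt):
    """One Verlet integration step; returns (new_pos, new_prev)."""
    new = [p + (p - q) + a * dt * dt for p, q, a in zip(pos, prev, accel)]
    return new, pos

def satisfy(pos, rest):
    """Project one distance constraint between two points on a line."""
    d = pos[1] - pos[0]
    corr = 0.5 * (abs(d) - rest) * (1 if d > 0 else -1)
    return [pos[0] + corr, pos[1] - corr]

# Two skin points stretched to 2.0 apart with a rest length of 1.0
# get pulled symmetrically back to rest:
points = satisfy([0.0, 2.0], rest=1.0)
```

Per-shot refinement then amounts to tuning rest lengths, stiffness and damping until the jowls and armpits move believably for that specific performance.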

What were your references for their animation?
We used several golden rules for the way the creatures behaved. They were animated to be slower standing than when on all four legs so, when upright, we referenced old men walking, and when on all fours, we referenced rats and cockroaches and spiders. Through the production we spent a long time videoing ourselves hobbling about and set up a small Homunculus school where we explored methods of locomotion for the creatures with the whole animation team.

Did you create some previz?
Yes. A team of 6 animators were on set creating previz for the film a few days ahead of principal photography. It served as a robust blueprint for the three running crews to understand where in any given shot our characters would be as well as to inform the DOP where walls needed to be removed or when the rig was below the floor level (as much of the camera work was low to the ground). Ahead of this, we used our survey data to recreate the sets accurately and modelled each of the rigs being used on set.

What was the biggest challenge on this project and how did you achieve it?
Getting the animation/performance right was a huge challenge. In some shots there were up to 40 characters and each was hand key framed. That’s a lot of toes and claws and teeth and eyeballs (laughs).

Was there a shot or a sequence that prevented you from sleep?
Only one or two. The final battle where there are little homunculus everywhere was a challenge for continuity and staging, but the previz was mostly adhered to, so it made that challenge a little easier.

What are your software and pipeline at Iloura?
We model in Max/Maya, whatever, then rig in Maya and render in Max using V-Ray (which was not available for Maya at that time). We have built a series of in-house tools to transfer data between these packages so we can use whatever software is best for the job; we even used Blender a little in our pipeline (laughs). For compositing, we used Nuke.

What do you keep from this experience?
Our relationship with Guillermo del Toro, Troy Nixey and Mark Johnson. It was an amazing experience and for those three to trust us, and for them to be thrilled with the end result was a career highlight.

How long have you worked on this film?
I believe it was about 8 months.

How many shots have you done?
300 shots

What was the size of your team?
About 40 artists.

What is your next project?
We have just wrapped on the FX for GHOST RIDER: SPIRIT OF VENGEANCE which was all fire and explosions which was 180 degrees from DON’T BE AFRAID OF THE DARK and now we are working on TED for Seth MacFarlane.

What are the four movies that gave you the passion for cinema?
Personally? Well, BACK TO THE FUTURE, CLOSE ENCOUNTERS OF THE THIRD KIND, JASON AND THE ARGONAUTS and RAISING ARIZONA (laughs).

A big thanks for your time.

// WANT TO KNOW MORE?

Iloura: Dedicated page about DON’T BE AFRAID OF THE DARK on Iloura website.
fxguide: Article about DON’T BE AFRAID OF THE DARK on fxguide website.

// DON’T BE AFRAID OF THE DARK – VFX BREAKDOWN – ILOURA

© Vincent Frei – The Art of VFX – 2011

CAPTAIN AMERICA – THE FIRST AVENGER: Max Ivins – VFX Supervisor – Look Effects

Max Ivins has worked for more than 15 years in VFX. Before joining the staff of Look Effects, he worked at Digital Domain, Blue Sky / VIFX and Rhythm & Hues. He has participated in projects such as ARMAGEDDON, APOLLO 13 and BLOOD DIAMOND, and oversaw the effects of films like SCOOBY DOO 2, BEDTIME STORIES and AVATAR.

What is your background?
Realizing that I didn’t really want to be an attorney, I turned my passion to visual effects. Over the years I have worked as a vfx artist (2D and 3D), lead and supervisor on such great projects as VOLCANO, ARMAGEDDON, STAR TREK: INSURRECTION, APOLLO 13, SUPERMAN II, BEDTIME STORIES, BLOOD DIAMOND, AVATAR, BONES, LOST, LIMITLESS and CAPTAIN AMERICA. I have worked at Digital Domain, Blue Sky / VIFX and Rhythm & Hues before coming to Look, where I am Senior VFX Supervisor.

How did Look get involved on this show?
With the short schedule remaining and the significance of the work in the additional sequences Mark Soper (Visual Effects Producer) felt they needed a facility that could bring quality creative contributions to the party. To paraphrase “all the shots are going to need special creative attention to get them where they need to be”. He contacted Dan Schrecker, LOOK’s Creative Director, who he had known for years and was our lead creative on the project. Co-Producer Victoria Alonso of Marvel Studios had worked with our Executive Producer and Head of Production Steve Dellerson years ago. Both relationships helped us get the job and made production comfortable that we would do what needed to be done.

How was the collaboration with director Joe Johnston and Production VFX supervisors Christopher Townsend and Stephane Ceretti?
Because we were working on some really iconic shots, many of which were needed even earlier for trailers, we had a lot of eyes on us and our work. Our main contact, Chris Townsend, a true professional with whom we had great creative rapport, is one of the people who made it possible for us to do the variety and level of work on this project. He and Joe were on the same page, and Joe knew what he was after and communicated it well (what you wish for from a director). All the Marvel people that we coordinated with were true professionals. It was a great project.

What sequences have you made on this show?
We did three major sequences:
– The hover car
– The fight scene at the entrance to the first Hydra factory that Captain America « visits » during the montage sequence.
– And the Howling Commandos wreaking havoc on Hydra across war-torn Europe.

How was the shooting of the levitating car?
We weren’t actually involved in the shoot. We received plates from production. But they included a really big rig.

What did you do on it?
Needless to say, our work included rig removal – lots of rig removal. We also did a lot of 3D tracking and 3D projections. We also produced 2D and 3D interactive sparks and lighting, comped everything together and tweaked a lot to match the director’s vision.

How did you create the shield?
Production gave us lots of reference photos of the hero prop and a couple of hero shots from another vendor that had already done a shield. Our process was that they provided us with a very basic model on which we tweaked the shaders which were very specific because “it’s another character in the movie.” We then animated, lit, rendered and comped it into the plates provided by production. “Rinse and do again until satisfied.”

What references or materials did you receive from production for the shield?
As reference for the shields we received a very simple 3D model and photos of the hero prop production built. We also got a couple of hero shots from another vendor who had already done some shield shots.

Did the shiny aspect of the shield cause you any trouble?
The shield was really all about building a shader that created the right anisotropic highlights that brushed, shiny metal produces. Generally the 3D shield pipeline was fairly standard: 3D track the plates, animate the shield, light, render, composite and tweak as necessary. We did take great care with all aspects of the shield, the shaders especially, as production was very specific and the shield really is “another character in the movie.”
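Brushed metal is the textbook case of anisotropy: microscopic grooves stretch the specular highlight along one direction, so the shader needs two roughness values, one along the brush direction and one across it. A sketch of the Ward-style lobe that produces this (an illustration of the general technique, not the actual production shader):

```python
import math

def aniso_highlight(hx, hy, hz, ax, ay):
    """Ward-style anisotropic specular lobe for a half-vector
    (hx, hy, hz) in tangent space; ax and ay are the roughnesses
    along and across the brush grooves respectively."""
    return math.exp(-((hx / ax) ** 2 + (hy / ay) ** 2) / hz ** 2)

# The same tilt of the half-vector, once along the grooves, once across:
along = aniso_highlight(0.3, 0.0, 0.95, ax=0.5, ay=0.1)
across = aniso_highlight(0.0, 0.3, 0.95, ax=0.5, ay=0.1)
```

With ax greater than ay, the lobe falls off far more slowly along the grooves, which gives exactly the streaked highlight a brushed-metal shield shows.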

Did you share some assets with the other vendors?
While we received some aspects from other vendors, because we came onto the project so late in the game, all of our work went directly to production. We used some of the provided elements, others we created ourselves. We didn’t pass any assets along to other vendors. For example, Double Negative sent us the 3D elements for the hydrabolt which they had created in Houdini. We added the impact.

We primarily did our hydrabolts in 2D, using Nuke and more traditional animation techniques. We didn’t write any new, specific tools for this film. “This was a brute-force project because the time was very short and the look was already established.”

What did you do on the iconic shot with Captain America and his commando?
For that great shot of Captain America and his Howling Commandos bursting through the barn doors, we received plates to which we swapped out the background, added muzzle flashes and did some environment enhancements.

What references or indications did you receive from Joe Johnston?
We generally dealt with production, rather than directly with Joe Johnston. Because we came on the project during crunch time, the look had already been established. Our job was to do what needed to be done and make it look like everything else. We were the chunk of the movie added after production.

Unlike past jobs, like BLACK SWAN and LIMITLESS, where we were involved in the development phase of the show, our mission on CAPTAIN AMERICA was to not only produce some pretty incredible shots, but to make sure they matched the look and feel of the rest of the movie. To make it look good, so no one could tell it was added after principal production. Get a lot of work done quickly, but match everything else.

Was there a shot or a sequence that prevented you from sleep?
The biggest challenge on this show was the sheer volume of work really. The one shot that really tested our skills was the shot in which Captain America throws his shield directly into the screen. This shot had the full attention of the director, supervisors and producers and there were some very challenging aspects to the shot. Every detail of the shield had to be perfect. How Captain America interacted with the shield and the background had to be spot-on. Yeah, we lost some sleep over this one.

What do you keep from this experience?
The primary thing we keep from this experience is the fact that we could pull it off as well as we did in the time we had to do it. The trailers are full of our shots, which caused its own schedule pressure. However, even with the schedule crunch, it was a great project to be a part of. Chris Townsend and Joe Johnston and all the Marvel staff are true professionals and did a fantastic job of providing clear direction and seemed genuinely appreciative of our initiatives and attention to quality. We’re really proud of the work we did.

How long have you worked on this film?
We had a little over eight weeks from when we got the first plates in house to delivery of the final shot. We did almost 60 shots in that time, many of them iconic sequences in the film. We had a little more time on the hover car sequences because they delivered those plates first.

What was the size of your team?
Our team was pretty small for a job of this magnitude. 3D had five, including 3D tracking; comping had 12. We had two sequence supervisors. A big part of our strategy was to have the two sups divide the work up so that no one ever had to wait for notes. We needed to turn stuff around really rapidly. So we had a supervisor over every shoulder.

What is your next project?
We’re currently working on THE MUPPETS, THE SITTER, TOWER HEIST, MOONRISE KINGDOM and I DON’T KNOW HOW SHE DOES IT in film, and BONES and THE FINDER for television.

What are the four movies that gave you the passion for cinema?
I enjoy making films much more than watching them. So my four favorite projects have been, for various reasons: ARMAGEDDON, APOLLO 13, SCOOBY DOO 2 and AVATAR. But off the top of my head four movies that inspired me, in various ways, would be 2001: A SPACE ODYSSEY, STAR WARS, GHOSTBUSTERS, and ONE MILLION YEARS B.C. (Raquel Welch as a cave woman!).

A big thanks for your time.

// WANT TO KNOW MORE?

Look Effects: Official website of Look Effects.

© Vincent Frei – The Art of VFX – 2011