MALEFICENT: Kelly Port – VFX Supervisor – Digital Domain

Kelly Port had explained in detail the work of Digital Domain on THOR in 2011. He then worked on the effects of THE WATCH. Now he talks about his work on MALEFICENT.

Note: Darren Hendler (DFX Supervisor), Jonathan Litt (CG Supervisor) and Kelly L’Estrange (VFX Producer) also contributed to this interview.

How did you get involved on this show?
On the production side, Carey Villegas, Senior VFX Supervisor, had collaborated with Digital Domain in the past, most recently on a project focused on creating realistic humans. As part of that collaboration, Carey was closely involved in pushing forward the facial technology at DD and became well-acquainted with our facial animation team. He approached us about becoming involved in MALEFICENT due to the technical foundation we had built together on that project, also taking into account our previous digital human projects, BENJAMIN BUTTON and TRON.

How was the collaboration with director Robert Stromberg?
Robert comes from a design background, and has two Academy Awards for Best Art Direction, on AVATAR and ALICE IN WONDERLAND. For many years, he has been producing some of the highest quality matte painting work in the industry. From a visual effects studio perspective, Robert was a dream to work with. He had a very clear vision of what he wanted to achieve; some of the concept art was also created by him. For feedback he would often do quick Photoshop mockups over our frames, which made it very easy for us to find the right look. There was, and continues to be, an enormous amount of shared respect.

This was his first feature film. What was his approach to the visual effects?
While this was his first feature film as a director, Robert has a long-established background in visual effects. He understood the complexities of the work, and had an incredible eye for detail. Since almost 100% of the film contains some visual treatment, it was an enormous undertaking that took years to complete. Because of this, Robert relied on a strong collaboration with Carey Villegas, the Senior VFX Supervisor, as well as the supervisors of the main VFX studios, Digital Domain and MPC.

How did you work with Production VFX Supervisor Carey Villegas?
Kelly Port, Digital Domain’s VFX Supervisor, and Darren Hendler, Digital Domain’s DFX Supervisor, had both worked with Carey in the past and have enormous respect for his vast experience and intuitive understanding of visual effects. The shots on this project ran the gamut from complex bluescreens with actors on stunt rigs, to large, traditionally lit sets, to virtual production with motion capture, to all-CG sequences. An unbelievable amount of planning was required by Carey and his team, and every shot always had a well-thought-out approach. We collaborated closely with him to plan the virtual production shoots, and he was always ready to adjust the process to accommodate our post-production needs. During final shot production he visited our offices on a regular basis and worked directly with artists at their workstations.

What are the sequences done by Digital Domain?
Digital Domain’s work on the project extended over quite a few sequences in the film; we were responsible for Maleficent and her wings as well as the three flower pixies. Additionally, we had many fully digital environments, seen when Maleficent is flying around the mountains and through the clouds. Many shots required significant set extensions and digital matte paintings; some were created by Digital Domain’s environments team, while others were provided by the VFX production team and integrated into the comps by DD.

How did you create the digi-double for Maleficent?
The digi-double was primarily used for flying shots or extreme stunt work that was not possible to do practically. We knew from the beginning that the Maleficent digi-double would be held to the highest scrutiny, so it was very important to match her to Angelina as closely as possible. We were fortunate to get very high-resolution scans of Angelina in full makeup and prosthetics, and these scans were used as a base to build the digital Maleficent face. To build her digital wardrobe, we deconstructed how the practical wardrobe was tailored and rebuilt it panel by panel in CG. This ensured that when it was simulated in CG, it moved, looked and felt like the real one. We also created test shots with side-by-side comparisons of the digital Maleficent and Angelina, to verify that the digi-double would be indistinguishable in the CG shots. The wardrobe was also pushed through extreme wind simulations, such as when she is flying at hundreds of miles per hour through the fairy world.
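To give a sense of how such a wind enters a cloth solve, here is a minimal sketch of the usual form of a wind term: a per-particle drag force that pulls the particle’s velocity toward the wind velocity, integrated alongside the cloth’s internal spring forces. This is not DD’s solver; every name and constant below is illustrative.

```python
# Minimal sketch: one integration step for a single cloth particle
# with a wind drag term. All constants are illustrative, not DD's.

DT = 1.0 / 24.0           # one frame at 24 fps
DRAG = 0.5                # air drag coefficient (made up)
WIND = (-60.0, 0.0, 0.0)  # roughly a 135 mph headwind, in metres/second

def step_particle(pos, vel, spring_force, mass=1.0):
    """Advance one cloth particle by one frame with explicit Euler."""
    # Drag pushes the particle's velocity toward the wind velocity,
    # so a strong enough wind quickly dominates the cloth's motion.
    wind_force = tuple(DRAG * (w - v) for w, v in zip(WIND, vel))
    accel = tuple((f + wf) / mass
                  for f, wf in zip(spring_force, wind_force))
    vel = tuple(v + a * DT for v, a in zip(vel, accel))
    pos = tuple(p + v * DT for p, v in zip(pos, vel))
    return pos, vel

# A particle at rest with no spring force starts streaming backwards:
print(step_particle((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
```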

Can you explain in detail the creation of Maleficent’s wings?
The production art department constructed a practical set of wings, which we used as a starting point for our CG wing build. The first stage was to use images of the practical wings and images of eagles’ wings to create concepts of what Maleficent’s wings would look like on Angelina in a wide variety of dynamic poses. From this point, we created a rough CG version of the wings. We used this version to test motion and to develop the physical layout of the bone and muscle structure, to learn how the wings would work on a real person. Once we had the design and structure completed, we started the complex task of modeling the wings feather by feather. In the end, due to some design changes from the practical wing build, the wings are entirely CG in every single shot.

What was the main challenge with the wings?
The most interesting thing about the wings was the way they moved and folded up. Rob is a very visual director and wanted full control over the silhouette of the wings in every pose. To this end, our wings needed to conform and bend into virtually any shape. This is extremely difficult, especially when you are trying to ensure that each of the thousands of feathers moves correctly without passing through its neighbors or exploding out of the rig like a porcupine on a bad hair day. To stop these explosions, or interpenetrations, the rig itself used custom technology to prevent major feather intersections, while additional feather intersections and high-frequency feather dynamics were fixed as a post-process.
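DD’s rig technology is proprietary, but the core idea of preventing feather interpenetration can be sketched very simply: if each feather’s pose is reduced to a rotation around a shared fold axis, a single root-to-tip sweep can enforce a minimum angular separation between neighbors. A hypothetical illustration, not the production rig:

```python
# Minimal sketch of feather intersection prevention: treat each
# feather as an angle around the wing's fold axis and enforce a
# minimum angular gap between neighbors in one sweep.
# All names and values are illustrative.

MIN_SEPARATION = 0.02  # radians; hypothetical clearance per feather pair

def resolve_feather_angles(angles):
    """Clamp each feather so it never crosses the one before it.

    `angles` is a root-to-tip list of feather rotations (radians)
    around the shared fold axis, as the animation rig posed them.
    """
    resolved = list(angles)
    for i in range(1, len(resolved)):
        limit = resolved[i - 1] + MIN_SEPARATION
        if resolved[i] < limit:      # feather would cut into its neighbor
            resolved[i] = limit      # push it out to the allowed minimum
    return resolved

# Example: a folded pose where three feathers collapse onto each other.
posed = [0.00, 0.05, 0.04, 0.01, 0.20]
print(resolve_feather_angles(posed))
# -> [0.0, 0.05, 0.07, 0.09, 0.2] (up to float rounding)
```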

How did you approach the Pixies?
Robert Stromberg and Carey Villegas wanted the fully CG pixie performances to be completely driven by the three pixie actresses: Imelda Staunton, Lesley Manville and Juno Temple. To this end, Carey opted for a performance capture driven approach in which the actors’ body and facial performances were captured as they performed their pixie scenes. This was all done on a motion capture stage with the actors wearing motion capture suits and head camera rigs. All three actors were often suspended on stunt rigs to help convey the sense of flight in their flying scenes.

Can you explain in detail the creation of the Pixies?
The key to building the pixies was to start by building photoreal CG versions of the actors, then transforming these digital actors into the pixies. This ensured our pixies always retained the exact likeness of the actors, each of their thousands of facial expressions, and the way their skin moved and looked. The pixies’ faces are proportionally different from the actors’ faces: for example, they have bigger eyes, smaller chins, and wider cheeks. Finding the right proportion changes for each pixie took quite a bit of experimentation and collaboration with Rob and Carey. We would create multiple versions of each pixie face and then use them in tests for several weeks to “try them on for size”. Flittle found her look fairly quickly, but both Knotgrass and Thistlewit took longer to settle on a final look, and small tweaks were made to the faces throughout shot production.
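One way to read “transforming these digital actors into the pixies” while retaining their expressions is a delta transfer: store each expression as an offset from the actor’s neutral face and replay it on top of the reproportioned pixie neutral. The sketch below shows only this simplest form, with illustrative names; real transfer has to account for the proportion changes per facial region.

```python
# Minimal sketch of expression delta transfer (illustrative, not
# DD's pipeline). Vertex arrays are (n_points, 3) NumPy arrays.

import numpy as np

def transfer_expression(actor_neutral, actor_expr, pixie_neutral,
                        scale=1.0):
    """Replay an actor's expression offset on the pixie's neutral face."""
    return pixie_neutral + scale * (actor_expr - actor_neutral)

# One vertex: the actor's smile moves a mouth corner up and out; the
# same offset is applied to the pixie's wider face.
actor_neutral = np.array([[1.0, 0.0, 0.0]])
actor_smile   = np.array([[1.2, 0.3, 0.0]])
pixie_neutral = np.array([[1.5, 0.0, 0.0]])   # wider cheeks
print(transfer_expression(actor_neutral, actor_smile, pixie_neutral))
# -> [[1.7 0.3 0. ]]
```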

How did you handle their rigging and lighting?
All of our rigging is done in Maya using a large number of custom plugins. The rigs need to be able to handle over 3500 facial shapes (about 150 primary shapes and over 3000 combination shapes) and about 200 body shapes that drive a custom pose-based-deformer system for the rest of the body. You can’t use regular Maya deformers to do this, since they quickly run out of memory and run too slowly handling that much data. We’ve gone through a number of generations of custom plugins for this on previous shows, and we introduced our newest versions on this show, which ended up working out very well and allowed the animators to get fast interactive response from the rigs. We also used a CgFX shader on the face so that animators could get real-time feedback of pore and wrinkle detail, which is normally only seen in the final render with displacement. The wrinkles were driven by a dynamic system that automatically calculated tension in the skin animation. For example, if a pixie raised her eyebrows, the forehead wrinkles automatically turned on, both in the final displaced render and in the CgFX shader seen by the animator.

For lighting, every plate-based shot had an HDR captured on set. We used our standard “light-kit” based lighting pipeline, originally developed on BENJAMIN BUTTON, which breaks the HDR down into a more easily editable kit of area lights. Many plates were shot without any lighting directed at where the pixies would be flying, so although the light kits often provided a good starting point, the lighters did a lot of additional work to integrate the pixies. There are also a number of all-CG shots for both the pixies and Maleficent, and these were usually lit by hand.
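The internals of that tension system aren’t public, but the general idea is easy to sketch: measure how much the skin’s edges have compressed relative to the rest pose, and map that compression to a 0-to-1 wrinkle activation. A hypothetical illustration, with made-up names and constants:

```python
# Minimal sketch of a tension-driven wrinkle trigger, assuming the
# wrinkle maps are blended by a scalar weight per face region.
# Compression of the skin (edges getting shorter than at rest, e.g.
# the forehead when the brows rise) turns the wrinkle map on.

import math

def edge_length(points, edge):
    i, j = edge
    return math.dist(points[i], points[j])

def wrinkle_weight(rest_points, posed_points, edges, gain=4.0):
    """Return a 0..1 activation for one wrinkle region.

    Averages the compression ratio across the region's edges, then
    remaps it so no compression -> 0 and strong compression -> 1.
    """
    compression = 0.0
    for e in edges:
        rest = edge_length(rest_points, e)
        posed = edge_length(posed_points, e)
        compression += max(0.0, 1.0 - posed / rest)  # only shortening counts
    compression /= len(edges)
    return min(1.0, gain * compression)

# A forehead edge squashed to 90% of its rest length:
rest = {0: (0.0, 0.0), 1: (1.0, 0.0)}
posed = {0: (0.0, 0.0), 1: (0.9, 0.0)}
print(wrinkle_weight(rest, posed, [(0, 1)]))  # -> ~0.4
```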


Can you explain in detail the facial performance work?
The face is the hardest part of the character to make look real, especially when the character is lifelike and has to match the actor. We spent a considerable amount of time with the actors, capturing high-resolution face scans of all their facial expression poses. During pre-production we shot each actor in ICT’s “Lightstage X”, which gives us a nearly perfect digital model of each face, all the way down to the pore level. A base set of facial motion data, the so-called FACS shapes, was captured separately at Disney Zurich.
DD has its own virtual production team, supervised by Gary Roberts, which was responsible for all facial and body performance capture during the shoot itself. During the motion capture shoot, the actors wore custom helmets with four cameras that captured their facial performances to an accuracy of about 200 points, based on dot patterns drawn on their faces. For each shot, the dot-based performance is “solved” using custom software to derive the true high-resolution performance as a “recipe” of the high-resolution FACS and ICT data. Even with the facial solves, animation is still a huge component of bringing life and realism to the characters. At DD we have a team of facial animators so attuned to the nuance of human facial performance that they are able to tell when something is not exactly right. The animators then have to do a considerable amount of work to adjust the final animation: the solve is never perfect, and there are often changes requested after initial animation that are necessary due to other constraints in the shot or the animation of the bodies.
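DD’s solver is custom, but the underlying formulation of a facial “solve” can be sketched as a least-squares fit: find the per-frame FACS shape weights whose mix best reproduces the roughly 200 tracked dot positions. A simplified, hypothetical illustration (a production solver would use constrained optimization rather than clamping after the fact):

```python
# Minimal sketch of the idea behind a facial "solve": find the mix
# of FACS shapes that best reproduces the tracked dot positions on
# each frame. All names are illustrative.

import numpy as np

def solve_facs_weights(neutral, shape_deltas, tracked_dots):
    """Least-squares fit of per-shape weights, clamped to [0, 1].

    neutral:      (n_dots*3,) dot positions on the neutral face
    shape_deltas: (n_dots*3, n_shapes) offset each FACS shape adds
    tracked_dots: (n_dots*3,) dot positions seen by the head cameras
    """
    weights, *_ = np.linalg.lstsq(shape_deltas, tracked_dots - neutral,
                                  rcond=None)
    return np.clip(weights, 0.0, 1.0)   # FACS weights are activations

def apply_weights(neutral, shape_deltas, weights):
    """Rebuild the solved face from the weight 'recipe'."""
    return neutral + shape_deltas @ weights

# Tiny example: 1 dot in 3D, 2 shapes (say, brow-raise and jaw-open).
neutral = np.zeros(3)
deltas = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.0, 0.0]])
dots = np.array([0.5, 0.25, 0.0])
print(solve_facs_weights(neutral, deltas, dots))  # -> [0.5  0.25]
```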

Can you tell us more about the hair for Maleficent and the Pixies?
The hair grooms were created in DD’s custom hair system, called Samson, and fed to V-Ray via custom rendering plugins. On this show we used a new, node-based version of Samson: designing a groom in it is similar to working on a comp in Nuke or a procedural graph in Houdini. This was a big roll-out for us, and it was necessary due to the complexity of the hair grooms; Thistlewit, especially, had a large and complex groom of curly blonde hair. On the rendering side, Chaos Group did quite a bit of development to optimize hair rendering in V-Ray. Everything was ray traced, including hair translucency and global illumination in the hair volume. Global illumination is a key component of the look of blonde hair especially, since light bounces through and around blonde hair much more than dark hair.
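“Node-based” here means the groom is an ordered graph of operators, each taking hair curves in and producing modified curves out, much like nodes in a Nuke script. Samson’s actual operators are proprietary; the sketch below uses two made-up ones, curl and clump, just to show the shape of such a system:

```python
# Minimal sketch of a node-based groom: each "node" is a function
# from curves to curves, and the graph is just their composition.
# "curl" and "clump" are illustrative stand-ins, not Samson nodes.

import math

def make_guide(root_x, n=10):
    """A straight guide hair growing up the y axis from (root_x, 0, 0)."""
    return [(root_x, i / (n - 1), 0.0) for i in range(n)]

def curl(curves, amplitude=0.05, frequency=10.0):
    """Push points sideways with a sine along the strand: wavy hair."""
    return [[(x + amplitude * math.sin(frequency * y), y, z)
             for (x, y, z) in curve] for curve in curves]

def clump(curves, center_x=0.0, strength=0.5):
    """Pull strands toward a clump center, more strongly toward the tip."""
    return [[(x + strength * y * (center_x - x), y, z)
             for (x, y, z) in curve] for curve in curves]

# The "graph": an ordered chain of nodes applied to the guide hairs.
graph = [lambda c: clump(c, strength=0.6),
         lambda c: curl(c, amplitude=0.03)]

curves = [make_guide(x / 10.0) for x in range(-3, 4)]
for node in graph:
    curves = node(curves)
print(curves[0][-1])  # tip of the first strand after the full graph
```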

How did you create the cloudscape environments?
To create the cloudscapes, we used our custom cloud-building toolkit within Houdini, which lets us custom-design any cloudscape. This proved very valuable, as Robert had some very specific ideas about how he wanted the clouds to look and feel.
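The toolkit itself is proprietary, but the usual recipe for “designable” procedural clouds is a sculpted base shape whose density is eroded at the edges by fractal noise: the artist controls the base shape while the noise supplies the detail. A rough sketch under that assumption, with a cheap hash-based noise standing in for Houdini’s noise fields:

```python
# Minimal sketch of procedural cloud density: a designed base shape
# (a sphere here) eroded by fractal noise. Not DD's toolkit.

import math, random

def value_noise(x, y, z, octave):
    """Cheap deterministic noise: a seeded random per lattice cell."""
    random.seed(hash((round(x), round(y), round(z), octave)))
    return random.random()

def fbm(x, y, z, octaves=4):
    """Fractal noise: summed layers at doubling frequency, halving gain."""
    total, amp, freq = 0.0, 0.5, 1.0
    for o in range(octaves):
        total += amp * value_noise(x * freq, y * freq, z * freq, o)
        amp, freq = amp * 0.5, freq * 2.0
    return total  # roughly 0..1

def cloud_density(p, center, radius, erosion=0.5):
    """Density falls off from the designed center; noise eats the edge."""
    base = max(0.0, 1.0 - math.dist(p, center) / radius)
    return max(0.0, base - erosion * fbm(*p))

print(cloud_density((0.2, 0.1, 0.0), center=(0.0, 0.0, 0.0), radius=1.0))
```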

Can you explain in detail the creation of the landscapes?
The landscapes for MALEFICENT were extremely complicated, as they included a variety of different elements: rocky terrain, water, waterfalls, clouds, mist, trees, and flowers. These elements were often generated by different teams in different software packages. The water surfaces, waterfalls, and mist clouds were created in Houdini and rendered in Mantra.

There are also several shots with large whale-like creatures (called carriagefaeries) from MPC jumping through the water like dolphins. For these shots we received Alembic geometry from MPC and simulated the water surfaces and spray in DD’s custom water solver. The final shots used deep compositing to put MPC’s renders into our final water and landscapes; this kind of tight integration between two companies within a single shot is becoming more and more common. The trees and foliage were mostly full 3D geometry, lit and rendered in V-Ray, and we used custom layout tools to populate the environments with the large numbers of trees required. The cliffs were mostly 2.5D matte paintings, since nearly every shot had a unique set of cliffs not used in any other shot.
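On the deep compositing point: a deep image stores, per pixel, a list of samples at different depths rather than one flattened color, so one vendor’s creature samples can be interleaved with another’s water and spray samples and composited in depth order, with no hand-drawn holdout mattes. A minimal sketch of that merge (illustrative structure, not the production pipeline):

```python
# Minimal sketch of a deep merge for one pixel. Each sample is
# (depth, (r, g, b), alpha); names and values are illustrative.

def deep_merge_pixel(samples_a, samples_b):
    """Merge two pixels' deep samples and flatten front to back.

    Returns the premultiplied (r, g, b, a) of the merged pixel.
    """
    samples = sorted(samples_a + samples_b, key=lambda s: s[0])
    r = g = b = a = 0.0
    for depth, (sr, sg, sb), sa in samples:   # standard "over" stacking
        r += (1.0 - a) * sr * sa
        g += (1.0 - a) * sg * sa
        b += (1.0 - a) * sb * sa
        a += (1.0 - a) * sa
    return (r, g, b, a)

# A creature sample sits between two water samples in depth:
water    = [(1.0, (0.2, 0.4, 0.5), 0.3), (5.0, (0.1, 0.2, 0.3), 1.0)]
creature = [(3.0, (0.6, 0.6, 0.6), 1.0)]
print(deep_merge_pixel(water, creature))
# -> roughly (0.48, 0.54, 0.57, 1.0): the creature shows through the spray
```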

How long have you worked on this film?
Actor data acquisition and scanning started in May of 2012. The live action photography took place around July to October 2012, and we delivered our final shots around early April 2014: roughly two years in total.

How many shots have you done?
540 VFX shots were in our final award, a number of which were shared with MPC and other vendors. The show went through some adjustments, so the actual number of shots we worked on prior to the final edit ended up being much larger than that.

What was the size of your team?
We had approximately 320 people on the show in our multiple locations over the span of the project. This was a hugely collaborative effort and most meetings and reviews spanned two, three, or even four locations. Digital Domain has spent a lot of time building out a pipeline that allows for cross-site collaboration and this show definitely used every last ounce of that functionality!

A big thanks for your time.

// WANT TO KNOW MORE?

Digital Domain: Dedicated page about MALEFICENT on the Digital Domain website.

© Vincent Frei – The Art of VFX – 2014
