In 2017, Theo Bialek explained to us the work of Sony Pictures Imageworks on SPIDER-MAN: HOMECOMING. He then worked on LOVE, DEATH & ROBOTS. Today he talks to us about his work on SPIDER-MAN: FAR FROM HOME.
How was this new collaboration with director Jon Watts?
This latest project felt very much like an extension of the previous film given the overlap in some of the key Production people from the last Spider-Man. Jon, already having a shorthand with Janek and others, remained collaborative and supportive of everyone’s ideas.
What were the main changes he asked for in the visual effects since Homecoming?
In this film he really wanted to push the envelope of Spider-Man’s abilities as they relate to his performance and athleticism, given that Peter Parker is somewhat of a veteran at this point.
How did you use your experience from the previous Spider-Man for this new one?
We had a better handle on how to utilize mocap this time around. Whereas before we tended to capture an entire shot action as a single take, this time we knew it would be more efficient to capture snippets of performances. Since we enhance the captures by hand to imbue superhero abilities and strength into the performances, we knew everything needed to be touched by hand regardless. It turns out to be much easier to coach and get variations in a performance if you can iterate over the same mini action with the actor. Instead of having Spider-Man leap into camera, dodge a blast, spin kick, then punch an enemy as one take, we’d have Tom do each action separately. When it came time to construct the shot digitally in animation we could easily pick the bits from each mocap take and put them together with hand animation layered on top and in-between.
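To make that splicing idea concrete, here is a minimal sketch of crossfading two captured snippets before hand animation is layered on top. This is illustrative only, not Imageworks’ tooling: the clip data and blend length are invented, and a production rig would blend rotations as quaternions rather than raw channels.

```python
import numpy as np

def crossfade_clips(clip_a, clip_b, blend_frames=8):
    """Splice two mocap snippets by crossfading their overlapping frames.

    clip_a, clip_b: (frames, channels) arrays of animation data.
    The last `blend_frames` of clip_a blend into the first `blend_frames`
    of clip_b with a smoothstep weight to avoid pops at the cut.
    """
    t = np.linspace(0.0, 1.0, blend_frames)
    w = t * t * (3.0 - 2.0 * t)  # smoothstep easing from 0 to 1
    overlap = ((1.0 - w)[:, None] * clip_a[-blend_frames:]
               + w[:, None] * clip_b[:blend_frames])
    return np.concatenate([clip_a[:-blend_frames], overlap,
                           clip_b[blend_frames:]])

# e.g. splice a 48-frame leap take into a 60-frame kick take
leap = np.random.rand(48, 63)   # stand-ins for captured joint channels
kick = np.random.rand(60, 63)
print(crossfade_clips(leap, kick).shape)  # (100, 63)
```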
How did you work with Overall VFX Supervisor Janek Sirrs?
Janek is a huge resource and partner on the film. Always candid, he constantly had reference and advice at the ready, and he always encouraged suggestions toward the goal of improving the story and film. On set, he was present for all of the 1st Unit photography and on occasion during 2nd Unit as well. During post we met weekly, and later twice weekly, for remote CineSync reviews.
How did you organize the work with your VFX Producer?
This film followed the typical workflow of our VFX projects here at Imageworks. We start on assets at the beginning. In this case, we began on derivative concepts of the final elemental creature even before principal photography wrapped, and also started on some early animation/motion tests for the drones. But for the most part we needed to wait until scans and reference came back from set before beginning in earnest. As a general rule of thumb, we bring assets up to a level where they’ll be viable for 80% of the shots before we move them into actual production. The goal is to reduce excess development time, saving resources for key shots and upgrades as we get deeper into the work. In regards to our internal team framework, all disciplines were departmentalized with the exception of Lighting. For Lighting we instead had two teams, each led by their respective CG Supervisor. Our sequence was for the most part split between shots with and without the Elemental creature and divided between the two teams on this basis. Each CG Supervisor was responsible for assisting the departments downstream with the ultimate goal of getting all of the elements rendered and handed off to the Compositing Department for finaling.
What are the sequences made by Sony Pictures Imageworks?
Imageworks worked on the BFB, or Big Freakin’ Battle, sequence which encompasses roughly 20 minutes of screen time. Shots on London’s Tower Bridge were exclusively completed by Imageworks, including those where Happy and Peter are flying toward the bridge aboard a Stark Jet.
Can you explain in detail the creation of the Spider-Man asset?
Imageworks was the lead vendor on the Enhanced Suit, the black and red suit featured in the third act of the film. Our digital suit is based on scans, photography, and video reference of the practical suit that was filmed on set. We were provided model scans of the actor both in and out of the suit, which allowed us to accurately model Tom Holland’s musculature that we use to deform and drive the suit as a cloth sim. Both scans go through a cleanup process in modeling to optimize the topology layout for rigging and deformations. Ultra-high-resolution photographs are taken down to the thread patterns, which we use as the foundation for our textures. Range-of-motion video and mocap performance test footage of Tom and the stuntman is captured to help guide our cloth simulation and rigging properties. Due to limitations of the actual suit, imperfections like zippers and the extra bulk needed to accommodate a mask shell are unavoidable. For the digital suit, however, we remove these items and create an idealized version that is otherwise a replica of the practical suit. Lighting test footage is also captured, which we use to help dial in our renderer so that we can accurately model the shading response of each material.
How did you create the various shaders and textures for the different suits?
For this film we worked on the “Enhanced Suit” featured at the end of the film. This is the black and red suit that Peter Parker manufactures while aboard the Stark Jet. Our shaders were built inside of a proprietary version of Katana using a layered shader approach. Our textures were derived from high-resolution photographs and built in Substance Designer by Adobe. These swatches were tiled and further enhanced with procedural variation and breakup, which was designed inside of Katana.
Jeremy Sikorski, our Texture Paint Lead, added:
We took hi-res photos of the practical suit, then analyzed the weave patterns very closely using directional lighting setups. The patterns were reproduced in Substance Designer so we could make weft/weave directional maps. This was also an opportunity to break down each pattern into different material masks. For example, the rubber dots printed on the red suit cloth can be extracted from the textures and layered in the shader using unique parameters. When the patterns are rebuilt in procedural ways, we can alter those procedural sliders to produce textures that vary from the real suit, producing effects and material responses that enhance the look of the suit in CG.
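As a rough illustration of that mask-driven layering (a conceptual sketch, not the actual Katana shader network), an extracted dot mask can blend two material responses per pixel:

```python
import numpy as np

def layer_materials(cloth_resp, rubber_resp, dot_mask):
    """Blend two material responses using a texture-derived mask.

    cloth_resp, rubber_resp: (H, W, 3) shading responses.
    dot_mask: (H, W) mask in [0, 1] isolating the printed rubber dots.
    Driving the mask with procedural parameters (eroding the dots,
    adding wear) varies the material mix without repainting textures.
    """
    w = dot_mask[..., None]
    return w * rubber_resp + (1.0 - w) * cloth_resp
```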
Can you tell us more about his rigging and animation?
As we’ve done multiple projects involving Spider-Man, our rigging of the character was an extension of the setups and tools we employed on previous films. On Homecoming, our character was in a fairly loose-fitting sweatpants costume, whereas in Far From Home Spider-Man’s suit was form fitted. Because of this we had to be more cognizant of the underlying muscle deformations as they were always more readable. As a result, upwards of 85% of the shots required an extra step of muscle shape animation, as our Spider-Man rig uses a simpler method of blendshapes to achieve our tensed and stretched muscles. Regarding his gross animation, we will typically start each shot by running a gravity ball simulation of Spider-Man’s arcs as a first step for shots where he’s swinging and leaping. This is a ballistics-type tool we’ve created inside of Maya that allows the animator to rapidly create a physics-based animation simulating a ball of Spider-Man’s mass that can be strung up to webs. This allows us to reliably achieve realistic motion as Spider-Man drops and changes direction while pivoting from the end of his weblines before we begin on his poses.
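A toy version of that gravity ball idea (our sketch, not the actual Maya tool) integrates a point mass ballistically and projects it back onto the webline whenever the line goes taut:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])  # m/s^2

def gravity_ball(p0, v0, pivot, web_len, frames=72, fps=24.0):
    """Simulate a point mass falling and swinging from a webline pivot.

    Verlet integration plus a simple distance constraint: if the mass
    drifts beyond the webline's length, it is pulled back onto the
    sphere around the pivot, producing the characteristic
    drop-then-swing arc an animator can use as a motion guide.
    """
    dt = 1.0 / fps
    p = np.asarray(p0, float)
    prev = p - np.asarray(v0, float) * dt
    pivot = np.asarray(pivot, float)
    arc = []
    for _ in range(frames):
        nxt = 2.0 * p - prev + GRAVITY * dt * dt  # ballistic step
        prev, p = p, nxt
        d = p - pivot
        dist = np.linalg.norm(d)
        if dist > web_len:                 # webline taut: constrain
            p = pivot + d / dist * web_len
        arc.append(p.copy())
    return np.array(arc)

# leap off a rooftop and swing from a pivot 30 m overhead
path = gravity_ball(p0=(0, 50, 0), v0=(6, 0, 0),
                    pivot=(0, 80, 0), web_len=40.0)
```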
What were the main challenges in animating Spider-Man and how did you overcome them?
Animating a character with inhuman abilities while still having it feel believable is the biggest challenge we face when doing Spider-Man performances. One of the reasons this is so difficult to achieve is that there is rarely reference you can pull from to help guide your action. There just isn’t any video out there of someone leaping off a building with superhuman strength. The very nature of the goal immediately puts us into the world of subjectivity, which, when you’re trying to create the illusion of reality, is a hard place to start from. To combat this challenge, we use simulations to help guide our actions, such as our Gravity Ball tool inside of Maya. But even this is only part of the puzzle. To help in other areas we rely on motion capture and additional video reference. Though we can never capture or recreate a performance at a superhero level, we can capture snippets of motion that we graft together and then enhance. Just like in compositing, the more reality you have as your foundation in the frame, the more convincing your final CG will usually feel. The same principle applies in animation. There are shots where we might be taking bits from 5 different mocap takes spliced together, with hand animation in-between and layered on top to enhance the performance. Enhance as in making a jump higher, a run faster, etc.
Can you explain in detail the design and creation of the Super Uber Elemental?
The Super Uber Elemental, internally codenamed SUE, was initially presented to Imageworks in concept form. As depicted in the original concept image, SUE had several water tentacle appendages that extended out of her back into the River Thames below. Knowing these would be a challenge, we began FX dev immediately and also prototyped a posed sculpt of her body in Mudbox. This allowed us to rapidly generate a representative model for the creature which we could use to check scale against the environment model of the London Tower Bridge area. From here we rendered out elements of the body model and the early dev water tentacles, which were used as source imagery to generate an updated concept. Once we got approval on a more refined 2D concept, we began refining the model for rigging and textures as well as the dev needed to explore the cloud, lightning, fire, and smoke FX on the rest of her body. As we had already begun work on the interaction of the water tentacles with the river surface, we applied the same techniques to add interaction from SUE’s body as she stood and walked in the river. Once each element was working successfully in isolation, we pooled the work from the various FX artists into one master sim so they could share the same driving forces. This way the same turbulent wind that was blowing the clouds that made up SUE’s head could also influence the fire on her shoulders, the smoke wafting off her arms, and the spray spinning off of the water tentacles.
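In practice, “sharing the same driving forces” means every element samples one common force field. A hedged sketch, using layered sines as a cheap stand-in for proper curl noise:

```python
import numpy as np

def shared_wind(p, t, base=(4.0, 0.0, 1.0), freq=0.15, speed=0.5, gust=6.0):
    """One turbulent wind field sampled by every sim.

    p: (..., 3) world-space positions, t: time in seconds.
    Clouds, fire, smoke, and spray all query this same function, so a
    gust that bends the head clouds also pushes the shoulder fire and
    the smoke wafting off the arms.
    """
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    turb = np.stack([
        np.sin(freq * y + speed * t) * np.cos(freq * z),
        np.sin(freq * z + speed * t) * np.cos(freq * x),
        np.sin(freq * x + speed * t) * np.cos(freq * y),
    ], axis=-1)
    return np.asarray(base) + gust * turb

# each sim queries the field at its own sample points
cloud_pts = np.random.rand(1000, 3) * 100.0
fire_pts = np.random.rand(200, 3) * 100.0
wind_on_clouds = shared_wind(cloud_pts, t=2.0)
wind_on_fire = shared_wind(fire_pts, t=2.0)
```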
The Elemental is full of FX elements. How did you create and animate those elements?
The FX for the various elemental types were all based on whatever real-world reference video we could find. Volcanic storm clouds were a heavy influence on our work. The actual FX dev and simulations were all done inside of Houdini and rendered in Arnold through our Katana software. Many of the FX needed to interact with one another, so we were careful to run them in serial order of precedence. Rocks falling off SUE’s body would be sim’d first, as they would need to influence any smoke or fire they fell through. This basic principle was applied to all of the areas. A typical shot would require 4 to 5 FX artists working on individual pieces.
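That serial order of precedence is, in effect, a dependency graph over the sims. A tiny sketch using Python’s standard library (the sim names are illustrative, not the show’s actual cache names):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# each sim lists the caches it depends on; rocks must exist before the
# smoke and fire they fall through can be simulated against them
deps = {
    "smoke": {"rocks"},
    "fire": {"rocks"},
    "spray": {"water_tentacles"},
    "render_layers": {"smoke", "fire", "spray"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # rocks and water_tentacles come out before their dependents
```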
Mysterio uses tons of drones during the final battle. How did you design and create these drones?
We were supplied a concept for the original drone at the beginning of the project. Building this standard version was straightforward with the exception of the internal components. We knew there would be a lot of drone destruction, so we spent some time upfront filling the drone body with internal components that followed some conventional logic. Power source, fuel, navigation, and ammunition storage, among other systems, were included. As the scope of the drones’ performance increased, new variants were needed. Imageworks artists worked with the Production team to design and build a flamethrower and a sonic variant. Each drone also needed to vary based on its age or damage. To keep the number of unique models to a minimum, we built damage variations by creating various damaged modular pieces that could be combined into vast arrays of seemingly unique combinations. These physical model differences along with texture variations helped increase the visual variety as well.
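The economy of that approach comes from combinatorics: a handful of modeled pieces multiplies into many apparent variants. A toy example with invented piece names:

```python
import itertools
import random

# hypothetical modular damage pieces per drone region
hull = ["intact", "dented", "scorched"]
rotors = ["intact", "bent", "missing_blade"]
shell = ["clean", "cracked", "bullet_holes", "burnt"]

variants = list(itertools.product(hull, rotors, shell))
print(len(variants))  # 3 * 3 * 4 = 36 drones from only 10 modeled pieces

random.seed(7)
print(random.choice(variants))  # assign one combination per drone instance
```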
What was your approach to animate so many drones?
The bulk of our shots had 15 or fewer drones directly in pursuit of Spider-Man, which we could animate directly by hand. In the background of these shots, several hundred drones were idling as they remained in formation. For these somewhat inactive drones we relied on our FX Dept to propagate and instance idle animation. For the shots where a large number of drones needed to move, our FX team relied on flocking animation tools inside of Houdini to generate the performances.
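Flocking tools are typically built on boids-style steering rules: cohesion, separation, and alignment. A minimal illustrative step function (not Houdini’s implementation):

```python
import numpy as np

def boids_step(pos, vel, dt=1.0 / 24.0, radius=10.0,
               w_coh=0.4, w_sep=1.2, w_ali=0.6, max_speed=20.0):
    """Advance a flock of drones one frame using three steering rules.

    pos, vel: (N, 3) arrays. Each drone moves toward its neighbors'
    center (cohesion), away from crowding (separation), and toward
    their average heading (alignment).
    """
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        offset = pos - pos[i]
        dist = np.linalg.norm(offset, axis=1)
        near = (dist < radius) & (dist > 0.0)
        if not near.any():
            continue
        coh = offset[near].mean(axis=0)
        sep = -(offset[near] / dist[near, None] ** 2).sum(axis=0)
        ali = vel[near].mean(axis=0) - vel[i]
        acc[i] = w_coh * coh + w_sep * sep + w_ali * ali
    vel = vel + acc * dt
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > max_speed, vel / speed * max_speed, vel)
    return pos + vel * dt, vel
```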
Can you explain in detail the crowd animation and destruction?
Our crowd animation needs were somewhat limited. The majority of the work required groups of under a hundred or so people to flee. For this we relied on mocap runs we had in our library, as well as performances we captured during the shoot, to populate into our own crowd tools within Maya.
The most challenging part of doing destruction on our film was knowing beforehand which parts of the bridge or vehicles we’d want to simulate in Houdini. In order to run stable sims, the models need to be constructed in a more rigorous fashion to prevent any model interpenetrations. This of course means the models need to be built to a higher spec, which results in an increase of the time needed. Generally, we ran the sims with the appropriate gravity and mass settings and the results were satisfactory. On some occasions we would find we needed to cheat the gravity slower to help sell the scale.
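The gravity cheat follows from simple ballistics: fall time scales with the square root of height over gravity, so to make a sim read N times larger you can either divide gravity by N or retime the cache by the square root of N. A sketch of the two equivalent knobs:

```python
import math

def scaled_gravity(apparent_scale, g=9.8):
    """Gravity that makes a sim read `apparent_scale` times larger."""
    return g / apparent_scale          # e.g. 2.45 m/s^2 for a 4x cheat

def retime_factor(apparent_scale):
    """Equivalently, slow the cached sim playback by this factor."""
    return math.sqrt(apparent_scale)   # e.g. play back 2x slower for 4x
```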
With all those drones, the FX and Spider-Man, how did you keep your render farm from burning up?
Our drones were heavily instanced and also benefited from having lower resolution models we used as LODs. As we often had several hundred drones in our shots, these tools kept the memory footprint to a manageable level. Progressive rendering in Arnold was also a big help. We could render our shots at 2K with lower samples to turn around renders overnight. In the morning, the Lighter could validate the renders and the Compositor could get started building their shot in Nuke. If everything checked out, the same renders could be put back on the renderfarm, where the render would pick up from where it left off, at the lower-sampled image, and continue to refine to a better noise setting. We were also able to reuse a lot of our elements across shots as pre-rendered layers, which was a big benefit. For example, we could render explosions at a sequence level as a 2D image sequence and then, in the comp, when extra flak explosions were required, put these on a card in Nuke.
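LOD selection like this usually keys off a drone’s projected size on screen. A hypothetical picker (the thresholds and asset names are invented for illustration):

```python
import math

def pick_lod(distance, bound_radius, fov_deg=40.0, image_width=2048):
    """Choose a drone LOD from its approximate on-screen radius in pixels."""
    pixels = (bound_radius * image_width
              / (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)))
    if pixels > 400:
        return "drone_hero"   # full-detail model with interior components
    if pixels > 100:
        return "drone_mid"
    if pixels > 20:
        return "drone_low"
    return "drone_card"       # distant drones as instanced proxies

print(pick_lod(distance=250.0, bound_radius=0.6))  # 'drone_card'
```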
Can you explain in detail the creation of Tower Bridge, the Shard, and London?
Tower Bridge, the Shard, and the surrounding area were created based on LIDAR scans and reference photography acquired while on location. Over roughly a two-week period, we shot reference on and around the bridge from bus, boat, helicopter, foot, rooftops, and from within the towers and walkways of the bridge itself. Initially, we attempted to keep the model of our bridge to a minimum, mirroring details north/south and east/west to reduce model cost. This proved to be less useful as the scope of the project increased and the number of all-CG shots kept climbing. Eventually the vast majority of the bridge needed to be hero’d out to accommodate the asymmetry of the original scans of the bridge.
How did you populate the Tower Bridge with people and vehicles?
Our Layout team populated the roadway of the bridge with both vehicles and people using Maya. For the debris and trash we needed to help dress the bridge, our team leveraged in-house instancer tools. Though the bridge is large, it was still manageable to place cars and cycles of people by hand in Maya.
Which sequence or shot was the most challenging?
BFB5060 was one of the most challenging shots in the film for our team. The shot starts out with Spider-Man in a black void with green vapor gas at his feet as he runs toward invisible, firing drones. As he uses his Spidey sense to make contact with the drones, he begins to disable the illusion and slowly reveal the walkway and the damage left in their wake. This was a very long shot that required extensive animation, sim, destruction, and FX to complete. The animation alone was a complex combination of multiple mocap takes and hand animation work. Although we had massive SUE shots that required complex sims, the sheer length of BFB5060 and its proximity to camera made it that much more challenging to complete.
Is there something specific that gives you some really short nights?
By the end of the project, the scope of the work had increased a fair amount. Anytime a large change needed to be accommodated, it generally meant the plans previously set upon needed to be redrawn. There were several nights of restless sleep on this one, spent scheming up new workflows to adapt to the shifting needs of the show.
What is your favorite shot or sequence?
There’s a sequence of shots of Spider-Man on fire as he flees a horde of drones chasing him, with explosions and missiles in the background. Though his animation performance is fairly gravity driven, with the exception of the blasts pushing Spider-Man around, there’s a lot of intensity to the beat that I really enjoyed constructing with our team and still enjoy watching in the theater.
What is your best memory on this show?
Early during production, I was asked a question on set about Beck’s performance while in London. I made a comment about how I felt he should use his pistol to shoot Spider-Man. It was quickly dismissed. Months later, when I saw an updated edit only a few weeks before delivery, Beck now used his pistol in an attempt to kill Peter Parker. It was a rare moment of vindication that helped bring some levity to a very challenging last couple of weeks.
How long have you worked on this show?
Since June of 2018.
What’s the VFX shots count?
320 shots.
What was the size of your team?
250 people. That number includes overhead positions such as facilities support, etc. 198 of those were direct artists and production crew over the course of the production. Given that people rolled on and off throughout, the max headcount topped out at 147 people during our most challenging two-week period.
What is your next project?
Good question. Fingers crossed it is an upcoming Marvel project!
A big thanks for your time.
WANT TO KNOW MORE?
Sony Pictures Imageworks: Dedicated page about SPIDER-MAN: FAR FROM HOME on Sony Pictures Imageworks website.
© Vincent Frei – The Art of VFX – 2019