In January, Joseph Kasparian told us about Hybride's work on ROGUE ONE: A STAR WARS STORY. He is back today to talk about THE GREAT WALL, his new collaboration with Industrial Light & Magic.
How did you and Hybride get involved on this show?
We have been working with ILM since 2013 on a good number of shows. More recently, the work we did on WARCRAFT required a lot of asset sharing, and we were able to build tools that enabled us to properly ingest complex data from ILM into our own structure. We were approached to work on THE GREAT WALL as it presented us with a similar type of collaboration.
How was the collaboration with director Zhang Yimou and VFX Supervisor Phil Brennan?
We worked with the ILM supervision team that was based in Singapore. They were the ones to review our shots before presenting them to Phil Brennan, along with work completed by other vendors under their supervision.
What was their approach to the visual effects?
The look and approach of the visual effects were mostly dictated by the style of the movie, where everything had to feel “big”. A lot of the extras were shot on HUGE sets. For Zhang Yimou and Phil Brennan, it was very important to create a look based on heavy artillery and war machinery. We constantly added layers of smoke on top of every shot to give them more depth and to add complexity to the generated environments.
How did you collaborate with Samir Hoon and the ILM teams?
We worked very closely with the ILM team in Singapore. They were in charge of creating the look development for the environments. Many huge assets were shared and were often updated as they evolved throughout the show. The wall, the valley and outer valley, all of the different soldiers and their weapons, balloons, lanterns and Kaifeng city were all part of what was shared with us.
What are the sequences made by Hybride?
In the Prepare for Battle sequence, the wall’s various defenses are engaged: platforms for the leaping crane warriors emerge from the wall; trebuchets rise to launch balls of fire; soldiers armed with arrows and other weapons take position on the wall. We produced CG elements such as a CG crane rig inside the wall (close-up), CG environments around the wall, animated CG trebuchets, CG soldiers on top of the wall in the background, and CG wall extensions.
In the Balloon Journey sequence, we integrated CG balloons seen in the background (partially inflated on the wall, and in the air with CG soldiers inside the baskets), produced CG fire and explosion enhancements (braziers), CG landscapes beyond the wall, CG soldiers prepping balloons and carrying material, and also created CG wall extensions.
For the Funeral sequence, we generated wall set extensions and rendered backgrounds for the valley and the outer valley. For the CG crowd multiplications on the wall, we needed to perform complex rotoscoping of the live action extras that were in the plates. Headbands and banners moving in the wind within rows of soldiers made the job even more difficult. We also generated lanterns and a few establishing shots entirely in CG.
In the Desert sequence, our work was mostly darkening the ground and adding mountains in the foreground and background. We also generated bows and arrows for several shots.
Finally, in the Kaifeng city sequence, we lit, rendered and comped the building assets provided by ILM. We also added CG crowds and several layers of smoke and fire.
How did you work with the art department to recreate the Great Wall?
We received assets and textures along with reference shots to match. Since the look and complexity of the wall would evolve throughout the show, we would often update the assets, including all of the set dressing (weapons, nets, props). We were also tasked with generating the courtyard at the base of the wall. For some of these shots, the generic textures we’d received didn’t quite hold up due to the camera’s proximity, so we decided to recreate the elements with an increased level of detail.
Can you explain the creation of The Great Wall and the huge environment around it in detail?
The Great Wall and the huge environment around it were both created by ILM Singapore. The assets were sent with reference shots to match, and once a version of the look development had been approved, we’d move on to the next stage (in context, through shots). Our main challenge was integrating the work that had been done by ILM (with their own tools) into our own structure, which meant reproducing the shaders and building a solid scene assembly pipeline.
Which part of the Great Wall was the most complicated to create and why?
Since the environment was huge, several different approaches were used to build the assets. The elements that we used were essentially defined by ILM Singapore, who was in charge of the look development.
The wall was very long and had different texture details depending on the camera’s proximity. Low-res maps with the least amount of input were used for the far background of the wall. Hi-res images with several maps driving complex shaders were used for close-up shots. For everything that had to do with the valley and outer valley, a mixture of hi-res textures and camera projection matte painting techniques was required to achieve the desired look. Since we were ingesting a lot of data from ILM, we had to build a solid scene assembly pipeline. Easy versioning of shared material was crucial in order to maintain control of the project.
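To give a rough idea of how texture detail can be driven by camera proximity, here is a minimal Python sketch; the asset names, distance thresholds and shader labels are hypothetical and not taken from Hybride's or ILM's actual pipeline.

# Hypothetical sketch: pick a texture/shader variant for a wall section
# based on its distance to the shot camera. Names and thresholds are
# illustrative only.
from dataclasses import dataclass

@dataclass
class WallSection:
    name: str
    distance_to_camera: float  # scene units

# (max distance, texture set, shader complexity), from close-up to far background
VARIANTS = [
    (50.0, "wall_hires_4k", "full_displacement"),
    (500.0, "wall_mid_2k", "bump_only"),
    (float("inf"), "wall_low_1k", "diffuse_only"),
]

def pick_variant(section: WallSection):
    """Return the first variant whose distance threshold covers the section."""
    for max_dist, textures, shader in VARIANTS:
        if section.distance_to_camera <= max_dist:
            return textures, shader

if __name__ == "__main__":
    for s in (WallSection("tower_A", 12.0), WallSection("wall_far_03", 2200.0)):
        print(s.name, pick_variant(s))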
How did you populate the Wall with crowds and weapons?
We received a file with all of the positioning information for the set dressing on the wall (static weapons, nets, racks of arrows, cranes), which helped us accurately match what had been designed by ILM Singapore for the shots that weren’t ours. Using our internal crowd animation tool called Horde, we then populated the shots with ILM soldiers. Horde is a Fabric Engine-based crowd tool that we refined while working on WARCRAFT to deal with ILM’s complex characters.
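Conceptually, that positioning file boils down to a list of asset names and transforms that get instanced into each shot. A small, hedged Python sketch of the idea follows; the JSON layout and the create_instance call are assumptions made for illustration, not the actual format shared between ILM and Hybride.

# Hypothetical sketch of ingesting a set-dressing placement file and
# instancing the referenced assets. The schema and the scene call are
# illustrative stand-ins.
import json

def create_instance(asset, translate, rotate, scale):
    # Stand-in for a DCC instancing command (Maya, Houdini, etc.).
    print(f"instance {asset}: t={translate} r={rotate} s={scale}")

def populate_from_file(path):
    with open(path) as f:
        # e.g. [{"asset": "trebuchet_A", "t": [...], "r": [...], "s": [...]}, ...]
        placements = json.load(f)
    for p in placements:
        create_instance(p["asset"], p["t"], p["r"], p.get("s", [1, 1, 1]))

# populate_from_file("wall_dressing.json")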
How did you handle the animation of the crowd?
We received some animation files from ILM, but we mostly generated the animations required by our system internally.
Did you share assets with ILM?
We shared almost all of the assets with ILM.
The funeral is a touching and beautiful sequence. Which references and indications did you receive for it?
We received artwork and reference photography showing us the general look that the Director wanted to achieve.
Can you explain the creation of the lanterns in detail?
A small number of real lanterns were shot for close-ups. We started by matching the shading and cloth animations of the lanterns that were on the plate. Once that was achieved, we generated a good number of variations that we published and called through a particle system. The fire flickering inside the lanterns was generated using live footage mapped on cards. Animated point lights were also attached to them to accurately generate lighting on the rendered geometry.
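As a loose illustration of that last step, the sketch below drives a point light's intensity from the brightness of the fire footage so the rendered lantern geometry flickers in sync with the card; the sampling function and values are hypothetical, not the actual setup.

# Hypothetical sketch: derive a flickering light intensity from the fire
# plate, frame by frame. frame_brightness is a stand-in for sampling the
# footage's luminance.
import random

def frame_brightness(frame):
    random.seed(frame)  # deterministic pseudo-flicker per frame
    return 0.6 + 0.4 * random.random()

def light_intensity(frame, base=5.0):
    return base * frame_brightness(frame)

for f in range(1001, 1006):
    print(f, round(light_intensity(f), 2))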
Can you tell us more about the animation challenges with so many lanterns?
The biggest animation challenge was creating a “controlled” chaotic look. We needed to sense the wall’s shape by looking at the lanterns in the sky. It was important to find the right balance of movement (translation and rotation) so that we’d really get the feeling that the lanterns were floating in the air.
With so many lanterns floating next to each other, we built a setup to control the scattering of the published animation caches. Variations in time offset, scaling, colors and neighbor awareness were some of the parameters the artists could modulate to help avoid repeats (similar lanterns doing exactly the same thing while next to each other).
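A minimal sketch of that kind of scatter control, assuming a flat list of lantern positions and a handful of published animation caches (the cache names, parameter ranges and neighbor test are hypothetical, not Hybride's actual setup):

# Hypothetical sketch: assign a published cloth cache, time offset, scale
# and tint to each scattered lantern while avoiding the same cache on
# neighbouring lanterns.
import math
import random

CACHES = ["lantern_cloth_v01", "lantern_cloth_v02", "lantern_cloth_v03", "lantern_cloth_v04"]

def assign_variations(points, neighbor_radius=2.0, seed=7):
    rng = random.Random(seed)
    assignments = []
    for p in points:
        # caches already used by nearby, previously assigned lanterns
        used_nearby = {
            a["cache"] for a, q in zip(assignments, points)
            if math.dist(q, p) < neighbor_radius
        }
        choices = [c for c in CACHES if c not in used_nearby] or CACHES
        assignments.append({
            "point": p,
            "cache": rng.choice(choices),
            "time_offset": rng.uniform(0, 48),  # frames
            "scale": rng.uniform(0.9, 1.1),
            "tint": (1.0, rng.uniform(0.75, 0.9), rng.uniform(0.5, 0.7)),
        })
    return assignments

if __name__ == "__main__":
    pts = [(i * 1.5, 0.0, 0.0) for i in range(6)]
    for a in assign_variations(pts):
        print(a["cache"], round(a["time_offset"], 1), round(a["scale"], 2))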
How was the balloon sequence filmed?
The Director shot all of the foreground action. The work we did for the courtyard and tower shots translated into crashed balloons with layers of smoke, soldiers working together to get balloons into the air, and villagers running everywhere. For the aerial shots, they built a rig and a platform with ropes and just a third of the actual balloon cloth. They planned to do the cloth extensions in CG because of the final size of the balloons and the ability to control the movement. Filming moving fabric came in handy for close-ups that showed just a bit of the balloon.
Can you tell us more about the creation of the balloons?
We started doing the layout and animation for all the shots while taking into consideration what had been done in previs. It mainly showed where the crashing balloons would be placed on the wall, and how the animation worked for a string of balloons as some burned while falling.
Once we got the feel of the sequence approved, we sent the basic animation file to Ghost FX, who took care of everything that had to do with simulation. They generated all of the cloth effects, including fire and smoke, for the whole sequence. We then received renders from them with all types of AOVs, which we comped on top of our backgrounds.
The 2D work was pretty extensive for that sequence since there were many layers of explosions, fire, and smoke that needed to blend in seamlessly with what had been shot in the foreground.
The comp scripts of the full CG establishing shots were pretty complex as they included almost every type of element generated in the movie: renders of the valley, outer valley, wall, courtyard, crowds, balloons flying, balloons burning as they fell, columns of smoke on the wall and wafts of smoke in the air…
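The layering logic of those scripts can be thought of as a back-to-front chain of "over" merges. Here is a simplified Python sketch of that ordering, using the elements listed above; the per-pixel over math assumes premultiplied RGBA and is only meant to show the stacking, not the actual comp scripts.

# Hypothetical sketch of back-to-front layering for a full-CG establishing
# shot. Each element is one premultiplied RGBA sample; the order is far to
# near.
def over(fg, bg):
    r, g, b, a = fg
    br, bgr, bb, ba = bg
    inv = 1.0 - a
    return (r + br * inv, g + bgr * inv, b + bb * inv, a + ba * inv)

LAYERS = [
    "outer_valley", "valley", "wall", "courtyard", "crowds",
    "balloons_flying", "balloons_burning", "smoke_columns", "smoke_wafts",
]

def comp(samples):
    # samples maps layer name -> premultiplied RGBA; nearer layers go over
    # the accumulated result of everything behind them
    result = (0.0, 0.0, 0.0, 0.0)
    for name in LAYERS:
        result = over(samples[name], result)
    return result

if __name__ == "__main__":
    dummy = {name: (0.1, 0.1, 0.1, 0.1) for name in LAYERS}
    print(tuple(round(c, 3) for c in comp(dummy)))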
For the foreground cloth extensions, we started by matchmoving the platform and wood structure that held the balloon. Once that was done, we sent animation files along with the camera to Ghost FX, who took care of the simulations and renders.
What do you keep from this experience?
There were many complex sequences on this show that required a great number of artists and vendors working together on the same shots. Learning how to share look development and different techniques at such a large scale was a great experience for the entire team.
How long have you worked on this show?
We worked a total of 13 months, from September 2015 to October 2016.
How many shots have you done?
We completed 86 shots on the film.
What was the size of your team?
96 employees worked on this project.
What is your next project?
We are currently working with ILM on VALERIAN (245 shots) and we produced VFX for KONG: SKULL ISLAND, which comes out March 10th. Of course, we have other on-going projects but we can’t mention them yet!
A big thanks for your time.
// WANT TO KNOW MORE?
– Hybride: Official website of Hybride.
© Vincent Frei – The Art of VFX – 2017