Published on Tuesday, 05 June 2012 Written by Adriene Hurst
To span the tremendous scope of work Cinesite were charged with on ‘John Carter of Mars’, Senior VFX Supervisor Sue Rowe led four teams of artists to design and build the environments, cities and aerial battles of Mars.
Cinesite’s scope of work, on over 800 of the movie's shots, handed their artists responsibility for the film’s four major environments – the two cities Helium and Zodanga, the aerial battles and flying ships, and the Thern sanctuary and effect. Sue Rowe and the teams started work on John Carter of Mars in 2010, supervising the shoot together first in the UK for six months, after which Sue supervised a further three months on her own when production shifted to Utah. The entire project took two and a half years, engaging more than 300 artists.
Sue explained that ‘John Carter’ called on the artists to play a larger than usual role in telling a complex, science-fiction story with visual effects. They were not only creating a background or supporting the action. They were using the assets, effects and environments to non-verbally depict the Martian technology that fantasy novelist Edgar Rice Burroughs had invented in high detail in his stories.
Helping them on this aspect of the project was the fact that the director, Andrew Stanton, was a lifelong fan of the original books. They worked with him on the visual representation of Burroughs’ Martian world, so that the actors did not have to explain it in dialogue and interfere with the dramatic, character-driven side of the tale.
About six months before Cinesite was involved with the project, previsualisation company Halon had worked with Andrew Stanton to give him a clear idea of what he could expect to see. However, technical previs – including precise set measurements, camera data and instructions about equipment – proved equally important to the effects teams because it helped them define the limitations of the sets and plan how to make the original previs actually become the movie. “Previs cameras, for example, sometimes move a lot faster than a real camera on a dolly, which tells us we need to over- or undercrank the camera, or retime the footage in post,” Sue said.
She feels VFX artists now need to meet camera crews half-way and fully understand the realities on the set. Included in the dense set data gathered for ‘John Carter’ were LIDAR scans, Total Station scans, HDRI photography and 360° 9K HDR stills captured with a Spheron camera system. She also snapped images of the camera set-ups, and data wranglers recorded the details. All of this data helped keep the artists’ work anchored in reality.
Life on Mars
While shooting in the UK for the first stage of production, most of it through the winter, Sue and Director of Photography Daniel Mindel were aware that getting these shots to match the expansive, hot, sun-bright desert would be a major undertaking in post production, so they cranked up the illumination on set with 20K lights.
However, only when she saw the scope and scale of the environment in Utah with her own eyes did she understand what they were up against. While the other supervisors had been working with her on set in the UK, she was the only one in Utah. She sent them 360° high-resolution images of the locations to help them understand exactly what the movie was expected to look like.
The light was so clear and bright that it was possible to see straight through to the distant background, creating a massive depth of field, contrasting with the depths of field of only about 10 feet they had captured on sets in England in grey, wintry conditions. When it came time in post to marry up a studio shot with footage from Utah, they often needed to remove sections of the practical UK set and replace them digitally to be able to apply the correct depth of field. In Utah the light falloff, haze and cloud interfering with the light and atmosphere were negligible. Instead, dust and heat haze had to be added.
While the artists were always aware of just how much of the technical background to Burroughs’ novels they had to tell visually, this was not the only aspect of the project that set it apart from many other modern fantasy movies. Burroughs was writing in an era when sci-fi technology consisted of known materials like wood, glass and metal, even if the machinery were capable of doing amazing things in other-worldly places. Thus, the teams had to scale back somewhat their ideas of how the buildings, vehicles and machines should look to match the looks that Andrew wanted for ‘John Carter’.
The team looking after the aerial battles, supervised by Ben Shepherd, took particular care with the flying ships of Mars built by the story’s two opposing cultures, Helium and Zodanga. The book describes them as ‘flying on light’ – sailing on air instead of the ocean.
Although the team received inspiring and beautiful artwork from concept artist Ryan Church, Andrew worked with the Cinesite team, one-to-one, on how to make these galleons truly feel like science fiction versions of vessels from ‘Master and Commander’. They took reference from actual galleons at sea to capture their motion, their speed and the correct rotation of the rigging to give them a mechanical quality unlike a spaceship, although their underlying flight systems had to be very sophisticated.
The ships are built of glass and rope, and fire on each other with cannon-style explosions. Their wings are about 100m long, made of tiles resembling solar panels all moving to attract light. Regardless of the ship’s orientation, the tiles turn their faces toward the sun. These movements and the way the sunlight reflected off the panels resulted in some of the moments in the film that Sue is proudest of.
The artists used an anisotropic shader of the kind typically used for animal fur, which produces a colour that shifts across the surface depending on the light shining on it. Their shader had a purple through gold to blue colour gamut. As the animated ships rotated and turned past camera, the panels would respond to the light with an effect like fish scales. It was a good example of realizing the look of the original story by employing high-resolution CG techniques.
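The angle-driven colour shift can be pictured as a facing-ratio ramp: the more squarely a panel faces the light, the further its hue slides along the gamut. The Python sketch below is illustrative only – the colour values and the blend split are invented, not Cinesite's shader.

```python
# Illustrative sketch of an anisotropic-style colour ramp: a panel's hue
# blends from purple (grazing) through gold to blue (facing the light).
# Colour values and thresholds are invented, not Cinesite's shader.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    mag = sum(x * x for x in v) ** 0.5
    return tuple(x / mag for x in v)

def lerp(c1, c2, t):
    return tuple(a + (b - a) * t for a, b in zip(c1, c2))

PURPLE = (0.55, 0.20, 0.75)
GOLD = (0.90, 0.70, 0.20)
BLUE = (0.20, 0.40, 0.90)

def panel_colour(normal, light_dir):
    """Blend through the purple -> gold -> blue gamut by facing ratio."""
    facing = max(0.0, dot(normalize(normal), normalize(light_dir)))
    if facing < 0.5:
        return lerp(PURPLE, GOLD, facing / 0.5)    # grazing angles
    return lerp(GOLD, BLUE, (facing - 0.5) / 0.5)  # square-on to the light

# Edge-on panels read purple; panels square-on to the sun read blue.
print(panel_colour((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
print(panel_colour((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
```

As the ship rotates, each tile's normal sweeps through this ramp, which is what produces the fish-scale shimmer across the wing.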
Andrew particularly valued Cinesite’s environmental work on the ground in Utah. Head of VFX Technology Michele Sciolette developed software that enabled the team to use photogrammetric survey imagery to recreate the ground terrain for set locations at Big Water in central Utah. For example, they used this information to replace the backgrounds for a second unit shoot of a high-action sequence in which John Carter and Dejah fight their way from the Thark encampment. When Sue arrived at the location, expecting to see a built set, she was told that it would have been too big to build and would have to be constructed entirely digitally.
The buildings were not the only digital element. By capturing numerous images from the ground and from helicopters and incorporating map data, they could accurately recreate the whole environment in CG, use real images of the ground and accurately place the CG Thark encampment buildings. More importantly, Andrew could shoot the extreme action with a great deal of freedom – with moving cameras, handheld or Steadicam, high or low or very close to the action – as John Carter leapt from the Thark parapet wall to the Zodangan ship, with Dejah dangling in mid-air as he rescued her.
Near and Far
In Burroughs’ day, the first part of the last century, people generally believed the Martian surface was crisscrossed with canals. These are now understood to be an optical illusion, but Andrew wanted to uphold the idea of canals. In the movie’s opening sequences, the viewer sees Mars from deep space, for which Cinesite used NASA photos enhanced in Photoshop and reprojected in Maya. As the camera hurtles towards the planet, progressively revealing greater levels of detail by powers of ten, the first details the viewer sees are swathes cut into the planet’s surface. These deep scars turn out to be made by the 674 myriapod legs of the mobile mining machine, the city of Zodanga, which mines the planet for fuel as it travels over the terrain.
The city’s complex construction is therefore an important story point, portrayed only visually and never actually explained in the script. “Also, once John Carter arrives on Mars, the early sequences introduce the viewer to Zodanga, to Helium, to a battle, a sandstorm – and then suddenly jump back to New York where the story begins, risking confusion. However, Andrew took care to return to each scene later in the film to hold the story together,” Sue said.
The teams recognized as soon as they read through the script that they would have to handle a full range of resolutions. Their director wanted extreme close-ups of the mechanical details of the Zodangan city, for example, as well as the wider establishers across the Martian desert. Within Cinesite’s teams, the supervisor artists needed to organize their assets to accommodate all camera locations, and devised efficient ways to manage texturing and render times to avoid grinding to a stop at deadline, which can be a genuine problem at many studios.
Sue said, “Cinesite is a medium-sized facility and the teams had to consider early on that they didn’t have limitless render power and time to build their cities and assets. The Zodanga supervisor Jon Neill, coming from a solid 3D background with an artist’s eye, ensured when building the city that he built the sections the story returned to most often, such as the hangar-deck, as the team’s hero assets. He and I worked together on set and when rushes came through, we mocked up very high resolution Photoshop stills of those key areas. We were able to work out from the full 360° environment the most trafficked section of around 200° to focus his team’s work on.
“The ‘power of ten’ shot needed particular attention as the camera jumps from resolution to resolution. Jon carefully gauged how to bring the camera effectively from very far away, seeing the tiny distant city, to suddenly shifting into the midst of it. He switched off those complex myriapod legs, for example, wherever possible, but also planned how to globally illuminate the scene and where to apply high resolution textures.”
City on the Move
Those legs were Zodanga’s main challenge. Because Andrew wanted the city’s movement to resemble a centipede, the animation team developed walk cycles coordinating each leg to consecutively hit the ground. They also assumed that they could simply transfer these cycles uniformly from distant shots to the backgrounds of close-ups of the actors but the results were not always effective. They often needed to slow the animation down in the close-ups and speed it up for the distant views. Secondary animation was added for the vibrations, the chains and other machinery wobbling and shaking as the whole city stops abruptly.
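The centipede-style coordination amounts to running one walk cycle per leg, each offset by a small phase so the ground contacts ripple down the city's flank. A minimal Python sketch of the idea – cycle length and lift height are invented, with 674 taken from the leg count mentioned earlier:

```python
# Minimal sketch of phase-offset walk cycles, centipede-fashion: every leg
# runs the same cycle, delayed slightly relative to its neighbour.
# Cycle length and lift height are illustrative assumptions.
import math

def leg_height(leg_index, t, num_legs=674, cycle_len=1.0, lift=0.5):
    """Height of one leg at time t; each leg lags the previous one."""
    phase = (t / cycle_len - leg_index / num_legs) % 1.0
    if phase < 0.5:
        return 0.0  # planted on the ground for the first half of the cycle
    # Lifted in a sine arc for the second half of the cycle.
    return lift * math.sin((phase - 0.5) * 2.0 * math.pi)

# At any instant some legs are planted while neighbours are mid-swing,
# so the ground contacts ripple along the city.
planted = sum(1 for i in range(674) if leg_height(i, 0.0) == 0.0)
print(planted)  # roughly half the legs are planted at any moment
```

Retiming a shot, as the animators did for close-ups, is then just a matter of scaling `cycle_len` rather than re-animating every leg.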
Animation supervisor Catherine Elvidge worked on a number of cached animation cycles for the different camera positions and used them where they looked best, but at times they also needed to create a cycle to suit a particular shot, or even a full rig for selected legs and then animate them by hand. If John Carter was speaking in the foreground and a leg suddenly rose behind him, the result could be distracting.
CG sequence supervisor Axel Akesson said, “Rendering the city and legs needed close collaboration between the lighting and layout departments. Advanced visibility analysis was used to determine what legs we actually see in any given frame. The legs that we don't see could safely be culled and not show up in expensive global illumination calculations, for example.
“We also relied on an automated ‘level of detail’ solution to limit geometric complexity on legs far away from the camera. The Zodanga layout department did a lot of manual work on top of these automatic solutions to determine the most efficient way to push an insane amount of geometry through our rendering pipeline while still achieving the best possible image."
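A toy version of the culling and level-of-detail logic Axel describes might look like the following; the frustum test, distance thresholds and `Leg` class are invented for illustration, not Cinesite's pipeline code.

```python
# Toy sketch of leg culling plus distance-based level of detail.
# Frustum maths and thresholds are illustrative assumptions.

class Leg:
    def __init__(self, position):
        self.position = position  # (x, y, z) in camera space, z into screen

def visible(leg, near=1.0, far=2000.0, half_fov_slope=0.5):
    """Crude frustum test: in front of the camera, inside a widening wedge."""
    x, y, z = leg.position
    if not (near < z < far):
        return False  # behind the camera or beyond the far clip
    return abs(x) < z * half_fov_slope

def lod_level(leg):
    """Pick a level of detail from camera distance (0 = most detailed)."""
    z = leg.position[2]
    if z < 50.0:
        return 0  # full-resolution hero geometry
    if z < 500.0:
        return 1  # mid-resolution proxy
    return 2      # distant silhouette only

legs = [Leg((0.0, 0.0, 10.0)),     # close, on axis: hero detail
        Leg((0.0, 0.0, 800.0)),    # far away: silhouette LOD
        Leg((0.0, 0.0, -5.0)),     # behind camera: culled
        Leg((900.0, 0.0, 100.0))]  # far off axis: culled

render_list = [(lod_level(l), l) for l in legs if visible(l)]
print([lod for lod, _ in render_list])  # [0, 2]
```

Culled legs never enter the render, so they also drop out of the expensive global illumination pass, which is the saving Axel describes.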
In another quite different environment, the city of Helium presented a ‘worst case CG scenario’ in some respects, according to Sue. “It is a huge city with thousands of individual buildings and a glass cathedral-like ‘Palace of Light’ right in the centre of it,” she said. “CG handles glass beautifully, but it is incredibly time-consuming to create. This is also where the film’s final battle takes place, comprising about 300 shots and culminating with a Zodangan ship crashing into the Palace.”
Christian Irles supervised the work on Helium. The Palace had to be modelled in full CG for viewing from the outside and also used as a set extension for live-action shot on an interior set. It has solid, vertical ribs with mirrors and a lens mounted at the top of the structure, extending the small amount of set by hundreds of feet.
Since the glass needed to be transparent, the exterior environment also had to be rendered into the scene, along with reflections and refractions of CG and the live action. To handle the ship crashing through the glass walls, some panels were constructed with additional geometry which would work better for shattering. The glass itself needed to resemble the frosted glass that had been on set inside the palace, but also look beautiful – all of which took lots of time and testing.
Smaller assets such as the lenses in the palace were a challenge, as they required a very specific, handmade look matching the organic feel that Andrew Stanton wanted Helium to have. Sue explained, “The lenses were made as a solid piece of glass so the viewer could see the moons above, but modelled in Maya and textured to give the look of thick, hand-blown, aged glass that reduced a certain amount of visibility. In 2D, using Nuke, we distorted the background to make it look like the sky was refracted through the uneven surface of the glass.
“Glass is computationally very expensive to render. You can use ray-tracing, rely on mental ray and various other techniques - we tried them all. In the end we selected a few hero shots to apply ray-tracing to but for the majority of the remaining shots, we pre-cached the ray-tracing, working out first where most of the angles of light were going to be and then, using point cloud data, re-projected this information,” Sue said.
“It’s not a ground-breaking sort of technique but a very practical way of making sure you can put out 200 or 300 shots at deadline time without upsetting everyone else’s schedule at the facility - as well as the two cities, we were also working out the Thern effect for other critical sections of the film.”
The live action for the final battle sequence, taking place inside the elegant glass Palace, was shot inside an old Woolworth’s warehouse in a north London suburb. Nevertheless, Sue came fully prepared for each shot, able to show the camera operators and crew exactly what the audience would eventually see. While she was fully aware that for the crew on set, their world was an expanse of green screen containing a small section of the set, she also knew that later on it would all be replaced with the cathedral with its magical lenses shining in the moonlight.
Therefore, her job in this case was to instruct them to pan off the two actors and tilt the camera toward the ceiling of the gloomy warehouse, exactly where those lenses would be. She did meet with some resistance, and the operators’ own ideas about what was going to look best. It forced her to be absolutely confident of the previs and, further to that, the precise technical previs.
Thern is the glowing blue phenomenon that reveals the power and presence of the spirits of Mars. “When Andrew described the look he was after, he said Thern should be like a Meccano set, tiny building blocks growing from a source point. ‘It must not be too organic but needs to look mathematical – as if it were building itself into a pre-defined structure.’” Sue said.
“The original concepts were monochrome, but the Thern gun and beam had already been designed to be blue. During the post phase, Andrew wanted to add a warm glow to the room as if light from outside was finding its way into the sanctuary. We found ourselves with a palette of turquoise and amber. I have a simple adage for sci-fi design problems - if you can find something in nature with those colours then I know we can make it work. We did some internet research on these colours and found them in images of deep space nebulae. We showed these to the director and he loved the ethereal quality.”
The challenge was to make something grow in a mathematical way but also at a minuscule level, from extreme close-ups to architectural wide angles. How do you create something that can start small then grow rapidly to form a room hundreds of square feet wide? “If we were to build this in a traditional way by modelling, it would have made the scenes too big to work with,” Sue explained. “We needed to build it using animating structures, but it also had to be smart in the way it grew so that it wouldn't be too geometry-heavy.
“We approached the challenge procedurally. One of the first bits of programming computer animation students learn is how to build a CG tree, based on a number of ‘if-then’ statements that grow a computer-generated tree, each of which will be slightly different due to slight changes in the algorithms used. We approached the growth of the Thern in the same way. The challenges came when we needed it to animate in a certain way as well as build and grow predictably for our modellers.”
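The ‘if-then’ CG tree exercise Sue mentions can be sketched in a few lines of Python: a recursive rule set in which seeded random variation makes every tree – or Thern tendril – slightly different. Depth, lengths and branching factor here are invented for illustration.

```python
# A miniature of the 'if-then' growth idea: recursive branching with
# seeded random variation, so every run of the rules gives a slightly
# different tree. Depth, lengths and branching factor are invented.
import random

def grow(depth, length, rng):
    """Return the branch lengths of one tree grown from simple rules."""
    if depth == 0 or length < 0.1:  # the 'if' that stops growth
        return []
    branches = [length]
    for _ in range(2):  # the 'then': spawn two shorter children
        child_length = length * rng.uniform(0.5, 0.8)
        branches.extend(grow(depth - 1, child_length, rng))
    return branches

rng = random.Random(42)  # seeded, so this particular tree is repeatable
tree = grow(depth=4, length=1.0, rng=rng)
print(len(tree))  # a full 4-level tree has 15 branches
```

Seeding the generator is what makes a procedural structure repeatable from shot to shot while still looking organically varied.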
The Thern underwent a number of growth stages. The first was the scaffolding, the main body of the Thern. Then they added a secondary layer of fine, hair-like growth, which thickened out the structure and added another layer of interest to the animation. As the first layer solidifies, the second layer grows rapidly over the main structure, much like iron filings move as a magnet travels underneath. “Finally, tiny fingers of Thern branch out seeking the next surface to touch,” Sue said. “The effect is creepy, organic and mathematical. Six months and some more grey hair, and the job was done!
“In some cases we used geometry exported from Houdini into Maya for shots where there was no animation, or where something so specific needed to happen that it could not be controlled procedurally. Other solutions are out there now that we know what the Thern looks like. In fact, one of the hardest things was developing the look at the same time as creating it. We probably spent longer getting to our final look because the design phase was running at the same time as solving the technical challenges of building something so unusual.”
Sue was on set every day of the shoot in both the UK and Utah, but once production was over she spent every day working with the specialist supervisors. Cinesite has several 2K screening theatres, where they worked together as post production progressed. Each team had its own CG lead and artists, and operated as if it were working on its own complete movie. In the beginning, while they were still developing ideas and approaches to pitch to Andrew, who was working from California, she could spend a full day at a time with each team, but eventually it became a non-stop round of meetings.
Open communication was crucial. In those later stages she was meeting all four supervisors and the director every day, and was familiar enough with what the teams were doing to be able to ask a quick question at any time. Images could be sent overnight or by email. They used remote collaboration tools like Cinesync to share files with Andrew, allowing him to make illustrated notes on frames while Sue and the teams were reviewing.
The shooting format for this film was anamorphic, which is quite unusual for an effects-heavy film because the lens distortions inherent in the very wide, ‘letterbox’ frame create challenges for the artists tracking the images. More often, such films are shot in Super 35 format and the middle section of the image is simply cropped to achieve the letterbox look.
Sue believes that tracking artists are critical to the success of VFX in any project. “It is a key skill to master – not easy or exciting perhaps, but learning to do it well means you have a thorough understanding of what it takes to make visual effects work. But Andrew and the DP Daniel Mindel decided to shoot with anamorphic lenses to produce a truer, filmic look. The decision to post-dimensionalise the entire film only came later, once the shoot was underway and, of course, the lenses and camera operators couldn’t be changed in spite of the problems it would cause for the extensive tracking required for stereo conversion.”
A typical lens used for John Carter was 40mm, producing considerable distortion at the edges of the frame. Any CG buildings and set extensions the team created would need to be precisely bowed and distorted to match the plate containing the physical set. This meant being prepared to match every lens used on the shoot. A lens grid was created for each of these by shooting a graph picture with the full range of lenses. The task had to be repeated for different depths of field and the range of focus used.
For the UK shoot, the crews used about 40 different lenses while working with first and second unit. When they went to Utah, they were faced with an entirely new set of lenses, and again when they returned for a re-shoot. Not surprisingly, a dedicated team was formed to build a lens database. Michele Sciolette wrote a special piece of distortion software. The VFX artists could take the lens ID recognised in Nuke and type the name into the software, which would then apply the appropriate distortion. This program represented three or four months’ work, but it saved a lot of time over the whole project.
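The lens-database workflow can be pictured as a lookup keyed by lens ID that returns distortion coefficients, which are then applied as a radial warp. The Python sketch below is illustrative only: the lens names and coefficients are invented, and Cinesite's real tool worked from measured grid shoots rather than hand-picked numbers.

```python
# Illustrative sketch of a lens-distortion lookup: invented lens IDs with
# Brown-Conrady style radial coefficients, applied to normalised points.

LENS_DB = {
    "anamorphic_40mm": {"k1": -0.12, "k2": 0.01},
    "anamorphic_75mm": {"k1": -0.05, "k2": 0.002},
}

def distort(point, lens_id):
    """Warp a normalised image point by the lens's radial distortion."""
    k = LENS_DB[lens_id]
    x, y = point
    r2 = x * x + y * y  # squared distance from the optical centre
    scale = 1.0 + k["k1"] * r2 + k["k2"] * r2 * r2
    return (x * scale, y * scale)

# The centre of frame is untouched; the frame edge bows noticeably, which
# is the warp CG set extensions must match before compositing over a plate.
print(distort((0.0, 0.0), "anamorphic_40mm"))
print(distort((1.0, 0.0), "anamorphic_40mm"))
```

Keying the lookup by the lens ID recognised in Nuke is what let artists apply the right warp without re-measuring anything per shot.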
Stereoscopic supervisor Scott Willman at Cinesite oversaw the conversion of 87 minutes of the movie. The usual conversion technique, the team noted, involves separating layers using roto and then pushing or pulling them to certain depths by grading a depth map. Once this is in place, a series of filters is used to simulate the shape and internal dimension of an object. However, they believed this approach prevents artists from quickly achieving correct spatial relationships and natural dimension in their scenes.
To overcome these limitations, they decided that instead of manually placing objects in space, it was more sensible to use animated geometry that they could track and position in the scene and render through virtual stereo cameras. They felt that this allowed them to place all of the objects in the set in their proper location in 3D space more accurately, so that correct scale perception was maintained.
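In outline, the approach amounts to projecting the tracked scene through two virtual cameras separated by an interaxial distance expressed in real set units. A pinhole-model Python sketch, where the focal length, interaxial distance and point depths are illustrative assumptions rather than Cinesite's actual values:

```python
# Sketch of a virtual stereo pair: two pinhole cameras offset along x by
# an interaxial distance, projecting the same 3D point into each eye.
# Focal length and interaxial values are illustrative assumptions.

def project(point, cam_x, focal=35.0):
    """Pinhole projection of a camera-space point for an eye at x=cam_x."""
    x, y, z = point
    return (focal * (x - cam_x) / z, focal * y / z)

def stereo_pair(point, interaxial=0.065):
    """Left- and right-eye projections of one point."""
    half = interaxial / 2.0
    return project(point, -half), project(point, +half)

# Horizontal disparity shrinks with depth, which is what preserves a
# correct sense of scale in the converted shots.
near_left, near_right = stereo_pair((0.0, 0.0, 2.0))
far_left, far_right = stereo_pair((0.0, 0.0, 100.0))
print(near_left[0] - near_right[0])  # large disparity for a close point
print(far_left[0] - far_right[0])    # tiny disparity for a distant point
```

Because the interaxial value is in the same units as the tracked set, dialling it feels like moving real cameras, which is the advantage Scott describes next.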
By laying the scene out in 3D space, shooting became more natural too. Scott said, “We could use the same cameras, lens data, and animation from the actual set. When we then dialed our stereo interaxial distance, it was in measurements that made sense to the scale of the physical set.
“Another advantage of using the tracked VFX cameras was that we were able to render CG layers in stereo and have them fit seamlessly into the converted plate elements. This was particularly important when Carter physically interacts with four-armed Tharks. In typical 2D visual effects, holdouts would suffice. But in 3D, the position of each CG limb must be correctly placed in depth relative to the converted plate element.”
www.cinesite.com
Words: Adriene Hurst
Images: Courtesy of Walt Disney Pictures