Iron Man

Initially, I thought that for scenes that do not require the suit to do much, the actor would simply wear a real suit, and that for scenes where the suit has to transform by shifting and bringing out weapons, a cgi character would be used. However, I found out that cgi was also integrated into scenes where the suit does not have to transform. This is because the weight of the real suit made it hard for the actor to perform the full range of motions.
ILM then had to create partial versions of the suit so that the actor could remove sections of it, which would later be replaced with cgi.

Image

An example would be the picture above, where Robert only wears the top half of the suit.

Image

Parts of the suit above were done in cgi and blended well with the real suit.

Battleship

Battleship was a movie that I caught last year. There were quite a number of people who thought that this was a bad movie, but I thought otherwise. I actually enjoyed the movie, even though I found Rihanna’s acting pretty lousy.

The visual effects were stunning, as expected from ILM. As a naval war and sci-fi action blockbuster, this movie had to rely heavily on visual effects and cgi. What I wanted to find out was how the Shredder (an intelligent destructive sphere that the aliens send out) was made.

The Shredder basically looks like a combination of a jet turbine and a series of chainsaws, with an intelligence source in the middle. Its purpose is to neutralise anything it perceives as a threat, such as military weapons and vehicles. The challenge ILM had was to inject these machines with character. Riggers had to provide controls for every tooth and joint so that the shredders could exhibit a more menacing silhouette when exterminating threats. The shredders were also animated at high speed to demonstrate power. These simple and straightforward techniques helped convey character, which in turn heightened the believability of the scenes.

Another challenge that ILM faced was interaction with cg water. Being a film about naval warfare, it was inevitable that there would be a lot of water simulation. A year-long effort called the “Battleship Project” was started by the vfx team in 2010. The goal was to capture the full lifespan of a water droplet. For example, if they wanted to create a waterfall, it would start out as a simulated mesh, with all of the appropriate collisions as it bounces along surfaces. The streams then begin to break up into smaller clusters, then into tiny droplets, and finally into mist. Along that progression from dense water to mist, the particles become progressively more influenced by air fields.

This project allowed the animators to research and go in depth into the behaviour of water, which enabled them to create more realistic water.
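To picture that droplet lifespan, here is a minimal toy sketch of the idea in Python, assuming a simple particle representation; the stage names, break-up probability and air/wind values are my own illustrations, not ILM’s actual pipeline.

```python
# Toy sketch of the "lifespan of a water droplet" idea: water starts as a
# dense simulated body, breaks into clusters, then droplets, then mist, and
# the finer it gets the more it is pushed around by the air (wind) field.
# All names and thresholds here are illustrative, not ILM's actual pipeline.
import random

STAGES = ["sheet", "cluster", "droplet", "mist"]   # coarse -> fine
AIR_INFLUENCE = {"sheet": 0.05, "cluster": 0.2, "droplet": 0.6, "mist": 1.0}
GRAVITY = -9.8
WIND = 2.0           # simple constant "air field" along x
DT = 1.0 / 24.0      # one step per frame

class WaterParticle:
    def __init__(self):
        self.stage = 0                       # index into STAGES
        self.pos = [0.0, 10.0]
        self.vel = [0.0, 0.0]

    def step(self):
        stage = STAGES[self.stage]
        # Finer stages respond more to the air field, less to gravity.
        k = AIR_INFLUENCE[stage]
        self.vel[0] += k * WIND * DT
        self.vel[1] += GRAVITY * DT * (1.0 - 0.9 * k)   # mist barely falls
        self.pos[0] += self.vel[0] * DT
        self.pos[1] += self.vel[1] * DT
        # Randomly break up into the next, finer stage over time.
        if self.stage < len(STAGES) - 1 and random.random() < 0.1:
            self.stage += 1

p = WaterParticle()
for frame in range(48):
    p.step()
print(STAGES[p.stage], p.pos)
```

The point it tries to show is the one above: the finer the water becomes, the more the air field takes over from gravity and momentum.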

The Vampire Diaries

This is one of the less painful blog posts to write because I really like this show. Caleb called it a “girly girl show”, but what does he know.

Even though a large part of the effects were special effects and visual effects are not heavily incorporated into this show, they are essential and the show cannot do without them. For example, the veins that appear on a vampire’s face when he or she is angry, aroused or conflicted were done in post using After Effects and Discreet boxes.

Image

Image

The veins were made by generating a 3D mapping of the skin. Any adjustments and refinements made to the veins would automatically run through a module that conforms them to the facial movements of the actor.

Elements such as fangs, blood and bite wounds were tracked onto the actor’s face. Having said that, the bite wounds, blood and fangs can also be done using prosthetics.

Image

Markers were placed on the actor’s face for tracking purposes.
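As a rough illustration of how tracked markers can drive an overlay like veins or a bite wound, here is a small Python sketch: fit a transform from the markers’ reference positions to their positions in the current frame, then apply that same transform to the overlay points. The affine fit and the made-up marker coordinates are my own simplification of how a compositing tracker behaves, not the show’s actual setup.

```python
# Rough sketch of marker-based tracking for a facial overlay: solve for the
# affine transform that maps the markers' reference positions to where they
# are in the current frame, then move the overlay (e.g. painted veins) with
# that same transform. A simplification of what a compositing tracker does.
import numpy as np

def fit_affine(ref, cur):
    """Least-squares 2D affine transform mapping ref points onto cur points."""
    A = np.hstack([ref, np.ones((len(ref), 1))])   # (N, 3): x, y, 1
    M, *_ = np.linalg.lstsq(A, cur, rcond=None)    # (3, 2) affine matrix
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Marker positions on the face in a reference frame, and in the current frame
# after the actor turns her head slightly (made-up numbers).
ref_markers = np.array([[100.0, 200.0], [180.0, 205.0], [140.0, 260.0]])
cur_markers = np.array([[112.0, 198.0], [191.0, 207.0], [151.0, 259.0]])

veins = np.array([[120.0, 220.0], [150.0, 230.0], [165.0, 245.0]])  # overlay pts
M = fit_affine(ref_markers, cur_markers)
print(apply_affine(M, veins))   # where to draw the veins this frame
```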

Coloured contact lenses were also used to enhance the look and create a more believable vampire.

3D animation is also incorporated into this show.

Image

Image

For example, the feathers that the witch levitated were created in Maya.

This is another one of those shows with pretty straightforward visual effects. Even though they are simple, they have helped to enhance the believability of the show.

Avatar – Lighting and Rendering

Avatar is one of the most renowned 3D movies because of its groundbreaking visual effects.

I did not catch the movie in stereoscopic 3D because, at the time it was showing, I was still young and stupid. But at least I did watch the movie.

There are a lot of visual effects incorporated into this movie, but I’d like to talk specifically about its lighting and rendering.

The main visual effects team responsible for Avatar was Weta Digital. However, other production houses, such as ILM, also worked with them on the visual effects of the film.

It might be easy for artists to light a single object photo-realistically, but a heavily forested environment such as Pandora could not be lit just by placing lights. Firstly, the artists had to calculate the occlusion, shadows and diffuse lighting for the environment. Next, they took high dynamic range images from New Zealand and used image-based lighting to illuminate the scene. Finally, lights were added to light up the different areas further and give the environment a more realistic look.
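For a rough sense of what image-based lighting does under the hood, here is a small hedged sketch in Python: it estimates the diffuse light falling on a surface by taking a cosine-weighted average of an HDR environment map over the hemisphere around the surface normal. The lat-long mapping, sample count and the fake “HDRI” at the end are my own assumptions, not Weta’s setup.

```python
# Hedged sketch of image-based lighting: approximate the diffuse light on a
# surface by averaging an HDR environment map over the hemisphere around the
# surface normal, weighted by the cosine of the incidence angle.
# Purely illustrative; not any studio's actual lighting pipeline.
import numpy as np

def diffuse_from_hdri(hdri, normal, samples=2048, seed=0):
    """hdri: (H, W, 3) lat-long HDR image, y-up; normal: unit 3-vector."""
    h, w, _ = hdri.shape
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # random unit directions
    cos = d @ normal
    d, cos = d[cos > 0], cos[cos > 0]               # hemisphere facing the normal
    # Convert each direction to a lat-long pixel coordinate.
    u = ((np.arctan2(d[:, 0], -d[:, 2]) / (2 * np.pi)) + 0.5) * (w - 1)
    v = (np.arccos(np.clip(d[:, 1], -1, 1)) / np.pi) * (h - 1)
    radiance = hdri[v.astype(int), u.astype(int)]
    # Cosine-weighted average of the environment ~ diffuse lighting (up to a constant).
    return (radiance * cos[:, None]).sum(axis=0) / cos.sum()

# Fake "HDRI": bright bluish sky in the top half, dark ground in the bottom.
env = np.zeros((64, 128, 3))
env[:32] = [2.0, 2.0, 2.5]
env[32:] = 0.1
print(diffuse_from_hdri(env, np.array([0.0, 1.0, 0.0])))   # surface facing up
```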

pandora

After lighting the scene, the next thing they had to do was render it. The poly count of a single tree in a scene could easily go up to a million. With Weta Digital’s high-speed computers, rendering it close up would be no problem. However, when the tree is far away it becomes small relative to the buckets within RenderMan, making it slower to render. Hence, the visual effects artists employed a technique called stochastic pruning, in which they retain the shape of the object but remove faces as it moves into the distance. A tree that had a million polygons could be reduced to 20-30 polygons.
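To make the idea of stochastic pruning more concrete, here is a toy sketch in Python under my own assumptions: leaves are kept with a probability that falls off with distance, and the survivors are scaled up so the tree’s overall coverage and silhouette stay roughly the same. The keep-ratio curve and leaf representation are illustrative, not Weta’s actual implementation.

```python
# Toy sketch of stochastic pruning: as the tree recedes, randomly drop a
# growing fraction of its small elements (leaves/twigs) and scale up the
# survivors so overall coverage is preserved. Illustrative constants only.
import random

def prune(leaves, distance, full_detail_dist=10.0):
    """leaves: list of (position, size) tuples. Returns a pruned copy."""
    # Keep everything up close; keep an ever smaller fraction farther away.
    keep_ratio = min(1.0, (full_detail_dist / distance) ** 2)
    kept = [leaf for leaf in leaves if random.random() < keep_ratio]
    if not kept:                      # always keep at least one stand-in
        kept = [random.choice(leaves)]
    # Scale surviving leaves so the total covered area stays about the same.
    area_scale = (len(leaves) / len(kept)) ** 0.5
    return [(pos, size * area_scale) for pos, size in kept]

# A toy "tree" of 100,000 leaves viewed from 1,000 units away collapses to a
# handful of oversized stand-in leaves, in the spirit of the million-to-dozens
# reduction described above.
tree = [((random.random(), random.random(), random.random()), 0.01)
        for _ in range(100_000)]
far_tree = prune(tree, distance=1000.0)
print(len(far_tree), "leaves kept")
```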

Inception

I think the reason Inception is such a successful film is its inventive storyline and the great visual effects that go hand in hand with it.

Being a film about dreams gave it many opportunities to create visuals that would be impractical in reality.

Of all the visual effects sequences, the one that stood out the most for me was the scene where Paris folds up into a cube.

folding paris

In order for Double Negative to create such a complex cgi sequence, two weeks’ worth of extensive documentation of the locations was done to achieve a photo-realistic model of four Parisian apartment blocks. The cars and people in the scene were also cgi models. Apart from that, the visual effects team had to formulate cheats, such as hiding buildings behind other geometry, to ensure the smooth folding of the buildings.

Without the use of visual effects it would be much harder to create the Paris folding scene. Even though I think it could be done by building a miniature set and folding the buildings animatronically, it still would not look as photo-realistic as well-executed cgi. The visual effects in the movie made the dream sequences seem possible.

Hansel and Gretel

Recently, I was given the opportunity by the school to catch Hansel and Gretel in IMAX 3D. Initially I thought it was a children’s film, but it turned out to be a gory, action-packed movie.

After watching the movie, I would have thought that the troll was a cg character, but to my surprise it was mostly done with animatronics. Only a small percentage of the troll shots were done fully with cgi.

The transition from the human form to the witch form was executed with prosthetics and make-up effects incorporated with cgi.

Even though the cg was pretty straightforward, it heightened the believability of each scene and ultimately the entire film.

After doing research on this film, I also acquired a better understanding of one of the difficulties faced while shooting a movie in 3D. In a normal movie, camera tricks can be easily executed. For example, if an actor throws a punch at another, he can easily fake it and the audience would not notice. But in a 3D movie, the illusion of depth perception is enhanced, so it would be impossible to use such traditional camera tricks.

Terminator 2

Terminator 2 is a science fiction action film renowned for its outstanding use of visual effects, which was considered highly advanced in 1991. The T-1000 was also the most advanced computer-generated character at that time.

The incorporation of cgi and special effects definitely made the movie more believable, as it allowed the T-1000 to demonstrate one of its functions – the ability to shape-shift. After doing some research, I realised that even though the film was popular at the time because of the outstanding level of cgi, the cgi only totalled about five minutes. Far more special effects were integrated into the movie than digital effects. One example is the “head splash” scene, where the T-800 fires a shotgun at the T-1000 in the elevator; it was not a digital effect but a special effect. Puppets were employed to create the illusion that the T-1000’s head was being blown open by a shotgun.

head blown up (Electronic puppet)

Having said that, cgi was employed to create visuals that could not be done with special effects. For example, the scene where the T-1000 emerges from the ground as a liquid and shape-shifts into a human form was done digitally, as it would have been almost impossible with special effects.

Overall, I feel that the in-camera tricks incorporated with digital effects worked very well for this movie and made it more believable. Without the digital effects, the movie would not be as kickass as it is. Scenes like the T-1000 morphing from one person to another would not have been possible.

Life of Pi

Like Avatar, Life of Pi is one of those fantasy adventure films that integrate CG heavily.

The lead visual effects company behind Life of Pi was Rhythm & Hues Studios (R&H). The company is renowned for creating cg characters and has done an incredible job creating highly complex, photo-realistic digital animals, such as the lion in the first Narnia movie, the animals from Night at the Museum 1 & 2, and most recently the tiger in Life of Pi.

My aim is to take a look at the making of Richard Parker, the digital tiger – what elements were needed to create it, and how it was created.

It is made up of a skeletal system, a muscular system, skin and fur.

Firstly, the skeletal system is created to drive the basic movement of the tiger. It is then bulked up with muscles, which are responsible for the primary shape of the tiger. Next, the skin is added, and a two-pass skin solve is applied to it – one pass allows the skin to be tugged by the muscles, and a dynamic simulation allows the skin to slide over the muscles and fold accordingly. Finally, fur is added to complete the digital tiger.
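Here is a highly simplified sketch of that two-pass idea, reduced to a 1D strip of skin points in Python: the first pass tugs each point toward the muscle beneath it, and the second pass is a small dynamic relaxation that lets the skin lag, slide and smooth out. The spring constants and the 1D setup are my own assumptions, not R&H’s actual solver.

```python
# Very simplified, 1D sketch of a two-pass skin solve: pass 1 tugs each skin
# point toward the muscle surface beneath it; pass 2 is a small dynamic
# relaxation that lets the skin lag, slide and settle instead of sticking
# rigidly. Spring constants and the 1D setup are illustrative only.
import numpy as np

def two_pass_skin(muscle_pos, skin_pos, skin_vel, dt=1/24,
                  pull=30.0, slide=4.0, damping=0.9):
    # Pass 1 (binding): spring force tugging skin toward the muscle.
    tug = pull * (muscle_pos - skin_pos)
    # Pass 2 (dynamics): each skin point is also relaxed toward the average of
    # its neighbours, which lets it slide/fold rather than copy the muscle 1:1.
    neighbour_avg = (np.roll(skin_pos, 1) + np.roll(skin_pos, -1)) / 2.0
    slide_force = slide * (neighbour_avg - skin_pos)
    skin_vel = damping * (skin_vel + (tug + slide_force) * dt)
    return skin_pos + skin_vel * dt, skin_vel

# A muscle bulge travels under a strip of 20 skin points; the skin follows it
# with a slight lag and smoothing.
n = 20
skin = np.zeros(n)
vel = np.zeros(n)
for frame in range(48):
    t = frame / 48.0
    muscle = np.exp(-((np.arange(n) - 10 * t) ** 2) / 4.0)  # travelling bulge
    skin, vel = two_pass_skin(muscle, skin, vel)
print(np.round(skin, 2))
```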

The making of the fur was a demanding process. Animators had to comb and place over 10 million strands of hair on Richard Parker’s body. More than a dozen artists were assigned to simulate the fur alone, focusing, for example, on how the fur would glisten under lights of various intensities and colours. In addition, sub-surface scattering was applied to give the fur a softer look and enable light to penetrate deeper into the tiger and its fur.

After putting together the basic elements of the digital tiger, the animators had to ensure that it would appear photo-realistic. Four real tigers were brought in for motion capture, and 100 hours of footage of them was taken as reference to allow the animators to zero in on the tiniest details of motion. For example, when the digital tiger shifts its weight, the animators had to study the smallest ripple effect it creates, such as how the muscle and skin jiggle. To bring the CGI to an even higher level, real tigers were used in some shots. This meant there was little to no room for the animators to let up, as the realism of the digital tiger had to match the real tigers in the film. The entire process of creating and fine-tuning the mannerisms of the digital tiger took the animators approximately one year to complete.

While Life of Pi is a great film with spectacular visual effects, we know that the making of the movie was not easy. Taking note of each tiny movement may seem insignificant on its own, but when all the small details are added up, they ripple through the animation as a whole, making the animation, and ultimately the tiger, look more realistic and believable.

Percy Jackson and the Lightning Thief

Synopsis: A teenager discovers he is the descendant of the Greek god Poseidon and sets out on an adventure to clear Poseidon’s name by searching for the stolen lightning bolt that Zeus believes Poseidon is responsible for.

Even though I did not think the storyline for Percy Jackson was fantastic, I enjoy movies of its genre.

Due to the fantasy nature of the film, cg was heavily integrated to help enact the story more accurately. Cg also contributed to making the magical world of Percy Jackson more believable. An example of a cg effect that helped with that was the animation of the water. With Percy’s magical power, the water had to be simulated to flow towards a target (which is unnatural) while still keeping the natural behaviour and flow of water.
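A toy way to think about that is to add a gentle “flow toward the target” force on top of an otherwise ordinary particle simulation, so gravity, drag and turbulence still dominate the look. The sketch below does exactly that; the constants and the crude ground plane are my own assumptions, not the film’s actual simulation.

```python
# Toy sketch of "magically directed" water: ordinary particle motion under
# gravity and a bit of turbulence, plus a gentle attraction toward a target
# point. The blend weight keeps the motion looking like water rather than a
# homing missile. All constants are illustrative, not the film's actual sim.
import numpy as np

rng = np.random.default_rng(1)
N = 500
pos = rng.uniform(-1, 1, size=(N, 3)); pos[:, 1] += 2.0   # start above ground
vel = np.zeros((N, 3))
target = np.array([5.0, 1.0, 0.0])
GRAVITY = np.array([0.0, -9.8, 0.0])
DT = 1 / 24
MAGIC = 6.0          # strength of the "flow toward the target" pull

for frame in range(72):
    to_target = target - pos
    to_target /= np.linalg.norm(to_target, axis=1, keepdims=True) + 1e-6
    turbulence = rng.normal(scale=0.5, size=(N, 3))        # keeps it lively
    accel = GRAVITY + MAGIC * to_target + turbulence
    vel = 0.98 * (vel + accel * DT)                        # mild drag
    pos += vel * DT
    pos[:, 1] = np.maximum(pos[:, 1], 0.0)                 # crude ground plane

print("mean distance to target:", np.linalg.norm(target - pos, axis=1).mean())
```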

Another key thing I realised could elevate the believability of a scene, and of the film in general, is the behaviour of a cg character. For example, the unique ability of the Hydra is that each head regenerates and multiplies by two instantly after being cut off. This ability should have made the extermination of Percy Jackson and his friends easy. However, the animators made sure that even though all the heads belonged to the same body, each head had a mind of its own, and the heads would get in one another’s way trying to be the one to reach the target. This self-conflicting behaviour of the Hydra made Percy Jackson’s escape easier, and hence more believable.

Mary and Max

Mary and Max is a clay-animated feature film written and directed by Adam Elliot, based on his personal experiences.

Synopsis: Mary is an 8-year-old girl from a very dysfunctional family who has a difficult time making friends, while Max is a 44-year-old man who also has a difficult time making friends because of conditions such as Asperger syndrome and depression. One day, Mary decides to write a letter to a random address she picked out from an address book, and that is how Mary and Max become pen-pals.

Mary and Max was made by a team of 120 people (artists).

I realised that the colours chosen for the film were generally dull and desaturated. Mary’s world is brown while Max’s world is grey. One reason is that the film is set in the 70s, and in Adam Elliot’s memory, Australia (where Mary lived) was brown. Brown was the ‘in’ colour then; people had brown carpet and dyed their hair mission brown. Max’s world is predominantly black, white and grey simply because Adam Elliot saw New York (where Max lived) as a concrete environment. Another reason for the colour choice is that dull, desaturated colours suit the mood of the film, which is rather sad and melancholic.

The animation style he decided on was stop-motion claymation. Adam Elliot preferred using clay to computer animation. He felt that building everything (the sets and characters) with plasticine gave the film a more tactile look, which complements the story and mood of the film. In addition, the unrefined finishes of the modelled characters and sets made the film look more ‘organic’ and also reflect human imperfection.