Houdini Session – Week 4

This session we were introduced to Arnold for Houdini (HtoA), HDRI maps (high dynamic range images) and ACES (the Academy Color Encoding System). The focus was rendering: choosing the right shaders and setting their attributes correctly for each material. It is important to understand how light reflects off different materials, as those reflections are what let someone recognize a material even before touching it. We played with parameters like roughness, transmission, reflection and specular, as these are the most basic and easiest to understand.

HDRIs are used as background images and help build up the environment. ACES is a color encoding system that covers a wider range of colors than standard RGB, which gives better results as well as more flexibility when adjusting color.

In Houdini, we began by testing the Mantra render engine and Houdini lights to see how they interacted with a rubber toy test model. The simplest light was the hlight; then we tried the environment light, which represents surrounding light, so the whole model is illuminated as if the light were coming from everywhere. With the environment light it is possible to add an HDRI map, and we can immediately see the difference.

When rendering, it is essential to understand bit depth: 8-bit, 16-bit and 32-bit. More bits per channel means more distinct color values and therefore more definition, since the number of possible values doubles with every extra bit.
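
As a quick check on those numbers, here is a tiny Python snippet (my own illustration, not from the session) that prints how many distinct values each bit depth gives per channel. Note that in practice 32-bit images are usually stored as floating point rather than integers.

```python
# Values per channel double with every extra bit: 2 ** bits.
for bits in (8, 16, 32):
    print(f"{bits}-bit: {2 ** bits:,} possible values per channel")
```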

Arnold is not unknown to an animator, as it is the main render engine of Maya, our magic wand as animators, but it works a bit differently in every piece of software. I got Arnold installed after a lot of problems: my computer seems to be set up differently, so Houdini had issues with the Arnold environment variables. With the help of Mehdi and Tao I was able to solve the problem, and now everything works much better.

On the rubber toy we experimented with a lot of Arnold lights and shaders to understand how they work, as well as working in the different networks (material, output and geometry) like in the previous sessions. To use Arnold, or any other render engine, in Houdini we need to create an Arnold node in the output network so Houdini recognizes it; then we can work in the material network and edit the shaders. Shaders need to live inside Arnold material builder nodes. For the rubber toy we created a material builder for the toy and one for the ground, and this was the result of those experiments.
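
As a rough sketch, this is what that setup could look like built with Houdini's Python API instead of by hand. The node type names ("arnold", "arnold_materialbuilder", "arnold::standard_surface") are my assumption of the HtoA names and may differ between versions, so treat this as an illustration rather than a recipe.

```python
import hou

# Arnold driver in the output network so Houdini can render with Arnold.
out = hou.node("/out")
arnold_rop = out.createNode("arnold", "arnold_render")  # HtoA ROP type name assumed

# Shaders must live inside an Arnold material builder in the material network.
mat = hou.node("/mat")
toy_builder = mat.createNode("arnold_materialbuilder", "toy_material")
toy_builder.createNode("arnold::standard_surface", "toy_surface")  # assumed shader type
```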

With a better understanding of how everything works, we opened the project from week 2, where we had learned to work with DOP and VOP networks and play with particles, so we could render it with Arnold and apply the different shaders.

The first step was to create another geometry node (in the objects network) so the model and the particles were kept separate. In the particles node we could simply copy the path of the model node and paste it into an object merge node inside the new geo node; this way the paths stay connected, since we are using the same model we used for the particles. After this we created a grid and an Arnold node in the output network. The shader the model had was gone, as it was set up for the Mantra renderer, so now we had to apply Arnold shaders to the model, the hammer, the ground and the particles. Of course, the lights also had to be Arnold lights; in this case we applied the HDRI map we had previously downloaded from the HDRI Haven website.
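
A minimal sketch of the object merge step in Python, assuming hypothetical node paths ("/obj/model" is a placeholder for wherever the original model lives):

```python
import hou

# A separate geo node so the particles stay independent from the model.
obj = hou.node("/obj")
particles_geo = obj.createNode("geo", "particles_geo")

# The object merge pulls in the same geometry the particles were built from.
merge = particles_geo.createNode("object_merge", "import_model")
merge.parm("objpath1").set("/obj/model")  # placeholder path to the model node
```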

In the material network we created three Arnold material builders, one for each material we would need, named them accordingly, and gave each a standard surface shader, which then had to be assigned as the material on each geometry node in the objects network. On the crag node, where the model was, we had to assign two different materials, since the character is different from its hammer, so we added an attribute delete node to “separate” them and a material node to assign the different material to the hammer – we used an expression for that.
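
Sketched in Python, the hammer assignment might look like this; the group name and material path are placeholders I made up, and the expression mentioned above would live in the group field in the real scene:

```python
import hou

crag = hou.node("/obj/crag")  # placeholder path to the character's geo node
assign = crag.createNode("material", "assign_hammer")
assign.parm("group1").set("hammer")                         # placeholder group name
assign.parm("shop_materialpath1").set("/mat/hammer_metal")  # placeholder builder path
```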

These colors were only to get an idea of how the process works; now it was time to assign a real shader to the model. We used a curvature node to calculate the curvature of the model, which allowed us to assign a range of colors based on the curvature values with a ramp_rgb node. After assigning metalness to the hammer and the new color to the model, it was time to add the particles, so another shader was created and we repeated the process of assigning it to the object. This is what the particle shader looks like after all the changes.
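
To make the idea of the ramp concrete, here is a tiny standalone illustration (my own sketch, not the Arnold node itself) of how a ramp maps a curvature value in [0, 1] onto a range of colors:

```python
# Map a value t in [0, 1] to a color by interpolating between ramp keys.
def ramp_rgb(t, keys):
    """keys: sorted list of (position, (r, g, b)) pairs."""
    t = max(0.0, min(1.0, t))
    for (p0, c0), (p1, c1) in zip(keys, keys[1:]):
        if p0 <= t <= p1:
            f = (t - p0) / (p1 - p0)
            return tuple(a + f * (b - a) for a, b in zip(c0, c1))
    return keys[-1][1]

# Example values: dark blue in flat areas, bright pink on sharp curvature.
print(ramp_rgb(0.7, [(0.0, (0.05, 0.0, 0.2)), (1.0, (1.0, 0.2, 0.8))]))
```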

So the two ramp_rgb nodes are the ones responsible for the color shading, and we added quite a few colors for a more magical effect. These were my experiments with color.

At first my particles were not changing color – they were always black – until I decided to redo the whole particle part, and it worked! Due to my earlier problems with Arnold, the geometry nodes did not have the Arnold tab, so some attributes were not being applied to the model. In the end everything was solved, and finally we added a bit of motion blur, which made it look pretty cool in my opinion.

I ran a render overnight and this was the final result:

Matchmove 1-to-1 Session – 9th Feb

For my second 1-to-1 with Dom, we had planned to track a shot with a good background and foreground, download arrows or spears, and have them thrown at a character. The plan changed, as the footage I found was not well suited to this idea, so Dom found a great video of a forest where the camera goes under a tree trunk. We focused on tracking the shot and then adding some animation on top of it, like a character going under the tree.

We began in 3DEqualizer, tracking the tree, then the space under the tree, then the floor, then the right and left sides of the background, and by the end this is what we had.

We were able to track all the points in two hours and finish the 3DE work before jumping to Maya. No errors this time – everything went super well, even when running Warp4 and calculating the lens distortion, which is normally where problems can arise.

In Maya, we already had the project set up with the three extra folders – 3de_scenes, mel scripts and models – so we just had to open a scene and import the MEL script and the footage. This is where the first problem appeared: the footage was moving when setting the view for the camera, so we had to set up the image plane manually and attach the footage to it, which solved the problem.
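
A hedged sketch of that manual image plane fix in Maya Python; the camera name and footage path are placeholders:

```python
import maya.cmds as cmds

# Attach an image plane directly to the tracked camera and load the footage.
plane, plane_shape = cmds.imagePlane(camera="trackedCamera1",
                                     fileName="sourceimages/forest.0001.jpg")
cmds.setAttr(plane_shape + ".useFrameExtension", 1)  # step through the image sequence
```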

The next step was to align the floor points with Maya's ground grid; this makes it easier to know where the floor is, so we can go to Edit Mesh – Create Geometry and create a shape from the floor points. With our CG floor we could add a few more things to the scene, such as the tree, for which we created a cylinder and edited its geometry to look more like a tree. I ended up adding a few more tiny trunks on the floor. Now the scene is ready to have a character animated on top.

Lighting Session – KK Yeung

As I said before, this unit includes sessions with industry experts who will teach us throughout the term. This week's expert was KK Yeung, a lighting artist who made a video tutorial using Maya and NukeX.

This exercise had to be done in VMware, as KK had prepared the project for us there. I admit I tried to set things up to work on my own machine, but we had to change the image paths in the Nuke file, and I wasn't able to do that on my machine, as I had never worked with Nuke before – this was my first time.

Nuke is interesting software and super useful, as it is so versatile: it can be used for VFX and compositing, as well as matching color values. It reminds me of Houdini, as it works with nodes and the interface is quite similar, although I really like Houdini's node design, which in my opinion is more user friendly than Nuke's.

The first step was to point the source images to the correct path, as the file had an error and the images weren't loading. After that we had to prepare the HDRI and match the pixel value of a target area (we chose the cabinet) across the other sources. For this we used two nodes, Exposure and Multiply (math), to match the values across three maps: the north map, the south map and the fluorescent light. The north map is the upper part of the HDRI and the south map is the bottom part, while the fluorescent light map covers the main light source of the image; since it is blown out, when matching the values it was clamped to a specific value so it would never get too burnt.
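
The matching itself is simple arithmetic; this little sketch (my own, not KK's exact node values) shows the Multiply factor that maps a source pixel value onto the target, and the same correction expressed in exposure stops:

```python
import math

def multiply_factor(source, target):
    # The Multiply node's value that maps source onto target.
    return target / source

def exposure_stops(source, target):
    # Each stop doubles the value, so the correction is log2(target / source).
    return math.log2(target / source)

src, tgt = 0.18, 0.31  # placeholder cabinet pixel values
print(multiply_factor(src, tgt), exposure_stops(src, tgt))
```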

Before exporting the maps, we did a sanity check, which is used to verify that the light information we changed is still acceptable to export. In this case we did lose some data, but not enough to be perceptible – we had to increase the f value by almost 1000 to see the data loss. Now it was time to export the maps to Maya to set up the light rig.

In Maya, we already had the project set up, and KK had made a shelf with some tools we are going to use in these sessions. The first thing was to align the chrome sphere's reflection with the reference image by rotating on the Y and X axes. After this we unhid the CG characters to see if the light matched; for this we had a second CG sphere that made it possible to see how the light would affect the characters.

In the Hypershade we were able to see how the chrome and CG spheres were set up, and then KK showed us a way to prepare the render with just the characters: either through the Outliner, by changing the visibility of the projections, or through the Script Editor. For the latter, we first clear the command window; then, after changing the visibility of the background images in the Outliner, we can copy the echoed commands from the Script Editor into the MEL tab. This is useful for making quick changes – the Script Editor is much more efficient.
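
As a sketch of that workflow in script form, this Maya Python snippet toggles the visibility of the background projections before a render; the node names in the list are placeholders for whatever appears in the Outliner:

```python
import maya.cmds as cmds

background_nodes = ["bg_projection1", "bg_projection2"]  # placeholder node names

def set_background_visibility(visible):
    # The same setAttr commands the Script Editor echoes when clicking in the Outliner.
    for node in background_nodes:
        cmds.setAttr(node + ".visibility", 1 if visible else 0)

set_background_visibility(False)  # hide the backgrounds, render characters only
```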

Collab Project – Week 3

This week we met twice and came up with ideas different from before. At first we were aiming to have three stages, with each artist performing on a different one, but we realised it would feel odd to have nothing but the performances. So we decided on a single stage with a fourth character acting as the host of the show. This way the artists can talk to each other between acts, and the show becomes less focused on the music alone and a rehearsed thing on stage.

This way we can actually play with cameras, making it less like a live broadcast from a festival and more like a movie clip where artists perform and interact with each other. We were also thinking of simulating the crowd; it is something none of us knows how to do, but we agreed it is important, as this is a show and artists need a crowd.

The artist I chose to animate is Michael Jackson, as he has such iconic performances and it is a good challenge for developing what I want to improve in animation, like overlapping action – his dancing is full of it! We are aiming at a more cartoony, stylized animation, so I chose David from the asset library, as I think he has a really good rig and that type of model is exactly what I like to animate.

Another thing that would be amazing is to change the model's clothes: Michael Jackson always wore provocative, shiny outfits, so it would be good if we could make some changes. If not, maybe we can change the materials and textures of David's clothes.

The song I chose is “Billie Jean”, as the choreography includes the well-known moonwalk, which I know is a challenging thing to animate, but I want to try it anyway.

Rotomation and Motion Capture

Rotomation and motion capture are the future of this industry, as they open so many doors for shot editing and for optimizing animation and movement. So, what are they?

Rotomation

The first time I heard about it was in our first session with Dom, where we were introduced to matchmove and rotomation. Matchmove is the process of tracking a camera and animating or adding something on top of the footage, while rotomation is the process of tracking a body, or part of a body, to then add animation or VFX on top of it. Rotomation needs camera tracking, so the two processes are connected. We can see examples in movies like the Avengers films, where the actors wear suits and a lot is added in post-production – Spider-Man or Captain America, for instance. The industries that rely most on rotomation are VFX and film animation.

Motion Capture

This is also something I knew about but didn't know the name of: it is the process of having actors perform on set in mocap suits so their movements can be applied to a CG model. A movie that really explored this was James Cameron's Avatar (2009), where we can see the actors' faces on the blue humanoids. The movements are perfect, and even after 12 years the movie still looks amazing and is a reference for the film, VFX and game animation industries.

These techniques were introduced by film and VFX animation, and those industries really developed them, but today game animation is also using this technology to produce ever more realistic animation and movement in games.

VFX, Film and Game Animation

VFX, film and game animation are three very different fields in the industry. It is important to be aware of this and understand it, because they work with different rules and methods, and the roles are different.

VFX Animation

This is where things such as prop animation, particle systems, body simulation and CFX (creature effects) happen. It is the most common use nowadays, as VFX animators are responsible for adding features to the shot that didn't really happen when the footage was recorded. The work is composited into the film and leans on realism, since it sits inside real-world footage; it must look real to the point where the audience feels those things are actually happening.

Film Animation

Film animation is about animated features, like Pixar movies, that are fully made in CGI, where the animation is more cartoony and stylized, exaggerated and less realistic. That doesn't make it any easier, although it is often considered easier because it doesn't have to be so accurate to reality. On the other hand, it needs to be stylized: the characters need to do things no human being would be capable of, with no reference for those changes. Animators need reference like everyone in the industry, but our references can only be pushed to a certain point; the stylized look comes from creativity, and that, in my opinion, is what makes animated movies so good.

Game Animation

The line between game, VFX and film animation is becoming more and more blurred as games become more realistic and faithful to reality, but game animation differs a bit, as it doesn't have to be as polished as film or VFX work. Contracts are also different: contracts in the game industry can last much longer, while in film and VFX they rarely exceed nine months, so games offer a bit more stability and things don't have to be so rushed.

Previs, Techvis and Postvis

There are some concepts in the 3D animation industry we need to know; these three are used throughout the making of a movie.

Previs – the process of visualizing a scene before it is physically made, to help fix problems early and get an idea of what everything is going to look like.

Techvis – the process of mapping out everything, such as cameras, actor positions, lights, camera cranes, etc., to help create the final footage. This process is like a technical sheet for the previs.

Postvis – the process of visualizing a rough idea of a movie's final result before it goes into post-production, so that soon after the footage is shot the directors can see what it is going to look like with some VFX and animation.

Houdini Session – Week 3

As Mehdi said last session, this week would be much harder, and it was! No surprise! It really was super hard, but super cool too. Simulation is a powerful thing if we know how to do it properly.

In the first session we had to create a wood cabin, and as an extra we could add some detail, since it was going to be used this week. I wasn't able to add detail then, as I know so little of Houdini. So to start, Mehdi showed us how to add detail, and it was super easy!

This is what the first cabin looked like:

This is what the cabin looked like after I added detail:

For the wood planks, we copied the main cabin and its transform from the first one; in wireframe mode it was possible to place the planks in the right spots. I decided to add only two more windows, as I'm still very new to Houdini. We added glass with the same boolean process as before, grouping every part of the cabin separately by material, into four groups: roof, wood, ground and glass.

Before doing the simulation we had to understand how destruction works, so the demonstration was made with a cube and a sphere. We used the concrete fracture type and added noise to the destroyed pieces. I admit this part was challenging, and some things only made sense once I applied the knowledge to the cabin, but I was able to recreate everything Mehdi showed us.

Destruction of the Cabin

Simple Cabin

We began the destruction simulation with the simple cabin, adding an rbdmaterialfracture node with three fracture levels, each with different scatter point counts and noise frequencies. Each fracture level can have its own number of scatter points, as a way to create more small pieces for the destruction, and its own noise frequency for a more random look. An explodedview node lets us see the scattered pieces.

Next we created an rbdbulletsolver node and connected it to the rbdmaterialfracture node. If we hit play, the whole cabin starts to fall, so we can enable a ground plane in the bullet solver attributes. The result is that the cabin destroys itself; nothing holds together, for now it is only loose pieces. To fix that, we must connect the constraint output of the rbdmaterialfracture to the rbdbulletsolver, which sets up a glue constraint that keeps the pieces together. When doing this I had some issues, because some pieces had errors, but in the talks with Mehdi he explained that the problem was due to the noise frequency on the fracture levels.
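
A rough Python sketch of that node chain, assuming a geo node at a placeholder path; the input order on the solver (geometry first, constraints second) is my assumption and may differ by Houdini version:

```python
import hou

geo = hou.node("/obj/cabin")  # placeholder path
fracture = geo.createNode("rbdmaterialfracture", "fracture_cabin")
solver = geo.createNode("rbdbulletsolver", "sim_cabin")

solver.setInput(0, fracture, 0)  # fractured pieces
solver.setInput(1, fracture, 1)  # glue constraint geometry (assumed input index)
```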

When we created a sphere and keyed it to go through a side of the cabin, the whole cabin moved instead of being destroyed the way we wanted. The way to change this is to create an assemble node, check Create Packed Geometry and uncheck the other options; but the cabin still moves, so we have to create a group with half the cabin and, with an attributecreate node, “tell the software” that the pieces in the group cannot move while the ones hit by the simulation can.
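
Sketched as a node setup in Python: the usual way to pin pieces is an integer point attribute called active, set to 0 on the group that must stay still (the group name here is a placeholder):

```python
import hou

geo = hou.node("/obj/cabin")  # placeholder path
pin = geo.createNode("attribcreate", "pin_half_cabin")
pin.parm("group").set("keep_static")  # placeholder group: the half that must not move
pin.parm("name1").set("active")       # Bullet treats active = 0 pieces as static
pin.parm("value1v1").set(0)
```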

Finally, we can add transformpieces and filecache nodes to save the simulation points to disk, giving us an optimized network that is easy to import and quickly ready to work with.

Detailed Cabin

Now for the detailed wood cabin. Mehdi explained that if we ever work on this kind of thing professionally, someone will send us a file that we need to load from disk, so we simulated that process by creating an object merge node and dragging the final null node into the Object 1 field. This way the object merge node is connected to the null that holds the full model's geometry.

With four blast nodes, each assigned to one of the groups we created earlier for every part of the cabin, we have them separated and can start editing the fracture parameters per material. We began with the wood group and created three nodes – voronoi fracture, scatter and VDB from points – and connected them, but the result was not very realistic; it didn't look like wood. To fix this, Mehdi showed us a hack: create a transform node, change the cabin's Y scale, then create a second transform referencing the first and invert the transformation – now it looks like wood!!
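
The stretch-and-invert hack, sketched with two transform SOPs in Python; the scale value is arbitrary, the node path is a placeholder, and the fracture nodes would sit between the two transforms:

```python
import hou

geo = hou.node("/obj/detailed_cabin")  # placeholder path
stretch = geo.createNode("xform", "stretch_y")
stretch.parm("sy").set(5)  # exaggerate Y before fracturing so cells become elongated

unstretch = geo.createNode("xform", "unstretch_y")
# Reference the first transform and invert it, restoring the cabin's proportions
# while keeping the stretched, wood-like fracture pattern.
unstretch.parm("sy").setExpression('1 / ch("../stretch_y/sy")')
```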

For the glass we created a material fracture node and changed the material type to glass, but it was only creating one fracture centre in a single pane, as if all the panes were one piece. To change that we could create multiple blast nodes and apply the effect to each pane, or, more simply, create a foreachconnectedpiece block that applies the effect to every piece.

The roof was destroyed with a material fracture node, the material type set to concrete with three fracture levels, like we did before on the simple cabin. Then a merge node connects it all, and finally we can copy the proxies from the simple cabin and paste them onto the end of the simulation. This is what the nodes look like:

Flipbook from simulation

Collab Project – Week 2

This week we have two new members: Tina from our class and Kamil from the VFX course! It is great to see the group getting bigger. Tina was the first to join, and while looking for more references of artists and performances we decided to go back to the original idea: actually animating artists' performances, with each animator choosing a style and an artist. The concept is to take iconic performances and impersonate them with characters from the asset library.

As we don't have rigged models of the artists, it will be fun to make a model that has nothing to do with the artist sort of carry their soul inside. Now we really should start planning the shots and how everything is going to look, like a storyboard of what is going to happen and how.

Now we have fewer roles to fill, so hopefully this week they will all be filled. We decided to meet up every Friday after lunch and keep in touch through Discord, where we created a channel for our group to keep everything organized.

Collab Project – Week 1

This term we have two new units, one of which is the Collaborative Unit, in which we have to collaborate with students from our class and course, as well as from other courses. For the assignment we had to choose between a previs animation, a poster animation, VR, or an immersive cartoon.

My group chose to do a previs animation of a festival, in which we want to animate artists' performances on stage.

When Luke explained what we needed to do, I was completely out of ideas: my brain froze and every idea seemed bad. Then I saw Abbie's project proposal and asked to join; the project seemed interesting, and we can do a lot with the idea. The next step was to find more people and start deciding things like which artists we are going to animate, which performances, which music style, what the lighting will be like, the stages… All of that needs to be taken into consideration.

For the first round of research we ended up with a lot of electronic music references and light shows; as Abbie and I are fans of electronic music, everything was sort of going in that direction, and for that it would be much more interesting to animate the crowd, as DJs are quite boring to watch. But we still have to keep in mind that more people will join the group, and all of this needs to be agreed on by everyone.