Interview Prep with Dom

Dom sent me a message saying that some people had been asking for interview prep sessions and asked me if I wanted to set one up too. Of course! It is always important to be aware of what an employer would see when looking at my CV and showreel. So I updated my showreel and CV; with this virus situation I had not updated them for two months, as I am not even in the UK to apply for jobs. I added the particle shading Houdini session and the best work from term 1, like the full body walk cycle with prop constraint and the body mechanics shot, as well as some work I did for my portfolio to enter this master’s degree.

I am aware that the renders could be much better and that I should have at least a matchmove shot, but none of them look as good as I want them to. I am improving those shots to include in the next update.

So first we did a mock interview, and Dom gave me feedback on what to say and what not to say, which I took notes on. The main don’ts were:

  • Don’t mention that you have no experience
  • Don’t fill in the gaps
  • Don’t mention tutorials
  • Don’t repeat yourself

The main Do’s were:

  • Turn negative to positive
  • Be natural and authentic
  • Talk about the rigs
  • Talk about every skill

These were the main notes I took from the first interview try. On the second try I really improved: at first I was really awkward and sort of lost because I didn’t know what to say, but then I got a bit more relaxed, as I knew the answers to all the questions, was able to turn the negatives into positives, and talked a lot about the rigging and everything I did for each project on my showreel.

In the end, Dom advised me to focus on organising my showreel better, as I have a Houdini project, then Maya, then textured stuff: the idea is to have a kind of “narrative”, an evolving sequence separated by software or technique. Of course, I should also add some of the matchmove and rotomation shots we have been making in the past sessions. Dom also suggested having the tracked shots, and then some of my animations with no background playing on top of those previously tracked shots.

The main questions an employer is going to ask you are “Who are you?” and “Why should we hire you?”. These are always hard to answer, as you really have to look at yourself like an employer would: would you hire yourself? If so, why? Why are you better than that other candidate? This is where young job seekers need to put in their effort.

Vertex 2021

I have to be honest, I had never heard about this event before I found out UAL got us tickets, and I was pretty excited about it. Even though it was an online event, it is amazing to see the people who do what I want to do for my career; it’s an inspiration.

The sessions I was really excited about were:

  • 9am – Jonathan Cheetham: Delving Deep into Digital Humans
  • 11am – Lois Van Baarle: How to Market Yourself on Social Media
  • 3pm – Alvaro Martinez: Indie Storytelling and World Building in Realtime
  • 5pm – Dylan Sisson: Making Art with Soul
  • 10:30pm – Finnian MacManus: Terraform VR Demo

The sessions that really blew my mind were the ones from Alvaro Martinez and Finnian MacManus, which surprised me, as I was expecting to enjoy the one from Dylan Sisson of Pixar the most.

Alvaro Martinez was amazing, as he was so fast and made such cool things in Houdini. I was impressed by how easy it was for him to create an amazing, super simple setup: he showed us the HeightField nodes, which he used to create a terrain with just a plane and four torus shapes. He created a sci-fi, Star Wars-like moon look with no complex shapes, and it looked amazing.

Finnian MacManus was something I was not expecting to watch, as VR is not really something I think about much; although it is super cool, I won’t have any chance to use it. But I ended up watching his session and it was also super cool: he made a “sketch” of an idea he had in VR, and he literally just grabbed some shapes, like a cabinet, and modelled a stage with them. At first I really wasn’t getting what he was doing, but as he moved forward it took on an amazing look, built from a random shape that he just copy-pasted all over the place. VR is a really cool, evolving technology that allows artists to be inside their own projects and gives them a sort of infinite reality where you won’t ever run out of (physical) room space.

Dylan Sisson was also cool, but I had higher expectations for his session; maybe that is what ruined it for me. The two sessions I just described surprised me because I had no expectations, but as this was Pixar I was thrilled. Their latest movie Soul was amazing, and Dylan Sisson talked about things I did not know at all, like the fact that Joe’s sweater was not a mesh: it was literally 280,000 strands of fabric, which is impressive. I really enjoyed it, but I guess I was hoping for something more like a masterclass; this was more about showing what RenderMan can do and its new features for the next version. As Soul was made with it, we can really see its endless possibilities.

XPU rendering was something I really did not know about, or even knew was possible: it is the combination of both CPU and GPU rendering within the same render engine, RenderMan! He also covered Lama, a layered shading system by ILM, and stylized looks: imagination-based rendering where we can make the renders look like drawings, like literally drawings.

It was an amazing event; I learned so much in such a short time from such cool professionals and software. The only thing I’d change is the format of the event – TO BE IN PERSON. But that was not possible at the time, and it was interesting either way.

Houdini Session – Week 5

This week’s session began with a demonstration of a fog effect inside a sci-fi corridor. I was able to find the scene and reproduce the effect, but I was not able to get the color of the interior the way Mehdi did. Still, I got the chance to learn how it works, through an atmosphere volume inside the Arnold shader node. The real exercise this week was to create an explosion, and in the end we used a sphere to create a collision with the smoke and see how it reacted.

First we made a fire and flame simulation. For this we created a circle and scattered it, then added an attribute create node for density, as we need it for the DOP network that is going to be created afterwards, so that the simulation emerges from the circle. A color node is added and density is applied to it, as well as to a volume rasterize node; finally we add a null node and all is set to try out the DOP network.

Inside the DOP network, as we are on the latest version of Houdini, we can use the sparse smoke solver node, as it is optimized. If we used the regular smoke solver node, the difference would be that the smoke would be simulated inside a fixed bounding box, while with the sparse nodes that bounding box grows with the simulation and it is faster to calculate, as the empty spaces are no longer calculated, unlike with the “old” smoke solver node.
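To get a feel for why the sparse solver is faster, here is a tiny pure-Python sketch (not Houdini code; the grid size and smoke region are made-up example numbers): a dense solver updates every voxel in the bounding box, while a sparse solver only touches voxels that actually contain smoke.

```python
# Toy comparison of dense vs sparse simulation work.
# Grid size and smoke region are invented for illustration.

def make_grid(size, smoke_region):
    """Build a size^3 voxel grid; only voxels in smoke_region hold density."""
    grid = {}
    for x in range(size):
        for y in range(size):
            for z in range(size):
                grid[(x, y, z)] = 1.0 if (x, y, z) in smoke_region else 0.0
    return grid

size = 20                                                # 20x20x20 bounding box
smoke = {(x, y, z) for x in range(5) for y in range(5) for z in range(5)}
grid = make_grid(size, smoke)

dense_work = len(grid)                                   # every voxel in the box
sparse_work = sum(1 for d in grid.values() if d > 0.0)   # active voxels only

print(dense_work)   # 8000 voxels a dense solver would update
print(sparse_work)  # 125 voxels a sparse solver would update
```

The real solvers do far more per voxel, but the ratio of work is the point: most of a smoke sim’s bounding box is empty most of the time.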

To use the sparse smoke solver we have to use the other sparse nodes, so we connect a smoke object sparse (the container) to the solver, as well as a volume source (which works like the pop source from week 2), connected to sourcing. This is the node where we can assign the different volumes to be calculated, such as density, temperature, flame, color and velocity. When we run the simulation we can see that something is missing; it’s very white and flat, so we need to add lights. For that we create a dop import node to visualize some of the data from the DOP network and assign it density, temperature, velocity and color. We create two Arnold lights on the objects network and assign them bluish and reddish colors (very subtle) with opposite directions; now everything looks better and we can begin playing with the fire and flame, using a volume visualization node and assigning temperature to the emission.

Now it looks more like a fire and not just smoke. On the DOP network, with a gas turbulence node, we can tweak the attributes to give it the proper swirl and turbulence, break the shape a little, and whatever else is necessary for the effect we need. This is what the flame looked like after tweaking and adding a wind force to the simulation.

flame
flame simulation nodes
flame simulation DOP network nodes

To render, we used the convert VDB, VDB vector merge, primitive and file cache nodes, but Arnold can only render volumes through an Arnold volume node with the rendered file cache assigned to it, as well as an Arnold shader, this time with a standard volume. This is what the render looked like:

flame simulation render

To the explosion simulation!!

Instead of creating a circle and having a fire emerge from it, we created a sphere, a VDB from polygons node, a scatter node and a pyro source to create the volumes, with the help of a volume rasterize node to assign the density and temperature attributes. Now we can add a pyro solver node, which holds most of the explosion attributes and tweaks, like the temperature of the smoke, cooling rate, breaking up the shape, etc. We added a volume visualizer node for a bit more flexibility in the simulation’s look.

After experimenting with all the attributes and parameters, we prepared the render with the same process as before: convert to VDB and, with a file cache node, assign it to an Arnold volume and its shader, and it is ready to render. This is what it looked like:

explosion simulation render
explosion nodes

This render took a bit more tweaking on the shader, as we used a ramp rgb node connected to the emission of the standard volume, and there it is possible to push the colors a bit more and really make it look like a huge explosion.

Collision!!

To end this week’s session, Mehdi taught us how to take an object such as a sphere and have it collide with the explosion and flame. For that, we keyed a sphere going through the simulation and, with a VDB from polygons node, connected it to the pyro solver node. This is what the explosion collision looked like:

The same was done for the flame and finally we added a point velocity node to have the smoke follow the sphere and make it look a bit more realistic.

Houdini Session – Week 4

This session we were introduced to Arnold for Houdini (HtoA), HDRI maps (high dynamic range images) and ACES (Academy Color Encoding System); the focus was rendering and getting the right shaders, with the right attributes, for each material. It is important to understand how light and its reflections work on certain materials, as these are what let someone recognize a material even before touching it. Parameters like roughness, transmission, reflection and specular are the ones we played with, as these are the most basic and easiest to understand.

HDRIs are used as background images and help build up the environment, while ACES is a color encoding system that covers a much wider range of colors than standard RGB encodings and gives a better result, as well as more flexibility to adjust color.
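ACES itself is a whole pipeline, but the basic idea of a color encoding can be tasted with the standard sRGB-to-linear formula, sketched here in plain Python (this is not the ACES transform, just the common sRGB one): the same stored pixel value corresponds to a different amount of actual light depending on the encoding.

```python
# Standard sRGB-to-linear conversion for a single channel value in 0..1.
# Shows what a "color encoding" is: stored values are not linear light.

def srgb_to_linear(c):
    """Convert one sRGB channel value (0..1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.0))             # 0.0
print(round(srgb_to_linear(0.5), 4))   # sRGB mid grey is ~0.214 in linear light
print(srgb_to_linear(1.0))             # 1.0
```

So a pixel stored as 0.5 in sRGB is only about 21% of full brightness in linear light, which is why render engines and systems like ACES work in linear spaces and only encode at the end.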

In Houdini, we began by testing the mantra render engine and Houdini lights to see how they interacted with a rubber toy test model. The simplest light was the hlight, and then we tried the environment light, which represents a surrounding light, so the whole model is illuminated as if the light is coming from everywhere. With the environment light it is possible to add an HDRI map, and we can immediately see the difference.

When rendering it is essential to know about bit depths: 8-bit, 16-bit and 32-bit. More bits means more color values and more definition, as the number of values per channel doubles with each extra bit.
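As a quick sketch of what those bit depths mean per channel:

```python
# Number of distinct values one channel can store at each bit depth:
# every extra bit doubles the count. (In practice, 32-bit images usually
# store floating point values rather than 2**32 integer steps.)
levels = {bits: 2 ** bits for bits in (8, 16, 32)}
for bits, count in levels.items():
    print(f"{bits}-bit: {count} values per channel")
```

An 8-bit channel has 256 steps, 16-bit has 65,536, which is why heavy color grading on an 8-bit image bands so quickly.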

Arnold is not an unknown thing to an animator, as it is the main render engine of Maya, our magic wand as animators, but it works a bit differently in every software. I installed Arnold after a lot of problems with it; it seems my computer is set up differently, so Houdini had bugs with the Arnold environment variables, but with the help of Mehdi and Tao I was able to solve the problem and now everything works better.

On the rubber toy we experimented with a lot of Arnold lights and shaders to understand how it all works, as well as working in the different networks, such as material, output and geometry, like in the previous sessions. To use Arnold, or any other render engine, in Houdini we need to create an Arnold node on the output network so Houdini recognizes it, and then we can work on the material network and edit the shaders. Shaders need to be inside Arnold material builder nodes. With the rubber toy we created a material builder for the toy and one for the ground, and this was the result of those experiments.

After getting a better understanding of how everything works, we opened the project from week 2, where we had learned to work with DOP and VOP networks and play with particles, and used Arnold to render it and apply the different shaders.

The first step was to create another geometry node (on the objects network) to keep the model and the particles separated: on the particles node we could just copy the model node’s path and paste it into an object merge node inside the geo node; this way the paths are connected, as we are using the same model we used for the particles. After this we created a grid and an Arnold node on the output network, and the shader the model had was gone, as it was set up for the mantra renderer. Now we have to apply Arnold shaders to the model, the hammer, the ground and the particles. Of course, the lights used have to be Arnold lights, and in this case we applied the HDRI map we had previously downloaded from the HDRI Haven website.

On the material network we created three Arnold material builders, one for each material we were going to need, and named them accordingly; each had a standard surface material that then had to be applied as a material to each geometry node on the objects network. On the crag node, where the model was, we had to assign two different materials, as the character is different from its hammer, so we added an attribute delete node to “separate” them and a material node to assign the different material to the hammer – we used an expression for that.

These colors were only there to get an idea of how the process works; now it was necessary to assign a real shader to the model, so we used a curvature node to calculate the curvature of the model, which allowed us to assign a “range” of colors based on the curvature values with a ramp_rgb node. After assigning metalness to the hammer and the new color to the model, it was time to add the particles, so another shader was created and we repeated the process of assigning the shader to the object. This is what the particle shader looks like after all the changes.
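A ramp_rgb lookup is basically an interpolation between colors driven by a value; a toy plain-Python version (the two colors here are invented examples, not the ones from my shader) looks like this:

```python
# Toy version of a ramp_rgb lookup: map a curvature value in 0..1 to a
# color interpolated between two endpoints. Real ramps support many
# keys and interpolation modes; this shows the core idea with two.

def lerp(a, b, t):
    return a + (b - a) * t

def ramp_rgb(t, color_a, color_b):
    """Linearly interpolate between two RGB colors by t, clamped to [0, 1]."""
    t = max(0.0, min(1.0, t))          # a ramp clamps outside its range
    return tuple(lerp(a, b, t) for a, b in zip(color_a, color_b))

flat_color   = (0.1, 0.1, 0.3)   # low curvature  -> dark blue (example)
crease_color = (1.0, 0.8, 0.2)   # high curvature -> warm highlight (example)

print(ramp_rgb(0.0, flat_color, crease_color))  # (0.1, 0.1, 0.3)
print(ramp_rgb(1.0, flat_color, crease_color))  # the crease color
print(ramp_rgb(0.5, flat_color, crease_color))  # halfway mix
```

The curvature node supplies the `t` value per point, and the ramp turns it into a color, which is why creases and edges end up a different shade from flat areas.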

So the two ramp_rgb nodes are the ones responsible for the color shading, and we added quite a few keys for a more magical effect; these were the experiments with color I made.

At first, my particles were not changing color; they were always black, until I decided to redo the whole particle part and it worked! Due to my previous problems with Arnold, the geometry nodes did not have the Arnold tab and some attributes were not being applied to the model. In the end all was solved, and finally we had to add a bit of motion blur, which made it look pretty cool in my opinion.

I made a render during the night and this was the final result:

Matchmove Session 1 to 1 – 9th Feb

For my second 1 to 1 with Dom, we had planned to track a shot with a good background and foreground, download arrows or spears, and have them thrown at a character. The plan changed, as the footage I found was not so good for this idea, so Dom found a great video of a forest where the camera goes under a tree trunk. We focused on tracking the shot and then adding some animation on top of it, like a character going under the tree.

We began in 3DEqualizer tracking the tree, then the space under the tree, then the floor, then the background right and left sides, and by the end this is what we had.

We were able to track all the points in two hours and finish the 3DE work to jump to Maya. No errors this time; everything went super well, even when running Warp4 and the lens distortion, which is normally where some problems may arise.

In Maya, we already had the project set up with the three extra folders: 3de_scenes, mel scripts and models, so we just had to open a scene and import the mel script and the footage. This is where the first problem came up: the footage was moving when setting the view for the camera, so we had to set the image plane manually and add the footage to it, and the problem was solved.

The next step was to center the floor points on the Maya ground grid; this way it is easier to know where the floor is, so we can go to edit mesh – create geometry and create a shape from the floor points. With our CG floor we could add a few more things to the scene, such as the tree, for which we created a cylinder and edited its geometry to look more like a tree. I ended up adding a few more tiny trunks on the floor. Now the scene is ready to have a character animated on top.

Lighting Session – KK Yeung

As I said before, this unit will have sessions with industry experts teaching us throughout the term; this week’s was with KK Yeung, a lighting artist who made a video tutorial using Maya and NukeX.

This exercise had to be done in VMware, as KK prepared the project for us there. I admit I tried to set things up so I could work on my own machine, but we had to change the path of the images in the Nuke file and I wasn’t able to change it for my machine, as I had never worked with Nuke before; this was the first time.

It is an interesting piece of software and it is super useful, as it is so versatile: it can be used for VFX and compositing, as well as matching color values. It reminds me of Houdini, as it works with nodes and the interface is quite similar, although I really like Houdini’s node design, and in my opinion it is more user friendly than Nuke.

The first step was to change the source images to the correct path, as the file had an error and the images weren’t loaded. After that we had to prepare the HDRI and match the pixel value of a targeted area (we chose the cabinet) across the other sources. For this we used two nodes, exposure and multiply (math), to match them in three maps: the north map, the south map and the fluorescent light. The north map is the upper part of the HDRI and the south map is the bottom part, while the fluorescent light is the main light source of the image; as it is burnt out, when matching the values it was set to a specific value that it will not go above, so it will never be too burnt.
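What the exposure and multiply nodes do mathematically is simple; here is a hedged plain-Python sketch (the pixel values are invented, not the real ones from the session):

```python
# Sketch of value matching: find the multiply gain that makes a patch
# in one map equal the target patch, and an exposure adjustment in
# stops. Pixel values are made-up examples.

def match_gain(source_value, target_value):
    """Gain (a multiply) so the source patch matches the target patch."""
    return target_value / source_value

def exposure(value, stops):
    """Exposure-style adjustment: each stop doubles or halves the value."""
    return value * 2 ** stops

target = 0.42      # average pixel value of the chosen patch (the cabinet)
north  = 0.30      # the same patch as seen in the other map

gain = match_gain(north, target)
print(round(gain, 2))            # 1.4
print(round(north * gain, 2))    # 0.42 -- now matches the target
print(exposure(0.25, 2))         # 1.0 -- pushing a value up two stops
```

In Nuke the same arithmetic happens per pixel; the point of measuring a patch first is that one gain brings the whole map into line with the target.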

Before exporting the maps, we did a sanity check, which is used to see if the light information we changed is acceptable to export. In this case we did lose some data, but not enough to be perceptible, as we had to increase the f value by almost 1000 to see the data loss. So now it is time to export the maps to Maya, to set up the light rig.
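The idea behind the sanity check can also be sketched in plain Python (values invented): once a highlight has been clipped, pushing exposure back down can never recover the original data, and that is how the loss shows up.

```python
# Toy demonstration of highlight clipping: a clipped value stays flat
# no matter how far exposure is pushed back down. Numbers are made-up.

def clip(value, limit=1.0):
    """What a non-HDR image does to values above its maximum."""
    return min(value, limit)

def exposure(value, stops):
    return value * 2 ** stops

bright = 6.0                 # true HDR value of the burnt light
stored = clip(bright)        # what a clipped image keeps: 1.0

# push exposure down 3 stops and compare
print(exposure(bright, -3))  # 0.75  -- the real data survives
print(exposure(stored, -3))  # 0.125 -- the clipped data is gone
```

This is why the check involves cranking exposure: differences that are invisible at normal levels become obvious once you expose far enough up or down.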

In Maya, we already had the project set up, and KK made a shelf with some tools that we are going to use in these sessions. The first thing was to align the chrome sphere reflection with the reference image by rotating the Y and X axes; after this we unhid the CG characters to see if the light matched. For this we had a second CG sphere, which made it possible to see how the light would affect the characters.

In the hypershade we were able to see how the chrome and CG spheres were set up, and then KK showed us a way to prepare the render with just the characters: either through the outliner, by changing the visibility of the projections, or through the script editor, where, before changing the visibility of things in the outliner, we clear the command window; after changing the visibility of the background images we can copy the commands from the script editor into the MEL tab. This is useful when we want to make quick changes; the script editor is much more efficient.

Rotomation and Motion Capture

Rotomation and motion capture are the future of this industry, as they open so many doors for shot editing and for optimizing animation and movement. So, what are they?

Rotomation

The first time I heard about it was in our first session with Dom, where we were introduced to matchmove and rotomation: matchmove is the process of tracking a camera and animating or adding something on top of the footage, while rotomation is the process of tracking a body, or part of a body, to then add animation or VFX on top of it. Rotomation needs camera tracking, so these processes are connected, and we can see examples in movies like the Avengers, where the actors wear suits and a lot is added in post production, like Spider-Man or Captain America. The industries that rely most on rotomation are VFX and film animation.

Motion Capture

This was also something I knew about but did not know the name of: it is the process of having actors in mocap suits performing on set so that their movements can be applied to a CG model. A movie that really explored this was Avatar (2009) by James Cameron, where we can see that the actors’ faces are actually on the blue humanoids from the movie. The movements are perfect, and even after 12 years the movie still looks amazing and is a reference in the film, VFX and game animation industries.

These techniques were introduced by film and VFX animation, and those industries really developed them, but today game animation is also using this technology to produce even more realistic animation and movement in games.

VFX, Film and Game Animation

VFX, film and game animation are three very different fields in the industry; it is important to be aware of this and understand it, because they work with different rules and methods, and the roles are different.

VFX Animation

This is where things such as prop animation, particle systems, over-body simulation and CFX (creature effects) happen. It is the most common use nowadays, as VFX animators are responsible for adding features to the shot that didn’t really happen when the footage was recorded; the work is composited into the film and leans on realism, as it is used on real world footage. It must be real to the point where the audience feels like those things are actually happening.

Film Animation

Film animation is about animated features, like movies from Pixar, that are fully made in CGI, where the animation is more cartoony and stylized, exaggerated and less real. But that doesn’t make it any easier, although it is considered easier because it doesn’t have to be so accurate to reality. On the other hand, it needs to be stylized: the characters need to do things that no human being would be capable of, with no reference for those changes. Animators need reference like everyone in the industry, but our references can only be pushed to a certain point; the stylized look is made through creativity, and that, in my opinion, is what makes animated movies so good.

Game Animation

The line between game, VFX and film animation is becoming more and more blurred, as games are becoming more realistic and faithful to reality, but game animation differs a bit, as the animation doesn’t have to be as perfect as film or VFX work. In terms of contracts it is also different: contracts in the game industry can last much longer, while in film and VFX contracts rarely exceed 9 months, so games give a bit more stability and things don’t have to be so rushed.

Previs, Techvis and Postvis

There are some concepts in the 3D animation industry we need to know; these three are used throughout the making of a movie.

Previs – is the process of visualizing a scene before it is physically made, to help fix problems and get an idea of what everything is going to look like.

Techvis – is the process of mapping everything, such as cameras, actors’ positions, lights, camera cranes, etc., to help create the final footage. This process is like a technical sheet of the previs.

Postvis – is the process of visualizing a rough idea of the final result of a movie before going into post-production, so that not long after the footage is shot the directors can see what it is going to look like with some VFX and animation.

Houdini Session – Week 3

As Mehdi said last session, this week would be much harder, and it was! No surprise! It really was super hard, but super cool too. Simulation is a powerful thing if we know how to do it properly.

In the first session we had to create a wood cabin, and as an extra we could add some detail, as it was going to be used this week. I wasn’t able to add detail, as I know so little of Houdini, so to start, Mehdi showed us how to add detail, and it was super easy!

This is what the first cabin looked like:

This is what the cabin looked like after I added detail:

For the wood planks we copied the main cabin and its transform from the first one, and in wireframe mode it was possible to place the planks in the right place; then I decided to add 2 more windows, as I’m still very new to Houdini. We added glass with the same boolean process as before, grouping every part of the cabin separately depending on its material; the four groups were: roof, wood, ground and glass.

But before doing the simulation we must understand how destruction works, so the demonstration was made with a cube and a sphere. We used the concrete simulation and added noise to the destroyed pieces. I admit this part was challenging, and some things only made sense when applying this knowledge to the cabin, but I was able to create all the things Mehdi showed us.

Destruction of the cabin

Simple Cabin

We began the destruction simulation with the simple cabin, where we added an rbdmaterialfracture node with 3 fracture levels, each with different scatter points and noise frequency. Each fracture level can have different scatter points as a way to create more small pieces for the destruction, as well as noise frequency for a more random look. We added an explodedview node to see the scattered pieces.

Next we created an rbdbulletsolver node and connected it to the rbdmaterialfracture node. If we hit play, the whole cabin starts to fall, so we add a ground plane in the bulletsolver attributes. The result is that the cabin destroys itself; nothing holds together, for now it is only loose pieces. To fix that we must connect the constraint output from the bulletsolver to the rbdmaterialfracture, which sets a glue constraint that keeps the pieces together. When doing this I had some issues, because some pieces had errors, but in the talks with Mehdi he explained that the problem was due to the noise frequency on the fracture levels.
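A toy model of how a glue constraint behaves, in plain Python rather than a DOP network (the strength and impact numbers are invented for illustration): a piece comes loose only when the impact on its bond exceeds the glue strength.

```python
# Toy glue constraint: each fractured piece has a bond to its neighbours,
# and the bond breaks only if the impact it receives exceeds the glue
# strength. All names and numbers here are made-up examples.

glue_strength = 50.0
bonds = {
    "wall_piece_1": 10.0,   # barely touched
    "wall_piece_2": 80.0,   # hit hard by the sphere
    "roof_piece_1": 5.0,    # far from the impact
}

broken = [name for name, impact in bonds.items() if impact > glue_strength]
intact = [name for name, impact in bonds.items() if impact <= glue_strength]

print(broken)  # ['wall_piece_2']
print(intact)  # ['wall_piece_1', 'roof_piece_1']
```

This is why, without the constraint, the cabin falls apart instantly: every piece behaves as if its bond strength were zero.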

When we create a sphere and key it to go through one side of the cabin, the whole cabin moves and is not destroyed as we want. The way to change this is to create an assemble node, check create packed geometry and uncheck the others; but the cabin still moves, so we have to create a group with half the cabin and, with an attributecreate node, “tell the software” that the pieces in the group cannot move while the ones hit by the simulation can.

Finally, we can add transformpieces and filecache nodes to have the points of the simulation saved in an optimized network, which makes it easier to import and quickly have it ready to work with.

Detailed Cabin

Now for the detailed wood cabin. Mehdi explained that if we ever work with this kind of thing, someone will send us a file and we will need to load it from disk, so we simulated that process by creating an object merge node and dragging the final null node into the object 1 field. This way the object merge node is connected to the null with the full model’s geometry.

So with 4 blast nodes, assigning each to one of the groups created previously for every part of the cabin, we have them separated and can start editing the parameters for each material fracture. We began with the wood group and created 3 nodes: voronoi fracture, scatter and VDB from points, and connected them. The result is not very realistic; it doesn’t look like wood. To fix this Mehdi showed us a hack: create a transform node, change the cabin’s Y scale, and create a referenced transform from it with invert transformation on: now it looks like wood!!
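The squash-and-unsquash trick can be sketched in plain Python (made-up points and scale factor): fracture in a space squashed along Y, then apply the inverse scale so the pieces stretch into long, plank-like shapes along the grain.

```python
# Rough sketch of the invert-transformation hack for wood fracture:
# points scattered in a Y-squashed space become elongated along Y once
# the inverse scale is applied. Points and scale are invented examples.

squash_y = 0.2                       # squash factor used before fracturing

def scale_y(point, s):
    """Scale a point along the Y axis only."""
    x, y, z = point
    return (x, y * s, z)

# a fracture "piece" that is roughly round in the squashed space
piece = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 1.0)]

# invert the transform: the piece becomes 5x longer along Y (the grain)
restored = [scale_y(p, 1.0 / squash_y) for p in piece]
print(restored)
```

A round-ish voronoi cell in the squashed space becomes a long splinter in the restored space, which is exactly the look wood fracture needs.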

For the glass we created a material fracture node and changed the material type to glass, but it was only creating one fracture center across all the glass, as if the panes were all one piece. To change that we could create multiple blast nodes and apply the effect to each, or we could create a foreachconnectedpiece node, which applies the effect to every piece and keeps things simple.

The roof was destroyed with a material fracture node with the material type changed to concrete and 3 fracture levels, like we did before on the simple cabin, and then a merge node to connect it all. Finally, we can copy the proxies from the simple cabin and paste them onto the end of the simulation. This is what the nodes look like:

Flipbook from simulation