Collab Project – Week 10

The final week of the collaborative unit was for final adjustments: adding overlays and editing the final video, sound and effects. It was obviously a week in which we met more often and had more contact as we gave feedback on each other's work. I have to say, I always get worried about group projects as my past experiences were not so good in that regard, but this one was really fun to work on and completely opened my horizons to what a professional future might, or should, look like.

This project also included the goal of learning how to work with other people, as well as getting to know the students from this and other courses, and it worked! I really enjoyed working with the people I worked with and I look forward to more. Of course it is always stressful to depend on other people's work to get ours done too, but when people commit to it and actually like what they're doing, I don't think there is much to worry about.

With this project I did a lot of things I had never done before and it wasn't so bad; I have an idea of what I must improve and change in my working method. Working with cameras and lights and understanding the render settings and the render farm were the main and scary parts of this project, as I had never used them much, so I had to take a bit more time to study those things, and I believe I did a good job for my first try.

Final Previs Project:

The collaboration of each member of the group is clear: I had the Michael Jackson performance with David (rig), Tina had Beyoncé with Janine (rig), and Abbie did an electronic music performance with Stich (rig) using sound made by Kenneth (sound design student).

For this project we created a Discord server for the collaborative unit and used it for discussing everything, as well as for calls and uploading work.

Collab Project – Week 9

This week we had another meeting to see how everyone's work was going, to give some feedback and to make some final changes. We also finally rendered using the render farm and the pipeline.

At first I was looking for a way to render the sequence from the camera sequencer. I looked it up online and found a script that I ended up not using, as Dom messaged me and I had an hour-long call with him working on the cameras, because we thought it was possible to render it all at once using render layers. I had never used them and Dom taught me; it was pretty simple, but we were having a problem with the renderable cameras. That problem was solved when I asked Luke for help: it turns out that what I wanted to do was not possible, as the cameras could not be deleted while they were being used on a different render layer. So I deleted all of them and rendered each shot separately, which took a bit more time, but the result was great.

Of course, before all this, everything needs to be properly referenced and there cannot be any errors, which I also solved with Luke's help by deleting the references that were wrong and not being used.

It was the first time I used the render farm and it is truly fast, it's crazy. But I had some problems: in the end I tried to render multiple shots at once and for some reason the cameras changed between frames, so I rendered it again and it all came out correct.

This is what the final render looks like:

Now we only have to get all the performances together, put the overlays on top, add some effects, and it's done!

Collab Project – Week 8

This week I finished the camera work with the camera sequencer, as well as the lighting. It was certainly an experience, as I had never used the camera sequencer or added lights to a scene like this one.

So, I decided to begin with the blocking and then work on the cameras. I watched a few tutorials about the camera sequencer and it was pretty simple; it looks like Premiere Pro, you just have to set up the frame count and camera.

This is what the camera sequencer playblast looked like before lighting:

The lighting part was the hardest, as it was not looking the way I wanted. I watched a few tutorials on it, but it was not looking the same in my scene, so I decided to try every light Maya has in an empty scene to understand how they worked. I still don't know all of them, but I understand them better for sure.

I ended up using spot lights like my reference footage. I tried to be as faithful to it as possible, although in the end I made a few changes: I set up a line of orange and yellow lights in the back, a main white one in the center and five more on the top front of the stage. I like the final result, but I know it is something I should improve in the future. This is what the setup looked like:

This is how the lights look, although for some reason they don't all show up.

When the lights are in the viewport it is not possible to see all of them. I duplicated them, so there must be a bug; I can only get an idea of the result when it is rendered. I also changed the skydome HDRI to a night image of a park that seemed better for the Michael Jackson performance.

render of audience view

Collab Project – Week 7

This week I finished the blocking animation for my character. It took longer than I expected, as Michael Jackson moves so much and defies physics in every possible way, which makes it harder to get a notion of timing and movement. Some things still need tweaking and improvement, like the facial expressions and the spline pass.

This is the playblast of the blocking:

I added the hat and the mic with the animSnap tool and downloaded the assets from the TurboSquid website. This tool is very useful, as nothing really needs to be parented to the hands; the tool just keys the object in the position you want – amazing! The only thing is that I find it easier to snap the prop once all the animation is done, so there are no problems when moving the limbs. I got a few bugs when I tried to snap before having any keys.
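The "snap and key" idea behind a tool like animSnap can be sketched in plain Python (this is just an analogy, not the actual Maya tool or its API): instead of parenting the prop to the hand, the hand's world position is sampled on each frame and copied onto the prop as a key.

```python
# Hypothetical sketch of what a snap-and-key tool does: sample the hand's
# world position per frame and key the prop there, instead of parenting.
# (Plain Python illustration - not the real animSnap tool.)

def bake_prop_to_hand(hand_positions_by_frame):
    """Return per-frame world-space keys for the prop, copied from the hand."""
    prop_keys = {}
    for frame, position in hand_positions_by_frame.items():
        # The prop is keyed directly in world space, so there is no
        # constraint to break or clean up afterwards.
        prop_keys[frame] = position
    return prop_keys

# Example: a hand moving along X over three frames.
hand = {1: (0.0, 10.0, 0.0), 2: (1.0, 10.0, 0.0), 3: (2.0, 10.0, 0.0)}
keys = bake_prop_to_hand(hand)
```

This also shows why snapping works best after the hand animation is final: the baked keys are a copy, so any later change to the hand means baking again.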

Constraints are something I should improve and understand better, as they are so useful and extremely necessary for animation, but quite confusing. This tool just accelerates the process.

I chose this performance because of its difficulty and challenge, and also as a way to better understand how the body works and moves. It was much more work than I expected and it was worth it, but it still needs a lot of improvement.

The next steps are the cameras, using the camera sequencer, and adding lights, something I also must study a bit, as there are so many of them and I am aware that I don't always make the best decisions with lights.

Collab Project – Week 6

This week I blocked the main poses of my character for the beginning of the animation, as well as some camera animation with the camera sequencer. At first I had thought about having my character walk to his pose and begin from there, but I was having some trouble, as Michael Jackson does not walk normally – it's more like dance walking – and it was not going very well. So I decided to begin directly with the main pose, change to another one, and then start the dance walking like in the reference.

These first 300 frames are when he grabs the microphone and the hat and throws it to the audience, then starts singing. I used animSnap to snap the hat to his head and then to his hand. The microphone isn't parented yet, as I want to finish the animation of the hand and arm first so that everything stays in place.

This is the first playblast:

I also set up the first 3 shots using the camera sequencer. I had never used it before and it is pretty intuitive; it works similarly to Premiere Pro, just less advanced. The cameras are all in place too, but I believe that after all the animation is done everything will look better with a tweak to their movement.

Houdini Session – Week 6

This week's session was about experimenting with examples of particle effects mixed with disintegration, like we did with the particles, but this time using the Voronoi fracture, where the fractured pieces flow and disappear like the particle simulation, giving it a more magical effect.

Disintegration test

So the first try was with a torus shape and the scatter and Voronoi fracture process we have been doing, but this time with a DOP network. Inside the DOP network we used a POP force, an RBD packed object and a Bullet RBD solver connected to a multisolver node – useful when working with multiple different solvers, as in this example. But this only allowed us to disintegrate the torus all at once, and what we want is for the effect to happen from right to left. So we had to create a group, assign it to points and check the "keep in bounding regions" option; this way a bounding box will be created that can be keyed to give the result we want.

This group we just created is called static, and now we need an attribute create node to tell the software that when a piece falls outside the bounding region, the pieces of the fracture can have the magical effect created in the DOP net.

I had an issue when assigning the active and static parameters, because I typed the name "active" in caps like "static", and only when I redid this part of the exercise did I realise that the name active really has to be lowercase. In the end we applied this effect to the test geometry Tommy.
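The logic of the keyed bounding region can be sketched in plain Python (outside Houdini, purely as an illustration, with made-up positions): pieces the animated box has passed are flagged with a lowercase `active` attribute for the solver, while the rest stay static.

```python
# Rough plain-Python illustration of the keyed bounding box idea: pieces
# the box edge has moved past are released to the RBD solver (active = 1),
# the rest stay static (active = 0). Note the attribute name must be the
# lowercase "active" - capitalising it is exactly the mistake described above.

def classify_pieces(piece_positions_x, box_min_x):
    """Flag each piece by whether the animated box edge has passed it."""
    attributes = []
    for x in piece_positions_x:
        # active = 1 -> simulated by the solver; active = 0 -> held static
        attributes.append({"active": 1 if x < box_min_x else 0})
    return attributes

# The box edge is keyed across the shape over time; at box_min_x = 0.5
# only the first piece has been released so far.
pieces = classify_pieces([0.0, 1.0, 2.0], box_min_x=0.5)
```

Keying `box_min_x` over time is what makes the disintegration sweep across the shape instead of happening all at once.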

This is what the nodes looked like:

geo nodes

Flipbook:

flipbook

Smoke with particles simulation

The next task was to make a smoke simulation with particles and have them react to the smoke movement. The first step was to create a circle, scatter points from it, and add a pyro source node and a volume rasterize attributes node to assign density and temperature, plus an attribute noise on the density and a pyro solver to get smoke, like we did in the previous session.

After adding lights and tweaking the smoke simulation, we can convert to VDB and cache it to disk with a file cache node. When this is done we can take care of the particles with a DOP network, scattering the points from a circle again. This is what the smoke simulation nodes looked like:

geo nodes

This is what the DOP network nodes looked like:

DOP network nodes

This was the final result:

Big Destruction

The final task of this session was more complex, as it used imported geometry and animation: Mehdi prepared a model of a building and a meteor simulation so that we could do a big destruction of the building with the meteor. So the first step was to import the assets, clean them and check whether the geometry was prepared for destruction, as every shape needs to be closed and can't have artifacts (errors in the geo).

Before all that, we need to check where the destruction is going to happen, and then clean the geometry at least in those parts: first the floors of the building's different storeys, then the entrances, then the walls and then the columns.

Nodes after cleaning geo:

building geo cleaned nodes

Now we need to clean the geo of the meteor. Its geometry was too heavy for a rigid body simulation, so we need to reduce it: we unpack it, convert it to VDB and, with a time shift node, remove the time dependency so the proxy is calculated only on the first frame. This way the animation is much lighter and the simulation runs faster. After cleaning the geo, this is what the nodes looked like:

meteor cleanup
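Why the time shift helps can be shown with a small plain-Python sketch (illustrative only; the `build_proxy` function is made up, not a Houdini API): freezing evaluation to frame 1 means the expensive proxy conversion runs once and every later frame reuses it.

```python
# Plain-Python analogy of removing time dependency with a time shift:
# without it the heavy proxy would be rebuilt every frame; frozen to
# frame 1, it is built once and reused for the whole simulation.

def make_frozen_proxy(build_proxy, frozen_frame=1):
    """Wrap a per-frame builder so it only ever evaluates the frozen frame."""
    cache = {}
    def proxy_for_frame(frame):
        # Always evaluate at the frozen frame, and only compute it once.
        if frozen_frame not in cache:
            cache[frozen_frame] = build_proxy(frozen_frame)
        return cache[frozen_frame]
    return proxy_for_frame

calls = []
def expensive_build(frame):
    # Stand-in for the heavy unpack + VDB conversion.
    calls.append(frame)
    return f"proxy@{frame}"

proxy = make_frozen_proxy(expensive_build)
results = [proxy(f) for f in range(1, 6)]  # five frames requested
```

Requesting five frames triggers only one build, which is the whole point of the trick.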

Now we need to destroy the building. For this we first tried 5 rectangular shapes with the goal of destroying them all, but with different fractures. I had some issues when using the for-each piece node, as it is much more complex and heavy for my machine. In the end the attributes disappeared and I need to search for a solution, but this is what they looked like:

rbd setup nodes
fractured building

Interview Prep with Dom

Dom sent me a message saying that some people had been asking for interview prep and asked if I wanted to set one up too – of course! It is always important to be aware of what an employer would see when looking at my CV and showreel. So I updated my showreel and CV; with this virus situation I had not updated them for 2 months, as I am not even in the UK to apply for jobs. I added the particle shading Houdini session and the best works from term 1, like the full body walk cycle with the prop constraint and the body mechanics shot, as well as some works I did for my portfolio to enter this master's degree.

I am aware that the renders could be much better and I should have at least a matchmove shot, but none of them look as good as I want them to; I am improving those shots to include in the next update.

So at first we did a mock interview, and Dom gave me some feedback on what to say and not say, and I took notes. The main don'ts were:

  • Don’t mention that you have no experience
  • Don’t fill in the gaps
  • Don’t mention tutorials
  • Don’t repeat yourself

The main Do’s were:

  • Turn negative to positive
  • Be natural and authentic
  • Talk about the rigs
  • Talk about every skill

These were the main notes I took from the first interview try. On the second try I really improved: at first I was really awkward and sort of lost because I didn't know what to say, but then I got a bit more relaxed, as I knew the answers to all the questions and was able to turn the negatives into positives, as well as talk a lot about the rigging and everything I did for each project on my showreel.

In the end, the things Dom advised me to focus on were to better organise my showreel, as I have a Houdini project, then Maya, then textured stuff – so it should have a kind of "narrative", an evolving thing separated by software or technique – and, of course, to add some of the matchmove and rotomation shots we have been making in the past sessions. Dom also suggested taking the tracked shots and having some of my animations with no background happen in those previously tracked shots.

The main questions an employer is going to ask are: "Who are you?" and "Why should we hire you?" These are always hard to answer, as you really have to look at yourself like an employer would: would you hire yourself? If so, why? Why are you better than the other candidate? This is where young job seekers need to put in their answering skills and effort.

Vertex 2021

I have to be honest, I had never heard about this event before I knew UAL had got us tickets, and I was pretty excited about it. Even though it was an online event, it is amazing to see the people who do what I want to do for my career – it's an inspiration.

The sessions I was really excited about were:

  • 9am – Jonathan Cheetham: Delving Deep into Digital Humans
  • 11am – Lois Van Baarle: How to Market Yourself on Social Media
  • 3pm – Alvaro Martinez: Indie Storytelling and World Building in Realtime
  • 5pm – Dylan Sisson: Making Art with Soul
  • 10:30pm – Finnian MacManus: Terraform VR Demo

The sessions that really blew my mind were the ones from Alvaro Martinez and Finnian MacManus – surprisingly, as I was expecting to enjoy the one from Dylan Sisson of Pixar more.

Alvaro Martinez was amazing, as he was so fast and made such cool things in Houdini. I was impressed by how easily he created an amazing, super simple setup using the HeightField nodes, building a terrain with just a plane and 4 torus shapes. He created a sci-fi, Star Wars-like moon look with no complex shapes and it looked amazing.

Finnian MacManus was something I was not expecting to watch, as VR is not really something I think about much – although it is super cool, I won't have any chance to use it. But I ended up watching his session and it was also super cool, as he made a "sketch" of an idea he had in VR, where he literally just grabbed some shapes, like a cabinet, and modelled a stage with them. At first I really didn't get what he was doing, but as he moved forward it took on an amazing look, built from such a random shape that he just copy-pasted all over the place. VR is a really cool, evolving technology that allows artists to be inside their own projects and gives them some sort of infinite reality where you won't ever run out of (physical) room space.

Dylan Sisson was also cool, but I had higher expectations for his session – maybe that is what ruined it for me. The 2 sessions I just described surprised me, as I had no expectations for them, but as this was Pixar I was thrilled. Their latest movie Soul was amazing, and Dylan Sisson talked about things I did not know at all, like the fact that Joe's sweater was not a mesh: it was literally 280,000 strands of fabric, which is impressive. I really enjoyed it, but I guess I was hoping for something more like a masterclass; this was more about showing what RenderMan can do and its new features for the next version. As Soul was made with it, we can really see its endless possibilities.

XPU rendering was something I really did not know about, or even that it was possible: it is the combination of both CPU and GPU rendering within the same render engine – RenderMan! There was also LAMA, a layered shading system (by ILM), and Stylized Looks, imagination-based rendering where we can make the renders look like drawings – like, literally drawings.

It was an amazing event. I learned so much in such a short time with such cool professionals and software – everything in this industry. The only thing I'd change is the format of the event: to be in person. But that was not possible at the time, and it was interesting either way.

Collab Project – Week 5

This week was the beginning of the animation, as the master file was completed and each member of the group could start to block the main poses. For me it is best to begin with the lip sync animation; I am aware that I could use animation layers for that, but when previewing the animation it is always more interesting to have the character at least following the sound clip.

For this next stage I also went looking for ways to constrain animation for a period of time, as my character will throw a hat to the audience and grab a microphone when he enters. I found out that when parent-constraining an object, if both are keyed, it is possible to animate the "blend parent" attribute that keys the constraint on and off. But on another project, where a character has to grab a bottle and change the constrained hand, I asked Luke if there was a better way to do it, as I was only able to parent the object to one of the hands – and there is a tool! AnimSnap does exactly the same, but you don't have to parent the object.
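The maths behind the "blend parent" idea can be sketched in plain Python (this is just the blending concept, not the actual Maya attribute or API): at weight 0 the prop follows its own world-space keys, at weight 1 it follows the hand, and animating the weight hands the object over smoothly.

```python
# Plain-Python sketch of a constraint blend: a weight of 0 keeps the
# prop on its own keys, a weight of 1 locks it to the parent (the hand),
# and intermediate weights interpolate between the two positions.

def blend_position(world_pos, parent_pos, blend_weight):
    """Linear blend between free animation and the constrained position."""
    return tuple(
        (1.0 - blend_weight) * w + blend_weight * p
        for w, p in zip(world_pos, parent_pos)
    )

free = (0.0, 0.0, 0.0)   # prop's own world-space keys (made-up values)
hand = (2.0, 4.0, 0.0)   # hand's world position (made-up values)
halfway = blend_position(free, hand, 0.5)
```

Keying the weight from 0 to 1 over a few frames is what avoids the pop you would get from switching a parent constraint on instantly.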

So I began my animation with the lip sync and a short walk to the middle of the stage so that Michael Jackson can prepare his pose before it all starts. While Michael Jackson is walking to his position, a guitar player is playing his solo and the first camera shot is from the audience. Right after that it cuts to a camera behind the character, which then makes a tilt shot from the feet to the head, and we see the view from the artist.

We already decided on the concert logo and will insert it in each act with different effects each time.

Lip Sync animation:

Houdini Session – Week 5

This week's session began with a demonstration of a fog effect inside a sci-fi corridor. I was able to find and reproduce the effect, but was not able to get the colour of the interior the way Mehdi did. Still, I got the chance to learn how it works, through an atmosphere volume inside the Arnold shader node. The real exercise this week was to create an explosion, and in the end we used a sphere to create a collision with the smoke and see how it reacted.

First we made a fire and flame simulation. For this we created a circle and scattered it; then we need an attribute create node for density, as the DOP network that is going to be created afterwards needs it so that the simulation emerges from the circle. A color node is added and density is applied to it, as well as to a volume rasterize node; finally we add a null node and everything is set to try out the DOP network.

Inside the DOP network, as we are on the latest version of Houdini, we can use the sparse smoke solver node, as it is optimised. If we used the regular smoke solver node, the difference would be that the smoke would be simulated inside a fixed bounding box, while with the sparse nodes that bounding box grows with the simulation and is faster to calculate, as the empty spaces are no longer calculated, unlike with the "old" smoke solver node.
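The dense-versus-sparse difference can be shown with a toy plain-Python comparison (purely illustrative, with made-up voxel data): a dense solver touches every voxel of a fixed container each step, while a sparse solver only touches voxels that actually hold smoke.

```python
# Toy comparison of the dense vs sparse solver idea: dense work is
# proportional to the whole container, sparse work only to the voxels
# that actually contain smoke, so it grows with the simulation instead.

def dense_work(box_voxels):
    """Fixed container: every voxel is visited, with or without smoke."""
    return len(box_voxels)

def sparse_work(box_voxels, density):
    """Only voxels with non-zero density are visited."""
    return sum(1 for v in box_voxels if density.get(v, 0.0) > 0.0)

box = [(x, y) for x in range(10) for y in range(10)]  # 100-voxel container
smoke = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.2}       # only 3 occupied
```

With only a small plume in a big container, the sparse approach skips almost all of the work, which is why it feels so much faster in practice.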

To use the sparse smoke solver we have to use the other sparse nodes, so we connect a smoke object (sparse) node – the container – to the solver, as well as a volume source (which works like the POP source from week 2) connected to the sourcing input. This is the node where we can assign the diverse volumes to be calculated, such as density, temperature, flame, color and velocity. When we run the simulation we can see that something is missing – it's very white and flat – so we need to add lights. For that we create a dopimport node to visualise some of the data from the DOP network and assign it density, temperature, velocity and color. We then create 2 Arnold lights in the object network and assign them bluish and reddish colours (very subtle) pointing in opposite directions. Now everything looks better and we can begin playing with the fire and flame using a volume visualization node, assigning temperature to the emission.

Now it looks more like fire and not just smoke. In the DOP network, with a gas turbulence node, we can tweak the attributes to give it the proper swirl and turbulence, break up the shape a little and do whatever is necessary for the effect we need. This is what the flame looked like after tweaking and adding a wind force to the simulation.

flame
flame simulation nodes
flame simulation DOP network nodes

To render, we used the convert VDB, VDB vector merge, primitive and file cache nodes. Arnold can only render volumes through an Arnold volume node with the cached file assigned to it, as well as an Arnold shader, this time with a standard volume. This is what the render looked like:

flame simulation render

On to the explosion simulation!!

Instead of creating a circle and having fire emerge from it, we created a sphere, a VDB from polygons node, a scatter node and a pyro source to create the volumes, with the help of a volume rasterize node to assign the density and temperature attributes. Now we can add a pyro solver node, which holds most of the attributes and tweaks for the explosion and related simulations, like the temperature of the smoke, the cooling rate, breaking up the shape, etc. We added a volume visualization node for a bit more flexibility in the simulation's look.

After experimenting with all the attributes and parameters, we prepared the render with the same process as before: convert to VDB and, with a file cache node, assign it to an Arnold volume and its shader, and it is ready to render. This is what it looked like:

explosion simulation render
explosion nodes

This render took a bit more tweaking on the shader, as we used a ramp RGB node connected to the emission of the standard volume; there it is possible to push the colours a bit more and really make it look like a huge explosion.
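What the RGB ramp on the emission is doing can be sketched in plain Python (a stand-in for the node, with made-up colour stops): normalised temperature is mapped through a colour gradient so hot areas read orange-white and cool areas fade to dark red.

```python
# Plain-Python stand-in for a colour ramp on emission: linearly
# interpolate between (position, rgb) stops for a 0..1 temperature.

def ramp_color(temperature, stops):
    """Interpolate between sorted (position, rgb) stops; input clamped 0..1."""
    t = max(0.0, min(1.0, temperature))
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if p0 <= t <= p1:
            f = (t - p0) / (p1 - p0) if p1 > p0 else 0.0
            return tuple(a + f * (b - a) for a, b in zip(c0, c1))
    return stops[-1][1]

# Dark red -> orange -> near-white, roughly an explosion look.
stops = [
    (0.0, (0.2, 0.0, 0.0)),
    (0.5, (1.0, 0.4, 0.0)),
    (1.0, (1.0, 0.9, 0.7)),
]
cool = ramp_color(0.0, stops)
mid = ramp_color(0.5, stops)
```

Pushing the upper stops past pure white-hot values is the "push the colours a bit more" step: the ramp shape, not the simulation, controls how fierce the explosion reads.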

Collision!!

To end this week's session, Mehdi taught us how to take an object such as a sphere and have it collide with the explosion and flame. For that we keyed a sphere going through the simulation and, with a VDB from polygons node, connected it to the pyro solver. This is what the explosion collision looked like:

The same was done for the flame, and finally we added a point velocity node to have the smoke follow the sphere and make it look a bit more realistic.