Lighting Session 2 – KK Yeung

This second lighting session was done in Katana, a lighting and look-development application by Foundry, and in my opinion it made the concepts KK was teaching us easier to understand. We used frames from the movie Fight Club (1999), by David Fincher, and lit a CG scene with the same lights as the frames from the movie. Nuke and Katana are very similar, but I think Katana is a bit more intuitive and easier to understand.

KK prepared a file for us and gave an intro at the beginning of the session, which set out a proper workflow for working with multiple shots, such as organising the shots by angle and re-using light rigs. The angle of the shots and the amount of light they receive can change, but that can be tweaked in Katana by re-using the same light rig with a small variation for a different shot.

One thing KK mentioned is that when looking for an HDRI, an important aspect to be aware of is the position of the sun, as the HDRI can be used as a light. On the HDRIHaven website they preview what the light looks like on different materials, which is extremely helpful.

The first step of this exercise was to set up the graph state variables (GSVs): change the name to "shot" and add a variable for each shot; in this case we created 5 variables named "shot01, shot02… etc". After this we go to the Node Graph, and in the lighting node group we can see there is a GafferThree node. This is the master light rig, which will then be tweaked and changed for each shot or camera angle, overriding it accordingly.
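As a side note, here is a minimal sketch of what this GSV setup could look like if scripted from a Python tab, assuming Katana's NodegraphAPI (the exact parameter layout of GSVs is from memory and may vary between Katana versions):

    from Katana import NodegraphAPI

    # Graph state variables live under the root node's "variables" parameter.
    variables = NodegraphAPI.GetRootNode().getParameter('variables')
    shot = variables.createChildGroup('shot')
    shot.createChildNumber('enable', 1)
    shot.createChildString('value', 'shot01')

    # One option per shot: shot01 ... shot05.
    options = shot.createChildStringArray('options', 5)
    for i, option in enumerate(options.getChildren()):
        option.setValue('shot%02d' % (i + 1), 0)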

To switch between rigs per shot, we create a Variable Switch node and add it below the GafferThree node. To add more inputs to this node we just hit the arrow until we have the desired number; in this case we created 4 inputs and connected the GafferThree node to those 4 inputs. To see the parameters of a node we either hit E while that node is selected or click the square on its right side until it turns green. In the panel on the right we name the variable "shot", and in the patterns we set one per shot group. So the first will be shot01 shot03, the second will be shot02 shot04 and the last one will be shot05.
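The switch itself could also be sketched in Python – here the addInputPort call and parameter name are assumptions from memory, and the patterns are normally set in the UI as described above:

    from Katana import NodegraphAPI

    root = NodegraphAPI.GetRootNode()
    switch = NodegraphAPI.CreateNode('VariableSwitch', root)
    switch.getParameter('variableName').setValue('shot', 0)

    # One input port per light-rig variation.
    for i in range(4):
        switch.addInputPort('i%d' % i)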

Before changing anything else, we made a few renders to be sure of how things work in the setup KK made for us: a render of just the characters, and one of everything together (background and characters). These can be seen in the Catalog tab, where we can see a list of the renders we have made.

Back in the node graph, on input one we create a new GafferThree node and connect it to the corresponding input. In the parameters tab we have no lights, but if we hit "show incoming scene" we can see what the GafferThree node above contains and change it just for this input. We deactivated the fill light so we could focus only on the env light; to edit it, right-click and hit "adopt for editing". With this we can see new parameters showing up in the panel below, in the Material tab.

To activate live rendering for the lights we are using, we go to the Scene Graph, which works like Maya's Outliner and shows everything in the project. We select the light group and check the live render box on the env light so it is rendered along with the image. In the Object tab we can apply rotation to this light and, while live rendering, match it to the movie frame we are trying to replicate, hitting "S" to switch between the CG render and the original frame.

This process was only for shot 1, so now we have to check whether shot 3 (which is in the same camera angle group) is correct with this setting. As it isn't, we have to add another input on the Variable Switch node, connect the shot 1 GafferThree node to this last input and assign shot 3 to it. Then we add another GafferThree node on input 4 and, with a live render, tweak its rotation to match the original frame. This is how it looked after tweaking:

Once the light matches the original frames, we can render the prepared plates KK made for us, which contain only the background of the original frame without the actors, and check whether the light matches. The exercise for this session is to match every character with the original plates, so we must do this for all the frames.

When comparing the character render with the prepared plate, we can select "toggle overlay/underlay controls", middle-mouse drag the background image to the underlay slot, and we have the final image for this shot.

Shot 1:

Shot 3:

KK told us to do the rest of the shots with what he had just taught us.

Shot 2:

Shot 4:

Shot 5:

The only thing I was unable to figure out was how to export the rendered images; every time I rendered to disk nothing appeared, so I took screenshots of the final frames. I am aware that some things are wrong and I expect to review them with KK in our one-to-one next week.

Collab Project – Week 10

The final week of the collaborative unit was for the final adjustments: adding overlays, editing the final video, sound and effects. It was naturally a week in which we met more often and had more contact, as we gave feedback on each other's work. I have to say, I always get worried about group projects, as my past experiences in that regard were not so good, but this one was really fun to work on and completely broadened my horizons as to what a professional future might, or should, look like.

This project also had the goal of learning how to work with other people, as well as getting to know the students from this and other courses, and it worked! I really enjoyed working with the people in my group and I look forward to more. Of course it is always stressful, as we depend on other people's work to get ours done too, but when people commit to it and actually like what they are doing, I don't think there is much to worry about.

With this project I did a lot of things I had never done before, and it wasn't so bad; I now have an idea of what I must improve and change in my working method. Working with cameras and lights and understanding the render settings and render farm were the main and scariest parts of this project, as I had never used them much, so I had to take a bit more time to study those things, and I believe I did a good job for a first try.

Final Previs Project:

The collaboration of each member of the group is clear: I had the Michael Jackson performance with the David rig, Tina had Beyonce with the Janine rig, and Abbie did an electronic music performance with the Stich rig, using sound made by Kenneth (a sound design student).

For this project we created a Discord server for the collaborative unit and used it to discuss everything, as well as for calls and uploading work.

Collab Project – Week 9

This week we had another meeting to see how everyone's work was going, give some feedback and make some final changes. Finally, we rendered using the render farm and the pipeline.

At first I was looking for a way to render the sequence from the camera sequencer. I looked it up online and found a script that I ended up not using, as Dom messaged me and I spent an hour with him working on the cameras, because we thought it was possible to render it all at once using render layers. I had never used them and Dom taught me; it was pretty simple, but we were having a problem with the renderable cameras. That problem was solved when I asked Luke for help: it turned out that what I wanted to do was not possible, as the cameras could not be deleted while they were being used on a different render layer. So I deleted all of them and rendered each shot separately, which took a bit more time, but the result was great.
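For reference, a minimal sketch of that renderable-camera cleanup in Maya Python (the camera name is a placeholder):

    import maya.cmds as cmds

    # Make only the current shot's camera renderable before rendering.
    shot_cam = 'shot01_camShape'  # hypothetical name
    for cam in cmds.ls(type='camera'):
        cmds.setAttr(cam + '.renderable', cam == shot_cam)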

Of course, before all this, everything needs to be properly referenced and there cannot be any errors, which I also solved with Luke's help by deleting the references that were broken and not being used.

It was the first time I used the render farm and it is truly fast, it's crazy. I did have some problems: at the end I tried to render multiple shots at once and for some reason the cameras changed between frames, so I rendered it again and then it was all correct.

This is what the final render looks like:

Now we only have to put all the performances together, add the overlays on top and some effects, and it's done!

Collab Project – Week 8

This week I finished the camera work with the camera sequencer, as well as the lighting. It was certainly an experience, as I had never used the camera sequencer or added lights to a scene like this one.

So, I decided to begin with the blocking and then work on the cameras. I watched a few tutorials about the camera sequencer and it was pretty simple; it looks like Premiere Pro, you just have to set up the frame range and camera.
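A small sketch of playblasting through the sequencer with Maya Python – the sequenceTime flag makes playblast follow the sequencer's shot cuts instead of a single camera (the output path is illustrative):

    import maya.cmds as cmds

    cmds.playblast(sequenceTime=True, format='movie',
                   filename='movies/previs_edit', viewer=False,
                   percent=100, forceOverwrite=True)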

This is how the camera sequencer playblast looked before lighting:

The lighting part was the hardest, as it was not looking the way I wanted. I watched a few tutorials on it, but it was not looking the same in my scene, so I decided to try every light Maya has in an empty scene to understand how they worked. I still don't know all of them, but I certainly understand them better.

I ended up using spot lights like in my reference footage; I tried to be as faithful to it as possible, although in the end I made a few changes. I set up a line of orange and yellow lights at the back, a main white one in the centre and five more at the top front of the stage. I like the final result, but I know it is something I should improve in the future. This is what the setup looked like:

This is how the lights look, though for some reason they don't all show up.

When the lights are in the viewport it is not possible to see what all of them look like; I duplicated them and there must be a bug, so I can only get an idea of the result when rendered. I also changed the skydome HDRI to a night image of a park, which seemed better for the Michael Jackson performance.
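As a rough sketch of how a rig like this could be rebuilt with Maya Python – positions, colours and intensities are illustrative, and the skydome part assumes Arnold's aiSkyDomeLight with a file texture driving its colour:

    import maya.cmds as cmds

    # A warm back row of spot lights plus a white key in the centre.
    for i in range(6):
        shape = cmds.spotLight(name='backRow%02d' % (i + 1),
                               rgb=(1.0, 0.55, 0.1), intensity=500, coneAngle=35)
        cmds.move(-5 + i * 2, 8, -6, cmds.listRelatives(shape, parent=True)[0])

    key = cmds.spotLight(name='keyCentre', rgb=(1, 1, 1),
                         intensity=1200, coneAngle=45)
    cmds.move(0, 10, 6, cmds.listRelatives(key, parent=True)[0])

    # Swap the skydome HDRI for the night image (file path is a placeholder).
    dome = cmds.shadingNode('aiSkyDomeLight', asLight=True)
    if cmds.nodeType(dome) == 'transform':
        dome = cmds.listRelatives(dome, shapes=True)[0]
    tex = cmds.shadingNode('file', asTexture=True, name='skyHDRI')
    cmds.connectAttr(tex + '.outColor', dome + '.color', force=True)
    cmds.setAttr(tex + '.fileTextureName', 'sourceimages/park_night.hdr',
                 type='string')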

render of audience view

Collab Project – Week 7

This week I finished the blocking animation for my character. It took longer than I expected, as Michael Jackson moves so much and defies physics in every possible way, which makes it harder to get a sense of timing and movement. Some things still need tweaking and improvement, like the facial expressions and the spline pass.

This is the playblast of the blocking:

I added the hat and the mic with the animSnap tool, using assets downloaded from the TurboSquid website. This tool is very useful, as nothing really needs to be parented to the hands; the tool just keys the object in the position you want – amazing! The only thing is that I find it easier to snap the prop once all the animation is done, so there are no problems when moving the limbs. I got a few bugs when I tried to snap before having any keys.

Constraints are something I should improve on and understand better, as they are so useful and extremely necessary for animation, but quite confusing. This tool just accelerates the process.

I chose this performance because of its difficulty and challenge, and also as a way to better understand how the body works and moves. It was much more work than I expected; it was worth it, but it still needs a lot of improving.

The next steps are the cameras, using the camera sequencer, and adding lights, something I must also study a bit, as there are so many of them and I am aware that I don't always make the best decisions with lights.

Collab Project – Week 6

This week I blocked the main poses of my character for the beginning of the animation, as well as some camera animation with the camera sequencer. At first I had thought about having my character walk to his pose and begin from there, but I was having some trouble, as Michael Jackson does not walk normally – it's more like dance walking – and it was not going very well. So I decided to begin directly with the main pose, change to another one, and then start the dance walking like in the reference.

These first 300 frames are when he grabs the microphone and the hat and throws the hat to the audience, then starts singing. I used animSnap to snap the hat to his head and then to his hand. The microphone isn't parented yet, as I want to finish the animation of the hand and arm first so that everything stays in place.

This is the first playblast:

I also set up the first 3 shots using the camera sequencer. I had never used it before and it is pretty intuitive; it works similarly to Premiere Pro, just less advanced. The cameras are all in place too, but I believe that once all the animation is done, everything will look better after a tweak to their movement.

Houdini Session – Week 6

This week’s session was about experimenting with examples of particle effects mixed with disintegration, like we did with the particles, but this time using the Voronoi fracture, where the destroyed pieces flow away and disappear like the particle simulation, giving it a more magical effect.

Disintegration test

So the first try was with a torus shape, using the scatter and Voronoi fracture process we have been doing, but this time with a DOP network. Inside the DOP network we used a POP force, an RBD packed object and a Bullet RBD solver connected to a multisolver node – useful when working with multiple different solvers, as in this example. But this only allowed us to disintegrate the whole torus at once, and what we want is for the effect to happen from right to left, so we had to create a group, assign it to points and check the "keep in bounding regions" option. This way a bounding box is created that can be keyframed to give the result we want.
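Before moving on to the group trick, here is a minimal sketch of that DOP setup in Houdini Python, assuming an existing DOP network at /obj/torus_sim (the path is illustrative and the wiring is from memory – the Multi Solver takes the object on its first input and solvers on the rest):

    import hou

    dopnet = hou.node('/obj/torus_sim')
    rbd = dopnet.createNode('rbdpackedobject', 'fractured_torus')
    bullet = dopnet.createNode('bulletrbdsolver', 'bullet_solver')
    force = dopnet.createNode('popforce', 'magic_force')
    multi = dopnet.createNode('multisolver', 'multi_solver')

    multi.setInput(0, rbd)     # object
    multi.setInput(1, bullet)  # rigid body solver
    multi.setInput(2, force)   # POP force for the "magical" drift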

The group we created on the fracture points is called "static", and now we need an attribute create node to tell the software that, once a piece falls outside the bounding region, it becomes active and can take on the magical effect created in the DOP net.

I had an issue when assigning the active and static attributes, because I typed the name "active" in capitals like the static one, and only when I redid this part of the exercise did I realise that the name "active" really has to be lowercase. In the end we applied this effect to the Tommy test geometry.
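The same active/static switch could be sketched with an Attribute Wrangle instead of the Attribute Create node we used – the node path is illustrative, and note the lowercase "active", which was exactly my bug:

    import hou

    geo = hou.node('/obj/disintegrate')
    wrangle = geo.createNode('attribwrangle', 'set_active')
    # Pieces inside the keyframed "static" group stay put; everything
    # else is released to the Bullet solver.
    wrangle.parm('snippet').set(
        'i@active = inpointgroup(0, "static", @ptnum) ? 0 : 1;')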

This is what the nodes looked like:

geo nodes

Flipbook:

flipbook

Smoke with particles simulation

The next task was to make a smoke simulation with particles and have the particles react to the smoke's movement. The first step was to create a circle, scatter points from it, then add a pyro source node and a volume rasterize attributes node to assign density and temperature, plus an attribute noise on the density and a pyro solver to get smoke, like we did in the previous session.

After adding lights and tweaking the smoke simulation, we can convert it to VDB and cache it to disk with a file cache node. Once this is done we can take care of the particles with a DOP network, scattering the points from a circle again. This is what the smoke simulation nodes looked like:

geo nodes
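A small sketch of the VDB conversion and disk cache step in Houdini Python (node and file paths are illustrative, and the file parameter name can vary between File Cache versions):

    import hou

    geo = hou.node('/obj/smoke_sim')
    vdb = geo.createNode('convertvdb', 'to_vdb')
    cache = geo.createNode('filecache', 'smoke_cache')
    cache.setFirstInput(vdb)
    cache.parm('file').set('$HIP/geo/smoke.$F4.vdb')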

This is what the DOP network nodes looked like:

DOP network nodes

This was the final result:

Big Destruction

The final task of this session was more complex, as it involved imported geometry and animation. Mehdi prepared a model of a building and a meteor simulation so that we could do a big destruction of the building with the meteor. The first step was to import the assets, clean them up and check that the geometry was prepared for destruction, as every shape needs to be closed and can't have artifacts (errors in the geo).

Before all that, we need to check where the destruction is going to happen and then clean the geometry at least in those parts: first the floors of the different storeys of the building, then the entrances, then the walls and then the columns.

Nodes after cleaning geo:

building geo cleaned nodes

Now we need to clean the geo of the meteor. Its geometry was too heavy for a rigid body simulation, so we need to reduce it: we unpack it, convert it to VDB and, with a time shift node, remove the time dependency so the proxy is calculated only on the first frame. This way the animation is much lighter and the simulation runs faster. After cleaning the geo, this is what the nodes looked like:

meteor cleanup
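The time-dependency trick can be sketched like this in Houdini Python – a Time Shift locked to frame 1 so the heavy proxy is only cooked once (the path is illustrative):

    import hou

    meteor = hou.node('/obj/meteor')
    freeze = meteor.createNode('timeshift', 'freeze_frame1')
    freeze.parm('frame').deleteAllKeyframes()  # drop the default frame expression
    freeze.parm('frame').set(1)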

Now we need to destroy the building. For this we first tried with 5 rectangular shapes, with the goal of destroying them all but with different fractures. I had some issues with it when using the for-each piece loop, as it is much more complex and heavy for my machine. In the end the attributes disappeared and I need to look for a solution, but this is what it looked like:

rbd setup nodes
fractured building

Interview Prep with Dom

Dom sent me a message saying that some people had been asking for interview prep and asked if I wanted to set one up too – of course! It is always important to be aware of what an employer would see when looking at my CV and showreel. So I updated my showreel and CV; with this virus situation I had not updated them for 2 months, as I am not even in the UK to apply for jobs. I added the particle shading Houdini session and the best works from term 1, like the full-body walk cycle with a prop constraint and the body mechanics shot, as well as some works I did for my portfolio to enter this master's degree.

I am aware that the renders could be much better and that I should have at least a matchmove shot, but none of them look as good as I want them to; I am improving those shots to include in the next update.

So at first we did a mock interview, and Dom gave me feedback on what to say and what not to say, and I took notes. The main don'ts were:

  • Don’t mention that you have no experience
  • Don’t fill in the gaps
  • Don’t mention tutorials
  • Don’t repeat yourself

The main do’s were:

  • Turn negative to positive
  • Be natural and authentic
  • Talk about the rigs
  • Talk about every skill

These were the main notes I took from the first interview attempt. On the second try I really improved: at first I was really awkward and a bit lost because I didn't know what to say, but then I relaxed, as I knew the answers to all the questions, was able to turn the negatives into positives, and talked a lot about the rigging and everything I did for each project on my showreel.

In the end, the things Dom advised me to focus on were organising my showreel better, as I have a Houdini project, then Maya, then textured work – so it should have a kind of "narrative", an evolving thread separated by software or technique – and, of course, adding some of the matchmove and rotomation shots we have been making in the past sessions. Dom also suggested taking the tracked shots and having some of my animations with no background play out in those previously tracked shots.

The main questions an employer is going to ask are: "Who are you?" and "Why should we hire you?" These are always hard to answer, as you really have to look at yourself like an employer would: would you hire yourself? If so, why? Why are you better than the other candidate? This is where young job seekers need to put in their effort.

Vertex 2021

I have to be honest, I had never heard about this event before I found out UAL had got us tickets, and I was pretty excited about it. Even though it was an online event, it is amazing to see the people who do what I want to do for my career – it's an inspiration.

The sessions I was really excited about were:

  • 9am – Jonathan Cheetham: Delving Deep into Digital Humans
  • 11am – Lois Van Baarle: How to Market Yourself on Social Media
  • 3pm – Alvaro Martinez: Indie Storytelling and World Building in Realtime
  • 5pm – Dylan Sisson: Making Art with Soul
  • 10:30pm – Finnian MacManus: Terraform VR Demo

The sessions that really blew my mind were the ones from Alvaro Martinez and Finnian MacManus, surprisingly, as I was expecting to enjoy the one from Dylan Sisson of Pixar more.

Alvaro Martinez was amazing, as he was so fast and made such cool things in Houdini. I was impressed by how easily he created an amazing, super simple setup using HeightField nodes, which he used to build a terrain from just a plane and four torus shapes; he created a sci-fi, Star Wars-like moon look with no complex shapes and it looked amazing.

Finnian MacManus was something I was not expecting to watch, as VR is not really something I think about much – although it is super cool, I won't have much chance to use it. But I ended up watching his session and it was also super cool, as he made a "sketch" of an idea he had in VR, literally just grabbing shapes like a cabinet and modelling a stage with them. At first I really wasn't getting what he was doing, but as he moved forward it took on an amazing look, built from a random shape he just copy-pasted all over the place. VR is a really cool, evolving technology that allows artists to be inside their own projects and gives them a sort of infinite reality where you won't ever run out of (physical) room space.

Dylan Sisson was also cool, but I had higher expectations for his session; maybe that is what ruined it for me, as the two sessions I just described surprised me because I had no expectations, whereas with Pixar I was thrilled. Their latest movie, Soul, was amazing, and Dylan Sisson talked about things I did not know at all, like the fact that Joe's sweater was not a mesh: it was literally 280,000 strands of fabric, which is impressive. I really enjoyed it, but I guess I was hoping for something more like a masterclass; this was more about showing what RenderMan can do and its new features for the next version. As Soul was made with it, we can really see its endless possibilities.

XPU rendering was something I really did not know about, or even know was possible: it is the combination of CPU and GPU rendering within the same render engine – RenderMan! He also covered Lama, the layered shading system by ILM, and Stylized Looks, imagination-based rendering where we can make renders look like drawings – literally like drawings.

It was an amazing event; I learned so much in such a short time from such cool professionals, about software and everything in this industry. The only thing I'd change is the format of the event – TO BE IN PERSON. But that was not possible at the time, and it was interesting either way.

Collab Project – Week 5

This week was the beginning of the animation, as the master file was completed and each member of the group could start blocking the main poses. For me it is best to begin with the lip sync animation; I am aware that I could use animation layers for that, but when previewing the animation it is always more interesting to have the character at least following the sound clip.

For this next stage I also went looking for ways to constrain animation for a period of time, as my character will throw a hat to the audience and grab a microphone when he enters. I found out that when parent constraining an object, if both objects are keyed it is possible to animate the "Blend Parent" attribute, which keys the constraint on and off. But on another project, where a character has to grab a bottle and change the constrained hand, I asked Luke if there was a better way to do it, as I was only able to parent the object to one of the hands – and there is a tool! animSnap, which does exactly the same but without having to parent the object.
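A minimal sketch of the Blend Parent trick in Maya Python (object names are placeholders – Maya only creates the blendParent1 attribute once the constrained channels themselves are keyed):

    import maya.cmds as cmds

    cmds.parentConstraint('hand_ctrl', 'hat_geo', maintainOffset=True)
    cmds.setKeyframe('hat_geo', attribute='translate')  # forces the pair blend

    # 1 = hat follows the hand, 0 = hat is released.
    cmds.setKeyframe('hat_geo', attribute='blendParent1', time=1, value=1)
    cmds.setKeyframe('hat_geo', attribute='blendParent1', time=120, value=0)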

So I began my animation with the lip sync and a short walk to the middle of the stage, so that Michael Jackson can settle into his pose before it all starts. While Michael Jackson is walking to his position, a guitar player is doing his solo and the first camera shot is from the audience; right after that it cuts to a camera behind the character, which then makes a tilt shot from the feet to the head, and we see the view from the artist's perspective.

We have already decided on the concert logo and will insert it in each act with different effects each time.

Lip Sync animation: