Indie Film Production – Week 4

This week I tried to focus more on lighting, as I am the only one doing it and really need to understand it better. I had one more session with KK, this time focused on render layers in Maya. I had used them before with little success, so KK asked me to prepare a Maya scene with some render layers so we could go through them and discuss the subject more deeply.

Lighting

In the Maya scene I created a floor, a grey sphere, a chrome sphere and a skydome, assigned the materials to the spheres and set up the render layers.

The rest of the layers contained the same setup, so we began by checking that everything was correct. KK then drove the session on my PC through Teams, something I'd never done before, but it is pretty handy, as someone can show you everything on your own computer.

My render layers were okay, but we made a lot of changes: every time we think of a light or a shadow we want, another render layer is needed, so it can get out of hand pretty quickly. One of the main focuses was therefore "housekeeping", keeping everything organized and intuitive. KK showed me how to add content from the Outliner to the layers through expressions, with a simple * in front of an object's name; this is useful when dealing with a lot of content.
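The wildcard idea behind those collection expressions can be sketched in plain Python with `fnmatch`, which matches names in the same style; the outliner names below are invented for illustration and are not from the actual scene:

```python
from fnmatch import fnmatch

# Hypothetical outliner contents -- invented names, not the real scene.
outliner = ["floor_geo", "grey_sphere", "chrome_sphere", "skydome_light"]

def collect(pattern, nodes):
    """Return the nodes whose names match a wildcard expression,
    the way a render-layer collection expression gathers objects."""
    return [n for n in nodes if fnmatch(n, pattern)]

print(collect("*sphere", outliner))  # both spheres
print(collect("*", outliner))        # everything in the outliner
```

With a naming convention in place, one short pattern keeps a layer's membership up to date as new objects are added, which is exactly the housekeeping benefit described above.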

When rendering the spheres we found a few issues with the HDRI, as it was not being disabled by the render layer, so both of them were on display. For some reason Maya was using the master layer when we refreshed the render. KK is still going to look into the issue with the render layers, as they were not working the way we want them to.

Matchmove

This week on matchmove I had an issue tracking a second shot from the preparation videos: I tracked it, but the points were not creating any depth in the footage. This was because the camera that recorded the footage did not move enough for the software to recognise any parallax.

calc from scratch result after tracking points
tracked points

In my session with Dom he showed me a way to solve this: creating two planes perpendicular to each other, positioned at the limits seen in the footage (the end of the road, the beginning of the mountain), and projecting the points onto the planes. This way the software can recognise the depth and position of the points.
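The plane trick boils down to basic ray-plane intersection: each tracked 2D point defines a ray from the camera, and intersecting that ray with a hand-placed plane gives the point a depth the solver could not recover on its own. A minimal sketch, with made-up illustration numbers rather than values from the actual shot:

```python
# Sketch of projecting a camera ray onto a plane -- the idea behind
# giving tracked points depth when the footage has no parallax.

def ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane; returns the 3D hit point."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(dot) < 1e-9:
        return None  # ray parallel to the plane, no usable depth
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / dot
    return [o + t * d for o, d in zip(origin, direction)]

# Camera at the origin looking down +Z; the plane stands in for the
# "end of the road" boundary, placed at z = 10 facing the camera.
hit = ray_plane([0, 0, 0], [0, 0, 1], [0, 0, 10], [0, 0, -1])
print(hit)  # [0.0, 0.0, 10.0]
```

Every projected point lands somewhere on one of the two planes, so the point cloud gains a plausible spatial layout even though the camera itself never revealed one.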

position of the planes
projected points
calc from scratch result after projecting points

After this, when fixing the lens, the results were as expected.

Both results were calculated with brute force, and everything was exported so I could jump to Maya and finish this shot.

Houdini Session – Week 7

In this week's session the task was to continue destroying the building from the previous session, using the meteor animation Mehdi prepared for us. I had a few issues with this task and keep having them; each time I fix a problem with Mehdi in a 1-to-1, I face another issue not long after. Houdini is complex, and these sessions are great for getting a better sense of it; there is problem-solving every time.

We began by assigning active points and static points: the active points are the ones affected by the destruction, and the static points are the ones that stay in place. For this we created a simple geo of 5 spheres, used a motion trail node to see precisely the path of the meteor, grouped it, and with an assemble node created the packed geometry.

For-each nodes are used to run a number of loops inside a simulation, and the time it takes to complete the operation also depends on how many cores your CPU has available. Not all nodes are able to run in this mode, but the ones that can are sped up by it.
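The reason for-each loops scale with core count is that each fracture piece is an independent work item, so pieces can be handed to workers in parallel. A rough sketch of that work-splitting pattern in plain Python (the per-piece workload here is invented, not Houdini code):

```python
# Each "piece" is processed independently, so a pool of workers can
# run several pieces at once -- the same idea behind compiled
# for-each blocks using multiple CPU cores.
from concurrent.futures import ThreadPoolExecutor

def process_piece(piece_id):
    # Stand-in for real per-piece work (fracturing, attribute
    # transfer, etc.) -- invented for illustration.
    return piece_id * piece_id

pieces = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_piece, pieces))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The key property is that no piece depends on another piece's result; as soon as one loop iteration reads from a neighbour, this parallel scheme breaks down, which is one reason not every node network can be compiled.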

After running the compile block with the for-each nodes we got some artifacts. To solve this we set up an expression to change the method the operation runs with: we create a switch node and write an expression on it.

to fracture geo

When we are happy with the setup and there are no more artifacts, we create a file cache node and save it as part01. This also makes the simulation run faster, as it now reads from disk instead of recalculating all the nodes. We repeat this file cache process for the concrete nodes (the fracture of the building we did in the previous session) and save it as part02.
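The speed-up from a file cache comes from a compute-once, read-many pattern. A toy sketch of the idea in plain Python (nothing here is Houdini API; the cache path and workload are invented):

```python
# Minimal disk-cache sketch: compute a result once, write it to disk,
# and on later runs read it back instead of recomputing upstream work.
import os
import pickle
import tempfile

def cached(path, compute):
    """Return the result stored at `path`, computing and saving it
    only if the cache file does not exist yet."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute()
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

cache = os.path.join(tempfile.gettempdir(), "part01_demo.pkl")
if os.path.exists(cache):
    os.remove(cache)  # start clean for the demo

first = cached(cache, lambda: sum(range(1000)))  # computed and saved
second = cached(cache, lambda: -1)               # read from disk
print(first == second)  # the second call never recomputed anything
```

This is why downstream tweaks become cheap after caching: everything upstream of the cache is frozen on disk and only re-cooks if the cache is deliberately deleted and rebuilt.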

For the glass setup, we create a for-each block and copy the group made for the fractured pieces, so we know which glass needs to break, then add an RBD material fracture and run the compile block. We had some artifacts, and I had to set another meeting with Mehdi to solve this, as I was having more issues than Mehdi had in the tutorial. We fixed it: the problem was that some glass panes were duplicated and the fracture was creating artifacts. After this was solved and we were happy with the result, we repeat the file cache process and save it as part03. The same is done as part04 and part05 for the bricks and aircon.

Finally we create a null, a vdbfrompolygons, a scatter and a voronoifracture node, and the static points cache is also created, saved as part01_static. This is what the complete setup nodes look like:

When this process is complete we merge all the geo we just set up, create packed geo and an rbdbulletsolver node, which will be used later. First we need to create another for-each and compile block with a polyreduce node and a switch with an expression to make the simulation faster. Both of these geos need an assemble node to be made into packed geo, and then we can run the simulation. The result is that the whole building just destroys itself, as all we have set up so far is destruction.

Next, we copy the active group setup, set another expression with an attribute wrangle node, and run the simulation. This is how it looks:

To create an explosion with some debris projected into the air, we can use the pyro trail path nodes, which create a small animation for the debris to fall; all of this can be adjusted in the parameter window.

This needs to be placed in the right position and at the right frame; in this case we set this trail path to 6 frames and added another one at the end of the building, where the meteor comes out. This is the result of the first debris explosion without the fire simulation:

This is how the geo RBD setup nodes looked in the end:

Indie Film Production – Week 3

This week was about continuing last week's assignments for matchmove and lighting, with a catch-up session from KK and Dom to see how everything was going and to ask some questions.

Matchmove

After managing to finish the shot in 3DE before my session with Dom, we began by checking that everything was correct, then jumped to Maya to set up the scene. We imported the MEL script and got the scene in the right place, centered the road points on the grid and scaled it up, ready to build the road.

In wireframe mode, viewing through the camera we got from 3DE, I created a plane, adjusted its size to the road in the footage, and repeated this process 3 more times. In the end I had 3 separate pieces of road, so I bridged them and adjusted their vertices to fit the road in the footage. This is how it looked in the Maya viewport:

Dom showed me a trick for when we have a drone shot: we went back to the 3DE scene, tracked a single point on the car and exported "2.5D points to Maya", which created a MEL script that we import into Maya. In the Maya scene we created a locator and, as a test, constrained a sphere to the locator; this made the sphere follow the car along the road.

After this I imported the car given to us for this exercise into the scene and dragged it over the locator, so the car now follows the road. I still had to make a few keyframe adjustments for the car to actually follow the road, and this is the final playblast of the tracked shot:

Lighting

For the lighting session I prepared the 2 Maya scenes with the chrome spheres. This week KK showed me how to prepare the HDRI (high value and low value) in Nuke, and how the different HDRIs can be exported and used in different skydomes to have more control over those values. In Nuke, with both the daylight and night-light HDRI images, a clamp node sets every value above 1 to 1, giving us the low value, and with a merge node (set to minus) we can get the high value. This process is the same for both images but, of course, needs some tweaking of the values, and usually we will need multiple nodes to get to the desired result. In the end we need to export the different values. I had trouble finding the high value, and that is something I will have to ask KK about next session.
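The clamp/minus split described above is simple per-pixel math: the clamp keeps everything at or below 1 (the low pass), and subtracting that from the original leaves only the energy above 1 (the high pass). A rough sketch with invented single-channel pixel samples:

```python
# Invented HDR pixel samples -- anything above 1.0 is "hot" light.
pixels = [0.2, 0.9, 1.0, 3.5, 12.0]

low = [min(v, 1.0) for v in pixels]          # the clamp node's job
high = [v - l for v, l in zip(pixels, low)]  # merge set to minus

print(low)   # [0.2, 0.9, 1.0, 1.0, 1.0]
print(high)  # [0.0, 0.0, 0.0, 2.5, 11.0]

# Adding the two passes back together recovers the original HDRI,
# which is what lets them drive two separate skydomes safely.
assert all(abs((l + h) - v) < 1e-9 for v, l, h in zip(pixels, low, high))
```

Splitting the image this way means the hot sources (sun, practicals) and the soft ambient dome can be balanced independently at render time.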

In Maya, KK showed me how to set up the render layers and told me that, in production, it is best to use only one Maya scene with every light in it, and to use the render layers to get the different lights for that shot. After the session I set up everything KK showed me, and next session we will continue setting up the lights.

Indie Film Production – Week 2

As we got the test shots to track and scenes to light, this week we had to begin working on them. I set a meeting with Dom and KK even before working on the shots, as I was sure I would have doubts, especially with lighting.

Matchmove

I began with matchmove and tracked a shot, but only the road, and I was getting some errors, so Dom told me to track the rest of the scene; after that it was much easier to calculate results. In the meeting, Dom also explained a few steps I should follow when calculating the lens distortion, as well as how to tweak the deviation browser for a better result.

Before the deviation browser, this was my result from "calc from scratch":

This was the “calc from scratch” result after turning off some points from the camera point group:

So I set another meeting with Dom, as I was not getting the results I expected. But the session is only next week, and I wanted to go through the process after 3DE, in Maya, which is the part I have the most doubts about. So I skipped the deviation browser step, as I was getting better results without touching it, began tweaking the lens and followed the steps Dom gave me; again, the results were not what I expected. I made a lot of tests and none of them worked. These were the results I was getting:

For lens distortion, when we don't know the lens that recorded the footage, we need to open the parameter adjustment window and calculate distortion and focal length in adaptive mode, then calculate curvature x and y and quartic distortion with brute force. Finally, when we are happy with the results, we add all the parameters and calculate them all with brute force. These were the steps Dom gave me; when I asked for help he told me to calculate only with brute force, and these were the results:

distortion and focal length calculation results

When calculating focal length and distortion, the expected result is a curved plane; when calculating quartic distortion, focal length and distortion together, the expected result is a cube.
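As a hedged sketch of what these parameters represent (not 3DE's exact solver model), radial lens distortion can be written as a polynomial in the squared radius: a degree-2 term for the main distortion and a degree-4 term for the quartic distortion. The coefficients below are invented, not values from the actual shot:

```python
# Toy radial distortion model: points are pushed outward by an
# amount that grows with their distance from the image centre.

def distort(x, y, d2=0.1, d4=0.01):
    """Apply degree-2 and degree-4 radial terms to a point (x, y)
    given in normalised image coordinates. Coefficients are made up."""
    r2 = x * x + y * y
    scale = 1.0 + d2 * r2 + d4 * r2 * r2
    return x * scale, y * scale

# A point near the frame edge moves further than one near the centre,
# which is why straight lines bow outward toward the borders.
cx, cy = distort(0.1, 0.0)   # near the centre
ex, ey = distort(1.0, 0.0)   # at the edge
print(ex - 1.0 > cx - 0.1)   # edge point is displaced more: True
```

This is also why the distortion grids look curved: the solver is finding the polynomial coefficients that best bend the CG grid to match the plate.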

After this we export the Maya .mel file, run warp4 to export the distorted footage, and we are ready to jump to Maya. With Dom's help I was able to get everything ready before our session.

Lighting

My session with KK was about better understanding the concepts involved in the lighting process, like the environment we want to replicate and what aspects to consider. After KK explained everything, he asked me to make a study with bullet points for the car and the café exterior, both in daylight (overcast) and at night, with reference images, and to gather HDRIs for the study.

Street (café)

Characteristics of overcast daylight:

  • No direct light source
  • Shadows are less strong
  • Can’t see much inside the cafe
  • Colors are brighter, as there is a "general" lighting from cloud reflection
  • Strong reflection of the sky on windows

Characteristics of night light:

  • Café lights on
  • Lights inside of café are visible
  • Light is warmer due to the café's red color
  • Darker shadows
  • Darker environment; the only light source is artificial and easily changed
  • Light reflected on the floor if wet

Car

Characteristics of overcast daylight:

  • Strong reflections of the surroundings on the car
  • Strong shadows under the car, not around it
  • No light on car lights
  • Unable to see inside due to reflections
  • Bright rims
  • Bright car color

Characteristics of night light:

  • Strong reflections of city lights
  • Able to see inside the car (if there is light inside)
  • Color of exterior lights is noticeable, as they reflect artificial light onto parts of the car
  • Headlights turned on, as well as the plate light
  • Street lights mirrored on the car

HDRI examples – daylight (overcast)

HDRI examples – night light

I asked KK if we could do a test next session, so he asked me to prepare 2 Maya scenes with 2 spheres (chrome and Lambert materials) as well.

Solo Project – Week 2

This week I sketched the storyboard and built up the 2 master scene files for the café and the store. I looked up some free environments online, but none of them were good, so I built them up with assets I found on the TurboSquid and CGTrader websites. The storyboard was drawn by hand, as I left my pen tablet in London; the drawing is really just a sketch with simple stickman figures, to get an idea of what shots I will need.

Storyboard:

After the storyboard I began creating the scenes. The store took the most time, as I had to set it all up: I modeled the wall with the showcase and the door, as well as the door handle. In the end I added a few lights as a test; I might make a few changes later.

Interior of store:

light set up

For the exterior I made a basic setup and added a street lamp, to be more aware of shadows and to build up the environment, using the same HDRI image for both environments.

For the café, I used the corner street environment available on the resource share and added some assets to create a sort of "terrace" for the café on the street. I deleted the windows from the ground floor of the main building so I could add a material that looks like window glass, with some reflection. The only light source in this scene is the skydome light with the HDRI. This is how the café scene looks:

When testing the materials and light with an Arnold render, I noticed that my character, Amanda, has some sort of light attached to her, so I'm still trying to figure out how to solve that.

Indie Film Production – Week 1

The collaborative project this term was, as I said before, self-directed study, so we had to either find a project within the university or outside it, or ask Luke for a role in an indie film production. As I did not have any ideas or projects I would like to join, I asked for a role in the production. At first I signed up for animation and matchmove, but Luke asked me if I could do some lighting shots, so I set up a meeting with KK to understand lighting a bit better.

I have to admit I'm terrible at it, but I hope to get a bit better. I should know more about this subject even just to light my own shots, instead of struggling with it each time; it is an important thing to consider if we want to get some emotion out of a shot.

I will not be doing animation; instead I will be doing lighting and matchmove. Before doing the actual shots, we were given some exercises as preparation for our roles. The lighting task will be turning some daylight shots into night shots, as well as lighting a crash scene (buildings, car, street), also in both daylight and night light.

Crash Scene:

For the matchmove role, we have a few shots from Pexels, with streets and cars from different angles, which we need to track.

Solo Project – Week 1

For this next term we have to make at least 2 projects: one solo and one collaborative. This is the part where the masters really starts to change shape into what we want professionally, so the next step of our education is self-directed study, to be able to work on our own and plan it properly. Of course, at a stage where I feel unprepared, being told what to do is more comfortable, but that is the point: to take us out of our comfort zone and learn from it.

My proposal for this project was an animated short with a story of my own; although that seemed overambitious, it never hurts to try. So I began writing a story, and another, and realised I had huge difficulty finishing them: what do I want to tell? What is the goal of all that? But I eventually wrote one, too big in fact. I set up a 1-to-1 with Luke and realised that I could save the story I wrote for my FMP, and for this term's solo project I chose to do a short story, but this time really short, not trying to explain anything, just using a performance to tell a story. This way I believe I can improve the body language of my animations.

This story is about greed and fake friendships. On her way to meet her friend for tea, a girl sees a handbag and immediately falls in love with it; she walks into the shop, asks for the price and leaves, as she has no money to afford it. When her friend shows up, she has the exact same bag and asks if she likes it. Jealous of her friend, the girl sits and drinks her tea, until a ninja comes and kidnaps the friend; surprised, the girl watches her friend being taken, looks at the bag, and keeps it for herself.

When searching for rigs I found a really cool website that had a lot of them, some free! So the 3 characters I will use are from that website: Amanda, Merry and Ninja.

Amanda rig
Ninja rig
Merry rig

The environments I need are a café or a bar, the hall of a store, and the street where the store is. I'm still looking for good environments, but I can also create a simple one with some assets and set the scene.

Two Character Performance Animation

The second performance animation we had to make had to feature two characters, at least one of whom had to speak. So the first task was to find an audio clip and have it approved by Luke, then look for rigs in the asset library, record reference footage twice and start working.

The first 3 audio clips I chose were from Final Space (a Netflix series), The Hustle (a movie) and Peaky Blinders. The first couldn't be used as it is from an animation, and the second was not very good (I admit), so it was the Peaky Blinders clip. It is a dialogue where Polly is speaking to Grace at the moment she finds out Grace betrayed Tommy. It is a slow dialogue, but I believe it will be good practice in making characters move without any major movements. I do have some trouble with timing, and this seemed like a good exercise for that. I did go looking for more audio as I regretted choosing this one, but I ended up keeping it.

The characters I chose were Janine and Lou.

I had Janine as Polly and Lou as Grace, and began recording the footage. The first take was too bad, so I re-recorded it and the result was much better.

I chose the environment "Victorian Interior", found a table, glasses and a fancy bottle on TurboSquid, and built up the scene. I began by animating phonemes for each character and then animated the body according to my reference footage, Polly first and then Grace.

These two characters have to pick up objects and change hands when grabbing them, and that, I believe, is where I lost and keep losing time. I was using the animSnap tool to constrain the objects to the hands, but they would go crazy and the objects would just appear on the other side of the scene. My solution was to block everything as if they had the objects in their hands and deal with it at the end. I ended up only parenting one cup to Grace's hand, as she wouldn't have to pick up anything else with that hand. This is a problem I will have to learn how to fix; I need to really understand constraints and everything that comes with them.

This is the final playblast of the blocking:

I still have to figure out how to light this scene properly. This environment came with a ton of lights, but I would like to improve on it and be the one to light my own work, as lighting is such an important part of emotion and realism.

After the blocking I tried to make a shot sequence with the camera sequencer and this was the result:

Lighting Session 2 – KK Yeung

This second session of lighting used Katana, a software by Foundry, and in my opinion it was better for understanding the concepts KK was teaching us. We used frames from the movie Fight Club (1999), by David Fincher, and lit a CG scene with the same lights as the frames from the movie. Nuke and Katana are very similar, but I think Katana is a bit more intuitive and easier to understand.

KK prepared a file for us and gave an intro at the beginning of the session, which showed a proper workflow for working with multiple shots, like organizing the shots by angle and re-using light rigs. The angle of the shots and the amount of light they get can sometimes change, but that can be tweaked in Katana by re-using the same light rig with a small variation for a different shot.

One of the things KK mentioned was that, when looking for an HDRI, an important aspect to be aware of is the position of the sun, as the HDRI can be used as a light. On the website HDRI Haven they preview what the light looks like on different materials, which is extremely helpful.

The first step of this exercise was to set up the graph state variables (GSV), change the name to "shot" and add a variable for each shot; in this case we created 5 variables named "shot01, shot02... etc". After this we go to the node editor, and in the lighting node group we can see there is a GafferThree node. This is the master light rig, which will then be tweaked and changed for each shot or camera angle, with overrides applied accordingly.

For this we create a Variable Switch node and add it below the GafferThree node. To add more inputs to this node we just hit the arrow until we have the desired number; in this case we created 4 inputs and connected the GafferThree node to the 4 inputs we just made. To see the parameters of a node, we either hit E when the node is selected, or hit the square on its right side until it turns green. In the right-side panel we name the variable "shot", and in the patterns we set the shot groups for each input: the first will be shot01 and shot03, the second shot02 and shot04, and the last one shot05.
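The routing the Variable Switch performs can be sketched in plain Python: each input carries a pattern of shot names, and the active "shot" graph state variable picks the input whose pattern matches. The mapping below mirrors the shot groups from the session, but `pick_input` is a made-up helper for illustration, not Katana API:

```python
# Input index -> shot names routed through that input, mirroring the
# pattern fields on the Variable Switch node described above.
patterns = {
    0: ["shot01", "shot03"],
    1: ["shot02", "shot04"],
    2: ["shot05"],
}

def pick_input(shot, patterns):
    """Return the switch input whose pattern contains `shot`."""
    for index, shots in patterns.items():
        if shot in shots:
            return index
    return 0  # fall back to the master light rig

print(pick_input("shot03", patterns))  # 0 -- same rig as shot01
print(pick_input("shot05", patterns))  # 2 -- its own rig variation
```

Grouping shots by camera angle this way is what lets one master rig serve five shots with only small per-angle overrides.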

Before changing anything else, we made a few renders to be sure how they work with the setup KK made for us: a render of just the characters, and one with everything together (background and characters). These can be seen in the catalog tab, which lists the renders we have made.

Back in the node graph, on input one we create a new GafferThree node and connect it to the corresponding connector. In the parameters tab we have no lights, but if we hit "show incoming scene" we can see what the GafferThree node above contains and change it just for this input. We deactivated the fill light so we could focus only on the env light; to edit it, right click and hit "adopt for editing". With this, new parameters show up in the panel below, in the material tab.

To activate live renders for the lights we are using, we go to the scene graph, which works like Maya's Outliner and shows everything the project contains. We select the light group and check the live render box on the env light, so it is rendered along with the image. In the object tab we can apply rotation to this light and match it to the right setting, according to the movie frame we are trying to replicate, while live rendering. Hitting "S" switches between the CG render and the original frame.

This process was only for shot 1, so now we have to check whether shot 3 (which is in the same camera angle group) is correct with this setting. As it isn't, we add another input to the Variable Switch node, connect the shot 1 GafferThree node to this last input and assign shot 3 to it. We then add another GafferThree node on input 4 and, with a live render, tweak its rotation to match the original frame. This is how it looked after tweaking:

Once the light matches the original frames, we can render the prepared plates KK made for us, where only the background of the original frame remains without the actors, and check that the light matches. The exercise for this session is to match every character with the original plates, so we must do this for all the frames.

When comparing the character render with the prepared plate, we can select "toggle overlay/underlay controls", middle-mouse drag the background image to the underlay spot, and we have the final image for this shot.

Shot 1:

Shot 3:

KK told us to do the rest of the shots with what he had just taught us.

Shot 2:

Shot 4:

Shot 5:

The only thing I was unable to figure out was how to export the rendered images; every time I rendered to disk nothing appeared, so I took screenshots of the final frames. I am aware that some things are wrong, and I expect to review them with KK in our 1-to-1 next week.

Houdini Session – Week 6

This week's session was about experimenting with examples of particle effects mixed with disintegration: like we did with the particles, but this time using the Voronoi fracture, so the destroyed pieces flow and disappear like the particle simulation, giving a more magical effect.

Disintegration test

The first try was with a torus shape, using the scatter and Voronoi fracture process we have been doing, but this time with a DOP network. Inside the DOP network we used a POP force, an RBD packed object and a bullet RBD solver connected to a multisolver node, which is useful when working with multiple different solvers, as in this example. But this only allowed us to disintegrate the torus all at once, and what we want is for the effect to happen from right to left, so we had to create a group, assign it to points and check the "keep in bounding regions" option. This way a bounding box is created, which can be keyed to give the result we want.

This group we just created is called static, and now we need an attribute create node to tell the software that, when pieces of the fracture leave the bounding box region, they can have the magical effect created in the DOP net.

I had an issue when assigning the active and static parameters, because I wrote the name "active" in capitals like "static", and only when I redid this part of the exercise did I realise that the name "active" really has to be lowercase. In the end we applied this effect to the test geometry Tommy.

This is what the nodes looked like:

geo nodes

Flipbook:

flipbook

Smoke with particles simulation

The next task was to make a smoke simulation with particles that react to the smoke's movement. The first step was to create a circle, scatter points from it, and add a pyro source node and a volume rasterize attribute node to assign density and temperature, plus attribute noise on the density and a pyro solver to get smoke, like we did in the previous session.

After adding lights and tweaking the smoke simulation, we can convert it to VDB and cache it to disk with a file cache node. When this is done we can take care of the particles with a DOP network, scattering the points from a circle again. This is what the smoke simulation nodes looked like:

geo nodes

This is what the DOP network nodes looked like:

DOP network nodes

This was the final result:

Big Destruction

The final task of this session was more complex, as it involved imported geometry and animation: Mehdi prepared a model of a building and a meteor simulation so that we could do a big destruction of the building with the meteor. The first step was to import the assets, clean them, and check whether the geometry was prepared for destruction, as every shape needs to be closed and can't have artifacts (errors in the geo).

Before all that, we need to check where the destruction is going to happen, and then clean the geometry at least in those parts: first the floors of the building's different storeys, then the entrances, then the walls and then the columns.

Nodes after cleaning geo:

building geo cleaned nodes

Now we need to clean the geo of the meteor. Its geometry was too heavy for a rigid body simulation, so we need to reduce it: we unpack it, convert it to VDB and, with a time shift node, remove the time dependency so the proxy is calculated only on the first frame. This way the animation is much lighter and the simulation runs faster. After cleaning the geo, this is what the nodes looked like:

meteor cleanup

Now we need to destroy the building. For this we first tried with 5 rectangular shapes, with the goal of destroying them all but with different fractures. I had some issues when using the for-each piece node, as it is much more complex and heavy for my machine. In the end the attributes disappeared and I need to search for a solution, but this is what they looked like:

rbd setup nodes
fractured building