Matchmove Session!!

We had the amazing opportunity to learn matchmove, and a bit of rotomation, with a pro from the industry: Dom Maldlow, a 3D Generalist.

I was pretty excited and a bit nervous; I was afraid I would get lost, and well… I guess that’s normal. It went well, and yes, I got a bit lost in the middle of the session, but Dom helped me so I didn’t fall behind. The session began with a short presentation by Dom about his path to where he is now professionally. It is inspiring to learn how people who have the job you want got there; it makes things feel a bit more possible in our minds. After the presentation, we jumped into 3DEqualizer, a software that is usually learned on the job and that looked very scary the first time I saw it.

I admit I was expecting something a bit more like Maya. It’s clearly a different kind of tool: the interface is not made to be pretty or appealing, it’s made to work, and the tools are great. Of course, having this session online makes it harder for everyone, teacher and students alike. The teacher can’t really see our screens when we have a problem, so we students have to find a way to explain what’s up even if we don’t know what went wrong.

The first step was to import the footage provided by Dom and track the camera so it could be “read” by the software. After importing the footage, we create a new camera, set the gamma to 1.000 and the softclip to 2.000, and export the buffer compression file; only then can we start placing the tracking points. It is important to find points at different depths in the footage and on high-contrast spots, and to be aware that the points might drift as we track them. If that happens we have to put them back in place, and in some situations we have to track each frame individually.

Some points might need to be ended and then restarted if, for example, a person walks in front of them or the camera moves and the point is out of frame for a while. The first place where we placed points was the graffiti, its wall and the floor; then, in the back, on the gate; then the wall near the gate; and finally the buildings in the background. It is important to find points at all the different depths of the footage so the software can recognize the parallax.
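To see why points at several depths matter, here is a tiny illustrative sketch (plain Python, nothing to do with 3DEqualizer itself; the focal length, camera move, and distances are made-up numbers). When the camera translates sideways, a point’s image shift is roughly focal length × camera move ÷ depth, so near points slide much more than far ones, and that difference is the parallax the solver needs.

```python
# Illustrative only: apparent image shift of points at different depths
# when the camera translates sideways. All values are assumed.

def image_shift(focal_px, baseline_m, depth_m):
    """Approximate horizontal pixel shift of a point at depth_m
    when the camera moves baseline_m sideways."""
    return focal_px * baseline_m / depth_m

focal = 1000.0   # focal length in pixels (assumed)
move = 0.5       # camera moved half a metre (assumed)

near = image_shift(focal, move, 2.0)    # e.g. the graffiti wall, ~2 m away
far = image_shift(focal, move, 50.0)    # e.g. the buildings in the back, ~50 m

print(near, far)  # the near point shifts far more than the distant one
```

If every point sat at the same depth, all shifts would be identical and the solver could not separate camera translation from rotation; spreading points across depths is what makes the camera solve possible.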

After all the points were found, this is what it should look like:

After this we have to go to the deviation browser and smooth the green curve. The deviation value must be under 2.00, so the curves with the highest spikes must be deactivated before we calc from scratch again. Initially, the deviation browser looks like this:

Each blue curve represents a point we created in the footage, and the green curve represents the calculation across all the blue curves. After smoothing, the green curve should look like this:
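The deviation being kept under 2.00 is, in essence, a reprojection error in pixels: how far each tracked 2D point lands from where the solved camera re-projects it. A rough sketch of that idea (plain Python, not the 3DEqualizer API; the point coordinates below are invented for illustration):

```python
import math

# Hypothetical tracked 2D positions vs. the positions re-projected
# from the solved camera, in pixels (all values made up).
tracked =     [(100.0, 200.0), (340.5, 120.0), (512.0, 400.0)]
reprojected = [(100.8, 199.5), (341.0, 121.2), (510.5, 401.0)]

def rms_deviation(tracked, reprojected):
    """Root-mean-square pixel distance between tracked and
    re-projected points, a deviation-style error metric."""
    sq = [
        (tx - rx) ** 2 + (ty - ry) ** 2
        for (tx, ty), (rx, ry) in zip(tracked, reprojected)
    ]
    return math.sqrt(sum(sq) / len(sq))

dev = rms_deviation(tracked, reprojected)
print(f"deviation: {dev:.2f} px")  # here, comfortably under 2.00
```

A point whose curve spikes is one whose tracked position disagrees badly with the solve on those frames, which is why deactivating (or fixing) the spikiest curves pulls the overall green curve down.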

In this process I fell a bit behind because I deactivated too many points, so my curves were completely messed up, and when I hit the Calc from Scratch button my footage was equally bad.

The footage was completely distorted, so I asked for help. The way to solve this was to activate every point again, recalculate from scratch, and start over in order to bring the deviation value below 2.00. Under the Point menu, select Timeline Weight Blending, hit Calc from Scratch again, and everything looked much better. I was out of trouble.

This is what it should look like. When the curves are good, we hit Use Result; every curve looks good and sits below the desired value of 2.00, and thankfully there is no distorted footage.

When this is complete we go to the Parameter Adjustment window, hit Brute Force a few times, and hit Adjust, which will then look like the image below in the 3D space. A window will pop up to transfer the calculations that were just made into the project.

After this we have to go to the Orientation environment, select all the points, and set up the locators.

All set to export to Maya, but as we were using the Personal Learning Edition of 3DEqualizer we were not able to complete that step. After the session I installed 3DEqualizer Enterprise, redid the exercise, and exported the project to Maya as a .mel file, ready for the next session.

Matchmove Session 2!!

The next week we had a second session with Dom. This time the task was quite different: the footage was of a man on the street looking at his phone. The beginning was similar to the previous session; we had to create a new camera, create a point group, and find and track points at the different depths of the video.

We also learned a different method of tracking points: by pattern, which means the software calculates the point not by its contrast but by its shape and pattern. The previous points were all tracked in marker tracking mode, which is the most commonly used. It’s always useful to know the different tools we have for doing the same thing; sometimes a less common method can be better than the usual one.

Right after this, we calc from scratch, use the result, and save the project. The next step was to create a new object inside the camera, and the task was to find points on the man’s face so it could be tracked. The best places to get points on a face are the eyebrows, the forehead, between the eyes, and in the hair. We had a few minutes to find the face points, and in the end mine looked like this:

Same process: calc from scratch, use the result, and go to the Parameter Adjustment window, Brute Force All, and Adjust. On this part I had a problem and the parameters I got were wrong.

The problem was the focal length of the camera, so I changed it, recalculated the parameters, and everything looked good again.

After this we had to download an Iron Man helmet .obj and import it to place it on the man’s head.

As we can see in the image above, the blue arrow is pointing at the Iron Man helmet and the black arrow is pointing at the points we tracked in the background at the beginning of the session. Now we had to open a second 3DEqualizer window, with one in the F5 environment (Lineup) and the other in the F6 environment (Orientation), select the points on the man’s face, and assign them to the vertices of the helmet.

When all the points are in approximately the right place, the Lineup environment should show something like this:

The points don’t have to be perfectly positioned, as this can be adjusted later. Now the Orientation environment window can be closed and we can calc from scratch and use the result.

To fit the helmet to the man’s face a bit more precisely, we can reopen the second 3DEqualizer window and, in the Lineup environment, use translate or rotate to move the helmet into its correct place in the Orientation environment.

After this is all set, we have to export for Nuke and Maya. That was the part where I really fell behind and couldn’t solve my problem. When I tried to export for Nuke, I got an error saying that “only selected cameras can be exported”, and nothing happened when I hit OK. So I just watched the rest of the session and tried it again later.

When we imported the project into Maya, we also had to import the .fbx Iron Man helmet and use the .mel file with the data from 3DEqualizer along with the footage, then render the helmet model with the correct shaders and render settings, and finally export a small animation of a man wearing an Iron Man helmet.

This was the part where I got really lost and will have to redo and improve. I was not keeping up with the Maya task because my 3DEqualizer project was not being very helpful. This was a great session and I really hope we do more things like this. To me it’s always a step closer to knowing and learning from the industry, and having the opportunity to do something so new and so needed nowadays was great for opening our horizons and professional possibilities.

Personally, I had never heard of 3DEqualizer, matchmove, or rotomation until the start of the master’s, and it’s a field that really interests me in every way.
