The last task on my list for the week was to start working on the UI for the rig manipulation phase. I had intended to do only the graphical layout, but I am happy to say that it is already tied to the controls. At this point, the data to create these connections lives inside the code, so I will make it independent by migrating it to an XML file.
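As a rough sketch of what I have in mind (the node and attribute names here are placeholders, not the final format), the XML would describe each connection and a small loader would wire everything up:

```python
import xml.etree.ElementTree as ET
import maya.cmds as cmds

# Hypothetical mapping data; node and attribute names are made up.
MAPPING = """
<connections>
    <connect driver="ctrl_browL.translateY" driven="face_bs.browUp_L"/>
    <connect driver="ctrl_browR.translateY" driven="face_bs.browUp_R"/>
</connections>
"""

def load_connections(xml_text):
    """Create every driver->driven connection described in the XML."""
    root = ET.fromstring(xml_text)
    for node in root.iter('connect'):
        cmds.connectAttr(node.get('driver'), node.get('driven'), force=True)

load_connections(MAPPING)
```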
The tool is now driven by the UI (it can still run as it did last week, of course; they are separate classes). The process is very simple, and the mirroring tools let the user work much faster. Setting up the whole face takes about 10-15 minutes, except for painting skin weights, which can take a lot longer to finesse. The most bothersome part is setting the jaw to be controlled by the Osipa-style interface. I could not fully automate that step: since it uses a joint already in the scene, I cannot make assumptions about its orientation (that is, about which axis to rotate to open the jaw). Next week I will add a GUI to pick and manipulate the controls outside the viewport and (possibly) store and set poses.
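Since the jaw's rotation axis cannot be detected reliably, one option I am considering is simply asking the user for it. A minimal sketch of that idea (control and joint names are invented for illustration):

```python
import maya.cmds as cmds

def connect_jaw(control, jaw_joint, axis='rotateZ', open_angle=25.0):
    """Drive the jaw-open rotation from an Osipa-style slider.

    The rotation axis cannot be assumed from the joint's orientation,
    so the user supplies it; the slider's extremes map to closed (0)
    and fully open (open_angle).
    """
    for driver_value, jaw_value in ((0.0, 0.0), (1.0, open_angle)):
        cmds.setDrivenKeyframe(
            jaw_joint, attribute=axis,
            currentDriver=control + '.translateY',
            driverValue=driver_value, value=jaw_value)

# Example: on this particular joint, rotating Y opens the jaw.
connect_jaw('ctrl_jaw', 'jaw_jnt', axis='rotateY', open_angle=30.0)
```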
Following last week's proof-of-concept results, I changed the degree of the curves to avoid sharp changes in the rotation of the locators when the motion-path curve deformed under blend shape influences. As you can see in the following video, all the basic functionality is set up, with a clear idea of the user input cycle. I focused on functionality and left the UI for later, so the user input is hard-coded in the demonstration, but it is just a matter of replacing that with a window or a dock control.
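For reference, the degree change itself is a one-liner; something along these lines is what I ran on the driving curves (the curve name and span count are illustrative):

```python
import maya.cmds as cmds

# Rebuild a linear driving curve as a cubic one so the tangent (and
# therefore the locator rotation along the motion path) changes
# smoothly when blend shapes deform the curve.
cmds.rebuildCurve('lip_driver_crv', degree=3, spans=8,
                  keepRange=0,  # remap the parameter range to 0-1
                  replaceOriginal=True, constructionHistory=False)
```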
All the functionality is data-driven, so blend shapes can be easily updated and re-exported. The mapping of the controllers to the influences is also saved as an XML file that can be edited with ease.
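To give an idea of the shape of that file (the element and attribute names here are my placeholders, not the exact format), the export boils down to something like this:

```python
import xml.etree.ElementTree as ET

def export_mapping(pairs, path):
    """Write controller->influence pairs to an editable XML file.

    `pairs` maps a controller attribute to the blend shape weight it
    drives, e.g. {'ctrl_mouthL.translateY': 'face_bs.smile_L'}.
    """
    root = ET.Element('mapping')
    for controller, influence in pairs.items():
        ET.SubElement(root, 'drive', controller=controller,
                      influence=influence)
    ET.ElementTree(root).write(path, encoding='utf-8',
                               xml_declaration=True)

export_mapping({'ctrl_mouthL.translateY': 'face_bs.smile_L'},
               'face_mapping.xml')
```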
Here is a demonstration of the process that the tool will automate. The only user input would be the step where the blend shape for the target geometry is created.
In the process of building this proof of concept, I found that the locators tend to be twitchy when the blend shapes are activated on the driving curve. For the final part of the demonstration, I broke the connection to the locators' rotations, but that means I would need a whole new system to drive the rotations. I have a couple of ideas for solving this issue while keeping the motionPath method, which I will test in the coming week as I begin coding the tool's functionality.
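For the demonstration, the locators are attached roughly like this; note that only the translation is connected, which is why the rotations stop twitching but also stop updating (names are placeholders):

```python
import maya.cmds as cmds

def attach_to_curve(locator, curve_shape, u_value):
    """Pin a locator to a curve with a motionPath node, translation only.

    The motionPath's .rotate output is deliberately left unconnected;
    that is the 'broken' rotation link from the video. Driving the
    rotations without the twitching is the problem to solve next.
    """
    mpath = cmds.createNode('motionPath')
    cmds.connectAttr(curve_shape + '.worldSpace[0]',
                     mpath + '.geometryPath')
    cmds.connectAttr(mpath + '.allCoordinates', locator + '.translate')
    cmds.setAttr(mpath + '.fractionMode', True)  # u as 0-1 arc length
    cmds.setAttr(mpath + '.uValue', u_value)
    return mpath

attach_to_curve('lip_loc_01', 'lip_driver_crvShape', 0.25)
```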
This is my progress so far. Instead of spending a lot of time manually creating slider controls for each of the blend shapes that will go into the system, I built a couple of classes that let me recreate the interface with the click of a button. I considered importing a finished version of the controller interface from another rig; while that was certainly faster, it was also cumbersome and difficult to tweak without hard-coding a lot of the names into the script. For that reason I took the longer route and wrote methods that automate the creation of slider controls, bounding boxes for those controls, and text curves describing their purpose.
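In broad strokes, each builder method looks something like this (a simplified sketch; the real classes also track naming and layout):

```python
import maya.cmds as cmds

def make_slider(name, label, position):
    """Build one Osipa-style slider: a handle, its bounding box,
    and a text-curve label, all grouped under one transform."""
    # Handle: a small circle limited to sliding between -1 and 1 in Y.
    handle = cmds.circle(name=name, normal=(0, 0, 1),
                         radius=0.1, constructionHistory=False)[0]
    cmds.transformLimits(handle, ty=(-1, 1), ety=(True, True))
    for attr in ('tx', 'tz', 'rx', 'ry', 'rz', 'sx', 'sy', 'sz'):
        cmds.setAttr(handle + '.' + attr, lock=True, keyable=False)

    # Bounding box: a linear curve drawn around the slider's range.
    box = cmds.curve(name=name + '_box', degree=1,
                     point=[(-0.2, -1.2, 0), (0.2, -1.2, 0),
                            (0.2, 1.2, 0), (-0.2, 1.2, 0),
                            (-0.2, -1.2, 0)])

    # Label: text curves placed above the box.
    text = cmds.textCurves(name=name + '_lbl', text=label,
                           constructionHistory=False)[0]
    cmds.xform(text, translation=(-0.2, 1.4, 0))

    grp = cmds.group(handle, box, text, name=name + '_grp')
    cmds.xform(grp, translation=position)
    return handle
```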
As you can see in the following image and the video after it, with only 30 lines of code and a Python dictionary, I can generate the entire viewport interface required for the target rig.
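The dictionary is just a description of the layout; a trimmed-down version of the idea (the entries here are invented for illustration) looks like this:

```python
# Each entry: slider name -> (label, viewport position). The real
# dictionary also carries the blend shape targets each slider drives.
FACE_SLIDERS = {
    'ctrl_browL':  ('brow L',  (-2.0, 5.0, 0.0)),
    'ctrl_browR':  ('brow R',  (2.0, 5.0, 0.0)),
    'ctrl_mouthL': ('mouth L', (-2.0, 1.0, 0.0)),
    'ctrl_mouthR': ('mouth R', (2.0, 1.0, 0.0)),
}

for name, (label, position) in FACE_SLIDERS.items():
    make_slider(name, label, position)  # builder sketched above
```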
June 19 to 25:
June 26 to July 2:
July 3 to 9:
July 10 to 13:
For my second portfolio piece, I intend to create a tool that automatically generates a joint-driven facial rig for humanoid characters. The system is based on the tool developed by Jeremy Ernst of Epic Games (presented as 'Fast and Efficient Facial Rigging for Gears of War 3' at GDC 2011), incorporating techniques from Tim Callaway's curve-blendshape rig (the same techniques I implemented in the Goofy Facial Rig last semester).
The purpose of the tool is to complete the facial setup for the characters of both Neon Night Riders and Hit, using Faceshift motion capture data to quickly enhance the performance of all the characters. Since the characters already have controls on the eyeballs, eyelids, and jaw, the functionality that the tool would automate is the following:
The tool would provide two types of controllers for animation: Osipa-style controls inside the viewport and a docked GUI with sliders (the latter would work across all characters, since it is not scene-dependent). The user would only need to position a set of NURBS curves by snapping them to the vertices of the face and run the command to create the whole system. The curves used to create the expression blend shapes would be exposed to the user so they can tweak the look to match the character. To increase the flexibility of the rig, individual on-face controllers would also be created. A bare-bones sketch of the docked GUI follows.
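For the docked GUI, I am picturing something like Maya's dockControl wrapping a column of sliders; a minimal sketch of the idea (the slider names are invented):

```python
import maya.cmds as cmds

def show_face_sliders(slider_names):
    """Docked panel of sliders, one floatSliderGrp per control.

    Because it addresses controls by name rather than by selection,
    the same panel works for any character rigged by the tool,
    regardless of the scene.
    """
    window = cmds.window(title='Face Sliders')
    cmds.columnLayout(adjustableColumn=True)
    for name in slider_names:
        cmds.floatSliderGrp(
            label=name, field=True, minValue=-1.0, maxValue=1.0,
            changeCommand=lambda v, n=name:
                cmds.setAttr(n + '.translateY', v))
    cmds.dockControl(label='Face Sliders', area='right', content=window)

show_face_sliders(['ctrl_browL', 'ctrl_browR', 'ctrl_jaw'])
```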
Examples of the desired functionality can be seen here: