As promised, I implemented the variant-swapping functionality. It is entirely data driven: when the user selects a piece of clothing, the tool searches the appropriate 'sourceimages' subdirectory of the Maya project and lists every folder that contains at least a file called DIFF_folderName.jpg, which validates it as a texture option for that piece of clothing. The tool also determines how many color detail masks are stored in each folder and reports that back to the GUI, which then shows/hides and enables/disables the color picker options depending on the number of masks available per variation.
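The discovery step described above can be sketched in plain Python. The folder layout and the MASK_ prefix for detail masks are assumptions for illustration, not the tool's exact conventions:

```python
# Hypothetical sketch of the data-driven variant discovery: scan a garment's
# subfolder under 'sourceimages', keep only folders validated by a
# DIFF_<folderName>.jpg file, and count the color detail masks in each.
import os

def find_variants(sourceimages_dir, garment):
    """Return {variant_name: mask_count} for valid texture folders."""
    garment_dir = os.path.join(sourceimages_dir, garment)
    variants = {}
    if not os.path.isdir(garment_dir):
        return variants
    for name in sorted(os.listdir(garment_dir)):
        folder = os.path.join(garment_dir, name)
        if not os.path.isdir(folder):
            continue
        files = os.listdir(folder)
        # A folder is a valid variant only if it holds DIFF_<name>.jpg.
        if "DIFF_%s.jpg" % name not in files:
            continue
        # Count the detail masks so the GUI can enable/disable color pickers.
        masks = [f for f in files if f.startswith("MASK_")]
        variants[name] = len(masks)
    return variants
```

The GUI would then call this once per garment selection and toggle its color pickers based on the returned mask counts.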
Some of the UI changes came out of the usability tests I conducted earlier today, where I found that a majority of users had trouble understanding how the colors worked: I was only disabling the pickers when the active variant did not support them, and the graying out was not effective at communicating that they were no longer functional. What worried me most was that the navigation was unclear; users would typically tweak the parameters on the first screen and then hit the 'Export' button without realizing the tabs existed. For that reason I created two options for changing the UI layout, and the team is currently voting on them.
Another change that you might have noticed in the demonstration video was the addition of template saving. I am currently able to store all the attributes that define a character in a small XML file, and will soon implement the functionality for importing that information back in, so that a character design can be iterated upon.
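A minimal sketch of that template round-trip, using the standard library's XML module; the attribute names and schema here are illustrative, not the tool's actual file format:

```python
# Serialize the attributes that define a character into a small XML file,
# and read them back so a design can be iterated upon. Schema is a guess.
import xml.etree.ElementTree as ET

def save_template(path, attrs):
    """Write {name: value} attributes as <character><attr .../></character>."""
    root = ET.Element("character")
    for name, value in sorted(attrs.items()):
        ET.SubElement(root, "attr", name=name, value=str(value))
    ET.ElementTree(root).write(path)

def load_template(path):
    """Read the attributes back as a plain dict of strings."""
    root = ET.parse(path).getroot()
    return {e.get("name"): e.get("value") for e in root.findall("attr")}
```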
So far this week I have finished implementing the basic export functionality. I solved last week's texture issue, where the secondary blend colors would sometimes bleed into the final texture when running Maya's 'Convert to File Texture'.
To address it, I baked the diffuse channels from the layered textures on the torso and the legs separately, used the resulting files to replace the layered texture nodes, and then ran 'Convert to File Texture'. This way, the command does not need to handle nested layered textures or factor the alpha channel on each of their inputs.
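To illustrate why pre-baking helps (this is plain Python, not Maya code): a layered texture's default blend is a straight-alpha 'over' per pixel, and once the inner network is flattened to a single color, the outer blend no longer has to factor each nested input's alpha. The numbers below are arbitrary channel values in [0, 1]:

```python
# Straight-alpha 'over' blend, the default per-pixel operation of a
# layered texture. Flattening the nested layers first reduces the whole
# network to a single outer 'over'.
def over(fg, fg_alpha, bg):
    """Blend a foreground channel over a background with straight alpha."""
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

# Nested setup: (inner layered texture) blended over a base color.
inner = over(0.8, 0.5, 0.2)     # bake the inner layers first...
final = over(inner, 0.75, 0.1)  # ...so the outer blend is a single 'over'
```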
The following video demonstrates the export process. The tool looks for a directory called 'exports' in Maya's default project path and creates it if it does not exist. It generates a unique ID for the character from the user's name and a timestamp, with the option of adding a tag to identify the character. It outputs a final diffuse color map and an FBX with the skinned mesh.
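The ID scheme can be sketched like this; the exact separator and timestamp format are assumptions, only the ingredients (user name, timestamp, optional tag) come from the description above:

```python
# Hypothetical sketch of the unique character ID: user name + timestamp,
# plus an optional identifying tag. Format string is a guess.
import getpass
from datetime import datetime

def make_character_id(tag="", user=None, now=None):
    """Build an ID like 'user_YYYYMMDD_HHMMSS[_tag]'."""
    user = user or getpass.getuser()
    now = now or datetime.now()
    parts = [user, now.strftime("%Y%m%d_%H%M%S")]
    if tag:
        parts.append(tag)
    return "_".join(parts)
```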
As you may notice, the texture looks different from the layered texture setup. If you compare the Maya swatch for the material's color with the final output, the color values match perfectly, but the way the viewport handles the layered texture and its blending modes can look different from plugging in an equivalent flattened texture. I will try to address this in the following week.
I am currently working on the functionality of the remaining buttons (Load and Save in particular) and expect them to be ready sometime tomorrow.
In spite of the problems I described earlier this week with implementing a color picker in Qt Creator, I managed to find a way to nest a colorSliderGrp from Maya/Python in the resulting GUI. The trick was querying the path of a Qt control sitting at the same hierarchical level where you want the color picker to be. I had tried this before, but it appears that Qt Creator (or Maya?) has a maximum hierarchical depth past which 'Layout' elements cease to be listed in the full path of a widget. I restructured the UI to simplify the addition of the new controls.
With that out of the way, I implemented a first pass of the functionality. When swapping between meshes for the torso and legs, the tool automatically changes the skinMask textures and the diffuse base for each clothing item. It also allows the user to change that base color using a color picker and one of five blending modes (Add, Multiply, Lighten, Darken, and Overlay). While the first four are natively supported by Maya's layered texture, Overlay was implemented by replicating the math used by Photoshop through a network of utility nodes.
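The Overlay math being replicated is Photoshop's per-channel formula: Multiply where the base is dark, Screen where it is light. Shown here as plain Python for reference (the utility-node network computes the same thing per channel), with base and blend as values in [0, 1]:

```python
# Photoshop's Overlay blend, per channel: darkens dark bases via Multiply,
# lightens light bases via Screen.
def overlay(base, blend):
    if base < 0.5:
        return 2.0 * base * blend                         # Multiply branch
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)       # Screen branch
```

Note that a 0.5 blend value leaves the base unchanged, which is what makes Overlay useful for tinting without crushing contrast.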
The following video shows the current state of the tool.
As it stands now, there is a problem where the flattening of the torso's layered texture sometimes introduces artifacts in the final texture.
I will address this problem in the coming sprint, as well as transition from the current model where most paths and options are hard-coded to a more data-driven structure.
I have also begun to hold usability tests by letting some of the end users play with the interface. During the rest of the week I will create a more directed version of the test and a companion survey to collect first impressions. I also intend to record both the screen as users play with the tool and the users themselves, to gauge their reactions.
Even though Qt Creator's lack of native support for a color picker might make me discard this workflow altogether and build my UI directly in Python, I decided to use the UI created in Qt to implement the first pass of the functionality, in order to keep up with my schedule (and since UI and functionality are independent in my code). This consists of swapping meshes and controlling blend shapes through the sliders in the interface. A demonstration of the current state can be seen in the next video.
Since the current state of the UI lacks the color picker needed for one of my tasks this week, I decided to instead address an issue caused by Maya's 'Convert to File Texture' command: automating the inclusion of the transparency information as an alpha channel instead of as a separate texture. The options I had available were:
Since the first two options involved having the end user install additional software and possibly do some troubleshooting to make it work correctly with Python (I had trouble with win32com myself), I opted for the third one.
To choose the best elements from the UI Mockups that I created last week, I made a survey using Survey Monkey and asked the Focal Length team (producers and artists), as well as the Art and Technical Art faculty at FIEA, to take it. Here are the results.
The numbers speak for themselves in almost all of the questions, giving me a very clear idea of how to build the UI prototype. It is important to note that these decisions are by no means final, and iteration will continue as the fidelity of the prototypes and mockups gets closer to the final tool. The only question on which opinion was divided was the one about displaying the general settings (gender, weight, age, etc.): around 40% liked mockup A, 30% liked mockup B, and 23% preferred a third, unrepresented option. I decided to combine mockups A and B in this pass of the UI and address in the following UX test whether icons would improve readability.
With the data in mind, I built the following widget using Qt Creator:
Unfortunately, Qt Creator does not have a color picker widget. I spent some time researching ways to get one working, but the options I have found so far (QColorDialog, QColorTriangle, PyQt4) would require the end user to download and install extra software to run the tool, possibly even build some classes from the command line. This would make for a much worse UX, leaving the tool hard to set up for non-technical people (which most of the potential users are). I will still run more tests to see if I can add the colorSliderGrp to the docked widget after loading the .ui file in Maya, but if those tests prove unsuccessful, I will probably have to rebuild the UI directly in Python. While that solution would make the tool easier to port to the end users' systems, it would be a setback schedule-wise.
During this week, I set up the MotionBuilder-friendly skeleton for the tool and skinned all the meshes to that joint hierarchy. However, since the joints have to reposition themselves to match the clothing choices and the gender/age/proportions blend shapes, I exported all the skin weighting information as XML files so that the tool can pull it back in after the character has been designed and is ready for export.
The other big task for this week was creating set driven keys on all the joints so they actually conform to the aforementioned deformations, the result of which you can see here:
Finally, I began creating UI mockups to subject them to the judgement of the end users (artists and producers from the Focal Length team). By doing so, I can make sure the tool is usable and easy to understand before I write the first line of code.
As stated in the schedule for this piece, over the week I finished the blend shapes (morph targets) for the body variations specified for the tool, modeled the clothing pieces selected from those the Focal Length team has concepted up to this point, and generated morph targets for those models. The fastest way to demonstrate the progress is through the following video:
I also created UV maps for all the models shown above and set up the material network that allows all of the pieces to share the same UV space, so that the final result of the tool is a single flattened map. To build this material, I created masks so that a single skin texture lies beneath the layered textures containing all the clothing information, reducing the amount of work required from the Focal Length texture artists working with me on this tool (Christina Equels and Monica Alvarez).
Here is a sample of what a textured civilian could look like. Textures by Monica Alvarez and Christina Equels; note that they are still a work in progress.
As part of the development for Focal Length, I am in the process of creating a tool that will allow the team to generate unique civilians that will populate the environment. This blog will document my week to week progress.
According to production specifications, around twenty civilians could potentially appear at the same time on the screen, so the tool needs to be able to generate at least that many significantly different characters. To keep the amount of work in scope, it was decided that the following parameters will be controllable by the user:
With the addition of variation in the textures for skin, eyes, hair, and clothes, the tool could potentially generate hundreds of different civilians. A big focus of the tool will be a friendly UI and creation process that achieves a good UX, since the tool will be used by artists and producers alike.
Similar tools already exist, such as Mixamo Fuse, MakeHuman, the Mii creator on the Wii, the Avatar creator on the XBox, etc.