Thursday, October 25, 2012

Random People Card

To generate cards of random people to give scale to a scene. 

So I need human-sized characters in my scene to give it some scale. There are various ways to do this, but I thought it'd be fun to write a little script that generates card people in various poses, heights, etc. A quick search for silhouettes on Google Image search turns up this image at Deposit Photos, original link HERE.

Image 1: Various Silhouettes

I did a quick edit of the image to make it a square. Not too concerned with the watermarks.

1. Create shader using image 1 as opacity map
2. Create a polyplane with a dimension of 1x2 meters
3. Map the polyplane UV randomly to a person in image 1

MEL Script
global proc randCardPeople() {
    string $card1[];
    string $shaderName;
    string $shadingGroup;
    string $fileName;

    // Build the shader network once; reuse it on subsequent calls.
    if (!`objExists cardPeople`) {
        $shadingGroup = `sets -renderable true -noSurfaceShader true -empty -name cardPeopleSG`;
        $shaderName = `shadingNode -asShader lambert -n cardPeople`;
        $fileName = `shadingNode -asTexture file`;

        connectAttr -f ($shaderName + ".outColor") ($shadingGroup + ".surfaceShader");
        connectAttr -f ($fileName + ".outColor") ($shaderName + ".incandescence");
        connectAttr -f ($fileName + ".outTransparency") ($shaderName + ".transparency");
        setAttr ($shaderName + ".color") -type double3 0 0 0;
        setAttr ($shaderName + ".diffuse") 0;
        setAttr -type "string" ($fileName + ".fileTextureName") "/data/silhoutte.jpg";
        setAttr ($fileName + ".invert") 1;
        setAttr ($fileName + ".alphaIsLuminance") 1;
    }

    // Create a 1x2 meter card and stand it upright.
    $card1 = `polyCreateFacet -p 0 0 0 -p 1 0 0 -p 1 0 -2 -p 0 0 -2 -name card`;
    rotate -r -os 90 0 0;
    makeIdentity -apply true -t 1 -r 1 -s 1 -n 0;

    // Assign the shader and map the card's UVs to one random person.
    sets -e -forceElement cardPeopleSG;
    select -r ($card1[0] + ".map[0:3]");
    polyEditUV -pu 0.25 -pv 0.5 -su 0.15 -sv 0.15;
    polyEditUV -u -0.25 -v -0.5;      // move to 0 0
    polyEditUV -u 0.0613368 -v 0.355; // move to first row

    int $chooseU = `rand 12`;
    int $chooseV = `rand 1 4`;
    polyEditUV -u (0.0815 * $chooseU) -v (0.192 * $chooseV);
    select -cl;
}
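For reference, the random tile-picking math at the end of the script can be sketched in plain Python (outside Maya). The grid dimensions (12 columns, 3 usable rows) are read off the `rand` calls, and the offset constants are the hand-tuned values from the script for this particular silhouette image.

```python
import random

def random_person_uv():
    # Base offset that pins the scaled-down card UVs at the first person
    # (the hand-tuned values from the MEL script above).
    base_u, base_v = 0.0613368, 0.355
    # Pick a random cell in the silhouette grid.
    choose_u = random.randrange(12)    # column, like `rand 12` in MEL
    choose_v = random.randrange(1, 4)  # row, like `rand 1 4` in MEL
    return base_u + 0.0815 * choose_u, base_v + 0.192 * choose_v

u, v = random_person_uv()
```

Every resulting (u, v) stays inside the 0-1 UV square, so each card lands on exactly one silhouette.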




Monday, October 8, 2012

MentalRay Custom Color Buffer / Linear Work Flow

Convert an existing mental ray shaded object into comp passes.


Pic 1

In pic 1, we have the original shader.

Pic 2

In pic 2, we have the shader network. The top blinn is for the wooden base; the bottom material is for the porcelain/ceramic material. It consists of a fast_SSS connected to a mia_material; the color-correct node is there to vary the front SSS color from the back SSS color.

Separating the Passes
To reiterate, the goal is to recreate Pic 1 from passes.

Step 1: Create Passes
First lets go into Render Settings > Passes

Pic 3: Passes Window

Click on the first button on the right. It shows a window listing the available passes. We'll just click on Custom Color (there are options to enter a prefix if needed) and rename the pass "Diffuse". Do this three more times, naming them "Refl", "SSS", and "Spec". Once created, select all the passes and press the green check button in the middle of the Passes window. This will activate the passes.

Step 2: Rebuild the Shader Network
Let's do the wooden base shader first. This is a simple blinn shader that consists of diffuse, specularity, and reflectivity.

Step 2A: Base Section
1. So let us create 3 writeToColorBuffer nodes in the hypershade. They can be found under Hypershade > Create > mental ray > Miscellaneous. Rename them diffuse, spec, and refl. The custom color drop-down list will contain the color passes we created in Step 1; select the corresponding color pass for each writeToColorBuffer node.

2. Let us duplicate the blinn shader as well. Since we can't output individual shading components from the blinn shader, we will need to re-create the shading components.

  • Diffuse: For the original blinn, turn reflectivity to 0, set the specular color to black, and rename it diffuse.
  • Reflection: For reflection, create a mib_reflect node (found under Sample Compositing).
  • Specular: For the duplicated blinn, change diffuse to black and reflectivity to 0, and rename it spec.

3. Now connect each blinn shader's output to its corresponding writeToColorBuffer input. If you select all the nodes and graph them now, the pass nodes will show up in the hypershade as well, as shown in pic 4.

Pic 4

4. If you render now, only the diffuse pass will show up, since only the diffuse shader is connected to a shading group. One way to make the other writeToColorBuffer nodes evaluate is to somehow connect the blinns to the shading group, but the official method is to use the evaluation passthrough of each writeToColorBuffer node.

  • Connect the refl node to the spec node, and the spec node to the diffuse node.
  • Disconnect the diffuse shader from the shading group.
  • Create a surface shader and connect it to the shading group.
  • Connect the diffuse writeToColorBuffer node to the surface shader's input color.
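The reason the chain works can be sketched in plain Python (this is a conceptual model, not the Maya API; the node names and shading functions are made up for illustration): evaluating the last node in the passthrough chain forces every upstream writeToColorBuffer node to run and fill its buffer.

```python
# Buffers that each writeToColorBuffer-style node writes into.
buffers = {}

def make_buffer_node(name, shade, passthrough=None):
    """Return an evaluate() callable that mimics a writeToColorBuffer node."""
    def evaluate():
        if passthrough:          # evaluate the next node in the chain first
            passthrough()
        buffers[name] = shade()  # then write this node's shading result
    return evaluate

refl    = make_buffer_node("Refl", lambda: 0.2)
spec    = make_buffer_node("Spec", lambda: 0.5, passthrough=refl)
diffuse = make_buffer_node("Diffuse", lambda: 0.8, passthrough=spec)

# The surface shader only pulls on the diffuse node, but the chain
# drags the spec and refl buffers along with it.
diffuse()
```

This mirrors why connecting only the diffuse node to the surface shader is enough: the refl and spec nodes get evaluated through the chain.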

It'll look something like in pic 5.

Pic 5

5. If we render now, it'll look like this,

Pic 6

The base will be black because of the surface shader. Here is what the passes look like,




This concludes the base part.

Step 2B: Vase Section

  1. Create an SSS pass in render settings
  2. Create 4 writeToColorBuffer nodes and rename them: two SSS (front and back), refl, and diffuse
  3. Upgrade the shaders. For the vase part, I have a misss_fast_shader for diffuse and SSS, and a mia_material for the reflection. The misss shader is connected to the mia_material through additional colors; however, that won't matter once we use passes. These shaders are much easier to output passes from, since you can upgrade them to their respective X_passes shaders. The connections will get a little messed up, but it's easy to fix.
  4. Once upgraded, have the misss shader output its front and back SSS results into the two color buffers.
  5. Output diffuse result into the diffuse color buffer
  6. Output the reflection result from mia_material into the refl color buffer
  7. Disconnect materials from the shading group
  8. Create a surface shader and connect the last buffer node to the surface shader

pic 7
The final layout. 

Step 3: Render



Combining the Passes
Once combined, you will notice that the combined passes will NOT look like the master beauty pass. This is due to the exposure mental ray adds to the beauty pass that is not added to the buffers.
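For one pixel, the combine itself is straightforward, assuming the passes are additive over the black surface-shader background (the pixel values here are made-up examples, not taken from the renders in this post):

```python
# Example per-pixel pass values (illustrative only).
diffuse, spec, refl, sss = 0.4, 0.1, 0.05, 0.2

# Rebuild the beauty by summing the custom color buffers.
comped = diffuse + spec + refl + sss
```

The mismatch with the master beauty comes from mental ray's exposure/tone mapping being applied to the beauty but not to this sum, which is what the linear workflow below corrects.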

Comped Image

Master Beauty

To correct for this, we will need to upgrade our workflow to linear workflow.

Linear Workflow

  1. Change render settings > file format to EXR
  2. Change render settings > quality > framebuffer to 4x16 bit (half)
  3. Change render view > display to 32-bit HDR
  4. In render view > display > color management, set the image color profile to linear and the display color profile to sRGB
  5. Color correct ALL textures/color swatches with a gamma correct node set to 0.4545
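What step 5 does to a texture value can be sketched in Python, assuming Maya's gammaCorrect node raises the input to 1/gamma (so a gamma of 0.4545 applies an exponent of roughly 2.2, undoing the sRGB-style encoding baked into the texture):

```python
def linearize(value, gamma=0.4545):
    # gammaCorrect-style operation: output = input ** (1 / gamma).
    # With gamma = 0.4545 this is ~input ** 2.2, i.e. linearization.
    return value ** (1.0 / gamma)

# Mid grey in an sRGB-encoded texture becomes a much darker linear value.
linear_mid_grey = linearize(0.5)
```

This is why everything feeding the shaders needs the correction: the render then happens in linear space, and the sRGB display profile puts the gamma back on at view time.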

Comped Image

Master Beauty

The final shader network looks like this,

Final Shader Network

To have both the master beauty and the final passes rendered out, I connect the last outEvaluation to the outMatteOpacity of the surface shader. It doesn't affect anything, but it will let the passes evaluate.

Notes I
There are limitations to the custom color buffer workflow: some shading components from Maya shaders will not output correctly, and some might even interfere with other shading components. The safest way to ensure a consistent workflow is to stick with mental ray shaders and keep it simple. After all, we're pretty much offloading the light-tweaking stage to the compositing department in this workflow.

Notes II
To improve the workflow, it is advisable to standardize/limit the shaders used and also automate the pass and gamma-correction process. MEL scripts will be provided... when I get around to it.

Wednesday, May 23, 2012

MentalRay Vs Renderman Shaders

So I get asked quite often why Renderman shaders can be a pain in the ass to use compared to Mentalray's shaders. My standard response is that Mentalray shaders are not as art-directable as Renderman shaders. Using an analogy, one can compare Mentalray to canned food, while Renderman is your veggies, meats, etc., in essence, basic ingredients. You can prepare a meal both ways, and both can be delicious, but you can't get variety with canned food, though it is fast. With basic ingredients you can cook anything up, but it takes more skill and is more time consuming. So, it really depends on the company's needs: some projects will require a unique look, while some require speed.

Wednesday, April 25, 2012

Character Shaders and Controlling SSS in Renderman/Slim

Haven't updated this blog in a while; thought I need to write some of these down in case I forget. So... how to control SSS? What kind of control is expected? The default controls given in APSubsurfacescattering look like this,

Your basic intensity, incolor, outcolor, albedo, path length, etc. Quite a few, which is enough for most shaders. And you can add multiple layers of SSS for a desired effect. Now, let's take a look at my shader trees in Slim.

Complete Shading Graph of a Single Character

In Depth
The yellow nodes are Allpurpose nodes, the red are shading components, and the purple nodes are global controls. The graph looks like this mainly because this is a single character with multiple shading groups. Most of them are the same shader, just with different textures. The three trees to the right are different shaders: mouth, teeth, tongue. So let's just take a look at a single tree.

Here we can see the structure of a shader more clearly. Not much, really: spec and rim are shared across all the other shaders, diffuse has its own textures, and incandescence is used in our pipeline as an automatic AOV output. The SSS here is called APMultiSSS, a template written by our shader TD that combines three SSS nodes into one; it's the same as making three SSS nodes and layering the colors together. The purple controls are necessary because I don't want to change 5 shaders every time I want to tweak something.

Now, let me explain the SSS controls that were needed. The material for this character is ceramic, so there shouldn't be a lot of SSS, yet a strong, contrasty look is to be avoided. What this means in the end is: strong front scatter to get a soft look, weak back scatter to avoid a translucence effect. To achieve this, we will need two SSS nodes with specific controls. Let's take a look at the SLBOXs in this shader tree,

Front scatter control

v1 is the diffuse texture image
v2 is a control float
sat is a control float, 1 for full saturation, 0 for black and white
v5 is a matte to separate out non-SSS parts of the model

The first part makes a control like the saturation control in the Adjust node. The second part, the if statement, makes sure that every point facing AWAY from the camera returns 0. Every point facing TOWARD the camera returns some value, which in this case is used as the incolor of the SSS shader.

The result = desat*v2*v6 thus simply means the incolor is a desaturated color texture, multiplied by intensity (which was 3), and multiplied by a matte which blacks out areas that don't need SSS. So, finally, for this part, the output is a high-intensity SSS shader when the light and camera are in the same general direction, but no SSS when the light and the camera are opposite each other (light behind object).

Back scatter control
The back SSS control is just the reverse of the above node: every point facing away from the camera has an intensity of v1*v2, which is the original color matted by v2, and every point facing toward the camera has a value of 0.
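The gating logic of the two SLBOXs can be sketched in plain Python. The facing value stands in for the dot product of the surface normal and the view vector; the v1/v2/v5 names follow the SLBOX inputs above, but the desaturation and matte math are simplified stand-ins, not the actual RSL.

```python
def front_scatter(facing, desat_color, v2_intensity, v5_matte):
    # Points facing away from the camera (facing <= 0) return 0;
    # points facing the camera return desat * intensity * matte.
    if facing <= 0.0:
        return 0.0
    return desat_color * v2_intensity * v5_matte

def back_scatter(facing, v1_color, v2):
    # The reverse: only points facing away from the camera contribute.
    if facing > 0.0:
        return 0.0
    return v1_color * v2

front_lit = front_scatter(0.8, 0.5, 3.0, 1.0)   # camera-facing point
front_away = front_scatter(-0.2, 0.5, 3.0, 1.0) # point facing away
back_away = back_scatter(-0.5, 0.6, 0.5)        # back scatter kicks in
```

Each SSS node then just consumes the gated value as its incolor, which is the whole trick.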

And that is it. It's pretty much just using an if statement to control the behavior of what happens when a point faces toward the camera versus when it faces away from the camera.