Tuesday, April 26, 2011

Architectural Visualization: Concrete - Part I - Analysis

Intro
Concrete comes in many forms, and I thought it might be time I finally made a detailed study of the different types of concrete.

Monday, April 25, 2011

Architectural Visualization: Planks

Intro
Since I'll be working on an architectural visualization project, I will document the techniques I find interesting. The first notable one so far is the planks near a pool area.

For the planks, I found a 1k x 1k tileable image. With some scripting and Hypershade trickery, it is more than enough, even in closeups.

Goal
1. add transform noise to the planks
2. offset UV of individual planks
3. add different values to the planks
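All three goals boil down to deriving a stable pseudo-random number per plank, which can then drive a transform offset, a UV shift, or a value change. As a rough illustration (this is plain C I wrote for the blog, not the actual Maya scripting or Hypershade setup), an integer hash gives each plank id a repeatable value in [0, 1):

```c
#include <stdint.h>

/* Integer hash (Wang-style) that mixes a plank id into a
   well-scrambled 32-bit value. */
static uint32_t hash_u32(uint32_t x) {
    x = (x ^ 61u) ^ (x >> 16);
    x *= 9u;
    x = x ^ (x >> 4);
    x *= 0x27d4eb2du;
    x = x ^ (x >> 15);
    return x;
}

/* Map a plank id to a repeatable float in [0, 1) -- usable as a
   UV offset, a value shift, or a small transform jitter. */
float plank_random(uint32_t plank_id) {
    return (hash_u32(plank_id) & 0xFFFFFFu) / 16777216.0f;
}
```

The same id always produces the same number, so the variation survives re-renders.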

Sunday, April 17, 2011

Writing mental ray shaders: UV Chooser

Intro
Here is a shader that loads a texture and chooses which UV set to use. There is an equivalent node in Maya, but it requires setting the input values in Hypershade, which isn't intuitive for the average user; in 3ds Max the same thing is set with UV channels. Multi-texture layering is an important technique when dealing with repetitive textures: by using five 1k textures, I can create a better-looking texture than a single 5k texture. I'm not sure why Maya makes it so hard for average users.

MI Source
declare shader
 color "uv_chooser" (
  color texture "tex",
  integer "uv_sets"
 )
apply material
end declare


C Source
#include "shader.h"

struct uv_chooser {
 miTag tex;          /* the color texture to look up */
 miInteger uv_sets;  /* index of the UV set in state->tex_list */
};

DLLEXPORT
miBoolean uv_chooser(miColor *result, miState *state, struct uv_chooser *params) {
 miTag tex = *mi_eval_tag(&params->tex);
 miInteger uv_set = *mi_eval_integer(&params->uv_sets);
 /* Look up the texture using the coordinates of the chosen UV set. */
 mi_lookup_color_texture(result, state, tex, &state->tex_list[uv_set]);
 return miTRUE;
}

 

Friday, April 15, 2011

Writing mental ray shaders: Mosaic Tiles

Introduction
Here is a little something that detours from the book. Combining quantization and the texture UV lookup, I can create a mosaic effect.

Methodology
1. Have an input texture
2. quantize the uv_coordinates
3. use the quantized_uv_coordinates inside mi_lookup_texture_color

From top to bottom: Original, tile = 20, tile = 10, Porn?!

Writing mental ray shaders: Texture Mapping

Introduction
Once we have UV coordinates, we can start mapping textures according to these coordinates. This section will introduce the use of miTag and the function mi_lookup_color_texture(). I'll create a node that is similar in function to Maya's 2d placement node. The key features are UV offset and UV scaling.


Notes
mi_lookup_color_texture(miColor *col, miState *state, miTag tag, miVector *coord)
Returns the value of a color texture at a given coordinate.

The tag is assumed to be a texture as taken from a color texture parameter of a shader. This function checks whether the tag refers to a shader (procedural texture) or an image (file texture or byte stream), depending on which type of color texture statement was used in the .mi file. If tag is a shader, coord is stored in state->tex, the referenced texture shader is called, and its return value is returned. If tag is an image, coord is brought into the range (0…1, 0…1) by removing the integer part, the image is looked up at the resulting 2D coordinate, and miTRUE is returned. If the texture has been marked for filtering, as with the filter keyword in the .mi file, then multi-level pyramid filtering is performed, a procedure derived from classical mip-map textures. In both cases, the color resulting from the lookup is stored in *col.

Thursday, April 14, 2011

Writing mental ray shaders: Quantization Part I

Introduction
At this point in the book, Writing mental ray Shaders, it starts with the UV shader, which uses the quantization function. I think the function deserves its own little write up, so we'll divert a little bit and play with the color quantization function.


Quantization
From Wikipedia
Quantization, in mathematics and digital signal processing, is the process of mapping input values that are members of some relatively large set of admissible input values to output values that are members of a smaller countable set of output values. The set of possible input values may be infinitely large, and may possibly be continuous and therefore uncountable (such as the set of all real numbers, or all real numbers within some limited range). The set of possible output values may be finite or countably infinite. A device or algorithmic function that performs quantization is called a quantizer.

The most common type of quantization is known as scalar quantization. Scalar quantization, typically denoted as y = Q(x), is the process of using a quantization function Q( ) to map a scalar (one-dimensional) input value x to a scalar output value y. Scalar quantization can be as simple and intuitive as rounding high-precision numbers to the nearest integer, or to the nearest multiple of some other unit of precision.


Rounding
From Wikipedia
Rounding a number x to a multiple of some specified increment m entails the following steps:
  1. Divide x by m, let the result be y;
  2. Round y to an integer value, call it q;
  3. Multiply q by m to obtain the rounded value z.
z = round(x, m) = round(x / m) * m
Regardless, it is recommended to read through the wikipedia entry for details on rounding numbers.


Type Conversion
In C, an explicit type conversion (a cast) from a floating-point type to an integer type truncates toward zero, so a cast can itself serve as a simple rounding function. With these definitions out of the way, we can start working on our UV-as-colors-with-quantization shader.
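A quick sketch of what that means in C (assuming non-negative inputs, since the cast truncates toward zero):

```c
/* For non-negative x, casting to int truncates toward zero, so adding
   0.5 first turns truncation into round-to-nearest. */
int round_by_cast(float x) {
    return (int)(x + 0.5f);
}
```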



Writing mental ray shaders: Quantization Part II

Intro
Here I will combine my two previous posts into one shader, the uv_as_colors_banding shader. I'll implement the rounding function as demonstrated in the Wikipedia article, since I did not understand some of the source code from the book.

Methodology
1. Implement uv as colors
2. Define u_count, which is the amount of banding in the u direction
3. Define v_count, which is the amount of banding in the v direction
4. Implement the quantize function. 

Wednesday, April 13, 2011

Writing mental ray shaders: UV as Colors

Introduction
Here, we'll start working with UVs. We'll first look at the state variable tex_list. I will use tex_list in a shader that translates it into the red and green components of result. In the next section I will combine the quantization function into the UV-as-colors shader. As I'm no expert in programming, my code will be a little simpler, but hopefully easier to understand.

Definition
state->tex_list is a pointer to an array containing the texture coordinates of the intersection point in all texture spaces. 
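Outside of mental ray the core of the shader is trivial. Here is a stand-in sketch (the real shader reads state->tex_list[0]; the Vec2 and Color structs are stand-ins I made up for illustration):

```c
/* Minimal stand-in types so the mapping can be shown outside mental ray. */
typedef struct { float x, y; } Vec2;
typedef struct { float r, g, b, a; } Color;

/* The core of the uv-as-colors shader: u becomes red, v becomes green. */
Color uv_as_colors(Vec2 uv) {
    Color c = { uv.x, uv.y, 0.0f, 1.0f };
    return c;
}
```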

Tuesday, April 12, 2011

Writing mental ray shaders: Transparency

Introduction
By definition, transparency is the physical property of allowing light to pass through a material. So, before I start, we need to examine this function.

miBoolean mi_trace_transparent(
 miColor  *result,
 miState  *state)

From Mental Ray online manual
This function casts a transparency ray from state->point in the direction state->dir. It returns miFALSE if the trace depth has been exhausted or if the hit object has disabled refraction receiving. If no intersection is found, the optional environment shader is called. It also works when ray tracing is turned off, and considers visible as well as trace objects.

From Writing mental ray® Shaders,
The API library function mi_trace_transparent sends a ray in the same direction and stores the resulting color in result. Note that this is a potentially recursive shader call—if the ray strikes another instance with a material that contains this transparency shader, then mi_trace_transparent will be called in it, and so on.
Final Image
For instance, let's examine the final test image I made. We send out an eye ray; it hits the blue plane at point P, which calls the trace function. The trace function sends out a ray in the same direction as the eye ray and hits the green plane, which calls the trace function again and hits the red plane. This continues until the trace depth is exhausted. It should be noted that setting the trace depth in Maya is not enough; you need to set the refraction depth as well.
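The recursion can be modelled with a toy function (a sketch of the control flow only, not the real mi_trace_transparent API):

```c
/* Toy model of the recursive transparency trace: each "surface" hit
   calls trace again until the depth limit is exhausted, mirroring the
   eye ray passing through plane after plane. */
int trace_transparent(int depth, int max_depth) {
    if (depth >= max_depth)
        return depth;                       /* trace depth exhausted */
    return trace_transparent(depth + 1, max_depth);
}
```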


Monday, April 11, 2011

Writing mental ray shaders: Set Range Utility

Goal
Make and/or improve on the set range utility found in Maya.

Methodology
1. Given an oldmin and oldmax, find the old range by oldmax - oldmin.
2. Given a newmin and newmax, find the new range by newmax - newmin.
3. For a given value P, find its offset by P - oldmin.
4. Find the factor by (Step 3 / Step 1).
5. Find the remapped value by (Step 4 * Step 2) + newmin.
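The five steps above map one-to-one onto a small C function (my own sketch of the utility, not code from the book):

```c
/* Remap value from the range [oldmin, oldmax] into [newmin, newmax],
   following steps 1-5 above.  Assumes oldmax != oldmin. */
float set_range(float value, float oldmin, float oldmax,
                float newmin, float newmax) {
    float old_range = oldmax - oldmin;      /* step 1 */
    float new_range = newmax - newmin;      /* step 2 */
    float offset    = value - oldmin;       /* step 3 */
    float factor    = offset / old_range;   /* step 4 */
    return factor * new_range + newmin;     /* step 5 */
}
```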

Writing mental ray shaders: Z Depth Part II

Introduction
In this part I'll create the set range and blend functions that can be found in Maya. These functions will be a set of auxiliary functions that I can call upon, much like how the set range and blend nodes work in Maya. Again, I am working from Writing mental ray® Shaders: A Perceptual Introduction (mental ray® Handbooks). Buy the book, it's worth it :)

I'll further refine the zdepth shader, incorporating the set range and blend functions.

Goal
1. Develop a Set Range function and use as a library function
2. Develop a Blender function and use as a library function
3. Apply the above functions into the zdepth shader
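Goal 2's blend is just linear interpolation. A minimal sketch:

```c
/* Linear blend of two values: factor 0 gives a, factor 1 gives b --
   the scalar core of a Maya-style blend node. */
float blend(float a, float b, float factor) {
    return a * (1.0f - factor) + b * factor;
}
```

In the zdepth shader this is called once per colour channel, with the remapped depth as the factor.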

Friday, April 8, 2011

Writing mental ray shaders: Z Depth Part I

Introduction
Unlike the shader demonstrated in Chapter 7 of Writing mental ray Shaders, this is a camera z-depth shader. The shader in the book is a world z-depth shader.

Goal
A shader that displays a grey scale from near to far from the rendering camera.

Hypershade Equivalent

Zdepth shader tree

Methodology
From the shader tree above,
1. For a point P in space, from the rendering camera, find the -Z position of point P.
2. Define a near distance and a far distance; use the rendering camera's near/far clipping planes to determine them, or alternatively use the measure distance tool.
3. Use a set range function to remap the distance (near to far) to 0 and 1, and assign this to a variable, factor.
4. Use factor as the greyscale value of a surface shader, or use factor as the blend value of a blend function (as above) to blend two colors.
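Putting steps 1-3 together as plain C (the camera-space lookup itself is omitted; in mental ray the depth would come from transforming state->point into camera space and negating z):

```c
/* Camera z-depth to grey: remap a positive distance from the camera,
   from near_clip..far_clip, into 0..1, clamping outside the range. */
float zdepth_grey(float depth, float near_clip, float far_clip) {
    float factor = (depth - near_clip) / (far_clip - near_clip);
    if (factor < 0.0f) factor = 0.0f;
    if (factor > 1.0f) factor = 1.0f;
    return factor;
}
```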

Thursday, April 7, 2011

Writing mental ray shaders: Normals as Colors Part II

Goal
A shader that visualizes the world space, object space, and camera space normals.


Methodology
Same as in Part I, but use a switch statement that calls the different mi_vector_* space-conversion functions (such as mi_vector_to_world, mi_vector_to_object, and mi_vector_to_camera).

Writing mental ray shaders: Normals as Colors Part I

Goal
A shader that visualizes the surface normal.


Methodology
1. Find the surface normal at point p
2. Assign xyz values to rgb at point p
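A sketch of step 2 in plain C (Vec3 and Color3 are stand-in types I made up for illustration). Note that the components of a unit normal range from -1 to 1, so this version remaps them to 0..1; the raw assignment in step 2 would simply leave the negative half to be clamped to black.

```c
typedef struct { float x, y, z; } Vec3;
typedef struct { float r, g, b; } Color3;

/* Map a unit normal's xyz into rgb, remapping -1..1 to 0..1 so that
   negative components remain visible instead of clamping to black. */
Color3 normal_as_color(Vec3 n) {
    Color3 c = { (n.x + 1.0f) * 0.5f,
                 (n.y + 1.0f) * 0.5f,
                 (n.z + 1.0f) * 0.5f };
    return c;
}
```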

Wednesday, April 6, 2011

Writing mental ray shaders: Introduction

To clarify, I am working from Andy Kopra's book

Writing mental ray® Shaders: A Perceptual Introduction (mental ray® Handbooks)



His website is, 

I'm blogging the exercises as a means to remember, and to have something quick and easy to refer to. It is also for the day when I have to teach this material. I am using Linux and Maya 2011 x64 to test the shaders.

Friday, April 1, 2011

Writing mental ray shaders: Facing Forward

Goal
A shader that provides a visual representation of facing ratio: where the surface normal is at 90 degrees to the camera the surface is black, and where it is at 0 degrees to the camera it is white.

Methodology
1. Create a color parameter called tint
2. Create a scalar variable called scale
3. Assign scale the dot product of the surface normal and the camera ray direction
4. Assign result the value of tint multiplied by scale

Mi File
declare shader
 color "facing_ratio" (
  color "tint"  default 1 1 1
 )
 apply material
end declare

C File
#include "shader.h"

struct facing_ratio {
 miColor tint;
};

DLLEXPORT
miBoolean facing_ratio (
 miColor *result, miState *state, struct facing_ratio *params ) {
 miColor *tint = mi_eval_color(&params->tint);
 miScalar scale = -state->dot_nd;
 result->r = tint->r * scale;
 result->g = tint->g * scale;
 result->b = tint->b * scale;
 result->a = 1.0;
 return miTRUE;
}

Questions

Regarding the line miColor *tint = mi_eval_color(&params->tint); in the C source code,

Q. Why can't I directly assign the color of mi_eval_color() to tint?
A. mi_eval_color(), and mi_eval in general, does not return a color. It returns a pointer to one (a miColor *).

Q. What is it doing? 
A. This is pointer assignment, not dereferencing: mi_eval_color() returns the address where the parameter's value lives, and that address is stored in tint. The value itself is read later by dereferencing, as in tint->r.

Q. When did tint become a pointer? Or can I convert any type to a pointer type with a "*"?
A. tint was declared as a pointer: in miColor *tint, the * makes tint a pointer to a miColor rather than a miColor itself. The * in a declaration is not a conversion; every variable does have an address, but that address is taken with the & operator.
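A minimal stand-alone illustration of the same pattern (eval_color and ColorVal here are made-up stand-ins for mi_eval_color and miColor):

```c
typedef struct { float r, g, b; } ColorVal;

/* The value lives somewhere else (in mental ray, wherever the
   parameter is stored); the function hands back its address. */
static ColorVal storage = { 0.2f, 0.4f, 0.6f };

ColorVal *eval_color(void) {
    return &storage;
}
```

Usage mirrors the shader: ColorVal *tint = eval_color(); stores the address, and tint->r dereferences it to read the value.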

Notes
  1. dot_nd is a state variable holding the dot product of the surface normal and the camera ray direction. It is documented in the mental ray manual under Using and Writing Shaders - State Variables - Intersection.
  2. mi_eval returns a pointer to the parameter value, no matter where it comes from. If the shader accessed its parameters directly, without using mi_eval, it would get garbage (such as 0 or NaN) if the parameter is assigned (connected) rather than given a constant value. More detail in the mental ray manual under Using and Writing Shaders - Parameter Assignment and mi_eval.
  3. A refresher for pointers in C. "Practical Programming in C"