Abbey normal???

It’s just a little thing, but one point I’ve found kind of confusing when playing with the Mobile 3D Graphics API (JSR 184) is how to use the normal vectors.

When you’re defining your 3-dimensional polygon, you can give each vertex some position coordinates (okay, that’s reasonable), color coordinates (kind of amusing for the color to be given by coordinates, but okay), and a normal vector.
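
To make those three kinds of per-vertex data concrete, here’s a minimal sketch (not taken from the pyramid code below, with made-up numbers) of how each one gets wrapped in its own VertexArray and attached to a VertexBuffer, say inside your canvas’s constructor. Positions and normals are given as shorts here, while per-vertex colors are given as bytes:

// A quick sketch of attaching the three kinds of per-vertex data.
short[] positions = { 0, 0, 10,   10, 0, 0,   0, 10, 0 };  // one triangle
byte[] colors = { (byte)255, 0, 0,   0, (byte)255, 0,   0, 0, (byte)255 };
short[] normals = { 0, 0, 1,   1, 0, 0,   0, 1, 0 };

// 3 vertices, 3 components each, 2 bytes per component:
VertexArray positionArray = new VertexArray(3, 3, 2);
positionArray.set(0, 3, positions);
// colors use 1 byte per component:
VertexArray colorArray = new VertexArray(3, 3, 1);
colorArray.set(0, 3, colors);
VertexArray normalArray = new VertexArray(3, 3, 2);
normalArray.set(0, 3, normals);

VertexBuffer vertexBuffer = new VertexBuffer();
vertexBuffer.setPositions(positionArray, 1.0f, null);  // scale 1.0, no bias
vertexBuffer.setColors(colorArray);
vertexBuffer.setNormals(normalArray);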

But what is the normal vector for?

If you’re on the face of one of the triangles of your polygon, the normal vector is logically the vector perpendicular to the flat face of the triangle. So there you don’t need to define a normal, since it’s already determined by the triangle. I guess that’s why the places where we get to define the normal vectors are the vertices. But what does it even mean to be perpendicular to a pointy part of the surface? You might think it’s there to give the graphics engine a hint about how to smooth out and round off the point, except that the 3D graphics engine doesn’t actually reshape the surface…

The normals are used to tell the graphics engine how light should reflect off the surface. So in a sense they do give information about how to smooth and round off the corners, at least from the point of view of the reflected light. If you define the normal vector at each vertex to be perpendicular to the triangle’s face, light will reflect off the triangle as if it were perfectly flat, with a nice, crisp point at the vertex. But if you’d rather smooth out your surface a little (say you’re making a ball and you want it to be less obvious that it’s constructed of triangles), you can define the normal at each vertex as pointing directly out from the center of the ball, which is the perpendicular direction the surface would have if the ball were smooth at that point.

Note that the flat-faces-and-sharp-points model requires you to define more vertices than the smoothed/rounded model. When you’re smoothing, all the faces that meet at a point can share a single vertex with a single normal, whereas to get flat faces, each face that meets at that point needs its own normal, and hence its own copy of the vertex.
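
If it helps to see those two recipes as code, here’s a rough sketch (the helper methods are just my own, not part of the API): for smoothing a shape centered at the origin, the normal at a vertex can simply be the vertex’s own position vector, while a flat-face normal is the cross product of two of the triangle’s edge vectors.

/** Rough sketch: two ways to come up with a normal vector. */
class NormalHelpers {

  // For a shape centered at the origin, a "smooth" normal at a vertex can
  // just be the vertex's own position: it points straight out from the center.
  static float[] smoothNormal(float x, float y, float z) {
    return new float[] { x, y, z };
  }

  // A flat-face normal is perpendicular to the triangle, so take the cross
  // product of two edge vectors.
  static float[] faceNormal(float[] a, float[] b, float[] c) {
    float[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    return new float[] {
      u[1] * v[2] - u[2] * v[1],
      u[2] * v[0] - u[0] * v[2],
      u[0] * v[1] - u[1] * v[0]
    };
  }
}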

As an example, I’ve taken the pyramid from my previous post and given it some normal vectors. Here’s the set of vertices:

private short[] myVertices2 = {
  0, 0, 10,   10, 0, 0,   0, 10, 0,   0, -10, 0,   -10, 0, 0,   0, 0, 10,
  0, -10, 0,   10, 0, 0,   0, 0, 10
};

Recall that using a TriangleStripArray I’ve grouped the vertices into the following strip of triangles: { (0, 0, 10), (10, 0, 0), (0, 10, 0) }, { (10, 0, 0), (0, 10, 0), (0, -10, 0) }, { (0, 10, 0), (0, -10, 0), (-10, 0, 0) }, { (0, -10, 0), (-10, 0, 0), (0, 0, 10) }, plus one last triangle pasted on { (0, -10, 0), (10, 0, 0), (0, 0, 10) }.
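
As a reminder, that grouping falls out of the strip lengths handed to the TriangleStripArray: a strip of n vertices gives n - 2 triangles, since each vertex after the second forms a triangle with the two before it. Here’s the call (the same one used in the full code below):

// { 6, 3 } means the first 6 vertices form one strip of 6 - 2 = 4 triangles,
// and the next 3 vertices form a second strip containing just 1 triangle.
int[] stripLengths = { 6, 3 };
IndexBuffer indexBuffer = new TriangleStripArray(0, stripLengths);  // 0 = start at the first vertex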

And here are the normals I’ve defined for the vertices:

private short[] myNormals2 = {
  0, 0, 10,   10, 0, 0,   0, 10, 0,   0, -10, 0,   -10, 0, 0,   0, 0, 10,
  1, -1, 1,   1, -1, 1,   1, -1, 1
};

So you can see that for the first strip of triangles (the first six vertices), I’ve defined each normal as pointing directly out from the center, in order to get a smoothing effect. Then for the pasted-on triangle (the last three vertices), I’ve defined all three normals as the same vector, perpendicular to the triangle’s face, to get a nice flat surface.
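
In case you’re wondering where the (1, -1, 1) came from: it’s just the cross product of two edges of the pasted-on triangle. Taking A = (0, -10, 0), B = (10, 0, 0), and C = (0, 0, 10), the edges are B - A = (10, 10, 0) and C - A = (0, 10, 10), and their cross product is (10×10 - 0×10, 0×0 - 10×10, 10×10 - 10×0) = (100, -100, 100), which points in the same direction as (1, -1, 1).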

Here’s the result:

[Image: normal.png]

The top triangle in this picture is the flat one.

In order to illustrate this better, I’ve rotated the pyramid a little and moved the camera closer. Also, I’ve switched the pyramid’s appearance from simple polygon mode to a shiny surface material, and I’ve placed an omnidirectional light source (Light.OMNI) near the camera so you can see how the light reflects off the surface. If you’re using ambient light (Light.AMBIENT), which lights all surfaces equally regardless of their orientation, you don’t need to bother with normals; but with the other three types of light (OMNI, DIRECTIONAL, and SPOT), the light hits the surface from a particular direction, so the normals are used to calculate how the light should bounce off.
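
If you want to see the difference for yourself, switching to ambient light is just a matter of changing the light’s mode. Something along these lines in place of the omnidirectional setup in paint() below should do it (a sketch only; I haven’t built it into the listing):

// Ambient light has no position or direction, so the normals have no
// effect on how the surface is lit.
Light ambient = new Light();
ambient.setMode(Light.AMBIENT);
ambient.setIntensity(2.0f);
g3d.resetLights();
g3d.addLight(ambient, new Transform());  // identity transform; placement doesn't matter here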

Just for fun, I decided to see what would happen if I defined some crazy normal vectors — some “Abbey Normals” 😉 — for my pyramid:

private short[] myAbbeyNormals2 = {
  0, 1, 1,   -1, 0, 0,   1, 0, 1,   0, 1, -1,   -1, 0, 0,   1, 1, 1,
  -1, 1, 1,   1, -1, 1,   1, 1, -1
};

This isn’t necessarily useful in practice, but I was curious to see what would happen. Here’s the result:

[Image: abnormals.png]

Note that in both these examples, I didn’t worry at all about the length of each normal vector. The length of the normal doesn’t matter — it isn’t taken into account. All that matters is the direction.

Here’s the code for today’s adventure. I’ve thrown in a method to rotate the pyramid in response to pressing the arrow keys so you can see it from all sides, and the corresponding MIDlet class can be found in my earlier pyramid post:

package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.m3g.*;

/**
 * This is a very simple example class to illustrate 3-D coordinates,
 * normals, and lighting.
 */
public class DemoCanvas extends Canvas {

  /**
   * The information about where the scene is viewed from.
   */
  private Camera myCamera;

  /**
   * The information about how to move the camera.
   */
  private Transform myCameraTransform = new Transform();

  /**
   * The information about how to move the pyramid.
   */
  private Transform myObjectTransform = new Transform();

  /**
   * The angle (in degrees) to rotate the pyramid in response to a keypress.
   */
  public static final float DEFAULT_DISTANCE = 10.0f;

  /**
   * The background. (self-explanatory 😉 )
   */
  private Background myBackground = new Background();

  /**
   * The set of vertices.
   */
  private VertexBuffer myVertexBuffer;

  /**
   * The object that defines how to map the set of vertices into
   * a polygon.
   */
  private IndexBuffer myIndexBuffer;

  /**
   * Information on how the polygon should look in terms of
   * color, texture, shading, etc.
   */
  private Appearance myAppearance;

  /**
   * The list of vertices for the first example pyramid.
   */
  private short[] myVertices1 = {
    0, 0, 10,   10, 0, 0,   0, 10, 0,   0, -10, 0,   -10, 0, 0,   0, 0, 10,
  };

  /**
   * The rule for how to piece together the vertices into a polygon.
   */
  private int[] myTriangleStrip1 = { 6 };

  /**
   * The list of vertices for the second example pyramid.
   */
  private short[] myVertices2 = {
    0, 0, 10,   10, 0, 0,   0, 10, 0,   0, -10, 0,   -10, 0, 0,   0, 0, 10,
    0, -10, 0,   10, 0, 0,   0, 0, 10
  };

  /**
   * The list of normals for the second example pyramid.
   */
  private short[] myNormals2 = {
    0, 0, 10,   10, 0, 0,   0, 10, 0,   0, -10, 0,   -10, 0, 0,   0, 0, 10,
    1, -1, 1,   1, -1, 1,   1, -1, 1
  };

  /**
   * The list of crazy normals for the second example pyramid.
   */
  private short[] myAbbeyNormals2 = {
    0, 1, 1,   -1, 0, 0,   1, 0, 1,   0, 1, -1,   -1, 0, 0,   1, 1, 1,
    -1, 1, 1,   1, -1, 1,   1, 1, -1
  };

  /**
   * The rule for how to piece together the vertices into a polygon.
   */
  private int[] myTriangleStrip2 = { 6, 3 };

  /**
   * Initialize everything.
   */
  public DemoCanvas() {
    try {
      // Create the camera object to define where the polygon is being
      // viewed from and in what way:
      myCamera = new Camera();
      // Set the camera so that it will project the 3-D picture onto the
      // screen in perspective, with a vanishing point in the distance:
      myCamera.setPerspective(60.0f, (float)getWidth() / (float)getHeight(),
          1.0f, 1000.0f);

      // Here we construct the VertexArray, which is a generic data
      // structure for storing collections of coordinate points:
      int numVertices = myVertices2.length / 3;
      // specify how many vertices, plus the fact that each vertex has
      // three coordinates, and each coordinate is coded on two bytes:
      VertexArray va = new VertexArray(numVertices, 3, 2);
      // set the data, starting from index 0:
      va.set(0, numVertices, myVertices2);

      // define the normals (this listing uses the crazy "Abbey Normal"
      // set; swap in myNormals2 to get the smoothed/flat version):
      VertexArray na = new VertexArray(numVertices, 3, 2);
      // set the data, starting from index 0:
      na.set(0, numVertices, myAbbeyNormals2);

      // Now create a 3-D object of it.
      // Here we can group a set of different VertexArrays, one
      // giving positions, one giving colors, one giving normals:
      myVertexBuffer = new VertexBuffer();
      myVertexBuffer.setPositions(va, 1.0f, null);
      myVertexBuffer.setNormals(na);
      // Color the polygon white:
      myVertexBuffer.setDefaultColor(0xffffff);

      // Here we define how to piece together the vertices into
      // a polygon:
      myIndexBuffer = new TriangleStripArray(0, myTriangleStrip2);

      // Let's try creating a more complex appearance:
      Material material = new Material();
      material.setShininess(100.0f);
      myAppearance = new Appearance();
      myAppearance.setMaterial(material);

      // color the background black:
      myBackground.setColor(0x000000);

      // We set the camera's X position and Y position to 0
      // so that we're looking straight down at the origin
      // of the x-y plane. The Z coordinate tells how far
      // away the camera is; increasing this value takes
      // you farther from the polygon, making it appear
      // smaller.
      myCameraTransform.postTranslate(0.0f, 0.0f, 25.0f);

      // reset the object's original orientation:
      myObjectTransform.setIdentity();
    } catch(Exception e) {
      e.printStackTrace();
    }
  }

  /**
   * Paint the graphics onto the screen.
   */
  protected void paint(Graphics g) {
    try {
      // Start by getting a handle to the Graphics3D
      // object which does the work of projecting the
      // 3-D scene onto the 2-D screen (rendering):
      Graphics3D g3d = Graphics3D.getInstance();
      // Bind the Graphics3D object to the Graphics
      // instance of the current canvas:
      g3d.bindTarget(g);
      // Clear the screen by painting it with the
      // background image:
      g3d.clear(myBackground);

      // now add the light:
      Light light = new Light();
      light.setMode(Light.OMNI);
      light.setIntensity(2.0f);
      Transform lightTransform = new Transform();
      lightTransform.postTranslate(0.0f, 0.0f, 50.0f);
      g3d.resetLights();
      g3d.addLight(light, lightTransform);

      g3d.setCamera(myCamera, myCameraTransform);

      // Now render, project the 3D scene onto the flat screen:
      g3d.render(myVertexBuffer, myIndexBuffer, myAppearance,
          myObjectTransform);

      // Done, the canvas graphics can be freed now:
      g3d.releaseTarget();

    } catch(Exception e) {
      e.printStackTrace();
    }
  }

  /**
   * Rotate the pyramid in response to game commands (arrow keys).
   */
  public void keyPressed(int keyCode) {
    switch(getGameAction(keyCode)) {
      case Canvas.UP:
        myObjectTransform.postRotate(DEFAULT_DISTANCE, -1.0f, 0.0f, 0.0f);
        break;
      case Canvas.DOWN:
        myObjectTransform.postRotate(DEFAULT_DISTANCE, 1.0f, 0.0f, 0.0f);
        break;
      case Canvas.RIGHT:
        myObjectTransform.postRotate(DEFAULT_DISTANCE, 0.0f, 1.0f, 0.0f);
        break;
      case Canvas.LEFT:
        myObjectTransform.postRotate(DEFAULT_DISTANCE, 0.0f, -1.0f, 0.0f);
        break;
      default:
        break;
    }
    repaint();
  }

}
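
If you don’t have the earlier post handy, a bare-bones MIDlet along these lines should be enough to display the canvas (the class name DemoMIDlet here is just a placeholder, not necessarily the one from the earlier post):

package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.midlet.*;

/**
 * A bare-bones MIDlet to display the DemoCanvas.
 */
public class DemoMIDlet extends MIDlet implements CommandListener {

  private Command myExitCommand = new Command("Exit", Command.EXIT, 1);
  private DemoCanvas myCanvas = new DemoCanvas();

  public void startApp() {
    myCanvas.addCommand(myExitCommand);
    myCanvas.setCommandListener(this);
    Display.getDisplay(this).setCurrent(myCanvas);
  }

  public void pauseApp() {
  }

  public void destroyApp(boolean unconditional) {
  }

  public void commandAction(Command c, Displayable s) {
    if (c == myExitCommand) {
      destroyApp(false);
      notifyDestroyed();
    }
  }

}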
