Archive for December, 2006

Abbey normal???

It’s just a little thing, but one point I’ve found kind of confusing when playing with the Mobile 3D Graphics API (JSR 184) is how to use the normal vectors.

When you’re defining your 3-dimensional polygon, you can give each vertex some position coordinates (okay, that’s reasonable), color coordinates (kind of amusing for the color to be given by coordinates, but okay), and a normal vector.

But what is the normal vector for?

If you’re on the face of one of the triangles of your polygon, the normal vector is logically the vector perpendicular to the flat face of the triangle. So you don’t need to define a normal since the normal is determined by the triangle. I guess that’s why the places we get to define the normal vectors are at the vertices. But what does it even mean to be perpendicular to a pointy part of the surface? It might be used to give the graphics functionality a hint as to how to smooth out and round the point, except that the 3D graphics engine doesn’t do that…

The normals are used to tell the graphics engine how to make light reflect off the surface. So in a sense it does give information about how to smooth and round off the corners — at least from the point of view of the reflected light. If you define the normal vector to be perpendicular to a triangle face, light will reflect off of the triangle as if the triangle is perfectly flat, with a nice, crisp point at the vertex. But if you’d rather smooth out your surface a little — say you’re making a ball and you want it to be less obvious that it’s constructed of triangles — you can define a normal pointing directly out from the center of the ball towards each vertex, giving what the perpendicular direction (normal vector) would be if the ball were smooth at that point. (The flat-faces-and-sharp-points model requires you to define more vertices than the smoothed/rounded model because when you’re smoothing you can re-use the normal vector, whereas to get flat faces you need to define a new normal vector — hence a new corresponding vertex — for each face that meets at a given point.)
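To make the smooth-shading case concrete, here’s a tiny plain-Java sketch (just the math, no M3G classes): for a ball centered at the origin, the normal at any vertex is simply the direction from the center out to that vertex, i.e. the vertex position normalized to unit length.

```java
public class SmoothNormal {

    // For a ball centered at the origin, the smooth normal at a vertex is
    // the direction from the center to the vertex: the position itself,
    // scaled to unit length.
    static float[] smoothNormal(float x, float y, float z) {
        float len = (float) Math.sqrt(x * x + y * y + z * z);
        return new float[] { x / len, y / len, z / len };
    }

    public static void main(String[] args) {
        // The top vertex of the pyramid, treated as lying on a sphere
        // of radius 10:
        float[] n = smoothNormal(0, 0, 10);
        System.out.println(n[0] + " " + n[1] + " " + n[2]);  // 0.0 0.0 1.0
    }
}
```

Every triangle that shares this vertex can reuse the same normal, which is exactly why the smoothed model needs fewer vertices than the flat-faced one.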

As an example, I’ve taken my pyramid from my previous example and I’ve given it some normal vectors. Here’s the set of vertices:

private short[] myVertices2 = {
    0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    0, -10, 0, 10, 0, 0, 0, 0, 10
};

Recall that using a TriangleStripArray I’ve grouped the vertices into the following strip of triangles: { (0, 0, 10), (10, 0, 0), (0, 10, 0) }, { (10, 0, 0), (0, 10, 0), (0, -10, 0) }, { (0, 10, 0), (0, -10, 0), (-10, 0, 0) }, { (0, -10, 0), (-10, 0, 0), (0, 0, 10) }, plus one last triangle pasted on { (0, -10, 0), (10, 0, 0), (0, 0, 10) }.

And here are the normals I’ve defined for the vertices:

private short[] myNormals2 = {
    0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    1, -1, 1, 1, -1, 1, 1, -1, 1
};

So you can see that for the top strip of triangles, I’ve defined the normal as going directly out from the center in order to get a smoothing effect. Then for the pasted on triangle (the last three vertices), I have defined all three normals as being the same vector perpendicular to the triangle to get a nice flat surface.
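You can verify that flat normal yourself: the perpendicular to a triangle’s face is the cross product of two of its edges. A quick plain-Java check (again, no M3G classes involved), using the pasted-on triangle from the strip above:

```java
public class FaceNormal {

    // The flat normal of a triangle is the cross product of two of its
    // edges (any two edges sharing a vertex will do).
    static float[] faceNormal(float[] p0, float[] p1, float[] p2) {
        float[] e1 = { p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2] };
        float[] e2 = { p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2] };
        return new float[] {
            e1[1] * e2[2] - e1[2] * e2[1],
            e1[2] * e2[0] - e1[0] * e2[2],
            e1[0] * e2[1] - e1[1] * e2[0]
        };
    }

    public static void main(String[] args) {
        // The pasted-on triangle { (0, -10, 0), (10, 0, 0), (0, 0, 10) }:
        float[] n = faceNormal(new float[] { 0, -10, 0 },
                               new float[] { 10, 0, 0 },
                               new float[] { 0, 0, 10 });
        // Prints 100.0 -100.0 100.0, i.e. the direction (1, -1, 1),
        // matching the last three normals in myNormals2:
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```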

Here’s the result:

normal.png

The top triangle in this picture is the flat one.

In order to illustrate this better, I’ve rotated the pyramid a little and moved the camera closer. Also, I’ve switched the pyramid’s appearance from polygon mode to having a shiny surface material and I’ve placed a light source (omnidirectional, Light.OMNI) near the camera so you can see how the light reflects off the surface. If you’re using ambient light (Light.AMBIENT), which lights all surfaces equally in all directions, you don’t need to bother with normals, but with the other three types of light (OMNI, DIRECTIONAL, and SPOT) the light hits the surface from a particular direction, so the normals are used to calculate how the light should bounce off.

Just for fun, I decided to see what would happen if I defined some crazy normal vectors — some “Abbey Normals” 😉 — for my pyramid:

private short[] myAbbeyNormals2 = {
    0, 1, 1, -1, 0, 0, 1, 0, 1, 0, 1, -1, -1, 0, 0, 1, 1, 1,
    -1, 1, 1, 1, -1, 1, 1, 1, -1
};

This isn’t necessarily useful in practice, but I was curious to see what would happen. Here’s the result:

abnormals.png

Note that in both these examples, I didn’t worry at all about the length of each normal vector. The length of the normal doesn’t matter — it isn’t taken into account. All that matters is the direction.
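In other words, the engine only cares about the direction, as if each normal were scaled to unit length before lighting. Here’s a hedged sketch of why the length then drops out of a simple Lambert-style diffuse term (this is illustrative math, not the actual M3G implementation):

```java
public class NormalLength {

    // Lambert-style diffuse term: brightness depends on the angle between
    // the surface normal and the light direction, via their dot product.
    // Dividing by the normal's length makes its magnitude irrelevant.
    static float diffuse(float[] normal, float[] light) {
        float len = (float) Math.sqrt(normal[0] * normal[0]
                + normal[1] * normal[1] + normal[2] * normal[2]);
        float dot = (normal[0] * light[0] + normal[1] * light[1]
                + normal[2] * light[2]) / len;
        return Math.max(dot, 0.0f);
    }

    public static void main(String[] args) {
        float[] light = { 0, 0, 1 };  // unit vector toward the light
        // Same direction, different lengths -- same brightness:
        System.out.println(diffuse(new float[] { 1, -1, 1 }, light));
        System.out.println(diffuse(new float[] { 10, -10, 10 }, light));
    }
}
```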

Here’s the code for today’s adventure. I’ve thrown in a method to rotate the pyramid in response to pressing the arrow keys so you can see it from all sides, and the corresponding MIDlet class can be found in my earlier pyramid post:

package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.m3g.*;

/**
 * This is a very simple example class to illustrate 3-D coordinates.
 */
public class DemoCanvas extends Canvas {

    /**
     * The information about where the scene is viewed from.
     */
    private Camera myCamera;

    /**
     * The information about how to move the camera.
     */
    private Transform myCameraTransform = new Transform();

    /**
     * The information about how to move the pyramid.
     */
    private Transform myObjectTransform = new Transform();

    /**
     * The angle (in degrees) to rotate the pyramid in response
     * to a keypress.
     */
    public static final float DEFAULT_DISTANCE = 10.0f;

    /**
     * The background. (self-explanatory 😉 )
     */
    private Background myBackground = new Background();

    /**
     * The set of vertices.
     */
    private VertexBuffer myVertexBuffer;

    /**
     * The object that defines how to map the set of vertices into
     * a polygon.
     */
    private IndexBuffer myIndexBuffer;

    /**
     * Information on how the polygon should look in terms of
     * color, texture, shading, etc.
     */
    private Appearance myAppearance;

    /**
     * The list of vertices for the first example pyramid.
     */
    private short[] myVertices1 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip1 = { 6 };

    /**
     * The list of vertices for the second example pyramid.
     */
    private short[] myVertices2 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
        0, -10, 0, 10, 0, 0, 0, 0, 10
    };

    /**
     * The list of normals for the second example pyramid.
     */
    private short[] myNormals2 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
        1, -1, 1, 1, -1, 1, 1, -1, 1
    };

    /**
     * The list of crazy normals for the second example pyramid.
     */
    private short[] myAbbeyNormals2 = {
        0, 1, 1, -1, 0, 0, 1, 0, 1, 0, 1, -1, -1, 0, 0, 1, 1, 1,
        -1, 1, 1, 1, -1, 1, 1, 1, -1
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip2 = { 6, 3 };

    /**
     * Initialize everything.
     */
    public DemoCanvas() {
        try {
            // Create the camera object to define where the polygon is being
            // viewed from and in what way:
            myCamera = new Camera();
            // Set the camera so that it will project the 3-D picture onto the
            // screen in perspective, with a vanishing point in the distance:
            myCamera.setPerspective(60.0f, (float)getWidth() / (float)getHeight(),
                    1.0f, 1000.0f);

            // Here we construct the VertexArray, which is a generic data
            // structure for storing collections of coordinate points:
            int numVertices = myVertices2.length / 3;
            // specify how many vertices, plus the fact that each vertex has
            // three coordinates, and each coordinate is coded on two bytes:
            VertexArray va = new VertexArray(numVertices, 3, 2);
            // set the data, starting from index 0:
            va.set(0, numVertices, myVertices2);

            // define the normals:
            VertexArray na = new VertexArray(numVertices, 3, 2);
            // set the data, starting from index 0:
            na.set(0, numVertices, myAbbeyNormals2);

            // Now create a 3-D object of it.
            // Here we can group a set of different VertexArrays, one
            // giving positions, one giving colors, one giving normals:
            myVertexBuffer = new VertexBuffer();
            myVertexBuffer.setPositions(va, 1.0f, null);
            myVertexBuffer.setNormals(na);
            // Color the polygon white:
            myVertexBuffer.setDefaultColor(0xffffff);

            // Here we define how to piece together the vertices into
            // a polygon:
            myIndexBuffer = new TriangleStripArray(0, myTriangleStrip2);

            // Let's try creating a more complex appearance:
            Material material = new Material();
            material.setShininess(100.0f);
            myAppearance = new Appearance();
            myAppearance.setMaterial(material);

            // color the background black:
            myBackground.setColor(0x000000);

            // We set the camera's X position and Y position to 0
            // so that we're looking straight down at the origin
            // of the x-y plane. The Z coordinate tells how far
            // away the camera is -- increasing this value takes
            // you farther from the polygon, making it appear
            // smaller.
            myCameraTransform.postTranslate(0.0f, 0.0f, 25.0f);

            // reset the object's original orientation:
            myObjectTransform.setIdentity();
        } catch(Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Paint the graphics onto the screen.
     */
    protected void paint(Graphics g) {
        try {
            // Start by getting a handle to the Graphics3D
            // object which does the work of projecting the
            // 3-D scene onto the 2-D screen (rendering):
            Graphics3D g3d = Graphics3D.getInstance();
            // Bind the Graphics3D object to the Graphics
            // instance of the current canvas:
            g3d.bindTarget(g);
            // Clear the screen by painting it with the
            // background image:
            g3d.clear(myBackground);

            // now add the light:
            Light light = new Light();
            light.setMode(Light.OMNI);
            light.setIntensity(2.0f);
            Transform lightTransform = new Transform();
            lightTransform.postTranslate(0.0f, 0.0f, 50.0f);
            g3d.resetLights();
            g3d.addLight(light, lightTransform);

            g3d.setCamera(myCamera, myCameraTransform);

            // Now render, project the 3D scene onto the flat screen:
            g3d.render(myVertexBuffer, myIndexBuffer, myAppearance,
                    myObjectTransform);

            // Done, the canvas graphics can be freed now:
            g3d.releaseTarget();

        } catch(Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Rotate the object in response to game commands.
     */
    public void keyPressed(int keyCode) {
        switch(getGameAction(keyCode)) {
        case Canvas.UP:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, -1.0f, 0.0f, 0.0f);
            break;
        case Canvas.DOWN:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, 1.0f, 0.0f, 0.0f);
            break;
        case Canvas.RIGHT:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, 0.0f, 1.0f, 0.0f);
            break;
        case Canvas.LEFT:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, 0.0f, -1.0f, 0.0f);
            break;
        default:
            break;
        }
        repaint();
    }

}

Coordinates in JSR 184: A very simple example

I love the way the JavaDoc for Graphics3D encouragingly starts with “Using the Graphics3D is very straightforward” just before launching into the precisions about the different types of rendering modes and targets, viewports, antialiasing, dithering, etc. 😉

JSR 184 gives an incredibly rich API for creating 3-D animations (and especially games) for a MIDP handset. It’s designed to allow you to import complex computer-generated 3-D objects as well as to create your own 3-D objects by hand, with a fantastic array of options in terms of lighting, surface textures, etc. It also comes with an equally rich assortment of new terms (Mesh, IndexBuffer, TriangleStripArray, CompositingMode…) that can look like a foreign language to those new to 3-D programming. And just understanding how all of the different coordinate systems work and interact with each other is not trivial.

Not to be discouraging or anything — it’s not so daunting if you start simple. So my programming challenge that I assigned myself for today was to write the simplest 3-D example possible that illustrates the basics of how the coordinate system works and how to define a 3-D polygon.

My example is a pyramid with a square base viewed from above.

This is a two-part example: In part one, I start with just the square base with two of the triangular sides (opposite each other) attached. So picture a square paper with a triangle attached along the right edge and the left edge, and the two triangles folded up to meet at a point at the top. In part two, I start with this same incomplete pyramid and glue on a third side. From there, completing the pyramid should be no problem. 😀

I’m going to start by posting the code so you can have a look, and then I’ll explain in a little more detail, particularly how the VertexArray works and how the TriangleStripArray defines how to piece the vertices together to form a polygon.

DemoCanvas.java:


package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.m3g.*;

/**
 * This is a very simple example class to illustrate 3-D coordinates.
 */
public class DemoCanvas extends Canvas {

    /**
     * The information about where the scene is viewed from.
     */
    private Camera myCamera;

    /**
     * The background. (self-explanatory 😉 )
     */
    private Background myBackground = new Background();

    /**
     * The set of vertices.
     */
    private VertexBuffer myVertexBuffer;

    /**
     * The object that defines how to map the set of vertices into
     * a polygon.
     */
    private IndexBuffer myIndexBuffer;

    /**
     * Information on how the polygon should look in terms of
     * color, texture, shading, etc.
     */
    private Appearance myAppearance;

    /**
     * The list of vertices for the first example pyramid.
     */
    private short[] myVertices1 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip1 = { 6 };

    /**
     * The list of vertices for the second example pyramid.
     */
    private short[] myVertices2 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
        0, -10, 0, 10, 0, 0, 0, 0, 10
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip2 = { 6, 3 };

    /**
     * Initialize everything.
     */
    public DemoCanvas() {
        try {
            // Create the camera object to define where the polygon is being
            // viewed from and in what way:
            myCamera = new Camera();
            // Set the camera so that it will project the 3-D picture onto the
            // screen in perspective, with a vanishing point in the distance:
            myCamera.setPerspective(60.0f, (float)getWidth() / (float)getHeight(),
                    1.0f, 1000.0f);

            // Here we construct the VertexArray, which is a generic data
            // structure for storing collections of coordinate points:
            int numVertices = myVertices1.length / 3;
            // specify how many vertices, plus the fact that each vertex has
            // three coordinates, and each coordinate is coded on two bytes:
            VertexArray va = new VertexArray(numVertices, 3, 2);
            // set the data, starting from index 0:
            va.set(0, numVertices, myVertices1);

            // Now create a 3-D object of it.
            // Here we could group a set of different VertexArrays, one
            // giving positions, one giving colors, one giving normals,
            // but for simplicity we're only setting position coordinates:
            myVertexBuffer = new VertexBuffer();
            myVertexBuffer.setPositions(va, 1.0f, null);
            // Color the polygon white:
            myVertexBuffer.setDefaultColor(0xffffff);

            // Here we define how to piece together the vertices into
            // a polygon:
            myIndexBuffer = new TriangleStripArray(0, myTriangleStrip1);

            // We want the appearance as simple as possible, so set the
            // appearance to polygon mode:
            PolygonMode pm = new PolygonMode();
            pm.setShading(PolygonMode.SHADE_FLAT);

            myAppearance = new Appearance();
            myAppearance.setPolygonMode(pm);

            // color the background black:
            myBackground.setColor(0x000000);
        } catch(Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Paint the graphics onto the screen.
     */
    protected void paint(Graphics g) {
        try {
            // Start by getting a handle to the Graphics3D
            // object which does the work of projecting the
            // 3-D scene onto the 2-D screen (rendering):
            Graphics3D g3d = Graphics3D.getInstance();
            // Bind the Graphics3D object to the Graphics
            // instance of the current canvas:
            g3d.bindTarget(g);
            // Clear the screen by painting it with the
            // background image:
            g3d.clear(myBackground);

            // Now set where we're viewing the scene from:
            Transform cameraTransform = new Transform();
            // We set the camera's X position and Y position to 0
            // so that we're looking straight down at the origin
            // of the x-y plane. The Z coordinate tells how far
            // away the camera is -- increasing this value takes
            // you farther from the polygon, making it appear
            // smaller. Try changing these values to view the
            // polygon from different places:
            cameraTransform.postTranslate(0.0f, 0.0f, 100.0f);
            g3d.setCamera(myCamera, cameraTransform);

            // Now set the location of the object.
            // if this were an animation we would probably
            // translate or rotate it here:
            Transform objectTransform = new Transform();
            objectTransform.setIdentity();

            // Now render: (Yay!!! finally!!!)
            g3d.render(myVertexBuffer, myIndexBuffer, myAppearance, objectTransform);

            // Done, the canvas graphics can be freed now:
            g3d.releaseTarget();

        } catch(Exception e) {
            e.printStackTrace();
        }
    }

}

***

The first example (using myVertices1 and myTriangleStrip1) gives a result that looks like this:

pyramid1.png

The second example (using myVertices2 and myTriangleStrip2) gives a result that looks like this:

pyramid2.png

In the first example, I constructed my TriangleStripArray with the arguments 0 and a one-element array: { 6 }. That means start from the first vertex in the vertex array (actually the zeroth element — you know what I mean), and then make one strip of triangles from the six vertices. The triangles are defined by taking all sets of three consecutive vertices, the first one starting from the first vertex, the second one starting from the second vertex, etc. So each triangle in the triangle strip shares a side with the next triangle and shares another side with the previous triangle as you can see from this example:

Vertex array #1 looks like this: { 0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10 }, where every three values together form one vertex and three consecutive vertices form a triangle. So the corresponding strip is made of the following set of triangles: { (0, 0, 10), (10, 0, 0), (0, 10, 0) }, { (10, 0, 0), (0, 10, 0), (0, -10, 0) }, { (0, 10, 0), (0, -10, 0), (-10, 0, 0) }, { (0, -10, 0), (-10, 0, 0), (0, 0, 10) }. (Here I’ve surrounded the xyz-coordinates of each vertex in parentheses and each triangle in brackets so you can see the list of triangles more easily — this doesn’t represent any syntax that appears in the code.) Note that the first (and last) vertex is the top point and the middle two triangles together form the square base of the pyramid.
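If it helps to see the unrolling spelled out, here’s a small plain-Java helper (purely illustrative, not part of the M3G API) that decodes a flat vertex list plus a set of strip lengths into individual triangles, the same way a TriangleStripArray groups them:

```java
import java.util.ArrayList;
import java.util.List;

public class StripDecoder {

    // Decode a flat (x, y, z, x, y, z, ...) vertex list, grouped into
    // strips by stripLengths, into individual triangles: within a strip,
    // each consecutive window of three vertices forms one triangle.
    static List<short[][]> decode(short[] vertices, int[] stripLengths) {
        List<short[][]> triangles = new ArrayList<short[][]>();
        int v = 0;  // index of the current vertex (not coordinate)
        for (int len : stripLengths) {
            for (int i = 0; i + 2 < len; i++) {
                short[][] tri = new short[3][3];
                for (int j = 0; j < 3; j++) {
                    int base = (v + i + j) * 3;
                    tri[j][0] = vertices[base];
                    tri[j][1] = vertices[base + 1];
                    tri[j][2] = vertices[base + 2];
                }
                triangles.add(tri);
            }
            v += len;
        }
        return triangles;
    }

    public static void main(String[] args) {
        short[] myVertices2 = {
            0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
            0, -10, 0, 10, 0, 0, 0, 0, 10
        };
        // { 6, 3 }: a strip of four triangles, then one more triangle.
        List<short[][]> tris = decode(myVertices2, new int[] { 6, 3 });
        System.out.println(tris.size() + " triangles");  // 5 triangles
    }
}
```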

Even though the square base is part of the polygon, it turned out black on the screen. That’s because by default the inside faces are invisible (not rendered). If you’d like the inside faces to be visible, set your PolygonMode’s culling to CULL_NONE. The computer determines which side is “inside” and which side is “outside” by whether the vertices of the triangle are defined in clockwise or counter-clockwise order. I’d explain how to figure out which side is which if I weren’t dyslexic — normally in such cases I guess, then compile and run, then invert the values if I guessed wrong. 😉
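For the record, there is a non-guessing approach: look at the triangle as projected onto the screen plane, and check the sign of the z component of the cross product of two of its edges. Here’s a sketch, assuming the usual OpenGL-style convention that counter-clockwise winding (as seen by the camera) means front-facing; treat that convention as something to double-check against your own implementation:

```java
public class Winding {

    // Returns true if the triangle, as seen by a camera looking down the
    // -z axis, has its vertices in counter-clockwise order: the z component
    // of the cross product of two edges is positive. Each point is an
    // (x, y) pair in screen-plane coordinates.
    static boolean isCounterClockwise(float[] p0, float[] p1, float[] p2) {
        float crossZ = (p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p1[1] - p0[1]) * (p2[0] - p0[0]);
        return crossZ > 0;
    }

    public static void main(String[] args) {
        // The pasted-on triangle from the example, ignoring its z values:
        float[] a = { 0, -10 }, b = { 10, 0 }, c = { 0, 0 };
        System.out.println(isCounterClockwise(a, b, c));  // true
    }
}
```

Reversing the order of the vertices flips the answer, which is exactly the "attached it backwards" effect described above.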

In the second example, I constructed my TriangleStripArray with the arguments 0 and a two-element array: { 6, 3 }. This creates the same strip of triangles as the first one, and then makes another strip of triangles from the next three vertices it finds. Three vertices make one triangle, so using the larger second example array we get the additional triangle { (0, -10, 0), (10, 0, 0), (0, 0, 10) } giving one additional side. This side is white the way I’ve defined it, but if I’d defined it in the wrong order (i.e. attached it backwards), it would appear black from my camera’s angle. (Note that I could actually have used myVertices2 for both examples and the result would have been the same.)

For completeness, I’ll post the simple MIDlet class that goes with this. Also note that this program uses the type float, and I found I had to set my project to CLDC-1.1 (under Settings > API Selection in ktoolbar) to get it to compile:


package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.midlet.MIDlet;

/**
 * A simple 3D example.
 */
public class TestMIDlet extends MIDlet implements CommandListener {

    private Command myExitCommand = new Command("Exit", Command.EXIT, 1);
    private DemoCanvas myCanvas = new DemoCanvas();

    /**
     * Initialize the Displayables.
     */
    public void startApp() {
        myCanvas.addCommand(myExitCommand);
        myCanvas.setCommandListener(this);
        Display.getDisplay(this).setCurrent(myCanvas);
        myCanvas.repaint();
    }

    public void pauseApp() {
    }

    public void destroyApp(boolean unconditional) {
    }

    /**
     * Change the display in response to a command action.
     */
    public void commandAction(Command command, Displayable screen) {
        if(command == myExitCommand) {
            destroyApp(true);
            notifyDestroyed();
        }
    }

}

Amazon reviews

Since this blog is a companion blog for my book J2ME Games With MIDP2, today I’d like to talk about some of the book’s Amazon reviews. I won’t reprint the reviews in full here (you can see them on my book’s Amazon page), but rather just address some of the comments. I think this chunk sums up the main criticism:

It is basically a review of the code of a few simple games (like a cowboy jumping tumbleweeds, or a simple 2d maze) with very little space devoted to theory and explanations.. both of the APIs and of the internal logic and algorithms. Not that this book isn’t useful.. it is but you have to wade through a lot of code, and I think the author could have done a much better job if for example she had taken the time to EXPLAIN the maze generation algorithms instead of just saying “look at the code”.

What can I say? The text-to-code-sample ratio might not have been optimal. 😉

Actually, I wrote a lot of the theory and explanations into extensive JavaDoc comments in the sample code, but that’s obviously not the most user-friendly format, especially since this is a book. As another critical reviewer said: “If you are the type of person that learns by reading code then you will already have learned the APIs by looking at the sample code. The reason we buy technical books is to teach us how to use the APIs through a combination of well annotated example code, well organized reference material for the APIs, and illustrations that demonstrate best practice code flow.”

This is a valid point, and all I can really say is that I’m working on improving this in my technical writing. I think my bluetooth article the other day shows improvement on the points the reviewer mentions. I started with an overview of what the code needs to do, then talked a little about what options are available and explained why I chose the strategy I did. Plus I searched for additional references on the web, and provided links and a brief explanation of what points I’d learned in them and how I applied them in my project. In that article I didn’t even post the code sample (although I will at some point), essentially because I wasn’t sure the bluetooth part of the code was clear and concise enough to be useful without the complete sample program.

It’s kind of disappointing to see the maze algorithm singled out as not being explained clearly enough. Part of the point to that example was to show that you can make something fun from something simple. What happened was that since the algorithm to generate the maze wasn’t something that applies to games in general, I just wrote the explanation of how it works into the JavaDoc of the code samples. But clearly it wouldn’t have hurt to have included an overview/explanation of it in the text part of the chapter as well. I’ll take it as a challenge to write a concise explanation of the algorithm and post it here to this blog. (I want to move on to some graphics first, but maybe I’ll write up an explanation of it over the holidays since it’s something I don’t have to be sitting at the computer to do.)

I’ll close with a positive comment from a review titled “Brings the fun back to Java”:

The author of this book has a nice, easy to read style of writing. Her enthusiasm for the topic comes through and makes you want to try the many sample games.

This is what I’m shooting for, of course. I’d like to show that it’s not only fun to play games but also to write them as you use your ingenuity to tackle the special challenges you face when programming for a small device. 😀