Archive for the ‘Graphics’ Category

Experimenting with BlackBerry Graphics!

As I said the other day, I started my space explorer game by trying out a bunch of different ways to animate the rocketship flying through space.  (I’ve bundled all the source code and resources for the complete example — you can download it here.)  I started by just creating a standard MIDlet using the javax.microedition.lcdui.game classes.  Here’s what it looks like in the debugger:

In the image, I’ve set the debugger to show memory usage.  In this case it’s “lcduispace” — the top process on the list.  Next, I decided to try out the technique I described in my earlier post:  use the same lcdui game implementation, but write each frame into an image buffer and transfer it from the MIDP Graphics implementation to RIM’s proprietary implementation to display it in a RIMlet.  Here it is in the debugger:

Continue reading


BlackBerry’s two Graphics implementations: Can you use them both? In the same App?

When I first started programming for BlackBerry, one of the things that struck me as most odd was the two completely independent graphics APIs.  For MIDlets, there’s javax.microedition.lcdui.Graphics, and for RIM’s own profile, there’s net.rim.device.api.ui.Graphics.  The two versions are so similar to one another that — if you’re careful — you can write a game that can use either of the two graphics APIs interchangeably, just by swapping out the import statements (and using a different set of lifecycle classes).  That’s what I illustrated in Chapter 3 of my book.
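To give a concrete (if trivial) picture, here’s a minimal sketch of the kind of shared drawing code I mean: it compiles against either import, since both Graphics classes offer setColor and fillRect (the class and method names around the calls are my own invention, not from the book):

// MIDlet version:
import javax.microedition.lcdui.Graphics;
// RIMlet version (swap in place of the line above):
// import net.rim.device.api.ui.Graphics;

public class BackgroundPainter {

    // Both Graphics implementations declare setColor(int)
    // and fillRect(int, int, int, int), so this method
    // compiles unchanged against either import:
    public void paintBackground(Graphics g, int width, int height) {
        g.setColor(0x000000);
        g.fillRect(0, 0, width, height);
    }
}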

But I wondered: What if I want to use the javax.microedition.lcdui.game package while still taking advantage of RIM’s proprietary UI component handling?  Is that even possible?  Note that you can’t just place an LCDUI Sprite onto a RIM Screen or use lcdui Images interchangeably with RIM Images.  Yet there’s nothing to stop you from instantiating many of the lcdui graphics-related classes in a RIMlet — either type of application has access to the whole API.

Through experimentation, I found that it’s quite possible to take a game that was written using lcdui game Layers and run it in a RIMlet.  The trick is the following:

Continue reading
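The details are behind the cut, but here’s a minimal sketch of the buffer-copying idea from my earlier post, assuming a MIDP LayerManager field called myLayerManager and example screen dimensions (the field names and dimensions are mine, not RIM’s):

// Inside a net.rim.device.api.ui.Screen subclass.
// Fully-qualified names keep the two Graphics classes apart:

private static final int WIDTH = 240;
private static final int HEIGHT = 160;

private javax.microedition.lcdui.Image myBuffer =
        javax.microedition.lcdui.Image.createImage(WIDTH, HEIGHT);
private int[] myPixels = new int[WIDTH * HEIGHT];

protected void paint(net.rim.device.api.ui.Graphics g) {
    // Let the lcdui game classes draw the frame into the MIDP buffer:
    myLayerManager.paint(myBuffer.getGraphics(), 0, 0);
    // Copy the pixel data out of the MIDP Image...
    myBuffer.getRGB(myPixels, 0, WIDTH, 0, 0, WIDTH, HEIGHT);
    // ...and hand it to RIM's Graphics implementation to display:
    g.drawRGB(myPixels, 0, WIDTH, 0, 0, WIDTH, HEIGHT);
}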

Easy, fun game sprites made with SVG!

I downloaded and installed Inkscape in order to draw an animation to play with the Scalable Vector Graphics API.  Inkscape was quite helpful for creating my game’s opening SVG animation (though it would have been more helpful if it could save the file in “SVG Tiny”…).  What I didn’t expect was how helpful Inkscape is for drawing standard game sprites — drawn first in Scalable Vector Graphics format, then exported as PNG files!

I needed a rocket sprite for my “Space Explorer” game.  Here’s what I came up with:

rocket-flames-96

You can see that the four frames give a spinning effect.  (The lower row gives the frame sequence for when the engines are turned on.)  I’m more an engineer than an artist, but here’s how Inkscape and SVG made it easy for me to draw this:

Continue reading

FPUIL: Frog-Parrot User Interface Library

I’ve finished my little library of UI utilities, which you can download here: FPUIL.tar.gz or FPUIL.zip.

Just unzip the archive, placing the WidgetCanvas folder in the apps folder of the WTK, and you can open it as a project.

This set of classes is intended to help with a “managing fragmentation” strategy of type “DERIVE-MULTI > SELECTIVE” (see Device Fragmentation of Mobile Applications for the theory).  As explained on TomSoft, this is a good strategy for one-shot projects, and it’s the strategy I’ll be discussing at Jazoon.

Here are some notes on using this little library:

Continue reading

Dungeon Plus!!!

I’ve just created a new version of my Dungeon game featuring a custom user interface!!!

As I was saying the other day, the lcdui components don’t make for a very attractive user interface. Particularly for a game, it’s nice to have all of the game’s GUI components match the style and theme of the game. Rather than going with a third-party solution, it’s possible to program the entire user interface yourself by painting it onto a canvas. One advantage of this is that you can really optimize, adding only code that’s relevant to the game rather than filling your game jar with code for widgets you might have used but didn’t.

For my dungeon game (which you can download from my game page, conveniently added to my sidebar), I’ve added a bunch of new graphical features to make it more attractive. First, it starts with an animated splash screen:

sm_splash.png
The title and the keys fade in if the handset supports a sufficient amount of blending; otherwise, they slide in from the sides of the screen.
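In case you’re wondering how a MIDlet can tell whether the handset supports blending: I won’t claim this is the exact check the game uses, but MIDP’s Display.numAlphaLevels() gives a reasonable hint. A minimal sketch:

// somewhere in the MIDlet's initialization ("this" is the MIDlet):
Display display = Display.getDisplay(this);
// numAlphaLevels() returns 2 if the handset supports only fully
// opaque and fully transparent pixels; more levels mean the
// handset can really blend:
boolean canFade = display.numAlphaLevels() > 2;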

Then I’ve painted the timer and custom softkeys onto a full-screen canvas instead of using lcdui commands:

large_play1.png
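(For the curious, the basic recipe is MIDP’s full-screen mode plus painting the labels yourself; this is a minimal sketch with placeholder class and label names, not the actual Dungeon code:)

import javax.microedition.lcdui.*;

public class GameScreen extends Canvas {

    public GameScreen() {
        // take over the region normally reserved for lcdui commands:
        setFullScreenMode(true);
    }

    protected void paint(Graphics g) {
        // ... paint the game graphics here ...
        // then paint the custom softkey labels into the corners:
        g.setColor(0xffffff);
        g.drawString("menu", 2, getHeight() - 2,
                Graphics.BOTTOM | Graphics.LEFT);
        g.drawString("exit", getWidth() - 2, getHeight() - 2,
                Graphics.BOTTOM | Graphics.RIGHT);
    }
}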

The game includes two different sizes of graphics for different screen sizes:

sm_play.png
sm_play_2.png

Then the menu of options is painted over the current game screen. I’ve indicated which item is selected by coloring the text blue and putting stars behind it with an animated sparkling effect:

next_board_2.png

Plus, it shows the labels in French if the microedition.locale system property starts with “fr”:

restituer.png

As I said before, the hardest part is to get the keycodes right for the softkeys. And unfortunately, even if you have a list of key codes for a number of common handsets, it isn’t always easy for the MIDlet to identify the handset. The microedition.platform system property sometimes helps, but not always. Some manufacturers just set this property to return the generic string “j2me” and from there, there’s not a lot your MIDlet can do.
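For reference, here’s a minimal sketch of the kind of property sniffing described above (the handset string I match on is only an illustration):

// choose French labels if the locale starts with "fr":
String locale = System.getProperty("microedition.locale");
boolean useFrench = (locale != null) && locale.startsWith("fr");

// try to identify the handset (this works on cooperative handsets,
// but some just return the generic string "j2me"):
String platform = System.getProperty("microedition.platform");
boolean knownHandset = (platform != null)
        && (platform.indexOf("my700x") != -1);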

The current version of Dungeon is set so that the custom softkeys will work on the Sagem my700x and probably some other handsets. However, if it doesn’t work, I’ve provided a second version with the custom softkey feature disabled. This second version still has the custom menus I’ve added, but uses lcdui commands to make the menu appear and disappear. This is what it looks like:

no_softkeys.png

It’s not as pretty, but I guess it’s not the end of the world…

Creating a basic M3G file with Blender

When using the Mobile 3D Graphics API (JSR 184), it’s important to understand how to create 3D objects, as discussed in my last couple of posts. However, in a real application you will typically use files created by 3D modeling software.

Many commercial software suites allow you to export 3D scenes in M3G format (the format required by JSR 184), including a free one called Blender, which you can download here.  Don’t let the fact that it’s free fool you — like a lot of free software, it’s a full-featured package suitable for professional use.  By default Blender doesn’t export files in M3G format — you have to install an additional plugin such as the one I found from Nelson Games here.  The plugin is very easy to install: just make sure that Python (version 2.4 or greater) is correctly installed, then place the additional script in the Blender python scripts folder.

As the Blender documentation freely admits, Blender is not terribly newbie-friendly (really, it could stand a few basic tutorials…), but it’s quite powerful and configurable once you get the hang of it. Here are some basic steps to get started and create a simple M3G file:

screenshotblendercube.png

This is what you will see when you first open Blender.

A new file in Blender starts off with a cube near the origin. You can see what the cube looks like when rendered by selecting Render. The camera used for this rendering is that little line drawing near the bottom right corner.

If you don’t feel like using the default cube, you can get rid of it by pressing the delete key. (Since the cube is initially selected by default, the delete key deletes it.) If you’d like to keep it instead, you can transform it. If you’re in object or edit mode (the first drop-down menu in the toolbar along the bottom in the above screenshot is the mode), you’ll see some little buttons with icons for the different ways of transforming the object: a triangle for translation, a donut for rotation, and a square for scaling. To add further objects, go to add > mesh as seen in the following screenshot:

screenshotaddmesh.png

At this point you have the idea of how to create a very basic file, and you can try fiddling with all of the various gizmos on your own to see what else you can do. 😀

Once you have your scene ready, you can export it in M3G format by selecting M3G under the file > export menu. This option will appear automatically if the plugin is correctly installed. If you’re using the same plugin I’ve recommended, you’ll have a choice between exporting as M3G or as Java code. The Java code option is what I like about this particular plugin (I don’t know if others offer it). But let’s start by exporting it as M3G since that’s what you’ll usually do in a real application, and talk about exporting it as Java code later in this example.

Rendering the M3G file in your Java application is even easier than creating it! (Although — as with creating the file — there’s no end to the possibilities once you get started and get the hang of it.) All you do is put your M3G file in the MIDlet’s Jar file, load it with the Loader class, find the World node, and render it using the Graphics3D class.

There are a bunch of tutorials that show these basic steps, but I’ll show them here for completeness since there are only a few essential lines. Let’s assume we called the file “cube.m3g”. Here’s the code for the initialization step:

Object3D[] allNodes = Loader.load("/cube.m3g");

// find the world node (myWorld is assumed to be a field
// of type World, used again in the paint method below):
for(int i = 0; i < allNodes.length; i++) {
    if(allNodes[i] instanceof World) {
        myWorld = (World)allNodes[i];
    }
}

Then in the paint method of the Canvas (or other rendering target), you start by binding the Graphics3D singleton to the target’s Graphics instance, then call render on the World you read from the file, then release the target. You can do more entertaining things by manipulating the scene graph that is accessible from the World node, but if all you want to do is display the image file you created, then you’re done.

/**
 * Paint the graphics onto the screen.
 */
protected void paint(Graphics g) {
    Graphics3D g3d = null;
    try {
        // Start by getting a handle to the Graphics3D
        // object which does the work of projecting the
        // 3-D scene onto the 2-D screen (rendering):
        g3d = Graphics3D.getInstance();
        // Bind the Graphics3D object to the Graphics
        // instance of the current canvas:
        g3d.bindTarget(g);

        // Now render: (project from 3D scene to 2D screen)
        g3d.render(myWorld);

    } catch(Exception e) {
        e.printStackTrace();
    } finally {
        // Done, the canvas graphics can be freed now
        // (the null check guards against a failure before
        // the Graphics3D handle was obtained):
        if(g3d != null) {
            g3d.releaseTarget();
        }
    }
}

In my earlier examples we had to define the camera, but when rendering a world node like this, the default camera is defined in the world’s data. This mode of rendering is called “retained mode” (the earlier examples used “immediate mode”). A typical way to think about the two modes is that immediate mode is for simple objects you define in code, whereas retained mode is for rendering M3G files. That’s what it usually comes down to in practice, but it isn’t the real difference: you can define a world node and a complete scene graph in code and render it in retained mode, or you can extract the VertexBuffer and IndexBuffer from a Mesh you found in an M3G file and render it in immediate mode if you like.
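To make that last point concrete, here’s a hedged sketch of pulling a Mesh out of the loaded world and rendering it in immediate mode. I’m assuming a scene simple enough that the Mesh is a direct child of the World, and that g3d and myTransform are set up as in the paint method above:

// after finding myWorld as in the loading code earlier:
Mesh myMesh = null;
for(int i = 0; i < myWorld.getChildCount(); i++) {
    if(myWorld.getChild(i) instanceof Mesh) {
        myMesh = (Mesh)myWorld.getChild(i);
    }
}

// then, between bindTarget and releaseTarget in the paint method:
g3d.render(myMesh.getVertexBuffer(), myMesh.getIndexBuffer(0),
        myMesh.getAppearance(0), myTransform);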

The fact that you can create the same data in Java code as you can define in an M3G file becomes very apparent if you export your scene from Blender as Java code. In fact, that way you can look at the code and see how it’s done, tweaking it and seeing the effects of your modifications if you like. Even though the Java class generally takes more space in the jar (even after obfuscation), there are real-world applications for exporting the file as Java code since on many platforms Java code loads faster than resources from the Jar file.

The Java code produced by this plugin is quite easy to use. It creates a Java class with the name you choose when you export, and the class has a static “getRoot” method that returns the world node. The only things I had to change to get it to work were to add it to my package and change the use of “Canvas3D” to “Canvas”. (I’m not sure why Canvas3D is used here since Canvas is more widely supported and works just the same.)
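In other words, once the class is generated, loading the scene collapses to a single call (here assuming, hypothetically, that I named the exported class Cube):

// instead of the Loader code shown earlier:
World myWorld = Cube.getRoot();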

If you’d like to have a look at the code it generates, I’m posting the exported code of a simple Blender-generated cube below the fold.

Continue reading

Abbey normal???

It’s just a little thing, but one point I’ve found kind of confusing when playing with the Mobile 3D graphics API (JSR 184) is how to use the normal vectors.

When you’re defining your 3-dimensional polygon, you can give each vertex some position coordinates (okay, that’s reasonable), color coordinates (kind of amusing for the color to be given by coordinates, but okay), and a normal vector.

But what is the normal vector for?

If you’re on the face of one of the triangles of your polygon, the normal vector is logically the vector perpendicular to the flat face of the triangle. So you don’t need to define a normal there, since it’s determined by the triangle. I guess that’s why the normal vectors are defined at the vertices instead. But what does it even mean to be perpendicular to a pointy part of the surface? It might be used to give the graphics functionality a hint as to how to smooth out and round the point, except that the 3D graphics engine doesn’t do that…

The normals are used to tell the graphics engine how to make light reflect off the surface. So in a sense it does give information about how to smooth and round off the corners — at least from the point of view of the reflected light. If you define the normal vector to be perpendicular to a triangle face, light will reflect off of the triangle as if the triangle is perfectly flat, with a nice, crisp point at the vertex. But if you’d rather smooth out your surface a little — say you’re making a ball and you want it to be less obvious that it’s constructed of triangles — you can define a normal pointing directly out from the center of the ball towards each vertex, giving what the perpendicular direction (normal vector) would be if the ball were smooth at that point. (The flat-faces-and-sharp-points model requires you to define more vertices than the smoothed/rounded model because when you’re smoothing you can re-use the normal vector, whereas to get flat faces you need to define a new normal vector — hence a new corresponding vertex — for each face that meets at a given point.)

As an example, I’ve taken my pyramid from my previous example and I’ve given it some normal vectors. Here’s the set of vertices:

private short[] myVertices2 = {
    0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    0, -10, 0, 10, 0, 0, 0, 0, 10
};

Recall that using a TriangleStripArray I’ve grouped the vertices into the following strip of triangles: { (0, 0, 10), (10, 0, 0), (0, 10, 0) }, { (10, 0, 0), (0, 10, 0), (0, -10, 0) }, { (0, 10, 0), (0, -10, 0), (-10, 0, 0) }, { (0, -10, 0), (-10, 0, 0), (0, 0, 10) }, plus one last triangle pasted on { (0, -10, 0), (10, 0, 0), (0, 0, 10) }.

And here are the normals I’ve defined for the vertices:

private short[] myNormals2 = {
    0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    1, -1, 1, 1, -1, 1, 1, -1, 1
};

So you can see that for the top strip of triangles, I’ve defined the normals as going directly out from the center in order to get a smoothing effect. Then for the pasted-on triangle (the last three vertices), I’ve defined all three normals as the same vector, perpendicular to the triangle, to get a nice flat surface.

Here’s the result:

normal.png

The top triangle in this picture is the flat one.

In order to illustrate this better, I’ve rotated the pyramid a little and moved the camera closer. Also, I’ve switched the pyramid’s appearance from polygon mode to a shiny surface material, and I’ve placed a light source (omnidirectional, Light.OMNI) near the camera so you can see how the light reflects off the surface. If you’re using ambient light (Light.AMBIENT), which lights all surfaces equally in all directions, you don’t need to bother with normals, but with the other three types of light (OMNI, DIRECTIONAL, and SPOT) the light hits the surface from a particular direction, so the normals are used to calculate how the light should bounce off.

Just for fun, I decided to see what would happen if I defined some crazy normal vectors — some “Abbey Normals” 😉 — for my pyramid:

private short[] myAbbeyNormals2 = {
    0, 1, 1, -1, 0, 0, 1, 0, 1, 0, 1, -1, -1, 0, 0, 1, 1, 1,
    -1, 1, 1, 1, -1, 1, 1, 1, -1
};

This isn’t necessarily useful in practice, but I was curious to see what would happen. Here’s the result:

abnormals.png

Note that in both these examples, I didn’t worry at all about the length of each normal vector. The length of the normal doesn’t matter — it isn’t taken into account. All that matters is the direction.

Here’s the code for today’s adventure. I’ve thrown in a method to rotate the pyramid in response to pressing the arrow keys so you can see it from all sides, and the corresponding MIDlet class can be found in my earlier pyramid post:

package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.m3g.*;

/**
 * This is a very simple example class to illustrate 3-D coordinates.
 */
public class DemoCanvas extends Canvas {

    /**
     * The information about where the scene is viewed from.
     */
    private Camera myCamera;

    /**
     * The information about how to move the camera.
     */
    private Transform myCameraTransform = new Transform();

    /**
     * The information about how to move the pyramid.
     */
    private Transform myObjectTransform = new Transform();

    /**
     * The angle (in degrees) to rotate the object in response
     * to a keypress.
     */
    public static final float DEFAULT_DISTANCE = 10.0f;

    /**
     * The background. (self-explanatory 😉 )
     */
    private Background myBackground = new Background();

    /**
     * The set of vertices.
     */
    private VertexBuffer myVertexBuffer;

    /**
     * The object that defines how to map the set of vertices into
     * a polygon.
     */
    private IndexBuffer myIndexBuffer;

    /**
     * Information on how the polygon should look in terms of
     * color, texture, shading, etc..
     */
    private Appearance myAppearance;

    /**
     * The list of vertices for the first example pyramid.
     */
    private short[] myVertices1 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip1 = { 6 };

    /**
     * The list of vertices for the second example pyramid.
     */
    private short[] myVertices2 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
        0, -10, 0, 10, 0, 0, 0, 0, 10
    };

    /**
     * The list of normals for the second example pyramid.
     */
    private short[] myNormals2 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
        1, -1, 1, 1, -1, 1, 1, -1, 1
    };

    /**
     * The list of crazy normals for the second example pyramid.
     */
    private short[] myAbbeyNormals2 = {
        0, 1, 1, -1, 0, 0, 1, 0, 1, 0, 1, -1, -1, 0, 0, 1, 1, 1,
        -1, 1, 1, 1, -1, 1, 1, 1, -1
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip2 = { 6, 3 };

    /**
     * Initialize everything.
     */
    public DemoCanvas() {
        try {
            // Create the camera object to define where the polygon is being
            // viewed from and in what way:
            myCamera = new Camera();
            // Set the camera so that it will project the 3-D picture onto the
            // screen in perspective, with a vanishing point in the distance:
            myCamera.setPerspective(60.0f, (float)getWidth() / (float)getHeight(),
                    1.0f, 1000.0f);

            // Here we construct the VertexArray, which is a generic data
            // structure for storing collections of coordinate points:
            int numVertices = myVertices2.length / 3;
            // specify how many vertices, plus the fact that each vertex has
            // three coordinates, and each coordinate is coded on two bytes:
            VertexArray va = new VertexArray(numVertices, 3, 2);
            // set the data, starting from index 0:
            va.set(0, numVertices, myVertices2);

            // define the normals (swap in myNormals2 here to see the
            // smoothed version instead):
            VertexArray na = new VertexArray(numVertices, 3, 2);
            // set the data, starting from index 0:
            na.set(0, numVertices, myAbbeyNormals2);

            // Now create a 3-D object of it.
            // Here we can group a set of different VertexArrays, one
            // giving positions, one giving colors, one giving normals:
            myVertexBuffer = new VertexBuffer();
            myVertexBuffer.setPositions(va, 1.0f, null);
            myVertexBuffer.setNormals(na);
            // Color the polygon white:
            myVertexBuffer.setDefaultColor(0xffffff);

            // Here we define how to piece together the vertices into
            // a polygon:
            myIndexBuffer = new TriangleStripArray(0, myTriangleStrip2);

            // Let's try creating a more complex appearance:
            Material material = new Material();
            material.setShininess(100.0f);
            myAppearance = new Appearance();
            myAppearance.setMaterial(material);

            // color the background black:
            myBackground.setColor(0x000000);

            // We set the camera's X position and Y position to 0
            // so that we're looking straight down at the origin
            // of the x-y plane. The Z coordinate tells how far
            // away the camera is -- increasing this value takes
            // you farther from the polygon, making it appear
            // smaller.
            myCameraTransform.postTranslate(0.0f, 0.0f, 25.0f);

            // reset the object's original orientation:
            myObjectTransform.setIdentity();
        } catch(Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Paint the graphics onto the screen.
     */
    protected void paint(Graphics g) {
        Graphics3D g3d = null;
        try {
            // Start by getting a handle to the Graphics3D
            // object which does the work of projecting the
            // 3-D scene onto the 2-D screen (rendering):
            g3d = Graphics3D.getInstance();
            // Bind the Graphics3D object to the Graphics
            // instance of the current canvas:
            g3d.bindTarget(g);
            // Clear the screen by painting it with the
            // background image:
            g3d.clear(myBackground);

            // now add the light (in a real game you would create the
            // light once rather than on every repaint):
            Light light = new Light();
            light.setMode(Light.OMNI);
            light.setIntensity(2.0f);
            Transform lightTransform = new Transform();
            lightTransform.postTranslate(0.0f, 0.0f, 50.0f);
            g3d.resetLights();
            g3d.addLight(light, lightTransform);

            g3d.setCamera(myCamera, myCameraTransform);

            // Now render, project the 3D scene onto the flat screen:
            g3d.render(myVertexBuffer, myIndexBuffer, myAppearance,
                    myObjectTransform);

        } catch(Exception e) {
            e.printStackTrace();
        } finally {
            // Done, the canvas graphics can be freed now (the null
            // check guards against a failure before the Graphics3D
            // handle was obtained):
            if(g3d != null) {
                g3d.releaseTarget();
            }
        }
    }

    /**
     * Move the object in response to game commands.
     */
    public void keyPressed(int keyCode) {
        switch(getGameAction(keyCode)) {
        case Canvas.UP:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, -1.0f, 0.0f, 0.0f);
            break;
        case Canvas.DOWN:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, 1.0f, 0.0f, 0.0f);
            break;
        case Canvas.RIGHT:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, 0.0f, 1.0f, 0.0f);
            break;
        case Canvas.LEFT:
            myObjectTransform.postRotate(DEFAULT_DISTANCE, 0.0f, -1.0f, 0.0f);
            break;
        default:
            break;
        }
        repaint();
    }

}

Coordinates in JSR 184: A very simple example

I love the way the JavaDoc for Graphics3D encouragingly starts with “Using the Graphics3D is very straightforward” just before launching into the precisions about the different types of rendering modes and targets, viewports, antialiasing, dithering, etc. 😉

JSR 184 gives an incredibly rich API for creating 3-D animations (and especially games) for a MIDP handset. It’s designed to allow you to import complex computer-generated 3-D objects as well as to create your own 3-D objects by hand, with a fantastic array of options in terms of lighting, surface textures, etc. It also comes with an equally rich assortment of new terms (Mesh, IndexBuffer, TriangleStripArray, CompositingMode…) that can look like a foreign language to those new to 3-D programming. And just understanding how all of the different coordinate systems work and interact with each other is not trivial.

Not to be discouraging or anything — it’s not so daunting if you start simple. So my programming challenge that I assigned myself for today was to write the simplest 3-D example possible that illustrates the basics of how the coordinate system works and how to define a 3-D polygon.

My example is a pyramid with a square base viewed from above.

This is a two-part example: In part one, I start with just the square base with two of the triangular sides (opposite each other) attached.  So picture a square paper with a triangle attached along the right edge and the left edge, and the two triangles folded up to meet at a point at the top.  In part two, I start with this same incomplete pyramid and glue on a third side.  From there, completing the pyramid should be no problem. 😀

I’m going to start by posting the code so you can have a look, and then I’ll explain in a little more detail, particularly how the VertexArray works and how the TriangleStripArray defines how to piece the vertices together to form a polygon.

DemoCanvas.java:


package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.m3g.*;

/**
 * This is a very simple example class to illustrate 3-D coordinates.
 */
public class DemoCanvas extends Canvas {

    /**
     * The information about where the scene is viewed from.
     */
    private Camera myCamera;

    /**
     * The background. (self-explanatory 😉 )
     */
    private Background myBackground = new Background();

    /**
     * The set of vertices.
     */
    private VertexBuffer myVertexBuffer;

    /**
     * The object that defines how to map the set of vertices into
     * a polygon.
     */
    private IndexBuffer myIndexBuffer;

    /**
     * Information on how the polygon should look in terms of
     * color, texture, shading, etc..
     */
    private Appearance myAppearance;

    /**
     * The list of vertices for the first example pyramid.
     */
    private short[] myVertices1 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip1 = { 6 };

    /**
     * The list of vertices for the second example pyramid.
     */
    private short[] myVertices2 = {
        0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10,
        0, -10, 0, 10, 0, 0, 0, 0, 10
    };

    /**
     * The rule for how to piece together the vertices into a polygon.
     */
    private int[] myTriangleStrip2 = { 6, 3 };

    /**
     * Initialize everything.
     */
    public DemoCanvas() {
        try {
            // Create the camera object to define where the polygon is being
            // viewed from and in what way:
            myCamera = new Camera();
            // Set the camera so that it will project the 3-D picture onto the
            // screen in perspective, with a vanishing point in the distance:
            myCamera.setPerspective(60.0f, (float)getWidth() / (float)getHeight(),
                    1.0f, 1000.0f);

            // Here we construct the VertexArray, which is a generic data
            // structure for storing collections of coordinate points:
            int numVertices = myVertices1.length / 3;
            // specify how many vertices, plus the fact that each vertex has
            // three coordinates, and each coordinate is coded on two bytes:
            VertexArray va = new VertexArray(numVertices, 3, 2);
            // set the data, starting from index 0:
            va.set(0, numVertices, myVertices1);

            // Now create a 3-D object of it.
            // Here we could group a set of different VertexArrays, one
            // giving positions, one giving colors, one giving normals,
            // but for simplicity we're only setting position coordinates:
            myVertexBuffer = new VertexBuffer();
            myVertexBuffer.setPositions(va, 1.0f, null);
            // Color the polygon white:
            myVertexBuffer.setDefaultColor(0xffffff);

            // Here we define how to piece together the vertices into
            // a polygon:
            myIndexBuffer = new TriangleStripArray(0, myTriangleStrip1);

            // We want the appearance as simple as possible, so set the
            // appearance to polygon mode:
            PolygonMode pm = new PolygonMode();
            pm.setShading(PolygonMode.SHADE_FLAT);

            myAppearance = new Appearance();
            myAppearance.setPolygonMode(pm);

            // color the background black:
            myBackground.setColor(0x000000);
        } catch(Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Paint the graphics onto the screen.
     */
    protected void paint(Graphics g) {
        Graphics3D g3d = null;
        try {
            // Start by getting a handle to the Graphics3D
            // object which does the work of projecting the
            // 3-D scene onto the 2-D screen (rendering):
            g3d = Graphics3D.getInstance();
            // Bind the Graphics3D object to the Graphics
            // instance of the current canvas:
            g3d.bindTarget(g);
            // Clear the screen by painting it with the
            // background image:
            g3d.clear(myBackground);

            // Now set where we're viewing the scene from:
            Transform cameraTransform = new Transform();
            // We set the camera's X position and Y position to 0
            // so that we're looking straight down at the origin
            // of the x-y plane. The Z coordinate tells how far
            // away the camera is -- increasing this value takes
            // you farther from the polygon, making it appear
            // smaller. Try changing these values to view the
            // polygon from different places:
            cameraTransform.postTranslate(0.0f, 0.0f, 100.0f);
            g3d.setCamera(myCamera, cameraTransform);

            // Now set the location of the object.
            // if this were an animation we would probably
            // translate or rotate it here:
            Transform objectTransform = new Transform();
            objectTransform.setIdentity();

            // Now render: (Yay!!! finally!!!)
            g3d.render(myVertexBuffer, myIndexBuffer, myAppearance, objectTransform);

        } catch(Exception e) {
            e.printStackTrace();
        } finally {
            // Done, the canvas graphics can be freed now (the null
            // check guards against a failure before the Graphics3D
            // handle was obtained):
            if(g3d != null) {
                g3d.releaseTarget();
            }
        }
    }

}

***

The first example (using myVertices1 and myTriangleStrip1) gives a result that looks like this:

pyramid1.png

The second example (using myVertices2 and myTriangleStrip2) gives a result that looks like this:

pyramid2.png

In the first example, I constructed my TriangleStripArray with the arguments 0 and a one-element array: { 6 }. That means start from the first vertex in the vertex array (actually the zeroth element — you know what I mean), and then make one strip of triangles from the six vertices. The triangles are defined by taking all sets of three consecutive vertices, the first one starting from the first vertex, the second one starting from the second vertex, etc. So each triangle in the triangle strip shares a side with the next triangle and shares another side with the previous triangle as you can see from this example:

Vertex array #1 looks like this: { 0, 0, 10, 10, 0, 0, 0, 10, 0, 0, -10, 0, -10, 0, 0, 0, 0, 10 } (where every three values together form one vertex and three vertices form a triangle), so the corresponding strip is made of the following set of triangles: { (0, 0, 10), (10, 0, 0), (0, 10, 0) }, { (10, 0, 0), (0, 10, 0), (0, -10, 0) }, { (0, 10, 0), (0, -10, 0), (-10, 0, 0) }, { (0, -10, 0), (-10, 0, 0), (0, 0, 10) }. (Here I’ve surrounded the xyz-coordinates of each vertex in parentheses and each triangle in brackets so you can see the list of triangles more easily — this doesn’t represent any syntax that appears in the code.) Note that the first (and last) vertex is the top point and the middle two triangles together form the square base of the pyramid.

Even though the square base is part of the polygon, it turned out black on the screen. That’s because, by default, the inside faces are invisible (not rendered).  If you’d like the inside faces to be visible, set your PolygonMode’s culling to CULL_NONE. The computer determines which side is “inside” and which side is “outside” by whether the vertices of the triangle are defined in clockwise or counter-clockwise order. I’d explain how to figure out which side is which if I weren’t dyslexic — normally in such cases I guess, then compile and run, then invert the values if I guessed wrong. 😉
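In code, that’s one extra line on the PolygonMode from the example above:

// render both the clockwise and the counter-clockwise faces:
pm.setCulling(PolygonMode.CULL_NONE);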

In the second example, I constructed my TriangleStripArray with the arguments 0 and a two-element array: { 6, 3 }. This creates the same strip of triangles as the first one, and then makes another strip of triangles from the next three vertices it finds. Three vertices make one triangle, so using the larger second example array we get the additional triangle { (0, -10, 0), (10, 0, 0), (0, 0, 10) } giving one additional side. This side is white the way I’ve defined it, but if I’d defined it in the wrong order (i.e. attached it backwards), it would appear black from my camera’s angle. (Note that I could actually have used myVertices2 for both examples and the result would have been the same.)

For completeness, I’ll post the simple MIDlet class that goes with this. Also note that this program uses the type float, and I found I had to set my project to CLDC-1.1 (under Settings > API Selection in ktoolbar) to get it to compile. (The corresponding jad attributes appear after the listing.)


package net.frog_parrot.test;

import javax.microedition.lcdui.*;
import javax.microedition.midlet.MIDlet;

/**
 * A simple 3D example.
 */
public class TestMIDlet extends MIDlet implements CommandListener {

    private Command myExitCommand = new Command("Exit", Command.EXIT, 1);
    private DemoCanvas myCanvas = new DemoCanvas();

    /**
     * Initialize the Displayables.
     */
    public void startApp() {
        myCanvas.addCommand(myExitCommand);
        myCanvas.setCommandListener(this);
        Display.getDisplay(this).setCurrent(myCanvas);
        myCanvas.repaint();
    }

    public void pauseApp() {
    }

    public void destroyApp(boolean unconditional) {
    }

    /**
     * Change the display in response to a command action.
     */
    public void commandAction(Command command, Displayable screen) {
        if(command == myExitCommand) {
            destroyApp(true);
            notifyDestroyed();
        }
    }

}
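(For reference, once the project is set to CLDC-1.1, the corresponding attributes in the jad file and manifest look like this:)

MicroEdition-Configuration: CLDC-1.1
MicroEdition-Profile: MIDP-2.0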