
Java 3D API Specification


A P P E N D I X C

View Model Implementation Details




An application programmer writing a 3D graphics program that will deploy on a variety of platforms must anticipate the likely end-user environments and must carefully construct the view transforms to match those characteristics using a low-level API. This appendix addresses many of the issues an application must face and describes the sophisticated features that Java 3D's advanced view model provides.

C.1 An Overview of the Java 3D View Model

Both camera-based and Java 3D-based view models allow a programmer to specify the shape of a view frustum and, under program control, to place, move, and re-orient that frustum within the virtual environment. However, how they do this varies enormously. Unlike the camera-based system, the Java 3D view model allows slaving the view frustum's position and orientation to that of a six-degree-of-freedom tracking device. By slaving the frustum to the tracker, Java 3D can automatically modify the view frustum so that the generated images match the end-user's viewpoint exactly.

Java 3D must handle two rather different "head tracking" situations. In one case, we rigidly attach a tracker's base, and thus its coordinate frame, to the display environment. This corresponds to placing a tracker base in a fixed position and orientation relative to a projection screen within a room, relative to a computer display on a desk, or relative to the walls of a multiple-wall projection display. In the second head-tracking situation, we rigidly attach a tracker's sensor, not its base, to the display device. This corresponds to rigidly attaching one of that tracker's sensors to a head-mounted display and placing the tracker base somewhere within the physical environment.

C.2 Physical Environments and Their Effects

Imagine an application where the end-user sits on a magic carpet. The application flies the user through the virtual environment by controlling the carpet's location and orientation within the virtual world. At first glance, it might seem that the application also controls what the end-user will see, and it does, but only superficially.

The following two examples show how end-user environments can significantly affect how an application must construct viewing transforms.

C.2.1 A Head-mounted Example

Imagine that the end-user sees the magic carpet and the virtual world with a head-mounted display and head tracker. As the application flies the carpet through the virtual world, the user may turn to look to the left, right, or even towards the rear of the carpet. Because the head-tracker keeps the renderer informed of the user's gaze direction, it might not need to draw the scene directly in front of the magic carpet. The view that the renderer draws on the head-mount's display must match what the end-user would see had the experience occurred in the real world.

C.2.2 A Room-mounted Example

Imagine a slightly different scenario, where the end-user sits in a darkened room in front of a large projection screen. The application still controls the carpet's flight path; however, the position and orientation of the user's head barely influences the image drawn on the projection screen. If a user looks left or right, then he or she only sees the darkened room. The screen does not move. It's as if the screen represents the magic carpet's "front window" and the darkened room represents the "dark interior" of the carpet.

By adding a left and right screen, we give the magic carpet rider a more complete view of the virtual world surrounding the carpet. Now our end-user sees the view to the left or right of the magic carpet by turning left or right.

C.2.3 Impact of Head Position and Orientation On the Camera

In the head-mounted example, the user's head position and orientation significantly affects a camera model's camera position and orientation but has hardly any effect on the projection matrix. In the room-mounted example, the user's head position and orientation contributes little to a camera model's camera position and orientation; however, it does affect the projection matrix.

From a camera-based perspective, the application developer must construct the camera's position and orientation by combining the virtual-world component (the position and orientation of the magic carpet) and the physical-world component (the user's instantaneous head position and orientation).

Java 3D's view model incorporates the appropriate abstractions to compensate automatically for such variability in end-user hardware environments.

C.3 The Coordinate Systems

The basic view model consists of eight or nine coordinate systems, depending on whether the end-user environment consists of a room-mounted display or a head-mounted display. First, we define the coordinate systems used in a room-mounted display environment. Next we define the added coordinate system introduced when using a head-mounted display system.

C.3.1 Room-mounted Coordinate Systems

The room-mounted coordinate system divides into the virtual coordinate system and the physical coordinate system. Figure C-1 shows these coordinate systems graphically. The coordinate systems within the grayed area exist in the virtual world; those outside exist in the physical world. Note that the coexistence coordinate system exists in both worlds.

C.3.1.1 The Virtual Coordinate Systems

The Virtual World Coordinate System

The virtual world coordinate system encapsulates the unified coordinate system for all scene graph objects in the virtual environment. For a given View, the virtual world coordinate system is defined by the Locale object that contains the ViewPlatform object attached to the View. It is a right-handed coordinate system with x to the right, y up, and z towards the viewer.

The ViewPlatform Coordinate System

The ViewPlatform coordinate system is the local coordinate system of the ViewPlatform leaf node that the View is attached to.

The Coexistence Coordinate System

A primary implicit goal of any view model is to map a specified local portion of the Physical World into a specified portion of the Virtual World. Once established, one can legitimately ask where the user's head or hand is located within the Virtual World, or where a virtual object is located in the local Physical World. In this way the physical user can interact with objects inhabiting the Virtual World, and vice-versa. To establish this mapping, Java 3D defines a special coordinate system, called Coexistence coordinates, that is defined to exist in both the Physical World and the Virtual World.

The coexistence coordinate system exists half in the virtual world and half in the physical world. The two transforms that go from the coexistence coordinate system to the virtual world coordinate system and back again contain all the information needed to grow or shrink the virtual world relative to the physical world as well as the information needed to position and orient the virtual world relative to the physical world.

Modifying the transform that maps the coexistence coordinate system into the virtual world coordinate system changes what the end-user can see. The Java 3D application programmer moves the end-user within the virtual world by modifying this transform.
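As a concrete illustration of this mapping, the following minimal sketch (not the Java 3D API) models the coexistence-to-virtual-world transform as a uniform scale followed by a translation and applies it to a point; the scale factor and offset are assumed, illustrative values:

```java
// Sketch: a coexistence-to-virtual-world mapping modeled as a uniform
// scale plus a translation. Growing the scale factor makes the virtual
// world larger relative to the physical user; the offset positions the
// coexistence origin within the virtual world.
public class CoexistenceMapping {
    // Hypothetical calibration values, not Java 3D API state.
    static double scale = 2.0;                   // vworld units per physical meter
    static double[] offset = {10.0, 0.0, -5.0};  // coexistence origin in vworld

    static double[] coexistenceToVworld(double[] p) {
        return new double[] {
            p[0] * scale + offset[0],
            p[1] * scale + offset[1],
            p[2] * scale + offset[2]
        };
    }

    public static void main(String[] args) {
        // A point one meter in front of the user in coexistence coordinates
        double[] p = {0.0, 0.0, 1.0};
        double[] v = coexistenceToVworld(p);
        System.out.println(v[0] + " " + v[1] + " " + v[2]);  // 10.0 0.0 -3.0
    }
}
```

The inverse of this transform answers the reverse question posed above: where a virtual object is located in the local physical world.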

C.3.1.2 The Physical Coordinate Systems

The Head Coordinate System

The head coordinate system allows an application to import its user's head geometry. The coordinate system provides a simple consistent coordinate frame for specifying such factors as the location of the eyes and ears.

The Image Plate Coordinate System

The image plate coordinate system corresponds with the physical coordinate system of the image generator. The image plate is defined to have its origin at the lower left-hand corner of the display area and to lie in the display area's XY plane. Note that imageplate is a different coordinate system than either leftimageplate or rightimageplate. These last two coordinate systems are defined in head-mounted environments only (see Section C.3.2, "Head-Mounted Coordinate Systems").

The Headtracker Coordinate System

The headtracker coordinate system corresponds to the six-degree-of-freedom tracker's sensor attached to the user's head. The headtracker's coordinate system describes the user's instantaneous head position.

The Trackerbase Coordinate System

The trackerbase coordinate system corresponds to the emitter associated with absolute position-orientation trackers. For those trackers that generate relative position-orientation information, this coordinate system is that tracker's initial position and orientation. In general, this coordinate system is rigidly attached to the physical world.

C.3.2 Head-Mounted Coordinate Systems

Head-mounted coordinate systems divide into the same virtual coordinate systems and physical coordinate systems. Figure C-2 shows these coordinate systems graphically. As with the room-mounted coordinate systems, the coordinate systems within the grayed area exist in the virtual world; those outside exist in the physical world. Once again, the coexistence coordinate system exists in both worlds. The arrangement of the coordinate systems differs from that for a room-mounted display environment. The head-mounted version of Java 3D's coordinate system differs in another way: it includes two imageplate coordinate systems, one for each of an end-user's eyes.

The Leftimageplate and Rightimageplate Coordinate Systems

The leftimageplate and rightimageplate coordinate systems correspond with the physical coordinate system of the image generator associated with the left and right eye respectively. The image plate is defined as having its origin at the lower left-hand corner of the display area and lying in the display area's XY plane. Note that the left image plate's XY plane does not necessarily lie parallel to the right image plate's XY plane. Note that leftimageplate and rightimageplate are different coordinate systems than the room-mounted display environment's imageplate coordinate system.

C.4 The ViewPlatform Object

The ViewPlatform object is a leaf object within the Java 3D scene graph. The ViewPlatform object is the only portion of Java 3D's viewing model that resides as a node within the scene graph. Changes to transform group nodes in the scene graph hierarchy above a particular ViewPlatform object move the view's location and orientation within the Virtual World (see Section 8.3, "ViewPlatform-A Place In the Virtual World"). The ViewPlatform object also contains a ViewAttachPolicy and an ActivationRadius (see Section 5.10, "ViewPlatform Node," for a complete description of the ViewPlatform API).

C.5 The View Object

The View object is the central Java 3D object for coordinating all aspects of a viewing situation. All parameters that determine the view transform to be used in rendering on a collected set of canvases in Java 3D are either directly contained within the View object, or within objects pointed to by a View object (or pointed to by these, etc.). Java 3D supports multiple simultaneously-active View objects, each of which controls its own set of canvases.

The Java 3D View object has several instance variables and methods, but most are calibration variables or user-helping functions.

Methods
public final void setTrackingEnable(boolean flag)
public final boolean getTrackingEnable()

These methods set and retrieve a flag specifying whether to enable the use of six-degree-of-freedom tracking hardware.

public final void getUserHeadToVworld(Transform3D t)

This method retrieves the transform that takes points in the user-head coordinate system and transforms them into points in the virtual world coordinate system, copying it into the specified Transform3D object. This value is read-only. Java 3D continually generates it, but only if enabled by using the setUserHeadToVworldEnable method.

public final void setUserHeadToVworldEnable(boolean flag)
public final boolean getUserHeadToVworldEnable()

These methods set and retrieve a flag that specifies whether or not to repeatedly generate the userHeadToVworld transform (initially false).

public final void getUserHeadToTrackerBase(Transform3D t)

This method retrieves the user-head to the head-tracker-base transform and copies that value into the specified Transform3D object. If a tracker is not present, this matrix is not updated. If a head tracker is present, the system updates this matrix with information about the user's current head position and orientation.

C.5.1 View Policy

The view policy informs Java 3D whether it should generate the view using the head-tracked system of transforms or the head-mounted system of transforms. These policies are attached to the Java 3D View object.

Methods
public final void setViewPolicy(int policy)
public final int getViewPolicy()

These two methods set and retrieve the current policy for view computation. The policy variable specifies how Java 3D uses its transforms in computing new viewpoints, as follows: SCREEN_VIEW specifies that Java 3D should compute new viewpoints using the room-mounted (screen-based) system of transforms; HMD_VIEW specifies that Java 3D should compute new viewpoints using the head-mounted system of transforms.

C.5.2 Sensors and Their Location In the Virtual World

public final void getSensorToVworld(Sensor sensor, Transform3D t)
public final void getSensorHotSpotInVworld(Sensor sensor,
       Point3d position)
public final void getSensorHotSpotInVworld(Sensor sensor,
       Point3f position)

The first method takes the sensor's last reading and generates a matrix that takes points in that sensor's local coordinate system and produces corresponding points in virtual space. The next two methods transform the specified sensor's last hotspot location into the equivalent location in virtual space.

C.5.3 Frame Start Time and Duration

public long getCurrentFrameStartTime()

This method returns the time at which the most recent rendering frame started. It is defined as the number of milliseconds since January 1, 1970 00:00:00 GMT. Since multiple canvases might be attached to this View, the start of a frame is defined as the point just prior to clearing any canvas attached to this view.

public long getLastFrameDuration()

This method returns the duration, in milliseconds, of the most recently completed rendering frame. The time taken to render all canvases attached to this view is measured. This duration is computed as the difference between the start of the most recently-completed frame and the end of that frame. Since multiple canvases might be attached to this View, the start of a frame is defined as the point just prior to clearing any canvas attached to this view, while the end of a frame is defined as the point just after swapping the buffer for all canvases.
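A small, self-contained sketch of how an application might derive an approximate frame rate from this duration; the 33-millisecond figure is an assumed stand-in for a value returned by getLastFrameDuration():

```java
// Sketch: deriving an approximate frame rate from the duration reported
// for the most recently completed frame. The duration value below is
// illustrative, not the output of an actual View object.
public class FrameRate {
    static double framesPerSecond(long lastFrameDurationMillis) {
        // Guard against a zero or unset duration
        if (lastFrameDurationMillis <= 0) return 0.0;
        return 1000.0 / lastFrameDurationMillis;
    }

    public static void main(String[] args) {
        long duration = 33;  // hypothetical getLastFrameDuration() result
        System.out.printf("~%.1f frames/second%n", framesPerSecond(duration));
    }
}
```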

C.5.4 Scene Antialiasing

public final void setSceneAntialiasingEnable(boolean flag)
public final boolean getSceneAntialiasingEnable()

These methods set and retrieve the scene antialiasing flag. Scene antialiasing is either enabled or disabled for this view. If enabled, the entire scene will be antialiased on each canvas in which scene antialiasing is available. Scene antialiasing is disabled by default.

C.5.5 Depth Buffer

public final void setDepthBufferFreezeTransparent(boolean flag)
public final boolean getDepthBufferFreezeTransparent()

The set method enables or disables automatic freezing of the depth buffer for objects rendered during the transparent rendering pass (that is, objects rendered using alpha blending) for this view. If enabled, depth buffer writes are disabled during the transparent rendering pass regardless of the value of the depth-buffer write-enable flag in the RenderingAttributes object for a particular node. This flag is enabled by default. The get method retrieves this flag for this view.

C.6 The Screen3D Object

A Screen3D object represents one independent display device. The most common environment for a Java 3D application is a desktop computer with or without a head tracker. Figure C-3 shows a scene graph fragment for a display environment designed for such an end-user environment. Figure C-4 shows a display environment that matches the scene graph fragment in Figure C-3.

A multiple projection wall display presents a more exotic environment. Such environments have multiple screens, typically three or more. Figure C-5 shows a scene graph fragment representing such a system and Figure C-6 shows the corresponding display environment.

A multiple-screen environment requires more care during the initialization and calibration phase. Java 3D must know how the Screen3D objects are placed with respect to one another, the tracking device, and the physical portion of the coexistence coordinate system.

C.6.1 Screen3D Calibration Parameters

The Screen3D object is the 3D version of AWT's screen object. The majority of the Screen3D API is defined in Section 8.8, "The Screen3D Object."

To use a Java 3D system, someone or some program must calibrate the Screen3D object with the coexistence volume. These methods allow that person or program to inform Java 3D of those calibration parameters.

Measured Parameters

These calibration parameters are set once, typically by a browser, calibration program, system administrator, or system calibrator; not by an applet.

public final void setPhysicalScreenWidth(double width)
public final void setPhysicalScreenHeight(double height)

These methods store the screen's (image plate's) physical width and height in meters. The system administrator or system calibrator must provide these values by measuring the display's active image width and height. In the case of a head-mounted display, this should be the display's apparent width and height at the focal plane.
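A minimal sketch of one use of these calibration values: combining a measured physical width with the screen's pixel resolution to obtain the physical size of a pixel. Both numbers below are assumed, illustrative values, not output of a real Screen3D:

```java
// Sketch: given a calibrated physical screen width (the value a system
// calibrator would pass to setPhysicalScreenWidth) and the horizontal
// pixel resolution, compute the physical width of one pixel in meters.
public class ScreenCalibration {
    static double metersPerPixel(double physicalWidthMeters, int pixelWidth) {
        return physicalWidthMeters / pixelWidth;
    }

    public static void main(String[] args) {
        double width = 0.36;   // hypothetical measured active width, meters
        int pixels = 1280;     // hypothetical horizontal resolution
        System.out.println(metersPerPixel(width, pixels));
    }
}
```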

C.6.2 Accessing and Modifying An Eye's Image-plate Position

A Screen3D object provides sophisticated applications with access to the eye's position information in head-tracked, room-mounted runtime environments. It also allows applications to manipulate the position of an eye relative to an image plate in non-head-tracked runtime environments.

public void setLeftEyePositionInImagePlate(Point3d position)
public void getLeftEyePositionInImagePlate(Point3d position)
public void setRightEyePositionInImagePlate(Point3d position)
public void getRightEyePositionInImagePlate(Point3d position)

These values determine eye placement when a head tracker is not in use and the application is directly controlling the eye position in image-plate coordinates. In head-tracked mode or when the windowEyePointPolicy is RELATIVE_TO_FIELD_OF_VIEW, this value is derived from other values and is read-only. In head-tracked mode, Java 3D repetitively generates these values as a function of the current head-position.

C.6.3 Accessing and Changing Head Tracker Coordinates

public void setTrackerBaseToImagePlate(Transform3D t)
public void getTrackerBaseToImagePlate(Transform3D t)

This value stores the tracker-base coordinate system to image-plate coordinate system transform. If head tracking is enabled, this transform is a calibration constant. If head tracking is not enabled, this transform is not used. This is used only in SCREEN_VIEW mode. Users must recalibrate whenever the image plate moves relative to the tracker base.

public void setHeadTrackerToLeftImagePlate(Transform3D t)
public void getHeadTrackerToLeftImagePlate(Transform3D t)
public void setHeadTrackerToRightImagePlate(Transform3D t)
public void getHeadTrackerToRightImagePlate(Transform3D t)

These values store the head-tracker coordinate system to left and right image-plate coordinate system transforms, respectively. If head tracking is enabled, these transforms are calibration constants. If head tracking is not enabled, these transforms are not used. They are used only in HMD_VIEW mode.

public final void setCoexistenceToImagePlate(Transform3D t)
public final void getCoexistenceToImagePlate(Transform3D t)

This value stores the coexistence coordinate system to image-plate coordinate system transform. This transform is derived from head-tracking information and is read-only when head-tracking is enabled. It may be set by the user if head tracking is not enabled. It is expected that most applications will not set this.

C.7 The Canvas3D Object

Java 3D provides special support for those applications that wish to manipulate an eye position even in a non-head-tracked display environment. One situation where such a facility proves useful is an application that wishes to generate a very high-resolution image composed of lower-resolution tiled images. The application must generate each tiled component of the final image from a common eye position with respect to the composite image but a different eye position from the perspective of each individual tiled element.

C.7.1 Window Eyepoint Policy

The window eyepoint policy comes into effect in a non-head-track environment. The policy tells Java 3D how to construct a new view frustum based on changes in the field-of-view and in the Canvas3D's location on the screen. The policy only comes into effect when the application changes a parameter that can change the placement of the eyepoint relative to the view frustum.

Constants
public static final int RELATIVE_TO_FIELD_OF_VIEW

This variable tells Java 3D that it should modify the eye-point position so it is located at the appropriate place relative to the window to match the specified field-of-view. This implies that the view frustum will change whenever the application changes the field-of-view. In this mode, the eye position is read-only. This is the default setting.

public static final int RELATIVE_TO_SCREEN

This variable tells Java 3D to interpret the eye's position relative to the entire screen. No matter where an end-user moves a window (a Canvas3D), Java 3D continues to interpret the eye's position relative to the screen. This implies that the view frustum changes shape whenever an end-user moves the location of a window on the screen. In this mode, the field-of-view is read-only.

public static final int RELATIVE_TO_WINDOW

This variable specifies that Java 3D should interpret the eye's position information relative to the window (Canvas3D). No matter where an end-user moves a window (a Canvas3D), Java 3D continues to interpret the eye's position relative to that window. This implies that the frustum remains the same no matter where the end-user moves the window on the screen. In this mode, the field-of-view is read-only.

Methods
public final int getWindowEyepointPolicy()
public final void setWindowEyepointPolicy(int policy)

These methods set and retrieve the policy specifying how Java 3D handles the predefined eye point in a non-head-tracked application. The policy can be one of three values: RELATIVE_TO_FIELD_OF_VIEW, RELATIVE_TO_SCREEN, or RELATIVE_TO_WINDOW.
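To see why the eye point's relationship to the window matters, the following self-contained sketch (not the Java 3D API) computes off-axis frustum extents at the near plane from an eye position and a window rectangle, both expressed in image-plate coordinates; all names and values are illustrative. Under a screen-relative policy the window rectangle changes as the window moves, reshaping the frustum; under a window-relative policy the rectangle is fixed with respect to the eye, so the frustum never changes.

```java
// Sketch: an eye position over the image plate determines an off-axis
// view frustum for a window. Extents are computed by projecting the
// window edges back to the near plane through the eye point.
public class OffAxisFrustum {
    // Returns {left, right, bottom, top} at the near plane for a window
    // [x0,x1] x [y0,y1] on the image plate, seen from eye (ex, ey, ez),
    // where ez is the eye's distance in front of the plate.
    static double[] frustum(double x0, double x1, double y0, double y1,
                            double ex, double ey, double ez, double near) {
        double s = near / ez;  // similar-triangles scale to the near plane
        return new double[] { (x0 - ex) * s, (x1 - ex) * s,
                              (y0 - ey) * s, (y1 - ey) * s };
    }

    public static void main(String[] args) {
        // Eye 0.5 m in front of the center of a 0.4 m x 0.3 m window
        double[] f = frustum(0.0, 0.4, 0.0, 0.3, 0.2, 0.15, 0.5, 0.1);
        System.out.printf("l=%.3f r=%.3f b=%.3f t=%.3f%n",
                          f[0], f[1], f[2], f[3]);
    }
}
```

With the eye centered, the frustum comes out symmetric; moving the eye (or, equivalently, moving the window under a screen-relative policy) skews it.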

C.7.2 Monoscopic View Policy

This policy specifies how Java 3D generates a monoscopic view.

Constants
public final static int LEFT_EYE_VIEW

Specifies that the monoscopic view generated should be the view as seen from the left eye.

public final static int RIGHT_EYE_VIEW

Specifies that the monoscopic view generated should be the view as seen from the right eye.

public final static int CYCLOPEAN_EYE_VIEW

Specifies that the monoscopic view generated should be the view as seen from the "center eye," the fictional eye half-way between the left and right eyes. This is the default setting.

Methods
public final void setMonoscopicViewPolicy(int policy)
public final int getMonoscopicViewPolicy()

These methods set and return the monoscopic view policy, respectively.

C.7.3 Scene Antialiasing

public final boolean getSceneAntialiasingAvailable()

This method returns a status flag indicating whether scene antialiasing is available.

C.8 The PhysicalBody Object

The PhysicalBody object contains information concerning the end-user's physical body characteristics. The head parameters allow end-users to specify their own head's characteristics and thus to customize any Java 3D application so that it conforms to their unique geometry. The PhysicalBody object defines head parameters in the head coordinate system. It provides a simple and consistent coordinate frame for specifying such factors as the location of the eyes and thus the interpupillary distance.

The Head Coordinate System

The head coordinate system has its origin on the head's bilateral plane of symmetry, roughly halfway between the left and right eyes. The origin of the head coordinate system is known as the center eye. The positive X axis extends to the right. The positive Y axis extends up. The positive Z axis extends into the skull. Values are in meters.

Constructors
public PhysicalBody()

Constructs a default user PhysicalBody object with the following default eye and ear positions:

Left eye: -0.033, 0.0, 0.0
Right eye: 0.033, 0.0, 0.0
Left ear: -0.080, -0.030, 0.095
Right ear: 0.080, -0.030, 0.095
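From the default eye positions above, the implied interpupillary distance is 0.066 meter (66 mm). A small sketch recovering that value from a pair of eye positions (plain arrays stand in for Point3d here):

```java
// Sketch: computing the interpupillary distance as the Euclidean
// distance between the left and right eye positions, using the
// PhysicalBody default values quoted above.
public class Interpupillary {
    static double ipd(double[] leftEye, double[] rightEye) {
        double dx = rightEye[0] - leftEye[0];
        double dy = rightEye[1] - leftEye[1];
        double dz = rightEye[2] - leftEye[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        double[] left  = {-0.033, 0.0, 0.0};  // default left eye, meters
        double[] right = { 0.033, 0.0, 0.0};  // default right eye, meters
        System.out.println(ipd(left, right));
    }
}
```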

public PhysicalBody(Point3d leftEyePosition,
       Point3d rightEyePosition)
public PhysicalBody(Point3d leftEyePosition,
       Point3d rightEyePosition, Point3d leftEarPosition,
       Point3d rightEarPosition)

These constructors create a PhysicalBody object with the specified eye and ear positions.

Methods
public void getLeftEyePosition(Point3d position)
public void setLeftEyePosition(Point3d position)
public void getRightEyePosition(Point3d position)
public void setRightEyePosition(Point3d position)

These methods set and retrieve the position of the center of rotation of a user's left and right eyes in head coordinates.

public void getLeftEarPosition(Point3d position)
public void setLeftEarPosition(Point3d position)
public void getRightEarPosition(Point3d position)
public void setRightEarPosition(Point3d position)

These methods set and retrieve the position of the user head object's left and right ear positions in head coordinates.

public double getNominalEyeHeightFromGround()
public void setNominalEyeHeightFromGround(double height)

These methods set and retrieve the user's nominal eye height as measured from the ground to the center eye in the default posture. In a standard computer monitor environment, the default posture would be seated. In a multiple-projection display room environment, or a head-tracked environment, the default posture would be standing.

public void getNominalEyeOffsetFromNominalScreen(Vector3d offset)
public void setNominalEyeOffsetFromNominalScreen(Vector3d offset)

These methods set and retrieve the offset from the center eye to the center of the display screen. This offset distance allows an "Over the shoulder" view of the scene as seen by the end-user.

public String toString()

This method returns a string that contains the values of this PhysicalBody object.

C.9 The PhysicalEnvironment Object

The PhysicalEnvironment object contains information about the end-user's local physical environment. This includes information about tracking-sensor and audio-playback hardware, if present.

Constructors
public PhysicalEnvironment()

Constructs and initializes a new PhysicalEnvironment object with the following default sensor and audio fields and an array of sensorCount Sensor objects.

Tracking available : false
Sensor count: 10
Audio playback type: HEADPHONES
Distance from listener to speaker(s): 0.0
Angle offset of speaker(s): 0.0

C.9.1 Input Sensors

The sensor information provides real-time access to "continuous" input devices such as joysticks and trackers. It also contains two-degree-of-freedom joystick and six-degree-of-freedom tracker information. See Section 9.2, "Sensors," for more information. Java 3D uses Java AWT's event model for non-continuous input devices such as keyboards (see Chapter 9, "Input").

Methods

The PhysicalEnvironment object specifies the following methods pertaining to input sensors.

public void setSensorCount(int count)
public int getSensorCount()

These methods set and retrieve the count of the number of sensors stored within the PhysicalEnvironment object. It defaults to a small number of sensors. It should be set to the number of sensors available in the end-user's environment before initializing the Java 3D API.

public void setCoexistenceToTrackerBase(Transform3D t)
public void getCoexistenceToTrackerBase(Transform3D t)

Sets the coexistence coordinate system to tracker-base coordinate system transform. If head tracking is enabled, this transform is a calibration constant. If head tracking is not enabled, this transform is not used. This is used in both SCREEN_VIEW and HMD_VIEW modes.

public boolean getTrackingAvailable()

This method returns a status flag indicating whether or not tracking is available.

public Sensor getSensor(int i)

This method retrieves the specified sensor.

Physical Coexistence Policy

The setCoexistenceCenterInPworldPolicy and getCoexistenceCenterInPworldPolicy methods in the PhysicalEnvironment object store and retrieve the parameter that specifies how Java 3D places the user's eye-point as a function of current head-position during the calibration process. Java 3D permits one of three values: NOMINAL_HEAD, NOMINAL_FEET, or NOMINAL_SCREEN.

public int getCoexistenceCenterInPworldPolicy()
public void setCoexistenceCenterInPworldPolicy(int policy)

These methods set and retrieve this policy.

C.9.2 Audio Playback

Constants
public final static int HEADPHONES

Specifies that the audio playback will be through stereo headphones.

public final static int MONO_SPEAKER

Specifies that the audio playback will be through a single speaker some distance away from the user.

public final static int STEREO_SPEAKERS

Specifies that the audio playback will be through a pair of speakers, equally distant from the listener, each at some angle from the head coordinate system's Z axis. It is assumed that the speakers are at the same elevation and oriented symmetrically about the listener.

Methods

The PhysicalEnvironment object specifies the following methods that affect the audio playback of sound processed by Java 3D.

public void setAudioPlaybackType(int type)
public int getAudioPlaybackType()

These methods set and retrieve the type of audio playback device (HEADPHONES, MONO_SPEAKER, or STEREO_SPEAKERS) used to output the analog audio from rendering Java 3D Sound nodes.

public void setCenterEarToSpeaker(float distance)
public float getCenterEarToSpeaker()

These methods set and retrieve the distance in meters from the center ear (the midpoint between the left and right ears) to the one or more physical speaker transducers in the listener's environment. For monaural speaker playback, a typical distance from the listener to the speaker in a workstation cabinet is 0.76 meter. For stereo speakers placed at the sides of the display, this might be 0.82 meter.

public void setAngleOffaxisToSpeaker(float angle)
public float getAngleOffaxisToSpeaker()

These methods set and retrieve the angle in radians between the vectors from the center ear to each of the speaker transducers and the vectors from the center ear parallel to the head coordinate system's Z axis. Speakers placed at the sides of the computer display typically range between 0.28 and 0.35 radians (approximately 16 to 20 degrees).
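A minimal sketch converting a speaker off-axis angle measured in degrees into the radians these methods expect; the 16- and 20-degree values are illustrative measurements for speakers flanking a desktop display:

```java
// Sketch: degree-to-radian conversion for the speaker off-axis angle.
// The angles below are illustrative, not values queried from a real
// PhysicalEnvironment object.
public class SpeakerAngle {
    static double degreesToRadians(double degrees) {
        return degrees * Math.PI / 180.0;
    }

    public static void main(String[] args) {
        System.out.printf("16 deg = %.2f rad%n", degreesToRadians(16.0)); // 0.28
        System.out.printf("20 deg = %.2f rad%n", degreesToRadians(20.0)); // 0.35
    }
}
```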

C.10 Viewing in Head Tracked Environments

Section 8.5, "Generating a View," describes how Java 3D generates a view for a standard flat-screen display with no head tracking. In this section, we describe how Java 3D generates a view in a room-mounted, head-tracked display environment: either a computer monitor with shutter glasses and head tracking, or a multiple-wall display with head-tracked shutter glasses. Finally, we describe how Java 3D generates view matrices in a head-mounted and head-tracked display environment.

C.10.1 A Room-mounted Display (Computer Monitor) With Head-Tracking

When head tracking is combined with a room-mounted display environment, the ViewPlatform's origin and orientation serve as a base for constructing the view matrices. Additionally, Java 3D uses the end-user's head position and orientation to compute where the end-user's eyes are located in physical space. Each eye's position serves to offset the corresponding virtual eye's position relative to the ViewPlatform's origin. Each eye's position also serves to specify that eye's frustum, since the eye's position relative to a Screen3D uniquely specifies that eye's view frustum. Note that Java 3D will access the PhysicalBody object to obtain information describing the user's interpupillary distance and tracking hardware, values it needs to compute the end-user's eye positions from the head position information.
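A simplified, self-contained sketch (not the Java 3D implementation) of deriving the two physical eye positions from a tracked head position and the interpupillary distance taken from a PhysicalBody; only yaw is modeled, and all names and values are assumptions:

```java
// Sketch: each eye sits half the interpupillary distance to either side
// of the tracked head position, along the head's local X axis. A full
// implementation would use the complete head orientation; this sketch
// models only rotation about the vertical (Y) axis.
public class EyePositions {
    static double[] eye(double[] headPos, double yawRadians,
                        double ipd, boolean left) {
        double half = (left ? -0.5 : 0.5) * ipd;
        // Local X axis of a head yawed about the vertical axis
        double[] xAxis = { Math.cos(yawRadians), 0.0, -Math.sin(yawRadians) };
        return new double[] { headPos[0] + half * xAxis[0],
                              headPos[1],
                              headPos[2] + half * xAxis[2] };
    }

    public static void main(String[] args) {
        double[] head = {0.0, 1.7, 0.0};  // hypothetical tracked head position
        double ipd = 0.066;               // from PhysicalBody eye positions
        double[] l = eye(head, 0.0, ipd, true);
        double[] r = eye(head, 0.0, ipd, false);
        System.out.println(l[0] + " .. " + r[0]);
    }
}
```

Each resulting eye position would then feed the corresponding per-eye frustum computation described above.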

C.10.2 A Head-Mounted Display With Head-Tracking

In a head-mounted environment, the ViewPlatform's origin and orientation also serve as a base for constructing view matrices. And, as in the head-tracked, room-mounted environment, Java 3D also uses the end-user's head position and orientation to further modify the ViewPlatform's position and orientation. In a head-tracked, head-mounted display environment, an end-user's eyes do not move relative to their respective display screens; rather, the display screens move relative to the virtual environment. A rotation of the head by an end-user can radically affect the final view's orientation. In this situation, Java 3D combines the position and orientation from the ViewPlatform with the position and orientation from the head-tracker to form the view matrix. The view frustum, however, does not change, since the user's eyes do not move relative to their respective display screens, so Java 3D can compute the projection matrix once and cache the result.

If any of the parameters of a View object are updated, the implicit viewing transform (and thus the image) of any Canvas3D that references that View object changes accordingly.

C.11 Compatibility Mode

A camera-based view model allows application programmers to think about the images displayed on the computer screen as if a virtual camera took those images. Such a view model allows application programmers to position and orient a virtual camera within a virtual scene; to manipulate some parameters of the virtual camera's lens (for example, to specify its field of view); and to specify the locations of the near and far clip planes.

Java 3D allows applications to enable or disable compatibility mode, for use in room-mounted, non-head-tracked display environments, using the following methods. Camera-based viewing functions are available only in compatibility mode.

Methods
public final void setCompatibilityModeEnable(boolean flag)
public final boolean getCompatibilityModeEnable()

These methods turn compatibility mode on or off and return its current setting. Compatibility mode is disabled by default.


Note: Use of these view-compatibility functions will disable some of Java 3D's view model features, and limit the portability of Java 3D programs. These methods are primarily intended to help jump-start porting of existing applications.

C.11.1 Overview of the Camera-based View Model

The traditional camera-based view model, shown in Figure C-7, places a virtual camera inside a geometrically-specified world. The camera "captures" the view from its current location, orientation, and perspective. The visualization system then draws that view on the user's display device. The application controls the view by moving the virtual camera to a new location, by changing its orientation, by changing its field-of-view, or by controlling some other camera parameter.

The various parameters that users control in a camera-based view-model specify the shape of a viewing volume (known as a frustum because of its truncated pyramidal shape) and locate that frustum within the virtual environment. The rendering pipeline uses the frustum to decide which objects to draw on the display screen. The rendering pipeline does not draw objects outside the view frustum and it clips (partially draws) objects that intersect the frustum's boundaries.

Though a view frustum's specification may have many items in common with those of a physical camera, such as placement, orientation, and lens settings, some frustum parameters have no physical analog. Most noticeably, a frustum has two parameters not found on a physical camera: the near and far clip planes.

The locations of the near and far clip planes allow the application programmer to specify which objects Java 3D should not draw. Objects too far away from the current eye-point usually do not result in interesting images. Those too close to the eye-point might obscure the interesting objects. By carefully specifying near and far clip planes, an application programmer can control which objects the renderer will not draw.

From the perspective of the display device, the virtual camera's image-plane corresponds to the display screen. The camera's placement, orientation, and field-of-view determine the shape of the view frustum.

C.11.2 Using the Camera-based View Model

The camera-based view model allows Java 3D to bridge the gap between existing 3D code and Java 3D's view model. By using the camera-based view-model functions, a programmer retains the familiarity of the older view model but gains some of the flexibility afforded by Java 3D's new view model.

The traditional camera-based view model is supported in Java 3D by helper methods in the Transform3D object. These methods were explicitly designed to resemble as closely as possible the view functions of older packages, and thus should be familiar to most 3D programmers. The resulting Transform3D objects can be used to set compatibility mode transforms in the View object.

C.11.2.1 Creating a Viewing Matrix

The Transform3D object provides the following method to create a viewing matrix.

public void lookAt(Point3d eye, Point3d center, Vector3d up)

This is a utility method that specifies the position and orientation of a viewing transform. It works much like the gluLookAt function in OpenGL. The inverse of this transform can be used to control the ViewPlatform object within the scene graph. Alternatively, this transform can be passed directly to the View's VpcToEc transform via the compatibility mode viewing functions.
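The matrix this method constructs can be sketched in pure Java, following the well-known gluLookAt construction (row-major 4x4; the Transform3D class itself is not needed for the illustration). The eye point maps to the origin of Eye Coordinates, and the center point lands on the -Z axis.

```java
// Sketch of the viewing matrix lookAt builds (gluLookAt construction).
public class LookAt {
    public static double[][] lookAt(double[] eye, double[] center, double[] up) {
        double[] f = normalize(sub(center, eye));   // forward direction
        double[] s = normalize(cross(f, up));       // side (right) direction
        double[] u = cross(s, f);                   // recomputed orthogonal up
        return new double[][] {
            {  s[0],  s[1],  s[2], -dot(s, eye) },
            {  u[0],  u[1],  u[2], -dot(u, eye) },
            { -f[0], -f[1], -f[2],  dot(f, eye) },
            {  0,     0,     0,     1           }
        };
    }

    // Apply the viewing matrix to a 3D point (implicit w = 1).
    public static double[] apply(double[][] m, double[] p) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0]*p[0] + m[i][1]*p[1] + m[i][2]*p[2] + m[i][3];
        return r;
    }

    static double[] sub(double[] a, double[] b) {
        return new double[]{ a[0]-b[0], a[1]-b[1], a[2]-b[2] };
    }
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
    static double[] cross(double[] a, double[] b) {
        return new double[]{ a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0] };
    }
    static double[] normalize(double[] v) {
        double n = Math.sqrt(dot(v, v));
        return new double[]{ v[0]/n, v[1]/n, v[2]/n };
    }
}
```

With the eye at (0, 0, 5) looking at the origin, the eye maps to (0, 0, 0) and the origin maps to (0, 0, -5), five meters in front of the eye along -Z.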

C.11.2.2 Creating a Projection Matrix

The Transform3D object provides the following three methods to create a projection matrix. All three map points from Eye Coordinates (EC) to Clipping Coordinates (CC). Eye Coordinates are defined such that (0, 0, 0) is at the eye and the projection plane lies at z = -1.

public void frustum(double left, double right, double bottom, 
       double top, double near, double far)

The frustum method establishes a perspective projection with the eye at the apex of the view frustum. The arguments define the frustum and its associated perspective projection: (left, bottom, -near) and (right, top, -near) specify the points on the near clipping plane that map onto the lower-left and upper-right corners of the window, respectively, and the far argument specifies the distance to the far clipping plane. See Figure C-8.
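The corner mapping described above can be illustrated by writing the projection matrix out in the familiar OpenGL glFrustum form. Java 3D's exact clipping-coordinate convention may differ in detail; this sketch only demonstrates how the near-plane corners land on the corners of the canonical view volume.

```java
// Sketch: a perspective frustum matrix in the standard glFrustum form.
public class Frustum {
    public static double[][] frustum(double l, double r, double b, double t,
                                     double n, double f) {
        return new double[][] {
            { 2*n/(r-l), 0,          (r+l)/(r-l),  0           },
            { 0,         2*n/(t-b),  (t+b)/(t-b),  0           },
            { 0,         0,         -(f+n)/(f-n), -2*f*n/(f-n) },
            { 0,         0,         -1,            0           }
        };
    }

    // Transform a point and divide by w, yielding normalized device coordinates.
    public static double[] project(double[][] m, double x, double y, double z) {
        double[] p = { x, y, z, 1 };
        double[] c = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                c[i] += m[i][j] * p[j];
        return new double[]{ c[0]/c[3], c[1]/c[3], c[2]/c[3] };
    }

    public static void main(String[] args) {
        double[][] m = frustum(-1, 1, -0.75, 0.75, 1, 100);
        // (left, bottom, -near) maps (to within floating-point error) onto
        // the lower-left corner (-1, -1, -1) of the canonical volume.
        System.out.println(java.util.Arrays.toString(project(m, -1, -0.75, -1)));
    }
}
```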

public void perspective(double fovy, double aspect, double zNear, 
       double zFar)

The perspective method establishes a perspective projection with the eye at the apex of a symmetric view frustum, centered about the Z axis, with a fixed field of view. The arguments define the frustum and its associated perspective projection: zNear and zFar specify the distances to the near and far clipping planes; fovy specifies the field of view in the Y dimension; and aspect specifies the aspect ratio of the window. See Figure C-9.
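Because the frustum is symmetric, the perspective parameters reduce directly to frustum boundaries on the near plane. The sketch below (a hypothetical helper, assuming fovy is given in radians) shows that reduction.

```java
// Sketch (hypothetical helper): reducing symmetric perspective parameters
// to the equivalent frustum boundaries on the near clipping plane.
public class Perspective {
    // fovy is the Y field of view in radians; returns { left, right, bottom, top }.
    public static double[] toFrustum(double fovy, double aspect, double zNear) {
        double top = zNear * Math.tan(fovy / 2.0);  // half-height on the near plane
        double right = top * aspect;                // half-width from the aspect ratio
        return new double[]{ -right, right, -top, top };
    }

    public static void main(String[] args) {
        // A 90-degree Y field of view with a square window and near = 1
        // yields the unit frustum boundaries {-1, 1, -1, 1}.
        System.out.println(java.util.Arrays.toString(
            toFrustum(Math.PI / 2, 1.0, 1.0)));
    }
}
```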

public void ortho(double left, double right, double bottom, 
       double top, double near, double far)

The ortho method establishes a parallel projection. The arguments define a rectangular box used for projection: (left, bottom, -near) and (right, top, -near) specify the points on the near clipping plane that map onto the lower-left and upper-right corners of the window, respectively. The far argument specifies the distance to the far clipping plane. See Figure C-10.
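In a parallel projection the mapping is a simple linear rescaling of the box to the canonical volume, with no perspective division. The sketch below uses the standard glOrtho form to show the corner mapping; Java 3D's exact convention may differ in detail.

```java
// Sketch: a parallel (orthographic) projection in the standard glOrtho form.
public class Ortho {
    // Maps a point inside the box [l,r] x [b,t] x [-f,-n] into the
    // canonical [-1, 1] cube.
    public static double[] project(double l, double r, double b, double t,
                                   double n, double f,
                                   double x, double y, double z) {
        return new double[] {
            2*(x - l)/(r - l) - 1,     // x = l maps to -1, x = r maps to +1
            2*(y - b)/(t - b) - 1,     // y = b maps to -1, y = t maps to +1
            -2*(z + n)/(f - n) - 1     // z = -n maps to -1, z = -f maps to +1
        };
    }

    public static void main(String[] args) {
        // The (left, bottom, -near) corner maps to (-1, -1, -1).
        System.out.println(java.util.Arrays.toString(
            project(-1, 1, -1, 1, 1, 100, -1, -1, -1)));
    }
}
```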

C.11.2.3 Setting the Viewing Transform

The View object provides the following compatibility-mode methods that operate on the viewing transform.

public final void setVpcToEc(Transform3D vpcToEc)
public final void getVpcToEc(Transform3D vpcToEc)

These compatibility-mode methods set and retrieve the ViewPlatform Coordinates (VPC) to Eye Coordinates (EC) viewing transform. If compatibility mode is disabled, this transform is derived from other values and is read-only.

C.11.2.4 Setting the Projection Transform

The View object provides the following compatibility-mode methods that operate on the projection transform.

public final void setLeftProjection(Transform3D projection)
public final void getLeftProjection(Transform3D projection)
public final void setRightProjection(Transform3D projection)
public final void getRightProjection(Transform3D projection)

These compatibility mode functions specify a viewing frustum for the left and right eye that transforms points in Eye Coordinates (EC) to Clipping Coordinates (CC). If compatibility mode is disabled, the projection is derived from other values and is read-only. In monoscopic mode, only the left eye projection matrix is used.





Copyright © 1997, Sun Microsystems, Inc. All rights reserved.