The Moving Worlds VRML 2.0 Specification

Working Draft #2

Node Reference

May 9, 1996

This section provides a detailed description of each node in VRML 2.0. The table of contents at the top is organized functionally; the node descriptions themselves are listed alphabetically. (An alphabetical Index of Nodes and Fields is also available.)

This document's URL: http://vrml.sgi.com/moving-worlds/spec/nodesRef.html

File Syntax vs. Specification Syntax

Lights and Lighting

Grouping Nodes

Leaf Nodes

Geometry Nodes

Geometric Property Nodes

Appearance Node and Appearance Property Nodes

Media Property Nodes

Animation Interpolation Nodes


Appearance and Appearance Property Nodes

The Material, Texture, and TextureTransform appearance property nodes are always contained within fields of an Appearance node. The FontStyle node is always contained in the fontStyle field of a Text node.


Bindable Leaf Nodes

The Background, NavigationInfo, and Viewpoint leaf nodes behave as all leaf nodes, except that only one of each type can be active at any point in time. Thus, the browser maintains a stack for each type of bindable node (Background stack, NavigationInfo stack, and Viewpoint stack). Each of these nodes includes a bind eventIn and an isBound eventOut. The bind eventIn is used to push and pop a given node on its respective stack: a TRUE value sent to bind pushes the node to the top of the stack, and FALSE pops it. The isBound eventOut is sent when the given node's binding state changes (i.e. when it is pushed or popped). See below for details on the binding stack required for each type.

Bind Stack Behavior

Bindable Nodes


File syntax vs. specification syntax

In this document, the last item in the node specifications is the public interface for the node. The syntax for the public interface is the same as that for that node's prototype. This interface is the definitive specification of the fields, names, types, and default values for a given node. Note that this syntax is not the actual file format syntax. However, the parts of the interface that are identical to the file syntax are in bold. For example, the following defines the DirectionalLight node's public interface:

    DirectionalLight {  
      exposedField SFBool  on               TRUE 
      exposedField SFFloat intensity        1 
      exposedField SFFloat ambientIntensity 0
      exposedField SFColor color            1 1 1
      exposedField SFVec3f direction        0 0 -1
    }

Fields that have associated implicit set_ and _changed events are labeled exposedField. For example, the on field has a set_on input event and an on_changed output event. Exposed fields may be connected using ROUTE statements, and may be read and/or written by Script nodes.
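
For example, the on exposed field of a DirectionalLight can be driven by another node's eventOut. A minimal sketch (the DEF names are chosen for illustration; in a real scene the TouchSensor would influence sibling geometry):

DEF LAMP   DirectionalLight { on TRUE }
DEF SWITCH TouchSensor { }
ROUTE SWITCH.isActive TO LAMP.set_on   # light is on while pressed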

Note that this information is arranged in a slightly different manner in the actual file syntax. The keywords "field" or "exposedField" and the types of the fields (e.g. SFColor) are not specified when expressing a node in the file format. An example of the file format for the DirectionalLight is:

DirectionalLight {
  on               TRUE
  intensity        1
  ambientIntensity 0
  color            1 1 1
  direction        0 0 -1
}

Geometry Nodes

Geometry nodes must be contained by a Shape node - they are not leaf nodes and thus cannot be children of group nodes. The Shape node contains exactly one geometry node in its geometry field; this node must be one of the geometry node types described in this section.

A geometry node can appear only in the geometry field of a Shape node. Several geometry nodes also contain Coordinate, Color, Normal, and TextureCoordinate as geometry property nodes. These geometry property nodes are separated out as individual nodes so that instancing and sharing are possible between different geometry nodes. All geometry nodes are specified in a local coordinate system determined by the geometry's parent node(s).

Application of material, texture, and colors:
The final rendered look of a piece of geometry depends on the Material and Texture in the associated Appearance node, along with any Color node specified with the geometry (such as per-vertex colors for an IndexedFaceSet node). The following describes ideal behavior; implementations may be forced to approximate it.
Shape Hints Fields:
The ElevationGrid, Extrusion, and IndexedFaceSet nodes all have three SFBool fields that provide hints about the shape--whether it contains ordered vertices, whether the shape is solid, and whether it contains convex faces. These fields are ccw, solid, and convex.

The ccw field indicates whether the vertices are ordered in a counter-clockwise direction when the shape is viewed from the outside (TRUE). If the order is clockwise or unknown, this field value is FALSE. The solid field indicates whether the shape encloses a volume (TRUE). If nothing is known about the shape, this field value is FALSE. The convex field indicates whether all faces in the shape are convex (TRUE). If nothing is known about the faces, this field value is FALSE.

These hints allow VRML implementations to optimize certain rendering features. Optimizations that may be performed include enabling backface culling and disabling two-sided lighting. For example, if an object is solid and has ordered vertices, an implementation may turn on backface culling and turn off two-sided lighting. If the object is not solid but has ordered vertices, it may turn off backface culling and turn on two-sided lighting.

Crease Angle Field:
The creaseAngle field, used by the ElevationGrid, Extrusion, and IndexedFaceSet nodes, affects how default normals are generated. For example, when an IndexedFaceSet has to generate default normals, it uses the creaseAngle field to determine which edges should be smoothly shaded and which ones should have a sharp crease. The crease angle is the angle between surface normals on adjacent polygons. For example, a crease angle of .5 radians means that an edge between two adjacent polygonal faces will be smooth shaded if the normals to the two faces form an angle that is less than .5 radians (about 30 degrees). Otherwise, it will be faceted.
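
For instance, a minimal sketch of two faces sharing an edge (coordinates chosen for illustration):

Shape {
  geometry IndexedFaceSet {
    coord Coordinate { point [ 0 0 0,  2 0 0,  1 1 0,  1 -1 0.3 ] }
    coordIndex  [ 0, 1, 2, -1,  1, 0, 3, -1 ]
    creaseAngle 0.5   # the shared edge is smooth shaded only if the
                      # two face normals differ by less than 0.5 radians
  }
}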

Geometric Property Nodes

Geometric properties must be contained in the corresponding SFNode fields of geometry nodes such as the IndexedFaceSet, IndexedLineSet, and PointSet nodes. The following nodes are geometric properties: Color, Coordinate, Normal, and TextureCoordinate.

For example, the following IndexedFaceSet (contained in a Shape node) uses all four of the geometric property nodes to specify vertex coordinates, colors per vertex, normals per vertex, and texture coordinates per vertex (note that the material sets the overall transparency):

Shape {
  geometry IndexedFaceSet {
     coordIndex  [ 0, 1, 3, -1, 0, 2, 5, -1, ...]
     coord       Coordinate        { point [0.0 5.0 3.0, ...] }
     color       Color             { rgb [ 0.2 0.7 0.8, ...] }
     normal      Normal            { vector [0.0 1.0 0.0, ...] }
     texCoord    TextureCoordinate { point [0 1.0, ...] }
  }
  appearance Appearance { material Material { transparency 0.5 } }
}

Global Nodes

The Script and WorldInfo nodes are not part of the world's transformational hierarchy.

WorldInfo nodes are global nodes that affect everything in the scene. They can be used anywhere in the scene description and may appear in fields of a Script node. If more than one WorldInfo node appears in a file, the first one encountered while reading is used.


Grouping Nodes

Grouping nodes are container objects that have other nodes as children. Each grouping node treats its children in a different manner.


Interpolator Nodes

Interpolator nodes are designed for linear keyframed animation. That is, an interpolator is simply a linear function, f(t), defined by n values of f(t) and n corresponding values of t, where t ranges from 0.0 to 1.0. An interpolator evaluates the linear function given a value of t (via the set_fraction eventIn).

There are several different types of interpolator nodes, each based on the type of value that is interpolated (e.g. color vs. normal). All interpolator nodes share a common set of fields and semantics:

      exposedField MF<type>     values       [...]
      exposedField MFFloat      keys         [...]
      eventIn      SFFloat      set_fraction 
      eventOut     [S|M]F<type> outValue

The values field specifies the actual values of the linear function. The type of this multiple-valued field depends on the type of the interpolator (e.g. ColorInterpolator's values field is of type MFColor). Each value in the values field corresponds in order to a parameterized time in the keys field; therefore, the values field contains exactly as many values as the keys field contains keys. Values in the keys field are restricted to the 0.0 to 1.0 range; values outside this range are clamped to 0.0 or 1.0, respectively. The values in the keys field must be increasing and non-repeating - results are undefined if the key values decrease or repeat.

The set_fraction eventIn receives a float event in the 0.0 to 1.0 range and causes the interpolator function to evaluate. The results of the linear interpolation are sent to outValue eventOut.

Most interpolators output a single value to outValue. However, there are some exceptions, such as CoordinateInterpolator and NormalInterpolator, that send multiple-value results to outValue. In these cases, the values field is an n x m array of values, where n is the number of keys and m is the number of values per key. The following example illustrates a simple ScalarInterpolator that contains a list of floats (11.0, 99.0, and 33.0) and the keyframe times of each scalar (0.0, 0.5, and 1.0), and that outputs a single float value for a given time:

    ScalarInterpolator {
      exposedField MFFloat keys      [ 0.0,  0.5,  1.0]
      exposedField MFFloat values    [11.0, 99.0, 33.0]
      eventIn      SFFloat set_fraction
      eventOut     SFFloat outValue
    }

For a set_fraction value of 0.25, this ScalarInterpolator would send an output value of:

    eventOut SFFloat outValue 55.0
                         # = 11.0 + ((99.0-11.0)/(0.5-0.0)) * 0.25

Whereas the CoordinateInterpolator below defines an array of coordinates for each keyframe value and sends an array of coordinates as output:

    CoordinateInterpolator {
      exposedField MFFloat keys   [ 0.0,  0.5,  1.0]
      exposedField MFVec3f values [ 0  0  0,  10 10 30,  # 0.0
                                   10 20 10,  40 50 50,  # 0.5
                                   33 55 66,  44 55 65 ] # 1.0
      eventIn      SFFloat set_fraction
      eventOut     MFVec3f outValue
    }

In this case, there are two coordinates for every keyframe. The first two coordinates, (0, 0, 0) and (10, 10, 30), represent the value at keyframe 0.0; the second two, (10, 20, 10) and (40, 50, 50), represent the value at keyframe 0.5; and so on. If a set_fraction value of 0.25 (meaning 25% of the animation) is sent to this CoordinateInterpolator, the resulting output value is:

     eventOut MFVec3f outValue [ 5 10 5,  25 30 40 ]

Note: Given a sufficiently powerful scripting language, all of these interpolators could be implemented using Script nodes (browsers might choose to implement these as pre-defined prototypes of appropriately defined Script nodes). Keyframed animation is common enough, and performance-critical enough, to justify the inclusion of these nodes as built-in types.


Lights and Lighting

Objects are illuminated by the sum of all of the lights in the world. This includes the contribution of both the direct illumination from lights (PointLight, DirectionalLight, and SpotLight) and the ambient illumination from these lights. Ambient illumination results from the scattering and reflection of light originally emitted directly by the light sources. Therefore, ambient light is associated with the lights in the scene, each having an ambientIntensity field. The contribution of a single light to the overall ambient lighting is computed as:

    if ( light is "on" )
        ambientLight = intensity * ambientIntensity * color
    else
        ambientLight = (0,0,0)

This allows the light's overall brightness, both direct and ambient, to be controlled by changing the intensity. Renderers that do not support per-light ambient illumination may simply use this information to set the ambient lighting parameters when the world is loaded.
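
For example, a light that is on with intensity 0.8, ambientIntensity 0.25, and color 1 0.5 0 contributes an ambient term of:

    ambientLight = 0.8 * 0.25 * (1, 0.5, 0) = (0.2, 0.1, 0)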

PointLight and SpotLight illuminate all objects in the world that fall within the light's volume of influence, regardless of location within the file. PointLight defines this volume of influence as a sphere centered at the light (defined by a radius). SpotLight defines the volume of influence as a solid angle defined by a radius and a cutoff angle. DirectionalLights illuminate only the objects contained by the light's parent group node (including any descendent children of the group node).


Sensor Nodes

There are several different kinds of sensor nodes: ProximitySensor, TimeSensor, VisibilitySensor, and several kinds of geometric sensors. Sensors are leaf nodes in the hierarchy - they may be children of grouping nodes.

The ProximitySensor detects when the user navigates into a specified invisible region in the world. The TimeSensor is a stop watch that has no geometry or location associated with it - it is used to start and stop time-based nodes, such as interpolators. The VisibilitySensor detects when a specific object in the world becomes visible to the user and vice versa. Geometric sensor nodes detect user pointing events, such as the user clicking on a piece of geometry (i.e. TouchSensor). They are leaf nodes that exist in the local coordinate system determined by their place in the scene hierarchy.

Proximity, time, and visibility sensors are additive. Each one is processed independently of whether others exist or overlap.

Geometry sensors are activated when the user points to geometry that is influenced by a specific geometry sensor. Geometry sensors have influence over all geometry that is a descendent of the geometry sensor's parent group. Typically, the geometry sensor is a sibling of the geometry that it influences. In other cases, the geometry sensor is a sibling of groups which contain geometry (that is influenced by the geometry sensor). For a given user gesture, the lowest enabled geometry sensor in the hierarchy is activated - all other geometry sensors above it are ignored. The hierarchy is defined by the geometry leaf node which is activated and the entire hierarchy upward. If there are multiple geometry sensors tied for lowest, then each of these is activated simultaneously and independently. This last feature allows useful combinations of geometry sensors (e.g. TouchSensor and PlaneSensor).
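
For example, in the following sketch (geometry details elided) the TouchSensor influences both the sibling Shape and the geometry inside the sibling Group:

Group {
  children [
    DEF PRESS TouchSensor { }   # lowest enabled sensor for this geometry
    Shape { geometry Box { } }
    Group { children [ Shape { geometry Sphere { } } ] }
  ]
}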



Anchor

The Anchor grouping node causes some data to be fetched over the network when any of its children are chosen. If the data pointed to is a VRML world, then that world is loaded and displayed instead of the world of which the Anchor is a part. If another data type is fetched, it is up to the browser to determine how to handle that data; typically, it will be passed to an appropriate, already-open (or newly spawned) general Web browser.

Exactly how a user "chooses" a child of the Anchor is up to the VRML browser; typically, clicking on one of its children with the mouse will result in the new scene replacing the current scene. An Anchor with an empty ("") url does nothing when its children are chosen.

The url field contains an arbitrary list of URLs. If multiple URLs are presented, this expresses a descending order of preference. A browser may display a lower-preference URL if the higher-preference file is not available. See the section on URLs and URNs.

The description field in the Anchor allows for a friendly prompt to be displayed as an alternative to the URL in the url field. Ideally, browsers will allow the user to choose the description, the URL, or both to be displayed for a candidate Anchor.

The parameters exposed field may be used to supply any additional information to be interpreted by the VRML or HTML browser. Each string should consist of "keyword=value" pairs. For example, some browsers allow the specification of a 'target' for a link, to display a link in another part of the HTML document; the parameters field is then:

Anchor {
  parameters [ "target=name_of_frame" ]
  ...
}

An Anchor may be used to take the viewer to a particular viewpoint in a virtual world by specifying a URL ending with "#viewpointName", where "viewpointName" is the name of a viewpoint defined in the world. For example:

Anchor {
  url "http://www.school.edu/vrml/someScene.wrl#OverView"
  children [ Box { } ]
}

specifies an anchor that puts the viewer in the "someScene" world looking from the viewpoint named "OverView" when the Box is chosen. If no world is specified, then the current scene is implied; for example:

Anchor {
  url "#Doorway"
  children [ Sphere { } ]
}

will take the viewer to the viewpoint defined by the "Doorway" viewpoint in the current world when the sphere is chosen.

Anchor {
  exposedField MFNode   children    [ ]
  exposedField SFString description "" 
  exposedField MFString parameters  [ ]
  exposedField MFString url         [ ]
}

Prototype definition:

PROTO Anchor [
  exposedField MFString url         [ ]
  exposedField SFString description "" 
  exposedField MFString parameters  [ ] 
  exposedField MFNode   children    [ ]
] {
  Group {
    children [
      DEF CS TouchSensor { }
      Group { children IS children }
    ]
  }
  DEF ASCRIPT Script {
    mustEvaluate TRUE

    field MFString name IS url
    eventIn SFBool loadWorld
    #
    # Script must load new world (using loadWorld() Script API)
    # when TouchSensor is clicked
    #
  }
  ROUTE CS.isActive TO ASCRIPT.loadWorld
}

Appearance

The Appearance node occurs only within the appearance field of a Shape node. The value for any of the fields in this node can be NULL. However, if the field contains anything, it must contain one specific type of node. Specifically, the material field, if specified, must contain a Material node. The texture field, if specified, must contain one of the various Texture nodes (ImageTexture, MovieTexture, or PixelTexture). The textureTransform field, if specified, must contain a TextureTransform node.

Appearance {
  exposedField SFNode material          NULL
  exposedField SFNode texture           NULL
  exposedField SFNode textureTransform  NULL
}


AudioClip

The AudioClip node represents a sound that is pre-loaded by the browser, can be started at any time, and has a known duration. It can be used as the source for any VRML Sound node.

The url field specifies the URL from which the sound is loaded. The URL may contain sound data in any format for which there is a MIME type. However, for basic compatibility, browsers must support at least the WAVE file format in uncompressed PCM format. It is strongly recommended that browsers also support the MIDI file type 1 sound format. MIDI files are presumed to use the General MIDI patch set. Sounds should be loaded when the sound node is loaded.

If multiple URLs are specified, then this expresses a descending order of preference. A browser may use a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. (Also see the section on URNs.) Multiple URLs may also specify different sound formats so the browser should use the highest preference file that is in a format that it understands.

Browsers may limit the maximum number of sounds that can be played simultaneously and should use the guidelines specified with the Sound node to determine which sounds are actually played.

The description field is a textual description of the sound, which may be displayed in addition to or in place of playing the sound.

The loop field specifies whether or not the sound is constantly repeated. By default, the sound is played only once. If the loop field is FALSE, the sound has a length, "length", which is not specified in the VRML file but is implicit in the sound file referenced by the url field. If the loop field is TRUE, the sound has an infinite length.

The startTime field specifies the time at which the sound should start playing. The stopTime field may be used to make a sound stop playing some time after it has started.

The pitch field specifies a multiplier for the rate at which sampled sound is played. Changing the pitch field affects the pitch of a sound. If pitch is set to 2.0, the sound should be played one octave higher than normal, which corresponds to playing it twice as fast. The proper implementation of the pitch control for MIDI (or other note sequence sound clips) is to multiply the tempo of the playback by the pitch value and adjust the MIDI Coarse Tune and Fine Tune controls to achieve the proper pitch change.
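
For example, a sketch of a clip played one octave up (the sound file name is hypothetical):

AudioClip {
  url   "chime.wav"   # hypothetical sample
  pitch 2.0           # played twice as fast, one octave higher
}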

The duration eventOut is sent whenever there is a new value for the "normal" duration of the clip. Typically this will occur only when the url field is changed, indicating that the clip is playing a different sound source. The duration is the length of time in seconds that the sound will play when the pitch is set to 1.0. Changing the pitch field should not trigger a duration event.

The isActive eventOut can be used by other nodes to determine if the clip is currently being played (or at least in contention to be played) by a Sound node. The following algorithm expresses the conditions that determine when isActive is TRUE and when it is FALSE.

With the current time now and the duration of the sound as length, the rules are as follows:

    if (now < startTime)
        isActive = FALSE
    else if (now >= startTime + length)
        isActive = FALSE
    else if ((stopTime > startTime) && (now >= stopTime))
        isActive = FALSE
    else
        isActive = TRUE

Whenever startTime, stopTime, or now changes, the above rules need to be applied to figure out if the sound is playing. If it is, then it should be playing the bit of sound at (now - startTime) or, if it is looping, fmod( now - startTime, realLength ).
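
For example, applying these rules with illustrative values of startTime = 10, stopTime = 0 (ignored, since it is not greater than startTime), length = 5, and loop = FALSE:

    now =  8   -->  isActive = FALSE  (before startTime)
    now = 12   -->  isActive = TRUE   (playing the sound at offset 2)
    now = 16   -->  isActive = FALSE  (past startTime + length)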

If a set_startTime event is received while the AudioClip is active, then that set_startTime event is ignored (the startTime field is not changed, and a startTime_changed eventOut is NOT generated). An AudioClip may be re-started while it is active by sending it a set_stopTime "now" event (which will cause the AudioClip to become inactive) and then sending it a set_startTime event (setting it to "now" or any other starting time, in the future or past).

AudioClip {
  exposedField   SFString description  ""
  exposedField   SFBool   loop         FALSE
  exposedField   SFFloat  pitch        1.0
  exposedField   SFTime   startTime    0
  exposedField   SFTime   stopTime     0
  exposedField   MFString url          [ ]
  eventOut       SFTime   duration
  eventOut       SFBool   isActive
}

Background

The Background node is used to specify a color-ramp backdrop that simulates ground and sky planes, as well as an environment texture, or panorama, that is placed behind all geometry in the scene and in front of the backdrop.

Background nodes are Bindable Leaf Nodes, and thus there exists a Background stack in the browser, in which the topmost Background on the stack is the currently active one. To push a Background onto the top of the stack, a TRUE value is sent to the bind eventIn on the specific Background. Once active, the Background is bound to the browser's view. A FALSE value of bind pops the Background from the stack and unbinds it from the browser's view. See Bindable Leaf Nodes for more details on the Background stack.
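
For example, a minimal sketch (the DEF names are chosen for illustration; in a real scene the TouchSensor would influence sibling geometry):

DEF NIGHT Background { skyColor [ 0 0 0 ] }
DEF NIGHTSWITCH TouchSensor { }
ROUTE NIGHTSWITCH.isActive TO NIGHT.bind  # TRUE pushes NIGHT, FALSE pops it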

The background is not affected by translations or scales in the hierarchy. Rotations in the hierarchy rotate the background as they would any other geometric object.

The backdrop is conceptually a sphere with an infinite radius, painted with a smooth gradation of ground colors (starting with a circle straight downward and rising in concentric bands up to the horizon) and a separate gradation of sky colors (starting with a circle straight upward and falling in concentric bands down to the horizon). (It's acceptable to implement the backdrop as a cube painted in concentric square rings instead of as a sphere.) The groundRange field is a list of floating point values that indicate the cutoff for each groundColor value. Its implicit initial value is 0 radians (downward), and the final value given indicates the elevation angle of the horizon, where the ground color ramp and the sky color ramp meet. The skyRange field implicitly starts at 0 radians (upward) and works its way down to pi radians. If groundColor is empty, no ground colors are used.

The posX, negX, posY, negY, posZ, and negZ fields define a background panorama, between the backdrop and the world's geometry. The panorama consists of six images, each of which is mapped onto a face of a cube surrounding the world. Transparency values in the panorama images specify that the panorama is transparent in particular places, allowing the groundColor and skyColor values to show through. (Often, the posY and negY images will not be specified, to allow sky and ground to show. The other four images may depict mountains or other distant scenery.) By default, there is no panorama.

The first Background node found during reading of the world is used as the initial background. Subsequent Background nodes are ignored. The current background may be changed by Script node API calls.

Ground colors, sky colors, and panoramic images do not translate with respect to the viewer, though they do rotate with respect to the viewer. That is, the viewer can never get any closer to the background, but can turn to examine all sides of the panorama cube, and can look up and down to see the concentric rings of ground and sky (if visible).
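
An example of the file syntax for a Background with a simple color backdrop and four panorama images (the image file names are hypothetical):

Background {
  skyColor    [ 0.0 0.1 0.8,  0.6 0.7 1.0 ]  # zenith blue to pale horizon
  skyRange    [ 1.571 ]                      # horizon at 90 degrees
  groundColor [ 0.1 0.1 0.0,  0.3 0.25 0.2 ]
  groundRange [ 1.571 ]
  posX "east.png"    # posY and negY are omitted so that the
  negX "west.png"    # sky and ground colors show through
  posZ "south.png"
  negZ "north.png"
}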

Background {
  exposedField MFColor  groundColor  [ 0.14 0.28 0.00, # light green
                                       0.09 0.11 0.00 ]# to dark green
  exposedField MFFloat  groundRange  [ .785 ]   # horizon = 45 degrees
  exposedField MFColor  skyColor     [ 0.02 0.00 0.26, # twilight blue
                                       0.02 0.00 0.65 ]# to light blue 
  exposedField MFFloat  skyRange     [ .785 ]   # horizon = 45 degrees
  exposedField MFString posX [ ]
  exposedField MFString negX [ ]
  exposedField MFString posY [ ]
  exposedField MFString negY [ ]
  exposedField MFString posZ [ ]
  exposedField MFString negZ [ ]
  eventIn      SFBool   bind 
  eventOut     SFBool   isBound
}

Billboard

The Billboard node is a grouping node which modifies its coordinate system so that the Billboard node's local z-axis turns to point at the camera. The Billboard node has children which may be other grouping or leaf nodes. This allows you to billboard any geometry.

The axisOfRotation field specifies which axis to use to perform the rotation. This axis is defined in the local coordinates of the billboard node. The default (0,1,0) is useful for objects such as images of trees and lamps positioned on a ground plane. But when an object is oriented at an angle, for example, on the incline of a mountain, then the axisOfRotation may also need to be oriented at a similar angle.
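
For example, a minimal sketch of the tree case mentioned above:

Billboard {
  axisOfRotation 0 1 0   # turn about the local y-axis
  children [
    Shape { ... }        # e.g. a flat textured polygon depicting a tree
  ]
}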

A special case of billboarding is screen-alignment -- the object rotates to always stay aligned with the camera even when the camera elevates, pitches and rolls. This special case is distinguished by setting the axisOfRotation to (0, 0, 0).

To rotate the billboard to face the camera, you determine the line between the billboard's origin and the camera's origin; call this the billboard-to-camera line. The axisOfRotation and the billboard-to-camera line define a plane. The local z-axis of the billboard is then rotated into that plane, pivoting around the axisOfRotation.

If the axisOfRotation and the billboard-to-camera line are coincident (the same line), then the plane cannot be established, and the rotation results of the billboard are undefined. For example, if the axisOfRotation is set to (0,1,0) (the y-axis) and the camera flies over the object, then the object will spin as the camera passes directly over the y-axis; the rotation is undefined at the pole. Another example of this ill-defined behavior occurs when the author sets the axisOfRotation to (0,0,1) (the z-axis) and sets the camera to look directly down the z-axis of the object.

Billboard {
  exposedField MFNode      children         [ ]
  exposedField SFVec3f     axisOfRotation   0 1 0 
}

Prototype definition:

PROTO Billboard [
  exposedField MFNode      children         [ ]
  exposedField SFVec3f     axisOfRotation   0 1 0 
] {
  TBD ...  [Transform plus Script node]
}

Box

This node represents a rectangular box aligned with the coordinate axes. By default, the box is centered at (0,0,0) and measures 2 units in each dimension, from -1 to +1. A box's width is its extent along its object-space X axis, its height is its extent along the object-space Y axis, and its depth is its extent along its object-space Z axis.

Textures are applied individually to each face of the box; the entire texture goes on each face. On the front, back, right, and left sides of the box, the texture is applied right side up. On the top, the texture appears right side up when the top of the box is tilted toward the user. On the bottom, the texture appears right side up when the top of the box is tilted towards the -Z axis.

Box {
  field    SFVec3f size  2 2 2 
}
[PROTO - TBD]

Collision

The Collision grouping node specifies which objects in the scene represent navigation obstacles to the browser. For example, it is useful to keep users from walking through walls in a building, or to limit the user to certain restricted regions of the scene. What happens when the user navigates into a collidable object is defined by the browser. For example, when the user comes sufficiently close to an object to trigger a collision, the browser may have the user bounce off the object or simply come to a stop.

The children of a Collision node are identical to the children of a Group node, with the exception that Collision's children are checked for collision with the user during navigation. If desired, a proxy can be supplied, and this proxy will be checked for collision in place of the actual children objects (see description of the proxy field, below).

By default, all objects in the scene are collidable. If there are no Collision nodes specified in a scene, then browsers are required to check for user collision during navigation. The collide field in this node allows collision detection to be turned off, in which case the children of the Collision node will not be checked for collision, even though they will be drawn.

Since collision with arbitrarily complex geometry is computationally expensive, one method of increasing performance is to define an alternate geometry, a proxy, to collide against. The collision proxy, defined in the proxy field, is any valid VRML group or leaf node. The proxy is used for collision detection only; it is never drawn, and any non-geometric nodes within it are ignored. For example, a proxy can be as crude as a simple bounding box or bounding sphere, or could be more detailed, such as the convex hull of a polyhedron.
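
For instance, a sketch of a crude bounding-box proxy (sizes chosen for illustration):

Collision {
  proxy Shape { geometry Box { size 20 10 20 } } # collided, never drawn
  children [ ... ]  # detailed geometry: drawn, but not collided against
}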

Proxy definitions can include recursive Collision nodes. Each level of the Collision is evaluated normally. For example, a Collision node can contain a proxy definition that contains another Collision node with a proxy that contains another Collision node, and so on:

Collision {
  collide TRUE
  proxy Collision {
    collide TRUE
    proxy Collision {
      collide TRUE
      children [ Box {}  ]
    }
  }
  children [ ... ] # real geometry of the objects
}

If the value of the collide field is FALSE, then collision detection is not performed with the children, proxy, or any Collision nodes which are descendents of this node. If the root node of a scene is a Collision node with the collide field set to FALSE, then collision detection is disabled for the entire scene, regardless of whether descendent Collision nodes have set collide TRUE.

If the value of the collide field is TRUE and the proxy field is non-NULL, then the proxy field defines the scene against which collision detection is performed. If the proxy value is NULL, the actual children of the Collision node are collided against.

If children is empty, collide is TRUE, and a proxy is specified, then collision detection is done against the proxy but nothing is displayed -- this is a way of colliding against "invisible" geometry.

The collision eventOut generates an event specifying the time when the user intersects the Collision node. An ideal implementation computes the exact time of intersection. Implementations may approximate the ideal by sampling the positions of collidable objects and the user. Refer to the NavigationInfo node for parameters that control the user's size.

Every Collision node that the user collides with generates an event. Nested or recursive Collision nodes (either as children of each other or as proxies of each other) also send events if collision occurs. Therefore, it is entirely possible that several Collision nodes will generate events simultaneously.

Collision { 
  exposedField MFNode children  []
  exposedField SFBool collide   TRUE
  field        SFNode proxy     NULL
  eventOut     SFTime collision
}

Color

This node defines a set of RGB colors to be used in the color fields of an IndexedFaceSet, IndexedLineSet, or PointSet node.

Color nodes are only used to specify multiple colors for a single piece of geometry, such as a different color for each face or vertex of an IndexedFaceSet. A Material node is used to specify the overall material parameters of a lighted geometry. If both a Material and a Color node are specified for a geometry, the colors should ideally replace the diffuse component of the material.

Textures take precedence over colors; specifying both a Texture and a Color node for a geometry will result in the Color node being ignored.

[Note: Some browsers may not support this functionality, in which case an average, overall color should be computed and used instead of specifying colors per vertex.]

Color {
  exposedField MFColor rgb  []
}

ColorInterpolator

This node interpolates among a set of MFColor values, to produce an SFColor outValue event. The number of colors in the values field must be equal to the number of keyframe times in the keys field. The color values are linearly interpolated in RGB space.

ColorInterpolator {
  exposedField MFFloat keys      []
  exposedField MFColor values    []
  eventIn      SFFloat set_fraction
  eventOut     SFColor outValue
}

The following example file syntax interpolates from red to green to blue in a 10 second cycle:

DEF myColor ColorInterpolator {
  keys      [   0.0,    0.5,    1.0 ]
  values    [ 1 0 0,  0 1 0,  0 0 1 ] # red, green, blue
}
DEF myClock TimeSensor {
  cycleInterval 10.0      # 10 second animation
  loop          TRUE      # infinitely cycling animation
}
ROUTE myClock.fraction TO myColor.set_fraction

Prototype definition:

PROTO ColorInterpolator [
  exposedField MFFloat keys         []
  exposedField MFColor values       []
  eventIn      SFFloat set_fraction 
  eventOut     SFColor outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFColor values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     SFColor outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}

Coordinate

This node defines a set of 3D coordinates to be used in the coord field of vertex-based geometry nodes (such as IndexedFaceSet, IndexedLineSet, and PointSet).

Coordinate {
  exposedField MFVec3f point  []
}

Cone

This node represents a simple cone whose central axis is aligned with the Y axis. By default, the cone is centered at (0,0,0) and has a size of -1 to +1 in all three directions. The cone has a radius of 1 at the bottom and a height of 2, with its apex at 1 and its bottom at -1.

The cone has two parts: the side and the bottom. Each part has an associated SFBool field that specifies whether it is visible (TRUE) or invisible (FALSE).

When a texture is applied to a cone, it is applied differently to the sides and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cone. The texture has a vertical seam at the back, intersecting the YZ plane. For the bottom, a circle is cut out of the texture square and applied to the cone's base circle. The texture appears right side up when the top of the cone is rotated towards the -Z axis.

Cone {
  field     SFFloat   bottomRadius 1
  field     SFFloat   height       2
  field     SFBool    side         TRUE
  field     SFBool    bottom       TRUE
}

[PROTO - TBD]

CoordinateInterpolator

This node linearly interpolates among a set of MFVec3f values. This would be appropriate for interpolating vertex positions for a geometric morph.

The number of coordinates in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many coordinates will be contained in the outValue events.

CoordinateInterpolator {
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     MFVec3f outValue
}

Prototype definition:

PROTO CoordinateInterpolator [
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     MFVec3f outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFVec3f values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     MFVec3f outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}

Cylinder

This node represents a simple capped cylinder centered around the Y axis. By default, the cylinder is centered at (0,0,0) and has a default size of -1 to +1 in all three dimensions. You can use the radius and height fields to create a cylinder with a different size.

The cylinder has three parts: the side, the top (Y = +1) and the bottom (Y = -1). Each part has an associated SFBool field that indicates whether the part is visible (TRUE) or invisible (FALSE).

When a texture is applied to a cylinder, it is applied differently to the sides, top, and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cylinder. The texture has a vertical seam at the back, intersecting the YZ plane. For the top and bottom, a circle is cut out of the texture square and applied to the top or bottom circle. The top texture appears right side up when the top of the cylinder is tilted toward the +Z axis, and the bottom texture appears right side up when the top of the cylinder is tilted toward the -Z axis.

Cylinder {
  field    SFFloat   radius  1
  field    SFFloat   height  2
  field    SFBool    side    TRUE
  field    SFBool    top     TRUE
  field    SFBool    bottom  TRUE
}
[PROTO - TBD]

CylinderSensor

[NOTE: This needs a serious re-write.]

The CylinderSensor maps pointing-device (e.g. mouse or wand) dragging motion into a rotation around the Y axis of its local space. Like the other pointing sensors (DiskSensor, PlaneSensor, SphereSensor, TouchSensor), the CylinderSensor uses all of the geometry contained by its parent node to determine whether a hit occurs.

minAngle and maxAngle may be set to clamp rotation events to a range of values (measured in radians about the local Y axis). If minAngle is greater than maxAngle, rotation events are not clamped.
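
For example, a sketch that clamps rotation events to plus or minus 90 degrees:

CylinderSensor {
  minAngle -1.571
  maxAngle  1.571
}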

Upon the initial click down on the CylinderSensor's geometry, the specific point clicked determines the radius of the cylinder used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this cylinder, or in the plane perpendicular to the view vector if the cursor moves off this cylinder. An onCylinder TRUE event is generated at the initial click down; thereafter, onCylinder FALSE/TRUE events are generated if the pointing device is dragged off/on the cylinder.

CylinderSensor {
  exposedField SFFloat    minAngle   0
  exposedField SFFloat    maxAngle   0
  exposedField SFBool     enabled    TRUE
  eventOut     SFBool     isActive
  eventOut     SFVec3f    trackPoint
  eventOut     SFRotation rotation
  eventOut     SFBool     onCylinder
}
[PROTO - TBD]

DirectionalLight

The DirectionalLight node defines a directional light source that illuminates along rays parallel to a given 3-dimensional vector. See Lights and Lighting for an explanation of ambient lighting.

A directional light source illuminates only the objects in its enclosing Group. The light illuminates everything within this coordinate system, including the objects that precede it in the scene graph--for example:

Transform {
  children [
    Shape { ... }
    DirectionalLight { ... } # lights the preceding shape
  ]
}

Some low-end renderers do not support the concept of per-object lighting. This means that placing DirectionalLights inside local coordinate systems, which implies lighting only the objects beneath the Transform with that light, is not supported in all systems. For the broadest compatibility, lights should be placed at outermost scope.

DirectionalLight {
  exposedField SFBool  on                TRUE 
  exposedField SFFloat intensity         1 
  exposedField SFFloat ambientIntensity  0 
  exposedField SFColor color             1 1 1
  exposedField SFVec3f direction         0 0 -1
}
[PROTO - TBD]

DiskSensor

The DiskSensor maps dragging motion into a rotation around the Z axis of its local space. The feel of the rotation is as if you were scratching on a record turntable.

minAngle and maxAngle may be set to clamp rotation events to a range of values as measured in radians about the Z axis. If minAngle is greater than maxAngle, rotation events are not clamped. trackPoint events provide unclamped drag position in the XY plane.

DiskSensor {
  exposedField SFFloat    minAngle   0
  exposedField SFFloat    maxAngle   0
  exposedField SFBool     enabled    TRUE
  eventOut     SFBool     isActive
  eventOut     SFVec3f    trackPoint
  eventOut     SFRotation rotation
}
[PROTO - TBD]

ElevationGrid

This node creates a rectangular grid of varying height, especially useful in modeling terrain. The model is primarily described by a scalar array of height values that specify the height of the surface above each point of the grid.

The verticesPerRow and verticesPerColumn fields indicate the number of grid points in the X and Z directions, respectively, defining a grid of (verticesPerRow-1) x (verticesPerColumn-1) rectangles. (Note that the number of columns of vertices is defined by verticesPerRow and the number of rows of vertices is defined by verticesPerColumn. Rows are numbered from 0 through verticesPerColumn-1; columns are numbered from 0 through verticesPerRow-1.)

The vertex locations for the rectangles are defined by the height field and the gridStep field:

Thus, the vertex corresponding to the ith row and jth column is placed at

( gridStep[0] * j, height[ i*verticesPerRow + j ], gridStep[1] * i )

in object space, where

0 <= i < verticesPerColumn, and

0 <= j < verticesPerRow.

All points in a given row have the same Z value, with row 0 having the smallest Z value. All points in a given column have the same X value, with column 0 having the smallest X value.
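
For example, a minimal sketch of a 3 x 2 grid (values chosen for illustration); applying the formula above places the six vertices at (0,1,0), (5,2,0), (10,3,0) for row 0 and (0,4,5), (5,5,5), (10,6,5) for row 1:

ElevationGrid {
  verticesPerRow    3
  verticesPerColumn 2
  gridStep          5 5
  height [ 1 2 3,    # row 0
           4 5 6 ]   # row 1
}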

The default texture coordinates range from [0,0] at the first vertex to [1,1] at the far side of the diagonal. The S texture coordinate will be aligned with X, and the T texture coordinate with Z.

The colorPerVertex field determines whether colors (if specified in the color field) should be applied to each vertex or each quadrilateral of the ElevationGrid. If colorPerVertex is FALSE and the color field is not NULL, then the color field must contain a Color node containing at least (verticesPerColumn-1)*(verticesPerRow-1) colors. If colorPerVertex is TRUE and the color field is not NULL, then the color field must contain a Color node containing at least verticesPerColumn*verticesPerRow colors.

See the introductory Geometry section for a description of the ccw, solid, and creaseAngle fields.

By default, the rectangles are defined with a counterclockwise ordering, so the Y component of the normal is positive. Setting the ccw field to FALSE reverses the normal direction. Backface culling is enabled when the ccw field and the solid field are both TRUE (the default).

ElevationGrid {
  field        SFInt32  verticesPerColumn 0
  field        SFInt32  verticesPerRow    0
  field        SFVec2f  gridStep          1 1
  field        MFFloat  height            [ ]
  exposedField SFNode   color             NULL
  field        SFBool   colorPerVertex    TRUE
  exposedField SFNode   normal            NULL
  field        SFBool   normalPerVertex   TRUE
  exposedField SFNode   texCoord          NULL
  field        SFBool   ccw               TRUE
  field        SFBool   solid             TRUE
  field        SFFloat  creaseAngle       0
}
[PROTO - TBD]

Extrusion

The Extrusion node is used to define shapes based on a two dimensional cross section extruded along a three dimensional spine. The cross section can be scaled and rotated at each spine point to produce a wide variety of shapes.

An Extrusion is defined by a 2D crossSection piecewise linear curve (described as a series of connected vertices), a 3D spine piecewise linear curve (also described as a series of connected vertices), a list of 2D scale parameters, and a list of 3D orientation parameters. Shapes are constructed as follows: The cross-section curve, which starts as a curve in the XZ plane, is first scaled about the origin by the first scale parameter (first value scales in X, second value scales in Z). It is then rotated about the origin by the first orientation parameter, and translated by the vector given as the first vertex of the spine curve. It is then extruded through space along the first segment of the spine curve. Next, it is scaled and rotated by the second scale and orientation parameters and extruded by the second segment of the spine, and so on.

A transformed cross section is found for each joint (that is, at each vertex of the spine curve, where segments of the extrusion connect), and the joints and segments are connected to form the surface. No check is made for self-penetration. Each transformed cross section is determined as follows:

  1. Start with the cross section as specified, in the XZ plane.
  2. Scale it about (0, 0, 0) by the value for scale given for the current joint.
  3. Apply a rotation so that when the cross section is placed at its proper location on the spine it will be oriented properly. Essentially, this means that the cross section's Y axis (up vector coming out of the cross section) is rotated to align with an approximate tangent to the spine curve.

    For all points other than the first or last: The tangent for spine[i] is found by normalizing the vector defined by (spine[i+1] - spine[i-1]).

    If the spine curve is closed: The first and last points need to have the same tangent. This tangent is found as above, but using the points spine[0] for spine[i], spine[1] for spine[i+1] and spine[n-2] for spine[i-1], where spine[n-2] is the next to last point on the curve. The last point in the curve, spine[n-1], is the same as the first, spine[0].

    If the spine curve is not closed: The tangent used for the first point is just the direction from spine[0] to spine[1], and the tangent used for the last is the direction from spine[n-2] to spine[n-1].

    In the simple case where the spine curve is flat in the XY plane, these rotations are all just rotations about the Z axis. In the more general case where the spine curve is any 3D curve, you need to find the destinations for all 3 of the local X, Y, and Z axes so you can completely specify the rotation. The Z axis is found by taking the cross product of

    (spine[i-1] - spine[i]) and (spine[i+1] - spine[i]).

    If the three points are collinear then this value is zero, so take the value from the previous point. Once you have the Z axis (from the cross product) and the Y axis (from the approximate tangent), calculate the X axis as the cross product of the Y and Z axes.

  4. Given the plane computed in step 3, apply the orientation to the cross-section relative to this new plane. Rotate it counter-clockwise about the axis and by the angle specified in the orientation field at that joint.
  5. Finally, the cross section is translated to the location of the spine point.

Surfaces of revolution: If the cross section is an approximation of a circle and the spine is straight, then the Extrusion is equivalent to a surface of revolution, where the scale parameters define the size of the cross section along the spine.

Cookie-cutter extrusions: If the scale is 1, 1 and the spine is straight, then the cross section acts like a cookie cutter, with the thickness of the cookie equal to the length of the spine.
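
For example, a sketch of a cookie-cutter extrusion using the default square cross section:

Extrusion {
  crossSection [ 1 1, -1 1, -1 -1, 1 -1, 1 1 ]  # the default square
  spine        [ 0 0 0, 0 4 0 ]                 # straight, 4 units long
}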

Bend/twist/taper objects: These shapes are the result of using all fields. The spine curve bends the extruded shape defined by the cross section, the orientation parameters twist it around the spine, and the scale parameters taper it (by scaling about the spine).

Extrusion has three parts: the sides, the beginCap (the surface at the initial end of the spine) and the endCap (the surface at the final end of the spine). Each cap has an associated SFBool field that indicates whether it exists (TRUE) or doesn't exist (FALSE).

When the beginCap or endCap fields are specified as TRUE, planar cap surfaces will be generated regardless of whether the crossSection is a closed curve. (If crossSection isn't a closed curve, the caps are generated as if it were -- equivalent to adding a final point to crossSection that's equal to the initial point. Note that an open surface can still have a cap, resulting (for a simple case) in a shape something like a soda can sliced in half vertically.) These surfaces are generated even if spine is also a closed curve. If a field value is FALSE, the corresponding cap is not generated.

Extrusion automatically generates its own normals. Orientation of the normals is determined by the vertex ordering of the triangles generated by Extrusion. The vertex ordering is in turn determined by the crossSection curve. If the crossSection is drawn counterclockwise, then the polygons will have counterclockwise ordering when viewed from the 'outside' of the shape (and vice versa for clockwise ordered crossSection curves).

Texture coordinates are automatically generated by extrusions. Textures are mapped like the label on a soup can: the coordinates range in the U direction from 0 to 1 along the crossSection curve (with 0 corresponding to the first point in crossSection and 1 to the last) and in the V direction from 0 to 1 along the spine curve (again with 0 corresponding to the first listed spine point and 1 to the last). When crossSection is closed, the texture has a seam that follows the line traced by the crossSection's start/end point as it travels along the spine. If the endCap and/or beginCap exist, the crossSection curve is cut out of the texture square and applied to the endCap and/or beginCap planar surfaces. The beginCap and endCap textures' U and V directions correspond to the X and Z directions in which the crossSection coordinates are defined.

See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.

Extrusion {
  field   MFVec3f    spine            [ 0 0 0, 0 1 0 ]
  eventIn MFVec3f    set_spine
  field   MFVec2f    crossSection     [ 1 1, -1 1, -1 -1, 1 -1, 1 1 ]
  eventIn MFVec2f    set_crossSection
  field   MFVec2f    scale            [ ]
  eventIn MFVec2f    set_scale
  field   MFRotation orientation      [ ]
  eventIn MFRotation set_orientation
  field   SFBool     beginCap         TRUE
  field   SFBool     endCap           TRUE
  field   SFBool     ccw              TRUE
  field   SFBool     solid            TRUE
  field   SFBool     convex           TRUE
  field   SFFloat    creaseAngle      0
}

Prototype definition:

PROTO Extrusion [
  field   MFVec3f    spine            [ 0 0 0, 0 1 0 ]
  eventIn MFVec3f    set_spine
  field   MFVec2f    crossSection     [ 1 1, -1 1, -1 -1, 1 -1, 1 1 ]
  eventIn MFVec2f    set_crossSection
  field   MFVec2f    scale            [ ]
  eventIn MFVec2f    set_scale
  field   MFRotation orientation      [ ]
  eventIn MFRotation set_orientation
  field   SFBool     beginCap         TRUE
  field   SFBool     endCap           TRUE
  field   SFBool     ccw              TRUE
  field   SFBool     solid            TRUE
  field   SFBool     convex           TRUE
  field   SFFloat    creaseAngle      0
] {
    DEF IFS IndexedFaceSet { 
        coord DEF C Coordinate { }
        ccw IS ccw
        solid IS solid
        convex IS convex
        creaseAngle IS creaseAngle
    }

    DEF S Script {
        field   MFVec3f    spine IS spine
        eventIn MFVec3f    set_spine IS set_spine
        field   MFVec2f    crossSection IS crossSection
        eventIn MFVec2f    set_crossSection IS set_crossSection
        field   MFVec2f    scale IS scale
        eventIn MFVec2f    set_scale IS set_scale
        field   MFRotation orientation IS orientation
        eventIn MFRotation set_orientation IS set_orientation
        field   SFBool     beginCap IS beginCap
        field   SFBool     endCap IS endCap

        eventOut MFVec3f coord
        eventOut MFInt32 coordIndex

        url "file:Extrusion.java"
    }

    ROUTE S.coord TO C.set_point
    ROUTE S.coordIndex TO IFS.set_coordIndex
}

Fog

The Fog node defines an axis-aligned ellipsoid of colored atmosphere. The size field defines the x, y, and z radii of the foggy ellipsoid in the local coordinate system. The visibilityRange field specifies the distance at which an object is completely obscured by the fog. This distance is specified in the local coordinate system (by default, in meters), and assumes that the viewer and object are contained within the fog ellipsoid. The ideal behavior for regions outside the fog, and where partial fog coverage occurs, is not yet specified (see the issue below). The color field may be used to simulate different kinds of atmospheric effects by changing the fog's color. For example, a fog color of (1,1,1) produces a hazy atmosphere, while a fog color of (0,0,0) creates a depth cueing effect. To produce a realistic fog appearance, the Fog node would be used in combination with a Background node with skyColor equivalent to the fog color.

[ISSUE: Need a detailed formula for ideal fog calculation - including multiple fogs and partial coverage.]

An ideal implementation of fog would compute exactly how much attenuation occurs between the viewer and each object in the world and render the scene appropriately. However, implementations are free to approximate this ideal behavior, perhaps by computing the intersection of the viewing direction vector with any foggy regions and computing overall fogging parameters each time the scene is rendered. Assume exponential falloff.
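
An example of the file syntax for a hazy white fog (values chosen for illustration):

Fog {
  color           1 1 1        # hazy white atmosphere
  size            200 50 200   # ellipsoid radii, in local coordinates
  visibilityRange 30           # objects 30 meters away are fully obscured
}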

Fog {
  exposedField SFVec3f size            0 0 0
  exposedField SFFloat visibilityRange 1
  exposedField SFColor color           1 1 1
}

FontStyle

The FontStyle node, which may only appear in the fontStyle field of a Text node, defines the size, font family, and style of the text font, as well as the direction of the text strings and any specific language rendering techniques that must be used for non-English text.

The size field specifies the height (in object space units) of glyphs rendered and determines the spacing of adjacent lines of text, depending on the text direction. All subsequent strings advance in either X or Y by -(size * spacing). (See the description of the spacing field below.)

Font Family and Style

Font attributes are defined with the family and style fields. It is up to the browser to assign specific fonts to the various attribute combinations.

The family field contains an SFString value that can be "SERIF" (the default) for a serif font such as Times Roman; "SANS" for a sans-serif font such as Helvetica; or "TYPEWRITER" for a fixed-pitch font such as Courier.

The style field contains an SFString value that can be an empty string (the default); "BOLD" for boldface type; "ITALIC" for italic type; or "BOLD ITALIC" for bold and italic type.

Direction, Justification and Spacing

The horizontal, leftToRight, and topToBottom fields indicate the direction of the text. The horizontal field indicates whether the text is horizontal (specified as TRUE, the default) or vertical (FALSE). The leftToRight field indicates whether the text progresses from left to right (specified as TRUE, the default) or from right to left (FALSE). The topToBottom field indicates whether the text progresses from top to bottom (specified as TRUE, the default), or from bottom to top (FALSE).

The justify field determines where the text is positioned in relation to the origin (0,0,0) of the object coordinate system. The values for the justify field are "BEGIN", "MIDDLE", and "END". For a left-to-right direction, "BEGIN" would specify left-justified text, "END" would specify right-justified text, and "MIDDLE" would specify centered text. See below for details of text placement.

The spacing field determines the spacing between multiple text strings.

The size field of the FontStyle node specifies the height (in object space units) of glyphs rendered and determines the vertical spacing of adjacent lines of text. All subsequent strings advance in either X or Y by -(size * spacing). A value of 0 for spacing causes successive strings to be rendered at the same position; a value of -1 causes subsequent strings to advance in the opposite direction.

For horizontal text (horizontal = TRUE), the first line of text is positioned with its baseline (bottom of capital letters) at Y = 0. The text is positioned on the positive side of the X origin when leftToRight is TRUE and justify is "BEGIN"; the same positioning is used when leftToRight is FALSE and justify is "END". The text is on the negative side of the X origin when leftToRight is TRUE and justify is "END" (and when leftToRight is FALSE and justify is "BEGIN"). For justify = "MIDDLE" and horizontal = TRUE, each string will be centered at X = 0.

For vertical text (horizontal is FALSE), the first line of text is positioned with the left side of the glyphs along the Y axis. When topToBottom is TRUE and justify is "BEGIN" (or when topToBottom is FALSE and justify is "END"), the text is positioned with the top left corner at the origin. When topToBottom is TRUE and justify is "END" (or when topToBottom is FALSE and justify is "BEGIN"), the bottom left is at the origin. For justify = "MIDDLE" and horizontal = FALSE, each string is centered vertically at Y = 0.

In the following tables, each small cross indicates where the X and Y axes should be in relation to the text:

horizontal = TRUE:

Horizontal Text Table

horizontal = FALSE:

Vertical Text Table

The language field specifies the language context of the text string. Due to the multilingual nature of ISO 10646-1:1993, the language field is needed to provide a proper language attribute for the text string. The format is based on the POSIX locale specification as well as RFC 1766: language[_territory]. The value of the language tag is based on ISO 639 (e.g. zh for Chinese, ja for Japanese, sv for Swedish). The territory tag is based on the ISO 3166 country code (e.g. TW for Taiwan and CN for China for the "zh" Chinese language tag).

Please refer to these sites for more details:

    http://www.chemie.fu-berlin.de/diverse/doc/ISO_639.html
    http://www.chemie.fu-berlin.de/diverse/doc/ISO_3166.html

FontStyle {
 field SFFloat  size       1.0
 field SFString family     "SERIF"  # "SERIF", "SANS", "TYPEWRITER"
 field SFString style       ""      # "BOLD", "ITALIC", "BOLD ITALIC"
 field SFBool   horizontal  TRUE
 field SFBool   leftToRight TRUE
 field SFBool   topToBottom TRUE
 field SFString language    ""
 field SFString justify     "BEGIN" # "BEGIN","MIDDLE", "END"
 field SFFloat  spacing     1.0
}
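
As a sketch of typical usage, the following Text node uses a FontStyle to render bold, centered, sans-serif strings with extra line spacing:

Shape {
  geometry Text {
    string [ "Moving", "Worlds" ]
    fontStyle FontStyle {
      family  "SANS"
      style   "BOLD"
      justify "MIDDLE"   # each string is centered at X = 0
      spacing 1.5        # lines advance in Y by -(size * spacing)
    }
  }
}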

Group

A Group node is a lightweight grouping node that can contain any number of children. It is equivalent to a Transform node, without the transformation fields.

Group {
  field        SFVec3f bboxCenter  0 0 0
  field        SFVec3f bboxSize    0 0 0
  exposedField MFNode  children    [ ]
  eventIn      MFNode  add_children
  eventIn      MFNode  remove_children
}

Prototype definition:

PROTO Group [
  field        SFVec3f bboxCenter  0 0 0
  field        SFVec3f bboxSize    0 0 0
  exposedField MFNode  children    [ ]
  eventIn      MFNode  add_children
  eventIn      MFNode  remove_children
] {
  Transform {
    bboxCenter IS bboxCenter
    bboxSize IS bboxSize
    children IS children
    add_children IS add_children
    remove_children IS remove_children
  }
}

ImageTexture

The ImageTexture node defines a texture map and parameters for that map.

The texture is read from the URL specified by the url field. To turn off texturing, set the url field to have no values ([]). Implementations should support the JPEG and PNG image file formats. Support for the GIF format is also recommended.

Texture images may be one-component (greyscale), two-component (greyscale plus transparency), three-component (full RGB color), or four-component (full RGB color plus transparency). An ideal VRML implementation will use the texture image to modify the diffuse color and transparency of an object's material (specified in a Material node), then perform any lighting calculations using the rest of the object's material properties with the modified diffuse color to produce the final image. The texture image modifies the diffuse color and transparency depending on how many components are in the image, as follows:

  1. Diffuse color is multiplied by the greyscale values in the texture image.
  2. Diffuse color is multiplied by the greyscale values in the texture image; material transparency is multiplied by transparency values in texture image.
  3. RGB colors in the texture image replace the material's diffuse color.
  4. RGB colors in the texture image replace the material's diffuse color; transparency values in the texture image replace the material's transparency.

Browsers may approximate this ideal behavior to increase performance. One common optimization is to calculate lighting only at each vertex and combine the texture image with the color computed from lighting (performing the texturing after lighting). Another common optimization is to perform no lighting calculations at all when texturing is enabled, displaying only the colors of the texture image.

The repeatS and repeatT fields specify how the texture wraps in the S and T directions. If repeatS is TRUE (the default), the texture map is repeated outside the 0-to-1 texture coordinate range in the S direction so that it fills the shape. If repeatS is FALSE, the texture coordinates are clamped in the S direction to lie within the 0-to-1 range. The repeatT field is analogous to the repeatS field.

ImageTexture {
  exposedField MFString url     [ ]
  field        SFBool   repeatS TRUE
  field        SFBool   repeatT TRUE
}
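
The following sketch (the texture file names are hypothetical) shows an ImageTexture inside an Appearance node, with a JPEG fallback listed after the preferred PNG file:

Shape {
  appearance Appearance {
    material Material { }
    texture ImageTexture {
      url [ "brick.png", "brick.jpg" ]  # hypothetical URLs, in decreasing preference
      repeatS TRUE    # tile horizontally
      repeatT FALSE   # clamp vertically to the 0-to-1 range
    }
  }
  geometry Sphere { }
}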

IndexedFaceSet

The IndexedFaceSet node represents a 3D shape formed by constructing faces (polygons) from vertices listed in the coord field. The coord field must contain a Coordinate node. IndexedFaceSet uses the indices in its coordIndex field to specify the polygonal faces. An index of -1 indicates that the current face has ended and the next one begins. The last face may (but does not have to) be followed by a -1. If the greatest index in the coordIndex field is N, then the Coordinate node must contain N+1 coordinates (indexed as 0 through N).

For descriptions of the coord, normal, and texCoord fields, see the Coordinate, Normal, and TextureCoordinate nodes.

If the color field is not NULL, then it must contain a Color node, whose colors are applied to the vertices or faces of the IndexedFaceSet according to the colorPerVertex and colorIndex fields.

If the normal field is NULL, then the browser should automatically generate normals, using creaseAngle to determine if and how normals are smoothed across shared vertices.

If the normal field is not NULL, then it must contain a Normal node, whose normals are applied to the vertices or faces of the IndexedFaceSet in a manner exactly equivalent to that described above for applying colors to vertices/faces.

If the texCoord field is not NULL, then it must contain a TextureCoordinate node. The texture coordinates in that node are applied to the vertices of the IndexedFaceSet according to the texCoordIndex field.

If the texCoord field is NULL, a default texture coordinate mapping is calculated using the bounding box of the shape. The longest dimension of the bounding box defines the S coordinates, and the next longest defines the T coordinates. If two or all three dimensions of the bounding box are equal, then ties should be broken by choosing the X, Y, or Z dimension in that order of preference. The value of the S coordinate ranges from 0 to 1, from one end of the bounding box to the other. The T coordinate ranges between 0 and the ratio of the second greatest dimension of the bounding box to the greatest dimension.

See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.

IndexedFaceSet {
  exposedField  SFNode  coord             NULL
  field         MFInt32 coordIndex        [ ]
  exposedField  SFNode  texCoord          NULL
  field         MFInt32 texCoordIndex     [ ]
  exposedField  SFNode  color             NULL
  field         MFInt32 colorIndex        [ ]
  field         SFBool  colorPerVertex    TRUE
  exposedField  SFNode  normal            NULL
  field         MFInt32 normalIndex       [ ]
  field         SFBool  normalPerVertex   TRUE
  field         SFBool  ccw               TRUE
  field         SFBool  solid             TRUE
  field         SFBool  convex            TRUE
  field         SFFloat creaseAngle       0
}
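
For example, this minimal sketch builds a single square face from four coordinates; since the greatest index is 3, the Coordinate node contains four points:

Shape {
  geometry IndexedFaceSet {
    coord Coordinate {
      point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]
    }
    coordIndex [ 0, 1, 2, 3, -1 ]  # one face; the trailing -1 is optional here
  }
}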

Inline

The Inline node is a light-weight grouping node like Group that reads its children from anywhere in the World Wide Web. Exactly when its children are read is not defined; reading the children may be delayed until the Inline is actually displayed. An Inline whose url field is empty does nothing. The url field contains an arbitrary list of URLs.

An Inline's URLs must refer to a valid VRML file that contains a grouping or leaf node. The result of referring to non-VRML files, or to VRML files that do not contain a grouping or leaf node, is undefined.

If multiple URLs are specified, then this expresses a descending order of preference. A browser may display a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URLs and URNs.

If the Inline's bboxSize field specifies a non-empty bounding box (a bounding box is non-empty if at least one of its dimensions is greater than zero), then the Inline's object-space bounding box is specified by its bboxSize and bboxCenter fields. This allows an implementation to quickly determine whether or not the contents of the Inline might be visible. This is an optimization hint only; if the true bounding box of the contents of the Inline is different from the specified bounding box, results will be undefined.

Inline {
  exposedField MFString url       [ ]
  field        SFVec3f  bboxSize   0 0 0
  field        SFVec3f  bboxCenter 0 0 0
}

Prototype definition:

PROTO Inline [
  exposedField MFString url        [ ]
  field        SFVec3f  bboxSize   0 0 0
  field        SFVec3f  bboxCenter 0 0 0
] {
  DEF G Group {
    bboxSize IS bboxSize
    bboxCenter IS bboxCenter
  }
  DEF ISCRIPT Script {
    exposedField MFString url IS url
    eventOut     MFNode   children
    #
    # Script's initialization code should call browser's
    # createVrmlFromURL() function, then send resulting node out to
    # children eventOut.
  }
  ROUTE ISCRIPT.children TO G.add_children
}

IndexedLineSet

This node represents a 3D shape formed by constructing polylines from vertices listed in the coord field. IndexedLineSet uses the indices in its coordIndex field to specify the polylines. An index of -1 indicates that the current polyline has ended and the next one begins. The last polyline may (but does not have to) be followed by a -1.

For a description of the coord field, see the Coordinate node.

Lines are not texture-mapped or affected by light sources.

If the color field is not NULL, it must contain a Color node, and the colors are applied to the line(s) according to the colorPerVertex and colorIndex fields.

IndexedLineSet {
  exposedField  SFNode  coord             NULL
  field         MFInt32 coordIndex        []
  exposedField  SFNode  color             NULL
  field         MFInt32 colorIndex        []
  field         SFBool  colorPerVertex    TRUE
}
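
For example, this sketch draws two polylines over a shared coordinate list; since colorPerVertex is FALSE, the Color node is assumed here to supply one color per polyline:

Shape {
  geometry IndexedLineSet {
    coord Coordinate {
      point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]
    }
    coordIndex [ 0, 1, 2, -1,  2, 3, -1 ]   # two polylines, separated by -1
    color Color { rgb [ 1 0 0,  0 0 1 ] }   # assumed: one color per polyline
    colorPerVertex FALSE
  }
}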

LOD

The LOD (level of detail) node is used to allow browsers to switch between various representations of objects automatically. The levels field contains nodes that represent the same object or objects at varying levels of detail, from highest detail to lowest.

First, the distance from the viewpoint to the center point of the LOD node is calculated in the local coordinate space of the LOD node (taking into account any scaling transformations). If the distance is less than the first value in the range field, then the first level of the LOD is drawn. If it is between the first and second values in the range field, the second level is drawn, and so on.

If there are N values in the range field, the LOD should have N+1 nodes in its levels field. Specifying too few levels will result in the last level being used repeatedly for the lowest levels of detail; if too many levels are specified, the extra levels will be ignored. The exception to this rule is leaving the range field empty, which is a hint to the browser that it should choose a level automatically to maintain a constant display rate.

Each value in the range field should be greater than the previous value; otherwise results are undefined. Not specifying any values in the range field (the default) is a special case that indicates that the browser may decide which child to draw to optimize rendering performance.

Authors should set LOD ranges so that the transitions from one level of detail to the next are barely noticeable. Browsers may adjust which level of detail is displayed to maintain interactive frame rates, to display an already-fetched level of detail while a higher level of detail (contained in an Inline node) is fetched, or might disregard the author-specified ranges for any other implementation-dependent reason. Authors should not use LOD nodes to emulate simple behaviors, because the results will be undefined. For example, using an LOD node to make a door appear to open when the user approaches probably will not work in all browsers. Use a ProximitySensor instead.

For best results, specify ranges only where necessary, and nest LOD nodes with and without ranges. For example:

LOD {
  range [100, 1000]
  levels [
    LOD {
      levels [
        Transform { ... detailed version...  }
        DEF LoRes Transform { ... less detailed version... }
      ]
    }
    USE LoRes,
    Shape { } # Display nothing
  ]
}

In this example, the browser is free to choose either a detailed or a less-detailed version of the object when the viewer is closer than 100 meters. The browser should display the less-detailed version of the object if the viewer is between 100 and 1,000 meters and should display nothing at all if the viewer is farther than 1,000 meters. Browsers should try to honor the hints given by authors, and authors should try to give browsers as much freedom as they can to choose levels of detail based on performance.

LOD {
  field        MFFloat range    [ ]  
  field        SFVec3f center   0 0 0 
  exposedField MFNode  levels   [ ]
}

Prototype definition:

PROTO LOD [
  field        MFFloat range    [ ]  
  field        SFVec3f center   0 0 0 
  exposedField MFNode  levels   [ ]
] {
  DEF F Transform {
    children [
      DEF PS ProximitySensor { center IS center }
    ]
  }
  DEF LODSCRIPT Script {
    eventOut MFNode  remove
    eventOut MFNode  add
    eventOut SFVec3f maxRange
    eventIn  SFVec3f viewerPosition
    field MFFloat range IS range
    field MFNode  levels IS levels
    #
    # Script must:
    #   -- set maxRange to maximum value in range[] field
    #   -- get viewerPosition, figure out which level should
    #     be seen, add/remove appropriate children
  }
  ROUTE PS.position TO LODSCRIPT.viewerPosition
  ROUTE LODSCRIPT.maxRange TO PS.size
  ROUTE LODSCRIPT.remove TO F.remove_children
  ROUTE LODSCRIPT.add TO F.add_children
}

Material

The Material node defines surface material properties for associated geometry nodes.

The fields in the Material node determine the way light reflects off an object to create color.

The lighting parameters defined by the Material node are the same parameters defined by the OpenGL lighting model. For a rigorous mathematical description of how these parameters should be used to determine how surfaces are lit, see the description of lighting operations in the OpenGL Specification. Also note that OpenGL specifies the specular exponent as a non-normalized value in the range 0-128, while VRML specifies it as a normalized value in the range 0-1 (simply multiply the VRML value by 128 to obtain the OpenGL parameter).

For rendering systems that do not support the full OpenGL lighting model, a simpler lighting model is recommended, as described below.

A transparency value of 0 is completely opaque; a value of 1 is completely transparent. Browsers need not support partial transparency, but should support at least fully transparent and fully opaque surfaces, treating transparency values >= 0.5 as fully transparent.

Issues for Low-End Rendering Systems. Many low-end PC rendering systems are not able to support the full range of the VRML material specification. For example, many systems do not render individual red, green and blue reflected values as specified in the specularColor field. The following table describes which Material fields are typically supported in popular low-end systems and suggests actions for browser implementors to take when a field is not supported.

Field            Supported?   Suggested Action

ambientIntensity No           Ignore
diffuseColor     Yes          Use
specularColor    No           Ignore
emissiveColor    No           If diffuse == 0 0 0, use emissive
shininess        Yes          Use
transparency     Yes          If < 0.5, treat as opaque; else fully transparent

The emissiveColor field is used when all other colors are black (0 0 0). Rendering systems that do not support specular color may nevertheless support a specular intensity. This intensity should be derived by taking the dot product of the specified RGB specular value with the vector [.32 .57 .11]. This adjusts the color value to compensate for the eye's variable sensitivity to different colors.

Likewise, if a system supports ambient intensity but not color, the same thing should be done with the ambient color values to generate the ambient intensity. If a rendering system does not support per-object ambient values, it should set the ambient value for the entire scene at the average ambient value of all objects.

It is also expected that simpler rendering systems may be unable to support both diffuse and emissive objects in the same world.

Material {
  exposedField SFColor diffuseColor      0.8 0.8 0.8
  exposedField SFFloat ambientIntensity  0.2
  exposedField SFColor specularColor     0 0 0
  exposedField SFColor emissiveColor     0 0 0
  exposedField SFFloat shininess         0.2
  exposedField SFFloat transparency      0
}
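
For example, a sketch of a shiny, partially transparent red material (values illustrative only):

Shape {
  appearance Appearance {
    material Material {
      diffuseColor  0.8 0.2 0.2
      specularColor 1 1 1
      shininess     0.5    # normalized 0-1; multiply by 128 for the OpenGL exponent
      transparency  0.25   # low-end renderers may treat values < 0.5 as fully opaque
    }
  }
  geometry Sphere { }
}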

MovieTexture

[Note: This needs a major re-write and clarification.]

The MovieTexture node defines an animated movie texture map and parameters for controlling the movie and the map. MPEG1-Systems (audio and video) and MPEG1-Video (video only) are the required movie file formats.

The duration eventOut sends the duration of the movie in seconds. A value of -1 implies that the movie has not yet loaded; this eventOut can be read to determine the duration once it is known.

MovieTextures are either referenced by the Appearance node's texture field (as a movie texture) or by the Sound node's source field (as an audio source only).

MovieTexture {
  exposedField MFString url       [ ]
  exposedField SFFloat  speed      0
  exposedField SFBool   loop       FALSE
  exposedField SFTime   startTime  0
  exposedField SFTime   stopTime   0
  field        SFBool   repeatS    TRUE
  field        SFBool   repeatT    TRUE
  eventOut     SFFloat  duration
}
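
A sketch of typical usage follows; the movie URL is hypothetical, and the comment on speed reflects an assumption about its semantics:

Shape {
  appearance Appearance {
    material Material { }
    texture MovieTexture {
      url  "file:movie.mpg"  # hypothetical MPEG1-Systems or MPEG1-Video file
      loop TRUE
      speed 1                # assumed: play at the movie's normal rate
    }
  }
  geometry Sphere { }
}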

NavigationInfo

The NavigationInfo node contains information describing the physical characteristics of the viewer and viewing model.

NavigationInfo nodes are Bindable Leaf Nodes; thus there exists a NavigationInfo stack in the browser, in which the topmost NavigationInfo on the stack is the currently active one. To push a NavigationInfo onto the top of the stack, a TRUE value is sent to the bind eventIn of the specific NavigationInfo. Once active, the NavigationInfo is bound to the browser's view. A FALSE value sent to bind pops the NavigationInfo from the stack and unbinds it from the browser's view. See Bindable Leaf Nodes for more details on the NavigationInfo stack.

The type field specifies a navigation paradigm to use. The types that all VRML viewers should support are "WALK", "EXAMINE", "FLY", and "NONE". A walk viewer is used for exploring a virtual world. The viewer should (but is not required to) have some notion of gravity in this mode. A fly viewer is similar to walk, except that no notion of gravity should be enforced; there should still be some notion of "up", however. An examine viewer is typically used to view individual objects and often includes (but does not require) the ability to spin the object and move it closer or further away. The "NONE" choice removes all viewer controls; the user navigates using only controls provided in the scene, such as guided tours. Browser-specific viewer types are also allowed; these should include a suffix as described in the naming conventions section to prevent conflicts. The type field is multi-valued so that authors can specify fallbacks in case a browser does not understand a given type.

The speed field is the rate at which the viewer travels through a scene, in units per second. Since viewers may provide mechanisms to travel faster or slower, this should be the default or average speed of the viewer. In an examine viewer, this only makes sense for panning and dollying--it should have no effect on the rotation speed. Scales in the transformation hierarchy affect speed; translations and rotations have no effect on it.

The avatarSize field specifies parameters to be used in determining the camera dimensions for the purposes of collision detection and terrain following, if the viewer type allows these. It is a multi-valued field to allow several dimensions to be specified. The first value should be the allowable distance between the user's position and any collision geometry (as specified by Collision) before a collision is detected. The second should be the height above the terrain at which the camera should be maintained. The third should be the height of the tallest object over which the camera can "step". This allows staircases to be built with dimensions that can be ascended by all browsers. Additional values are browser-dependent, and all values may be ignored; but if a browser interprets these values, the first three should be interpreted as described above. Scales in the transformation hierarchy affect avatarSize; translations and rotations have no effect on it.

The visibilityLimit field sets the furthest distance the viewer is able to see. The browser may clip all objects beyond this limit, fade them into the background or ignore this field. A value of 0.0 (the default) indicates an infinite visibility limit.

The headlight field specifies whether a browser should turn on a headlight. A headlight is a directional light that always points in the direction the user is looking. Setting this field to TRUE allows the browser to provide a headlight, possibly with user interface controls to turn it on and off. Scenes that use precomputed lighting (e.g. radiosity solutions) can specify here that the headlight be off. The headlight should have intensity 1, color 1 1 1, and direction 0 0 -1.

The first NavigationInfo node found during reading of the world supplies the initial navigation parameters. Subsequent NavigationInfo nodes are ignored. The browser may be told to use a different NavigationInfo node using Script node API calls.

NavigationInfo {
  exposedField MFFloat  avatarSize       1.0
  exposedField SFBool   headlight        TRUE
  exposedField SFFloat  speed            1.0 
  exposedField MFString type             "WALK" 
  exposedField SFFloat  visibilityLimit  0.0 
  eventIn      SFBool   bind 
  eventOut     SFBool   isBound
}
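
For example, a sketch of a walking viewer for a scene with precomputed lighting (the numeric values are illustrative only):

NavigationInfo {
  type       [ "WALK", "FLY" ]   # fallback list: a browser that cannot WALK may FLY
  speed      2.0                 # average travel rate, in units per second
  avatarSize [ 0.5, 1.6, 0.5 ]   # collision distance, terrain height, tallest step
  headlight  FALSE               # the scene supplies its own (e.g. radiosity) lighting
}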

Normal

This node defines a set of 3D surface normal vectors to be used in the normal field of some geometry nodes (IndexedFaceSet, ElevationGrid). This node contains one multiple-valued field that contains the normal vectors. Normals should be unit-length or results are undefined.

To save network bandwidth, it is expected that implementations will be able to automatically generate appropriate normals if none are given. However, the results will vary from implementation to implementation.

Normal {
  exposedField MFVec3f vector []
}

NormalInterpolator

This node interpolates among a set of multi-valued Vec3f values, suitable for transforming normal vectors. All output vectors will have been normalized by the interpolator.

The number of normals in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many normals will be contained in each outValue event.

NormalInterpolator {
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     MFVec3f outValue
}

Prototype definition:

PROTO NormalInterpolator [
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     MFVec3f outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFVec3f values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     MFVec3f outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}


OrientationInterpolator

This node interpolates among a set of SFRotation values. The rotations are absolute in object space and are, therefore, not cumulative. The values field must contain exactly as many rotations as there are keyframe times in the keys field, or an error will be generated and results will be undefined.

OrientationInterpolator {
  exposedField MFFloat    keys      []
  exposedField MFRotation values    []
  eventIn      SFFloat    set_fraction
  eventOut     SFRotation outValue
}

Prototype definition:

PROTO OrientationInterpolator [
  exposedField MFFloat    keys      []
  exposedField MFRotation values    []
  eventIn      SFFloat    set_fraction
  eventOut     SFRotation outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFRotation values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     SFRotation outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}

PixelTexture

The PixelTexture node defines a 2D image-based texture map as an explicit array of pixel values, along with parameters controlling the tiling repetition of the texture.

Images may be one-component (greyscale), two-component (greyscale plus transparency), three-component (full RGB color), or four-component (full RGB color plus transparency). Depending on how many components are in the image, the texture modifies the diffuse color and transparency of an object's material exactly as described for the ImageTexture node, and browsers may approximate the ideal behavior in the same ways described there.

See the SFImage field specification for details on how to specify an image.

The repeatS and repeatT fields specify how the texture wraps in the S and T directions. If repeatS is TRUE (the default), the texture map is repeated outside the 0-to-1 texture coordinate range in the S direction so that it fills the shape. If repeatS is FALSE, the texture coordinates are clamped in the S direction to lie within the 0-to-1 range. The repeatT field is analogous to the repeatS field.

PixelTexture {
  exposedField SFImage  image      0 0 0
  field        SFBool   repeatS    TRUE
  field        SFBool   repeatT    TRUE
}

PlaneSensor

The PlaneSensor maps dragging motion into a translation in two dimensions, in the XY plane of its local space.

minPosition and maxPosition may be set to clamp translation events to a range of values as measured from the origin of the XY plane. If the X or Y component of minPosition is greater than the corresponding component of maxPosition, translation events are not clamped in that dimension. If the X or Y component of minPosition is equal to the corresponding component of maxPosition, that component is constrained to the given value; this technique provides a way to implement a line sensor that maps dragging motion into a translation in one dimension.

trackPoint events provide the unclamped drag position in the XY plane.

PlaneSensor {
  exposedField SFVec2f minPosition 0 0
  exposedField SFVec2f maxPosition -1 -1
  exposedField SFBool  enabled     TRUE
  eventOut     SFBool  isOnPlane
  eventOut     SFVec3f trackPoint
  eventOut     SFVec3f translation
}
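
A sketch of typical usage follows, assuming that the sensor maps dragging on its sibling geometry and that Transform provides a translation exposedField (as defined elsewhere in this specification):

Group {
  children [
    DEF DRAGGER PlaneSensor {
      minPosition 0 0
      maxPosition 5 5   # clamp translation to a 5 x 5 region of the XY plane
    }
    DEF DRAGGED Transform {
      children [ Shape { geometry Sphere { } } ]
    }
  ]
}
ROUTE DRAGGER.translation TO DRAGGED.set_translation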

PointLight

The PointLight node defines a point light source at a fixed 3D location. A point source illuminates equally in all directions; that is, it is omni-directional.

See Lights and Lighting for an explanation of ambient lighting.

A PointLight illuminates everything within radius of its location. A PointLight's illumination falls off with distance as specified by three attenuation coefficients. The attenuation factor is 1/(attenuation[0] + attenuation[1]*r + attenuation[2]*r^2), where r is the distance from the light to the surface being illuminated. The default is no attenuation (1 0 0 yields a constant factor of 1); for example, an attenuation of 0 0 1 yields an inverse-square falloff, so a surface 2 units from the light receives 1/4 of its intensity. Renderers that do not support a full attenuation model may approximate as necessary. PointLights are leaf nodes and thus are transformed by the transformation hierarchy of their parents.

PointLight {
  exposedField SFBool  on                TRUE 
  exposedField SFFloat intensity         1  
  exposedField SFFloat ambientIntensity  0 
  exposedField SFColor color             1 1 1 
  exposedField SFVec3f location          0 0 0
  exposedField SFFloat radius            1 
  exposedField SFVec3f attenuation       1 0 0
}

PointSet

The PointSet node represents a set of points listed in the coord field. The coord field must be a Coordinate node (or an instance of a Coordinate node). PointSet uses the coordinates in order.

If the color field is not NULL, it must contain a Color node that contains at least the number of points contained in the coord node. Colors are always applied to each point in order. Points are not texture-mapped or affected by light sources.

PointSet {
  exposedField  SFNode  coord      NULL
  exposedField  SFNode  color      NULL
}

Examples:

This simple example defines a PointSet composed of 3 points. The first point is red (1 0 0), the second point is green (0 1 0), and the third point is blue (0 0 1). The second PointSet instances the Coordinate node defined in the first PointSet, but defines different colors:

Shape {
  geometry PointSet {
    coord DEF mypts Coordinate { point [ 0 0 0, 2 2 2, 3 3 3 ] }
    color Color { rgb [ 1 0 0, 0 1 0, 0 0 1 ] }
  }
}
Shape {
  geometry PointSet {
    coord USE mypts
    color Color { rgb [ .5 .5 0, 0 .5 .5, 1 1 1 ] }
  }
}

PositionInterpolator

This node linearly interpolates among a set of SFVec3f values. This would be appropriate for interpolating a translation.

PositionInterpolator {
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     SFVec3f outValue
}

Prototype definition:

PROTO PositionInterpolator [
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     SFVec3f outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFVec3f values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     SFVec3f outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}

ProximitySensor

The ProximitySensor generates events when the viewpoint enters, exits, and moves inside a space. A proximity sensor can be activated or deactivated by sending it an enabled event with a value of TRUE or FALSE.

A ProximitySensor generates isActive TRUE/FALSE events as the viewer enters/exits the region defined by its center and size fields. Ideally, implementations will interpolate viewpoint positions and timestamp the isActive events with the exact time the viewpoint first intersected the volume.

An enterTime event is generated whenever an isActive TRUE event is generated, and an exitTime event is generated whenever an isActive FALSE event is generated.

position and orientation events giving the position and orientation of the viewer in the ProximitySensor's coordinate system are generated when either the user or the coordinate system of the sensor moves and the viewer is inside the region being sensed.

Multiple ProximitySensors will generate events at the same time if the regions they are sensing overlap. Unlike TouchSensors, there is no notion of a ProximitySensor lower in the scene graph "grabbing" events.

A ProximitySensor that surrounds the entire world will have an enterTime equal to the time that the world was entered and can be used to start up animations or behaviors as soon as a world is loaded. A ProximitySensor with a (0 0 0) size field cannot generate events - this is equivalent to setting the enabled field to FALSE.

ProximitySensor {
  exposedField SFVec3f    center      0 0 0
  exposedField SFVec3f    size        0 0 0
  exposedField SFBool     enabled     TRUE
  eventOut     SFBool     isActive
  eventOut     SFVec3f    position
  eventOut     SFRotation orientation
  eventOut     SFTime     enterTime
  eventOut     SFTime     exitTime
}
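
For example, a sketch that starts a five-second TimeSensor cycle when the viewer enters a 10-meter cube around the origin:

DEF NEAR ProximitySensor {
  center 0 0 0
  size   10 10 10
}
DEF TIMER TimeSensor {
  cycleInterval 5   # run for 5 seconds once started
}
ROUTE NEAR.enterTime TO TIMER.set_startTime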

ScalarInterpolator

This node linearly interpolates among a set of SFFloat values. This interpolator is appropriate for any parameter defined using a single floating point value, e.g., width, radius, intensity, etc. The values field must contain exactly as many numbers as there are keyframe times in the keys field, or an error will be generated and results will be undefined.

ScalarInterpolator {
  exposedField MFFloat keys      []
  exposedField MFFloat values    []
  eventIn      SFFloat set_fraction
  eventOut     SFFloat outValue
}

Prototype definition:

PROTO ScalarInterpolator [
  exposedField MFFloat keys      []
  exposedField MFFloat values    []
  eventIn      SFFloat set_fraction
  eventOut     SFFloat outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFFloat values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     SFFloat outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}

Script

The Script node is used to program behavior in a VRML scene. Script nodes typically receive events that signify a change or user action, contain a program module that performs some computation, and then finally effect changes somewhere else in the world. Each Script node has associated code in some programming language that is executed to carry out the Script node's function. That code will be referred to as "the script" in the rest of this description.

A Script node's scriptType field describes which scripting language is being used. The contents of the url field depend on which scripting language is being used. Typically the url field will contain URLs/URNs from which the script should be fetched.

Each scripting language supported by a browser defines bindings for the following functionality. See Appendices A and B for the standard Java and C language bindings.

When the script is created, any language-dependent or user-defined initialization is performed. The script is able to receive and process events that are sent to it. Each event that can be received must be declared in the Script node using the same syntax as is used in a prototype definition:

    eventIn type name

The type can be any of the standard VRML field types, and name must be an identifier that is unique for this Script node.

The Script node should be able to generate events in response to the incoming events. Each event that can be generated must be declared in the Script node using the following syntax:

    eventOut type name

Script nodes cannot have exposedFields; the implementation ramifications of exposedFields are far too complex, so they are not allowed.

If the Script node's mustEvaluate field is FALSE, the browser can delay sending input events to the script until its outputs are needed by the browser. If the mustEvaluate field is TRUE, the browser should send input events to the script as soon as possible, regardless of whether the outputs are needed. The mustEvaluate field should be set to TRUE only if the Script has effects that are not known to the browser (such as sending information across the network); otherwise, poor performance may result.

An example of a Script node is:

    Script { 
      url   "http://foo.com/bar.class"
      eventIn    SFString name   
      eventIn    SFBool   selected
      eventOut   SFString lookto
      field      SFInt32  currentState 0
      mustEvaluate TRUE
    }

The script should be able to read and write the fields of the corresponding Script node (which, as stated above, cannot have exposed fields).

Once the script has access to some VRML node (via an SFNode or MFNode value either in one of the Script node's fields or passed in as an eventIn), the script should be able to read the contents of that node's exposed fields. If the Script node's directOutputs field is TRUE, the script may also send events directly to any node to which it has access, and may dynamically establish or break routes. If directOutputs is FALSE (the default), then the script may only affect the rest of the world via events sent through its eventOuts.

A script is able to communicate directly with the VRML browser to get and set global information such as navigation information, the current time, the current world URL, and so on. This is strictly defined by the API for the specific language being used.

It is expected that all other functionality (such as networking capabilities, multi-threading capabilities, and so on) will be provided by the scripting language.

Script { 
  exposedField MFString url           [ ] 

  field        SFBool   mustEvaluate  FALSE
  field        SFBool   directOutputs FALSE
  field        SFString scriptType    "" 
  
  # And any number of:
  eventIn      eventTypeName eventName
  field        fieldTypeName fieldName initialValue
  eventOut     eventTypeName eventName
}
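
For example, a sketch of wiring a Script into the scene with ROUTE statements; the script URL and event names are hypothetical:

DEF CLOCK TimeSensor { loop TRUE }
Shape {
  appearance Appearance {
    material DEF MAT Material { }
  }
  geometry Sphere { }
}
DEF COLORIZER Script {
  url "file:Colorizer.java"       # hypothetical script implementation
  eventIn  SFFloat set_fraction   # receives the TimeSensor's fraction events
  eventOut SFColor color_changed  # computed color, routed on to the Material
}
ROUTE CLOCK.fraction TO COLORIZER.set_fraction
ROUTE COLORIZER.color_changed TO MAT.set_diffuseColor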

Shape

A Shape node has two fields: appearance and geometry. These fields, in turn, contain other nodes. The appearance field contains an Appearance node that has material, texture, and textureTransform fields (see the Appearance node). The geometry field contains a geometry node. The specified appearance nodes are applied to the specified geometry node.

Shape {
  field SFNode appearance NULL
  field SFNode geometry   NULL
}

Sound

The Sound node describes the positioning and spatial presentation of a sound in a VRML scene. The sound may be located at a point and emit sound in a spherical or ellipsoid pattern. The ellipsoid is pointed in a particular direction and may be shaped to provide more or less directional focus from the location of the sound. The Sound node may also be used to describe an ambient sound which tapers off at a specified distance from the Sound node. If the distance is set to the maximum value, the sound will be ambient over the entire VRML scene.

The source field specifies the sound source for the Sound node. If no source is specified, the Sound node will emit no audio. The source field must point to either an AudioClip or a MovieTexture node. Furthermore, the MovieTexture node must refer to a movie format that supports sound (e.g. MPEG1-Systems).

The intensity field adjusts the volume of each sound source; the intensity is an SFFloat that ranges from 0.0 to 1.0. An intensity of 0 is silence, and an intensity of 1 is the full volume of the sound in the sample or the full volume of the MIDI clip.

The priority field gives the author some control over which sounds the browser will choose to play when there are more sounds active than sound channels available. The priority varies between zero and one, with one being the highest priority. For most applications, priority zero should be used for a normal sound, and priority one should be used only for special event or cue sounds (usually of short duration) that the author wants the user to hear even if they are farther away and perhaps of lower intensity than some other ongoing sounds. Browsers should make as many sound channels available to the scene as is efficiently possible.

If the browser does not have enough sound channels to play all of the currently active sounds, we recommend that the browser sort the active sounds into an ordered list using the following sort keys:

  1. decreasing priority;
  2. (only for sounds with priority > 0.5) increasing (now - startTime);
  3. decreasing intensity at the viewer's location (intensity / distance squared);

where priority and intensity are fields of the Sound node, now is the current time, and startTime is the startTime field of the sound's source.

It is important that sort key 2 be used for the high-priority (event and cue) sounds so that new cues will be heard even when the channels are "full" of currently active high-priority sounds. Sort key 2 should not be used for normal-priority sounds, so that selection among them will be based on sort key 3 (intensity and distance from the viewer).

The browser should play as many sounds from the beginning of this sorted list as it has available channels. On most systems the number of concurrent sound channels is distinct from the number of concurrent MIDI streams. On these systems the browser may maintain separate ordered lists for sampled sounds and MIDI streams.

A sound's location in the scene graph determines its spatial location (the sound's location is transformed by the current transformation) and whether or not it can be heard. A sound can only be heard while it is part of the traversed scene; sound nodes underneath LOD nodes or Switch nodes will not be audible unless they are traversed. If a sound is silenced for a time under a Switch or LOD node, and later it becomes part of the traversal again, the sound picks up where it would have been had it been playing continuously.

Around the location of the emitter, minFront and minBack determine the extent of the full-intensity region in front of and behind the sound. If the location of the sound is taken as a focus of an ellipsoid, the minBack and minFront values, in combination with the direction vector, determine the two foci of an ellipsoid bounding the ambient region of the sound. Similarly, maxFront and maxBack determine the limits of audibility in front of and behind the sound; they describe a second, outer ellipsoid. If minFront equals minBack and maxFront equals maxBack, the sound is omni-directional, the direction vector is ignored, and the min and max ellipsoids become spheres centered around the sound node.

The inner ellipsoid defines a space of full intensity for the sound. Within that space the sound plays at the intensity specified in the Sound node. The outer ellipsoid determines the maximum extent of the sound; outside that space, the sound cannot be heard at all. In between the two ellipsoids, the intensity drops off proportionally with the inverse square of the distance. With this model, a sound will usually have smooth changes in intensity over the entire extent in which it can be heard. However, if at any point the maximum is the same as or inside the minimum, the sound is cut off immediately at the edge of the minimum ellipsoid.

The ideal implementation of the sound attenuation between the inner and outer ellipsoids is an inverse power dropoff. A reasonable approximation to this ideal model is a linear dropoff in decibel value. Since an inverse power dropoff never actually reaches zero, it is necessary to select an appropriate cutoff value for the outer ellipsoid so that the outer ellipsoid contains the space in which the sound is truly audible and excludes space where it would be negligible. Keeping the outer ellipsoid as small as possible will help limit resources used by nearly inaudible sounds. Experimentation suggests that a 20dB dropoff from the maximum intensity is a reasonable cutoff value that makes the bounding volume (the outer ellipsoid) contain the truly audible range of the sound. Since actual physical sound dropoff in an anechoic environment follows the inverse square law, using this algorithm it is possible to mimic real-world sound attenuation by making the maximum ellipsoid ten times larger than the minimum ellipsoid. This will yield inverse square dropoff between them.

Browsers should support spatial localization of sound as well as their underlying sound libraries will allow. The spatialize field is used to indicate to browsers that they should try to spatially localize this sound. If the spatialize field is TRUE, the sound should be treated as a monaural sound coming from a single point. A simple spatialization mechanism just places the sound properly in the pan of the stereo (or multichannel) sound output. Sounds are faded out over distance as described above. Browsers may use more elaborate sound spatialization algorithms if they wish.

Authors can create ambient sounds by setting the spatialize field to FALSE. In that case, stereo and multichannel sounds should be played using their normal separate channels. The distance to the sound and the minimum and maximum ellipsoids (discussed above) should affect the intensity in the normal way. Authors can create ambient sound over the entire scene by setting the minFront and minBack to the maximum value.

Sound {
  exposedField SFNode   source        NULL
  exposedField SFFloat  intensity     1
  exposedField SFFloat  priority      0
  exposedField SFVec3f  location      0 0 0
  exposedField SFVec3f  direction     0 0 1
  exposedField SFFloat  minFront      1
  exposedField SFFloat  maxFront      10
  exposedField SFFloat  minBack       1
  exposedField SFFloat  maxBack       10
  field        SFBool   spatialize    TRUE
}
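
A sketch of typical usage follows. The AudioClip field names shown here are assumptions (that node is described elsewhere in this specification), and the URL is hypothetical; note that making the outer ellipsoid ten times the size of the inner one yields the inverse-square dropoff described above:

Sound {
  source AudioClip { url "file:chime.wav" }  # assumed AudioClip syntax; hypothetical URL
  location  0 2 0
  direction 0 0 1
  minFront  1    # inner (full-intensity) ellipsoid
  maxFront  10   # outer ellipsoid 10x the inner: inverse-square dropoff between them
  minBack   1
  maxBack   10
}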


Sphere

The Sphere node represents a sphere. By default, the sphere is centered at the origin and has a radius of 1.

Spheres generate their own normals. When a texture is applied to a sphere, the texture covers the entire surface, wrapping counterclockwise from the back of the sphere. The texture has a seam at the back on the YZ plane.

Sphere {
  field SFFloat radius  1
}

Prototype definition:

PROTO Sphere [
  field SFFloat radius  1
] {
   ... equivalent to an IndexedFaceSet plus generator script...
}

SphereSensor

The SphereSensor maps dragging motion into a free rotation about its center. The feel of the rotation is as if you were rolling a ball.

The free rotation of the SphereSensor is always unclamped.

Upon the initial click down on the SphereSensor's geometry, the point hit determines the radius of the sphere used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this sphere, or in the plane perpendicular to the view vector if the cursor moves off of the sphere. An onSphere TRUE event is generated at the initial click down; thereafter, onSphere FALSE/TRUE events are generated as the pointing device is dragged off or onto the sphere.

SphereSensor {
  exposedField SFBool     enabled    TRUE
  eventOut     SFBool     isActive
  eventOut     SFVec3f    trackPoint
  eventOut     SFRotation rotation
  eventOut     SFBool     onSphere
}


SpotLight

The SpotLight node defines a light source that is placed at a fixed location in 3-space and illuminates in a cone along a particular direction.

See Lights and Lighting for an explanation of ambient lighting.

The cone of light extends a maximum distance of radius from its location. The light's illumination falls off with distance as specified by three attenuation coefficients. The attenuation factor is 1/(attenuation[0] + attenuation[1]*r + attenuation[2]*r^2), where r is the distance of the light to the surface being illuminated. The default is no attenuation. Renderers that do not support a full attenuation model may approximate as necessary.

The intensity of the illumination may drop off as the ray of light diverges from the light's direction toward the edges of the cone. The angular distribution of light is controlled by the cutOffAngle, beyond which the illumination is zero, and the beamWidth, the angle at which the beam starts to fall off. Renderers that support a two-cone model, with linear fall-off from full intensity at the inner cone to zero at the cutoff cone, should use beamWidth for the inner cone angle. Renderers that attenuate using a cosine raised to a power should use an exponent of 0.5*log(0.5)/log(cos(beamWidth)). When beamWidth >= PI/2 (the default), the illumination is uniform up to the cutoff angle.

SpotLight {
  exposedField SFBool  on                TRUE  
  exposedField SFFloat intensity         1  
  exposedField SFFloat ambientIntensity  0 
  exposedField SFColor color             1 1 1 
  exposedField SFVec3f location          0 0 0  
  exposedField SFVec3f direction         0 0 -1
  exposedField SFFloat beamWidth         1.570796
  exposedField SFFloat cutOffAngle       0.785398 
  exposedField SFFloat radius            1 
  exposedField SFVec3f attenuation       1 0 0
}

Switch

The Switch grouping node traverses zero or one of its children (which are specified in the choices field).

The whichChild field specifies the index of the child to traverse, where the first child has index 0. If whichChild is less than zero or greater than or equal to the number of nodes in the choices field, nothing is chosen.


Switch {
  exposedField    SFInt32 whichChild -1
  exposedField    MFNode  choices   [ ]
}

Prototype definition:

PROTO Switch [
  exposedField    SFInt32 whichChild -1
  exposedField    MFNode  choices   [ ]
] {
  DEF F Transform {
  }
  DEF SWITCHSCRIPT Script {
    eventOut MFNode remove
    eventOut MFNode add
    exposedField SFInt32 whichChild IS whichChild
    exposedField MFNode  choices    IS choices
    #
    # Script must:
    #   -- keep whichChild up-to-date
    #   -- figure out which child should
    #      be seen when whichChild changes, add/remove 
    #      appropriate children
  }
  ROUTE SWITCHSCRIPT.remove TO F.remove_children
  ROUTE SWITCHSCRIPT.add TO F.add_children
}
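
For example, a sketch that initially traverses its first choice; sending SW a set_whichChild event of 1 would display the second choice instead:

DEF SW Switch {
  whichChild 0   # traverse the first choice; -1 would display nothing
  choices [
    Shape { geometry Sphere { } }
    Shape { geometry Text { string "sphere hidden" } }
  ]
}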

Text

The Text node represents one or more text strings specified using the UTF-8 encoding, as specified by the ISO 10646-1:1993 standard. Due to recent changes to the Korean Jamo characters, the UTF-8 character set is based on ISO 10646-1:1993 plus pDAM 1-5 (which include the Korean changes). The text strings are stored in visual order.

The text strings are contained in the string field. The fontStyle field contains one FontStyle node that specifies the font size, font family and style, direction of the text strings, and any specific language rendering techniques that must be used for the text.

The maxExtent field limits and scales the text string if the natural length of the string is longer than the maximum extent, as measured in the local coordinate space. If the text string is shorter than the maximum extent, it is not changed. The maximum extent is measured horizontally for horizontal text (FontStyle node: horizontal=TRUE) and vertically for vertical text (FontStyle node: horizontal=FALSE).

The length field contains an MFFloat value that specifies the length of each text string in the local coordinate space. If the string is too short, it is stretched (either by scaling the text or by adding space between the characters). If the string is too long, it is compressed (either by scaling the text or by subtracting space between the characters). If a length value is missing--for example, if there are four strings but only three length values--the missing values are considered to be 0.

For both the maxExtent and length fields, a value of 0 indicates that the string may be any length.

Textures are applied to 3D text as follows. The texture origin is at the origin of the first string, as determined by the justification. The texture is scaled equally in both S and T dimensions, with the font height representing 1 unit. S increases to the right, and T increases up.

ISO 10646-1:1993 Character Encodings

Characters in ISO 10646 are encoded in multiple octets. Code space is divided into four octets, as follows:

+-------------+-------------+-----------+------------+
| Group-octet | Plane-octet | Row-octet | Cell-octet |
+-------------+-------------+-----------+------------+

ISO 10646-1:1993 allows two basic forms for characters:

  1. UCS-2 (Universal Coded Character Set-2). Also known as the Basic Multilingual Plane (BMP). Characters are encoded in the lower two octets (row and cell). Predictions are that this will be the most commonly used form of 10646.
  2. UCS-4 (Universal Coded Character Set-4). Characters are encoded in the full four octets.

In addition, three transformation formats (UCS Transformation Formats, or UTF) are accepted: UTF-7, UTF-8, and UTF-16. Each name indicates the nature of the transformation: 7-bit, 8-bit, or 16-bit. UTF-7 and UTF-16 are described in The Unicode Standard, Version 2.0.

UTF-8 maintains transparency for all of the ASCII code values (0...127). It allows ASCII text (0x0..0x7F) to appear without any changes and encodes all characters from 0x80 to 0x7FFFFFFF into a series of six or fewer bytes.

If the most significant bit of the first byte is 0, then the remaining seven bits are interpreted as an ASCII character. Otherwise, the number of leading 1 bits indicates the number of bytes that follow. There is always a 0 bit between the count bits and any data.

The first byte can be one of the following; the X's indicate bits available to encode the character:

 0XXXXXXX only one byte   0..0x7F (ASCII)
 110XXXXX two bytes       Maximum character value is 0x7FF
 1110XXXX three bytes     Maximum character value is 0xFFFF
 11110XXX four bytes      Maximum character value is 0x1FFFFF
 111110XX five bytes      Maximum character value is 0x3FFFFFF
 1111110X six bytes       Maximum character value is 0x7FFFFFFF

All following bytes have this format: 10XXXXXX

A two-byte example: the registered trademark symbol ("circled R registered sign") is code 174 in ISO Latin-1 (8859-1). It is encoded as 0x00AE in UCS-2 of ISO 10646, and in UTF-8 it has the two-byte encoding 0xC2, 0xAE.

Text {
 exposedField  MFString string    [ ]
 field         SFNode   fontStyle NULL
 field         SFFloat  maxExtent 0.0
 field         MFFloat  length    [ ]
}

TextureTransform

The TextureTransform node defines a 2D transformation that is applied to texture coordinates. This node is used only in the textureTransform field of the Appearance node and affects the way textures are applied to the surfaces of the associated Geometry node. The transformation consists of (in order) a nonuniform scale about an arbitrary center point, a rotation about that same point, and a translation. This allows a user to change the size and position of the textures on shapes.

TextureTransform {
  exposedField SFVec2f translation 0 0
  exposedField SFFloat rotation    0
  exposedField SFVec2f scale       1 1
  exposedField SFVec2f center      0 0
}
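
For instance, the following sketch tiles a texture twice across a surface by scaling the texture coordinates about the center of the image (the texture field is omitted here; any texture node may be supplied):

Appearance {
  textureTransform TextureTransform {
    scale  2 2        # texture coordinates are doubled, so the image repeats twice in S and T
    center 0.5 0.5    # scale about the middle of the image rather than its corner
  }
}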

TextureCoordinate

This node defines a set of 2D coordinates to be used in the texCoord field to map textures to the vertices of some geometry nodes (IndexedFaceSet and ElevationGrid).

Texture coordinates range from 0 to 1 across the texture image. The horizontal coordinate, S, is specified first, followed by the vertical coordinate, T.

TextureCoordinate {
  exposedField MFVec2f point []
}
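
As a sketch (assuming the coord, coordIndex, and texCoord fields of the IndexedFaceSet node), the following maps the full texture image onto a single square face:

Shape {
  geometry IndexedFaceSet {
    coord Coordinate {
      point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]   # a unit square in the XY plane
    }
    coordIndex [ 0, 1, 2, 3, -1 ]
    texCoord TextureCoordinate {
      point [ 0 0,  1 0,  1 1,  0 1 ]           # S,T pairs; the whole image spans the square
    }
  }
}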

TimeSensor

TimeSensors generate events as time passes. TimeSensors can be used to drive continuous simulations and animations, periodic activities (e.g., once per minute), and single-occurrence events such as an alarm clock. TimeSensor eventOuts include isActive, which is TRUE while the TimeSensor is running and FALSE otherwise. The remaining outputs are fraction, an SFFloat in the interval [0,1], and time, an SFTime field.

TimeSensors remain inactive until their startTime is reached. At the first simulation tick when "now" is greater than or equal to startTime, the TimeSensor will begin generating time and fraction events, which may be routed to other nodes to drive continuous animation or simulated behaviors. (See below for behavior on read.)

The length of time a TimeSensor generates events is controlled using cycleInterval, loop, and stopTime; a TimeSensor stops generating time events at time startTime+cycleInterval if loop is FALSE. If loop is TRUE, the TimeSensor loops forever. The time events contain absolute times so they will start at startTime and increase up to startTime+cycleInterval if loop is FALSE, or increase forever if loop is TRUE. The use of stopTime to halt TimeSensor output is described below.

TimeSensors ignore changes to their startTime while they are actively outputting values. If a set_startTime event is received while the TimeSensor is active, then that set_startTime event is ignored (the startTime field is not changed, and a startTime_changed eventOut is NOT generated). A TimeSensor may be re-started while it is active by sending it a set_stopTime "now" event (which will cause the TimeSensor to become inactive) and then sending it a set_startTime event (setting it to "now" or any other starting time, in the future or past).

If the enabled exposedField is TRUE, the TimeSensor behaves as described above. When enabled is FALSE the TimeSensor does not generate outputs and isActive is set to FALSE. However, events on the exposedFields of the TimeSensor, such as set_startTime, are processed and startTime_changed events are sent regardless of the state of enabled.

The discrete field controls the output of time and fraction events. If discrete is FALSE (the default), then fraction events will rise from 0.0 to 1.0 over each cycleInterval, and time events will be generated continuously over each cycleInterval. If discrete is TRUE, then fraction events are not generated and time events are generated at startTime and at the end of each cycleInterval.

If stopTime is greater than startTime, time and fraction events will not be generated after the stopTime has been reached. However, stopTime is ignored if it is less than or equal to startTime. The computation of the fraction value is performed independently of the stopTime value, i.e., "now" being greater than or equal to stopTime does not imply that fraction is 1.0.

A TimeSensor will generate an isActive TRUE event when it begins generating times, and will generate an isActive FALSE event when it stops generating times (either because stopTime was reached or because loop is FALSE and startTime+cycleInterval was reached). isActive events are generated only when the state of isActive changes.

Setting the loop field to TRUE makes the TimeSensor start generating events at startTime and continue generating events forever (or until stopTime is reached). This use of the TimeSensor should be used with caution, since it incurs continuous overhead on the simulation. By combining the loop and stopTime fields a TimeSensor that repeats something N times can be created as follows:

TimeSensor {
  startTime     T
  cycleInterval I
  loop          TRUE
  stopTime      T+I*N
}

Setting loop to FALSE and cycleInterval to 0 will result in a single time event being generated at startTime; this can be used to build an alarm that goes off at some point in the future. If cycleInterval is 0, the TimeSensor generates neither fraction events nor isActive events.
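
A minimal sketch of this alarm pattern (the routes are hypothetical; startTime would normally be armed with some future time by another node):

DEF Alarm TimeSensor {
  cycleInterval 0    # loop defaults to FALSE; a single time event fires at startTime
}
# ROUTE Clock.alarmTime TO Alarm.startTime   # hypothetical: arm the alarm for a future time
# ROUTE Alarm.time TO Bell.startTime         # hypothetical: start a Sound node when it fires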

No guarantees are made with respect to how often a TimeSensor will generate time events, but TimeSensors are guaranteed to generate final fraction and time events at or after time (startTime+cycleInterval) if loop is FALSE and stopTime is less than or equal to startTime. Note that a TimeSensor with default startTime, stopTime, and loop values does not generate any events when read.

The TimeSensor has been designed to support the greatest number of common cases via efficient techniques; here, "efficient" means without requiring Script nodes. The cases we have considered are illustrated by the examples that follow the node's public interface.

TimeSensor {
  exposedField SFTime   cycleInterval 0
  exposedField SFBool   discrete      FALSE
  exposedField SFBool   enabled       TRUE
  exposedField SFBool   loop          FALSE
  exposedField SFTime   stopTime      1
  exposedField SFTime   startTime     0
  eventOut     SFBool   isActive
  eventOut     SFFloat  fraction
  eventOut     SFTime   time
}

Examples:

#1. Animate a cube when the user clicks on it:

DEF XForm Transform { children [
  Shape { geometry Box { } }
  DEF Clicker TouchSensor { }
  DEF TimeSource TimeSensor { 
    cycleInterval 2.0             # Will run once for 2 seconds
  }
  # Animate one full turn about Y axis
  # (need 4 keyframes to make it well-determined):
  DEF Animation OrientationInterpolator {
       keys   [ 0,      .33,       .66,        1.0 ]
       values [ 0 1 0 0, 0 1 0 2.1, 0 1 0 4.2, 0 1 0 0 ]
  }
]}
ROUTE Clicker.touchTime TO TimeSource.startTime
ROUTE TimeSource.fraction TO Animation.set_fraction
ROUTE Animation.outValue TO XForm.rotation


#2. Play Westminster Chimes once an hour:

Group { children [
  DEF Hour TimeSensor {
    discrete      TRUE
    loop          TRUE
    cycleInterval 3600.0         # 60*60 seconds == 1 hour
  }
  DEF Sounder Sound {
    name "http://...../westminster.mid"
  }
]}
ROUTE Hour.time TO Sounder.startTime


#3. Make a grunting noise when the user runs into a wall:

DEF Walls Collision { children [
  Transform {
    #... geometry of walls...
  }
  DEF Grunt Sound {
    name "http://...../grunt.wav"
  }
]}
ROUTE Walls.collision TO Grunt.startTime

TouchSensor

A TouchSensor tracks the pointing device with respect to its sibling nodes. This sensor can be activated or deactivated by sending it an enabled event with a value of TRUE or FALSE.

The TouchSensor generates events as the pointing device "passes over" the geometry defined by nodes that are children of the TouchSensor's parent Group or Transform. Typically, the pointing device is a 2D device such as a mouse. In this case, the pointing device is considered to be moving within a plane a fixed distance from the camera and perpendicular to the line of sight; this establishes a set of 3D coordinates for the pointer. If a 3D pointer is in use, then the TouchSensor only generates events when the pointer is within the user's field of view. In either case, the pointing device is considered to "pass over" geometry when the geometry is intersected by a line extending from the camera and passing through the pointer's 3D coordinates. If multiple surfaces intersect this line (hereafter called the bearing), only the nearest will be eligible to generate events.

isOver TRUE/FALSE events are generated as the pointing device "passes over" the TouchSensor's geometry. When the pointing device moves to a position such that its bearing intersects the TouchSensor's geometry, an isOver TRUE event should be generated. When the pointing device moves to a position such that its bearing no longer intersects the geometry, or some other geometry is obstructing the TouchSensor's geometry, an isOver FALSE event should be generated. All of these events are generated only when the pointing device has moved; events are not generated if the geometry itself is animating and moving underneath the pointing device.

The user may manipulate the pointing device to cause the TouchSensor to generate isActive events. When the TouchSensor generates an isActive TRUE event, it will grab all further motion events from the pointing device until it is released and generates an isActive FALSE event (other Touch and Drag sensors will not generate events during this time). Motion of the pointing device while isActive is TRUE is referred to as a "drag". If a 2D pointing device is in use, isActive events will typically reflect the state of the primary button associated with the device (i.e., isActive is TRUE when the primary button is pressed and FALSE when it is released). If a 3D pointing device is in use, isActive events will typically reflect whether the pointer is within or in contact with the TouchSensor's geometry.

As the user drags the bearing over the TouchSensor's geometry (with isActive TRUE), the point of intersection (if any) is determined. When isOver is TRUE, each drag of the pointing device generates hitPoint, hitNormal, and hitTexCoord events. hitPoint events contain the 3D point on the surface of the underlying geometry, given in the TouchSensor's coordinate system. hitNormal events contain the surface normal at the hitPoint. hitTexCoord events contain the texture coordinates of that surface at the hitPoint, which can be used to support the 3D equivalent of an image map.

The touchTime eventOut is generated when all three of the following are true:

  1. The pointing device was over the TouchSensor's geometry when it was initially activated (isActive TRUE).
  2. The pointing device is currently over the geometry (isOver TRUE).
  3. The pointing device was just deactivated (an isActive FALSE event was generated).

TouchSensor {
  exposedField SFBool  enabled TRUE
  eventOut     SFBool  isOver
  eventOut     SFBool  isActive
  eventOut     SFVec3f hitPoint
  eventOut     SFVec3f hitNormal
  eventOut     SFVec2f hitTexCoord
  eventOut     SFTime  touchTime
}
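
As a sketch of the hit events, the following moves a marker sphere to wherever the user drags on a box (both shapes are assumed to share a coordinate system, since hitPoint is given in the TouchSensor's coordinate system):

Group { children [
  DEF Toucher TouchSensor { }
  Shape { geometry Box { } }      # the touchable surface
] }
DEF Marker Transform { children [
  Shape { geometry Sphere { } }   # the marker
] }
ROUTE Toucher.hitPoint TO Marker.translation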

Transform

A Transform is a grouping node that defines a coordinate system for its children that is relative to the coordinate systems of its parents. See also "Coordinate Systems and Transformations."

The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside this Transform. These are hints to the browser that it may use to optimize certain operations such as determining whether or not the Transform needs to be drawn. If the specified bounding box is smaller than the true bounding box of the Transform, results are undefined. The bounding box should be large enough to completely contain the effects of all sounds, lights and fog nodes that are children of this Transform. If the size of this Transform may change over time because its children are animating (moving), then the bounding box must also be large enough to contain all possible animations (movements). The bounding box should be only the union of the Transform's children's bounding boxes; it should not include the Transform's transformation.

The add_children event adds the nodes passed in to the Transform's children field. Any nodes passed in the add_children event that are already in the Transform's children list are ignored. The remove_children event removes the nodes passed in from the Transform's children field. Any nodes passed in the remove_children event that are not in the Transform's children list are ignored.

The translation, rotation, scale, scaleOrientation and center fields define a geometric 3D transformation consisting of (in order) a (possibly) non-uniform scale about an arbitrary point, a rotation about an arbitrary point and axis, and a translation. The Transform node:

Transform {
    translation T1
    rotation R1
    scale S
    scaleOrientation R2
    center T2

    ...
}

is equivalent to the nested sequence of:

Transform { translation T1 
 Transform { translation T2 
  Transform { rotation R1 
   Transform { rotation R2 
    Transform { scale S 
     Transform { rotation -R2 
      Transform { translation -T2
              ... 
}}}}}}}


Transform {
  field        SFVec3f     bboxCenter       0 0 0
  field        SFVec3f     bboxSize         0 0 0
  exposedField SFVec3f     translation      0 0 0
  exposedField SFRotation  rotation         0 0 1  0
  exposedField SFVec3f     scale            1 1 1
  exposedField SFRotation  scaleOrientation 0 0 1  0
  exposedField SFVec3f     center           0 0 0
  exposedField MFNode      children         [ ]
  eventIn      MFNode      add_children
  eventIn      MFNode      remove_children
}  
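
For example, the following sketch turns a box a quarter turn about a Y-parallel axis passing through the point (2, 0, 0) rather than through the origin:

Transform {
  center   2 0 0
  rotation 0 1 0 1.5708   # 90 degrees (pi/2 radians) about the Y axis, pivoting about center
  children [ Shape { geometry Box { } } ]
}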

Viewpoint

The Viewpoint node defines a specific location in a local coordinate system from which the user might wish to view the scene. Viewpoints are Bindable Leaf Nodes, so the browser maintains a Viewpoint stack in which the topmost Viewpoint is the currently active viewpoint. To push a Viewpoint onto the top of the stack, a TRUE value is sent to the bind eventIn of that Viewpoint. Once active, the viewpoint is bound to the browser's view, and all subsequent changes (e.g., animations) to the viewpoint automatically change the user's view. A FALSE value sent to bind pops the viewpoint from the stack and unbinds it from the browser's view. See Bindable Leaf Nodes for more details on the Viewpoint stack.

An author can automatically move the user's view through the world by binding the user to a viewpoint and then animating that viewpoint.

The position and orientation fields of the Viewpoint node specify relative locations in the local coordinate system. Position is relative to the coordinate system's origin (0,0,0), while orientation specifies a rotation relative to the default orientation; the default orientation has the user looking down the -Z axis with +X to the right and +Y straight up. Note that the single orientation rotation (which is a rotation about an arbitrary axis) is sufficient to completely specify any combination of view direction and "up" vector. Viewpoints are affected by the transformation hierarchy.

The fieldOfView field specifies a preferred field of view from this viewpoint, in radians. A smaller field of view corresponds to a telephoto lens on a camera; a larger field of view corresponds to a wide-angle lens on a camera. The field of view should be greater than zero and smaller than PI; the default value corresponds to a 45 degree field of view. fieldOfView is a hint to the browser and may be ignored. A browser rendering the scene into a rectangular window will ideally scale things such that a solid angle of fieldOfView from the viewpoint in the view direction will be completely visible in the window.

A viewpoint can be placed in a VRML world to specify the initial location of the viewer when that world is entered. Browsers should recognize the URL syntax ".../scene.wrl#ViewpointName" as specifying that the user's initial view when entering the "scene.wrl" world should be the first viewpoint in file "scene.wrl" that appears as "DEF ViewpointName Viewpoint { ... }".

The description field can be used to identify viewpoints to be made publicly accessible through a viewpoints menu or some other device. The string in the description field should be shown in the interface if this functionality is implemented. If no description is given, then this Viewpoint should never appear in any public interface. It is suggested that the browser move to a viewpoint when its description is selected, either animating to the new position or jumping directly there. Once the new position is reached both the isBound and bindTime eventOuts should be sent.

A TRUE value sent to the bind eventIn pushes a given Viewpoint to the top of the Viewpoint stack in the browser. The bindTime eventOut sends the time at which the Viewpoint is bound. This is useful for starting an animation or script when a given Viewpoint becomes active.

If a browser has the capability to go to a named viewpoint, it should bind to that viewpoint and issue an isBound TRUE eventOut on that viewpoint.

Viewpoint {
  exposedField SFVec3f    position       0 0 0
  exposedField SFRotation orientation    0 0 1  0
  exposedField SFFloat    fieldOfView    0.785398
  field        SFString   description    ""
  eventIn      SFBool     bind
  eventOut     SFTime     bindTime
  eventOut     SFBool     isBound
}
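
As a sketch, a described Viewpoint can be bound while the user presses on some geometry; pressing sends TRUE to bind (pushing the viewpoint onto the stack), and releasing sends FALSE (popping it):

Group { children [
  Shape { geometry Box { } }
  DEF Clicker TouchSensor { }
] }
DEF Overview Viewpoint {
  position    0 2 10
  description "Overview"   # makes the viewpoint publicly accessible
}
ROUTE Clicker.isActive TO Overview.bind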

VisibilitySensor

The VisibilitySensor detects visibility changes of a bounding box as the user navigates the world. It outputs a TRUE event when any portion of the box enters the viewing frustum, and a FALSE event when the box completely exits the frustum. The VisibilitySensor does not detect occlusion by other objects; it compares the bounding box to the viewing frustum regardless of other geometry in the world. Browsers must be conservative: if isVisible is FALSE, the bounding box is guaranteed to be completely outside the viewing frustum, but browsers may err liberally when isVisible is TRUE (the box may or may not actually be visible).

The bounding box specified by the VisibilitySensor is affected by the current transformation.

VisibilitySensor is typically used to detect when the user can see a specific object or area, and to activate or deactivate some behavior or animation in order to attract the user or improve performance.

VisibilitySensor {
  exposedField SFVec3f bboxCenter 0 0 0
  exposedField SFVec3f bboxSize   0 0 0
  exposedField SFBool  isActive   TRUE
  eventOut     SFBool  isVisible
}
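
A typical sketch: run an animation's timer only while its region may be visible, avoiding wasted work when it is definitely off-screen:

DEF Region VisibilitySensor {
  bboxCenter 0 0 0
  bboxSize   10 10 10
}
DEF Timer TimeSensor {
  loop          TRUE
  cycleInterval 5
}
ROUTE Region.isVisible TO Timer.enabled   # disable the timer when the box is surely invisible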

WorldInfo

The WorldInfo node contains information about the world. The title of the world is stored in its own field, allowing browsers to display it--for instance, in their window border. Any other information about the world can be stored in the info field--for instance, the scene author, copyright information, and public domain information.

WorldInfo {
  field SFString title ""
  field MFString info  [ ]
}
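
For example (the title and info strings are illustrative):

WorldInfo {
  title "Example World"
  info  [ "Author: Jane Doe",
          "Copyright 1996; placed in the public domain" ]
}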
