Reviewer's Guide to JSR-184



This document is provided as an appendix to the JSR-184 Public Review Draft, to direct reviewers' attention to issues on which the Expert Group would like feedback in order to make well-informed decisions. (Of course, feedback is equally welcome on all other aspects of the specification.) Comments are solicited in particular from programmers and content developers who are going to use the API, because the Expert Group has good representation from parties who are going to implement or deploy the API, but unfortunately not as good representation from application developers.
 

Baseline limits

The issues listed in this first section generally concern how tightly the minimum capabilities of implementations should be specified; for instance, how many simultaneously active light sources must be supported. Some issues relate to rendering quality hints. Hints are features that increase rendering quality if available and enabled, but that implementations can silently ignore without any catastrophic consequences to the application.
From the specification point of view, it would be easiest to define very low baseline limits and make all contentious features optional. However, that would result in different implementations having radically different capabilities, and would therefore diminish the usefulness of the standard. If some features are left optional, developers will have three choices:
  • Write to the lowest common denominator. Ignore any optional features and settle for lower speed or quality.
  • Write to the de facto standard feature set. Accept that the application might not work on all devices.
  • Find out the capabilities of the underlying implementation and adjust application behavior accordingly.
Each alternative clearly has its drawbacks in terms of performance, interoperability, or development effort. It would be best if the API were specified so tightly that variations between implementations simply did not exist -- a worthy goal, but almost impossible to achieve.
To reduce variability as much as possible, the Expert Group would like to better understand the consequences for application development of choosing any particular set of baseline capabilities. In other words, which of the three approaches listed above would the majority of developers select? In particular, in which cases would the third alternative (dynamic adaptation to different implementations) be feasible?
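As a point of reference, the minimal sketch below illustrates what the third alternative could look like in practice: the application queries the implementation's capabilities once with Graphics3D.getProperties and adapts its content to the reported limits. The property keys and fallback values shown here are illustrative assumptions, not normative names from the draft.

    import java.util.Hashtable;
    import javax.microedition.m3g.Graphics3D;

    // Minimal sketch of alternative 3: query the capabilities once at startup
    // and adapt the content pipeline to them. The property keys ("maxLights",
    // "maxTextureDimension", "numTextureUnits") and the fallback values are
    // assumptions for illustration only.
    public class Capabilities {
        public final int maxLights;
        public final int maxTextureDimension;
        public final int numTextureUnits;

        public Capabilities() {
            Hashtable props = Graphics3D.getProperties();
            maxLights           = intValue(props, "maxLights", 1);
            maxTextureDimension = intValue(props, "maxTextureDimension", 64);
            numTextureUnits     = intValue(props, "numTextureUnits", 1);
        }

        private static int intValue(Hashtable props, String key, int fallback) {
            Object value = props.get(key);
            return (value instanceof Integer) ? ((Integer) value).intValue() : fallback;
        }
    }

An application would then select its content -- texture resolutions, number of active lights, single-pass versus multipass effects -- based on these values.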

Multitexturing

Multitexturing is fully supported in the API, and there is no upper limit to the number of texturing units that an implementation may support. The question is, what should be the lower limit? The current consensus within the EG is that implementations should not be required to support more than one texturing unit. This is due to the inherent difficulty of supporting multitexturing at the driver level if the underlying 3D hardware does not support it.
On the other hand, multitexturing is significantly faster than multipass rendering, because the same piece of geometry does not need to be transformed multiple times. Will it be feasible for applications to use multitexturing on implementations that support it, and substitute multipass rendering on implementations that do not?
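To make that trade-off concrete, the sketch below applies a base texture and a lightmap through a single Appearance when two texturing units are reported, and signals the caller to fall back to multipass rendering otherwise. The Appearance.setTexture(index, texture) call follows the draft API; the multipass branch is only indicated, not implemented.

    import javax.microedition.m3g.Appearance;
    import javax.microedition.m3g.Texture2D;

    // Sketch of choosing between single-pass multitexturing and a multipass
    // fallback, based on the number of texturing units reported by the
    // implementation. Returns true if both layers fit into one pass.
    public final class TextureSetup {
        public static boolean applyBaseAndLightmap(Appearance appearance,
                                                   Texture2D base,
                                                   Texture2D lightmap,
                                                   int numTextureUnits) {
            appearance.setTexture(0, base);
            if (numTextureUnits >= 2) {
                appearance.setTexture(1, lightmap);   // both layers in one pass
                return true;
            }
            // Multipass fallback: the caller must render the same geometry a
            // second time with a lightmap-only Appearance and blending enabled.
            return false;
        }
    }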

Texture size

All implementations are currently required to support textures of at least 64x64 pixels. If an application prefers to use larger textures, it must detect the device capabilities and take appropriate actions (use the texture as is, downsample it, not use it at all, terminate, or something else). In practice, most implementations will support dimensions up to 256x256 -- should that also be the baseline requirement?
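A minimal sketch of this kind of adaptation is shown below: the application ships the same texture at two resolutions and loads the larger one only if the reported maximum texture dimension allows it. The resource names are hypothetical, and "maxTextureDimension" is assumed to be the relevant capability key.

    import java.io.IOException;
    import javax.microedition.lcdui.Image;
    import javax.microedition.m3g.Image2D;
    import javax.microedition.m3g.Texture2D;

    // Sketch of adapting texture resolution to the reported limit. The 64-pixel
    // fallback matches the current baseline requirement; the resource names are
    // hypothetical.
    public final class TextureLoader {
        public static Texture2D loadWallTexture(int maxTextureDimension)
                throws IOException {
            String resource = (maxTextureDimension >= 256)
                    ? "/textures/wall_256.png"   // preferred high-detail asset
                    : "/textures/wall_64.png";   // guaranteed-to-fit fallback
            Image image = Image.createImage(resource);
            return new Texture2D(new Image2D(Image2D.RGB, image));
        }
    }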

Other issues

Besides multitexturing and texture size, there are a number of less critical decisions to be made regarding the minimum capabilities that implementations must have. In general, everything that can be queried with the getProperties method in Graphics3D is subject to further discussion. These items are listed below, together with some initial suggestions on what the decision could be in each case; a sketch of requesting quality hints at render time follows the list:
  • Dithering. Should probably be a hint only.
  • Antialiasing. Should probably be a hint only.
  • Local camera lighting. Should probably be a hint only.
  • Texture bilinear filtering. Should probably be a hint only.
  • Texture mipmapping. Is this important enough to be mandated?
  • Perspective correction. Is this important enough to be mandated?
  • Transforms per vertex in skinning. Could mandate 2 or 4.
  • Simultaneous light sources. Could mandate 4 or 8.
  • Viewport size. Should be the same as the texture size.
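As background for the hint-related items above, the sketch below shows how an application might request dithering and antialiasing when binding the rendering target; a conforming implementation may honor the hints or silently ignore them. The hint flags and the bindTarget signature are taken from the Graphics3D class as currently drafted and should be treated as assumptions insofar as the draft may still change.

    import javax.microedition.lcdui.Graphics;
    import javax.microedition.m3g.Graphics3D;
    import javax.microedition.m3g.World;

    // Sketch of treating dithering and antialiasing as pure hints: they are
    // requested at bind time and may be silently ignored by the implementation.
    public final class HintedRenderer {
        public static void render(Graphics g, World world) {
            Graphics3D g3d = Graphics3D.getInstance();
            int hints = Graphics3D.ANTIALIAS | Graphics3D.DITHER;
            try {
                g3d.bindTarget(g, true, hints);   // depth buffering on, hints requested
                g3d.render(world);
            } finally {
                g3d.releaseTarget();              // always release, even if render fails
            }
        }
    }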

Contentious features

This section covers features that make no sense if they are left optional; they must either be mandatory or not present in the API at all. The question, again, is whether these features are important enough that they really should be in the API, or whether they could be left out without anyone noticing.

Sprite picking

There is a pick method in the Group and Camera nodes. The application can use it to shoot a pick ray into the scene and find which object the ray intersects first. The method then returns information about the intersected object. At present, the picked object can be either a Mesh (consisting of polygons) or a Sprite (a flat 2D image with a center point in 3D coordinates).

Is it necessary for Sprites to be pickable, considering that a sprite does not have a 3D shape? An alternative to sprite picking is to set up an invisible Mesh object at the same position as the sprite, and pick that instead. The mesh would represent the shape of the actual object that the sprite image portrays.

Consider the case of a sprite depicting a trailer truck, as viewed from the side. Obviously, the image is much wider than it is tall. If the image is used as a pick target, instead of a dummy 3D object representing the truck, the results will be in error unless the pick ray is shot directly from the side.
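A minimal sketch of the pick-proxy alternative is given below, assuming the draft's per-node picking and rendering enable flags and the Group pick variant that takes viewport coordinates and a Camera. The proxy mesh itself (a rough box matching the truck) is supplied by the caller and is hypothetical; keeping its transform in sync with the sprite is also left to the caller.

    import javax.microedition.m3g.Camera;
    import javax.microedition.m3g.Group;
    import javax.microedition.m3g.Mesh;
    import javax.microedition.m3g.RayIntersection;
    import javax.microedition.m3g.Sprite;

    // Sketch of substituting an invisible proxy Mesh for a Sprite as the pick
    // target. The proxy is never drawn, but it is the object that pick rays hit.
    public final class SpritePickProxy {
        public static void attach(Group parent, Sprite truckSprite, Mesh truckProxy) {
            truckProxy.setRenderingEnable(false);   // never drawn...
            truckProxy.setPickingEnable(true);      // ...but intersected by pick rays
            truckSprite.setPickingEnable(false);    // the flat image itself is ignored
            parent.addChild(truckProxy);
            parent.addChild(truckSprite);
        }

        public static boolean pickAt(Group scene, float x, float y, Camera camera) {
            RayIntersection ri = new RayIntersection();
            return scene.pick(-1, x, y, camera, ri);   // -1 = all scope bits; x, y on the viewport
        }
    }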

Other issues

Miscellaneous other issues that the Expert Group would like to get feedback on are listed below; a sketch illustrating the blending and alpha testing items follows the list.
  • Sprite 2D offset. Should sprites have a screen-space offset, added to the center point before drawing?
  • Blending modes. Is the simplified set of blending modes sufficient? (See the CompositingMode class.)
  • Alpha testing. Is it sufficient that only the "greater or equal" alpha testing operator is supported?
  • Depth range. The OpenGL depth range feature is currently not supported. Should it be?
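For the blending and alpha testing items, the sketch below shows the kind of usage under discussion: standard alpha blending combined with the single "greater or equal" alpha test, configured through a CompositingMode. The mode and method names follow the class as currently drafted and may change.

    import javax.microedition.m3g.Appearance;
    import javax.microedition.m3g.CompositingMode;

    // Sketch of the simplified blending model and the single alpha test operator:
    // fragments whose alpha is below the threshold are discarded, the rest are
    // blended over the frame buffer.
    public final class FoliageAppearance {
        public static Appearance create() {
            CompositingMode cm = new CompositingMode();
            cm.setBlending(CompositingMode.ALPHA);   // source-over alpha blending
            cm.setAlphaThreshold(0.5f);              // pass only if alpha >= 0.5

            Appearance appearance = new Appearance();
            appearance.setCompositingMode(cm);
            return appearance;
        }
    }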