Interactive Web-Based Virtual Reality with Java 3D
Latest Publications


Total Documents: 13 (Five Years: 0)
H-Index: 1 (Five Years: 0)

Published By IGI Global

ISBN: 9781599047898, 9781599047911

Author(s): Chi Chung Ko, Chang Dong Cheng

The last two chapters have discussed how animation and interaction can be created in Java 3D to increase visual impact, to show object dynamics and hidden views, and to allow the user to interact with the objects in a virtual 3D universe (Emoto et al., 2001; Shyamsundar & Gadh, 2001). Our discussion has been carried out in general terms through the use of the behavior class to capture all types of events and cater to all possibilities. However, in many applications, interaction with 3D objects requires the user to pick up relevant objects and change their positions, angles, and even textures and shapes for a variety of purposes. As a simple example of picking behavior, Figure 1 shows snapshots from an application where the user uses the mouse to pick up a cube and move it to a new position through a mouse dragging operation. In this chapter, we will discuss how the picking behavior classes in Java 3D can be used to create interesting, customized, dynamic picking interaction with any specific visual object. We will start in the next section with the use of some standard picking behavior classes, before discussing how custom picking classes can be constructed to suit specific applications.
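A minimal sketch of this kind of mouse-drag picking, using the standard Java 3D utility class PickTranslateBehavior; the single-cube scene and class names here are our own illustrative assumptions, not listings from the chapter:

```java
import javax.media.j3d.*;
import javax.vecmath.Point3d;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.picking.behaviors.PickTranslateBehavior;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class PickDemo {
    // Build a branch with a pickable cube under a writable TransformGroup.
    public static BranchGroup buildScene() {
        BranchGroup root = new BranchGroup();
        TransformGroup tg = new TransformGroup();
        // Allow the picking behavior to read, modify, and report this transform.
        tg.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
        tg.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        tg.setCapability(TransformGroup.ENABLE_PICK_REPORTING);
        tg.addChild(new ColorCube(0.2));
        root.addChild(tg);
        return root;
    }

    public static void main(String[] args) {
        SimpleUniverse universe = new SimpleUniverse();
        BranchGroup root = buildScene();
        // Dragging with the mouse translates whichever object is picked.
        Bounds bounds = new BoundingSphere(new Point3d(), 100.0);
        root.addChild(new PickTranslateBehavior(root, universe.getCanvas(), bounds));
        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(root);
    }
}
```

The utility classes PickRotateBehavior and PickZoomBehavior can be attached in the same way to support rotation and zooming of picked objects.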


Author(s): Chi Chung Ko, Chang Dong Cheng

In the last chapter, the creation of the skeletons or shapes of 3D objects was discussed through the use of geometry objects in Java 3D. In order for these objects to appear as realistic as possible to the user, it is often necessary for them to be covered with appropriate “skins” under good lighting conditions. In Java 3D, details of the skins can be specified using color, texture, and material, all of which can be set through the associated appearance objects. In this chapter, all the important attributes of an appearance object, including those for rendering points, lines, and polygons as well as color and material, will be discussed. The use of texturing will be covered in the next chapter. As mentioned earlier, the creation of a virtual 3D object in a virtual world can be carried out using a Shape3D object in the associated scene graph. This object can reference a geometry object in Java 3D to create the skeleton of the virtual object. In addition, it can also reference an appearance object for specifying the skin of the virtual object. On its own, an appearance object does not contain information on how the object will look. However, it can reference other objects, such as “attribute objects,” “texture-related objects,” and “material objects,” to obtain appearance information that complements the object geometry. Since the use of an appearance object to enhance the geometry in the creation of a virtual universe is a basic requirement in Java 3D, we will now discuss some important aspects of the appearance object in this chapter.
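A short sketch of an Appearance object referencing an attribute object and a material object, as described above; the wire-frame mode and colour values are arbitrary choices for illustration:

```java
import javax.media.j3d.*;
import javax.vecmath.Color3f;

public class AppearanceDemo {
    public static Appearance buildAppearance() {
        Appearance app = new Appearance();
        // An attribute object controlling how polygons are rendered.
        PolygonAttributes pa = new PolygonAttributes();
        pa.setPolygonMode(PolygonAttributes.POLYGON_LINE); // wire-frame rendering
        pa.setCullFace(PolygonAttributes.CULL_NONE);       // render both faces
        app.setPolygonAttributes(pa);
        // A material object supplying the colours used by the lighting model:
        // ambient, emissive, diffuse, specular, and a shininess exponent.
        app.setMaterial(new Material(
            new Color3f(0.2f, 0.2f, 0.2f),   // ambient
            new Color3f(0.0f, 0.0f, 0.0f),   // emissive
            new Color3f(0.8f, 0.1f, 0.1f),   // diffuse
            new Color3f(1.0f, 1.0f, 1.0f),   // specular
            64.0f));                         // shininess
        return app;
    }
}
```

The resulting object would then be paired with a geometry, for example via `new Shape3D(geometry, buildAppearance())`.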


Author(s): Chi Chung Ko, Chang Dong Cheng

We have discussed important Java 3D objects that are basically static in the last few chapters. Starting from this chapter, we will be looking at universes and objects that are dynamic in nature. Specifically, we will discuss issues on animation and interaction in this and the next chapter, respectively. As is well demonstrated by popular interactive computer games, animation and interaction are crucial in making a Java 3D world more interesting. Technically, animation is associated with changes in graphical objects and images as time passes without any direct user action, while interaction corresponds to any such change in response to an action or input from the user (Tso, Tharp, Zhang, & Tai, 1999). In any virtual reality or game application, animation and interaction are often crucial and critical. Through animation, the user is able to get a more realistic feel for 3D objects by looking at them from different angles and perspectives. Through interaction with these objects, the user becomes more integrated into the virtual 3D world, in the same way as we sense our own reality in the real world. Under Java 3D, the “behavior” class is used to define and control both animation and interaction. Note, however, that the behavior class is an abstract class and cannot be used directly (Stromer, Quon, Gordon, Turinsky, & Sensen, 2005). Instead, three classes that extend the behavior class are commonly used: the “interpolator,” the “billboard,” and the “level of detail (LOD)” classes. Furthermore, we can create a new behavior class by extending the behavior class to fit any special need. Briefly, in this chapter, we will discuss the important interpolator classes using a number of illustrative examples, followed by more detailed discussions of the billboard and LOD classes.
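As a minimal sketch of the interpolator approach, the fragment below attaches a RotationInterpolator, driven by a looping Alpha time function, to a cube's transform group; the cube scene and timing values are illustrative assumptions:

```java
import javax.media.j3d.*;
import javax.vecmath.Point3d;
import com.sun.j3d.utils.geometry.ColorCube;

public class SpinDemo {
    public static BranchGroup buildSpinner() {
        BranchGroup root = new BranchGroup();
        TransformGroup tg = new TransformGroup();
        // The interpolator writes new rotations into this transform group.
        tg.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        tg.addChild(new ColorCube(0.3));
        // Alpha maps elapsed time onto [0, 1]; a loop count of -1 means
        // loop forever, with 4000 ms per full cycle (one revolution).
        Alpha alpha = new Alpha(-1, 4000);
        RotationInterpolator spin = new RotationInterpolator(alpha, tg);
        // A behavior is only active inside its scheduling bounds.
        spin.setSchedulingBounds(new BoundingSphere(new Point3d(), 100.0));
        tg.addChild(spin);
        root.addChild(tg);
        return root;
    }
}
```

Adding the returned branch to a SimpleUniverse would show the cube rotating continuously about the y-axis without any user input, which is precisely the "animation without direct user action" distinction drawn above.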


Author(s): Chi Chung Ko, Chang Dong Cheng

In the last chapter, a brief introduction to the creation of 3D content through the use of Java 3D and other programming methodologies for virtual reality applications was given. Before giving details on the various Java 3D classes and functions in subsequent chapters, we will now discuss the basic Java 3D program structure in this chapter. Specifically, JDK installation, programming and compiling tools, as well as the difference between a Java 3D applet and a Java 3D application will be explained. Originating from Sun Microsystems, the Java 3D API is made up of a few packages (Java platform API specification, 2006), which in turn contain the classes of some related components and elements. Specifically, the package javax.media.j3d (Package javax.media.j3d, 2006) contains the most basic classes, often referred to as core classes, which are needed to create a Java 3D program. Note, however, that a complete application will often use many other packages and classes as well. As an example, if there is a need to use vectors, points, and matrices to draw the virtual universe, the package javax.vecmath (Package javax.media.j3d, 2006) has to be imported. Another important package is java.awt (AWT stands for Abstract Windowing Toolkit), which includes classes to create a window to display the rendering. Associated with each class is a variety of methods to aid the programmer in creating the application. Together, these classes and methods give the programmer the basic tools to construct anything from a simple rotating cube system to a 3D virtual city. An important concept in Java 3D programming is that the program and the programming objects created have a tree-like structure. Thus, a Java 3D program creates and instantiates Java 3D objects and places them in a virtual world through the use of a tree-like scene graph. This will be explained in greater detail in subsequent sections.
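A minimal Java 3D application illustrating this structure might look as follows; the content (a single colored cube) is an illustrative assumption, and SimpleUniverse is the utility class that builds the standard viewing branch of the scene graph automatically:

```java
import javax.media.j3d.BranchGroup;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class HelloCube {
    // The content branch of the tree-like scene graph: a root BranchGroup
    // with a single visual object as its child.
    public static BranchGroup createSceneGraph() {
        BranchGroup root = new BranchGroup();
        root.addChild(new ColorCube(0.4));
        return root;
    }

    public static void main(String[] args) {
        // SimpleUniverse creates the view branch (window, canvas, view
        // platform) so the program only needs to supply content.
        SimpleUniverse universe = new SimpleUniverse();
        universe.getViewingPlatform().setNominalViewingTransform();
        BranchGroup root = createSceneGraph();
        root.compile();                 // optimize the subgraph before it goes live
        universe.addBranchGraph(root);  // attach the content branch to the universe
    }
}
```

Run as an application, this opens a window rendering the cube; the same createSceneGraph() method could equally be used from an applet.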


Author(s): Chi Chung Ko, Chang Dong Cheng

In this final chapter, we will describe the use of Java 3D as a visualization technology in the development of a Web-based 3D real-time oscilloscope experiment. Developed and launched under a research project at the National University of Singapore, this application enables students to carry out a physical electronic experiment that involves the use of an actual oscilloscope, a signal generator, and a circuit board remotely through the Internet (Ko, 2000, 2001). Specifically, this system addresses 3D visualization schemes on the client side (Bund, 2005; Hobona, 2006; Liang, 2006; Ueda, 2006; Wang, 2006), as well as Web-based real-time control and 3D-based monitoring between the client and server (Nielsen, 2006; Qin, Harrison, West, & Wright, 2004). The control of the various instruments is carried out in real time through a Java 3D based interface on the client side, with the results of the experiment also reflected or displayed appropriately on 3D instruments in the same interface.


Author(s): Chi Chung Ko, Chang Dong Cheng

One of the most useful and important advantages of 3D graphics rendering and applications is the possibility for the user to navigate through the 3D virtual world in a seamless fashion. Complicated visual objects can be better appreciated from different angles, and manipulation of these objects can be carried out in the most natural manner. To support this important function of navigation, the user will often need to use a variety of input devices, such as the keyboard, mouse, and joystick, in a fashion that befits a 3D scenario. Also, collision handling is important, as it would be unnatural if the user could, say, walk through solid walls in the virtual world. The functionality of navigation therefore has a close relationship with input devices and collision detection, all of which can be handled in Java 3D through a variety of straightforward but less flexible utility classes as well as more complicated but more flexible user-defined methods. The main requirement of navigation is of course to handle or refresh changes in the rendered 3D view as the user moves around in the virtual universe (Wang, 2006). As illustrated in Figure 1, this will require a modification of the platform transform as the user changes his or her position in the universe. Essentially, as will be illustrated in the next section, we will first need to retrieve the ViewPlatformTransform object from the SimpleUniverse object.
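A sketch of this retrieval, using the utility class KeyNavigatorBehavior to drive the view platform transform from the keyboard; the bounds value is an illustrative assumption:

```java
import javax.media.j3d.*;
import javax.vecmath.Point3d;
import com.sun.j3d.utils.behaviors.keyboard.KeyNavigatorBehavior;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class NavigateDemo {
    // Wrap a keyboard navigator around the supplied view transform group.
    public static KeyNavigatorBehavior makeNavigator(TransformGroup viewTg) {
        KeyNavigatorBehavior nav = new KeyNavigatorBehavior(viewTg);
        // The behavior only responds while the viewer is inside these bounds.
        nav.setSchedulingBounds(new BoundingSphere(new Point3d(), 1000.0));
        return nav;
    }

    public static void main(String[] args) {
        SimpleUniverse universe = new SimpleUniverse();
        // Retrieve the ViewPlatformTransform from the SimpleUniverse, as
        // described above; SimpleUniverse already grants write access to it.
        TransformGroup viewTg =
            universe.getViewingPlatform().getViewPlatformTransform();
        BranchGroup root = new BranchGroup();
        root.addChild(makeNavigator(viewTg));
        universe.addBranchGraph(root);
    }
}
```

Arrow and modifier keys then translate and rotate the view platform, so the rendered view refreshes as the user "moves" through the universe.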


Author(s): Chi Chung Ko, Chang Dong Cheng

Web-based virtual reality is fast becoming an important application and technological tool in the next generation of games and simulation, as well as in scientific research, visualization, and multi-user collaboration. While tools based on VRML (virtual reality modeling language) are frequently used for creating Web-based 3D applications, Java 3D has established itself as an important modeling and rendering language for more specialized applications that involve, for example, database accesses, customized behaviors, and home-use mobile devices such as the PDA, mobile phone, and pocket PC (Kameyama, Kato, Fujimoto, & Negishi, 2003).


Author(s): Chi Chung Ko, Chang Dong Cheng

Our discussions in previous chapters have centered on the creation and interaction of visual objects in a virtual 3D world. The objects and scenes constructed, however, will ultimately have to be shown on appropriate display devices, such as a single PC monitor, a stereoscopic head-mounted display (HMD), or a multi-screen projection system (Salisbury, Farr, & Moore, 1999). Also, we may quite often need to show different views of the created universe at the same time for certain applications. Even for the case of a single PC monitor, showing different views of the same objects in different windows will be instructive and informative, and may be essential in some cases. While we have been using a single simple view in earlier chapters, Java 3D has inherent capabilities to give multiple views of the created 3D world for supporting, say, the use of head-tracking HMD systems for users to carry out 3D navigation (Yabuki, Machinaka, & Li, 2006). In this chapter, we will discuss how multiple views can be readily generated after outlining the view model and the various components that make up the simple universe view used previously.
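As a sketch of the general view model, a second view can be created by placing another ViewPlatform in the scene graph and attaching to it a View with its own canvas, physical body, and physical environment; the viewing position chosen below is an arbitrary assumption:

```java
import java.awt.GraphicsConfiguration;
import javax.media.j3d.*;
import javax.vecmath.Vector3f;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class SecondViewDemo {
    // A branch holding a second ViewPlatform, positioned 5 m back on the z-axis.
    public static BranchGroup makeViewPlatformBranch(ViewPlatform vp) {
        BranchGroup branch = new BranchGroup();
        TransformGroup tg = new TransformGroup();
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3f(0f, 0f, 5f));
        tg.setTransform(t);
        tg.addChild(vp);
        branch.addChild(tg);
        return branch;
    }

    // A View needs a physical body and environment, and a platform to sit on.
    public static View makeView(ViewPlatform vp) {
        View view = new View();
        view.setPhysicalBody(new PhysicalBody());
        view.setPhysicalEnvironment(new PhysicalEnvironment());
        view.attachViewPlatform(vp);
        return view;
    }

    public static void main(String[] args) {
        SimpleUniverse universe = new SimpleUniverse();   // first (standard) view
        ViewPlatform vp = new ViewPlatform();
        universe.addBranchGraph(makeViewPlatformBranch(vp));
        GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
        Canvas3D secondCanvas = new Canvas3D(config);     // place in any AWT frame
        makeView(vp).addCanvas3D(secondCanvas);           // second rendered view
    }
}
```

Both canvases then render the same universe from independent viewpoints, which is the mechanism a multi-window or HMD configuration builds on.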


Author(s): Chi Chung Ko, Chang Dong Cheng

How the properties of virtual 3D objects can be specified and defined has been discussed in earlier chapters. However, how a certain virtual object will appear to the user will in general also depend on human visual impression and perception, which depend to a large extent on the lighting used for illumination. As an example, watching a movie in a dark theatre and under direct sunlight will give rise to different feelings and levels of immersion even though the scenes are the same. Thus, in addition to defining the skeleton of a virtual object by using geometry objects in Java 3D in Chapter III, setting the appearance attributes in Chapter IV, and applying texture in Chapter V to give a realistic skin to the virtual object, appropriate environmental elements such as light, background, and even fog are often necessary to make the virtual object appear as realistic to the user as possible. In this chapter, we will discuss topics related to these environmental issues. The use of proper lighting is crucial to ensure that the 3D universe created feels realistic and creates a strong emotional impression in any application. For this purpose, Java 3D has a variety of light sources that can be selected and tailored to different scenarios. Technically, light rays are not rendered. In fact, their effects only become visible once they hit an object and are reflected to the viewer. Of course, as with any object in the real world, the reflection depends on the material attributes of the objects. In this chapter, we will discuss the use of different types of light sources and their effects after describing the lighting properties or materials of visual objects. We will then outline the use of fogging techniques to turn a hard and straight computer image into a more realistic and smoother scene, before discussing methods for immersing active visual objects in a background.
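A compact sketch of these environment nodes, combining an ambient light, a directional light, a background colour, and linear fog in one branch; all colours, directions, and distances are illustrative assumptions:

```java
import javax.media.j3d.*;
import javax.vecmath.*;

public class EnvironmentDemo {
    public static BranchGroup buildEnvironment() {
        BranchGroup root = new BranchGroup();
        Bounds bounds = new BoundingSphere(new Point3d(), 100.0);

        // Uniform base illumination with no direction.
        AmbientLight ambient = new AmbientLight(new Color3f(0.2f, 0.2f, 0.2f));
        ambient.setInfluencingBounds(bounds);
        root.addChild(ambient);

        // A "sun"-like light with parallel rays along the given direction.
        DirectionalLight sun = new DirectionalLight(
            new Color3f(1.0f, 1.0f, 0.9f), new Vector3f(-1f, -1f, -1f));
        sun.setInfluencingBounds(bounds);
        root.addChild(sun);

        // A solid sky-blue background behind all visual objects.
        Background bg = new Background(new Color3f(0.4f, 0.6f, 0.9f));
        bg.setApplicationBounds(bounds);
        root.addChild(bg);

        // Linear fog: objects fade to the fog colour between 2 m and 30 m.
        LinearFog fog = new LinearFog(new Color3f(0.4f, 0.6f, 0.9f), 2.0, 30.0);
        fog.setInfluencingBounds(bounds);
        root.addChild(fog);

        return root;
    }
}
```

Note that the lights only have a visible effect on shapes whose Appearance carries a Material object, which supplies the reflection properties mentioned above.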


Author(s): Chi Chung Ko, Chang Dong Cheng

Of all the human perceptions, two of the most important are perhaps vision and sound, for which we have developed highly specialized sensors over millions of years of evolution. The creation of a realistic virtual world therefore calls for the development of realistic 3D virtual objects and sceneries supplemented by associated sounds and audio signals. The development of 3D visual objects is of course the main domain of Java 3D. However, as in watching a movie, it is also essential to have realistic sound and audio in some applications. In this chapter, we will discuss how sound and audio can be added and supported in Java 3D. The Java 3D API provides some functionalities to add and control sound in a 3D spatialized manner. It also allows the rendering of aural characteristics for the modeling of real-world, synthetic, or special acoustical effects (Warren, 2006). From a programming point of view, the inclusion of sound is similar to the addition of light. Both are the result of adding nodes to the scene graph for the virtual world. The addition of a sound node can be accomplished through the abstract Sound class, under which there are three subclasses: BackgroundSound, PointSound, and ConeSound (Osawa, Asai, Takase, & Saito, 2001). Multiple sound sources, each with a reference sound file and associated methods for control and activation, can be included in the scene graph. The relevant sound becomes audible whenever the scheduling bounds associated with the sound node intersect the activation volume of the listener. By creating an AuralAttributes object and attaching it to a Soundscape leaf node for a certain sound in the scene graph, we can also specify the use of certain acoustical effects in the rendering of the sound. This is done by using the various methods to change important acoustic parameters in the AuralAttributes object.
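A minimal sketch of a spatialized sound node along these lines; the gain, position, and bounds are illustrative, and the sound file name is a hypothetical placeholder, not a resource from the book:

```java
import javax.media.j3d.*;
import javax.vecmath.*;

public class SoundDemo {
    // A point source whose loudness falls off with distance from its position.
    public static PointSound makePointSound() {
        PointSound sound = new PointSound();
        sound.setInitialGain(0.8f);
        sound.setPosition(new Point3f(0f, 0f, 0f));
        sound.setLoop(Sound.INFINITE_LOOPS);   // repeat indefinitely
        // Audible only when these bounds intersect the listener's
        // activation volume, as described above.
        sound.setSchedulingBounds(new BoundingSphere(new Point3d(), 10.0));
        sound.setEnable(true);
        return sound;
    }

    public static void main(String[] args) {
        PointSound sound = makePointSound();
        // "chime.wav" is a placeholder; supply any audio file reachable as a URL.
        sound.setSoundData(new MediaContainer("file:chime.wav"));
        // The node is then added to the content branch of the scene graph,
        // like a light, before the branch is made live.
    }
}
```

BackgroundSound is set up the same way but has no position, giving unattenuated ambient audio throughout its scheduling bounds.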

