
VIRTUAL ENVIRONMENT FOR ASSEMBLY OPERATIONS WITH IMPROVED GRASP INTERACTION

Gaonkar, R., Madhavan, V. and Zhao, W.
Department of Industrial and Manufacturing Engineering
Wichita State University
120 Engineering Building
Wichita, Kansas 67260
Corresponding author's e-mail: [email protected]

Abstract: An immersive virtual reality environment (IVE) for performing assembly and maintenance simulations has been developed using the Jack® software package. The Flock of Birds™ motion tracking system is used to capture the body posture of an immersed human and reproduce it in real time in the virtual environment. The Cyberglove™ is used to capture finger movements in real time for realistic grasp interaction. Many of the virtual interaction and navigation techniques available within the Jack environment, such as the grasp interaction, have been further developed and refined. Grasping using the palm, as well as pinch-type grasping, is supported. The object about to be grasped is highlighted, allowing the user to verify that it is indeed the object he intends to grasp. A program has been developed to transfer the postures captured during immersive virtual reality (IVR) sessions in Jack into IGRIP®. These capabilities have been used to investigate a new "to-be" scenario for the mid-spar workstation of the Boeing 767 strut torque box assembly line.

1. INTRODUCTION

Digital manufacturing technology has made it possible to evaluate the manufacturability of proposed product designs during the design stage itself. Virtual reality technology, in conjunction with digital manufacturing, can assist in further validation of product or assembly designs with regard to human factors and ergonomics. IGRIP® (Interactive Graphics Robot Instruction Program) and Jack® are two software tools used by industry to develop simulations that include digital humans. These tools require the user to manually fine-tune the joints of the digital humanoid to achieve fully realistic postures. Though several features to facilitate the positioning process are implemented in these packages, positioning the humanoids requires expertise and patience, since humanoids are complex inverse kinematic devices. Apart from these difficulties, one has to validate the postures as being close to reality, which is very difficult to prove. All these issues are naturally resolved by using an immersive virtual reality environment to create key postures for use in simulations.

Though virtual reality technology is perceived by some to be immature and expensive, many industries use it as part of their established business practices. Many applications of virtual reality technology, in the games and entertainment industries (Badique et al., 2002) and in architecture, medicine and education (Goebel, 1996), have been described. In engineering, VR is widely used to create digital prototypes to save the time and money spent on manufacturing physical prototypes (Nwoke and Nelson, 1993). For more than a decade, Caterpillar Inc. has been using virtual reality technology for evaluating visibility of the outside environment from the cabs of their machinery (NCSA VR Applications, 1993). A Silicon Graphics Inc. (SGI) computer was used to generate the virtual world, and engineers used a head mounted display simulating 360 degrees of vision to evaluate view obstructions. This was found to save time in evaluating new designs compared to using a physical prototype of the machinery.
Similarly, virtual reality devices like the DataGlove™ and EyePhone™, developed by VPL Research Inc. and running on SGI computers, have been used by engineers at Ford to evaluate the feasibility of automotive assembly processes, ergonomics, and human factors issues such as reach and accessibility (Ressler, 1994). Sung and Ou (2003) have discussed many ways in which the Virtual Reality Modeling Language (VRML) can be used for the design of manufacturing systems. The authors developed VC++ based reusable virtual reality modules for visualization and proposed a framework for web-based manufacturing simulations using virtual reality models. Korves and Loftus (1999) have described a framework for an immersive virtual reality system to be used for planning and implementing manufacturing cells. Virtual reality technology, including a haptic interface, has been used for maintainability analysis of Rolls-Royce aircraft engines weighing more than 35,000 lbs (Savall et al., 2002). Jayaram et al. (1999) have developed a virtual reality based application to facilitate the planning, evaluation and verification of a mechanical assembly system. Boud et al. (1999) conducted experiments to compare assembly tasks performed in real and virtual reality environments. They found that virtual reality assembly tasks are performed better when supplemented with tactile feedback, because it enhances the feeling of presence in the virtual environment. They also found that virtual reality technology can be successfully used to train employees to perform assembly tasks. Chryssolouris et al. (2000) conducted an ergonomic analysis of the assembly of a high speed boat propeller using motion tracking systems and proposed a semi-empirical time model based on statistical design-of-experiments methods. Interaction techniques that can be used during virtual assembly tasks, such as voice commands and data glove, 3D menu and data glove, and keyboard interaction by a third person, were evaluated by Gomes and Zachmann (1999). The authors concluded that the use of voice commands was the most preferred and effective interaction technique, based on the responses of the users. Mo et al. (2002) have applied a virtual disassembly technique to improve product design for disassembly. The authors developed a virtual reality framework consisting of multisensory input/output for manual 3D interaction with virtual objects, to create, edit and simulate disassembly paths and sequences in the virtual world.

1.1 Virtual reality applications involving complete humanoids

Very few virtual reality applications use full digital humanoids. The use of full digital humanoids is a necessity for addressing considerations of reach, safety, ergonomics and the ability to negotiate around and through large objects. Companies like Lockheed Martin (Wampler et al., 2003) and John Deere (Cerney et al., 2002) use the full digital humanoids provided by Jack and IGRIP to evaluate maintainability and human factors in the design of manufacturing systems. Lockheed Martin uses IGRIP to evaluate human factors and maintainability in the design of F16 fighter planes (Abshire and Barron, 1998), whereas John Deere uses Jack similarly. Lockheed Martin developed a simulation with digital humans performing the disassembly task of removing the Pratt and Whitney engine from a Joint Strike Fighter plane, to convince the US Navy that the engine could be removed more easily in a different way than that stipulated (Thornton, 2001). The same simulation was used as a training module for the actual maintenance crew, who completed the task in about 3 hours. The crew credited the simulation with enabling them to carry out the task easily. Design changes to improve access and reachability were also suggested by the maintenance crew after reviewing the simulations. These simulations were developed in the IGRIP software. The first simulation required nine months to develop, mainly because of the time required for manual modeling of thousands of postures of the four-member maintenance crew. Kibira and McLean (2002) have applied virtual reality simulation to the design of production lines. The authors simulated both the discrete events of the overall process flow and the detailed operations of the operator, using the Quest® and IGRIP software respectively. They concluded that development of the line-level process simulation was easier than the detailed workstation-level operations simulation, which was found to be time consuming and tedious on account of the need to adjust the postures of the humanoids. A new approach for developing such simulations faster, with the aid of immersive virtual reality technology, is addressed in this work.
2. DESCRIPTION OF THE IMMERSIVE VIRTUAL REALITY SETUP

As noted above, a full-sized digital humanoid is necessary in simulations aimed at the design and development of assembly operations, for addressing human-related factors like reachability, visibility and ergonomic feasibility. However, an immense amount of time and energy is required to position the digital humanoid in realistic postures. The positioning of the digital humanoid can be made more realistic, and can be achieved faster, if driven by motion tracking of an immersed operator doing the tasks. We use Jack, the human simulation and ergonomic analysis software marketed by UGS Corporation, as the software in which the virtual environments are developed. Jack not only has interfaces to a range of VR devices and many built-in functions to facilitate IVR sessions, but also makes the source code of many of these capabilities available for modification. The extensive APIs available in TCL-TK and Python permit users to easily customize existing software functionality. The computer used is an HP xw4100 workstation with 1.0 GB RAM, a 3.2 GHz Intel Pentium 4 processor and an NVIDIA 980 XGL stereo-enabled graphics card with 128 MB memory. The head mounted display (HMD) used is an i-glasses PC/SVGA Pro 3D™, developed and marketed by IO Display Systems Inc. This HMD has a resolution of 800x600 at 100 Hz and supports stereo vision. Jack can map the motion of immersed operators onto a humanoid and provide stereoscopic views from the humanoid's eyes. The built-in tools for motion analysis, realistic behavioral control, anthropometric scaling, task animation and evaluation, view analysis, automatic reach and grasp, collision detection, etc., provide an ideal platform for immersive virtual reality based assembly and maintenance design.

Figure 1: The experimental setup and typical third and first person views provided by Jack

The Flock of Birds (FOB)™ electromagnetic motion tracking system is used to track the position and orientation of key segments of the immersed operator. The FOB system comprises an extended-range controller (master bird) connected to the host computer and to a set of slave birds. Each of the birds individually tracks the position of one six-degree-of-freedom sensor. An extended-range transmitter, which is connected to the extended-range controller, emits a known pulsed DC magnetic field 120 times per second. The magnetic field measured by each sensor is used to compute the six degrees of freedom giving its position and orientation relative to the transmitter. The transmitter, a 12-inch cube, is suspended from the ceiling 8 feet above the ground, and the operating range of the sensors is a hemisphere of 8-foot radius with the transmitter at its center. In our environment, five sensors are used to track the position and orientation of five parts of the human immersed in the virtual world. Two of the five sensors track the right and left palms, the third tracks the head position and orientation, the fourth tracks the pelvis, and the last one is attached to the back of the neck to estimate the curvature of the spine. The sensor positions and orientations are indicated within Jack by figures called Birds, each of which looks approximately like one of the sensors. The position and orientation of each of these sensors is interpreted by Jack as being relative to a cube named FOB_trans_icon, which can be positioned by the user as required.

Within the virtual environment in Jack, we create a humanoid scaled to represent the body proportions of the immersed human operator. The palms, head, pelvis and back of the neck of the humanoid are constrained to follow the locations of the five sensors tracking the corresponding locations on the actual immersed human operator. The locations of the parts that are not tracked, for instance the elbows, are computed by Jack using inverse kinematics algorithms. Since we do not have any sensor below the pelvis, the legs are simply translated with the pelvis; thus, this environment only tracks the posture from the waist upward. As the immersed user's posture changes with the movements of his/her body parts, the motion of the sensors attached to the body parts is passed to Jack, which in turn updates the position and posture of the humanoid. When the humanoid is scaled to the correct dimensions of the immersed user, the postures of the humanoid accurately mimic those of the immersed user.

The Cyberglove™, developed and marketed by Immersion Corporation, provides 22 high-accuracy finger joint angle measurements. These joint angles are mapped onto the immersed humanoid to enable fine grasping tasks during virtual assembly operations. The Cyberglove needs to be calibrated, to define the gain and offset values for each sensor, at least once for each individual. The calibration is done by visually comparing various hand gestures of the immersed human with those of the Jack humanoid. The calibration data for each individual is saved for future use and is used to modify the data output by the Cyberglove in real time. Figure 1 shows a picture of our setup, as well as the third and first person views provided by Jack.
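The per-sensor gain and offset form a simple linear model, sketched below in Python for illustration. This is not the actual Jack/Cyberglove calibration interface; the sensor name, the two reference gestures (flat hand and closed fist) and the target joint angles are assumptions made for the example.

    # Illustrative sketch of a per-sensor gain/offset calibration for a data glove.
    # Sensor names, reference gestures and target angles are hypothetical.
    def calibrate(raw_flat, raw_fist, angle_flat, angle_fist):
        """Fit angle = gain * raw + offset for each sensor from two reference gestures."""
        calibration = {}
        for sensor in raw_flat:
            gain = (angle_fist[sensor] - angle_flat[sensor]) / (raw_fist[sensor] - raw_flat[sensor])
            offset = angle_flat[sensor] - gain * raw_flat[sensor]
            calibration[sensor] = (gain, offset)
        return calibration

    def apply_calibration(raw_sample, calibration):
        """Convert a raw glove sample to joint angles (degrees) in real time."""
        return {s: gain * raw_sample[s] + offset for s, (gain, offset) in calibration.items()}

    # Example: a hypothetical index-finger sensor reading 40 counts when flat (0 degrees)
    # and 210 counts when fully flexed (90 degrees); a reading of 125 then maps to about 45 degrees.
    cal = calibrate({"index_pip": 40}, {"index_pip": 210}, {"index_pip": 0.0}, {"index_pip": 90.0})
    print(apply_calibration({"index_pip": 125}, cal))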
3. DEVELOPMENT OF THE VIRTUAL ENVIRONMENT FOR ASSEMBLY DESIGN

Certain preparatory tasks are required for the development of an immersive virtual environment in Jack. These include the importing and clean-up of the geometry that describes the assembly workstation, tools and parts; scaling of the humanoid to match the body proportions of the immersed user; choosing the objects that need to be considered by the grasp and collision detection routines; modeling any behaviors of the tools that need to be included; and planning the virtual interactions based on the assembly sequence being considered.
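To make these preparatory inputs concrete, the following hypothetical session description collects them in one place; the file names, measurements and object lists are invented for the example, and the two thresholds anticipate the grasp parameters discussed in Section 4.

    # Hypothetical description of one IVR session's preparatory inputs (illustrative only).
    session = {
        "geometry": {
            "workstation": "mid_spar_jig.pss",            # imported and decimated CAD geometry
            "parts": ["mid_spar.pss", "fitting_1.pss"],
            "tools": ["drill_motor.pss"],
        },
        "humanoid_scaling_cm": {                           # measurements of the immersed user
            "stature": 178.0,
            "acromion_to_radiale": 33.5,                   # shoulder to elbow
            "radiale_to_dactylion": 48.0,                  # elbow to fingertip
        },
        "grasp": {
            "grab_list": ["fitting_1", "drill_motor"],
            "drop_list": ["fitting_1"],
            "palm_separation_threshold_cm": 5.0,
            "fingertip_separation_threshold_cm": 3.0,
        },
        "behaviors": {"FAJ_button": "rotate_FAJ", "drill_trigger": "play_drill_sound"},
    }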

[Figure 2 panels: original geometry, 27420 triangles; after Jack Decimator, 7105 triangles; after Geomagic Decimate, 3910 triangles]
Figure 2: Reduction in polygon count using Jack Decimator and Geomagic Decimate

Jack supports importing a wide range of geometry file types, including CATIA, IGRIP and VRML files. After importing these files into Jack, the geometry can be saved as Peabody (.pss) surface files. It is very important to use the correct units while importing CAD geometry; if not, the object will be scaled by the ratio of the units. An IVR session with a frame rate of at least 14 frames per second (fps) is found to be satisfactory. If the frame rate drops below 14 fps, it is usually due to complex geometry. Jack has a native geometry simplification utility, called Jack Decimator, that can be used to reduce the polygon count, typically by replacing adjacent small polygons having substantially the same normal direction with larger polygons. We also use another commercially available geometry simplification tool, Geomagic Decimate, which has the added flexibility, compared to Jack Decimator, of allowing users to make the above-mentioned modifications only on selected regions of the geometry. Figure 2 shows the result of using Jack Decimator to reduce the polygon count of a mid-spar assembly from 27420 triangles to a simplified surface with 7105 triangles, and the further reduction of the polygon count in the indicated regions from a total of 5324 triangles to 2129 triangles using Geomagic Decimate.

Surface errors, in the form of reversed normals, lead to malfunctioning of Jack's minimum distance routine, which is an important part of the grasp routine. While both Jack Decimator and Geomagic Decimate can "clean" the geometry by removing identical vertices and inverting normals, they cannot handle occasional flipped normals. It was found that simply importing the part into 3DS Max and exporting it back as a .wrl file corrects most errors with normals.

Accurate scaling of the Jack humanoid is necessary to obtain realistic postures. Jack provides an advanced humanoid scaling function that lets the user scale every segment of the Jack humanoid. We found the shoulder-to-elbow distance (between the Acromion and Radiale landmarks), the elbow-to-fingertip distance (between the Radiale and Dactylion landmarks) and the stature (overall height from the top of the head to the floor) to be the important measurements for accurate reproduction of the postures adopted. Additional benefits were observed by adjusting the inputs to the tracking constraints, namely the relative weight of each constraint, the relative importance of position and orientation, and the position and orientation offsets. For instance, for realism of the stereo views generated by Jack, the view direction is very important. This requires that the relative weight of the head constraint be increased and the position-orientation weight parameter be modified in favor of orientation. Enabling the position and orientation offsets causes Jack to assume that the immersed user and the humanoid are initially in the same posture, to attribute the offsets between the position and orientation of the FOB sensor and the sensor site on the humanoid to inaccurate placement of the sensors, and to use these offsets to modify the sensor location to obtain the target location for the sensor site.
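The offset correction just described can be summarized with homogeneous transforms. The sketch below is a minimal illustration of the idea, assuming 4x4 world transforms for the sensor and the sensor site; it is not Jack's constraint API.

    # Minimal sketch of the position/orientation offset correction described above.
    # sensor_initial, site_initial and sensor_now are 4x4 homogeneous world transforms.
    import numpy as np

    def compute_offset(sensor_initial, site_initial):
        """Offset mapping the sensor frame onto the humanoid sensor-site frame,
        assuming the user and the humanoid start in the same posture."""
        return np.linalg.inv(sensor_initial) @ site_initial

    def target_for_site(sensor_now, offset):
        """Target transform for the sensor site, given the current sensor reading."""
        return sensor_now @ offset

    # At the initial instant the target coincides with the site's initial transform,
    # so any misplacement of the physical sensor is absorbed into the offset.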
Tasks that are needed during the IVR interaction include various pick-and-place tasks, navigation through the environment and capturing of the key postures adopted. A companion paper (Zhao and Madhavan, 2005) discusses the implementation of voice commands for navigation and posture capture, among other things. The next section describes key improvements made to the grasp functionality provided by Jack, to provide visual identification of the object about to be grasped, to enable behaviors, to highlight collisions, etc.

4. IMPROVEMENTS TO THE GRASP INTERACTION

Jack simulates the grasping of an object by the humanoid by attaching the object to the palm of the humanoid if certain criteria are satisfied. In 'automatic grasp' mode, if (i) the distance between two fingertip sites is less than the "Fingertip separation threshold" AND (ii) the distance between the palm and the closest object is less than the "Palm separation threshold", then the closest object is grasped. The implementation checks the second condition only if the first is satisfied, and is therefore unable to indicate, prior to the actual act of grasping, which of the objects in the scene will be grasped. This sequence was presumably chosen because of concerns that the computational effort for the closest-distance computation could slow down the grasp loop.

From our initial experience it was determined that a method to visually indicate the object that would be grasped, should the user choose to do so, would greatly improve the ease of pick-and-place interactions. This implies that the identification of the object to be grasped and the actual grasping process need to be separated. We have accomplished this in a natural manner by determining the object to be grasped as the one that is closest to the user's palm, and then grasping it if the distance between the chosen finger sites is less than a user-determined threshold value, i.e., by reversing the order in which the two conditions for grasping are verified. This is shown in the flowchart of the modified jsGrabCheck command, which is executed at every simulation update, in Figure 3. Now, the closest object is determined first. If the palm separation threshold condition is satisfied, the closest object is highlighted in bright yellow. If the immersed user verifies that the highlighted object is indeed the one he is attempting to grasp, he can grasp it by bringing his fingers together (i.e., satisfying the fingertip separation threshold condition). If the highlighted object is not the object the user is seeking to grasp, he can de-highlight it by moving his hand away from the object. When the hand is moved away, the palm separation threshold condition is violated and the closest object is returned to its original color (the original colors of all the objects for which grasping has been enabled through the GUI are stored at the beginning of the grasp interaction).

Additionally, the original grasp routine causes multiple objects to become attached to the humanoid's hand if another object becomes the closest object while one object is already grasped. As can be seen from the flowchart, we have changed the program logic such that, irrespective of the object grasped, the closest object is highlighted, and irrespective of which object is closest, the grasped object continues to be the only one grasped. The minimum distance function, which is used to determine the object closest to the humanoid's palm, returns a positive value if the palm is not penetrating the object and a negative value if it is. As can be seen from Figure 3, we have introduced an additional criterion by which objects that are at a distance of less than -1.0 cm from the palm are not grasped. This provides a new capability of being able to grasp objects that are inside other objects, such as open boxes.

As noted earlier, if the list of objects that can be grasped is large, determining the closest object could potentially slow the grasp loop down to unacceptably low update rates. To avoid this, a bucket sorting technique has been implemented: it is not necessary at each update to check the minimum distance to every object in the grasp list. A bucket of objects, each of which is within a certain multiple of the palm separation threshold of the humanoid's palm center site, is created periodically. In typical assembly simulations it is sufficient to keep the time interval between bucket updates at one second. This limits the minimum distance calculation to only a few objects, thereby alleviating concerns about the update rate of the grasp loop. We have also modified the grasp routine such that only one object can be grasped at any given time.
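The reordered logic is compact enough to sketch in code. The following self-contained Python sketch mirrors the flow described above and in Figure 3, with geometry reduced to spheres so that the minimum distance can be computed directly; in the actual environment these operations map onto Jack's minimum-distance, highlight and attach routines, which are not reproduced here.

    import math

    PALM_SEP_THRESHOLD = 5.0        # cm, user-settable
    FINGERTIP_SEP_THRESHOLD = 3.0   # cm, user-settable
    PENETRATION_LIMIT = -1.0        # cm; deeper penetrations are never grasped

    class SceneObject:
        def __init__(self, name, center, radius):
            self.name, self.center, self.radius = name, center, radius
            self.highlighted = False
            self.attached = False

        def distance_to(self, point):
            """Signed distance from a point to the object surface (negative = penetrating)."""
            return math.dist(point, self.center) - self.radius

    def grasp_check(bucket, palm_pos, fingertip_gap, state):
        """One pass of the modified grasp check, run at every simulation update.
        bucket: objects pre-selected by the periodic bucket sort around the palm.
        state:  dict carrying 'closest' and 'grabbed' between updates."""
        # 1. Determine the closest graspable object first (reversed order of the criteria).
        best, best_dist = None, float("inf")
        for obj in bucket:
            d = obj.distance_to(palm_pos)
            if PENETRATION_LIMIT < d < best_dist:
                best, best_dist = obj, d

        # 2. Highlight the candidate so the user can confirm it before grasping.
        prev = state.get("closest")
        if prev is not None and prev is not best:
            prev.highlighted = False           # restore the previous candidate's color
        if best is not None:
            best.highlighted = best_dist < PALM_SEP_THRESHOLD
        state["closest"] = best

        # 3. Grasp only when the fingertips close and nothing is already grabbed.
        grabbed = state.get("grabbed")
        if (grabbed is None and best is not None
                and best_dist < PALM_SEP_THRESHOLD
                and fingertip_gap < FINGERTIP_SEP_THRESHOLD):
            best.attached = True
            state["grabbed"] = best
        # 4. Release when the fingertips move apart (drop handling is described below).
        elif grabbed is not None and fingertip_gap > FINGERTIP_SEP_THRESHOLD:
            grabbed.attached = False
            state["grabbed"] = None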
The grasped object is released if the fingertips are moved apart, and the dropping of the object to the floor is simulated if the object is included in the 'Drop figure list'. The grasp routine has also been modified to enable behaviors, such as rotation of Floor Assembly Jigs (FAJs) about their axis in response to a finger site closing in on a button, and to provide auditory feedback, such as playing a drilling sound when the index finger joint value exceeds a threshold value. When the name of the closest object matches one of the names in a predefined list, the corresponding behavior is triggered. The ability to do this is yet another benefit of rearranging the grasp criteria as described above. Behaviors assist not only in performing virtual assembly tasks but also in enhancing the realism of the virtual world.

Jack has a built-in collision detection feature that can use one of five different algorithms to detect collisions between any pair of objects. This feature is used to highlight segments of the part being assembled, or segments of the humanoid, if they collide with the fixture or with other parts already assembled. This is useful when accessibility and reach, especially through restricted access zones, are of interest. It has also been used to develop a capability to stack objects when grasped objects are released. The original grasp routine is programmed in such a way that if an object in the DropList is released, it drops to the floor irrespective of any objects obstructing its path. We use the collision detection routine in parallel with the drop routine to halt the drop simulation if a collision is detected and then reposition the object. Similarly, other desired behaviors, such as audible warnings when the operator tries something wrong, can be achieved by utilizing collision detection. While the above discussion assumed palm-type grasping, we have developed an alternate approach of using the finger sites themselves to find the closest object, thereby enabling pinch-type grasping. The type of grasping can be changed as needed by the immersed user, at any point in a session, using voice commands.
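A minimal sketch of the collision-aware drop described above is given below; the axis-aligned box overlap test stands in for Jack's built-in collision detection, and the step size and object representation are assumptions made for the example.

    # Sketch of the collision-aware drop: instead of letting a released object fall
    # straight to the floor, it is lowered stepwise and the drop is halted (and the
    # object repositioned) as soon as a collision is detected, so released objects
    # can stack on obstructions below them.
    DROP_STEP = 1.0  # cm lowered per simulation update

    class Box:
        """Axis-aligned box: center (x, y, z) and half-extents (hx, hy, hz)."""
        def __init__(self, center, half_extents):
            self.center = list(center)
            self.half = half_extents

    def overlaps(a, b):
        return all(abs(a.center[i] - b.center[i]) < a.half[i] + b.half[i] for i in range(3))

    def drop_step(obj, scene, floor_z=0.0):
        """Lower obj by one step; return True once it comes to rest."""
        obj.center[2] -= DROP_STEP
        if obj.center[2] - obj.half[2] <= floor_z:              # reached the floor
            obj.center[2] = floor_z + obj.half[2]
            return True
        if any(other is not obj and overlaps(obj, other) for other in scene):
            obj.center[2] += DROP_STEP                          # back off above the obstruction
            return True
        return False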

[Figure 3 flowchart summary: the user specifies the fingertip sites (FTS), fingertip separation threshold (FST), palm separation threshold (PST), the list of figures to drop (DropList), the list of figures to grab (GrabList) and the grab type. At each update, any released figure in the drop list is lowered until a collision is detected or the floor is reached. A periodic bucket sort pre-selects figures near the palm; for each figure in the GrabList the minimum distance to the palm (or fingertips) is computed, and the closest figure with distance greater than -1.0 cm is identified. When the closest figure changes, the previous one is de-highlighted; if the closest distance is below the PST the figure is highlighted, otherwise it is de-highlighted. If the fingertip distance falls below the FST and no figure is currently grabbed, the closest figure is attached to the appropriate site; when the fingertips separate beyond the FST, the grabbed figure is released and, if it is in the drop list, dropped.]
Figure 3: Flowchart of the modified grasp routine, showing the logic by which the object to be grasped is determined and highlighted prior to grasping

5. MAPPING OF KEY POSTURES FROM JACK TO IGRIP

Since our industrial partners at Boeing and Cessna Aircraft Companies use IGRIP/ERGO for most of their simulations, we have developed a program to export the posture sequences captured in Jack into IGRIP. The program is written in GSL (Graphical Simulation Language, the API language for IGRIP) and reads the joint angles of the Jack figure from the channelset files in which the posture sequences are saved. For each posture, the IGRIP humanoid's joint angles are calculated from the joint angles of the Jack humanoid, taking into consideration the mapping between the degrees of freedom as well as scale factors and offsets as appropriate. For instance, while the Jack humanoid has 15 joints between vertebrae representing the spine, the IGRIP humanoid has only six, all of which are constrained to have the same joint angles, effectively reducing the degrees of freedom of the spine to three (torso bend, lean and twist). The posture mapping program sums the joint angles along the spine to obtain the total bend, lean and twist angles and applies these to the torso bend, lean and twist degrees of freedom. This is found to be adequate for almost all of the postures commonly encountered. Corresponding to each posture, a tag point is created, with aux_axes values set to the values of the joint degrees of freedom for the ERGO humanoid. Paths are created corresponding to each channelset file (posture sequence). Since the mapping of postures cannot be exact, owing to the differences between the degrees of freedom of the humanoids in Jack and IGRIP, additional means have to be devised to ensure that the exact position and orientation of important sites, such as the position and orientation of the palm site and the orientation of the head, can be reproduced in IGRIP.
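The spine mapping step reduces to a summation, sketched below; the joint ordering and angle convention are assumptions made for illustration, and in the actual program the angles are read from the channelset files written during posture capture.

    def spine_to_torso(jack_spine_joints):
        """Collapse the 15 Jack inter-vertebral joints into the three IGRIP torso DOFs.
        jack_spine_joints: sequence of 15 (lean, bend, twist) tuples in degrees."""
        total_lean = sum(lean for lean, bend, twist in jack_spine_joints)
        total_bend = sum(bend for lean, bend, twist in jack_spine_joints)
        total_twist = sum(twist for lean, bend, twist in jack_spine_joints)
        return total_lean, total_bend, total_twist

    # Example: 15 joints each contributing 2 degrees of bend give a torso bend of 30 degrees.
    print(spine_to_torso([(0.0, 2.0, 0.0)] * 15))   # (0.0, 30.0, 0.0)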


Figure 4: Mapping of postures from Jack to IGRIP

Figure 5: Snapshot from “to-be” simulation developed using IVR

We use the following approach. The capture program records not only the joint angles of the humanoid but also the absolute position and orientation of the palm center at each posture. Tag points are created at the position of the palm center in each of these postures, at the appropriate orientation. The "Reach for tag point" command is then used to make the approximately positioned humanoid reach for the exact position and orientation of the palm center recorded in Jack, thereby making these exactly the same as in Jack. A similar procedure can be implemented to improve the accuracy of other degrees of freedom, such as the view direction. Figure 4 shows an example of a posture mapped from Jack to IGRIP. It should be noted that, since large rotations are being dealt with, the sequence in which the rotations are carried out is important. The sequence of rotations in Jack (lean (x), bend (y), twist (z)) is different from that in IGRIP (twist (z), bend (y) and lean (x)), and so the correct twist, bend and lean angles to be used in IGRIP are calculated using trigonometric relations derived using quaternion arithmetic (Coutsias and Romero, 1999). This is especially important for the palm center positions used in the "Reach for tag point" command.
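A rotation-matrix sketch of this conversion is given below; the paper derives the relations with quaternion arithmetic, and the composition order assumed here (world-fixed axes, lean applied first in Jack, twist extracted first for IGRIP) is an assumption that must be checked against both packages.

    import numpy as np

    def rx(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def ry(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def jack_to_igrip(lean, bend, twist):
        """Convert Jack (lean x, bend y, twist z) angles, in radians, to IGRIP
        (twist z, bend y, lean x) angles describing the same rotation."""
        r = rx(lean) @ ry(bend) @ rz(twist)      # assumed Jack composition order
        bend_i = np.arcsin(-r[2, 0])             # extract angles for Rz @ Ry @ Rx
        lean_i = np.arctan2(r[2, 1], r[2, 2])
        twist_i = np.arctan2(r[1, 0], r[0, 0])
        return lean_i, bend_i, twist_i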

6. APPLICATION

The manufacturing line of the Boeing 767 strut torque box was considered as a test case for IVR-based generation of digital manufacturing simulations. The current assembly procedure ("as-is") was simulated in IGRIP without using IVR. The feasibility of moving some tasks from the skin load position, which is the bottleneck workstation, to the mid-spar drill workstation ("to-be") was studied using IVR. A typical posture during IVR interaction is shown in Figure 1. It was found that the time taken to generate the "to-be" simulation, using IVR to generate the postures, was around 60 hours, while the time taken to produce the "as-is" simulation using manual positioning of the humanoid was an order of magnitude greater. A snapshot of the "to-be" simulation is shown in Figure 5. This simulation verified the feasibility of moving a significant fraction of the tasks from the skin load position to the mid-spar drill station with minimal additional tooling, thereby better balancing the flow through the workstations.

7. CONCLUSION

A virtual environment for carrying out immersive assembly and maintenance tasks has been developed in the Jack software. This environment has many advantages for studying assembly operations, chief among them being the inclusion of the whole humanoid for studying accessibility, reach and ergonomics. The IVE includes visual feedback of objects to be grasped and of collisions, auditory feedback, voice-activated commands for navigating in the virtual world and capturing postures, behaviors such as rotation of jigs when a virtual button is pressed, etc. The Jack humanoid must be accurately scaled to the proportions of the immersed user in order to track the user's postures accurately. Further improvements, such as constraining objects to move about desired axes and planes, additional voice commands and a generalized drop command, are in progress. It should be noted that initial setup time is needed for transferring geometry, scaling the humanoid and exporting key postures. This should be considered in deciding whether IVR is the best option for a given task; for a fairly simple simulation with simple postures, IVR might not be the best option. This immersive virtual environment setup can be applied advantageously to situations with complex human postures and where the realism of postures is important.

8. ACKNOWLEDGMENT

This material is based upon work supported by the National Science Foundation under Grant No. 0125414. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

9. REFERENCES

1. Abshire, K.J. and Barron, M.K. (1998). Virtual Maintenance: Real-world Applications within Virtual Environments, IEEE Proceedings Annual Reliability and Maintainability Symposium, pp. 132-137.

2. Badique, E., Cavazza, M., Klinker, G., Mair, G., Sweeney, T., Thalmann, D. and Thalmann, N.M. (2002). Entertainment Applications of Virtual Environments, Handbook of Virtual Environments, Ed. Stanney, K., Lawrence Erlbaum Associates, New Jersey, USA.

3. Boud, A.C., Haniff, D.J., Baber, C. and Steiner, S.J. (1999). Virtual reality and augmented reality as a training tool for assembly tasks, Proceedings of the International Conference on Information Visualisation, pp. 32-36.

4. Cerney, M.M., Duncan, J.R. and Vance, J.M. (2002). Using population data and immersive virtual reality for ergonomic design of operator workstations, SAE Digital Human Modeling Conference Proceedings, pp. 107-120.

5. Chryssolouris, G., Mavrikios, D., Fragos, D. and Karabatsou, V. (2000). A virtual reality-based experimentation environment for the verification of human-related factors in assembly processes, Robotics and Computer Integrated Manufacturing, 16, pp. 267-276.

6. Coutsias, E.A. and Romero, L. (1999). The Quaternions with an Application to Rigid Body Dynamics, http://amath.colorado.edu/courses/5720/2005Sum/quaternions.pdf.

7. Goebel, M. (1996). Industrial applications of VEs, IEEE Computer Graphics and Applications, 16(1), pp. 10-13.

8. Gomes, A. and Zachmann, G. (1999). Virtual reality as a tool for verification of assembly and maintenance processes, Computers and Graphics, 23(3), pp. 389-403.

9. Jayaram, S., Wang, Y., Jayaram, U., Lyons, K. and Hart, P. (1999). VADE: A Virtual Assembly Design Environment, IEEE Computer Graphics and Applications, 19(6), pp. 44-50.

10. Kibira, D. and McLean, C. (2002). Virtual reality simulation of a mechanical assembly production line, Winter Simulation Conference Proceedings, v 2, pp. 1130-1137.

11. Korves, B. and Loftus, M. (1999). Application of immersive virtual reality for layout planning of manufacturing cells, Proceedings of the I MECH E Part B Journal of Engineering Manufacture, 213(1), pp. 87-91.

12. Mo, J., Zhang, Q. and Gadh, R. (2002). Virtual disassembly, International Journal of CAD/CAM, 2(1), pp. 29-37.

13. NCSA VR Applications (1993). NCSA VR Applications - Virtual Backhoe, http://www.ncsa.uiuc.edu/VR/VR_old/vr_app_cat.html.

14. Nwoke, B. and Nelson, D. (1993). An overview of computer simulation in manufacturing, Industrial Engineering, 25(7), pp. 43-57.

15. Ressler, S. (1994). Applying Virtual Environments to Manufacturing, http://www.itl.nist.gov/iaui/ovrt/projects/mfg/mfgvr.pdf.

16. Savall, J., Borro, D., Gil, J.J. and Matey, L. (2002). Description of a haptic system for virtual maintainability in aeronautics, Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, EPFL, Lausanne, Switzerland, pp. 2887-2892.

17. Sung, W.-T. and Ou, S.-C. (2003). Using virtual reality technologies for manufacturing applications, International Journal of Computer Applications in Technology, 17(4), pp. 213-219.

18. Thornton, J. (2001). Maintainability drives Fort Worth's joint strike fighter design: blending simulation and ingenuity, Lockheed Martin up-ends design methods in $320 billion program, Assembly Automation, 21(3), pp. 204-209.

19. Wampler, J.L., Bruno, J.M., Blue, R.R. and Hoebel, L.J. (2003). Integrating maintainability and data development, Reliability and Maintainability Symposium, pp. 255-262.

20. Zhao, W. and Madhavan, V. (2005). Integration of voice commands into a virtual reality environment for assembly design, Proceedings of the 10th Annual International Conference on Industrial Engineering Theory, Applications & Practice, Clearwater Beach, Florida, USA.