Haptics and Virtual Reality


SULEMAN KHAN

Department of Machine Design Royal Institute of Technology SE–100 44 Stockholm, Sweden.

TRITA-MMK 2008:22 ISSN 1400-1179 ISRN/KTH/MMK/R-08/22-SE

Mechatronics Lab
Department of Machine Design
Royal Institute of Technology
S-100 44 Stockholm, SWEDEN

TRITA-MMK 2008:22
ISSN 1400-1179
ISRN/KTH/MMK/R-08/22-SE

Author(s): Suleman Khan ([email protected])
Supervisor(s): Jan Wikander, Kjell Andersson
Title: Haptics and Virtual Reality
Document type: Report
Date: 2008-06-01
Sponsor(s):

Abstract

This study is part of the continuing research on haptics at the Mechatronics Lab, Department of Machine Design, at the Royal Institute of Technology (KTH). The purpose of the study was a literature review of haptics, focusing particularly on haptic interfaces and their technical specifications. A brief introduction to virtual object modelling, visual rendering, haptic rendering and the synchronization of the haptic and graphical loops is presented. Human interaction and the ability to sense motor signals are also discussed. The idea behind this literature review was to select design criteria for the development of a master haptic interface. For this purpose the available haptic interfaces (both commercial devices and research prototypes in labs) were studied and their pros and cons presented. These interfaces were compared on the basis of their technical specifications, such as footprint, actuated degrees of freedom, workspace, position resolution, stiffness, maximum and continuous force/torque, kinematic structure and cost. The parameters that affect the performance and transparency of a haptic system are also presented. The literature and figures about these devices are taken from published papers, books and websites. The developed interface will be part of the project "force reflecting teleoperator system and milling simulator for bone", used in skull base surgery. Due to the stiffness of bone, stiff contacts between the objects and the probe will be required, so specific attention is given to this issue in the design. Optimization of the performance parameters for stiffness and workspace is also under consideration for the development of this haptic interface.

Keywords: Keyword1, Keyword2, Keyword3
Language: English

Table of Contents

TABLE OF CONTENTS
LIST OF FIGURES
1. HAPTICS AND VIRTUAL REALITY
   1.1 INTRODUCTION
   1.2 HISTORY OF HAPTICS
   1.3 APPLICATION
   1.4 MOTIVATION
2. COMPUTER HAPTICS
   2.1 GEOMETRICAL MODELING
   2.2 GRAPHICAL OR VISUAL RENDERING
   2.3 PHYSICAL MODELING
   2.4 HAPTIC RENDERING
3. MACHINE HAPTICS
   3.1 INTRODUCTION
   3.2 IMPORTANT FACTORS IN THE DESIGN OF HAPTIC DEVICES
   3.3 HAPTICS DEVICES
      3.2.1 PHANToM haptic devices
      3.2.2 Haption Virtuose haptic devices
      3.2.3 FCS HapticMaster
      3.2.4 SHaDe (a spherical 3-DoF haptic device)
      3.2.5 Novint Falcon haptic devices
      3.2.6 3-DoF Planar Pantograph and 5-DoF Haptic Wand (Quanser haptic devices)
      3.2.7 5-DoF haptic device of the University of Colorado
      3.2.8 Cubic3, Freedom 6S, F7S haptic feedback devices
      3.2.9 Novel 7-DoF haptic device
      3.2.10 A 6-URS parallel haptic device
      3.2.11 Rutgers Ankle rehabilitation interface
      3.2.12 High-performance 6-DoF Cobot
      3.2.13 Omega 6-DoF haptic device
      3.2.14 6-DoF Delta and new modified Delta haptic devices
      3.2.15 6-DoF Parallel Haptic Master
      3.2.16 New 6-DoF Parallel HapticMaster
      3.2.17 New parallel haptic device for desktop applications
      3.2.18 General-purpose 6-DoF haptic device
      3.2.19 New 6-DoF parallel haptic device
      3.2.20 New Maglev haptic device
4. HUMAN HAPTICS
5. STATE OF THE ART BASED REMARKS
6. FUTURE CHALLENGES AND OPPORTUNITIES
7. REFERENCES

List of Figures

Figure 1.1 Haptic interaction loop with user and virtual environment. Mechatronics Lab [KTH University]
Figure 1.2 Haptic interaction as an interdisciplinary field of research. MIT Touch Lab [5]
Figure 1.3 Key factors of virtual reality. Mechatronics Lab [KTH University]
Figure 1.4 a) The Rutgers Ankle rehabilitation system, for patients undergoing physical therapy [6]. b) Haptic feedback in molecular simulation. Mechatronics Lab [KTH University]
Figure 1.5 Haptics can be used to simulate assembling of parts [43]
Figure 1.6 State of the haptics research at the Mechatronics Lab, KTH University [9]
Figure 2.1 Haptic interaction loop between virtual objects and real systems. Mechatronics Lab [KTH University]
Figure 2.2 Triangle mesh model to represent a virtual object surface. Mechatronics Lab [KTH University]
Figure 2.3 Surface-based model of a human hand: a) 295 vertices and 307 polygons, b) 2446 vertices and 2429 polygons [1]
Figure 2.4 a) Surface representation through parametric equations of curves [14]. b) Surface representation through an implicit equation [14]
Figure 2.5 a) Vertex of voxels. b) Volumetric-based model of a human thorax, sliced to show the interior structure [14]
Figure 2.6 a) Virtual camera modelling: (top) perspective projection and (bottom) orthographic projection. b) Lighting in virtual environments: (top) no lighting and (bottom) with lighting [14]
Figure 2.7 a) Smooth shading (left) and flat shading (right). b) Colour applied to objects. c) Texture mapping to model a bowling ball and a golf ball [14]
Figure 2.8 Graphics rendering pipeline to map a 3D virtual object into a 2D image. Mechatronics Lab [KTH University]
Figure 2.9 Haptics human-computer interaction model [14]
Figure 2.10 Representation of the interaction tool of the device ("probe") in virtual environments [9]
Figure 2.11 Haptic rendering loop representing collision detection and force feedback. Mechatronics Lab [KTH University]
Figure 2.12 Simple point-based force feedback calculation [14]
Figure 2.13 Haptics and graphics rendering synchronization [9]
Figure 3.1 a) Tactile pin array shape display by Harvard BioRobotics Lab [24]. b) PHANToM Omni [SensAble Technologies Inc] [26]
Figure 3.2 a) CyberForce [Immersion Corporation, CyberForce]. b) Exoskeleton (U-Tokyo) haptic devices [7]
Figure 3.3 a) Loop of an impedance control system. b) Loop of an admittance control system used in haptic devices [6]
Figure 3.4 Different haptic devices having different actuated degrees of freedom [5]
Figure 3.5 Different PHANToM haptic devices of SensAble Technologies Inc [26]
Figure 3.6 a) Virtuose 6D35-45. b) Virtuose 3D15-25. c) Parallel Virtuose haptic devices [6]
Figure 3.7 FCS HapticMaster (www.fcs.cs.com\robotics\)
Figure 3.8 SHaDe 3-DoF spherical haptic device [36]
Figure 3.9 Novint Falcon 3-DoF haptic device by Novint Technologies, Inc
Figure 3.10 a) 3-DoF planar and b) 5-DoF haptic wand, Quanser haptic devices [43]
Figure 3.11 5-6 DoF haptic interface of the University of Colorado
Figure 3.12 MPB Technologies haptic devices [6]
Figure 3.13 a) Schematic of a tension-based force feedback device created by Seahak Kim as part of his PhD research at the Tokyo Institute of Technology. b-c) Cables exert force on a "grip" that is positioned within the frame of the device [30]
Figure 3.14 6-URS parallel haptic device, Universidad Politecnica de Madrid, Spain, 2007 [39]
Figure 3.15 Rutgers Ankle rehabilitation interface at the CAIP Lab at Rutgers [48]
Figure 3.16 Diagram of the 6-DoF Cobot haptic device developed by Northwestern University
Figure 3.17 6-DoF Omega.6 and Delta haptic device structure [49]
Figure 3.19 a) Schematic representation of the mechanism. b) New 6-DoF parallel haptic master [52]
Figure 3.20 New parallel haptic device for desktop application [47]
Figure 3.21 General purpose 6-DoF parallel haptic device [20]
Figure 3.22 New 6-DoF parallel gimbal mechanism haptic device [55]
Figure 3.23 Using the magnetic levitation haptic interaction system [50], Robotics Institute, Carnegie Mellon University
Figure 4.1 Human perception and response (from lecture notes, Seungmoon Choi, 2007) [4]

1. Haptics and Virtual Reality

1.1 Introduction

Haptics is the science of the sense of touch; it refers to sensing and manipulation through touch. The word haptic derives from the Greek haptikos, meaning "being able to come into contact with". The study of haptics emerged from advances in virtual reality. Haptics is a recent enhancement to virtual environments, allowing users to "touch" and feel the simulated objects with which they interact. To interact with an environment there must be feedback; for example, the user should be able to touch a virtual object and feel a response from it. This type of feedback is called haptic feedback. Haptic (tactile and force) feedback systems are the engineering answer to the need for interacting with remote and virtual worlds [1]. Currently it is a less developed modality of interacting with remote and virtual worlds compared with visual and auditory feedback. In human-computer interaction, haptic feedback means both tactile and force feedback. Tactile or touch feedback is the term applied to sensations felt by the skin. Tactile feedback allows the user to feel things such as the texture of surfaces, temperature, vibration and even a grasped object's slippage due to gravity. Force feedback reproduces directional forces that can result from solid boundaries, the weight of grasped virtual objects, the mechanical compliance of an object and inertia. Haptic interaction with a virtual environment is bidirectional and symmetric [4]. The haptic feedback device senses the position and orientation X(k) of the user's hand and then provides feedback forces and torques F(k) to the user through the interaction tool or end effector of the haptic device, as shown in figure 1.1.

Figure: 1.1 Haptic interaction loop with user and Virtual Environment. Mechatronics Lab [KTH university]

Common human-computer interaction interfaces available in the market, such as trackballs, instrumented gloves and some joysticks, can only convey the user's commands (sensed motions) to the computer; they are unable to give a natural sense of touch and feel (feedback) to the user. Recent advances in the development of force-reflecting haptic interface hardware as well as haptic rendering algorithms have caused considerable excitement. The underlying technology is becoming mature and has opened up novel and interesting research areas. Haptics research is an interdisciplinary field and is generally subdivided into three main parts [3]:

• Computer haptics - the algorithms and software associated with generating and rendering the touch and feel of virtual objects (analogous to computer graphics). This topic spans object modelling and collision detection, graphical and haptic rendering, calculation of feedback responses and the synchronization of the haptic and graphic loops. It also addresses the stability of the haptic feedback interface.
• Machine haptics - the design, construction and control of the haptic device that provides the bridge between the human user and the virtual environment for bidirectional communication (interaction). The device is a mechanical system, also called an input/output haptic interface, used to replace or augment human touch.
• Human haptics - the study of human sensing and manipulation through touch. It studies the mechanical, sensory, motor and cognitive components of the hand-brain system.

Consequently, multiple disciplines such as biomechanics, virtual reality, neuroscience, psychophysics, robot design and control, mathematical modelling and simulation, and software engineering converge to support haptics. A wide variety of applications has emerged in this field, spanning many areas of human need such as product design, medical training, rehabilitation, games and entertainment.

Figure: 1.2 Haptic interaction as an interdisciplinary field of research. MIT Touch Lab [5]

Virtual reality means being in essence or effect, though not in actual fact [15][1]. Virtual reality is a form of human-computer interface technology in which a computer creates a virtual environment for user interaction and manipulation of virtual objects through visual, auditory, tactile (touch, force) and smell sensations. The field of virtual reality includes the key elements of a virtual world together with Immersion, Interaction and Imagination, known as the three I's (I3) [1]. Immersion considers the fact that the user must be immersed into an alternative reality or point of view. The alternative world might be a representation of an actual space that exists elsewhere, or it could be a purely imaginary environment (Imagination), created in the minds of novelists, composers and other artists and creative individuals [15]. Virtual reality responds to the user through multiple sensory channels, such as the visual, auditory and haptic (tactile and force feedback) channels; this is known as interaction.

Figure: 1.3 Key factors of the virtual reality. Mechatronics Lab [KTH University]

Virtual reality provides a new medium for engineers to create and interact with designs in a manner that is similar to real life. Instead of the 2D computer screen, visualization typically takes place in 3D through shutter glasses or a head-mounted display. Rather than using a 2D mouse, virtual environment applications track 3D human motion. Three-dimensional sound is also becoming common. In the real world, people receive and disseminate information in three-dimensional space, and in the virtual world the user can access information by imitating that three-dimensional space. To incorporate the sense of touch in the virtual environment, haptic devices are used that allow the user to physically interact with virtual objects.

1.2 History of haptics

Since the early part of the twentieth century, the term haptics has been used by psychologists for studies on the "active touch of real objects by humans". In the late 1980s researchers started working on novel machines pertaining to touch. Tactile feedback, as a component of virtual reality simulations, was pioneered at MIT [5]. In the 1950s, Argonne National Lab developed a master-slave tele-manipulation system in which actuators, by receiving feedback signals from slave sensors, applied forces to a master arm controlled by the user [7]. Haptic technology was used in the 1960s for applications such as military flight simulators, with which combat pilots honed their skills. Motors and actuators pushed, pulled and shook the flight yoke, throttle, rudder pedals and cockpit shell, reproducing many of the tactile and kinesthetic cues of real flight. Though advanced for their time, these programmable haptic systems were limited by the technology available and were therefore relatively crude and of low fidelity by today's standards. They also cost hundreds of thousands, if not millions, of dollars and were therefore not within the grasp of consumers or even most businesses. By the late 1970s and early 1980s, computing power had reached the point where rich colour graphics and high-quality audio became possible. Ivan Sutherland, in his 1963 MIT PhD thesis, created Sketchpad and opened the field of computer graphics [7]. His head-mounted display of 1966, developed with Bob Sproull, anticipated virtual reality. He suggested, "The human kinesthetic sense is yet another independent channel to the brain, a channel whose information is assimilated quite subconsciously." This multimedia revolution spawned entirely new businesses in the late 1980s and early 1990s and opened new possibilities for the consumer. Flight simulation moved from a professional pilot-only activity to a PC-based hobby, with graphics and sound far superior to what the combat pilots in the 1960s

had. The multimedia revolution also gave rise to the medical simulation industry. By the 1990s, high-end workstations displayed highly realistic renderings of human anatomy. By the mid 1990s, shortcomings in simulation products had been identified: even though graphics and animations looked incredibly realistic, they could not possibly convey what it actually feels like to break through a vein wall with a needle or to fight the flight yoke out of a steep dive. New industrial and consumer products were in need of enhanced programmable haptic technology that could provide sensations similar to an actual hands-on experience. In 1990 Patrick used voice coils to provide vibrations at the fingertips of a user wearing a Dexterous Hand Master exoskeleton [3]. Minsky and her colleagues developed the "Sandpaper" tactile joystick, which mapped image texels (texture elements used to represent images) to vibrations (1990). Commercial tactile feedback interfaces followed, namely the "Touch Master" in 1993, the CyberTouch glove in 1995 and, more recently, the "FEELit Mouse" in 1997. In the early to mid 1990s the PHANToM was developed by SensAble Technologies Inc.; it was a breakthrough in the field of haptics [7][5]. Immersion was founded in 1993 to develop technology for the sense of touch as experienced by human users [5]. By combining 1) the basic concepts used in the military flight simulators of the 1960s, 2) state-of-the-art robotic controls, 3) an understanding of the human sense of touch, and 4) advancements in computing power, Immersion was able to significantly reduce the cost and size of programmable haptic technologies while increasing the quality of the simulated forces [2][43]. Immersion's early haptic technology was used in the world's first consumer force feedback peripherals for computer video games, such as flight sticks and steering wheels. These products not only looked and sounded more realistic, but also allowed users to feel haptic effects that simulated, for example, textures, bouncing and hitting a ball, and vibrations from gunfire. With Immersion haptic technology, sophisticated medical simulators could offer clinicians the ability to practice and perfect their skills with a tactile realism not possible before. In 2002 Chial and colleagues developed haptic virtual environments in which force feedback was used in scissors designed to simulate the cutting of rat tissue. The initial results of this experiment were encouraging, and users found the cutting of natural and virtual tissues similar [7]. Also in 2002, Greenish demonstrated that subjects could identify tissues with similar precision when performing a real or simulated cutting task on various parts of an animal. Further work needs to be done in this area, but the initial results have been encouraging. In 2003 Lieu discussed how haptic surgical simulators can be used successfully in medical courses and what the advantages of these systems are in medical education. Among the many advantages mentioned are that these systems are very flexible to use and that they provide a uniform learning experience. They also state that, despite the initial costs, in the long run these systems can be very cost effective for educational institutions [2]. Further advancements in size, power and cost reductions have pushed the adoption of haptics technology even further. Today, Immersion TouchSense® technology is incorporated into gaming systems and peripherals for Sony and Microsoft products and into PCs in general.
More than 1500 Immersion Medical simulators have been deployed at hospitals and medical schools throughout the United States and abroad [2][43]. Over the past few years, haptic technology has been deployed in VibeTonz® software products for mobile phone handsets, including to provide tactile confirmation of touchscreen presses [5]. Immersion haptic technologies are also being used by industrial designers and by researchers at major universities. Automotive companies are using programmable haptic feedback technology in controls, for example through push-turn programmable rotary controllers powered by Immersion. With this technology, drivers can manage comfort and convenience features such as the radio, climate control, navigation and

communications through their sense of touch. More importantly, it gives drivers an alternative to relying solely on visual cues to control these features while driving [43]. For example, Nissan has integrated Immersion TouchSense technology into an innovative concept car. The haptics discipline includes all aspects of object exploration and object manipulation through touch by humans, machines, or a combination of the two; the environments can be real, as in teleoperation, or virtual. As computing power increases, multimedia capabilities advance, and product designers become familiar with the advantages of engaging the sense of touch, even more opportunities will be created for programmable haptic technologies.

1.3 Application

The addition of haptics to various applications of virtual reality and teleoperation opens exciting possibilities for researchers. Some more or less common applications of haptics are summarized below.

• Medicine and Rehabilitation: In recent years the increased use of computers, virtual reality and haptic devices has changed the way health care is delivered. Advancements in this field allow pre-surgery simulation, telesurgery (remote surgery), rehabilitation of patients and biotechnology (design of new drugs) in a more sophisticated way.
  o Surgical simulators: As flight simulators are used to train pilots, haptic-based surgical simulators are used for the training of doctors in medical surgery. Thanks to haptic modeling and visualization, the need to use paid volunteers or cadavers for the training of doctors is reduced. The use of surgical simulators has been shown to increase patient safety and reduce the risk associated with human errors in hospitals by allowing surgeons to develop skills more efficiently in a shorter period of time [9]. Virtual reality based surgical simulators enable a medical trainee to see, touch and manipulate realistic models of biological tissues and organs. As an example, Mark Billinghurst, at the HIT Lab in Washington, has developed a prototype surgical assistant for simulation of paranasal surgery [13]. During a simulated operation the system provides vocal and visual feedback to the user, and warns the surgeon when a dangerous action is about to take place. In addition to training, the expert assistant can be used during the actual operation to provide feedback and guidance, which is very useful when the surgeon's awareness of the situation is limited due to complex anatomy. The Yantric Laparoscopic Simulator (Lapsim) and Epidural Injection Simulator (Episim) are training simulators based on a virtual patient with a sense of touch [43]. The Lapsim is a two-handed system that allows users to learn basic manipulative skills in a minimally invasive environment. Episim allows trainees to inject a needle into the epidural space of a virtual patient with a sense of touch. The Episim is particularly suitable for procedural training for practitioners in the military, as well as for trainees in civilian medicine such as obstetrics/gynecology, anesthesia and pain management.
  o Telesurgery: In this concept the surgeon "operates" locally on a virtual patient model. His actions are transmitted via high-speed networks or satellite to a robotic assistant operating on a real but distant patient. Rather than traveling to an operating room, the surgeon instead becomes a telepresence. Expert surgeons may work from a central workstation, performing operations in various locations, with machine setup and patient preparation performed by local nursing staff [16]. A particular advantage of this type of work is that the surgeon can perform many more operations of a similar type, with less fatigue and in less time. It is well documented that a surgeon who performs more procedures of a given kind will have statistically better outcomes for his patients.
  o Rehabilitation: A haptic device can be used for multiple virtual reality rehabilitation exercises targeted at treating patients suffering from a stroke, disability and various other diseases. VR-based rehabilitation provides a patient, at all times, with the required intensive exercises that are repetitive in nature, which is necessary for recovery, as shown in figure 1.4a [6].
  o Biotechnology: Biotechnology deals with the design of new drugs, where large molecules have complex 3D structures. Haptic technology provides a natural 3D visualization environment in which chemists can study and manipulate these complex structures to develop new drugs, as shown in figure 1.4b [1].


Figure: 1.4 a) The Rutgers Ankle rehabilitation system, for patients undergoing physical therapy [6]. b) Haptic feedback in molecular simulation. Mechatronics Lab [KTH University].

• Collaborative Haptics: The use of haptics to improve human-computer interaction as well as human-human interactions mediated by computers is being explored. A multimodal shared virtual environment system has been developed, and experiments have been performed with human subjects to study the role of haptic feedback in collaborative tasks and whether haptic communication through force feedback can facilitate a sense of being and collaborating with a remote partner. Two scenarios, one in which the partners are in close proximity and the other in which they are separated by several thousand miles (transatlantic touch with collaborators at University College London [13]), have been demonstrated.
• Entertainment: Entertainment was the driving force of early haptic virtual technology and is presently its largest market [1]. Rich sensorial interaction and 3D immersion make virtual reality and haptic technology an ideal video game environment. The Novint Falcon haptic device changed the world of games; it is an entirely new type of game controller that provides true 3D virtual touch. Today a large number of video games and simulators are developed that enable the user to feel and manipulate virtual objects as in reality. Haptic communication with computers also opens completely new opportunities for music, where advances in real-time synthesis tools increase the demand for interactive controllers [13].
• Education: Haptic tools are used in a variety of educational settings, both to teach concepts and to train students in specific techniques. New modeling systems are being developed that allow learners and designers to use their existing skills while working in the virtual environment to further improve those skills. Different types of simulators give students the feel of phenomena at nano, macro or astronomical scales [13]. A Virtual Haptic Back (VHB) is being successfully integrated into the curriculum of students at the Ohio University College of Osteopathic Medicine [43]. Research indicates that the VHB is a significant teaching aid in palpatory diagnosis (detection of medical problems via touch). The VHB simulates the contour and compliance (reciprocal of stiffness) properties of human backs, which are palpated with two haptic interfaces (SensAble Technologies PHANToM 3.0).
• Industry and Engineering: Haptics is being integrated into CAD systems such that a designer can freely explore and manipulate the mechanical components of an assembly in an immersive environment [5][43]. Physical prototypes are replaced by virtual or digital prototypes/models (Computer Aided Design, CAD) to avoid building expensive prototypes, especially in the automotive and aeronautics sectors. Increasingly, these CAD systems also allow designers and engineers to carry out assembly processes. The use of touch in CAD systems allows operators to feel forces and local stimuli similar to those in real situations, which provides more intuitive manipulation (e.g. checking for defects or deciding the most appropriate assembly sequence). In addition, different designers, who may be situated thousands of kilometers apart, often collaborate in the design and revision of products to lessen time and lower costs.

Figure: 1.5 Haptics can be used to simulate assembling of parts [43]



• Graphic Arts: Virtual art exhibits, concert rooms and museums in which the user can log in remotely to play the musical instruments and to touch and feel the haptic attributes of the displays; individual or cooperative virtual sculpting across the internet [5].

1.4 Motivation

Undoubtedly, there is an increase in haptics research, clearly seen in the number of scholarly articles published in journals and conference proceedings in the past few years, a figure that is still on the rise. Haptics is an interdisciplinary research field that involves psychologists, robotics researchers (haptic device design) and computer scientists. Haptic devices are useful for tasks where visual information alone is not sufficient and may induce unacceptable manipulation errors, for example surgery or teleoperation in radioactive or chemical environments. The value of haptic interaction in surgical simulation applications has led to a great deal of research interest in the challenges involved in providing haptic force feedback in virtual environment simulations, and also in haptic device design for better fidelity and performance. Although various force-reflecting haptic devices are available on the market, from the design point of view further improvements in range (workspace size in relation to device dimensions), actuated degrees of freedom, resolution, stiffness and bandwidth are needed to match their performance to that of the human user. Improvement of these devices will create new opportunities for surgical procedures that are impossible using current devices. This research is a continuation of earlier work in which a prototype master-slave system was developed to introduce a telerobotic system for skull base surgery [12]. Valuable research on a haptic and virtual reality temporal bone surgery simulator was carried out [10][11] at the Mechatronics Lab, Department of Machine Design, at the Royal Institute of Technology (KTH). This research will focus on the development of a 6-DoF haptic input/output device with optimization of its performance parameters, as shown in figure 1.6. The focus will be on machine haptics (structure selection and design parameter optimization of the device in relation to workspace, size and stiffness). The targeted milestone of the research is to replace the 3-DoF master haptic device with a new 6-DoF haptic device. The objective is to use torque feedback together with force feedback to increase the realism of interaction with the virtual environment.

Figure: 1.6 State of the haptics research at the Mechatronics Lab, KTH University [9]

2. Computer Haptics

Computer haptics is an emerging area of research concerned with the techniques and processes associated with generating and displaying the touch and feel of virtual objects to a human operator through a force-reflecting device. Analogous to computer graphics, it deals with the models and behaviour of virtual objects together with haptic and graphical rendering algorithms for real-time display. It includes the software architecture needed not only for haptic interactions but also for their synchronization with the visual, audio and force feedback modalities as perceived by the user.


Figure: 2.1 Haptic interaction loop between virtual objects and real systems. Mechatronics Lab [KTH University]

The software development process for creating haptic virtual reality applications involves modelling a virtual environment that supports real-time interactivity and interfacing it with haptic devices and visual displays. To model a virtual environment, 3D geometrical objects are generated. Visual properties are applied to the virtual objects to define their appearance. The objects' physical behaviour and physical properties are then modelled to make them behave and interact according to physical laws. The virtual environment is interfaced with haptic devices using haptic rendering algorithms, low-level control algorithms and graphical rendering algorithms that detect collisions between objects and send feedback to the user. When a collision occurs during object manipulation within the virtual environment, appropriate feedback is simulated for the user based on the physical modelling. The virtual environment modelling process involves the following phases:

• Geometrical modelling
• Graphical or visual rendering
• Physical modelling
• Haptic rendering

2.1 Geometrical Modeling

Geometrical modelling describes the 3D geometrical shape of virtual objects (polygons, triangles and vertices) as well as their appearance (texture, surface reflection coefficient and colour). Surface-based and volumetric-based 3D object models are used for this modelling. A surface-based model defines only the outer surface of an object without representing the interior details. In contrast, volumetric models represent both the surface and the interior of an object. Each of these types of models can have different geometrical representations. The algorithms that can be utilized to render a virtual environment (graphically and haptically) are based on the types of models and representations used.

Surface-based modeling of Virtual Environments

The surface-based model defines the shape of a virtual object by its 3D outer surface. The external surface of a geometrical object can be represented by an approximating mesh of planar polygonal facets. The vast majority of virtual objects have surfaces composed of triangle meshes [1]. Triangle meshes are preferred because they use shared vertices, which makes it possible to design graphics cards that specialize in efficient rendering of surface-based objects for visualization and thus helps meet real-time requirements. Figure 2.2 shows a polygonal model built from triangle meshes; the vertices of shared edges are the same for adjacent triangles, for example (x1, y1, z1) and (x2, y2, z2). A polygonal mesh stores the coordinates of the vertices, the specification of which vertices make up each triangle, and the surface normal of each triangle. Additional information (e.g. connectivity of the facets, the normal of each vertex, the neighbouring triangles of each edge) may also be maintained if required by algorithms that utilize the model. The required accuracy of the mesh approximation, based on the difference between the faceted representation and the object's actual curved surface, affects the number of polygons used. Ideally, the number and size of the polygons should depend on the local spatial curvature of the surface: fewer and larger polygons can be used in areas where the surface is mostly flat, and more and smaller polygons in areas where the curvature changes rapidly. Such characteristics are exploited by mesh optimization algorithms that reduce the number of polygons needed to represent a surface while attempting to maintain its general shape.

Figure 2.2 Triangle mesh model to represent virtual object surface, Mechatronics Lab [KTH University]
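To make the shared-vertex idea concrete, here is a minimal sketch (not from the report) of an indexed triangle mesh in Python: vertices are stored once and triangles reference them by index, and a per-triangle normal is computed from the cross product of two edge vectors. The class name and example values are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class TriangleMesh:
    """Indexed triangle mesh: shared vertices are stored once and
    referenced by index, as in figure 2.2."""
    vertices: list            # list of (x, y, z) tuples
    triangles: list           # list of (i0, i1, i2) vertex indices

    def face_normal(self, t):
        """Unnormalised surface normal of triangle t (cross product of two edges)."""
        i0, i1, i2 = self.triangles[t]
        ax, ay, az = self.vertices[i0]
        bx, by, bz = self.vertices[i1]
        cx, cy, cz = self.vertices[i2]
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

# Two triangles sharing the edge between vertices 1 and 2: the shared vertices
# appear only once in the vertex list, as the text describes.
mesh = TriangleMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)],
    triangles=[(0, 1, 2), (1, 3, 2)],
)
print(mesh.face_normal(0))  # (0, 0, 1)
```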

Certain issues may arise when using polygonal meshes. For example, a very large number of polygons may be required to approximate the curved and complex surface of a virtual object, as shown in figure 2.3. The need to store this information can require a large amount of physical memory. The computational workload needed to process a large number of triangles for graphics and haptics rendering also affects the overall performance of an application and limits the size of the virtual environment that can be used while maintaining real-time interactivity and stability. An alternative way to represent the shape of a surface is parametric models. Parametric models are represented mathematically using functions that define the three-dimensional location and shape of their primitives. They are an extension of parametric curves and splines, which allow the points that lie on a curve to be interpolated by evaluating a set of basis functions of a spatial parameter u together with a set of control points [14]. With additional basis functions and another spatial parameter, v, each point that lies on a surface can be interpolated. For example, the Bezier patch of the parametric surface given in figure 2.4a [14] is defined as

Figure 2.3 Surface-based model of a human hand: a) 295 vertices and 307 polygons, b) 2446 vertices and 2429 polygons [1].

Figure 2.4 a) Surface representation through parametric equations of curves [14]. b) Surface representation through an implicit equation [14].

Q(u, v) = Σ_{i=0}^{3} Σ_{j=0}^{3} P_ij B_i(u) B_j(v)        (2.1)

where P_ij is a control point and B_i and B_j are the basis functions that determine how much effect a control point has on the patch at a given parameter value. Parametric patches are a common representation of parametric surfaces. They resemble polygonal meshes, except that the individual polygons are replaced by curved surfaces. Fewer geometrical primitives (compared with polygonal meshes) are needed to represent the surface of curved objects, and little memory is required to store parametric surfaces since they are defined mathematically. Rendering such an object can, however, be computationally expensive, since the equations have to be evaluated to create the object's geometrical primitives; there is thus a trade-off between memory and real-time visualization. Implicit models are represented using implicit functions of the form f(x, y, z) = 0. An implicit function is a testing, or membership, function that divides space into the points that belong on the surface and those that do not [14]. It permits taking values for x, y and z and evaluating the function f to determine whether the point lies on the surface. Figure 2.4b is an example of a spherical implicit model defined by the function x² + y² + z² - r² = 0. Since the models are represented mathematically, they require very little memory. Rendering implicit models can be computationally expensive since the mathematical representation needs to be evaluated. Surface-based models are preferred in medical applications for representing images (e.g. built from CT scan or MRI data) as 3D virtual objects while maintaining real-time haptic interaction [9].
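As an illustration of equation 2.1, the sketch below evaluates a point on a bicubic Bezier patch. It assumes Bernstein polynomials as the basis functions B_i and B_j, which is the usual choice for Bezier patches but is not spelled out in the text; the control grid values are made up.

```python
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_patch_point(control_points, u, v, n=3):
    """Evaluate Q(u, v) = sum_i sum_j P_ij B_i(u) B_j(v) (equation 2.1)
    for an (n+1) x (n+1) grid of 3D control points."""
    x = y = z = 0.0
    for i in range(n + 1):
        bu = bernstein(i, n, u)
        for j in range(n + 1):
            w = bu * bernstein(j, n, v)
            px, py, pz = control_points[i][j]
            x += w * px
            y += w * py
            z += w * pz
    return (x, y, z)

# 4x4 control grid of a gently curved patch (hypothetical values): the grid is
# flat except for one raised control point in the middle.
grid = [[(i, j, 0.5 if (i, j) == (1, 1) else 0.0) for j in range(4)] for i in range(4)]
print(bezier_patch_point(grid, 0.5, 0.5))
```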

Volumetric-based modeling of Virtual Environments

Volumetric-based models form a 3D scalar field that is defined using voxel elements. A voxel is the equivalent of a cubic pixel that stores the data (e.g. position, stiffness, colour and opacity) needed to represent the volumetric model, as shown in figure 2.5a. Figure 2.5b shows an example of a voxelized human thorax region that was interactively sliced to expose the internal anatomical structures. Technological advancements in video cards allow volumetric objects to be visually rendered in real time. Volumetric-based graphics rendering algorithms render virtual environments represented with volumetric datasets. These rendering algorithms can be classified as indirect and direct techniques. Indirect techniques create an intermediate form, such as a polygonal surface or iso-surface, and use surface rendering algorithms to display it. Direct volume rendering algorithms render the volumetric data directly by sampling it and compositing the individual samples to create an image. Different methods are available for rendering discrete 3D volumetric data, such as ray casting, 3D texture mapping and marching cubes [9].

Figure 2.5 a) Vertex of voxels. b) Volumetric-based model of a human thorax, sliced to show the interior structure [14]
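A minimal voxel-volume sketch, assuming each voxel stores a single scalar property (for example density or stiffness) on a regular grid; the class name, grid size and stored values are hypothetical.

```python
import numpy as np

class VoxelVolume:
    """Regular voxel grid: each cell stores one scalar property
    (e.g. density or stiffness); a 3D position maps to a cell index."""
    def __init__(self, shape, voxel_size, origin=(0.0, 0.0, 0.0)):
        self.data = np.zeros(shape, dtype=np.float32)
        self.voxel_size = voxel_size
        self.origin = np.asarray(origin, dtype=np.float32)

    def sample(self, point):
        """Nearest-neighbour lookup of the voxel containing a 3D point
        (returns 0 outside the volume)."""
        idx = np.floor((np.asarray(point) - self.origin) / self.voxel_size).astype(int)
        if np.any(idx < 0) or np.any(idx >= self.data.shape):
            return 0.0
        return float(self.data[tuple(idx)])

vol = VoxelVolume(shape=(64, 64, 64), voxel_size=0.01)   # 64^3 grid, 1 cm voxels
vol.data[30:34, 30:34, 30:34] = 1.0                      # mark a small dense region
print(vol.sample((0.315, 0.315, 0.315)))                 # 1.0
```

Even this toy grid already holds 64^3 = 262144 cells, which hints at the memory cost discussed next.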

A drawback of voxel-based models is the considerable amount of physical memory required to represent virtual objects. Visual rendering of voxel-based volumetric representations is also computationally expensive, since the data within a large number of voxels needs to be sampled. This typically limits voxel-based objects to small volumes for real-time rendering.

Object visual appearance

Modeling the geometry of a virtual object is the necessary first step in creating a realistic-looking 3D scene. The next step is to apply visual properties to the scene so that the objects become visible in the virtual environment. Global (environment-level) visual properties model the viewing and lighting of the virtual environment. At the object level there are also several types of visual properties, such as material colour, texture maps and shading. The process used to generate a scene for viewing is analogous to taking a photograph with a camera. This process entails:

• Positioning a virtual camera in the virtual environment
• Arranging the objects in the virtual environment to be photographed (modelling)
• Choosing the type of camera lens to use and adjusting its settings (projection)
• Determining the size of the picture (viewport)

The type of projection used and the attributes of the projection help set up a viewing volume that determines which objects are inside of it and how they appear. Only

objects within the viewing volume are displayed. As shown in figure 2.6a, two types of projection are provided to specify how objects are projected onto the screen. The viewing volumes for each projection type are also displayed in the figure. The perspective projection in the top image is used to model how things are seen in the real world; for example, objects look smaller when they are farther away and appear to converge towards a focal point in the distance. The other type of projection is orthographic, which projects objects onto the screen without affecting their relative sizes. This type of projection is used, for example, in architectural drawings where measurements are important. The viewport specifies the size and area of the computer screen onto which the generated picture is mapped.

Figure 2.6 a) Virtual camera modelling: (top) perspective projection and (bottom) orthographic projection. b) Lighting in virtual environments: (top) no lighting and (bottom) with lighting [14].
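The two projection types can be written as 4x4 matrices. The sketch below uses OpenGL-style conventions (an assumption, since the report does not fix a convention): under the perspective matrix the same point moves towards the image centre as it gets farther away, while the orthographic matrix preserves relative sizes.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection: distant objects appear smaller."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

def orthographic(l, r, b, t, near, far):
    """Orthographic projection: relative sizes are preserved."""
    return np.array([
        [2 / (r - l), 0, 0, -(r + l) / (r - l)],
        [0, 2 / (t - b), 0, -(t + b) / (t - b)],
        [0, 0, -2 / (far - near), -(far + near) / (far - near)],
        [0, 0, 0, 1],
    ])

def project(P, point):
    """Apply a projection and the perspective divide to a view-space point."""
    x, y, z, w = P @ np.array([*point, 1.0])
    return np.array([x, y, z]) / w

# The same (x, y) point lands closer to the image centre when it is farther
# from the camera under perspective projection, but not under orthographic.
P = perspective(60, 4 / 3, 0.1, 100)
O = orthographic(-2, 2, -2, 2, 0.1, 100)
print(project(P, (1.0, 1.0, -2.0))[:2], project(P, (1.0, 1.0, -10.0))[:2])
print(project(O, (1.0, 1.0, -2.0))[:2], project(O, (1.0, 1.0, -10.0))[:2])
```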

Lighting plays an important role in modelling the visual appearance of the virtual environment. The gradation in colour, or shading, that lighting causes along an object's surface is what gives the object a three-dimensional appearance. As shown in figure 2.6b, the teapot at the top looks two-dimensional since it lacks shading, whereas the shading in the bottom image makes the 3D shape of the teapot visible. Different types of lights exist and each has a different effect in the overall lighting model. Ambient lights provide uniform lighting within the virtual environment. Point sources emit light equally in all directions, but the intensity of illumination that an object receives is proportional to the inverse square of its distance to the light source. A spot light is a point source whose emitted light is restricted to a cone. Finally, directional lights transmit light in a specific direction, with rays parallel to one another. The shading that is created on the objects can also be modelled. Flat shading assigns the same shade to an entire geometrical primitive (e.g. a triangle). This causes the edges between neighbouring primitives to become noticeable, giving the curvature of a surface a faceted appearance. When smooth shading is enabled, the shading is interpolated over the geometrical primitive and its neighbours, blending the shading to make the surface look smooth. The visual difference between smooth and flat shading is apparent in figure 2.7a.

Figure 2.7 a) Smooth shading (left) and flat shading (right). b) Colour applied to objects. c) Texture mapping to model a bowling ball and a golf ball [14].

Colour of materials determines how an object reflects light and therefore what material it appears to be made of. A colour material usually includes components such as ambient, diffuse, and specular colours and shininess. By modifying the values of each of these components, different types of materials can be created. For example, Figure 2.7b shows a modelled plastic material (red diffuse colour with low specularity) and a shiny metal material (red diffuse colour with high specularity). Visual properties can also be assigned using the colour information from images. Texture maps enable detailed 3D models to be created by applying, or mapping, an image onto them. For example, a stone wall can be modelled using a few polygons with an image of stones texture mapped onto them. Without this image, a very large number of polygons would be required to model individual stones. Figure 2.7c is an example of using texture mapping to model a bowling ball and a golf ball.
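As a concrete illustration of how ambient, diffuse and specular colours and shininess combine, here is a classic Phong-style shading sketch; the exact lighting model used in [14] is not given, so this is a standard formulation with hypothetical material values for a dull "plastic" and a shiny "metal".

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def phong_shade(normal, light_dir, view_dir, material, light_color=(1, 1, 1)):
    """Classic Phong shading: ambient + diffuse + specular terms, where the
    material carries the colour components and shininess described above."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    light = np.asarray(light_color, dtype=float)
    diff = max(np.dot(n, l), 0.0)
    r = 2.0 * diff * n - l                       # reflection of the light direction
    spec = max(np.dot(r, v), 0.0) ** material["shininess"] if diff > 0 else 0.0
    color = (np.asarray(material["ambient"])
             + diff * np.asarray(material["diffuse"]) * light
             + spec * np.asarray(material["specular"]) * light)
    return np.clip(color, 0.0, 1.0)

# Red diffuse colour with low versus high specularity (hypothetical values).
plastic = {"ambient": (0.1, 0, 0), "diffuse": (0.7, 0, 0), "specular": (0.1, 0.1, 0.1), "shininess": 8}
metal   = {"ambient": (0.1, 0, 0), "diffuse": (0.7, 0, 0), "specular": (0.9, 0.9, 0.9), "shininess": 64}
n, l, v = (0, 0, 1), (0.3, 0.3, 1.0), (0, 0, 1)
print(phong_shade(n, l, v, plastic))
print(phong_shade(n, l, v, metal))
```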

2.2 Graphical or Visual Rendering

Graphical rendering is the process by which the 3D geometrical representation of the virtual environment is mapped, or projected, onto a 2D image that is displayed to the user. The steps involved in the graphics rendering pipeline are shown in figure 2.8. While creating the objects of the virtual environment, it is most convenient to model each object in a coordinate system local to that object; for example, the vertex coordinates of a polygonal mesh model can be defined relative to the centre of the object. Once the objects are modelled, a modelling transformation (rotation and translation) is applied to transform each object from its local coordinate system to the world coordinate system, placing the objects within a common global space (world coordinate space) that defines their spatial relationships. The lighting used in the virtual environment is modelled in the world coordinate space. The view space is the coordinate system created by placing a virtual camera within the virtual environment. The view coordinate system provides a viewpoint that establishes the position and viewing direction of the viewer in the world coordinate space, and the world coordinate space is transformed into the view space for viewing. A view volume is also created to determine which objects can be seen by the virtual camera. The final 3D space in the graphics rendering pipeline is the screen space. This space is used to perform many operations on the virtual environment in order to create a 2D image.

Figure 2.8. Graphics rendering pipeline to map 3D virtual object into 2D image, Mechatronics Lab [KTH University]

Algorithms that remove geometrical primitives that cannot be seen (back-face culling removes primitives facing away from the view, view clipping removes those outside the viewing volume, and hidden surface removal discards those obstructed by other primitives) help to reduce the rendering computational load and thus improve real-time graphics rendering performance. After multiple stages of transformations and clipping, the remaining geometrical primitives must be projected into 2D. At the end of the pipeline, rasterization and shading algorithms are carried out to create the pixels of the projected 2D image. The final step of the graphics rendering pipeline maps the projected 2D image of the virtual environment onto the coordinates of the visual display (e.g. the screen). The performance of a graphics rendering algorithm is determined by the computational workload required to process an object's geometrical representation, transformations, lighting model, volume clipping and rasterization.
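A sketch of the back-face culling test mentioned above: with counter-clockwise vertex winding (an assumption, since the report does not state a convention), a triangle whose normal points away from the camera can be skipped before rasterization.

```python
import numpy as np

def is_back_facing(v0, v1, v2, camera_pos):
    """A triangle faces away from the camera when its outward normal points
    away from the viewer; such triangles can be culled before rasterization."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    normal = np.cross(v1 - v0, v2 - v0)          # counter-clockwise winding assumed
    to_camera = np.asarray(camera_pos, dtype=float) - v0
    return np.dot(normal, to_camera) <= 0.0

camera = (0.0, 0.0, 5.0)
front = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]        # normal +z, towards the camera
back  = [(0, 0, 0), (0, 1, 0), (1, 0, 0)]        # normal -z, away from the camera
print(is_back_facing(*front, camera), is_back_facing(*back, camera))   # False True
```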

2.3 Physical Modeling

Virtual objects are modelled physically by specifying their mass, weight, inertia, surface texture (smooth or rough), compliance (hard or soft), deformation mode (elastic or plastic), etc. These features are merged with the physical behaviour and the geometrical model to form a more realistic virtual model. Physical modelling also defines the physical behaviour and laws of the 3D virtual objects, as well as their interactions with other objects and with the haptic devices. Within physical modelling, rigid-body kinematics and dynamics deal with simulating the behaviour of non-deformable objects. The behaviour is simulated by computing the positional and rotational changes that occur to an object over time due to external forces such as gravity or forces generated from object interactions; these changes are computed from the equations of motion. Advancements in the computational power of modern computers allow the deformation of soft virtual objects to be modelled and simulated in order to increase the realism of a virtual environment. These deformations can be modelled using two different categories of techniques: geometric and physics-based. With geometric techniques, the object or its surrounding space is deformed based on geometric manipulations; for example, the vertices or control points of a 3D object are manipulated to modify its shape. Geometric techniques place an emphasis on visually smooth deformations and tend to be faster and relatively easier to implement than physics-based methods. However, geometric techniques do not simulate the underlying mechanics of deformation and rely on the skill of the designer for controlling the deformation. Physics-based techniques model the physical behaviour of objects under the effects of external and internal forces, which is necessary for realistic deformation. The physical behaviour is realistic since it is governed by physically based equations (e.g. the equations of motion). One advantage of using physics-based techniques is that the interaction forces are an integral part of the computation and can be applied to the haptic device. However, physics-based techniques are computationally expensive and not always suitable for real-time applications. A trade-off between computational accuracy and speed is often required to simulate deformations in real time. There have been recent attempts to help satisfy the real-time requirements of applications by offloading some of the computational workload for physical behaviour modelling from the CPU to the graphics processing unit (GPU) of video cards. In one case, using the GPU instead of the CPU for computing deformations reduced the average deformation/rendering time by 45% [14].
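To illustrate a physics-based deformation model of the kind described above, the sketch below advances a small mass-spring system by explicit Euler steps; the spring stiffness, masses and time step are hypothetical, and a real simulator would typically use a more stable integrator and add damping.

```python
import numpy as np

def step_mass_spring(positions, velocities, springs, masses, dt, gravity=(0, -9.81, 0)):
    """One explicit-Euler step of a mass-spring deformation model: internal
    spring forces plus gravity drive the equations of motion of each node."""
    forces = np.tile(np.asarray(gravity, dtype=float), (len(positions), 1)) * masses[:, None]
    for i, j, rest_length, k in springs:
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            f = k * (length - rest_length) * d / length   # Hooke's law along the spring
            forces[i] += f
            forces[j] -= f
    velocities += dt * forces / masses[:, None]
    positions += dt * velocities
    return positions, velocities

# Two nodes joined by one spring; node 0 is effectively pinned by a huge mass.
pos = np.array([[0.0, 0.0, 0.0], [0.0, -0.12, 0.0]])
vel = np.zeros_like(pos)
springs = [(0, 1, 0.10, 500.0)]                 # (i, j, rest length in m, stiffness in N/m)
masses = np.array([1e9, 0.05])
for _ in range(100):
    pos, vel = step_mass_spring(pos, vel, springs, masses, dt=0.001)
print(pos[1])   # node 1 oscillates around roughly 0.10 m below node 0
```

Note that the spring forces computed here are exactly the interaction forces that a physics-based approach can also send to the haptic device.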

Haptic Interaction Modeling

Haptic interaction can be subdivided into force feedback and tactile feedback.

• Force feedback deals with devices that interact with the muscles and tendons that give humans the sensation of forces and torques being applied. This type of feedback allows users to feel weight and/or resistance within the virtual environment.
• Tactile feedback deals with devices that interact with sensors in the skin that convey sensations such as heat and texture.

Depending on the requirements of the system, either a force feedback or a tactile feedback model is used. Sometimes both models are integrated to enhance the sensation of interaction in a simulated environment. With force feedback, the user receives feedback forces and torques from haptic applications through special haptic interface devices (machine haptics). Figure 2.9 illustrates the general human-computer interaction process that takes place. When the user manipulates the haptic device, the encoders on the device transmit its position to the computer. The computer then checks for collisions between the probe (the virtual representation of the haptic device's interaction tool) and the other objects in the virtual environment, and calculates a force based on the interactions. This force is sent to the actuators of the haptic device, which display it to the human user. When the user touches an object, the resulting force is conveyed through sensors embedded in the skin that relay the associated contact information to the brain. The brain in turn issues motor commands that activate the muscles and lead to hand and arm movement.

Figure 2.9. Haptics human-computer interactions model [14].

To make haptic interactions possible, the physical device needs to be represented within the computer's computational (virtual) workspace, see Figure 2.10. The interaction tool of the haptic device is modelled in the virtual environment using a "probe" (a 3D model). As the user manipulates the haptic device in its physical workspace, its position/orientation is sensed, mapped to the computational workspace of the virtual environment, and used to update the probe object's position/orientation. The computer uses this probe object to simulate interactions with the other objects in the virtual environment.

Figure 2.10. Representation of the device's interaction tool ("probe") in the virtual environment [9].
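The mapping from the device's physical workspace to the virtual workspace is often a simple scale and offset applied to the sensed position, as in the sketch below. The uniform scale factor, the offset vector and the function name are assumptions for illustration; a real device driver may also apply a rotation or a non-uniform mapping.

```cpp
struct Vec3 { double x, y, z; };

// Map a position sensed in the device workspace (metres, device frame)
// into the virtual environment's computational workspace.
Vec3 deviceToVirtual(const Vec3& devicePos, double scale, const Vec3& offset) {
    return { devicePos.x * scale + offset.x,
             devicePos.y * scale + offset.y,
             devicePos.z * scale + offset.z };
}

// Example use: a device with a 0.2 m workspace scaled up to a 1.0 m virtual scene.
// Vec3 probePos = deviceToVirtual(sensedPos, 5.0, {0.0, 0.0, 0.0});
```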

Collision Detection

Collision detection is the mechanism that determines contact between the probe and other surfaces or objects in the virtual environment. Determining the collisions that occur in a virtual environment is required for objects to interact with each other. Collision detection is typically performed in two phases: broad and narrow [10, 11, 14]. The broad phase attempts to quickly eliminate objects that cannot collide. The narrow phase performs thorough tests to find the exact collisions between the primitive elements (e.g. triangles) of the objects. Since the narrow phase can be computationally expensive, it is only executed for object pairs that pass the broad phase. Broad-phase algorithms attempt to exploit temporal coherence, spatial coherence and/or bounding volumes. Temporal coherence is based on estimating the earliest time at which two objects can intersect, and not checking for collisions again until that time has elapsed. Spatial coherence, i.e. dividing the scene into cells, is typically implemented using an octree; only cells containing one or more objects are tested for collisions. Octrees work well for eliminating objects in the scene, but they must be updated every time the objects move. Bounding volumes are the most common technique for the broad phase; the most used bounding volumes are spheres, axis-aligned bounding boxes (AABB) and object-aligned bounding boxes (OBB). Efficient collision detection algorithms are critical for the virtual environment to run in real time. Different types of haptic interaction algorithms are used for collision detection, such as point-based and ray-based. In the point-based haptic interaction algorithm, the probe is modelled simply as a point; in the ray-based algorithm, the probe is modelled as a line segment. Both techniques have advantages and disadvantages. For example, it is computationally less expensive to render 3D objects using the point-based technique. On the other hand, the ray-based technique handles side collisions and can provide additional haptic cues for conveying the shape of objects and interaction torques to the user.
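The sketch below illustrates the bounding-volume idea from the broad phase with the simplest case, an axis-aligned bounding box (AABB) overlap test: two boxes can only collide if their extents overlap on every axis. The struct layout and function name are assumptions for the example.

```cpp
struct AABB {
    double min[3];  // minimum corner (x, y, z)
    double max[3];  // maximum corner (x, y, z)
};

// Broad-phase test: returns true if the two boxes overlap on all three axes.
// Only pairs that pass this cheap test are handed to the expensive narrow phase.
bool aabbOverlap(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i) {
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) {
            return false;  // separated along axis i, no collision possible
        }
    }
    return true;
}
```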

2.4

Haptic Rendering

The process of displaying interaction forces and torques to the user for a given haptic interaction point (HIP) in contact with a virtual object is known as haptic rendering. Haptic rendering makes it possible to touch and interact with the 3D geometrical representation of the virtual environment using the sense of touch; in other words, it allows users to feel virtual objects in a simulated environment [18]. A typical haptic loop consists of the sequence of events shown in Figure 2.11.

Figure 2.11. Haptic rendering loop showing collision detection and force feedback. Mechatronics Lab [KTH University].



• Low-level control algorithms sample the position sensors at the joints of the haptic interface device and combine the information from each sensor to obtain the position of the device-body interface in Cartesian space, i.e. the probe position inside the virtual environment.
• The collision detection algorithm uses this position information to find collisions between the probe and the objects in the virtual environment, and reports the resulting degree of penetration.
• The force response algorithm computes the force that needs to be sent to the control algorithm and finally to the haptic device's interaction tool, and thus to the user (a minimal code sketch of this loop follows the list).
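The sketch below puts these three steps into a single servo-loop iteration, as it might run at roughly 1 kHz. To keep the example self-contained, the position sensing and the low-level force control are left outside the function, and a single sphere stands in for the virtual environment; all names and the sphere object are assumptions for illustration, not an actual device API.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// A sphere is used as the only scene object, just to keep the sketch self-contained.
struct Sphere { Vec3 center; double radius; };

// One iteration of the haptic servo loop (typically run at ~1000 Hz):
// collision detection followed by the force response for the sensed probe position.
Vec3 hapticLoopStep(const Vec3& probePos, const Sphere& obj, double stiffness) {
    Vec3 d{probePos.x - obj.center.x, probePos.y - obj.center.y, probePos.z - obj.center.z};
    double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);

    Vec3 force{0.0, 0.0, 0.0};
    if (dist < obj.radius && dist > 1e-9) {          // collision: probe is inside the sphere
        double depth = obj.radius - dist;            // penetration depth
        Vec3 n{d.x / dist, d.y / dist, d.z / dist};  // outward surface normal at the contact point
        force = { stiffness * depth * n.x,           // force response: F = K * d along the normal
                  stiffness * depth * n.y,
                  stiffness * depth * n.z };
    }
    return force;                                    // sent to the device's low-level controller
}
```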

Force Response Model

The point-based force feedback model is the simplest model used in haptic rendering to calculate forces, as shown in Figure 2.12. In this model, the haptic interaction tool is represented as a single point in the virtual environment, and a feedback force is computed when this point penetrates the surface of an object. One simple way to calculate the feedback force is a spring model. The force F is calculated using Equation 2.1, which is based on Hooke's law and is proportional to the penetration depth of the haptic point within the surface:

F = K * d                                                                (2.1)

where K is the spring constant that controls the stiffness of the object and d is the penetration depth of the haptic point.

Figure 2.12. Simple point-based force feedback calculation [14].
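As a concrete example of Equation 2.1, the sketch below computes the feedback force for a probe that has penetrated 0.5 mm into a surface whose stiffness is set to 25 N/mm (the value reported later in Section 3.2 as sufficient to feel rigid); the numbers are illustrative only.

```cpp
#include <cstdio>

int main() {
    // Equation 2.1: F = K * d
    double K = 25.0e3;   // spring constant, 25 N/mm expressed in N/m (illustrative value)
    double d = 0.5e-3;   // penetration depth of the haptic point, 0.5 mm in metres

    double F = K * d;    // resulting feedback force, directed along the surface normal
    std::printf("Feedback force: %.2f N\n", F);  // prints 12.50 N
    return 0;
}
```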

The penetration depth is calculated as the distance between the closest point on the surface, the surface contact point (SCP), and the position of the device point. The direction of the force vector should be normal to the surface. Forces tangent to the surface can also be added to simulate friction.

Graphics/Haptics Rendering Synchronization

The real-time performance requirements of haptic virtual reality applications are based on the limitations of human perception. A virtual environment is typically updated visually at rates of around 30-60 Hz, while haptic interactions typically require update rates of around 1000 Hz [11]. The high update rate needed for haptic rendering requires that graphics and haptics rendering be separated into concurrent threads of execution, as shown in Figure 2.13. Application processing and graphics rendering are typically performed in the same thread, while haptic rendering is performed in a separate dedicated thread. Since the user sees and feels the virtual environment simultaneously, it is critical that the haptics and graphics rendering threads be synchronized; failure to do so can cause a disparity between what the user sees and what the user feels. Different approaches can be used to synchronize the rendering threads. The most common approach is to make thread-safe copies of the shared data. A snapshot of the shared state is typically preferred over plain mutual exclusion, since the latter can lead to cases where the high-priority haptic rendering thread has to wait for the lower-priority graphics rendering thread to release a lock.

Figure 2.13. Haptics and graphics rendering synchronization [9].
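A minimal sketch of the snapshot approach is shown below: the haptic thread publishes the shared state under a short-lived lock, and the graphics thread copies the whole state out before rendering from its private copy, so the 1 kHz loop never waits while a frame is drawn. The state contents and class names are assumptions for illustration.

```cpp
#include <mutex>

struct SharedState {
    double probePos[3];   // probe position, written by the haptic thread
    double force[3];      // last rendered force, e.g. for on-screen display
};

class StateBuffer {
public:
    // Called from the ~1000 Hz haptic thread: publish the newest state.
    void publish(const SharedState& s) {
        std::lock_guard<std::mutex> lock(mutex_);
        state_ = s;                    // brief critical section, no rendering work inside
    }

    // Called from the 30-60 Hz graphics thread: take a snapshot, then render from the copy.
    SharedState snapshot() {
        std::lock_guard<std::mutex> lock(mutex_);
        return state_;                 // copy out; the lock is released immediately after
    }

private:
    std::mutex mutex_;
    SharedState state_{};
};
```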

3. Machine Haptics

3.1

Introduction

Machine haptics deals with the design, development and implementation of haptic interfaces, or haptic devices. A haptic interface is a mechanical device, similar to a robot, that enables manual (human) interaction with a virtual environment or a teleoperated system. These interfaces are used together with computer hardware and software to produce the sense of touch and of interaction with real physical objects. They sense motion (position and orientation) when the user conveys a desired motor action by physically manipulating the end effector of the interface, and they produce force/torque feedback to the user based on the interaction with objects in the virtual environment. The increasing application of robotics and modern technology in domains such as medical surgery, haptics and virtual reality, training simulators and machine tools requires a high degree of stiffness, fidelity and precision. Medical robotics is, of course, an area of predilection for high-performance haptic interfaces. These interfaces are useful for tasks where visual and auditory information is not sufficient and may induce unacceptable errors. They can provide the surgeon with tremor-free movements by down-scaling the applied force, and the physical therapist with a reliable rehabilitation system. Haptic interface technology has advanced to such an extent that the market has been segmented into different groups, depending on the actuator technology (electric, hydraulic, pneumatic, etc.), control architecture, portability and kinematic structure of the interfaces [6]:

• High-end devices
• Low-end devices
• Tactile devices
• Kinesthetic devices
• Ground-based devices
• Body-based (wearable) devices

High-end devices are developed for haptic research, telerobotic surgery and surgical simulation, where a high degree of precision and fidelity is required. The FCS Haptic Master 3-DoF [6], Force Dimension Omega 3/6 [6] and SensAble PHANToM [26] are examples of such devices. Low-end devices are developed mainly for the PC video game industry, CAD/CAM modelling, education and training simulators, etc. Tactile haptic devices provide feedback to the user through the sense of touch (via receptors in the human skin). These devices convey information about contact surface geometry, surface texture, temperature, vibration and slippage of objects [24]. The tactile array is one of the most common such devices, as shown in Figure 3.1a. Kinesthetic haptic devices provide feedback by sensing the position and orientation of the interaction tool and convey information about total contact forces, surface compliance and (if the hand is supporting the object in some way) the weight of virtual objects. In haptic systems, tactile feedback interfaces can function as stand-alone systems, or they can be integrated with force feedback systems to enhance the sensation of immersion in a simulated environment.

Figure 3.1. (a) Tactile pin array shape display by the Harvard BioRobotics Lab [24]. (b) PHANToM Omni [SensAble Technology Inc.] [26].

Ground-based devices are either linkage-based devices or force-reflecting joysticks. Linkage-based devices have robotic arms or multiple links that connect the end effector to the base or ground. The FCS Haptic Master [6], the PHANToM of SensAble Technology Inc. [26] and the Novint Falcon [43] are examples of such devices, as shown in Figure 3.1b. Body-based devices are wearable gloves, suits and exoskeletons [6], as shown in Figure 3.2. These devices cover the user, or the user has to wear them. Exoskeleton devices are large and immobile systems, and the user must attach himself to them. The advantage of these devices is the generation of large and varied forces without strict size or weight constraints on the device.

Figure 3.2. (a) CyberForce [Immersion Corporation] (b) Exoskeleton (U-Tokyo) haptic devices [7].

Haptic devices belong to the family of mechatronic devices. Their fundamental function is to use mechanical signals to provide and control communication between the user and objects. There are two major ways in which a haptic device can be controlled: impedance control and admittance control.

Impedance control devices

In an impedance control system, the device senses motion (position and orientation) and renders forces to the user. The basic interaction loop between the user and the control system is "displacement in - force out". Impedance displays are well suited to simulating low-inertia, low-damping environments, since they themselves have low inertia and are highly backdrivable, but they have difficulty rendering stiff constraints.

Admittance control devices

In an admittance control system, the device senses forces and renders motion (acceleration, velocity, position). The basic interaction loop between the user and the control system is "force in - displacement out". Admittance displays are highly geared and therefore non-backdrivable. They are well suited to displaying rigid constraints, but struggle to simulate unencumbered motion. For a high level of fidelity, an admittance display must actively mask its inertia and damping. Control of inertial properties is a bilateral process for admittance displays, and the requirements place lower limits on the impedance that can be simulated. This is unlike impedance displays, which can render ultra-low impedance by simply turning the actuators off (although they cannot render an impedance lower than their own inertia and backdrive friction) [32].

Figure 3.3. (a) Loop of an impedance control system; (b) loop of an admittance control system, as used in haptic devices [6].
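The sketch below contrasts the two loops in code form: an impedance controller maps a measured displacement into a commanded force, while an admittance controller maps a measured force into a commanded position through a simulated mass-damper. The gains, the virtual mass/damping values and the function names are assumptions for illustration only.

```cpp
// Impedance control: "displacement in - force out".
// Penetration of the probe beyond a virtual wall is turned into a spring force.
double impedanceForce(double position, double wallPosition, double stiffness) {
    double penetration = position - wallPosition;
    return (penetration > 0.0) ? -stiffness * penetration : 0.0;  // push back out of the wall
}

// Admittance control: "force in - displacement out".
// The measured user force drives a simulated mass-damper whose position is then commanded.
struct AdmittanceState { double pos = 0.0; double vel = 0.0; };

double admittancePosition(AdmittanceState& s, double measuredForce,
                          double virtualMass, double virtualDamping, double dt) {
    double acc = (measuredForce - virtualDamping * s.vel) / virtualMass;  // F = m*a + b*v
    s.vel += acc * dt;
    s.pos += s.vel * dt;
    return s.pos;  // position setpoint sent to the (position-controlled) device
}
```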

Classification of haptic devices on the basis of mechanism and structure

The robot mechanism is one of the important factors to be considered in the design of haptic devices: how the different links and joints of the device are to be arranged and dimensioned so as to fulfil the given set of user requirements. This is in general a complex issue in robotic design. The three main types of robotic mechanism used for haptic devices are:

• Serial mechanisms
• Parallel mechanisms
• Hybrid mechanisms

In a serial mechanism, the robot links and joints are connected in serial fashion from the base, with one path leading out to the end effector. The PHANToM of SensAble Technology Inc. (Figure 3.1b) is a good example of a serial manipulator [26]. Parallel robots, in contrast, have many links connecting the end effector to the base in parallel fashion, and are also known as closed-loop mechanisms; the Stewart platform is a typical example.

Table 3.1 Comparison of parallel and serial mechanisms.

No  Item                      Serial               Parallel
1   Links                     Open chain           Closed chain
2   Workspace                 Large                Small
3   Structural stiffness      Low                  High
4   Stability                 Low                  High
5   Dynamic effect            Heavy                Light
6   Analysis of singularity   Simple               Complex
7   Forward kinematics        Unique solution      Difficult (multiple solutions)
8   Inverse kinematics        Multiple solutions   Unique solution

Owing to their parallel link arrangement, parallel robots have high stiffness, stability and speed, and can handle higher loads with high accuracy. In a serial mechanism, on the other hand, the whole structure is supported by the base actuator, which has to carry and move the loads, so there may be issues with stability, stiffness, accuracy and fast motion. A comparison of the characteristics of these mechanisms is given in Table 3.1 [28]. A hybrid parallel-serial mechanism incorporates both types of mechanism in the design, so that the good qualities of both serial and parallel mechanisms can be utilized.

Classification of haptic devices on the basis of degrees of freedom (DoF)

The degrees of freedom of a device are the number of independent directions of translation and rotation in space. They are one of the important factors in the design of haptic devices. The end effector of a haptic device can translate in the x, y and z directions and rotate in roll, pitch and yaw. Haptic devices may have different numbers of actuated DoF for input motion (position and orientation) and different numbers of DoF for output forces and torques. Common force-feedback haptic devices can have:

• 1-DoF input = 1-D force feedback
• 2-DoF input = 2-D force feedback
• 3-DoF input = 3-D force feedback
• 5-DoF input = 5-D force and torque feedback
• 6-DoF input = 3-D force feedback + 3-D torque feedback

Figure 3.4. Haptic devices with different numbers of actuated degrees of freedom [5]: (a) 2-DoF Pantograph; (b) 3-DoF Virtuose 6D35-45 (www.haption.com); (c) 5-DoF University of Colorado haptic device; (d) 6-DoF Delta of Force Dimension.

3.2 Important factors in the design of haptic devices

The first step in the mechanical design of a haptic device is to identify the user requirements and specifications for the device. Haptic devices present a difficult mechanical design problem: the device is required to provide a sufficiently large reachable workspace and high stiffness, while also being light and backdrivable. Since a haptic device is designed to display tactile and force/torque feedback at the end effector, the fidelity of the force/torque feedback is of utmost importance. Structural transparency is required so that the user feels only the dynamics of the mechanism being simulated and not the dynamics of the structure of the haptic device itself. These design requirements can be summarized as the haptic device being "able to display a broad range of impedances" [54]. This can be achieved through many design parameters of the mechanism.

Workspace

The workspace is the volume, and its geometrical configuration, that the haptic device (end effector) can reach. The workspace of the device depends on the mechanism, its dimensions and the configuration of its joints and links. The haptic device should provide enough workspace to fulfil the user requirements.

Stiffness

Stiffness is the ability of the haptic device to mimic a stiff virtual surface, such as in the case of milling teeth. It has been reported that a stiffness of 25 N/mm is needed for a surface to feel stiff to a user when vision is obscured [32].

Inertia

It is important that the haptic device has the lowest possible inertia, in order to increase the transparency of the device and avoid producing extra forces (due to the dynamics of the device itself). The choice of materials such as aluminium tubes, hardware such as light motors, and placing the motors at the base all reduce the inertia. A parallel mechanism design satisfies this criterion, since the actuators are mounted on the base and control the motion of the end effector through light links. Low inertia and the distribution of mass in the mechanism are the major contributors to the natural frequency of the mechanism; however, such devices are more difficult to control in a stable and robust manner because they are sensitive to noise. In some cases the inertia effects are compensated by the control system to reduce their influence. The apparent inertia of the haptic device's end effector changes with the configuration of the display mechanism, while that of the virtual tool does not; for haptic interaction to seem realistic to the user, the apparent inertia should therefore be controlled to appear like that of the virtual tool [32].

Backdrivability

Backdrivability is a very important criterion for a haptic device. The haptic interface must be able to move freely in its workspace without opposition, so that the user feels no resistance and the device seems weightless. The device should ideally produce no forces on the user's hand when there is no interaction with objects in the virtual environment. Backdrivability can be improved by removing backlash and reducing friction where possible: by actuating the joints of the mechanism through direct drive or cable drive (no gears), and by using roller bearings in all passive joints [54].

Backlash

Backlash is characterized by movement of the end effector without a corresponding change being sensed. It feels like a dead zone or void in space that occurs when the end effector is moved in one direction against an opposing force and then switched to another direction where no force opposes the motion. It is often associated with gear systems, where it is defined as the excess space between the interfacing teeth. Backlash must be kept to a minimum to preserve the backdrivability of the haptic system.

Friction

If the friction of the system is sufficiently high (greater than the human detection threshold), it degrades the force transferred to the user and thus the fidelity of the haptic interface. With the low inertia required for a highly transparent haptic interface, the effect of friction becomes more apparent than in a conventional robotic manipulator. Compensating for friction, if not done properly, may give rise to stability problems. Many efforts have been made to minimize friction effects mechanically, for example with cable-driven haptic interfaces.

Maximum exertable force and continuous force

The maximum force that the actuators can exert for a very short interval of time (a few milliseconds) is called the maximum exertable force. The force that the actuators can exert over an extended period of time is called the continuous force. The force magnitude at the end effector of the device depends on the motors and the kinematic structure.

Position resolution

Position resolution is the smallest movement over which the sensors can detect a change of position [54]. Good position resolution is one of the important factors in displaying stiff virtual walls without vibration.

Bandwidth and update rate

Bandwidth is the frequency range over which the haptic device provides feedback to the user; the range needed depends on the operation being performed. Generally, the upper limit of a useful impedance range is the stiffness required to counteract a reasonable maximum human hand force, while the lower limit is the impedance below which the human hand can no longer detect it [54]. In general, small, precise movements require higher-frequency feedback than large, more powerful movements. The update rate is the speed at which the feedback loop can be completed. The complete loop consists of sensing the device position, computing the force in the simulation, sending the force to the device, and sensing the next device position.

Transparency and fidelity

Perfect transparency implies that the forces and velocities experienced by the operator are identical to those generated by the interaction of the virtual objects. The ability of the simulator to reproduce real-world interaction is called fidelity. A high-fidelity device has high resolution, a high update rate, low latency and good transparency in the transmission of forces and torques.

Challenges in control

Control challenges are inseparable from the design of a high-performance haptic interface, so great importance is given to the control system of the interface. In a haptic control system there are two conflicting goals: performance and stability. Performance can be characterized by transparency, as discussed above. Also, since the haptic device actively generates physical energy, instability may occur that can damage hardware and even pose a threat to the human user [55]. Unfortunately, the closed-loop stability of the system is often poor, or lost altogether, when transparency is high, and vice versa [57]. Whitney (1976) modelled a manipulator as a velocity-input integrator,

and assumed that proportional position, velocity, and force feedback were implemented in discrete time [56]. He modelled the environment as a spring and derived the following stability result 0