The use of visual and non-visual cues in updating the perceived position of the world during translation

Laurence R. Harris, Richard T. Dyde, Michael R. Jenkin
Centre for Vision Research, York University, Toronto, Ontario, M3J 1P3, Canada

ABSTRACT

During self-motion the perceived positions of objects remain fixed in perceptual space. This requires that their perceived positions be updated relative to the viewer. Here we assess the roles of visual and non-visual information in this spatial updating. To investigate the role of visual cues, observers sat in an enclosed, immersive, virtual environment formed by six rear-projection screens. A simulated room was presented stereographically and shifted relative to the observer. A playing card, whose movement was phase-locked to the room's, floated in front of the subject, who judged whether this card was displaced more or less than the room. Surprisingly, perceived stability occurred not when the card's movement matched the room's displacement but when perspective alignment was maintained and parallax between the card and the room was removed. The role of the complementary non-visual cues was investigated by physically moving subjects in the dark. Subjects judged whether a floating target was displaced more or less than if it were earth-stable. To be judged as earth-stationary the target had to move in the same direction as the observer: more so if the movement was passive. We conclude that both visual and non-visual cues to self-motion, and active involvement in the movement, are simultaneously required for veridical spatial updating.

Keywords: Self motion, spatial updating, vestibular cues, visual cues to motion, translation, virtual reality

1. INTRODUCTION

1.1 Spatial updating

When people move it is necessary for them to update their perceived position in the world relative to all the objects in the world. This is equivalent to updating the position of all the objects in the world relative to them. The process is known as 'spatial updating' although, of course, the spatial position of objects does not change, only the representation of their positions. Successful spatial updating allows:

1. the detection of movement of objects if they move relative to their expected positions during a subject's movements
2. an updating of spatial memory so that objects can still be located, looked at, or used as targets for reaching
3. successful obstacle avoidance by continuously updating the locations of objects relative to the moving viewer
4. the use of objects as landmarks to aid in and to guide navigation

Accurately updating the representation of the spatial location of objects is therefore an essential task that allows effective interaction with the world during and following self motion. Spatial updating applies to the high-level representation of objects: it is the position of objects, not visual or auditory sensations, that is updated. Although vision is an important sense for locating objects, and although some aspects of spatial updating may have visual sequelae, spatial updating is not exclusively applied to visual space. Spatial updating applies to the representation of all objects regardless of how they are detected, including, for example, those whose positions are known only from auditory or tactile cues.

Human Vision and Electronic Imaging X, edited by Bernice E. Rogowitz, Thrasyvoulos N. Pappas, Scott J. Daly, Proc. of SPIE-IS&T Electronic Imaging, SPIE Vol. 5666 © 2005 SPIE and IS&T · 0277-786X/05/$15

Although spatial updating provides an essential working platform for conscious object perception, it is a 'hidden' process, of which we are normally unaware. Inasmuch as there can be said to be two streams of sensory processing, corresponding broadly to perception and to action 1,2-4, spatial updating belongs to the 'action' stream, processed primarily by the parietal cortex 5-8. Attempts to measure how accurately spatial updating is achieved have therefore concentrated on how well the spatial locations of targets are updated and made available for actions (relative to the actor). For example, Medendorp et al. 9 had subjects translate from one position to another after a visual target had been extinguished. Subjects then had to move their eyes back to the remembered (and updated) location of the original target. Subjects were very good at this, suggesting that updating was efficiently achieved by the oculomotor guidance system.

What cues are available for effective spatial updating? An alternative way of asking this question is to ask what cues inform about self motion. In Medendorp et al.'s experiments 9, as in normal everyday movements, many perceptual and motor cues were available, including visual, vestibular and efference copy cues. None of these cues provides the position-change information required for updating the spatial position of objects in a very direct way, however. Although many cues can potentially inform spatial updating, here we limit ourselves to three primary cues: visual, vestibular and efference copy.

1.2 Visual cues

The visual cues generated by self motion that are available for spatial updating are both dynamic and static. They depend on the type of motion (rotation or translation), the distance of each object from the observer, the direction relative to the direction of translation or axis of rotation, and the pattern of eye movements occurring during the movement (see 10 for a review). Static visual cues include the new visual directions of objects after a subject has moved. An object that was on the left might now be on the right, for example.
However, the job of spatial updating is precisely to interpret these positions, so they can hardly serve as input to the process! More significant are the dynamic cues from which displacement can be derived and from which the expected static visual position of objects can be calculated. The dynamic visual cues have been referred to as optic flow 11, of which one particular component is parallax: the relative movement of different objects in the visual field depending on their relative distances from the viewer.

1.3 Vestibular cues

The vestibular system transduces accelerations, which then need to be double-integrated to provide a signal indicating the amount of angular or linear displacement. The eye movements evoked by self motion in the dark (the vestibulo-ocular reflex), in response to either linear 12 or angular 13 head movement, need a position signal. The fact that such eye movements are generally accurate indicates that a position signal is available. However, the pattern of eye movements reveals two failings of the vestibular system: it can operate effectively only over relatively short periods of time if accelerations are not maintained 14, and it cannot distinguish accelerations due to gravity from those due to self motion 15, 16.

1.4 Efference copy

Efference copy is a central representation of the command to make an active movement. Its existence is needed to explain some aspects of perception during active processes; it was first postulated by von Holst and Mittelstaedt to explain some aspects of animal behaviour 17. Efference copy seems to play an important role in providing knowledge of eye position, since the eye can be relied upon to move accurately in response to a motor command. Efference copy can be available even before the eye movement has happened 7. Efference copy is also probably an essential contributor to interpreting the vestibular signal generated by self motion, especially the linear component 18.
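The double integration just described can be sketched numerically. This is an illustrative reconstruction, not the authors' code; it assumes a regularly sampled linear-acceleration trace and zero initial velocity and position:

```python
import numpy as np

def displacement_from_acceleration(acc, dt):
    """Double-integrate a sampled acceleration trace (cm/s^2) to displacement (cm).

    Cumulative trapezoidal integration; assumes the movement starts from rest,
    so initial velocity and position are taken as zero.
    """
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2) * dt))
    pos = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) / 2) * dt))
    return pos

# Example: a constant 100 cm/s^2 acceleration for 0.5 s should give
# s = a * t^2 / 2 = 12.5 cm.
dt = 0.001
acc = np.full(501, 100.0)
print(displacement_from_acceleration(acc, dt)[-1])  # ~12.5
```

In practice the vestibular estimate drifts, since any bias in the transduced acceleration is integrated twice; this is one reason the signal is only useful over short intervals.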


Fig. 1: Method for assessing perceptual updating. The subject is moved in the direction of the white arrow. If the observer's motion is overestimated, the expected direction of an earth-fixed target will be direction 'a'; if the motion is accurately estimated, direction 'b'; if the motion is underestimated, direction 'c'. A truly earth-stationary object will therefore appear to move in the directions of the arrows in the lower diagram (from the expected to the observed positions). By adding motion to objects viewed during self motion, this apparent movement can be nulled.
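The logic illustrated in Fig. 1 can be sketched numerically. This minimal illustration (function name and values are ours, not from the paper) computes the apparent angular drift of an earth-stationary target when self motion is perceived with some gain:

```python
import math

def apparent_drift_deg(observer_disp_cm, perceived_gain, target_dist_cm):
    """Apparent angular drift of an earth-stationary target (deg).

    Positive values mean the target appears to drift in the same direction
    as the observer's motion (self motion over-estimated, lines 'a');
    negative values mean it appears to drift against (under-estimated,
    lines 'c'); zero corresponds to accurate estimation (direction 'b').
    """
    # Actual direction of the target after a rightward observer step.
    actual = math.degrees(math.atan2(-observer_disp_cm, target_dist_cm))
    # Direction expected if the observer's motion is scaled by perceived_gain.
    expected = math.degrees(math.atan2(-perceived_gain * observer_disp_cm,
                                       target_dist_cm))
    return actual - expected

# Over-estimation (gain > 1): target seems to move with the observer.
print(apparent_drift_deg(10, 1.4, 130) > 0)  # True
# Under-estimation (gain < 1): target seems to move against the observer.
print(apparent_drift_deg(10, 0.6, 130) < 0)  # True
```

Nulling this drift by adding real motion to the target, as described below, recovers the perceived self-motion gain.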

1.5 Spatial updating and self motion

Due to the importance of the representation of space to action, spatial updating has been investigated extensively in motor systems, specifically as it applies to the oculomotor 19, 20 and reaching 21 systems. Less is known about the nature of spatial updating for more general perceptual tasks, or about the relationship between perceptual updating and the perception of self motion. Are errors in perceptual updating associated with errors in the perception of self motion, and can the accuracy of spatial updating be used as a monitor of how accurately self motion is coded by the brain?

This last point was the original aim of this project. We wanted to know how the perceived distance of travel during self motion varied with the available cues. We devised a system for looking at this by having subjects judge the motion of a small target, expecting to be able to use this task to measure how accurately subjects estimated their own motion. We reasoned that if a person over-estimated their motion, the expected position of an earth-stationary target would vary more than its actual position (lines 'a' in figure 1). Since an earth-stationary target would not be seen at this expected position but instead in direction 'b,' it would be judged to have moved in the same direction as the viewer. Similarly, if a person under-estimated their motion, the expected position would vary less than its actual position (lines 'c' in figure 1) and an earth-stationary target would appear to move in the opposite direction to the viewer's motion. By nulling the target's perceived movement, either by adjustment or by judging the direction of perceived movement of stimuli that were actually moving by various amounts, we hoped to obtain an objective measure of perceived self motion under a variety of interesting conditions.

However, there are potential confounds to this argument. Spatial updating and perceived self motion may not have access to the same signals and may yield different estimates. Furthermore, successful performance requires that subjects can make accurate judgments concerning the linear displacement of objects in space. A given linear displacement of a person is equivalent to a linear displacement, in the opposite direction, of all the objects and features that make up the world. As objects are viewed at further and further distances, the angular displacements caused by a person's movement become smaller and smaller.


To convert these retinal angles to external linear distances requires knowledge of how far away the object is. This non-linear transformation is subject to distortions if distances are misjudged or if perceptual space is not Euclidean 22.

In this study we looked at the role of visual, non-visual and efference copy cues in updating the position of objects during self motion, using perceptual judgements about object location. Experiment 1 examines the role of passive visual cues. Experiment 2 examines the role of vestibular cues and compares active and passive movements, thus assessing the role of efference copy (which is present only in active, self-generated motion and not during passive transport).
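The conversion described here can be made concrete with a short sketch (function names ours): recovering a linear displacement from a retinal angle requires the viewing distance, and a misjudged distance distorts the recovered displacement proportionally:

```python
import math

def retinal_angle_deg(linear_disp_cm, distance_cm):
    """Angular displacement produced by a lateral linear displacement
    of an object viewed at a given distance."""
    return math.degrees(math.atan(linear_disp_cm / distance_cm))

def linear_disp_cm(angle_deg, distance_cm):
    """Invert the mapping: the object's distance must be known."""
    return distance_cm * math.tan(math.radians(angle_deg))

# The same 10 cm displacement subtends ever-smaller angles with distance:
for d in (100, 200, 400):
    print(d, round(retinal_angle_deg(10, d), 2))
# ... and a misjudged distance distorts the recovered displacement:
angle = retinal_angle_deg(10, 200)           # true distance: 200 cm
print(round(linear_disp_cm(angle, 100), 2))  # distance halved -> 5.0 cm
```

This is the geometric sense in which perceptual distortions of distance propagate into errors in judging the linear displacement of objects.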

2. EXPERIMENT 1

2.1 Methods

2.1.1 Visually simulated self motion

Fig. 2. The Immersive Visual environment at York (IVY) in which experiment 1 was carried out. A, B: exterior views. C: side view of the arrangement of the projectors used to present a display on the walls of a room made entirely of projection screens. The projectors for the ceiling and floor surfaces are reflected off front-silvered mirrors (visible in A).

We generated visual cues to self motion in the absence of physical or efference copy cues by simulating passive self motion in the Immersive Visual environment at York (IVY 25; see figure 2). When looking at a realistic painting or photograph it is best to view it from a particular position, so that the geometry correctly corresponds to the real world: the horizon should be at eye height and the angular subtense of the objects arranged to be correct for their simulated distances, i.e. by reference to the horizon and ground plane 23, 24. This is unfortunately impossible while watching TV or a movie, because the camera moves dramatically relative to the scene. But we can approximate these contingencies in virtual reality.


Our experiments were therefore performed in a virtual reality setting. Subjects sat in IVY: a 243 x 243 x 243 cm cubic room formed by six rear-projection screens (including floor and ceiling; see figure 2). Subjects rested their chin on an unseen padded support to maintain a constant viewing position. Eight synchronized SONY CRT projectors (one for each wall, and two each for the floor and ceiling surfaces) generated images at 96 Hz and a resolution of 1280 x 1024. Images were generated by a tightly coupled cluster of nine Linux workstations equipped with NVIDIA FX3000G graphics cards, which generated synchronized video for each of the surfaces of IVY. Subjects viewed these images through CrystalEyes stereo goggles at 48 Hz to each eye (see 25 for further details).

We simulated a virtual room 243 x 243 x 243 cm (corresponding, coincidentally, with the actual size of the screens that formed the physical room; figure 3). The subject sat on a chair with the facing wall 201.5 cm in front of them (i.e. they were offset towards the rear portion of the virtual room). The virtual room was oscillated sinusoidally at 0.5 Hz, ±10 cm, with a maximum velocity of about 30 cm/s and a maximum acceleration of about 100 cm/s².

The target object was the virtual image of a playing card, 6.3 cm x 8.3 cm. This image was selected as a familiar object of known size which had high luminance contrast with the visual background and which contained features of strong luminance contrast (black spades on a white background). The target was initially positioned at the observer's eye height, directly in front of the subject. The phase, frequency and direction of the card's motion were locked to those of the virtual room. During the experiment, the card was visible for only half of its sinusoidal motion. This allowed the subjects to base their judgment solely on the relative direction of the card's and room's movement.
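As a consistency check on the stated motion profile (this check is ours, not part of the original methods), the peak velocity and acceleration of a sinusoid follow directly from its amplitude A and frequency f, via v_max = 2πfA and a_max = (2πf)²A; the quoted 30 cm/s and roughly 100 cm/s² are these values rounded:

```python
import math

A = 10.0   # amplitude, cm (the ±10 cm oscillation)
f = 0.5    # frequency, Hz

omega = 2 * math.pi * f    # angular frequency, rad/s
v_max = A * omega          # peak velocity of x(t) = A * sin(omega * t)
a_max = A * omega ** 2     # peak acceleration

print(round(v_max, 1), round(a_max, 1))  # 31.4 98.7
```

The same arithmetic applies to the hoverbed motion in experiment 2, which used an identical 0.5 Hz, ±10 cm profile.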

Fig. 3 Subjects sat in a virtual room simulated using IVY (see Fig. 2). This illustration shows the perceived position of a subject in the room, seen from behind and above. The room was moved relative to the subject, shown in three snapshot views here (a, b, c). A card floated in front of the subject and the subject's task was to set it room-stationary. The card's correct (room-stationary) position is shown in each panel.


2.1.2 Procedure

Subjects were instructed to fixate the playing card and to indicate whether the card appeared to be moving in one direction or the other relative to the room. The card usually did move relative to the room and the subject made judgements about its movement. For example, during left/right room movement, the card might be visible only during the rightward motion and disappear during the room's leftward return. If the card appeared to be moving less than the room, the subject reported it as moving leftward relative to the room. If the card appeared to move more than the room, the card was reported as moving rightward relative to the room. The actual movement of the card was expressed relative to the movement of the room, and this ratio was described as a gain. Thus a gain of 1 describes the card staying at the same point in the room; a gain of 0 corresponds to the card remaining fixed relative to the subject.
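This gain convention can be written out explicitly; a minimal sketch (function name ours), with displacements in the world frame and the subject physically stationary:

```python
def card_position(room_disp, gain):
    """World-frame card displacement given the room's displacement.

    gain = 1.0 -> card moves with the room (room-stationary);
    gain = 0.0 -> card does not move, i.e. stays fixed relative to the
    physically stationary subject.
    """
    return gain * room_disp

room_disp = 10.0                      # room shifted 10 cm
print(card_position(room_disp, 1.0))  # 10.0: room-stationary card
print(card_position(room_disp, 0.0))  # 0.0: subject-stationary card
# Relative to the room, the card lags by (1 - gain) * room_disp:
print(room_disp - card_position(room_disp, 0.6))  # 4.0 cm behind the room
```

Intermediate gains therefore introduce parallax between card and room, which is what the subjects judged.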

2.2 Results

Fig. 4 Three typical subjects' judgements of the movement of the card relative to the moving virtual room (see Fig 2). The percentage of time that the card was judged as moving to the left (vertical axis, 0-100%) is shown for when the room was moving left (solid lines and filled symbols) and when the room was moving right (dashed lines and open symbols), as a function of the amount of movement of the card (horizontal axis, gain 0.0-1.0). The movement of the card is expressed as a gain: the card's movement as a fraction of the room's movement. Gains corresponding to 'subject stationary' (0), 'retinal matching' and 'room stationary' (1) are marked along the top.

Fig. 4 shows psychometric functions obtained from three subjects asked to judge how the card appeared to be moving. Functions were obtained when the room was moving left (solid lines) and right (dashed lines). As the card's gain approached subject-stationary, the card was seen to move left relative to a rightward-moving room (dashed lines) or right relative to a leftward-moving room (solid lines). Curiously, when the card was actually room-stationary, it appeared to move leftward in a left-moving room (solid lines) and rightward in a right-moving room (dashed lines). The movement of the card that was judged as moving neither with nor against the room (i.e. the 50% point on the vertical axis) lay between the room-stationary and subject-stationary values.

2.3 Discussion

These results have implications for the perceived magnitude of self motion, the use of self motion information for spatial updating, and the geometry of extra-personal space. The results shown are compatible with (i) the magnitude of self motion being under-estimated, (ii) an over-estimation of the movement of the card, (iii) systematic errors in spatial updating, or (iv) a non-Euclidean distortion of perceptual space. Comparing judgments of the motion of cards at different distances from the observer will enable us to distinguish between these possibilities, since over- or under-estimates of self motion would have predictable effects at different distances (constant amplitude of target distance) whereas distortions of space might vary in some other way. Of course, several factors could be at play at once.


3. EXPERIMENT 2

3.1 Introduction

The contribution of physical and efference copy cues to spatial updating was measured.

3.2 Methods

3.2.1 Physical self motion

Fig. 5. These diagrams show how subjects were moved in experiment 2. A: A dot, displayed on a monitor and reflected off a Plexiglas screen in front of the subject, provided a virtual target 130 cm from the observer, seen against a ceiling background 203 cm away. B: Subjects were moved by a 'puller.' Because the bed on which the subjects lay was mounted on air-bearings and movement was constrained by springs (as shown), reliable sinusoidal movement could be generated.

A hoverbed was constructed by attaching four downwards-pointing air-bearings at the corners of a platform mounted on a guide rail (Fig 5a). The platform was padded to allow observers to lie comfortably on their backs looking straight up. Their heads were held by a fixed, recessed, cushioned support; the observer and bed could thus be moved as a single rigid structure. The hoverbed was fitted with a motion sensor which sampled the position of the bed at 50 Hz with a resolution of 0.021 cm. Observers used their left hand to operate the buttons of a computer mouse, which acted as their response device. With their right hand they could pull themselves along a support rail if the trials were 'ACTIVE' (Fig 5a). The hoverbed was attached to anchor points at the head and foot by heavy-duty springs (Fig 5b), which held it in a central position about which it could be oscillated.

For the active motion condition, observers propelled themselves 'up and down' (from their perspective) at 0.5 Hz with an amplitude of ±10 cm, pulling themselves with their right hand along a handle fixed to the floor parallel to the guide rail, in time to a metronome. For the passive motion condition they were moved at the same frequency and amplitude by the experimenter using a cable attached to the hoverbed (Fig. 5b).

3.2.2 Stimulus device

Suspended directly above the hoverbed, 48 cm from the observer, was a thin sheet of Plexiglas parallel to the direction in which the hoverbed moved. The sheet was tilted to form a partially reflecting but transparent surface (Fig 5a). The sheet reflected a computer screen positioned beside and below ('behind') the observer. The normally lit laboratory in which they were lying could be seen clearly through and on each side of the Plexiglas. The visual field was largely filled by a false ceiling comprising textured tiles at a distance of 203 cm from the observer's eyes.
The computer display showed a white ellipse which, when reflected, was foreshortened so as to appear to the observer as a 0.44 deg circle at a distance of 130 cm. The remainder of the computer screen and the surrounding floor were dark-masked so that no other reflected images were visible. The output from the motion sensor attached to the hoverbed was taken as input by a computer program which shifted the computer-displayed ellipse by a known proportion of the hoverbed's motion. The ratio of the linear motion of the stimulus to that of the hoverbed could be adjusted


under experimental control. The stimulus was moved so as to remain locked in phase to the bed's movement. Observers saw the textured ceiling of a room with a small, virtual dot floating in space 130 cm in front of them and 73 cm in front of the ceiling background. This dot appeared to move along their body centre line as they rode 'up and down' on the hoverbed. Apart from the background ceiling there were no other proximal objects to which the motion of the dot could be compared.

The movement of the dot can be described as a ratio of its actual movement to the movement required to keep it earth-stationary. This ratio is referred to as the gain. A gain of unity corresponds to the geometrically correct, earth-stationary target. A gain of zero corresponds to the dot moving with the observer, i.e. an 'observer stationary' dot. Gains between zero and one correspond to a target that moves partially with the observer. A gain greater than one corresponds to a target moving in the opposite direction to the observer's motion.

3.2.3 Procedure

Observers fixated the virtual dot and judged the motion of the stimulus relative to their own motion. The question posed was: "is the stimulus moving in the same direction as your motion or in the opposite direction?" For trials conducted in the light, the stimulus moved with a gain in the range 0.7 to 1.12 at intervals of 0.03. In the dark condition the range was 0.22 to 0.85 at intervals of 0.045. In both conditions there were 15 stimulus gains. A different range and spacing of gains was necessary in the light and dark to ensure that the range of presented stimuli effectively bracketed the group's responses. A stimulus moving at each gain within the appropriate range was presented 10 times in a randomized sequence of 150 trials per trial block.

3.3 Results

Psychometric functions were constructed from graphs of the percentage of times 'moving with' was chosen as a function of the stimulus' movement gain.
The 50% point of these functions indicated the balance point between 'moving with' and 'moving against' and was taken as corresponding to the target being perceived as earth-stationary during the subject's movement. During physical motion in the light, this point was close to earth-stationary. During movement in the dark the points were closer to those of the vision-only condition used in Experiment 1. Passive movement produced greater errors relative to earth-stationary than active movement.

3.4 Discussion

These data indicate a role for efference copy, which would, of course, have no influence on the geometry of the representation of space.
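The balance point can be recovered from such data by locating the 50% crossing of the psychometric function; a minimal sketch using linear interpolation on fabricated, illustrative proportions (not the paper's measurements; function and variable names are ours):

```python
import numpy as np

def balance_point(gains, p_with):
    """Gain at which 'moving with' and 'moving against' are chosen equally
    often, found by linear interpolation at the 50% crossing.

    Assumes p_with decreases as gain increases: a higher-gain target moves
    less with the observer, so it is reported 'moving with' less often.
    """
    gains = np.asarray(gains, float)
    p = np.asarray(p_with, float)
    i = np.argmax(p < 0.5)  # index of the first point below 50%
    g0, g1, p0, p1 = gains[i - 1], gains[i], p[i - 1], p[i]
    return g0 + (0.5 - p0) * (g1 - g0) / (p1 - p0)

# Fabricated example: proportions of 'moving with' over a range of gains.
gains  = [0.22, 0.40, 0.58, 0.76, 0.85]
p_with = [0.95, 0.80, 0.45, 0.15, 0.05]
print(round(balance_point(gains, p_with), 3))  # 0.554
```

A balance point below 1.0, as in this fabricated example, corresponds to the pattern reported for movement in the dark: the target had to move with the observer to be judged earth-stationary.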

4. DISCUSSION

4.1 Summary

The first experiment showed that visual cues to motion, even though adequate to evoke a convincing sensation of self motion known as vection 26, are not adequate to correctly update the position of objects in three-dimensional perceptual space. This means that objects are subject to illusory motion if self motion is simulated by visual cues alone. The distortion of perceptual space under these conditions is highly consequential for virtual reality, simulators and the film industry, where usually only visual cues are used to create an impression of self motion. The second experiment indicated that not only are non-visual cues necessary to complete a correct, essentially Euclidean representation; active involvement in the movement is also required to obtain the best performance.


We started this paper by listing some applications of spatial updating, viz. the detection of movement of objects, updating the remembered location of targets for reaching, obstacle avoidance, and aiding navigation. How are these applications affected by these findings?

4.2 Detection of movement

All our observations, even with active involvement in the generation of the subject's movement, suggested an under-estimate of self motion. Consequently there is a tendency for earth-stationary objects to be seen as moving opposite to the direction of the observer's motion. The corollary is that the background appears to move in the same direction as the observer (which we have shown elsewhere to be the case 27).

4.3 Targets for reaching

Our observations have revealed large perceptual errors when visual cues are used to simulate motion. However, it is not possible to extrapolate from perceptual errors to action errors 28-30. We expect that there will be at least a tendency for reaching errors to be in the direction of the perceptual errors, but eye movements to targets after self displacement are quite accurate 19.

4.4 Use of visual landmarks for obstacle avoidance and as an aid to navigation

Our experiments were designed around the logic that self motion information is needed to update the expected, or remembered, location of targets. The systematic inaccuracies that we have revealed in remembering the location of objects during and after observer motion, especially when vestibular cues are not available, suggest that object location could serve only as a very crude guide to navigation.

ACKNOWLEDGEMENTS Supported by NASA Co-operative Agreement NCC9-58 with the National Space Biomedical Research Institute, the Canadian Space Agency, and the Centre for Research in Earth and Space Technology (CRESTech). We would like to acknowledge the generous contributions of CFI and ORDCF that have made the purchase of IVY possible. MJ and LRH are recipients of NSERC operating grants. The technical assistance of A. German, A. Hogue and M. Robinson is also gratefully acknowledged.

REFERENCES


1. Goodale, M. A. and Milner, A. D., "Separate visual pathways for perception and action", Trends Neurosci., 15, 1992.
2. Milner, A. D. and Goodale, M. A., "Visual pathways to perception and action", Prog. Brain Res., 95, 1993.
3. Milner, A. D. and Goodale, M. A., The visual brain in action, Oxford University Press, Oxford, 1995.
4. Harris, L. R. and Jenkin, M., Vision and action, Cambridge University Press, Cambridge, UK, 1998.
5. Colby, C. L., Duhamel, J. R., and Goldberg, M. E., "Oculocentric spatial representation in parietal cortex", Cereb. Cortex, 5, 1995.
6. Desmurget, M., Epstein, C. M., Turner, R. S., Prablanc, C., Alexander, G. E., and Grafton, S. T., "Role of the posterior parietal cortex in updating reaching movements to a visual target", Nature Neuroscience, 2, 1999.
7. Duhamel, J. R., Colby, C. L., and Goldberg, M. E., "The updating of the representation of visual space in parietal cortex by intended eye movements", Science, 255, 1992.
8. Medendorp, W. P., Goltz, H. C., Vilis, T., and Crawford, J. D., "Gaze-centered updating of visual space in human parietal cortex", J. Neurosci., 23, 2003.
9. Medendorp, W. P., Tweed, D. B., and Crawford, J. D., "Motion parallax is computed in the updating of human spatial memory", J. Neurosci., 23, 2003.

10. Harris, L. R., "Visual motion caused by movements of the eye, head and body", in: Smith, A. T. and Snowden, R. J. (eds.) Visual Detection of Motion, 397-435, Academic Press, London, 1994.
11. Gibson, J. J., The perception of the visual world, Houghton Mifflin, Boston, 1950.
12. Baloh, R. W., Yue, Q., and Demer, J. L., "The linear vestibulo-ocular reflex in normal subjects and patients with vestibular and cerebellar lesions", J. Vestib. Res.-Equilib. Orientat., 5, 1995.
13. Buettner, U. W., Henn, V., and Young, L. R., "Frequency response of the vestibulo-ocular reflex (VOR) in the monkey", Aviation, Space and Environmental Medicine, 52, 1981.
14. Wilson, V. J. and Jones, G. M., Mammalian vestibular physiology, Plenum, New York, 1979.
15. Mayne, R., "A systems concept of the vestibular organs", in: Kornhuber, H. H. (ed.) Handbook of Sensory Physiology. Vestibular System, VI part 2, 493-580, Springer-Verlag, New York, 1974.
16. Seidman, S. H., Telford, L., and Paige, G. D., "Tilt perception during dynamic linear acceleration", Exp. Brain Res., 119, 1998.
17. Von Holst, E. and Mittelstaedt, H., "Das Reafferenzprinzip (Wechselwirkungen zwischen Zentralnervensystem und Peripherie)", Die Naturwissenschaften, 37, 1950.
18. Merfeld, D. M., Zupan, L., and Peterka, R. J., "Humans use internal models to estimate gravity and linear acceleration", Nature, 398, 1999.
19. Israel, I. and Berthoz, A., "Contribution of the otoliths to the calculation of linear displacement", J. Neurophysiol., 62, 1989.
20. Blouin, J., Labrousse, L., Simoneau, M., Vercher, J. L., and Gauthier, G. M., "Updating visual space during passive and voluntary head-in-space movements", Exp. Brain Res., 122, 1998.
21. Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., and Crawford, J. D., "Gaze-centered remapping of remembered visual space in an open-loop pointing task", J. Neurosci., 18, 1998.
22. Heelan, P. A., Space-perception and the philosophy of science, Univ. Calif. Press, Berkeley, Calif., 1983.
23. Koenderink, J. J., "Pictorial relief", Phil. Trans. R. Soc. Lond. A, 356, 1998.
24. Koenderink, J. J., van Doorn, A., and Kappers, A. M. L., "Pictorial relief", in: Harris, L. R. and Jenkin, M. R. (eds.) Seeing spatial form, 37-51, Oxford University Press, New York, 2005.
25. Robinson, M., Laurence, J., Zacher, J., Hogue, A., Allison, R., Harris, L. R., Jenkin, M., and Stuerzlinger, W., "Growing IVY: Building the Immersive Visual environment at York", Proc. 11th Int. Conf. on Artificial Reality and Tele-existence (ICAT), Tokyo, Japan, 2002.
26. Howard, I. P., Human Visual Orientation, John Wiley, New York, 1982.


27. Jaekl, P., Jenkin, M. R., and Harris, L. R., "Perceiving a stable world during active rotational and translational head movements", Exp. Brain Res., 2005.
28. Goodale, M. A., Milner, A. D., Jakobson, L. S., and Carey, D. P., "A neurological dissociation between perceiving objects and grasping them", Nature, 349, 1991.
29. Goodale, M. A., "Perceiving the world and grasping it: is there a difference?", Lancet, 343, 1994.
30. Aglioti, S., DeSouza, J. F. X., and Goodale, M. A., "Size-contrast illusions deceive the eye but not the hand", Curr. Biol., 5, 1995.
