Human Factors in Virtual Environments

Introduction
Virtual environments (VEs) are envisioned as systems that will enhance the communication between humans and computers. If virtual systems are to be effective and well received by their users, considerable human-factors research needs to be accomplished. It is the capabilities and limitations of the user that will often determine the effectiveness of virtual worlds.

Human Factors Research for VR: Human Performance Efficiency
How can the efficiency of human task performance in virtual worlds be maximized? In many cases, the task will be to obtain and understand information portrayed in the virtual environment.

The Human Factor in VEs: Why are human factors important?
1. Perceptual and motor capabilities determine the range within which input and output technologies should perform.
2. Human senses are informational channels with different possible rates of information transfer.
3. Information must be presented and displayed in ways that facilitate understanding.
4. Intersensory interactions, adaptation, and simulator sickness (cybersickness) must be kept in mind.
5. The VR system's affective influence depends on how it addresses the user's capabilities, and should be predicted.

Effectiveness Measures
To determine the effectiveness of a VE, a means of assessing human performance efficiency in virtual worlds is required. Human-virtual environment interactions are complex and may require multi-criteria measures of effectiveness. Contributing factors include:
1. the navigational complexity of the VE,
2. the degree of presence provided by the virtual world, and
3. the users' performance on benchmark tests.

Human factors involved in measuring the effectiveness of human performance in the VE (diagram interpretation): if individuals cannot effectively navigate in VEs, then their ability to perform required tasks will be severely limited.
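The three contributing factors above can be combined into a single multi-criteria score. The sketch below is purely illustrative: the weighting scheme, the 0-to-1 normalization, and the specific weights are assumptions, not anything proposed in the literature cited here.

```python
def ve_effectiveness(nav_ease, presence, benchmark, weights=(0.3, 0.3, 0.4)):
    """Illustrative weighted combination of the three contributing factors.

    nav_ease:  0..1, ease of navigation (inverse of navigational complexity)
    presence:  0..1, normalized presence rating for the virtual world
    benchmark: 0..1, normalized performance on benchmark tests
    weights:   hypothetical importance weights; they must sum to 1
    """
    w_nav, w_pres, w_bench = weights
    return w_nav * nav_ease + w_pres * presence + w_bench * benchmark

# A VE that is easy to navigate (0.8), moderately immersive (0.6),
# with decent benchmark results (0.7) scores 0.70 under these weights.
score = ve_effectiveness(0.8, 0.6, 0.7)
```

Any real assessment would need validated instruments for each factor; the point of the sketch is only that a multi-criteria measure forces explicit decisions about how the factors trade off.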
Operation of a VE system that provides a high degree of presence is likely to be better accomplished than operation of one where such perceptions are not present.

Factors That Influence Human Performance in VEs
These include:
1. task characteristics,
2. user characteristics,
3. design constraints imposed by human sensory and motor physiology,
4. integration issues with multimodal interaction, and
5. the potential need for new visual, auditory, and haptic design metaphors uniquely suited to virtual environments.

Task Characteristics
The nature of the tasks being performed in the VE directly influences how effectively humans can function in the virtual world. To justify the use of VE technology for a given task, when compared to alternative approaches, the use of a VE should improve task performance when transferred to the real-world task, because the VE system capitalizes on a fundamental and distinctively human sensory, perceptual, information-processing, or cognitive capability.

Task Characteristics: Case Study
In investigating the benefits of stereo cues in a virtual pick-and-place task, Kim et al. (1993) found that a stereoscopic display was far superior to a monoscopic display. However, the extent of improvement was dependent on the task, visibility, and learning factors. Kim et al. (1993) argue that when tasks become more complex, and the monocular and cognitive cues provided are insufficient or lacking, stereoscopic cues will enhance performance.

Task Characteristics: Case Studies
In medicine, benefits of interactive displays, such as the ability to interactively explore complex spatial and temporal anatomical relationships, have also been cited (McConathy & Doyle, 1993; Meyer & Delaney, 1995; Wright, Rolland, & Kancherla, 1995). In fitness, highly interactive exercise environments have been shown to produce significantly more mechanical output (i.e., calories burned per minute) and promote greater and more consistent participation than less interactive environments (Cherry, 1995).
Task Characteristics: Case Study
Williams, Wickens, and Hutchinson (1994) found that interactivity had a differential effect that was dependent on task workload. In their study, interactivity enhanced human performance on a navigational training task under normal workload conditions. When the workload was increased and mental rotation demands were recruited to orient a fixed north-up map, however, individuals trained in the interactive condition performed less effectively than those who studied a map. This suggests that interactivity is beneficial to the extent that it is maintained within human information-processing limitations (Card, Moran, & Newell, 1983).

User Characteristics
VE users are already known to become lost in virtual worlds. McGovern (1993) found that operators of teleoperated land vehicles, even when using maps and landmarks, have a propensity for becoming lost. Knowledge acquisition from maps is more challenging for some individuals than for others and has been found to be associated with high visual/spatial ability (Thorndyke & Statz, 1980).

How can low-spatial users be assisted in maintaining spatial orientation within virtual worlds? One study indicates that, although low-spatial individuals are unable to mentally induce the structure of multidimensional complex systems, they are capable of recognizing the structure of systems when the systems are well organized and when focus is placed on acquiring their structure (Stanney & Salvendy, 1994). Initial interactions with VEs by low-spatial individuals may thus best be focused on exploring the system's structure (i.e., its layout) rather than on task accomplishment, until users have recognized the spatial structure of the virtual world.
If task workload is high during the initial stages of system use, it is likely that low-spatial individuals will have limited ability to generate an accurate spatial representation.

User Characteristics
Deficits in perception and cognition, which are often experienced by the elderly (Birren & Livingston, 1985; Fisk & Rogers, 1991; Hertzog, 1989; Salthouse, 1992), may lead to a reduction in the information perceived from VE scenes. Older individuals generally experience lower visual acuity and reduced contrast sensitivity, which could limit sensory input from a virtual environment.

VE Design Constraints Related to Human Sensory Limitations
Without a foundation of knowledge of the abilities and limitations imposed by human sensory and motor physiology, there is a chance that VE systems will not be compatible with their users. The physiological and perceptual issues that directly impact the design of VEs include:
1. visual perception,
2. auditory perception, and
3. haptic and kinesthetic perception.

Human Visual System
Our visual system processes information in two distinct ways:
1. Conscious processing: looking at a photograph, or reading a book or map, requires conscious visual processing and hence usually requires some learned skill.
2. Preconscious processing: our basic ability to perceive light, color, form, depth, and movement. This processing is more autonomous, and we are less aware that it is happening.

Human Visual System
Eyes are complicated organs in which specialized cells form structures that perform several functions: the pupil acts as the aperture, where muscles control how much light passes; the crystalline lens focuses light by using muscles to change its shape; and the retina is the workhorse, converting light into electrical impulses for processing by our brains. Our brain performs visual processing by breaking the neural information down into smaller chunks and passing it through several filter neurons.
Some of these neurons detect only drastic changes in color; other neurons detect only vertical edges or horizontal edges.

Human Visual System: Eye Characteristics
The physical characteristics of the eye strongly affect the performance of the human visual system. Pupil characteristics are the most relevant to HMD design: HMDs must account for both the size and the location of the pupil of the eye. The pupil can vary in diameter from 2 mm in bright light to 7 mm in darkness. Under normal daytime indoor conditions, 4 mm is a good average pupil diameter to use in HMD design. The pupil, however, is not stationary: the eye rotates about a point roughly 12 mm behind the pupil. Easy rotation angles are 7.5° (H) and +0°/-30° (V), while maximum rotation angles are 15° (H) and +30°/-35° (V). Accounting for a 4 mm pupil diameter and easy eye motion means that HMD designs must have an exit pupil, located at the eye pupil, of roughly 5 mm (H) and +3 mm/-9 mm (V).

Human Visual System: Stereoscopic Vision
We obtain stereoscopic cues by extracting relevant depth information from a comparison of the left and right views coming from each of our eyes.

Human Visual System: Depth Information
Depth information is conveyed in many different ways.
1. Static depth cues include interposition, brightness, size, linear perspective, and texture gradients.
2. Motion depth cues come from the effect of motion parallax, where objects closer to the viewer appear to move more rapidly against the background when the head is moved back and forth.
3. Physiological depth cues convey information in two distinct ways:
   i. accommodation, which is how our eyes change their shape when focusing on objects at different distances, and
   ii. convergence, which is a measure of how far our eyes must turn inward when looking at objects closer than 20 feet.

Human Visual System: Accommodation/Stereo
The visual cues pertaining to depth perception include:
1. occlusion,
2. stereo,
3. accommodation, and
4. vergence.
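The exit-pupil allowance quoted above follows from simple geometry: when the eye rotates about a point roughly 12 mm behind the pupil, the pupil itself translates sideways. The sketch below only illustrates that translation; it does not attempt to reproduce the full exit-pupil design numbers, which also fold in the pupil diameter and other margins.

```python
import math

EYE_ROTATION_RADIUS_MM = 12.0  # center of rotation sits ~12 mm behind the pupil

def pupil_translation_mm(rotation_deg):
    """Lateral displacement of the pupil when the eye rotates by rotation_deg."""
    return EYE_ROTATION_RADIUS_MM * math.sin(math.radians(rotation_deg))

# An "easy" 7.5 deg horizontal rotation moves the pupil about 1.6 mm to each side;
# an easy 30 deg downward glance moves it about 6 mm.
easy_horizontal = pupil_translation_mm(7.5)   # ~1.57 mm
easy_downward = pupil_translation_mm(30.0)    # ~6.0 mm
```

This is why an HMD exit pupil must be oversized relative to the 4 mm average eye pupil: it has to cover the pupil's travel across the expected range of gaze angles.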
If an image is to look and feel real, it must contain all four depth cues. The emphasis placed on each of these depth cues in an HMD design depends on the application and on how long the wearer will be using the HMD.

Human Visual System: Accommodation/Stereo
Occlusion (closer objects blocking more distant objects) is the most important cue. It is the only depth cue for objects beyond 2-3 m from the observer and is extremely important when objects, or the user, are in motion. Stereo (a different view seen by the left and right eyes) becomes the next most important cue for objects closer than 3 m. Depth through stereo can be achieved without accommodation, as is the case in most HMDs, and without vergence, as with autostereograms. Both of these prove unsatisfactory for extended-use applications because of eye fatigue.

Human Visual System: Accommodation/Stereo
Accommodation (changing the focus of the eye for objects at varying distances) is very important for the long-term comfort of the HMD user. On average, a person changes focus several times each minute, and each refocus exercises the eye muscles. The fatigue reported by people who stare into computer monitors all day is partially caused by keeping a fixed focus for longer than is natural. Vergence (aiming the pupils directly at an object) is not a critical cue for depth perception, but it is used for objects closer than 1 m. Discomfort occurs when the stereo cue places an object in a different location than either (or both of) the accommodation or vergence cues.

Human Visual System: Accommodation/Stereo
Accommodation is the focusing of the lens of the eye through muscle movement. As subjects age, their ability (speed and accuracy) to accommodate decreases (Soderberg et al., 1983). For example, the time to accommodate from infinity to 10 inches is 0.8 seconds for a 28-year-old, while a 41-year-old will take 2 seconds (Kruger, 1980). The ability to rapidly accommodate appears to decline at about age 30, and those over 50 suffer the most.
Younger subjects (under the age of 20) accommodate faster regardless of target size. However, the ability to accommodate may begin to decline as early as age 10.

Human Visual System: Accommodation/Stereo
Accommodation in binocular viewing is both faster and more accurate than in monocular viewing, for all age groups (Fukuda et al., 1990).

Human Visual System: Sense of Visual Immersion in VR
The sense of visual immersion comes from several factors, including:
1. field of view,
2. frame refresh rate, and
3. eye tracking / position and orientation tracking.

A limited field of view can result in a tunnel-vision feeling. Frame refresh rates must be high enough to allow our eyes to blend the individual frames into the illusion of motion, and to limit the sense of latency between movements of the head and body and regeneration of the scene. Eye tracking / position or orientation tracking can solve the problem of someone not looking where their head is oriented. It can also help reduce the computational load when rendering frames, since high resolution need only be rendered where the eyes are looking.

Human Visual System: Field of View
Each eye can see 140° horizontally and 110° vertically. The vertical field of view for both eyes is 120-135°. The horizontal fields of both eyes overlap in the center (the binocular field of view); the area of overlap is 120°, with 30-35° of monocular vision on each side, giving a combined horizontal field of view of about 180-200°. A landscape image is the result of these combined images, and displays (TVs, HMDs) exploit this comfortable format. (A "typical" desktop display spans about 40° x 32°.) The visual field slowly declines with age, from nearly 180° at age 20 to 135° at age 80. Women have slightly larger visual fields than men, primarily due to differences on the nasal side (Burg, 1963).

Human Visual System: Field of View
However, when an HMD is used in motion, the rules change. For balance, the wearer needs a horizon.
Outdoors, vanishing points are necessary, while indoors a floor edge must be visible. Indoor HMD use requires a large FOV, since standing 2 meters from a wall requires downward visibility of about 45°. A smaller FOV is acceptable in many stationary uses; however, in mobile or walkthrough situations a very large FOV is critical. A minimum of about 60° vertical and 75° horizontal FOV is required for mobile use.

Human Visual System: Spatial Resolution
The relevant quantities are the number of pixels, the pixel pitch, and the angular resolution. The eyes work hard to focus a blurry or low-resolution image; when the displayed image is of low quality, out of focus, or has too few pixels, the eye strains and becomes tired. Human visual acuity is 0.5-1 minute of arc; 1 arc-minute of resolution allows you to distinguish detail of 0.01 inch at 3 feet. Matching this acuity requires a "typical" desktop display of 4800x3840 pixels (18.4 million pixels). During times of moderate movement (2-4/second), resolution can be decreased in the direction of rotation without adversely affecting the HMD wearer's viewing quality.

Human Visual System: Color Resolution
The interest in most HMDs is in displaying daylight (photopic) imagery, preferably in color. Some military and surveillance users require low, monochrome light levels to maintain night-vision (scotopic) capability. In the photopic sensitivity region, the eye can see an intensity variation of about 1000 to one for each of the three colors (red, green, and blue). This translates to 10 bits per color, or about 30 bits of color depth for each pixel resolvable by the eye. Not all applications require such depth, and this should be noted during the design and purchase of an HMD.

Human Visual System: Color Resolution
A unique aspect of this color depth is that it is not needed throughout the viewing area. In fact, as viewing angles extend beyond 60°, many colors can no longer be detected.
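The spatial-resolution figures quoted above can be reproduced with a short calculation: at the best-case acuity of 0.5 arc-minutes per pixel, a 40° x 32° "typical" desktop field of view needs 4800 x 3840 pixels. The field-of-view and acuity values are the ones stated in the text.

```python
ACUITY_ARCMIN = 0.5    # best-case end of the 0.5-1 arc-minute acuity range
FOV_DEG = (40, 32)     # "typical" desktop display field of view (H, V), degrees

# Pixels needed = field of view in arc-minutes / arc-minutes per pixel.
pixels = tuple(int(deg * 60 / ACUITY_ARCMIN) for deg in FOV_DEG)
total = pixels[0] * pixels[1]
# pixels -> (4800, 3840); total -> 18,432,000, i.e. the 18.4 million quoted
```

The same arithmetic scales to HMDs: a 100°-wide display matched to this acuity would need 12,000 pixels horizontally, which is one motivation for rendering full resolution only where the eyes are looking.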
As an experimental verification, while looking straight ahead, close the left eye and pass two pens, one blue and one green, from the leftmost viewpoint toward straight ahead. Both pens appear gray at first. As they move closer toward the center of the FOV, the color of the blue pen becomes apparent before that of the green pen. When passing the pens back the other way, the colors stay with them the whole way, because the brain has associated the colors with the objects. This perceptual effect may help designers of HMDs reduce the information level in these peripheral areas without adversely affecting the wearer. (Color sensitivity varies from the nasal out to the temporal FOV.) The fovea is responsible for sharp central vision, which is necessary in humans for reading, watching television, driving, and any activity where visual detail is of primary importance.

Head Motion
The head motion that an HMD will encounter depends highly on the application. Typical vertical motion is only 15° sitting and 10° standing. Comfortable horizontal movement is 45° (easy) to 79° (maximum), and the rotational speed can be greater than 500°/sec. A heavy HMD, however, will slow this rate down, and also decreases the total vertical motion, due to the unnatural balancing of several pounds on one's head. People accustomed to HMD use will normally utilize a full motion pattern. Head motion for medical use will exceed the standard vertical range by looking downward at a 45° angle for extended periods. Military pilots also extend the standard head motions, including the rotational speeds.

Head Motion
Tracking is extremely important for head motion. Lag between the image displayed and the actual head orientation/location can be the greatest obstacle to creating an effective HMD-based system. The motions expected in the HMD should be the basis for determining what type of tracker will be needed.
When an image-registered see-through display is needed, or targeting is desired, lag must be minimized to 16 ms. Although a lag of 16 ms is perceivable, most HMD/tracking/rendering systems are in the 60-90 ms range.

Human Visual Perception
As part of the human visual system, stereo vision is one of the main factors that contribute to a user's perception of realism within a virtual world. The human visual system is based on stereo vision: a human perceives information such as depth based on the disparity between the left and right eyes. It is therefore conceivable that a user might perceive the effects of distortions differently through a stereoscopic display than through a monoscopic display.

Human Visual Perception
For VE designers trying to achieve stereo depth perception, it is important to note that lateral image disparity (in the range of 07 to 107) leads to depth perception (Kalawsky, 1993). Vertical image disparities, on the other hand, do not convey any depth cues; in fact, small amounts can lead to double vision (diplopia). Although users can adapt to diplopia in 15 to 20 minutes, they must then readjust to visual scenes when they reenter the real world.

Human Visual Perception
The portion of the visual field shared by both eyes is known as the binocular field of vision (i.e., stereopsis) (Haber & Hershenson, 1973). Partial binocular overlap, in which a monocular image is displaced inward or outward, can be used in VEs to achieve depth perception. Such partial overlap can be used to realize wide FOVs with smaller and lighter HMDs.
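The lag figures just quoted translate directly into angular registration error during head rotation: the displayed image trails the true head orientation by rotation rate times latency. A quick check with the numbers from the text:

```python
def registration_error_deg(head_rate_deg_per_s, latency_s):
    """Angular error between displayed and true head orientation due to lag."""
    return head_rate_deg_per_s * latency_s

# At the quoted peak head rotation rate of 500 deg/s:
best_case = registration_error_deg(500, 0.016)   # 8.0 deg even at the 16 ms target
typical = registration_error_deg(500, 0.090)     # 45.0 deg at the slow end of 60-90 ms
```

This is why lag dominates image registration for see-through and targeting applications: even the 16 ms target leaves several degrees of error during fast head turns.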
Human Visual Perception
A human perceives less detail in objects that are located farther away compared to closer objects, and objects that are farther from the user appear smaller from the user's viewpoint. Human binocular disparity/stereo perception fails at distances of more than 30 meters. It is therefore possible that distortions concentrated at a distance from the user will be less noticeable.

Human Visual Perception
The human visual system depends on the retina to cope with the wealth of information contained within the scene. The user's visual attention will typically be focused on, or drawn to, certain interesting areas of the scene. High levels of detail contained within the scene can distract and possibly overwhelm the user's perception.

Human Visual Perception
Upon user head movements or rotations, based on feedback from the motoric senses, the human brain expects the scene to change in certain ways. The human eye is less sensitive to details that move rapidly over the retina. Perception of distortion might therefore vary between head-tracked and non-head-tracked virtual reality systems.

Human Visual Perception
The makeup of the virtual world might also affect a user's perception of region-warping distortions. Distortions will appear less obvious in scenes containing random natural objects than in scenes containing rigid, structured architecture. The reason is that highly structured scenes contain straight lines that can noticeably bend as a result of the warping.

Human Visual Perception
The human brain might unconsciously correct perceived errors when presented with a familiar scene. The human visual system is governed by real-world experiences, as the brain interprets images presented to the retina based on these experiences. The human brain might overlook errors contained in a familiar scene; alternatively, the converse might apply, in that errors might appear more prominent in a familiar scene.

Human Auditory System
Our ears form the most visible part of our auditory system, guiding sound waves into the auditory canal. The canal enhances the sounds we hear and directs them onto the eardrum, which converts the sound waves into mechanical vibrations. In the middle ear, three tiny bones (the hammer, anvil, and stirrup) form a bridge across an air void and amplify slight sounds by a factor of 30; to inhibit loud sounds, the stirrup rotates away and the eardrum tightens. The inner ear translates these mechanical vibrations into electrochemical signals, which are conveyed to the brain by the auditory nerve. Sounds detected by both ears are processed by what are called binaural cells.

Human Auditory System
Our sense of sound localization comes from three different cues:
1. Interaural time difference is a measure of the difference in time between a sound entering our left ear and entering our right ear.
2. Interaural intensity difference is a measure of how a sound's intensity level drops off with distance.
3. Acoustic shadow is the effect of higher-frequency sounds being blocked by objects between us and the sound's source.

A Virtual Sound System
In VR systems, computer-generated sound comes in several different forms. The use of stereo sound adds some level of sound feedback to the VR environment but does not correctly resemble the real world. With 3D sound, we can "place" sounds within the simulated environment using the sound localization cues described above. A 3D sound system begins by recording the differences in the sound reaching each of our ears, using microphones placed at each ear. The recordings are then used to produce what is called a head-related transfer function (HRTF). These HRTFs are used during playback of recorded sounds to effectively place them within a 3D environment.
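The HRTF playback step described here amounts to filtering a mono source through a left and a right head-related impulse response (HRIR). The sketch below uses a direct-form convolution and two made-up three-tap impulse responses in place of measured HRIRs, purely to show the mechanism:

```python
def convolve(x, h):
    """Direct-form FIR convolution (what an HRTF filter does to a signal)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def binaural_render(mono, hrir_left, hrir_right):
    """Filter a mono source through per-ear impulse responses -> (left, right)."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs: the right ear hears the source two samples later and quieter,
# crudely encoding an interaural time and intensity difference.
hrir_l = [1.0, 0.0, 0.0]
hrir_r = [0.0, 0.0, 0.6]
left, right = binaural_render([1.0, 0.5, 0.25], hrir_l, hrir_r)
# left  -> [1.0, 0.5, 0.25, 0.0, 0.0]
# right -> [0.0, 0.0, 0.6, 0.3, 0.15]
```

Real HRIRs are hundreds of taps long, measured per listener and per source direction, and a real-time system (like the Convolvotron described below) must crossfade between them as the head moves.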
A Virtual Sound System
A virtual sound system requires not only the same sound localization cues but must also change and react in real time to move those sounds around within the 3D environment. An example of a 3D sound system is the Convolvotron, developed by Crystal River Engineering. This system convolves analog audio source material with the HRTFs, creating a startlingly realistic 3D sound effect. Another system, the Virtual Audio Processing System (VAPS), mixes non-interactive binaural recording techniques with Convolvotron-like signal processing to produce both live and recorded 3D sound fields. Other attempts look at performing what is called aural ray tracing, which is similar to the light ray tracing found in computer graphics.

Auditory Perception
To synthesize a realistic auditory environment, it is important to obtain a better understanding of how the ears receive sound, particularly focusing on 3D audio localization. Audio localization assists listeners in distinguishing separate sound sources. Localization is primarily determined by intensity differences and temporal or phase differences between the signals at the two ears.

Auditory Perception
Auditory localization is best understood in the horizontal plane (left to right). Sounds can arrive up to about 700 microseconds earlier at one ear than at the other, and the sound at the farther ear can be attenuated by as much as 35 decibels relative to the nearer ear (Middlebrooks et al., 1991). For example, if a listener perceives a sound as coming from the right, the sound has generally arrived at the right ear first and/or is louder in the right ear than in the left. When sound sources are beyond one meter from the head, these interaural time and intensity differences become less pronounced in assisting audio localization.

Auditory Perception
Vertical localization in the median plane cannot depend on interaural differences (i.e., as long as the head and ears are symmetrical).
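The roughly 700-microsecond maximum interaural delay cited above can be approximated with the classical Woodworth spherical-head model, ITD = (r/c)(theta + sin theta). The head radius used below is a common textbook assumption, not a value from this text:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound_m_s=343.0):
    """Woodworth spherical-head estimate of the interaural time difference."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (theta + math.sin(theta))

# A source directly to one side (90 deg azimuth) gives the largest delay:
max_itd_us = itd_seconds(90) * 1e6   # ~656 microseconds, near the ~700 us quoted
```

A source straight ahead (0° azimuth) gives zero delay, matching the statement below that the interaural differences vanish in the median plane.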
When a sound is directly in front of (or behind) a listener, the interaural differences are zero; however, the listener is still somehow able to localize the sound. In such cases, the anatomy of the external ear (i.e., the pinna) is thought to produce changes in the spectrum of a sound (i.e., spectral shape cues) that assist in localizing it (Fisher & Freedman, 1968; Middlebrooks et al., 1991).

Auditory Perception
A head-related transfer function (HRTF) has been used to represent the manner in which sound sources change as a listener moves his or her head; it can be specified with knowledge of the source position and the position and orientation of the head.

Human Tactile System
Our sense of touch is performed by what is called the haptic or tactile system, which relays information about touch via two different mechanisms:
1. The mechanoreceptors provide information about shape, texture, and temperature.
2. Proprioceptive feedback conveys information about touch via overall muscle interactions. These muscle interactions can inform the brain about the gross shape of an object, the sensing of movement or resistance to movement, the weight of an object, and the firmness of an object.
This touch information is conveyed to the brain by both slowly adapting fibers and rapidly adapting fibers.

Tactile Feedback in VR Systems
In VR systems, tactile and force feedback devices seek to emulate the tactile cues our haptic system relays to our brains. For example, the Argonne Remote Manipulator provided force feedback via a mechanical arm assembly and many tiny motors; this device was used in the molecular docking simulations of the GROPE system. The Portable Dextrous Master used piston-like cylinders mounted on ball joints to pass force feedback from a robot's gripper to the operator's hand. The TeleTact Data Acquisition Glove used two gloves:
1. One glove acquired touch data via an array of force-sensitive resistors and relayed that information to the second glove.
2. The second glove provided feedback via many small air bladders that inflated at the pressure points to simulate the touch information.

Tactile Feedback in VR Systems
In many VR systems, the ubiquitous data glove plays the same role as the mouse in conventional computer systems. The VPL DataGlove used a series of fiber-optic cables to detect the bending of the fingers, and magnetic sensors for position and orientation tracking of the hand.

Haptic Perception
A haptic sensation (i.e., touch) is a mechanical contact with the skin. It is important to incorporate haptic feedback in VEs because such feedback has been found to substantially enhance performance (Burdea et al., 1994).

Haptic Perception
Three mechanical stimuli produce the sensation of touch: a displacement of the skin over an extended period of time, a transitory (a few milliseconds) displacement of the skin, and a transitory displacement of the skin that is repeated at a constant or variable frequency (Geldard, 1972). Even with an understanding of these global mechanisms, however, the attributes of the skin are difficult to characterize quantitatively.

Haptic Perception
This is because the skin has variable thresholds for touch (vibrotactile thresholds) and can perform complex spatial and temporal summations, all of which are a function of the type and position of the mechanical stimuli (Hill, 1967). As the stimulus changes, so does the sensation of touch, creating a challenge for those attempting to model synthetic haptic feedback.

Haptic Perception
Another haptic issue is that the sensations of the skin adapt with exposure to a stimulus. More specifically, a sensation decreases in sensitivity to a continued stimulus, may disappear completely even though the stimulus is still present, and varies by receptor type.
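Receptor adaptation of the kind described above is often approximated as an exponential decay of the perceived response under a constant stimulus. The sketch below is a toy model only: the time constant is invented for illustration, and real receptors differ by type (rapidly vs. slowly adapting fibers) and by stimulus position.

```python
import math

def perceived_intensity(stimulus, t_seconds, tau=1.5):
    """Toy adaptation model: the response to a constant stimulus decays toward zero.

    tau is a made-up adaptation time constant; it is NOT a measured value.
    """
    return stimulus * math.exp(-t_seconds / tau)

onset = perceived_intensity(1.0, 0.0)    # 1.0 - full sensation at stimulus onset
later = perceived_intensity(1.0, 4.5)    # ~0.05 - nearly gone after three time constants
```

A synthetic haptic renderer that ignores this decay will feel wrong: a constant commanded pressure is perceived as fading, so real systems often modulate the stimulus over time instead of holding it fixed.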
Kinesthetic Perception
Kinesthesia is an awareness of the movements and relative positions of body parts. It is determined by the rate and direction of movement of the limbs; the static position of the limbs when movement is absent; tension signals originating from sensory receptors in the joints, skin, and muscles; and visual cues (Kalawsky, 1993).

Kinesthetic Perception
Kinesthetic issues for VE design include the facts that a small rate of movement of a joint can be too small for perception; that certain kinesthetic effects are not well understood (e.g., tensing the muscles improves movement sense); and that humans possess an internal mental image of the positions of the limbs and joints that is not dependent on actual sensory information.

Health and Safety Issues in VEs
Several health and safety issues may affect users of VEs; if they are ignored or minimized, the result could be discomfort, harm, or even injury. These issues include both direct and indirect effects:
1. The direct effects can be viewed at a microscopic level (e.g., individual tissue) or a macroscopic level (e.g., trauma, sickness).
2. The indirect effects include physiological aftereffects and psychological disturbances.

Direct Microscopic Effects
Several microscopic direct effects could affect the tissues of VE users. The eyes, which are closely coupled to HMDs or other visual displays used in VEs, have the potential of being harmed. The eyes may be affected by the electromagnetic field (EMF) of VE equipment if the exposure is sufficiently intense or prolonged (Viirre, 1994). There is concern over EMF exposure because strong EMFs could cause cellular and genetic damage in the brain, as in other tissues.

Direct Macroscopic Effects
Eyestrain could be caused by poor adjustment of HMD displays, as well as by flicker, glare, and other visual distortions (Ebenholtz, 1988, 1992; Konz, 1983; Sanders & McCormick, 1993).
Direct Microscopic Effects
The auditory system and inner ear could be adversely affected by VE exposure to high-volume audio. One possible effect of such exposure is noise-induced hearing loss: continuous exposure to high noise levels (particularly above 80 dBA) can lead to nerve deafness (Sanders & McCormick, 1993). The Occupational Safety and Health Administration (OSHA) has noise exposure limits that should be followed in VE design in order to prevent such hearing loss (OSHA, 1983).

Direct Microscopic Effects
Prolonged repetitive VE movements could also cause overuse injuries to the body (e.g., carpal tunnel syndrome, tenosynovitis, epicondylitis). The probability of users being afflicted by such ailments can be moderated by emphasizing ergonomics in VE design and judicious usage procedures. The head, neck, and spine could be harmed by the weight or position of HMDs.

Direct Microscopic Effects
Phobic effects may also result from VE use, such as claustrophobia (e.g., from the HMD enclosure) and anxiety (e.g., from falling off a cliff in a virtual world).

Direct Macroscopic Effects
The risk of physical injury or trauma from VE interaction is a real concern. VE equipment is complex and interferes with normal sensory perception and body movements. Limited or eliminated vision of the natural surroundings when wearing HMDs could lead to falls or trips that result in bumps and bruises. Sound cues may distract users, causing them to fall while viewing virtual scenes. Imbalance of body position may occur due to the weight of VE equipment, or the tethers that link equipment to computers may cause users to fall (Thomas & Stuart, 1992).

Direct Macroscopic Effects
If haptic feedback systems fail, a user might be accidentally pinched, pulled, or otherwise harmed. Most force-feedback systems attenuate the transmitted force to avoid harm (Biocca, 1992a).
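The force attenuation mentioned above can be as simple as clamping every commanded force to a safe ceiling before it reaches the actuators. The sketch below illustrates the idea; the 4 N limit is a hypothetical value, not one drawn from any cited device.

```python
MAX_SAFE_FORCE_N = 4.0  # hypothetical per-device safety ceiling, in newtons

def attenuate(commanded_force_n, limit=MAX_SAFE_FORCE_N):
    """Clamp a commanded feedback force into [-limit, +limit] before actuation."""
    return max(-limit, min(limit, commanded_force_n))

in_range = attenuate(2.5)    # 2.5  - within limits, passed through unchanged
too_high = attenuate(12.0)   # 4.0  - clamped to the safety ceiling
too_low = attenuate(-9.0)    # -4.0 - clamped in the other direction
```

Real devices layer further safeguards on top of a software clamp (mechanical fuses, watchdog timers, rate limits), since a software-only limit does not survive the failure modes the text describes.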
As previously discussed, another direct macroscopic effect is that users may become motion sick (i.e., cybersickness) or may experience maladaptive physiological aftereffects from human-virtual environment interaction.

Cybersickness
Cybersickness is a form of motion sickness that occurs as a result of exposure to VEs. Several factors may contribute to it, e.g., vection, lag, and field of view. Vection is the illusion of self-movement in a VE; when the body senses no actual physical movement, a conflict arises between the visual and vestibular systems that is believed to lead to sickness (Hettinger, Berbaum, Kennedy, Dunlap, & Nolan, 1990).

Cybersickness
In virtual systems, lag occurs when a user perceives a delay between the time a physical motion is made (e.g., turning the head to the right) and the time the computer responds with a corresponding change in the display. Both wide and narrow fields of view have been suggested to lead to motion sickness: Lestienne, Soechting, and Berthoz (1977) found that subjects who viewed a wide FOV experienced intense sensations of motion sickness, yet nausea has also been found to occur when the field of view is restricted (Anderson & Braunstein, 1985).

Cybersickness
These experiments suggest that field of view may not be an overriding indicator of whether cybersickness is experienced. Howard, Ohmi, Simpson, and Landolt (1987) found that what is perceived as being in the distance drives vection, which in turn often drives sickness.

To Moderate Cybersickness
Manipulate the level of interactive control provided to users. Reason and Diaz (1971) and Casali and Wierwille (1986) determined that crewmembers and copilots are more susceptible to sickness because they have little or no control over the simulator's movements. Lackner (1990) suggested that the "driver" of a simulator becomes less sick than passengers because he or she can control or anticipate the motion.
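The lag discussed above can be made concrete with a simple motion-to-photon latency budget. The stage names and millisecond figures below are assumptions chosen for illustration only, not measurements of any particular system; the point is that small per-stage delays sum into a perceptible angular error during head rotation.

```python
def motion_to_photon_ms(stages: dict) -> float:
    """Total perceived lag: the sum of the per-stage delays in ms."""
    return sum(stages.values())


# Hypothetical latency budget for a head-tracked display (illustrative values).
budget = {
    "tracker_sampling": 8.0,   # sensor read + filtering
    "transmission": 2.0,       # tracker -> host link
    "simulation": 5.0,         # application state update
    "rendering": 11.0,         # one frame at roughly 90 Hz
    "display_scanout": 11.0,   # scanout + pixel switching
}

total_ms = motion_to_photon_ms(budget)
print(f"{total_ms:.0f} ms end-to-end")

# During a head turn at 100 deg/s, the displayed scene trails the
# head by approximately rate * latency:
head_rate_deg_per_s = 100.0
angular_error_deg = head_rate_deg_per_s * total_ms / 1000.0
print(f"~{angular_error_deg:.1f} deg of angular error")
```

Under these assumed numbers the scene lags a fast head turn by several degrees, which is one intuition for why lag is repeatedly implicated in cybersickness: the visual world visibly fails to keep up with vestibular self-motion cues.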
Indirect Effects
The use of VEs may produce disturbing aftereffects, such as head spinning, postural ataxia, reduced eye-hand coordination, vestibular disturbances, and/or sickness. Such aftereffects have been known to persist for several hours after system exposure.

References
Stanney, K., et al. (1998). Human factors issues in virtual environments: A review of the literature. Presence, 7(4), 327-351. MIT Press. [web]
Keller, K., & Colucci, D. (1998). Perception in HMDs: What is it in head-mounted displays (HMDs) that really make them all so terrible? [web]
Lingard, B. (1995). Human interfacing issues of virtual reality. [web]
Staadt, O. ECS 289H: Human Factors and Perception (lecture notes). University of California. [web]

Tutorial Questions
1. Discuss whether image quality affects the perception of distance in VEs. Support your arguments with published evidence.
2. Discuss the impact of field-of-view and binocular-viewing restrictions on people's ability to perceive distance in a virtual world. Support your arguments with published evidence.
3. Discuss the ideal features of an HMD, based on your understanding of the limitations imposed by the human visual system and visual perception.
This note was uploaded on 02/06/2012 for the course FACULTY OF WXGE6320 taught by Professor Noraini during the Winter '09 term at University of Malaya.