Stud Health Technol Inform. Author manuscript; available in PMC Sep 1, 2009.
PMCID: PMC2736108
NIHMSID: NIHMS136934
Postural and Spatial Orientation Driven by Virtual Reality
Emily A. Keshner (a) and Robert V. Kenyon (b)
(a) Department of Physical Therapy, College of Health Professions, and Department of Electrical and Computer Engineering, College of Engineering, Temple University, Philadelphia, USA
(b) Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA
Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world.
Keywords: Perception, Posture Platform, Multi-modal Immersion, Visual-vestibular Conflict
We have created a laboratory that combines two technologies: dynamic posturography and virtual reality (VR). The purpose of this laboratory is to test and train postural behaviors in a virtual environment (VE) which simulates real world conditions. Our goal with this environment is to explore how multisensory inputs influence the perception of orientation in space, and to determine the consequence of shifts in spatial perception on postural responses. Drawing from previous findings from aviators in simulators which indicated that responses to visual disturbances became much stronger when combined with a physical force disturbance [1], we have assumed that the use of a VE would elicit veridical reactions.
Traditionally, postural reactions have been studied in controlled laboratory settings that either remove inputs from specific pathways to determine their contribution to the postural response (e.g., closing the eyes to remove vision), or test patients with sensory deficits to determine how the central nervous system (CNS) compensates for the loss of a particular input [2]. In order to simplify the system, most studies recorded a single dependent variable such as center of pressure (COP) or center of mass (COM) when either the physical or visual world moved [3, 4]. There have been studies comparing the relative influence of two input signals, but conclusions have mostly been drawn from a single output variable [5, 6]. With our environment, we have the ability to monitor multiple inputs as well as multiple outputs. Thus our experimental protocols can be designed to incorporate the complex cortical processing that occurs when an individual is navigating through the natural world, thereby eliciting motor behaviors that are presumed to be more analogous to those that take place in the natural physical environment.
In this chapter we present information about both the technology used in our laboratory as well as data that demonstrate how we have been able to modify and measure postural and perceptual responses to multisensory disturbances in both healthy and clinical populations. First, we will present a background of our rationale for using VE technology. Second, we will describe the choices we made in developing our laboratory. Results from experiments in the VE will be offered to support our claim that our laboratory creates a sense of presence in the environment. Lastly, we will present evidence of a strong linkage between posture and perception which supports our belief that the VE is a valuable tool for exploring the organization of postural behaviors as they would occur in natural conditions. Our laboratory presents the challenges necessary to evaluate postural disorders and to design interventions that will fully engage the adaptive properties of the system. We believe the VE has vast potential as both a diagnostic and treatment tool for patients with poorly diagnosed balance disorders.
1.1. Unique Features that Motivated Our Use of VR in Posture Control Research
Prior to the advent of virtual environment display technology, experiments using complex, realistic computer-controlled imagery to study visual information processing/motor control linkages were difficult to produce. Until the arrival of affordable high-performance computer graphics hardware, the manipulation of visual stimuli was relegated to real-world conditions that were optically altered using prism lenses [7, 8] or to artificial visual stimuli depicted in pictures and simple computer-generated images such as random dots moving across a monitor [9, 11]. Each of these systems had advantages and limitations that constrained the type of experiments that could be performed. Consequently, any investigation of motor control and its interaction with vision and the other senses was limited by the available technology. These limitations severely impeded the study of posture control, which incorporates both feedback about our self-motion and the perception of orientation in space. Although the role of each sensory feedback system could be studied by either removing or augmenting its contribution to an action, perception of vertical orientation is more difficult to discern and its measurement was mostly dependent on subjective feedback from the subject. If we were to examine how a higher-level process like perception impacted posture control, then we needed to produce a visual environment convincing enough that our subjects believed that they needed to deal with disturbances presented in that environment. To create such conditions we modeled our laboratory after one of the most successful applications of a VE to activate perception, that of pilots in flight simulators [12]. We needed an environment where subjects accepted that they were actually present in the environment, so that their responses to the virtual world would be similar to those elicited in the physical world [13].
This “suspension of disbelief” which accompanies a subject’s immersion in the environment, mentally and/or physically, is also known as “presence”. We felt that a strong sense of “presence” was needed in order to engage and manipulate the higher level cognitive processes that influence posture control [14].
1.2. Our Work and Its Parallel to Flight Simulation
As mentioned previously, probably the best known and most successful use of a VE with human subjects is in the area of flight training [15, 16]. Pilots are exposed to situations that hone their skills in dealing with dangerous situations or train them to use new equipment. Many of these scenarios involve situations that could not be employed in the actual aircraft due to the danger it would present to the crew. In the safety of the simulator, however, such practice is routine and a vital part of their training. What makes this environment so compelling is the combination of somatosensory inputs [from the stick or rudder], vestibular motion inputs [from the motion-base systems], and auditory stimuli that are combined with convincing graphics that relay the pilot’s actions as if he were controlling the actual aircraft. Thus, a sense of presence in the VE, through the combination of sensory and physical motion feedback, helped to obtain a high fidelity response from the pilots that was successfully translated to tangible flying skills [15]. In a similar manner we use a combination of somatosensory and complex visual inputs in our laboratory to immerse the subjects so that their responses to the VE are closely matched to what they perceive to be real disturbances in the environment. Yet, as in simulators, our subjects are protected from the dangers of collision or falling regardless of the scenario under investigation. As with simulators, the VE allows us to expose subjects to situations that they may never actually experience, but their responses will teach us about the adaptive and control properties of the CNS by fully engaging their sensory and motor systems in planning and producing responses to environmental disturbances. We believe that we have accomplished this goal in the way that we have developed our laboratory as described below.
1.3. How We Created Our Lab
To adhere to our goal of engaging the perceptual system so as to elicit as close to a natural reaction from the CNS as possible, we had to decide what kind of VE system would be best to use in our research. At that time, there were two main contenders to choose from: Head Mounted Displays (HMD) and the CAVE (Cave Automatic Virtual Environment) [17]. The cost of each was comparable and each had its advantages and disadvantages. HMD systems allow one to totally immerse subjects in the computer-generated world and give the scientist complete control over what the subjects will see during the course of testing/training in a very compact package. In addition, update rates and the resolution of the screens could be higher than in a CAVE system. However, one of the more important considerations is that HMDs suffer from image swimming during head motion due to the latencies inherent in head tracking and image generation. In addition, at that time such systems were heavier and therefore added weight to the subject's head. The projection-based CAVE used a lightweight pair of glasses to allow the subject to perceive stereo, which made this option very attractive since it imposed only a mild encumbrance on the subject. The subjects were immersed, but not to the extent provided by an HMD system, since they could see physical objects, such as their own body, in addition to the virtual world. The ability to see yourself within your environment is a trait experienced in the physical world [Augmented Reality HMD systems that allow the subject to see both physical and virtual objects are currently available but are at least twice the cost of a CAVE]. Swimming of the scene during head movements was minimal because the entire field-of-view (FOV) was projected on the screen in front of the subject.
Swimming was further reduced because the tracking system we used (Motion Analysis, Santa Rosa, CA) produced very short latencies (approximately 10–20 msec), resulting in an image update that is very close to the physiological latency of the vestibulo-ocular reflex during natural head motion [18]. Negative characteristics of the projection-based system are that it requires a much larger physical space than an HMD and that it forces the subjects to remain confined to an area near the screens in order to see the image. Also, images are not as bright as in HMDs. However, our decision criteria led us to use a CAVE system rather than an HMD.
We originally started with a one-wall CAVE, i.e., a single projection screen in front of the subject [19]. Although the 100° FOV was adequate for our experiments, we expected that a wider FOV would elicit a stronger sense of motion in subjects [20]. In fact, we have found that narrowing the FOV so that the peripheral field is not stimulated actually produces greater delays in response to postural disturbances [21]. We currently have a three-screen passive stereo system, with walls in front and to the sides of the subject, which permits peripheral as well as central visual field motion (Figure 1). Two projectors are located behind each screen. Each pair of projectors has circularly polarized filters of opposite directions placed in front of them, and each projects a full-color workstation field [1280h × 1024v] at 60 Hz onto its screen. Matching circularly polarized stereo glasses worn by the subject deliver the appropriate left and right eye image to each eye, allowing a 150° stereo FOV. The correct perspective and stereo projections for the scene are computed using values for the subject's inter-pupillary distance (IPD) and the current orientation of the head supplied by position markers attached to the subject's head and scanned by the Motion Analysis infrared camera system. Consequently, virtual objects retain their true perspective and position in space regardless of the subject's movement. The visual experience is that of being immersed in a realistic scene with textural content and optic flow. To produce the physical motion disturbances necessary to elicit postural reactions, we incorporated a moving base of support with two integrated force plates (NeuroCom International Inc, Clackamas, OR) into the environment (Figure 1).
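The per-eye viewpoint computation described above can be sketched in a few lines. This is an illustrative reduction, not the CAVE software's actual code: the function name and axis convention are assumptions, and a real system would go on to build a full off-axis projection frustum for each screen from these eye positions.

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd):
    """Left/right eye centers in world space from the tracked head pose.

    head_pos: (3,) head marker position in meters
    head_rot: (3, 3) rotation matrix whose first column is the head's
              lateral (right-pointing) axis -- an assumed convention
    ipd: inter-pupillary distance in meters
    """
    right_axis = head_rot[:, 0]                  # head's lateral axis
    left_eye = head_pos - right_axis * ipd / 2.0
    right_eye = head_pos + right_axis * ipd / 2.0
    return left_eye, right_eye

# Example: head 1.7 m above the floor, facing the front screen squarely
head_pos = np.array([0.0, 1.7, 0.0])
head_rot = np.eye(3)                             # identity = no head rotation
left, right = eye_positions(head_pos, head_rot, ipd=0.064)
```

Because the eye positions are recomputed from the marker data every frame, virtual objects keep a fixed world position as the head moves, which is what preserves true perspective in the scene.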
In many posture laboratories and with the popular clinical tools for diagnosis and training of postural reactions (e.g., the Equitest and Balance Master), the visual axis of rotation is placed at the ankle and the multi-segmental body is assumed or even constrained to function as an inverted pendulum [6, 20, 22, 25]. In our laboratory, the visual axis is referenced to the head as occurs during natural movement, and it is assumed that the control of posture is a multi-segmental process.
Figure 1
The Virtual Environment and Postural Orientation Laboratory currently at Temple University is a three-wall virtual environment. Each wall measures 2.4 m × 1.7 m. The visual experience is that of being immersed in a realistic scene with textural content and optic flow.
1.4. Is Stereo Vision Necessary?
We have explored whether stereovision in the VE produced a more compelling visual experience than viewing a flat wall or picture. Stereopsis is an effective cue up to about 30 m, which encompasses many objects in our scene [26]. We predicted that stereovision was necessary to produce a sense of immersion in the VE, and that this perceptual engagement would be reflected in the postural response metrics. For these experiments we produced postural instability by having young adults stand on the force plate with a full (100%) base of support (BOS), on a rod offering 45% of their BOS (calculated as a percentage of their foot length), or on a rod offering 35% BOS [21, 27]. Subjects viewed the wide-FOV visual scene moving fore-aft at 0.1 Hz with either stereo (IPD ≠ 0) or dioptic (IPD = 0) images. Response power at the frequency of the scene increased significantly (p < 0.05) with the 35% BOS (Figure 2), suggesting some critical mechanical limit at which subjects could no longer rely on the inputs provided by the BOS and thus switched to a reliance on vision. There was also an interaction between BOS and stereovision revealing that when subjects were more reliant on the visual inputs, stereovision had a greater effect on their motion. Thus, in an unstable environment, visual feedback and, in particular, stereovision became more influential on the metrics of the postural response. As a result we chose to retain the stereo component in our 3-wall environment.
Figure 2
Power of head, trunk, and shank center of mass for four subjects is normalized to the largest response of each subject during 0.1 Hz motion of a visual scene with dioptic (2D) and stereo (3D) images while on a full (100%) and reduced (35%) base of support.
When the world is moving, we have to determine whether it is the environment or ourselves that is moving in order to recognize our orientation in space. To do this, we must use the sensory information linked to the context of the movement and determine whether there is a mismatch between the visual world motion and our vestibular and somatosensory afference. If we believe that the environment around us is stationary, it is relatively easy to identify our physical motion. However, when the world is also moving, we need to shape our reactions to accurately match the demands of the environment. The ability to orient ourselves in space is a multisensory process [28, 31], and the impairment of any one of the relevant pathways (i.e., proprioceptive, vestibular, and visual) will impact postural stability.
Whole body sway responses of subjects exposed to visual rotation stimuli in our environment were qualitatively similar to those observed and published in the literature available prior to our initial experiments [20, 22, 23, 32]. The novelty in our approach was to explore how each body segment acted to maintain posture during visual disturbances rather than looking at postural sway as a single output variable. We chose to examine the body segments individually because of previous studies suggesting differential control mechanisms in the upper and lower body [24, 25]. In our first study with a VE [33], subjects stood in quiet stance while observing either random dots or a realistic visual scene that moved sinusoidally or at constant velocity about the pitch or roll axes (Figure 3). Segmental displacements, Fast Fourier Transforms (FFT), and root mean square (RMS) values were calculated for the head, trunk, and lower limb. We found that with scene motion in either the pitch or roll plane, subjects exhibited greater magnitudes of motion in the head and trunk than at the lower limb. Additionally, the frequency or velocity content of the head and trunk motion was equivalent to that of the visual input, but this was not the case in the lower limb. Smaller amplitudes and frequent phase reversals observed at the ankle suggested that control at the ankle was directed toward keeping the body over the base of support (the foot) rather than responding to changes in the visual environment. These results suggested to us that the lower limb postural controller was setting a limit of motion for postural stabilization while posture of the head and trunk may have been governed by a perception of the visual vertical driven by the visual scene.
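The segmental measures named above (RMS amplitude and FFT power at the frequency of the visual stimulus) can be sketched generically. The function below is a hypothetical illustration of this kind of analysis, not the authors' code; the sampling rate and the synthetic head trace are invented for the example.

```python
import numpy as np

def segment_metrics(angle_deg, fs, stim_hz):
    """RMS and spectral power of a segmental angle trace at the
    stimulus frequency.

    angle_deg: angular displacement time series (degrees)
    fs: sampling rate (Hz); stim_hz: visual scene frequency (Hz)
    """
    x = angle_deg - np.mean(angle_deg)          # remove DC offset
    rms = np.sqrt(np.mean(x ** 2))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    k = np.argmin(np.abs(freqs - stim_hz))      # bin nearest the stimulus
    return rms, power[k]

# Synthetic head sway: a 1-degree response locked to a 0.1 Hz scene,
# sampled at 60 Hz for a 60 s trial (an integer number of cycles)
fs = 60
t = np.arange(0, 60, 1 / fs)
head = 1.0 * np.sin(2 * np.pi * 0.1 * t)
rms, p = segment_metrics(head, fs, 0.1)
```

Reading the power only at the bin nearest the stimulus frequency is what lets a response that is phase-locked to the scene be separated from broadband spontaneous sway.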
Figure 3
(Left) A subject standing within a field of random dots projected in the VE. The subject is tethered to three flock-of-birds sensors that are recording 6 axes of motion of the head, trunk, and lower limb. (Right) Graphs of two subjects (A and B).
When our subjects were asked to walk while the visual environment rolled counterclockwise, all of the subjects compensated for the visual field motion by exhibiting one of two locomotion strategies. Some subjects exhibited a normal step length, taking only two or three steps to cover the seven-foot distance, as would be expected for normal gait. However, a lateral shift took place so that they walked sideways in the direction of the rolling scene (Figure 3A). In each case, the subject's first step was straight ahead and the second step was to the left regardless of which foot was placed first. For example, one subject who made the first step with the left foot then made the second step by crossing the right leg over the left leg when responding to the visual stimulus [in order to move to the left]. When queried about the amount of translation produced during the walking trials, subjects responded that they recognized they were moving off center. In fact, these subjects were three feet to the left of center at the end of their trial but were unable to counteract the destabilizing force.
The other subjects walked with short, vertically projected stamping steps, taking approximately seven or eight steps in the seven feet traveled (Figure 3B). These subjects exhibited an increased frequency of medial-lateral sway of the head and trunk as though they were rocking over each foot as they stepped forward. These subjects reported that they were focused only on “not falling over”. Shortened step lengths and increased flexion at the knee and ankle implied that these subjects were exerting cognitive control over their locomotion, focused on increasing their awareness of both the somatosensory feedback and their motor output. This locomotion pattern was reminiscent of the gait observed in elderly fallers [34] or in subjects who have been walking with reversing prisms [35].
From these results we concluded that subjects could only counteract the effects of the destabilizing visual stimulus by altering their normal locomotion pattern in correspondence with their altered perception of vertical. Interestingly, the content of the visual scene did not determine response strategy selection (subjects receiving the random dot pattern also exhibited the different strategies), thus this paradigm can be used in laboratories with less advanced technologies than those reported here.
As would be expected when perception is involved, we have found that the effect of visual motion on posture is not equal across all motion axes and that segmental responses can vary across subjects. At the velocities that we have used, postural responses were greatest to roll motion of the visual field, followed closely by pitch, then by anterior-posterior (A–P) motions of the visual scene (Figure 4). We might explain these differences by our experiences with visual feedback in the physical environment. Modulation of segmental responses in the pitch and A–P planes occurs principally in the sagittal plane as we navigate through the environment. Roll motion of the visual environment is less commonly experienced, however, and therefore might elicit an intensified reaction to the perception of visual motion [33].
Figure 4
Amplitudes of head, trunk, and ankle responses to pitch, roll, and A–P motion of the VE. For pitch and roll, both constant velocity at 5°/s (A) and sinusoidal motion of the VE at 0.1 Hz (B) and 0.5 Hz (C) were used.
When an individual stands quietly and only the visual scene is moving in any of the planes, he or she experiences a conflict between the visual perception of motion and the vestibular and somatosensory systems signaling an absence of physical motion. A decision then needs to be made about whether the visual motion signal was due to self-motion or motion of the environment. Mutability of this decision process may be responsible for the variability observed across subjects. In our experiments [27], the roll scene was rotated in a counterclockwise (CCW) direction about the line of sight; the pitch scene rotated from lower to upper visual fields about an axis passing through the subject's ears. Quietly standing subjects tended to drift in the direction of the visual motion while producing small corrective oscillations of each segment (Figure 4A). In general, subjects would follow the constant velocity stimulus for some interval of time and then suddenly move in the opposite direction (exhibited as a sudden downward drop in the data), followed by a steep return in the direction of the visual scene motion as though correcting for the visual drift. The peak of the segmental response was delayed in most subjects and initiation of the response to the visual scene took about 20 sec to occur. These delays and response reversals reflected fluctuations in the strength of the immersion in the VE. In both the roll and pitch planes, subjects tended to respond more with the head and trunk than with the ankle.
With 0.1 Hz sinusoidal roll of the visual scene (Figure 4B), although the magnitude of motion was greater in the head and trunk than in the ankle, all segments had similar phases and oscillatory frequencies suggesting that subjects were responding as a simple pendulum limited only by the constraints of the base of support. With 0.1 Hz sinusoidal pitch of the visual scene, the subject shown in Figure 4B attempted to maintain a sinusoidal relation with the stimulus with similar magnitudes at all segments. Segmental responses were more synchronized to a visual scene with a frequency of 0.1 Hz than 0.5 Hz (Figure 4C). Interestingly, that same frequency (0.1 Hz) with a visual scene moving in A–P (Figure 4D) produced a much more subtle response of the body segments with lower amplitudes [35]. Differences seen between the responses of the two subjects presented in Figure 4D are indicative of the variable response to the visual motion, which is not unexpected if the response is a reflection of each individual’s perception of their own movement and that of the environment.
The waxing and waning of our subjects’ responses were reminiscent of reports in the literature regarding subjects’ perceptions of orientation during scene rotation [20, 36, 38]. Consequently, we wanted to determine whether changes in orientation of the head and trunk when exposed to a rotating scene correlated with spatial and temporal characteristics of the perception of self-motion. We recorded head position in space, center of pressure responses, and perception of self-motion through the orientation of a hand-held wand during constant velocity rotations of the visual scene about the roll axis [39]. Although no consistent response pattern emerged across the healthy subjects, there was a clear relationship between the perception of vertical, the position of the head in space, and postural sway within each subject (Figure 5). This observed relationship between spatial perception and postural orientation suggests that spatial representation during motion in the environment is modified by both ascending and descending controls. We inferred from these data that postural behaviors generated by the perception of self-motion are the result of cortical interactions between visual and vestibular signals as well as input from other somatosensory signals. This probable real-time monitoring of spatial orientation has implications for rehabilitation interventions. For example, the recovery of balance following a slip or trip may rely greatly on the ability to match continuously changing sensory feedback to an initial model of vertical that could be highly dependent on the visual environment and the mechanical arrangement during that particular task. Also, we cannot assume that a patient, particularly one with a sensory deficit who appears to be vertically oriented at the initiation of motion, will be able to sustain knowledge of that orientation as the task progresses [3].
Figure 5
Orientation of the hand-held wand, the head, and the center of pressure (COP) while viewing counterclockwise (CCW) roll motion of the visual scene (bold line) and a stationary visual scene (broken line) in three subjects demonstrates a fluctuating response.
If the perception of physical motion and orientation in space is derived from the convergence of vestibular, proprioceptive, and visual signals, then a mismatch between these signals would produce a conflict that needs to be resolved by the CNS (Figure 6). Examples of such a conflict occur in nature when watching a moving train and sensing that it is yourself who is moving, or standing in a tilting room and being unable to distinguish between visual field motion and self-motion [20, 40]. This phenomenon of illusory self-motion (vection) suggests that the CNS is not always capable of suppressing inappropriate inputs.
Figure 6
Schematic illustration of the vection phenomenon. Gravitational and visual signals stimulate the otoliths and the visual system, respectively, which, when combined, produce the perception of tilt.
To explore how the postural system weights coincident yet discordant disturbances of the visual and proprioceptive/vestibular systems, we chose to depart from roll motion of the visual scene which is less relevant to the environmental experience. In this study, the visual scene moved in the sagittal plane as did the individual’s physical motion [41, 43]. We examined the postural responses of healthy young adults (25–38 yrs), elderly (60–78 yrs), and labyrinthine deficient subjects (59–86 yrs) during fore-aft translations (0.1 Hz, ± 3.7 m/sec) of an immersive, wide FOV visual environment, or anterior-posterior translations (0.25 Hz, ± 15 cm/sec) of the support surface, or both concurrently. Kinematics of the head, trunk, and shank were collected with an infrared camera motion analysis system, and angular motion of each segment was plotted across time. When only the support surface was translated, segmental responses were small (1°–2°) and mostly opposed the direction of platform translation. When only the visual scene was moving, segmental responses were initially small and increased as the trial progressed. When the inputs were presented at the same time, however, response amplitudes were large even at the onset of the trial. Mean RMS values across subjects were significantly greater with combined stimuli than for either stimulus presented alone, and areas under the power curve across subjects were significantly increased at the frequency of the visual input when both inputs were presented (Figure 7, top). When discordant signals were simultaneously presented, even patients with labyrinthine deficit who claimed that they were ignoring the visual inputs exhibited increased complexity in the frequency spectra of their responses. These increases were not a simple linear summation of the responses to each input (Figure 7, bottom). 
Thus, inter-modality dependencies were observed, and we must conclude that the CNS does not simply add the effects of each sensory pathway but rather attempts to accommodate the multiple demands presented by conflicting sensory signals.
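The comparison against linear summation can be illustrated numerically. The helper below is hypothetical (the function names, sampling rate, and synthetic amplitudes are all invented for the example); it simply asks whether power at the visual frequency in the combined condition exceeds the sum of the single-stimulus powers.

```python
import numpy as np

def power_at(x, fs, f0):
    """Spectral power of a sway trace at the FFT bin nearest f0 (Hz)."""
    x = x - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return spec[np.argmin(np.abs(freqs - f0))]

def superposition_gain(vision_only, platform_only, combined, fs, f0):
    """Ratio > 1 means the combined-condition response at f0 exceeds
    the linear sum of the two single-stimulus responses."""
    linear = power_at(vision_only, fs, f0) + power_at(platform_only, fs, f0)
    return power_at(combined, fs, f0) / linear

# Synthetic segmental responses at the 0.1 Hz visual frequency,
# 120 s trials sampled at 60 Hz
fs = 60
t = np.arange(0, 120, 1 / fs)
vision = 0.5 * np.sin(2 * np.pi * 0.1 * t)    # vision-only response
platform = 0.1 * np.sin(2 * np.pi * 0.1 * t)  # platform-only response
combined = 2.0 * np.sin(2 * np.pi * 0.1 * t)  # combined condition
gain = superposition_gain(vision, platform, combined, fs, 0.1)
```

A gain well above one, as in this synthetic case, is the signature of the non-additive interaction described above.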
Figure 7
(Top) Power of the relative angles between head, trunk, shank and the moving platform (sled) over the period of the trial at the relevant frequencies of platform motion (0.25 Hz) and visual scene motion (0.1 Hz) is shown for each protocol for one young subject.
Our results have significant bearing on studies of motor control and, ultimately, on the design of rehabilitation interventions. In the past, postural responses have principally been examined by isolating individual control pathways in order to determine their specific contribution. However, if these pathways are responsive to functionally relevant contexts, then their response may well be different when the CNS is receiving simultaneous inputs from multiple pathways, especially when the confluence of signals produces non-linear behaviors.
Furthermore, we believe it unlikely that the role of any single pathway contributing to postural control can be accurately characterized in a static environment if the function of that pathway is context dependent. We conclude from these data that a healthy postural system does not selectively switch between inputs but continuously monitors all environmental signals to update the frequency and magnitude characteristics of a motor behavior.
Our finding that combining conflicting inputs actually produces responses that incorporate specific parameters from each input is surprising in light of the generally accepted hypothesis of sensory weighting, which suggests that the signal most relevant to the current task is more heavily weighted in the response.
For example, when we are moving in the environment rather than standing quietly, we might expect that feedback generated by our physical motion becomes more heavily weighted and it should therefore be easier for the postural control system to differentiate between our own motion and motion of the world. PET and MRI studies have supported this hypothesis by demonstrating that when both retinal and vestibular inputs are processed, there are changes in the medial parieto-occipital visual area and parieto-insular vestibular cortex [44, 46] as well as the cerebellar nodulus [47, 48] that suggest a deactivation of the structures processing object-motion when there is a perception of physical motion. But we have preliminary data [49] to suggest that inappropriate visual field motion is not suppressed when it is not matched to actual physical motion. Instead, during quiet stance, magnitude and power of segmental motion increased as the velocities of sinusoidal anterior-posterior visual field motion were increased, even to values much greater than those normally observed in postural sway. In fact, head velocity in space was modulated by the scene velocity regardless of the velocity of physical body motion.
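The sensory-weighting hypothesis discussed above is often formalized as an inverse-variance (reliability-weighted) combination of cues. The sketch below is a generic textbook formulation offered only for illustration; it is an assumption about how the hypothesis is usually written down, not a model the authors propose.

```python
def reliability_weights(var_visual, var_vestibular):
    """Inverse-variance weights: the noisier (higher-variance) cue
    receives the smaller weight. Returns (w_visual, w_vestibular)."""
    inv_v = 1.0 / var_visual
    inv_p = 1.0 / var_vestibular
    w_visual = inv_v / (inv_v + inv_p)
    return w_visual, 1.0 - w_visual

def self_motion_estimate(visual_vel, vestibular_vel, w_visual):
    """Weighted self-motion estimate (deg/s): a convex combination
    of the visual and vestibular velocity signals."""
    return w_visual * visual_vel + (1.0 - w_visual) * vestibular_vel

# During quiet stance the vestibular cue is reliable (low variance),
# so under this hypothesis scene motion should be down-weighted
w_v, w_p = reliability_weights(var_visual=4.0, var_vestibular=1.0)
estimate = self_motion_estimate(visual_vel=5.0, vestibular_vel=0.0, w_visual=w_v)
```

Under pure inverse-variance weighting, a reliable vestibular signal should largely suppress the visual scene motion; the preliminary data above suggest the actual postural system does not achieve this suppression.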
We have further explored the relative sensory weighting of visual field motion on postural responses in the paradigm reported previously, in which the base of support (BOS) and the field of view (FOV) were gradually narrowed [21]. The immersive virtual environment was either moved realistically with head motion (natural motion) or translated sinusoidally at 0.1 Hz in the fore-aft direction (augmented motion). Subjects viewed the visual motion under wide (90° and 55° in the horizontal and vertical directions) and narrow (25° in both directions) FOV conditions while standing flatfooted (100% BOS) and on two blocks (45% and 35% BOS). The augmented motion was presented both in stereo and in non-stereo. RMS values of head and whole body center of mass (COM) and ankle angle were calculated, and FFTs were performed on the head, whole body, and shank COM.
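The two outcome measures in this paradigm, RMS amplitude and spectral power at the frequency of the 0.1 Hz visual stimulus, can be sketched as below. This is a minimal illustration, not the study's analysis code; the sampling rate, duration, and synthetic COM trace are assumptions chosen only to demonstrate the computation.

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of a zero-meaned displacement trace."""
    centered = signal - np.mean(signal)
    return np.sqrt(np.mean(centered ** 2))

def power_at_frequency(signal, fs, f_target):
    """Periodogram power of `signal` (sampled at `fs` Hz) at the bin nearest `f_target`."""
    centered = signal - np.mean(signal)
    spectrum = np.fft.rfft(centered)
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fs)
    power = np.abs(spectrum) ** 2 / len(centered)
    return power[np.argmin(np.abs(freqs - f_target))]

# Synthetic COM trace: a 0.1 Hz sway component buried in noise (illustrative only)
rng = np.random.default_rng(0)
fs, duration, f_stim = 60.0, 200.0, 0.1              # Hz, s, Hz (assumed values)
t = np.arange(0.0, duration, 1.0 / fs)
com = 0.5 * np.sin(2 * np.pi * f_stim * t) + 0.05 * rng.standard_normal(t.size)

print(rms(com))                                # ≈ 0.36, dominated by the sinusoid
print(power_at_frequency(com, fs, f_stim))     # large peak at the stimulus frequency
```

An increase in `power_at_frequency` at 0.1 Hz, as reported below for the wide FOV and narrowed BOS, indicates that body motion is being entrained by the visual scene rather than reflecting ordinary sway.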
When combined with a 35% BOS, natural motion of the visual scene with a wide FOV produced significantly reduced COM RMS values compared to a narrow FOV. Viewing the augmented stereo visual condition produced a significant reduction in whole body COM for the 45% BOS compared to the 35% BOS. Whole body COM RMS was also significantly greater when standing on the 45% BOS compared to the 100% BOS when viewing an augmented stimulus in a non-stereo scene. The primary effect of augmented motion emerged in both the head and whole body COM, which exhibited significantly increased power at the frequency of the visual field motion with a wide FOV and a narrowed BOS (Figure 8). Shank COM power was greater for the wide FOV compared to the narrow FOV regardless of the size of the BOS. We concluded that by narrowing the BOS, the CNS was forced to increase its reliance on peripheral visual information in order to stabilize the head in space even though the augmented visual motion was promoting postural instability. Thus the destabilizing visual stimulus in a wide FOV was not down-weighted and still exerted a strong impact on postural control when postural stability was compromised.
Figure 8
Average head, whole body, and shank COM power for each of the three BOS conditions when the augmented visual motion was imposed on a stereo virtual scene. Subjects viewed the motion with a narrow (black line) and wide (dashed) FOV.
One of the most interesting results to emerge from these data was the finding that a subset of the subjects could not maintain continuous stance on the smallest BOS when the virtual environment was in motion and needed to take a step to stabilize themselves [21, 27]. When these subjects viewed augmented motion with a wide FOV, the effect on their head and whole body COM and ankle angle RMS values was pronounced. FFT analyses revealed greater power at the frequency of the visual stimulus in the steppers compared to the non-steppers (Figure 9). With a narrow FOV, whole body COM time lags relative to the augmented visual scene also appeared, and the time delay between the scene and the COM was significantly increased in the steppers. This increased responsiveness to visual field motion indicates a greater visual field dependency in the steppers and implies that the thresholds for shifting from a reliance on visual information to somatosensory information can differ within the healthy population. Our results strongly point to a role of visual perception in the successful organization of a postural response, so that the weighting of the sensory inputs contributing to the postural response may well depend on the perceptual choice made by each individual CNS [50].
Figure 9
Average head, whole body, and shank COM power for the 100% (dashed line) and 45% (black line) BOS conditions in subjects that were able to maintain balance on the reduced BOS (typical subjects) and those that needed to take a step (steppers) while viewing …
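The time delay between the COM and the visual scene, reported above as significantly increased in the steppers, can be estimated from the peak of the cross-correlation between the two signals. The sketch below uses synthetic broadband signals and an arbitrary 0.5 s delay; it illustrates the standard lag-estimation technique, not the study's actual pipeline.

```python
import numpy as np

def lag_seconds(scene, com, fs):
    """Delay of `com` relative to `scene` (both sampled at `fs` Hz), taken from
    the peak of their full cross-correlation. Positive means com lags scene."""
    scene = scene - np.mean(scene)
    com = com - np.mean(com)
    xcorr = np.correlate(com, scene, mode="full")
    lag_samples = np.argmax(xcorr) - (len(scene) - 1)
    return lag_samples / fs

# Synthetic example: COM is a noisy copy of the scene signal, delayed by 0.5 s
rng = np.random.default_rng(1)
fs, duration, delay = 60.0, 60.0, 0.5                # assumed values
t = np.arange(0.0, duration, 1.0 / fs)
d = int(delay * fs)                                  # delay in samples
base = rng.standard_normal(t.size + d)
scene = base[d:]                                     # scene leads by d samples
com = base[:t.size] + 0.05 * rng.standard_normal(t.size)

print(lag_seconds(scene, com, fs))                   # 0.5 for these signals
```

With a periodic stimulus such as the 0.1 Hz scene translation, the cross-correlation peak repeats every cycle, so in practice the lag is taken within one stimulus period of zero.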
A natural progression from our findings on the role of perceptual choice in the VE was to explore whether we could find clinical signs in the response patterns of individuals who complained of perceptual dysfunction (i.e., dizziness). We chose to study a group of patients classified as visually sensitive. This diagnosis encompasses individuals who complain of dizziness provoked by visual environments with full field of view repetitive or moving visual patterns [51]. Visual vertigo is present in some patients with a history of a peripheral vestibular disorder, but there is also a subset of patients who have no history of vestibular disorder and who test negative for vestibular deficit on traditional clinical tests. We investigated whether the visual sensitivity described by these individuals could be quantified by the magnitude of the postural response to an upward pitch of the VE combined with dorsiflexion tilt of the support surface [52].
We found that the healthy subjects exhibited incremental effects of visual field velocity on the peak angular velocities of the head, but responses of the visually sensitive subjects were not linearly modulated by visual field velocity (Figure 10). Patients with no history of vestibular disorder demonstrated exceedingly large head velocities whereas patients with a history of vestibular disorder exhibited head velocities that fell within the bandwidth of healthy subjects. Thus, our results clearly indicated that the relation between postural kinematics and visual inputs could quantify the presence of a perceptual disorder. From this we concluded that virtual reality technology could be useful for differential diagnosis and specifically designed interventions for individuals whose chief complaint was sensitivity to visual motion.
Figure 10
RMS of head velocity across a 1 sec period following a 30 deg/sec dorsiflexion tilt of the base of support while the scene was dark, matched to the head motion (0 deg/sec), matched to the velocity of the base of support (30 deg/sec), or moving at velocities …
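The distinction drawn above, linear versus non-linear modulation of head velocity by visual field velocity, can be indexed with a simple least-squares fit, where the slope captures the gain of the modulation and R² its linearity. The numbers below are hypothetical values mimicking the incremental responses of healthy subjects, not data from the study.

```python
import numpy as np

# Hypothetical peak head angular velocities (deg/s) at each scene velocity (deg/s)
scene_vel = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
head_vel = np.array([2.1, 4.8, 7.4, 10.2, 12.9])

# Degree-1 least-squares fit; R^2 serves as a simple index of linear modulation
slope, intercept = np.polyfit(scene_vel, head_vel, 1)
pred = slope * scene_vel + intercept
ss_res = np.sum((head_vel - pred) ** 2)
ss_tot = np.sum((head_vel - np.mean(head_vel)) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(round(slope, 3), round(r_squared, 3))    # slope ≈ 0.09, R² ≈ 1.0
```

On this index, the healthy subjects' responses would yield a high R², whereas the visually sensitive patients' responses, which were not linearly modulated by scene velocity, would not.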
We have also started to explore whether the VE could be used to measure improvements following balance retraining. We have tested one patient with bilateral vestibular deficit and another with benign paroxysmal positional vertigo (BPPV) following a training paradigm that focused on somatosensory feedback. To test whether balance had improved following treatment, we placed them in a VE that moved in counterclockwise roll while they were standing on a platform that was sway-referenced to the motion of their center of mass. At the same time, they were instructed to point to a target that moved laterally in their visual field. Although these are preliminary results, we have been able to demonstrate that visual field motion is less destabilizing following the balance training program than prior to the training period (Figure 11), which suggests that VR technology holds particular promise as a clinical evaluation tool.
Figure 11
Center of pressure responses of a BPPV subject before (top traces) and following (bottom traces) balance training. The subject stood on an unstable support surface while in the dark, viewing a scene matched to her head motion (still), viewing a scene …
We have based the development of our laboratory on many years of research in psychology and perceptual psychophysics. The understanding of how perception and action are linked, and of the role of sensation in the production of complex behaviors, has guided the direction of the laboratory and the design of our experiments. One of the most compelling pieces of evidence for why we need to place a great informational load on the CNS to achieve our goal of exploring decision-making processes in postural control comes from studies with pilots. A large literature has explored the changes that take place when pilots are exposed to increasingly difficult tasks (e.g., landing in calm air vs. during a turbulent storm) [53]. Evaluations of pilot performance during these variable conditions often produced no differences on measures of performance. Clearly something must have been changing as a result of the different conditions, but this change was not reflected in the pilots' motor output. What researchers found is that pilot workload changes dramatically in each case in order to maintain a consistent level of performance (e.g., a smooth landing). It is only when conditions stress the pilot to the point where a further increase in workload cannot be tolerated that we see a decrease in performance and the influence of other environmental factors. Similarly, in order to expose properties of the systems involved in maintaining a stable posture, we needed to stress the spatial orientation system to the point where the performance of the subject changed.
In much of our data there is evidence of individual preferences for selecting and weighting sensory information [54–57]. Subtle differences in postural control may therefore go unnoticed, as multiple combinations of sensory information and joint coordination patterns can yield similar postural outcomes. Only by taxing the biomechanical limits of the system were we able to observe differences in how these combinations affected the subject's ability to maintain balance. Thus, the flexibility of the CNS in accommodating a wide variety of task constraints presents a particular challenge when attempting to evaluate postural disorders and to design an intervention that will fully engage the adaptive properties of the system. We believe that our laboratory presents such an environment and that we have the potential to use the VE as both a diagnostic and a treatment tool for patients with poorly diagnosed balance disorders. The potential of our laboratory as a rehabilitation tool is promising given our finding that, within our VE, we could distinguish the postural responses of patients with visual sensitivity, who present with oscillopsia but have no hard clinical signs, from those of a healthy population [52]. We have also had some initial success in using the VE to test the carryover of a postural training paradigm in patients with vestibular deficit.
Future directions for our laboratory, and for virtual technology to be considered seriously as a rehabilitative tool, must include studies to determine how immersive the VE must be, and how strong its stimuli, to produce changes in the perception of vertical and spatial orientation. Does the VE need to project a stereo image, and how wide must the field of view be? Can we identify how to make more economical systems for the treatment and diagnosis of postural disorders? Finally, we must ask how to make these systems user-friendly (and safe) for either the clinic or home-based use.
Acknowledgements
The research reported here was supported by National Institutes of Health (NIH) grants DC01125 and DC05235 from the National Institute on Deafness and Communication Disorders and grants AG16359 and AG26470 from the National Institute on Aging. The virtual reality research, collaborations, and outreach programs at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago are made possible by major funding from the National Science Foundation (NSF), awards EIA-9802090, EIA-9871058, ANI-9980480, and ANI-9730202, as well as the NSF Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement ACI-9619019 to the National Computational Science Alliance. The authors thank VRCO Inc. for the use of their CAVElib and Trackd software and our colleagues J. Streepy, K. Dokka, and J. Langston for their collaboration on some of this work.
Footnotes
1. The CAVE is a registered trademark of the Board of Trustees of the University of Illinois.
1. Young LR. Vestibular reactions to spaceflight: human factors issues. Aviation, Space, and Environmental Medicine. 2000;71:A100–A104. [PubMed]
2. Black FO, Wall C, Nashner LM. Effects of visual and support surface orientation references upon postural control in vestibular deficient subjects. Acta Otolaryngologica. 1983;95:199–201. [PubMed]
3. Keshner EA, Allum JH, Pfaltz CR. Postural coactivation and adaptation in the sway stabilizing responses of normals and patients with bilateral vestibular deficit. Experimental Brain Research. 1987;69:77–92. [PubMed]
4. Horak FB, Nashner LM. Central programming of postural movements: adaptation to altered support-surface configurations. Journal of Neurophysiology. 1986;55:1369–1381. [PubMed]
5. Oie KS, Kiemel T, Jeka JJ. Multisensory fusion: simultaneous re-weighting of vision and touch for the control of human posture. Cognitive Brain Research. 2002;14:164–176. [PubMed]
6. Peterka RJ. Sensorimotor integration in human postural control. Journal of Neurophysiology. 2002;88:1097–1118. [PubMed]
7. Stratton GM. Vision without inversion of the retinal image. Psychological Review. 1897;4:463–481.
8. Oman CM, Bock OL, Huang JK. Visually induced self-motion sensation adapts rapidly to left-right visual reversal. Science. 1980;209:706–708. [PubMed]
9. Dichgans J, Brandt T. Visual-vestibular interaction and motion perception. British Journal of Ophthalmology. 1972;82:327–338. [PubMed]
10. Young LR, Oman CM, Watt DG, Money KE, Lichtenberg BK, Kenyon RV, Arrott AP. M.I.T./Canadian vestibular experiments on the Spacelab-1 mission: 1. Sensory adaptation to weightlessness and readaptation to one-g: an overview. Experimental Brain Research. 1986;64:291–298. [PubMed]
11. Weghorst S, Prothero J, Furness T, Anson D, Riess T. Virtual images in the treatment of Parkinson's disease akinesia. In: Morgan K, Satvara RM, Sieburg HB, Matheus R, Christensens JP, editors. Medicine Meets Virtual Reality II. Vol. 30. 1995. pp. 242–243.
12. Ormsby CC, Young L. Perception of static orientation in a constant gravitoinertial environment. Aviation, Space, and Environmental Medicine. 1976;47:159–164. [PubMed]
13. Sadowski W, Stanney K. Presence in Virtual Environments. In: Stanney KM, editor. Handbook of Virtual Environments: Design, Implementation, and Applications. London: Lawrence Erlbaum Associates, Inc; 2002. pp. 791–806.
14. Slater M. Presence - the view from Marina del Rey. 2008. http://www.presence-thoughts.blogspot.com/
15. Lathan CE, Tracey MR, Sebrechts MM, Clawson DM, Higgins GA. Using Virtual Environments as Training Simulators: Measuring Transfer. In: Stanney KM, editor. Handbook of Virtual Environments: Design, Implementation, and Applications. London: Lawrence Erlbaum Associates, Inc; 2002. pp. 403–414.
16. Thurrell AE, Bronstein AM. Vection increases the magnitude and accuracy of visually evoked postural responses. Experimental Brain Research. 2002;147:558–560. [PubMed]
17. Cruz-Neira C, Sandin D, DeFanti T, Kenyon R, Hart J. The CAVE: Audio Visual Experience Automatic Virtual Environment. Communications of the ACM. 1992;35:64–72.
18. Aw ST, Todd MJ, Halmagyi GM. Latency and initiation of the human vestibuloocular reflex to pulsed galvanic stimulation. Journal of Neurophysiology. 2006;96:925–930. [PubMed]
19. Keshner EA, Kenyon RV. Using immersive technology for postural research and rehabilitation. Assistive Technology. 2004;16:54–62. [PubMed]
20. Dichgans J, Held R, Young LR, Brandt T. Moving visual scenes influence the apparent direction of gravity. Science. 1972;178:1217–1219. [PubMed]
21. Streepey J, Kenyon RV, Keshner EA. Visual motion combined with base of support width reveals variable field dependency in healthy young adults. Experimental Brain Research. 2006;176:182–187. [PubMed]
22. Dijkstra TM, Schoner G, Gielen CC. Temporal stability of the action-perception cycle for postural control in a moving visual environment. Experimental Brain Research. 1994;97:477–486. [PubMed]
23. Previc FH. The effects of dynamic visual stimulation on perception and motor control. Journal of Vestibular Research. 1992;2:285–295. [PubMed]
24. Keshner EA, Woollacott MH, Debu B. Neck, trunk and limb muscle responses during postural perturbations in humans. Experimental Brain Research. 1988;71:455–466. [PubMed]
25. Buchanan JJ, Horak FB. Emergence of postural patterns as a function of vision and translation frequency. Journal of Neurophysiology. 1999;81:2325–2339. [PubMed]
26. Cutting J, Vishton PM. Handbook of Perception and Cognition: Perception of Space and Motion. 2nd ed. Academic Press; 1995. Perceiving Layout and Knowing Distances: The Integration, Relative Potency, and Contextual Use of Different Information About Depth; pp. 69–117.
27. Streepey J, Kenyon RV, Keshner EA. Field of view and base of support width influence postural responses to visual stimuli during quiet stance. Gait and Posture. 2006;25:49–55. [PubMed]
28. Kuo AD, Speers RA, Peterka RJ, Horak FB. Effect of altered sensory conditions on multivariate descriptors of human postural sway. Experimental Brain Research. 1998;122:185–195. [PubMed]
29. Mergner T, Glasauer S. A simple model of vestibular canal-otolith signal fusion. Annals of the New York Academy of Sciences. 1999;871:430–434. [PubMed]
30. Mergner T, Maurer C, Peterka RJ. A multisensory posture control model of human upright stance. Progress in Brain Research. 2003;142:189–201. [PubMed]
31. Mergner T, Rosemeier T. Interaction of vestibular, somatosensory and visual signals for postural control and motion perception under terrestrial and microgravity conditions-a conceptual model. Brain Research Reviews. 1998;28:118–135. [PubMed]
32. Previc FH, Kenyon RV, Boer ER, Johnson BH. The effects of background visual roll stimulation on postural and manual control and self-motion perception. Perception & Psychophysics. 1993;54:93–107. [PubMed]
33. Keshner EA, Kenyon RV. The influence of an immersive virtual environment on the segmental organization of postural stabilizing responses. Journal of Vestibular Research. 2000;10:207–219. [PubMed]
34. Winter DA, Patla AE, Frank JS, Walt SE. Biomechanical walking pattern changes in the fit and healthy elderly. Physical Therapy. 1990;70:340–347. [PubMed]
35. Gonshor A, Jones GM. Postural adaptation to prolonged optical reversal of vision in man. Brain Research. 1980;192:239–248. [PubMed]
36. Thurrell A, Bertholon P, Bronstein AM. Reorientation of a visually evoked postural response during passive whole body rotation. Experimental Brain Research. 2000;133:229–232. [PubMed]
37. Dichgans J, Brandt T. Visual-vestibular interaction: effects on self-motion perception and postural control. In: Held R, Leibowitz HW, Teuber HL, editors. Handbook of sensory physiology. New York: Springer; 1978. pp. 755–804.
38. Fushiki H, Takata S, Watanabe Y. Influence of fixation on circular vection. Journal of Vestibular Research. 2000;10:151–155. [PubMed]
39. Keshner EA, Dokka K, Kenyon RV. Influences of the perception of self-motion on Postural parameters in a dynamic visual environment. Cyberpsychology and Behavior. 2006;9:163–166. [PubMed]
40. Lackner JR, DiZio P. Visual stimulation affects the perception of voluntary leg movements during walking. Perception. 1988;17:71–80. [PubMed]
41. Keshner EA, Kenyon RV, Langston J. Postural responses exhibit multisensory dependencies with discordant visual and support surface motion. Journal of Vestibular Research. 2004;14:307–319. [PubMed]
42. Keshner EA, Kenyon RV, Dhaher Y. Postural research and rehabilitation in an immersive virtual environment; Conference Proceedings IEEE Engineering in Medicine & Biology Society; 2004. pp. 4862–4865. [PubMed]
43. Keshner EA, Kenyon RV, Dhaher YY, Streepey JW. Employing a virtual environment in postural research and rehabilitation to reveal the impact of visual information. International Journal on Disability and Human Development. 2005;4:177–182.
44. Brandt T, Bartenstein P, Janek A, Dieterich M. Reciprocal inhibitory visual-vestibular interaction. Visual motion stimulation deactivates the parieto-insular vestibular cortex. Brain. 1998;121(9):1749–1758. [PubMed]
45. Brandt T, Glasauer S, Stephan T, Bense S, Yousry TA, Deutschlander A, Dieterich M. Visual-vestibular and visuovisual cortical interaction: new insights from fMRI and pet. Annals of the New York Academy of Sciences. 2002;956:230–241. [PubMed]
46. Dieterich M, Brandt T. Brain activation studies on visual-vestibular and ocular motor interaction. Current Opinion in Neurology. 2000;13:13–18. [PubMed]
47. Kleinschmidt A, Thilo KV, Buchel C, Gresty MA, Bronstein AM, Frackowiak RS. Neural correlates of visual-motion perception as object-or self-motion. Neuroimage. 2002;16:873–882. [PubMed]
48. Xerri C, Borel L, Barthelemy J, Lacour M. Synergistic interactions and functional working range of the visual and vestibular systems in postural control: neuronal correlates. Progress in Brain Research. 1988;76:193–203. [PubMed]
49. Dokka K, Kenyon R, Keshner EA. Influence of visual velocity on head stabilization. Society for Neuroscience. 2006
50. Lambrey S, Berthoz A. Combination of conflicting visual and non-visual information for estimating actively performed body turns in virtual reality. International Journal of Psychophysiology. 2003;50:101–115. [PubMed]
51. Bronstein AM. The visual vertigo syndrome. Acta Otolaryngol Suppl. 1995;520(1):45–48. [PubMed]
52. Keshner EA, Streepey J, Dhaher Y, Hain T. Pairing virtual reality with dynamic posturography serves to differentiate between patients experiencing visual vertigo. Journal of NeuroEngineering and Rehabilitation. 2007;4:24. [PMC free article] [PubMed]
53. Gopher D, Donchin E. Workload - An Examination of the Concept. In: Boff KR, Kaufman L, Thomas JP, editors. Handbook of perception and human performance. New York: Wiley; 1986. pp. 41-1–41-49.
54. Gurfinkel VS, Ivanenko Yu P, Levik Yu S, Babakova IA. Kinesthetic reference for human orthograde posture. Neuroscience. 1995;68:229–243. [PubMed]
55. Isableu B, Ohlmann T, Cremieux J, Amblard B. Selection of spatial frame of reference and postural control variability. Experimental Brain Research. 1997;114:584–589. [PubMed]
56. Isableu B, Ohlmann T, Cremieux J, Amblard B. Differential approach to strategies of segmental stabilisation in postural control. Experimental Brain Research. 2003;150:208–221. [PubMed]
57. Kluzik J, Horak FB, Peterka RJ. Differences in preferred reference frames for postural orientation shown by after-effects of stance on an inclined surface. Experimental Brain Research. 2005;162:474–489. [PubMed]