A variety of perceptual correspondences between auditory and visual features have been reported, but few studies have investigated how rhythm, an auditory feature defined purely by dynamics relevant to speech and music, interacts with visual features. Here, we demonstrate a novel crossmodal association between auditory rhythm and visual clutter. Participants were shown a variety of visual scenes from diverse categories and were asked to report the auditory rhythm that perceptually matched each scene by adjusting the rate of amplitude modulation (AM) of a sound. Participants matched each scene to a specific AM rate with surprising consistency. A spatial-frequency analysis showed that scenes with larger contrast energy in midrange spatial frequencies were matched to faster AM rates. Bandpass-filtering the scenes indicated that large contrast energy in this spatial-frequency range is associated with an abundance of object boundaries and contours, suggesting that participants matched more cluttered scenes to faster AM rates. Consistent with this hypothesis, AM-rate matches were strongly correlated with perceived clutter. Additional results indicate that both AM-rate matches and perceived clutter depend on object-based (cycles per object) rather than retinal (cycles per degree of visual angle) spatial frequency. Taken together, these results suggest a systematic crossmodal association between auditory rhythm, representing density in the temporal domain, and visual clutter, representing object-based density in the spatial domain. This association may allow the use of auditory rhythm to influence how visual clutter is perceived and attended.
Crossmodal; multisensory integration; spatial frequency; amplitude modulation rate; natural scenes; density; visual clutter
Emergence of antibiotic-resistant bacteria in the aquaculture environment is a significant problem for disease control of cultured fish as well as for human public health. Conjugative mobile genetic elements (MGEs) are involved in the dissemination of antibiotic resistance genes (ARGs) among marine bacteria. In the present study, we first designed a PCR targeting the traI gene, which encodes the relaxase essential for conjugation. Using this new PCR, we demonstrated that five of 83 strains isolated from a coastal aquaculture site had traI-positive MGEs. While one of the five strains, which belonged to Shewanella sp., was shown to carry an integrative conjugative element of the SXT/R391 family (ICEVchMex-like), the MGEs of the other four strains of Vibrio spp. were shown to have a backbone structure similar to that previously described for pAQU1. The backbone structure shared by the pAQU1-like plasmids in the four strains corresponded to a ~100-kbp highly conserved region required for replication, partition, and conjugative transfer, suggesting that these plasmids constitute a “pAQU group.” The pAQU group plasmids were shown to be capable of conjugative transfer of tet(M) and other ARGs from the Vibrio strains to E. coli. The pAQU group plasmid in one of the examined strains was designated pAQU2, and its complete nucleotide sequence was determined and compared with that of pAQU1. The results revealed that pAQU2 contained fewer ARGs than pAQU1, and most of the ARGs in both plasmids were located in a similar region where multiple transposases were found, suggesting that the ARGs were introduced by several DNA transposition events into an ancestral plasmid, followed by drug selection at the aquaculture site. The results of the present study indicate that the “pAQU group” plasmids may play an important role in the dissemination of ARGs in the marine environment.
pAQU group; pAQU2; transferable plasmid; tet(M); antimicrobial resistance genes; SXT/R391 ICEs; aquaculture; traI
Persistence and dispersal of antibiotic resistance genes (ARGs) are important factors for assessing ARG risk in aquaculture environments. Here, we quantitatively detected ARGs for sulphonamides (sul1 and sul2) and trimethoprim (dfrA1), as well as an integrase gene for a class 1 integron (intI1), at aquaculture facilities in the northern Baltic Sea, Finland. The ARGs persisted in sediments below fish farms at very low antibiotic concentrations during the 6-year observation period from 2006 to 2012. Although the ARGs persisted in the farm sediments, they were less prevalent in the surrounding sediments. The copy numbers of the sul1 and intI1 genes were significantly correlated, suggesting that class 1 integrons may play a role in the prevalence of sul1 in the farm sediments through horizontal gene transfer. In conclusion, the presence of ARGs may limit the effectiveness of antibiotics in treating fish diseases, thereby posing a potential risk to the aquaculture industry. However, the restricted presence of ARGs at the farms is unlikely to cause serious effects in the northern Baltic Sea sediment environments around the farms.
We examined the effects of combination therapy with 50 mg/day of sitagliptin and low-dose glimepiride (1 mg/day) in patients with type 2 diabetes.
Twenty-six patients with poorly controlled type 2 diabetes currently taking high-dose glimepiride (≥ 2 mg/day) were enrolled in the study. The dose of glimepiride was reduced to 1 mg/day and 50 mg/day of sitagliptin was added without changing the doses of any other antihyperglycemic agents. The patients were divided into two groups: the low-dose group (2 or 3 mg glimepiride decreased to 1 mg: n = 15) and the high-dose group (4 or 6 mg glimepiride decreased to 1 mg: n = 11).
Combination therapy significantly lowered HbA1c after 24 weeks of treatment in both groups. In the low-dose group, HbA1c decreased from 8.1 ± 0.2% to 7.0 ± 0.1%; in the high-dose group, it decreased from 8.4 ± 0.1% to 7.3 ± 0.2%. The time course of HbA1c reduction in the high-dose group was almost superimposable on that in the low-dose group. There were no changes in body weight and no episodes of hypoglycemia in either group during the study period. In conclusion, our results suggest that the combination therapy used in this study is both well tolerated and effective.
This study indicates the usefulness of dipeptidyl peptidase (DPP)-4 inhibitors in Japanese patients with type 2 diabetes, and also reinforces the importance of low doses of sulfonylurea for effective glycemic management.
Sitagliptin; Glimepiride; Combination therapy; Sulfonylurea; Dipeptidyl peptidase (DPP)-4 inhibitors; Type 2 diabetes mellitus; Antihyperglycemic agents; Hypoglycemia
People naturally dance to music, and research has shown that rhythmic auditory stimuli facilitate production of precisely timed body movements. If motor mechanisms are closely linked to auditory temporal processing, just as auditory temporal processing facilitates movement production, producing action might reciprocally enhance auditory temporal sensitivity. We tested this novel hypothesis with a standard temporal-bisection paradigm, in which the slope of the temporal-bisection function provides a measure of temporal sensitivity. The bisection slope for auditory time perception was steeper when participants initiated each auditory stimulus sequence via a keypress than when they passively heard each sequence, demonstrating that initiating action enhances auditory temporal sensitivity. This enhancement is specific to the auditory modality, because voluntarily initiating each sequence did not enhance visual temporal sensitivity. A control experiment ruled out the possibility that tactile sensation associated with a keypress increased auditory temporal sensitivity. Taken together, these results demonstrate a unique reciprocal relationship between auditory time perception and motor mechanisms. As auditory perception facilitates precisely timed movements, generating action enhances auditory temporal sensitivity.
Action; Auditory temporal sensitivity; Visual temporal sensitivity
Behavioral and neuroimaging findings indicate that distinct cognitive and neural processes underlie solving problems with sudden insight versus with analysis. Moreover, people with less focused attention sometimes perform better on tests of insight and creative problem solving. However, it remains unclear whether different states of attention, within individuals, influence the likelihood of solving problems with insight or with analysis. In this experiment, participants (N = 40) performed a baseline block of verbal problems, then performed one of two visual tasks, each emphasizing a distinct aspect of visual attention, followed by a second block of verbal problems to assess change in performance. After participants engaged in a center-focused flanker task requiring relatively focused visual attention, they reported solving more verbal problems with analytic processing. In contrast, after participants engaged in a rapid object identification task requiring attention to broad space and weak associations, they reported solving more verbal problems with insight. These results suggest that general attention mechanisms influence both visual attention task performance and verbal problem solving.
verbal problem solving; visual attention; insight; creativity; focused attention; broadened attention
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
How rapidly can one voluntarily influence percept generation? The time course of voluntary visual-spatial attention is well studied, but the time course of intentional control over percept generation is relatively unknown. We investigated the latter using “one-shot” apparent motion. When a vertical or horizontal pair of squares is replaced by its 90° rotated version, the bottom-up signal is ambiguous. From this ambiguous signal, it is known that people can intentionally generate a percept of rotation in a desired direction (clockwise or counterclockwise). To determine the time course of this intentional control, we instructed participants to voluntarily induce rotation in a pre-cued direction (clockwise rotation when a high-pitched tone is heard and counterclockwise rotation when a low-pitched tone is heard), and then to report the direction of rotation that was actually perceived. We varied the delay between the instructional cue and the rotated frame (cue-lead time) from 0 ms to 1067 ms. Intentional control became more effective with longer cue-lead times (asymptotically effective at 533 ms). Notably, intentional control was reliable even with a zero cue-lead time; control experiments ruled out response bias and the development of an auditory-visual association as explanations. This demonstrates that people can interpret an auditory cue and intentionally generate a desired motion percept surprisingly rapidly, entirely within the subjectively instantaneous moment in which the visual system constructs a percept of apparent motion.
intentional control; visual bistability; apparent motion; attentive tracking
Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Non-sensory factors such as increased arousal and attention have been thought to mediate this flicker-based temporal-dilation aftereffect. Here we provide evidence that adaptation of low-level cortical visual neurons contributes to this aftereffect. The aftereffect was significantly reduced by a 45° change in Gabor orientation between adaptation and test. Because orientation-tuning bandwidths are smaller in lower-level cortical visual areas and are approximately 45° in human V1, the result suggests that flicker adaptation of orientation-tuned V1 neurons contributes to the temporal-dilation aftereffect. The aftereffect was abolished when the adaptor and test stimuli were presented to different eyes. Because eye preferences are strong in V1 but diminish in higher-level visual areas, the eye specificity of the aftereffect corroborates the involvement of low-level cortical visual neurons. Our results thus suggest that flicker adaptation of low-level cortical visual neurons contributes to expanding visual duration. Furthermore, this temporal-dilation aftereffect dissociates from the previously reported temporal-constriction aftereffect on the basis of the differences in their orientation and flicker-frequency selectivity, suggesting that the visual system possesses at least two distinct and potentially complementary mechanisms for adaptively coding perceived duration.
Expressions of emotion are often brief, providing only fleeting images on which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d′ analysis, and found that categorization was usually above chance for angry versus happy and fearful versus happy, but consistently poor for fearful versus angry expressions. Fearful versus angry categorization was poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry versus happy categorization, but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorization. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms.
emotion detection; expression categorization; face-inversion effect; awareness; face processing
The present study investigated the limits of semantic processing without awareness, during continuous flash suppression (CFS). We used compound remote associate word problems, in which three seemingly unrelated words (e.g., pine, crab, sauce) form a common compound with a single solution word (e.g., apple). During the first 3 s of each trial, the three problem words or three irrelevant words (control condition) were suppressed from awareness, using CFS. The words then became visible, and participants attempted to solve the word problem. Once the participants solved the problem, they indicated whether they had solved it by insight or analytically. Overall, the compound remote associate word problems were solved significantly faster when the problem words, rather than irrelevant words, had been presented during the suppression period. However, this facilitation occurred only when people solved with analysis, not with insight. These results demonstrate that semantic processing, but not necessarily semantic integration, may occur without awareness.
Awareness; Continuous flash suppression; Semantic processing; Semantic integration; Binocular rivalry; Problem solving
The brain receives input from multiple sensory modalities simultaneously, yet we experience the outside world as a single integrated percept. This integration process must overcome instances where perceptual information conflicts across sensory modalities. Under such conflicts, the relative weighting of information from each modality typically depends on the given task. For conflicts between visual and haptic modalities, visual information has been shown to influence haptic judgments of object identity, spatial features (e.g., location, size), texture, and heaviness. Here we test a novel instance of haptic–visual conflict in the perception of torque. We asked participants to hold a left–right unbalanced object while viewing a potentially left–right mirror-reversed image of the object. Despite the intuition that the more proximal haptic information should dominate the perception of torque, we find that visual information exerts substantial influences on torque perception even when participants know that visual information is unreliable.
sensory integration; crossmodal perception; visual; haptic; weight distribution; torque perception
While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes to perception of 3D space, objects and faces. Hearing a /woo/ sound increases the apparent vertical elongation of a shape, whereas hearing a /wee/ sound increases the apparent horizontal elongation. We further demonstrate that these sounds influence aspect ratio coding. Viewing and adapting to a tall (or flat) shape makes a subsequently presented symmetric shape appear flat (or tall). These aspect ratio aftereffects are enhanced when associated speech sounds are presented during the adaptation period, suggesting that the sounds influence visual population coding of aspect ratio. Taken together, these results extend previous demonstrations that visual information constrains auditory perception by showing the converse – speech sounds influence visual perception of a basic geometric feature.
Auditory–visual; Aspect ratio; Crossmodal; Shape perception; Speech perception
Background: There is growing concern worldwide about the role of polluted soil and water environments in the development and dissemination of antibiotic resistance.
Objective: Our aim in this study was to identify management options for reducing the spread of antibiotics and antibiotic-resistance determinants via environmental pathways, with the ultimate goal of extending the useful life span of antibiotics. We also examined incentives and disincentives for action.
Methods: We focused on management options with respect to limiting agricultural sources; treatment of domestic, hospital, and industrial wastewater; and aquaculture.
Discussion: We identified several options, such as nutrient management, runoff control, and infrastructure upgrades. Where appropriate, a cross-section of examples from various regions of the world is provided. The importance of monitoring and validating effectiveness of management strategies is also highlighted. Finally, we describe a case study in Sweden that illustrates the critical role of communication to engage stakeholders and promote action.
Conclusions: Environmental releases of antibiotics and antibiotic-resistant bacteria can in many cases be reduced at little or no cost. Some management options are synergistic with existing policies and goals. The anticipated benefit is an extended useful life span for current and future antibiotics. Although risk reductions are often difficult to quantify, the severity of accelerating worldwide morbidity and mortality rates associated with antibiotic resistance strongly indicates the need for action.
agriculture; antibiotic manufacturing; antibiotic resistance; aquaculture; livestock; manure management; policy; wastewater treatment
visual spatial frequency; auditory amplitude-modulation rate; auditory-visual interactions
When attention is directed to the local or global level of a hierarchical stimulus, attending to that same scale of information is subsequently facilitated. This effect is called level-priming, and in its pure form, it has been dissociated from stimulus- or response-repetition priming. In previous studies, pure level-priming has been demonstrated using hierarchical stimuli composed of alphanumeric forms consisting of lines. Here, we test whether pure level-priming extends to hierarchical configurations of generic geometric forms composed of elements that can be depicted either outlined or filled-in. Interestingly, whereas hierarchical stimuli composed of outlined elements benefited from pure level-priming, for both local and global targets, those composed of filled-in elements did not. The results are not readily attributable to differences in spatial frequency content, suggesting that forms composed of outlined and filled-in elements are treated differently by attention and/or priming mechanisms. Because our results present a surprising limit on attentional persistence to scale, we propose that other findings in the attention and priming literature be evaluated for their generalizability across a broad range of stimulus classes, including outlined and filled-in depictions.
priming; local; global; attention; hierarchical stimuli
Reading comprehension depends on neural processes supporting the access, understanding, and storage of words over time. Examinations of the neural activity correlated with reading have contributed to our understanding of reading comprehension, especially for the comprehension of sentences and short passages. However, the neural activity associated with comprehending an extended text is not well-understood. Here we describe a current-source-density (CSD) index that predicts individual differences in the comprehension of an extended text. The index is the difference in CSD-transformed event-related potentials (ERPs) to a target word between two conditions: a comprehension condition with words from a story presented in their original order, and a scrambled condition with the same words presented in a randomized order. In both conditions participants responded to the target word, and in the comprehension condition they also tried to follow the story in preparation for a comprehension test. We reasoned that the spatiotemporal pattern of difference-CSDs would reflect comprehension-related processes beyond word-level processing. We used a pattern-classification method to identify the component of the difference-CSDs that accurately (88%) discriminated good from poor comprehenders. The critical CSD index was focused at a frontal-midline scalp site, occurred 400–500 ms after target-word onset, and was strongly correlated with comprehension performance. Behavioral data indicated that group differences in effort or motor preparation could not explain these results. Further, our CSD index appears to be distinct from the well-known P300 and N400 components, and CSD transformation seems to be crucial for distinguishing good from poor comprehenders using our experimental paradigm. Once our CSD index is fully characterized, this neural signature of individual differences in extended-text comprehension may aid the diagnosis and remediation of reading comprehension deficits.
reading comprehension; EEG/ERP; machine learning applied to neuroscience; current source density; working memory
How do the characteristics of sounds influence the allocation of visual-spatial attention? Natural sounds typically change in frequency. Here we demonstrate that the direction of frequency change guides visual-spatial attention more strongly than the average or ending frequency, and provide evidence suggesting that this cross-modal effect may be mediated by perceptual experience. We used a Go/No-Go color-matching task to avoid response compatibility confounds. Participants performed the task either with their heads upright or tilted by 90°, misaligning the head-centered and environmental axes. The first of two colored circles was presented at fixation and the second was presented in one of four surrounding positions in a cardinal or diagonal direction. Either an ascending or descending auditory-frequency sweep was presented coincident with the first circle. Participants were instructed to respond to the color match between the two circles and to ignore the uninformative sounds. Ascending frequency sweeps facilitated performance (response time and/or sensitivity) when the second circle was presented at the cardinal top position and descending sweeps facilitated performance when the second circle was presented at the cardinal bottom position; there were no effects of the average or ending frequency. The sweeps had no effects when circles were presented at diagonal locations, and head tilt entirely eliminated the effect. Thus, visual-spatial cueing by pitch change is narrowly tuned to vertical directions and dominates any effect of average or ending frequency. Because this cross-modal cueing is dependent on the alignment of head-centered and environmental axes, it may develop through associative learning during waking upright experience.
cross-modal perception; auditory-visual interactions; visual-spatial attention; implicit attentional processing; multi-modal cognition
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding.
awareness; pattern adaptation; visual perception
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. 
Thus, object-based auditory-visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.
The rise of systems biology and the availability of highly curated gene and molecular information resources have promoted a comprehensive approach to studying disease as the cumulative deleterious function of a collection of individual genes and networks of molecules acting in concert. These "human disease networks" (HDN) have revealed novel candidate genes and pharmaceutical targets for many diseases and identified fundamental HDN features conserved across diseases. A network-based analysis is particularly vital for studying polygenic diseases, where many interactions between molecules should be simultaneously examined and elucidated. We employ a new knowledge-driven HDN gene and molecular database systems approach to analyze Inflammatory Bowel Disease (IBD), whose pathogenesis remains largely unknown.
Methods and Results
Based on drug indications for IBD, we determined sibling diseases of the mild and severe states of IBD. Approximately 1,000 genes associated with the sibling diseases were retrieved from four databases. After ranking the genes by the frequency of records in the databases, we obtained 250 and 253 genes highly associated with the mild and severe IBD states, respectively. We then calculated functional similarities of these genes with known drug targets, and examined their interactions, presented as protein–protein interaction (PPI) networks.
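The frequency-based ranking step described above can be sketched as follows. This is a minimal illustration only: the gene symbols and database names are hypothetical placeholders, not the actual data sources or genes used in the study.

```python
from collections import Counter

def rank_genes_by_frequency(database_hits, top_n):
    """Rank genes by how many database records mention them.

    database_hits: dict mapping a database name to a list of gene
    symbols found in its records (a gene may appear multiple times
    and in multiple databases). Returns the top_n genes ordered by
    total record frequency across all databases.
    """
    counts = Counter()
    for genes in database_hits.values():
        counts.update(genes)  # accumulate record counts per gene
    return [gene for gene, _ in counts.most_common(top_n)]

# Toy example with illustrative gene symbols and database names.
hits = {
    "db_A": ["NOD2", "IL23R", "TNF", "NOD2"],
    "db_B": ["IL23R", "NOD2"],
    "db_C": ["TNF"],
    "db_D": ["ATG16L1"],
}
print(rank_genes_by_frequency(hits, 2))  # → ['NOD2', 'IL23R']
```

In the study itself, the same idea would be applied to the ~1,000 sibling-disease genes across the four databases, keeping the top-ranked 250 (mild) and 253 (severe) genes.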
The results demonstrate that this knowledge-based systems approach, predicated on functionally similar genes important to sibling diseases, is an effective method to identify important components of the IBD human disease network. Our approach elucidates a previously unknown biological distinction between the mild and severe IBD states.
Inflammatory bowel disease (IBD); Disease related genes; Protein-protein interaction networks; GO based functional score; Interpretation of pathogenesis
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent.
Crossmodal interaction; emotion; facial expressions; laughter
Southeast Asia has become the center of rapid industrial development and economic growth. However, this growth has far outpaced investment in public infrastructure, leading to the unregulated release of many pollutants, including wastewater-related contaminants such as antibiotics. Antibiotics are of major concern because they can easily be released into the environment from numerous sources, and can subsequently induce development of antibiotic-resistant bacteria. Recent studies have shown that, for some categories of drugs, this source-to-environment antibiotic-resistance relationship is more complex. This review summarizes current understanding regarding the presence of quinolones, sulfonamides, and tetracyclines in aquatic environments of Indochina and the prevalence of bacteria resistant to them. Several noteworthy findings are discussed: (1) quinolone contamination and the occurrence of quinolone resistance are not correlated; (2) occurrence of the sul sulfonamide resistance gene varies geographically; and (3) microbial diversity might be related to the rate of oxytetracycline resistance.
Indochina; environment; quinolone; sulfonamide; tetracycline; resistance gene; bacteria