Int J Telerehabil. 2016 Fall; 8(2): 9–14.
Published online 2016 December 15. doi: 10.5195/ijt.2016.6202
PMCID: PMC5536725

Influence of Smartphones and Software on Acoustic Voice Measures

Abstract

This study assessed the within-subject variability of voice measures captured using different recording devices (i.e., smartphones and head mounted microphone) and software programs (i.e., Analysis of Dysphonia in Speech and Voice (ADSV), Multi-dimensional Voice Program (MDVP), and Praat). Correlations between the software programs that calculated the voice measures were also analyzed. Results demonstrated no significant within-subject variability across devices and software and that some of the measures were highly correlated across software programs. The study suggests that certain smartphones may be appropriate to record daily voice measures representing the effects of vocal loading within individuals. In addition, even though different algorithms are used to compute voice measures across software programs, some of the programs and measures share a similar relationship.

Keywords: Acoustic, Applications, Apps, Smartphones, Voice

Voice disorders are the most common communication disorder across the lifespan, impacting 7.5 million people in the USA (NIDCD, 2014), with one in 13 adults affected annually (Bhattacharyya, 2014). In addition, there is evidence that voice problems may negatively influence health-related quality of life (Cohen, 2010). In standard clinical practice, patients with voice problems visit a multidisciplinary team consisting of a speech-language pathologist (SLP), otolaryngologist, and singing voice specialist for pre- and post-treatment evaluations. This approach provides only two “snapshots” of perceptual, acoustic, and aerodynamic measures. Such snapshots of vocal evaluation cannot fully capture the day-to-day effects of vocal loading (i.e., repeated vocal fold posturing or excessive tissue vibration), especially in professional voice users. Indeed, the effects of vocal loading after periods of intense activity (e.g., teaching all day) may not manifest during SLP pre- and post-evaluation sessions (Grillo & Fugowski, 2011). Therefore, there is a need to adopt a model of vocal monitoring that is more frequent, accessible, and user-friendly. Some recent studies have demonstrated ambulatory monitoring of voice, albeit in a research setting (Hunter, 2012; Mehta et al., 2013, 2015). These studies typically involve specialized equipment that is expensive and available only to researchers.

One solution that is readily available and easy to use involves applications (apps) downloaded to smartphones. Smartphones and apps are a part of everyday life and will continue to increase in presence over the next decade. Projections for 2013–2017 suggest that smartphone use will rise from 61.1% to 69.4% globally, with 1.75 billion people using such devices by 2014 (eMarketer, 2014). As of September 2014, 71% of people in the USA owned a smartphone, with 85% of millennials (i.e., people aged 18–24) owning the devices (Nielsen, 2014). The weekly time spent using apps increased from 23 hours in 2012 to 37 hours in 2014, a 63% rise in just two years (Nielsen, 2015). Apps that run on mobile devices offer software solutions that extend the reach and productivity of a typical data collection session completed in person with an SLP.

There are numerous voice or speech recording apps that run on iOS, Android, and Windows platforms. For the purposes of voice and speech analysis, apps that record .wav files at a sampling rate of 44,100 Hz are sufficient (Plichta & Kornbluh, 2002). The SLP could require that the patient record his/her voice before and after talking for the day using the app on the smartphone and email the files to the SLP. The SLP could then analyze those files with software that is typically used for voice analysis (e.g., Praat). The ease of recording the voice throughout a day of talking via the smartphone will provide realistic data that better represent the effects of vocal loading on the voice.
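As a minimal sketch of how an SLP might screen an emailed recording before analysis, the Python snippet below checks that a file is an uncompressed .wav sampled at 44,100 Hz; the file name is hypothetical and the check relies only on the standard-library wave module.

```python
import wave

def check_recording(path):
    """Verify that an emailed recording is an uncompressed .wav at 44,100 Hz
    and return its duration in seconds. The path is a hypothetical example."""
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()            # samples per second
        duration = wav.getnframes() / rate   # total length in seconds
    if rate != 44100:
        raise ValueError(f"Expected 44,100 Hz, got {rate} Hz in {path}")
    return duration

# Example (assumed file name): check_recording("pre_loading_a.wav")
```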

Previous work has suggested that sound measurement apps for Apple smartphones may be considered accurate and reliable for assessing occupational noise exposure (Kardous & Shaw, 2014), and that correlations of acoustic measures taken simultaneously from a head mounted microphone and a Samsung Galaxy Note 3 were significant and strong (r = 0.73; Uloza et al., 2015). The purpose of the current pilot study was to compare within-subject variability among voice measures across different recording devices (i.e., head mounted microphone, Apple, and Android smartphones) and software (i.e., ADSV, MDVP, and Praat). In addition, correlations among the voice software programs that provided the analysis were assessed.

METHODS

Ten vocally healthy women and men produced three trials of /a/ sustained for five seconds and three trials of “we were away a year ago” at a comfortable fundamental frequency (F0) and intensity. “We were away a year ago” was selected because all of its phonemes are voiced, providing a connected speech example of continuous vocal fold vibration. The vocal health of the participants was determined perceptually by the researchers during conversational speech on the day of testing. Each trial was separated by 10 seconds. A head mounted condenser microphone (AKG C420, Northridge, CA), an iPhone 5, an iPhone 6s, and a Samsung Galaxy S5 were placed 4 centimeters (cm) from the participant’s mouth for voice recording (see Figure 1). A 4 cm plastic stick was used to measure the distance from mouth to microphones. All utterances were recorded simultaneously on all devices. Three apps recorded the .wav files: RØDE Rec LE (iPhone 5) and Recordium (iPhone 6s) for Apple, and Smart Voice Recorder for the Samsung Galaxy S5. These apps were free, allowed email of the recorded .wav files, and offered a 44,100 Hz sampling rate for recording. The .wav files from the head mounted microphone were saved directly onto the computer that performed the analysis. The middle portion of /a/ (i.e., four seconds, with 0.5 seconds trimmed off the beginning and end) and the entire sentence were analyzed.
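The trimming step described above can be illustrated with a short sketch; the snippet below keeps the middle four seconds of a five-second recording by removing 0.5 seconds from each end, using only the standard-library wave module (file names are hypothetical).

```python
import wave

def trim_middle(in_path, out_path, trim_s=0.5):
    """Keep the middle of a sustained /a/ by removing trim_s seconds from the
    beginning and end, mirroring the 5 s -> 4 s trim described in the Methods."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        n_frames = src.getnframes()
        skip = int(trim_s * rate)
        src.setpos(skip)                              # skip the first 0.5 s
        frames = src.readframes(n_frames - 2 * skip)  # stop 0.5 s before the end
    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)       # header frame count is corrected on close
        dst.writeframes(frames)

# trim_middle("a_trial1.wav", "a_trial1_mid.wav")  # assumed file names
```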

Figure 1
The experimental set-up with the recording devices (iPhone 5 and 6s, Samsung Galaxy S5, head mounted microphone) and the plastic stick that measured 4 cm from the mouth to the microphones.

The acoustic analysis was completed using Praat (Boersma & Weenink, 2015), free software available on the web, and KayPENTAX’s (Montvale, NJ) Multi-dimensional Voice Program (MDVP) and Analysis of Dysphonia in Speech and Voice (ADSV). The measures of interest included: fundamental frequency (F0), standard deviation of the F0 (SD of F0), jitter%, shimmer%, noise-to-harmonics ratio (NHR), cepstral peak prominence (CPP), and Acoustic Voice Quality Index (AVQI; Maryn, De Bodt, & Roy, 2010) (see Table 1). The acoustic measures of F0, SD of F0, jitter%, shimmer%, and NHR were chosen because they represent time-based measures of voice in frequency and amplitude from a nearly periodic voice signal and are measured accurately from a sustained vowel. CPP was chosen because it is an alternative to time-based measures and can be applied to continuous speech, which may provide a more representative sample of voice than a sustained vowel. In addition, all of these measures, except AVQI, are among the minimum instrumented measures recommended by the Special Interest Group (SIG) 3, Voice and Voice Disorders, of the American Speech-Language-Hearing Association (ASHA) for completion of a comprehensive voice evaluation.
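For readers who run Praat analyses programmatically, the sketch below uses the praat-parselmouth Python wrapper (an assumption; the study used Praat, MDVP, and ADSV directly) to compute the time-based measures from a sustained vowel. The 75–500 Hz pitch range and the Praat default thresholds are illustrative assumptions, not the study's settings, and CPP and AVQI are omitted here.

```python
import parselmouth
from parselmouth.praat import call

def basic_voice_measures(path, f0_floor=75, f0_ceiling=500):
    """Illustrative time-based measures from a sustained vowel via Praat commands."""
    snd = parselmouth.Sound(path)
    pitch = call(snd, "To Pitch", 0.0, f0_floor, f0_ceiling)
    points = call(snd, "To PointProcess (periodic, cc)", f0_floor, f0_ceiling)
    return {
        "F0_Hz": call(pitch, "Get mean", 0, 0, "Hertz"),
        "SD_F0_Hz": call(pitch, "Get standard deviation", 0, 0, "Hertz"),
        "jitter_pct": 100 * call(points, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3),
        "shimmer_pct": 100 * call([snd, points], "Get shimmer (local)",
                                  0, 0, 0.0001, 0.02, 1.3, 1.6),
    }

# measures = basic_voice_measures("a_trial1_mid.wav")  # assumed file name
```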

Table 1
Acoustic Measures, Definition, Task, and Software

RESULTS

The main effects of software, device, utterance, and trial were analyzed along with two- and three-way interactions for both women and men participants. For F0 and SD of F0, the main effect of utterance was significant for women (p < 0.001 for both), indicating that F0 and SD of F0 differed between /a/ and the sentence. No other significant main effects or interactions were found. For men, all main effects and interactions for F0 were not significant. The differences in F0 seen for women across sustained /a/ and the sentence did not carry over to men. Perhaps with the lower F0s seen in men, distinctions between sustained phonation and connected speech were not apparent in this study. That is, with the added mass of the vocal folds in men, there may be no significant difference in F0 across the speech tasks (i.e., vowel vs. connected speech). For SD of F0 in men, the main effects of software and utterance were significant (p < 0.001). There was also a significant two-way interaction between software and utterance (p < 0.001). No other significant main effects or interactions were seen for SD of F0 in men. The variability around the mean F0 in men did demonstrate differences across sustained phonation and connected speech.
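The article does not name the statistical software used, but a within-subject design of this kind could be sketched in Python with statsmodels as shown below; the long-format table, its file name, and its column names are assumptions for illustration only.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per participant x software x device
# x utterance x trial, with the dependent variable in a column such as "F0".
data = pd.read_csv("voice_measures_women.csv")  # assumed file and column names

# Repeated-measures ANOVA over the within-subject factors; statsmodels requires
# a fully balanced design (every participant measured in every condition).
result = AnovaRM(
    data,
    depvar="F0",                     # repeat for each dependent variable
    subject="participant",
    within=["software", "device", "utterance", "trial"],
).fit()
print(result)                        # F tests for main effects and interactions
```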

For jitter% and shimmer% in women, the main effects for software (p < 0.001) and devices (p < 0.001) and the two-way interaction between software and devices (p < 0.001 for jitter% and p = 0.01 for shimmer%) were significant. For jitter% in men, main effects for software (p < 0.001) and trial (p = 0.01) were significant; however, no interactions were significant. For shimmer% in men, the main effect for devices (p = 0.01) was significant. No other main effects or interactions were significant.

For NHR in women and men, main effects for software (p < 0.001 for women and p = 0.05 for men) and devices (p < 0.001) were significant, but all two- and three-way interactions were not significant.

For CPP in women and men, the main effects for software, devices, and utterance were all significant (p < 0.001), and the two-way interaction between software and devices was significant (p < 0.001 for women and p = 0.04 for men). In addition, for men, the main effect for trial was significant (p < 0.001). Across women and men for CPP, no other main effects or interactions were significant.

For AVQI, software was not a main effect because Praat is the only program that analyzes AVQI. The main effect for devices was significant in both women and men (p < 0.001 for women and p = 0.01 for men). The other main effect of trial and the two-way interaction of devices and trial were not significant for either women or men. Means and standard deviations for all dependent variables are presented in Tables 2 and 3.

Table 2
Means and Standard Deviations (in Parentheses) of the Dependent Variables for Women across Utterance (i.e., /a/ and “we were away a year ago”), Software (i.e., Multi-dimensional Voice Program (MDVP), Analysis of Dysphonia in Speech and ...
Table 3
Means and Standard Deviations (in Parentheses) of the Dependent Variables for Men across Utterance (i.e., /a/ and “we were away a year ago”), Software (i.e., Multi-dimensional Voice Program (MDVP), Analysis of Dysphonia in Speech and Voice ...

Correlations between software programs yielded the following results. There was a strong correlation between CPP values calculated by Praat and ADSV for women (r = 0.96, p < 0.001) and for men (r = 0.94, p < 0.001). For women, there were additional strong correlations between Praat and MDVP for jitter% and NHR (r = 0.64, p < 0.001 for both). Shimmer% in women was not strongly correlated between Praat and MDVP (r = 0.11, p = 0.07). For men, there were no additional strong correlations (jitter% r = 0.198, p < 0.001; NHR r = 0.29, p < 0.001; shimmer% r = 0.12, p = 0.04).
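As a small sketch of this kind of agreement check, the function below computes a Pearson correlation between the same recordings measured in two programs (e.g., CPP from Praat versus CPP from ADSV); the example values in the comment are placeholders, not study data.

```python
from scipy.stats import pearsonr

def software_agreement(values_a, values_b):
    """Pearson correlation between paired measurements of the same recordings
    produced by two software programs (inputs must be aligned per recording)."""
    r, p = pearsonr(values_a, values_b)
    return r, p

# Placeholder example, not study data:
# r, p = software_agreement([4.1, 5.3, 6.0, 5.1], [4.0, 5.5, 6.2, 5.0])
```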

DISCUSSION

Within subjects, for both women and men, the iPhone 5, iPhone 6s, Samsung Galaxy S5, and head mounted microphone yielded no significant differences when comparing voice analysis for F0, SD of F0, jitter%, shimmer%, NHR, CPP, and AVQI across MDVP, ADSV, and Praat. This result is supported by the absence of significant three-way interactions among software, device, and trial, indicating that there was no change in the dependent variables across software and device from trial one to trial three. In addition, the algorithms for calculating jitter%, shimmer%, NHR, and CPP differ across software. Even with the different algorithms, there was a strong correlation between ADSV and Praat for calculating CPP in both women and men and also between MDVP and Praat for calculating jitter% and NHR in women only. The overall values may be different, but the trends for these measures follow similar trajectories. It is interesting to note that jitter% and NHR were not strongly correlated across MDVP and Praat for men. Perhaps the lower F0s are disrupting the relationship between the algorithms. There was no difference between women and men for CPP because it is not a time-based measure.

The current results are consistent with previous work suggesting that certain apps may be used to accurately and reliably measure environmental noise (Kardous & Shaw, 2014) and that a Samsung Galaxy Note 3 compared with a head mounted microphone produced strong correlations between acoustic voice measures (Uloza et al., 2015). A recent study suggests, in contrast, that the use of apps for dB readings of the human voice is premature because none of the three apps tested was comparable to a Larson-Davis (Depew, NY) Model 831 Type 1 sound level meter (SLM) (Fava, Oliveira, Baglione, Pimpinella, & Spitzer, 2016). Results indicated that three SLM apps on an iPhone 5 and a RadioShack (Fort Worth, TX) SLM yielded inconsistent dB readings for the human voice at soft, habitual, and loud intensities when compared with a Type 1 SLM. Frankly, it is not surprising that the results in Fava and colleagues (2016) were significantly different across recording devices for the human voice recordings and outside of the established criterion of ±2 dB. The procedures did not account for within-subject variability across trials. For example, participants only produced one trial of soft /a/ sustained for five seconds. Because the microphones differ across devices, it is expected that the mean results will vary. In fact, the results from the current study were similar to those of Fava and colleagues (2016) when looking only at the main effect of device. In the current study, there were differences in the means of some of the voice measures across the smartphones and the head mounted microphone. The clinically relevant question relates to maintaining microphone recording integrity across trials in the same individual. The current study addressed that question and found that the smartphones and the head mounted microphone tested enabled consistent analysis of the voice measures within subject across women and men.

Considering the results of this pilot study, it is possible to capture reliable daily vocal loading effects using smartphones and free apps. To limit variability, use the same phone and the same app within each individual and require a 4 cm distance from mouth to microphone. The results are applicable to the phones and the apps used in the study. Future work needs to investigate other phones and other apps, especially given the rapid evolution of smartphones. If the SLP does not have access to KayPENTAX’s software (i.e., MDVP and ADSV), the minimum acoustic instrumented measures recommended by SIG 3 of ASHA can still be completed using Praat, a free software program downloaded from the internet. In addition, the SLP can include AVQI, a measure that is only calculated through Praat. CPP measured through Praat is highly correlated with CPP measured through ADSV for both women and men. Jitter% and NHR are also highly correlated between MDVP and Praat for women only. Even for the measures that are not highly correlated between Praat and ADSV or Praat and MDVP, what matters is within-person change. Differences seen in that individual from pre- to post-treatment carry the most weight regardless of the software program used to perform the analysis. The SLP can complete an acoustic voice evaluation, representing the daily effects of vocal loading, using accessible and low-cost options.

ACKNOWLEDGMENTS

This work was supported by National Institute on Deafness and Other Communication Disorders Grant, R15DC014566 awarded to Elizabeth U. Grillo, Principal Investigator.

REFERENCES

  • Boersma P, Weenink D. Praat: Doing phonetics by computer [Computer program]. Version 6.0.04. 2015. Retrieved from http://www.praat.org.
  • Bhattacharyya N. The prevalence of voice problems among adults in the United States. Laryngoscope. 2014;124:2359–2362. [PubMed]
  • Cohen SM. Self-reported impact of dysphonia in a primary care population: An epidemiological study. Laryngoscope. 2010;120:2022–2032. [PubMed]
  • eMarketer. Smartphone users worldwide will total 1.75 billion in 2014. 2014. Retrieved from http://www.emarketer.com/Article/Smartphone-Users-Worldwide-Will-Total-175-Billion-2014/1010536.
  • Fava G, Oliveira G, Baglione M, Pimpinella M, Spitzer JB. The use of sound level meter apps in the clinical setting. American Journal of Speech-Language Pathology. 2016;25:14–28. [PubMed]
  • Grillo EU, Fugowski JM. Voice characteristics of female physical education student teachers. Journal of Voice. 2011;25:149–157. [PubMed]
  • Hunter EJ. Teacher response to ambulatory monitoring of voice. Logopedics Phoniatrics Vocology. 2012;37:133–135. [PubMed]
  • Kardous CA, Shaw PB. Evaluation of smartphone sound measurement applications. JASA Express Letters. 2014;135:186–192. [PMC free article] [PubMed]
  • Maryn Y, De Bodt M, Roy N. The acoustic voice quality index: Toward improved treatment outcomes assessment in voice disorders. Journal of Communication Disorders. 2010;43:161–174. [PubMed]
  • Mehta DD, Zanartu M, Van Stan JH, Feng SW, Cheyne HA, Hillman RE. Smartphone-based detection of voice disorders by long-term monitoring of neck acceleration features. In: 2013 IEEE International Conference on Body Sensor Networks (BSN). IEEE; 2013:1–6.
  • Mehta DD, Van Stan JH, Zañartu M, Ghassemi M, Guttag JV, Espinoza VM, Corets JP, Cheyne HA, Hillman R. Using ambulatory voice monitoring to investigate common voice disorders: Research update. Frontiers in Bioengineering and Biotechnology. 2015;3:155. doi: 10.3389/fbioe.2015.00155. [PMC free article] [PubMed] [Cross Ref]
  • National Institute on Deafness and Other Communication Disorders (NIDCD). Statistics on voice, speech, and language. 2014. Retrieved from http://www.nidcd.nih.gov/health/voice/pages/whatis_vsl.aspx.
  • Nielsen. Mobile millennials: Over 85% of generation Y owns smartphones. 2014. Retrieved from http://www.nielsen.com/us/en/insights/news/2014/mobile-millennials-over-85-percent-of-generation-y-owns-smartphones.html.
  • Nielsen. So many apps, so much more time for entertainment. 2015. Retrieved from http://www.nielsen.com/us/en/insights/news/2015/so-many-apps-so-much-more-time-for-entertainment.html.
  • Plichta B, Kornbluh M. Digitizing speech recordings for archival purposes. 2002. Retrieved from http://www.historicalvoices.org/papers/audio_digitization.pdf.
  • Uloza V, Padervinskis E, Vegiene A, Pribuisiene R, Saferis V, Vaiciukynas E, Gelzinis A, Verikas A. Exploring the feasibility of smart phone microphone for measurement of acoustic voice parameters and voice pathology screening. European Archives of Oto-Rhino-Laryngology. 2015;272:3391–3399. [PubMed]
