Youth remain at high risk for contracting and transmitting human immunodeficiency virus (HIV) and other sexually transmitted diseases (STD). In the U.S., over 5000 people under the age of 20 were diagnosed with HIV or AIDS from 2003–2006. Youth are also among those with the highest rates of STD, and having STD infection substantially increases risk for HIV acquisition and transmission (Centers for Disease Control and Prevention, 2008).
Recent reports indicate that Internet use by youth is now nearly universal, with 93% of teens using the Internet (Pew Internet and American Life Project, 2006). Youth have been reported to spend 40 minutes or more online per day (Gross, 2004; Madden, 2006), and although there are reports that online health seeking is common among youth (Pew Internet and American Life Project, 2006; Pew Internet and American Life Report, 2000; Roberts et al., 2005), data show that youth online primarily spend their time multi-tasking, most often communicating through instant messaging (IM) (Gross, 2004; Rideout, 2001; Roberts et al., 2005). A 2005 study by the Kaiser Family Foundation (KFF) showed that an overwhelming majority of youth have access to a computer in their home and spend an average of one hour on the computer daily. These data include time spent on the Internet, and a substantial portion of the time spent on the computer is actually spent online. In its 1999 assessment of computer use among youth, the KFF documented computer use at an average of 27 minutes per day, and did not ask whether any of this time was specific to online activities (Roberts et al., 2005).
Rigorous studies using the Internet to promote healthy behavior show the promise of the medium for promotion of weight loss (Tate et al., 2001), diabetes self-management (Glasgow et al., 2003; Gottlieb, 2000; McKay et al., 2001), and healthy eating (Oenema et al., 2001). The Internet has also been used effectively to increase access to care among persons living with HIV and patients with eating disorders (Flatley-Brennan, 1998; Gustafson et al., 2002; Winzelberg et al., 2000). In a meta-analysis of physical activity and dietary behavior interventions that utilized some form of technology (e.g., computers, e-mail, the Internet), Norman et al. found that 21 of the 41 studies reviewed showed positive effects of technology on diet and activity outcomes (Norman et al., 2007). Another meta-analysis of websites delivering physical activity interventions showed similar results, with eight of 15 interventions having a positive effect on physical activity (Vandelanotte et al., 2007).
To date, the only published efficacy trial of an HIV prevention intervention delivered via the Internet, rather than via computer kiosk, is one describing a two-session program for men who have sex with men (MSM) in rural Wyoming (Bowen et al., 2007a). Results from this trial show positive effects of the intervention on condom attitudes and self-efficacy at one week post intervention. However, the trial did not follow participants long enough to demonstrate an impact on HIV prevention behaviors.
There have been studies demonstrating the efficacy of computer-based interventions for HIV and STD prevention, delivered via kiosk or computer in various institutional and community settings. Kiene and colleagues showed that college students randomized to participate in two computer-based sessions to reduce HIV and STD risk, spaced two weeks apart, had greater awareness of HIV and STD, more frequently carried condoms, and more frequently used condoms compared to those in the control group (Kiene and Barta, 2006). Lightfoot and colleagues showed that high-risk youth participating in two 1.5-hour computer-based HIV prevention sessions were less likely to engage in sex and had fewer sex partners compared to controls (Lightfoot et al., 2007). Roberto and colleagues showed that high school students participating in six 15-minute computer-based HIV and STD prevention sessions spaced one week apart had greater delays in initiation of intercourse and improved attitudes and self-efficacy for safer sex behaviors compared to controls (Roberto et al., 2007).
Computer-based efforts in HIV prevention show promise, and have the advantages of standardization, fidelity, and likely ease of replication. However, such trials have often relied on small samples (Kiene and Barta, 2006; Lightfoot et al., 2007). The Internet offers an additional opportunity to reach larger numbers of people in diverse settings. The privacy offered by computers and the Internet is also an asset, as is the accessibility of interventions from multiple sites (e.g., home, clinic, library). The standardization, fidelity, and ease of replication of computer-based interventions, together with the added potential of greater reach and increasing Internet use by youth, make both computers and the Internet logical venues for HIV/STD research and prevention interventions.
The emergence of computer software and “expert systems” allows for the production and dissemination of individually tailored print material (Bental et al., 1999; Campbell et al., 1994; De Vries and Brug, 1999; Kreuter et al., 2000; Lipkus et al., 1999; Marcus et al., 1998; Rakowski et al., 2003; Rimer et al., 1999; Skinner et al., 1994; Strecher et al., 1994). This body of research has shown that tailoring increases the “self-relevance” of print material for subjects, that such material is more likely to be read, comprehended, and remembered, and that it can produce significant behavior change (Kreuter et al., 2000; Strecher, 1999) across a wide variety of behavioral outcomes (e.g., smoking cessation, diet and nutrition, cancer screening).
The Internet and computers are excellent modes of delivery for interactive, personalized, or tailored HIV prevention interventions. They offer opportunities for interventions grounded in behavioral science theory shown to have efficacy in HIV prevention (Centers for Disease Control and Prevention, 1999), including concepts highlighted in two theories: Social Cognitive Theory (SCT) (Bandura, 1986; Bandura, 1997) and the Theory of Planned Behavior (TPB) (Ajzen, 1991), which have been shown empirically to account for or explain most of the variation in HIV/STD prevention behaviors (Albarracin et al., 2001; Albarracin et al., 2003; Albarracin et al., 2004).
Despite the promise of the Internet and computers for health promotion, we face several challenges in meeting the potential of these media. First, we face competition for time and interest: how can we get youth who are focused on instant messaging online to use our website or access our computer kiosks? Many researchers have presented data showing that it is difficult to achieve full compliance with multi-session, health-related programs delivered on the Internet or by computer. Participants may enroll but fail to complete multiple sessions over a long period of time (Linke et al., 2007; Verheijden et al., 2007), or will only do so as long as financial incentives are in place (Bowen et al., 2007b). A second concern is attrition from research: participants often fail to return to complete assessments of study efficacy, making interpretation of program effects difficult (Bull et al., 2004a; Severson et al., 2008). A third concern is that we have yet to do a good job of recruiting and retaining diverse audiences in our technology-based health promotion efforts, both for completion of program elements and for assessments. Audiences tend to be well educated, middle class, and higher income, and not representative of populations at high risk for the negative health outcomes studied (Bull et al., 2004b; Bull et al., 2008a; Feil et al., 2000; Glasgow et al., 2007).
How then can we capitalize on the potential of the Internet and computers, which are so widely used and popular with youth, to promote needed HIV and STD prevention? While the Internet and computers do have the potential to reach large numbers and standardize material, we face difficulties in program compliance, retention of study participants for assessment and in increasing diversity of program participants. We sought to overcome these difficulties with a single session, theoretically driven interactive computer program for HIV prevention delivered either a) exclusively on the Internet or b) in a clinic setting at a computer kiosk. Following is a description of our methods and findings.
Persons between the ages of 18 and 24 who could read English, were residents of the U.S. excluding Colorado (for the Internet sample) or of the Denver metropolitan area (for the clinic sample), were willing to return to the site at least two times during the study, and had the capability to hear audio on their computers were eligible to enroll in the Youthnet study on our project site, called Keep It Real. We excluded Colorado residents from the Internet sample because they could enroll in the clinic sample and we did not wish to have overlap between the samples.
Participants were recruited for the Internet sample in November of 2004. They could enroll in one of two ways: by clicking on banner advertisements posted on three websites (BlackPlanet.com, MiGente.com, and Yahoo!) or by finding the site on their own (e.g., through online search engines or individual referral). Persons seeing a banner advertisement could click on it, taking them to the Keep It Real website. Those self-referring to the site could have been sent a link by a friend, or could have found the site by searching online using keywords such as “keep it real” or “HIV prevention.” Once at the website, participants in the Internet sample read information about the study and completed an eligibility screener. Those eligible were invited to complete an informed consent and Health Insurance Portability and Accountability Act (HIPAA) process on the Internet. They were asked to provide detailed contact information, including name, address, two telephone numbers, and an e-mail address, and to create a study ID and password. We then sent a link to their e-mail address to re-access the site and log on. Participants recruited for the clinic sample were approached after seeking services for STD or contraception, screened for eligibility, and enrolled after completing a face-to-face informed consent.
For the Internet sample, a total of 3298 participants completed the baseline survey (56% of those eligible); of these, 675 were removed following a detailed verification process that included identification of duplicate and fraudulent participants and participants with excessive inconsistencies in survey responses. This verification process is detailed elsewhere (Bull et al., 2008a). We were able to verify and keep baseline data for 2623 participants.
We began recruitment for the clinic sample in June of 2004, with an initial sample of 300 who pilot tested the study, and in October 2004 for the remainder of the clinic sample described here. Recruitment for the clinic sample continued through October of 2006. In clinic settings, potential participants were approached by study staff, assessed for eligibility, and, for those eligible, invited to participate. Those agreeing completed the informed consent and HIPAA process face to face with study staff, and then logged on to the computer kiosk to complete the baseline risk assessment and intervention or control content. Figure 1 shows numbers recruited into each sample.
Notably, Figure 1 shows that in the clinic sample, 78% of those eligible enrolled in the study and completed a baseline survey (1952/2487). We considered the first 300 enrollees in the clinic sample pilot testers, and they are not included in the analyses described here. We removed an additional 208 participants from analyses because of data inconsistencies.
All participants, including those recruited on the Internet and in clinics completed a computerized baseline risk assessment, and were randomly assigned to intervention or control status.
Those in the intervention arm were asked to respond to questions related to their HIV risk. These questions were divided into five modules. Between modules, participants in the intervention were exposed to a role model story. These stories were delivered using Flash technology with pictures and audio (voice and music). Pictures in each story were of a role model matched to participant gender and race/ethnicity. Stories lasted between 60 and 90 seconds and specifically addressed theoretical constructs about condoms and condom use: (1) attitudes, (2) norms, (3) awareness of HIV/STD risk, (4) self-efficacy for condom negotiation, and (5) self-efficacy for condom use. Screen shots showing examples of our role model stories are shown in Figure 2. Controls also completed the risk assessment, but between modules they viewed text-based generic HIV prevention information in lieu of role model stories. The entire procedure at baseline and follow-up took participants approximately 20 minutes to complete.
All follow-up procedures occurred on the Internet, regardless of location of recruitment. Thus follow-up modality for the Internet and clinic samples was equivalent. At one month, participants were sent an automated e-mail reminder to log on for their “booster” session that consisted solely of re-exposure to their intervention or control messages. No risk behavior data were collected at this time.
Participants were asked to return to the Keep It Real site to complete a follow-up risk assessment at two months (for the Internet sample) and three months (for the clinic sample). The two-month follow-up period for the Internet sample was established to avoid the precipitous attrition at three months seen in other research conducted with persons recruited exclusively on the Internet (Bull et al., 2004a; Devineni and Blanchard, 2005; Verheijden et al., 2007), a decision made to maximize retention in lieu of conforming to more standard follow-up periods for face-to-face randomized trials.
Each participant was offered an incentive of up to $35 for participation in this study. At baseline, they received a $10 Amazon.com gift certificate. At the one-month booster, participants received a $5 Amazon.com gift certificate. Participants viewing their booster message within 48 hours of being eligible received a “bonus” incentive of another $5. At the follow-up, participants were again offered a $10 incentive and a $5 bonus for completion of the survey within 48 hours of eligibility. Thus, participants completing all phases of the study received between $25 and $35 in Amazon.com gift certificates.
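The incentive schedule above can be summarized as simple arithmetic. The sketch below is our illustration only (the function name and structure are ours, not the study's), with amounts in Amazon.com gift-certificate dollars:

```python
# Illustrative sketch of the incentive schedule described above.
# Amounts are in US dollars, paid as Amazon.com gift certificates.

def total_incentive(completed_booster: bool, booster_within_48h: bool,
                    completed_followup: bool, followup_within_48h: bool) -> int:
    """Sum the incentives a participant would earn across study phases."""
    total = 10  # baseline survey
    if completed_booster:
        total += 5
        if booster_within_48h:
            total += 5  # "bonus" for viewing the booster within 48 hours
    if completed_followup:
        total += 10
        if followup_within_48h:
            total += 5  # bonus for completing follow-up within 48 hours
    return total
```

A participant completing all phases thus earns $25 with no bonuses and $35 with both, matching the range stated above.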
Retention in each of the study samples analyzed here is illustrated in Figure 1. In the online sample, significantly more people who completed a follow-up survey were not employed, had lower incomes, and were Asian, White, or Multi-racial compared to those who did not participate at follow-up. No significant demographic differences were observed between those completing baseline and follow-up in the clinic sample. We retained follow-up data on 53% of the online sample and 61% of the clinic sample for this analysis, rates higher than we have previously seen for online research. Further detail on recruitment and retention of participants in the study is available elsewhere (Bull et al., 2008a).
Our primary study outcome was change in the proportion of protected acts, where proportion of protected acts was measured as the number of sex acts protected by a condom in sixty days divided by the total number of sex acts, and the change measured as the difference in the proportion of protected acts between baseline and follow-up. Other HIV related risk factors we documented included number of sex partners and history of STD. We included demographic measures such as race/ethnicity and gender, and included income measured as the amount of money participants had to spend each month after paying bills.
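The outcome definition above can be expressed directly; the functions below are a minimal sketch under our own variable names, not the study's code:

```python
# Primary outcome: proportion of protected acts, and its change over time.

def proportion_protected(protected_acts: int, total_acts: int) -> float:
    """Number of sex acts protected by a condom in the past 60 days
    divided by the total number of sex acts in that window."""
    if total_acts == 0:
        raise ValueError("undefined for participants reporting no sex acts")
    return protected_acts / total_acts

def outcome_change(baseline_proportion: float, followup_proportion: float) -> float:
    """Change in the proportion of protected acts: follow-up minus baseline."""
    return followup_proportion - baseline_proportion
```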
As described above, a number of theoretical constructs have consistently been shown to be associated with safer sexual behaviors, including condom use. Based on accepted standards for measurement that emphasize the importance of using multi-item measures to increase validity (Babbie, 2003; Jaccard and Wan, 1995; Jaccard et al., 1996; Mundfrom et al., 2005), we created numerous scales, or latent variables (so called because the underlying concept is not measured directly but through multiple items), to measure positive and negative outcome expectations toward condom use, condom use norms, self-efficacy for condom negotiation, and self-efficacy for condom use. Scale items were generated based on those used for these constructs in other research, including our own previous work (DiIorio et al., 1997; Galavotti et al., 1995; Noar et al., 2006; Posner et al., 2004). We subjected these scales to rigorous psychometric testing through both exploratory and confirmatory factor analyses. The results of these factor analyses are described in detail elsewhere (Bull et al., 2008b), but in brief, the data indicate these measures are robust and valid across both study samples, with high standardized factor loadings ranging from 0.51 to 0.86 (factor loadings of 0.50 or greater are considered appropriate (Mundfrom et al., 2005)) and Cronbach’s alpha estimates of reliability ranging from 0.68 to 0.89 (where estimates >0.70 are considered appropriate (Mundfrom et al., 2005)).
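For readers unfamiliar with the reliability statistic cited above, Cronbach's alpha can be computed from raw item responses as follows; this is a generic plain-Python sketch, not the study's analysis (which is reported in Bull et al., 2008b):

```python
# Cronbach's alpha for a multi-item scale:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(items):
    """items: a list of equal-length lists, one per scale item,
    each containing one response per participant."""
    k = len(items)          # number of items in the scale
    n = len(items[0])       # number of participants

    def variance(xs):       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))
```

Two perfectly correlated items yield alpha = 1.0, while items whose variance is entirely uncorrelated with the total pull alpha toward 0.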
Each construct was measured using a five-point Likert scale measuring degree of agreement with the construct statements, ranging from “Strongly Disagree” to “Strongly Agree”. Per IRB requirements, all participants were given the option to select “Don’t want to answer” (DWA) for these scale items as well as for every question on the risk assessment. Our final measures include six theoretical factors, and each item for each factor is shown in Table 1 with data from the factor analyses. In our analyses we excluded participants who had never had sex at baseline, and persons for whom we uncovered irreconcilable inconsistencies in data (e.g., indicating they had had two lifetime sex partners but also stating they had 40 partners in 12 months). We also had to exclude persons with missing data on any of the endogenous variables or the outcome, a standard procedure in structural equation modeling (SEM), our analytic procedure for the outcomes described below. We assessed missing data and items marked “DWA” and did not find evidence of data missing or marked “DWA” systematically, with one exception: participants identifying as Asian had more missing data than other participants. There was no difference with regard to marking “DWA” among Asians compared to others.
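The partner-count example above illustrates the kind of logical-consistency rule used to screen data. A hedged sketch of such a check (field names are ours; the study's full verification rules are detailed in Bull et al., 2008a):

```python
# One logical-consistency rule: partners reported in the past 12 months
# can never exceed partners reported over a lifetime.

def has_partner_inconsistency(lifetime_partners: int, partners_12mo: int) -> bool:
    """True if the 12-month partner count exceeds the lifetime count,
    flagging the record as irreconcilable (e.g., 2 lifetime vs. 40 in 12 months)."""
    return partners_12mo > lifetime_partners
```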
The theories that informed the development of the intervention posit complex relationships among specific constructs. For example, the Theory of Reasoned Action and Planned Behavior (Ajzen, 1991) posits that norms and attitudes precede self-efficacy along a pathway toward behavior change. Similarly, Social Cognitive Theory (Bandura, 1986) holds that role modeling of behavior precedes the development of self-efficacy. To fully test the relationships among the constructs themselves and between the constructs and the study outcomes, we used SEM, which allows for the simultaneous estimation of both the factor structures of latent variables and the functional relationships among these variables, including both direct and indirect (mediated) relationships. Using repeated-measures SEM models in Mplus version 4.2, we analyzed (1) exposure to our study intervention, (2) relationships among each of our six latent variables (factors), and (3) relationships among the intervention, our six factors, demographics, and our study outcome.
Figure 3 shows our a priori assumptions regarding the relationships between the study constructs and the outcome. We developed these assumptions based on other research and theory suggesting that norms and outcome expectancies precede self-efficacy, which in turn precedes behaviors (Albarracin et al., 2001; Albarracin et al., 2003; Albarracin et al., 2004; Ajzen, 1991). Our a priori assumptions also included the belief that exposure to the intervention would intensify the effects within the model on the endogenous constructs as well as the outcome.
We also considered that several exogenous variables not modifiable through intervention might be associated with the study constructs and outcome, i.e., gender, age, and race and ethnicity. Because of the way race and ethnicity were distributed across samples, we treated this as a three-level variable in our models: Asian, Black or African American, and “other” for the Internet sample; and Hispanic/Latino, Black or African American, and “other” for the clinic sample. We also considered risk for HIV, measured as none (if the participant had only one partner in 12 months and no history of STD), some (if the participant had <= 3 partners in 12 months and/or a history of STD), or high (if the participant had >3 partners in 12 months and a history of STD).
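The three-level risk measure can be implemented as a short classification rule. The definitions of “none” and “some” overlap at their boundaries as stated, so the ordering of checks below is our assumption, not the study's exact rule:

```python
# One way to implement the three-level HIV risk measure described above.
# The check order (high first, then none, else some) is our assumption.

def hiv_risk(partners_12mo: int, history_of_std: bool) -> str:
    """Classify HIV risk from 12-month partner count and STD history."""
    if partners_12mo > 3 and history_of_std:
        return "high"   # >3 partners in 12 months AND history of STD
    if partners_12mo <= 1 and not history_of_std:
        return "none"   # at most one partner AND no history of STD
    return "some"       # everything in between
```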
SEM analysis began with a confirmatory factor analysis (CFA) of the study measures to determine the association of covariates (race, gender, age, HIV risk) with other variables (manifest or latent); Mplus refers to this type of model, a CFA with covariates, as a MIMIC (multiple indicators, multiple causes) model. The second step in SEM involved fitting our a priori theoretical model (Figure 3) to the baseline data. The third step was to fit the model at follow-up, including the auto-regressive relationships between the follow-up and baseline measures (i.e., controlling for baseline levels of each construct in estimating the relationships among them at follow-up). Comparison of path coefficients across follow-up and baseline revealed that the relationships among constructs within the models remained invariant across time. SEM models were fit using Maximum Likelihood estimation.
Table 2 shows the demographic characteristics of both the Internet and clinic samples included in these analyses. For the Internet sample, we saw no significant differences between intervention and control groups with regard to demographics. Participants were fairly evenly split between male and female, had a mean age close to 22, and about 35% were non-white. The Internet sample had low values for HIV-related risk: few had more than one sex partner in 12 months, the mean proportion of sex acts protected by condoms was very high, and very few had a history of STD.
In the clinic sample, a significantly higher proportion of those in the control group were currently working compared to those assigned to the intervention, and a significantly higher proportion within the intervention group had had more than one sex partner in 12 months. There were many more females in the clinic sample compared to the Internet sample (primarily because we recruited heavily from Planned Parenthood clinics). Participants had a mean age of about 21, and about 30% were non-white. The clinic sample had higher values for HIV-related risk than the Internet sample: close to a third had had more than one sex partner in 12 months, the mean proportion of protected acts was quite low, and about 13% had a history of STD.
Figure 4 shows the results illustrating the best theoretical and statistical model fit for the follow-up data for both the Internet sample (shown at the top of the figure) and clinic sample (shown at the bottom of the figure). We have shaded those elements of the models that have no effect on the study outcome, and bolded those factors that do have effects on the study outcome in an effort to make the models easier to interpret.
Note the differences between Figure 3, our a priori assumptions about the relationships between constructs and the outcome, and Figure 4, our final models at follow-up. Suggested model modifications that made theoretical sense and improved model fit indices led us to place both condom norms and positive outcome expectancies as preceding partner norms and negative outcome expectancies. We enumerate specific findings for each sample below.
Shown in the top half of Figure 4, this model explains 43% of the variance in the difference in proportion of protected acts for the Internet sample, and has strong model fit indices (CFI/TLI .91; RMSEA .045 and Chi Square p<0.01). The intervention in the Internet sample had a very small effect with a standardized path coefficient of 0.06, operating through the condom norms construct, which in turn was the only factor with a direct and significant impact on proportion of protected acts. Specifically, the intervention indirectly impacted the outcome by increasing support for the items that made up the condom norms factor between baseline and follow-up.
As mentioned, these outcomes show that the only construct with a direct effect on the proportion of protected acts for the Internet sample was norms, with a path coefficient of .42. There were no indirect effects on the study outcome, as the path coefficient between self-efficacy for condom use and the study outcome was not significant. We saw no direct or indirect effects of demographic variables on proportion of protected acts for the Internet sample. In addition to examining the effects of the intervention on the study outcome via the theoretical study constructs, we looked at how well the models explained the overall variance in condom use behaviors. As shown in the last row of the first column of Table 3, the model explained 42% of the variance in changes in the proportion of protected acts for the Internet sample. The model also explained substantial proportions of variance in all of the theoretical constructs placed in the model (between 0.31 and 0.63) as predictors of change in proportion of protected acts.
Shown in the bottom half of Figure 4, this model explained 59% of the variance in the difference in proportion of protected acts for the clinic sample, and has strong model fit indices (CFI/TLI .93; RMSEA .04 and Chi Square p<0.01). While the Internet sample showed an effect of the intervention on one of the factor loadings making up condom norms, we saw no such effect for the clinic sample. We did see a negative effect of the intervention on self-efficacy for condom use; those in the intervention were significantly less likely to feel confident they could use condoms with their partner at follow-up than those in the control (−.10). We saw no other effects of the intervention in the model.
As with the Internet sample, in the clinic sample the only factor with an effect on the study outcome was condom norms, with a path coefficient of .48, but unlike the Internet sample, there was no difference between intervention and control with regard to the level of condom norms between baseline and follow-up assessments. This model explained 59% of the variance in changes in the proportion of protected acts for the clinic sample (Table 3, second column, bottom row). The model also describes substantial proportions of variance in all of the theoretical constructs placed in the model (between 0.29 and 0.66) as predictors of change in proportion of protected acts.
These findings represent data from one of only a handful of randomized controlled trials (RCTs) conducted using computers or the Internet to test the efficacy of a primary HIV prevention intervention. We know of no other study that has examined the use of both computer kiosks and the Internet for primary HIV prevention, targeted and reached such large samples of young adults, and followed up with them successfully over two- and three-month periods. The study had a strong design, and utilized rigorous protocols for participant enrollment, verification, and retention that can contribute to a better understanding of how to manage and complete studies using computer and Internet technologies. Furthermore, the use of SEM analyses offers an opportunity to test for complex inter-relationships among theoretical constructs.
While we are satisfied that the design, conduct, and analysis of the study were strong, we caution that because this is only one of a small number of RCTs for HIV prevention conducted using Internet and computer technology, we cannot draw definitive conclusions from these data. The effects of the intervention on the study outcome were quite small for the Internet sample, and null for the clinic sample. Overall, while the models explained a good deal of the variance in proportion of protected acts between baseline and follow-up (43% for the Internet sample and 59% for the clinic sample), the impact of the intervention on this variance was negligible. For the Internet sample, we did observe a slight impact of the intervention on norms, which in turn was the only factor affecting our study outcome. For the clinic sample, we saw no effect of the intervention on norms. However, the norms construct was also the only factor in this sample with a direct effect on the study outcome. We did observe a negative effect of the intervention on self-efficacy for condom use in the clinic sample. Note, though, that self-efficacy had no impact on the outcome.
The data from the study did not support our hypothesis that a short, tailored, and interactive HIV prevention intervention online could substantially impact condom use among 18–24 year olds. The data also did not show that the intervention had an effect on antecedents to condom use such as attitudes and self-efficacy. We acknowledge that without data from numerous other Internet and computer HIV prevention RCTs, we cannot interpret either our positive or negative outcomes with confidence. These findings, coupled with those from the only other Internet-based RCT we know of with efficacy data (Bowen et al., 2007a), illustrate that we still have substantial work to do before we can demonstrate that the Internet and computers have efficacy for affecting HIV prevention behaviors among large samples, and that these effects can be sustained over time.
It may be that the limited effects were due to (a) a lack of intensity in this one-time intervention; (b) difficulty participants may have had in considering exactly what counted as “sex” as we defined it for our dependent variable, leading to underestimation of sexual activity; (c) an intervention approach that is not useful; or (d) the possibility that computers and the Internet simply are not a good modality for this work. Our intent was to balance the potential of the Internet and computers to reach people against the chronic and repeated participation drop-off and assessment attrition we have seen in multiple studies (Bull et al., 2004a; Chiasson et al., 2006; Koo and Skinner, 2005; McKay et al., 1998; Pequegnat et al., 2007). In retrospect this may have been a poor choice, and we consider that multiple sessions over time may be critical for prevention, as they appear to be for computer-based HIV prevention interventions not delivered online (Kiene and Barta, 2006; Lightfoot et al., 2007; Roberto et al., 2007). However, with multiple sessions online we are still without good approaches to (a) enroll and retain much larger samples, (b) entice people to a project website, and (c) keep them coming back with or without financial incentives (Bowen et al., 2007b; Gustafson et al., 2001; McKay et al., 2001; Verheijden et al., 2007).
On a theoretical note, although each of the constructs tested was addressed through our role model stories during the intervention, we found no direct effects of self-efficacy for condom use on proportion of protected acts, and no indirect effects on this outcome of self-efficacy for condom negotiation, negative and positive outcome expectancies, or partner norms for condom use, for either sample. However, the final models suggested that across diverse samples of youth, condom norms were of critical importance: they not only contributed to the proportion of protected acts, but also represented the only construct with direct as well as indirect effects on almost all the other theoretical constructs. These data corroborate Sheeran’s meta-analysis of 121 studies (Sheeran et al., 1999), which indicates that condom norms related to perceptions of what others do, such as those measured here (i.e., “people like me:” (a) use condoms; (b) regularly buy condoms; (c) have condoms handy; and (d) expect their partners would be happy to use condoms), are most strongly related to condom use and consistently more important than other theoretical factors.
Our study had some instructive achievements related to methods, namely improved retention of participants for assessment and improved enrollment of high risk youth. We showed that we could improve retention of study samples for assessment to more acceptable levels of 50–60%, although we would still like to achieve retention of 70–80% as the norm. Our retention rates of 53% for the online sample and 61% for the clinic sample are the highest we know of for this length of follow-up in online HIV prevention research; they represent an improvement over earlier work in this field (Bull et al., 2004a) and cover a much longer time frame than that reported for other RCTs in this area (Bowen et al., 2007a). Further work is still needed to achieve a consistent retention rate of 70%, however, and it may be worth shortening our follow-up period to one month, shortening our surveys, or delivering them in multiple modules. All of these warrant further exploration as potentially unique strategies for online recruitment and retention.
From the data shown here, we were able to enroll a more diverse sample than has been seen in other Internet-related research (Bowen et al., 2007a; Bull et al., 2004a; Elford et al., 2004; Rhodes et al., 2002), and we consider this an important achievement, particularly given ongoing concerns about the appropriateness of using technology when we face a digital divide (Benotsch et al., 2004; Bernhardt, 2000; Chang et al., 2004; Katz et al., 2004; Lenhart and Horrigan, 2003). However, we must acknowledge that we still did not have data from samples that represent the distribution of HIV infections nationally. Furthermore, it is possible that we faced ceiling effects, particularly in our Internet sample, where the mean proportion of protected acts for the intervention group at baseline was between 71% and 83%, suggesting we failed to tap into the highest risk groups with this intervention. The inability to reach high risk groups has also been cited as a challenge for those using technology to address chronic illness (Glasgow et al., 2007).
There are important limitations to our work. Our sampling mechanism was not random, so the results are not generalizable to all persons between the ages of 18 and 24 who are online or who attend clinic settings for reproductive health services. Our SEM models may not represent the best fit for data from another sample that is older or younger, that represents different racial/ethnic groups, or that is not on the Internet. However, given the sample sizes, we are confident the findings are useful. While SEM does offer the ability to understand both direct and indirect relationships in models, it has been criticized as offering post-diction rather than pre-diction (Bryan et al., 2007). We believe that we have sufficiently overcome this limitation by using a repeated measures analysis and by controlling for baseline effects at follow-up in our analyses. We also believe that SEM offers an important opportunity to test behavioral theory and contribute to a better understanding of the relationships between antecedents to behavior change.
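The logic of guarding against post-diction by adjusting for the baseline value of the outcome can be illustrated with a minimal, hypothetical simulation. This is not the study's actual SEM or data; all variable names (baseline, group, norms) and effect sizes are invented for illustration, and an ordinary least squares regression stands in for the full structural model.

```python
import numpy as np

# Hypothetical simulation: predicting a follow-up outcome while
# controlling for its baseline value (ANCOVA-style adjustment).
# Simulated data only; effect sizes are arbitrary illustrations.
rng = np.random.default_rng(0)
n = 500
baseline = rng.uniform(0, 1, n)    # proportion of protected acts at baseline
group = rng.integers(0, 2, n)      # 0 = control, 1 = intervention
norms = rng.normal(0, 1, n)        # condom norms score (standardized)
followup = (0.5 * baseline + 0.05 * group + 0.10 * norms
            + rng.normal(0, 0.1, n))

# Regress the follow-up outcome on intervention group and norms,
# adjusting for baseline so estimated effects are not confounded
# with pre-existing differences in the outcome.
X = np.column_stack([np.ones(n), baseline, group, norms])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(coef)  # estimates for intercept, baseline, group, norms
```

Because the baseline outcome enters the model as a covariate, the coefficients for group and norms describe effects on change at follow-up rather than simply restating baseline differences, which is the sense in which the analysis predicts rather than post-dicts.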
In future work we must address (a) the challenge of increasing the intensity of, and exposure to, interventions while minimizing both program and assessment attrition; and (b) enrollment of high risk individuals in tailored interactive interventions online. For some years researchers have been calling for greater attention to research that is likely to have greater reach, effectiveness and sustainability, rather than an over-emphasis on efficacy (Glasgow et al., 1999; Glasgow et al., 2001; Glasgow et al., 2002; Glasgow et al., 2004). We are confident we can reach people; now we need ways to keep them engaged. We posit that it is important to reach people through websites they already frequent, rather than attempting to draw them to separate sites. We therefore suggest that new Internet-based programs be embedded within the websites commonly visited by individuals likely to be at risk.
We must also do a better job of enrolling high risk individuals in tailored interactive interventions online. We remain challenged to increase our enrollment and retention of high risk individuals in Internet research for HIV prevention. Until we can accomplish this, we risk having well designed interventions that may be engaging, but that will have little impact on HIV. The higher risk youth engaged in the successful computer-based intervention reported by Lightfoot et al. (2007) may be instructive: might it be important to build trust face to face in community settings before enrolling higher risk individuals? And if we are to achieve diversity in samples, must we sacrifice reach?
In conclusion, we face multiple challenges before we can say that the Internet and computers can be used as tools to affect HIV prevention behaviors in larger samples through brief interventions. We need to do a better job of keeping people engaged in multiple sessions online, and we need to recruit and retain more high risk individuals for our technology-based research. If we can accomplish these goals, perhaps we can develop effective strategies for technology-based HIV prevention that realize the promise of the medium to reach large numbers of people with standardized, easy-to-replicate interventions that can be delivered on a substantially larger scale.
We gratefully acknowledge the partnership of the Denver Public Health Department and Planned Parenthood of the Rocky Mountains for their collaboration in recruitment of the clinic sample. We also acknowledge the National Institute of Mental Health and our Project Officer, Dr. Willo Pequegnat, for their support of this research (NIH/NIMH grants R01MH063690 and R01MH063690-S).