Analyses of data on the duration of sexual partnerships collected in sexual behaviour surveys are complicated by a number of issues including censoring, truncation and sampling biases, and correlated data. Although some of these issues have been recognised previously,9,14 no comprehensive analysis of their impact has been presented. We review these issues and illustrate their effect in the SSPS. We also suggest solutions that are appropriate for different study designs. A recent paper by Copas et al14 provides alternative approaches based on weighting and focuses on most recent partner sampling.
We distinguish between the ‘current duration’ and ‘total duration’ of a partnership. Current duration may be more relevant to risk of STI, while total duration is more relevant to investigations of disease transmission and epidemic course, and is necessary to parameterise mathematical models of disease transmission. To estimate total duration, the end of a partnership must be unambiguously defined, and ongoing partnerships must be treated as right-censored. To determine whether a partnership has ended, the researcher can explicitly ask whether the partnership is ongoing; alternatively, in longitudinal surveys, partnerships that have not been active for a given period may be assumed to have ended. In the SSPS, the end of a partnership was defined as a hiatus in sexual activity of approximately 4 months, although this definition was driven as much by the study design as by any underlying behavioural construct. However, once one distinguishes between current duration and total duration, other issues related to endpoint definition (eg, interval censoring) are less critical.
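To make the treatment of ongoing partnerships concrete, the following sketch (ours, with illustrative parameters, not SSPS data) applies a product-limit (Kaplan–Meier) estimator in which ongoing partnerships contribute as right-censored observations:

```python
import random

random.seed(1)

def kaplan_meier(times, ended):
    """Product-limit (Kaplan-Meier) survival estimate.

    times: observed duration (to partnership end, or to last interview if ongoing)
    ended: True if the partnership ended; False means right-censored (ongoing)
    """
    data = sorted(zip(times, ended))
    n_at_risk = len(data)
    surv, s, i = [], 1.0, 0
    while i < len(data):
        t = data[i][0]
        j = i
        while j < len(data) and data[j][0] == t:
            j += 1                                   # group tied times
        deaths = sum(1 for k in range(i, j) if data[k][1])
        if deaths:
            s *= 1 - deaths / n_at_risk
            surv.append((t, s))
        n_at_risk -= j - i
        i = j
    return surv

# Illustrative data: true durations ~ exponential (mean 24 months); each
# partnership is observed for a uniform(0, 60)-month follow-up period.
true_dur = [random.expovariate(1 / 24) for _ in range(2000)]
follow_up = [random.uniform(0, 60) for _ in true_dur]
obs = [min(d, f) for d, f in zip(true_dur, follow_up)]
done = [d <= f for d, f in zip(true_dur, follow_up)]

km = kaplan_meier(obs, done)
km_median = next(t for t, s in km if s <= 0.5)   # censoring-adjusted median
naive_median = sorted(obs)[len(obs) // 2]        # treats ongoing as completed
```

The naive median, computed as if every observed duration were a completed partnership, is biased downward; the product-limit estimate corrects for this by renormalising the risk set as censored partnerships drop out.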
In addition to right censoring, the design of sexual-behaviour surveys often leads to left truncation and length-biased sampling. For instance, fixed window sampling preferentially includes longer partnerships and can lead to significant biases in the estimated duration distribution. This bias can be removed with appropriate statistical methods that incorporate information on the window length. Sampling schemes that select a fixed number of most recent partnerships can also bias estimates of partnership duration. In the SSPS data, the bias induced by most recent partner sampling appears to be less than that seen with fixed window sampling (and declines as more partners are sampled). However, bias due to most recent partner sampling is less well defined, and statistical methods to adjust for it depend on assumptions about the representativeness of the sampled partners.
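The length bias induced by fixed window sampling can be illustrated with a small simulation (ours; all parameters are illustrative assumptions): partnerships overlapping a fixed window before the interview are included with probability increasing in their duration, so the sampled durations are systematically longer than those in the full population of partnerships.

```python
import random

random.seed(2)

WINDOW = 12.0   # survey window length in months (assumed)
MEAN = 6.0      # assumed true mean partnership duration in months

# Simulate each person's history of partnerships separated by gaps, and
# sample those partnerships that overlap the last WINDOW months.
population, sampled = [], []
for _ in range(5000):
    t, horizon = 0.0, 240.0                      # 20 years of history
    while t < horizon:
        d = random.expovariate(1 / MEAN)
        start, end = t, t + d
        population.append(d)
        if end > horizon - WINDOW and start < horizon:
            sampled.append(d)                    # overlaps the window
        t = end + random.expovariate(1 / 3.0)    # gap between partnerships

true_mean = sum(population) / len(population)
window_mean = sum(sampled) / len(sampled)
# Longer partnerships are more likely to intersect the window, so
# window_mean substantially exceeds true_mean.
```

In this setting the window-sampled mean is inflated by roughly a third relative to the population mean, which is the bias that window-length-aware methods are designed to remove.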
An alternative to adjusting for left truncation and sampling biases in the analysis is to design the survey to reduce or eliminate the biases by limiting the extent to which partnership inclusion criteria are associated with the outcome of interest (ie, partnership duration). One possible strategy is to first query participants for a small number of key statistics about all of their partnerships within a relatively long window prior to the interview. Then, the interviewer could randomly sample a subset of these partnerships from which more extensive information is to be gathered. Any estimates based on the random subsample should use weights to correct the estimates for the total number of partners reported by each participant. Such a strategy may be subject to recall bias, however, and is likely to be most effective in individuals who have had a small to moderate number of partners (ie, so all can be listed and recalled). The trade-off between recall bias and length bias is a potential area for future research.
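The weighting step can be sketched as follows (a hypothetical example with assumed parameters, not the SSPS design): if one partnership is sampled at random from the n a participant lists, its inclusion probability is 1/n, so weighting each sampled duration by n (a Horvitz–Thompson-type weight) recovers partnership-level estimates.

```python
import random

random.seed(3)

# Hypothetical rosters: each participant lists n partnerships; here mean
# duration is assumed to decline with partner number (24/n months), so
# duration and partner count are correlated.
participants = []
for _ in range(20000):
    n = random.randint(1, 6)
    durs = [random.expovariate(n / 24) for _ in range(n)]
    participants.append((n, durs))

all_durs = [d for _, durs in participants for d in durs]
true_mean = sum(all_durs) / len(all_durs)        # partnership-level target

# Detailed questions are asked about ONE partnership sampled at random per
# participant (inclusion probability 1/n), so the weight is n.
samples = [(n, random.choice(durs)) for n, durs in participants]
unweighted = sum(d for _, d in samples) / len(samples)
weighted = sum(n * d for n, d in samples) / sum(n for n, _ in samples)
```

The unweighted one-per-person mean over-represents the (longer) partnerships of people with few partners; the weighted estimate is close to the partnership-level mean.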
A key assumption of the statistical methods presented here is that the censoring and truncation processes are independent of duration. The independence assumption for censoring is commonly made in time-to-event analyses and may be reasonable in many settings. Independent truncation means that the sampling window does not depend on the time to the event of interest (eg, end of partnership). The assumption of independent truncation may be violated if there is a significant secular trend in duration (ie, partnerships are systematically getting longer or shorter over time). The sensitivity of duration distribution estimates to such violations can be explored through simulations tailored to the particular setting, but ultimately, such assumptions are not testable, and conclusions based on these methods should be appropriately circumspect.
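As an illustration of such a sensitivity simulation (ours, with assumed parameters): under stationarity, partnerships ongoing at interview with exponentially distributed durations are length-biased, with mean roughly twice the overall mean; a secular decline in duration breaks this relationship, because ongoing partnerships disproportionately reflect the recent, shorter regime.

```python
import random

random.seed(4)

T = 240.0  # interview at month 240; partnerships start uniformly over (0, T)

def simulate(trend):
    """Return (mean duration of all partnerships, mean of those ongoing at T)."""
    all_d, ongoing = [], []
    for _ in range(300000):
        start = random.uniform(0, T)
        # Assumed secular decline: mean falls linearly from 12 to 6 months.
        mean = 12 - 6 * start / T if trend else 9.0
        d = random.expovariate(1 / mean)
        all_d.append(d)
        if start + d > T:
            ongoing.append(d)        # partnership ongoing at interview
    return sum(all_d) / len(all_d), sum(ongoing) / len(ongoing)

overall_flat, ongoing_flat = simulate(False)
overall_trend, ongoing_trend = simulate(True)
# Stationary case: ongoing mean ~ 2x overall mean (length bias for
# exponential durations). Under the secular decline, the ongoing mean falls
# well short of 2x, so an analysis assuming independent truncation would
# misjudge the duration distribution.
```

Simulations of this kind, tailored to the trend plausible in a given setting, indicate how far such violations could move the estimates, even though the independence assumption itself remains untestable.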
The methods described here have been applied to a longitudinal sexual-behaviour survey. However, all of the methods described, with the exception of our approach to distinguishing completed from ongoing partnerships (which depends on the availability of follow-up data), are equally relevant and applicable in cross-sectional surveys.
We have not discussed in detail more common sampling issues, but these are important as well. For instance, the data analysed here were collected from participants attending STI clinics and an adolescent health clinic. The duration distribution estimated from these data may be representative of similar clinic populations but is unlikely to be representative of the broader population outside such clinics.17 Indeed, the quantiles of the duration distribution summarised for this clinic population are uniformly shorter than those seen in Foxman et al’s telephone survey of the Seattle general population.9
In summary, estimates of partnership duration based on surveys of sexual behaviour are subject to multiple sources of bias and variability. However, through careful design and analysis, the biases can be substantially reduced and the variability appropriately quantified.