J Bus Econ Stat. Author manuscript; available in PMC 2010 April 1.

Published in final edited form as:

J Bus Econ Stat. 2010 April 1; 28(2): 219–231.

doi: 10.1198/jbes.2009.08098

PMCID: PMC2847785

NIHMSID: NIHMS109813

Charles F. Manski, Department of Economics and Institute for Policy Research, Northwestern University;


Rounding is the familiar practice of reporting one value whenever a real number lies in an interval. Uncertainty about the extent of rounding is common when researchers analyze survey responses to numerical questions. The prevalent practice has been to take numerical responses at face value, even though many may in fact be rounded. This paper studies the rounding of responses to survey questions that ask persons to state the percent-chance that some future event will occur. We analyze data from the Health and Retirement Study and find strong evidence of rounding, the extent of rounding differing across respondents. We propose use of a person’s response pattern across different questions to infer his rounding practice, the result being interpretation of reported numerical values as interval data. We then bring to bear recent developments on statistical analysis of interval data to characterize the potential consequences of rounding for empirical research. Finally, we propose enrichment of surveys by probing to learn the extent and reasons for rounding.

Rounding is the familiar practice of reporting one value whenever a real number lies in an interval. Consider, for example, how American meteorologists describe surface wind direction. Weather reports issued to the general public commonly delineate eight wind directions (north, northeast, east, and so on) while those to aircraft pilots delineate thirty-six directions (360, 10, 20, 30 degrees, and so on). A report to the public that the wind is from the north means that the wind direction lies in the interval [337.5°, 22.5°] while a report to pilots that the wind direction is 360° means that the direction lies in the interval [355°, 5°]. An important feature of wind reports is that the extent of rounding is common knowledge. Hence, pilots and members of the public know the accuracy of the measurements they receive.

Whereas the extent of rounding is common knowledge in standardized communications such as weather reports, recipients of rounded data may be unsure of the extent of rounding in other settings. Consider, for example, responses to the question “What time is it?” If someone says “4:01 PM,” one might reasonably infer that the person is rounding to the nearest minute. However, if someone says “4 PM,” one might well be uncertain whether the person is rounding to the nearest minute, quarter hour, or half hour. Moreover, one might be uncertain whether a person who says “4 PM” knows the precise time and rounds to simplify communication or, contrariwise, does not know the precise time and rounds to convey partial knowledge.

Uncertainty about the extent of rounding is common when researchers analyze survey responses. Respondents are routinely asked to report their annual incomes, hours worked, and other numerical quantities. Questionnaires generally do not request that respondents round to a specified degree, nor do they ask persons to describe their rounding choices. There are no established conventions for rounding survey responses. Hence, researchers cannot be sure how much rounding there may be in survey data. Nor can researchers be sure whether respondents round to simplify communication or to convey partial knowledge. Consider, for example, responses to the question: “How many hours did you work last week?” A person who says “40 hours” may know he worked precisely 40 hours, or know he worked 42 hours but round for simplicity, or not know his hours with precision but want to convey that he has a “full-time” job.

The prevalent practice in survey research has been to ignore the possibility that responses may be rounded. Most empirical studies take numerical responses at face value. When researchers show concern about data accuracy, they typically assume the classical errors-in-variables model in which observed responses equal latent true values plus white-noise error. However, the structure of the data errors produced by rounding is different from that occurring in the errors-in-variables model.

This paper studies the intriguing forms of rounding that appear to occur in responses to survey questions asking persons to state the percent-chance that some future event will occur. From the early 1990s on, questions of this type have become increasingly common in economic surveys. Manski (2004) reviews the literature.

Over a decade ago, Dominitz and Manski (1997) observed that respondents tend to report values at one-percent intervals at the extremes (i.e., 0, 1, 2 and 98, 99, 100) and at five-percent intervals elsewhere (i.e., 5, 10, …, 90, 95), with responses more bunched at 50 percent than at adjacent round values (40, 45, 55, 60). This finding has been corroborated repeatedly in subsequent studies. It seems evident that respondents to subjective probability questions round their responses, but to what extent? When someone states “3 percent,” one might reasonably infer that the person is rounding to the nearest one percent. However, when someone states “30 percent,” one might well be uncertain whether the person is rounding to the nearest one, five, or ten percent. Even more uncertain is how to interpret responses of 0, 50, and 100 percent. In some cases, these may be sharp expressions of beliefs, rounded only to the nearest one or five percent. However, some respondents may engage in gross rounding, using 0 to express any relatively small chance of an event, 50 to represent any intermediate chance, and 100 for any relatively large chance.

Survey data do not reveal why sample members may give rounded expectations responses. Some persons may hold precise subjective probabilities for future events, as presumed in Bayesian statistics, but round their responses to simplify communication. Others may perceive the future as partially ambiguous and, hence, not feel able to place precise probabilities on events. Thus, a response of “30 percent” could mean that a respondent believes that the percent chance of the event is in the range [25, 35] but feels incapable of providing finer resolution.

Considering the extreme case of total ambiguity, Fischhoff and Bruine de Bruin (1999) suggest that when respondents feel unable to assign any subjective probability to an event, they may report the value 50 to signal epistemic uncertainty, as in the loose statement ‘It’s a fifty-fifty chance.’ This idea is formally interpretable as the grossest possible form of rounding, where 50 means that the percent chance lies in the interval [0, 100]. Lillard and Willis (2001) offer an alternative interpretation, in which respondents first form full subjective distributions for the probability of an event and then report whichever of the three values (0, 50, 100) is closest to the mode of this subjective distribution.

Although survey data do not directly reveal the extent or reasons for rounding in observed responses, analysis of patterns of responses across questions and respondents is informative. We perform such analysis in Section 2, focusing on the expectations module in the 2006 administration of the Health and Retirement Study (HRS). We find that, for each question posed, the great preponderance of the responses are multiples of five, most responses are multiples of ten, and a moderate minority are multiples of 50. Examining the module as a whole, we find that sample members vary considerably in their response tendencies. A small but non-negligible fraction use only the values (0, 50, 100) throughout the module. Most of the respondents make fuller use of the 0–100 percent-chance scale: about 0.26 at least once use a multiple of ten that is not one of the values (0, 50, 100), about 0.51 at least once use a multiple of five that is not a multiple of ten, and about 0.12 at least once use a value that is not a multiple of five.

The findings of Section 2 indicate that respondents differ systematically in their rounding practices, with some habitually performing gross rounding and others tending to give more refined responses. In Section 3, we suggest use of each person’s response pattern across questions to infer the extent to which he rounds his responses to particular questions. Suppose that one makes such inferences. That is, suppose that when person *j* answers question *k* with the value *v _{jk}*, one finds it plausible to infer from his response pattern that the quantity of interest actually lies in an interval [*v _{jkL}, v_{jkU}*] containing *v _{jk}*.

In principle, empirical analysis with interval data is simply a matter of considering all points in the relevant interval to be feasible values of the quantity of interest. The practical feasibility of implementing this simple idea depends on the objective of the analysis. We focus on familiar problems of regression and best linear prediction, where the objective is to predict the quantity of interest conditional on specified covariates. Manski and Tamer (2002), Manski (2003), Chernozhukov, Hong, and Tamer (2007), and Beresteanu and Molinari (2008) have studied identification of and statistical inference on regressions and best linear predictors with interval data. We draw on their work and use the HRS data to illustrate.

The research approach proposed in Section 3 is logically more credible than the traditional practice of ignoring rounding, but it carries the price of weakened inferences. The only way to strengthen inference without weakening credibility is to collect richer data on expectations. Section 4 reports an exploratory study to show what we have in mind. Here we describe a sequence of survey questions that follows the usual percent-chance question with further questions that probe to learn the extent and reasons for rounding. We use data collected in the American Life Panel to illustrate. Section 5 concludes.

In 2006, the Health and Retirement Study administered a module of thirty-eight probabilistic expectations questions to 17,191 respondents. The module begins by asking respondents to state the percent chance that it will rain or snow tomorrow in their location. This question is intended to encourage respondents to think in probabilistic terms—essentially all Americans are familiar with weather forecasts that state the percent chance of precipitation on a future date. The module then poses a sequence of questions eliciting the percent chance that events of three types will occur. Twenty-one questions concern future personal finances, nine relate to future personal health, and eight concern future general economic conditions.

Section 2.1 describes the responses to specific questions. Section 2.2 examines the tendencies of respondents to round throughout the module.

Table 1 presents the response patterns for the “rain or snow” practice question, and for a representative selection of fifteen of the questions posed. We comment on the responses to these. Questions on personal finance shown in the table are P4, P5, P14, P15, P18, P70, P30, and P31. Ones on personal health are P28, P103, and P32. Ones on general economic conditions are P34, P110, P47, and P114. The table gives brief verbal descriptions of each question and the number of respondents who were asked each question. The respondent numbers vary across questions because the HRS makes extensive use of skip sequencing, with some questions posed only if the respondent previously gave certain answers to earlier questions. Section P of the HRS questionnaire documentation gives the exact wordings of the questions and the rules used for skip sequencing (http://hrsonline.isr.umich.edu/).

For each of the questions shown, the columns of Table 1 give the fractions of respondents who do not respond, who respond with three specific values (0, 50, 100), and who respond in two ranges (1–4 and 96–99). The column labeled M10 gives the fraction of responses that are multiples of ten other than (0, 50, 100); for example, 20, 30, or 90. Column M5 gives the fraction of responses that are multiples of 5 but not of ten; for example, 5, 15, or 85. The column labeled “other” gives the fraction of responses that are not multiples of 5 and not in the ranges 1–4 and 96–99; for example, 17, 23, or 94.

The table shows that, with two exceptions, these questions have low nonresponse rates. The fraction of nonresponse is less than 0.03 for seven of the fifteen questions and below 0.08 for thirteen questions. The two exceptions are the questions asking for the percent chance that a mutual fund will increase in value in the year ahead. The nonresponse fractions for these items are 0.24 and 0.28.

We find that responses are generally rounded at least to a multiple of five. The fraction of cases where the response is in the range 1–4 lies in the interval [0.003, 0.026] across questions, and the fraction in the range 96–99 lies in the interval [0.000, 0.008]. Responses in the “other” category are very rare, occurring in only about 0.002 of all cases. Overall, about 0.97 of all responses are multiples of five.

Sizeable fractions of responses fall in all the categories that are multiples of five. The fractions of cases where the response is 0, 50, and 100 lie in the intervals [0.012, 0.646], [0.044, 0.238], and [0.007, 0.447] respectively. The fractions in categories M10 and M5 lie in the intervals [0.173, 0.468] and [0.052, 0.180] respectively.

The sizeable fractions of responses of 0 and 100 do not suggest any particular degree of rounding— respondents may often really believe that an event is extremely unlikely or likely. Consider, for example, the fraction 0.218 of responses of 0 percent to the “rain or snow tomorrow” question. Some of these responses may embody significant rounding but many respondents, especially persons living in the southwestern part of the country, may be rounding only minimally when they report 0 percent.

Comparison of the fractions of 50, M10, and M5 responses suggests that responses vary in the degree to which they are rounded. To see this, select any of the questions described in Table 1 and consider the joint hypothesis that

- all persons giving a 50, M10, or M5 response round to the nearest five percent;
- all persons have latent subjective probabilities for events, and the cross-sectional distribution of beliefs is locally uniform. That is, for each non-extreme value
*x*that is a multiple of 5, similar fractions of persons believe there to be an*x*and*x*+ 5 percent chance that the event will occur.

The response 50 is a single value, the M10 category contains the eight values (10, …, 40, 60, …, 90), and the M5 category has the ten values (5, …, 45, 55, …, 95). Hence, the fraction of M5 responses should be slightly larger than the fraction of M10 responses and about ten times as large as the fraction of 50 responses.
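The counting argument can be made concrete with a few lines of arithmetic. The sketch below assigns a hypothetical equal mass to each round value (illustrative, not HRS data) and computes the response shares that the joint hypothesis implies:

```python
# Response shares implied by the joint hypothesis: every response is rounded
# to the nearest five percent, and the locally uniform distribution of latent
# beliefs gives each non-extreme multiple-of-5 value roughly equal mass.
m10_values = [10, 20, 30, 40, 60, 70, 80, 90]        # M10 category: 8 values
m5_values = [5, 15, 25, 35, 45, 55, 65, 75, 85, 95]  # M5 category: 10 values
fifty_values = [50]                                  # a single value

mass_per_value = 1.0  # arbitrary common mass for each round value
m10_share = mass_per_value * len(m10_values)
m5_share = mass_per_value * len(m5_values)
fifty_share = mass_per_value * len(fifty_values)

# Under the hypothesis, M5 slightly exceeds M10 (10 vs. 8 values) and is
# about ten times the share of 50-percent responses.
assert m5_share / m10_share == 1.25
assert m5_share / fifty_share == 10.0
```

Table 1’s observed fractions invert both rankings, which is the basis for rejecting part (a) of the hypothesis below.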

The data in Table 1 are considerably at odds with the specified joint hypothesis. Scanning the fifteen questions, we find that the fraction of M10 responses is always at least twice as large as the fraction of M5 responses, and sometimes much more. The fraction of 50 responses is almost always at least as large as the fraction of M5 responses and sometimes as much as twice as large.

We think it plausible that non-extreme latent subjective probabilities vary smoothly across the population and, hence, that their distribution is locally uniform. Taking part (b) as a maintained assumption, the data in Table 1 sharply contradict the rounding hypothesis of part (a). It appears that many of the respondents who report 50 round to the nearest fifty percent, while many of those who report M10 round to the nearest ten percent.

Table 1 suggests that the responses to different expectations questions vary in the degree to which they are rounded. However, it does not indicate whether respondents systematically vary in their tendency to round. Table 2 addresses this matter.

Table 2 describes response patterns across all the questions in the HRS module. The table presents separate findings for males and females and, within each gender, by age. The sample members under examination are the 16,674 HRS respondents whose age is 50 or above.

Table 2 shows that the mean number of items asked per person is 23 for males and 22 for females. These numbers are smaller than the totality of 38 questions included in the HRS module. The reason is the HRS use of skip sequencing. The table shows that the mean number of responses per person is 22 for males and 20 for females. Thus, as previously indicated in Table 1, the overall response rate to the expectations module is very high.

The main part of the table shows the fractions of respondents with each of seven mutually exclusive and exhaustive response patterns, ordered from the most rounded at the left to the least rounded at the right:

- respond to none of the questions posed;
- when responding, use only the extreme values 0 and 100;
- use only the values (0, 50, 100) and respond 50 to at least one question;
- respond to all questions with a multiple of 10, and to at least one question with a multiple of 10 that is not one of the values (0, 50, 100);
- respond to all questions with a multiple of 5, and to at least one question with a multiple of 5 that is not a multiple of 10;
- respond to at least one question with a value in the range 1–4 or 96–99;
- respond to at least one question with a value that is not a multiple of 5 and not in the range 1–4 or 96–99.

The table shows that small but non-negligible fractions of respondents respond to none of the questions posed to them (males 0.0227, females 0.0276). Similarly small but non-negligible fractions use only the values (0, 100) in their responses (males 0.0136, females 0.0234) or only the values (0, 50, 100) (males 0.0195, females 0.0261). Observe that females are a bit more likely than males to have these response patterns. The table shows that the incidence of these patterns does not vary systematically with age until age 80, when they become notably more prevalent.

The great preponderance of the respondents have response patterns that use much more of the 0–100 percent-chance scale. About 0.26 fall in the M10 category, about 0.51 in the M5 category, about 0.12 give at least one response in the range 1–4 or 96–99, and about 0.04 give at least one response that is not a multiple of five and not in the range 1–4 or 96–99.

Table 2 showed that HRS respondents differ systematically in their rounding practices, with a small fraction of them habitually performing gross rounding and most of them sometimes giving more refined responses. This suggests use of each person’s response pattern across questions to infer the extent to which he rounds his responses to particular questions.

Section 3.1 proposes a particular inferential approach. When person *j* answers question *k* with the value *v _{jk}*, we infer from his response pattern that the quantity of interest actually lies in an interval [*v _{jkL}, v_{jkU}*] containing *v _{jk}*.

Sections 3.2 and 3.3 show how to perform conditional prediction with interval expectations data. Section 3.2 reviews relevant methodological research. Section 3.3 uses the HRS data to illustrate.

The general idea is to replace each report *v _{jk}* with an interval [*v _{jkL}, v_{jkU}*] whose width reflects the extent of rounding inferred from the person’s response pattern.

If a person does not respond to a question, then we only know that his subjective probability for the event lies in the interval [0, 100]. If a person only uses the values (0, 100) when replying to questions in the class, then we treat his data as if he is rounding grossly, with 0 implying the interval [0, 50] and 100 implying the interval [50, 100]. If a person only uses the values (0, 50, 100), then we treat his data as if he is rounding somewhat less grossly, with 0 implying the interval [0, 25], the response 50 implying the interval [25, 75], and 100 implying the interval [75, 100]. If all responses are multiples of 10 and at least one is not (0, 50, 100), then we treat his data as if he is rounding to the nearest 10. And so on.

Formally, let *m* denote a class of expectations questions posed in the HRS, within which one thinks it reasonable to suppose that a respondent uses a common rounding rule. Let *r _{jm}* denote person *j*’s response pattern across the questions in class *m*, categorized as in Section 2.2, and let NR denote nonresponse. The algorithm assigns intervals as follows:

If *v _{jk}* = NR, then [*v _{jkL}, v_{jkU}*] = [0, 100].

If *r _{jm}* = (all 0 or 100), then [*v _{jkL}, v_{jkU}*] = [0, 50] if *v _{jk}* = 0 and [50, 100] if *v _{jk}* = 100.

If *r _{jm}* = (all 0, 50, or 100), then [*v _{jkL}, v_{jkU}*] = [0, 25] if *v _{jk}* = 0, [25, 75] if *v _{jk}* = 50, and [75, 100] if *v _{jk}* = 100.

If *r _{jm}* = M10, then [*v _{jkL}, v_{jkU}*] = [max(0, *v _{jk}* − 5), min(100, *v _{jk}* + 5)].

If *r _{jm}* = M5, then [*v _{jkL}, v_{jkU}*] = [max(0, *v _{jk}* − 2.5), min(100, *v _{jk}* + 2.5)].

If *r _{jm}* = (some 1–4 or 96–99) and *v _{jk}* is in the range 1–4 or 96–99, then [*v _{jkL}, v_{jkU}*] = [*v _{jk}* − 0.5, *v _{jk}* + 0.5].

If *r _{jm}* = (some 1–4 or 96–99) and *v _{jk}* is a multiple of 5, then [*v _{jkL}, v_{jkU}*] = [max(0, *v _{jk}* − 2.5), min(100, *v _{jk}* + 2.5)].

If *r _{jm}* = other, then [*v _{jkL}, v_{jkU}*] = [max(0, *v _{jk}* − 0.5), min(100, *v _{jk}* + 0.5)].
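In code, the assignment can be sketched as follows. This is an illustrative implementation in the spirit of the rules above, not part of the HRS documentation; the function names and the precedence given to the finer-grained response patterns are our assumptions:

```python
def rounding_class(responses):
    """Classify a person's answered values over a class of questions."""
    vals = set(responses)
    if vals <= {0, 100}:
        return "all_0_100"
    if vals <= {0, 50, 100}:
        return "all_0_50_100"

    def fine(v):  # a value in the range 1-4 or 96-99
        return 1 <= v <= 4 or 96 <= v <= 99

    if any(v % 5 != 0 and not fine(v) for v in vals):
        return "other"                     # least rounded pattern takes precedence
    if any(fine(v) for v in vals):
        return "some_1_4_96_99"
    if all(v % 10 == 0 for v in vals):
        return "M10"
    return "M5"                            # all multiples of 5, some not of 10

def interval(v, cls):
    """Map one reported value to the interval [v_L, v_U] for its class."""
    if v is None:                          # nonresponse
        return (0.0, 100.0)
    if cls == "all_0_100":
        return (0.0, 50.0) if v == 0 else (50.0, 100.0)
    if cls == "all_0_50_100":
        return {0: (0.0, 25.0), 50: (25.0, 75.0), 100: (75.0, 100.0)}[v]
    if cls == "M10":
        half = 5.0                         # rounded to the nearest 10
    elif cls == "M5":
        half = 2.5                         # rounded to the nearest 5
    elif cls == "some_1_4_96_99":
        half = 0.5 if (1 <= v <= 4 or 96 <= v <= 99) else 2.5
    else:                                  # "other": rounded to the nearest 1
        half = 0.5
    return (max(0.0, v - half), min(100.0, v + half))
```

For example, a respondent who only ever reports multiples of ten has each report *v* read as the interval [*v* − 5, *v* + 5], truncated to [0, 100].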

We believe this algorithm to provide a reasonable and practical resolution of the tension between informativeness and credibility. Nevertheless, we think it important to make clear that the algorithm is potentially subject to two forms of misclassification, with asymmetric consequences for inference. First, a given survey response may be less rounded than the interval yielded by the algorithm; that is, the actual rounding interval may be a subset of the algorithm’s interval. When such an error occurs, our use of the data is correct but it is less sharp than it would be if we knew the true degree of rounding of the response. Second, the actual rounding interval may not be a subset of the algorithm’s interval. When such an error occurs, our use of the data is not correct.

Given the available HRS data, we cannot know the prevalence of errors of the two types. All we can say for sure is that, compared with the traditional practice of taking expectations data at face value, use of the algorithm lowers the prevalence of the latter errors and, correspondingly, increases the prevalence of the former errors. We believe that it is valuable to report inferences that are more credible than the traditional ones, albeit less sharp; hence, our development of the algorithm.

A potentially productive, albeit rather challenging, direction for future work would be to consider a broad class of methods for dealing with rounding and seek to develop appropriate criteria for choosing among them. Ideally, such research should begin with an explicit objective function and behavioral assumptions, and then proceed in a principled decision-theoretic manner. Manski and Molinari (2008) have recently performed an exploratory study of this type to assess alternative procedures for skip sequencing in surveys. Skip sequencing raises specific issues different from those of rounding, but the broad conceptual ideas developed in their study would apply here.

In principle, empirical analysis with interval data is simply a matter of considering all points in the relevant interval to be feasible values of the quantity of interest. In practice, implementation of this simple idea can be easy or difficult, depending on the objective of the analysis. We discuss here only the relatively simple problem of inference on best predictors with interval outcome data. We first consider identification and then statistical inference. Manski and Tamer (2002) and Horowitz and Manski (2006) address aspects of the more complex problem of inference on best predictors with interval covariate data.

As explained in Manski (2003), the *identification region* for a population parameter is the set of values that remain feasible when unlimited observations from the sampling process are combined with maintained assumptions. The parameter is *point-identified* when this set contains a single value and is *partially identified* when the set is smaller than the parameter’s logical range, but is not a single point. The present analysis maintains the assumption that *v _{jk}* lies in the interval [*v _{jkL}, v_{jkU}*] delivered by the algorithm of Section 3.1.

Consider best nonparametric prediction of *v* given **x** under square loss, where *v* is a latent subjective expectation and **x** = [1, *x*_{1}, …, *x _{p}*] is a covariate vector. When *v* is known only to lie in the interval [*v _{L}, v_{U}*], the identification region for the conditional expectation *E*(*v*|**x**) is

$$\text{H}[E(v\mid \mathbf{x})]=[E({v}_{L}\mid \mathbf{x}),\;E({v}_{U}\mid \mathbf{x})],$$

(1)

where H [·] denotes the identification region of the functional in brackets.

Consider now best linear prediction (BLP) of *v* given **x** under square loss. Assume that the matrix *E*(**x**′**x**) is nonsingular and that the variables (*v*, *v _{L}*, *v _{U}*, **x**) have finite second moments. The BLP parameter vector is

$$\boldsymbol{\beta}=\arg \underset{\mathbf{b}}{\min}\,E[{(v-\mathbf{x}\mathbf{b})}^{2}].$$

It is well known that in this case ** β** solves the equations

$$\mathit{\beta}={[E({\mathbf{x}}^{\prime}\mathbf{x})]}^{-1}E[{\mathbf{x}}^{\prime}v].$$

(2)

Beresteanu and Molinari (2008) show that when *v* is only known to lie in the random interval [*v _{L}, v_{U}*], the identification region for ** β** is the set of BLP parameter vectors generated by all random variables *ṽ* that lie in this interval with probability one:

$$\text{H}[\boldsymbol{\beta}]=\left\{\mathbf{b}:\mathbf{b}={[E({\mathbf{x}}^{\prime}\mathbf{x})]}^{-1}E[{\mathbf{x}}^{\prime}\tilde{v}],\;P({v}_{L}\le \tilde{v}\le {v}_{U})=1\right\}.$$

(3)

Intuitively, this identification region is obtained by considering all points in [*v _{L}, v_{U}*], that is, all feasible values of *v*, and collecting the BLP parameter vectors that they generate.

This characterization of the identification region is computationally easy to implement. Closed form bounds are available for the projection of H [** β**] onto each of its components; that is, for the identification region of each single parameter of the BLP. Closed form bounds are also available for the best predictor itself, whose identification region is

$$\text{H}[\mathit{BLP}(v\mid \mathbf{x})]=\{\mathbf{x}\mathbf{b}:\;\mathbf{b}\in \text{H}[\boldsymbol{\beta}]\}.$$

(4)

We provide more details about these closed form bounds below, when discussing statistical inference.

Finally, consider the assumption that the BLP is best nonparametric; i.e., that there exists a *β*^{*} such that *E*(*v*|**x**) = **x** *β*^{*}. Under this assumption, the identification region for *β*^{*} is the set of values of **b** that satisfy *E*(*v _{L}*|**x**) ≤ **xb** ≤ *E*(*v _{U}*|**x**) almost surely, a subset of H[** β**].

The above discussion assumes that the latent expectations *v* are well-defined and that the inferential method proposed in Section 3.1 is correct. Either assumption could fail in practice, the former if respondents are unable to place precise probabilities on events and the latter if respondents do not use a common rounding rule across the questions in class *m*. If either assumption is incorrect, the identification regions given in equations (1), (3), and (4) nevertheless remain mathematically well-defined and non-empty. However, the substantive interpretation of these regions is not transparent.

The lower and upper bounds in equation (1) can be estimated by standard nonparametric procedures such as the kernel method. Denote these estimators respectively by *v̄ _{nL|x}* and *v̄ _{nU|x}*. The estimate of the identification region is then

$${\widehat{\text{H}}}_{n}[E(v\mid \mathbf{x})]=[{\overline{v}}_{nL\mid \mathbf{x}},\;{\overline{v}}_{nU\mid \mathbf{x}}].$$

(5)

Beresteanu and Molinari (2008) show that the identification region (3) for the BLP parameter vector can be estimated as a Minkowski average of properly defined set-valued random variables. The identification region for each component of the BLP parameter vector can be estimated by simple linear projections. In particular, Beresteanu and Molinari (2008, Corollary 4.5) show that for *d* = 1,…*,p*,

$${\widehat{\text{H}}}_{n}[{\beta}_{d}]=\frac{1}{{\sum}_{i=1}^{n}{\stackrel{~}{x}}_{id}^{2}}\left[\sum _{i=1}^{n}min\{{\stackrel{~}{x}}_{id}{v}_{iL},{\stackrel{~}{x}}_{id}{v}_{iU}\},\sum _{i=1}^{n}max\{{\stackrel{~}{x}}_{id}{v}_{iL},{\stackrel{~}{x}}_{id}{v}_{iU}\}\right]$$

(6)

is a consistent estimator (with respect to the Hausdorff metric) for the population identification region H [*β _{d}*]. In the above expression, *x̃ _{id}* denotes the residual from the sample linear projection of *x _{id}* on the other components of **x** _{i}.

To understand this bound, suppose that the latent subjective expectations were exactly revealed in the data. Then one would be able to use the algebra of residual (or partitioned) regression to estimate *β _{d}* by projecting *v* on *x̃ _{d}*, the residual from projecting *x _{d}* on the other covariates. With interval data, each observation’s outcome may lie anywhere in [*v _{iL}, v_{iU}*]; choosing, observation by observation, the endpoint that minimizes or maximizes the projection yields the lower and upper bounds in equation (6).
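A minimal sketch of the estimator in equation (6), using the residual-regression construction just described. The names are ours: `X` is the covariate matrix including the constant, and `vL`, `vU` are the interval endpoints produced by the algorithm of Section 3.1:

```python
import numpy as np

def blp_coef_bounds(X, vL, vU, d):
    """Estimate the bound H_n[beta_d] of equation (6).

    Projects the interval-valued outcome on the residual of covariate d
    after partialling out the remaining covariates.
    """
    n, p = X.shape
    others = [j for j in range(p) if j != d]
    # Residual of column d from a sample linear projection on the others
    coef, *_ = np.linalg.lstsq(X[:, others], X[:, d], rcond=None)
    x_tilde = X[:, d] - X[:, others] @ coef
    denom = np.sum(x_tilde ** 2)
    # Observation-by-observation min/max over the two interval endpoints
    lower = np.sum(np.minimum(x_tilde * vL, x_tilde * vU)) / denom
    upper = np.sum(np.maximum(x_tilde * vL, x_tilde * vU)) / denom
    return lower, upper
```

When `vL` and `vU` coincide, the two endpoints collapse to the ordinary least squares coefficient.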

Similarly, the identification region in equation (4) can be estimated by two linear projections. Denote by Σ̂ _{n} the sample analog estimator of *E*(**x**′**x**). Then, for a covariate value **x**_{0},

$${\widehat{\text{H}}}_{n}[\mathit{BLP}(v\mid \mathbf{x}={\mathbf{x}}_{0})]=\left[\frac{1}{n}\sum _{i=1}^{n}\min \left({\mathbf{x}}_{0}{\widehat{\mathrm{\Sigma}}}_{n}^{-1}{\mathbf{x}}_{i}^{\prime}{v}_{iL},\,{\mathbf{x}}_{0}{\widehat{\mathrm{\Sigma}}}_{n}^{-1}{\mathbf{x}}_{i}^{\prime}{v}_{iU}\right),\;\frac{1}{n}\sum _{i=1}^{n}\max \left({\mathbf{x}}_{0}{\widehat{\mathrm{\Sigma}}}_{n}^{-1}{\mathbf{x}}_{i}^{\prime}{v}_{iL},\,{\mathbf{x}}_{0}{\widehat{\mathrm{\Sigma}}}_{n}^{-1}{\mathbf{x}}_{i}^{\prime}{v}_{iU}\right)\right].$$

(7)

See Stoye (2007a) for related findings. When the BLP is assumed to be best nonparametric, a case that we do not pursue here, it can be estimated using methods developed in Manski and Tamer (2002) and Chernozhukov, Hong, and Tamer (2007).
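The two linear projections in equation (7) can be sketched in the same spirit, again with illustrative names; `x0` is the covariate value at which the best linear predictor is evaluated:

```python
import numpy as np

def blp_prediction_bounds(X, vL, vU, x0):
    """Estimate H_n[BLP(v | x = x0)] per equation (7).

    For each observation, the outcome endpoint that minimizes (maximizes)
    x0 Sigma_n^{-1} x_i' v_i is selected before averaging.
    """
    n = X.shape[0]
    sigma_inv = np.linalg.inv(X.T @ X / n)  # inverse of the sample E(x'x)
    w = x0 @ sigma_inv @ X.T                # weight on each observation's outcome
    lower = np.mean(np.minimum(w * vL, w * vU))
    upper = np.mean(np.maximum(w * vL, w * vU))
    return lower, upper
```

With point data (`vL` equal to `vU`), both endpoints reduce to the ordinary fitted value at `x0`.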

Recent literature proposes two approaches to construction of confidence sets for partially identified parameters. Imbens and Manski (2004) and Stoye (2007b) propose confidence sets that (asymptotically) uniformly cover each point in the identification region, rather than the entire region, with at least a prespecified probability. Horowitz and Manski (2000), Chernozhukov, Hong and Tamer (2007) and Beresteanu and Molinari (2008), among others, give confidence sets that (asymptotically) cover the entire identification region with a prespecified probability. Clearly, as shown in Imbens and Manski (2004, Lemma 1), confidence sets that asymptotically cover the entire identification region with a prespecified probability constitute valid but conservative confidence sets for the partially identified parameter.

In the empirical illustration of Section 3.3, we report Imbens-Manski confidence sets that (asymptotically) cover each point in the identification region with probability at least 95%. When, for example, the object of interest is the bound in equation (5), we obtain Imbens-Manski confidence sets (*CS*) by calculating the value of *c _{α}* such that

$$\mathrm{\Phi}\left({c}_{\alpha}+\sqrt{n}\,\frac{{\overline{v}}_{nU\mid \mathbf{x}}-{\overline{v}}_{nL\mid \mathbf{x}}}{\max ({\widehat{\sigma}}_{nL},{\widehat{\sigma}}_{nU})}\right)-\mathrm{\Phi}(-{c}_{\alpha})=0.95,$$

(8)

and by setting

$$CS=\left[{\overline{v}}_{nL\mid \mathbf{x}}-{c}_{\alpha}\frac{{\widehat{\sigma}}_{nL}}{\sqrt{n}},\;{\overline{v}}_{nU\mid \mathbf{x}}+{c}_{\alpha}\frac{{\widehat{\sigma}}_{nU}}{\sqrt{n}}\right],$$

(9)

where σ̂ _{nL} and σ̂ _{nU} are estimates of the asymptotic standard deviations of *v̄ _{nL|x}* and *v̄ _{nU|x}*, respectively, and Φ is the standard normal distribution function.
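The critical value *c _{α}* in equation (8) has no closed form but is easy to compute numerically. The sketch below solves the equation by bisection; for simplicity it takes a single standard-deviation estimate where the equation uses the two endpoint estimators, and the names are ours:

```python
import math

def norm_cdf(z):
    """Standard normal distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def imbens_manski_c(width, sigma, n, level=0.95):
    """Solve Phi(c + sqrt(n) * width / sigma) - Phi(-c) = level for c.

    width: estimated width of the identification region (upper minus lower).
    """
    shift = math.sqrt(n) * width / sigma

    def gap(c):  # strictly increasing in c, so bisection applies
        return norm_cdf(c + shift) - norm_cdf(-c) - level

    lo, hi = 0.0, 10.0
    for _ in range(100):  # bisection on [0, 10]
        mid = 0.5 * (lo + hi)
        if gap(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a point-identified parameter (width zero) this reproduces the usual two-sided value of about 1.96; as the estimated region widens, the critical value falls toward the one-sided 1.645, which is why these sets are narrower than sets covering the entire identification region.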

This section illustrates how attention to rounding in probabilistic expectations affects the conclusions that one can draw in empirical analysis. We consider the subjective expectations of HRS respondents for survival to age 75. These expectations have drawn attention beginning with the work of Hurd and McGarry (1995), who used data from the 1992 wave of the HRS, which elicited, on a scale from 0 to 10, respondents’ expectations of the probability of surviving to age 75 or older. They described how responses covary with age, gender, marital status, health risk factors, income, and wealth.

Several subsequent studies have used percent-chance data collected in later waves of the HRS to investigate the temporal evolution of survival expectations and their association with actual mortality. These include Hurd, McFadden, and Merrill (2001), Hurd and McGarry (2002), and Smith, Taylor, and Sloan (2001). These studies conclude that survival expectations predict mortality reasonably well. Moreover, they find that expectations change in reasonable ways following receipt of new information, such as the onset of a new health problem or the death of a relative. In yet another study, Hurd, Smith, and Zissimopoulos used the HRS survival probabilities to predict retirement behavior and Social Security claiming.

The studies cited above and, to the best of our knowledge, all studies using subjective probabilities, have taken the elicited expectations at face value. We examine how empirical findings are affected when the algorithm specified in Section 3.1 is used to account for rounding. We analyze data from the 2006 wave of the HRS, where 6,713 respondents below age 65 were asked: “What is the percent chance that you will live to be 75 or more?” See question P28 in Table 1 for the response distribution. To keep the illustration simple, we focus on the variation of responses with age and gender (estimation results for the variation of responses with age, gender, marital status, race, several indicators of the level of physical activity the respondent engages in, smoking and drinking behavior, other health risk factors, body weight, education, income and wealth, are available from the authors upon request).

Table 3 reports the sample frequencies of the width of the intervals [*v _{jL}, v_{jU}*] when the intervals are constructed using two versions of the algorithm in Section 3.1. The left panel assumes that each respondent uses a common rounding rule when answering the class of health-related expectations questions (*m* = *health*); the right panel assumes a common rounding rule across all of the expectations questions (*m* = *all*).
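The general idea of inferring intervals from a respondent's rounding gradation can be sketched as follows. This is not the exact Section 3.1 algorithm (which is not reproduced here); the gradation set {50, 25, 10, 5, 1} and the symmetric half-width intervals are illustrative assumptions.

```python
def rounding_gradation(responses):
    """Coarsest gradation consistent with a person's response pattern
    across a class of questions: the largest g in {50, 25, 10, 5, 1}
    dividing every reported value. (Illustrative stand-in for the
    Section 3.1 algorithm.)"""
    valid = [r for r in responses if r is not None]  # None = nonresponse
    for g in (50, 25, 10, 5, 1):
        if valid and all(r % g == 0 for r in valid):
            return g
    return 1

def to_interval(v, g):
    """Interpret a report v, rounded to the nearest multiple of g, as
    the interval [v - g/2, v + g/2], truncated to the [0, 100] scale."""
    if v is None:
        return (0.0, 100.0)  # nonresponse: the uninformative interval
    return (max(0.0, v - g / 2), min(100.0, v + g / 2))

g = rounding_gradation([50, 100, 0])   # all multiples of 50, so g = 50
lo, hi = to_interval(50, g)            # -> (25.0, 75.0)
```

Under this sketch, a person who answers every question in the class with 0, 50, or 100 has each report mapped to an interval of width 50, which is why such response patterns make the resulting identification regions wide.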

Table 4 reports estimates of the parameters of the BLP for the probability of survival to age 75, conditional on age and gender. The table presents 95-percent confidence sets based on the normal approximation for the point identified parameters. It gives 95-percent Imbens-Manski confidence sets for the partially identified parameters. These confidence sets are obtained applying equations (8)–(9), with the appropriate modifications for the different parameters reported. The covariance matrix of the limiting distribution of the endpoints of the bounds is estimated using the bootstrap.

The left column labelled “Point Estimates” reports the findings when the expectations data are taken at face value and nonresponse is assumed to be random. The estimates and confidence sets indicate that survival expectations vary positively with age. Males tend to have lower likelihoods of survival than females.

The columns labelled “Set Estimates” report the findings using interval expectations data. In panels A and B, nonresponse is not assumed random. Instead, it generates an extreme interval [0,100]. Panel A gives the estimated lower and upper bound on each of the three parameters when *m* = *health* and Panel B does likewise when *m* = *all*.

When rounding is taken into account, it is not possible to draw strong conclusions about the variation of expectations with age and gender. Consider the variation with age. When the expectations data were taken at face value, the point estimate of the “age” parameter was 0.2518. When rounding is taken into account, the interval estimate is [−0.9751, 1.3752] in Panel A and [−0.546, 0.9841] in Panel B. Similar ambiguity emerges about the variation with gender.

We have performed exploratory data analysis to obtain a sense of the source of the wide intervals obtained when rounding is taken into account. This analysis suggests that the wide intervals result primarily from the respondents in the (all NR), (all 0 or 100), and (all 0, 50, or 100) response categories. Although only 0.16 of all respondents fall in these categories for *m* = *health* and 0.06 for *m* = *all*, their responses are sufficiently uninformative to make the identification regions relatively wide.

Focusing on the case *m* = *all*, columns (C)–(E) of Table 4 report BLP parameter estimates that drop, respectively, the respondents in the (all NR) response category, the (all NR) and (all 0 or 100) categories, and the (all NR), (all 0 or 100), and (all 0, 50, or 100) categories. Clearly, using only the data on respondents who give more refined responses yields estimated intervals that are much narrower than those in Columns (A)–(B). However, we caution the reader that these findings pertain to a select sub-population that is not generally of substantive interest. This sub-population would have the same distribution of beliefs as the full population, and hence be of interest, if we were to assume that respondents randomly fall into the (all NR), (all 0 or 100) and (all 0, 50, or 100) response categories. We think this assumption unrealistic in the present application.

Now consider prediction of respondents’ expectations conditional on age and gender. The feasible values of the estimated BLP cannot be obtained from Table 4 because the joint identification region for the three parameters on (age, gender, constant) is not the Cartesian Product of the identification regions for each parameter in isolation. Rather, it is a proper subset of this Cartesian Product set. That is, some values of the parameters for (age, gender, constant) are feasible taken one parameter at a time but are not jointly feasible. Beresteanu and Molinari (2008) show that the joint identification region of the three parameters is a certain convex subset of the Cartesian Product set.

The left panels of Figures 1 and 2 report, for males and females respectively, the nonparametric set estimates of the form given in equation (5). The nonparametric predictions are obtained using simple cell means. The sample sizes for each value of the age variable range between 202 and 349. The top left panel of each figure uses the intervals constructed with *m* = *health* and the bottom left with *m* = *all*.
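A minimal sketch of the cell-mean computation, assuming the interval-data bound of equation (5) is the sample mean of the lower interval endpoints and of the upper endpoints within each covariate cell; the toy data below are hypothetical.

```python
import numpy as np

def cell_mean_bounds(age, v_lo, v_hi, ages):
    """Nonparametric set estimate of E(v | age): with interval data
    [v_lo, v_hi], estimate the bound within each age cell by the mean
    of the lower endpoints and the mean of the upper endpoints."""
    age, v_lo, v_hi = map(np.asarray, (age, v_lo, v_hi))
    return {a: (v_lo[age == a].mean(), v_hi[age == a].mean()) for a in ages}

# Toy data: two respondents aged 60, one aged 61 (percent-chance intervals;
# the age-61 respondent is a nonrespondent with the interval [0, 100]).
bounds = cell_mean_bounds(
    age=[60, 60, 61], v_lo=[45, 65, 0], v_hi=[55, 75, 100], ages=[60, 61])
# bounds[60] == (55.0, 65.0); bounds[61] == (0.0, 100.0)
```

The toy example also shows mechanically why a handful of respondents with the extreme interval [0, 100] widens a cell's bound substantially.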

Set Estimates (Light Grey) and 95% Confidence Sets (Dark Grey) For Males. Predictions of Levels: Nonparametric (Cell Means) on the Left, Parametric on the Right. Data Source: HRS2006.

Set Estimates (Light Grey) and 95% Confidence Sets (Dark Grey) For Females. Predictions of Levels: Nonparametric (Cell Means) on the Left, Parametric on the Right. Data Source: HRS2006.

The right panels of the figures report corresponding Beresteanu-Molinari set estimates of the BLP of the form given in equation (7) and the associated Imbens-Manski confidence sets. The figures illustrate that the set estimates narrow toward mean age, and then spread out again. Beresteanu and Molinari (2008, Section 4) show that this is an algebraic property of the BLP identification region.

Figure 1a reports analogous results for the selected subpopulations of males that exclude respondents in the (all NR) response category (top panels), the (all NR) and (all 0 or 100) categories (middle panels), and the (all NR), (all 0 or 100), and (all 0, 50, or 100) categories (bottom panels). As with estimation of the BLP parameters, using only the data on respondents who give more refined responses yields estimated intervals that are much narrower than those obtained for the entire population of males. For brevity, we do not report the similar results for females.

The figures clearly show that taking rounding into account substantially degrades one's ability to predict subjective probabilities of survival to age 75 conditional on age and gender. It is similarly difficult to draw conclusions about the variation of subjective probabilities with age and gender. Let *x* and *x*′ denote any distinct values of the covariates (age, gender). In the case of nonparametric prediction, set estimates for *E*(*v*|**x**) − *E*(*v*|**x**′) can be read directly from Figures 1 and 2. An appropriate estimate of the lower (upper) bound on *E*(*v*|**x**) − *E*(*v*|**x**′) is the lower (upper) bound on *E*(*v*|**x**) minus the upper (lower) bound on *E*(*v*|**x**′).
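The rule just stated for bounding *E*(*v*|**x**) − *E*(*v*|**x**′) is a one-line computation; the input bounds below are hypothetical.

```python
def difference_bound(bound_x, bound_xp):
    """Bound on E(v|x) - E(v|x') from separate bounds on each term:
    the lower bound subtracts the upper bound at x', and the upper
    bound subtracts the lower bound at x'."""
    lo_x, hi_x = bound_x
    lo_xp, hi_xp = bound_xp
    return (lo_x - hi_xp, hi_x - lo_xp)

# Hypothetical cell bounds at two covariate values.
diff = difference_bound((55.0, 65.0), (50.0, 62.0))  # -> (-7.0, 15.0)
```

Note that the resulting interval is wider than either input bound, which is why variation with age and gender is hard to sign even when each cell's bound is moderately narrow.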

Figures 1 and 2 cannot be used to estimate the difference between the BLPs at different covariate values. The reason is that the joint identification region of {*BLP* (*v*|**x**), *BLP* (*v*|**x**′)} is a strict subset of the Cartesian Product of the identification regions for *BLP* (*v*|**x**) and *BLP* (*v*|**x**′). Beresteanu and Molinari (2008) show how to construct an appropriate estimate of the difference in BLPs. For the special case that **x** and **x**′ differ only in one component *d* and by one unit (with ${x}_{d}^{\prime}={x}_{d}-1$), they show that the identification region for the difference between the BLPs is equal to the identification region of the parameter corresponding to the *d*-th variable. For example, the difference in BLPs conditional on age and gender for males of age 64 and 63 can be read in Table 4, as the identification region for the parameter of the “age” variable.

The research approach developed in Section 3 is more credible than the traditional practice of ignoring rounding, but it carries the price of weakened inference. The illustration of Section 3.3 indicates that taking account of rounding can be consequential. Even using all of the HRS expectations questions to infer how respondents round their responses to the survival question, we could draw only weak conclusions about survival expectations and their variation with age and gender.

The only way to strengthen inference without weakening credibility is to collect richer data on expectations. A potentially fruitful way to enrich the data is to follow a standard probabilistic expectations question with further questions that probe to learn the extent and reasons for rounding. Consider again the survival probability question analyzed in Section 3. One could follow up with these questions:

Q1. When you said [X percent] just now, did you mean this as an exact number or were you rounding or approximating?

If a person answers “rounding or approximating,” one might then ask

Q2. What number or range of numbers did you have in mind when you said [X percent]?

The responses to these questions could be used to improve on the inferences of Section 3. When the response to Q1 is “an exact number,” one could reasonably conclude that rounding was minimal. When the response is “rounding or approximating,” one could use the response to Q2 to interpret the data.

To explore how persons might respond to probes on their rounding practices, we posed the survival question to 552 respondents to the American Life Panel (ALP), an internet survey of American adults administered by RAND, and followed it by Q1 and Q2. See http://rand.org/labor/roybalfd/american_life.html for description of the ALP. Table 5 describes our findings. All of the respondents answered question Q1. Of the 552 respondents, 264 reported that their response to the survival question was an exact answer and the remaining 288 reported that they had rounded or approximated.

Within the group of 288 persons who were asked Q2, all but two responded fully. Seventy persons reported that they had an exact number in mind and 248 that they had a range in mind; these numbers sum to more than 288 because 31 persons reported both an exact number and a range. Among the 248 who reported a range in response to Q2, the average width of the reported interval was 17.6 percent.

One can use the ALP data to estimate the BLP for the probability of survival to age 75, conditional on age and gender. Table 6 presents the parameter estimates in a manner similar to Table 4, and Figure 3 reports the BLP estimates in a manner similar to Figures 1 and 2. The left panel of Table 6 reports point estimates that take the elicited survival probabilities at face value. The right panel of Table 6 and the graphs in Figure 3 report set estimates based on the responses to questions Q1 and Q2. Thus, we take the elicited probability at face value when a person responds to Q1 stating that he meant it as an exact number. We use the exact or range response to Q2 otherwise. We use the stated range in the 31 cases where a person gave both an exact number and a range in response to Q2. We use the range [0,100] for the respondent who did not answer question Q2 and for the respondent who answered Q2 with a range having lower bound greater than upper bound. We use an upper bound equal to 100 for the respondent who gave a range that specified only the lower bound.
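The mapping from Q1/Q2 responses to intervals described above can be sketched as follows; the function and argument names are our own, and the rules implement the treatment described in the text.

```python
def alp_interval(q1, reported=None, q2_exact=None, q2_lo=None, q2_hi=None):
    """Map a respondent's Q1/Q2 probe answers to an interval on the
    0-100 percent-chance scale. q1 is 'exact' or 'rounded'; the other
    arguments are the reported probability and the Q2 follow-up."""
    if q1 == "exact":
        return (reported, reported)       # take the report at face value
    if q2_lo is not None or q2_hi is not None:
        lo = 0.0 if q2_lo is None else q2_lo
        hi = 100.0 if q2_hi is None else q2_hi  # missing upper bound -> 100
        if lo > hi:
            return (0.0, 100.0)           # inconsistent range: uninformative
        return (lo, hi)                   # a stated range overrides an exact number
    if q2_exact is not None:
        return (q2_exact, q2_exact)       # exact number given in Q2
    return (0.0, 100.0)                   # no usable answer to Q2
```

For example, a respondent who said 50, reported rounding, and gave the range 40 to 60 in Q2 contributes the interval [40, 60] rather than the point 50.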

Observe that the interval estimates in Table 6 and Figure 3 are considerably narrower than the corresponding intervals in Table 4 and Figures 1 and 2. This is so because the ALP responses to questions Q1 and Q2 tend to yield much tighter inferences on rounding than the algorithm of Section 3.1 did when applied to the HRS data. We caution that the ALP and HRS sample designs differ considerably. Hence, one should be careful in extrapolating how HRS respondents would have answered questions Q1 and Q2, had they been posed.

We also caution that our estimates using the ALP data accept at face value what respondents reported about their rounding practices. A potentially worrisome finding in Table 5 is that, among the 264 persons who reported that their response was an exact number, almost a quarter (60) reported that their survival probability is precisely 50 percent. Perhaps this is what they really believed, but it could be that many of these respondents were actually rounding.

These caveats notwithstanding, the results of this exploratory work are encouraging. Respondents to the ALP showed no resistance to being probed about the interpretation of their reported survival probabilities. If questions Q1 and Q2 had been placed on the 2006 HRS, it would have been possible to use the responses to infer rounding rather than the less direct approach of Section 3.1. It also would have been possible to validate the approach of Section 3.1. A limited experiment within the 2008 administration of the HRS does pose questions Q1 and Q2 to some respondents. When these data become available, presumably in 2009, it may become possible to perform this validation.

This paper has studied the rounding of responses to percent-chance expectations questions, a specific type of survey question. The ideas developed here should have broad application. Consider, for example, a question asking respondents to state the number of hours they worked in the past week. Many respondents may round their responses, with the extent of rounding differing across persons. Examination of a person's response pattern across different numerical questions may provide a credible way to infer his rounding practice. It may then be credible to interpret reported numerical values as intervals. In such cases, the analytical approach proposed and illustrated in Section 3 will be applicable. Similarly, probing to ascertain the extent and reasons for rounding, as suggested in Section 4, should be applicable broadly.

^{*}Manski’s research was supported in part by National Institute of Aging grants R21 AG028465-01 and 5P01AG026571-02, and by National Science Foundation grant SES-0549544. Molinari’s research was supported in part by National Institute of Aging grant R21 AG028465-01 and by National Science Foundation grant SES-0617482. We are grateful to an associate editor, two anonymous referees, and Arie Beresteanu for comments.

Charles F. Manski, Department of Economics and Institute for Policy Research, Northwestern University.

Francesca Molinari, Department of Economics, Cornell University.

1. Beresteanu A, Molinari F. Asymptotic Properties for a Class of Partially Identified Models. Econometrica. 2008;76:763–814.

2. Chernozhukov V, Hong H, Tamer E. Estimation and Confidence Regions for Parameter Sets in Econometric Models. Econometrica. 2007;75:1243–1284.

3. Dominitz J, Manski C. Perceptions of Economic Insecurity: Evidence from the Survey of Economic Expectations. Public Opinion Quarterly. 1997;61:261–287.

4. Fischhoff B, Bruine de Bruin W. Fifty-Fifty = 50%? Journal of Behavioral Decision Making. 1999;12:149–163.

5. Goldberger A. A Course in Econometrics. Harvard University Press; Cambridge, Massachusetts: 1991.

6. Horowitz J, Manski C. Nonparametric Analysis of Randomized Experiments with Missing Covariate and Outcome Data. Journal of the American Statistical Association. 2000;95:77–84.

7. Horowitz J, Manski C. Identification and Estimation of Statistical Functionals Using Incomplete Data. Journal of Econometrics. 2006;132:445–459.

8. Hurd M, McFadden D, Merrill A. Predictors of Mortality among the Elderly. In: Wise David., editor. Themes in the Economics of Aging. Chicago: University of Chicago Press; 2001. pp. 171–197.

9. Hurd M, McGarry K. Evaluation of the Subjective Probabilities of Survival in the Health and Retirement Study. Journal of Human Resources. 1995;30:S268–S292.

10. Hurd M, McGarry K. The Predictive Validity of Subjective Probabilities of Survival. Economic Journal. 2002;112:966–985.

11. Hurd M, Smith JP, Zissimopoulos J. The Effects of Subjective Survival on Retirement and Social Security Claiming. Journal of Applied Econometrics. 2004;19:761–775.

12. Imbens G, Manski C. Confidence Intervals for Partially Identified Parameters. Econometrica. 2004;72:1845–1857.

13. Lillard L, Willis R. Cognition and Wealth: The Importance of Probabilistic Thinking. Institute for Social Research, University of Michigan; 2001.

14. Manski C. Partial Identification of Probability Distributions. New York: Springer-Verlag; 2003.

15. Manski C. Measuring Expectations. Econometrica. 2004;72:1329–1376.

16. Manski C, Molinari F. Skip Sequencing: A Decision Problem in Questionnaire Design. Annals of Applied Statistics. 2008;2:264–285.

17. Manski C, Tamer E. Inference on Regressions with Interval Data on a Regressor or Outcome. Econometrica. 2002;70:519–546.

18. Smith VK, Taylor D, Jr, Sloan F. Longevity Expectations and Death: Can People Predict Their Own Demise? American Economic Review. 2001;91:1126–1134.

19. Stoye J. Bounds on Generalized Linear Predictors with Incomplete Outcome Data. Reliable Computing. 2007a;13:293–302.

20. Stoye J. More on Confidence Regions for Partially Identified Parameters. Working Paper, New York University; 2007b.
