Qual Saf Health Care. 2007 June; 16(3): 185–191.
PMCID: PMC2465007

Have Nursing Home Compare quality measure scores changed over time in response to competition?

Abstract

Background

Currently, the Centers for Medicare and Medicaid Services report on 15 Quality Measures (QMs) on the Nursing Home Compare (NHC) website. It is assumed that nursing homes are able to make improvements on these QMs, and in doing so they will attract more residents. In this investigation, we examine changes in QM scores, and whether competition and/or excess demand have influenced these change scores over a period of 1 year.

Methods

Data come from NHC and the On‐line Survey Certification And Recording (OSCAR) system. QM change scores are calculated using values from January 2003 to January 2004. A series of regression analyses are used to examine the association of competition and excess demand on QM scores.

Results

Eight QMs show an average decrease in scores (ie, better quality) and six QMs show an average increase in scores (ie, worse quality). However, for 13 of the 14 QMs these changes averaged less than 1%. The regression analyses show an association between higher competition and improving QM scores, and an association between lower occupancy and improving QM scores.

Conclusion

As would be predicted based on the market‐driven mechanism underlying quality improvements using report cards, we show that it is in the most competitive markets and those with the lowest average occupancy rates that improvements in the QM scores are more likely.

The former Health and Human Services Secretary, Tommy Thompson, recently announced that many nursing homes have improved quality scores on the web‐based Nursing Home Compare (NHC) report card.1 NHC reports information about the number of beds, types of ownership, staffing and quality (given in the form of Quality Measures (QMs)) for every Medicare/Medicaid‐certified nursing home in the USA.2 (Almost all US nursing homes participate in this certification, in order to be eligible for government payments.) By using this information, consumers select a nursing home that best meets their preferences in these areas. However, the intention of the NHC report card was not only to provide consumers with information, but also to promote quality change in the nursing home industry. Therefore, this statement by Tommy Thompson was noteworthy.

The mechanism behind this change in the industry rests on consumers migrating towards higher‐quality facilities, and on nursing homes competing to improve their quality in order to attract potential residents. The influence of NHC need not be realised in practice: the threat of potentially losing future residents may be incentive enough for nursing homes to improve on the quality measures reported in NHC, without any actual loss of business. However, this marketplace mechanism rests on the assumption that nursing homes compete for residents.

In the nursing home industry, some markets experience limited competition between facilities. For example, the 2004 On‐line Survey Certification And Recording (OSCAR) system data (which include almost all nursing homes in the USA) show that 18% of markets in the USA have only one or two nursing homes. Many nursing homes also experience high demand (most notably observed as very high occupancy rates and even waiting lists). The 2004 OSCAR shows that 20% of markets in the USA have facilities with an average occupancy rate of 95%. Thus, the reported average changes in QM scores may mask the more notable changes in some markets. We propose, following the mechanism described above, that nursing homes in markets with more competition or excess supply will respond to NHC by improving their quality.

Understanding whether competition and excess supply influence QM scores may be important. First, this represents the linchpin to the consumer empowerment policies currently being pursued in healthcare by the Bush Administration. (This approach seeks to give consumers information and enable them to choose providers.) Little evidence exists that this represents a tenable approach to complementing the regulatory approach of prior administrations. Second, if the QM scores are sensitive to competition and occupancy (ie, excess supply), then this will probably represent a harbinger of future gains. As policies such as those promoting increased competition in the long‐term care market proceed (such as Home and Community‐based Waivers), and other providers expand (such as assisted living providers), we will probably see changes in many more markets towards increased competition and lower occupancy. In this investigation, we seek to determine whether competition and/or excess supply influence QM scores reported on NHC over a period of 1 year.

Data and methods

Source of data

Data used in this analysis primarily came from the NHC website. The NHC data were downloaded from the website in January 2003 and again in January 2004. This included information from 14 QMs (which included all of the QMs reported during this timeframe). In addition, aggregate resident characteristics and facility characteristics from the OSCAR data were used. The OSCAR is collected yearly for most facilities in the USA as part of the Medicare/Medicaid certification process.3 We used OSCAR information that was collected during 2003, thus matching the time in which the NHC information was collected.

Model specification

The dependent variables used in the analyses were the 14 QMs. These are described in further detail in box 1. For each QM, for each facility, the 2003 score was subtracted from the 2004 score, so that in all cases negative change scores indicate an increase in quality. These change scores were used in the multivariate analyses.
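As a minimal sketch of this construction (the scores below are hypothetical; the actual analysis used the downloaded NHC files):

```python
def change_score(score_2003: float, score_2004: float) -> float:
    """QM change score: the 2004 value minus the 2003 value.

    Because lower QM scores indicate better quality, a negative
    change score indicates an improvement in quality.
    """
    return score_2004 - score_2003

# Hypothetical facility whose pressure-sore QM fell from 16.0% to 15.2%:
# the change score is negative, ie quality improved.
print(change_score(16.0, 15.2))
```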

Box 1 Quality measures used in Nursing Home Compare during 2004

Quality measures

Long‐stay residents

  • Percentage of residents whose need for help with daily activities has increased
  • Percentage of residents with moderate to severe pain*
  • Percentage of low‐risk residents who have pressure sores†
  • Percentage of residents who are physically restrained
  • Percentage of residents who are more depressed or anxious
  • Percentage of low‐risk residents who lose control over their bowels or bladder
  • Percentage of residents who have/had a catheter inserted and left in their bladder
  • Percentage of residents who spend most of their time in bed or in a chair
  • Percentage of residents whose ability to move about in and around their room has become worse
  • Percentage of residents with a urinary tract infection
  • Percentage of residents who have lost too much weight

Short‐stay residents

  • Percentage of short‐stay residents with delirium*
  • Percentage of short‐stay residents with moderate to severe pain
  • Percentage of short‐stay residents with pressure sores

Some quality measures are risk adjusted to take into account resident and facility characteristics. Measures marked * are adjusted for resident‐level characteristics, and measures marked † are adjusted for facility‐level characteristics.

SOURCE: http://www.medicare.gov/NHCompare/Static/Related/DataCollection.asp

The independent variables of interest were competition and excess supply, both derived from the OSCAR data. The Herfindahl index was used as a measure of competition. With the county as the market area, the index was calculated by summing each nursing home's squared share (expressed as a proportion) of the beds in the county. The resulting scores vary from 0 to 1, with higher values representing less competition. This index has been used in several previous nursing home studies (eg, Intrator et al.4). The average occupancy rate in the county was used as a measure of excess supply; the occupancy rate is the number of residents divided by the number of beds, multiplied by 100, giving values from 0 to 100.
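The two market measures can be sketched as follows (the bed and resident counts are hypothetical; the real values come from OSCAR):

```python
def herfindahl(beds_by_facility):
    """Herfindahl index for one county: the sum of each facility's
    squared share (as a proportion) of the county's beds.
    Ranges from near 0 (many equal competitors) to 1 (monopoly)."""
    total_beds = sum(beds_by_facility)
    return sum((beds / total_beds) ** 2 for beds in beds_by_facility)

def occupancy_rate(residents, beds):
    """Occupancy: residents divided by beds, multiplied by 100."""
    return 100.0 * residents / beds

# A one-facility county is a monopoly (index = 1.0), whereas four
# equally sized facilities give 4 x 0.25^2 = 0.25:
print(herfindahl([120]), herfindahl([50, 50, 50, 50]))
print(occupancy_rate(90, 100))
```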

In addition, we included the interaction between occupancy and competition, because markets with both low occupancy and high competition may respond differently from those with just one of these market conditions. We converted the occupancy and competition variables to Z scores, and then multiplied these, to form the interaction score.

In all cases, these independent variables of interest (competition, occupancy and interaction) were recoded as ordinal categorical variables, using the quartile distributions. The coding was performed so that the fourth quartile for competition represented the highest levels of competition and the first quartile for occupancy represented the highest occupancy levels.
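A sketch of this standardisation and quartile recoding, using hypothetical county values and the Python standard library (the actual cut-points came from the observed distributions, and the direction of the coding was then reversed where needed, as described above):

```python
import statistics

def z_scores(values):
    """Standardise a list of values to mean 0, SD 1 (population SD)."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

def quartile_codes(values):
    """Recode each value as its quartile (1-4) in the distribution."""
    ranked = sorted(values)
    n = len(ranked)
    cuts = [ranked[n // 4], ranked[n // 2], ranked[3 * n // 4]]
    return [1 + sum(v > cut for cut in cuts) for v in values]

# Hypothetical county-level Herfindahl and occupancy values:
competition = [0.02, 0.05, 0.10, 0.20, 0.35, 0.50, 0.70, 0.90]
occupancy = [70, 78, 84, 88, 91, 94, 96, 99]
# Interaction: product of the two Z scores, market by market.
interaction = [c * o for c, o in zip(z_scores(competition), z_scores(occupancy))]
print(quartile_codes(competition))
```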

When examining quality using the raw rates of indicators, it is a common practice to use risk adjustment (eg, Mukamel et al,5 Mor et al6). This is because quality indicators may be a function of both actual quality and resident case‐mix and facility characteristics. Risk adjustment controls for resident case‐mix and facility characteristics in the analyses. This may also be the case with the QMs—that is, resident case‐mix and facility characteristics may need to be included in the estimating equations. However, the QMs were either reported with some risk adjustment or were recommended for use with no such adjustment. Nevertheless, in this analysis, we used additional risk adjustment. The rationale behind this is that (1) no gold standard for risk‐adjusted nursing home quality indicators exists5; (2) the risk adjustment of the QMs has been described as limited7; and (3) the QMs are publicly reported, which necessitates use of simple methodologies. As such, the QMs may still be underadjusted. For our analyses, it thus seemed prudent to risk adjust the QMs further. However, we acknowledge that this is probably a conservative approach, and may overcontrol for the impact of competition and occupancy on the QMs.

Aggregate resident factors used as independent variables for risk adjustment include activities of daily living (ADLs), bladder incontinence, bowel incontinence and mental health problems. The ADL score is a facility score (ranging from 0 to 1) based on six OSCAR questions (difficulty with bathing, dressing, toileting, transferring, feeding or walking). Increasing scores indicate a greater average ADL impairment within the facility. The number of residents with bladder and bowel incontinence was used to calculate proportions. The proportions of residents with mental retardation, dementia and psychiatric diagnoses were used as mental health variables.

We controlled for staffing levels within facilities because increased staffing levels will enable individual staff to increase the time they spend in direct resident care, and in turn will benefit residents.8 We included full‐time equivalents per 100 residents of nurse aides, licensed practical nurses and registered nurses, including full‐time, part‐time and temporary staff. We know from other nursing home studies that facility factors (in addition to staffing levels) have a strong impact on quality indicators. Therefore, chain membership, ownership, size and Medicaid census were included as facility level variables.9 Members of a chain were coded as 1 (with non‐chains as 0), for‐profit facilities were coded as 1 (with not‐for‐profit facilities as 0), and Medicaid census was operationalised as a proportion.

Market areas are defined using geographical data in the OSCAR system such that facilities within a given county represent a market. Many studies have used the county as a proxy for the nursing home market (eg, Cohen et al).10 As noted by Banaszak‐Holl et al,11 the county may be a reasonable approximation of the market for nursing home care, given patterns of funding and resident origin. For example, federal block grant funds for long‐term care services are distributed at the county level. Furthermore, Gertler12 found that 75% of residents in New York facilities had previously lived in the county where the home was located. Similarly, Nyman13 found that 80% of residents in Wisconsin facilities chose a nursing home located in the county in which they had resided before entering the home.

Analyses

We first combined the data from both years using the facility identification number, and then subtracted each facility's 2003 QM score from its 2004 QM score, in order to explore the changes between the 2 years. We present descriptive analyses including the mean QM scores in each year and the percentage of facilities with either increasing or decreasing scores for each of the 14 QMs. We also include the relative change for each measure; these relative change scores are calculated because they more appropriately account for the changes seen in the narrow range of scores for most QMs.
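Using hypothetical mean scores, the relative change described above can be computed as:

```python
def relative_change(mean_2003, mean_2004):
    """Change between years as a percentage of the 2003 (baseline)
    mean; negative values indicate improvement. This rescaling makes
    movement on QMs with narrow score ranges easier to compare."""
    return 100.0 * (mean_2004 - mean_2003) / mean_2003

# Hypothetical: a QM falling from a 16.0% rate to 9.6% is a 6.4-point
# absolute change, but about a 40% relative improvement.
print(relative_change(16.0, 9.6))
```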

Second, logistic regression models were used to examine the influence of competition and excess supply on the QM scores. In these models, the adjusted odds of improving QM scores are estimated relative to the odds of showing worsening or unchanged QM scores, controlling for competition, occupancy, aggregate resident factors and facility characteristics. Thus, the QM scores in these analyses were recoded as binary variables, with a negative change score (improved quality) coded as 1 and a positive or zero change score (worsening or unchanged quality) coded as 0.
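The binary recoding, and the way a fitted logistic coefficient translates into the adjusted odds ratios reported later, can be sketched as follows (the coefficient value here is hypothetical, not taken from the paper's models):

```python
import math

def improved(change_score):
    """Binary outcome for the logistic models: 1 if the QM change
    score is negative (quality improved), else 0 (worse or unchanged)."""
    return 1 if change_score < 0 else 0

def adjusted_odds_ratio(beta):
    """Exponentiating a logistic regression coefficient gives the
    (adjusted) odds ratio for a one-category contrast."""
    return math.exp(beta)

print([improved(s) for s in [-0.5, 0.0, 1.2]])  # [1, 0, 0]
# A hypothetical coefficient of 0.19 corresponds to an AOR of about
# 1.21, ie 21% higher odds of improvement:
print(round(adjusted_odds_ratio(0.19), 2))  # 1.21
```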

Results

Table 1 shows the mean scores for each of the QMs in 2003 and 2004, together with the average change in these scores. Eight measures showed an average decrease in scores (ie, better quality) and six measures showed an average increase in scores (ie, worse quality). However, for 13 of the 14 QMs, these changes averaged <1%. For example, the difference between the 2004 and 2003 scores for the percentage of residents who lost control over their bowels or bladder was −0.11%, whereas the difference for the percentage of low‐risk residents who had pressure sores was 0.79%.

Table 1 Means and differences of all facilities on each quality measure in 2003 and 2004

As shown in table 2, the percentage of facilities with decreasing QM scores (ie, better quality) is, for most measures, greater than the percentage with increasing scores (ie, worse quality). For example, 38% of facilities had decreasing scores for the percentage of residents with moderate to severe pain, whereas 35% had increasing scores. Facilities that decreased their QM scores did so by approximately 5%, on average.

Table 2 Facilities with increasing and decreasing scores between 2003 and 2004 on each quality measure

Table 3 shows the results from the 14 logistic regression models used to examine the influence of competition and occupancy on the QM scores. The results compare the high‐competition quartile with the low‐competition quartile; the high‐occupancy quartile with the low‐occupancy quartile; and the high interaction‐score quartile (representing high competition and low occupancy) with the low interaction‐score quartile. The competition quartile cut‐points are 0.04, 0.13 and 0.31, and the occupancy quartile cut‐points are 77%, 89% and 95%.

Table 3 Multivariate logistic model results examining the influence of competition and occupancy on quality measure scores

In markets with high competition, five QMs showed significantly higher adjusted odds ratios (AORs), indicating an association between high competition and improving QM scores. Some of these associations were substantial. For example, facilities in highly competitive markets (compared with those in low‐competition markets) were 21% more likely to improve their QM score for the percentage of short‐stay residents with moderate to severe pain.

In markets with lower occupancy rates, seven QMs showed significantly lower AORs, indicating an association between low occupancy and improving QM scores. Again, some of these associations were substantial. For example, facilities in low‐occupancy markets (compared with those in high‐occupancy markets) were 27% more likely to improve their QM score for the percentage of short‐stay residents with moderate to severe pain.

In markets with both low occupancy rates and high competition, five QMs showed significantly lower AORs, indicating an association between the occupancy × competition interaction and improving QM scores. As with the previous findings, some of these associations were substantial. For example, facilities in low‐occupancy, high‐competition markets (compared with those in high‐occupancy, low‐competition markets) were 25% more likely to improve their QM score for the percentage of short‐stay residents with moderate to severe pain.

Discussion

Nursing home regulations probably serve a necessary function in helping to promote quality in nursing homes. However, the regulatory system set up to promote quality has many well‐publicised faults,14 and the regulatory approach tends to establish a quality floor rather than promote quality improvement. Quality improvement is nevertheless badly needed, as quality problems in many nursing homes are chronic. For at least some providers, then, the current regulatory system is a necessary but not sufficient mechanism for quality improvement. Report cards (such as NHC) are a relatively new market‐based supplement to the regulatory model. However, we know very little about nursing home report cards and the impact they have on consumers and providers.

With the exception of a recent General Accounting Office report2 describing the federal NHC initiative and four recent descriptive articles,15,16,17,18 we could not identify any research on nursing home report cards. This descriptive literature would suggest that nursing home administrators are aware of NHC, and that a majority of nursing homes are implementing quality improvement efforts as a direct response to NHC.15 What has received less attention in the research literature, so far, is whether nursing homes are making improvements in QMs, and whether factors such as competition and occupancy rates are associated with any improvements.

The only information we could identify addressing these issues comes from the Centers for Medicare and Medicaid Services (CMS). (CMS is the Federal agency that oversees nursing home inspections and the NHC website.) CMS maintains that progress has been made in improving the quality of nursing homes in that some QM scores have declined over time.19 However, the details provided by CMS are incomplete, and it is not clear whether the results reported are representative of all nursing homes.

In this investigation, our results are mixed. On the one hand, we clearly found that many nursing homes have improved their scores on the NHC QMs. For example, 44% of facilities reduced the incidence of pressure sores by about 7 percentage points; because the initial average pressure sore rate in these facilities was 16%, this represents a large relative improvement of about 40%. Thus, some of these changes were considerable. On the other hand, we also found a considerable number of facilities with increasing QM scores (ie, getting worse). Thus, overall, the changes seen in NHC QMs were positive but relatively small.

The aggregate mechanism underlying the potential outcomes expected when report cards are used is described in the consumer choice model. In the consumer choice model, consumers are given relevant information, so that they can make informed choices.20 Most often, this information consists of cost and quality. In turn, the market should respond by competing on both cost and quality, with probable decreases in costs and/or increases in quality. However, it would seem reasonable to assume that the results of such market‐based initiatives depend on the markets in which they operate (in this case, more effects should be identified in the most hostile markets)—hence our hypothesis that nursing homes in markets with more competition or excess supply will respond to NHC by improving their quality. Our results clearly show that this is indeed the case. Competition, occupancy and the interaction between the two would seem to promote improvements in QM scores.

Moreover, the pattern of our findings is also of interest. We found consistently significant findings for the short‐stay QMs. This is of interest because short‐stay residents are most often paid for by Medicare. (Medicare most often pays for rehabilitation, whereas Medicaid pays for longer term care.) In general, Medicare rates are higher, and thus more lucrative, than Medicaid rates, which primarily pay for long‐stay residents. Thus, nursing homes tend to compete for these Medicare residents. Our finding that nursing homes in high‐competition and low‐occupancy areas are improving their short‐stay QM scores would thus also be expected as a market reaction. Moreover, facilities may be most able to make gains in these areas because of the high resident turnover. Facilities can implement quality improvement initiatives, see whether they work, and refine the process on the next cohort of short‐stay residents.

Changes in the nursing home market over the past 10 years have included increasing competition from providers such as assisted living, and lower occupancy rates.21 Moreover, these trends have been actively fostered by both state and federal governments, eager to reduce their nursing home bills. For example, in 1981, Congress passed the Home and Community‐based Services 1915c Waiver, in order for “States to have the flexibility to develop and implement creative alternatives to placing Medicaid‐eligible individuals in hospitals, nursing facilities or intermediate care facilities” (www.hcfa.org/medicaid/hpg4). Although this waiver was originally designed for those with mental retardation, newer state initiatives have incorporated language into waiver proposals in order to steer older people away from nursing homes. Moreover, more generic “ageing‐in‐place” initiatives also promote the use of home services or services such as residential care, and attempt to delay the use of nursing homes.22

There are about 600 000 beds in assisted living facilities, most of which are occupied. The number of assisted living facilities is around 12 000 nationwide, and they generate about US$15 billion per year in revenue.23 The number of facilities has grown rapidly over the past 10 years, with more than half of the facilities in business for <10 years, and one‐third in business for <5 years. Assisted living tends to cater to private‐pay residents, which only serves to heighten the competitive pressures nursing homes face for attracting other residents. That is, if assisted living facilities attract private‐pay residents from nursing homes, then nursing homes are likely to compete for the remaining private‐pay residents and Medicare residents. Assisted living is almost certainly partly responsible for the drop in occupancy rates seen in nursing homes.

These market pressures (Home and Community‐based Services, ageing‐in‐place and assisted living) will probably continue to expand. As they do so, our results would suggest that we will probably see further improvements in the QM scores on NHC.

Limitations

We know that nursing home quality is a multidimensional construct, with a wide array of available quality measures. NHC includes only a small subset of these measures; thus nursing homes could have increasing or decreasing quality in many other areas, and this would not be reflected on the website.

In table 2, the number of facilities matched between the 2003 QMs and the same QMs in 2004 varies from 6105 to 12 024, because rates are needed in both years to calculate the difference scores. NHC does not report values for facilities with low observed numbers for each QM (ie, <30 cases for the long‐stay QMs and <20 cases for the short‐stay QMs). This probably caused many missing observations in NHC and accounts for much of our matching rate. In addition, some facilities could have closed during the period of observation; however, recent research suggests that only 0.7% of facilities close each year.24 Thus, closures probably had a minor impact on our match rate.

Additionally, the changes we see in the NHC QMs may not necessarily be attributable to the publishing of the report card. Many facilities could have changed their scores, irrespective of NHC. Many other initiatives to improve quality exist, such as those developed by Quality Improvement Organizations and by facilities themselves. These may also influence the QMs.

With approximately as many facilities with increasing QM scores as there are facilities with decreasing QM scores, it is possible that we are simply seeing regression to the mean in the descriptive analyses. However, in the multivariate analysis, our significant findings would seem to indicate that such random variation is not responsible for all of the changes seen.

The OSCAR data used may also have some limitations. For example, a small number of facilities (n = 640) had occupancy rates of <30%. These may represent errors in the data, or recently opened facilities. Therefore, they were removed from our analyses. However, this approach of determining whether low‐occupancy rates represent errors in the data, or not, represents a best guess on our part. Nevertheless, the number of facilities involved is small and the reported results are unaffected by these exclusions.

In our estimation approach, we used logistic regression, which required creating a dichotomous dependent variable. The advantage of this approach is that it provides readily interpretable odds ratios (ORs); the disadvantage is that information is lost in the transformation of the dependent variable. Nevertheless, preliminary analyses with least squares regression produced similar significant findings, hence our choice to report ORs. Our independent variables of interest (competition, occupancy and interaction) were specified as ordinal categorical variables, which enables an easier interpretation of the ORs. However, we also acknowledge that the results presented compare the highest and lowest quartiles, which maximises the contrasts and yields the most significant findings.

There are a number of alternative approaches to defining market areas; however, these are mostly based on migration data, which are not available a priori. An example is Makuc's market definitions of hospital service areas. This previous work has defined healthcare service areas on the basis of Medicare data on travel patterns between counties for routine hospital care.25 Also, other studies have used variable‐radius measures of local competition to define markets.26 Similar approaches may be especially pertinent to our study, given the recent research by Zwanziger et al27 identifying some measurement error in nursing home studies using counties as market areas. However, the fact remains that national data on patient flows for nursing homes do not yet exist.

Although the Herfindahl index is the most commonly used measure of competition in nursing home research, it may prove instructive to examine other indexes of competition. Schramm and Renn28 note that the Herfindahl index was developed for industrial and retail settings, and thus may be less sound for healthcare facilities.

We are sensitive to these risk‐adjustment issues inherent to NHC. The QMs were initially created for use with various resident and facility risk adjustments. This approach was criticised, and was changed slightly to include only some risk adjustment for some QMs.2 For our analyses, it is therefore questionable whether risk adjustment should be used in the estimating equations. Preliminary analyses using models with no risk adjustment, resident adjustment and facility adjustment did influence the AORs, but did not greatly influence the significance levels (results not shown). Because we cannot ignore the possibility that our results depend on the risk‐adjustment method used, we report analyses using risk‐adjustment models that include resident and facility characteristics, which provide the most conservative effects for the variables of interest.

Finally, nursing homes may simply be getting better at completing the minimum data set (MDS) that is used to construct the NHC QMs; reporting may have changed while actual quality did not. Several authors have noted that the MDS is generally reliable,29 and others have used these data for research and quality improvement purposes.30,31 Nevertheless, it has also been reported that these data may have reliability issues.32,33 This potential bias is clearly important in examining NHC QMs. Still, it is unclear how this bias, if it exists, would have influenced our results. It would tend to reduce the observed change scores, and may be more pronounced for some QMs than others. If the bias in the MDS is random, and not associated with either competition or occupancy rates, then our regression results would probably not be affected. However, the fact remains that we know little about bias arising from this data source.

Key points

  • Nursing homes are somewhat responsive to public reporting of quality indicators—that is, over time, they seek to improve their quality scores. However, this does vary, based on the market in which the nursing homes operate.

Conclusion

The NHC report card was introduced with some trepidation by the nursing home industry. However, some recent research has indicated acceptance by nursing home administrators, to the degree that a majority thought they would use the information for quality improvement purposes.17 We found that this has resulted in improvements in quality in some nursing homes, albeit small ones. Moreover, as could be predicted on the basis of the market‐driven mechanism underlying quality improvements using report cards, we showed that improvements in the QM scores are more likely in the most competitive markets and in those with the lowest average occupancy rates.

Abbreviations

ADL - activities of daily living

AOR - adjusted odds ratio

CMS - Centers for Medicare and Medicaid Services

NHC - Nursing Home Compare

OSCAR - On‐line Survey Certification And Recording

QM - quality measure

Footnotes

Competing interests: None declared.

References

1. Mantone J. Public knowledge. Mod Healthcare 2005;35:12.
2. General Accounting Office. Nursing homes: public reporting of quality indicators has merit, but national implementation is premature. Washington, DC: General Accounting Office, 2002.
3. Castle NG. Nursing homes with increasing and decreasing restraint use. Med Care 2000;38:1154–1163.
4. Intrator O, Castle NG, Mor V. Facility characteristics associated with hospitalization of nursing home residents: results of a national study. Med Care 1999;37:228–237.
5. Mukamel DB. Risk adjusted outcome measures and quality of care in nursing homes. Med Care 1997;35:367–385.
6. Mor V, Berg K, Angelelli J, et al. The quality of quality measurement in US nursing homes. Gerontologist 2003;43(Spec Issue II):37–46.
7. National Quality Forum. National voluntary consensus standards for nursing home care: a consensus report. Report no NQFCR‐06‐04. Washington, DC: US Department of Health and Human Services, 2004.
8. Harrington C, Zimmerman D, Karon SL, et al. Nursing home staffing and its relationship to deficiencies. J Gerontol B Psychol Sci Soc Sci 2000;55B:S278–S287.
9. Spector W, Takada HA. Characteristics of nursing homes that affect resident outcomes. J Aging Health 1991;3:427–454.
10. Cohen JW, Spector WD. The effect of Medicaid reimbursement on quality of care in nursing homes. J Health Econ 1996;15:23–48.
11. Banaszak‐Holl J, Zinn JS, Mor V. The impact of market and organizational characteristics on nursing home service innovation: a resource dependency perspective. Health Serv Res 1996;31:97–118.
12. Gertler PJ. Subsidies, quality, and the regulation of nursing homes. J Public Econ 1989;38:33–52.
13. Nyman JA. The effects of market concentration and excess demand on the price of nursing home care. J Ind Econ 1994;42:193–204.
14. Walshe K, Harrington C. Regulation of nursing facilities in the United States: an analysis of resources and performance of state survey agencies. Gerontologist 2003;42:475–486.
15. Castle NG, Lowe TJ. Report cards and nursing homes. Gerontologist 2005;45:48–67.
16. Mukamel DB, Spector WD. Quality report cards and nursing home quality. Gerontologist 2003;43(Spec no 2):58–66.
17. Castle NG. Administrators' opinions of nursing home compare. Gerontologist 2005;45:299–308.
18. Harrington C, O'Meara J, Kitchener M, et al. Designing a report card for nursing facilities: what information is needed and why. Gerontologist 2003;43(Spec Issue II):47–57.
19. Centers for Medicare and Medicaid Services. Progress in nursing home quality. http://www.cms.hhs.gov/quality/nhqi/NHprogressrpt.pdf (2004).
20. Hibbard JH, Slovic P, Jewett MW. Informing consumer decisions in health care: implications from decision‐making research. Milbank Q 1997;75:395–414.
21. Bishop CE. Where are the missing elders? The decline in nursing home use, 1985 and 1995. Health Affairs 1999;18:146–242.
22. Chapin R, Dobbs‐Kepper D. Aging in place in assisted living: philosophy versus policy. Gerontologist 2001;41:43–50.
23. Hawes C, Phillips CD, Rose M, et al. A national survey of assisted living facilities. Gerontologist 2003;43:875–882.
24. Castle NG. Closure of nursing homes and quality of care. Med Care Res Rev 2005;62:111–132.
25. Makuc DM, Haglund B, Ingram DD, et al. The use of health service areas for measuring provider availability. J Rural Health 1991;7:347–356.
26. Phibbs CS, Mark DH, Luft HS, et al. Choice of hospital for delivery: a comparison of high‐risk and low‐risk women. Health Serv Res 1993;28:201–222.
27. Zwanziger J, Mukamel DB, Indridason I. Use of resident‐origin data to define nursing home market boundaries. Inquiry 2002;39:56–66.
28. Schramm CJ, Renn SC. Hospital mergers, market concentration, and the Herfindahl‐Hirschman Index. Emory Law J 1984;Fall:869–888.
29. Mor V. Improving the quality of long‐term care with better information. Milbank Q 2005;83:1–20.
30. Lee RH, Wendling L. The extent of quality improvement activities in nursing homes. Am J Med Qual 2004;19:255–265.
31. Mor V. A comprehensive clinical assessment tool to inform policy and practice: applications of the minimum data set. Med Care 2004;42(Suppl III):50–59.
32. Schnelle JF, Bates‐Jensen BM, Levy‐Storms L, et al. The minimum data set prevalence of restraint quality indicator: does it reflect differences in care? Gerontologist 2004;44:245–255.
33. Schnelle JF, Bates‐Jensen BM, Chu L, et al. Accuracy of nursing home medical record information about care‐process delivery: implications for staff management and improvement. J Am Geriatr Soc 2004;52:1378–1383.

Articles from Quality & Safety in Health Care are provided here courtesy of BMJ Group