Micro-costing studies collect detailed data on resources utilized and the value of those resources. Such studies are useful for estimating the cost of new technologies or new community-based interventions, for producing estimates in studies that include non-market goods, and for studying within-procedure cost variation.
The objectives of this paper were to (1) describe basic micro-costing methods, focusing on quantity data collection; and (2) suggest a research agenda to improve the methods and interpretation of micro-costing.
Examples in the published literature were used to illustrate steps in the methods of gathering data (primarily quantity data) for a micro-costing study.
Quantity data collection methods that were illustrated in the literature include the use of (1) administrative databases at single facilities, (2) insurer administrative data, (3) forms applied across multiple settings, (4) an expert panel, (5) surveys or interviews of one or more types of providers, (6) review of patient charts, (7) direct observation, (8) personal digital assistants, (9) program operation logs, and (10) diary data.
Future micro-costing studies are likely to improve if research is done to compare the validity and cost of different data collection methods; if a critical review is conducted of studies done to date; and if the results of those two efforts are combined to develop guidelines that address common limitations, critical judgment points, and decisions that can reduce limitations and improve study quality.
Micro-costing studies involve the “direct enumeration and costing out of every input consumed in the treatment of a particular patient.”1 Micro-costing studies can be contrasted with gross costing studies, in which average costs of events (a unit, such as a hospitalization, that is large relative to the intervention being analyzed) are assigned using regional or national data.1 The resources on which micro-costing studies gather detailed data include personnel hours, square feet of office space, miles driven, and supplies used.
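To make the enumerate-and-value definition concrete, the core calculation can be sketched in a few lines of Python; all resource names, quantities, and unit costs below are hypothetical illustrations, not values from any study cited here.

```python
# Minimal sketch of the core micro-costing calculation: enumerate each
# resource consumed for one patient and multiply quantity by unit value.
# Every figure here is an assumed illustration.

resources = [
    # (resource, quantity, unit, unit_cost_in_dollars)
    ("nurse time", 1.5, "hours", 38.00),
    ("office space", 0.5, "square-foot-days", 0.12),
    ("miles driven", 12.0, "miles", 0.55),
    ("supply kits", 1.0, "kits", 23.40),
]

# Total cost per patient is the sum over all inputs of quantity * unit cost.
total_cost = sum(quantity * unit_cost for _, quantity, _, unit_cost in resources)
print(f"Micro-costed total per patient: ${total_cost:.2f}")
```

Keeping quantities and unit costs as separate fields, as in the tuple above, mirrors the paper's later recommendation to report quantities and prices separately so readers can substitute local values.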
Micro-costing results can be presented as stand-alone studies or used directly as inputs into cost-benefit or cost-effectiveness analyses. Micro-costing data at the individual level are relevant for describing the precision of results in cost-effectiveness studies. Micro-costing should be considered a first choice when (a) the intervention or treatment is new, so that there has been no opportunity to calculate an average cost, (b) the objective of a study is to examine within-procedure variation, and (c) the study methods include the incorporation of non-market goods for which standardized cost estimates are less likely to exist. Community health programs are an example of a health intervention bringing all these elements together. Micro-costing is ideal for studying these programs because new interventions are being developed on a regular basis, such services are not always reimbursed, and costs are rarely calculated for payment purposes.
In the past fifteen years, micro-costing studies have used a wide range of quantity and price data collection methods.2-15 This paper will (1) describe basic micro-costing methods, focusing on quantity data collection; and (2) suggest a research agenda to improve the methods and interpretation of micro-costing.
A number of terms are important for understanding micro-costing analyses. Economic costs, or opportunity costs, are measured by the value of a resource in its next best alternative use. For many goods and services this is the market price. As a result of market imperfections (including market power and third-party payment systems), however, the opportunity cost for health care resources is rarely the market price, and use of the market price would lead to an overestimate of the cost.
The cost of time also is characterized by the net benefit of its next best use. Choices about public health interventions can involve tradeoffs between using staff time to travel into communities and having the targets of the intervention come to a relatively centralized location (i.e., the value of the time of patients and caregivers who may be unemployed, and whose time is therefore not easily valued in the market, must be considered). Failing to value the time spent by individuals who are not employed can lead to an underestimate of the costs of an intervention and an inappropriate allocation of resources to the intervention. This applies to any resource that is not compensated, whether patient time, informal care provider time, or volunteer time.
The perspective indicates whose costs and benefits are incorporated in the analysis. Two common perspectives are the societal perspective in which all costs and benefits are captured and the perspective of the provider of the health care services. Research using a societal perspective should include the most comprehensive list of resources possible in the cost calculation. Research using more limited perspectives is likely to include only a subset of the costs. Individual stakeholders are likely to respond to only their own incentives but failure to include all costs can lead to societal misallocation of resources.
Costs can be characterized by how they are related to production processes. Variable costs depend on the quantity of output; for example, the variable costs of a public health home visitation program encouraging longer breast-feeding duration among low income women depend on whether the program serves 20 or 40 low income mothers each week. In contrast, fixed costs remain the same regardless of the quantity of output produced; for example, the cost of setting up a group breastfeeding program does not vary with the number of attendees. In the long run there are no fixed costs; in other words, all costs can be changed and a program or intervention can even be discontinued. In the short run, costs can remain fixed even if the program ceases to operate.
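The fixed/variable distinction can be illustrated with a minimal sketch using the home visitation example; the dollar figures are assumptions for illustration only, not program data.

```python
# Illustrative sketch (hypothetical numbers): weekly program cost splits
# into a fixed component that does not vary with output and a variable
# component that scales with the number of mothers served.

FIXED_COST_PER_WEEK = 500.0      # assumed coordination/space cost, constant
VARIABLE_COST_PER_MOTHER = 45.0  # assumed per-visit staff time, mileage, supplies

def weekly_cost(mothers_served: int) -> float:
    """Total weekly cost = fixed + variable * output."""
    return FIXED_COST_PER_WEEK + VARIABLE_COST_PER_MOTHER * mothers_served

for n in (20, 40):
    total = weekly_cost(n)
    print(f"{n} mothers: total ${total:.0f}, average ${total / n:.2f} per mother")
```

Note that the average cost per mother falls as output rises, because the fixed component is spread over more participants; this is one reason reporting only an average cost can mislead readers operating at a different scale.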
Joint costs are accrued when multiple outputs are produced by the same production process. Failure to accurately separate the costs of multiple goods produced by the same process can lead to an overestimate of the costs. If the costs were sufficiently high, this could result in a failure to allocate resources to an activity that might be considered efficient if costs were allocated properly.
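One common allocation rule, splitting a joint cost across outputs in proportion to each output's direct costs, can be sketched as follows; the output names and dollar amounts are hypothetical and the proportional rule is only one of several defensible conventions.

```python
# Hypothetical sketch of proportional joint-cost allocation: a shared
# cost (e.g., clinic space and administration used by two services) is
# divided in proportion to each output's directly attributable costs.

direct_costs = {"screening": 6000.0, "counseling": 4000.0}  # assumed values
joint_cost = 2500.0  # assumed shared cost of the common production process

total_direct = sum(direct_costs.values())
allocated = {
    output: direct + joint_cost * direct / total_direct
    for output, direct in direct_costs.items()
}
print(allocated)
```

Charging the full joint cost to either output alone would overstate that output's cost, which is the overestimate the paragraph above warns about.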
Capital costs are related to goods that are not used up within one year. These can include items like computers, office space, or vehicles, and the term is common across multiple types of costing studies. Other sources provide excellent tutorials on actually implementing capital costing.16 The key comparative advantage of micro-costing capital is the same as its advantage with respect to other resources: the greater detail with which the unit costs of capital can be applied.
Another term to consider for long-term studies is discounting, or the process of applying lower weights to future changes in costs than to current changes in costs. Other sources offer complete descriptions of the process.16 In some cases, micro-costing studies will use discounting to apply lower weights to future changes in resource utilization. Collecting detailed micro-costing data over more than one year can pose a budgetary and logistical challenge, although the detail that would be available would be excellent for study purposes. However, in many cases, tying the short-run outcomes to long-run outcomes would not necessarily involve micro-costing and would require modeling that is similar to any other cost study.
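As a worked illustration of the discounting process, the following sketch applies the standard present-value weight 1/(1+r)^t; the 3% rate and dollar amount are illustrative assumptions, not values from this paper.

```python
# Sketch of standard discounting: a cost change occurring t years in the
# future is weighted by 1 / (1 + r)**t, so it counts for less than the
# same change occurring today. The 3% rate is an assumed illustration.

DISCOUNT_RATE = 0.03

def present_value(future_cost: float, years_ahead: int,
                  rate: float = DISCOUNT_RATE) -> float:
    """Weight a future cost change down to its present value."""
    return future_cost / (1.0 + rate) ** years_ahead

# A $1,000 cost incurred 5 years from now, valued today:
print(f"${present_value(1000.0, 5):.2f}")
```

The farther in the future a resource change occurs, the smaller its weight, which is why multi-year micro-costing studies must record when, not just whether, resources are consumed.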
Micro-costing studies are guided by economic theory and involve the collection of detailed data on both the quantity of resources and the value of those resources. Micro-costing studies are often conducted at a single center or a small number of centers because of the substantial resources and coordination needed to collect detailed resource utilization data. However, conducting studies at a limited number of facilities creates challenges in translating the results to other contexts. One challenge is the potential need to adjust labor inputs to account for any differences in worker qualifications between the new setting and the original setting. Another is the need to account for the leadership style of the program manager at the original test site; if this cannot be replicated, the likelihood of success will often be substantially lower. To make results more usable for readers, quantities and prices should be reported separately, so that readers can adjust any values to better fit their local decision-making context.
Table 1 summarizes information on 14 studies that were chosen to illustrate methods of gathering quantity data. The studies are listed in chronological order including the year of publication and location from which the data are obtained, the disease or treatment that was being studied, notes on the methods for quantity and resource value data collection, and the primary objective of the study. The subsections below indicate the method of quantity data collection being discussed.
One study reported “tracing” resources used.15 This study gathered data on personnel, medication, and equipment to assess costs at a university in Belgium, but it is impossible to determine more about what the authors did because they did not clearly describe their method of measuring quantities. The remaining studies discussed here made clear which of ten different quantity data collection methods was used.
Two studies obtained resource quantity data from an item-by-item database that was set up to collect data on all resources used for common types of surgeries; the studies actually reported on laparoscopic cholecystectomy and pancreatoduodenectomy.12,13 The methods sections of these papers did not describe the effort required to enter the data, so it is not possible to assess whether a database set up specifically to track resources for higher-volume surgeries would be practical. A reader can only assume that the values assigned were what the hospital was required to pay. At present, the use of a single facility database would be better facilitated by a commercial cost accounting system designed to link resources to specific clinical events rather than by a database covering a small number of surgeries. These two earlier studies also did not provide any information on collecting overhead-related resources or adjusting values to reflect overhead costs; this, too, would be more easily accomplished with the cost accounting systems that exist today.
A report on the costs related to rheumatoid arthritis in Germany also used administrative data.8 The data came from insurers and sick funds rather than hospitals, and values were assigned based on a uniform valuation system used to determine what should be paid. This is appropriate for a micro-costing analysis from an insurer’s perspective, although the insurer’s perspective does not necessarily capture even the entire amount that is paid for the service, and the generalizability of costs from this perspective is likely limited. Not all administrative databases (particularly in countries with global budgets) will provide sufficiently granular detail to facilitate micro-costing studies.
A study of appendectomies in multiple European countries employed a standardized reporting template.2 This form was used to collect data on resources and prices including a detailed list of overhead items. This type of system can be useful when data collection from multiple centers is used to achieve greater external validity or to achieve a necessary sample size.
One study used an expert panel with a modified Delphi approach.11 This approach contrasts with gathering data on specific patients, as suggested by the definition cited at the start of this manuscript. This survey of experts about respiratory distress syndrome followed a literature review that was conducted to identify the resources likely to be used. The expert panel included eight providers who each had experience with at least 25 children with respiratory distress syndrome per year. If it were possible to observe this many treatments, the internal validity of the results would not be questioned. However, as with any study relying on recall, this study’s primary weakness in gathering data on resource quantities is the potential for recall bias. Resources were valued at average wholesale prices and prevailing reimbursement rates rather than at levels more specific to the providers. No mention was made of overhead costs, and the perspective appeared to be that of the facility.
A study of the costs of cochlear implantation in the United Kingdom involved a survey of clinical coordinators at sixteen centers.9 The survey was planned based on a combination of prior reports and discussions with clinicians. The coordinators were asked to describe the “profile of care”. The profile allowed the analysts to assess the quantities of staff, accommodations, equipment, incidentals, inpatient care, the implant devices, and adverse events. Values were placed on these items by using salaries (although salary alone is not compensation) and purchase prices. Overhead costs were not included in the estimate. The method only indirectly obtained costs at the individual patient level (by dividing total resources by the number of treatments), but the survey structure is likely to produce high quality data.
Another survey-based micro-costing study used surveys and letters to department managers to assess the costs associated with steps taken to control an outbreak of Klebsiella pneumoniae in a neonatal intensive care unit.7 This study also reported using minutes from interdisciplinary meetings in which the use of resources was discussed. The values placed on resources used in this case were specific to the hospital in the study. Since this is a case study, the external validity may not be high.
Another survey-based study focused on cancer genetic services in the United Kingdom.6 Interviews with team leaders were used to define pathways of care and surveys of laboratory staff were used to assess the costs related to laboratory procedures. Values were assigned based on information from the National Health Service (NHS) finance department. This study allowed for the costs per procedure to be more than the minimum costs as many of the laboratories are small. The transparency of the results allows readers to apply other values (perhaps of more efficient facilities) to the resources to study the sensitivity of the conclusions that might be drawn to changes in the resource costs.
Surveys have also been used to study the costs associated with dental fillings.3 In this case, dentists were asked to provide information about the last ten patients matching a vignette. Dentists also had the option of providing information on an average patient. Providing information on an average patient is more likely to be subject to recall bias than providing data that are specific to the last ten patients. This survey actually asked about overhead related resources. Resources were valued using average salaries (again, salaries alone are not full compensation) and price data from registries and manufacturers.
A study of the costs of in vitro fertilization and intracytoplasmic sperm injection also used interviews.5 This study was different from the others because providers at several levels (including gynecologists, nurses, and lab personnel) were interviewed. Physicians and nurses were surveyed about the procedures. Lab personnel survey responses were combined with administrative data to obtain laboratory costs. A pharmaceutical regimen was assumed and hospital records were used to assess the cost of complications. The resources were valued using average salaries (not full compensation), Dutch wholesale prices, equipment costs, and hospital reference costs. The stages of care provide a clear framework for presenting the results. Having a clear framework or sequence of events can facilitate a coherent presentation of the results.
Review of patient charts was used in a study on the treatment of glaucoma patients.10 The researchers used a pre-tested form that included a list of relevant resources. This method allowed the researchers to obtain data about outpatient events, diagnostic tests, medications, and a variety of other glaucoma-specific interventions. This method provided a way to differentiate costs associated with glaucoma from other costs and attempted to economize on data collection by focusing only on items that make a large difference in the costs experienced. Resource values were assigned at the procedure level by performing small time and motion studies to obtain quantities of resources within the procedure and then assigning the value of each resource based on average salaries (not full compensation) for providers and equipment costs. There was no collection of overhead-related resources, but the valuation of hospital days (relevant for some treatments) included an adjustment for overhead. Medication costs were obtained from an official price list. This study is an example of how different micro-costing methods can be linked in a single study.
Direct observation was used in a study of the costs of colonoscopy.4 Direct observation occurred at multiple centers. This was combined with interviews with clinical personnel and patient surveys that were included because the study was conducted from a societal perspective. Direct observation was performed by a study nurse who accompanied all study patients during the care received. The patient experiences were divided into caregiver and coordinating “transactions”. The nurse was responsible not only for recording times but also for recording quantities of medications used and wasted and the equipment used. Fixed costs and capital and equipment costs were obtained by consulting with hospital finance departments. Hospitals also provided information on salaries (but not full compensation). Miles driven by the patient (and anyone accompanying them) were valued, and the time spent by patients and caregivers was valued. Among the reviewed studies, this study uniquely adjusted for the degree to which facilities were operating at capacity (when stated at all, others assumed that facilities were operating at capacity). If a unit is operating below capacity, individuals are paid for some time during which they are not occupied. This suggests that the time they are productive is worth more than the compensation rate (i.e. wages or salary and benefits and taxes paid on workers’ behalf), so the actual cost is adjusted upward from the compensation rate. This study is exemplary in its micro-costing approach to the assessment of societal costs.
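The capacity adjustment described in that study can be sketched as follows; the compensation rate and utilization figure below are hypothetical, and the simple divide-by-utilization rule is an illustration of the logic rather than the authors' exact method.

```python
# Hedged sketch of a capacity adjustment: when a unit operates below
# capacity, staff are paid for some unoccupied time, so the cost of a
# *productive* hour exceeds the compensation rate. Numbers are assumed.

def cost_per_productive_hour(compensation_per_hour: float,
                             capacity_utilization: float) -> float:
    """Adjust the hourly compensation rate upward for idle capacity.

    compensation_per_hour: wages plus benefits and payroll taxes.
    capacity_utilization: fraction of paid time spent on production (0-1].
    """
    return compensation_per_hour / capacity_utilization

# At $40/hour compensation and 80% utilization:
print(cost_per_productive_hour(40.0, 0.80))
```

At full capacity (utilization of 1.0) the adjustment disappears, which matches the other reviewed studies' implicit assumption that facilities were operating at capacity.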
Direct observation was also used in studying Chlamydia screening in the United Kingdom.14 As screening for Chlamydia is an ongoing process, this study demonstrates the decision that must be made about when to collect data on the operation of an intervention. Data were collected on both low and high workload days and days at the start, middle, and end of the study. While the high and low workload days may be difficult to determine in advance, the collection of operational data on days with varying operating conditions (particularly at different points in the study) is critical for assuring that the data gathered do not misrepresent the costs. Data collected early in an intervention may not reflect care being provided as efficiently as possible but the time that is required to learn the intervention should not be ignored.
A US study focused on grouping patients by resource utilization levels collected data on 5314 residents in 156 nursing units from 105 facilities in four states.17 Staff were asked to enter time for each patient into a hand-held computer while delivering care. Notably, the study required a week of preparation time for all personnel. This was required for only 48 hours of data collected by nurses and seven days of data collected by ancillary staff. The purpose of the study was to group patients at different resource utilization levels rather than to provide a cost estimate. To achieve reliable groupings, this study needed to be much larger than most other micro-costing studies discussed. The amount of time spent preparing for the study makes clear how costly this type of study can be. While devices like hand-held computers can make the data collection much more precise, the need for precision must be carefully assessed and compared with the cost of the precision. This study was not included in the table as the data were used for developing case mix measures rather than for reporting costs.
Resource utilization data can also be collected from program operation logs.18 In a study of how a team including a community-health nurse and peer counselor was able to influence breastfeeding duration in low income women, the team members tracked all activities related to the contact of study subjects. This included attempted phone calls, completed phone calls, attempted home visits, repeated home visits, and mileage. Staff were asked to keep a log associated with each study subject. This study was only a pilot and was not reported in Table 1.
Unlike most studies that simply discuss methods, one article provided an example of a time log sheet that allowed for 15 minute intervals and up to 40 different activity codes.19 This study was not included in the table because it did not report results. The study was useful because it described some important aspects of study design and the conversations that can occur between researchers and providers to facilitate high quality research in which study providers comply fully with data collection efforts.
Patient-level diary data have been collected to inform cost-effectiveness or cost-benefit exercises related to vision impairment.20-22 The investigators identified study subjects with vision impairment who were willing to record diary data on expenditures (the combination of quantities and prices) for one year. Investigators must recognize that high levels of compliance with this quantity of data collection are not necessarily typical. The cost and value of collecting extensive diary data must be considered when micro-costing studies are being designed. These studies are not reported in the table because they focus on collecting expenditures directly rather than on collecting data on quantities and applying prices.
An area unexplored in the literature is the real time collection of data on externalities, e.g. effects of a smoking cessation program on those around the smoker through changes in environmental tobacco smoke. Several difficulties arise with the concept of collecting micro-costing data for externalities. The intervention target is often one individual while the externalities can affect multiple individuals. Further, many of the externalities create costs in the long-run that cannot be collected alongside implementation and other short-run data.
One prior review was described as the first study to review methods and make recommendations for methods to use in the Department of Veterans Affairs in the United States.23 This review compared perspectives, compared long-run and short-run costs, and addressed joint costs. The authors described the costs and benefits of different types of studies (time and motion, activity logs, and manager surveys). The authors also described the likely precision, validity, and reliability of different methods and noted that the methods could be combined. The main recommendations were to study survey design and the accuracy of self-reported costs, particularly as related to recall issues. These recommendations remain as relevant today as they were when the previous review was published. Given the minimal movement in this direction during the five years since this set of recommendations was made, it is not clear that this type of study will ever be done.
A second existing review is from six years ago and focused specifically on randomized trials.24 This study concluded that the methods were of generally poor quality. The authors noted that most health care is produced as part of a joint production process and no cost study conducted alongside a randomized trial had taken this into consideration.
Despite the presence of prior reviews, a first recommendation would be a critical review of micro-costing studies since 2002. A fair number of micro-costing studies have been conducted since the publication of the review of randomized trial cost studies that year. As was clear in the discussion of studies reviewed for their methods, micro-costing is not confined to randomized trials. A review should particularly focus on the methods used for gathering quantity data (direct observation, logs, interviews, or Delphi) and assess whether the studies were able to provide sufficient specificity for decision makers to use the study to facilitate better resource allocation decisions. A review of this sort would be time and resource intensive. However, there are few micro-costing studies in comparison with the number of cost-utility studies in a registry.25 With appropriate resources, such a review would be feasible and would benefit the field.
A second recommendation would follow on the recommendation for the VA study. That recommendation focused on additional studies regarding data quality through survey methods. A follow-up recommendation would be to study the same cost using a head-to-head comparison of different methods. If costs are ever considered in the power calculations for randomized trials, or expected value of information calculations are performed, information about the precision of estimates obtained from the variety of available methods will be essential. Given the increasing importance of cost studies for budgeting and for combining with comparative effectiveness to facilitate more informed decision making, obtaining resources for this type of study seems possible.
Finally, if a new critical review is conducted and a head-to-head comparison is completed, this would provide much more concrete evidence upon which to develop guidelines of the best methods for different situations. Specifically, guidelines could address common limitations, critical judgment points, and decisions that can reduce limitations and improve the quality of studies.