Results 1-25 (38)
 

1.  Design and implementation of GRIP: a computerized glucose control system at a surgical intensive care unit 
Background
Tight glucose control by intensive insulin therapy has become a key part of critical care and is an important field of study in acute coronary care. A balance has to be found between frequency of measurements and the risk of hypoglycemia. Current nurse-driven protocols are paper-based and, therefore, rely on simple rules. For safety and efficiency a computer decision support system that employs complex logic may be superior to paper protocols.
Methods
We designed and implemented GRIP, a stand-alone Java computer program. Our implementation of GRIP will be released as free software. Blood glucose values measured by a point-of-care analyzer were automatically retrieved from the central laboratory database. Additional clinical information was requested from the nurse, and the program then advised a new insulin pump rate and glucose sampling interval.
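The advice loop described in this abstract maps a glucose value and the current pump rate to a new rate and a sampling interval. The sketch below illustrates that rule structure only; the thresholds, step sizes and intervals are invented for illustration and are not the published GRIP algorithm.

```python
# Hypothetical sketch of a GRIP-style advice step: given the latest
# glucose value (mmol/L) and the current insulin pump rate (U/h),
# return a new rate and the minutes until the next glucose sample.
# All numbers below are illustrative, not the actual GRIP logic.

def advise(glucose_mmol_l, current_rate_u_per_h):
    """Return (new_insulin_rate, next_sample_minutes)."""
    if glucose_mmol_l < 2.2:          # severe hypoglycemia: stop insulin, recheck fast
        return 0.0, 15
    if glucose_mmol_l < 4.0:          # below target: reduce rate, recheck soon
        return max(0.0, current_rate_u_per_h - 1.0), 30
    if glucose_mmol_l <= 8.0:         # in target band: keep rate, relax sampling
        return current_rate_u_per_h, 240
    # above target: increase rate, moderate sampling interval
    return current_rate_u_per_h + 1.0, 120
```

For example, `advise(10.5, 2.0)` returns `(3.0, 120)`: raise the rate and sample again in two hours. A complex program can encode many such branches, which is exactly what paper protocols cannot do.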
Results
Implementation of the computer program was uneventful and successful. GRIP treated 179 patients for a total of 957 patient-days. Severe hypoglycemia (< 2.2 mmol/L) occurred only once, and that episode was due to human error. With a median (IQR) of 4.9 (4.2–6.2) glucose measurements per day, the median percentage of time in which glucose fell within the target range was 78%. Nurses rated the program as easy to work with and as an improvement over the preceding paper protocol. They reported no increase in time spent on glucose control.
Conclusion
A computer driven protocol is a safe and effective means of glucose control at a surgical ICU. Future improvements in the recommendation algorithm may further improve safety and efficiency.
doi:10.1186/1472-6947-5-38
PMCID: PMC1334184  PMID: 16359559
2.  SLIM: an alternative Web interface for MEDLINE/PubMed searches – a preliminary study 
Background
With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded.
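A tool like SLIM ultimately has to translate its slider settings into an E-Utilities request. The sketch below shows how such a query URL could be assembled with the standard ESearch parameters (`db`, `term`, `retmax`, and the `datetype`/`mindate`/`maxdate` date limit); the mapping from slider values to these parameters is an assumption for illustration, not SLIM's actual PHP code.

```python
# Build an NCBI E-Utilities ESearch URL from search-form values.
# Parameter names are standard ESearch fields; how a slider UI would
# feed them is illustrative.

from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=25, mindate=None, maxdate=None):
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    if mindate and maxdate:                 # publication-date limit
        params.update(datetype="pdat", mindate=mindate, maxdate=maxdate)
    return EUTILS_ESEARCH + "?" + urlencode(params)

url = build_esearch_url("asthma[mh] AND therapy", retmax=10,
                        mindate="2000", maxdate="2005")
```

Fetching the URL returns an XML document of PubMed IDs, which the interface can then expand into citations with a follow-up EFetch call.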
Results
Alpha testing by developers showed SLIM to be functionally stable. Page generation times, used to estimate loading times, were recorded during the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features.
Conclusion
SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine.
doi:10.1186/1472-6947-5-37
PMCID: PMC1318459  PMID: 16321145
3.  Development and initial testing of a computer-based patient decision aid to promote colorectal cancer screening for primary care practice 
Background
Although colorectal cancer screening is recommended by major policy-making organizations, rates of screening remain low. Our aim was to develop a patient-directed, computer-based decision aid about colorectal cancer screening and investigate whether it could increase patient interest in screening.
Methods
We used content from evidence-based literature reviews and our previous decision aid research to develop a prototype. We performed two rounds of usability testing with representative patients to revise the content and format. The final decision aid consisted of an introductory segment, four test-specific segments, and information to allow comparison of the tests across several key parameters. We then conducted a before-after uncontrolled trial of 80 patients 50–75 years old recruited from an academic internal medicine practice.
Results
Mean viewing time was 19 minutes. The decision aid improved patients' intent to ask providers for screening from a mean score of 2.8 (1 = not at all likely to ask, 4 = very likely to ask) before viewing the decision aid to 3.2 afterwards (difference, 0.4; p < 0.0001, paired t-test). Most found the aid useful and reported that it improved their knowledge about screening. Sixty percent said they were ready to be tested, 18% needed more information, and 22% were not ready to be screened. Within 6 months of viewing, 43% of patients had completed screening tests.
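The before-after comparison above rests on a paired t-test over per-patient differences: t = mean(d) / (sd(d) / √n). The sketch below computes that statistic on made-up before/after intent scores (1-4 scale), not the study's data.

```python
# Paired t statistic for before/after scores from the same patients.
# The example scores are illustrative, not the trial data.

import math
from statistics import mean, stdev

def paired_t(before, after):
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

before = [3, 2, 3, 2, 3, 2, 3, 2]   # intent to ask for screening, pre-viewing
after  = [3, 3, 4, 3, 3, 3, 4, 2]   # intent post-viewing
t = paired_t(before, after)          # positive t: intent increased
```

Because each patient serves as their own control, the test is sensitive to a consistent shift even when the absolute change (here 0.4 on a 4-point scale) is modest.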
Conclusion
A computer-based decision aid can increase patients' intent to be screened and their interest in screening. Practice implications: the decision aid can be viewed by patients prior to provider appointments to increase motivation to be screened and to help them decide which modality to use for screening. Further work is required to integrate the decision aid with other practice-change strategies to raise screening rates to target levels.
doi:10.1186/1472-6947-5-36
PMCID: PMC1318488  PMID: 16313676
4.  Automatic extraction of candidate nomenclature terms using the doublet method 
Background
New terminology continuously enters the biomedical literature. How can curators identify new terms that can be added to existing nomenclatures? The most direct method, and one that has served well, involves reading the current literature. The scholarly curator adds new terms as they are encountered. Present-day scholars are severely challenged by the enormous volume of biomedical literature. Curators of medical nomenclatures need computational assistance if they hope to keep their terminologies current. The purpose of this paper is to describe a method of rapidly extracting new, candidate terms from huge volumes of biomedical text. The resulting lists of terms can be quickly reviewed by curators and added to nomenclatures, if appropriate. The candidate term extractor uses a variation of the previously described doublet coding method. The algorithm, which operates on virtually any nomenclature, derives from the observation that most terms within a knowledge domain are composed entirely of word combinations found in other terms from the same knowledge domain. Terms can be expressed as sequences of overlapping word doublets that have more specific meaning than the individual words that compose the term. The algorithm parses through text, finding contiguous sequences of word doublets that are known to occur somewhere in the reference nomenclature. When a sequence of matching word doublets is encountered, it is compared with whole terms already included in the nomenclature. If the doublet sequence is not already in the nomenclature, it is extracted as a candidate new term. Candidate new terms can be reviewed by a curator to determine if they should be added to the nomenclature. An implementation of the algorithm is demonstrated, using a corpus of published abstracts obtained through the National Library of Medicine's PubMed query service and using "The developmental lineage classification and taxonomy of neoplasms" as a reference nomenclature.
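The doublet method described above can be stated compactly: collect every word doublet occurring in the reference nomenclature, scan text for maximal runs of contiguous known doublets, and emit any run that is not already a whole nomenclature term as a candidate. The article's implementation is in Perl; the sketch below restates the same idea in Python on a tiny illustrative nomenclature.

```python
# Doublet-based candidate term extraction, as described in the abstract.
# The two-term nomenclature is a toy example.

def doublets(words):
    return {(words[i], words[i + 1]) for i in range(len(words) - 1)}

def extract_candidates(text, nomenclature):
    known = set()                            # all doublets in the nomenclature
    for term in nomenclature:
        known |= doublets(term.lower().split())
    terms = {t.lower() for t in nomenclature}
    words, candidates, run = text.lower().split(), [], []
    for i in range(len(words) - 1):
        if (words[i], words[i + 1]) in known:
            if not run:
                run = [words[i]]
            run.append(words[i + 1])         # extend the matching run
        else:
            if run:
                candidates.append(" ".join(run))
            run = []
    if run:
        candidates.append(" ".join(run))
    # keep only sequences not already present as whole terms
    return [c for c in candidates if c not in terms]

nomenclature = ["squamous cell carcinoma", "basal cell tumor"]
text = "we report a squamous cell tumor of the skin"
found = extract_candidates(text, nomenclature)
```

Here "squamous cell tumor" is built entirely from known doublets, (squamous, cell) and (cell, tumor), yet is not itself in the nomenclature, so it surfaces as a candidate for curator review.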
Results
A 31+ Megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature.
Conclusion
The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
doi:10.1186/1472-6947-5-35
PMCID: PMC1274323  PMID: 16232314
5.  Information resource preferences by general pediatricians in office settings: a qualitative study 
Background
Information needs and resource preferences of office-based general pediatricians have not been well characterized.
Methods
Data collected from a sample of twenty office-based urban/suburban general pediatricians consisted of: (a) a demographic survey about participants' practice and computer use, (b) semi-structured interviews on their use of different types of information resources and (c) semi-structured interviews on perceptions of information needs and resource preferences in response to clinical vignettes representing cases in Genetics and Infectious Diseases. Content analysis of interviews provided participants' perceived use of resources and their perceived questions and preferred resources in response to vignettes.
Results
Participants' average time in practice was 15.4 years (2–28 years). All had in-office online access.
Participants identified specialist/generalist colleagues, general/specialty pediatric texts, drug formularies, federal government/professional organization Websites and medical portals (when available) as preferred information sources. They did not identify decision-making texts, evidence-based reviews, journal abstracts, medical librarians or consumer health information for routine office use.
In response to clinical vignettes in Genetics and Infectious Diseases, participants identified Question Types about patient-specific (diagnosis, history and findings) and general medical (diagnostic, therapeutic and referral guidelines) information. They identified specialists and specialty textbooks, history and physical examination, colleagues and general pediatric textbooks, and federal and professional organizational Websites as information sources. Participants with access to portals identified them as information resources in lieu of texts.
For Genetics vignettes, participants identified questions about prenatal history, disease etiology and treatment guidelines. For Genetics vignettes, they identified patient history, specialists, general pediatric texts, Web search engines and colleagues as information sources. For Infectious Diseases (ID) vignettes, participants identified questions about patients' clinical status at presentation and questions about disease classification, diagnosis/therapy/referral guidelines and sources of patient education. For ID vignettes, they identified history, laboratory results, colleagues, specialists and personal experience as information sources.
Conclusion
Content analysis of office-based general pediatricians' responses to clinical vignettes provided a qualitative description of their perceived information needs and preferred information resources for cases in Genetics and Infectious Diseases. This approach may provide complementary information for discovering practitioners' information needs and resource preferences in different contexts.
doi:10.1186/1472-6947-5-34
PMCID: PMC1266372  PMID: 16225686
6.  Medical record linkage in health information systems by approximate string matching and clustering 
Background
Multiplication of data sources within heterogeneous healthcare information systems always results in redundant information, split among multiple databases. Our objective is to detect exact and approximate duplicates within identity records, in order to attain a better quality of information and to permit cross-linkage among stand-alone and clustered databases. Furthermore, we need to assist human decision making, by computing a value reflecting identity proximity.
Methods
The proposed method proceeds in three steps. The first step is to standardise and index elementary identity fields, using blocking variables, in order to speed up information analysis. The second is to match similar record pairs, relying on a global similarity value computed with the Porter-Jaro-Winkler algorithm. The third is to create clusters of coherent related records, using graph drawing, agglomerative clustering methods and partitioning methods.
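The three-step pipeline can be sketched on toy identities: standardise and block on a simple key, score pairs within each block, and merge matches into clusters with union-find. In this sketch `difflib.SequenceMatcher`'s ratio stands in for the Porter-Jaro-Winkler similarity used in the paper, and the blocking key and threshold are illustrative choices.

```python
# Toy record-linkage pipeline: standardise -> block -> match -> cluster.

from collections import defaultdict
from difflib import SequenceMatcher

def norm(s):
    return " ".join(s.upper().split())       # standardise case and spacing

def blocking_key(name):
    return norm(name)[:3]                    # crude blocking variable

def similarity(a, b):                        # stand-in for Jaro-Winkler
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def cluster(records, threshold=0.85):
    parent = list(range(len(records)))       # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    blocks = defaultdict(list)
    for i, r in enumerate(records):
        blocks[blocking_key(r)].append(i)
    for ids in blocks.values():              # compare only within a block
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                if similarity(records[ids[a]], records[ids[b]]) >= threshold:
                    parent[find(ids[a])] = find(ids[b])
    groups = defaultdict(list)
    for i in range(len(records)):
        groups[find(i)].append(records[i])
    return sorted(groups.values(), key=len, reverse=True)

records = ["DUPONT JEAN", "Dupont  Jean", "DURAND MARIE", "dupond jean"]
clusters = cluster(records)
```

Blocking keeps the pairwise comparisons tractable at the 300,000-record scale reported below, since only records sharing a key are ever scored against each other.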
Results
The batch analysis of 300,000 "supposedly" distinct identities isolates 240,000 true unique records, 24,000 duplicates (clusters composed of 2 records) and 3,000 clusters whose size is greater than or equal to 3 records.
Conclusion
Duplicate-free databases, used in conjunction with relevant indexes and similarity values, allow immediate (i.e., real-time) proximity detection when a new identity is inserted.
doi:10.1186/1472-6947-5-32
PMCID: PMC1274322  PMID: 16219102
7.  An adaptive prediction and detection algorithm for multistream syndromic surveillance 
Background
Surveillance of Over-the-Counter pharmaceutical (OTC) sales as a potential early indicator of developing public health conditions, in particular in cases of interest to biosurveillance, has been suggested in the literature. This paper continues a previous study in which we formulated the problem of estimating clinical data from OTC sales in terms of optimal LMS linear and Finite Impulse Response (FIR) filters. Here we extend our results to predict clinical data multiple steps ahead, using OTC sales as well as the clinical data itself.
Methods
The OTC data are grouped into a few categories and we predict the clinical data using a multichannel filter that encompasses all the past OTC categories as well as the past clinical data itself. The prediction is performed using FIR (Finite Impulse Response) filters and the recursive least squares method in order to adapt rapidly to nonstationary behaviour. In addition, we inject simulated events in both clinical and OTC data streams to evaluate the predictions by computing the Receiver Operating Characteristic curves of a threshold detector based on predicted outputs.
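The multichannel FIR structure above amounts to a weighted sum over the last L samples of every input stream (each OTC category plus the clinical series itself). The sketch below shows that filter structure with fixed, made-up coefficients; in the paper the coefficients are fitted by recursive least squares so they can track nonstationary behaviour.

```python
# Multichannel FIR prediction: y = sum over streams c, taps i of
# h_c[i] * x_c[t - i]. Streams and coefficients here are illustrative.

def fir_predict(streams, coeffs):
    """streams: {name: [oldest..newest]}; coeffs: {name: [h0..h(L-1)]},
    where h0 multiplies the newest sample of that stream."""
    y = 0.0
    for name, taps in coeffs.items():
        x = streams[name]
        for i, h in enumerate(taps):
            y += h * x[-1 - i]
    return y

streams = {
    "cough_otc": [10, 12, 15, 20],   # rising OTC cough-remedy sales
    "clinical":  [3, 3, 4, 5],       # clinic visits for the syndrome
}
coeffs = {"cough_otc": [0.1, 0.05], "clinical": [0.8, 0.1]}
pred = fir_predict(streams, coeffs)  # next-step clinical prediction
```

A threshold detector applied to such predictions, as in the Results below, flags an alarm whenever the predicted clinical signal exceeds a chosen level; sweeping that level traces out the ROC curve.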
Results
We present all prediction results showing the effectiveness of the combined filtering operation. In addition, we compute and present the performance of a detector using the prediction output.
Conclusion
Multichannel adaptive FIR least squares filtering provides a viable method of predicting public health conditions, as represented by clinical data, from OTC sales, and/or the clinical data. The potential value to a biosurveillance system cannot, however, be determined without studying this approach in the presence of transient events (nonstationary events of relatively short duration and fast rise times). Our simulated events superimposed on actual OTC and clinical data allow us to provide an upper bound on that potential value under some restricted conditions. Based on our ROC curves we argue that a biosurveillance system can provide early warning of an impending clinical event using ancillary data streams (such as OTC) with established correlations with the clinical data, and a prediction method that can react to nonstationary events sufficiently fast. Whether OTC (or other data streams yet to be identified) provide the best source of predicting clinical data is still an open question. We present a framework and an example to show how to measure the effectiveness of predictions, and compute an upper bound on this performance for the Recursive Least Squares method when the following two conditions are met: (1) an event of sufficient strength exists in both data streams, without distortion, and (2) it occurs in the OTC (or other ancillary streams) earlier than in the clinical data.
doi:10.1186/1472-6947-5-33
PMCID: PMC1266371  PMID: 16221308
8.  Variation in the psychosocial determinants of the intention to prescribe hormone therapy prior to the release of the Women's Health Initiative trial: a survey of general practitioners and gynaecologists in France and Quebec 
Background
Theory-based approaches are advocated to improve our understanding of prescription behaviour. This study is an application of the theory of planned behaviour (TPB) with additional variables. It was designed to assess which variables were associated with the intention to prescribe hormone therapy (HT). In addition, variations in the measures across medical specialities (GPs and gynaecologists) and across countries (France and Quebec) were investigated.
Methods
A survey among 2,000 doctors from France and 1,044 doctors from Quebec was conducted. Data were collected by means of a self-administered questionnaire. A clinical vignette was used to elicit doctors' opinions. The following TPB variables were assessed: attitude, subjective norm, perceived behavioural control, attitudinal beliefs, normative beliefs and power of control beliefs. Additional variables (role belief, moral norm and practice pattern-related factors) were also assessed. A stepwise logistic regression was used to assess which variables were associated with the intention to prescribe HT. GPs and gynaecologists were compared to each other within countries and the two countries were compared within the specialties.
Results
Overall, 1,085 doctors from France returned their questionnaire and 516 doctors from Quebec (response rate = 54% and 49%, respectively). In the overall regression model, power of control beliefs, moral norm and role belief were significantly associated with intention (all at p < 0.0001). The models by specialty and country were: for GPs in Quebec, power of control beliefs (p < 0.0001), moral norm (p < 0.01) and cytology and hormonal dosage (both at p < 0.05); for GPs in France, power of control beliefs and role belief (both at p < 0.0001) and perception of behavioural control (p < 0.05) and cessation of menses (p < 0.01); for gynaecologists in Quebec, moral norm and power of control beliefs (both at p = 0.01); and for gynaecologists in France, power of control beliefs (p < 0.0001), and moral norm, role belief and lipid profile (all at p < 0.05).
Conclusion
In both countries, intention to prescribe HT was higher for gynaecologists than for GPs. Psychosocial determinants of doctors' intention to prescribe HT varied according to specialty and country, thus suggesting an influence of contextual factors on these determinants.
doi:10.1186/1472-6947-5-31
PMCID: PMC1250227  PMID: 16150149
9.  Automation of a problem list using natural language processing 
Background
The medical problem list is an important part of the electronic medical record under development at our institution. To serve the functions it is designed for, the problem list has to be as accurate and timely as possible. However, the current problem list is usually incomplete and inaccurate, and is often entirely unused. To alleviate this issue, we are building an environment in which the problem list can be easily and effectively maintained.
Methods
For this project, 80 medical problems were selected for their frequency of use in our future clinical field of evaluation (cardiovascular). We have developed an Automated Problem List system composed of two main components: a background and a foreground application. The background application uses Natural Language Processing (NLP) to harvest potential problem list entries from the list of 80 targeted problems detected in the multiple free-text electronic documents available in our electronic medical record. These proposed medical problems drive the foreground application designed for management of the problem list. Within this application, the extracted problems are proposed to the physicians for addition to the official problem list.
Results
The set of 80 targeted medical problems selected for this project covered about 5% of all possible diagnoses coded in ICD-9-CM in our study population (cardiovascular adult inpatients), but about 64% of all instances of these coded diagnoses. The system contains algorithms to detect first document sections, then sentences within these sections, and finally potential problems within the sentences. The initial evaluation of the section and sentence detection algorithms demonstrated a sensitivity and positive predictive value of 100% when detecting sections, and a sensitivity of 89% and a positive predictive value of 94% when detecting sentences.
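The cascaded detection reported above (sections, then sentences, then problems) can be sketched with simple pattern matching. The heading pattern, sentence splitter and three-problem lexicon below are illustrative stand-ins for the system's actual NLP components.

```python
# Toy section -> sentence -> problem cascade over a clinical note.

import re

PROBLEMS = {"atrial fibrillation", "hypertension", "heart failure"}

def sections(note):
    # a heading like "HISTORY:" at the start of a line opens a section
    parts = re.split(r"\n(?=[A-Z ]+:)", note)
    return [p.strip() for p in parts if p.strip()]

def sentences(section):
    body = section.split(":", 1)[-1]
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", body) if s.strip()]

def detect_problems(note):
    found = set()
    for sec in sections(note):
        for sent in sentences(sec):
            for problem in PROBLEMS:
                if problem in sent.lower():
                    found.add(problem)
    return found

note = ("HISTORY: Longstanding hypertension. Denies chest pain.\n"
        "IMPRESSION: New atrial fibrillation.")
found = detect_problems(note)
```

Detected problems would then be queued as proposals in the foreground application, leaving the physician to accept or reject each entry on the official problem list.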
Conclusion
The global aim of our project is to automate the process of creating and maintaining a problem list for hospitalized patients and thereby help to guarantee the timeliness, accuracy and completeness of this information.
doi:10.1186/1472-6947-5-30
PMCID: PMC1208893  PMID: 16135244
10.  Exploring cancer register data to find risk factors for recurrence of breast cancer – application of Canonical Correlation Analysis 
Background
A common approach in exploring register data is to find relationships between outcomes and predictors by using multiple regression analysis (MRA). If there is more than one outcome variable, the analysis must then be repeated, and the results combined in some arbitrary fashion. In contrast, Canonical Correlation Analysis (CCA) has the ability to analyze multiple outcomes at the same time.
One essential outcome after breast cancer treatment is recurrence of the disease. It is important to understand the relationship between different predictors and recurrence, including the time interval until recurrence. This study describes the application of CCA to find important predictors for two different outcomes for breast cancer patients (loco-regional recurrence and occurrence of distant metastasis), and to decrease the number of variables in the sets of predictors and outcomes without decreasing the predictive strength of the model.
Methods
Data for 637 malignant breast cancer patients admitted in the south-east region of Sweden were analyzed. By using CCA and looking at the structure coefficients (loadings), relationships between tumor specifications and the two outcomes during different time intervals were analyzed and a correlation model was built.
Results
The analysis successfully detected known predictors for breast cancer recurrence during the first two years and distant metastasis 2–4 years after diagnosis. Nottingham Histologic Grading (NHG) was the most important predictor, while age of the patient at the time of diagnosis was not an important predictor.
Conclusion
In cancer registers with high dimensionality, CCA can be used for identifying the importance of risk factors for breast cancer recurrence. This technique can result in a model ready for further processing by data mining methods through reducing the number of variables to important ones.
doi:10.1186/1472-6947-5-29
PMCID: PMC1208892  PMID: 16111503
11.  Handheld computers and the 21st century surgical team: a pilot study 
Background
The commercial development and expansion of mobile phone networks has led to the creation of devices combining mobile phones and personal digital assistants, which could prove invaluable in a clinical setting. This pilot study aimed to look at how one such device compared with the current pager system in facilitating inter-professional communication in a hospital clinical team.
Methods
The study looked at a heterogeneous team of doctors (n = 9) working in a busy surgical setting at St. Mary's Hospital in London and compared the use of a personal digital assistant with mobile phone and web-browsing facilities to the existing pager system. The primary feature of this device being compared to the conventional pager was its use as a mobile phone, but other features evaluated included the ability to access the internet, and reference data on the device. A crossover study was carried out for 6 weeks in 2004, with the team having access to the personal digital assistant every alternate week. The primary outcome measure for assessing efficiency of communication was the length of time it took for clinicians to respond to a call. We also sought to assess the ease of adoption of new technology by evaluating the perceptions of the team (n = 9) to personal digital assistants, by administering a questionnaire.
Results
Doctors equipped with a personal digital assistant rather than a pager responded more quickly to a call and had a lower failure-to-respond rate (RR: 0.44; 95% CI 0.20–0.93). Clinicians also found this technology easy to adopt, as seen by a significant reduction in nervousness about the technology over the six-week study period (mean (SD) week 1: 4.10 (1.69) vs. week 6: 2.20 (1.99); p = 0.04).
Conclusion
The results of this pilot study show the possible effects of replacing the current hospital pager with a newer, more technologically advanced device, and suggest that a combined personal digital assistant and mobile phone device may improve communication between doctors. In the light of these encouraging preliminary findings, we propose a large-scale clinical trial of the use of these devices in facilitating inter-professional communication in a hospital setting.
doi:10.1186/1472-6947-5-28
PMCID: PMC1208891  PMID: 16109177
12.  PHSkb: A knowledgebase to support notifiable disease surveillance 
Background
Notifiable disease surveillance in the United States is predominantly a passive process that is often limited by poor timeliness and low sensitivity. Interoperable tools are needed that interact more seamlessly with existing clinical and laboratory data to improve notifiable disease surveillance.
Description
The Public Health Surveillance Knowledgebase (PHSkb™) is a computer database designed to provide quick, easy access to domain knowledge regarding notifiable diseases and conditions in the United States. The database was developed using Protégé ontology and knowledgebase editing software. Data regarding the notifiable disease domain were collected via a comprehensive review of state health department websites and integrated with other information used to support the National Notifiable Diseases Surveillance System (NNDSS). Domain concepts were harmonized, wherever possible, to existing vocabulary standards. The knowledgebase can be used: 1) as the basis for a controlled vocabulary of reportable conditions needed for data aggregation in public health surveillance systems; 2) to provide queriable domain knowledge for public health surveillance partners; 3) to facilitate more automated case detection and surveillance decision support as a reusable component in an architecture for intelligent clinical, laboratory, and public health surveillance information systems.
Conclusions
The PHSkb provides an extensible, interoperable system architecture component to support notifiable disease surveillance. Further development and testing of this resource is needed.
doi:10.1186/1472-6947-5-27
PMCID: PMC1201144  PMID: 16105177
13.  Pretest probability assessment derived from attribute matching 
Background
Pretest probability (PTP) assessment plays a central role in diagnosis. This report describes a novel attribute-matching method for generating a PTP for acute coronary syndrome (ACS) and compares the new method with a validated logistic regression equation (LRE).
Methods
Eight clinical variables (attributes) were chosen by classification and regression tree analysis of a prospectively collected reference database of 14,796 emergency department (ED) patients evaluated for possible ACS. For attribute matching, a computer program identifies patients within the database who have the exact profile defined by clinician input of the eight attributes. The novel method was compared with the LRE for the ability to produce a PTP estimate <2% in a validation set of 8,120 patients evaluated for possible ACS who did not have ST-segment elevation on ECG. Before the validation analysis, 1,061 patients were excluded because of ST-segment elevation (713), missing data (77) or loss to follow-up (271).
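Attribute matching reduces, in essence, to a lookup: a patient's attributes form a profile key, and the PTP is the outcome rate among reference-database patients with the exact same profile. The sketch below shows this with a toy two-attribute database; the real system uses eight attributes and 14,796 reference patients.

```python
# Attribute matching in miniature: PTP = outcome rate among reference
# patients with an identical attribute profile. Data are illustrative.

from collections import defaultdict

def build_index(reference):
    """reference: list of (profile_tuple, had_acs_bool)."""
    index = defaultdict(lambda: [0, 0])      # profile -> [acs_count, total]
    for profile, had_acs in reference:
        index[profile][0] += had_acs
        index[profile][1] += 1
    return index

def pretest_probability(index, profile):
    acs, total = index.get(profile, (0, 0))
    return acs / total if total else None    # None: no exact match found

reference = [
    (("age<40", "no_diaphoresis"), False),
    (("age<40", "no_diaphoresis"), False),
    (("age<40", "no_diaphoresis"), True),
    (("age>=40", "diaphoresis"), True),
]
index = build_index(reference)
ptp = pretest_probability(index, ("age<40", "no_diaphoresis"))
```

Because every distinct profile yields its own empirical rate, the method naturally produces many more unique PTP values than a regression equation, as the Results below report.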
Results
In the validation set, attribute matching produced 267 unique PTP estimates [median PTP value 6%, 1st–3rd quartile 1–10%] compared with the LRE, which produced 96 unique PTP estimates [median 24%, 1st–3rd quartile 10–30%]. The areas under the receiver operating characteristic curves were 0.74 (95% CI 0.65 to 0.82) for the attribute matching curve and 0.68 (95% CI 0.62 to 0.77) for LRE.
The attribute matching system categorized 1,670 (24%, 95% CI = 23–25%) patients as having a PTP < 2.0%; 28 developed ACS (1.7% 95% CI = 1.1–2.4%). The LRE categorized 244 (4%, 95% CI = 3–4%) with PTP < 2.0%; four developed ACS (1.6%, 95% CI = 0.4–4.1%).
Conclusion
Attribute matching estimated a very low PTP for ACS in a significantly larger proportion of ED patients compared with a validated LRE.
doi:10.1186/1472-6947-5-26
PMCID: PMC1201143  PMID: 16095534
14.  The tissue micro-array data exchange specification: a web based experience browsing imported data 
Background
The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers to approved researchers HIV infected biologic samples and uninfected control tissues including tissue cores in micro-arrays (TMA) accompanied by de-identified clinical data. Researchers interested in the type and quality of TMA tissue cores and the associated clinical data need an efficient method for viewing available TMA materials. Because each of the tissue samples within a TMA has separate data including a core tissue digital image and clinical data, an organized, standard approach to producing, navigating and publishing such data is necessary.
The Association for Pathology Informatics (API) extensible mark-up language (XML) TMA data exchange specification (TMA DES) proposed in April 2003 provides a common format for TMA data. Exporting TMA data into the proposed format offers an opportunity to implement the API TMA DES. Using our public BrowseTMA tool, we created a web site that organizes and cross references TMA lists, digital "virtual slide" images, TMA DES export data, linked legends and clinical details for researchers.
Microsoft Excel® and Microsoft Word® are used to convert tabular clinical data and produce an XML file in the TMA DES format. The BrowseTMA tool contains Extensible Stylesheet Language Transformation (XSLT) scripts that convert XML data into Hyper-Text Mark-up Language (HTML) web pages with hyperlinks automatically added to allow rapid navigation.
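The tabular-to-XML conversion step described above, turning spreadsheet rows into an XML export, can be sketched with the standard library: each row becomes one core element whose children carry the column values. The element names below are placeholders, not the actual TMA DES common data elements.

```python
# Convert tabular TMA core data into a simple XML export.
# Tag names are illustrative, not the TMA DES vocabulary.

import xml.etree.ElementTree as ET

def rows_to_xml(rows, root_tag="tma_block", row_tag="core"):
    root = ET.Element(root_tag)
    for row in rows:                          # row: dict of column -> value
        core = ET.SubElement(root, row_tag)
        for column, value in row.items():
            ET.SubElement(core, column).text = str(value)
    return ET.tostring(root, encoding="unicode")

rows = [
    {"core_id": "A1", "diagnosis": "Kaposi sarcoma", "core_mm": "0.6"},
    {"core_id": "A2", "diagnosis": "control skin", "core_mm": "0.6"},
]
xml = rows_to_xml(rows)
```

Once the data are in XML, an XSLT stylesheet, as BrowseTMA uses, can render them to hyperlinked HTML pages, and a DTD can validate exports from other institutions against the shared format.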
Results
Block lists, virtual slide images, legends, clinical details and exports have been placed on the ACSR web site for 14 blocks with 1,623 cores of 2.0, 1.0 and 0.6 mm sizes. Our virtual microscope can be used to view and annotate these TMA images. Researchers can readily navigate from TMA block lists to TMA legends and to clinical details for a selected tissue core.
Exports for 11 blocks with 3,812 cores from three other institutions were processed with the BrowseTMA tool. Fifty common data elements (CDEs) from the TMA DES were used, and 42 more were created for site-specific data. Researchers can download TMA clinical data in the TMA DES format.
Conclusion
Virtual TMAs with clinical data can be viewed on the Internet by interested researchers using the BrowseTMA tool. We have organized our approach to producing, sorting, navigating and publishing TMA information to facilitate such review.
We converted Excel TMA data into TMA DES XML and imported it, along with TMA DES XML from another institution, into BrowseTMA to produce web pages that allow browsing of the merged data. We proposed enhancements to the TMA DES as a result of this experience.
We implemented improvements to the API TMA DES as a result of using exported data from several institutions. A document type definition (DTD) was written for the API TMA DES that optionally includes the proposed enhancements. Independent validators can be used to check exports against the DTD, with or without the proposed enhancements. Linking tissue core images to readily navigable clinical data greatly improves the value of the TMA.
doi:10.1186/1472-6947-5-25
PMCID: PMC1208890  PMID: 16086837
15.  Audio computer-assisted self-interviewing (ACASI) may avert socially desirable responses about infant feeding in the context of HIV 
Background
Understanding infant feeding practices in the context of HIV, and the factors that put mothers at risk of HIV infection, is an important step towards prevention of mother-to-child transmission of HIV (PMTCT). Face-to-face (FTF) interviewing may not be a suitable way of ascertaining this information because respondents may report what is socially desirable. Audio computer-assisted self-interviewing (ACASI) is thought to increase privacy, improve the reporting of sensitive issues, and reduce socially desirable responses. We compared ACASI with FTF interviewing and explored its feasibility, usability, and acceptability in a PMTCT program in Kenya.
Methods
A graphic user interface (GUI) was developed using Macromedia Authorware®, with questions and instructions recorded in the local languages Kikuyu and Kiswahili. Eighty mothers enrolled in the PMTCT program were interviewed with each interviewing mode (ACASI and FTF), and responses obtained in FTF interviews and ACASI were compared using McNemar's χ2 test for paired proportions. A paired Student's t-test was used to compare means of age, marital duration and parity when measuring interview mode effect, and a two-sample Student's t-test was used to compare means for samples stratified by education level, determined during the exit interview. A chi-square (χ2) test was used to compare ability to use ACASI by education level.
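For reference, McNemar's χ2 test for paired proportions, used above to compare ACASI and FTF answers from the same mothers, depends only on the discordant pairs. A minimal self-contained sketch with hypothetical counts (the continuity-corrected form; the paper does not state which variant was used):

```python
from math import erf, sqrt

def mcnemar_chi2(b, c):
    """McNemar's chi-square for paired binary responses.

    b: pairs answering 'yes' in FTF but 'no' in ACASI
    c: pairs answering 'no' in FTF but 'yes' in ACASI
    Returns the continuity-corrected chi-square statistic and a
    two-sided p-value from the chi-square distribution with 1 df.
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # With 1 df, chi2 = z**2, so P(X2 > chi2) = 2 * (1 - Phi(sqrt(chi2)))
    z = sqrt(chi2)
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return chi2, p

# Hypothetical discordant counts: 25 mothers switched one way, 10 the other
chi2, p = mcnemar_chi2(25, 10)
```

Only the off-diagonal cells of the paired 2x2 table enter the statistic; concordant pairs carry no information about a systematic shift between modes.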
Results
The mean intended duration of breastfeeding was 11 months as reported by ACASI and 19 months as reported in FTF interviews (p < 0.001). Introduction of complementary foods at ≤3 months was reported more frequently in ACASI than in FTF interviews for 7 of 13 complementary food items commonly used in the study area (p < 0.05). More respondents reported use of unsuitable utensils for infant feeding in ACASI than in FTF interviewing (p = 0.001). Among other sensitive questions, 7% more respondents reported unstable relationships with ACASI than when interviewed FTF (p = 0.039). Regardless of education level, respondents used ACASI similarly, and the majority (65%) preferred it to FTF interviewing, mainly because of enhanced usability and privacy. Most respondents (79%) preferred ACASI to FTF for future interviewing.
Conclusion
ACASI appears to improve the quality of information by increasing responses to sensitive questions, decreasing socially desirable responses, and preventing null responses, and it was suitable for collecting data in a setting where formal education is low.
doi:10.1186/1472-6947-5-24
PMCID: PMC1190182  PMID: 16076385
16.  The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation 
Background
Clinical practice guidelines are not uniformly successful in influencing clinicians' behaviour toward best practices. Implementability refers to a set of characteristics that predict ease of (and obstacles to) guideline implementation. Our objective is to develop and validate a tool for appraisal of implementability of clinical guidelines.
Methods
Indicators of implementability were identified from the literature and used to create items and dimensions of the GuideLine Implementability Appraisal (GLIA). GLIA consists of 31 items, arranged into 10 dimensions. Questions from 9 of the 10 dimensions are applied individually to each recommendation of the guideline. Decidability and Executability are critical dimensions. Other dimensions are Global, Presentation and Formatting, Measurable Outcomes, Apparent Validity, Flexibility, Effect on Process of Care, Novelty/Innovation, and Computability. We conducted a series of validation activities, including validation of the construct of implementability, expert review of content for clarity, relevance, and comprehensiveness, and assessment of construct validity of the instrument. Finally, GLIA was applied to a draft guideline under development by national professional societies.
Results
Evidence of content validity and preliminary support for construct validity were obtained. The GLIA proved to be useful in identifying barriers to implementation in the draft guideline and the guideline was revised accordingly.
Conclusion
GLIA may be useful to guideline developers who can apply the results to remedy defects in their guidelines. Likewise, guideline implementers may use GLIA to select implementable recommendations and to devise implementation strategies that address identified barriers. By aiding the design and operationalization of highly implementable guidelines, our goal is that application of GLIA may help to improve health outcomes, but further evaluation will be required to support this potential benefit.
doi:10.1186/1472-6947-5-23
PMCID: PMC1190181  PMID: 16048653
17.  A software tool for creating simulated outbreaks to benchmark surveillance systems 
Background
Evaluating surveillance systems for the early detection of bioterrorism is particularly challenging when systems are designed to detect events for which there are few or no historical examples. One approach to benchmarking outbreak detection performance is to create semi-synthetic datasets containing authentic baseline patient data (noise) and injected artificial patient clusters, as signal.
Methods
We describe a software tool, the AEGIS Cluster Creation Tool (AEGIS-CCT), that enables users to create simulated clusters with controlled feature sets, varying the desired cluster radius, density, distance, relative location from a reference point, and temporal epidemiological growth pattern. AEGIS-CCT does not require the use of an external geographical information system program for cluster creation. The cluster creation tool is an open source program, implemented in Java and is freely available under the Lesser GNU Public License at its Sourceforge website. Cluster data are written to files or can be appended to existing files so that the resulting file will include both existing baseline and artificially added cases. Multiple cluster file creation is an automated process in which multiple cluster files are created by varying a single parameter within a user-specified range. To evaluate the output of this software tool, sets of test clusters were created and graphically rendered.
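The cluster parameters listed above (radius, size, temporal growth pattern) can be sketched in a few lines. The following Python snippet is a simplified illustration of what a cluster-creation tool like AEGIS-CCT does, not its actual implementation; the function and parameter names are hypothetical.

```python
import math
import random

def simulate_cluster(center_x, center_y, radius_km, n_cases, days=7):
    """Scatter n_cases uniformly within a disc and assign onset days.

    A toy sketch of semi-synthetic signal injection: the returned cases
    would be appended to authentic baseline data (noise) for benchmarking.
    """
    random.seed(42)  # reproducible example
    cases = []
    for _ in range(n_cases):
        # Uniform point in a disc: sqrt of a uniform radius avoids center bias
        r = radius_km * math.sqrt(random.random())
        theta = random.uniform(0, 2 * math.pi)
        # int(days * sqrt(u)) makes daily case counts grow roughly linearly,
        # mimicking an epidemiological growth pattern
        day = int(days * math.sqrt(random.random()))
        cases.append((center_x + r * math.cos(theta),
                      center_y + r * math.sin(theta),
                      day))
    return cases

cluster = simulate_cluster(0.0, 0.0, radius_km=3.0, n_cases=40)
```

Varying a single parameter (here, say, `radius_km`) over a range while holding the others fixed is the automated multiple-file mode the abstract describes.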
Results
Based on user-specified parameters describing the location, properties, and temporal pattern of simulated clusters, AEGIS-CCT created clusters accurately and uniformly.
Conclusion
AEGIS-CCT enables the ready creation of datasets for benchmarking outbreak detection systems. It may be useful for automating the testing and validation of spatial and temporal cluster detection algorithms.
doi:10.1186/1472-6947-5-22
PMCID: PMC1182374  PMID: 16018815
18.  Distribution of immunodeficiency fact files with XML – from Web to WAP 
Background
Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increased need for services capable of integrating and validating information, as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery to different platforms.
Methods
Here we present a new data model, called a fact file, and an XML-based specification, the Inherited Disease Markup Language (IDML), that were developed to facilitate the integration, storage and exchange of disease information. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases.
Results
IDML and fact files were used to build a comprehensive Web- and WAP-accessible knowledge base, the ImmunoDeficiency Resource (IDR), available at . A fact file provides a user-oriented interface that serves as a starting point for exploring information on hereditary diseases.
Conclusion
The IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at .
doi:10.1186/1472-6947-5-21
PMCID: PMC1184081  PMID: 15978138
19.  An overview of the design and methods for retrieving high-quality studies for clinical care 
Background
With the information explosion, retrieving the best clinical evidence from large, general purpose bibliographic databases such as MEDLINE can be difficult. Both researchers conducting systematic reviews and clinicians faced with a patient care question are confronted with the daunting task of searching for the best medical literature in electronic databases. Many have advocated the use of search filters or "hedges" to assist with the searching process.
Objective
To describe the design and methods of a study that set out to develop optimal search strategies for retrieving sound clinical studies of health disorders in large electronic databases.
Design
An analytic survey comparing hand searches of 170 journals in the year 2000 with retrievals from MEDLINE, EMBASE, CINAHL, and PsycINFO for candidate search terms and combinations. The sensitivity, specificity, precision, and accuracy of unique search terms and combinations of search terms were calculated.
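The diagnostic-test framing described above maps each candidate search strategy onto a 2x2 table against the hand-search gold standard. A small sketch with hypothetical counts (the variable names are illustrative, not taken from the study):

```python
def search_filter_metrics(tp, fp, fn, tn):
    """Diagnostic-style metrics for a search strategy judged against a
    hand-search gold standard:
      tp = sound studies retrieved      fp = unsound studies retrieved
      fn = sound studies missed         tn = unsound studies excluded
    """
    return {
        "sensitivity": tp / (tp + fn),            # fraction of sound studies found
        "specificity": tn / (tn + fp),            # fraction of unsound studies excluded
        "precision":   tp / (tp + fp),            # fraction of retrievals that are sound
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for one candidate search term
m = search_filter_metrics(tp=90, fp=210, fn=10, tn=690)
```

Optimizing a hedge is then a matter of choosing term combinations that trade sensitivity against precision for the intended use (systematic review versus bedside question).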
Conclusion
A study design modeled after a diagnostic testing procedure with a gold standard (the hand search of the literature) and a test (the search terms) is an effective way of developing, testing, and validating search strategies for use in large electronic databases.
doi:10.1186/1472-6947-5-20
PMCID: PMC1183213  PMID: 15969765
20.  Real time spatial cluster detection using interpoint distances among precise patient locations 
Background
Public health departments in the United States are beginning to gain timely access to health data, often as soon as one day after a visit to a health care facility. Consequently, new approaches to outbreak surveillance are being developed. When cases cluster geographically, an analysis of their spatial distribution can facilitate outbreak detection. Our method focuses on detecting perturbations in the distribution of pair-wise distances among all patients in a geographical region. Barring outbreaks, this distribution can be quite stable over time. We sought to exemplify the method by measuring its cluster detection performance, and to determine factors affecting sensitivity to spatial clustering among patients presenting to hospital emergency departments with respiratory syndromes.
Methods
The approach was to (1) define a baseline spatial distribution of home addresses for a population of patients visiting an emergency department with respiratory syndromes using historical data; (2) develop a controlled feature set simulation by inserting simulated outbreak data with varied parameters into authentic background noise, thereby creating semisynthetic data; (3) compare the observed with the expected spatial distribution; (4) establish the relative value of different alarm strategies so as to maximize sensitivity for the detection of clustering; and (5) measure factors which have an impact on sensitivity.
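The steps above center on comparing an observed interpoint-distance distribution with a historical baseline. The following Python sketch is a toy version of that idea; the real M statistic described in this abstract also accounts for covariance between distance bins, which this simplification omits.

```python
import math

def pairwise_distances(points):
    """All pairwise Euclidean distances among patient locations."""
    dists = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            dists.append(math.hypot(x1 - x2, y1 - y2))
    return dists

def distance_histogram(dists, bin_edges):
    """Proportion of patient pairs falling in each distance bin."""
    counts = [0] * (len(bin_edges) - 1)
    for d in dists:
        for k in range(len(counts)):
            if bin_edges[k] <= d < bin_edges[k + 1]:
                counts[k] += 1
                break
    total = len(dists) or 1
    return [c / total for c in counts]

def m_like_statistic(observed, expected):
    """Chi-square-style divergence between observed and baseline
    interpoint-distance histograms (covariance between bins ignored)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

# Toy example: three patient locations, two distance bins (km)
baseline = distance_histogram(
    pairwise_distances([(0.0, 0.0), (3.0, 4.0), (0.0, 0.0)]),
    [0.0, 1.0, 10.0],
)
```

A geographically tight outbreak inflates the short-distance bins relative to baseline, which is what drives the alarm.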
Results
Overall sensitivity to detect spatial clustering was 62%. This contrasts with an overall alarm rate of less than 5% for the same number of extra visits when those visits were not geographically clustered. The clusters that produced the fewest alarms were small (10 extra visits in a week, where weekly visits ranged from 120 to 472), diffusely distributed over an area with a 3 km radius, and located close to the hospital (5 km), in the region most densely populated with this hospital's patients. Near-perfect alarm rates were found for clusters at the opposite extremes of these parameters (40 extra visits within a 250 meter radius, 50 km from the hospital).
Conclusion
Measuring perturbations in the interpoint distance distribution is a sensitive method for detecting spatial clustering. When cases are clustered geographically, there is clearly power to detect clustering when the spatial distribution is represented by the M statistic, even when clusters are small in size. By varying independent parameters of simulated outbreaks, we have demonstrated empirically the limits of detection of different types of outbreaks.
doi:10.1186/1472-6947-5-19
PMCID: PMC1185545  PMID: 15969749
21.  Online clinical reasoning assessment with the Script Concordance test: a feasibility study 
Background
The script concordance (SC) test is an assessment tool that measures capacity to solve ill-defined problems, that is, reasoning in context of uncertainty. This tool has been used up to now mainly in medicine. The purpose of this pilot study is to assess the feasibility of the test delivered on the Web to French urologists.
Methods
The principles of SC test construction and the development of the Web site are described. A secure Web site was created with two sequential modules: (a) the first for the reference panel (n = 26), with two sub-tasks, to validate the content of the test and to elaborate the scoring system; (b) the second for candidates with different levels of experience in urology: board-certified urologists, residents, and medical students (5th or 6th year). The minimum expected numbers of participants were 150 urologists, 100 residents and 50 medical students. Each candidate is provided with an individual access code to the Web site and may complete the Script Concordance test several times during his or her curriculum.
Results
The Web site has been operational since April 2004. The reference panel validated the test in June of the same year, during the annual seminar of the French Society of Urology, and the site has been open to candidates since September 2004. Within six months, 80% of the target number of urologists, 68% of the target number of residents and 20% of the target number of students had taken the test online. During these six months, no technical problems were encountered.
Conclusion
The web-based SC test proved feasible, as two-thirds of the expected number of participants were included within six months. The psychometric properties (validity, reliability) of the test will be evaluated on a larger scale (N = 300). If the results are positive, this assessment tool could help urologists acquire clinical reasoning skills during their curriculum, which is crucial for professional competence.
doi:10.1186/1472-6947-5-18
PMCID: PMC1184080  PMID: 15967034
22.  Evidence-based patient choice: a prostate cancer decision aid in plain language 
Background
Decision aids (DA) to assist patients in evaluating treatment options and sharing in decision making have proliferated in recent years. Most require high literacy and do not use plain language principles. We describe one of the first attempts to design a decision aid using principles from reading research and document design. The plain language DA prototype addressed treatment decisions for localized prostate cancer. Evaluation assessed impact on knowledge, decisions, and discussions with doctors in men newly diagnosed with prostate cancer.
Methods
Document development steps included preparing an evidence-based DA in standard medical parlance, iteratively translating it to emphasize shared decision making and plain language in three formats (booklet, Internet, and audio-tape). Scientific review of medical content was integrated with expert health literacy review of document structure and design. Formative evaluation methods included focus groups (n = 4) and survey of a new sample of men newly diagnosed with prostate cancer (n = 60), compared with historical controls (n = 184).
Results
A transparent description of the development process and design elements is reported. Formative evaluation among newly diagnosed prostate cancer patients found the DA to be clear and useful in reaching a decision. Newly diagnosed patients reported more discussions with doctors about treatment options, and showed increases in knowledge of side effects of radiation therapy.
Conclusion
The plain language DA presenting medical evidence in text and numerical formats appears acceptable and useful in decision-making about localized prostate cancer treatment. Further testing should evaluate the impact of all three media on decisions made and quality of life in the survivorship period, especially among very low literacy men.
doi:10.1186/1472-6947-5-16
PMCID: PMC1168900  PMID: 15963238
23.  A Severe Acute Respiratory Syndrome extranet: supporting local communication and information dissemination 
Background
The objective of this study was to explore the use and perceptions of a local Severe Acute Respiratory Syndrome (SARS) Extranet and its potential to support future information and communication applications. The SARS Extranet was a single, managed, limited-access electronic system for local, provincial and other SARS control information.
Methods
During July, 2003, a web-based and paper-based survey was conducted with 53 SARS Steering Committee members in Hamilton. It assessed the use and perceptions of the Extranet that had been built to support the committee during the SARS outbreak. Before distribution, the survey was user-tested based on a think-aloud protocol, and revisions were made. Quantitative and qualitative questions were asked related to frequency of use of the Extranet, perceived overall usefulness of the resource, rationale for use, potential barriers, strengths and limitations, and potential future uses of the Extranet.
Results
The response rate was 69.4% (n = 34). Of all respondents, 30 (88.2%) reported that they had visited the site, and rated it highly overall (mean = 4.0; 1 = low to 5 = high). However, the site was rated 3.4 compared with other communications strategies used during the outbreak. Almost half of all respondents (44.1%) visited the site at least once every few days. The two most common reasons the 30 respondents visited the Extranet were to access SARS Steering Committee minutes (63.3%) and to access Hamilton medical advisories (53.3%). The most commonly cited potential future uses for the Extranet were the sending of private emails to public health experts (63.3%), and surveillance (63.3%). No one encountered personal barriers in his or her use of the site, but several mentioned that time and duplication of email information were challenges.
Conclusion
Despite higher rankings of various communication strategies during the SARS outbreak, such as email, meetings, teleconferences, and other web sites, users generally perceived a local Extranet as a useful support for the dissemination of local information during public health emergencies.
doi:10.1186/1472-6947-5-17
PMCID: PMC1166558  PMID: 15967040
24.  Manuscript Architect: a Web application for scientific writing in virtual interdisciplinary groups 
Background
Although scientific writing plays a central role in the communication of clinical research findings and consumes a significant amount of time from clinical researchers, few Web applications have been designed to systematically improve the writing process.
The main objective of this application was to separate the multiple tasks associated with scientific writing into smaller components and to provide a mechanism whereby sections of the manuscript (text blocks) could be assigned to different specialists.
Methods
Manuscript Architect was built using the Java language in conjunction with the classic lifecycle development method. The interface was designed for simplicity and economy of movement. Manuscripts are divided into multiple text blocks that the first author can assign to different co-authors. Each text block contains notes to guide co-authors regarding its central focus, previous examples, and an additional field for translation when the initial text is written in a language different from that of the target journal. Usability was evaluated using formal usability tests and field observations.
Results
The application showed excellent usability and integrated well with the regular writing habits of experienced researchers. Workshops were developed to train novice researchers, who showed an accelerated learning curve. The application has been used in over 20 different scientific articles and grant proposals.
Conclusion
The current version of Manuscript Architect has proven to be very useful in the writing of multiple scientific texts, suggesting that virtual writing by interdisciplinary groups is an effective manner of scientific writing when interdisciplinary work is required.
doi:10.1186/1472-6947-5-15
PMCID: PMC1180829  PMID: 15960855
25.  A draft framework for measuring progress towards the development of a national health information infrastructure 
Background
American public policy makers recently established the goal of providing the majority of Americans with electronic health records by 2014. This will require a National Health Information Infrastructure (NHII) that is far more complete than the one that is currently in its formative stage of development. We describe a conceptual framework to help measure progress toward that goal.
Discussion
The NHII comprises a set of clusters, such as Regional Health Information Organizations (RHIOs), which, in turn, are composed of smaller clusters and nodes such as private physician practices, individual hospitals, and large academic medical centers. We assess progress in terms of the availability and use of information and communications technology and the resulting effectiveness of these implementations. These three attributes can be studied in a phased approach because the system must be available before it can be used, and it must be used to have an effect. As the NHII expands, it can become a tool for evaluating itself.
Summary
The NHII has the potential to transform health care in America – improving health care quality, reducing health care costs, preventing medical errors, improving administrative efficiencies, reducing paperwork, and increasing access to affordable health care. While the President has set an ambitious goal of assuring that most Americans have electronic health records within the next 10 years, a significant question remains "How will we know if we are making progress toward that goal?" Using the definitions for "nodes" and "clusters" developed in this article along with the resulting measurement framework, we believe that we can begin a discussion that will enable us to define and then begin making the kinds of measurements necessary to answer this important question.
doi:10.1186/1472-6947-5-14
PMCID: PMC1177954  PMID: 15953388
