A total of 80 practices were invited to take part and a sample of 36 practices (45%) was recruited: 10 in Haringey, London (from 21 invited: 48%); eight in Nottinghamshire (18: 44%); nine in Oldham (19: 47%); and nine in Warwickshire (22: 41%). The characteristics of this sample of 36 practices compared to all practices in England are shown in the table below.
Overall characteristics and representativeness of the PMCPA sample (n = 36 practices) compared to all England.
Four practices withdrew during the piloting process and the remaining 32 practices fully completed the pilot (eight in Haringey; eight in Nottinghamshire; nine in Oldham; and seven in Warwickshire). Thirty of these 32 practices received an assessment visit by their PCT-appointed, three-person team. Two practices were unable to complete assessment visits due to difficulties timetabling a visit with assessors. There were therefore 30 practices with an assessment rating for each criterion.
Thirty-four practices uploaded 1817 documents onto the extranet site (mean 53.4, range 1–165); two of these practices did so before withdrawing from the pilot. Over 80% of documents were uploaded by practice staff in the final month, with most practices uploading documents during the last week and some in the last 24 hours of the pilot, although it is known anecdotally that this did not necessarily reflect when the majority of the work took place.
At most practices, all 30 criteria were deemed to be applicable by assessors and assigned a rating. Where all 30 criteria applied and were rated, eight practices scored 30/30 using a rating of 'good' or 'satisfactory' as a pass. Three of these practices were small practices (list size <4000 patients), two were medium-sized practices (list size: 4000–7999 patients), and three were larger practices (list size: ≥8000 patients). Of the practices passing 30/30 core criteria, two were in areas with a low deprivation index, five in areas with a medium deprivation index, and one in an area of high deprivation.
In some practices, certain criteria were not given a rating by assessors; for example, core criterion 9 related to minor surgery, and four practices stated that they did not provide a minor surgery service. As such, the denominator for ascribing an overall score to a practice was not always 30. One additional practice scored 28/28. This meant that nine practices (30%) passed 100% of the core criteria for which they were given a rating by their assessors. On this basis, practices passed between 27% and 100% of the core criteria for which they were given an assessment rating. Only two practices had scores of ≤65%, while half of all practices scored ≥90% within the 15-week timeframe.
The frequency with which each of the 30 core criteria was rated good/satisfactory or borderline/unsatisfactory/not applicable across the evaluation sample of practices is shown in Appendix 1.
Only one core criterion relating to confidentiality (CC19) was passed as ‘good’ or ‘satisfactory’ by all participating practices.
Achievement by practice characteristics
Across the 30 core criteria, single-handed practices had a lower median pass (good or satisfactory) score (73.3%) than group practices (93.3%). However, there were no statistically significant differences in achievement between practices of different sizes (P = 0.13). Single-handed practices were more likely to be located in areas with a higher IMD score (Mann–Whitney U test, P<0.01), but adjusting for IMD did not affect the (lack of) relationship with total core criteria scores (P = 0.17). Median total criteria scores in relation to practice size were 73.3 for single-handed practices, 96.6 for medium-sized practices, and 86.7 for larger practices. Although the median score was greatest for medium-sized practices, under quantile regression the differences between the group medians were not significant (P = 0.11), and remained non-significant after adjustment for IMD scores (P = 0.11). The figure below shows the individual practice total core criteria pass scores by practice deprivation for single-handed and group practices.
Individual practice total core criteria pass scores by practice IMD scores for single-handed practices and group practices.
Practice feedback highlighted seven key issues: (1) overall view of PMCPA; (2) the role of accreditation; (3) different motivations for taking part; (4) the dominant role of practice managers in the workload associated with implementing the scheme; (5) facilitators for implementation; (6) patient benefit — relevance of PMCPA to quality improvement; and (7) recommendations for improving the scheme.
Overall view of PMCPA
All practices felt that PMCPA was relevant to and aligned with family practice priorities, reflected quality in primary care, and was a worthwhile use of practice time. PMCPA was seen as promoting improvements in organisational standards. However, the main risk identified was that externally imposed standards could be treated as a 'tick-box' exercise, with organisations seeking to meet the target without necessarily reflecting on how the issues contained within the standard affect their own setting. Most practices, however, saw PMCPA not as a tick-box exercise but as an opportunity to change and benefit the practice.
Many managers and doctors, however, felt that the scheme was too heavily populated by criteria whose evidence for external assessment focused on demonstrating the existence of a protocol or procedure. These health professionals stated that PMCPA should focus more on evidence of changes that benefit the practice.
The role of accreditation
There was general consensus about the value of accreditation, but different interpretations of its role. Some practices reported that the role of accreditation is to demonstrate compliance with an accepted set of standards. Others explicitly emphasised formative, practice-specific quality improvement as a reflective exercise rather than a box-ticking one. The RCGP was, however, seen as the arbiter of professional standards, and its leadership of the scheme was seen as key by almost all practices.
Motivations for taking part
Practices had different reasons for taking part in the accreditation pilot. Taking part was most often described as a team-development exercise focusing on practice quality improvement, but practices also took part to address a perceived problem, such as deficiencies in current standards or team working, or to demonstrate how good they perceived their practice to be.
Practice managers carried out 90–95% of the actual workload. GPs tended to make the initial decision to take part and then adopted a largely hands-off approach, being used as a 'checking-in point' by most managers. The workload was higher than expected in most practices, although almost all practices emphasised that this reflected the 15-week duration of the pilot, its timing (over the summer months), and the fact that the pilot was implemented at a time when many practices were busy with other concurrent demands, such as the information management and technology directed enhanced service.
Facilitators for implementation
While most of the workload was undertaken by managers within practices, both doctor and team engagement at some level were critical to success, and the practices that did not complete PMCPA were usually those where only the manager was truly engaged in the scheme. A combination of teamwork, designated roles and responsibilities within the team, protected time to work on PMCPA, engagement and ownership by all team members, leadership (from a manager and/or doctor), and doctor buy-in were crucial factors in implementing PMCPA as a learning organisation.
Patient benefit — relevance of PMCPA to quality improvement
Many participants explained how using PMCPA as a learning exercise within the practice, with a focus on internal reflective quality-improvement strategies as well as on the targets for external assessment, led to direct patient benefits relevant to their practice population. In many practices, staff explained how having to write practice protocols in order to meet the core criteria formalised tacit knowledge. Other examples included facilitating changes in the structures and processes of care that improved patient safety, making services more responsive to the needs of patients, and developing a greater community focus. A minority of managers and doctors contextualised the role of schemes like PMCPA from the perspective of 'looking through the eyes of patients'.
Examples of how practices used PMCPA to make their practice more responsive to the needs of their patients included thinking about how to work better with local interpreting services, creating a histology safety net to improve the recall system after minor surgery, training healthcare assistants to understand abnormal results picked up in new patient checks, creating fail-safe systems for non-collection of prescriptions, and a variety of initiatives to improve information giving to and consultation with patients.
Recommendations for improving the scheme
Practice feedback on how to improve PMCPA emphasised 2–3 years as a realistic timeframe for completing the scheme, and a focus on evidence of implementation and learning rather than the simple presence of a written protocol. The experience of piloting PMCPA and listening to feedback in practices of different sizes clearly showed the need to make PMCPA, and the evidence presented to meet its standards, practice specific and flexible; for example, the evidence used to pass a criterion might not be the same in every practice. Many participants also recommended that PMCPA place a greater emphasis on patient benefit and patient responsiveness.