
3a) The success of a process improvement initiative is based on a positive change culture


Question


3
a) The success of a process improvement initiative is based on a positive change culture and a high level of acceptance from all members of an organisation. Discuss the activities that should be carried out to achieve this, and justify your answer.

b) A general statement was made about the Software Process Improvement (SPI) approach in the software community, which says that a high-quality software process will result in a high-quality software product.
Do you agree with this statement, and why? (Your answer should cover a brief discussion of software product, software process, and the SPI principle.)
c) With reference to Wiegers's article, one of the ways that an SPI programme may fail is if "achieving a CMM level becomes the primary goal". Using your knowledge of SPI and the CMM, explain what the symptoms might be, and propose a solution to address the problem.

Explanation / Answer

The necessity for quality and safety improvement initiatives permeates health care.1, 2 Quality health care is defined as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge"3 (p. 1161). According to the Institute of Medicine (IOM) report, To Err Is Human,4 the majority of medical errors result from faulty systems and processes, not individuals. Processes that are inefficient and variable, changing case mix of patients, health insurance, differences in provider education and experience, and numerous other factors contribute to the complexity of health care. With this in mind, the IOM also asserted that today's health care industry functions at a lower level than it can and should, and it put forth the following six aims of health care: effective, safe, patient-centered, timely, efficient, and equitable.2 The aims of effectiveness and safety are targeted through process-of-care measures, assessing whether providers of health care perform processes that have been demonstrated to achieve the desired aims and avoid those processes that are predisposed toward harm.

The goals of measuring health care quality are to determine the effects of health care on desired outcomes and to assess the degree to which health care adheres to processes based on scientific evidence or agreed to by professional consensus and is consistent with patient preferences. Because errors are caused by system or process failures,5 it is important to adopt various process-improvement techniques to identify inefficiencies, ineffective care, and preventable errors to then influence changes associated with systems. Each of these techniques involves assessing performance and using findings to inform change. This chapter will discuss strategies and tools for quality improvement—including failure modes and effects analysis, Plan-Do-Study-Act, Six Sigma, Lean, and root-cause analysis—that have been used to improve the quality and safety of health care.

Measures and Benchmarks

Efforts to improve quality need to be measured to demonstrate "whether improvement efforts (1) lead to change in the primary end point in the desired direction, (2) contribute to unintended results in different parts of the system, and (3) require additional efforts to bring a process back into acceptable ranges"6 (p. 735). The rationale for measuring quality improvement is the belief that good performance reflects good-quality practice, and that comparing performance among providers and organizations will encourage better performance. In the past few years, there has been a surge in measuring and reporting the performance of health care systems and processes.1, 7–9 While public reporting of quality performance can be used to identify areas needing improvement and ascribe national, State, or other levels of benchmarks,10, 11 some providers have been sensitive to comparative performance data being published.12 Another audience for public reporting, consumers, has had problems interpreting the data in reports and has consequently not used the reports to the extent hoped to make informed decisions for higher-quality care.13–15 The complexity of health care systems and delivery of services, the unpredictable nature of health care, and the occupational differentiation and interdependence among clinicians and systems16–19 make measuring quality difficult.
One of the challenges in using measures in health care is the attribution variability associated with high-level cognitive reasoning, discretionary decisionmaking, problem-solving, and experiential knowledge.20–22 Another measurement challenge is whether a near miss could have resulted in harm or whether an adverse event was a rare aberration or likely to recur.23 The Agency for Healthcare Research and Quality (AHRQ), the National Quality Forum, the Joint Commission, and many other national organizations endorse the use of valid and reliable measures of quality and patient safety to improve health care. Many of these useful measures that can be applied to the different settings of care and care processes can be found at AHRQ's National Quality Measures Clearinghouse (http://www.qualitymeasures.ahrq.gov) and the National Quality Forum's Web site (http://www.qualityforum.org). These measures are generally developed through a process including an assessment of the scientific strength of the evidence found in peer-reviewed literature, evaluating the validity and reliability of the measures and sources of data, determining how best to use the measure (e.g., determine if and how risk adjustment is needed), and actually testing the measure.24, 25

Measures of quality and safety can track the progress of quality improvement initiatives using external benchmarks. Benchmarking in health care is defined as the continual and collaborative discipline of measuring and comparing the results of key work processes with those of the best performers26 in evaluating organizational performance. There are two types of benchmarking that can be used to evaluate patient safety and quality performance. Internal benchmarking is used to identify best practices within an organization, to compare best practices within the organization, and to compare current practice over time. The information and data can be plotted on a control chart with statistically derived upper and lower control limits. However, using only internal benchmarking does not necessarily represent the best practices elsewhere. Competitive or external benchmarking involves using comparative data between organizations to judge performance and identify improvements that have proven to be successful in other organizations. Comparative data are available from national organizations, such as AHRQ's annual National Health Care Quality Report1 and National Healthcare Disparities Report,9 as well as several proprietary benchmarking companies or groups (e.g., the American Nurses Association's National Database of Nursing Quality Indicators).

Quality Improvement Strategies

More than 40 years ago, Donabedian27 proposed measuring the quality of health care by observing its structure, processes, and outcomes. Structure measures assess the accessibility, availability, and quality of resources, such as health insurance, bed capacity of a hospital, and number of nurses with advanced training. Process measures assess the delivery of health care services by clinicians and providers, such as using guidelines for care of diabetic patients. Outcome measures indicate the final result of health care and can be influenced by environmental and behavioral factors. Examples include mortality, patient satisfaction, and improved health status. Twenty years later, health care leaders borrowed techniques from the work of Deming28 in rebuilding the manufacturing businesses of post-World War II Japan.
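Before turning to those techniques, the statistically derived control limits mentioned above in connection with internal benchmarking are simple to illustrate. The following minimal sketch (Python, using entirely hypothetical monthly values; the 1.128 divisor is the standard individuals-chart moving-range constant) shows one common way such limits are computed:

```python
# Minimal sketch: three-sigma control limits for an individuals chart, using
# hypothetical monthly values of a process measure (e.g., a rate per 1,000 patient-days).
from statistics import mean

monthly_rate = [3.1, 2.8, 3.4, 2.9, 3.6, 3.0, 2.7, 3.3, 2.9, 3.2, 4.9, 3.1]  # hypothetical data

centre = mean(monthly_rate)

# Short-term variation estimated from the average moving range
# (MR-bar / 1.128, the usual constant for an individuals chart).
moving_ranges = [abs(b - a) for a, b in zip(monthly_rate, monthly_rate[1:])]
sigma_hat = mean(moving_ranges) / 1.128

ucl = centre + 3 * sigma_hat            # upper control limit
lcl = max(0.0, centre - 3 * sigma_hat)  # lower control limit (a rate cannot go below zero)

for month, rate in enumerate(monthly_rate, start=1):
    status = "out of control" if rate > ucl or rate < lcl else "in control"
    print(f"month {month:2d}: {rate:.1f}  ({status})")

print(f"centre = {centre:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```

Points that fall outside these limits signal special-cause variation worth investigating, which is how a control chart separates routine fluctuation from a genuine change in practice.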
Deming, the father of Total Quality Management (TQM), promoted "constancy of purpose" and systematic analysis and measurement of process steps in relation to capacity or outcomes. The TQM model is an organizational approach involving organizational management, teamwork, defined processes, systems thinking, and change to create an environment for improvement. This approach incorporated the view that the entire organization must be committed to quality and improvement to achieve the best results.29 In health care, continuous quality improvement (CQI) is used interchangeably with TQM. CQI has been used as a means to develop clinical practice30 and is based on the principle that there is an opportunity for improvement in every process and on every occasion.31 Many in-hospital quality assurance (QA) programs generally focus on issues identified by regulatory or accreditation organizations, such as checking documentation, reviewing the work of oversight committees, and studying credentialing processes.32

There are several other strategies that have been proposed for improving clinical practice. For example, Horn and colleagues discussed clinical practice improvement (CPI) as a "multidimensional outcomes methodology that has direct application to the clinical management of individual patients"33 (p. 160). CPI, an approach led by clinicians that attempts a comprehensive understanding of the complexity of health care delivery, uses a team, determines a purpose, collects data, assesses findings, and then translates those findings into practice changes. From these models, management and clinician commitment and involvement have been found to be essential for the successful implementation of change.34–36 From other quality improvement strategies, there has been particular emphasis on the need for management to have faith in the project, communicate the purpose, and empower staff.37

In the past 20 years, quality improvement methods have "generally emphasize[d] the importance of identifying a process with less-than-ideal outcomes, measuring the key performance attributes, using careful analysis to devise a new approach, integrating the redesigned approach with the process, and reassessing performance to determine if the change in process is successful"38 (p. 9). Besides TQM, other quality improvement strategies have come forth, including the International Organization for Standardization's ISO 9000, Zero Defects, Six Sigma, Baldrige, and the Toyota Production System/Lean Production.6, 39, 40 Quality improvement is defined "as systematic, data-guided activities designed to bring about immediate improvement in health care delivery in particular settings"41 (p. 667). A quality improvement strategy is defined as "any intervention aimed at reducing the quality gap for a group of patients representative of those encountered in routine practice"38 (p. 13). Shojania and colleagues38 developed a taxonomy of quality improvement strategies (see Table 1), which implies that the choice of quality improvement strategy and methodology depends upon the nature of the quality improvement project. Many other strategies and tools for quality improvement can be accessed at AHRQ's quality tools Web site (www.qualitytools.ahrq.gov) and patient safety Web site (www.patientsafety.gov).
Table 1. Taxonomy of Quality Improvement Strategies With Examples of Substrategies

Quality improvement projects and strategies differ from research: while research attempts to assess and address problems that will produce generalizable results, quality improvement projects can include small samples, frequent changes in interventions, and adoption of new strategies that appear to be effective.6 In a review of the literature on the differences between quality improvement and research, Reinhardt and Ray42 proposed four criteria that distinguish the two: (1) quality improvement applies research into practice, while research develops new interventions; (2) risk to participants is not present in quality improvement, while research could pose risk to participants; (3) the primary audience for quality improvement is the organization, and the information from analyses may be applicable only to that organization, while research is intended to be generalizable to all similar organizations; and (4) data from quality improvement are organization-specific, while research data are derived from multiple organizations. The lack of scientific health services literature has inhibited the acceptance of quality improvement methods in health care,43, 44 but new rigorous studies are emerging. It has been asserted that a quality improvement project can be considered more like research when it involves a change in practice, affects patients and assesses their outcomes, employs randomization or blinding, and exposes patients to additional risks or burdens—all in an effort towards generalizability.45–47 Regardless of whether the project is considered research, human subjects need to be protected by ensuring respect for participants, securing informed consent, and ensuring scientific value.41, 46, 48

Plan-Do-Study-Act (PDSA)

Quality improvement projects and studies aimed at making positive changes in health care processes to effect favorable outcomes can use the Plan-Do-Study-Act (PDSA) model. This method has been widely used by the Institute for Healthcare Improvement for rapid-cycle improvement.31, 49 One of the unique features of this model is the cyclical nature of impacting and assessing change, most effectively accomplished through small and frequent PDSAs rather than big and slow ones,50 before changes are made systemwide.31, 51 The purpose of PDSA quality improvement efforts is to establish a functional or causal relationship between changes in processes (specifically behaviors and capabilities) and outcomes. Langley and colleagues51 proposed three questions before using the PDSA cycles: (1) What is the goal of the project? (2) How will it be known whether the goal was reached? and (3) What will be done to reach the goal? The PDSA cycle starts with determining the nature and scope of the problem, what changes can and should be made, a plan for a specific change, who should be involved, what should be measured to understand the impact of change, and where the strategy will be targeted. Change is then implemented, and data and information are collected. Results from the implementation study are assessed and interpreted by reviewing several key measurements that indicate success or failure.
Lastly, action is taken on the results by implementing the change or beginning the process again.51

Six Sigma

Six Sigma, originally designed as a business strategy, involves improving, designing, and monitoring processes to minimize or eliminate waste while optimizing satisfaction and increasing financial stability.52 The performance of a process—or the process capability—is used to measure improvement by comparing the baseline process capability (before improvement) with the process capability after piloting potential solutions for quality improvement.53 There are two primary methods used with Six Sigma. One method inspects process outcomes and counts the defects, calculates a defect rate per million, and uses a statistical table to convert the defect rate per million to a σ (sigma) metric. This method is applicable to preanalytic and postanalytic processes (a.k.a. pretest and post-test studies). The second method uses estimates of process variation to predict process performance by calculating a σ metric from the defined tolerance limits and the variation observed for the process. This method is suitable for analytic processes in which the precision and accuracy can be determined by experimental procedures.

One component of Six Sigma uses a five-phased process that is structured, disciplined, and rigorous, known as the define, measure, analyze, improve, and control (DMAIC) approach.53, 54 To begin, the project is identified, historical data are reviewed, and the scope of expectations is defined. Next, continuous total quality performance standards are selected, performance objectives are defined, and sources of variability are identified. As the new project is implemented, data are collected to assess how well changes improved the process. To support this analysis, validated measures are developed to determine the capability of the new process.

Six Sigma and PDSA are interrelated. The DMAIC methodology builds on Shewhart's plan, do, check, and act cycle.55 The key elements of Six Sigma relate to PDSA as follows: the plan phase of PDSA corresponds to Six Sigma's define core processes, key customers, and customer requirements; the do phase of PDSA corresponds to Six Sigma's measure performance; the study phase of PDSA corresponds to Six Sigma's analyze; and the act phase of PDSA corresponds to Six Sigma's improve and integrate.56

Toyota Production System/Lean Production System

Application of the Toyota Production System—used in the manufacturing process of Toyota cars57—resulted in what has become known as the Lean Production System or Lean methodology. This methodology overlaps with the Six Sigma methodology, but differs in that Lean is driven by the identification of customer needs and aims to improve processes by removing activities that are non-value-added (a.k.a. waste). Steps in the Lean methodology involve maximizing value-added activities in the best possible sequence to enable continuous operations.58 This methodology depends on root-cause analysis to investigate errors and then to improve quality and prevent similar errors. Physicians, nurses, technicians, and managers are increasing the effectiveness of patient care and decreasing costs in pathology laboratories, pharmacies,59–61 and blood banks61 by applying the same principles used in the Toyota Production System.
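Returning briefly to the first of the two Six Sigma methods described above, the conversion from counted defects to a σ metric is simple arithmetic. The following minimal sketch uses purely hypothetical defect counts; the +1.5 term is the long-term shift convention built into most published sigma conversion tables:

```python
# Minimal sketch: defect counts -> defects per million opportunities (DPMO) -> sigma metric.
# The counts are hypothetical; the +1.5 is the conventional long-term shift
# assumed by most Six Sigma lookup tables.
from statistics import NormalDist

def sigma_metric(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# Baseline process capability vs. capability after piloting a change (hypothetical counts).
baseline = sigma_metric(defects=310, units=10_000)  # 31,000 DPMO -> roughly 3.4 sigma
after = sigma_metric(defects=45, units=10_000)      # 4,500 DPMO  -> roughly 4.1 sigma

print(f"baseline: {baseline:.2f} sigma, after improvement: {after:.2f} sigma")
```

The second method works from process variation instead: it compares the distance between the process mean and the nearer tolerance limit with the observed standard deviation to estimate the σ metric, which is why it suits analytic processes whose precision and accuracy can be measured experimentally.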
Two reviews of projects using Toyota Production System methods reported that health care organizations improved patient safety and the quality of health care by systematically defining the problem; using root-cause analysis; then setting goals, removing ambiguity and workarounds, and clarifying responsibilities. When it came to processes, team members in these projects developed action plans that improved, simplified, and redesigned work processes.59, 60 According to Spear, the Toyota Production System method was used to make the "following crystal clear: which patient gets which procedure (output); who does which aspect of the job (responsibility); exactly which signals are used to indicate that the work should begin (connection); and precisely how each step is carried out"60 (p. 84). Factors involved in the successful application of the Toyota Production System in health care are eliminating unnecessary daily activities associated with "overcomplicated processes, workarounds, and rework"59 (p. 234), involving front-line staff throughout the process, and rigorously tracking problems as they are experimented with throughout the problem-solving process.

Root Cause Analysis

Root cause analysis (RCA), used extensively in engineering62 and similar to the critical incident technique,63 is a formalized investigation and problem-solving approach focused on identifying and understanding the underlying causes of an event as well as potential events that were intercepted. The Joint Commission requires RCA to be performed in response to all sentinel events and expects, based on the results of the RCA, the organization to develop and implement an action plan consisting of improvements designed to reduce future risk of events and to monitor the effectiveness of those improvements.64 RCA is a technique used to identify trends and assess risk that can be used whenever human error is suspected,65 with the understanding that system factors, rather than individual factors, are likely the root cause of most problems.2, 4 A similar procedure is the critical incident technique, where after an event occurs, information is collected on the causes and actions that led to the event.63

An RCA is a reactive assessment that begins after an event, retrospectively outlining the sequence of events leading to that identified event, charting causal factors, and identifying root causes to completely examine the event.66 Because it is a labor-intensive process, ideally a multidisciplinary team trained in RCA triangulates or corroborates major findings and increases the validity of findings.67 Taken one step further, the notion of aggregate RCA (used by the Veterans Affairs (VA) Health System) is purported to use staff time efficiently and involves several simultaneous RCAs that focus on assessing trends, rather than an in-depth case assessment.68 Using a qualitative process, the aim of RCA is to uncover the underlying cause(s) of an error by looking at enabling factors (e.g., lack of education), including latent conditions (e.g., not checking the patient's ID band) and situational factors (e.g., two patients in the hospital with the same last name) that contributed to or enabled the adverse event (e.g., an adverse drug event). Those involved in the investigation ask a series of key questions, including what happened, why it happened, what were the most proximate factors causing it to happen, why those factors occurred, and what systems and processes underlie those proximate factors.
Answers to these questions help identify ineffective safety barriers and causes of problems so similar problems can be prevented in the future. Often, it is important to also consider events that occurred immediately prior to the event in question, because other remote factors may have contributed.68 The final step of a traditional RCA is developing recommendations for system and process improvement(s), based on the findings of the investigation.68 The importance of this step is supported by a review of the literature on root-cause analysis, in which the authors conclude that there is little evidence that RCA can improve patient safety by itself.69 A nontraditional strategy, used by the VA, is the aggregate RCA process, where several simultaneous RCAs are used to examine multiple cases in a single review for certain categories of events.68, 70

Due to the breadth of types of adverse events and the large number of root causes of errors, consideration should be given to how to differentiate system from process factors, without focusing on individual blame. The notion has been put forth that it is a truly rare event for errors to be associated with irresponsibility, personal neglect, or intention,71 a notion supported by the IOM.4, 72 Yet efforts to categorize individual errors—such as the Taxonomy of Error Root Cause Analysis of Practice Responsibility (TERCAP), which focuses on "lack of attentiveness, lack of agency/fiduciary concern, inappropriate judgment, lack of intervention on the patient's behalf, lack of prevention, missed or mistaken MD/healthcare provider's orders, and documentation error"73 (p. 512)—may distract the team from investigating systems and process factors that can be modified through subsequent interventions. Even so, the majority of individual factors can be addressed through education, training, and installing forcing functions that make errors difficult to commit.