
Assuring the Quality of Medical Care: The Impact of Outcome Measurement and Practice Standards

Published online by Cambridge University Press:  29 April 2021

Extract

For the better part of the last decade, the major goal of health care policy has been to reduce health care costs. This has raised fears that the quality of care may suffer, as providers cut corners in response to cost containment pressures from the government and other third-party payers.

These concerns over quality in turn have increased interest in the complex systems that monitor and regulate health care quality. Three main questions have been raised about the ability of quality assurance systems to assure the quality of care. First, they may not be sufficiently effective, especially given the threat to quality posed by efforts to contain costs. For example, quality assurance has been criticized for focusing on a small number of aberrant physicians and hospitals—a so-called "bad apples" approach—rather than on improving the overall quality of care, and for failing to identify more than a handful of the poor-quality providers believed to exist.

Copyright © 1990 American Society of Law, Medicine & Ethics


References

Quality may suffer if savings are obtained by reducing beneficial as opposed to wasteful services. The identification of wasteful care is difficult and expensive, however. See Mehlman, "Health Care Cost Containment and Medical Technology: A Critique of Waste Theory," 36 Case W. Res. L. Rev. 778 (1986).
Quality of care can be defined as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge." Institute of Medicine, 1 Medicare: A Strategy for Quality Assurance 21 (1990) (hereinafter Strategy).
See Berwick, "Continuous Improvement as an Ideal in Health Care," 320 N. Eng. J. Med. 53 (1989).
See U.S. Department of Health and Human Services, Office of the Inspector General, National DRG Validation Study: Quality of Patient Care in Hospitals (1988) (hereinafter "GAO Quality Study").
The term "providers" refers to health care professionals such as physicians and nurses, as well as to institutions that deliver health care, such as hospitals and health maintenance organizations.
The Medicare quality assurance system relies for the most part on nurses to review cases initially for quality problems. A provider may not be sanctioned under Medicare, however, until his case has been reviewed by a physician. See 42 U.S.C. 1320c-3(c).
See Strategy, supra note 2, at 188 (physicians do not regard Medicare PRO physician reviewers as "peers").
See 42 U.S.C. 1320c-5(b)(1)(4) (1982). This is rationalized on the basis of the need to protect patients who might be injured by a poor quality provider before a hearing can be held. See Thorbus v. Bowen, 848 F.2d 901, 904 (8th Cir. 1988); Cassein v. Bowen, 824 F.2d 791, 797 (9th Cir. 1987). The courts have uniformly upheld these regulations against due process challenges. See Varandani v. Bowen, 824 F.2d 307 (4th Cir. 1987), cert. denied, 484 U.S. 1051 (1988); Ritter v. Cohen, 797 F.2d 119 (3rd Cir. 1986); Koerpel v. Heckler, 797 F.2d 858 (9th Cir. 1986). See also Jost, "Administrative Law Issues Involving the Medicare Utilization and Quality Control Peer Review Organization (PRO) Program: Analysis and Recommendations," 50 Ohio L.J. 1, 37–39 (1989).
See Strategy, supra note 2, at 171. Note that this only comprises program outlays, and not the providers' compliance costs.
See, e.g., Ellman, "Monitor Mania: Physician Regulation Runs Amok!" 20 Loy. U. Chi. L.J. 721, 773 (1989) ("… the lack of coordination, the duplication and cross-purposes at which the myriad of monitoring entities exist generate more expenditures in bureaucratic waste—just when these dollars are tight and the need for them infinite—than they may save").
We might still be tempted to intervene if we felt that the provider's behavior created unnecessary risks for the patient—that is, was faulty—even though those risks never had a chance to materialize because of the effect of the patient's underlying illness. But our reaction would have to be premised on a sufficient number of other cases in which similar provider behavior did cause patient harm—again requiring a determination of causation.Google Scholar
The provider might be faulted for providing unnecessary care, but unless this subjected the patient to unnecessary risks, it would not raise a direct quality concern. By wasting resources, however, the provider may have compromised the quality of care for other patients.
See, e.g., Eddy, "Variation in Physician Practice: The Role of Uncertainty," 3 Health Affairs 74 (1984) (patient outcomes are difficult to measure due to vague diagnostic criteria, observer variation and multiple procedural choices).
See Strategy, supra note 2, at 154–156. The PROs review a certain percentage of all cases, plus a number of cases that are originally identified for other reasons, such as those involving unusually long stays in the hospital. Id. at 149–154.
Id. at 170 (PROs must investigate written beneficiary complaints).
See 42 C.F.R. 1004.40(b), 1004.50(a) (1988). As a practical matter, almost all Medicare quality assurance activities relate to care provided in an institutional setting such as a hospital or an HMO. While PROs theoretically are responsible for the quality of care delivered in physicians' offices, they conduct virtually no surveillance of that sector. Yet it is noteworthy that 20% of malpractice claims are estimated to result from care received in physicians' offices. See Danzon, P., The Frequency and Severity of Medical Malpractice Claims 25, n.3 (Rand Corporation 1982).
See Strategy, supra note 2, at 163.
42 C.F.R. 1004.40(b)(6); 1004.50(a)(5) (1988).
See 42 C.F.R. 1004.80.
Corrective actions include educational requirements, such as attending continuing medical education classes, mentoring and intensification of the PRO's review of future patient care by the provider. See Strategy, supra note 2, at 161–63. HHS has also proposed to allow PROs to deny payment for substandard care. Id. at 190–91.
See 42 C.F.R. 1004.130 (1988).
See GAO Quality Study, supra note 6.
These and the subsequent PRO data are based on Health Care Financing Administration, Peer Review Organization Data Summary (May 1989), unless otherwise indicated.
McIlrath, "Receding Tide of Physician Sanctions by Medicare PROs Triggers Debate," AMA News, April 21, 1989, at 59, col. 1.
See Grad, "Medical Malpractice and the Crisis of Insurance Availability: The Waning Options," 36 Case W. Res. L. Rev. 1058, 1072, n. 57 (1986).
Harvard University, Patients, Doctors, and Lawyers: Medical Injury, Malpractice Litigation and Patient Compensation in New York 1–7 (Final Report 1990). This generally agrees with the findings of a 1975 study in California. See Danzon, P., Medical Malpractice: Theory, Evidence, and Public Policy 25 (1985).
See Appeal of Schramm, infra note 101, and cases cited therein.
See "State Medical Boards Disciplined Record Number of Doctors in '85," N.Y. Times, Nov. 9, 1986, at 1, col. 1.
Kusserow, Handley, and Yessian, "An Overview of State Medical Discipline," 257 J. Am. Med. Assn. 820, 822 (1987).
Id. at 820 (boards are in "an extremely vulnerable position, with investigatory and administrative resources well below the level necessary to handle the job before them effectively").
See id. ("[l]aborious and costly procedures … contribute to the time and complexity of internal review and due process hearings.").
See Cruz, "The Duty of Fair Procedure and the Hospital Medical Staff: Possible Extension in Order to Protect Private Sector Employees," 16 Cap. U.L. Rev. 59, 71 (1986).
The rules typically are by-laws of the organized medical staff. See, e.g., Pepple v. Parkview Mem. Hospital, Inc., 511 N.E.2d 467 (Ind. App. Ct. 1987) (judicial review of privileging decisions based on provisions of by-laws).
See Champa, B., "Medical Staff Privileges Decisions in Private Hospitals: Do Physicians Actually Receive Due Process?" (April 27, 1989) (unpublished manuscript).
The courts have been willing to hear antitrust suits involving physicians and hospitals. See Arizona v. Maricopa, 457 U.S. 332 (1982) (physicians not exempt from price fixing prohibitions under a learned profession exception); Pinhas v. Summit Health Ltd., 880 F.2d 1108 (1989) (physician states antitrust claim alleging privileges restricted for anticompetitive purposes). Physicians have attempted to use a statutory defense in these cases when the action has involved peer review organizations. The Health Care Quality Improvement Act of 1986, 42 U.S.C. 11101–11152 (Supp. IV 1986), protects peer review groups from antitrust liability. (Many states have similar statutes. See, e.g., Ill. Rev. Stat. ch. 110, para. 8–2102 (1987)). However, the courts have held that these statutes only provide a defense if there is a clearly articulated state policy and the state actively supervises the peer review. See Patrick v. Burget, 487 U.S. 1243 (1988) (physicians acting on hospital peer review boards not immune from violations because there was no active supervision of their activities by state officials); Pinhas, 880 F.2d 1108 (1989) (no immunity since no active supervision). A defense that the physician lacks professional competence or character will avoid antitrust liability. Miller v. Indiana Hosp., 843 F.2d 139 (3rd Cir. 1988); Weiss v. York Hosp., 745 F.2d 786 (3rd Cir. 1984), cert. denied, 470 U.S. 1060 (1985); Marin v. Citizens Memorial Hosp., 700 F. Supp. 354 (S.D. Tex. 1988) (action motivated by valid concern for quality of care not an antitrust violation).
This organization, formerly known as the Joint Commission for the Accreditation of Hospitals (JCAH), is composed primarily of representatives from the American Hospital Association and the American Medical Association. See generally Jost, "The Joint Commission on Accreditation of Hospitals: Private Regulation of Health Care and the Public Interest," 24 B.C.L.R. 835 (1983). Approximately 80% of the 6,609 hospitals participating in Medicare are accredited by either the JCAHO or the American Osteopathic Association. See McGeary, Study on Conditions of Participation and Accreditation 1 (Institute of Medicine 1989).
The JCAHO manual sets out the general requirements of a quality assurance program. Accreditation Manual for Hospitals 1990, 211–217 (1989). It outlines the services and departments that require monitoring (for example, surgical case review, nursing, pharmacy, and laboratory services) and mandates the use of "indicators to monitor important aspects of care." Id. at 214. Data collection and trend analysis of these indicators are required to identify and resolve substandard care problems. However, the hospital has discretion in determining exactly what these indicators are. The manual requires only that they be objective, measurable and based on current clinical knowledge. Id. at 216.
Only between 10 and 15 of the 1800 hospitals surveyed each year by the JCAHO lose their accreditation or close voluntarily. Strategy, supra note 2, at 129.
See U.S. Office of Technology Assessment, The Quality of Medical Care: Information for Consumers 201 (1988)(describing limited release of JCAHO accreditation results). For a critical appraisal of the JCAHO process, see Bogdanich, “Prized by Hospitals, Accreditation Hides Perils Patients Face,” Wall St. J., Oct. 12, 1988, at A-1, col. 6. The JCAHO has recently instituted a program to create a category of conditionally accredited marginal hospitals and to make their identities public. See, “Accreditation Plan Identifies ‘Marginal Hospitals’,” Am. Med. News, June 2, 1989, at 4, col. 4.Google Scholar
See Mehlman, "Fiduciary Contracting: Limitations on Bargaining Between Patients and Health Care Providers," 51 U. Pitt. L. Rev. 365 (1990).
See U.S. Office of Technology Assessment, The Quality of Medical Care: Information for Consumers 94 (1988) ("The cumulation of information in its current methodological state may be helpful to consumers who are relatively sophisticated. Those who are less sophisticated will probably need a considerable amount of help interpreting the data.") For criticisms of the HCFA mortality data, see, e.g., U.S. General Accounting Office, Medicare: An Assessment of HCFA's 1988 Hospital Mortality Analysis (1988); Rosen and Green, "The HCFA Excess Lists: A Methodological Critique," 32 Hosp. & Health Sciences Admin. 119 (1987).
See Proposed Rules on Denial of Payment for Substandard Quality Care Under Medicare, 54 Fed. Reg. 1956, 1958–59 (1989). Undoubtedly one reason for the concern with accuracy is that the proposed regulations would require that the patient and provider institution be notified when a PRO denied payment on grounds of poor quality. Id. at 1960. This might stimulate the patient to initiate a malpractice suit and the provider to open an investigation leading to revocation or limitation of admitting privileges.Google Scholar
Even data aggregating malpractice claims against individual providers over time is considered to be too imprecise to be a basis for quality comparisons. See U.S. Office of Technology Assessment, The Quality of Medical Care: Information for Consumers, 133–38 (1988).
Donabedian, "Evaluating the Quality of Medical Care," 44 Milbank Mem. Fund Q. 166 (1966). Donabedian has written extensively on the subject. See Donabedian, "The Assessment of Technology and Quality," 2 Intl. J. of Techn. Assessment in Health Care 487 (1988); Donabedian, "Monitoring: The Eyes and Ears of Health Care," Health Progress 38–43 (1988); Donabedian, "The Quality of Care: How Can It Be Assessed?" 260 J. Am. Med. Assn. 1743 (1988); Donabedian, "Quality Assessment and Assurance: Unity of Purpose, Diversity of Means," 25 Inquiry 173 (1988); Donabedian, The Definition of Quality and Approaches to its Management (vols. I–III, 1980, 1982, 1984).
Elinson describes the outcome of care in terms of the "five D's": death, disease, disability, discomfort and dissatisfaction. Donabedian et al., "Advances in Health Assessment Conference Discussion Panel," 40 J. Chronic Diseases 183S (Suppl. 1, 1987) (quoting Elinson). Lohr transposes these into the more positive categories of survival, physiologic, physical and emotional health, and patient satisfaction. Lohr, "Outcome Measurement: Concepts and Questions," 25 Inquiry 37 (1988).
Note that we are not necessarily concluding in either case that the patient's death was culpable—e.g. that the provider was negligent. Instead the provider might have given the patient optimal care, but death might have been unavoidable. A certain number of patients will die as the result even of the best surgical techniques, for example. Alternatively, the provider may have made a mistake that caused the patient's death, but the mistake may have been one that even a reasonable provider might have made.Google Scholar
For a list and description, see Chassin, Kosecoff & Dubois, Health Care Quality Assessment (Midwest Business Group on Health), 28–61 (1989) [hereinafter Midwest Business Group on Health].
See, e.g., Midwest Business Group on Health, supra note 47, at 59–60 ("[a]lthough not fully validated by an independent group, MediQual's MedisGroups has been used unduly and appears to provide its clients with flexibility in report generation. If validated, this system might be the most desirable of the group.").
The clinical findings are assessed at different times depending in part on whether or not the patient is admitted for a surgical procedure. See Iezzoni and Moskowitz, "A Clinical Assessment of MedisGroups," 260 J. Am. Med. Assn. 3159, 3161–62 (1988).
Midwest Business Group on Health, supra note 47, at 48. Unlike some rival approaches such as Computerized Severity Index (CSI), in which patients are classified in terms of their diagnosis, MedisGroups disregards diagnosis in evaluating patients' health status. See id. at 53. This avoids inaccuracies introduced by erroneous diagnosis, but sacrifices validity in some situations.
For this reason, MedisGroups and similar methodologies are often referred to as "severity of illness measures." Batchelor, Esmond & Johnson, "Maintaining High Quality Patient Care While Controlling Costs," 43 Healthcare Financial Management 20 (1989) (hereinafter Batchelor). Since they enable comparison of providers who have different "case mixes"—i.e., who treat patient populations with different illnesses and different levels of severity—these measures are also known as "case mix adjustment systems." See, e.g., Midwest Business Group on Health, supra note 47, at 28.
The MedisGroups vendor compiles standardized outcome data from all of the providers who use the system. See Midwest Business Group on Health, supra note 47, at 48.
An example of a process change that may prove wasteful is the use of tissue plasminogen activator or TPA for heart attack victims. A recent study showed that Streptokinase, a drug that had long been used for such patients, yielded the same benefits at a fraction of the cost in certain patient populations. White, Rivers, Maslowski, Ormiston, Takayama, Hart, Sharpe, Whitlock & Norris, “Effect of Intravenous Streptokinase As Compared With That of Tissue Plasminogen Activator on Left Ventricular Function After First Myocardial Infarction,” 320 New Eng. J. Med. 817, 821 (1989). For discussion of other examples of wasteful interventions, see Mehlman, , “Health Care Cost Containment and Medical Technology: A Critique of Waste Theory,” 36 Case W. Res. L. Rev. 786–87 (1986).Google Scholar
An example of a potentially harmful procedure is the unnecessary use of coronary artery bypass graft surgery in patients for whom it is not appropriate. See Winslow, Kosecoff, Chassin, Kanouse and Brook, "The Appropriateness of Performing Coronary Artery Bypass Surgery," 260 J. Am. Med. Assn. 505 (1988). The harm includes the risks from general anesthesia and wound infection, as well as from errors in the manner in which the procedure is performed.
JCAHO accreditation guidelines, for example, are largely structural requirements that providers must meet and that are chosen in most cases on the basis of the belief—rather than actual proof—that they are characteristic of better providers.
Outcome assessment, it will be recalled, must measure far more than just mortality rates in order to be a valid indicator of quality.
Outcome measurement therefore is key to another new development in health care quality assurance: continuous improvement. Pioneered by Deming and Juran, this is an approach to quality that focuses on enlisting workers and management constantly to identify and remedy deficiencies at the production level rather than waiting for problems to turn up in the course of product inspections or as the result of consumer dissatisfaction. See generally Deming, W., Out of the Crisis (1986); Deming, W., Quality, Productivity, and Competitive Position (1982); Juran, J., Gryna, F. Jr., and Bingham, R. Jr., Quality Control Handbook (4th ed. 1988); Juran, J., Managerial Breakthrough (1964). The approach has been adapted for use in the health care sector by Paul Batalden at HCA, and by Donald Berwick at the Harvard Community Health Plan. See Berwick, "Continuous Improvement as an Ideal in Health Care," 320 N. Eng. J. Med. 53–6 (1989).
See Capowski, "Accuracy and Consistency in Categorical Decision-Making: A Study of Social Security's Medical-Vocational Guidelines—Two Birds With One Stone or Pigeon-Holing Claimants?" 42 Md. L. Rev. 329 (1983). See also Capowski, "The Appropriateness and Design of Categorical Decision-Making Systems," 48 Alb. L. Rev. 951 (1984). For an early discussion of categorical decisionmaking, see Pound, "Mechanical Jurisprudence," 8 Colum. L. Rev. 605 (1908).
See Starr v. Federal Aviation Administration, 589 F.2d 307 (7th Cir. 1978); Airline Pilots Assn. v. Quesada, 276 F.2d 892 (2d Cir. 1960). The pilot rule is discussed in Diver, "The Optimal Precision of Administrative Rules," 93 Yale L.J. 65, 80–83 (1983).
Furrow, for example, has suggested that providers could be paid a bonus if their patients improve. See Furrow, "The Changing Role of the Law in Promoting Quality in Health Care: From Sanctioning Outlaws to Managing Outcomes," 26 Houston L. Rev. 147, 187–88 (1989).
Problems would arise in evaluating the quality of providers with small numbers of cases, or the overall quality of treatment for rare diseases and conditions, because of the large sample sizes needed for statistical accuracy.
Health care costs might be lower in the long run, however, if the bonus system induced a higher quality of care that avoided future costs.
See Airline Pilots Assn. v. Quesada, 276 F.2d 892 (2d Cir. 1960) (pilots not entitled to individual hearings on mandatory retirement as the Administrator had published notice and received over one hundred comments that largely favored the rule, which satisfied the notice-and-comment requirement for rule-making); National Assn. of Home Health Agencies v. Schweiker, 690 F.2d 932 (D.C. Cir. 1982), cert. denied, 459 U.S. 1205 (1983) (rule which required Home Health Agencies to use 49 government designated intermediaries for all Medicare reimbursement determinations and payments was invalidated because it had been promulgated without notice or comment).
Various Medicare provisions indicate that the Secretary of Health and Human Services already has the authority to change to outcome-based standards as long as the notice-and-comment requirements are satisfied. Section 1302 of Title 42 of the United States Code gives the Secretary general rule-making authority. Additionally, the Secretary has power to review professional activities to ensure that professionals are meeting recognized standards of health care. 42 U.S.C. 1320c-3(a)(i)(b). If outcome criteria reflect recognized standards of health care, then, arguably, this provision allows the Secretary to use them for quality assurance purposes. Other provisions of the law state that the Secretary can identify cases and other relevant criteria and require record keeping of these types of data to accomplish the goal of quality assurance. 42 U.S.C. 1320c-4(a); 9. The Secretary also has specific rule-making authority to accomplish the purposes of quality assurance. 42 U.S.C. 1320c-8. If these provisions nevertheless were not read to give the Secretary implicit power to institute an outcome-based quality assurance system, the legislature could expressly provide authority in new legislation.
Recent legislation requires a 60-day notice-and-comment period for Medicare rules. 42 U.S.C. 1395hh(a)(2).
See 5 U.S.C. 553 (rule-making under Administrative Procedure Act).
See, e.g., Midwest Business Group on Health, supra note 47, at 51 ("MediQual closely guards the MedisGroups algorithm …. Potential purchasers of the software do not have the opportunity to examine the criteria.")
See notes 32–35, supra, and accompanying text.
For a discussion of the costs of basing administrative decision-making on valid and reliable techniques, see Mashaw, J., Bureaucratic Justice: Managing Social Security Disability Claims, 79–88 (1983).
See P. Danzon, The Frequency and Severity of Medical Malpractice Claims 25, n.3 (1982). Assessing outcomes outside of large-scale provider institutions such as hospitals and HMOs is complicated by the need for relatively large numbers of patient visits to enable the data to be statistically valid. See Kahn et al., "Interpreting Hospital Mortality Data: How Can We Proceed?" 260 J. Am. Med. Assn. 3625, 3626 (1988) (large sample sizes needed for aggregate outcome investigation). A comparison of mortality rates between physicians will not reveal much information about the quality of care rendered by a physician few of whose patients die; given the small number of data points, it is unlikely that the physician's mortality rate would fall outside the range that would be predictable from aggregating mortality rates over many physicians in similar practice settings. Consequently, the quality of care in small-scale settings such as physician office practices may be more accurately evaluated by comparing the process of care used in individual patient cases with recognized process standards, although outcome assessment may still be appropriate for outcome parameters with larger data bases even in small-scale practice settings, such as patient satisfaction.
The quality of hip fracture care, for example, cannot be evaluated without determining how well the patient is functioning months after discharge. See Fitzgerald, Fagan, Tierney and Dittus, "Changing Patterns of Hip Fracture Care Before and After Implementation of the Prospective Payment System," 258 J. Am. Med. Assn. 218, 219 (1987) (evaluation of patients six months after discharge).
See Knopp v. Thorton, 199 Ky. 216, 250 S.W. 853 (1923); Rann v. Twitchell, 82 Vt. 79, 71 A. 1045 (1909).
See Robbins v. Footer, 553 F.2d 123 (D.C. Cir. 1977); Shilkret v. Annapolis Emergency Hosp. Assn., 276 Md. 187, 349 A.2d 245 (1975) (national standard for obstetricians).
Normally, if a physician holds himself out as a specialist he is held to a higher level of care than a general practitioner. See Wall v. Stout, 310 N.C. 184, 311 S.E.2d 571 (1984); Bovbjerg, "The Medical Malpractice Standard of Care: HMO's and Customary Practice," 1975 Duke L.J. 1375, 1385 (1975) (specialists held to a higher level of care than general practitioners).
Congress has made exceptions for rural providers under the Medicare quality assurance program. A rural provider cannot be excluded from providing services before an administrative hearing with the Secretary of Health & Human Services unless an administrative law judge finds that the provider poses a serious risk to the community. 42 U.S.C. 1320c-5(b)(5) (1987).
See Chemical Mfg. Assn. v. Natural Resources Defense Council, 470 U.S. 116 (1985) (court deferred to EPA's interpretation of Clean Air Act, inter alia, because of the complexity of the statute); Syracuse Peace Council v. F.C.C., 867 F.2d 654 (D.C. Cir. 1989) (court deferred to expertise of F.C.C. in fact finding and policy setting); Jarmillo v. Morris, 50 Wash. App. 882, 750 P.2d 1301 (1988) (trial court erred in not deferring to the expertise of the state podiatry licensing board). Therefore, providers most likely would aim their fairness objections at Congress or the state legislature, seeking to establish narrow limits on administrative discretion.
The cost data per hospital is based on data compiled by the Center for Health Affairs, Greater Cleveland Hospital Association. The $2 billion figure is arrived at by multiplying the amount per hospital by 7000, the number of hospitals qualified to receive Medicare reimbursement. See Strategy, supra note 2, at 120.
Am. Med. News, Jan. 6, 1989, at 19, col. 3. This amount has yet to be appropriated, however.
See "Outcome Data Won't Be Used in Accreditation of Hospitals," Am. Med. News, Nov. 25, 1988, at 5, col. 1.
See Batchelor, supra note 51, at 20.
See Chassin, "Standards of Care in Medicine," 25 Inquiry 437, 438 (1988) (standards of care or practice guidelines, generally, are "statements of specific diagnostic or therapeutic maneuvers that should or should not be performed in certain specific clinical circumstances"). Kinney and Wilder distinguish two types of standards, "clinical practice protocols" and "utilization review protocols." Kinney and Wilder, "Medical Standard Setting in the Current Malpractice Environment: Problems and Possibilities," 22 U.C. Davis L. Rev. 421, 424 (1989). The latter, however, are often retrospective applications of the former by third-party payers to determine if the patient's care was appropriate.
One standard for treating patients with smoke inhalation, for example, states that “after initial assessment, patients whose symptoms and signs suggest significant injury require bronchoscopy via the nasal approach to evaluate laryngeal edema.” Raffin, “Emergency: Acute Smoke Inhalation,” Hosp. Med. 23 (July 1988). This not only prescribes a process to be followed—bronchoscopy—but a piece of equipment to have on hand—a bronchoscope. Health care standards that focus on structural requirements apart from the processes by which care is delivered should not be regarded as practice standards. See, e.g., Joint Commission on Accreditation of Healthcare Organizations, Accreditation Manual for Hospitals, 1990 32 (1989) (Standard EF.2.3: emergency service must be directed by physician member of hospital's medical staff).Google Scholar
A current example is the American Cancer Society's guidelines for breast cancer screening. The physician must consider the woman's age, family history of breast cancer, and child-bearing history, among other criteria, to decide which screening techniques are appropriate for the patient. Older and higher-risk patients require mammography, while younger patients receive only a physical exam and breast self-exam instruction. Clinical Preventive Medicine: Health Promotion and Disease Prevention, 534–539 (3rd ed. 1988).
See "Guidelines for Percutaneous Transluminal Coronary Angioplasty," 78 Circulation 486 (1988).
See, e.g., Margolis, Cook, Barak, Alder & Geertsma, "Clinical Algorithms Teach Pediatric Decisionmaking More Effectively Than Prose," 27 Med. Care 576 (1989). This type of algorithm is called a "decision-tree."
See, e.g., Marks v. Mandel, 477 So. 2d 1036 (Fla. 1985) (in action alleging patient's death caused by negligent operation of hospital "on-call" system, trial court erred in excluding from evidence hospital emergency room policy and procedure manual prescribing functioning of system).
See, e.g., Office of Medical Applications of Research, National Institutes of Health, Magnetic Resonance Imaging: National Institutes of Health Consensus Development Conference Statement 111 (1987) ("MRI is recognized as the preferred and most sensitive imaging technique for the diagnosis of multiple sclerosis").
See, e.g., Stone v. Proctor, 259 N.C. 633, 131 S.E.2d 297 (1963) (standards on electroshock treatment issued by American Psychiatric Association admissible as evidence of standard of care in malpractice action). See also standards for ambulatory electrocardiography promulgated by the American Academy of Cardiology, described supra at note 84. For a description of the standard-setting activities of these organizations, see Kinney and Wilder, supra note 81, at 427–31.
See, e.g., Technology Management Dept., Health Benefits Management Division, Blue Cross & Blue Shield Association, Medical Necessity Program Diagnostic Testing Guidelines (1987), cited in Kinney and Wilder, supra note 81, at 425, n.13.
See, e.g., Darling v. Charleston Community Memorial Hospital, 33 Ill. 2d 326, 211 N.E.2d 253 (1965), cert. denied, 383 U.S. 946 (1966) (standards of Joint Commission on Accreditation of Hospitals [predecessor to JCAHO] admissible as evidence of custom). For a discussion of the admissibility of JCAHO-type standards as evidence in court, see Schockemoehl, "Admissibility of Written Standards as Evidence of the Standard of Care in Medical and Hospital Negligence Actions in Virginia," 18 U. Rich. L. Rev. 725 (1984).
Practice standards can also be used prospectively or retroactively by third party payers to help determine whether or not a service was appropriate and should be paid for. See Kinney and Wilder, supra note 81, at 436–38 (discussing use of practice standards for utilization review).
Standards also can help prospective plaintiffs and their attorneys determine whether or not they have a meritorious claim without consulting a medical expert. By reducing the need for expert witnesses to evaluate potential cases, practice standards will enable patients with smaller claims to obtain representation and compensation. Standards also would facilitate settlement by reducing much of the uncertainty over whether or not the defendant will be found liable.
In 1989, for example, Congress created the Agency for Health Care Policy and Research within HCFA, and gave it responsibility to support the development of guidelines for clinical practice. See Public Law 101–239 (1990).
See Medical Practice Guidelines: Hearing Before the Subcomm. on Health and the Environment of the House Energy and Commerce Committee, 100th Cong., 2nd Sess. 107–20 (1988) (description of process for establishing standards used by Value Health Sciences, Inc.).
A joint effort by the American Medical Association, the Rand Corporation, and eight academic medical centers to establish practice standards has raised over $1.5 million to fund the project. See Am. Med. News, February 9, 1990, at 40, col. 4.
See, e.g., Brook, "Practice Guidelines and Practicing Medicine: Are They Compatible?" 262 J. Am. Med. Assn. 3027, 3029 ("[p]eople are worried that the production of practice guidelines may be accompanied by increased regulation or public release of sensitive information. Will more harm than good be done to patients? Will such a process destroy the ability of physicians to work together to improve constantly their own performance? Will the medical system be codified and will both experimentation and change be slowed? Will the interest in medicine as a career decline and, therefore, will the quality of the future practitioner be ever decreasing? Finally, will a regulatory system emerge that will actually cost more than it saves?")
See generally Kinney & Wilder, supra note 81; Schockemoehl, supra note 90.
See Kinney & Wilder, supra note 81, at 443; J. McCormick, McCormick on Evidence 33, at 900 (Cleary, ed. 1984) ("most courts have been unwilling to adopt a broad exception to the hearsay rule for treatises and other professional literature").
See Kinney & Wilder, supra note 81, at 446 (“courts are reluctant to admit treatises as evidence without accompanying expert testimony as to the qualifications of the treatises' authors to serve as experts”). In Pardy v. United States, 783 F.2d 710, 714–15 (7th Cir. 1986), for example, the court rejected plaintiff's effort to rely on patient rights standards published by the American Hospital Association to establish a duty to obtain the patient's informed consent to treatment, stating: “[G]eneral reliance on the Bill of Rights statements cannot create legally binding medical standards without testimony supportive of the notion that the accepted standard of medical care in Illinois … was deviated from in this instance.” The most liberal treatment of standards is found in Rule 803(18) of the Federal Rules of Evidence, which allows the judge to take judicial notice of the fact that standards are reliable authority, but only allows them to be admitted as evidence if called to the attention of an expert witness on cross-examination or relied upon by the expert on direct examination. Fed. R. Evid. 803(18). Furthermore, the standards may only be read into evidence; they may not be received as an exhibit that the jury can take with it into the jury room for use in its deliberations. Id.; McCormick, supra note 98, at 901. Standards also are admissible to impeach a witness even if he does not rely on the standards in formulating his opinion. See McCormick, supra note 98, at 900 (“[v]irtually all courts have, to some extent, permitted the use of learned materials in the cross-examination of an expert witness”).Google Scholar
See Kinney & Wilder, supra note 81, at 446 ("[t]hus, as a practical matter, the 'mechanics' of putting on proof of the standard of care in a malpractice lawsuit are fundamentally unaffected by the use of medical standards to establish the standard of care").
A majority of jurisdictions require state medical boards to rely on the testimony of nonmember experts to establish the standard of care in disciplinary proceedings and to determine whether or not the standard has been met. Appeal of Schramm, 414 N.W.2d 31, 35 (S.D. 1987). Eight states, however, allow state agencies to base their findings solely on the expertise of their members. See id. at 36. Moreover, practice standards themselves might satisfy the majority's requirement of outside expertise. In Appeal of Schramm, supra, for example, the court noted that this requirement was subject to the "obvious well-recognized exception" that the board can take judicial notice of "judicially cognizable or generally recognized technical or scientific facts." 414 N.W.2d at 37. While Schramm premised this exception on a statutory provision governing the board's activities, it can be argued that state boards generally ought to be able to take official notice of practice standards without the need for outside expert testimony as to their applicability or reliability.
See Comment, "Rational Health Policy and the Legal Standard of Care: A Call for Judicial Deference to Medical Practice Guidelines," 77 Cal. L. Rev. 1483, 1522–28 (1989).
The classic cases of The T.J. Hooper, 60 F.2d 737, 740 (2d Cir.), cert. denied, 287 U.S. 662 (1932), and Helling v. Carey, 83 Wash. 2d 514, 519 P.2d 981 (1974), reflect this concern, the latter in a medical malpractice context.
See Kinney & Wilder, supra note 81, at 448 ("[p]rotocols developed by national medical specialty societies and other groups of prestigious physicians may be highly influential because of their origins and imprimatur of approval").
See, e.g., Shilkret v. Annapolis Emergency Hospital Association, 276 Md. 187, 349 A.2d 245 (1975) (arguing in favor of national standard for specialists and general practitioners alike).
One option might be standards set by the federal government. The National Institutes of Health publishes the results of expert evaluations of new medical technologies, called “consensus conferences.” These often include opinions on the appropriate use of the technologies, and are therefore akin to practice standards. The report of the 1987 consensus conference on magnetic resonance imaging (MRI), for example, stated that “MRI should not be performed on patients with cardiac pacemakers or aneurysm clips.” Office of Medical Applications of Research, U.S. Department of Health and Human Services, Magnetic Resonance Imaging: National Institutes of Health Consensus Development Conference Statement at 3 (1987). If the federal government opts to issue practice standards, it should do so in cooperation with private medical organizations to take advantage of their expertise and to assure that the results will be widely accepted.Google Scholar
For a discussion of whether resource constraints ought to be a defense in an action for malpractice, see Morreim, "Cost Containment and the Standard of Medical Care," 75 Cal. L. Rev. 1719 (1987); Furrow, "Medical Malpractice and Cost Containment: Tightening the Screws," 36 Case W. Res. L. Rev. 985 (1986); Morreim, "Commentary: Stratified Scarcity and Unfair Liability," 36 Case W. Res. L. Rev. 1033 (1986).
See Hall, "The Malpractice Standard Under Health Care Cost Containment," 17 L., Med. & Health Care 347 (1989); Morreim, "Stratified Scarcity: Redefining the Standard of Care," 17 L., Med. & Health Care 356 (1989).
See, e.g., Institute of Medicine, Clinical Practice Guidelines: Directions for a New Program 2-4 to 2-22 (Field, M. and Lohr, K., eds. 1990), arguing for use of the term "guidelines" in an apparent attempt to encourage their use as an aid to the medical profession rather than as a method to enable external quality assurance mechanisms to establish fault.
See, e.g., “Practice Standards: MDs' Shield or Plaintiffs' Spear?” Am. Med. News, January 6, 1989, at 21 (hereinafter Shield or Spear) (quoting Kirk B. Johnson, General Counsel, American Medical Association, as stating that standards should be called “practice parameters” to reduce liability for noncomplying physicians). Another reason that medical organizations resist the notion of rigid standards is that the standards may not be comprehensive enough or may not reflect the latest scientific developments. See, e.g., Schmidt, “Medical Standards Developed by Medical Specialty Societies—Their Use and Abuse,” in Proceedings of Invitational Conference on Standards of Quality in Patient Care: The Importance and Risks of Standard Setting (Council of Medical Specialty Societies, 1987) at 56 (“[i]t should be universally understood that there is almost no standard, no matter how widely recognized, that can fairly or validly be applied to each and every case in each and every circumstance”).Google Scholar
See Shield or Spear, supra note 110 (quoting James F. Holzer, Risk Management Foundation of the Harvard Medical Institutions as stating that "[r]egardless of what you call it—standards, parameters or guidelines—it's what it is that determines how it's used in court").
The provider in effect would be asserting a defense based on lack of causation. His argument would be that his failure to adhere to the standard did not cause or threaten to cause patient injury. Such an argument would negate the presumptive effect of the standard. In any event, a presumption typically requires only that the opposing party come forward with rebuttal evidence. See Fed. R. Evid. 301 (presumption shifts burden of going forward but not burden of proof).Google Scholar
Maine Public Law 1990, Ch. 931 (to be codified at 24 MRSA 2972–2978).
Maine Public Law 1990, Ch. 931, sec. 2975(1) (to be codified at 24 MRSA sec. 2975(1)).
Maine Public Law 1990, Ch. 931, sec. 2973 (to be codified at 24 MRSA sec. 2973).
An article in the newsletter of the American Society of Anesthesiology states that “the parameters and protocols will have the force and effect of law for those physicians practicing in the demonstration project,” and adds: “In essence then, the physician will have the benefit of a known standard that cannot be challenged by experts within or outside the state.” Smith, G., “Maine's Liability Demonstration Project: Relating Liability to Practice Parameters,” 54 Am. Society Anesth. Newsletter 18 (Oct. 1990).Google Scholar
Cf. Imwinkelried, E., "Of Evidence and Equal Protection: The Unconstitutionality of Excluding Government Agents' Statements Offered as Vicarious Admissions Against the Prosecution," 71 Minn. L. Rev. 269 (1986).