Preface
1. Social Program Evaluation: Definitions and Issues
I. Introduction
II. Purposes and Functions of Evaluation
A. Definitions and Distinctions
B. Evaluation as Part of the Feedback Process
C. Relation of Output Measures to the Feedback Process
D. Some Questions About the Purpose and Utilization of Evaluation
III. Methodology in Impact Assessment
A. Some General Considerations
B. Data Needs and Analysis
IV. Assessment and Value Judgments
A. Values and Evaluation
B. Criterion Formulation
V. Some Organizational and Ethical Issues
VI. Critical Issues
2. Assessing Social Innovations: An Empirical Base for Policy
I. The General Idea
II. Introduction
A. The Plan of the Paper
B. Evaluating Social Programs
C. Initial Ignorance
D. Methods of Investigation
E. Large and Small Effects
III. Three Instructive Examples
A. The Salk Vaccine Trials
B. The Gamma Globulin Study
C. Emergency School Assistance Program
D. Afterword
IV. Ratings of Innovations
A. Sources of the Studies and Their Biases
B. Medical and Social Innovations
C. Our Ratings of Social Innovations
D. Social Innovations
E. Summary for Social Innovations
F. Evaluations of Socio-Medical Innovations
G. Summary for Socio-Medical Innovations
H. Evaluations of Medical, Mainly Surgical, Innovations
I. Summary of Medical Ratings
J. Summary of Ratings
V. Findings from Nonrandomized Studies
A. Nonrandomized Studies
B. Nonrandomized Studies in Medicine
C. Summary for Section V
VI. Issues Related to Randomization
A. The Idea of a Sample as a Microcosm
B. Searching for Small Program Effects
C. Studying the Interaction Effects in Social Programs
D. Unmeasurable Effects
E. Validity of Inference from One-Site Studies
F. Does Randomization Imply Coercion?
G. The Ethics of Controlled Field Studies
H. Need to Develop Methodology
I. Need for an Ongoing Capability for Doing Randomized Controlled Field Studies
VII. Issues of Feasibility in Installing Program Evaluations
A. Specifying the Treatment
B. Incentives for Participation
C. A Multiplicity of Program Goals
VIII. Costs, Timeliness, and Randomized Field Studies
A. Costs and Benefits of Doing Randomized Controlled Field Studies
B. Value of a Field Trial
C. The Question of "Gradualism"
D. "Stalling" and Evaluating Innovations
E. Time and Doing Field Studies
IX. Issues That Arise in Implementing Innovations
A. Evolutionary Development of Programs
B. Field Trials and Policy Inaction
C. Political Obstacles
X. Findings and Recommendations
A. The Results of Innovations
B. Findings for Nonrandomized Trials
C. Beneficial Small Effects
D. Costs and Time
E. Feasibility of Randomized Trials
F. Evolutionary Evaluations
G. Long-Run Development
H. Controlled Trials vs. Fooling Around
3. Making the Case for Randomized Assignment to Treatments by Considering the Alternatives: Six Ways in Which Quasi-Experimental Evaluations in Compensatory Education Tend to Underestimate Effects
I. Introduction
II. Common Sense and Scientific Knowing
III. Experimentation in Education
IV. Six Sources of Underadjustment Bias
A. Systematic Underadjustment of Preexisting Differences
B. Differential Growth Rates
C. Increases in Reliability with Age
D. Lower Reliability in the More Disadvantaged Group
E. Test Floor and Ceiling Effects
F. Grouping Feedback Effects
V. Summary Comments
4. Regression and Selection Models to Improve Nonexperimental Comparisons
I. Introduction
II. An Alternate Approach to Bias in Treatment Effects
III. Models Which Allow Unbiased Estimation
5. Field Trial Designs in Gauging the Impact of Fertility Planning Programs
I. Introduction
A. Purpose and Rationale
B. Perspectives
II. Field Studies of Fertility Program Impacts
A. The Nature of This Survey
B. Randomization in Sample Selection and Experimental Assignment
III. Important Aspects of Various Classes of Study Design Exemplified
A. Major Types of Design Employed
B. Patterns of Comparison in Population Program Impact Studies
C. Three Illustrative "True Experiments"
D. Quasi-Experiments Varying in Strength as to Evidence of Impact
E. Weaker Quasi-Experimental Designs
F. Correlational Analysis of Impact on Fertility Indices
G. "Preexperimental" or Post Hoc Studies
IV. Special Features Observed in Field Experiments
A. Main Features of the Twelve "True" Experiments
B. Features Brought Out in More Complex Experiments
V. Measures of Impact Used in Field Studies
VI. Summary and Conclusions
A. Résumé
B. Recommendations
C. Concluding Remarks
6. Experiments and Evaluations: A Reexamination
I. Introduction
A. A Definition
B. The Confusing Diversity of Current Evaluation Practice
II. Decision Analysis as a Paradigm for Evaluation Research
A. Stakes, as Well as Odds, Control Decisions
B. Inconsistent Values Held by Disagreeing Groups Control Most Decisions
C. The Decision-Theoretic Evaluation Framework
D. Multi-Attribute Utility Analyses
E. Interpersonal and Intergroup Disagreements
F. The Integration of Planning and Evaluation
III. Some Comments and Complaints, Mostly About Experimental Evaluations
A. What Is a Variable in a Social Program?
B. How to Aggregate the Effects of Heterogeneous Programs
C. Effect Size, Variance, and Variable Definition
D. What Can Happen When Large Effects Are Not Found
E. Causal Inferences
F. Using All the Data
G. Who Decides What Will Be Studied?
H. The Temporal Integration of Planning, Evaluation, and Program Changes
IV. Conclusion
7. Feedback in Social Systems: Operational and Systemic Research on Production, Maintenance, Control, and Adaptive Functions
I. Introduction
II. Types of Feedback
A. Operational and Systemic Levels of Feedback
B. Social System Functions
C. Energic and Informational Forms of Feedback
III. The Development of Feedback
A. Around Production Problems
B. Around Maintenance Problems
C. Relationship to the Managerial and Political Structure
D. The Political Structure and Maintenance Problems
E. Adaptation Problems and Secondary Effects
IV. The Improvement of System Functioning Through Feedback
A. Direct vs. Indirect Feedback
B. Task Requirements as a Determinant of the Nature of Feedback Loops
C. Tying Feedback to System Functioning
8. Assessing Alternative Conceptions of Evaluation
I. Introduction
II. Determination of Impact
III. Evaluation and Experiment
A. Pilot Programs
B. Experimentation and Innovation
C. Comparative Evaluation and Program Evolution
IV. Decision vs. Understanding
V. Other Considerations Concerning Implementation
A. Role of the Evaluator
B. Use and Misuse of Evaluative Research Findings
VI. Some Suggested Conclusions and Recommendations
A. Management and Organizational Aspects of Evaluation
B. Ascertaining Program Impacts
C. Use of Information for Decision Making
VII. Sources of Ideas