Practical Program Evaluation
Second Edition
To the memory of my mother, Huang-ai Chen
Practical Program Evaluation: Theory-Driven Evaluation and the Integrated Evaluation Perspective
Second Edition
Huey T. Chen
Mercer University
Copyright © 2015 by SAGE Publications, Inc.
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.
Printed in the United States of America
Library of Congress Cataloging-in-Publication Data
Chen, Huey-tsyh.
Practical program evaluation : theory-driven evaluation
and the integrated evaluation perspective / Huey T. Chen,
Mercer University. — 2nd edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-4129-9230-5 (pbk. : alk. paper)
1. Evaluation research (Social action programs) I. Title.
H62.C3647 2015
300.72—dc23   2014019026
This book is printed on acid-free paper.
14 15 16 17 18 10 9 8 7 6 5 4 3 2 1
For information:
SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: order@sagepub.com
SAGE Publications Ltd.
1 Oliver’s Yard
55 City Road
London, EC1Y 1SP
United Kingdom
SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
India
SAGE Publications Asia-Pacific Pte. Ltd.
3 Church Street
#10-04 Samsung Hub
Singapore 048763
Acquisitions Editor: Helen Salmon
Associate Editor: Eve Oettinger
Editorial Assistant: Anna Villarruel
Production Editor: Jane Haenel
Copy Editor: Paula L. Fleming
Typesetter: C&M Digitals (P) Ltd.
Proofreader: Susan Schon
Indexer: Robie Grant
Cover Designer: Anupama Krishnan
Marketing Manager: Nicole Elliott
Contents
Preface xvi
Special Features of the Book xvii
About the Author xix
PART I: Introduction 1
Chapter 1. Fundamentals of Program Evaluation 3
The Nature of Intervention Programs and
Evaluation: A Systems View 3
Classic Evaluation Concepts, Theories, and Methodologies:
Contributions and Beyond 6
Evaluation Typologies 7
The Distinction Between Formative
and Summative Evaluation 7
Analysis of the Formative and Summative Distinction 8
A Fundamental Evaluation Typology 10
Basic Evaluation Types 11
Hybrid Evaluation Types 13
Applications of the Fundamental Evaluation Typology 14
Internal Versus External Evaluators 14
Politics, Social Justice, Evaluation Standards, and Ethics 15
Evaluation Steps 17
Evaluation Design and Its Components 18
Major Challenges of Evaluation:
Lessons Learned From Past Practice 20
Judge a Program Not Only by Its Results but
Also by Its Context 21
Evaluations Must Address Both Scientific and
Stakeholder Credibility 22
Evaluations Must Provide Information That
Helps Stakeholders Do Better 23
Addressing the Challenges: Theory-Driven
Evaluation and the Integrated Evaluation Perspective 25
Theory-Driven Evaluation Approach 25
Integrated Evaluation Perspective 26
Program Complexity and Evaluation Theories 30
Who Should Read This Book and How They
Should Use It 31
Students 31
Evaluation Practitioners 32
Introducing the Rest of the Chapters 32
Questions for Reflection 33
Chapter 2. Understand Approaches to
Evaluation and Select Ones That Work:
The Comprehensive Evaluation Typology 35
The Comprehensive Evaluation Typology: Means and Ends 36
Stages in the Program Life Cycle 39
Dynamics of Transition Across Program Stages 41
Evaluation Approaches Associated With Each Stage 41
Planning Stage 42
Initial Implementation 42
Mature Implementation 43
Outcome Stage 43
Strategies Underlying Evaluation Approaches 44
Merit Assessment Strategies 46
Development Strategies 47
Applying the Typology: Steps to Take 51
Evaluation Ranging Across Several Program Stages 54
Dynamics of Evaluation Entries Into Program Stages 55
1. Single-Entry Evaluation 55
2. Multiple-Entry Evaluation 56
Questions for Reflection 57
Chapter 3. Logic Models and the Action Model/Change
Model Schema (Program Theory) 58
Logic Models 58
Additional Examples of Applying Logic Models 61
Program Theory 65
The Action Model/Change Model Schema 66
Descriptive Assumptions 67
Prescriptive Assumptions 68
Components of the Change Model 70
Goals and Outcomes 71
Determinants 71
Intervention or Treatment 73
Components of the Action Model 74
Intervention and Service Delivery Protocols 74
Implementing Organizations: Assess, Enhance,
and Ensure Their Capabilities 75
Program Implementers: Recruit, Train, and
Maintain Both Competency and Commitment 76
Associate Organizations/Community Partners:
Establish Collaborations 76
Ecological Context: Seek the Support of the Environment 77
Target Population: Identify, Recruit, Screen, Serve 78
Relationships Among Components of the Action
Model/Change Model Schema 79
Applying the Action Model/Change Model Schema:
An Example 82
Change Model 84
Action Model 84
Some Advantages of Using the Action Model/Change
Model Schema 84
Facilitation of Holistic Assessment 84
Provision of Comprehensive Information
Needed to Improve Programs 85
Delineation of a Strategy to Consider Stakeholders’
Views and Interests 85
Flexible Application of Research Methods to
Serve Evaluation Needs 86
Aid to Selecting the Most Suitable
Approaches or Methods 86
Helping Stakeholders Gear Up (or Clear Up) Their
Action Model/Change Model Schema 86
Reviewing Existing Documents and Materials 87
Clarifying Stakeholders’ Theory 87
Participatory Modes for Development Facilitation 88
Theorizing Procedures for Development Facilitation 89
Preparing a Rough Draft That Facilitates Discussion 91
Applications of Logic Models and the Action Model/Change
Model Schema 92
Questions for Reflection 92
PART II: Program Evaluation to
Help Stakeholders Develop a Program Plan 95
Chapter 4. Helping Stakeholders Clarify a Program Plan: Program Scope 97
The Program Plan, Program Scope, and Action Plan 97
Conceptual Framework of the Program Scope 98
Why Develop a Program Scope? 100
Strategies for Articulating the Program Scope 100
Background Information Provision Strategy and Approaches 101
Needs Assessment 101
Formative Research 102
The Conceptualization Facilitation Approach:
A Part of the Development Facilitation Strategy 103
Working Group or Intensive Interview Format? 103
Theorizing Methods 104
Determinants and Types of Action
Model/Change Model Schemas 109
Choosing Interventions/Treatments That Affect
the Determinant 110
The Relevancy Testing Approach: A Part of the
Troubleshooting Strategy 112
Research Example of Relevancy Testing 113
Moving From Program Scope to Action Plan 115
Questions for Reflection 116
Chapter 5. Helping Stakeholders Clarify a Program Plan: Action Plan 117
The Action Model Framework and the Action Plan 117
Strategies for Developing Action Plans 120
The Formative Research Approach
(Under Background Information Provision Strategy) 120
Example of Formative Research 121
The Conceptualization Facilitation Approach
(Under Development Facilitation Strategy) 122
1. Implementing Organization: Assess, Enhance,
and Ensure Its Capacity 123
2. Intervention and Service Delivery Protocols:
Delineate Service Content and Delivery Procedures 124
3. Program Implementers: Recruit, Train, and Maintain for
Competency and Commitment 126
4. Associate Organizations/Community Partners:
Establish Collaborative Relationships 127
5. Ecological Context: Seek the Support of the Environment 128
6. Target Population: Identify, Recruit, Screen, and Serve 130
Application of the Conceptualization Facilitation Strategy 133
Example 1: A Garbage Reduction Program 133
Example 2: An HIV-Prevention Program 136
The Pilot-Testing Approach 141
Defining Pilot Testing 141
Conducting Pilot Testing 142
Designing Pilot Testing 143
The Commentary or Advisory Approach 146
Questions to Inform the Evaluator’s Commentary
on a Program Scope 146
Questions to Inform the Evaluator’s Commentary
on an Action Plan 147
Summary 148
Questions for Reflection 149
PART III: Evaluating Implementation 151
Chapter 6. Constructive Process Evaluation Tailored
for the Initial Implementation 153
The Formative Evaluation Approach
(Under the Troubleshooting Strategy) 154
Timeliness and Relevancy 155
Research Methods 155
Steps in Applying Formative Evaluation 156
Four Types of Formative Evaluation 158
Formative Evaluation Results: Use With Caution 164
The Program Review/Development Meeting
(Under the Troubleshooting Strategy) 165
Program Review/Development Meeting Principles
and Procedures 166
Program Review/Development Meeting Advantages and
Disadvantages 167
Example of a Program Review/Development Meeting 169
Bilateral Empowerment Evaluation (Under the
Development Partnership Strategy) 171
Evaluation Process 172
Example of Bilateral Empowerment Evaluation 172
The Evaluator’s Role 173
Pros and Cons of Bilateral Empowerment Evaluation 174
Questions for Reflection 174
Chapter 7. Assessing Implementation in the
Mature Implementation Stage 176
Constructive Process Evaluation and Its Application 177
Modifying or Clarifying a Program Scope and Action Plan 177
Troubleshooting Implementation Problems 179
Conclusive Process Evaluation and Its Applications 180
How to Design a Conclusive Process Evaluation That Fits
Stakeholders’ Needs 181
Approaches of Conclusive Process Evaluation 182
Intervention Fidelity Evaluation 183
Referral Fidelity Evaluation 184
Service Delivery Fidelity Evaluation 185
Target Population Fidelity Evaluation 186
Fidelity Versus “Reinvention” in Conclusive
Process Evaluation 189
Hybrid Process Evaluation: Theory-Driven Process Evaluation 190
Examples of Theory-Driven Process Evaluation 191
Theory-Driven Process Evaluation and
Unintended Effects 198
Questions for Reflection 199
PART IV: Program Monitoring
and Outcome Evaluation 201
Chapter 8. Program Monitoring and the Development
of a Monitoring System 203
What Is Program Monitoring? 203
Process Monitoring 204
Uses of Process-Monitoring Data 205
Process Monitoring Versus Process Evaluation 205
Outcome Monitoring 206
Identification of Goals 206
Outcome Measures and Data Collection 207
Outcome Monitoring Versus Outcome Evaluation 207
Program-Monitoring Systems Within Organizations 208
Program-Monitoring System Elements 209
Developing a Program-Monitoring System 210
An Example of Developing a Program
Monitoring/Evaluation System 211
Questions for Reflection 229
Chapter 9. Constructive Outcome Evaluations 230
Constructive Outcome Evaluation 230
SMART Goals 231
Specific 232
Measurable 232
Attainable 232
Relevant 232
Time-Bound 233
Putting SMART Characteristics Together in Goals 233
Evaluability Assessment 234
Step 1: Involve the Intended Users of Evaluation
Information 235
Step 2: Clarify the Intended Program 235
Step 3: Explore the Program’s Reality 236
Step 4: Reach Agreement on Any Needed Program
Changes 236
Step 5: Explore Alternative Evaluation Designs 236
Step 6: Agree on the Evaluation’s Priority and How
Information From the Evaluation Will Be Used 236
Plausibility Assessment/Consensus-Building Approach 237
Potential Problems of Evaluation That Are Based Mainly
on Official Goals 237
Plausibility Assessment/Consensus-Building Approach 239
A Preview of Conclusive Outcome Evaluation: Selecting an
Appropriate Approach 245
Questions for Reflection 246
Chapter 10. The Experimentation Evaluation Approach
to Outcome Evaluation 247
The Foundation of the Experimentation Approach
to Outcome Evaluation 247
The Distinction Between Internal Validity and External
Validity in the Campbellian Validity Typology 248
Threats to Internal Validity 249
Research Designs for Ruling Out Threats to Internal Validity 250
Experimental Designs 251
Pre-Experimental Designs 253
Quasi-Experimental Designs 256
Questions for Reflection 258
Chapter 11. The Holistic Effectuality Evaluation Approach
to Outcome Evaluation 260
Ongoing Debates Over the Experimentation Evaluation Approach 260
Efficacy Evaluation Versus Effectiveness Evaluation 263
Relationships Among the Experimentation Evaluation
Approach and the Campbellian Validity Typology 264
The Holistic Effectuality Evaluation Approach 266
The Experimentation Evaluation Approach’s
Conceptualization of Outcome Evaluation 267
The Holistic Effectuality Approach’s
Conceptualization of Outcome Evaluation 267
Constructive Assessment and Conclusive Assessment:
Theory and Methodology 269
Constructive Assessment 270
Conclusive Assessment 275
The Zumba Weight Loss Project 275
Why Adjuvants Are Needed for Real-World Programs 281
Types of Adjuvants and Threats to Internal Validity 282
Methodology for Real-World Outcome Evaluation 284
Assessing the Joint Effects of an Intervention
and Its Adjuvants 284
Eliminating Potential Biases 285
Research Steps for Assessing Real-World Effects 286
Inquiring Into the Process of Contextualizing
an Intervention in the Real-World Setting 286
Using a Relatively Unobtrusive Quantitative
Design to Address Biases and Assess Change 288
Using an Auxiliary Design to Triangulate Evidence 289
Replication of the Mesological Intervention (Optional) 291
Example of a Real-World Outcome Evaluation 291
Checklist for Ranking Real-World Evaluations 296
Usefulness of the Holistic Effectuality Evaluation Approach 298
Providing Theory and Methodology
for Real-World Outcome Evaluation 298
Providing Insight Into the Relationship Between
Adjuvants and Internal Validity 298
Inspiring Evaluators to Develop Indigenous Evaluation
Theories and Methodologies 299
The Experimentation Evaluation Approach Versus the
Holistic Effectuality Evaluation Approach 299
Pure Independent Effects 300
Real-World Joint Effects 300
Building Evidence From the Ground Up 301
Questions for Reflection 302
Chapter 12. The Theory-Driven Approach to Outcome Evaluation 304
Clarifying Stakeholders’ Implicit Theory 306
Making Stakeholder Theory Explicit 306
Building Consensus Among Stakeholders
Regarding Program Theory 309
Guidelines for Conducting Theory-Driven Outcome Evaluation 309
Types of Theory-Driven Outcome Evaluation 310
The Intervening Mechanism Evaluation Approach 312
Two Models of the Intervening Mechanism
Evaluation Approach 313
Some Theoretical Bases of the Intervening
Mechanism Evaluation 318
When to Use an Intervening Mechanism
Evaluation Approach 319
The Moderating Mechanism Evaluation Approach 321
Constructing Moderating Mechanism Evaluation Models 322
Examples of Moderating Mechanism Evaluation 323
Advanced Moderating Mechanism Models 324
When to Use a Moderating Mechanism Evaluation Approach 327
The Integrative Process/Outcome Evaluation Approach 327
Research Methods and Strategies Associated With
Integrative Process/Outcome Evaluation 329
Examples of Integrative Process/Outcome Evaluation 329
Theory-Driven Outcome Evaluation and Unintended Effects 333
Formal Specification of Possible Unintended Effects 334
Field Study Detection of Unintended Effects
of Implementation 335
A Reply to Criticisms of Theory-Driven Outcome Evaluation 336
Questions for Reflection 338
PART V: Advanced Issues in Program Evaluation 341
Chapter 13. What to Do if Your Logic Model Does
Not Work as Well as Expected 343
A Diversity Enhancement Project 344
Description of the Project 344
Applying the Logic Model 345
Applying the Action Model/Change Model Schema 345
A Community Health Initiative 350
Description of the Project 350
Applying the Logic Model 352
Applying the Action Model/Change Model Schema 355
A Guide to Productively Applying the Logic Model
and the Action Model/Change Model Schema 361
System Change and Evaluation in the Future 362
Questions for Reflection 363
Chapter 14. Formal Theories Versus Stakeholder Theories in
Interventions: Relative Strengths and Limitations 365
Formal Theory Versus Stakeholder-Implicit Theory
as a Basis for Intervention Programs 365
Intervention Programs Based on Formal Theory 365
Programs Based on Stakeholder Theory 366
Views on the Relative Value of Formal
Theory-Based Interventions and
Stakeholder Theory-Based Interventions 369
Formal Theory Versus Stakeholder Theory: A Case Study 371
Program Theory Underlying the Anti–Secondhand
Smoking Program 371
Action Model 373
Outcome Evaluation Design and Change Model 376
Process Evaluation Design 377
Evaluation Findings 377
Results of Process Evaluation 377
Outcome Evaluation Results 378
Relative Strengths and Limitations of Formal Theory-Based
Intervention and Stakeholder Theory-Based Intervention 379
Theoretical Sophistication and Prior Evidence 379
Efforts to Clarify the Change Model and Action Model in
Program Theory 380
Efficacious Evidence Versus Real-World Effectiveness 380
Viability 381
Action Theory Success and Conceptual Theory Success 382
Lessons Learned From the Case Study 383
Questions for Reflection 387
Chapter 15. Evaluation and Dissemination: Top-Down
Approach Versus Bottom-Up Approach 388
The Top-Down Approach to Transitioning From Evaluation to
Dissemination 388
Lessons Learned From Applying the Top-Down
Approach to Program Evaluation 389
Integrative Cogency Model: The Integrated Evaluation
Perspective 394
Effectual Cogency 395
Viable Cogency 396
Transferable Cogency 398
Evaluation Approaches Related to the Integrative
Cogency Model 398
Effectuality Evaluation 398
Viability Evaluation 399
Transferability Evaluation 399
The Bottom-Up Approach to Transitioning From
Evaluation to Dissemination 400
The Bottom-Up Approach 400
The Bottom-Up Approach and Social Betterment/Health
Promotion Programs 401
Types of Intervention for the Bottom-Up Approach 403
The Current Version of Evidence-Based Interventions:
Limitations and Strategies to Address Them 403
The Integrated Evaluation Perspective on Concurrent
Cogency Approaches 406
Focusing on Effectual Cogency 407
Focusing on Viable Cogency 407
Optimizing Approach 408
The Usefulness of the Bottom-Up Approach
and the Integrative Cogency Model 408
Questions for Reflection 410
References 412
Index 426
Preface
I have been practicing program evaluation for a few decades. My practice
has greatly benefited from conventional evaluation theories and approaches.
However, on many occasions, I have also found that conventional evaluation theories and approaches do not work as well as they are supposed to. I
have been contemplating and working on how to expand them or develop
alternative theories and approaches that will better serve evaluation in the
future. I planned to discuss my experiences and lessons learned from these
efforts in the second edition of Practical Program Evaluation so that evaluators, new or seasoned, would not only learn both traditional and cutting-edge concepts but also have opportunities to participate in further advancing program evaluation. However, this plan has frequently been stymied. One reason is that the more I study the issues, the more complicated they become. I sometimes felt as though I was constantly banging my head against the proverbial
wall. Luckily, I found I was not the only person having these frustrations and
struggling with these problems. The following friends and colleagues have
provided timely encouragement and advice that have been crucial to my finishing the book: Thomas Chapel, Amy DeGroff, Stewart Donaldson, Jennifer
Greene, Brian Lien, Lorine Spencer, Jonathan Morell, Craig Thomas, Nannette
Turner, and Jennifer Urban. I am indebted greatly to them for their support of
the project. I am also grateful for the valuable feedback from the following
reviewers: Darnell J. Bradley, Cardinal Stritch University; C. W. Cowles,
Central Michigan University; and Mario A. Rivera, University of New Mexico.
Any shortcomings of this book are entirely my own.
Furthermore, work on the book was also frequently disrupted by other, more pressing tasks. Helen Salmon, my SAGE editor, issued gentle ongoing reminders and
patiently checked on my progress every step of the way. Without her persistent
nudging, I would not have been able to meet the deadline. I also appreciate my
research assistants, Joanna Hill and Mauricia Barnett, for their help in preparing the questions for reflection and the tables that appear in the book. With so
much time and effort spent, it is a great joy for me to see this book reach
fruition.
Special Features
of the Book
This book is about program evaluation in action, and to that end it does the
following:
1. Provides a comprehensive evaluation typology that facilitates the systematic identification of stakeholders’ needs and the selection of the evaluation
options best suited to meet those needs. Almost always, program evaluation is
initiated to meet the particular evaluation needs of a program’s stakeholders. If
a program evaluation is to be useful to those stakeholders, it is their expectations
that evaluators must keep in mind when designing the evaluation. The precise
communication and comprehension of stakeholder expectations is crucial; to
facilitate the communication process, this book presents a comprehensive evaluation typology for the effective identification of evaluation needs. Within this
typology, the book provides a variety of evaluation approaches suitable across a
program’s life cycle—from program planning to initial implementation, mature
implementation, and outcome achievement—to enrich the evaluator’s toolbox.
Once the stakeholders’ expectations are identified, evaluators must select a strategy for addressing each evaluation need. Many evaluation options are available.
The book discusses them, exploring the pros and cons of each and acknowledging that trade-offs sometimes must be made. Furthermore, it suggests practical
principles that can guide evaluators to make the best choices in the evaluation
situations they are likely to encounter.
2. Introduces both conventional and cutting-edge evaluation perspectives
and approaches. The core of program evaluation is its body of concepts, theories, and methods, which provides evaluators with the principles, strategies, and tools needed for conducting evaluations. As will be demonstrated in the book, cutting-
edge evaluation approaches have been developed to further advance program
evaluation by thinking outside the proverbial box. Evaluators can do better
evaluations if they are familiar and competent with both conventional and
innovative evaluation perspectives and approaches. This book systematically
introduces the range of options and discusses the conditions under which they
can be fruitfully applied.
3. Puts each approach into action. Using illustrative examples from the
field, the book details the methods and procedures involved in using various
evaluation options. How does the program evaluator carry out an evaluation
so as to meet real evaluation needs? Here, practical approaches are discussed—
yet this book avoids becoming a “cookbook.” The principles and strategies of
evaluation that it presents are backed by theoretical justifications, which are
also explained. This context, it is hoped, fosters the latitude, knowledge, and
flexibility with which program evaluators can design suitable evaluation models for a particular evaluation project and better serve stakeholders’ needs.
About the Author
Huey T. Chen is Professor in the Department of Public Health and Director
of the Center for Evaluation and Applied Research in the College of Health
Professions at Mercer University. He previously served as branch chief and
senior evaluation scientist at the Centers for Disease Control and Prevention
(CDC), as well as Professor at the University of Alabama at Birmingham. Dr.
Chen has worked with community organizations, health-related agencies,
government agencies, and educational institutions. He has conducted both
large-scale and small-scale evaluations in the United States and internationally, including evaluating a drug abuse treatment program and a youth service program in Ohio, a carbon monoxide ordinance in North Carolina, a
community health initiative in New Jersey, a juvenile delinquency prevention
and treatment policy in Taiwan, and an HIV prevention and care initiative in
China. He has written extensively on program theory, theory-driven evaluation, the bottom-up evaluation approach, and the integrated evaluation perspective. In addition to publishing over 70 articles in peer-reviewed journals, he
is the author of several evaluation books. His book Theory-Driven Evaluations
(1990, SAGE) is seen as one of the landmarks in program evaluation. His book
Practical Program Evaluation: Theory-Driven Evaluation and the Integrated
Evaluation Perspective, Second Edition (2015, SAGE) introduces cutting-edge
evaluation approaches and illustrates the benefits of thinking outside the proverbial box. Dr. Chen serves on the editorial advisory board of Evaluation and
Program Planning and is a winner of the American Evaluation Association’s
Lazarsfeld Award for Evaluation Theory and of the Senior Biomedical Service
Award from the CDC for his evaluation work.
Part I
Introduction
The first three chapters of this book, which comprise Part I, provide general information
about the theoretical foundations and applications of program evaluation principles.
Basic ideas are introduced, and a conceptual framework is presented. The first chapter
explains the purpose of the book and discusses the nature, characteristics, and strategies
of program evaluation. In Chapter 2, program evaluators will find a systematic typology
of the various evaluation approaches one can choose among when faced with particular
evaluation needs. Chapter 3 introduces the concepts of logic models and program theory,
which underlie many of the guidelines found throughout the book.
Chapter 1. Fundamentals of Program Evaluation
The programs that evaluators can expect to assess go by different names, such as treatment program, action program, or intervention program. These programs come from
different substantive areas, such as health promotion and care, education, criminal justice,
welfare, job training, community development, and poverty relief. Nevertheless, they all
have in common organized efforts to enhance human well-being—whether by preventing disease, reducing poverty, reducing crime, or teaching knowledge and skills. For convenience, programs and policies of any type are usually referred to in this book as “intervention programs” or simply “programs.” An intervention program intends to change individuals’
or groups’ knowledge, attitudes, or behaviors in a community or society. Sometimes, an
intervention program aims at changing the entire population of a community; this kind of
program is called a population-based intervention program.
The Nature of Intervention Programs
and Evaluation: A Systems View
The terminology of systems theory (see, e.g., Bertalanffy, 1968; Ryan & Bohman, 1998)
provides a useful means of illustrating how an intervention program works as an open
system, as well as how program evaluation serves the program. In a general sense, as an open system, an intervention program consists of five components (inputs, transformation, outputs, environment, and feedback), as illustrated in Figure 1.1.
Figure 1.1   A Systems View of a Program (input → transformation → output, with the environment surrounding the system and a feedback loop)
Inputs. Inputs are resources the program takes in from the environment. They
may include funding, technology, equipment, facilities, personnel, and clients.
Inputs form and sustain a program, but they cannot work effectively without
systematic organization. Usually, a program requires an implementing organization that can secure and manage its inputs.
Transformation. A program converts inputs into outputs through transformation.
This process, which begins with the initial implementation of the treatment/intervention prescribed by a program, can be described as the stage during which
implementers provide services to clients. For example, the implementation of a new
curriculum in a school may mean the process of teachers teaching students new
subject material in accordance with existing instructional rules and administrative
guidelines. Transformation also includes those sequential events necessary to
achieve desirable outputs. For example, to increase students’ math and reading
scores, an education program may need to first boost students’ motivation to learn.
Outputs. These are the results of transformation. One crucial output is the
attainment of the program’s goals, which justifies the existence of the program.
For example, an output of a treatment program directed at individuals who
engage in spousal abuse is the end of the abuse.
Environment. The environment consists of any factors that, despite lying out-
side a program’s boundaries, can nevertheless either foster or constrain that
program’s implementation. Such factors may include social norms, political
structures, the economy, funding agencies, interest groups, and concerned
citizens. Because an intervention program is an open system, it depends on the
environment for its inputs: clients, personnel, money, and so on. Furthermore,
the continuation of a program often depends on how the general environment
reacts to program outputs. Are the outputs valuable? Are they acceptable? For
example, if the staff of a day care program is suspected of abusing children, the
environment would find that output unacceptable. Parents would immediately
remove their children from the program, law enforcement might press criminal
charges, and the community might boycott the day care center. Finally, the
effectiveness of an open system, such as an intervention program, is influenced
by external factors such as cultural norms and economic, social, and political
conditions. A contrasting system may be illustrative: In a biological system, the
use of a medicine to cure an illness is unlikely to be directly influenced by
external factors such as race, culture, social norms, or poverty.
Feedback. So that decision makers can maintain success and correct any prob-
lems, an open system requires information about inputs and outputs, transfor-
mation, and the environment’s responses to these components. This feedback is
the basis of program evaluation. Decision makers need information to gauge
whether inputs are adequate and organized, interventions are implemented
appropriately, target groups are being reached, and clients are receiving quality
services. Feedback is also critical to evaluating whether outputs are in align-
ment with the program’s goals and are meeting the expectations of stakehold-
ers. Stakeholders are people who have a vested interest in a program and are
likely to be affected by evaluation results; they include funding agencies, decision
makers, clients, program managers, and staff. Without feedback, a system is
bound to deteriorate and eventually die. Insightful program evaluation helps to
both sustain a program and prevent it from failing. The action of feedback
within the system is indicated by the dotted lines in Figure 1.1.
To survive and thrive within an open system, a program must perform at least
two major functions. First, internally, it must ensure the smooth transformation of
inputs into desirable outcomes. For example, an education program would experi-
ence negative side effects if faced with disruptions like high staff turnover, excessive
student absenteeism, or insufficient textbooks. Second, externally, a program must
continuously interact with its environment in order to obtain the resources and
support necessary for its survival. That same education program would become
quite vulnerable if support from parents and school administrators disappeared.
Thus, because programs are subject to the influence of their environment,
every program is an open system. The characteristics of an open system can
also be identified in any given policy, which is a concept closely related to that
of a program. Although policies may seem grander than programs—in terms of
the envisioned magnitude of an intervention, the number of people affected,
and the legislative process—the principles and issues this book addresses are
relevant to both. Throughout the rest of the book, the word program may be
understood to mean program or policy.
Based upon the above discussion, this book defines program evaluation as the
process of systematically gathering empirical data and contextual information
about an intervention program—specifically answers to what, who, how, whether,
and why questions that will assist in assessing a program’s planning, implementa-
tion, and/or effectiveness. This definition suggests many potential questions for
evaluators to ask during an evaluation: The “what” questions include those such
as, what are the intervention, outcomes, and other major components? The “who”
questions might be, who are the implementers and who are the target clients? The
“how” questions might include, how is the program implemented? The “whether”
questions might ask whether the program plan is sound, the implementation
adequate, and the intervention effective. And the “why” questions could be, why
does the program work or not work? One of the essential tasks for evaluators is
to figure out which questions are important and interesting to stakeholders and
which evaluation approaches are available for evaluators to use in answering the
questions. These topics will be systematically discussed in Chapter 2. The purpose
of program evaluation is to make the program accountable to its funding agencies,
decision makers, or other stakeholders and to enable program management and
implementers to improve the program’s delivery of acceptable outcomes.
Classic Evaluation Concepts, Theories, and
Methodologies: Contributions and Beyond
Program evaluation is a young applied science; it began developing as a disci-
pline only in the 1960s. Its basic concepts, theories, and methodologies have
been developed by a number of pioneers (Alkin, 2013; Shadish, Cook, &
Leviton, 1991). Their ideas, which are foundational knowledge for evaluators,
guide the design and conduct of evaluations. These concepts are commonly
introduced to readers in two ways. The conventional way is to introduce classic
concepts, theories, and methodologies exactly as proposed by these pioneers.
Most major evaluation textbooks use this popular approach.
This book, however, not only introduces these classic concepts, theories, and
methodologies but also demonstrates how to use them as a foundation for
formulating additional evaluation approaches. Readers can not only learn from
evaluation pioneers’ contributions but also expand or extend their work,
informed by lessons learned from experience or new developments in program
evaluation. However, there is a potential drawback to taking this path. It
requires discussing the strengths and limitations of the work of the field’s pio-
neers. Such critiques may be regarded as intended to diminish or discredit this
earlier work. It is important to note that the author has greatly benefited from
the classic works in the field’s literature and is very grateful for the contribu-
tions of those who developed program evaluation as a discipline. Moreover, the
author believes that these pioneers would be delighted to see future evaluators
follow in their footsteps and use their accomplishments as a basis for exploring
new territory. In fact, the seminal authors in the field would be very upset if
they saw future evaluators still working with the same ideas, without making
progress. It is in this spirit that the author critiques the literature of the field,
hoping to inspire future evaluators to further advance program evaluation.
Indeed, the extension or expansion of understanding is essential for advanc-
ing program evaluation. Readers will be stimulated to become independent
thinkers and feel challenged to creatively apply evaluation knowledge in their
work. Students and practitioners who read this book will gain insights from the
discussions of different options, formulate their own views of the relative worth
of these options, and perform better work as they go forward in their careers.
Evaluation Typologies
Stakeholders need two kinds of feedback from evaluation. The first kind is infor-
mation they can use to improve a program. Evaluations can function as improve-
ment-oriented assessments that help stakeholders understand whether a program
is running smoothly, whether there are problems that need to be fixed, and how
to make the program more efficient or more effective. The second kind of feed-
back evaluations can provide is an accountability-oriented assessment of whether
or not a program has worked. This information is essential for program manag-
ers and staff to fulfill their obligation to be accountable to various stakeholders.
Different styles of evaluation have been developed to serve these two types
of feedback. This section will first discuss Scriven’s (1967) classic distinction
between formative and summative evaluation and then introduce a broader
evaluation typology.
The Distinction Between Formative
and Summative Evaluation
Scriven (1967) made a crucial contribution to evaluation by introducing the
distinction between formative and summative evaluation. According to Scriven,
formative evaluation fosters improvement of ongoing activities. Summative evalua-
tion, on the other hand, is used to assess whether results have met the stated goals.
Summative evaluation informs the go or no-go decision, that is, whether to continue
or repeat a program or not. Scriven initially developed this distinction from his
experience of curriculum assessment. He viewed the role of formative evaluation in
relation to the ongoing improvement of the curriculum, while the role of summative
evaluation serves administrators by assessing the entire finished curriculum. Scriven
(1991a) provided more elaborated descriptions of the distinction. He defined for-
mative evaluation as “evaluation designed, done, and intended to support the pro-
cess of improvement, and normally commissioned or done, and delivered to
someone who can make improvement” (p. 20). In the same article, he defined sum-
mative evaluation as “the rest of evaluation; in terms of intentions, it is evaluation
done for, or by, any observers or decision makers (by contrast with developers) who
need valuative conclusions for any other reasons besides development.” The distinct
purposes of these two kinds of evaluation have played an important role in the way
that evaluators communicate evaluation results to stakeholders.
Scriven (1991a) indicated that the best illustration of the distinction between
formative and summative evaluation is the analogy given by Robert Stake: “When
the cook tastes the soup, that’s formative evaluation; when the guest tastes it,
that’s summative evaluation” (p. 19). The cook tastes the soup while it is
cooking in case, for example, it needs more salt. Hence, formative evaluation hap-
pens in the early stages of a program so the program can be improved as needed.
On the other hand, the guest tastes the soup after it has finished cooking and is
served. The cook could use the guest’s opinion to determine whether to serve the
soup to other guests in the future. Hence, summative evaluation happens in the
last stage of a program and emphasizes the program’s outcome.
Scriven (1967) placed a high priority on summative evaluation. He argued
that decision makers can use summative evaluation to eliminate ineffective
programs and avoid wasting money. However, Cronbach (1982) disagreed with
Scriven’s view, arguing that program evaluation is most useful when it provides
information that can be used to strengthen a program. He also implied that few
evaluation results are used for making go or no-go decisions. Which type of
evaluation has a higher priority is an important issue for evaluators, and the
importance of this issue will be revisited later in this chapter.
Analysis of the Formative and Summative Distinction
The distinction between formative and summative evaluation provides an impor-
tant framework evaluators can use to communicate ideas and develop approaches,
and these concepts will continue to play an important role. However, Scriven
(1991a) proposed that formative and summative evaluations are the two main
evaluation types. In reality, there are other important evaluation types that are not
covered in this distinction. To avoid confusion and to lay a foundation for advanc-
ing the discipline, it is important to highlight these other evaluation types as well.
In Scriven’s conceptualization, evaluation serves to improve a program only
during earlier stages of the program (formative evaluation), while evaluation
renders a final verdict at the outcome stage (summative evaluation). However,
this conceptualization may not sufficiently cover many important evaluation
activities (Chen, 1996). For example, evaluations at the early stage of the pro-
gram do not need to be used to improve the program. Evaluators could admin-
ister summative evaluations during earlier phases of the program. Similarly,
evaluations conducted at the outcome stage do not have to be summative.
Evaluators could administer a formative evaluation at the outcome stage to
gain information that would inform and improve future efforts.
Since Scriven regarded Robert Stake’s soup-tasting analogy as the best way to
illustrate the formative/summative distinction, let’s use that analogy to show
that not all evaluations fit the distinction. According to Stake’s analogy, when
“the cook tastes the soup,” that act represents formative evaluation. This concept
of formative evaluation has some limitations. The cook does not always taste the
soup for the purpose of improvement. The cook may taste the soup to determine
whether the soup is good enough to serve to the guests at all, especially if it is a
new recipe. Upon tasting the soup, she/he may feel it is good enough to serve to
the guests; alternatively, she/he may decide that the soup is awful and not worth
improving and simply chuck the soup and scratch it off the menu. In this case,
the cook has not tasted the soup for the purpose of improvement but to reach a
conclusion about including the soup or excluding it from the menu.
To give another illustration, a Chinese cook, who is a friend of mine, once tried
to prepare a new and difficult dish, called Peking duck, for his restaurant. Tasting
his product, he found that the skin of the duck was not as crispy as it was sup-
posed to be, nor the meat as flavorful. Convinced that Peking duck was beyond
his capability as a chef, he decided not to prepare the dish again. Again, the cook
tasted the product to conduct a summative assessment rather than a formative
one. The formative/summative distinction does not cover this kind of evaluation.
Returning to Stake’s analogy, when “the guest tastes the soup,” this is
regarded as a summative evaluation since the guest provides a conclusive opin-
ion of the soup. This concept of summative evaluation also has limitations. For
example, the opinion of the guests is not always used solely to determine the
soup’s final merit. Indeed, a cook might well elicit opinions from the guests for
the purpose of improving the soup in the future. In this case, this type of
evaluation is also not covered by the formative/summative distinction.
Stake’s analogy, though compelling, excludes many evaluation activities.
Thus, we need a broader conceptual typology so as to more comprehensively
communicate or guide evaluation activities.
A Fundamental Evaluation Typology
To include more evaluation types in the language used to communicate and
guide evaluation activities, this chapter proposes to extend Scriven’s formative
and summative distinction. The typology developed here is a reformulation of
earlier work by Chen (1996). This typology has two dimensions: program
stages and evaluation functions. In terms of program stages, evaluation can
focus on program process (such as program implementation) and/or on pro-
gram outcome (such as the impact of the program on its clients). In terms of
evaluation functions, evaluation can serve a constructive function (providing
information for improving a program) and/or a conclusive function (judging
the overall merit or worth of a program). A fundamental typology of evalua-
tion can thus be developed by placing program stages and evaluation functions
in a matrix, as shown in Figure 1.2.
[Figure 1.2 is a matrix crossing program stages (rows: process and outcome) with evaluation functions (columns: constructive, conclusive, and hybrid types of evaluation). Its cells are constructive process evaluation, conclusive process evaluation, and conclusive/constructive process evaluation in the process row, and constructive outcome evaluation, conclusive outcome evaluation, and conclusive/constructive outcome evaluation in the outcome row; other hybrid types of evaluation combine program stages as well as functions.]
Figure 1.2   Fundamental Evaluation Typology
SOURCE: Adapted from Chen (1996).
This typology consists of both basic evaluation types and hybrid evaluation
types. The rest of this section will discuss the basic types first and then the
hybrid types.
Basic Evaluation Types
The basic types of evaluation include constructive process evaluation, con-
clusive process evaluation, constructive outcome evaluation, and conclusive
outcome evaluation.
Constructive Process Evaluation
Constructive process evaluation provides information about the relative
strengths/weaknesses of the program’s structure or implementation pro-
cesses, with the purpose of program improvement. Constructive process
evaluation usually does not provide an overall assessment of the success or
failure of program implementation. For example, a constructive process
evaluation of a family-planning program may indicate that more married
couples can be persuaded to utilize birth control in an underdeveloped coun-
try if the service providers or counselors are local people, rather than outside
health workers. This information does not provide a conclusive judgment of
the merits of program implementation, but it is useful for improving the
program. Decision makers and program designers can use the information to
strengthen the program by training more local people to become service
providers or counselors.
Conclusive Process Evaluation
This type of evaluation, which is frequently used, is conducted to judge the
merits of the implementation process. Unlike constructive process evaluation,
conclusive process evaluation attempts to judge whether the implementation of a
program is a success or a failure, appropriate or inappropriate. A good example
of conclusive process evaluation is an assessment of whether program services are
being provided to the target population. If an educational program intended to
serve disadvantaged children is found to serve middle-class children, the program
would be considered an implementation failure. Another good example of conclu-
sive process evaluation is manufacturing quality control, in which a product is
rejected if it fails to meet certain criteria. Vivid examples of conclusive process
evaluation are the investigative reports seen on popular TV programs, such as 60
Minutes and 20/20. In these programs, reporters use hidden cameras to document
whether services delivered by such places as psychiatric hospitals, nursing homes,
child care centers, restaurants, and auto repair shops are appropriate.
Constructive Outcome Evaluation
This type of evaluation identifies the relative strengths and/or weaknesses of
program elements in terms of how they may affect program outcomes. This
information can be useful for improving the degree to which a program is
achieving its goals, but it does not provide an overall judgment of program
effectiveness. For example, evaluators may facilitate a discussion among stake-
holders to develop a set of measurable goals or to reach consensus about pro-
gram goals. Again, such activity is useful for improving the program’s chance
of success, but it stops short of judging the overall effectiveness of the program.
This type of evaluation will be discussed in detail in Chapter 9. In another
example, a service agency may have two types of social workers, case managers
whose work is highly labor-intensive and care managers whose work is less
labor-intensive. An evaluator can apply constructive outcome evaluation to
determine which kind of social worker is more cost-effective for the agency.
Conclusive Outcome Evaluation
The purpose of a conclusive outcome evaluation is to provide an overall
judgment of a program in terms of its merit or worth. Scriven’s summative
evaluation is synonymous with this category. A typical example of conclusive
outcome evaluation is validity-focused outcome evaluation that determines
whether changes in outcomes can be causally attributed to the program’s inter-
vention. This kind of evaluation is discussed in detail in Chapter 10.
The typology outlined above eliminates some of the difficulties found in the
soup-tasting analogy. Formerly, when the cook tasted the soup for conclusive
judgment purposes, this activity did not fit into the formative/summative dis-
tinction. However, it can now be classified as conclusive process evaluation.
Similarly, when the guest tastes the soup for improvement purposes, this action
can now be classified as constructive outcome evaluation.
Furthermore, the typology clarifies the myth that process evaluation is always
a kinder, gentler type of evaluation in which evaluators do not make tough con-
clusive judgments about the program. Constructive process evaluation may be
kinder and gentler, but conclusive process evaluation is not necessarily so. For
example, TV investigative reports that expose the wrongdoing in a psychiatric
hospital, auto shop, restaurant, or day care center have resulted in changes in
service delivery, the firing of managers and employees, and even the closing of
the agencies or businesses in question. In such cases, process evaluations were
tougher than many outcome evaluations in terms of critical assessment and
impact. Moreover, the basic typology disrupts the notion that outcome evalua-
tion must always be carried out with a “macho” attitude so that it threatens
program providers while failing to offer any information about the program. A
conclusive outcome evaluation may provide information about whether a program has
been successful or not, while a constructive outcome evaluation can provide useful
information for enhancing the effectiveness of a program without threatening
its existence. For example, the survival of a program is not threatened by a
constructive outcome evaluation that indicates that program effectiveness could
be improved by modifying some intervention elements or procedures.
Hybrid Evaluation Types
Another important contribution of this fundamental evaluation typology is
to point out that evaluators can move beyond the basic evaluation types to
conduct hybrid evaluations. As illustrated in Figure 1.2, a hybrid evaluation
can combine evaluation functions, program stages, or both (Chen, 1996). This
section introduces two types of hybrid evaluation that combine evaluation
functions at a given program stage.
Conclusive/Constructive Process Evaluation
Conclusive/constructive process evaluation serves both accountability and
program improvement functions. A good example is evaluation carried out by
the Occupational Safety and Health Administration (OSHA). OSHA inspectors
may evaluate a factory to determine whether the factory passes a checklist of
safety and health rules and regulations. The checklist is so specific, however,
that these inspections can also be used for improvement. If a company fails the
inspection, the inspector provides information concerning areas that need cor-
rection to satisfy safety standards. Other regulatory agencies, such as the
Environmental Protection Agency (EPA), perform a similar type of evaluation.
In these kinds of evaluation, the overall quality of implementation is repre-
sented by a checklist of crucial elements. These elements provide exact clues for
how to comply with governmental regulations.
A similar principle can be applied to assess the implementation of an inter-
vention. As will be discussed in Chapter 7, a conclusive/constructive process
evaluation can look into both overall quality and discrete program elements so
as to provide information about the overall quality of implementation as well
as specific areas for its future improvement.
Conclusive/Constructive Outcome Evaluation
Another hybrid evaluation type is the conclusive/constructive outcome
evaluation. An excellent example of this kind of evaluation is real-world out-
come evaluation, which will be discussed in great detail in Chapter 11. Another
excellent example is theory-driven outcome evaluation. This type of evaluation
elaborates the causal mechanisms underlying a program so that it examines not
only whether the program has an impact but also why. It also informs stakeholders
as to which mechanisms influence program success or failure for program
improvement purposes. Theory-driven outcome evaluation will be discussed in
Chapters 12 and 14 of the book.
Applications of the Fundamental Evaluation Typology
The fundamental evaluation typology discussed here prevents evaluators from
hewing rigidly to just two types of evaluation, that is, formative evaluation in the
early stages of the program and summative evaluation toward the end. The funda-
mental evaluation typology provides evaluators and stakeholders many options for
devising basic or hybrid types of evaluation at implementation and outcome stages
so as to best meet stakeholders’ needs. However, the fundamental evaluation typol-
ogy does not cover the planning stage. Thus, Chapter 2 will expand the fundamen-
tal evaluation typology into a comprehensive evaluation typology that covers a full
program cycle from program planning to implementation to outcome. Then the
rest of the book will provide concrete examples of these evaluation approaches and
illustrate their applications across the entire life cycle of programs.
Internal Versus External Evaluators
Evaluators are usually classified into two categories: internal and external evalu-
ators. Internal evaluators are employed by an organization and are responsible
for evaluating the organization’s own programs. External evaluators are not
employees of the organization but are experts hired from outside to evaluate the
program. One of the major differences between the two is independence. Internal
evaluators are part of the organization. They are familiar with the organizational
culture and the programs to be evaluated. Like other employees, they share a
stake in the success of the organization. External evaluators are not constrained
by organizational management and relationships with staff members and are less
invested in the program’s success. The general conditions that tend to favor either
internal evaluation or external evaluation are summarized as follows:
Internal Evaluation
• Cost is a great concern.
• Internal capacity/resources are available.
• The evaluator’s familiarity with the program is important.
• The program is straightforward.
• Evaluation is for the purpose of monitoring or is constructive in nature.
External Evaluation
• The cost of hiring an external evaluator is manageable.
• Independence and objectivity are essential.
• A program is large or complicated.
• The evaluation will focus on conclusive assessment or conclusive/constructive assessment.
• Comprehensive assessment or fresh insight is needed.
Politics, Social Justice, Evaluation
Standards, and Ethics
One important distinction that separates program evaluation from research is
that evaluations are carried out under political processes. The purpose of an
evaluation is to evaluate an intervention program. However, the program is
created by political processes. What kinds of programs are to be funded? Which
programs need evaluation in a community? These decisions are made through
bargaining and negotiation by key players such as politicians and advocacy
groups. After a program is funded and evaluators are hired to evaluate it, the
focus of the evaluation and the questions to be asked are determined, or largely
influenced, by stakeholders. Cronbach and colleagues (1980) argued that a
theory of evaluation must be as much a theory of political interaction as it is a
theory of how to determine facts. Weiss (1998), too, indicated that evaluators
must understand the political nature of evaluations and be aware of the obsta-
cles and opportunities that can impinge upon evaluation efforts.
Since evaluation provides feedback to a program, evaluators may have high
hopes that decision makers will use the findings as a basis for action. However,
since program evaluation is part of political processes, evaluation findings are just
one of many inputs that decision makers use. Decision making is more often based
on factors such as political support and community service needs than evaluation
findings. Since evaluations take place within a political and an organizational
context, Chelimsky (1987) stated that evaluators are shifting their view of the role
evaluations play, from reforming society to the more realistic aim of bringing the
best possible information to bear on a wide variety of policy questions. Also
because evaluation takes place in a political environment, evaluators’ communica-
tion skills are critical. Evaluators’ qualifications should include research skills but
should emphasize group facilitation skills, political adroitness, managerial ability,
and cultural sensitivity to multiple stakeholders.
In evaluation, stakeholders are those persons, groups, or organizations who
have a vested interest in the evaluation results. Stakeholders often are not a
homogeneous group but rather multiple groups with different interests, priorities, and
degrees of power or influence. The number of stakeholder groups evaluators must
communicate with often depends on the magnitude of an intervention program. In
a small community-based program, key stakeholders may include the program
director, staff, and clients. Stakeholder groups of a large federal program, on the
other hand, could include federal agencies, state agencies, community-based orga-
nizations, university researchers, clients, program directors, program administra-
tors, implementers, community advocates, computer experts, and so on.
Evaluators are usually hired by decision makers, and one of the major pur-
poses of program evaluation is to provide information to decision makers that
they will use to allocate funds or determine program activities. This contractual
arrangement has a potential to bias evaluators toward the groups in power,
that is, the decision makers who hire them or the stakeholders with whom the
decision makers are most concerned. Critics such as House (1980) argued that
evaluation should address social justice and specifically the needs and interests
of the poor and powerless. However, Scriven (1997) and Chelimsky (1997)
were concerned that when evaluators take on the role of program advocates,
their evaluations’ credibility will be tarnished.
Social justice is a difficult issue in evaluation. Participatory evaluation has
the potential to alleviate some of the tension between serving social justice and
decision makers. Including representatives of the various stakeholder groups in
evaluation has been proposed as a way to address some social justice issues.
Generally, stakeholders participate in an evaluation for two purposes: practical
and transformative (Greene, Lincoln, Mathison, Mertens, & Ryan, 1998).
Practical participatory evaluation is meant to enhance evaluation relevance,
ownership, and utilization. Transformative participatory evaluation seeks to
empower community groups to democratize social change. Either way, partici-
patory evaluation can provide evaluators with an opportunity to engage with
different stakeholder groups and balance diverse views, increase buy-in from all
stakeholder groups, and enhance their willingness to use evaluation results.
Another way of enhancing evaluators’ credibility is to promote profes-
sional ethics. Like other professionals, evaluators must adhere to professional
ethics and standards. The American Evaluation Association (2004) adopted
the following ethical principles for evaluators to follow:
17
Chapter 1   Fundamentals of Program Evaluation
• Systematic inquiry. Evaluators conduct systematic, data-based inquiries.
• Competence. Evaluators provide competent performance to stakeholders.
• Integrity/honesty. Evaluators ensure honesty and integrity of the entire evaluation process.
• Respect for people. Evaluators respect the security, dignity, and self-worth of the respondents, program participants, clients, and other stakeholders.
• Responsibilities for general and public welfare. Evaluators articulate and take into account the diversity and values that may be related to the general and public welfare. (“The Principles”)
In addition, to ensure the credibility of evaluation, the Joint Committee on
Standards for Educational Evaluation (Yarbrough, Shulha, Hopson, & Caruthers, 2011) has
specified the following five core standards for evaluators to follow:
1. Utility standards. The utility standards are intended to increase the extent
to which program stakeholders find evaluation processes and products
valuable in meeting their needs.
2. Feasibility standards. The feasibility standards are intended to increase
evaluation effectiveness and efficiency.
3. Propriety standards. The propriety standards support what is proper, fair,
legal, right, and just in evaluations.
4. Accuracy standards. The accuracy standards are intended to increase the
dependability and truthfulness of evaluation representations, proposi-
tions, and findings, especially those that support interpretations and judg-
ments about quality.
5. Evaluation accountability standards. The evaluation accountability
standards encourage adequate documentation of evaluations and a meta-
evaluative perspective focused on improvement of and accountability for
evaluation processes and products.
Evaluation Steps
The Centers for Disease Control and Prevention (CDC) published the CDC
Framework of Program Evaluation for Public Health (CDC, 1999) to help
evaluators understand how to conduct evaluation based on evaluation stan-
dards. The document specified six steps that are useful guides to the evaluation
of public health and social betterment programs:
18 Introduction
Step 1: Engage Stakeholders deals with engaging individuals and organiza-
tions with an interest in the program in the evaluation process.
Step 2: Describe the Program involves defining the problem, formulating
program goals and objectives, and developing a logic model showing how
the program is supposed to work.
Step 3: Focus the Evaluation Design determines the type of evaluation to
implement, identifies the resources needed to implement the evaluation, and
develops evaluation questions.
Step 4: Gather Credible Evidence identifies how to answer the evaluation
questions and develops an evaluation plan that will include, among other
things, indicators, data sources and methods for collecting data, and the
timeline.
Step 5: Justify Conclusions involves collecting, analyzing, and interpreting
the evaluation data.
Step 6: Ensure Use and Share Lessons Learned identifies effective methods
for sharing and using the evaluation results.
Evaluation Design and Its Components
When proposing an evaluation to stakeholders or organizations such as fund-
ing agencies, evaluators must describe the evaluation’s purposes and methodol-
ogy. An evaluation design needs to include at least five components:
1. Purposes of and Background Information about the Intervention
Program. The first thing that evaluators need to do when assessing an inter-
vention program is to gain a solid knowledge of the background of the pro-
gram and document this understanding. Background information includes the
purposes of the intervention program, the target population, the organizations
responsible for implementing the program, key stakeholders of the program,
implementation procedures, reasons for conducting the evaluation, the evalu-
ation’s timeline, the resources that will be used, and who will utilize the evalu-
ation results. Evaluators usually gather information by reviewing existing
documents such as program reports and the grant application proposal, as
well as by interviewing key stakeholders of the program. The background
information serves as a preliminary basis for communication by evaluators
and stakeholders about the program and evaluation.
2. A Logic Model or Program Theory for Describing the Program. A sound
evaluation requires a systematic and coherent description of the intervention
program, which will serve as a basis for communication between evaluators
and stakeholders and for the evaluation design. In reality, a systematic and
coherent program description is often not available. It is unwise for evaluators
to conduct a program evaluation without a mutual agreement with stakeholders
about what the program looks like. Without such agreement, how could an evaluation
provide useful information to stakeholders? Or, even worse, stakeholders
later could easily claim that an evaluation failed to accomplish what they
expected from it, if the evaluation results do not convey good news. Program
description is an important step in evaluation.
If a program does not have a systematic and coherent program description,
evaluators must facilitate stakeholders in developing one. This book discusses
two options for describing a program: logic models and program theory. Logic
models are used to identify the major components of a program in terms of a
set of categories such as inputs, activities, outputs, and outcomes. However, if
evaluators and stakeholders are interested in looking into issues such as contex-
tual factors and causal mechanisms, this book encourages the use of program
theory. Both logic models and program theory will be discussed in Chapter 3.
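For readers who want to see the four logic model categories made concrete, they can be recorded as a simple structured outline. The sketch below, in Python, is one hypothetical illustration; the after-school tutoring program and every entry in it are invented for this example, not drawn from the text:

```python
# A minimal sketch of a logic model for a hypothetical after-school
# tutoring program. The four category names come from the text; every
# entry is invented purely for illustration.
logic_model = {
    "inputs": ["funding", "trained tutors", "classroom space"],
    "activities": ["weekly tutoring sessions", "parent workshops"],
    "outputs": ["number of sessions delivered", "students served"],
    "outcomes": ["improved grades", "higher graduation rates"],
}

def describe(model):
    """Return a one-line-per-category summary of a logic model."""
    return "\n".join(
        f"{category}: {', '.join(items)}" for category, items in model.items()
    )

print(describe(logic_model))
```

Laying a program out this way obliges evaluators and stakeholders to state explicitly what goes into the program, what is done, and what is expected to come out, which is the kind of mutual agreement on a program description discussed above.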
3. Assertion of a Program’s Stage of Development. As will be discussed in
the next chapter, an intervention program’s life cycle can be generally classified
as being in one of four phases: planning, initial implementation, mature imple-
mentation, and outcome. Program designers, during the planning phase, work
with partners to identify or develop an intervention and organize resources and
activities for supporting the intervention. After the planning phase, the pro-
gram goes into the initial implementation phase. The major tasks here are train-
ing implementers, checking clients’ acceptance, and ensuring appropriate
implementation. After the initial implementation, the program progresses to
the mature implementation stage. The major tasks here include ensuring or
maintaining the quality of implementation. During the outcome phase, the
program is expected to have desirable impacts on clients. The different stages
of a program require different evaluation approaches. For example, construc-
tive evaluation is most useful to a program during the initial implementation
stage when it can help with service delivery, but it is not appropriate for a
formal assessment of a program’s merits at the outcome stage.
Evaluators and stakeholders have to agree on which stage a program is in to
select an appropriate evaluation type(s) and approach. Chapter 2 will provide
detailed discussions of the nature of program stages and how they relate to
different evaluation types and approaches.
4. Evaluation Types, Approaches, and Methodology. This component is the
core of evaluation design. Using information regarding the evaluation’s pur-
poses and the logic model/program theory, evaluators and stakeholders need to
determine what type of evaluation, whether one of the basic evaluation types—
constructive process, conclusive process, constructive outcome, or conclusive
outcome—or a hybrid type, is suitable for correctly evaluating the program.
Once program stage and evaluation type are determined, evaluators can move
on to select or design an evaluation approach or approaches for evaluating a
program. Chapter 2 will provide a comprehensive typology for guiding evalu-
ators in selection of evaluation types and approaches.
Determining the most appropriate evaluation approach is challenging and
time-consuming. However, it ensures that all involved share a mutual under-
standing of why a particular evaluation type has been selected. Without it,
stakeholders are likely to find that the results of the evaluation address issues
that are not of concern to them and/or are not useful to them. Stakeholders are
often not trained in evaluation techniques, and they may not express what
they expect and need from an evaluation as clearly and precisely as evaluators
could hope. Evaluators usually must double- or even triple-check with stake-
holders to make sure everyone shares the same understanding and agrees on
the evaluation’s purposes up front.
5. Budget and Timeline. Regardless of stakeholders’ and evaluators’ visions
of an ideal evaluation plan, the final evaluation design is bound to be shaped
by the money and time allocated. For example, if stakeholders are interested in
a rigorous assessment of an intervention program’s outcomes but can provide
only a small evaluation budget, the research method used is unlikely to be a
randomized controlled trial conducted over several years, which could cost
millions of dollars. Similarly, if the timeline is short, evaluators
will likely use research methods such as rapid assessments rather than
conduct a thorough evaluation.
When facilitating stakeholders in making an informed decision, it is highly
preferable for evaluators to propose a few options and explain the information
each option is likely to provide, as well as the price tag of each.
Major Challenges of Evaluation:
Lessons Learned From Past Practice
Program evaluation has been practiced for several decades. Lessons learned
from experience indicate that program evaluation faces a set of unique chal-
lenges that are not faced by other disciplines.
Judge a Program Not Only by Its
Results but Also by Its Context
One important characteristic distinguishing program evaluation is its
need, rarely shared by other disciplines, to use a holistic approach to assess-
ment. The holistic approach includes contextual or transformation informa-
tion when assessing the merit of a program. By comparison, product
evaluation is more streamlined, perhaps focusing solely on the intrinsic
value of its object. Products like televisions can be assessed according to
their picture, sound, durability, price, and so on. In many situations, how-
ever, the value of a program may be contextual as well as intrinsic or inherent.
That is, to adequately assess the merit of a program, both its intrinsic value
and the context in which that value is assigned must be considered together.
For example, say an educational program has, according to strictly perfor-
mance-based evaluation, attained its goals (which are its intrinsic values).
But in what context was the performance achieved? Perhaps the goal of
higher student scores on standardized tests was attained by just “teaching
students the tests.” Does the program’s performance still deserve loud
applause? Probably not.
Similarly, what about a case in which program success is due to the par-
ticipation of a group of highly talented, well-paid teachers with ample
resources and strong administrative support, but the evaluated program is
intended for use in ordinary public schools? This “successful” program may
not even be relevant, from the viewpoint of the public schools, and is not
likely to solve any of their problems. Therefore, how a program achieved
its goals is just as important as whether it achieved them. For example, an
outcome evaluation of one family-planning program in a developing coun-
try limited its focus to the relationship between program inputs and out-
puts; it appeared possible, on this basis, to claim success for the program.
A large drop in the fertility rate was indeed observed following the inter-
vention. Transformation information, however, showed that such a claim
was misleading. Although the drop in fertility was real, it had little to do
with the intervention. A larger factor was that, following implementation,
a local governor of the country, seeking to impress his prime minister with
the success of the program, ordered soldiers to seize men on the streets and
take them to be sterilized. An evaluator with a less holistic approach might
have declared that the goals of the program were attained, whereas other
people’s personal knowledge led them to condemn the program as inhu-
mane. Lacking a holistic orientation, program evaluation may reach very
misleading conclusions.
Evaluations Must Address Both
Scientific and Stakeholder Credibility
Program evaluation is both a science and an art. Evaluators need to be
capable of addressing both scientific and stakeholder credibility in an evalua-
tion. The scientific credibility of program evaluation reflects the extent to
which that evaluation was governed by scientific principles. Typically, in scien-
tific research, scientific credibility is all that matters. The more closely research
is guided by scientific principles, the greater its credibility. However, as an
applied science, program evaluation also exhibits varying degrees of stake-
holder credibility. The stakeholder credibility of a program evaluation reflects
the extent to which stakeholders believe the evaluation’s design gives serious
consideration to their views, concerns, and needs.
The ideal evaluation achieves both high scientific and high stakeholder cred-
ibility, and the two do not automatically go hand in hand. An evaluation can
have high scientific credibility but little stakeholder credibility, as when evalu-
ators follow all the scientific principles but set the focus and criteria of evalua-
tion without considering stakeholders’ views and concerns. Their evaluation
will likely be dismissed by stakeholders, despite its scientific credibility, because
it fails to reflect the stakeholders’ intentions and needs. For example, there are
good reasons for African-Americans to be skeptical of scientific experiments
that lack community input, due to incidents such as the Tuskegee syphilis
experiment (Jones, 1981/1993). Researchers in the experiment withheld effec-
tive treatment from African-American men suffering from syphilis so that the
long-term effects of the disease could be documented. Conversely, an evalua-
tion overwhelmed by the influence of stakeholders, such as program managers
and implementers, may neglect its scientific credibility, resulting in suspect
information.
One of the major challenges in evaluation is how to address the tension between
scientific credibility and stakeholder credibility. Evaluation theorists, such as
Scriven (1997), argued that objectivity is essential in evaluation because without it,
evaluation has no credibility. On the other hand, Stake (1975) and Guba and
Lincoln (1981) argued that evaluations must respond to stakeholders’ views and
needs in order to be useful. Both sides make good points, but objectivity and
responsiveness are conflicting values. How can evaluators address this tension?
One strategy is to prioritize, choosing one type of credibility to focus on.
However, this prioritization strategy does not satisfactorily address the conflict
between the two values. A better strategy, proposed and used in this book,
is perhaps to strike a balance between the two. For example, evaluators might
pursue stakeholder credibility in the earliest phases of evaluation design but
turn their attention toward scientific credibility later in the process. Initially,
evaluators engage in a great deal of interaction and communication with a
program’s stakeholders for the specific purpose of understanding their views,
concerns, and needs. Evaluators then incorporate the understanding they have
acquired into the research focus, questions, and design, along with the necessary
scientific principles. From this point on, to establish scientific credibility, the
evaluators require autonomy to design and conduct evaluations without inter-
ference from stakeholders. Stakeholders are usually receptive to this strategy,
especially when evaluators explain the procedure to them at the beginning of
the process. While stakeholders do not object to a program being evaluated, or
dispute the evaluator’s need to follow scientific procedures, they do expect the
evaluation to be fair, relevant, and useful (Chen, 2001).
As will be discussed in the rest of the book, the tension between scientific
and stakeholder credibility arises in many situations. Such tension makes
evaluation challenging, but resolving it is essential for advancing program
evaluation.
Evaluations Must Provide Information
That Helps Stakeholders Do Better
Earlier in this chapter, we learned that Scriven placed a higher priority on
conclusive assessment than on program improvement, while Cronbach preferred
the reverse. This is an important, but complicated, issue for evaluators.
Many evaluators quickly learn that stakeholders are eager to figure out what
to do next in order to make a program work better. Stakeholders find evalua-
tions useful if they both offer conclusions about how well programs have
worked and provide information that assists the stakeholders in figuring out
what must be done next to maintain—or even surpass—program goals. Thus,
the assessment of a program’s performance or merit is only one part of pro-
gram evaluation (or, alone, provides a very limited type of evaluation). To be
most useful, program evaluation needs to equip stakeholders with knowledge
of the program elements that are working well and those that are not. Program
evaluation in general should facilitate stakeholders’ search for appropriate
actions to take in addressing problems and improving programs. There are
important reasons why evaluations must move beyond narrow merit assess-
ment into the determination of needed improvements. In the business world,
information on product improvement is provided by engineering and market
research; likewise, in the world of intervention programs, the agency or orga-
nization overseeing an effort relies on program evaluation to help it continually
guarantee or improve the quality of services provided.
Consider that intervention programs typically operate in the public sector.
In the private sector, the existence or continuation of a product is usually deter-
mined by market mechanisms. That is, through competition for consumers, a
good product survives, and a bad product is forced from the market. However,
the great majority of intervention programs do not encounter any market com-
petition (Chen, 1990). Drug abusers in a community may find, for example,
that only one treatment program is available to them. In the absence of an
alternative, the treatment program is likely to continue whether or not its out-
comes justify its existence. Furthermore, well-known programs with good
intentions, such as Head Start, would not be discontinued based on an evalua-
tion saying the programs were ineffectual; decision makers rarely use program
evaluation results alone to decide whether a program will go on.
Under these circumstances, an evaluation that simply assesses the merit of
a program’s past performance and cannot provide stakeholders with insights
to help them take the next step is of limited value (Cronbach, 1982). In fact,
many stakeholders look to a broad form of program evaluation to point out
apparent problems, as well as strengths upon which to build. In general, to be
responsive and useful to stakeholders, program evaluation should meet both
assessment needs and improvement needs rather than confine itself solely to
conclusive assessment. Stakeholders need to know whether the program is
reaching the target group, the treatment/intervention is being implemented as
directed, the staff is providing adequate services, the clients are making a com-
mitment to the program, and the environment seems to be helping the delivery
of services. Any part of this information can be difficult for stakeholders to
collect; thus, program evaluators must have the necessary training and skills
to gather and synthesize it all systematically.
In a broad sense, therefore, merit assessment is a means, rather than the end,
of program evaluation. Our vision of program evaluation should extend
beyond the design of supremely rigorous and sophisticated assessments. It is
important to grasp that evaluation’s ultimate task is to produce useful informa-
tion that can enhance the knowledge and technology we employ to solve social
problems and improve the quality of our lives.
Furthermore, as discussed in the last section, constructive evaluation for pro-
gram improvement and conclusive evaluation for merit assessment are not
mutually exclusive categories. Evaluation does not have to focus on either pro-
gram improvement or merit assessment. The introduction of hybrid evaluation
types in this book provides options by which evaluation can address both issues.
Addressing the Challenges:
Theory-Driven Evaluation and
the Integrated Evaluation Perspective
To better address these challenges, this book applies the frameworks provided by
the theory-driven evaluation approach and the integrated evaluation perspective.
Theory-Driven Evaluation Approach
The theory-driven evaluation approach requires evaluators to under-
stand assumptions made by stakeholders (called program theory) when they
develop and implement an intervention program. Based on stakeholders’
program theory, evaluators design an evaluation that systematically exam-
ines how these assumptions operate in the real world. By doing so, they
ensure that the evaluation addresses issues in which the stakeholders are
interested. The usefulness of the theory-driven evaluation approach has
been discussed intensively in the evaluation literature (e.g., Chen, 1990,
2005, 2012a, 2012b; Chen & Rossi, 1980, 1983a; Chen & Turner, 2012;
Coryn, Noakes, Westine, & Schröter, 2011; Donaldson, 2007; Funnell &
Rogers, 2011; Nkwake, 2013; Rossi, Lipsey, & Freeman, 2004; Weiss,
1998). The concept and application of program theory will be discussed in
detail in Chapter 3.
It is important to know that theory-driven evaluation provides a sharp con-
trast to traditional method-driven evaluation. Method-driven evaluation views
evaluation as mainly an atheoretical activity. Evaluation is carried out by fol-
lowing research steps of a chosen research method such as randomized experi-
ments, survey, case study, focus group, and so on. Within this tradition,
evaluation does not need any theory. If evaluators are familiar with the research
steps of a particular method, then they can apply the same research steps and
principles across different types of programs in different settings. To some
degree, method-driven evaluation simplifies evaluation tasks. However, because
the focus of method-driven evaluation is mainly on methodological issues, it
often does not capably address stakeholders’ views and needs. The theory-
driven evaluation approach argues that while research methods are important
elements of an evaluation, evaluation should not be dictated or driven by one
particular method.
Because theory-driven evaluation uses program theory as a conceptual
framework for assessing program effectiveness, it provides information not
only on whether an intervention is effective but also on how and why a program
is effective. In other words, it is capable of addressing the challenge discussed
in the last section: The success of a program has to be judged not only by its
results but also by its context. This approach is also useful for addressing the
following challenge: Evaluation must be capable of providing information for
stakeholders to do better. The theory-driven evaluation approach will be inten-
sively discussed in Chapters 3, 7, 12, 13, and 14.
Integrated Evaluation Perspective
Program evaluation is challenging because it has to provide evaluative evi-
dence for a program that meets two requirements. The first requirement is that
the evaluative evidence must be credible; that is, program evaluation has to
generate enough credible evidence to gain a scientific reputation. This require-
ment is called the scientific requirement. The second requirement is that the
evidence must respond to the stakeholders’ views, needs, and practices so as to
be useful. Stakeholders are consumers of evaluation. Program evaluation has
little reason to exist unless it is able to adequately serve stakeholders’ needs.
This requirement is called the stakeholder requirement.
Ideally, evaluations should meet both requirements, but in reality evaluators
often find it difficult to do so. On the one hand, they must apply
rigorous methods to produce credible evidence. On the other hand, evaluators
often find it difficult to apply rigorous methods—such as randomized
controlled trials (RCTs)—to evaluate real-world programs given insufficient
resources and short timelines. In many situations, administrative hindrances
and ethical concerns add barriers to such an application. Furthermore, even
should these barriers be removed and a rigorous method applied, stakehold-
ers may feel that the focus of the evaluation is then too narrow or too aca-
demic to be relevant or useful to them. The reason for this disconnect is that
the stakeholders’ views on community problems and how to solve them are
quite different from the conventional scientific methods’ underlying philoso-
phy—reductionism. Reductionism postulates that a program is stable and can
be analytically reduced to a few core elements. If a program can be reduced
to core components, such as intervention and outcome, then an adjustment
can be implemented and desirable changes will follow. Given this view, the
evaluators’ main task is to rigorously assess whether the change produces
predetermined outcomes.
However, stakeholders’ views on and experiences with social problems
and addressing them in a community are more dynamic and complicated
than those assumed by reductionism. Their views can be characterized as
the following:
1. An intervention program is implemented as a social system. In a
social system, contextual factors in a community—such as culture, norms,
social support, economic conditions, and characteristics of implementers
and clients—are likely to influence program outcomes. As discussed at the
beginning of this chapter, program interventions are open systems with respect
to contextual factors, rather than closed systems like biological ones.
2. Health promotion/social betterment programs require clients, with the
help of implementers, to change their values and habits if the programs are to work.
Unfortunately, people are notoriously resistant to changing their values and
habits. For example, an education program may require children fond of playing
video games to substantially cut down on game playing to make time for study-
ing; these children may vastly prefer playing the latest zombie massacre game to
studying. Victims of bullying in schools may be asked to start reporting bullying
incidents to school authorities and parents; based on past experience, these vic-
tims may believe reporting these incidents is useless or even dangerous. Because
an intervention requires changes, its demands may be highly challenging to both
clients and implementers. Not only must program designers wrestle with this
challenge when designing an effective intervention program but evaluators must
also take this reality into consideration when designing a useful evaluation.
Because of the above factors, stakeholders believe that they need to take
a much broader approach in solving a community problem. An intervention
is not a stand-alone entity but, rather, has to connect to contextual factors
and/or change clients’ values and habits to work. Their broad view of com-
munity problem solving is inconsistent with the traditional scientific methods,
which focus on narrow issues such as assessing the causal relationships
between an intervention and its outcomes. The inconsistency between
stakeholders’ views and reductionism’s assumptions regarding community
problems and interventions is partly why there is such a huge chasm
between the academic and practice communities regarding interventions, as
will be discussed in Chapter 15.
Stakeholders respect the value and reputation of scientific methods but
view the information provided by using them as just one piece of a jigsaw
puzzle they need to assemble. They need other pieces to complete the picture.
They hope evaluators can figure out ways to provide all, not just one, of those
pieces to them. Stakeholders are concerned that, if evaluators focus too much
on the scientific piece, it will blind them or prevent them from simultaneously
investigating other means to solve the puzzle. Stakeholders’ views on com-
munity problem solving are relevant to ideas proposed by systems thinking
(e.g., Meadows, 2008). According to systems thinking, a system is made up
of diverse and interactive elements and must address environmental turbu-
lence. Problem solving thus requires the modification of groups of variables
simultaneously.
The above analysis shows that evaluators face a dilemma in meeting the
scientific requirement and the responsiveness requirement at the same time. An
evaluation emphasizing the scientific requirement may sacrifice the responsiveness
requirement, and vice versa. The dilemma has significant implications for
evaluation practices, but it has not been intensively and systematically dis-
cussed in the literature. There are three general strategies evaluators use to
address the dilemma:
Prioritizing the Scientific Requirement in Evaluation. The
first strategy is to stress the scientific requirement by arguing that evaluation’s
utility relies on whether it can produce credible evidence. Following this general
strategy, evaluators must apply rigorous methods as best they can. Issues
related to the responsiveness requirement are addressed only when they do not
compromise the rigor issues. Currently, this strategy is the most popular one
used by evaluators (Chen, Donaldson, & Mark, 2011). The strategy appeals
particularly to evaluators who are strongly committed to scientific values and
evidence-based interventions.
Prioritizing the Responsiveness Requirement in Evaluation.
The second strategy is to put the emphasis on the responsiveness requirement.
This strategy requires that evaluators use a participatory evaluation approach
and qualitative methods to meet stakeholders’ information needs (e.g.,
Cronbach, 1982; Stake, 1975). This strategy is attractive to evaluators who
view traditional scientific methods as too narrow and rigid to accommodate
stakeholders’ views and to meet their informational needs.
Synthesizing the Scientific and Responsiveness Requirements in Evaluation.
The third general strategy is to synthesize the scientific and responsiveness
requirements in evaluation. This strategy does not prioritize either requirement
as the prime focus and thus avoids maximizing one at the expense of the other.
Evaluations following this strategy may not be able to provide highly rigorous
evidence but can provide good-enough evidence to balance the scientific and
responsiveness requirements.
The first two strategies have merits. They are especially useful when there
is a strong mandate for evaluation to be either highly rigorous or highly
responsive. However, the author believes that, in many typical intervention
programs, stakeholders are more likely to benefit from evaluations that use
the synthesizing strategy. This book advocates this strategy and formally calls
it the integrated evaluation perspective. Specifically, the integrated evaluation
perspective urges evaluators to develop evaluation theories and approaches
that integrate stakeholders’ views and practices (thereby acknowledging the
dynamic nature of an intervention program in a community) with scientific
principles and methods, so as to enhance the usefulness of evaluation.
In spite of its conceptual appeal, the integrated evaluation perspective
faces a challenge in developing specific evaluation theories and approaches to
guide the work. It lacks the advantages enjoyed by the scientific prioritization
strategy. For example, advocates of the scientific prioritization strategy
can borrow scientific methods and models developed by more mature disciplines
and apply them to evaluation. The integrated evaluation perspective,
however, does not have this ability because other disciplines do not face the
kind of inconsistency between scientific and responsiveness requirements
experienced in evaluation. They thus do not need to deal with synthesizing
issues. For example, in biomedical research, both researchers and physicians
consistently demand rigorous evidence for a medicine’s efficacy. Accordingly,
biomedical research cannot offer evaluators clues or solutions for resolving
the conflict between scientific and responsiveness requirements. The integrated
evaluation perspective, therefore, requires evaluators to develop innovative,
indigenous theories and approaches to synthesize the requirements unique to
the discipline.
This book contributes to the integrated evaluation perspective by introduc-
ing many innovative, indigenous theories and approaches evaluators can use in
balancing the scientific and responsiveness requirements. At the same time, this
book does not neglect traditional theories and approaches promoted by the
scientific prioritization or responsiveness prioritization strategies. Instead, the
author intends to introduce both traditional and innovative evaluation theories
and approaches from these three strategies to enrich evaluators’ toolbox so
they can apply all theories and approaches as needed.
The nature and applications of the integrated evaluation perspective will be
illustrated in detail in Chapters 11, 12, 13, 14, and 15, but its spirit and the
principles it employs to develop indigenous concepts, theories, approaches, and
methodologies are manifested throughout the book.
The Professor of Topography directs the whole of the surveys and
the execution of the Director Plan.
FIFTH SECTION.—TRACING OF THE WORKS OF ATTACK, AND ACTUAL
EXECUTION IN FULL RELIEF OF CERTAIN WORKS.
The sub-lieutenants, divided into brigades, trace the works of the
siege, under the direction of the officers of the staff, and take part in
the superintendence of the works executed in full relief when the
exigencies of the service will permit the chief of the Artillery Service
and the Colonel of the Regiment of Engineers to place workmen at
the disposal of the General Commandant of the School. Six days are
appropriated to this work.
SIXTH SECTION.—WORK IN THE HALLS OF STUDY.
The work in the Halls of Study consists of:—
1st. A memoir on the sham siege, which memoir must be
approved by the General Commandant of the School.
2d. A sketch representing one of the works traced or executed
in full relief. These works in the Halls are performed during the
interval of the attendances devoted to out-of-door work. Two days
are appropriated to the preparation of the memoir, and two to the
execution of the sketch. This time is included in the eleven days
allowed to the sham siege.
RECAPITULATION FOR THE ARTILLERY AND ENGINEERS.
NL No. of Lectures or Conferences.
CL Credits for Lectures or Conferences.
L Lectures.
Cf Conferences.
T Total.
Q No. of Questions.
Lectures and Conferences.                              NL    L    Cf     T    Q
By the Professor of Military Art,                       2    3    ..     3
By the Professor of Topography,                         1   1½    ..    1½
By the Professor of Permanent Fortification,            2    3    ..     3
By the Professor of Artillery,                          2    3    ..     3
Conferences by the Chief of the Service of Artillery,   4   ..     6     6
Conferences by the Chief of the Service of Engineers,   4   ..     6     6
Total,                                                 15  10½    12   22½   2*
* One series of questions by the Chief of the Artillery Service, as to what
relates to that arm.
One series of questions by the Chief of the Engineer Service, as to
what relates to that arm.
A Credit of 11 is assigned to each series of questions.
D Drawings.
M Memoirs.
H Attendances in the Halls.
C Credits.
                                                          Attendances
                                                         out of doors
Works of Application.                          D    M    4½h.   8h.    H     C
2nd Reconnaissance Plan (Memoir,)             ..   ..     ..    ..    ..    ..
Topographical Work,*                          ..   ..      4    ..    ..    20
Itinerary and Sketch (Memoir,)                ..   ..     ..    ..    ..    ..
Plan “Director,”                              ..   ..     ..    ..     1     5
Tracing of Lines,†                            ..   ..     ..     1    ..    10
Tracing of Works of Attack and of Defense,    ..   ..      6    ..    ..    25
Sketch,‡                                       1   ..     ..    ..     2     1
Memoir,‡                                      ..    1     ..    ..     2     2
Total,                                         1    1     10     1     5    90
* Credits given by the Professor of Topography.
† Credits given by the Captains of the Staff, Chiefs of Brigades.
‡ Credits given by the Chiefs of the Service of the Artillery and Engineers.
XIII.—PROGRAMME OF THE COURSE ON THE VETERINARY ART.
FIRST PART.—INTERIOR OF THE HORSE.
Lecture 1.—Classification and nomenclature of the various matters
which constitute the horse. Skeleton (head and body.)
Lecture 2.—Skeleton (limbs.) Mechanical importance of the
skeleton. Nomenclature and use of the muscles. Cellular and fatty
tissues, grease, skin. Insensible perspiration.
Lecture 3.—Functions for maintenance. Arteries of the nerves.
Animal heat.
Lecture 4.—On various functions.
SECOND PART.—EXTERIOR OF THE HORSE.
Lecture 5.—Proportions. Equilibrium. Description and importance
of the natural beauties and defects of the head and region of the
throat.
Lecture 6.—Description and importance of the other parts of the
horse. Blemishes. Soft tumors.
Lecture 7.—Osseous tumors. Various accidents. Temperaments.
Description of clothing, &c.
Lecture 8.—Data respecting horses.
Lecture 9.—To know the age. On various bad habits. Examination
of the eyes; their diseases.
Lecture 10.—Defective paces, &c. Draught and pack horses.
Mules.
Lecture 11.—Stud and remounts. Races.
Lecture 12.—Vicious horses, and different bits. Manner of bitting a
horse. On grooms and punishment.
THIRD PART.—ON THE HEALTH OF THE HORSE.
Lecture 13.—Examination of the foot, and shoeing with the hot
shoe.
Lecture 14.—Shoeing with the cold shoe. Different kinds of horse-
shoe, &c.
Lecture 15.—On stables. Food. Rations.
Lecture 16.—Description and nomenclature of the saddle. Harness
and pack. Various saddles.
Lecture 17.—On work and rest. Horse and mule on the road and in
bivouac. On diseases and accidents.
Abstract of the course:—
Interior of the horse, 4 lectures; Exterior, 6; Health, 7.
17 lectures at 1½ hours. Total time, 25½ hours. Credits, 25.
The instruction on horseback can, under certain circumstances, be
considered as connected with this course; and questions are asked
during the time when the sub-lieutenants are not engaged in actual
riding exercise. This instruction is described under the head of
Practical Military Instruction; it comprises at the maximum 272
attendances, and its credit of influence is valued at 240.
ARTILLERY AND ENGINEERS’
REGIMENTAL SCHOOLS.
I. ARTILLERY REGIMENTAL SCHOOLS.
These are intended for the theoretical and practical instruction of
officers, sous-officiers, and gunners.
Each School is under the orders of the General of Brigade
commanding the Artillery in the military division in which it is
situated.
Independent of the general officer, the school has the following
staff:—
A Lieutenant-Colonel (associated assistant to the General.)
A Professor of Sciences, applying more particularly to the Artillery.
A Professor of Fortification, of drawing, and construction of buildings.
Two Gardes of Artillery (one of the first, and the other of the second
class.)
There are, in addition, attached to each school the number of
inferior officers (captains, lieutenants, or sous-lieutenants) required
for carrying on the theoretical courses, which are not placed under
the direction of the professors.
A captain of the first class, assisted by two first lieutenants, is the
director of the park of the school. Another captain, also of the first
class, but taken from the regiment of Pontooneers, has the direction
of that portion of the bridge equipage necessary for the special
instruction of this corps, as well as of the material of the artillery
properly belonging to this instruction.
The lieutenant-colonel, assistant to the general, fulfills,
independent of every other detail of supervision with which he may
be charged, the functions of ordonnateur secondaire, in what
concerns the expenses of the school and their propriety
(justification.) He corresponds with the minister of war for this part
of the service.
The instruction is divided into theoretical and practical, and the
annual course is divided into half-yearly periods, or into summer and
winter instructions.
The summer instruction commences, according to different
localities, from the 1st of April to the 1st of May, and that of the
winter from the 1st of October to the 1st of November.
The winter and summer instruction is subdivided into school and
regimental instruction.
The school instruction comprehends all the theoretical and practical
instruction common to the different corps which requires the assistance of
the particular means of the school (the employment of its professors,
locality, and material), as well as that of the practical instruction in
which the troops belonging to the different corps of the army are united
to take part.
The regimental instruction is that which exists in the interior of the
regiments and the various bodies of the artillery. It is directed by the
chiefs of these corps, who are responsible for it, with the means
placed at their disposal, under the general surveillance of the
commandant of the school.
The special instruction of the Pontooneers not admitting of their
following the same instruction as the other regiments of artillery, the
chief of this corps directs the special instruction according to certain
bases prescribed by the regulations.
There are for the captains of artillery, each year during the winter
half-year, six conferences for the purposes of considering and
discussing projects for the organization of different equipages and
armaments for the field service, and for attack and defense of
places.
In a building belonging to each school of artillery, under the name
of the hotel of the school, are united the halls and establishments
necessary for the theoretical instruction of the officers and sous-officiers,
such as halls for drill theory (théorie) and drawing, a library, depots
of maps and plans, halls for machines, instruments and models, &c.
Each school is provided with a physical cabinet and a chemical
laboratory. There is also a piece of ground, called a polygon, for
exercising artillerymen to the manœuvers of cannon and other
firearms of great range. Its extent is sufficient in length to furnish a
range of 1,200 meters, and in breadth of 600 meters.
Permanent and temporary batteries are established on this ground;
they serve not only for practice, but also to accustom the men to the
construction of fascines, field batteries, &c.
The administration of each school, and the accounts relating to it,
are directed by an administrative council, consisting of—
The General Officer commanding the Artillery (President.)
The Colonels of the regiments of Artillery in the towns where two
regiments of the Artillery are quartered, and in other towns, the Colonel
and Lieutenant-Colonel of the regiment.
The Colonel of the regiment of Pontooneers in the town where the
principal part of the corps may be stationed, and in any other town the
Lieutenant-Colonel or the Major.
The Lieutenant-Colonel associated assistant with the General
Commandant.
The functions of secretary of the council are intrusted to a garde of
the first class.
The functionaries of the corps of intendants fulfill, in connection
with the administrative councils of the artillery schools, the same
duties as are assigned by the regulations relating to the interior
administration of bodies of troops. They will exercise over the
accounts, both of money and material of the said schools, the same
control as over the administration connected with the military
interests of the state.
II. ENGINEER REGIMENTAL SCHOOLS.
The colonel of each regiment has the superior direction of the
instruction.
The lieutenant-colonel directs and superintends, under his orders,
the whole of the details of the regimental instruction.
A major, selected from among the officers of this rank belonging
to the état-major of this arm, directs and superintends, under the
orders of the colonel, the whole of the details of the special
instruction.
The complete instruction consists of—
General instruction, or that of the regiment, by which a man is
made a soldier.
Special or school instruction, having for its object the training of
the miner or sapper.
The instructions are each separated into theoretical and practical
instruction.
The theoretical instruction of the regiment comprehends the
theories:—
On the exercises and manœuvers of infantry. On the interior service. On
the service of the place. On field service. On the maintenance of arms. On
military administration. On military penal legislation.
The practical instruction of the regiment comprises:—
The exercises and manœuvers of infantry. Practice with the musket.
Military Marches. Fencing.
The teaching of these various duties is confided to officers, sous-
officiers, and corporals of the regiments, as pointed out by the
regulation, and the orders of the colonel.
The fencing school is organized in a similar manner to those of the
infantry, and the military marches are also made in the same way as
in those corps.
The special and theoretical instruction consists of:—
Primary instruction. Mathematics. Drawing. Geography. Military history of
France. Fortification and the various branches of the engineering work.
Three civil professors (appointed by competition) are attached to
each regimental school, for the special theoretical instruction, as
regards the primary instruction, drawing, and mathematics.
The courses are distributed and taught in the following manner:
By the Professor of Primary Instruction:—
Primary instruction, for the Soldiers.
French grammar, for the Corporals.
Book-keeping, for the Sous-Officiers.
By the Professor of Mathematics:—
Elementary arithmetic, for the Corporals.
Complete arithmetic and elementary geometry, for the Serjeants.
Complete geometry and trigonometry, for the Serjeant-Majors.
Surveys, for the Sous-Officiers.
Special mathematics, for the Officers.
By the Professor of Drawing, who is also charged with completing the
collection of models which relate to it:—
Drawing, for the Corporals and Sous-Officiers.
By the Officers of the regiment, named by the Colonel, independently of
those appointed by the regulations:—
The elements of fortification, for the Serjeant-Majors.
Construction, and theories on practical schools, for the Sous-Officiers.
Permanent fortification, the attack and defense of places, mines,
bridges, and ovens, for the Officers.
Topography, geography, and the military history of France, for the
Sous-Officiers.
At the end of each course the colonel of the regiment causes a
general examination to be made in his presence of the whole of the
men who have followed this course, and has a list made out in the
order of merit, with notes of the capacity and aptitude of each.
These lists are consulted in the formation of tables of promotion,
and placed with the said tables before the inspector-general.
Each captain and lieutenant is obliged to give in at least one treatise
on five different projects, consisting of a discursive memoir or the
journal of a siege, with drawings of the whole, and of details in
sufficient number to render them perfectly intelligible.
The special practical instruction is composed of seven distinct
schools, relating to:—
Field Fortification. Saps. Mines and Fireworks. Bridges. Ovens.
Topography. Gymnastics.
And they comprehend, in addition, sham sieges, and underground
war. Each of these seven schools is taught in accordance with the
special instructions annexed to the regulation, which, however, are
not published.
Winter is more especially devoted to the course of special
theoretical instruction, which commences on the 1st November, and
usually finishes on the 15th March, and the course of special
practical instruction is carried on during the summer from the 15th
March to the 15th September. The second fortnight of September
and the month of October are devoted to sham sieges and
underground war, to the leveling of the works executed, and to the
arrangement of magazines.
SCHOOL FOR INFANTRY AND CAVALRY
AT ST. CYR.
GENERAL DESCRIPTION. CONDITIONS OF ADMISSION. STAFF.
It will have been seen in the accounts of the Polytechnic School
and the School of Application at Metz, in what manner young men
destined for commissions in the artillery and engineers receive their
previous education, and under what conditions appointments as
officers in these two services are made in France. The regulations for
the infantry, the cavalry, and the marines are of the same
description. There are in these also the same two ways of obtaining
a commission. One, and in these services the more usual one, is to
rise from the ranks. The other is to pass successfully through the
school at St. Cyr. Young men who do not enter as privates prove
their fitness for the rank of officers by going through the course of
instruction given, and by passing the examinations conducted in this,
the principal (and, setting aside the School of Application at Metz, the
only) Special Military School of the country.
The earliest foundation of the kind in France was the Ecole Royale
Militaire of 1751. Like most other similar institutions of the time, it
was intended for the young nobility. No one was to be admitted who
could not prove four generations of Noblesse. The pupils were
taught free of charge, and might enter at eight years old. Already,
however, some marks of competition are to be discerned, as the best
mathematicians were to be taken for the Artillery and Engineers.
Buildings on the Plain of Grenelle (the same which still stand,
occupying one end of the present Champs de Mars, and retaining,
though only used as barracks, their ancient name,) were erected for
the purpose. The school continued in this form till 1776, when it was
dissolved (apparently owing to faults of discipline,) and replaced by
ten Colleges, at Sorrèze, Brienne, Vendôme, and other places, all
superintended by ecclesiastics. A new Ecole Royale Militaire,
occupying the same buildings as the former, was added in 1777.
This came to an end in 1787; and the ten colleges were
suppressed under the Republic. A sort of Camp School on the plain
of Sablons took their place, when the war had broken out, and
lasted about a year under the name of the Ecole de Mars.
Under the Consulate in 1800, the Prytanée Français was founded,
consisting of four separate Colleges. The name was not long after
changed to the Prytanée Militaire; and after some time the number
was diminished, and La Flèche, which had in 1764 received the
youngest pupils of the old Royal Military School, became the seat of
the sole remaining establishment; which subsequently sunk to the
proportions of a mere junior preparatory school, and became, in
fine, the present establishment for military orphans, which still
retains the title, and is called the Prytanée Militaire de la Flèche.
A special Military School, in the meantime, had been set up at
Fontainebleau in 1803, transferred in 1808 to St. Cyr, and thus
taking the place of the Prytanée Militaire and of its predecessor, the
original Ecole Royale Militaire, gradually assumed its present form. 15
The course of study lasts two years; the usual number of cadets in
time of peace is five hundred, or at the utmost six hundred; the admission is
by competitive examination, open to all youths, French by birth or by
naturalization, who on the first of January preceding their
candidature were not less than sixteen and not more than twenty
years old. To this examination are also admitted soldiers in the ranks
between twenty and twenty-five years of age, who, at the date of its
commencement, have been actually in service in their regiments for
two years.
The general conditions and formalities are the same as those
already stated for the Polytechnic. It may be repeated that all the
candidates, in accordance with a recent enactment, must have taken
the usual degree which terminates the course of study at the lycées—the
baccalaureate in sciences.
Those who succeed in the examination and are admitted, take an
engagement to serve seven years either in the cavalry or infantry,
and are thus under the obligation, if they are judged incompetent at
the close of their two years’ stay at the school to receive a
commission, to enter and serve as common soldiers. The two years
of their stay at the school count as a part of their service. It is only
in the special case of loss of time caused by illness, that permission
is given to remain a third year.
The ordinary payment is 60l. (1,500 francs) per annum. All whose
inability to pay this amount is satisfactorily established, may claim,
as at the Polytechnic, an allowance of the whole or of half of the
expenses from the State, to which may be added an allowance for
the whole or for a portion of the outfit (from 24l. to 28l.) These
bourses or demi-bourses, with the trousseau, or demi-trousseau,
have during the last few years been granted unsparingly. One-third
of the 800 young men at the school in February 1856 were boursiers
or demi-boursiers. Candidates admitted from the Orphan School of
La Flèche, where the sons of officers wounded or killed in service
receive a gratuitous education, are maintained in the same manner
here. 16
It was the rule till lately that cadets appointed, on leaving St. Cyr,
to the cavalry should be placed for two years at the Cavalry School
at Saumur. This, however, has recently been changed; on entering
St. Cyr those who desire appointments in the cavalry declare their
wishes, and are put at once through a course of training in
horsemanship. Those who are found unfit are quickly withdrawn; the
remainder, if their place on the final examination allows of their
appointment to the cavalry, are by that time sufficiently well
practiced to be able to join their regiments at once.
Twenty-seven, or sometimes a greater number, are annually at the
close of their second year of study placed in competition with
twenty-five candidates from the second lieutenants belonging to the
army, 17 if so many are forthcoming, for admission to the Staff
School at Paris. This advantage is one object which serves as a
stimulus to exertion, the permission being given according to rank in
the classification by order of merit.
The school consists of two divisions, the upper and the lower,
corresponding to the two years of the course. Each division is
divided again into four companies. In each of these eight companies
there are sub-officers chosen from the élèves themselves, with the
titles of Sergent, Sergent Fourrier, and Caporal; those appointed to
the companies of the junior division are selected from the second
year cadets, and their superiority in standing appears to give these
latter some considerable authority, exercised occasionally well,
occasionally ill. The whole school, thus divided into eight companies,
constitutes one battalion.
The establishment for conducting the school consists of—
A General as Commandant.
A Second in Command (a Colonel of Infantry.)
A Major, 4 Captains, 12 Lieutenants, and 5 Second Lieutenants of
Infantry; the Major holding the office of Commandant of the Battalion.
A Major, 1 Captain, 34 Lieutenants, and 3 Second Lieutenants of Cavalry
to superintend the exercises, the riding, &c.
A Director of Studies (at present a Lieutenant-Colonel of Engineers.)
Two Assistant Directors.
Six Examiners for Admission.
One Professor of Artillery.
One Assistant ditto.
One Professor of Topography and Mathematics.
One Professor of Military Administration, Military Art, and Military History.
One Professor of Fortification.
One Professor of Military Literature.
Two Professors of History and Geography.
One Professor of Descriptive Geometry.
One Professor of Physics and Chemistry.
Three Professors of Drawing.
One Professor of German.
Eleven Military and six Civilian Assistant Teachers (Répétiteurs.)
There is also a Quartermaster, a Treasurer, a Steward, a Secretary
of the Archives, who is also Librarian, an Almoner (a clergyman,)
four or five Surgeons, a Veterinary Surgeon, who gives lessons on
the subject, and twelve Fencing Masters.
The professors and teachers are almost entirely military men.
Some difficulty appears to be found by civilians in keeping sufficient
order in the large classes; and it has been found useful to have as
répétiteurs persons who could also be employed in maintaining
discipline in the house. Among the professors at present there are
several officers of the engineers and of the artillery, and of the staff
corps.
There is a board or council of instruction, composed of the
commandant, the second in command, one of the field officers of
the school staff, the director of studies, one of the assistant
directors, and four professors.
So, again, the commandant, the second in command, one of the
field officers, two captains, and two lieutenants, the last four
changing every year, compose the board or council of discipline.
St. Cyr is a little village about three miles beyond the town of
Versailles, and but a short distance from the boundary of the park.
The buildings occupied by the school are those formerly used by
Madame de Maintenon, and the school which she superintended.
Her garden has given place for the parade and exercise grounds; the
chapel still remains in use; and her portrait is preserved in the
apartments of the commandant. The buildings form several courts or
quadrangles; the Court of Rivoli, occupied chiefly by the apartments
and bureaux of the officers of the establishment, and terminated by
the chapel; the Courts of Austerlitz, and Marengo, more particularly
devoted to the young soldiers themselves; and that of Wagram,
which is incomplete, and opens into the parade grounds. These, with
the large stables, the new riding school, the exercising ground for
the cavalry, and the polygon for artillery practice, extend to some
little distance beyond the limit of the old gardens into the open
arable land which descends northwards from the school, the small
village of St. Cyr lying adjacent to it on the south.
The ground floor of the buildings forming the Courts of Marengo,
Austerlitz, and Wagram appeared to be occupied by the two
refectories, by the lecture-rooms or amphitheaters, each holding two
hundred pupils, and by the chambers in which the ordinary
questionings, similar to those already described in the account of the
Polytechnic School, under the name of interrogations particulières,
are conducted.
On the first floor are the salles d’étude and the salle des collections
(the museum or repertory of plans, instruments, models, and machines),
and the library; on the second floor the ordinary
dormitories; and on the third (the attics,) supplementary dormitories
to accommodate the extra number of pupils who have been
admitted since the commencement of the war.
The commission, when visiting the school, was conducted on
leaving the apartments of the commandant to the nearest of the two
refectories. It was after one o’clock, and the long room was in the
full possession of the whole first or junior division. A crowd of active
and spirited-looking young soldiers, four hundred at least in number,
were ranged at two long rows of small tables, each large enough,
perhaps, for twelve; while in the narrow passage extending up and
down the room, between the two rows, stood the officers on duty
for the maintenance of order. On passing back to the corridor, the
stream of the second year cadets was issuing from their opposite
refectory. In the adjoining buttery, the loaf was produced, one
kilogramme in weight, which constitutes the daily allowance. It is
divided into four parts, eaten at breakfast, dinner, the afternoon
lunch or gouter, and the supper. The daily cost of each pupil’s food is
estimated at 1f. 80c.
The lecture rooms and museums offer nothing for special remark.
In the library containing 12,000 books and a fine collection of maps,
there were a few of the young men, who are admitted during one
hour every day.
The salles d’étude on the first floor are, in contrast to those at the
Polytechnic, large rooms, containing, under the present
circumstances of the school, no less than two hundred young men.
There are, in all, four such rooms, furnished with rows of desks on
each side and overlooked in time of study by an officer posted in
each to preserve order, and, so far as possible, prevent any idleness.
From these another staircase conducts to the dormitories,
containing one hundred each, and named after the battles of the
present war—Alma, Inkerman, Balaclava, Bomarsund. They were
much in the style of those in ordinary barracks, occupied by rows of
small iron beds, each with a shelf over it, and a box at the side. The
young men make their own beds, clean their own boots, and sweep
out the dormitories themselves. Their clothing, some portions of
which we here had the opportunity of noticing, is that of the
common soldier, the cloth being merely a little finer.
Above these ordinary dormitories are the attics, now applied to
the use of the additional three hundred whom the school has latterly
received.
The young men, who had been seen hurrying with their muskets
to the parade ground, were now visible from the upper windows,
assembled, and commencing their exercises. And when, after
passing downwards and visiting the stables, which contain three
hundred and sixty horses, attended to by two hundred cavalry
soldiers, we found ourselves on the exercising ground, the cavalry
cadets were at drill, part mounted, the others going through the
lance exercise on foot. In the riding-school a squad of infantry
cadets were receiving their weekly riding lesson. The cavalry cadets
ride three hours a-day; those of the infantry about one hour a week.
The exercising ground communicates with the parade ground; here
the greater number of the young men were at infantry drill, under
arms. A small squad was at field-gun drill in an adjoining square.
Beyond this and the exercising ground is the practice ground, where
musket and artillery practice is carried on during the summer.
Returning to the parade ground we found the cadets united into a
battalion; they formed line and went through the manual exercise,
and afterwards marched past; they did their exercise remarkably
well. Some had been only three months at the school. The marching
past was satisfactory; it was in three ranks, in the usual French
manner.
Young men intended for the cavalry are instructed in infantry and
artillery movements and drill; just as those intended for the infantry
are taught riding, and receive instruction in cavalry, as well as
artillery drill and movements.
It is during the second year of their stay that they receive most
instruction in the arms of the service to which they are not destined,
and this, it is said, is a most important part of their instruction. “It is
this,” said the General Commandant, “that made it practicable, for
example, in the Crimea, to find among the old élèves of St. Cyr,
officers fit for the artillery, the engineers, the staff; and for general
officers, of course, it is of the greatest advantage to have known
from actual study something of every branch.”
The ordinary school vacations last six or seven weeks in the year.
The young men are not allowed to quit the grounds except on
Sundays. On that day there is mass for the young men.
The routine of the day varies considerably with the season. In
winter it is much as follows:—At 5 A.M. the drum beats, the young
men quit their beds; in twelve minutes they are all dressed and out,
and the dormitories are cleared. The rappel sounds on the grand
carré; they form in their companies, enter their salles, and prepare
for the lecture of the day until a quarter to 7. At 7 o’clock the officers
on duty for the week enter the dormitories, to which the pupils now
return; at a quarter to 8 the whole body passes muster in the
dormitories, in which they have apparently by this time made their
beds and restored cleanliness and order. Breakfast is taken at one
time or other during the interval between a quarter to 7 and 8
o’clock.
They march to their lecture rooms at 8, the lecture lasts till a
quarter past 9, when they are in like manner marched out, and are
allowed a quarter of an hour of amusement. They then enter the
halls of study, make up their notes on the lecture they have come
from, and after an hour and a half employed in this way, for another
hour and a half are set to drawing.
Dinner at 1 is followed by recreation till 2. Two hours from 2 to a
quarter past 4 are devoted to military services.
From 4 to 6 P.M. part are occupied in study of the drill-book
(théorie,) part in riding or fencing: a quarter of an hour’s recreation
follows, and from 6¼ to 8½ there are two hours of study in the
salles. At half-past 8 the day concludes with the supper.
The following table gives a view of the routine in summer:—
4½ A.M. to 4¾ A.M. Dressing.
4¾ “ to 7¼ “ Military exercises.
7¼ “ to 8¼ “ Breakfast, cleaning, inspection.
8¼ “ to 9½ “ Lecture.
9½ “ to 9¾ “ Recreation.
9¾ “ to 11¼ “ Study.
11¼ “ to 1 P.M. Drawing.
1 P.M. to 2 “ Dinner and recreation.
2 “ to 4 “ Study of drill-book (théorie) or fencing.
4 “ to 6 “ Study for some, riding for others.
6 “ to 6¼ “ Recreation.
6¼ “ to 8 “ Riding for some, study for others.
8 “ to 8½ “ Supper.
The entrance examination is much less severe than that for the
Polytechnic; but a moderate amount of mathematical knowledge is
demanded, and is obtained. The candidates are numerous; and if it
be true that some young men of fortune shrink from a test, which,
even in the easiest times, exacts a knowledge of the elements of
trigonometry, and not unfrequently seek their commissions by
entering the ranks, their place is supplied by youths who have their
fortunes to make, and who have intelligence, industry, and
opportunity enough to acquire in the ordinary lycées, the needful
amount of knowledge.
Under present circumstances it is, perhaps, more especially in the
preparatory studies that the intellectual training is given, and for the
examination of admission that theoretical attainments are
demanded. The state of the school in a time of war can not exactly
be regarded as a normal or usual one. The time of stay has been
sometimes shortened from two years to fifteen months; the
excessive numbers render it difficult to adjust the lectures and
general instruction so as to meet the needs of all; the lecture rooms
and the studying rooms are all insufficient for the emergency; and
what is yet more than all, the stimulus for exertion, which is given by
the fear of being excluded upon the final examination, and sent to
serve in the ranks, is removed at a time when almost every one may
feel sure that a commission which must be filled up will be vacant
for him. Yet even in time of peace, if general report may be trusted,
it is more the drill, exercises, and discipline, than the theory of
military operations, that excite the interest and command the
attention of the young men. When they leave, they will take their
places as second lieutenants with the troops, and they naturally do
not wish to be put to shame by showing ignorance of the common
things with which common soldiers are familiar. Their chief incentive
is the fear of being found deficient when they join their regiments,
and, with the exception of those who desire to enter the staff corps,
their great object is the practical knowledge of the ordinary matters
of military duty. “Physical exercises,” said the Director of Studies,
“predominate here as much as intellectual studies do at the
Polytechnic.”
But the competition for entrance sustains the general standard of
knowledge. Even when there is the greatest demand for admissible
candidates, the standard of admission has not, we are told, been
much reduced. No one comes in who does not know the first
elements of trigonometry. And the time allotted by the rules of the
school to lectures and indoor study is far from inconsiderable.
EXAMINATIONS FOR ADMISSION—STUDIES AT THE SCHOOL.
The examinations for admission are conducted almost precisely
upon the same system which is now used in those for the
Polytechnic School. 18 There is a preliminary or pass examination
(du premier degré), and for those who pass this a second or class
examination (du second degré.) For the former there are three
examiners, two for mathematics, physics, and chemistry, and a third
for history, geography, and German. The second examination, which
follows a few days after, is conducted in like manner by three
examiners. A jury of admission decides. The examination is for the
most part oral; and the principal difference between it and the
examination for the Polytechnic is merely that the written papers are
worked some considerable time before the first oral examination
(du premier degré,) and are looked over with a view to assist the
decision as to admissibility to the second (du second degré.) Thus
the compositions écrites are completed on the 14th and 15th of
June; the preliminary examination commences at Paris on the 10th
of July; the second examination on the 13th.
The subjects of examination are the following:—
Arithmetic, including vulgar and decimal fractions, weights and measures, square
and cube root, ratios and proportions, interest and discount, use of logarithmic
tables and the sliding rule.
Algebra, to quadratic equations with one unknown quantity, maxima and minima,
arithmetical and geometrical progressions, logarithms and their application to
questions of compound interest and annuities.
Geometry, plane and solid, including the measurement of areas, surfaces, and
volumes; sections of the cone, cylinder, and sphere.
Plane Trigonometry: construction of trigonometrical tables and the solution of
triangles; application to problems required in surveying.
Geometrical representations of bodies by projections.
French compositions.
German exercises.
Drawing, including elementary geometrical drawing and projections; plan, section,
and elevation of a building; geographical maps.
Physical Science (purely descriptive:) cosmography; physics, including elementary
knowledge of the equilibrium of fluids; weight, gravity, atmospheric pressure,
heat, electricity, magnetism, acoustics, optics, refraction, microscope, telescope.
Chemistry, elementary principles of; on matter, cohesion, affinity; simple and
compound bodies, acids, bases, salts, oxygen, combustion, azote, atmospheric
air, hydrogen, water; respecting equivalents and their use, carbon, carbonic
acid, production and decomposition of ammonia, sulphur, sulphuric acid,
phosphorus, chlorine; classification of non-metallic bodies into four families.
History: History of France from the time of Charles VII. to that of the Emperor
Napoleon I. and the treaties of 1815.
Geography, relating entirely to France and its colonies, both physical and
statistical.
German: the candidates must be able to read fluently both the written and printed
German character, and to reply in German to simple questions addressed to
them in the same language.
The general system of instruction at St. Cyr is similar to that of the
Polytechnic; the lectures are given by the professors, notes are
taken and completed afterwards, and progress is tested in
occasional interrogations by the répétiteurs. One distinction is the
different size of the salles d’étude (containing two hundred instead
of eight or ten;) but, above all, is the great and predominant
attention paid to the practical part of military teaching and training.
It is evident at the first sight that this is essentially a military school,
and that especial importance is attached both by teachers and pupils
to the drill, exercise, and manœuvers of the various arms of the
service.
The course of study is completed in two years; that of the first
year consists of:—
27 lectures in descriptive geometry.
35 “ physical science.
20 “ military literature.
35 “ history.
27 “ geography and military statistics.
30 “ German.
Total, 174
In addition to the above, there is a course of drawing between the
time when the students join the school early in November and the
15th of August.
The course of drawing consists in progressive studies of landscape
drawing with the pencil and brush, having special application to military
subjects, to the shading of some simple body or dress, and to enable the
students to apply the knowledge which has been communicated to them on
the subject of shadows and perspective. This course is followed by the
second or junior division during the first year’s residence.
The course of lectures in descriptive geometry commences with certain
preliminary notions on the subject; refers to the representation of lines on
curved surfaces, cylindrical and conical, surfaces of revolution, regular
surfaces, intersection of surfaces, shadows, perspective, vanishing points, &c.,
construction of geographical maps, and plan côté.
The lectures in physical science embrace nine lectures on the general
properties of bodies; heat, climate, electricity, magnetism, galvanism,
electro-magnetism, acoustics.
There are twelve lectures in chemistry; on water, atmospheric air,
combustibles, gas, principal salts, saltpetre, metallurgy, organic chemistry.
There are fourteen lectures in mechanics applied to machines; motion,
rest, gravity, composition and resolution of forces, mechanical labor,
uniform motion, rectilinear and rotatory, projectiles in space, mechanical
powers, drawbridges, Archimedean principle, military bridges, pumps,
reservoirs, over and under-shot wheels, turbines, corn mills, steam-engines,
locomotives, transport of troops, materials, and munitions on railways.
The twenty lectures in military literature refer to military history and
biography, memoirs of military historians, battles and sieges, the art of war,
military correspondence, proclamations, bulletins, orders of the day,
instructions, circulars, reports and military considerations, special memoirs,
reconnaissance and reports, military and periodical collections, military
justice.
The thirty-five lectures in history principally relate to France and its wars,
commencing with the Treaty of Westphalia and ending with the Treaty of
Vienna.
The twenty-seven lectures in geography and military statistics are
subdivided into different parts; the first eight lectures are devoted to
Europe and France, including the physical geography and statistics of the
same; the second six lectures are devoted to the frontiers of France; and
the third part of thirteen lectures to foreign states and Algeria, including
Germany, Italy, Spain, Portugal, Poland, and Russia.
The studies for the first division during the second year of their
residence consist of—
10 lectures in topography.
27 “ fortification.
15 “ artillery.
10 “ military legislation.
12 “ military administration.
27 “ military art and history.
20 “ German.
Total, 121
One lesson weekly is given in drawing, in order to render the
students expert in landscape and military drawing with the pencil,
pen, and brush.
We must not omit to call attention to the fact that mathematics
are not taught in either yearly course at St. Cyr.
The course in topography, of ten lectures, has reference to the
construction of maps, copies of drawings, theory, description, and use of
instruments for measuring angles and leveling, the execution for a regular
survey on the different systems of military drawing, drawing from models of
ground, on the construction of topographical drawing and reconnaissance
surveys, with accompanying memoirs.
Twenty-seven lectures are devoted to fortification; the first thirteen relate
principally to field fortification, statement of the general principles,
definitions, intrenchments, lines, redoubts, armament, defilement,
execution of works on the ground, means necessary for the defense,
application of field fortification to the defenses of têtes de pont and
inhabited places, attack and defense of intrenchments, &c.,
castramentation; six lectures have reference to permanent fortification, on
ancient fortifications, Cormontaigne’s system, exterior and detached works,
considerations respecting the accessories of defense to fortified places;
eight lectures relate to the attack and defense of places, preparations for
attack and defense, details of the construction of siege works from the
opening of the trenches to the taking of the place, exterior works, as
auxiliaries, sketches, and details of the different works in fortifications,
plans, and profile, &c.
The students also execute certain works, such as the making of fascines,
gabions, saucissons, repair of revetments of batteries, platform, setting the
profiles, defilement, and construction of a fieldwork, different kinds of sap,
plan and establishment of a camp for a battalion of infantry, &c.
Under the head of artillery, fifteen lectures are given, commencing with
the resistance of fluids, movement of projectiles, solution of problems with
the balistic pendulum, deviation of projectiles, pointing and firing guns;
small arms, cannon, materials of artillery, powder, munition, fireworks for
military purposes; range of cannon, artillery for the attack or defense of
places or coasts, field artillery, military bridges.
The students are practically taught artillery drill with field and siege guns,
practice with artillery, repair of siege batteries, bridges of boats or rafts.
The ten lectures allowed for the course of military legislation have for
their object the explanation of the principles, practice, and regulations
relating to military law, and the connection with the civil laws that affect
military men.
The twelve lectures on what is called military administration relate to the
interior economy of a company, and to the various matters appertaining to
the soldier’s messing, mode of payment, necessaries, equipment, lodging,
&c.
Military art and history is divided into three parts. The first, of five
lectures, relates to the history of military institutions and organization. The
second, of fifteen lectures, refers to the composition of armies and to
considerations respecting the various arms, infantry, cavalry, état-major,
artillery and engineers, and the minor operations of war. The third part, of
seven lectures, gives the history of some of the most celebrated campaigns
in modern times. In the practical exercises, the students make an attack or
defense of a work or of a system of fieldworks during their course of
fortification, or of a house, farm, village, in the immediate vicinity of the
school, or make the passage of a river.
The students receive twenty lectures in German, and are required to keep
up a knowledge of German writing.
EXAMINATIONS AT THE SCHOOL.
The examinations at the end of the first year take place under the
superintendence of the director and assistant director of studies.
They are conducted by the professor of each branch of study,
assisted by a répétiteur, each of whom assigns a credit to the
student under examination, and the mean, expressed as a whole
number, represents the result of the student’s examination in that
particular branch of study. The examination in military instruction for
training (in drill and exercises) is carried on by the officers attached
to companies, under the superintendence of the commandant of the
battalion, and that relating to practical artillery by the officer in
charge of that duty.
The pupil’s position is determined, as at the Polytechnic, partly by
the marks gained at the examination, partly by those he has
obtained during his previous studies. In other words, the half of the
credit obtained by a student at this examination in each subject is
added to the half of the mean of all the credits assigned to him, in
the same subject, for the manner in which he has replied to the
questions of the professor and répétiteur during the year; and the
sum of these two items represents his total credit at the end of the
year. The scale of credit is from 0 to 20, as at the Polytechnic.
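The rule just described reduces to simple arithmetic: half the credit gained at the examination added to half the mean of the year's credits, all on the scale of 0 to 20. A minimal sketch in modern Python, with invented names (the report does not, of course, supply any):

```python
# Year-end credit in a subject, as described in the report: half the
# examination credit plus half the mean of the credits assigned through
# the year, on the 0-20 scale. Function and variable names are
# illustrative inventions, not the school's terminology.

def year_end_credit(exam_credit, year_credits):
    """Combine an examination credit with the mean of the year's credits."""
    year_mean = sum(year_credits) / len(year_credits)
    return 0.5 * exam_credit + 0.5 * year_mean

# A student who scores 14 at the examination after averaging 12
# through the year ends the year with 0.5*14 + 0.5*12 = 13.
print(year_end_credit(14, [12, 12, 12]))  # 13.0
```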
Every year, before the examinations commence, the commandant
and second in command, in concert with the director and assistant
director, and in concurrence with the superior officer commanding
the battalion for military instruction, are formed into a board to
determine the amount of the minimum credit which should be
exacted from the students in every branch of study. This minimum is
not usually allowed to fall below eight for the scientific, and ten for
the military instruction.
Any student whose general mean credit is less than eight for the
scientific, or ten for the military instruction, or who has a less credit
than four for any particular study in the general instruction, or of six
for the military instruction, is retained at the school to work during
the vacation, and re-examined about eight days before the
recommencement of the course, by a commission composed of the
director and assistant director of studies for the general instruction,
and of the second in command and the commandant of the
battalion, and of one captain for the military instruction. A statement
of this second examination is submitted to the minister of war, and
those students who pass it in a satisfactory manner are permitted by
him to proceed into the first division. Those who do not pass it are
reported to the minister of war as deserving of being excluded from
the school, unless there be any special grounds for excusing them,
such as sickness, in which case, when the fact is properly
established before the council of instruction, they are permitted to
repeat the year’s studies.
Irregularity of conduct is also made a ground for exclusion from
the school. In order to estimate the credit to be attached to the
conduct of a student, all the punishments to which he can be
subjected are converted into a specific number of days of
punishment drill. Thus,
For each day confined in the police chamber, 4 days’ punishment
drill.
For each day confined in the prison, 8 days’ punishment drill.
The statement is made out under the presidency of the
commandant of the school, by the second in command, and the
officer in command of the battalion. The credits for conduct are
expressed in whole numbers in terms of the scale of 0 to 20, in
which the 20 signifies that the student has not been subjected to
any punishment whatever, and the 0, that the student’s punishments
have amounted to 200 or more days of punishment drill. The
number 20 is diminished by deducting 1 for every 10 days of
punishment drill.
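The conversion can be stated compactly: each day in the police chamber counts as four days of punishment drill, each day in prison as eight, and the conduct credit of 20 loses one point for every ten days of drill, reaching 0 at 200 days or more. A minimal sketch, assuming whole-number deduction (integer division), with invented names:

```python
# Conduct credit as described in the report: punishments are converted
# into days of punishment drill (police chamber = 4 days each, prison =
# 8 days each), and the credit of 20 is diminished by 1 for every 10
# drill days, bottoming out at 0 once 200 days are reached.
# Names are illustrative; integer division is an assumption consistent
# with the report's "whole numbers."

def conduct_credit(days_police_chamber=0, days_prison=0):
    drill_days = 4 * days_police_chamber + 8 * days_prison
    return max(0, 20 - drill_days // 10)

print(conduct_credit())               # 20: no punishment whatever
print(conduct_credit(days_prison=5))  # 16: 40 drill days, minus 4
print(conduct_credit(days_prison=25)) # 0: 200 drill days or more
```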
The classification in the order of merit depends upon the total
amount of the sum of the numerical marks or credits obtained by
each student in every branch of study or instruction. The numerical
credit in each subject is found by multiplying the credit awarded in
each subject by the co-efficient of influence belonging to it.
The co-efficients, representing the influence allowed to each
particular kind of examination, in the various branches of study are
as follows:—
Second Division, or First Year’s Course of Study.

General Instruction:
  Descriptive Geometry: Course, 6; Drawing and Sketches, 2.
  Physical Science applied to the Military Arts: Course, 6; Sketch and Memoir, 2.
  History, 6.
  Geography and Statistical Memoirs: Course, 5; Sketch and Memoir, 2.
  Literature, Memoir on, 4.
  German, 4.
  Drawing, 3.
    Total, General Instruction, 40
Special Instruction: Drill, Practice, Manœuvers (Infantry and Cavalry,) 7
Conduct, 3
    Total, 50

First Division, or Second Year’s Course of Study.

                                                      Infantry. Cavalry.
General Instruction:
  Topography: Course,                                      3        3
    Maps, Memoirs, and Practical Exercises,                3        2
  Fortification: Course,                                   4        4
    Drawings, Memoirs, and Practical Exercises,            3        2
  Artillery and Balistic Pendulum: Course,                 4        4
    Practical Exercises, School of Musketry,               2        1
  Military Legislation,                                    2        2
  Military Administration: Course,                         3        3
    Sheets of Accounts,                                    1        1
  Military History and Art: Course,                        4        4
    Memoirs and applications,                              1        1
  German,                                                  4        4
  Drawing,                                                 1        1
    Total, General Instruction,                           35       32
Special Instruction for Infantry:
  Theory of Drill, Manœuvers (3 Schools,)                  4       ..
  Practical Instruction,                                   3       ..
  Regulations,                                             2       ..
    Total,                                                 9       ..
Special Instruction for Cavalry:
  Riding,                                                 ..        3
  Theoretical and Practical Instruction,                  ..        7
  Veterinary Art,                                         ..        2
    Total,                                                ..       12
Conduct,                                                   6        6
    Total,                                                50       50
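The weighting scheme lends itself to a short worked example. The sketch below, in modern Python, multiplies each credit on the 0 to 20 scale by its first-year coefficient of influence and sums the products; the coefficient values are those of the table, while the function and variable names, and the sample credits, are invented for illustration:

```python
# Order-of-merit computation: each subject credit (0-20) is multiplied
# by its coefficient of influence and the products are summed. The
# coefficients below are the first-year (Second Division) values from
# the table; the uniform sample credits are invented.

FIRST_YEAR_COEFFICIENTS = {
    "descriptive geometry (course)": 6,
    "descriptive geometry (drawing and sketches)": 2,
    "physical science (course)": 6,
    "physical science (sketch and memoir)": 2,
    "history": 6,
    "geography (course)": 5,
    "geography (sketch and memoir)": 2,
    "literature (memoir)": 4,
    "german": 4,
    "drawing": 3,
    "drill, practice, manoeuvres": 7,
    "conduct": 3,
}  # the coefficients sum to 50, as in the table

def total_credit(credits, coefficients=FIRST_YEAR_COEFFICIENTS):
    """Sum of credit * coefficient over every subject."""
    return sum(coefficients[subject] * c for subject, c in credits.items())

# A student holding 10 in every subject scores 10 * 50 = 500.
uniform = {subject: 10 for subject in FIRST_YEAR_COEFFICIENTS}
print(total_credit(uniform))  # 500
```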
To facilitate this classification in order of merit, three distinct
tables are prepared,—
The first relating to the general instruction;
The second relating to the military instruction; and
The third relating to the conduct;
and they respectively contain, one column in which the names of
the students are arranged by companies in the order in which they
have been examined; followed by as many columns as there are
subjects of examination, for the insertion of their individual credit
and the co-efficient of influence, by which each credit is multiplied;
and lastly by a column containing the sum of the various products
belonging to, and placed opposite each student’s name.
These tables are respectively completed by the aid of the existing
documents, the first for the general instruction, by the director of
studies; the second for the military instruction, by the officer
commanding the battalion; the third for conduct, under the direction
of the commandant of the school, assisted by the second in
command.
A jury formed within the school, composed of the general
commandant, president, the second in command, the director of
studies, and the officer commanding the battalion, is charged with
the classification of the students in the order of merit.
To effect it, after having verified and established the accuracy of
the above tables, the numbers appertaining to each student in the
three tables are extracted and inserted in another table, containing
the name of each student, and, in three separate columns, the
numbers obtained by each in general instruction, military instruction,
and conduct, and the sum of these credits in another column.
By the aid of this last table, the jury cause another to be
compiled, in which the students are arranged in the order of merit as
established by the numerical amount of their credits, the highest in
the list having the greatest number.
If there should be any two or more having the same number of
total credits, the priority is determined by giving it to the student
who has obtained a superiority of credits in military instruction,
conduct, general instruction, notes for the year; and if these prove
insufficient, they are finally classed in the same order as they were
admitted into the school.
A list for passing from the second to the first division is forwarded
to the minister at war, with a report in which the results for the year
are compared with the results of the preceding year; and the
minister at war, with these reports before him, decides who are
ineligible from incompetency, or by reason of their conduct, to pass
to the other division.
The period when the final examinations before leaving the school
are to commence, is fixed by the president of the jury, specially
appointed to carry on this final examination, in concert with the
general commandant of the school.
The president of the jury directs and superintends the whole of
the arrangements for conducting the examination; and during each
kind of examination, a member of the corps, upon the science of
which the student is being questioned, assists the examiner, and, as
regards the military instruction, each examiner is aided by a captain
belonging to the battalion.
The examination is carried on in precisely the same manner as
that already described for the end of the first year’s course of study.
And the final classification is ascertained by adding to the numerical
credits obtained by each student during his second year’s course of
study, in the manner already fully explained, one-tenth of the
numerical credits obtained at the examinations at the end of the first
year.
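The final reckoning is thus a weighted sum over the two years. A minimal sketch, with invented names, assuming the two yearly totals have already been computed as described:

```python
# Final classification credit: the numerical credits of the second
# year's course plus one-tenth of those obtained at the end of the
# first year. The function name and sample figures are illustrative.

def final_classification_credit(second_year_total, first_year_total):
    return second_year_total + first_year_total / 10

# A student with 430 second-year credits and 400 first-year credits
# is classed on 430 + 40 = 470.
print(final_classification_credit(430, 400))  # 470.0
```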
The same regulations as to the minimum credit which a student
must obtain in order to pass from one division to the other, at the
end of the first year, which are stated on page 160, are equally
applicable to his passing from the school to become a second
lieutenant in the army.
A list of the names of those students who are found qualified for
the rank of second lieutenant is sent to the minister at war, and a
second list is also sent, containing the names of those students that
have, when subjected to a second or revised examination, been
pronounced by the jury before whom they were re-examined as
qualified.
Those whose names appear in the first list are permitted to
choose according to their position in the order of merit, the staff
corps or infantry, according to the number required for the first
named service, and to name the regiments of infantry in which they
desire to serve.
Those intended for the cavalry are placed at the disposal of the
officer commanding the regiment which they wish to enter.
Those whose names appear in the second list are not permitted to
choose their corps, but are placed by the minister at war in such
corps as may have vacancies in it, or where he may think proper.
The students who are selected to enter the staff corps, after
competing successfully with the second lieutenants of the army,
proceed as second lieutenants to the staff school at Paris. Those
who fail pass into the army as privates, according to the terms of the
engagement made on entering the school.
Welcome to our website – the perfect destination for book lovers and
knowledge seekers. We believe that every book holds a new world,
offering opportunities for learning, discovery, and personal growth.
That’s why we are dedicated to bringing you a diverse collection of
books, ranging from classic literature and specialized publications to
self-development guides and children's books.
More than just a book-buying platform, we strive to be a bridge
connecting you with timeless cultural and intellectual values. With an
elegant, user-friendly interface and a smart search system, you can
quickly find the books that best suit your interests. Additionally,
our special promotions and home delivery services help you save time
and fully enjoy the joy of reading.
Join us on a journey of knowledge exploration, passion nurturing, and
personal growth every day!
ebookbell.com

More Related Content

PDF
Practical Program Evaluation Theory Driven Evaluation and the Integrated Eval...
PDF
Practical Program Evaluation Theory Driven Evaluation and the Integrated Eval...
PDF
Practical Program Evaluation Theory Driven Evaluation and the Integrated Eval...
PDF
Evaluating Public and Community Health Programs 2nd Edition, (Ebook PDF)
PDF
Evaluating Public and Community Health Programs 2nd Edition, (Ebook PDF)
PDF
(eBook PDF) Evaluating Public and Community Health Programs 2nd Edition
PDF
Program Evaluation Methods And Case Studies Emil J Posavac Kenneth J Linfield
PDF
Designing and Managing Programs: An Effectiveness-Based Approach (SAGE
Practical Program Evaluation Theory Driven Evaluation and the Integrated Eval...
Practical Program Evaluation Theory Driven Evaluation and the Integrated Eval...
Practical Program Evaluation Theory Driven Evaluation and the Integrated Eval...
Evaluating Public and Community Health Programs 2nd Edition, (Ebook PDF)
Evaluating Public and Community Health Programs 2nd Edition, (Ebook PDF)
(eBook PDF) Evaluating Public and Community Health Programs 2nd Edition
Program Evaluation Methods And Case Studies Emil J Posavac Kenneth J Linfield
Designing and Managing Programs: An Effectiveness-Based Approach (SAGE

Similar to Practical Program Evaluation Theorydriven Evaluation And The Integrated Evaluation Perspective 2nd Edition Huey T Chen (20)

PDF
Leading the Teacher Induction and Mentoring Program 2nd Edition Barry W. Sweeny
PDF
Qualitative Research Evaluation Methods Integrating Theory and Practice Micha...
PDF
(eBook PDF) Evaluating Public and Community Health Programs 2nd Edition
PDF
(eBook PDF) Evaluating Public and Community Health Programs 2nd Edition
PDF
Program Evalutaion Forms And Approaches 3rd Edition John M Owen
PDF
Qualitative Research Evaluation Methods Integrating Theory and Practice Micha...
PDF
Flexible Evaluation
PDF
(eBook PDF) A Local Assessment Toolkit to Promote Deeper Learning: Transformi...
PDF
Full download Program Evaluation 3rd Edition John M. Owen pdf docx
DOCX
Workbook for Designing a Process Evaluation
DOCX
Workbook for Designing a Process Evaluation .docx
PDF
Social Work Evaluation: Enhancing What We Do 3rd Edition James R. Dudley
PDF
logic mode is used for project analsysis
PDF
logit model for program and service delivery works
PPT
2014_10_17_HowtoWriteanEvaluationPlanSlides_ORE.ppt
PPTX
Hr chapter 8 Training employees
PPT
Program Evaluation 1
PDF
Essentials of Utilization Focused Evaluation 1st Edition Michael Quinn Patton
PDF
Program Evaluation 3rd Edition John M. Owen
PDF
Social Work Evaluation: Enhancing What We Do 3rd Edition James R. Dudley
Leading the Teacher Induction and Mentoring Program 2nd Edition Barry W. Sweeny
Qualitative Research Evaluation Methods Integrating Theory and Practice Micha...
(eBook PDF) Evaluating Public and Community Health Programs 2nd Edition
Practical Program Evaluation: Theory-Driven Evaluation and the Integrated Evaluation Perspective, 2nd Edition, Huey T. Chen

  • 7. To the memory of my mother, Huang-ai Chen
  • 8. Practical Program Evaluation Theory-Driven Evaluation and the Integrated Evaluation Perspective Huey T. Chen Mercer University Second Edition
  • 9. Copyright © 2015 by SAGE Publications, Inc. All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. Printed in the United States of America Library of Congress Cataloging-in-Publication Data Chen, Huey-tsyh. Practical program evaluation : theory-driven evaluation and the integrated evaluation perspective / Huey T. Chen, Mercer University. — 2nd edition. pages cm Includes bibliographical references and index. ISBN 978-1-4129-9230-5 (pbk. : alk. paper) 1. Evaluation research (Social action programs) I. Title. H62.C3647 2015 300.72—dc23 2014019026 This book is printed on acid-free paper. For information: SAGE Publications, Inc. 2455 Teller Road Thousand Oaks, California 91320 E-mail: [email protected] SAGE Publications Ltd. 1 Oliver’s Yard 55 City Road London, EC1Y 1SP United Kingdom SAGE Publications India Pvt. Ltd. B 1/I 1 Mohan Cooperative Industrial Area Mathura Road, New Delhi 110 044 India SAGE Publications Asia-Pacific Pte. Ltd. 3 Church Street #10-04 Samsung Hub Singapore 048763 Acquisitions Editor: Helen Salmon Associate Editor: Eve Oettinger Editorial Assistant: Anna Villarruel Production Editor: Jane Haenel Copy Editor: Paula L. Fleming Typesetter: C&M Digitals (P) Ltd. Proofreader: Susan Schon Indexer: Robie Grant Cover Designer: Anupama Krishnan Marketing Manager: Nicole Elliott
  • 10. Contents Preface xvi Special Features of the Book xvii About the Author xix PART I: Introduction 1 Chapter 1. Fundamentals of Program Evaluation 3 The Nature of Intervention Programs and Evaluation: A Systems View 3 Classic Evaluation Concepts, Theories, and Methodologies: Contributions and Beyond 6 Evaluation Typologies 7 The Distinction Between Formative and Summative Evaluation 7 Analysis of the Formative and Summative Distinction 8 A Fundamental Evaluation Typology 10 Basic Evaluation Types 11 Hybrid Evaluation Types 13 Applications of the Fundamental Evaluation Typology 14 Internal Versus External Evaluators 14 Politics, Social Justice, Evaluation Standards, and Ethics 15 Evaluation Steps 17 Evaluation Design and Its Components 18 Major Challenges of Evaluation: Lessons Learned From Past Practice 20 Judge a Program Not Only by Its Results but Also by Its Context 21 Evaluations Must Address Both Scientific and Stakeholder Credibility 22 Evaluations Must Provide Information That Helps Stakeholders Do Better 23 Addressing the Challenges: Theory-Driven Evaluation and the Integrated Evaluation Perspective 25
  • 11. Theory-Driven Evaluation Approach 25 Integrated Evaluation Perspective 26 Program Complexity and Evaluation Theories 30 Who Should Read This Book and How They Should Use It 31 Students 31 Evaluation Practitioners 32 Introducing the Rest of the Chapters 32 Questions for Reflection 33 Chapter 2. Understand Approaches to Evaluation and Select Ones That Work: The Comprehensive Evaluation Typology 35 The Comprehensive Evaluation Typology: Means and Ends 36 Stages in the Program Life Cycle 39 Dynamics of Transition Across Program Stages 41 Evaluation Approaches Associated With Each Stage 41 Planning Stage 42 Initial Implementation 42 Mature Implementation 43 Outcome Stage 43 Strategies Underlying Evaluation Approaches 44 Merit Assessment Strategies 46 Development Strategies 47 Applying the Typology: Steps to Take 51 Evaluation Ranging Across Several Program Stages 54 Dynamics of Evaluation Entries Into Program Stages 55 1. Single-Entry Evaluation 55 2. Multiple-Entry Evaluation 56 Questions for Reflection 57 Chapter 3. Logic Models and the Action Model/Change Model Schema (Program Theory) 58 Logic Models 58 Additional Examples of Applying Logic Models 61 Program Theory 65 The Action Model/Change Model Schema 66 Descriptive Assumptions 67 Prescriptive Assumptions 68 Components of the Change Model 70
  • 12. Goals and Outcomes 71 Determinants 71 Intervention or Treatment 73 Components of the Action Model 74 Intervention and Service Delivery Protocols 74 Implementing Organizations: Assess, Enhance, and Ensure Their Capabilities 75 Program Implementers: Recruit, Train, and Maintain Both Competency and Commitment 76 Associate Organizations/Community Partners: Establish Collaborations 76 Ecological Context: Seek the Support of the Environment 77 Target Population: Identify, Recruit, Screen, Serve 78 Relationships Among Components of the Action Model/Change Model Schema 79 Applying the Action Model/Change Model Schema: An Example 82 Change Model 84 Action Model 84 Some Advantages of Using the Action Model/Change Model Schema 84 Facilitation of Holistic Assessment 84 Provision of Comprehensive Information Needed to Improve Programs 85 Delineation of a Strategy to Consider Stakeholders’ Views and Interests 85 Flexible Application of Research Methods to Serve Evaluation Needs 86 Aid to Selecting the Most Suitable Approaches or Methods 86 Helping Stakeholders Gear Up (or Clear Up) Their Action Model/Change Model Schema 86 Reviewing Existing Documents and Materials 87 Clarifying Stakeholders’ Theory 87 Participatory Modes for Development Facilitation 88 Theorizing Procedures for Development Facilitation 89 Preparing a Rough Draft That Facilitates Discussion 91 Applications of Logic Models and the Action Model/Change Model Schema 92 Questions for Reflection 92
  • 13. PART II: Program Evaluation to Help Stakeholders Develop a Program Plan 95 Chapter 4. Helping Stakeholders Clarify a Program Plan: Program Scope 97 The Program Plan, Program Scope, and Action Plan 97 Conceptual Framework of the Program Scope 98 Why Develop a Program Scope? 100 Strategies for Articulating the Program Scope 100 Background Information Provision Strategy and Approaches 101 Needs Assessment 101 Formative Research 102 The Conceptualization Facilitation Approach: A Part of the Development Facilitation Strategy 103 Working Group or Intensive Interview Format? 103 Theorizing Methods 104 Determinants and Types of Action Model/Change Model Schemas 109 Choosing Interventions/Treatments That Affect the Determinant 110 The Relevancy Testing Approach: A Part of the Troubleshooting Strategy 112 Research Example of Relevancy Testing 113 Moving From Program Scope to Action Plan 115 Questions for Reflection 116 Chapter 5. Helping Stakeholders Clarify a Program Plan: Action Plan 117 The Action Model Framework and the Action Plan 117 Strategies for Developing Action Plans 120 The Formative Research Approach (Under Background Information Provision Strategy) 120 Example of Formative Research 121 The Conceptualization Facilitation Approach (Under Development Facilitation Strategy) 122 1. Implementing Organization: Assess, Enhance, and Ensure Its Capacity 123 2. Intervention and Service Delivery Protocols: Delineate Service Content and Delivery Procedures 124 3. Program Implementers: Recruit, Train, and Maintain for Competency and Commitment 126 4. Associate Organizations/Community Partners: Establish Collaborative Relationships 127
  • 14. 5. Ecological Context: Seek the Support of the Environment 128 6. Target Population: Identify, Recruit, Screen, and Serve 130 Application of the Conceptualization Facilitation Strategy 133 Example 1: A Garbage Reduction Program 133 Example 2: An HIV-Prevention Program 136 The Pilot-Testing Approach 141 Defining Pilot Testing 141 Conducting Pilot Testing 142 Designing Pilot Testing 143 The Commentary or Advisory Approach 146 Questions to Inform the Evaluator’s Commentary on a Program Scope 146 Questions to Inform the Evaluator’s Commentary on an Action Plan 147 Summary 148 Questions for Reflection 149 PART III: Evaluating Implementation 151 Chapter 6. Constructive Process Evaluation Tailored for the Initial Implementation 153 The Formative Evaluation Approach (Under the Troubleshooting Strategy) 154 Timeliness and Relevancy 155 Research Methods 155 Steps in Applying Formative Evaluation 156 Four Types of Formative Evaluation 158 Formative Evaluation Results: Use With Caution 164 The Program Review/Development Meeting (Under the Troubleshooting Strategy) 165 Program Review/Development Meeting Principles and Procedures 166 Program Review/Development Meeting Advantages and Disadvantages 167 Example of a Program Review/Development Meeting 169 Bilateral Empowerment Evaluation (Under the Development Partnership Strategy) 171 Evaluation Process 172 Example of Bilateral Empowerment Evaluation 172 The Evaluator’s Role 173 Pros and Cons of Bilateral Empowerment Evaluation 174 Questions for Reflection 174
  • 15. Chapter 7. Assessing Implementation in the Mature Implementation Stage 176 Constructive Process Evaluation and Its Application 177 Modifying or Clarifying a Program Scope and Action Plan 177 Troubleshooting Implementation Problems 179 Conclusive Process Evaluation and Its Applications 180 How to Design a Conclusive Process Evaluation That Fits Stakeholders’ Needs 181 Approaches of Conclusive Process Evaluation 182 Intervention Fidelity Evaluation 183 Referral Fidelity Evaluation 184 Service Delivery Fidelity Evaluation 185 Target Population Fidelity Evaluation 186 Fidelity Versus “Reinvention” in Conclusive Process Evaluation 189 Hybrid Process Evaluation: Theory-Driven Process Evaluation 190 Examples of Theory-Driven Process Evaluation 191 Theory-Driven Process Evaluation and Unintended Effects 198 Questions for Reflection 199 PART IV: Program Monitoring and Outcome Evaluation 201 Chapter 8. Program Monitoring and the Development of a Monitoring System 203 What Is Program Monitoring? 203 Process Monitoring 204 Uses of Process-Monitoring Data 205 Process Monitoring Versus Process Evaluation 205 Outcome Monitoring 206 Identification of Goals 206 Outcome Measures and Data Collection 207 Outcome Monitoring Versus Outcome Evaluation 207 Program-Monitoring Systems Within Organizations 208 Program-Monitoring System Elements 209 Developing a Program-Monitoring System 210 An Example of Developing a Program Monitoring/Evaluation System 211 Questions for Reflection 229
  • 16. Chapter 9. Constructive Outcome Evaluations 230 Constructive Outcome Evaluation 230 SMART Goals 231 Specific 232 Measurable 232 Attainable 232 Relevant 232 Time-Bound 233 Putting SMART Characteristics Together in Goals 233 Evaluability Assessment 234 Step 1: Involve the Intended Users of Evaluation Information 235 Step 2: Clarify the Intended Program 235 Step 3: Explore the Program’s Reality 236 Step 4: Reach Agreement on Any Needed Program Changes 236 Step 5: Explore Alternative Evaluation Designs 236 Step 6: Agree on the Evaluation’s Priority and How Information From the Evaluation Will Be Used 236 Plausibility Assessment/Consensus-Building Approach 237 Potential Problems of Evaluation That Are Based Mainly on Official Goals 237 Plausibility Assessment/Consensus-Building Approach 239 A Preview of Conclusive Outcome Evaluation: Selecting an Appropriate Approach 245 Questions for Reflection 246 Chapter 10. The Experimentation Evaluation Approach to Outcome Evaluation 247 The Foundation of the Experimentation Approach to Outcome Evaluation 247 The Distinction Between Internal Validity and External Validity in the Campbellian Validity Typology 248 Threats to Internal Validity 249 Research Designs for Ruling Out Threats to Internal Validity 250 Experimental Designs 251 Pre-Experimental Designs 253 Quasi-Experimental Designs 256 Questions for Reflection 258
  • 17. Chapter 11. The Holistic Effectuality Evaluation Approach to Outcome Evaluation 260 Ongoing Debates Over the Experimentation Evaluation Approach 260 Efficacy Evaluation Versus Effectiveness Evaluation 263 Relationships Among the Experimentation Evaluation Approach and the Campbellian Validity Typology 264 The Holistic Effectuality Evaluation Approach 266 The Experimentation Evaluation Approach’s Conceptualization of Outcome Evaluation 267 The Holistic Effectuality Approach’s Conceptualization of Outcome Evaluation 267 Constructive Assessment and Conclusive Assessment: Theory and Methodology 269 Constructive Assessment 270 Conclusive Assessment 275 The Zumba Weight Loss Project 275 Why Adjuvants Are Needed for Real-World Programs 281 Types of Adjuvants and Threats to Internal Validity 282 Methodology for Real-World Outcome Evaluation 284 Assessing the Joint Effects of an Intervention and Its Adjuvants 284 Eliminating Potential Biases 285 Research Steps for Assessing Real-World Effects 286 Inquiring Into the Process of Contextualizing an Intervention in the Real-World Setting 286 Using a Relatively Unobtrusive Quantitative Design to Address Biases and Assess Change 288 Using an Auxiliary Design to Triangulate Evidence 289 Replication of the Mesological Intervention (Optional) 291 Example of a Real-World Outcome Evaluation 291 Checklist for Ranking Real-World Evaluations 296 Usefulness of the Holistic Effectuality Evaluation Approach 298 Providing Theory and Methodology for Real-World Outcome Evaluation 298 Providing Insight Into the Relationship Between Adjuvants and Internal Validity 298 Inspiring Evaluators to Develop Indigenous Evaluation Theories and Methodologies 299 The Experimentation Evaluation Approach Versus the Holistic Effectuality Evaluation Approach 299
  • 18. Pure Independent Effects 300 Real-World Joint Effects 300 Building Evidence From the Ground Up 301 Questions for Reflection 302 Chapter 12. The Theory-Driven Approach to Outcome Evaluation 304 Clarifying Stakeholders’ Implicit Theory 306 Making Stakeholder Theory Explicit 306 Building Consensus Among Stakeholders Regarding Program Theory 309 Guidelines for Conducting Theory-Driven Outcome Evaluation 309 Types of Theory-Driven Outcome Evaluation 310 The Intervening Mechanism Evaluation Approach 312 Two Models of the Intervening Mechanism Evaluation Approach 313 Some Theoretical Bases of the Intervening Mechanism Evaluation 318 When to Use an Intervening Mechanism Evaluation Approach 319 The Moderating Mechanism Evaluation Approach 321 Constructing Moderating Mechanism Evaluation Models 322 Examples of Moderating Mechanism Evaluation 323 Advanced Moderating Mechanism Models 324 When to Use a Moderating Mechanism Evaluation Approach 327 The Integrative Process/Outcome Evaluation Approach 327 Research Methods and Strategies Associated With Integrative Process/Outcome Evaluation 329 Examples of Integrative Process/Outcome Evaluation 329 Theory-Driven Outcome Evaluation and Unintended Effects 333 Formal Specification of Possible Unintended Effects 334 Field Study Detection of Unintended Effects of Implementation 335 A Reply to Criticisms of Theory-Driven Outcome Evaluation 336 Questions for Reflection 338 Part V: Advanced Issues in Program Evaluation 341 Chapter 13. What to Do if Your Logic Model Does Not Work as Well as Expected 343 A Diversity Enhancement Project 344 Description of the Project 344
  • 19. Applying the Logic Model 345 Applying the Action Model/Change Model Schema 345 A Community Health Initiative 350 Description of the Project 350 Applying the Logic Model 352 Applying the Action Model/Change Model Schema 355 A Guide to Productively Applying the Logic Model and the Action Model/Change Model Schema 361 System Change and Evaluation in the Future 362 Questions for Reflection 363 Chapter 14. Formal Theories Versus Stakeholder Theories in Interventions: Relative Strengths and Limitations 365 Formal Theory Versus Stakeholder-Implicit Theory as a Basis for Intervention Programs 365 Intervention Programs Based on Formal Theory 365 Programs Based on Stakeholder Theory 366 Views on the Relative Value of Formal Theory-Based Interventions and Stakeholder Theory-Based Interventions 369 Formal Theory Versus Stakeholder Theory: A Case Study 371 Program Theory Underlying the Anti–Secondhand Smoking Program 371 Action Model 373 Outcome Evaluation Design and Change Model 376 Process Evaluation Design 377 Evaluation Findings 377 Results of Process Evaluation 377 Outcome Evaluation Results 378 Relative Strengths and Limitations of Formal Theory-Based Intervention and Stakeholder Theory-Based Intervention 379 Theoretical Sophistication and Prior Evidence 379 Efforts to Clarify the Change Model and Action Model in Program Theory 380 Efficacious Evidence Versus Real-World Effectiveness 380 Viability 381 Action Theory Success and Conceptual Theory Success 382 Lessons Learned From the Case Study 383 Questions for Reflection 387
  • 20. Chapter 15. Evaluation and Dissemination: Top-Down Approach Versus Bottom-Up Approach 388 The Top-Down Approach to Transitioning From Evaluation to Dissemination 388 Lessons Learned From Applying the Top-Down Approach to Program Evaluation 389 Integrative Cogency Model: The Integrated Evaluation Perspective 394 Effectual Cogency 395 Viable Cogency 396 Transferable Cogency 398 Evaluation Approaches Related to the Integrative Cogency Model 398 Effectuality Evaluation 398 Viability Evaluation 399 Transferability Evaluation 399 The Bottom-Up Approach to Transitioning From Evaluation to Dissemination 400 The Bottom-Up Approach 400 The Bottom-Up Approach and Social Betterment/Health Promotion Programs 401 Types of Intervention for the Bottom-Up Approach 403 The Current Version of Evidence-Based Interventions: Limitations and Strategies to Address Them 403 The Integrated Evaluation Perspective on Concurrent Cogency Approaches 406 Focusing on Effectual Cogency 407 Focusing on Viable Cogency 407 Optimizing Approach 408 The Usefulness of the Bottom-Up Approach and the Integrative Cogency Model 408 Questions for Reflection 410 References 412 Index 426
  • 21. Preface I have been practicing program evaluation for a few decades. My practice has greatly benefited from conventional evaluation theories and approaches. However, on many occasions, I have also experienced conventional evaluation theories and approaches that do not work as well as they are supposed to. I have been contemplating and working on how to expand them or develop alternative theories and approaches that will better serve evaluation in the future. I planned to discuss my experiences and lessons learned from these efforts in the second edition of Practical Program Evaluation so that evaluators, new or seasoned, would not only learn both traditional and cutting-edge concepts but also have opportunities to participate in further advancing program evaluation. However, this plan has frequently been stymied. One reason is that the more I study the issues, the more complicated they become. I sometimes felt as though I was constantly banging my head against the proverbial wall. Luckily, I found I was not the only person having these frustrations and struggling with these problems. The following friends and colleagues have provided timely encouragement and advice that have been crucial to my finishing the book: Thomas Chapel, Amy DeGroff, Stewart Donaldson, Jennifer Greene, Brian Lien, Lorine Spencer, Jonathan Morell, Craig Thomas, Nannette Turner, and Jennifer Urban. I am greatly indebted to them for their support of the project. I am also grateful for the valuable feedback from the following reviewers: Darnell J. Bradley, Cardinal Stritch University; C. W. Cowles, Central Michigan University; and Mario A. Rivera, University of New Mexico. Any shortcomings of this book are entirely my own. Furthermore, the book was also frequently disrupted by other, more pressing tasks. Helen Salmon, my SAGE editor, issued gentle ongoing reminders and patiently checked on my progress every step of the way.
Without her persistent nudging, I would not have been able to meet the deadline. I also appreciate my research assistants, Joanna Hill and Mauricia Barnett, for their help in preparing questions for reflection and the tables that appear in the book. With so much time and effort spent, it is a great joy for me to see this book reach fruition.
  • 22. Special Features of the Book This book is about program evaluation in action, and to that end it does the following: 1. Provides a comprehensive evaluation typology that facilitates the systematic identification of stakeholders’ needs and the selection of the evaluation options best suited to meet those needs. Almost always, program evaluation is initiated to meet the particular evaluation needs of a program’s stakeholders. If a program evaluation is to be useful to those stakeholders, it is their expectations that evaluators must keep in mind when designing the evaluation. The precise communication and comprehension of stakeholder expectations is crucial; to facilitate the communication process, this book presents a comprehensive evaluation typology for the effective identification of evaluation needs. Within this typology, the book provides a variety of evaluation approaches suitable across a program’s life cycle—from program planning to initial implementation, mature implementation, and outcome achievement—to enrich the evaluator’s toolbox. Once the stakeholders’ expectations are identified, evaluators must select a strategy for addressing each evaluation need. Many evaluation options are available. The book discusses them, exploring the pros and cons of each and acknowledging that trade-offs sometimes must be made. Furthermore, it suggests practical principles that can guide evaluators to make the best choices in the evaluation situations they are likely to encounter. 2. Introduces both conventional and cutting-edge evaluation perspectives and approaches. The core of program evaluation is its body of concepts, theories, and methods. It provides evaluators with needed principles, strategies, and tools for conducting evaluations. As will be demonstrated in the book, cutting-edge evaluation approaches have been developed to further advance program evaluation by thinking outside the proverbial box.
Evaluators can do better evaluations if they are familiar and competent with both conventional and innovative evaluation perspectives and approaches. This book systematically introduces the range of options and discusses the conditions under which they can be fruitfully applied.
  • 23. 3. Puts each approach into action. Using illustrative examples from the field, the book details the methods and procedures involved in using various evaluation options. How does the program evaluator carry out an evaluation so as to meet real evaluation needs? Here, practical approaches are discussed—yet this book avoids becoming a “cookbook.” The principles and strategies of evaluation that it presents are backed by theoretical justifications, which are also explained. This context, it is hoped, fosters the latitude, knowledge, and flexibility with which program evaluators can design suitable evaluation models for a particular evaluation project and better serve stakeholders’ needs.
  • 24. About the Author Huey T. Chen is Professor of the Department of Public Health and Director of the Center for Evaluation and Applied Research in the College of Health Professions at Mercer University. He previously served as branch chief and senior evaluation scientist at the Centers for Disease Control and Prevention (CDC), as well as Professor at the University of Alabama at Birmingham. Dr. Chen has worked with community organizations, health-related agencies, government agencies, and educational institutions. He has conducted both large-scale and small-scale evaluations in the United States and internationally, including evaluating a drug abuse treatment program and a youth service program in Ohio, a carbon monoxide ordinance in North Carolina, a community health initiative in New Jersey, a juvenile delinquency prevention and treatment policy in Taiwan, and an HIV prevention and care initiative in China. He has written extensively on program theory, theory-driven evaluation, the bottom-up evaluation approach, and the integrated evaluation perspective. In addition to publishing over 70 articles in peer-reviewed journals, he is the author of several evaluation books. His book Theory-Driven Evaluations (1990, SAGE) is seen as one of the landmarks in program evaluation. His book Practical Program Evaluation: Theory-Driven Evaluation and the Integrated Evaluation Perspective, Second Edition (2015, SAGE) introduces cutting-edge evaluation approaches and illustrates the benefits of thinking outside the proverbial box. Dr. Chen serves on the editorial advisory boards of Evaluation and Program Planning and is a winner of the American Evaluation Association’s Lazarsfeld Award for Evaluation Theory and of the Senior Biomedical Service Award from the CDC for his evaluation work.
  • 26. Part I Introduction The first three chapters of this book, which comprise Part I, provide general information about the theoretical foundations and applications of program evaluation principles. Basic ideas are introduced, and a conceptual framework is presented. The first chapter explains the purpose of the book and discusses the nature, characteristics, and strategies of program evaluation. In Chapter 2, program evaluators will find a systematic typology of the various evaluation approaches one can choose among when faced with particular evaluation needs. Chapter 3 introduces the concepts of logic models and program theory, which underlie many of the guidelines found throughout the book.
  • 28. Chapter 1. Fundamentals of Program Evaluation The programs that evaluators can expect to assess have different names, such as treatment program, action program, or intervention program. These programs come from different substantive areas, such as health promotion and care, education, criminal justice, welfare, job training, community development, and poverty relief. Nevertheless, they all have in common organized efforts to enhance human well-being—whether by preventing disease, reducing poverty, reducing crime, or teaching knowledge and skills. For convenience, programs and policies of any type are usually referred to in this book as “intervention programs” or simply “programs.” An intervention program intends to change individuals’ or groups’ knowledge, attitudes, or behaviors in a community or society. Sometimes, an intervention program aims at changing the entire population of a community; this kind of program is called a population-based intervention program. The Nature of Intervention Programs and Evaluation: A Systems View The terminology of systems theory (see, e.g., Bertalanffy, 1968; Ryan & Bohman, 1998) provides a useful means of illustrating how an intervention program works as an open system, as well as how program evaluation serves the program. In a general sense, as an open system an intervention program consists of five components (input, transformation, outputs, environment, and feedback), as illustrated in Figure 1.1.
  • 29. Figure 1.1. A Systems View of a Program (components: Input, Transformation, Output, Environment, Feedback) Inputs. Inputs are resources the program takes in from the environment. They may include funding, technology, equipment, facilities, personnel, and clients. Inputs form and sustain a program, but they cannot work effectively without systematic organization. Usually, a program requires an implementing organization that can secure and manage its inputs. Transformation. A program converts inputs into outputs through transformation. This process, which begins with the initial implementation of the treatment/intervention prescribed by a program, can be described as the stage during which implementers provide services to clients. For example, the implementation of a new curriculum in a school may mean the process of teachers teaching students new subject material in accordance with existing instructional rules and administrative guidelines. Transformation also includes those sequential events necessary to achieve desirable outputs. For example, to increase students’ math and reading scores, an education program may need to first boost students’ motivation to learn. Outputs. These are the results of transformation. One crucial output is the attainment of the program’s goals, which justifies the existence of the program. For example, an output of a treatment program directed at individuals who engage in spousal abuse is the end of the abuse. Environment. The environment consists of any factors that, despite lying outside a program’s boundaries, can nevertheless either foster or constrain that program’s implementation. Such factors may include social norms, political structures, the economy, funding agencies, interest groups, and concerned
citizens. Because an intervention program is an open system, it depends on the environment for its inputs: clients, personnel, money, and so on. Furthermore, the continuation of a program often depends on how the general environment reacts to program outputs. Are the outputs valuable? Are they acceptable? For example, if the staff of a day care program is suspected of abusing children, the environment would find that output unacceptable. Parents would immediately remove their children from the program, law enforcement might press criminal charges, and the community might boycott the day care center. Finally, the effectiveness of an open system, such as an intervention program, is influenced by external factors such as cultural norms and economic, social, and political conditions. A contrasting system may be illustrative: In a biological system, the use of a medicine to cure an illness is unlikely to be directly influenced by external factors such as race, culture, social norms, or poverty.

Feedback. So that decision makers can maintain success and correct any problems, an open system requires information about inputs and outputs, transformation, and the environment's responses to these components. This feedback is the basis of program evaluation. Decision makers need information to gauge whether inputs are adequate and organized, interventions are implemented appropriately, target groups are being reached, and clients are receiving quality services. Feedback is also critical to evaluating whether outputs are in alignment with the program's goals and are meeting the expectations of stakeholders. Stakeholders are people who have a vested interest in a program and are likely to be affected by evaluation results; they include funding agencies, decision makers, clients, program managers, and staff. Without feedback, a system is bound to deteriorate and eventually die.
Insightful program evaluation helps to both sustain a program and prevent it from failing. The action of feedback within the system is indicated by the dotted lines in Figure 1.1.

To survive and thrive within an open system, a program must perform at least two major functions. First, internally, it must ensure the smooth transformation of inputs into desirable outcomes. For example, an education program would experience negative side effects if faced with disruptions like high staff turnover, excessive student absenteeism, or insufficient textbooks. Second, externally, a program must continuously interact with its environment in order to obtain the resources and support necessary for its survival. That same education program would become quite vulnerable if support from parents and school administrators disappeared. Thus, because programs are subject to the influence of their environment, every program is an open system.

The characteristics of an open system can also be identified in any given policy, which is a concept closely related to that of a program. Although policies may seem grander than programs—in terms of
the envisioned magnitude of an intervention, the number of people affected, and the legislative process—the principles and issues this book addresses are relevant to both. Throughout the rest of the book, the word program may be understood to mean program or policy.

Based upon the above discussion, this book defines program evaluation as the process of systematically gathering empirical data and contextual information about an intervention program—specifically, answers to what, who, how, whether, and why questions that will assist in assessing a program's planning, implementation, and/or effectiveness. This definition suggests many potential questions for evaluators to ask during an evaluation: The "what" questions include those such as, what are the intervention, outcomes, and other major components? The "who" questions might be, who are the implementers and who are the target clients? The "how" questions might include, how is the program implemented? The "whether" questions might ask whether the program plan is sound, the implementation adequate, and the intervention effective. And the "why" questions could be, why does the program work or not work? One of the essential tasks for evaluators is to figure out which questions are important and interesting to stakeholders and which evaluation approaches are available for evaluators to use in answering the questions. These topics will be systematically discussed in Chapter 2. The purpose of program evaluation is to make the program accountable to its funding agencies, decision makers, or other stakeholders and to enable program management and implementers to improve the program's delivery of acceptable outcomes.

Classic Evaluation Concepts, Theories, and Methodologies: Contributions and Beyond

Program evaluation is a young applied science; it began developing as a discipline only in the 1960s.
Its basic concepts, theories, and methodologies have been developed by a number of pioneers (Alkin, 2013; Shadish, Cook, & Leviton, 1991). Their ideas, which are foundational knowledge for evaluators, guide the design and conduct of evaluations. These concepts are commonly introduced to readers in two ways. The conventional way is to introduce classic concepts, theories, and methodologies exactly as proposed by these pioneers. Most major evaluation textbooks use this popular approach. This book, however, not only introduces these classic concepts, theories, and methodologies but also demonstrates how to use them as a foundation for formulating additional evaluation approaches. Readers can not only learn from evaluation pioneers’ contributions but also expand or extend their work, informed by lessons learned from experience or new developments in program evaluation. However, there is a potential drawback to taking this path. It
requires discussing the strengths and limitations of the work of the field's pioneers. Such critiques may be regarded as intended to diminish or discredit this earlier work. It is important to note that the author has greatly benefited from the classic works in the field's literature and is very grateful for the contributions of those who developed program evaluation as a discipline. Moreover, the author believes that these pioneers would be delighted to see future evaluators follow in their footsteps and use their accomplishments as a basis for exploring new territory. In fact, the seminal authors in the field would be very upset if they saw future evaluators still working with the same ideas, without making progress. It is in this spirit that the author critiques the literature of the field, hoping to inspire future evaluators to further advance program evaluation. Indeed, the extension or expansion of understanding is essential for advancing program evaluation. Readers will be stimulated to become independent thinkers and feel challenged to creatively apply evaluation knowledge in their work. Students and practitioners who read this book will gain insights from the discussions of different options, formulate their own views of the relative worth of these options, and perform better work as they go forward in their careers.

Evaluation Typologies

Stakeholders need two kinds of feedback from evaluation. The first kind is information they can use to improve a program. Evaluations can function as improvement-oriented assessments that help stakeholders understand whether a program is running smoothly, whether there are problems that need to be fixed, and how to make the program more efficient or more effective. The second kind of feedback evaluations can provide is an accountability-oriented assessment of whether or not a program has worked.
This information is essential for program managers and staff to fulfill their obligation to be accountable to various stakeholders. Different styles of evaluation have been developed to serve these two types of feedback. This section will first discuss Scriven's (1967) classic distinction between formative and summative evaluation and then introduce a broader evaluation typology.

The Distinction Between Formative and Summative Evaluation

Scriven (1967) made a crucial contribution to evaluation by introducing the distinction between formative and summative evaluation. According to Scriven, formative evaluation fosters improvement of ongoing activities. Summative evaluation, on the other hand, is used to assess whether results have met the stated goals.
Summative evaluation informs the go or no-go decision, that is, whether to continue or repeat a program or not. Scriven initially developed this distinction from his experience of curriculum assessment. He viewed the role of formative evaluation in relation to the ongoing improvement of the curriculum, while the role of summative evaluation serves administrators by assessing the entire finished curriculum. Scriven (1991a) provided more elaborated descriptions of the distinction. He defined formative evaluation as "evaluation designed, done, and intended to support the process of improvement, and normally commissioned or done, and delivered to someone who can make improvement" (p. 20). In the same article, he defined summative evaluation as "the rest of evaluation; in terms of intentions, it is evaluation done for, or by, any observers or decision makers (by contrast with developers) who need valuative conclusions for any other reasons besides development." The distinct purposes of these two kinds of evaluation have played an important role in the way that evaluators communicate evaluation results to stakeholders.

Scriven (1991a) indicated that the best illustration of the distinction between formative and summative evaluation is the analogy given by Robert Stake: "When the cook tastes the soup, that's formative evaluation; when the guest tastes it, that's summative evaluation" (Scriven, p. 19). The cook tastes the soup while it is cooking in case, for example, it needs more salt. Hence, formative evaluation happens in the early stages of a program so the program can be improved as needed. On the other hand, the guest tastes the soup after it has finished cooking and is served. The cook could use the guest's opinion to determine whether to serve the soup to other guests in the future. Hence, summative evaluation happens in the last stage of a program and emphasizes the program's outcome.
Scriven (1967) placed a high priority on summative evaluation. He argued that decision makers can use summative evaluation to eliminate ineffective programs and avoid wasting money. However, Cronbach (1982) disagreed with Scriven's view, arguing that program evaluation is most useful when it provides information that can be used to strengthen a program. He also implied that few evaluation results are used for making go or no-go decisions. Which type of evaluation has a higher priority is an important issue for evaluators, and the importance of this issue will be revisited later in this chapter.

Analysis of the Formative and Summative Distinction

The distinction between formative and summative evaluation provides an important framework evaluators can use to communicate ideas and develop approaches, and these concepts will continue to play an important role. However, Scriven (1991a) proposed that formative and summative evaluations are the two main evaluation types. In reality, there are other important evaluation types that are not
covered in this distinction. To avoid confusion and to lay a foundation for advancing the discipline, it is important to highlight these other evaluation types as well.

In Scriven's conceptualization, evaluation serves to improve a program only during earlier stages of the program (formative evaluation), while evaluation renders a final verdict at the outcome stage (summative evaluation). However, this conceptualization may not sufficiently cover many important evaluation activities (Chen, 1996). For example, evaluations at the early stage of the program do not need to be used to improve the program. Evaluators could administer summative evaluations during earlier phases of the program. Similarly, evaluations conducted at the outcome stage do not have to be summative. Evaluators could administer a formative evaluation at the outcome stage to gain information that would inform and improve future efforts.

Since Scriven regarded Robert Stake's soup-tasting analogy as the best way to illustrate the formative/summative distinction, let's use this analogy to illustrate that not all evaluations fit this description. According to Stake's analogy, when "the cook tastes the soup," that act represents formative evaluation. This concept of formative evaluation has some limitations. The cook does not always taste the soup for the purpose of improvement. The cook may taste the soup to determine whether the soup is good enough to serve to the guests at all, especially if it is a new recipe. Upon tasting the soup, she/he may feel it is good enough to serve to the guests; alternatively, she/he may decide that the soup is awful and not worth improving and simply chuck the soup and scratch it off the menu. In this case, the cook has not tasted the soup for the purpose of improvement but to reach a conclusion about including the soup or excluding it from the menu.
To give another illustration, a Chinese cook, who is a friend of mine, once tried to prepare a new and difficult dish, called Peking duck, for his restaurant. Tasting his product, he found that the skin of the duck was not as crispy as it was supposed to be, nor the meat as flavorful. Convinced that Peking duck was beyond his capability as a chef, he decided not to prepare the dish again. Again, the cook tasted the product to conduct a summative assessment rather than a formative one. The formative/summative distinction does not cover this kind of evaluation.

Returning to Stake's analogy, when "the guest tastes the soup," this is regarded as a summative evaluation since the guest provides a conclusive opinion of the soup. This concept of summative evaluation also has limitations. For example, the opinion of the guests is not always used solely to determine the soup's final merit. Indeed, a cook might well elicit opinions from the guests for the purpose of improving the soup in the future. In this case, this type of evaluation is also not covered by the formative/summative distinction. Stake's analogy, though compelling, excludes many evaluation activities. Thus, we need a broader conceptual typology so as to more comprehensively communicate or guide evaluation activities.
A Fundamental Evaluation Typology

To include more evaluation types in the language used to communicate and guide evaluation activities, this chapter proposes to extend Scriven's formative and summative distinction. The typology developed here is a reformulation of an early work by Chen (1996). This typology has two dimensions: program stages and evaluation functions. In terms of program stages, evaluation can focus on program process (such as program implementation) and/or on program outcome (such as the impact of the program on its clients). In terms of evaluation functions, evaluation can serve a constructive function (providing information for improving a program) and/or a conclusive function (judging the overall merit or worth of a program). A fundamental typology of evaluation can thus be developed by placing program stages and evaluation functions in a matrix, as shown in Figure 1.2.

Figure 1.2   Fundamental Evaluation Typology

                                          Program Stages
  Evaluation Functions    Process                                       Outcome
  Constructive            Constructive process evaluation               Constructive outcome evaluation
  Conclusive              Conclusive process evaluation                 Conclusive outcome evaluation
  Hybrid                  Conclusive/constructive process evaluation    Conclusive/constructive outcome evaluation
  (Hybrid types may also combine program stages, yielding other hybrid types of evaluation.)

SOURCE: Adapted from Chen (1996).
This typology consists of both basic evaluation types and hybrid evaluation types. The rest of this section will discuss the basic types first and then the hybrid types.

Basic Evaluation Types

The basic types of evaluation include constructive process evaluation, conclusive process evaluation, constructive outcome evaluation, and conclusive outcome evaluation.

Constructive Process Evaluation

Constructive process evaluation provides information about the relative strengths/weaknesses of the program's structure or implementation processes, with the purpose of program improvement. Constructive process evaluation usually does not provide an overall assessment of the success or failure of program implementation. For example, a constructive process evaluation of a family-planning program may indicate that more married couples can be persuaded to utilize birth control in an underdeveloped country if the service providers or counselors are local people, rather than outside health workers. This information does not provide a conclusive judgment of the merits of program implementation, but it is useful for improving the program. Decision makers and program designers can use the information to strengthen the program by training more local people to become service providers or counselors.

Conclusive Process Evaluation

This type of evaluation, which is frequently used, is conducted to judge the merits of the implementation process. Unlike constructive process evaluation, conclusive process evaluation attempts to judge whether the implementation of a program is a success or a failure, appropriate or inappropriate. A good example of conclusive process evaluation is an assessment of whether program services are being provided to the target population. If an educational program intended to serve disadvantaged children is found to serve middle-class children, the program would be considered an implementation failure.
Another good example of conclusive process evaluation is manufacturing quality control, in which a product is rejected if it fails to meet certain criteria. Vivid examples of conclusive process evaluation are the investigative reports seen on popular TV programs, such as 60 Minutes and 20/20. In these programs, reporters use hidden cameras to document
whether services delivered by such places as psychiatric hospitals, nursing homes, child care centers, restaurants, and auto repair shops are appropriate.

Constructive Outcome Evaluation

This type of evaluation identifies the relative strengths and/or weaknesses of program elements in terms of how they may affect program outcomes. This information can be useful for improving the degree to which a program is achieving its goals, but it does not provide an overall judgment of program effectiveness. For example, evaluators may facilitate a discussion among stakeholders to develop a set of measurable goals or to reach consensus about program goals. Again, such activity is useful for improving the program's chance of success, but it stops short of judging the overall effectiveness of the program. This type of evaluation will be discussed in detail in Chapter 9. In another example, a service agency may have two types of social workers: case managers, whose work is highly labor-intensive, and care managers, whose work is less labor-intensive. An evaluator can apply constructive outcome evaluation to determine which kind of social worker is more cost-effective for the agency.

Conclusive Outcome Evaluation

The purpose of a conclusive outcome evaluation is to provide an overall judgment of a program in terms of its merit or worth. Scriven's summative evaluation is synonymous with this category. A typical example of conclusive outcome evaluation is validity-focused outcome evaluation that determines whether changes in outcomes can be causally attributed to the program's intervention. This kind of evaluation is discussed in detail in Chapter 10.

The typology outlined above eliminates some of the difficulties found in the soup-tasting analogy. Formerly, when the cook tasted the soup for conclusive judgment purposes, this activity did not fit into the formative/summative distinction. However, it can now be classified as conclusive process evaluation.
Similarly, when the guest tastes the soup for improvement purposes, this action can now be classified as constructive outcome evaluation.

Furthermore, the typology clarifies the myth that process evaluation is always a kinder, gentler type of evaluation in which evaluators do not make tough conclusive judgments about the program. Constructive process evaluation may be kinder and gentler, but conclusive process evaluation is not necessarily so. For example, TV investigative reports that expose the wrongdoing in a psychiatric hospital, auto shop, restaurant, or day care center have resulted in changes in service delivery, the firing of managers and employees, and even the closing of
the agencies or businesses in question. In such cases, process evaluations were tougher than many outcome evaluations in terms of critical assessment and impact. Moreover, the basic typology disrupts the notion that outcome evaluation must always be carried out with a "macho" attitude so that it threatens program providers while failing to offer any information about the program. A conclusive outcome evaluation may provide information about whether a program has been successful or not, but a constructive outcome evaluation can provide useful information for enhancing the effectiveness of a program without threatening its existence. For example, the survival of a program is not threatened by a constructive outcome evaluation that indicates that program effectiveness could be improved by modifying some intervention elements or procedures.

Hybrid Evaluation Types

Another important contribution of this fundamental evaluation typology is to point out that evaluators can move beyond the basic evaluation types to conduct hybrid evaluations. As illustrated in Figure 1.2, a hybrid evaluation can combine evaluation functions, program stages, or both (Chen, 1996). This section introduces two types of hybrid evaluation that cross evaluation functions at a program stage.

Conclusive/Constructive Process Evaluation

Conclusive/constructive process evaluation serves both accountability and program improvement functions. A good example is evaluation carried out by the Occupational Safety and Health Administration (OSHA). OSHA inspectors may evaluate a factory to determine whether the factory passes a checklist of safety and health rules and regulations. The checklist is so specific, however, that these inspections can also be used for improvement. If a company fails the inspection, the inspector provides information concerning areas that need correction to satisfy safety standards.
Other regulatory agencies, such as the Environmental Protection Agency (EPA), perform a similar type of evaluation. In these kinds of evaluation, the overall quality of implementation is represented by a checklist of crucial elements. These elements provide exact clues for how to comply with governmental regulations.

A similar principle can be applied to assess the implementation of an intervention. As will be discussed in Chapter 7, a conclusive/constructive process evaluation can look into both overall quality and discrete program elements so as to provide information about the overall quality of implementation as well as specific areas for its future improvement.
Conclusive/Constructive Outcome Evaluation

Another hybrid evaluation type is the conclusive/constructive outcome evaluation. An excellent example of this kind of evaluation is real-world outcome evaluation, which will be discussed in great detail in Chapter 11. Another excellent example is theory-driven outcome evaluation. This type of evaluation elaborates the causal mechanisms underlying a program so that it examines not only whether the program has an impact but why. It also informs stakeholders as to which mechanisms influence program success or failure for program improvement purposes. Theory-driven outcome evaluation will be discussed in Chapters 12 and 14 of the book.

Applications of the Fundamental Evaluation Typology

The fundamental evaluation typology discussed here prevents evaluators from hewing rigidly to just two types of evaluation, that is, formative evaluation in the early stages of the program and summative evaluation toward the end. The fundamental evaluation typology provides evaluators and stakeholders many options for devising basic or hybrid types of evaluation at the implementation and outcome stages so as to best meet stakeholders' needs. However, the fundamental evaluation typology does not cover the planning stage. Thus, Chapter 2 will expand the fundamental evaluation typology into a comprehensive evaluation typology that covers a full program cycle from program planning to implementation to outcome. Then the rest of the book will provide concrete examples of these evaluation approaches and illustrate their applications across the entire life cycle of programs.

Internal Versus External Evaluators

Evaluators are usually classified into two categories: internal and external evaluators. Internal evaluators are employed by an organization and are responsible for evaluating the organization's own programs.
External evaluators are not employees of the organization but are experts hired from outside to evaluate the program. One of the major differences between the two is independence. Internal evaluators are part of the organization. They are familiar with the organizational culture and the programs to be evaluated. Like other employees, they share a stake in the success of the organization. External evaluators are not constrained by organizational management and relationships with staff members and are less invested in the program’s success. The general conditions that tend to favor either internal evaluation or external evaluation are summarized as follows:
Internal Evaluation

• Cost is a great concern.
• Internal capacity/resources are available.
• The evaluator's familiarity with the program is important.
• The program is straightforward.
• Evaluation is for the purpose of monitoring or is constructive in nature.

External Evaluation

• The cost of hiring an external evaluator is manageable.
• Independence and objectivity are essential.
• A program is large or complicated.
• The evaluation will focus on conclusive assessment or conclusive/constructive assessment.
• Comprehensive assessment or fresh insight is needed.

Politics, Social Justice, Evaluation Standards, and Ethics

One important distinction that separates program evaluation from research is that evaluations are carried out under political processes. The purpose of an evaluation is to evaluate an intervention program. However, the program is created by political processes. What kinds of programs are to be funded? Which programs need evaluation in a community? These decisions are made through bargaining and negotiation by key players such as politicians and advocacy groups. After a program is funded and evaluators are hired to evaluate it, the focus of the evaluation and the questions to be asked are determined, or largely influenced, by stakeholders. Cronbach and colleagues (1980) argued that a theory of evaluation must be as much a theory of political interaction as it is a theory of how to determine facts. Weiss (1998), too, indicated that evaluators must understand the political nature of evaluations and be aware of the obstacles and opportunities that can impinge upon evaluation efforts.

Since evaluation provides feedback to a program, evaluators may have high hopes that decision makers will use the findings as a basis for action. However, since program evaluation is part of political processes, evaluation findings are just one of many inputs that decision makers use.
Decision making is more often based on factors such as political support and community service needs than on evaluation findings. Since evaluations take place within a political and an organizational context, Chelimsky (1987) stated that evaluators are shifting their view of the role evaluations play, from reforming society to the more realistic aim of bringing the
best possible information to bear on a wide variety of policy questions. Also because evaluation takes place in a political environment, evaluators' communication skills are critical. Evaluators' qualifications should include research skills but should also emphasize group facilitation skills, political adroitness, managerial ability, and cultural sensitivity to multiple stakeholders.

In evaluation, stakeholders are those persons, groups, or organizations who have a vested interest in the evaluation results. Stakeholders often are not a homogeneous group but rather multiple groups with different interests, priorities, and degrees of power or influence. The number of stakeholder groups evaluators must communicate with often depends on the magnitude of an intervention program. In a small community-based program, key stakeholders may include the program director, staff, and clients. Stakeholder groups of a large federal program, on the other hand, could include federal agencies, state agencies, community-based organizations, university researchers, clients, program directors, program administrators, implementers, community advocates, computer experts, and so on.

Evaluators are usually hired by decision makers, and one of the major purposes of program evaluation is to provide information to decision makers that they will use to allocate funds or determine program activities. This contractual arrangement has the potential to bias evaluators toward the groups in power, that is, the decision makers who hire them or the stakeholders with whom the decision makers are most concerned. Critics such as House (1980) argued that evaluation should address social justice and specifically the needs and interests of the poor and powerless. However, Scriven (1997) and Chelimsky (1997) were concerned that when evaluators take on the role of program advocates, their evaluations' credibility will be tarnished. Social justice is a difficult issue in evaluation.
Participatory evaluation has the potential to alleviate some of the tension between serving social justice and serving decision makers. Including representatives of the various stakeholder groups in an evaluation has been proposed as a way to address some social justice issues. Generally, stakeholders participate in an evaluation for two purposes: practical and transformative (Greene, Lincoln, Mathison, Mertens, & Ryan, 1998). Practical participatory evaluation is meant to enhance evaluation relevance, ownership, and utilization. Transformative participatory evaluation seeks to empower community groups to democratize social change. Either way, participatory evaluation can provide evaluators with an opportunity to engage with different stakeholder groups and balance diverse views, increase buy-in from all stakeholder groups, and enhance their willingness to use evaluation results.

Another way of enhancing evaluators' credibility is to promote professional ethics. Like other professionals, evaluators must adhere to professional ethics and standards. The American Evaluation Association (2004) adopted the following ethical principles for evaluators to follow:
Chapter 1   Fundamentals of Program Evaluation

• Systematic inquiry. Evaluators conduct systematic, data-based inquiries.
• Competence. Evaluators provide competent performance to stakeholders.
• Integrity/honesty. Evaluators ensure the honesty and integrity of the entire evaluation process.
• Respect for people. Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other stakeholders.
• Responsibilities for general and public welfare. Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare. ("The Principles")

In addition, to ensure the credibility of evaluation, the Joint Committee on Standards for Educational Evaluation (Yarbrough, Shulha, Hopson, & Caruthers, 2011) has specified the following five core standards for evaluators to follow:

1. Utility standards. The utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.

2. Feasibility standards. The feasibility standards are intended to increase evaluation effectiveness and efficiency.

3. Propriety standards. The propriety standards support what is proper, fair, legal, right, and just in evaluations.

4. Accuracy standards. The accuracy standards are intended to increase the dependability and truthfulness of evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality.

5. Evaluation accountability standards. The evaluation accountability standards encourage adequate documentation of evaluations and a meta-evaluative perspective focused on improvement of and accountability for evaluation processes and products.
Evaluation Steps

The Centers for Disease Control and Prevention (CDC) published the Framework for Program Evaluation in Public Health (CDC, 1999) to help evaluators understand how to conduct evaluation based on evaluation standards. The document specified six steps that are useful guides to the evaluation of public health and social betterment programs:
Step 1: Engage Stakeholders deals with engaging individuals and organizations with an interest in the program in the evaluation process.

Step 2: Describe the Program involves defining the problem, formulating program goals and objectives, and developing a logic model showing how the program is supposed to work.

Step 3: Focus the Evaluation Design determines the type of evaluation to implement, identifies the resources needed to implement the evaluation, and develops evaluation questions.

Step 4: Gather Credible Evidence identifies how to answer the evaluation questions and develops an evaluation plan that will include, among other things, indicators, data sources, methods for collecting data, and the timeline.

Step 5: Justify Conclusions involves collecting, analyzing, and interpreting the evaluation data.

Step 6: Ensure Use and Share Lessons Learned identifies effective methods for sharing and using the evaluation results.

Evaluation Design and Its Components

When proposing an evaluation to stakeholders or organizations such as funding agencies, evaluators must describe the evaluation's purposes and methodology. An evaluation design needs to include at least five components:

1. Purposes of and Background Information About the Intervention Program. The first thing evaluators need to do when assessing an intervention program is to gain solid knowledge of the background of the program and document this understanding. Background information includes the purposes of the intervention program, the target population, the organizations responsible for implementing the program, key stakeholders of the program, implementation procedures, reasons for conducting the evaluation, the evaluation's timeline, the resources that will be used, and who will utilize the evaluation results.
Evaluators usually gather this information by reviewing existing documents, such as program reports and the grant application proposal, and by interviewing key stakeholders of the program. The background information serves as a preliminary basis for communication between evaluators and stakeholders about the program and the evaluation.
2. A Logic Model or Program Theory for Describing the Program. A sound evaluation requires a systematic and coherent description of the intervention program, which will serve as a basis for communication between evaluators and stakeholders and for the evaluation design. In reality, a systematic and coherent program description is often not available. It is unwise for evaluators to conduct a program evaluation without a mutual agreement with stakeholders about what the program looks like. In that situation, how could an evaluation provide useful information to stakeholders? Even worse, stakeholders could later easily claim that the evaluation failed to accomplish what they expected from it, if the results do not convey good news.

Program description is therefore an important step in evaluation. If a program does not have a systematic and coherent description, evaluators must facilitate stakeholders in developing one. This book discusses two options for describing a program: logic models and program theory. Logic models identify the major components of a program in terms of a set of categories such as inputs, activities, outputs, and outcomes. However, if evaluators and stakeholders are interested in looking into issues such as contextual factors and causal mechanisms, this book encourages the use of program theory. Both logic models and program theory will be discussed in Chapter 3.

3. Assertion of a Program's Stage of Development. As will be discussed in the next chapter, an intervention program's life cycle can generally be classified into one of four phases: planning, initial implementation, mature implementation, and outcome. During the planning phase, program designers work with partners to identify or develop an intervention and organize resources and activities to support the intervention.
After the planning phase, the program moves into the initial implementation phase. The major tasks here are training implementers, checking clients' acceptance, and ensuring appropriate implementation. After initial implementation, the program progresses to the mature implementation stage, where the major tasks include ensuring or maintaining the quality of implementation. During the outcome phase, the program is expected to have desirable impacts on clients. Different stages of a program require different evaluation approaches. For example, constructive evaluation is most useful during the initial implementation stage, when it can help with service delivery, but it is not appropriate for a formal assessment of a program's merits at the outcome stage. Evaluators and stakeholders have to agree on which stage a program is in to select an appropriate evaluation type and approach. Chapter 2 will provide detailed discussion of the nature of program stages and how they relate to different evaluation types and approaches.
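The logic model categories from component 2 (inputs, activities, outputs, outcomes) and the four life-cycle phases from component 3 can be sketched as simple data structures. This is a hypothetical illustration, not drawn from the book; the class names and the example program are invented:

```python
from dataclasses import dataclass, field

# The four life-cycle phases named in the text, in order.
STAGES = ["planning", "initial implementation",
          "mature implementation", "outcome"]

@dataclass
class LogicModel:
    """The four logic model categories described in component 2."""
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

@dataclass
class ProgramDescription:
    """A program description paired with its asserted stage (component 3)."""
    name: str
    logic_model: LogicModel
    stage: str = "planning"

    def advance_stage(self):
        # Move to the next phase; stay put once the outcome phase is reached.
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]

# Example: a hypothetical community-based tutoring program.
program = ProgramDescription(
    name="After-school tutoring",
    logic_model=LogicModel(
        inputs=["funding", "volunteer tutors"],
        activities=["weekly tutoring sessions"],
        outputs=["sessions delivered", "students served"],
        outcomes=["improved test scores"],
    ),
)
program.advance_stage()
print(program.stage)  # initial implementation
```

A structure like this makes explicit the agreement evaluators and stakeholders must reach: what the program's components are and which phase it is in, since the phase determines which evaluation types are appropriate.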
4. Evaluation Types, Approaches, and Methodology. This component is the core of the evaluation design. Using information about the evaluation's purposes and the logic model/program theory, evaluators and stakeholders need to determine what type of evaluation is suitable for correctly evaluating the program, whether one of the basic evaluation types (constructive process, conclusive process, constructive outcome, or conclusive outcome) or a hybrid type. Once the program stage and evaluation type are determined, evaluators can move on to select or design an evaluation approach or approaches. Chapter 2 will provide a comprehensive typology to guide evaluators in selecting evaluation types and approaches.

Determining the most appropriate evaluation approach is challenging and time-consuming. However, it ensures that all involved share a mutual understanding of why a particular evaluation type has been selected. Without it, stakeholders are likely to find that the evaluation's results address issues that are not of concern to them and/or are not useful to them. Stakeholders are often not trained in evaluation techniques, and they often do not express what they expect and need from an evaluation as clearly and precisely as evaluators could hope. Evaluators usually must double- or even triple-check with stakeholders to make sure everyone shares the same understanding and agrees on the evaluation's purposes up front.

5. Budget and Timeline. Regardless of stakeholders' and evaluators' visions of an ideal evaluation plan, the final evaluation design is bound to be shaped by the money and time allocated. For example, if stakeholders are interested in a rigorous assessment of an intervention program's outcomes but can provide only a small evaluation budget, the research method used is not likely to be a randomized controlled trial running over several years, which could cost millions of dollars.
Similarly, if the timeline is short, evaluators will likely use methods such as rapid assessment rather than conduct a thorough evaluation. When facilitating stakeholders in making an informed decision, it is highly preferable for evaluators to propose a few options and explain the information each option is likely to provide, as well as the price tag of each.

Major Challenges of Evaluation: Lessons Learned From Past Practice

Program evaluation has been practiced for several decades. Lessons learned from experience indicate that program evaluation faces a set of unique challenges not faced by other disciplines.
Judge a Program Not Only by Its Results but Also by Its Context

One important characteristic distinguishing program evaluation is its need, rarely shared by other disciplines, to use a holistic approach to assessment. The holistic approach includes contextual or transformation information when assessing the merit of a program. By comparison, product evaluation is more streamlined, perhaps focusing solely on the intrinsic value of its object. Products like televisions can be assessed according to their picture, sound, durability, price, and so on. In many situations, however, the value of a program may be contextual as well as intrinsic or inherent. That is, to adequately assess the merit of a program, both its intrinsic value and the context in which that value is assigned must be considered together.

For example, say an educational program has, according to a strictly performance-based evaluation, attained its goals (its intrinsic values). But in what context was the performance achieved? Perhaps the goal of higher student scores on standardized tests was attained by just "teaching students the tests." Does the program's performance still deserve loud applause? Probably not. Similarly, what about a case in which program success is due to the participation of a group of highly talented, well-paid teachers with ample resources and strong administrative support, but the evaluated program is intended for use in ordinary public schools? This "successful" program may not even be relevant from the viewpoint of the public schools and is not likely to solve any of their problems. Therefore, how a program achieved its goals is just as important as whether it achieved them.
For example, an outcome evaluation of a family-planning program in a developing country limited its focus to the relationship between program inputs and outputs; on this basis, it appeared possible to claim success for the program. A large drop in the fertility rate was indeed observed following the intervention. Transformation information, however, showed that such a claim was misleading. Although the drop in fertility was real, it had little to do with the intervention. A larger factor was that, following implementation, a local governor, seeking to impress his prime minister with the success of the program, ordered soldiers to seize men on the streets and take them to be sterilized. An evaluator with a less holistic approach might have declared that the goals of the program were attained, whereas other people's personal knowledge led them to condemn the program as inhumane. Lacking a holistic orientation, program evaluation may reach very misleading conclusions.
Evaluations Must Address Both Scientific and Stakeholder Credibility

Program evaluation is both a science and an art. Evaluators need to be capable of addressing both scientific and stakeholder credibility in an evaluation. The scientific credibility of a program evaluation reflects the extent to which the evaluation was governed by scientific principles. Typically, in scientific research, scientific credibility is all that matters: the more closely research is guided by scientific principles, the greater its credibility. However, as an applied science, program evaluation also exhibits varying degrees of stakeholder credibility. The stakeholder credibility of a program evaluation reflects the extent to which stakeholders believe the evaluation's design gives serious consideration to their views, concerns, and needs.

The ideal evaluation achieves both high scientific and high stakeholder credibility, but the two do not automatically go hand in hand. An evaluation can have high scientific credibility but little stakeholder credibility, as when evaluators follow all the scientific principles but set the focus and criteria of the evaluation without considering stakeholders' views and concerns. Such an evaluation will likely be dismissed by stakeholders, despite its scientific credibility, because it fails to reflect their intentions and needs. For example, there are good reasons for African-Americans to be skeptical of scientific experiments that lack community input, due to incidents such as the Tuskegee syphilis experiment (Jones, 1981/1993), in which researchers withheld effective treatment from African-American men suffering from syphilis so that the long-term effects of the disease could be documented. Conversely, an evaluation overwhelmed by the influence of stakeholders, such as program managers and implementers, may neglect its scientific credibility, resulting in suspect information.
One of the major challenges in evaluation is addressing the tension between scientific credibility and stakeholder credibility. Evaluation theorists such as Scriven (1997) argued that objectivity is essential in evaluation because, without it, evaluation has no credibility. On the other hand, Stake (1975) and Guba and Lincoln (1981) argued that evaluations must respond to stakeholders' views and needs in order to be useful. Both sides make good points, but objectivity and responsiveness are conflicting values. How should evaluators address this tension? One strategy is to prioritize, choosing one type of credibility to focus on. However, prioritization does not satisfactorily address the conflict between the two values. A better strategy, proposed and used in this book, is to strike a balance between the two. For example, evaluators might
pursue stakeholder credibility in the earliest phases of evaluation design but turn their attention toward scientific credibility later in the process. Initially, evaluators engage in a great deal of interaction and communication with a program's stakeholders for the specific purpose of understanding their views, concerns, and needs. Evaluators then incorporate the understanding they have acquired into the research focus, questions, and design, along with the necessary scientific principles. From this point on, to establish scientific credibility, the evaluators require autonomy to design and conduct the evaluation without interference from stakeholders. Stakeholders are usually receptive to this strategy, especially when evaluators explain the procedure to them at the beginning of the process. While stakeholders do not object to a program being evaluated, or dispute the evaluator's need to follow scientific procedures, they do expect the evaluation to be fair, relevant, and useful (Chen, 2001). As will be discussed in the rest of the book, the tension between scientific and stakeholder credibility arises in many situations. Such tension makes evaluation challenging, but resolving it is essential for advancing program evaluation.

Evaluations Must Provide Information That Helps Stakeholders Do Better

Earlier in this chapter, we learned that Scriven placed a higher priority on conclusive assessment than on program improvement, while Cronbach preferred the opposite. This is an important, but complicated, issue for evaluators. Many evaluators quickly learn that stakeholders are eager to figure out what to do next to make a program work better. Stakeholders find evaluations useful if they both offer conclusions about how well programs have worked and provide information that helps stakeholders figure out what must be done next to maintain, or even surpass, program goals.
Thus, the assessment of a program's performance or merit is only one part of program evaluation (or, alone, provides a very limited type of evaluation). To be most useful, program evaluation needs to equip stakeholders with knowledge of the program elements that are working well and those that are not. Program evaluation in general should facilitate stakeholders' search for appropriate actions to take in addressing problems and improving programs. There are important reasons why evaluations must move beyond narrow merit assessment into the determination of needed improvements. In the business world, information on product improvement is provided by engineering and market
research; likewise, in the world of intervention programs, the agency or organization overseeing an effort relies on program evaluation to help it continually guarantee or improve the quality of services provided.

Consider that intervention programs typically operate in the public sector. In the private sector, the existence or continuation of a product is usually determined by market mechanisms: through competition for consumers, a good product survives, and a bad product is forced from the market. However, the great majority of intervention programs do not encounter any market competition (Chen, 1990). Drug abusers in a community may find, for example, that only one treatment program is available to them. In the absence of an alternative, the treatment program is likely to continue whether or not its outcomes justify its existence. Furthermore, well-known programs with good intentions, such as Head Start, would not be discontinued based on an evaluation saying the programs were ineffectual; decision makers rarely use evaluation results alone to decide whether a program will go on.

Under these circumstances, an evaluation that simply assesses the merit of a program's past performance and cannot provide stakeholders with insights to help them take the next step is of limited value (Cronbach, 1982). In fact, many stakeholders look to a broad form of program evaluation to point out apparent problems, as well as strengths upon which to build. In general, to be responsive and useful to stakeholders, program evaluation should meet both assessment needs and improvement needs rather than confine itself solely to conclusive assessment.
Stakeholders need to know whether the program is reaching the target group, whether the treatment/intervention is being implemented as directed, whether the staff is providing adequate services, whether the clients are making a commitment to the program, and whether the environment seems to be helping the delivery of services. Any part of this information can be difficult for stakeholders to collect; thus, program evaluators must have the necessary training and skills to gather and synthesize it all systematically.

In a broad sense, therefore, merit assessment is a means, rather than the end, of program evaluation. Our vision of program evaluation should extend beyond the design of supremely rigorous and sophisticated assessments. It is important to grasp that evaluation's ultimate task is to produce useful information that can enhance the knowledge and technology we employ to solve social problems and improve the quality of our lives.

Furthermore, as discussed in the last section, constructive evaluation for program improvement and conclusive evaluation for merit assessment are not mutually exclusive categories. Evaluation does not have to focus on either program improvement or merit assessment alone. The introduction of hybrid evaluation types in this book provides options by which evaluation can address both issues.
Addressing the Challenges: Theory-Driven Evaluation and the Integrated Evaluation Perspective

To better address these challenges, this book applies the frameworks provided by the theory-driven evaluation approach and the integrated evaluation perspective.

Theory-Driven Evaluation Approach

The theory-driven evaluation approach requires evaluators to understand the assumptions stakeholders make (called program theory) when they develop and implement an intervention program. Based on stakeholders' program theory, evaluators design an evaluation that systematically examines how these assumptions operate in the real world. By doing so, they ensure that the evaluation addresses issues in which the stakeholders are interested. The usefulness of the theory-driven evaluation approach has been discussed intensively in the evaluation literature (e.g., Chen, 1990, 2005, 2012a, 2012b; Chen & Rossi, 1980, 1983a; Chen & Turner, 2012; Coryn, Noakes, Westine, & Schröter, 2011; Donaldson, 2007; Funnell & Rogers, 2011; Nkwake, 2013; Rossi, Lipsey, & Freeman, 2004; Weiss, 1998). The concept and application of program theory will be discussed in detail in Chapter 3.

It is important to know that theory-driven evaluation provides a sharp contrast to traditional method-driven evaluation. Method-driven evaluation views evaluation as a mainly atheoretical activity: evaluation is carried out by following the research steps of a chosen method, such as a randomized experiment, survey, case study, or focus group. Within this tradition, evaluation does not need any theory. If evaluators are familiar with the research steps of a particular method, they can apply the same steps and principles across different types of programs in different settings. To some degree, method-driven evaluation simplifies evaluation tasks.
However, because the focus of method-driven evaluation is mainly on methodological issues, it often does not capably address stakeholders' views and needs. The theory-driven evaluation approach argues that while research methods are important elements of an evaluation, an evaluation should not be dictated or driven by one particular method. Because theory-driven evaluation uses program theory as a conceptual framework for assessing program effectiveness, it provides information not only on whether an intervention is effective but also on how and why a program
is effective. In other words, it is capable of addressing the challenge discussed in the last section: the success of a program has to be judged not only by its results but also by its context. This approach is also useful for addressing another challenge: evaluation must be capable of providing information for stakeholders to do better. The theory-driven evaluation approach will be discussed intensively in Chapters 3, 7, 12, 13, and 14.

Integrated Evaluation Perspective

Program evaluation is challenging because it has to provide evaluative evidence for a program that meets two requirements. The first is that the evaluative evidence must be credible; that is, program evaluation has to generate enough credible evidence to gain a scientific reputation. This is called the scientific requirement. The second is that the evidence must respond to the stakeholders' views, needs, and practices so as to be useful. Stakeholders are the consumers of evaluation, and program evaluation has little reason to exist unless it is able to adequately serve their needs. This is called the stakeholder requirement.

Ideally, evaluations should meet both requirements, but in reality evaluators often find it difficult to do so. On the one hand, they must apply rigorous methods to produce credible evidence. On the other hand, evaluators often find it difficult to apply rigorous methods, such as randomized controlled trials (RCTs), to evaluate real-world programs given insufficient resources and short timelines. In many situations, administrative hindrances and ethical concerns add barriers to such an application. Furthermore, even if these barriers were removed and a rigorous method applied, stakeholders may feel that the focus of the evaluation is too narrow or too academic to be relevant or useful to them.
The reason for this disconnect is that stakeholders' views on community problems and how to solve them are quite different from the philosophy underlying conventional scientific methods: reductionism. Reductionism postulates that a program is stable and can be analytically reduced to a few core elements. If a program can be reduced to core components, such as intervention and outcome, then an adjustment can be implemented and desirable changes will follow. Given this view, the evaluators' main task is to rigorously assess whether the change produces predetermined outcomes. However, stakeholders' views on and experiences with social problems and addressing them in a community are more dynamic and complicated
than those assumed by reductionism. Their views can be characterized as follows:

1. An intervention program is implemented as a social system. In a social system, contextual factors in a community, such as culture, norms, social support, economic conditions, and characteristics of implementers and clients, are likely to influence program outcomes. As discussed at the beginning of this chapter, program interventions are open systems with respect to contextual factors, not closed systems like biological ones.

2. Health promotion/social betterment programs require clients, with the help of implementers, to change their values and habits in order to work. Unfortunately, people are notoriously resistant to changing their values and habits. For example, an education program may require children fond of playing video games to substantially cut down on game playing to make time for studying; these children may vastly prefer playing the latest zombie massacre game to studying. Victims of bullying in schools may be asked to start reporting bullying incidents to school authorities and parents; based on past experience, these victims may believe reporting such incidents is useless or even dangerous. Because an intervention requires changes, its demands may be highly challenging to both clients and implementers. Not only must program designers wrestle with this challenge when designing an effective intervention program, but evaluators must also take this reality into consideration when designing a useful evaluation.

Because of the above factors, stakeholders believe that they need to take a much broader approach to solving a community problem. An intervention is not a stand-alone entity; rather, it has to connect to contextual factors and/or change clients' values and habits to work.
Their broad view of community problem solving is inconsistent with traditional scientific methods, which focus on narrow issues such as assessing the causal relationships between an intervention and its outcomes. The inconsistency between stakeholders' views and reductionism's assumptions regarding community problems and interventions is partly why there is such a huge chasm between the academic and practice communities regarding interventions, as will be discussed in Chapter 15.

Stakeholders respect the value and reputation of scientific methods but view the information those methods provide as just one piece of a jigsaw puzzle they need to assemble. They need other pieces to complete the picture, and they hope evaluators can figure out ways to provide all, not just one, of those pieces. Stakeholders are concerned that, if evaluators focus too much
on the scientific piece, it will blind them or prevent them from simultaneously investigating other means of solving the puzzle. Stakeholders' views on community problem solving are related to ideas proposed by systems thinking (e.g., Meadows, 2008). According to systems thinking, a system is made up of diverse and interactive elements and must address environmental turbulence. Problem solving thus requires modifying groups of variables simultaneously.

The above analysis shows that evaluators face a dilemma in meeting the scientific requirement and the responsiveness requirement at the same time. An evaluation emphasizing the scientific requirement may sacrifice the responsiveness requirement, and vice versa. The dilemma has significant implications for evaluation practice, but it has not been intensively and systematically discussed in the literature. There are three general strategies evaluators use to address the dilemma:

Prioritizing the Scientific Requirement in Evaluation. The first strategy is to stress the scientific requirement by arguing that evaluation's utility relies on whether it can produce credible evidence. Following this strategy, evaluators must apply rigorous methods as best they can; issues related to the responsiveness requirement are addressed only when they do not compromise rigor. Currently, this strategy is the most popular one among evaluators (Chen, Donaldson, & Mark, 2011). It appeals particularly to evaluators who are strongly committed to scientific values and evidence-based interventions.

Prioritizing the Responsiveness Requirement in Evaluation. The second strategy is to put the emphasis on the responsiveness requirement. This strategy requires evaluators to use a participatory evaluation approach and qualitative methods to meet stakeholders' information needs (e.g., Cronbach, 1982; Stake, 1975).
This strategy is attractive to evaluators who view traditional scientific methods as too narrow and rigid to accommodate stakeholders' views and meet their informational needs.

Synthesizing the Scientific and Responsiveness Requirements in Evaluation. The third strategy is to synthesize the scientific and responsiveness requirements. This strategy does not prioritize either requirement and thus avoids maximizing one at the expense of the other. Evaluations following this strategy may not be able to provide highly rigorous evidence, but they can provide good-enough evidence while balancing the scientific and responsiveness requirements.
The first two strategies have merits. They are especially useful when there is a strong mandate for evaluation to be either highly rigorous or highly responsive. However, the author believes that, in many typical intervention programs, stakeholders are more likely to benefit from evaluations that use the synthesizing strategy. This book advocates this strategy and formally calls it the integrated evaluation perspective. Specifically, the integrated evaluation perspective urges evaluators to develop evaluation theories and approaches that synthetically integrate stakeholders' views and practices, thus acknowledging the dynamic nature of an intervention program in a community, with scientific principles and methods, thereby enhancing the usefulness of evaluation.

In spite of its conceptual appeal, the integrated evaluation perspective faces a challenge in developing specific evaluation theories and approaches to guide the work. It lacks an advantage enjoyed by the scientific prioritization strategy: advocates of that strategy can borrow scientific methods and models developed by more mature disciplines and apply them to evaluation. The integrated evaluation perspective cannot do so, because other disciplines do not face the kind of inconsistency between scientific and responsiveness requirements experienced in evaluation and thus do not need to deal with synthesizing issues. For example, in biomedical research, both researchers and physicians consistently demand rigorous evidence of a medicine's efficacy. Accordingly, biomedical research cannot offer evaluators clues or solutions for synthesizing the conflict between the scientific and responsiveness requirements.
The integrated evaluation perspective, therefore, requires evaluators to develop innovative, indigenous theories and approaches to synthesize the requirements unique to the discipline. This book contributes to the integrated evaluation perspective by introducing many innovative, indigenous theories and approaches evaluators can use to balance the scientific and responsiveness requirements. At the same time, this book does not neglect traditional theories and approaches promoted by the scientific prioritization or responsiveness prioritization strategies. Instead, the author introduces both traditional and innovative evaluation theories and approaches from all three strategies to enrich evaluators’ toolbox so they can apply them as needed. The nature and applications of the integrated evaluation perspective are illustrated in detail in Chapters 11, 12, 13, 14, and 15, but its spirit, and the principles it employs to develop indigenous concepts, theories, approaches, and methodologies, are manifested throughout the book.
The Professor of Topography directs the whole of the surveys and the execution of the Director Plan.

FIFTH SECTION.—TRACING OF THE WORKS OF ATTACK, AND ACTUAL EXECUTION IN FULL RELIEF OF CERTAIN WORKS.

The sub-lieutenants, divided into brigades, trace the works of the siege under the direction of the officers of the staff, and take part in the superintendence of the works executed in full relief when the exigencies of the service permit the chief of the Artillery Service and the Colonel of the Regiment of Engineers to place workmen at the disposal of the General Commandant of the School. Six days are appropriated to this work.

SIXTH SECTION.—WORK IN THE HALLS OF STUDY.

The work in the Halls of Study consists of:—
1st. A memoir on the sham siege, which must be approved by the General Commandant of the School.
2d. A sketch representing one of the works traced or executed in full relief.
These works in the Halls are performed during the intervals between the attendances devoted to out-of-door work. Two days are appropriated to the preparation of the memoir, and two to the execution of the sketch. This time is included in the eleven days allowed to the sham siege.

RECAPITULATION FOR THE ARTILLERY AND ENGINEERS.

Lectures and Conferences. (NL, number of lectures or conferences; credits for Lectures (L), Conferences (Cf), and Total (T); Q, number of series of questions.)

                                                     NL     L    Cf     T    Q
By the Professor of Military Art,                     2     3    ..     3
By the Professor of Topography,                       1    1½    ..    1½
By the Professor of Permanent Fortification,          2     3    ..     3
By the Professor of Artillery,                        2     3    ..     3
Conferences by the Chief of the Artillery Service,    4    ..     6     6
Conferences by the Chief of the Engineer Service,     4    ..     6     6
    Total,                                           15   10½    12   22½    2*

* One series of questions by the Chief of the Artillery Service, as to what relates to that arm, and one series by the Chief of the Engineer Service, as to what relates to that arm. A credit of 11 is assigned to each series of questions.

Works of Application. (D, drawings; M, memoirs; attendances out of doors of 4½ hours and of 8 hours; H, attendances in the Halls; C, credits.)

                                                D    M   4½h.   8h.    H    C
2nd Reconnaissance Plan (Memoir):
  Topographical Work,                          ..   ..    4     ..    ..   20*
  Itinerary and Sketch (Memoir),               ..   ..    ..    ..    ..   ..
Plan “Director,”                               ..   ..    ..    ..    1    5
Tracing of Lines,                              ..   ..    ..    1     ..   10†
Tracing of Works of Attack and of Defense,     ..   ..    6     ..    ..   25
Sketch,                                        1    ..    ..    ..    2    1‡
Memoir,                                        ..   1     ..    ..    2    2
    Total,                                     1    1     10    1     5    90

* Credits given by the Professor of Topography.
† Credits given by the Captains of the Staff, Chiefs of Brigades.
‡ Credits given by the Chiefs of the Service of the Artillery and Engineers.

XIII.—PROGRAMME OF THE COURSE ON THE VETERINARY ART.

FIRST PART.—INTERIOR OF THE HORSE.

Lecture 1.—Classification and nomenclature of the various matters which constitute the horse. Skeleton (head and body).
Lecture 2.—Skeleton (limbs). Mechanical importance of the skeleton. Nomenclature and use of the muscles. Cellular and fatty tissues, grease, skin. Insensible perspiration.
Lecture 3.—Functions for maintenance. Arteries and nerves. Animal heat.
Lecture 4.—On various functions.

SECOND PART.—EXTERIOR OF THE HORSE.

Lecture 5.—Proportions. Equilibrium. Description and importance of the natural beauties and defects of the head and region of the throat.
Lecture 6.—Description and importance of the other parts of the horse. Blemishes. Soft tumors.
Lecture 7.—Osseous tumors. Various accidents. Temperaments. Description of clothing, &c.
Lecture 8.—Data respecting horses.
Lecture 9.—To know the age. On various bad habits. Examination of the eyes; their diseases.
Lecture 10.—Defective paces, &c. Draught and pack horses. Mules.
Lecture 11.—Stud and remounts. Races.
Lecture 12.—Vicious horses, and different bits. Manner of bitting a horse. On grooms and punishment.

THIRD PART.—ON THE HEALTH OF THE HORSE.

Lecture 13.—Examination of the foot, and shoeing with the hot shoe.
Lecture 14.—Shoeing with the cold shoe. Different kinds of horse-shoe, &c.
Lecture 15.—On stables. Food. Rations.
Lecture 16.—Description and nomenclature of the saddle. Harness and pack. Various saddles.
Lecture 17.—On work and rest. Horse and mule on the road and in bivouac. On diseases and accidents.

Abstract of the course:—
  Interior of the horse,  4 lectures.
  Exterior,               6 lectures.
  Health,                 7 lectures.
In all, 17 lectures at 1½ hours. Total time, 25½ hours. Credits, 25.

The instruction on horseback can, under certain circumstances, be considered as connected with this course; and questions are asked during the time when the sub-lieutenants are not engaged in actual riding exercise. This instruction is described under the head of Practical Military Instruction; it comprises at the maximum 272 attendances, and its credit of influence is valued at 240.
ARTILLERY AND ENGINEERS’ REGIMENTAL SCHOOLS.

I. ARTILLERY REGIMENTAL SCHOOLS.

These are intended for the theoretical and practical instruction of officers, sous-officiers, and gunners. Each school is under the orders of the General of Brigade commanding the Artillery in the military division in which it is situated. Independent of the general officer, the school has the following staff:—
A Lieutenant-Colonel (associated assistant to the General).
A Professor of Sciences, applying more particularly to the Artillery.
A Professor of Fortification, of drawing, and of the construction of buildings.
Two Gardes of Artillery (one of the first, and the other of the second class).
There are, in addition, attached to each school the number of inferior officers (captains, lieutenants, or sous-lieutenants) required for carrying on the theoretical courses which are not placed under the direction of the professors. A captain of the first class, assisted by two first lieutenants, is the director of the park of the school. Another captain, also of the first class, but taken from the regiment of Pontooneers, has the direction of that portion of the bridge equipage necessary for the special instruction of this corps, as well as of the material of the artillery properly belonging to this instruction. The lieutenant-colonel, assistant to the general, fulfills, independent of every other detail of supervision with which he may be charged, the functions of ordonnateur secondaire in what concerns the expenses of the school and their propriety
  • 61. (justification.) He corresponds with the minister of war for this part of the service. The instruction is divided into theoretical and practical, and the annual course is divided into half-yearly periods, or into summer and winter instructions. The summer instruction commences, according to different localities, from the 1st of April to the 1st of May, and that of the winter from the 1st of October to the 1st of November. The winter and summer instruction is subdivided into school and regimental instruction. The school instruction comprehends all the theoretical and practical instruction common to the different corps which require the assistance of the particular means of the school, the employment of its professors, locality, and material, as that of the practical instruction in which the troops belonging to the different corps of the army are united to take part. The regimental instruction is that which exists in the interior of the regiments and the various bodies of the artillery. It is directed by the chiefs of these corps, who are responsible for it, with the means placed at their disposal, under the general surveillance of the commandant of the school. The special instruction of the Pontooneers not admitting of their following the same instruction as the other regiments of artillery, the chief of this corps directs the special instruction according to certain bases prescribed by the regulations. There are for the captains of artillery, each year during the winter half-year, six conferences for the purposes of considering and discussing projects for the organization of different equipages and armaments for the field service, and for attack and defense of places. In a building belonging to each school of artillery, under the name of the hotel of the school, are united the halls and establishments
necessary for the theoretical instruction of the officers and sous-officiers, such as halls for théorique drill and drawing, a library, depots of maps and plans, halls for machines, instruments, and models, &c. Each school is provided with a physical cabinet and a chemical laboratory. There is also a piece of ground, called a polygon, for exercising artillerymen in the manœuvers of cannon and other firearms of great range. Its extent is sufficient to furnish a range of 1,200 meters in length and 600 meters in breadth. Permanent and temporary batteries are established on this ground, and they serve not only for practice, but also to accustom the men to the construction of fascines, field batteries, &c.

The administration of each school, and the accounts relating to it, are directed by an administrative council, consisting of—
The General Officer commanding the Artillery (President).
The Colonels of the regiments of Artillery in the towns where two regiments of the Artillery are quartered, and in other towns, the Colonel and Lieutenant-Colonel of the regiment.
The Colonel of the regiment of Pontooneers in the town where the principal part of the corps may be stationed, and in any other town the Lieutenant-Colonel or the Major.
The Lieutenant-Colonel associated assistant with the General Commandant.
The functions of secretary of the council are intrusted to a garde of the first class. The functionaries of the corps of intendants fulfill, in connection with the administrative councils of the artillery schools, the same duties as are assigned by the regulations relating to the interior administration of bodies of troops. They exercise over the accounts, both of money and material of the said schools, the same control as over the administration connected with the military interests of the state.

II. ENGINEER REGIMENTAL SCHOOLS.
The colonel of each regiment has the superior direction of the instruction. The lieutenant-colonel directs and superintends, under his orders, the whole of the details of the regimental instruction. A major, selected from among the officers of this rank belonging to the état-major of this arm, directs and superintends, under the orders of the colonel, the whole of the details of the special instruction.

The complete instruction consists of—
General instruction, or that of the regiment, by which a man is made a soldier.
Special or school instruction, having for its object the training of the miner or sapper.
Each is separated into theoretical and practical instruction.

The theoretical instruction of the regiment comprehends the theories:—
On the exercises and manœuvers of infantry.
On the interior service.
On the service of the place.
On field service.
On the maintenance of arms.
On military administration.
On military penal legislation.

The practical instruction of the regiment comprises:—
The exercises and manœuvers of infantry.
Practice with the musket.
Military marches.
Fencing.

The teaching of these various duties is confided to officers, sous-officiers, and corporals of the regiments, as pointed out by the regulation and the orders of the colonel. The fencing school is organized in a similar manner to those of the infantry, and the military marches are also made in the same way as in those corps.

The special theoretical instruction consists of:—
Primary instruction.
Mathematics.
Drawing.
Geography.
Military history of France.
Fortification and the various branches of the engineering work.

Three civil professors (appointed by competition) are attached to each regimental school, for the special theoretical instruction as regards the primary instruction, drawing, and mathematics. The courses are distributed and taught in the following manner:

By the Professor of Primary Instruction:
Primary instruction, for the Soldiers.
French grammar, for the Corporals.
Book-keeping, for the Sous-Officiers.

By the Professor of Mathematics:
Elementary arithmetic, for the Corporals.
Complete arithmetic and elementary geometry, for the Serjeants.
Complete geometry and trigonometry, for the Serjeant-Majors.
Surveys, for the Sous-Officiers.
Special mathematics, for the Officers.

By the Professor of Drawing (who is also charged with completing the collection of models which relate to it):
Drawing, for the Corporals and Sous-Officiers.

By the Officers of the regiment, named by the Colonel, independently of those appointed by the regulations:
The elements of fortification, for the Serjeant-Majors.
Construction, and theories on practical schools, for the Sous-Officiers.
Permanent fortification, the attack and defense of places, mines, bridges, ovens, and topography, for the Officers.
Geography and the military history of France, for the Sous-Officiers.

At the end of each course the colonel of the regiment causes a general examination to be made in his presence of the whole of the
men who have followed this course, and has a list made out in the order of merit, with notes of the capacity and aptitude of each. These lists are consulted in the formation of tables of promotion, and placed with the said tables before the inspector-general. Each captain and lieutenant is obliged to give in at least a single treatise on five different projects, consisting of a memoir discussing a project, or the journal of a siege, with drawings of the whole, and of details in sufficient number to render them perfectly intelligible.

The special practical instruction is composed of seven distinct schools, relating to:—
Field fortification.
Saps.
Mines and fireworks.
Bridges.
Ovens.
Topography.
Gymnastics.
And they comprehend, in addition, sham sieges and underground war. Each of these seven schools is taught in accordance with the special instructions annexed to the regulation, which, however, are not published.

Winter is more especially devoted to the course of special theoretical instruction, which commences on the 1st November and usually finishes on the 15th March; the course of special practical instruction is carried on during the summer, from the 15th March to the 15th September. The second fortnight of September and the month of October are devoted to sham sieges and underground war, to the leveling of the works executed, and to the arrangement of magazines.

SCHOOL FOR INFANTRY AND CAVALRY AT ST. CYR.

GENERAL DESCRIPTION. CONDITIONS OF ADMISSION. STAFF.
  • 66. It will have been seen in the accounts of the Polytechnic School and the School of Application at Metz, in what manner young men destined for commissions in the artillery and engineers receive their previous education, and under what conditions appointments as officers in these two services are made in France. The regulations for the infantry, the cavalry, and the marines are of the same description. There are in these also the same two ways of obtaining a commission. One, and in these services the more usual one, is to rise from the ranks. The other is to pass successfully through the school at St. Cyr. Young men who do not enter as privates prove their fitness for the rank of officers by going through the course of instruction given, and by passing the examinations conducted in this, the principal, and putting aside the School of Application at Metz, the one Special Military School of the country. The earliest foundation of the kind in France was the Ecole Royale Militaire of 1751. Like most other similar institutions of the time, it was intended for the young nobility. No one was to be admitted who could not prove four generations of Noblesse. The pupils were taught free of charge, and might enter at eight years old. Already, however, some marks of competition are to be discerned, as the best mathematicians were to be taken for the Artillery and Engineers. Buildings on the Plain of Grenelle (the same which still stand, occupying one end of the present Champs de Mars, and retaining, though only used as barracks, their ancient name,) were erected for the purpose. The school continued in this form till 1776, when it was dissolved (apparently owing to faults of discipline,) and replaced by ten Colleges, at Sorrèze, Brienne, Vendôme, and other places, all superintended by ecclesiastics. A new Ecole Royale Militaire, occupying the same buildings as the former, was added in 1777. 
This came to an end in 1787; and the ten colleges were suppressed under the Republic. A sort of Camp School on the plain of Sablons took their place, when the war had broken out, and lasted about a year under the name of the Ecole de Mars.
  • 67. Under the Consulate in 1800, the Prytanée Français was founded, consisting of four separate Colleges. The name was not long after changed to the Prytanée Militaire; and after some time the number was diminished, and La Flèche, which had in 1764 received the youngest pupils of the old Royal Military School, became the seat of the sole remaining establishment; which subsequently sunk to the proportions of a mere junior preparatory school, and became, in fine, the present establishment for military orphans, which still retains the title, and is called the Prytanée Militaire de la Flèche. A special Military School, in the meantime, had been set up at Fontainebleau in 1803, transferred in 1808 to St. Cyr, and thus taking the place of the Prytanée Militaire and of its predecessor, the original Ecole Royale Militaire, gradually assumed its present form. 15 The course of study lasts two years; the usual number of cadets in time of peace is five, or at the utmost six hundred; the admission is by competitive examination, open to all youths, French by birth or by naturalization, who on the first of January preceding their candidature were not less than sixteen and not more than twenty years old. To this examination are also admitted soldiers in the ranks between twenty and twenty-five years of age, who, at the date of its commencement, have been actually in service in their regiments for two years. The general conditions and formalities are the same as those already stated for the Polytechnic. It may be repeated that all the candidates, in accordance with a recent enactment, must have taken the usual degree which terminates the task at the lycées—the baccalaureate in sciences. 
Those who succeed in the examination and are admitted take an engagement to serve seven years either in the cavalry or infantry, and are thus under the obligation, if at the close of their two years’ stay at the school they are judged incompetent to receive a commission, to enter and serve as common soldiers. The two years of their stay at the school count as a part of their service. It is only
  • 68. in the special case of loss of time caused by illness, that permission is given to remain a third year. The ordinary payment is 60l. (1,500 francs) per annum. All whose inability to pay this amount is satisfactorily established, may claim, as at the Polytechnic, an allowance of the whole or of half of the expenses from the State, to which may be added an allowance for the whole or for a portion of the outfit (from 24l. to 28l.) These bourses or demi-bourses, with the trousseau, or demi-trousseau, have during the last few years been granted unsparingly. One-third of the 800 young men at the school in February 1856 were boursiers or demi-boursiers. Candidates admitted from the Orphan School of La Flèche, where the sons of officers wounded or killed in service receive a gratuitous education, are maintained in the same manner here. 16 It was the rule till lately that cadets appointed, on leaving St. Cyr, to the cavalry should be placed for two years at the Cavalry School at Saumur. This, however, has recently been changed; on entering St. Cyr those who desire appointments in the cavalry declare their wishes, and are put at once through a course of training in horsemanship. Those who are found unfit are quickly withdrawn; the remainder, if their place on the final examination allows of their appointment to the cavalry, are by that time sufficiently well practiced to be able to join their regiments at once. Twenty-seven, or sometimes a greater number, are annually at the close of their second year of study placed in competition with twenty-five candidates from the second lieutenants belonging to the army, 17 if so many are forthcoming, for admission to the Staff School at Paris. This advantage is one object which serves as a stimulus to exertion, the permission being given according to rank in the classification by order of merit. The school consists of two divisions, the upper and the lower, corresponding to the two years of the course. 
Each division is divided again into four companies. In each of these eight companies there are sub-officers chosen from the élèves themselves, with the
titles of Sergent, Sergent Fourrier, and Caporal; those appointed to the companies of the junior division are selected from the second-year cadets, and their superiority in standing appears to give these latter some considerable authority, exercised occasionally well, occasionally ill. The whole school, thus divided into eight companies, constitutes one battalion.

The establishment for conducting the school consists of—
A General as Commandant.
A Second in Command (a Colonel of Infantry).
A Major, 4 Captains, 12 Lieutenants, and 5 Second Lieutenants of Infantry; the Major holding the office of Commandant of the Battalion.
A Major, 1 Captain, 34 Lieutenants, and 3 Second Lieutenants of Cavalry to superintend the exercises, the riding, &c.
A Director of Studies (at present a Lieutenant-Colonel of Engineers).
Two Assistant Directors.
Six Examiners for Admission.
One Professor of Artillery.
One Assistant ditto.
One Professor of Topography and Mathematics.
One Professor of Military Administration, Military Art, and Military History.
One Professor of Fortification.
One Professor of Military Literature.
Two Professors of History and Geography.
One Professor of Descriptive Geometry.
One Professor of Physics and Chemistry.
Three Professors of Drawing.
One Professor of German.
Eleven Military and six Civilian Assistant Teachers (Répétiteurs).
There is also a Quartermaster, a Treasurer, a Steward, a Secretary of the Archives (who is also Librarian), an Almoner (a clergyman), four or five Surgeons, a Veterinary Surgeon, who gives lessons on the subject, and twelve Fencing Masters.

The professors and teachers are almost entirely military men. Some difficulty appears to be found by civilians in keeping sufficient order in the large classes; and it has been found useful to have as
  • 70. répétiteurs persons who could also be employed in maintaining discipline in the house. Among the professors at present there are several officers of the engineers and of the artillery, and of the staff corps. There is a board or council of instruction, composed of the commandant, the second in command, one of the field officers of the school staff, the director of studies, one of the assistant directors, and four professors. So, again, the commandant, the second in command, one of the field officers, two captains, and two lieutenants, the last four changing every year, compose the board or council of discipline. St. Cyr is a little village about three miles beyond the town of Versailles, and but a short distance from the boundary of the park. The buildings occupied by the school are those formerly used by Madame de Maintenon, and the school which she superintended. Her garden has given place for the parade and exercise grounds; the chapel still remains in use; and her portrait is preserved in the apartments of the commandant. The buildings form several courts or quadrangles; the Court of Rivoli, occupied chiefly by the apartments and bureaux of the officers of the establishment, and terminated by the chapel; the Courts of Austerlitz, and Marengo, more particularly devoted to the young soldiers themselves; and that of Wagram, which is incomplete, and opens into the parade grounds. These, with the large stables, the new riding school, the exercising ground for the cavalry, and the polygon for artillery practice, extend to some little distance beyond the limit of the old gardens into the open arable land which descends northwards from the school, the small village of St. Cyr lying adjacent to it on the south. 
The ground floor of the buildings forming the Courts of Marengo, Austerlitz, and Wagram appeared to be occupied by the two refectories, by the lecture-rooms or amphitheaters, each holding two hundred pupils, and by the chambers in which the ordinary questionings, similar to those already described in the account of the
  • 71. Polytechnic School, under the name of interrogations particulières, are conducted. On the first floor are the salles d’étude and the salle des collections the museum or repertory of plans, instruments, models and machines, and the library; on the second floor the ordinary dormitories; and on the third (the attics,) supplementary dormitories to accommodate the extra number of pupils who have been admitted since the commencement of the war. The commission, when visiting the school, was conducted on leaving the apartments of the commandant to the nearest of the two refectories. It was after one o’clock, and the long room was in the full possession of the whole first or junior division. A crowd of active and spirited-looking young soldiers, four hundred at least in number, were ranged at two long rows of small tables, each large enough, perhaps, for twelve; while in the narrow passage extending up and down the room, between the two rows, stood the officers on duty for the maintenance of order. On passing back to the corridor, the stream of the second year cadets was issuing from their opposite refectory. In the adjoining buttery, the loaf was produced, one kilogramme in weight, which constitutes the daily allowance. It is divided into four parts, eaten at breakfast, dinner, the afternoon lunch or gouter, and the supper. The daily cost of each pupil’s food is estimated at 1f. 80c. The lecture rooms and museums offer nothing for special remark. In the library containing 12,000 books and a fine collection of maps, there were a few of the young men, who are admitted during one hour every day. The salles d’étude on the first floor are, in contrast to those at the Polytechnic, large rooms, containing, under the present circumstances of the school, no less than two hundred young men. 
There are, in all, four such rooms, furnished with rows of desks on each side and overlooked in time of study by an officer posted in each to preserve order, and, so far as possible, prevent any idleness.
  • 72. From these another staircase conducts to the dormitories, containing one hundred each, and named after the battles of the present war—Alma, Inkerman, Balaclava, Bomarsund. They were much in the style of those in ordinary barracks, occupied by rows of small iron beds, each with a shelf over it, and a box at the side. The young men make their own beds, clean their own boots, and sweep out the dormitories themselves. Their clothing, some portions of which we here had the opportunity of noticing, is that of the common soldier, the cloth being merely a little finer. Above these ordinary dormitories are the attics, now applied to the use of the additional three hundred whom the school has latterly received. The young men, who had been seen hurrying with their muskets to the parade ground, were now visible from the upper windows, assembled, and commencing their exercises. And when, after passing downwards and visiting the stables, which contain three hundred and sixty horses, attended to by two hundred cavalry soldiers, we found ourselves on the exercising ground, the cavalry cadets were at drill, part mounted, the others going through the lance exercise on foot. In the riding-school a squad of infantry cadets were receiving their weekly riding lesson. The cavalry cadets ride three hours a-day; those of the infantry about one hour a week. The exercising ground communicates with the parade ground; here the greater number of the young men were at infantry drill, under arms. A small squad was at field-gun drill in an adjoining square. Beyond this and the exercising ground is the practice ground, where musket and artillery practice is carried on during the summer. Returning to the parade ground we found the cadets united into a battalion; they formed line and went through the manual exercise, and afterwards marched past; they did their exercise remarkably well. Some had been only three months at the school. 
The marching past was satisfactory; it was in three ranks, in the usual French manner.
Young men intended for the cavalry are instructed in infantry and artillery movements and drill, just as those intended for the infantry are taught riding and receive instruction in cavalry as well as artillery drill and movements. It is during the second year of their stay that they receive most instruction in the arms of the service to which they are not destined, and this, it is said, is a most important part of their instruction. “It is this,” said the General Commandant, “that made it practicable, for example, in the Crimea, to find among the old élèves of St. Cyr, officers fit for the artillery, the engineers, the staff; and for general officers, of course, it is of the greatest advantage to have known from actual study something of every branch.”

The ordinary school vacation lasts six or seven weeks in the year. The young men are not allowed to quit the grounds except on Sundays. On that day there is mass for the young men.

The routine of the day varies considerably with the season. In winter it is much as follows:—At 5 A.M. the drum beats, the young men quit their beds; in twelve minutes they are all dressed and out, and the dormitories are cleared. The rappel sounds on the grand carré; they form in their companies, enter their salles, and prepare for the lecture of the day until a quarter to 7. At 7 o’clock the officers on duty for the week enter the dormitories, to which the pupils now return; at a quarter to 8 the whole body passes muster in the dormitories, in which they have apparently by this time made their beds and restored cleanliness and order. Breakfast is taken at one time or other during the interval between a quarter to 7 and 8 o’clock. They march to their lecture rooms at 8; the lecture lasts till a quarter past 9, when they are in like manner marched out, and are allowed a quarter of an hour of amusement.
They then enter the halls of study, make up their notes on the lecture they have come from, and after an hour and a half employed in this way, for another hour and a half are set to drawing.
Dinner at 1 is followed by recreation till 2. Two hours, from 2 to a quarter past 4, are devoted to military services. From 4 to 6 P.M. part are occupied in study of the drill-book (théorie,) part in riding or fencing: a quarter of an hour’s recreation follows, and from 6¼ to 8½ there are two hours of study in the salles. At half-past 8 the day concludes with the supper. The following table gives a view of the routine in summer:—

4½ A.M. to 4¾ A.M.   Dressing.
4¾ A.M. to 7¼ A.M.   Military exercises.
7¼ A.M. to 8¼ A.M.   Breakfast, cleaning, inspection.
8¼ A.M. to 9½ A.M.   Lecture.
9½ A.M. to 9¾ A.M.   Recreation.
9¾ A.M. to 11¼ A.M.  Study.
11¼ A.M. to 1 P.M.   Drawing.
1 P.M. to 2 P.M.     Dinner and recreation.
2 P.M. to 4 P.M.     Study of drill-book (théorie) or fencing.
4 P.M. to 6 P.M.     Study for some, riding for others.
6 P.M. to 6¼ P.M.    Recreation.
6¼ P.M. to 8 P.M.    Riding for some, study for others.
8 P.M. to 8½ P.M.    Supper.

The entrance examination is much less severe than that for the Polytechnic; but a moderate amount of mathematical knowledge is demanded, and is obtained. The candidates are numerous; and if it be true that some young men of fortune shrink from a test which, even in the easiest times, exacts a knowledge of the elements of trigonometry, and not unfrequently seek their commissions by entering the ranks, their place is supplied by youths who have their fortunes to make, and who have intelligence, industry, and opportunity enough to acquire in the ordinary lycées the needful amount of knowledge. Under present circumstances it is, perhaps, more especially in the preparatory studies that the intellectual training is given, and for the examination of admission that theoretical attainments are demanded. The state of the school in a time of war can not exactly
be regarded as a normal or usual one. The time of stay has sometimes been shortened from two years to fifteen months; the excessive numbers render it difficult to adjust the lectures and general instruction so as to meet the needs of all; the lecture rooms and the studying rooms are all insufficient for the emergency; and, what is yet more than all, the stimulus for exertion which is given by the fear of being excluded upon the final examination and sent to serve in the ranks, is removed at a time when almost every one may feel sure that a commission which must be filled up will be vacant for him.

Yet even in time of peace, if general report may be trusted, it is more the drill, exercises, and discipline, than the theory of military operations, that excite the interest and command the attention of the young men. When they leave, they will take their places as second lieutenants with the troops, and they naturally do not wish to be put to shame by showing ignorance of the common things with which common soldiers are familiar. Their chief incentive is the fear of being found deficient when they join their regiments, and, with the exception of those who desire to enter the staff corps, their great object is the practical knowledge of the ordinary matters of military duty. “Physical exercises,” said the Director of Studies, “predominate here as much as intellectual studies do at the Polytechnic.” But the competition for entrance sustains the general standard of knowledge. Even when there is the greatest demand for admissible candidates, the standard of admission has not, we are told, been much reduced. No one comes in who does not know the first elements of trigonometry. And the time allotted by the rules of the school to lectures and indoor study is far from inconsiderable.

EXAMINATIONS FOR ADMISSION—STUDIES AT THE SCHOOL.

The examinations for admission are conducted almost precisely upon the same system which is now used in those for the Polytechnic School.
There is a preliminary or pass examination (du premier degré), and for those who pass this a second or class
examination (du second degré.) For the former there are three examiners, two for mathematics, physics, and chemistry, and a third for history, geography, and German. The second examination, which follows a few days after, is conducted in like manner by three examiners. A jury of admission decides. The examination is for the most part oral; and the principal difference between it and the examination for the Polytechnic is merely that the written papers are worked some considerable time before the first oral examination (du premier degré,) and are looked over with a view to assist the decision as to admissibility to the second (du second degré.) Thus the compositions écrites are completed on the 14th and 15th of June; the preliminary examination commences at Paris on the 10th of July; the second examination on the 13th.

The subjects of examination are the following:—

Arithmetic, including vulgar and decimal fractions, weights and measures, square and cube root, ratios and proportions, interest and discount, use of logarithmic tables and the sliding rule.

Algebra, to quadratic equations with one unknown quantity, maxima and minima, arithmetical and geometrical progressions, logarithms and their application to questions of compound interest and annuities.

Geometry, plane and solid, including the measurement of areas, surfaces, and volumes of the cone, cylinder, and sphere.

Plane Trigonometry: construction of trigonometrical tables and the solution of triangles; application to problems required in surveying.

Geometrical representations of bodies by projections.

French compositions.

German exercises.

Drawing, including elementary geometrical drawing and projections; plan, section, and elevation of a building; geographical maps.
Physical Science (purely descriptive): cosmography; physics, including elementary knowledge of the equilibrium of fluids; weight, gravity, atmospheric pressure, heat, electricity, magnetism, acoustics, optics, refraction, microscope, telescope.

Chemistry, elementary principles of: matter, cohesion, affinity; simple and compound bodies, acids, bases, salts; oxygen, combustion, azote, atmospheric air, hydrogen, water; equivalents and their use; carbon, carbonic acid, production and decomposition of ammonia; sulphur, sulphuric acid, phosphorus, chlorine; classification of non-metallic bodies into four families.
History: History of France from the time of Charles VII. to that of the Emperor Napoleon I. and the treaties of 1815.

Geography, relating entirely to France and its colonies, both physical and statistical.

German: the candidates must be able to read fluently both the written and printed German character, and to reply in German to simple questions addressed to them in the same language.

The general system of instruction at St. Cyr is similar to that of the Polytechnic; the lectures are given by the professors, notes are taken and completed afterwards, and progress is tested in occasional interrogations by the répétiteurs. One distinction is the different size of the salles d’étude (containing two hundred instead of eight or ten;) but, above all, is the great and predominant attention paid to the practical part of military teaching and training. It is evident at first sight that this is essentially a military school, and that especial importance is attached both by teachers and pupils to the drill, exercise, and manœuvers of the various arms of the service.

The course of study is completed in two years; that of the first year consists of:—

27 lectures in descriptive geometry.
35 lectures in physical science.
20 lectures in military literature.
35 lectures in history.
27 lectures in geography and military statistics.
30 lectures in German.
Total, 174

In addition to the above, there is a course of drawing between the time when the students join the school early in November and the 15th of August. The course of drawing consists in progressive studies of landscape drawing with the pencil and brush, having special application to military subjects, to the shading of some simple body or dress, and to enable the students to apply the knowledge which has been communicated to them on
the subject of shadows and perspective. This course is followed by the second or junior division during the first year’s residence.

The course of lectures in descriptive geometry commences with certain preliminary notions on the subject; refers to the representation of lines on curved surfaces, cylindrical and conical, surfaces of revolution, regular surfaces, intersections of surfaces, shadows, perspective, vanishing points, &c., construction of geographical maps, and the plan coté.

The lectures in physical science embrace nine lectures on the general properties of bodies; heat, climate, electricity, magnetism, galvanism, electro-magnetism, acoustics.

There are twelve lectures in chemistry; on water, atmospheric air, combustibles, gas, principal salts, saltpetre, metallurgy, organic chemistry.

There are fourteen lectures in mechanics applied to machines; motion, rest, gravity, composition and resolution of forces, mechanical labor, uniform motion, rectilinear and rotatory, projectiles in space, mechanical powers, drawbridges, the Archimedean principle, military bridges, pumps, reservoirs, over and under-shot wheels, turbines, corn mills, steam-engines, locomotives, transport of troops, materials, and munitions on railways.

The twenty lectures in military literature refer to military history and biography, memoirs of military historians, battles and sieges, the art of war, military correspondence, proclamations, bulletins, orders of the day, instructions, circulars, reports and military considerations, special memoirs, reconnaissance and reports, military and periodical collections, military justice.

The thirty-five lectures in history principally relate to France and its wars, commencing with the Treaty of Westphalia and ending with the Treaty of Vienna.
The twenty-seven lectures in geography and military statistics are subdivided into different parts; the first eight lectures are devoted to Europe and France, including the physical geography and statistics of the same; the second six lectures are devoted to the frontiers of France; and the third part, of thirteen lectures, to foreign states and Algeria, including Germany, Italy, Spain, Portugal, Poland, and Russia.

The studies for the first division during the second year of their residence consist of—

10 lectures in topography.
27 lectures in fortification.
15 lectures in artillery.
10 lectures in military legislation.
12 lectures in military administration.
27 lectures in military art and history.
20 lectures in German.
Total, 121

One lesson weekly is given in drawing, in order to render the students expert in landscape and military drawing with the pencil, pen, and brush. We must not omit to call attention to the fact that mathematics are not taught in either yearly course at St. Cyr.

The course in topography, of ten lectures, has reference to the construction of maps, copies of drawings, the theory, description, and use of instruments for measuring angles and leveling, the execution of a regular survey on the different systems of military drawing, drawing from models of ground, and the construction of topographical drawings and reconnaissance surveys, with accompanying memoirs.

Twenty-seven lectures are devoted to fortification. The first thirteen relate principally to field fortification: statement of the general principles, definitions, intrenchments, lines, redoubts, armament, defilement, execution of works on the ground, means necessary for the defense, application of field fortification to the defense of têtes de pont and inhabited places, attack and defense of intrenchments, &c., castrametation. Six lectures have reference to permanent fortification: ancient fortifications, Cormontaigne’s system, exterior and detached works, considerations respecting the accessories of defense to fortified places. Eight lectures relate to the attack and defense of places: preparations for attack and defense, details of the construction of siege works from the opening of the trenches to the taking of the place, exterior works as auxiliaries, sketches and details of the different works in fortifications, plans, profiles, &c.
The students also execute certain works, such as the making of fascines, gabions, saucissons, repair of revetments of batteries, platform, setting the profiles, defilement, and construction of a fieldwork, different kinds of sap, plan and establishment of a camp for a battalion of infantry, &c. Under the head of artillery, fifteen lectures are given, commencing with the resistance of fluids, movement of projectiles, solution of problems with the balistic pendulum, deviation of projectiles, pointing and firing guns; small arms, cannon, materials of artillery, powder, munition, fireworks for
military purposes; range of cannon, artillery for the attack or defense of places or coasts, field artillery, military bridges. The students are practically taught artillery drill with field and siege guns, practice with artillery, the repair of siege batteries, and bridges of boats or rafts.

The ten lectures allowed for the course of military legislation have for their object the explanation of the principles, practice, and regulations relating to military law, and its connection with the civil laws that affect military men.

The twelve lectures on what is called military administration relate to the interior economy of a company, and to the various matters appertaining to the soldier’s messing, mode of payment, necessaries, equipment, lodging, &c.

Military art and history is divided into three parts. The first, of five lectures, relates to the history of military institutions and organization. The second, of fifteen lectures, refers to the composition of armies and to considerations respecting the various arms, infantry, cavalry, état-major, artillery and engineers, and the minor operations of war. The third part, of seven lectures, gives the history of some of the most celebrated campaigns in modern times.

In the practical exercises, the students make an attack or defense of a work or of a system of fieldworks during their course of fortification, or of a house, farm, or village in the immediate vicinity of the school, or make the passage of a river. The students receive twenty lectures in German, and are required to keep up a knowledge of German writing.

EXAMINATIONS AT THE SCHOOL.

The examinations at the end of the first year take place under the superintendence of the director and assistant director of studies.
They are conducted by the professor of each branch of study, assisted by a répétiteur, each of whom assigns a credit to the student under examination, and the mean, expressed as a whole number, represents the result of the student’s examination in that particular branch of study. The examination in military instruction for training (in drill and exercises) is carried on by the officers attached to companies, under the superintendence of the commandant of the battalion, and that relating to practical artillery by the officer in charge of that duty.
Each pupil’s position is determined, as at the Polytechnic, partly by the marks gained at the examination, partly by those he has obtained during his previous studies. In other words, half of the credit obtained by a student at this examination in each subject is added to half of the mean of all the credits assigned to him, in the same subject, for the manner in which he has replied to the questions of the professor and répétiteur during the year; and the sum of these two items represents his total credit at the end of the year. The scale of credit is from 0 to 20, as at the Polytechnic.

Every year, before the examinations commence, the commandant and second in command, in concert with the director and assistant director, and in concurrence with the superior officer commanding the battalion for military instruction, are formed into a board to determine the minimum credit which should be exacted from the students in every branch of study. This minimum is not usually allowed to fall below eight for the scientific, and ten for the military instruction.

Any student whose general mean credit is less than eight for the scientific, or ten for the military instruction, or who has a credit of less than four for any particular study in the general instruction, or of six in the military instruction, is retained at the school to work during the vacation, and is re-examined about eight days before the recommencement of the course by a commission composed of the director and assistant director of studies for the general instruction, and of the second in command, the commandant of the battalion, and one captain for the military instruction. A statement of this second examination is submitted to the minister of war, and those students who pass it in a satisfactory manner are permitted by him to proceed into the first division.
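The combination rule and the minimum thresholds described above amount to simple arithmetic, and can be sketched in modern terms. The sketch below is only an illustration of that arithmetic; all function and variable names are our own, not the school's.

```python
def year_end_credit(exam_credit, year_credits):
    """Half the credit gained at the examination, plus half the mean
    of the credits assigned during the year (scale 0 to 20)."""
    year_mean = sum(year_credits) / len(year_credits)
    return exam_credit / 2 + year_mean / 2

def passes(scientific_mean, military_mean, general_subjects, military_subjects):
    """Apply the usual minima: a general mean of at least 8 for the
    scientific and 10 for the military instruction, with no single
    subject below 4 (general) or 6 (military)."""
    return (scientific_mean >= 8
            and military_mean >= 10
            and all(c >= 4 for c in general_subjects)
            and all(c >= 6 for c in military_subjects))

# A student who scored 14 at the examination after averaging 12
# through the year ends the year with a credit of 13.
print(year_end_credit(14, [12, 12, 12]))
```

On this reading, a single subject below the per-subject floor keeps a student back for re-examination even when his general means are adequate.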
Those who do not pass it are reported to the minister of war as deserving of being excluded from the school, unless there be any special grounds for excusing them, such as sickness, in which case, when the fact is properly established before the council of instruction, they are permitted to repeat the year’s studies.
Irregularity of conduct is also made a ground for exclusion from the school. In order to estimate the credit to be attached to the conduct of a student, all the punishments to which he can be subjected are converted into a specific number of days of punishment drill. Thus,

For each day confined in the police chamber, 4 days’ punishment drill.
For each day confined in the prison, 8 days’ punishment drill.

The statement is made out, under the presidency of the commandant of the school, by the second in command and the officer in command of the battalion. The credits for conduct are expressed in whole numbers on the scale of 0 to 20, in which 20 signifies that the student has not been subjected to any punishment whatever, and 0 that the student’s punishments have amounted to 200 or more days of punishment drill. The number 20 is diminished by deducting 1 for every 10 days of punishment drill.

The classification in the order of merit depends upon the total sum of the numerical marks or credits obtained by each student in every branch of study or instruction. The numerical credit in each subject is found by multiplying the credit awarded in that subject by the co-efficient of influence belonging to it. The co-efficients, representing the influence allowed to each particular kind of examination in the various branches of study, are as follows:—

Second Division, or First Year’s Course of Study.

General Instruction:
    Descriptive Geometry, Course, 6
    Drawing and Sketches, 2
    Physical Science applied to the Military Arts, Course, 6
    Sketch and Memoir, 2
    History, 6
    Geography and Statistical Memoirs, Course, 5
    Sketch and Memoir, 2
    Literature, Memoir on, 4
    German, 4
    Drawing, 3
        (General Instruction, 40)
Special Instruction: Drill, Practice, Manœuvers (Infantry and Cavalry,) 7
Conduct, 3
Total, 50

First Division, or Second Year’s Course of Study.

                                                  Infantry.  Cavalry.
General Instruction:
    Topography, Course,                               3          3
    Maps, Memoirs, and Practical Exercises,           3          2
    Fortification, Course,                            4          4
    Drawings, Memoirs, and Practical Exercises,       3          2
    Artillery and Balistic Pendulum, Course,          4          4
    Practical Exercises, School of Musketry,          2          1
    Military Legislation,                             2          2
    Military Administration, Course,                  3          3
    Sheets of Accounts,                               1          1
    Military History and Art, Course,                 4          4
    Memoirs and Applications,                         1          1
    German,                                           4          4
    Drawing,                                          1          1
        (General Instruction,                        35         32)
Special Instruction:
    Infantry: Theory of Drill, Manœuvers (3 Schools), 4; Practical Instruction, 3; Regulations, 2 (total 9)
    Cavalry: Riding, 3; Theoretical and Practical Instruction, 7; Veterinary Art, 2 (total 12)
Conduct,                                              6          6
Total,                                               50         50

To facilitate this classification in order of merit, three distinct tables are prepared,—

The first relating to the general instruction;
The second relating to the military instruction; and
The third relating to the conduct;

and they respectively contain one column in which the names of the students are arranged by companies in the order in which they have been examined; followed by as many columns as there are subjects of examination, for the insertion of their individual credits and the co-efficients of influence by which each credit is multiplied; and lastly by a column containing the sum of the various products belonging to, and placed opposite, each student’s name.

These tables are respectively completed, by the aid of the existing documents, the first, for the general instruction, by the director of studies; the second, for the military instruction, by the officer commanding the battalion; the third, for conduct, under the direction of the commandant of the school, assisted by the second in command.

A jury formed within the school, composed of the general commandant, president, the second in command, the director of studies, and the officer commanding the battalion, is charged with the classification of the students in the order of merit. To effect it, after having verified and established the accuracy of the above tables, the numbers appertaining to each student in the three tables are extracted and inserted in another table, containing the name of each student, and, in three separate columns, the numbers obtained by each in general instruction, military instruction, and conduct, and the sum of these credits in another column.
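The conduct credit and the weighted classification described above reduce to two small computations. The following sketch is illustrative only, with names and example figures of our own.

```python
def conduct_credit(police_days, prison_days):
    """Conduct on the 0-to-20 scale: a day in the police chamber
    counts as 4 days' punishment drill, a day in prison as 8, and
    1 is deducted from 20 for every 10 days of drill (0 at 200+)."""
    drill_days = 4 * police_days + 8 * prison_days
    return max(0, 20 - drill_days // 10)

def classification_total(credits, coefficients):
    """Multiply each credit by its co-efficient of influence and sum."""
    return sum(credits[s] * coefficients[s] for s in coefficients)

# Ten days in the police chamber = 40 days' drill, so 4 deducted from 20:
print(conduct_credit(10, 0))  # 16

# Three subjects of the second-division table, by way of example:
coeffs = {"German": 4, "Drawing": 3, "Conduct": 3}
creds = {"German": 12, "Drawing": 10, "Conduct": 16}
print(classification_total(creds, coeffs))  # 126
```

The coefficients thus act as fixed weights: a credit of 12 in German (coefficient 4) contributes more to the total than the same credit would in drawing (coefficient 3).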
By the aid of this last table, the jury cause another to be compiled, in which the students are arranged in the order of merit as established by the numerical amount of their credits, the highest in the list having the greatest number. If two or more have the same number of total credits, priority is given to the student who has obtained a superiority of credits in, successively, military instruction, conduct, general instruction, and the notes for the year; and if these prove insufficient, they are finally classed in the same order as they were admitted into the school.

A list for passing from the second to the first division is forwarded to the minister at war, with a report in which the results for the year are compared with those of the preceding year; and the minister at war, with these reports before him, decides who are ineligible, from incompetency or by reason of their conduct, to pass to the other division.

The period when the final examinations before leaving the school are to commence is fixed by the president of the jury specially appointed to carry on this final examination, in concert with the general commandant of the school. The president of the jury directs and superintends the whole of the arrangements for conducting the examination; and during each kind of examination a member of the corps upon the science of which the student is being questioned assists the examiner, and, as regards the military instruction, each examiner is aided by a captain belonging to the battalion.

The examination is carried on in precisely the same manner as that already described for the end of the first year’s course of study. And the final classification is ascertained by adding to the numerical credits obtained by each student during his second year’s course of study, in the manner already fully explained, one-tenth of the numerical credits obtained at the examinations at the end of the first year.
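The final ordering just described, second-year credits plus one-tenth of the first-year credits, with ties broken in the stated sequence, can be sketched as a single sort key. Everything here, names included, is our own illustration of the rule.

```python
def final_ranking(students):
    """Order students by total credit (second-year credits plus one-tenth
    of the first-year credits), breaking ties successively by military
    instruction, conduct, general instruction, the year's notes, and
    finally the order of admission (the earlier-admitted student wins)."""
    def key(s):
        total = s["second_year"] + s["first_year"] / 10
        # Credits are negated so that larger values sort first; the
        # admission order is left positive so a smaller number wins.
        return (-total, -s["military"], -s["conduct"],
                -s["general"], -s["notes"], s["admission_order"])
    return sorted(students, key=key)
```

Two students with equal totals are thus separated by the military-instruction credit before any other consideration, and only students identical on every count fall back on the admission order.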
The same regulations as to the minimum credit which a student must obtain in order to pass from one division to the other at the end of the first year, stated at page 160, are equally applicable to his passing from the school to become a second lieutenant in the army.

A list of the names of those students who are found qualified for the rank of second lieutenant is sent to the minister at war, and a second list is also sent, containing the names of those students who, when subjected to a second or revised examination, have been pronounced qualified by the jury before whom they were re-examined. Those whose names appear in the first list are permitted to choose, according to their position in the order of merit, the staff corps or infantry, according to the number required for the first-named service, and to name the regiments of infantry in which they desire to serve. Those intended for the cavalry are placed at the disposal of the officer commanding the regiment which they wish to enter. Those whose names appear in the second list are not permitted to choose their corps, but are placed by the minister at war in such corps as may have vacancies, or where he may think proper.

The students who are selected to enter the staff corps, after competing successfully with the second lieutenants of the army, proceed as second lieutenants to the staff school at Paris. Those who fail pass into the army as privates, according to the terms of the engagement made on entering the school.