Empowerment evaluation

Empowerment evaluation (EE) is an evaluation approach designed to help communities monitor and evaluate their own performance. It is used in comprehensive community initiatives as well as small-scale settings and is designed to help groups accomplish their goals. According to David Fetterman, "Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination".[1] An expanded definition is: "Empowerment evaluation is an evaluation approach that aims to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs."[2]

Scope

Empowerment evaluation has been used in programs ranging from a fifteen-million-dollar Hewlett-Packard corporate philanthropy effort[3] to accreditation in higher education,[4] and from the NASA Jet Propulsion Laboratory's Mars Rover project[5] to battered women's shelters.[6] Empowerment evaluation has been used by governments, foundations, businesses, and non-profits, as well as Native American reservations. It is a global phenomenon, with projects and workshops around the world, including Australia, Brazil, Canada, Ethiopia, Finland, Israel, Japan, Mexico, Nepal, New Zealand, South Africa, Spain, Thailand, the United Kingdom, and the United States. A sample of sponsors and clients includes Casey Family Programs, the Centers for Disease Control and Prevention, Family & Children Services, Health Trust, the Knight Foundation, Poynter, Stanford University, the State of Arkansas, UNICEF and Volunteers of America.[7]

History and publications

Empowerment evaluation was introduced in 1993 by David Fetterman during his presidential address at the American Evaluation Association’s (AEA) annual meeting.[1]

The approach was initially well received by some researchers who commented on the complementary relationship between EE and community psychology, social work, community development and adult education. They highlighted how it inverted traditional definitions of evaluation, shifting power from the evaluator to program staff and participants. Early supporters also noted the approach's focus on social justice and self-determination. One colleague compared the writings on the approach to Martin Luther's Ninety-five Theses.[8][9][10]

Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability,[11] the first empowerment evaluation book, provided an introduction to theory and practice. It highlighted EE's scope, ranging from its use in a national educational reform movement to its endorsement by the W. K. Kellogg Foundation's Director of Evaluation. The book presented examples in various contexts, including federal, state, and local government, HIV prevention and related health initiatives, African American communities and battered women's shelters. This first volume also provided various theoretical and philosophical frameworks as well as workshop and technical assistance tools.

Foundations of Empowerment Evaluation[12] was the second EE book. It provided steps and cases, and highlighted the role of the Internet in facilitating and disseminating the approach.

The third book, Empowerment Evaluation Principles in Practice,[13] emphasized greater conceptual clarity by making explicit EE's underlying principles, ranging from improvement and inclusion to capacity building and social justice. In addition, it highlighted EE's commitment to accountability and outcomes by stating them as an explicit principle and presenting substantive outcome examples. Cases described include educational reform, youth development programs and child abuse prevention programs.[14]

Theories

The primary theories guiding empowerment evaluation are process use and theories of use and action.[15][16][17]

Process use represents much of the rationale or logic underlying EE in practice, because it cultivates ownership by placing the approach in community and staff members’ hands.

The alignment of theories of use and action explains how empowerment evaluation helps people produce desired results.[18][19][20][21][22][23]

Process use

Empowerment evaluation is designed to be used by people. It places evaluation in the hands of community and staff members. The more that people are engaged in conducting their own evaluations, the more likely they are to believe in them, because the evaluation findings are theirs. In addition, a byproduct of this experience is that they learn to think evaluatively, which makes them more likely to make decisions and take actions based on their evaluation data. This way of thinking is at the heart of process use.[24]

Principles

Empowerment evaluation is guided by 10 principles.[25] These principles help evaluators and community members align decisions with the larger purpose or goals associated with capacity building and self-determination.

  1. Improvement – help people improve program performance
  2. Community ownership – value and facilitate community control
  3. Inclusion – invite involvement, participation, and diversity
  4. Democratic participation – open participation and fair decision making
  5. Social justice – address social inequities in society
  6. Community knowledge – respect and value community knowledge
  7. Evidence-based strategies – respect and use both community and scholarly knowledge
  8. Capacity building – enhance stakeholder ability to evaluate and improve planning and implementation
  9. Organizational learning – apply data to evaluate and implement practices and inform decision making
  10. Accountability – emphasize outcomes and accountability

Concepts

Key concepts include critical friends, cultures of evidence, cycles of reflection and action, communities of learners, and reflective practitioners.[26] A critical friend, for example, is an evaluator who provides constructive feedback.[27] Critical friends help ensure the evaluation remains organized, rigorous and honest.

Steps

EE's three-step approach helps groups:[28][12]

  1. establish their mission;
  2. review their current status; and
  3. plan for the future.

This approach is popular in part due to its simplicity, effectiveness and transparency.

A second approach is the 10-step Getting to Outcomes (GTO).[29] GTO helps participants answer 10 questions using relevant literature, methods and tools. The 10 accountability questions and literature to address them are:

  1. What are the needs and resources? (Needs assessment; resource assessment)
  2. What are the goals, target population and desired outcomes? (Goal setting)
  3. How does the intervention incorporate knowledge of science and best practices in this area? (Science and best practices)
  4. How does the intervention fit with existing programs? (Collaboration; cultural competence)
  5. What capacities do you need to implement a quality program? (Capacity building)
  6. How will this intervention be carried out? (Planning)
  7. How will the quality of implementation be assessed? (Process evaluation)
  8. How well did the intervention work? (Outcome and impact evaluation)
  9. How will quality improvement strategies be incorporated? (Total quality management; continuous quality improvement)
  10. If the intervention is (or components are) successful, how will the intervention be sustained? (Sustainability and institutionalization)

A manual with worksheets addresses how to answer the questions.[30] While GTO has been used primarily in substance abuse prevention, customized GTOs have been developed for preventing underage drinking[31] and promoting positive youth development.[32] Several books are downloadable. In addition, EE can employ photojournalism, online surveys, virtual conferencing and self-assessments.[33]

Monitoring

Conventional and innovative evaluation tools are used to monitor outcomes, including online surveys, focus groups and interviews, as well as quasi-experimental designs. In addition, program-specific metrics are developed using baselines, benchmarks, goals and actual performance. For example, a minority tobacco prevention program in Arkansas established:

  1. Baselines (the number of tobacco users)
  2. Goals (the yearly number of subjects helped)
  3. Benchmarks (the monthly number of subjects helped)
  4. Performance (the number of subjects who stop smoking)

These metrics help the community monitor implementation by comparing performance with benchmarks. They also enable the community to make mid-course corrections.
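The benchmark comparison described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical figures, not the Arkansas program's actual data:

```python
# Hypothetical monitoring sketch: compare monthly performance with benchmarks.
# All figures are invented for illustration.

yearly_goal = 120                       # subjects to be helped per year (assumed)
monthly_benchmark = yearly_goal / 12    # expected subjects helped per month

# Actual monthly performance: number of subjects helped (hypothetical counts)
performance = {"Jan": 8, "Feb": 12, "Mar": 7}

for month, helped in performance.items():
    if helped >= monthly_benchmark:
        status = "on track"
    else:
        status = "consider a mid-course correction"
    print(f"{month}: {helped} helped vs. benchmark {monthly_benchmark:.0f} ({status})")
```

Comparing each month's performance against the benchmark, rather than waiting for the yearly total, is what makes mid-course corrections possible.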

Selected case examples

Stanford University School of Medicine applied the technique to curricular decision making.[26] EE contributed to improvements in course and clerkship ratings. For example, the average student ratings for required courses improved significantly (P = .04; Student’s one-sample t test).
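The statistic reported above can be computed from first principles. The sketch below is a generic Student's one-sample t test using hypothetical course ratings on a 5-point scale, not the Stanford data:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """Student's one-sample t statistic: (mean - mu0) / (s / sqrt(n))."""
    n = len(sample)
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (mean - mu0) / (s / math.sqrt(n))

# Hypothetical post-intervention course ratings vs. a baseline mean of 3.5
ratings = [3.8, 4.0, 3.9, 4.2, 3.7, 4.1]
t = one_sample_t(ratings, 3.5)  # positive t indicates ratings above the baseline
```

The resulting t statistic is then compared against the t distribution with n - 1 degrees of freedom to obtain a P value.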

EE guided Hewlett-Packard's $15 million Digital Village Initiative, which was designed to help bridge the digital divide in communities of color. Outcomes ranged from Native American communities building one of the largest unlicensed wireless systems in the country to the creation of a high-resolution digital printing press.[3]

The State of Arkansas used EE in academically distressed schools and tobacco prevention. Outcomes include improving test scores, upgrading school-level performance and preventing and reducing tobacco consumption.[34][35]

A school district in South Carolina invested millions of their own dollars to provide each student with a personalized computing device as an educational tool. EE was used to support large scale implementation of the initiative and monitor outcomes associated with teacher and student behavior change.[36]

Rationale

Response to critique

EE is conducted by an internal group, not an external individual. Programs are dynamic, not static, and thus require more fluid, responsive, and continual assessment. The evaluator becomes a coach rather than the expert. Investigating worth and merit is not sufficient; the focus should also be on program improvement. Empowerment evaluation, as a group activity, builds in self-checks on bias. Internal and external forms of evaluation are compatible and reinforcing. When the Joint Committee's standards were applied, empowerment evaluation was found to be consistent with their spirit. Empowerment evaluation is not a threat to traditional evaluation; it may instead help to revitalize it.[37]

Empowerment evaluation is part of an emancipatory research stream. Its unique contribution is its focus on fostering self-determination and building capacity. Empowerment evaluation is guided by process use. Additional effort could be made to further distinguish empowerment from collaborative, participatory, stakeholder, and utilization forms of evaluation. Empowerment evaluation should be limited or focused on the disenfranchised and issues of liberation. Empowerment evaluation has become a part of the evaluation landscape.[38]

Empowerment evaluation is part of a world-wide movement. It is now part of the evaluation field. However, empowerment evaluation needs to focus on the consumer, rather than staff members. In addition, the definition of empowerment evaluation has changed. Bias in evaluation can be removed by distancing oneself from the group or program being assessed. Internal and external forms of evaluation are needed. Empowerment evaluators serve as evaluation consultants.[39]

The definition of empowerment evaluation is the same as when the approach was first introduced to the field, although it has been expanded to further clarify the approach's purpose. Fetterman and Wandersman agree that empowerment evaluation is part of an emancipatory stream of research and that it relies on process use to guide it. They also believe that greater effort is needed to further distinguish empowerment evaluation from other stakeholder-involvement approaches. Empowerment evaluation can be viewed along a continuum from less to more empowering in nature. It is designed to help the disenfranchised, but its boundaries are much broader and inclusive: everyone can benefit from self-assessment and becoming more self-determined.
Fetterman advocated that evaluation be shared with a broader population.[1][40]

Debates and controversy

Empowerment evaluation challenged the status quo concerning who is in control of an evaluation and what it means to be an evaluator. Conventionally, evaluations are conducted by a specialist. In EE, the group or community performs the evaluation, guided by an empowerment evaluator or “critical friend.”

First wave of criticism

Stufflebeam claimed that evaluation should be left in the hands of professionals who objectively investigate the worth or merit of an object, and that EE violates the (as yet unadopted) Joint Committee's Program Evaluation Standards.[41][42]

Fetterman and Scriven agreed on the value of both internal and external evaluations. They also agreed on a focus on the consumer, although staff members, sponsors, and policy makers also have important roles to play in evaluation. Scriven, however, claimed that the evaluator must maintain distance from program participants to avoid bias.[43][44]

Chelimsky re-framed the discussion between Fetterman, Patton and Scriven, explaining that evaluations serve multiple purposes: 1) accountability; 2) development; and 3) knowledge. Scriven, and to a lesser extent Patton, focused on accountability, while Fetterman focused on development.[45]

Second wave

The second wave of debate and discussion emerged between 2005 and 2007. The primary critiques focused on conceptual and methodological clarity:

Cousins attempted to differentiate between similar approaches, e.g. collaborative, participatory, and empowerment evaluation. Cousins asked whether EE is practical (focusing on decision making) or transformative (focusing on self-determination), and viewed self-evaluation as more likely to have a self-serving bias. He also noted the variability in attempts at empowerment evaluation.[46]

Miller and Campbell conducted a systematic literature review of empowerment evaluation. They highlighted types or modes of EE, as well as settings, reasons for use, selection processes and degrees of participation. They highlighted practice variants that depended on the size of the evaluation. They suggested that clients were selecting it for appropriate reasons, such as capacity building, self-determination, accountability, cultivating ownership and institutionalization of evaluations. However, they also found that approximately 25% of the projects reviewed were empowerment in name only, and they argued for additional conceptual clarity.[47]

Patton accepted EE as part of the evaluation field and proposed that, given its established status, additional clarity distinguishing collaborative, participatory, utilization and empowerment evaluation would be fruitful. He acknowledged improvements ranging from refined definitions to the addition of the 10 principles, though he was concerned that self-determination was not on the list. Patton applauded and recommended process use for empowerment evaluation, and accepted the contributors' commitment to forthrightly describing problems. He proposed greater emphasis on outcomes or results in EE.[48]

Scriven argued that self-evaluation is flawed, because it is inherently self-serving, and rejected its use for professional development.[49] He questioned the ability of EE to actually empower people and recommended a neutral evaluator role. He suggested that internal and external evaluations are not compatible, and that empowerment, as well as randomized controls, are merely forms of ideology.[50]

Response to critique

Fetterman and Wandersman responded by attempting to enhance conceptual clarity, provide greater methodological specificity and highlight EE's commitment to accountability and outcomes. They acknowledged and applauded Miller and Campbell's systematic review of EE projects, while noting neglected or omitted case examples and questioning some of their methodology.

They claimed that the 10 principles contributed to conceptual clarity and that people empower themselves. They asserted that evaluations are inherently subjective and are shaped by culture and political context, and that EE is committed to honesty and rigor. EE is more inclusive than traditional evaluations, placing cross-checks on data and decisions. Participants often know more about problems than outsiders and have a vested interest in making their programs work. They claimed that internal and external evaluations can operate together effectively as additional cross-checks.

While the similarities among collaborative, participatory and empowerment evaluation were described in the first and second empowerment evaluation books, they recommended Cousins' tool to highlight the differences, focusing on depth of participation and control of evaluation technical decision making.[51]

The most significant response to the critiques focused on outcomes. Fetterman and Wandersman argued that outcomes and results were important to EE and highlighted specific project outcomes.[52][53]

Scriven's assessment

Scriven agreed that external evaluators sometimes miss problems obvious to program staff members. He also stated that external evaluators have less credibility with staff than internal evaluators, and concluded that, as a result, their recommendations are less likely to be implemented.[54]

Scriven agreed that EE contributed to improvements in internal staff program evaluations and that empowerment evaluation could make a contribution to evaluation if combined with third-party evaluation.[55]

Professional association affiliation and awards

Empowerment evaluation was a catalyst for the creation of the American Evaluation Association's Collaborative, Participatory, and Empowerment Evaluation topical interest group. Approximately 20% of the American Evaluation Association membership is affiliated with the topical interest group.[56] SAGE Publications, a social science textbook publisher, cited an empowerment evaluation book as one of their "classic titles in research methods."[57] Four empowerment evaluators received honors from the association: Margret Dugan, David Fetterman, Shakeh Kaftarian, and Abraham Wandersman.[58]

Notes and references

  1. Fetterman 1994.
  2. Wandersman, Keener et al.
  3. Fetterman 2005, pp. 98-107.
  4. Fetterman 2011.
  5. Fetterman & Bowman 2002.
  6. Andrews 1996.
  7. videos
  8. Altman 1997.
  9. Brown 1997.
  10. Wild 1997.
  11. Fetterman, Kaftarian & Wandersman 1996.
  12. Fetterman 2001b.
  13. Fetterman & Wandersman 2004.
  14. See Donaldson's 2005 review of Empowerment Evaluation Principles in Practice.
  15. Argyris & Schon 1978.
  16. Patton 1997a.
  17. Patton 1997b.
  18. Dunst, Trivette & LaPointe 1992.
  19. Zimmerman 2000.
  20. Zimmerman et al. 1992.
  21. Zimmerman & Rappaport 1988. See Bandura 1982 concerning self-efficacy.
  22. Alkin & Christie 2004.
  23. Christie 2003.
  24. Patton 1997b, p. 189.
  25. Fetterman & Wandersman 2004, pp. 1–2, 27-41,42-72.
  26. Fetterman, Deitz & Gesundheit 2010.
  27. Fetterman 2009.
  28. Fetterman 2001a.
  29. Wandersman et al. 2000.
  30. Chinman, Imm & Wandersman 2004.
  31. Imm, Chinman & Wandersman 2006.
  32. Fisher et al. 2006.
  33. Sabo 2001.
  34. Fetterman 2005, pp. 107-121.
  35. Fetterman & Wandersman 2007.
  36. Lamont, A., Wright, A., Wandersman, A., & Hamm, D. (2014). An empowerment evaluation approach to implementing with quality at scale. In Fetterman, Kaftarian, & Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment, evaluation capacity building, & accountability (2nd ed.).
  37. Fetterman 1995.
  38. Patton 1997a.
  39. Scriven 1997.
  40. David M. Fetterman (2002-07-03). "Empowerment evaluation". Evaluation Practice. 15: 1–15. doi:10.1016/0886-1633(94)90055-8. Retrieved 2013-01-27.
  41. "Program Evaluation Standards Statements « Joint Committee on Standards for Educational Evaluation". Jcsee.org. Retrieved 2013-01-27.
  42. Stufflebeam 1994.
  43. Fetterman 2010.
  44. The debate between Fetterman, Patton and Scriven is available online in text form from the Journal of MultiDisciplinary Evaluation. It was also recorded and is available in Claremont's virtual library.
  45. Fetterman 1997.
  46. Cousins 2005.
  47. Miller & Campbell 2006.
  48. Patton 2005.
  49. Scriven 2005.
  50. Smith 2007.
  51. Fetterman 2001, p. 113.
  52. Chinman et al. 2008.
  53. David Fetterman. "Arkansas Evaluation Center". Arkansasevaluationcenter.blogspot.com. Retrieved 2013-01-27.
  54. Scriven 1997, p. 12.
  55. Scriven 1997, p. 174.
  56. Rodriguez-Campos 2012.
  57. How SAGE has shaped Research Methods, p. 12. SAGE Publications.
  58. Patton 1997a, p. 148; American Evaluation Association Award Recipients.

References

External links

Ignite Lecture

This article is issued from Wikipedia, version of 11/25/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.