A recent discussion on DO-Consult turned to the question of how we can evaluate the success of electronic citizen engagement projects. One list member pointed out that the issues probably aren’t too different from the challenges of evaluating off-line engagement efforts, which inspired me to pull together some resources on how to evaluate citizen engagement.

It turns out that the Hewlett Foundation has funded a research project on evaluating dialogue & deliberation, jointly undertaken by the Deliberative Democracy Consortium and the National Conference on Dialogue and Deliberation (the NCDD’s web site is a great resource for general consultation and deliberation issues).

The project has gathered 50 assessment tools, reports and papers, some of which will soon be available on the NCDD’s resources page. To see the full set of resources immediately, you need to access the archives of the NCDD’s listserv on evaluating dialogue and deliberation (which requires registering with the NCDD site — a very quick & easy process that is initiated when you try to access the archive). Visit the NCDD’s e-mail list page and click on the “Evaluating Dialogue & Deliberation” list.

The NCDD’s web site also includes a paper by Angie Boyce of the Boston Museum of Science that offers a very nice review of the evaluation literature. See excerpts below; those who would find it useful to read the literature review in full (3 pages of a 9-page paper on “Evaluating Public Engagement: Deliberative Democracy and the Science Museum”) can download the paper in Word format.

The Canadian government has a report on “Evaluation and Citizen Engagement” that seems to be aimed at public servants trying to build evaluation processes into their own engagement projects. The report includes an annotated bibliography on the subject, much of it focused on “subject-centered evaluation” — i.e. evaluation by participants.

From Boyce, “Evaluating Public Engagement”, 2004:

[T]he evaluation literature on public participation and deliberative democracy is still in its infancy. Evaluation is only beginning to be considered a critical component in the development process (Rowe and Frewer, 2000; Einsiedel, 2002; Abelson, Forest et al., 2003).

Webler develops an evaluative framework based on two “metacriteria”: competence, which he defines as “psychological heuristics, listening and communication skills, self-reflection, and consensus building”, and fairness, which occurs when “people are provided equal opportunities to determine the agenda, the rules for discourse, to speak and raise questions, and equal access to knowledge and interpretations” (Webler, 1995). Webler qualifies competence and fairness as criteria by identifying conditions under which they are most likely to occur….

Rowe and Frewer…divide evaluation criteria into two parts: acceptance criteria, which relate to how the public will accept the procedure, and process criteria, which refer to how the procedure is constructed and implemented….

Einsiedel’s work…developed evaluation criteria from the literature on constructive technology assessment (which is front-end and design focused) and deliberative democracy (Habermas’s rules for discourse). She divided evaluation into three components: institutional/organizational criteria (which focus on how the opportunity for public participation emerged and was shaped), process criteria (which focus on what procedures were used as part of the participatory process), and outcome criteria (which focus on the impacts on participants, the community, the larger public, and the policy process in general).

… Perhaps one of the most extensive evaluation efforts published to date is by Horlick-Jones et al. in their evaluation of the GM Nation? public debate on genetic modification sponsored by the British government. They used three sets of criteria: the aims and objectives of the Steering Board (in charge of implementing the debate); normative criteria (transparency, well-defined tasks, unbiased, inclusive, sufficient resources, effective and fair dialogue); and participant views of success, gathered through surveys (Horlick-Jones, Walls et al., 2004). By using three different sets of criteria, they show that normative criteria must co-exist with stakeholder goals and participant perceptions.

…Joss describes several approaches to evaluating consensus conferences: efficiency (organization and management), effectiveness (external impact and outcomes), formative study (concurrent look at structure and process with possible intervention), cross-cultural studies (wider cultural context comparisons), and cost-benefit analysis (cost-effectiveness) (Joss, 1995).

… Interestingly, while scholars have developed different evaluative frameworks, the methodologies used in evaluation are largely similar. They look at discourse, documentation, and social relationships, using some quantitative but mostly qualitative methodologies. Indeed, it could be said that evaluation has taken an ethnographic turn. Webler advocates discourse analysis with a particular focus on the participant perspective. Einsiedel conducted participant observations, collected materials used by participants, distributed questionnaires, recorded questions to the facilitator, and did interviews with randomly selected citizens and experts of interest. Horlick-Jones used some of the same methodologies as Einsiedel, as well as conducting media analysis and public opinion surveys. In addition, they divided their observations into structured observations (looking for specific behaviors) and ethnographic recording. Joss listed his methodologies the most specifically of the scholars reviewed in this paper; he used multiple methodologies, including keeping a log book and a document/files archive, conducting group discussions, handing out questionnaires, conducting interviews, asking participants to keep diaries, conducting a literature search, monitoring conferences in other settings, and audio-taping all of the procedures. Future evaluation work should discuss the merits and drawbacks of the methodologies used in order to inform and improve methodological procedures for the evaluation community.