Do We Make a Difference?
2016 Distinguished Service Ruby Lecture
October 26, 2016
Pennie K. Crinion, Ed.D.
University of Illinois Extension


Introduction/Appreciation

Good afternoon. So here I am, at my 17th National ESP Conference, never imagining that I would be standing here as the Distinguished Service Ruby Award recipient.  I remember my first ESP National Conference, held in Annapolis, Maryland, and my thoughts about the Ruby Award.  I was awed by the credentials of the award’s recipient and by the requirement to deliver a lecture. I was also very impressed with the quality of the concurrent sessions, the speakers, and the opportunities to meet Extension colleagues from Oregon to New York and all states in between.  In short, I have greatly appreciated the ESP professional development opportunities to learn about new programs and to hone my leadership skills while serving on various ESP national committees.

My sincerest thanks to ESP for this award.  I feel truly honored to have been chosen as the ESP Distinguished Service Ruby Award recipient.  I also want to thank my Illinois Alpha Nu chapter colleagues who were insistent on nominating me for this award and gave me their own “Ruby Award” at our last Alpha Nu chapter state meeting. And I want to thank my family for supporting me for the past 18 years in what has been a weekly 140-mile commute from a suburb of Chicago to my office which is currently on the University of Illinois campus in Champaign-Urbana.

Once again, thank you to all of you, and thank you for the opportunity to share my thoughts on the question: “Do We Make a Difference?”

Let me clarify this question and make sure we are on the same page by further defining “difference.”  In my remarks today “difference” refers to having a positive effect on individuals and communities who seek information and participate in Extension programs.  That positive effect can further be described as changes in awareness, knowledge, attitudes, skills, aspirations, and practices of individuals who attend Extension’s educational programs.  In addition, “difference” also refers to positive changes in communities brought about by Extension faculty and staff working in partnership with community members and leaders.  Impact is another term for difference.

At some early point in my life, perhaps through interactions with my parents and teachers and through my religious upbringing, I determined that I was expected to “make a difference.”  I also believed that difference would need to be focused on meeting the needs of others.  I imagine I’m not the only one in this room who aspires to make a difference. My comments today are not designed to prompt personal introspection about your life or mine.  Instead, I want to use this opportunity to focus on evaluation of Extension’s educational delivery and impact.

After completing high school, I earned my bachelor’s degree and K-12 teacher certification. I then completed my master’s degree in adult education while employed in my first Extension position.  I was well-prepared to be an educator after reading Robert Mager’s books, Goal Analysis and Preparing Instructional Objectives. Drawing on those resources, I was able to identify goals and objectives for Extension programs, and I made sure to use action verbs that could be measured as I wrote those objectives.  In planning a program, I applied my skills in choosing methods and developing teaching outlines.  I also applied the skills I learned in a semester-long graduate course on evaluating educational outcomes: testing for knowledge that students gained and applied.

After assuming the position of Extension regional director, I soon discovered that a majority of the field staff didn’t conduct program evaluations, and that some staff collected data only on their presentation skills.  So I stressed the need for conducting evaluations, particularly impact evaluations, during performance conferences with the field staff in my region of responsibility.  And I rewarded those who did so with higher ratings and the highest salary increases.  My reasoning was that they had made an effort to document the changes in individuals and communities brought about by their educational work.

So why don’t some Extension staff evaluate programs?

Let me share a personal example.  I don’t like peas, and I usually separate them out on my plate if I find them unexpectedly in my food when eating out.  My mother said I spit out the first spoonful as a baby and refused them thereafter, even when she tried to disguise them by mixing them in with other vegetables. I speculate that the reason I didn’t like eating peas was their texture; at least that’s my assumption today.

So here’s a list of rhetorical questions that suggests possible reasons some staff don’t evaluate their programs.

So why don’t some Extension staff evaluate programs?  Is it

·         A lack of academic training in developing and conducting an evaluation?

·         A lack of time to prepare and conduct the evaluation?

·         A belief that the information isn’t being used by others?

·         A fear that participant feedback regarding their program delivery won’t be positive?

·         A fear that changes in knowledge or practices won’t be significant?

I considered asking staff this question at a training I was conducting but decided they would be unlikely to give an honest answer.

And by the way, the only time I ever ate peas was when I was student teaching and invited to have dinner with my supervising teacher. That woman could strike fear into the hearts and minds of her students and student teachers. So I fearfully ate her pea salad!

However, I don’t believe fear of their regional director is likely the reason some staff don’t evaluate their programs. 

Before we continue to explore making a difference, let’s take a few minutes to focus on the history, mandates, and uses of evaluation.

Extension evaluation efforts are a means of gathering data designed to be useful in decision-making: to 1) improve programs, 2) manage the organization, and 3) document impact that sustains public relations with our participants and stakeholders.

This latter use received a great deal of attention beginning in the 1970s, when the country was experiencing an economic slowdown.  Organizations were being judged on results, not just efforts, by taxpayers who wanted to “get their money’s worth.”  In response, Congress issued a mandate for a national evaluation of Extension.  In what became known as the era of accountability, Extension began to ask itself critical questions.  Extension leaders recognized that evaluation might play a prominent role in gathering data to answer questions from the public and, thus, ensure continued funding. These are the questions they faced.

·         Are programs focusing on the most critical needs?

·         Could and should Extension serve a broader range of clientele?

·         Does Extension adapt quickly enough to changes in social, economic, and environmental conditions?

·         Are Extension programs cost effective? 

·         Are programs having intended impacts?

·         Are we making a difference?

Extension has continued to recognize the need for accountability. Our annual federal reports are expected to include documented, measurable program impact on individuals and communities.  Bennett’s hierarchy and logic models have helped us define the levels of impact that we should be measuring.  In addition, our current efforts to develop and share public value statements that encompass those data are another example of taking action and reporting impact.

So are we using evaluation to document differences we may be making?

My Florida colleague Glenn Israel and his colleagues Alexa Lamm and Amy Harder conducted a study examining Extension professionals’ perceptions of how evaluation is used.  The results, published in 2011, indicated that Extension professionals engage in a wide variety of evaluation behaviors.  The majority keep program participation records and conduct posttests of individual activities and of their overall programs.  However, Israel suggests that they are engaging in evaluation only at the most basic level.

The participants in the study felt evaluation was a critical tool for improving Extension programs, and they used their evaluation results to make decisions about their programs.  However, they were not sure their results served the information needs of their community stakeholders, and they only slightly agreed that county directors were using their evaluation data to make decisions.

A second national study conducted by Israel and colleagues in eight states gives additional insight into the behaviors in which Extension professionals are engaging.  Their results indicated that while Extension professionals are not hired for their evaluation expertise, many do need and value evaluation and see themselves as having some evaluation skills and abilities.

So how do we build individual and organizational evaluation capacity? 

Building evaluation capacity is a frequent topic of conversation among those of us who have leadership responsibility for evaluation.  We discuss the uses of evaluation, its challenges, and the organizational commitment and approaches needed to build capacity for evaluation.

Here are some of those challenges.

1.    Tailoring accountability data for multiple stakeholders and decision makers.

2.    Going beyond measuring knowledge change to identify application of knowledge by our participants.

3.    Going beyond application of knowledge by our participants and measuring changes in our communities with respect to social, environmental, and economic conditions.

4.    Determining the economic value of our educational programs.

5.    Declining response rates for evaluations of online programs.

Our Extension approaches vary in building capacity to address these and other evaluation challenges.

One approach is to offer training and support to Extension educators, placing the expectation for conducting effective program evaluation solely on field faculty and staff.   Others question this approach.  In Illinois, multi-county units have been hiring individuals with skills in communication who can respond to challenge #1.  These staff members can tailor reports using evaluation findings to match the needs and interests of various stakeholders and decision-makers regarding Extension’s impact. They also can include evaluation findings in releases to media outlets, enhancing current and potential program participants’ ability to advocate for Extension’s value to stakeholders and decision-makers.

Another approach to capacity building is to hire one or more evaluation “specialists” who have an academic background and experience in evaluation. Their role would focus on providing or helping develop faculty and staff skills in data analysis. These specialists would also provide expertise in the development of a variety of evaluation methods.  Examples of these methods for meeting challenges #2, #3, #4, and #5 include 1) using a control group, 2) random sampling, 3) collecting data at the beginning and end of a program series or using a retrospective post-pre format, and 4) following up with participants on application of knowledge gained.
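
To make the third method concrete, here is a minimal sketch in Python of how paired pre/post knowledge scores from a program series might be summarized with a paired t-test. The column names and scores below are hypothetical illustrations, not data from any Extension program; a retrospective post-pre instrument would be analyzed the same way once both ratings are collected at the end of the program.

    # A minimal sketch (hypothetical data, not from any Extension program) of
    # summarizing paired pre/post knowledge scores with a paired t-test.
    import pandas as pd
    from scipy import stats

    # Hypothetical scores collected at the start and end of a program series.
    scores = pd.DataFrame({
        "pre_score":  [4, 5, 3, 6, 4, 5, 2, 6],
        "post_score": [6, 7, 5, 7, 6, 6, 4, 8],
    })

    # Average change in knowledge score across participants.
    mean_change = (scores["post_score"] - scores["pre_score"]).mean()

    # Paired t-test: did scores change meaningfully from pre to post?
    t_stat, p_value = stats.ttest_rel(scores["post_score"], scores["pre_score"])

    print(f"Average knowledge gain: {mean_change:.2f} points")
    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")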

I recognize that land-grant institution faculty and administrators would expect evidence of validity and reliability when partnering with Extension on grants; such partnerships would enhance the promotion and tenure efforts of Extension faculty and staff.  For partnerships that involve multi-state participation or determination of the economic value of our educational programs, Extension directors need to provide budget for additional evaluation specialists.  Funds would support the teaming of these evaluation specialists with faculty involved in research-specific efforts to collect data on social, environmental, and economic conditions.

Heather Boyd, a former Virginia Tech faculty member who is now at the University of Notre Dame, suggests that administrators and internal evaluators should engage in evaluation-focused conversations that involve stakeholders, field faculty and staff, and specialists to address what is realistic and what is possible to evaluate. They need to set a positive tone for what can be done.  They also need to acknowledge and communicate that both program improvement and program results are important findings in evaluation processes.  Administrators and evaluators working together can create and communicate expectations that contribute positively to the organizational climate for evaluation and evaluation capacity building.

A third approach encompasses building evaluation capacity and developing organizational support broadly across the organization.  The evaluation specialist works with program field faculty and staff to conduct program evaluations and leads opportunities that involve and require research-based and evidence-based data collection.

Identification and support of Extension Evaluation Champions is another way to build evaluation capacity.  Evaluation champions are people who practice and promote program evaluation within their organization but have no formal appointment as evaluators.  They are catalysts for improvement who advocate, role model, informally mentor, and assist in training their colleagues. They can be found at all levels of the organization.  I believe they are an important key to the success of evaluation capacity building efforts in Extension.

Evaluation Champions

Recently I have been involved in a multi-state research effort to gather information from Extension Evaluation Champions.  Our objective was to better understand the actions, motivation, and needs of these identified champions.  Our goal was to build evaluation capacity by supporting current champions and recruiting and developing others. The findings of that effort are being published in two articles in the October 2016 Journal of Human Sciences and Extension.

Our study identified ten faculty and staff members from each of four states, nominated by their peers based on their advocacy, practice, and/or training efforts for conducting program evaluations.  Together with two colleagues from North Carolina and Virginia, we conducted qualitative phone interviews designed to identify these 40 champions’ roles and motivations related to evaluating programs.

These individuals were serving at all organizational staffing levels and varied in experience, academic discipline(s), and practice networks.  Their reasons for advocating for evaluation most often focused on funding or accountability.

So what motivated them to conduct evaluations initially?  A few of these champions (like me) told us their motivation began in grad school.  A few others were motivated to take additional formal coursework.  More typically, they learned evaluation skills through mentoring and project team experiences. 

For all respondents, much of their evaluation practice is self-taught through trial and error or learned from a peer, expert, or mentor.  Many of these champions view evaluation as a means to “tell the Extension story,” improve programs, and “make a difference in their community.” Sustained motivation is strongly related to early and intensive training, reinforced by rewarding evaluation practice.

Evaluation champions’ practice incorporates thinking about, initiating, measuring, and using data. They frequently manage evaluation by engaging campus experts, community partners, and funders and by working in project teams. Their priorities focus on using evaluation processes and data to improve programs and to improve their own program delivery skills.  They help other program staff and clients succeed and effectively interpret program goals and results to stakeholders.

Many champions also provided short- or long-term mentoring and training in the context of project teams.  Conducting professional development for others often occurs when “teachable moments” arise. Topics most often cited were planning an evaluation, selecting methods, and evaluation use.

One of the most enduring themes across advocacy, practice, and training experiences is champions’ expressed passion for “making a difference” and confirming that through evaluation. 

Evaluation Needs 

With respect to Extension professionals’ needs to improve evaluation understanding and practice, nearly three-fourths of the evaluation champions expressed a need for training.  Most sought assistance with specialized skills, such as completing Institutional Review Board applications, and guidance on the questions they need to ask. Others sought to develop skills in statistics, content analysis, and interpreting and communicating evaluation results.

Beyond the need for training resources, respondents desired specific and clear guidelines on when, what, and to what degree to evaluate, especially respondents with reporting responsibilities and those working toward promotion. Inadequate resources were a frequent theme.  Deficits in time to complete evaluations, organizational support, and recognition were cited more often than money as critical for effective evaluations.

Evaluation champions in more than one state wanted to know more about how their data were used at the state level.  Despite citing constraints on training, these champions once again consistently emphasized commitment to their work and desire to make a difference.

The champions were also asked about professional needs for evaluation support through technology.  Champions cited website use and the need for more and better online tools for data collection, social media analysis, and software for reporting, as well as online access to experts.

Our study suggests that Extension organizations can, do, and should support the work of evaluation champions.     



So do we make a difference? 

I believe the jury is still out, but the question should remain a topic of conversation and a goal. In my current work with teams who offer statewide programs, I help design evaluations, tally results, and draft reports for state and federal stakeholders, and I share the statewide impact reports, as well as drafts of local impact, with the teams.  Most often the impact focuses on knowledge change, with very little follow-up on the application of that knowledge.

The Israel studies, which surveyed all staff in one state, suggest that a substantial percentage of Extension professionals are doing just enough to complete mandatory reports. Professionals who valued evaluation for their own use were more likely to conduct in-depth evaluations. However, as indicated earlier, the authors found that many do need and value evaluation and see themselves as having some evaluation skills and abilities.

The evaluation champions in our study, nominated by their peers and having applied their evaluation skills, clearly verbalized their commitment to evaluation and their interest in making a difference.

Both studies found that leadership and organizational culture had an effect on evaluation.  

So here is a list I am asking you to consider, aimed at enhancing the gathering of data that shows we make a difference.

Extension Administrators Can and Should

·         Provide clear communication demonstrating that administration supports and values evaluation that measures impacts.

·         Provide specific, clear guidelines on evaluation expectations that encompass measuring impacts as well as decision-making for program improvement.

·         Provide for recognition of evaluation efforts that document impacts.

·         Consider the need for establishing evaluation specialist positions.

If you have a state- or local-level administrative role, you play an important part in clearly communicating support for, and expectations regarding, the evaluation of Extension programs. You can provide recognition of evaluation efforts that document impacts. In addition, your efforts to recognize and communicate the use of evaluation findings that document participant changes in knowledge, skills, and attitudes; practice changes; and longer-term social, environmental, and economic changes are equally important.

Support/Take Action/Encourage Colleagues to Experience the Following:

·         Early and intensive new staff training that addresses evaluation expectations and skills.

·         Opportunities for staff training on a variety of topics, at multiple levels, with support resources in diverse formats and an emphasis on online tools that enhance evaluation documenting impacts.

·         Opportunities for mentoring and project team experiences.

·         Creating and engaging in discussions about gathering data to document impacts at the state, local, and team levels.

·         Opportunities for connections with and assistance from evaluation specialists.

If you are an Extension faculty or staff member who delivers programs, consider helping to establish and conduct training for new staff.  The North Central Extension state leaders for program development are in the process of exploring the creation of a portal for sharing program planning, evaluation, and reporting training opportunities and resources.  Also consider taking the opportunity to mentor others and lead team conversations, and consider advocating for connections with and assistance from evaluation specialists.

In closing, I hope you will join me in choosing one or more bullets to address and support as Extension Evaluation Champions and as vital partners in a larger program development plan for making a difference.  Thank you for allowing me to share my thoughts, observations, and suggestions. And thank you again for this wonderful honor. I consider it a sign that I have perhaps made a difference as an Extension professional.



References

Andrews, Mary. (1983). Evaluation: An Essential Process. Journal of Extension. Volume 21, Number 5, pp. 8-12.

Boyd, Heather H. (2009). Practical Tips for Evaluators and Administrators to Work Together in Building Evaluation Capacity. Journal of Extension. Volume 47, Number 2, Article Number 2IAW1.

Lamm, Alexa J., & Israel, Glenn D. (2011). Organizational Approach to Understanding Evaluation in Extension. Journal of Agricultural Education. Volume 52, Number 4, pp. 136-149.

Lamm, Alexa J., Israel, Glenn D., & Harder, Amy. (2011). Getting to the Bottom Line: How Using Evaluation Results to Enhance Extension Programs Can Lead to Greater Levels of Accountability. Journal of Agricultural Education. Volume 53, Number 4, pp. 44-55.

Mager, Robert F. (1997). Goal Analysis: How to Clarify Your Goals to Actually Achieve Them. Revised Third Edition. Belmont, California: David S. Lake Publishers.

Mager, Robert F. (1997). Preparing Instructional Objectives: A Critical Tool in the Development of Effective Instruction. Revised Third Edition. Belmont, California: David S. Lake Publishers.

Rennekamp, Roger A., & Arnold, Mary E. (2009). What Progress, Program Evaluation? Reflections on a Quarter-Century of Extension Evaluation Practice. Journal of Extension. Volume 47, Number 3, Article Number 3COM1.

Silliman, Benjamin, Crinion, Pennie, & Archibald, Thomas. (2016). Evaluation Champions: What They Do, Why They Do It, and Why It Matters to Organizations. Journal of Human Sciences and Extension. Volume 4, Number 3, October 2016. ISSN 2325-5226.

Silliman, Benjamin, Crinion, Pennie, & Archibald, Thomas. (2016). Evaluation Champions: What They Need and Where They Fit in Organizational Learning. Journal of Human Sciences and Extension. Volume 4, Number 3, October 2016. ISSN 2325-5226.