Date and time: Wednesday 6 July and Wednesday 13 July 2022, 9.30am to 1.00pm AEST (registration from 9.15am). This is a full-day workshop delivered in two sessions; registrants are to attend both.
Venue: Via Zoom. Details will be emailed to registrants shortly before the workshop start time.
Facilitator: Dr. Ian Patrick
Register online by: 4 July 2022. Spaces are limited to 25 participants.
The workshop will provide participants with insight into theory-based approaches to evaluation, and specifically into the role of Program Theory and Program Logic in giving clear understanding, focus and direction to evaluation practice. The use of Program Theory and Program Logic will be detailed within a staged conceptual model, with guidance on how they can be applied in the planning and implementation of an evaluation.
Areas covered in the workshop include the use of Program Theory and Program Logic to:
- Identify the expected cause and effect relationships within a program, and the critical assumptions which underpin whether anticipated change occurs.
- Establish relationships between the more operational constructs of inputs, activities, outputs, outcomes, and impacts as they apply to a program.
- Identify critical areas of focus for monitoring and evaluation, including determining evaluation questions across different evaluation domains.
The role of stakeholders in the development of the Program Theory and Program Logic, and ways to promote their participation, will be a point of emphasis. The workshop will consider how monitoring and evaluation activities can establish the validity of the Program Theory and Program Logic and assist in adjusting these models as a program matures or understandings of its focus change. Constraints and limitations in the use of Program Theory and Program Logic will also be identified, together with common pitfalls in implementation and means to address these.
The objectives of the workshop are to:
- Develop insight into theory-based approaches to evaluation, and specifically into Program Theory and Program Logic
- Become conversant with different approaches to constructing Program Theory and Program Logic models, including the critical role of assumptions
- Consider how Program Theory and Program Logic apply to different stages of an evaluation, including developing evaluation questions, and how models can be adjusted as the program matures
- Learn to avoid common pitfalls associated with the use of theory-based approaches
This workshop aligns with competencies in the AES Evaluator’s Professional Learning Competency Framework. The identified domains are:
- Domain 1 – Evaluative attitude and professional practice
- Domain 2 – Evaluation theory
- Domain 3 – Culture, stakeholders and context
- Domain 4 – Research methods and systematic inquiry
- Domain 7 – Evaluation activities
Who should attend?
This workshop is pitched at beginner to intermediate level and does not assume prior experience with theory-based approaches. It is suitable for those involved in the planning, implementation or management of monitoring and evaluation activities.
Workshop start times
- VIC, NSW, ACT, TAS: 9.30am
- SA, NT: 9.00am
- WA: 7.30am
- New Zealand: 11.30am
- For other time zones please go to https://www.timeanddate.com/worldclock/converter.html
About the facilitator
Dr. Ian Patrick is a self-employed consultant undertaking evaluation-related roles in Australia and the Asia-Pacific region. Ian has considerable experience as a trainer and has delivered workshops in areas such as Developing Monitoring and Evaluation Frameworks, Introduction to Monitoring and Evaluation, Advanced Monitoring and Evaluation, Impact Assessment, and Participatory Evaluation. This experience spans Australia, New Zealand, the United States, the UK, Ireland and a range of developing countries, particularly in the Asia-Pacific region. Ian is an Honorary Senior Fellow with the School of Social and Political Sciences, University of Melbourne. He is co-author of the SAGE text Developing Monitoring and Evaluation Frameworks (2016).