
ADDIE - EVALUATION






Introducing Evaluation

We have arrived at the final stage of the ADDIE model: Evaluation. Yet, is it the end? Evaluation is an integral part of the ADDIE model, but it is also an integrated part. In this chapter, evaluation itself will be evaluated, and by the end of this chapter you will be able to:

  • Summarize the theoretical foundations of evaluation and its application.
  • Define evaluation.
  • Categorize tasks and roles within the three types of evaluation.
  • Explain the process of evaluation and its role within instructional design.
  • Recognize leaders in the domain of evaluation.
  • Develop a plan for ongoing evaluation in an instructional design project.





Figure 1: ADDIE Model of Design (Fav203, 2012)

Why do we evaluate?

Evaluation helps us to determine whether our instructional implementation was effective in meeting our goals. As you can see in Figure 1, evaluation sits at the center of the ADDIE model, and it provides feedback to all stages of the process to continually improve our instructional design. Evaluation can answer questions such as: Have the learners obtained the knowledge and skills that are needed? Are our instructional goals effective for the requirements of the instructional program? Are our learners able to transfer their learning into the desired contextual setting? Do our lesson plans, instructional materials, media, assessments, etc. meet the learning needs? Does the implementation provide effective instruction and carry out the intended lesson plan and instructional objectives? Do we need to make any changes to our design to improve the effectiveness of, and overall satisfaction with, the instruction? These questions help shape the instruction, confirm what and to what extent the learner is learning, and validate the learning over time, supporting both the choices made in the instructional design and how the program holds up over time.

What Is Evaluation?

Introduction

To get started with evaluation, it is crucial to understand the overall picture of the process.  The use of varied and multiple forms of evaluation throughout the design cycle is one of the most important processes for an instructional designer to employ. To that end, this first section of the evaluation chapter attempts to explain what evaluation is in terms of the varying types of evaluation; evaluation’s overarching relationship throughout the ADDIE model; the need for both validity and reliability; how to develop standards of measurement; and evaluation’s application in both education and training.

What Does Evaluation Look Like?

The first, most important step to understanding evaluation is to develop a knowledge base about what the process of evaluation entails. In other words, a designer must understand evaluation in terms of its three components: formative, summative, and confirmative. Each of these forms of evaluation is examined in detail here, both through the definition of the form itself and an explanation of some of the key tools within each.

Formative

Historically speaking, formative evaluation was not the first of the evaluation processes to have been developed, but it is addressed first in this chapter because of its role within the design process. Yet, it is important to place the development of the theory behind formative evaluation in context.  Reiser (2001) summarizes the history of formative evaluation by explaining that the training materials developed by the U.S. government in response to Sputnik were implemented without verifying their effectiveness. These training programs were then later demonstrated to be lacking by Michael Scriven, who developed a procedure for testing and revision that became known as formative evaluation (Reiser, 2001).

Formative evaluation is the process of ongoing evaluation throughout the design process for the betterment of design and procedure within each stage. One way to think about this is to liken it to a chef tasting his food before he sends it out to the customer. Morrison, Ross, Kalman, and Kemp (2013) explain that the formative evaluation process utilizes data from media, instruction, and learner engagement to formulate a picture of learning from which the designer can make changes to the product before the final implementation. Boston (2002, p. 2) states the purpose of formative evaluation as “all activities that teachers and students undertake to get information that can be used diagnostically to alter teaching and learning.” Regardless of whether instructional designers or classroom practitioners conduct the practice, formative evaluation results in the improvement of instructional processes for the betterment of the learner.

To effectively conduct formative evaluation, instructional designers must consider a variety of data sources to create a full picture of the effectiveness of their design. Morrison et al. (2013) propose that connoisseur-based, decision-oriented, objective-based, public relations, and constructivist evaluations are each appropriate data points within the formative process. As such, an examination of each format in turn provides a framework for moving forward with formative evaluation.

Connoisseur-Based

Subject-matter experts, or SMEs, and design experts are the primary resources in connoisseur-based evaluations. These experts review the instructional analysis, performance objectives, instruction, tests, and other assessments to verify objective alignment, content accuracy, material appropriateness, test item validity, and sequencing. Each of these points allows the designer to improve the organization and flow of instruction, the accuracy of content, the readability of materials, the instructional practices, and total effectiveness (Morrison et al., 2013). In short, SMEs analyze the instruction and make suggestions for its improvement.

Decision-Oriented

As instructional designers, we must often make choices within the program of study being developed that require reflective thought and consideration. Morrison et al. (2013) describe this type of formative evaluation as decision-oriented. The questions asked during decision-oriented evaluations may develop out of the professional knowledge of an instructional designer or design team. These questions subsequently require the designer to develop further tools to assess the question, and as such should be addressed at a time when change is still an option and financially prudent (Morrison et al., 2013).

Objective-Based

If a program of study is not delivering the desired results, a provision for possible change should be considered. Through an examination of the goals of a course of instruction, the success of a learner’s performance may be analyzed. This is the primary focus of objective-based evaluation. While formative changes are best made during the earlier stages of the ADDIE cycle, these changes may come later if the situation dictates it. Objective-based evaluations may generate such results. According to Morrison et al. (2013), when summative and confirmative evaluations demonstrate undesirable effects, the results may be used as a formative evaluation tool to make improvements. Morrison et al. (2013) recommend combining the results of objective-based evaluations with connoisseur-based evaluations, because the pre-test/post-test format that objective-based assessment typically employs yields data with a limited ability to direct change. However, Dimitrov and Rumrill (2003) suggest that analysis of variance and covariance statistical tests can be used to improve test design. The application of statistical analyses improves the validity and reliability of the design, which suggests that similar comparisons may also be useful in improving overall instruction.
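As a minimal illustration of the kind of pre-test/post-test comparison described above, the sketch below computes the mean score gain and a paired t statistic for a hypothetical cohort. The scores and the `paired_t` helper are invented for illustration; a real study would likely use ANCOVA or a dedicated statistics package, as Dimitrov and Rumrill discuss.

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired t statistic for pre-test/post-test score gains."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical scores (percent correct) for ten learners.
pre  = [55, 60, 48, 72, 66, 58, 61, 70, 52, 64]
post = [70, 75, 62, 80, 78, 66, 72, 81, 60, 77]

gain = mean(post) - mean(pre)
t = paired_t(pre, post)
print(f"mean gain: {gain:.1f} points, paired t = {t:.2f}")
```

A large t value indicates the gains are unlikely to be noise, giving the designer some evidence that the objectives, not chance, drove the improvement.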

Public Relations

Occasionally the formative process for a program may call for showing off the value of the project as it is being developed. Morrison et al. (2013, p. 325) refer to this form of formative data as “public-relations-inspired studies.” Borrowing from components of the other formats discussed above, this type of evaluation is a complementary process that combines data from various sources to generate funding and support for the program (Morrison et al., 2013). However, this process should happen during the later stages of development, because the presentation of underdeveloped programs may do more harm than good (e.g., the cancellation of pilot programs due to underwhelming results).

Constructivist Methods

Some models of evaluation are described as being behavior driven and biased. In response to those methods, multiple educational theorists have proposed the use of open-ended assessments that allow for multiple perspectives that can be defended by the learner (Richey, Klein, & Tracey, 2011). Such assessments pull deeply from constructivist learning theory. Duffy and Cunningham (1996) make the analogy that “an intelligence test measures intelligence but is not itself intelligence; an achievement test measures a sample of a learned domain but is not itself that domain. Like micrometers and rulers, intelligence and achievement tests are tools (metrics) applied to the variables but somehow distinct from them” (p. 17). How does this impact the formative nature of assessment? Constructivist methods are applicable within the development of instruction through the feedback of the learner, which shapes the nature of learning and how it is evaluated.

Summative

Dick et al. (2009) claim the ultimate summative evaluation question is “Did it solve the problem?” (p. 320). That is the essence of summative evaluation. Continuing with the chef analogy from above, one asks, “Did the customer enjoy the food?” The parties involved in the evaluation take the data and draw a conclusion about the effectiveness of the designed instruction. However, over time summative evaluation has developed into a process that is more complex than the initial question may let on. In modern instructional design, practitioners investigate multiple questions through testing to assess the learning that ideally happens. This differs from the formative evaluation above in that summative assessments are typically used to assess not the program but the learner. However, summative evaluations can also be used to assess the effectiveness of learning; learning efficiency and cost effectiveness; and attitudes and reactions to learning (Morrison et al., 2013).

Learning Effectiveness

Just as the overall process of summative evaluation is summarized above with one simple question, so can its effectiveness. How well did the student learn? Perhaps even, did we teach the learner the right thing? “Measurement of effectiveness can be ascertained from test scores, ratings of projects and performance, and records of observations of learners’ behavior” (Morrison et al., 2013, p. 328). However, maybe the single question is not enough. Dick et al. (2009) outline a comprehensive plan for summative evaluation throughout the design process, including collecting data from SMEs, field trials, and learner feedback. This shifts the focus from the learner to the final form of the instruction. Either way, the data collected tests the successfulness of the instruction and learning.

Learning Efficiency and Cost Effectiveness

While learning efficiency and cost effectiveness of the instruction are certainly distinct constructs, the successfulness of the former certainly impacts the latter. Learning efficiency is a matter of resources (e.g., time, instructors, facilities, etc.), and how those resources are used within the instruction to reach the goal of successful instruction (Morrison et al., 2013). Dick et al. (2009) recommend comparing the materials against an organization’s needs, target group, and resources. The end result is the analysis of the data to reach a final conclusion about cost effectiveness based on any number of prescribed formulas. Morrison et al. (2013) acknowledge the relationship between this form of summative evaluation and confirmative evaluation, and set the difference at the time it takes to implement the evaluation.
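One simple example of the kind of prescribed formula alluded to above is cost per successful learner: total program cost divided by the number of learners who met the objectives. The sketch below is hypothetical; the function name and figures are invented for illustration, not drawn from Morrison et al. or Dick et al.

```python
def cost_per_successful_learner(total_cost, learners, pass_rate):
    """Hypothetical cost-effectiveness index: dollars spent per learner
    who met the instructional objectives."""
    passed = learners * pass_rate
    if passed == 0:
        raise ValueError("no learners met the objectives")
    return total_cost / passed

# Hypothetical program: $24,000 total cost, 120 learners, 80% met objectives.
print(f"${cost_per_successful_learner(24_000, 120, 0.80):,.2f} per successful learner")
```

Comparing this index across two candidate designs (or across delivery formats) gives a concrete basis for the cost-effectiveness conclusion the evaluation must reach.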

Attitudes & Reactions to Learning

The attitudes and reactions to the learning, while integral to formative evaluation, can be summatively evaluated, as well. Morrison et al. (2013) explain there are two uses for attitudinal evaluation: evaluating the instruction and evaluating outcomes within the learning.  While a majority of objectives within learning are cognitive, psychomotor and affective objectives may also be goals of learning.  Summative evaluations often center on measuring achievement of objectives.  As a result, there is a natural connection between attitudes and the assessment of affective objectives.  Conversely, designers may utilize summative assessments that collect data on the final versions of their learning product.  This summative assessment measures the reactions to the learning.

Confirmative

The customer ate the food and enjoyed it. But did they come back? The ongoing value of learning is the driving question behind confirmative evaluation. Confirmative evaluation methods may not differ much from formative and summative methods outside of the element of time. Confirmative evaluation seeks to answer questions about the learner and the context for learning. Moseley and Solomon (1997) describe confirmative evaluation as falling on a continuum between a customer’s or learner’s expectations and assessments.

Reference:

Calhoun, C. (2020, February 22). ADDIE: Evaluation. Cheryl D. Calhoun. Retrieved February 28, 2022, from https://cheryldcalhoun.com/2014/11/24/addie-evaluation/
