Monday, November 7, 2016

Reading Check #5 and Assessment and Evaluation Practice

Chapter 12 dives into the evaluation components used to assess trainee skills and to evaluate the effectiveness of instruction. This was an interesting chapter because I believe assessment is a process in its own right and should be designed alongside the instructional content, not as an afterthought. The chapter covers how to construct objective tests, how to use rating instruments and rubrics to define skill level, and how to design open-ended surveys to collect feedback.

Multiple choice tests must present a direct relationship between instructional objectives and test items, and should be written in a clear, straightforward manner. Questions are easily assembled from a stem (a question or incomplete statement) plus a set of alternatives. Compared with true/false items, multiple choice is a better measure of higher-order learning; for example, items that ask learners to analyze, evaluate, contrast, predict, or synthesize information from graphs or tables test higher-order skills. A good tip from the reading: with true/false questions, be sure the entire statement is entirely true or entirely false.

Matching items can identify relationships, but they must be limited to six or seven items and kept as short as possible; another tip is to include one or two distractor items to prevent guessing. Constructed response tests with short answer items or essay questions are another measure to gauge learning; however, completion time varies greatly per student, and if the topic is not stated clearly, learners can veer off topic. Points to note: do not give students a choice of essay topic, grade blind, outline a model answer, and inform students of the grading criteria and conditions. Another example listed in the text is problem-solving questions based on problem-based learning.

Grading measures for the instructional designer include ratings of performance, checklists, and rating scales with values assigned to each element; all of these are subject to potential grading bias. The best method is a rubric. Rubrics give a descriptive, holistic characterization of the quality of a student's work, and they can be highly informative and useful for feedback. Another good tip, I thought, was the use of indirect checklist/rating measures in the form of a job-based survey or job application survey, which is great when trying to reduce cost. The last interesting assessment was the portfolio assessment, which can yield a richer, more tangible product and leaves a more meaningful impression on students.

To measure the effectiveness of instruction, it is important to note that attitudes cannot be measured directly. The point is for students to evaluate the instruction and suggest improvements. One measure is to describe affective outcomes, gauging success before and after a training or workshop. Another measure is observation and anecdotal records, but again, this can introduce bias and is not cost-effective or practical in many contexts. The most common assessment is the assessment of behavior, through a questionnaire or survey with open- and closed-ended items. The last measure listed was the interview, structured around reactions to discuss. This was another interesting way to evaluate programs that I had never thought of before.

For the group project on procedure learning, teaching how to write objectives using the ABCD model, I would select the following instruments to assess trainees and evaluate instruction: matching items, portfolio assessment, and affective outcomes.

I selected matching items because they are a quick and efficient way to assess trainees during instruction. During instruction, we can use a matching worksheet with the ABCD model elements defined in one column (plus one or two distractor items to prevent guessing) and 'Audience, Behavior, Condition, Degree' listed as selections in another column. With the matching items instrument, we will measure basic knowledge and understanding of the procedure and successful completion of one of the objectives.
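To make this concrete, here is a rough sketch in Python of how that worksheet could be represented and scored. The definition wording, the distractor terms, and the scoring function are my own illustrative assumptions, not something taken from the chapter or our project materials.

```python
# A minimal sketch of a matching worksheet for the ABCD model.
# Definition wording and distractor terms are hypothetical placeholders.

ANSWER_KEY = {
    "Who the learners are": "Audience",
    "What learners will be able to do": "Behavior",
    "The circumstances under which the behavior is performed": "Condition",
    "The standard of acceptable performance": "Degree",
}

# Choices include the four ABCD terms plus two distractors to discourage guessing.
CHOICES = ["Audience", "Behavior", "Condition", "Degree", "Duration", "Context"]

def score(responses: dict) -> float:
    """Return the fraction of definitions matched to the correct term."""
    correct = sum(1 for defn, term in ANSWER_KEY.items() if responses.get(defn) == term)
    return correct / len(ANSWER_KEY)

if __name__ == "__main__":
    sample = {
        "Who the learners are": "Audience",
        "What learners will be able to do": "Behavior",
        "The circumstances under which the behavior is performed": "Duration",  # wrong
        "The standard of acceptable performance": "Degree",
    }
    print(f"Score: {score(sample):.0%}")  # Score: 75%
```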

I selected portfolio assessment because students can use it to showcase their work at their own pace. During instruction, the portfolio assessment instrument will be used to complete an objective-writing practice task and then to synthesize the information learned throughout the instruction by producing an infographic. This can all be contained in a portfolio, which should be meaningful to learners because it is tangible. A companion instrument, then, would be a rubric to grade the portfolio. We will measure students' higher-order thinking and achievement of the learning objectives.
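As a rough idea of what that rubric could look like, here is a small sketch; the criteria, the performance levels, and the descriptors are hypothetical placeholders I made up for illustration, not a rubric from the text.

```python
# A minimal sketch of a rubric for grading the portfolio, with two
# hypothetical criteria and four descriptive levels (scored 0-3).

RUBRIC = {
    "Objective quality": [
        "Missing or unclear objectives",
        "Objectives include some ABCD elements",
        "Objectives include most ABCD elements",
        "Objectives clearly state audience, behavior, condition, and degree",
    ],
    "Infographic synthesis": [
        "Does not connect to the instruction",
        "Restates content with little synthesis",
        "Synthesizes most key ideas",
        "Synthesizes and organizes all key ideas clearly",
    ],
}

def score_portfolio(levels: dict) -> int:
    """Sum the 0-3 level chosen for each rubric criterion."""
    return sum(levels[criterion] for criterion in RUBRIC)

if __name__ == "__main__":
    chosen = {"Objective quality": 3, "Infographic synthesis": 2}
    max_score = 3 * len(RUBRIC)
    print(f"Portfolio score: {score_portfolio(chosen)}/{max_score}")  # 5/6
```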

The last item I selected was affective outcomes, to gather data on the before and after results of the instruction. This will take the form of a survey with a rating scale. Simple closed-ended questions will be asked before and after the instruction so we can compare results. An example question is "Ability to define the ABCD model," with a rating scale of 1-5. We will measure students' familiarity with the ABCD model (in line with the learning objectives) pre- and post-instruction.
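Here is a minimal sketch of how that pre/post comparison could be tallied; the ratings below are made-up sample data for illustration, not real survey results.

```python
# A minimal sketch of comparing pre- and post-instruction ratings on the
# hypothetical 1-5 item "Ability to define the ABCD model".

from statistics import mean

pre_ratings = [1, 2, 2, 3, 1, 2]   # sample responses before instruction
post_ratings = [4, 5, 3, 4, 4, 5]  # sample responses after instruction

gain = mean(post_ratings) - mean(pre_ratings)
print(f"Pre mean:  {mean(pre_ratings):.2f}")
print(f"Post mean: {mean(post_ratings):.2f}")
print(f"Average gain on the 1-5 scale: {gain:.2f}")
```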


