Taxonomy | Activities | Examples
1. Reviewing | Supports students in identifying correct and incorrect examples of objectives | Google Docs (matching worksheet)
2. Communicating | Supports students in their interaction with other students as well as the teacher | Audio/video conference tool (specifically Go to Meeting)
3. Connecting | Supports students in synthesizing information learned from the lesson | MS Word (infographic from template)
4. Collaborating | Supports student engagement in learning activities with other students | Wiki (for reference post-instruction)
5. Evaluating | Supports students in assessing the program and their work | Survey Monkey (for final assessment and course evaluation)
Tuesday, November 29, 2016
Technology Exploration
Reading Check #6 - Chapter 15 Summary
Chapter 15 Summary: Planning for Instructional Implementation
The reading suggests using persuasion so that instructors or
participants will readily adopt the instruction, a process known as ‘planned change.’ The
job of the instructional designer is to cultivate buy-in for adopting the
instructional intervention. There are four components to the process:
innovation, communication, time, and social system. Innovation depends on
relative advantage (usefulness), compatibility with users, innovation
complexity (if it is too difficult, users will be reluctant to use it), the ability
to try it on a small scale first (a sample), and the ability to observe the results.
Communication is key to the process and determines who should communicate the
planned change, whether the ID or the SME; the recommendation is whoever
has the most in common with the target group. Time concerns who
adopts the change first, which varies depending on the product. The social
system involves the relationships among members of the target group and who
will communicate the benefits of adoption, or resistance to it.
The CLER model stands
for configuration, linkages, environment, and resources. Configuration
represents the networks of relationships within the organization, with four
categories: individuals, groups, institutions, and cultures. Linkages are the
informal and formal relationships that serve as communication links.
The environment represents the physical, social, and intellectual forces
contained within a configuration; it can affect the innovation by providing a
supportive, inhibitive, or neutral atmosphere. Resources support
the implementation process and can take the form of money or financing,
company infrastructure, a database, web-based instruction, personnel
to provide training or facilitation, or even the use of tablets. In planning
instruction using the CLER model, first consider the company’s configuration,
the individual instructional designer, the groups, and the institutions to establish
the key relationships. Then determine the management linkages and the supporting
environment, and whether it supports the project or not.
Another model is TPC: technical, political, and
cultural. In more detail, the technical dimension recognizes how the
innovation will affect work processes, the political dimension concerns the power and
influence of relationships, and the cultural dimension recognizes the company’s values.
There are many decisions to make when implementing training programs.
First, there is program promotion, or getting members to enroll in the training
by advertising it. Next, there are many delivery considerations, depending on
the size and structure of the organization. Classroom facilities such as
training rooms, as well as media equipment selection, are considerations for
delivery. Instructors are another implementation decision; this involves
scheduling to minimize the impact on productivity, as well as instructor training to
increase knowledge and skills. Supervisors play an important role in preparing
people for training and in clarifying what the employee is expected to do.
Monday, November 7, 2016
Reading Check #5 and Assessment and Evaluation Practice
Chapter 12 dives into evaluation components for assessing
trainee skills and evaluating the effectiveness of instruction. This was an
interesting chapter because I believe assessment is a process in its own right, and
should be designed alongside instructional content, not as an afterthought. The
chapter covers how to construct objective tests, how to use rating instruments or
rubrics to define skill level, and how to design open-ended surveys to collect
feedback.
Multiple-choice tests must present a direct relationship between
instructional objectives and test items, and should be written in an easy,
straightforward manner. Questions consist of a stem (a
question or incomplete statement) plus alternatives. Multiple choice, compared
with true/false, is a better measure of higher-order learning. Graphs
or tables to analyze, evaluate, contrast, predict, and synthesize information are
examples of multiple-choice items testing higher-order learning. A good tip provided
in the reading is that with true/false questions, the
statement must be entirely true or entirely false.
Matching items can identify relationships, but the list
must be limited to six or seven items, each as short as possible; another tip is to
include one or two distractor items to prevent guessing. Constructed-response tests
with short-answer items or essay questions are another measure to gauge
learning; however, time to complete will vary greatly per student, and if the
topic is not stated clearly, learners can veer off topic. Points to note: do not
give students a choice of essay topic, grade anonymously, outline a model
answer, and inform students of the grading criteria and conditions. Another example
listed in the text is problem-solving questions based on problem-based learning.
Grading measures available to the instructional designer include ratings
of performance, checklists, and rating scales with values assigned to each
element; all are subject to potential grading bias. The best method is a rubric. Rubrics
give a descriptive, holistic characterization of the quality of a student’s work,
and they can be highly informative and useful for feedback. Another good tip I
thought was the use of ‘indirect checklist/rating measures’ in the form of a
job-based survey or job application survey. This is great when trying to reduce
cost. The last interesting assessment was the portfolio assessment, which can
yield a richer, tangible product and leaves a more meaningful impression on
students.
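As a rough sketch of how a rating scale with values assigned to each element might be tallied, here is a minimal example. The criteria, weights, and scores are all hypothetical, not from the text:

```python
# Hypothetical three-criterion rubric, each criterion scored 0-4,
# with weights reflecting how much each contributes to the grade.
rubric = {
    "clarity of objective":   {"weight": 0.4, "score": 3},
    "use of ABCD components": {"weight": 0.4, "score": 4},
    "mechanics":              {"weight": 0.2, "score": 2},
}

# Weighted total earned vs. the maximum possible (all criteria at 4).
total = sum(c["weight"] * c["score"] for c in rubric.values())
max_total = sum(c["weight"] * 4 for c in rubric.values())
print(f"{total:.1f} / {max_total:.1f}")  # 3.2 / 4.0
```

The same descriptive rubric a grader reads can double as the scoring scheme, which is part of why rubrics reduce (though do not eliminate) grading bias.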
To measure the effectiveness of instruction, it is important
to note that attitudes cannot be measured directly. The point is for students
to evaluate the instruction and suggest improvements. One measure is to describe affective
outcomes, to gauge success before and after the training or workshop. Another measure
is observation and anecdotal records, but again, this can lead to potential
bias and is not cost-effective or practical in many contexts. The most common
assessment is the assessment of behavior, through a questionnaire or survey with
open- and closed-ended items. The last measure listed was a structured interview
with reactions to discuss. This was another interesting way to evaluate
programs that I had never thought of before.
For the group project on procedural learning (how to
write objectives using the ABCD model), I would select the following instruments
to assess trainees and evaluate the instruction: matching items, portfolio
assessment, and affective outcomes.
I selected matching items because they are a quick and
efficient way to assess trainees during instruction. We can use a matching
worksheet with the ABCD model components defined in one column (plus one or two
distractor items to prevent guessing) and ‘Audience, Behavior, Condition, Degree’
listed as selections in another column. With the matching items instrument we
will measure basic knowledge and understanding of the procedure, and successful
completion of one of the objectives.
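The worksheet described above can be sketched in code. This is a minimal illustration only; the definitions and distractor wording are my own placeholders, not content from the instruction:

```python
import random

# Hypothetical answer key for the ABCD matching worksheet:
# model components in one column, definitions in the other.
answer_key = {
    "Audience": "Who the learners are",
    "Behavior": "What the learner will be able to do",
    "Condition": "The circumstances under which the behavior occurs",
    "Degree": "The criterion for acceptable performance",
}

# One or two distractors are mixed in to prevent guessing.
distractors = [
    "The technology used to deliver the lesson",
    "The instructor's qualifications",
]

def build_worksheet(key, extras, seed=0):
    """Return the shuffled definition column for the worksheet."""
    choices = list(key.values()) + extras
    random.Random(seed).shuffle(choices)
    return choices

def grade(responses, key):
    """Count correct component-to-definition matches."""
    return sum(1 for term, ans in responses.items() if key.get(term) == ans)

choices = build_worksheet(answer_key, distractors)
print(len(choices))                    # 6 choices for 4 terms
print(grade(answer_key, answer_key))   # a perfect paper scores 4
```

Because there are more definitions than terms, a learner cannot finish the last item by elimination alone, which is the point of the distractors.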
I selected portfolio assessment because students can use
it to showcase their work at their own pace. During instruction, the
portfolio assessment instrument will be used to complete an objective-writing
practice task and then to synthesize the information learned throughout the
instruction by producing an infographic. This can all be contained in
a portfolio, which should be meaningful to learners because it is tangible.
Another instrument, then, would be a rubric to grade the portfolio assessment. We
will measure students’ higher-order thinking and achievement of the learning
objectives.
The last item I selected was affective outcomes, to gather
data before and after the instruction. This will take the
form of a survey with a rating scale. Simple closed-ended questions will be
asked before and after the instruction so the results can be compared. An example
item is “Ability to define the ABCD model,” rated on a scale of 1–5. We
will measure students’ familiarity with the ABCD model (in
line with the learning objectives) pre- and post-instruction.
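The pre/post comparison boils down to simple arithmetic on the rating-scale responses. A minimal sketch, with made-up ratings standing in for real survey data:

```python
# Hypothetical 1-5 self-ratings for "Ability to define the ABCD model",
# collected from the same five learners before and after instruction.
pre  = [1, 2, 2, 1, 3]
post = [4, 5, 4, 3, 5]

def mean(ratings):
    """Average rating across respondents."""
    return sum(ratings) / len(ratings)

# The gain (post mean minus pre mean) is the headline data point.
gain = mean(post) - mean(pre)
print(f"pre={mean(pre):.1f}, post={mean(post):.1f}, gain={gain:+.1f}")
# prints: pre=1.8, post=4.2, gain=+2.4
```

Keeping the same closed-ended items on both surveys is what makes the two means directly comparable.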