Online Assessment
and Measurement:
Case Studies from
Higher Education, K-12
and Corporate
Scott L. Howell
Brigham Young University, USA
Mary Hricko
Kent State University, USA
Copyright © 2006 by Idea Group Inc. All rights reserved. No part of this book may be reproduced,
stored or distributed in any form or by any means, electronic or mechanical, including photocopying,
without written permission from the publisher.
Product or company names used in this book are for identification purposes only. Inclusion of the
names of the products or companies does not indicate a claim of ownership by IGI of the trademark
or registered trademark.
All work contributed to this book is new, previously-unpublished material. Each chapter is assigned to at
least 2-3 expert reviewers and is subject to a blind, peer review by these reviewers. The views expressed
in this book are those of the authors, but not necessarily of the publisher.
Online Assessment and
Measurement:
Case Studies from Higher
Education, K-12 and Corporate
Table of Contents
Preface ........................................................................................................................viii
Chapter I
Continuous Assessment for Improved Student Outcomes:
Examples from WebCT’s Exemplary Course Project ........................................... 1
Daniel Volchok, WebCT, USA
Maisie Caines, College of the North Atlantic, Canada
David Graf, Nova Southeastern University, USA
Chapter II
Coordinating a Distance Learning Software Rollout with the Needs
and Visions of a Mature Test Organization: Political and Technical
Lessons Learned ........................................................................................................ 15
Luke Fernandez, Weber State University, USA
Chapter III
Assessing the Relationship between Learner Satisfaction and Faculty
Participation in Online Course Discussions ............................................................. 27
Dana Offerman, Capella University, USA
Kimberly Pearce, Capella University, USA
Christopher Tassava, Capella University, USA
Chapter IV
Authentic Online Assessment: Three Case Studies in Teacher Education ...... 42
Mike Keppell, The Hong Kong Institute of Education, Hong Kong
Eliza Au Kit Oi, The Hong Kong Institute of Education, Hong Kong
Ada Ma Wai Wing, The Hong Kong Institute of Education, Hong Kong
Chapter V
Electronic Tools for Online Assessments: An Illustrative Case Study
from Teacher Education .......................................................................................... 62
Jon Margerum-Leys, Eastern Michigan University, USA
Kristin M. Bass, University of California, Berkeley, USA
Chapter VI
Online Assessment of Foreign Language Proficiency: Meeting
Development, Design and Delivery Challenges ................................................... 82
Paula M. Winke, Michigan State University, USA
Chapter VII
Using Web-Enabled Technology in a Performance-Based Accreditation
Environment .............................................................................................................. 98
John C. Wise, Penn State University, USA
Dhaneshwar Lall, Penn State University, USA
Peter J. Shull, Penn State University, USA
Dhushy Sathianathan, Penn State University, USA
Sang Ha Lee, Penn State University, USA
Chapter VIII
Moving Beyond Objective Testing in Online Assessment ................................ 116
Helen S. Ashton, Heriot-Watt University, UK
Cliff E. Beevers, Heriot-Watt University, UK
Colin D. Milligan, Heriot-Watt University, UK
David K. Schofield, Heriot-Watt University, UK
Ruth C. Thomas, Heriot-Watt University, UK
Martin A. Youngson, Heriot-Watt University, UK
Chapter IX
Development of a Computerized Adaptive Test for a Large-Scale Testing
Program: A Case Study of the Development of the North Carolina
Computerized Adaptive Testing System .............................................................. 130
Lori McLeod, RTI International, USA
Albert Bethke, RTI International, USA
Cheryl Hill, University of North Carolina at Chapel Hill, USA
Pamela Van Dyk, North Carolina Department of Public Instruction, USA
Kelly Burling, North Carolina Department of Public Instruction, USA
Chapter X
Introducing a Computer-Adaptive Testing System to a Small School
District ..................................................................................................................... 143
Timothy Pelton, University of Victoria, Canada
Leslee Francis Pelton, University of Victoria, Canada
Chapter XI
A Computer-Adaptive Mathematics Test Accommodates Third Grade Students with
Special Needs in the Pacific Northwest .................................................................... 157
Luke Duesbery, University of Oregon, USA
Leanne Ketterlin-Geller, University of Oregon, USA
Jan D. McCoy, Learning Point Associates, USA
Gerald Tindal, University of Oregon, USA
Chapter XII
Designing an Online Formative Assessment that Helps Prepare Students
and Teachers for a Summative Assessment: A Case Study—A Two-Year
Pilot Project Pairing an Online Classroom Assessment with a Statewide
High-Stakes Test ...................................................................................................... 170
Stephanie JL Gertz, Riverside Publishing, USA
Sally Askman, Bill & Melinda Gates Foundation, USA
Chapter XIII
Online Assessment in the K-12 Classroom: A Formative Assessment
Model for Improving Student Performance on Standardized Tests ......................... 181
Jacqueline B. Shrago, ThinkLink Learning, USA
Michael K. Smith, ThinkLink Learning, USA
Chapter XIV
Online Assessment Distribution Models for Testing Programs: Lessons Learned
from Operational Experience .................................................................................... 196
Anthony R. Zara, Pearson VUE, USA
Chapter XV
From Paper and Pencil to Computerized: A Dental Board’s Perspective ............... 206
G. Eric Jenson, Thomson Prometric, USA
Chapter XVI
Assessment Elements in Web-Based Training ........................................................ 214
Kimberly Payne, Imedia.it, USA
Chapter XVII
The Seven C’s of Comprehensive Online Assessment: Lessons Learned
from 36 Million Classroom Assessments in the Cisco Networking
Academy Program ..................................................................................................... 229
John T. Behrens, Cisco Systems, USA
Tara A. Collison, Cisco Systems, USA
Sarah DeMark, Cisco Systems, USA
Chapter XVIII
Feasibility Studies on Transitioning Assessment Programs from Paper
and Pencil to CBT Delivery ....................................................................................... 246
Sandra Greenberg, Professional Examination Service, USA
I. Leon Smith, Professional Examination Service, USA
Foreword
The 18 chapters in this volume feature a number of case studies detailing online
applications and the uses of assessment and measurement methodologies, systems,
and practices across three broad educational or training areas, namely elementary
and secondary (K-12), higher education, and corporate. Although the experiences of
those who experimented with the varied forms of online assessment and measure-
ment are individual, they are also generalizable. Their stories of migrating old as-
sessments and introducing new ones to the online environment have been frustrating
and challenging but also exhilarating and rewarding. This volume makes significant
contributions to the growing body of knowledge and experience in the burgeoning
academic and applied areas of online assessment and measurement.
Even though the authors of and settings for each of these case studies are diverse,
their experiences are common. They have identified the foundational need for wide-
spread stakeholder inquiry, communication, and professional development and the
importance of counting financial costs and not underestimating information technol-
ogy (IT) requirements up front. The case studies capture newness and an early unfa-
miliarity with online assessment and measurement in both attitude and approach and
summarily declare findings as premature and systems as experimental. They agree
that any systems they adopt or processes they develop have not become a panacea.
They also observe that the greatest challenge for online assessment and measure-
ment is change itself. Administrators, instructors, and learners are all affected by the
most important change that new online technologies, theories, and models make pos-
sible: the tight and inextricable integration of learning and assessment. Clearly, this
phenomenon appears to have opened up so many possibilities that the newness and
largeness of it all is still overwhelming and unsettling to many.
The future of online assessment and measurement is in the hands of the early adopt-
ers and hardy pioneers who are determined to overcome the challenges and help push
forward and out the learning and assessment paradigm. All these pioneers know, even
if they can't prove it yet, that teaching, training, operational, learning, and assessment
advances can only be realized as online assessment and measurement, and all that it
represents, is better understood. Surely online assessment and measurement prom-
ises to revolutionize the way that corporate, K-12 and higher education institutions
and professionals approach and realize their educational missions.
Now we provide brief summaries of and introductions to each of the eight higher edu-
cation, five K-12 and five corporate chapters.
Preface
Chapter II. “Coordinating a Distance Learning Software Rollout with the Needs
and Visions of a Mature Test Organization: Political and Technical Lessons
Learned”
Any university or college seriously considering the adoption of a large LCMS (e.g.,
WebCT or Blackboard) with its online assessment and measurement capability for
not only its campus and testing center but also its distance education programs will
benefit from this chapter. The sometimes difficult and costly lessons learned by one
university promise to inform the strategic planning efforts of other like institutions,
especially if they already have an established testing center. The competing needs
and interests of faculty in the classroom, administrators in the testing center, and
both in the distance education context are best addressed before, not after, the inte-
gration of an expensive, enterprise-wide LCMS for online assessment and measure-
ment purposes.
Chapter III. “Assessing the Relationship between Learner Satisfaction and Fac-
ulty Participation in Online Course Discussions”
This chapter is written by senior administrators from one of the largest and best-
known for-profit international distance education institutions: Capella Univer-
sity. The brief history of this successful “dot edu” university, provided by the authors
as context for their case study, is as fascinating as the case study itself. This specific
case study focuses on the assessment and evaluation of the instructor and adult learner
interaction in an online setting by using the Capella-developed Faculty Development
Feedback System (FDFS). The authors explain how FDFS is used to assess online
instructor immediacy within a course by measuring the frequency and quality of in-
structor and learner interaction in online course discussions. The authors also exam-
ine the intersection of FDFS results and course evaluation ratings with the surprising
conclusion that the learner-content relationship may be more important than the
learner-instructor relationship. This finding further validates a claim by distance edu-
cation researcher Terry Anderson that “sufficient levels of deep and meaningful learn-
ing can be developed as long as one of the three forms of interaction (student-teacher;
student-student; student-content) is at very high levels. The other two may be offered
at minimal levels or even eliminated without degrading the educational experience”
(Anderson, 2002, p. 4).
Chapter IV. “Authentic Online Assessment: Three Case Studies in Teacher Educa-
tion”
This chapter reminds readers of the importance of aligning objectives, activities, and
assessment. Three diverse teacher education programs at the Hong Kong Institute of
Education undertook an initiative to update their assessment strategy by using tech-
nology-mediated tools unavailable to the teacher and student just a few years ago.
This effort not only realized new insights about and experience with online assess-
ment but also resulted in a redesign of the whole curricular package to ensure full
alignment and integrity among all the pieces. The authors extol the benefits of online
assessments, especially in promoting more timely and helpful peer collaboration
and review. They note the value of using online assessments to better prepare stu-
dents for high tech jobs, where online assessment and measurement are common-
place and to succeed in lifelong learning opportunities that will increasingly rely on
online assessment and measurement strategies. The three case studies also show-
case, through the eyes of students, the impact that formative, not summative, online
assessments and feedback had on their learning process.
change to build consensus and foster democratization so that their accreditation ef-
fort is not perceived as only a top-down mandate. Because outcomes are the focus of
new engineering accreditation standards, the 17 engineering programs of the 12 cam-
pus university system turned to a simple Web-based tool to build outcome consensus
from a dispersed faculty. This online self-assessment instrument captured on a weekly
basis three types of class-level data from the faculty: learning goal(s), learning ac-
tivities to support each goal, and performance summary. The authors speak of the
benefits that accrued from this exercise, including the use of the aggregated data to
make and build program level decisions and outcomes. They also remind the readers
throughout the chapter about the importance of online assessment systems being
scalable (stretchable and shrinkable) according to need. The second online assessment
system developed focused on three data sources or measurements that would provide
further evidence to the accreditation agency that outcomes were being met. Those
three data targets included student performance on each outcome (as determined by
the faculty member), faculty perception of course effectiveness, and students’ per-
ception of their own degree of mastery at the outcome level. This triangulation of
data clearly provides the multilevel, outcomes-based measurements critical for not
just accreditation but, even more importantly, continuous improvement and advance-
ment of student learning.
that can then be associated with best practices that are worthy of dissemination to
other teachers. Finally, the authors give reasons why this CAT should supply just one
of the measures of student progress and that its best use in the school district setting
is “to support student learning by providing timely information to teachers in the
classroom, and school administrators”.
Chapter XII. “Designing an Online Formative Assessment that Helps Prepare Stu-
dents and Teachers for a Summative Assessment: A Case Study—A Two Year Pilot
Pairing an Online Classroom Assessment with a Statewide High-Stakes Test”
This case study showcases the role of online assessment and measurement in better
preparing elementary students and their teachers for a mandated statewide test. Even
though the statewide test is paper-based, the Gates Foundation and the Washington
State Education Department piloted an online initiative to assess students along the
way so that teachers could better respond to areas of concern in a more timely fash-
ion. This experience highlights some of the advantages of online assessment and
measurement, particularly in a diagnostic role, while at the same time exposing some
of the practical and technical challenges that are frequently overlooked in the transi-
tion to more automated and computer-based testing environments.
oper, and deliverer to work carefully through the many issues involved in the success-
ful distribution of online assessments. He also shares some creative but effective data
mining strategies used to detect possible security breaches and promote tighter secu-
rity. At the conclusion of the chapter the author looks into the future and predicts more
sophistication and capacity in all dimensions of distribution that will better enable
online assessment sponsors to satisfy their goals with fewer of the current constraints.
Chapter XV. “From Paper and Pencil to Computerized: A Dental Board’s Perspec-
tive”
A large testing company, Thomson Prometric, shares its experience in helping the Den-
tal Board shift its certification testing approach for dentists, hygienists, and dental
assistants from paper to the Web (computer based over secured networks and Internet
based) after a security breach occurred in 2001. The author reviews previous concerns
with online assessment, focusing on general access to computers, unfamiliarity with
computers, and computer anxiety, and then points out how few, if any, of these histori-
cal concerns still exist today. This case study reminds any company looking to transi-
tion its assessment model of the current and relevant issues executives should con-
sider in their analysis of whether they transition from paper to the Web, to what degree,
and how quickly. Some of the online assessment and measurement issues addressed
through this case study and discussion include security, a more authentic testing
experience, exam availability and distribution, automated scoring, additional respon-
dent data, item and form functionality, administrative efficiencies, technical consider-
ations, data storage, and costs. After the Dental Board had considered all these
issues, one exigency was emphasized as probably more important than all of them: “sell-
ing the idea to the candidates.” Change does not always come easily, and even though
it is evident that online assessment is the wave of the future, many test takers would
still prefer the former way.
Chapter XVII. “The Seven C’s of Comprehensive Online Assessment: Lessons Learned
from 36 Million Classroom Assessments in the Cisco Networking Academy Program”
In a day of corporate nondisclosure and secrets, this chapter is a welcome and refresh-
ing addition to the growing body of knowledge concerning online assessment and
measurement from a large corporation. Arguably, Cisco Corporation and its Networking
Academy Program have had more experience in online certification assessment with its
36 million assessment instances than any other institution. The seven C’s described
herein are best practices as much as they are overarching principles for anyone in-
volved in online assessment and measurement, including certification testing. Although
some people may consider the seven C's common sense, invariably one or two of the
C's have been overlooked in many institutional assessment strategies and have be-
come the Achilles' heel in an otherwise successful experience. The seven C's are claims,
collaboration, complexity, contextualization, computation, communication, and coordi-
nation. In context, each of these terms becomes a story unto itself and represents the
critical and somewhat discrete dimensions of online assessment that, when combined,
enjoy a synergistic and successful interaction. The authors share glimpses of the ge-
nius of their proven but innovative model by briefly introducing their use of network-
ing (literal and figurative) to link instructional objectives to learning activities to assess-
ment and then back; to provide a worldwide collaboration in instrument design and
development, including language and cultural translation for over 150 countries and
nine languages; and to provide just-in-time prescripted computations that enable the assess-
ment professional to customize and interpret the results according to unique institu-
tional needs. The third C, complexity, while referring to the capacity of technologies to
accommodate new and more complex item types, for example, simulations and other
performance-based assessments, also represents the many complexities and exciting
opportunities associated with online assessment and measurement.
and Presentation of Feasibility Report. Threading their way through all these phases
are stakeholder support and buy-in, which the authors emphasize as the most critical
aspect of any feasibility study. Finally, three exhibits are included that are ready-made
checklists or questions to gather information about the existing examination program,
expectations of the stakeholders, and vendor characteristics that best match up with
institutional objectives.
One of the authors of this volume may have best summed up the excitement, optimism,
and outlook associated with this book and its discussion about online assessment and
measurement: “We are just beginning to have a small idea of the real extent of the
possibilities for online assessment.” Unquestionably, all the authors, researchers, and
practitioners herein wish that they had had the benefit of this volume's collective
experience and knowledge before beginning their experiments and adventures with
online assessment and measurement. It is the hope of these editors that this book
series will encourage and inform the practical and scholarly inquiry and application of
online assessment and measurement for many years yet to come.
Scott Howell
Mary Hricko
December 2004
Continuous Assessment for Improved Student Outcomes 1
Chapter I
Continuous Assessment
for Improved
Student Outcomes:
Examples from WebCT’s
Exemplary Course Project
Daniel Volchok, WebCT, USA
Abstract
WebCT views assessment as an activity that is integral to the full scope of the
learning process. A variety of methods and strategies are available to course
designers and instructors to assess student performance before, during, and after
a course has taken place. WebCT provides three major categories of assessment
tools (self-tests, quizzes and surveys, and assignments) and, within these tools,
seven types of questions (multiple choice, including true/false; combination multiple
choice; matching; calculated; short answer; jumbled sentence; and paragraph). The
layout, design, and administration of assessments are flexible through selective release,
timed assessments, and the sequencing of questions. Through examples from the WebCT
Exemplary Course Project, this chapter reviews the many tools and methods available
and describes the assessment, grading, and reporting capabilities of WebCT.
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of
Idea Group Inc. is prohibited.
2 Volchok, Caines & Graf
Introduction
Assessments can be an effective way to gather critical information about student and
course performance (WebCT, 2001). A variety of assessment methods and strategies
are available to course designers and instructors in WebCT. This chapter reviews the
many tools and methods available and describes the assessment, grading, and reporting
capabilities through examples from the WebCT Exemplary Course Project.
Assessment is often thought of as a tool that measures a student’s knowledge of the
course content. In contrast, WebCT views assessment as an activity that is integral to
the full scope of the learning process. The advent of online learning has opened new
methods to assess student performance before, during, and after a course has taken
place.
Precourse Assessment
Faculty can choose to assess students through a number of methods. Quizzes and other
tools can be embedded within the course content to gauge students’ knowledge of the
material. Subsequent or remedial material can then be selectively released based on
the outcome of the assessment. Self-tests and surveys can be used throughout the
course to aid instructors in the delivery of course material.
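Selective release of this kind is configured through WebCT's interface rather than written by hand. As a rough illustration of the rule it implements, here is a minimal Python sketch; the function name, module labels, and the 70% threshold are all hypothetical:

```python
def select_next_module(score: float, threshold: float = 0.7) -> str:
    """Decide which content to release next from a pre-assessment score.

    Hypothetical sketch: WebCT's actual selective-release rules are set
    in its interface, not coded like this.
    """
    if score >= threshold:
        return "next_unit"        # learner is ready to move on
    return "remedial_review"      # release remedial material first


# A score below the threshold routes the learner to review material.
assert select_next_module(0.85) == "next_unit"
assert select_next_module(0.40) == "remedial_review"
```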
Postcourse Assessment
At the conclusion of the course, traditional assessment methods such as quizzes and
exams can be administered. In addition, posttests can then be correlated with pretests
to measure student learning. Course evaluations can also be administered to evaluate
the effectiveness of the course and the instructor. Finally, the instructor has a number
of reports available to evaluate the effectiveness of the assessments.
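Correlating posttests with pretests can be as simple as an average gain score per student. A minimal sketch, with invented numbers:

```python
def average_gain(pre: list[float], post: list[float]) -> float:
    """Mean post-minus-pre difference per student: a simple,
    illustrative learning-gain measure (not a WebCT report)."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)


pre = [55.0, 60.0, 70.0]
post = [70.0, 72.0, 85.0]
# Average improvement is (15 + 12 + 15) / 3 = 14 points.
assert average_gain(pre, post) == 14.0
```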
Self-Tests
The self-test tool in WebCT allows the creation of simple multiple choice tests that
students can use as a risk free assessment of their knowledge of specific information,
concepts, and ideas. Students can take them an unlimited number of times and receive
immediate and detailed feedback on their answers. No marks are assigned or recorded.
Self-tests are often added to specific pages of related content to reinforce the
concepts, terms, and other information that students should learn.
Dr. Robert Sanders of Portland State University makes exemplary use of the self-test
tool in Spanish 101. Every module in the course contains a series of ungraded self-tests
that assess all language building skills: grammar, structure, listening, comprehension,
and writing. Self-tests allow students to practice concepts and skills and receive
immediate feedback prior to taking a graded quiz.
The question types used in the assessment consist of multiple choice, short answer,
matching, and paragraph. The questions often include audio and/or video elements to
test student comprehension and listening skills. Figure 1 and Figure 2 show samples
of some of the question types used.
In the Linear Systems and Control course at the University of Texas at San Antonio,
Cheryl Schrader provides two review quizzes on prerequisite material to students
throughout the semester. These self-tests are designed as self-assessments and can be
accessed and taken as often as the students wish. Feedback embedded in the answer to
each question directs students to additional resources for further learning. The
assessment ends with a question to students on how prepared they feel in that particular
content area. The students’ answers are then analyzed and used to determine balanced
discussion groups among the students for solving complex mathematical problems.
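The chapter does not say how the self-reported preparedness answers are turned into balanced groups. One plausible approach, sketched here purely for illustration, is to sort students by self-rating and deal them out round-robin so each group mixes stronger and weaker students:

```python
def balanced_groups(ratings: dict[str, int], n_groups: int) -> list[list[str]]:
    """Deal students into groups round-robin after sorting by self-rated
    preparedness (highest first), so ratings spread across groups.
    Illustrative only; not the chapter's actual procedure."""
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    groups: list[list[str]] = [[] for _ in range(n_groups)]
    for i, student in enumerate(ordered):
        groups[i % n_groups].append(student)
    return groups


ratings = {"Ana": 5, "Ben": 2, "Cho": 4, "Dev": 1}
# Each of the two groups gets one higher- and one lower-rated student.
assert balanced_groups(ratings, 2) == [["Ana", "Ben"], ["Cho", "Dev"]]
```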
Other uses of the self-test include the creation of chapter, unit, or module summaries,
similar to traditional textbook chapter questions that ask students to recall important
ideas and concepts. These self-tests give students an opportunity to review course
materials in preparation for formal assessments, such as quizzes, midterms, and final
exams. They also allow students who are unfamiliar and inexperienced with online
quizzing the opportunity to practice answering questions in a nonstressful environ-
ment within the online course.
Quizzes and Surveys
The quiz and survey tools in WebCT allow for the creation of robust, time sensitive
quizzes and anonymous surveys. Both may contain various question types, such as
multiple choice, matching, short answer, and essay. Quizzes can be automatically
graded by the WebCT system, and both quizzes and surveys provide detailed reporting
of statistics and student responses. Surveys and quizzes are identical in the way they
are added to a course, populated with questions, and released to students. The essential
differences between a survey and a quiz are that surveys are programmed to be
anonymous and do not have any point value or grade associated with them. The results
of the quiz or survey can be e-mailed to a third party source for review and tabulation.
Quiz grades are automatically recorded in the student management area, but the
instructor can manually edit a quiz grade if necessary.
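The survey/quiz distinction described above boils down to two fields: who submitted, and whether a grade is attached. A hypothetical Python model of that distinction (not WebCT's actual data structures):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Submission:
    """Hypothetical model of the quiz/survey distinction: surveys are
    anonymous and ungraded; quizzes record both identity and grade."""
    answers: dict
    student_id: Optional[str] = None   # None means anonymous (survey)
    grade: Optional[float] = None      # None means ungraded (survey)

    @property
    def is_survey(self) -> bool:
        return self.student_id is None and self.grade is None


quiz = Submission({"q1": "b"}, student_id="s42", grade=9.0)
survey = Submission({"q1": "agree"})
assert not quiz.is_survey and survey.is_survey
```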
The quiz and survey tools can be used in many ways throughout an online course. Some
more innovative quizzing methods include providing a quiz at the beginning of each
unit or chapter to determine whether students have completed reading assignments,
releasing quizzes only to students with a specific grade on a previous assignment, or
releasing content only when a student scores a certain passing grade on a quiz. Surveys
can be used to allow students to register their opinion without fear of an attack on their
personal opinions or to post data tabulated from the survey in the course to allow
discussions or debate. Many institutions also use the survey tool in WebCT for
instructor and course evaluations.
Cheryl Schrader’s Linear Systems and Control is designed as a blended learning
environment. In addition to the self-tests discussed earlier in this chapter, Schrader
also uses the quiz tool to deliver weekly multiple choice quizzes as homework
assignments. These quizzes consist of five questions designed to reinforce the
concepts covered in class and in the assigned readings.
In the Measurement and Evaluation in Education course at the University of Central
Florida, Dr. Robert Paugh uses the WebCT Survey tool extensively to collect feedback
on course activities. Upon completion of an assigned activity, students are required to
complete a feedback form that asks questions such as “Describe difficulties you
encountered in completing this activity,” “How were difficulties overcome?”, “How
would you revise the assignment and/or its instructions?”, “Were the objectives of the
assignment clear?”, “Did the assignment help you obtain the course objectives?”, and
so forth. This feedback is then analyzed, and revisions to the course and activities are
made accordingly. Figure 3 shows a small portion of an assignment feedback form
(WebCT, 2002–2004).
In the Clinical Calculations course at Middle Georgia College, students apply
formulas and mechanics of calculating and preparing accurate medication dosages.
Marilyn Halaska, facilitator, uses the quiz tool in an “Earn While You Learn” activity,
in which students are required to complete a quiz at the end of each unit to test their
knowledge and skill in calculating dosages. As an optional activity, students can use the
results of the quiz to generate a Medication Occurrence Report. Students become
engaged in detailed risk analysis by reviewing errors made on the test, identifying
personal errors and error sources, and developing a safety plan to prevent the
recurrence of similar errors. These optional Medication Occurrence Reports provide
students the opportunity to regain up to 50% of the points lost on the quiz.
Assignments
In face-to-face courses, assignments and homework are typically distributed to the
students, completed, and then returned to the instructor for review and grading. The
Assignments tool in WebCT creates an online area for students to view and download
assignments, instructions, and files as well as a space for students to submit their
work. Essentially, the assignments tool is a shared folder inside the course where any
type of file can be exchanged between the instructor and the students. Students can
receive their assignments anytime after the instructor has uploaded and released them.
Instructors can collect completed assignments and grade them at their convenience.
Instructors have the flexibility to program assignments in WebCT to be released based
on a date and time and for a specified length of time. The instructor can allow late
submissions of assignments or specify a cutoff date for each assignment; the date and
time that each assignment is submitted are recorded in WebCT. To save instructors
time, each time an assignment is created, a column for grading is automatically added
to the student management database. Comments on graded assignments are displayed
automatically when students view their grades. This provides students with rich, rapid
feedback on their work while alleviating the administrative burden on faculty.
Faculty use the Assignment tool in WebCT in a variety of ways to assess students,
including the following:
Continuous Assessment for Improved Student Outcomes 7
4. Gather resources and list them comprehensively: Students search for resources on the Internet (geographically, within a document, library, or computer program) and then create a comprehensive list of resources.
5. PowerPoint presentation to create or edit: A template or rubric for presentations
is distributed. A completed presentation, including original material, is then
returned.
6. Creation of a graph or diagram: Students are asked to produce an original graph
or diagram from data that is distributed to them or gathered by them.
7. Formation of a bibliography or reference sheet: Bibliographic or other styles can
be uploaded for students to use as a guide to produce their own reference sheets.
8. Written research paper: Outlines, drafts, and finished papers can be edited and
exchanged (WebCT, 2003).
Students in Dan Barnett’s Eastern Religions course at Butte College complete three
collaborative writing projects throughout the semester. Students are divided into groups
of three for each project. Private discussion areas are created for each group, where
students share their ideas and work together to create the writing project. The group’s
facilitator (a rotated role) uploads the finished Word document to the assignment drop
box. The instructor provides extensive feedback to the students in the Word document
and then returns it to the group facilitator through the drop box. The assignments, in turn,
are distributed to the remaining group members by the group facilitator.
In addition to using WebCT’s self-test, quiz and survey, and assignment tools, instruc-
tors have been assessing student learning in a number of creative ways. For example,
students in the University of Victoria’s Supervised Practicum course facilitated by
Michelle Koroll are assessed on how well they construct their own learning experience.
In this constructivist, problem-based learning environment, assessment activities con-
sist of formative and summative practicum performance evaluations, critical documen-
tation and self evaluation of their practicum experience, and participation in assigned
online discussions.
The critical documentation is one of the main assessment pieces within the course. As
students translate theory into practice, they are required to document their learning
journey through the practicum experience. According to Koroll, “Students are ex-
pected to set learning goals at the beginning of the course for themselves in eight
prescribed practice areas and then to document their progress toward these goals”
(WebCT, 2002–2004). Students create a learning plan that the instructor and practicum
supervisor review. At the end of the course and practicum experience, students
compile a learning portfolio that documents their plans and evidence of knowledge and
skills development. They critically evaluate and reflect on their learning process by using
the documentation compiled.
In her Ethics for Health Care Professionals course at the Medical College of Georgia,
Kathleen Hernlen requires students to complete a number of “quandary” assignments. By completing these assignments, students self-assess their ethical decision-making skills. Within each quandary, the student is given a real-life health care dilemma and a
choice to make. The students are allowed the opportunity to see the consequences of
the choice and may repeat each quandary as many times as they like, choosing different
paths and observing the consequences each time.
For example, a WebQuest assignment requires students to explore the cultural implications involved in ethical decision making. The students are given the beginning
of a story and asked to finish it, exploring all possible options for the characters in the
story. In completing the assignment, students apply what they have learned about the
culture to the solution. They are also required to develop two epilogues to the story:
a best case scenario and a worst case scenario.
At the University of Maryland, students in Water and Nutrient Management Plan-
ning for the Nursery and Greenhouse Industry work collaboratively in project man-
agement teams composed of a resident student, an extension professional, and a
nursery or greenhouse crop grower. The ultimate objective of the course is to write
a site-specific nutrient management plan for the nursery or greenhouse as represented
by the grower in each team.
The project management teams engage in problem solving activities to obtain or create
information and data that they then apply to the nutrient management planning process.
Students obtain some information by the use of hyperlinks to Web-based resources
outside the course, such as federal and state government Web sites. Students also visit
the offices of state agricultural agencies to obtain soil maps not available on the
Internet. As a result, they become familiar with agency resources and personnel who
can assist them in their project efforts.
In addition, required on-site research activities encourage collaborative team efforts
and student participation. Each team utilizes the resources and specific crop produc-
tion methods of the nursery or greenhouse that they are assigned to study and develops
nutrient management plans. Students conduct on-site evaluations of the physical
layout and operational management of the nursery or greenhouse. They are required,
as a team, to perform a series of tests of specific crop production practices. Students
then incorporate data obtained from these tests, along with data from the physical
layout and operational management of the site, into the planning process.
WebCT stores questions in a question database, from which they can be used in multiple assessments. Questions can be created either within an individual
WebCT course assessment or within the database. Either way, all questions are available
to use for any assessment in any WebCT course. Within the database, questions can be
categorized, modified, or even developed offline. Statistics on each question are
maintained over the life of the question.
WebCT allows for seven different question types: multiple choice (including true/false), combination multiple choice, matching, calculated, short answer, jumbled sentence, and paragraph. Within each question type are numerous settings that allow the faculty or designer to customize the behavior of the question.
Multiple Choice Questions
Multiple choice questions are by far the most common form in use. WebCT can
automatically score these questions, making them easy to grade. There are two types
of traditional multiple choice questions: pick-one and pick-multiple answers. Pick-
one questions ask the student to choose one correct answer from multiple selections,
including true or false. Pick-multiple questions ask the student to pick one, some, all,
or none of the selections, and are good for making a series of related judgments.
Matching Questions
Matching questions ask students to associate items in one list with items in another.
Items may have more than one correct match or no match at all. WebCT allows faculty
to create three types of matching lists: (a) short answer matching, which consists of one
to three words; (b) long answer matching, which consists of several words or phrases;
and (c) images.
Calculated Questions
Calculated questions ask students to apply a mathematical formula to a set of values. Faculty define the formula along with randomized constants and variables, so students will receive different questions and have to figure out different answers, all based on the single application of a formula. For example, if students are asked to calculate the perimeter of a yard, the yard size will vary with each test, but the formula for calculating the perimeter remains constant.
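The mechanics of a calculated question can be made concrete with a short Python sketch. This is purely illustrative, not WebCT's implementation; the function name and the value ranges are invented for the example.

```python
import random

def make_perimeter_question(rng):
    """Generate one instance of a calculated question: the formula
    (perimeter = 2 * (length + width)) stays constant, while the
    values are randomized so each student sees different numbers."""
    length = rng.randint(5, 50)  # yard length, randomized per student
    width = rng.randint(5, 50)   # yard width, randomized per student
    prompt = (f"A rectangular yard is {length} m long and {width} m wide. "
              f"What is its perimeter in meters?")
    return {"prompt": prompt, "length": length, "width": width,
            "answer": 2 * (length + width)}

question = make_perimeter_question(random.Random())
```

Every student answers the same underlying question, but the randomized values make copied answers useless.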
Short Answer Questions
Short answer questions require the student to type in specific answers to questions, usually in a fill-in-the-blank format. When automatically grading short answer questions, WebCT can require the response to be an exact word or phrase, require it to contain the targeted word or phrase, or evaluate it against a Perl regular expression. It is also possible to award a percentage of the points for responses that fall within the range of acceptability but are not the exact response. Short answer questions can contain one or more answer boxes, which students use to fill in their answers.
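The three automatic-grading modes for short answers, plus partial credit, might be sketched as follows. WebCT evaluates Perl regular expressions; Python's `re` module stands in here purely for illustration, and all function and parameter names are invented for this sketch.

```python
import re

def grade_short_answer(response, points=10.0, exact=None, contains=None,
                       pattern=None, partial=None):
    """Grade one short-answer response using three matching modes:
    an exact word or phrase, a response containing the targeted
    phrase, or a regular-expression match. `partial` maps near-miss
    answers to a fraction of the full point value."""
    text = response.strip().lower()
    if exact is not None and text == exact.lower():
        return points
    if contains is not None and contains.lower() in text:
        return points
    if pattern is not None and re.fullmatch(pattern, text, re.IGNORECASE):
        return points
    # award a percentage of the points for acceptable near misses
    if partial:
        for near_miss, fraction in partial.items():
            if text == near_miss.lower():
                return points * fraction
    return 0.0
```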
Jumbled Sentence Questions
Jumbled sentence questions present students with a sentence from which words are missing. Drop-down lists containing several word selections appear for the missing words, and students must select the correct word. This question type is popular in foreign language courses.
Paragraph Questions
Paragraph questions, or essay questions, test the breadth and depth of a student’s
understanding and application of ideas, concepts, beliefs, and attitudes. Students have
the opportunity to organize their thoughts and demonstrate their abilities to think,
analyze, and persuade. Because the responses are open-ended, paragraph questions are the only question type that WebCT cannot grade automatically.
Quiz Structure
In conjunction with the many types of questions available, there are a variety of ways
to design and administer assessments within the WebCT course environment. The
structure of the quiz, or its architecture, can have a great impact on what the quiz
measures and how it is measured. How the question is asked is just as important as the
question itself. The design of a quiz should consider three main factors: (a) the
sequencing of the questions, (b) how much time is allowed to answer the questions, and
(c) how many attempts a student is allowed on the quiz (WebCT, 2004a).
Question Sequencing
WebCT allows the instructor to deliver all the questions at once or release each
question one at a time. Instructors can also decide whether to allow students to revisit
any question or force them to answer or skip a question without the ability to revisit.
As instructors decide upon a sequence, it is valuable to vary the form of the questions
and answers. They can design each question to quiz for a common concept in a different
way, mix different forms of questions within the quiz, vary the way a question is posed,
and change the position of the correct answer.
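A hypothetical sketch of such variation follows (WebCT applies these settings itself; no instructor code is involved, and the data shapes below are invented for illustration):

```python
import random

def randomize_presentation(questions, rng):
    """Shuffle both the order of the questions and, within each
    question, the position of the answer options, so the correct
    answer does not occupy the same slot on every student's quiz.
    Each question is a dict with 'prompt' and 'options' keys."""
    presented = []
    for q in rng.sample(questions, len(questions)):
        presented.append({"prompt": q["prompt"],
                          "options": rng.sample(q["options"],
                                                len(q["options"]))})
    return presented
```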
Timed Exams
WebCT provides the opportunity to offer quizzes for a specified period of time. The
instructor can disallow answer submission if the time limit has expired. A timed exam
can lessen the likelihood of cheating, help measure how rapidly students can perform
a task or recall facts, and assist students in using their time effectively.
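The time-limit rule reduces to a simple check (an illustrative sketch; WebCT enforces limits server-side through its quiz settings, and the names here are invented):

```python
from datetime import datetime, timedelta

def accept_submission(started_at, submitted_at, limit_minutes,
                      disallow_late=True):
    """Decide whether a quiz submission falls within the time limit.
    With disallow_late set, submissions after the limit are rejected;
    otherwise they are accepted but flagged as late for the instructor."""
    on_time = submitted_at - started_at <= timedelta(minutes=limit_minutes)
    return {"accepted": on_time or not disallow_late, "on_time": on_time}
```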
Quiz Retakes
WebCT can permit students multiple quiz attempts. The instructor can program up to
five retakes, set the retakes to “unlimited,” and set the time duration between retakes
for a set number of minutes, hours, or days. Students retaking quizzes are given a
second chance to show mastery of the material. WebCT can score and assign a final
grade on multiple attempts by using the first attempt, the latest attempt, the highest
attempt, or an average of all attempts.
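The four grading policies for multiple attempts amount to a simple selection (an illustrative sketch; the function and policy names are invented):

```python
def final_grade(attempts, policy="highest"):
    """Combine multiple quiz attempts into one recorded grade using
    one of four policies: first attempt, latest attempt, highest
    attempt, or the average of all attempts. `attempts` lists the
    scores in the order the attempts were taken."""
    if not attempts:
        raise ValueError("no attempts recorded")
    if policy == "first":
        return attempts[0]
    if policy == "latest":
        return attempts[-1]
    if policy == "highest":
        return max(attempts)
    if policy == "average":
        return sum(attempts) / len(attempts)
    raise ValueError(f"unknown policy: {policy}")
```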
WebCT also provides a number of other settings to control the availability and
management of the assessment. The availability of the assessment can be based on a
date or time or other criteria, such as user ID or grades. The selective release
functionality in WebCT can be used to accommodate accessibility issues, such as
longer quizzing time. Questions can be randomized within the quiz or survey so that no
two students receive the same set. Access to a quiz or survey can be password
protected or be released to specific computers, such as those in a quizzing center,
using the IP address mask feature.
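The password and IP mask checks can be sketched together (illustrative only; WebCT's actual mask syntax may differ, and the prefix used below is an invented example):

```python
def quiz_access_allowed(client_ip, allowed_prefixes,
                        password=None, required_password=None):
    """Apply two access controls: an optional password and an optional
    IP address mask that restricts delivery to specific machines, such
    as those in a quizzing center. A prefix like '137.190.8.' admits
    every host on that subnet."""
    if allowed_prefixes and not any(client_ip.startswith(p)
                                    for p in allowed_prefixes):
        return False
    if required_password is not None and password != required_password:
        return False
    return True
```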
For all question types that can be autograded, WebCT allows the instructor to give
feedback to the students on their response. If they give the incorrect response, the
correct answer can be displayed, along with an explanation of why the choice they selected is not correct.
Grading Options
Instructors also have a wide range of options for how questions are graded. For
example, in matching questions, instructors can opt for three different scoring
schemes: equally weighted, where points are equally distributed among all correct
answers; all or nothing, where all answers must be correct to receive points for the
question; or right less wrong, where the point value for incorrect matches is
subtracted from the point value for correct matches. Use of the latter two schemes
cuts down on guessing. Similarly, in multiple choice questions where the student can select more than one correct response, scoring for the question can be cumulative, meaning that partial credit is awarded when only some of the correct answers are selected, or all or nothing, meaning that every correct answer must be chosen to receive points. Allowing a negative score penalizes students for guessing: the value of each wrong answer selected is subtracted from the value of the correct answers.
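The three matching-question schemes can be sketched as follows (an illustrative Python sketch; the names are invented, and the cumulative and all-or-nothing options for multiple choice questions work analogously):

```python
def score_matching(correct, student, points=10.0, scheme="equally_weighted"):
    """Score a matching question under three schemes. `correct` and
    `student` map each item to its chosen match; `points` is the full
    value of the question."""
    right = sum(1 for item, match in student.items()
                if correct.get(item) == match)
    wrong = len(student) - right
    per_match = points / len(correct)
    if scheme == "equally_weighted":
        # points are split evenly across all correct matches
        return right * per_match
    if scheme == "all_or_nothing":
        # full credit only when every match is correct
        return points if right == len(correct) else 0.0
    if scheme == "right_less_wrong":
        # incorrect matches subtract from correct ones (may go negative)
        return (right - wrong) * per_match
    raise ValueError(f"unknown scheme: {scheme}")
```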
In order to prevent answer sharing, WebCT provides the ability to present different
questions within a single assessment. A question set is a group of questions from
which one question is randomly selected and presented to students. Instructors can
create question sets by selecting a set of questions from the repository of questions
or by adding question alternates. They can then indicate the number of questions to be
randomly selected from the set and presented to students.
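Question-set selection amounts to random sampling from pools of interchangeable questions. The sketch below uses invented names; WebCT performs the draw internally when it builds each student's assessment.

```python
import random

def build_quiz(question_sets, rng):
    """Assemble one student's quiz. Each entry pairs a pool of
    interchangeable questions (from the question database or added
    as alternates) with the number of questions to draw from it at
    random, so different students receive different papers."""
    quiz = []
    for pool, k in question_sets:
        quiz.extend(rng.sample(pool, k))
    return quiz
```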
Performance reports provide each student’s answer to quiz questions and also provide
statistics on which answers students selected. For example, in multiple choice questions,
a breakdown is provided on how frequently each of the possible answers was selected.
Instructors can also display the performance for individual questions while comparing
the performance of selected students to that of the entire class. Item statistics reports
can be generated for the class as a whole or for all individuals in a class. Available
statistics include the percent of students who answered correctly, the mean, the
standard deviation, and a comparison to the whole class’s performance. Summary
statistics are similar to the item statistics report but are at the assessment level,
showing the mean score either by person or by class on a particular assessment. Class statistics show a question-by-question analysis of the performance of an entire class or section on a quiz. One class value per question is displayed for each of the following: attempts, percent answering correctly, discrimination, mean, and standard deviation.
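Several of these report values are straightforward to compute. The sketch below illustrates percent correct, mean, standard deviation, and the answer-frequency breakdown (names are invented, and the discrimination index is omitted):

```python
from statistics import mean, pstdev

def item_statistics(responses, correct_answer):
    """Compute per-question report values: the percent of students
    answering correctly, the mean and standard deviation of scores,
    and how frequently each possible answer was selected.
    `responses` maps student -> (selected_answer, score)."""
    scores = [score for _, score in responses.values()]
    selections = {}
    for answer, _ in responses.values():
        selections[answer] = selections.get(answer, 0) + 1
    return {
        "percent_correct": 100 * selections.get(correct_answer, 0)
                           / len(responses),
        "mean": mean(scores),
        "std_dev": pstdev(scores),
        "answer_frequency": selections,
    }
```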
Assessment in WebCT involves more than just providing quizzes and exams to measure
students’ knowledge of the subject matter. Rather, assessment is integrated into the
entire course from the beginning to the end, encompassing self-tests, graded and
ungraded surveys and quizzes, and individual and collaborative assignments. Students
are given immediate and developmental feedback on their assessments, and instruc-
tors are provided numerous reports to evaluate the effectiveness of the assessments.
The WebCT Exemplary Course Project provides many examples of outstanding uses
of the assessment capabilities of WebCT. Additional examples from selected exemplary courses and further information about the project are available at www.webct.com/exemplary.
References
Vallone, C. (2004, July). Online learning’s impact on global education. Proceedings
of the Sixth Annual WebCT User Conference, USA.
WebCT. (2001, November). Online assessment—What’s good for the instructor is good
for the student. WebCT Newsletter. Retrieved December 6, 2004, from http://www.webct.com/service/ViewContent?contentID=8318910
WebCT. (2002–2004). WebCT exemplary course project. Retrieved November 15, 2004,
from http://www.webct.com/exemplary
WebCT. (2003). Quizzing and assessment with WebCT.
WebCT. (2004a). Creating quizzes, self tests, and surveys in WebCT Vista 3.x.
WebCT. (2004b). Managing and evaluating assessments in WebCT Vista 3.x.
Coordinating a Distance Learning Software Rollout 15
Chapter II
Coordinating a
Distance Learning
Software Rollout
with the Needs and
Visions of a Mature
Test Organization:
Political and Technical
Lessons Learned
Luke Fernandez, Weber State University, USA
Abstract
As online education programs expand across the country, many universities are
adopting course management software (CMS) such as Blackboard or WebCT.
Although these out-of-the-box solutions may simplify the administration of online
classes, they do not comprehensively meet the needs of mature testing organizations.
This chapter describes the process that has led Weber State University to reconsider
whether an out-of-the-box implementation of WebCT actually is a sensible strategy.
16 Fernandez
It looks at both the political and technical issues involved in choosing a test
technology, examining how institutions can best balance and meet the needs of
both distance-learning organizations and testing centers.
Introduction
University administrators who are contemplating or presently engaged in rolling out
learning management systems (LMSs) face ponderous technological and political
logistics. Weber State University is currently one year into its LMS rollout and plans
on having the LMS implementation completed by the spring of 2005. In most respects,
this rollout has been successful and has moved forward relatively smoothly. However,
in respect to online assessments, the LMS initiative continues to experience prob-
lems. These problems have resulted from incomplete knowledge about assessment
needs on the part of the organizations initiating the rollout, a lack of dialogue between
the distance learning organizations and test organizations, and inchoate approaches
for achieving consensus on test related issues.
This chapter elaborates on these problems, broaches the main questions that should
have been tackled by the initiative in order to forward a smoother rollout, and lists
some strategies that may be worth implementing in order to build consensus on
assessment issues as the rollout moves into its concluding stages. In doing so, it seeks
to offer a model for how other institutions may coordinate distance learning software
with the needs of a mature test organization.
Background
At Weber State University, over 110,000 departmental tests are delivered each
semester through a home grown, Web-enabled assessment tool. This tool has evolved
over the years in close coordination with the expanding needs of Weber’s testing
centers and faculty. The technologies and organizational structures that deliver
departmental testing at Weber State have grown in such close concert that they appear
at times almost inextricably attached. The needs of the testing centers are served by
the software, and the policies and practices that give expression to a testing center’s
identity are in turn defined by what the software can and cannot do.
Although this organic relationship has been in place for over five years, it has recently
been challenged by Weber State’s burgeoning, and very profitable, distance learning
organization, which has purchased and begun implementing Vista-WebCT’s most
advanced distance learning software. The Vista product has a built-in assessment tool,
and the distance learning organization has signaled that its ultimate goal has been to
replace the home grown solution with the Vista product in the interest of promoting
efficiency, system integration, and standardization. The WebCT product promises to
realize many of these goals because:
• It provides easy session and navigational integration across the student portal
and all course management tools.
• It provides easy data integration with student information systems, thereby
facilitating enrollment and identity management across semesters.
• It provides standardization because it is a technology that is documented and supported by an established company with a large community of users.
Sold and promoted largely on the basis that it is more capable of realizing integration,
efficiency, and standardization than the technologies that have been around until now,
WebCT promises much, and the groups promoting the technology on campus have
worked zealously to be the handmaiden to this promise. However, after a year of use,
and after more than $350,000 has been expended on training and marketing WebCT on
the Weber State campus, a core set of faculty and test administrators are continuing
to resist the replacement of the home grown assessment tool with the WebCT
assessment technology. According to these users, the Vista assessment tools fall short of the home grown tool in several important respects.
Because so much money has been invested in promoting the WebCT product and
because a vociferous contingent of users resist this promotion, a certain amount of
polarization has occurred. It is difficult to establish clarity or consensus on testing
issues because the technology is no longer perceived as something that resolves a
problem that is common to all. Instead, it is perceived as an initiative that promotes
some interests at the expense of others. When the evaluation of technology is colored
by interest, as it has been at Weber State, technology assessment becomes a more
challenging task. How, in the face of this challenge, can clarity and consensus be
achieved?
One possible way to achieve this is to avail oneself of outside literature. After all, if
internal parties are not considered disinterested, then perhaps a review and evaluation
of assessment literature generated by outside parties can produce more objective
perspectives. There are several repositories for this literature. Among the most
obvious is Web literature published by MIT’s open knowledge initiative, the IMS
Global Learning Consortium, the National College Testing Association, and for profit
testing companies. However, all these resources have limitations. MIT and the IMS
Global Learning Consortium have been developing and promoting specifications for
learning technology for a number of years. The adoption of these standards helps make
learning technology more interchangeable, and these standards should be attended to
when selecting or developing an assessment tool. In this chapter those needs fall under
the aegis of the term standardization. But while modularity and interchangeability are
important technical considerations, they are not the only campus needs that deserve
to be considered when choosing an assessment tool.
As an organization that is chartered to focus “on issues relating to test administration, test development, test scoring, and assessment,”1 the National College Testing Association (NCTA) is also an important resource.
of this organization are devoted to delivering and administering certification and
placement exams like the LSAT, the MCAT, or Compass. The NCTA is beginning to
attend to the technical and administrative challenges associated with the delivery of
tests managed and created by faculty, but this subject has not been the NCTA’s primary
focus. Because of this, at this point they are not a repository of literature that can help
a campus wisely choose an online testing technology.
In addition to these resources, some literature is published by for profit testing
companies. Questionmark, for example, publishes a white paper entitled “Delivering
Computerized Assessments Safely and Securely,” which reviews some of the security
concerns that need to be considered when choosing an assessment tool that can deliver
tests securely. However, their account lacks scholarly integrity because it promotes
their own software, neglects to cite competing software technologies, and overlooks
important security features, including features that allow faculty to restrict test
delivery to one or more test centers and those that facilitate the distribution of
passwords to remote proctors. Although literature generated by for profit companies
can sometimes be illuminating, its pecuniary orientation often compromises its
credibility.
A review of the literature suggests that although off campus parties may cast some
light on the controversies that have bedeviled Weber’s LMS rollout, there doesn’t
appear to be a technology or body of technological information that can obviate all
political difference. Online testing technology, and its associated literature, is still
evolving. At this point in time, it is not so refined that it is capable of anticipating every
competing testing need or transcending and eliminating the political controversies
that are fueled by these competing needs. The rest of this chapter spells out some of
these concerns as well as some strategies that can mitigate their more invidious
effects. These concerns, which are political, technical, and procedural in nature,
revolve around a set of competing organizational and technical needs.
The technical and political challenges that arise in meeting these needs are addressed
in detail in the following section.
Some of the problems articulated in the preceding section are undoubtedly specific to
the particular constellations of organizational power that exist at Weber State. But any
university that is contemplating an upgrade to their distance learning software and has
mature and autonomous testing and distance learning organizations on campus may
stand to benefit from Weber State’s experience. On a concrete level, the needs and
protocols of the distance learning organization have differed from those in the testing
organizations. Although discrepancies in financial power have allowed the Vista
initiative to subordinate (and almost exclude) the needs of one of these organizations,
this subordination has been costly. A more effective strategy would have ensured that
both organizations had been brought to the strategizing table. Even if the technological
outcomes end up being the same, at the very least this approach would have been good
for public relations and consensus building. It would mitigate perceptions that the
initiatives are being imposed from above, that they are failing to consult with
assessment experts, and that there is an intentional attempt to exclude.
Financial Strategies
Whether a test organization chooses to charge for departmental testing can have an
impact on organizational influence vis-à-vis distance learning organizations. Although
secure departmental testing through the testing centers may constitute a plurality of
testing at Weber State, this has not given the testing organizations commensurate
influence in strategizing about Weber State’s future assessment technologies. It may
be sensible to provide a free service to instructors and to the distance learning
organizations (who have fielded a plurality of their tests through the testing centers).
But testing center directors may want to make sure that the service is formally
recognized by the university and accorded appropriate recognition when it comes time
to make technology decisions that should fall, at least in part, under the testing
center’s jurisdiction.
Technical Concerns
Scalability Concerns
As test taking has grown at the campus testing centers, so too has demand on test center
resources. In order to supply enough resources to meet demand, Weber State testing
centers have begun to look for procedures and technologies that exact as much efficiency
as possible from limited numbers of testing center personnel and limited numbers of Web-
enabled computers. The quest for greater efficiency in the face of growth and the
concomitant rollout of the LMS has fueled two important questions: What testing
technologies adequately accommodate growing testing loads, and which ones can
facilitate the types of efficiencies needed in testing centers?
On any given school day during the fall or spring semester, about 1,700 tests
were delivered through the home grown assessment tool. However, during finals week,
this number spiked to as many as 2,500 tests, of which 67% were delivered in testing
centers. These spikes in demand created long lines in the testing centers as students
queued up for testing terminals that were in short supply.
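The arithmetic behind these spikes is easy to sketch. The daily and finals-week totals below come from the figures above; the average test length and hours of operation are hypothetical values chosen only to illustrate how a testing center might estimate terminal demand.

```python
import math

def terminals_needed(tests_per_day, center_share, avg_minutes, open_hours):
    """Minimum terminals required if demand were spread evenly across the day."""
    center_tests = tests_per_day * center_share  # tests delivered in the centers
    terminal_minutes = open_hours * 60           # capacity of a single terminal
    return math.ceil(center_tests * avg_minutes / terminal_minutes)

# 1,700 tests on a regular day, 2,500 at finals, 67% in centers (from the text);
# the 40-minute average test and 12-hour day are assumptions for illustration.
regular = terminals_needed(1700, 0.67, avg_minutes=40, open_hours=12)
finals = terminals_needed(2500, 0.67, avg_minutes=40, open_hours=12)
print(regular, finals)
```

Even this even-demand lower bound shows a sizable jump at finals week, and real queues are worse because demand clusters at peak hours rather than spreading evenly.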
With exponential growth in online test taking, Weber State’s most pressing concern
over the last two years has been to find an assessment technology that can continue
to accommodate more users smoothly as testing demand increases. The pilot semester
with WebCT’s assessment tool did not live up to expectations: the system was
inoperable for a significant portion of finals week, even though assessment was
confined to a small number of pilot students. These scalability problems have
largely been solved in the home-grown technology, which uses a clustered
architecture that allows more servers to be plugged into the system as loads increase.
Given the resources and expertise available to WebCT, scalability questions are
expected to be resolved eventually, even if the initial experience was not a
happy one.
It is worth bearing in mind, however, that the ability to accommodate increased load
constitutes only one dimension of testing scalability. As can be seen at Weber State,
the resolution of the technological bottleneck (e.g., the ability of the servers to
smoothly accommodate load) has revealed a new bottleneck that is epitomized by the
long student queues that form when students wait for secure testing terminals during
peak demand periods. Weber has implemented, or needs to implement, a number of
organizational and technological strategies in order to scale effectively in the
face of this problem. These strategies include the following:
• Creating flexible testing centers that can expand and contract the number of
terminals that are available as demand expands and contracts throughout the
semester.
• Creating integrated scheduling software that will:
  • Track how many minutes, on average, a student needs in order to take a test
  • Allow students to reserve a secure terminal for authorized tests so that they
    do not have to wait in lines at the testing center
  • Allow faculty members to reserve a bank of terminals for a given time period
    so that they can be assured that their students will be able to take the test
    during that period
  • Allow testing administrators to track and forecast test-taking demand so that
    they can act proactively
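No such software existed at Weber at the time; the following is only a minimal sketch of the reservation idea in the list above. The class name, slot keys, and per-slot granularity are all hypothetical, not part of any actual Weber State system.

```python
from collections import defaultdict

class TerminalScheduler:
    """Toy reservation book for a testing center's secure terminals."""

    def __init__(self, terminals):
        self.terminals = terminals
        self.bookings = defaultdict(int)  # time slot -> terminals reserved

    def reserve(self, slot, count=1):
        """Reserve terminals in a slot (a student books 1; faculty book a bank)."""
        if self.bookings[slot] + count <= self.terminals:
            self.bookings[slot] += count
            return True
        return False  # slot full: the student or instructor must pick another time

    def utilization(self, slot):
        """Fraction of capacity booked; lets administrators spot peaks in advance."""
        return self.bookings[slot] / self.terminals

sched = TerminalScheduler(terminals=20)
sched.reserve("finals Mon 9:00", count=15)         # an instructor reserves a bank
print(sched.reserve("finals Mon 9:00", count=10))  # an oversized booking is refused
```

The point of the sketch is that reservation data doubles as forecasting data: the same `bookings` table that prevents queues also tells administrators, ahead of time, which slots are approaching capacity.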
Notably, WebCT Vista does not appear to have features that cater to these needs. In some
ways, this is not surprising: the WebCT software was designed primarily as a tool for
delivering distance learning tests, not around the needs of a testing center.
When the WebCT tool was initially promoted on campus, it was touted as software
that could address the needs of both the distance learning organization and the
testing center. However, after two semesters of use, a number of technological lacunae
have been encountered in the product: it has not only failed to scale to even nominal
testing loads, but it also lacks the sort of integrated scheduling tools
that a testing center needs in order to act proactively or to use its terminals in a
fashion that efficiently caters to changing demand. These features are a critical
component of assessment scalability, at least if the university plans to allocate
resources as efficiently and cost-effectively as possible. Because scheduling
is not currently a component of LMS software, universities, at least for now, need to
look elsewhere for technologies that can cater to the scalability needs of testing
centers.
Security Concerns
Currently, the test delivery mechanisms of WebCT and Blackboard are not as secure
as they could be, at least when they are not used in conjunction with technologies
such as Questionmark, home-grown assessment tools, and/or secure browsers. This was
not apparent when Weber State initially contracted with WebCT. When the security
issue did become manifest, however, many parties on campus began to wonder whether
the technology could be imposed wholesale on testing organizations that were
accustomed to delivery mechanisms offering more security.
When WebCT came to town, it was hailed as a comprehensive solution that could
address not only the needs of distance learning but those of the testing centers as well.
However, two semesters of use and a gap analysis study have revealed that WebCT fails
to cater to basic testing security needs. In fact, according to one WebCT representa-
tive, the WebCT assessment tool was never designed for high-stakes testing or
proctored testing. To cater to these demands, a testing tool needs the following
features:
• It must provide an interface that an administrator can easily configure so that a test-
taker is confined to the test interface. In other words, the testing utility must
prevent test takers from being able to access the World Wide Web or other
course content.
WebCT advocates at Weber have placed a great deal of faith in the merits of
outsourcing and LMSs. The general attitude was that a company as large as WebCT
must have been aware of the scalability and security needs of testing organizations
and that, through effective change management, Weber could use an out-of-the-box
solution to meet its testing needs. The perception was that the obstacles impeding a
full embrace of the WebCT solution were cultural rather than technological in origin.
Although WebCT had suggested commissioning a formal gap analysis during the
pilot phase, this was not encouraged, presumably because it would impede rather
than inform the WebCT rollout.
However, after two semesters of use, it is becoming obvious that some of the
obstacles minimized by advocates should not have been. Significant
security and scalability needs are not adequately addressed in the current Vista
product, and glossing over them, rather than commissioning a formal gap analysis,
has made it more difficult for the steering committee to strategize in as informed
a fashion as possible. It also has increased the perception that the committee has
failed to engage in full disclosure.
Although LMS assessment tools are fairly flexible, faculty members are inevitably
going to score and deliver tests in ways that are not accommodated in out-of-the-box
solutions. The move toward generic campus-wide technology and the move away from
technology that is customized and precisely fitted to testing centers have raised two
questions of import: What level of customization should testing centers provide in
appealing to the particular assessment needs of faculty, and what are the costs and
benefits associated with this customization and how are they measured?
WebCT’s Vista software, like any ERP solution, promises to centralize and integrate
systems that had hitherto been decentralized. Centralization initiatives, whether they
occur in the realm of politics or in the realm of information technology, are touted
for a variety of reasons: the potential to make administration more efficient, to
standardize procedures, to reduce costs, and to coordinate systems or organizations
that had hitherto worked at cross-purposes.
Although these potentials are all compelling, it is important to recognize that system and
organizational integration are not always panaceas or unequivocally salutary in their
effects. Put another way, if federated systems and autonomy are principles that have a
place in politics, they may also have a place in the world of campus information
technology and, more specifically, in the rollout of distance learning software,
especially when it impacts departmental testing.
At Weber State, the tensions between initiatives that promote centralization and
traditions that esteem a degree of organizational autonomy are being played out in a
miniature but nonetheless illustrative scale. On one hand, the parties rushing forward
with the Vista implementation argue that a “vanilla implementation” is in order and
that requests for assessment features that fall outside the vanilla implementation
must go through an extended review process. This approach may seem sensible from the
standpoint of administrators who want to move toward more centralized administration.
But it comes at significant cost to faculty and testing organizations who, prior
to the WebCT initiative, had been able to streamline work processes by getting
changes made to assessment software through a relatively autonomous testing
organization.
From an administrative perspective, centralization appears to be a cost-effective
solution, because it means that less money needs to be spent on customized
programming. But from a more comprehensive perspective, what appears to be cost
cutting may only be the devolution of cost: expenses that were once shouldered by
administration are now devolved to faculty and to testing organizations, who are
forced to use software that fails to expedite their business processes as effectively
as more customized solutions. So far, there has been no clear calculus or method for
determining whether the move toward centralization yields benefits that exceed its
costs. Very likely, the decrease in autonomy and in the ability of testing
organizations to innovate and customize is merely devolving costs from
administration onto faculty and the testing organizations that support them.
Procedural Concerns
In the previous two sections (“Political and Financial Concerns” and “Technical
Concerns”), I highlight some of the more central constituencies that Weber State
needed to include in an LMS rollout as well as some of the more central technical
needs that deserved consideration at Weber State. In retrospect, Weber’s rollout
would have gone much more smoothly had it taken care to more clearly identify campus
testing needs and the constituencies associated with those needs. Weber’s experience
suggests that LMS rollouts are facilitated by:
Other institutions are likely to have different combinations of needs and
constituencies than those of Weber State. However, the procedures for mitigating the
political discontent generated when needs are left unmet may not be so unique; in
this respect, all software rollouts are alike. Needs and technical concerns should be
identified, and the political challenges that may be associated with those concerns
should be researched in order to gauge the import of those needs. In the references
section of this chapter, I include a Web citation that links to a spreadsheet
(Fernandez, 2004b). This spreadsheet enumerates Weber’s testing needs as well as the
Weber constituencies associated with those needs. Institutions interested in
implementing some of the procedures enumerated here may find this spreadsheet to be
a helpful tool.
Conclusion
Weber State’s move toward WebCT’s Vista product was initiated because it promised
to integrate disparate e-learning systems. Theoretically, this system would be easier
to administer and would be more intuitive for faculty to use. In many ways these
promises have been realized by the pilot rollout. Administration is significantly
easier, course design and teaching are simpler, and students can navigate between
course content and teaching tools more seamlessly than they could when these
utilities were served by disparate technologies. Long term, these trends should
contribute to better distance learning.
Although the rollout promises significant advantages in most respects, there are
significant problems in using a vanilla implementation of WebCT’s assessment
technology. This area of the product:
• Doesn’t allow faculty members to easily configure their tests so that they are
available only at selected testing centers
• Cannot deliver closed-book tests in an effective fashion
• Does not allow proctors to easily access passwords that they need for
administering tests
Although Weber State test organizations were aware of most of the technological
problems in the WebCT product from very early on, this expertise could not be
effectively conveyed to the steering committee because the test organizations did not
have any formal representation. The testing organizations may have been marginalized
because they were not seen as a party that added significant revenue streams to the
university. But given that they deliver more than 40% of Weber’s online tests (and
References
Fernandez, L. (2004a). NCTA departmental testing survey. Retrieved July, 2004,
from http://chitester1.weber.edu/misc/DTSR/Survey%20description.htm
Fernandez, L. (2004b). Needs spreadsheet. Retrieved July, 2004, from http://
chitester1.weber.edu/misc/DTSR/Needs_vf.xls
Shepherd, E., Kleeman, J., Phaup, J., Fair, K., & Martin, B. (2003). Delivering
computerized assessments safely and securely. Retrieved July, 2004, from
http://www.questionmark.com/us/whitepapers/index.htm
Endnotes
1. Quoted from the main page of the National College Testing Association Web
site: http://www.ncta-testing.org
2. Results from a recent poll of National College Testing Association members
indicate that only a very small portion (i.e., less than 5%) of NCTA test centers
deliver more than 5,000 online departmental tests per semester. More than half
the centers deliver fewer than 100 online departmental tests per semester, which
suggests that departmental and high-stakes online testing is at most a peripheral
activity at many testing centers (Fernandez, 2004a). Given these numbers, Weber
State is in a minority. However, online testing is growing, and Weber’s
experiences may indicate where other schools are headed.
Learner Satisfaction and Faculty Participation 27
Chapter III
Assessing the
Relationship between
Learner Satisfaction
and Faculty
Participation in Online
Course Discussions
Dana Offerman, Capella University, USA
Abstract
Faculty-student interaction in online courses heightens student satisfaction and
success. Capella University studied the relationship between learner satisfaction
with faculty (as reported in end-of-course evaluations) and faculty participation in
online courses (as measured by monitoring faculty interaction). Learners appear
to be more responsive in courses led by instructors who seem committed to learner
success and dedicated to improving learner engagement with the subject matter
and other learners. Some instructor behaviors, including certain messages and
28 Offerman, Pearce & Tassava
Introduction
Most online courses are highly dependent upon frequent interaction—often in the form
of threaded discussions between students and faculty. Research suggests that faculty
interaction with students in an online course is an indicator of student satisfaction and
success, as well as a means to overcome the inherent sense of isolation that can
characterize online learning and teaching. Online courses are therefore developed to
emphasize recurrent interaction between faculty and students, and among students, in
an attempt to emulate the dialogue that occurs in a conventional classroom. However,
our ability to assess the value of these discussions to student learning (both perceived
and actual) has been problematic.
This chapter provides a case study of Capella University’s attempt to understand the
relationship between learners’ self-reported satisfaction levels and actual
measurements of faculty participation in online courses. This study addressed several
issues
crucial to online assessment, including the development of standards and methods for
measuring both the quantity and quality of faculty-to-learner interaction in online
courses, the obstacles to effective measurement of faculty-to-learner interaction, and
the application of the results of online measurement to ongoing faculty development
and to setting performance expectations for faculty. The results of Capella’s investi-
gation promise to shed light on the qualitative measures of online interaction,
furthering that literature and complementing the larger literature on quantitative
measures.
The chapter begins with a discussion of some of the relevant literature on interaction
within online courses and explains how the theoretical framework of this literature has
influenced Capella University’s online course development model and faculty
expectations. The chapter then discusses two aspects of an attempt to understand
learner
satisfaction with online interaction: Capella’s Faculty Development Feedback System
(FDFS) and an internal Capella study on the relationship between the faculty feedback
and learner satisfaction (as measured by course evaluations). The chapter concludes
with an analysis of the assessment project, a description of actions taken as a result of
the research, and a summary of implications for further research, institutional
practice, and instructional design.
Founded in 1993, Capella University presently enrolls over 12,900 adult learners and
offers bachelor’s, master’s, and doctoral degrees as well as certificate programs in
five schools: Business & Technology, Education, Human Services, Undergraduate
Studies and Psychology. In 1997, Capella was accredited by the Higher Learning
Commission of the North Central Association of Colleges and Schools. Since 2000, the
university has maintained its accreditation with the commission through the Academic
Quality Improvement Program, a continuous quality improvement approach to
reaccreditation.
As of April 2005, the university’s active enrollment comprised:
Capella has 760 faculty members, 15% of whom are core faculty: full-time, salaried
employees of the university with both instructional and administrative
responsibilities. The remaining 85% are adjuncts: independent contractors who serve
as instructors in the “courseroom” (Capella’s name for the online course
environment), act as dissertation mentors and committee members, and develop new
courses. Seventy-five percent of all faculty members have earned doctorates.
Capella’s online courses are developed collaboratively by faculty and instructional
designers. In the first phase of this process, subject-matter experts drawn from the
core and adjunct faculty conceptualize course content, identify course objectives, and
align those objectives with program objectives. Faculty select appropriate course
materials and resources for the course media center; write requirements, including
weekly discussion questions, for weekly learning units and for the course as a whole;
and develop rubrics to assess course assignments. In the second phase of the course
development process, instructional designers—specially trained curricular and
technical specialists—work with faculty course developers to apply sound instructional
design principles to the courses, assure that the courses adhere to Capella’s pedagogical
model, edit the content and check technical features such as hyperlinks, and provide
specialized services such as graphic arts, Web design, and audiovisual resources.
After school administrators approve a developed course, qualified
instructors—including an experienced lead instructor—are assigned, and the course is
opened to enrollment.
Instructors teaching individual sections adhere to the preset course schedule and
conduct the course by using the prepared discussion questions, assignments, and due
dates. Changes to discussion questions or assignments must be approved by core
faculty. Capella maintains this administrative control over course structure to ensure that
all learners have a consistent experience, regardless of section or instructor. Instructors
do enjoy flexibility in that they can post new content to evolving discussions. In addition,
any instructor can suggest a course change to the lead instructor, who evaluates the
change, decides whether to implement the change, and assures that the change is made
in subsequent course revisions.
This course production system works very effectively to develop and launch
completely new courses and to revise existing courses. In both cases, the system
helps
Capella address internal and external issues, such as the need to develop clear course
competencies aligned with program outcomes, demands for programs or courses to
meet emerging or new standards, and so forth.
Essential to Capella’s course delivery process are faculty development activities,
which continuously train faculty members for online instruction. The faculty
development system is directed by staff in the office of the provost. Three
sequential courses orient and train faculty members for their numerous roles:
performing online instruction, acting as mentors to doctoral learners, and serving
on comprehensive examinations and dissertation committees. The faculty development
courses culminate with a
weeks-long shadowing experience, in which a novice faculty member observes a
seasoned instructor in an actual online course. Shadowing is the final step before a new
faculty member is approved to serve as an online instructor. However, faculty
development activity does not cease after initial training and shadowing. Working
together online, face-to-face, in conference calls, and via e-mail correspondence,
both core and adjunct faculty share best practices, discuss important issues, update
each other on current activities, and build a coherent Capella faculty community. This
work complements and extends the faculty development courses themselves. Capella
ensures that active instructors continue to receive useful feedback from learners and
fellow faculty members alike. Capella’s FDFS was developed to assess the success of
instructors in the courses.
• Content-Related
• Informational: Provides content-related information
712 Ph. Paulitschke, Ethnographie Nordost-Afrikas: die materielle Cultur der
Danâkil, Galla und Somâl (Berlin, 1893), p. 192.
713 J. Roscoe, “Further Notes on the Manners and Customs of the Baganda,”
Journal of the Anthropological Institute, xxxii. (1902) pp. 33, 45, 46, 63, 76; id.
“Kibuka, the War God of the Baganda,” Man, vii. (1907) pp. 164 sq. In the former
of these two accounts Mr. Roscoe speaks of the placenta, not the navel-string, as
the “twin” (mulongo).
714 Garcilasso de la Vega, Royal Commentaries of the Yncas, bk. ii. ch. 24, vol.
i. p. 186, Markham’s translation.
717 J. Teit, “The Thompson Indians of British Columbia,” pp. 304 sq. (Memoir of
the American Museum of Natural History, The Jesup North Pacific Expedition, vol.
i. part iv.).
719 J. Mooney, “The Indian Navel Cord,” Journal of American Folk-lore, xvii.
(1904) p. 197.
720 Bavaria, Landes- und Volkskunde des Königreichs Bayern, iv. 2, p. 346.
723 R. F. Kaindl, “Zauberglaube bei den Rutenen in der Bukowina und Galizien,”
Globus, lxi. (1892) p. 282.
724 A. Kuhn, Märkische Sagen und Märchen (Berlin, 1843), pp. 379 sq.
727 J. Grimm, Deutsche Mythologie,⁴ ii. 728, note 1. As to the East Indian belief
see above, pp. 187 sq.
731 J. Grimm, Deutsche Mythologie,⁴ ii. 728 sq., iii. 266 sq.; M. Bartels, op. cit.
p. 70. Grimm speaks as if it were only the caul which became a fylgia. I follow Dr.
Bartels.
732 Meantime I may refer to The Golden Bough, Second Edition, iii. 350 sqq.
For other superstitions concerning the afterbirth and navel-string see H. Ploss, Das
Kind,² i. 15 sqq., ii. 198 sq. The connexion of these parts of the body with the
idea of the external soul has already been indicated by Mr. E. Crawley (The Mystic
Rose, London, 1902, p. 119).
735 Fr. Boas, “The Social Organization and the Secret Societies of the Kwakiutl
Indians,” Report of the U.S. National Museum for 1895, p. 440.
738 Francis Bacon, Natural History, cent. x. § 998. Compare J. Brand, Popular
Antiquities, iii. 305, quoting Werenfels. In Dryden’s play The Tempest (Act v. Scene
1) Ariel directs Prospero to anoint the sword which wounded Hippolito and to wrap
it up close from the air. See Dryden’s Works, ed. Scott, vol. iii. p. 191 (first
edition).
740 “Death from Lockjaw at Norwich,” The People’s Weekly Journal for Norfolk,
July 19, 1902, p. 8.
743 F. Panzer, Beitrag zur deutschen Mythologie, ii. 305, compare 277.
748 F. Panzer, Beitrag zur deutschen Mythologie, ii. 302; W. Kolbe, Hessische
Volks-Sitten und Gebräuche im Lichte der heidnischen Vorzeit (Marburg, 1888), p.
87.
754 Theocritus, Id. ii. 53 sq. Similarly the witch in Virgil (Eclog. viii. 92 sqq.)
buries under her threshold certain personal relics (exuviae) which her lover had
left behind.
756 E. Meier, Deutsche Sagen, Sitten und Gebräuche aus Schwaben, pp. 245
sq.; A. Kuhn, Sagen, Gebräuche und Märchen aus Westfalen, ii. 192; id., Die
Herabkunft des Feuers,² pp. 200 sq.; W. Mannhardt, Die Götterwelt der deutschen
und nordischen Völker, i. 203 note. Compare Montanus, Die deutschen Volksfeste,
Volksbräuche und deutscher Volksglaube, p. 117.
757 Fison and Howitt, Kamilaroi and Kurnai, p. 250; A. W. Howitt, “On Australian
Medicine Men,” Journal of the Anthropological Institute, xvi. (1887) pp. 26 sq.; id.,
Native Tribes of South-East Australia, pp. 366 sq. According to one account a cross
should be made in the footprint with a piece of quartz, and round the footprint
thus marked the bones of kangaroos should be stuck in the ground. See R. Brough
Smyth, Aborigines of Victoria, i. 476 sq. These and many of the following
examples were cited by me in Folklore, i. (1890) pp. 157 sqq. For more instances
of the same sort see E. S. Hartland, The Legend of Perseus, ii. (London, 1895) 78–
83.
761 Elsdon Best, “Spiritual Concepts of the Maori,” Journal of the Polynesian
Society, ix. (1900) p. 196.
765 F. Mason, “On Dwellings, Works of Art, Laws, etc., of the Karens,” Journal of
the Asiatic Society of Bengal, xxxvii. (1868) part ii. p. 149.
767 Id., Tribes and Castes of the North-Western Provinces and Oudh, ii. 221.
770 Josaphat Hahn, “Die Ovaherero,” Zeitschrift der Gesellschaft für Erdkunde
zu Berlin, iv. (1869) p. 503.
773 J. Teit, “The Shuswap” (Leyden and New York, 1909) p. 613 (Memoir of the
American Museum of Natural History, The Jesup North Pacific Expedition, vol. ii.
part vii.).
775 K. Bartsch, Sagen, Märchen und Gebräuche aus Meklenburg, ii. 329 sq., §§
1597, 1598, 1601a.
778 Josaphat Hahn, loc. cit.; K. Bartsch, op. cit. ii. 330, 334, §§ 1599, 1611abc,
compare p. 332, § 1607; R. Andree, Ethnographische Parallelen und Vergleiche,
Neue Folge (Leipsic, 1889), pp. 8, 11.
785 Saxo Grammaticus, Historia Danica, i. p. 40, ed. P. E. Müller (pp. 28 sq., O.
Elton’s English translation).
791 J. Teit, “The Thompson Indians of British Columbia,” p. 371 (The Jesup
North Pacific Expedition, vol. i. part iv.).
795 J. Batchelor, The Ainu and their Folklore (London, 1901), p. 516.
796 H. Callaway, The Religious System of the Amazulu, part iii. pp. 345 sq.
801 K. Vetter, Komm herüber und hilf uns! oder die Arbeit der Neuen
Dettelsauer Mission, Heft iii. (Barmen, 1898) p. 10.
803 For detailed proof of this I may refer to my article, “Some popular
Superstitions of the Ancients,” Folklore, i. (1890) pp. 147 sqq.
805 J. Richardson, Travels in the Great Desert of Sahara (London, 1848), ii. 65.
806 Jamblichus, Plutarch, Clement of Alexandria, Diogenes Laertius, Suidas,
ll.cc.
808 Malay magic in particular is deeply tinctured with a belief in spirits, to whom
the magician appeals by kindly words and small gifts of food, drink, and even
money. See R. J. Wilkinson, Malay Beliefs (London and Leyden, 1906), pp. 67 sqq.
Here, therefore, religion is encroaching on magic, as it might naturally be expected
to do in a race so comparatively advanced as the Malays.
809 “Religion is that which brings care and ceremony to a certain higher
nature, which men call divine,” Cicero, De inventione, ii. 161.
811 “Piety is not a religion, though it is the soul of all religions. A man has not a
religion simply by having pious inclinations, any more than he has a country simply
by having philanthropy. A man has not a country until he is a citizen in a state,
until he undertakes to follow and uphold certain laws, to obey certain magistrates,
and to adopt certain ways of living and acting. Religion is neither a theology nor a
theosophy; it is more than all this; it is a discipline, a law, a yoke, an indissoluble
engagement” (Joubert, quoted by Matthew Arnold, Essays in Criticism, First Series,
London, 1898, p. 288).
815 A. Wiedemann, Die Religion der alten Ägypter (Münster i. W., 1890), pp.
142–145, 148; G. Maspero, Histoire ancienne des peuples de l’Orient classique: les
origines (Paris, 1895), pp. 212 sq.
818 Monier Williams, Religious Thought and Life in India (London, 1883), pp.
201 sq.
819 To prevent misconception I would ask the reader to observe that the earlier
stage here spoken of, in which magic is confused with religion, is not, in my
opinion, the earliest of all, having been preceded by a still earlier stage in which
magic existed alone. See below, pp. 233 sqq. On my view, the evolution of
thought on this subject has passed through three stages: first, a stage in which
magic existed without religion; second, a stage in which religion, having arisen,
co-operated, and was to some extent confused, with magic; and third, a stage in
which, the radical difference of principle between the two having been recognised,
their relation was that of open hostility.
823 H. Oldenberg, op. cit. p. 477. For particular examples of the blending of
magical with religious ritual in ancient India see pp. 311 sqq., 369 sq., 476 sqq.,
522 sq. of the same work.
824 S. Lévi, La Doctrine du sacrifice dans les Brâhmaṇas (Paris, 1898), p. 129.
825 M. Bloomfield, Hymns of the Atharva-Veda, pp. xlv. sq. (Sacred Books of the
East, vol. xlii.).
830 A. Wiedemann, Die Religion der alten Ägypter (Münster i. W., 1890), p. 154.
837 For the evidence see my Totemism and Exogamy, vol. i. pp. 141 sqq.
838 The suggestion has been made by Prof. H. Oldenberg (Die Religion des
Veda, p. 59), who seems, however, to regard a belief in spirits as part of the raw
material of magic. If the view which I have put forward tentatively is correct, faith
in magic is probably older than a belief in spirits. The same view as to the priority
of magic to religion, and apparently also as to the absence of spirits from primitive
magic, was held by Hegel. It was not until long after the discussion in the text had
been written that I became aware that my conclusions had been to a large extent
anticipated by the German philosopher. See Appendix at the end of this volume.
839 After a visit to the ruined Greek temples of Paestum, whose beauty and
splendour impressed him all the more by contrast with the savagery of the
surrounding peasantry, Renan wrote: “I trembled for civilisation, seeing it so
limited, seated on so weak a foundation, resting on so few individuals even in
the country where it reigns.” See E. Renan et M. Berthelot, Correspondance
(Paris, 1898), pp. 75 sq.
840 See above, pp. 68 sq.; “The Witch-burning at Clonmel,” Folklore, vi. (1895)
pp. 373–384; F. S. Krauss, Volksglaube und religiöser Brauch der Südslaven, pp.
144 sqq.
843 C. Fossey, op. cit. pp. 137–139. For the incident of the magical
disappearance and reappearance of the garment, see P. Jensen, Assyrisch-
Babylonische Mythen und Epen (Berlin, 1900), p. 23; R. F. Harper, Assyrian and
Babylonian Literature (New York, 1901), p. 291.
845 Fr. Kauffmann, Balder, Mythus und Sage (Strasburg, 1902), pp. 177–203.
Compare J. Grimm, Deutsche Mythologie,⁴ ii. 1024–1026.
848 W. Mannhardt, Antike Wald- und Feldkulte, p. 342, note. The heathen
Swedes appear to have mimicked thunder, perhaps as a rain-charm, by means of
large bronze hammers, which they called Thor’s hammers. See Saxo Grammaticus,
Historia Danica, lib. xiii. p. 630, ed. P. E. Müller; Olaus Magnus, Historia, iii. 8.
854 J. Owen Dorsey, “Omaha Sociology,” Third Annual Report of the Bureau of
Ethnology (Washington, 1884), p. 347. Compare Charlevoix, Voyage dans
l’Amérique septentrionale, ii. 187.
863 W. Weston, in The Geographical Journal, vii. (1896) p. 143; id., in Journal of
the Anthropological Institute, xxvi. (1897) p. 30; id., Mountaineering and
Exploration in the Japanese Alps, p. 161. The ceremony is not purely magical, for
it is intended to attract the attention of the powerful spirit who has a small shrine
on the top of the mountain.
864 J. Batchelor, The Ainu and their Folklore (London, 1901), p. 333. Some of
the ancient processions with ships may perhaps have been rain-charms. See J.
Grimm, Deutsche Mythologie,⁴ i. 213–220; Pausanias, i. 29. 1, with my note.
865 Tournier, Notice sur le Laos Français (Hanoi, 1900), p. 80. In the temple of
the Syrian goddess at Hierapolis on the Euphrates there was a chasm into which
water was poured twice a year by people who assembled for the purpose from the
whole of Syria and Arabia. See Lucian, De dea Syria, 12 sq. The ceremony was
perhaps a rain-charm. Compare Pausanias, i. 18. 7, with my notes.
866 Spencer and Gillen, Northern Tribes of Central Australia, pp. 313 sq.
872 Rascher, “Die Sulka,” Archiv für Anthropologie, xxix. (1904) p. 225; R.
Parkinson, Dreissig Jahre in der Südsee, pp. 196 sq.
873 Indian Antiquary, xxiv. (1895) p. 359.
876 J. Teit, “The Thompson Indians of British Columbia,” p. 345 (Memoirs of the
American Museum of Natural History, The Jesup North Pacific Expedition, vol. i.
part iv.).
878 A. C. Kruijt, “Regen lokken en regen verdrijven bij de Toradja’s van Midden
Celebes,” Tijdschrift voor Indische Taal- Land- en Volkenkunde, xliv. (1901) p. 2.
880 E. M. Gordon, Indian Folk Tales (London, 1908), p. 20; id. in Journal and
Proceedings of the Asiatic Society of Bengal, New Series, i. (1905) p. 183.
882 W. E. Roth, op. cit. p. 168; id., North Queensland Ethnography, Bulletin No.
5 (Brisbane, 1903), p. 10.
883 S. Gason, “The Dieyerie Tribe,” Native Tribes of South Australia, pp. 276
sqq.; A. W. Howitt, “The Dieri and other Kindred Tribes of Central Australia,”
Journal of the Anthropological Institute, xx. (1891) pp. 91 sq.; id., Native Tribes of
South-East Australia, pp. 394–396. As to the Mura-muras, see A. W. Howitt, Native
Tribes of South-East Australia, pp. 475 sqq., 779 sqq.
884 A. W. Howitt, “The Dieri and other Kindred Tribes of Central Australia,”
Journal of the Anthropological Institute, xx. (1891) pp. 92 sq.; id., Native Tribes of
South-East Australia, pp. 396, 744.
889 Spencer and Gillen, Northern Tribes of Central Australia, pp. 294–296, 630
sq.
890 F. J. Gillen, in Report of the Work of the Horn Scientific Expedition to Central
Australia, part iv., Anthropology (London and Melbourne, 1896), pp. 177–179;
Spencer and Gillen, Native Tribes of Central Australia, pp. 189–193.
891 As to the connexion of the plover with rain in Central Australia, see above,
p. 259. It is curious that the same association has procured for the bird its name
in English, French (pluvier, from the Latin pluvia), and German (Regenpfeifer).
Ornithologists are not agreed as to the reason for this association in the popular
mind. See Alfred Newton, Dictionary of Birds (London, 1893–1896), pp. 730 sq.
895 Fr. Boas, loc. cit.; id. in Sixth Report on the North-Western Tribes of
Canada, pp. 58, 62 (separate reprint from the Report of the British Association for
1890); id. in Eleventh Report on the North-Western Tribes of Canada, p. 5
(separate reprint from the Report of the British Association for 1896).
896 Fr. Boas, in Sixth Report on the North-Western Tribes of Canada, pp. 39 sq.
(separate reprint from the Report of the British Association for 1890).
897 British Central Africa Gazette, No. 86 (vol. v. no. 6), 30th April 1898, p. 3.