A literature review on the student evaluation of teaching
Abstract
Purpose – Competition among higher education institutions has pushed universities to expand their
competitive advantages. Based on the assumption that the core functions of universities are academic,
understanding the teaching–learning process with the help of student evaluation of teaching (SET) would
seem to be a logical way of increasing competitiveness. The paper aims to discuss these issues.
Design/methodology/approach – The current paper presents a narrative literature review examining how
SETs work within the concept of service marketing, focusing specifically on the search, experience, and
credence qualities of the provider. A review of the various factors that affect the collection of SETs is
also included.
Findings – Relevant findings show the influence of students’ prior expectations on SET ratings. Therefore,
teachers are advised to establish a psychological contract with the students at the start of the semester. Such
an agreement should be negotiated, setting out the potential benefits of undertaking the course and a clear
definition of acceptable performance within the class. Moreover, connections should be made between courses
and subjects in order to provide an overall view of the entire program together with future career pathways.
Originality/value – Given the complex factors affecting SETs and the antecedents involved, there appears
to be no single perfect tool to adequately reflect what is happening in the classroom. As different SETs may
be needed for different courses and subjects, options such as faculty self-evaluation and peer-evaluation
might be considered to augment current SETs.
Keywords Higher education, Student expectations, Service marketing, Teacher evaluation,
Teaching and learning process
Paper type Research paper
1. Introduction
In recent years, the increasing number of degree-granting institutions has
dramatically changed global higher education (Altbach et al., 2009; Usher, 2009). This rising
number of higher education institutions has led to increased competition among
universities (Naidoo, 2016). Furthermore, with cutbacks in government funding for higher
education (Mitchell et al., 2016), differentiation is essential for universities to distinguish
themselves and compete with other institutions (Staley and Trinkle, 2011). Such differentiation
of higher education institutions has become commonplace, forcing universities to become
more innovative, cost conscious, and entrepreneurial (Longanecker, 2016; MacGregor, 2015).
© Gregory Ching. Published in Higher Education Evaluation and Development, Vol. 12 No. 2, 2018,
pp. 63-84, Emerald Publishing Limited, ISSN 2514-5789, DOI 10.1108/HEED-04-2018-0009. This article is
published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute,
translate and create derivative works of this article (for both commercial and non-commercial purposes),
subject to full attribution to the original publication and authors. The full terms of this licence may be
seen at http://creativecommons.org/licences/by/4.0/legalcode

These global dilemmas are not new to Taiwan, where universities have to outperform
each other for financial subsidies while also competing to recruit new students (Chou and
Ching, 2012). The recruitment problem results from a serious decline in Taiwan's birth rate.
The National Statistics Office of Taiwan (2018) reported that birth figures declined
from 346,208 in 1985 to 166,886 in 2010, representing a fall of more than 50 percent. Projecting
these figures onto university entrants, a drop of around 20,000 incoming students can be
noted for the academic year 2016/2017 (Chang, 2014). In fact, only 241,000 freshmen
are recorded for the current 2017/2018 academic year, and this number is expected to
drop to around 157,000 in 2028 (Wu, 2018). This declining number of students
has resulted in financial difficulties for academic institutions (Chen and Chang, 2010).
In such difficult times, it is crucial for higher education institutions in Taiwan to
differentiate themselves and develop their competitive advantages.
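As a check, the size of this decline follows directly from the reported birth figures:

\[
\frac{346{,}208 - 166{,}886}{346{,}208} \approx 0.518,
\]

that is, a fall of roughly 52 percent relative to the 1985 figure.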
In the age of big data, differentiation can be achieved with the help of large data sets that
provide institutions with the capacity to address complex institutional issues (Daniel, 2015;
Norris and Baer, 2013). Many researchers have begun to collect and analyze institutional
data sets to address various administrative and instructional issues faced by the
universities (Picciano, 2012). The results of these studies can provide school administrators
and students with useful information (Castleman, 2016). In Taiwan, big data has provided
institutions with information on topics such as trends in enrollment rates, students’ online
learning performances, and research outputs measured by number of academic publications
(Tseng, 2016). Another study reported on the advantages of collecting and understanding
student learning experiences using big data (Lin and Chen, 2016). Based on the assumption
that the core functions of higher education institutions remain academic (Altbach,
2011), i.e., teaching and learning, determining and understanding the quality of the
teaching–learning process with the aid of big data can be extremely useful.
In order to understand the quality of the teaching–learning process, higher
education institutions in Taiwan and elsewhere have long been using the student
evaluation of teaching (SET), which provides feedback on teaching performance and
appraises faculty members (Aleamoni and Hexner, 1980; Centra, 1979; Clayson, 2009;
Curran and Rosen, 2006; Pozo-Muñoz et al., 2000). Even though the practice of using SETs is
well established in higher education institutions (Rice, 1988; Wachtel, 2006) and is
considered relatively reliable for evaluating courses and instructors (Aleamoni, 1999; Marsh,
1987, 2007; Nasser and Fresko, 2002), its usefulness and effectiveness has been challenged
(Boring et al., 2016).
Over time, numerous issues have arisen in research on SETs. It has been claimed that
SETs are used as a tool by students to reward or punish their instructor (Clayson et al.,
2006), that SET results differ across areas (course, subject, and discipline) (Chen, 2006) and
type (including course design and class size) of study (Feldman, 1978; Marsh, 1980), and that
completion rate and background demographics of students significantly affect SETs (Stark
and Freishtat, 2014). Moreover, SETs can be biased with respect to the gender of the
instructor and that of the students (Boring et al., 2016). Interestingly, recent research has
found that effective teachers are receiving low SET ratings (Braga et al., 2014; Kornell and
Hausman, 2016). This has caused many institutions, including universities in Taiwan, to
analyze and redesign their SETs (Chen, 2006; Zhang, 2003).
In light of these issues, the current paper seeks to provide a better understanding of
the inner workings of SETs. With a better understanding of SETs, more appropriate and
effective evaluation tools can be developed. In addition, the categorization of education as a
type of service (WTO, 1998) has also opened up new ways of looking into the entire
academe. Anchoring on the narrative literature review paradigm, this paper shapes the
discussion of SETs within the concept of service marketing, using a common framework for
evaluating services, namely determining the search, experience, and credence qualities of the
provider (Fisk et al., 2014, p. 151; Wilson et al., 2012, p. 29). In addition, the paper reviews
the definitions of SET in the existing literature as well as the dimensions commonly used to
measure the quality of teaching. Finally, the various factors that affect the collection of
SETs are discussed.
2. Methodology
The current study is anchored on a literature review paradigm. For any study, a literature
review is an integral part of the entire process (Fink, 2005; Hart, 1998; Machi and McEvoy,
2016). In general, literature reviews involve database retrievals and searches defined by a
specific topic (Rother, 2007). To perform a comprehensive literature review, researchers
adopt various approaches for organizing and synthesizing information, adopting either a
qualitative or quantitative perspective for data interpretation (Baumeister and Leary, 1997;
Cronin et al., 2008; Fink, 2005; Hart, 1998; Lipsey and Wilson, 2001; Petticrew and Roberts,
2005; Rocco and Plakhotnik, 2009; Torraco, 2005).
For the current study, the researcher adopts a narrative literature review approach.
A narrative review, more commonly referred to as a traditional literature review, is a
comprehensive, critical, and objective analysis of the current knowledge on a topic (Charles
Sturt University Library, 2018). The review should be objective insofar as it has a
specific focus, while still providing critiques of important issues (Dudovskiy, 2018). More
importantly, the results of a narrative review are qualitative in nature (Rother, 2007).
The study follows the suggestion of Green et al. (2006) with regard to synthesizing search
results retrieved from computer databases. For the current study, the researcher used
Google Scholar as a starting point, followed by searches within ProQuest and PsycINFO.
Keywords used for searches were "student evaluation of teaching" and related terminologies
(see the next section for more information on terms synonymous with SET). The selection of
relevant articles is selective and potentially biased insofar as the researcher focuses on the
search, experience, and credence qualities of providers within SET studies. Data analysis
methods consist of a procedure for organizing information into specific themes developed
by Miles and Huberman (1994) and Glaser's (1965, 1978) technique for the constant
comparison of previously gathered data.
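To make this organizing step concrete, the following minimal sketch illustrates, in Python, how retrieved records might be grouped into themes and compared against previously gathered material, in the spirit of Miles and Huberman's (1994) thematic organization and Glaser's (1965, 1978) constant comparison. The sketch is illustrative only: the actual synthesis was performed qualitatively, and all record titles and theme labels below are invented for demonstration.

# Illustrative sketch only; the review itself was carried out qualitatively.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Theme:
    label: str
    records: List[Dict] = field(default_factory=list)

def assign(record: Dict, themes: List[Theme]) -> None:
    # Constant-comparison step: compare a newly retrieved record against
    # existing themes; file it under a match, or open a new theme.
    for theme in themes:
        if theme.label in record["tags"]:
            theme.records.append(record)
            return
    themes.append(Theme(label=record["tags"][0], records=[record]))

# Hypothetical records, e.g. results of a "student evaluation of teaching" search.
retrieved = [
    {"title": "Gender bias in SET ratings", "tags": ["search qualities"]},
    {"title": "Timing of SET administration", "tags": ["experience qualities"]},
    {"title": "SETs and long-term learning outcomes", "tags": ["credence qualities"]},
    {"title": "Instructor attractiveness and SETs", "tags": ["search qualities"]},
]

themes: List[Theme] = []
for record in retrieved:
    assign(record, themes)

for theme in themes:
    print(theme.label, "->", [r["title"] for r in theme.records])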
4. Dimensions of SET
As has been noted, SETs are complex and involve multiple interrelated dimensions. In his
early meta-analysis, Feldman (1978) shows that most studies focus on the overall rating
of the instructor. However, SETs that focus only on summative evaluations and that
use global measures (few summary items) are highly discouraged (Cashin and Downey,
1992; Marks, 2000; Sproule, 2000). The majority of SETs aim at a more comprehensive
rating of teachers and, as Marsh (2007) notes, are mostly constructed around the concept of
effective teaching. The usefulness and effectiveness of an SET depends on how well it can
capture the concepts it measures. Hence, careful design is essential (Aleamoni, 1974, 1999;
Aleamoni and Hexner, 1980; Arreola, 2007).
One of the early syntheses of SETs was conducted by analyzing students' views of the
characteristics of a superior teacher (Feldman, 1976). In that study, three categories are
identified: presentation, which includes teachers' enthusiasm for teaching and for the subject
matter, their ability to motivate students, their knowledge of the subject matter, clarity of
presentation, and organization of the course; facilitation, which denotes teachers’ availability for
consultation (helpfulness), their ability to show concern and respect for students (friendliness),
and their capacity to encourage learners through class interactions and discussions (openness);
and regulation, which includes the teachers’ ability to set clear objectives and requirements,
appropriateness of course materials (including supplementary learning resources) and
coursework (with regard to difficulty and workload), fairness in evaluating students and
providing feedback, and classroom management skills (Feldman, 1976).
Another early analysis of SETs, conducted by Hildebrand (1973) and Hildebrand et al.
(1971), identifies five constructs for measuring the effectiveness of
teaching: analytic/synthetic skills, which includes the depth of a teacher's scholarship and
his or her analytic ability and conceptual understanding of the course content; organization/
clarity, denoting the teacher's presentation skills in the course subject area; instructor–group
interaction, which describes the teacher's ability to actively interact with the class, his or
her overall rapport with the class, sensitivity to class responses, and ability to maintain active
class participation; instructor–individual student interaction, which includes the teacher’s
ability to establish mutual respect and rapport with individual students; and dynamism/
enthusiasm, which relates to the teacher’s enthusiasm for teaching and includes confidence,
excitement about the subject, and pleasure in teaching (Hildebrand et al., 1971, p. 18).
More recently, the Students' Evaluations of Educational Quality (SEEQ) instrument has been
frequently used by many higher education institutions. The SEEQ measures nine factors that
constitute quality instruction (Marsh, 1982, 1987; Marsh and Dunkin, 1997; Richardson, 2005).
These are assignments and readings, breadth of
coverage, examinations and grading, group interaction, individual rapport, instructor
enthusiasm, learning and academic value, organization and clarity, and workload and
difficulty (Marsh, 2007, p. 323). Some SEEQ studies include an overall summative evaluation
of the course subject as an additional factor (Schellhase, 2010). The similarities with
Hildebrand (1973), Hildebrand et al. (1971), and Feldman's (1976) criteria of effective
teaching are apparent.
In a series of studies conducted at the University of Hawaii, SET is first analyzed with
respect to the perspectives of faculty members, which identifies important factors such as
evaluation information based on students, information from peers (colleagues), student
performance and grades, and external performance evaluations of teachers (Meredith, 1977).
A study that included apprentice teachers (practice teachers) found that students preferred
instructors who exhibited classroom organizational skills, who focused on students’ learning
outcomes, and who interacted well with students (Meredith and Bub, 1977). A set of
evaluation criteria was developed based on a study of both faculty members and students in
the School of Law at the University of Hawaii, which included dimensions such as knowledge
of subject matter, ability to stimulate interest and motivate students, organization of the
course, preparation for the course, concern for students, quality of course materials, and an
overall summative evaluation of the teacher (Meredith, 1978). Other studies measured
teaching excellence by subject mastery, teaching skills, and personal qualities of the teacher
(Meredith, 1985b), while an overall analysis of student satisfaction used the criteria social
interaction, teaching quality, campus environment, employment opportunities, and classroom
facilities (Meredith, 1985a), all of which contribute to SET ratings.
In summary, it is noted that SETs can vary depending on whether the evaluations are
from the perspective of faculty members (how teachers teach) or from the students (how
students learn). However, although several variations of SETs exist, comparisons suggest that
as long as the overall objective is to evaluate effective teaching, dimensions within these SETs
are interrelated and may overlap (Marsh, 1984, 2007; Marsh and Bailey, 1993; Marsh and
Dunkin, 1997). A study conducted by the American Association of University Professors
involving 9,000 faculty members found that SETs are generally beset with controversial
biases and issues (Flaherty, 2015). The more important issue is the establishment of the
objectives for SET implementation within the university and careful decision making about
who should participate in the development of such an evaluation instrument.
5. Antecedents of SET
Within the vast literature on SETs, analysis of their validity and reliability has identified
various antecedents affecting effective evaluation. SET ratings are dependent on several
issues, including the various biases already discussed. The first obvious antecedent is the
instructor, as can be discerned from the previous discussions. Besides personality issues,
gender plays an important role. Boring et al. (2016) find that SETs are statistically biased
against female faculty, and that such biases can cause effective teachers to get lower SET
ratings than less effective ones. MacNell et al. (2015) conducted an experiment in which
students were blind to the gender of their online course instructors. For the experiment, two
online course instructors were selected, one male and one female, and each was given two
classes to teach. Later in the course, each instructor presented as one gender to one class
and the opposite gender to the other class. The SET results gathered at the end of the
semester are interesting. Regardless of the instructor’s real gender, students gave
the teacher they thought was male and the actual male teacher higher SET ratings than
the teacher they perceived as female. This experiment clearly shows that the rating
difference results from gender bias (Marcotte, 2014).
Previous studies also show that the time of SET evaluation matters. As discussed, when
SET evaluations are administered during the middle of the semester, results can assist
teachers in re-evaluating their course design to better fit with the students’ needs and
capabilities. However, this phenomenon is limited. SETs are mostly given before the end of
the term or during final examinations, and studies have shown that ratings taken at this
time tend to be lower compared to evaluations conducted a few weeks before final exams
(Braskamp et al., 1984). Interestingly, no significant differences were found when comparing
SET ratings gathered before the end of the semester with those taken in the first week of the
succeeding term (Frey, 1976). This debunks the fallacy that students tend to seek revenge on
teachers because of issues with the grades received (Clayson et al., 2006; Skinner and
Belmont, 1993). In fact, studies have shown that students who received poor grades were
less likely to care enough to complete the SET (Liegle and McDonald, 2005).
In terms of the students themselves, as previously mentioned the background
demographics of students do significantly affect SETs (Stark and Freishtat, 2014). Although
some biases are found between gender and SET ratings (Boring et al., 2016; Feldman, 1977),
there is no consistent evidence that such differences exist (Wachtel, 2006). For instance,
different studies have variously found either male or female students giving higher ratings
than their opposite-gender peers (Tatro, 1995). In some instances, students
evaluate their same-gender teachers higher than their opposite-gender instructors
(Centra, 1993a, b). With regard to ethnicity, Marsh et al. (1997) translated the SEEQ
instrument into Chinese and found no significant differences between their results and
those of studies done in the USA. In other Chinese studies, besides the
significant differences in SET ratings between students of various disciplines and backgrounds
(Chen and Watkins, 2010; Liu et al., 2016), it is well noted that linguistics or foreign language
teachers tend to receive higher evaluations than faculty in other disciplines (Chen and
Watkins, 2010).
Administration conditions or the way SETs are administered also matters. Currently,
SETs are mostly collected using online course evaluations (Spooren and Van Loon, 2012).
However, literature shows that online SETs result in lower participation (Anderson et al.,
2005; Avery et al., 2006), although reminders do increase the response rate (Norris and Conn,
2005). With paper-and-pen SETs, the person administering the evaluation also contributes
to any inconsistencies in the ratings. This holds true even if the teacher leaves the room
during the SET administration and the forms are anonymous, as students may still be
reluctant to provide an objective evaluation (Pulich, 1984). Many researchers have agreed
that SETs should be entrusted to a third-party individual for effective collection (Braskamp
et al., 1984; Centra, 1979).
The characteristics of the course subject also matters. Wachtel (2006) notes that the
nature of the course subject, such as whether it is a required course or an elective, affects
how students rate its importance. Sometimes students give higher ratings for elective course
subjects due to their having a prior interest in the subject (Feldman, 1978). Class schedule
can sometimes affect ratings, and odd schedules such as early morning classes or late
afternoon classes have been found to receive the lowest SET ratings (Koushki and Kuhn,
1982). However, inconsistencies were found in several other studies (Aleamoni, 1999; Centra,
1979; Feldman, 1978; Wachtel, 2006), but it has been suggested that the level of the course is
a relevant factor. The year or level of the course is closely related to the students' age; as
students continue with their studies, they become more mature and aware that
their opinions are taken seriously by the school administration (Spooren and Van Loon,
2012). Class size has also been found to have an impact (Feldman, 1978; Marsh, 2007), since
bigger classes tend to present fewer opportunities for interaction between the teacher and
individual students, which can affect ratings (Meredith and Ogasawara, 1982). Finally, the
subject area and the discipline also greatly influence SET ratings. Since the discipline affects
how classes are held (e.g. laboratory classes compared to lecture intensive courses),
comparisons between colleges are not advisable (Wachtel, 2006). For instance, task-oriented
subjects such as mathematics and science offer less interaction than the social sciences
(Centra, 1993a, b).
In summary, apart from the issues relating to students that affect SETs discussed in the
"Experience Qualities" section of this paper, including their gender, learning motivations,
and grade expectations (Boring et al., 2016), many more have been added to the discussion.
Having examined the various antecedents of SETs, it is apparent that one model is not
suitable for all instances. More specifically, one single type of SET cannot and should not be
used to collect students' perceptions across all courses and subjects. This is actually the main
reason why some higher education institutions choose to use global measures to collect the
summative evaluations of the class. In practice, separate SETs should be used for different
course types. Since this can place a significant burden on institutions, careful analysis and
research is necessary.
6. Conclusion
To sum up, literature has shown that the use of SETs to collect information regarding the
teaching–learning process is commonplace. However, given the complex nature of academic
processes, the data resulting from SETs are questionable and limited. The current paper
presents a review of the literature on SETs, focusing on the concept of service marketing
evaluation. The framework’s three criteria are used to examine SETs, whereby the teacher
represents the “search” qualities, the teaching process the “experience” qualities, and the
learning outcomes the “credence” qualities.
The search qualities with regard to SETs are the easily observable attributes of teachers.
These may include the appearance, gender, age, ethnicity, and personality traits of faculty
members. In practice, course subject selections are made prior to enrollment in a course;
students can compare faculty members when deciding which one to enroll with. Hence, the
expectations of students are important. It has been noted that stereotyping faculty members
according to certain demographic factors such as gender and age is unfair since these
features are fixed and impossible to change. Students should look beyond these obvious
factors and focus more on the teachers’ credibility and competencies.
Beyond initial search preferences, students place much importance on evaluating their
learning experiences. As the literature suggests, for the sake of simplicity, many SETs
include only global summative evaluations of the teaching–learning process. However,
given that the nature of the learning experience is complex and multidimensional, evidence
to support student development should be in the form of formative judgments. Furthermore,
the actual teaching–learning process is composed of repeated service encounters (a semester
in Taiwan typically lasts around 18 weeks). It is, therefore, difficult to determine whether a
single class experience or the collective sum of the semester's learning encounters contributes
to the SET ratings. Considering the influence of prior expectations on SET ratings, teachers
are advised to establish a psychological contract with the students. To make these
agreements effective, they should be negotiated at the start of the term, so that they are
shared contracts between the teacher and the students.
Finally, accepting that university education is characterized by high credence qualities,
students must be aware of the concept of academic delay of gratification, so that they
understand and accept that the benefits of undertaking a course are not immediate.
Combining this with the importance of students’ expectations and the usefulness of creating
a psychological contract, clear definitions of the potential benefits and acceptable
performance should be provided during the first class. Moreover, connections should be
made between previous, current, and future courses, thus providing an overview of the
entire program together with career pathways.
In summary, since SETs are frequently used to collect information on effective teaching, it
is important for higher education institutions to establish what kinds of SETs are effective.
Given the complex factors involved and the various antecedents of SETs, it appears that no
one perfect tool exists to accurately measure what happens in the classroom. As different
SETs may be necessary for different courses and subjects, options such as faculty members'
self-evaluation and/or faculty members' peer-evaluation might be considered to provide what
is lacking in SETs. It is hoped that as technology advances, an innovative way of collecting
SETs might be found to make the process more productive.
References
Adams, G.R. (1977), “Physical attractiveness research: toward a developmental social psychology of
beauty”, Human Development, Vol. 20 No. 4, pp. 217-239, available at: https://doi.org/10.
1159/000271558
Aleamoni, L.M. (1974), “Typical faculty concerns about student evaluation of instruction”, NACTA,
Vol. 20 No. 1, pp. 16-21.
Aleamoni, L.M. (1999), “Student rating myths versus research facts from 1924 to 1998”, Journal of
Personnel Evaluation in Education, Vol. 13 No. 2, pp. 153-166, available at: https://doi.org/10.102
3/A:1008168421283
Aleamoni, L.M. and Hexner, P.Z. (1980), “A review of the research on student evaluation and a report on
the effect of different sets of instructions on student course and instructor evaluation”,
Instructional Science, Vol. 9 No. 1, pp. 67-84.
Algozzine, B., Gretes, J., Flowers, C., Howley, L., Beattie, J., Spooner, F., Mohanty, G. and Bray, M. (2004),
“Student evaluation of college teaching: a practice in search of principles”, College Teaching,
Vol. 52 No. 4, pp. 134-141, available at: https://doi.org/10.3200/CTCH.52.4.134-141
Altbach, P.G. (2011), “Introduction”, in Altbach, P.G. (Ed.), Leadership for World-Class Universities:
Challenges for Developing Countries, Routledge, New York, NY, pp. 1-7.
Altbach, P.G., Reisberg, L. and Rumbley, L.E. (2009), Trends in Global Higher Education: Tracking an
Academic Revolution, UNESCO, Paris.
Ames, C. and Archer, J. (1988), “Achievement goals in the classroom: students’ learning strategies and
motivation processes”, Journal of Educational Psychology, Vol. 80 No. 3, pp. 260-267.
Anderson, H.M., Cain, J. and Bird, E. (2005), “Online student course evaluations: review of literature and
a pilot study”, American Journal of Pharmaceutical Education, Vol. 69 No. 1, pp. 34-43.
Arreola, R.A. (2007), Developing a Comprehensive Faculty Evaluation System: A Guide to Designing,
Building, and Operating Large-Scale Faculty Evaluation Systems, Jossey-Bass, San Francisco, CA.
Arthur, L. (2009), "From performativity to professionalism: lecturers' responses to student feedback",
Teaching in Higher Education, Vol. 14 No. 4, pp. 441-454, available at: https://doi.org/10.1080/
13562510903050228
Avery, R.J., Bryant, W.K., Mathios, A., Kang, H. and Bell, D. (2006), "Electronic course evaluations: does
an online delivery system influence student evaluations?", The Journal of Economic Education,
Vol. 37 No. 1, pp. 21-37, available at: https://doi.org/10.3200/JECE.37.1.21-37
Babad, E., Darley, J.M. and Kaplowitz, H. (1999), "Developmental aspects in students' course selection",
Journal of Educational Psychology, Vol. 91 No. 1, pp. 157-168.
Babcock, P. (2010), “Real costs of nominal grade inflation? New evidence from student course
evaluations”, Economic Inquiry, Vol. 48 No. 4, pp. 983-996, available at: https://doi.org/10.1111/j.
1465-7295.2009.00245.x
Baumeister, R.F. and Leary, M.R. (1997), “Writing narrative literature reviews”, Review of General
Psychology, Vol. 1 No. 3, pp. 311-320.
Beatty, M.J. and Zahn, C.J. (1990), “Are student ratings of communication instructors due to ‘easy’
grading practices? An analysis of teacher credibility and student-reported performance levels”,
Communication Education, Vol. 39 No. 4, pp. 275-282, available at: https://doi.org/10.1080/0
3634529009378809
Bell, R.A. and Daly, J.A. (1984), “The affinity-seeking function of communication”, Communication
Monographs, Vol. 51 No. 2, pp. 91-115, available at: https://doi.org/10.1080/03637758409390188
Bembenutty, H. (1999), “Sustaining motivation and academic goals: the role of academic delay of
gratification”, Learning and Individual Differences, Vol. 11 No. 3, pp. 233-257, available at:
https://doi.org/10.1016/S1041-6080(99)80002-8
Bembenutty, H. (2009), “Teaching effectiveness, course evaluation, and academic performance: the role
of academic delay of gratification”, Journal of Advanced Academics, Vol. 20 No. 2, pp. 326-355.
Bembenutty, H. and Karabenick, S.A. (1998), “Academic delay of gratification”, Learning and
Individual Differences, Vol. 10 No. 4, pp. 329-346, available at: https://doi.org/10.1016/S1041-6080
(99)80126-5
Bembenutty, H. and Karabenick, S.A. (2004), “Inherent association between academic delay of
gratification, future time perspective, and self-regulated learning”, Educational Psychology Review,
Vol. 16 No. 1, pp. 35-57, available at: https://doi.org/10.1023/B:EDPR.0000012344.34008.5c
Bennett, S.K. (1982), “Student perceptions of and expectations for male and female instructors: evidence
relating to the question of gender bias in teaching evaluation”, Journal of Educational
Psychology, Vol. 74 No. 2, pp. 170-179, available at: https://doi.org/10.1037/0022-0663.74.2.170
Berk, R.A. (2005), “Survey of 12 strategies to measure teaching effectiveness”, International Journal of
Teaching and Learning in Higher Education, Vol. 17 No. 1, pp. 48-62.
Berscheid, E. and Walster, E. (1974), “Physical attractiveness”, Advances in Experimental Social
Psychology, Vol. 7 No. 1, pp. 157-215, available at: https://doi.org/10.1016/S0065-2601(08)60037-4
Betoret, F.D. (2007), “The influence of students’ and teachers’ thinking styles on student course
satisfaction and on their learning process”, Educational Psychology: An International Journal of
Experimental Educational Psychology, Vol. 27 No. 2, pp. 219-234, available at: https://doi.org/10.
1080/01443410601066701
Bolliger, D.U. (2004), “Key factors for determining student satisfaction in online courses”, International
Journal on E-Learning, Vol. 3 No. 1, pp. 61-67.
Bonds-Raacke, J. and Raacke, J.D. (2007), “The relationship between physical attractiveness of professors
and students’ ratings of professor quality”, Journal of Psychiatry, Psychology and Mental Health,
Vol. 1 No. 2, pp. 1-7.
Boring, A., Ottoboni, K. and Stark, P.B. (2016), “Student evaluations of teaching (mostly) do not
measure teaching effectiveness”, ScienceOpen Research, available at: https://doi.org/10.14293/S2
199-1006.1.SOR-EDU.AETBZC.v1 (accessed March 30, 2018).
Braga, M., Paccagnella, M. and Pellizzari, M. (2014), "Evaluating students' evaluations of professors",
Economics of Education Review, Vol. 41 No. 1, pp. 71-88, available at: https://doi.org/10.1016/j.
econedurev.2014.04.002
Braskamp, L.A., Brandenburg, D.C. and Ory, J.C. (1984), Evaluating Teaching Effectiveness, Sage,
Newbury Park, CA.
Brownell, J.E. and Swaner, L.E. (2009), “High-impact practices: applying the learning outcomes
literature to the development of successful campus programs", Peer Review, Vol. 11 No. 2,
pp. 26-30.
Buck, S. and Tiene, D. (1989), “The impact of physical attractiveness, gender, and teaching philosophy
on teacher evaluations”, The Journal of Educational Research, Vol. 82 No. 3, pp. 172-177,
available at: https://doi.org/10.1080/00220671.1989.10885887
Carter, R.E. (2016), “Faculty scholarship has a profound positive association with student evaluations
of teaching: except when it doesn’t”, Journal of Marketing Education, Vol. 38 No. 1, pp. 18-36.
Cashin, W.E. and Downey, R.G. (1992), “Using global student rating items for summative evaluation”,
Journal of Educational Psychology, Vol. 84 No. 4, pp. 563-572.
Castleman, B. (2016), “Data-driven behavioral nudges: a low-cost strategy to improve postsecondary
education”, paper presented at the Annual Conference of the Association for Institutional Research,
New Orleans, LA.
Centra, J.A. (1979), Determining Faculty Effectiveness: Assessing Teaching, Research, and Service for
Personnel Decisions and Improvement, Jossey-Bass, San Francisco, CA.
Centra, J.A. (1993a), Determining Faculty Effectiveness, Jossey-Bass, San Francisco, CA.
Centra, J.A. (1993b), Reflective Faculty Evaluation, Jossey-Bass, San Francisco, CA.
Chang, J. (2014), “Number of universities should be reduced: education minister”, available at: www.
chinapost.nownews.com/20140925-50233 (accessed March 30, 2018).
Charles Sturt University Library (2018), "Literature review: traditional or narrative literature reviews",
available at: http://libguides.csu.edu.au/c.php?g=476545&p=3997199 (accessed March 30, 2018).
Chen, C.Y. (2006), "A study on teaching evaluation in public universities in Taiwan (Wǒguó gōnglì dàxué
jiàoshī jiàoxué píngjiàn zhī yánjiū)", unpublished doctoral dissertation, National Chengchi
University, Taipei.
Chen, D.-S. and Chang, M.-K. (2010), “Higher education in Taiwan: the crisis of rapid expansion”, available at:
www.isa-sociology.org/universities-in-crisis/?p=417 (accessed March 30, 2018).
Chen, G.-H. and Watkins, D. (2010), “Stability and correlates of student evaluations of teaching at a
Chinese university”, Assessment and Evaluation in Higher Education, Vol. 35 No. 6, pp. 675-685,
available at: https://doi.org/10.1080/02602930902977715
Chen, L. (2016), “Do student characteristics affect course evaluation completion?”, paper presented at
the 2016 Annual Conference of the Association for Institutional Research, New Orleans, LA.
Chou, C.P. and Ching, G.S. (2012), Taiwan Education at the Crossroad: When Globalization Meets
Localization, Palgrave Macmillan, New York, NY.
Chuah, K.L. and Hill, C. (2004), “Student evaluation of teacher performance: random pre-destination”,
Journal of College Teaching & Learning, Vol. 1 No. 6, pp. 109-114.
Clayson, D.E. (1999), “Students’ evaluation of teaching effectiveness: some implications of stability”,
Journal of Marketing Education, Vol. 21 No. 1, pp. 68-75.
Clayson, D.E. (2009), “Student evaluations of teaching: are they related to what students learn? A meta-
analysis and review of the literature”, Journal of Marketing Education, Vol. 31 No. 1, pp. 16-30.
Clayson, D.E. and Sheffet, M.J. (2006), “Personality and the student evaluation of teaching”, Journal of
Marketing Education, Vol. 28 No. 2, pp. 149-160.
Clayson, D.E., Frost, T.F. and Sheffet, M.J. (2006), “Grades and the student evaluation of instruction: a
test of the reciprocity effect”, Academy of Management: Learning and Education, Vol. 5 No. 1,
pp. 52-65, available at: https://doi.org/10.5465/AMLE.2006.20388384
Coburn, L. (1984), "Student evaluation of teacher performance", ERIC Document Reproduction Service
No. ED289887, National Institute of Education, Washington, DC.
Coffey, M. and Gibbs, G. (2001), "The evaluation of the Student Evaluation of Educational Quality
Questionnaire (SEEQ) in UK higher education", Assessment & Evaluation in Higher Education,
Vol. 26 No. 1, pp. 89-93, available at: https://doi.org/10.1080/02602930020022318
Cohen, P.A. (1980), "Effectiveness of student-rating feedback for improving college instruction: a
meta-analysis of findings", Research in Higher Education, Vol. 13 No. 4, pp. 321-341, available at:
https://doi.org/10.1007/bf00976252
Costin, F., Greenough, W.T. and Menges, R.J. (1971), “Student ratings of college teaching: reliability,
validity, and usefulness”, Review of Educational Research, Vol. 41 No. 5, pp. 511-535.
Cronin, P., Ryan, F. and Coughlan, M. (2008), “Undertaking a literature review: a step-by-step
approach”, British Journal of Nursing, Vol. 17 No. 1, pp. 38-43.
Curran, J.M. and Rosen, D.E. (2006), “Student attitudes toward college courses: an examination
of influences and intentions”, Journal of Marketing Education, Vol. 28 No. 2, pp. 135-148,
available at: https://doi.org/10.1177/0273475306288401
Daniel, B. (2015), “Big data and analytics in higher education: opportunities and challenges”, British
Journal of Educational Technology, Vol. 46 No. 5, pp. 904-920, available at: https://doi.org/10.
1111/bjet.12230
Darby, M.R. and Karni, E. (1973), “Free competition and the optimal amount of fraud”, The Journal of
Law & Economics, Vol. 16 No. 1, pp. 67-88.
Dee, T.S. (2005), “A teacher like me: does race, ethnicity, or gender matter?”, The American Economic
Review, Vol. 95 No. 2, pp. 158-165.
Douglas, D. and Gifford, R. (2001), “Evaluation of the physical classroom by students and professors: a
lens model approach”, Educational Research, Vol. 43 No. 3, pp. 295-309, available at: https://doi.
org/10.1080/00131880110081053
Dudovskiy, J. (2018), “The ultimate guide to writing a dissertation in business studies: a step-by-step
assistance”, Research Methodology.
Duggan, M. and Carlson-Bancroft, A. (2016), “How Emerson College increased participation rates in
course evaluations and NSSE”, paper presented at the Annual Conference of the Association for
Institutional Research, New Orleans, LA.
Eagly, A.H., Ashmore, R.D., Makhijani, M.G. and Longo, L.C. (1991), “What is beautiful is good, but …:
a meta-analytic review of research on the physical attractiveness stereotype”, Psychological
Bulletin, Vol. 110 No. 1, pp. 109-128, available at: https://doi.org/10.1037/0033-2909.110.1.109
Ehrenberg, R.G., Goldhaber, D.D. and Brewer, D.J. (1995), “Do teachers’ race, gender, and ethnicity
matter? Evidence from the national education longitudinal study of 1988”, Industrial and Labor
Relations Review, Vol. 48 No. 3, pp. 547-561.
Feldman, K.A. (1976), “The superior college teacher from the students’ view”, Research in Higher
Education, Vol. 5 No. 3, pp. 243-288, available at: https://doi.org/10.1007/BF00991967
Feldman, K.A. (1977), “Consistency and variability among college students in rating their teachers and
courses: a review and analysis”, Research in Higher Education, Vol. 6 No. 3, pp. 223-274.
Feldman, K.A. (1978), “Course characteristics and college students’ ratings of their teachers: what we
know and what we don’t”, Research in Higher Education, Vol. 9 No. 3, pp. 199-242, available at:
https://doi.org/10.1007/BF00976997
Feldman, K.A. (1984), “Class size and college students’ evaluations of teachers and courses: a closer
look”, Research in Higher Education, Vol. 21 No. 1, pp. 45-116, available at: https://doi.org/10.100
7/BF00975035
Felton, J., Mitchell, J. and Stinson, M. (2004), “Web-based student evaluations of professors: the
relations between perceived quality, easiness, and sexiness”, Assessment & Evaluation in Higher
Education, Vol. 29 No. 1, pp. 91-108, available at: https://doi.org/10.1080/0260293032000158180
Fink, A. (2005), Conducting Research Literature Reviews: From the Internet to Paper, 2nd ed., Sage,
Thousand Oaks, CA.
Fisk, R.P., Grove, S.J. and John, J. (2014), Services Marketing: An Interactive Approach, 4th ed., Cengage
Learning, Mason, OH.
Flaherty, C. (2015), “Flawed evaluations”, available at: www.insidehighered.com/news/2015/06/10/aaup-
committee-survey-data-raise-questions-effectiveness-student-teaching (accessed March 30, 2018).
Flood, B. (1970), "Student evaluation of teacher performance", Journal of Education for Librarianship,
Vol. 10 No. 4, pp. 283-285, available at: https://doi.org/10.2307/40322085
Fox, R., Peck, R.F., Blattstein, A. and Blattstein, D. (1983), “Student evaluation of teacher as a measure
of teacher behavior and teacher impact on students”, The Journal of Educational Research,
Vol. 77 No. 1, pp. 16-21.
Frey, P.W. (1976), “Validity of student instructional ratings: does timing matter?”, The Journal of
Higher Education, Vol. 47 No. 3, pp. 327-336.
Galetzka, M., Verhoeven, J.W.M. and Pruyn, A.T.H. (2006), “Service validity and service reliability of
search, experience and credence services: a scenario study”, International Journal of Service
Industry Management, Vol. 17 No. 3, pp. 271-283, available at: https://doi.org/10.1108/09564230
610667113
Gikas, J. and Grant, M.M. (2013), “Mobile computing devices in higher education: student perspectives
on learning with cellphones, smartphones & social media”, The Internet and Higher Education,
Vol. 19 No. 1, pp. 18-26.
Glaser, B.G. (1965), “The constant comparative method of qualitative analysis”, Social Problems, Vol. 12
No. 4, pp. 436-445.
Glaser, B.G. (1978), Theoretical Sensitivity: Advances in the Methodology of Grounded Theory, Sociology
Press, Mill Valley, CA.
Grammatikopoulos, V., Linardakis, M., Gregoriadis, A. and Oikonomidis, V. (2015), “Assessing
the Students’ Evaluations of Educational Quality (SEEQ) questionnaire in Greek higher
education”, Higher Education, Vol. 70 No. 3, pp. 395-408, available at: https://doi.org/10.1007/s10
734-014-9837-7
Green, B.N., Johnson, C.D. and Adams, A. (2006), “Writing narrative literature review for peer-reviewed
journals: secrets of the trade”, Journal of Chiropractic Medicine, Vol. 5 No. 3, pp. 101-117.
Greene, B.A., Miller, R.B., Crowson, H.M., Duke, B.L. and Akey, K.L. (2004), “Predicting high school
students’ cognitive engagement and achievement: contributions of classroom perceptions and
motivation”, Contemporary Educational Psychology, Vol. 29 No. 4, pp. 462-482, available at:
https://doi.org/10.1016/j.cedpsych.2004.01.006
Grove, S.J. and Fisk, R.P. (1997), "The impact of other customers on service experiences: a critical
incident examination of 'getting along'", Journal of Retailing, Vol. 73 No. 1, pp. 63-85.
Harackiewicz, J.M., Barron, K.E., Pintrich, P.R., Elliot, A.J. and Thrash, T.M. (2002), “Revision of
achievement goal theory: necessary and illuminating”, Journal of Educational Psychology, Vol. 94
No. 3, pp. 638-645, available at: https://doi.org/10.1037/0022-0663.94.3.638
Hart, C. (1998), Doing a Literature Review: Releasing the Social Science Research Imagination, Sage,
Thousand Oaks, CA.
Hildebrand, M. (1973), “The character and skills of the effective professor”, The Journal of Higher
Education, Vol. 44 No. 1, pp. 41-50.
Hildebrand, M., Wilson, R.C. and Dienst, E.R. (1971), Evaluating University Teaching, Center for
Research and Development in Higher Education, Berkeley, CA.
Hill, M.C. and Epps, K.K. (2010), “The impact of physical classroom environment on student
satisfaction and student evaluation of teaching in the university environment”, Academy of
Educational Leadership Journal, Vol. 14 No. 4, pp. 65-79.
Howell, A.J. and Symbaluk, D.G. (2001), "Published student ratings of instruction: revealing and
reconciling the views of students and faculty", Journal of Educational Psychology, Vol. 93 No. 4,
pp. 790-796, available at: https://doi.org/10.1037/0022-0663.93.4.790
Hu, Y.-L. and Ching, G.S. (2012), “Factors affecting student engagement: an analysis on how and why
students learn”, Conference on Creative Education, Scientific Research Publishing, Irvine, CA,
pp. 989-992.
Hu, Y.-L., Hung, C.-H. and Ching, G.S. (2015), "Student-faculty interaction: mediating between student
engagement factors and educational outcome gains”, International Journal of Research Studies in
Education, Vol. 4 No. 1, pp. 43-53, available at: https://doi.org/10.5861/ijrse.2014.800
Hultman, M. and Oghazi, P. (2008), “Good looks - good courses: the link between physical attractiveness
and perceived performance in higher educational services”, in Thyne, M., Deans, K.R. and
Gnoth, J. (Eds), Australian and New Zealand Marketing Academy Conference, University of
Otago, Dunedin, pp. 2588-2597.
Huynh, P. (2015), “Overcoming low response rates for online course evaluations”, paper presented at
the Annual Conference of the Association for Institutional Research, Denver, CO.
Kolb, D.A., Rubin, I.M. and McIntyre, J.M. (1984), Organizational Psychology: An Experimental
Approach to Organizational Behavior, Prentice-Hall, Englewood Cliffs, NJ.
Kornell, N. and Hausman, H. (2016), "Do the best teachers get the best ratings?", Frontiers in Psychology,
Vol. 7 No. 570, pp. 1-8, available at: https://doi.org/10.3389/fpsyg.2016.00570
Koushki, P.A. and Kuhn, H.A.J. (1982), “How reliable are student evaluations of teachers?”, Engineering
Education, Vol. 72 No. 3, pp. 362-367.
Kuh, G.D. (2008), High-Impact Educational Practices: What They Are, Who Has Access to Them, and
Why They Matter, AACU, Washington, DC.
Kuh, G.D., O’Donnell, K. and Reed, S. (2013), Ensuring Quality and Taking High-Impact Practices to
Scale, AACU, Washington, DC.
Kuzmanovic, M., Savic, G., Popovic, M. and Martic, M. (2012), “A new approach to evaluation of university
teaching considering heterogeneity of students’ preferences”, Procedia – Social and Behavioral
Sciences, Vol. 64 No. 1, pp. 402-411, available at: https://doi.org/10.1016/j.sbspro.2012.11.047
Liao, S. (2013), “Psychological contract between teacher and student improves teaching process in the
network courses of college: a study based on the network course of psychology in Shaoguan
University”, in Luo, X. (Ed.), International Conference on Education Technology and
Management Science, Atlantis Press, Amsterdam, pp. 885-887.
Lidice, A. and Saglam, G. (2013), “Using students’ evaluations to measure educational quality”,
Procedia – Social and Behavioral Sciences, Vol. 70 No. 25, pp. 1009-1015, available at:
https://doi.org/10.1016/j.sbspro.2013.01.152
Liegle, J.O. and McDonald, D.S. (2005), “Lessons learned from online vs paper-based computer
information students evaluation system”, Information Systems Education Journal, Vol. 3 No. 37,
pp. 1-14, available at: http://isedj.org/3/37/
Lin, J.-H. and Chen, J.-H. (2016), “SELF-COLA: assessing students learning experiences and
first-year outcomes”, paper presented at the International Conference: Higher Education
Institutional Research, New Orleans, LA.
Lipsey, M.W. and Wilson, D.B. (2001), Practical Meta-Analysis, Vol. 49, Sage, Thousand Oaks, CA.
Liu, S., Keeley, J. and Buskist, W. (2016), "Chinese college students' perceptions of excellent teachers
across three disciplines: psychology, chemical engineering, and education", Teaching of
Psychology, Vol. 43 No. 1, pp. 70-74, available at: https://doi.org/10.1177/0098628315620888
Longanecker, D. (2016), “Higher education in the new normal of the 21st century: an era of
evidence based change”, paper presented at the Annual Conference of the Association for
Institutional Research, New Orleans, LA.
Lubienski, C. (2007), “Marketing schools: consumer goods and competitive incentives for consumer
information”, Education and Urban Society, Vol. 40 No. 1, pp. 118-141.
MacGregor, K. (2015), "Six key elements of an entrepreneurial university", University World News,
available at: www.universityworldnews.com/article.php?story=20151106141848199 (accessed
March 30, 2018).
Machi, L.A. and McEvoy, B.T. (2016), The Literature Review: Six Steps to Success, 3rd ed., Sage,
Thousand Oaks, CA.
MacNell, L., Driscoll, A. and Hunt, A.N. (2015), "What's in a name: exposing gender bias in student
ratings of teaching", Innovative Higher Education, Vol. 40 No. 4, pp. 291-303, available at:
https://doi.org/10.1007/s10755-014-9313-4
Mäkinen, J., Olkinuora, E. and Lonka, K. (2004), “Students at risk: students’ general study orientations
and abandoning/prolonging the course of studies”, Higher Education, Vol. 48 No. 2, pp. 173-188,
available at: https://doi.org/10.1023/B:HIGH.0000034312.79289.ab
Marcotte, A. (2014), “Best way for professors to get good student evaluations? Be male”, available at:
www.slate.com/blogs/xx_factor/2014/12/09/gender_bias_in_student_evaluations_professors_
of_online_courses_who_present.html (accessed March 30, 2018).
Marks, R.B. (2000), “Determinants of student evaluations of global measures of instructor and course
value”, Journal of Marketing Education, Vol. 22 No. 2, pp. 108-119, available at: https://doi.org/10.
1177/0273475300222005
Marlin, J.W. Jr (1987), “Student perceptions of end-of-course evaluations”, The Journal of Higher
Education, Vol. 58 No. 6, pp. 704-716.
Marsh, H.W. (1980), “The influence of student, course, and instructor characteristics in evaluations of
university teaching”, American Educational Research Journal, Vol. 17 No. 2, pp. 219-237.
Marsh, H.W. (1982), “SEEQ: a reliable, valid, and useful instrument for collecting students’ evaluations
of university teaching”, British Journal of Educational Psychology, Vol. 52 No. 1, pp. 77-95,
available at: https://doi.org/10.1111/j.2044-8279.1982.tb02505.x
Marsh, H.W. (1984), “Students’ evaluations of university teaching: dimensionality, reliability, validity,
potential biases, and utility", Journal of Educational Psychology, Vol. 76 No. 5, pp. 707-754,
available at: https://doi.org/10.1037/0022-0663.76.5.707
Marsh, H.W. (1987), “Students’ evaluations of university teaching: research findings, methodological
issues, and directions for future research”, International Journal of Educational Research, Vol. 11
No. 3, pp. 253-388, available at: https://doi.org/10.1016/0883-0355(87)90001-2
Marsh, H.W. (1991), “Multidimensional students’ evaluations of teaching effectiveness: a test of
alternative higher-order structures”, Journal of Educational Psychology, Vol. 83 No. 2, pp. 285-296,
available at: https://doi.org/10.1037/0022-0663.83.2.285
Marsh, H.W. (2007), “Students’ evaluations of university teaching: dimensionality, reliability, validity,
potential biases and usefulness”, in Perry, R.P. and Smart, J.C. (Eds), The Scholarship of
Teaching and Learning in Higher Education: An Evidence-Based Perspective, Springer,
Dordrecht, pp. 319-383.
Marsh, H.W. and Bailey, M. (1993), “Multidimensional students’ evaluations of teaching effectiveness: a
profile analysis”, The Journal of Higher Education, Vol. 64 No. 1, pp. 1-18, available at: https://
doi.org/10.2307/2959975
Marsh, H.W. and Dunkin, M.J. (1997), “Students’ evaluations of university teaching: a multidimensional
perspective”, in Perry, R.P. and Smart, J.C. (Eds), Effective Teaching in Higher Education:
Research and Practice, Agathon, New York, NY, pp. 241-320.
Marsh, H.W. and Roche, L.A. (1997), “Making students’ evaluations of teaching effectiveness effective: the
critical issues of validity, bias, and utility”, American Psychologist, Vol. 52 No. 11, pp. 1187-1197,
available at: https://doi.org/10.1037/0003-066X.52.11.1187
Marsh, H.W. and Roche, L.A. (2000), “Effects of grading leniency and low workload on students’
evaluations of teaching: popular myth, bias, validity, or innocent bystanders?”, Journal of
Educational Psychology, Vol. 92 No. 1, pp. 202-228, available at: https://doi.org/10.1037/0022-0
663.92.1.202
Marsh, H.W., Hau, K.-T., Chung, C.-M. and Siu, T.L.P. (1997), "Students' evaluations of university
teaching: Chinese version of the students' evaluations of educational quality instrument",
Journal of Educational Psychology, Vol. 89 No. 3, pp. 568-572, available at: https://doi.org/10.10
37/0022-0663.89.3.568
Mehrabian, A. (1968), “Some referents and measures of nonverbal behavior”, Behavior Research Methods
& Instrumentation, Vol. 1 No. 6, pp. 203-207, available at: https://doi.org/10.3758/BF03208096
Meredith, G.M. (1977), "Faculty-based indicators of teaching effectiveness in higher education", Psychological
Reports, Vol. 41 No. 2, pp. 675-676, available at: https://doi.org/10.2466/pr0.1977.41.2.675
Meredith, G.M. (1978), “Student-based ratings of teaching effectiveness in legal education”,
Psychological Reports, Vol. 43 No. 3, pp. 953-954, available at: https://doi.org/10.2466/pr0.1978.
43.3.953
Meredith, G.M. (1985a), “Student-based indicators of campus satisfaction as an outcome of higher
education”, Psychological Reports, Vol. 56 No. 2, pp. 597-598, available at: https://doi.org/10.2466/
pr0.1985.56.2.597
Meredith, G.M. (1985b), "Two rating indicators of excellence in teaching in lecture
format courses", Psychological Reports, Vol. 56 No. 1, pp. 52-54, available at: https://doi.org/
10.2466/pr0.1985.56.1.52
Meredith, G.M. and Bub, D.N. (1977), “Evaluation of apprenticeship teaching in higher education”,
Psychological Reports, Vol. 40 No. 3, pp. 1123-1126, available at: https://doi.org/10.2466/pr0.1977.
40.3c.1123
Meredith, G.M. and Ogasawara, T.H. (1982), “Preference for class size in lecture-format courses among
college students”, Psychological Reports, Vol. 51 No. 3, pp. 961-962, available at: https://doi.org/
10.2466/pr0.1982.51.3.961
Miles, M. and Huberman, M. (1994), Qualitative Data Analysis, 2nd ed., Sage, Beverly Hills, CA.
Miller, R.B., Greene, B.A., Montalvo, G.P., Ravindran, B. and Nichols, J.D. (1996), “Engagement in
academic work: the role of learning goals, future consequences, pleasing others, and perceived
ability”, Contemporary Educational Psychology, Vol. 21 No. 4, pp. 388-422, available at: https://
doi.org/10.1006/ceps.1996.0028
Mitchell, M., Leachman, M. and Masterson, K. (2016), “Funding down, tuition up”, available at: www.
cbpp.org/research/state-budget-and-tax/funding-down-tuition-up (accessed March 30, 2018).
Mogan, J. and Knox, J.E. (1987), “Characteristics of ‘best’ and ‘worst’ clinical teachers as perceived by
university nursing faculty and students”, Journal of Advanced Nursing, Vol. 12 No. 3, pp. 331-337,
available at: https://doi.org/10.1111/j.1365-2648.1987.tb01339.x
Mortelmans, D. and Spooren, P. (2009), “A revalidation of the SET37 questionnaire for student
evaluations of teaching”, Educational Studies, Vol. 35 No. 5, pp. 547-552, available at: https://doi.
org/10.1080/03055690902880299
Murray, H.G., Rushton, J.P. and Paunonen, S.V. (1990), “Teacher personality traits and student
instructional ratings in six types of university courses”, Journal of Educational Psychology,
Vol. 82 No. 2, pp. 250-261, available at: https://doi.org/10.1037/0022-0663.82.2.250
Naidoo, R. (2016), “Higher education is trapped in a competition fetish”, University World News,
available at: www.universityworldnews.com/article.php?story=20160413131355443 (accessed
March 30, 2018).
Nasser, F. and Fresko, B. (2002), “Faculty views of student evaluation of college teaching”, Assessment
& Evaluation in Higher Education, Vol. 27 No. 2, pp. 187-198, available at: https://doi.org/10.10
80/02602930220128751
National Statistics Office of Taiwan (2018), "Statistical tables", available at: https://eng.stat.gov.tw/lp.
asp?ctNode=1629&CtUnit=779&BaseDSD=7&mp=5 (accessed January 1, 2018).
Norris, D. and Baer, L. (2013), Building Organizational Capacity for Analytics, Educause, Louisville, CO.
Norris, J. and Conn, C. (2005), “Investigating strategies for increasing student response rates to online
delivered course evaluations”, Quarterly Review of Distance Education, Vol. 6 No. 1, pp. 13-29.
Nowell, C. (2007), "The impact of relative grade expectations on student evaluation of teaching",
International Review of Economics Education, Vol. 6 No. 2, pp. 42-56, available at: https://doi.org/
10.1016/S1477-3880(15)30104-3
Otani, K., Kim, B.J. and Cho, J.-I. (2012), “Student evaluation of teaching (SET) in higher education: how
to use SET more effectively and efficiently in public affairs education”, Journal of Public Affairs
Education, Vol. 18 No. 3, pp. 531-544.
Otter, S. (1995), “Learning outcomes in higher education”, in Burke, J. (Ed.), Outcomes, Learning and the
Curriculum: Implications for NVQ’s, GNVQ’s and Other Qualifications, Falmer Press,
Bristol, PA, pp. 273-284.
Overall, J.U. and Marsh, H.W. (1979), “Midterm feedback from students: its relationship to instructional
improvement and students’ cognitive and affective outcomes”, Journal of Educational
Psychology, Vol. 71 No. 6, pp. 856-865.
Patton, T.O. (1999), “Ethnicity and gender: an examination of its impact on instructor credibility in the
university classroom”, The Howard Journal of Communications, Vol. 10 No. 2, pp. 123-144,
available at: https://doi.org/10.1080/106461799246852
Perry, R.P., Abrami, P.C., Leventhal, L. and Check, J. (1979), “Instructor reputation: an expectancy
relationship involving student ratings and achievement”, Journal of Educational Psychology,
Vol. 71 No. 6, pp. 776-787, available at: https://doi.org/10.1037/0022-0663.71.6.776
Petticrew, M. and Roberts, H. (2005), Systematic Reviews in the Social Sciences: A Practical Guide,
Blackwell Publishers, Malden, MA.
Picciano, A.G. (2012), “The evolution of big data and learning analytics in American higher education”,
Journal of Asynchronous Learning Networks, Vol. 16 No. 3, pp. 9-20.
Pietersen, C. (2014), “Negotiating a shared psychological contract with students”, Mediterranean
Journal of Social Sciences, Vol. 5 No. 7, pp. 25-33, available at: https://doi.org/10.5901/mjss.2014.
v5n7p25
Pogue, L.L. and Ahyun, K. (2006), “The effect of teacher nonverbal immediacy and credibility on
student motivation and affective learning”, Communication Education, Vol. 55 No. 3,
pp. 331-344, available at: https://doi.org/10.1080/03634520600748623
Poonyakanok, P., Thisayakorn, N. and Digby, P.W. (1986), “Student evaluation of teacher performance:
some initial research findings from Thailand”, Teaching and Teacher Education, Vol. 2 No. 2,
pp. 145-154, available at: https://doi.org/10.1016/0742-051X(86)90013-2
Powell, R.W. (1977), “Grades, learning, and student evaluation of instruction”, Research in Higher
Education, Vol. 7 No. 3, pp. 193-205, available at: https://doi.org/10.1007/BF00991986
Pozo-Muñoz, C., Rebolloso-Pacheco, E. and Fernández-Ramírez, B. (2000), “The ‘ideal teacher’:
implications for student evaluation of teacher effectiveness”, Assessment & Evaluation in Higher
Education, Vol. 25 No. 3, pp. 253-263, available at: https://doi.org/10.1080/02602930050135121
Pravikoff, P. and Nadasen, D. (2015), “Course evaluations simplified: the largest US public university
did it and you can too”, paper presented at the Annual Conference of the Association for
Institutional Research, Denver, CO.
Pulich, M.A. (1984), “Better use of student evaluations for teaching effectiveness”, Improving College
and University Teaching, Vol. 32 No. 2, pp. 91-94.
Remedios, R. and Lieberman, D.A. (2008), “I liked your course because you taught me well: the
influence of grades, workload, expectations and goals on students’ evaluations of teaching”,
British Educational Research Journal, Vol. 34 No. 1, pp. 91-115, available at: https://doi.org/10.10
80/01411920701492043
Rice, L.C. (1988), “Student evaluation of teaching: problems and prospects”, Teaching Philosophy,
Vol. 11 No. 4, pp. 329-344, available at: https://doi.org/10.5840/teachphil198811484
Richardson, J.T.E. (2005), “Instruments for obtaining student feedback: a review of the literature”,
Assessment & Evaluation in Higher Education, Vol. 30 No. 4, pp. 387-415, available at: https://doi.
org/10.1080/02602930500099193
Rivera, J.C. and Rice, M.L. (2002), “A comparison of student outcomes and satisfaction between traditional and web based course offerings”, Online Journal of Distance Learning Administration, Vol. 5 No. 3, pp. 1-11, available at: www.westga.edu/~distance/ojdla/fall53/rivera53.html (accessed March 30, 2018).
Rocco, T.S. and Plakhotnik, M.S. (2009), “Literature reviews, conceptual frameworks, and theoretical
frameworks: terms, functions, and distinctions”, Human Resource Development Review, Vol. 8
No. 1, pp. 120-130, available at: https://doi.org/10.1177/1534484309332617
Rodin, M. and Rodin, B. (1972), “Student evaluations of teachers”, Science, Vol. 177 No. 4055, pp. 1164-1166, available at: https://doi.org/10.1126/science.177.4055.1164
Roehling, M.V. (1997), “The origins and early development of the psychological contract construct”,
Journal of Management History, Vol. 3 No. 2, pp. 204-217.
Rother, E.T. (2007), “Systematic literature review × narrative review”, Acta Paulista de Enfermagem,
Vol. 20 No. 2, pp. vii-viii.
Rousseau, D.M. (1995), Psychological Contracts in Organizations, Sage, Thousand Oaks, CA.
Rousseau, D.M. (2001), “Schema, promise and mutuality: the building blocks of the psychological
contract”, Journal of Occupational and Organizational Psychology, Vol. 74 No. 4, pp. 511-541,
available at: https://doi.org/10.1348/096317901167505
Sander, P., Stevenson, K., King, M. and Coates, D. (2000), “University students’ expectations of
teaching”, Studies in Higher Education, Vol. 25 No. 3, pp. 309-323, available at: https://doi.org/10.
1080/03075070050193433
Schellhase, K.C. (2010), “The relationship between student evaluation of instruction scores and faculty
formal educational coursework”, Athletic Training Education Journal, Vol. 5 No. 4, pp. 156-164.
Shulman, L. (1987), “Knowledge and teaching: foundations of the new reform”, Harvard Educational
Review, Vol. 57 No. 1, pp. 1-23, available at: https://doi.org/10.17763/haer.57.1.j463w79r56455411
Skinner, E.A. and Belmont, M.J. (1993), “Motivation in the classroom: reciprocal effects of teacher
behavior and student engagement across the school year”, Journal of Educational Psychology,
Vol. 85 No. 4, pp. 571-581, available at: https://doi.org/10.1037/0022-0663.85.4.571
Sohr-Preston, S.L., Boswell, S.S., McCaleb, K. and Robertson, D. (2016), “Professor gender, age, and
‘hotness’ in influencing college students’ generation and interpretation of professor ratings”,
Higher Learning Research Communications, Vol. 6 No. 3, pp. 1-23, available at: https://doi.org/10.
18870/hlrc.v6i3.328
Solomon, M.R., Surprenant, C.F., Czepiel, J.A. and Gutman, E.G. (1985), “A role theory perspective on
dyadic interactions: the service encounter”, Journal of Marketing, Vol. 49 No. 1, pp. 99-111.
Spooren, P. and Van Loon, F. (2012), “Who participates (not)? A non-response analysis on students’
evaluations of teaching”, Procedia – Social and Behavioral Sciences, Vol. 69 No. 1, pp. 990-996.
Spooren, P., Brockx, B. and Mortelmans, D. (2013), “On the validity of student evaluation of teaching:
the state of the art”, Review of Educational Research, Vol. 83 No. 4, pp. 598-642, available at:
https://doi.org/10.3102/0034654313496870
Sproule, R. (2000), “Student evaluation of teaching: methodological critique”, Education Policy Analysis
Archives, Vol. 8 No. 50, pp. 1-23, available at: https://doi.org/10.14507/epaa.v8n50.2000
Staley, D.J. and Trinkle, D.A. (2011), “The changing landscape of higher education”,
Educause, pp. 16-32, available at: http://er.educause.edu/articles/2011/2/the-changing-
landscape-of-higher-education (accessed March 30, 2018).
Stark, P. and Freishtat, R. (2014), “An evaluation of course evaluations”, ScienceOpen Research, available at:
https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1 (accessed March 30, 2018).
Tagiuri, R. (1969), “Person perception”, in Lindzey, G. and Aronson, E. (Eds), The Handbook of Social
Psychology: The Individual in a Social Context, 2nd ed., Vol. 3, Addison-Wesley, Reading, MA,
pp. 395-449.
Tatro, C.N. (1995), “Gender effects on student evaluations of faculty”, Journal of Research and
Development in Education, Vol. 28 No. 3, pp. 169-173.
Torraco, R.J. (2005), “Writing integrative literature reviews: guidelines and examples”, Human Resource Development Review, Vol. 4 No. 3, pp. 356-367.
Tseng, Y.-H. (2016), “Development and application of databases for institutional research and analysis (Xiàowù yánjiū zīliàokù de jiàngòu yǔ fēnxī yìngyòng)”, Contemporary Educational Research Quarterly, Vol. 24 No. 1, pp. 107-134, available at: https://doi.org/10.6151/CERQ.2016.2401.04
Usher, A. (2009), “Ten years back and ten years forward: developments and trends in higher education in Europe region”, paper presented at the UNESCO Forum on Higher Education in the Europe Region, Bucharest.
Vlăsceanu, L., Grünberg, L. and Pârlea, D. (2004), Quality Assurance and Accreditation: A Glossary
of Basic Terms and Definitions, United Nations Educational, Scientific and Cultural
Organization, Bucharest.
Voss, R., Gruber, T. and Szmigin, I. (2007), “Service quality in higher education: the role of student
expectations”, Journal of Business Research, Vol. 60 No. 9, pp. 949-959, available at: https://doi.
org/10.1016/j.jbusres.2007.01.020
Wachtel, H.K. (2006), “Student evaluation of college teaching effectiveness: a brief review”, Assessment
& Evaluation in Higher Education, Vol. 23 No. 2, pp. 191-212, available at: https://doi.org/10.10
80/0260293980230207
Wilson, A., Zeithaml, V.A., Bitner, M.J. and Gremler, D.D. (2012), Services Marketing: Integrating
Customer Focus Across the Firm, 2nd European ed., McGraw-Hill Education, Berkshire.
WTO (1998), “Education services”, Document No. S/C/W/49 98-3691, World Trade Organization,
Geneva, available at: www.wto.org/english/tratop_e/serv_e/w49.doc (accessed March 30, 2018).
Wright, P., Whittington, R. and Whittenburg, G.E. (1984), “Student ratings of teaching effectiveness:
what the research reveals”, Journal of Accounting Education, Vol. 2 No. 2, pp. 5-30, available at:
https://doi.org/10.1016/0748-5751(84)90002-2
Wu, P.-M. (2018), “The declining birthrate threatens: the number of new college students will drop by nearly 100,000 within 10 years (Shǎo zǐ huà fā wēi, dà zhuān xiào yuàn xīn shēng 10 nián hòu jiǎn jìn 10 wàn rén)”, available at: https://udn.com/news/story/7266/3156797 (accessed March 30, 2018).
Zeithaml, V.A. (1981), “How consumer evaluation processes differ between goods and services”,
in Donnelly, J.H. and George, W.R. (Eds), Marketing of Services, American Marketing
Association, Chicago, IL, pp. 186-190.
Zhang, Y.-W. (2003), “Development of student instructional rating scale (Dàxuéshēng jiàoxué píngjiàn liàngbiǎo zhī fāzhǎn yánjiū)”, Journal of Education and Psychology, Vol. 26 No. 2, pp. 227-239.
Corresponding author
Gregory Ching can be contacted at: gregory_ching@yahoo.com