WESLEYAN UNIVERSITY-PHILIPPINES
GRADUATE SCHOOL
A proposed model for selecting appropriate assessment
JHON CARLO S. VILLA
Wesleyan University-Philippines
March 8, 2025
Abstract
This study proposes a model for selecting appropriate assessment methods.
One of the principles of high-quality assessment is appropriateness:
an assessment must be constructively aligned with the level of the
learning outcomes to be considered appropriate. The proposed model is
comprehensible enough to use in identifying the assessment methods to be
employed. The researchers analyzed content about assessment, invited
selected faculty members and school administrators to a focus group
discussion, and validated the results to finalize the proposed model.
Considering the levels of learning outcomes under the taxonomy of
objectives in the cognitive domain, the types of assessment, and examples
of assessment methods, the proposed model was carefully designed and
developed to establish the alignment of assessment methods. The proposed
model can be used by educators in designing assessment methods that are
properly aligned and appropriate. It is also, to the researchers'
knowledge, the first model for selecting appropriate assessment methods.
Keywords: assessment, appropriateness, alignment, learning outcomes,
types of assessment
1. Introduction
In their book, De Guzman and Adamos (2015) mention several principles of
quality assessment. One of these is the appropriateness of assessment
methods. They also note that in an outcome-based approach, program and
course outcomes are linked to the achievement of learning through teaching
methods and resources that support assessment. This notion is defined by
Biggs (2003) as constructive alignment.
In constructive alignment, assessment corresponds with the intended
learning outcomes, and students develop knowledge through educational
experiences. Students demonstrate evidence of achieving outcomes through
assessments that facilitate the construction of knowledge and skills.
Instruction offers students the opportunity to exhibit their understanding
(Biggs, 2011).
Researchers have suggested that the objective of assessment ought to tackle
national challenges, including global competition, preparedness for college
and careers, and disparities in academic achievement (Darling-Hammond,
2015; Wagner, 2014).
Why is assessment necessary? Richmond et al. (2019) present a range of
objectives related to teacher and program assessment. These encompass
enhancing teacher quality and illustrating effects on student outcomes,
accreditation, program enhancement, self-assessment, and instruction
centered on social justice.
Assessment is a vital component of the learning process, significantly
enhancing the quality of student learning. In my experience, appropriate
assessment not only affords students the opportunity to learn but also
enables the tutor to evaluate learning outcomes and equips learners for the
workforce. Conversely, inappropriate assessment leads students to adopt a
superficial approach to learning, resulting in their departure from university
without acquiring the skills and attributes valued by employers and the
institution. Therefore, the utilization of suitable assessments and feedback is
essential, not only for improving students' learning experiences but also for
ensuring they achieve a more profound conceptual grasp (Yerrabati, 2017).
It is essential to identify methods for leveraging data effectively to foster the
growth of exceptional, equity-focused educators who enhance student
learning and are dedicated to addressing social justice issues with
proficiency. Furthermore, it is essential to recognize that assessment should
not serve to assimilate children, but rather to respond to, sustain, and
revitalize their individual needs (Ladson-Billings, 2014; McCarty & Lee, 2014;
Paris, 2012). Assessment is inherently complex and fulfills various purposes.
The various purposes should dictate both the types of data collected and the
methods employed for their analysis, utilization, and dissemination
(Richmond et al., 2019).
What criteria should we evaluate? Goe, Bell, and Little (2008) assert, "What is
measured is a reflection of what is valued, and as a corollary, what is
measured is valued" (p. 4). The parameters measured are critically significant
in an assessment instrument. Which knowledge, dispositions, and skills ought
to be evaluated? This subject is addressed using several sources, including
standards, accreditation criteria, nationally implemented classroom
observation methods, theoretical frameworks, research, practical application,
and personal experience. Numerous scholars have proposed frameworks for
comprehending the knowledge and competencies required of teachers.
Darling-Hammond (2012) delineates a specific set of knowledge and skills,
which includes understanding content concepts; linking content to prior
knowledge and experiences; scaffolding learning; facilitating standards-based
and outcome-based instruction; offering students opportunities to apply
knowledge and master content; assessing student learning, making
instructional adjustments, and aiding students in self-monitoring their
learning; providing explicit feedback; and managing student behavior and
classroom routines.
Other academics have proposed that the field of education can be enhanced
through a concentrated emphasis on a specific set of "high-leverage" or
"core" techniques that transcend grade levels, courses, and student
demographics (Ball & Forzani, 2009; Forzani, 2014; Grossman, Hammerness,
& McDonald, 2009; McDonald, Kazemi, & Kavanagh, 2013). It is important to
acknowledge that some have highlighted the risk of an emphasis on subject
matter potentially prioritizing content over actions that tackle social justice
and cultural responsiveness (Philip et al., 2018; Richmond, Bartell, Floden, &
Petchauer, 2017). Generic knowledge and abilities are frequently regarded as
objective and unbiased, intended to facilitate effective instruction for all
pupils. However, others have contested this notion. Mirra, Garcia, and
Morrell (2015) contend that neutral assessment instruments may marginalize
specific populations, asserting that what is perceived as objective actually
reflects the experiences of those with greater societal power, while the
experiences of marginalized groups are minimized or entirely disregarded (p.
17). Flynn (2015) rejects the concept of neutrality as deceptive, illusory, and
misleading. He contends, “The creation of a tool occurs within a context
characterized by specific assumptions, intentions, and consequences (both
intended and unintended).” (p. 212). In actuality, “Most performance
measurement protocols inadequately address teaching practices, commonly
referred to as ‘culturally responsive pedagogy’ or CRP; consequently, any
high-stakes teaching evaluation is likely—unintentionally and ironically—to
disadvantage the students who require highly effective teaching the most.”
(Hawley & Irvine, 2011, p. 1). This practice positions the dominant culture as
the focal point of assessment, leading to the systemic marginalization of
individuals who are considered "other." (Salazar, 2018).
The endeavor to ascertain unbiased, objective metrics may obscure
epistemic disparities over what constitutes effective teaching and learning.
Jones and Brownell (2014) illustrate that frequently utilized observation
instruments, such as the Framework for Teaching, may not accurately
represent the instructional techniques that research suggests are
advantageous for children with disabilities. Nava critiques the concepts of
objectivity and neutrality by considering the values and requirements of
varied learners in her paper in this issue. She asserts that numerous
classroom observation evaluation instruments omit equity, humanizing
pedagogy, and social justice, and subsequently outlines the creation of
content-specific observation rubrics that reflect the program's ideals of
equality and humanizing pedagogy. The discipline will persist in addressing
the issue of what ought to be evaluated, and we will maintain our
involvement in both context-reduced and context-responsive evaluation
methodologies. Further study is required to assess the effects of each
technique on pedagogy, learning outcomes, job satisfaction, and employee
retention (Richmond et al., 2019).
By "who," we refer to "Who ought to be responsible for developing and
administering assessments; who should interpret and convey the data
derived from these assessments; and for whom are these assessments
significant?" The Organization for Economic Cooperation and Development
(2009) underscores the necessity of involving various stakeholders in teacher
assessment, including parents, students, teachers, school leaders, teacher
unions, educational administrators, and policymakers in the formulation and
execution of teacher evaluation and assessment procedures (p. 4). Salazar
and Lerner (2019) underscore the significance of involving students and
parents in assessments, “especially those whose survival relies on education
as the ultimate equalizer, as asserted by Horace Mann” (p. 144). While
researchers and practitioners advocate for an inclusive approach to
assessment participation, it is crucial to inquire: For whom is assessment
significant? It is significant for all participants in the educational community,
particularly for those who are consistently marginalized.
The terminology employed to characterize assessment participants is
significant; for instance, the term “stakeholder” suggests a transactional
framework focused on return on investment, whereas “community” conveys
a sense of collaboration and shared understanding. This issue's essays
employ diverse terminology to characterize the assessment community and
affirm that assessment holds significance for students, educators,
policymakers, programs, and the discipline. Ultimately, assessment is
significant when participants can leverage the results effectively to enhance
teaching and learning. At the same time, who develops the assessments also
matters. This involves not only comprehensive consideration of the knowledge
and skills required for assessment development, but also the values that
developers bring to the task: assessment developers promote educational
concepts grounded in their subjective evaluations of quality and value (Flynn, 2015).
Therefore, it is crucial for those creating assessments of teacher and program
quality to be inclusive and representative of the communities these
assessments aim to serve (Salazar & Lerner, 2019).
What criteria should we employ for assessment? Scholars in the K-12
evaluation domain have extensively documented the intricate and persistent
challenges that arise when measurement instruments are employed for
purposes other than their intended design. Classroom
observation is the predominant method employed to assess teacher
effectiveness (Little, Goe, & Bell, 2009). Since 2013, all states have mandated
classroom observation as an element of their teacher evaluation systems
(Hull, 2013). The majority of observation instruments employed in teacher
assessment were designed for research environments, and we should not
presume that they will operate similarly within the realm of teacher
education. Research indicates that observation scores frequently exhibit bias
and are influenced by numerous contextual factors (Garrett & Steinberg,
2015; Gill, Shoji, Coen, & Place, 2016; Steinberg & Garrett, 2016; Whitehurst,
Chingos, & Lindquist, 2014).
School and building administrators frequently encounter difficulties in
employing observation systems as they were instructed (Bell et al., 2013;
Bell, Jones, Qi, & Lewis, 2018; Donaldson & Woulfin, 2018). Administrators
engage in the observation process with distinct methodologies and priorities
compared to raters in a research study; their emphasis extends beyond
merely generating "reliable" scores to foster relationships with employees,
with a primary objective of facilitating staff improvement. These findings
influence our perspective on the utilization of these tools in preservice
environments. How can those expected to utilize these tools be equipped to
comprehend the intended purpose of certain tools and to employ them, as
well as to apply scores in a correct and acceptable manner? Salazar and
Lerner (2019) urge scholars and practitioners to "transcend our self-imposed
limitations" (p. 144) in the context of teacher assessment. They and other
scholars (e.g., Croft, Roberts, & Stenhouse, 2015) promote community and
equitable approaches to teacher assessment.
This encompasses involving students and communities in the creation of
assessment instruments; evaluating the influence of teachers on students'
comprehensive potential (e.g., academic, cultural competence,
transformative abilities); and employing alternative terminology for teacher
assessment, such as “teaching and learning collectivo” or “teacher and
student development,” to foster collaboration and support (Salazar & Lerner,
2019, p. 145).
In the educational context, particularly in tertiary education, most teachers
employ a variety of assessment tools to determine students' learning.
However, selecting an appropriate assessment is often overlooked when
assessments are given, which may result in poor performance due to
misalignment with what is supposed to be measured. This paper therefore
proposes a model for selecting appropriate assessments, intended to help
educators identify and select appropriate measures of students' learning.
Research Problem
This paper aims to propose a model for selecting appropriate assessments.
Specifically, it seeks to answer the following research questions:
1. What levels of student learning may be considered in selecting
appropriate assessment methods?
2. What types of assessment may be considered in selecting appropriate
assessment methods?
3. What model for selecting appropriate assessments may be developed?
2. Methodology
This paper utilizes a qualitative research design to answer the research problems.
Data Gathering Procedure
To be able to collect enough data for this study, the following phases were
followed:
Phase I - Content Analysis. In this phase, the researchers examined
assessment-of-learning books and selected topics about the alignment and
appropriateness of assessment methods. This helped the researchers gather
information for the development of the proposed model.
Phase II – Focus Group Discussion. In this phase, the researchers invited
faculty members and school administrators for a dialogue and consultation
related to selecting and utilizing different assessment methods to assess
students' learning. During the focus group discussion, the following key
questions were discussed:
1. What assessment methods do the participants utilize in assessing
students' learning?
2. What levels of student learning may be considered in selecting
appropriate assessments?
3. What types of assessment do the participants consider in selecting
appropriate assessments?
Phase III – Development of the Proposed Model for Selecting
Appropriate Assessment. After gathering the information from Phases I and
II, a preliminary model was developed showing the process of selecting
appropriate assessment methods.
Phase IV – Validation of the Proposed Model for Selecting
Appropriate Assessment. During this phase, the proposed model was
presented to faculty members, educators, administrators, and experts. The
proposed model was used to identify and select appropriate assessments,
and its flaws and strong points were identified to finalize the proposed
model.
Participants of the Study
The researchers invited 33 faculty members and 4 school administrators to
discuss the levels of students' learning as well as the types of assessment
they consider in administering assessment methods. These participants were
selected purposively, based on their expertise, to provide valid and reliable
information about assessment. They were also selected according to their
enthusiasm to be part of the study.
Data Analysis
To preserve the complete and comprehensive results of the data, the
researchers interpreted the results qualitatively. The outcomes of the
content analysis are presented as part of writing or identifying the learning
outcomes. The results from Phases II–IV were evaluated qualitatively using
thematic analysis, coding key themes from participant replies and discussion
outcomes: (1) analyzing the results of interviews and focus group
discussions; (2) coding by selecting data relevant to the study;
(3) organizing the data into study-related themes; and (4) drafting the report
based on the study's research questions. Actual statements are reported to
support and strengthen certain concepts, methods, and attitudes (Pawilen,
2021).
3. Results and Discussion
The findings of this study were organized and presented according to the
research questions of this paper.
1. Levels of Student Learning that can be considered in Selecting
Appropriate Assessment Methods
The content analysis and the focus group discussion conducted with the
participants yielded the levels of student learning that can be considered in
selecting appropriate assessment methods. Table 1 presents learning
outcomes based on the levels of objectives in the cognitive domain. These
are also drawn from the experiences of some educators in aligning
assessment with their learning outcomes. The levels of learning outcomes
are based on the Revised Bloom's Taxonomy of objectives in the cognitive
domain, under which lower-order thinking skills and higher-order thinking
skills can be classified.
Table 1. Levels of Student Learning that can be considered in
Selecting Appropriate Assessment

Revised Bloom's Taxonomy in the Cognitive Domain
Lower-Order Thinking Skills: Remembering, Understanding, Applying
Higher-Order Thinking Skills: Analyzing, Evaluating, Creating
The Revised Bloom's Taxonomy, identified here as the framework for the
levels of learning outcomes, is needed to select an appropriate assessment
aligned with the objectives or learning outcomes. The taxonomy was
originally developed by Benjamin Bloom and later revised by Anderson and
Krathwohl in 2001, with six levels under the cognitive domain. According to
Abrogena et al. (2024), learning objectives, teaching strategies, learning
activities, and assessments are the four parts of instruction in the field of
education. Setting
goals for the students' development is the first thing a teacher does before
instruction begins. Based on these goals, learning objectives or outcomes can
be developed. A learning outcome refers to the specific knowledge, abilities,
and values that a student has attained upon completing a unit or period of
study, resulting from their involvement in relevant and significant learning
experiences. A structured compilation of learning outcomes assists educators
in planning and executing suitable instruction, as well as in devising reliable
assessment activities and procedures (De Guzman & Adamos, 2015).
Krathwohl (2002) as cited in De Guzman & Adamos (2015), emphasized that
the revised Bloom's taxonomy table serves not only to categorize
instructional and learning activities aimed at achieving objectives but also to
evaluate assessments that measure learners' attainment and mastery of
those objectives.
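Viewed operationally, the classification in Table 1 amounts to a simple
lookup from a cognitive-domain level to a thinking-skill tier. The sketch
below is illustrative only; the dictionary and function names are the
editors' assumptions, not part of the taxonomy or the proposed model.

```python
# Illustrative sketch: classify a Revised Bloom's Taxonomy level (Table 1)
# into lower-order (LOTS) or higher-order (HOTS) thinking skills.
# All names here are hypothetical, for illustration only.

BLOOM_TIERS = {
    "remembering": "LOTS",
    "understanding": "LOTS",
    "applying": "LOTS",
    "analyzing": "HOTS",
    "evaluating": "HOTS",
    "creating": "HOTS",
}

def thinking_skill_tier(level: str) -> str:
    """Return 'LOTS' or 'HOTS' for a cognitive-domain level."""
    try:
        return BLOOM_TIERS[level.strip().lower()]
    except KeyError:
        raise ValueError(f"Unknown Bloom level: {level!r}")

print(thinking_skill_tier("Creating"))  # HOTS
```

Such a lookup simply restates the two tiers of Table 1 in executable form;
it does not by itself decide which assessment method to use.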
2. Types of Assessment that can be considered in Selecting Appropriate
Assessment Methods
Table 2 presents the results of the content analysis and the focus group
discussion attended by selected faculty members and administrators
representing different areas such as education, curriculum, business
education, medical education, communication and arts, psychology,
engineering, tourism, language, criminal justice education, and industry.
From these results, the researchers identified the types of assessment that
can be aligned with the levels of student learning outcomes. Thus, the
selection of the type of assessment can also be made depending on the
level of the learning outcomes.
Table 2. Types of Assessment that can be considered in Selecting
Appropriate Assessment Methods

Traditional Assessment: pen-and-paper/pencil-and-paper, objective,
subjective, and selected-response tests
Alternative/Authentic Assessment: performance-based, product-based, and
real-life/real-world experiences
The concept of assessment can also be discussed in terms of the type of
assessment the teacher uses. Some non-education graduates were not aware
that, in education, there are several types of assessment a teacher must
use considering their alignment with the learning outcomes. In the study of
Babasoro & Cabrillas (2024), findings indicated that teachers were
apprehensive regarding students' expectations and adaptations in classroom
management due to their transition from their previous jobs. A further issue
was reconciling instructional methods for varied learners. Strategies
commonly employed by these educators included implementing interactive
activities and establishing norms.
Based on the results of the FGD, the participants described the different
assessment methods they use, and these examples were later organized by
type of assessment. The participants' responses boiled down to two types of
assessment: traditional assessment and non-traditional assessment, the
latter also known as authentic and/or alternative assessment.
Conventional assessment methods, such as pencil-and-paper assessments,
may fail to effectively motivate pupils to learn or accurately evaluate their
language proficiency throughout the learning process. Furthermore, certain
educators may face instances where children perform adequately on pencil-
and-paper assessments of grammar and sentence structure yet err in
practical language application during oral communication or written tasks.
Educators must ensure that their selected assessments appropriately reflect
genuine language usage (Phongsirikul, 2018). Similarly, traditional
assessment denotes conventional evaluation methods or instruments that
convey student learning data (Balagtas et al., 2019).
Alternative assessment methods are employed to address the gaps left by
traditional assessment approaches. Alternative evaluation methods, including
journals, logs, portfolios, self-assessment, and peer-assessment, elucidate
students' linguistic capabilities (Phongsirikul, 2018). The majority are
regarded as performance-based assessments. Alternative assessment is
characterized by necessitating student performance, creation, and production
(Herman, Aschbacher, and Winters, 1992), employing real-world contexts or
simulations, emphasizing both processes and products (Aschbacher, 1991),
and offering insights into students' strengths and weaknesses (Huerta-
Macias, 1995). Similarly, authentic assessment denotes the application of
evaluative procedures or instruments that enable learners to execute or
produce a result that holds significance for them, as these are grounded in
authentic real-world contexts (Balagtas et al., 2019). Additionally, alternative
assessment denotes the utilization of non-traditional procedures or
instruments to gather data regarding student learning. Examples of
alternative assessment approaches include performance-oriented and
product-oriented assessments (Balagtas et al., 2019).
Thus, the participants believed that teachers must select suitable assessment
methods to enhance student learning and accurately evaluate the degree to
which students have met their learning objectives. If a student performs
poorly on an assessment, one might conclude that the instruction was
inadequate, when in fact the teaching may be excellent but the assessment
does not align with the learning objectives.
3. The Proposed Model for Selecting Appropriate Assessment
Methods
Figure 1 shows the proposed model for selecting appropriate assessment
methods. The model was developed, revised, and finalized based on the
results of the document analysis, the focus group discussion, and the
recommendations and comments made during the validation phase.
Figure 1. Proposed Model for Selecting Appropriate Assessment
Methods
4. Conclusion
Assessment must reflect the attainment of the learning objectives. Thus, a
clear statement of the learning outcomes is highly recommended, as it
serves as the basis of everything within instruction. A clear learning
outcome specifies the level of learning we expect learners to demonstrate,
whether lower-order or higher-order thinking skills. Assessment, therefore,
must be parallel to the objectives.
After a clear statement of the learning outcome, educators must consider
the type of assessment they will use. Again, the choice of the type of
assessment must also be based on the level of the students' learning
outcomes. For this paper, the choice revolves around selecting between the
traditional and the non-traditional type of assessment
(alternative/authentic assessment).
The proposed model provides a clear picture of how educators can select the
appropriate assessment method depending on the written student learning
outcomes. It is an easy model to use, as educators will be guided in
selecting an appropriate assessment provided the learning outcomes are
clear. The model suggests that after administering the assessment, feedback
and reflection must follow, to assess whether the learning outcomes were
achieved and to determine what adjustments can be made.
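As a hedged illustration of the flow described above (a clear outcome with
its cognitive level, then a type of assessment, then administration followed
by feedback and reflection), the selection logic might be sketched in
Python. The level-to-tier mapping comes from Table 1 and the example
methods from Table 2; the one-to-one pairing of lower-order skills with
traditional assessment and higher-order skills with alternative/authentic
assessment is a simplifying assumption for illustration, and all names
below are hypothetical rather than part of the validated model.

```python
# Illustrative sketch of the proposed selection flow (assumed names, not the
# validated model itself): a learning outcome's Bloom level determines the
# thinking-skill tier, which here suggests a type of assessment (Table 2).

LOWER_ORDER = {"remembering", "understanding", "applying"}   # Table 1, LOTS
HIGHER_ORDER = {"analyzing", "evaluating", "creating"}       # Table 1, HOTS

# Example methods per type, drawn from Table 2 and the discussion above.
EXAMPLES = {
    "traditional": ["selected-response test", "pen-and-paper objective test"],
    "alternative/authentic": ["performance-based task", "product-based task",
                              "portfolio", "real-world simulation"],
}

def select_assessment(bloom_level: str) -> dict:
    """Suggest an assessment type (with examples) for a stated outcome level.

    The LOTS->traditional / HOTS->authentic pairing is a simplifying
    assumption for this sketch, not a rule stated by the model.
    """
    level = bloom_level.strip().lower()
    if level in LOWER_ORDER:
        kind = "traditional"
    elif level in HIGHER_ORDER:
        kind = "alternative/authentic"
    else:
        raise ValueError(f"Unknown Bloom level: {bloom_level!r}")
    return {"level": level, "type": kind, "examples": EXAMPLES[kind]}

suggestion = select_assessment("Evaluating")
print(suggestion["type"])  # alternative/authentic
# After administering the chosen method, feedback and reflection determine
# whether the outcome was achieved and what adjustments to make.
```

The sketch makes the model's central dependency explicit: without a clearly
stated outcome level, no defensible choice of assessment type can be made.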
References
Abrogena, L.G., Agno, L.C., Calistro, F.A.C., Diego, M.C.S., Pacie, E.R., &
Sadora, B.E.B. (2024). Assessment in Learning 1 (Traditional
Assessment). Books Atbp. Publishing Corp., 707 Tiaga cor. Kasipagan
Streets, Barangka Drive, Mandaluyong City.
Babasoro, C., & Cabrillas, A. (2024). Classroom Management Experiences of
Non-Education Graduate Teachers. CGCI International Journal of
Administration, Management, Education and Technology.
https://doi.org/10.70059/40ng8k70.
Balagtas, M.U., David, A.P., Golla, E.F., Magno, C.P., & Valladolid, V.C. (2019).
Assessment in Learning 1: Outcomes-based Workstext. Quezon City:
Rex Book Store, Inc.
Balagtas, M.U., David, A.P., Golla, E.F., Magno, C.P., & Valladolid, V.C. (2020).
Assessment in Learning 2: Outcomes-based Workstext. Quezon City:
Rex Book Store, Inc.
Ball, D., & Forzani, F. M. (2009). The work of teaching and the challenge for
teacher education. Journal of Teacher Education, 60(5), 497-511.
Bell, C. A., Jones, N. D., Lewis, J. M., Qi, Y., Liu, S., & McLeod, M. (2013).
Understanding consequential assessment systems of teaching: Year 1
final report. Los Angeles, CA: Los Angeles Unified School District.
Bell, C. A., Jones, N. D., Qi, Y., & Lewis, J. M. (2018). Strategies for assessing
classroom teaching: Examining administrator thinking as validity
evidence. Educational Assessment, 23(4), 229-249.
Biggs, J.B. (2003). Aligning teaching for constructing learning. Retrieved from
https://www.heacademy.ac.uk/sites/deafault/files/resources/id477_aligni
ng_teaching_for_constructing_learning.pdf
Biggs, J. (2011). Constructive alignment.
http://www.johnbiggs.com.au/constructive-alignment/
Croft, S. J., Roberts, M. A., & Stenhouse, V. L. (2015). The perfect storm of
education reform: High-stakes testing and teacher evaluation. Social
Justice, 42, 70-92.
Darling-Hammond, L. (2015). The flat world and education: How America’s
commitment to equity will determine our future. New York, NY: Teachers
College Press.
De Guzman, A.S., & Adamos, J.L. (2015). Assessment of Learning 1. Adriana
Publishing Co., Inc., 776 Aurora Blvd., cor. Boston St., Cubao, Quezon
City, Manila, Philippines.
Donaldson, M. L., & Woulfin, S. (2018). From tinkering to going “rogue”: How
principals use agency when enacting new teacher evaluation systems.
Educational Evaluation and Policy Analysis, 40(4), 531-556.
Forzani, F. M. (2014). Understanding “core practices” and “practice-based”
teacher education: Learning from the past. Journal of Teacher Education,
65(4), 357-368.
Flynn, J. E. (2015). Racing the unconsidered: Considering whiteness, rubric,
and the function of oppression. In M. Tenam Zemach & J. E. Flynn (Eds.),
Rubric nation: Critical inquiries on the impact of rubrics in education (pp.
201-221). Charlotte, NC: Information Age.
Garrett, R., & Steinberg, M. P. (2015). Examining teacher effectiveness using
classroom observation scores: Evidence from the randomization of
teachers to students. Educational Evaluation and Policy Analysis, 37(2),
224-242.
Gill, B., Shoji, M., Coen, T., & Place, K. (2016). The content, predictive power,
and potential bias in five widely used teacher observation instruments
(REL 2017–191). Washington, DC: US Department of Education, Institute
of Education Sciences. National Center for Education Evaluation and
Regional Assistance, Regional Educational Laboratory Mid-Atlantic.
Retrieved from http://ies.ed.gov/ncee/edlabs
Goe, L., Bell, C., & Little, O. (2008). Approaches to evaluating teacher
effectiveness: A research synthesis. National Comprehensive Center for
Teacher Quality. Retrieved from
https://files.eric.ed.gov/fulltext/ED521228.pdf
Grossman, P., Hammerness, K., & McDonald, M. (2009). Redefining teaching,
re-imagining teacher education. Teachers and Teaching: Theory and
Practice, 15(2), 273-289.
Hawley, W. D., & Irvine, J. J. (2011, December). The teaching evaluation gap:
Why students’ cultural identities hold the key. Education Week, 31, 30-
31.
Hull, J. (2013). Trends in teacher evaluation: How states are measuring
teacher performance. Alexandria, VA: National School Boards
Association.
Jones, N. D., & Brownell, M. T. (2014). Examining the use of classroom
observations in the evaluation of special education teachers.
Assessment for Effective Intervention, 39(2), 112-124.
Ladson-Billings, G. (2014). Culturally relevant pedagogy 2.0: A.k.a. the remix.
Harvard Educational Review, 84(1), 74-84.
Little, O., Goe, L., & Bell, C. (2009). A practical guide to evaluating teacher
effectiveness. National Comprehensive Center for Teacher Quality.
Retrieved from https://files.eric.ed.gov/fulltext/ED543776.pdf
Mirra, N., Garcia, A., & Morrell, E. (2015). Doing youth participatory action
research: Transforming inquiry with researchers, educators, and
students. New York, NY: Routledge.
McCarty, T., & Lee, T. (2014). Critical culturally sustaining/revitalizing
pedagogy and Indigenous education sovereignty. Harvard Educational
Review, 84(1), 101-124.
McDonald, M., Kazemi, E., & Kavanagh, S. S. (2013). Core practices and
pedagogies of teacher education: A call for a common language and
collective activity. Journal of Teacher Education, 64(5), 378-386.
Paris, D. (2012). Culturally sustaining pedagogy: A needed change in stance,
terminology, and practice. Educational Researcher, 41(3), 93-97.
Pawilen, G.T. (2021). Integrating Indigenous Knowledge in the Philippine
Elementary Science Curriculum. International Journal of Curriculum and
Instruction, 13(2), 1148-1160.
Philip, T. M., Souto-Manning, M., Anderson, L., Horn, I. J., Carter Andrews, D.,
Stillman, J., & Varghese, M. (2018). Making justice peripheral by
constructing practice as “core”: How the increasing prominence of core
practices challenges teacher education. Journal of Teacher Education.
https://doi.org/10.1177/0022487118798324
Phongsirikul, M. (2018). Traditional and Alternative Assessments in ELT:
Students’ and Teachers’ Perceptions. rEFLections, 25(1).
Richmond, G., Salazar, M., & Jones, N. (2019). Assessment and the Future of
Teacher Education. Journal of Teacher Education, 70, 86 - 89.
https://doi.org/10.1177/0022487118824331.
Richmond, G., Bartell, T., Floden, R., & Petchauer, E. (2017). Core teaching
practices: Addressing both social justice and academic subject matter.
Journal of Teacher Education, 68(5), 432-434.
Salazar, M., & Lerner, J. (2019). Teacher evaluation as culture: A framework
for equitable and excellent teaching. New York, NY: Routledge.
Steinberg, M. P., & Garrett, R. (2016). Classroom composition and measured
teacher performance: What do teacher observation scores really
measure? Educational Evaluation and Policy Analysis, 38(2), 293-317.
Wagner, T. (2014). The global achievement gap: Why even our best schools
don’t teach the new survival skills our children need and what we can do
about it. New York, NY: Basic Books.
Whitehurst, G., Chingos, M., & Lindquist, K. (2014). Evaluating teachers with
classroom observations: Lessons learned in four districts. Washington,
DC: Brown Center on Education Policy at Brookings.
Yerrabati, S. (2017). Choosing Appropriate Assessment and Feedback
Methods. Compass: Journal of Learning and Teaching, 10.
https://doi.org/10.21100/COMPASS.V10I1.374.