Question No 1
But being a teacher involves much more than executing lesson plans. Teaching is a
highly sophisticated profession that regularly extends beyond academics. In addition to
ensuring that students experience academic success, teachers must also function as
surrogate parents, mentors and counselors, and even almost-politicians. There is almost
no limit to the roles a teacher may play.
Due to the sheer amount of time that school is in session, teachers are tasked with being
positive role models and mentors to their students every day. Students learn so much more
than math, language arts, and social studies from their teachers—they learn social skills like
how to be kind to others and make friends, when to ask for help or be independent, how to
distinguish between right and wrong, and other life lessons that parents tend to echo. In many
cases, students learn these things from teachers first.
The nuances of a teacher's role as a semi-parent largely depend on the age of their students but
almost all teachers learn to care deeply for their students and always want the best for them.
Whether a student is close with their teacher or not, they probably respect and revere them
much like they do their own parents or guardians and teachers probably treat them as they
would their own children. In some cases, teachers may be a student's only mentor.
Teachers as Intermediaries
Even though a teacher is often like a parent, that doesn't leave a child's real family out of the
picture—teachers are only one part of a larger equation. Teaching demands almost
daily communication with families about everything from academics to behavior. Some of the
most common forms of parent-teacher interaction include:
Parent-teacher conferences
Progress reports
Weekly newsletters
Emails, texts, and calls
IEP meetings
On top of these standard practices, teachers must often explain their choices to parents and
conciliate them when there is conflict. If a parent or guardian finds out about something going
on in the classroom that they don't like, a teacher must be prepared to defend their choices
and their students. They must make informed decisions about how to act in their students'
favor and then be able to justify these, always standing firm but hearing families out.
Teachers are the middlemen between parents and their children in education and parents are
easily frustrated when they don't understand how or why something is being taught. Teachers
must keep families in the loop as much as possible to prevent this but also be ready if someone
is displeased with their decisions. Teaching entails always championing what is best for
students and explaining how practices are beneficial as needed.
Teachers as Advocates
A teacher's role is ever-changing. While teachers were once issued curriculum materials with a
clear set of instructions detailing exactly how to teach them, this was not an equitable or
effective approach because it did not acknowledge student individuality or real-life application.
Now, teaching is responsive—it evolves to fit the needs and demands of any political and
cultural climate.
A responsive teacher counsels their students to use the knowledge they learn in school to
become valuable members of society. They advocate for being informed and productive citizens
by educating about social justice and current events. Teachers must always be aware, ethical,
equitable, and engaged.
The modern teaching profession also (often) includes advocating for students on a political
level. Many teachers:
Work with politicians, colleagues, and community members to set clear and attainable
standards for students.
Participate in decision-making to address problems affecting students' learning.
Mentor new teachers to prepare them to teach the youth of their generation.
A teacher's work is far-reaching and critical—the world just wouldn't be the same without it.
Classroom assessment
Classroom assessment is something all teachers do as part of their responsibility for enhancing
student learning and motivation, documenting student performance, and reporting results for
accountability purposes. There are many different types of classroom assessments that can be
used, each with advantages and disadvantages. To the extent that teachers base their
assessments on known indicators of high quality, such as reliability and validity, align their
assessments with clear and appropriate learning objectives and targets, use formative
assessment with meaningful feedback, and grade student work to provide meaningful summary
indications of performance, student learning and motivation will be enhanced, as well as
documented.
Assessments also are used to identify individual student weaknesses and strengths so that
educators can provide specialized academic support, educational programming, and/or social
services. While assessment can take a wide variety of forms in education, the following
descriptions provide a representative overview of a few major forms of educational
assessment.
Pre-assessments
Pre-assessments are administered before students begin a lesson, unit, course, or academic
program. Students are not necessarily expected to know most, or even any, of the material
evaluated by pre-assessments—they are generally used to (1) establish a baseline against which
educators measure learning progress over the duration of a program, course, or instructional
period, or (2) determine general academic readiness for a course, program, grade level, or new
academic program that a student may be transferring into.
Formative assessments
Summative assessments
Performance assessments
Portfolio-based assessments
The purpose of an assessment generally drives the way it is designed, and there are many ways
in which assessments can be used. A portfolio of student work can be used as both a
“formative” and “summative” form of assessment. Teacher-created assessments, which may
also be created by teams of teachers, are commonly used in a single course or grade level in a
school. In short, assessments are usually created for highly specialized purposes. In one
comparison, assessments were carried out in three different ways and students’ performances
were analyzed. All three methods were fairly successful (60% to 79%), and students were able to
gain high marks on the end examination. The best performance was seen when students
carried out the assessment with facilitation through interaction with the teacher (78% to
80%). Once the students had completed the assessments, the teacher's demonstration of the
answers on the board worked well for a large classroom (about 60 students), because students
were able to evaluate their own performance and correct their answers.
Question number 2
Define learning outcomes and objectives. Differentiate between them.
Learning objectives:
Objectives are statements that define the expected goal of a curriculum, course, lesson or
activity in terms of demonstrable skills or knowledge that will be acquired by a student as a
result of instruction.
Objectives describe the goals and intentions of the professor who teaches the course.
Objectives, often termed the input in the course, state the purpose and goals of the
course.
Objectives focus on content and skills important within the classroom or program.
Objectives may describe what the staff and faculty will do.
Objectives can often be numerous, specific, and detailed. Assessing and reporting on
each objective for each student may be impossible.
It might help you to think about the difference in terms of perspective. Learning objectives are
usually viewed from the instructor’s perspective (what does the instructor want to
accomplish?) while learning outcomes are seen more from the learner’s perspective (what will
the course teach me, as a learner?). Of course, the two are closely related, because a trainer’s
objectives will ultimately be translated into the learner’s outcomes, as long as the course
successfully serves its purpose.
As mentioned above, learning objectives help foster a sense of purpose for all the parties
involved. They enable authors and trainers to shift their focus from delivery to creating an
engaging experience for learners. Learners and administration benefit too. Let’s review the
advantages for each group.
Orientation:
Learners can get a sense of what questions the course will be asking upfront. This
allows them to have a better idea of the skills or knowledge changes they’ll go
through by committing to a course.
Learner action:
Subsequently, knowing the desired learning objectives beforehand enables learners
to choose courses according to their interests and goals. They can review specific
sections of the content, engage with the material selectively, determine whether
they need different learning methods, decide on what sections to skip, or evaluate
their own progress throughout the course.
Content planning:
Knowing what questions the course or module aims to answer will likely make it
easier to create the content. Trainers and authors can strategically sequence
sections, determine how much time each section will take to complete, and even
identify what information, features, or images are needed. In other words, they’ll
have an outline they can work toward and align the content with.
Better assessment:
The goal of any assessment is to monitor learner progress and provide feedback to
the learners. Learning objectives can simplify this process by serving as a grading
guideline. Administration can rely on learning objectives as a clear standard for
measuring learner progress and achievement, which can also lead to more accurate
and meaningful feedback for learners.
Before you can define learning objectives you need to identify what levels of learning you want
learners to achieve. The industry standard for this is Bloom’s taxonomy, which has six levels of
learning. The most basic level of learning is ‘Remembering’, and the highest level of learning is
‘Creating’.
Learning outcomes:
Learning Outcomes are statements that describe or list measurable and essential mastered
content-knowledge reflecting skills, competencies, and knowledge that students have
achieved and can demonstrate upon successfully completing a course.
Student Learning Outcomes catalog the overarching "products" of the course and are
the evidence that the goals or objectives were achieved.
Outcomes express higher-level thinking skills that integrate course content and activities
and can be observed as a behavior, skill, or discrete useable knowledge upon
completing the course.
Outcomes are exactly what assessments are intended to show -- specifically what the
student will be able to do upon completing the course.
An assessable outcome can be displayed or observed and evaluated against criteria.
Outcomes are clear and measurable criteria for guiding the teaching, learning, and
assessment process in the course.
Examples:
Each objective below has one learning outcome associated with it.
Government
When given a major decision made by a governmental leader, you will be able to
identify the major factors that the leader had to consider and discuss why the action
was taken and what apparent trade - offs were made.
Economics
Demonstrate graphically and explain how a change in expectations will affect the
loanable funds market.
Management
Identify (based on readings, case studies, or personal experiences) those activities that
are most likely to distinguish effective, well - managed technology development
programs from ineffective programs.
Statistics
When given two events, you will be able to determine whether they are independent or
whether there is a relationship between them (that is, one event affects the probability
of the other). On the basis of this determination, you will be able to select and use the
appropriate rules of conditional probability to determine the probability that a certain
event will occur.
Religion
When given a definition of the term religion, students will be able to identify which of
the following characteristics is emphasized: feeling, ritual activity, belief, monotheism,
the solitary individual, social valuation, illusion, ultimate reality, and value.
Music
On hearing musical selections, you will be able to identify those that are examples of
chamber music and be able to identify the form, texture, and makeup of the ensemble.
Art
Psychology
When given a case study, you will be able to identify whether it describes a case of
schizophrenia, and if it does, which of the following schizophrenic reactions are
involved: hebephrenic, catatonic.
Learning outcomes improve the learning experience for the three main stakeholders of any
learning program: the learners, the instructors, and the administrators/managers:
They give learners a better understanding of the specific knowledge and skills they will
acquire during the course.
Focusing on outcomes from the beginning places greater emphasis on the relevant,
practical knowledge and skills to be gained.
This makes learning more effective because learners have a clear sense of what the
desired outcome looks like.
Clear learning outcomes also help learners see why content and assessments are
relevant to them.
Focusing on learning outcomes puts trainers more in touch with the learner’s
perspective. It also gives them a clearer sense of purpose when creating their course.
They help course creators choose the best assessment techniques.
With them, trainers have a measurable standard for judging the success of their course.
When learning outcomes are defined, it gives managers a clear mark for measuring
whether a specific course, resource, or activity has delivered a good return on
investment.
They enable administrators to evaluate the effectiveness of their training program as a
whole.
They act as a guide for evaluating the performance of course creators, so they can
improve methods and achieve better results when needed.
Lastly, focusing on learning outcomes allows administrators to create a learner-focused
training program, in which all activities are centered on giving learners the knowledge
and tools they need for success.
For instructors and content authors, focusing on outcomes is a great way to improve
the effectiveness of your course. That’s because it encourages you to put yourself in the
learner’s shoes. By consciously putting learning outcomes into words, you gain a clearer
understanding of your purpose as an instructor.
They are also valuable because they give instructors, learners, and administrators clear,
measurable criteria for assessing whether a course has done its job and if you need to improve
your approach to the material. If you start with a clear learning outcome in mind but find that
the course fails or struggles to achieve this outcome, then you know that you need to rethink
your approach.
If you are a training manager, you will probably also think of learning outcomes in financial
terms. After all, your organization is investing valuable resources in its training program, so it’s
important that the training content delivers a good return on that investment. Learning
outcomes are precisely that return on investment.
That means clear, measurable learning outcomes are essential for evaluating whether a specific
training activity is worth the time and money. If a course fails to deliver on learning outcomes,
it’s time to try a new strategy.
Learning outcomes are broad statements of what is achieved and assessed at the end of a
course of study. The concept of learning outcomes and outcome-based education is high on
today's education agenda. The idea has features in common with the move to instructional
objectives which became fashionable in the 1960s, but which never had the impact on
education practice that it merited. Five important differences between learning outcomes
and instructional objectives can be recognized:
Learning outcomes, if set out appropriately, are intuitive and user friendly. They can be
used easily in curriculum planning, in teaching and learning and in assessment.
Learning outcomes are broad statements and are usually designed round a framework
of 8-12 higher order outcomes.
The outcomes recognize the authentic interaction and integration in clinical practice of
knowledge, skills and attitudes and the artificiality of separating these.
Learning outcomes represent what is achieved and assessed at the end of a course of
study and not only the aspirations or what is intended to be achieved.
A design-down approach encourages ownership of the outcomes by teachers and
students.
Goals, aims, objectives and outcomes are terms that are often used in educational
settings.
These are used to determine what is to be taught by the teacher and what is expected
of students at the end of the program.
One of the major reasons to use such terms is to determine if the course is successful or
if there are any changes that are required to the course to teach it in any other way or
any other methods that can be used to help make the course more effective.
Learning outcome and learning objective are two terms that are often confusing for
many people as they believe they refer to the same things in learning. However, they
are not.
These are used to refer to different things in learning.
Learning Outcome refers to what is expected of the student at the end of the
course.
At the end of a course, many teachers administer a test to determine what the student has
learned from the course syllabus.
The outcome determines how effective the course was in teaching the student and how
determined the student was to learn. It also helps uncover any glitches in the learning
program that can help the teacher understand how effective their teaching method is.
Learning outcomes are determined using tests and projects.
Tests help show how much the student understood, while projects help determine
how well the student can apply the learning in real-life scenarios.
Learning outcomes are, in essence, the outcomes of the syllabus.
The outcome does not indicate the methodologies used by the professor to teach the
subject matter or what activities are undertaken by the students to learn and
understand the subject matter. It will only indicate at the end of the course, how much
the student understood from the subject matter.
The professor can also state the outcome expected at the beginning of the class.
A learning objective describes what the student can expect from the teacher by the
end of the course.
It is, in a sense, the reverse of the outcome: the subject matter that will be covered
over the duration of the course constitutes the learning objective.
It determines what the course will have provided to the student.
It can be described as the ‘added value’ of the teaching.
Learning objectives determine what the student will be able to understand after the
course is completed and what the teacher will have covered in the duration of the
course.
Learning objectives are specific, attainable, realistic and measurable.
To many, the terms are used interchangeably; however, the paragraphs above describe
the small difference between the two. While the learning outcome is what the
student can expect to gain from the course, the objective is what the teacher intends to
have taught the student by the end of the course.
Both terms are used together to gauge how effective a course was for the student:
they help determine what the student learned and what the teacher taught.
Question no 3
What are aptitude tests used for? Define the types of aptitude tests.
Aptitude tests
Aptitude is defined as a “Pronounced innate capacity for or ability in a given line of endeavor
such as a particular art, school subject or vocation.”
Importance
Aptitude tests, as well as personality tests, can be useful in determining skills and abilities to
highlight on a resume or cover letter when applying for a job.
For example, the Modern Language Aptitude Test (MLAT) measures a student's potential for
successfully mastering foreign languages. Aptitude tests can also help determine if a student
needs special education services.
For older students, Differential Aptitude Tests (DATs) assess a range of aptitudes from spatial
relations to language usage. The results can help administrators make curricular
recommendations.
Counselors might use high scores in a test in mechanical reasoning, for example, to guide a
student toward courses that prepare them for engineering or designing studies in college.
Students who score well in tests measuring speed, accuracy, and problem-solving might decide
to choose coursework in computer science, finance, or other fields requiring attention to
detail.
Career Assessment Tests
Some companies use aptitude tests to help them make hiring decisions. These tests, which are
called career assessment tests, help human resources personnel learn more about a
prospective employee's strengths and weaknesses. Career assessment tests may also be used
within a company for making decisions about promotions.
Situational judgment tests are a specific kind of career aptitude test that can help predict how
an employee might respond to specific situations in the workplace. They are also used to
assess an employee's communication style and ability to work within a team.
Some situational judgment tests focus specifically on customer service careers. These tests
analyze personality traits that are predictive of successful interaction with the public, such as
empathy, diplomacy, and patience. Situational judgment tests may also be used to predict
future competency in sales-related careers.
People encounter a variety of aptitude tests throughout their personal and professional lives,
often starting while they are children in school.
High school students, for example, often take several aptitude tests that are designed to help
them determine what they might want to study in college or pursue as a career.
These tests can sometimes give a general idea of what might interest students as a future
career. For example, a student might take an aptitude test suggesting that they are good with
numbers and data. Such results might imply that a career as an accountant, banker, or
stockbroker would be a good choice for that particular student.
Another student might find that they have strong language and verbal skills, which might
suggest that a career as an English teacher, writer, or journalist might be a good choice.
Some aptitude tests have a very narrow focus that limits what they are able to predict.
Other tests that look at multiple domains are much more similar to intelligence tests.
Special aptitude tests are designed to look at an individual's capacity in a particular area. For
example, a business that is looking to hire a computer programmer will consider a candidate's
work history and interview performance, but it might also want to administer an aptitude
test to determine whether the person possesses the necessary skills to perform the job.
In this case, the special aptitude test is designed to look at a very narrow range of ability: how
skilled and knowledgeable the candidate is at computer programming.
Multiple aptitude tests are designed to measure two or more different abilities. In some cases,
such tests may even resemble intelligence tests in terms of their focus and scope. The
Scholastic Assessment Test (SAT) that high school students take during their senior year is a
good example of a multiple aptitude test.
A number of tests are available for measuring mechanical aptitude for a fairly large field of
occupations rather than for a single occupation.
Different opinions are held regarding the term clerical aptitude. Super opines that it refers to
the ability to do routine clerical work. Bills points out that clerical duties “include the gathering,
classification, and presentation of data of all sorts, and analysis and use of these data in
planning, executing and determining the results of operation.”
A number of tests are available for measuring clerical aptitude:
Minnesota Clerical Aptitude Test.
General Clerical Aptitude.
The Detroit Clerical Aptitude Examination.
P.R.W. Test.
Orissa Test of Clerical Aptitude.
Clerical Aptitude Test.
There are many tests to measure aptitude in medicine, science, mathematics, law, engineering,
teaching etc.
Question no 4
Matching Type
An objective test consisting of two sets of items to be matched with each other for a
specified attribute. The choices progressively reduce as one successfully matches the items on
the left with the options on the right.
Disadvantages of matching
The efficiency in data analysis that matching provides is limited by several
disadvantages.
The greatest disadvantage of matching is that the effect of the matching factor on the
occurrence of the disease of interest can no longer be studied.
One should therefore limit matching to factors that are already known to be risk factors
for the studied outcome.
If statistical software with logistic regression is available, it is possible to control for
many confounding factors during the analysis of the study, and
therefore preventing confounding by matching during the design of the study might not
be needed, especially if the study includes a large population and there is little
chance of ending up with empty strata.
If matching is performed, it must also be taken into account in the statistical analysis,
because a matched OR needs to be calculated and conditional logistic regression needs
to be used.
However, the study of the matching factor as an effect modifier is still possible if a
stratified analysis is done over several categories of the matching factor.
When matching on age, analysis is still feasible within each age stratum created.
However, to use different age categories than those used for matching would require a
multivariable analysis.
Trying to identify a dose response involving a matching factor would also require a
multivariable model of analysis.
Matching on criteria that are only associated with exposure and not with outcome
further biases the measurement of the effect. In this situation the matching factor is not
a confounding factor and matching would bring the OR towards 1.
Another difficulty occurs when matching on several factors.
It then becomes difficult (in time and energy) to logistically identify and recruit controls
due to the high number of matching factors (e.g., same age, sex, socioeconomic status,
occupation, etc.).
Matching on several criteria may improve the efficiency of statistical analysis with a
reduced sample size but the difficulties to recruit controls may jeopardize that
efficiency.
It may also exclude cases for which no matched controls can be identified. In addition,
matching on many criteria increases the risk of matching on exposure (therefore
bringing the OR closer to one). This is sometimes called overmatching.
One major challenge when matching is to properly define the various strata of the
matching variable.
When frequency matching on age, we need to make sure that, within each of the age
groups created, age is no longer a confounding factor.
This is sometimes called residual confounding.
Several analyses with different widths of age strata may be tested. For example, let's
suppose we stratify on several age groups 20 years wide (0-19, 20-39, 40-59, 60-79,
80+).
To assess whether age is still a confounder within one age group, we could further stratify
(by five-year age groups) and test whether age is still a confounding factor inside a
20-year-wide age group.
So, it may still be important to take account of age as a potential confounder in a
multivariable analysis.
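To illustrate the matched OR mentioned above: in a 1:1 matched case-control study, the matched odds ratio can be estimated from the discordant pairs alone (pairs in which the case and its matched control differ in exposure). The sketch below uses hypothetical pairs, not data from any real study:

```python
def matched_pair_or(pairs):
    """Matched odds ratio for 1:1 matched case-control pairs.
    Each pair is (case_exposed, control_exposed); only discordant pairs count."""
    b = sum(1 for case, control in pairs if case and not control)
    c = sum(1 for case, control in pairs if not case and control)
    return b / c

# Hypothetical study: 10 discordant and 5 concordant matched pairs
pairs = (
    [(True, False)] * 8    # case exposed, matched control not exposed
    + [(False, True)] * 2  # control exposed, case not exposed
    + [(True, True)] * 3   # concordant pairs contribute nothing to the estimate
    + [(False, False)] * 2
)
print(matched_pair_or(pairs))  # 8 / 2 → 4.0
```

Concordant pairs drop out of the estimate entirely, which is exactly why overmatching (matching on exposure-related factors) pulls the OR toward 1: it inflates the concordant pairs at the expense of the informative discordant ones.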
Question number 5
How will you define reliability of test? Also write its types.
Reliability
Reliability is the degree of consistency of a measure. A test is reliable when it gives the
same result repeatedly under the same conditions.
In everyday language, we use the word reliable to mean that something is dependable
and that it will behave predictably every time.
We might talk of a football player as reliable, meaning that he gives a good performance
game after game.
Test Reliability
Test reliability refers to the extent to which a test measures without error. It is highly related to
test validity. Test reliability can be thought of as precision; the extent to which measurement
occurs without error. Test validity can be thought of as accuracy; the extent to which the test
measures the hypothesized underlying construct. Reliability is not a constant property of a test
and is better thought of as different types of reliability for different populations at different
levels of the construct being measured.
Test-retest reliability
Test-retest reliability measures the consistency of results when you repeat the same test on the
same sample at a different point in time. You use it when you are measuring something that
you expect to stay constant in your sample.
Importance
Many factors can influence your results at different points in time: for example, respondents
might experience different moods, or external conditions might affect their ability to respond
accurately.
Test-retest reliability can be used to assess how well a method resists these factors over time.
The smaller the difference between the two sets of results, the higher the test-retest reliability.
How to measure it
To measure test-retest reliability, you conduct the same test on the same group of people at
two different points in time. Then you calculate the correlation between the two sets of results.
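As a minimal sketch of that calculation, the Pearson correlation between the two administrations can be computed directly (the scores below are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores of five respondents tested twice, two weeks apart
time1 = [12, 15, 11, 18, 16]
time2 = [13, 14, 11, 17, 16]
print(round(pearson_r(time1, time2), 3))  # → 0.967
```

A correlation this close to 1 would indicate high test-retest reliability; values nearer 0 would suggest the measure is unstable over time.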
Interrater reliability
Interrater reliability (also called interobserver reliability) measures the degree of agreement
between different people observing or assessing the same thing. You use it when data is
collected by researchers assigning ratings, scores or categories to one or more variables.
Importance
People are subjective, so different observers’ perceptions of situations and phenomena
naturally differ. Reliable research aims to minimize subjectivity as much as possible so that a
different researcher could replicate the same results.
When designing the scale and criteria for data collection, it’s important to make sure that
different people will rate the same variable consistently with minimal bias. This is especially
important when there are multiple researchers involved in data collection or analysis.
How to measure it
To measure interrater reliability, different researchers conduct the same measurement or
observation on the same sample. Then you calculate the correlation between their different
sets of results. If all the researchers give similar ratings, the test has high interrater reliability.
Clearly define your variables and the methods that will be used to measure them.
Develop detailed, objective criteria for how the variables will be rated, counted or
categorized.
If multiple researchers are involved, ensure that they all have exactly the same
information and training.
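A common statistic for this kind of agreement is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch, using hypothetical pass/fail ratings from two markers:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labelled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal proportions
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail ratings of ten essays by two markers
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "fail"]
b = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "fail", "pass", "pass"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, so a value around 0.58 would usually be read as moderate interrater reliability.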
Parallel forms reliability
Importance
If you want to use multiple different versions of a test (for example, to avoid respondents
repeating the same answers from memory), you first need to make sure that all the sets of
questions or measurements give reliable results.
How to measure it
The most common way to measure parallel forms reliability is to produce a large set of
questions to evaluate the same thing, then divide these randomly into two question sets.
The same group of respondents answers both sets, and you calculate the correlation between
the results. High correlation between the two indicates high parallel forms reliability.
Ensure that all questions or test items are based on the same theory and formulated to
measure the same thing.
Internal consistency
Internal consistency assesses the correlation between multiple items in a test that are intended
to measure the same construct.
You can calculate internal consistency without repeating the test or involving other researchers,
so it’s a good way of assessing reliability when you only have one data set.
Importance
When you devise a set of questions or ratings that will be combined into an overall score, you
have to make sure that all of the items really do reflect the same thing. If responses to different
items contradict one another, the test might be unreliable.
How to measure it
Two common methods are used to measure internal consistency.
Average inter-item correlation: For a set of measures designed to assess the same
construct, you calculate the correlation between the results of all possible pairs of items
and then calculate the average.
Split-half reliability: You randomly split a set of measures into two sets. After testing the
entire set on the respondents, you calculate the correlation between the two sets of
responses.
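The split-half method above can be sketched as follows. The item ratings are hypothetical, and the Spearman-Brown correction is applied at the end because each half is only half as long as the full test, which by itself understates the reliability of the whole instrument:

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(item_scores, seed=0):
    """Randomly split the items into two halves, correlate respondents'
    half-scores, and apply the Spearman-Brown correction for test length."""
    n_items = len(item_scores[0])
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)  # seeded so the split is reproducible
    half_a, half_b = idx[: n_items // 2], idx[n_items // 2 :]
    totals_a = [sum(row[i] for i in half_a) for row in item_scores]
    totals_b = [sum(row[i] for i in half_b) for row in item_scores]
    r = pearson_r(totals_a, totals_b)
    return 2 * r / (1 + r)  # Spearman-Brown step-up

# Hypothetical 1-5 ratings: each row is one respondent, each column one item
scores = [
    [4, 5, 4, 5, 4, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
]
print(round(split_half_reliability(scores), 2))
```

Because the split is random, different splits give slightly different values; Cronbach's alpha, which averages over all possible splits, is the usual refinement of this idea.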
Take care when devising questions or measures: those intended to reflect the same
concept should be based on the same theory and carefully formulated.