
PSYCHOLOGICAL STATISTICS

WHAT IS STATISTICS?

The term statistics originated from the Latin word status, which means "state." It became popular only in the 18th century. Original definition: "The science dealing with data about the condition of a state or community." Statistics refers to a set of mathematical procedures for organizing, summarizing, and interpreting information (Gravetter & Wallnau, 2017). It is a branch of mathematics that focuses on the organization, analysis, and interpretation of a group of numbers (Aron et al., 2013).

TWO BRANCHES OF STATISTICAL METHODS

DESCRIPTIVE STATISTICS
Procedures for summarizing a group of scores or otherwise making them more understandable. Whenever our purpose is merely to describe a set of data, we are employing descriptive statistics. Psychologists use descriptive statistics to summarize and describe a group of numbers from a research study.

INFERENTIAL STATISTICS
Procedures for drawing conclusions based on the scores collected in a research study but going beyond them. Psychologists use inferential statistics to draw conclusions and to make inferences that are based on the numbers from a research study but that go beyond the numbers. Essentially, inferential procedures are for deciding whether to believe what the sample data seem to indicate about the scores and relationships that would be found in the population.

VARIABLE --- a condition or characteristic that can have different values.
VALUE --- a possible number or category that a score can have.
POPULATION --- the set of all individuals of interest in a particular study (Gravetter & Wallnau, 2017).
SAMPLE --- a set of individuals selected from a population, usually intended to represent the population in a research study (Gravetter & Wallnau, 2017).
SAMPLING --- the process of selecting observations (a sample) to provide an adequate description and inferences of the population.

SAMPLING TECHNIQUES
 Simple Random Sampling: Each member of the population has an equal chance of being selected.
 Systematic Random Sampling: Elements are selected at regular intervals from a list, ensuring randomness.
 Stratified Random Sampling: The population is divided into groups (strata) based on shared characteristics, and then a random sample is drawn from each stratum.
 Cluster Sampling: The population is divided into clusters, and a sample of clusters is randomly selected. Individuals within those clusters are then sampled.
 Multistage Sampling: A combination of different probability sampling methods at various stages to select a sample.
 Convenience Sampling: Participants are selected based on ease of access and availability.
 Quota Sampling: The sample is structured to reflect the proportions of different subgroups in the population.
 Snowball Sampling: Participants are recruited through referrals from other participants, often used for hard-to-reach populations.
 Purposive Sampling: Participants are selected based on specific characteristics or expertise that align with the research goals.
 Voluntary Response Sampling: Participants self-select to participate in the study.
 Self-Selection Sampling: Individuals choose to participate in the study, often based on their interest or willingness.

FREQUENCY DISTRIBUTION
 UNIMODAL DISTRIBUTION
Frequency distribution with one value clearly having a larger frequency than any other.
 BIMODAL DISTRIBUTION
Frequency distribution with two approximately equal frequencies, each clearly larger than any of the others.
 MULTIMODAL DISTRIBUTION
Frequency distribution with two or more high frequencies separated by a lower frequency; a bimodal distribution is the special case of two high frequencies.
 SYMMETRICAL DISTRIBUTION
Distribution in which the patterns of frequencies on the left and right sides are mirror images of each other.
 SKEWED DISTRIBUTION
Distribution in which the scores pile up on one side of the middle and are spread out on the other side; a distribution that is not symmetrical.

LEVELS OF MEASUREMENT
 NOMINAL
Data is categorized in a purely qualitative way. Example: gender, hair color, ethnicity, marital status, car brands.
 ORDINAL
Data is categorized and ranked in order. Example: satisfaction level or any Likert-type questions.
 INTERVAL
Data is categorized and ranked, and the differences between values are measured. However, it doesn't have a true zero point, meaning that zero doesn't represent the absence of the measured variable. Example: temperature in Celsius or Fahrenheit, or years.
 RATIO
Data is categorized and ranked, the differences between values are measured, and there is a true zero point, meaning that zero represents the absence of the measured variable. Example: height, weight, income, or age. In the Kelvin temperature scale, there are no negative degrees of temperature; zero means an absolute lack of thermal energy.

DISCRETE DATA --- values are countable and the data cannot be subdivided into different parts. Example: number of students or players.
CONTINUOUS DATA --- values are measurable and can be divided into parts. Often answers the question "how much?" and requires a detailed measurement. Example: height (5 ft 6 in), temperature (35 degrees Celsius).

MEASURES OF CENTRAL TENDENCY

The central tendency of a distribution refers to the middle of the group of scores. Measures of central tendency refer to the set of measures that reflect where on the scale the distribution is centered.

1. MEAN The arithmetic mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the number of numbers. The symbol "μ" is used for the mean of a population. The symbol "X̄" is used for the mean of a sample. The formula for μ is:
μ = ΣX / N
where ΣX is the sum of all the numbers in the population and N is the number of numbers in the population.
2. MEDIAN The middle score for a set of data that has been arranged in order of magnitude. The median is less affected by outliers and skewed data.
3. MODE The most frequent value.
4. RANGE A measure of dispersion, specifically the difference between the highest and lowest values in a dataset. To find the range, you subtract the smallest value from the largest value in the dataset. Example: In the dataset {2, 5, 8, 10, 3}, the range is 10 - 2 = 8.
5. VARIANCE Describes the spread or dispersion of data points around the central tendency (mean, median, or mode). It quantifies how much the data points differ from each other and from the average. Essentially, it tells you how much your data is spread out. Imagine a group of students' scores on a test. Variance would describe how much those scores are spread out around the average. A high variance would mean some students scored very high and some very low, while a low variance would mean scores are mostly close to the average.
6. STANDARD DEVIATION Simply the square root of the variance.
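The six measures above can be sketched with Python's standard statistics module. The dataset reuses the {2, 5, 8, 10, 3} example from the range definition, plus a hypothetical list with a repeated value so the mode is defined; the population formulas (pvariance, pstdev) are shown here.

```python
import statistics

scores = [2, 5, 8, 10, 3]                 # dataset from the range example above

mean = sum(scores) / len(scores)          # mu = sum(X) / N -> 5.6
median = statistics.median(scores)        # middle of 2, 3, 5, 8, 10 -> 5
mode = statistics.mode([2, 2, 5, 8])      # most frequent value (hypothetical data with a repeat) -> 2
value_range = max(scores) - min(scores)   # 10 - 2 = 8
variance = statistics.pvariance(scores)   # population variance: mean squared deviation -> 9.04
std_dev = statistics.pstdev(scores)       # standard deviation: square root of the variance
```

statistics.variance and statistics.stdev give the sample (n - 1) versions of the last two measures.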
DIFFERENT TYPES OF RELATIONSHIPS
 SYMMETRICAL --- Both variables are associated, but neither influences the other in a causal way; they may both be affected by a third, unobserved variable.
 ASYMMETRICAL --- One variable influences the other in a unidirectional way, with the effect flowing from the independent variable to the dependent variable.
 RECIPROCAL --- Both variables mutually influence each other, creating a feedback loop where each impacts the other.

FOUR TYPES OF ASYMMETRICAL RELATIONSHIPS
1. PROPERTY-DISPOSITION Refers to the inherent characteristics of a person affecting an outcome. Example: age may influence IQ, but IQ does not influence age.
2. ENDS-MEANS RELATIONSHIP Cause-and-effect relationships, where one factor leads to an outcome. Example: the number of hours studied affects grades, but grades do not determine study hours.
3. STIMULUS-RESPONSE RELATIONSHIP Direct cause-and-effect situations, where an external event leads to a specific reaction. Example: a punch causes pain.
4. DISPOSITION & RESPONSE RELATIONSHIPS Involve personal tendencies or habits influencing how someone responds to situations. Include traits, habits, prejudice, or biases in shaping behavior.

CONDITIONS OF CAUSALITY
 EMPIRICAL ASSOCIATION --- Based on observation; defined as information gathered from observations.
 TIME ORDER --- The cause must happen before or at the same time as the effect, never after.
 NON-SPURIOUSNESS --- Ensures that the observed variables (x & y) are not caused by a third (z) variable.

DISTORTIVE VARIABLE Converts a positive relationship into a negative relationship. It changes and misleads the real relationship between two things being studied. Example: in a study of exercise and weight loss, diet could be a distortive variable because it also affects weight loss.

EXTRANEOUS VARIABLE Affects the relationship between two variables, even though it's not the main focus of the study. If two variables seem related but actually aren't, an extraneous variable can reveal that the connection is spurious (false & misleading). These are variables that are not being intentionally manipulated or studied as the main focus of the research, but they can still have an impact on the dependent variable. Examples: participant characteristics, environmental factors, or other influences not directly related to the primary research question.

COMPONENT VARIABLE Refers to a variable that lies between the independent variable (the manipulated variable) and the dependent variable (the measured variable). It can influence how the independent variable affects the dependent variable, potentially mediating or moderating the relationship. For example, in a study examining the effect of a new drug (independent variable) on blood pressure (dependent variable), a component variable might be the patient's initial blood pressure level.

ANTECEDENT VARIABLE Factors that come before and influence the independent and dependent variables being studied. They can be used to predict the emergence of certain behaviors or outcomes. It's a variable that happens first and affects both the main cause and the main effect you're looking at. Example: a child's family income (antecedent variable) can influence both their access to educational resources (independent variable) and their academic performance (dependent variable).

INTERVENING VARIABLE Also known as a mediating variable; a factor that comes between the independent variable and the dependent variable. It helps explain how or why the independent variable affects the dependent variable.

HYPOTHESIS TESTING
A statistical method used to make inferences about population parameters based on a sample of data. It involves formulating two competing hypotheses – the null hypothesis (H0) and the alternative hypothesis (H1 or Ha) – and then collecting and analyzing data to determine whether there is enough evidence to reject the null hypothesis in favor of the alternative hypothesis.
Null hypothesis (H0): Represents a default assumption or a statement of no effect, no difference, or no change.
Alternative hypothesis (H1 or Ha): Represents a claim or statement that contradicts the null hypothesis and is what the researcher aims to support.
Significance level, often denoted by α, is the probability of rejecting the null hypothesis when it is actually true.
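The hypothesis-testing decision described above can be sketched as a one-line rule in Python; the p-value is assumed to have already been computed by whatever test is being run.

```python
def decide(p_value, alpha=0.05):
    """Apply the decision rule: reject H0 when the p-value falls below alpha."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

decide(0.03)        # rejected at the common 0.05 level
decide(0.03, 0.01)  # not rejected at the stricter 0.01 level
```

The same p-value can lead to different decisions depending on the α the researcher chose before collecting data.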
Common choices for α are 0.05, 0.01, or 0.10.
If the p-value is less than the significance level (α), the null hypothesis is rejected in favor of the alternative hypothesis. If the test statistic is beyond the critical value, a similar decision is made.
If the null hypothesis is rejected, it suggests that there is enough evidence to support the alternative hypothesis.
If the null hypothesis is not rejected, there is insufficient evidence to support the alternative hypothesis, and it is retained.

CHI-SQUARE
The Chi-square (χ²) test is used to determine if there is a significant difference between observed and expected frequencies. The formula is:
χ² = Σ [(O − E)² / E]
where O is an observed frequency and E is the corresponding expected frequency.
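A hand computation of the χ² statistic, using hypothetical counts from 60 rolls of a die (expected frequency 10 per face):

```python
def chi_square(observed, expected):
    # chi^2 = sum over categories of (O - E)^2 / E
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical die-roll counts for faces 1-6; 60 rolls, so E = 10 per face
observed = [8, 12, 9, 11, 6, 14]
expected = [10, 10, 10, 10, 10, 10]
stat = chi_square(observed, expected)  # -> 4.2, with df = 6 - 1 = 5
```

With df = 5 the 0.05 critical value is 11.07, so a χ² of 4.2 would not reject the null hypothesis of a fair die.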

T-TEST FOR DEPENDENT SAMPLES
A t-test is a statistical test used to determine if there is a significant difference between the means of two groups. It's commonly used to compare the means of samples from two populations, under the assumption that the populations have normal distributions and equal variances. The t-test calculates a t-value, which is then compared to a critical value from a t-distribution to determine if the difference between the means is statistically significant.

INDEPENDENT SAMPLES T TEST --- An inferential statistical test that determines whether there is a statistically significant difference between the means of two unrelated groups.
ONE-SAMPLE T TEST --- Investigates whether there's a difference between a group and a standard value, or whether a subgroup belongs to a population.
PAIRED-SAMPLES T TEST --- Investigates whether there's a difference within a group between two points in time (within-subjects). An inferential statistical test that determines whether there is a statistically significant difference between the means of two related groups.

ASSUMPTIONS FOR THE T TEST DEPENDENT
 Use a continuous scale for your dependent variable.
 Ensure that the subjects in one group are also in the other (related samples).
 Check for no significant outliers in both groups.
 Verify that the differences in your dependent variable between related groups form an approximately normal distribution.

ONE-TAILED TEST A statistical test in which the critical area of a distribution is one-sided, so that it is either greater than or less than a certain value, but not both. When to use it: You have a strong hunch about whether something will increase OR decrease, but not both. "I predict this new drug will lower blood pressure."

TWO-TAILED TEST A method of calculating statistical significance in which the critical area of a distribution is two-sided and tests whether a sample is greater than or less than a certain range of values. When to use it: You just want to know if there's any difference at all, whether it's an increase OR a decrease. You don't have a specific direction in mind. "Will this new drug affect blood pressure (higher or lower)?"

DETERMINING THE ALPHA (SIGNIFICANCE LEVEL) & CRITICAL VALUE
 To calculate alpha, subtract your desired confidence level from 1. For instance, if you aim for 95% confidence, alpha would be 1 – .95 = 5% or 0.05 in decimal for a one-tailed test.
 For two-tailed tests, divide the alpha level by 2. Using the same example, the alpha in each tail would be .05/2 = 2.5% or 0.025 in decimal.
 Determine the critical value of the test statistic based on the significance level (alpha) of the test and the degrees of freedom. The degrees of freedom are calculated as the sample size minus 1.
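The paired-samples t-test and the alpha/critical-value steps can be put together in a short sketch. The pre/post scores are hypothetical, and the critical value is read from a standard t-table (df = 4, two-tailed α = 0.05 gives 2.776).

```python
from math import sqrt

def paired_t(before, after):
    """Paired t: t = mean(d) / (sd(d) / sqrt(n)), with d = after - before and df = n - 1."""
    d = [a - b for b, a in zip(before, after)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of the differences
    return mean_d / sqrt(var_d / n), n - 1

# hypothetical pre-test and post-test scores for five participants
t, df = paired_t([10, 12, 9, 15, 14], [12, 15, 9, 18, 16])  # t ~ 3.65, df = 4
t_critical = 2.776                 # t-table: df = 4, two-tailed alpha = 0.05
reject_h0 = abs(t) >= t_critical   # True: the mean difference is significant
```

Since |3.65| exceeds 2.776, the null hypothesis of no change between the two time points would be rejected.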
T-TEST FOR INDEPENDENT SAMPLES
To run the independent t-test analysis we need to prepare one independent categorical variable that has two levels of group.

ASSUMPTIONS FOR THE T-TEST INDEPENDENT
The samples are random samples. The sample data are independent of one another. When the sample sizes are less than 30, the populations must be approximately normally distributed. Your dependent variable should be measured on a continuous scale (at the interval or ratio level).

PROCEDURE:
1. DEFINE THE NULL & ALTERNATIVE HYPOTHESIS
2. DETERMINE THE ALPHA LEVEL & CRITICAL VALUE
3. CALCULATE THE DEGREES OF FREEDOM
4. CALCULATE THE TEST STATISTICS: find the mean and variance (standard deviation); add all the scores, then divide the sum by the sample size.
5. RESULTS & CONCLUSION

PEARSON PRODUCT MOMENT CORRELATION
The most widely used statistic for measuring the degree of the relationship between linearly related variables. The Pearson correlation requires both variables to be normally distributed. For example, in the stock market, if we want to measure how two products relate to each other, Pearson r is used to measure the degree of relationship between the two products. It was founded by Karl Pearson.

CORRELATION COEFFICIENT Computed from the sample data; measures the strength and direction of a linear relationship between two variables. The following formula is used to calculate the Pearson r correlation:
r = [nΣxy − (Σx)(Σy)] / √{[nΣx² − (Σx)²][nΣy² − (Σy)²]}
n: the number of data pairs
Σx: the sum of all the individual x-values
Σy: the sum of all the individual y-values
Σxy: the sum of the products of each corresponding x and y value. For each data pair (xi, yi), you multiply xi by yi, and then you add up all these products.
The value of the correlation coefficient varies between +1 and -1.
 When there is a strong positive linear relationship between the variables, the value of r will be close to +1.
 When there is a strong negative linear relationship between the variables, the value of r will be close to -1.
 When there is no linear relationship between the variables, or only a weak relationship, the value of r will be close to 0.

TEST OF SIGNIFICANCE
A test of significance for the coefficient of correlation may be used to find out if the computed Pearson's r could have occurred in a population in which the two variables are related or not.
The test statistic follows the t distribution with n - 2 degrees of freedom. The significance is computed using the formula of the t test:
t = r√(n − 2) / √(1 − r²)
t = t test for correlation coefficient
r = correlation coefficient
n = number of paired samples

ASSUMPTIONS IN PEARSON PRODUCT MOMENT CORRELATION TEST
 Subjects are randomly selected.
 Both populations are normally distributed.

PROCEDURE:
1. SET UP THE HYPOTHESIS
2. SET THE LEVEL OF SIGNIFICANCE
3. CALCULATE THE PEARSON'S R
4. CALCULATE THE DEGREES OF FREEDOM (DF = N - 2) AND DETERMINE THE CRITICAL VALUE OF T
5. CALCULATE THE VALUE OF THE T & DETERMINE THE STATISTICAL DECISION FOR HYPOTHESIS TESTING: If tcomputed < tcritical, do not reject Ho. If tcomputed ≥ tcritical, reject Ho.

When the null hypothesis has been rejected for a specific significance level, there are possible relationships between the X and Y variables:
 There is a direct cause-and-effect relationship between the two variables.
 There is a reverse cause-and-effect relationship between the two variables.
 The relationship between the two variables may be caused by a third variable.
 There may be a complexity of interrelationships among many variables.
 The relationship between the two variables may be coincidental.
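The Pearson r formula and its significance test can be computed by hand as below; the study-hours/quiz-scores pairs are hypothetical.

```python
from math import sqrt

def pearson_r(x, y):
    # r = [n*sum(xy) - sum(x)*sum(y)] / sqrt([n*sum(x^2) - (sum x)^2][n*sum(y^2) - (sum y)^2])
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

def t_for_r(r, n):
    # t = r * sqrt(n - 2) / sqrt(1 - r^2), with df = n - 2
    return r * sqrt(n - 2) / sqrt(1 - r ** 2)

x = [1, 2, 3, 4, 5]     # hypothetical hours studied
y = [2, 4, 5, 4, 5]     # hypothetical quiz scores
r = pearson_r(x, y)     # -> about 0.775 (strong positive)
t = t_for_r(r, len(x))  # -> about 2.12, df = 5 - 2 = 3
```

With df = 3, the two-tailed 0.05 critical t is 3.182, so this r, despite being large, would not reach significance in so small a sample.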

ANALYSIS OF VARIANCE

An Analysis of Variance (ANOVA) test is a statistical test used to determine if there is a statistically significant difference between two or more categorical groups by testing for differences of means using variance.

ASSUMPTIONS FOR ANOVA
Samples follow a normal distribution. Samples have been selected randomly and independently. Each group should have common variance. Data are independent.

PROCEDURE (ONE-WAY ANOVA)
1. STATE THE HYPOTHESIS
2. SOLVE FOR THE DEGREES OF FREEDOM
df (Between) = K - 1; df (Within) = N - K; df (Total) = N - 1
N = the total number of observations
K = the number of groups
3. SUM OF SQUARES (TOTAL)
FORMULA: SST = Σ(xi - x̄)²
Where (xi) represents each individual point.
Where (x̄) represents the grand mean. (x̄ = Σx/n)
4. CALCULATE THE VARIANCE
MS (Between) = SS (Between)/df (Between)
MS (Within) = SS (Within)/df (Within)
5. CALCULATE THE F STATISTIC
F = MS (Between)/MS (Within)
6. INTERPRET
*F-statistic > F critical = reject the null; there is a significant difference.
*p-value > 0.05 = fail to reject the null hypothesis / accept the null.
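The one-way procedure above can be run end to end for three hypothetical groups of three scores each; SS(Between) is computed from the spread of group means around the grand mean, and SS(Within) from scores around their own group mean.

```python
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of score lists."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k = len(groups)
    n_total = len(all_scores)
    # SS between: weighted spread of each group mean around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # SS within: spread of each score around its own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# hypothetical scores under three teaching methods
f, dfb, dfw = one_way_anova([[1, 2, 3], [2, 3, 4], [4, 5, 6]])  # F = 7.0, df (2, 6)
```

Here F = 7.0 with df (2, 6); the 0.05 critical F is 5.14, so the null hypothesis of equal group means would be rejected.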


PROCEDURE (TWO-WAY ANOVA)

SUM OF SQUARES (TOTAL)
SST = Σ(xi - x̄)²

DEGREES OF FREEDOM (TOTAL)
df (Total) = (n)(p)(q) - 1
Where: n is the number of people per group; p is the number of groups in Factor A; q is the number of groups in Factor B.

VARIANCE (TOTAL)
MS (Total) = SS (Total)/df (Total)

SUM OF SQUARES (BETWEEN)
SS (Between) = Σ n(x̄ group - x̄ grand)², summed over the groups.

NON-PARAMETRIC STATISTICS refers to statistical methods in which the data are not assumed to come from prescribed models that are determined by a small number of parameters; examples of such models include the normal distribution model and the linear regression model. Nonparametric statistics sometimes uses data that is ordinal, meaning it does not rely on numbers, but rather on a ranking or order of sorts. For example, a survey conveying consumer preferences ranging from like to dislike would be considered ordinal data. Nonparametric statistics includes nonparametric descriptive statistics, statistical models, inference, and statistical tests. The model structure of nonparametric models is not specified a priori but is instead determined from data. The term nonparametric is not meant to imply that such models completely lack parameters, but rather that the number and nature of the parameters are flexible and not fixed in advance. A histogram is an example of a nonparametric estimate of a probability distribution.

METHODS OF NON-PARAMETRIC STATISTICS

SPEARMAN RANK CORRELATION The nonparametric equivalent of the Pearson correlation; appropriate when the linearity of two variables must be assessed in the context of their relationship to one another, and/or when the variables are of an ordinal level of measurement. This approach can also be used when the data is not normally distributed, and it is not sensitive to outliers, unlike the Pearson correlation. Spearman's rank correlation is applicable to tasks where the correlation between two ordinal variables is examined through a monotonic function.

FORMULA
ρ = 1 - (6Σdi²) / (n(n² - 1))
di = difference between each observation's two ranks
n = numerical value for the number of observations
ρ = Spearman's Rank Coefficient

PROCEDURE
1. RANK THE DATA GIVEN (highest to lowest).
2. SUBTRACT THE RANKS & SQUARE, THEN ADD THE SUM OF THE d².
3. COMPUTE THE T-STATISTIC.
n = numerical value for the number of observations; rs = Spearman's Rank Coefficient
4. Find the following: N = numerical value for the number of observations; rs = Spearman's Rank Coefficient; df = maximum number of logically independent values; α = the level of uncertainty the researcher is willing to accept (alpha or significance level); critical value (df = n - 2).
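The ranking step and the ρ formula can be demonstrated with hypothetical math and English scores for five students; ranking follows step 1 (highest score = rank 1), and ties are assumed away for simplicity.

```python
def ranks_desc(values):
    """Rank with 1 = highest score; assumes no tied values."""
    order = sorted(values, reverse=True)
    return [order.index(v) + 1 for v in values]

def spearman_rho(x, y):
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = difference of paired ranks
    rx, ry = ranks_desc(x), ranks_desc(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

math_scores = [56, 75, 45, 71, 61]     # hypothetical; ranks [4, 1, 5, 2, 3]
english_scores = [66, 70, 40, 60, 65]  # hypothetical; ranks [2, 1, 5, 4, 3]
rho = spearman_rho(math_scores, english_scores)  # -> 0.6
```

Here Σdi² = 8, so ρ = 1 − 48/120 = 0.6, a moderate positive rank correlation.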
MANN-WHITNEY U TEST The non-parametric alternative to the independent samples t-test. It is a non-parametric test that is used to compare two sample means that come from the same population, and used to test whether two sample means are equal or not. Usually, the Mann-Whitney U test is used when the data is ordinal or when the assumptions of the t-test are not met.

FORMULA
U1 = n1n2 + n1(n1 + 1)/2 - R1
U2 = n1n2 + n2(n2 + 1)/2 - R2
R1 & R2 = rank sums of the two samples
n1 & n2 = numerical values for the number of observations
U = Mann-Whitney U test statistic (the smaller of U1 and U2)

PROCEDURE
1. FIND n1 & n2, then rank in ascending order; if there are repeating numbers, find the midpoint by adding the ranks and dividing them by 2.
2. REGROUP AND GET THE SUM OF R1 & R2.
3. SOLVE & PICK THE LOWEST NUMBER.
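The three steps above can be sketched for two hypothetical groups of six scores; pooled scores are ranked in ascending order with tied values sharing the midpoint of their ranks, as in step 1.

```python
def midranks(values):
    """Ascending ranks, with tied values sharing the average of their rank positions."""
    s = sorted(values)
    return [(s.index(v) + 1 + s.index(v) + s.count(v)) / 2 for v in values]

def mann_whitney_u(g1, g2):
    # U_i = n1*n2 + n_i*(n_i + 1)/2 - R_i; report the smaller of U1 and U2
    pooled = list(g1) + list(g2)
    ranks = midranks(pooled)
    n1, n2 = len(g1), len(g2)
    r1 = sum(ranks[:n1])   # rank sum of group 1
    r2 = sum(ranks[n1:])   # rank sum of group 2
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u2 = n1 * n2 + n2 * (n2 + 1) / 2 - r2
    return min(u1, u2)

# hypothetical scores for two unrelated groups
u = mann_whitney_u([3, 4, 2, 6, 2, 5], [9, 7, 5, 10, 6, 8])  # -> 2.0
```

With n1 = n2 = 6, the two-tailed 0.05 critical U from a Mann-Whitney table is 5; since 2 ≤ 5, the null hypothesis of equal groups would be rejected.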
APPLICATION OF STATISTICS IN PSYCHOLOGICAL RESEARCH
The systematic use of tests to quantify psychophysical behavior, abilities, and problems and to make predictions about psychological performance. An essentially objective and standardized measure of a sample of behavior. Psychological testing involves the use of formal tests such as questionnaires or checklists to help arrive at a diagnosis and a treatment plan.
Psychological assessment is a series of tests conducted by a psychologist to gather information about how people think, feel, behave, and react.

CHARACTERISTICS:
RELIABILITY: The psychological assessment/test must produce the same result no matter when it's taken.
VALIDITY: Provides a direct check on how well the test fulfills its function.
NORMS: The test must have established norms for comparison.
OBJECTIVITY: The assessment must be free from any personal bias in its scoring, interpretation of scoring, or administration.

TYPES OF PSYCHOLOGICAL TESTING
1. ABILITY TESTING Also known as aptitude tests; assess various cognitive abilities such as verbal, numerical, abstract reasoning, spatial awareness, logical reasoning, and problem-solving. These tests are designed to evaluate individuals' natural talents and potential to learn and succeed in specific areas.
 INDIVIDUAL TESTS
 STANFORD-BINET INTELLIGENCE SCALE Measures intelligence through five factors of cognitive ability. These factors are fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory.
 WECHSLER SCALES A series of psychological tests designed to measure human intelligence and memory functions. Successive editions of scales: one for adults, one for school-age children, and one for preschool children.
 THE KAUFMAN SCALES The K-ABC and KAIT represent attempts to move beyond the atheoretical stance of the older intelligence scales. They sought to create instruments that by design would be anchored in evolving theories of intelligence and would include developmentally appropriate tasks.
 TESTS FOR SPECIAL POPULATIONS
 GROUP TESTING
2. PERSONALITY TESTING A personality test is a tool used to measure personal characteristics, behavior patterns, and preferences.
 SELF-REPORT PERSONALITY INVENTORIES A type of psychological test in which a person fills out a survey or questionnaire with or without the help of an investigator. Examples: Big 5 & 16 Personality Factor.
 MEASURING INTERESTS & ATTITUDES Refers to an individual's interests as measured by interest inventories. The Strong Interest Inventory is a career-planning tool that measures your interests over several categories. The SII helps you identify which careers you are most well-suited for, based on your preferences in school subjects, work activities, and leisure activities.
 PROJECTIVE TECHNIQUES Methods of investigation that use unstructured stimuli to elicit responses from the subject that reveal their hidden feelings, thoughts, and motives. The Rorschach is a projective psychological test in which a person is asked to describe what he or she sees in 10 inkblots.