
Statistical Package for the Social Sciences (SPSS)

Output Screen
Tests of Normality
When you request normality checks (e.g., via Analyze > Descriptive Statistics > Explore, or Analyze > Descriptive Statistics > Frequencies with plots), SPSS usually reports two main tests of normality:

Shapiro-Wilk Test

● Best for small samples (n < 50, but usable up to 2000).
● Null hypothesis (H₀): Your data is normally distributed.
● Interpretation:
○ If p > 0.05 → Fail to reject H₀ → Data is normally distributed.
○ If p < 0.05 → Reject H₀ → Data is not normally distributed (showing skewness or excess kurtosis).

Kolmogorov-Smirnov (K-S) Test

● Works for larger samples.
● Same logic as above:
○ p > 0.05 → Normal
○ p < 0.05 → Not normal
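These two checks can be cross-verified outside SPSS. Here is a minimal sketch using Python's scipy.stats (the data and variable name are hypothetical; note that the K-S test in SPSS's Explore output applies a Lilliefors correction, which plain scipy does not):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
stress = rng.normal(loc=50, scale=10, size=45)  # hypothetical "Stress Level" scores

# Shapiro-Wilk: H0 = the data are normally distributed
w, p_sw = stats.shapiro(stress)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p_sw:.3f}")

# One-sample K-S against a normal with the sample's mean and SD
# (an approximation: SPSS's Explore output uses the Lilliefors-corrected version)
d, p_ks = stats.kstest(stress, "norm", args=(stress.mean(), stress.std(ddof=1)))
print(f"K-S: D = {d:.3f}, p = {p_ks:.3f}")

# p > 0.05 on both -> fail to reject H0 -> treat the data as normal
```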

Visual (Graphical) Tests in SPSS


Histogram with Normal Curve
- Bell-shaped curve.
- Symmetrical around the mean.
- If your data matches the normal curve closely, that’s a good sign.
Q-Q Plot (Quantile-Quantile Plot)
- Dots should lie along the straight diagonal line.
- Deviation from this line = departure from normality.
- Systematic curves (like S-shapes) indicate skewness or kurtosis.

Sample SPSS Output Interpretation Example

Let’s say you’re testing the normality of a variable called "Stress Level" and you get:

Test                 Statistic   df   Sig. (p-value)
Shapiro-Wilk         .976        45   .315
Kolmogorov-Smirnov   .105        45   .200*

Interpretation:
- Both p-values > 0.05, so we fail to reject the null hypothesis.
- Data is likely normally distributed.
- You can proceed with parametric tests (like t-test, ANOVA, regression, etc.).

Correlation

             Variable A   Variable B
Variable A   1            .72**
Variable B   .72**        1

How to Interpret the Output

Correlation Coefficient (r or ρ)

- +1.0 = perfect positive correlation
- -1.0 = perfect negative correlation
- 0 = no linear correlation

Strength of Relationship:

Coefficient (|r| or |ρ|)   Strength
0.00 – 0.10                Very weak
0.10 – 0.30                Weak
0.30 – 0.50                Moderate
0.50 – 0.70                Strong
0.70 – 1.00                Very strong

Significance Level (p-value)

- If p > 0.05 → No statistically significant correlation → fail to reject the null hypothesis.
- If p ≤ 0.05 → Statistically significant correlation → reject the null hypothesis.
- If SPSS shows p = .000 → the value has been rounded; interpret and report it as p < .001 (very strong evidence of a correlation).

📊 Choosing Between Pearson and Spearman

Feature               Pearson                             Spearman
Data type             Interval or ratio                   Ordinal, interval, or ratio
Relationship type     Linear                              Monotonic (can be curved)
Normality required?   Yes (especially in small samples)   No
Outlier sensitivity   Sensitive                           Less sensitive

Example SPSS Output Interpretation (Pearson)

                   Job Satisfaction   Empowerment
Job Satisfaction   1                  .65**
Empowerment        .65**              1

r = .65, p = .000 (report as p < .001) → There is a strong, positive, and statistically significant correlation between job satisfaction and empowerment.
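As a cross-check, both coefficients can be computed with scipy.stats; a minimal sketch with hypothetical paired scores:

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores on two scales
job_sat = np.array([3.2, 4.1, 3.8, 2.9, 4.5, 3.6, 4.0, 3.3, 4.2, 3.9])
empower = np.array([2.8, 4.0, 3.5, 2.5, 4.4, 3.2, 3.9, 3.0, 4.1, 3.7])

r, p = stats.pearsonr(job_sat, empower)        # linear relationship, interval data
rho, p_s = stats.spearmanr(job_sat, empower)   # monotonic relationship, rank-based

print(f"Pearson  r = {r:.2f}, p = {p:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_s:.3f}")
```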

Scatter Plots
Purpose
Scatter plots visually show how two continuous variables relate. This is especially helpful before
running correlations or regression.
Key Components

Part of the Plot     Meaning
X-axis               Independent variable
Y-axis               Dependent variable
Dots (data points)   Each point represents one observation

What to Look For:

Direction
- Positive correlation: As X increases, Y increases (↗ upward trend).
- Negative correlation: As X increases, Y decreases (↘ downward trend).
- No correlation: Points are scattered randomly; no clear pattern.
Strength
- Tightly clustered points = stronger correlation.
- Loosely scattered points = weaker correlation.
Shape
- Linear: Points form a straight-ish line → good for Pearson correlation.
- Curved/Non-linear: May require non-parametric tests or transformation.
Outliers
- Individual points far away from the rest → may affect correlation or regression.
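Before running a correlation, a quick plot makes these patterns visible. A minimal sketch with matplotlib, using simulated data (all names hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 100)              # hypothetical independent variable
y = 0.8 * x + rng.normal(0, 8, 100)      # positively related dependent variable

plt.scatter(x, y, alpha=0.6)
plt.xlabel("Independent variable (X)")
plt.ylabel("Dependent variable (Y)")
plt.title("Look for direction, strength, shape, and outliers")
plt.show()
```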

Regression

Model Summary Table

R      R Square   Adjusted R Square   Std. Error
.764   .584       .573                3.12

R: Correlation between predicted and observed values.
R Square: The proportion of variance in the dependent variable explained by the model.
E.g., .584 = 58.4% of the variability in the outcome is explained by the predictors.
Adjusted R Square: A more accurate version of R² that adjusts for the number of predictors.
Std. Error: Standard deviation of the residuals; lower is better.

ANOVA Table

Model        Sum of Squares   df   Mean Square   F       Sig.
Regression   456.78           2    228.39        32.11   .000
Residual     324.12           47   6.89

F-test: Tests whether the overall regression model is statistically significant. Here, Sig. = .000 (report as p < .001), so the model as a whole is significant.

Coefficients Table

Predictor    B (Unstd.)   Std. Error   Beta (Std.)   t      Sig.
(Constant)   5.24         1.12         —             4.67   .000
Variable A   0.82         0.19         .546          4.32   .000
Variable B   0.42         0.17         .326          2.47   .018

B (Unstandardized Coefficients): Raw effect of the predictor.
→ E.g., for every 1-unit increase in Variable A, the dependent variable increases by 0.82.
Sig. (p-value) for each predictor:
- p < 0.05 → Predictor significantly contributes to the model.
- p > 0.05 → Predictor is not statistically significant.
Stepwise Regression in SPSS
- SPSS adds/removes predictors based on statistical criteria (usually p-values).
- Output shows:
  - Which variables were entered/removed at each step.
  - The final model, which includes only significant predictors.

Let’s say you’re predicting Job Satisfaction from Empowerment and Leadership Style. Your coefficients
table shows:

Predictor          B     p
Empowerment        .45   .002
Leadership Style   .32   .016

Interpretation:
Both Empowerment and Leadership Style significantly predict Job Satisfaction. For each unit increase in
Empowerment, Job Satisfaction increases by 0.45 units, controlling for Leadership Style.
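Outside SPSS, the same kind of model can be fit with ordinary least squares. A sketch using statsmodels with simulated data (the coefficients are illustrative, not the values from the tables above):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
empowerment = rng.normal(3.5, 0.6, 50)
leadership = rng.normal(3.0, 0.7, 50)
job_sat = 1.0 + 0.45 * empowerment + 0.32 * leadership + rng.normal(0, 0.4, 50)

# Build the design matrix with an intercept, then fit OLS
X = sm.add_constant(np.column_stack([empowerment, leadership]))
model = sm.OLS(job_sat, X).fit()

# summary() prints R-squared, the F-test, and a coefficients table
# analogous to SPSS's Model Summary, ANOVA, and Coefficients output
print(model.summary())
```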

T-Test
Independent Sample T-Test (Two-Sample T-Test)
Purpose: To compare the means of two independent groups to see if they are significantly different.

Use When:
- The two groups are not related (e.g., males vs females, treatment group vs control group).
- You want to know if group means differ in the population.
Assumptions:
- Normal distribution of data in both groups.
- Homogeneity of variances (equal variances).
- Independent observations.
SPSS Output Interpretation:
- Group Statistics Table: shows the mean, N, and std. deviation for each group.
- Independent Samples Test Table: check Levene's test for equality of variances, then read the Sig. (2-tailed) value on the appropriate row.
Example: Comparing stress levels between two departments. If p = .03, stress levels differ significantly between the groups.
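A minimal scipy.stats sketch of the same analysis, with simulated stress scores (hypothetical data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dept_a = rng.normal(60, 8, 30)   # hypothetical stress scores, department A
dept_b = rng.normal(55, 8, 30)   # hypothetical stress scores, department B

# Levene's test for equal variances (mirrors the first block of SPSS output)
lev_stat, lev_p = stats.levene(dept_a, dept_b)

# Independent samples t-test; use Welch's version (equal_var=False) if Levene's p < .05
t, p = stats.ttest_ind(dept_a, dept_b, equal_var=(lev_p > 0.05))
print(f"t = {t:.2f}, p = {p:.3f}")
```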

Paired Sample T-Test (Dependent T-Test)

Purpose: To compare the means of two related groups or repeated measures to see if there's a significant difference.
Use When:
- Same subjects tested at two different times (e.g., before and after treatment).
- Matched pairs (e.g., twins, couples, case-control studies).
Assumptions:
- Data is normally distributed.
- The two sets of data are related or dependent.
SPSS Output Interpretation:
Paired Samples Statistics Table: shows the mean and std. deviation at both time points or for both paired groups.
Example: Measuring anxiety before and after a mindfulness workshop. If p = .01 and the post-workshop mean is lower, anxiety levels significantly decreased after the workshop.
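The paired version differs only in the function call; a sketch with simulated before/after scores (hypothetical data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
before = rng.normal(65, 10, 25)          # hypothetical anxiety, pre-workshop
after = before - rng.normal(5, 4, 25)    # same people, post-workshop

t, p = stats.ttest_rel(before, after)
print(f"t = {t:.2f}, p = {p:.3f}, mean change = {(after - before).mean():.2f}")
```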

ANOVA (Analysis of Variance)


Purpose: To test whether three or more group means are significantly different from each other for one
dependent variable.
Types of ANOVA:
One-Way ANOVA

● Use When: One independent variable (with 3+ levels) and one dependent variable.
● Example: Comparing mean anxiety levels across three therapy types.

Two-Way ANOVA

● Use When: Two independent variables and one dependent variable.
● Purpose: Tests:
○ Main effect of each factor.
○ Interaction effect between the factors.
● Example: Effect of teaching method and gender on student performance.
📉 SPSS Output for ANOVA:

Look at the ANOVA Table. It lists the Between Groups and Within Groups sources, each with its df, Mean Square, F, and Sig. (p-value). Read the Sig. value on the Between Groups row:
- Sig. < .05 → Significant: at least one group mean differs.
- Sig. > .05 → Not significant: no evidence that the group means differ.
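A one-way ANOVA can be cross-checked with scipy.stats.f_oneway; a sketch with simulated scores for three hypothetical therapy types:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
therapy1 = rng.normal(40, 6, 20)   # hypothetical anxiety scores per therapy type
therapy2 = rng.normal(45, 6, 20)
therapy3 = rng.normal(44, 6, 20)

f, p = stats.f_oneway(therapy1, therapy2, therapy3)
print(f"F = {f:.2f}, p = {p:.3f}")  # p < .05 -> at least one mean differs
```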

MANOVA (Multivariate Analysis of Variance)

Purpose: To assess whether two or more groups differ across multiple dependent variables simultaneously.

Types of MANOVA:

One-Way MANOVA

● Use When: One IV (group factor), multiple DVs.
● Example: Effect of therapy type on anxiety, depression, and sleep quality.

Two-Way MANOVA

● Use When: Two IVs and multiple DVs.
● Example: Effect of therapy type and gender on anxiety, depression, and sleep quality.

📉 SPSS Output for MANOVA:

Look at the Multivariate Tests table, especially Wilks’ Lambda: if p < .05, there is a significant overall effect on the combined dependent variables.
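In SPSS this runs via Analyze > General Linear Model > Multivariate. As a rough analogue, a sketch using statsmodels (simulated data, hypothetical column names):

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "therapy": np.repeat(["A", "B", "C"], 20),
    "anxiety": rng.normal(40, 6, 60),
    "depression": rng.normal(35, 5, 60),
    "sleep": rng.normal(6.5, 1.2, 60),
})

# Three DVs predicted by one grouping IV
m = MANOVA.from_formula("anxiety + depression + sleep ~ therapy", data=df)
print(m.mv_test())  # includes Wilks' lambda and its p-value
```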

Post-Hoc Tests
Purpose: To find which specific group pairs are significantly different after a significant ANOVA result.
Common Post-Hoc Test: Tukey’s HSD
- Best For: Equal or nearly equal group sizes.
- What It Does: Compares all pairs of group means and adjusts for multiple comparisons.
📉 SPSS Output for Post-Hoc (Tukey):

Group 1     Group 2     Mean Difference   Sig. (p-value)
Therapy A   Therapy B   -3.25             .02 → Significant
Therapy A   Therapy C   -1.10             .40 → Not significant

Interpretation: Significant differences exist only where p < .05.
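A parallel Tukey HSD computation, sketched with statsmodels (simulated scores, hypothetical group labels):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
scores = np.concatenate([rng.normal(40, 6, 20),
                         rng.normal(43, 6, 20),
                         rng.normal(41, 6, 20)])
groups = np.repeat(["Therapy A", "Therapy B", "Therapy C"], 20)

# Compares all pairs of group means, adjusting for multiple comparisons
print(pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05))
```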

Chi Square

Feature              Goodness of Fit                                                Test of Independence
Variables involved   One categorical variable                                       Two categorical variables
Purpose              Test if observed frequencies match an expected distribution    Test if two variables are associated
Null hypothesis      Distribution fits the expected                                 Variables are independent
Example              Color preference                                               Gender vs. Food Choice
p < 0.05 means       Distribution doesn't fit                                       Variables are related
SPSS output focus    Chi-Square, df, Sig.                                           Pearson Chi-Square, Sig.
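Both variants can be reproduced with scipy.stats; a minimal sketch (the frequency counts are hypothetical):

```python
import numpy as np
from scipy import stats

# Goodness of fit: do observed color preferences match an equal-preference expectation?
observed = np.array([30, 25, 20, 25])
chi2, p = stats.chisquare(observed)   # expected frequencies are equal by default
print(f"Goodness of fit: chi2 = {chi2:.2f}, p = {p:.3f}")

# Test of independence: gender (rows) vs. food choice (columns)
table = np.array([[20, 15, 10],
                  [10, 20, 25]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"Independence: chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```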

Non-Parametric Tests (Quick View)

Test                   Groups   Relationship   What it compares   Analogy
Mann-Whitney U         2        Independent    Ranks              “Man vs Man”
Wilcoxon Signed Rank   2        Related        Ranks              “Signs deal twice”
Sign Test              2        Related        Signs (+/-)        “Just direction”
Kruskal-Wallis         3+       Independent    Ranks              “King Kruskal judges”
Friedman               3+       Related        Ranks              “Friedman’s family”
Median Test            2+       Independent    Medians            “Middle matters”

Mann-Whitney U Test

Used for: Comparing 2 independent groups

Interpretation:
- If p < 0.05 → Significant difference between the two groups.
- If p > 0.05 → No significant difference.


Example:
You compare customer satisfaction scores for Store A and Store B.
p = 0.02 → One store significantly outperformed the other.
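A minimal scipy.stats sketch with hypothetical satisfaction ratings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
store_a = rng.integers(1, 11, 30)   # hypothetical satisfaction ratings, 1-10
store_b = rng.integers(3, 11, 30)

u, p = stats.mannwhitneyu(store_a, store_b)
print(f"U = {u:.1f}, p = {p:.3f}")
```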

Wilcoxon Signed Rank Test

Used for: Comparing 2 related/paired groups (e.g., before-after)

Interpretation:
- If p < 0.05 → Significant change between the two time points or conditions.
- If p > 0.05 → No significant difference.

Example:
Test scores before and after training.
p = 0.01 → Training had a significant effect.
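A scipy.stats sketch with simulated before/after scores (hypothetical data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
before = rng.normal(60, 10, 25)          # hypothetical test scores, pre-training
after = before + rng.normal(4, 5, 25)    # same people, post-training

w, p = stats.wilcoxon(before, after)
print(f"W = {w:.1f}, p = {p:.3f}")
```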

Sign Test

Used for: 2 related groups (like Wilcoxon), but focuses only on direction (+/-)
Interpretation:

- If p < 0.05 → Significant directional change (e.g., most improved/worsened)

- If p > 0.05 → No consistent direction of change

Example:
Satisfaction before and after app update.
p = 0.30 → No strong directional shift.
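scipy.stats has no dedicated sign test, but the test reduces to a binomial test on the direction of the nonzero differences; a sketch with hypothetical ratings:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(9)
before = rng.integers(1, 8, 30)                           # hypothetical 1-7 satisfaction, pre-update
after = np.clip(before + rng.integers(-2, 3, 30), 1, 7)   # same users, post-update

diffs = after - before
n_pos = int((diffs > 0).sum())    # improvements (+)
n = int((diffs != 0).sum())       # ties are dropped

res = binomtest(n_pos, n, p=0.5)  # H0: + and - changes equally likely
print(f"{n_pos}/{n} positive, p = {res.pvalue:.3f}")
```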

Kruskal-Wallis H Test
Used for: Comparing 3+ independent groups
Interpretation:
- If p < 0.05 → At least one group differs significantly.
- If p > 0.05 → No significant difference between groups.


Example:
Comparing test scores from 3 teaching methods.
p = 0.04 → Some methods work better — post hoc needed to find out which.
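A scipy.stats sketch with simulated scores for three hypothetical teaching methods:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
method1 = rng.normal(70, 10, 20)   # hypothetical test scores per teaching method
method2 = rng.normal(75, 10, 20)
method3 = rng.normal(73, 10, 20)

h, p = stats.kruskal(method1, method2, method3)
print(f"H = {h:.2f}, p = {p:.3f}")
```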

Friedman Test

Used for: Comparing 3+ related (repeated-measures) groups

Interpretation:

● If p < 0.05 → At least one condition is significantly different
→ Suggests a change over time or conditions.
→ Post hoc: run Wilcoxon Signed Rank tests on pairs to find where.
● If p > 0.05 → No significant change across conditions

Example:
Performance in 3 different lighting settings.
p = 0.01 → One condition significantly affected performance.
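A scipy.stats sketch with simulated repeated-measures data (hypothetical lighting conditions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
dim = rng.normal(60, 8, 20)            # hypothetical performance, measured on the
normal = dim + rng.normal(3, 4, 20)    # same 20 people under each lighting setting
bright = dim + rng.normal(5, 4, 20)

chi2, p = stats.friedmanchisquare(dim, normal, bright)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```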

Median Test

Used for: Comparing medians across 2 or more independent groups

Interpretation:

- If p < 0.05 → At least one group has a significantly different median

- If p > 0.05 → Groups have similar medians

Example:
Checking if median income differs by region.
p = 0.06 → No significant difference in medians.
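scipy.stats implements this as Mood's median test; a sketch with simulated incomes (hypothetical regions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
north = rng.lognormal(10.5, 0.4, 40)   # hypothetical incomes by region
south = rng.lognormal(10.6, 0.4, 40)
west = rng.lognormal(10.7, 0.4, 40)

stat, p, grand_median, table = stats.median_test(north, south, west)
print(f"chi2 = {stat:.2f}, p = {p:.3f}, grand median = {grand_median:.0f}")
```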

General Rule for All Tests

If p ≤ 0.05 → There is a significant difference (reject the null hypothesis).
If p > 0.05 → There is no significant difference (fail to reject the null hypothesis).
