
Unit - 3

Testing of hypothesis: Basic concepts, P-value approach. Analysis of
variance: The ANOVA technique, the basic principle of ANOVA,
One-way ANOVA, Two-way ANOVA. Other nonparametric tests:
Wilcoxon Signed Rank Sum Test, Mann-Whitney U Test, Runs Test,
Kruskal-Wallis Test.
Basic concepts
• A hypothesis is a key tool in research, primarily used to guide new
experiments and observations.
• Many experiments are specifically designed to test hypotheses.
• In decision-making, especially in fields like social science where direct
knowledge of population parameters is limited, hypothesis testing
helps determine whether sample data support a hypothesis strongly
enough to generalize findings.
Example: Social Media Use and Stress
Question:
Does using social media more than 3 hours a day increase stress
levels?
H₀: Social media use does not affect stress.
H₁: High social media use increases stress.
Data:
Survey stress levels of two groups:
Less than 3 hours
More than 3 hours
Decision:
If stress is significantly higher in the more-than-3-hours group → reject H₀.
WHAT IS A HYPOTHESIS?
• A hypothesis is more than just an assumption—it's a formal question
or proposition that a researcher aims to test.
• In research, a hypothesis often takes the form of a predictive
statement that links an independent variable to a dependent
variable and it must be testable using scientific methods.
• Examples:
Students who sleep at least 8 hours a night score higher on exams.

IV: Amount of sleep


DV: Exam scores
Testable? Yes – compare test scores of students based on sleep duration.
Characteristics of Hypothesis
Clarity & Precision
Should be clearly and precisely stated to ensure reliable conclusions.
Testability
Must be capable of being tested through observations or experiments.
Relationship Between Variables
Should define the relationship (if it's a relational hypothesis).
Specific & Limited Scope
Narrower, focused hypotheses are easier and more practical to test.
Simplicity
Should be stated in simple terms for better understanding, without affecting its value.
Consistency with Known Facts
Should align with established knowledge and accepted facts.
Timely Testability
Must be testable within a reasonable period—not take forever to verify.
Explanatory Power
Must explain the original problem using accepted theories and known facts.
BASIC CONCEPTS CONCERNING TESTING OF
HYPOTHESES
1. NULL HYPOTHESES
• The null hypothesis is generally symbolized as H0
• The alternative hypothesis as Ha
• Suppose we are comparing method A with method B regarding superiority.
If we assume that both methods are equally good, this assumption is
termed the null hypothesis.
• If instead we think that method A is superior (or that method B is
inferior), we are stating what is termed the alternative hypothesis.
Null Hypothesis (H₀)
•Assumes no difference or no effect
•Example:
"Method A and Method B are equally good"
"Population mean μ = 100"
Symbolically: H₀: μ = 100
Alternative Hypothesis (Hₐ)
• States there is a difference or effect
• Opposes H₀
• Example: If we suspect method A is better than B, that's Hₐ.

H₀ vs Hₐ – Example
• H₀: μ = 100 (Mean is 100)
• If sample results don’t support H₀, we reject it and accept Hₐ.
2. The level of significance

• A key concept in hypothesis testing


• Denoted by α (alpha)
• Commonly used values: 5% (0.05) or 1% (0.01)
• Represents the probability of rejecting H₀ when it is actually true (Type I
Error)
What Does 5% Significance Mean?
If α = 0.05, there is a 5% chance of making a wrong decision by rejecting a true null
hypothesis.
In other words:
“We are 95% confident in our decision to reject H₀”
3. Decision rule or test of hypothesis
• A decision rule tells us when to:
• Accept the null hypothesis (H₀)
• Or reject H₀ in favor of the alternative hypothesis (Hₐ)
• Based on sample data and predefined criteria
Why Use a Decision Rule?
Ensures objective, consistent decision-making
Avoids bias in interpreting results
Based on:
Type of hypothesis
Sample size
Level of significance (α)
Example – Defective Items in a Lot
H₀: The lot is good (few defective items)
Hₐ: The lot is not good (many defective items)
Decision Rule:
Test 10 items
If 0 or 1 defective item → Accept H₀
If 2 or more defective items → Reject H₀ (Accept Hₐ)
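The decision rule above can be checked numerically. As a hedged sketch, suppose a "good" lot is one with a 5% defect rate (an assumed figure, not given in the text); the binomial distribution then gives the rule's Type I error probability:

```python
from math import comb

# Assumed defect rate for a "good" lot (not stated in the text):
n, p = 10, 0.05

def binom_pmf(k, n, p):
    """Probability of exactly k defectives among n independently tested items."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The rule rejects H0 when 2 or more of the 10 items are defective, so
# P(reject H0 | lot is good) = P(X >= 2) = 1 - P(0) - P(1)
alpha = 1 - binom_pmf(0, n, p) - binom_pmf(1, n, p)
print(round(alpha, 3))  # 0.086
```

So under this assumed defect rate, the rule would wrongly reject a good lot about 8.6% of the time.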
4. Type I and Type II errors
In hypothesis testing, two types of errors can occur:
• Type I Error (α error)
• Rejecting H₀ when it is actually true
• Denoted by α (alpha)
• Also called the level of significance
• Example: Concluding a product is defective when it’s actually fine
• Type II Error (β error)
• Accepting H₀ when it is actually false
• Denoted by β (beta)
• Example: Concluding a product is fine when it’s actually defective

Error Possibilities Table

Decision        Accept H₀              Reject H₀
H₀ is true      Correct decision       Type I error (α)
H₀ is false     Type II error (β)      Correct decision
Trade-Off Between Errors
• Reducing α increases the risk of β, and vice versa.
• Both errors cannot be minimized at the same time with a fixed
sample size.
• Decision-makers choose α based on the consequences of each error.
• Example: If a Type II error is more dangerous (e.g., health risk), it's safer to
accept more Type I errors.
5. Two-tailed and one-tailed tests
Two-tailed Test
Used when we want to test if the sample mean is different (either
higher or lower) than the hypothesized population mean.
Hypotheses:
Null Hypothesis (H₀): μ = μ₀
Alternative Hypothesis (Hₐ): μ ≠ μ₀
Rejection regions: Both left and right tails of the distribution
Critical values (at 5% significance level):
Z < -1.96 or Z > +1.96
Use case: Checking for any significant difference in either direction.
One-tailed Test
Used when the difference is expected in one direction only (either greater
than or less than the hypothesized value).
👉 Left-tailed test:
H₀: μ = μ₀
Hₐ: μ < μ₀
Rejection region: Left tail
Critical value (at 5% level): Z < -1.645
👉 Right-tailed test:
H₀: μ = μ₀
Hₐ: μ > μ₀
Rejection region: Right tail
Critical value (at 5% level): Z > +1.645
Example Scenario: Quality Check on a Juice Bottle
A company claims that its juice bottles contain exactly
500 ml of juice.
Two-Tailed Test Example
You, as a quality inspector, want to check if the actual amount is
different (could be less or more) than 500 ml.
H₀ (Null Hypothesis): μ = 500 ml
Hₐ (Alternative Hypothesis): μ ≠ 500 ml
You're testing both sides — is it less or more than 500 ml?
This is a two-tailed test.
✅ Use when you're checking for any deviation from the claimed
value.
One-Tailed Test Example
Let’s say you're only worried if customers are getting less juice than
500 ml.
H₀ (Null Hypothesis): μ = 500 ml
Hₐ (Alternative Hypothesis): μ < 500 ml
You're only testing if it's less than the claim.
This is a left-tailed test (one-tailed).
Or if you're checking whether machines are overfilling bottles:
Hₐ: μ > 500 ml → right-tailed test
✅ Use when you're concerned about a specific direction of change.
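The juice-bottle tests can be sketched in a few lines. The sample mean (498 ml), population standard deviation (5 ml) and sample size (36) below are assumed values for illustration; the text only fixes the claimed mean of 500 ml:

```python
from math import sqrt

# Assumed sample results (illustrative only; the text gives just mu0 = 500):
mu0, xbar, sigma, n = 500, 498, 5, 36

z = (xbar - mu0) / (sigma / sqrt(n))   # z test statistic

two_tailed_reject = abs(z) > 1.96      # H1: mu != 500 (either direction)
left_tailed_reject = z < -1.645        # H1: mu < 500 (underfilling only)
print(z, two_tailed_reject, left_tailed_reject)  # -2.4 True True
```

With these assumed numbers both tests reject H₀, and note the one-tailed test uses a less extreme cutoff (−1.645 vs ±1.96) because all of α sits in one tail.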
P-value Approach
• The p-value (probability value) is a fundamental concept in
statistical hypothesis testing, widely used in research to
determine the significance of results.
• It helps researchers decide whether to reject or fail to reject the
null hypothesis (H0).
• The p-value is the probability of obtaining test results at least
as extreme as the observed results, assuming that the null
hypothesis (H0) is true.
• A small p-value (typically ≤ 0.05) indicates strong evidence
against H0, leading to its rejection.
• A large p-value (> 0.05) suggests weak evidence against H0, meaning
we fail to reject it.
Steps of the P-value Approach
1. State the Hypotheses
You start by defining two hypotheses:
• Null Hypothesis (H₀): This is the assumption that there is no effect or no difference.
It's often a statement of equality (e.g., μ = 50, no change, no difference).
• Alternative Hypothesis (H₁ or Ha): This is what you're trying to prove. It suggests
that there is an effect, a difference, or a relationship (e.g., μ ≠ 50, there is a change).
2. Choose a Significance Level (α)
The significance level (α) is the threshold you use to decide whether to reject H₀.
A common choice is α = 0.05, but it can also be 0.01, 0.10, or another value,
depending on the situation.
3. Calculate the Test Statistic
Use the appropriate statistical test (like a z-test or t-test) based on your
data. This test will calculate a test statistic (like a z-score or t-value)
that measures how far your sample data is from the null hypothesis
value in terms of standard error.

4. Calculate the P-value


The p-value is the probability of getting a result at least as extreme as
the one you observed, assuming the null hypothesis is true.
5. Compare P-value to α and Make a Decision
• If the p-value ≤ α: Reject the null hypothesis. There is sufficient
evidence to support the alternative hypothesis.
• If the p-value > α: Fail to reject the null hypothesis. There isn't
enough evidence to support the alternative hypothesis.
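The five steps can be run end to end with the standard library alone; for a z-test, the normal CDF needed for the p-value follows from the error function. The statistic z = 2.4 is an assumed input for illustration:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = 2.4        # assumed test statistic from step 3
alpha = 0.05   # significance level chosen in step 2

# Step 4: probability of a result at least this extreme under H0
p_two_tailed = 2 * (1 - normal_cdf(abs(z)))   # H1: mu != mu0
p_right = 1 - normal_cdf(z)                   # H1: mu > mu0

# Step 5: compare to alpha
print(round(p_two_tailed, 3), p_two_tailed <= alpha)  # 0.016 True
```

Here p ≈ 0.016 ≤ 0.05, so H₀ would be rejected in the two-tailed case.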
Why Use the P-Value Approach?
• It provides a more exact measure of evidence against H₀.
• You can easily compare results across studies using P-values.
• It’s widely used in scientific literature and statistical software.
General Steps to Calculate a Test Statistic
1. State Your Hypotheses
• Null hypothesis (H0): what you assume to be true (e.g., μ=50)
• Alternative hypothesis (Ha): what you're testing (e.g., μ≠50)
2. Choose the Right Test
• z-test: use when population standard deviation (σ) is known, and the sample
size is large (n≥30)
• t-test: use when population standard deviation is unknown, especially with a
small sample (n<30)
3. Compute the Test Statistic
• Apply the formula for the chosen test, e.g. z = (x̄ − μ₀) / (σ/√n) or t = (x̄ − μ₀) / (s/√n)
4. Interpret the Test Statistic
•Compare it to critical values OR
•Use it to find the p-value
Example
• Suppose you're testing whether a new drug lowers blood pressure
more than an old one.
• H0: New drug is no better than the old one.
• H1: New drug is better.
• You do a test and get a p-value of 0.03.
• If your α=0.05 then 0.03 < 0.05, so you reject H0.
• You conclude there's evidence the new drug is more effective.
Analysis of Variance (ANOVA)
• It is a statistical method used to test whether there are significant
differences between the means of three or more independent (unrelated)
groups.
• ANOVA stands for Analysis of Variance
• It is a powerful statistical tool used in:
• Economics
• Biology
• Education
• Psychology
• Sociology
• Business/Industry
• And many other fields
Why Use ANOVA?
• Useful when more than two sample means are being compared.
• ANOVA allows us to:
• Perform a simultaneous test
• Identify significant differences across multiple groups
Key Purpose of ANOVA
• Determines if the sample means are significantly different
• Helps assess whether samples come from populations with the same
mean
• Answers the question:
“Are all group means equal?”
• Practical Applications
• Comparing crop yields from various seed types
• Evaluating fuel efficiency of different car models
• Studying smoking habits among student groups
• Any case involving more than two populations
• Advantages of ANOVA
• Efficient: Avoids multiple pairwise comparisons
• Cost-effective: Saves time and resources
• Comprehensive: Detects interaction effects that may be missed in pairwise
tests
• Reliable: Provides a global view of group differences
What is ANOVA?
A.N.O.V.A = Analysis of Variance
A statistical method used to compare means of three or more groups.
It tells us whether the differences between group means are statistically
significant.
Who Developed ANOVA?
Sir Ronald A. Fisher
British statistician and geneticist.
First introduced the concept of 'variance'.
Developed the ANOVA technique in the 1920s.
His work revolutionized experimental design and agricultural studies.
“Variance” was a term first coined and applied by Fisher in his
groundbreaking work on statistical methods.
Why Use ANOVA?
ANOVA helps when:
You have multiple groups or treatments.
You want to test differences in group means.
You want to avoid multiple t-tests, which increase the risk of Type I errors.
Example Use Case
Let’s say you're testing the effect of 3 fertilizers on crop yield:
Fertilizer A
Fertilizer B
Fertilizer C
ANOVA helps you answer:
“Do all fertilizers perform the same?”
“Is one significantly better?”
How It Works (Basic Idea)
ANOVA compares two sources of variance:
Between-group variance (differences between group means)
Within-group variance (variation within each group)
If between-group variance is much larger → likely real differences exist
between groups.
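This comparison of variances can be made concrete with a minimal one-way ANOVA sketch. The fertilizer yields below are invented for illustration (the text names the scenario but gives no data):

```python
# Invented crop yields for three fertilizers (illustrative only)
groups = {
    "A": [20, 21, 19],
    "B": [25, 26, 24],
    "C": [30, 29, 31],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = sum(all_values) / len(all_values)

# Between-group sum of squares: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
# Within-group sum of squares: spread of observations around their own group mean
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

df_between = len(groups) - 1               # k - 1
df_within = len(all_values) - len(groups)  # N - k

F = (ss_between / df_between) / (ss_within / df_within)
print(F)  # 75.0 -- between-group variance dominates, so the means likely differ
```

An F this large (compared against an F-table with 2 and 6 degrees of freedom) would lead to rejecting the hypothesis that all three fertilizers perform the same.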
Types of ANOVA
One-Way ANOVA
One independent variable (e.g., fertilizer type)
Two-Way ANOVA
Two independent variables (e.g., fertilizer type & sunlight)
Repeated Measures ANOVA
Same subjects tested under different conditions
THE BASIC PRINCIPLE OF ANOVA
• ANOVA has several types based on the number of independent variables
and how data is collected.
One-Way ANOVA
🔹 Definition
Used when there is one independent variable with two or more groups.
🔹 Example
Testing the effect of fertilizer type (A, B, C) on crop yield.
🔹 Purpose
To check if any group mean is significantly different from the others.
📌 Only one factor is tested.
Two-Way ANOVA
🔹 Definition
Used when there are two independent variables.
🔹 Example
Testing the effect of:
Fertilizer type (A, B, C)
Sunlight exposure (Full sun, Partial shade)
🔹 Purpose
To determine:
Main effect of each variable
Interaction effect between them
📌 Helps see if combinations influence results.
Repeated Measures ANOVA
🔹 Definition
Used when the same subjects are tested under different conditions
or over time.
🔹 Example
Measuring plant growth with the same fertilizer, but at Week 1,
Week 2, Week 3.
🔹 Purpose
To analyze within-subject variability.
📌 Common in time-series or medical experiments.
Nonparametric tests
What is a Statistical Test?
A statistical test is a formal method for deciding the reasonableness of
a hypothesis using sample data.
It involves:
A hypothesis (the main assertion to be tested).
A model (assumptions like population distribution, randomness, etc.).
Traditional tests (parametric) assume specific distributions (e.g.,
normal distribution). But when such assumptions cannot be made,
we use nonparametric or distribution-free tests.
What Makes a Test Nonparametric?
No assumptions about population distribution (like normality).
No use of population parameters (mean, standard deviation, etc.).
Based on order statistics or rank-based techniques.
Useful when data is ordinal, skewed or from unknown distributions.
Important Types of Nonparametric Tests:
(i) One-Sample Tests
Purpose: To test a hypothesis about a single population value.
Example: One-sample Sign Test
(ii) Two or More Sample Tests
Purpose: To test for differences between independent samples.
Examples:
Two-sample Sign Test
Fisher-Irwin Test
Rank Sum Test / Mann-Whitney U Test
(iii) Tests of Association or Dependence
Purpose: To test relationships between variables.
Examples:
Spearman’s Rank Correlation
Kendall’s Coefficient of Concordance
(iv) Tests for Variation (ANOVA Alternatives)
Purpose: Compare variation across groups without assuming normality.
Example: Kruskal-Wallis Test
(v) Tests of Randomness
Purpose: Check if the data sequence is random.
Example: Runs Test (One-sample)
(vi) Tests for Categorical Data
Purpose: Check for independence or association in categorical data.
Example: Chi-Square Test
Also used for goodness-of-fit and comparing theoretical vs. actual
distributions.
Sign Tests
• The sign test is a simple non-parametric test used to evaluate the
median of a continuous, symmetric population. It is not concerned
with the actual values of observations, but only with their signs
(positive or negative) in relation to a hypothesized median (μ₀).
• Types of Sign Tests:
• One-sample sign test
• Two-sample sign test (not discussed in detail here)
• One-Sample Sign Test:
• Purpose: To test a hypothesis about the population median.
• Assumption: The population is continuous and symmetric.
• Null Hypothesis (H₀): The median equals a specific value μ₀.
• Procedure:
• Compare each sample value to μ₀:
• Assign a ‘+’ if the sample value > μ₀
• Assign a ‘–’ if the sample value < μ₀
• Discard any value equal to μ₀
• The number of + and – signs is treated as a binomial distribution with p = 0.5
• Test the null hypothesis using:
• Binomial distribution (for small sample sizes)
• Normal approximation (for large samples)
• This method allows testing without making assumptions about the distribution of the
data beyond symmetry and continuity.
One-Sample Sign Test – Illustration:
Objective: Test if professional golfers’ average score for four rounds is less
than μ₀ = 284.
Data: 11 scores = {280, 282, 290, 273, 283, 283, 275, 284, 282, 279, 281}
Step 1: Sign Assignment:
Compare each score to 284
Replace with ‘+’ if > 284, ‘–’ if < 284, discard if = 284
Result: 1 ‘+’, 9 ‘–’, 1 discarded → n = 10
Step 2: Binomial Test:
Under H₀: p = 0.5, compute probability of getting 1 or fewer + signs out of 10 trials:
P(X ≤ 1) = P(0) + P(1) = 0.001 + 0.010 = 0.011
Since 0.011 < 0.05, reject H₀.
Conclusion: Average score is significantly less than 284.
Alternative (Normal Approximation):
Use normal approximation for binomial:
Standard error: σ = √(pq/n) = 0.1581
Acceptance region lower bound: 0.5 – (1.64)(0.1581) ≈ 0.2407
Observed proportion: 1/10 = 0.1 < 0.2407 → reject H₀
Same conclusion: μ < 284
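The golf-score illustration can be reproduced with the standard library:

```python
from math import comb

# Data and hypothesized median from the illustration
scores = [280, 282, 290, 273, 283, 283, 275, 284, 282, 279, 281]
mu0 = 284

plus = sum(1 for s in scores if s > mu0)    # '+' signs: scores above 284
minus = sum(1 for s in scores if s < mu0)   # '-' signs: scores below 284
n = plus + minus                            # values equal to 284 are discarded

# Under H0 the number of '+' signs is Binomial(n, 0.5); left-tailed p-value:
p_value = sum(comb(n, k) for k in range(plus + 1)) / 2**n
print(plus, minus, n, round(p_value, 3))  # 1 9 10 0.011
```

The exact binomial p-value of about 0.011 matches the figure in the text, and 0.011 < 0.05 reproduces the rejection of H₀.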
Two-Sample Sign Test (Paired Data):
• Used for paired samples (e.g., before-and-after measurements or
matched pairs)
• For each pair (Xᵢ, Yᵢ):
• Assign ‘+’ if Xᵢ > Yᵢ
• Assign ‘–’ if Xᵢ < Yᵢ
• Discard if Xᵢ = Yᵢ
• If sample sizes differ, randomly pair and discard extra values
• Test procedure is the same as in one-sample sign test (binomial or
normal approx.)
• Objective:
• To test if archaeologist X is better than Y at finding artifacts using the sign test
at 1% level of significance.
• Data:
• Artifact counts over 30 days by archaeologists X and Y.
• For each day, compare Xᵢ to Yᵢ:
• Assign ‘+’ if Xᵢ > Yᵢ
• Assign ‘–’ if Xᵢ < Yᵢ
• Assign ‘0’ if Xᵢ = Yᵢ (discard these from analysis)
• Step-by-Step:
• After comparison:
• ‘+’ signs = 20
• ‘–’ signs = 6
• ‘0’ signs = 4
• Effective sample size (n) = 26
• Hypotheses:
• H₀: X and Y are equally good (p = 0.5)
• H₁: X is better than Y (p > 0.5)
• Proportions:
• Observed proportion of ‘+’s (successes):
p̂ = 20/26 ≈ 0.7692
• Standard error under H₀ (p = 0.5):
σ = √(pq/n) = √(0.5 × 0.5 / 26) ≈ 0.0981
• Normal Approximation:
• Use one-tailed test (since H₁ is one-sided)
• Critical z-value for 1% significance (right-tailed): 2.32
• Upper bound of acceptance region:
0.5 + (2.32 × 0.0981) = 0.7276
• Decision:
• Observed p̂ = 0.7692 > 0.7276
• Falls in rejection region, so:
• Reject H₀
• Accept H₁: Archaeologist X is better than Y
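The same normal-approximation decision can be phrased as a z statistic rather than an acceptance bound, which is a common equivalent form:

```python
from math import sqrt

# 20 '+' signs out of n = 26 non-tied days, testing H1: p > 0.5 at the 1% level
plus, n = 20, 26

p_hat = plus / n
se = sqrt(0.5 * 0.5 / n)   # standard error under H0 (p = 0.5)
z = (p_hat - 0.5) / se     # standardized distance of p_hat from 0.5

reject = z > 2.32          # right-tailed critical value used in the text
print(round(p_hat, 4), round(se, 4), round(z, 2), reject)  # 0.7692 0.0981 2.75 True
```

Since z ≈ 2.75 exceeds the critical value, H₀ is rejected, matching the acceptance-region calculation above.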
Fisher-Irwin Test
• Purpose:
• The Fisher-Irwin test is a non-parametric (distribution-free) test used
to determine whether there is a significant difference between two
groups in terms of a binary outcome (e.g., pass/fail).
• Application Example:
• To compare the effectiveness of two training programs (New vs. Old),
12 workers were divided into two groups of 6. After training, they
were tested with results:
Training Type    Passed    Failed    Total
New (A)          5         1         6
Old (B)          3         3         6
Total            8         4         12

Hypotheses:
H₀ (Null Hypothesis): Both training programs are equally effective
H₁ (Alternative Hypothesis): The new training program is better

Test Procedure:
Calculate the probability of getting the observed result or a
more extreme one under the null hypothesis:
P(A does as well or better) = P(5 pass, 1 fail) + P(6 pass, 0 fail)
= 0.24 + 0.03 = 0.27
Alternatively,
P(B does as poorly or worse) = P(3 pass, 3 fail) + P(2 pass, 4 fail)
= 0.24 + 0.03 = 0.27
Decision Rule:
Significance level (α) = 0.05
Since 0.27 > 0.05, the result is not statistically significant
Conclusion: Do not reject H₀ → Both training programs are
equally effective

Notes:
Fisher-Irwin is best used for binary classification (e.g., pass/fail)
Limitation: It ignores the magnitude of results (e.g., actual test
scores), potentially discarding valuable information
For more nuanced data, use Wilcoxon test or other
non-parametric alternatives
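The exact probability used above comes from the hypergeometric distribution and can be reproduced directly:

```python
from math import comb

# 8 passes among 12 workers, 6 workers in each training group;
# observed: 5 passes in the new-training group (A)
total, passes, group_a = 12, 8, 6

def p_group_a_passes(k):
    """P(exactly k of group A's 6 workers are among the 8 who passed)."""
    return comb(passes, k) * comb(total - passes, group_a - k) / comb(total, group_a)

# P(A does as well or better than observed) = P(5 passes) + P(6 passes)
p_value = p_group_a_passes(5) + p_group_a_passes(6)
print(round(p_value, 2))  # 0.27 > 0.05, so H0 is not rejected
```

This reproduces the 0.24 + 0.03 = 0.27 figure in the text (0.2424 + 0.0303 before rounding).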
McNemar Test
• Purpose:
• Non-parametric test for paired nominal data
• Measures change in responses before and after treatment
• Ideal for "Yes/No", "Favour/Not Favour" responses from the same subjects
Example Summary:
Before-after responses from 1000 respondents
Calculated χ² = 32.67
Critical value at 5% significance = 3.84
✅ Result: Since 32.67 > 3.84, reject H₀
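The McNemar statistic itself is simple. The text reports χ² = 32.67 without the underlying cell counts, so the discordant counts below (b and c) are invented purely to show the formula:

```python
# Invented discordant counts: b respondents switched Yes->No, c switched No->Yes.
# Only respondents who changed their answer enter the statistic.
b, c = 60, 20

chi_square = (b - c) ** 2 / (b + c)   # McNemar statistic, 1 degree of freedom

reject = chi_square > 3.84            # 5% critical value for chi-square(1)
print(chi_square, reject)  # 20.0 True
```

Concordant cells (respondents who gave the same answer both times) carry no information about change, which is why they drop out of the formula.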
Wilcoxon Signed Rank Sum Test
• The Wilcoxon matched-pairs test is used in paired data situations,
such as:
• Before-and-after studies (e.g., treatment effects)
• Comparing matched individuals (e.g., husband-wife, machine A vs.
machine B)
• Studies where both direction and magnitude of differences can be
determined.
Steps to Apply the Test
State Hypotheses:
Null (H₀): No difference between the paired observations.
Alternative (H₁): There is a difference between the paired observations.
Calculate Differences:
Compute the difference dᵢ = Xᵢ − Yᵢ for each pair.
Ignore the sign initially when ranking, but keep track of it later.
Drop Zero Differences:
If dᵢ = 0, that pair is excluded from the analysis.
Rank the Absolute Differences:
Assign ranks based on |dᵢ|, ignoring the sign.
If there are ties, assign the average of the tied ranks.
• Assign Signs to Ranks:
• Restore the signs (positive or negative) to the corresponding ranks.
• Calculate Test Statistic T:
• Sum of positive ranks
• Sum of negative ranks
• Take T as the smaller of the two sums
• Decision Rule:
• For n ≤ 25, use critical values table.
• Reject H₀ if the calculated T ≤ the table value.
Illustrative Example Summary
Study: 16 subjects compare quality of Brand A vs. Brand B.
Data: Paired scores given for both brands.
Zero Difference: Pair 8 has no difference → excluded, so n=15.

Calculated T: 18.5
Table Value at 5% (two-tailed) for n = 15: 25
Conclusion:
Since T = 18.5 < 25, we reject the null hypothesis.
There is a significant difference in perceived quality between Brand
A and Brand B.
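The ranking-and-summing procedure can be sketched in plain Python. The paired scores below are invented (the 16-subject data set is not reproduced in the text):

```python
# Invented paired scores for illustration (e.g. Brand A vs Brand B ratings)
before = [72, 68, 75, 70, 81, 66]
after  = [70, 71, 70, 65, 78, 66]

# Steps 1-2: differences, dropping any zero difference
diffs = [b - a for b, a in zip(before, after) if b != a]

# Step 3: rank |d|, averaging ranks across ties
ordered = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
ranks = [0.0] * len(diffs)
i = 0
while i < len(ordered):
    j = i
    while j < len(ordered) and abs(diffs[ordered[j]]) == abs(diffs[ordered[i]]):
        j += 1                     # j runs past the block of tied |d| values
    avg = (i + 1 + j) / 2          # average of rank positions i+1 .. j
    for k in range(i, j):
        ranks[ordered[k]] = avg
    i = j

# Steps 4-5: restore signs; T is the smaller of the two rank sums
t_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
t_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
T = min(t_plus, t_minus)
print(t_plus, t_minus, T)  # 12.5 2.5 2.5
```

T would then be compared against the critical-value table for the effective n (here 5, after dropping the zero-difference pair), just as in the 16-subject example above.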
Mann Whitney U Test
• The Mann-Whitney U Test (also called the Wilcoxon rank-sum test) is
a non-parametric test used in research methodology to compare
differences between two independent groups when the dependent
variable is either ordinal or continuous but not normally distributed.
• When to Use the Mann-Whitney U Test
• You have two independent groups (e.g., Group A vs. Group B).
• Your data is ordinal or non-normally distributed continuous data.
• You want to test whether the distribution of the two groups is the same
(usually comparing medians).
• Examples of Use
• Comparing test scores between two teaching methods when the
scores are not normally distributed.
• Evaluating patient satisfaction scores (ranked 1–5) between two
clinics.
• Comparing reaction times of two different age groups.
• How It Works
1. Combine all scores from both groups and rank them from lowest to
highest.
2. Assign ranks (average in case of ties).
3. Calculate the sum of ranks for each group.
4. Use the ranks to calculate the U statistic for each group.
5. Determine the p-value from the U statistic to test significance.
• Hypotheses
• Null hypothesis (H₀): There is no difference in the distributions (or
medians) of the two groups.
• Alternative hypothesis (H₁): There is a difference in the distributions of the
two groups.
• Advantages
• Doesn’t assume normality.
• Can be used with small sample sizes.
• Works with ordinal data.
• Limitations
• Less powerful than parametric tests (like t-test) when assumptions of those
tests are met.
• Doesn’t tell you the magnitude of difference, just that a difference exists.
Example
• Scenario:
• You want to test if two groups of students (Group A and Group B)
differ in how many hours they study per week.
• Group A: 5, 6, 7
• Group B: 8, 9, 10
Step 1: Combine and Rank All Data
Combined data:
5, 6, 7, 8, 9, 10
Assign ranks:
• 5 → rank 1
• 6 → rank 2
• 7 → rank 3
• 8 → rank 4
• 9 → rank 5
• 10 → rank 6

Step 2: Add Up Ranks for Each Group


• Group A (5, 6, 7) → Ranks: 1 + 2 + 3 = 6
• Group B (8, 9, 10) → Ranks: 4 + 5 + 6 = 15
Step 3: Compute the U Statistics
U₁ = n₁n₂ + n₁(n₁ + 1)/2 − R₁ = 9 + 6 − 6 = 9
U₂ = n₁n₂ + n₂(n₂ + 1)/2 − R₂ = 9 + 6 − 15 = 0
U = min(U₁, U₂) = 0
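The whole example can be run end to end with a short standard-library sketch that converts the rank sums into U statistics:

```python
# Study-hours example from the text
group_a = [5, 6, 7]
group_b = [8, 9, 10]

# Rank the combined data from lowest to highest (no ties here)
combined = sorted(group_a + group_b)
rank = {v: i + 1 for i, v in enumerate(combined)}

r_a = sum(rank[v] for v in group_a)   # 1 + 2 + 3 = 6
r_b = sum(rank[v] for v in group_b)   # 4 + 5 + 6 = 15

# U for each group from its rank sum; the test statistic is the smaller one
n_a, n_b = len(group_a), len(group_b)
u_a = n_a * n_b + n_a * (n_a + 1) / 2 - r_a
u_b = n_a * n_b + n_b * (n_b + 1) / 2 - r_b
U = min(u_a, u_b)

print(r_a, r_b, U)  # 6 15 0.0
```

With samples this small, U = 0 is the most extreme arrangement possible (every Group A value below every Group B value); its exact two-tailed probability under H₀ is 2/C(6,3) = 0.10, so the difference is not significant at the 5% level despite the perfect separation.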
Kruskal-Wallis Test
