# Significance and Probability



## Significance and Probability

- **P-value** – the probability of getting your observed value, or a value more extreme, if the null hypothesis is true.
- The significance threshold is typically set at 0.05.
- A small p (less than or equal to 0.05) is said to be significant.
- A "significant" finding indicates strong evidence that our results were not due to chance, so we reject the null hypothesis in favor of the alternative hypothesis.

## Acceptance/Rejection Regions and Critical Values

- **Test statistic** – the sample statistic used to either reject H₀ (and conclude H₁) or fail to reject H₀.
- **Critical values** – the values of the test statistic that separate the rejection and non-rejection regions.
- **Rejection region** – the set of test-statistic values that leads to rejection of H₀.
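The p-value definition above can be sketched in code. A minimal example, assuming a standard-normal (z) test statistic so the two-sided p-value is P(|Z| ≥ |z|), computable with the standard library's `math.erfc`; the statistic value 2.3 is a hypothetical input, not from the notes:

```python
import math

ALPHA = 0.05  # conventional significance threshold from the notes


def p_value_two_sided(z):
    """Two-sided p-value for a standard-normal test statistic z:
    the probability of a value at least as extreme as |z| if H0 is true."""
    return math.erfc(abs(z) / math.sqrt(2))


z = 2.3                      # hypothetical observed test statistic
p = p_value_two_sided(z)
print(round(p, 4))           # small p
print(p <= ALPHA)            # True -> "significant", reject H0
```

With z = 2.3 the p-value is about 0.021, below the 0.05 threshold, so the finding would be called significant.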
- **Non-rejection region** – the set of values not in the rejection region; these lead to non-rejection of H₀.

## Drawing Conclusions

- A small p (less than or equal to 0.05) falls in the rejection region and is therefore said to be significant.
- A "significant" finding indicates strong evidence that our results were not due to chance, so we reject the null hypothesis in favor of the alternative hypothesis.
- *Note: we can never "prove" the alternative; we can only gather evidence against the null.*

## Type I and Type II Error

**Hypothesis testing.** Remember that the first step in hypothesis testing is to develop our hypotheses, the null and the alternative:

- **Null hypothesis (H₀)** – states that there is no significant difference between the specified samples; any difference observed is due to chance.
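The relationship between the significance threshold and the critical values can be shown numerically. A sketch, assuming a two-sided z-test: the critical value is the point where the two-sided tail probability equals α, which we can recover by bisection (the function names are illustrative, not from the notes):

```python
import math


def p_two_sided(z):
    # Two-sided tail probability P(|Z| >= z) for a standard normal Z.
    return math.erfc(abs(z) / math.sqrt(2))


def critical_value(alpha, lo=0.0, hi=10.0):
    """Bisection: find z_crit such that P(|Z| >= z_crit) == alpha.
    Values of |z| at or beyond z_crit form the rejection region."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if p_two_sided(mid) > alpha:
            lo = mid          # tail too big -> move outward
        else:
            hi = mid          # tail too small -> move inward
    return (lo + hi) / 2


z_crit = critical_value(0.05)
print(round(z_crit, 2))               # the familiar ~1.96
print(abs(2.3) >= z_crit)             # True  -> in rejection region
print(abs(1.2) >= z_crit)            # False -> in non-rejection region
```

This makes the equivalence explicit: p ≤ 0.05 exactly when the test statistic lands in the rejection region beyond the critical values.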
Alternative Hypothesis – ( H 1 )contrary to the null. Assumes that sample observations are influenced by some non-random cause. The differences observed are not by chance. Assess Data and Draw conclusions We test the null because we always assume that there will be no difference. We then look at the probability of getting our results. If the probability is very small (significant) we can assume that the null is NOT true. Therefore we reject the null in favor of the alternative. BUT – what if our conclusion is wrong? Types of error Type I – occurs when we incorrectly reject a true null hypothesis. This leads one to conclude that a supposed effect or relationship exists when it doesn't. A false positive . Type II – occurs when we fail to reject a false null hypothesis. This leads one to conclude that there was not an effect or relationship, when there was. A false negative . Detecting errors It is impossible to know for sure when an error occurs, but there are ways researchers can control the likelihood of making an error. Statistical considerations that are used to determine the needed sample size for the study: Power Effect Size Rejection criterion ( p -value) Probability of making a Type II error Power – the ability a test has of correctly rejecting the null hypothesis, avoiding a type II error.