Chapter 4
Hypothesis Testing, Power, and Control: A Review of the Basics

Three levels of hypotheses
– Conceptual hypotheses: state expected relationships among concepts.
– Research hypotheses: concepts are operationalized so that they are measurable.
– Statistical hypotheses: state the expected relationship between or among summary values of populations, called parameters. These are stated as a null hypothesis (H0) and an alternative hypothesis (H1).

Testing the null hypothesis
– Null hypothesis: the hypothesis being statistically tested when you use inferential statistics. The researcher hopes to show that the null is not likely to be true (i.e., hopes to nullify it).
– Alternative hypothesis: the hypothesis the researcher postulated at the outset of the study. If the researcher can show that the null is not supported by the data, then he or she is able to accept the alternative hypothesis.

Steps in testing a research hypothesis:
1. State the null and the alternative.
2. Collect the data and conduct the appropriate statistical analysis.
3. Reject the null and accept the alternative, or fail to reject the null.
4. State your inferential conclusion.

In order to make a conceptual hypothesis into a research hypothesis we need to _____.
A. define the population parameters
B. test the conceptual hypothesis
C. describe the confounding variables
D. operationalize the concepts
E. none of the above

H. G. Wells (1896), The Island of Dr. Moreau, "The Sayer of the Law":
"Not to go on all-fours; that is the Law. Are we not Men?
"Not to suck up Drink; that is the Law. Are we not Men?
"Not to eat Fish or Flesh; that is the Law. Are we not Men?
"Not to claw the Bark of Trees; that is the Law. Are we not Men?
"Not to chase other Men; that is the Law. Are we not Men?"

Sayer of the Law
"Inferences are based on samples taken from the population." "That is the law!" "Are we not researchers?"

Statistical significance
– Statistical difference: the probability of observing a difference this large, if the groups were actually the same, is very low.
– Significance levels (α): alpha is the level of significance chosen by the researcher to evaluate the null hypothesis, commonly 5% or 1%.

Inferential Errors: Type I and Type II
– Type I error: rejecting a true null. Its probability is equal to alpha (α).
– Type II error: failing to reject a false null. Its probability is beta (β).
– Power: our ability to reject false nulls.

Alpha, Beta, and Power
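How alpha, beta, and power trade off can be illustrated with a short calculation. This is a sketch, not part of the original slides: it approximates the power of a two-tailed, two-sample z-test using only the standard library, with the effect size and sample sizes chosen purely for illustration.

```python
# Sketch: alpha, beta, and power for a two-group z-test (stdlib only).
from statistics import NormalDist

def power_two_group_z(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed, two-sample z-test.

    d           -- standardized effect size (Cohen's d), assumed known
    n_per_group -- participants per group
    alpha       -- Type I error rate chosen by the researcher
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)    # critical value for alpha
    expected_z = d * (n_per_group / 2) ** 0.5       # expected z under H1
    beta = NormalDist().cdf(z_crit - expected_z)    # P(Type II error), one-tail approx.
    return 1 - beta                                 # power = 1 - beta

# Power grows with sample size: beta shrinks as n increases.
for n in (16, 32, 64):
    print(n, round(power_two_group_z(0.5, n), 3))
```

Note two things the quiz items below also touch on: with d = 0.5 and 64 participants per group, power is roughly 0.80; and lowering alpha (say to 0.01) moves the critical value outward, which shrinks power rather than raising it.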
[Figure: overlapping sampling distributions under the null and alternative hypotheses. The critical value (critical t = 1.706) divides the α region (under the null distribution) from the β region (under the alternative distribution).]

Inferential Errors: Type I and Type II
                          True State of Affairs
Our decision              Null is true         Null is false
Reject the null           Type I error (α)     Correct inference (1 – β = power)
Fail to reject the null   Correct inference    Type II error (β)

Power and how to increase it
A powerful test of the null is more likely to lead us to reject false nulls than a less powerful test. Powerful tests are more sensitive than less powerful tests to differences between the actual outcome (what you found) and the expected outcome (the null hypothesis). Power, the probability of rejecting a false null, is 1 – β.

Ways to increase power:
– Be careful about how you measure your variables.
– Use more powerful statistical analyses.
– Use designs that provide good control over extraneous variables.
– Restrict your sample to a specific group of individuals.
– Increase your sample size, which reduces variance due to sampling error.
– Maximize the treatment manipulation.

Statistical power is _____.
A. a measure of experimental strength
B. equal to β
C. the probability of rejecting a true null hypothesis
D. the probability of rejecting a false null hypothesis

Which of the following will NOT increase statistical power?
A. Use a more reliable measure
B. Use a more restrictive sample (e.g., use only women 18–20 years of age)
C. Decrease alpha
D. Increase sample size
E. All of the above

Effect size
– Effect size: a measure of the strength of the relationship between/among variables.
– Effect size helps us determine whether differences are not only statistically significant but also important.
– Powerful tests should be considered tests that detect large effects.

Ways to calculate effect size:
– Cohen's d: use with t-tests.
– Coefficient of determination (r²): use with correlations.
– Eta-squared (η²): use with ANOVAs.
– Cramér's V: use with chi-square analyses.

Power and the role of replication in research
Power increases when we replicate findings in a new study with different participants in a different setting.

When calculating effect size for a correlation, what should you do?
A) Use Cohen's d
B) Square the correlation
C) Use eta-squared
D) Use Cramér's V

External and internal validity
– External validity: when the findings of a study can be generalized to other populations and settings.
– Internal validity: refers to the validity of the measures within the study. The internal validity of an experiment is directly related to the researcher's control of extraneous variables.

Confounding and extraneous variables
– Extraneous variable: a variable that may affect the outcome of a study but was not manipulated by the researcher.
– Confounding variable: a variable that is systematically related to the independent and dependent variable.
– Spurious effect: an outcome that was influenced not by the independent variable itself but rather by a variable that was confounded with the independent variable.
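Two of the effect-size measures listed earlier, Cohen's d and the coefficient of determination (r²), can be computed by hand. The sketch below uses only the standard library; the data values are hypothetical, invented purely for illustration.

```python
# Sketch: Cohen's d (for t-tests) and r-squared (for correlations).
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

def r_squared(x, y):
    """Coefficient of determination: the squared Pearson correlation."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / (sxx * syy) ** 0.5
    return r ** 2

# Hypothetical scores for two conditions.
treatment = [5, 7, 6, 8, 7]
control = [4, 5, 5, 6, 4]
print(round(cohens_d(treatment, control), 2))
```

Note how `r_squared` answers the correlation quiz item: the effect size for a correlation is simply the correlation squared.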
Confounding and extraneous variables
– Controlled variable: a variable that the researcher takes into account when designing the research study or experiment.
– Nuisance variables: variables that contribute variance to our dependent measures and cloud the results.

Controlling extraneous variables
– Elimination: get rid of the extraneous variables completely (e.g., by conducting research in a lab).
– Constancy: keep the various parts of the experiment constant (e.g., instructions, measuring instruments, questions).
– Secondary variable as an IV: make variables other than the primary IV secondary variables to study (e.g., gender).
– Randomization (random assignment of participants to groups): randomly assigning participants to each of the treatment conditions so that we can assume the groups are initially equivalent.
– Repeated measures: use the same participants in all conditions.
– Statistical control: treat the extraneous variable as a covariate and use statistical procedures to remove it from the analysis.

The difference between an extraneous variable and a confounding variable is
A) an extraneous variable is systematically related to the independent and dependent variable.
B) a confounding variable is systematically related to the independent and dependent variable.
C) an extraneous variable can become a controlled variable.
D) a confounding variable can become a controlled variable.

Which of the following is NOT one of the reasons that it is important to control extraneous variables?
A) So we can better estimate the influence of the independent variable on the dependent variable.
B) So we can increase internal validity.
C) So we can increase external validity.
D) So we can increase power.
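The randomization strategy described above, randomly assigning participants to conditions so the groups can be assumed initially equivalent, can be sketched in a few lines. The participant IDs and condition names here are hypothetical, chosen only for illustration.

```python
# Sketch: random assignment of participants to treatment conditions.
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle the participant pool, then deal participants round-robin
    into conditions, yielding groups of (nearly) equal size whose
    extraneous-variable differences are due to chance alone."""
    rng = random.Random(seed)           # seed only for reproducibility
    pool = list(participants)
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = random_assignment(range(1, 21), ["treatment", "control"], seed=42)
print({c: len(members) for c, members in groups.items()})
```

Because assignment is random, any pre-existing participant differences are spread across conditions by chance rather than systematically, which is what lets us treat the groups as initially equivalent.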