Two independent samples t-test
- When you want to compare the mean score on a continuous variable for two different groups of participants (two populations, one variable)
- Will tell you whether there is a statistically significant difference in the mean scores of the two groups (e.g. whether males and females differ significantly in terms of their perception levels)
Step one: formulate the null and alternative hypotheses
- Same mean preference (H0) vs. different mean preference (H1)
Step two: test the equality of variances
- Equal variances (H0) vs. variances are different (H1)
- First, test the equality of variances: under "equal variances assumed", look at the sig. value and either reject or accept H0 of equal variances
- Then look at sig. (2-tailed) and either reject or accept H0 of equal means (see the t-test sketch after these notes)
One-way analysis of variance (ANOVA)
- Used when you have one independent variable with three or more levels (groups) and one dependent continuous variable
- Will determine whether there is a significant difference in the mean scores on the dependent variable across the groups
- Dependent variable (DV) = metric (Y); independent variable (IV) = categorical (X)
- All groups have the same effect/influence (null hypothesis); not all groups have the same effect/influence (alternative hypothesis)
- Look at the sig. value and determine whether to reject or accept the null hypothesis
- To determine the strength of the effect, divide the between-groups sum of squares by the total sum of squares (see the ANOVA sketch after these notes)
Correlation and regression
Correlation
- Relationship between two metric variables
- Used to describe the strength and direction of the linear relationship between two variables
- There is no linear association (null) vs. there is a linear association (alternative)
- Look at sig. (2-tailed) to determine whether there is an association or not
- Then look at the Pearson correlation to determine the strength of the association (0-1 in absolute value; the sign gives the direction) (see the correlation sketch after these notes)
Linear (multiple) regression
A statistical procedure for analysing associative relationships between a metric dependent variable and one or more metric independent variables. It tells you how much of the variance in your dependent variable can be explained by your independent variables. It also gives you an indication of the relative contribution of each independent variable. Form a null and alternative hypothesis for each variable.
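The notes above describe reading SPSS output; as a minimal sketch of the same two-step t-test workflow, assuming Python with scipy and hypothetical score arrays for the two groups:

# Two independent samples t-test preceded by a test of equal variances.
# Hypothetical perception scores for two groups; scipy stands in for SPSS here.
import numpy as np
from scipy import stats

males = np.array([3.1, 2.8, 3.5, 3.0, 2.9, 3.3])
females = np.array([3.6, 3.9, 3.4, 3.8, 3.7, 3.5])

# Step two in the notes: test equality of variances (H0: equal variances).
lev_stat, lev_p = stats.levene(males, females)
equal_var = lev_p > 0.05  # retain H0 of equal variances if sig. > .05

# Then the t-test itself (H0: equal means); sig. (2-tailed) is the p-value.
t_stat, p_two_tailed = stats.ttest_ind(males, females, equal_var=equal_var)
print(f"Levene sig. = {lev_p:.3f}, t = {t_stat:.2f}, sig. (2-tailed) = {p_two_tailed:.3f}")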
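A similar sketch for the one-way ANOVA, including the between-groups/total ratio (eta squared) used as the strength measure; the three groups and their scores are hypothetical:

# One-way ANOVA across three groups, plus eta squared = SS_between / SS_total.
import numpy as np
from scipy import stats

g1 = np.array([4.2, 3.9, 4.5, 4.1])
g2 = np.array([3.1, 3.4, 2.9, 3.2])
g3 = np.array([4.8, 5.0, 4.6, 4.9])

f_stat, p_value = stats.f_oneway(g1, g2, g3)  # H0: all group means are equal

all_scores = np.concatenate([g1, g2, g3])
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (g1, g2, g3))
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total  # strength: between-groups SS over total SS

print(f"F = {f_stat:.2f}, sig. = {p_value:.3f}, eta squared = {eta_squared:.2f}")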
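And a correlation sketch, again with hypothetical data: the Pearson test returns both the sig. (2-tailed) value and the coefficient whose absolute value gives the strength of the association:

# Pearson correlation between two metric variables (H0: no linear association).
import numpy as np
from scipy import stats

ad_spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sales = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])

r, p_two_tailed = stats.pearsonr(ad_spend, sales)
print(f"r = {r:.2f} (|r| = strength, sign = direction), sig. (2-tailed) = {p_two_tailed:.3f}")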
- First, check whether the model is valid (F test) by looking at the regression sig. value
- Explain the coefficients by looking at the sig. value of each one and determine whether to accept or reject its null hypothesis
- Look at the fit of the model via R square (0-1) (see the regression sketch at the end of this section)
Causal Research
What is causal research (causal inference)?
Most interesting questions in marketing (and life) are causal questions:
- Does a higher price lead to more profit?
- Did the new ad campaign work (i.e., increase revenue/profit/brand equity)?
- Is smoking bad for your health?
But there is a lot of confusion around causality:
- People use the term loosely (and confuse causality with empirical association)
- Establishing causality is complicated
Two variables: X and Y
Definition of causal inference: whether a change in one marketing variable (X) produces a change in another variable (Y)
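As a minimal sketch of the regression steps above (model F test, coefficient sig. values, R square), assuming Python with statsmodels and simulated data rather than the SPSS output the notes describe:

# Linear (multiple) regression: one metric DV regressed on two metric IVs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
price = rng.uniform(1, 10, size=50)    # hypothetical IV 1
ad_spend = rng.uniform(0, 5, size=50)  # hypothetical IV 2
sales = 2.0 - 0.3 * price + 1.1 * ad_spend + rng.normal(0, 0.5, size=50)  # DV

X = sm.add_constant(np.column_stack([price, ad_spend]))  # intercept + IVs
model = sm.OLS(sales, X).fit()

print(f"Model F-test sig. = {model.f_pvalue:.3f}")   # is the model valid overall?
print(f"Coefficient sig. values: {model.pvalues}")   # reject/accept H0 per variable
print(f"R square = {model.rsquared:.2f}")            # fit of the model (0-1)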