###################################################################################################
#### Week 13 Lab: ANCOVA, long to wide, more repeated measures ####################################
###################################################################################################

# We'll start by taking a different approach to one of last week's data sets, dealing with startle.
# Then we'll look at within-subject conditions with more than 2 levels, converting them from long
# to wide as well.

########################################################
#### ANCOVA ############################################
########################################################

rm(list=ls()) # clear all objects from workspace

# read in the data (same as last week):

# What's going on here again?
# Let's imagine the researchers consider the "predictable" shocks to be a good index of baseline
# anxiety in response to electric shocks. There may also be only a moderate correlation between
# anxiety in the predictable and unpredictable conditions.

# What is a different approach we could use to analyze the data?
# An ANCOVA! This is a term that was coined in the ANOVA literature before the GLM became more
# popular in psychology. It is just a fancy term for a model in which we're also controlling for
# a variable (i.e., where we're not allowing a control variable - a covariate - to contribute to
# the effect of the focal predictor[s]).

# What is the main benefit of using ANCOVA over a difference score?
# How does it achieve this benefit?

# What is the correlation between predictable and unpredictable scores?
# Maybe we should check the correlation in the control group

# Let's prep our BG contrasts (copied from last week)
# but let's use varRegressors to get effect sizes

# Let's do an ANCOVA

# What's going on here?
# This is a complex sentence, but it's important all the elements are there.
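To make the comparison concrete, here is a minimal sketch with simulated data. The data file, the column names (`predictable`, `unpredictable`, `condition`), and last week's contrast/varRegressors setup are not shown in this script, so everything below is hypothetical.

```r
# Simulated stand-in for the startle data (column names are assumptions)
set.seed(1)
n <- 40
dat <- data.frame(
  id          = 1:n,
  condition   = rep(c(0, 1), each = n / 2),   # 0 = control, 1 = treatment
  predictable = rnorm(n, mean = 50, sd = 10)  # hypothetical "pre-test" scores
)
dat$unpredictable <- 55 + 0.5 * dat$predictable + 5 * dat$condition + rnorm(n, sd = 8)

# Difference-score approach: forces a weight of exactly 1 on the pre-test
diff_model <- lm(I(unpredictable - predictable) ~ condition, data = dat)

# ANCOVA approach: lets the data estimate the weight on the pre-test,
# which typically soaks up more error variance than the forced weight of 1
ancova_model <- lm(unpredictable ~ predictable + condition, data = dat)
summary(ancova_model)
```

The `condition` coefficient in `ancova_model` is the treatment effect over and above the pre-test, which is the whole point of the ANCOVA framing below.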
# When we see a significant effect in an ANCOVA model, we must indicate that this effect holds
# over and above the pre-test, whatever that may be.

# Say I measure how much sugar someone eats in one day, then randomly assign them to either
# receive an intervention discussing the health risks of sugar or not, then measure their sugar
# intake the following day. If the intervention has an effect, the effect occurs over and above
# the predictive power of the previous day's sugar intake.

# Why is ANCOVA still probably NOT the correct way to analyze the data from this experiment?

##########################################################################
#### LONG TO WIDE ########################################################
##########################################################################

rm(list=ls()) # clear all objects from workspace

# We're going to return to an example we've done before, with a prejudice reduction intervention
# and IAT scores. Participants are assigned to one of two conditions, and their "implicit bias"
# is measured at three timepoints: right after the intervention, 4 weeks later, and 8 weeks later.

# Are these data in long or wide format? How do you know?

?reshape

# We need to give this function a dataframe and some information about what kinds of variables
# we have in the data.
# timevar: the IV/predictor that varies within participants
# idvar: the variable(s) that identify a participant, i.e., that vary only between participants
# direction: are we going long to wide or wide to long?

# What do the IAT columns represent now?

# The only thing that's still unclear is which column is which. R names the columns based on the
# levels of the timevar we give it, which is why it's just numbers here. We could rename the
# values before transforming, or just rename them now.
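The reshape arguments above can be sketched as follows. The real dataset isn't shown in this script, so the data frame and column names (`id`, `intervention`, `week`, `iat`) are made up for illustration.

```r
# Hypothetical long-format IAT data: one row per participant per timepoint
long_dat <- data.frame(
  id           = rep(1:4, each = 3),
  intervention = rep(c(0, 0, 1, 1), each = 3),  # 0 = control, 1 = intervention
  week         = rep(c(0, 4, 8), times = 4),    # within-subject timepoints
  iat          = rnorm(12)
)

# Long to wide: one row per participant, one IAT column per timepoint
wide_dat <- reshape(long_dat,
                    timevar   = "week",                  # varies within participants
                    idvar     = c("id", "intervention"), # constant within a participant
                    direction = "wide")

# R names the new columns iat.0, iat.4, iat.8; rename them for clarity
names(wide_dat) <- c("id", "intervention", "iat_wk0", "iat_wk4", "iat_wk8")
```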
######################################################################################
#### 2 (between) x 3+ (within) designs ###############################################
######################################################################################

# Now we can work with these data to assess the effects of the prejudice reduction intervention
# over time. To justify this analysis, there would have to be a prediction that the effect of the
# intervention is somehow time-varying, maybe growing over time.

# How do we test the main effect of condition?
# remember, intervention is coded 0 (control) and 1 (intervention)

# Conclusion?

# Now we want to see whether the effect of the intervention varies based on time. The problem is
# we can't use "contrasts" with the outcome, since our outcome technically involves 3 different
# levels, so instead we'll use math!

# Let's say we first want to see whether the impact is larger in week 8 than it is during the
# earlier time points overall. How do we do this with math?

# Let's break this down a little:
# *NOTE*: technically, our outcome "contrast" isn't unit-weight. So why is it we can interpret
# the parameter as representing specifically the difference between the groups? Essentially,
# unit weighting matters on the predictor side but not on the outcome side. We can show this
# using math if you're really curious.

# In this example, it makes more sense to do two pairwise comparisons instead: week 8 to week 4
# and week 4 to week 0. Let's do both (and we'll practice doing so without creating new variables)

# conclusion?
# conclusion?
# conclusion?

# How do we get the simple effects of condition at each time point?

# conclusion?

#### CORRECTING FOR MULTIPLE COMPARISONS ####

# We're going to use Holm-Bonferroni.
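The "use math" idea above can be sketched like this, continuing with the hypothetical wide-format column names (`iat_wk0`, `iat_wk4`, `iat_wk8`, `intervention`) since the real data aren't in this script. `I()` inside the formula builds each difference score on the fly, so no new variables are created.

```r
# Simulated wide-format data (names and values are assumptions)
set.seed(2)
wide_dat <- data.frame(
  id           = 1:30,
  intervention = rep(c(0, 1), each = 15),
  iat_wk0 = rnorm(30), iat_wk4 = rnorm(30), iat_wk8 = rnorm(30)
)

# Main effect of condition: model each person's average across timepoints
wide_dat$iat_mean <- rowMeans(wide_dat[, c("iat_wk0", "iat_wk4", "iat_wk8")])
main_model <- lm(iat_mean ~ intervention, data = wide_dat)

# Week 8 vs. the average of the earlier timepoints, built "with math":
# the intervention coefficient is the interaction (does the late-vs-early
# difference itself differ between groups?)
late_vs_early <- lm(I(iat_wk8 - (iat_wk0 + iat_wk4) / 2) ~ intervention, data = wide_dat)

# Pairwise comparisons, without creating new variables
wk8_vs_wk4 <- lm(I(iat_wk8 - iat_wk4) ~ intervention, data = wide_dat)
wk4_vs_wk0 <- lm(I(iat_wk4 - iat_wk0) ~ intervention, data = wide_dat)
```

In each difference model, the intercept is the within-subject change for the control group, and the `intervention` slope tests whether that change differs by condition.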
# Even though we're using a small enough number of tests to do Fisher's, the means of doing it in
# this situation are quite complex and, because of how the F stat is calculated in this specific
# situation, possibly more conservative than doing Holm-Bonferroni.

# Let's do it for our pairwise comparisons, and bear in mind we care about the interaction term.

# make a vector to include our p values

####################################################################
#### GRAPHING ######################################################
####################################################################
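Before graphing, the "make a vector of p values" step in the multiple-comparisons section above can be sketched with base R's `p.adjust`. The p-values here are made up; in the lab they would come from the interaction terms of the two pairwise models.

```r
# Hypothetical interaction p-values from the two pairwise comparisons
p_vals <- c(wk8_vs_wk4 = 0.012, wk4_vs_wk0 = 0.049)

# Holm-Bonferroni: smallest p is multiplied by the number of tests,
# the next by one fewer, and so on (with a monotonicity fix-up)
p_adj <- p.adjust(p_vals, method = "holm")
p_adj
```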