1. Loading and setting up

We use composite variables for cognitive engagement (created from the “Important to you” and “Important to your future goals” items), behavioral engagement (“How well were you concentrating” and “How hard were you working”), and affective engagement (“Did you enjoy” and “Was the main activity interesting”).
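As a rough sketch of how these composites could be computed as item means (the underlying item column names are not shown in this document, so the names below are placeholders; only cognitive_engagement, behavioral_engagement, and affective_engagement appear again later, in the commented-out six-profile solution):

library(dplyr)

# Placeholder item names; only the three composite names are used later on
df <- mutate(df,
             cognitive_engagement  = (important_to_you + important_to_future_goals) / 2,
             behavioral_engagement = (concentrating + hard_working) / 2,
             affective_engagement  = (enjoy + interesting) / 2)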

2. Identifying the number of MEPs

In this section, we identify the number of profiles based on the R-squared values and the cross-validated Fleiss’ Kappa.

# Optional interaction of challenge and good_at (not run here)
# df <- mutate(df, interaction = challenge * good_at)

# Recode NaN values of hard_working as missing
df$hard_working[is.nan(df$hard_working)] <- NA

# Plot R-squared values for solutions with different numbers of profiles
plot_r_squared(df,
               learning,
               hard_working,
               enjoy,
               challenge,
               good_at,
               to_center = TRUE,
               to_scale = TRUE,
               r_squared_table = FALSE)

# Cross-validation for the three-profile solution (not run here)
# x <- cross_validate(df,
#                     learning,
#                     hard_working,
#                     enjoy,
#                     challenge,
#                     good_at,
#                     n_profiles = 3,
#                     k = 30,
#                     to_center = TRUE,
#                     to_scale = TRUE)
# 
# x

3. Creating profiles

# Fit the five-profile solution
p5 <- create_profiles(df,
                      learning,
                      hard_working,
                      enjoy,
                      challenge,
                      good_at,
                      n_profiles = 5,
                      to_center = TRUE,
                      to_scale = TRUE)

plot(p5)

# Optional plot styling with hrbrthemes (not run here)
# p5$ggplot_obj +
#     hrbrthemes::theme_ipsum() +
#     ylab("Z-score") +
#     scale_fill_discrete("") +
#     theme(axis.text.x = element_text(angle = 45, hjust = 1))

# Extract the data with profile assignments
five_p <- p5$.data

# Jittered scatterplot of learning against challenge
ggplot(df, aes(x = challenge, y = learning)) +
    geom_jitter()

# Alternative six-profile solution using the composite engagement variables (not run here)
# p6 <- create_profiles(df,
#                       behavioral_engagement,
#                       cognitive_engagement,
#                       affective_engagement,
#                       n_profiles = 6,
#                       to_center = TRUE,
#                       to_scale = TRUE)
# 
# plot(p6)
# 
# six_p <- p6$.data

4. Modeling

We use the amount of time spent in each profile to predict changes in outcomes (interest, utility value, and competence beliefs).
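As a hedged sketch of how such models could be set up: the proportion of moments each person spent in each profile is computed first (the person_ID and profile column names below are placeholders), and a random-intercept model is then fit with lme4, using the variable names that appear in the output that follows (prof_1 through prof_5, program_ID, and the pre/post outcome measures). If the five proportions sum to one, one profile must be left out of each model as the reference, which matches the tables below (prof_5 is omitted from the interest and competence-beliefs models, prof_3 from the utility-value model).

library(dplyr)
library(tidyr)
library(lme4)

# Proportion of moments each person spent in each profile
# (person_ID and profile are placeholder column names for five_p)
prop_time <- five_p %>%
    count(person_ID, profile) %>%
    group_by(person_ID) %>%
    mutate(prop = n / sum(n)) %>%
    ungroup() %>%
    select(-n) %>%
    pivot_wider(names_from = profile, values_from = prop,
                names_prefix = "prof_", values_fill = 0)

# Random-intercept model for the first outcome: post-program interest
# predicted by pre-program interest and time spent in each profile,
# with a random intercept for program_ID (outcomes_df is a placeholder
# for the joined person-level data)
m_interest <- lmer(overall_post_interest ~ overall_pre_interest +
                       prof_1 + prof_2 + prof_3 + prof_4 +
                       (1 | program_ID),
                   data = outcomes_df)

summary(m_interest)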

##                 rowname overall_post_interest overall_pre_interest prof_1 prof_2 prof_3 prof_4 prof_5
## 1 overall_post_interest
## 2  overall_pre_interest                   .59
## 3                prof_1                  -.19                 -.08
## 4                prof_2                   .09                  .09   -.39
## 5                prof_3                   .33                  .15   -.18   -.14
## 6                prof_4                  -.11                 -.10   -.27   -.27   -.32
## 7                prof_5                  -.14                 -.05   -.06   -.31   -.27   -.26
Outcome: overall_post_interest

                         B     std. Error  p
Fixed Parts
(Intercept)              1.11  0.32        <.001
overall_pre_interest     0.51  0.07        <.001
prof_1                  -0.27  0.40        .493
prof_2                   0.61  0.30        .041
prof_3                   1.31  0.31        <.001
prof_4                   0.38  0.29        .198
Random Parts
N program_ID             9
ICC program_ID           0.063
Observations             142
R² / Ω₀²                 .494 / .494
Outcome: overall_post_utility_value

                            B     std. Error  p
Fixed Parts
(Intercept)                 2.87  0.38        <.001
overall_pre_utility_value   0.36  0.08        <.001
prof_1                     -1.37  0.40        <.001
prof_2                     -0.43  0.33        .202
prof_4                     -0.88  0.31        .004
prof_5                     -1.18  0.33        <.001
Random Parts
N program_ID                9
ICC program_ID              0.034
Observations                140
R² / Ω₀²                    .301 / .300
Outcome: overall_post_competence_beliefs

                                 B     std. Error  p
Fixed Parts
(Intercept)                      1.77  0.31        <.001
overall_pre_competence_beliefs   0.34  0.07        <.001
prof_1                           0.25  0.38        .511
prof_2                           0.58  0.29        .043
prof_3                           1.18  0.31        <.001
prof_4                           0.24  0.28        .398
Random Parts
N program_ID                     9
ICC program_ID                   0.043
Observations                     142
R² / Ω₀²                         .338 / .337