AI warmth and competence

Authors

Pascal König and Sveinung Arnesen

Abstract

How does the increasing adoption of AI systems in government bear on citizens' affective ties to government authorities? This paper addresses this question by drawing on social cognition theory and conceptualizing AI systems as a social presence that can engender emotional responses. It tests the argument that AI use leads to a transfer of the low perceived warmth of AI systems to government decision-makers. It probes the mechanisms behind this relationship by examining how it is conditioned by the perceived social distance of AI systems and by the power asymmetries to which citizens are exposed in a decision-making context. The analysis also examines whether prior trust and transparency about AI influence on decisions attenuate the presumed transference. Based on data from a survey containing a vignette experiment, the study finds that XYZ.

Keywords: decision-making; legitimacy; artificial intelligence; warmth; trust

Data Analysis

R Session Info and Load libraries

Code
# Set Global Options

knitr::opts_chunk$set(
  echo = TRUE,      # Show code in output
  warning = FALSE,  # Hide warnings
  message = FALSE,  # Hide messages
  fig.width = 7,    # Set figure width
  fig.height = 5,   # Set figure height
  fig.align = "center" # Align figures in the center
)

sessionInfo()
R version 4.3.1 (2023-06-16 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 11 x64 (build 22631)

Matrix products: default


locale:
[1] LC_COLLATE=English_United States.utf8 
[2] LC_CTYPE=English_United States.utf8   
[3] LC_MONETARY=English_United States.utf8
[4] LC_NUMERIC=C                          
[5] LC_TIME=English_United States.utf8    

time zone: Europe/Oslo
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
 [1] htmlwidgets_1.6.4 compiler_4.3.1    fastmap_1.2.0     cli_3.6.2        
 [5] tools_4.3.1       htmltools_0.5.8.1 rstudioapi_0.17.1 yaml_2.3.8       
 [9] rmarkdown_2.29    knitr_1.49        xfun_0.49         digest_0.6.34    
[13] jsonlite_1.8.9    rlang_1.1.3       evaluate_1.0.1   
Code
setwd("C:/Users/svein/OneDrive - NORCE/Prosjekter/fAIrgov/AI and dissatisfaction")

library(kableExtra)       # tidy tables
library(tidyverse)        # ggplot, dplyr, and friends
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.1
✔ ggplot2   3.5.1     ✔ tibble    3.2.1
✔ lubridate 1.9.4     ✔ tidyr     1.3.1
✔ purrr     1.0.2     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter()     masks stats::filter()
✖ dplyr::group_rows() masks kableExtra::group_rows()
✖ dplyr::lag()        masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
Code
library(ggplot2)
library(haven)            # Read SPSS and Stata data files
library(broom)            # Convert model objects to tidy data frames
library(cregg)            # Automatically calculate frequentist conjoint AMCEs and MMs
library(survey)           # Analyze complex survey designs
Loading required package: grid
Loading required package: Matrix

Attaching package: 'Matrix'

The following objects are masked from 'package:tidyr':

    expand, pack, unpack

Loading required package: survival

Attaching package: 'survey'

The following object is masked from 'package:graphics':

    dotchart
Code
library(scales)           # Nicer labeling functions

Attaching package: 'scales'

The following object is masked from 'package:purrr':

    discard

The following object is masked from 'package:readr':

    col_factor
Code
library(marginaleffects)  # Calculate marginal effects
library(broom.helpers)    # Add empty reference categories to tidy model data frames
library(ggforce)          # For facet_col()
library(patchwork)        # Combine ggplot plots

Set Up Theme and Functions

Code
# Inspired by Andrew Heiss https://www.andrewheiss.com/blog/2023/07/25/conjoint-bayesian-frequentist-guide/#marginal-means

library(ggplot2)
library(scales)  
library(tidyverse)

# Define theme function 
theme_nice <- function() {
  theme_minimal(base_family = "Jost") +
    theme(panel.grid.minor = element_blank(),
          plot.title = element_text(family = "Jost", face = "bold"),
          axis.title = element_text(family = "Jost Medium"),
          axis.title.x = element_text(hjust = 0),
          axis.title.y = element_text(hjust = 1),
          strip.text = element_text(family = "Jost", face = "bold",
                                    size = rel(0.75), hjust = 0),
          strip.background = element_rect(fill = "grey90", color = NA))
}

# Set the default theme *after* defining it
theme_set(theme_nice())

# Update default font settings
update_geom_defaults("text", list(family = "Jost", fontface = "plain"))
update_geom_defaults("label", list(family = "Jost", fontface = "plain"))


# Colors for heterogeneous effects
parties <- c("#1696d2", "#db2b27")


# Functions for formatting things as percentage points
label_pp <- label_number(accuracy = 1, scale = 100, 
                         suffix = " pp.", style_negative = "minus")

label_amce <- label_number(accuracy = 0.1, scale = 100, suffix = " pp.", 
                           style_negative = "minus", style_positive = "plus")

Load and Rename Data

Code
library(sjlabelled)
# NCP data
df_ncp <- read_spss("Norwegian Citizen Panel - round 31 - v-100-O.sav")

 df_ncp <- df_ncp %>% 
   select(responseid, r31_dmaik, r31_dmaip, r31_dmaiw, starts_with("r31_eds"),  starts_with("r31_edt")) %>% 
   rename(admexp_goal = r31_edsbix_ran1, 
          admexp_gender = r31_edsbix_ran2,
          admexp_age = r31_edsbix_ran3,
          admexp_ai = r31_edsbix_ran4,
          admpost_id = r31_edsbix,
          admpost_accept = r31_edsbax,
          admpost_friendly = r31_edsbdx_1,
          admpost_intent = r31_edsbdx_2,
          admpost_comp = r31_edsbdx_3,
          admpost_intel = r31_edsbdx_4, 
          teachexp_goal = r31_edteix_ran1,
          teachexp_gender = r31_edteix_ran2, 
          teachexp_age = r31_edteix_ran3, 
          teachexp_ai = r31_edteix_ran4, 
          teachpost_id = r31_edteix, 
          teachpost_accept = r31_edteax,
          teachpost_friendly = r31_edtedx_1,
          teachpost_intent = r31_edtedx_2,
          teachpost_comp = r31_edtedx_3,
          teachpost_intel = r31_edtedx_4) %>% 
     mutate(
    admexp_goal = as.factor(admexp_goal),
    admexp_gender = as.factor(admexp_gender),
    admexp_age = as.factor(admexp_age),
    admexp_ai = as.factor(admexp_ai),
    admpost_id = as.numeric(admpost_id),
    admpost_id = 6 - admpost_id, #reverse scale
    admpost_accept = as.numeric(admpost_accept),
    admpost_accept = 6 - admpost_accept, #reverse scale
    admpost_friendly = as.numeric(admpost_friendly),
    admpost_friendly = 6 - admpost_friendly, #reverse scale
    admpost_intent = as.numeric(admpost_intent),
    admpost_intent = 6 - admpost_intent, #reverse scale
    admpost_comp = as.numeric(admpost_comp),
    admpost_comp = 6 - admpost_comp, #reverse scale
    admpost_intel = as.numeric(admpost_intel), 
    admpost_intel = 6 - admpost_intel, #reverse scale
    teachexp_goal = as.factor(teachexp_goal),
    teachexp_gender = as.factor(teachexp_gender),
    teachexp_age = as.factor(teachexp_age),
    teachexp_ai = as.factor(teachexp_ai),
    teachpost_id = as.numeric(teachpost_id),
    teachpost_id = 6 - teachpost_id, #reverse scale
    teachpost_accept = as.numeric(teachpost_accept),
    teachpost_accept = 6 - teachpost_accept, #reverse scale
    teachpost_friendly = as.numeric(teachpost_friendly),
    teachpost_friendly = 6 - teachpost_friendly, #reverse scale
    teachpost_intent = as.numeric(teachpost_intent),
    teachpost_intent = 6 - teachpost_intent, #reverse scale
    teachpost_comp = as.numeric(teachpost_comp),
    teachpost_comp = 6 - teachpost_comp, #reverse scale
    teachpost_intel = as.numeric(teachpost_intel), 
    teachpost_intel = 6 - teachpost_intel #reverse scale
    
  )

head(df_ncp)
       responseid r31_dmaik r31_dmaip r31_dmaiw admexp_goal admexp_gender
1 425243588157031        98        98        98           2             1
2 425243593078031         4         2         3        <NA>          <NA>
3 425243589581031        98        98        98        <NA>          <NA>
4 425243586014031        98        98        98        <NA>          <NA>
5 425243590421031        98        98        98           1             1
6 425243587771031        98        98        98        <NA>          <NA>
  admexp_age admexp_ai admpost_id admpost_accept admpost_friendly
1          3         1          4              5                3
2       <NA>      <NA>         -1             -1               -1
3       <NA>      <NA>         -1             -1               -1
4       <NA>      <NA>         -1             -1               -1
5          1         2          4              4                4
6       <NA>      <NA>         -1             -1               -1
  admpost_intent admpost_comp admpost_intel teachexp_goal teachexp_gender
1              5            4             4          <NA>            <NA>
2             -1           -1            -1          <NA>            <NA>
3             -1           -1            -1             2               2
4             -1           -1            -1          <NA>            <NA>
5              4            4             4          <NA>            <NA>
6             -1           -1            -1          <NA>            <NA>
  teachexp_age teachexp_ai teachpost_id teachpost_accept teachpost_friendly
1         <NA>        <NA>           -1               -1                 -1
2         <NA>        <NA>           -1               -1                 -1
3            2           2            4                4                  4
4         <NA>        <NA>           -1               -1                 -1
5         <NA>        <NA>           -1               -1                 -1
6         <NA>        <NA>           -1               -1                 -1
  teachpost_intent teachpost_comp teachpost_intel
1               -1             -1              -1
2               -1             -1              -1
3                4              4               4
4               -1             -1              -1
5               -1             -1              -1
6               -1             -1              -1
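
As an aside, the variable-by-variable recoding above can be written more compactly with across(). The sketch below is not part of the original pipeline; it assumes all post-treatment items share the same 1-5 response scale, and df_ncp_raw is a hypothetical name for the data after the rename() step but before the recoding.

Code
# Sketch only (not run): a compact equivalent of the long mutate() above.
# df_ncp_raw is a hypothetical object holding the data after rename(),
# before any type conversion or reverse-coding.
df_ncp_compact <- df_ncp_raw %>%
  mutate(
    # experimental attributes become factors
    across(c(starts_with("admexp_"), starts_with("teachexp_")), as.factor),
    # post-treatment items: convert to numeric and reverse the 1-5 scale
    across(c(starts_with("admpost_"), starts_with("teachpost_")),
           ~ 6 - as.numeric(.x))
  )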
Code
# Make a little lookup table for nicer feature labels
variable_lookup <- tribble(
  ~variable,    ~variable_nice,
  "responseid", "Respondent-ID",
  "r_31dmaik",  " ",
  "r_31dmaip", " ",
  "r31_dmaiw", " ",
  "admexp_goal", "The goal of the public servant's task",
  "admexp_gender",     "The public servant's gender",
  "admexp_age", "The public servant's age",
  "admexp_ai", "Whether the public servant used AI or not",
    "teachexp_goal", "The goal of the teacher's task",
  "teachexp_gender",     "The teacher's gender",
  "teachexp_age", "The teacher's age",
  "teachexp_ai", "Whether the teacher used AI or not",
  
) %>%
  mutate(variable_nice = fct_inorder(variable_nice))

Label and Reorder

Code
df_ncp <- df_ncp %>%
  mutate(
    admexp_goal = case_when(
      admexp_goal == 1 ~ "Maximize learning outcomes",
      admexp_goal == 2 ~ "Equalize learning outcomes",
      TRUE ~ as.character(admexp_goal) # Keeps other values
    ) %>% factor() %>% lvls_reorder(c(1, 2)),

    admexp_gender = case_when(
      admexp_gender == 1 ~ "Female",
      admexp_gender == 2 ~ "Male",
      TRUE ~ as.character(admexp_gender)
    ) %>% factor() %>% lvls_reorder(c(1, 2)),

    admexp_age = case_when(
      admexp_age == 1 ~ "20s",
      admexp_age == 2 ~ "40s",
      admexp_age == 3 ~ "60s",
      TRUE ~ as.character(admexp_age)
    ) %>% factor() %>% lvls_reorder(c(1, 2, 3)),

    admexp_ai = case_when(
      admexp_ai == 1 ~ "No AI involved",
      admexp_ai == 2 ~ "AI involved",
      TRUE ~ as.character(admexp_ai)
    ) %>% factor() %>% lvls_reorder(c(1, 2)),

    teachexp_goal = case_when(
      teachexp_goal == 1 ~ "Maximize learning outcomes",
      teachexp_goal == 2 ~ "Equalize learning outcomes",
      TRUE ~ as.character(teachexp_goal)
    ) %>% factor() %>% lvls_reorder(c(1, 2)),

    teachexp_gender = case_when(
      teachexp_gender == 1 ~ "Female",
      teachexp_gender == 2 ~ "Male",
      TRUE ~ as.character(teachexp_gender)
    ) %>% factor() %>% lvls_reorder(c(1, 2)),

    teachexp_age = case_when(
      teachexp_age == 1 ~ "20s",
      teachexp_age == 2 ~ "40s",
      teachexp_age == 3 ~ "60s",
      TRUE ~ as.character(teachexp_age)
    ) %>% factor() %>% lvls_reorder(c(1, 2, 3)),

    teachexp_ai = case_when(
      teachexp_ai == 1 ~ "No AI involved",
      teachexp_ai == 2 ~ "AI involved",
      TRUE ~ as.character(teachexp_ai)
    ) %>% factor() %>% lvls_reorder(c(1, 2))
  )




Study 1: Public servant experiment

Treatment effect on identity

Code
library(marginaleffects)
library(ggforce)

lm_admpost_id <- df_ncp %>%
   filter(!is.na(admexp_ai) & admpost_id %in% 1:5) %>% 
  lm(admpost_id ~ admexp_goal + admexp_gender + admexp_age + admexp_ai, data = .)

mm_admpost_id <- marginal_means(
  lm_admpost_id,
  newdata = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai"),
  wts = "cells"
)
mm_admpost_id
           term                      value estimate  std.error statistic
1   admexp_goal Equalize learning outcomes 2.672087 0.03127894  85.42768
2   admexp_goal Maximize learning outcomes 2.681095 0.03093158  86.67825
3 admexp_gender                     Female 2.728319 0.03095897  88.12691
4 admexp_gender                       Male 2.623986 0.03125070  83.96565
5    admexp_age                        20s 2.640268 0.03812832  69.24692
6    admexp_age                        40s 2.657784 0.03897456  68.19279
7    admexp_age                        60s 2.728553 0.03723921  73.27098
8     admexp_ai                AI involved 2.429553 0.03050347  79.64842
9     admexp_ai             No AI involved 2.944186 0.03174107  92.75637
  p.value s.value conf.low conf.high
1       0     Inf 2.610781  2.733392
2       0     Inf 2.620471  2.741720
3       0     Inf 2.667640  2.788997
4       0     Inf 2.562735  2.685236
5       0     Inf 2.565538  2.714999
6       0     Inf 2.581395  2.734173
7       0     Inf 2.655566  2.801541
8       0     Inf 2.369768  2.489339
9       0     Inf 2.881975  3.006397
Code
plot_mm_admpost_id <- mm_admpost_id %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot1 <- ggplot(
  plot_mm_admpost_id,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
  scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
   guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot1

Code
ggsave(filename = 'admpost_id.png', plot=last_plot(), dpi=300)

# library(marginaleffects)
# library(ggforce)
# 
# lm_admpost_id <- df_ncp %>%
#    filter(admpost_id %in% 1:5) %>% 
#   lm(admpost_id ~ admexp_goal + admexp_gender + admexp_age + admexp_ai, data = .) 
# 
# mm_admpost_id <- marginal_means(
#   lm_admpost_id,
#   newdata = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai"),
#   wts = "cells"
# )
# 
# mm_admpost_id <- marginaleffects(
#   lm_admpost_id,
#   variables = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai")
# )
# 
# mm_admpost_id
# 
# plot_mm_admpost_id <- mm_admpost_id %>% 
#   as_tibble() %>% 
#   mutate(value = fct_inorder(value)) %>%
#   left_join(variable_lookup, by = join_by(term == variable)) %>% 
#   mutate(across(c(value, variable_nice), ~fct_inorder(.)))
# 
# plot1 <- ggplot(
#   plot_mm_admpost_id,
#   aes(x = estimate, y = value, color = variable_nice)
# ) +
#   geom_vline(xintercept = 0.5) +
#   geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
#    guides(color = "none") +
#   labs(
#    # title = " ",
#     #subtitle = " ",
#     x = NULL,
#     y = NULL,
#     color = "Feature"  ) +
#   facet_col(facets = "variable_nice", scales = "free_y", space = "free")
# 
# plot1
# 
# ggsave(filename = 'admpost_id.png', plot=last_plot(), dpi=300)

Treatment effect on acceptance

Code
library(marginaleffects)
library(ggforce)

lm_admpost_accept <- df_ncp %>%
  filter(!is.na(admexp_ai) & admpost_accept %in% 1:5) %>% 
  lm(admpost_accept ~ admexp_goal + admexp_gender + admexp_age + admexp_ai, data = .)

mm_admpost_accept <- marginal_means(
  lm_admpost_accept,
  newdata = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai"),
  wts = "cells"
)
mm_admpost_accept
           term                      value estimate  std.error statistic
1   admexp_goal Equalize learning outcomes 3.147738 0.02946169 106.84174
2   admexp_goal Maximize learning outcomes 3.141704 0.02903587 108.20077
3 admexp_gender                     Female 3.206148 0.02915374 109.97383
4 admexp_gender                       Male 3.082418 0.02934002 105.05848
5    admexp_age                        20s 3.151432 0.03581128  88.00111
6    admexp_age                        40s 3.154066 0.03661954  86.13067
7    admexp_age                        60s 3.129581 0.03507722  89.21975
8     admexp_ai                AI involved 2.817708 0.02856575  98.63940
9     admexp_ai             No AI involved 3.504780 0.02997823 116.91084
  p.value s.value conf.low conf.high
1       0     Inf 3.089994  3.205482
2       0     Inf 3.084795  3.198613
3       0     Inf 3.149008  3.263289
4       0     Inf 3.024912  3.139923
5       0     Inf 3.081244  3.221621
6       0     Inf 3.082293  3.225839
7       0     Inf 3.060831  3.198331
8       0     Inf 2.761720  2.873696
9       0     Inf 3.446024  3.563536
Code
plot_mm_admpost_accept <- mm_admpost_accept %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot1 <- ggplot(
  plot_mm_admpost_accept,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot1

Code
ggsave(filename = 'admpost_accept.png', plot=last_plot(), dpi=300)

Treatment effect on perceived friendliness

Code
library(marginaleffects)
library(ggforce)

lm_admpost_friendly <- df_ncp %>%
  filter(!is.na(admexp_ai) & admpost_friendly %in% 1:5) %>% 
  lm(admpost_friendly ~ admexp_goal + admexp_gender + admexp_age + admexp_ai, data = .)

mm_admpost_friendly <- marginal_means(
  lm_admpost_friendly,
  newdata = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai"),
  wts = "cells"
)
mm_admpost_friendly
           term                      value estimate  std.error statistic
1   admexp_goal Equalize learning outcomes 3.100604 0.02909904 106.55347
2   admexp_goal Maximize learning outcomes 3.118164 0.02866960 108.76203
3 admexp_gender                     Female 3.181908 0.02876811 110.60537
4 admexp_gender                       Male 3.035964 0.02899711 104.69883
5    admexp_age                        20s 3.133627 0.03515590  89.13516
6    admexp_age                        40s 3.094427 0.03609571  85.72839
7    admexp_age                        60s 3.099855 0.03490059  88.81956
8     admexp_ai                AI involved 2.861033 0.02811234 101.77142
9     admexp_ai             No AI involved 3.387198 0.02971840 113.97648
  p.value s.value conf.low conf.high
1       0     Inf 3.043571  3.157637
2       0     Inf 3.061973  3.174355
3       0     Inf 3.125523  3.238292
4       0     Inf 2.979131  3.092797
5       0     Inf 3.064723  3.202531
6       0     Inf 3.023681  3.165174
7       0     Inf 3.031451  3.168259
8       0     Inf 2.805934  2.916132
9       0     Inf 3.328951  3.445445
Code
plot_mm_admpost_friendly <- mm_admpost_friendly %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_admpost_friendly <- ggplot(
  plot_mm_admpost_friendly,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_admpost_friendly

Code
ggsave(filename = 'admpost_friendly.png', plot=last_plot(), dpi=300)

Treatment effect on perceived good intentions

Code
library(marginaleffects)
library(ggforce)

lm_admpost_intent <- df_ncp %>%
  filter(!is.na(admexp_ai) & admpost_intent %in% 1:5) %>% 
  lm(admpost_intent ~ admexp_goal + admexp_gender + admexp_age + admexp_ai, data = .)

mm_admpost_intent <- marginal_means(
  lm_admpost_intent,
  newdata = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai"),
  wts = "cells"
)
mm_admpost_intent
           term                      value estimate  std.error statistic
1   admexp_goal Equalize learning outcomes 3.709924 0.02641177  140.4648
2   admexp_goal Maximize learning outcomes 3.737523 0.02599348  143.7870
3 admexp_gender                     Female 3.789963 0.02606586  145.3995
4 admexp_gender                       Male 3.656546 0.02633649  138.8395
5    admexp_age                        20s 3.760563 0.03208847  117.1936
6    admexp_age                        40s 3.702190 0.03266881  113.3249
7    admexp_age                        60s 3.708844 0.03153804  117.5991
8     admexp_ai                AI involved 3.543907 0.02559447  138.4638
9     admexp_ai             No AI involved 3.922091 0.02685092  146.0691
  p.value s.value conf.low conf.high
1       0     Inf 3.658158  3.761690
2       0     Inf 3.686577  3.788469
3       0     Inf 3.738875  3.841051
4       0     Inf 3.604928  3.708165
5       0     Inf 3.697671  3.823456
6       0     Inf 3.638160  3.766219
7       0     Inf 3.647030  3.770657
8       0     Inf 3.493743  3.594071
9       0     Inf 3.869464  3.974718
Code
plot_mm_admpost_intent <- mm_admpost_intent %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_admpost_intent <- ggplot(
  plot_mm_admpost_intent,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_admpost_intent

Code
ggsave(filename = 'admpost_intent.png', plot=last_plot(), dpi=300)

Treatment effect on perceived competence

Code
library(marginaleffects)
library(ggforce)

lm_admpost_comp <- df_ncp %>%
  filter(!is.na(admexp_ai) & admpost_comp %in% 1:5) %>% 
  lm(admpost_comp ~ admexp_goal + admexp_gender + admexp_age + admexp_ai, data = .)

mm_admpost_comp <- marginal_means(
  lm_admpost_comp,
  newdata = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai"),
  wts = "cells"
)
mm_admpost_comp
           term                      value estimate  std.error statistic
1   admexp_goal Equalize learning outcomes 3.202138 0.02999805 106.74487
2   admexp_goal Maximize learning outcomes 3.197170 0.02955614 108.17278
3 admexp_gender                     Female 3.291667 0.02961207 111.15963
4 admexp_gender                       Male 3.105518 0.02993992 103.72500
5    admexp_age                        20s 3.115607 0.03658033  85.17164
6    admexp_age                        40s 3.230428 0.03698337  87.34813
7    admexp_age                        60s 3.251389 0.03586200  90.66391
8     admexp_ai                AI involved 2.912727 0.02901378 100.39116
9     admexp_ai             No AI involved 3.518706 0.03059867 114.99538
  p.value s.value conf.low conf.high
1       0     Inf 3.143343  3.260933
2       0     Inf 3.139241  3.255099
3       0     Inf 3.233628  3.349705
4       0     Inf 3.046837  3.164199
5       0     Inf 3.043911  3.187303
6       0     Inf 3.157942  3.302914
7       0     Inf 3.181101  3.321677
8       0     Inf 2.855861  2.969593
9       0     Inf 3.458733  3.578678
Code
plot_mm_admpost_comp <- mm_admpost_comp %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_admpost_comp <- ggplot(
  plot_mm_admpost_comp,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_admpost_comp

Code
ggsave(filename = 'admpost_comp.png', plot=last_plot(), dpi=300)

Treatment effect on perceived intelligence

Code
library(marginaleffects)
library(ggforce)

lm_admpost_intel <- df_ncp %>%
  filter(!is.na(admexp_ai) & admpost_intel %in% 1:5) %>% 
  lm(admpost_intel ~ admexp_goal + admexp_gender + admexp_age + admexp_ai, data = .)

mm_admpost_intel <- marginal_means(
  lm_admpost_intel,
  newdata = c("admexp_goal", "admexp_gender", "admexp_age", "admexp_ai"),
  wts = "cells"
)
mm_admpost_intel
           term                      value estimate  std.error statistic
1   admexp_goal Equalize learning outcomes 3.184158 0.02889193 110.20928
2   admexp_goal Maximize learning outcomes 3.207329 0.02851335 112.48518
3 admexp_gender                     Female 3.300193 0.02852708 115.68633
4 admexp_gender                       Male 3.089021 0.02887764 106.96931
5    admexp_age                        20s 3.222222 0.03510825  91.77964
6    admexp_age                        40s 3.204236 0.03571388  89.71963
7    admexp_age                        60s 3.162393 0.03465521  91.25303
8     admexp_ai                AI involved 2.994475 0.02786264 107.47277
9     admexp_ai             No AI involved 3.423517 0.02961935 115.58381
  p.value s.value conf.low conf.high
1       0     Inf 3.127531  3.240786
2       0     Inf 3.151444  3.263214
3       0     Inf 3.244281  3.356105
4       0     Inf 3.032422  3.145620
5       0     Inf 3.153411  3.291033
6       0     Inf 3.134238  3.274234
7       0     Inf 3.094470  3.230316
8       0     Inf 2.939865  3.049085
9       0     Inf 3.365464  3.481570
Code
plot_mm_admpost_intel <- mm_admpost_intel %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_admpost_intel <- ggplot(
  plot_mm_admpost_intel,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
 scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_admpost_intel

Code
ggsave(filename = 'admpost_intel.png', plot=last_plot(), dpi=300)

Study 2: Teacher experiment

Treatment effect on identity

Code
library(marginaleffects)
library(ggforce)

lm_teachpost_id <- df_ncp %>%
   filter(!is.na(teachexp_ai) & teachpost_id %in% 1:5) %>% 
  lm(teachpost_id ~ teachexp_goal + teachexp_gender + teachexp_age + teachexp_ai, data = .)

mm_teachpost_id <- marginal_means(
  lm_teachpost_id,
  newdata = c("teachexp_goal", "teachexp_gender", "teachexp_age", "teachexp_ai"),
  wts = "cells"
)
mm_teachpost_id
             term                      value estimate  std.error statistic
1   teachexp_goal Equalize learning outcomes 2.909091 0.03038866  95.72950
2   teachexp_goal Maximize learning outcomes 2.999120 0.03062825  97.92006
3 teachexp_gender                     Female 2.949521 0.03046790  96.80751
4 teachexp_gender                       Male 2.958005 0.03054779  96.83204
5    teachexp_age                        20s 2.948886 0.03738869  78.87107
6    teachexp_age                        40s 2.966376 0.03644558  81.39193
7    teachexp_age                        60s 2.944904 0.03832958  76.83110
8     teachexp_ai                AI involved 2.365833 0.03033617  77.98718
9     teachexp_ai             No AI involved 3.555163 0.03068227 115.87027
  p.value s.value conf.low conf.high
1       0     Inf 2.849530  2.968652
2       0     Inf 2.939090  3.059151
3       0     Inf 2.889805  3.009237
4       0     Inf 2.898133  3.017878
5       0     Inf 2.875605  3.022166
6       0     Inf 2.894944  3.037808
7       0     Inf 2.869779  3.020028
8       0     Inf 2.306375  2.425290
9       0     Inf 3.495027  3.615299
Code
plot_mm_teachpost_id <- mm_teachpost_id %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_teachpost_id <- ggplot(
  plot_mm_teachpost_id,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
  scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
   guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_teachpost_id

Code
ggsave(filename = 'teachpost_id.png', plot=last_plot(), dpi=300)

Treatment effect on acceptance

Code
library(marginaleffects)
library(ggforce)

lm_teachpost_accept <- df_ncp %>%
  filter(!is.na(teachexp_ai) &  teachpost_accept %in% 1:5) %>% 
  lm(teachpost_accept ~ teachexp_goal + teachexp_gender + teachexp_age + teachexp_ai, data = .)

mm_teachpost_accept <- marginal_means(
  lm_teachpost_accept,
  newdata = c("teachexp_goal", "teachexp_gender", "teachexp_age", "teachexp_ai"),
  wts = "cells"
)
mm_teachpost_accept
             term                      value estimate  std.error statistic
1   teachexp_goal Equalize learning outcomes 3.470228 0.02736676  126.8045
2   teachexp_goal Maximize learning outcomes 3.469388 0.02754830  125.9383
3 teachexp_gender                     Female 3.496035 0.02745102  127.3554
4 teachexp_gender                       Male 3.443563 0.02746301  125.3891
5    teachexp_age                        20s 3.442536 0.03361309  102.4165
6    teachexp_age                        40s 3.488693 0.03277934  106.4297
7    teachexp_age                        60s 3.477654 0.03456201  100.6207
8     teachexp_ai                AI involved 2.666377 0.02725955   97.8144
9     teachexp_ai             No AI involved 4.296959 0.02765894  155.3552
  p.value s.value conf.low conf.high
1       0     Inf 3.416590  3.523866
2       0     Inf 3.415394  3.523381
3       0     Inf 3.442232  3.549838
4       0     Inf 3.389736  3.497389
5       0     Inf 3.376656  3.508417
6       0     Inf 3.424447  3.552940
7       0     Inf 3.409913  3.545394
8       0     Inf 2.612949  2.719805
9       0     Inf 4.242748  4.351169
Code
plot_mm_teachpost_accept <- mm_teachpost_accept %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_teachpost_accept <- ggplot(
  plot_mm_teachpost_accept,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_teachpost_accept

Code
ggsave(filename = 'teachpost_accept.png', plot=last_plot(), dpi=300)

Treatment effect on perceived friendliness

Code
library(marginaleffects)
library(ggforce)

lm_teachpost_friendly <- df_ncp %>%
  filter(!is.na(teachexp_ai) &  teachpost_friendly %in% 1:5) %>% 
  lm(teachpost_friendly ~ teachexp_goal + teachexp_gender + teachexp_age + teachexp_ai, data = .)

mm_teachpost_friendly <- marginal_means(
  lm_teachpost_friendly,
  newdata = c("teachexp_goal", "teachexp_gender", "teachexp_age", "teachexp_ai"),
  wts = "cells"
)
mm_teachpost_friendly
             term                      value estimate  std.error statistic
1   teachexp_goal Equalize learning outcomes 3.500911 0.02732130  128.1385
2   teachexp_goal Maximize learning outcomes 3.490792 0.02747183  127.0680
3 teachexp_gender                     Female 3.516423 0.02734621  128.5890
4 teachexp_gender                       Male 3.475184 0.02744656  126.6163
5    teachexp_age                        20s 3.455296 0.03357649  102.9082
6    teachexp_age                        40s 3.474376 0.03281756  105.8694
7    teachexp_age                        60s 3.561782 0.03431608  103.7934
8     teachexp_ai                AI involved 3.004537 0.02727167  110.1706
9     teachexp_ai             No AI involved 3.996303 0.02752256  145.2010
  p.value s.value conf.low conf.high
1       0     Inf 3.447362  3.554460
2       0     Inf 3.436948  3.544636
3       0     Inf 3.462826  3.570021
4       0     Inf 3.421390  3.528978
5       0     Inf 3.389487  3.521104
6       0     Inf 3.410055  3.538697
7       0     Inf 3.494523  3.629040
8       0     Inf 2.951086  3.057989
9       0     Inf 3.942360  4.050246
Code
plot_mm_teachpost_friendly <- mm_teachpost_friendly %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_teachpost_friendly <- ggplot(
  plot_mm_teachpost_friendly,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
 scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_teachpost_friendly

Code
ggsave(filename = 'teachpost_friendly.png', plot=last_plot(), dpi=300)

Treatment effect on perceived good intentions

Code
library(marginaleffects)
library(ggforce)

lm_teachpost_intent <- df_ncp %>%
  filter(!is.na(teachexp_ai) &  teachpost_intent %in% 1:5) %>% 
  lm(teachpost_intent ~ teachexp_goal + teachexp_gender + teachexp_age + teachexp_ai, data = .)

mm_teachpost_intent <- marginal_means(
  lm_teachpost_intent,
  newdata = c("teachexp_goal", "teachexp_gender", "teachexp_age", "teachexp_ai"),
  wts = "cells"
)
mm_teachpost_intent
             term                      value estimate  std.error statistic
1   teachexp_goal Equalize learning outcomes 3.967972 0.02463879  161.0457
2   teachexp_goal Maximize learning outcomes 3.929793 0.02478255  158.5710
3 teachexp_gender                     Female 3.957371 0.02461690  160.7583
4 teachexp_gender                       Male 3.940487 0.02480486  158.8595
5    teachexp_age                        20s 3.930108 0.03028420  129.7742
6    teachexp_age                        40s 3.952625 0.02955820  133.7235
7    teachexp_age                        60s 3.964789 0.03100070  127.8935
8     teachexp_ai                AI involved 3.533688 0.02459507  143.6747
9     teachexp_ai             No AI involved 4.372177 0.02482726  176.1039
  p.value s.value conf.low conf.high
1       0     Inf 3.919680  4.016263
2       0     Inf 3.881220  3.978366
3       0     Inf 3.909123  4.005619
4       0     Inf 3.891870  3.989104
5       0     Inf 3.870752  3.989463
6       0     Inf 3.894692  4.010558
7       0     Inf 3.904028  4.025549
8       0     Inf 3.485482  3.581893
9       0     Inf 4.323517  4.420838
Code
plot_mm_teachpost_intent <- mm_teachpost_intent %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_teachpost_intent <- ggplot(
  plot_mm_teachpost_intent,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_teachpost_intent

Code
ggsave(filename = 'teachpost_intent.png', plot=last_plot(), dpi=300)

Treatment effect on perceived competence

Code
library(marginaleffects)
library(ggforce)

lm_teachpost_comp <- df_ncp %>%
  filter(!is.na(teachexp_ai) &  teachpost_comp %in% 1:5) %>% 
  lm(teachpost_comp ~ teachexp_goal + teachexp_gender + teachexp_age + teachexp_ai, data = .)

mm_teachpost_comp <- marginal_means(
  lm_teachpost_comp,
  newdata = c("teachexp_goal", "teachexp_gender", "teachexp_age", "teachexp_ai"),
  wts = "cells"
)
mm_teachpost_comp
             term                      value estimate  std.error statistic
1   teachexp_goal Equalize learning outcomes 3.475336 0.02752634  126.2550
2   teachexp_goal Maximize learning outcomes 3.509572 0.02775124  126.4654
3 teachexp_gender                     Female 3.502698 0.02756344  127.0777
4 teachexp_gender                       Male 3.481818 0.02771338  125.6367
5    teachexp_age                        20s 3.461224 0.03390332  102.0910
6    teachexp_age                        40s 3.510336 0.03303812  106.2511
7    teachexp_age                        60s 3.504979 0.03466640  101.1059
8     teachexp_ai                AI involved 2.934588 0.02751400  106.6580
9     teachexp_ai             No AI involved 4.060219 0.02776391  146.2409
  p.value s.value conf.low conf.high
1       0     Inf 3.421386  3.529287
2       0     Inf 3.455180  3.563963
3       0     Inf 3.448674  3.556721
4       0     Inf 3.427501  3.536135
5       0     Inf 3.394775  3.527674
6       0     Inf 3.445582  3.575089
7       0     Inf 3.437034  3.572924
8       0     Inf 2.880661  2.988514
9       0     Inf 4.005803  4.114635
Code
plot_mm_teachpost_comp <- mm_teachpost_comp %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_teachpost_comp <- ggplot(
  plot_mm_teachpost_comp,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
 scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_teachpost_comp

Code
ggsave(filename = 'teachpost_comp.png', plot=last_plot(), dpi=300)

Treatment effect on perceived intelligence

Code
library(marginaleffects)
library(ggforce)

lm_teachpost_intel <- df_ncp %>%
  filter(!is.na(teachexp_ai) &  teachpost_intel %in% 1:5) %>% 
  lm(teachpost_intel ~ teachexp_goal + teachexp_gender + teachexp_age + teachexp_ai, data = .)

mm_teachpost_intel <- marginal_means(
  lm_teachpost_intel,
  newdata = c("teachexp_goal", "teachexp_gender", "teachexp_age", "teachexp_ai"),
  wts = "cells"
)
mm_teachpost_intel
             term                      value estimate  std.error statistic
1   teachexp_goal Equalize learning outcomes 3.465517 0.02778613 124.72111
2   teachexp_goal Maximize learning outcomes 3.486636 0.02800295 124.50958
3 teachexp_gender                     Female 3.507273 0.02781138 126.10926
4 teachexp_gender                       Male 3.444342 0.02797718 123.11256
5    teachexp_age                        20s 3.469613 0.03428070 101.21187
6    teachexp_age                        40s 3.486911 0.03337109 104.48899
7    teachexp_age                        60s 3.470672 0.03488833  99.47946
8     teachexp_ai                AI involved 3.062218 0.02769830 110.55618
9     teachexp_ai             No AI involved 3.901670 0.02809374 138.88040
  p.value s.value conf.low conf.high
1       0     Inf 3.411057  3.519977
2       0     Inf 3.431751  3.541521
3       0     Inf 3.452763  3.561782
4       0     Inf 3.389508  3.499176
5       0     Inf 3.402424  3.536802
6       0     Inf 3.421505  3.552317
7       0     Inf 3.402293  3.539052
8       0     Inf 3.007931  3.116506
9       0     Inf 3.846607  3.956732
Code
plot_mm_teachpost_intel <- mm_teachpost_intel %>% 
  as_tibble() %>% 
  mutate(value = fct_inorder(value)) %>%
  left_join(variable_lookup, by = join_by(term == variable)) %>% 
  mutate(across(c(value, variable_nice), ~fct_inorder(.)))

plot_mm_teachpost_intel <- ggplot(
  plot_mm_teachpost_intel,
  aes(x = estimate, y = value, color = variable_nice)
) +
  geom_vline(xintercept = 0.5) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high)) +
scale_x_continuous(limits = c(1, 5),
                     breaks = c(1:5)) +
  guides(color = "none") +
  labs(
   # title = " ",
    #subtitle = " ",
    x = NULL,
    y = NULL,
    color = "Feature"  ) +
  facet_col(facets = "variable_nice", scales = "free_y", space = "free")

plot_mm_teachpost_intel

Code
ggsave(filename = 'teachpost_intel.png', plot=last_plot(), dpi=300)

Testing hypotheses

H1

H1a: AI use, compared with a human decision-maker deciding without AI, leads to lower perceived warmth of the decision-maker. H1b: AI use, compared with a human decision-maker deciding without AI, leads to higher perceived competence of the decision-maker.

Code
library(tidyverse)
library(ggplot2)
library(patchwork)

# H1b: competence and intelligence
# Plot with AI treatment only

# Combine all marginal means into a single tidy dataframe
H1_mm_combined_ai <- bind_rows(
  mm_admpost_comp %>% mutate(variable = "Competence", group = "Adm"),
  mm_admpost_intel %>% mutate(variable = "Intelligence", group = "Adm"),
  mm_teachpost_comp %>% mutate(variable = "Competence", group = "Teach"),
  mm_teachpost_intel %>% mutate(variable = "Intelligence", group = "Teach")
)

# Extract AI treatment values from the "value" column
H1_mm_combined_ai <- H1_mm_combined_ai %>%
  mutate(
    treatment = case_when(
      value == "No AI involved" ~ "No AI",
      value == "AI involved" ~ "AI Involved",
      TRUE ~ value  # Keeps other variables unchanged
    ),
    variable = factor(variable, levels = c("Competence", "Intelligence"))
  ) %>%
  filter(treatment %in% c("No AI", "AI Involved"))  # Keep only AI-related observations

plot_H1A_ai <- ggplot(H1_mm_combined_ai, aes(x = estimate, y = treatment, color = treatment, shape = group)) +
  geom_vline(xintercept = 0.5, linetype = "dashed", color = "gray") +  # Reference line
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high), position = position_dodge(width = 0.5)) +
  scale_x_continuous(limits = c(1, 6), breaks = c(1:5)) +  # Match original scale
  scale_color_manual(values = c("No AI" = "#E69F00", "AI Involved" = "#56B4E9")) +  # Differentiate AI conditions
  labs(
    title = "Hypothesis 1A",
    x = " ",
    y = " ",
    color = "AI Treatment",
    shape = "Group"
  ) +
  facet_wrap(~ variable) +  # Separate Competence & Intelligence
  theme_minimal()

# Display the plot
#plot_H1A_ai


# H1a: warmth (friendliness and good intentions)
# Plot with AI treatment only

# Combine all marginal means into a single tidy dataframe
H1_mm_combined_ai <- bind_rows(
  mm_admpost_friendly %>% mutate(variable = "Friendliness", group = "Adm"),
  mm_admpost_intent %>% mutate(variable = "Good intentions", group = "Adm"),
  mm_teachpost_friendly %>% mutate(variable = "Friendliness", group = "Teach"),
  mm_teachpost_intent %>% mutate(variable = "Good intentions", group = "Teach")
)

# Extract AI treatment values from the "value" column
H1_mm_combined_ai <- H1_mm_combined_ai %>%
  mutate(
    treatment = case_when(
      value == "No AI involved" ~ "No AI",
      value == "AI involved" ~ "AI Involved",
      TRUE ~ value  # Keeps other variables unchanged
    ),
    variable = factor(variable, levels = c("Friendliness", "Good intentions"))
  ) %>%
  filter(treatment %in% c("No AI", "AI Involved"))  # Keep only AI-related observations

plot_H1B_ai <- ggplot(H1_mm_combined_ai, aes(x = estimate, y = treatment, color = treatment, shape = group)) +
  geom_vline(xintercept = 0.5, linetype = "dashed", color = "gray") +  # Reference line
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high), position = position_dodge(width = 0.5)) +
  scale_x_continuous(limits = c(1, 6), breaks = c(1:5)) +  # Match original scale
  scale_color_manual(values = c("No AI" = "#E69F00", "AI Involved" = "#56B4E9")) +  # Differentiate AI conditions
  labs(
    title = "Hypothesis 1B",
    x = "Marginal Mean Estimate",
    y = " ",
    color = "AI Treatment",
    shape = "Group"
  ) +
  facet_wrap(~ variable) +  # Separate Friendliness and Good intentions
  theme_minimal()

# plot_H1B_ai

# Display the combined H1A and H1B plot
plot_H1 <- plot_H1A_ai / plot_H1B_ai

plot_H1

H2

H2a: The effect of AI use on the acceptance of the decision-maker using the AI system is negatively mediated through warmth perceptions. H2b: The effect of AI use on the acceptance of the decision-maker using the AI system is positively mediated through competence perceptions.

School administrator context

Code
library(mediation)

# Step 1: Model the effect of AI on warmth perceptions (Friendliness & Intentions)
lm_friendliness <- lm(admpost_friendly ~ admexp_ai, data = df_ncp)
lm_intentions <- lm(admpost_intent ~ admexp_ai, data = df_ncp)

# Step 2: Model the effect of warmth perceptions on acceptance
lm_acceptance <- lm(admpost_accept ~ admexp_ai + admpost_friendly + admpost_intent, data = df_ncp)

# Step 3: Mediation Analysis (Friendliness)
med_model_friendliness <- mediate(lm_friendliness, lm_acceptance, treat = "admexp_ai", mediator = "admpost_friendly", boot = TRUE, sims = 1000)

# Step 4: Mediation Analysis (Intentions)
med_model_intentions <- mediate(lm_intentions, lm_acceptance, treat = "admexp_ai", mediator = "admpost_intent", boot = TRUE, sims = 1000)

# Print mediation results
summary(med_model_friendliness)

Causal Mediation Analysis 

Nonparametric Bootstrap Confidence Intervals with the Percentile Method

               Estimate 95% CI Lower 95% CI Upper p-value    
ACME             0.0555       0.0351         0.08  <2e-16 ***
ADE              0.4457       0.3654         0.52  <2e-16 ***
Total Effect     0.5012       0.4136         0.58  <2e-16 ***
Prop. Mediated   0.1107       0.0701         0.16  <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Sample Size Used: 2267 


Simulations: 1000 
Code
summary(med_model_intentions)

Causal Mediation Analysis 

Nonparametric Bootstrap Confidence Intervals with the Percentile Method

               Estimate 95% CI Lower 95% CI Upper p-value    
ACME             0.1042       0.0665         0.14  <2e-16 ***
ADE              0.4457       0.3570         0.53  <2e-16 ***
Total Effect     0.5499       0.4571         0.64  <2e-16 ***
Prop. Mediated   0.1895       0.1241         0.26  <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Sample Size Used: 2267 


Simulations: 1000 
Code
# Extract results into a dataframe
mediation_results <- tibble(
  mediator = c("Friendliness", "Intentions"),
  indirect_effect = c(med_model_friendliness$d0, med_model_intentions$d0),
  direct_effect = c(med_model_friendliness$z0, med_model_intentions$z0),
  total_effect = c(med_model_friendliness$tau.coef, med_model_intentions$tau.coef),
  lower_CI = c(med_model_friendliness$d0.ci[1], med_model_intentions$d0.ci[1]),
  upper_CI = c(med_model_friendliness$d0.ci[2], med_model_intentions$d0.ci[2])
)

# Visualizing using ggplot
library(ggplot2)

ggplot(mediation_results, aes(x = mediator)) +
  geom_col(aes(y = indirect_effect, fill = "Indirect Effect"), position = "dodge", width = 0.6) +
  geom_col(aes(y = direct_effect, fill = "Direct Effect"), position = "dodge", width = 0.6) +
  geom_errorbar(aes(ymin = lower_CI, ymax = upper_CI), width = 0.2, position = position_dodge(width = 0.6)) +
  scale_fill_manual(values = c("Indirect Effect" = "#56B4E9", "Direct Effect" = "#E69F00")) +
  labs(
    title = "Mediation Analysis: Direct & Indirect Effects",
    x = "Mediator",
    y = "Effect Size",
    fill = "Effect Type"
  ) +
  theme_minimal()

Code
library(DiagrammeR)

DiagrammeR("
graph TD;
    A[AI Use] -->|Effect on Friendliness| B[Friendliness];
    A -->|Effect on Intentions| C[Intentions];
    B -->|Mediated Effect| D[Acceptance];
    C -->|Mediated Effect| D;
    A -->|Direct Effect| D;
    style A fill:#FFDDC1,stroke:#E69F00;
    style B fill:#D0E1F9,stroke:#56B4E9;
    style C fill:#D0E1F9,stroke:#56B4E9;
    style D fill:#A0D995,stroke:#009E73;
")
Code
summary(lm_friendliness)  # AI → Friendliness

Call:
lm(formula = admpost_friendly ~ admexp_ai, data = df_ncp)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.9615 -0.5888  0.4112  1.0385  2.4112 

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)    
(Intercept)              2.58879    0.03829  67.603  < 2e-16 ***
admexp_aiNo AI involved  0.37268    0.05523   6.748 1.89e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.314 on 2265 degrees of freedom
  (9248 observations deleted due to missingness)
Multiple R-squared:  0.01971,   Adjusted R-squared:  0.01928 
F-statistic: 45.54 on 1 and 2265 DF,  p-value: 1.893e-11
Code
summary(lm_intentions)    # AI → Good Intentions

Call:
lm(formula = admpost_intent ~ admexp_ai, data = df_ncp)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.6486 -0.3602  0.3514  0.6398  1.6398 

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)    
(Intercept)              3.36024    0.03559  94.402  < 2e-16 ***
admexp_aiNo AI involved  0.28839    0.05133   5.618 2.17e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.221 on 2265 degrees of freedom
  (9248 observations deleted due to missingness)
Multiple R-squared:  0.01374,   Adjusted R-squared:  0.01331 
F-statistic: 31.56 on 1 and 2265 DF,  p-value: 2.17e-08

A mediation model examines whether the effect of an independent variable (AI use) on a dependent variable (acceptance of the decision-maker) operates through an intermediate variable (warmth perceptions). In this case, two warmth-related mediators are tested separately: friendliness and good intentions.

The results for good intentions show that AI use significantly affects acceptance both directly and indirectly through warmth perceptions. Note that the estimates are expressed for the no-AI condition relative to the AI condition, so positive values mean higher ratings when no AI is involved. The average causal mediation effect (ACME) of 0.1042 indicates that AI use influences acceptance indirectly through perceptions of good intentions, and the average direct effect (ADE) of 0.4457 indicates that AI also has a strong effect on acceptance that is not explained by warmth.

The total effect of AI use on acceptance is 0.5499, and approximately 19% of this effect is mediated through perceptions of good intentions. This suggests that while warmth plays a role in shaping acceptance, most of the effect (about 81%) is direct rather than mediated by perceived good intentions.
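
As a quick arithmetic check (a sketch using only the point estimates printed above), the proportion mediated is simply the ACME divided by the total effect:

Code
# Proportion mediated = ACME / total effect (point estimates from the output above)
0.0555 / 0.5012   # friendliness:    ~0.11
0.1042 / 0.5499   # good intentions: ~0.19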

Compared with friendliness (about 11% mediated), good intentions appear to be the stronger mediator, suggesting that perceptions of the decision-maker's intentions are more influential than perceived friendliness in shaping acceptance of AI-assisted decisions.

Given that AI use lowers warmth perceptions, the mediation analysis shows that this effect carries over indirectly to reduce acceptance of the public servant's use of AI.

Teacher context

Code
library(mediation)

# Step 1: Model the effect of AI on warmth perceptions (Friendliness & Intentions)
lm_friendliness <- lm(teachpost_friendly ~ teachexp_ai, data = df_ncp)
lm_intentions <- lm(teachpost_intent ~ teachexp_ai, data = df_ncp)

# Step 2: Model the effect of warmth perceptions on acceptance
lm_acceptance <- lm(teachpost_accept ~ teachexp_ai + teachpost_friendly + teachpost_intent, data = df_ncp)

# Step 3: Mediation Analysis (Friendliness)
med_model_friendliness <- mediate(lm_friendliness, lm_acceptance, treat = "teachexp_ai", mediator = "teachpost_friendly", boot = TRUE, sims = 1000)

# Step 4: Mediation Analysis (Intentions)
med_model_intentions <- mediate(lm_intentions, lm_acceptance, treat = "teachexp_ai", mediator = "teachpost_intent", boot = TRUE, sims = 1000)

# Print mediation results
summary(med_model_friendliness)

Causal Mediation Analysis 

Nonparametric Bootstrap Confidence Intervals with the Percentile Method

               Estimate 95% CI Lower 95% CI Upper p-value    
ACME             0.1252       0.0788         0.17  <2e-16 ***
ADE              1.2351       1.1306         1.33  <2e-16 ***
Total Effect     1.3604       1.2588         1.46  <2e-16 ***
Prop. Mediated   0.0920       0.0576         0.13  <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Sample Size Used: 2302 


Simulations: 1000 
Code
summary(med_model_intentions)

Causal Mediation Analysis 

Nonparametric Bootstrap Confidence Intervals with the Percentile Method

               Estimate 95% CI Lower 95% CI Upper p-value    
ACME              0.236        0.181         0.29  <2e-16 ***
ADE               1.235        1.135         1.33  <2e-16 ***
Total Effect      1.472        1.376         1.56  <2e-16 ***
Prop. Mediated    0.161        0.122         0.20  <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Sample Size Used: 2302 


Simulations: 1000 
Code
# Extract results into a dataframe
mediation_results <- tibble(
  mediator = c("Friendliness", "Intentions"),
  indirect_effect = c(med_model_friendliness$d0, med_model_intentions$d0),
  direct_effect = c(med_model_friendliness$z0, med_model_intentions$z0),
  total_effect = c(med_model_friendliness$tau.coef, med_model_intentions$tau.coef),
  lower_CI = c(med_model_friendliness$d0.ci[1], med_model_intentions$d0.ci[1]),
  upper_CI = c(med_model_friendliness$d0.ci[2], med_model_intentions$d0.ci[2])
)

# Visualizing using ggplot
library(ggplot2)

ggplot(mediation_results, aes(x = mediator)) +
  geom_col(aes(y = indirect_effect, fill = "Indirect Effect"), position = "dodge", width = 0.6) +
  geom_col(aes(y = direct_effect, fill = "Direct Effect"), position = "dodge", width = 0.6) +
  geom_errorbar(aes(ymin = lower_CI, ymax = upper_CI), width = 0.2, position = position_dodge(width = 0.6)) +
  scale_fill_manual(values = c("Indirect Effect" = "#56B4E9", "Direct Effect" = "#E69F00")) +
  labs(
    title = "Mediation Analysis: Direct & Indirect Effects",
    x = "Mediator",
    y = "Effect Size",
    fill = "Effect Type"
  ) +
  theme_minimal()

Code
library(DiagrammeR)

DiagrammeR("
graph TD;
    A[AI Use] -->|Effect on Friendliness| B[Friendliness];
    A -->|Effect on Intentions| C[Intentions];
    B -->|Mediated Effect| D[Acceptance];
    C -->|Mediated Effect| D;
    A -->|Direct Effect| D;
    style A fill:#FFDDC1,stroke:#E69F00;
    style B fill:#D0E1F9,stroke:#56B4E9;
    style C fill:#D0E1F9,stroke:#56B4E9;
    style D fill:#A0D995,stroke:#009E73;
")
Code
summary(lm_friendliness)  # AI → Friendliness

Call:
lm(formula = teachpost_friendly ~ teachexp_ai, data = df_ncp)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.8063 -0.8063  0.1937  1.1604  2.1604 

Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                2.83962    0.03430    82.8   <2e-16 ***
teachexp_aiNo AI involved  0.96672    0.04882    19.8   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.171 on 2300 degrees of freedom
  (9213 observations deleted due to missingness)
Multiple R-squared:  0.1456,    Adjusted R-squared:  0.1453 
F-statistic: 392.1 on 1 and 2300 DF,  p-value: < 2.2e-16
Code
summary(lm_intentions)    # AI → Good Intentions

Call:
lm(formula = teachpost_intent ~ teachexp_ai, data = df_ncp)

Residuals:
    Min      1Q  Median      3Q     Max 
-4.2606 -0.4185  0.5815  0.7394  1.5815 

Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                3.41852    0.03066  111.51   <2e-16 ***
teachexp_aiNo AI involved  0.84204    0.04364   19.29   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.047 on 2300 degrees of freedom
  (9213 observations deleted due to missingness)
Multiple R-squared:  0.1393,    Adjusted R-squared:  0.1389 
F-statistic: 372.3 on 1 and 2300 DF,  p-value: < 2.2e-16

This mediation analysis examines whether warmth perceptions (friendliness and good intentions) mediate the negative effect of AI use on the acceptance of a teacher using AI. The results indicate that AI use significantly decreases warmth perceptions: a teacher using AI is rated lower on friendliness (by 0.97 points, p < 0.001) and on good intentions (by 0.84 points, p < 0.001) than a teacher deciding without AI. Lower warmth perceptions, in turn, are associated with reduced acceptance of the teacher's decision, suggesting that warmth plays a role in shaping public attitudes toward AI-assisted decision-making.

AI use also directly reduces acceptance of the teacher (ADE = 1.24 in favor of the no-AI condition, p < 0.001), meaning that respondents are less willing to accept decisions made by a teacher using AI than by one deciding without AI. Part of this negative effect is, however, mediated through warmth perceptions: friendliness mediates about 9% of the total effect (ACME = 0.13, p < 0.001), while good intentions mediate about 16% (ACME = 0.24, p < 0.001). The size of the total effect of AI use on acceptance (1.36 in the friendliness model and 1.47 in the intentions model, again favoring the no-AI condition) suggests that the decline in warmth perceptions partially explains why AI lowers acceptance.

These results indicate that while warmth perceptions are a significant pathway through which AI use reduces acceptance, they do not fully account for the negative impact of AI. Other mechanisms, such as distrust in AI, concerns about fairness, or a preference for human decision-makers, may further contribute to this decline. The stronger mediation effect of good intentions (16%) compared to friendliness (9%) suggests that perceptions of the teacher’s motivations and trustworthiness play a larger role than their friendliness in shaping acceptance.
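
The same arithmetic check applies in the teacher scenario (again a sketch using only the point estimates printed above):

Code
# Proportion mediated = ACME / total effect (teacher scenario, estimates from above)
0.1252 / 1.3604   # friendliness:    ~0.09
0.236  / 1.472    # good intentions: ~0.16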

H3

H3: The effect of AI use on the perceived warmth of a decision-maker using the AI system is stronger in a setting in which decisions are made directly about citizens, as opposed to decisions that affect them indirectly.

The two scenarios (school administrator and teacher) are presented to separate subsamples of the survey respondents. To compare mediation effects across contexts, we therefore check whether the ACME confidence intervals overlap; if they do not, this suggests a significant difference between the mediation effects.
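
A sketch of this check is given below. It assumes the four mediation objects are stored under separate (hypothetical) names, rather than reusing med_model_friendliness and med_model_intentions for both scenarios as the chunks above do.

Code
# Sketch: collect the ACME estimates and 95% CIs from both scenarios.
# Hypothetical object names (the chunks above overwrite med_model_*):
#   med_adm_friend,   med_adm_intent    - school administrator scenario
#   med_teach_friend, med_teach_intent  - teacher scenario
acme_ci <- tibble(
  scenario = rep(c("Administrator", "Teacher"), each = 2),
  mediator = rep(c("Friendliness", "Good intentions"), times = 2),
  acme     = c(med_adm_friend$d0, med_adm_intent$d0,
               med_teach_friend$d0, med_teach_intent$d0),
  ci_low   = c(med_adm_friend$d0.ci[1], med_adm_intent$d0.ci[1],
               med_teach_friend$d0.ci[1], med_teach_intent$d0.ci[1]),
  ci_high  = c(med_adm_friend$d0.ci[2], med_adm_intent$d0.ci[2],
               med_teach_friend$d0.ci[2], med_teach_intent$d0.ci[2])
)
acme_ci  # non-overlapping intervals across scenarios point to different mediation effects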

The mediation effect of friendliness in the school administrator scenario has an estimated ACME of 0.055, with an upper bound of the 95 percent confidence interval at 0.080. In the teacher scenario, the ACME estimate is 0.125, with a lower bound of 0.081, which means that the mediation effect is (barely) statistically significantly stronger in the teacher scenario.

For the mediation effect of good intentions, the ACME estimate in the school administrator scenario is 0.104, with an upper bound of 0.140, while the teacher scenario estimate is 0.236, with a lower bound of 0.184. Here, too, the confidence intervals do not overlap, and this time the gap between them is larger and clearly statistically significant.

Together, the results show that the mediation effect of warmth is stronger in the teacher scenario than in the school administrator scenario, supporting H3.