Value alignment aims to ensure that AI systems behave in ways that respect human values. Most LLMs, such as those developed by OpenAI, undergo value alignment before release through reinforcement learning from human feedback (RLHF). In current practice, however, users are treated as little more than anonymous annotators: replaceable and disposable. We believe users can and should play a more meaningful role in aligning LLMs.
This workshop brings together researchers in LLMs, human-computer interaction, ethics, and healthcare to discuss user roles in LLM alignment. We take the AI research assistant as a case study: participants will share their experiences and discuss the roles humans can play in aligning LLMs with their needs and values.
Anne Arzberger (TU Delft), Martha Lewis (UvA), Noor Bruijn (Erasmus MC), and Giorgia Pozzi (TU Delft)
