About

I’m interested in making computing education more accessible and personally meaningful through Human-Computer Interaction (HCI) and AI. My interest in computing began in seventh grade, when I started programming on my TI-84 calculator. As a freshman at Temple University, I began researching how to help novice students understand programming concepts.

My current work explores how large language models (LLMs) can generate analogies, explanations, and learning materials that reflect students’ interests and backgrounds. I am passionate about making computing more inclusive, especially for students who don’t yet see themselves represented in technical spaces.

In the fall of 2025, I’ll begin my PhD at the University of Michigan School of Information. Go Blue!

Research

My research focuses on using LLMs to personalize learning in computing education. I study how AI-generated explanations and analogies can be adapted to align with students’ interests, cultural backgrounds, and learning needs.

I combine LLMs with perspectives from HCI to better understand how to support intrinsic motivation and deliver adaptive feedback. This includes building interactive tools, analyzing student responses, and evaluating how personalized support affects comprehension. I’m focused on ensuring these systems help students learn rather than mislead them or replace their own thinking.

Publications

Like a Nesting Doll: Analyzing Recursion Analogies Generated by CS Students using Large Language Models
Seth Bernstein, Paul Denny, Juho Leinonen, Stephen MacNeil, et al.
ITiCSE 2024
Analyzing Students' Preferences for LLM-Generated Analogies
Seth Bernstein, Paul Denny, Juho Leinonen, Stephen MacNeil, et al.
ITiCSE 2024
Generating Diverse Code Explanations using the GPT-3 Large Language Model
Stephen MacNeil, Andrew Tran, Dan Mogil, Seth Bernstein, et al.
ICER 2022
Comparing Code Explanations Created by Students and Large Language Models
Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, et al.
ITiCSE 2023
The Implications of Large Language Models for CS Teachers and Students
Stephen MacNeil, Joanne Kim, Juho Leinonen, Paul Denny, Seth Bernstein, et al.
SIGCSE 2023
Decoding Logic Errors: A Comparative Study on Bug Detection by Students and Large Language Models
Stephen MacNeil, Paul Denny, Andrew Tran, Juho Leinonen, Seth Bernstein, et al.
ACE 2024
Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book
Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, et al.
SIGCSE 2023
Automatically Generating CS Learning Materials with Large Language Models
Stephen MacNeil, Andrew Tran, Juho Leinonen, Paul Denny, Joanne Kim, Arto Hellas, Seth Bernstein, et al.
SIGCSE 2023
Prompt middleware: Mapping prompts for large language models to UI affordances
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
arXiv 2023