An AI-powered feature that helps learners deepen content comprehension through guided prompts and in-lesson chat.
The goal of this project was to explore how AI could enhance the learner experience within Guides (WorkRamp’s lesson format). While AI-generated audio of lesson content already exists, the vision for the AI Assistant for Learners was to build on that foundation by creating an integrated assistant that could also support comprehension through features like flashcard and quiz generation, preset prompts, and conversational Q&A.
A key challenge surfaced early: introducing AI in this context risked enabling learners to shortcut the learning process rather than deepen it. Additionally, it was unclear whether the feature would drive genuine learner value or primarily serve as a competitive checkbox for prospective customers. Despite those concerns, leadership determined that an in-lesson AI assistant was strategically important for platform competitiveness.
With that in mind, my design focus was on balancing visibility, engagement, and restraint—ensuring the Assistant felt approachable and valuable enough for learners to actually use, while avoiding designs that felt intrusive or gimmicky. It needed to be simple, purposeful, and demo-ready without sacrificing usability or learner trust. While I led the UX exploration and visual design, the Product Manager addressed potential misuse, and the CTO defined guardrails for what the Assistant could and should do.
From a UX standpoint, I concentrated on how learners would access and interact with the Assistant, especially the entry point, preset prompt behavior, and chat states. I explored multiple directions, including launching from a floating button, embedding a text field in the initial state, and positioning the chat panel on either side of the Guide. I also tested variations in how preset prompts might be displayed to help learners get started and understand the Assistant's purpose.
Next Steps: Measuring Impact and Refining the Experience
The AI Assistant for Learners is now in development, guided by the design framework established through this exploration. The initial release will focus on validating real-world engagement and usefulness. Once launched, the team plans to closely monitor which preset prompts are used most often to understand how learners naturally interact with the Assistant. In parallel, we’ll gather feedback when the Assistant can’t complete a request—insights that will directly shape improvements to prompt design, training data, and feature scope.
This feedback-driven approach will help determine where AI can most meaningfully support comprehension versus where human-centered design remains essential.