Helping teams build better products by understanding user behavior
and making clearer, evidence-driven decisions.
I’m a UX researcher with expertise in attention, memory, learning, and behavioral measurement. My work sits at the intersection of cognitive science and product strategy, focused on helping teams understand how people think, learn, and behave.
I completed my PhD in Cognitive Neuroscience at the University of Notre Dame, where I studied visual working memory and attention, developing experimental paradigms to better understand how information is encoded, maintained, and affected by cognitive load.
I later worked as a UX Researcher at Microsoft Mixed Reality, investigating perception, cognition, and usability in augmented reality systems to inform interface design and human performance in immersive environments.
Today, I lead research and product work at Laureata, applying mixed-methods and experimental approaches to learning technology, feature design, pricing, and product direction.
Across roles, my focus has been consistent: turning rigorous research into practical product decisions with clear user and business impact.
Understanding how people encode, retain, and revisit information in digital learning environments.
Understanding how attention, memory, and context shape product use, decision-making, and engagement.
Turning research into practical decisions around feature design, usability, pricing, and prioritization.
Laureata is a learning platform designed to help students study more effectively through course-based content, retrieval practice, and behaviorally informed product design. My role focused on identifying which features students actually wanted, where friction emerged, and how research should shape product priorities.
Open-ended user feedback consistently surfaced the same request: students wanted to focus on specific topics within a course, especially when preparing for exams or reviewing a single week of material.
In structured follow-up surveys, a majority of users reported that topic-level control would meaningfully improve their study efficiency, reinforcing this as a core unmet need rather than a niche preference.
In response, we shifted from a course-based model to a more flexible, topic-driven experience, restructuring how content was organized and accessed so students could narrow their studying to the material most relevant in the moment.
This quickly became the most-used entry point in the app, with most users actively selecting topics rather than relying on the default “all topics” option. Follow-up focus groups revealed a need for multi-topic selection, leading to an additional iteration that allowed users to study several topics simultaneously.
Research revealed a key barrier: students didn’t trust generic study tools to reflect their actual courses. Without clear course-level relevance, many dropped off before engaging.
We iterated on the landing and onboarding experience through extensive A/B testing, surfacing real university courses, searchable catalogs, and school-specific context.
These changes significantly improved perceived relevance and trust, ultimately tripling conversion from visit to download.
Research consistently showed a slight preference for multiple-choice questions over flashcards, but the difference was small and varied across users. This suggested that a single standardized approach would not fully meet the needs of the broader student population.
Rather than forcing a single study method, we designed the product to support three modes: multiple-choice questions, flashcards, or a mix of both. This allowed users to tailor the experience to their preferences while still reinforcing learning through active recall and immediate feedback.
This flexibility increased engagement by allowing students to choose how they studied, rather than adapting to a fixed system.
When evaluating potential feature directions, cognitive-training-style exercises emerged as the second most desired feature, behind only course-specific study tools.
However, both internal research and the broader cognitive science literature suggest that transfer from these tasks is typically limited to near domains, with minimal impact on real-world academic performance.
Rather than positioning these exercises as core study tools, we intentionally framed them as an optional engagement layer, designed to increase app usage and provide variety without overstating their educational impact.
This approach allowed us to align with strong user interest while maintaining scientific integrity and preserving focus on evidence-based study methods.
A/B testing, survey design, behavioral analytics, and experimental design.
User interviews, usability testing, concept evaluation, and product validation.
Eye-tracking, pupillometry, and attention and memory paradigms.
Exceptional products emerge from a deep understanding of users, behavior, and context. Research is not just a step in design. It is a foundation for meaningful product decisions.
Great UX research clarifies product direction, reveals hidden needs, and helps teams make more confident decisions about what to build, refine, and prioritize.
I’m always open to conversations about UX research, product strategy, learning science, and research leadership.
Email: danielschor@outlook.com
LinkedIn: LinkedIn profile
Resume: Download PDF