Helena Savenko
Toronto-based Product Manager and co-founder specializing in AI products, experimentation, and 0→1 feature development. I build solutions that combine customer empathy with measurable impact.

LinkedIn | Contact me via LinkedIn
Case Study 1
AI-Powered Grading at Leading Online Learning Platform
Context
A leading online learning platform serves millions of learners globally through thousands of courses. Traditional grading options were limited - educators could only use peer reviews or automated quizzes, each with significant drawbacks for comprehensive assessment at scale.
The Challenge
  • Learners needed timely, comprehensive feedback to improve and stay engaged.
  • Educators lacked tools for rigorous grading at scale - existing options couldn't provide high-quality assessment for complex work.
  • Manual grading was impractical for courses with thousands of learners.
  • Limited grading options created bottlenecks that impacted course completion rates.
My Role
Product Lead at a major EdTech company - owned vision, strategy, and execution for AI-powered grading features.
Solution
Launched two AI-powered grading products:
  1. AI-Enhanced Peer Reviews: Improved the peer review process with AI-assisted quality checks and comprehensive feedback.
  2. AI-Graded Assessments: First solution enabling rigorous assessment of open-ended questions at scale, providing detailed and personalized feedback to each learner.
Both products used AI to provide immediate, comprehensive feedback while enabling educators to assess complex work that was previously impractical at scale.
Approach
  • Conducted user research with educators and learners to understand pain points with existing grading methods.
  • Collaborated with AI/ML engineers to define grading accuracy and feedback quality requirements.
  • Aligned cross-functional stakeholders (Engineering, Design, Education Product teams) on product vision.
  • Defined success metrics around learner engagement, course completion, and educator adoption.
  • Led iterative development and testing to ensure quality and effectiveness.
Impact
  • Improved learner engagement and course completion by providing timely, personalized feedback.
  • Enabled educators to create more rigorous assessments that were previously impractical at scale.
  • Reduced workload for educators through automation while maintaining assessment quality.
  • Created foundation for AI-powered learning experiences across the platform.
  • First-of-its-kind solution for assessing open-ended work with personalized feedback at scale.
Case Study 2
Success Orchestration - AI-Powered Retention System
Context
Online learning faces a persistent challenge: learner drop-off. At a major EdTech platform, understanding why learners disengage and when to intervene could dramatically improve course completion rates and learner outcomes.
The Challenge
  • High learner drop-off rates across courses.
  • No systematic way to identify at-risk learners before they disengaged.
  • Generic interventions weren't personalized to individual learner needs.
  • Required scalable solution for millions of learners.
My Role
Product Lead at a major EdTech company - designed strategy, led experimentation, and defined technical roadmap.
Solution
An AI-powered retention system that:
  • Collects behavioral signals (login patterns, assignment completion, content engagement).
  • Identifies potential drop-off through pattern analysis.
  • Triggers timely, contextual interventions to re-engage learners.
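In its simplest form, the three steps above can be sketched as a rule-based pipeline. The sketch below is illustrative only: the signal names, thresholds, and intervention names are hypothetical, not the platform's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical behavioral signals; a real system would derive these
# from platform event data.
@dataclass
class LearnerSignals:
    days_since_login: int
    assignments_overdue: int
    videos_watched_this_week: int

def choose_intervention(s: LearnerSignals) -> Optional[str]:
    """Map behavioral signals to a re-engagement intervention (rule-based)."""
    if s.days_since_login >= 7:
        return "send_comeback_email"      # inactivity is the strongest signal
    if s.assignments_overdue >= 2:
        return "send_deadline_reminder"
    if s.videos_watched_this_week == 0:
        return "suggest_short_lesson"
    return None  # learner looks on track; no intervention needed

print(choose_intervention(LearnerSignals(10, 0, 3)))  # -> send_comeback_email
```

Rules like these are easy to reason about and ship quickly, which is what makes them a natural starting point before investing in learned models.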
Approach
Phase 0: Vision & Prototype
  • Built a working prototype using Figma Make to demonstrate how the retention system could work end-to-end.
  • Used the prototype to communicate vision to stakeholders and secure initial buy-in.
Phase 1: Validation
  • Led proof-of-concept experiments to validate that behavioral signals could predict drop-off.
  • Tested different intervention types and timing to determine what re-engaged learners.
  • Used data analysis to identify which signals were most predictive.
Phase 2: Technical Roadmap
Aligned executives on phased technical evolution:
  • V1: Rule-based signal system (if X behavior, then Y intervention).
  • V2: ML-powered predictions (predict drop-off probability, optimize intervention timing).
  • V3: LLM-powered conversational interventions (AI assistant provides contextual, personalized support).
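To make the V1→V2 step concrete, here is a minimal sketch of a V2-style drop-off score: a logistic model turns behavioral signals into a probability, replacing hard-coded rules with a tunable risk threshold. The weights, signal names, and threshold are invented for illustration; a real V2 model would learn its weights from historical learner data.

```python
import math

# Hypothetical weights for illustration; in V2 these would be learned
# from historical data rather than set by hand.
WEIGHTS = {
    "days_since_login": 0.35,
    "assignments_overdue": 0.80,
    "videos_watched_this_week": -0.40,
}
BIAS = -2.0
RISK_THRESHOLD = 0.5  # intervene above this drop-off probability

def dropoff_probability(signals: dict) -> float:
    """Logistic model: behavioral signals -> probability of drop-off."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

at_risk = {"days_since_login": 9, "assignments_overdue": 2,
           "videos_watched_this_week": 0}
prob = dropoff_probability(at_risk)
if prob > RISK_THRESHOLD:
    print(f"intervene (drop-off risk {prob:.0%})")
```

The design point is that V2 keeps the same signals as V1 but scores them continuously, so intervention timing can be tuned by moving a threshold instead of rewriting rules.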
Phase 3: Cross-Functional Execution
  • Partnered with Data Science team on signal collection and model development.
  • Collaborated with Engineering on system architecture and integration.
  • Worked with Content/Learning Design teams on intervention messaging.
  • Presented regular updates to executives, securing buy-in for multi-phase roadmap.
Impact
  • Created data-driven framework for proactive learner support at scale.
  • Established technical foundation that could evolve from simple rules to sophisticated AI.
  • Demonstrated how AI could be applied thoughtfully in phases, starting with validation before heavy investment.