Helping students find the right tutor with confidence

Engoo 2022

Impact: 175% increase in confidence that search results matched students' needs

Role: Project Lead (Product Designer)


SITUATION

Different learners need different tutors


Research in language acquisition highlights the importance of adapting learning experiences to individual needs and contexts.¹ In tutor-based learning, this is reflected in tutor selection, where factors such as teaching style, personality, communication approach, and perceived rapport can influence a learner’s engagement and outcomes.³

Engoo is an online language learning platform where users learn and practise languages through 1:1 lessons with tutors.

An analysis of a sample of 40 Engoo tutor reviews revealed that differences in learning preferences – particularly around lesson pace, structure, and teaching style – often shaped student experiences. For example, some students valued slower-paced lessons, while others preferred faster-paced ones:


  • “I think it suits people who have just started studying English, but not those who want normal speed conversation.”

  • “There were parts where the teacher spoke too fast for me to understand. This teacher may be for advanced learners.”

This project set out to improve how the platform supports student-tutor matching.


Original vs Redesign

COMPLICATION

When visibility is driven by ratings alone


Online ratings are often used as a proxy for quality, but research shows they can be an incomplete and sometimes misleading indicator of true performance or fit, particularly in subjective experiences such as teaching and learning.²

On Engoo, this becomes problematic as tutor search results are ranked solely by rating, making it the only signal of quality. Tutors have raised concerns that this system does not account for differences in student preferences, and that even small variations in reviews can have a disproportionate impact on their visibility and access to bookings.


To understand how students evaluate and select tutors, I conducted task-based usability sessions with six participants. Each session involved two tasks using a think-aloud approach, followed by a short interview. Participants were given a realistic scenario to ground their decisions.


What was uncovered:


Filtering behaviour was limited:

Beyond logistical filters like language and availability, 67% of participants relied on just one fit-related filter (“Over 3 years experience”). Other available filters, such as age, gender, and tags, were rarely used.


Filtered results did not inspire confidence:

Participants reported low confidence that the search results reflected a tutor that would suit their needs, with an average score of 1.6/5.


Decision-making relied on profile-level information:

100% of participants based their final decision on information available within individual tutor profiles, rather than the search results page.


QUESTION

How might we support meaningful student-tutor matching?


I defined three key design questions to guide exploration:

  1. How might we help students find tutors that reflect what matters most to them?

  2. How might we make it easier for students to compare tutors at a glance?

  3. How might we help students feel confident that the tutors shown match their needs?

I reviewed comparable platforms such as Preply, italki, and Cambly to identify patterns that could be adapted to Engoo. I then iteratively developed and refined design concepts, progressing from low-fidelity to high-fidelity prototypes with ongoing feedback from the same participants.


Initial wireframe

ANSWER

Let students define what matters


To give students more control over how tutors are surfaced, sorting options were introduced, allowing results to be prioritised by different criteria such as popularity, number of reviews, and rating. 

Filters were also refined to reflect how students evaluate tutors – adding level, skills, and accent, while removing less relevant options like age and gender.
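The behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not Engoo's actual implementation: the field names (`level`, `skills`, `accent`), the sample tutors, and the sort criteria are all illustrative assumptions chosen to mirror the controls described in this section.

```python
# Illustrative sketch of the redesigned search controls: filter tutors on
# fit-related fields, then order results by the student's chosen criterion.
# All data and field names here are hypothetical.

TUTORS = [
    {"name": "Ana",  "level": "Beginner", "skills": {"Conversation"},
     "accent": "US", "rating": 4.8, "reviews": 120, "bookings": 300},
    {"name": "Ben",  "level": "Advanced", "skills": {"Business", "Conversation"},
     "accent": "UK", "rating": 4.6, "reviews": 45, "bookings": 90},
    {"name": "Cara", "level": "Beginner", "skills": {"Grammar"},
     "accent": "UK", "rating": 4.9, "reviews": 10, "bookings": 40},
]

# Sorting options: popularity, number of reviews, rating
SORT_KEYS = {
    "popularity": lambda t: t["bookings"],
    "reviews":    lambda t: t["reviews"],
    "rating":     lambda t: t["rating"],
}

def search(tutors, sort_by="rating", **filters):
    """Return tutors matching every filter, ordered by the chosen sort key."""
    results = [
        t for t in tutors
        if all(
            # set-valued fields (e.g. skills) match on membership,
            # scalar fields (e.g. level) match on equality
            value in t[field] if isinstance(t[field], set) else t[field] == value
            for field, value in filters.items()
        )
    ]
    return sorted(results, key=SORT_KEYS[sort_by], reverse=True)

# Beginner-level tutors, most-reviewed first
print([t["name"] for t in search(TUTORS, sort_by="reviews", level="Beginner")])
# prints ['Ana', 'Cara']
```

The key design point mirrored here is that sorting and filtering are independent controls, so students can combine a fit-related filter with whichever ranking signal they trust most.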


Refinements from user testing:
  • Sorting by “Top picks” was removed after testing showed it was not a priority for participants.

  • Participants expected “Language” and “Saved” to be the most important fields during their search, so the order of controls was adjusted – “Language” was placed first and saved tutors were positioned at the end.

Bring the right information forward


To support comparison at a glance, more relevant tutor information was surfaced directly on the tutor card.

Drawing on precedent analysis and user research, the following attributes were prioritised:

  • Name and image

  • Rating and number of ratings

  • Levels taught

  • Common sentiment from students

  • Types of skills and lessons offered


Refinements from user testing:
  • The “headline” was removed following user testing, as participants found it provided limited value within the available space. Instead, key information was presented in a more structured, scannable format to support quick comparison across tutors.

Make the system's logic visible


To address users’ low confidence in search results, indicators were introduced on tutor cards to show which search criteria contributed to each result. This added transparency, helping users understand why specific tutors were shown and building confidence in their selection.


Refinements from user testing:

In earlier designs, matching criteria were displayed as a list beneath each tutor image. However, this made comparison difficult, as each card only displayed compatible fields, resulting in many cards appearing visually similar.

This was replaced with a compact icon overlay on the tutor image, with details of the system logic revealed on hover. This allowed the key decision-making information to remain visible on the card.
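The logic behind these match indicators can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not Engoo's implementation: the field names and sample criteria are assumptions, and in practice the result would drive the icon overlay and hover detail described above.

```python
# Hypothetical sketch of the match indicators: for each tutor card, compute
# which of the student's search criteria the tutor satisfies, so the card
# can render a compact icon overlay with detail revealed on hover.

def matched_criteria(tutor, criteria):
    """Return the subset of search criteria this tutor satisfies."""
    matches = []
    for field, wanted in criteria.items():
        have = tutor.get(field)
        if isinstance(have, (set, list)):
            # multi-valued fields (e.g. skills) match on membership
            if wanted in have:
                matches.append(field)
        elif have == wanted:
            # scalar fields (e.g. level, accent) match on equality
            matches.append(field)
    return matches

tutor = {"level": "Beginner", "skills": ["Conversation", "Grammar"], "accent": "UK"}
criteria = {"level": "Beginner", "skills": "Business", "accent": "UK"}

# The card would summarise this as e.g. "2 of 3 criteria matched"
print(matched_criteria(tutor, criteria))  # prints ['level', 'accent']
```

Surfacing the matched fields, rather than just a score, is what makes the system's logic legible: the student can see at a glance both why a tutor appeared and where the fit is partial.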

OUTCOME

Improving confidence and decision-making in tutor selection


The same task-based evaluation was conducted on the updated design to assess its impact on how students search for and select tutors.

Compared to the initial test, several improvements were observed:

  • 100% of participants utilised at least three filters (including the original), compared to 67% previously relying on just one.

  • Average confidence in the accuracy and suitability of results increased by 175%, from 1.6/5 to 4.4/5.

  • The proportion of participants who relied on information from the tutor profile page to justify their decision decreased by 50%, from 100% to 50%.

The Final Design


In addition to the core changes, a heuristic evaluation informed broader improvements across the platform to support overall usability:

  • Simplifying information architecture

  • Refining copy to better align with users’ mental models

  • Improving text legibility to meet accessibility standards


LEARNINGS

Transparency builds trust


This project highlighted how strongly transparency in system logic influences user confidence. Even when results are relevant, users may hesitate if they don’t understand why those results are shown.

While often unnoticed, subtle elements such as loading states, system feedback, and visual cues play a critical role in building trust and supporting decision-making.


Resources


¹ Lightbown, P. M., & Spada, N. (2021). How languages are learned (5th ed.). Oxford University Press.

² Luca, M. (2016). Reviews, reputation, and revenue: The case of Yelp.com. Harvard Business School Working Paper.

³ Roorda, D. L., Koomen, H. M. Y., Spilt, J. L., & Oort, F. J. (2011). The influence of affective teacher-student relationships on students’ school engagement and achievement: A meta-analytic approach. Review of Educational Research, 81(4), 493–529. https://doi.org/10.3102/0034654311421793


Thanks for coming by!

I’m always up for connecting with new people, so feel free to get in touch.
