Tutor App Redesign
I was part of a UX design and research workshop challenge to increase the confidence score of an app that matches students with tutors. This is not client work; however, it demonstrates the approach to research and design testing that I have used in client work.

Version 1 tutor list

My redesigned tutor list
StudySesh is a fictitious app that matches professional tutors to students. User testing through Lyssna showed that the app garnered a confidence score of only 4.53. The aim was to bring the score up, ideally to 5 or higher.
In this case study, I'll walk you through my process and demonstrate my methodology and redesign strategy to show how I increased the confidence score to 7.56 (so far).
Workshop conditions and constraints
Participants
- N = 42
- Remote
- Global
Schedule
2 hours, held in the evening (ET):
- 1 hour, 30 minutes for briefings.
- 2 minutes for sketching.
- 15 minutes for hi-fi design.
- 1 minute for voting.
- 20 minutes for Lyssna data gathering and review.
Design
1 screen, determined by the workshop leader:
- Based on analytics.*
- The screen could "long scroll".
- Adhere to the style guide.
- 2-minute sketch time.
- 1 high-fidelity Figma screen.
* Because the app is fictitious, the analytics were made up for the workshop challenge.
App Screens: Version 1
The first round prototype
The first-round app screens were designed by the workshop leader.
The hypothetical situation
- 45% of users drop off at Create Account.
1st round mobile app screens for the redesign and testing workshop.
1st round test results

Participants were asked, "Imagine that you are a parent using StudySesh for the first time for your own child. After answering the questions on these screens, how confident would you be that the recommended tutors are a great match for your child and their learning goals?"
The mean score was 4.53.
Analyzing feedback: 1st round
User metrics (hypothetical)
Since 45% of users dropped off right before sign-up, we would redesign the screen immediately preceding it, "Tell Us About Your Child."
Analyzing feedback
We only had a couple of minutes to review the 19 test participants' comments, quickly scrolling through the list to find some actionable feedback for the redesign.
"...My child has specific learning needs that weren't asked about before recommending tutors."

This screen had choices for grade level and subjects that students needed help with.

My sketch and high-fidelity redesign
2-minute sketches

- My sketches aimed to capture ideas rather than the visual design.
- I interpreted "learning needs" as meaning more details about grades and learning challenges, such as ADHD.
- We were provided with, and instructed to adhere to, a minimalist design system and style guide.
15-minute Figma mockup

- Initially, I thought knowing grades might be useful.
- I added choices to select learning challenges.
My high-fidelity Figma screen design.
Voting and testing

Testing a new screen
- The moderator chose 3 of the 42 designs to vote on.
- My design was not one of the 3 chosen.
- We had 1 minute to vote in FigJam.
- The winning screen was uploaded to Lyssna. The previous questions and demographics were repeated for the test.
- The score increased to 6.96.

"It looks like it suggests teachers based on the qualities, characteristics and requirements of my child. But...
It doesn't show which tutors will work well with children who have learning needs."

Reviewing results of the 2nd design test
What should I address in my design?
Still curious how my design would fare in the test, I decided to pick up where the workshop left off. There were now more responses to the test, and I reviewed the detailed feedback from the 25 participants.
I exported the Lyssna participant feedback as a CSV in order to analyze and quantify the data.
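To give a concrete flavor of that step, here is a minimal Python sketch of how open-ended comments could be coded against recurring themes and tallied. The file name, the "comment" column, and the keyword buckets are hypothetical stand-ins; Lyssna's actual export headers may differ.

```python
import csv
from collections import Counter

# Hypothetical theme buckets for coding open-ended feedback.
THEMES = {
    "credentials": ["credential", "qualification", "education"],
    "style/personality": ["style", "personality", "approach"],
    "trust": ["trust", "believe", "skeptical", "legit"],
}

def code_comment(text: str) -> list[str]:
    """Return every theme whose keywords appear in a comment."""
    text = text.lower()
    return [theme for theme, words in THEMES.items()
            if any(word in text for word in words)]

counts = Counter()
# "lyssna_feedback.csv" and the "comment" column are assumptions
# about the export, not Lyssna's documented format.
with open("lyssna_feedback.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts.update(code_comment(row.get("comment", "")))

for theme, n in counts.most_common():
    print(f"{theme}: {n} participant(s)")
```

A keyword pass like this only produces a first-cut tally; I still read every comment, but the counts make it easy to see which themes dominate.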
Shifting participant concerns
At a high level, participants now had trouble trusting the results. (fig. 1)
Drilling down into the feedback revealed that the majority of participants wanted more information about the tutors' teaching styles, personalities, and credentials. Without this information, participants did not feel confident that the tutors would match their children's needs. (fig. 2)
Based on this feedback, I would tweak some ideas from my v2 design and redesign the tutor card.

Fig. 1

Fig. 2
High-level feedback
"Based on the fields shown and the tutors recommended, I wasn't able to get a good understanding of their style, personalities, or anything that makes them a good fit."


- 10 out of 25 participants didn't trust their tutor match results.
Detailed feedback
- 9 out of 10 participants wanted to know the tutors' teaching credentials.
- 7 out of 10 participants wanted more information about the tutors' teaching style and personality.
- 3 out of 10 participants were skeptical that every tutor was rated 5 stars.
Redesigned and new screens: my 2nd iteration
Having a bit more than the workshop's 15 minutes to design a screen, I thought through my first solution. I added a couple of extra screens to avoid long scrolling on the static screens in the Lyssna test, and I added more criteria for matching children with tutors.

I added subcategories for subjects in a drop-down, rather than only main categories; this change was requested in the feedback.

I kept my version of the learning-needs screen and used Perplexity AI to generate a range of needs students might have.

I included the winning concept from the designer whose screen was chosen in the workshop test and added more options.
Create an account
Creating an account is probably important for StudySesh's business goals. However, I thought its presentation might be improved.

Original Create Account screen.

- I thought that allowing more of a preview of the tutor cards (similar to Glassdoor's UI) would help users better understand the results they would get before providing the app with an email and password. In allowing a partial view, I hoped to build my test participants' confidence in the legitimacy and usefulness of their results.
My redesigned Create Account screen.
Tutor information
Participants wanted more tutor details, so I redesigned the tutor card to provide more information and build trust. I also added "sneak peeks" of tutor matches on the landing and sign-up screens. On the tutor cards in the results, I took care to match at least three criteria points from the survey questions with tutor credentials. Finally, I added sort, filter, and favorites functions.

Version 1 tutor list screen.

- The results screen keeps the "Here are your matches" text and adds sort, filter, favorite, and account profile functions. "Favorite" would be used to compare tutors later.
- "Credentials" shows the details of tutors' education.
- A "How you match" section shows where the parents' choices match the tutor's expertise for their children.
- An "About" section allows tutors to present themselves in their own voices.
My redesigned tutor list screen.
- "Schedule a call" allows users to select a time to chat with the tutor for a more detailed conversation about their needs.
Uploaded screens for Lyssna testing
My design's Lyssna test results

Confidence increased
"The onboarding is designed with good empathy with me."

- 20 participants rated my design at 7 or higher.
- 5 participants rated my design 5 or lower, but nobody gave it a 1.
- The score increased to 7.56.
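For the quantitative side, summary figures like the ones above can be reproduced straight from the ratings export. Here is a minimal Python sketch, again assuming a hypothetical file name and a "rating" column:

```python
import csv
from statistics import mean

# "lyssna_feedback.csv" and the "rating" column are assumptions
# about the export format; adjust to the real headers.
with open("lyssna_feedback.csv", newline="", encoding="utf-8") as f:
    ratings = [int(row["rating"]) for row in csv.DictReader(f)]

print(f"N = {len(ratings)}")
print(f"Mean confidence: {mean(ratings):.2f}")
print(f"Rated 7 or higher: {sum(r >= 7 for r in ratings)}")
print(f"Rated 5 or lower: {sum(r <= 5 for r in ratings)}")
print(f"Rated 1: {sum(r == 1 for r in ratings)}")
```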
Detailed analysis of the design
What went right?
- 10 out of 25 participants felt the app was useful, appreciated the survey questions, and felt that learning needs and styles were represented. They also liked that scheduling was flexible.
- 1 out of 25 said the screens were easy to read.

Where can my design improve?
- 3 participants wanted more detailed questions in the survey.
- 2 participants had trouble reading and/or seeing the screens.
- 1 participant stated there were too many typos, but I didn't see any.
- 1 participant was skeptical that the app could register high-quality tutors in real life; this can only be proven after launch.
"To give it 10/10 I would like a few more questions to be asked of my child."

"Learning needs and learning styles pages provides a personalised experience for my child."

Retrospective

"To give it 10/10 I would like a few more questions to be asked of my child."

N=25
60 minutes
Results analysis
It's better to have a bit more time to do a deep dive into the feedback. Seeing the numbers behind the qualitative information helps to form ideas for the best solutions.
Tell me more
Participants left feedback that would benefit from a moderated test. People wanted more questions in the flow, but what kinds of questions do they want?
Research time
The additional time I took to analyze the 2nd round of test results helped me create a tailored solution that increased people's confidence in the app.

Test design
The legibility of the screens might have been affected by my decision to add more screens to the test. In retrospect, I probably didn't need one of the sign-up screens.
Next Steps
Prototype
At this point, I want to create an MVP interactive prototype and perhaps add some questions to test with participants.
Usability testing
Once the prototype is complete, I will test the app with at least 5 participants and interview them to better understand the specifics of what people would like to see.
Iterate
Based on the feedback I get from the interviews and tests, I will iterate on the design and test again until the confidence score is at least 9.
Review of before and after screens
1st round mobile app screens for the redesign and split testing workshop:

1st version scheduling screen and my screen designs:


Thank you for stopping by!
If you'd like to have a conversation about how I can help with your project, please contact me!