Scribs: a writing platform for ESL students
A UX case study by four design students at the UW Human Centered Design & Engineering graduate program.
This project was done as part of the HCDE 518: User-Centered Design graduate class (Fall 2021) at the University of Washington. Our team included Kasturi Khanke, Anh Nguyen, Sherry Wang, and Hailee Kenney.
What is “Scribs”?
Scribs is a platform for writing essays and giving feedback, designed with ESL (English as a Second Language) students in mind.
Why did our team design Scribs?
We selected this project idea based on our personal experiences with the ESL writing process. Some of us have been students, some of us have been tutors — we all agree that the writing feedback processes we’ve experienced aren’t optimized for students to grow as writers. We wanted to design a tool that could improve this process for students and educators. This led us to our first design question:
How might we help ESL students improve their writing?
For 10 weeks, we applied the user-centered design process to guide our product development.
We conducted six user interviews: five with ESL students and one with a subject matter expert, an ESL educator. We asked the students about their experience receiving feedback from educators, including what works for them, what doesn't, and what their ideal review experience looks like. We asked the educator about common struggles ESL students face and how he structured his review process to help students improve their writing.
After reviewing our interview notes, we found four insights:
To complement our interviews, we sent a survey and received 35 responses from a mix of college students and professionals who speak English as a second language. The survey collected information about who they are, how they seek feedback, and what types of feedback they find valuable. The survey responses confirmed that receiving feedback is important in the writing process.
We wanted to know how our product differs from existing writing tools on the market, so we performed a competitive analysis to find gaps in existing products. We used a PMI (plus, minus, interesting) matrix to assess each product's features: "Plus (+)" marks positive experiences with a feature, "Minus (-)" marks negative experiences, and "Interesting (I)" marks a unique feature or idea.
The market research helped us understand which existing tools are useful and which to avoid. For instance, we found that writing services are often associated with payment plans, anonymous reviewing, and, oftentimes, scams. Knowing this helped us avoid building a writing service tied to unethical practices.
Our survey and interviews helped us create three personas to represent the goals and pain points of our key users: Master's students, Ph.D. students, and writing educators. We decided to make the Master's student our primary persona because, first, our team has firsthand experience as Master's students, and second, the solution for Master's students is more broadly applicable: it can also serve undergraduates or even high school students.
We felt that Ph.D. students already rely on highly technical software (e.g. LaTeX), and designing for such specialized workflows was beyond our scope. Here are our three expanded user personas:
Based on our user research, we refined our design question from "How might we help ESL students improve their writing?" to:
How might we help ESL students better understand feedback from educators throughout the writing process?
The design question and personas led us to define our product’s design goals:
Sketching + Dot voting
To address the design question, each member of our team did two rounds of sketching to ideate the product features. In our first round, we generated six ideas and then voted for our favorite sketches. After the dot voting, we identified five themes to take to the second round. The themes were tracking writing progress, receiving feedback, answering a quiz, meeting synchronously, and creating an outline.
From the five themes, we expanded our sketches, voted for our favorites, and finalized our top three features. They were: creating an outline, giving audio feedback, and expressing gratitude. We agreed that these features best addressed our users’ pain points.
We wanted to better understand how the features would work in specific scenarios, so we created four storyboards, one for each persona. We then narrowed these down to two storyboards, which became the primary use cases of our product. The storyboards' main characters, the writer and the reviewer, also became our product's main user roles.
The storyboards provided a “backbone” of the product’s interaction. To make these interactions more specific, we created two user flows for the writer and reviewer.
We then translated the user flows into interactive mid-fidelity wireframes, using our sketches to guide the mid-fi design.
Using the wireframes, we ran usability tests with four users: three ESL students and one native English speaker. We gathered feedback across the four sessions, which we later used to redesign aspects of our wireframes.
We used our wireframes to test three features. They were:
- Viewing and creating an outline with our modular outline tool.
- Leaving feedback via written and audio commenting.
- “Reacting” to a draft.
After reviewing the testing results, these were the key changes we made:
Our design journey: scoping and refining
Most of the feedback we received was about how overwhelming the initial screens were. This was understandable, since users had never used a writing tool that goes beyond writing and commenting. To help users adapt to a new way of writing, we removed unnecessary design elements so users could focus on the important actions.
Looking back, our design journey had come a long way. We started off thinking we would create a find-a-reviewer service for writers but realized this wasn't the right direction. Our usability tests revealed our users' mental models, and we used this information to iterate on our product.
After refining the screens based on the usability tests, we translated our mid-fidelity iteration to a high-fidelity prototype. Here are the main interactions:
Writer creates a new writing project and writes an outline
When the writer opens the Scribs web app, they can see all of their writing projects, both shared and owned, and their review status. If the writer creates a new writing project, they are prompted to create an outline.
🎯 Targeted problem: Our usability testing showed that users were confused by modules, saying that they didn’t know where to start. We solved this by providing templates that show them what an outline looks like. If they start with “blank,” the app takes the user step-by-step, one module at a time.
Writer requests a review
When the writer is ready to send their draft for review, they select the “Send for review” button. They are prompted to fill out more details about their project and write a message to the reviewer.
On the reviewer’s side, they can open the writer’s message on the side of the essay.
🎯 Targeted problem: In our user research, we found that reviewers often need to know the writer's expectations. To give reviewers this context, the app asks the writer for more details (e.g. tone, purpose, and current progress) before the draft is sent for review.
Writer translates a comment
If the writer doesn’t understand a comment, they can translate it into their native language.
🎯 Targeted problem: Our user research showed that ESL writers often use online translation tools when they don't understand their teacher's comments. With the built-in translation tool, the writer can toggle a comment back and forth between English and their native language.
Reviewer records an audio comment
The reviewer can leave an audio comment to explain their feedback more thoroughly.
🎯 Targeted problem: Our user research showed that writers prefer in-person feedback because their reviewers explain more thoroughly. To facilitate asynchronous feedback, the app provides an audio commenting tool for reviewers to verbalize their comments as if they’re giving them in person.
Reviewer reacts and leaves a comment
The reviewer can select a reaction (a thumbs up, star-struck eyes, or a confused face) to express how they feel about a passage. They're then prompted to leave a comment explaining why they feel that way.
🎯 Targeted problem: Our user research showed that writers prefer in-person feedback because they can observe how reviewers feel about each part of their essay. We included emojis to facilitate this type of emotional feedback.
What did we learn?
- Scope, scope, scope! We had so many great ideas we wanted to incorporate into the project but had to continue to scope down our design to complete it within the timeframe.
- Deliver the MVP. Our user testing showed us that there was a lot of needless complexity in our design. Scaling back to only include the necessary elements greatly improved our final product.
- Time management is crucial. During the design phase, we only spent 10 days (including a Thanksgiving break) designing the wireframes and conducting user testing. It was manageable, but stressful, and we could only have done it by splitting the work among ourselves.
- “That looks familiar.” Since we designed a novel product, it was important not to overwhelm users with unfamiliar elements. In the testing phase, we learned to listen for users describing interactions through metaphors they already knew.
What would we do if we had more time?
- Our user research was significantly constrained by time and resources, which means we largely spoke to graduate students with 10+ years of English learning experience. Going forward, it would be helpful to survey other demographics, especially students with lower English proficiency.
- Our design was scoped down to a few main features due to time constraints. In the future, we plan to add new-user onboarding tutorials, video conferencing, comment categorization, and other features to make a thorough product.
Feedback from our professor
We are over the moon about our professor’s kind words.
Really excellent job with this! Your storyboards are top-notch and your flow diagrams are very comprehensive — you have some really strong ideas here and it’s clear that you have built this on solid user research … As someone who frequently writes papers and gives feedback, this seems like a really great idea!
We would like to thank our professor Julie Kientz and TA Dawn Sakaguchi-Tang for guiding us through this project!