One Thing to Do is a side project I've been working on for the past few weeks. This post details its conception, the alpha version, a usability survey, and results/discussion of that survey. If you're just here for the juicy UX stuff, read the following problem/solution statements, then skip ahead to the Usability Test section.
Todo lists are a helpful way of organizing the tasks that you need and/or want to do, but for people with ADHD/ADD, it can often be very difficult to actually find the motivation to check an item off that list. When the list gets too long, it can feel especially overwhelming.
One of the most effective means of self-motivation I've discovered is regular self-affirmation. Once your spirits are lifted, it's much easier to take that energy and apply it to something totally unrelated—like Newton's law of emotion or something.
This project is a webapp that requires you to say something nice about yourself every time you want to add a new item to your todo list. Rather than view all of your todos at once in a daunting list, the todo viewer picks a random one to show you, but also picks a random affirmation to display right beneath it. This way the user experience is more focused and kinder than a standard to do list 💚
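The viewer's core behavior is simple enough to sketch. The function names and data shapes below are my illustration, not the app's actual code; it just shows the random task/affirmation pairing, with the app's default message as the empty-list fallback:

```javascript
// Hypothetical sketch of the viewer logic; `tasks` and `affirmations`
// stand in for whatever the app actually persists (e.g. localStorage).

function pickRandom(items) {
  return items[Math.floor(Math.random() * items.length)];
}

function renderViewer(tasks, affirmations) {
  return {
    // Fall back to the default message when the task list is empty.
    task: tasks.length > 0 ? pickRandom(tasks) : "Enjoy yourself!",
    // Every render pairs the task with a random affirmation.
    affirmation: pickRandom(affirmations),
  };
}
```

Showing one random task at a time, rather than the full list, is the whole trick: the user only ever has to negotiate with one thing to do.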
Originally I planned due date and priority functionality in the viewer, but scrapped it because it didn't feel in the spirit of the app. For some reason I left the priority field in the form, but didn't connect it to any functionality. big brain tiem.
I care deeply about this project, so I thought it was essential to run a usability test and see what my friends expect from an experience like this.
A beautiful question is one which always provides meaningful data. There is a common pitfall in surveys where the researcher relies on open-ended questions such as "How did you feel about this experience?" in hopes of receiving long, thoughtful/emotional responses about everything users loved or hated about the experience, and every little detail of the usability test. This is lazy survey design. In reality, I find you'll only get such responses from users who had an extreme 1- or 5-star experience. The majority of people will have a 3-star experience, and may not have any overly compelling feelings to share in the big textbox. That being said, you should have a catch-all comment box just in case, but keep your expectations low.
I asked the following questions:
I didn't ask how easy it was; I just asked whether they could do it. Non-judgemental. Objective. If I didn't get close to 100% on this, then I'd need to seriously reconsider the layout and button styling. I asked users to first try to complete the testing criteria in Airplane mode, to see if my precious service worker was functioning offline. If that didn't work, they could complete the test with the Interwebs turned on.
The purpose of this question was partially to find bugs, but mostly to see if the UI matched the user's expectations of interactivity. If they clicked a button, did a change happen that they weren't expecting? Did the UI not move when they were expecting a change?
When you ask broadly "How do you feel about how the app looks?", people tend to respond with interactivity or other UX complaints, as I saw in a study I did on the wkdu.org website a few months ago. So not only did I specify cosmetics, I also gave some examples to jog users' memories.
This is an instance of the broad question I warned against earlier. I didn't expect much here, but the goal was to reframe the "anything unexpected?" question into a more direct implicit question of "What would you do instead?"
This was the big question. By this point in the survey, users have been thinking critically about the experience, and they can connect it to their broader experiences of task management. This question was meant to help me really focus the scope for the beta version to the features that people value most.
This question was meant to gauge how compelling the experience of using the app was, and to quickly show me how much room I have to improve it :)
I sent the link across a few Discord servers I'm in, but most often I got results from directly tagging or messaging people, especially friends that had been eagerly waiting for this alpha release. I received 6 submissions to the survey, each with wonderful details, suggestions, and gripes (whether or not they realized it, lol).
Two users said they would not use this to replace their existing task management strategy, but four said they would consider using it long-term with some additional improvements. For a mid-fidelity alpha, that feels quite promising.
There were some healthy concerns about text sizes being too small, and about links that should be styled as buttons and/or given more breathing room.
People seemed to enjoy the negative space and the cute/friendly attitude of the design. If anything, they wanted more! There were multiple requests for fun/rewarding visuals. One user jokingly suggested a custom image upload so they could send themselves sassy memes for motivation. Images communicate more quickly than words, so I understand this desire. I think creating fun animations, an encouraging UI character, relaxing photos, or something to that effect—happening as a prompt in realtime, rather than as a reward system—may be a really effective feature. Maybe I'll check out the Unsplash API or design some silly interactive features.
All users could complete all test criteria. Simple UI = Successful UI.
3/4 people could complete the test offline. Two did not respond to the question, which was an oversight on my part for not making it required. Not exactly conclusive, but promising. The one user who couldn't complete it may be on an older Android version. I'll make a bigger deal of this in the beta survey, and possibly follow up with the other two respondents.
There is possibly one glitch where someone went to the Add task form, then navigated back to the task viewer and saw the default task message "Enjoy yourself!", but I'm more inclined to think this was because the user didn't have any saved tasks; in my extensive testing for such bugs I haven't seen anything like it. Otherwise, no one stumbled on any technical glitches in the code, which is a great first step for an alpha. However, there were several cases where interactions produced unexpected results...
A big stumbling block that almost every user hit, and some didn't understand, was that the default state of the task viewer was indistinguishable from the user-impacted state once a few tasks had been entered. If a user had completed all of their tasks, they would see the default message "Enjoy yourself!", but this looked like something they had added, leading to confusion and frustration when they couldn't delete it. (As a side note: if so many users forgot this was the default message, it suggests the "Add" button is very effective at drawing attention.)
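One possible fix, sketched here with hypothetical names rather than the app's real code, is to tag the fallback message so the viewer can style it as an empty state and hide the delete button:

```javascript
// Hypothetical fix: mark the fallback so the UI can distinguish it
// from a user-created task.
function currentTask(tasks) {
  if (tasks.length === 0) {
    // Not user-created: the viewer can hide the delete button
    // and style this as a friendly empty state.
    return { text: "Enjoy yourself!", isDefault: true };
  }
  const text = tasks[Math.floor(Math.random() * tasks.length)];
  return { text, isDefault: false };
}
```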
Two users, who self-identified as perhaps not being the target audience, wanted more control over the data: due dates, checklists, subtask dependencies, etc. I'll discuss this more in the conclusion.
A couple of users were really confused by the distinction between the Add form's submit button, "Let's Do This", and its back link, "Done adding stuff". My intention was to let users add multiple tasks at a time, then return to the task viewer once they were done. Users instead interpreted the back link as a secondary submit button, which made it feel like they had to submit the data twice. Since the name of the app is One Thing to Do, and the task viewer only shows one task, I can see why some users might be inclined to add one task and immediately want to see it in the viewer, rather than staying on the form to add 3 or 4 tasks. (Aside: I should always show the most recently added task after users leave the form, then engage the random picker.) I'm not sure whether it's better to prioritize adding a single task or adding several. It reminds me of the following, though:
One of the clever designs of this app is that by forcing users to enter a positive self-affirmation prior to submitting a task, users need to slow down and consider what things they really need to do. Trying to come up with nice things to say to yourself can be a strange and stressful feeling if you're not accustomed to it. It can take a long time while you sit there and consult the ceiling fan for inspiration. Therefore, it prevents users from spamming a long list of things to do.
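The gate itself is trivial to sketch. The `addTask` helper below is hypothetical (the real form wiring differs); the point is just that submission refuses to proceed without a non-empty affirmation:

```javascript
// Hypothetical sketch of the "affirmation required" gate.
function addTask(list, task, affirmation) {
  if (!affirmation || affirmation.trim() === "") {
    // Refuse to add the task until the user says something nice.
    throw new Error("Say something nice about yourself first!");
  }
  list.push({ task, affirmation: affirmation.trim() });
  return list;
}
```

Because every task costs an affirmation, rattling off a 30-item brain dump becomes impractical by design.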
Two users reported finding it somewhat overwhelming, and likely would have benefited from more specific prompts/suggestions. I plan to ask much more targeted questions in the beta survey so I can get a better sense of users' cognitive load while generating self-affirmations. In the beta I may experiment with ways to make generating self-affirmations more fun, but not too easy, so they don't become too generic. The caveat is that I don't want to be too heavy-handed about the self-affirmations, because it's important for users to grow into this mentality (myself included) rather than be completely forced into it. The occasional "i can haz cheezburger" affirmation might actually be fine.
My end-goal with this app is to slowly change people's behavior such that by using this app over a long period of time, they can close the gap between the time it takes to write a task and the time it takes to write a self-affirmation. In this way we can classically train ourselves to think kind thoughts whenever there's a task to be done. It's the most diabolical scheme I've ever conceived of.
The biggest question that arose in analyzing the results was defining the target demographic. Am I only targeting users with motivational disorders who liked the focused and upbeat experience, or am I also trying to appease the (valid) concerns of more organized people who expected greater access to the data model? I'm inclined to trust my gut here, and home in on users like me, who need this tool to motivate themselves to do assorted tasks.
There is already an abundance of task management solutions, each of which offers granular control over task hierarchy, dependencies, and sorting. The fundamental difference is that the other task management apps solve the problem of understanding the array of tasks you have to do, whereas One Thing to Do solves the problem of doing a single task you have to do. The other apps work up high at the self-awareness level of cognition with less tangible emotional impact, whereas I'm targeting lower cognitive functions with more immediate impact, like... feeling good about yourself. Who doesn't want that?