We've just upgraded from Indigo Lite to Indigo Studio and we're very happy with the added functionality. We've made our prototypes and are ready to create usability studies, but I'm having trouble setting up a useful study. As an example, let's use this study of a simple design:
indigodesigned.com/.../
The idea is to use the start button to begin, choose an emoticon, enter some text, and hit send. These steps are also defined in one Task in the study. With this study I run into the following issues:
1. For the study I don't care which emoticon the user selects, but the flow of the study only lets me choose one emoticon and disables the others. Of course I could guide the user towards a certain emoticon with a task description like 'You are very happy about this product'. But what I would like is to edit the task flow so it detects a click on any element in a group, instead of on one specific element in that group.
2. For the study I do care whether the user first enters text and then chooses the emoticon, or vice versa. Unfortunately, the task flow only permits the exact flow I recorded. Of course I could analyse the click map to see what the user clicks first, but because all elements are disabled except the one they 'need' to click, users could get frustrated and abandon the study. What I would like is to set up the study so users can interact with the elements in a more free-flowing way. For example, I could record certain steps for a task, group a few of those steps, and mark the group as 'it doesn't matter in what order the user completes these steps'. Or perhaps the user may skip steps entirely, because some fields are optional (see the sketch after this list for the kind of model I mean).
3. Since there isn't a Material Design/Android pack (yet) in Indigo Studio, I've used a screenpart library, specifically this one: indigodesigned.com/community, made by reiss.cashmore. As you can see in the study, I've used a Material-styled text field. Unfortunately, the participants of the study cannot enter arbitrary text in the text field; they are limited to what I defined when I recorded the steps. In this case the user can only continue if they enter 'ok'. Is this a bug? The base element this screenpart uses for text entry is an 'input box', which should not care what the user enters in the field (like a normal text field).
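To make the request concrete, here is a minimal sketch of the kind of task model points 1-3 describe. It's written in TypeScript purely as an illustration; Indigo Studio doesn't expose an API like this, and all the names here are made up:

```typescript
// Hypothetical task model -- an illustration of the behavior requested
// in points 1-3 above, not anything Indigo Studio actually exposes.

type Step =
  | { kind: "clickAnyOf"; elements: string[] }                  // point 1: any element in a group
  | { kind: "enterText"; element: string; optional?: boolean }; // points 2 & 3: optional, free text

interface StepGroup {
  steps: Step[];
  ordered: boolean; // point 2: false = steps may be completed in any order
}

type UserEvent =
  | { type: "click"; element: string }
  | { type: "input"; element: string; text: string };

// True when a single participant event fulfils a single step.
function matches(step: Step, event: UserEvent): boolean {
  switch (step.kind) {
    case "clickAnyOf":
      return event.type === "click" && step.elements.includes(event.element);
    case "enterText":
      // Any non-empty text counts; no exact match against recorded input.
      return event.type === "input" && event.element === step.element
        && event.text.length > 0;
  }
}

// A group is complete when every required step is fulfilled -- in the
// recorded order if `ordered`, in whatever order the user chose otherwise.
function groupComplete(group: StepGroup, events: UserEvent[]): boolean {
  const required = group.steps.filter(
    (s) => !(s.kind === "enterText" && s.optional)
  );
  if (!group.ordered) {
    return required.every((step) => events.some((e) => matches(step, e)));
  }
  let next = 0;
  for (const e of events) {
    if (next < required.length && matches(required[next], e)) next++;
  }
  return next === required.length;
}
```

The feedback task above would then be a single unordered group: one `clickAnyOf` covering all the emoticons, plus an optional `enterText` for the comment field.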
I hope you can comment on these issues so I can set up a usability study that is useful for us and does not frustrate the user.
I understand your question better now. If interacting with the text field is optional, I would leave it out of the task flow. If it's necessary to interact, the prototype needs to have some interaction defined for the text box, for example 'focuses on' or 'changes'.
One known issue is that we incorrectly show a misclick marker for UI elements that have some built-in behavior. Text fields behave like text fields should, whether you have explicitly defined an interaction or not, but as a moderator you can choose to ignore those markers in your click map report.
Ah, I have discovered that if I only 'activate' the text field by clicking it (rather than entering text) when I record the task flow, the participant can enter arbitrary text in the text field and still continue the study. An example is here:
https://indigodesigned.com/study/run/wpma8jbe50np/
Thank you George Abraham for your elaborate answer. I hope the recording/replaying of user sessions will be a success, because it sounds like exactly what I want in most cases for our prototypes. Of course the closed task studies are also very helpful.
I've made a video about my last comment. Here you can see some weird things happening. I've recorded myself as a participant in the study and clicked exactly the same things as I did when I recorded the flow for this study. I've added the click map for you to view the results. The weird things that happen are:
- an incorrect failed click
- I cannot enter arbitrary text; it needed to be exactly what I entered when I recorded the flow of the study (which is 'ok')
- some clicks on the send button were not recorded, even though I did click the button
Here is the video: https://youtu.be/NCAs5cc9m8k
I've also added the project files for you to analyze the components used in this prototype.
Here’s a slightly longer explanation of why we expect defined steps for conducting usability studies; let us know whether it clarifies our intentions.
The current usability studies, as designed, align better with closed tasks than with open-ended tasks.
More importantly, closed tasks significantly reduce the burden of prototyping by focusing on the obvious flows first. As part of the study, we are testing whether users and designers agree about what’s obvious. Furthermore, we don't have to spend time designing all the combinations to answer the question about usability. In the time required to create one complex prototype, we would like our users to create three simpler alternatives :D
See the image showing a simple flow to buy an iPhone 7.
It looks simple, but you can imagine the amount of time one would need to spend designing all the possible flows, without learning anything more about usability.
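As a rough back-of-envelope illustration (my numbers here, not taken from the actual image): if the purchase flow has three independent choices, say 3 colors, 3 storage sizes, and 2 payment methods, plus a free choice of the order in which to make them, that is already 3 × 3 × 2 = 18 option combinations times 3! = 6 orderings, i.e. up to 108 distinct paths to wire up by hand, and not one of them tells you more about usability than a single well-chosen closed task.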
As you can tell, there is always a trade-off between speed of prototyping (and iterating) and flexibility in using the prototype, and the usability tasks are a representation of this trade-off. It’s better to leave the logic complexities to actual development.
You briefly touch on this yourself: a task description like "You are very happy about this product..." sounds a bit artificial, but from a usability point of view it still answers whether it was obvious to the user what to do. That said, I admit that for this specific example it feels more natural to let users select any of the feedback options. We will look into this.
Hi!
Thank you for your feedback, and excellent observations.
About the usability study: currently we don’t support multiple paths, because it would make it impossible to compare patterns of usage across users with Click maps. If we also allow “choose your own journey” style studies, which I admit feel more natural, we need an alternate way of representing the results.
We are experimenting with recording/replaying user sessions, which will be a better fit. With that in place, we will be able to relax the requirement to record a specific flow, and instead pick a start and an end point for a task. That, I believe, should serve your specific need.
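To sketch the idea (this is only one possible shape for it, with made-up names, not how the shipped feature will necessarily work): once a full session is recorded, task success can be derived after the fact by scanning the event stream for the chosen start and end points, leaving everything in between up to the participant:

```typescript
// Hypothetical post-hoc analysis of a recorded session -- a sketch of
// the "pick a start and end point" idea, not the actual feature.

interface SessionEvent {
  element: string;   // id of the UI element the participant interacted with
  timestamp: number; // milliseconds since session start
}

interface TaskResult {
  completed: boolean;
  durationMs?: number;
  stepsTaken?: number;
}

// A task "succeeds" if the end element is reached at some point after
// the start element; the path in between is the participant's own.
function analyzeTask(
  session: SessionEvent[],
  startElement: string,
  endElement: string
): TaskResult {
  const start = session.findIndex((e) => e.element === startElement);
  if (start === -1) return { completed: false };

  const end = session.findIndex(
    (e, i) => i > start && e.element === endElement
  );
  if (end === -1) return { completed: false };

  return {
    completed: true,
    durationMs: session[end].timestamp - session[start].timestamp,
    stepsTaken: end - start,
  };
}
```

Because the intermediate events are all retained, per-path click maps could still be generated afterwards.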
I did not follow your comment about being unable to type text. Was the intention to disable the text box until the user selects a satisfaction icon? In the example you shared, I am able to pick the third icon and type in the text box.
We haven’t forgotten about the Android pack; it’s on our near-term to-do list. We are doing some groundwork to make Screenparts more powerful, which will make custom controls more useful.