[ux] User Needs Discovery Session

José René Gutiérrez hola at josernitos.com
Wed Feb 24 06:32:13 UTC 2021


Hi Nimisha! So great to see so much progress since the last meetup.

I've got to say, this is a beautiful conversation about how to understand
human behaviour. I love it.

Nimisha, if you enjoy watching people exist while they do the most
typical and normal things, like washing their hands, how they talk, or
even how they cook, then you'll have fun watching people use a browser.
So I hope it's an interesting experience and you continue doing them.

Sajolida, everything is subjective and biased. But your concerns are more
about screening, as you said: mentioning Tails/Tor beforehand might
attract people who just want to receive your incentive. Another concern
and decision is whether to screen out tech-savvy people. I imagine you
want to understand how a non-technical user understands the technology.
And this is great, but it will depend on Nim's goals.

What you bring up about demographics is interesting. First of all,
because normally what we want to understand is behaviour, not where an
individual lives. But I think it could matter; after all, how you can
access the Internet depends on the laws of each country. A person from
Costa Rica is definitely different from one in China. If you are doing a
User Needs Discovery, I think demographics could play a role. But it
depends on what your goal is. I recommend this interesting article by
Indi Young
<https://medium.com/inclusive-software/describing-personas-af992e3fc527>
where she talks about personas and whether demographics play a role or
not. It truly depends on what you (Nim) are after.

Nim, I'm curious about how you want to frame this research. I think the
resource Sajolida shared, on task scenarios, is great. How you frame what
you want to know is everything. You want to shed light on what we could
improve. But you also want to make the session as close to reality as you
can. The main reason is that people tend to think we are evaluating their
ability to do things, or they want to help you as a researcher, so they
try really hard; but what we are really evaluating is the design. So try
not to make it about a task in particular, but about a situation they
would encounter in real life.

Finally, this would be your first UR, right? I hope you have fun. You will
learn a lot about Tor and about humans. What's not to like about that?

Anyway, sorry, I got too excited and nerded out.
:)

José

On Tue, Feb 23, 2021 at 8:07 PM sajolida <sajolida at pimienta.org> wrote:

> Nimisha Vijay:
>  > Hi all! I'm Nimisha, I was there at the monthly UX meeting last time
>  > as nim, and I wanted to work on user research.
>  >
>  > Here's the plan I had for conducting the sessions (it's practically my
>  > first time, I would love feedback :))
>  > https://pad.riseup.net/p/nimsTorURAgenda-keep
> <https://pad.riseup.net/p/nimsTorURAgenda-keep>
>  >
>  > Let me know if there are any issues or clarifications
>
> Hi, I'm sajolida and I do UX for Tails (though not for Tor).
>
> That sounds exciting!
>
> Some notes based on what I saw in the pad:
>
> - Here is an example LimeSurvey questionnaire that I use for screening
>    test participants for Tails:
>
>    * PDF: https://un.poivron.org/~sajolida/screening.pdf
>    * LimeSurvey structure: https://un.poivron.org/~sajolida/screening.lss
>
>    I try not to bias their answers by not saying in the questionnaire
>    that the tests are about Tails (or Tor in this case). I've seen
>    people cheat on their answers because they really wanted to be
>    selected.
>
>    For example, comparing with
>
> https://gitlab.torproject.org/tpo/ux/research/-/blob/master/scripts%20and%20activities/2020/user_demographics-en.md
> :
>
>    * I'm asking them about their familiarity with many different privacy
>      tools instead of asking them about Tails or Tor explicitly. This
>      list also helps me spot cheaters (e.g. PirateChat in the list
>      doesn't exist) or know a bit more about their technological profile.
>
>    * I avoid asking people to evaluate their own tech-savviness because
>      it might be very subjective and biased. I prefer relying on their
>      knowledge of other privacy tools and their favorite OS. I try to
>      avoid Linux users, for example, as they tend to be more tech-savvy.
>      I ask them about their relationship with computers when I do the
>      sound check (see below).
>
>    * I don't ask demographic questions because I don't find them useful
>      for what I want to know about users.
>
>    * You can ask about the recording in the screening form already. That
>      can save you some rejections down the line. Given that you want to
>      study discovery and first-time use, I would definitely ask to record
>      the session as it might be super useful to share extracts with devs.
>
> - You can reuse my consent form (itself adapted from Bernard Tyers):
>
> https://gitlab.tails.boum.org/tails/ux/-/raw/master/tools/consent_form.fodt
>
> - Using Jitsi, you can record the session directly into Dropbox. I know
>    that Tor uses BigBlueButton for their internal meetings. You could
>    also ask them whether their instance of BigBlueButton allows
>    recording. BigBlueButton is less hungry than Jitsi in terms of
>    bandwidth and CPU.
>
> - Writing good tasks is critical to have good tests. See for example:
>
>    https://www.nngroup.com/articles/task-scenarios-usability-testing/
>
>    I'm happy to review your tasks once you have them drafted. I never
>    disclose any of the tasks in advance to test participants and always
>    give them one after the other. This prevents priming and allows me to
>    adjust on the go depending on the participant.
>
> - For remote tests, I always schedule a sound check a few days before
>    the tests. But if you have plenty of time and participants and can
>    spare a bit of trial and error, maybe it's not worth it.
>
> - It usually takes me about 15 minutes to get started before the actual
>    tasks: sign the consent form, explain the methodology, etc. Recruiting
>    participants also takes a significant amount of time if you want to do
>    a good screening and sound check, so I usually schedule them for 2
>    hours each. To make the most of my time with each participant, I tend
>    to write more tasks than what all participants will be able to do.
>    Some participants might do a few more tasks than others, and that's
>    fine.
>
> - I always offer a monetary incentive to test participants. It makes it
>    easier to recruit diverse participants who are not familiar with the
>    technology yet, and it reduces the number of people who don't show up.
>    I wonder if Tor could buy you some Amazon gift cards or send out some
>    perks to test participants.
>
> - I never ask test participants to take notes. They should be busy using
>    the interface while you take notes. Save some time at the end of the
>    tests to debrief some interesting parts with them. Until the end of
>    the tests, they should stick to using the interface and thinking
>    aloud.
>
> Good luck with all that!
>
> --
> sajolida
> Tails — https://tails.boum.org/
> UX · Fundraising · Technical Writing
>
> _______________________________________________
> UX mailing list
> UX at lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/ux
>