These are in response to a conversation with phw and cohosh about observing how people use Tor Browser.
Here are links relating to the 2015 UX Sprint, at which we did one-on-one observations of a few users:
https://blog.torproject.org/ux-sprint-2015-wrapup
https://trac.torproject.org/projects/tor/wiki/org/meetings/2015UXsprint

The videos of the sessions are here:
https://people.torproject.org/~dcf/uxsprint2015/

The source code for our screen recording setup is here (I'm attaching the README):
https://www.bamsoftware.com/git/repo.eecs.berkeley.edu/tor-ux.git
Here's a rough description of the whole process. We ran user experiments over two days. We were hampered by late recruiting (Berkeley only approved the experiment literally the day before), so we had only five participants: three on Saturday and two on Sunday. But that number turned out to be plenty because of how labor-intensive the process was.
We had all the devs in a room downstairs with a projector. The experiments were run upstairs, in a smaller room, on a laptop. We screencast the laptop screen to the projector downstairs (using VLC streaming through an SSH tunnel), so developers could watch user interactions live, and we also recorded the whole thing. We recorded the subjects' voices on handheld audio recorders (we encouraged them to think out loud as they were using the software). The idea was to transcribe the audio and add it to the screen recordings as subtitles. That was a huge amount of work, but the videos are really interesting and enlightening.
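If you want the general shape of the streaming setup without digging through tor-ux.git, here is a minimal sketch. The hostnames, ports, and codec settings are made up for illustration, and I'm using a reverse tunnel here (so the laptop initiates the SSH connection); the actual scripts in the repo may arrange it differently:

    #!/usr/bin/env python3
    # Sketch of the screencast pipeline: VLC captures the experiment
    # laptop's screen and serves it as an HTTP stream on localhost; a
    # reverse SSH tunnel makes the stream reachable from the downstairs
    # machine. Hostnames/ports are illustrative, not from tor-ux.git.

    import subprocess

    STREAM_PORT = 8080  # local HTTP port VLC serves on
    DOWNSTAIRS = "watcher@downstairs.example.com"  # hypothetical host

    # 1. Capture the screen and serve it as an MPEG-TS stream over HTTP,
    #    bound to localhost only (the tunnel delivers remote viewers here).
    vlc = subprocess.Popen([
        "vlc", "--intf=dummy",          # no GUI on the experiment laptop
        "screen://", "--screen-fps=10", # capture the desktop at 10 fps
        "--sout",
        "#transcode{vcodec=h264,vb=800}"
        ":std{access=http,mux=ts,dst=localhost:%d}" % STREAM_PORT,
    ])

    # 2. Reverse tunnel: port 8080 on the downstairs machine now forwards
    #    back to the laptop's VLC stream.
    ssh = subprocess.Popen([
        "ssh", "-N",
        "-R", "%d:localhost:%d" % (STREAM_PORT, STREAM_PORT),
        DOWNSTAIRS,
    ])

    try:
        vlc.wait()
    finally:
        ssh.terminate()

Downstairs, someone points VLC at http://localhost:8080 to feed the projector; the same stream can be recorded to disk on either end. The transcribed audio then just needs to be synced as subtitles in any standard format players understand (e.g. an SRT file loaded alongside the video).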
We gave participants the choice of a Windows or Mac laptop. It happens that all of them chose Mac, though we had Windows ready to go. Windows was a big pain to set up with VLC and SSH; I might do that differently a second time.
We originally planned to run up to two experiments simultaneously. That would have been way too hectic. Greeting people, doing paperwork, running the experiment, and resetting the computer between sessions is a lot of work; we really needed an extra person downstairs the whole time. With more researchers (i.e., people allowed to interact with participants) it would be possible.
We budgeted one hour of one-on-one time with each participant. The actual experiment took between 15 and 30 minutes for each (that's how long the resulting videos are), but that doesn't count all the paperwork and setup time. 60 minutes of actual experiment would have been too long, though it might work at that length if sessions were not one-on-one. There was also high variance in how long things took: some users took longer than others to install, and one spent a long time trying to find a specific UI element.
The repo for our PETS 2017 paper is here:
https://github.com/lindanlee/PETS2017-paper

The screen-capture videos from a pilot recording session are here:
https://github.com/lindanlee/PETS2017-paper/tree/master/sessions/pre/videos

Scripts for setting up the simulated censorship firewall are here:
https://github.com/lindanlee/PETS2017-paper/tree/master/experiment
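The scripts in that directory are the authoritative version; as a rough illustration of the approach, a simulated censorship firewall can be as little as a few iptables rules on a gateway box that drop traffic a censor might block. The port choices and rule structure below are illustrative assumptions, not what the paper actually used:

    #!/usr/bin/env python3
    # Illustrative sketch of a "simulated censor" gateway: drop forwarded
    # TCP traffic to ports commonly associated with Tor relays. The real
    # experiment scripts live in the PETS2017-paper repo; these rules are
    # simplified examples, not the paper's configuration.

    import subprocess
    import sys

    BLOCKED_TCP_PORTS = [9001, 9030]  # Tor's default ORPort and DirPort

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def block():
        for port in BLOCKED_TCP_PORTS:
            run(["iptables", "-A", "FORWARD", "-p", "tcp",
                 "--dport", str(port), "-j", "DROP"])

    def unblock():
        # Reset between participants so everyone starts from the same state.
        run(["iptables", "-F", "FORWARD"])

    if __name__ == "__main__":
        if len(sys.argv) == 2 and sys.argv[1] == "on":
            block()
        elif len(sys.argv) == 2 and sys.argv[1] == "off":
            unblock()
        else:
            sys.exit("usage: censor.py on|off")

The on/off toggle matters for the same reason resetting the laptop did: each participant should face exactly the same (simulated) censorship conditions.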