[anti-censorship-team] Resources related to user testing and screen recording

David Fifield david at bamsoftware.com
Sun Jul 14 11:46:09 UTC 2019


These are in response to a conversation with phw and cohosh about
observing how people use Tor Browser.

Here are links relating to the 2015 UX Sprint, at which we did
one-on-one observations of a few users.
	https://blog.torproject.org/ux-sprint-2015-wrapup
	https://trac.torproject.org/projects/tor/wiki/org/meetings/2015UXsprint
The videos of the sessions are here:
	https://people.torproject.org/~dcf/uxsprint2015/
The source code for our screen recording setup is here. I'm attaching
the README.
	https://www.bamsoftware.com/git/repo.eecs.berkeley.edu/tor-ux.git

Here's a rough description of the whole process.
	We ran user experiments over two days. We were hampered by late
	recruiting (Berkeley approved the experiment literally the day
	before), so we had only five participants: three on Saturday
	and two on Sunday. But that number turned out to be plenty,
	given how labor-intensive the process was.

	We had all the devs in a room downstairs with a projector. The
	experiments were run upstairs in a smaller room on a laptop.
	What we did was screencast the laptop screen to the projector
	downstairs (using VLC streaming through an SSH tunnel).
	Developers could watch user interactions live, and we also
	recorded the whole thing. We audio recorded the subjects' voices
	on handheld audio recorders (we encouraged them to think out
	loud as they were using the software). The idea was to
	transcribe the audio and add it to the screen recordings as
	subtitles. That was a huge amount of work, but the videos are
	really interesting and enlightening.

	We gave participants the choice of a Windows or Mac laptop. It
	happens that all of them chose Mac, though we had Windows ready
	to go. Windows was a big pain to set up with VLC and SSH; I
	might do that differently a second time.

	We originally planned to run up to two experiments
	simultaneously. That would have been way too hectic. It's a lot
	of work greeting people, doing paperwork, doing the experiment,
	making sure you reset the computer in between experiments--we
	really needed an extra person downstairs all that time. With
	more researchers (i.e., people allowed to interact with
	participants) it would be possible.

	We budgeted one hour of one-on-one time with each participant.
	It ended up taking between 15 and 30 minutes of actual
	experiment for each (that's how long the resultant videos are),
	but that doesn't count all the paperwork and setup time. 60
	minutes of actual experiment would have been too long. It could
	be that long if it were not one on one. There was also high
	variance in how long things took to complete: some users took
	longer than others to install, and one took a long time trying
	to find a specific UI element.

The repo for our PETS 2017 paper is here:
	https://github.com/lindanlee/PETS2017-paper
The screen capture videos from a pilot recording session are here:
	https://github.com/lindanlee/PETS2017-paper/tree/master/sessions/pre/videos
Scripts relating to setting up the simulated censorship firewall are
here:
	https://github.com/lindanlee/PETS2017-paper/tree/master/experiment
-------------- next part --------------
Instructions for doing a video-recorded user study. This is written from
memory and is incomplete. This is what we did for the 2015 Tor UX
Sprint:

https://trac.torproject.org/projects/tor/wiki/org/meetings/2015UXsprint
https://blog.torproject.org/blog/ux-sprint-2015-wrapup

The output of the process is silent screencast videos with text captions
that record what was spoken during the experiment.

Linda Naeun Lee <lnl at berkeley.edu>
David Fifield <fifield at eecs.berkeley.edu>


== Setting up on Mac OS X 10.10 ==

Create a remoteviewer account.

Create /Users/USER/ux as the user you're going to use.

Enable ssh, in System Preferences under "Sharing". Restrict ssh to the
remoteviewer account.

Turn on the firewall and block everything but ssh.

Install VLC.

Run stream.sh in a Terminal. It will create a local capture file and
also start streaming to http://127.0.0.1:8080/.
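For reference, here is a minimal sketch of what such a script can look
like. The real stream.sh is in the repo linked above; the VLC options
below (codec, frame rate, sout chain) are assumptions from memory, not
the script's actual contents:

```shell
#!/bin/sh
# Hypothetical reconstruction of stream.sh -- see the tor-ux.git repo
# for the real script. Captures the screen with VLC, duplicating the
# stream to a local file and to an HTTP server on 127.0.0.1:8080,
# which the viewing computer reaches through the ssh tunnel.
OUT="capture-local-$(date +'%Y%m%d-%H%M%S').mp4"

# Guarded so the sketch is a no-op on machines without VLC installed.
if command -v vlc >/dev/null 2>&1; then
    vlc screen:// --screen-fps 10 --sout \
        "#transcode{vcodec=h264,vb=800}:duplicate{dst=std{access=file,mux=mp4,dst=$OUT},dst=std{access=http,mux=ffmpeg{mux=mp4},dst=127.0.0.1:8080/}}"
else
    echo "vlc not found; would have recorded to $OUT" >&2
fi
```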

On your viewing computer, run
	ssh -L 8080:127.0.0.1:8080 -N <Mac IP address>
This sets up an ssh tunnel to the streaming VLC. Then, also on the
viewing computer, run
	wget http://127.0.0.1:8080/ -O capture-mac-$(date +'%Y%m%d-%H:%M:%S').mp4

Now you have the video being saved in two places: on the experiment
computer itself, and on the viewer computer.

We didn't figure out how to terminate stream.sh cleanly, so the capture
file will be missing necessary metadata at the end of the file. You can
reencode with avconv to fix that. The copy you download should be fine.
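The exact repair command wasn't preserved; the following is a guess,
reusing the 800k bitrate from the captioning step below (ffmpeg accepts
the same arguments where avconv is unavailable):

```shell
# Hypothetical repair command: re-encoding makes avconv write a
# complete index and duration into the new file's container.
# Guarded so it is a no-op on machines without avconv installed.
if command -v avconv >/dev/null 2>&1; then
    avconv -i capture-mac.mp4 -b 800k capture-mac-fixed.mp4
else
    echo "avconv not installed; skipping re-encode" >&2
fi
```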


== Setting up on Windows 8 ==

Create a remoteviewer account.

Create C:\Users\USER\ux as the user you're going to use. Copy cursor.png
into the directory. cursor.png is needed in order for the mouse pointer
to show up in the videos.

Install Cygwin and its openssh and cygrunsrv packages.

http://www.noah.org/ssh/cygwin-sshd.html
http://cygwin.wikia.com/wiki/Sshd

Install VLC.

Run stream.bat in a console. We couldn't figure out how to make the
console and VLC icons disappear while it's running. You might have to
edit the paths in stream.bat.

On your viewing computer, run
	ssh -L 8081:127.0.0.1:8080 -N <Windows IP address>
This sets up an ssh tunnel to the streaming VLC. Then, also on the
viewing computer, run
	wget http://127.0.0.1:8081/ -O capture-windows-$(date +'%Y%m%d-%H:%M:%S').mp4

Now you have the video being saved in two places: on the experiment
computer itself, and on the viewer computer.


== Cleanup between experiments ==

Delete files they downloaded.

Uninstall Tor Browser (delete from /Applications).

Empty Trash.

Open each browser and clear its history/reset it.

Quit any running apps.

=== Re-enable Gatekeeper on OS X ===

If they changed the global "Allow apps downloaded from" setting on the
Security & Privacy "General" pane, just change it back to "Mac App
Store".

If they did "Open Anyway" on the app, then run the command
	spctl --disable /path/to/Tor\ Browser.app


== Syncing audio/video masters ==

Use Kdenlive (https://kdenlive.org/, apt-get install kdenlive).

Go to "Settings → Manage Project Profiles" and create a new profile.
	Description: "MacBook Air"
	Size: 1440×900
	Frame rate: 10/1
	Pixel aspect ratio: 1/1
	Display aspect ratio: 16/10
	Colorspace: Unknown
	Progressive
If you used a machine other than a MacBook Air, create a profile to
match.

Start a new project. "Project → Project Settings" and choose the Video
Profile you created.

Add the original video and audio files ("Project → Add Clip"). Drag the
clips down to the audio and video tracks.

Drag the audio horizontally until it lines up with the video. You can
find a button click or something to match on.

Highlight the zone you want to export by pressing "i" at the beginning
and "o" at the end (like "in" and "out").

Then do "Project → Render" and "Render to File".


== Making captioned videos ==

You have to first transcribe all the speech. Play the video in VLC at a
slow speed, like 50%, or as slow as you need to keep up with typing.
Dump the text into a file. It helps if you put line breaks where there
are pauses, and blank lines when another speaker starts speaking.

Then you have to add timing information to the text. We used the
programs captions.py and date.py. But don't do that; find a better way.
We didn't try it, but https://amara.org/ seems to be commonly used. You
can possibly set up your own, to avoid sending original files to an
untrusted service.

The output of the timing process is files in .srt and .vtt formats. .srt
can be loaded separately in a video player, and it's also what we'll use
to build embedded Kate captions into an Ogg Theora file. .vtt is WebVTT,
which you use to add captions to HTML5 video. Kate is an Ogg codec for
captions and overlays that can be multiplexed with Theora video.

https://en.wikipedia.org/wiki/SubRip#SubRip_text_file_format
https://developer.mozilla.org/en-US/docs/Web/API/Web_Video_Text_Tracks_Format
https://wiki.xiph.org/OggKate#Text_movie_subtitles
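For orientation, here is the same (invented) caption cue in both
formats. SRT numbers its cues and uses a comma in timestamps; WebVTT
uses a period and requires a WEBVTT header line:

```
1
00:00:12,000 --> 00:00:15,500
Okay, I'm clicking the Connect button now.
```

```
WEBVTT

00:00:12.000 --> 00:00:15.500
Okay, I'm clicking the Connect button now.
```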

Strip the audio from your original video master and encode to Theora:
	avconv -i file.mp4 -b 800k -an file-silent.ogg

Build a Kate stream and multiplex it into the silent video.
	kateenc -t srt -M -c SUB -l en_US -o file-captions.ogg file.srt
	oggz merge -o file.ogg file-silent.ogg file-captions.ogg

To make an HTML5 video with captions:
	<video controls>
		<source src="file.ogg">
		<track src="file.vtt" kind="captions" srclang="en" default>
	</video>

