Hi,
Tails
=====
Misc
----
- Interviewed an activist in struggle against a nuclear project in
France who uses Tails and trains others:
https://tails.net/contribute/how/user_experience/interviews/robin/
It was the first time we interviewed someone who had initially
contacted us by email through our Help Desk. It proved to be a
success!
- Explained why and how we've used https://tails.net/home as the
homepage of Tor Browser in Tails until now. (#20822)
Project 165
-----------
- Submitted for review a huge branch with improvements to the Tails
website (#20766), which covers:
Navigation:
* Add a secondary navigation for the Documentation section. (#20393)
* Add a hamburger menu for this navigation on mobile. (#20065)
* Move Contribute from our top navigation to the footer only. (#20835)
Typography:
* Use a bigger font everywhere on our website. (#17665)
* Use a variable font on our website. (#16188)
* Make H1 and H2 headings easier to differentiate. (#20805)
Accessibility:
* Add H1 heading to the beginning of page content. (#7506)
* Add WCAG "skip to main content" quick link. (#7507)
Screenshots:
https://gitlab.tails.boum.org/tails/tails/-/merge_requests/2046
- Did some follow-up on Tor Browser running as Flatpak:
* Updated our documentation. (#20867)
https://tails.net/doc/anonymous_internet/Tor_Browser/#confinement
* Moved forward with removing the GNOME bookmarks for "Tor Browser" and
"Tor Browser (persistent)". (#15028)
Donate Neo
==========
- Proposed improvements to the CAPTCHA. (donate-neo#65)
- Documented 11 small issues and proposed solutions. (donate-neo#168)
--
sajolida
The Tor Project — UX Designer
Hey everyone!
Here are our meeting logs:
http://meetbot.debian.net/tor-meeting/2025/tor-meeting.2025-04-24-16.00.html
And our meeting pad:
Anti-censorship work meeting pad
--------------------------------
Anti-censorship
--------------------------------
Next meeting: Thursday, May 1 16:00 UTC
Facilitator: shelikhoo
^^^(See Facilitator Queue at tail)
Weekly meetings, every Thursday at 16:00 UTC, in #tor-meeting at OFTC
(channel is logged while meetings are in progress)
This week's Facilitator: onyinyang
== Goal of this meeting ==
Weekly check-in about the status of anti-censorship work at Tor.
Coordinate collaboration between people/teams on anti-censorship at the
Tor Project and Tor community.
== Links to Useful documents ==
* Our anti-censorship roadmap:
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/boards
* The anti-censorship team's wiki page:
*
https://gitlab.torproject.org/tpo/anti-censorship/team/-/wikis/home
* Past meeting notes can be found at:
* https://lists.torproject.org/pipermail/tor-project/
* Tickets that need review from the projects we are working on:
* All needs review tickets:
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/merge_requests?s…
* Project 158 <-- meskio working on it
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/issues/?label_na…
== Announcements ==
*
== Discussion ==
- About
SnowflakeStaging(https://gitlab.torproject.org/shelikhoo/snowflakestaging)
* Please have a trial of the Snowflake Packet Transport Mode!
* Relicense as MIT
* Anything we want to modify?
* Anything we want to add?
* We will discuss this again next week to get feedback since other
snowflake developers were away today
== Actions ==
== Interesting links ==
== Reading group ==
* We will discuss "" on
* Questions to ask and goals to have:
* What aspects of the paper are questionable?
* Are there immediate actions we can take based on this work?
* Are there long-term actions we can take based on this work?
* Is there future work that we want to call out in hopes
that others will pick it up?
== Updates ==
Name:
This week:
- What you worked on this week.
Next week:
- What you are planning to work on next week.
Help with:
- Something you need help with.
cecylia (cohosh): 2025-04-03
Last week:
- fixed caching for snowflake shadow integration tests
(snowflake#40457)
- fixed rust installation bug in shadow CI tests (snowflake#40456)
- added option in conjure pt to switch between three supported
transports (conjure#10)
- set up a test conjure registration server
This week:
- support conjure work
- follow up on snowflake rendezvous failures
- reduce/remove use of mutexes for broker metrics
(snowflake#40458)
- take a look at potential snowflake orbot bug
- https://github.com/guardianproject/orbot-android/issues/1183
dcf: 2025-04-17
Last week:
Next week:
- open issue to have snowflake-client log whenever KCPInErrors
is nonzero
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
- parent:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
Help with:
meskio: 2024-04-03
Last week:
- create a container for bridgestrap
- investigate why email is not going out in gettor/email
distributor (tpa/team#42109)
- lyrebird: support dual stack IPv4/v6 in non-linux
(lyrebird#40023)
- update obfs4-bridge docker image (docker-obfs4-bridge#21)
Next week:
- steps towards a rdsys in containers (rdsys#219)
Shelikhoo: 2024-04-24
Last Week:
- [Testing] Unreliable+unordered WebRTC data channel transport
for Snowflake rev2 (cont.)(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
) testing environment setup/research
- Snowflake Staging Server (cont.)
Discussion(https://gitlab.torproject.org/tpo/tpa/team/-/issues/42080)
- Snowflake Staging Server Experiment
- [Merge Request] Add updated docker compose file for
snowflake(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
)
- [Investigate] Dependency proxy is broken: "Error: invalid
reference format"
(https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…)
- [Merge Request] CI: fix invalid group name by removing trailing
slash (
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
)
- Write Document for Snowflake Staging Environment
- Vantage monitoring
- Merge request reviews
Next Week/TODO:
- Merge request reviews
- [Testing] Unreliable+unordered WebRTC data channel transport
for Snowflake rev2 (cont.)(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
) improvements
- Snowflake Staging Server Experiment (Almost Done ^~^)
onyinyang: 2025-04-03
Last week(s):
- Continued trying to fix new issues with phantombox setup on
fedora (to no avail)
- Started looking at turbotunnel implementation for conjure
-
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/conj…
Next week:
- Procure an Ubuntu/Debian system and test things with phantombox
As time allows:
- review Tor browser Lox integration
https://gitlab.torproject.org/tpo/applications/tor-browser/-/merge_requests…
- add TTL cache to lox MR for duplicate responses:
https://gitlab.torproject.org/tpo/anti-censorship/lox/-/merge_requests/305
- Work on outstanding milestone issues:
- key rotation automation
Later:
pending decision on abandoning lox wasm in favour of some kind
of FFI?
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/43096):
- add pref to handle timing for pubkey checks in Tor browser
- add trusted invitation logic to tor browser integration:
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/42974
- improve metrics collection/think about how to show Lox is
working/valuable
- sketch out Lox blog post/usage notes for forum
(long term things were discussed at the meeting!):
- brainstorming grouping strategies for Lox buckets (of
bridges) and gathering context on how types of bridges are
distributed/used in practice
Question: What makes a bridge usable for a given user, and
how can we encode that to best ensure we're getting the most appropriate
resources to people?
1. Are there some obvious grouping strategies that we
can already consider?
e.g., by PT, by bandwidth (lower bandwidth bridges
sacrificed to open-invitation buckets?), by locale (to be matched with a
requesting user's geoip or something?)
2. Does it make sense to group 3 bridges/bucket, so
trusted users have access to 3 bridges (and untrusted users have access
to 1)? More? Less?
theodorsm: 2025-04-10
Last weeks:
- fixing bugs in covert-dtls, only mimic DTLS 1.2
- Running proxy with covert-dtls
- MR covert-dtls:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
Next weeks:
- Write instructions on how to configure covert-dtls with
snowflake client (are we going to run a user test?)
- Fix merge conflicts in MR
(https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…).
- Condensing thesis into paper
Help with:
- Test stability of covert-dtls in snowflake
Facilitator Queue:
meskio onyinyang shelikhoo
1. First available staff in the Facilitator Queue will be the
facilitator for the meeting
2. After facilitating the meeting, the facilitator will be moved to the
tail of the queue
--
onyinyang
GPG Fingerprint 3CC3 F8CC E9D0 A92F A108 38EF 156A 6435 430C 2036
Summary: start upgrading servers during the Debian 13 ("trixie")
freeze; if it goes well, complete most of the fleet upgrade around
June 2025 and the rest by the end of 2025, leaving 2026 entirely free
of major upgrades. Improve automation, retire old container images.
Deadline: 2 weeks, 2025-04-01
# Background
Debian 13 ("trixie"), currently "testing", is going into freeze soon, which
means we should have a new Debian stable release in 2025. It has been
a long-standing tradition at TPA to collaborate in the Debian
development process and part of that process is to upgrade our servers
during the freeze. Upgrading during the freeze makes it easier for us
to fix bugs as we find them and contribute them to the community.
The [freeze dates announced by the debian.org release team][] are:
2025-03-15 - Milestone 1 - Transition and toolchain freeze
2025-04-15 - Milestone 2 - Soft Freeze
2025-05-15 - Milestone 3 - Hard Freeze - for key packages and
packages without autopkgtests
To be announced - Milestone 4 - Full Freeze
We have entered the "transition and toolchain freeze", which locks
changes on packages like compilers and interpreters unless an
exception is granted. See the [Debian freeze policy][] for an
explanation of each step.
Even though we've just completed the Debian 11 ("bullseye") and 12
("bookworm") upgrades in late 2024, we feel it's a good idea to start
*and* complete the Debian 13 upgrades in 2025. That way, we can hope
to have a year or two (2026-2027?) *without* any major upgrades.
This proposal is part of the [Debian 13 trixie upgrade milestone][],
itself part of the [2025 TPA roadmap][].
[freeze dates announced by the debian.org release team]: https://lists.debian.org/debian-devel-announce/2025/01/msg00004.html
[Debian freeze policy]: https://release.debian.org/testing/freeze_policy.html
[Debian 13 trixie upgrade milestone]: https://gitlab.torproject.org/groups/tpo/tpa/-/milestones/12
[2025 TPA roadmap]: https://gitlab.torproject.org/tpo/tpa/team/-/wikis/roadmap/2025
# Proposal
As usual, we perform the upgrades in three batches, in increasing
order of complexity, starting in 2025Q2, hoping to finish by the end
of 2025.
Note that, this year, this proposal also covers upgrading the Tails
infrastructure. To help with merging rotations in the two teams, TPA
staff will upgrade Tails machines with assistance from Tails folks,
and vice versa.
## Affected users
All service admins are affected by this change. If you have shell
access on any TPA server, you will want to read this announcement.
In the past, TPA has typically kept a page detailing notable changes
and a proposal like this one would link against the upstream release
notes. Unfortunately, at the time of writing, upstream hasn't yet
produced release notes (as we're still in testing).
We're hoping the documentation will be refined by the time we're ready
to coordinate the second batch of updates, around May 2025, when we
will send reminders to affected teams.
We do expect the Debian 13 upgrade to be less disruptive than bookworm,
mainly because Python 2 is already retired.
## Notable changes
For now, here are some known changes that are already in Debian 13:
| Package | 12 (bookworm) | 13 (trixie) |
|--------------------|---------------|-------------|
| Ansible | 7.7 | 11.2 |
| Apache | 2.4.62 | 2.4.63 |
| Bash | 5.2.15 | 5.2.37 |
| Emacs | 28.2 | 30.1 |
| Fish | 3.6 | 4.0 |
| Git | 2.39 | 2.45 |
| GCC | 12.2 | 14.2 |
| Golang | 1.19 | 1.24 |
| Linux kernel image | 6.1 series | 6.12 series |
| LLVM | 14 | 19 |
| MariaDB | 10.11 | 11.4 |
| Nginx | 1.22 | 1.26 |
| OpenJDK | 17 | 21 |
| OpenLDAP | 2.5.13 | 2.6.9 |
| OpenSSL | 3.0 | 3.4 |
| PHP | 8.2 | 8.4 |
| Podman | 4.3 | 5.4 |
| PostgreSQL | 15 | 17 |
| Prometheus | 2.42 | 2.53 |
| Puppet | 7 | 8 |
| Python | 3.11 | 3.13 |
| Rustc | 1.63 | 1.85 |
| Vim | 9.0 | 9.1 |
Most of those, except "toolchains" (e.g. LLVM/GCC), can still change,
as we're not in the full freeze yet.
## Upgrade schedule
The upgrade is split in multiple batches:
- automation and installer changes
- low complexity: mostly TPA services and less critical Tails servers
- moderate complexity: TPA "service admins" machines and remaining
Tails physical servers and VMs running services from the official
Debian repositories only
- high complexity: Tails VMs running services not from the official
Debian repositories
- cleanup
The free time between the first two batches will also allow us to
cover for unplanned contingencies: upgrades that could drag on and
other work that will inevitably need to be performed.
The objective is to do the batches in collective "upgrade parties"
that should be "fun" for the team. This approach proved effective in
previous upgrades and we are eager to repeat it.
### Upgrade automation and installer changes
First, we tweak the installers to deploy Debian 13 by default to avoid
installing further "old" systems. This includes the bare-metal
installers but also and especially the virtual machine installers and
container images.
Concretely, we're planning on changing the `latest` container image
tag to point to `trixie` in early April. A full *year* later, the
`bookworm` container images will be retired. Note that we are already
planning the retirement of the "old stable" (`bullseye`) container
images, see [tpo/tpa/base-images#19][], for which you may have
already been contacted.
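For image users who want to opt out of the `latest` switch, pinning a
codename tag keeps the environment stable. A hypothetical
`.gitlab-ci.yml` fragment (the registry path is illustrative; use
whatever path your project already pulls from):

```yaml
# Pin the codename instead of "latest" so the trixie switch
# doesn't change your CI environment under you.
# (Registry path is illustrative, not the canonical one.)
image: containers.torproject.org/tpo/tpa/base-images/debian:bookworm
```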
New `idle` canary servers will be set up in Debian 13 to test
integration with the rest of the infrastructure, and future new
machine installs will be done in Debian 13.
We also want to work on automating the upgrade procedure
further. We've had catastrophic errors in the PostgreSQL upgrade
procedure in the past, in particular, but the whole procedure is now
considered ripe for automation, see [tpo/tpa/team#41485][] for
details.
[tpo/tpa/base-images#19]: https://gitlab.torproject.org/tpo/tpa/base-images/-/issues/19
[tpo/tpa/team#41485]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/41485
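The PostgreSQL part of the procedure, for instance, boils down to a
cluster upgrade that is easy to get wrong by hand. A minimal sketch of
the manual steps we'd like to automate, using Debian's
`postgresql-common` tools (cluster name and versions are illustrative):

```shell
# List existing clusters; suppose this shows a 15/main cluster
# from the bookworm-era PostgreSQL.
pg_lsclusters

# Upgrade the old cluster (PostgreSQL 15) to the trixie version
# (PostgreSQL 17): creates a new 17/main cluster, migrates the
# data, and moves the old cluster's port over to the new one.
pg_upgradecluster -v 17 15 main

# Once the new cluster is verified, drop the old (now stopped) one.
pg_dropcluster 15 main
```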
### Batch 1: low complexity
This is scheduled over two weeks: TPA boxes will be upgraded in
the last week of April, and Tails in the first week of May.
The idea is to start the upgrade long enough before the vacations to
give us plenty of time to recover, and some room to start the second
batch.
In April, Debian should also be in "soft freeze", not quite a fully
"stable" environment, but that should be good enough for simple
setups.
35 TPA machines:
```
archive-01.torproject.org
cdn-backend-sunet-02.torproject.org
chives.torproject.org
dal-rescue-01.torproject.org
dal-rescue-02.torproject.org
gayi.torproject.org
hetzner-hel1-02.torproject.org
hetzner-hel1-03.torproject.org
hetzner-nbg1-01.torproject.org
hetzner-nbg1-02.torproject.org
idle-dal-02.torproject.org
idle-fsn-01.torproject.org
lists-01.torproject.org
loghost01.torproject.org
mandos-01.torproject.org
media-01.torproject.org
minio-01.torproject.org
mta-dal-01.torproject.org
mx-dal-01.torproject.org
neriniflorum.torproject.org
ns3.torproject.org
ns5.torproject.org
palmeri.torproject.org
perdulce.torproject.org
srs-dal-01.torproject.org
ssh-dal-01.torproject.org
static-gitlab-shim.torproject.org
staticiforme.torproject.org
static-master-fsn.torproject.org
submit-01.torproject.org
vault-01.torproject.org
web-dal-07.torproject.org
web-dal-08.torproject.org
web-fsn-01.torproject.org
web-fsn-02.torproject.org
```
4 Tails machines:
```
ecours.tails.net
puppet.lizard
skink.tails.net
stone.tails.net
```
In the [first batch of bookworm machines][], we ended up taking 20
minutes per machine, done in a single day, but warned that the second
batch took longer.
It's probably safe to estimate 20 hours (30 minutes per machine) for
this work, in a single week.
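For reference, the back-of-the-envelope arithmetic behind that
estimate, using the machine counts from the lists above:

```python
# Batch 1 sizing: 35 TPA machines + 4 Tails machines,
# at roughly 30 minutes per machine.
machines = 35 + 4
hours = machines * 0.5
print(machines, hours)  # 39 machines, 19.5 hours, rounded up to ~20
```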
Feedback and coordination of this batch happens in [issue batch 1][].
[first batch of bookworm machines]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/41251
[issue batch 1]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/42071
### Batch 2: moderate complexity
This is scheduled for the last week of May for TPA machines, and the
first week of June for Tails.
At this point, Debian testing should be in "hard freeze", which should
be more stable.
40 TPA machines:
```
anonticket-01.torproject.org
backup-storage-01.torproject.org
bacula-director-01.torproject.org
btcpayserver-02.torproject.org
bungei.torproject.org
carinatum.torproject.org
check-01.torproject.org
ci-runner-x86-02.torproject.org
ci-runner-x86-03.torproject.org
colchicifolium.torproject.org
collector-02.torproject.org
crm-int-01.torproject.org
dangerzone-01.torproject.org
donate-01.torproject.org
donate-review-01.torproject.org
forum-01.torproject.org
gitlab-02.torproject.org
henryi.torproject.org
materculae.torproject.org
meronense.torproject.org
metricsdb-01.torproject.org
metricsdb-02.torproject.org
metrics-store-01.torproject.org
onionbalance-02.torproject.org
onionoo-backend-03.torproject.org
polyanthum.torproject.org
probetelemetry-01.torproject.org
rdsys-frontend-01.torproject.org
rdsys-test-01.torproject.org
relay-01.torproject.org
rude.torproject.org
survey-01.torproject.org
tbb-nightlies-master.torproject.org
tb-build-02.torproject.org
tb-build-03.torproject.org
tb-build-06.torproject.org
tb-pkgstage-01.torproject.org
tb-tester-01.torproject.org
telegram-bot-01.torproject.org
weather-01.torproject.org
```
17 Tails machines:
```
apt-proxy.lizard
apt.lizard
bitcoin.lizard
bittorrent.lizard
bridge.lizard
dns.lizard
dragon.tails.net
gitlab-runner.iguana
iguana.tails.net
lizard.tails.net
mail.lizard
misc.lizard
puppet-git.lizard
rsync.lizard
teels.tails.net
whisperback.lizard
www.lizard
```
The [second batch of bookworm upgrades][] took 33 hours for 31
machines, so about one hour per box. Here we have 57 machines, so it
will likely take us 60 hours (or two weeks) to complete the upgrade.
Feedback and coordination of this batch happens in [issue batch 2][].
[second batch of bookworm upgrades]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/41252
[issue batch 2]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/42070
### Batch 3: high complexity
Those machines are harder to upgrade, or more critical. In the case of
TPA machines, this batch typically groups together the Ganeti servers
and all the "snowflake" servers that are not properly Puppetized and
are full of legacy, namely the LDAP, DNS, and Puppet servers.
That said, we waited a long time to upgrade the Ganeti cluster for
bookworm, and it turned out to be trivial, so perhaps those could
eventually be made part of the second batch.
15 TPA machines:
```
- [ ] alberti.torproject.org
- [ ] dal-node-01.torproject.org
- [ ] dal-node-02.torproject.org
- [ ] dal-node-03.torproject.org
- [ ] fsn-node-01.torproject.org
- [ ] fsn-node-02.torproject.org
- [ ] fsn-node-03.torproject.org
- [ ] fsn-node-04.torproject.org
- [ ] fsn-node-05.torproject.org
- [ ] fsn-node-06.torproject.org
- [ ] fsn-node-07.torproject.org
- [ ] fsn-node-08.torproject.org
- [ ] nevii.torproject.org
- [ ] pauli.torproject.org
- [ ] puppetdb-01.torproject.org
```
It seems like the [bookworm Ganeti upgrade][] took roughly 10h of
work. We ballpark the rest of the upgrade to another 10h of work, so
possibly 20h.
11 Tails machines:
```
- [ ] isoworker1.dragon
- [ ] isoworker2.dragon
- [ ] isoworker3.dragon
- [ ] isoworker4.dragon
- [ ] isoworker5.dragon
- [ ] isoworker6.iguana
- [ ] isoworker7.iguana
- [ ] isoworker8.iguana
- [ ] jenkins.dragon
- [ ] survey.lizard
- [ ] translate.lizard
```
The challenge with Tails upgrades is the coordination with the Tails
team, in particular for the Jenkins upgrades.
Feedback and coordination of this batch happens in [issue batch 3][].
[bookworm Ganeti upgrade]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/41254
[issue batch 3]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/42069
### Cleanup work
Once the upgrade is completed and the entire fleet is again running a
single OS, it's time for cleanup. This involves updating configuration
files to the new versions and removing old compatibility code in
Puppet, removing old container images, and generally wrapping things
up.
This process has been historically neglected, but we're hoping to wrap
this up, worst case in 2026.
## Timeline
- 2025-Q2
- W14 (first week of April): default container image changed to
`trixie`, installer defaults changed and first tests in
production
- W18 (last week of April): Batch 1 upgrades, TPA machines
- W19 (first week of May): Batch 1 upgrades, Tails machines
- W22 (last week of May): Batch 2 upgrades, TPA machines
- W23 (first week of June): Batch 2 upgrades, Tails machines
- 2025-Q3 to Q4: Batch 3 upgrades
- 2026-Q2: bookworm container image retired
## Deadline
The community has until the beginning of the above timeline to raise
concerns or objections.
Two weeks before performing the upgrades of each batch, a new
announcement will be sent with details of the changes and impacted
services.
# Alternatives considered
## Retirements or rebuilds
We do not plan any major upgrades or retirements in the third phase
this time.
In the future, we hope to decouple those as much as possible, as the
Icinga retirement and Mailman 3 became blockers that slowed down the
upgrade significantly for bookworm. In both cases, however, the
upgrades *were* challenging and had to be performed one way or
another, so it's unclear if we can optimize this any further.
We are clear, however, that we will not postpone an upgrade for a
server retirement. Dangerzone, for example, is scheduled for
retirement ([TPA-RFC-78][]) but is still planned as normal above.
[TPA-RFC-78]: https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-78-danger…
# Costs
| Task | Estimate | Certainty | Worst case |
|-------------------|----------|-----------|------------|
| Automation | 20h | extreme | 100h |
| Installer changes | 4h | low | 4.4h |
| Batch 1 | 20h | low | 22h |
| Batch 2 | 60h | medium | 90h |
| Batch 3 | 20h | high | 40h |
| Cleanup | 20h | medium | 30h |
| **Total** | 144h | ~high | ~286h |
The entire work here should come to about 144 hours, or 18 days, or
about 4 weeks full time. The worst case doubles that.
The above is done in "hours" because that's how we estimated batches
in the past, but here's an estimate that's based on the [Kaplan-Moss
estimation technique][].
[Kaplan-Moss estimation technique]: https://jacobian.org/2021/may/25/my-estimation-technique/
| Task | Estimate | Certainty | Worst case |
|-------------------|----------|-----------|------------|
| Automation | 3d | extreme | 15d |
| Installer changes | 1d | low | 1.1d |
| Batch 1 | 3d | low | 3.3d |
| Batch 2 | 10d | medium | 20d |
| Batch 3 | 3d | high | 6d |
| Cleanup | 3d | medium | 4.5d |
| **Total** | 23d | ~high | ~50d |
This is *roughly* equivalent, if a little higher (23 days instead of
18).
It should be noted that automation is not expected to drastically
reduce the total time spent in batches (currently 16 days or 100
hours). The main goal of automation is more to reduce the likelihood
of catastrophic errors, and make it easier to share our upgrade
procedure with the world. We're still hoping to reduce the time spent
in batches, hopefully by 10-20%, which would bring the total across
batches from 16 days to about 14, or from 100 hours to about 80.
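As a sanity check, the totals in the hour-based table can be
recomputed mechanically; task names and figures below are copied from
that table:

```python
# Hour-based estimates and worst cases, copied from the cost table.
tasks = {
    "Automation": (20, 100),
    "Installer changes": (4, 4.4),
    "Batch 1": (20, 22),
    "Batch 2": (60, 90),
    "Batch 3": (20, 40),
    "Cleanup": (20, 30),
}
total_estimate = sum(estimate for estimate, _ in tasks.values())
total_worst = round(sum(worst for _, worst in tasks.values()), 1)
print(total_estimate, total_worst)  # 144 286.4
```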
# Approvals required
This proposal needs approval from TPA team members, but service admins
can request additional delay if they are worried about their service
being affected by the upgrade.
Comments or feedback can be provided in issues linked above, or the
general process can be commented on in issue [tpo/tpa/team#41990][].
# References
* [Debian 13 trixie upgrade milestone][]
* [discussion ticket][tpo/tpa/team#41990]
[TPA bookworm upgrade procedure]: https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/upgrades/bookworm
[tpo/tpa/team#41990]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/41990
--
Antoine Beaupré
torproject.org system administration
Hey everyone!
Here are our meeting logs:
http://meetbot.debian.net/tor-meeting/2025/tor-meeting.2025-04-17-16.00.html
And our meeting pad:
Anti-censorship work meeting pad
--------------------------------
Anti-censorship
--------------------------------
Next meeting: Thursday, April 24 16:00 UTC
Facilitator: meskio
^^^(See Facilitator Queue at tail)
Weekly meetings, every Thursday at 16:00 UTC, in #tor-meeting at OFTC
(channel is logged while meetings are in progress)
This week's Facilitator: shelikhoo
== Goal of this meeting ==
Weekly check-in about the status of anti-censorship work at Tor.
Coordinate collaboration between people/teams on anti-censorship at the
Tor Project and Tor community.
== Links to Useful documents ==
* Our anti-censorship roadmap:
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/boards
* The anti-censorship team's wiki page:
*
https://gitlab.torproject.org/tpo/anti-censorship/team/-/wikis/home
* Past meeting notes can be found at:
* https://lists.torproject.org/pipermail/tor-project/
* Tickets that need review from the projects we are working on:
* All needs review tickets:
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/merge_requests?s…
* Project 158 <-- meskio working on it
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/issues/?label_na…
== Announcements ==
*
== Discussion ==
Apr 17 New:
== Actions ==
== Interesting links ==
*
https://gitlab.torproject.org/tpo/anti-censorship/censorship-analysis/-/iss…
* "users from some regions report tls 1.3 being blocked" (Russia)
== Reading group ==
* We will discuss "" on
* Questions to ask and goals to have:
* What aspects of the paper are questionable?
* Are there immediate actions we can take based on this work?
* Are there long-term actions we can take based on this work?
* Is there future work that we want to call out in hopes
that others will pick it up?
== Updates ==
Name:
This week:
- What you worked on this week.
Next week:
- What you are planning to work on next week.
Help with:
- Something you need help with.
cecylia (cohosh): 2025-04-03
Last week:
- fixed caching for snowflake shadow integration tests
(snowflake#40457)
- fixed rust installation bug in shadow CI tests (snowflake#40456)
- added option in conjure pt to switch between three supported
transports (conjure#10)
- set up a test conjure registration server
This week:
- support conjure work
- follow up on snowflake rendezvous failures
- reduce/remove use of mutexes for broker metrics
(snowflake#40458)
- take a look at potential snowflake orbot bug
- https://github.com/guardianproject/orbot-android/issues/1183
dcf: 2025-04-17
Last week:
Next week:
- open issue to have snowflake-client log whenever KCPInErrors
is nonzero
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
- parent:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
Help with:
meskio: 2024-04-03
Last week:
- create a container for bridgestrap
- investigate why email is not going out in gettor/email
distributor (tpa/team#42109)
- lyrebird: support dual stack IPv4/v6 in non-linux
(lyrebird#40023)
- update obfs4-bridge docker image (docker-obfs4-bridge#21)
Next week:
- steps towards a rdsys in containers (rdsys#219)
Shelikhoo: 2024-04-17
Last Week:
- [Testing] Unreliable+unordered WebRTC data channel transport
for Snowflake rev2 (cont.)(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
) testing environment setup/research
- Snowflake Staging Server (cont.)
Discussion(https://gitlab.torproject.org/tpo/tpa/team/-/issues/42080)
- Snowflake Staging Server Experiment
- [Merge Request] Add updated docker compose file for
snowflake(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
)
- [Merge Request] Add Approved Certificate for WebTunnel (
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/webt…
)
- Vantage monitoring
- Merge request reviews
Next Week/TODO:
- Merge request reviews
- [Testing] Unreliable+unordered WebRTC data channel transport
for Snowflake rev2 (cont.)(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
) improvements
- Snowflake Staging Server Experiment (Almost Done ^~^)
onyinyang: 2025-04-03
Last week(s):
- Continued work on Decoy routing for Conjure
- problems with networking in phantombox have impeded
progress on this
Next week:
- Start looking into next milestone issues:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/conj…
- review Tor browser Lox integration
https://gitlab.torproject.org/tpo/applications/tor-browser/-/merge_requests…
- add TTL cache to lox MR for duplicate responses:
https://gitlab.torproject.org/tpo/anti-censorship/lox/-/merge_requests/305
As time allows:
- Work on outstanding milestone issues:
- key rotation automation
Later:
pending decision on abandoning lox wasm in favour of some kind
of FFI?
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/43096):
- add pref to handle timing for pubkey checks in Tor browser
- add trusted invitation logic to tor browser integration:
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/42974
- improve metrics collection/think about how to show Lox is
working/valuable
- sketch out Lox blog post/usage notes for forum
(long term things were discussed at the meeting!):
- brainstorming grouping strategies for Lox buckets (of
bridges) and gathering context on how types of bridges are
distributed/used in practice
Question: What makes a bridge usable for a given user, and
how can we encode that to best ensure we're getting the most appropriate
resources to people?
1. Are there some obvious grouping strategies that we
can already consider?
e.g., by PT, by bandwidth (lower bandwidth bridges
sacrificed to open-invitation buckets?), by locale (to be matched with a
requesting user's geoip or something?)
2. Does it make sense to group 3 bridges/bucket, so
trusted users have access to 3 bridges (and untrusted users have access
to 1)? More? Less?
theodorsm: 2025-04-10
Last weeks:
- fixing bugs in covert-dtls, only mimic DTLS 1.2
- Running proxy with covert-dtls
- MR covert-dtls:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
Next weeks:
- Write instructions on how to configure covert-dtls with
snowflake client (are we going to run a user test?)
- Fix merge conflicts in MR
(https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…).
- Condensing thesis into paper
Help with:
- Test stability of covert-dtls in snowflake
Facilitator Queue:
meskio onyinyang shelikhoo
1. First available staff in the Facilitator Queue will be the
facilitator for the meeting
2. After facilitating the meeting, the facilitator will be moved to the
tail of the queue
Hello!
Several of us will be out for Easter Monday, so we will reschedule our
weekly meeting to Tuesday at the same time in the same place.
Hope you all enjoy some well-deserved rest this weekend!
best,
-morgan
---
affected users: container registry users
deadline: 2025-05-08 (3 weeks)
status: proposed
discussion: https://gitlab.torproject.org/tpo/tpa/base-images/-/issues/24
---
Summary: TPA container images will follow upstream OS support schedules
Table of contents:
- Proposal
- Debian images
- Ubuntu images
- Alternatives considered
- Different schedules according to image type
- Upgrades in lockstep with our major upgrades
- Upgrade completes before EOL
- Upgrade completes after EOL
- References
# Proposal
Container image versions published by TPA as part of the `base-images`
repository will be supported following upstream (Debian and Ubuntu)
support policies, including "LTS" releases.
In other words, we will *not* retire the images in lockstep with the
normal "major release" upgrade policy, which typically starts the
upgrade during the freeze and aims to retire the previous release
within a year.
This is to give our users a fallback if they have trouble with the
major upgrades, and to simplify our upgrade policy.
This implies supporting 4 or 5 Debian builds per image, per
architecture, depending on how long upstream releases live, including
testing and unstable.
We can make exceptions in case our major upgrades take an extremely
long time (say, past the LTS EOL date), but we *strongly* encourage
all container image users to regularly follow the latest "stable"
release (if not "testing") to keep their things up to date, regardless
of TPA's major upgrades schedules.
Before image retirements, we'll send an announcement, typically about
a year in advance (when the new stable is released, which is typically
a year before the previous LTS drops out of support) and a month
before the actual retirement.
## Debian images
Those are the Debian images currently supported and their scheduled
retirement date.
| codename | version | end of support |
|------------|---------|----------------|
| `bullseye` | 11 | 2026-08-31 |
| `bookworm` | 12 | 2028-06-30 |
| `trixie` | 13 | likely 2030 |
| `sid` | N/A | N/A |
Note that `bullseye` was actually retired already, before this
proposal was adopted ([tpo/tpa/base-images#19][]).
[tpo/tpa/base-images#19]: https://gitlab.torproject.org/tpo/tpa/base-images/-/issues/19
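As a rough illustration of the policy, the retirement dates in the table above can be encoded and queried. This is a hypothetical sketch, not a TPA tool, and the `trixie` date is an assumption derived from "likely 2030":

```python
# Sketch: which Debian images are still supported at a given date,
# per the retirement table above. The trixie EOL is assumed.
from datetime import date

retirement = {
    "bullseye": date(2026, 8, 31),
    "bookworm": date(2028, 6, 30),
    "trixie": date(2030, 6, 30),  # "likely 2030", assumed here
}

def supported_images(today: date) -> list[str]:
    """Codenames still within their upstream (LTS) support window."""
    return [name for name, eol in retirement.items() if today <= eol]
```

(`sid` is omitted since it has no end-of-support date.)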
## Ubuntu images
Ubuntu releases are tracked separately, as we do not actually perform
Ubuntu major upgrades. So we currently have those images:
| codename | version | end of support |
|------------|-----------|----------------|
| `focal` | 20.04 LTS | 2025-05-29 |
| `jammy` | 22.04 LTS | 2027-06-01 |
| `noble` | 24.04 LTS | 2029-05-31 |
| `oracular` | 24.10 | 2025-07 |
Concretely, it means we're supporting a relatively constant number (4)
of upstream releases.
Note that we do not currently build other images on top of Ubuntu
images, and would discourage such an approach, as Ubuntu is typically
not supported by TPA, except to build third-party software (in this
case, "C" Tor).
# Alternatives considered
Those approaches were discussed but ultimately discarded.
## Different schedules according to image type
We've also considered having different schedules for different image
types, for example having only "stable" for some less common images.
This, however, would be confusing for users: they would need to
*guess* what exactly we consider to be a "common" image.
This implies we build more images than we might truly need (e.g. who
really needs the `redis-server` image from `testing` *and*
`unstable`?) but this seems like a small price to pay for the tradeoff.
We currently do not feel the number of built images is a problem in
our pipelines.
## Upgrades in lockstep with our major upgrades
We've also considered retiring container images in lockstep with the
major OS upgrades as performed by TPA. For Debian, this would *not*
have included LTS releases, unless our upgrades were delayed. For
Ubuntu, it would have included LTS releases and supported rolling
releases.
For Debian, this meant we generally supported 3 releases (including
testing and unstable), except during the upgrade, when we supported 4
versions of the container images for however long it took to complete
the upgrade after the stable release.
This was confusing, as the lifetime of an image depended upon the
speed at which major upgrades were performed. Those are highly
variable, as they depend on the team's workload and the difficulties
encountered (or not) during the procedure.
It could mean that support for a container image would abruptly be
dropped if the major upgrade crossed the LTS boundary, although this
is also a problem with the current proposal, alleviated by
pre-retirement announcements.
### Upgrade completes before EOL
In this case, we complete the Debian 13 upgrade before the EOL:
- 2025-04-01: Debian 13 upgrade starts, 12 and 13 images supported
- 2025-06-10: Debian 13 released, Debian 14 becomes `testing`, 12, 13
and 14 images supported
- 2026-02-15: Debian 13 upgrade completes
- 2026-06-10: Debian 12 becomes LTS, 12 support dropped, 13 and 14 supported
In this case, "oldstable" images (Debian 12) images are supported 4
months after the major upgrade completion, and 14 months after the
upgrades start.
### Upgrade completes after EOL
In this case, we complete the Debian 13 upgrade after the EOL:
- 2025-04-01: Debian 13 upgrade starts, 12 and 13 images supported
- 2025-06-10: Debian 13 released, Debian 14 becomes `testing`, 12, 13
and 14 images supported
- 2026-06-10: Debian 12 becomes LTS, 12, 13 and 14 supported
- 2027-02-15: Debian 13 upgrade completes, Debian 12 images support
dropped, 13 and 14 supported
- 2028-06-30: Debian 12 LTS support dropped upstream
In this case, "oldstable" (Debian 12) images are supported zero months
after the major upgrades completes, and 22 months after the upgrade
started.
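The month figures in the two scenarios can be double-checked with a short sketch, using the dates from the timelines above (`months_between` is a rough whole-month approximation introduced here for illustration):

```python
# Check the month arithmetic in the two lockstep scenarios above.
from datetime import date

def months_between(start: date, end: date) -> int:
    """Approximate whole months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

upgrade_start = date(2025, 4, 1)
debian12_lockstep_eol = date(2026, 6, 10)  # Debian 12 becomes LTS

# Scenario 1: upgrade completes 2026-02-15, before the LTS boundary
early_completion = date(2026, 2, 15)
print(months_between(early_completion, debian12_lockstep_eol))  # 4
print(months_between(upgrade_start, debian12_lockstep_eol))     # 14

# Scenario 2: upgrade completes 2027-02-15, after the LTS boundary;
# Debian 12 images are dropped at completion time
late_completion = date(2027, 2, 15)
print(months_between(upgrade_start, late_completion))           # 22
```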
# References
- [discussion issue][]
- [Debian release support schedule][]
- [Ubuntu][] and [Debian][] release timelines at Wikipedia
- [Debian major upgrades progress and history][]
[discussion issue]: https://gitlab.torproject.org/tpo/tpa/base-images/-/issues/24
[Debian major upgrades progress and history]: https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/upgrades#all-time-…
[Debian]: https://en.wikipedia.org/wiki/Debian_version_history#Release_table
[Ubuntu]: https://en.wikipedia.org/wiki/Ubuntu_version_history#Table_of_versions
[Debian release support schedule]: https://www.debian.org/releases/
--
Antoine Beaupré
torproject.org system administration
Hey everyone!
Here are our meeting logs:
http://meetbot.debian.net/tor-meeting/2025/tor-meeting.2025-04-10-16.00.html
And our meeting pad:
Anti-censorship work meeting pad
--------------------------------
Anti-censorship
--------------------------------
Next meeting: Thursday, April 10 16:00 UTC
Facilitator: meskio
^^^(See Facilitator Queue at tail)
Weekly meetings, every Thursday at 16:00 UTC, in #tor-meeting at OFTC
(channel is logged while meetings are in progress)
This week's Facilitator: shelikhoo
== Goal of this meeting ==
Weekly check-in about the status of anti-censorship work at Tor.
Coordinate collaboration between people/teams on anti-censorship at the
Tor Project and Tor community.
== Links to Useful documents ==
* Our anti-censorship roadmap:
*
Roadmap: https://gitlab.torproject.org/groups/tpo/anti-censorship/-/boards
* The anti-censorship team's wiki page:
*
https://gitlab.torproject.org/tpo/anti-censorship/team/-/wikis/home
* Past meeting notes can be found at:
* https://lists.torproject.org/pipermail/tor-project/
* Tickets that need reviews from projects we are working on:
* All needs review tickets:
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/merge_requests?s…
* Project 158 <-- meskio working on it
*
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/issues/?label_na…
== Announcements ==
*
== Discussion ==
Apr 10 New:
== Actions ==
== Interesting links ==
*
https://opencollective.com/censorship-circumvention/projects/snowflake-dail…
== Reading group ==
* We will discuss "" on
* Questions to ask and goals to have:
* What aspects of the paper are questionable?
* Are there immediate actions we can take based on this work?
* Are there long-term actions we can take based on this work?
* Is there future work that we want to call out in hopes
that others will pick it up?
== Updates ==
Name:
This week:
- What you worked on this week.
Next week:
- What you are planning to work on next week.
Help with:
- Something you need help with.
cecylia (cohosh): 2025-04-03
Last week:
- fixed caching for snowflake shadow integration tests
(snowflake#40457)
- fixed rust installation bug in shadow CI tests (snowflake#40456)
- added option in conjure pt to switch between three supported
transports (conjure#10)
- set up a test conjure registration server
This week:
- support conjure work
- follow up on snowflake rendezvous failures
- reduce/remove use of mutexes for broker metrics
(snowflake#40458)
- take a look at potential snowflake orbot bug
- https://github.com/guardianproject/orbot-android/issues/1183
dcf: 2025-04-10
Last week:
- commented on snowflake option renaming
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
Next week:
- open issue to have snowflake-client log whenever KCPInErrors
is nonzero
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
- parent:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
Help with:
meskio: 2025-04-03
Last week:
- create a container for bridgestrap
- investigate why email is not going out in gettor/email
distributor (tpa/team#42109)
- lyrebird: support dual-stack IPv4/IPv6 on non-Linux platforms
(lyrebird#40023)
- update obfs4-bridge docker image (docker-obfs4-bridge#21)
Next week:
- steps towards a rdsys in containers (rdsys#219)
Shelikhoo: 2025-04-10
Last Week:
- [Testing] Unreliable+unordered WebRTC data channel transport
for Snowflake rev2 (cont.)(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
) testing environment setup/research
- Snowflake Staging Server (cont.)
Discussion(https://gitlab.torproject.org/tpo/tpa/team/-/issues/42080)
- Snowflake Staging Server Experiment
- Vantage monitoring
- Merge request reviews
Next Week/TODO:
- Merge request reviews
- [Testing] Unreliable+unordered WebRTC data channel transport
for Snowflake rev2 (cont.)(
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
) improvements
- Snowflake Staging Server Experiment
onyinyang: 2025-04-03
Last week(s):
- Finished AMP Cache <> Conjure integration \o/
- Started work on Decoy routing for Conjure
Next week:
- Finish up Decoy routing for Conjure
- review Tor browser Lox integration
https://gitlab.torproject.org/tpo/applications/tor-browser/-/merge_requests…
- add TTL cache to lox MR for duplicate responses:
https://gitlab.torproject.org/tpo/anti-censorship/lox/-/merge_requests/305
As time allows:
- Work on outstanding milestone issues:
- key rotation automation
Later:
pending decision on abandoning lox wasm in favour of some kind
of FFI?
(https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/43096)
- add pref to handle timing for pubkey checks in Tor browser
- add trusted invitation logic to tor browser integration:
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/42974
- improve metrics collection/think about how to show Lox is
working/valuable
- sketch out Lox blog post/usage notes for forum
(long term things were discussed at the meeting!):
- brainstorming grouping strategies for Lox buckets (of
bridges) and gathering context on how types of bridges are
distributed/used in practice
Question: What makes a bridge usable for a given user, and
how can we encode that to best ensure we're getting the most appropriate
resources to people?
1. Are there some obvious grouping strategies that we
can already consider?
e.g., by PT, by bandwidth (lower bandwidth bridges
sacrificed to open-invitation buckets?), by locale (to be matched with a
requesting user's geoip or something?)
2. Does it make sense to group 3 bridges/bucket, so
trusted users have access to 3 bridges (and untrusted users have access
to 1)? More? Less?
theodorsm: 2025-04-10
Last weeks:
- fixing bugs in covert-dtls, only mimic DTLS 1.2
- Running proxy with covert-dtls
- MR covert-dtls:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
Next weeks:
- Write instructions on how to configure covert-dtls with
snowflake client (are we going to run a user test?)
- Fix merge conflicts in MR
(https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…).
- Condensing thesis into paper
Help with:
- Test stability of covert-dtls in snowflake
Facilitator Queue:
meskio onyinyang shelikhoo
1. First available staff in the Facilitator Queue will be the
facilitator for the meeting
2. After facilitating the meeting, the facilitator will be moved to the
tail of the queue
Hi!
Here are our monthly minutes.
# Roll call: who's there and emergencies
anarcat, groente, lavamind, lelutin and zen present, no emergency
warranting a change in schedule.
# Dashboard review
We reviewed our dashboards as per our weekly check-in.
## Normal per-user check-in
* https://gitlab.torproject.org/groups/tpo/-/boards?scope=all&utf8=%E2%9C%93&…
* https://gitlab.torproject.org/groups/tpo/-/boards?scope=all&utf8=%E2%9C%93&…
* https://gitlab.torproject.org/groups/tpo/-/boards?scope=all&utf8=%E2%9C%93&…
* https://gitlab.torproject.org/groups/tpo/-/boards?scope=all&utf8=%E2%9C%93&…
* https://gitlab.torproject.org/groups/tpo/-/boards?scope=all&utf8=%E2%9C%93&…
## General dashboards:
* https://gitlab.torproject.org/tpo/tpa/team/-/boards/117
* https://gitlab.torproject.org/groups/tpo/web/-/boards
* https://gitlab.torproject.org/groups/tpo/tpa/-/boards
# First quarter recap
We reviewed our [plan for Q1][] and observed we've accomplished a lot of work:
- Puppet Gitlab MR workflow
- MinIO RFC
- Prometheus work
- download page work stalled
- lots of email work done
- good planning on the tails merge as well
All around a pretty successful, if really busy, quarter.
[plan for Q1]: https://gitlab.torproject.org/tpo/tpa/team/-/wikis/meeting/2025-01-13#2025q…
# Second quarter priorities and coordination
We evaluated what we're hoping to do in the second quarter, and there's
again a lot to be done:
- upgrade to trixie, batch 1 (last week of april, first week of may!),
batch 2 in may/june if all goes well
- rdsys and snowflake containerization (VM setup in progress for the
latter)
- network team test network (VM setup in progress)
- mail monitoring improvements
- authentication merge plan
- minio in production (RFC coming up)
- puppet merge work starting
- weblate and jenkins upgrades at the end of the quarter?
# Holidays planning
We have started planning for the northern hemisphere "summer" holidays,
as people have already started booking things up for July and August.
So far, it looks like we'll have one week with a 3-person overlap,
leaving still 2 people on shifts. We've shuffled shifts around to keep
the number of shifts over the year constant, avoid having people on
shifts while on vacation, and maximize the period between shifts to
reduce the pain.
As usual, we're taking great care not to have everyone on vacation,
all at once, doing high-risk activities. ;)
# Metrics of the month
* hosts in Puppet: 94, LDAP: 94, Prometheus exporters: 606
* number of Apache servers monitored: 33, hits per second: 760
* number of self-hosted nameservers: 6, mail servers: 93
* pending upgrades: 0, reboots: 0
* average load: 1.41, memory available: 3.76 TiB/5.86 TiB, running processes: 166
* disk free/total: 59.67 TiB/147.48 TiB
* bytes sent: 568.24 MB/s, received: 387.83 MB/s
* [GitLab tickets][]: 244 tickets including...
* open: 1
* icebox: 138
* future: 52
* needs information: 6
* backlog: 22
* next: 8
* doing: 10
* needs review: 8
* (closed: 4017)
[Gitlab tickets]: https://gitlab.torproject.org/tpo/tpa/team/-/boards
--
Antoine Beaupré
torproject.org system administration