Hello Everyone,
I recently opened a much larger can of worms than I'd intended when
fixing a bug reported downstream, where cURL, when configured with
certain DNS backends, would fail to resolve .onion addresses.
https://bugs.gentoo.org/887287
After doing some digging I discovered that the c-ares library was
updated in 2018 to intentionally fail to resolve .onion addresses,
in line with RFC 7686, and that there was an earlier 'bug' report
against cURL for leaking .onion DNS requests:
https://github.com/c-ares/c-ares/issues/196
https://github.com/curl/curl/issues/543
I took the obviously sane and uncontroversial approach of making
sure that cURL would always behave the same way regardless of the
DNS backend in use, and that it would output a useful error message
when it failed to resolve a .onion address.
Unfortunately, this has made a lot of people very angry and been
~~widely regarded as a bad move~~ panned by a small subset of
downstream cURL users:
https://github.com/curl/curl/discussions/11125
https://infosec.exchange/@harrysintonen/110977751446379372
https://gitlab.torproject.org/tpo/core/torspec/-/issues/202
I accept that, in particular, transproxy users are being inconvenienced,
but I also don't want to go back to 'cURL leaks .onion DNS requests
_sometimes_'. As a career sysadmin and downstream bug triager: this
is the stuff that keeps me up late at night. Quite literally, far too
often.
I have found, however, that the downstreams I expected to be most
inconvenienced (Whonix and Tails) simply use SOCKS, in the style of
the example below:
https://github.com/Kicksecure/sdwdate/commit/5724d83b258a469b7a9a7bbc651539…
https://github.com/Kicksecure/tb-updater/commit/d040c12085a527f4d39cb1751f2…
https://github.com/Kicksecure/usability-misc/blob/8f722bbbc7b7f2f3a35619a5a…
https://gitlab.tails.boum.org/tails/tails/-/issues/19488
https://gitlab.tails.boum.org/tails/tails/-/merge_requests/1123
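For anyone following along, the SOCKS approach amounts to handing the
hostname to Tor's SOCKS5 proxy rather than resolving it locally. A
minimal sketch with cURL, assuming a Tor SOCKS port on the default
127.0.0.1:9050 (the .onion address is a placeholder):

    # SOCKS5h: the proxy, not the local resolver, handles the hostname,
    # so no .onion query ever reaches the system's DNS backend.
    curl --socks5-hostname 127.0.0.1:9050 http://example.onion/
    # equivalently:
    curl -x socks5h://127.0.0.1:9050 http://example.onion/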
I've asked in the Tor Specifications issue (inspired by Silvio's
suggestions), and in the cURL issue, but I seem to be getting nowhere
and the impacted users are clamouring for a quick band-aid solution,
which I feel will work out worse for everyone in the long run:
>How can client applications (safely):
>
>1. discover that they're in a Tor-enabled environment
>2. resolve onion services only via Tor in that circumstance
>3. not leak .onion resolution attempts at all
>
>Right now, not making these requests in the first place is the
>safest (and correct) thing to do, however inconvenient it may be.
>Rather than immediately trying to come up with a band-aid approach
>to this problem, a sane mechanism needs to be implemented to:
>
>1. prevent each application from coming up with their own solution
>2. prevent inconsistency in .onion resolution (i.e. no "oh it only
>leaks if DO_ONION_RESOLUTION is set")
>3. provide a standardised mechanism for applications that want to be
>Tor-aware to discover that they're in a Tor-enabled environment.
I'm not particularly attached to that last point, but it's worth discussing.
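To illustrate why point 1 matters: absent a standard, every
application ends up inventing its own guesswork, e.g. probing the
conventional SOCKS port. A sketch of that kind of heuristic (fragile,
for illustration only; assumes the default 127.0.0.1:9050):

    # Fragile per-application heuristic: check whether anything is
    # listening on Tor's conventional SOCKS port. Something answering
    # is not necessarily Tor, and Tor may well be listening elsewhere.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/127.0.0.1/9050' 2>/dev/null; then
        echo "possibly a Tor SOCKS port"
    fi

This is exactly the sort of inconsistent, unverifiable detection I'd
like a standard mechanism to replace.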
On a related note:
- is the use of a transparent proxy recommended?
- is there a sane alternative that involves as little configuration
as possible for these users?
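For what it's worth, the lowest-configuration alternative I'm aware
of (short of a transparent proxy) is wrapping the application with
torsocks, which uses LD_PRELOAD to force the application's network
and DNS calls through Tor's SOCKS port:

    # Assumes a running tor with the default SOCKS port; torsocks
    # refuses to let DNS lookups bypass Tor.
    torsocks curl http://example.onion/

but that still requires the user (or the distribution) to know to do
it.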
I'm not sure what the best way forward is here, but I'm hoping that
actual Tor developers might have a useful opinion on the matter, or
at least be able to point me in the right direction.
Thanks for your time,
Cheers,
Matt
Cecylia, Arlo, Serene, Shelikhoo, and I are writing a research paper
about Snowflake. Here is a draft:
https://www.bamsoftware.com/papers/snowflake/snowflake.20231003.e6e1c30d.pdf
We're writing to check a factual claim in the section about having
multiple backend bridges. Basically, we wanted it to be possible for
there to be multiple Snowflake bridge sites run by different groups of
people, and we did not want to share the same relay identity keys across
all bridge sites, because of the increased risk of the keys being
exposed. Therefore every bridge site has its own relay identity, which
requires the client to know the relay fingerprints in advance and that
it be the client (and not, e.g., the broker) that decides which bridge
to use.
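For concreteness, the pinning lives in the client's Bridge lines in
torrc; a sketch with illustrative values (the matching
ClientTransportPlugin line is omitted):

    UseBridges 1
    # One Bridge line per bridge site, each pinning that site's own
    # RSA identity fingerprint; tor aborts the connection on mismatch.
    Bridge snowflake 192.0.2.3:80 2B280B23E1107BB62ABFC40DDCC8824814F80A72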
1. Is our general description (quoted below) of the design constraints
as they bear on Tor correct?
2. Is §4.2 "CERTS cells" the right part of tor-spec to cite to make our
point?
https://gitlab.torproject.org/tpo/core/torspec/-/blob/b345ca044131b2eb18e6a…
https://github.com/turfed/snowflake-paper/blob/e6e1c30dde6716dc5e54a32f2134…
A Tor bridge is identified by a long-term identity public key.
If, on connecting to a bridge, the client finds that the
bridge's identity is not the expected one, the client will
terminate the connection \cite[\S 4.2]{tor-spec}. The Tor client
can configure at most one identity per bridge; there is no way
to indicate (with a certificate, for example) that multiple
identities should be considered equivalent. This constraint
leaves two options: either all Snowflake bridges must share the
same cryptographic identity, or else it must be the client that
makes the choice of what bridge to use. While the former option
is possible to do (by synchronizing identity keys across
servers), every added bridge would increase the risk of
compromising the all-important identity keys. Our vision was
that different bridge sites would run in different locations
with their own management teams, and that any compromise of a
bridge site should affect that site only.
In my own experiments, providing an incorrect relay fingerprint leads to
errors in connection_or_client_learned_peer_id:
https://gitlab.torproject.org/tpo/core/tor/-/blob/tor-0.4.7.13/src/core/or/…
[warn] Tried connecting to router at 192.0.2.3:80 ID=<none> RSA_ID=2B280B23E1107BB62ABFC40DDCC8824814F80A71, but RSA + ed25519 identity keys were not as expected: wanted 1111111111111111111111111111111111111111 + no ed25519 key but got 2B280B23E1107BB62ABFC40DDCC8824814F80A72 + 1zOHpg+FxqQfi/6jDLtCpHHqBTH8gjYmCKXkus1D5Ko.
[warn] Problem bootstrapping. Stuck at 14% (handshake): Handshaking with a relay. (Unexpected identity in router certificate; IDENTITY; count 1; recommendation warn; host 1111111111111111111111111111111111111111 at 192.0.2.3:80)
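(A hedged reconstruction of that experiment's client config, using
the deliberately wrong all-1s fingerprint from the log above:)

    UseBridges 1
    # Wrong fingerprint on purpose: the bridge's real identity differs,
    # so the handshake fails with the warnings quoted above.
    Bridge snowflake 192.0.2.3:80 1111111111111111111111111111111111111111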