[tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks

nusenu nusenu-lists at riseup.net
Mon Jul 6 23:01:12 UTC 2020


> I've written up what I think would be a useful building block:
> https://gitlab.torproject.org/tpo/metrics/relay-search/-/issues/40001

thanks, I'll reply here since I (and probably others) cannot reply there.

> Three highlights from that ticket that tie into this thread:
> 
> (A) Limiting each "unverified" relay family to 0.5% doesn't by itself
> limit the total fraction of the network that's unverified. I see a lot of
> merit in another option, where the total (global, network-wide) influence
> from relays we don't "know" is limited to some fraction, like 50% or 25%.

I like it (it is even stricter than what I proposed). You are basically saying
the "known" pool should always control a fixed (or minimal?) portion - let's say 75% -
of the entire network, no matter what capacity the "unknown" pool has. It doesn't
address the key question, though: how do you specifically define "known", and how
do you verify entities before you move them into the "known" pool?
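To make the global cap concrete, here is a rough Python sketch of the
rescaling it implies (the data layout and function name are made up for
illustration, this is not tor or dirauth code): if the "unknown" pool
exceeds its allowed share of total consensus weight, its weights get
scaled down proportionally so the "known" pool keeps at least 75%.

def cap_unknown_pool(relays, max_unknown_fraction=0.25):
    """relays: list of dicts with 'weight' (int) and 'known' (bool)."""
    known = sum(r["weight"] for r in relays if r["known"])
    unknown = sum(r["weight"] for r in relays if not r["known"])
    if known == 0 or unknown == 0:
        return relays
    if unknown / (known + unknown) <= max_unknown_fraction:
        return relays  # already within the cap, nothing to do
    # Solve scaled / (known + scaled) == max_unknown_fraction for scaled:
    target = known * max_unknown_fraction / (1 - max_unknown_fraction)
    factor = target / unknown
    for r in relays:
        if not r["known"]:
            r["weight"] = int(r["weight"] * factor)
    return relays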


> (B) I don't know what you have in mind with verifying a physical address
> (somebody goes there in person? somebody sends a postal letter and waits
> for a response?)

The process is outlined at the bottom of my first email in this thread
(short: a random challenge is sent in a letter to the operator's postal address,
and the operator returns it via email).
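For illustration, a minimal sketch of that challenge round-trip (the helper
names are hypothetical, not an existing tool): the token gets printed into
the postal letter, and verification happens when the operator emails it back.

import hmac
import secrets

def new_challenge() -> str:
    # 16 random bytes -> 32 hex chars; infeasible to guess.
    return secrets.token_hex(16)

def verify_challenge(expected: str, submitted: str) -> bool:
    # Normalize what the operator typed back; compare in constant time.
    return hmac.compare_digest(expected, submitted.strip().lower())

challenge = new_challenge()   # goes into the postal letter
# ... weeks later, the operator mails the token back from their contact email:
assert verify_challenge(challenge, challenge)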

> but I think it's trying to be a proxy for verifying
> that we trust the relay operator, 

"trust" is a strong word. I wouldn't call them 'trusted' just because they
demonstrated their ability to pay someone to scan letters send
to a physical address.

I would describe it more as a proxy for "less likely to be a random opportunistic
attacker exploiting Tor users at zero risk to themselves".

> and I think we should brainstorm more
> options for achieving this trust. In particular, I think "humans knowing
> humans" could provide a stronger foundation.

I'm all ears for better options, but at some point I'd like to see
some actual improvement in practice.

I would hate to find us in the same situation a year from now
because we are still discussing the perfect solution.

> More generally, I think we need to very carefully consider the extra
> steps we require from relay operators (plus the work they imply for
> ourselves), and what security we get from them. 

I agree.


> (C) Whichever mechanism(s) we pick for assigning trust to relays,
> one gap that's been bothering me lately is that we lack the tools for
> tracking and visualizing which relays we trust, especially over time,
> and especially with the amount of network churn that the Tor network
> sees. It would be great to have an easier tool where each of us could
> assess the overall network by whichever "trust" mechanisms we pick --
> and then armed with that better intuition, we could pick the ones that
> are most ready for use now and use them to influence network weights.


Reminds me of an Atlas feature request for family-level graphs:
https://trac.torproject.org/projects/tor/ticket/23509
https://lists.torproject.org/pipermail/tor-relays/2017-September/012942.html

I'm currently generating some time-series graphs to see what exit fraction
(stacked) has been managed over time (past 6 months) by the operators listed at
https://torservers.net/partners.html
and those mentioned at the bottom of
https://lists.torproject.org/pipermail/tor-relays/2020-January/018022.html
plus some custom additions for operators I have had contact with before.
Spoiler: it used to be >50%, until some malicious actor came along and pushed
it below 50%.

Seeing their usual fraction over time can serve as an input when deciding what
fixed fraction of the network should always be managed by them.
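For the current snapshot, that fraction can be computed from Onionoo's details
documents via the exit_probability field (the "known" contact set below is a
placeholder, not the actual partner list; historical graphs need the CollecTor
consensus archives instead):

import requests

KNOWN_CONTACTS = {"abuse@torservers.net"}  # placeholder entries

def known_exit_fraction() -> float:
    url = "https://onionoo.torproject.org/details"
    params = {"running": "true", "fields": "contact,exit_probability"}
    relays = requests.get(url, params=params, timeout=60).json()["relays"]
    total = sum(r.get("exit_probability", 0.0) for r in relays)
    known = sum(r.get("exit_probability", 0.0) for r in relays
                if any(c in r.get("contact", "") for c in KNOWN_CONTACTS))
    return known / total if total else 0.0

print("known exit fraction: {:.1%}".format(known_exit_fraction()))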

> At the same time, we need to take other approaches to reduce the impact
> and incentives for having evil relays in the network. For examples:
> 
> (1) We need to finish getting rid of v2 onion services, so we stop the
> stupid arms race with threat intelligence companies who run relays in
> order to get the HSDir flag in order to scrape legacy onion addresses.

outlined, planned and announced (great):
https://blog.torproject.org/v2-deprecation-timeline


> (2) We need to get rid of http and other unauthenticated internet protocols:

This is something I hope browser vendors will tackle for us, but it
will not happen anytime soon.

kind regards,
nusenu




-- 
https://mastodon.social/@nusenu
