[tor-talk] Design of next-generation Tor systems

carlo von lynX lynX at time.to.get.psyced.org
Wed Jun 22 11:46:29 UTC 2016


Hello Aymeric!

On Wed, Jun 22, 2016 at 12:41:35PM +0200, Aymeric Vitte wrote:
> Not a specialist with gnunet (how does peer discovery work with

I think this talk is well worth watching: it explains how GNUnet works
and why it is indeed a *replacement* for the current Internet, not
something that needs to run on top of it:

	http://cdn.media.ccc.de/congress/2013/workshops/30c3-WS-en-YBTI_Mesh-Bart_Polot-GNUnet_Wireless_Mesh_DHT.webm
	http://cdn.media.ccc.de/congress/2013/workshops/30c3-WS-en-YBTI_Mesh-Bart_Polot-GNUnet_Wireless_Mesh_DHT.mp4

> gnunet?) but one question can be: if you are using onion routing then
> why are you using gnunet?

I don't understand the question. The strategy of peeling onions, be
it in circuits, packet-wise or within a mix net, is still the #1
method for achieving metadata protection. See also Paul's post.
GNUnet does several other things well, but it still makes sense to
also implement onion routing. Onion routing alone, however, does not
provide all of the things secushare needs: 1. one-to-many
distribution and scalability, 2. a Sybil-attack-resistant design and
3. framed application traffic + constant bandwidth usage to defy
traffic shaping attacks.
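
To make the peeling image concrete, here is a minimal Python sketch of
layered encryption; the hop names, the use of Fernet as a per-hop
cipher and the key handling are purely illustrative, not GNUnet's or
Tor's actual construction:

    # Toy onion layering: each hop can remove exactly one layer and
    # learns nothing about the payload beneath it.  Fernet is just a
    # stand-in for whatever per-hop cipher a real design negotiates.
    from cryptography.fernet import Fernet

    path = ["relay-1", "relay-2", "exit"]
    keys = {hop: Fernet(Fernet.generate_key()) for hop in path}

    def wrap(payload: bytes) -> bytes:
        # Encrypt the innermost layer first, so the first relay peels
        # the outermost one.  A real sender derives keys per circuit.
        for hop in reversed(path):
            payload = keys[hop].encrypt(payload)
        return payload

    def peel(cell: bytes, hop: str) -> bytes:
        # Each relay removes only its own layer.
        return keys[hop].decrypt(cell)

    cell = wrap(b"metadata-protected message")
    for hop in path:
        cell = peel(cell, hop)
    print(cell)  # b'metadata-protected message'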

It's the combination of these measures that allows us to leave out
one aspect when necessary. For example we can afford not to have a
permanent bandwidth allocation if we have framing and onion routing,
thus allowing for distributed social networking apps on mobile phones,
or we can afford to reduce the onion routing for real-time traffic
if we can segment and mix it with file sharing gossip, this way also
defeating phoneme detection attacks on voice traffic. Or we can surf
the insecure web and risk fingerprinting attacks if the stuff then
goes through onion routes over constant bandwidth channels. Alpha
mixing would probably help, too, but constant bandwidth is easier
and already implemented. We have an advantage in synergy if we use
one big fat tool to replace the broken Internet rather than try to
have separate tools attempt anonymous real-time and bulk data exchange
without cashing in on the synergies. And nothing keeps us from 
securing our best secrets like our political views, commentaries and
dick pics by using all measures at once.
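
As a rough illustration of the framing-plus-constant-bandwidth idea,
here is a Python sketch; the cell size, the interval and the padding
strategy are made-up numbers, not what secushare or GNUnet actually do:

    # Application data is framed into fixed-size cells and exactly one
    # cell is emitted per tick, padding included, so the wire pattern
    # looks the same whether anything real is being sent or not.
    import queue, threading, time

    CELL = 512        # illustrative cell size in bytes
    INTERVAL = 0.05   # one cell every 50 ms => constant bandwidth

    outbox: "queue.Queue[bytes]" = queue.Queue()

    def send(data: bytes) -> None:
        # Frame arbitrary-length application traffic into cells.
        for i in range(0, len(data), CELL):
            outbox.put(data[i:i + CELL].ljust(CELL, b"\0"))

    def pump(transmit) -> None:
        # Real data if queued, cover traffic otherwise - an observer
        # shaping or timing the link sees no difference.
        while True:
            try:
                cell = outbox.get_nowait()
            except queue.Empty:
                cell = b"\0" * CELL
            transmit(cell)
            time.sleep(INTERVAL)

    threading.Thread(target=pump, args=(lambda cell: None,),
                     daemon=True).start()
    send(b"voice frames mixed with file sharing gossip")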

> I would put a caveat on the (widely propagated) statement that
> a DHT is necessarily insecure and not resistant to Sybil attacks, in the

It is a question of how you design the application, but I think I
elaborated on that well enough in the paragraph, no?

> Peersm case the nodeIDs are temporary and related to the onion keys of
> the peers (so sybils cannot position themselves where they like in the
> DHT, which according to some research is maybe not enough but certainly
> makes sybils' work difficult); they expire after each session; the DHT
> does not contain direct information about the peers but what the peers
> know about others (i.e. what the rdv points know about how to reach
> others); in addition the peers inform themselves directly about what
> they know (the rdv point to reach a given peerID and/or the introduction
> point to establish WebRTC connections between two peers); the DHT is
> used only if nothing is known about a peer, so sybils would have to
> invade both layers, which seems quite unlikely.

Temporary use of Tor's DHT is good, but how temporary can it be if the name
of the onion is all you have to get started?
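
If I read the Peersm description right, the idea could be sketched
like this in Python; the hash construction and nonce handling here are
my guess at the general shape, not Peersm's actual derivation:

    # A node's DHT ID is derived from its onion key plus a fresh
    # per-session nonce, so it expires with the session and cannot be
    # freely chosen - though grinding nonces to land near a target is
    # still possible, which is the "maybe not enough" caveat above.
    import hashlib, os

    def session_node_id(onion_pubkey: bytes) -> tuple[bytes, bytes]:
        nonce = os.urandom(16)                    # new every session
        return hashlib.sha256(onion_pubkey + nonce).digest(), nonce

    def verify_node_id(onion_pubkey: bytes, nonce: bytes,
                       claimed: bytes) -> bool:
        # Anyone can recompute the ID, so a sybil cannot simply claim
        # an arbitrary spot in the keyspace.
        return hashlib.sha256(onion_pubkey + nonce).digest() == claimed

    node_id, nonce = session_node_id(b"<onion public key bytes>")
    assert verify_node_id(b"<onion public key bytes>", nonce, node_id)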

> As a corollary, your section "Onion routing the secushare way" is
> similar to Peersm, as are the concepts that 3 hops are not required
> (2 for Peersm); indeed a peer can't know its position in the path. In
> addition, for Peersm the peers act as rdv points AND peers, and
> circuits can "extend" from one rdv point to others (A wants to reach
> D, B knows from C that C knows how to reach D, so the path goes through
> two rdv points B and C, each point being connected via two hops), so
> they can't even know whether they are serving the data as the first
> rdv point acting as a peer, relaying it as the first rdv point,
> relaying it as the nth rdv point, or serving it through n rdv points.

Great minds think alike. I have to bump looking into your technology
up my TODO list. It has always been in there, but other priorities keep
urging themselves into the foreground, like writing a decent remote
control tool for Tor.  ;)
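
A toy model of the rendezvous chaining you describe might look like
the following; the data structures and the recursion are only meant to
show why a node cannot tell which role it is playing, and surely
simplify what Peersm really does:

    # Every node is both peer and rendezvous point; a request hops
    # from rdv to rdv until someone can serve it, so a node never
    # knows whether it is the origin, a relay, or the final server.
    from typing import Optional

    class Node:
        def __init__(self, name: str):
            self.name = name
            self.local_data: dict[str, bytes] = {}  # content this peer serves itself
            self.known_rdv: dict[str, "Node"] = {}  # peerID -> next rdv point

        def fetch(self, peer_id: str) -> Optional[bytes]:
            if peer_id in self.local_data:           # acting as the peer itself
                return self.local_data[peer_id]
            nxt = self.known_rdv.get(peer_id)        # or relaying as the nth rdv
            return nxt.fetch(peer_id) if nxt else None

    a, b, c, d = (Node(n) for n in "ABCD")
    d.local_data["D"] = b"D's content"
    c.known_rdv["D"] = d        # C knows how to reach D
    b.known_rdv["D"] = c        # B only knows that C knows
    a.known_rdv["D"] = b
    print(a.fetch("D"))         # b"D's content"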

> Tor in its current form will never allow P2P, as commented here
> again:
> https://lists.torproject.org/pipermail/tor-talk/2016-June/041529.html,

It could work for Ricochet-like P2P messaging, although the APIs are
too cumbersome, I think. When a Tor connection comes from a certain
person, its virtual IP should resolve to the equivalent onion address.
That would make P2P operations trivial to implement. You are totally
right regarding file sharing, though: it is the same point I am making
in the first paragraph on scalability. Also, why do people even think
of using an insecure file sharing tool (BitTorrent) over an anonymizing
network that isn't designed for it when they can use a file sharing
system that is designed to be anonymous? gnunet-fs works great from
what I've seen...
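
To show what I mean by resolving the virtual IP back to the onion
address, here is a hypothetical Python sketch; none of these names
correspond to a real Tor control-port feature, they only illustrate
the kind of mapping an application would want:

    # Imagined mapping: when an incoming circuit is assigned a virtual
    # IP, the application can look up the onion address behind it and
    # reply to the same identity, which would make Ricochet-style P2P
    # messaging trivial.
    virtual_ip_to_onion: dict[str, str] = {}

    def register_incoming(virtual_ip: str, onion_addr: str) -> None:
        # Hypothetical hook: the router tells us which onion service
        # stands behind a freshly mapped virtual IP.
        virtual_ip_to_onion[virtual_ip] = onion_addr

    def reply_address(virtual_ip: str) -> str:
        # Resolve the peer's virtual IP back to its onion address so
        # we can open our own connection there and stay pseudonymous.
        return virtual_ip_to_onion[virtual_ip]

    register_incoming("10.192.0.42", "loupsycedyglgamf.onion")
    print(reply_address("10.192.0.42"))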

> the mere fact that nodes will not extend to nodes that are not
> registered in the directories makes this impossible, but as your
> onion routing solution or mine shows, we don't need it.

I think it works differently with gnunet, but the result is similar,
I guess. Maybe Bart's video will empower you to tell us about the
similarities and differences to Peersm.

Griffin: tor-talk is frequently a discussion platform for onion
routing theory beyond the boundaries of what the Tor platform can
do. Is that off-topic? Do people who design next-generation Tors
need to find themselves a different place to discuss?


-- 
  E-mail is public! Talk to me in private using encryption:
         http://loupsycedyglgamf.onion/LynX/
          irc://loupsycedyglgamf.onion:67/lynX
         https://psyced.org:34443/LynX/

