[tor-talk] Cryptographic social networking project

carlo von lynX lynX at time.to.get.psyced.org
Thu Jan 1 23:27:05 UTC 2015

Happy new ear (as they say in Italy).

On Tue, Dec 30, 2014 at 01:23:55PM +0330, contact at sharebook.com wrote:
> Users only establish hundreds of Tor hidden services once, when they
> launch the app, which costs high computational power for a while, but
> once all hidden services are connected it doesn't cost much CPU power
> to keep the connections alive, and there is no reason to change a
> hidden service's circuit in order to send new "Notifications" to Bob;
> Alice can send hundreds of "Notifications" to him through the same
> hidden service circuit.

So let's assume less than 1% of Facebook's users use this... let's take
a million, for example. Hundreds of thousands of Tor users would then
be keeping hundreds of circuits open while they are interacting with
the Bulk data servers. What do Tor backend experts think of this?

> The real problem with the current hidden service setup is that it
> doesn't support sending keep-alive packets at long intervals to keep
> circuits open, which causes battery problems on mobile devices because
> constantly sending keep-alive packets doesn't let the device go to
> sleep, but Tor can solve this problem and many others in its next
> generations.

Yes, that can be improved.

> >We are specifically developing
> >a multicast distribution layer into GNUnet to address
> >these types of use. See http://secushare.org/scalability [1]
> >and http://secushare.org/pubsub [2] for further details.
> If by "Multicast" you mean send the packet to one node to send it to all
> others, then you need fully trust that node for anonymization which
> motivates attackers to run malicious nodes because even with small
> fraction of nodes compared to whole network, they can deanonymize users.

Why would you add attackers to your social network?
You think you gain social score by that?
Of course such a social network must disincentivize adding strangers.

> If your "Multicast" protocol sends packets to all receivers separately,
> then it is no different from sending a TCP packet to every Tor
> hidden service...

No, we would call it "round robin" if it weren't doing multicast.
I don't see the lack-of-trust scenario you are painting.
We are building a social application, and it is natural for a social
application to trust the other members of the same multicast group
to cooperate on achieving the common goal.

> >But that's not all yet, according to your document
> >each posting or comment isn't actually delivered directly 
> >but rather stored in form of what you call a "Block" on a
> >"PseudonymousServer." All of the 167 recipients have thus
> >to maintain a circuit to one or more PseudonymousServers
> >in order to retrieve the ongoing comments of the discussion.

You didn't comment on this part. So a million people also keep
refreshing circuits to a cloud of PseudonymousServers for
each line of chat going on in any profile. Yes?

> >This also opens up doubts concerning anonymity. If a
> >global passive observer can correlate EntryNode activity
> >with the traffic going in and out of PseudonymousServer,
> >wouldn't it be likely that very similar bursts of Block
> >retrievals would allow to reconstruct the social graph
> >of Alice?
> As mentioned in the document, we assume anonymity works. If an
> attacker can break Tor, I guess no other protocol can keep them from
> deanonymizing metadata. All metadata protection protocols place some
> degree of trust in the distributed nodes that handle anonymization;
> high-latency onion routing networks provide more protection against
> timing correlation attacks, but that isn't worth the trade-offs,
> because a global adversary that watches the entire planet can still
> compute correlations on the entry and exit points of communications.

You're talking as if there were no right or wrong ways to
use Tor, but we've seen plenty of problems with applications
that centralize some of their logic into one or two hidden
service servers. Tor works best when the transactions
are fully decentralized.

Even better if future anonymity networks do not have any
centralized components at all, at least for certain jobs.

> We learned that even the NSA
> (http://www.theguardian.com/world/interactive/2013/oct/04/tor-stinks-nsa-presentation-document)
> can't break Tor at mass scale, and that gives us enough confidence
> regarding the rest of the attackers, too.

You are saying this on the mailing list of people who know
that it's not that simple. Some wrong uses of Tor are
pretty easy to deanonymize. And then there's traffic shaping.

> >Even more, if the attacker p0wned this specific
> >PseudonymousServer and thus knows which Blocks are being
> >retrieved? Your design doc specifies that you are lacking
> >an incentive for creating large numbers of such Pseudony-
> >mousServers, thus the attackers would be the only ones to
> >have a motivation to offer such "free" services.
> We don't trust the "PseudonymousServer" at all. We assume it's already
> p0wned by hackers or legal orders. You don't even need to hack them;
> you can just stand outside the datacenter and eavesdrop on their
> network traffic, because all traffic to the "PseudonymousServer" goes
> over plain HTTP.
> We only trust the anonymity network to anonymize the user's connection
> to the "PseudonymousServer".

And I'm saying that won't be enough to achieve either
the anonymity or the scalability needed for a social network.

More information about the tor-talk mailing list