On Thu, Jan 1, 2015 at 9:24 AM, Michael Rogers <michael@briarproject.org> wrote:
Are you proposing that chaff would be sent end-to-end along circuits?
No. That wouldn't work. The circuit's onion crypto would blind middle relays to any end-to-end padding, so they could neither see it nor contribute control traffic back to you, let alone do so anonymously without knowing you were using them. Circuits also need to move around, share, and cross links randomly on demand, which makes intermediate fill control an impossible task. There would also be regions of the mesh with too little or too much fill, and unfilled capacity mismatches at the endpoints.
Here it sounds like you're proposing hop-by-hop chaff, not end-to-end?
"Hop by hop" would be different in that, yes, it follows the same path as your "end to end" onion-encrypted private data circuit, but it is not encapsulated within it; rather, it is negotiated in some other band along the way. That still seems quite difficult to design and manage.
I'm proposing each node fill in a star pattern with other nodes one hop away.
(That's what "generating enough traffic to saturate the core" seems to imply.)
No. If you have a 10k-node mesh and too few are talking across it (a quiet day), it becomes easier to determine the paths and endpoint pairs of those who are.
That tells you how much chaff to send in total, but not how much to send on each link.
No. Buy or allocate 1Mbit of internet bandwidth for your Tor node. You now have to fill that 1Mbit with Tor traffic. So find enough nodes from the consensus that also need to fill some part of their own quota, and start passing fill with them.
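A rough sketch of that quota-filling idea, not anything Tor implements; the function name and the (node_id, spare_capacity) consensus tuples are hypothetical:

```python
def pick_fill_peers(consensus, my_rate, min_conns=3):
    """Greedily pick peers until our dedicated fill rate is covered
    and we hold at least min_conns connections.

    consensus: list of (node_id, spare_capacity) tuples, spare
    capacity being the fill quota that peer still needs covered;
    rates in kbit/s. Returns {node_id: committed_rate}."""
    peers = {}
    remaining = my_rate
    # Largest spare capacity first, so few peers cover the quota.
    for node_id, spare in sorted(consensus, key=lambda kv: -kv[1]):
        if remaining <= 0 and len(peers) >= min_conns:
            break
        # Once the quota is covered, extra connections carry no fill.
        share = min(spare, remaining) if remaining > 0 else 0
        peers[node_id] = share
        remaining -= share
    return peers
```

For the 1Mbit example above, `pick_fill_peers(consensus, 1000)` would spread 1000 kbit/s of fill across however many consensus peers it takes, topping up to three connections if the quota is covered sooner.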
You'd have to think about whether to use one of them as your own guard or make another filled connection to your real guard.
A relay can't send chaff to every other relay
I previously said you are not opening connections to every relay in the consensus (a full matrix), only to enough of them to have sufficient connections (say, three) and to fill your dedicated rate.
so you can't fill all the links fulltime.
You'd have to think about how people with 64k vs 128k vs 512k vs 1Mbit vs 1Gbit links could spread their needs and abilities for chaff around the whole cloud (to make sure that at least every edge client is full all the time). That could live in the control protocol, or maybe even in the consensus. Are there ten 1Mbit nodes around to fill one 10Mbit node? Do nodes have to scale their commitments dynamically somehow, in a way that does not follow the path of the real wheat? I don't know yet.
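One naive way to answer the "ten 1Mbit nodes filling one 10Mbit node" question is to split a node's quota in proportion to each peer's advertised spare fill capacity, so commitments track capacity rather than wheat. A sketch under that assumption; the function and its inputs are hypothetical, not part of any Tor design:

```python
def spread_commitments(node_rate, peer_spare):
    """Split one node's fill quota across peers in proportion to their
    advertised spare fill capacity (kbit/s). Because shares follow
    capacity, not traffic, the commitments don't trace the path of
    any real wheat.

    peer_spare: {peer_id: spare_capacity}. Returns {peer_id: share}."""
    total = sum(peer_spare.values())
    return {peer: node_rate * spare / total
            for peer, spare in peer_spare.items()}
```

With ten 1Mbit peers each offering 1000 kbit/s spare, a 10Mbit node's quota divides into 1000 kbit/s per peer, which is exactly the matching the paragraph asks about.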
The amount of wheat+chaff on each link must change in response to the amount of wheat.
Of course, wheat occupies space in a stream of buckets you'd otherwise be filling with chaff.
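That bucket-swapping model can be sketched in a few lines, purely to illustrate the accounting and not as any real Tor scheduler; the function is hypothetical:

```python
def fill_buckets(total_buckets, wheat_cells):
    """One send interval on a constant-rate link: exactly
    total_buckets cells always go out. Wheat (real circuit cells)
    displaces chaff one-for-one; wheat beyond the link rate is
    queued for the next interval.

    Returns (wheat_sent, chaff_sent, wheat_queued)."""
    wheat_sent = min(wheat_cells, total_buckets)
    chaff_sent = total_buckets - wheat_sent
    queued = wheat_cells - wheat_sent
    return wheat_sent, chaff_sent, queued
```

An observer on the link sees the same `total_buckets` cells per interval regardless of how many were wheat.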
The question is, do those changes only occur when circuits are opened and closed - in which case the endpoint must ask each relay to allocate some bandwidth for the lifetime of the circuit
No, I don't think it should depend on circuit flux; that's way too complex. Only on node flux. That's much more stable, and it's just one easy hop away.
or do changes occur in response to changes in the amount of wheat, in which case we would need to find a function that allocates bandwidth to each link in response to the amount of wheat, without leaking too much information about the amount of wheat?
If the node capacities of the mixed-capacity network a few paragraphs up showed the need for a leak-free dynamic commitment function, then yes. If not, then all you need to know is how many out of every 100 chaff buckets to swap out for wheat, based on the demand of the circuits you know are going through you.
It seems like there are a lot of unanswered questions here.
Yes there are :) I'm just putting out potential elements I figured might fit the puzzle if people actually sat down and tried to solve link padding in a network such as Tor. (Unfortunately I don't tend to sit very long.)
If I understand right, you were using jitter to disguise the
I think that was it. If needed, it might also hide the time it takes your CPU to recalculate the wheat ratio in response to the demand of a new circuit.
Either way, decoupling the arrival and departure times necessarily involves delaying packets, which isn't acceptable for Tor's existing low-latency use cases.
Disagree, as before. The average round-trip time between a Tor client and the clearnet via an exit is ~1000ms or more. Adding 8 hops x up to 10ms each (5ms average) = 80ms max of jitter isn't going to turn the network into a store-and-forward yawn session.
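The arithmetic behind that claim, as a sketch assuming each hop independently draws a uniform delay; the function and the uniform-jitter model are illustrative assumptions, not a Tor mechanism:

```python
import random

def hop_jitter(n_hops=8, max_per_hop_ms=10.0, rng=None):
    """Total added delay when each hop independently adds a uniform
    delay in [0, max_per_hop_ms] ms (5 ms average per hop by default).
    Over 8 hops that is 40 ms expected and 80 ms worst case, small
    next to a ~1000 ms baseline round trip through an exit."""
    rng = rng or random.Random()
    return sum(rng.uniform(0.0, max_per_hop_ms) for _ in range(n_hops))
```

The worst case is bounded at `n_hops * max_per_hop_ms`, so the added latency is predictable rather than open-ended store-and-forward delay.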
So we're looking at two classes of traffic here: one with lower latency, and the other with better protection against traffic confirmation.
Whether or not such jitter is a useful defense needs to be evaluated. It may be possible to build paths through relays in the consensus that add jitter to their fill, or through ones that don't, in order to get minimum latency for irc/voice/video. A SocksPort setting, perhaps.
I'll try to reread these parts from you.
When I'm sitting down :) I've got four mails on the same subject from you on tor-dev, with a reference therein to guardian-dev. I think I got to all your questions, though.