syverson at itd.nrl.navy.mil
Mon Nov 25 17:10:17 UTC 2002
> Date: Mon, 25 Nov 2002 01:59:33 -0500
> From: Roger Dingledine <arma at mit.edu>
> On Fri, Nov 22, 2002 at 03:13:59PM -0500, Bruce Montrose wrote:
> > test#2 was only slightly worse than test#1, probably because privoxy
> > does data scrubbing and httpap did not.
> httpap is obsolete (I've stopped maintaining it, and I know there were
> bugs there when I abandoned it). So when doing timing tests you should use
> privoxy -- but first go to http://config.privoxy.org/toggle?set=disable
> to turn off all the data scrubbing.
> > Both test#1 and test#2 perform horribly under great stress.
> Yes, this makes sense. The problem is that we do too many public key
> ops when we're being flooded with creates. That is, we use almost all of
> each second processing create cells, so we don't process data cells very
> often. Our congestion control algorithm makes it even worse, because
> each circuit can push at most 1200 bytes before needing a sendme cell
> to work its way back through the circuit.
> I've raised the receive windows by a factor of 10, so now we can send
> 12000 bytes between sendmes. It won't help much, though (I don't think
> it will hurt either), because nodes are simply too starved for time to
> get to all of the data cells.
It was concerns such as these that prompted us to have a separate
module to handle creates in the old (middle?) architecture. Nodes that
get a lot of new connections could then shunt the public-key operations
off to the crypto module (which might live on another machine, or even
on a crypto hardware processor) while other cells flow unimpeded. I
certainly don't want to suggest ripping everything apart to do this. If
there's anything I've learned in the last few years looking at both our
own and other systems, it's to keep things as simple as possible. But
is there any way to parallelize the high-load computations with ongoing
connections in this way?
More information about the tor-dev mailing list