[tor-dev] Using Shadow to increase agility of Sponsor R deliverables
rob.g.jansen at nrl.navy.mil
Fri Nov 21 15:14:23 UTC 2014
> On Nov 20, 2014, at 4:59 PM, David Goulet <dgoulet at ev0ke.net> wrote:
> On 20 Nov (14:45:12), Rob Jansen wrote:
>> Are there other HS performance improvements that we think may be ready by January?
> On my part, I have a chutney network with an HS and clients that fetch
> data from it. I'm currently working on instrumenting the HS subsystem so
> we can gather performance data and analyze it for meaningful pointers on
> where the contention points are, confirm expected behaviors, etc.
> I'll soon begin updating the following ticket with more information on
> the work I'm doing. (I'm in Boston right now collaborating with Nick for
> the week, so things are a bit slower on this front until Monday.)
> This could also be used with Shadow, I presume. Since the deadline is
> near, I chose chutney here for simplicity.
Chutney is the right tool for tracing CPU resource problems. Shadow is the right tool for gathering realistic network-level performance statistics and for testing code at scale. Shadow can also potentially run faster than real time if you are only using a handful of nodes. If you are not using Shadow because it is too complex, then please, please let me help with that.
> I'll have a talk
> with Nick tomorrow on how we can possibly get this instrumentation
> upstream (either logs, controller events, and/or tracing).
That would be great! Making it easy to gather data, even if only in TestingTorNetwork mode, will pay dividends.
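Whatever form the instrumentation takes, the timing data should be easy to extract afterwards. As a hedged sketch (the event markers below are hypothetical placeholders, not real Tor log messages; only the timestamp prefix follows Tor's standard log-line format), one could pull intervals out of info-level logs like this:

```python
import re
from datetime import datetime

# Tor log lines start with a timestamp like "Nov 21 15:14:23.481 [info] ...".
# The markers used below (hs-intro-established, hs-rend-done) are made-up
# stand-ins for whatever instrumentation eventually lands upstream.
LINE_RE = re.compile(r'^(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{3}) \[\w+\] (.*)$')

def parse_timestamp(stamp, year=2014):
    # Tor omits the year from log timestamps, so we supply one.
    return datetime.strptime('%d %s' % (year, stamp), '%Y %b %d %H:%M:%S.%f')

def interval_ms(lines, start_marker, end_marker):
    """Milliseconds between the first line containing start_marker and
    the first subsequent line containing end_marker, or None."""
    start = None
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        when, msg = parse_timestamp(m.group(1)), m.group(2)
        if start is None and start_marker in msg:
            start = when
        elif start is not None and end_marker in msg:
            return (when - start).total_seconds() * 1000.0
    return None

log = [
    'Nov 21 15:14:23.000 [info] hs-intro-established circ=5',
    'Nov 21 15:14:23.481 [info] hs-rend-done circ=5',
]
print(interval_ms(log, 'hs-intro-established', 'hs-rend-done'))  # 481.0
```

The same parser would work on chutney node logs or on the per-node logs a Shadow run produces, which is part of why plain log lines are an attractive instrumentation channel.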
> Things are going forward, we still have some work ahead to gather the HS
> performance baseline and start trying to improve it. I'm fairly
> confident that the performance statistics in a private network will give
> us a good insight on the current situation.
> Feel free to propose anything that could be useful to make this thing
> more efficient/faster/useful :).
I totally agree that a private network is the right approach. A small network will be useful for isolating some performance issues, but I think we also need to test at larger scale, with realistic background traffic added, so that we understand the performance benefits in a more realistic environment. Shadow allows us to do this and to gather statistics across the entire network in a matter of hours. I have the resources to run at least 6000 relays and 30000 clients in a private ShadowTor deployment, and I hope that results at this scale will impress our funder in January.
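At that scale, the per-client measurements have to be reduced to a few network-wide numbers. A minimal sketch of that reduction (the sample data is fabricated for illustration; in a real ShadowTor run the samples would come from the per-node log files Shadow writes):

```python
# Reduce per-client time-to-first-byte samples from a large simulated
# network down to summary statistics worth reporting.

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list of samples."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100.0 * (len(ordered) - 1))))
    return ordered[rank]

def summarize(ttfb_ms_by_client):
    samples = [s for per_client in ttfb_ms_by_client.values() for s in per_client]
    return {
        'clients': len(ttfb_ms_by_client),
        'samples': len(samples),
        'median_ms': percentile(samples, 50),
        'p90_ms': percentile(samples, 90),
    }

# Fabricated measurements for five clients, two samples each.
measurements = {
    'client%d' % i: [100 + 10 * i, 120 + 10 * i]
    for i in range(5)
}
print(summarize(measurements))
```

Comparing these summaries between a baseline run and a run with one candidate improvement applied is exactly where per-improvement branches would pay off.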
Perhaps after you finish your traces in chutney and work out some of the code bottlenecks, I can run some more realistic network experiments in Shadow. (Separate branches for each improvement would help here.) Would this actually be helpful? Or do we think that by the time we get to the Shadow step we would have already learned everything we need to know?