Hi,
I'm running Tor on a router and was wondering why the Tor daemon uses so
much memory. Did a pmap:
pmap `pidof tor`
And got the following result:
1703: /usr/sbin/tor --PidFile /var/run/tor.pid
00400000 1024K r-x-- /usr/sbin/tor
0050f000 4K r---- /usr/sbin/tor
00510000 20K rw--- /usr/sbin/tor
00515000 12K rwx-- [ anon ]
00ae9000 36K rwx-- [ anon ]
00af2000 17288K rwx-- [ anon ]
7713a000 7140K r---- /tmp/lib/tor/cached-microdescs
77833000 516K rw--- [ anon ]
778f5000 348K r-x-- /lib/libuClibc-0.9.33.2.so
7794c000 60K ----- [ anon ]
7795b000 4K r---- /lib/libuClibc-0.9.33.2.so
7795c000 4K rw--- /lib/libuClibc-0.9.33.2.so
7795d000 20K rw--- [ anon ]
77962000 80K r-x-- /lib/libgcc_s.so.1
77976000 60K ----- [ anon ]
77985000 4K rw--- /lib/libgcc_s.so.1
77986000 12K r-x-- /lib/libdl-0.9.33.2.so
77989000 60K ----- [ anon ]
77998000 4K r---- /lib/libdl-0.9.33.2.so
77999000 4K rw--- /lib/libdl-0.9.33.2.so
7799a000 76K r-x-- /lib/libpthread-0.9.33.2.so
779ad000 60K ----- [ anon ]
779bc000 4K r---- /lib/libpthread-0.9.33.2.so
779bd000 4K rw--- /lib/libpthread-0.9.33.2.so
779be000 8K rw--- [ anon ]
779c0000 1300K r-x-- /usr/lib/libcrypto.so.1.0.0
77b05000 64K ----- [ anon ]
77b15000 72K rw--- /usr/lib/libcrypto.so.1.0.0
77b27000 4K rw--- [ anon ]
77b28000 292K r-x-- /usr/lib/libssl.so.1.0.0
77b71000 60K ----- [ anon ]
77b80000 20K rw--- /usr/lib/libssl.so.1.0.0
77b85000 176K r-x-- /usr/lib/libevent-2.0.so.5.1.9
77bb1000 64K ----- [ anon ]
77bc1000 4K rw--- /usr/lib/libevent-2.0.so.5.1.9
77bc2000 88K r-x-- /lib/libm-0.9.33.2.so
77bd8000 60K ----- [ anon ]
77be7000 4K rw--- /lib/libm-0.9.33.2.so
77be8000 56K r-x-- /usr/lib/libz.so.1.2.8
77bf6000 60K ----- [ anon ]
77c05000 4K rw--- /usr/lib/libz.so.1.2.8
77c06000 28K r-x-- /lib/ld-uClibc-0.9.33.2.so
77c1a000 8K rw--- [ anon ]
77c1c000 4K r---- /lib/ld-uClibc-0.9.33.2.so
77c1d000 4K rw--- /lib/ld-uClibc-0.9.33.2.so
7f9a0000 132K rw--- [ stack ]
7fff7000 4K r-x-- [ anon ]
total 29360K
As you can see, there is a large 17288K block, which turns out to be the
heap (of course). When I dumped the block and looked inside, I found it
was full of router data. It looks like it is mostly an in-memory database
of the router list.
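For anyone who wants to measure this on their own box, the big anonymous block can be picked out of the pmap output programmatically. A minimal Python sketch (the field layout matches the listing above; other pmap versions may format lines differently):

```python
import re

def largest_anon_block(pmap_output):
    """Return (size_in_K, start_address) of the largest writable
    anonymous mapping in `pmap` output, or (0, None) if none found."""
    best = (0, None)
    for line in pmap_output.splitlines():
        m = re.match(r'([0-9a-f]+)\s+(\d+)K\s+(\S+)\s+\[ anon \]',
                     line.strip())
        if m and 'w' in m.group(3):      # only writable mappings
            size = int(m.group(2))
            if size > best[0]:
                best = (size, m.group(1))
    return best
```

On the listing above this picks out the 17288K heap region at 00af2000.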
This worries me. If the router list grows in the future, my router (and
many other routers running Tor) could run out of memory. It seems a
little strange to me to keep an in-memory database of the router list.
Is there a reason for having this data in memory? And can something be
done about it?
Rob.
https://hoevenstein.nl
Hi everyone,
this is the second status report on the Tails Server GSoC project.
In the last two weeks I worked on these things:
* Discussing the design of the GUI on the Tails-UX mailing list. I sent
the last prototype of the GUI two weeks ago and we have been discussing
it since.
* Writing a specification proposal for the service API [1]
* Continuing the implementation of the GUI. I restructured the project and
implemented some more features, like monitoring the service's status via
the systemd manager.
Next I plan to get a fully functional prototype of the GUI, to be able
to conduct first user tests. This also requires some more features of
the CLI. I also plan to discuss the service specification.
Cheers!
[1] https://tails.boum.org/blueprint/tails_server/#index4h1
Sir,
I want to work on Tor Messenger and would like to know what I have to do
to get into the flow of the work.
Any help would be appreciated.
Regards,
Akash Das
IIIT-Sricity
Hello all,
I am a GSoC 2017 aspirant studying CSE at IIIT-Sricity, and I want to
contribute to this community. Can I get some help getting started and
catching up with the work that is going on?
Regards,
Akash Das
IIIT-Sricity
On Tue, May 17, 2016 at 5:30 PM, <tor-dev-request(a)lists.torproject.org>
wrote:
> Send tor-dev mailing list submissions to
> tor-dev(a)lists.torproject.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
> or, via email, send a message with subject or body 'help' to
> tor-dev-request(a)lists.torproject.org
>
> You can reach the person managing the list at
> tor-dev-owner(a)lists.torproject.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of tor-dev digest..."
>
>
> Today's Topics:
>
> 1. Re: onionoo.tpo stuck at 2016-05-13 12:00 (Karsten Loesing)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 17 May 2016 10:57:45 +0200
> From: Karsten Loesing <karsten(a)torproject.org>
> To: nusenu <nusenu(a)openmailbox.org>
> Cc: "tor-dev(a)lists.torproject.org" <tor-dev(a)lists.torproject.org>
> Subject: Re: [tor-dev] onionoo.tpo stuck at 2016-05-13 12:00
> Message-ID: <573ADD09.1020104(a)torproject.org>
> Content-Type: text/plain; charset=utf-8
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 14/05/16 12:00, nusenu wrote:
> > Hi Karsten,
>
> Hi Nusenu,
>
> > I was surprised that ornetradar did not send a single email for
> > yesterday's new relays.
> >
> > After looking into it, it turned out it is an onionoo problem.
> >
> > "relays_published":"2016-05-13 12:00:00"
> >
> > https://onionoo.torproject.org/details?limit=4
>
> It looks like the Onionoo host has some serious load problems (again).
> It's at "relays_published":"2016-05-16 19:00:00" now, so it didn't
> stop entirely, but it's still behind. I just kicked it, maybe it'll
> catch up in the next few hours.
>
> As a short-term fix, feel free to switch to the Onionoo mirror:
>
> https://onionoo.thecthulhu.com/summary?limit=0
>
> As a medium-term fix, extend your client to fetch both headers
> (/summary?limit=0) and get the full details from the host with more
> recent "relays_published" line.
>
> As a long-term fix we should fix most/all inconsistencies between
> Onionoo instances (different encodings and other minor problems) and
> add a hostname that round-robins requests to all available mirrors.
>
> > regards, nusenu
>
> Thanks for letting me know!
>
> All the best,
> Karsten
>
> -----BEGIN PGP SIGNATURE-----
> Comment: GPGTools - http://gpgtools.org
>
> iQEcBAEBAgAGBQJXOt0JAAoJEC3ESO/4X7XBLqIH/RYSZrlcKE3+Z9tx0dvdeYRN
> OTvwX632AGnfn8Ej2x9mGzf12Qkv3ED6Of979z5Zu02uCgTpOfgKy8MxxjnW0qDu
> oWvAw1OeO0T+kw+tD5IbjKhqDX0dYgNzoiYS2W/SwkR+OmGEX1i5/nVZm/M7hJFo
> VWikP0th4ajVhIRwTEILjuMK3DS6UruHYYX7gCrLdvYOC3R11h3AXOWBgbblT8wz
> x0jPKmNiOSWSKdpPCD5DHQbHaifNWMTqvZEqURhDpSKpfqUDZ82i7b/rG5bXlYZM
> KdjMUdF0e5NIxzKhg0rQkaJFAeSdgXhj++yqJdGfNXYgtu4QDccTCaKdQKb67gs=
> =AmcN
> -----END PGP SIGNATURE-----
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> tor-dev mailing list
> tor-dev(a)lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
> ------------------------------
>
> End of tor-dev Digest, Vol 64, Issue 26
> ***************************************
>
Hi Mridul,
I'm copying tor-dev@, so other folks can chime in.
On Thu, May 12, 2016 at 10:16:45AM +0530, Mridul Malpotra wrote:
> a) Can you give me a short description about the program flow on how the
> EventHandler class enables modules to be executed in exitmap? From my
> initial pondering over the code, the circuits seem to be created in
> succession. Then for each circuit creation, the listener catches the event
> and starts by calling module_closure for invoking probe() function specific
> to each module. However, I am having trouble understanding the role of the
> command utility and IPC queue, for which I can see a separate queue having
> the exit_fingerprint and socket pairs but fail to comprehend how it is
> being used.
Like you said, circuits are created sequentially in exitmap.py:419. We
have configurable inter-circuit delay (exitmap.py:442) to reduce the
load on the Tor network.
Before exitmap asks Stem to create circuits, it registers event
listeners for all circuit and stream events (exitmap.py:371). So
whenever Tor tells Stem that a circuit or stream has changed, Stem
notifies exitmap. We only care about a subset of all circuit and stream
events, though.
All the event code is in eventhandler.py. Once Tor manages to create a
new circuit, it tells Stem, which tells exitmap. We catch this event in
the new_circuit() function (eventhandler.py:254).
Now here's where the command utility comes in. Exitmap modules can do
one of two things. They can either use pure Python to do their scan
(e.g., by using httplib to fetch a web site), or use an external tool
and parse its output (e.g., by running the openssl command line tool).
For the first case, we offer the function run_python_over_tor()
(command.py:37). It basically monkey-patches Python's socket.socket and
makes it go over Tor.
External programs are a little bit more tricky, and handled by the
Command object (command.py:58). We will have multiple instances of our
command line tool running at the same time, and they all connect to
Tor's SOCKS port. Exitmap then maps a stream (e.g., whatever openssl
does) to a circuit. But how does exitmap know which one among the, say,
10 streams belongs to a given circuit? If we don't attach them
correctly, we can still detect MitM attacks, but we will get the alert
for the wrong exit relay. To correctly attach streams to circuits,
exitmap modules remember the source port of the command line tool and
hand it back to exitmap (note that a module runs in a different process
than exitmap) over the queue. We can get the source port by parsing the
output of torsocks. However, the current version of torsocks does not
implement this yet.
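The port-to-circuit handoff can be sketched like this (illustrative names and data structures, not exitmap's actual ones): module processes report (source_port, circ_id) pairs over a queue, and the main process consults that mapping when a new stream shows up.

```python
import queue

def circuit_for_stream(port_queue, port_map, stream_port):
    """Drain (source_port, circ_id) pairs handed back by module
    processes, then look up which circuit the new stream belongs to.
    Returns None if the source port is unknown (yet)."""
    while True:
        try:
            port, circ = port_queue.get_nowait()
        except queue.Empty:
            break
        port_map[port] = circ
    return port_map.get(stream_port)
```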
> b) For calculating all possible exits and creating circuits,
> ServerDescriptors is set to 0 probably to avoid conflict with changed
> consensus data in the middle of module execution. Also, for each module run
> in the same execution, the original consensus downloaded during
> bootstrapping Tor is being used. In my use case however, which involves
> long duration scanning, we will need to update the cached-consensus after
> some iterations. One way I think this can be made possible is by having an
> asynchronous task that updates the consensus after say, an hour or so.
> Conflicts could be avoided mid-module execution by either stalling
> execution if near the DA consensus time period or use the old
> cached-consensus. I would like to ask you whether my conclusion is correct
> and if so, what other ways can I explore?
Yes, that sounds reasonable to me. In fact, we might as well
periodically download server descriptors instead of consensuses, so we
can also scan relays that don't have the Exit flag, but have an exit
policy. The consensus does not necessarily tell us what a relay's exit
policy looks like. For more information, see:
<https://github.com/NullHypothesis/exitmap/issues/13>
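One way to structure such a periodic refresh is a timer around a pluggable fetch function (which could wrap stem.descriptor.remote). This is a sketch of the shape, not a patch against exitmap; swapping the result in safely between module runs is left to the caller.

```python
import threading

class PeriodicRefresh:
    """Re-run `fetch` every `interval` seconds on a daemon timer and
    keep the latest result in self.descriptors."""

    def __init__(self, fetch, interval=3600):
        self.fetch = fetch
        self.interval = interval
        self.descriptors = None
        self._timer = None

    def tick(self):
        self.descriptors = self.fetch()   # e.g. download descriptors
        self._timer = threading.Timer(self.interval, self.tick)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()
```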
> c) In exitmap, we are taking in the analysis_dir parameter in the command
> line and storing it in a global variable in util.py. However, nowhere is
> the dump_to_file function [1] being called to report on bad exits. What was
> the working that was thought of behind this function? Was exitmap to dump
> the log entries in files inside the analysis_dir for every false negative?
> Can't seem to find code where the function is called.
dump_to_file() is only used in modules, and not by exitmap itself. You
certainly have a point here; the way exitmap logs stuff is inconsistent
because it's entirely module-driven. Seems worthy of some improvements.
I hope that clears things up a bit. Let me know if you want me to
elaborate on something.
Cheers,
Philipp
Hello,
Thanks for the feedback so far.
[ PEOPLE THAT HAVE BIG SCARY ADVERSARIES IN THEIR THREAT MODEL
STILL SHOULD NOT USE THIS. ]
New version with changes, some that add functionality, some code
quality stuff, hence a version bump to 0.0.2, especially since it'll
probably be a bit before I can focus on tackling the TODO items.
Source: https://git.schwanenlied.me/yawning/cfc
XPI: https://people.torproject.org/~yawning/volatile/cfc-20160327/
Major changes:
* Properly deregister the HTTP event listeners on addon unload.
* Toned down the snark when I rewrite the CloudFlare captcha page,
since I wasn't very nice.
* Additional quality of life/privacy improvements courtesy of Will
Scott, both optional and enabled by default.
* (QoL) Skip useless landing pages (github.com/twitter.com will be
auto-redirected to the "search" pages).
* (Privacy) Kill twitter's outbound link tracking (t.co URLs) by
rewriting the DOM to go to the actual URL when possible. Since
DOM changes made from content scripts are isolated from page
scripts, this shouldn't substantially alter behavior.
* (Code quality) Use a pref listener to handle preference changes.
TODO:
* Try to figure out a way to mitigate the ability for archive.is to
track you. The IFRAME based approach might work here, needs more
investigation.
* Handle custom CloudFlare captcha pages (In general my philosophy is
to minimize false positives, over avoiding false negatives).
Looking at the regexes in dcf's post, disabling the title check may
be all that's needed.
* Handle CloudFlare 503 pages.
* Get samples of other common blanket CDN based Tor blocking/major
sites that block Tor, and implement bypass methods similar to how
CloudFlare is handled.
* Look into adding a "contact site owner" button as suggested by Jeff
Burdges et al (Difficult?).
* Support a user specified "always use archive.is for these sites"
list.
* UI improvements.
* More Quality of Life/Privacy improvements (Come for the Street
Signs, stay for the user scripts).
* I will eventually get annoyed enough at being linked to mobile
wikipedia that I will rewrite URLs to strip out the ".m.".
* Test this on Fennec.
* Maybe throw this up on addons.mozilla.org.
Regards,
--
Yawning Angel
Hi everyone,
this is the first status report on the Tails Server GSoC project. I
officially began working on it on April 25th, although I already did
some work in the weeks before.
This is what I have done so far:
* Updating the blueprint of the Tails Server [1]
* Implementing two iterations of prototypes of the GUI. The most recent
one is available on gitlab [2].
* Discussing the design of the GUI on the Tails-UX mailing list [3][4]
* I began implementing the CLI; the current code (not ready for review
yet) is also available on gitlab [2]
Next I plan to continue the design of the GUI, after others commented on
the new prototype. There are still some features missing which I will
implement in time. In parallel I will continue implementing the CLI and
discussing the design decisions.
I did quite a lot during the last two weeks and I will have some other
work to do in the next two weeks, so I expect to get less work done
until the next report.
Cheers!
[1] https://tails.boum.org/blueprint/tails_server/
[2] https://gitlab.com/segfault_/tails_server
[3] https://mailman.boum.org/pipermail/tails-ux/2016-April/thread.html#919
[4] https://mailman.boum.org/pipermail/tails-ux/2016-May/thread.html#953
Hi Yawning. We are thinking about how to integrate your Firejail profile
[0] in Whonix. Do you plan to upstream the changes in the
start-tor-browser script soon?
If not then what is the best way to ensure the custom script survives
TBB upgrades?
[0] https://git.schwanenlied.me/yawning/tor-firejail
I'm working on an exitmap module that wants to feed order of 5000
short-lived streams through each exit relay. I think this is running
foul of some sort of upper limit (in STEM, or in Tor itself, not sure)
on the number of streams a circuit can be used for, or how long, or
something. What I see in the logs (note: I've modified
eventhandler.py for more detailed debug logs) looks like
2016-04-22 16:07:53,306 [DEBUG]: Circuit status change: CIRC 6
LAUNCHED PURPOSE=GENERAL TIME_CREATED=2016-04-22T20:07:53.305851
2016-04-22 16:07:53,325 [DEBUG]: Circuit status change: CIRC 6
EXTENDED [fp] PURPOSE=GENERAL TIME_CREATED=2016-04-22T20:07:53.305851
2016-04-22 16:07:54,114 [DEBUG]: Circuit status change: CIRC 6
EXTENDED [fp],[fp] PURPOSE=GENERAL
TIME_CREATED=2016-04-22T20:07:53.305851
2016-04-22 16:07:54,115 [DEBUG]: Circuit status change: CIRC 6 BUILT
[fp],[fp] PURPOSE=GENERAL TIME_CREATED=2016-04-22T20:07:53.305851
2016-04-22 16:07:54,136 [DEBUG]: Port 47488 preparing to attach to circuit 6
2016-04-22 16:07:54,136 [DEBUG]: Port 47488 circuit 6 waiting for stream.
2016-04-22 16:07:54,155 [DEBUG]: Attempting to attach stream 65 to circuit 6.
2016-04-22 16:07:54,387 [DEBUG]: Port 47492 preparing to attach to circuit 6
2016-04-22 16:07:54,387 [DEBUG]: Port 47492 circuit 6 waiting for stream.
2016-04-22 16:07:54,388 [DEBUG]: Attempting to attach stream 67 to circuit 6.
2016-04-22 16:07:54,809 [DEBUG]: Port 47496 preparing to attach to circuit 6
2016-04-22 16:07:54,810 [DEBUG]: Port 47496 circuit 6 waiting for stream.
2016-04-22 16:07:54,810 [DEBUG]: Attempting to attach stream 69 to circuit 6.
2016-04-22 16:07:55,060 [DEBUG]: Port 47502 preparing to attach to circuit 6
2016-04-22 16:07:55,060 [DEBUG]: Port 47502 circuit 6 waiting for stream.
2016-04-22 16:07:55,061 [DEBUG]: Attempting to attach stream 72 to circuit 6.
2016-04-22 16:07:55,468 [DEBUG]: Port 47506 preparing to attach to circuit 6
2016-04-22 16:07:55,468 [DEBUG]: Port 47506 circuit 6 waiting for stream.
2016-04-22 16:07:55,469 [DEBUG]: Attempting to attach stream 74 to circuit 6.
2016-04-22 16:07:55,720 [DEBUG]: Port 47508 preparing to attach to circuit 6
2016-04-22 16:07:55,720 [DEBUG]: Port 47508 circuit 6 waiting for stream.
2016-04-22 16:07:55,990 [DEBUG]: Port 47512 preparing to attach to circuit 6
2016-04-22 16:07:55,990 [DEBUG]: Port 47512 circuit 6 waiting for stream.
2016-04-22 16:07:55,990 [DEBUG]: Attempting to attach stream 77 to circuit 6.
2016-04-22 16:07:56,241 [DEBUG]: Port 47518 preparing to attach to circuit 6
2016-04-22 16:07:56,241 [DEBUG]: Port 47518 circuit 6 waiting for stream.
2016-04-22 16:07:56,242 [DEBUG]: Attempting to attach stream 80 to circuit 6.
2016-04-22 16:07:56,492 [DEBUG]: Port 47528 preparing to attach to circuit 6
2016-04-22 16:07:56,492 [DEBUG]: Port 47528 circuit 6 waiting for stream.
2016-04-22 16:07:56,495 [DEBUG]: Attempting to attach stream 85 to circuit 6.
2016-04-22 16:07:56,836 [DEBUG]: Port 47536 preparing to attach to circuit 6
2016-04-22 16:07:56,836 [DEBUG]: Port 47536 circuit 6 waiting for stream.
2016-04-22 16:07:56,836 [DEBUG]: Attempting to attach stream 89 to circuit 6.
2016-04-22 16:07:57,100 [DEBUG]: Port 47540 preparing to attach to circuit 6
2016-04-22 16:07:57,100 [DEBUG]: Attempting to attach stream 91 to circuit 6.
2016-04-22 16:07:57,351 [DEBUG]: Port 47544 preparing to attach to circuit 6
2016-04-22 16:07:57,351 [DEBUG]: Port 47544 circuit 6 waiting for stream.
2016-04-22 16:07:57,352 [DEBUG]: Attempting to attach stream 93 to circuit 6.
2016-04-22 16:07:57,769 [DEBUG]: Port 47550 preparing to attach to circuit 6
2016-04-22 16:07:57,769 [DEBUG]: Port 47550 circuit 6 waiting for stream.
2016-04-22 16:07:57,769 [DEBUG]: Attempting to attach stream 96 to circuit 6.
2016-04-22 16:07:58,118 [DEBUG]: Port 47554 preparing to attach to circuit 6
2016-04-22 16:07:58,118 [DEBUG]: Port 47554 circuit 6 waiting for stream.
2016-04-22 16:07:58,118 [DEBUG]: Attempting to attach stream 98 to circuit 6.
2016-04-22 16:08:04,697 [DEBUG]: Circuit status change: CIRC 6 CLOSED
[fp],[fp] PURPOSE=PATH_BIAS_TESTING
TIME_CREATED=2016-04-22T20:07:53.305851 REASON=FINISHED
2016-04-22 16:08:05,878 [DEBUG]: Port 47690 preparing to attach to circuit 6
2016-04-22 16:08:05,878 [DEBUG]: Port 47690 circuit 6 waiting for stream.
2016-04-22 16:08:05,879 [DEBUG]: Attempting to attach stream 166 to circuit 6.
2016-04-22 16:08:05,879 [WARNING]: Failed to attach stream because:
Unknown circuit "6"
You can see that circuit 6 is no longer available, but the module is
still trying to use it. It looks like it lasted for almost exactly
ten seconds, which smells like a time limit, but I can't find any
relevant configuration parameters in the documentation.
Rather than trying to make a circuit survive as long as necessary,
which might not even be possible, it'd probably be better if exitmap
could notice that it's lost a circuit that's still in use and create a
new one. However, I'm not sure how to do that in the current
architecture. The existing code has "one circuit per module
invocation = one circuit per exit node" as a pretty deeply embedded
design constraint.
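One possible shape for that bookkeeping, as a sketch rather than a patch against exitmap: track which streams are still waiting on each circuit, and when a CLOSED event arrives, hand the orphans to a rebuild callback instead of letting the attach attempts fail.

```python
class CircuitTracker:
    """Remember which streams are still waiting on each circuit so a
    CLOSED event triggers a rebuild instead of doomed attach attempts.
    Names are made up; this is not exitmap's current design."""

    def __init__(self, rebuild_circuit):
        self.rebuild_circuit = rebuild_circuit  # callback(circ_id, orphan_ports)
        self.waiting = {}                       # circ_id -> [source ports]

    def stream_waiting(self, circ_id, port):
        self.waiting.setdefault(circ_id, []).append(port)

    def stream_attached(self, circ_id, port):
        ports = self.waiting.get(circ_id, [])
        if port in ports:
            ports.remove(port)

    def circuit_closed(self, circ_id):
        orphans = self.waiting.pop(circ_id, [])
        if orphans:
            self.rebuild_circuit(circ_id, orphans)
```

The rebuild callback would launch a fresh circuit to the same exit and re-attach the orphaned streams, which sidesteps the one-circuit-per-exit assumption without rewriting the event loop.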
Anyone have any ideas? Full logfile available on request, but it's huge.
zw
Hi All,
Currently, the shared random system treats the first vote of the next
commit phase specially - it's the only time it tries to agree on a new
shared random value.
This makes one consensus in each day particularly attractive to
adversaries (or particularly vulnerable to failures), because if we fail
to agree on that consensus, the next shared random value is never
agreed. This makes hidden service behaviour far more predictable
for the next 24 hours.
But we can try to agree 12 times on a new shared random value like this:
* for the first consensus in the new commit period:
* vote the calculated shared random value based on the commits and reveals we've seen;
* for subsequent consensuses in the new commit period:
* if we have an agreed shared random value from a trusted, previous
consensus in the period, vote that value;
* if not (that is, if the new shared random value is missing from all previous
consensuses in the period, or there is no trusted consensus), continue to
vote our calculated value.
This way, we try up to 12 times to agree on a shared random value. (But we
never change an agreed value after we've agreed on it.)
This is very similar to how the previous shared random value is determined.
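The rule above condenses to a single decision function (a sketch with made-up names):

```python
def srv_vote(calculated_srv, prior_srvs):
    """Pick the shared random value to vote.

    prior_srvs: new-period SRVs seen in trusted previous consensuses of
    this commit period, most recent first; None where a consensus had
    no SRV or was not trusted. Empty for the period's first consensus.
    """
    for srv in prior_srvs:
        if srv is not None:
            return srv           # never change a value once agreed
    return calculated_srv        # first consensus, or no agreement yet
```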
I've added #19045 to keep track of this:
https://trac.torproject.org/projects/tor/ticket/19045
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com
PGP 968F094B
ricochet:ekmygaiu4rzgsk6n