2017-03-10 21:13 GMT+01:00 ng0 contact.ng0@cryptolab.net:
Massimo La Morgia transcribed 6.7K bytes:
On Fri, Mar 10, 2017 at 5:39 PM, David Fifield david@bamsoftware.com wrote:
On Fri, Mar 10, 2017 at 12:58:55PM +0100, Massimo La Morgia wrote:
we are a research group at Sapienza University, Rome, Italy. We do research on distributed systems, Tor, and the Dark Web. As part of our work, we have developed OnionGatherer, a service that gives up-to-date information about Dark Web hidden services to Tor users.
...and presumably helps you build a crowdsourced list of onion services that you plan to use for some other research purpose?
yes, of course in this way we are building a crowdsourced list of onion services, but it is not really different from onion directories. At this time we have no plans for other research that uses this crowdsourced list.
If you're planning a research project on Tor users, you should write to the research safety board and get ideas about how to do it in a way that minimizes risk. https://research.torproject.org/safetyboard.html
thank you for the suggestion.
This idea seems, to me, to have a lot of privacy problems. You're asking people to use Chrome instead of Tor Browser, which means they will be vulnerable to a lot of fingerprinting and trivial deanonymization attacks.
No, we are not asking people to use Chrome for browsing on Tor; we are offering a service that can help them know whether an onion address is up before they start browsing with Tor Browser.
Having only an extension for Chrome-based browsers implies asking users to use Chrome-based browsers. If there were a choice between Firefox and Chrome extensions, that implication would go away.
Yes, you're right, but we created this extension in order to offer a service to people. We chose to start with Chrome because it has a greater number of users. We would be happy if it were also used and developed for Firefox.
Your extension reports not only the onion domains that it finds, but also the URL of the page you were browsing at the time:

    var onionsJson = JSON.stringify({onions:onions, website: window.location.href});

You need to at least inform your research subjects/users what of their private data you are storing and what you are doing with it.
As you can see from the source code, we are not storing any sensitive data like IP addresses or user information. Do you think that the page URL alone can damage user privacy?
This aside, do you check whether the specific page still exists, or only the top-level onion domain you found this page on? If it's the full page, the improvement I'd suggest is to only use the top-level domain. I have not looked at your code.
Thank you for the suggestion, we'll improve the website's URL management asap.
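One possible approach (a sketch, not the extension's shipped code; the helper name is hypothetical): strip everything after the onion hostname before reporting, so no path or query string ever leaves the browser.

```javascript
// Hypothetical helper: reduce a full page URL to just its onion origin.
// Anything that is not a valid URL, or not an .onion host, reports nothing.
function toOnionOrigin(href) {
  try {
    var url = new URL(href);
    if (url.hostname.endsWith(".onion")) {
      return url.protocol + "//" + url.hostname;
    }
  } catch (e) {
    // not a parseable URL; better to drop it than to leak the raw string
  }
  return null;
}

// toOnionOrigin("https://facebookcorewwwi.onion/profile?id=123")
//   gives "https://facebookcorewwwi.onion" -- path and query are discarded
```

This keeps the liveness check useful (the service only needs the domain anyway) while removing the most sensitive part of the reported data.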
You're using two different regexes for onion URLs that aren't the same. The one used during replacement doesn't match "https", so I guess it will fail on URLs like https://facebookcorewwwi.onion/.

    /^(http(s)?://)?.{16}(.onion)/?.*$/
    /(http://)?\b[\w\d]{16}.onion(/[\S]*|)/
Yes, you're right, thank you for the feedback.
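A single regex used for both detection and replacement would avoid the two patterns drifting apart. A sketch (assuming v2 onion addresses, i.e. 16 base32 characters; this is not the extension's actual code):

```javascript
// One pattern for both scan and replace: optional http or https scheme,
// a 16-character base32 onion label, and an optional path.
var onionRe = /(https?:\/\/)?\b[a-z2-7]{16}\.onion(\/\S*)?/g;

var text = "see https://facebookcorewwwi.onion/ and expyuzz4wqqyqhjn.onion too";
var found = text.match(onionRe);
// matches both addresses, with or without the https scheme
```

Using `[a-z2-7]` instead of `.{16}` or `[\w\d]{16}` also rules out characters that can never appear in a base32 onion label.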
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev