[tor-talk] CloudFlare blog post

Greg Norcie gnorcie at cdt.org
Thu Mar 31 18:58:13 UTC 2016


So the post seems to lean heavily towards proof of work in Tor Browser,
rather than towards running .onion sites. (Which apparently attract less
malicious traffic? Interesting tidbit)

My question: why not simply move to using SHA-256? The main point in the
blog seemed to be that using .onion sites is not workable due to the
use of SHA-1. Since the Tor Project has limited resources, it seems like
switching hashes and asking websites to use .onion addresses would create
less work for the devs but have a similar effect to a proof of work module
in Tor Browser.
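For context on the SHA-1 point: in the v2 hidden service scheme, an .onion
address is just the first 80 bits of the SHA-1 digest of the service's
DER-encoded RSA public key, base32-encoded. A minimal sketch (the key bytes
below are a placeholder, not a real DER-encoded key):

```python
import base64
import hashlib

def onion_v2_address(public_key_der: bytes) -> str:
    """Derive a v2 .onion address: base32 of the first 80 bits
    (10 bytes) of the SHA-1 digest of the DER-encoded RSA key."""
    digest = hashlib.sha1(public_key_der).digest()
    truncated = digest[:10]  # only 80 bits of the hash survive
    return base64.b32encode(truncated).decode("ascii").lower() + ".onion"

# Placeholder input; a real address is derived from the service's
# RSA-1024 public key in DER form.
print(onion_v2_address(b"placeholder-public-key"))
```

Note that swapping in SHA-256 would change the address format itself (the
16-character address only carries 80 bits of hash either way), which is
part of why "just switch hashes" is less trivial than it sounds.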

However, I may be missing something important, and if so please feel free
to enlighten me :)


/********************************************/
Greg Norcie (norcie at cdt.org)
Staff Technologist
Center for Democracy & Technology
District of Columbia office
(p) 202-637-9800
PGP: http://norcie.com/pgp.txt



*CDT's Annual Dinner (Tech Prom) is April 6, 2016.  Don't miss out! Learn
more at https://cdt.org/annual-dinner*
/*******************************************/

On Thu, Mar 31, 2016 at 2:04 PM, Andreas Krey <a.krey at gmx.de> wrote:

> On Thu, 31 Mar 2016 11:27:24 +0000, Joe Btfsplk wrote:
> ...
> > >What I wonder is how they want to make a difference using .onion
> > >addresses for their customers - tor crawlers can simply follow that
> > >redirect too.
> > Andreas, sorry - don't understand part of your comment.
> > "It would be quite a lot of effort to do... *what?*... this way... -
> > sorry, it won't work any better."
>
> They said that automatically providing cloudflared sites with
> onion addresses would make it easier to detect nonmalicious
> tor use, but I wonder why they expect that the bad guys won't
> immediately use the onion instead of the plain site as well.
>
> ...
> > I've seen Cloudflare on low value target sites, like wood screw mfg info
> > sites & similar.  Unless other screw mfgs are sabotaging them, I doubt
> > much malicious activity is directed at such sites.
>
> This is simply the default setting, I guess. CF isn't just
> an abuse shield, it is first a CDN. There are sites where
> there is nothing relevant to harvest, and there are sites
> where there is, but they all use cloudflare for different
> reasons, and get the scraper protection for free, not
> necessarily by intention.
>
> > 94% is saying essentially ALL Tor traffic / requests are "per se"
> > malicious or use inordinate amt of resources.  That leaves me & 6% of
> > users that aren't.
>
> Users != Traffic.
>
> > Maybe ? he's counting crawler *individual* requests - page by page - as
> > malicious?  They might make many more requests than real users, thus the
> > 94% claim?
>
> Quite probably.
>
> ...
> > His statement(s) & reasoning about blocking Tor still seem strange.  As
> > they say, "follow the money trail."  "Money trumps all other reasons /
> > motives."
>
> Tell that the authors of the software this mailing list is for.
>
> > I still say trackers aren't going to pay sites for TBB traffic. Don't
> > say, "You're using Tor - get lost" - bad for public relations.  Instead,
> > play dumb & covertly discourage (some) Tor users  - so they access the
> > site w/ unhardened browsers.
>
> Tracking is not cloudflare's business, it's the business of the site owner.
>
> > Can't sites tell the difference in actions of crawlers & real users?
>
> Not as easily as just using cloudflare as a front. Heck, my colleague
> has cloudflare in front of one of his sites, even though setting that
> up probably generated more traffic than the site sees on a good day.
>
> > I'm sure some use browsers other than TBB for crawling & malicious
> > activity.  Can't sites block / time-out crawlers from continuing to
> > access entire site, once it becomes apparent - regardless of which
> browser?
>
> Yes. That would lock out the entire exit, and given the crawling
> density, that apparently means tor users basically never get access.
>
> This is also what cloudflare does, just over a longer time, and
> giving a captcha instead of a reject.
>
> > I get "time outs" from making 2 very narrow term searches in < 2 min. or
> > so, on some sites I'm registered on & have participated in for a long
> > time.  Why can't sites do the same w/ crawlers' rapid, repeated requests?
>
> Crawlers would immediately get smart and stretch their requests out?
>
> Andreas
>
> --
> "Totally trivial. Famous last words."
> From: Linus Torvalds <torvalds@*.org>
> Date: Fri, 22 Jan 2010 07:29:21 -0800
> --
> tor-talk mailing list - tor-talk at lists.torproject.org
> To unsubscribe or change other settings go to
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
>
