[tor-talk] tor-blocking sites
art at globaleaks.org
Sun Feb 12 16:56:18 UTC 2012
On Feb 10, 2012, at 12:10 AM, Mike Perry wrote:
> As far as I know, no one has ever tried it. Some academics once pointed
> out that proof-of-work would not work for email, but that was primarily
> because email is often one-to-many. They did not consider one-to-one
> activity (like web page access) in their analysis. Perhaps everyone
> simply read their work and just assumed proof-of-work could never work
> for anything?
I think another thing that has not been properly dealt with in proof-of-work
systems is that they are generally not "ecological". By this I mean
that by doing a proof-of-work you are wasting resources, consuming
power on computation that serves no purpose beyond the proof itself.
This phenomenon is especially evident in systems such as Bitcoin, where
the hardcore miners make precise estimates of how much power they
will use up (and therefore money spent) relative to how much they are
able to get back (bitcoins generated).
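The miner's calculation described above can be sketched in a few lines. All the figures below (hash rates, prices, rewards) are placeholder assumptions for illustration, not real market data:

```python
# Illustrative back-of-the-envelope miner profitability estimate.
# Every number used here is a made-up assumption, not real market data.

def mining_profit_per_day(hash_rate_hs, watts, kwh_price,
                          network_hash_rate_hs, coins_per_day, coin_price):
    """Expected daily profit: share of the network reward minus electricity cost."""
    revenue = (hash_rate_hs / network_hash_rate_hs) * coins_per_day * coin_price
    power_cost = watts / 1000 * 24 * kwh_price  # kWh consumed per day times price
    return revenue - power_cost

# Hypothetical rig: 1 GH/s drawing 600 W, electricity at $0.15/kWh.
profit = mining_profit_per_day(
    hash_rate_hs=1e9, watts=600, kwh_price=0.15,
    network_hash_rate_hs=1e13, coins_per_day=7200, coin_price=5.0)
print(round(profit, 2))  # positive means mining beats the power bill
```

The point is simply that the miner turns power (money) into expected coin revenue, and mines only while the difference stays positive.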
A direction that I find extremely interesting is integrating distributed-computation
projects a la SETI@home, The Lattice Project, etc. into proof-of-work systems.
This way you would not merely be wasting your resources but would actually be
doing something useful.
If we fail to make proof-of-work systems ecological, I think we are letting the
spammers win. If proof-of-work starts to become widely adopted,
we should have thought about this today, rather than tomorrow when people
will be wasting petawatts of power on useless CPU cycles.
>> Did you try to estimate how much CPU work would get one a token once
>> such system is deployed full-scale, with spammers (possibly with
>> botnets) competing for resources? E.g., you can get a rule-of-thumb
>> estimate by putting some dollar value on a token, and looking at the
>> generic-CPU work required for an equivalent Bitcoin amount.
> The proposed system has two knobs that site admins can use: computation
> quantity, and computation freshness. As scraping abuse increases, admins
> would be free to set the "price" higher as needed, and require more
> recent, fresh computation as needed. When abuse is low, the requirements
> can be turned down.
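The two knobs above could be realized with a hashcash-style token whose verifier checks both how hard the work was (quantity) and how recently it was done (freshness). The function names, the SHA-256 construction, and the leading-zero-bits difficulty encoding below are my own illustrative assumptions, not the proposed system's actual design:

```python
import hashlib
import os
import time

# Sketch of the two knobs: difficulty_bits = computation quantity,
# max_age_s = computation freshness. Hypothetical, not a deployed protocol.

def solve(challenge: bytes, difficulty_bits: int) -> bytes:
    """Find a nonce so sha256(challenge || nonce) has difficulty_bits leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        n = nonce.to_bytes(8, "big")
        if int.from_bytes(hashlib.sha256(challenge + n).digest(), "big") < target:
            return n
        nonce += 1

def verify(challenge: bytes, nonce: bytes, issued_at: float,
           difficulty_bits: int, max_age_s: float) -> bool:
    """Accept only work that is both hard enough and fresh enough."""
    if time.time() - issued_at > max_age_s:       # freshness knob
        return False
    target = 1 << (256 - difficulty_bits)         # quantity knob
    return int.from_bytes(hashlib.sha256(challenge + nonce).digest(), "big") < target

challenge = os.urandom(16)
nonce = solve(challenge, difficulty_bits=16)      # ~65k hashes on average
print(verify(challenge, nonce, time.time(), 16, max_age_s=300))
```

When abuse rises, the admin raises `difficulty_bits` or lowers `max_age_s`; when things are quiet, both can be relaxed so legitimate clients pay almost nothing.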
> I created these two knobs because what we have seen over the years is
> that scraping abuse over Tor is not constant. Every few months, some
> jerk decides "Hey, I know, I'll scrape $SITEX and resell the data and
> make MEEELIONS", until the bans or captchas go up and they shut down.
> Then, all is quiet until the bans expire and the next jerk gets the idea
> a few months later. At least, this is the pattern that the Scroogle
> admin sees. I assume the situation is similar with Google directly, but
> they are very tight lipped.
This is a very good question, especially since we are seeing more and more
mobile devices and systems that don't have access to the same CPU power.
You just can't pick one size that fits all, because the most advanced mobile phone
will never have 1/100 of the CPU power of a recent desktop computer.
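One conceivable way around the one-size-fits-all problem would be to have the client benchmark itself and request a difficulty targeting a roughly constant solve time, so a phone does proportionally less work than a desktop. The 5-second target and the calibration loop are illustrative assumptions of mine, not anything proposed on this thread:

```python
import hashlib
import math
import time

# Sketch: calibrate difficulty to the device instead of fixing it globally.
# Target solve time and sample size are arbitrary illustrative choices.

def benchmark_hashes_per_second(sample=50_000):
    """Measure this device's SHA-256 throughput with a short timing loop."""
    data = b"benchmark"
    start = time.perf_counter()
    for i in range(sample):
        hashlib.sha256(data + i.to_bytes(4, "big")).digest()
    return sample / (time.perf_counter() - start)

def difficulty_for_target(hashes_per_second, target_seconds=5.0):
    """Leading-zero bits such that expected hash count ~= rate * target time."""
    expected_hashes = max(hashes_per_second * target_seconds, 1.0)
    return max(1, int(math.log2(expected_hashes)))

bits = difficulty_for_target(benchmark_hashes_per_second())
print(bits)  # more bits on a fast desktop, fewer on a slow phone
```

Of course this only shifts the problem: the server must then decide how much to trust a self-reported benchmark, which is exactly the kind of open question raised above.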