On 2013-09-12 09:25, Kevin Butler wrote:
[generic 203 proposal (and similar http scheme) comments]
- HTTPS requires certificates; self-signed ones can easily be blocked,
as they are self-signed and thus likely deemed unimportant.
If the certs are all 'similar' (same CA, formatting, etc.) they can be
blocked based on that. Because of the cert you need a hostname too, and
that gives another possibility for blocking.
- The exact fingerprints of both the client cert (if going that route) and
the server cert should be checked. There are too many entities with their
own Root CA, so the chain of trust cannot be relied upon, though it should
still be checked. (Generating a matching fingerprint for each hostname
still takes a while and cannot easily be done at connect-time.)
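To make the fingerprint check concrete, here is a rough Python sketch of pinning the server certificate's exact SHA-256 fingerprint instead of trusting whichever Root CA signed the chain. The pinned value, names, and connection helper are all illustrative, not part of any actual proposal:

```python
import hashlib
import socket
import ssl

# Hypothetical pinned SHA-256 fingerprint of the server cert, distributed
# out-of-band together with the bridge address (placeholder value).
PINNED_FP = hashlib.sha256(b"placeholder-der-cert").hexdigest()

def fingerprint_ok(der_cert: bytes, pinned_hex: str) -> bool:
    # Trust is anchored in the exact certificate, not in the CA chain.
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # chain is checked by fingerprint instead
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if not fingerprint_ok(der, PINNED_FP):
        sock.close()
        raise ssl.SSLError("certificate fingerprint mismatch")
    return sock
```

Note that the chain validation is deliberately disabled here only because the fingerprint comparison replaces it; in practice you would still sanity-check the chain as the text suggests.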
[..]
> I think the best course of action is to use a webserver's core
> functionalities to our advantage. I have not made much consideration for
> client implementation.

Client side can likely be done similarly to, or using, some work I am
working on, which we can hopefully finalize and put out in the open soon.

Server side, indeed, a module of sorts is the best way to go; you cannot
pass as a real webserver unless you are one. Still, you need to take care
of the headers you set, the responses you give, response times, etc.
> * The users Tor client (assuming they added the bridge), connects to
> the server over https(tls) to the root domain. It should also
> download all the resources attached to the main page, emulating a
> web browser for the initial document.

And that is where the trick lies: you basically would have to ask a real
browser to do so, as timing, how many items are fetched and how,
User-Agent and everything else are clear signatures of that browser.
As such, don't ever emulate. The above project would fit this quite well
(though we avoid any use of HTTPS due to the cert concerns above).
[..some good stuff..]
> * So we have our file F, and a precomputed value Z which was the
> function applied Y times and has a hmac H. We set a cookie on the
> client: base64("Y || random padding || H")
> o The server should remember which IPs were given this Y
> value.
Due to the way that HTTP/HTTPS works today, limiting/fixing on IP is
near impossible. There are lots and lots of people who are sitting
behind distributed proxies and/or otherwise changing addresses. (AFTR is
getting more widespread too).
Also note that some adversaries can do in-line hijacking of connections,
and thus effectively start their own connection from the same IP, or
replay the connection, etc. As such, IP-checking is mostly out.
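For illustration, the quoted cookie layout could be sketched roughly like this in Python; the secret, field sizes, and padding length are guesses at the "Y || random padding || H" layout, not the proposal's actual parameters:

```python
import base64
import hashlib
import hmac
import os
import struct

# Hypothetical shared secret; in the proposal this would be the current
# rotating key shared between the bridge and its clients.
SECRET = b"example-shared-secret"

def make_cookie(y: int) -> bytes:
    # H is an HMAC over Y, so the server can later verify it handed out
    # this Y itself. Layout: 4 bytes Y, 8 bytes padding, 32 bytes HMAC.
    y_bytes = struct.pack(">I", y)
    padding = os.urandom(8)
    h = hmac.new(SECRET, y_bytes, hashlib.sha256).digest()
    return base64.b64encode(y_bytes + padding + h)

def check_cookie(cookie: bytes) -> bool:
    raw = base64.b64decode(cookie)
    y_bytes, h = raw[:4], raw[12:]
    expected = hmac.new(SECRET, y_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(h, expected)
```

Verifying via HMAC rather than a server-side table is one way to avoid exactly the per-IP state that the paragraph above argues against.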
> This cookie should pretty much look like any session
> cookie that comes out of rails, drupal, asp, anyone who's doing
> cookie sessions correctly. Once the cookie is added to the
> headers, just serve the document as usual. Essentially this
> should all be possible in an apache/nginx module as the page
> content shouldn't matter.

While you can likely do it as a module, you will likely need to store
these details externally due to differences in the threading/forking models
of apache modules (likely the same for nginx; I have not invested time in
making that module for our thing yet, though with an externalized part
that is easy to do at some point).
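A rough sketch of what "storing the details outside" could look like; sqlite3 merely stands in for whatever external store a real apache/nginx module would talk to, and the schema is made up for illustration:

```python
import sqlite3
import time

# Worker processes in apache's forking/threading models each have their
# own memory, so per-client handshake state (here keyed by the cookie's
# Y value) has to live in a store shared across all workers.
def open_store(path: str = "handshake-state.db") -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS state "
               "(y INTEGER PRIMARY KEY, issued_at REAL)")
    return db

def remember(db: sqlite3.Connection, y: int) -> None:
    db.execute("INSERT OR REPLACE INTO state VALUES (?, ?)",
               (y, time.time()))
    db.commit()

def seen(db: sqlite3.Connection, y: int, max_age: float = 300.0) -> bool:
    row = db.execute("SELECT issued_at FROM state WHERE y = ?",
                     (y,)).fetchone()
    return row is not None and time.time() - row[0] < max_age
```

Any process-external store with expiry works here; the point is only that the state cannot live inside one worker's address space.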
[..]
> o When rotating keys we should be sure to not accept requests on
> the old handlers, by either removing them (404) or by 403ing
> them, whatever.

Better is to always return the same response but ignore any further
processing.
Note that you cannot know about pre-play or re-play attacks.
With SSL these fortunately become a bit less problematic,
but if MITMd they still exist.
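The "always return the same response" idea could be sketched like this; the handler names and the callback hook are illustrative, not from the proposal:

```python
# Whether the requested path is a current handler, an expired one, or
# plain unknown, the client always sees the same status and body; only
# the server-side bookkeeping differs, so probing old handlers reveals
# nothing.
FRONT_PAGE = b"<html><body>cover site front page</body></html>"

def handle(path, current_handlers, on_secret=lambda p: None):
    if path in current_handlers:
        on_secret(path)  # kick off the real bridge handshake out-of-band
    # Expired and unknown paths fall through to the identical response.
    return 200, FRONT_PAGE
```

Since a 404 or 403 on a formerly valid URL is itself a distinguisher, returning the cover page uniformly keeps rotated-out handlers indistinguishable from any other path.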
[..]
> o The idea here is that the webserver (apache/nginx) is working
> EXACTLY as a normal webserver should, unless someone hits these
> exact urls which they should have a negligible chance of doing
> unless they have the current shared secret. There might be a
> timing attack here, but in that case we can just add a million
> other handlers that all lead to a 403? (But either way, if
> someone's spamming thousands of requests then you should be able
> to ip block, but rotating keys should help reduce the
> feasibility of timing attacks or brute forcing?)

The moment you do a ratelimit you are denying possibly legitimate clients.
The only thing an adversary has to do is create $ratelimit amount of
requests, presto.
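On the timing-attack worry in the quoted text: instead of piling on decoy 403 handlers, comparing the requested path against the secret URL in constant time removes the signal. A minimal Python sketch (the function name is made up; hashing first makes the comparison length-independent):

```python
import hashlib
import hmac

def is_secret_path(path: str, secret_path: str) -> bool:
    # Hash both sides to fixed length, then compare in constant time so
    # an adversary cannot narrow down the secret URL byte by byte.
    a = hashlib.sha256(path.encode()).digest()
    b = hashlib.sha256(secret_path.encode()).digest()
    return hmac.compare_digest(a, b)
```

This avoids both the million-handler workaround and the rate limit that, as noted above, an adversary can trivially exhaust to lock out legitimate clients.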
> * So, how does the client figure out the url to use for wss://? Using
> the cache headers, the client should be able to determine which file
> is F.

I think this is a cool idea (using cache times), though it can be hard
to get right: some websites set nearly unlimited expiration times
on very static content. Thus you always need to be above that; how do
you ensure that?
Also, it kind of assumes that you are running this on an existing
website with HTTPS support...
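One way the client side of the cache-header idea might look, as a sketch: simply pick the fetched resource with the longest Cache-Control max-age as F. The function name and the header map are illustrative assumptions:

```python
import re

def pick_f(resources: dict) -> str:
    # resources maps each fetched URL to its Cache-Control header value.
    # The proposal's problem stands: if the site already serves
    # "max-age=31536000" on ordinary static files, F must beat that.
    def max_age(header):
        m = re.search(r"max-age=(\d+)", header or "")
        return int(m.group(1)) if m else -1
    return max(resources, key=lambda url: max_age(resources[url]))
```

This also makes the objection above concrete: the scheme only identifies F reliably if its cache lifetime is strictly the longest on the page, which an existing website's headers may not leave room for.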