[tor-dev] Tor and DNS

Ondrej Mikle ondrej.mikle at gmail.com
Wed Feb 8 00:33:35 UTC 2012


On 02/07/2012 07:18 PM, Nick Mathewson wrote:
> On Sat, Feb 4, 2012 at 10:38 PM, Ondrej Mikle <ondrej.mikle at gmail.com> wrote:
>> First draft is ready here:
>>
>> https://github.com/hiviah/torspec/blob/master/proposals/ideas/xxx-dns-dnssec.txt
> 
> Some initial comments:
> 
>>  DNS_BEGIN payload:
>>
>>    RR type  (2 octets)
>>    RR class (2 octets)
>>    ID       (2 octets)
>>    length   (1 octet)
>>    query    (variable)
>>
>>  The RR type and class match counterparts in DNS packet. ID is for
>>  identifying which data belong together, since response can be longer
>>  than single cell's payload. The ID MUST be random and MUST NOT be
>>  copied from xid of request DNS packet (in case of using DNSPort).
> 
> I think you can dispense with the "ID" field entirely; the "StreamID"
> part of the relay cell header should already fulfill this role, if I'm
> understanding the purpose of "ID" correctly.

You're understanding the purpose correctly. I thought that multiple requests
could be multiplexed over a single stream, but after re-reading tor-spec.txt,
we can just use StreamID the same way as for RELAY_RESOLVE(D). So let's ditch
the ID.

> Like Jakob, I'm wondering why there isn't any support for setting flags.

See my response to Jakob. I don't think it's worth using anything other than
flags 0x110 (normal query, recursive, non-authenticated data OK) with the DO
bit set. Unless there is a really good reason for other flags, they would only
have the potential to leak identifying bits.

We could add extra reserved fields in the spec for the flags and the OPT RR;
for now the client will memset them to zero and the exit node will ignore them.
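To make the reserved-field idea concrete, here is a minimal sketch of packing
such a DNS_BEGIN payload. It assumes the revised layout (ID dropped in favour
of StreamID, a 2-octet reserved flags field zeroed by the client); the field
sizes are taken from the draft, the function name is hypothetical.

```python
import struct

def pack_dns_begin(rr_type, rr_class, qname):
    """Pack a DNS_BEGIN relay cell payload (sketch of a revised layout).

    Assumed layout: RR type (2 octets), RR class (2 octets),
    reserved flags (2 octets, zeroed for now), length (1 octet),
    query (variable).  The exit node ignores the reserved field.
    """
    query = qname.encode("ascii")
    if len(query) > 255:
        raise ValueError("query name too long for 1-octet length field")
    return struct.pack("!HHHB", rr_type, rr_class, 0, len(query)) + query
```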

> I wonder whether the "length" field here is redundant with the
> "length" field in the relay header.  Probably not, I guess: Having a
> length field here means we can send
> 
>>  DNS_RESPONSE payload:
>>
>>    ID           (2 octets)
>>    data length  (2 octets)
>>    total length (4 octets)
>>    data         (variable)
> 
> So to be clear, if the reply is 1200 bytes long, then the user will
> receive four cells, with relay payload contents:
>  { ID = x, data_len = 490, total_len = 1200, data = bytes[0..489] }
>  { ID = x, data_len = 490, total_len = 1200, data = bytes[490..979] }
>  { ID = x, data_len = 220, total_len = 1200, data = bytes[980..1199],
> zero padding }

Your example with the 1200-byte reply is correct.
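For illustration, the exit side of that fragmentation can be sketched as
follows. The 490-byte chunk size assumes the 498-byte relay payload minus the
8-byte inner header (ID 2 + data length 2 + total length 4); the function name
is hypothetical, and the trailing zero padding is left to the cell layer.

```python
import struct

CELL_DATA_MAX = 490  # relay payload (498) minus 8-byte inner header (assumed)

def fragment_response(ident, packet):
    """Split a DNS reply into DNS_RESPONSE cell payloads (sketch).

    Each payload: ID (2) | data length (2) | total length (4) | data.
    Every cell repeats the same total length, as in the draft.
    """
    total = len(packet)
    cells = []
    for off in range(0, total, CELL_DATA_MAX):
        chunk = packet[off:off + CELL_DATA_MAX]
        cells.append(struct.pack("!HHI", ident, len(chunk), total) + chunk)
    return cells
```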

> Also, in this case,
> I think the length field in this packet _is_ redundant with the length
> field of the relay cell header.

The inner "length" might be useful if we wanted to add an extra field later
(though that may be a bad idea for other reasons, such as confusing an older
OP if we did add a field).

> I think the total_len field could be replaced with a single bit to
> indicate "this is the last cell".

An "end" bit would work, but I find it easier to know beforehand how much data
to expect - we don't have to worry about realloc and memory fragmentation. The
client could deny the request right away if the claimed total_length is too
high (or later, if the OR keeps pushing more data than claimed).

That also means AXFR/IXFR would be off limits (I'm OK with that).
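The client-side logic argued for above (preallocate from total_len, deny
oversized claims up front, deny an OR that keeps pushing) could look roughly
like this. The 65535-byte cap is an assumed value, which also rules out
AXFR/IXFR-sized transfers; the class name is hypothetical.

```python
import struct

MAX_RESPONSE = 65535  # assumed cap on claimed total length; deny above this

class Reassembler:
    """Collect DNS_RESPONSE cells into one packet (sketch).

    Knowing total_len up front lets the client reject an oversized
    claim immediately, and reject an OR that sends more than claimed.
    """
    def __init__(self):
        self.total = None
        self.buf = bytearray()

    def add_cell(self, payload):
        ident, data_len, total = struct.unpack("!HHI", payload[:8])
        if total > MAX_RESPONSE:
            raise ValueError("claimed total_length too high; deny request")
        if self.total is None:
            self.total = total
        elif total != self.total or len(self.buf) + data_len > self.total:
            raise ValueError("OR pushed more data than claimed; deny")
        self.buf += payload[8:8 + data_len]
        return len(self.buf) == self.total  # True once complete
```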

> 
>>  Data contains the reply DNS packet. Total length describes length of
>>  complete response packet.
> 
> I think we want to do some sanitization on the reply DNS packet. In
> particular, we have no need to say what the transaction ID was, or

Sure, we can scrub the transaction ID in the reply (the xid should be random
and the client knows where the exit node is anyway, but why not).

> Initial Questions:
> 
> When running in dnsport mode, it seems we risk leaking information
> about the client resolver based on which requests it makes in what
> order.  Is that so?

Yes. For example, a validating vs. a non-validating resolver is very easy to
spot. An attacker eavesdropping on the exit node might have a harder time due
to caching in libunbound, but a malicious exit node can spot a validating
resolver just by the fact that it asks for DS/DNSKEY records.

Thus client-side validation when using DNSPort or SOCKS resolve must be on by
default.

> How many round trips are we looking at here for typical use cases, and
> what can we do to reduce them?  We've found that anything that adds
> extra round trips to opening a connection in Tor is a real problem for
> a lot of use cases, and so we should try to avoid them as much as
> possible.

Requiring client-side validation for A/AAAA in RELAY_BEGIN is pointless (it
would only make things slower): the client cannot check where the exit node
connects, and an eavesdropping attacker can easily tell which DNS request
belongs to a DNSPort request and which to a RELAY_BEGIN (that's true of the
current implementation as well - if no TCP connection follows, it's a
DNSPort/SOCKS resolve request).

So there is no additional overhead for RELAY_BEGIN.

Case of DNSPort queries - example for addons.mozilla.org with empty cache:

 Standard query A addons.mozilla.org
 Standard query DNSKEY <Root>
 Standard query DS org
 Standard query DNSKEY org
 Standard query DS mozilla.org
 Standard query DNSKEY mozilla.org

Note that we could "preheat" the cache by resolving DS and DNSKEY records for
common TLDs like com, net, and org at Tor start (regardless of whether DNSPort
is on), just as TBB "preheats" check.torproject.org now :-)
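The preheating step amounts to a fixed list of DS/DNSKEY queries to feed the
resolver at startup. A sketch, assuming a caller that issues the returned
queries through libunbound (the function name and TLD list are illustrative;
the RR type codes are from RFC 4034):

```python
RR_TYPE_DS = 43      # RFC 4034
RR_TYPE_DNSKEY = 48  # RFC 4034

COMMON_TLDS = ["com", "net", "org"]

def preheat_queries(tlds=COMMON_TLDS):
    """Return the (name, rrtype) queries to issue at Tor start so the
    resolver cache already holds the validation chain for common TLDs
    (sketch; actually resolving them is left to the caller)."""
    queries = [(".", RR_TYPE_DNSKEY)]  # root keys first
    for tld in tlds:
        queries.append((tld, RR_TYPE_DS))
        queries.append((tld, RR_TYPE_DNSKEY))
    return queries
```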

To give you an idea of how it looks as the cache fills up, here are three
requests - for "addons.mozilla.org", "api-dev.bugzilla.mozilla.org", and
"www.torproject.org" - starting with an empty cache:

 Standard query A addons.mozilla.org
 Standard query DNSKEY <Root>
 Standard query DS org
 Standard query DNSKEY org
 Standard query DS mozilla.org
 Standard query DNSKEY mozilla.org
 Standard query A api-dev.bugzilla.mozilla.org
 Standard query A www.torproject.org
 Standard query DS torproject.org
 Standard query DNSKEY torproject.org

Ondrej
