[tor-dev] Tor and DNS - draft finalized into proposal
ondrej.mikle at gmail.com
Sat Mar 10 14:22:20 UTC 2012
The DNS/DNSSEC resolving draft seems to be finished.
I added a few thoughts on mitigating circuit correlation (mentioned in proposal
171). Somebody could look at those and check they are not totally stupid (last
two paragraphs of section 7).
A note has been added about "DNSSEC stapling" (extremely difficult, won't be
implemented; see section 10).
The draft is here (full text pasted at the end of this mail):
The draft could probably be given a "proposal number" and merged into the
torspec proposals directory unless there is an objection.
I'll leave a few weeks (2-3) in case someone finds a vulnerability or has an
objection. After that I could slowly begin implementing it in a separate branch.
---- pasted proposal (hopefully will wrap well) ----
Title: Support for full DNS and DNSSEC resolution in Tor
Authors: Ondrej Mikle
Created: 4 February 2012
Modified: 10 March 2012
This proposal adds support for any DNS query type to Tor, as well as DNSSEC
support.
Many applications running over Tor need more than just resolving an FQDN to
IPv4 and vice versa. Sometimes, to prevent DNS leaks, applications have to
be hacked around to be supplied the necessary data by hand (e.g. SRV records
in XMPP). TLS connections will benefit from the planned TLSA record, which
provides certificate pinning to avoid another DigiNotar-like fiasco.
DNSSEC is part of the DNS protocol, and the most appropriate place for a
DNSSEC API would probably be in OS libraries (e.g. libc). However, it will
probably take time until that becomes widespread.
On Tor's side (as opposed to the application's side), DNSSEC will provide
protection against DNS cache-poisoning attacks (provided that the exit is not
itself malicious; even then it reduces the attack surface).
1.1 New cells
There will be two new cells, RELAY_DNS_BEGIN and RELAY_DNS_RESPONSE (we'll
use DNS_BEGIN and DNS_RESPONSE for short below).

DNS_BEGIN payload:

  DNS packet data (variable length)

The DNS packet must be generated internally by libunbound, to avoid
fingerprinting users by differences in client resolvers' behavior.

DNS_RESPONSE payload:

  total length (2 octets)
  data (variable length)

Data contains the reply DNS packet, or a part of it if the packet would not
fit into the cell. Total length describes the length of the complete
response packet.

AXFR and IXFR are not supported in this cell by design (see the specialized
tool in section 6).
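The chunking described above can be sketched as follows. This is a minimal
sketch, not the implementation: the 498-byte usable RELAY payload (509-byte
cell payload minus the 11-byte relay header) and the presence of the length
prefix in every cell are assumptions, since the draft does not pin down the
exact framing.

```python
import struct

RELAY_PAYLOAD_LEN = 498  # usable payload of a Tor RELAY cell (509 - 11)

def split_dns_response(packet: bytes):
    """Split a DNS reply into DNS_RESPONSE cell payloads.

    Each payload starts with the 2-octet total length of the complete
    reply, followed by as many packet bytes as fit into the cell.
    """
    total = struct.pack(">H", len(packet))
    chunk_size = RELAY_PAYLOAD_LEN - 2
    return [total + packet[i:i + chunk_size]
            for i in range(0, len(packet), chunk_size)]

def reassemble(cells):
    """Client side: concatenate chunks until `total length` bytes arrive."""
    (total,) = struct.unpack(">H", cells[0][:2])
    data = b"".join(c[2:] for c in cells)
    assert len(data) == total, "incomplete reply; keep waiting or purge"
    return data
```

This also shows why the client must purge incomplete replies (section 5): a
partial set of cells never satisfies the declared total length.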
2. Interfaces to applications
DNSPort evdns - the existing implementation will be updated to use DNS_BEGIN.

SOCKS proxy - a new command will be added, containing RR type, class and
query. The response will simply contain the DNS packet.
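For illustration, here is a minimal DNS query packet of the kind such a
SOCKS command would carry, built by hand with the stdlib. In the proposal
the packet itself is produced by libunbound; this sketch just shows the
wire format (header, QNAME labels, QTYPE, QCLASS).

```python
import struct
import secrets

def build_dns_query(name: str, qtype: int, qclass: int = 1) -> bytes:
    """Build a minimal DNS query packet (header + question section)."""
    txid = secrets.randbits(16)          # random transaction ID
    flags = 0x0100                       # RD (recursion desired) bit set
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

# An SRV lookup such as an XMPP client needs (QTYPE 33 = SRV):
pkt = build_dns_query("_xmpp-client._tcp.example.org", 33)
```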
3. New options in configuration file
libunbound takes a couple of parameters, e.g. trust anchors and cache size.
In order not to put them all into torrc, there will be only one option: the
configuration file name. Tor will be distributed with some sensible
defaults. The new option will be named UnboundConfig and its value will be
the path to the configuration file.
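A hypothetical torrc line plus a small unbound-style configuration might
look like the following. The option name comes from this draft; the file
path and cache sizes are made-up examples, though auto-trust-anchor-file
and the cache-size options are standard unbound settings.

```
# torrc (option proposed in this draft)
UnboundConfig /etc/tor/unbound.conf

# /etc/tor/unbound.conf
server:
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    msg-cache-size: 4m
    rrset-cache-size: 4m
```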
An option DNSQueryPolicy will determine what query types and classes are
allowed:

- common - class INTERNET, RR types listed on a whitelist
- full - any query type and class is allowed
Class CHAOS in "common" would not be of much use, since its prevalent use
is querying authoritative servers directly.
On the client side, full validation will be optional, controlled by the
option DNSValidation (0|1). Validation is turned on by default; otherwise
it would be easy to fingerprint people who turned it on and asked for
not-so-common records like SRV.
4. Changes to directory flags
Exit nodes will signal their resolving capability by two flags:
- CommonDNS - reflects "common" DNSQueryPolicy
- FullDNS - reflects "full" DNSQueryPolicy
An exit node asked for an RR type not in the CommonDNS policy will return
REFUSED as the status in the reply DNS packet contained in the DNS_RESPONSE
cell. If new types are added to the CommonDNS set (e.g. a new RFC adds a
record type) and the exit node's Tor version does not recognize them as
allowed, it will send REFUSED as well.
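A sketch of the exit-side policy check. The concrete RR-type whitelist here
is an assumption standing in for the list the draft references; the REFUSED
behavior for unknown types matches the rule above.

```python
# Hypothetical "common" whitelist (A, NS, CNAME, SOA, PTR, MX, TXT,
# AAAA, SRV, DS, RRSIG, NSEC, DNSKEY, NSEC3PARAM) -- an assumption.
COMMON_RR_TYPES = {1, 2, 5, 6, 12, 15, 16, 28, 33, 43, 46, 47, 48, 52}
CLASS_IN = 1
RCODE_REFUSED = 5

def check_query(policy: str, qtype: int, qclass: int):
    """Return None if the query is allowed, else the RCODE for the reply.

    A type the node does not recognize as whitelisted is refused, which
    also covers types added to the set after this Tor version shipped.
    """
    if policy == "full":
        return None
    if policy == "common" and qclass == CLASS_IN and qtype in COMMON_RR_TYPES:
        return None
    return RCODE_REFUSED
```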
5. Implementation notes
There will be one instance of ub_ctx (the libunbound resolver structure) in
Tor; libunbound is thread-safe.

The client will periodically purge incomplete DNS replies. Any unexpected
DNS_RESPONSE will be dropped.

Requests for special names (.onion, .exit, .noconnect) will return REFUSED.
RELAY_BEGIN will function "normally"; there is no need for returning DNS
data. In the case of a malicious exit, the client can't check that it's
really connected to whatever IP is in the A/AAAA record anyway. We won't
send any NSEC/NSEC3 back in case the FQDN does not exist; it would
needlessly complicate things. The client can check with an extra query on
DNSPort.
The AD flag must be zeroed out on the client unless validation is performed.
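Zeroing the AD (Authenticated Data) bit only touches the DNS header: the
flags are octets 2-3, and AD is bit 5 of the second flags octet. A sketch:

```python
def clear_ad_flag(packet: bytes) -> bytes:
    """Zero the AD bit in a DNS reply packet.

    The 16-bit flags field sits at header octets 2-3; the AD bit is
    mask 0x20 in the low flags octet.
    """
    if len(packet) < 4:
        return packet  # too short to be a DNS header; leave untouched
    flags_lo = packet[3] & ~0x20
    return packet[:3] + bytes([flags_lo]) + packet[4:]
```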
6. Separate tool for AXFR
The AXFR tool will have an interface similar to tor-resolve's, but will
return raw DNS data.

Its parameters are: the query domain and the IP of the authoritative DNS
server.

The tool will transfer the data through an "ordinary" tunnel using
RELAY_BEGIN and related cells.
This design decision serves two goals:
- DNS_BEGIN and DNS_RESPONSE will be simpler to implement (lower chance of
  introducing bugs)
- in practice it's often useful to do AXFR queries on secondary
  authoritative DNS servers

IXFR will not be supported (an infrequent corner case; it can be done by
manual tunnel creation over Tor if truly necessary).
7. Security implications
The client as well as the exit MUST block attempts to resolve local
RFC 1918, 4193, and 4291 addresses (PTR).
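A sketch of that check: map a reverse (PTR) query name back to an address
and test it against private space. Python's stdlib ipaddress module is used
for illustration; a real implementation would work on the parsed packet.

```python
import ipaddress

def is_blocked_ptr(qname: str) -> bool:
    """True if a PTR query name reverses into RFC 1918 / ULA (4193) /
    link-local or loopback (4291) address space."""
    name = qname.rstrip(".").lower()
    try:
        if name.endswith(".in-addr.arpa"):
            # Reverse the dotted octets back into an IPv4 address.
            octets = name[:-len(".in-addr.arpa")].split(".")[::-1]
            addr = ipaddress.ip_address(".".join(octets))
        elif name.endswith(".ip6.arpa"):
            # Reverse the nibble labels back into an IPv6 address.
            nibbles = name[:-len(".ip6.arpa")].split(".")[::-1]
            hexstr = "".join(nibbles)
            addr = ipaddress.ip_address(
                ":".join(hexstr[i:i + 4] for i in range(0, 32, 4)))
        else:
            return False
    except ValueError:
        return False  # malformed reverse name; not a blockable PTR
    return addr.is_private or addr.is_link_local or addr.is_loopback
```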
An exit node resolving names will use libunbound for all types of
resolving, including the lookup of A/AAAA records when connecting a stream
to the desired server. Ordinary streams will thus gain a small benefit of
defense against DNS cache poisoning on the exit node's network.
The transaction ID is generated randomly by libunbound; there is no need to
modify it. This affects only DNSPort and the SOCKS interface.
As proposal 171 mentions, we need to mitigate circuit correlation. One
solution would be keeping multiple streams to multiple exit nodes and
picking one at random for DNS resolution. Another would be keeping the
DNS-resolving circuit open only for a short time (e.g. 1-2 minutes).
Yet another option for mitigating circuit correlation would be having a
separate circuit for each application, but that would require some
cooperation between the application and Tor, e.g. via some LD_PRELOAD
mechanism.
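The first two mitigations can be modeled roughly like this; the pool size
and the exact expiry policy are assumptions, since the draft only suggests
the 1-2 minute lifetime.

```python
import random
import time

CIRCUIT_LIFETIME = 120   # seconds; the draft suggests 1-2 minutes
POOL_SIZE = 3            # assumed pool size, not specified in the draft

class DnsCircuitPool:
    """Toy model of two mitigations from section 7: keep several
    DNS-resolving circuits to different exits, pick one at random per
    query, and retire circuits after a short lifetime."""

    def __init__(self, exits):
        self.exits = exits
        self.circuits = {}   # exit identity -> creation time

    def pick(self, now=None):
        if now is None:
            now = time.monotonic()
        # Retire circuits that have exceeded their lifetime.
        self.circuits = {e: t for e, t in self.circuits.items()
                         if now - t < CIRCUIT_LIFETIME}
        # Top the pool back up with circuits to randomly chosen exits.
        while len(self.circuits) < POOL_SIZE:
            self.circuits.setdefault(random.choice(self.exits), now)
        return random.choice(list(self.circuits))
```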
8. TTL normalization idea
This is a bit complex to implement, because it requires parsing DNS packets
at the exit node.

The TTL in the reply DNS packet MUST be normalized at the exit node so that
the client won't learn what other clients queried. The normalization is
done as follows:

- for an RR, the original TTL value received from the authoritative DNS
  server should be used when sending DNS_RESPONSE, trimming the values to a
  fixed interval
- this does not pose a "ghost-cache attack", since once an RR is flushed
  from libunbound's cache, it must be fetched anew
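The trimming step is essentially a clamp; the bounds below are assumed
examples, since the draft leaves the interval unspecified.

```python
# Assumed bounds; the draft only says TTLs are trimmed to an interval.
TTL_MIN, TTL_MAX = 60, 86400

def normalize_ttl(original_ttl: int) -> int:
    """Clamp the authoritative TTL so a client cannot tell whether the
    record came fresh from upstream or from the exit's cache (where the
    remaining TTL would otherwise count down and leak query history)."""
    return max(TTL_MIN, min(original_ttl, TTL_MAX))
```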
9. Notes on libunbound behavior
I noticed that libunbound does not always parallelize requests that could
be parallelized when using a forwarder (this does not apply to unrelated
queries). Thus, an A query for addons.mozilla.org looks like this (note the
interleaving of query/response):
0.000000 Standard query A addons.mozilla.org
0.178366 Standard query response A 184.108.40.206 RRSIG
0.178572 Standard query DNSKEY <Root>
0.178617 Standard query response DNSKEY DNSKEY RRSIG
0.178981 Standard query DS org
0.179041 Standard query response DS DS RRSIG
0.179192 Standard query DNSKEY org
0.179233 Standard query response DNSKEY DNSKEY DNSKEY DNSKEY RRSIG RRSIG
0.179505 Standard query DS mozilla.org
0.179562 Standard query response DS RRSIG
0.179717 Standard query DNSKEY mozilla.org
0.179762 Standard query response DNSKEY DNSKEY DNSKEY RRSIG RRSIG
Further investigation is needed on how to work around this. Maybe a future
version will have it fixed, since I can see that DNS queries exiting from
an unbound forwarder to authoritative DNS servers are parallelized.
10. "DNSSEC stapling"
The following idea tries to mitigate an attack whereby an observer of the
exit node can learn that a client's OR is "heating up" its DNS cache.
Instead of asking for several records (DS, DNSKEY, etc.) one by one, the
exit node would send all of them at once in a "stapled response".
Unfortunately this is extremely difficult to implement correctly. Thus we
need to live with the fact that the exit node, or an eavesdropper on such
an exit node, will know that an OR used some TLD for the first time.
Causing unrelated errors or vulnerabilities in Tor by implementing this
algorithm is not worth the risk.