[tor-dev] [draft] Proposal 219: Support for full DNS and DNSSEC resolution in Tor

Bry8 Star bry8star at inventati.org
Tue Aug 13 18:10:10 UTC 2013


After a quick read of it, I'm adding a bit more info below; it might
or might not be related, and might or might not be helpful.

DNS packets from the Tor-client side toward a DNS server should be
encrypted, to keep them non-viewable on exit-node computers, as a
malicious/rogue Tor-node operator can log them or build correlation
lists, and other entities can do the same in the middle of the path
from the exit node to the destination DNS server.  DNS packets which
a DNS server located on a Tor exit node sends toward and receives
from the various open-internet DNS servers can be unencrypted; but
then, when the Tor software uses that exit node's DNS
resolver/server, a rogue operator can modify the DNS data.

So a few TRUSTED DNS servers are required, and encrypted connections
are also required, for Anonymity and Privacy.

If the open-source Tor exit-node software and the open-source
libunbound etc. components are kept and run unmodified by the
exit-node operator, then such an exit node can be trusted somewhat,
I guess.

Trusted (forwarding) DNS servers will need to connect with other DNS
servers on the open-internet side and obtain the DNS query results.
The connection with open-internet DNS servers can be encrypted or
unencrypted, as DNSSEC-authenticated DNS responses cannot normally
be falsified.

So how can the Tor Project know or find out whether a Tor exit node
is really running the unmodified Tor server and libunbound
source/software, or a customized/modified version?

I think some way/mechanism needs to be developed to get the running
binary's hash or memory checksum (of certain key portions of the
libunbound and Tor server software), send it back to the Tor
Project's onion host, and have the Tor Project's server decide
whether it passes a certain integrity test before that node can
become, and continue as, an exit node for others to use.  I guess
rogue/malicious exit-node operators will then put their effort into
bypassing such integrity checks. :(

When the Tor software is first started, and again at RANDOM points
in time (at a random frequency/rate), some form of INTEGRITY check
needs to be run on all exit nodes (or on all nodes), again & again,
and a node should be removed as an exit node (or as a node) when it
is found compromised.
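The local half of such an integrity check — computing a digest over the installed binaries — is the easy part; the hard part is making a remote party trust the result, since a modified binary can lie about its own hash. A minimal sketch in Python, using only the standard library (the path argument is whatever binary one wants to check; nothing here is from an actual Tor mechanism):

```python
import hashlib

def file_digest(path, algo="sha256", chunk=65536):
    """Return the hex digest of a file, read in fixed-size chunks
    so arbitrarily large binaries can be hashed."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()
```

This only attests what is on disk; it says nothing about what is running in memory, which is why a remote-attestation scheme is much harder than this sketch.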

DNSSEC-based DNS resolving and authentication/verification works
like a chain (DNS level after level), with the starting point
pre-trusted.  Something similar needs to be done inside (or for) the
Tor software and libunbound, so that a running Tor
component/instance can be (somewhat) "trusted" via some
authentication/verification.  Either the Tor Project, or the user
running the Tor proxy, needs to be shown some type of success/fail
indicator, and it should be visible in Vidalia's main window (or in
the tray icon or panel icon) as some type of icon, picture, or word.

In many areas (especially for HTTP and HTTPS) I keep reading, or
being reminded by devs or users, that many things are purposely kept
close to real-world usage, so that they do not single out users and
anonymity remains intact.  So when it comes to DNS traffic, doing
extra things inside the DNS data, or reducing DNS usage below normal
or standard usage, should not be done either.  So please first try
to stay close to what a normal DNS server/resolver does.

Currently, I think there is no way to tell BIND or Unbound or (most)
other authoritative or forwarding DNS servers over port 53 to
initiate & use encrypted DNS packets & traffic (for DNS queries and
responses) with all types of DNS servers, as I think there is no
such IETF standard.  So for now DNS is similar to HTTP; it is NOT
similar to HTTPS: others can see what goes in and out in open
(unencrypted) DNS packets.

If HTTPS traffic packets and their overhead can be handled by HTTPS
servers, then encrypted-DNS traffic should not be an overhead for
DNS servers either, because the client side has a CACHE, and the TTL
value in each DNS record instructs every type of DNS client not to
query again for that record until the TTL has expired.

So one can ask: based on the above, why has an ENCRYPTED-DNS
standard or feature not been published by the IETF yet, or deployed?
Who (or which entities) benefits from the current state of open,
unencrypted DNS traffic?

Some form of RFC/IETF standard needs to be created, or added to
DNSSEC, for encrypted DNS packets (and published, so that the
related developers and users can incorporate it into their
implementations, software, and servers).  Maybe something similar to
DNS-over-TLS or DNS-over-DTLS is needed.  In DNSSEC, a DNS
level/server keeps certain DNS record(s) for the next DNS level
below it; similarly, another DNS record is needed to publish the
encryption public key used to connect with the next DNS level's
NS/DNS server.  Maybe it could be called "TLSD", or "TLSNS", etc.
Or the current "DS" record could be enriched further, with an option
added to specify a full TLS cert for the encrypted-DNS connection,
so that a second "DS" record at the upper DNS level can be added &
used as the encryption public key.  I think such a "TLSD", "TLSNS",
or "DS" record would need to hold/show the "full" encryption
certificate.  Since some TLDs hold records for millions of SLDs, a
TLD-level DNS server would see a higher DNS traffic load on itself,
(but I think, since they already use over 100,000 interconnected
servers, anycast, etc., that should not cause a massive amount of
load, and again the TTL value helps reduce how frequently queries
recur).  A Second-Level-Domain (SLD) level DNS/NS server will not
have a heavy load unless the operator/owner uses many, many
sub-domains under the SLD; most SLDs have no more than 5 or 6
sub-domains.  Root-level DNS servers will not have any significant
load either, as there are only around a few hundred entries at the
next DNS level under them, the TLDs.

So IF this ever becomes reality (as a new feature of DNSSEC), then
along with placing a "DS" record at the upper DNS level in advance,
a domain-owner/zone-operator would also have to keep a "TLSD" or
"TLSNS" record (as mentioned above).  (Alternatively, IF the current
"DS" record is enriched with an option to declare/show the FULL
encryption public key, then no new DNS record is needed; using
another "DS" record would suffice.)  Client-side DNS resolvers would
then obtain & use that (public-side) encryption key to create an
encrypted connection with the next level's NS/DNS servers (as with
DNSSEC), and obtain the next DNS level's info in encrypted form.
Then all DNS traffic packets would be encrypted, like HTTPS, and no
one in the middle could normally see what is inside the packets.

1.1.1. DNS_BEGIN : [without a thorough investigation, I've observed
that 496 bytes will be just about barely enough to fit one TLSA
record holding the entire/FULL association data of a 4096-bit-based
SSL cert. -Bry8]
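For reference, the DNS_RESPONSE framing in the quoted draft (section 1.1.1) splits a record across cells: a 2-octet total length precedes the data, and each cell begins with a STATUS octet whose low bit marks the final cell. A sketch of that chunking, assuming the 498-byte relay payload implied by the draft's own example:

```python
CELL_PAYLOAD = 498  # relay-cell payload size implied by the draft's example

def chunk_record(record: bytes):
    """Split one DNS record into DNS_RESPONSE cell payloads per the
    draft: a 2-octet total length is prepended, then each cell carries
    a STATUS octet (low bit set on the last cell) plus data."""
    body = len(record).to_bytes(2, "big") + record
    cells, i = [], 0
    while i < len(body):
        piece = body[i:i + CELL_PAYLOAD - 1]  # 1 octet reserved for STATUS
        i += len(piece)
        status = b"\x01" if i >= len(body) else b"\x00"
        cells.append(status + piece)
    return cells
```

Run on the draft's own examples, a 300-byte record yields one cell beginning [01 01 2C], and a 1024-byte record yields three cells split at offsets 495 and 992, matching the draft's figures.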

2 : Interfaces to applications : [I've added a few notes below (they
may be useful). -Bry8]

4 : option c : [if such binding to a local IP address can use some
type of encrypted packets between libunbound and the related Tor
component, then that would be better. -Bry8]

7 : TTL normalization idea : [imho, not a good idea. -Bry8]
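For reference, the normalization the quoted draft proposes in its section 7 is just a clamp of each RR's TTL into the interval [5, 600]; the function name here is illustrative:

```python
def normalize_ttl(ttl: int, lo: int = 5, hi: int = 600) -> int:
    """Trim an RR's TTL into [lo, hi], as the draft's section 7
    proposes, so a client can't learn what other clients queried."""
    return max(lo, min(hi, ttl))
```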

8.1 : Where to do the resolution? : [the DNSSEC-based lib
(libunbound) should be used directly by the Tor executable
software/components (in all kinds of Tor nodes) for any traffic that
connects on the Tor socks port and needs to do DNS resolution.
-Bry8]

When a client-side app is "DANE-DNSSEC" aware (not only "DNSSEC"
aware), only then does it separately query a specific domain/zone
for the specific service port's "TLSA" DNS record (over DNS port
53), before connecting on that service port using that TLSA-pinned
encryption; so normally, only then is the TLSA record needed and
used.  Since no current web browser supports it by default yet,
users will need to use something such as Bloodhound (based on
Firefox), or a Firefox extension like "Extended DNSSEC Validator" on
(any) Firefox, TorBrowser, Firefox Portable, etc., for
DANE+DNSSEC-based verification to work.  And when all DNS queries
are checked by a DNSSEC-based DNS server for the availability and
authenticity of a DNSSEC-signed zone, and the results are then
cached, those results are at least DNSSEC-authenticated, so all
software and connections will benefit from this.  Having the
TLSA/DANE record(s) for a server port in a DNSSEC-signed zone is an
EXTRA benefit on top of DNSSEC.
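The service-port-specific TLSA record mentioned above is looked up under an owner name derived from the port, transport protocol, and host name, per RFC 6698 section 3. A trivial helper showing the construction:

```python
def tlsa_owner(port: int, proto: str, host: str) -> str:
    """Build the TLSA owner name per RFC 6698 section 3,
    e.g. _443._tcp.www.example.com for HTTPS on www.example.com."""
    return f"_{port}._{proto}.{host}"
```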

DANE helps every real domain owner to publicly publish/declare
exactly which SSL certificate they have approved, use, and trust, so
that their clients/visitors can connect to the server using the
accurate & correct SSL cert and can avoid all false SSL certs.

In the same way, the "CERT PGP" DNS record can help to
declare/publish the correct PGP/GPG signing public key that is used
for signing a file, binaries, or software, and can indicate the
correct associated email address, etc.  This record helps to avoid
using fake files, just as a TLSA DNS record helps to avoid using a
fake SSL cert and helps to connect using the accurate SSL cert.

All Tor binary-software-signing GPG (full, public-side) keys must be
published/shared via DNS.  Only a few developers would need to
declare/share their GPG keys like this, related to signed package
releases.  Please see RFC 4398, sections 3.3 and 3.4.
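A sketch of what such a published key looks like on the wire: RFC 4398 defines CERT RDATA as a 2-octet type (3 = PGP), a 2-octet key tag, a 1-octet algorithm, and then the certificate body; for a PGP key the key tag and algorithm carry no DNSSEC meaning and are set to 0 here:

```python
def cert_pgp_rdata(keyring: bytes) -> bytes:
    """Build CERT RR RDATA per RFC 4398 section 2: 2-octet type,
    2-octet key tag, 1-octet algorithm, then the certificate body.
    Type 3 = PGP; key tag and algorithm are zeroed for PGP keys."""
    PGP_TYPE = 3
    return (PGP_TYPE.to_bytes(2, "big")
            + (0).to_bytes(2, "big")    # key tag: unused for PGP
            + bytes([0])                # algorithm: none/unknown
            + keyring)                  # the OpenPGP public keyring
```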

Using/specifying fewer bits or bytes (in TLSA) is of lesser
importance; obtaining Full+Accurate+Complete data, authenticated via
DNSSEC, and then using it for stronger and better encryption is more
important than anything else.

(The info below may be of lesser importance.)

Right now, these tests & solutions can be used/done:

I have been using the regular dig tool for test purposes, which
works via a socks5 proxy tunnel, with a command such as the one
below (in a Terminal / Command-Prompt / shell window):

dig @ -c in -t any -p 54 torproject.org +dnssec +additional +vc

And for the above command to work, I needed a "socat"-based tunnel
to link that local DNS port with a local Tor-proxy running on (for
example).  The socat executable files were placed inside the
%ProgramFiles%\socat\ folder, and then the command below was placed
inside a .cmd batch script file (dns-54_to_DNS-Srv_via_Tor.cmd) to
start up the tunnel:

@start  "socat"
/D"%ProgramFiles%\socat\" socat.exe TCP4-LISTEN:54,fork

Another alternative:

I have (optionally) also used a command such as the one below:
dig @ -c in -t any -p 55 torproject.org +dnssec +additional

And for the above to work, I used the "dns2socks" tool, running it
via a (Windows) batch-script file, like below:

@start  "dns2socks"
/D"%ProgramFiles%\dns2socks\" DNS2SOCKS.exe /q

I already have a full-DNSSEC-supporting local DNS-Resolver running,
so I used other ports (port 54, port 55, etc.) for the DNS-test
related purposes, to take those types of communications through
other tunnels or proxies.  But those who have a VM on their side can
disable the local DNS-Resolver inside the VM, use any one of the
above commands, and change port 54 or 55 to 53; then all apps will
go through the specified socks IP-address:port.

Another DNSSEC alternative (to use via the Tor proxy), which I've
used from inside a VM; its short/brief description is like this: I
managed to (anonymously) create + configure an "Unbound" DNSSEC DNS
server on a hosting/VPS server (on the open-internet side) running
CentOS.  Then I connected to that internet-side DNS server from my
side, using a local "Unbound" DNS resolver inside a
(Windows-XP-based) VM.  This local "Unbound" is configured to use
TCP-only DNS traffic for the upstream side, and to use both TCP &
UDP for local-side DNS queries & responses.  The VM is configured to
"transparently" forward all TCP traffic from inside the VM into the
Tor-proxy tunnel(s).  Unbound-to-Unbound communication can be
encrypted (see the unbound docs for the "ssl-upstream" option) using
a self-created/self-signed SSL/TLS certificate, so that the data
packets are not clearly visible to others, and, as another benefit,
so that the encrypted DNS data normally cannot be modified along the
way by some unwanted person or machine.  The internet-side DNS
server can be configured to listen on many, many ports, and each
port can listen for and accept many TCP connections.  The user side
may need to try alternative DNS-server ports if one is busy or slow.
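For anyone reproducing this setup, a minimal unbound.conf sketch of the two ends; the option names come from the unbound documentation, while the address, port, and paths are placeholders, not taken from this mail:

```
# client side (inside the VM): TCP-only, TLS-encrypted upstream
server:
    do-tcp: yes
    tcp-upstream: yes
    ssl-upstream: yes
forward-zone:
    name: "."
    forward-addr: 192.0.2.1@853    # placeholder VPS address/port

# server side (on the VPS): serve DNS over TLS with a self-signed cert
server:
    interface: 0.0.0.0@853
    ssl-port: 853
    ssl-service-key: "/etc/unbound/unbound_server.key"
    ssl-service-pem: "/etc/unbound/unbound_server.pem"
```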

Another alternative, which I used before the above: on the VM (and
also on the physical machine on my side), a "socat"-based tunnel was
configured to use a self-signed SSL/TLS certificate to send &
receive encrypted packets toward (and from) an internet-side server
computer via the Tor proxy.  The internet-side server was running
"Unbound" as the DNS server, and it was also running a few more
socat tunnels to link incoming Tor clients with the Unbound DNS
server's (multiple) DNS ports.

A TLSA-DNSSEC / DANE DNS RR (Resource Record) may have CAD
(Certificate Association Data) RDATA based on a SHA-256 or SHA-512
hash/checksum of an SSL/TLS cert (certificate), or it may be the
FULL/entire SSL/TLS cert.  A 4K-bit (4096-bit) based SSL/TLS cert
may use close to, or a little under, 512 bytes of binary payload if
that domain/zone puts the FULL TLS/SSL cert's CAD in the TLSA
record.  So DNS records like TXT, TLSA, CERT, etc. can be large, and
some of them, like the TXT record, were already in use (before the
new TLSA record) by a vast number of domains/zones.

Different types of domain-owners/zone-operators will use different
sizes of data in the CAD field of the TLSA RR, based on their
different "u s m" (where "u" = "Usage", "s" = "Selector", "m" =
"Matching-type") choices, combinations, and cases, and on their
SSL/TLS cert's data, bits, etc.
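The "u s m" choice directly determines the CAD size. A sketch of the RDATA layout from RFC 6698 (three 1-octet fields followed by the Certificate Association Data), using Python's hashlib for the hashed matching types:

```python
import hashlib

def tlsa_rdata(usage: int, selector: int, mtype: int, data: bytes) -> bytes:
    """Build TLSA RDATA per RFC 6698: 1-octet usage, selector, and
    matching type, then the CAD.  Matching type 0 = full data,
    1 = SHA-256 of it, 2 = SHA-512 of it.  `data` is the DER cert
    (selector 0) or SubjectPublicKeyInfo (selector 1)."""
    if mtype == 0:
        cad = data
    elif mtype == 1:
        cad = hashlib.sha256(data).digest()
    elif mtype == 2:
        cad = hashlib.sha512(data).digest()
    else:
        raise ValueError("unknown matching type")
    return bytes([usage, selector, mtype]) + cad
```

With matching type 1 the RDATA is always 35 bytes, with type 2 it is 67 bytes, and with type 0 it grows with the full cert — which is the size trade-off discussed above.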

Those who are very concerned, or looking for higher security for
their users/clients, AND who REALLY care about their users and
themselves, will use higher-strength FULL certificates in TLSA, as
encryption (read: Privacy, Anonymity) is much more important than
slowness or saving a few bytes.

And so, there MUST be some form of DNS caching present on the
libunbound/Tor-client side, so that these TLSA (or other DNS RR)
queries are NOT repeated too often: only after that TLSA (or other
DNS) record's TTL validity period has expired, or when a
connection-related error occurred using the DNS-cached data, or when
another DNS-related error occurred.
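The caching rule described above — reuse an answer until its TTL expires, then re-query — can be sketched as a tiny lookup table; the class and method names here are illustrative, not from libunbound:

```python
import time

class DnsCache:
    """Minimal TTL-respecting cache: entries expire after the record's
    TTL, so repeat queries within the TTL are served from the cache."""
    def __init__(self):
        self._store = {}

    def put(self, name, rtype, rdata, ttl, now=None):
        now = time.monotonic() if now is None else now
        self._store[(name, rtype)] = (rdata, now + ttl)

    def get(self, name, rtype, now=None):
        now = time.monotonic() if now is None else now
        hit = self._store.get((name, rtype))
        if hit is None:
            return None                    # miss: caller must query
        rdata, expires = hit
        if now >= expires:
            del self._store[(name, rtype)] # TTL expired: re-query
            return None
        return rdata
```

An error path (a connection failure using cached data) would additionally evict the entry early, as the paragraph above suggests.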

DANE/TLSA (DNSSEC) : https://tools.ietf.org/html/rfc6698
CERT (PKIX, PGP) in DNS : https://tools.ietf.org/html/rfc4398
DNSSEC : https://tools.ietf.org/html/rfc4033

- -- Bright Star.
bry 8 st ar a. at t. in ven ta ti d.o.t. or g:
bry 8 st ar a. at t. ya hoo d.o.t. c om:

Received from Nick Mathewson, on 2013-08-12 6:12 PM:
> This is a proposal of Ondrej's from last year that I've edited a
> little, with permission.  I'm assigning it a number so that we can
> keep it on the radar.  Discussion invited!  Let's try to answer the
> open questions too.
> Filename: 219-expanded-dns.txt
> Title: Support for full DNS and DNSSEC resolution in Tor
> Authors: Ondrej Mikle
> Created: 4 February 2012
> Modified: 2 August 2013
> Target: 0.2.5.x
> Status: Draft
> 0. Overview
>   Adding support for any DNS query type to Tor.
> 0.1. Motivation
>   Many applications running over Tor need more than just resolving FQDN to
>   IPv4 and vice versa. Sometimes to prevent DNS leaks the applications have to
>   be hacked around to be supplied necessary data by hand (e.g. SRV records in
>   XMPP). TLS connections will benefit from planned TLSA record that provides
>   certificate pinning to avoid another Diginotar-like fiasco.
> 0.2. What about DNSSEC?
>   Routine DNSSEC resolution is not practical with this proposal alone,
>   because of round-trip issues: a single name lookup can require
>   dozens of round trips across a circuit, rendering it very slow. (We
>   don't want to add minutes to every webpage load time!)
>   For records like TLSA that need extra signing, this might not be an
>   unacceptable amount of overhead, but routine hostname lookup, it's
>   probably overkill.
>   [Further, thanks to the changes of proposal 205, DNSSEC for routine
>   hostname lookup is less useful in Tor than it might have been back
>   when we cached IPv4 and IPv6 addresses and used them across multiple
>   circuits and exit nodes.]
>   See section 8 below for more discussion of DNSSEC issues.
> 1. Design
> 1.1 New cells
>   There will be two new cells, RELAY_DNS_BEGIN and RELAY_DNS_RESPONSE (we'll
>   use DNS_BEGIN and DNS_RESPONSE for short below).
> 1.1.1. DNS_BEGIN
>   DNS_BEGIN payload:
>     FLAGS        [2 octets]
>     DNS packet data (variable length, up to length of relay cell.)
>   The DNS packet must be generated internally by Tor to avoid
>   fingerprinting users by differences in client resolvers' behavior.
>   [XXXX We need to specify the exact behavior here: saying "Just do what
>   Libunbound does!" would make it impossible to implement a
>   Tor-compatible client without reverse-engineering libunbound. - NM]
>   The FLAGS field is reserved, and should be set to 0 by all clients.
>   Because of the maximum length of the RELAY cell, the DNS packet may
>   not be longer than 496 bytes. [XXXX Is this enough? -NM]
>   Some fields in the query must be omitted or set to zero: see section 3
>   below.
>   DNS_RESPONSE payload:
>     STATUS [1 octet]
>     CONTENT [variable, up to length of relay cell]
>   If the low bit of STATUS is set, this is the last DNS_RESPONSE that
>   the server will send in response to the given DNS_BEGIN.  Otherwise,
>   there will be more DNS_RESPONSE packets.  The other bits are reserved,
>   and should be set to zero for now.
>   The CONTENT fields of the DNS_RESPONSE cells contain a DNS record,
>   split across multiple cells as needed, encoded as:
>     total length (2 octets)
>     data         (variable)
>   So for example, if the DNS record R1 is only 300 bytes long, then it
>   is sent in a single DNS_RESPONSE cell with payload [01 01 2C] R1.  But
>   if the DNS record R2 is 1024 bytes long, it's sent in 3 DNS_RESPONSE
>   cells, with contents: [00 04 00] R2[0:495], [00] R2[495:992], and
>   [01] R2[992:1024] respectively.
>   [NOTE: I'm using the length field and the is-this-the-last-cell
>   field to allow multi-packet responses in the future. -NM]
>   AXFR and IXRF are not supported in this cell by design (see
>   specialized tool below in section 5).
> 1.1.3. Matching queries to responses.
>   DNS_BEGIN must use a non-zero, distinct StreamID.  The client MUST NOT
>   re-use the same stream ID until it has received a complete response
>   from the server or a RELAY_END cell.
>   The client may cancel a DNS_BEGIN request by sending a RELAY_END cell.
>   The server may refuse to answer, or abort answering, a DNS_BEGIN cell
>   by sending a RELAY_END cell.
> 2. Interfaces to applications
>   DNSPort evdns - existing implementation will be updated to use
>   [XXXX we should add a dig-like tool that can work over the socksport
>   via some extension, as tor-resolve does now. -NM]
> 3. Limitations on DNS query
>   Clients must only set query class to IN (INTERNET), since the only
>   other useful class CHAOS is practical for directly querying
>   authoritative servers (OR in this case acts as a recursive resolver).
>   Servers MUST return REFUSED for any class other than IN.
>   Multiple questions in a single packet are not supported and OR will
>   respond with REFUSED as the DNS error code.
>   All query RR types are allowed.
>   [XXXX I originally thought about some exit policy like "basic RR types" and
>   "all RRs", but managing such list in deployed nodes with extra directory
>   flags outweighs the benefit. Maybe disallow ANY RR type? -OM]
>   Client as well as OR MUST block attempts to resolve local RFC 1918,
>   4193, or 4291 addresses (PTR). REFUSED will be returned as DNS error
>   code from OR.  [XXXX Must they also refuse to report addresses that
>   resolve to these? -NM]
>   [XXX I don't think so. People often use public DNS
>   records that map to private addresses. We can't effectively separate
>   "truly public" records from the ones client's dnsmasq or similar DNS
>   resolver returns. - OM]
>   [XXX Then do you mean "must be returned as the DNS error from the OP"?]
>   Request for special names (.onion, .exit, .noconnect) must never be
>   sent, and will return REFUSED.
>   The DNS transaction ID field MUST be set to zero in all requests and
>   replies; the stream ID field plays the same function in Tor.
> 4. Implementation notes
>   Client will periodically purge incomplete DNS replies. Any unexpected
>   DNS_RESPONSE will be dropped.
>   AD flag must be zeroed out on client unless validation is performed.
>   [XXXX libunbound lowlevel API, Tor+libunbound libevent loop
>   libunbound doesn't publicly expose all the necessary parts of low-level API.
>   It can return the received DNS packet, but not let you construct a packet
>   and get it in wire-format, for example.
>   Options I see:
>   a) patch libunbound to be able feed wire-format DNS packets and add API to
>   obtain constructed packets instead of sending over network
>   b) replace bufferevents for sockets in unbound with something like
>   libevent's paired bufferevents. This means that data extracted from
>   DNS_RESPONSE/DNS_BEGIN cells would be fed directly to some evbuffers that
>   would be picked up by libunbound. It could possibly result in avoiding
>   background thread of libunbound's ub_resolve_async running separate libevent
>   loop.
>   c) bind to some arbitrary local address like and use it as
>   forwarder for libunbound. The code there would pack/unpack the DNS packets
>   from/to libunbound into DNS_BEGIN/DNS_RESPONSE cells. It wouldn't require
>   modification of libunbound code, but it's not pretty either. Also the bind
>   port must be 53 which usually requires superuser privileges.
>   Code of libunbound is fairly complex for me to see outright what would the
>   best approach be.
>   ]
> 5. Separate tool for AXFR
>   The AXFR tool will have similar interface like tor-resolve, but will
>   return raw DNS data.
>   Parameters are: query domain, server IP of authoritative DNS.
>   The tool will transfer the data through "ordinary" tunnel using RELAY_BEGIN
>   and related cells.
>   This design decision serves two goals:
>   - DNS_BEGIN and DNS_RESPONSE will be simpler to implement (lower chance of
>     bugs)
>   - in practice it's often useful to do AXFR queries on secondary authoritative
>     DNS servers
>   IXFR will not be supported (infrequent corner case, can be done by manual
>   tunnel creation over Tor if truly necessary).
> 6. Security implications
>   As proposal 171 mentions, we need to mitigate circuit correlation. One solution
>   would be keeping multiple streams to multiple exit nodes and picking one at
>   random for DNS resolution. Other would be keeping DNS-resolving circuit open
>   only for a short time (e.g. 1-2 minutes). Randomly changing the circuits
>   however means that it would probably incur additional latency since there
>   would likely be a few cache misses on the newly selected exits.
>   [This needs more analysis; We need to consider the possible attacks
>   here.  It would be good to have a way to tie requests to
>   SocksPorts, perhaps? -NM]
> 7. TTL normalization idea
>   A bit complex on implementation, because it requires parsing DNS packets at
>   exit node.
>   TTL in reply DNS packet MUST be normalized at exit node so that client won't
>   learn what other clients queried. The normalization is done in following
>   way:
>   - for a RR, the original TTL value received from authoritative DNS server
>     should be used when sending DNS_RESPONSE, trimming the values to interval
>     [5, 600]
>   - does not pose "ghost-cache-attack", since once RR is flushed from
>     libunbound's cache, it must be fetched anew
> 8. DNSSEC notes
> 8.1. Where to do the resolution?
>   DNSSEC is part of the DNS protocol and the most appropriate place for DNSSEC
>   API would be probably in OS libraries (e.g. libc). However that will
>   probably take time until it becomes widespread.
>   On the Tor's side (as opposed to application's side), DNSSEC will provide
>   protection against DNS cache-poisoning attacks (provided that exit is not
>   malicious itself, but still reduces attack surface).
> 8.2. Round trips and serialization
>   Following are two examples of resolving two A records. The one for
>   addons.mozilla.org is an example of a "common" RR without CNAME/DNAME, the
>   other for www.gov.cn an extreme example chained through 5 CNAMEs and 3 TLDs.
>   The examples below are shown for resolving that started with an empty DNS
>   cache.
>   Note that multiple queries are made by libunbound as it tries to adjust for
>   the latency of network. "Standard query response" below that does not list
>   RR type is a negative NOERROR reply with NSEC/NSEC3 (usually reply to DS
>   query).
>   The effect of DNS cache plays a great role - once DS/DNSKEY for root and a
>   TLD is cached, at most 3 records usually need to be fetched for a record
>   that does not utilize CNAME/DNAME (3 roundtrips for DS, DNSKEY and the
>   record itself if there are no zone cuts below).
>   Query for addons.mozilla.org, 6 roundtrips (not counting retries):
>     Standard query A addons.mozilla.org
>     Standard query A addons.mozilla.org
>     Standard query A addons.mozilla.org
>     Standard query A addons.mozilla.org
>     Standard query A addons.mozilla.org
>     Standard query response A RRSIG
>     Standard query response A RRSIG
>     Standard query response A RRSIG
>     Standard query A addons.mozilla.org
>     Standard query response A RRSIG
>     Standard query response A RRSIG
>     Standard query A addons.mozilla.org
>     Standard query response A RRSIG
>     Standard query response A RRSIG
>     Standard query DNSKEY <Root>
>     Standard query DNSKEY <Root>
>     Standard query response DNSKEY DNSKEY RRSIG
>     Standard query response DNSKEY DNSKEY RRSIG
>     Standard query DS org
>     Standard query response DS DS RRSIG
>     Standard query DNSKEY org
>     Standard query DS mozilla.org
>     Standard query response DS RRSIG
>     Standard query DNSKEY mozilla.org
>     Standard query response DNSKEY DNSKEY DNSKEY RRSIG RRSIG
>   Query for www.gov.cn, 16 roundtrips (not counting retries):
>     Standard query A www.gov.cn
>     Standard query A www.gov.cn
>     Standard query A www.gov.cn
>     Standard query A www.gov.cn
>     Standard query A www.gov.cn
>     Standard query response CNAME www.gov.chinacache.net CNAME
> www.gov.cncssr.chinacache.net CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query response CNAME www.gov.chinacache.net CNAME
> www.gov.cncssr.chinacache.net CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query A www.gov.cn
>     Standard query response CNAME www.gov.chinacache.net CNAME
> www.gov.cncssr.chinacache.net CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query response CNAME www.gov.chinacache.net CNAME
> www.gov.cncssr.chinacache.net CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query response CNAME www.gov.chinacache.net CNAME
> www.gov.cncssr.chinacache.net CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query A www.gov.cn
>     Standard query response CNAME www.gov.chinacache.net CNAME
> www.gov.cncssr.chinacache.net CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query response CNAME www.gov.chinacache.net CNAME
> www.gov.cncssr.chinacache.net CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query A www.gov.chinacache.net
>     Standard query response CNAME www.gov.cncssr.chinacache.net CNAME
> www.gov.foreign.ccgslb.com CNAME wac.0b51.edgecastcdn.net CNAME
> gp1.wac.v2cdn.net A
>     Standard query A www.gov.cncssr.chinacache.net
>     Standard query response CNAME www.gov.foreign.ccgslb.com CNAME
> wac.0b51.edgecastcdn.net CNAME gp1.wac.v2cdn.net A
>     Standard query A www.gov.foreign.ccgslb.com
>     Standard query response CNAME wac.0b51.edgecastcdn.net CNAME
> gp1.wac.v2cdn.net A
>     Standard query A wac.0b51.edgecastcdn.net
>     Standard query response CNAME gp1.wac.v2cdn.net A
>     Standard query A gp1.wac.v2cdn.net
>     Standard query response A
>     Standard query DNSKEY <Root>
>     Standard query response DNSKEY DNSKEY RRSIG
>     Standard query DS cn
>     Standard query response
>     Standard query DS net
>     Standard query response DS RRSIG
>     Standard query DNSKEY net
>     Standard query response DNSKEY DNSKEY RRSIG
>     Standard query DS chinacache.net
>     Standard query response
>     Standard query DS com
>     Standard query response DS RRSIG
>     Standard query DNSKEY com
>     Standard query response DNSKEY DNSKEY RRSIG
>     Standard query DS ccgslb.com
>     Standard query response
>     Standard query DS edgecastcdn.net
>     Standard query response
>     Standard query DS v2cdn.net
>     Standard query response
>   An obvious idea to avoid so many roundtrips is to serialize them together.
>   There has been an attempt to standardize such "DNSSEC stapling" [1], however
>   it's incomplete for the general case, mainly due to various intricacies -
>   proofs of non-existence, NSEC3 opt-out zones, TTL handling (see RFC 4035
>   section 5).
> References:
>   [1] https://www.ietf.org/mail-archive/web/dane/current/msg02823.html
> _______________________________________________
> tor-dev mailing list
> tor-dev at lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
