[tor-bugs] #23817 [Core Tor/Tor]: Tor re-tries directory mirrors that it knows are missing microdescriptors

Tor Bug Tracker & Wiki blackhole at torproject.org
Mon Oct 30 12:48:15 UTC 2017


#23817: Tor re-tries directory mirrors that it knows are missing microdescriptors
----------------------------------------+----------------------------------
 Reporter:  teor                        |          Owner:  (none)
     Type:  defect                      |         Status:  new
 Priority:  Medium                      |      Milestone:  Tor: 0.3.3.x-final
Component:  Core Tor/Tor                |        Version:
 Severity:  Normal                      |     Resolution:
 Keywords:  tor-guard, tor-hs, prop224  |  Actual Points:
Parent ID:  #21969                      |         Points:
 Reviewer:                              |        Sponsor:
----------------------------------------+----------------------------------

Comment (by teor):

 Replying to [comment:6 asn]:
 > Here is an implementation plan of the failure cache idea from comment:4.
 >
 > First of all, the interface of the failure cache:
 >
 >   We introduce a `digest256map_t *md_fetch_fail_cache` which maps the
 256-bit md hash to a smartlist of dirguards through which we failed to fetch
 the md.
 >
 > Now the code logic:
 >
 > 1) We populate `md_fetch_fail_cache` with dirguards in
 `dir_microdesc_download_failed()`.  We remove them from the failure cache
 in `microdescs_add_to_cache()` when we successfully add an md to the cache.

 Successfully add *that* md to the cache?
 Or any md from that dirguard?

 I think this is ok, as long as we ask for mds in large enough batches.
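
 A minimal sketch of this failure cache, assuming Tor's digest256map and
 smartlist container APIs; the helper names md_failure_cache_add() and
 md_failure_cache_clear() are hypothetical:

    /* Maps a microdescriptor's SHA256 digest to a smartlist of guard
     * identity digests (DIGEST_LEN bytes each) that failed to serve it. */
    static digest256map_t *md_fetch_fail_cache = NULL;

    /* Called from dir_microdesc_download_failed(): remember that
     * <b>guard_id</b> failed to serve the md <b>md_digest</b>. */
    static void
    md_failure_cache_add(const uint8_t *md_digest, const char *guard_id)
    {
      smartlist_t *failed_guards;
      if (!md_fetch_fail_cache)
        md_fetch_fail_cache = digest256map_new();
      failed_guards = digest256map_get(md_fetch_fail_cache, md_digest);
      if (!failed_guards) {
        failed_guards = smartlist_new();
        digest256map_set(md_fetch_fail_cache, md_digest, failed_guards);
      }
      if (!smartlist_contains_digest(failed_guards, guard_id))
        smartlist_add(failed_guards, tor_memdup(guard_id, DIGEST_LEN));
    }

    /* Called from microdescs_add_to_cache() when an md arrives: forget
     * every recorded failure for that md. */
    static void
    md_failure_cache_clear(const uint8_t *md_digest)
    {
      smartlist_t *failed_guards;
      if (!md_fetch_fail_cache)
        return;
      failed_guards = digest256map_remove(md_fetch_fail_cache, md_digest);
      if (failed_guards) {
        SMARTLIST_FOREACH(failed_guards, char *, id, tor_free(id));
        smartlist_free(failed_guards);
      }
    }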

 > 2) We add another `entry_guard_restriction_t` restriction type in
 `guards_choose_dirguard()`. We currently have one restriction type which
 is designed to restrict guard nodes based on the exit node choice and its
 family. We want another type which uses a smartlist and restricts
 dirguards based on whether we have failed to fetch an md from that
 dirguard. We are gonna use this in step 3.
 >
 > 3) In `directory_get_from_dirserver()` we query the md failure cache and
 pass any results to `directory_pick_generic_dirserver()` and then to
 `guards_choose_dirguard()` which uses the new restriction type to block
 previously failed dirguards from being selected.

 Do we block a dirguard only for downloads of the specific mds it failed
 to deliver?
 Or do we block it from all md downloads once it has failed to deliver
 any md at all?
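
 A rough sketch of the extended restriction, assuming the existing
 entry_guard_restriction_t in entrynodes.c (which today only excludes a
 single relay identity for the exit-family case); the new field and the
 predicate below are hypothetical:

    typedef struct entry_guard_restriction_t {
      /* Existing restriction: never pick this relay (exit family). */
      char exclude_id[DIGEST_LEN];
      /* New restriction: never pick a guard whose identity is in this
       * list, because it already failed to serve the mds we want.
       * NULL when no md-failure restriction applies. */
      smartlist_t *excluded_by_md_failure;
    } entry_guard_restriction_t;

    /* Checked wherever the guard code applies restrictions: a guard
     * passes only if it is not excluded by the md-failure rule. */
    static int
    guard_obeys_md_restriction(const entry_guard_t *guard,
                               const entry_guard_restriction_t *rst)
    {
      if (!rst || !rst->excluded_by_md_failure)
        return 1;
      return !smartlist_contains_digest(rst->excluded_by_md_failure,
                                        guard->identity);
    }

 directory_get_from_dirserver() would then build excluded_by_md_failure
 from the failure-cache entries for the mds it is about to request
 (their union or intersection, depending on the answer to the question
 above), and pass it down through directory_pick_generic_dirserver() to
 guards_choose_dirguard().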

 > How does this feel to you?
 >
 > There are two more steps we might want to do:
 >
 > * When we find that we are missing descs for our primary guards, we
 order an immediate download of the missing descs so that the above
 algorithm takes place.

 This seems ok, though we will end up leaking our primary guards to our
 dirguards.
 To reduce the exposure, we should try to trigger this as soon as we get
 the consensus, and batch these in with other md downloads. (That is, if we
 have any mds waiting, we should retry *all* of them, not just the one or
 two primary guard mds.)
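
 A sketch of that trigger, assuming it runs right after consensus
 processing, using the node_get_by_id()/node_has_descriptor()
 interfaces; the helper itself and the direct walk over
 gs->primary_entry_guards are hypothetical:

    static void
    maybe_refetch_mds_for_primary_guards(time_t now)
    {
      int missing = 0;
      const guard_selection_t *gs = get_guard_selection_info();

      SMARTLIST_FOREACH_BEGIN(gs->primary_entry_guards,
                              const entry_guard_t *, guard) {
        const node_t *node = node_get_by_id(guard->identity);
        if (!node || !node_has_descriptor(node))
          missing = 1;
      } SMARTLIST_FOREACH_END(guard);

      if (missing) {
        /* Retry *all* pending mds in one batch, so the request for the
         * primary guards' mds does not stand out to the dirguard. */
        update_microdesc_downloads(now);
      }
    }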

 > * If we fail to fetch the mds from all of our primary guards, we retry
 using fallback directories instead of trying deeper down our guard list.
 Teor suggested this, but it seems to be far from trivial to implement
 given the interface of our guard subsystem. If it provides big benefit we
 should consider it.

 This is #23863: we get this behaviour automatically if we mark the
 guards as down (or change the definition of a "down" guard in
 directory_pick_generic_dirserver() so that it checks the md failure
 cache).
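
 A sketch of that changed definition, reusing md_fetch_fail_cache from
 the first sketch; the predicate guard_is_down_for_md_fetch() and its
 wanted_mds parameter are hypothetical:

    /* Treat a guard as "down" for md fetch purposes if it is already
     * marked unreachable, or if the failure cache says it failed to
     * serve every md we still want. */
    static int
    guard_is_down_for_md_fetch(const entry_guard_t *guard,
                               const smartlist_t *wanted_mds)
    {
      if (guard->is_reachable == GUARD_REACHABLE_NO)
        return 1;
      if (!md_fetch_fail_cache)
        return 0;
      SMARTLIST_FOREACH_BEGIN(wanted_mds, const uint8_t *, md_digest) {
        const smartlist_t *failed =
          digest256map_get(md_fetch_fail_cache, md_digest);
        if (!failed || !smartlist_contains_digest(failed, guard->identity))
          return 0; /* this guard might still serve a wanted md */
      } SMARTLIST_FOREACH_END(md_digest);
      return 1;
    }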

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/23817#comment:7>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

