[tor-bugs] #23817 [Core Tor/Tor]: Tor re-tries directory mirrors that it knows are missing microdescriptors

Tor Bug Tracker & Wiki blackhole at torproject.org
Tue Oct 31 12:45:39 UTC 2017


#23817: Tor re-tries directory mirrors that it knows are missing microdescriptors
-------------------------------------------------+-------------------------
 Reporter:  teor                                 |          Owner:  (none)
     Type:  defect                               |         Status:  new
 Priority:  Medium                               |      Milestone:  Tor:
                                                 |  0.3.3.x-final
Component:  Core Tor/Tor                         |        Version:
 Severity:  Normal                               |     Resolution:
 Keywords:  tor-guard, tor-hs, prop224,          |  Actual Points:
  032-backport? 031-backport?                    |
Parent ID:  #21969                               |         Points:
 Reviewer:                                       |        Sponsor:
-------------------------------------------------+-------------------------

Comment (by asn):

 Replying to [comment:7 teor]:
 > Replying to [comment:6 asn]:
 > > Here is an implementation plan of the failure cache idea from
 comment:4.
 > >
 > > First of all, the interface of the failure cache:
 > >
 > >   We introduce a `digest256map_t *md_fetch_fail_cache` which maps the
 256-bit md hash to a smartlist of dirguards through which we failed to fetch
 the md.
 > >
 > > Now the code logic:
 > >
 > > 1) We populate `md_fetch_fail_cache` with dirguards in
 `dir_microdesc_download_failed()`.  We remove them from the failure cache
 in `microdescs_add_to_cache()` when we successfully add an md to the cache.
 >
 > Successfully add *that* md to the cache?
 > Or any md from that dirguard?
 >

 I meant, we remove *that* md from the `md_fetch_fail_cache` if we manage
 to fetch *that* md from any dir.
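
 To make this a bit more concrete, here is a minimal sketch of the two cache
 operations I have in mind. The helper names are made up and this is not
 actual tor code; only `digest256map_t` and the smartlist functions are the
 existing container APIs:

 {{{
 #include "or.h"          /* 0.3.x-era includes; sketch only */
 #include "container.h"

 /* Maps an md's 256-bit digest to a smartlist of hex identity digests of
  * dirguards we failed to fetch that md from. */
 static digest256map_t *md_fetch_fail_cache = NULL;

 /* Called from dir_microdesc_download_failed(): remember that the dirguard
  * with identity <b>guard_id</b> failed to serve md <b>md_digest256</b>. */
 static void
 md_failure_cache_note_failure(const uint8_t *md_digest256,
                               const char *guard_id)
 {
   smartlist_t *failed;
   if (!md_fetch_fail_cache)
     md_fetch_fail_cache = digest256map_new();
   failed = digest256map_get(md_fetch_fail_cache, md_digest256);
   if (!failed) {
     failed = smartlist_new();
     digest256map_set(md_fetch_fail_cache, md_digest256, failed);
   }
   if (!smartlist_contains_string(failed, guard_id))
     smartlist_add(failed, tor_strdup(guard_id));
 }

 /* Called from microdescs_add_to_cache(): once *that* md has been fetched
  * from any dirserver, forget every failure recorded for it. */
 static void
 md_failure_cache_note_success(const uint8_t *md_digest256)
 {
   smartlist_t *failed;
   if (!md_fetch_fail_cache)
     return;
   failed = digest256map_remove(md_fetch_fail_cache, md_digest256);
   if (failed) {
     SMARTLIST_FOREACH(failed, char *, id, tor_free(id));
     smartlist_free(failed);
   }
 }
 }}}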

 > I think this is ok, as long as we ask for mds in large enough batches.
 >
 > > 2) We add another `entry_guard_restriction_t` restriction type in
 `guards_choose_dirguard()`. We currently have one restriction type which
 is designed to restrict guard nodes based on the exit node choice and its
 family. We want another type which uses a smartlist and restricts
 dirguards based on whether we have failed to fetch an md from that
 dirguard. We are gonna use this in step 3.
 > >
 > > 3) In `directory_get_from_dirserver()` we query the md failure cache
 and pass any results to `directory_pick_generic_dirserver()` and then to
 `guards_choose_dirguard()` which uses the new restriction type to block
 previously failed dirguards from being selected.
 >
 > Do we block dirguards that have failed to deliver an md from downloads
 of that md?
 > Or do we block dirguards that have failed to deliver any mds from
 downloads of any md?
 >

 Yes, that's a good question that I forgot to address in this proposal.

 I guess my design above was suggesting that we block dirguards "that have
 failed to deliver any mds from downloads of any md", until those mds get
 fetched from another dirserver and get removed from the failure cache.
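
 In code terms, the check for that interpretation could be something like the
 sketch below (again, a made-up helper, and I'm assuming the
 `DIGEST256MAP_FOREACH` iteration macros from container.h):

 {{{
 /* Sketch only: return true iff <b>guard_id</b> appears in the failure cache
  * for *any* md we are still missing, i.e. "block dirguards that have failed
  * to deliver any mds from downloads of any md". */
 static int
 guard_failed_any_pending_md(const char *guard_id)
 {
   if (!md_fetch_fail_cache)
     return 0;
   DIGEST256MAP_FOREACH(md_fetch_fail_cache, md_digest,
                        smartlist_t *, failed) {
     if (smartlist_contains_string(failed, guard_id))
       return 1;
   } DIGEST256MAP_FOREACH_END;
   return 0;
 }
 }}}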

 That kinda penalizes dirservers that don't have a totally up-to-date md
 cache, which perhaps makes sense. But before we become so strict, we should
 check whether we can improve the dirserver logic for fetching new mds, so
 that their caches are almost always up-to-date? Or we could leave that part
 until after we implement the failure cache proposal?
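
 And for steps 2 and 3, roughly the shape of the new restriction type I have
 in mind (the enum and field names are made up; today
 `entry_guard_restriction_t` only carries the exit-node restriction):

 {{{
 /* Sketch only: extend the guard restriction with a second type that carries
  * the dirguards currently listed in the md failure cache, so that
  * guards_choose_dirguard() can skip them. */
 typedef enum {
   RST_EXIT_NODE = 0,       /* existing: avoid the chosen exit's family */
   RST_MD_FAILURE_GUARDS,   /* proposed: avoid guards that failed md fetches */
 } guard_restriction_type_t;

 struct entry_guard_restriction_t {
   guard_restriction_type_t type;
   /* RST_EXIT_NODE: identity digest of the chosen exit. */
   uint8_t exit_id[DIGEST_LEN];
   /* RST_MD_FAILURE_GUARDS: hex identities of dirguards to avoid, built in
    * directory_get_from_dirserver() from the failure cache and passed down
    * through directory_pick_generic_dirserver() to guards_choose_dirguard(). */
   smartlist_t *blocked_guard_ids;
 };
 }}}

 Then wherever the exit-node restriction is currently enforced during guard
 selection, the new type would just consult `blocked_guard_ids` (or call the
 `guard_failed_any_pending_md()` helper above) and reject matching dirguards.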

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/23817#comment:9>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

