FWIW, I was running a simulation of this algorithm with the first week
of July's consensuses when Isis posted the following way smarter
algorithm:

A better algorithm would be a Consistent Hashring, modified to dynamically
allocate replications in proportion to fraction of total bandwidth weight. As
with a normal Consistent Hashring, replications determine the number of times
the relay is uniformly inserted into the hashring.
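A minimal sketch of that idea, in Python. Everything here is illustrative rather than taken from any actual implementation: the function names, the use of SHA-1, and the fixed overall point budget are my own assumptions; the one fixed point is that each relay gets a number of ring insertions proportional to its share of total bandwidth weight.

```python
import hashlib
from bisect import bisect_right

def build_hashring(relays, total_points=1000):
    """Build a consistent hashring. `relays` maps a relay fingerprint to
    its bandwidth weight; each relay is inserted uniformly at a number of
    points proportional to its fraction of the total weight."""
    total_bw = sum(relays.values())
    ring = []
    for fp, bw in relays.items():
        # Replications proportional to bandwidth fraction (at least 1).
        reps = max(1, round(total_points * bw / total_bw))
        for i in range(reps):
            # Hash fingerprint:index to spread this relay's points
            # uniformly around the ring.
            point = int.from_bytes(
                hashlib.sha1(f"{fp}:{i}".encode()).digest(), "big")
            ring.append((point, fp))
    ring.sort()
    return ring

def responsible_relay(ring, key):
    """Map a key (e.g. a descriptor ID) to the first relay at or after
    its hash position, wrapping around the ring."""
    h = int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")
    points = [p for p, _ in ring]
    idx = bisect_right(points, h) % len(ring)
    return ring[idx][1]
```

The appeal for stability is that when one relay joins or leaves, only the keys adjacent to that relay's points move; everything else keeps its current owner.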

So I simulated this one also (with one exception: I didn't scale the
number of replications by the total bandwidth…).
With Aaron's algorithm, the average hash ring
location mapped to 9.96 distinct relays each day; with Isis'
consistent hash ring approach, the average location mapped to 1.41
distinct relays each day.
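The churn metric above can be computed along these lines. This is a hypothetical reconstruction, not the actual simulation code: it assumes each day's ring has been reduced to a mapping from probe location to the relay responsible for it that day.

```python
def avg_distinct_relays(daily_assignments, probe_keys):
    """daily_assignments: one dict per day mapping each probe location to
    the relay responsible for it that day. Returns the average number of
    distinct relays a location mapped to across all days; 1.0 means
    perfectly stable assignments."""
    counts = [len({day[key] for day in daily_assignments})
              for key in probe_keys]
    return sum(counts) / len(counts)

# A stable ring: every location keeps the same relay across both days.
stable = [{"loc1": "A", "loc2": "B"}, {"loc1": "A", "loc2": "B"}]
# avg_distinct_relays(stable, ["loc1", "loc2"]) == 1.0
```

On this metric, a lower average means fewer directory changes, and so fewer forced descriptor republications.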

Excellent stuff, Isis and Nick! I agree that Isis’s algorithm is superior in that it reduces the number of times an onion service is forced to republish its descriptors because its directories have changed.

Cheers,
Aaron