tor-commits
December 2016: 18 participants, 2010 discussions

19 Dec '16
commit d978216dea6b21ac38230a59d172139185a68dbd
Author: Nick Mathewson <nickm@torproject.org>
Date: Sun Dec 18 20:13:58 2016 -0500
Fix parsing bug with unrecognized token at EOS
In get_token(), we could read one byte past the end of the
region. This is only a big problem in the case where the region
itself is (a) potentially hostile, and (b) not explicitly
nul-terminated.
This patch fixes the underlying bug, and also makes sure that the
one remaining case of not-NUL-terminated potentially hostile data
gets NUL-terminated.
Fix for bug 21018, TROVE-2016-12-002, and CVE-2016-1254
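The pattern at issue, reduced to a minimal standalone C sketch (classify_token and its buffer are hypothetical, not Tor's code): a scanner over a region that is not NUL-terminated must compare its cursor against the end of the region before dereferencing, which is exactly the guard the patch adds.

    #include <stdio.h>

    /* Hypothetical token classifier over the half-open region [s, eol).
     * Without the "s < eol" guard, *s can read one byte past the end of
     * a region that carries no trailing NUL -- the overread fixed in
     * get_next_token() below. */
    static int
    classify_token(const char *s, const char *eol)
    {
      if (s < eol && *s == '@')   /* bounds check before dereference */
        return 1;                 /* unrecognized-directive token */
      return 0;                   /* ordinary keyword token */
    }

    int
    main(void)
    {
      char region[3] = { '@', 'o', 'k' };  /* deliberately not NUL-terminated */
      printf("%d\n", classify_token(region, region + sizeof(region)));  /* 1 */
      printf("%d\n", classify_token(region + 3, region + 3));  /* 0, no overread */
      return 0;
    }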
---
changes/bug21018 | 11 +++++++++++
src/or/routerparse.c | 6 +++---
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/changes/bug21018 b/changes/bug21018
new file mode 100644
index 0000000..49a8b47
--- /dev/null
+++ b/changes/bug21018
@@ -0,0 +1,11 @@
+ o Major bugfixes (parsing, security):
+
+ - Fix a bug in parsing that could cause clients to read a single
+ byte past the end of an allocated region. This bug could be
+ used to cause hardened clients (built with
+ --enable-expensive-hardening) to crash if they tried to visit
+ a hostile hidden service. Non-hardened clients are only
+ affected depending on the details of their platform's memory
+ allocator. Fixes bug 21018; bugfix on 0.2.0.8-alpha. Found by
+ using libFuzzer. Also tracked as TROVE-2016-12-002 and as
+ CVE-2016-1254.
diff --git a/src/or/routerparse.c b/src/or/routerparse.c
index 176c16f..71373ce 100644
--- a/src/or/routerparse.c
+++ b/src/or/routerparse.c
@@ -3857,7 +3857,7 @@ get_next_token(memarea_t *area,
if (tok->tp == ERR_) {
/* No keyword matched; call it a "K_opt" or "A_unrecognized" */
- if (**s == '@')
+ if (*s < eol && **s == '@')
tok->tp = A_UNKNOWN_;
else
tok->tp = K_OPT;
@@ -4863,7 +4863,7 @@ rend_decrypt_introduction_points(char **ipos_decrypted,
crypto_cipher_free(cipher);
len = ipos_encrypted_size - 2 - client_entries_len - CIPHER_IV_LEN;
- dec = tor_malloc(len);
+ dec = tor_malloc_zero(len + 1);
declen = crypto_cipher_decrypt_with_iv(session_key, dec, len,
ipos_encrypted + 2 + client_entries_len,
ipos_encrypted_size - 2 - client_entries_len);
@@ -4895,7 +4895,7 @@ rend_decrypt_introduction_points(char **ipos_decrypted,
"small.");
return -1;
}
- dec = tor_malloc_zero(ipos_encrypted_size - CIPHER_IV_LEN - 1);
+ dec = tor_malloc_zero(ipos_encrypted_size - CIPHER_IV_LEN - 1 + 1);
declen = crypto_cipher_decrypt_with_iv(descriptor_cookie, dec,
ipos_encrypted_size -
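The other half of the fix, visible in the hunks above, swaps tor_malloc(len) for tor_malloc_zero(len + 1) so the decrypted buffer is always NUL-terminated. A small illustration of why the extra zeroed byte matters, with calloc standing in for tor_malloc_zero (decrypt_to_string is a made-up name, not Tor's API):

    #include <stdlib.h>
    #include <string.h>

    /* Decrypt-then-parse pattern: if the decrypt step fills all `len`
     * bytes, only a (len + 1)-byte zeroed allocation guarantees that
     * string routines later run on the result (strlen, strchr, ...)
     * stay inside the buffer. */
    static char *
    decrypt_to_string(const unsigned char *src, size_t len)
    {
      char *dec = calloc(len + 1, 1);   /* len + 1 zeroed bytes */
      if (!dec)
        return NULL;
      memcpy(dec, src, len);            /* stand-in for the real decryption */
      return dec;                       /* dec[len] == '\0' always holds */
    }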

[tor/maint-0.2.8] Make log message warn about detected attempts to exploit 21018.
by nickm@torproject.org 19 Dec '16
commit 0fb3058eced5dce355d777288bd9ec255b875db4
Author: Nick Mathewson <nickm@torproject.org>
Date: Sun Dec 18 20:17:28 2016 -0500
Make log message warn about detected attempts to exploit 21018.
---
src/or/rendcommon.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/or/rendcommon.c b/src/or/rendcommon.c
index d1f8b1a..296df55 100644
--- a/src/or/rendcommon.c
+++ b/src/or/rendcommon.c
@@ -1327,8 +1327,10 @@ rend_cache_store_v2_desc_as_client(const char *desc,
intro_size);
if (n_intro_points <= 0) {
log_warn(LD_REND, "Failed to parse introduction points. Either the "
- "service has published a corrupt descriptor or you have "
- "provided invalid authorization data.");
+ "service has published a corrupt descriptor, or you have "
+ "provided invalid authorization data, or (maybe!) the "
+ "server is deliberately serving broken data in an attempt "
+ "to crash you with bug 21018.");
retval = -2;
goto err;
} else if (n_intro_points > MAX_INTRO_POINTS) {

[tor/maint-0.2.9] Make log message warn about detected attempts to exploit 21018.
by nickm@torproject.org 19 Dec '16
commit 0fb3058eced5dce355d777288bd9ec255b875db4
Author: Nick Mathewson <nickm@torproject.org>
Date: Sun Dec 18 20:17:28 2016 -0500
Make log message warn about detected attempts to exploit 21018.
---
src/or/rendcommon.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/or/rendcommon.c b/src/or/rendcommon.c
index d1f8b1a..296df55 100644
--- a/src/or/rendcommon.c
+++ b/src/or/rendcommon.c
@@ -1327,8 +1327,10 @@ rend_cache_store_v2_desc_as_client(const char *desc,
intro_size);
if (n_intro_points <= 0) {
log_warn(LD_REND, "Failed to parse introduction points. Either the "
- "service has published a corrupt descriptor or you have "
- "provided invalid authorization data.");
+ "service has published a corrupt descriptor, or you have "
+ "provided invalid authorization data, or (maybe!) the "
+ "server is deliberately serving broken data in an attempt "
+ "to crash you with bug 21018.");
retval = -2;
goto err;
} else if (n_intro_points > MAX_INTRO_POINTS) {

19 Dec '16
commit de656474611c43f24dd5fff430de945b34b738bf
Merge: 169a93f c11de4c
Author: Nick Mathewson <nickm@torproject.org>
Date: Mon Dec 19 07:58:43 2016 -0500
Merge branch 'maint-0.2.8' into maint-0.2.9
changes/bug21018 | 11 +++++++++++
src/or/rendcache.c | 4 +++-
src/or/routerparse.c | 6 +++---
3 files changed, 17 insertions(+), 4 deletions(-)

19 Dec '16
commit c11de4c45f5611a12fba005032941a40f5ce5f66
Merge: e030632 0fb3058
Author: Nick Mathewson <nickm@torproject.org>
Date: Mon Dec 19 07:58:21 2016 -0500
Merge branch 'bug21018_024' into maint-0.2.8
changes/bug21018 | 11 +++++++++++
src/or/rendcache.c | 4 +++-
src/or/routerparse.c | 6 +++---
3 files changed, 17 insertions(+), 4 deletions(-)
diff --cc src/or/rendcache.c
index f8206cd,0000000..f9ae6d1
mode 100644,000000..100644
--- a/src/or/rendcache.c
+++ b/src/or/rendcache.c
@@@ -1,1011 -1,0 +1,1013 @@@
+/* Copyright (c) 2015-2016, The Tor Project, Inc. */
+/* See LICENSE for licensing information */
+
+/**
+ * \file rendcache.c
+ * \brief Hidden service descriptor cache.
+ **/
+
+#define RENDCACHE_PRIVATE
+#include "rendcache.h"
+
+#include "config.h"
+#include "rephist.h"
+#include "routerlist.h"
+#include "routerparse.h"
+#include "rendcommon.h"
+
+/** Map from service id (as generated by rend_get_service_id) to
+ * rend_cache_entry_t. */
+STATIC strmap_t *rend_cache = NULL;
+
+/** Map from service id to rend_cache_entry_t; only for hidden services. */
+static strmap_t *rend_cache_local_service = NULL;
+
+/** Map from descriptor id to rend_cache_entry_t; only for hidden service
+ * directories. */
+STATIC digestmap_t *rend_cache_v2_dir = NULL;
+
+/** (Client side only) Map from service id to rend_cache_failure_t. This
+ * cache is used to track intro point (IP) failures so we know when to keep
+ * or discard a new descriptor we just fetched. Here is a description of
+ * the cache behavior.
+ *
+ * Every time tor discards an IP (e.g. when it receives a NACK), we add an
+ * entry to this cache noting the identity digest of the IP and its failure
+ * type for the service ID. We index this cache by service ID in order to
+ * differentiate errors that can occur only for a specific service, such as
+ * a NACK: it may apply to one service but not to the others.
+ *
+ * Once a service descriptor is fetched and considered valid, each IP is
+ * looked up in this cache and, if present, discarded from the fetched
+ * descriptor. Afterwards, all IPs in the cache for that service ID that
+ * were NOT present in the descriptor are removed from this cache.
+ * This means that if at least one IP was not in this cache, and is thus
+ * usable, the descriptor is considered new and we keep it. Otherwise, if
+ * all IPs were in this cache, we discard the descriptor as unusable.
+ *
+ * Once a descriptor is removed from the rend cache or expires, the entry
+ * in this cache is also removed for the service ID.
+ *
+ * This scheme allows us to not rely on the descriptor's timestamp (which
+ * is rounded down to the hour) to know if we have a newer descriptor. We
+ * only rely on the usability of intro points from an internal state. */
+STATIC strmap_t *rend_cache_failure = NULL;
+
+/* DOCDOC */
+STATIC size_t rend_cache_total_allocation = 0;
+
+/** Initializes the service descriptor cache.
+*/
+void
+rend_cache_init(void)
+{
+ rend_cache = strmap_new();
+ rend_cache_v2_dir = digestmap_new();
+ rend_cache_local_service = strmap_new();
+ rend_cache_failure = strmap_new();
+}
+
+/** Return the approximate number of bytes needed to hold <b>e</b>. */
+STATIC size_t
+rend_cache_entry_allocation(const rend_cache_entry_t *e)
+{
+ if (!e)
+ return 0;
+
+ /* This doesn't count intro_nodes or key size */
+ return sizeof(*e) + e->len + sizeof(*e->parsed);
+}
+
+/* DOCDOC */
+size_t
+rend_cache_get_total_allocation(void)
+{
+ return rend_cache_total_allocation;
+}
+
+/** Decrement the total bytes attributed to the rendezvous cache by n. */
+STATIC void
+rend_cache_decrement_allocation(size_t n)
+{
+ static int have_underflowed = 0;
+
+ if (rend_cache_total_allocation >= n) {
+ rend_cache_total_allocation -= n;
+ } else {
+ rend_cache_total_allocation = 0;
+ if (! have_underflowed) {
+ have_underflowed = 1;
+ log_warn(LD_BUG, "Underflow in rend_cache_decrement_allocation");
+ }
+ }
+}
+
+/** Increase the total bytes attributed to the rendezvous cache by n. */
+STATIC void
+rend_cache_increment_allocation(size_t n)
+{
+ static int have_overflowed = 0;
+ if (rend_cache_total_allocation <= SIZE_MAX - n) {
+ rend_cache_total_allocation += n;
+ } else {
+ rend_cache_total_allocation = SIZE_MAX;
+ if (! have_overflowed) {
+ have_overflowed = 1;
+ log_warn(LD_BUG, "Overflow in rend_cache_increment_allocation");
+ }
+ }
+}
+
+/** Helper: free a rend cache failure intro object. */
+STATIC void
+rend_cache_failure_intro_entry_free(rend_cache_failure_intro_t *entry)
+{
+ if (entry == NULL) {
+ return;
+ }
+ tor_free(entry);
+}
+
+static void
+rend_cache_failure_intro_entry_free_(void *entry)
+{
+ rend_cache_failure_intro_entry_free(entry);
+}
+
+/** Allocate a rend cache failure intro object and return it. <b>failure</b>
+ * is set into the object. This function can not fail. */
+STATIC rend_cache_failure_intro_t *
+rend_cache_failure_intro_entry_new(rend_intro_point_failure_t failure)
+{
+ rend_cache_failure_intro_t *entry = tor_malloc(sizeof(*entry));
+ entry->failure_type = failure;
+ entry->created_ts = time(NULL);
+ return entry;
+}
+
+/** Helper: free a rend cache failure object. */
+STATIC void
+rend_cache_failure_entry_free(rend_cache_failure_t *entry)
+{
+ if (entry == NULL) {
+ return;
+ }
+
+ /* Free and remove every intro failure object. */
+ digestmap_free(entry->intro_failures,
+ rend_cache_failure_intro_entry_free_);
+
+ tor_free(entry);
+}
+
+/** Helper: deallocate a rend_cache_failure_t. (Used with strmap_free(),
+ * which requires a function pointer whose argument is void*). */
+STATIC void
+rend_cache_failure_entry_free_(void *entry)
+{
+ rend_cache_failure_entry_free(entry);
+}
+
+/** Allocate a rend cache failure object and return it. This function can
+ * not fail. */
+STATIC rend_cache_failure_t *
+rend_cache_failure_entry_new(void)
+{
+ rend_cache_failure_t *entry = tor_malloc(sizeof(*entry));
+ entry->intro_failures = digestmap_new();
+ return entry;
+}
+
+/** Remove failure cache entry for the service ID in the given descriptor
+ * <b>desc</b>. */
+STATIC void
+rend_cache_failure_remove(rend_service_descriptor_t *desc)
+{
+ char service_id[REND_SERVICE_ID_LEN_BASE32 + 1];
+ rend_cache_failure_t *entry;
+
+ if (desc == NULL) {
+ return;
+ }
+ if (rend_get_service_id(desc->pk, service_id) < 0) {
+ return;
+ }
+ entry = strmap_get_lc(rend_cache_failure, service_id);
+ if (entry != NULL) {
+ strmap_remove_lc(rend_cache_failure, service_id);
+ rend_cache_failure_entry_free(entry);
+ }
+}
+
+/** Helper: free storage held by a single service descriptor cache entry. */
+STATIC void
+rend_cache_entry_free(rend_cache_entry_t *e)
+{
+ if (!e)
+ return;
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ /* We are about to remove a descriptor from the cache so remove the entry
+ * in the failure cache. */
+ rend_cache_failure_remove(e->parsed);
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ tor_free(e);
+}
+
+/** Helper: deallocate a rend_cache_entry_t. (Used with strmap_free(), which
+ * requires a function pointer whose argument is void*). */
+static void
+rend_cache_entry_free_(void *p)
+{
+ rend_cache_entry_free(p);
+}
+
+/** Free all storage held by the service descriptor cache. */
+void
+rend_cache_free_all(void)
+{
+ strmap_free(rend_cache, rend_cache_entry_free_);
+ digestmap_free(rend_cache_v2_dir, rend_cache_entry_free_);
+ strmap_free(rend_cache_local_service, rend_cache_entry_free_);
+ strmap_free(rend_cache_failure, rend_cache_failure_entry_free_);
+ rend_cache = NULL;
+ rend_cache_v2_dir = NULL;
+ rend_cache_local_service = NULL;
+ rend_cache_failure = NULL;
+ rend_cache_total_allocation = 0;
+}
+
+/** Remove all entries that are REND_CACHE_FAILURE_MAX_AGE old. This is
+ * called every second.
+ *
+ * We have to clean these regularly; otherwise, if for whatever reason a
+ * hidden service goes offline and a client tries to connect to it during
+ * that time, a failure entry is created and the client will be unable to
+ * connect for a while even though the service has come back online. */
+void
+rend_cache_failure_clean(time_t now)
+{
+ time_t cutoff = now - REND_CACHE_FAILURE_MAX_AGE;
+ STRMAP_FOREACH_MODIFY(rend_cache_failure, key,
+ rend_cache_failure_t *, ent) {
+ /* Free and remove every intro failure object that match the cutoff. */
+ DIGESTMAP_FOREACH_MODIFY(ent->intro_failures, ip_key,
+ rend_cache_failure_intro_t *, ip_ent) {
+ if (ip_ent->created_ts < cutoff) {
+ rend_cache_failure_intro_entry_free(ip_ent);
+ MAP_DEL_CURRENT(ip_key);
+ }
+ } DIGESTMAP_FOREACH_END;
+ /* If the entry is now empty of intro point failures, remove it. */
+ if (digestmap_isempty(ent->intro_failures)) {
+ rend_cache_failure_entry_free(ent);
+ MAP_DEL_CURRENT(key);
+ }
+ } STRMAP_FOREACH_END;
+}
+
+/** Removes all old entries from the client or service descriptor cache.
+*/
+void
+rend_cache_clean(time_t now, rend_cache_type_t cache_type)
+{
+ strmap_iter_t *iter;
+ const char *key;
+ void *val;
+ rend_cache_entry_t *ent;
+ time_t cutoff = now - REND_CACHE_MAX_AGE - REND_CACHE_MAX_SKEW;
+ strmap_t *cache = NULL;
+
+ if (cache_type == REND_CACHE_TYPE_CLIENT) {
+ cache = rend_cache;
+ } else if (cache_type == REND_CACHE_TYPE_SERVICE) {
+ cache = rend_cache_local_service;
+ }
+ tor_assert(cache);
+
+ for (iter = strmap_iter_init(cache); !strmap_iter_done(iter); ) {
+ strmap_iter_get(iter, &key, &val);
+ ent = (rend_cache_entry_t*)val;
+ if (ent->parsed->timestamp < cutoff) {
+ iter = strmap_iter_next_rmv(cache, iter);
+ rend_cache_entry_free(ent);
+ } else {
+ iter = strmap_iter_next(cache, iter);
+ }
+ }
+}
+
+/** Remove ALL entries from the rendezvous service descriptor cache.
+*/
+void
+rend_cache_purge(void)
+{
+ if (rend_cache) {
+ log_info(LD_REND, "Purging HS descriptor cache");
+ strmap_free(rend_cache, rend_cache_entry_free_);
+ }
+ rend_cache = strmap_new();
+}
+
+/** Remove ALL entries from the failure cache. This is also called when a
+ * NEWNYM signal is received. */
+void
+rend_cache_failure_purge(void)
+{
+ if (rend_cache_failure) {
+ log_info(LD_REND, "Purging HS failure cache");
+ strmap_free(rend_cache_failure, rend_cache_failure_entry_free_);
+ }
+ rend_cache_failure = strmap_new();
+}
+
+/** Lookup the rend failure cache using a relay identity digest in
+ * <b>identity</b> which has DIGEST_LEN bytes and service ID <b>service_id</b>
+ * which is a null-terminated string. If found, the intro failure is set in
+ * <b>intro_entry</b> else it stays untouched. Return 1 iff found else 0. */
+STATIC int
+cache_failure_intro_lookup(const uint8_t *identity, const char *service_id,
+ rend_cache_failure_intro_t **intro_entry)
+{
+ rend_cache_failure_t *elem;
+ rend_cache_failure_intro_t *intro_elem;
+
+ tor_assert(rend_cache_failure);
+
+ if (intro_entry) {
+ *intro_entry = NULL;
+ }
+
+ /* Lookup descriptor and return it. */
+ elem = strmap_get_lc(rend_cache_failure, service_id);
+ if (elem == NULL) {
+ goto not_found;
+ }
+ intro_elem = digestmap_get(elem->intro_failures, (char *) identity);
+ if (intro_elem == NULL) {
+ goto not_found;
+ }
+ if (intro_entry) {
+ *intro_entry = intro_elem;
+ }
+ return 1;
+ not_found:
+ return 0;
+}
+
+/** Allocate a new cache failure intro object and copy the content from
+ * <b>entry</b> to this newly allocated object. Return it. */
+static rend_cache_failure_intro_t *
+cache_failure_intro_dup(const rend_cache_failure_intro_t *entry)
+{
+ rend_cache_failure_intro_t *ent_dup =
+ rend_cache_failure_intro_entry_new(entry->failure_type);
+ ent_dup->created_ts = entry->created_ts;
+ return ent_dup;
+}
+
+/** Add an intro point failure to the failure cache using the relay
+ * <b>identity</b> and service ID <b>service_id</b>. Record the
+ * <b>failure</b> in that object. */
+STATIC void
+cache_failure_intro_add(const uint8_t *identity, const char *service_id,
+ rend_intro_point_failure_t failure)
+{
+ rend_cache_failure_t *fail_entry;
+ rend_cache_failure_intro_t *entry, *old_entry;
+
+ /* Make sure we have a failure object for this service ID and if not,
+ * create it with this new intro failure entry. */
+ fail_entry = strmap_get_lc(rend_cache_failure, service_id);
+ if (fail_entry == NULL) {
+ fail_entry = rend_cache_failure_entry_new();
+ /* Add failure entry to global rend failure cache. */
+ strmap_set_lc(rend_cache_failure, service_id, fail_entry);
+ }
+ entry = rend_cache_failure_intro_entry_new(failure);
+ old_entry = digestmap_set(fail_entry->intro_failures,
+ (char *) identity, entry);
+ /* This _should_ be NULL, but in case it isn't, free it. */
+ rend_cache_failure_intro_entry_free(old_entry);
+}
+
+/** Using a parsed descriptor <b>desc</b>, check whether its introduction
+ * points are present in the failure cache; if so, remove them from the
+ * descriptor and keep them in the failure cache. Then remove from the
+ * failure cache every intro point that is in the failure cache for the
+ * given <b>service_id</b> but NOT in the descriptor. */
+STATIC void
+validate_intro_point_failure(const rend_service_descriptor_t *desc,
+ const char *service_id)
+{
+ rend_cache_failure_t *new_entry, *cur_entry;
+ /* New entry for the service ID that will be replacing the one in the
+ * failure cache since we have a new descriptor. In the case where all
+ * intro points are removed, we are assured that the new entry is the same
+ * as the current one. */
+ new_entry = tor_malloc(sizeof(*new_entry));
+ new_entry->intro_failures = digestmap_new();
+
+ tor_assert(desc);
+
+ SMARTLIST_FOREACH_BEGIN(desc->intro_nodes, rend_intro_point_t *, intro) {
+ int found;
+ rend_cache_failure_intro_t *entry;
+ const uint8_t *identity =
+ (uint8_t *) intro->extend_info->identity_digest;
+
+ found = cache_failure_intro_lookup(identity, service_id, &entry);
+ if (found) {
+ /* Dup here since it will be freed at the end when removing the
+ * original entry in the cache. */
+ rend_cache_failure_intro_t *ent_dup = cache_failure_intro_dup(entry);
+ /* This intro point is in our cache, discard it from the descriptor
+ * because chances are that it's unusable. */
+ SMARTLIST_DEL_CURRENT(desc->intro_nodes, intro);
+ /* Keep it for our new entry. */
+ digestmap_set(new_entry->intro_failures, (char *) identity, ent_dup);
+ /* Only free it when we're done looking at it. */
+ rend_intro_point_free(intro);
+ continue;
+ }
+ } SMARTLIST_FOREACH_END(intro);
+
+ /* Swap the failure entry in the cache and free the current one. */
+ cur_entry = strmap_get_lc(rend_cache_failure, service_id);
+ if (cur_entry != NULL) {
+ rend_cache_failure_entry_free(cur_entry);
+ }
+ strmap_set_lc(rend_cache_failure, service_id, new_entry);
+}
+
+/** Note down an intro failure in the rend failure cache using the type of
+ * failure in <b>failure</b> for the relay identity digest in
+ * <b>identity</b> and service ID <b>service_id</b>. If an entry already
+ * exists in the cache, the failure type is changed with <b>failure</b>. */
+void
+rend_cache_intro_failure_note(rend_intro_point_failure_t failure,
+ const uint8_t *identity,
+ const char *service_id)
+{
+ int found;
+ rend_cache_failure_intro_t *entry;
+
+ found = cache_failure_intro_lookup(identity, service_id, &entry);
+ if (!found) {
+ cache_failure_intro_add(identity, service_id, failure);
+ } else {
+ /* Replace introduction point failure with this one. */
+ entry->failure_type = failure;
+ }
+}
+
+/** Remove all old v2 descriptors, and those for which this hidden service
+ * directory is no longer responsible.
+ *
+ * If at all possible, remove at least <b>force_remove</b> bytes of data.
+ */
+void
+rend_cache_clean_v2_descs_as_dir(time_t now, size_t force_remove)
+{
+ digestmap_iter_t *iter;
+ time_t cutoff = now - REND_CACHE_MAX_AGE - REND_CACHE_MAX_SKEW;
+ const int LAST_SERVED_CUTOFF_STEP = 1800;
+ time_t last_served_cutoff = cutoff;
+ size_t bytes_removed = 0;
+ do {
+ for (iter = digestmap_iter_init(rend_cache_v2_dir);
+ !digestmap_iter_done(iter); ) {
+ const char *key;
+ void *val;
+ rend_cache_entry_t *ent;
+ digestmap_iter_get(iter, &key, &val);
+ ent = val;
+ if (ent->parsed->timestamp < cutoff ||
+ ent->last_served < last_served_cutoff) {
+ char key_base32[REND_DESC_ID_V2_LEN_BASE32 + 1];
+ base32_encode(key_base32, sizeof(key_base32), key, DIGEST_LEN);
+ log_info(LD_REND, "Removing descriptor with ID '%s' from cache",
+ safe_str_client(key_base32));
+ bytes_removed += rend_cache_entry_allocation(ent);
+ iter = digestmap_iter_next_rmv(rend_cache_v2_dir, iter);
+ rend_cache_entry_free(ent);
+ } else {
+ iter = digestmap_iter_next(rend_cache_v2_dir, iter);
+ }
+ }
+
+ /* In case we didn't remove enough bytes, advance the cutoff a little. */
+ last_served_cutoff += LAST_SERVED_CUTOFF_STEP;
+ if (last_served_cutoff > now)
+ break;
+ } while (bytes_removed < force_remove);
+}
+
+/** Lookup in the client cache the given service ID <b>query</b> for
+ * <b>version</b>.
+ *
+ * Return 0 if found and if <b>e</b> is non NULL, set it with the entry
+ * found. Else, a negative value is returned and <b>e</b> is untouched.
+ * -EINVAL means that <b>query</b> is not a valid service id.
+ * -ENOENT means that no entry in the cache was found. */
+int
+rend_cache_lookup_entry(const char *query, int version, rend_cache_entry_t **e)
+{
+ int ret = 0;
+ char key[REND_SERVICE_ID_LEN_BASE32 + 2]; /* <version><query>\0 */
+ rend_cache_entry_t *entry = NULL;
+ static const int default_version = 2;
+
+ tor_assert(rend_cache);
+ tor_assert(query);
+
+ if (!rend_valid_service_id(query)) {
+ ret = -EINVAL;
+ goto end;
+ }
+
+ switch (version) {
+ case 0:
+ log_warn(LD_REND, "Cache lookup of a v0 renddesc is deprecated.");
+ break;
+ case 2:
+ /* Default is version 2. */
+ default:
+ tor_snprintf(key, sizeof(key), "%d%s", default_version, query);
+ entry = strmap_get_lc(rend_cache, key);
+ break;
+ }
+ if (!entry) {
+ ret = -ENOENT;
+ goto end;
+ }
+ tor_assert(entry->parsed && entry->parsed->intro_nodes);
+
+ if (e) {
+ *e = entry;
+ }
+
+ end:
+ return ret;
+}
+
+/*
+ * Lookup the v2 service descriptor with the service ID <b>query</b> in the
+ * local service descriptor cache. Return 0 if found and if <b>e</b> is
+ * non NULL, set it with the entry found. Else, a negative value is returned
+ * and <b>e</b> is untouched.
+ * -EINVAL means that <b>query</b> is not a valid service id.
+ * -ENOENT means that no entry in the cache was found. */
+int
+rend_cache_lookup_v2_desc_as_service(const char *query, rend_cache_entry_t **e)
+{
+ int ret = 0;
+ rend_cache_entry_t *entry = NULL;
+
+ tor_assert(rend_cache_local_service);
+ tor_assert(query);
+
+ if (!rend_valid_service_id(query)) {
+ ret = -EINVAL;
+ goto end;
+ }
+
+ /* Lookup descriptor and return. */
+ entry = strmap_get_lc(rend_cache_local_service, query);
+ if (!entry) {
+ ret = -ENOENT;
+ goto end;
+ }
+
+ if (e) {
+ *e = entry;
+ }
+
+ end:
+ return ret;
+}
+
+/** Lookup the v2 service descriptor with base32-encoded <b>desc_id</b> and
+ * copy the pointer to it to *<b>desc</b>. Return 1 on success, 0 on
+ * well-formed-but-not-found, and -1 on failure.
+ */
+int
+rend_cache_lookup_v2_desc_as_dir(const char *desc_id, const char **desc)
+{
+ rend_cache_entry_t *e;
+ char desc_id_digest[DIGEST_LEN];
+ tor_assert(rend_cache_v2_dir);
+ if (base32_decode(desc_id_digest, DIGEST_LEN,
+ desc_id, REND_DESC_ID_V2_LEN_BASE32) < 0) {
+ log_fn(LOG_PROTOCOL_WARN, LD_REND,
+ "Rejecting v2 rendezvous descriptor request -- descriptor ID "
+ "contains illegal characters: %s",
+ safe_str(desc_id));
+ return -1;
+ }
+ /* Lookup descriptor and return. */
+ e = digestmap_get(rend_cache_v2_dir, desc_id_digest);
+ if (e) {
+ *desc = e->desc;
+ e->last_served = approx_time();
+ return 1;
+ }
+ return 0;
+}
+
+/** Parse the v2 service descriptor(s) in <b>desc</b> and store it/them to the
+ * local rend cache. Don't attempt to decrypt the included list of introduction
+ * points (as we don't have a descriptor cookie for it).
+ *
+ * If we have a newer descriptor with the same ID, ignore this one.
+ * If we have an older descriptor with the same ID, replace it.
+ *
+ * Return 0 on success, or -1 if we couldn't parse any of them.
+ *
+ * We should only call this function for public (e.g. non bridge) relays.
+ */
+int
+rend_cache_store_v2_desc_as_dir(const char *desc)
+{
+ const or_options_t *options = get_options();
+ rend_service_descriptor_t *parsed;
+ char desc_id[DIGEST_LEN];
+ char *intro_content;
+ size_t intro_size;
+ size_t encoded_size;
+ char desc_id_base32[REND_DESC_ID_V2_LEN_BASE32 + 1];
+ int number_parsed = 0, number_stored = 0;
+ const char *current_desc = desc;
+ const char *next_desc;
+ rend_cache_entry_t *e;
+ time_t now = time(NULL);
+ tor_assert(rend_cache_v2_dir);
+ tor_assert(desc);
+ while (rend_parse_v2_service_descriptor(&parsed, desc_id, &intro_content,
+ &intro_size, &encoded_size,
+ &next_desc, current_desc, 1) >= 0) {
+ number_parsed++;
+ /* We don't care about the introduction points. */
+ tor_free(intro_content);
+ /* For pretty log statements. */
+ base32_encode(desc_id_base32, sizeof(desc_id_base32),
+ desc_id, DIGEST_LEN);
+ /* Is descriptor too old? */
+ if (parsed->timestamp < now - REND_CACHE_MAX_AGE-REND_CACHE_MAX_SKEW) {
+ log_info(LD_REND, "Service descriptor with desc ID %s is too old.",
+ safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Is descriptor too far in the future? */
+ if (parsed->timestamp > now + REND_CACHE_MAX_SKEW) {
+ log_info(LD_REND, "Service descriptor with desc ID %s is too far in the "
+ "future.",
+ safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Do we already have a newer descriptor? */
+ e = digestmap_get(rend_cache_v2_dir, desc_id);
+ if (e && e->parsed->timestamp > parsed->timestamp) {
+ log_info(LD_REND, "We already have a newer service descriptor with the "
+ "same desc ID %s and version.",
+ safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Do we already have this descriptor? */
+ if (e && !strcmp(desc, e->desc)) {
+ log_info(LD_REND, "We already have this service descriptor with desc "
+ "ID %s.", safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Store received descriptor. */
+ if (!e) {
+ e = tor_malloc_zero(sizeof(rend_cache_entry_t));
+ digestmap_set(rend_cache_v2_dir, desc_id, e);
+ /* Treat something just uploaded as having been served a little
+ * while ago, so that flooding with new descriptors doesn't help
+ * too much.
+ */
+ e->last_served = approx_time() - 3600;
+ } else {
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ }
+ e->parsed = parsed;
+ e->desc = tor_strndup(current_desc, encoded_size);
+ e->len = encoded_size;
+ rend_cache_increment_allocation(rend_cache_entry_allocation(e));
+ log_info(LD_REND, "Successfully stored service descriptor with desc ID "
+ "'%s' and len %d.",
+ safe_str(desc_id_base32), (int)encoded_size);
+ /* Statistics: Note down this potentially new HS. */
+ if (options->HiddenServiceStatistics) {
+ rep_hist_stored_maybe_new_hs(e->parsed->pk);
+ }
+
+ number_stored++;
+ goto advance;
+ skip:
+ rend_service_descriptor_free(parsed);
+ advance:
+ /* advance to next descriptor, if available. */
+ current_desc = next_desc;
+ /* check if there is a next descriptor. */
+ if (!current_desc ||
+ strcmpstart(current_desc, "rendezvous-service-descriptor "))
+ break;
+ }
+ if (!number_parsed) {
+ log_info(LD_REND, "Could not parse any descriptor.");
+ return -1;
+ }
+ log_info(LD_REND, "Parsed %d and added %d descriptor%s.",
+ number_parsed, number_stored, number_stored != 1 ? "s" : "");
+ return 0;
+}
+
+/** Parse the v2 service descriptor in <b>desc</b> and store it to the
+* local service rend cache. Don't attempt to decrypt the included list of
+* introduction points.
+*
+* If we have a newer descriptor with the same ID, ignore this one.
+* If we have an older descriptor with the same ID, replace it.
+*
+* Return 0 on success, or -1 if we couldn't understand the descriptor.
+*/
+int
+rend_cache_store_v2_desc_as_service(const char *desc)
+{
+ rend_service_descriptor_t *parsed = NULL;
+ char desc_id[DIGEST_LEN];
+ char *intro_content = NULL;
+ size_t intro_size;
+ size_t encoded_size;
+ const char *next_desc;
+ char service_id[REND_SERVICE_ID_LEN_BASE32+1];
+ rend_cache_entry_t *e;
+ int retval = -1;
+ tor_assert(rend_cache_local_service);
+ tor_assert(desc);
+
+ /* Parse the descriptor. */
+ if (rend_parse_v2_service_descriptor(&parsed, desc_id, &intro_content,
+ &intro_size, &encoded_size,
+ &next_desc, desc, 0) < 0) {
+ log_warn(LD_REND, "Could not parse descriptor.");
+ goto err;
+ }
+ /* Compute service ID from public key. */
+ if (rend_get_service_id(parsed->pk, service_id)<0) {
+ log_warn(LD_REND, "Couldn't compute service ID.");
+ goto err;
+ }
+
+ /* Do we already have a newer descriptor? Allow new descriptors with a
+ rounded timestamp equal to or newer than the current descriptor */
+ e = (rend_cache_entry_t*) strmap_get_lc(rend_cache_local_service,
+ service_id);
+ if (e && e->parsed->timestamp > parsed->timestamp) {
+ log_info(LD_REND, "We already have a newer service descriptor for "
+ "service ID %s.", safe_str_client(service_id));
+ goto okay;
+ }
+ /* We don't care about the introduction points. */
+ tor_free(intro_content);
+ if (!e) {
+ e = tor_malloc_zero(sizeof(rend_cache_entry_t));
+ strmap_set_lc(rend_cache_local_service, service_id, e);
+ } else {
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ }
+ e->parsed = parsed;
+ e->desc = tor_malloc_zero(encoded_size + 1);
+ strlcpy(e->desc, desc, encoded_size + 1);
+ e->len = encoded_size;
+ rend_cache_increment_allocation(rend_cache_entry_allocation(e));
+ log_debug(LD_REND,"Successfully stored rend desc '%s', len %d.",
+ safe_str_client(service_id), (int)encoded_size);
+ return 0;
+
+ okay:
+ retval = 0;
+
+ err:
+ rend_service_descriptor_free(parsed);
+ tor_free(intro_content);
+ return retval;
+}
+
+/** Parse the v2 service descriptor in <b>desc</b>, decrypt the included list
+ * of introduction points with <b>descriptor_cookie</b> (which may also be
+ * <b>NULL</b> if decryption is not necessary), and store the descriptor to
+ * the local cache under its version and service id.
+ *
+ * If we have a newer v2 descriptor with the same ID, ignore this one.
+ * If we have an older descriptor with the same ID, replace it.
+ * If the descriptor's service ID does not match
+ * <b>rend_query</b>-\>onion_address, reject it.
+ *
+ * If the descriptor's descriptor ID doesn't match <b>desc_id_base32</b>,
+ * reject it.
+ *
+ * Return 0 on success, or -1 if we rejected the descriptor.
+ * If entry is not NULL, set it with the cache entry pointer of the descriptor.
+ */
+int
+rend_cache_store_v2_desc_as_client(const char *desc,
+ const char *desc_id_base32,
+ const rend_data_t *rend_query,
+ rend_cache_entry_t **entry)
+{
+ /*XXXX this seems to have a bit of duplicate code with
+ * rend_cache_store_v2_desc_as_dir(). Fix that. */
+ /* Though having similar elements, both functions were separated on
+ * purpose:
+ * - dirs don't care about encoded/encrypted introduction points, clients
+ * do.
+ * - dirs store descriptors in a separate cache by descriptor ID, whereas
+ * clients store them by service ID; both caches are different data
+ * structures and have different access methods.
+ * - dirs store a descriptor only if they are responsible for its ID,
+ * clients do so in every way (because they have requested it before).
+ * - dirs can process multiple concatenated descriptors which is required
+ * for replication, whereas clients only accept a single descriptor.
+ * Thus, combining both methods would result in a lot of if statements
+ * which probably would not improve, but worsen code readability. -KL */
+ rend_service_descriptor_t *parsed = NULL;
+ char desc_id[DIGEST_LEN];
+ char *intro_content = NULL;
+ size_t intro_size;
+ size_t encoded_size;
+ const char *next_desc;
+ time_t now = time(NULL);
+ char key[REND_SERVICE_ID_LEN_BASE32+2];
+ char service_id[REND_SERVICE_ID_LEN_BASE32+1];
+ char want_desc_id[DIGEST_LEN];
+ rend_cache_entry_t *e;
+ int retval = -1;
+ tor_assert(rend_cache);
+ tor_assert(desc);
+ tor_assert(desc_id_base32);
+ memset(want_desc_id, 0, sizeof(want_desc_id));
+ if (entry) {
+ *entry = NULL;
+ }
+ if (base32_decode(want_desc_id, sizeof(want_desc_id),
+ desc_id_base32, strlen(desc_id_base32)) != 0) {
+ log_warn(LD_BUG, "Couldn't decode base32 %s for descriptor id.",
+ escaped_safe_str_client(desc_id_base32));
+ goto err;
+ }
+ /* Parse the descriptor. */
+ if (rend_parse_v2_service_descriptor(&parsed, desc_id, &intro_content,
+ &intro_size, &encoded_size,
+ &next_desc, desc, 0) < 0) {
+ log_warn(LD_REND, "Could not parse descriptor.");
+ goto err;
+ }
+ /* Compute service ID from public key. */
+ if (rend_get_service_id(parsed->pk, service_id)<0) {
+ log_warn(LD_REND, "Couldn't compute service ID.");
+ goto err;
+ }
+ if (rend_query->onion_address[0] != '\0' &&
+ strcmp(rend_query->onion_address, service_id)) {
+ log_warn(LD_REND, "Received service descriptor for service ID %s; "
+ "expected descriptor for service ID %s.",
+ service_id, safe_str(rend_query->onion_address));
+ goto err;
+ }
+ if (tor_memneq(desc_id, want_desc_id, DIGEST_LEN)) {
+ log_warn(LD_REND, "Received service descriptor for %s with incorrect "
+ "descriptor ID.", service_id);
+ goto err;
+ }
+
+ /* Decode/decrypt introduction points. */
+ if (intro_content && intro_size > 0) {
+ int n_intro_points;
+ if (rend_query->auth_type != REND_NO_AUTH &&
+ !tor_mem_is_zero(rend_query->descriptor_cookie,
+ sizeof(rend_query->descriptor_cookie))) {
+ char *ipos_decrypted = NULL;
+ size_t ipos_decrypted_size;
+ if (rend_decrypt_introduction_points(&ipos_decrypted,
+ &ipos_decrypted_size,
+ rend_query->descriptor_cookie,
+ intro_content,
+ intro_size) < 0) {
+ log_warn(LD_REND, "Failed to decrypt introduction points. We are "
+ "probably unable to parse the encoded introduction points.");
+ } else {
+ /* Replace encrypted with decrypted introduction points. */
+ log_info(LD_REND, "Successfully decrypted introduction points.");
+ tor_free(intro_content);
+ intro_content = ipos_decrypted;
+ intro_size = ipos_decrypted_size;
+ }
+ }
+ n_intro_points = rend_parse_introduction_points(parsed, intro_content,
+ intro_size);
+ if (n_intro_points <= 0) {
+ log_warn(LD_REND, "Failed to parse introduction points. Either the "
+ "service has published a corrupt descriptor or you have "
- "provided invalid authorization data.");
++ "provided invalid authorization data, or (maybe!) the "
++ "server is deliberately serving broken data in an attempt "
++ "to crash you with bug 21018.");
+ goto err;
+ } else if (n_intro_points > MAX_INTRO_POINTS) {
+ log_warn(LD_REND, "Found too many introduction points on a hidden "
+ "service descriptor for %s. This is probably a (misguided) "
+ "attempt to improve reliability, but it could also be an "
+ "attempt to do a guard enumeration attack. Rejecting.",
+ safe_str_client(service_id));
+
+ goto err;
+ }
+ } else {
+ log_info(LD_REND, "Descriptor does not contain any introduction points.");
+ parsed->intro_nodes = smartlist_new();
+ }
+ /* We don't need the encoded/encrypted introduction points any longer. */
+ tor_free(intro_content);
+ /* Is descriptor too old? */
+ if (parsed->timestamp < now - REND_CACHE_MAX_AGE-REND_CACHE_MAX_SKEW) {
+ log_warn(LD_REND, "Service descriptor with service ID %s is too old.",
+ safe_str_client(service_id));
+ goto err;
+ }
+ /* Is descriptor too far in the future? */
+ if (parsed->timestamp > now + REND_CACHE_MAX_SKEW) {
+ log_warn(LD_REND, "Service descriptor with service ID %s is too far in "
+ "the future.", safe_str_client(service_id));
+ goto err;
+ }
+ /* Do we have the same exact copy already in our cache? */
+ tor_snprintf(key, sizeof(key), "2%s", service_id);
+ e = (rend_cache_entry_t*) strmap_get_lc(rend_cache, key);
+ if (e && !strcmp(desc, e->desc)) {
+ log_info(LD_REND,"We already have this service descriptor %s.",
+ safe_str_client(service_id));
+ goto okay;
+ }
+ /* Verify that we are not replacing a newer descriptor with an older one.
+ * This matters, because an evil HSDir could try to serve us an old
+ * descriptor. We require the timestamp to be strictly greater, not equal,
+ * because the timestamp is rounded down to the hour; if the descriptor
+ * changed within the same hour, the rend failure cache will tell us
+ * whether we have a new descriptor. */
+ if (e && e->parsed->timestamp > parsed->timestamp) {
+ log_info(LD_REND, "We already have a new enough service descriptor for "
+ "service ID %s with the same desc ID and version.",
+ safe_str_client(service_id));
+ goto okay;
+ }
+ /* Look up our failure cache for intro points that might be unusable. */
+ validate_intro_point_failure(parsed, service_id);
+ /* It's now possible that our intro point list is empty: that means this
+ * descriptor is useless to us, because all of its intro points have
+ * failed before. Discard the descriptor. */
+ if (smartlist_len(parsed->intro_nodes) == 0) {
+ log_info(LD_REND, "Service descriptor with service ID %s, every "
+ "intro points are unusable. Discarding it.",
+ safe_str_client(service_id));
+ goto err;
+ }
+ /* Now either purge the current entry and replace its content, or create
+ * a new one and add it to the rend cache. */
+ if (!e) {
+ e = tor_malloc_zero(sizeof(rend_cache_entry_t));
+ strmap_set_lc(rend_cache, key, e);
+ } else {
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ rend_cache_failure_remove(e->parsed);
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ }
+ e->parsed = parsed;
+ e->desc = tor_malloc_zero(encoded_size + 1);
+ strlcpy(e->desc, desc, encoded_size + 1);
+ e->len = encoded_size;
+ rend_cache_increment_allocation(rend_cache_entry_allocation(e));
+ log_debug(LD_REND,"Successfully stored rend desc '%s', len %d.",
+ safe_str_client(service_id), (int)encoded_size);
+ if (entry) {
+ *entry = e;
+ }
+ return 0;
+
+ okay:
+ if (entry) {
+ *entry = e;
+ }
+ retval = 0;
+
+ err:
+ rend_service_descriptor_free(parsed);
+ tor_free(intro_content);
+ return retval;
+}
+
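For orientation, a short usage sketch of the client-side failure cache that the merged rendcache.c above introduces. It only compiles inside Tor's source tree; the example_* values are made-up placeholders, and INTRO_POINT_FAILURE_GENERIC is assumed to be one of the rend_intro_point_failure_t values declared in rendcache.h (not shown in this diff):

    #include "rendcache.h"

    /* Sketch: record one intro-point failure, then let the periodic
     * cleaner expire it. Placeholder data only; not taken from the patch. */
    static void
    example_failure_cache_use(time_t now)
    {
      uint8_t example_identity[DIGEST_LEN] = {0};           /* relay identity digest */
      const char *example_service_id = "abcdefghijklmnop";  /* 16 base32 chars */

      /* Note a failure for this intro point and service; if an entry
       * already exists, its failure type is overwritten. */
      rend_cache_intro_failure_note(INTRO_POINT_FAILURE_GENERIC,
                                    example_identity, example_service_id);

      /* Called once per second by Tor: drops entries older than
       * REND_CACHE_FAILURE_MAX_AGE, so a service that comes back online
       * does not stay blocked by stale failure entries. */
      rend_cache_failure_clean(now);
    }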

19 Dec '16
commit c11de4c45f5611a12fba005032941a40f5ce5f66
Merge: e030632 0fb3058
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Mon Dec 19 07:58:21 2016 -0500
Merge branch 'bug21018_024' into maint-0.2.8
changes/bug21018 | 11 +++++++++++
src/or/rendcache.c | 4 +++-
src/or/routerparse.c | 6 +++---
3 files changed, 17 insertions(+), 4 deletions(-)
diff --cc src/or/rendcache.c
index f8206cd,0000000..f9ae6d1
mode 100644,000000..100644
--- a/src/or/rendcache.c
+++ b/src/or/rendcache.c
@@@ -1,1011 -1,0 +1,1013 @@@
+/* Copyright (c) 2015-2016, The Tor Project, Inc. */
+/* See LICENSE for licensing information */
+
+/**
+ * \file rendcache.c
+ * \brief Hidden service descriptor cache.
+ **/
+
+#define RENDCACHE_PRIVATE
+#include "rendcache.h"
+
+#include "config.h"
+#include "rephist.h"
+#include "routerlist.h"
+#include "routerparse.h"
+#include "rendcommon.h"
+
+/** Map from service id (as generated by rend_get_service_id) to
+ * rend_cache_entry_t. */
+STATIC strmap_t *rend_cache = NULL;
+
+/** Map from service id to rend_cache_entry_t; only for hidden services. */
+static strmap_t *rend_cache_local_service = NULL;
+
+/** Map from descriptor id to rend_cache_entry_t; only for hidden service
+ * directories. */
+STATIC digestmap_t *rend_cache_v2_dir = NULL;
+
+/** (Client side only) Map from service id to rend_cache_failure_t. This
+ * cache is used to track intro point(IP) failures so we know when to keep
+ * or discard a new descriptor we just fetched. Here is a description of the
+ * cache behavior.
+ *
+ * Everytime tor discards an IP (ex: receives a NACK), we add an entry to
+ * this cache noting the identity digest of the IP and it's failure type for
+ * the service ID. The reason we indexed this cache by service ID is to
+ * differentiate errors that can occur only for a specific service like a
+ * NACK for instance. It applies for one but maybe not for the others.
+ *
+ * Once a service descriptor is fetched and considered valid, each IP is
+ * looked up in this cache and if present, it is discarded from the fetched
+ * descriptor. At the end, all IP(s) in the cache, for a specific service
+ * ID, that were NOT present in the descriptor are removed from this cache.
+ * Which means that if at least one IP was not in this cache, thus usuable,
+ * it's considered a new descriptor so we keep it. Else, if all IPs were in
+ * this cache, we discard the descriptor as it's considered unsuable.
+ *
+ * Once a descriptor is removed from the rend cache or expires, the entry
+ * in this cache is also removed for the service ID.
+ *
+ * This scheme allows us to not realy on the descriptor's timestamp (which
+ * is rounded down to the hour) to know if we have a newer descriptor. We
+ * only rely on the usability of intro points from an internal state. */
+STATIC strmap_t *rend_cache_failure = NULL;
+
+/* DOCDOC */
+STATIC size_t rend_cache_total_allocation = 0;
+
+/** Initializes the service descriptor cache.
+*/
+void
+rend_cache_init(void)
+{
+ rend_cache = strmap_new();
+ rend_cache_v2_dir = digestmap_new();
+ rend_cache_local_service = strmap_new();
+ rend_cache_failure = strmap_new();
+}
+
+/** Return the approximate number of bytes needed to hold <b>e</b>. */
+STATIC size_t
+rend_cache_entry_allocation(const rend_cache_entry_t *e)
+{
+ if (!e)
+ return 0;
+
+ /* This doesn't count intro_nodes or key size */
+ return sizeof(*e) + e->len + sizeof(*e->parsed);
+}
+
+/* DOCDOC */
+size_t
+rend_cache_get_total_allocation(void)
+{
+ return rend_cache_total_allocation;
+}
+
+/** Decrement the total bytes attributed to the rendezvous cache by n. */
+STATIC void
+rend_cache_decrement_allocation(size_t n)
+{
+ static int have_underflowed = 0;
+
+ if (rend_cache_total_allocation >= n) {
+ rend_cache_total_allocation -= n;
+ } else {
+ rend_cache_total_allocation = 0;
+ if (! have_underflowed) {
+ have_underflowed = 1;
+ log_warn(LD_BUG, "Underflow in rend_cache_decrement_allocation");
+ }
+ }
+}
+
+/** Increase the total bytes attributed to the rendezvous cache by n. */
+STATIC void
+rend_cache_increment_allocation(size_t n)
+{
+ static int have_overflowed = 0;
+ if (rend_cache_total_allocation <= SIZE_MAX - n) {
+ rend_cache_total_allocation += n;
+ } else {
+ rend_cache_total_allocation = SIZE_MAX;
+ if (! have_overflowed) {
+ have_overflowed = 1;
+ log_warn(LD_BUG, "Overflow in rend_cache_increment_allocation");
+ }
+ }
+}
+
+/** Helper: free a rend cache failure intro object. */
+STATIC void
+rend_cache_failure_intro_entry_free(rend_cache_failure_intro_t *entry)
+{
+ if (entry == NULL) {
+ return;
+ }
+ tor_free(entry);
+}
+
+static void
+rend_cache_failure_intro_entry_free_(void *entry)
+{
+ rend_cache_failure_intro_entry_free(entry);
+}
+
+/** Allocate a rend cache failure intro object and return it. <b>failure</b>
+ * is set into the object. This function can not fail. */
+STATIC rend_cache_failure_intro_t *
+rend_cache_failure_intro_entry_new(rend_intro_point_failure_t failure)
+{
+ rend_cache_failure_intro_t *entry = tor_malloc(sizeof(*entry));
+ entry->failure_type = failure;
+ entry->created_ts = time(NULL);
+ return entry;
+}
+
+/** Helper: free a rend cache failure object. */
+STATIC void
+rend_cache_failure_entry_free(rend_cache_failure_t *entry)
+{
+ if (entry == NULL) {
+ return;
+ }
+
+ /* Free and remove every intro failure object. */
+ digestmap_free(entry->intro_failures,
+ rend_cache_failure_intro_entry_free_);
+
+ tor_free(entry);
+}
+
+/** Helper: deallocate a rend_cache_failure_t. (Used with strmap_free(),
+ * which requires a function pointer whose argument is void*). */
+STATIC void
+rend_cache_failure_entry_free_(void *entry)
+{
+ rend_cache_failure_entry_free(entry);
+}
+
+/** Allocate a rend cache failure object and return it. This function can
+ * not fail. */
+STATIC rend_cache_failure_t *
+rend_cache_failure_entry_new(void)
+{
+ rend_cache_failure_t *entry = tor_malloc(sizeof(*entry));
+ entry->intro_failures = digestmap_new();
+ return entry;
+}
+
+/** Remove failure cache entry for the service ID in the given descriptor
+ * <b>desc</b>. */
+STATIC void
+rend_cache_failure_remove(rend_service_descriptor_t *desc)
+{
+ char service_id[REND_SERVICE_ID_LEN_BASE32 + 1];
+ rend_cache_failure_t *entry;
+
+ if (desc == NULL) {
+ return;
+ }
+ if (rend_get_service_id(desc->pk, service_id) < 0) {
+ return;
+ }
+ entry = strmap_get_lc(rend_cache_failure, service_id);
+ if (entry != NULL) {
+ strmap_remove_lc(rend_cache_failure, service_id);
+ rend_cache_failure_entry_free(entry);
+ }
+}
+
+/** Helper: free storage held by a single service descriptor cache entry. */
+STATIC void
+rend_cache_entry_free(rend_cache_entry_t *e)
+{
+ if (!e)
+ return;
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ /* We are about to remove a descriptor from the cache so remove the entry
+ * in the failure cache. */
+ rend_cache_failure_remove(e->parsed);
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ tor_free(e);
+}
+
+/** Helper: deallocate a rend_cache_entry_t. (Used with strmap_free(), which
+ * requires a function pointer whose argument is void*). */
+static void
+rend_cache_entry_free_(void *p)
+{
+ rend_cache_entry_free(p);
+}
+
+/** Free all storage held by the service descriptor cache. */
+void
+rend_cache_free_all(void)
+{
+ strmap_free(rend_cache, rend_cache_entry_free_);
+ digestmap_free(rend_cache_v2_dir, rend_cache_entry_free_);
+ strmap_free(rend_cache_local_service, rend_cache_entry_free_);
+ strmap_free(rend_cache_failure, rend_cache_failure_entry_free_);
+ rend_cache = NULL;
+ rend_cache_v2_dir = NULL;
+ rend_cache_local_service = NULL;
+ rend_cache_failure = NULL;
+ rend_cache_total_allocation = 0;
+}
+
+/** Remove all entries that re REND_CACHE_FAILURE_MAX_AGE old. This is
+ * called every second.
+ *
+ * We have to clean these regurlarly else if for whatever reasons an hidden
+ * service goes offline and a client tries to connect to it during that
+ * time, a failure entry is created and the client will be unable to connect
+ * for a while even though the service has return online. */
+void
+rend_cache_failure_clean(time_t now)
+{
+ time_t cutoff = now - REND_CACHE_FAILURE_MAX_AGE;
+ STRMAP_FOREACH_MODIFY(rend_cache_failure, key,
+ rend_cache_failure_t *, ent) {
+ /* Free and remove every intro failure object that match the cutoff. */
+ DIGESTMAP_FOREACH_MODIFY(ent->intro_failures, ip_key,
+ rend_cache_failure_intro_t *, ip_ent) {
+ if (ip_ent->created_ts < cutoff) {
+ rend_cache_failure_intro_entry_free(ip_ent);
+ MAP_DEL_CURRENT(ip_key);
+ }
+ } DIGESTMAP_FOREACH_END;
+ /* If the entry is now empty of intro point failures, remove it. */
+ if (digestmap_isempty(ent->intro_failures)) {
+ rend_cache_failure_entry_free(ent);
+ MAP_DEL_CURRENT(key);
+ }
+ } STRMAP_FOREACH_END;
+}
+
+/** Removes all old entries from the client or service descriptor cache.
+*/
+void
+rend_cache_clean(time_t now, rend_cache_type_t cache_type)
+{
+ strmap_iter_t *iter;
+ const char *key;
+ void *val;
+ rend_cache_entry_t *ent;
+ time_t cutoff = now - REND_CACHE_MAX_AGE - REND_CACHE_MAX_SKEW;
+ strmap_t *cache = NULL;
+
+ if (cache_type == REND_CACHE_TYPE_CLIENT) {
+ cache = rend_cache;
+ } else if (cache_type == REND_CACHE_TYPE_SERVICE) {
+ cache = rend_cache_local_service;
+ }
+ tor_assert(cache);
+
+ for (iter = strmap_iter_init(cache); !strmap_iter_done(iter); ) {
+ strmap_iter_get(iter, &key, &val);
+ ent = (rend_cache_entry_t*)val;
+ if (ent->parsed->timestamp < cutoff) {
+ iter = strmap_iter_next_rmv(cache, iter);
+ rend_cache_entry_free(ent);
+ } else {
+ iter = strmap_iter_next(cache, iter);
+ }
+ }
+}
+
+/** Remove ALL entries from the rendezvous service descriptor cache.
+*/
+void
+rend_cache_purge(void)
+{
+ if (rend_cache) {
+ log_info(LD_REND, "Purging HS descriptor cache");
+ strmap_free(rend_cache, rend_cache_entry_free_);
+ }
+ rend_cache = strmap_new();
+}
+
+/** Remove ALL entries from the failure cache. This is also called when a
+ * NEWNYM signal is received. */
+void
+rend_cache_failure_purge(void)
+{
+ if (rend_cache_failure) {
+ log_info(LD_REND, "Purging HS failure cache");
+ strmap_free(rend_cache_failure, rend_cache_failure_entry_free_);
+ }
+ rend_cache_failure = strmap_new();
+}
+
+/** Lookup the rend failure cache using a relay identity digest in
+ * <b>identity</b> which has DIGEST_LEN bytes and service ID <b>service_id</b>
+ * which is a null-terminated string. If found, the intro failure is set in
+ * <b>intro_entry</b> else it stays untouched. Return 1 iff found else 0. */
+STATIC int
+cache_failure_intro_lookup(const uint8_t *identity, const char *service_id,
+ rend_cache_failure_intro_t **intro_entry)
+{
+ rend_cache_failure_t *elem;
+ rend_cache_failure_intro_t *intro_elem;
+
+ tor_assert(rend_cache_failure);
+
+ if (intro_entry) {
+ *intro_entry = NULL;
+ }
+
+ /* Lookup descriptor and return it. */
+ elem = strmap_get_lc(rend_cache_failure, service_id);
+ if (elem == NULL) {
+ goto not_found;
+ }
+ intro_elem = digestmap_get(elem->intro_failures, (char *) identity);
+ if (intro_elem == NULL) {
+ goto not_found;
+ }
+ if (intro_entry) {
+ *intro_entry = intro_elem;
+ }
+ return 1;
+ not_found:
+ return 0;
+}
+
+/** Allocate a new cache failure intro object and copy the content from
+ * <b>entry</b> to this newly allocated object. Return it. */
+static rend_cache_failure_intro_t *
+cache_failure_intro_dup(const rend_cache_failure_intro_t *entry)
+{
+ rend_cache_failure_intro_t *ent_dup =
+ rend_cache_failure_intro_entry_new(entry->failure_type);
+ ent_dup->created_ts = entry->created_ts;
+ return ent_dup;
+}
+
+/** Add an intro point failure to the failure cache using the relay
+ * <b>identity</b> and service ID <b>service_id</b>. Record the
+ * <b>failure</b> in that object. */
+STATIC void
+cache_failure_intro_add(const uint8_t *identity, const char *service_id,
+ rend_intro_point_failure_t failure)
+{
+ rend_cache_failure_t *fail_entry;
+ rend_cache_failure_intro_t *entry, *old_entry;
+
+ /* Make sure we have a failure object for this service ID and if not,
+ * create it with this new intro failure entry. */
+ fail_entry = strmap_get_lc(rend_cache_failure, service_id);
+ if (fail_entry == NULL) {
+ fail_entry = rend_cache_failure_entry_new();
+ /* Add failure entry to global rend failure cache. */
+ strmap_set_lc(rend_cache_failure, service_id, fail_entry);
+ }
+ entry = rend_cache_failure_intro_entry_new(failure);
+ old_entry = digestmap_set(fail_entry->intro_failures,
+ (char *) identity, entry);
+ /* This _should_ be NULL, but in case it isn't, free it. */
+ rend_cache_failure_intro_entry_free(old_entry);
+}
+
+/** Using a parsed descriptor <b>desc</b>, check if the introduction points
+ * are present in the failure cache and if so they are removed from the
+ * descriptor and kept into the failure cache. Then, each intro points that
+ * are NOT in the descriptor but in the failure cache for the given
+ * <b>service_id</b> are removed from the failure cache. */
+STATIC void
+validate_intro_point_failure(const rend_service_descriptor_t *desc,
+ const char *service_id)
+{
+ rend_cache_failure_t *new_entry, *cur_entry;
+ /* New entry for the service ID that will be replacing the one in the
+ * failure cache since we have a new descriptor. In the case where all
+ * intro points are removed, we are assured that the new entry is the same
+ * as the current one. */
+ new_entry = tor_malloc(sizeof(*new_entry));
+ new_entry->intro_failures = digestmap_new();
+
+ tor_assert(desc);
+
+ SMARTLIST_FOREACH_BEGIN(desc->intro_nodes, rend_intro_point_t *, intro) {
+ int found;
+ rend_cache_failure_intro_t *entry;
+ const uint8_t *identity =
+ (uint8_t *) intro->extend_info->identity_digest;
+
+ found = cache_failure_intro_lookup(identity, service_id, &entry);
+ if (found) {
+ /* Dup here since it will be freed at the end when removing the
+ * original entry in the cache. */
+ rend_cache_failure_intro_t *ent_dup = cache_failure_intro_dup(entry);
+ /* This intro point is in our cache, discard it from the descriptor
+ * because chances are that it's unusable. */
+ SMARTLIST_DEL_CURRENT(desc->intro_nodes, intro);
+ /* Keep it for our new entry. */
+ digestmap_set(new_entry->intro_failures, (char *) identity, ent_dup);
+ /* Only free it when we're done looking at it. */
+ rend_intro_point_free(intro);
+ continue;
+ }
+ } SMARTLIST_FOREACH_END(intro);
+
+ /* Swap the failure entry in the cache and free the current one. */
+ cur_entry = strmap_get_lc(rend_cache_failure, service_id);
+ if (cur_entry != NULL) {
+ rend_cache_failure_entry_free(cur_entry);
+ }
+ strmap_set_lc(rend_cache_failure, service_id, new_entry);
+}
+
+/** Note down an intro failure in the rend failure cache using the type of
+ * failure in <b>failure</b> for the relay identity digest in
+ * <b>identity</b> and service ID <b>service_id</b>. If an entry already
+ * exists in the cache, the failure type is changed with <b>failure</b>. */
+void
+rend_cache_intro_failure_note(rend_intro_point_failure_t failure,
+ const uint8_t *identity,
+ const char *service_id)
+{
+ int found;
+ rend_cache_failure_intro_t *entry;
+
+ found = cache_failure_intro_lookup(identity, service_id, &entry);
+ if (!found) {
+ cache_failure_intro_add(identity, service_id, failure);
+ } else {
+ /* Replace introduction point failure with this one. */
+ entry->failure_type = failure;
+ }
+}
+
+/** Remove all old v2 descriptors and those for which this hidden service
+ * directory is not responsible for any more.
+ *
+ * If at all possible, remove at least <b>force_remove</b> bytes of data.
+ */
+void
+rend_cache_clean_v2_descs_as_dir(time_t now, size_t force_remove)
+{
+ digestmap_iter_t *iter;
+ time_t cutoff = now - REND_CACHE_MAX_AGE - REND_CACHE_MAX_SKEW;
+ const int LAST_SERVED_CUTOFF_STEP = 1800;
+ time_t last_served_cutoff = cutoff;
+ size_t bytes_removed = 0;
+ do {
+ for (iter = digestmap_iter_init(rend_cache_v2_dir);
+ !digestmap_iter_done(iter); ) {
+ const char *key;
+ void *val;
+ rend_cache_entry_t *ent;
+ digestmap_iter_get(iter, &key, &val);
+ ent = val;
+ if (ent->parsed->timestamp < cutoff ||
+ ent->last_served < last_served_cutoff) {
+ char key_base32[REND_DESC_ID_V2_LEN_BASE32 + 1];
+ base32_encode(key_base32, sizeof(key_base32), key, DIGEST_LEN);
+ log_info(LD_REND, "Removing descriptor with ID '%s' from cache",
+ safe_str_client(key_base32));
+ bytes_removed += rend_cache_entry_allocation(ent);
+ iter = digestmap_iter_next_rmv(rend_cache_v2_dir, iter);
+ rend_cache_entry_free(ent);
+ } else {
+ iter = digestmap_iter_next(rend_cache_v2_dir, iter);
+ }
+ }
+
+ /* In case we didn't remove enough bytes, advance the cutoff a little. */
+ last_served_cutoff += LAST_SERVED_CUTOFF_STEP;
+ if (last_served_cutoff > now)
+ break;
+ } while (bytes_removed < force_remove);
+}
+
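The cleaning strategy in rend_cache_clean_v2_descs_as_dir() above can be sketched standalone: a last-served cutoff advances in 30-minute steps until enough bytes are freed or the cutoff reaches the present. This is a hypothetical model, not tor code; it folds the two timestamp tests into one and uses an array in place of the digestmap.

#include <stdio.h>
#include <time.h>

struct ent { time_t last_served; size_t len; int dead; };

/* Sweep repeatedly, advancing the cutoff by STEP, until at least
 * force_remove bytes are freed or the cutoff catches up with now. */
static size_t
evict(struct ent *ents, int n, time_t now, size_t force_remove)
{
  const time_t STEP = 1800;
  time_t cutoff = now - 48 * 3600;  /* stand-in for MAX_AGE + MAX_SKEW */
  size_t removed = 0;
  do {
    for (int i = 0; i < n; i++) {
      if (!ents[i].dead && ents[i].last_served < cutoff) {
        ents[i].dead = 1;           /* models iter_next_rmv() plus free */
        removed += ents[i].len;
      }
    }
    cutoff += STEP;
    if (cutoff > now)
      break;
  } while (removed < force_remove);
  return removed;
}

int
main(void)
{
  time_t now = time(NULL);
  struct ent ents[3] = {
    { now - 60 * 3600, 1000, 0 },
    { now - 50 * 3600, 2000, 0 },
    { now - 10,        3000, 0 },
  };
  printf("freed %zu bytes\n", evict(ents, 3, now, 2500));
  return 0;
}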
+/** Look up the given service ID <b>query</b> of <b>version</b> in the
+ * client cache.
+ *
+ * Return 0 if found; in that case, if <b>e</b> is non-NULL, set it to the
+ * entry found. Else, return a negative value and leave <b>e</b> untouched:
+ * -EINVAL means that <b>query</b> is not a valid service ID;
+ * -ENOENT means that no entry was found in the cache. */
+int
+rend_cache_lookup_entry(const char *query, int version, rend_cache_entry_t **e)
+{
+ int ret = 0;
+ char key[REND_SERVICE_ID_LEN_BASE32 + 2]; /* <version><query>\0 */
+ rend_cache_entry_t *entry = NULL;
+ static const int default_version = 2;
+
+ tor_assert(rend_cache);
+ tor_assert(query);
+
+ if (!rend_valid_service_id(query)) {
+ ret = -EINVAL;
+ goto end;
+ }
+
+ switch (version) {
+ case 0:
+ log_warn(LD_REND, "Cache lookup of a v0 renddesc is deprecated.");
+ break;
+ case 2:
+ /* Default is version 2. */
+ default:
+ tor_snprintf(key, sizeof(key), "%d%s", default_version, query);
+ entry = strmap_get_lc(rend_cache, key);
+ break;
+ }
+ if (!entry) {
+ ret = -ENOENT;
+ goto end;
+ }
+ tor_assert(entry->parsed && entry->parsed->intro_nodes);
+
+ if (e) {
+ *e = entry;
+ }
+
+ end:
+ return ret;
+}
+
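Callers of rend_cache_lookup_entry() are expected to branch on the errno-style result. A minimal self-contained sketch of that contract follows; the lookup body is a stand-in, though the length 16 does match REND_SERVICE_ID_LEN_BASE32.

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Stand-in with the same contract as rend_cache_lookup_entry(): 0 on a
 * hit, -EINVAL for a malformed query, -ENOENT on a miss; *out is only
 * written on success. */
static int
lookup(const char *query, const char **out)
{
  if (strlen(query) != 16)
    return -EINVAL;
  if (strcmp(query, "abcdefghijklmnop"))
    return -ENOENT;
  if (out)
    *out = "descriptor bytes";
  return 0;
}

int
main(void)
{
  const char *desc = NULL;
  int ret = lookup("abcdefghijklmnop", &desc);
  switch (-ret) {
    case 0:      printf("hit: %s\n", desc); break;
    case EINVAL: puts("not a valid service id"); break;
    case ENOENT: puts("not in cache; fetch it"); break;
  }
  return 0;
}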
+/** Look up the v2 service descriptor with the service ID <b>query</b> in
+ * the local service descriptor cache. Return 0 if found; in that case, if
+ * <b>e</b> is non-NULL, set it to the entry found. Else, return a negative
+ * value and leave <b>e</b> untouched:
+ * -EINVAL means that <b>query</b> is not a valid service ID;
+ * -ENOENT means that no entry was found in the cache. */
+int
+rend_cache_lookup_v2_desc_as_service(const char *query, rend_cache_entry_t **e)
+{
+ int ret = 0;
+ rend_cache_entry_t *entry = NULL;
+
+ tor_assert(rend_cache_local_service);
+ tor_assert(query);
+
+ if (!rend_valid_service_id(query)) {
+ ret = -EINVAL;
+ goto end;
+ }
+
+ /* Lookup descriptor and return. */
+ entry = strmap_get_lc(rend_cache_local_service, query);
+ if (!entry) {
+ ret = -ENOENT;
+ goto end;
+ }
+
+ if (e) {
+ *e = entry;
+ }
+
+ end:
+ return ret;
+}
+
+/** Lookup the v2 service descriptor with base32-encoded <b>desc_id</b> and
+ * copy the pointer to it to *<b>desc</b>. Return 1 on success, 0 on
+ * well-formed-but-not-found, and -1 on failure.
+ */
+int
+rend_cache_lookup_v2_desc_as_dir(const char *desc_id, const char **desc)
+{
+ rend_cache_entry_t *e;
+ char desc_id_digest[DIGEST_LEN];
+ tor_assert(rend_cache_v2_dir);
+ if (base32_decode(desc_id_digest, DIGEST_LEN,
+ desc_id, REND_DESC_ID_V2_LEN_BASE32) < 0) {
+ log_fn(LOG_PROTOCOL_WARN, LD_REND,
+ "Rejecting v2 rendezvous descriptor request -- descriptor ID "
+ "contains illegal characters: %s",
+ safe_str(desc_id));
+ return -1;
+ }
+ /* Lookup descriptor and return. */
+ e = digestmap_get(rend_cache_v2_dir, desc_id_digest);
+ if (e) {
+ *desc = e->desc;
+ e->last_served = approx_time();
+ return 1;
+ }
+ return 0;
+}
+
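Unlike the errno-style client lookups, rend_cache_lookup_v2_desc_as_dir() above is tri-state: 1 found, 0 well-formed-but-not-found, -1 malformed. A hypothetical sketch of how a caller might map that onto directory responses; the status codes here are illustrative, not quoted from tor's directory code.

#include <stdio.h>

/* Map the tri-state lookup result onto the HTTP-style status a
 * directory server might send back for a descriptor fetch. */
static int
handle_fetch(int lookup_result)
{
  switch (lookup_result) {
    case 1:  return 200;  /* found: serve e->desc */
    case 0:  return 404;  /* well-formed ID, nothing cached */
    default: return 400;  /* malformed descriptor ID */
  }
}

int
main(void)
{
  printf("%d %d %d\n", handle_fetch(1), handle_fetch(0), handle_fetch(-1));
  return 0;
}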
+/** Parse the v2 service descriptor(s) in <b>desc</b> and store them in the
+ * directory cache. Don't attempt to decrypt the included list of
+ * introduction points (as we don't have a descriptor cookie for it).
+ *
+ * If we have a newer descriptor with the same ID, ignore this one.
+ * If we have an older descriptor with the same ID, replace it.
+ *
+ * Return 0 on success, or -1 if we couldn't parse any of them.
+ *
+ * We should only call this function for public (i.e. non-bridge) relays.
+ */
+int
+rend_cache_store_v2_desc_as_dir(const char *desc)
+{
+ const or_options_t *options = get_options();
+ rend_service_descriptor_t *parsed;
+ char desc_id[DIGEST_LEN];
+ char *intro_content;
+ size_t intro_size;
+ size_t encoded_size;
+ char desc_id_base32[REND_DESC_ID_V2_LEN_BASE32 + 1];
+ int number_parsed = 0, number_stored = 0;
+ const char *current_desc = desc;
+ const char *next_desc;
+ rend_cache_entry_t *e;
+ time_t now = time(NULL);
+ tor_assert(rend_cache_v2_dir);
+ tor_assert(desc);
+ while (rend_parse_v2_service_descriptor(&parsed, desc_id, &intro_content,
+ &intro_size, &encoded_size,
+ &next_desc, current_desc, 1) >= 0) {
+ number_parsed++;
+ /* We don't care about the introduction points. */
+ tor_free(intro_content);
+ /* For pretty log statements. */
+ base32_encode(desc_id_base32, sizeof(desc_id_base32),
+ desc_id, DIGEST_LEN);
+ /* Is descriptor too old? */
+ if (parsed->timestamp < now - REND_CACHE_MAX_AGE - REND_CACHE_MAX_SKEW) {
+ log_info(LD_REND, "Service descriptor with desc ID %s is too old.",
+ safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Is descriptor too far in the future? */
+ if (parsed->timestamp > now + REND_CACHE_MAX_SKEW) {
+ log_info(LD_REND, "Service descriptor with desc ID %s is too far in the "
+ "future.",
+ safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Do we already have a newer descriptor? */
+ e = digestmap_get(rend_cache_v2_dir, desc_id);
+ if (e && e->parsed->timestamp > parsed->timestamp) {
+ log_info(LD_REND, "We already have a newer service descriptor with the "
+ "same desc ID %s and version.",
+ safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Do we already have this descriptor? */
+ if (e && !strcmp(desc, e->desc)) {
+ log_info(LD_REND, "We already have this service descriptor with desc "
+ "ID %s.", safe_str(desc_id_base32));
+ goto skip;
+ }
+ /* Store received descriptor. */
+ if (!e) {
+ e = tor_malloc_zero(sizeof(rend_cache_entry_t));
+ digestmap_set(rend_cache_v2_dir, desc_id, e);
+ /* Treat something just uploaded as having been served a little
+ * while ago, so that flooding with new descriptors doesn't help
+ * too much.
+ */
+ e->last_served = approx_time() - 3600;
+ } else {
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ }
+ e->parsed = parsed;
+ e->desc = tor_strndup(current_desc, encoded_size);
+ e->len = encoded_size;
+ rend_cache_increment_allocation(rend_cache_entry_allocation(e));
+ log_info(LD_REND, "Successfully stored service descriptor with desc ID "
+ "'%s' and len %d.",
+ safe_str(desc_id_base32), (int)encoded_size);
+ /* Statistics: Note down this potentially new HS. */
+ if (options->HiddenServiceStatistics) {
+ rep_hist_stored_maybe_new_hs(e->parsed->pk);
+ }
+
+ number_stored++;
+ goto advance;
+ skip:
+ rend_service_descriptor_free(parsed);
+ advance:
+ /* advance to next descriptor, if available. */
+ current_desc = next_desc;
+ /* check if there is a next descriptor. */
+ if (!current_desc ||
+ strcmpstart(current_desc, "rendezvous-service-descriptor "))
+ break;
+ }
+ if (!number_parsed) {
+ log_info(LD_REND, "Could not parse any descriptor.");
+ return -1;
+ }
+ log_info(LD_REND, "Parsed %d and added %d descriptor%s.",
+ number_parsed, number_stored, number_stored != 1 ? "s" : "");
+ return 0;
+}
+
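The storage loop in rend_cache_store_v2_desc_as_dir() above has a common shape for concatenated records: parse one, receive a pointer just past it, and continue only while the remaining bytes start another record. A standalone sketch with a hypothetical record format (not the rend descriptor grammar):

#include <stdio.h>
#include <string.h>

#define MAGIC "rec "

/* Parse one newline-terminated record at cur; on success set *next to
 * the byte after it and return 0, else return -1. */
static int
parse_one(const char *cur, const char **next)
{
  const char *nl = strchr(cur, '\n');
  if (!nl)
    return -1;
  printf("parsed: %.*s\n", (int)(nl - cur), cur);
  *next = nl + 1;
  return 0;
}

int
main(void)
{
  const char *cur = "rec one\nrec two\ntrailing garbage\n";
  const char *next = NULL;
  int parsed = 0;
  while (parse_one(cur, &next) == 0) {
    parsed++;
    cur = next;
    /* Stop when the rest does not start another record, mirroring the
     * strcmpstart() check on "rendezvous-service-descriptor ". */
    if (!cur || strncmp(cur, MAGIC, strlen(MAGIC)))
      break;
  }
  return parsed ? 0 : 1;
}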
+/** Parse the v2 service descriptor in <b>desc</b> and store it to the
+ * local service rend cache. Don't attempt to decrypt the included list of
+ * introduction points.
+ *
+ * If we have a newer descriptor with the same ID, ignore this one.
+ * If we have an older descriptor with the same ID, replace it.
+ *
+ * Return 0 on success, or -1 if we couldn't understand the descriptor.
+ */
+int
+rend_cache_store_v2_desc_as_service(const char *desc)
+{
+ rend_service_descriptor_t *parsed = NULL;
+ char desc_id[DIGEST_LEN];
+ char *intro_content = NULL;
+ size_t intro_size;
+ size_t encoded_size;
+ const char *next_desc;
+ char service_id[REND_SERVICE_ID_LEN_BASE32+1];
+ rend_cache_entry_t *e;
+ int retval = -1;
+ tor_assert(rend_cache_local_service);
+ tor_assert(desc);
+
+ /* Parse the descriptor. */
+ if (rend_parse_v2_service_descriptor(&parsed, desc_id, &intro_content,
+ &intro_size, &encoded_size,
+ &next_desc, desc, 0) < 0) {
+ log_warn(LD_REND, "Could not parse descriptor.");
+ goto err;
+ }
+ /* Compute service ID from public key. */
+ if (rend_get_service_id(parsed->pk, service_id)<0) {
+ log_warn(LD_REND, "Couldn't compute service ID.");
+ goto err;
+ }
+
+ /* Do we already have a newer descriptor? Allow new descriptors with a
+ * rounded timestamp equal to or newer than the current descriptor. */
+ e = (rend_cache_entry_t*) strmap_get_lc(rend_cache_local_service,
+ service_id);
+ if (e && e->parsed->timestamp > parsed->timestamp) {
+ log_info(LD_REND, "We already have a newer service descriptor for "
+ "service ID %s.", safe_str_client(service_id));
+ goto okay;
+ }
+ /* We don't care about the introduction points. */
+ tor_free(intro_content);
+ if (!e) {
+ e = tor_malloc_zero(sizeof(rend_cache_entry_t));
+ strmap_set_lc(rend_cache_local_service, service_id, e);
+ } else {
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ }
+ e->parsed = parsed;
+ e->desc = tor_malloc_zero(encoded_size + 1);
+ strlcpy(e->desc, desc, encoded_size + 1);
+ e->len = encoded_size;
+ rend_cache_increment_allocation(rend_cache_entry_allocation(e));
+ log_debug(LD_REND,"Successfully stored rend desc '%s', len %d.",
+ safe_str_client(service_id), (int)encoded_size);
+ return 0;
+
+ okay:
+ retval = 0;
+
+ err:
+ rend_service_descriptor_free(parsed);
+ tor_free(intro_content);
+ return retval;
+}
+
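The replace path above keeps the global allocation counter consistent by decrementing the entry's old footprint before mutating it and incrementing the new footprint afterwards. A standalone sketch of that bookkeeping with a hypothetical counter and entry type (not tor's rend_cache_increment_allocation()):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static size_t total_alloc;  /* stand-in for the global cache counter */

struct entry { char *desc; size_t len; };

static size_t
entry_allocation(const struct entry *e)
{
  return sizeof(*e) + e->len + 1;
}

static char *
dup_str(const char *s)
{
  size_t n = strlen(s) + 1;
  char *p = malloc(n);
  if (p)
    memcpy(p, s, n);
  return p;
}

int
main(void)
{
  struct entry e = { NULL, 0 };

  /* First store: nothing to decrement yet. */
  e.len = strlen("v1 descriptor");
  e.desc = dup_str("v1 descriptor");
  total_alloc += entry_allocation(&e);

  /* Replacement: decrement the old footprint before freeing, then
   * increment the new one after rebuilding, as in the code above. */
  total_alloc -= entry_allocation(&e);
  free(e.desc);
  e.len = strlen("a longer v2 descriptor");
  e.desc = dup_str("a longer v2 descriptor");
  total_alloc += entry_allocation(&e);

  printf("tracked bytes: %zu\n", total_alloc);
  free(e.desc);
  return 0;
}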
+/** Parse the v2 service descriptor in <b>desc</b>, decrypt the included list
+ * of introduction points with <b>descriptor_cookie</b> (which may also be
+ * <b>NULL</b> if decryption is not necessary), and store the descriptor to
+ * the local cache under its version and service id.
+ *
+ * If we have a newer v2 descriptor with the same ID, ignore this one.
+ * If we have an older descriptor with the same ID, replace it.
+ * If the descriptor's service ID does not match
+ * <b>rend_query</b>-\>onion_address, reject it.
+ *
+ * If the descriptor's descriptor ID doesn't match <b>desc_id_base32</b>,
+ * reject it.
+ *
+ * Return 0 on success, or -1 if we rejected the descriptor.
+ * If entry is not NULL, set it with the cache entry pointer of the descriptor.
+ */
+int
+rend_cache_store_v2_desc_as_client(const char *desc,
+ const char *desc_id_base32,
+ const rend_data_t *rend_query,
+ rend_cache_entry_t **entry)
+{
+ /*XXXX this seems to have a bit of duplicate code with
+ * rend_cache_store_v2_desc_as_dir(). Fix that. */
+ /* Though having similar elements, both functions were separated on
+ * purpose:
+ * - dirs don't care about encoded/encrypted introduction points, clients
+ * do.
+ * - dirs store descriptors in a separate cache by descriptor ID, whereas
+ * clients store them by service ID; both caches are different data
+ * structures and have different access methods.
+ * - dirs store a descriptor only if they are responsible for its ID,
+ * clients do so in every case (because they have requested it before).
+ * - dirs can process multiple concatenated descriptors which is required
+ * for replication, whereas clients only accept a single descriptor.
+ * Thus, combining both methods would result in a lot of if statements
+ * which probably would not improve, but worsen code readability. -KL */
+ rend_service_descriptor_t *parsed = NULL;
+ char desc_id[DIGEST_LEN];
+ char *intro_content = NULL;
+ size_t intro_size;
+ size_t encoded_size;
+ const char *next_desc;
+ time_t now = time(NULL);
+ char key[REND_SERVICE_ID_LEN_BASE32+2];
+ char service_id[REND_SERVICE_ID_LEN_BASE32+1];
+ char want_desc_id[DIGEST_LEN];
+ rend_cache_entry_t *e;
+ int retval = -1;
+ tor_assert(rend_cache);
+ tor_assert(desc);
+ tor_assert(desc_id_base32);
+ memset(want_desc_id, 0, sizeof(want_desc_id));
+ if (entry) {
+ *entry = NULL;
+ }
+ if (base32_decode(want_desc_id, sizeof(want_desc_id),
+ desc_id_base32, strlen(desc_id_base32)) != 0) {
+ log_warn(LD_BUG, "Couldn't decode base32 %s for descriptor id.",
+ escaped_safe_str_client(desc_id_base32));
+ goto err;
+ }
+ /* Parse the descriptor. */
+ if (rend_parse_v2_service_descriptor(&parsed, desc_id, &intro_content,
+ &intro_size, &encoded_size,
+ &next_desc, desc, 0) < 0) {
+ log_warn(LD_REND, "Could not parse descriptor.");
+ goto err;
+ }
+ /* Compute service ID from public key. */
+ if (rend_get_service_id(parsed->pk, service_id)<0) {
+ log_warn(LD_REND, "Couldn't compute service ID.");
+ goto err;
+ }
+ if (rend_query->onion_address[0] != '\0' &&
+ strcmp(rend_query->onion_address, service_id)) {
+ log_warn(LD_REND, "Received service descriptor for service ID %s; "
+ "expected descriptor for service ID %s.",
+ service_id, safe_str(rend_query->onion_address));
+ goto err;
+ }
+ if (tor_memneq(desc_id, want_desc_id, DIGEST_LEN)) {
+ log_warn(LD_REND, "Received service descriptor for %s with incorrect "
+ "descriptor ID.", service_id);
+ goto err;
+ }
+
+ /* Decode/decrypt introduction points. */
+ if (intro_content && intro_size > 0) {
+ int n_intro_points;
+ if (rend_query->auth_type != REND_NO_AUTH &&
+ !tor_mem_is_zero(rend_query->descriptor_cookie,
+ sizeof(rend_query->descriptor_cookie))) {
+ char *ipos_decrypted = NULL;
+ size_t ipos_decrypted_size;
+ if (rend_decrypt_introduction_points(&ipos_decrypted,
+ &ipos_decrypted_size,
+ rend_query->descriptor_cookie,
+ intro_content,
+ intro_size) < 0) {
+ log_warn(LD_REND, "Failed to decrypt introduction points. We are "
+ "probably unable to parse the encoded introduction points.");
+ } else {
+ /* Replace encrypted with decrypted introduction points. */
+ log_info(LD_REND, "Successfully decrypted introduction points.");
+ tor_free(intro_content);
+ intro_content = ipos_decrypted;
+ intro_size = ipos_decrypted_size;
+ }
+ }
+ n_intro_points = rend_parse_introduction_points(parsed, intro_content,
+ intro_size);
+ if (n_intro_points <= 0) {
+ log_warn(LD_REND, "Failed to parse introduction points. Either the "
+ "service has published a corrupt descriptor or you have "
- "provided invalid authorization data.");
++ "provided invalid authorization data, or (maybe!) the "
++ "server is deliberately serving broken data in an attempt "
++ "to crash you with bug 21018.");
+ goto err;
+ } else if (n_intro_points > MAX_INTRO_POINTS) {
+ log_warn(LD_REND, "Found too many introduction points on a hidden "
+ "service descriptor for %s. This is probably a (misguided) "
+ "attempt to improve reliability, but it could also be an "
+ "attempt to do a guard enumeration attack. Rejecting.",
+ safe_str_client(service_id));
+
+ goto err;
+ }
+ } else {
+ log_info(LD_REND, "Descriptor does not contain any introduction points.");
+ parsed->intro_nodes = smartlist_new();
+ }
+ /* We don't need the encoded/encrypted introduction points any longer. */
+ tor_free(intro_content);
+ /* Is descriptor too old? */
+ if (parsed->timestamp < now - REND_CACHE_MAX_AGE - REND_CACHE_MAX_SKEW) {
+ log_warn(LD_REND, "Service descriptor with service ID %s is too old.",
+ safe_str_client(service_id));
+ goto err;
+ }
+ /* Is descriptor too far in the future? */
+ if (parsed->timestamp > now + REND_CACHE_MAX_SKEW) {
+ log_warn(LD_REND, "Service descriptor with service ID %s is too far in "
+ "the future.", safe_str_client(service_id));
+ goto err;
+ }
+ /* Do we have the same exact copy already in our cache? */
+ tor_snprintf(key, sizeof(key), "2%s", service_id);
+ e = (rend_cache_entry_t*) strmap_get_lc(rend_cache, key);
+ if (e && !strcmp(desc, e->desc)) {
+ log_info(LD_REND,"We already have this service descriptor %s.",
+ safe_str_client(service_id));
+ goto okay;
+ }
+ /* Verify that we are not replacing our descriptor with an older one,
+ * which is important because an evil HSDir could serve us an old
+ * descriptor. We keep the current entry only if its timestamp is strictly
+ * greater, not merely equal, because timestamps are rounded down to the
+ * hour; if the descriptor changed within the same hour, the rend failure
+ * cache will tell us whether we have a new descriptor. */
+ if (e && e->parsed->timestamp > parsed->timestamp) {
+ log_info(LD_REND, "We already have a new enough service descriptor for "
+ "service ID %s with the same desc ID and version.",
+ safe_str_client(service_id));
+ goto okay;
+ }
+ /* Check our failure cache for intro points that might be unusable. */
+ validate_intro_point_failure(parsed, service_id);
+ /* It's now possible that our intro point list is empty, which means that
+ * this descriptor is useless to us because all of its intro points have
+ * failed before. Discard the descriptor. */
+ if (smartlist_len(parsed->intro_nodes) == 0) {
+ log_info(LD_REND, "Service descriptor with service ID %s, every "
+ "intro points are unusable. Discarding it.",
+ safe_str_client(service_id));
+ goto err;
+ }
+ /* Now either purge the current one and replace its content, or create a
+ * new one and add it to the rend cache. */
+ if (!e) {
+ e = tor_malloc_zero(sizeof(rend_cache_entry_t));
+ strmap_set_lc(rend_cache, key, e);
+ } else {
+ rend_cache_decrement_allocation(rend_cache_entry_allocation(e));
+ rend_cache_failure_remove(e->parsed);
+ rend_service_descriptor_free(e->parsed);
+ tor_free(e->desc);
+ }
+ e->parsed = parsed;
+ e->desc = tor_malloc_zero(encoded_size + 1);
+ strlcpy(e->desc, desc, encoded_size + 1);
+ e->len = encoded_size;
+ rend_cache_increment_allocation(rend_cache_entry_allocation(e));
+ log_debug(LD_REND,"Successfully stored rend desc '%s', len %d.",
+ safe_str_client(service_id), (int)encoded_size);
+ if (entry) {
+ *entry = e;
+ }
+ return 0;
+
+ okay:
+ if (entry) {
+ *entry = e;
+ }
+ retval = 0;
+
+ err:
+ rend_service_descriptor_free(parsed);
+ tor_free(intro_content);
+ return retval;
+}
+
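The okay:/err: tail of rend_cache_store_v2_desc_as_client() is the classic fall-through cleanup idiom: success paths that still need the shared frees jump to okay, which sets retval and then falls into err. A minimal hypothetical sketch of the pattern (unlike the real function, which hands parsed off to the cache on the plain success path):

#include <stdlib.h>

static int
process(const char *input)
{
  int retval = -1;
  char *parsed = malloc(32);
  char *scratch = malloc(32);

  if (!parsed || !scratch)
    goto err;                /* failure: retval stays -1 */
  if (!input || !input[0])
    goto err;
  if (input[0] == '#')
    goto okay;               /* "already have it": success, free everything */

  /* Normal success path: here we would hand parsed off to a cache and
   * return 0 without freeing it; simplified to a plain free in this
   * sketch. */
  free(parsed);
  free(scratch);
  return 0;

 okay:
  retval = 0;                /* fall through into the shared cleanup */
 err:
  free(parsed);
  free(scratch);
  return retval;
}

int
main(void)
{
  return process("#cached");
}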

[tor/maint-0.2.8] Update the fallback directory mirror list in December 2016
by nickm@torproject.org 19 Dec '16
commit 4181e812c75a7a1d8d70f29a404d23e6ab875891
Author: teor <teor2345(a)gmail.com>
Date: Mon Dec 19 15:44:20 2016 +1100
Update the fallback directory mirror list in December 2016
Replace the 81 remaining fallbacks of the 100 originally introduced
in Tor 0.2.8.3-alpha in March 2016, with a list of 177 fallbacks
(123 new, 54 existing, 27 removed) generated in December 2016.
Resolves ticket 20170.
---
changes/ticket20170-v3 | 5 +
src/or/fallback_dirs.inc | 392 ++++++++++++++++++++++++++++++++---------------
2 files changed, 272 insertions(+), 125 deletions(-)
diff --git a/changes/ticket20170-v3 b/changes/ticket20170-v3
new file mode 100644
index 0000000..d634e72
--- /dev/null
+++ b/changes/ticket20170-v3
@@ -0,0 +1,5 @@
+ o Minor features (fallback directory list):
+ - Replace the 81 remaining fallbacks of the 100 originally introduced
+ in Tor 0.2.8.3-alpha in March 2016, with a list of 177 fallbacks
+ (123 new, 54 existing, 27 removed) generated in December 2016.
+ Resolves ticket 20170.
diff --git a/src/or/fallback_dirs.inc b/src/or/fallback_dirs.inc
index f628919..be94ff5 100644
--- a/src/or/fallback_dirs.inc
+++ b/src/or/fallback_dirs.inc
@@ -1,281 +1,423 @@
+/* Whitelist & blacklist excluded 1177 of 1389 candidates. */
/* To comment-out entries in this file, use C comments, and add * to the start of each line. (stem finds fallback entries using " at the start of a line.) */
-/* Whitelist & blacklist excluded 1273 of 1553 candidates. */
/* Checked IPv4 DirPorts served a consensus within 15.0s. */
/*
-Final Count: 100 (Eligible 280, Target 356 (1781 * 0.20), Max 100)
-Excluded: 180 (Same Operator 102, Failed/Skipped Download 40, Excess 38)
-Bandwidth Range: 6.0 - 67.2 MB/s
+Final Count: 177 (Eligible 212, Target 392 (1963 * 0.20), Max 200)
+Excluded: 35 (Same Operator 35, Failed/Skipped Download 0, Excess 0)
+Bandwidth Range: 1.2 - 107.3 MByte/s
*/
/*
-Onionoo Source: details Date: 2016-04-18 01:00:00 Version: 3.1
-URL: https://onionoo.torproject.org/details?fields=fingerprint%2Cnickname%2Ccontact%2Clast_changed_address_or_port%2Cconsensus_weight%2Cadvertised_bandwidth%2Cor_addresses%2Cdir_address%2Crecommended_version%2Cflags%2Ceffective_family&flag=V2Dir&type=relay&last_seen_days=-7&first_seen_days=7-
+Onionoo Source: details Date: 2016-12-19 03:00:00 Version: 3.1
+URL: https://onionoo.torproject.org/details?fields=fingerprint%2Cnickname%2Ccontact%2Clast_changed_address_or_port%2Cconsensus_weight%2Cadvertised_bandwidth%2Cor_addresses%2Cdir_address%2Crecommended_version%2Cflags%2Ceffective_family%2Cplatform&flag=V2Dir&type=relay&last_seen_days=-0&first_seen_days=7-
*/
/*
-Onionoo Source: uptime Date: 2016-04-18 01:00:00 Version: 3.1
-URL: https://onionoo.torproject.org/uptime?first_seen_days=7-&flag=V2Dir&type=relay&last_seen_days=-7
+Onionoo Source: uptime Date: 2016-12-19 03:00:00 Version: 3.1
+URL: https://onionoo.torproject.org/uptime?first_seen_days=7-&flag=V2Dir&type=relay&last_seen_days=-0
*/
-"193.171.202.146:9030 orport=9001 id=01A9258A46E97FF8B2CAC7910577862C14F2C524"
+"185.13.39.197:80 orport=443 id=001524DD403D729F08F7E5D77813EF12756CFA8D"
" weight=10",
-"5.9.110.236:9030 orport=9001 id=0756B7CD4DFC8182BE23143FAC0642F515182CEB"
-" ipv6=[2a01:4f8:162:51e2::2]:9001"
+"185.100.85.61:80 orport=443 id=025B66CEBC070FCB0519D206CF0CF4965C20C96E"
+" weight=10",
+"62.210.92.11:9030 orport=9001 id=0266B0660F3F20A7D1F3D8335931C95EF50F6C6B"
+" ipv6=[2001:bc8:338c::1]:9001"
+" weight=10",
+"185.97.32.18:9030 orport=9001 id=04250C3835019B26AA6764E85D836088BE441088"
+" weight=10",
+"92.222.20.130:80 orport=443 id=0639612FF149AA19DF3BCEA147E5B8FED6F3C87C"
+" weight=10",
+"163.172.149.155:80 orport=443 id=0B85617241252517E8ECF2CFC7F4C1A32DCD153F"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but opted-out before 0.2.8.6
- * "37.187.1.149:9030 orport=9001 id=08DC0F3C6E3D9C527C1FC8745D35DD1B0DE1875D"
- * " ipv6=[2001:41d0:a:195::1]:9001"
- * " weight=10",
- */
"5.39.92.199:80 orport=443 id=0BEA4A88D069753218EAAAD6D22EA87B9A1319D6"
+" ipv6=[2001:41d0:8:b1c7::1]:443"
" weight=10",
-"5.196.88.122:9030 orport=9001 id=0C2C599AFCB26F5CFC2C7592435924C1D63D9484"
+"163.172.25.118:80 orport=22 id=0CF8F3E6590F45D50B70F2F7DA6605ECA6CD408F"
" weight=10",
"178.62.197.82:80 orport=443 id=0D3EBA17E1C78F1E9900BABDB23861D46FCAF163"
" weight=10",
-/* Fallback was on 0.2.8.6 list, but down for a week before 0.2.9
- * "144.76.14.145:110 orport=143 id=14419131033443AE6E21DA82B0D307F7CAE42BDB"
- * " ipv6=[2a01:4f8:190:9490::dead]:443"
- * " weight=10",
- */
-"178.32.216.146:9030 orport=9001 id=17898F9A2EBC7D69DAF87C00A1BD2FABF3C9E1D2"
+"185.100.86.100:80 orport=443 id=0E8C0C8315B66DB5F703804B3889A1DD66C67CE0"
+" weight=10",
+"5.9.159.14:9030 orport=9001 id=0F100F60C7A63BED90216052324D29B08CFCF797"
+" weight=10",
+"193.11.114.43:9030 orport=9001 id=12AD30E5D25AA67F519780E2111E611A455FDC89"
+" ipv6=[2001:6b0:30:1000::99]:9050"
+" weight=10",
+"37.157.195.87:8030 orport=443 id=12FD624EE73CEF37137C90D38B2406A66F68FAA2"
+" weight=10",
+"178.62.60.37:80 orport=443 id=175921396C7C426309AB03775A9930B6F611F794"
+" weight=10",
+"204.11.50.131:9030 orport=9001 id=185F2A57B0C4620582602761097D17DB81654F70"
+" weight=10",
+"92.222.4.102:9030 orport=9001 id=1A6B8B8272632D8AD38442027F822A367128405C"
+" weight=10",
+"5.9.158.75:80 orport=443 id=1AF72E8906E6C49481A791A6F8F84F8DFEBBB2BA"
+" ipv6=[2a01:4f8:190:514a::2]:443"
" weight=10",
"46.101.151.222:80 orport=443 id=1DBAED235E3957DE1ABD25B4206BE71406FB61F8"
" weight=10",
"91.219.237.229:80 orport=443 id=1ECD73B936CB6E6B3CD647CC204F108D9DF2C9F7"
" weight=10",
+"5.9.146.203:80 orport=443 id=1F45542A24A61BF9408F1C05E0DCE4E29F2CBA11"
+" weight=10",
"212.47.229.2:9030 orport=9001 id=20462CBA5DA4C2D963567D17D0B7249718114A68"
+" ipv6=[2001:bc8:4400:2100::f03]:9001"
+" weight=10",
+"91.219.236.222:80 orport=443 id=20704E7DD51501DC303FA51B738D7B7E61397CF6"
" weight=10",
-"185.61.138.18:8080 orport=4443 id=2541759BEC04D37811C2209A88E863320271EC9C"
+"144.76.163.93:9030 orport=9001 id=22F08CF09764C4E8982640D77F71ED72FF26A9AC"
+" weight=10",
+"163.172.176.167:80 orport=443 id=230A8B2A8BA861210D9B4BA97745AEC217A94207"
+" weight=10",
+"212.47.240.10:82 orport=443 id=2A4C448784F5A83AFE6C78DA357D5E31F7989DEB"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but went down before 0.2.8.5
- * "51.254.215.121:80 orport=443 id=262B66AD25C79588AD1FC8ED0E966395B47E5C1D"
- * " weight=10",
- */
-/* Fallback was on 0.2.8.6 list, but went down before 0.2.9
- * "194.150.168.79:11112 orport=11111 id=29F1020B94BE25E6BE1AD13E93CE19D2131B487C"
- * " weight=10",
- */
"144.76.26.175:9012 orport=9011 id=2BA2C8E96B2590E1072AECE2BDB5C48921BF8510"
" weight=10",
+"178.16.208.56:80 orport=443 id=2CDCFED0142B28B002E89D305CBA2E26063FADE2"
+" ipv6=[2a00:1c20:4089:1234:cd49:b58a:9ebe:67ec]:443"
+" weight=10",
"62.210.124.124:9130 orport=9101 id=2EBD117806EE43C3CC885A8F1E4DC60F207E7D3E"
" ipv6=[2001:bc8:3f23:100::1]:9101"
" weight=10",
+"97.74.237.196:9030 orport=9001 id=2F0F32AB1E5B943CA7D062C03F18960C86E70D94"
+" weight=10",
"213.61.66.118:9031 orport=9001 id=30648BC64CEDB3020F4A405E4AB2A6347FB8FA22"
" weight=10",
+"107.170.101.39:9030 orport=443 id=30973217E70AF00EBE51797FF6D9AA720A902EAA"
+" weight=10",
+"64.113.32.29:9030 orport=9001 id=30C19B81981F450C402306E2E7CFB6C3F79CB6B2"
+" weight=10",
"212.83.154.33:8080 orport=8443 id=322C6E3A973BC10FC36DE3037AD27BC89F14723B"
" weight=10",
"109.105.109.162:52860 orport=60784 id=32EE911D968BE3E016ECA572BB1ED0A9EE43FC2F"
" ipv6=[2001:948:7:2::163]:5001"
" weight=10",
-"146.0.32.144:9030 orport=9001 id=35E8B344F661F4F2E68B17648F35798B44672D7E"
+"185.100.84.212:80 orport=443 id=330CD3DB6AD266DC70CDB512B036957D03D9BC59"
+" ipv6=[2a06:1700:0:7::1]:443"
+" weight=10",
+"163.172.13.165:9030 orport=9001 id=33DA0CAB7C27812EFF2E22C9705630A54D101FEB"
+" ipv6=[2001:bc8:38cb:201::8]:9001"
+" weight=10",
+"45.62.255.25:80 orport=443 id=3473ED788D9E63361D1572B7E82EC54338953D2A"
" weight=10",
"217.79.190.25:9030 orport=9090 id=361D33C96D0F161275EE67E2C91EE10B276E778B"
" weight=10",
-"62.210.92.11:9130 orport=9101 id=387B065A38E4DAA16D9D41C2964ECBC4B31D30FF"
-" ipv6=[2001:bc8:338c::1]:9101"
+"37.187.22.87:9030 orport=9001 id=36B9E7AC1E36B62A9D6F330ABEB6012BA7F0D400"
+" ipv6=[2001:41d0:a:1657::1]:9001"
+" weight=10",
+"176.126.252.12:21 orport=8080 id=379FB450010D17078B3766C2273303C358C3A442"
+" ipv6=[2a02:59e0:0:7::12]:81"
" weight=10",
"198.50.191.95:80 orport=443 id=39F096961ED2576975C866D450373A9913AFDC92"
" weight=10",
"164.132.77.175:9030 orport=9001 id=3B33F6FCA645AD4E91428A3AF7DC736AD9FB727B"
" weight=10",
-"176.10.107.180:9030 orport=9001 id=3D7E274A87D9A89AF064C13D1EE4CA1F184F2600"
+"212.47.230.49:9030 orport=9001 id=3D6D0771E54056AEFC28BB1DE816951F11826E97"
+" weight=10",
+"217.79.179.177:9030 orport=9001 id=3E53D3979DB07EFD736661C934A1DED14127B684"
+" ipv6=[2001:4ba0:fff9:131:6c4f::90d3]:9001"
+" weight=10",
+"212.47.237.95:9030 orport=9001 id=3F5D8A879C58961BB45A3D26AC41B543B40236D6"
+" weight=10",
+"185.100.85.101:9030 orport=9001 id=4061C553CA88021B8302F0814365070AAE617270"
+" weight=10",
+"178.62.86.96:9030 orport=9001 id=439D0447772CB107B886F7782DBC201FA26B92D1"
+" ipv6=[2a03:b0c0:1:d0::3cf:7001]:9050"
+" weight=10",
+"163.172.157.213:8080 orport=443 id=4623A9EC53BFD83155929E56D6F7B55B5E718C24"
+" weight=10",
+"31.31.78.49:80 orport=443 id=46791D156C9B6C255C2665D4D8393EC7DBAA7798"
+" weight=10",
+"69.162.139.9:9030 orport=9001 id=4791FC0692EAB60DF2BCCAFF940B95B74E7654F6"
+" ipv6=[2607:f128:40:1212::45a2:8b09]:9001"
+" weight=10",
+"51.254.246.203:9030 orport=9001 id=47B596B81C9E6277B98623A84B7629798A16E8D5"
" weight=10",
"37.187.102.186:9030 orport=9001 id=489D94333DF66D57FFE34D9D59CC2D97E2CB0053"
" ipv6=[2001:41d0:a:26ba::1]:9001"
" weight=10",
"188.165.194.195:9030 orport=9001 id=49E7AD01BB96F6FE3AB8C3B15BD2470B150354DF"
" weight=10",
-"108.53.208.157:80 orport=443 id=4F0DB7E687FC7C0AE55C8F243DA8B0EB27FBF1F2"
+"62.102.148.67:80 orport=443 id=4A0C3E177AF684581EF780981AEAF51A98A6B5CF"
" weight=10",
-"212.51.134.123:9030 orport=9001 id=50586E25BE067FD1F739998550EDDCB1A14CA5B2"
-" ipv6=[2a02:168:6e00:0:3a60:77ff:fe9c:8bd1]:9001"
+"51.254.101.242:9002 orport=9001 id=4CC9CC9195EC38645B699A33307058624F660CCF"
+" weight=10",
+"81.7.16.182:80 orport=443 id=51E1CF613FD6F9F11FE24743C91D6F9981807D82"
+" ipv6=[2a02:180:1:1::517:10b6]:993"
+" weight=10",
+"138.201.130.32:9030 orport=9001 id=52AEA31188331F421B2EDB494DB65CD181E5B257"
" weight=10",
-/* Fallback was on 0.2.8.6 list, but changed IPv4 before 0.2.9
- * "5.175.233.86:80 orport=443 id=5525D0429BFE5DC4F1B0E9DE47A4CFA169661E33"
- * " weight=10",
- */
"94.23.204.175:9030 orport=9001 id=5665A3904C89E22E971305EE8C1997BCA4123C69"
" weight=10",
-"109.163.234.9:80 orport=443 id=5714542DCBEE1DD9864824723638FD44B2122CEA"
+"95.130.12.119:80 orport=443 id=587E0A9552E4274B251F29B5B2673D38442EE4BF"
" weight=10",
"185.21.100.50:9030 orport=9001 id=58ED9C9C35E433EE58764D62892B4FFD518A3CD0"
" ipv6=[2a00:1158:2:cd00:0:74:6f:72]:443"
" weight=10",
"78.142.142.246:80 orport=443 id=5A5E03355C1908EBF424CAF1F3ED70782C0D2F74"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but went down before 0.2.8.5
- * "185.100.85.138:80 orport=46356 id=5C4DF16A0029CC4F67D3E127356E68F219269859"
- * " weight=10",
- */
-"178.16.208.62:80 orport=443 id=5CF8AFA5E4B0BB88942A44A3F3AAE08C3BDFD60B"
-" ipv6=[2a00:1c20:4089:1234:a6a4:2926:d0af:dfee]:443"
+"46.28.207.19:80 orport=443 id=5B92FA5C8A49D46D235735504C72DBB3472BA321"
+" weight=10",
+"120.29.217.46:80 orport=443 id=5E853C94AB1F655E9C908924370A0A6707508C62"
" weight=10",
"95.128.43.164:80 orport=443 id=616081EC829593AF4232550DE6FFAA1D75B37A90"
" ipv6=[2a02:ec0:209:10::4]:443"
" weight=10",
-"89.187.142.208:80 orport=443 id=64186650FFE4469EBBE52B644AE543864D32F43C"
+"195.154.122.54:80 orport=443 id=64E99CB34C595A02A3165484BD1215E7389322C6"
+" weight=10",
+"163.172.139.104:8080 orport=443 id=68F175CCABE727AA2D2309BCD8789499CEE36ED7"
+" weight=10",
+"85.214.62.48:80 orport=443 id=6A7551EEE18F78A9813096E82BF84F740D32B911"
+" weight=10",
+"95.130.11.147:9030 orport=443 id=6B697F3FF04C26123466A5C0E5D1F8D91925967A"
+" weight=10",
+"91.121.84.137:4951 orport=4051 id=6DE61A6F72C1E5418A66BFED80DFB63E4C77668F"
+" ipv6=[2001:41d0:1:8989::1]:4051"
+" weight=10",
+"213.61.66.117:9032 orport=9002 id=6E44A52E3D1FF7683FE5C399C3FB5E912DE1C6B4"
+" weight=10",
+"80.127.137.19:80 orport=443 id=6EF897645B79B6CB35E853B32506375014DE3621"
+" ipv6=[2001:981:47c1:1::6]:443"
" weight=10",
-/* Fallback was on 0.2.8.6 list, but operator opted-out before 0.2.9
- * "144.76.73.140:9030 orport=9001 id=6A640018EABF3DA9BAD9321AA37C2C87BBE1F907"
- * " weight=10",
- */
-/* Fallback was on 0.2.8.6 list, but went down before 0.2.9
- * "94.126.23.174:9030 orport=9001 id=6FC6F08270D565BE89B7C819DD8E2D487397C073"
- * " weight=10",
- */
-"176.31.191.26:9030 orport=9001 id=7350AB9ED7568F22745198359373C04AC783C37C"
+"95.183.48.12:80 orport=443 id=7187CED1A3871F837D0E60AC98F374AC541CB0DA"
+" weight=10",
+"163.172.35.247:80 orport=443 id=71AB4726D830FAE776D74AEF790CF04D8E0151B4"
+" weight=10",
+"85.235.250.88:80 orport=443 id=72B2B12A3F60408BDBC98C6DF53988D3A0B3F0EE"
" weight=10",
"46.101.237.246:9030 orport=9001 id=75F1992FD3F403E9C082A5815EB5D12934CDF46C"
" ipv6=[2a03:b0c0:3:d0::208:5001]:9050"
" weight=10",
+"188.166.133.133:9030 orport=9001 id=774555642FDC1E1D4FDF2E0C31B7CA9501C5C9C7"
+" ipv6=[2a03:b0c0:2:d0::5:f001]:9001"
+" weight=10",
+"81.30.158.213:9030 orport=9001 id=789EA6C9AE9ADDD8760903171CFA9AC5741B0C70"
+" ipv6=[2001:4ba0:cafe:e84::1]:9001"
+" weight=10",
"185.11.180.67:80 orport=9001 id=794D8EA8343A4E820320265D05D4FA83AB6D1778"
" weight=10",
+"171.25.193.131:80 orport=443 id=79861CF8522FC637EF046F7688F5289E49D94576"
+" weight=10",
"62.210.129.246:80 orport=443 id=79E169B25E4C7CE99584F6ED06F379478F23E2B8"
" weight=10",
"82.223.21.74:9030 orport=9001 id=7A32C9519D80CA458FC8B034A28F5F6815649A98"
" ipv6=[2001:470:53e0::cafe]:9050"
" weight=10",
+"51.254.136.195:80 orport=443 id=7BB70F8585DFC27E75D692970C0EEB0F22983A63"
+" weight=10",
+"193.11.114.45:9031 orport=9002 id=80AAF8D5956A43C197104CEF2550CD42D165C6FB"
+" weight=10",
"192.160.102.164:80 orport=9001 id=823AA81E277F366505545522CEDC2F529CE4DC3F"
+" ipv6=[2605:e200:d00c:c01d::1111]:9002"
" weight=10",
"192.87.28.82:9030 orport=9001 id=844AE9CAD04325E955E2BE1521563B79FE7094B7"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but changed address before 0.2.8.5
- * "84.219.173.60:9030 orport=443 id=855BC2DABE24C861CD887DB9B2E950424B49FC34"
- * " weight=10",
- */
"163.172.138.22:80 orport=443 id=8664DC892540F3C789DB37008236C096C871734D"
+" ipv6=[2001:bc8:4400:2100::1:3]:443"
+" weight=10",
+"188.166.23.127:80 orport=443 id=8672E8A01B4D3FA4C0BBE21C740D4506302EA487"
+" ipv6=[2a03:b0c0:2:d0::27b:7001]:9050"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but downloads were slow before 0.2.8.5
- * "185.96.88.29:80 orport=443 id=86C281AD135058238D7A337D546C902BE8505DDE"
- * " weight=10",
- */
"93.180.156.84:9030 orport=9001 id=8844D87E9B038BE3270938F05AF797E1D3C74C0F"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but DirPort changed before 0.2.8.5
- * "178.217.184.32:9030 orport=443 id=8B7F47AE1A5D954A3E58ACDE0865D09DBA5B738D"
- * " weight=10",
- */
+"212.47.241.21:80 orport=443 id=892F941915F6A0C6E0958E52E0A9685C190CF45C"
+" weight=10",
+"163.172.194.53:9030 orport=9001 id=8C00FA7369A7A308F6A137600F0FA07990D9D451"
+" weight=10",
+"178.254.44.135:9030 orport=9001 id=8FA37B93397015B2BC5A525C908485260BE9F422"
+" weight=10",
"151.80.42.103:9030 orport=9001 id=9007C1D8E4F03D506A4A011B907A9E8D04E3C605"
" ipv6=[2001:41d0:e:f67::114]:9001"
" weight=10",
-"5.79.68.161:81 orport=443 id=9030DCF419F6E2FBF84F63CBACBA0097B06F557E"
-" ipv6=[2001:1af8:4700:a012:1::1]:443"
+"173.255.245.116:9030 orport=9001 id=91E4015E1F82DAF0121D62267E54A1F661AB6DC7"
" weight=10",
"51.255.41.65:9030 orport=9001 id=9231DF741915AA1630031A93026D88726877E93A"
" weight=10",
+"178.16.208.57:80 orport=443 id=92CFD9565B24646CAC2D172D3DB503D69E777B8A"
+" ipv6=[2a00:1c20:4089:1234:7825:2c5d:1ecd:c66f]:443"
+" weight=10",
"91.219.237.244:80 orport=443 id=92ECC9E0E2AF81BB954719B189AC362E254AD4A5"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but changed fingerprint before 0.2.8.5
- * "46.101.102.71:80 orport=443 id=9504CB22EEB25D344DE63CB7A6F2C46F895C3686"
- * " ipv6=[2a03:b0c0:3:d0::2ed:7001]:9050"
- * " weight=10",
- */
+"204.8.156.142:80 orport=443 id=94C4B7B8C50C86A92B6A20107539EE2678CF9A28"
+" weight=10",
+"176.10.104.243:8080 orport=8443 id=95DA61AEF23A6C851028C1AA88AD8593F659E60F"
+" weight=10",
+"85.10.202.87:9030 orport=9001 id=971AFB23C168DCD8EDA17473C1C452B359DE3A5A"
+" weight=10",
"85.214.206.219:9030 orport=9001 id=98F8D5F359949E41DE8DF3DBB1975A86E96A84A0"
" weight=10",
+"163.172.223.200:80 orport=443 id=998BF3ED7F70E33D1C307247B9626D9E7573C438"
+" weight=10",
"81.7.10.93:31336 orport=31337 id=99E246DB480B313A3012BC3363093CC26CD209C7"
" weight=10",
+"91.229.20.27:9030 orport=9001 id=9A0D54D3A6D2E0767596BF1515E6162A75B3293F"
+" weight=10",
+"66.111.2.20:9030 orport=9001 id=9A68B85A02318F4E7E87F2828039FBD5D75B0142"
+" weight=10",
+"5.35.251.247:9030 orport=9001 id=9B1F5187DFBA89DC24B37EA7BF896C12B43A27AE"
+" weight=10",
+"5.9.151.241:9030 orport=4223 id=9BF04559224F0F1C3C953D641F1744AF0192543A"
+" weight=10",
+"86.105.212.130:9030 orport=443 id=9C900A7F6F5DD034CFFD192DAEC9CCAA813DB022"
+" weight=10",
+"146.185.177.103:80 orport=9030 id=9EC5E097663862DF861A18C32B37C5F82284B27D"
+" weight=10",
+"178.254.20.134:80 orport=443 id=9F5068310818ED7C70B0BC4087AB55CB12CB4377"
+" weight=10",
"46.28.110.244:80 orport=443 id=9F7D6E6420183C2B76D3CE99624EBC98A21A967E"
" weight=10",
-"46.165.230.5:80 orport=443 id=A0F06C2FADF88D3A39AA3072B406F09D7095AC9E"
+"178.62.22.36:80 orport=443 id=A0766C0D3A667A3232C7D569DE94A28F9922FCB1"
+" ipv6=[2a03:b0c0:1:d0::174:1]:9050"
" weight=10",
"171.25.193.77:80 orport=443 id=A10C4F666D27364036B562823E5830BC448E046A"
" ipv6=[2001:67c:289c:3::77]:443"
" weight=10",
-"176.9.5.116:9030 orport=9001 id=A1EB8D8F1EE28DB98BBB1EAA3B4BEDD303BAB911"
+"171.25.193.78:80 orport=443 id=A478E421F83194C114F41E94F95999672AED51FE"
+" ipv6=[2001:67c:289c:3::78]:443"
+" weight=10",
+"178.16.208.58:80 orport=443 id=A4C98CEA3F34E05299417E9F885A642C88EF6029"
+" ipv6=[2a00:1c20:4089:1234:cdae:1b3e:cc38:3d45]:443"
+" weight=10",
+"163.172.149.122:80 orport=443 id=A9406A006D6E7B5DA30F2C6D4E42A338B5E340B2"
+" weight=10",
+"213.61.66.116:9033 orport=9003 id=A9DEB920B42B4EC1DE6249034039B06D61F38690"
" weight=10",
"192.34.63.137:9030 orport=443 id=ABCB4965F1FEE193602B50A365425105C889D3F8"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but had low uptime before 0.2.8.5
- * "195.154.164.243:80 orport=443 id=AC66FFA4AB35A59EBBF5BF4C70008BF24D8A7A5C"
- * " ipv6=[2001:bc8:399f:f000::1]:993"
- * " weight=10",
- */
+"195.154.164.243:80 orport=443 id=AC66FFA4AB35A59EBBF5BF4C70008BF24D8A7A5C"
+" weight=10",
"86.59.119.88:80 orport=443 id=ACD889D86E02EDDAB1AFD81F598C0936238DC6D0"
" weight=10",
+"185.129.62.62:9030 orport=9001 id=ACDD9E85A05B127BA010466C13C8C47212E8A38F"
+" ipv6=[2a06:d380:0:3700::62]:9001"
+" weight=10",
+"188.40.128.246:9030 orport=9001 id=AD19490C7DBB26D3A68EFC824F67E69B0A96E601"
+" weight=10",
"163.172.131.88:80 orport=443 id=AD253B49E303C6AB1E048B014392AC569E8A7DAE"
" ipv6=[2001:bc8:4400:2100::2:1009]:443"
" weight=10",
-"178.254.44.135:80 orport=443 id=AE6A8C18E7499B586CD36246AC4BCAFFBBF93AB2"
+"176.10.104.240:8080 orport=8443 id=AD86CD1A49573D52A7B6F4A35750F161AAD89C88"
+" weight=10",
+"31.185.104.20:80 orport=443 id=ADB2C26629643DBB9F8FE0096E7D16F9414B4F8D"
" weight=10",
"37.187.7.74:80 orport=443 id=AEA43CB1E47BE5F8051711B2BF01683DB1568E05"
" ipv6=[2001:41d0:a:74a::1]:443"
" weight=10",
+"176.126.252.11:443 orport=9001 id=B0279A521375F3CB2AE210BDBFC645FDD2E1973A"
+" ipv6=[2a02:59e0:0:7::11]:9003"
+" weight=10",
"212.129.62.232:80 orport=443 id=B143D439B72D239A419F8DCE07B8A8EB1B486FA7"
" weight=10",
"185.66.250.141:9030 orport=9001 id=B1726B94885CE3AC3910CA8B60622B97B98E2529"
" weight=10",
+"198.199.64.217:80 orport=443 id=B1D81825CFD7209BD1B4520B040EF5653C204A23"
+" ipv6=[2604:a880:400:d0::1a9:b001]:9050"
+" weight=10",
+"136.243.214.137:80 orport=443 id=B291D30517D23299AD7CEE3E60DFE60D0E3A4664"
+" weight=10",
+"212.47.233.86:9030 orport=9001 id=B4CAFD9CBFB34EC5DAAC146920DC7DFAFE91EA20"
+" weight=10",
+"93.115.97.242:9030 orport=9001 id=B5212DB685A2A0FCFBAE425738E478D12361710D"
+" weight=10",
+"81.2.209.10:443 orport=80 id=B6904ADD4C0D10CDA7179E051962350A69A63243"
+" ipv6=[2001:15e8:201:1::d10a]:80"
+" weight=10",
"193.11.114.46:9032 orport=9003 id=B83DC1558F0D34353BB992EF93AFEAFDB226A73E"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but downloads were slow before 0.2.8.5
- * "178.62.36.64:9030 orport=9001 id=B87C84E38DAECFFFFDE98E5AEE5786AFDC748F2C"
- * " weight=10",
- */
+"85.248.227.164:444 orport=9002 id=B84F248233FEA90CAD439F292556A3139F6E1B82"
+" ipv6=[2a00:1298:8011:212::164]:9004"
+" weight=10",
"197.231.221.211:9030 orport=9001 id=BC630CBBB518BE7E9F4E09712AB0269E9DC7D626"
" weight=10",
+"89.163.247.43:9030 orport=9001 id=BC7ACFAC04854C77167C7D66B7E471314ED8C410"
+" weight=10",
"198.96.155.3:8080 orport=5001 id=BCEDF6C193AA687AE471B8A22EBF6BC57C2D285E"
" weight=10",
+"128.199.55.207:9030 orport=9001 id=BCEF908195805E03E92CCFE669C48738E556B9C5"
+" ipv6=[2a03:b0c0:2:d0::158:3001]:9001"
+" weight=10",
"148.251.190.229:9030 orport=9010 id=BF0FB582E37F738CD33C3651125F2772705BB8E8"
" ipv6=[2a01:4f8:211:c68::2]:9010"
" weight=10",
-"188.138.112.60:1433 orport=1521 id=C414F28FD2BEC1553024299B31D4E726BEB8E788"
+"163.172.35.249:80 orport=443 id=C08DE49658E5B3CFC6F2A952B453C4B608C9A16A"
" weight=10",
-"195.154.79.128:80 orport=443 id=C697612CA5AED06B8D829FCC6065B9287212CB2F"
+"185.35.202.221:9030 orport=9001 id=C13B91384CDD52A871E3ECECE4EF74A7AC7DCB08"
+" ipv6=[2a02:ed06::221]:9001"
" weight=10",
-"37.59.46.159:9030 orport=9001 id=CBD0D1BD110EC52963082D839AC6A89D0AE243E7"
+"213.239.217.18:1338 orport=1337 id=C37BC191AC389179674578C3E6944E925FE186C2"
+" ipv6=[2a01:4f8:a0:746a:101:1:1:1]:1337"
" weight=10",
-"91.121.54.8:9030 orport=9001 id=CBEE0F3303C8C50462A12107CA2AE061831931BC"
+"188.138.112.60:1433 orport=1521 id=C414F28FD2BEC1553024299B31D4E726BEB8E788"
+" weight=10",
+"37.59.46.159:9030 orport=9001 id=CBD0D1BD110EC52963082D839AC6A89D0AE243E7"
" weight=10",
"178.62.199.226:80 orport=443 id=CBEFF7BA4A4062045133C053F2D70524D8BBE5BE"
" ipv6=[2a03:b0c0:2:d0::b7:5001]:443"
" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but went down before 0.2.8.5
- * "81.7.17.171:80 orport=443 id=CFECDDCA990E3EF7B7EC958B22441386B6B8D820"
- * " ipv6=[2a02:180:1:1::517:11ab]:443"
- * " weight=10",
- */
"134.119.3.164:9030 orport=9001 id=D1B8AAA98C65F3DF7D8BB3AF881CAEB84A33D8EE"
" weight=10",
"185.13.38.75:9030 orport=9001 id=D2A1703758A0FBBA026988B92C2F88BAB59F9361"
" weight=10",
-"37.187.115.157:9030 orport=9001 id=D5039E1EBFD96D9A3F9846BF99EC9F75EDDE902A"
-" weight=10",
-/* Fallback was on 0.2.8.2-alpha list, but lost Guard flag before 0.2.8.5
- * "185.14.185.240:9030 orport=443 id=D62FB817B0288085FAC38A6DC8B36DCD85B70260"
- * " weight=10",
- */
"37.221.162.226:9030 orport=9001 id=D64366987CB39F61AD21DBCF8142FA0577B92811"
" weight=10",
+"46.101.169.151:9030 orport=9001 id=D760C5B436E42F93D77EF2D969157EEA14F9B39C"
+" ipv6=[2a03:b0c0:3:d0::74f:a001]:9001"
+" weight=10",
+"46.4.111.124:9030 orport=9001 id=D9065F9E57899B3D272AA212317AF61A9B14D204"
+" weight=10",
"193.35.52.53:9030 orport=9001 id=DAA39FC00B196B353C2A271459C305C429AF09E4"
" weight=10",
+"178.33.183.251:80 orport=443 id=DD823AFB415380A802DCAEB9461AE637604107FB"
+" ipv6=[2001:41d0:2:a683::251]:443"
+" weight=10",
"178.62.173.203:9030 orport=9001 id=DD85503F2D1F52EF9EAD621E942298F46CD2FC10"
" ipv6=[2a03:b0c0:0:1010::a4:b001]:9001"
" weight=10",
+"83.212.99.68:80 orport=443 id=DDBB2A38252ADDA53E4492DDF982CA6CC6E10EC0"
+" ipv6=[2001:648:2ffc:1225:a800:bff:fe3d:67b5]:443"
+" weight=10",
"5.34.183.205:80 orport=443 id=DDD7871C1B7FA32CB55061E08869A236E61BDDF8"
" weight=10",
-/* Fallback was on 0.2.8.6 list, but went down before 0.2.9
- * "195.191.233.221:80 orport=443 id=DE134FC8E5CC4EC8A5DE66934E70AC9D70267197"
- * " weight=10",
- */
-"46.252.26.2:45212 orport=49991 id=E589316576A399C511A9781A73DA4545640B479D"
-" weight=10",
-/* Fallback was on 0.2.8.6 list, but went down before 0.2.9
- * "176.31.180.157:143 orport=22 id=E781F4EC69671B3F1864AE2753E0890351506329"
- * " ipv6=[2001:41d0:8:eb9d::1]:22"
- * " weight=10",
- */
+"167.114.66.61:9696 orport=443 id=DE6CD5F09DF26076F26321B0BDFBE78ACD935C65"
+" ipv6=[2607:5300:100::78d]:443"
+" weight=10",
+"78.24.75.53:9030 orport=9001 id=DEB73705B2929AE9BE87091607388939332EF123"
+" weight=10",
+"92.222.38.67:80 orport=443 id=DED6892FF89DBD737BA689698A171B2392EB3E82"
+" weight=10",
+"217.12.199.208:80 orport=443 id=DF3AED4322B1824BF5539AE54B2D1B38E080FF05"
+" ipv6=[2a02:27a8:0:2::7e]:443"
+" weight=10",
+"167.114.35.28:9030 orport=9001 id=E65D300F11E1DB12C534B0146BDAB6972F1A8A48"
+" weight=10",
+"212.47.244.38:8080 orport=443 id=E81EF60A73B3809F8964F73766B01BAA0A171E20"
+" weight=10",
"131.188.40.188:443 orport=80 id=EBE718E1A49EE229071702964F8DB1F318075FF8"
" weight=10",
-"91.219.236.222:80 orport=443 id=EC413181CEB1C8EDC17608BBB177CD5FD8535E99"
+"89.40.71.149:8081 orport=8080 id=EC639EDAA5121B47DBDF3D6B01A22E48A8CB6CC7"
+" weight=10",
+"192.87.28.28:9030 orport=9001 id=ED2338CAC2711B3E331392E1ED2831219B794024"
+" weight=10",
+"212.83.40.238:9030 orport=9001 id=F409FA7902FD89270E8DE0D7977EA23BC38E5887"
+" weight=10",
+"5.199.142.236:9030 orport=9001 id=F4C0EDAA0BF0F7EC138746F8FEF1CE26C7860265"
" weight=10",
-"94.242.246.23:443 orport=9001 id=F65E0196C94DFFF48AFBF2F5F9E3E19AAE583FD0"
-" ipv6=[2a01:608:ffff:ff07::1:23]:9003"
+"46.28.207.141:80 orport=443 id=F69BED36177ED727706512BA6A97755025EEA0FB"
" weight=10",
-"46.101.143.173:80 orport=443 id=F960DF50F0FD4075AC9B505C1D4FFC8384C490FB"
+"78.47.18.110:443 orport=80 id=F8D27B163B9247B232A2EEE68DD8B698695C28DE"
+" weight=10",
+"178.254.13.126:80 orport=443 id=F9246DEF2B653807236DA134F2AEAB103D58ABFE"
+" weight=10",
+"185.96.180.29:80 orport=443 id=F93D8F37E35C390BCAD9F9069E13085B745EC216"
+" weight=10",
+"104.243.35.196:9030 orport=9001 id=FA3415659444AE006E7E9E5375E82F29700CFDFD"
+" weight=10",
+"86.59.119.83:80 orport=443 id=FC9AC8EA0160D88BCCFDE066940D7DD9FA45495B"
" weight=10",
-/* Fallback was on 0.2.8.6 list, but changed IPv4 before 0.2.9
- * "195.154.8.111:80 orport=443 id=FCB6695F8F2DC240E974510A4B3A0F2B12AB5B64"
- * " weight=10",
- */
"192.187.124.98:9030 orport=9001 id=FD1871854BFC06D7B02F10742073069F0528B5CC"
" weight=10",
+"212.129.38.254:9030 orport=9001 id=FDF845FC159C0020E2BDDA120C30C5C5038F74B4"
+" weight=10",
+"149.56.45.200:9030 orport=9001 id=FE296180018833AF03A8EACD5894A614623D3F76"
+" weight=10",
"193.11.164.243:9030 orport=9001 id=FFA72BD683BC2FCF988356E6BEC1E490F313FB07"
" ipv6=[2001:6b0:7:125::243]:9001"
" weight=10",

[tor/maint-0.2.8] Merge remote-tracking branch 'teor/new-fallbacks-028-20161219' into maint-0.2.8
by nickm@torproject.org 19 Dec '16
commit e0306320b543049bc66adb19e359b369c66fb775
Merge: 56a2b8d 4181e81
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Mon Dec 19 07:27:39 2016 -0500
Merge remote-tracking branch 'teor/new-fallbacks-028-20161219' into maint-0.2.8
changes/ticket20170-v3 | 5 +
src/or/fallback_dirs.inc | 392 ++++++++++++++++++++++++++++++++---------------
2 files changed, 272 insertions(+), 125 deletions(-)

[tor/maint-0.2.9] Merge remote-tracking branch 'teor/new-fallbacks-028-20161219' into maint-0.2.8
by nickm@torproject.org 19 Dec '16
commit e0306320b543049bc66adb19e359b369c66fb775
Merge: 56a2b8d 4181e81
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Mon Dec 19 07:27:39 2016 -0500
Merge remote-tracking branch 'teor/new-fallbacks-028-20161219' into maint-0.2.8
changes/ticket20170-v3 | 5 +
src/or/fallback_dirs.inc | 392 ++++++++++++++++++++++++++++++++---------------
2 files changed, 272 insertions(+), 125 deletions(-)

[tor/release-0.2.8] Merge remote-tracking branch 'teor/new-fallbacks-028-20161219' into maint-0.2.8
by nickm@torproject.org 19 Dec '16
commit e0306320b543049bc66adb19e359b369c66fb775
Merge: 56a2b8d 4181e81
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Mon Dec 19 07:27:39 2016 -0500
Merge remote-tracking branch 'teor/new-fallbacks-028-20161219' into maint-0.2.8
changes/ticket20170-v3 | 5 +
src/or/fallback_dirs.inc | 392 ++++++++++++++++++++++++++++++++---------------
2 files changed, 272 insertions(+), 125 deletions(-)