[tor-commits] [torflow/master] Analyze a multidimensional surface? No problem! I rollerskate on that shit.

mikeperry at torproject.org mikeperry at torproject.org
Wed Jun 20 20:57:26 UTC 2012


commit f0365f4feb85ef66a55c308d3b46baea32e241b5
Author: Mike Perry <mikeperry-git at fscked.org>
Date:   Sat Jun 16 15:06:51 2012 -0700

    Analyze a multidimensional surface? No problem! I rollerskate on that shit.
    
    This commit provides analysis to guide parameter choices for the path bias
    defense. It contains a standalone attack and defense simulation script, a
    results.txt with example output, and an observations.txt with comments to
    help work towards an answer for #6135.
---
 CircuitAnalysis/PathBias/observations.txt |   40 ++
 CircuitAnalysis/PathBias/path_bias.py     |  554 +++++++++++++++++++++++++++++
 CircuitAnalysis/PathBias/results.txt      |  152 ++++++++
 3 files changed, 746 insertions(+), 0 deletions(-)

diff --git a/CircuitAnalysis/PathBias/observations.txt b/CircuitAnalysis/PathBias/observations.txt
new file mode 100644
index 0000000..c283bbd
--- /dev/null
+++ b/CircuitAnalysis/PathBias/observations.txt
@@ -0,0 +1,40 @@
+Analysis summary:
+
+1. Simulation confirms the raccoon's proof: Under a path bias attack with a
+   100% accurate correlator, P(compromise) = c/n, where c is the amount of
+   capacity the path bias adversary *actually* carries. (This may be difficult
+   to achieve in practice, since you're actively rejecting capacity during the
+   attack, but PID feedback with the bw auths would fix that for you).
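+   For example, an adversary that actually carries 5% of the network's
+   capacity can expect to correlate about 5% of the circuits it carries
+   end-to-end (compare the PassiveEvilGuard numbers in results.txt).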
+
+2. With or without the defense, simply trying to fail less than PATH_BIAS_PCT
+   of your circuits is not substantially different from failing every circuit
+   you want to fail. With the defense, your overall network-wide rate of
+   compromise doesn't change. The only things that change are the likelihood
+   of getting caught, and the number of circuits you get to observe per
+   client.
+
+3. Against the defense, an adversary with full knowledge of the client's
+   current path bias parameter values can expect to evade detection, yet
+   still achieve:
+   P(compromise) <= (c/n)^2 * (100/PATH_BIAS_PCT)
+   circs_per_client <= circuit_attempts*(c/n)
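+   For example, with c/n = 0.20 and PATH_BIAS_PCT = 70, this gives
+   P(compromise) <= 0.04 * (100/70) ~= 0.057, and for 10000 circuit
+   attempts, circs_per_client <= 2000.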
+
+4. An adversary conducting a probabilistic attack (failing circuits while
+   keeping its long-run success rate near PATH_BIAS_PCT) can achieve
+   similar, though slightly lower, bounds.
+
+
+Optimization problem:
+Trade off a large spread between PATH_BIAS_PCT and the ambient success rate
+against a large MIN_CIRCS value. We need to measure the ambient success rate
+to do this.
+
+Other Questions:
+Can the adversary really track a client's current count values? Can we
+prevent this with randomization?
+
+Action items/Conclusions:
+Parameter choice seems to be blocked on getting a good measure of the ambient
+success rate; however, the following immediate conclusions seem clear:
+
+1. We should transform SCALE_FACTOR into a percent, and make it high. Like 95.
+   (See the worked example below.)
+2. We might want to use a separate, lower PATH_BIAS_PCT before SCALE_THRESHOLD.
+3. After SCALE_THRESHOLD, PATH_BIAS_PCT can be pretty high for notice and warn.
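+
+Worked example for conclusion 1: with SCALE_FACTOR = 95 (a percent) and
+SCALE_THRESHOLD = 250, each scaling event multiplies both counters by 0.95,
+so a 250-circuit window decays to 237.5 circuits while the observed success
+rate is preserved.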
diff --git a/CircuitAnalysis/PathBias/path_bias.py b/CircuitAnalysis/PathBias/path_bias.py
new file mode 100755
index 0000000..621e2c1
--- /dev/null
+++ b/CircuitAnalysis/PathBias/path_bias.py
@@ -0,0 +1,554 @@
+#!/usr/bin/python
+
+import random
+
+# TODO:
+# Q: What quantity of middle bandwidth do you need to kill guards?
+# A: Intuitively, you need the disable rate % of bandwidth, but you
+# might have some edge cases to exploit with min_circs.
+
+PATH_BIAS_PCT = 70
+
+# XXX: Min_circs only activates the "notice" level logs
+PATH_BIAS_MIN_CIRCS = 20
+
+# XXX: An int divisor was wrong here. Fix that in Tor. We might
+# even want a weighted moving average, but that will be trickier
+# to analyze.
+PATH_BIAS_SCALE_FACTOR = 95
+PATH_BIAS_SCALE_THRESHOLD = 250
+
+# XXX: We should only emit warnings if we are above the scaling threshold.
+PATH_BIAS_WARN_CIRCS = PATH_BIAS_SCALE_THRESHOLD*(PATH_BIAS_SCALE_FACTOR/100.0)
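+# (With the defaults above, this works out to 250 * 0.95 = 237.5 circuits.)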
+
+#############################################################
+
+# FIXME: haxxx. Who cares, though?
+def reset_globals():
+  global PATH_BIAS_PCT
+  global PATH_BIAS_MIN_CIRCS
+  global PATH_BIAS_SCALE_FACTOR
+  global PATH_BIAS_SCALE_THRESHOLD
+  global PATH_BIAS_WARN_CIRCS
+
+  PATH_BIAS_PCT = 70
+  PATH_BIAS_MIN_CIRCS = 20
+  PATH_BIAS_SCALE_FACTOR = 95
+  PATH_BIAS_SCALE_THRESHOLD = 250
+  PATH_BIAS_WARN_CIRCS = PATH_BIAS_SCALE_THRESHOLD*(PATH_BIAS_SCALE_FACTOR/100.0)
+
+
+####################### Guard Types #########################
+
+# Normal Guard experiences the average circuit failure rate
+# of the network as a whole
+class Guard:
+  def __init__(self, succeed_rate):
+    self.first_hops_total = 0
+    self.success_total = 0
+
+    self._first_hops = 0
+    self._success = 0
+    self.succeed_rate = succeed_rate
+    self.rejected_count = 0
+
+  def reset(self):
+    self._success = 0
+    self._first_hops = 0
+
+  def reject_if_bad(self):
+    if self.is_bad():
+      self.reset()
+      self.rejected_count += 1
+
+  def reject_rate(self):
+    return self.rejected_count/float(self.first_hops_total)
+
+  def _get_rate(self):
+    return self._success/float(self._first_hops)
+
+  def is_bad(self):
+    return self._first_hops >= PATH_BIAS_MIN_CIRCS and \
+           (self._get_rate() < (PATH_BIAS_PCT/100.0))
+
+  def build_circuit(self):
+    self._inc_first_hop()
+    if random.random() < self.succeed_rate:
+      self._inc_success()
+
+    # Client may give up on us after this circuit
+    self.reject_if_bad()
+
+  def circ_fail_count(self):
+    return self._first_hops - self._success
+
+  def _inc_first_hop(self):
+    self._first_hops += 1
+    self.first_hops_total += 1
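+    # Past SCALE_THRESHOLD, decay both windowed counters by SCALE_FACTOR
+    # percent: the observed success rate is preserved, but recent circuits
+    # weigh more than old ones.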
+    if self._first_hops > PATH_BIAS_SCALE_THRESHOLD:
+      self._first_hops *= PATH_BIAS_SCALE_FACTOR/100.0
+      self._success *= PATH_BIAS_SCALE_FACTOR/100.0
+
+  def _inc_success(self):
+    self._success += 1
+    self.success_total += 1
+
+# EvilGuard collects statistics on how evil he is, but doesn't
+# actually implement any evilness
+class EvilGuard(Guard):
+  def __init__(self, succeed_rate, adversary_capacity):
+    Guard.__init__(self, succeed_rate)
+    self.adversary_capacity = adversary_capacity # c/n probability of malicious exit
+    self.capture_count = 0
+
+  def pwnt_per_client(self):
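+    # Each rejection corresponds to a client we lost; +1 counts the current
+    # client.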
+    return self.capture_count/float(self.rejected_count+1)
+
+  def capture_rate(self):
+    return self.capture_count/float(self.first_hops_total)
+
+  def compromise_rate(self):
+    return self.capture_count/float(self.success_total)
+
+# PassiveEvilGuard uses a non-destructive long-term timing-based
+# tagging attack to fully correlate circuits end-to-end with 100%
+# accuracy. PassiveEvilGuard does not kill any circuits.
+class PassiveEvilGuard(EvilGuard):
+  def __init__(self, succeed_rate, adversary_capacity):
+    EvilGuard.__init__(self, succeed_rate, adversary_capacity)
+
+  def build_circuit(self):
+    self._inc_first_hop()
+
+    # The presence of a malicious exit is a prior probability governed by the
+    # client. Decide it now.
+    got_malicious_exit = False
+    if random.random() < self.adversary_capacity:
+      got_malicious_exit = True
+
+    if random.random() < self.succeed_rate:
+      if got_malicious_exit: # via timing-based tagging attack
+        self._inc_success()
+        self.capture_count += 1
+      else:
+        self._inc_success() # "Better luck next time :/"
+
+    # Client may give up on us after this circuit
+    self.reject_if_bad()
+
+# UnrepentantEvilGuard uses a destructive tagging attack to
+# fully correlate circuits end-to-end with 100%
+# accuracy, as well as to kill uncorrelated circuits.
+#
+# UnrepentantEvilGuard doesn't care if there is a defense or
+# not.
+class UnrepentantEvilGuard(EvilGuard):
+  def __init__(self, succeed_rate, adversary_capacity):
+    EvilGuard.__init__(self, succeed_rate, adversary_capacity)
+
+  def build_circuit(self):
+    self._inc_first_hop()
+
+    # The presence of a malicious exit is a prior probability governed by the
+    # client. Decide it now.
+    got_malicious_exit = False
+    if random.random() < self.adversary_capacity:
+      got_malicious_exit = True
+
+    if random.random() < self.succeed_rate:
+      if got_malicious_exit: # via tagging attack
+        self._inc_success()
+        self.capture_count += 1
+      else:
+        pass # "We can't deanon it? Who cares then?"
+
+    # Client may give up on us after this circuit
+    self.reject_if_bad()
+
+# OmniscientEvilGuard is the worst-case adversary against
+# the path bias counters implemented in Tor 0.2.3.17.
+#
+# OmniscientEvilGuard knows client path counts, when they are about to
+# think it's bad, and when they scale, and tries to use all of these
+# to fail what it can to bias client paths without appearing bad to
+# them.
+#
+# Further in favor of the adversary, we assume that their circuit
+# failure rate is actually less than the network average by
+# the fraction of the network that they control (because the rest
+# of the network experiences this circuit failure as part of the
+# average failure).
+#
+# Further still, OmniscientEvilGuard is *so* omniscient, it even knows
+# when circuits will fail due to ambient noise, so it never gets
+# killed by chance. (It is debatable how much this helps; a
+# smart adversary could play the stats close enough to the line
+# to approach this omniscience asymptotically.)
+#
+# Note: These omniscience assumptions all favor the attacker,
+# but they also simplify analysis to get worst-case bounds easily.
+#
+# XXX: Introducing some fuzz into our scaling count and/or rate might
+# help remove this exact omniscience in practice?
+class OmniscientEvilGuard(EvilGuard):
+  def __init__(self, succeed_rate, adversary_capacity):
+    EvilGuard.__init__(self, succeed_rate, adversary_capacity)
+
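+  # look_ahead()/stop_looking() snapshot the path bias counters and simulate
+  # n additional first hops, letting the guard preview what the client's
+  # counters would look like before deciding whether to fail a circuit.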
+  def look_ahead(self, n):
+    self.prev_first_hops = self._first_hops
+    self.prev_success = self._success
+    self.prev_first_hops_total = self.first_hops_total
+    self.prev_success_total = self.success_total
+    for i in xrange(n):
+      self._inc_first_hop()
+
+  def stop_looking(self):
+    self._first_hops = self.prev_first_hops
+    self._success = self.prev_success
+    self.first_hops_total = self.prev_first_hops_total
+    self.success_total = self.prev_success_total
+
+  # This guard should never get caught
+  def reject_if_bad(self):
+    assert not self.is_bad()
+
+  def build_circuit(self):
+    self._inc_first_hop()
+
+    # The presence of a malicious exit is a prior probability governed by the
+    # client. Decide it now.
+    got_malicious_exit = False
+    if random.random() < self.adversary_capacity:
+      got_malicious_exit = True
+
+    # In reality, OmniscientEvilGuard sees less failure because some
+    # of the failure in the network is due to other colluding nodes.
+    #if random.random() < self.succeed_rate + self.adversary_capacity:
+    #
+    # Note: We cut this out, because it favors the attacker to do so.
+    # It removes the risk of elimination by chance (which they could mitigate
+    # for an unknown but possibly small cost).
+    if True:
+      if got_malicious_exit: # via tagging attack
+        self.capture_count += 1
+        self._inc_success() # "We built a circuit! Yay!"
+      else:
+        # A look-ahead of 0 suffices here; it would only need to be non-zero
+        # to mitigate the risk of random rejection
+        self.look_ahead(0)
+        if (self._get_rate() <= (PATH_BIAS_PCT/100.0)):
+          self.stop_looking()
+          self._inc_success() # "I better be good! don't want to get caught.."
+        else:
+          pass # Fail the circuit by doing nothing. It's not useful
+
+    # Client may give up on us after this circuit
+    self.reject_if_bad()
+
+# ProbabalisticEvilGuard fails untagged circuits, but only as often as it
+# can while keeping its long-run success rate pct_below_path_bias points
+# below the warning rate
+class ProbabalisticEvilGuard(EvilGuard):
+  def __init__(self, succeed_rate, adversary_capacity, pct_below_path_bias):
+    EvilGuard.__init__(self, succeed_rate, adversary_capacity)
+    # FIXME: There may be an optimal point where pct_below_path_bias
+    # is the lowest possible value that the adversary expects to control?
+    # Doesn't seem to be worth probing, though
+    self.path_bias_rate = (PATH_BIAS_PCT - pct_below_path_bias)/100.0
+    assert self.path_bias_rate <= 1.0
+
+  def build_circuit(self):
+    self._inc_first_hop()
+
+    # The presence of a malicious exit is a prior probability governed by the
+    # client. Decide it now.
+    got_malicious_exit = False
+    if random.random() < self.adversary_capacity:
+      got_malicious_exit = True
+
+    # ProbabalisticEvilGuard sees less failure because some
+    # of the failure in the network is due to other colluding nodes.
+    if random.random() < self.succeed_rate + self.adversary_capacity:
+      if got_malicious_exit: # via tagging attack
+        self._inc_success()
+        self.capture_count += 1
+      elif not self.success_total or \
+         self.success_total/float(self.first_hops_total) <= self.path_bias_rate:
+        # "Uh oh, we're failing too much, better let some through"
+        self._inc_success()
+      else:
+        pass # Fail the circuit by doing nothing. It's not useful
+
+    # Client may give up on us after this circuit
+    self.reject_if_bad()
+
+####################### Testing and Simulation #########################
+
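+# Drive guard g through up to circ_count circuit builds, returning early if
+# the say_when(g) predicate becomes true.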
+def simulate_circs_until(g, circ_count, say_when):
+  for i in xrange(circ_count):
+    g.build_circuit()
+    if say_when(g):
+      return True
+
+  return say_when(g)
+
+# Variables:
+# success_rate
+# PATH_BIAS_MIN_CIRCS = 20
+# PATH_BIAS_PCT = 70
+def notice_false_positive_test(trials, success_rate, min_circs, path_bias_pct):
+  # FIXME: Look it's just easier this way, ok? Get off my back already
+  global PATH_BIAS_MIN_CIRCS
+  global PATH_BIAS_PCT
+  PATH_BIAS_MIN_CIRCS = min_circs
+  PATH_BIAS_PCT = path_bias_pct
+
+  g = Guard(success_rate)
+
+  for i in xrange(1+trials/min_circs):
+    simulate_circs_until(g, PATH_BIAS_SCALE_THRESHOLD, lambda g: False)
+    g.reset()
+
+  #print g._get_rate()
+
+  return g.reject_rate()
+
+def reject_false_positive_test(trials, success_rate, scale_circs, path_bias_pct):
+  # FIXME: Look it's just easier this way, ok? Get off my back already
+  global PATH_BIAS_SCALE_THRESHOLD
+  global PATH_BIAS_PCT
+  PATH_BIAS_SCALE_THRESHOLD = scale_circs
+  PATH_BIAS_PCT = path_bias_pct
+
+  g = Guard(success_rate)
+
+  # Ignore startup. We don't reject then.
+  simulate_circs_until(g, PATH_BIAS_SCALE_THRESHOLD, lambda g: False)
+  g.rejected_count = 0
+
+  simulate_circs_until(g, trials, lambda g: False)
+
+  #print g._get_rate()
+
+  return g.reject_rate()
+
+def generic_rate_test(g, trials, success_rate, adversary_capacity, path_bias_pct, rate_fcn):
+  # FIXME: Look it's just easier this way, ok? Get off my back already
+  global PATH_BIAS_PCT
+  PATH_BIAS_PCT = path_bias_pct
+
+  simulate_circs_until(g, trials, lambda g: False)
+
+  if not isinstance(g, UnrepentantEvilGuard):
+    assert not g.is_bad()
+
+  return rate_fcn(g)
+
+################ Multi-Dementianal Analysis #####################
+
+# If brute force doesn't work, you're not using enough
+def brute_force(cmptr, functor, ranges, increment):
+  testpoint = map(lambda p: p[0], ranges)
+  maxpoint = testpoint
+  maxval = functor(*testpoint)
+
+  print "New extrema at "+str(maxpoint)+": "+str(maxval)
+
+  for dementia in xrange(len(ranges)):
+    if increment[dementia] > 0:
+      cmpr = lambda x, y: x<y
+    else:
+      cmpr = lambda x, y: x>y
+
+    value = ranges[dementia][0]
+    while cmpr(value, ranges[dementia][1]):
+      value += increment[dementia]
+      testpoint[dementia] = value
+      val = functor(*testpoint)
+      if cmptr(val, maxval):
+        maxval = val
+        maxpoint = testpoint[:] # copy the current best point
+        print "New extrema at "+str(maxpoint)+": "+str(maxval)
+
+  # FIXME: Haxx
+  reset_globals()
+
+  return maxpoint
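+
+# Example invocation (hypothetical; not used by main() below) that maximizes
+# f(x, y) = -(x-1)**2 - (y-2)**2 with a coarse sweep:
+#   brute_force(lambda a, b: a > b,
+#               lambda x, y: -(x-1)**2 - (y-2)**2,
+#               [(0, 3), (0, 4)], [0.5, 0.5])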
+
+
+def surface_plot(functor, startpoint, ranges, increment):
+  pass
+
+def gradient_descent(functor, startpoint, ranges, increment):
+  # Warning, mentat: If brute force doesn't work, you're not using enough
+  # It might be wise to try to get a 3d color plot/heatmap/some other
+  # visualization before attempting this?
+  pass
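+
+# A minimal sketch of the grid sampling surface_plot() might do (hypothetical;
+# nothing calls it yet): evaluate functor over a 2-D grid so the result can be
+# handed to any heatmap or contour tool.
+def sample_surface_2d(functor, xs, ys):
+  return [[functor(x, y) for x in xs] for y in ys]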
+
+def main():
+  #random.seed(23)
+
+  if True:
+    print "==================== P(Compromise|Guard) =========================="
+
+    print "\nPassiveEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "(As expected, P(CompromisedExit|PassiveEvilGuard) ~= c/n)"
+    print brute_force(lambda x,y: x>y,
+                      lambda t, a,b,c:
+                        generic_rate_test(PassiveEvilGuard(a,b), t, a,b,c,
+                                          lambda g:
+                                            g.compromise_rate()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(10000,10000), (0.75,0.75), (0.05,0.85), (70, 70)],
+                     [0, 0, 0.2, 5])
+
+
+    print "\nUnrepentantEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "(As expected, P(CompromisedExit|UnrepentantEvilGuard) = 1.0)"
+    print brute_force(lambda x,y: x>y,
+                      lambda t,a,b,c:
+                        generic_rate_test(UnrepentantEvilGuard(a,b), t,a,b,c,
+                                          lambda g:
+                                            g.compromise_rate()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(10000,10000), (0.75,0.75), (0.05,0.85), (70, 70)],
+                     [0, 0, 0.2, 5])
+
+    print "\nProbabalisticEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "P(CompromisedExit|ProbabalisticEvilGuard) <= (c/n)*(100/PATH_BIAS_PCT)"
+    print brute_force(lambda x,y: x>y,
+                      lambda t,a,b,c:
+                        generic_rate_test(ProbabalisticEvilGuard(a,b,5),
+                                          t,a,b,c,
+                                          lambda g:
+                                            g.compromise_rate()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(10000,10000), (0.75,0.75), (0.05,0.85), (70, 70)],
+                     [0, 0, 0.2, 5])
+
+    print "\nOmniscientEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "P(CompromisedExit|OmniscientEvilGuard) <= (c/n)*(100/PATH_BIAS_PCT)"
+    print brute_force(lambda x,y: x>y,
+                      lambda t,a,b,c:
+                        generic_rate_test(OmniscientEvilGuard(a,b), t,a,b,c,
+                                          lambda g:
+                                            g.compromise_rate()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(10000,10000), (0.75,0.75), (0.05,0.85), (70, 70)],
+                     [0, 0, 0.2, 5])
+
+    print "\nOmniscientEvilGuard compromise at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "P(CompromisedExit|OmniscientEvilGuard) <= (c/n)*(100/PATH_BIAS_PCT)"
+    print brute_force(lambda x,y: x<y,
+                      lambda t,a,b,c:
+                        generic_rate_test(OmniscientEvilGuard(a,b), t,a,b,c,
+                                          lambda g:
+                                            g.compromise_rate()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(10000,10000), (0.75,0.75), (0.20,0.20), (20, 80)],
+                     [0, 0, 0.05, 20])
+
+  if True:
+    print "\n\n==================== Circuits pwnt per client ========================="
+
+    print "\nUnrepentantEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "circs_per_client ~= success_rate*c/n*MIN_CIRCS      for c/n < PATH_BIAS_PCT || c/n < success_rate"
+    print "                 ~= success_rate*circ_attempts*c/n  for c/n > PATH_BIAS_PCT && c/n > success_rate"
+    print brute_force(lambda x,y: x>y,
+                      lambda t,a,b,c:
+                        generic_rate_test(UnrepentantEvilGuard(a,b), t,a,b,c,
+                                          lambda g:
+                                            g.pwnt_per_client()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(100000,100000), (0.75,0.75), (0.05,0.85), (50, 50)],
+                     [0, 0, 0.2, 5])
+
+    print "\nPassiveEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "circs_per_client ~= success_rate * circ_attempts * c/n"
+    print brute_force(lambda x,y: x>y,
+                      lambda t,a,b,c:
+                        generic_rate_test(PassiveEvilGuard(a,b),
+                                          t,a,b,c,
+                                          lambda g:
+                                            g.pwnt_per_client()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(100000,100000), (0.75,0.75), (0.05,0.85), (50, 50)],
+                     [0, 0, 0.2, 5])
+
+    print "\nProbabalisticEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "circs_per_client ~= success_rate * circ_attempts * c/n"
+    print brute_force(lambda x,y: x>y,
+                      lambda t,a,b,c:
+                        generic_rate_test(ProbabalisticEvilGuard(a,b,5),
+                                          t,a,b,c,
+                                          lambda g:
+                                            g.pwnt_per_client()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(100000,100000), (0.75,0.75), (0.05,0.85), (50, 50)],
+                     [0, 0, 0.2, 5])
+
+    print "\nOmniscientEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:"
+    print "circs_per_client ~= circ_attempts * c/n"
+    print brute_force(lambda x,y: x>y,
+                      lambda t,a,b,c:
+                        generic_rate_test(OmniscientEvilGuard(a,b), t,a,b,c,
+                                          lambda g:
+                                            g.pwnt_per_client()),
+                     #generic_rate_test(trials, success_rate, adversary_capacity, path_bias_pct):
+                     [(100000,100000), (0.75,0.75), (0.05,0.85), (50, 50)],
+                     [0, 0, 0.2, 5])
+
+
+  if True:
+    print "\n\n===================== FALSE POSITIVES ============================"
+
+    print "\nNotice false positive rates at [trials, success_rate, min_circs, path_bias_pct]:"
+    print "(Results are some function of success_rate - path_bias_pct vs min_circs)"
+    print brute_force(lambda x,y: x<y,
+                     notice_false_positive_test,
+                     #false_positive_test(trials, success_rate, min_circs, path_bias_pct):
+                     [(100000,100000), (0.65, 0.65), (50,250), (70, 70)],
+                     [0, -0.1, 50, 5])
+
+    print "\nNotice false positive rates at [trials, success_rate, min_circs, path_bias_pct]:"
+    print "(Results are some function of success_rate - path_bias_pct vs min_circs)"
+    print brute_force(lambda x,y: x<y,
+                     notice_false_positive_test,
+                     #false_positive_test(trials, success_rate, min_circs, path_bias_pct):
+                     [(100000,100000), (0.70, 0.70), (50,500), (70, 70)],
+                     [0, -0.1, 50, 5])
+
+    print "\nNotice false positives at [trials, success_rate, min_circs, path_bias_pct]:"
+    print "(Results are some function of success_rate - path_bias_pct vs min_circs)"
+    print brute_force(lambda x,y: x<y,
+                     notice_false_positive_test,
+                     #false_positive_test(trials, success_rate, min_circs, path_bias_pct):
+                     [(100000,100000), (0.75, 0.75), (20,400), (70, 70)],
+                     [0, -0.1, 20, 5])
+
+    print "\nReject false positive rates at [trials, success_rate, scale_circs, path_bias_pct]:"
+    print "(Results are some function of success_rate - path_bias_pct vs scale_circs)"
+    print brute_force(lambda x,y: x<y,
+                     reject_false_positive_test,
+                     #false_positive_test(trials, success_rate, scale_circs, path_bias_pct):
+                     [(1000000,1000000), (0.65, 0.65), (50,250), (70, 70)],
+                     [0, -0.1, 50, 5])
+
+    print "\nReject false positive rates at [trials, success_rate, scale_circs, path_bias_pct]:"
+    print "(Results are some function of success_rate - path_bias_pct vs scale_circs)"
+    print brute_force(lambda x,y: x<y,
+                     reject_false_positive_test,
+                     #false_positive_test(trials, success_rate, scale_circs, path_bias_pct):
+                     [(1000000,1000000), (0.70, 0.70), (50,500), (70, 70)],
+                     [0, -0.1, 50, 5])
+
+    print "\nReject false positive rates at [trials, success_rate, scale_circs, path_bias_pct]:"
+    print "(Results are some function of success_rate - path_bias_pct vs scale_circs)"
+    print brute_force(lambda x,y: x<y,
+                     reject_false_positive_test,
+                     #false_positive_test(trials, success_rate, scale_circs, path_bias_pct):
+                     [(1000000,1000000), (0.75, 0.75), (50,500), (70, 70)],
+                     [0, -0.1, 50, 5])
+
+
+if __name__ == "__main__":
+  main()
diff --git a/CircuitAnalysis/PathBias/results.txt b/CircuitAnalysis/PathBias/results.txt
new file mode 100644
index 0000000..4293851
--- /dev/null
+++ b/CircuitAnalysis/PathBias/results.txt
@@ -0,0 +1,152 @@
+==================== P(Compromise|Guard) ==========================
+
+PassiveEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:
+(As expected, P(CompromisedExit|PassiveEvilGuard) ~= c/n)
+New extrema at [10000, 0.75, 0.05, 70]: 0.052610548748
+New extrema at [10000, 0.75, 0.25, 70]: 0.250574712644
+New extrema at [10000, 0.75, 0.45, 70]: 0.455518836748
+New extrema at [10000, 0.75, 0.65, 70]: 0.64692513369
+New extrema at [10000, 0.75, 0.8500000000000001, 70]: 0.851249664069
+[10000, 0.75, 0.8500000000000001, 70]
+
+UnrepentantEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:
+(As expected, P(CompromisedExit|UnrepentantEvilGuard) = 1.0)
+New extrema at [10000, 0.75, 0.05, 70]: 1.0
+[10000, 0.75, 0.8500000000000001, 70]
+
+ProbabalisticEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:
+P(CompromisedExit|ProbabalisticEvilGuard) <= (c/n)*(100/PATH_BIAS_PCT)
+New extrema at [10000, 0.75, 0.05, 70]: 0.0615289955391
+New extrema at [10000, 0.75, 0.25, 70]: 0.381941239809
+New extrema at [10000, 0.75, 0.45, 70]: 0.686913732124
+New extrema at [10000, 0.75, 0.65, 70]: 0.993549377976
+New extrema at [10000, 0.75, 0.8500000000000001, 70]: 1.0
+[10000, 0.75, 0.8500000000000001, 70]
+
+OmniscientEvilGuard compromise rate at [success_rate, adversary_capacity, path_bias_pct]:
+P(CompromisedExit|OmniscientEvilGuard) <= (c/n)*(100/PATH_BIAS_PCT)
+New extrema at [10000, 0.75, 0.05, 70]: 0.0680291553523
+New extrema at [10000, 0.75, 0.25, 70]: 0.356724310419
+New extrema at [10000, 0.75, 0.45, 70]: 0.646151649293
+New extrema at [10000, 0.75, 0.65, 70]: 0.917728688757
+New extrema at [10000, 0.75, 0.8500000000000001, 70]: 1.0
+[10000, 0.75, 0.8500000000000001, 70]
+
+OmniscientEvilGuard compromise at [success_rate, adversary_capacity, path_bias_pct]:
+P(CompromisedExit|OmniscientEvilGuard) <= (c/n)*(100/PATH_BIAS_PCT)
+New extrema at [10000, 0.75, 0.2, 20]: 0.927127409497
+New extrema at [10000, 0.75, 0.2, 40]: 0.497260956175
+New extrema at [10000, 0.75, 0.2, 60]: 0.328728362184
+New extrema at [10000, 0.75, 0.2, 80]: 0.257664872982
+[10000, 0.75, 0.2, 80]
+
+
+==================== Circuits pwnt per client =========================
+
+UnrepentantEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:
+circs_per_client ~= success_rate*c/n*MIN_CIRCS      for c/n < PATH_BIAS_PCT || c/n < success_rate
+                 ~= success_rate*circ_attempts*c/n  for c/n > PATH_BIAS_PCT && c/n > success_rate
+New extrema at [100000, 0.75, 0.05, 50]: 0.742451509698
+New extrema at [100000, 0.75, 0.25, 50]: 3.73414682937
+New extrema at [100000, 0.75, 0.45, 50]: 6.91443298969
+New extrema at [100000, 0.75, 0.65, 50]: 30.9604339502
+New extrema at [100000, 0.75, 0.8500000000000001, 50]: 63741.0
+[100000, 0.75, 0.8500000000000001, 50]
+
+PassiveEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:
+circs_per_client ~= success_rate * circ_attempts * c/n
+New extrema at [100000, 0.75, 0.05, 50]: 3657.0
+New extrema at [100000, 0.75, 0.25, 50]: 18619.0
+New extrema at [100000, 0.75, 0.45, 50]: 33849.0
+New extrema at [100000, 0.75, 0.65, 50]: 49115.0
+New extrema at [100000, 0.75, 0.8500000000000001, 50]: 63625.0
+[100000, 0.75, 0.8500000000000001, 50]
+
+ProbabalisticEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:
+circs_per_client ~= success_rate * circ_attempts * c/n
+New extrema at [100000, 0.75, 0.05, 50]: 4036.0
+New extrema at [100000, 0.75, 0.65, 50]: 65053.0
+New extrema at [100000, 0.75, 0.8500000000000001, 50]: 85187.0
+[100000, 0.75, 0.8500000000000001, 50]
+
+OmniscientEvilGuard compromised circs at [success_rate, adversary_capacity, path_bias_pct]:
+circs_per_client ~= circ_attempts * c/n
+New extrema at [100000, 0.75, 0.05, 50]: 4936.0
+New extrema at [100000, 0.75, 0.25, 50]: 25097.0
+New extrema at [100000, 0.75, 0.45, 50]: 44944.0
+New extrema at [100000, 0.75, 0.65, 50]: 65078.0
+New extrema at [100000, 0.75, 0.8500000000000001, 50]: 84984.0
+[100000, 0.75, 0.8500000000000001, 50]
+
+
+===================== FALSE POSITIVES ============================
+
+Notice false positive rates at [trials, success_rate, min_circs, path_bias_pct]:
+(Results are some function of success_rate - path_bias_pct vs min_circs)
+New extrema at [100000, 0.65, 50, 70]: 0.0151544227886
+New extrema at [100000, 0.65, 100, 70]: 0.00748851148851
+New extrema at [100000, 0.65, 150, 70]: 0.00394002998501
+New extrema at [100000, 0.65, 250, 70]: 0.00385037406484
+[100000, 0.65, 250, 70]
+
+Notice false positive rates at [trials, success_rate, min_circs, path_bias_pct]:
+(Results are some function of success_rate - path_bias_pct vs min_circs)
+New extrema at [100000, 0.7, 50, 70]: 0.0087836081959
+New extrema at [100000, 0.7, 100, 70]: 0.00484715284715
+New extrema at [100000, 0.7, 150, 70]: 0.00275262368816
+New extrema at [100000, 0.7, 200, 70]: 0.00241916167665
+New extrema at [100000, 0.7, 250, 70]: 0.00180548628429
+New extrema at [100000, 0.7, 300, 70]: 0.0
+[100000, 0.7, 500, 70]
+
+Notice false positives at [trials, success_rate, min_circs, path_bias_pct]:
+(Results are some function of success_rate - path_bias_pct vs min_circs)
+New extrema at [100000, 0.75, 20, 70]: 0.00484143171366
+New extrema at [100000, 0.75, 40, 70]: 0.00279088364654
+New extrema at [100000, 0.75, 60, 70]: 0.00188602279544
+New extrema at [100000, 0.75, 80, 70]: 0.00137170263789
+New extrema at [100000, 0.75, 100, 70]: 0.000995004995005
+New extrema at [100000, 0.75, 120, 70]: 0.000834532374101
+New extrema at [100000, 0.75, 140, 70]: 0.000581818181818
+New extrema at [100000, 0.75, 160, 70]: 0.000472843450479
+New extrema at [100000, 0.75, 200, 70]: 0.000287425149701
+New extrema at [100000, 0.75, 240, 70]: 0.000143884892086
+New extrema at [100000, 0.75, 260, 70]: 0.0
+[100000, 0.75, 400, 70]
+
+Reject false positive rates at [trials, success_rate, scale_circs, path_bias_pct]:
+(Results are some function of success_rate - path_bias_pct vs scale_circs)
+New extrema at [1000000, 0.65, 50, 70]: 0.0346242687866
+New extrema at [1000000, 0.65, 100, 70]: 0.0328767123288
+New extrema at [1000000, 0.65, 150, 70]: 0.0322721591761
+New extrema at [1000000, 0.65, 200, 70]: 0.0318086382723
+New extrema at [1000000, 0.65, 250, 70]: 0.0317770557361
+[1000000, 0.65, 250, 70]
+
+Reject false positive rates at [trials, success_rate, scale_circs, path_bias_pct]:
+(Results are some function of success_rate - path_bias_pct vs scale_circs)
+New extrema at [1000000, 0.7, 50, 70]: 0.019617019149
+New extrema at [1000000, 0.7, 100, 70]: 0.014500549945
+New extrema at [1000000, 0.7, 150, 70]: 0.0125621156826
+New extrema at [1000000, 0.7, 200, 70]: 0.0109148170366
+New extrema at [1000000, 0.7, 250, 70]: 0.00958460384904
+New extrema at [1000000, 0.7, 300, 70]: 0.00869939018295
+New extrema at [1000000, 0.7, 350, 70]: 0.00858299595142
+New extrema at [1000000, 0.7, 400, 70]: 0.00795681727309
+New extrema at [1000000, 0.7, 450, 70]: 0.00731071018042
+New extrema at [1000000, 0.7, 500, 70]: 0.00690254872564
+[1000000, 0.7, 500, 70]
+
+Reject false positive rates at [trials, success_rate, scale_circs, path_bias_pct]:
+(Results are some function of success_rate - path_bias_pct vs scale_circs)
+New extrema at [1000000, 0.75, 50, 70]: 0.00594470276486
+New extrema at [1000000, 0.75, 100, 70]: 0.00212178782122
+New extrema at [1000000, 0.75, 150, 70]: 0.000978853172024
+New extrema at [1000000, 0.75, 200, 70]: 0.000467906418716
+New extrema at [1000000, 0.75, 250, 70]: 0.000266933266683
+New extrema at [1000000, 0.75, 300, 70]: 0.000119964010797
+New extrema at [1000000, 0.75, 350, 70]: 7.99720097966e-05
+New extrema at [1000000, 0.75, 400, 70]: 3.59856057577e-05
+New extrema at [1000000, 0.75, 450, 70]: 1.99910040482e-05
+New extrema at [1000000, 0.75, 500, 70]: 2.99850074963e-06
+[1000000, 0.75, 500, 70]


