tor-commits

06 Nov '18
commit 1dd7f1ff78fcb69121130cbd802b0b8b527ffc63
Author: Mike Perry <mikeperry-git(a)torproject.org>
Date: Mon Nov 5 19:45:05 2018 +0000
Proposal 254 updates from asn's review.
---
proposals/254-padding-negotiation.txt | 85 ++++++++++++++++++-----------------
1 file changed, 44 insertions(+), 41 deletions(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index b9ecc05..d950446 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -74,6 +74,10 @@ using the primitives in Section 3, by using "leaky pipe" topology to
send the RELAY commands to the Guard node instead of to later nodes in
the circuit.
+Because the above link-level padding only sends padding cells if the link is
+idle, it can be used in combination with the more complicated circuit-level
+padding below, without compounding overhead effects.
+
3. End-to-end circuit padding
@@ -90,16 +94,27 @@ consensus, and custom research machines can be listed in Torrc.
Circuits can have either one or two state machines at both the origin and at a
specified middle hop.
-Each state machine can contain up to three states ("Start", "Burst" and
-"Gap") governing their behavior. Not all states need to be used.
+Each state machine can contain up to three states ("Start", "Burst" and "Gap")
+governing their behavior, as well as an "END" state. Not all states need to be
+used.
Each state of a padding machine specifies either:
* A histogram describing inter-arrival cell delays; OR
* A parameterized delay probability distribution for inter-arrival cell delays
In either case, the lower bound of the delay probability distribution can be
-specified as a parameter, or it can be learned by measuring the RTT of the
-circuit.
+specified as the start_usec parameter, and/or it can be learned by measuring
+the RTT of the circuit at the middle node. For client-side machines, RTT
+measurement is always set to 0. RTT measurement at the middle node is
+calculated by measuring the difference between the time of arrival of a
+received cell (ie: away from origin) and the time of arrival of a sent cell
+(ie: towards origin). The RTT is continually updated so long as two cells do
+not arrive back-to-back in either direction. If the most recent measured RTT
+value is larger than our measured value so far, this larger value is used. If
+the most recent measured RTT value is lower than our measured value so far, it
+is averaged with our current measured value. (We favor longer RTTs slightly in
+this way, because circuits are growing away from the middle node and becoming
+longer).
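[Editorial note: a minimal sketch of the RTT update rule described above, assuming microsecond integer timestamps; names are illustrative, not Tor's implementation.]

```python
def update_rtt_estimate(current_estimate_usec, new_measurement_usec):
    """Favor longer RTTs slightly: adopt a larger measurement outright,
    but average in a smaller one (circuits grow away from the middle
    node, so RTTs tend to lengthen)."""
    if current_estimate_usec == 0:
        # No estimate yet: take the first measurement as-is.
        return new_measurement_usec
    if new_measurement_usec > current_estimate_usec:
        return new_measurement_usec
    return (current_estimate_usec + new_measurement_usec) // 2
```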
If the histogram is used, it has an additional special "infinity" bin that
means "infinite delay".
@@ -128,54 +143,39 @@ When an event causes a transition to a state (or back to the same state), a
delay is sampled from the histogram or delay distribution, and a padding cell is
scheduled to be sent after that delay.
-If a non-padding cell is sent before the timer, the timer is cancelled and a
+If a non-padding cell is sent before the timer, the timer is canceled and a
new padding delay is chosen.
3.1.1. Histogram Specification
If a histogram is used by a state (as opposed to a fixed parameterized
distribution), then each of the histograms' fields represents a probability
-distribution that is expanded into bins representing time periods a[i]..b[i]
-as follows:
+distribution that is encoded into bins of exponentially increasing width.
+
+The first bin of the histogram (bin 0) has 0 width, with a delay value of
+start_usec+rtt_estimate (from the machine definition, and rtt estimate above).
-start_usec,max_sec,histogram_len initialized from appropriate histogram
-body.
+The bin before the "infinity bin" has a time value of
+start_usec+rtt_estimate+range_sec*USEC_PER_SEC.
-n = histogram_len-1
-INFINITY_BIN = n
+The bins between these two points are exponentially spaced, so that smaller
+bin indexes represent narrower time ranges, doubling up until the last bin
+range of [(start_usec+rtt_estimate+range_sec*USEC_PER_SEC)/2,
+start_usec+rtt_estimate+range_sec*USEC_PER_SEC).
-a[0] = start_usec;
-b[0] = start_usec + max_sec*USEC_PER_SEC/2^(n-1);
-for(i=1; i < n; i++) {
- a[i] = start_usec + max_sec*USEC_PER_SEC/2^(n-i)
- b[i] = start_usec + max_sec*USEC_PER_SEC/2^(n-i-1)
-}
+This exponentially increasing bin width allows the histograms to most
+accurately represent small interpacket delay (where accuracy is needed), and
+devote less accuracy to larger timescales (where accuracy is not as
+important).
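[Editorial note: one reading of the exponential bin layout above, sketched under the assumption that bin widths halve relative to the base value start_usec+rtt_estimate, consistent with the earlier max_sec/2^(n-i) formulation; illustrative only.]

```python
USEC_PER_SEC = 1_000_000

def bin_left_edges(start_usec, rtt_estimate_usec, range_sec, num_bins):
    """Return the left edge (usec) of each of num_bins bins below the
    "infinity" bin. Bin 0 has zero width at the base value; each later
    bin doubles in width, and the highest bin covers the upper half of
    [base, base + range_sec seconds)."""
    base = start_usec + rtt_estimate_usec
    range_usec = range_sec * USEC_PER_SEC
    edges = [base]  # bin 0: zero-width at base
    for i in range(1, num_bins):
        edges.append(base + range_usec // 2 ** (num_bins - i))
    return edges
```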
To sample the delay time to send a padding packet, perform the
following:
-
- i = 0;
- curr_weight = histogram[0];
-
- tot_weight = sum(histogram);
- bin_choice = crypto_rand_int(tot_weight);
-
- while (curr_weight < bin_choice) {
- curr_weight += histogram[i];
- i++;
- }
-
- if (i == INFINITY_BIN)
- return; // Don't send a padding packet
-
- // Sample uniformly between a[i] and b[i]
- send_padding_packet_at = a[i] + crypto_rand_int(b[i] - a[i]);
-
-In this way, the bin widths are exponentially increasing in width, where
-the width is set at max_sec/2^(n-i) seconds. This exponentially
-increasing bin width allows the histograms to most accurately represent
-small interpacket delay (where accuracy is needed), and devote less
-accuracy to larger timescales (where accuracy is not as important).
+ * Select a bin weighted by the number of tokens in its index compared to
+ the total.
+ * If the infinity bin is selected, do not schedule padding.
+ * If bin 0 is selected, schedule padding at exactly its time value.
+ * For other bins, uniformly sample a time value between this bin and
+ the next bin, and schedule padding then.
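[Editorial note: a sketch of the four sampling steps above, assuming a token-count list whose final entry is the infinity bin; helper names are illustrative, not Tor's.]

```python
import random

def sample_padding_delay(token_counts, edges, top_usec, rng=random):
    """Sample a padding delay. token_counts has one entry per bin, the
    last being the "infinity" bin; edges are the left edges of the
    non-infinity bins; top_usec is the right edge of the highest bin.
    Returns a delay in usec, or None meaning "do not schedule padding".
    Assumes at least one token is present."""
    total = sum(token_counts)
    choice = rng.randrange(total)
    # Select a bin weighted by its token count.
    acc = 0
    for i, count in enumerate(token_counts):
        acc += count
        if choice < acc:
            break
    if i == len(token_counts) - 1:
        return None                        # infinity bin: no padding
    if i == 0:
        return edges[0]                    # bin 0: exact time value
    right = edges[i + 1] if i + 1 < len(edges) else top_usec
    return rng.randrange(edges[i], right)  # uniform within the bin
```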
3.1.2 Histogram Token Removal
@@ -262,7 +262,10 @@ a RELAY_COMMAND_PADDING_NEGOTIATED with the following format:
};
The 'machine_type' field should be the same as the one from the
-PADDING_NEGOTIATE cell.
+PADDING_NEGOTIATE cell. This is because, as an optimization, new machines can
+be installed at the client side immediately after tearing down an old machine.
+If the response machine type does not match the current machine type, the
+response was for a previous machine, and can be ignored.
If the response field is CIRCPAD_RESPONSE_OK, padding was successfully
negotiated. If it is CIRCPAD_RESPONSE_ERR, the machine is torn down and we do

[torspec/master] Update Proposal #254 with latest circuit padding plans.
by nickm@torproject.org 06 Nov '18
commit 3fed83a38d9d85cab6d0437184f3b8909ca0266b
Author: Mike Perry <mikeperry-git(a)torproject.org>
Date: Mon Oct 29 19:45:58 2018 +0000
Update Proposal #254 with latest circuit padding plans.
---
proposals/254-padding-negotiation.txt | 620 ++++++++++------------------------
1 file changed, 181 insertions(+), 439 deletions(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index ca5ad14..3b2c883 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -71,331 +71,65 @@ the circuit.
3. End-to-end circuit padding
-For circuit-level padding, we need two types of additional features: the
-ability to schedule additional incoming cells at one or more fixed
-points in the future, and the ability to schedule a statistical
+For circuit-level padding, we need the ability to schedule a statistical
distribution of arbitrary padding to overlay on top of non-padding
traffic (aka "Adaptive Padding").
-In both cases, these messages will be sent from clients to middle nodes
-using the "leaky pipe" property of the 'recognized' field of RELAY
-cells, allowing padding to originate from middle nodes on a circuit in a
-way that is not detectable from the Guard node.
+The statistical mechanisms that define padding are known as padding
+machines. Padding machines can be hardcoded in Tor, specified in the
+consensus, or listed in Torrc as custom research machines.
-This same mechanism can also be used to request padding from the Guard
-node itself, to achieve link-level padding without the additional
-overhead requirements on middle nodes.
+3.1. Padding Machines
-3.1. Fixed-schedule padding message (RELAY_COMMAND_PADDING_SCHEDULE)
+Circuits can have either one or two state machines at both the origin and at a
+specified middle hop.
-The fixed schedule padding will be encoded in a
-RELAY_COMMAND_PADDING_SCHEDULE cell. It specifies a set of up to 80
-fixed time points in the future to send cells.
+Each state machine can contain up to three states ("Start", "Burst" and
+"Gap") governing their behavior. Not all states need to be used.
-XXX: 80 timers is a lot to allow every client to create. We may want to
-have something that checks this structure to ensure it actually
-schedules no more than N in practice, until we figure out how to
-optimize either libevent or timer scheduling/packet delivery. See also
-Section 4.3.
+Each state of a padding machine specifies either:
+ * A histogram describing inter-arrival cell delays; OR
+ * A parameterized distribution for inter-arrival cell delays
-The RELAY_COMMAND_PADDING_SCHEDULE body is specified in Trunnel as
-follows:
+In either case, the lower bound of the delay distribution can be specified as
+a parameter, or it can be learned by measuring the RTT of the circuit.
- struct relay_padding_schedule {
- u8 schedule_length IN [1..80];
+If the histogram is used, it has an additional special "infinity" bin that
+means "infinite delay".
- /* Number of microseconds before sending cells (cumulative) */
- u32 when_send[schedule_length];
-
- /* Number of cells to send at time point sum(when_send[0..i]) */
- u16 num_cells[schedule_length];
-
- /* Adaptivity: If 1, and server-originating cells arrive before the
- next when_send time, then decrement the next non-zero when_send
- index, so we don't send a padding cell then, too */
- u8 adaptive IN [0,1];
- };
-
-To allow both high-resolution time values, and the ability to specify
-timeout values far in the future, the time values are cumulative. In
-other words, sending a cell with when_send = [MAX_INT, MAX_INT, MAX_INT,
-0...] and num_cells = [0, 0, 100, 0...] would cause the relay to reply
-with 100 cells in 3*MAX_INT microseconds from the receipt of this cell.
-
-This scheduled padding is non-periodic. For any forms of periodic
-padding, implementations should use the RELAY_COMMAND_PADDING_ADAPTIVE
-cell from Section 3.2 instead.
-
-3.2. Adaptive Padding message (RELAY_COMMAND_PADDING_ADAPTIVE)
-
-The following message is a generalization of the Adaptive Padding
-defense specified in "Timing Attacks and Defenses"[2].
-
-The message encodes either one or two state machines, each of which can
-contain one or two histograms ("Burst" and "Gap") governing their
-behavior.
-
-The "Burst" histogram specifies the delay probabilities for sending a
-padding packet after the arrival of a non-padding data packet.
-
-The "Gap" histogram specifies the delay probabilities for sending
-another padding packet after a padding packet was just sent from this
-node. This self-triggering property of the "Gap" histogram allows the
-construction of multi-packet padding trains using a simple statistical
-distribution.
-
-Both "Gap" and "Burst" histograms each have a special "Infinity" bin,
-which means "We have decided not to send a packet".
-
-Each histogram is combined with state transition information, which
-allows a client to specify the types of incoming packets that cause the
-state machine to decide to schedule padding cells (and/or when to cease
-scheduling them).
-
-The client also maintains its own local histogram state machine(s), for
-reacting to traffic on its end.
-
-Note that our generalization of the Adaptive Padding state machine also
-gives clients full control over the state transition events, even
-allowing them to specify a single-state Burst-only state machine if
-desired. See Sections 3.2.1 and 3.2.2 for details.
-
-The histograms and the associated state machine packet layout is
-specified in Trunnel as follows:
-
- /* These constants form a bitfield to specify the types of events
- * that can cause transitions between state machine states.
- *
- * Note that SENT and RECV are relative to this endpoint. For
- * relays, SENT means packets destined towards the client and
- * RECV means packets destined towards the relay. On the client,
- * SENT means packets destined towards the relay, where as RECV
- * means packets destined towards the client.
- */
- const RELAY_PADDING_TRANSITION_EVENT_NONPADDING_RECV = 1;
- const RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT = 2;
- const RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT = 4;
- const RELAY_PADDING_TRANSITION_EVENT_PADDING_RECV = 8;
- const RELAY_PADDING_TRANSITION_EVENT_INFINITY = 16;
- const RELAY_PADDING_TRANSITION_EVENT_BINS_EMPTY = 32;
-
- /* Token Removal rules. Enum, not bitfield. */
- const RELAY_PADDING_REMOVE_NO_TOKENS = 0;
- const RELAY_PADDING_REMOVE_LOWER_TOKENS = 1;
- const RELAY_PADDING_REMOVE_HIGHER_TOKENS = 2;
- const RELAY_PADDING_REMOVE_CLOSEST_TOKENS = 3;
-
- /* This payload encodes a histogram delay distribution representing
- * the probability of sending a single RELAY_DROP cell after a
- * given delay in response to a non-padding cell.
- *
- * Payload max size: 113 bytes
- */
- struct burst_state {
- u8 histogram_len IN [2..51];
- u16 histogram[histogram_len];
- u32 start_usec;
- u16 max_sec;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to abort our scheduled packet and
- * return to waiting for another event from transition_burst_events.
- */
- u8 transition_start_events;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to remain in the burst state: Cancel the
- * pending padding packet (if any), and schedule another padding
- * packet from our histogram.
- */
- u8 transition_reschedule_events;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to transition to the Gap state. */
- u8 transition_gap_events;
-
- /* If true, remove tokens from the histogram upon padding and
- * non-padding activity. */
- u8 remove_tokens IN [0..3];
- };
-
- /* This histogram encodes a delay distribution representing the
- * probability of sending a single additional padding packet after
- * sending a padding packet that originated at this hop.
- *
- * Payload max size: 113 bytes
- */
- struct gap_state {
- u8 histogram_len IN [2..51];
- u16 histogram[histogram_len];
- u32 start_usec;
- u16 max_sec;
-
- /* This is a bitfield which specifies which direction and types
- * of traffic should cause us to transition back to the start
- * state (ie: abort scheduling packets completely). */
- u8 transition_start_events;
-
- /* This is a bitfield which specifies which direction and types
- * of traffic should cause us to transition back to the burst
- * state (and schedule a packet from the burst histogram). */
- u8 transition_burst_events;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to remain in the gap state: Cancel the
- * pending padding packet (if any), and schedule another padding
- * packet from our histogram.
- */
- u8 transition_reschedule_events;
-
- /* If true, remove tokens from the histogram upon padding and
- non-padding activity. */
- u8 remove_tokens IN [0..3];
- };
-
- /* Payload max size: 227 bytes */
- struct adaptive_padding_machine {
- /* This is a bitfield which specifies which direction and types
- * of traffic should cause us to transition to the burst
- * state (and schedule a packet from the burst histogram). */
- u8 transition_burst_events;
-
- struct burst_state burst;
- struct gap_state gap;
- };
-
- /* This is the full payload of a RELAY_COMMAND_PADDING_ADAPTIVE
- * cell.
- *
- * Payload max size: 455 bytes
- */
- struct relay_command_padding_adaptive {
- /* Technically, we could allow more than 2 state machines here,
- but only two are sure to fit. More than 2 seems excessive
- anyway. */
- u8 num_machines IN [1,2];
-
- struct adaptive_padding_machine machines[num_machines];
- };
-
-3.2.1. Histogram state machine operation
-
-Each of pair of histograms ("Burst" and "Gap") together form a state
-machine whose transitions are governed by incoming traffic and/or
-locally generated padding traffic.
-
-Each state machine has a Start state S, a Burst state B, and a Gap state
-G.
-
-The state machine starts idle (state S) until it receives a packet of a
-type that matches the bitmask in machines[i].transition_burst_events. If
-machines[i].transition_burst_events is 0, transition to the burst state
-happens immediately.
-
-This causes it to enter burst mode (state B), in which a delay t is
-sampled from the Burst histogram, and a timer is scheduled to count down
-until either another matching packet arrives, or t expires. If the
-"Infinity" time is sampled from this histogram, the machine returns to
-the lowest state with the INFINITY event bit set.
-
-If a packet that matches machines[i].burst.transition_start_events
-arrives before t expires, the machine transitions back to the Start
+The state can also provide an optional parameterized distribution that
+specifies how many total cells (or how many padding cells) can be sent on the
+circuit while the machine is in this state, before it transitions to a new
state.
-If a packet that matches machines[i].burst.transition_reschedule_events
-arrives before t expires, a new delay is sampled and the process is
-repeated again, i.e. it remains in burst mode.
-
-Otherwise, if t expires, a padding message is sent to the other end.
-
-If a packet that matches machines[i].burst.transition_gap_events
-arrives (or is sent), the machine transitions to the Gap state G.
-
-In state G, the machine samples from the Gap histogram and sends padding
-messages when the time it samples expires. If an infinite delay is
-sampled while being in state G we jump back to state B or S,
-depending upon the usage of the infinity event bitmask.
-
-If a packet arrives that matches gap.transition_start_events, the
-machine transitions back to the Start state.
-
-If a packet arrives that matches gap.transition_burst_events, the
-machine transitions back to the Burst state.
-
-If a packet arrives that matches
-machines[i].gap.transition_reschedule_events, the machine remains in G
-but schedules a new padding time from its Gap histogram.
-
-In the event that a malicious or buggy client specifies conflicting
-state transition rules with the same bits in multiple transition
-bitmasks, the transition rules of a state that specify transition to
-earlier states take priority. So burst.transition_start_events
-takes priority over burst.transition_reschedule_events, and both of
-these take priority over burst.transition_gap_events.
-
-Similarly, gap.transition_start_events takes priority over
-gap.transition_burst_events, and gap.transition_burst_events takes
-priority over gap.transition_reschedule_events.
-
-In our generalization of Adaptive Padding, either histogram may actually
-be self-scheduling (by setting the bit
-RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT in their
-transition_reschedule_events). This allows the client to create a
-single-state machine if desired.
-
-Clients are expected to maintain their own local version of the state
-machines, for reacting to their own locally generated traffic, in
-addition to sending one or more state machines to the middle relay. The
-histograms that the client uses locally will differ from the ones it
-sends to the upstream relay.
-
-On the client, the "SENT" direction means packets destined towards the
-relay, where as "RECV" means packets destined towards the client.
-However, on the relay, the "SENT" direction means packets destined
-towards the client, where as "RECV" means packets destined towards the
-relay.
-
-3.2.2. The original Adaptive Padding algorithm
-
-As we have noted, the state machines above represent a generalization of
-the original Adaptive Padding algorithm. To implement the original
-behavior, the following flags should be set in both the client and
-the relay state machines:
-
- num_machines = 1;
-
- machines[0].transition_burst_events =
- RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT;
+Each state of a padding machine can react to the following cell events:
+ * Non-padding cell received
+ * Padding cell received
+ * Non-padding cell sent
+ * Padding cell sent
- machines[0].burst.transition_reschedule_events =
- RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT;
+Additionally, padding machines emit the following internal events to themselves:
+ * Infinity bin was selected
+ * The histogram bins are empty
+ * The length count for this state was exceeded
- machines[0].burst.transition_gap_events =
- RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT;
+Each state of the padding machine specifies a set of these events that cause
+it to cancel any pending padding, and a set of events that cause it to
+transition to another state, or transition back itself.
- machines[0].burst.transition_start_events =
- RELAY_PADDING_TRANSITION_EVENT_INFINITY;
+When an event causes a transition to a state (or back to the same state), a
+delay is sampled from the histogram or delay distribution, and a padding cell is
+scheduled to be sent after that delay.
- machines[0].gap.transition_reschedule_events =
- RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT;
+If a non-padding cell is sent before the timer, the timer is cancelled and a
+new padding delay is chosen.
- machines[0].gap.transition_burst_events =
- RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT |
- RELAY_PADDING_TRANSITION_EVENT_INFINITY;
+3.1.1. Histogram Specification
-The rest of the transition fields would be 0.
-
-Adding additional transition flags will either increase or decrease the
-amount of padding sent, depending on their placement.
-
-The second machine slot is provided in the event that it proves useful
-to have separate state machines reacting to both sent and received
-traffic.
-
-3.2.3. Histogram decoding/representation
-
-Each of the histograms' fields represent a probability distribution that
-is expanded into bins representing time periods a[i]..b[i] as follows:
+If a histogram is used by a state (as opposed to a fixed parameterized
+distribution), then each of the histograms' fields represents a probability
+distribution that is expanded into bins representing time periods a[i]..b[i]
+as follows:
start_usec,max_sec,histogram_len initialized from appropriate histogram
body.
@@ -436,42 +170,100 @@ increasing bin width allows the histograms to most accurately represent
small interpacket delay (where accuracy is needed), and devote less
accuracy to larger timescales (where accuracy is not as important).
-3.2.4. Token removal and refill
+3.1.2 Histogram Token Removal
-If the remove_tokens field is set to a non-zero value for a given
-state's histogram, then whenever a padding packet is sent, the
-corresponding histogram bin's token count is decremented by one.
+Tokens can be optionally removed from histogram bins whenever a padding or
+non-padding packet is sent. With this token removal, the histogram functions
+as an overall target delay distribution for the machine while it is in that
+state.
-If a packet matching the current state's transition_reschedule_events
-bitmask arrives from the server before the chosen padding timer expires,
-then a token is removed from a non-empty bin corresponding to
-the delay since the last packet was sent, and the padding packet timer
-is re-sampled from the histogram.
+If token removal is enabled, when a padding packet is sent, a token is removed
+from the bin corresponding to the target delay. When a non-padding packet is
+sent, the actual delay from the previous packet is calculated, and the
+histogram bin corresponding to that delay is inspected. If that bin has
+tokens remaining, it is decremented.
+
+If the bin has no tokens left, the state removes a token from a different bin,
+as specified in its token removal rule. The following token removal options
+are defined:
+ * None -- Never remove any tokens
+ * Exact -- Only remove from the target bin; if it is empty, ignore it.
+ * Higher -- Remove from the next higher non-empty bin
+ * Lower -- Remove from the next lower non-empty bin
+ * Closest -- Remove from the closest non-empty bin by bin index
+ * Closest_time -- Remove from the closest non-empty bin by time value
+
+When all bins are empty in a histogram, the padding machine emits the internal
+"bins empty" event to itself.
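[Editorial note: a sketch of the bin-index token-removal rules above (ignoring the infinity bin and the by-time variant); illustrative, not Tor's code.]

```python
def remove_token(bins, target, rule):
    """Remove one token under the given rule. bins is a mutable list of
    token counts; target is the bin matching the observed delay."""
    if rule == "none":
        return
    if bins[target] > 0:
        bins[target] -= 1
        return
    if rule == "exact":
        return  # target bin empty: ignore the removal
    higher = next((i for i in range(target + 1, len(bins)) if bins[i]), None)
    lower = next((i for i in range(target - 1, -1, -1) if bins[i]), None)
    if rule == "higher":
        pick = higher
    elif rule == "lower":
        pick = lower
    else:  # "closest" by bin index
        candidates = [i for i in (lower, higher) if i is not None]
        pick = min(candidates, key=lambda i: abs(i - target), default=None)
    if pick is not None:
        bins[pick] -= 1
```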
+
+3.2. Machine Selection
+
+Clients will select which of the defined available padding machines to use
+based on the conditions that these machines specify. These conditions include:
+ * How many hops the circuit must be in order for the machine to apply
+ * If the machine requires vanguards to be enabled to apply
+ * The state the circuit must be in for machines to apply (building,
+ relay early cells remaining, opened, streams currently attached).
+ * If the circuit purpose matches a set of purposes for the machine.
+ * If the target hop of the machine supports circuit padding.
+
+Clients will only select machines whose conditions fully match given circuits.
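[Editorial note: a sketch of all-conditions-must-match selection, using hypothetical field names that mirror the condition list above; the actual condition structure in Tor differs.]

```python
from dataclasses import dataclass

@dataclass
class MachineConditions:
    """Hypothetical condition set; field names are illustrative."""
    min_hops: int
    requires_vanguards: bool
    allowed_states: frozenset
    allowed_purposes: frozenset
    target_hop_supports_padding: bool

def machine_applies(cond, circ):
    """A machine is selected only if every condition matches the
    circuit (circ is a dict of hypothetical circuit attributes)."""
    return (circ["hops"] >= cond.min_hops
            and (not cond.requires_vanguards or circ["vanguards"])
            and circ["state"] in cond.allowed_states
            and circ["purpose"] in cond.allowed_purposes
            and (not cond.target_hop_supports_padding
                 or circ["target_supports_padding"]))
```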
+
+3.3. Machine Negotiation
+
+When a machine is selected, the client uses leaky-pipe delivery to send a
+RELAY_COMMAND_PADDING_NEGOTIATE to the target hop of the machine, using the
+following trunnel relay cell payload format:
+
+ /**
+ * This command tells the relay to alter its min and max netflow
+ * timeout range values, and send padding at that rate (resuming
+ * if stopped). */
+ struct circpad_negotiate {
+ u8 version IN [0];
+ u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
+
+ /** Machine type is left unbounded because we can specify
+ * new machines in the consensus */
+ u8 machine_type;
+ };
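[Editorial note: a sketch of packing the three-byte circpad_negotiate payload shown above; the command constant values are assumptions, since the excerpt leaves them to the trunnel definitions.]

```python
import struct

CIRCPAD_COMMAND_STOP = 1   # assumed values; see the trunnel
CIRCPAD_COMMAND_START = 2  # definitions for the real constants

def encode_circpad_negotiate(command, machine_type, version=0):
    """Pack version, command, and machine_type as three bytes."""
    return struct.pack("!BBB", version, command, machine_type)
```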
-The three enums for the remove_tokens field govern if we take the token
-out of the nearest lower non-empty bin, the nearest higher non-empty
-bin, or simply the closest non-empty bin.
+Upon receipt of a RELAY_COMMAND_PADDING_NEGOTIATE cell, the middle node sends
+a RELAY_COMMAND_PADDING_NEGOTIATED with the following format:
+ /**
+ * This command tells the relay to alter its min and max netflow
+ * timeout range values, and send padding at that rate (resuming
+ * if stopped). */
+ struct circpad_negotiated {
+ u8 version IN [0];
+ u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
+ u8 response IN [CIRCPAD_RESPONSE_OK, CIRCPAD_RESPONSE_ERR];
+
+ /** Machine type is left unbounded because we can specify
+ * new machines in the consensus */
+ u8 machine_type;
+ };
-If the entire histogram becomes empty, it is then refilled to the
-original values. This refill happens prior to any state transitions due
-to RELAY_PADDING_TRANSITION_EVENT_BINS_EMPTY (but obviously does not
-prevent the transition from happening).
+If the response field is CIRCPAD_RESPONSE_OK, padding was successfully
+negotiated. If it is CIRCPAD_RESPONSE_ERR, the machine is torn down and we do
+not pad.
-3.2.5. Constructing the histograms
+4. Examples of Padding Machines
-Care must be taken when constructing the histograms themselves, since
-their non-uniform widths means that the actual underlying probability
-distribution needs to be both normalized for total number of tokens, as
-well as the non-uniform histogram bin widths.
+In the original WTF-PAD design[2], the state machines are used as follows:
-Care should also be taken with interaction with the token removal rules
-from Section 3.2.4. Obviously using a large number of tokens will cause
-token removal to have much less of an impact upon the adaptive nature of
-the padding in the face of existing traffic.
+The "Burst" histogram specifies the delay probabilities for sending a
+padding packet after the arrival of a non-padding data packet.
-Actual optimal histogram and state transition construction for different
-traffic types is expected to be a topic for further research.
+The "Gap" histogram specifies the delay probabilities for sending
+another padding packet after a padding packet was just sent from this
+node. This self-triggering property of the "Gap" histogram allows the
+construction of multi-packet padding trains using a simple statistical
+distribution.
+
+Both "Gap" and "Burst" histograms each have a special "Infinity" bin,
+which means "We have decided not to send a packet".
Intuitively, the burst state is used to detect when the line is idle
(and should therefore have few or no tokens in low histogram bins). The
@@ -481,76 +273,71 @@ stalls, or has a gap.
The gap state is used to fill in otherwise idle periods with artificial
payloads from the server (and should have many tokens in low bins, and
-possibly some also at higher bins).
+possibly some also at higher bins). In this way, the gap state either
+generates entirely fake streams of cells, or extends real streams with
+additional cells.
+
+The Adaptive Padding Early implementation[3] uses parameterized distributions
+instead of histograms, but otherwise uses the states in the same way.
+
+It should be noted that due to our generalization of these states and their
+transition possibilities, more complicated interactions are also possible. For
+example, it is possible to simulate circuit extension, so that all circuits
+appear to continue to extend up until the RELAY_EARLY cell count is depleted.
-It should be noted that due to our generalization of these states and
-their transition possibilities, more complicated interactions are also
-possible.
+It is also possible to create machines that simulate traffic on unused
+circuits, or mimic onion service activity on clients that aren't otherwise
+using onion services.
-4. Security considerations and mitigations
+5. Security considerations and mitigations
The risks from this proposal are primarily DoS/resource exhaustion, and
side channels.
-4.1. Rate limiting and accounting
+5.1. Rate limiting
-Fully client-requested padding introduces a vector for resource
-amplification attacks and general network overload due to
-overly-aggressive client implementations requesting too much padding.
-
-Current research indicates that this form of statistical padding should
-be effective at overhead rates of 50-60%. This suggests that clients
-that use more padding than this are likely to be overly aggressive in
-their behavior.
+Current research[2,3] indicates that padding should be effective against
+website traffic fingerprinting at overhead rates of 50-60%. Circuit setup
+behavior can be concealed with far less overhead.
We recommend that three consensus parameters be used in the event that
the network is being overloaded from padding to such a degree that
padding requests should be ignored:
- * CircuitPaddingMaxRatio
- - The maximum ratio of padding traffic to non-padding traffic
- (expressed as a percent) to allow on a circuit before ceasing
- to pad. Ex: 75 means 75 padding packets for every 100 non-padding
- packets.
- - Default: 120
- * CircuitPaddingLimitCount
+ * circpad_max_machine_padding_pct
+ - The maximum ratio of sent padding traffic to sent non-padding traffic
+ (expressed as a percent) to allow on a padding machine before ceasing
+ to pad. Ex: 75 means 75 padding packets for every 100
+ non-padding+padding packets. This definition is consistent with the
+ overhead values in Proposal #265.
+ * circpad_machine_allowed_cells
- The number of padding cells that must be transmitted before the
- ratio limit is applied.
- - Default: 5000
- * CircuitPaddingLimitTime
- - The time period in seconds over which to count padding cells for
- application of the ratio limit (ie: reset the limit count this
- often).
- - Default: 60
+ per-machine ratio limit is applied.
+ * circpad_max_global_padding_pct
+ - The maximum ratio of sent padding traffic to sent non-padding traffic
+ (expressed as a percent) to allow globally at a client or relay
+ before ceasing to pad. Ex: 75 means 75 padding packets for every 100
+ non-padding+padding packets. This definition is consistent with the
+ overhead values in Proposal #265.
+
+Additionally, each machine can specify its own per-machine limits for
+the allowed cell counters and padding overhead percentages.
-XXX: Should we cap padding at these rates, or fully disable it once
-they're crossed? Probably cap?
+When either global or machine limits are reached, padding is no longer
+scheduled. The machine simply becomes idle until the overhead drops below
+the threshold.
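A minimal sketch of how such a percentage limit might be checked (the function name and the grace-allowance behavior are illustrative assumptions, not tor's actual API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: decide whether a padding overhead limit has been
 * exceeded. max_pct is a percentage of total (padding + non-padding)
 * cells, matching the Proposal #265 overhead definition; allowed_cells
 * is the allowance below which the ratio limit is not yet applied. */
static bool
padding_limit_exceeded(uint64_t padding_cells, uint64_t total_cells,
                       uint64_t allowed_cells, unsigned max_pct)
{
  if (padding_cells <= allowed_cells)
    return false; /* below the allowed-cells threshold: no limit yet */
  /* padding_cells / total_cells > max_pct / 100, without division */
  return padding_cells * 100 > (uint64_t)max_pct * total_cells;
}
```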
+
+5.2. Overhead accounting
In order to monitor the quantity of padding to decide if we should alter
these limits in the consensus, every node will publish the following
values in a padding-counts line in its extra-info descriptor:
- * write-drop-multihop
- - The number of RELAY_DROP cells sent by this relay to a next hop
- that is listed in the consensus.
- * write-drop-onehop
- - The number of RELAY_DROP cells sent by this relay to a next hop
- that is not listed in the consensus.
- * write-pad
- - The number of CELL_PADDING cells sent by this relay.
- * write-total
- - The total number of cells sent by this relay.
- * read-drop-multihop
- - The number of RELAY_DROP cells read by this relay from a hop
- that is listed in the consensus.
- * read-drop-onehop
- - The number of RELAY_DROP cells read by this relay from a hop
- that is not listed in the consensus.
- * read-pad
- - The number of CELL_PADDING cells read by this relay.
- * read-total
- - The total number of cells read by this relay.
+ * read_drop_cell_count
+ - The number of RELAY_DROP cells read by this relay.
+ * write_drop_cell_count
+ - The number of RELAY_DROP cells sent by this relay.
Each of these counters will be rounded to the nearest 10,000 cells. This
rounding parameter will also be listed in the extra-info descriptor line, in
@@ -560,65 +347,20 @@ In the future, we may decide to introduce Laplace Noise in a similar
manner to the hidden service statistics, to further obscure padding
quantities.
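The rounding step can be sketched as follows (illustrative helper, not tor's actual code):

```c
#include <stdint.h>

/* Illustrative sketch: round a padding cell counter to the nearest
 * multiple of the rounding granularity (10,000 cells by default)
 * before publishing it in the extra-info descriptor. */
static uint64_t
round_cell_count(uint64_t count, uint64_t granularity)
{
  return ((count + granularity / 2) / granularity) * granularity;
}
```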
-4.2. Malicious state machines
-
-The state machine capabilities of RELAY_COMMAND_PADDING_ADAPTIVE are
-very flexible, and as a result may specify conflicting or
-non-deterministic state transitions.
-
-We believe that the rules in Section 3.2.1 for prioritizing transitions
-towards lower states remove any possibility of non-deterministic
-transitions.
-
-However, because of self-triggering property that allows the state
-machines to schedule more padding packets after sending their own
-locally generated padding packets, care must be taken with the
-interaction with the rate limiting rules in Section 4.1. If the limits
-in section 4.1 are exceeded, the state machines should stop, rather than
-continually poll themselves trying to transmit packets and being blocked
-by the rate limiter at another layer.
-
-4.3. Libevent timer exhaustion
-
-As mentioned in section 3.1, scheduled padding may create an excessive
-number of libevent timers. Care should be taken in the implementation to
-devise a way to prevent clients from sending padding requests
-specifically designed to impact the ability of relays to function by
-causing too many timers to be scheduled at once.
-
-XXX: Can we suggest any specifics here? I can imagine a few ways of
-lazily scheduling timers only when they are close to their expiry time,
-and other ways of minimizing the number of pending timer callbacks at a
-given time, but I am not sure which would be best for libevent.
-
-4.4. Side channels
+5.3. Side channels
In order to prevent relays from introducing side channels by requesting
-padding from clients, all of these commands should only be valid in the
-outgoing (from the client/OP) direction.
-
-Clients should perform accounting on the amount of padding that they
-receive, and if it exceeds the amount that they have requested, they
-alert the user of a potentially misbehaving node, and/or close the
-circuit.
-
-Similarly, if RELAY_DROP cells arrive from the last hop of a circuit,
-rather than from the expected interior node, clients should alert the
-user of the possibility of that circuit endpoint introducing a
-side-channel attack, and/or close the circuit.
-
-4.5. Memory exhaustion
-
-Because interior nodes do not have information on the current circuits
-SENDME windows, it is possible for malicious clients to consume the
-buffers of relays by specifying padding, and then not reading from the
-associated circuits.
+padding from clients, all of the padding negotiation commands are only
+valid in the outgoing (from the client/OP) direction.
-XXX: Tor already had a few flow-control related DoS's in the past[3]. Is
-that defense sufficient here without any mods? It seems like it may be!
+Similarly, to prevent relays from sending malicious padding from arbitrary
+circuit positions, if RELAY_DROP cells arrive from a hop other than that
+with which padding was negotiated, this cell is counted as invalid for
+purposes of CIRC_BW control port fields, allowing the vanguards addon to
+close the circuit upon detecting this activity.
-------------------
1. https://gitweb.torproject.org/torspec.git/tree/proposals/251-netflow-paddin…
-2. http://freehaven.net/anonbib/cache/ShWa-Timing06.pdf
-3. https://blog.torproject.org/blog/new-tor-denial-service-attacks-and-defenses
+2. https://www.cs.kau.se/pulls/hot/thebasketcase-wtfpad/
+3. https://www.cs.kau.se/pulls/hot/thebasketcase-ape/
[torspec/master] fixup! Update Proposal #254 with latest circuit padding plans.
by nickm@torproject.org 06 Nov '18
commit a697137e91ed319eae0ef8155c919249b1b75e86
Author: Mike Perry <mikeperry-git(a)torproject.org>
Date: Mon Oct 29 21:20:16 2018 +0000
fixup! Update Proposal #254 with latest circuit padding plans.
Update padding consensus param limits.
---
proposals/254-padding-negotiation.txt | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index 3b2c883..94d8287 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -305,21 +305,15 @@ We recommend that three consensus parameters be used in the event that
the network is being overloaded from padding to such a degree that
padding requests should be ignored:
- * circpad_max_machine_padding_pct
- - The maximum ratio of sent padding traffic to sent non-padding traffic
- (expressed as a percent) to allow on a padding machine before ceasing
- to pad. Ex: 75 means 75 padding packets for every 100
- non-padding+padding packets. This definition is consistent with the
- overhead values in Proposal #265.
- * circpad_machine_allowed_cells
+ * circpad_global_max_padding_pct
+ - The maximum percent of sent padding traffic out of total traffic
+ to allow in a tor process before ceasing to pad. Ex: 75 means
+ 75 padding packets for every 100 non-padding+padding packets.
+ This definition is consistent with the overhead values in Proposal
+ #265, though it does not take node position into account.
+ * circpad_global_allowed_cells
- The number of padding cells that must be transmitted before the
- per-machine ratio limit is applied.
- * circpad_max_global_padding_pct
- - The maximum ratio of sent padding traffic to sent non-padding traffic
- (expressed as a percent) to allow globally at a client or relay
- before ceasing to pad. Ex: 75 means 75 padding packets for every 100
- non-padding+padding packets. This definition is consistent with the
- overhead values in Proposal #265.
+ global ratio limit is applied.
Additionally, each machine can specify its own per-machine limits for
the allowed cell counters and padding overhead percentages.
commit ab37543cfb16219f6632ce13691bc7c395300645
Author: George Kadianakis <desnacked(a)riseup.net>
Date: Tue Oct 30 18:00:03 2018 +0200
Clarify prop#254 in some parts.
Also kill some trailing whitespace.
---
proposals/254-padding-negotiation.txt | 34 +++++++++++++++++++++++++++-------
1 file changed, 27 insertions(+), 7 deletions(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index 94d8287..b9ecc05 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -63,6 +63,12 @@ parameters:
u16 ito_high_ms;
};
+After the above cell is received, the guard should use the 'ito_low_ms' and
+'ito_high_ms' values as the minimum and maximum values (respectively) for
+inactivity before it decides to pad the channel. The actual timeout value is
+randomly chosen between those two values through an appropriate probability
+distribution (see proposal 251 for the netflow padding protocol).
+
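The timeout selection described above can be sketched as follows (a uniform draw is shown for simplicity; proposal 251 specifies the actual distribution, and `rand()` is a placeholder for a secure RNG):

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch: choose a netflow padding timeout between the
 * negotiated 'ito_low_ms' and 'ito_high_ms' bounds. Uniform selection
 * is an assumption for illustration only. */
static uint16_t
pick_netflow_timeout_ms(uint16_t ito_low_ms, uint16_t ito_high_ms)
{
  if (ito_high_ms <= ito_low_ms)
    return ito_low_ms;
  return (uint16_t)(ito_low_ms + rand() % (ito_high_ms - ito_low_ms + 1));
}
```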
More complicated forms of link-level padding can still be specified
using the primitives in Section 3, by using "leaky pipe" topology to
send the RELAY commands to the Guard node instead of to later nodes in
@@ -89,10 +95,11 @@ Each state machine can contain up to three states ("Start", "Burst" and
Each state of a padding machine specifies either:
* A histogram describing inter-arrival cell delays; OR
- * A parameterized distribution for inter-arrival cell delays
+ * A parameterized delay probability distribution for inter-arrival cell delays
-In either case, the lower bound of the delay distribution can be specified as
-a parameter, or it can be learned by measuring the RTT of the circuit.
+In either case, the lower bound of the delay probability distribution can be
+specified as a parameter, or it can be learned by measuring the RTT of the
+circuit.
If the histogram is used, it has an additional special "infinity" bin that
means "infinite delay".
@@ -196,7 +203,7 @@ are defined:
When all bins are empty in a histogram, the padding machine emits the internal
"bins empty" event to itself.
-3.2. Machine Selection
+3.2. State Machine Selection
Clients will select which of the defined available padding machines to use
based on the conditions that these machines specify. These conditions include:
@@ -209,7 +216,16 @@ based on the conditions that these machines specify. These conditions include:
Clients will only select machines whose conditions fully match given circuits.
-3.3. Machine Neogitation
+A machine is represented by a positive number that can be thought of as a "menu
+option" through the list of padding machines. The currently supported padding
+state machines are:
+
+ [1]: CIRCPAD_MACHINE_CIRC_SETUP
+
+ A padding machine that obscures the initial circuit setup in an
+ attempt to hide onion services.
+
+3.3. Machine Negotiation
When a machine is selected, the client uses leaky-pipe delivery to send a
RELAY_COMMAND_PADDING_NEGOTIATE to the target hop of the machine, using the
@@ -222,7 +238,7 @@ following trunnel relay cell payload format:
struct circpad_negotiate {
u8 version IN [0];
u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
-
+
/** Machine type is left unbounded because we can specify
* new machines in the consensus */
u8 machine_type;
@@ -230,6 +246,7 @@ following trunnel relay cell payload format:
Upon receipt of a RELAY_COMMAND_PADDING_NEGOTIATE cell, the middle node sends
a RELAY_COMMAND_PADDING_NEGOTIATED with the following format:
+
/**
* This command tells the relay to alter its min and max netflow
* timeout range values, and send padding at that rate (resuming
@@ -238,12 +255,15 @@ a RELAY_COMMAND_PADDING_NEGOTIATED with the following format:
u8 version IN [0];
u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
u8 response IN [CIRCPAD_RESPONSE_OK, CIRCPAD_RESPONSE_ERR];
-
+
/** Machine type is left unbounded because we can specify
* new machines in the consensus */
u8 machine_type;
};
+The 'machine_type' field should be the same as the one from the
+PADDING_NEGOTIATE cell.
+
If the response field is CIRCPAD_RESPONSE_OK, padding was successfully
negotiated. If it is CIRCPAD_RESPONSE_ERR, the machine is torn down and we do
not pad.
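Client-side handling of the NEGOTIATED reply might look like the following sketch (the struct layout follows the trunnel definition above, but the numeric values of the response constants and the function itself are assumptions, not tor's generated code):

```c
#include <stdint.h>

/* Values are assumptions for illustration; the real constants live in
 * the trunnel-generated code. */
enum { CIRCPAD_RESPONSE_OK = 1, CIRCPAD_RESPONSE_ERR = 2 };

typedef struct {
  uint8_t version;
  uint8_t command;
  uint8_t response;
  uint8_t machine_type;
} circpad_negotiated_t;

/* Illustrative sketch: returns 0 if padding was successfully negotiated
 * for the machine we asked for, -1 if the machine must be torn down. */
static int
handle_padding_negotiated(const circpad_negotiated_t *cell,
                          uint8_t requested_machine)
{
  if (cell->version != 0)
    return -1; /* unknown version */
  if (cell->machine_type != requested_machine)
    return -1; /* reply is for a different machine */
  if (cell->response != CIRCPAD_RESPONSE_OK)
    return -1; /* negotiation failed: do not pad */
  return 0;
}
```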
commit a66d8a650f6ab540fb6b63d09dc737e59eefb67a
Author: Mike Perry <mikeperry-git(a)torproject.org>
Date: Tue Nov 6 01:20:32 2018 +0000
Prop #254: Use range_usec instead of range_sec.
---
proposals/254-padding-negotiation.txt | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index f166d5f..e569dcc 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -156,13 +156,12 @@ The first bin of the histogram (bin 0) has 0 width, with a delay value of
start_usec+rtt_estimate (from the machine definition, and rtt estimate above).
The remaining bins are exponentially spaced, starting at this offset and
-covering the range of the histogram, which is range_sec*USEC_PER_SEC.
+covering the range of the histogram, which is range_usec.
-The intermediate bins thus divide the timespan range_sec*USEC_PER_SEC with
-offset start_usec+rtt_estimate, so that smaller bin indexes represent narrower
-time ranges, doubling up until the last bin. The last bin before the "infinity
-bin" thus covers [start_usec+rtt_estimate+range_sec*USEC_PER_SEC/2,
-CIRCPAD_DELAY_INFINITE).
+The intermediate bins thus divide the timespan range_usec with offset
+start_usec+rtt_estimate, so that smaller bin indexes represent narrower time
+ranges, doubling up until the last bin. The last bin before the "infinity bin"
+thus covers [start_usec+rtt_estimate+range_usec/2, CIRCPAD_DELAY_INFINITE).
This exponentially increasing bin width allows the histograms to most
accurately represent small interpacket delay (where accuracy is needed), and
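The exponential spacing can be sketched as a function mapping a bin index to its upper edge (illustrative only; the bin-indexing convention is an assumption, not tor's circpad implementation):

```c
#include <stdint.h>

/* Illustrative sketch: upper edge (in usec) of histogram bin `bin`,
 * where bins 0..num_bins-1 are the non-infinity bins. Bin 0 has zero
 * width at start_usec+rtt_estimate; each later bin doubles in width,
 * so the last bin's upper edge is start_usec+rtt_estimate+range_usec
 * and its lower edge is start_usec+rtt_estimate+range_usec/2. */
static uint64_t
bin_upper_edge_usec(uint64_t start_usec, uint64_t rtt_estimate,
                    uint64_t range_usec, int bin, int num_bins)
{
  uint64_t base = start_usec + rtt_estimate;
  if (bin <= 0)
    return base; /* bin 0 is zero-width */
  /* halve the range once for each bin below the topmost one */
  return base + (range_usec >> (num_bins - 1 - bin));
}
```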
commit 9d8c044057807abfd9407d2b23acf6b95b142c1d
Merge: 14881dc 98e6c66
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Tue Nov 6 15:25:52 2018 -0500
Merge remote-tracking branch 'tor-github/pr/41'
proposals/254-padding-negotiation.txt | 717 ++++++++++++----------------------
1 file changed, 242 insertions(+), 475 deletions(-)
[torspec/master] Prop #254: Clarify special cases for bin 0 and inf bin-1.
by nickm@torproject.org 06 Nov '18
commit 470bde64e6d1161b0bcf0b266aa67ecbdd016774
Author: Mike Perry <mikeperry-git(a)torproject.org>
Date: Tue Nov 6 01:23:10 2018 +0000
Prop #254: Clarify special cases for bin 0 and inf bin-1.
---
proposals/254-padding-negotiation.txt | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index e569dcc..8cad35d 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -161,7 +161,8 @@ covering the range of the histogram, which is range_usec.
The intermediate bins thus divide the timespan range_usec with offset
start_usec+rtt_estimate, so that smaller bin indexes represent narrower time
ranges, doubling up until the last bin. The last bin before the "infinity bin"
-thus covers [start_usec+rtt_estimate+range_usec/2, CIRCPAD_DELAY_INFINITE).
+thus covers [start_usec+rtt_estimate+range_usec/2,
+start_usec+rtt_estimate+range_usec).
This exponentially increasing bin width allows the histograms to most
accurately represent small interpacket delay (where accuracy is needed), and
@@ -203,6 +204,12 @@ are defined:
When all bins are empty in a histogram, the padding machine emits the internal
"bins empty" event to itself.
+Bin 0 and the bin before the infinity bin both have special rules for purposes
+of token removal. While removing tokens, all values less than bin 0 are
+treated as part of bin 0, and all values greater than
+start_usec+rtt_estimate+range_usec are treated as part of the bin before the
+infinity bin.
+
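The token-removal special cases above can be sketched as a clamping function (illustrative; the bin-indexing convention matches the exponential spacing described earlier and is an assumption, not tor's actual code):

```c
#include <stdint.h>

/* Illustrative sketch: map an observed inter-cell delay to the bin a
 * token should be removed from. Delays at or below bin 0's edge count
 * as bin 0, and delays past the histogram range count as the last bin
 * before the infinity bin. base_usec is start_usec+rtt_estimate. */
static int
bin_for_token_removal(uint64_t delay_usec, uint64_t base_usec,
                      uint64_t range_usec, int num_bins /* non-inf bins */)
{
  if (delay_usec <= base_usec)
    return 0; /* at or below bin 0: treat as bin 0 */
  if (delay_usec >= base_usec + range_usec)
    return num_bins - 1; /* past the range: last bin before infinity */
  for (int i = 1; i < num_bins; i++) {
    if (delay_usec < base_usec + (range_usec >> (num_bins - 1 - i)))
      return i; /* first bin whose upper edge exceeds the delay */
  }
  return num_bins - 1; /* unreachable given the clamp above */
}
```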
3.2. State Machine Selection
Clients will select which of the defined available padding machines to use
commit 98e6c6637424fd1a887550a7336f193f6b84d50a
Author: Mike Perry <mikeperry-git(a)torproject.org>
Date: Tue Nov 6 01:25:45 2018 +0000
Prop #254: The infinity bin is also special.
---
proposals/254-padding-negotiation.txt | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index 8cad35d..19ab6ce 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -201,14 +201,15 @@ are defined:
* Closest -- Remove from the closest non-empty bin by index
* Closest_time -- Remove from the closest non-empty bin by index, by time
-When all bins are empty in a histogram, the padding machine emits the internal
-"bins empty" event to itself.
+When all bins except the infinity bin are empty in a histogram, the padding
+machine emits the internal "bins empty" event to itself.
Bin 0 and the bin before the infinity bin both have special rules for purposes
of token removal. While removing tokens, all values less than bin 0 are
treated as part of bin 0, and all values greater than
start_usec+rtt_estimate+range_usec are treated as part of the bin before the
-infinity bin.
+infinity bin. Tokens are not removed from the infinity bin when non-padding is
+sent. (They are only removed when an "infinite" delay is chosen).
3.2. State Machine Selection
commit 0d6d3e1f265609e8e74bf970a5d578300c465617
Author: Alex Xu (Hello71) <alex_y_xu(a)yahoo.ca>
Date: Thu Oct 18 19:54:49 2018 -0400
Notify systemd of ShutdownWaitLength
---
changes/ticket28113 | 3 +++
src/or/hibernate.c | 20 ++++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/changes/ticket28113 b/changes/ticket28113
new file mode 100644
index 000000000..2585514b8
--- /dev/null
+++ b/changes/ticket28113
@@ -0,0 +1,3 @@
+ o Minor bugfixes (relay shutdown, systemd):
+ - Notify systemd of ShutdownWaitLength so it can be set to longer than
+ systemd's TimeoutStopSec. Fixes bug 28113; bugfix on 0.2.6.2-alpha.
diff --git a/src/or/hibernate.c b/src/or/hibernate.c
index e3c80b5f1..a59d52f3d 100644
--- a/src/or/hibernate.c
+++ b/src/or/hibernate.c
@@ -837,6 +837,26 @@ hibernate_begin(hibernate_state_t new_state, time_t now)
"connections, and will shut down in %d seconds. Interrupt "
"again to exit now.", options->ShutdownWaitLength);
shutdown_time = time(NULL) + options->ShutdownWaitLength;
+#ifdef HAVE_SYSTEMD
+ /* tell systemd that we may need more than the default 90 seconds to shut
+ * down so they don't kill us. add some extra time to actually finish
+ * shutting down, otherwise systemd will kill us immediately after the
+ * EXTEND_TIMEOUT_USEC expires. this is an *upper* limit; tor will probably
+ * only take one or two more seconds, but assume that maybe we got swapped
+ * out and it takes a little while longer.
+ *
+ * as of writing, this is a no-op with all-defaults: ShutdownWaitLength is
+ * 30 seconds, so this will extend the timeout to 60 seconds.
+ * default systemd DefaultTimeoutStopSec is 90 seconds, so systemd will
+ * wait (up to) 90 seconds anyways.
+ *
+ * 2^31 usec = ~2147 sec = ~35 min. probably nobody will actually set
+ * ShutdownWaitLength to more than that, but use a longer type so we don't
+ * need to think about UB on overflow
+ */
+ sd_notifyf(0, "EXTEND_TIMEOUT_USEC=%" PRIu64,
+ ((uint64_t)(options->ShutdownWaitLength) + 30) * TOR_USEC_PER_SEC);
+#endif
} else { /* soft limit reached */
hibernate_end_time = interval_end_time;
}
[tor/maint-0.3.5] Merge remote-tracking branch 'tor-github/pr/474' into maint-0.3.5
by nickm@torproject.org 06 Nov '18
commit c60f3ea6077451facf2335b7a7c4bc9eaf13c038
Merge: 8a5590eba bd0e38dcf
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Tue Nov 6 15:21:45 2018 -0500
Merge remote-tracking branch 'tor-github/pr/474' into maint-0.3.5
changes/ticket28113 | 5 +++++
contrib/dist/tor.service.in | 2 +-
src/feature/hibernate/hibernate.c | 20 ++++++++++++++++++++
3 files changed, 26 insertions(+), 1 deletion(-)
diff --cc src/feature/hibernate/hibernate.c
index 02b05ca3a,000000000..4c46c4fe2
mode 100644,000000..100644
--- a/src/feature/hibernate/hibernate.c
+++ b/src/feature/hibernate/hibernate.c
@@@ -1,1235 -1,0 +1,1255 @@@
+/* Copyright (c) 2004-2006, Roger Dingledine, Nick Mathewson.
+ * Copyright (c) 2007-2018, The Tor Project, Inc. */
+/* See LICENSE for licensing information */
+
+/**
+ * \file hibernate.c
+ * \brief Functions to close listeners, stop allowing new circuits,
+ * etc in preparation for closing down or going dormant; and to track
+ * bandwidth and time intervals to know when to hibernate and when to
+ * stop hibernating.
+ *
+ * Ordinarily a Tor relay is "Live".
+ *
+ * A live relay can stop accepting connections for one of two reasons: either
+ * it is trying to conserve bandwidth because of bandwidth accounting rules
+ * ("soft hibernation"), or it is about to shut down ("exiting").
+ **/
+
+/*
+hibernating, phase 1:
+ - send destroy in response to create cells
+ - send end (policy failed) in response to begin cells
+ - close an OR conn when it has no circuits
+
+hibernating, phase 2:
+ (entered when bandwidth hard limit reached)
+ - close all OR/AP/exit conns)
+*/
+
+#define HIBERNATE_PRIVATE
+#include "core/or/or.h"
+#include "core/or/channel.h"
+#include "core/or/channeltls.h"
+#include "app/config/config.h"
+#include "core/mainloop/connection.h"
+#include "core/or/connection_edge.h"
+#include "core/or/connection_or.h"
+#include "feature/control/control.h"
+#include "lib/crypt_ops/crypto_rand.h"
+#include "feature/hibernate/hibernate.h"
+#include "core/mainloop/mainloop.h"
+#include "feature/relay/router.h"
+#include "app/config/statefile.h"
+#include "lib/evloop/compat_libevent.h"
+
+#include "core/or/or_connection_st.h"
+#include "app/config/or_state_st.h"
+
+#ifdef HAVE_UNISTD_H
+#include <unistd.h>
+#endif
+
+/** Are we currently awake, asleep, running out of bandwidth, or shutting
+ * down? */
+static hibernate_state_t hibernate_state = HIBERNATE_STATE_INITIAL;
+/** If are hibernating, when do we plan to wake up? Set to 0 if we
+ * aren't hibernating. */
+static time_t hibernate_end_time = 0;
+/** If we are shutting down, when do we plan finally exit? Set to 0 if
+ * we aren't shutting down. */
+static time_t shutdown_time = 0;
+
+/** A timed event that we'll use when it's time to wake up from
+ * hibernation. */
+static mainloop_event_t *wakeup_event = NULL;
+
+/** Possible accounting periods. */
+typedef enum {
+ UNIT_MONTH=1, UNIT_WEEK=2, UNIT_DAY=3,
+} time_unit_t;
+
+/*
+ * @file hibernate.c
+ *
+ * <h4>Accounting</h4>
+ * Accounting is designed to ensure that no more than N bytes are sent in
+ * either direction over a given interval (currently, one month, one week, or
+ * one day) We could
+ * try to do this by choking our bandwidth to a trickle, but that
+ * would make our streams useless. Instead, we estimate what our
+ * bandwidth usage will be, and guess how long we'll be able to
+ * provide that much bandwidth before hitting our limit. We then
+ * choose a random time within the accounting interval to come up (so
+ * that we don't get 50 Tors running on the 1st of the month and none
+ * on the 30th).
+ *
+ * Each interval runs as follows:
+ *
+ * <ol>
+ * <li>We guess our bandwidth usage, based on how much we used
+ * last time. We choose a "wakeup time" within the interval to come up.
+ * <li>Until the chosen wakeup time, we hibernate.
+ * <li> We come up at the wakeup time, and provide bandwidth until we are
+ * "very close" to running out.
+ * <li> Then we go into low-bandwidth mode, and stop accepting new
+ * connections, but provide bandwidth until we run out.
+ * <li> Then we hibernate until the end of the interval.
+ *
+ * If the interval ends before we run out of bandwidth, we go back to
+ * step one.
+ *
+ * Accounting is controlled by the AccountingMax, AccountingRule, and
+ * AccountingStart options.
+ */
+
+/** How many bytes have we read in this accounting interval? */
+static uint64_t n_bytes_read_in_interval = 0;
+/** How many bytes have we written in this accounting interval? */
+static uint64_t n_bytes_written_in_interval = 0;
+/** How many seconds have we been running this interval? */
+static uint32_t n_seconds_active_in_interval = 0;
+/** How many seconds were we active in this interval before we hit our soft
+ * limit? */
+static int n_seconds_to_hit_soft_limit = 0;
+/** When in this interval was the soft limit hit. */
+static time_t soft_limit_hit_at = 0;
+/** How many bytes had we read/written when we hit the soft limit? */
+static uint64_t n_bytes_at_soft_limit = 0;
+/** When did this accounting interval start? */
+static time_t interval_start_time = 0;
+/** When will this accounting interval end? */
+static time_t interval_end_time = 0;
+/** How far into the accounting interval should we hibernate? */
+static time_t interval_wakeup_time = 0;
+/** How much bandwidth do we 'expect' to use per minute? (0 if we have no
+ * info from the last period.) */
+static uint64_t expected_bandwidth_usage = 0;
+/** What unit are we using for our accounting? */
+static time_unit_t cfg_unit = UNIT_MONTH;
+
+/** How many days,hours,minutes into each unit does our accounting interval
+ * start? */
+/** @{ */
+static int cfg_start_day = 0,
+ cfg_start_hour = 0,
+ cfg_start_min = 0;
+/** @} */
+
+static const char *hibernate_state_to_string(hibernate_state_t state);
+static void reset_accounting(time_t now);
+static int read_bandwidth_usage(void);
+static time_t start_of_accounting_period_after(time_t now);
+static time_t start_of_accounting_period_containing(time_t now);
+static void accounting_set_wakeup_time(void);
+static void on_hibernate_state_change(hibernate_state_t prev_state);
+static void hibernate_schedule_wakeup_event(time_t now, time_t end_time);
+static void wakeup_event_callback(mainloop_event_t *ev, void *data);
+
+/**
+ * Return the human-readable name for the hibernation state <b>state</b>
+ */
+static const char *
+hibernate_state_to_string(hibernate_state_t state)
+{
+ static char buf[64];
+ switch (state) {
+ case HIBERNATE_STATE_EXITING: return "EXITING";
+ case HIBERNATE_STATE_LOWBANDWIDTH: return "SOFT";
+ case HIBERNATE_STATE_DORMANT: return "HARD";
+ case HIBERNATE_STATE_INITIAL:
+ case HIBERNATE_STATE_LIVE:
+ return "AWAKE";
+ default:
+ log_warn(LD_BUG, "unknown hibernate state %d", state);
+ tor_snprintf(buf, sizeof(buf), "unknown [%d]", state);
+ return buf;
+ }
+}
+
+/* ************
+ * Functions for bandwidth accounting.
+ * ************/
+
+/** Configure accounting start/end time settings based on
+ * options->AccountingStart. Return 0 on success, -1 on failure. If
+ * <b>validate_only</b> is true, do not change the current settings. */
+int
+accounting_parse_options(const or_options_t *options, int validate_only)
+{
+ time_unit_t unit;
+ int ok, idx;
+ long d,h,m;
+ smartlist_t *items;
+ const char *v = options->AccountingStart;
+ const char *s;
+ char *cp;
+
+ if (!v) {
+ if (!validate_only) {
+ cfg_unit = UNIT_MONTH;
+ cfg_start_day = 1;
+ cfg_start_hour = 0;
+ cfg_start_min = 0;
+ }
+ return 0;
+ }
+
+ items = smartlist_new();
+ smartlist_split_string(items, v, NULL,
+ SPLIT_SKIP_SPACE|SPLIT_IGNORE_BLANK,0);
+ if (smartlist_len(items)<2) {
+ log_warn(LD_CONFIG, "Too few arguments to AccountingStart");
+ goto err;
+ }
+ s = smartlist_get(items,0);
+ if (0==strcasecmp(s, "month")) {
+ unit = UNIT_MONTH;
+ } else if (0==strcasecmp(s, "week")) {
+ unit = UNIT_WEEK;
+ } else if (0==strcasecmp(s, "day")) {
+ unit = UNIT_DAY;
+ } else {
+ log_warn(LD_CONFIG,
+ "Unrecognized accounting unit '%s': only 'month', 'week',"
+ " and 'day' are supported.", s);
+ goto err;
+ }
+
+ switch (unit) {
+ case UNIT_WEEK:
+ d = tor_parse_long(smartlist_get(items,1), 10, 1, 7, &ok, NULL);
+ if (!ok) {
+ log_warn(LD_CONFIG, "Weekly accounting must begin on a day between "
+ "1 (Monday) and 7 (Sunday)");
+ goto err;
+ }
+ break;
+ case UNIT_MONTH:
+ d = tor_parse_long(smartlist_get(items,1), 10, 1, 28, &ok, NULL);
+ if (!ok) {
+ log_warn(LD_CONFIG, "Monthly accounting must begin on a day between "
+ "1 and 28");
+ goto err;
+ }
+ break;
+ case UNIT_DAY:
+ d = 0;
+ break;
+ /* Coverity dislikes unreachable default cases; some compilers warn on
+ * switch statements missing a case. Tell Coverity not to worry. */
+ /* coverity[dead_error_begin] */
+ default:
+ tor_assert(0);
+ }
+
+ idx = unit==UNIT_DAY?1:2;
+ if (smartlist_len(items) != (idx+1)) {
+ log_warn(LD_CONFIG,"Accounting unit '%s' requires %d argument%s.",
+ s, idx, (idx>1)?"s":"");
+ goto err;
+ }
+ s = smartlist_get(items, idx);
+ h = tor_parse_long(s, 10, 0, 23, &ok, &cp);
+ if (!ok) {
+ log_warn(LD_CONFIG,"Accounting start time not parseable: bad hour.");
+ goto err;
+ }
+ if (!cp || *cp!=':') {
+ log_warn(LD_CONFIG,
+ "Accounting start time not parseable: not in HH:MM format");
+ goto err;
+ }
+ m = tor_parse_long(cp+1, 10, 0, 59, &ok, &cp);
+ if (!ok) {
+ log_warn(LD_CONFIG, "Accounting start time not parseable: bad minute");
+ goto err;
+ }
+ if (!cp || *cp!='\0') {
+ log_warn(LD_CONFIG,
+ "Accounting start time not parseable: not in HH:MM format");
+ goto err;
+ }
+
+ if (!validate_only) {
+ cfg_unit = unit;
+ cfg_start_day = (int)d;
+ cfg_start_hour = (int)h;
+ cfg_start_min = (int)m;
+ }
+ SMARTLIST_FOREACH(items, char *, item, tor_free(item));
+ smartlist_free(items);
+ return 0;
+ err:
+ SMARTLIST_FOREACH(items, char *, item, tor_free(item));
+ smartlist_free(items);
+ return -1;
+}
+
+/** If we want to manage the accounting system and potentially
+ * hibernate, return 1, else return 0.
+ */
+MOCK_IMPL(int,
+accounting_is_enabled,(const or_options_t *options))
+{
+ if (options->AccountingMax)
+ return 1;
+ return 0;
+}
+
+/** If accounting is enabled, return how long (in seconds) this
+ * interval lasts. */
+int
+accounting_get_interval_length(void)
+{
+ return (int)(interval_end_time - interval_start_time);
+}
+
+/** Return the time at which the current accounting interval will end. */
+MOCK_IMPL(time_t,
+accounting_get_end_time,(void))
+{
+ return interval_end_time;
+}
+
+/** Called from connection.c to tell us that <b>seconds</b> seconds have
+ * passed, <b>n_read</b> bytes have been read, and <b>n_written</b>
+ * bytes have been written. */
+void
+accounting_add_bytes(size_t n_read, size_t n_written, int seconds)
+{
+ n_bytes_read_in_interval += n_read;
+ n_bytes_written_in_interval += n_written;
+ /* If we haven't been called in 10 seconds, we're probably jumping
+ * around in time. */
+ n_seconds_active_in_interval += (seconds < 10) ? seconds : 0;
+}
+
+/** If get_end, return the end of the accounting period that contains
+ * the time <b>now</b>. Else, return the start of the accounting
+ * period that contains the time <b>now</b>. */
+static time_t
+edge_of_accounting_period_containing(time_t now, int get_end)
+{
+ int before;
+ struct tm tm;
+ tor_localtime_r(&now, &tm);
+
+ /* Set 'before' to true iff the current time is before the hh:mm
+ * changeover time for today. */
+ before = tm.tm_hour < cfg_start_hour ||
+ (tm.tm_hour == cfg_start_hour && tm.tm_min < cfg_start_min);
+
+ /* Dispatch by unit. First, find the start day of the given period;
+ * then, if get_end is true, increment to the end day. */
+ switch (cfg_unit)
+ {
+ case UNIT_MONTH: {
+ /* If this is before the Nth, we want the Nth of last month. */
+ if (tm.tm_mday < cfg_start_day ||
+ (tm.tm_mday == cfg_start_day && before)) {
+ --tm.tm_mon;
+ }
+ /* Otherwise, the month is correct. */
+ tm.tm_mday = cfg_start_day;
+ if (get_end)
+ ++tm.tm_mon;
+ break;
+ }
+ case UNIT_WEEK: {
+ /* What is the 'target' day of the week in struct tm format? (We
+ say Sunday==7; struct tm says Sunday==0.) */
+ int wday = cfg_start_day % 7;
+ /* How many days do we subtract from today to get to the right day? */
+ int delta = (7+tm.tm_wday-wday)%7;
+ /* If we are on the right day, but the changeover hasn't happened yet,
+ * then subtract a whole week. */
+ if (delta == 0 && before)
+ delta = 7;
+ tm.tm_mday -= delta;
+ if (get_end)
+ tm.tm_mday += 7;
+ break;
+ }
+ case UNIT_DAY:
+ if (before)
+ --tm.tm_mday;
+ if (get_end)
+ ++tm.tm_mday;
+ break;
+ default:
+ tor_assert(0);
+ }
+
+ tm.tm_hour = cfg_start_hour;
+ tm.tm_min = cfg_start_min;
+ tm.tm_sec = 0;
+ tm.tm_isdst = -1; /* Autodetect DST */
+ return mktime(&tm);
+}
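edge_of_accounting_period_containing() leans on a property guaranteed by the C standard: mktime() normalizes out-of-range struct tm fields, so decrementing tm_mon below 0 or letting tm_mday reach 0 rolls the date back to a valid calendar day. A small demonstration (the helper and its field choices are hypothetical, not part of the patch):

```c
#include <assert.h>
#include <string.h>
#include <time.h>

/* Demonstrate the mktime() normalization that the period arithmetic
 * above relies on: out-of-range tm_mon/tm_mday values are rolled into
 * a valid calendar date in place. */
static struct tm
normalized(int year, int mon, int mday)
{
  struct tm tm;
  memset(&tm, 0, sizeof(tm));
  tm.tm_year = year - 1900;
  tm.tm_mon = mon;       /* May fall outside [0,11]; mktime() fixes it up. */
  tm.tm_mday = mday;     /* May fall outside [1,31]; likewise. */
  tm.tm_hour = 12;       /* Midday, to stay clear of DST transitions. */
  tm.tm_isdst = -1;      /* Autodetect DST, as the code above does. */
  (void) mktime(&tm);    /* Normalizes the fields of tm in place. */
  return tm;
}
```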
+
+/** Return the start of the accounting period containing the time
+ * <b>now</b>. */
+static time_t
+start_of_accounting_period_containing(time_t now)
+{
+ return edge_of_accounting_period_containing(now, 0);
+}
+
+/** Return the start of the accounting period that comes after the one
+ * containing the time <b>now</b>. */
+static time_t
+start_of_accounting_period_after(time_t now)
+{
+ return edge_of_accounting_period_containing(now, 1);
+}
+
+/** Return the length of the accounting period containing the time
+ * <b>now</b>. */
+static long
+length_of_accounting_period_containing(time_t now)
+{
+ return edge_of_accounting_period_containing(now, 1) -
+ edge_of_accounting_period_containing(now, 0);
+}
+
+/** Initialize the accounting subsystem. */
+void
+configure_accounting(time_t now)
+{
+ time_t s_now;
+ /* Try to remember our recorded usage. */
+ if (!interval_start_time)
+ read_bandwidth_usage(); /* If we fail, we'll leave values at zero, and
+ * reset below.*/
+
+ s_now = start_of_accounting_period_containing(now);
+
+ if (!interval_start_time) {
+ /* We didn't have recorded usage; Start a new interval. */
+ log_info(LD_ACCT, "Starting new accounting interval.");
+ reset_accounting(now);
+ } else if (s_now == interval_start_time) {
+ log_info(LD_ACCT, "Continuing accounting interval.");
+ /* We are in the interval we thought we were in. Do nothing.*/
+ interval_end_time = start_of_accounting_period_after(interval_start_time);
+ } else {
+ long duration =
+ length_of_accounting_period_containing(interval_start_time);
+ double delta = ((double)(s_now - interval_start_time)) / duration;
+ if (-0.50 <= delta && delta <= 0.50) {
+ /* The start of the period is now a little later or earlier than we
+ * remembered. That's fine; we might lose some bytes we could otherwise
+ * have written, but better to err on the side of obeying accounting
+ * settings. */
+ log_info(LD_ACCT, "Accounting interval moved by %.02f%%; "
+ "that's fine.", delta*100);
+ interval_end_time = start_of_accounting_period_after(now);
+ } else if (delta >= 0.99) {
+ /* This is the regular time-moved-forward case; don't be too noisy
+ * about it or people will complain */
+ log_info(LD_ACCT, "Accounting interval elapsed; starting a new one");
+ reset_accounting(now);
+ } else {
+ log_warn(LD_ACCT,
+ "Mismatched accounting interval: moved by %.02f%%. "
+ "Starting a fresh one.", delta*100);
+ reset_accounting(now);
+ }
+ }
+ accounting_set_wakeup_time();
+}
+
+/** Return the relevant number of bytes sent/received this interval
+ * based on the set AccountingRule */
+uint64_t
+get_accounting_bytes(void)
+{
+ if (get_options()->AccountingRule == ACCT_SUM)
+ return n_bytes_read_in_interval+n_bytes_written_in_interval;
+ else if (get_options()->AccountingRule == ACCT_IN)
+ return n_bytes_read_in_interval;
+ else if (get_options()->AccountingRule == ACCT_OUT)
+ return n_bytes_written_in_interval;
+ else
+ return MAX(n_bytes_read_in_interval, n_bytes_written_in_interval);
+}
+
+/** Set expected_bandwidth_usage based on how much we sent/received
+ * per minute last interval (if we were up for at least 30 minutes),
+ * or based on our declared bandwidth otherwise. */
+static void
+update_expected_bandwidth(void)
+{
+ uint64_t expected;
+ const or_options_t *options= get_options();
+ uint64_t max_configured = (options->RelayBandwidthRate > 0 ?
+ options->RelayBandwidthRate :
+ options->BandwidthRate) * 60;
+ /* max_configured is the larger of bytes read and bytes written.
+ * If we are accounting based on sum, the worst case is both at
+ * max, doubling the expected bandwidth. */
+ if (get_options()->AccountingRule == ACCT_SUM)
+ max_configured *= 2;
+
+#define MIN_TIME_FOR_MEASUREMENT (1800)
+
+ if (soft_limit_hit_at > interval_start_time && n_bytes_at_soft_limit &&
+ (soft_limit_hit_at - interval_start_time) > MIN_TIME_FOR_MEASUREMENT) {
+ /* If we hit our soft limit last time, only count the bytes up to that
+ * time. This is a better predictor of our actual bandwidth than
+ * considering the entirety of the last interval, since we likely started
+ * using bytes very slowly once we hit our soft limit. */
+ expected = n_bytes_at_soft_limit /
+ (soft_limit_hit_at - interval_start_time);
+ expected /= 60;
+ } else if (n_seconds_active_in_interval >= MIN_TIME_FOR_MEASUREMENT) {
+ /* Otherwise, we either measured enough time in the last interval but
+ * never hit our soft limit, or we're using a state file from a Tor that
+ * doesn't know to store soft-limit info. Just take rate at which
+ * we were reading/writing in the last interval as our expected rate.
+ */
+ uint64_t used = get_accounting_bytes();
+ expected = used / (n_seconds_active_in_interval / 60);
+ } else {
+ /* If we haven't gotten enough data last interval, set 'expected'
+ * to 0. This will set our wakeup to the start of the interval.
+ * Next interval, we'll choose our starting time based on how much
+ * we sent this interval.
+ */
+ expected = 0;
+ }
+ if (expected > max_configured)
+ expected = max_configured;
+ expected_bandwidth_usage = expected;
+}
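The rate estimate in the middle branch above reduces to integer arithmetic: total bytes divided by whole minutes of activity. A sketch (the zero-minutes guard is an addition for safety here; the real branch is only reached once at least MIN_TIME_FOR_MEASUREMENT seconds have elapsed):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the per-minute rate estimate above. Note that the integer
 * division (seconds / 60) happens first, so 1800 seconds counts as
 * exactly 30 whole minutes. */
static uint64_t
bytes_per_minute(uint64_t used, uint32_t seconds_active)
{
  uint32_t minutes = seconds_active / 60;
  return minutes ? used / minutes : 0;   /* Guard added for this sketch. */
}
```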
+
+/** Called at the start of a new accounting interval: reset our
+ * expected bandwidth usage based on what happened last time, set up
+ * the start and end of the interval, and clear byte/time totals.
+ */
+static void
+reset_accounting(time_t now)
+{
+ log_info(LD_ACCT, "Starting new accounting interval.");
+ update_expected_bandwidth();
+ interval_start_time = start_of_accounting_period_containing(now);
+ interval_end_time = start_of_accounting_period_after(interval_start_time);
+ n_bytes_read_in_interval = 0;
+ n_bytes_written_in_interval = 0;
+ n_seconds_active_in_interval = 0;
+ n_bytes_at_soft_limit = 0;
+ soft_limit_hit_at = 0;
+ n_seconds_to_hit_soft_limit = 0;
+}
+
+/** Return true iff we should save our bandwidth usage to disk. */
+static inline int
+time_to_record_bandwidth_usage(time_t now)
+{
+ /* Note every 600 sec */
+#define NOTE_INTERVAL (600)
+ /* Or every 20 megabytes */
+#define NOTE_BYTES 20*(1024*1024)
+ static uint64_t last_read_bytes_noted = 0;
+ static uint64_t last_written_bytes_noted = 0;
+ static time_t last_time_noted = 0;
+
+ if (last_time_noted + NOTE_INTERVAL <= now ||
+ last_read_bytes_noted + NOTE_BYTES <= n_bytes_read_in_interval ||
+ last_written_bytes_noted + NOTE_BYTES <= n_bytes_written_in_interval ||
+ (interval_end_time && interval_end_time <= now)) {
+ last_time_noted = now;
+ last_read_bytes_noted = n_bytes_read_in_interval;
+ last_written_bytes_noted = n_bytes_written_in_interval;
+ return 1;
+ }
+ return 0;
+}
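The throttling pattern above (persist when enough time OR enough traffic has passed since the last write) can be sketched with explicit state instead of static locals; the interval-end condition is omitted here for brevity, and the struct and function names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <time.h>

/* Explicit-state sketch of time_to_record_bandwidth_usage() above. */
struct note_state {
  time_t last_time;
  uint64_t last_read, last_written;
};

static int
should_record(struct note_state *st, time_t now,
              uint64_t nread, uint64_t nwritten)
{
  const time_t NOTE_INTERVAL = 600;            /* Note every 600 sec... */
  const uint64_t NOTE_BYTES = 20 * 1024 * 1024; /* ...or every 20 MB. */
  if (st->last_time + NOTE_INTERVAL <= now ||
      st->last_read + NOTE_BYTES <= nread ||
      st->last_written + NOTE_BYTES <= nwritten) {
    st->last_time = now;                       /* Reset all watermarks. */
    st->last_read = nread;
    st->last_written = nwritten;
    return 1;
  }
  return 0;
}
```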
+
+/** Invoked once per second. Checks whether it is time to hibernate,
+ * record bandwidth used, etc. */
+void
+accounting_run_housekeeping(time_t now)
+{
+ if (now >= interval_end_time) {
+ configure_accounting(now);
+ }
+ if (time_to_record_bandwidth_usage(now)) {
+ if (accounting_record_bandwidth_usage(now, get_or_state())) {
+ log_warn(LD_FS, "Couldn't record bandwidth usage to disk.");
+ }
+ }
+}
+
+/** Based on our interval and our estimated bandwidth, choose a
+ * deterministic (but random-ish) time to wake up. */
+static void
+accounting_set_wakeup_time(void)
+{
+ char digest[DIGEST_LEN];
+ crypto_digest_t *d_env;
+ uint64_t time_to_exhaust_bw;
+ int time_to_consider;
+
+ if (! server_identity_key_is_set()) {
+ if (init_keys() < 0) {
+ log_err(LD_BUG, "Error initializing keys");
+ tor_assert(0);
+ }
+ }
+
+ if (server_identity_key_is_set()) {
+ char buf[ISO_TIME_LEN+1];
+ format_iso_time(buf, interval_start_time);
+
+ if (crypto_pk_get_digest(get_server_identity_key(), digest) < 0) {
+ log_err(LD_BUG, "Error getting our key's digest.");
+ tor_assert(0);
+ }
+
+ d_env = crypto_digest_new();
+ crypto_digest_add_bytes(d_env, buf, ISO_TIME_LEN);
+ crypto_digest_add_bytes(d_env, digest, DIGEST_LEN);
+ crypto_digest_get_digest(d_env, digest, DIGEST_LEN);
+ crypto_digest_free(d_env);
+ } else {
+ crypto_rand(digest, DIGEST_LEN);
+ }
+
+ if (!expected_bandwidth_usage) {
+ char buf1[ISO_TIME_LEN+1];
+ char buf2[ISO_TIME_LEN+1];
+ format_local_iso_time(buf1, interval_start_time);
+ format_local_iso_time(buf2, interval_end_time);
+ interval_wakeup_time = interval_start_time;
+
+ log_notice(LD_ACCT,
+ "Configured hibernation. This interval begins at %s "
+ "and ends at %s. We have no prior estimate for bandwidth, so "
+ "we will start out awake and hibernate when we exhaust our quota.",
+ buf1, buf2);
+ return;
+ }
+
+ time_to_exhaust_bw =
+ (get_options()->AccountingMax/expected_bandwidth_usage)*60;
+ if (time_to_exhaust_bw > INT_MAX) {
+ time_to_exhaust_bw = INT_MAX;
+ time_to_consider = 0;
+ } else {
+ time_to_consider = accounting_get_interval_length() -
+ (int)time_to_exhaust_bw;
+ }
+
+ if (time_to_consider<=0) {
+ interval_wakeup_time = interval_start_time;
+ } else {
+ /* XXX can we simplify this just by picking a random (non-deterministic)
+ * time to be up? If we go down and come up, then we pick a new one. Is
+ * that good enough? -RD */
+
+ /* This is not a perfectly unbiased conversion, but it is good enough:
+ * in the worst case, the first half of the day is 0.06 percent likelier
+ * to be chosen than the last half. */
+ interval_wakeup_time = interval_start_time +
+ (get_uint32(digest) % time_to_consider);
+ }
+
+ {
+ char buf1[ISO_TIME_LEN+1];
+ char buf2[ISO_TIME_LEN+1];
+ char buf3[ISO_TIME_LEN+1];
+ char buf4[ISO_TIME_LEN+1];
+ time_t down_time;
+ if (interval_wakeup_time+time_to_exhaust_bw > TIME_MAX)
+ down_time = TIME_MAX;
+ else
+ down_time = (time_t)(interval_wakeup_time+time_to_exhaust_bw);
+ if (down_time>interval_end_time)
+ down_time = interval_end_time;
+ format_local_iso_time(buf1, interval_start_time);
+ format_local_iso_time(buf2, interval_wakeup_time);
+ format_local_iso_time(buf3, down_time);
+ format_local_iso_time(buf4, interval_end_time);
+
+ log_notice(LD_ACCT,
+ "Configured hibernation. This interval began at %s; "
+ "the scheduled wake-up time %s %s; "
+ "we expect%s to exhaust our quota for this interval around %s; "
+ "the next interval begins at %s (all times local)",
+ buf1,
+ time(NULL)<interval_wakeup_time?"is":"was", buf2,
+ time(NULL)<down_time?"":"ed", buf3,
+ buf4);
+ }
+}
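The wakeup-time choice above is deterministic per (relay, interval): the interval start time and the relay's identity digest are hashed, and the hash picked modulo the slack time. A self-contained sketch; the FNV-1a stand-in hash and the helper names are assumptions (Tor uses a SHA-1 digest of the identity key via crypto_digest), but the key property shown is the same, namely that a relay restarting mid-interval recomputes the same wakeup time.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Stand-in FNV-1a hash; an assumption, not Tor's crypto_digest. */
static uint64_t
fnv1a(const char *s)
{
  uint64_t h = 1469598103934665603ULL;
  while (*s) {
    h ^= (unsigned char)*s++;
    h *= 1099511628211ULL;
  }
  return h;
}

/* Pick a deterministic wakeup time within [start, start+slack). */
static time_t
choose_wakeup(time_t interval_start, const char *identity, long slack)
{
  char buf[64];
  if (slack <= 0)
    return interval_start;   /* No slack: wake at interval start. */
  snprintf(buf, sizeof(buf), "%lld:%s", (long long)interval_start, identity);
  return interval_start + (time_t)(fnv1a(buf) % (uint64_t)slack);
}
```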
+
+/* Round the value up to the next multiple of 1024; the lost precision
+ * in the recorded byte counts is actually a feature. */
+#define ROUND_UP(x) (((x) + 0x3ff) & ~0x3ff)
+/** Save all our bandwidth tracking information to disk. Return 0 on
+ * success, -1 on failure. */
+int
+accounting_record_bandwidth_usage(time_t now, or_state_t *state)
+{
+ /* Just update the state */
+ state->AccountingIntervalStart = interval_start_time;
+ state->AccountingBytesReadInInterval = ROUND_UP(n_bytes_read_in_interval);
+ state->AccountingBytesWrittenInInterval =
+ ROUND_UP(n_bytes_written_in_interval);
+ state->AccountingSecondsActive = n_seconds_active_in_interval;
+ state->AccountingExpectedUsage = expected_bandwidth_usage;
+
+ state->AccountingSecondsToReachSoftLimit = n_seconds_to_hit_soft_limit;
+ state->AccountingSoftLimitHitAt = soft_limit_hit_at;
+ state->AccountingBytesAtSoftLimit = n_bytes_at_soft_limit;
+
+ or_state_mark_dirty(state,
+ now+(get_options()->AvoidDiskWrites ? 7200 : 60));
+
+ return 0;
+}
+#undef ROUND_UP
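The ROUND_UP macro above coarsens byte counts to 1 KiB (0x400-byte) boundaries before they reach the state file. As a function, for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Function form of the ROUND_UP macro above: round a byte count up to
 * the next multiple of 1024. Multiples of 1024 (including 0) are
 * unchanged. */
static uint64_t
round_up_1k(uint64_t x)
{
  return (x + 0x3ff) & ~(uint64_t)0x3ff;
}
```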
+
+/** Read stored accounting information from disk. Return 0 on success;
+ * return -1 and change nothing on failure. */
+static int
+read_bandwidth_usage(void)
+{
+ or_state_t *state = get_or_state();
+
+ {
+ char *fname = get_datadir_fname("bw_accounting");
+ int res;
+
+ res = unlink(fname);
+ if (res != 0 && errno != ENOENT) {
+ log_warn(LD_FS,
+ "Failed to unlink %s: %s",
+ fname, strerror(errno));
+ }
+
+ tor_free(fname);
+ }
+
+ if (!state)
+ return -1;
+
+ log_info(LD_ACCT, "Reading bandwidth accounting data from state file");
+ n_bytes_read_in_interval = state->AccountingBytesReadInInterval;
+ n_bytes_written_in_interval = state->AccountingBytesWrittenInInterval;
+ n_seconds_active_in_interval = state->AccountingSecondsActive;
+ interval_start_time = state->AccountingIntervalStart;
+ expected_bandwidth_usage = state->AccountingExpectedUsage;
+
+ /* Older versions of Tor (before 0.2.2.17-alpha or so) didn't generate these
+ * fields. If you switch back and forth, you might get an
+ * AccountingSoftLimitHitAt value from long before the most recent
+ * interval_start_time. If that's so, then ignore the softlimit-related
+ * values. */
+ if (state->AccountingSoftLimitHitAt > interval_start_time) {
+ soft_limit_hit_at = state->AccountingSoftLimitHitAt;
+ n_bytes_at_soft_limit = state->AccountingBytesAtSoftLimit;
+ n_seconds_to_hit_soft_limit = state->AccountingSecondsToReachSoftLimit;
+ } else {
+ soft_limit_hit_at = 0;
+ n_bytes_at_soft_limit = 0;
+ n_seconds_to_hit_soft_limit = 0;
+ }
+
+ {
+ char tbuf1[ISO_TIME_LEN+1];
+ char tbuf2[ISO_TIME_LEN+1];
+ format_iso_time(tbuf1, state->LastWritten);
+ format_iso_time(tbuf2, state->AccountingIntervalStart);
+
+ log_info(LD_ACCT,
+ "Successfully read bandwidth accounting info from state written at %s "
+ "for interval starting at %s. We have been active for %lu seconds in "
+ "this interval. At the start of the interval, we expected to use "
+ "about %lu KB per second. (%"PRIu64" bytes read so far, "
+ "%"PRIu64" bytes written so far)",
+ tbuf1, tbuf2,
+ (unsigned long)n_seconds_active_in_interval,
+ (unsigned long)(expected_bandwidth_usage*1024/60),
+ (n_bytes_read_in_interval),
+ (n_bytes_written_in_interval));
+ }
+
+ return 0;
+}
+
+/** Return true iff we have sent/received all the bytes we are willing
+ * to send/receive this interval. */
+static int
+hibernate_hard_limit_reached(void)
+{
+ uint64_t hard_limit = get_options()->AccountingMax;
+ if (!hard_limit)
+ return 0;
+ return get_accounting_bytes() >= hard_limit;
+}
+
+/** Return true iff we have sent/received almost all the bytes we are willing
+ * to send/receive this interval. */
+static int
+hibernate_soft_limit_reached(void)
+{
+ const uint64_t acct_max = get_options()->AccountingMax;
+#define SOFT_LIM_PCT (.95)
+#define SOFT_LIM_BYTES (500*1024*1024)
+#define SOFT_LIM_MINUTES (3*60)
+ /* The 'soft limit' is a fair bit more complicated now than it once was.
+ * We want to stop accepting connections when ALL of the following are true:
+ * - We expect to use up the remaining bytes in under 3 hours
+ * - We have used up 95% of our bytes.
+ * - We have less than 500MB of bytes left.
+ */
+ uint64_t soft_limit = (uint64_t) (acct_max * SOFT_LIM_PCT);
+ if (acct_max > SOFT_LIM_BYTES && acct_max - SOFT_LIM_BYTES > soft_limit) {
+ soft_limit = acct_max - SOFT_LIM_BYTES;
+ }
+ if (expected_bandwidth_usage) {
+ const uint64_t expected_usage =
+ expected_bandwidth_usage * SOFT_LIM_MINUTES;
+ if (acct_max > expected_usage && acct_max - expected_usage > soft_limit)
+ soft_limit = acct_max - expected_usage;
+ }
+
+ if (!soft_limit)
+ return 0;
+ return get_accounting_bytes() >= soft_limit;
+}
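Because hibernation should begin only when ALL three conditions above hold, the effective threshold is the largest of the three candidate limits: 95% of AccountingMax, AccountingMax minus 500 MB, and AccountingMax minus three hours of expected usage. A standalone sketch of that computation (the function name is hypothetical; the constants match the macros above):

```c
#include <assert.h>
#include <stdint.h>

#define SOFT_LIM_PCT (.95)
#define SOFT_LIM_BYTES (500*1024*1024)
#define SOFT_LIM_MINUTES (3*60)

/* Sketch of the soft-limit computation above: take the LARGEST of the
 * candidate thresholds, so that crossing it implies every individual
 * condition has been met. expected_per_min may be 0 if unknown. */
static uint64_t
soft_limit(uint64_t acct_max, uint64_t expected_per_min)
{
  uint64_t lim = (uint64_t)(acct_max * SOFT_LIM_PCT);
  if (acct_max > SOFT_LIM_BYTES && acct_max - SOFT_LIM_BYTES > lim)
    lim = acct_max - SOFT_LIM_BYTES;           /* <500MB left. */
  if (expected_per_min) {
    uint64_t expected = expected_per_min * SOFT_LIM_MINUTES;
    if (acct_max > expected && acct_max - expected > lim)
      lim = acct_max - expected;               /* <3h of usage left. */
  }
  return lim;
}
```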
+
+/** Called when we get a SIGINT, or when bandwidth soft limit is
+ * reached. Puts us into "loose hibernation": we don't accept new
+ * connections, but we continue handling old ones. */
+static void
+hibernate_begin(hibernate_state_t new_state, time_t now)
+{
+ const or_options_t *options = get_options();
+
+ if (new_state == HIBERNATE_STATE_EXITING &&
+ hibernate_state != HIBERNATE_STATE_LIVE) {
+ log_notice(LD_GENERAL,"SIGINT received %s; exiting now.",
+ hibernate_state == HIBERNATE_STATE_EXITING ?
+ "a second time" : "while hibernating");
+ tor_shutdown_event_loop_and_exit(0);
+ return;
+ }
+
+ if (new_state == HIBERNATE_STATE_LOWBANDWIDTH &&
+ hibernate_state == HIBERNATE_STATE_LIVE) {
+ soft_limit_hit_at = now;
+ n_seconds_to_hit_soft_limit = n_seconds_active_in_interval;
+ n_bytes_at_soft_limit = get_accounting_bytes();
+ }
+
+ /* close listeners. leave control listener(s). */
+ connection_mark_all_noncontrol_listeners();
+
+ /* XXX kill intro point circs */
+ /* XXX upload rendezvous service descriptors with no intro points */
+
+ if (new_state == HIBERNATE_STATE_EXITING) {
+ log_notice(LD_GENERAL,"Interrupt: we have stopped accepting new "
+ "connections, and will shut down in %d seconds. Interrupt "
+ "again to exit now.", options->ShutdownWaitLength);
+ shutdown_time = time(NULL) + options->ShutdownWaitLength;
++#ifdef HAVE_SYSTEMD
++ /* Tell systemd that we may need more than the default 90 seconds to shut
++ * down, so it doesn't kill us. Add some extra time to actually finish
++ * shutting down; otherwise systemd will kill us immediately after
++ * EXTEND_TIMEOUT_USEC expires. This is an *upper* limit; Tor will probably
++ * only take one or two more seconds, but assume that maybe we got swapped
++ * out and it takes a little while longer.
++ *
++ * As of this writing, this is a no-op with all defaults: ShutdownWaitLength
++ * is 30 seconds, so this extends the timeout to 60 seconds, while systemd's
++ * default DefaultTimeoutStopSec is 90 seconds, so systemd would wait (up
++ * to) 90 seconds anyway.
++ *
++ * 2^31 usec = ~2147 sec = ~35 min. Probably nobody will actually set
++ * ShutdownWaitLength to more than that, but use a wider type so we don't
++ * need to think about undefined behavior on overflow.
++ */
++ sd_notifyf(0, "EXTEND_TIMEOUT_USEC=%" PRIu64,
++ ((uint64_t)(options->ShutdownWaitLength) + 30) * TOR_USEC_PER_SEC);
++#endif
+ } else { /* soft limit reached */
+ hibernate_end_time = interval_end_time;
+ }
+
+ hibernate_state = new_state;
+ accounting_record_bandwidth_usage(now, get_or_state());
+
+ or_state_mark_dirty(get_or_state(),
+ get_options()->AvoidDiskWrites ? now+600 : 0);
+}
+
+/** Called when we've been hibernating and our timeout is reached. */
+static void
+hibernate_end(hibernate_state_t new_state)
+{
+ tor_assert(hibernate_state == HIBERNATE_STATE_LOWBANDWIDTH ||
+ hibernate_state == HIBERNATE_STATE_DORMANT ||
+ hibernate_state == HIBERNATE_STATE_INITIAL);
+
+ /* listeners will be relaunched in run_scheduled_events() in main.c */
+ if (hibernate_state != HIBERNATE_STATE_INITIAL)
+ log_notice(LD_ACCT,"Hibernation period ended. Resuming normal activity.");
+
+ hibernate_state = new_state;
+ hibernate_end_time = 0; /* no longer hibernating */
+ reset_uptime(); /* reset published uptime */
+}
+
+/** A wrapper around hibernate_begin, for when we get SIGINT. */
+void
+hibernate_begin_shutdown(void)
+{
+ hibernate_begin(HIBERNATE_STATE_EXITING, time(NULL));
+}
+
+/**
+ * Return true iff we are currently hibernating -- that is, if we are in
+ * any non-live state.
+ */
+MOCK_IMPL(int,
+we_are_hibernating,(void))
+{
+ return hibernate_state != HIBERNATE_STATE_LIVE;
+}
+
+/**
+ * Return true iff we are currently _fully_ hibernating -- that is, if we are
+ * in a state where we expect to handle no network activity at all.
+ */
+MOCK_IMPL(int,
+we_are_fully_hibernating,(void))
+{
+ return hibernate_state == HIBERNATE_STATE_DORMANT;
+}
+
+/** If we aren't currently dormant, close all connections and become
+ * dormant. */
+static void
+hibernate_go_dormant(time_t now)
+{
+ connection_t *conn;
+
+ if (hibernate_state == HIBERNATE_STATE_DORMANT)
+ return;
+ else if (hibernate_state == HIBERNATE_STATE_LOWBANDWIDTH)
+ hibernate_state = HIBERNATE_STATE_DORMANT;
+ else
+ hibernate_begin(HIBERNATE_STATE_DORMANT, now);
+
+ log_notice(LD_ACCT,"Going dormant. Blowing away remaining connections.");
+
+ /* Close all OR/AP/exit conns. Leave dir conns because we still want
+ * to be able to upload server descriptors so clients know we're still
+ * running, and download directories so we can detect if we're obsolete.
+ * Leave control conns because we still want to be controllable.
+ */
+ while ((conn = connection_get_by_type(CONN_TYPE_OR)) ||
+ (conn = connection_get_by_type(CONN_TYPE_AP)) ||
+ (conn = connection_get_by_type(CONN_TYPE_EXIT))) {
+ if (CONN_IS_EDGE(conn)) {
+ connection_edge_end(TO_EDGE_CONN(conn), END_STREAM_REASON_HIBERNATING);
+ }
+ log_info(LD_NET,"Closing conn type %d", conn->type);
+ if (conn->type == CONN_TYPE_AP) {
+ /* send socks failure if needed */
+ connection_mark_unattached_ap(TO_ENTRY_CONN(conn),
+ END_STREAM_REASON_HIBERNATING);
+ } else if (conn->type == CONN_TYPE_OR) {
+ if (TO_OR_CONN(conn)->chan) {
+ connection_or_close_normally(TO_OR_CONN(conn), 0);
+ } else {
+ connection_mark_for_close(conn);
+ }
+ } else {
+ connection_mark_for_close(conn);
+ }
+ }
+
+ if (now < interval_wakeup_time)
+ hibernate_end_time = interval_wakeup_time;
+ else
+ hibernate_end_time = interval_end_time;
+
+ accounting_record_bandwidth_usage(now, get_or_state());
+
+ or_state_mark_dirty(get_or_state(),
+ get_options()->AvoidDiskWrites ? now+600 : 0);
+
+ hibernate_schedule_wakeup_event(now, hibernate_end_time);
+}
+
+/**
+ * Schedule a mainloop event at <b>end_time</b> to wake up from a dormant
+ * state. We can't rely on this happening from second_elapsed_callback,
+ * since second_elapsed_callback will be shut down when we're dormant.
+ *
+ * (Note that we might immediately go back to sleep after we set the next
+ * wakeup time.)
+ */
+static void
+hibernate_schedule_wakeup_event(time_t now, time_t end_time)
+{
+ struct timeval delay = { 0, 0 };
+
+ if (now >= end_time) {
+ // In these cases we always wait at least a second, to avoid running
+ // the callback in a tight loop.
+ delay.tv_sec = 1;
+ } else {
+ delay.tv_sec = (end_time - now);
+ }
+
+ if (!wakeup_event) {
+ wakeup_event = mainloop_event_postloop_new(wakeup_event_callback, NULL);
+ }
+
+ mainloop_event_schedule(wakeup_event, &delay);
+}
+
+/**
+ * Called at the end of the interval, or at the wakeup time of the current
+ * interval, to exit the dormant state.
+ **/
+static void
+wakeup_event_callback(mainloop_event_t *ev, void *data)
+{
+ (void) ev;
+ (void) data;
+
+ const time_t now = time(NULL);
+ accounting_run_housekeeping(now);
+ consider_hibernation(now);
+ if (hibernate_state != HIBERNATE_STATE_DORMANT) {
+ /* We woke up, so everything's great here */
+ return;
+ }
+
+ /* We're still dormant. */
+ if (now < interval_wakeup_time)
+ hibernate_end_time = interval_wakeup_time;
+ else
+ hibernate_end_time = interval_end_time;
+
+ hibernate_schedule_wakeup_event(now, hibernate_end_time);
+}
+
+/** Called when hibernate_end_time has arrived. */
+static void
+hibernate_end_time_elapsed(time_t now)
+{
+ char buf[ISO_TIME_LEN+1];
+
+ /* The interval has ended, or it is wakeup time. Find out which. */
+ accounting_run_housekeeping(now);
+ if (interval_wakeup_time <= now) {
+ /* The interval hasn't changed, but interval_wakeup_time has passed.
+ * It's time to wake up and start being a server. */
+ hibernate_end(HIBERNATE_STATE_LIVE);
+ return;
+ } else {
+ /* The interval has changed, and it isn't time to wake up yet. */
+ hibernate_end_time = interval_wakeup_time;
+ format_iso_time(buf,interval_wakeup_time);
+ if (hibernate_state != HIBERNATE_STATE_DORMANT) {
+ /* We weren't sleeping before; we should sleep now. */
+ log_notice(LD_ACCT,
+ "Accounting period ended. Commencing hibernation until "
+ "%s UTC", buf);
+ hibernate_go_dormant(now);
+ } else {
+ log_notice(LD_ACCT,
+ "Accounting period ended. This period, we will hibernate"
+ " until %s UTC",buf);
+ }
+ }
+}
+
+/** Consider our environment and decide if it's time
+ * to start/stop hibernating.
+ */
+void
+consider_hibernation(time_t now)
+{
+ int accounting_enabled = get_options()->AccountingMax != 0;
+ char buf[ISO_TIME_LEN+1];
+ hibernate_state_t prev_state = hibernate_state;
+
+ /* If we're in 'exiting' mode, then we just shut down after the interval
+ * elapses. */
+ if (hibernate_state == HIBERNATE_STATE_EXITING) {
+ tor_assert(shutdown_time);
+ if (shutdown_time <= now) {
+ log_notice(LD_GENERAL, "Clean shutdown finished. Exiting.");
+ tor_shutdown_event_loop_and_exit(0);
+ }
+ return; /* if exiting soon, don't worry about bandwidth limits */
+ }
+
+ if (hibernate_state == HIBERNATE_STATE_DORMANT) {
+ /* We've been hibernating because of bandwidth accounting. */
+ tor_assert(hibernate_end_time);
+ if (hibernate_end_time > now && accounting_enabled) {
+ /* If we're hibernating, don't wake up until it's time, regardless of
+ * whether we're in a new interval. */
+ return;
+ } else {
+ hibernate_end_time_elapsed(now);
+ }
+ }
+
+ /* Else, we aren't hibernating. See if it's time to start hibernating, or to
+ * go dormant. */
+ if (hibernate_state == HIBERNATE_STATE_LIVE ||
+ hibernate_state == HIBERNATE_STATE_INITIAL) {
+ if (hibernate_soft_limit_reached()) {
+ log_notice(LD_ACCT,
+ "Bandwidth soft limit reached; commencing hibernation. "
+ "No new connections will be accepted");
+ hibernate_begin(HIBERNATE_STATE_LOWBANDWIDTH, now);
+ } else if (accounting_enabled && now < interval_wakeup_time) {
+ format_local_iso_time(buf,interval_wakeup_time);
+ log_notice(LD_ACCT,
+ "Commencing hibernation. We will wake up at %s local time.",
+ buf);
+ hibernate_go_dormant(now);
+ } else if (hibernate_state == HIBERNATE_STATE_INITIAL) {
+ hibernate_end(HIBERNATE_STATE_LIVE);
+ }
+ }
+
+ if (hibernate_state == HIBERNATE_STATE_LOWBANDWIDTH) {
+ if (!accounting_enabled) {
+ hibernate_end_time_elapsed(now);
+ } else if (hibernate_hard_limit_reached()) {
+ hibernate_go_dormant(now);
+ } else if (hibernate_end_time <= now) {
+ /* The hibernation period ended while we were still in lowbandwidth.*/
+ hibernate_end_time_elapsed(now);
+ }
+ }
+
+ /* Dispatch a controller event if the hibernation state changed. */
+ if (hibernate_state != prev_state)
+ on_hibernate_state_change(prev_state);
+}
+
+/** Helper function: called when we get a GETINFO request for an
+ * accounting-related key on the control connection <b>conn</b>. If we can
+ * answer the request for <b>question</b>, then set *<b>answer</b> to a newly
+ * allocated string holding the result. Otherwise, set *<b>answer</b> to
+ * NULL. */
+int
+getinfo_helper_accounting(control_connection_t *conn,
+ const char *question, char **answer,
+ const char **errmsg)
+{
+ (void) conn;
+ (void) errmsg;
+ if (!strcmp(question, "accounting/enabled")) {
+ *answer = tor_strdup(accounting_is_enabled(get_options()) ? "1" : "0");
+ } else if (!strcmp(question, "accounting/hibernating")) {
+ *answer = tor_strdup(hibernate_state_to_string(hibernate_state));
+ tor_strlower(*answer);
+ } else if (!strcmp(question, "accounting/bytes")) {
+ tor_asprintf(answer, "%"PRIu64" %"PRIu64,
+ (n_bytes_read_in_interval),
+ (n_bytes_written_in_interval));
+ } else if (!strcmp(question, "accounting/bytes-left")) {
+ uint64_t limit = get_options()->AccountingMax;
+ if (get_options()->AccountingRule == ACCT_SUM) {
+ uint64_t total_left = 0;
+ uint64_t total_bytes = get_accounting_bytes();
+ if (total_bytes < limit)
+ total_left = limit - total_bytes;
+ tor_asprintf(answer, "%"PRIu64" %"PRIu64,
+ (total_left), (total_left));
+ } else if (get_options()->AccountingRule == ACCT_IN) {
+ uint64_t read_left = 0;
+ if (n_bytes_read_in_interval < limit)
+ read_left = limit - n_bytes_read_in_interval;
+ tor_asprintf(answer, "%"PRIu64" %"PRIu64,
+ (read_left), (limit));
+ } else if (get_options()->AccountingRule == ACCT_OUT) {
+ uint64_t write_left = 0;
+ if (n_bytes_written_in_interval < limit)
+ write_left = limit - n_bytes_written_in_interval;
+ tor_asprintf(answer, "%"PRIu64" %"PRIu64,
+ (limit), (write_left));
+ } else {
+ uint64_t read_left = 0, write_left = 0;
+ if (n_bytes_read_in_interval < limit)
+ read_left = limit - n_bytes_read_in_interval;
+ if (n_bytes_written_in_interval < limit)
+ write_left = limit - n_bytes_written_in_interval;
+ tor_asprintf(answer, "%"PRIu64" %"PRIu64,
+ (read_left), (write_left));
+ }
+ } else if (!strcmp(question, "accounting/interval-start")) {
+ *answer = tor_malloc(ISO_TIME_LEN+1);
+ format_iso_time(*answer, interval_start_time);
+ } else if (!strcmp(question, "accounting/interval-wake")) {
+ *answer = tor_malloc(ISO_TIME_LEN+1);
+ format_iso_time(*answer, interval_wakeup_time);
+ } else if (!strcmp(question, "accounting/interval-end")) {
+ *answer = tor_malloc(ISO_TIME_LEN+1);
+ format_iso_time(*answer, interval_end_time);
+ } else {
+ *answer = NULL;
+ }
+ return 0;
+}
+
+/**
+ * Helper function: called when the hibernation state changes, and sends a
+ * SERVER_STATUS event to notify interested controllers of the accounting
+ * state change.
+ */
+static void
+on_hibernate_state_change(hibernate_state_t prev_state)
+{
+ control_event_server_status(LOG_NOTICE,
+ "HIBERNATION_STATUS STATUS=%s",
+ hibernate_state_to_string(hibernate_state));
+
+ /* We are changing hibernation state, this can affect the main loop event
+ * list. Rescan it to update the events state. We do this whatever the new
+ * hibernation state because they can each possibly affect an event. The
+ * initial state means we are booting up, so we shouldn't scan here because
+ * at this point the events in the list haven't been initialized. */
+ if (prev_state != HIBERNATE_STATE_INITIAL) {
+ rescan_periodic_events(get_options());
+ }
+
+ reschedule_per_second_timer();
+}
+
+/** Free all resources held by the accounting module */
+void
+accounting_free_all(void)
+{
+ mainloop_event_free(wakeup_event);
+ hibernate_state = HIBERNATE_STATE_INITIAL;
+ hibernate_end_time = 0;
+ shutdown_time = 0;
+}
+
+#ifdef TOR_UNIT_TESTS
+/**
+ * Manually change the hibernation state. Private; used only by the unit
+ * tests.
+ */
+void
+hibernate_set_state_for_testing_(hibernate_state_t newstate)
+{
+ hibernate_state = newstate;
+}
+#endif /* defined(TOR_UNIT_TESTS) */