tevador <tevador@gmail.com> writes:
> Hi all,
Hello tevador,
thanks so much for your work here and for the great simulation, and also for the hybrid attack, which was definitely missing from the puzzle.
I've been working on a further revision of the proposal based on your comments. I have just one small question I would like your feedback on.
>> 3.4.3. PoW effort estimation [EFFORT_ESTIMATION]
>>
>>    {XXX: BLOCKER: Figure out if this system makes sense}
>
> I wrote a simple simulation in Python to test different ways of adjusting the suggested effort. The results are here: https://github.com/tevador/scratchpad/blob/master/tor-pow/effort_sim.md
>
> In summary, I suggest using MIN_EFFORT = 1000 and the following algorithm to calculate the suggested effort:
>
> - Sum the effort of all valid requests that have been received since the last HS descriptor update. This includes all handled requests, trimmed requests and requests still in the queue.
> - Divide the sum by the maximum number of requests that the service could have handled during that time (SVC_BOTTOM_CAPACITY * HS_UPDATE_PERIOD).
> - Suggested effort = max(MIN_EFFORT, result)
>
> This algorithm can both increase and reduce the suggested effort.
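Just to check my understanding of the rule, here it is as a minimal Python sketch. The counter names and the HS_UPDATE_PERIOD value are my own illustrative assumptions, not values taken from your simulation:

    MIN_EFFORT = 1000
    SVC_BOTTOM_CAPACITY = 180  # requests/sec, magic value from 6.2.2
    HS_UPDATE_PERIOD = 300     # secs between descriptor updates (assumed)

    def suggested_effort(request_efforts):
        # request_efforts: efforts of all valid requests received since
        # the last descriptor update (handled + trimmed + still queued).
        total_effort = sum(request_efforts)
        # Maximum number of requests the service could have handled
        # during the same period.
        max_handled = SVC_BOTTOM_CAPACITY * HS_UPDATE_PERIOD
        return max(MIN_EFFORT, total_effort // max_handled)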
I like the above logic, but I'm wondering how we can get the real SVC_BOTTOM_CAPACITY for every scenario. In particular, the SVC_BOTTOM_CAPACITY=180 value from section 6.2.2 might have been true for David's testing setup, but it will not be true for every computer and every network.
I wonder if we can adapt the above effort estimation algorithm to use an initial SVC_BOTTOM_CAPACITY magic value for the first run (let's say 180), but then derive the real SVC_BOTTOM_CAPACITY of the host at runtime and use that for subsequent runs of the algorithm. Something like the sketch below is what I have in mind.
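Here's a rough sketch of the kind of runtime measurement I mean. All names are hypothetical, and the details (when to count a request, when the measurement is trustworthy) are exactly the part I'm unsure about:

    import time

    class CapacityEstimator:
        def __init__(self, initial_capacity=180):
            # Start from the magic value until we have real measurements.
            self.capacity = initial_capacity  # requests/sec
            self.handled = 0
            self.since = time.monotonic()

        def note_handled_request(self):
            # Call this whenever a request is handled while the queue is
            # non-empty, so we measure the host's actual bottom capacity
            # rather than idle time.
            self.handled += 1

        def refresh(self):
            # At each descriptor update, replace the magic value with the
            # observed handling rate, to be used by the next estimation run.
            elapsed = time.monotonic() - self.since
            if self.handled > 0 and elapsed > 0:
                self.capacity = self.handled / elapsed
            self.handled = 0
            self.since = time.monotonic()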
Do you think this is possible?