Queuing process in ORs
Florian.Tschorsch at uni-duesseldorf.de
Tue Jul 20 20:10:40 UTC 2010
I'm trying to understand the exact queuing process in ORs.
As far as I know, and as far as I could verify in the source code, it goes like this:
TCP IN --> ROUND ROBIN / LEAKY BUCKET --> CONN INBUF --> CIRC QUEUES* --> CONN OUTBUF --> ROUND ROBIN / LEAKY BUCKET --> TCP OUT
ORs read data from a TCP connection to the connection's inbuf (read_to_buf_tls).
The amount of data that may be read is calculated by a round-robin scheduler (depending on the bucket mechanism).
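As a rough illustration of that bucket mechanism, here is a minimal token-bucket sketch in C. All names (bucket_t, bucket_refill, etc.) are hypothetical and simplified; they are not the actual identifiers from Tor's source.

```c
#include <assert.h>

/* Hypothetical, simplified bucket: tokens are bytes we are allowed to read. */
typedef struct {
    int tokens; /* bytes currently allowed */
    int rate;   /* bytes added per refill interval */
    int burst;  /* bucket capacity */
} bucket_t;

/* Refill once per interval, capped at the burst size. */
static void bucket_refill(bucket_t *b) {
    b->tokens += b->rate;
    if (b->tokens > b->burst)
        b->tokens = b->burst;
}

/* How many bytes may we read right now? */
static int bucket_read_limit(const bucket_t *b, int wanted) {
    return wanted < b->tokens ? wanted : b->tokens;
}

/* Account for bytes actually read from the socket. */
static void bucket_decrement(bucket_t *b, int n) {
    b->tokens -= n;
}
```

The read limit returned here is what bounds each read_to_buf_tls call in my understanding of the process.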
"Reading to" the inbuf (connection_handle_read_impl) triggers a chain of procedures that ultimately pushes cells (after crypting) onto the appropriate circuit queue.
After appending cells, Tor tries to pop cells and flush them (connection_or_flush_from_first_active_circuit) into the respective connection's outbuf.
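To make the append step concrete, here is a sketch of how I picture a cell landing on its circuit's queue, with the circuit being marked active so the scheduler will visit it. The structures and names are hypothetical simplifications, not Tor's actual ones.

```c
#include <assert.h>

/* Simplified: the queue only counts its cells. */
typedef struct {
    int n_cells;
} cell_queue_t;

typedef struct {
    cell_queue_t queue;
    int is_active; /* linked into the connection's active-circuits list? */
} circuit_t;

/* Append one (already crypted) cell; a circuit whose queue just became
 * non-empty must be made active so the circuit-level scheduler sees it. */
static void queue_append(circuit_t *circ) {
    circ->queue.n_cells++;
    if (!circ->is_active)
        circ->is_active = 1;
}
```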
*And here comes my vagueness: I know that circuits multiplexed over one connection are kept in a circular linked list.
My assumption is that scheduling at the circuit level is realized by walking through the linked list of circuits and giving each of them a chance to forward cells.
I could also identify a callback (connection_or_flushed_some) for the above-mentioned procedure that tries to forward even more data.
But how do these mechanisms intertwine into circuit-level scheduling? And is the circuit-level scheduling influenced by the round-robin scheduling?
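My assumption about the circular list could be sketched like this: flush up to a few cells from the first active circuit, then advance the pointer so the next circuit gets its turn. This is a toy model with hypothetical names; in particular, it omits deactivating circuits whose queues become empty, which the real code presumably handles.

```c
#include <stddef.h>

typedef struct circuit {
    struct circuit *next; /* circular "active circuits" list */
    int queued_cells;     /* cells waiting in this circuit's queue */
} circuit_t;

typedef struct {
    circuit_t *active;    /* next circuit to be serviced */
} or_conn_t;

/* Pop at most max_cells from the first active circuit into the outbuf
 * (modeled as simply decrementing the count), then rotate the list so
 * the next circuit is serviced first next time. Returns cells flushed. */
static int flush_from_first_active(or_conn_t *conn, int max_cells) {
    circuit_t *circ = conn->active;
    if (!circ)
        return 0;
    int n = circ->queued_cells < max_cells ? circ->queued_cells : max_cells;
    circ->queued_cells -= n;   /* "write" n cells to the outbuf */
    conn->active = circ->next; /* round robin: advance to the next circuit */
    return n;
}
```

Whether the real scheduler advances per flush call, per cell, or only when a circuit's queue empties is exactly the part I am unsure about.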
As soon as data is available in a connection's outbuf, Tor tries to flush some of it over the TCP socket.
This also happens via an intermediate round-robin scheduler (analogous to the inbuf side).
It would be kind if someone could help me understand this issue, or at least confirm/disprove my findings.
I believe this would help other developers too and provide a better overall knowledge of Tor's source code.
Thanks in advance.