Hi all,
it took me a year or so but I've finally managed to build Tor for iOS with working support for TransPort, as you can see at: https://github.com/sid77/evelyn/blob/master/tor/make.sh
The next natural step is to hack together full device torification as iOS jailbroken devices can run pf (without ALTQ support).
I'm not very comfortable with pf and pfctl, so my first step was to head over to https://trac.torproject.org/projects/tor/wiki/doc/TransparentProxy#BSDPF looking for some clues.
However, jailbroken iOS's ifconfig cannot bring up a second loopback interface (I think the kernel doesn't allow it), so I had to test out some custom rules. My current pf.conf is as follows:
-8<-
scrub in
rdr pass on lo0 inet proto tcp all -> 127.0.0.1 port 9040
rdr pass on lo0 inet proto udp to port domain -> 127.0.0.1 port domain
block return out
pass quick on lo0 keep state
pass out quick inet proto tcp user nobody flags S/SA modulate state
pass out quick route-to lo0 inet proto udp to port domain keep state
pass out quick route-to lo0 inet proto tcp all flags S/SA modulate state
-8<-
taken from: https://github.com/sid77/mobiletor/blob/master/pf.conf
I apply it running this script: https://github.com/sid77/sbsettingstor/blob/master/com.sbsettingstor.enable
Tor is running as user nobody (not really secure but I still have to figure out system user management on the platform) and answering DNS queries on 127.0.0.1:53.
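For completeness, the torrc side of this setup (a sketch only; the option names are from the Tor manual, and the exact values are my assumptions based on the rules above) would be along these lines:
-8<-
# Run as the unprivileged user mentioned above
User nobody
# Transparent proxy port matched by the rdr rule
TransPort 9040
# Answer DNS queries on 127.0.0.1:53
DNSPort 53
-8<-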
This solution is failing *REALLY* hard: I ran into a kernel panic as soon as I tried to generate some traffic with Mobile Safari or Cydia.
Is there any pf guru out there who can give me some insights?
Ciao,
Marco
I spent the past week in Sweden, attending the Stockholm Internet
Forum, http://www.stockholminternetforum.se/, for part of it. I made a
number of tails 0.10.2 usb sticks for people on request. I also asked a
lot of people their impressions of Tor and Tails. I received a plethora
of feedback. All 8 people are involved in the Internet Freedom policy,
technology, or freedom of speech communities. They had very different
levels of self-assessed technical skill. The 8 people represented 6
countries.
tl;dr: 8 people were tested; all 8 had trouble doing simple things
with tor browser in Tails. Issues 4-6 are directly related to Tor.
And by asked, I mean, I stuck them in front of my laptop, put the usb
stick in the computer, and asked them to browse to their favorite site.
No one wanted to be video recorded, even if I offered to only record
the screen and not audio.
Everyone managed to power on the laptop and wait for Tails to boot.
# First issue: Language selection
The first issue was on the language selection screen. 4 of 8
people were confused why it was called "Debian Live System" and not
"Tails Live System". 8 of 8 knew what language selection meant, but
weren't sure how this mapped to Tails.
# Second issue: wifi and tor browser
The tor browser starts up before the wireless is configured. The tor
browser then reports a proxy error. With some prompting, all 8 figured
out the wifi and then didn't know what to do. Tor does keep trying to
load, and takes forever because it needs to download the entire
directory. Users have no feedback as to what's going on behind the
scenes because vidalia is hidden.
8 of 8 waited patiently for something to happen on the screen.
# Third issue: green onion
3 of 8 people saw the green onion appear in the menu bar up top. These
three people hovered over it and saw the 'Connected to the Tor Network'
message. No one knew to double-click on it to get a menu of other things
to do. No one knew to right-click on it to get the drop-down menu. They
were presented with the default check.torproject.org 'congratulations'
page and then sat there.
# Fourth issue: check.tpo is not helpful
8 of 8 people saw the default check.torproject.org site telling them
'congratulations. Your browser is configured to use tor.' 7 of 8 people
asked 'where is my browser?' The one who didn't ask this question was
already a firefox user and recognized the interface. 0 of 8 understood
what the IP address message meant. Comments ranged from 'is that
different than my current IP address?' to 'what's an ip address?'
As an aside, when showing someone TBB on their own laptop, they saw the
check.tpo site, and then went to Safari and started it up. When asked
why they did this, the answer was 'safari is my browser. this says your
browser is configured to use tor.'
No one used the language selections at the bottom of check.tpo, nor even
understood why they were there.
# Fifth issue: exit relay congestion/failures
8 of 8 people tried to get to their own sites. 'I wonder what my site
looks like when I'm anonymous' was the most common comment (5 of 8).
For 6 of 8 people, their site didn't load at all, and tor browser
reported their site was unreachable. All 6 then tried to go to
google search in their own language; meaning google.es, google.se, etc.
For 3 of those 6, this didn't work either. They gave up and assumed tor
was broken or was censoring their destinations.
I intervened, opened the vidalia network map, closed the circuit in
question, and asked them to repeat their browsing.
5 of 6 were able to get to their sites now. The one that was not able
to had the same exit relay as last time, Amunet1, in a new circuit and
just couldn't get anywhere through it. After yet another new circuit,
they could get through to everything.
The user has no feedback as to why their site didn't work. And tor
assumes everything is working fine.
When asked "please find a video you like", they all went to youtube.
Most of the videos they wanted to see resulted in 'This video is
currently unavailable.' 8 of 8 assumed it was because youtube was
blocking tor, not because the video is flash-required. 2 of 8 started
randomly clicking videos suggested by youtube to see if any of them
worked. Eventually, 2 of 8 got videos to work with youtube and were
amazed it worked at all.
# Sixth issue: no flash, no warning
2 of 8 people had flash apps on their website. 4 of 8 had ad banners
that used flash. All were surprised at the red outline with a snake in
it appearing instead of their flash apps. None understood what
happened.
After an explanation, one person suggested changing the red outline
with snake to an actual message written inside, along the lines of
'this app blocked for your protection. click here to unblock it.' I
explained why that wouldn't work (because there is no flash, java,
silverlight plugins installed) and their answer was 'then do not show
it at all'. Inside noscript, I unchecked the 'show placeholder..'
option and had them browse again. they were happy. It seems if the user
cannot do anything about the blocked apps, not showing them may be
preferred.
# Seventh issue: shutdown
I asked all 8 to shutdown tails and let me know when they thought their
data was safely no longer on the system. 1 of 8 figured out how to
shutdown tails by clicking the big red button in the upper right
corner. The rest hit the power button on the laptop.
After rebooting, I showed them all they could just pull the usb drive
to do it as well. As soon as tails started shutting down, they all
assumed everything was safe and tried to power off the laptop.
--
Andrew
http://tpo.is/contact
pgp 0x6B4D6475
Filename: 188-bridge-guards.txt
Title: Bridge Guards and other anti-enumeration defenses
Author: Nick Mathewson
Created: 14 Oct 2011
Status: Open
1. Overview
Bridges are useful against censors only so long as the adversary
cannot easily enumerate their addresses. I propose a design to make
it harder for an adversary who controls or observes only a few
nodes to enumerate a large number of bridges.
Briefly: bridges should choose guard nodes, and use the Tor
protocol's "loose source routing" feature to re-route all extend
requests from clients through an additional layer of guard nodes
chosen by the bridge. This way, only a bridge's guard nodes can
tell that it is a bridge, and the attacker needs to run many more
nodes in order to enumerate a large number of bridges.
I also discuss other ways to avoid enumeration, recommending some.
These ideas are due to a discussion at the 2011 Tor Developers'
Meeting in Waterloo, Ontario. Practically none of the ideas here
are mine; I'm just writing up what I remember.
2. History and Motivation
Under the current bridge design, an attacker who runs a node can
identify bridges by seeing which "clients" make a large number of
connections to it, or which "clients" make connections to it in the
same way clients do. This has been a known attack since early
versions {XXXX check} of the design document; let's try to fix it.
2.1. Related ideas: Guard nodes
The idea of guard nodes isn't new: since 0.1.1, Tor has used guard
nodes (first designed as "Helper" nodes by Wright et al in {XXXX})
to make it harder for an adversary who controls a smaller number of
nodes to eavesdrop on clients. The rationale was: an adversary who
controls or observes only one entry and one exit will have a low
probability of correlating any single circuit, but over time, if
clients choose a random entry and exit for each circuit, such an
adversary will eventually see some circuits from each client with a
probability of 1, thereby building a statistical profile of the
client's activities. Therefore, let each client choose its entry
node only from among a small number of client-selected "guard"
nodes: the client is still correlated with the same probability as
before, but now the client has a nonzero chance of remaining
unprofiled.
2.2. Related idea: Loose source routing
Since the earliest versions of Onion Routing, the protocol has
provided "loose source routing". In strict source routing, the
source of a message chooses every hop on the message's path. But
in loose source routing, the message traverses the selected nodes,
but may also traverse other nodes as well. In other words, the
client selects nodes N_a, N_b, and N_c, but the message may in fact
traverse any sequence of nodes N_1...N_j, so long as N_1=N_a,
N_x=N_b, and N_y=N_c, for 1 < x < y.
Tor has retained this feature, but has not yet made use of it.
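The loose source routing constraint above can be sketched as a predicate (a hypothetical illustration, not code from any Tor implementation): the actual path must start at N_a and visit the remaining selected nodes in order, possibly with extra hops in between.

```python
def is_loose_route(selected, actual):
    """True iff `actual` is a valid loose-source-routed traversal of
    `selected`: it begins at the first selected node and visits the rest
    in order, with arbitrary extra hops allowed in between."""
    if not actual or not selected or actual[0] != selected[0]:
        return False  # N_1 must equal N_a
    i = 0
    for node in actual:
        if i < len(selected) and node == selected[i]:
            i += 1  # matched the next required hop
    return i == len(selected)
```

Under strict source routing, only `actual == selected` would be accepted; here any supersequence starting at N_a passes.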
3. Design
Every bridge currently chooses a set of guard nodes for its
circuits. Bridges should also re-route client circuits through
these guard nodes.
Specifically, when a bridge receives a request from a client to
extend a circuit, it should first create a circuit to its guard,
and then relay that extend cell through the guard. The bridge
should add an additional layer of encryption to outgoing cells on
that circuit corresponding to the encryption that the guard will
remove, and remove a layer of encryption on incoming cells on that
circuit corresponding to the encryption that the guard will add.
3.1. An example
This example doesn't add anything to the design above, but has some
interesting inline notes.
- Alice has connected to her bridge Bob, and built a circuit
through Bob, with the negotiated forward and reverse keys KB_f
and KB_r.
- Alice then wants to extend the circuit to node Charlie. She
makes a hybrid-encrypted onionskin, encrypted to Charlie's
public key, containing her chosen g^x value. She puts this in
an extend cell: "Extend (Charlie's address) (Charlie's OR
Port) (Onionskin) (Charlie's ID)". She encrypts this with
KB_f and sends it as a RELAY_EARLY cell to Bob.
- Bob receives the RELAY_EARLY cell, and decrypts it with KB_f.
He then sees that it's an extend cell for him.
So far, this is exactly the same as the current procedure that
Alice and Bob would follow. Now we diverge:
- Instead of connecting to Charlie directly, Bob makes sure that
he is connected to his guard, Guillaume. Bob uses a
CREATE_FAST cell (or a CREATE cell, but see 4.1 below) to open a
circuit to Guillaume. Now Bob and Guillaume share keys KG_f
and KG_b.
- Now Bob encrypts the Extend cell body with KG_f and sends it
as a RELAY_EARLY cell to Guillaume.
- Guillaume receives it, decrypts it with KG_f, and sees:
"Extend (Charlie's address) (Charlie's OR Port) (Onionskin)
(Charlie's ID)". Guillaume acts accordingly: creating a
connection to Charlie if he doesn't have one, ensuring that
the ID is as expected, and then sending the onionskin in a
create cell on that connection.
Note that Guillaume is behaving exactly as a regular node
would upon receiving an Extend cell.
- Now the handshake finishes. Charlie receives the onionskin
and sends Guillaume "CREATED g^y,KH". Guillaume sends Bob
"E(KG_r, EXTENDED g^y KH)". (Charlie and Guillaume are still
running as regular Tor nodes do today).
- With this extend cell, and with all future relay cells
received on this circuit, Bob first decrypts the cell with
KG_r, then re-encrypts it with KB_r, then passes it to Alice.
When Alice receives the cell, it will be just as she would
have received if Bob had extended to Charlie directly.
- With all future outgoing cells that he receives from Alice,
Bob first decrypts the cell with KA_f, and if the cell does
not have Bob as its destination, Bob encrypts it with KG_f
before passing it to Guillaume.
Note that this design does not require that our stream cipher
operations be transitive, even though they are.
Note also that this design requires no change in behavior from any
node other than Bob the bridge.
Finally, observe that even though the circuit is one hop longer
than it would be otherwise, no relay's count of permissible
RELAY_EARLY cells falls lower than it otherwise would. This is
because the extra hop that Bob adds is done with a CREATE_FAST
cell, and so he does not need to send any RELAY_EARLY cells not
originated by Alice.
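The re-encryption step Bob performs can be sketched with a toy XOR stream cipher (purely illustrative; Tor's real relay crypto is AES-CTR with cipher state kept across cells, and the key names below are taken from the example above):

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Toy CTR-style keystream from SHA-256; NOT Tor's actual cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, cell: bytes) -> bytes:
    # XOR with the keystream: applying the same key twice restores the
    # input, so "add a layer" and "remove a layer" are the same call here.
    return bytes(a ^ b for a, b in zip(cell, _keystream(key, len(cell))))

def bridge_relay_inbound(cell: bytes, kg_r: bytes, kb_r: bytes) -> bytes:
    # Bob's transform on cells arriving from Guillaume: strip the guard's
    # layer (KG_r), add the layer Alice expects (KB_r), pass it to Alice.
    return xor_layer(kb_r, xor_layer(kg_r, cell))
```

Note that this toy cipher happens to be transitive, but, as stated above, the design does not rely on that property.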
4. Other ideas and alternative designs
In addition to the design above, there are more ways to try to
prevent enumeration.
4.1. Make it harder to tell clients from bridges
Right now, there are multiple ways for the node after a bridge to
distinguish a circuit extended through the bridge from one
originating at the bridge. (This lets the node after the bridge
tell that a bridge is talking to it.)
One of the giveaways here is that the first hop in a circuit is
created with CREATE_FAST cells, but all subsequent hops are created
with CREATE cells. In the above design, it's no longer quite so
simple to tell, since all of the circuits that extend through a
bridge now reach its guards through CREATE_FAST cells, whether the
bridge originated them or not.
(If we adopt a faster circuit extension algorithm -- for example,
Goldberg, Stebila, and Ustaoglu's design instantiated over
curve25519 -- we could also solve this issue by eliminating
CREATE_FAST/CREATED_FAST entirely, which would also help our
security margin a little.)
The CREATE/CREATE_FAST distinction is not the only way for a
bridge's guard to tell bridges from ordinary clients, however.
Most importantly, a busy bridge will open far more circuits than a
client would. More subtly, response timing will be higher and more
variable than it would be with an
ordinary client. I don't think we can make bridges behave wholly
indistinguishably from clients: that's why we should go with guard
nodes for bridges.
4.2. Client-enforced bridge guards
What if Tor didn't have loose source routing? We could have
bridges tell clients what guards to use by advertising those guard
in their descriptors, and then refusing to extend circuits to any
other nodes. This change would require all clients to upgrade in
order to be able to use the newer bridges, and would quite possibly
cause a fair amount of pain along the way.
Fortunately, we don't need to go down this path. So let's not!
4.3. Separate bridge-guards and client-guards
In the design above, I specify that bridges should use the same
guard nodes for extending client circuits as they use for their own
circuits. It's not immediately clear whether this is a good idea
or not. Having separate sets would seem to make the two kinds of
circuits more easily distinguishable (even though we already assume
they are distinguishable). Having different sets of guards would
also seem like a way to keep the nodes who guard our own traffic
from learning that we're a bridge... but another set of nodes will
learn that anyway, so it's not clear what we'd gain.
5. Other considerations
What fraction of our traffic is bridge traffic? Will this alter
our circuit selection weights?
Are the current guard selection/evaluation/replacement mechanisms
adequate for bridge guards, or do bridges need to get more
sophisticated?
[View Less]
Filename: 189-authorize-cell.txt
Title: AUTHORIZE and AUTHORIZED cells
Author: George Kadianakis
Created: 04 Nov 2011
Status: Open
1. Overview
Proposal 187 introduced the concept of the AUTHORIZE cell, a cell
whose purpose is to make Tor bridges resistant to scanning attacks.
This is achieved by having the bridge and the client share a secret
out-of-band and then use AUTHORIZE cells to validate that the
client indeed knows that secret before proceeding with the Tor
protocol.
This proposal specifies the format of the AUTHORIZE cell and also
introduces the AUTHORIZED cell, a way for bridges to announce to
clients that the authorization process is complete and successful.
2. Motivation
AUTHORIZE cells should be able to perform a variety of
authorization protocols based on a variety of shared secrets. This
forces the AUTHORIZE cell to have a dynamic format based on the
authorization method used.
AUTHORIZED cells are used by bridges to signal the end of a
successful bridge client authorization and the beginning of the
actual link handshake. AUTHORIZED cells have no other use and for
this reason their format is very simple.
Both AUTHORIZE and AUTHORIZED cells are to be used under censorship
conditions and they should look innocuous to any adversary capable
of monitoring network traffic.
As an attack example, an adversary could passively monitor the
traffic of a bridge host, looking at the packets directly after the
TLS handshake and trying to deduce from their packet size if they
are AUTHORIZE and AUTHORIZED cells. For this reason, AUTHORIZE and
AUTHORIZED cells are padded with a random amount of padding before
sending.
3. Design
3.1. AUTHORIZE cell
The AUTHORIZE cell is a variable-sized cell.
The generic AUTHORIZE cell format is:
AuthMethod [1 octet]
MethodFields [...]
PadLen [2 octets]
Padding ['PadLen' octets]
where:
'AuthMethod', is the authorization method to be used.
'MethodFields', is dependent on the authorization Method used. It's
a meta-field hosting an arbitrary amount of fields.
'PadLen', specifies the amount of padding in octets.
'Padding', is 'PadLen' octets of random content.
3.2. AUTHORIZED cell format
The AUTHORIZED cell is a variable-sized cell.
The AUTHORIZED cell format is:
'AuthMethod' [1 octet]
'PadLen' [2 octets]
'Padding' ['PadLen' octets]
where all fields have the same meaning as in section 3.1.
3.3. Cell parsing
Implementations MUST ignore the contents of 'Padding'.
Implementations MUST reject an AUTHORIZE or AUTHORIZED cell where
the 'Padding' field is not 'PadLen' octets long.
Implementations MUST reject an AUTHORIZE cell with an 'AuthMethod'
they don't recognize.
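The parsing rules above can be sketched as follows (a hypothetical illustration, not reference code: the method registry, the assumption that MethodFields' length is known once AuthMethod is, and the function names are all mine):

```python
import struct

KNOWN_AUTH_METHODS = {1}  # hypothetical registry; the proposal leaves this open

def parse_authorize(payload: bytes, method_fields_len: int):
    """Parse a generic AUTHORIZE cell body per sections 3.1 and 3.3.
    `method_fields_len` is assumed to be derivable from AuthMethod,
    since MethodFields is method-dependent."""
    auth_method = payload[0]
    if auth_method not in KNOWN_AUTH_METHODS:
        raise ValueError("unrecognized AuthMethod")        # MUST reject
    fields = payload[1:1 + method_fields_len]
    (pad_len,) = struct.unpack_from("!H", payload, 1 + method_fields_len)
    padding = payload[3 + method_fields_len:]
    if len(padding) != pad_len:
        raise ValueError("Padding is not PadLen octets")   # MUST reject
    return auth_method, fields  # Padding contents are ignored
```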
4. Discussion
4.1. Why not let the pluggable transports do the padding, like they
are supposed to do for the rest of the Tor protocol?
The arguments in the section "Alternative design: Just use pluggable
transports" of proposal 187 apply here as well: all bridges that use
client authorization will also need camouflaged AUTHORIZE/AUTHORIZED
cells.
4.2. How should multiple round-trip authorization protocols be handled?
Protocols that require multiple round-trips between the client and
the bridge should use AUTHORIZE cells for communication.
The format of the AUTHORIZE cell is flexible enough to support
messages from the client to the bridge and vice versa.
At the end of a successful multiple round-trip protocol, an
AUTHORIZED cell must be issued from the bridge to the client.
4.3. AUTHORIZED seems useless. Why not use VPADDING instead?
As noted in proposal 187, the Tor protocol uses VPADDING cells for
padding; any other use of VPADDING makes the Tor protocol kludgy.
In the future, and in the example case of a v3 handshake, a client
can optimistically send a VERSIONS cell along with the final
AUTHORIZE cell of an authorization protocol. That allows the
bridge, in the case of successful authorization, to also process
the VERSIONS cell and begin the v3 handshake promptly.
Hi everybody,
we're discussing in #5684 whether we can stop sanitizing nicknames in
the bridge descriptors that we publish here:
https://metrics.torproject.org/data.html#bridgedesc
The sanitizing process is described here:
https://metrics.torproject.org/formats.html#bridgedesc
When we started making sanitized bridge descriptors available on the
metrics website we replaced all contained nicknames with "Unnamed". The
reason was that "bridge nicknames might give hints on the location of
the bridge if chosen without care; e.g. a bridge nickname might be very
similar to the operators' relay nicknames which might be located on
adjacent IP addresses."
This was an easy decision back then, because we didn't use the nickname
for anything. This has changed with #5629 where we try to count EC2
bridges, which all have similar nicknames. So, while we currently throw
that information away, there'd now be a use for it. Another advantage of having
bridge nicknames would be that they're easier to look up in a status
website like Atlas (which doesn't support searching for bridges yet).
We should reconsider whether it still makes sense to sanitize nicknames
in bridge descriptors or not.
Regarding the reasoning above, couldn't an adversary just scan adjacent
IP addresses of all known relays, not just the ones with similar
nicknames? And are we giving away anything else with the nicknames?
It would be great to get some feedback here whether leaving nicknames in
the sanitized descriptors is a terrible idea, and if so, why.
If nobody objects within the next, say, two weeks, I'm going to make an
old tarball from 2008 available with original nicknames. And if nobody
screams, I'll provide the remaining tarballs containing original
nicknames another two weeks later.
Thanks!
Karsten
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Erring on the side of "release early, release often" I have put my
Twisted-based (asynchronous, Python) Tor control protocol
implementation online:
http://readthedocs.org/docs/txtorcon/en/latest/
https://github.com/meejah/txtorcon
It is MIT licensed (to match Twisted). I would certainly not consider
it "done", and I made it to learn more about Twisted and Python --
criticisms, comments appreciated.
Currently it has the following features (see the above-linked
documentation for more, and examples):
. TorControlProtocol implements the control protocol
. TorState tracks the state of Tor (streams, circuits, routers, address-map), listening for updates
. TorConfig provides read/write configuration access, with HS abstraction (still needs some work)
. IStringAttacher, a stream-to-circuit attacher interface for new streams
. launch_tor can launch slave Tor processes
. integrates into Twisted's endpoints with TCPHiddenServiceEndpoint
The main code is about 1600 LOC, ~4000 with tests and 25% comments
(according to ohcount). There is currently 98% test coverage, if one
believes code-coverage is a good metric.
In the short-term, be aware that I'm planning to re-organize where
things are in files. If you "import txtorcon" and use the classes like
"txtorcon.TorConfig" it will all still work.
Thanks for your attention,
mike
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)
iQEcBAEBAgAGBQJPczCxAAoJEMJgKAMSgGmnTrEH/RG1TLbEsqALWyh5WSm1azYU
7QHx9eup+/NKUE8C6WLPGQyprTkL/snIRZZGYDdkz5grkxcsYaWaVNNtDdUTdctN
KCi2E3rbzdUYHV0aN/VdoNvJdpa8H3J2dpyx4/kFmZ2Z04+VLZOqeX6ANMdYZbYv
FXv37j0dnl15h+t57+65Cf5c8BVbSW50vqXUx/eHWS73BISq3LP30OV4Ut8k3Xbg
IXVf1S/EFeoXxRoGfn9i4i4txeQNyQxCOX0k+fynvIGP+lFuYciSGgGJydYBIkhE
87TMJ//c1tPq41jn5prdbWRTE4mPWA5U03w35wUGrhUWSNUb+OhM6fV4vdRwq30=
=ilGQ
-----END PGP SIGNATURE-----
Forwarding my original answer to Sebastian here.
-------- Original Message --------
Subject: Re: [tor-dev] Can we stop sanitizing nicknames in bridge
descriptors?
Date: Mon, 21 May 2012 19:56:34 +0200
From: Karsten Loesing <karsten(a)torproject.org>
To: Sebastian G. <bastik.tor> <bastik.tor(a)googlemail.com>
Hi Sebastian,
On 5/21/12 7:08 PM, Sebastian G. <bastik.tor> wrote:
(Did you intend to send this mail only to me, not to tor-dev? Feel free
to move the discussion back to tor-dev if you want.)
> Karsten Loesing, 21.05.2012 11:05:
>>>
>>> Here we go with the similarities of bridge and relay nicknames.
>>
>> Thanks for spending this much time on the analysis!
>
> I could have done far worse, but also a lot better in terms of time
> spent on extracting the data that I wanted or at least considered that
> they'd might be useful.
>
> Sometimes I'm just slow at things, e.g. writing this reply.
>
>> Here's what I did with your findings.txt:
>>
>> - extract unique fingerprint pairs of relays and bridges that you found
>> as having similar nicknames,
>>
>> - look through descriptor archives to see if relay and bridge were
>> running in the same /24 at any time in May 2008, and
>>
>> - determine the absolute and relative number of bridges in a given
>> network status that could have been located via nickname similarity.
>>
>> Results are that 24 of your 81 guesses (30%) were correct in the sense
>> that a bridge was at least once running in the same /24 as the relay
>> with similar nickname. At any time in May 2008, you'd have located
>> between 1 and 6 bridges (2.5% to 18%) with 3 bridges (10%) in the mean
>> via nickname similarity.
>
> Not too bad.
I agree. :)
>> I think it's acceptable to publish more recent bridge descriptors with
>> nicknames in a week from now. Results may look quite different with
>> 1000 bridges instead of 30.
>
> May 2008 was the first month with bridges. I expected lots of relay
> operators tested a bridge with the same name. Things may have
> changed over time. I assume that further comparisons won't have such a
> "high" hit ratio.
That would be my guess, too. In May 2008, only a few early adopters
were running bridges, and most of those probably ran relays at the same
time, too. Plus, they were enthusiastic and put some energy in finding
cool nicknames. It might be that this has changed since then. To be
honest, I didn't look at 2012 tarballs yet.
>> Again, thanks for running this analysis! Maybe you're interested in
>> automating your comparison and re-running it for a 2012 tarball?
>
> My claim was you got the data, so you can check. (Not with May 2008)
>
> To be honest, my first impression was that I wouldn't do anything useful
> and did not intend to do that. I guessed it wouldn't turn out that it
> doesn't hurt since at least 2011, so I wouldn't find anything good.
>
> Then you asked and I agreed, but already thought "I couldn't keep my
> mouth shut!". I mean I replied to this topic. I surely could have said
> no there. I didn't.
>
> After and while I was doing what I did. I would have said no to the
> question if I'm going to do this again. That's valid for up to Sunday
> night. Today I'm agreeing again.
>
> That's a pretty long way to say: Yes!
Hah, great! :)
I'm going to make the 2012 tarballs available next Wednesday (May 30),
assuming that my poor Linux box doesn't run out of $resource. I'll let
you know.
> Thank you, it's a 2012 tarball. The number of bridges is scary.
>
> I'm going to upload some files somewhere and explain what I did. Step by
> step (somewhat around that). So anyone can check and reproduce what I
> did. It would be nice to hear feedback and ways to improve the way I did
> what I did.
>
> Maybe you can tell me if the findings.txt was alright.
Yes, the file format was fine.
> Unless one objects or you disagree I'm going to upload the files I
> created and explain how and maybe I can say even why.
No objections at all. Open discussion is good.
> I created a blog, just because I wanted one at some point in the past, but
> found it silly. That's the channel I planned to use. Maybe it's OK to put
> it on a Tor-List as well, but maybe it's considered as noise.
I wonder if the Tor wiki would be a better place to collect ideas for
reversing the bridge descriptor sanitizing process. Feel free to grab a
new page in doc/ and start describing what you did.
Best,
Karsten
Hello
I'm reporting my progress over the last week.
I have been working on implementing support for the safe cookie
authentication method for Stem[1].
It is in a usable state[2] right now, but it isn't fully tested. In
the process, Stem also helped uncover a bug in the AUTHCHALLENGE
error responses[3].
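For reference, the hash computation at the core of safe cookie authentication, per the Tor control-spec's SAFECOOKIE method, looks roughly like this (a sketch, not Stem's actual code):

```python
import hmac
import hashlib

# Fixed HMAC keys from the control-spec's SAFECOOKIE definition.
CLIENT_KEY = b"Tor safe cookie authentication controller-to-server hash"
SERVER_KEY = b"Tor safe cookie authentication server-to-controller hash"

def client_hash(cookie: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    # Sent by the controller in AUTHENTICATE after AUTHCHALLENGE.
    return hmac.new(CLIENT_KEY, cookie + client_nonce + server_nonce,
                    hashlib.sha256).digest()

def server_hash(cookie: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    # Returned by Tor in the AUTHCHALLENGE reply; the controller verifies
    # it to confirm Tor also knows the cookie.
    return hmac.new(SERVER_KEY, cookie + client_nonce + server_nonce,
                    hashlib.sha256).digest()
```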
Now, I'll be writing the tests for the safe cookie implementation and
getting it merged. Next, I'll begin work on the general controller
class. Damien has already implemented the GETINFO response parser.
Beck also wants to help implement the controller class.
I/We will be working on the deliverables scheduled for week two in my
proposal, i.e.
Implementing the wrapper functions and response parsing for the
following control commands,
GETCONF, SETCONF, LOADCONF, RESETCONF, SAVECONF, SIGNAL,
TAKEOWNERSHIP, USEFEATURE and QUIT.
1. https://trac.torproject.org/projects/tor/ticket/5262
2. http://repo.or.cz/w/stem/neena.git/shortlog/refs/heads/safe-cookie-alt
3. https://trac.torproject.org/projects/tor/ticket/5760
--
Regards,
neena
Hi Damian,
I plan to make a few changes to the bridge descriptor sanitizer to
implement changes discussed on this list and in various Trac tickets.
The result will be format version 1.0. Here's what will change compared
to the current (unversioned) format. Can you take a look whether stem
would be happy with these descriptors and if there are other tweaks I
should do to make it even happier?
- Bridge network statuses contain a "published" line containing the
publication timestamp, so that parsers don't have to learn that
timestamp from the file name anymore.
- Bridge network status entries are ordered by hex-encoded fingerprint,
not by base64-encoded fingerprint, which is mostly a cosmetic change.
- Server descriptors and extra-info descriptors are stored under the
SHA1 hashes of the descriptor identifiers of their non-scrubbed forms.
Previously, descriptors were (supposed to be; see #5607) stored under
the digests of their scrubbed forms. The reason for hashing digests is
to prevent looking up an existing descriptor from the bridge authority
by its non-scrubbed descriptor digest. With this change, we don't have
to repair references between statuses, server descriptors, and
extra-info descriptors anymore which turned out to be error-prone
(#5608). Server descriptors and extra-info descriptors contain a new
"router-digest" line with the hex-formatted descriptor identifier.
These lines are necessary, because we cannot calculate the identifier
anymore and because we don't want to rely on the file name.
- Bridge nicknames (#5684) in all descriptor types and dirreq-*
statistics lines (#5807) in extra-info descriptors are not sanitized
anymore.
- All sanitized bridge descriptors contain @type annotations (#5651).
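A sketch of the storage-digest change described above (whether the SHA-1 is taken over the raw 20-byte digest or its hex encoding is an assumption on my part; the function name is invented):

```python
import hashlib

def storage_digest(original_digest: bytes) -> str:
    """Hash the non-scrubbed descriptor's identifier once more with SHA-1,
    so the published value can't be used to look up the descriptor from
    the bridge authority. Assumes the raw 20-byte digest as input."""
    return hashlib.sha1(original_digest).hexdigest()
```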
Please let me know what you think about these changes. I plan to start
sanitizing descriptors with the described changes tomorrow or the day
after and make them available on May 30 (or later if the
sanitizing/compressing/uploading takes much longer than expected).
Thanks,
Karsten
Hello fellow Tor developers!
Some information about me:
*I worked for EFF/Tor Project last year for GSoC 2011; my project was a
blocking-resistant transport evaluation framework:
https://gitweb.torproject.org/user/blanu/blocking-test.git*
I am also the author of a pluggable transport written in python:
https://github.com/blanu/Dust/tree/master/py/dust/services/socks2
I've been working on censorship resistance technology since 2001. Here are
some of my projects:
http://blanu.net/Dust-FOCI.pdf
http://blanu.net/BayesianClassification.pdf
http://blanu.net/Arcadia.pdf
http://blanu.net/Freenet2001.pdf
Some information about the project:
The overall goal of the project is to make it easy for pluggable transports
to be written in python. There has been a lot of interest in doing
pluggable transports in python, but currently they are all written from
scratch. For C transports, obfsproxy can be used to do a lot of the heavy
lifting, making it relatively easy to write a new C-based transport. I've
heard there is also a port of obfsproxy to C++. As the author of a python
transport, I am of course an advocate of writing transports in python.
Fortunately, so are some other Tor folks, so soon it will be easy to write
python transports!
The deliverables for this project are as follows:
*A library for parsing pluggable transport configuration options*
This will be a python library that authors of SOCKS proxies can use to
integrate their proxies with Tor.
*A framework (both server and client-side) for writing pluggable transports
in python*
The framework will provide a SOCKS proxy server already integrated with the
pluggable transport library. All the protocol author will need to do is
provide the obfuscation and de-obfuscation functions and a main function to
do command line parsing and call the framework.
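As an illustration of that division of labor (all names here are invented; the real framework API is still being designed), a transport plugin might look like:

```python
class ReverseTransport:
    """Toy 'obfuscation': reverse each chunk. Purely illustrative; a real
    transport would implement something like obfs2 here."""

    def obfuscate(self, data: bytes) -> bytes:
        return data[::-1]

    def deobfuscate(self, data: bytes) -> bytes:
        return data[::-1]

def pump(transport, client_chunk: bytes) -> bytes:
    # The framework's job: take a chunk from the SOCKS client, run the
    # author-supplied obfuscation hook, and hand the result to the network
    # (and the reverse on the way back).
    return transport.obfuscate(client_chunk)
```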
*A python implementation of the obfsproxy command line tool*
This will be a command line program using the framework that will accept
the same command line options as the existing obfsproxy tool. It will
support the selection of an obfuscation function, although not all of the
protocols currently supported by obfsproxy will initially be available in
python.
*A python implementation of the obfs2 protocol implemented as an obfsproxy
module*
The obfs2 protocol will be implemented as a plugin for the framework and
made available to the command line tool.
*Conversion of Dust to an obfsproxy module*
The Dust protocol will be implemented as a plugin for the framework and
made available to the command line tool.
*py2exe packaging for obfsproxy*
The command line tool will be packaged into a standalone executable for
Windows.
Optional deliverables, if there is sufficient time: obfsproxy modules for
other protocols, and experiments with other packaging systems.
Current status:
I'm working on a spec of the API for the option parsing library. It should
be available soon.
[View Less]