Hello!
Just wanted to remind you that the regular biweekly pluggable
transports meeting is going to take place tomorrow at 16:00 UTC, in
the #tor-dev IRC channel on the OFTC network.
Thanks for your attention!
George Kadianakis wrote:
> == Opt-in HS indexing service ==
>
> This seems like a fun project that can be used in various ways in
> the future. Of course, the feature must remain opt-in so that only
> services that want to be public will surface.
>
> For this project, we could make some sort of 'HS authority' which
> collects HS information (the HS descriptor?) from volunteering
> HSes. It's unclear who will run an HS authority; maybe we can work
> with ahmia so that they integrate it in their infrastructure?
>
> If we are more experimental, we can even build a basic petname
> system using the HS authority [2]. Maybe just a "simple" NAME <->
> PUBKEY database where HSes can register themselves in a FIFO
> fashion. This might cause tons of domain camping and attempts for
> dirty sybil attacks, but it might develop into something useful.
> Worst case we can shut it down and call the experiment done?
> AFAIK, I2P has been doing something similar at
> https://geti2p.net/en/docs/naming
>
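For illustration, a minimal sketch of the FIFO NAME <-> PUBKEY registry idea
quoted above, assuming a simple SQLite store and first-come-first-served
registration (the schema and function names are hypothetical, not part of
any existing HS authority):

# Illustrative sketch only: a first-come-first-served NAME <-> PUBKEY
# registry as described in the quoted proposal. The SQLite schema and
# function names are hypothetical.
import sqlite3

def open_registry(path="petnames.db"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS petnames (
                      name   TEXT PRIMARY KEY,   -- human-meaningful name
                      pubkey TEXT NOT NULL,      -- e.g. the .onion address
                      added  INTEGER             -- registration time (unix)
                  )""")
    return db

def register(db, name, pubkey, now):
    """Register name -> pubkey, FIFO: the first registrant wins."""
    try:
        db.execute("INSERT INTO petnames VALUES (?, ?, ?)",
                   (name, pubkey, now))
        db.commit()
        return True
    except sqlite3.IntegrityError:
        return False   # name already taken (squatting is the known risk)

def resolve(db, name):
    row = db.execute("SELECT pubkey FROM petnames WHERE name = ?",
                     (name,)).fetchone()
    return row[0] if row else None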
We have been running our petname system for at least five years
(probably longer), mostly without incident. The above linked page
covers details of the current system; see [0] for a (slightly dated)
more general discussion of the reasons behind the I2P naming system,
common arguments and possible alternatives.
We have also started looking at GNS as an option [1,2]. I like the
concept, and the ability for users to easily control their own domain
name zones, which reduces the problem of a user losing their
Destination keys (the .onion equivalent) to only needing to securely
maintain their zone key. But the UI/UX needs vast improvement if
everyday users are to understand it, and we need to research the
trade-offs carefully. I am happy to discuss this further if anyone is
interested, because a combined / general approach would benefit
multiple networks.
str4d
[0] https://geti2p.net/en/docs/discussions/naming
[1] http://zzz.i2p/topics/1545 (in I2P)
[2] http://trac.i2p2.de/wiki/GNS
Hi, all!
You probably know that we do our regular meetings for tor (the
program) development on Wednesdays at 1330 UTC on the #tor-dev IRC
channel on irc.oftc.net.
But did you know everybody who's interested in doing anything with tor
is welcome to attend? And did you know that anybody who's interested
is welcome to lurk?
Also, we've started doing a weekly patch workshop where we all look at
each other's patches and try to help get them done, try to help code
get better, and so forth. That's on Tuesdays at 1700 UTC on the same
IRC channel! (Thanks to rl1987 for suggesting this.)
cheers,
--
Nick
Hi all,
I've released version 3.1.20, which includes some networking bugfixes
and optimizations.
It also includes the following two new features:
1) With numes we discussed https://github.com/globaleaks/Tor2web-3.0/issues/157
In the past I added a feature to update the blocklist of a
tor2web node by merging in a remote blocklist.
numes asked for the possibility to replace the local blocklist with a
remote one in order to implement different policies.
The result is that we now have two modes for this: REPLACE mode and
MERGE mode.
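To make the difference between the two modes concrete, here is a rough
sketch of the intended semantics in Python (this is illustrative only,
not the actual Tor2web code; the function names are assumptions):

# Illustrative sketch of the two blocklist update modes; the real
# Tor2web implementation and its configuration keys may differ.
import urllib.request

def fetch_remote_blocklist(url):
    """Download a remote blocklist, one entry per line."""
    with urllib.request.urlopen(url) as resp:
        return {line.strip()
                for line in resp.read().decode().splitlines()
                if line.strip()}

def update_blocklist(local_entries, remote_url, mode="MERGE"):
    remote_entries = fetch_remote_blocklist(remote_url)
    if mode == "MERGE":
        # keep the local policy and add the remote entries on top
        return local_entries | remote_entries
    elif mode == "REPLACE":
        # adopt the remote policy wholesale, discarding local entries
        return remote_entries
    raise ValueError("mode must be MERGE or REPLACE")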
2) https://github.com/globaleaks/Tor2web-3.0/issues/158
In GlobaLeaks we previously had the need to add a TRANSLATION mode to
tor2web to make a single node act as a proxy for only a single hidden
service: e.g., demo.globaleaks.org -> demo-HS
Now I've added an /etc/hosts-like file that enables a mapping like the following:
demo.globaleaks.org hiddenservice1.onion
adopter1.demo.globaleaks.org hiddenservice2.onion
adopter2.demo.globaleaks.org hiddenservice3.onion
This is particularly interesting for implementing
mnemonic name translation for specific hidden services, e.g.:
puppetname.tor2web.org antani123.onion
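Purely as an illustration of the mapping, a small sketch of how such an
/etc/hosts-like file could be parsed and looked up (the file path and
function names below are made up for the example):

# Illustrative parser for an /etc/hosts-like mapping as shown above:
# "<public hostname> <hidden service .onion>" per line, '#' for comments.
def load_hs_map(path):
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if not line:
                continue
            hostname, onion = line.split()
            mapping[hostname.lower()] = onion.lower()
    return mapping

def translate(mapping, hostname):
    """Return the .onion address a request for `hostname` is proxied to."""
    return mapping.get(hostname.lower())

# e.g. translate(load_hs_map("/etc/tor2web-hosts"), "demo.globaleaks.org")
# would return "hiddenservice1.onion" with the mapping above.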
As usual, you will find documentation for this at
https://github.com/globaleaks/Tor2web-3.0/wiki/Configuration-Guide and
in /etc/tor2web.conf.example
ciao!
evilaliv3
On 18 Oct 2014, at 13:29, tor-dev-request@lists.torproject.org wrote:
> Date: Fri, 17 Oct 2014 22:29:02 -0400
> From: Nick Mathewson <nickm@freehaven.net>
> To: tor-dev@lists.torproject.org
> Subject: Re: [tor-dev] Building TOR using Visual Studio
> * Some people want to use paid versions of Visual Studio, and have
> paid for a version earlier than VS2013, and don't want to pay for a
> newer one. I sympathize with this: I've been on paid upgrade
> treadmills myself, and it's always tempting to save money by skipping
> steps in the upgrade treadmill. I've even paid for commercial
> compilers in my misspent youth.
Are there no-cost, non-license-restricted compilers available for Windows that support C99?
This could be a way out for those who don't wish to pay for the VS 2013 upgrade.
But it's a bit more of a barrier than using an existing compiler on the system.
> * Some compilers for weird old hardware have never been upgraded to
> even rudimentary C99 support, and trying to build code with those
> weird old compilers is a good way to expose some bugs. I sympathize
> with this too: there was one guy who would always compile new versions
> of Tor on his old Irix boxes, and he always turned up a new warning or
> two when he did.
Static analysers, better compiler warnings, and runtime checks are starting to fill the role previously occupied by obscure systems. And mobile/embedded platforms help with this too :-)
I think we may be able to compensate for dropping support for old C89-only compilers by using a combination of Coverity, clang --analyze, gcc/clang -ftrapv, and clang -fsanitize=undefined-trap -fsanitize-undefined-trap-on-error.
Oh, and unit tests :-)
teor
pgp 0xABFED1AC
hkp://pgp.mit.edu/
https://gist.github.com/teor2345/d033b8ce0a99adbc89c5
http://0bin.net/paste/Mu92kPyphK0bqmbA#Zvt3gzMrSCAwDN6GKsUk7Q8G-eG+Y+BLpe7w…
On 17 Oct 2014, at 23:00, tor-dev-request@lists.torproject.org wrote:
> Date: Fri, 17 Oct 2014 15:09:38 +0400
> From: Vladimir <vilgeforce@gmail.com>
> To: tor-dev@lists.torproject.org
> Subject: Re: [tor-dev] Building TOR using Visual Studio
>
> VS2013 is free only in the Express version, and the Express version is limited:
> it doesn't have a profiler, for example. Moreover, there are limitations on
> using the Express version; it's written in the license.
>
> I've attached a diff where I changed the code for VS2008 (tested on ubuntu-64
> gcc 4.8.2 too). Also I changed the readme: it contained an instruction to run
> ./configure, but there's no such file.
>
Hi all,
This mail is to inform you that a new version of ooniprobe has just been
released.
Here is the changelog:
v1.2.2 (Fri, 17 Oct 2014)
-------------------------
Who said Friday the 17th is only bad luck?
* Add two new report entry keys: test_start_time and test_runtime
* Fix a bug that led to ooniresources not working properly
It is now available for download from pypi:
https://pypi.python.org/pypi/ooniprobe
and will soon be available for download in Debian.
Have fun!
~ Arturo
Hi all! I'm new to this list :-)
I decided to understand how Tor works and I want to build it in VS to debug
it and explore its internals. I have Visual Studio on my first PC and I
got errors during build on address.c. I investigated the reason: commit
0ca83872468af59b94e14fe7fdfcb38cb5a3f496
I have Visual Studio Express 2013 on my second PC and I didn't have any
problems building Tor.
So I have two questions: did you decide not to support old Visual Studio
versions, or would it be better for Tor to build in VS2008 too? If old
versions aren't supported, should there be some #error directives in the
sources to explain this decision? It was really hard to understand where
the problem was, so I think an #error would be very helpful.
If you give me some instructions about the problem, I'll try to commit the
changes. Thank you.
Hi all
For the last couple of days I've been thinking about the visualization
of the bridge reachability data and how it relates to the currently
deployed ooni [7] system. Here are the conclusions:
== Variables ==
I think that the statistical variables for the bridge reachability
reports are:
- Success of the nettest (yes, no, errors)
- The PT of the bridge (obfs3, obfs2, fte, vanilla)
- The pool from where the bridge has been extracted (private, tbb,
BridgeDB https, BridgeDB email)
- The country "of the ooni-probe"
With these variables I believe we can answer a lot of questions related
to how much censorship is taking place, where and how.
But there's something left: the timing. George sent an email [0] in
which he proposes a timeline of events [5] for every bridge that would
allow us to diagnose with much more precision how and why a bridge is
being censored. To build that diagram we should first define the events
that will be shown in the timeline. I think those events are the values
of the pool variable and whether the bridge is being blocked in a given country.
With the events defined, I think we can define another variable:
- Time deltas between bridge events.
So, for example, this variable will answer questions like: how many {days,
hours...} does it take China to block a bridge that is published in
BridgeDB? Is China blocking new bridges at the same speed as Iran? How
many days does it take China to block a private bridge?
There are some ambiguities related to the deltas: for example, if a
bridge is sometimes blocked and sometimes not in a country, which delta
should we compute?
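To make the delta idea concrete, a small sketch of how one such delta
could be computed from a per-bridge event timeline (the event names and
record layout are assumptions, not an existing ooni format):

# Illustrative only: compute "time from publication in BridgeDB to the
# first observed block in a country" from a hypothetical event list.
from datetime import datetime

events = [
    # (timestamp, event, country or None) -- hypothetical records
    (datetime(2014, 9, 1, 12, 0), "published_bridgedb_https", None),
    (datetime(2014, 9, 4, 8, 30), "blocked", "CN"),
    (datetime(2014, 9, 9, 16, 0), "blocked", "IR"),
]

def blocking_delta(events, country):
    """Delta between publication and the first block seen in `country`."""
    published = min(t for t, e, _ in events if e.startswith("published"))
    blocks = [t for t, e, c in events if e == "blocked" and c == country]
    return (min(blocks) - published) if blocks else None

print(blocking_delta(events, "CN"))   # -> 2 days, 20:30:00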
Finally, in the etherpad [1] tor's bootstrap is suggested as a
variable, but I don't understand why. Is it to detect some form of
censorship? Can anyone explain a little more?
== Data schema ==
In the last email Ruben, Laurier and Pascal "strongly recommended
importing the reports into a database". I deeply believe the same.
We should provide a service to query the values of the previous
variables plus the timestamp of the nettest and the fingerprint of the
bridge.
With this database the inconsistencies between the data formats of the
reports would go away and working with the data would be much easier.
I think that we should also provide a way to export query results to
CSV/JSON to allow other people to dig into the data.
I also believe that we could use MongoDB for one simple reason: we can
distribute it very easily. But let me explain why in the Future section.
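As a sketch of what such a collection and a query could look like with
pymongo (the database/collection names and the document fields are
illustrative, not an agreed-upon schema):

# Illustrative sketch; the names and fields below are assumptions,
# not an existing ooni-backend schema.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
reports = client["ooni"]["bridge_reachability"]

reports.insert_one({
    "bridge_fingerprint": "ABCD1234...",    # placeholder value
    "transport": "obfs3",                   # obfs2/obfs3/fte/vanilla
    "pool": "bridgedb_https",               # private/tbb/https/email
    "probe_country": "CN",
    "success": False,
    "test_start_time": "2014-10-17T22:00:00Z",
})

# Example query: all failed obfs3 measurements seen from China.
for doc in reports.find({"probe_country": "CN",
                         "transport": "obfs3",
                         "success": False}):
    print(doc["bridge_fingerprint"], doc["test_start_time"])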
== Biased data ==
Can a malicious ooni-probe bias the data? For example, if it executes
some tests in bursts, the reports are going to be the same and the general
picture could be biased. Any more ideas?
== Geo Data ==
In the etherpad [1] it's suggested to increase the granularity of the
geo data to detect geographical patterns, but it seems [2] that at least
in China there are no such patterns, so maybe we should discard the idea
altogether.
== Playing with data ==
So far I've talked about the data. Now I want to address how to
present it.
I think we should provide a way to play with the data to allow a more
thoughtful and precise diagnosis of censorship.
What I was thinking is to enhance the interactivity of the visualization
by allowing the user to render the diagrams as she
thinks about the data.
The idea is to allow the user to go from more general to more concrete
data patterns. So imagine that the user loads the visualization's page.
First he sees a global heat map of censorship measured with the bridge
reachability test. He is Chinese, so he clicks on his country and a
histogram like [3] for China is stacked at the bottom of the global map.
He then clicks on obfs2 and a diagram like [4] is also stacked at
the bottom, but only showing the success variable for the obfs2 PT. Then
he clicks on the True value of the success variable, and all the bridges
that have been reached by all the nettest executions in that period of
time in China are shown. Finally he selects one bridge, and its
timeline [5] plus its link to Atlas [6] is provided.
This is only one particular scenario; the core idea is to provide the user
with the capability to draw conclusions as far as she desires.
The user started with the most general view of the data, and he
applied restrictions to the datapoints to dig deeper into it. From
general to specific, he can start forming hypotheses that he later
discards or confirms with the additional info displayed in the next diagram.
There are some usability problems with the selection of diagram+variable
and the diverse set of users that will use the system, but I'd be very
glad to think about them if you like the idea.
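As a rough sketch of the underlying mechanism (general-to-specific
filtering of the same set of report records; the field names are
hypothetical and match the illustrative schema sketched earlier):

# Illustrative drill-down: each click in the UI adds one more
# restriction to the same list of report records.
all_records = [
    {"probe_country": "CN", "transport": "obfs2", "success": True,  "bridge": "B1"},
    {"probe_country": "CN", "transport": "obfs2", "success": False, "bridge": "B2"},
    {"probe_country": "IR", "transport": "fte",   "success": True,  "bridge": "B3"},
]

def drill_down(records, **restrictions):
    """Return the records matching every restriction applied so far."""
    return [r for r in records
            if all(r.get(k) == v for k, v in restrictions.items())]

china = drill_down(all_records, probe_country="CN")    # country view
china_obfs2 = drill_down(china, transport="obfs2")     # + transport
reachable = drill_down(china_obfs2, success=True)      # + success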
== Users ==
I think there are three sets of users:
1- A Tor user who is interested in the censorship performed in their
country and how to circumvent it.
2- A journalist who wants to write something about censorship but isn't
that tech savvy.
3- A researcher who wants up-to-date and detailed data about censorship.
I believe we can provide a system that satisfies all three of them if we
succeed at the previous point.
== Future ==
So, why do I think that we should index the data with MongoDB? Because I
think that this data repository should be provided as a new ooni-backend
API related to the current collector.
Right now the collectors can record reports from any ooni-probe
instance that chooses to submit them, and the collector API is completely
separated from the bouncer API, which overall is a wise design decision
because you can deploy ooni-backend to work only as a collector. So it's
not unreasonable to think that we can have several collectors collecting
different reports, because the backend is designed to do so; therefore we
need the data repository to be distributed. And MongoDB is good at this.
If we build the database for the bridge reachability nettests, I think
that we should design it to eventually index all nettest reports, and
therefore generalize the design, implementation and deployment of all
the work that we are going to do for bridge reachability.
That way an analyst can query the distributed database with a proper
client that connects to the data repository's ooni-backend API.
So to sum up, I started talking about the bridge reachability
visualization problem and finished with a much broader vision that
intends to integrate the ongoing bridge reachability efforts to
improve ooni as a whole.
Hope the email is not too long.
ciao
[0] https://lists.torproject.org/pipermail/tor-dev/2014-October/007585.html
[1] https://pad.riseup.net/p/bridgereachability
[2] https://blog.torproject.org/blog/closer-look-great-firewall-china
[3] http://ooniviz.chokepointproject.net/transports.htm
[4] http://ooniviz.chokepointproject.net/successes.htm
[5] https://people.torproject.org/~asn/bridget_vis/tbb_blocked_timeline.jpg
[6] https://atlas.torproject.org
[7] https://ooni.torproject.org/
Hi Nick, All,
I've made some minor corrections to proposal 237. Mostly these
are cosmetic changes, but I did remove some...overstatements,
as well. Attached is the patch, but it's only available as a
branch in my personal repo [0] as prop237-clarifications.
Let me know if you have any comments/suggestions (the overall
proposal is unchanged). If not, I'll start implementing this
within the next few days.
Thanks!
Matt
[0] https://git.torproject.org/user/sysrqb/torspec.git