Hi,
I have to study for an exam on Tuesday, so it would be ideal for me if we
could move the next OONI dev meeting to Tuesday.
Does that work for those interested in attending?
I would suggest we do it at the same time (19:00 CET).
~ Arturo
On 08.01.2015 at 12:11, Arturo Filastò wrote:
>
>
> On 1/8/15 1:21 AM, Jacek Wielemborek wrote:
>> Hello,
>>
>> Below is the log. Am I missing some package or something?
>>
>
> [ snip ]
>
>>
>> pcap_ex.c:18:23: fatal error: pcap-int.h: No such file or directory
>>
>> # include <pcap-int.h>
>>
>> ^
>>
>> compilation terminated.
>>
>> error: command 'gcc' failed with exit status 1
>>
>
> [ snip ]
>
>> [1:18:05][~]$ yum provides pcap-int.h
>> Loaded plugins: auto-update-debuginfo, langpacks
>> (cut)
>> No matches found
>>
>
> From the looks of it you are missing the libpcap-dev package. I believe
> in Fedora it is called libpcap-devel.
>
> If you still encounter other issues you should reach us on the ooni-dev
> mailing list (ooni-dev(a)lists.torproject.org) or try asking around on IRC
> #ooni irc.oftc.net.
>
> ~ Arturo
I do have libpcap-devel installed; it's just this pcap-int.h header that
is both missing and not to be found in any Fedora package:
$ rpm -qa | grep pcap-dev
libpcap-devel-1.6.2-1.fc21.x86_64
Hello,
I was pointed to this mailing list by Ben Zevenbergen.
It seems like there are a few familiar faces in here and I believe some
of you are already quite familiar with the tool in question.
We have recently had some discussions on our OONI mailing list about the
ethics of internet censorship measurements and the best procedure for
getting informed consent from our users.
You can find this thread here:
https://lists.torproject.org/pipermail/ooni-dev/2014-December/000205.html
A volunteer started writing up some improvements to our current warning
message (that is found here:
https://github.com/TheTorProject/ooni-probe#read-this-before-running-oonipr…)
and you can find the improvements to it here:
https://lists.torproject.org/pipermail/ooni-dev/2015-January/000208.html
Some people have pointed out that the above message contains some
wording that is a bit too vague and that can end up excessively scaring
users (or possibly even putting them in danger, because they have
acknowledged that what they are doing could be illegal).
This discussion mainly occurred on IRC so unfortunately it's not
captured anywhere, but I would be happy to further elaborate on it if
you are interested.
What we currently need most is somebody who takes a look at the tool,
thinks about the real risks (if any) that a user of it could face, and
comes up with wording that makes these risks clear to them.
I am happy to further discuss this either via Skype or on our mailing list.
~ Arturo
As agreed with Aleksejs we are going to move this discussion onto the list.
On 1/4/15 8:40 PM, Aleksejs Popovs wrote:
> Hi Arturo,
>
> First of all, sorry for contacting you directly. Ooni-talk seems to be
> quite dead, and I am not sure that this is appropriate for ooni-dev.
> Feel free to redirect me somewhere else.
>
> Secondly, great job on the 31C3 OONI presentation!
>
> Now, onwards to what I wanted to tell you about. Here in Latvia,
> DPI-based filtering is used to block HTTP(S) connections to online
> gambling websites, as mandated by the law on gambling. However, there is
> also speculation originating from ISPs on the possibility of this being
> implemented for unlicensed online mass media, which to me sounds scary
> as hell. There don't appear to be any reports from Latvia in either
> OONI's report repos or Open Net Initiative's lists.
>
Blocking of gambling sites is in fact something very common in greedy
western countries.
How are they implementing blocking for HTTPS sites? It is quite unusual
to see that happening, and having information on it would be interesting.
> I wanted to create an OONI report that would demonstrate this censorship
> in my ISP's (Lattelecom, one of the biggest ones) network. Lattelecom
> uses DPI on port 80 to find requests containing "Host: <blockedhost>"
> and serve them a page like this:
> https://b.popovs.lv/images/blocked_website.png (they also do something
> similar for HTTPS with self-signed certs). I picked a random blocked
> URL, unibet.net <http://unibet.net>, put both HTTP and HTTPS versions of
> it into a text file, and then put a URL of a page on my personal
> website, popovs.lv <http://popovs.lv> (which isn't blocked), to use as a
> baseline.
>
> I ran the test, and it reported some errors and that "censorship is
> probably not happening" (which applies to my homepage, I guess). Here's
> the ooniprobe log and the
> report: https://popovs.lv/crap/ooni/ooni_run.txt and
> https://popovs.lv/crap/ooni/report-http_requests-2015-01-04T165420Z.yamloo
>
> Looking at the report, I saw that, while requests to my homepage went
> through just fine (and, as expected, were not censored), requests to the
> censored pages didn't show the censorship message, but instead showed
> various errors. I got confused as to why I could receive a parsing
> error, but it all cleared up when I tried looking at the plain headers
> using netcat: https://popovs.lv/crap/ooni/netcat.txt . That's right,
> there were no HTTP headers at all — their censorship setup just spits
> HTML out right away. I'm genuinely surprised that browsers actually
> render that. The same idiocy seems to be happening with HTTPS.
>
Oh my, that is some super ghetto censorship equipment at work.
We rely on Twisted's HTTP parsing library, and it appears it does not
handle responses that are out of spec very well.
A new HTTP test template is in the making in this branch:
https://github.com/thetorproject/ooni-probe/tree/feature/http-template
and it may be a good idea to also support logging out-of-spec HTTP
responses there.
In the meantime, what you can do to overcome this limitation of
ooniprobe is to run the http_filtering_bypassing experimental test.
If they are doing blocking based on the HTTP Host header field, that
will trigger the blocking when running "test_normal_request", but it
will also identify some possible ways to bypass the filter by making
slightly modified requests (that is, requests that a normal web server
would accept, but that the filter may fail to match).
With this test we were able to detect some filtering bypassing
techniques in Turkmenistan and Uzbekistan:
https://ooni.torproject.org/tab-tab-come-in-bypassing-internet-blocking-to-…
This test does not use the full HTTP library; it just uses plain
TCP to form the HTTP request and simply logs the HTTP response as a
string without parsing it.
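The approach described above — plain TCP, an unparsed response log, and slightly mutated Host headers — can be sketched roughly as follows. This is a minimal illustration, not the actual ooniprobe implementation, and the helper names are mine:

```python
import socket


def raw_http_request(host, request_bytes, port=80, timeout=5):
    """Send a hand-built HTTP request over plain TCP and return the raw
    response bytes, deliberately unparsed, so that out-of-spec block
    pages (e.g. headerless HTML) are captured verbatim."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(request_bytes)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)


def host_header_variants(host):
    """A few slightly modified requests that a normal web server would
    accept, but that a naive 'Host: <blockedhost>' matcher may miss."""
    h = host.encode("ascii")
    return [
        b"GET / HTTP/1.1\r\nHost: %s\r\n\r\n" % h,          # normal request
        b"GET / HTTP/1.1\r\nhost: %s\r\n\r\n" % h,          # lowercase field name
        b"GET / HTTP/1.1\r\nHost:%s\r\n\r\n" % h,           # no space after colon
        b"GET / HTTP/1.1\r\nHost: %s\r\n\r\n" % h.upper(),  # uppercased hostname
    ]


def looks_like_http(response):
    # A spec-compliant response begins with a status line such as
    # "HTTP/1.1 200 OK"; the block page described above starts with
    # raw HTML instead.
    return response.startswith(b"HTTP/")
```

Logging each variant's raw response and checking which ones come back uncensored is essentially what reveals the bypass techniques.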
> So, I'm not even sure about what I want from you: I guess I just wanted
> you to know about this situation. I don't know how exactly the OONI
> reports are analysed — do you consider errors like this one to be cases of
> censorship? I guess you wouldn't want to implement some hacks to support
> my ISP's stupid quirks, but I just want to know if I can help in any
> further way to report on the net censorship here in Latvia.
>
As I said above, I think it's a good idea to support these sorts of
weird behaviors that ISP filtering equipment exhibits. We may see this
behavior again in the future, and it's useful to be able to link it to
the filtering technology used in Latvia.
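For the HTTPS side mentioned above (block pages served with self-signed certificates), one way to spot the injected certificate is to compare the fingerprint of whatever certificate is actually served with one obtained from an uncensored vantage point. A minimal sketch, with verification deliberately disabled so the handshake succeeds against a bogus certificate; the function names are mine, not ooniprobe's:

```python
import hashlib
import socket
import ssl


def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()


def served_cert_fingerprint(host, port=443, timeout=5):
    """Fingerprint of the certificate the network actually serves.
    Verification is disabled on purpose: a self-signed block-page
    certificate should be captured and reported, not rejected with a
    handshake error."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return fingerprint(der)
```

A mismatch between the fingerprint seen inside the censored network and one from an outside control measurement would reveal the substitution.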
> Huge thanks to you for all of your work on OONI and other net freedom
> and privacy-related projects!
>
> Best regards,
> Aleksejs Popovs
Thanks for your email
~ Arturo
Hi Esther,
I think I had heard of this initiative, but only now do I realize how
useful it could be for my project.
I work on OONI, an internet censorship measurement platform. Recently,
on our mailing list and on IRC, we have had some discussions about the
ethics of measurement and how to better inform users of the risks they
may face when using our software.
You can read more about this discussion here:
https://lists.torproject.org/pipermail/ooni-dev/2014-December/000205.html
https://lists.torproject.org/pipermail/ooni-dev/2014-December/000206.html
https://lists.torproject.org/pipermail/ooni-dev/2014-December/000207.html
Recently a volunteer has suggested some ways to improve our warning text
and you can see that here:
https://lists.torproject.org/pipermail/ooni-dev/2015-January/000208.html
In there, there are some open questions that would require legal
feedback.
Is this something you or somebody from your team would be available to
take a look at?
More generally, we would also like to have a go-to person for legal
support in each of our users' countries. I think it would be great if we
could add to the legal disclaimer that, for some set of countries, we
know who these legal contacts are, and that users should promptly
contact us so that we can direct them to the right person.
Any feedback and further advice would be great.
Thanks!
~ Arturo
On 1/5/15 3:29 PM, Esther Lim wrote:
> Dear OTF Project,
>
> Happiest of Happy New Year.
>
> I am Esther, the newest addition to the OTF team and the current lead
> for the OTF Legal Lab. I very much look forward to working with all of
> you one way or another.
>
> We are looking to restructure, revamp, and remake the Legal Lab to
> better suit your needs. But first, we need to know what might be helpful.
>
> To remind you, the Legal Lab has mostly served as an aggregating point
> and a mediator between various Legal Clinics and our projects. We aim to
> continue that aspect, but plan to expand in other ways.
>
> Please share your thoughts! You can find the Legal Lab Survey Here
> <https://docs.google.com/forms/d/1njfBDX23lXTu_alKcixD539YrTEQ79xkYgGVJ-VgBA…>
>
> Thanks in advance,
>
Hello Oonitarians,
This is a reminder that today there will be the weekly OONI meeting.
It will happen as usual on the #ooni channel on irc.oftc.net at 18:00
UTC (19:00 CET, 13:00 EST, 10:00 PST).
Everybody is welcome to join us and bring their questions and feedback.
See you later,
~ Arturo
Hi everyone,
I just subscribed to this list, because Arturo asked me to comment on
two postings here. As a very quick introduction, and because I don't
know how distinct the Tor community and the OONI community are: I'm the
developer behind the Tor network data collector CollecTor [-1] and the
Tor Metrics website that aggregates and visualizes Tor network data [0].
Here's what Arturo asked on another thread:
> With OONI what we are currently focusing on is Bridge Reachability
> measurements. We have at this time 1 meter in China, 1 in Iran (a second
> one is going to be setup soon), 1 in Russia and 1 in Ukraine. We have
> some ideas of the sorts of information we would like to extract from
> this data, but it would also be very good to have some more feedback
> from you on what would be useful [1].
Long mail is long. Some random thoughts:
- For Tor network data it has turned out to be quite useful to strictly
separate data collection from data aggregation from data visualization.
That is, don't worry too much about visualizing the right thing, but
start with something, and if you don't like it, throw it away and do it
differently. And if you're aggregating the wrong thing, then aggregate
the previously collected data in a different way. Of course, if you
figure out you collected the wrong thing, then you won't be able to go
back in time and fix that.
- I saw some discussion of "The pool from where the bridge has been
extracted (private, tbb, BridgeDB https, BridgeDB email)". Note that
isis and I are currently talking about removing sanitized bridge pool
assignments from CollecTor. We're thinking about adding a new config
line to tor that states the preferred bridge pool, which could be used
here instead. Just as a heads-up, six months or so in advance. I can
probably provide more details if this is relevant to you.
> Another area that perhaps overlaps with the needs of the metrics is data
> storage. Currently we have around 16 GB of uncompressed raw report data
> that needs to be archived (currently it's being stored and published on
> staticiforme, but I have a feeling that is not ideal especially when the
> data will become much bigger) and indexed in some sort of database.
> Once we put the data (or a subset of it) in a database producing
> visualizations and exposing the data to end users will be much simpler.
> The question is if this is a need also for
> Metrics/BwAuth/ExitScanner/DocTor and if we can perhaps work out some
> shared infrastructure to fit both of our goals.
> Currently we have placed the data inside of MongoDB, but some concerns
> with it have been raised [2].
Again, some random thoughts:
- For Metrics, the choice of database is entirely an internal decision,
and no user would ever see that. It's part of the aggregation part. If
we ever decide to pick something else (than PostgreSQL in this case),
we'd have to rewrite the aggregation scripts, which would then produce
the same or similar output (which is a .csv file in our case). That
being said, trying out MongoDB or another NoSQL variant might be
worthwhile, but don't rely on it too much.
- Would you want to add bridge reachability statistics to Tor Metrics?
I'm currently working on opening it up and making it easier for people
to contribute metrics. Maybe take a look at the website prototype that
I posted to tor-dev@ a week ago [3] (and if you want, comment there). I
could very well imagine adding a new section "Reachability" right next
to "Diversity" with one or more graphs/tables provided by you. Please
see the new "Contributing to Tor Metrics" section on the About page for
the various options for contributing data or metrics.
- Please ask weasel for a VM to host those 16 GB of report data; having
it on staticiforme is probably a bad idea. Also, do you have any plans
to synchronize reports between hosts? I'm planning such a thing for
CollecTor where two or more instances fetch relay descriptors from
directory authorities and automatically exchange missing descriptors.
- I could imagine extending CollecTor to also collect and archive OONI
reports, as a long-term thing. Right now CollecTor does that for Tor
relay and bridge descriptors, TORDNSEL exit lists, BridgeDB pool
assignment files, and Torperf performance measurement results. But note
that it's written in Java and that I hardly have development time to
keep it afloat; so somebody else would have to extend it towards
supporting OONI reports. I'd be willing to review and merge things. We
should also keep CollecTor pure Java, because I want to make it easier
for others to run their own mirror and help us make data more redundant.
Anyway, I can also imagine keeping the OONI report collector distinct
from CollecTor and only exchanging design ideas and experiences, if
that's easier.
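The mirror-synchronization idea sketched above — instances advertising what they hold and fetching whatever they are missing — boils down to set reconciliation over descriptor identifiers. A toy sketch (the digest-based identity and the function names are my assumptions, not CollecTor's actual design):

```python
def missing_from_us(ours, theirs):
    """Descriptor digests a peer holds that we lack and should fetch."""
    return set(theirs) - set(ours)


def reconcile(instances):
    """instances: name -> set of descriptor digests held by that mirror.
    Mutual exchange of missing items converges to every instance holding
    the union of all sets, which this computes directly."""
    union = set()
    for held in instances.values():
        union |= held
    return {name: set(union) for name in instances}
```

In practice each instance would expose its digest list over HTTP and periodically call something like `missing_from_us` against each peer, but the convergence target is the same union.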
Lots of ideas. What do you think?
All the best,
Karsten
[-1] https://collector.torproject.org/
[0] https://metrics.torproject.org/
[1] https://lists.torproject.org/pipermail/ooni-dev/2014-October/000176.html
[2] https://lists.torproject.org/pipermail/ooni-dev/2014-October/000178.html
[3] https://kloesing.github.io/metrics-2.0/
Hi,
This is a reminder that today there will be the weekly OONI meeting.
It will happen as usual on the #ooni channel on irc.oftc.net at 18:00
UTC (19:00 CET, 13:00 EST, 10:00 PST).
Everybody is welcome to join us and bring their questions and feedback.
See you later,
~ Arturo