Greetings Tor devs,
I wanted to give a quick update on the progress of the Network Status APIs[0].
First of all, a couple of weeks ago we deployed an initial version of the service internally so that we can benchmark and test it and see how it holds up.
We have also developed three more APIs: `/details`, `/clients`, and `/weights`. In total, we have now covered almost every endpoint that is currently provided by onionoo.
To give you a little more insight: `/summary` and `/details` fetch data
from the Network Health team database and return a response identical to the one
that onionoo clients currently expect. On the other hand, `/clients`, `/weights`, and `/bandwidth` will proxy the request to VictoriaMetrics.
In the upcoming weeks we plan to stabilize and test these endpoints further and make some performance improvements.
Meanwhile, we are also working on documenting the APIs so that potential clients know how to use them once they are available. The documentation can be found in the project's repository wiki[1].
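For illustration, a proxied endpoint like this boils down to translating client parameters into a VictoriaMetrics (Prometheus-compatible) range query. The metric and parameter names below are invented for the sketch, not the service's actual ones:

```python
# Hypothetical sketch: turn onionoo-style query parameters into a
# VictoriaMetrics /api/v1/query_range URL. VictoriaMetrics exposes the
# Prometheus-compatible HTTP API; the metric/label names are made up here.
from urllib.parse import urlencode

def vm_query_url(base, metric, fingerprint, start, end, step="1d"):
    params = {
        "query": '%s{fingerprint="%s"}' % (metric, fingerprint),
        "start": start,   # unix timestamps
        "end": end,
        "step": step,
    }
    return base + "/api/v1/query_range?" + urlencode(params)

url = vm_query_url("http://vm.example", "bridge_clients",
                   "0123ABCD", 1688169600, 1690848000)
```

A client could either receive the proxied result or, as discussed previously, just this reference URL and decide for itself whether to fetch it.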
Please do reach out to me, hiro or GeKo if you have any ideas or feedback you'd
like to share with us.
Cheers,
Matt
[0]: https://gitlab.torproject.org/tpo/network-health/metrics/networkstatusapi/
[1]: https://gitlab.torproject.org/tpo/network-health/metrics/networkstatusapi/-…
Hello Tor hackers,
I just want to announce "funion", a Tor client implementation in Elixir
which has been my primary side-project from March 2023 to August 2023.
I began this project as an exercise to understand both the Tor
protocol and Elixir a little better. Choosing Elixir came to
mind after watching a Computerphile video[1] introducing Erlang. After
watching it, I concluded that Tor's hierarchy of connections -> circuits
-> streams would fit in very well -- which it actually does.
Currently, the implementation is capable of creating streams across
several hops, with the ability to send and receive data. The
implementation is roughly 3500 LOC. A very neat feature is
that each connection is a dedicated process, with the circuits as
dedicated child processes and the streams as dedicated grandchild
processes. This design leads to some very nice benefits,
including the fact that only the circuit processes hold the
cryptographic keys used in communication (except the TLS keys).
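funion itself is Elixir, but the shape of that hierarchy can be sketched in Python for illustration (the names and structure here are invented, not funion's actual API):

```python
# Illustrative sketch (not funion's API): connections own circuits,
# circuits own streams, and only the circuit level holds relay crypto
# keys -- the connection holds only TLS state, streams hold none.
import os

class Stream:
    def __init__(self, stream_id):
        self.stream_id = stream_id      # no key material at this level

class Circuit:
    def __init__(self, circ_id):
        self.circ_id = circ_id
        self.hop_keys = [os.urandom(32)]  # per-hop keys live only here
        self.streams = {}                 # "grandchild" processes

    def open_stream(self, stream_id):
        self.streams[stream_id] = Stream(stream_id)
        return self.streams[stream_id]

class Connection:
    def __init__(self, tls_key):
        self.tls_key = tls_key            # the only key outside circuits
        self.circuits = {}                # "child" processes

    def create_circuit(self, circ_id):
        self.circuits[circ_id] = Circuit(circ_id)
        return self.circuits[circ_id]

conn = Connection(tls_key=os.urandom(32))
circ = conn.create_circuit(1)
stream = circ.open_stream(7)
```

In funion these are real BEAM processes under a supervision tree rather than plain objects, which is what confines the key material to the circuit processes.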
Keep in mind however that it still lacks many features, most notably
support for the directory protocol, resulting in the necessity of
hard-coding each OR with the keys extracted manually from the consensus.
I am not sure if I will find the time to continue hacking on this,
but it has been a very fun and refreshing experience so far.
The Elixir code might not be the most beautiful out there, as this was
my first project in this language.
I do not consider this to be anywhere near production-ready. Besides
that, I purposely decided to ignore certain practices common in
cryptographic applications (such as overwriting sensitive keys with
zeros) for the sake of simplicity, as my primary intention was to
understand the Tor protocol, not to write a tool that allows secure and
anonymous communication across the internet. If you seriously depend on
anonymity: DO NOT USE THIS!
The repository can be found here: https://github.com/emilengler/funion
A talk given at BornHack 2023 about the internals and pitfalls
can be found here:
https://media.ccc.de/v/bornhack2023-56123-funion-a-tor-client-i
My thanks go to the following people, who helped me understand
various internals of Tor, be it directly or indirectly:
- Roger Dingledine
- Alexander Færøy
- Ian Jackson
- Nick Mathewson
- The Talla Authors
- trinity
[1]: https://www.youtube.com/watch?v=SOqQVoVai6s
Tor has been undertaking security audits of code that we've been
changing. Security audits are a good thing! They uncover blind spots,
peel back assumptions, and present us with ways to improve our overall
security posture. We intend to publish the results of the two that we've
done recently, and commit to publishing every one we undertake - stay
tuned.
The first audit we did recently was a great success! The auditors
remarked that although the scope was large, the number of issues
uncovered was low, and that Tor in general adopts an admirably robust
and hardened security posture and sound design decisions:
"[Tor's] code was written to a first-rate standard and conformed to
secure coding practices ... adopt[ing] highly-advanced and deliberately
security focused building processes ... all which contribute towards
considerable defense-in-depth security posture".
One of the issues that came up was the overall lack of automated
resiliency in our software supply chain. What does this mean? It means
several dependencies in our software were outdated. Why were they
outdated? Because we lack automation. Tracking dependencies manually
is difficult: you need to search for those updates individually
(although some package managers offer automated functionality), and it
can be difficult to handle at scale.
So now we have a solution, Renovate: a highly configurable system for
dependency update automation. It scans your software, discovers
dependencies, automatically checks whether an updated version exists,
and helps you by submitting automated pull requests. It is an open
source project that we are self-hosting on our gitlab (it's like
'dependabot', if you know that).
A number of Tor projects are using it already; please consider using it
for your project! It's very simple to use, and there is no harm in giving
it a try. We are still trying this out, so your feedback[2] is important
for how to move forward. Ideally, we will have this problem solved
automatically for all of our projects, but let's make sure things work
well for everyone first.
How do I use it?
-----------------
To have Renovate work on your gitlab project, you simply have to invite
the 'renovate-bot' user (it's a bot!) to your project (with the `Developer`
access level), and then wait for it to do its work. Next time it runs,
it will open an "Onboarding" issue[0] to get you started.
The first time it runs, there may be a number of dependencies that need
updating, which will result in an MR for each[1]. That could be
overwhelming, but after the initial wave, things will calm down.
Simply review the MR and merge it if it makes sense (making any code
adjustments necessary). If you don't want that MR to happen, simply
close it, and Renovate will stop bugging you about it.
How does it work?
-----------------
There is a project in our gitlab[2], which has a scheduled CI that runs
every 30 minutes. When it runs, it looks to see what projects have the
gitlab bot user 'renovate-bot' as a member, with 'developer' level
access. For each of those projects, it then scans the project for any
dependencies that need updating, and will open MRs to update those
out-of-date dependencies (triggering CI builds).
Your project must also have a CI that is being tended to, so that it
runs and succeeds.
I want to change its behavior
-----------------------------
Renovate is highly configurable. You can decide what you do, and do not,
want from Renovate. There are knobs for practically
everything[3]. Renovate has a default[4] set of configurations that
we've set organization-wide; you can override those in your project and
set any other configuration options[5] you might want.
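As an illustration of those knobs, a repository-level renovate.json might look like this. This is a hedged sketch: the option names come from Renovate's public documentation, and our organization-wide defaults may already cover some of it.

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "schedule": ["before 6am on monday"],
  "labels": ["renovate"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "non-major dependencies"
    }
  ]
}
```

Grouping non-major updates into one MR, as sketched above, is one common way to reduce the initial wave of MRs.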
How to give feedback, ask questions, etc.
-----------------------------------------
If you are looking for help, have questions, or want to give some
feedback on global defaults or other aspects that could be improved,
please file an issue[2]!
0. e.g. https://gitlab.torproject.org/tpo/core/onionmasq/-/merge_requests/101
1. e.g. https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/merge_requests/151
2. https://gitlab.torproject.org/tpo/tpa/renovate-cron/
3. https://docs.renovatebot.com
4. https://github.com/renovatebot/renovate/blob/main/docs/development/configur…
5. https://docs.renovatebot.com/getting-started/use-cases/
As part of Sponsor 112, Tor has an objective to address covert channels
that malicious relays can use to deanonymize users. This proposal lays
the groundwork by categorizing these issues, and prioritizes them
according to their severity. Subsequent proposals will deal with
specific solutions.
I am posting just the intro section of this proposal here. The full
proposal is at:
https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/344-pr…
Filename: 344-protocol-info-leaks.txt
Title: Prioritizing Protocol Information Leaks in Tor
Author: Mike Perry
Created: 2023-07-17
Purpose: Normative
Status: Open
0. Introduction
Tor's protocol has numerous forms of information leaks, ranging from
highly severe covert channels, to behavioral issues that have been
useful in performing other attacks, to traffic analysis concerns.
Historically, we have had difficulty determining the severity of these
information leaks when they are considered in isolation. At a high
level, many information leaks look similar, and all seem to be forms of
traffic analysis, which is regarded as a difficult attack to perform due
to Tor's distributed trust properties.
However, some information leaks are indeed more severe than others: some
can be used to remove Tor's distributed trust properties by providing a
covert channel and using it to ensure that only colluding and
communicating relays are present in a path, thus deanonymizing users.
Some do not provide this capability, but can be combined with other info
leak vectors to quickly yield Guard Discovery, and some only become
dangerous once Guard Discovery or other anonymity set reduction is
already achieved.
By prioritizing information leak vectors by their co-factors, impact,
and resulting consequences, we can see that these attack vectors are not
all equivalent. Each vector of information leak also has a common
solution, and some categories even share the same solution as other
categories.
This framework is essential for understanding the context in which we
will be addressing information leaks, so that decisions and fixes can be
understood properly. This framework is also essential for recognizing
when new protocol changes might introduce information leaks or not, for
gauging the severity of such information leaks, and for knowing what to
do about them.
Hence, we are including it in tor-spec, as a living, normative document
to be updated with experience, and as external research progresses.
It is essential reading material for any developers working on new Tor
implementations, be they Arti, Arti-relay, or a third party implementation.
This document is likely also useful to developers of Tor-like anonymity
systems, of which there are now several, such as I2P, MASQUE, and Oxen.
They definitely share at least some, and possibly even many of these issues.
Readers who are relatively new to anonymity literature may wish to first
consult the Glossary in Section 3, especially if terms such as Covert
Channel, Path Bias, Guard Discovery, and False Positive/False Negative
are unfamiliar or hazy. There is also a catalog of historical real-world
attacks that are known to have been performed against Tor in Section 2,
to help illustrate how information leaks have been used adversarially,
in practice.
We are interested in hearing from journalists and legal organizations
who learn about court proceedings involving Tor. We became aware of
three instances of real-world attacks covered in Section 2 in this way.
Parallel construction (hiding the true source of evidence by inventing
an alternate story for the court -- also known as lying) is a
possibility in the US and elsewhere, but (so far) we are not aware of
any direct evidence of this occurring with respect to Tor cases. Still,
keep your eyes peeled...
0.1. Table of Contents
1. Info Leak Vectors
1.1. Highly Severe Covert Channel Vectors
1.1.1. Cryptographic Tagging
1.1.2. End-to-end cell header manipulation
1.1.3. Dropped cells
1.2. Info Leaks that enable other attacks
1.2.1. Handshakes with unique traffic patterns
1.2.2. Adversary-Induced Circuit Creation
1.2.3. Relay Bandwidth Lying
1.2.4. Metrics Leakage
1.2.5. Protocol Oracles
1.3. Info Leaks of Research Concern
1.3.1. Netflow Activity
1.3.2. Active Traffic Manipulation Covert Channels
1.3.3. Passive Application-Layer Traffic Patterns
1.3.4. Protocol or Application Linkability
1.3.5. Latency Measurement
2. Attack Examples
2.1. CMU Tagging Attack
2.2. Guard Discovery Attacks with Netflow Deanonymization
2.3. Netflow Anonymity Set Reduction
2.4. Application Layer Confirmation
3. Glossary
The remainder of this proposal can be read at:
https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/344-pr…
--
Mike Perry
Dear Tor Volunteers and Developers,
We are reaching out to you today to seek your invaluable expertise and insights in shaping the future of Tor relay updates. As part of an MSc Dissertation, a comprehensive study is underway to gain a deeper understanding of the significance of Tor relay updates from the unique perspective of developers like yourself.
Your participation is pivotal to the success of this research, as we seek to explore your thoughts and opinions on a proposed automatic update mechanism for the relay. By lending your expertise, you will play a key role in advancing the health of the Tor network and enhancing the overall experience for millions of users around the globe.
Rest assured that all your responses will be treated with the utmost care and confidentiality. To safeguard your data, the survey has been thoughtfully designed on an end-to-end encrypted platform, aligning with our commitment to ethical principles. We genuinely value your privacy and will handle your contributions with utmost respect and protection.
We acknowledge the value of your time and effort, and we assure you that your input, which requires only 10 minutes, will wield a significant influence on the advancement of the Tor network. Don't miss out on this chance to contribute; the survey will remain open until August 5th. We invite you to join us today in shaping transformative research for Tor's bright future.
Should you have any questions or require further information, please don't hesitate to contact us at S2450888@ed.ac.uk. We wholeheartedly welcome your feedback and queries, and your cooperation is genuinely appreciated.
Together, we can make a meaningful difference and construct a stronger, more secure online world for everyone. Participate now and be an integral part of shaping the future of Tor relay updates through this link: https://cryptpad.fr/form/#/2/form/view/pb7wKm8zl0D62PsW93fz4wiW73i9nzDM-Jxc…
Thank you for your essential role in this important journey.
Best regards,
Ravi Kumar
S2450888@ed.ac.uk
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. Is e buidheann carthannais a th' ann an Oilthigh Dhùn Èideann, clàraichte an Alba, àireamh clàraidh SC005336.
Hello everyone!
The Network Health team is best known for its work in the bad-relays
area and being concerned with providing metrics + keeping an eye on the
health of the Tor network. While that involves doing analyses to answer
our own questions, it has so far not been clear what we should do with
questions about the network or the available data raised by other
teams (i.e. you). Should we do those analyses as well, or should we just
provide the necessary data and let each team try to answer its own
questions? This has been raised a bunch of times in the
past[1][2], and I am happy to announce that the Network Health team is
stepping up and helping you with your analyses, if desired.
The idea would be that we'd do the analysis work in those cases, alone
or together, and accumulate and improve a set of scripts and tools over
time in a single place, so that both we and other interested parties can
do those analyses faster and better in the long(er) run. The tools
should be open so that other folks can help improve and shape them
according to their needs.
We have a dedicated analysis project[3] where we track all analysis
related issues and which contains tools and documentation about how to
use them for our work. If there is any analysis work you would like us
to get done, please file a ticket in that project. If you did a similar
analysis in the past by yourself we'd love to hear about it as it might
speed up our work and we could build documentation out of it and add
available code to the analysis project. Additionally, please assume we
are not experts in your area of work, but don't hesitate to file tickets
because of that. We'll ask if things are unclear. :)
To get your analysis questions answered in a timely manner it would
greatly help if they were attached to some sponsored work, as that
usually makes it easier to justify picking them up among all the other
things on our plate. Additionally, emergencies (like new blocking events
in country X) should be easy to justify as well, in particular as we
have a "surprise 'anomaly analysis' on the network as needed" item on
our roadmap throughout the year. I plan to make sure that we have
sufficient spare cycles available for that work should it be needed. If
your desired analysis does not fall into either of those categories,
don't despair; we'll try to find a way to get it done nonetheless.
Thanks,
Georg
P.S.: If you are not part of any of our teams but want to do an analysis
by yourself or just note an analysis idea somewhere, feel free to start
by filing a ticket in the analysis project as well. Others might just
jump on it and get it done because they might be curious about the
answers themselves. :)
[1]
https://gitlab.torproject.org/tpo/network-health/metrics/relay-search/-/iss…
[2]
https://gitlab.torproject.org/tpo/network-health/team/-/issues/250#note_286…
[3] https://gitlab.torproject.org/tpo/network-health/analysis
Greetings Tor devs,
Over the last two weeks there has been progress on
the Network Status APIs[0].
The Network Status APIs are going to replace the Onionoo web protocol as
the Network Health team moves most of the current bridge and relay data
from files stored on disk to Postgres and VictoriaMetrics.
We now have an initial working version of the `/summary` and `/bandwidth` endpoints.
The former already has some working filters, while the latter does not
yet. In the upcoming weeks we plan to stabilize and test these endpoints while
we continue the development of the others.
The project initially started as, and was set up to look like, an Onionoo copy
to keep most clients from breaking and to allow a smoother transition to the new
protocol. We are now in the process of redefining some responses, as the Onionoo
implementation introduces some limitations. In particular, the changes will
affect responses that contain time-series data stored in VictoriaMetrics: we
thought it made more sense to return a direct reference to VictoriaMetrics[1]
for those requests, so that a client can decide whether or not to fetch the
referenced data based on its needs.
Please do reach out to me, hiro or GeKo if you have any ideas or feedback you'd
like to share with us.
Cheers,
Matt
[0]: https://gitlab.torproject.org/tpo/network-health/metrics/networkstatusapi/
[1]: https://gitlab.torproject.org/tpo/network-health/metrics/networkstatusapi/-…
Greetings Tor devs,
In an effort to improve current resource utilisation, the
Network Health team is developing a new version of their pipeline [1].
Previously, much of the data was stored in files, which made many operations
slow due to their I/O-bound nature. The new pipeline will move much of the data
related to Tor nodes and bridges from files stored on a single server's disk to
two separate databases: Postgres and VictoriaMetrics.
With this new approach comes the need to develop a new service to
replace the current onionoo web protocol [2].
The main objective of this project is to design a RESTful API service, using the
actix_web framework, that will be integrated into the new pipeline v2.0
to support retrieval of bridge and relay data from the two databases and
additionally provide new features such as historic data search.
The project will be developed over the next few weeks as part of the GSoC
sponsored program.
Please do reach out to me, hiro or GeKo if you have any ideas or feedback you'd
like to share with us.
Cheers,
Matt
[1] https://gitlab.torproject.org/tpo/network-health/team/-/wikis/metrics/colle…
[2] https://gitlab.torproject.org/tpo/network-health/metrics/networkstatusapi/
After the blocking of Tor in Russia in December 2021, the number of
Snowflake users rapidly increased. Eventually the tor process became the
limiting factor for performance, using all of one CPU core.
In a thread on tor-relays, we worked out a design where we run multiple
instances of tor on the same host, all with the same identity keys, in
order to effectively use all the server's CPU resources. It's running on
the live bridge now, and as a result the bridge's bandwidth use has
roughly doubled.
Design thread
https://forum.torproject.net/t/tor-relays-how-to-reduce-tor-cpu-load-on-a-s…
Installation instructions
https://gitlab.torproject.org/tpo/anti-censorship/team/-/wikis/Survival-Gui…
Two details came up that are awkward to deal with. We have workarounds
for them, but they could benefit from support from core tor. They are:
1. Provide a way to disable onion key rotation, or configure a custom
onion key.
2. Provide a way to set a specific authentication cookie for ExtORPort
SAFE_COOKIE authentication, or a new authentication type that doesn't
require credentials that change whenever tor is restarted.
I should mention that, apart from the load-balancing design we settled
on, we have brainstormed some other options for scaling the Snowflake
bridge or bridges. At this point, none of these ideas can immediately be
put into practice, because there's no way to tell tor "connect to one of
these bridges at random, but only one," or "connect to this bridge, but
accept any of these fingerprints."
https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowfl…
# Disable onion key rotation
Multiple tor instances with the same identity keys will work fine for
the first 5 weeks (onion-key-rotation-days + onion-key-grace-period-days),
but after that time the instances will have independently rotated their
onion keys, and clients will have connection failures unless the load
balancer happens to connect them to the instance whose descriptor they
have cached. This post investigates what the failure looks like:
https://lists.torproject.org/pipermail/tor-relays/2022-January/020238.html
Examples of what could work here are a torrc option to set
onion-key-rotation-days to a large value, an option to disable onion key
rotation entirely, or an option to use a certain named file as the onion key.
What we are doing now is a bit of a nasty hack: we create a directory
named secret_onion_key.old, so that a failed replace_file causes an
early exit from rotate_onion_key.
https://gitweb.torproject.org/tor.git/tree/src/feature/relay/router.c?h=tor…
There are a few apparently benign side effects, like tor trying to
rebuild its descriptor every hour, but it's effective at stopping onion
key rotation.
https://lists.torproject.org/pipermail/tor-relays/2022-January/020277.html
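To make the hack concrete, here is a small sketch of the same filesystem trick (using a scratch directory as a stand-in for tor's DataDirectory/keys; tor itself does the rename in C, of course):

```python
# Reproduce the workaround: a *directory* named secret_onion_key.old
# makes the rename in tor's replace_file() fail, so rotate_onion_key()
# bails out before replacing the current onion key. The scratch
# directory below stands in for a real DataDirectory/keys.
import os
import tempfile

keys_dir = tempfile.mkdtemp()
open(os.path.join(keys_dir, "secret_onion_key"), "wb").close()
os.mkdir(os.path.join(keys_dir, "secret_onion_key.old"))

# The rename secret_onion_key -> secret_onion_key.old now fails,
# because the destination is a directory:
try:
    os.replace(os.path.join(keys_dir, "secret_onion_key"),
               os.path.join(keys_dir, "secret_onion_key.old"))
except OSError as e:
    print("rotation blocked:", e.__class__.__name__)
```

In practice this means creating the secret_onion_key.old directory once in each instance's keys directory, before the first rotation would happen.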
# Stable ExtORPort authentication
ExtORPort (extended ORPort) is a protocol that lets a pluggable
transport attach transport and client IP metadata to a connection, for
metrics purposes. In order to connect to the ExtORPort, the pluggable
transport needs to authenticate using a scheme like ControlPort
authentication.
https://gitweb.torproject.org/torspec.git/tree/proposals/217-ext-orport-aut…
tor generates a secret auth cookie and stores it in a file. When the
pluggable transport process is managed by tor, tor tells the pluggable
transport where to find the file by setting the TOR_PT_AUTH_COOKIE_FILE
environment variable.
In the load-balanced configuration, the pluggable transport server
(snowflake-server) is not run and managed by tor. It is an independent
daemon, so it doesn't have access to TOR_PT_AUTH_COOKIE_FILE (which
anyway would be a different path for every tor instance). The bigger
problem is that tor regenerates the auth cookie and rewrites the file on
every restart. All the tor instances have different cookies, and
snowflake-server does not know which it will get through the load
balancer, so it doesn't know what cookie to use.
Examples of what would work here are an option to use a certain file as
the auth cookie, an option to leave the auth cookie file alone if it
already exists, or a new ExtORPort authentication type that can use the
same credentials across multiple instances.
What we're doing now is using a shim program, extor-static-cookie, which
presents an ExtORPort interface with a static auth cookie for
snowflake-server to authenticate with, then re-authenticates to the
ExtORPort of its respective instance of tor, using that instance's auth
cookie.
https://lists.torproject.org/pipermail/tor-relays/2022-January/020183.html
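For background, the SAFE_COOKIE handshake in proposal 217 boils down (as far as I recall; please verify the exact key strings against the spec) to two HMAC-SHA256 computations keyed on the shared 32-byte cookie, which is why a single static cookie shared across all tor instances would be sufficient:

```python
# Sketch of ExtORPort SAFE_COOKIE authentication per proposal 217.
# The HMAC key strings are quoted from memory of the spec -- treat them
# as an assumption. Both sides prove knowledge of the cookie without
# transmitting it, so a shared static cookie would let snowflake-server
# authenticate to whichever tor instance the load balancer picks.
import hashlib
import hmac
import os

cookie = os.urandom(32)        # the shared secret from the cookie file
client_nonce = os.urandom(32)
server_nonce = os.urandom(32)

def auth_hash(key_string):
    return hmac.new(cookie, key_string + client_nonce + server_nonce,
                    hashlib.sha256).digest()

server_hash = auth_hash(b"ExtORPort authentication server-to-client hash")
client_hash = auth_hash(b"ExtORPort authentication client-to-server hash")

# The client verifies server_hash, then sends client_hash; the server
# verifies it symmetrically.
assert len(server_hash) == 32 and len(client_hash) == 32
```

The extor-static-cookie shim effectively runs this handshake twice: once with the static cookie toward snowflake-server, and once with the per-instance cookie toward its tor instance.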
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
I'm happy to announce txtorcon 23.5.0 with the following changes:
* txtorcon's twisted.web.client.Agent instances now use the same
HTTPS policy by default as Twisted's own Agent. It is possible to
override this policy with the tls_context_factory= argument, the
equivalent of Agent's contextFactory=
(Thanks to Itamar Turner-Trauring)
* Added support + testing for Python 3.11.
* No more ipaddress dependency
You can download the release from PyPI or GitHub (or of
course "pip install txtorcon"):
https://pypi.python.org/pypi/txtorcon/23.5.0
https://github.com/meejah/txtorcon/releases/tag/v23.5.0
Releases are also available from the hidden service:
http://fjblvrw2jrxnhtg67qpbzi45r7ofojaoo3orzykesly2j3c2m3htapid.onion/txtor…
http://fjblvrw2jrxnhtg67qpbzi45r7ofojaoo3orzykesly2j3c2m3htapid.onion/txtor…
You can verify the sha256sum of both by running the following 4 lines
in a shell wherever you have the files downloaded:
cat <<EOF | sha256sum --check
93fd80a9dd505f698d0864fe93db8b6a9c1144b5feb91530820b70ed8982651c dist/txtorcon-23.5.0.tar.gz
987f0a91184f98cc3f0a7eccaa42f5054063744d6ac15e325cfa666403214208 dist/txtorcon-23.5.0-py3-none-any.whl
EOF
thanks,
meejah
-----BEGIN PGP SIGNATURE-----
iQFFBAEBCgAvFiEEnVor1WiOy4id680/wmAoAxKAaacFAmRm0DoRHG1lZWphaEBt
ZWVqYWguY2EACgkQwmAoAxKAaae/+wgAw3gAm65npc7+yMdGFixNmCd+RUXorJq9
Hy76hK3BWdtNIA6TZF20QFYs3CX5Vepa0vCJOK1N40fYgxoZTb1/828Zp6Zq2+Gn
piJGvQ0Z1S95ww7lwSV77o67Xf7PozhLR+k7DaOdY8ugvLb/0Rdp15BykF5DWIo8
PRgqB8uZ418ebmDLLrYtqYTdlcUMxFTji4CHXc4N55/2hVHiFiuFt59os6kJ3iG1
u90lmQH8vbDyVF7N6tpgEAdWeb7OdgDbtzhVBdBWHrPg+vDO+UL7WZU8ZjDAcdEr
YzzmK3fIiCH7ngG2E/VIebiJfrjAA9G4eZXltIm7VcWh5css9MXY1Q==
=TeQp
-----END PGP SIGNATURE-----