Hi,
in onionoo, all timestamps used to be in the format
YYYY-MM-DD hh:mm:ss
Proposal 328 uses timestamps in this same format:
https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/328-re…
The newly added prop328 fields in onionoo appear to break with that convention:
https://metrics.torproject.org/onionoo.html#details_relay_overload_general_…
Here is an example of an onionoo overload_general_timestamp, which appears
to be at millisecond granularity (the source has a granularity of an hour):
1636038000000
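For reference, that value parses as Unix epoch milliseconds; a quick Python
sketch (value copied from above) converts it to the conventional format:

    from datetime import datetime, timezone

    ms = 1636038000000  # overload_general_timestamp from above
    dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
    print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # -> 2021-11-04 15:00:00

which also makes the hour granularity of the underlying value visible.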
Was there a particular motivation for this format change and granularity?
And what do you think about changing it to the YYYY-MM-DD hh:mm:ss format,
for consistency and to have a directly human-readable format here as well?
related:
Karsten used to maintain the onionoo protocol documentation/changelog and versions:
https://metrics.torproject.org/onionoo.html#versions
Are that page and the 'version' field in onionoo no longer maintained?
(Neither changed with the new fields.)
kind regards,
nusenu
--
https://nusenu.github.io
Hi,
tldr:
- more outdated relays
(that is a claim I'm making and you could
easily prove me wrong by recreating the 0.3.3.x alpha
repos, shipping 0.3.3.7 in them, and watching how things evolve
after a week or so)
- more work for the tpo website maintainer
- less happy relay operators [3][4]
- more work for repo maintainers? (since a new repo needs to be created)
When the tor 0.3.4 alpha repos (deb.torproject.org) first appeared on 2018-05-23,
I was about to submit a PR for the website to include them in the sources.list
generator [1] on tpo, but didn't, because I wanted to wait for a previous PR to be merged first.
The outstanding PR eventually got merged (2018-06-28), but I still did not submit a PR
to update the repo generator for 0.3.4.x, and here is why.
Recently I was wondering why there are so many relays running tor version
0.3.3.5-rc (> 3.2% consensus weight fraction; see OrNetStats or Relay Search).
Then I realized that this was the last version the tor-experimental-0.3.3.x-*
repos shipped before they were abandoned in favor of the new 0.3.4.x-* repos
(I can no longer verify this, since those repos have been removed by now).
Peter made it clear in the past that the current approach of
per-major-version debian alpha repos (e.g. tor-experimental-0.3.4.x-jessie)
will not change [2]:
> If you can't be bothered to change your sources.list once or twice a
> year, then you probably should be running stable.
but maybe someone else would be willing to invoke an "ln" command
every time a new alpha repo is born:
tor-alpha-jessie -> tor-experimental-0.3.4.x-jessie
once the 0.3.5.x repos are created, the link would point to
tor-alpha-jessie -> tor-experimental-0.3.5.x-jessie
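To illustrate, a minimal sketch of that alias flip in Python (the directory
names are illustrative, and this assumes the repo directories live side by
side):

    import os

    # point the generic alias at the newest alpha series; create the new
    # link first, then swap it into place atomically (like ln -nsf)
    target = "tor-experimental-0.3.5.x-jessie"
    alias = "tor-alpha-jessie"
    os.symlink(target, alias + ".tmp")
    os.replace(alias + ".tmp", alias)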
It is my opinion that this would help reduce the number of relays running
outdated versions of tor.
It would certainly avoid having to update the tpo website; that isn't a big task
and could probably be automated, but currently it isn't.
"..but that would cause relay operators to jump from i.e. 0.3.3.x to 0.3.4.x alphas
(and break setups)!"
Yes, and I think that is better than relays stuck on an older version because
the former repo no longer exists and operators still can choose the old repos
which will not jump to newer major versions.
[1] https://www.torproject.org/docs/debian.html.en#ubuntu
[2] https://trac.torproject.org/projects/tor/ticket/14997#comment:3
[3] https://lists.torproject.org/pipermail/tor-relays/2018-June/015549.html
[4] https://trac.torproject.org/projects/tor/ticket/26474
--
https://twitter.com/nusenu_
https://mastodon.social/@nusenu
Hi there. I had an idea recently for an onion service to improve the UX
of sites that require a login. The site would have two onions: one for
those who want to use onion auth and another for those who don't or are
still setting it up. A user would first sign in with a username+password
on the unauthenticated onion and click a button to generate a
certificate associated with their account. Then they would add the
public key to their browser and visit the authenticated onion. The
application server would then match the pubkey used to authenticate with
an account in the database, and log them in automatically.
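For concreteness, the per-account credential here would be an ordinary v3
onion-service client-auth x25519 key pair; a rough sketch of generating one
and printing it in the two formats tor expects (my use of the `cryptography`
package is just an assumption, any x25519 implementation would do):

    import base64
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    def b32(raw: bytes) -> str:
        return base64.b32encode(raw).decode().rstrip("=")

    priv = X25519PrivateKey.generate()
    priv_raw = priv.private_bytes(
        serialization.Encoding.Raw,
        serialization.PrivateFormat.Raw,
        serialization.NoEncryption(),
    )
    pub_raw = priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

    # server side: <HiddenServiceDir>/authorized_clients/<account>.auth
    print("descriptor:x25519:" + b32(pub_raw))
    # client side: <ClientOnionAuthDir>/<name>.auth_private
    print("<onion-addr-without-.onion>:descriptor:x25519:" + b32(priv_raw))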
I've looked in the mailing list archives and `man 1 tor` but didn't find
anything that would facilitate this. The closest, it seems, is
HiddenServiceExportCircuitID, but that is for *circuit* IDs, not
*client* IDs. Is this possible to implement, either as an operator or as
a Tor developer?
As an operator, an alternative would be to generate one (authenticated)
onion service per user and route them all to the same place with
different Host headers, but that seems rather inefficient, and I don't
know how well the tor daemon scales up to hundreds of onion services anyway.
P.S. I didn't find an easy way to do full text search on the mailing
list archives, so I wrote a little script to download them all. I've
attached it in case it ends up useful. It requires python3.8+ and you'll
need to `pip install aiohttp anyio BeautifulSoup4` first. After that you
can run `./pipermail_fetch.py
https://lists.torproject.org/pipermail/tor-dev/` and then something like
`rg --context 3 --search-zip '^[^>].*search term here'` will do the trick.
Hi,
I'm wondering if I can access the shared random value [1] while developing a
protocol/application on top of Tor onion services. The application is still
in early development, but it would be great if I could depend on the shared
random value.
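In case it helps frame the question: the values are published in the
consensus, so one way to get at them today is via stem (a sketch, assuming
a reasonably recent stem; the attribute names are from its consensus
parser):

    from stem.descriptor import DocumentHandler
    from stem.descriptor.remote import DescriptorDownloader

    # fetch the current consensus as a single parsed document
    consensus = DescriptorDownloader().get_consensus(
        document_handler=DocumentHandler.DOCUMENT
    ).run()[0]

    # the shared-rand-previous-value / shared-rand-current-value lines
    print(consensus.shared_randomness_previous_value)
    print(consensus.shared_randomness_current_value)

What I'd like to know is whether building on these values is supported.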
If this is not the correct mailing list for this question, I would be glad
if you could point me to one.
--Sebastian
[1]: PUB-SHAREDRANDOM in
https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt
Hi,
when adding MetricsPort support to ansible-relayor,
I realized that many operators who run more than one tor instance per server
will run into an issue, because tor's relay prometheus metrics have no
identifying label like
fingerprint=
or similar to tell tor instances apart. The default
instance=
label can have the same value for all tor instances on a given server,
so it cannot be used.
To avoid using the nickname (which might not be set), the easiest option is
probably the relay's SHA1 fingerprint, or alternatively the IP:ORPort
combination, which is unique per server but not necessarily globally unique
(RFC1918 IPs).
Another neat option for operators is to use node_exporter's textfile collector
to collect tor's MetricsPort content, avoiding an additional webserver for TLS
and authentication (unlike tor's exporter, node_exporter comes with TLS and
authentication built in). In that case the suggested label would be needed even
more, because relabeling via prometheus' scrape config is no longer possible.
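To illustrate the textfile-collector case, here is a rough sketch of the
workaround that would otherwise be needed: fetch each instance's MetricsPort
output and inject a fingerprint label before writing it out (the ports,
paths and fingerprints below are made up):

    import urllib.request

    # fingerprint -> MetricsPort URL of each local tor instance (illustrative)
    INSTANCES = {
        "0000000000000000000000000000000000000001": "http://127.0.0.1:9035/metrics",
        "0000000000000000000000000000000000000002": "http://127.0.0.1:9036/metrics",
    }

    def with_fingerprint(fp: str, body: str) -> str:
        out = []
        for line in body.splitlines():
            if line.startswith("#") or not line.strip():
                out.append(line)  # comments and blank lines unchanged
            elif "{" in line:     # metric that already carries labels
                out.append(line.replace("{", '{fingerprint="%s",' % fp, 1))
            else:                 # bare metric without labels
                name, rest = line.split(" ", 1)
                out.append('%s{fingerprint="%s"} %s' % (name, fp, rest))
        return "\n".join(out)

    with open("/var/lib/node_exporter/textfile/tor.prom", "w") as f:
        for fp, url in INSTANCES.items():
            body = urllib.request.urlopen(url).read().decode()
            f.write(with_fingerprint(fp, body) + "\n")

A fingerprint= label emitted by tor itself would make all of this unnecessary.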
What do you think about this suggestion?
kind regards,
nusenu
--
https://nusenu.github.io