Resending to tor-dev with correct email address. Sorry to those receiving 2
copies.
On Oct 8, 2013 2:02 AM, "SiNA Rabbani" <sina(a)redteam.io> wrote:
> Dear Team,
>
> I have started on a draft design document for Project cute.
> Please let me have your kind comments and suggestions.
> (https://trac.torproject.org/projects/tor/wiki/org/sponsors/Otter/Cute)
>
> All the best,
> SiNA
>
>
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> "Cute" design and challenges - draft 0.1
> By: SiNA Rabbani - sina redteam io
>
>
> *Overview*
>
> Project Cute is part of the Otter contract. This project is to provide
> (in the parlance of our time) "point-click-publish" hidden services to
> support more effective documentation and analysis of information and
> evidence of ongoing human rights violations, and corresponding training
> and advocacy. Our goal is to improve hidden services so that they're
> more awesome, and to create a packaged hidden-service blogging platform
> which is easy for users to run.
>
> *Objective*
>
> To make secure publishing available to activists who are not IT
> professionals.
>
> *Activities*
>
> Tor offers Hidden Services, the ability to host web sites in the Tor
> Network.
> Deploying hidden services successfully requires the ability to configure a
> server securely. Mistakes in setup can be used to unmask site admins and
> the
> location of the server. Automating this process will reduce user error.
> We have to write "point-click-publish" plugins that work with existing
> blogging
> and micro-blogging software.
>
> *Expected results*
>
> The result will be a way to provide portals to submit text, pictures, and
> video.
> These sites will not have the ability to log information that can be used
> to
> track down citizen journalists or other users, and will be resistant to
> distributed denial of service (DDoS) attacks.
>
>
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> *Introduction*
>
> This document describes the technical risks associated with running a
> web-based blog tool and exposing it over a hidden service (.onion
> address). Our goal is to create a packaged blogging platform that is
> easy to operate for the non-technical user, while protecting the
> application from a set of known attacks that can reveal and compromise
> the operator's network identity.
>
> Hidden services make it possible for content providers and citizen
> journalists to offer web applications such as blogs and websites,
> hosted completely behind a firewall (NAT network), never revealing the
> public IP of such services. By design, hidden services are resilient
> to attacks such as traffic analysis and DDoS, so it becomes compelling
> for the adversary to focus on application-layer vulnerabilities.
>
> According to the OWASP Top 10, Injection is the number one security
> risk for web applications. "Injection flaws, such as SQL, OS, and
> LDAP injection occur when untrusted data is sent to an interpreter as
> part of a command or query. The attacker's hostile data can trick the
> interpreter into executing unintended commands or accessing data
> without proper authorization." [1]
>
>
> Running a site such as WordPress involves a LAMP (Linux, Apache,
> MySQL, PHP) installation. This stack needs to be carefully configured
> and implemented to meet the desired privacy and security requirements.
> However, the default configuration files are not suitable for privacy
> and lead to the leakage of identity.
>
> WordPress is the most popular blog platform on the Internet. We
> selected WordPress due to its popularity and active core development.
> WordPress features a plug-in architecture and a template system:
> "Plugins can extend WordPress to do almost anything you can imagine.
> In the directory you can find, download, rate, and comment on all the
> best plugins the WordPress community has to offer."
> http://wordpress.org/plugins/
>
> Themes and plugins are mostly developed by programmers with little or
> no security in mind. New code is easily added to the site without the
> need for any programming skills. This is a recipe for disaster: a
> quick look at the publicly available plugin repository and security
> forums reveals many of the OWASP top 10 issues, such as XSS and
> injection vulnerabilities:
>
> http://packetstormsecurity.com/search/?q=wordpress
> http://wordpress.org/plugins/
>
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
>
> *Adversary Model*
>
> We use the range of attacks available to the adversary as the basis
> for our threat model and proposed requirements.
>
>
> *Adversary Goals*
>
> *Code Injection*
> This is the most direct compromise that can take place; the
> ability for
> the adversary to execute code on the host machine is disastrous.
>
> *XSS the front-end, DoS the back-end*
> The adversary can overwhelm the database backend or the web server
> of a dynamically hosted application, denying access to the service.
>
> *Location/Identity information*
> Applications that are not designed with privacy in mind tend to
> reveal
> information by default. For example, WordPress includes the name
> of the
> editor of a post inside the RSS feed: <dc:creator>John
> Smith</dc:creator>
>
> *Adversary Capabilities - Positioning*
>
> The adversary can position themselves at a number of places to execute
> their
> attacks.
>
> *Local Network/ISP/Upstream Provider*
> The attacker can hijack the DNS of the plugin repository or
> perform a
> man-in-the-middle attack and push malicious code into the blog.
>
> *Third party service providers*
> It is common for a blog to email authentication information.
> This information is at risk of social and legal attacks.
> The repository of the blog's source code where themes and plugins
> are
> downloaded is a target for the adversary to insert malicious code.
>
>
> *Adversary Capabilities - Attacks*
>
> The adversary can perform the following attacks in order to
> accomplish various aspects of their goals. Most of these attacks are
> due to the dynamic and Web 2.0 nature of blog platforms.
>
> *SQL Injection & XSS*
> Dynamic sites take advantage of databases for content storage and
> JavaScript for client-side presentation. Programming mistakes can
> lead to code injection on the server or client side.
>
> *Inserting plugins or core updates*
> Blog platforms have automatic update features; WordPress
> connects to a remote repository to download updated code.
> An adversary can perform a man-in-the-middle attack and insert
> malicious code.
>
> *Misbehaving plugins/features*
> Some plugins depend on remote connections to provide a feature,
> which can lead to leakage of identity.
>
> *Brute-force the admin password*
> Weak passwords are vulnerable to brute-force attacks.
> Blog engines do not provide protection against such attacks.
>
> *Remotely exploiting the LAMP stack*
> A determined adversary has a very large attack surface to analyze
> and discover 0-day vulnerabilities in Apache, PHP, or MySQL.
>
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> *Proposed requirements*
>
> *Dynamic to Static*
> A simple yet effective way to protect dynamic websites is to save
> a 100% static copy, hosted with a lightweight and well-configured
> HTTP server. We propose Nginx, which is a free, open-source,
> high-performance HTTP server. We have the option of extending
> existing WordPress plugins such as "Really Static"
> (http://wordpress.org/plugins/really-static/) to generate 100%
> static files.
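As a sketch, the hardened static-only server implied above might look like this in Nginx; the paths and local port are illustrative assumptions, not part of the draft:

```
# Serve the generated static copy on localhost only; Tor would map the
# hidden service's public port here via HiddenServicePort.
server {
    listen 127.0.0.1:8080;
    root /var/www/static-blog;       # output directory of the static export
    index index.html;
    server_tokens off;               # do not leak the server version in headers
    location / {
        try_files $uri $uri/ =404;   # static files only; nothing dynamic to attack
    }
}
```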
>
> *Disable Comments and New User signup (POST REQUESTS)*
> The ability to receive and store comments involves actions and
> configurations that expose the installation to DoS and other web
> attacks.
> We propose to completely disable readers' back-end interactions,
> specifically the New User Registration and Comment features.
>
> *SOCKS Proxy support for WordPress*
> WordPress has the ability to proxy its outgoing connections;
> however, the current implementation only supports HTTP proxies.
> We propose to submit a patch to WordPress core enabling SOCKS
> proxy support:
>
> http://core.trac.wordpress.org/browser/tags/3.6.1/wp-includes/class-http.php
>
> *Update safety*
> The Tor Project or a partner such as WordPress.org should maintain
> the latest copy of the WordPress source code at a .onion address.
> This will allow core application updates to take place without
> ever leaving the Tor network, and we achieve end-to-end encryption
> without relying on the traditional SSL/CA model.
>
> *OnionPress plugin*
> Instead of hard-coding our modifications to control WordPress
> functionality, we propose to develop a custom plugin.
> The plugin will provide a new menu in the Administrative panel of
> the site, with options to interact with Hidden Service features.
> (Note that Administrative features are going to be restricted from
> the public and only available to localhost.)
>
> *User Authentication/Permission Levels*
>
> a) Blog Administrator
> This person is hosting the blog on a local computer and has
> physical access to the installation. There is only one such
> role. This login will be restricted to localhost only.
>
> b) Blog editors/contributors
> These are the active content publishers. Each person has the
> ability to remotely connect, log in, and publish content.
> Editors do not have any administrative permissions such as
> adding or deleting users.
> Each editor is assigned a dedicated key and .onion hostname
> using the HidServAuth and HiddenServiceAuthorizeClient features
> in stealth mode.
>
> "HiddenServiceAuthorizeClient auth-type
> client-name,client-name,…
> If configured, the hidden service is accessible for
> authorized
> clients only. The auth-type can either be 'basic' for a
> general-purpose authorization protocol or 'stealth' for a
> less
> scalable protocol that also hides service activity from
> unauthorized clients. Only clients that are listed here are
> authorized to access the hidden service. Valid client names
> are 1 to 19 characters long and only use characters in
> A-Za-z0-9+-_ (no spaces). If this option is set, the hidden
> service is not accessible for clients without authorization
> any more. Generated authorization data can be found in the
> hostname file. Clients need to put this authorization data
> in their configuration file using HidServAuth."
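A torrc sketch of the per-editor setup described above, using the options quoted from the manual; directory names and the client name are placeholders:

```
# On the blog host: one stealth-authorized hidden service per editor
HiddenServiceDir /var/lib/tor/editor_alice/
HiddenServicePort 80 127.0.0.1:80
HiddenServiceAuthorizeClient stealth alice

# On the editor's machine, using the auth data Tor generates in the
# host's hostname file:
# HidServAuth <address>.onion <auth-cookie>
```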
>
> c) Readers (the world)
> These are the site visitors. They will be able to send GET
> requests to the .onion address and receive HTML and multimedia
> content. No login or comment posting permissions are granted.
>
> *Packaged System*
>
> We propose to design and develop a Debian-based Live OS that can
> be started as a Virtual Machine or used to boot a personal
> computer. This OS will include Tor, the LAMP stack and a running
> copy of WordPress.
>
> The LAMP installation will be hardened and configured to meet our
> security requirements. We require a secondary USB disk for
> persistent storage.
> The desired outcome is a point-and-click, and possibly portable,
> solution that can be launched from inside Windows, Linux, or Mac
> OS X.
>
> VirtualBox is a second candidate (besides QEMU) to host the VM.
> http://wiki.qemu.org/Main_Page and/or https://www.virtualbox.org/
>
>
> *Edge Cache Nodes*
>
> In order to provide "blazing-fast" access to the published content
> outside of the Tor network, we propose the deployment of caching
> servers
> operated by semi-trusted third-party organizations or persons.
> These are similar to tor2web nodes: http://tor2web.org/
>
> The content providers (bloggers) would select from a list of
> available edge servers and send a PGP-signed zip file of the
> latest static version over Tor. Edge servers will reduce traffic
> on the Tor network, and also provide a chance for content to
> reach the world in case of a DDoS or technical issues with the
> Tor network itself.
>
> Having cached copies available remotely makes it possible for the
> blogger to get on-line, publish content, and go back off-line,
> minimizing the amount of time and traffic spent on the Internet.
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
Hello!
Here is the analysis of libpurple/Pidgin as a candidate for the Attentive Otter project, written by DrWhax and me.
Best regards,
Thijs (xnyhps)
-----
## Intro
Pidgin is an IM client written in C using a GTK+ UI. libpurple is the library used by Pidgin as an abstraction for the different network protocols Pidgin supports. libpurple comes with a number of plugins ("prpls") for the various chat networks it supports.
Other UIs using libpurple include Adium (Cocoa), Finch (GNT) and Quail (Qt, in development).
## Release a secure, portable chat program
### Secure
Number of security advisories published on https://pidgin.im/news/security/ since 2005-06-10:
* libpurple: 6
* nss plugin: 1
* Pidgin: 3
* prpl-gg: 1
* prpl-irc: 2
* prpl-jabber: 7
* prpl-msn: 15
* prpl-mxit: 3
* prpl-oscar (AIM/ICQ): 6
* prpl-qq: (no longer supported) 1
* prpl-sametime: 1
* prpl-silc: 2
* prpl-yahoo: 3
My (xnyhps) personal experience is that the Pidgin developers take reported security issues very seriously and often come up with a fix quickly. However, it does sometimes take a while before they release and deploy that fix. I don't have enough experience with Pidgin (the GTK+ UI) to judge its code quality, but I think the quality of libpurple itself is decent. The quality of prpls varies a lot; generally the reverse-engineered protocols are in worse shape than those with open specifications. I think MSN is definitely the worst, and auditing it would take more time than the servers will be online for (until March 2014). I think auditing jabber+irc should be doable and would be a good start: IRC is quite small (<5kloc), XMPP (28kloc) has clear specifications, and by using UTF-8 and XML it removes a lot of possible buffer overflow and string manipulation vulnerabilities.
DrWhax's personal experience is that Pidgin developers don't always take security issues seriously. In the summer of 2012 I spent a night digging with Jake into the source code and the DLLs that they ship, and we found out that in 2012 they were shipping vulnerable DLLs, with the oldest exploit originating from 2006! It took quite some convincing (sigh) that this is a serious matter before the Pidgin folks updated all the DLLs to the latest versions. In February 2013, Pidgin finally released a security update which fixed 12 security bugs. I would say, if we want to go forward with releasing a minimal Pidgin, we should only ship with IRC and XMPP support.
### Portable
Pidgin provides official instructions on how to run Pidgin from a USB stick: https://developer.pidgin.im/wiki/Using%20Pidgin#RunningWindowsPidginFromaUS….
## Sends all traffic over Tor
libpurple has a proxy setting that will force all traffic to pass through Tor. This disables all SRV lookups, meaning many XMPP servers require manual configuration to work.
UPnP (used for automatic port forwarding and external IP detection) can be disabled globally.
## Can be used with a wide variety of chat networks
libpurple includes support for:
* AIM
* Bonjour
* Gadu-Gadu
* Google Talk
* Groupwise
* ICQ
* IRC
* MSN
* MXit
* MySpaceIM
* SILC
* SIMPLE
* Sametime
* XMPP
* Yahoo!
* Zephyr
Though nearly every one of these is a separate prpl, so each would require its own auditing.
## Uses off-the-record encryption of conversations by default
The Pidgin-OTR plugin can be configured to enable OTR by default. Including this plugin with Pidgin by default is underway.
## Has an easy-to-use graphical user interface localized into Farsi, French, Spanish, and Arabic.
It has a graphical user interface which a large number of users might already be familiar with.
From https://developer.pidgin.im/l10n/2.x.y/:
* Farsi: 59.77% finished.
* French: 98.95% finished.
* Spanish: 99.31% finished.
* Arabic: 78.02% finished.
This breakdown includes all prpls, so might turn out differently when only including a subset of them.
## Cross-platform support
Pidgin ships binaries for Windows and Linux. By bundling GTK+, it should be possible to package it to run on OS X too, though the UI would be quite confusing to many OS X users.
Another option on OS X is Adium, though this increases the amount of code to be audited by a large factor.
## Integration with Tor
libpurple supports plugins written in C, Tcl, Perl and C# (Mono). These can make changes to the preferences to automate configuring it for Tor. It is also possible to configure Pidgin using D-Bus, but for security reasons it is probably wiser to disable D-Bus support.
So the options to control Tor are:
* A C/Tcl/Perl/C# plugin that launches a Tor process and which sets up Pidgin's preferences accordingly.
* Use Tor Launcher from the TBB to start both Tor and Pidgin. Configuring Pidgin could happen by:
* Writing to the preference files before starting it.
* Communicating the preference changes to a simple C/Tcl/Perl/C# plugin.
* D-Bus. (Unlikely to work on Windows and difficult to secure)
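The "writing to the preference files before starting it" option could be a few lines of Python. This is a sketch under assumptions: the nested `<pref>` layout of `~/.purple/prefs.xml` shown here is from memory, not a verified schema, and the proxy type name "socks5" is likewise an assumption.

```python
# Point Pidgin's global proxy settings at a local Tor SOCKS port by
# rewriting prefs.xml before Pidgin is launched. The file layout
# (pref name='purple' > pref name='proxy' > type/host/port) is assumed.
import xml.etree.ElementTree as ET

def set_tor_proxy(prefs_path: str, host: str = "127.0.0.1", port: int = 9050) -> None:
    tree = ET.parse(prefs_path)
    proxy = tree.getroot().find("./pref[@name='purple']/pref[@name='proxy']")
    if proxy is None:
        raise ValueError("no purple/proxy section found in prefs.xml")
    for name, value in (("type", "socks5"), ("host", host), ("port", str(port))):
        node = proxy.find(f"pref[@name='{name}']")
        if node is not None:
            node.set("value", value)  # update the existing pref in place
    tree.write(prefs_path)
```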
## Pidgin 3.0
Pidgin 3.0.x will be released with OTR support by default, which is awesome: only one codebase has to be maintained across the various Linux distributions and Windows clients! Huzzah! Something I'm less happy with is that the Pidgin developers have started using WebKit for HTML rendering within the client. WebKit doesn't have a very good security history and opens up another giant security hole. If we want to release Pidgin inside of TIMBB, I would propose to work on Pidgin 3.0 and not the older 2.x releases.
---------- Forwarded message ----------
Date: Sat, 5 Oct 2013 23:59:16 +0000
From: Colin Childs via RT <help(a)rt.torproject.org>
To: vero(a)was.fm
Subject: [rt.torproject.org #14664] Tor on iOS
On Sat Oct 05 21:57:29 2013, vero(a)was.fm wrote:
> Dear Tor,
>
> I work with a small group of developers called Wizardry and
> Steamworks and we have recently managed to port Tor to Apple iOS devices.
> The port requires for iOS to be jailbroken (please note that that action
> is not illegal but just //may// void warranty).
>
> The port has been uploaded to Cydia on BigBoss's repo and can be
> found at:
>
> http://moreinfo.thebigboss.org/moreinfo/depiction.php?file=torDp
>
> We have seen that your website lists download locations for Tor
> at:
>
> https://www.torproject.org/download/download-easy.html.en
>
> and that on the page:
>
> https://www.torproject.org/download/download.html.en
>
> an Android Bundle is listed under the "Tor for Smartphones"
> header.
>
> We were wondering if you could list our port to iOS or, at
> least, mention somewhere on the download page that Tor is now also
> available for Apple iOS devices via Cydia.
>
> Thank you,
> Veronique Dubois
> on behalf of Wizardry and Steamworks @ was.fm
Hello,
Please send a copy of this message to our Tor-Dev mailing list at
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev.
--
Colin C.
Ian,
Thanks. Tell me what a better forum would be and I'll gladly move this discussion there.
Your suggestion does use encryption if only as a catalyst. There was a time when encrypted content could not be sent to Canada from the US and I can imagine some governments forbidding the use of encryption.
Harvesting keywords from k messages of length n takes O(k·n) time and space, possibly with some large constants. If the messages are scrambled, it would hopefully take O(k·n·log(n)) time to harvest them. What would really help, imho, with mass snooping would be a transfer method that requires O(k·log(k)·n) time for any form of keyword harvesting. I don't have an answer, hence the contest.
Another approach might be to string together every m-th bit of a message of n bits, where m and n are relatively prime, cyclically.
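A character-level sketch of that decimation idea (reading every m-th position cyclically); the walk visits each index exactly once precisely when gcd(m, n) = 1, which is why relative primality is required:

```python
from math import gcd

def scramble(msg: str, m: int) -> str:
    """Read every m-th character of msg cyclically (index i*m mod n).
    Requires gcd(m, n) == 1 so every position is visited exactly once."""
    n = len(msg)
    if gcd(m, n) != 1:
        raise ValueError("m and n must be relatively prime")
    return "".join(msg[(i * m) % n] for i in range(n))

def unscramble(scrambled: str, m: int) -> str:
    """Invert scramble by writing each character back to its source index."""
    n = len(scrambled)
    out = [""] * n
    for i, ch in enumerate(scrambled):
        out[(i * m) % n] = ch
    return "".join(out)
```

Note this is a fixed permutation, so it only obstructs naive substring matching; a harvester that knows m (or tries all m coprime to n) undoes it cheaply, which is exactly why a contest for something stronger is interesting.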
Best of luck with Tor.
Cheers,
Joel
------------------------------
On Thu, Oct 3, 2013 2:58 PM PDT Ian Goldberg wrote:
>On Thu, Oct 03, 2013 at 02:34:50PM -0700, Malard Joel wrote:
>
>
> Do you think that the code at https://github.com/malardj/slice, which uses neither encryption nor a password, could help a community proof their communications against massive systemic eavesdropping by making the latter computationally impractical or financially unsustainable? Do you think that a tool like that would be valuable if it existed? Can you think of something yourself that everyone could use?
>
> I am unaffiliated with any institution, but I would like to set up a contest for the best such algorithm or procedure that does not involve cryptography and that can be implemented by any group of ordinary citizens for the purpose of proofing their Internet communications of ASCII characters against systemic eavesdropping.
>
> I need help setting up the rules of the competition (I have never done this), finding judges (I am totally unqualified), and finding (virtual) places where to announce and hold the competition. I would welcome your suggestions on how to make this contest more relevant to all. Can you help, or suggest where to look for help?
>
> The code above is a quick example of the type of entries that I have in mind. It consists of a C++ program that inputs a character string, slices it, and shuffles those slices into a JavaScript program that displays the input when it is run. The method purports to hamper the work of automated keyword harvesters. The available code does not support HTML text, but that capacity would not be hard to add.
>
> With best regards,
>
> Joel Malard, PhD
> Fremont, CA
>
>I don't think this is the right list for this discussion, but how about:
>
>- Pick a 128-bit random key K
>- Encrypt the message using key K with, say, AES-GCM or your favourite
> authenticated encryption mode to yield the ciphertext C (including
> tag)
>- randomize the last b bits of K to yield K', for some b, probably
> around 30, but could be anything from 0 to 40ish.
>- output K' concatenated with C
>
>Notably, you don't output b. The receiver just runs a counter X up from
>0 until C authenticates using the key (K' XOR X). b controls how long
>this will take, but the receiver doesn't know it. (There was a
>password-stretching algorithm like that in USENIX Security (I think it
>was) a little while ago; the adversary doesn't know how many iterations
>to wait before giving up / trying another password.) The high part of K
>acts like a salt.
>
>It's kind of like the old Lotus Notes encryption, in that the NSA got to
>know the high part of the key, and just had to brute the low 40 bits,
>only now the receiver has to brute it as well. It's not technically
>encryption, as there's no shared or public key. But there's also no key
>management, of course.
>
> - Ian
>_______________________________________________
>tor-dev mailing list
>tor-dev(a)lists.torproject.org
>https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
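Ian's construction above can be sketched in a few lines. This is a stdlib-only stand-in: a hash-counter keystream plus an HMAC tag takes the place of AES-GCM, and b is kept small so the example runs quickly; none of this is production crypto.

```python
import os, hmac, hashlib, itertools

B = 12  # low bits of K to randomize (Ian suggests ~30; small here for speed)

def _keystream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream standing in for AES (illustrative only)
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(msg: bytes) -> bytes:
    key = os.urandom(16)                                     # random 128-bit key K
    body = bytes(a ^ b for a, b in zip(msg, _keystream(key, len(msg))))
    tag = hmac.new(key, body, hashlib.sha256).digest()[:16]  # stands in for the GCM tag
    # Randomize the last B bits of K to get K'; note B itself is not transmitted
    kprime = int.from_bytes(key, "big") ^ (
        int.from_bytes(os.urandom(2), "big") & ((1 << B) - 1))
    return kprime.to_bytes(16, "big") + tag + body           # output K' || tag || C

def open_(blob: bytes) -> bytes:
    kprime, tag, body = blob[:16], blob[16:32], blob[32:]
    for x in itertools.count():     # run a counter X up from 0 until C authenticates
        key = (int.from_bytes(kprime, "big") ^ x).to_bytes(16, "big")
        if hmac.compare_digest(
                hmac.new(key, body, hashlib.sha256).digest()[:16], tag):
            return bytes(a ^ b for a, b in zip(body, _keystream(key, len(body))))
```

As in the email, the high part of K acts like a salt and the receiver's cost is controlled by b, which the receiver never learns.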
Here is another HS proposal draft.
This one specifies how to migrate from the current HS protocol to the
one specified in proposals "Migrate HS identity keys to Ed25519" and
"Stop HS address enumeration by HSDirs":
https://lists.torproject.org/pipermail/tor-dev/2013-August/005279.html
https://lists.torproject.org/pipermail/tor-dev/2013-August/005280.html
(I will soon write a proposal that merges these two proposals into one)
This proposal is in serious need of comments. Specifically, see
section "1.1. From the PoV of Hidden Services", which I left entirely
unspecified. There are also probably multiple migration concerns which
I have forgotten or completely ignored.
Inlining:
Filename: xxx-hs-id-keys-migration.txt
Title: Migration to ed25519 HS identity keys and privacy-preserving directory documents
Author: George Kadianakis
Created: 13 September 2013
Target: 0.2.5.x
Status: Draft
[More draft than Guinness.]
0. Overview and motivation
Proposal XXX introduces two new HS features:
a) Stronger HS identity keys.
b) New-style HS directory documents so that HS addresses are not
leaked to HSDirs.
This document specifies how different Tor actors must act after
proposal XXX is implemented, so that the migration can proceed
smoothly.
We aim for the migration to be smooth both from the perspective of
Hidden Services (introduce a grace period so that HS operators don't
wake up one day to find that their HS can't be accessed with that
address anymore) and from an implementation perspective (avoid
bloating the Tor protocol or introducing sensitive flag days).
1. Migration strategy
After proposal XXX is implemented:
a) The HS address format will change (called "new-style HS address"
in this document)
b) New HSDirs will be introduced (called "HSDirV3" in this document)
c) New-style HS descriptors will be introduced (called "HS V3
descriptors" in this document).
The following sections investigate how these changes influence the
various Tor actors:
1.1. From the PoV of Hidden Services:
=== XXX DISCUSSION XXX ===
I see (at least) three migration strategies here. I'm not sure which
one is better so I'll write all of them and we can then discuss them
and pick one.
a) After proposal XXX is implemented, Tor (configured as an HS) will
only publish HS V3 descriptors by default. There will be a torrc
parameter that the HS operator can use, that turns on publishing
of HS V2 descriptors for backwards compatibility with the old HS
address (the old identity key must be kept around).
b) After proposal XXX is implemented, Tor (configured as an HS) will
publish both V3 and V2 HS descriptors for each Hidden
Service. There will be a torrc parameter that turns off
publishing of V2 HS descriptors. This parameter will eventually
be switched off by default.
c) After proposal XXX is implemented, Tor (configured as an HS) will
only publish V3 HS descriptors. The code that publishes V2
descriptors can be disabled or removed. HSes who want to publish
V2 descriptors (and keep the same address) will have to maintain
two copies of Tor -- one running the latest Tor version, and
another one running an older version that supports V2
descriptors.
The engineer inside me tells me to go with c), but I feel that it
leaves all the burden to the HS operators.
I haven't checked how much implementation effort doing a) or b)
would take us.
1.2. From the PoV of the HS client:
Tor clients can distinguish new-style HS addresses from old ones by
their length. Legacy addresses are 16 base32 characters, while new
ones are 56 (XXX) base32 characters.
If Tor is asked to connect to a legacy address it SHOULD throw a
warning and advocate the use of new-style addresses (it should still
connect to the HS however). In the future, when old-style HS
addresses are close to depletion, we can introduce a torrc parameter
that blocks client connections to old-style HS addresses.
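The client-side distinction described above is essentially a length check; a sketch (the 56-character figure is still marked XXX in the draft, so treat it as a placeholder):

```python
def hs_address_kind(addr: str) -> str:
    """Classify a .onion address by label length, per the draft:
    16 base32 chars = legacy, 56 (still tentative in the draft) = new-style."""
    label = addr[:-len(".onion")] if addr.endswith(".onion") else addr
    if len(label) == 16:
        return "legacy"
    if len(label) == 56:
        return "new-style"
    return "unknown"
```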
1.3. From the PoV of HS directories:
Tor relays will advertise themselves as HSDirV3 servers using the
"hidden-service-dir" router descriptor line.
For a while, relays will also continue being HSDirV2 servers. We will
specify a timeout period of X months (4?), after which relays will
stop being HSDirV2 servers by default (using the HidServDirectoryV2
torrc parameter).
1.4. From the PoV of directory authorities:
Authorities will continue voting for HSDirV2 servers. Eventually,
when all relays have upgraded and no one is claiming to be HSDirV2,
we can disable and remove the relevant code.
XXX Need to specify grace periods.
XXX What did I forget?
release early; release often again
This is a draft of a proposal that merges the two proposals I posted
last month, namely the "Migrate HS identity keys to Ed25519" and "Stop
HS address enumeration by HSDirs" proposals.
This goes together with the "Migration to ed25519 HS identity keys and
privacy-preserving directory documents" I posted two weeks ago.
I tried to address most of the comments from Nick and Matthew. There
is still a lot of stuff to fix (especially the key derivation part).
Inlining:
Filename: xxx-hs-id-keys-and-onion-leaking.txt
Title: On upgrading HS identity keys and on a new HS directory scheme that does not leak
Author: George Kadianakis
Created: 10 August 2013
Target: 0.2.5.x
Status: Draft
[More draft than Guinness.]
ToC:
0. Overview
1. Motivation
2. Related proposals
3. Overview of changes
4. Specification of changes
5. Discussion
6. Acknowledgments
Appendix:
A0. Security proof of directory scheme
---
0. Overview:
This proposal has two aims:
a) Improve the strength of HS identity keys.
Specifically, this proposal suggests the adoption of Ed25519
ECDSA keys as a replacement for the RSA-1024 keys that are
currently in use by hidden services.
b) Stop HS address enumeration by HSDirs
When a client asks for the descriptor of a hidden service the
hidden service address is leaked to the HSDir that was inquired.
This proposal suggests a new directory scheme that prevents
HSDirs from learning the hidden service address or the contents
of the hidden service descriptors they serve.
1. Motivation
The long-term identity keys of Hidden Services are RSA-1024 keys; we
consider these weak and in need of replacement. Furthermore, 80-bit
hidden service addresses are short enough to be prone to brute-force
impersonation attacks.
Ed25519 (specifically, Ed25519-SHA-512 as described and specified
at http://ed25519.cr.yp.to/) is a good alternative: it's secure,
fast, has small keys and small signatures, is bulletproof in
several important ways, and supports fast batch verification. (It
isn't quite as fast as RSA1024 when it comes to public key
operations, since RSA gets to take advantage of small exponents
when generating public keys.)
2. Related proposals
This proposal goes hand in hand with proposal XXX "On the migration
to Ed25519 HS identity keys and privacy-preserving directory
documents". Proposal XXX specifies how the schemes described in this
proposal should be deployed in the real Tor network to minimize
frustration and bad vibes.
XXX Another related proposal is proposal XXX "On the upgrade of Hidden
Service service keys" which defines the upgrade from the current
RSA-1024 HS service keys to a more powerful cryptosystem.
3. Overview of changes
This section provides a high-level overview of the changes suggested
in this document.
3.0. Overview of identity-key changes
Long-term RSA-1024 "identity keys" are used by hidden services to
authenticate themselves to clients.
This proposal specifies:
* The generation of new Ed25519 long-term identity keypairs (#KEYGEN)
* A new HS descriptor format (v3) that contains Ed25519 HS
identity keys (#NEWDESC)
* A way for hidden services to upload v3 descriptors to, and
clients to fetch them from, HSDirs (#DESCFETCH and #DESCUPLOAD)
3.1. Overview of anti-enumeration scheme
Currently, Hidden Services upload their unencrypted descriptor to
hidden service directories (HSDirs). HSDirs store the unencrypted
descriptor in an internal map of: <hs address> -> <hs descriptor>
When a client wants the descriptor of an HS, it asks an HSDir for
the descriptor that corresponds to <hs address>. If the HSDir has
such an index in its map, it returns the <hs descriptor> to the
client.
This proposal asks Hidden Services to periodically derive a new
ephemeral keypair from their long-term identity key; the new keypair
being a function of the identity key and a time-dependent nonce. The
derivation should be one-way; if you know the identity key you
should be able to derive the ephemeral key but not the other way
around. Finally, a client should be able to derive the ephemeral HS
public key from the long-term HS public key without knowing the
long-term HS private key (#KEYPAIRDERIVATION).
Hidden Services then encrypt their descriptor with a symmetric key
(derived from the long-term identity public key) and sign the
ciphertext and the ephemeral public key with their ephemeral
private key (#NEWDESC). Then they place the ephemeral public key,
the encrypted descriptor and the signature in a v3 hidden service
descriptor and send that to the HSDir (#DESCUPLOAD).
When the HSDir receives a v3 hidden service descriptor, it validates
the signature using the embedded ephemeral public key and if it
verifies it stores the descriptor in an internal map of:
<ephemeral public key> -> <v3 descriptor> (#DESCHANDLING).
Now, out of band, the HS gives to its clients a <z>.onion
address. <z> in this case is the long-term public key of the HS
(this is different from the current situation where <z> is the hash
of the long-term public key).
A client that knows the <z>.onion address and wants to acquire the
HS descriptor, derives the <ephemeral public key> of the HS using
<z> and the same key derivation procedure that the HS uses. She also
derives the symmetric key that decrypts the encrypted HS descriptor
(#KEYPAIRDERIVATION).
The client then contacts the appropriate HSDir and queries it for
the descriptor with index <ephemeral public key>. If the HSDir has
such an index in its internal map it passes the <v3 descriptor> to
the client (#DESCFETCH).
The client then verifies the signature of the v3 descriptor, and if
it's legit she decrypts the encrypted descriptor using the ephemeral
symmetric key. The client now has the Hidden Service descriptor she
was looking for (#DESCHANDLING).
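The flow above can be sketched end to end. Note that this is an illustration only: HMAC-SHA256 and a SHA-256-based XOR keystream stand in for the real Ed25519 and AES-CTR primitives, and the derivation functions are purely hypothetical placeholders for whatever #KEYPAIRDERIVATION eventually specifies.

```python
import hashlib
import hmac

# Hypothetical one-way derivations: anyone who knows the identity
# public key can compute them, but not the reverse.
def derive_ephemeral_key(identity_pubkey, period):
    return hmac.new(identity_pubkey, b"ephemeral|%d" % period,
                    hashlib.sha256).digest()

def derive_symmetric_key(identity_pubkey, period):
    return hmac.new(identity_pubkey, b"symmetric|%d" % period,
                    hashlib.sha256).digest()

# --- Hidden service side ---
identity_pubkey = b"\x01" * 32     # placeholder long-term public key <z>
period = 17532                     # placeholder time-dependent nonce
eph_key = derive_ephemeral_key(identity_pubkey, period)
sym_key = derive_symmetric_key(identity_pubkey, period)

plaintext = b"introduction points and other descriptor fields"
keystream = hashlib.sha256(sym_key).digest() * 2   # toy cipher, NOT AES-CTR
encrypted = bytes(a ^ b for a, b in zip(plaintext, keystream))
signature = hmac.new(eph_key, encrypted, hashlib.sha256).digest()

# --- HSDir side: verifies, then indexes by ephemeral key.
# It never learns <z> or the descriptor contents.
hsdir = {}
if hmac.compare_digest(signature,
                       hmac.new(eph_key, encrypted, hashlib.sha256).digest()):
    hsdir[eph_key] = (encrypted, signature)

# --- Client side: recomputes both keys from the public <z> alone.
enc, sig = hsdir[derive_ephemeral_key(identity_pubkey, period)]
client_sym = derive_symmetric_key(identity_pubkey, period)
recovered = bytes(a ^ b
                  for a, b in zip(enc, hashlib.sha256(client_sym).digest() * 2))
assert recovered == plaintext
```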
4. Specification of changes
4.0 Generation of long-term identity-key Ed25519 keypairs (#KEYGEN)
When a hidden service starts up with no Ed25519 identity keys, it
generates a new identity keypair.
4.1. Ephemeral keypair derivation (anti-enumeration) (#KEYPAIRDERIVATION)
XXX Leaving this unspecified for now till the security proof comes
along.
4.1.0. By Hidden Services:
For now, let's assume that after this paragraph each HS has a
per-TIME_PERIOD ephemeral keypair. It also has a symmetric key
derived from the identity public key to encrypt its descriptor.
4.1.1. By clients:
When a Tor client is asked to connect to a hidden service address
<z>.onion, it assumes that <z> is the long-term public key of the
hidden service. Internally, and without notifying the user, the
Tor client generates the ephemeral public key and ephemeral
symmetric key of the HS for that time period (using the long-term
public key of the hidden service and the procedure specified in
the previous section).
4.2. New v3 Hidden Service Descriptor Format (#NEWDESC)
To serve the purposes of this proposal we need to perform multiple
modifications to hidden service descriptors. Specifically, to
migrate to new identity keys we need to change the format of the
v2 Hidden Service descriptor to use Ed25519 keys. Furthermore, to
provide enumeration protection we need to encrypt the whole
descriptor with the ephemeral symmetric key.
To do so we define a new construct just for this section, called
"intermediate descriptor". It has the same format as a v2 hidden
service descriptor but uses Ed25519 instead of RSA-1024. The
following changes are performed to the v2 HS descriptor:
[*] The "permanent-key" field is replaced by "permanent-key-ed25519":
"permanent-key-ed25519" SP an Ed25519 public key
[Exactly once]
The public key of the hidden service which is required to
verify the "descriptor-id" and "signature-ed25519".
In base64 format with terminating =s removed.
[*] The "service-key" field is replaced by "service-key-ed25519':
"service-key-ed25519" SP an Ed25519 public key
[Exactly once]
The public key used to encrypt messages to the hidden
service.
In base64 format with terminating =s removed.
[*] The "signature" field is replaced by "signature-ed25519':
"signature-ed25519" SP signature-string
[At end, exactly once]
A signature of all fields above using the private Ed25519
key of the hidden service.
In base64 format with terminating =s removed.
Now we define the format of the new v3 Hidden Service descriptor
that is uploaded to the HSDirs:
"ephemeral-public-key" SP public-key
[At start, exactly once]
The ephemeral HS public key for this time period.
In base64 format with terminating =s removed.
"encrypted-descriptor" SP encrypted-descriptor
[Exactly once]
An encrypted intermediate descriptor (as specified
above).
It's encrypted with AES in CTR mode with a random
initialization vector of 128 bits that is prepended to the
encrypted string. The ephemeral symmetric key
derived in section XXX is used as the secret key of AES.
The encrypted string is encoded in base64 and surrounded with
"-----BEGIN MESSAGE-----" and "-----END MESSAGE-----".
"publication-time" SP YYYY-MM-DD HH:MM:SS NL
[Exactly once]
The time when this descriptor was created. It should
be rounded down to the nearest day. The timestamp SHOULD be
used by the clients and HSDirs to discard old descriptors.
XXX Should we specify how long the HSDir is supposed to keep
the descriptor? rend-spec.txt doesn't do so.
XXX Is "rounded down to the nearest day" too extreme of a
rounding? Other Tor documents round it down to the nearest
hour. Does it matter if our expiration time is longer than a
day?
XXX Should we kill the "publication-time" in the HS
descriptors? Or just leave it there?
"signature" SP signature
[At end, exactly once]
A signature of all fields above using the ephemeral private
key of the hidden service.
In base64 format with terminating =s removed.
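To make the layout concrete, here is a sketch that assembles a v3 descriptor from placeholder values. Random bytes stand in for the real Ed25519 ephemeral key, the AES-CTR ciphertext (with prepended IV), and the Ed25519 signature; no actual crypto is performed.

```python
import base64
import datetime
import os

def b64_nopad(data):
    # base64 with terminating '=' padding removed, per the format above
    return base64.b64encode(data).decode().rstrip("=")

# Placeholder values, NOT real key material.
ephemeral_pubkey = os.urandom(32)   # Ed25519 ephemeral public key
encrypted_blob = os.urandom(128)    # AES-CTR output with IV prepended
signature = os.urandom(64)          # Ed25519 signature

# "publication-time" rounded down to the nearest day.
now = datetime.datetime(2013, 10, 8, 14, 2, 33)
pub_time = now.replace(hour=0, minute=0, second=0, microsecond=0)

descriptor = "\n".join([
    "ephemeral-public-key %s" % b64_nopad(ephemeral_pubkey),
    "encrypted-descriptor",
    "-----BEGIN MESSAGE-----",
    base64.b64encode(encrypted_blob).decode(),
    "-----END MESSAGE-----",
    "publication-time %s" % pub_time.strftime("%Y-%m-%d %H:%M:%S"),
    "signature %s" % b64_nopad(signature),
])
```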
4.3. Uploading v3 HS descriptors to HSDirs (#DESCUPLOAD)
A new type of Hidden Service Directory Server must be established
which knows how to handle v3 Hidden Service descriptors.
The Hidden Service follows the same publishing procedure as for v2
descriptors but instead of sending an HTTP 'POST' to
"/tor/rendezvous2/publish", the HS sends the 'POST' request to
"/tor/rendezvous3/publish" with the descriptor included in the
body of the 'POST'.
4.4. Fetching v3 HS descriptors from HSDirs (#DESCFETCH)
When a client needs to fetch a v3 Hidden Service Descriptor from
an HSDir, it follows the exact same procedure as for v2
descriptors but instead of sending an HTTP 'GET' to
"/tor/rendezvous2/<z>", it sends an HTTP 'GET' to
"/tor/rendezvous3/<z>" where <z> is the base32 encoding of the
*ephemeral* public key of the Hidden Service.
XXX base32 is used because it's URL-safe and because we don't
really care about the extra length.
XXX example?
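A hypothetical example (random bytes stand in for the derived ephemeral public key; the lowercasing mirrors how current .onion addresses are written):

```python
import base64
import os

ephemeral_pubkey = os.urandom(32)   # placeholder derived ephemeral public key
z = base64.b32encode(ephemeral_pubkey).decode().rstrip("=").lower()

fetch_path = "/tor/rendezvous3/" + z        # HTTP 'GET' target for clients
publish_path = "/tor/rendezvous3/publish"   # HTTP 'POST' target (section 4.3)
```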
4.5. v3 hidden service descriptor handling (#DESCHANDLING)
4.5.1. By HSDirs:
When an HSDir receives a v3 hidden service descriptor, it verifies
its signature using the ephemeral public key that was included in
the descriptor. If the signature verifies and the descriptor
timestamp is reasonable, the descriptor is accepted and
cached. Otherwise, the descriptor is discarded.
4.5.2. By clients:
When a client receives a v3 hidden service descriptor, it checks
the timestamp and verifies the signature using the previously
derived keys and discards it if the signature was not proper.
If the signature is proper, the client uses the derived ephemeral
symmetric key to decrypt the 'encrypted-descriptor' part of the
v3 descriptor.
5. Discussion
[Points here might deserve their own sections]
Do we need the HSDir hash ring, even though the HS address and the
descriptor are now hidden from HSDirs?
An Ed25519 public key is 32 bytes. 32 bytes in base32 encoding is 56
characters (or 52 with the '=' padding removed). Do we want a
different URL encoding or are we happy with addresses like:
mfrggzdfmztwq2lknnwg23tpobyxe43uov3ho6dzpjaueq2eivda.onion ?
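The arithmetic can be checked directly: base32 encodes 5 bits per character, so a 256-bit key needs 52 characters, padded out to a 56-character group.

```python
import base64

pubkey = bytes(range(32))                  # any 32-byte Ed25519 public key
padded = base64.b32encode(pubkey).decode()
assert len(padded) == 56                   # with '=' padding

address = padded.rstrip("=").lower() + ".onion"
assert len(address) == 52 + len(".onion")  # 52-character label, as above
```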
6. Acknowledgments
The cryptography behind this proposal was originally proposed by
Robert Ransom in a private email thread and subsequently posted in
tor-dev [0]. Discussion was continued in trac ticket #8106 [1].
During the past 6 months many bright people have looked at the
cryptography behind this scheme. The list of people includes Nadia
Heninger, Leonid Reyzin, Nick Hopper, Aggelos Kiayias, Tanja Lange,
Dan J. Bernstein and probably others that I can't recall at this
point. Thanks!
Parts of this proposal have been based on discussions with Nick
Mathewson and his proposal 220.
XXX "One more: Christian Grothoff told me in Garching that GNUnet does
something quite similar for its keys. So we should probably check out
their approach too, and include them in the "related work" section of
any hypothetical publication here. :)"
Appendix:
A0. Security proof of directory scheme
XXX A security proof of the above scheme is under development. We
are not going to implement or deploy anything before we have a solid
proof of this.
On Mon, 30 Sep 2013 19:13:37 -0700
Tom Lowenthal <me(a)tomlowenthal.com> wrote:
> Today, at 1100 Pacific, we spent more than 90 minutes discussing
> [Sponsor F][]. Here's the summary.
>
> **READ THIS**: The next Sponsor F meeting will be held in a mere two
> weeks on **2013-10-14, at 1100h Pacific in #tor-dev**.
>
> This is a schedule change: from now on, the meetings will be every two
> weeks. It's possible that we may have to increase this to every week,
> depending on how fast we work, and how long meetings are taking. If
> you should be at these meetings but cannot make Mondays at 1100h
> Pacific, please contact me, and I'll start the process of finding a
> better time or times.
>
> If you are individually in the `cc` field of this message, it's
> because I think there's something you need to do
> for Sponsor F before Halloween. You should also come to the next
> Sponsor F meeting. If you can't make the meeting, or don't think this
> work applies to you, you should get back to me ASAP so we can fix it.
>
> Is something else missing, wrong, or messed up? I'd like to know.
>
> [Sponsor F]:
> https://trac.torproject.org/projects/tor/wiki/org/sponsors/SponsorF/Year3
>
>
> * * * * *
>
>
> Core Phase 2 Deliverables
> =========================
>
>
> UDP Transport [#10]
> -------------
>
> **[On Track: Minimal]** Karsten will work with Rob to complete the
> Shadow simulation of this work, then write up a full report on this,
> probably with Stephen's help. We expect both tasks to be complete by
> Halloween. This likely represents a minimal outcome for this
> deliverable.
>
>
> Combine obfuscation (obfsproxy) with address-diversity (flashproxy)
> [#11]
> -------------------------------------------------------------------
>
> **[On Track: Minimal]** The work of integrating obfsproxy with
> flashproxy is done. George will include this in the next released
> version of the pluggable transports browser bundle. George will also
> write a report on this work. Ximin and David will help. By Halloween,
> the report will be complete and the bundles will either be released or
> well on their way through testing. This likely represents some
> combination of minimal or intended outcomes for this deliverable.
>
>
> Bridge Metrics [#12]
> --------------
>
> **[Done: Intended]** Our reporting code has been merged into master.
> It will ride the trains or flow downstream or whatever your favorite
> code development cycle imagery is, and show up in future releases. As
> it goes through alpha, beta, and release, gets picked up and adopted
> by more operators, we'll get more comprehensive sample coverage, and
> better data. This likely represents an ideal outcome for this
> deliverable.
>
>
> N23 Performance Work [#13]
> --------------------
>
> **[On Track: Alternate]** Roger doesn't think that N23 is as good as
> we thought it was, so he's going to write a report on the various
> performance improvements we've implemented lately; the performance
> work which we thought about, but decided not to implement, and why; and
> a wishlist of future performance work and research. He'll have this
> done by Halloween. This likely represents an alternate outcome for
> this deliverable.
>
>
> Improved Scheduling [#14]
> -------------------
>
> **[On Track: Intended]** Nick will work with Roger and Andrea to
> implement an improved scheduler (possibly based on picking randomly,
> weighted by bandwidth), and -- if possible -- also to refactor `cmux`.
> If time permits, Nick will also attempt to simulate this using Shadow,
> probably with Karsten's help. Finally, Nick will produce a full
> report before Halloween. This likely represents an intended outcome
> for this deliverable.
>
>
> Alternate `Fast` Flag Allocation [#15]
> --------------------------------
>
> **[At Risk]** The person who we initially expected to do this work
> does not seem to be available to do it. We need to find an alternate
> plan to execute this deliverable. If you think you can do it, please
> read [ticket #1854][#1854] and get in touch.
>
> [#1854]: https://trac.torproject.org/projects/tor/ticket/1854
>
>
> VoIP Support [#16]
> ------------
>
> **[On Track: Alternate]** Our implementation strategy for this was
> high-latency send-an-mp3-over-XMPP using Gibberbot/Chatsecure, or a
> similar system. The internal milestone was to have a release candidate
> available today. Sadly, Nathan (who is on point for this) wasn't on
> the call. Fortunately, Nathan [blogged][guardian-1] about Chatsecure's
> `12.4.0-beta4` ten days ago, and a tantalizingly-named
> [`ChatSecure-v12.4.2-release.apk`][chatsecure-release]
> ([sig][chatsecure-release-sig]) is available in the Guardian Project
> [release directory][guardian-releases]. The outlook seems good, but
> Tom will follow up with Nathan as soon as possible to verify these
> outrageous claims. Nathan, if you're reading this, could you get in
> touch (email, IRC, XMPP, whatever). Thanks!
>
> [guardian-1]:
> https://guardianproject.info/2013/09/20/gibberbots-chatsecure-makeover-almo…
> [chatsecure-release]:
> https://guardianproject.info/releases/ChatSecure-v12.4.0-beta4-release.apk
> [chatsecure-release-sig]:
> https://guardianproject.info/releases/ChatSecure-v12.4.0-beta4-release.apk.…
> [guardian-releases]: https://guardianproject.info/releases/
>
>
>
> Extended Phase 2 Deliverables
> =============================
>
>
> VoIP Support [#17]
> ------------
>
> **[Limbo: Intended]** Here we're talking about getting a general VoIP
> client (Mumble) to work over Tor. This is discussed in [ticket
> #5699][#5699], and [instructions][torify-mumble] for using Mumble over
> Tor are on the wiki. We didn't get an update during the meeting, so if
> you're working on this, get in touch, eh?
>
> [#5699]: https://trac.torproject.org/projects/tor/ticket/5699
> [torify-mumble]:
> https://trac.torproject.org/projects/tor/wiki/doc/TorifyHOWTO/Mumble
>
>
There still exists a bug in Mumble such that when network connections
are routed through a proxy, DNS requests leak. This past month I've
been reading Qt documentation so that I can understand the Mumble code
better and find the bug. After the meeting on IRC, velope suggested
that fixing it might be as simple as implementing SOCKS5 with remote
hostname resolution. DNS leaks also occur with HTTP proxies, which
shouldn't happen. See also: http://sourceforge.net/p/mumble/bugs/1033/