[tor-bugs] #28174 [Applications/Tor Browser]: Block non-.onion subresources on .onion websites?

Tor Bug Tracker & Wiki blackhole at torproject.org
Mon Jan 13 15:01:22 UTC 2020

#28174: Block non-.onion subresources on .onion websites?
 Reporter:  arthuredelstein           |          Owner:  tbb-team
     Type:  defect                    |         Status:  needs_information
 Priority:  Medium                    |      Milestone:
Component:  Applications/Tor Browser  |        Version:
 Severity:  Normal                    |     Resolution:
 Keywords:  TorBrowserTeam202001      |  Actual Points:
Parent ID:                            |         Points:  2
 Reviewer:                            |        Sponsor:  Sponsor27
Changes (by sysrqb):

 * status:  new => needs_information


 Replying to [ticket:28174 arthuredelstein]:
 > Right now, .onion sites can load HTTP or HTTPS subresources (scripts,
 > images, etc.).
 > But is this safe? Loading non-.onion subresources means we are
 > potentially leaking information including:
 > * the .onion domain
 > * the full top-level .onion URL
 > * other information about the content of the page
 > * the list of subresources requested by a .onion page
 > Leaks might happen by referer, fetch request, query string, etc. (I
 > haven't tested these yet and I'm not sure what leaks happen in
 > practice.) Such leaks would be particularly bad for "stealth" onion
 > sites.
 > Even worse, some of the non-.onion subresources may leak the onion
 > site's IP address. For example, a .onion website improperly configured
 > may accidentally include URLs pointing to their own server's non-.onion
 > IP address. Loading those subresources leaks the IP address not just to
 > the user but to anyone watching connections outside the Tor network.
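 The leak surface described above can be illustrated with a small check
 (a sketch only, using hypothetical example hosts) that classifies a
 subresource URL referenced from a .onion page as clearnet or not:

```python
from urllib.parse import urlparse

def is_clearnet_subresource(url: str) -> bool:
    """True if fetching this subresource would leave the onion service
    layer (i.e. its host is not a .onion address)."""
    host = urlparse(url).hostname
    if host is None:
        # Relative URL: it resolves against the .onion page itself.
        return False
    return not host.endswith(".onion")

# A clearnet CDN asset leaks the request (and possibly a Referer header
# carrying the .onion URL) outside the Tor network:
print(is_clearnet_subresource("https://cdn.example.com/app.js"))      # True
print(is_clearnet_subresource("http://abcdef1234567890.onion/a.js"))  # False
print(is_clearnet_subresource("/local/style.css"))                    # False
```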

 I'm not sure I understand the goal of this. In the simple case, a web
 developer has complete control over which subresources are used on the
 website; as such, they accept any risks associated with using non-onion
 subresources. Maybe we should provide more training/support explaining
 these risks, but I do not see the browser as the place where these
 restrictions should be imposed.

 I am beginning to see the benefit of blocking resources from clearnet
 addresses on more complicated websites, such as those where
 user-generated content is published. However, in this case, it seems
 like the website/server should implement sanitization or filtering in
 its own software, instead of expecting this functionality from the
 browser.
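 As a sketch of that server-side filtering (a toy regex-based pass with
 hypothetical names; real software would use a proper HTML parser),
 user-supplied markup could have clearnet src/href attributes dropped
 before publication:

```python
import re
from urllib.parse import urlparse

ATTR = re.compile(r'\s(src|href)="([^"]*)"')

def strip_clearnet_refs(html: str) -> str:
    """Remove src/href attributes that point at non-.onion hosts, so
    user-generated content cannot pull in clearnet subresources."""
    def keep_or_drop(m: re.Match) -> str:
        host = urlparse(m.group(2)).hostname
        if host is None or host.endswith(".onion"):
            return m.group(0)   # relative or .onion reference: keep
        return ""               # clearnet reference: drop the attribute
    return ATTR.sub(keep_or_drop, html)

cleaned = strip_clearnet_refs(
    '<img src="https://tracker.example.com/p.gif"> <img src="/ok.png">')
print(cleaned)  # prints: <img> <img src="/ok.png">
```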

 As a user, it is possible I may only want to load resources from .onion
 addresses, which wouldn't be related to leaking onion addresses. There is
 a torrc option (`OnionTrafficOnly`) which accomplishes this, and we could
 expose a UI preference for it - but, as gk mentioned, this sounds like
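 For reference, the mechanism mentioned above is, as I understand it,
 exposed in tor as a SocksPort isolation flag rather than a standalone
 torrc line; a minimal sketch (the port number is just an example):

```
SocksPort 9050 OnionTrafficOnly
```

 Per the tor manual, the `OnionTrafficOnly` flag is shorthand for
 `NoDNSRequest NoIPv4Traffic NoIPv6Traffic`, so a SOCKS port configured
 this way should carry only .onion traffic.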

Ticket URL: <https://trac.torproject.org/projects/tor/ticket/28174#comment:10>