Hey again, just to be clear:
I am not an official developer, but decided to give my thoughts anyway.
If you require advanced security, look into the Vanguards project and its detection of "abnormal" traffic and cells. It is mostly designed for onion service operators, but clients can use it too :-)
All the best,
George
On Thursday, July 17th, 2025 at 5:11 AM, Khaled Roomi <khaledroomi2013(a)gmail.com> wrote:
> I got it, thank you George!
>
> > On 17 Jul 2025, at 6:26 AM, George Hartley hartley_george(a)proton.me wrote:
> >
> > Hey,
> >
> > Thanks for sharing your idea.
> >
> > That said, there are a few big issues with your proposal:
> >
> > 1.) Centralization of trust: Giving a fixed group of “original volunteers” special relay status adds centralized points of trust and failure. Tor has always aimed to avoid that — no single group should be in a position to de-anonymize users.
> >
> > 2.) The concept still exposes the IP somewhere: Replacing the guard with a “shield” doesn’t really solve the problem — it just moves it. The shield would now see the user’s IP instead of the guard, so we’re back to the same trust issue, just in a different spot.
> >
> > 3.) "Hard-coded" nodes are risky: If we hard-code a set of shield nodes, they become easy to block or target. And if any of them go down, users could lose access or hidden services could get de-anonymized.
> >
> > 4.) More latency: Adding an extra hop (shield → guard → middle → exit) makes the whole system slower and more complex, without clear benefits to anonymity.
> >
> > In a way, a Snowflake bridge already does something like this, as far as I know: a volunteer runs the Snowflake proxy (for example as a Firefox extension), and that proxy relays the client's traffic onward toward the guard.
> >
> > 5.) Exclusion and scalability: Restricting shield operation to only a small historical group doesn’t scale well and could create unnecessary gatekeeping. It also means we’d be relying on fewer people, which weakens the network.
> >
> > 6.) As mentioned above with Snowflake, censorship resistance is already handled elsewhere.
> >
> > Obfuscation for censored users is already addressed through pluggable transports like obfs4, meek, snowflake, etc. These are designed to be dynamic and hard to detect — hardcoded nodes would be a step back.
> >
> > In short: it’s a well-meaning idea, and the concerns behind it are valid, but the approach would likely introduce more problems than it solves. The current design of Tor reflects hard lessons about trust, decentralization, and threat modeling learned over the years.
> >
> > Thank you,
> >
> > - George
> >
> > On Wednesday, July 16th, 2025 at 11:45 AM, Khaled Roomi via tor-dev tor-dev(a)lists.torproject.org wrote:
> >
> > > Hi Tor, I wanted to contact you about an idea I want to share with you. My idea is making all the original volunteers (who have been in the project since it began) run a new kind of relay, like some sort of shield, and no one other than those volunteers can run a shield node. The Tor client is hardcoded to connect to one of the shields a volunteer is running; the shield then connects to the guard, then the middle, then the exit. It improves anonymity by hiding the user’s IP address from the guard, because law enforcement can run relays too. And since the client will only connect to the shields provided by the volunteers, there’s no way that law enforcement or any criminal can see someone’s IP. For people in censored countries, volunteers should also run shields that obfuscate network traffic so ISPs won’t catch people trying to connect through Tor. Tor Browser would become slower but more anonymous. That’s my idea; I hope you guys are safe, and goodbye.
> > > _______________________________________________
> > > tor-dev mailing list -- tor-dev(a)lists.torproject.org
> > > To unsubscribe send an email to tor-dev-leave(a)lists.torproject.org