[tor-talk] bug in onion browser

Jadaan jadaan at list.ru
Mon Mar 10 06:26:30 UTC 2014

Please read it all, and sorry for my poor English!
I use Tor Browser and also Onion Browser, purchased from the Apple App Store.
All sites detect that I am using an iPhone with iOS 7.0.6,
even when I choose the option to change the user agent.

You can check from this link


There is a link in blue that shows all information about you:

"Показать подробную информацию" ("Show detailed information")

Does this mean that even if we spoof the user agent, the spoofing is not real?

And can anyone tell me what a

csrftoken code in cookies means,
and what kind of information they take from cookies or the HTTP headers
to identify me?

And is there any way or program by which I can see what these cookies take from me to make the identification?
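One way to actually see what a site receives from you: run a tiny local web server and visit it with the browser you want to test. Below is a minimal sketch (an illustration only, assuming Python 3 is installed; the port and the list of headers are arbitrary choices for the example, not what any real site uses):

```python
# Minimal sketch (assumes Python 3): run this, then visit
# http://localhost:8000 with the browser you want to inspect. The page
# echoes back the headers the browser sent -- exactly what a site sees.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Headers most commonly used to recognize a browser (example list).
IDENTIFYING = ("User-Agent", "Accept-Language", "Accept", "Cookie")

def identifying_headers(headers):
    """Pick out the identifying headers, marking any that were not sent."""
    return {name: headers.get(name, "(not sent)") for name in IDENTIFYING}

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = "\n".join(
            f"{name}: {value}"
            for name, value in identifying_headers(self.headers).items()
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

# To start it: HTTPServer(("127.0.0.1", 8000), EchoHandler).serve_forever()
```

A tool like this only shows the HTTP side; JavaScript running on the page can read far more (screen size, time zone, fonts), which is one way a spoofed user agent can still be detected.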

And even with all this:

I use a USB modem on a PC with one SIM card, so every time I connect to the internet I have a dynamic IP. I change my MAC address with the tool TMAC, delete my Firefox profile and create a new one, and change the Windows XP hostname, owner name and user name (all changed through regedit).
I type "net config rdr" in cmd, and every time I connect to the internet I check "net config rdr" and it has all changed.
Even so, it is not enough.
Maybe there is a higher level that I need to change to be hidden??? But I don't know what.
Before, I thought that changing the MAC address, the profile and the info shown by "net config rdr" was enough to be hidden, but I was wrong.
What do I need to change?

All this info was changed,
and even so I was detected.
What level of capability do they have to identify me?
Maybe from using one SIM they get info from the ISP, but even when I change the SIM card every time I connect to the internet, can they still locate and identify me?
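To show how little it takes to single a browser out, here is a rough, hypothetical sketch (not any particular site's actual method): combine a few ordinary request headers and hash them. Changing the user agent alone changes only one input; everything else still narrows you down:

```python
# Rough, hypothetical illustration (not any particular site's method):
# a handful of ordinary request headers already combine into a fairly
# distinctive fingerprint, so spoofing the User-Agent alone helps little.
import hashlib

def header_fingerprint(headers):
    """Hash a sorted, normalized view of the headers into a short ID."""
    material = "|".join(f"{k.lower()}={v}" for k, v in sorted(headers.items()))
    return hashlib.sha256(material.encode("utf-8")).hexdigest()[:16]

iphone = {
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0_6 like Mac OS X)",
    "Accept-Language": "ru-RU,ru;q=0.9",
    "Accept": "text/html,application/xhtml+xml",
}
# Spoof only the User-Agent and keep everything else the same.
spoofed = dict(iphone, **{"User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:24.0)"})

print(header_fingerprint(iphone))   # one fingerprint
print(header_fingerprint(spoofed))  # a different hash, but the remaining
                                    # headers still group both visits together
```

The IP address, the SIM and anything the ISP logs are just more inputs to the same kind of profile, which is why changing one of them at a time rarely makes you unrecognizable.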


> On 10 March 2014, at 5:48, tor-talk-request at lists.torproject.org wrote:
> Send tor-talk mailing list submissions to
>    tor-talk at lists.torproject.org
> To subscribe or unsubscribe via the World Wide Web, visit
>    https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
> or, via email, send a message with subject or body 'help' to
>    tor-talk-request at lists.torproject.org
> You can reach the person managing the list at
>    tor-talk-owner at lists.torproject.org
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of tor-talk digest..."
> Today's Topics:
>   1. Re: Pissed off about Blacklists, and what to do? (Paul Syverson)
>   2. More entry nodes prevent DDOS attacks on hidden services?
>      (hikki at Safe-mail.net)
>   3. Re: Pissed off about Blacklists, and what to do?
>      (Fabio Pietrosanti (naif))
>   4. Another fake key for my email address (Erinn Clark)
>   5. Tor-ramdisk 2014 20140309 released (Anthony G. Basile)
>   6. MaxMemInCellQueues questions (s7r at sky-ip.org)
>   7. configure Vidalia stand alone (Joe Btfsplk)
> ----------------------------------------------------------------------
> Message: 1
> Date: Sun, 9 Mar 2014 09:28:56 -0400
> From: Paul Syverson <paul.syverson at nrl.navy.mil>
> To: tor-talk at lists.torproject.org
> Subject: Re: [tor-talk] Pissed off about Blacklists, and what to do?
> Message-ID: <20140309132856.GA68140 at vpn212046.nrl.navy.mil>
> Content-Type: text/plain; charset=us-ascii
>> On Sun, Mar 09, 2014 at 10:21:52AM +0100, Fabio Pietrosanti (naif) wrote:
>> On 3/8/14, 8:39 PM, Paul Syverson wrote:
>>> If you naively view Tor as Yet Another Public Proxy, I agree. But this
>>> is the same thinking that leads you to block all encrypted traffic you
>>> aren't MITMing. There may be environments where it makes sense, but
>>> most of the time you are hurting yourself more than you are helping,
>>> And enough places have learned that preventing encrypted traffic hurts
>>> them that many people reading this probably don't remember when it was
>>> commonly argued that the opposite was preferable.  If you have
>>> customers or employees that could benefit from personal defense in
>>> depth or if your corporate operations do, then you are unnecessarily
>>> hurting yourself. As Andrew noted, if you just buy a box and use its
>>> defaults, you probably aren't getting what you want.  Directing
>>> incoming Tor traffic appropriately, possibly requiring extra
>>> authentication steps for anything where you don't need to permit
>>> anonymous-from-you access to your services, makes much more sense.
>> From a Perimeter Security point of view, Tor is a public proxy service
>> that enables someone to connect indirectly to a remote IT system, hiding
>> their IP.
>> What you suggest is "good common sense" for a "properly well organized
>> and well funded" large organization, where the "IT Governance" and the
>> "Security Governance" works very well together.
>> But in the dirty-real-world, enterprise application development is done
>> through a series of contractors, IT is often managing the application's
>> infrastructure while Security is managing the perimeter security and
>> incident response.
>> In a situation like that it's organizationally and politically very
>> difficult to make the decision that you are suggesting, requiring some
>> internal stakeholder to become the sponsor of "very ponderate decisions"
>> against public proxy service users.
>> A decision to "manage in a soft way connections coming from public proxy
>> services" needs more effort than just blocking them.
>> So, let's assume we have an internal sponsor in a large organization
>> that wants to use a soft approach.
>> The decision will reach some very high level senior manager (being the
>> IT manager or Security Manager).
>> That 1st-level management will ask some very simple questions in
>> order to take a decision:
>> * Which is the business impact?
>> * Do we have numbers on how many of our customers have this behaviour of
>> shielding their IP?
>> * Of those who shield their IP, how many are already our customers?
>> * Which are the residual risks we're opening by managing softly rather
>> than blocking?
>> * What other companies are doing with this problem?
>> * What does our super-senior security advisor think of this problem?
>> * What does our IT Security Product Vendor recommend about this problem?
>> * How much does it cost to manage in a soft way?
>> Frankly speaking, I think that in most situations the decision will
>> not be in favour of managing in a soft way (especially not for resources
>> that could be abused).
> I understand that many organizations are dysfunctional and don't use
> common sense, but that isn't something to recommend.  Solving such
> dysfunction is hard, highly contextual, and I'm not pretending it is
> something for which I have expertise. But there are still very simple
> things security folks can do if dysfunction has not gone off the
> deep end. Selective, short-lived blocking based on incidents is
> different from permanent blocks, as Andrew commented, speaking as
> former head of IT of a global company.  Similarly having a perimeter
> rule-set that includes requiring authentication, or solving a CAPTCHA,
> or whatever is specifically appropriate based on IP address rather
> than just permanent blocks as I commented.
> It seems that your soft/hard distinction is between permanent blocking
> of a class of IP addresses and anything else.  Anything that crude is
> probably going to cost in some way even if you don't know exactly how
> yet.  It isn't necessarily conservative to make blocking decisions
> without even asking the above questions or because the cost of finding
> good answers to some of them is itself too high to justify. I
> understand how, e.g., a company worried about spam could come to block
> all email from Europe for half a year, as one of the largest US ISPs
> did a decade ago.  But that's different from recommending that they do so.
> aloha,
> Paul
> ------------------------------
> Message: 2
> Date: Sun, 9 Mar 2014 11:13:39 -0400
> From: hikki at Safe-mail.net
> To: tor-talk at lists.torproject.org
> Subject: [tor-talk] More entry nodes prevent DDOS attacks on hidden
>    services?
> Message-ID: <N1-1d1f-vEYNR at Safe-mail.net>
> Content-Type: text/plain; charset=us-ascii
> Since Tor uses only 3 entry nodes by default, it's often the entry nodes that are DDoSed, not the server.
> Will increasing the number of entry nodes for the hidden service make it available to more users during DDoS attacks, if you've got a very powerful server?
> But more entry nodes equals less security, right?
> ------------------------------
> Message: 3
> Date: Sun, 09 Mar 2014 18:10:49 +0100
> From: "Fabio Pietrosanti (naif)" <lists at infosecurity.ch>
> To: tor-talk at lists.torproject.org
> Subject: Re: [tor-talk] Pissed off about Blacklists, and what to do?
> Message-ID: <531CA099.7040005 at infosecurity.ch>
> Content-Type: text/plain; charset=ISO-8859-1
> On 3/9/14, 2:28 PM, Paul Syverson wrote:
>> I understand that many organizations are dysfunctional and don't use
>> common sense, but that isn't something to recommend.  Solving such
>> dysfunction is hard, highly contextual, and I'm not pretending it is
>> something for which I have expertise. But there are still very simple
>> things security folks can do if dysfunction has not gone off the
>> deep end. Selective, short-lived blocking based on incidents is
>> different from permanent blocks, as Andrew commented, speaking as
>> former head of IT of a global company.  Similarly having a perimeter
>> rule-set that includes requiring authentication, or solving a CAPTCHA,
>> or whatever is specifically appropriate based on IP address rather
>> than just permanent blocks as I commented.
> While I understand and agree from the technical point of view, this
> approach does not scale up, as a matter of effort.
> Having additional authentication or solving a CAPTCHA is something that
> usually requires modifying the application.
> Modifying an application in a large enterprise means that someone needs to:
> - convince the product manager of the application that this a valuable
> feature
> - allocate a budget for this additional "functional requirements"
> - prioritize so it would not end-up in the "never to be implemented
> requirements"
> So the "Security Department" cannot do anything directly in this
> process other than "blocking at perimeter" using a functionality that
> they already have in their Firewall/IPS, usually clicking on a couple of
> checkboxes.
> Unless we are clearly able to demonstrate the business value of
> avoiding IP-based blocking in favour of application-level enforcement,
> the IT Security Product Vendor's built-in features will win.
> Probably the Tor Project could work on creating a set of CIO and CISO
> focused papers, explaining the business value of improving the
> accessibility of their enterprise applications and services to users
> using Tor.
> But that does require an important Advocacy and Lobby activity to be
> done within the Information Security and IT Security world, reasonably
> focusing on senior and middle management.
> The manager will always ask "Show me the numbers, show me the best
> industry practices".
> That's probably what we need in order to feed the cat.
> Fabio
> ------------------------------
> Message: 4
> Date: Sun, 9 Mar 2014 20:25:56 +0100
> From: Erinn Clark <erinn at torproject.org>
> To: tor-talk at lists.torproject.org
> Cc: tor-dev at lists.torproject.org
> Subject: [tor-talk] Another fake key for my email address
> Message-ID: <20140309192556.GC5591 at berimbolo.double-helix.org>
> Content-Type: text/plain; charset="us-ascii"
> Hi everyone,
> In September last year I discovered a fake key for my torproject.org email
> address[1]. Today I discovered another one:
> pub   2048R/C458C590 2014-02-13 [expires: 2018-02-13]
>      Key fingerprint = 106D 9243 7726 CD80 6A14  0F37 B00C 48E2 C458 C590
> uid                  Erinn Clark <erinn at torproject.org>
> sub   2048R/D16B3DB6 2014-02-13 [expires: 2018-02-13]
> To reiterate what I said last time this happened:
> 1. That is NOT MY KEY. Do not under any circumstances trust anything that may
> have ever been signed or encrypted with this key. I looked around and was
> unable to find anything, but nonetheless, it is out there and that is creepy.
> 2. If anyone on any of these lists has encountered this key anywhere -- the
> main fear being that it has been used to fraudulently sign packages of some
> kind -- can you please let me/us know ASAP?
> Tor Project official signatures are listed here: 
> https://www.torproject.org/docs/signing-keys.html.en
> Consider that the canonical source for all signatures! Be suspicious of
> anything not listed there and let us know if you ever find anything.
> I want to note here that last year I created a new key which also belongs to me
> and I just haven't switched to yet. I am not signing any Tor packages
> whatsoever with this key, but it does belong to me and has several signatures
> from people I've met in person, some of whom also signed my old/current
> (63FEE659) key:
> pub   4096R/91FCD12F 2013-09-21
>      Key fingerprint = 724B 96C1 997A E999 F0C0  0F8A F8F4 9DD8 91FC D12F
> uid                  Erinn Clark <erinn at torproject.org>
> uid                  Erinn Clark <erinn at double-helix.org>
> uid                  Erinn Clark <erinn at debian.org>
> sub   4096R/1B749632 2013-09-21
> With declining trust in the web of trust,
> Erinn
> [1] https://lists.torproject.org/pipermail/tor-talk/2013-September/029752.html
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: signature.asc
> Type: application/pgp-signature
> Size: 490 bytes
> Desc: Digital signature
> URL: <http://lists.torproject.org/pipermail/tor-talk/attachments/20140309/362d99d1/attachment-0001.sig>
> ------------------------------
> Message: 5
> Date: Sun, 09 Mar 2014 18:38:23 -0400
> From: "Anthony G. Basile" <basile at opensource.dyc.edu>
> To: tor-ramdisk <tor-ramdisk at opensource.dyc.edu>,
>    tor-talk at lists.torproject.org
> Subject: [tor-talk] Tor-ramdisk 2014 20140309 released
> Message-ID: <531CED5F.8070500 at opensource.dyc.edu>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> Hi everyone
> I want to announce to the list that a new release of tor-ramdisk is out. 
> Tor-ramdisk is an i686, x86_64 or MIPS uClibc-based micro Linux 
> distribution whose only purpose is to host a Tor server in an 
> environment that maximizes security and privacy. Security is enhanced by 
> hardening the kernel and binaries, and privacy is enhanced by forcing 
> logging to be off at all levels so that even the Tor operator only has 
> access to minimal information. Finally, since everything runs in 
> ephemeral memory, no information survives a reboot, except for the Tor 
> configuration file and the private RSA key, which may be 
> exported/imported by FTP or SCP.
> Changelog:
> This release bumps tor to version and the kernel to 3.13.5 plus 
> Gentoo's hardened-patches.  All other components are kept at the same 
> versions as the previous release.   We also add haveged, a daemon to 
> help generate entropy on diskless systems, for a more cryptographically 
> sound system.  Testing shows that previous versions of tor-ramdisk were 
> operating at near zero entropy, while haveged easily keeps the available 
> entropy close to 9000 bits. Upgrading is strongly encouraged.
> i686:
> Homepage: http://opensource.dyc.edu/tor-ramdisk
> Download: http://opensource.dyc.edu/tor-ramdisk-downloads
> x86_64:
> Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
> Download: http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads
> -- 
> Anthony G. Basile, Ph. D.
> Chair of Information Technology
> D'Youville College
> Buffalo, NY 14201
> (716) 829-8197
> ------------------------------
> Message: 6
> Date: Mon, 10 Mar 2014 01:31:29 +0200
> From: "s7r at sky-ip.org" <s7r at sky-ip.org>
> To: tor-talk at lists.torproject.org
> Subject: [tor-talk] MaxMemInCellQueues questions
> Message-ID: <531CF9D1.7010903 at sky-ip.org>
> Content-Type: text/plain; charset=ISO-8859-1
> Hi Onions,
> Reading the blog post about the possible DDoS attack that terminates
> relays by making RAM memory scarce, I would like to ask the following:
> In Tor versions > 2.4 a defense against these types of attacks was
> deployed (MaxMemInCellQueues). MaxMemInCellQueues shall kill circuits
> when RAM memory gets low, based on cell lifetime ("oldest circuit(s)").
> The post says:
> "There is likely not one single value
> that makes sense here: if it is too high, then relays with lower memory
> will not be protected; if it is too low, then there may be more false
> positives resulting in honest circuits being killed."
> -- in this case, with the default install of Tor (from torproject.org
> or repositories), without any editing of torrc file from user's side,
> what is the MaxMemInCellQueues value set to? Or is it not set at all?
> With an improper configuration of this value by users running
> relays, won't this cause a penalty to the performance of the Tor
> network overall?
> It could be confusing. How many of the current admins are using
> MaxMemInCellQueues and, more importantly, is it used correctly or are
> honest circuits being killed? (I could add that since a few weeks ago
> circuits to the OFTC IRC server, for example, do not last more than 24 hours.)
> How can the MaxMemInCellQueues setting help regular non-advanced users
> who run relays using the Vidalia relay packages for Windows? How can this
> category of users protect against this type of attack?
> -- 
> PGP Public key: http://www.sky-ip.org/s7r@sky-ip.org.asc
> ------------------------------
> Message: 7
> Date: Sun, 09 Mar 2014 20:48:13 -0500
> From: Joe Btfsplk <joebtfsplk at gmx.com>
> To: Tor-Talk <tor-talk at lists.torproject.org>
> Subject: [tor-talk] configure Vidalia stand alone
> Message-ID: <531D19DD.9050103 at gmx.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> The only comments I've seen about using Vidalia 0.2.21 - Win (the stand 
> alone package) w/ TBB 3.5.x, to see the map & connections, is just 
> "install / extract it to its own folder & start it after TBB is already 
> running."
> Like falling off a log?  That doesn't work for me.  Perhaps because I 
> don't let TBB extract to the default location (I use another HDD than 
> where C:\ is located)?  Or, the devil's in the details.
> I need to control the country for exit relays - temporarily, while 
> setting up an email acct w/ TBB - then remove the restriction. Vidalia's not 
> required to force exit relays in Tor, but makes it easier to see 
> countries actually used.
> Anyway, I've not found many details on how to edit Vidalia settings now 
> so it will detect that Tor is already running (it is). Especially when 
> TBB is not on C:\.
> As opposed to what the Vidalia settings file still shows as a check box 
> (the *old* way of how Vidalia / Tor worked):  "Start Tor software when 
> Vidalia starts;"
> instead of... what the FAQ instructions at 
> https://trac.torproject.org/projects/tor/wiki/doc/TorBrowserBundle3FAQ 
> seem to indicate: just put Vidalia anywhere & start it anytime after 
> TBB is running... (~ no mods needed & they'll find each other).
> Thanks.
> ------------------------------
> Subject: Digest Footer
> _______________________________________________
> tor-talk mailing list
> tor-talk at lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
> ------------------------------
> End of tor-talk Digest, Vol 38, Issue 15
> ****************************************
