Hi everyone,

Resurrecting this thread and adding in Emma, since she’s been asking about it, and anarcat, since he might need to be in the loop as well.

Let’s just go ahead and use subdomain URLs for search, as that seems to be the easiest option, and not making a decision either way means nothing moves forward :)

We can always iterate on it if other options become available.

Thanks!

Pili 


Project Manager: Tor Browser, UX and Community teams
pili at torproject dot org 
gpg 3E7F A89E 2459 B6CC A62F 56B8 C6CB 772E F096 9C45

On 12 Feb 2019, at 14:34, silvia [hiro] <hiro@torproject.org> wrote:

Hi all!

On 2/11/19 5:46 PM, Alison Macrina wrote:
[snip]
I don't see any issue with doing it that way. Could the user check boxes
to select just the support portal (or another portal)?

Alison

Sure we can. The UX can be designed however we want it; the only open
question is the URL.

It would help if we could use URLs of the form
search.support.torproject.org, search.tb-manual.torproject.org, and so on.

Talk soon,

-hiro



On 11/16/18 9:22 AM, Linus Nordberg wrote:
emma peel <emma.peel@riseup.net> wrote
Fri, 16 Nov 2018 07:41:00 +0000:

silvia [hiro]:
On 11/15/18 10:06 AM, emma peel wrote:
What I like about the 'central search' idea is that you can get a
User Manual result when searching Tor Support. Because we have so
many different pieces of content, I liked the idea of moving the
user from one site to the other through the searches.

Is this still going to happen with your proposal?
I like that too, but I think UX wanted search results per portal?
I don't know about doing it project-wide, but I feel that for example
support.torproject.org and tb-manual.torproject.org could share search
results.
I think this is a good time for figuring out what Tor Project wants from
a search function. I've put down a couple of statements sprinkled with
questions below. Please jump in and argue against false statements and
answer questions where possible. And please add more questions.


- The web site support.tpo needs a search field and a button next to it,
 resulting in the user seeing a list of matching URLs (and their
 titles) in their browser.

- What corpus would such a search look at? support.tpo only? support.tpo
 and tb-manual.tpo? More than that?

- Are there other tpo sites that need/want a search function? Should
 search results include matches from other tpo sites as well, or only
 the one the user is currently visiting?

- Sending the user to a separate site, say search.tpo, is considered not
 UX friendly enough.

- Is search.<site>.tpo good enough?

- Are we limited to using solr, as mentioned in #25322, or can we
 explore other options?

- User-facing tpo websites are "on the static rotation" because
 that's how we can keep them up and running given the resources at
 hand. Adding dynamic content, i.e. anything that is not "oh, that URL
 corresponds to this file, let's send it to the user", would not be
 possible on our current set of VMs given the load we see on
 user-facing tpo websites. This means that one of the proposed
 solutions, with web servers proxying requests to a separate service,
 search.tpo, is not an option. Another argument against proxying is
 that it breaks the expectation of end-to-end security given by HTTPS.


Roger Dingledine:
On Fri, Nov 16, 2018 at 10:22:24AM +0100, Linus Nordberg wrote:
- Are we limited to using solr, as mentioned in #25322, or can we
 explore other options?
I have vague memories that Isa and Hiro explored other options,
like outsourcing it to duckduckgo, but apparently the user flow was
horrible. So, I don't know what constraints we want now, but there is
some history of exploring other options.

- User-facing tpo websites are "on the static rotation" because
 that's how we can keep them up and running given the resources at
 hand. Adding dynamic content, i.e. anything that is not "oh, that URL
 corresponds to this file, let's send it to the user", would not be
 possible on our current set of VMs given the load we see on
 user-facing tpo websites. This means that one of the proposed
 solutions, with web servers proxying requests to a separate service,
 search.tpo, is not an option.
If there's some way to limit the number of searches (proxypasses) going
at once, so a crawler doesn't take down (fill all the slots of) all of
our static webservers, this idea might still be worth exploring. I feel
a bit bad putting in place something that is so obviously going to be a
source of ongoing pain, but I don't know of amazing better options that
match all the other goals.
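For what it's worth, Apache's mod_proxy can already cap the number of connections it opens to a backend via the worker's max= parameter, which is one way to keep a crawler from filling every server slot with search requests. A rough sketch (the path, port, and numbers below are made up for illustration, not a recommendation):

```apache
# Hypothetical sketch: proxy /search to a local search backend, but cap
# the connection pool so runaway search traffic can't starve static
# file serving. Note: max= is per httpd child process under prefork,
# so the effective global cap depends on the MPM configuration.
ProxyPass        /search http://127.0.0.1:8983/solr/select max=5 ttl=30
ProxyPassReverse /search http://127.0.0.1:8983/solr/select
```

That limits backend connections rather than request rate, so it's a blunt instrument, but it might be enough to bound the ongoing pain.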

Another argument against proxying is that it breaks
 the expectation of end-to-end security given by HTTPS.
If we're proxying to another service running *on that same machine*,
then I think we're ok on this point. It's just if we have some central
separate search service that it would be a problem. So for example if
solr is our choice, we could run a replicated solr on each webserver.
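If we did go the per-webserver route, Solr's built-in index replication would let each webserver poll a central index builder and serve queries from a local copy, keeping the query path on the same machine as the HTTPS termination. A rough sketch of the solrconfig.xml fragment on each webserver (the host name, core name, and interval are made up for illustration):

```xml
<!-- Hypothetical sketch: each webserver runs Solr as a replication
     follower, pulling the index from a central builder every 10
     minutes. Queries never leave the local machine. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://index-builder.internal:8983/solr/support/replication</str>
    <str name="pollInterval">00:10:00</str>
  </lst>
</requestHandler>
```

The index builder would be the only dynamic piece; the webservers just serve a read-only local index, which fits the "static rotation" model better than proxying.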

--Roger

_______________________________________________
tor-community-team mailing list
tor-community-team@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-community-team