“Should we not be buying VW, BMW, Siemens and Bayer technology and products today because they participated in holocaust and directly collaborated with Hitler?” – CEO of Kagi when given feedback re: Brave partnership

  • Nate Cox@programming.dev
    10 months ago

    God damn it.

    I am a paying subscriber to Kagi because the search results are excellent and there are no ads, so of course you show me a thread on “we should maybe add a small message to suicidal users telling them there is help for them” which then reads like a Truth Social propaganda thread, filled to the brim with “helping people is a slippery slope!!! muh freedoms!!” arguments.

    • sudneo@lemmy.world
      10 months ago

      This is something I really don’t get.

      • it is unclear whether anybody in history has ever been helped by that kind of message.
      • it is kind of a religious morality that suicide needs to be prevented, and that if someone wants to do it, it’s because they are not in control. This doesn’t mean it’s wrong in an absolute sense, but it’s very opinionated.
      • realistically speaking, there is no need to “search” for how to commit suicide.
      • trying to conclude what the user wants to do, rather than what they want to know (i.e. search), is IMHO exactly against what Kagi’s idea is. It’s a service that does only what it is asked for, and doesn’t try to “know” you, as a customer or user. No text editor points you to suicide hotlines by analyzing the text you are writing, and we would consider it extremely weird if it did. With search, however, we have gotten used to the tool trying to guess what we want to do, because Google does know you. I think the beauty of Kagi is going in the other direction.

      But let’s assume that all the previous points are invalid, and that - for a greater good - it’s worth displaying a message when someone is looking at suicide-related topics. What about “how to kill someone”, “how to rape”, “how to […]” with the hundreds of things that are universally considered wrong? And even worse, what about the thousands of things that are not universally considered wrong, but that some group thinks are wrong? “How to change sex”, “how to blow up a pipeline”, etc.?

      This, I think, was their point in that conversation, and I agree with it. The moment you try to interpret what the user wants to do with the information they ask for, and you decide to assume the responsibility of changing the user’s mind, there are hundreds or thousands of instances in which users or groups of users will demand you take a position for what they believe is right. Instead, I think a search engine should stop at providing information relevant to your query and not assume what you want to do with it. It’s not its place to correct people’s behavior or to educate them; the public education system should do that, and the healthcare system should ensure people have the right support. A search engine is (or better, should be) basically like a librarian, or a library index: you ask for what you want and they point you in that direction. They don’t try to guess why you are looking for books about torture or environmental activism.

      This at least is my perspective.