• Ada@lemmy.blahaj.zone · 11 months ago

    You miss the point. Your approach requires the targeted minority to experience the hate first and then react to it, and gives them no way to proactively avoid the content from new sources. It also ensures that every member of the minority in the community in question has a chance to see it, and has to remove it individually.

    That suits bigots fine and, unsurprisingly, isn’t sustainable for many targets of bigotry.

    • cameron_vale@lemm.eeOP · 11 months ago

      Your approach requires the targeted minority to experience the hate first

      That isn’t so. There is vote propagation among peers to consider.

      If a trusted (upvoted) peer or peers downvote a bigot’s posts, then you will see that bigot downvoted from your own perspective as well.
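
      To make this concrete, here is a minimal sketch of that propagation step. It is purely illustrative: the function, the trust weights, and the idea of deriving trust from past upvotes are assumptions for the sake of the example, not an existing Lemmy feature.

      ```python
      # Hypothetical sketch of peer vote propagation, as described above.
      # Nothing here is a real Lemmy API; it is an illustration only.

      def effective_score(viewer, votes, trust):
          """Score a post from the viewer's perspective.

          votes: dict mapping peer -> vote (+1 or -1) on this post
          trust: dict mapping (viewer, peer) -> weight, e.g. derived
                 from how often the viewer has upvoted that peer
          """
          if viewer in votes:  # the viewer's own vote always wins
              return votes[viewer]
          # Otherwise inherit the trust-weighted votes of peers.
          return sum(trust.get((viewer, peer), 0.0) * vote
                     for peer, vote in votes.items())

      # A bigot downvoted by a trusted peer is already scored negatively
      # for the viewer, before the viewer ever engages with the post.
      votes = {"trusted_peer": -1}
      trust = {("me", "trusted_peer"): 0.9}
      print(effective_score("me", votes, trust))  # -0.9
      ```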

      • Ada@lemmy.blahaj.zone · 11 months ago

        You still see it though, especially if it’s a direct reply. And it is still a reactive system, one that lets bigots spew hate until they get downvoted into silence, then just come back with another account and do it again.

        Whilst the latter problem still exists even with moderators, at least a moderator can reduce the number of people exposed to hate.

        I’ve lived this. I have zero desire to use the system you describe, because I know it leads to toxicity that I don’t need.

        • cameron_vale@lemm.eeOP · 11 months ago

          Older bigots you would filter away, as described above.

          For brand-new bigots, that might require an “if the person’s history is too small, exclude them” type of rule. Which is less than ideal, yes; lots of false positives there.
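
          A rough sketch of what such a rule could look like, with a made-up threshold and field names (nothing here is a real Lemmy mechanism):

          ```python
          # Hypothetical "too little history" filter; threshold is arbitrary.
          MIN_HISTORY = 10  # minimum prior posts + comments before content shows

          def passes_history_check(account: dict) -> bool:
              """Hide content from accounts too new to have earned any trust.

              The false-positive cost: legitimate new users are hidden
              right along with the brand-new bigots.
              """
              history = account.get("post_count", 0) + account.get("comment_count", 0)
              return history >= MIN_HISTORY
          ```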

          But let’s not put the cart before the horse. I think it’s a pretty good idea and I’d like to see it tested.

          • Ada@lemmy.blahaj.zone · 11 months ago

            For brand-new bigots, that might require an “if the person’s history is too small, exclude them” type of rule. Which is less than ideal, yes; lots of false positives there.

            Doesn’t work. For trans folk particularly, throwaway accounts not linked to their main account are often the first step of exploring their identity online.

              • Ada@lemmy.blahaj.zone · 11 months ago

                This is all hypotheticals for you, based on some ideal you think is important.

                It’s lived experience for me. I told you it wouldn’t work for many folk. Your priority is “free” speech ahead of well-being, and well, as a member of a targeted minority on the internet, my priorities are in a different order.

                  • Ada@lemmy.blahaj.zone · 11 months ago

                    There’s nothing to validate. You asked a question, I answered it.

                    I don’t want the system you describe. Should it ever exist though, if it appeals to you, then use it. Then we’re both happy.