EnsignRedshirt [he/him]

  • 0 Posts
  • 37 Comments
Joined 4 years ago
Cake day: July 26th, 2020


  • The structure of Reddit’s content aggregation and curation leads to a regression to the mean. Things that are broadly agreed-upon, even if wrong, are amplified, and things that are controversial, even if correct, are attenuated. What floats to the top is whatever the hive mind agrees is least objectionable to the most people.

    One solution that seems to work elsewhere is to disable downvoting. Downvoting makes it too easy to suppress controversial perspectives. Someone could put forward a thoughtful position on something, and if a few people don’t like the title and hit the downvote button, that post may be effectively buried. No rebuttal, no discourse, just “I don’t like this, make it go away.” Removing the downvote means if you don’t like something, you can either ignore it, or you can put effort into responding to it.

    The “downvote to disagree” thing isn’t just an attitude problem, it’s a structural issue. No amount of asking people nicely to obey site etiquette will change the fact that the downvote button is a disagree button. If you don’t want a hive mind, you have to allow things you don’t like to be amplified.

    Twitter is actually better for this than Reddit because it has the quote function: you can amplify something you don’t like as a way of getting other people to hate it with you. It’s not perfect, but you can’t have it both ways. “Reddiquette” was never a real thing, just a polite fiction that ignores the Eternal September world we live in.

    If you have the same structure as Reddit, you will recreate Reddit. Lemmy isn’t going to be different if all the incentives and interactive elements are the same.
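The structural point can be made concrete with a toy ranking comparison. This is a sketch with hypothetical posts, vote counts, and scoring functions, not Reddit’s or Lemmy’s actual ranking algorithms: under a net score (ups minus downs), a divisive post gets buried even when it draws more engagement, while an upvote-only score lets it surface.

```python
# Toy comparison of two scoring rules. All numbers are made up for
# illustration; real platforms use more elaborate ranking formulas.

posts = {
    "consensus take":     {"ups": 40, "downs": 2},   # broadly agreeable
    "controversial take": {"ups": 50, "downs": 30},  # divisive but engaging
}

def net_score(votes):
    """Reddit-style score: upvotes minus downvotes."""
    return votes["ups"] - votes["downs"]

def up_only_score(votes):
    """Score when downvoting is disabled: upvotes alone."""
    return votes["ups"]

# Net scoring: consensus take (38) beats controversial take (20).
ranked_net = sorted(posts, key=lambda p: net_score(posts[p]), reverse=True)

# Upvote-only scoring: controversial take (50) beats consensus take (40).
ranked_up = sorted(posts, key=lambda p: up_only_score(posts[p]), reverse=True)

print(ranked_net)  # consensus take first
print(ranked_up)   # controversial take first
```

Same votes, opposite front page: the downvote button is what lets a minority of objectors cancel out a larger group's interest.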



  • Internal politics is going to be responsible for some of it. This is an unexpected opportunity for individuals to advance their careers or agendas outside of the usual process, and some of them are going to take it. They might not even dislike the idea of Harris being the nominee, but they want to find a way to use their support to their advantage. The Democrats are hardly a monolith; they’re a broad coalition that barely holds together at the best of times, so it’s not that weird that there would be conflict.

    There’s also the issue that there hasn’t been any sort of democratic process to select a new nominee. Harris makes sense for a number of reasons, and the party does have the authority to nominate whomever it wants, but it has to avoid making it look like party insiders are simply anointing a new nominee. It’s bad optics, if nothing else. This is also a pretty unprecedented situation, and it seems like no one knew for sure it was going to happen. It makes sense that there’s a conversation out in the open about who the nominee is going to be.

    As a candidate, she’s not the best choice, but she’s an improvement over Biden. I doubt she would have won a genuinely competitive primary process. She’s probably in the best position to be the nominee at this moment, but there are no doubt plenty of people who feel that this could have been handled better and are going to make their opinions heard.




  • Bill Burr is a surprisingly thoughtful and principled guy with consistently good opinions. He’s a comedian, and he doesn’t have any theory underpinning his worldview, but I bet most of the criticism he’s drawn in the past comes from liberals who are mad that he’s critical of liberals. I’m not at all surprised that he lit up Bill Maher over his boomer-ass Israel-Palestine takes.



  • Properly-designed tools with good data will absolutely be useful. What I like about this analogy with the talking dog and the braindead CEO is that it points out how people are looking at ChatGPT and Dall-E and going “cool, we can just fire everyone tomorrow” and no you most certainly can’t. These are impressive tools that are still not adequate replacements for human beings for most things. Even in the example of medical imaging, there’s no way any part of the medical establishment is going to allow for diagnosis without a doctor verifying every single case, for a variety of very good reasons.

    There was a case recently of an Air Canada chatbot that gave bad information to a traveler about a discount/refund, which eventually resulted in the airline being forced to honor what the chatbot said, because of course they have to honor what it says. It’s the representative of the company, that’s what “customer service representative” means. If a customer can’t trust what the bot says, then the bot is useless. The function that the human serves still needs to be fulfilled, and a big part of that function is dealing with edge-cases that require some degree of human discretion. In other words, you can’t even replace customer service reps with “AI” tools because they are essentially talking dogs, and a talking dog can’t do that job.

    Agreed that ‘artificial intelligence’ is a poor term, or at least a poor way to describe LLMs. I get the impression that some people believe the problem of intelligence has been solved, and it’s just a matter of refining the solutions and getting enough computing power, but the reality is that we don’t even have a theoretical framework for creating actual intelligence aside from doing it the old-fashioned way. These LLM/AI tools will be useful, and in some ways revolutionary, but they are not the singularity.




  • Are there any flat-moon conspiracy theorists? I feel like there’s way more evidence that the moon is flat. We see the same side all the time. If it were round, wouldn’t we see different parts of it? We’re supposed to believe that it’s a spheroid orbiting us at the exact rotational speed required so that the same side is facing us all the time? Be serious.









  • Look into design thinking and in particular ideation. There are lots of formal processes, exercises, activities, etc. that individuals and teams use in all sorts of contexts specifically for coming up with ideas. The process is usually one of throwing a bunch of things on the table, sorting through them, discarding most of them, elaborating on the ones that seem interesting, then following one to completion, or at least to some sort of first draft/prototype/mockup. You then decide whether to develop the draft further, or declare it a dead end and start from scratch. The thing with “ideas” is that all of them are terrible and only serve to guide us toward doing something interesting. Creating things is an intensely iterative process, and what you start with is unlikely to look much like what you end up with after a number of iterations.

    Ideas are also all derivative. There are no new ideas, just riffs on existing ones. Even the most interesting and innovative works were influenced by past works, or works from different disciplines, or inspired by nature. If you’re looking to make a short comic, start by figuring out which works, artists, and styles you like. Try recreating parts of them, or emulating them, or combining elements of them, and see if the results speak to you. That’s one of the few actually useful applications of LLM-based AI: you can quickly test concepts, keep some elements and discard others, do mashups, etc. When something grabs you, try to figure out what it is that resonates, then try to recreate it with your own spin.

    Ultimately, ideas are just prompts for doing work, and having a good idea (to the extent that such a thing even exists) is far less important than being willing to test a number of ideas to find out what will motivate you to spend real time and effort on creating something.