It’s like saying Microsoft Windows is the most loved OS on PC. People just go with the option in front of them. Spotify is the biggest streaming service now, Amazon Music ties in with Alexa.
And some shows have a slightly different intro for each episode, which might make you want to watch it every time.
The intro is the opening sequence of the show. People usually watch that on the first episode, but if you’re binge-watching a show you don’t want to keep seeing the intro over and over again for each episode.
Also this “shower thought” is word for word one of the popular posts on lemmy from a few weeks ago.
The equivalent expression in my language is “the drop that filled the glass”. As with the camel, the glass was already full, it just needed one more drop to reach its limit.
Hallucinations are an issue for generative AI. This is a classification problem, not gen AI. This type of use for AI predates gen AI by many years. What you describe is called a false positive, not a hallucination.
For this type of problem you use AI to narrow down a set to a more manageable size. e.g. you have tens of thousands of images and the AI identifies a few dozen that are likely what you’re looking for. Humans would have taken forever to manually review all those images. Instead you have humans verifying just the reduced set, and confirming the findings through further investigation.
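As a rough sketch of that triage workflow (the scoring function and names here are purely illustrative, standing in for whatever classifier is actually used):

```python
# Hypothetical triage: a classifier assigns each image a confidence score,
# and only the high-scoring subset goes to human reviewers. False positives
# in that subset are caught during the manual verification step.
def triage(images, score_fn, threshold=0.9):
    """Return the subset of images whose classifier score meets the threshold."""
    return [img for img in images if score_fn(img) >= threshold]

# Toy stand-in for classifier output over 10,000 images.
scores = {f"img_{i}": (i % 100) / 100 for i in range(10_000)}
flagged = triage(scores, lambda img: scores[img])
print(len(flagged))  # humans review only this reduced set, not all 10,000
```

The point is the funnel: the model's job is recall on a huge set, and the humans' job is precision on the small set it hands them.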
Reminds me of that scene from The Cube
I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn’t seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.
All those people can live their lives just fine without seeing political posts on Lemmy.
Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.
Not sure what that’s supposed to help with. I’d be even more uncomfortable if my steak had eyes and made eye contact than when a person does it.
Make a large enough model, and it will seem like an intelligent being.
That was already true in previous paradigms. A non-fuzzy non-neural-network algorithm large and complex enough will seem like an intelligent being. But “large enough” is beyond our resources and processing time for each response would be too long.
And then you get into the Chinese room problem. Is there a difference between “seems intelligent” and “is intelligent”?
But the main difference between an actual intelligence and various algorithms, LLMs included, is that an intelligence works on its own: it’s always thinking, and it doesn’t only react to external prompts. You ask a question and get an answer, but the question remains at the back of its mind, and it might come back to you 10 minutes later and say, “You know, I’ve given it some more thought, and I think it’s actually like this.”
Exactly. As the mandatory sexual harassment and money laundering trainings have taught me repeatedly, if the company knows about it and doesn’t do anything, they’re equally liable (and in many cases even if they don’t know about it). So stopping inappropriate behavior is in their interest.
Remember to look into his eyes
I don’t know if it’s some neurodivergence or if other introverts feel the same way, but that is something I personally find very difficult and uncomfortable and I can’t hold eye contact for more than a second or two at a time. What feels natural to me is to look at a person’s mouth when they talk.
Pear and gorgonzola is a typical combination.
They are effective, but in the other direction. I wouldn’t be surprised if they’re funded by fossil fuel companies.
Somewhere on the vertical axis. 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI
Just heard the story. Apparently it had cost $200M by the time they presented the alpha, and it was absolute crap. So Sony put another $200M into outsourcing the work ASAP to fix it.
I grew up as a PC gamer (if you can call 8-bit computers PCs too) and never had a console as a kid. I got an Xbox One when it came out, just because of the Kinect, and never played anything on it other than Just Dance. Playing on my PC is more convenient. I got a Switch and played some Pokémon, but couldn’t get in the habit of playing on a device instead of a PC. When I got a Switch emulator on my PC, I played more on that than I did on the actual Switch in all the time I owned it.