…can’t argue with that
Thanks, that was interesting. I kept thinking that this reads like something out of Quanta Magazine, and then at the end there was an attribution to them :)
To all the reflexive AI-downvoters: This is about an application of machine learning, not an LLM. Don’t behave like an advanced autocomplete; think before you click :P
Thanks for posting, don’t mind the downvotes from the luddites :D
Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022…
Arrows
Pointless
Pick one
If you’re logged in to lemmy.world, I think you can click the hamburger menu top right and then “Create community”?
Edit: sorry, just noticed your account is on programming.dev, where there’s no such option? Then I’m afraid I don’t know :/
Edit 2: From the programming.dev sidebar:
Community Creation
Communities in our instance are created from our community request zone. If you have an idea for a community that fits our instance that hasn’t been made already, feel free to create a post for it there. Communities will be considered for creation if there’s enough interest in the idea, shown by people upvoting it.
The board that fired him was that of the nonprofit, so they don’t answer to shareholders.
LemmySee? LemmyKnow? LemmyIn? 🙂
Oh, the humanity!
Yeah, that’s what I did. With my very light usage the fixed-price subscription isn’t justifiable, but the API works nicely.
Ok, maybe slightly :) but it surprises me that the ability to emulate a basic human is dismissed as “just statistics”, since until a year ago it seemed like an impossible task…
Absolutely agree that this is a necessary next step!
Agree, I have definitely fallen for the temptation to say what sounds better rather than what’s exactly true… Less so in writing, possibly because it’s less of a linear stream.
Yeah, I was probably a bit too caustic, and there’s more to (A)GI than an LLM can achieve on its own, but I do believe that some, and perhaps a large, part of human consciousness works in a similar manner.
I also think that LLMs can have models of concepts, otherwise they couldn’t do what they do. Probably also of truth and falsity, but perhaps with a lack of external grounding?
And this tech community is being weirdly luddite over it as well, saying stuff like “it’s only a bunch of statistics predicting what’s best to say next”. Guess what, so are you, sunshine.
Presumably the Nefnanafnanefnd
Nice, thanks!
Thanks for posting, please ignore the stochastic luddites 🙂