No thanks.
Care to elaborate on why not? I’m interested in your viewpoint.
ChatGPT consistently makes up shit. It’s difficult to tell when something is fabricated, because as a language model it’s built to sound confident, like a person stating a fact they actually know.
It knows how to talk like a subject matter expert, because that’s usually what gets published most and thus what it’s trained on, but it doesn’t always know the facts needed to answer a question. It makes shit up to fill the gap and then presents it articulately, but it’s wrong.
Most of the time I use an assistant either to perform home automation tasks or to look stuff up online. The first already works fine, and for the second I won’t trust a glorified autocomplete.
Good point; hallucinations only add to the fake news and artificial content problems.
I’ll counter with this: how do you know the stuff you look up online is legit? Should we go back to encyclopedias? Who writes those?
Edit: in case anyone isn’t aware, GPT “hallucinates” made-up information in certain cases, for instance when the temperature and top_p sampling settings aren’t tuned well. I wasn’t saying anyone’s opinion was a hallucination, of course.
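For anyone who wants to see those two knobs in context, here’s a minimal sketch using the OpenAI Python client. The model name and prompt are placeholder choices, not recommendations, and lower values only make the token sampling more conservative; they don’t guarantee factual answers.

```python
# Minimal sketch of the temperature / top_p sampling knobs mentioned above.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the
# environment; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": "Who wrote the Epic of Gilgamesh?"}],
    temperature=0.2,  # lower = less random sampling of next tokens
    top_p=0.9,        # only sample from the top 90% of probability mass
)
print(response.choices[0].message.content)
```

Worth noting these settings only shape how the next token gets picked; even at temperature 0 the model can state a wrong “fact” with total confidence.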
Some generative chatbots will say something and then link to where the info came from. That’s good, because I can follow up.
Some will just say something. That’s bad, and I’ll have to go search for it myself afterwards.
It’s the equivalent of a book with no cover, or a webpage where I can’t see what site it’s on. Maybe it’s reputable, maybe it’s not; without a source I can’t really decide.
Ya, it’s utterly baffling to me that anyone would use a tool that predicts the next word in a sentence to try and learn something. Besides, what’s the endgame when no reporter can make a living because all their words are laundered and fed into a “most people are saying” bot? At that point, new and unknown news, information, and facts will just get filtered out, unless a lot of clickbait sites steal them, because the words won’t show up in the average conversation frequently enough.
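If “predicts the next word” sounds abstract, here’s a toy sketch of the idea at miniature scale. This is just a bigram counter, nowhere near GPT’s actual architecture, but it’s the same “pick a likely next word” principle:

```python
# Toy next-word predictor: a bigram model built from a tiny corpus.
# An illustration of the general idea only, not how GPT works internally.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow which.
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Repeatedly pick a plausible next word, like autocomplete."""
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

The point being: nothing in that loop knows whether its output is true. It only knows what tends to follow what.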
Amusing. Much like the cryptocurrency and NFT industry, where everyone from the CEO of OpenAI to the majority of the influencers came from, the extent to which the system remains usable at all relies on the technology staying niche. If it ever actually became the primary method, the tech would fundamentally collapse under its own weight.
I feel like this question is rhetorical, but if it’s not, I’m surprised anyone on Lemmy isn’t aware of how to cross-reference when fact-checking.
I’m more than happy to help with that if required.
Yep you got me!
I was leading into this side of the debate, but basically our collective knowledge, hell, our collective experiences, are not objective. Our assumptions, our mistakes, our wordings that get interpreted in different ways: they all contribute to some level of disinformation.
Now, let’s not be too nitpicky; let’s accept that some detail fudging isn’t the end of the world and happens frequently. We can cross-reference each other’s accounts, but even that only works to an extent.
Whole cultures might bear witness to an event and perceive it to be about x, y, or z, whereas the next-door neighbor might see it completely differently.
AI, to me, really isn’t that far off from the winners being the ones who write the history books, or from the way strange or unexpected events naturally cause human brains to recollect them with incorrect detail and accuracy.
Fact-checking is about making sure the things we know are true. You seem to be saying that we can’t obtain objective facts in order to verify what is true, but this is incorrect.
It’s important to understand that, if needed, everything we know can be verified. You can obtain the sources of the work that show how it was done, when, and by whom.
None of that is subjective. If it were, your TV wouldn’t work, aeroplanes couldn’t fly, and the device you used to read and comment here wouldn’t exist.
“Everything” is not subjective, and to say so is no different from belief in magic.
Not quite what I meant; I was merely pointing out that we should be cognizant of how our worldview and others’ views might shape and define what’s considered history or fact.
All in all, central points of authority are inherently vulnerable to misinformation. I personally think communal (and namely biological) sources of information, shared and verified by one another, are far more valuable.
Why settle for seeing the rainbow in only your favorite color when there’s such an amazing and valuable spectrum available? So very digital of us.
You’re still implying that obtaining objective facts is less reliable than made-up stories.
Unless I’m completely mistaken, could you explain why you think that is, please?
Because ChatGPT isn’t reliable for actual information, and I don’t want to have any “assistant” at all.
Fair enough!
It collects your data and profiles you based on your typing style, what you type, etc.
Regular assistants, websites, stores, etc. all already do this exact same thing, for what it’s worth.
People do too!
People/regular assistants don’t sell my data to the highest bidder.
By regular assistants I meant Google, Samsung, etc., not a person.
People just give it away for free to each other.
You intentionally refuse to understand what I am saying. Yes, I don’t use Google, Samsung, or any other assistants either. People talking about me is not a problem; corporations and governments spying on me is.
I’m sorry, excuse me? I’m not intentionally doing anything. Maybe instead of attributing malice, you might opt for ignorance next time. I can only understand what I understand, and you can only communicate as effectively as you can. There’s plenty of room for leeway and benefit of the doubt here, unless you’d like another hive-mind Reddit clone.
Realigning with the conversation: I can understand not wanting powerful parties to know all about you. They’re (generally) much more dangerous than individuals can be. But for some, people can be just as damaging if they have it out for you for whatever reason (gender, sexual orientation, race, success, you pissed them off and they’re psychotic, etc.).
My outlook on privacy isn’t to obscure or hide information but to inflate it with noise instead. Finding a back-alley doorway to a building is much easier than finding the right hotel room in a complex of hotels. (There’s a rough sketch of the idea at the end of this comment.)
Obviously not everyone subscribes to this tactic, but I wanted to share my outlook as well. I was just sharing what I know with you; this wasn’t meant to be a heated debate.
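To make the “inflate with noise” tactic concrete, here’s a rough sketch in the same spirit as tools like TrackMeNot: fire off decoy lookups on an irregular schedule so any profile built from your traffic gets polluted. The endpoint and search terms below are made-up placeholders, not a real service.

```python
# Sketch of noise-injection privacy: send decoy queries so a profile
# built from this traffic is polluted. Endpoint and terms are
# illustrative placeholders only.
import random
import time
import urllib.parse
import urllib.request

DECOY_TERMS = ["weather radar", "pasta recipes", "bus schedule",
               "guitar tabs", "stock photos", "hiking trails"]

def send_decoy(term: str) -> None:
    """Issue one throwaway lookup; failures are irrelevant by design."""
    url = "https://example.com/search?q=" + urllib.parse.quote(term)
    try:
        urllib.request.urlopen(url, timeout=5)
    except OSError:
        pass  # decoys are best-effort

while True:
    send_decoy(random.choice(DECOY_TERMS))
    time.sleep(random.uniform(30, 300))  # irregular timing looks organic
```

The real signal is still in there somewhere; it’s just one hotel room among many.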