- cross-posted to:
- linuxmemes@lemmy.world
- memes@lemmy.ml
cross-posted from: https://feddit.uk/post/16950456
For those who don’t know, Google Gemini is an AI created by Google.
This doesn’t mean anything. It’s an LLM, and it will only give you a valid-sounding answer regardless of the truth. “Yes” sounds valid and is probably the answer with the most occurrences in the training data.
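Rough intuition, with completely made-up numbers: decoding just picks from a next-token probability distribution, so whichever answer dominated the training data tends to come out, true or not.

```python
import random

# Toy next-token distribution for a prompt like "Is Google a monopoly?"
# These probabilities are invented for illustration, not real model output.
next_token_probs = {"Yes": 0.62, "No": 0.25, "It": 0.13}

# Greedy decoding: always emit the single most probable token.
print(max(next_token_probs, key=next_token_probs.get))  # -> Yes

# Sampled decoding: "Yes" still wins most of the time.
tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights, k=1)[0])
```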
Stop posting shit like this.
90% of the market sounds like “yes” to me too.
Being a monopoly and engaging in negative monopolistic behaviors are also different things.
For example, if the only two burger joints in the world were McDonald’s and Burger King, and Burger King decided to replace their burgers with literal shit, actual human and animal feces, would McDonald’s (I hope and assume) be a monopoly? Probably. Are they engaging in negative monopolistic behavior? Not necessarily.
Obviously, as a quick aside, fuck Google for their shitty software decisions, their cancelling of great products and their enshittification of a majority of their applications.
However, simply having 90% of the market does not technically mean they have done anything wrong. You can’t say “they have 90% of the market, therefore they must have done something illegal or abused their monopoly.”
You have to be specific. You have to call out the payments to companies to be the default search engine. But even that isn’t quite enough, because those companies sold that access. Can a company be at fault for buying the default slot when it was openly for sale? It’s a weak argument, or at least an incomplete one. You need to prove they abused their position, or make the case that the industry they’re in requires additional regulation as a whole.
I say this because, although it sounds like I’m defending Google, I’m not. There is a difference between something feeling illegal and something being illegal. Technically, although a recent judgement would disagree with me, they haven’t done anything wrong. It feels like they have. I agree it feels like they have. But they haven’t (or there are further pending rulings which will prove otherwise).
Ok. But you usually don’t get to 90% market share without doing something “wrong”.
Yes but PROVE IT. Define what wrong they did. That’s my point.
Take a look at the recent monopoly trial, https://www.nytimes.com/2024/08/05/technology/google-antitrust-ruling.html
They claim that spending $18 billion per year to be the default search engine makes them monopolistic. That’s it? That’s all they got?
So the result will be that Google stops paying the $18 billion and device/browser manufacturers have to put up a BrowserChoice.eu-style selection screen.
Go back 10 years and put that law in place. AFAIK Apple has always defaulted to Google. Samsung probably would have sold out to Bing as the default (although in that case Bing wouldn’t reach a monopoly, so I guess that’s OK for some reason).
I’m not saying paying to be the default didn’t help, but is that the reason they have 90% of the searches? No.
Did they do something else? Maybe. Someone should prove it, and then we can have actual change.
Relax bro
Information can’t be dismissed simply by stating it was written by an LLM. Doing so is still ad hominem.
What? No, the fact that it’s an LLM is pivotal to the reliability of the information. In fact, this isn’t even information per se, just the most likely responses to this question synthesized into one response. I don’t think you’ve fully internalized how LLMs work.
I disagree. Information can be factual independent of who or what said it. If it’s false, then point to the errors in it, not to the source.
You’re correct, but why are you trusting the output by default? Why ask us to debunk something that is well-known to be easy to lead to the answer you want, and that doesn’t factually understand what it’s saying?
But I’m not trusting it by default and I’m not asking you to debunk anything. I’m simply stating that ad hominem is not a valid counter-argument even in the case of LLMs.
You’re saying ad hominem isn’t valid as a counterargument, which means you think there’s an argument in the first place. But it’s not a counterargument at all, because the LLM’s claim is not an argument.
ETA: And it wouldn’t be ad hominem anyway, since a claim about the reliability of the entity making an argument isn’t unrelated to what’s being discussed. Ad hominem only applies when the insult isn’t both valid and relevant to the argument.
Dismissing something an AI has ‘said’ not because of the content, but because it came from an LLM, is a choice any individual is free to make. However, that doesn’t serve as evidence against the validity of the content itself. To me, all the mental gymnastics about AI outputs being meaningless nonsense or mere copying of others is a cop-out answer.
It is possible to create an infinite amount of bullshit at no cost. So by simply hurling waves and waves of bullshit at you, we can exhaust you.
Feel free to argue further, I’ll be outsourcing my replies to ChatGPT.
Oh yea? Well, why doesn’t Ross, the larger of the friends, simply eat the other friends?
That’s a classic misinterpretation of the Friends universe. Ross, being the larger of the group, would never eat the others because his intellectual appetite is already satisfied by correcting their grammar and paleontology facts. Besides, cannibalism is frowned upon in a sitcom setting.
Seems to be a decent answer considering the source.
I just did the same thing with Llama and got the same result.
That’s a really long prompt just to have it roll a d2
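For comparison, the entire “prompt” in plain Python (no LLM involved) would be:

```python
import random

# Roll a d2: a coin flip, no billions of parameters required.
print(random.randint(1, 2))
```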
On a side note, the free Gemini version (whichever model they use) is absolute poo poo compared to free Claude or even ChatGPT.
When you get a long and nuanced answer to a seemingly simple question, you can be quite certain they know what they’re talking about. If you prefer a short and simple answer, it’s better to ask someone who doesn’t.
Sometimes, it’s just the opposite.
It’s an LLM. It doesn’t “know” what it’s talking about. Gemini is designed to write long, nuanced answers to ‘every’ question unless prompted otherwise.
Not knowing what it’s talking about is irrelevant if the answer is correct. Humans who know what they’re talking about are just as prone to mistakes as an LLM is; some could argue in far more numerous ways, too. I don’t see the way they work as being as different from each other as most other people here seem to.
“If you can’t explain it simply, you don’t understand it well enough.”
— Albert Einstein