Yeah I’ve been migrating away steadily. And what a shame! Now Google don’t get to use my emails to train their shit AI.
There’s a bit more to it than that
I’d say if you have 80% of the requirements you might as well apply. I would frankly ignore years of experience more or less entirely.
It’s not exactly uncommon for a listing to advertise the person they want, but to accept applicants with significantly less on the basis that they can get there. Nearly every job I’ve ever got I was not at the level advertised in something or other.
Well of course we’re going to throw poo at him
The median is an average. But it isn’t the mean, which is presumably what the other comment was using.
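A quick sketch of why the distinction matters, with made-up toy numbers just for illustration:

```python
# The median and the mean are both "averages", but they can
# differ a lot on skewed data (e.g. salaries with one outlier).
salaries = [30_000, 32_000, 35_000, 40_000, 500_000]

mean = sum(salaries) / len(salaries)           # pulled up by the outlier
median = sorted(salaries)[len(salaries) // 2]  # middle value, odd-length list

print(mean)    # 127400.0
print(median)  # 35000
```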
Oh, sorry the 45 page document is for something else. The only person who understands this dataset is Dave and he was made redundant 5 years ago. Anyway, can you get this done today?
ChatGPT is not designed to fool us into thinking it’s a human. It produces language with a specific tone & direct references to the fact it is a language model. I am confident that an LLM trained specifically to speak naturally could do it. It still wouldn’t be intelligent, in my view.
The Turing test is flawed, because while it is supposed to test for intelligence it really just tests for a convincing fake. Depending on how you set it up I wouldn’t be surprised if a modern LLM could pass it, at least some of the time. That doesn’t mean they are intelligent, they aren’t, but I don’t think the Turing test is good justification.
For me the only justification you need is that they predict one word (or even letter!) at a time. ChatGPT doesn’t plan a whole sentence out in advance, it works token by token… The input to each prediction is just everything so far, up to the last word. When it starts writing “As…” it has no concept of the fact that it’s going to write “…an AI language model” until it gets through those words.
Frankly, given that fact it’s amazing that LLMs can be as powerful as they are. They don’t check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token… An incredible piece of technology, despite its obvious flaws.
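The token-by-token loop described above can be sketched like this. Note that `model` and its `predict_next` method are hypothetical stand-ins (real LLMs operate on subword tokens and sample from probability distributions), but the shape of the loop is the point: each step is conditioned only on the text so far.

```python
# Minimal sketch of autoregressive generation: the model sees
# everything generated so far and emits exactly one more token.
# There is no lookahead and no plan for the rest of the sentence.

def generate(model, prompt_tokens, max_new_tokens=20):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model.predict_next(tokens)  # conditioned on all prior tokens
        tokens.append(next_token)                # appended, then fed back in
    return tokens
```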
Writing boring shit is LLM dream stuff. Especially tedious corpo shit. I have to write letters and such a lot, and it makes it so much easier having a machine that can summarise material and write it in dry corporate language in 10 seconds. I already have to proofread my own writing, and there are almost always 1 or 2 other approvers, so checking it for errors is no extra effort.
I mean if you don’t like replacing gear for similar stuff with better stats I don’t think looter shooters are for you haha! The story is pretty basic but I do enjoy the random dialogue you get when just flying about once you’ve unlocked HIVE
I would recommend Everspace 2. It’s quite a different game, it’s a space dogfighter first and foremost… But it’s also somewhat a light RPG looter shooter. It’s great, and still receiving updates (slowly).
LLMs are just predictive text but bigger
Oh, true, I didn’t look too close.
The “puzzle” isn’t the test, the test uses your browser history, mouse activity, etc. to identify you as human (or not). The puzzle is used to generate training data for ML models.
It’s amazing what your phone can run, even better if you have a Bluetooth controller.
For best results buy one per family member per floor. Actually better get two so you can have one at seated height too.
What??
Academic publishing seems like a problem that should be easy to solve. It’s a situation where greed is outright making the service worse for everyone, so it seems like a new journal that does things differently (e.g. by not charging researchers) could become wildly successful… So why doesn’t that happen? Are there barriers to creating new journals?
It’s called hackthebox not hackoutofthebox