It’s got a decent chunk of good uses. It’s just that none of those are going to make anyone a huge ton of money, so they don’t have a hype cycle attached. I can’t wait until the grifters get out and the hype cycle falls away, so we can actually get back to using it for what it’s good at and not shoving it indiscriminately into everything.
The hypesters and grifters do not prevent AI from being used for truly valuable things even now. In fact, medical uses will be one of those things that WILL keep AI from just fading away.
Just look at those marketing wankers as a cherry on top that you didn’t want or need.
People just need to understand that the true medical uses are as tools for physicians, not “replacements” for physicians.
I think the vast majority of people understand that already. They don’t understand what all those gadgets are for as it is; medicine is largely a “black box” or magical process to most people anyway.
There are way too many techbros trying to push the idea of turning ChatGPT into a physician replacement. After it “passed” the board exams, they immediately started hollering that physicians are outdated and too expensive and we can just replace them with AI. What that ignores is that the board exam is multiple choice, and a massive portion of medical student evaluation is on the “art” side of medicine: taking the history and performing the physical exam that produce the very information a question stem hands you for free.
And it has gone exactly nowhere, hasn’t it? Nor do those techbros want the legal and moral responsibilities that come with the actual licence you get by passing the boards.
I think there are some techbros out there with sleazy legal counsel who promise they can drench the thing in enough terms and conditions to relieve themselves of liability, similar to the way WebMD does. Also, with healthcare access the way it is in America, there are plenty of people who will skim right past the disclaimer telling them to go see a real healthcare provider and just trust the “AI”. Additionally, there are enough slimy NP professional groups pushing for unsupervised practice that they could just sign prescriptions on their NP licenses, and the malpractice laws currently in place would be difficult to enforce depending on outcomes and jurisdictions.
This doesn’t get into the sowing of discord and discontent with physicians that is happening even without these products existing in the first place. Even the claims that an AI could potentially, maybe, someday sorta-kinda replace physicians make people distrust and dislike physicians now.
Separately, I have some gullible classmates in medical school whom I worry about quite a lot, because they’ve bought into the line that ChatGPT passed the boards, so they take its hallucinations as gospel and argue with our professors’ explanations of why the hallucination is wrong and the correct answer on a test is correct. I was not shy about admonishing them and forcefully explaining that these “generative AIs” are little more than glorified text predictors, but the lure of easy answers, with no need to dig for them or understand complex underlying principles, is strong, so I don’t know if I actually got through to them or not.
The hypesters and grifters do not prevent AI from being used for truly valuable things even now.
I mean, yeah, except that the unnecessary applications are all the corporations are paying anyone to do these days. When the hype flies around like this, the C-suite starts trying to micromanage the product team’s roadmap. Once it dies down, they let us get back to work.
Also, for GPU prices to come down. Right now the AI garbage is eating a lot of the GPU production, as well as wasting a ton of energy. It sucks. Right as the crypto stuff started dying out we got AI crap.
Yeah, fuck that detecting cancer crap, I want to game!
You missed that we were talking about the useless AI garbage, didn’t you? I guess humans can also put out garbage…
What article is this comment section about?
Right, I forgot we’re only allowed to talk about one thing per thread. Sorry.
Precisely
GPU price hikes are causing problems outside of the gaming industry, too. Imaging, scientific research, astronomy…
Might be, but I somehow don’t picture an astronomer complaining about GPU prices on Lemmy…
There are actually a ton of people in research and academia on here.
Or at least there were. I don’t know what the current state of the Lemmy community is.
Those are going to make a ton of money for a lot of people. Every 1% fuel efficiency gained, every second saved in an industrial process, it’s hundreds of millions of dollars (back-of-envelope sketch below).
You don’t need AI in your fridge or in your snickers, that will (hopefully) die off, but AI is not going away where it matters.
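To gut-check the scale of that claim, here’s a back-of-envelope sketch; every figure in it (fleet size, fuel burn, fuel price) is an assumption invented for illustration, not sourced data:

```python
# Back-of-envelope: what a 1% fuel efficiency gain is worth.
# All figures are illustrative assumptions, not sourced data.

fleet_size = 5_000                  # trucks in one large logistics fleet (assumed)
litres_per_truck_per_year = 40_000  # annual fuel burn per truck (assumed)
price_per_litre = 1.50              # fuel price in dollars (assumed)

annual_fuel_spend = fleet_size * litres_per_truck_per_year * price_per_litre
savings_1_percent = annual_fuel_spend * 0.01

print(f"Annual fuel spend:           ${annual_fuel_spend:,.0f}")
print(f"Saved by 1% efficiency gain: ${savings_1_percent:,.0f}")
```

That’s roughly $3 million a year for a single fleet under these made-up numbers, so summed across an industry, “hundreds of millions” is the right order of magnitude.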
Well, AI has been in those places for a while. The hype cycle is around generative AI, which just isn’t useful for that type of thing.
I’m sure if Nvidia, AMD, Apple and co. create NPUs or TPUs for generative AI, those can also be used in the places that matter, helping them along too.
Why do you think that?
Nothing I’ve seen with current generative AI techniques leads me to believe that it has any particular utility for system design or architecture.
There are AI techniques that can help with such things, they’re just not the generative variety.
Hardware for faster matrix/tensor multiplication leads to faster training, thus helping (toy example below). More contributors to your favorite Python frameworks lead to better tools, thus helping. Etc.
I am aware that chatbots don’t cure cancer, but discarding all the contributions of the last two years is disingenuous at best.
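To make the matmul point concrete, here’s a minimal sketch, assuming PyTorch is installed; the GPU branch only runs if a CUDA device is present, and the timing ignores warmup, so treat the numbers as rough:

```python
import time
import torch

# One big matrix multiply: the core operation a training step is built from.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
c = a @ b  # runs on the CPU
print(f"CPU matmul: {time.perf_counter() - t0:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()  # finish the host-to-device copies before timing
    t0 = time.perf_counter()
    c_gpu = a_gpu @ b_gpu     # same multiply on dedicated matmul hardware
    torch.cuda.synchronize()  # wait for the kernel to actually complete
    print(f"GPU matmul: {time.perf_counter() - t0:.3f}s")
```

A neural network’s training step is essentially a long chain of multiplications like this, which is why hardware built for them speeds up training across the board.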
Those are going to make a ton of money for a lot of people.
Right, but not any one person. The people running the hype train want to be that one person, but the real uses just aren’t going to be something you can exclusively monetize.
Depends how you define “a ton” of money. Plenty of startups have been acquired for silly amounts of money, plenty of consultants are making bank, and many executives are cashing big bonuses for successful improvements using AI…
I define “a ton” of money in this case to mean “the amount they think of when they get the dollar signs in their eyes.” People are cashing in on that delusion right now, but it’s not going to last.
A cure for cancer, if it can be literally nipped in the bud, seems like a possible money-maker to me.
It’s a money saver, so its profit model is all wonky.
A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.
A hospital, as a place that helps people, will still want to use these scans widely, because “ignoring preventative care to profit off long-term treatment” is a bit too “mask off” even for the US healthcare system, and doctors would quit.
Insurance companies, however, would pay just shy of the cost of treatment to avoid paying for treatment.
So the price will rise toward the cost of treatment times the incidence rate, scaled by the likelihood the scan catches something, plus system costs and staff costs (rough arithmetic below).
In a sane system, we’d pass a law saying capable facilities must provide preventative screenings at cost where there’s a reasonable chance the scan would provide meaningful information and have the government pay the bill. Everyone’s happy except people who view healthcare as an investment opportunity.
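Here’s that pricing logic as arithmetic; every number below is an invented assumption for illustration, not real healthcare data:

```python
# Rough sketch of the insurer's price ceiling for a screening scan.
# All numbers are invented assumptions, not real healthcare data.

treatment_cost = 150_000   # cost of treating one cancer case (assumed)
incidence_rate = 0.005     # chance a given patient has the cancer (assumed)
catch_rate = 0.85          # chance the scan detects it when present (assumed)
overhead_per_scan = 40     # system + staff cost per scan (assumed)

# The most an insurer would rationally pay per scan: the expected
# treatment cost the scan lets it avoid, plus the overhead.
ceiling_price = treatment_cost * incidence_rate * catch_rate + overhead_per_scan
print(f"Price ceiling per scan: ${ceiling_price:,.2f}")  # $677.50 here
```

The point being that the scan’s price drifts toward that expected-savings ceiling rather than toward what the scan actually costs to perform.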
A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.
I believe this idea was generally debunked a little while ago; to wit, the profit margin on cancer care just isn’t as big (you have to pay a lot of doctors) as the profit margin on mammograms. Moreover, you’re less likely to actually get paid the later you identify it (because end-of-life care costs for the deceased tend to get settled rather than being paid).
I’ll come back and drop the article link here, if I can find it.
Oh interesting, I’d be happy to be wrong on that. :)
I figured they’d factor the staffing costs into what they charge the insurance, so it’d be more profit due to higher fixed costs, longer treatment, and some fixed percentage profit margin.
The estate costs thing is unfortunately an avenue I hadn’t considered. :/
I still think it would be better if we removed the profit incentive entirely, but if we have to have both, I’m pleased the two interests are aligned.
Oh, absolutely. Absent a profit motive that pushes them toward what basically amounts to a protection scam, they’re left with good old fashioned price gouging. Even if interests are aligned, it’s still way more expensive than it should be. So yes, I agree that we should remove the profit incentive for healthcare.
Sadly, I can’t find the article. I’ll keep an eye out for it, though. I’m pretty sure I linked to it somewhere but I’m too terminally online to figure out where.
That’s not what this is, though. This is early detection, which is awesome and super helpful, but way less game-changing than an actual cure.
It’s not a cure in itself, but isn’t early detection a good way to catch it early and in many cases kill it before it spreads?
It sure is. But this is basically just making something that already exists more reliable, not creating something new. Still important, but not as earth-shaking.
Honestly they should go back to calling useful applications ML (that is what it is) since AI is getting such a bad rap.
I once had ideas about building a machine learning program to assist workflows in Emergency Departments, with its training data generated entirely by the specific ER it’s deployed in, because differences in populations mean the data is not always readily transferable between departments.
Ok, I’ll concede. Finally a good use for AI. Fuck cancer.