We weren’t using LLMs, but object detection models.
We were doing facial recognition, patron counting, firearm detection, etc.
I’m coming at it from the standpoint of integrating an AI model into a suite of applications, which I have done. I have even trained a custom version of a model to fit our needs.
Plugging into an API is more or less trivial (as you said), but that’s only a single aspect of an application. And that’s assuming you’re using someone else’s API rather than running and serving the model yourself.
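To make the point concrete: consuming a detection API really is a few lines, but turning its output into application behavior is where the real code lives. A minimal sketch, assuming a hypothetical JSON response shape (the field names here are illustrative, not any specific vendor's API):

```python
# Hedged sketch: the response shape below ("detections", "label",
# "confidence", "box") is a hypothetical example of what a hosted
# object-detection endpoint might return, not a real vendor format.

def filter_detections(response, min_confidence=0.5):
    """Keep only detections at or above a confidence threshold."""
    return [
        d for d in response["detections"]
        if d["confidence"] >= min_confidence
    ]

# Example payload in the assumed shape.
sample = {
    "detections": [
        {"label": "person",  "confidence": 0.97, "box": [12, 34, 120, 240]},
        {"label": "person",  "confidence": 0.41, "box": [300, 50, 380, 210]},
        {"label": "firearm", "confidence": 0.88, "box": [150, 90, 190, 130]},
    ]
}

kept = filter_detections(sample, min_confidence=0.5)
labels = [d["label"] for d in kept]
```

The API call itself would be one HTTP request; everything after it (thresholds, alerting on a "firearm" label, counting "person" detections per frame, persistence, UI) is ordinary application code, and that's the part that takes engineering.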
Not only that, but what I was aiming at was building applications that actually use the models. There are thousands upon thousands of internal tools and applications that take advantage of various models, and they all require varying levels of coding skill.
There’s a huge gap between “playing with prompts” and “writing the underlying models,” and that entire gap is all coding.
Just because my only source of water isn’t entirely clean doesn’t mean I should mix it with arsenic.
Yes, so much of our personal data is leeched from our phones, but that doesn’t mean we should accept it and willingly open a vein.