Image description:
An infographic titled “How To Write Alt Text” featuring a photo of a capybara. Parts of alt text are divided by color, including “identify who”, “expression”, “description”, “colour”, and “interesting features”. The finished description reads “A capybara looking relaxed in a hot spa. Yellow yuzu fruits are floating in the water, and one is balanced on the top of the capybara’s head.”
via https://www.perkins.org/resource/how-write-alt-text-and-image-descriptions-visually-impaired/
Potentially also useful for creating good prompts for AI image generators?
It’s essentially by-hand CLIP; that’s how the training data for CLIP came into being: descriptive text paired with images.
Explains why it sucks so much shit.
CLIP is pretty decent for what it does, though.
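For anyone curious, here’s a minimal sketch of what CLIP actually does: score how well candidate descriptions match an image. This assumes the Hugging Face transformers library and its openai/clip-vit-base-patch32 checkpoint; the image filename is a placeholder.

```python
# Minimal sketch: use CLIP to rank candidate descriptions for an image.
# Assumes: pip install transformers torch pillow; "capybara.jpg" is a placeholder path.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("capybara.jpg")
captions = [
    "A capybara relaxing in a hot spa with yuzu fruits floating in the water.",
    "A dog running on a beach.",
    "A bowl of fruit on a kitchen table.",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-to-text similarity scores;
# softmax turns them into a probability over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)[0]
for caption, p in zip(captions, probs):
    print(f"{p:.2f}  {caption}")
```

Training flips this around: the model sees huge numbers of image and caption pairs and learns to give the matching pair the highest score, which is why human-written descriptions like alt text are exactly the raw material it was built on.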
It’s only useful if the AI was trained on similar prompts. A lot of the anime-style models work best with lists of tags, while the realistic ones work best with natural-language descriptions like the one above.
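Roughly, and not tied to any particular model, the two prompt styles look like this:

Tag-list style: "capybara, hot spring, yuzu, steam, relaxed, solo, outdoors"
Natural-language style: "A capybara relaxing in a hot spring, with yellow yuzu fruits floating in the water."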
Prompts are basically image-recognition tagging run in reverse.
Alt text is exactly the kind of tedious work that AI would be good at doing, but everyone in the fediverse seems to have a huge hate boner for ANYTHING AI…
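As a sketch of what that could look like, here’s a hedged example of drafting alt text with an off-the-shelf captioning model (Salesforce’s BLIP via the Hugging Face transformers library; the image path is a placeholder, and the output is a starting point for a human to edit, not a finished description):

```python
# Sketch: draft alt text for an image with an off-the-shelf captioning model.
# Assumes: pip install transformers torch pillow; "capybara.jpg" is a placeholder.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("capybara.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Generate a short caption; a human should still review and enrich it
# (expression, colours, interesting features) before posting.
output_ids = model.generate(**inputs, max_new_tokens=40)
draft_alt_text = processor.decode(output_ids[0], skip_special_tokens=True)
print(draft_alt_text)
```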
Fediverse: write a fucking essay every time you post an image… But make sure you waste time doing it manually, instead of using AI tools!!!
If you have really detailed image tags, a model trained on them can make great outputs.
We don’t do that here.