I’m just a hobbyist and not familiar with anything specific or prepackaged in an app, but there are probably examples posted in the projects section of Hugging Face (basically the GitHub + dev social network for AI). I’m not sure what’s really possible locally as far as machine vision + text recognition + translation goes. I think it would be really difficult to build an accurate model that runs within the limited memory of a phone. I’m not sure what, if anything, Google is offloading onto their servers to make this happen, or whether they’re tuning the hell out of a model to get it small enough. I mean, there’s a reason the Pixel line has an SoC called Tensor (it’s designed for on-device AI), but I haven’t explored models or toolchains for mobile deployment.