“What would it mean for your business if you could target potential clients who are actively discussing their need for your services in their day-to-day conversations? No, it's not a Black Mirror episode—it's Voice Data, and CMG has the capabilities to use it to your business advantage.”
I would assume that you are right, considering how much garbage you would collect by listening to everything.
Now imagine recording people who have not given consent, or the device saving full transcripts of movies playing in the background.
Right. The legality of just recording everything in a room, without any consent, is incredibly dubious at best, so companies aren’t going to risk it. At least with voice dictation or wake words, you have to voluntarily say something or push a button, which signifies your consent to the device recording you.
Another problem with the idea of on-device conversion to keywords that are then sent to Google or Amazon: with constant recording from millions of devices, even the text form of those keywords would still be an infeasible amount of data to process. Discord’s ~200 million active users send almost a billion text messages each day, yet Discord can’t use algorithmic AI to reliably detect hate speech from Nazis or pedophiles approaching vulnerable children — it is simply far too much data to process in a timely fashion.
Amazon has sold 500 million Echo devices, and that’s just Amazon. From an infrastructure standpoint, how is Amazon supposed to process near-24/7 keyword spam from 500 million Echo devices every single day? Such a system would also have to be, in theory, infinitely scalable, since the amount of traffic is directly proportional to the number of devices sold and actively used.
It’s just technologically infeasible.
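To put rough numbers on the argument above, here is a back-of-envelope sketch. The 500 million device count comes from the Echo sales figure quoted earlier; the one-keyword-per-minute rate and the ~50 bytes per keyword record are purely illustrative assumptions, not measured values:

```python
# Back-of-envelope estimate of the hypothetical "keyword upload" load.
# Assumptions (illustrative only):
#   - 500 million always-listening devices (Echo sales figure cited above)
#   - each device extracts and uploads 1 keyword record per minute
#   - each keyword record is ~50 bytes of text plus metadata

DEVICES = 500_000_000
KEYWORDS_PER_DEVICE_PER_MINUTE = 1
BYTES_PER_RECORD = 50

# Sustained ingest rate the backend would have to absorb.
requests_per_second = DEVICES * KEYWORDS_PER_DEVICE_PER_MINUTE / 60

# Raw keyword data accumulating per day, before any processing.
records_per_day = DEVICES * KEYWORDS_PER_DEVICE_PER_MINUTE * 60 * 24
bytes_per_day = records_per_day * BYTES_PER_RECORD

print(f"{requests_per_second:,.0f} requests/sec sustained")
print(f"{records_per_day:,} keyword records/day")
print(f"{bytes_per_day / 1e12:.1f} TB/day of raw keyword text")
```

Even under these conservative assumptions, that is millions of requests per second, around the clock — and the raw bytes are the easy part; matching every record against ad-targeting profiles in near-real time, at that rate, is where the claim falls apart.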
Anecdotally, the odds are near zero that my wife and I could talk just once about maybe buying some obscure thing like electric blinds and then, by pure coincidence, have targeted ads for them pop up on our devices.
This happens a lot.
I think you’re being naive if you believe they don’t locally distill our discussions into key words and phrases and transmit those.
As I asked someone else, what is your methodology here? Have you limited all variables that can cause Google to collect this information and lead to confirmation bias? Did your wife search for prices of electric blinds on Google after you both talked about buying them (a very natural and logical next step after discussing the need to purchase them)? A random conversation, or 20 of them, with your wife isn’t exactly a controlled experiment.
Again, I’ll mention that Mitchollow attempted to conduct a genuine controlled experiment to determine whether Google was listening to him through his microphone. What he failed to realize was that he was livestreaming this experiment to YouTube through Google’s servers, the very conspirators he claimed were listening to him, so he was already voluntarily giving them all of the audio data they would ever want! When this was pointed out to him he retracted his statements, admitting the experiment was intrinsically worthless by design.
I’m not being naive, I’m skeptical of these claims and challenging the veracity of the anecdotal evidence presented. To blindly accept yet another anecdote that confirms a bias of ours whilst rejecting differing skeptical opinions would be closed-minded.
That in itself is concerning. Everyone is arguing that no company would invest the resources to run voice-to-text on everything. But Google already does exactly that for YouTube’s automatic captions.
What a hilarious oversight with that experiment lol! He must have felt stupid when it was pointed out to him.
Confirmation bias — people unconsciously prefer remembering things that support their existing beliefs.
You are assuming he held the belief before the events, when it could just as well be the events that created the belief.