So just a thought: if they're already listening for highly optimized keywords that can be detected locally, what's to stop them from adding common advertising keywords?
In the given anecdote about babies and diapers, you would literally just need a "baby" keyword. It gets triggered, the phone tells the Mothership it heard about babies, and suddenly diaper ads. It wasn't listening to every single word, it wasn't parsing the sentence, it was just looking for highly optimized ad keywords. You could even set a threshold for how often a certain ad keyword has to trigger before reporting, to avoid false positives.
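Just to make the idea concrete (this is purely a hypothetical sketch, not how any real phone works): the on-device part would only need to bump a counter per keyword and phone home once a threshold is crossed. The keyword list, threshold, and the reporting stub below are all invented for illustration.

```python
# Purely hypothetical sketch of threshold-based ad-keyword triggering.
# The keyword list, threshold, and reporting stub are made up for illustration.
from collections import Counter

AD_KEYWORDS = {"baby", "diaper"}   # hypothetical "highly optimized" ad keywords
TRIGGER_THRESHOLD = 3              # hits required before reporting, to cut false positives

hit_counts = Counter()

def report_to_mothership(keyword: str) -> None:
    # Stand-in for whatever network call would flag the ad category.
    print(f"report: interest in '{keyword}'")

def on_keyword_detected(keyword: str) -> None:
    """Called by the (hypothetical) low-power keyword spotter on each hit."""
    if keyword not in AD_KEYWORDS:
        return
    hit_counts[keyword] += 1
    if hit_counts[keyword] >= TRIGGER_THRESHOLD:
        report_to_mothership(keyword)
        hit_counts[keyword] = 0    # reset so one conversation doesn't spam reports

# Example: three "baby" hits cross the threshold and produce one report.
for _ in range(3):
    on_keyword_detected("baby")
```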
It's not listening for actual words; that's already too complex (you'd have to parse language for that, which those low-power chips can't do). It's listening for syllables: Oh-Kay-Goo-Gle, or whatever. It depends on the chip and implementation of course, which is also why you get false positives when someone says something similar.
If you added more syllable patterns on top of that, your phone would activate literally all the time, with tons of false positives.
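To make the "syllables, not words" point concrete, here's a toy sketch. Real wake-word chips score audio features with a tiny model; the syllable labels and the matching rule here are invented just to show why anything acoustically close enough also fires the detector.

```python
# Toy illustration of syllable-pattern spotting (not a real detector).
# It just counts how many target "syllables" appear in order, which is
# enough to show why near-matches cause false positives.
WAKE_PATTERN = ["oh", "kay", "goo", "gle"]   # hypothetical target syllables

def matches_wake_pattern(heard: list[str], min_hits: int = 3) -> bool:
    """Return True if enough of the target syllables appear in order."""
    idx = 0
    hits = 0
    for syllable in heard:
        # Advance through the remaining pattern looking for this syllable.
        for j in range(idx, len(WAKE_PATTERN)):
            if syllable == WAKE_PATTERN[j]:
                hits += 1
                idx = j + 1
                break
    return hits >= min_hits

print(matches_wake_pattern(["oh", "kay", "goo", "gle"]))   # True: the real phrase
print(matches_wake_pattern(["oh", "kay", "coo", "gle"]))   # True: close enough, false positive
print(matches_wake_pattern(["good", "morning"]))           # False
```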
Seriously, if we had low-power voice recording + voice-to-text, you'd already have instant conversation subtitles on your phone, instant translation, and so on. We simply don't have that yet; those features do exist, but they're power hungry (so if you do use them, say goodbye to your battery life).