Key Facts:
- The AI system uses ten categories of social emotions to identify violations of social norms (a rough illustration of the idea follows this list).
- The system has been tested on two large datasets of short texts, validating its models.
- This preliminary work, funded by DARPA, is seen as a significant step toward improving cross-cultural language understanding and situational awareness.
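The article does not describe how the classifier itself works, so the following is only a toy, hypothetical sketch of the general idea in Python: mapping short texts to social-emotion categories and flagging ones linked to norm violations. The category names, cue words, and violation mapping are invented for illustration and are not the DARPA system's actual method.

```python
from dataclasses import dataclass

# Hypothetical set of ten social-emotion categories; the actual category
# list used by the system is not given in the article.
SOCIAL_EMOTIONS = [
    "guilt", "shame", "embarrassment", "pride", "gratitude",
    "admiration", "contempt", "anger", "disgust", "envy",
]

# Toy lexicon mapping a few emotion categories to cue words. A real system
# would use a trained text classifier rather than keyword matching.
CUE_WORDS = {
    "guilt": {"sorry", "apologize", "my fault"},
    "shame": {"ashamed", "humiliated"},
    "contempt": {"pathetic", "beneath"},
    "anger": {"furious", "outraged"},
    "disgust": {"disgusting", "revolting"},
}

# Emotions assumed (for this sketch only) to signal that a social norm
# may have been violated.
VIOLATION_SIGNALS = {"guilt", "shame", "contempt", "anger", "disgust"}


@dataclass
class NormAssessment:
    emotions: list[str]
    possible_violation: bool


def assess_text(text: str) -> NormAssessment:
    """Flag a short text if it contains cues for violation-linked emotions."""
    lowered = text.lower()
    detected = [
        emotion
        for emotion, cues in CUE_WORDS.items()
        if any(cue in lowered for cue in cues)
    ]
    violation = any(e in VIOLATION_SIGNALS for e in detected)
    return NormAssessment(emotions=detected, possible_violation=violation)


if __name__ == "__main__":
    # Example: "sorry" and "my fault" trigger the guilt category.
    print(assess_text("I'm so sorry, that was completely my fault."))
```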
There is no application for this that is actually good.
It could help identify and measure people on the autistic spectrum or similar.
And what good comes of covertly subjecting individuals to an autism test?
What does the examiner do with the results? Or what does their boss do with the results?
No good!
Well, I mean, in a functioning system it'd be private medical data, used to give each patient the best treatment.
In our system it'll be treated by a private company as "their" data and sold to whoever will pay.
Yeah, I don't think anyone using this that way is going to do good things with that information.