What is the current relationship between prosody in linguistics, speech recognition, and affective computing?
Quora gives as good an overview of the current state of these fields as any:
According to Wikipedia, irony is an example of Prosody (linguistics). I suggest looking at the 2010 seminar paper, Verbal Irony: Theories and Automatic Detection. Another example might be the effect of accent on a conversational agent such as Siri, amply illustrated in my own quick-and-dirty webpage, 100 Best Siri Accent Videos | Meta-Guide.com.
I consider acoustic speech recognition to be primarily a mechanical or hardware problem, more akin to hard robotics than to soft AI such as NLP. For instance, the noise-cancellation chip in Apple iDevices has reportedly allowed them to surpass the speech recognition of most PC hardware, enabling Siri to excel over previous PC-based conversational agents.
There is a lot of research going into affective computing, and there has been a lot of recent discussion about it on Quora. Basically, sentiment analysis is the toe in the water of mainstream affective computing; it is generally considered an NLP solution.
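As a toy illustration of why sentiment analysis sits on the NLP side rather than the acoustic side, here is a minimal lexicon-based classifier that works purely on the text of an utterance. The word lists and scoring scheme are invented for this sketch, not drawn from any real sentiment lexicon:

```python
# Minimal lexicon-based sentiment sketch (illustrative only; the word
# sets below are made up for this example, not a real lexicon).

POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def sentiment(text: str) -> str:
    """Label text 'positive', 'negative', or 'neutral' by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great phone"))   # positive
print(sentiment("The battery is terrible"))   # negative
```

Note what this sketch cannot do: with no access to prosody (pitch, stress, timing), a purely lexical approach will happily mislabel a sarcastic "oh, great", which is exactly where prosodic cues and affective computing come in.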
See also my recent Quora answers to:
· What steps are being taken toward developing artificial emotion in machines, computers, and robots?
· What is the relation between sentiment analysis, natural language processing and machine learning?