What is the intersection of natural language processing and sound or music interpretation methods?
Generally, natural language processing operates on text. However, speech recognition and speech synthesis convert sound into text and text into sound. Historically, some systems tried to go directly from sound to meaning, and perhaps from meaning to sound, without an intermediate text representation; over time, though, the field became more specialized and compartmentalized for the sake of efficiency.
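As a rough illustration of that text-mediated round trip, here is a minimal Python sketch, assuming the third-party SpeechRecognition and pyttsx3 packages are installed and using a placeholder audio file name; it is not meant as the definitive pipeline, just the sound-to-text-to-sound idea.

```python
import speech_recognition as sr
import pyttsx3

# Sound -> text: transcribe an audio file with an off-the-shelf recognizer.
recognizer = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:   # "speech.wav" is a placeholder path
    audio = recognizer.record(source)
text = recognizer.recognize_google(audio)    # uses Google's web API; needs a network connection
print("Transcript:", text)

# Text -> sound: synthesize the transcript back into speech.
engine = pyttsx3.init()
engine.say(text)
engine.runAndWait()
```

Note that text sits in the middle of both directions, which is exactly why these components connect naturally to the rest of NLP.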
As far as music is concerned, there are plenty of music-related systems built on text processing (for example, on lyrics or metadata) that do involve NLP, as in the sketch below. Otherwise, there are music recognition systems, which operate on audio rather than text and therefore do not involve NLP.
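As a toy example of the text side, here is a standard-library-only Python sketch of crude keyword extraction from lyrics; the lyric snippet and stopword list are made-up placeholders, not drawn from any real system.

```python
from collections import Counter

# Placeholder lyric text; in practice this would come from a lyrics database.
lyrics = """the rain falls down and the rain keeps falling on my old town
the river runs on and the river keeps running past my door"""

# Tiny placeholder stopword list; real systems use proper stopword resources.
stopwords = {"the", "and", "on", "my", "past"}

tokens = [word for word in lyrics.lower().split() if word not in stopwords]
print(Counter(tokens).most_common(5))   # most frequent content-bearing words
```

Anything along these lines (lyric analysis, tag and metadata processing, review mining) is squarely NLP, whereas identifying a song from its audio signal is not.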
See also the answer to my Quora question: