Keith, I think you know the answer to this as well as anyone, but I’ll take a stab at it. First, we have to assume there is an entire layer of technologies outside the public domain. Second, as Siri and Watson show, the entire ecosystem is tremendously complex, spanning hardware, software, and even crowdsourcing. Third, to take the Turing perspective: I do know of machines that are more semantically competent than certain people.
There is still a long way to go in terms of hardware improvements, not to mention shifting the installed base. And software that can differentiate and follow individual speakers in a crowded room, for instance, is not yet at hand. Further, what in fact is semantics? Drill down into grammar far enough and it becomes extraordinarily vague as to what it actually represents. My particular focus at the moment is on modeling the mechanism by which the human brain converts words into images, and images into words. That mechanism is probably mathematical, and may prove to resemble something like fractals. I can refer you to my draft webpages on Computational Metaphorics [1] and Computational Dreaming [2].
[1] http://www.meta-guide.com/home/bibliography/google-scholar/computational-metaphorics
[2] http://www.meta-guide.com/home/bibliography/google-scholar/computational-dreaming