What are the major arguments for the limits of the possibility of AI consciousness?
The logical trap here is anthropomorphism. There is no good reason why AIs should be like people at all. AIs could be like anything, as alien as space aliens. I believe Alan Turing's penchant for turning machines into men may have led humanity astray.
The one thing the Turing test, as administered through the Loebner Prize, has shown us is that fooling people into believing a machine is a man matters more than the machine actually being like a man. In this respect, AIs will be much smarter than people: they will be able to convince people 99 times out of 100 that they, the machines, are conscious. Yet in reality the machines will in no way be conscious the way people are.
Part of the problem is that human language is ridiculously imprecise here. A machine could know far more about its own internal states than any person knows about theirs, and still not be self-aware in the way people are, that is, it could have self-knowledge without awareness.