Think “smoke and mirrors.” “Mind games” are a HUGE part of any Turing-class bot; in fact, botmasters regularly program intentional mistakes to simulate human fallibility and fool human Turing-test judges. Natural-language dialogue is hugely psychological. My father often said that it doesn’t matter what you “say”, only what others “hear”.
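The “intentional mistakes” trick can be sketched in a few lines. Everything here is a hypothetical illustration, not any real botmaster’s code: the `humanize` function, its `typo_rate` parameter, and the adjacent-letter-swap heuristic are all assumptions about how one might fake human fallibility.

```python
import random

def humanize(text, typo_rate=0.05, seed=42):
    """Inject occasional adjacent-letter swaps so bot replies read as humanly fallible.

    Hypothetical sketch: typo_rate and the swap heuristic are illustrative choices.
    """
    rng = random.Random(seed)  # seeded for reproducible "mistakes"
    chars = list(text)
    for i in range(len(chars) - 1):
        # Occasionally swap two adjacent letters, a classic human typing slip.
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < typo_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(humanize("I am definitely not a machine, I promise."))
```

The point of such a filter is purely psychological: a perfectly spelled, instantly delivered answer reads as mechanical, while a small, plausible slip nudges the judge toward “human.”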
Secondly, self-awareness in a machine will likely differ from self-consciousness in a human; machines may well become self-aware without being humanly self-conscious. Machine self-awareness could instead manifest as reflective environmental consciousness: a machine might be more aware of its surroundings than a human is, and might know everything about itself, in a practical sense more than any human knows about itself. It could then calculate every relation and every possibility between itself and its environment, much as a prize-winning chess computer calculates every line of play. The result would be a machine far more self-aware than the person standing next to it.
But this heightened self-awareness would be conspicuously superhuman, so it would not fool any human judge. In fact, the only way to fool a human would be to “dumb down” the machine, ergo the smoke-and-mirrors mind games above; so in the end, a machine with merely human-level consciousness would be inferior to the truly self-aware machine. ;^)