How "meaningful" is Turing test?
Even though it's sometimes presented as a "scientific" way of judging how good an AI is, I view it as a rather naive, left-handed way of defining or measuring computer intelligence.
There are many more aspects to computer intelligence than merely whether it can trick some people into believing they're talking with a human. Sure, if a computer could answer any question the way a human expert would, that would be a genuinely hard achievement, but computer intelligence isn't there yet, and it's unclear whether it ever will be, given the long-known differences between formal and informal semantics. It may simply be a problem of the required system design exceeding human capability: no one could ever, in practice, develop a program that good, because it would just be too large.
Nowadays, the intelligence of computer programs could instead be measured by, for example, the quantity of statements they can evaluate, that is, simply the number of possibilities they are able to process. After all, even the goal of the Turing test amounts to a computer program with a large enough database and a large enough semantic analyzer and reasoner.
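To make the idea concrete, here is a minimal toy sketch of that kind of measure: score a program by how many statements from a fixed test set it can process at all, and how many it evaluates correctly. The `toy_evaluator`, `coverage_score`, and the tiny test set are purely illustrative assumptions on my part, not an actual benchmark anyone uses.

```python
def toy_evaluator(statement):
    """A stand-in 'program under test': it only understands simple arithmetic claims like '2 + 2 = 4'."""
    try:
        lhs, rhs = statement.split("=")
        # Crude evaluation for illustration only; returns whether the claim holds.
        return eval(lhs, {"__builtins__": {}}) == int(rhs)
    except Exception:
        return None  # the statement is outside what this program can process


def coverage_score(evaluator, statements):
    """Count how many statements the program can evaluate at all, and how many it gets right."""
    evaluated = correct = 0
    for stmt, truth in statements:
        verdict = evaluator(stmt)
        if verdict is not None:
            evaluated += 1
            correct += (verdict == truth)
    return evaluated, correct


if __name__ == "__main__":
    test_set = [
        ("2 + 2 = 4", True),
        ("3 * 7 = 20", False),
        ("the sky is blue", True),  # outside the toy program's reach
    ]
    print(coverage_score(toy_evaluator, test_set))  # -> (2, 2)
```

The point of the sketch is only that such a measure is a count over a test set, not a pass/fail judgment by a human interrogator, which is exactly how it differs from the Turing test.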
So, are there better measures?