Should-Read: This piece by the interesting Geoffrey Pullum seems to start out non-optimally.
There is a difference between (1) true "AI" on the one hand and (2) a successful voice/text interface to database search on the other. At the moment (2) is easy, and we should implement it. Doing so requires that humans adjust a little and avoid using "not", for figuring out within which superset of results any particular "not" is asking for the complement is genuinely hard, and does require true or nearly-true "AI".
Thus to solve Pullum's problem, all you have to do is ask two queries: (i) "Which UK papers are part of the Murdoch empire?"; (ii) "What are the major UK papers?". Take the complement of (i) within (ii) and you immediately get a completely serviceable and useful answer to your question.
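The two-query workaround can be sketched in a few lines. This is a minimal illustration, not anything Google exposes: the two result lists are hard-coded hypothetical stand-ins for what the queries might return, and the "hard" step Google skips is just a set difference once the lists exist.

```python
# (ii) Hypothetical answer list for "What are the major UK papers?"
major_uk_papers = {
    "The Times", "The Sun", "The Guardian",
    "The Daily Telegraph", "Daily Mail",
}

# (i) Hypothetical answer list for "Which UK papers are part of
# the Murdoch empire?"
murdoch_papers = {"The Times", "The Sun"}

# Complement of (i) within (ii): major UK papers that are NOT
# part of the Murdoch empire -- the answer to the original
# "not"-containing question.
non_murdoch_papers = major_uk_papers - murdoch_papers

print(sorted(non_murdoch_papers))
```

The subtraction itself is trivial; the genuinely hard part is what the sketch assumes away, namely getting a search engine to return the two short, clean lists in the first place and to recognize which superset the "not" ranges over.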
You need two queries rather than one because Google has not set itself up to produce short lists as possible answers to (ii) and (i) and then subtract (i) from (ii). And the reason it has not done that is that doing so is a hard AI problem, rather than the brute-force-and-massive-ignorance word-frequency-plus-internet-attention that is Google's shtick.
But what amazes me is that Google can get so close—not that "true AI" is really hard.
And maybe that is Pullum's real point:
Geoffrey Pullum (2013): Why Are We Still Waiting for Natural Language Processing?: "Try typing this, or any question with roughly the same meaning, into the Google search box... http://www.chronicle.com/blogs/linguafranca/2013/05/09/natural-language-processing/