In a fascinating post for ‘The Scholarly Kitchen’, Teresa Kubacka shares her adventures in beta-testing the new ScopusAI tool, released last summer. Testing it on her PhD research topic, she found a bewildering randomness in how the tool’s chatbot interface retrieved and analyzed Scopus literature in response to natural language queries, compared with conventional querying of the Scopus database. Such testing is, of course, meant to improve the tool’s ongoing performance, but Teresa emphasized that it underlines three important points: the need to scrutinize the chatbot’s answers; the need for transparency about the tool’s empirical performance; and the fact that AI is only as good as the underlying data. Read all about it at: