@cwebber Sadly, this is because #LLMs (which most people think of as AI itself, rather than just one tool in the AI toolset) are primarily designed to "fill in the blank" with the most statistically likely words.
Being a very computationally expensive type of Mad Libs makes them sound convincing even when they are spouting nonsense. People can do this too, but LLMs often excel at it.
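To make the "fill in the blank" point concrete, here's a toy sketch (made-up vocabulary and scores, not a real model) of the core move: score every candidate next word, turn the scores into probabilities, and pick a statistically likely continuation.

```python
import math

# Hypothetical raw scores a model might assign to candidate next words
# for "The capital of France is ___". Purely illustrative numbers.
scores = {"Paris": 4.0, "London": 2.0, "pancakes": 0.5}

# Softmax converts raw scores into a probability distribution.
total = sum(math.exp(s) for s in scores.values())
probs = {word: math.exp(s) / total for word, s in scores.items()}

# "Fill in the blank": take the most likely word.
choice = max(probs, key=probs.get)
print(choice)  # "Paris" -- likely is not the same as true
```

Note the model never checks facts; it only ranks continuations. If the training data had more nonsense than truth for some blank, the nonsense would win just as confidently.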
AI is great at a lot of things. As a natural language search over a given data set, it can be very effective and a heck of a lot easier to use than a search engine or a regular expression, both of which require you to already know what keywords to look for.
Even the new chain-of-thought systems are really just layering another pass through the LLM to turn its search strategy into natural language. It's not really #XAI yet. Someday, but not today.
I'm glad you're calling out the critical thinking part! Using current AI tools is a lot like using a calculator: it can make things easier, but you have to already know what you're trying to do and have at least a ballpark idea of what a valid answer should look like. Metaphors are tricky, but "AI as a calculator" is my current go-to for explaining when not to use one.