If I need something that does have answers that can be definitely wrong in important ways, where I’m not an expert in the subject, or where I don’t have all the underlying data memorised and would have to repeat all the work myself to check it, then today I can’t use an LLM for that at all. […]
Of course, these models don’t do ‘right’. They are probabilistic, statistical systems that tell you what a good answer would probably look like. They are not deterministic systems that tell you what the answer is. They do not ‘know’ or ‘understand’ — they approximate. A ‘better’ model approximates more closely, and it may be dramatically better at one category of question than another (though we may not know why, or even understand what the categories are). But that still is not the same as providing a ‘correct’ answer.
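To make that concrete, here is a deliberately toy sketch (in Python, with invented numbers, not any real model’s output) of what ‘probabilistic, not deterministic’ means in practice: the model assigns probabilities to possible next words and the output is a sample from that distribution, so a correct answer and a plausible-looking wrong one differ only in likelihood, not in kind.

```python
# Illustrative sketch only: a toy next-token distribution with made-up
# probabilities. The point is that generation is a draw from a probability
# distribution, not a lookup of a stored fact.
import random

# Hypothetical probabilities a model might assign to the next word after
# "The capital of Australia is" -- plausible-looking numbers, nothing more.
next_token_probs = {
    "Canberra": 0.62,   # the right answer, but only the most likely one
    "Sydney": 0.30,     # a common wrong answer that 'looks' like a good answer
    "Melbourne": 0.07,
    "Auckland": 0.01,
}

def sample_next_token(probs: dict) -> str:
    """Draw one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Run it a few times: most draws say "Canberra", but some will confidently
# say "Sydney". Nothing inside the system marks one as correct and the other
# as wrong -- both are just samples from the same distribution.
for _ in range(5):
    print(sample_next_token(next_token_probs))
```

A ‘better’ model, in this picture, is one whose distribution puts more of its weight on the right word more of the time; it is still giving you the most probable-looking answer, not a verified one.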