To have an algorithm, you need a concrete way to show that an output is correct or optimal.
An LLM, by contrast, is satisfied with any output that passes a subjective "this does not seem to be a hallucination" test.
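As a concrete illustration of what "a concrete way to show an output is correct" means (a minimal sketch I'm adding, not part of the original comment): a sorting algorithm's output can be verified mechanically, with no judgment call involved.

    from collections import Counter

    def is_correct_sort(input_list, output_list):
        """Mechanical check: the output is sorted and is a
        permutation of the input -- either it holds or it doesn't."""
        return (
            all(a <= b for a, b in zip(output_list, output_list[1:]))
            and Counter(output_list) == Counter(input_list)
        )

    # No "this looks plausible" middle ground:
    assert is_correct_sort([3, 1, 2], [1, 2, 3])
    assert not is_correct_sort([3, 1, 2], [1, 2, 2])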