This. It points out exactly what I do not like about LLMs.
“This concludes all the testing for GPT5 I have to do. If a tool is able to actively mislead me this easily, potentially wasting significant amounts of my time trying to make something work that is guaranteed never to work, it’s a useless tool. I don’t like collaborating with chronic liars who can’t openly point out their knowledge gaps, so I’m also not interested in burning resources on an LLM that does the same.”