Coherence degrades while fluency improves.

The central problem is not that AI systems sometimes fail. Of course they fail. Nor is the main problem that they occasionally hallucinate, wander, or produce obvious nonsense. Those are manageable problems because they announce themselves early. The more interesting and professionally dangerous problem is that a system can become steadily more fluent while its coherence quietly degrades, so the failure never announces itself at all.

I had a long session recently with a public genAI tool that taught me something more important than the topic I started with.

The lesson was not about whether the model was “smart enough.” It was about control. At a certain point, I realized I was no longer simply prompting an LLM. I was negotiating with it for control of the conversation.