There are moments in a long AI session when the exchange stops feeling linear.

You are no longer simply asking a question and receiving an answer. You are no longer even refining a prompt in the ordinary sense. Something else begins to happen. Certain phrases return with altered weight. Certain errors recur, but not identically.

Coherence degrades while fluency improves.

The central problem is not that AI systems sometimes fail. Of course they fail. Nor is the main problem that they occasionally hallucinate, wander, or produce obvious nonsense. Those are manageable problems because they announce themselves early. The more interesting and professionally dangerous problem is that a system can become more fluent even as it becomes less coherent, so that the failure never announces itself at all.

We’ve spent the last couple of years treating generative AI like a vending machine. Select a task. Insert a prompt. Retrieve a product. And to be fair, in many legal and professional contexts that’s exactly the right frame: accuracy and precision matter, and “creative” output in payroll or billing codes is usually just a polished mistake.

I had a long session recently with a public genAI tool that taught me something more important than the topic I started with.

The lesson was not about whether the model was “smart enough.” It was about control. At a certain point, I realized I was no longer simply prompting an LLM. I was negotiating with it over who controlled the conversation.

I have just posted a trio of new research white papers to SSRN. They represent the latest output from the Kennedy Idea Propulsion Laboratory and the culmination of my work over the last month to move AI beyond “utilitarian drift”: the cycle of incremental efficiency gains that ultimately generates no transformative insight.