There are moments in a long AI session when the exchange stops feeling linear.

You are no longer simply asking a question and receiving an answer. You are no longer even refining a prompt in the ordinary sense. Something else begins to happen. Certain phrases return with altered weight. Certain errors recur, but not identically.

Coherence degrades while fluency improves.

The central problem is not that AI systems sometimes fail. Of course they fail. Nor is the main problem that they occasionally hallucinate, wander, or produce obvious nonsense. Those are manageable problems because they announce themselves early. The more interesting and professionally dangerous problem is that a system can become less coherent while sounding more fluent, so that its failures stop announcing themselves at all.

When Tom and I started the Fresh Voices series on The Kennedy-Mighell Report podcast, we had a pretty simple idea.

A lot of the most interesting work in legal tech seemed to be coming from people who were newer to the field, earlier in their careers, or just not as widely known yet as the established names everyone already follows.

Why the OpenAI Hiring Surge Signals a Crisis of Professional Control

The management problem in AI is no longer whether the models are improving. They are. The management problem is whether the working surface is becoming more dependable or less.

That is why the recent story about OpenAI's plan to nearly double its headcount matters.

We pay AI tools to do the hard work: the synthesis, the heavy lifting, the cognitive labor we do not have time for. What we often get instead is a tool that produces a decent first draft and then hands the real work back to us.

Not just the hard work. The administrative work, too.