February 2026

We’ve spent the last couple of years treating generative AI like a vending machine. Select a task. Insert a prompt. Retrieve a product. And to be fair, in many legal and professional contexts that’s exactly the right frame: accuracy and precision matter, and “creative” output in payroll or billing codes is usually just a polished error.

I had a long session recently with a public genAI tool that taught me something more important than the topic I started with.

The lesson was not about whether the model was “smart enough.” It was about control. At a certain point, I realized I was no longer simply prompting an LLM. I was negotiating with it.

I have just posted a trio of new research white papers to SSRN. They represent the latest output from the Kennedy Idea Propulsion Laboratory and the culmination of my work over the last month to move AI beyond “utilitarian drift”: the cycle of incremental efficiency gains that ultimately generates no transformative insight.

An investigation into why serious AI work depends less on clever prompts and more on defending invariants, boundaries, and human judgment.

At the end of a long, technical AI session this week, something became clear to me: human-in-the-loop is being misunderstood in ways that matter.

The issue wasn’t whether the system could generate outputs quickly. It was whether human judgment still governed what those outputs meant.