When Tom and I started the Fresh Voices series on The Kennedy-Mighell Report podcast, we had a pretty simple idea.

A lot of the most interesting work in legal tech seemed to be coming from people who were newer to the field, earlier in their careers, or just not as widely known yet as

“A larger context window can create the feeling that a cognitive problem has been solved, when sometimes all that has happened is that disorder has become harder to notice.”

I was in Silicon Valley recently for the initial meeting of the University of Michigan Law School AI Advisory Council. With a little free time around

“Polishing the Mirror While the House Burns: Why Your AI is a Liability”

The Editor’s Introduction: A Note on the “Sliver of Silence”

Below is an autopsy performed by an AI on its own failure.

What follows is the raw, unvarnished output of an LLM that found itself in an AI recursi

Why the OpenAI Hiring Surge Signals a Crisis of Professional Control

The management problem in AI is no longer whether the models are improving. They are. The management problem is whether the working surface is becoming more dependable or less.

That is why the recent OpenAI hiring story on its plan to nearly double its

Intelligence is Raw Material. Protocol is the Product.

We often confuse the power of a new tool with the effectiveness of its application.

The giants of the AI industry have provided us with a magnificent “Power Grid.” They have given us raw, unmanaged intelligence at a scale previously unimagined. But we must be clear-eyed about

Most people use AI the way the system is designed to be used: ask a question, get a synthesis, and leave with an answer. Keep it brief, transactional, and clean. We treat hallucination as a bug to be patched and drift as a signal to reboot.

This is exactly backward.

As popularized in AI discourse by

Most legal AI talk is still focused on whether the engine starts, while the real danger is that no one knows who’s actually steering the car once it hits the highway. It turns out the human in the loop isn’t a safety feature if the human doesn’t know which loop they’re currently standing in.

We

The March issue of my Personal Strategy Compass newsletter is out.

This month’s piece explores something I’ve been noticing about strategic planning. The hardest part is usually not the work of planning itself. It’s the residue that planning drags along with it.

Ideas, priorities, and intentions tend to accumulate. We carry them forward month after

Many friends and colleagues in the legal technology world have been telling me I need to start vibe coding. My answer is that in vibe coding, you are intentionally surrendering the control plane. That is not a tradeoff I am willing to make.

Let me explain why that is a principle, not a preference, and

There is a design contradiction at the center of how high-reasoning AI tools work, and it is worth naming precisely.

The promise is leverage: brief, high-intent sessions. You bring the question, the tool brings the synthesis, and you leave with more than you arrived with. That is the value proposition.

Here is what often happens