We’ve spent the last couple of years treating generative AI like a vending machine. Select a task. Insert a prompt. Retrieve a product. And to be fair, in many legal and professional contexts that’s exactly the right frame: accuracy and precision matter, and “creative” output in payroll or billing codes is usually just a polished mistake.

I had a long session recently with a public genAI tool that taught me something more important than the topic I started with.

The lesson was not about whether the model was “smart enough.” It was about control. At a certain point, I realized I was no longer simply prompting an LLM. I was negotiating with it.

I have just posted a trio of new research white papers to SSRN. They represent the latest output from the Kennedy Idea Propulsion Laboratory and the culmination of my work over the last month to move AI beyond “utilitarian drift”: the cycle of incremental efficiency gains that ultimately generates no transformative insight.

I’ve been working on organizing and optimizing my AI prompts and prompting methodologies. I’ve noticed that my approaches fall into two major categories.

The first I call “complex, structured prompting.”

Google Gemini describes that, and I think accurately, as “a systematic, engineered way to interact with AI. It’s about building a personalized ‘cognitive operating system.’”
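To make the idea of “complex, structured prompting” a little more concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical: the field names (`role`, `context`, `task`, `constraints`, `output_format`) are not from any particular methodology; they simply stand in for whatever scaffolding a given prompting system defines and reuses across sessions.

```python
# Illustrative sketch only: a reusable "structured prompt" template.
# The section names below are hypothetical placeholders for whatever
# scaffolding a given prompting methodology actually defines.
from dataclasses import dataclass, field


@dataclass
class StructuredPrompt:
    role: str                       # persona the model should adopt
    context: str                    # background the model needs
    task: str                       # the specific request
    constraints: list[str] = field(default_factory=list)
    output_format: str = "prose"    # how the answer should be shaped

    def render(self) -> str:
        """Assemble the labeled sections into one prompt string."""
        parts = [
            f"ROLE: {self.role}",
            f"CONTEXT: {self.context}",
            f"TASK: {self.task}",
        ]
        if self.constraints:
            bullets = "\n".join(f"- {c}" for c in self.constraints)
            parts.append(f"CONSTRAINTS:\n{bullets}")
        parts.append(f"OUTPUT FORMAT: {self.output_format}")
        return "\n\n".join(parts)


# Example use: the same template, filled in for one editing task.
prompt = StructuredPrompt(
    role="Technical editor",
    context="A draft white paper on prompting methodology",
    task="Identify the three weakest arguments",
    constraints=["Cite specific passages", "No more than 200 words"],
    output_format="numbered list",
)
print(prompt.render())
```

The point of the sketch is the engineering habit, not the particular fields: the prompt stops being an ad hoc sentence typed into a chat box and becomes a versioned, inspectable artifact you can refine over time, which is what distinguishes this category from casual prompting.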