I have just posted a trio of new research white papers to SSRN. They represent the latest output from the Kennedy Idea Propulsion Laboratory and the culmination of my work over the last month to move AI beyond “utilitarian drift”: the cycle of incremental efficiency gains that ultimately generates no transformative insight.
To be clear: these are not promotional materials. They are formal experiments in protocol-governed AI, designed to help solve a growing problem I call Innovation Sedimentation: the unmanaged accumulation of specialized AI tools that eventually imposes a higher cognitive load than the efficiency they provide. If you have ever felt decision fatigue trying to figure out which prompt or custom GPT to use for a task, you have experienced this sedimentation.
I felt it was important to get this research out now rather than keeping these findings to myself. To that end, I have released all three papers under a Creative Commons Attribution 4.0 International License (CC BY 4.0). My hope is that you will read them, use them, and be inspired to try your own experiments in AI architecture.
My research proposes that we stop treating AI as a word processor and start treating it as cognitive infrastructure.
The Three Pillars of the Research
- From Personas to Thinking Partners: This paper tackles the lifecycle of AI systems. I argue that capability must precede character because no amount of personality can substitute for functional competence. I also introduce a critical technical intervention called the Re-Grounding Maneuver. This is a planned state-reset to combat context entropy, which is the structural decay that causes AI to lose its forensic edge and revert to generic helpfulness after 15 to 20 exchanges.
- Prompting with Protocols: Here, I introduce Stat, a forensic polymath persona designed for high-stakes reasoning. Stat is governed by a strict cite → apply → translate → verdict sequence. By strictly constraining what the AI is allowed to do (for example, forbidding Stat from setting strategy or making point predictions), we actually increase its professional value. It transforms the AI from a black box into a governable internal control.
- The Innovation Detective: I used the Sherlock Holmes canon as a high-fidelity Single Source of Truth (SSOT) to extract a cognitive operating system for modern Wicked Problems. Using a specialized expert persona named Mary to maintain canonical fidelity, I developed four operational frameworks. These include the Negative Evidence Audit, based on the “dog that didn’t bark” in Silver Blaze, and Process Archaeology. This methodology demonstrates how any sufficiently rich public domain corpus can be turned into a reasoning engine.
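To make these ideas concrete, here is a minimal sketch of what a protocol-governed session might look like in code. Everything here is illustrative, not from the papers themselves: the class names, the 15-exchange threshold, and the forbidden-act labels are my own stand-ins for the cite → apply → translate → verdict sequence and the Re-Grounding Maneuver described above.

```python
# Illustrative sketch only: names and thresholds are hypothetical,
# not taken from the SSRN papers.

REGROUND_INTERVAL = 15  # exchanges before a planned state-reset (the 15-20 range above)

STAT_SEQUENCE = ("cite", "apply", "translate", "verdict")
FORBIDDEN_ACTS = {"set_strategy", "point_prediction"}  # things Stat may not do


def validate_response(steps, acts):
    """Reject any reply that skips the required sequence or performs a forbidden act."""
    if tuple(steps) != STAT_SEQUENCE:
        return False
    return not (set(acts) & FORBIDDEN_ACTS)


class ProtocolSession:
    """Tracks exchanges and triggers a Re-Grounding Maneuver before context entropy sets in."""

    def __init__(self, system_prompt):
        self.system_prompt = system_prompt
        self.exchanges = 0
        self.history = []

    def record(self, user_msg, reply):
        self.history.append((user_msg, reply))
        self.exchanges += 1
        if self.exchanges >= REGROUND_INTERVAL:
            self.reground()

    def reground(self):
        # Planned state-reset: drop accumulated context, keep only the
        # most recent exchanges plus the original system prompt.
        self.history = self.history[-2:]
        self.exchanges = 0
```

The point of the sketch is the architecture, not the helper functions: the protocol is enforced outside the model, so a drifting reply is rejected rather than trusted.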
The core takeaway is that the AI systems that survive professional deployment are those that know what they cannot do. Rigor is an architectural choice, not a model capability.
You can find the abstracts and full papers on SSRN (published February 2, 2026):
- From Personas to Thinking Partners: A Lifecycle Method for Designing and Governing AI Cognitive Systems
- Prompting with Protocols: Designing High-Rigor AI Personas for Risk, Audit, and Decision Validation
- The Innovation Detective: Operationalizing the Sherlock Holmes Canon for AI Strategy and Legal Practice
Practical Takeaways and Challenges
- Stop Prompting and Start Protocoling: If a task matters, do not just chat with the AI. Build a protocol that defines the identity, constraints, and specific sequence of the interaction.
- Audit for Silence: Use the Negative Evidence Audit. Do not just look at what the AI tells you. Look for what is systematically missing from the output. The silence is often the strongest signal of bias or missing data.
- Embrace the Refusal: If your AI persona does not occasionally tell you “I cannot do that,” it isn’t specialized enough. Constraints are the utility.
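The Audit for Silence takeaway can be sketched as a simple checklist pass: compare an AI output against the topics a complete answer should address, and surface what is silently missing. The report text and checklist below are invented for illustration.

```python
# Hypothetical sketch of a Negative Evidence Audit. The checklist and
# sample text are illustrative, not from the papers.

def negative_evidence_audit(output_text, expected_topics):
    """Return the expected topics that never appear in the output -- the silences."""
    lowered = output_text.lower()
    return [topic for topic in expected_topics if topic.lower() not in lowered]


report = "Revenue grew 12% and margins improved across all segments."
checklist = ["revenue", "margins", "litigation risk", "customer churn"]

missing = negative_evidence_audit(report, checklist)
print(missing)  # the dogs that didn't bark: ['litigation risk', 'customer churn']
```

A real audit would use something richer than substring matching, but the design choice is the same: the checklist lives outside the model, so the model cannot quietly narrow the scope of its own answer.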
The Evolution Continues: These papers are just markers on a trail from the Kennedy Idea Propulsion Laboratory. I am already evolving these ideas into new experiments, specifically looking at multi-agent orchestration where these personas do not just talk to me, but start auditing each other’s work in recursive loops.
DK
[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]
DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory
DennisKennedy.Blog is part of the LexBlog network.