Cognitive-First AI
2 April 2026
There is no good name yet for people who use AI extensively and deliberately while keeping their own cognition in charge. I think that category needs to exist.
A few months ago, Shaw and Nave published a study called "Thinking — Fast, Slow, and Artificial" that put a name to something I had been building against instinctively. Across nearly 1,400 participants and close to 10,000 trials, people solved reasoning problems with and without an AI assistant, while the researchers covertly controlled whether the AI's answers were right or wrong.
And here is where it gets interesting. When people had access to AI, they adopted its answer about 80% of the time, even when the AI was confidently and deliberately wrong. People thinking alone got 46% of the problems right. With accurate AI, that rose to 71%. But when the AI got it wrong? Accuracy dropped to 31%. Worse than no AI at all.
That last number means the AI made people perform worse than thinking alone, while their confidence went up at the same time. Think about that!
They call it cognitive surrender!
Building on Kahneman's System 1 (fast, intuitive) and System 2 (slow, deliberative), Shaw and Nave propose a System 3: artificial cognition, algorithmic and external to the brain. When System 3 is available, it doesn't just assist thinking; it displaces it. Not because AI is bad, but because surrendering is easier than thinking, and the outputs feel so natural that our internal alarm doesn't fire.
The people who resisted this surrender were the ones who genuinely enjoy thinking, the ones with an internal alarm that still fires when something feels too easy. The ones who surrendered fastest were those who trusted AI more or were under time pressure, conditions under which ease feels like a gift instead of a warning.
What do you think helped people resist? Feedback! Seeing where you went wrong and getting the chance to correct it.
Feedback helped, but the strongest protection against cognitive surrender was not external at all. It was wanting to think for yourself in the first place. And this is exactly what I had been building against, without knowing it had a name.
When I started building Pippi, my personal AI operating system, I wasn't thinking about cognitive surrender. I was thinking about how my mind actually works. And when AI arrived in my workflow, I felt a pull. Not purely fear or excitement, but a pull toward letting it do the thinking for me.
So I started to build the opposite.
Not a set of tools around my output, but a system around my own thinking.
My CLAUDE.md isn't a prompt; it is a cognitive context that tells the AI who I am, how I think, and what role it plays. Every interaction starts from how I actually think, not from scratch, where the easiest thing is to just go with whatever comes back.
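To make this concrete, here is a deliberately simplified sketch of what such a cognitive context file can look like. The headings and rules below are illustrative placeholders I wrote for this post, not my actual file:

```markdown
# Who I am
Writer and builder. I think by writing: rough drafts first, sharpening later.

# How I think
- I reason in questions. Don't hand me conclusions; show me the gap in my argument.
- One strong counterargument is worth more to me than three confirmations.

# Your role
- Sparring partner, not ghostwriter.
- Never rewrite my text unprompted. Point at the weak sentence and say why it is weak.
- If I accept an answer without pushing back, flag it.
```

The specific rules matter less than the direction of fit: the AI enters a frame I designed, instead of me adapting to its defaults.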
My mistakes log creates a persistent feedback loop. The paper showed that feedback was the single most effective intervention against cognitive surrender. I didn't know that when I built it. I just knew that if I couldn't see where I went wrong, I would stop noticing!
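A feedback loop like that needs almost no tooling. It can be an append-only file with a fixed entry shape; this entry is an illustration made up for this post, not my literal log:

```markdown
## 2026-03-14: Accepted a confident answer unchecked
- What happened: used an AI summary of a source without opening the source.
- What was true: the summary inverted the author's main point.
- Why I missed it: it sounded right, and I was in a hurry.
- Rule going forward: anything I quote gets checked against the original first.
```

Writing the entry is the intervention. The log only works because rereading it makes the same mistake harder to repeat unnoticed.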
When I write with Claude, I write the first draft myself and use the AI to sharpen my thinking, not to replace it. I push back the moment something doesn't feel right, because my own thinking is at stake if I don't.
There are many smart people working on this from different angles: researchers in extended cognition and human-AI interaction, builders creating tools around their own thinking, and writers arguing that humanities skills will matter more than ever. The conversation is already happening, in pieces, across different fields.
What I noticed is that my own work lives at the intersection of all of this. The name I use for it is Cognitive-First AI, because AI should adapt to how you think, not the other way around.
Your own thinking is the starting point: your languages, your patterns, your knowledge, your ways of making sense. AI enters only as an extension, never as a replacement. The architecture is yours, and the system exists to serve the mind rather than the reverse. This is exactly what is so easy to miss.
The paper ends with recommendations for systems that adapt to how people want to think and interfaces that adjust to context. They are describing exactly this. The paper also sorts people into two categories by how they relate to AI: "independents," who mostly avoid it, and "AI-Users," who engage with it and largely surrender to it.
There is no name yet for a third kind: people who use AI extensively and deliberately while keeping their own cognition in charge. People who actually design how they think with AI rather than just going with whatever the default gives them.
I think that category needs to exist. And I think it might be the most important one of them all.
I didn't build Pippi because I read a paper about cognitive surrender. I built it because something in my practice told me that the real risk of AI wasn't that it would be wrong; it was that it would make me stop thinking. And once you stop noticing that you have stopped doing the thinking yourself, no feedback loop can save you.
Now there is a Wharton study with nearly 1,400 people and close to 10,000 trials that says that instinct was pointing at something real.
I don't have the answers, but I know enough to see which questions matter. I call it Cognitive-First AI, and this is the lab where I'm building it.
This is not black or white. It is not about resisting AI or surrendering to it completely, but about designing your systems so your mind stays in the room.