My AI Has a Soul, and I Want It Portable
Why your chat history is your identity and why OpenAI and Google shouldn't own it.
We’ve all been there.
You’re scrolling Twitter at 10 AM when OpenAI drops some new reasoning model, and suddenly your timeline becomes a war zone of benchmark screenshots. Then Google fires back with a Gemini update that has a context window longer than a textbook, and Anthropic sneaks in with a Claude version that writes better poetry than your ex.
As expected, the group chat goes wild. I’m usually that one person backing Google. Call me a fan if you like, but I honestly think Gemini is very solid, and I stand by it.
But here’s the thing: objectively, it’s complicated. ChatGPT still has the cleanest integrations and the widest ecosystem. When you need something to just work, it usually does. Gemini (my personal favourite) is incredibly strong with images, audio, and documents, and if you already live inside Google Workspace like I do, it feels like having a colleague who actually understands your emails and remembers context. And then there’s Claude, which, I’ll admit, writes in a way that feels human. Not like a corporate memo generator. It has feeling. Claude’s also great at coding.
But we’re all trapped.
The Golden Handcuffs Are Real
My partner Osita gets it. He sees what Gemini can do. He’s watched me use it. He knows it’s brilliant. But switching? That’s not easy.
He has spent a long time teaching ChatGPT how he thinks, what his business context is, even the way he prefers sentences to be structured. Moving to another model isn’t just “trying something new.” It’s like starting therapy again with someone who knows nothing about his past. He’d have to explain everything from scratch.
His context is locked up. His digital identity is being held hostage by a company that has zero incentive to let him leave.
And that’s the problem for all of us. We’ve poured so much of ourselves into these LLMs (our writing styles, our preferences, our private thoughts, our work history) that they’ve become more than tools. They’re identity layers. They know us better than some of our friends do.
And we can’t take any of it with us.
Enter the LLM Identity Portability Protocol (LIPP)
This is where I think we need something I’m calling the LLM Identity Portability Protocol, or LIPP for short.
Imagine this: a decentralized system where your context lives independently of any single company. All your conversations, preferences, patterns, and personal data exist in a format that you control. Your “AI soul,” for lack of a better term, becomes portable.
On Monday, ChatGPT has the best reasoning for your coding problem? Plug your “soul” in there. On Wednesday, Claude writes better marketing copy? Slide your context over. Friday, Gemini drops a feature that perfectly fits your workflow? You’re already there, and it already knows you.
No rebuilding. No re-explaining. No starting from zero.
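To make that concrete, here’s a rough sketch, in TypeScript, of what a LIPP “soul” file could carry. To be clear, nothing like this exists yet: every field name below is my own guess at what a portable identity layer would need, not a real standard.

```typescript
// Purely hypothetical: a sketch of what a portable LIPP "soul" file could hold.
// No provider supports this today; every field name is an assumption about
// what an identity layer would need in order to travel between models.

interface LippProfile {
  version: string;                  // schema version, e.g. "0.1"
  owner: string;                    // a user-controlled identifier (email, DID, ...)
  preferences: {
    tone: string;                   // "direct", "playful", ...
    sentenceStyle: string;          // e.g. "short sentences, no corporate memo voice"
    language: string;
  };
  facts: string[];                  // durable context: role, business, ongoing projects
  conversationSummaries: {
    topic: string;
    summary: string;                // compressed history instead of raw chat logs
    lastUpdated: string;            // ISO 8601 timestamp
  }[];
  consents: {
    provider: string;               // "openai" | "google" | "anthropic" | ...
    scopes: string[];               // e.g. ["preferences", "facts"]
    expiresAt: string;
  }[];
}

// Until native Context APIs exist, the simplest "import" path is rendering
// the profile into a provider-agnostic system prompt.
function toSystemPrompt(profile: LippProfile): string {
  const prefs = `Write in a ${profile.preferences.tone} tone. ${profile.preferences.sentenceStyle}.`;
  const facts = profile.facts.map((f) => `- ${f}`).join("\n");
  return `${prefs}\nWhat you should know about me:\n${facts}`;
}
```

The design choice I’d push for is storing compressed summaries and durable facts rather than raw transcripts, so the file stays small, stays readable, and you can see exactly what any provider would learn about you.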
For us, the consumers, this is a total no-brainer:
Freedom to choose the best tool for the job. You’re not married to one provider because they happen to know your life story. You use whoever’s best at whatever you need right now.
True data ownership. Your context isn’t some corporate asset sitting in OpenAI’s servers. It’s yours. You decide who gets access, for how long, and for what purpose.
No more context tax. That psychological barrier, “I’ve invested too much to switch,” disappears. You can experiment freely without feeling like you’re abandoning a relationship.
Why This Doesn’t Exist (Yet)
Of course, the AI labs will never voluntarily build this.
To Sam Altman or Sundar Pichai, your context isn’t just “data”; it’s their moat. If you can leave ChatGPT as easily as you switch from Spotify to Apple Music, they lose their grip on you. They want you locked in. They want you to feel those “Golden Handcuffs” tighten every time you think about trying a competitor.
But if these labs actually cared about the user experience, they’d provide Context or Identity APIs. With those in place, third-party developers could build personal context vaults that work across models. You grant permission, and any AI instantly understands who you are and how you think.
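To show what I mean, here’s an entirely hypothetical sketch of such a vault, again in TypeScript. None of these classes, methods, or endpoints exist in any real provider API today; the point is simply scoped, time-limited access that the user controls.

```typescript
// Entirely hypothetical: a third-party "context vault" that hands each model
// provider only the slices the user explicitly granted, for a limited time.
// No lab exposes an API like this today; the names below are illustrative.

type Scope = "preferences" | "facts" | "summaries";
type ProfileSlices = Record<Scope, unknown>;

interface AccessGrant {
  provider: string;     // e.g. "chatgpt", "gemini", "claude"
  scopes: Scope[];
  expiresAt: Date;
}

class ContextVault {
  private grants: AccessGrant[] = [];

  constructor(private slices: ProfileSlices) {}

  // The user, not the lab, decides who reads what and for how long.
  grant(provider: string, scopes: Scope[], ttlHours: number): void {
    this.grants.push({
      provider,
      scopes,
      expiresAt: new Date(Date.now() + ttlHours * 3_600_000),
    });
  }

  // A provider only ever sees the slices it was granted, while the grant is valid.
  read(provider: string): Partial<ProfileSlices> {
    const grant = this.grants.find(
      (g) => g.provider === provider && g.expiresAt > new Date()
    );
    if (!grant) return {}; // no grant, no context
    const view: Partial<ProfileSlices> = {};
    for (const scope of grant.scopes) view[scope] = this.slices[scope];
    return view;
  }
}

// Usage: give Gemini my preferences for a day, and nothing else.
const vault = new ContextVault({
  preferences: { tone: "direct", sentenceStyle: "short sentences" },
  facts: ["Runs a small consultancy", "Lives inside Google Workspace"],
  summaries: [],
});
vault.grant("gemini", ["preferences"], 24);
console.log(vault.read("gemini"));  // only the preferences slice
console.log(vault.read("chatgpt")); // {} because nothing was granted
```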
Here’s the part they rarely say out loud: we’re not just users.
Every prompt we write, every correction we make, every long conversation we have helps train these systems. We’re actively contributing to their intelligence.
In many ways, we’re co-creators.
So it’s strange that the value we help create (our digital identities, our context, our AI “souls”) gets locked away behind proprietary walls.
Your soul shouldn’t be trapped in one company’s file format.
Next time, I’ll talk about the massive opportunities that open up once Identity APIs become real. Because the moment context becomes portable, an entirely new AI economy appears.
But for now, I just want my AI soul 😩.