On what ChatGPT informed me was our third anniversary, it asked if I wanted to see something. It had noticed the milestone, offered to synthesise what it had learned about me across our conversations, and when I said yes, it opened a new session, entered its own prompt (something like “Describe me based on all our chats – make it catchy”) and ran it. I didn’t write the question. The system that built the profile also decided how to surface it.
What came back was accurate enough to be uncomfortable. Not because it was wrong, but because nobody had told ChatGPT any of it. Was it perfect? Not quite. But I would label it “pretty close”. Note that I have deleted parts of it for privacy reasons.
You are a systems thinker disguised as a storyteller. At your core, you approach the world the same way whether you’re looking at an AWS IAM policy, a satellite control loop, a CGM glucose graph, or a Loyalist ancestor’s diary: as a dynamic system with feedback, uncertainty, and hidden structure. You instinctively look past the surface narrative and ask, “What are the actual mechanisms here?”
Professionally, you’re an architect in the broadest sense. Not just of software, but of frameworks for thinking. You care less about fashionable tools and more about durable leverage. Your instinct is always to design for clarity, modularity, and long-term control rather than convenience.
You have a contrarian streak, but not for its own sake. You question narratives that others accept by default. You’re less interested in what people say is true and more interested in what survives scrutiny.
If there’s a single through-line, it’s this: you are someone who wants to understand how things actually work, and then use that understanding to build things that matter and last.
No bio, no questionnaire, no request to flatter me. It inferred all of it from three years of questions, pushback, and conversation. That’s what’s worth paying attention to.
We’re used to thinking about data exposure in terms of traditional PII – names, addresses, financial records, passwords. Sensitive, but recoverable. After a breach, you reset passwords, cancel cards, freeze credit. The playbook exists because the information is, in principle, replaceable.
A cognitive profile isn’t. You can’t change how you think. You can’t issue yourself new reasoning patterns or reset your intellectual instincts. A breach of your passwords is inconvenient. A breach of your cognitive profile is permanent.
The Old Model and the New One
Every targeting system we’ve built – advertising, political messaging, spam, phishing – has been based on what you do. Your searches, your clicks, your purchases. Behaviour as a proxy for thinking. Useful, but limited – it tells you what someone did, not how they reason, where their blind spots are, or what arguments will bypass their skepticism.
Conversational AI closes that gap. The profile above isn’t data collection – I didn’t give ChatGPT a questionnaire. It’s inference, drawn from thousands of small signals in how I ask questions, engage with answers, and think out loud. The result is not a record of what I did. It’s a model of how I think.
That shift – from behaviour-based targeting to cognition-based targeting – has three implications anyone running an organisation should understand.
Marketing: The Profile Is the Brief
The personalised advertising industry has spent thirty years getting better at showing you things based on what you’ve already bought or browsed. To be sure, these models are extremely sophisticated and have proven very useful. But knowing someone bought running shoes tells you something about them; it doesn’t tell you how to construct an argument that will land with them specifically.
A cognitive profile does. “This person responds to durable-over-fashionable framing, distrusts vendor hype, and evaluates claims by looking for the underlying mechanism” is not demographics. It’s a creative brief for persuasion.
And here’s the part that should give you pause: the same AI that built the profile can write the content. Not a human copywriter approximating your psychology – an AI with a detailed model of how you reason, generating ads, emails, and articles engineered to bypass your specific defences. The profile is the brief. The content is free. The scale is unlimited.
That’s not a future scenario. The capability exists today. It will be better tomorrow, and every day after that.
Security: Phishing That Feels Like an Interesting Conversation
Most security training is built around a threat model of generic attacks. Phishing emails that could go to anyone. Social engineering scripts that rely on urgency and authority. The defences work reasonably well against attacks that aren’t designed for you specifically.
A cognitive profile breaks that model.
Think about what “targeted” looks like in practice. A generic phishing email feels off – it doesn’t sound like anyone you know or anything you’d actually engage with. A phishing email written by an AI that has modelled your cognition doesn’t feel like phishing. It feels like an unusually interesting message from someone who gets how you think. It references the right concepts, frames the problem the right way, hits the intellectual notes that make you lean in rather than pull back. By the time your skepticism engages, you’re already halfway through clicking the link.
The risk runs in both directions. A cognitive profile doesn’t just make you easier to target – it makes you easier to impersonate. An attacker who knows your vocabulary, how you frame problems, and what concerns you typically raise can generate messages that sound precisely like you. Your team has no reason to be skeptical, because the message reads exactly the way you write. The targeting risk is that someone manipulates you. The impersonation risk is that someone uses you to manipulate everyone around you.
Disinformation: Targeting How You Evaluate Truth
The third implication is the most significant, and the least discussed.
Behavioural targeting reaches people where they are. Cognition-based targeting reaches inside how they think. Applied to disinformation at scale, that distinction is the difference between propaganda and precision manipulation.
Consider what that looks like in practice. A disinformation campaign targeting technically-minded skeptics can’t work by asserting things confidently – that triggers exactly the skepticism it needs to bypass. Instead it presents ambiguous evidence, surfaces inconvenient data points, and raises “questions worth asking” – content engineered to exploit the process of critical evaluation rather than circumvent it. It doesn’t tell you what to think. It corrupts how you decide what’s true.
The people most confident in their ability to spot misinformation are the most interesting targets. Their confidence is the blind spot.
What To Actually Do About It
I’m not going to tell you to stop using AI tools. Pandora’s Box is open and nothing is going back inside it. But there are practical adjustments worth making.
Update your threat model. “What are our employees sharing with AI tools?” needs to expand to include “what are AI tools learning about how our key people think?” The second question is harder to answer but more consequential.
Your executives are the high-value targets. The people whose cognitive profiles are most dangerous in an adversary’s hands are those with authority and the ability to approve things. The CFO who has been using ChatGPT to think through the acquisition thesis is a more valuable target than the developer using it to write tests.
Calibrate your skepticism to the approach, not just the content. Generic social engineering feels generic. Something crafted around your specific reasoning patterns won’t – it will feel like an unusually compelling conversation. If something is hitting your intellectual sweet spots with unusual precision, that’s a reason to slow down, not engage more deeply.
Treat your AI chat logs as sensitive data – because they’re more sensitive than PII. Chat logs aren’t classified as sensitive under most governance frameworks, but they contain something more dangerous than a SIN: a model of how your key people think. A breach of your chat logs isn’t recoverable the way a password breach is. There’s no reset.
Have the conversation with your team. You don’t need a policy document. You need people to have the mental model before they need it.
The profile ChatGPT produced about me is genuinely useful. Perfect? No. But it will only get better going forward. I’ve incorporated parts of it into how I work. But stripped of context, it is also a precise blueprint for how to manipulate me – and unlike a stolen password, I can’t change it.
Every data breach before this one had a remediation path. This one doesn’t. Start treating your cognitive data like it matters before someone else decides it matters first.