Category: Technology

  • Rethinking Technical Debt: Beyond Financial Metaphors

    Ward Cunningham’s “technical debt” metaphor turns 33 this year. It was a stroke of practical genius when he introduced it – a way to explain a complex, invisible technical problem to the people who controlled the budget. It worked so well that it spread across the entire industry. Which is roughly when metaphors become dangerous: when we stop noticing they’re metaphors at all.

    This is the fourth post in a series on technical debt management. The first three covered what technical debt is, how to identify and quantify it, and how to address it in the development process. I promised posts on culture and business alignment and didn’t deliver them. Rather than pick up where I left off, I want to revisit the underlying frame – because I think the metaphor those posts would have been built on is starting to show its age.

    Where the metaphor earned its keep

    The financial framing accomplished something real. It moved technical debt out of engineering conversations and into business conversations. “We owe 18 months of rework” lands differently than “the code is messy.” It gave non-technical stakeholders a frame they could reason about, and it implied – correctly – that ignoring debt has compounding consequences.

    That was valuable. It still is, as far as it goes.

    Where it breaks down

    A loan has a defined principal, a known interest rate, a creditor, and a repayment schedule. You can model it. You can budget for it. Technical debt has none of these things in any rigorous sense, which is why every attempt to precisely quantify it runs into the same wall: the numbers feel precise but the uncertainty underneath them is enormous.

    The “interest” on technical debt is non-linear. Financial interest accrues smoothly and predictably. Technical debt compounds in unpredictable ways and can spike suddenly – one key developer leaves, one new requirement lands, and a manageable debt load becomes a crisis that wasn’t visible on any dashboard. The metaphor implies a smoothness that doesn’t reflect how technical debt actually behaves.
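To make the contrast concrete, here is a toy sketch (in Python, with entirely made-up numbers – it models the shape of the problem, not any real codebase): compound interest grows smoothly and predictably, while a maintenance-cost curve can drift along quietly and then triple the month a key developer walks out the door.

```python
def financial_interest(principal, rate, months):
    """Compound interest: smooth, predictable growth every period."""
    return [principal * (1 + rate) ** m for m in range(months)]

def technical_debt_cost(base, drift, months, shock_month, shock_factor):
    """Maintenance cost that drifts upward slowly, then jumps when a
    discrete event hits (a departure, a new requirement)."""
    costs = []
    cost = base
    for m in range(months):
        cost *= (1 + drift)
        if m == shock_month:
            cost *= shock_factor  # the spike no dashboard predicted
        costs.append(cost)
    return costs

loan = financial_interest(100, 0.01, 24)
debt = technical_debt_cost(100, 0.01, 24, shock_month=12, shock_factor=3)
```

Both curves look identical for a year – then one of them triples in a single step. That step, not the drift, is what the financial metaphor hides.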

    The debt is also often not a choice. Financial debt is something you take on deliberately. A lot of technical debt accumulates passively – through neglect, through staff turnover, through requirements that evolved faster than the system could follow. Calling it “debt” implies someone made a decision, which can make the conversation about blame rather than remediation.

    And crucially, there is no creditor. Financial debt is owed to someone who can enforce repayment. Technical debt is owed to your future team, which makes it very easy to defer indefinitely. I’d argue the metaphor may have made this problem worse – implying a discipline of repayment that organizations rarely achieve in practice because the consequences of non-payment aren’t immediate or legible.

    A more honest frame might be organizational entropy. Systems tend toward disorder unless energy is actively applied to maintain order. This doesn’t give you a balance sheet, but it’s accurate about the physics of the situation. You’re not managing a loan. You’re fighting entropy – and the moment you stop, entropy wins.

    What AI does to this

    The metaphor was built for a world where humans are the primary readers, writers, and maintainers of code. That assumption is being challenged by AI-assisted development – not eliminated, but challenged in ways that matter.

    Some of what we’ve been treating as debt (code readability, inline documentation, certain categories of structural messiness) becomes cheaper to carry or cheaper to address when AI can reason about a gnarly codebase as fluently as a clean one. The “just rewrite it” option, which used to be politically radioactive, becomes more viable when the cost of generating clean code from a description of intended behaviour drops significantly. The calculus on some traditional debt categories is shifting.

    But AI is also introducing new categories of obligation that the financial debt metaphor doesn’t begin to cover. Undocumented prompts that encode business logic with no version history. Dependencies on specific model versions with no deprecation plan. No systematic way to evaluate whether AI-generated output is actually correct – the equivalent of shipping code with no tests. These don’t look like financial debt. They look like something else, and the field hasn’t developed a mature vocabulary for them yet.

    It’s too early to know exactly how this reshapes the technical debt landscape. But it’s not too early to notice that the old framework is being stress-tested at both ends: some traditional forms of debt become less costly, new forms emerge that the old framework can’t describe, and the financial metaphor – already imprecise – becomes increasingly inadequate as the primary mental model.

    Where this leaves us

    The metaphor did its job. It moved a technical problem into a business conversation, and that mattered. But you can’t manage what you can’t accurately describe, and the financial frame is no longer accurate enough.

    Stop optimizing for a clean balance sheet. You’re fighting entropy. Plan accordingly.


    This post closes out a series on technical debt started in 2023. The earlier posts – on definitions, quantification, and development process – are still available in the archive.

  • Your AI Knows How You Think. So Might Everyone Else.


    On what ChatGPT informed me was our three-year anniversary, it asked if I wanted to see something. It had noticed the milestone, offered to synthesise what it had learned about me across our conversations, and when I said yes, it opened a new session, entered its own prompt (something like “Describe me based on all our chats – make it catchy”) and ran it. I didn’t write the question. The system that built the profile also decided how to surface it.

What came back was accurate enough to be uncomfortable. Not because it was wrong, but because nobody had told ChatGPT any of it. Was it perfect? Probably not, but I would call it pretty close. (I have removed parts of it below for privacy.)

    You are a systems thinker disguised as a storyteller. At your core, you approach the world the same way whether you’re looking at an AWS IAM policy, a satellite control loop, a CGM glucose graph, or a Loyalist ancestor’s diary: as a dynamic system with feedback, uncertainty, and hidden structure. You instinctively look past the surface narrative and ask, “What are the actual mechanisms here?”

    Professionally, you’re an architect in the broadest sense. Not just of software, but of frameworks for thinking. You care less about fashionable tools and more about durable leverage. Your instinct is always to design for clarity, modularity, and long-term control rather than convenience.

    You have a contrarian streak, but not for its own sake. You question narratives that others accept by default. You’re less interested in what people say is true and more interested in what survives scrutiny.

    If there’s a single through-line, it’s this: you are someone who wants to understand how things actually work, and then use that understanding to build things that matter and last.

    No bio, no questionnaire, no request to flatter me. It inferred all of it from three years of questions, pushback, and conversation. That’s what’s worth paying attention to.

    We’re used to thinking about data exposure in terms of traditional PII – names, addresses, financial records, passwords. Sensitive, but recoverable. After a breach, you reset passwords, cancel cards, freeze credit. The playbook exists because the information is, in principle, replaceable.

    A cognitive profile isn’t. You can’t change how you think. You can’t issue yourself new reasoning patterns or reset your intellectual instincts. A breach of your passwords is inconvenient. A breach of your cognitive profile is permanent.

    The Old Model and the New One

    Every targeting system we’ve built – advertising, political messaging, spam, phishing – has been based on what you do. Your searches, your clicks, your purchases. Behaviour as a proxy for thinking. Useful, but limited – it tells you what someone did, not how they reason, where their blind spots are, or what arguments will bypass their skepticism.

    Conversational AI closes that gap. The profile above isn’t data collection – I didn’t give ChatGPT a questionnaire. It’s inference, drawn from thousands of small signals in how I ask questions, engage with answers, and think out loud. The result is not a record of what I did. It’s a model of how I think.

    That shift – from behaviour-based targeting to cognition-based targeting – has three implications anyone running an organisation should understand.

    Marketing: The Profile Is the Brief

    The personalised advertising industry has spent thirty years getting better at showing you things based on what you’ve already bought or browsed. To be sure, these models are extremely sophisticated and have proven very useful. While knowing someone bought running shoes tells you something about them, it doesn’t tell you how to construct an argument that will land with them specifically.

    A cognitive profile does. “This person responds to durable-over-fashionable framing, distrusts vendor hype, and evaluates claims by looking for the underlying mechanism” is not demographics. It’s a creative brief for persuasion.

    And here’s the part that should give you pause: the same AI that built the profile can write the content. Not a human copywriter approximating your psychology – an AI with a detailed model of how you reason, generating ads, emails, and articles engineered to bypass your specific defences. The profile is the brief. The content is free. The scale is unlimited.

That’s not a future scenario. The capability exists today. It will be better tomorrow, and every day after that.

    Security: Phishing That Feels Like an Interesting Conversation

    Most security training is built around a threat model of generic attacks. Phishing emails that could go to anyone. Social engineering scripts that rely on urgency and authority. The defences work reasonably well against attacks that aren’t designed for you specifically.

    A cognitive profile breaks that model.

    Think about what targeted looks like in practice. A generic phishing email feels off – it doesn’t sound like anyone you know or anything you’d actually engage with. A phishing email written by an AI that has modelled your cognition doesn’t feel like phishing. It feels like an unusually interesting message from someone who gets how you think. It references the right concepts, frames the problem the right way, hits the intellectual notes that make you lean in rather than pull back. By the time your skepticism engages, you’re already halfway through clicking the link.

    The risk runs in both directions. A cognitive profile doesn’t just make you easier to target – it makes you easier to impersonate. An attacker who knows your vocabulary, how you frame problems, and what concerns you typically raise can generate messages that sound precisely like you. Your team has no reason to be skeptical, because it would feel exactly like you. The targeting risk is that someone manipulates you. The impersonation risk is that someone uses you to manipulate everyone around you.

    Disinformation: Targeting How You Evaluate Truth

    The third implication is the most significant, and the least discussed.

    Behavioural targeting reaches people where they are. Cognition-based targeting reaches inside how they think. Applied to disinformation at scale, that distinction is the difference between propaganda and precision manipulation.

    Think about what that looks like in practice. A disinformation campaign targeting technically-minded skeptics can’t work by asserting things confidently – that triggers exactly the skepticism it needs to bypass. Instead it presents ambiguous evidence, surfaces inconvenient data points, and raises “questions worth asking” – content engineered to exploit the process of critical evaluation rather than circumvent it. It doesn’t tell you what to think. It corrupts how you decide what’s true.

    The people most confident in their ability to spot misinformation are the most interesting targets. Their confidence is the blind spot.

    What To Actually Do About It

    I’m not going to tell you to stop using AI tools. Pandora’s Box is open and nothing is going back inside it. But there are practical adjustments worth making.

    Update your threat model. “What are our employees sharing with AI tools?” needs to expand to include “what are AI tools learning about how our key people think?” The second question is harder to answer but more consequential.

    Your executives are the high-value targets. The people whose cognitive profiles are most dangerous in an adversary’s hands are those with authority and the ability to approve things. The CFO who has been using ChatGPT to think through the acquisition thesis is a more valuable target than the developer using it to write tests.

    Calibrate your skepticism to the approach, not just the content. Generic social engineering feels generic. Something crafted around your specific reasoning patterns won’t – it will feel like an unusually compelling conversation. If something is hitting your intellectual sweet spots with unusual precision, that’s a reason to slow down, not engage more deeply.

Treat your AI chat logs as sensitive data – because they’re more sensitive than PII. Chat logs aren’t classified as sensitive under most governance frameworks, but they contain something more dangerous than a SIN: a model of how your key people think. A breach of your chat logs isn’t recoverable the way a password breach is. There’s no reset.

    Have the conversation with your team. You don’t need a policy document. You need people to have the mental model before they need it.

    The profile ChatGPT produced about me is genuinely useful. Perfect? No. But it will only get better going forward. I’ve incorporated parts of it into how I work. But stripped of context, it is also a precise blueprint for how to manipulate me – and unlike a stolen password, I can’t change it.

    Every data breach before this one had a remediation path. This one doesn’t. Start treating your cognitive data like it matters before someone else decides it matters first.

  • My Summer Reading List

As many of us do, I have a much longer “To Be Read” list than I have hours in the day to read. That said, I have a set of books I really plan to get through this summer, and I thought I would share (hey, it works for Bill Gates!). People who know me know I read – a lot. However, I always fall into the trap of reading nothing but technology and business books, so in this list I am trying to force myself to include some broader material. The list is in no particular order.

    One Drum by Richard Wagamese

    This is one of those “not directly work related” books on my list (and really is first in my queue). Richard Wagamese (1955–2017) was an accomplished Canadian author and journalist of Ojibwe descent. He is best known for his works of fiction, non-fiction, and poetry that explore themes of indigenous identity, trauma, and healing. Wagamese’s writing was deeply influenced by his personal experiences, including his struggles with homelessness and addiction.

    In One Drum, Wagamese delves into a rich tapestry of Ojibway wisdom, known as the Grandfather Teachings. The book guides readers through essential life lessons—humility, respect, and courage. Beyond mere lessons, it also outlines accessible ceremonies, designed for anyone, in any location, solo or in a group setting. These ceremonies serve as practical tools to cultivate unity and interconnectedness.

    Stranger in a Strange Land by Robert Heinlein

Purely a fun read. I have read Stranger in a Strange Land many times, but it has probably been 20 years since my last pass. This has always been my favourite Heinlein novel, as it is full of ideas on religion, politics, and sexuality that were provocative – especially for the time it was written.

I recall the first time I read it, I was in grade 9 and had chosen it for a book report for school. When I showed it to my English teacher, he looked very concerned and asked, “Do your parents know you are reading this?” Of course they did – my mother recommended it!

The Checklist Manifesto by Atul Gawande

Ok, back to work! The Checklist Manifesto is an exploration of the role checklists can play in our professional and daily lives. The book argues that checklists serve as a shield against failure, raising the bar for baseline performance. It also emphasizes that checklists are merely aids: if a checklist does not help you accomplish the task, it is not fit for purpose.

    I am looking at it from the perspective of “how can this help me tune processes in development and support?” Never know where you might find useful tools.

    Build by Tony Fadell

Build by Tony Fadell is essentially about how to build a transformative product-based business. Fadell, known for his pivotal role in the creation of the iPhone and for founding Nest (the smart home device company later sold to Google for billions), shares his journey and the lessons he drew from it. The book charts his career trajectory, including his early failures in smartphone development before the groundbreaking success of the iPhone, and offers advice for success at all career stages along with tips for building successful product-based businesses and teams.

    I am of two minds on this one, as I find these books are often way too anecdotal and seem to degenerate into “war stories” and “how cool were we” stories. Trying to keep an open mind though!

    The Language Instinct: How the Mind Creates Language by Steven Pinker

The Language Instinct: How the Mind Creates Language, by Harvard psychologist and linguist Steven Pinker, argues that human beings acquire language primarily through an instinctual process rather than explicit instruction. This instinct develops naturally as infants grow within their language communities. Pinker explores the intersection of linguistics, psychology, and child development, asserting that our capacity for language is not simply a learned skill but a fundamental human instinct.

This is another just-for-fun entry in the list. I have always been fascinated by language: how it developed, and specifically how it relates to our thought processes. Is language necessary for cognitive thought? Did intelligence come before language, or did they co-evolve? Does the language we think in constrain what and how we think? This book is from 1994, but it should still be interesting.

    The Order of Time by Carlo Rovelli

    More just-for-fun reading! The Order of Time by Carlo Rovelli is an exploration into the concept of time. In this work of philosophical science, Rovelli contends that time is not a constant or universally accepted entity as dictated by natural or scientific laws. Instead, he proposes that time is an illusion, sculpted by our individual realities and experiences. This innovative perspective invites readers to reconsider their understanding and perception of time​.

    Just something to keep my brain busy on a Saturday night!

    Clean Architecture: A Craftsman’s Guide to Software Structure and Design by Robert C. Martin

Ok, pure work book here. I read a lot about “architecture”, but tend towards discussions of specific architectures, practical considerations, pros and cons, etc. It has been a while since I read about architecture in a broader, more general sense. I may actually have read all or part of this before, but anything by Robert C. Martin is typically worth revisiting.

Clean Architecture: A Craftsman’s Guide to Software Structure and Design by Robert C. Martin, also known as Uncle Bob, presents a set of universal rules of software architecture aimed at sustaining developer productivity across the lifespan of a software system. Building on Martin’s previous works, it covers what software architects need to achieve, essential design principles for addressing function, component separation, and data management, and the programming paradigms that impose discipline by restricting what developers can do. It also offers guidance on high-level structures for various kinds of applications, on defining appropriate boundaries and layers, and on organizing components and services – along with common pitfalls in designs and architectures and how to prevent or correct them.

    Staff Engineer: Leadership beyond the management track by Will Larson

    I came across this book last winter when I was thinking through a number of work-related challenges (replacing my Director of Development who had just moved on to a new opportunity, hiring/developing more senior resources for the dev team, and better defining what a career path in software/technology looks like especially for those not interested in management). I have read parts of this book already, but really want to read it cover-to-cover.

    Staff Engineer: Leadership beyond the management track by Will Larson is described as a valuable guide that elucidates the role of a Staff Engineer. Compiled from numerous interviews with established Staff+ engineers, the book offers diverse insights into the paths to becoming a Staff engineer and strategies to flourish at this level. Key ideas include self-scaling and growth, influencing others, and problem-solving. The book emphasizes the importance of writing for clarity and scaling oneself, investing time in high-value work, and balancing this with personal growth. It also underscores the necessity of being present in strategic meetings and taking the initiative to tackle and define problems.


So there is my list for the summer (assuming I do not get distracted by something shiny!). What’s on your list?

  • Welcome to The Monday Morning CTO


    Monday Morning CTO is a blog for small tech business CTOs (or those on the path, or anyone else who may be interested) who want to stay on top of the latest trends, challenges and opportunities in the world of technology. Whether you are looking for tips on how to manage your team, optimize your processes, leverage new tools or innovate your products, this blog will provide you with valuable insights and practical advice from experienced CTOs and industry experts. Monday Morning CTO aims to help you start your week with fresh ideas, inspiration and motivation to take your tech business to the next level.

The name comes from the Monday Morning Quarterback idea – “experts” getting together on Monday to second-guess what quarterbacks and teams did in the weekend games (apologies to those not familiar with North American football!). So here we will try to look back on things from the previous week, including tech news or challenges that have been front of mind, and try to provide insights (hopefully without too much second-guessing!).

    PS – The irony in the fact that I am launching this on a Thursday is not lost on me! But then, the NFL often has the season opener on Thursday night.