  • Beyond words

    What is the meaning of putting these words down? Is it an attempt to find some internal core, a feeling that must be rendered into language to be understood? Why can’t moods clarify themselves and reveal their deeper meaning without this intermediary? Language comes up short so many times.

    I suspect that with large language models, we will have a new type of language and communication—and hence, a new type of human experience. What I’m about to say may sound like dystopian sci-fi, but if we extend current AI with mass surveillance and some form of continual learning, it will create a personalized warden, marketer, guru, coach, parent, and perhaps even a god for each one of us. In such a world, maybe the only truly free human expression will be a silence that can only be deciphered by the soul in its midst. It would be a secret form of communicating with yourself, first and foremost, before we discover what a new language of shared human experience beyond words might be.

    I guess what I’m trying to say is that LLMs, taken to their computational and logical extreme, will be the end of this era of language. What may emerge naturally from this mix of societal and political upheaval, as AI becomes omnipresent, is a new dimension of communication and connection between humans—one that would make the current era seem like the telegraph era feels to us today. It will not be about the efficiency and robustness of the channel, the medium, or the message. It might just be about how it remains, always, a few major steps away from being figured out computationally.

    If Roger Penrose is right that consciousness is not computational, then the very means by which we connect as seemingly separate instances of this larger, unified consciousness must eventually evolve to a dimension that computation can never capture.

    And yet, we live in a time when AlphaFold has effectively solved protein structure prediction, precisely the kind of natural phenomenon that seemed to lie at the very limit of what is computationally possible to recreate in silicon.

    AI will force humanity to urgently redefine almost every aspect of what makes us unique—our very identity as a species. The greatest irony is that our god-like ability to create new technological and scientific breakthroughs is the very thing leading to a deeper existential crisis about our own nature. It turns out that when Nature or God gives you the ability to be almost anything and everything, you risk feeling like nothing specific. It leads to a crisis of both collective and individual identity.

    Across the ages, the deepest thinkers of the human condition have returned, again and again, to a similar conclusion: only one thing can be said with confidence about human nature. We seem cursed to always hang in the middle—too aware to be simply animal, but too impotent, unsure, and mortal to be God.

    (P.S. I keep returning to Ernest Becker and The Denial of Death so organically that I think it’s high time I re-read it and write about him.)

  • AI coding frustrations

    There’s a strange bargain being struck in the world of AI. Companies are launching powerful tools in what feels like a perpetual beta, with developers and early adopters serving as the de facto QA team. We hear endlessly about the power of these new coding assistants, yet so many users are caught in a frustrating loop of one step forward, two steps back. Features like Claude Code’s Sub Agents are added to amplify their capabilities, but without perfect execution, they often just create a high-volume mess of unhelpful code.
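
    For what it’s worth, the feature itself is simple to poke at. In Claude Code, a subagent is defined as a Markdown file with YAML frontmatter placed in the project’s .claude/agents/ directory. A minimal sketch follows; the reviewer name, tool list, and instructions here are my own invention, and the exact frontmatter fields may vary between versions.

        ---
        name: code-reviewer
        description: Reviews diffs for likely bugs and style issues before commit.
        tools: Read, Grep, Glob
        ---

        You are a careful code reviewer. Examine the changes you are given,
        flag likely bugs and style problems, and explain each finding briefly.
        Do not modify any files yourself.

    The body of the file acts as the subagent’s system prompt, and a tightly scoped tool list is the main lever for keeping an agent like this from producing the high-volume mess described above.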

    In this race for market and mind share, it feels as though the Overton window for what is considered “production-ready” has been fundamentally shifted. We have tacitly agreed that it’s okay for our most advanced tools to be non-deterministic—for them to be confidently wrong. This has created an unprecedented dynamic where the burden of managing a tool’s inherent flaws is passed from its creator to its user. I can’t think of another major technology wave where the responsibility to use a product correctly—to work around its core deficiencies—was so squarely placed on the customer.

    As we release ever more powerful models, the immediate risk isn’t just a catastrophic failure, but a more pervasive and corrosive frustration for the people trying to integrate them into their work.

    Out of this mess, a new cottage industry is being born. Communities of practice are forming to share tips and workarounds. A massive opportunity has opened up for consultants, service providers, and content creators who can help others make sense of the chaos. In the short term, the people who will gain an edge are those who persistently experiment, learn from the collective, and keep tweaking these tools until they behave. The tech giants will surely improve their models over time, but we are still some ways from a truly seamless experience.

    This same friction between expectation and reality is why the first wave of AI-native hardware startups failed so miserably. Their products collided head-on with what we demand from a physical device: reliability. A piece of hardware is supposed to just work. Accepting AI-native devices would require a cognitive upheaval from consumers, a new willingness to accept a certain level of failure from our gadgets. We don’t tolerate this from our phones, and the inconsistency of early voice assistants like Alexa bred a deep-seated distrust that they never fully overcame.

    So, how do we, as a consumer base, become okay with the non-deterministic nature of AI? No one has a clear answer. It’s a dealbreaker in coding, in finance, and in countless other B2B scenarios where precision is non-negotiable. The only places this unpredictability is celebrated are in creative applications—image and text generation—where the hallucinations are reframed as serendipity.
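
    To make “non-deterministic” concrete: even when every knob the APIs expose is pinned down, identical runs are not guaranteed. Below is a minimal sketch using OpenAI’s Python client; the model name is illustrative, and the seed parameter is documented as best-effort reproducibility, not a guarantee.

        # Minimal sketch of the determinism problem (model name illustrative).
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def ask(prompt: str) -> str:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
                temperature=0,  # remove sampling randomness as far as the API allows
                seed=42,        # best-effort reproducibility, not a guarantee
            )
            return resp.choices[0].message.content

        # The same prompt, twice; the outputs can still diverge, which is
        # exactly the property finance and other precision-bound buyers reject.
        a = ask("Write a function that validates an IBAN.")
        b = ask("Write a function that validates an IBAN.")
        print("identical:", a == b)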

    When you look at the mountain of capital that has poured into the AI market over the last five years, the path forward seems like a combination of storytelling, freebies, influencer marketing, and a general fake-it-till-you-make-it attitude. For the valuations to make sense, the industry has to solve the problem of reliability. It must meet the deterministic expectations of the enterprise and the average consumer, or risk the entire boom ending in another long AI winter before fundamental breakthroughs such as continual learning come about. All in all, what a time to be alive!

  • The ultimate outsource

    I’ve been thinking about what it means to abstract the process of learning to an AI. You simply state the goal you want, and the entire journey of figuring it out is left to the LLM. This applies to any creative project, whether it’s coding, writing, or design. It’s as if the end product justifies the means, and the means is whatever the AI decides to do.

    This path leads to a world where, in an attempt to push more and more of the process below AI’s abstraction layer, humans are relegated to a very thin slice at the top. We are supposed to focus only on what we want, not how or why. A little of the why, perhaps. But a core part of the joy in doing anything is the learning process itself—a journey of understanding that this outsourcing can never fully capture.

    We are already seeing the cracks in the promise that AI will free us from mundane tasks so we can pursue more human, creative work. Instead of liberation, the expectation is simply more. If you’re a developer, you’re now expected to be a 50x developer. If you’re a creative writer, you’re expected to produce more content. The list goes on. The implicit message is, “Don’t spend time thinking about the how, or learning things deeply—just deliver more output.” It is the classic capitalist impulse to extract more, and it’s sad to see that, yet again, a new technological paradigm is being bent toward this purpose. Humans will be expected to produce more and more, while focusing less on the profound experience of what it means to truly engage with their work.

    I suspect this is creating a new class hierarchy between those who are in the know and those who are not. Take a highly skilled software developer who, through decades of trial and error, failure and success, has cultivated a deep knowledge of how to program well. The way they feel, I suspect, is a kind of in-group, out-group defensiveness. They see these newcomers who think they can just prompt an AI to get the output that they themselves struggled so hard for. This isn’t real, they might think. This isn’t earned. This isn’t how it’s supposed to be.

    And in a sense, they are right. What they feel is their hard work, their creativity, and their entire learning process being diminished. They know deep down how valuable that journey was as a human experience, even when it was full of suffering and hardship. They see newcomers as people taking a shortcut, emulating a mastery they have not earned. But perhaps it’s just two different ways of looking at the same thing. What I feel is missing from the conversation is a focus on the human, emotional, and psychological aspects of what it will mean to do something in the world to come.

  • Build >> Buy?

    One of the most significant impacts of AI will be the proliferation of custom-built tools and apps. While this was technically possible before, the friction for non-developers was simply too high. I believe we’re at the beginning of a genuine democratization of software, where curious individuals can finally build tools for their personal needs, perhaps extending them to a few people around them.

    This shift will eventually transform the classic “build vs. buy” calculation for businesses, starting with SMBs. For years, the default was to buy. Why would a small business spend precious time and energy building non-core functions? They bought, which led directly to the massive SaaS sprawl we see today. That tide is turning. For SMBs, the new incentive is to kill the bloat by building lean software that perfectly fits their use cases. Of course, common productivity tools will remain, but the custom plumbing and specific AI applications will increasingly be built in-house.

    As companies mature and complexity grows, this trend will only accelerate. The need for custom software will increase, but meeting that need won’t be the time-and-energy sink it once was, nor will it fall solely on the existing developer team. Instead, everyone will be expected to contribute a fraction of their time to building the tools they need to be more effective and successful in their own roles.

    My personal exploration has been focused on that first piece: building the tools I’ve always wanted but that were either locked behind expensive SaaS subscriptions or behind a $10/month fee that made me think, “I can build this myself, why should I pay?” For me, experimenting with AI for coding has been a revelation. Now that I’ve committed to a $100 monthly Claude Code subscription, I suspect I’ll be building a lot more of these tools myself.

    Let’s see where it goes.