2024-11-22
read time: 7 min
cat ~/posts/LLM/dr-singularity.md

The War Room

Like many seasoned engineers, I entered 2024 in a defensive crouch. Having worked on Bitbucket Cloud for nearly a decade, I've helped build a system that hosts 50 million repositories for millions of developers worldwide. I take pride in building tools that increase human productivity. So when the AI revolution began storming the gates of our industry, my first instinct was visceral - no, thank you.

The headlines weren't helping. Every day brought new proclamations of AI replacing developers, making traditional engineering obsolete, or potentially ending humanity. Fellow engineers split into camps: the enthusiasts promising digital utopia, the doomsayers predicting apocalypse, and the skeptics dismissing it all as snake oil.

The Human Element

My turning point came not with a bang but with a whimper. I struck up a conversation with a former colleague whose opinions I hold in particularly high regard and who, more importantly, I knew broadly agreed with me on the state of AI: everyone involved is annoying, and the tech is usually over-hyped.

On this particular afternoon, I was venting about a tedious data transformation task that was taking me hours in Google Sheets when, much to my astonishment, he confessed that he had been using Claude for tasks exactly of this nature.

He pointed out that it was actually quite good at creating hyper-specific one-off/throwaway apps that replaced mucking around in a spreadsheet all day. But where he really sold me was how well it could generate interactive visualizations for problems he was having a hard time wrapping his head around (caveat emptor: it also sometimes lies - see my later admonishments about generating content too far outside your own realm of expertise).

I had been working on a tree-sitter grammar for generating CODEOWNERS parsers and was flummoxed by a shift-reduce conflict. Not having abided by what most engineers would consider a sane schooling trajectory - I had just stopped going at around the 11th grade - I had no real formal knowledge of LR parsing. So I fired up my credit card, asked Claude my very first question, and was greeted with this interactive visualization. Mental model established - I was off and running.
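For the uninitiated: a shift-reduce conflict arises when an LR parser reaches a token and can't decide whether to keep consuming input (shift) or close off the rule it's building (reduce). The textbook case is the dangling else, which I'll use here as a stand-in - this is a hypothetical toy grammar, not my actual CODEOWNERS one:

```javascript
// grammar.js - illustrative only. Given "if a if b x else y", the generated
// parser can't tell whether "else" belongs to the inner or outer "if":
// shift it (attach to inner) or reduce the inner if_statement first?
module.exports = grammar({
  name: 'toy',
  rules: {
    source_file: $ => repeat($._statement),
    _statement: $ => choice($.if_statement, $.identifier),
    // prec.right tells tree-sitter to prefer shifting, attaching the
    // "else" to the nearest "if" - which silences the conflict.
    if_statement: $ => prec.right(seq(
      'if', $.identifier, $._statement,
      optional(seq('else', $._statement)),
    )),
    identifier: $ => /[a-z]+/,
  },
});
```

Without the `prec.right`, `tree-sitter generate` refuses to build the parser and reports the ambiguity - which is exactly the kind of error message that's opaque until someone hands you the mental model.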

Mounting the Bomb

With some trepidation, I asked Claude to help with creating an app that could solve my data transformation issue - matching mentors with apprentices, showing at a glance who was missing a mentor and/or apprentice, and allowing me to quickly shuffle pairings around.

Within minutes, I had a working React (:barf:) app that did all of the above and hashed the user data and pairings into the URL, which allowed me to share the current state without a backend. It was even able to generate a "Copy to Confluence" button that allowed for copy/paste state management of the Confluence table. The code was buggy, not production ready, and god damn if it wasn't absolutely fine enough for the task at hand.
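The no-backend sharing trick is worth spelling out: serialize the app state, encode it, and stash it in the URL fragment, so the link itself is the database. A minimal sketch (my own names, not the actual app's; `Buffer` makes it runnable in Node, while a browser build would use `btoa`/`atob` or `TextEncoder` instead):

```javascript
// Serialize state into a URL-safe string for the fragment (#...).
function encodeState(state) {
  const json = JSON.stringify(state);
  return Buffer.from(json, 'utf8').toString('base64url');
}

// Reverse the process when the app loads and finds a fragment.
function decodeState(hash) {
  const json = Buffer.from(hash, 'base64url').toString('utf8');
  return JSON.parse(json);
}

// Hypothetical pairing state for the mentor/apprentice app.
const pairings = { mentors: { alice: 'bob' }, unmatched: ['carol'] };
const shareUrl = 'https://example.com/app#' + encodeState(pairings);

// Anyone opening shareUrl reconstructs the exact same view - no server.
const restored = decodeState(shareUrl.split('#')[1]);
```

Fragments never leave the browser in HTTP requests, which is part of why this works so well for throwaway tools - the "state" rides along with the link and no backend ever sees it.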

This wasn't the Skynet-style takeover I'd feared. It was more like having access to an infinitely patient pair-programming partner who had somehow memorized every piece of API documentation ever written. But crucially, one whose suggestions I was still qualified to evaluate.

Mutual Assured Production

Over the next few weeks, I found myself expanding the scope of our collaboration. Each project revealed new possibilities:

The Doomsday Gap

Like the nuclear technology of the Cold War, AI isn't going back in the box. But there's a vast gulf between our fears and reality. AI isn't replacing engineers - it's transforming how we work, much like how compilers and high-level languages transformed programming generations ago.

The industry wastes endless energy debating whether AI coding assistants are equivalent to junior, mid-level, or senior developers. This spectacularly misses the point. The real question isn't the AI's "skill level" - it's whether you have sufficient expertise to evaluate its output. Having an AI confidently generate code for a domain you barely understand isn't empowering - it's reckless. It's like copying code from Stack Overflow without understanding the implications - except the AI can generate far more convincing mistakes.

The key insights:

The Survival Plan

For fellow engineers wondering how to approach this brave new world, I suggest:

The future isn't about AI replacing developers. It's about developers who understand AI collaborating with developers who don't. The real question isn't whether to embrace AI, but how to use it responsibly and effectively.

Sir! I have a plan!
