Like many seasoned engineers, I entered 2024 in a defensive crouch. Having worked on Bitbucket Cloud for nearly a decade, I've helped build a system that hosts 50 million repositories for millions of developers worldwide. I take pride in building tools that increase human productivity. So when the AI revolution began storming the gates of our industry, my first instinct was visceral - no, thank you.
The headlines weren't helping. Every day brought new proclamations of AI replacing developers, making traditional engineering obsolete, or potentially ending humanity. Fellow engineers split into camps: the enthusiasts promising digital utopia, the doomsayers predicting apocalypse, and the skeptics dismissing it all as snake oil.
My turning point came not with a bang but with a whimper: I struck up a conversation with a former colleague whose opinions I hold in particularly high regard, and who, more importantly, I knew broadly agreed with me on the state of AI - everyone involved is annoying and the tech is usually over-hyped.
On this particular afternoon, I was venting about a tedious data transformation task that had been taking me hours in Google Sheets when, much to my astonishment, he confessed that he had been using Claude for exactly this sort of task.
He pointed out that its ability to create hyper-specific, one-off/throwaway apps - replacing a day of mucking around in a spreadsheet - was actually quite good. But where he really sold me was how well it could generate interactive visualizations for problems he was having a hard time wrapping his head around (caveat emptor: it also sometimes lies; see the later admonishments about generating content too far outside one's own realm of expertise).
I had been working on a tree-sitter grammar for generating CODEOWNERS parsers and was flummoxed by a shift-reduce conflict. Not having abided by what most engineers would consider a sane schooling trajectory - I had just stopped going at around the 11th grade - I had no real formal knowledge of LR parsing. So I fired up my credit card, asked Claude my very first question, and was greeted with this interactive visualization. Mental model established - I was off and running.
With some trepidation, I asked Claude to help me create an app that could solve my data transformation problem: matching mentors with apprentices, showing at a glance who was missing a mentor and/or apprentice, and allowing me to quickly shuffle pairings around.
Within minutes, I had a working React (:barf:) app that did all of the above and hashed the user data and pairings into the URL, which let me share the current state without a backend. It even generated a "Copy to Confluence" button that allowed for copy/paste state management of the Confluence table. The code was buggy, not production ready, and god damn if it wasn't absolutely fine enough for the task at hand.
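The no-backend sharing trick is simple enough to sketch: serialize the state, base64-encode it, and stash it in the URL fragment. Here's a minimal Python sketch of the idea (the actual app did this in JavaScript; the function and field names here are illustrative, not the app's):

```python
import base64
import json
from urllib.parse import urlsplit

def encode_state(state: dict) -> str:
    # Compact JSON, then URL-safe base64 so it survives inside a fragment.
    raw = json.dumps(state, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")

def decode_state(fragment: str) -> dict:
    return json.loads(base64.urlsafe_b64decode(fragment.encode("ascii")))

# Anyone who opens the link reconstructs the exact same pairings - no backend.
pairings = {"pairs": {"carol": "alice"}, "unmatched": ["dave"]}
url = "https://example.invalid/matcher#" + encode_state(pairings)
assert decode_state(urlsplit(url).fragment) == pairings
```

The same round trip works in the browser with `JSON.stringify` and `btoa` (modulo some care around non-ASCII characters), which is presumably roughly what the generated React code did.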
This wasn't the Skynet-style takeover I'd feared. It was more like having access to an infinitely patient pair-programming partner who had somehow memorized every piece of API documentation ever written. But crucially, one whose suggestions I was still qualified to evaluate.
Over the next few weeks, I found myself expanding the scope of our collaboration. Each project revealed new possibilities:
The Mentorship Matcher: Our organization needed to connect mentors with apprentices across multiple teams and time zones. Instead of drowning in spreadsheets and manual matching, I built a web application that intelligently pairs people based on skills, interests, and availability. AI helped with everything from algorithm design to frontend code.
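A matcher like this doesn't need anything fancy: score every candidate pair, then greedily take the best viable ones. A hypothetical sketch of that approach (the data model and scoring here are my own invention, not the actual app's):

```python
from itertools import product

def score(mentor: dict, apprentice: dict) -> int:
    # A pairing is viable only if topics AND availability both overlap.
    topics = len(mentor["skills"] & apprentice["interests"])
    slots = len(mentor["slots"] & apprentice["slots"])
    return topics * slots

def match(mentors: dict, apprentices: dict) -> dict:
    # Greedy: take the highest-scoring viable pair, retire both, repeat.
    candidates = sorted(
        ((score(m, a), m_name, a_name)
         for (m_name, m), (a_name, a) in product(mentors.items(), apprentices.items())),
        reverse=True,
    )
    pairs, taken = {}, set()  # names are assumed unique across both groups
    for s, m_name, a_name in candidates:
        if s > 0 and m_name not in taken and a_name not in taken:
            pairs[a_name] = m_name
            taken |= {m_name, a_name}
    return pairs

mentors = {
    "alice": {"skills": {"rust", "parsing"}, "slots": {"mon", "wed"}},
    "bob": {"skills": {"react", "css"}, "slots": {"tue"}},
}
apprentices = {
    "carol": {"interests": {"rust"}, "slots": {"wed", "fri"}},
    "dave": {"interests": {"css"}, "slots": {"tue"}},
}
assert match(mentors, apprentices) == {"carol": "alice", "dave": "bob"}
```

Whoever is absent from the result is your at-a-glance "missing a mentor/apprentice" list, and reshuffling a pairing is just editing the dict - exactly the workflow the spreadsheet made painful.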
Smart But Scattered Web App: As a parent of a 7-year-old with ADHD, I'd been struggling to implement strategies from the book "Smart But Scattered". I created a web application that gamifies executive function tasks and tracks progress. The AI helped translate psychological concepts into engaging software features. We have used it every morning for the past week, and it really seems like it's working! (Though I might have to remove "Pick dinner" as a reward if I don't want to eat Chick-fil-A every day for the rest of my life.)
Terminal-velocity: A static site generator written in Rust that integrates LLM capabilities. Here, AI wasn't just a development tool - it became a core feature, helping users generate and refine content while maintaining the performance benefits of static sites. (Currently working on Confluence publishing so that I don't have to maintain two copies of this document.)
Pulldown-html-ext: A not-quite-published-yet Rust crate extending pulldown-cmark's HTML rendering. It will be used to drastically improve the performance and feature set of Bitbucket Cloud's markdown rendering. AI helped navigate the complexities of Rust's type system and unsafe-code requirements while maintaining performance and safety.
Daily Augmentation: Countless smaller tasks - data analysis, interactive visualizations, documentation writing - all tasks where AI amplified rather than replaced human capability.
Conversational sounding board: Last but definitely not least. I have... well, let's just call it a short attention span. Little flits of ideas pop into my head constantly, and it feels incredibly validating to say them to someone. "Hey, I think maybe there are some game-theoretical issues with the incentive structures created by our peer review system" isn't met with incredulity or a debate of any kind; rather, it's met with "That's an interesting observation, here are some books and papers you could read on the subject." (Very extensive blog post coming soon; I promise.)
Like the nuclear technology of the Cold War, AI isn't going back in the box. But there's a vast gulf between our fears and reality. AI isn't replacing engineers - it's transforming how we work, much like how compilers and high-level languages transformed programming generations ago.
The industry wastes endless energy debating whether AI coding assistants are equivalent to junior, mid-level, or senior developers. This spectacularly misses the point. The real question isn't the AI's "skill level" - it's whether you have sufficient expertise to evaluate its output. Having an AI confidently generate code for a domain you barely understand isn't empowering - it's reckless. It's like copying code from Stack Overflow without understanding the implications - except the AI can generate far more convincing mistakes.
The key insights:
For fellow engineers wondering how to approach this brave new world, I suggest:
The future isn't about AI replacing developers. It's about developers who understand AI collaborating with developers who don't. The real question isn't whether to embrace AI, but how to use it responsibly and effectively.
Sir! I have a plan!