Dr. Skynet or: How I Learned to Stop Worrying and Love the Bot
Okay, let’s set the stage for a moment.
It’s November 2022, and you’re a young data engineer, early in your career. COVID hit hard – but you were quick on your feet and transitioned into a remote job. Data was the field of choice; you’d always wanted to get into data science. But analytics and engineering were more attainable without an advanced degree, so you went with that.
You log onto work one morning and Slack is abuzz with chatter. You don’t look very closely. You’re still waking up. Instead, you get your morning coffee ready and sign on to a weekly check-in with your coach/mentor/sort-of-supervisor. The first words out of his mouth are:
“Have you heard about this ChatGPT thing?”
Oh how quickly the world changes. I brought this up because – before I get into the meat of today’s post – I’d like to ask: where were you when you first heard about ChatGPT/AI/LLMs? How did you hear about it? And perhaps, most importantly…
Remember how much it sucked?
At least, at the time. I remember two main reactions from that moment. First: ‘Wow! This is crazy!’. And second: ‘Oh, so it’s just a fancy autocomplete’.
At the time, my company was ambivalent towards ChatGPT. On one hand, it was a really interesting commercialized example of a breakthrough in data science. On the other hand, it struggled to write even a single line of usable code.
Advancements would come, of course. Persistent context. Increased parameters. Agent orchestration. But for years, I think the public perception of AI remained centered on this incongruity.
AI was neat, but it wasn’t perfect.
Now it’s May of 2026. When I look online, the discourse has pretty much sorted itself into two camps. You’re either an AI evangelist or an AI abolitionist, and whatever rises to the top of your feed is a caricature of one or the other.
Initially, I was pumped about AI. But as I learned more about it and saw what it was being used for, that excitement turned to disenchantment. I felt like the mythical ‘Big Data’ that I had been thrilled about as a college student was being used to create… well, a monster.
The big jump in AI-generated art was what really pushed me towards being anti-AI. Art is so inherent to the human experience, and such a vital expression of the soul – using AI to prompt your way to a piece felt like a bastardization of that. Real work gets skipped along the way.
But this also had an effect on how I perceived art, and made me weigh aesthetic criteria less. Sure, I’d like for things to look nice. But nowadays, with AI, anything can look nice. So now what matters more to me when it comes to art is how it makes me feel. There’s some intimacy there that I don’t think AI is all that great at manipulating… yet.
A few months ago, though, I realized I needed to revisit AI tooling in general. I needed to see where things were at – what were people using and how were they using it? And I realized something very concerning:
I was falling behind. Not a little bit. A lot.
This is to be expected with any technology. When an innovation really sticks, the early adopters find success (actually, the earliest adopters fall flat on their faces, but that’s a whole other blog post). I realized that I had gotten so wrapped up in the discourse around AI, I hadn’t even noticed when it left me in the dust.
So I decided to start leaning in on AI, at least for technical stuff (I still don’t think AI is a good creative generation tool). I tried to go with a provider I respected and that appeared to have a clear ethical standpoint to their service.
It’s ultimately window dressing though. Here’s what I’m trying to say today: if you aren’t exploring how AI can impact your work, you are late. Because it will impact your work, whether you get on board or not. I mean this genuinely and with a small sense of urgency.
Unfortunately, technological innovation does not happen in a vacuum. Time and time again, it costs people their jobs and their livelihoods. Your stance on AI is not just a quirk of your identity; it will most likely be a major factor in the direction of your life.
My recommendation is this: really think critically about how you’d like to respond to new technology. Don’t model yourself after the sycophants you see on social media – think of the people in history you really respect and look up to.
For the pro-AI folks out there: maybe stop trying to stick AI everywhere. Maybe focus a bit more on safety, security, and ethics. Maybe consider developing AI governance strategies instead of plugging straight into the prod environment.
For the (hardcore) anti-AI folks: maybe don’t buy into the popular sentiment as much, no matter how righteous it makes you feel. Maybe think about trying AI out for something technical – see where it fits and where it doesn’t.
And for everyone else… start asking questions. The answers might surprise you.