The Robots are Coming, the Robots are Coming!


Every. Single. Day. My LinkedIn feed is at war with itself. We have one camp proclaiming the end of software engineers. We have another camp proclaiming that AI cannot even complete a simple code challenge yet.
(I still wonder what that “simple” code challenge is…)
Tempting Fate
Curious—and maybe a little afraid—to know how much time I had left before yielding to the robot overlords, I decided to … gasp … vibe-code.
I asked Claude to write a simple Lambda function. I was certainly impressed by how quickly it returned a functional body of code, but I was immediately concerned by the lack of any form of security.
I asked it to add cross-site request forgery (CSRF) protection. It did — but as a single token stored in an environment variable and shared among all requests.
I told it that this was insecure, and after apologizing profusely, it created a hash based on a session ID — but still maintained the same token for the duration of the session.
Except… sessions are typically associated with front-end websites, and it had written a function that responded with JSON.
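To make the problem concrete, here is a rough sketch in Python. It is not the code Claude actually produced; the handler shape, header name, and session store are stand-ins. It contrasts a single shared token with a token minted per session.

```python
import hmac
import os
import secrets

# Insecure pattern: one token read from the environment and shared by every
# caller. Anyone who sees it once can forge requests on behalf of everyone.
SHARED_CSRF_TOKEN = os.environ.get("CSRF_TOKEN", "")

def insecure_check(event):
    supplied = event.get("headers", {}).get("x-csrf-token", "")
    return hmac.compare_digest(supplied, SHARED_CSRF_TOKEN)

# Safer pattern for a browser-backed session: a random token minted per
# session, stored server side, and compared on every state-changing request.
def issue_token(session_store, session_id):
    token = secrets.token_urlsafe(32)
    session_store[session_id] = token
    return token

def check_token(session_store, session_id, supplied):
    expected = session_store.get(session_id, "")
    return bool(expected) and hmac.compare_digest(supplied, expected)
```

Even the "safer" version only makes sense if there is a session at all, which is exactly where the conversation went sideways.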
After a few more exchanges in which it mixed up aspects of API security and web-session security (and I ran out of tokens for the night), I decided to call it.
I was impressed by how quickly it generated responses and how easy the code was to read, but frankly, deploying anything it produced would at best have confused whatever client was trying to connect and at worst left the front door wide open.
A Different Approach
A couple of weeks later, I was working on another project, got stuck, and decided to try a more focused approach to vibe coding. I gave Claude a very specific prompt to accomplish a single task.
This was far more successful: with some tuning, it quickly and securely filled a gap in my understanding. I tried the same focused approach a week later and got the same result.
What I took from this is that large language model (LLM) coding is not going to replace experienced engineers any time soon. After all, an LLM is simply a prediction engine, predicting the next word (or method, or variable) from the previous input and output. LLMs may seem to have the capacity to think, but it's ultimately a statistics game.
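To oversimplify that "statistics game", here is a toy next-word predictor built from nothing but word counts. Real LLMs use neural networks over tokens rather than bigram tallies, but the idea of picking a likely continuation is the same; the corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most common follower, if any."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the robots are coming the robots are learning the robots are coming"
model = train_bigrams(corpus)
print(predict_next(model, "are"))  # prints "coming", which beats "learning" two to one
```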
Vibe coding the next-generation CRM solution will likely land us in security hell before the night is over. Thoughtful application of AI atop a strong engineering foundation, however, can greatly enhance productivity and quickly close gaps in knowledge. After all, no one knows everything.
I’d wager this is true of other applications of AI as well. The key is knowing when we can trust AI output and when we need to question or outright reject it. After all, using statistics, we can predict a coin flip 50% of the time…
… unless the coin is magnetized. We’d have to know that beforehand.

Paul is a software engineer of more than 30 years turned big data/observability platform architect who has Splunked and Cribl’d cars, lights, the human body, and more – by day. By night he’s an EDM aficionado starting to DJ in the Denver area.