AI Won't Replace You
Lately, I’ve been spending a lot of time experimenting with Claude Code, an AI-assisted engineering tool.
As a non-developer who moved from marketing into tech, I've always leaned into the operational side of things: helping with QA, code revisions, text, architecture, or observability. I've learned the processes of putting an app into production and of building something from scratch: working through Python scripts, using Jupyter notebooks, running SQL queries, all of which required study and revision (usually tracked in spreadsheets, still my most-used tool despite all the new platforms available). But still.
Teaching the AI (and Learning from It)
The challenge I face now, with so many new features emerging every week, is that I'm constantly teaching the AI what I need, and at the same time learning how to teach it better. In that process, a lot of my curation work ends up tying me to a specific vendor.
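To make that curation concrete: a lot of it lives in a CLAUDE.md file, the project-level instructions Claude Code reads at the start of every session. The sketch below is a hypothetical, simplified example, not my actual setup:

```markdown
# CLAUDE.md — project conventions (hypothetical example)

## Stack
- Node.js site, deployed via Netlify; don't add new build tools without asking.

## How I want you to work
- Explain changes in plain language before editing files.
- Keep dependencies minimal; prefer improving existing components over scaffolding new ones.
- Never run deployments or database changes on your own.

## Style
- Short functions, descriptive names, comments only where intent isn't obvious.
```

Little of that effort transfers cleanly if I switch tools tomorrow, which is exactly the lock-in I mean.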
You might ask: why not just use a platform that's not vendor-dependent and play with different models?
Fair point. But coming from a marketing background, I like simplicity, and for me that means sticking with VS Code and Claude Code. It's just a familiar view. Anyway, I digress…
AI Tools as Extensions of Us
Working with these AI tools, Claude Code being my favorite, has made me realize how much time it actually takes to set up an environment, and how much of that setup eventually becomes an extension of you. You still need a human to select the right MCP (Model Context Protocol) servers, configure integrations, and build the specific skills that align with your workflow.
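To illustrate what that selection work looks like, here is a minimal sketch of a project-scoped MCP server configuration for Claude Code. The server choices and paths are hypothetical, and the exact file name and schema can vary between versions, so treat this as an assumption rather than a recipe:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./content"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/analytics"]
    }
  }
}
```

Deciding which servers actually belong in your workflow, and what they are allowed to touch, is exactly the kind of judgment that doesn't automate away.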
It makes me wonder: if you’re thinking about starting a business, can you really delegate your entire operational layer to AI? Would you actually trust it to run unsupervised?
From what I’ve seen, especially in the ongoing discussions around vibe coding versus AI-assisted engineering, a few things are becoming clear:
Vibe coding doesn't replace real engineering. It's great for prototyping, but it's not sustainable for long-term development.
And honestly, even this text, after grammar and readability revisions, proves the point. AI can improve the structure, but it still takes a human to keep the meaning intact.
Experiment: Building a Node.js App
In one of my recent experiments, I once again considered rebuilding my personal website as a React application. I've followed this path a few times in the past, but I always end up reverting to a simple CMS. Maintaining a custom structure (keeping dependencies up to date, ensuring builds run cleanly, managing deployment) demands more ongoing effort than it initially seems. Still, with today's AI-assisted workflows, I decided to give it another try.
I asked an AI model to generate the project, but it quickly got caught in an infinite iteration loop (consuming tokens, producing bugs, and attempting to fix them in cycles). Eventually, I went back to a more controlled setup: cloning an existing repository and using AI selectively to improve specific components. GitHub may offer thousands of examples, but human judgment is still required to evaluate which approach fits the context and why.
Once I had an MVP running locally, I tried deploying the Node.js app to Cloudflare using AI-generated configurations. That turned out to be more time-consuming than productive, and I ultimately reverted to Netlify, which handled the deployment with minimal friction.
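For context, the kind of configuration that made Netlify feel low-friction looks roughly like this. The build command and publish directory are assumptions for a typical Vite-built React site, not my exact files:

```toml
# netlify.toml — hypothetical sketch for a static React build
[build]
  command = "npm run build"   # assumes the standard npm build script
  publish = "dist"            # Vite default; Create React App would use "build"

# Send client-side routes back to index.html (single-page app behavior)
[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200
```

Nothing exotic, which was the point: the less configuration I have to curate, the less there is for either of us, human or AI, to get wrong.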
Human in the Loop
All of this reinforces that AI tools function as extensions, not replacements. They can automate tasks, scaffold applications, and even suggest optimizations. Still, they rely on human oversight to define architecture, interpret trade-offs, and decide what’s worth building.
This, ultimately, is the essence of human curation: guiding, refining, and shaping technology so that it serves human intent rather than replacing it.