I use AI coding tools every day. Cursor is my primary development environment. I use Claude and GPT-4o for architecture discussions, code review, and working through hard problems. The speed improvement on certain kinds of work is real and significant.
But I’ve also seen what happens when developers use these tools wrong — code that looks right but has subtle bugs, architecture decisions made by autocomplete, test suites that test nothing because they were AI-generated and nobody read them.
Here’s my honest take on what AI-assisted coding actually delivers and how to use it without creating technical debt you’ll regret.
What AI Code Generation Is Actually Good At
Boilerplate and scaffolding. The most immediate value. Setting up a new Rails model with validations, associations, and corresponding tests used to take 20-30 minutes. With AI assistance, it takes 5. That’s real time savings on work that has low cognitive demand but high volume.
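To make the category concrete, here is the kind of model boilerplate I mean — a hypothetical Order model with validations and an association. All names are invented for illustration, and this is a Rails-flavored sketch, not runnable outside a Rails app:

```ruby
# app/models/order.rb -- hypothetical model; names are illustrative
class Order < ApplicationRecord
  belongs_to :customer
  has_many :line_items, dependent: :destroy

  # Standard declarative validations: low cognitive demand, high volume.
  validates :status, inclusion: { in: %w[pending paid shipped] }
  validates :total_cents, numericality: { greater_than_or_equal_to: 0 }
end
```

None of this is hard to write by hand; it's just tedious, which is exactly why AI generation helps here.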
Pattern implementation. Once a pattern is established in your codebase, AI tools are excellent at extending it. “Add a new API endpoint following the same pattern as orders” — the AI can do this accurately because it has context on the existing structure.
Documentation and comments. Explaining what code does is something AI does well. I use it to generate inline documentation on complex methods and write docstrings that I’d otherwise skip.
Test cases. AI generates reasonable test case skeletons quickly. I still review and often modify them, but it’s faster than starting from scratch.
Regex and complex string manipulation. I don’t want to write regex from scratch. Nobody does. AI handles this well.
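A typical case: pulling order references out of free-form text. I'd ask the AI for a first draft of the pattern, then verify it against real samples myself. The reference format below is invented for illustration:

```ruby
# Extract hypothetical order references of the form ORD-<4-digit year>-<5-digit serial>.
ORDER_REF = /\bORD-(?<year>\d{4})-(?<serial>\d{5})\b/

def extract_order_refs(text)
  # scan returns the captured groups for each match; reassemble the full reference.
  text.scan(ORDER_REF).map { |year, serial| "ORD-#{year}-#{serial}" }
end

puts extract_order_refs("Refund ORD-2024-00153 and ORD-2023-00021 today").inspect
# => ["ORD-2024-00153", "ORD-2023-00021"]
```

The word boundaries and fixed digit counts are the parts I check most carefully — those are exactly where an AI-suggested regex tends to be too loose or too strict.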
Database query optimization suggestions. Describing a slow query and asking for optimization suggestions often surfaces improvements I would eventually find myself, just faster.
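One of the most common suggestions in this category is eliminating an N+1 query with eager loading. A Rails-flavored sketch with invented model names, not runnable outside a Rails app:

```ruby
# Before: one query for the orders, then one query per order for its
# line items -- the classic N+1 pattern a slow endpoint often hides.
Order.where(status: "paid").each do |order|
  order.line_items.sum(&:price_cents)
end

# After: eager-load the association so the whole loop is two queries total.
Order.where(status: "paid").includes(:line_items).each do |order|
  order.line_items.sum(&:price_cents)
end
```

The fix is one method call, but knowing to look for it — and confirming the query count actually dropped — is still on you.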
What AI Code Generation Is Bad At
Architecture decisions. AI will give you an answer to “should I use microservices or a monolith?” but it’s giving you a reasonable-sounding answer, not necessarily the right answer for your specific constraints. Architecture decisions require understanding context that AI doesn’t have.
Business logic with complex rules. If your business has specific rules that don’t follow standard patterns — industry-specific compliance, unusual pricing logic, proprietary workflows — AI generates plausible-looking code that often gets the edge cases wrong. This is the most dangerous category.
Subtle bugs in generated code. AI code usually looks right. That’s what makes the bugs hard to spot. I’ve caught subtle logic errors in AI-generated code that would have passed casual review. Trust but verify.
Security-sensitive code. Authentication, authorization, payment processing, PII handling — do not lean on AI generation for these. The frameworks already have vetted, well-tested ways to do this. Use the framework.
Anything that requires understanding your specific domain. The AI doesn’t know your business. It doesn’t know that “order” in your system means something different from “order” in general commerce. Domain-specific logic needs human writing and human review.
My Actual Workflow
I use Cursor with the Claude integration. My workflow on a typical feature:
- Write the spec first. Before asking AI to write anything, I write what the feature needs to do — usually as comments or a quick spec in the PR description. This forces me to think clearly about the requirements.
- Use AI for the scaffold. Generate the model, controller, routes, and test stubs. Review everything it generates.
- Write the critical business logic myself. The parts that are specific to the domain, the edge cases, the complex rules — I write these. AI might help me look up syntax or remind me of a method, but the logic is mine.
- Use AI to add tests for the scaffold. It’s fast at generating test cases for standard behavior. I add the edge cases and business-rule tests manually.
- Review AI-generated code like you’d review a junior developer’s PR. Because that’s what it is. Smart, fast, good at patterns, occasionally wrong in subtle ways, doesn’t understand your specific situation.
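The spec-first and write-the-logic-yourself steps look something like this in practice: spec as comments up top, then hand-written logic underneath. The bulk-discount rule and all names here are hypothetical:

```ruby
# Spec: orders of 10+ units get 10% off; 50+ units get 20% off.
# Edge cases: zero or negative quantities are invalid; discounts never stack.
def discounted_total_cents(unit_price_cents, quantity)
  raise ArgumentError, "quantity must be positive" unless quantity.positive?

  subtotal = unit_price_cents * quantity
  discount = if quantity >= 50 then 0.20
             elsif quantity >= 10 then 0.10
             else 0.0
             end
  (subtotal * (1 - discount)).round
end

puts discounted_total_cents(500, 12)  # 12 units -> 10% off 6000 = 5400
```

The threshold boundaries (exactly 10, exactly 50) and the invalid-quantity case are precisely the edge cases I'd write tests for by hand rather than trusting generated ones.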
The Speed Math
On a typical feature that used to take me two days, AI assistance gets me to “working and tested” in about one day. That’s meaningful — roughly 2x on the scaffolding-heavy parts.
On complex business logic features, the improvement is more modest. Maybe 20-30% faster, because that work is thinking-intensive and AI assistance with the thinking is less reliable.
The compounding benefit: faster delivery without cutting corners means clients see results sooner, can give feedback sooner, and we iterate faster. That’s the real value.
What This Means for How I Work With Clients
AI assistance means I can take on more projects, move faster, and pass some of those savings along. It doesn’t mean I’m not doing the work — it means I’m doing the thinking work, which is where the value is, and using AI for the repetitive implementation work.
I’m transparent with clients about this. Using AI tools is part of professional development in 2025. Any developer who tells you they’re not using them is either lying or falling behind.
If you’re curious about how AI-assisted development could accelerate your specific project, let’s talk.