Code reviews take way longer now because you have to actually read through everything instead of trusting that the dev knew what they were writing. It's like the AI is great at the happy path but completely misses edge cases or makes weird assumptions about state...
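To make the happy-path point concrete, here's a minimal, hypothetical Go sketch (function names and values are made up, not from any real review): the first version is what assistants tend to generate, correct for obvious input but panicking on an empty slice; the second is the edge-case-aware version a reviewer has to push for.

```go
package main

import "fmt"

// averageScore is the happy-path version: fine for a non-empty slice,
// but it divides by zero (and panics) when the slice is empty.
func averageScore(scores []int) int {
	total := 0
	for _, s := range scores {
		total += s
	}
	return total / len(scores) // run-time panic on empty input
}

// averageScoreSafe handles the empty case explicitly instead of panicking,
// returning ok=false to signal "no data".
func averageScoreSafe(scores []int) (int, bool) {
	if len(scores) == 0 {
		return 0, false
	}
	total := 0
	for _, s := range scores {
		total += s
	}
	return total / len(scores), true
}

func main() {
	avg, ok := averageScoreSafe([]int{80, 90, 100})
	fmt.Println(avg, ok) // 90 true
	avg, ok = averageScoreSafe(nil)
	fmt.Println(avg, ok) // 0 false
}
```

The bug only shows up on the input nobody tested, which is exactly why it surfaces months later in prod.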
The real kicker is when someone copies AI-generated code without understanding it, and then three months later nobody can figure out why production keeps having these random issues. Debugging AI slop is its own special hell.
Hmm, even at big Fortune 100-sized companies there's a lot of vibe-coded slop that works well for an MVP but absolutely falls apart under stress in prod environments once they scale to more users. I've seen this firsthand. It doesn't really reveal itself until later, when it's much more difficult to fix.
Nice. You can also use project-specific structure and markdown files to ensure the AI organizes content correctly for your use case. We're using it on 800k lines of Go and it works well. https://getstream.io/blog/cursor-ai-large-projects/
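For example, a minimal sketch of the kind of project-specific rules file that approach relies on (the file path and every rule below are illustrative assumptions, not taken from the linked post):

```markdown
<!-- .cursor/rules/project.md — hypothetical example of a project rules file -->
# Project conventions

- All services live under `internal/<service>/`; shared helpers go in `pkg/`.
- Every exported Go function gets a doc comment and a table-driven test.
- Wrap errors with `fmt.Errorf("...: %w", err)`; never discard them.
- New endpoints follow the existing handler → service → repository layering.
```

The point is just to give the assistant the same structural ground rules a new hire would get, so generated code lands in the right place.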
Thanks for the feedback... I'll update the post, but please check with your founder (Paul).
I messaged Paul on Twitter on Sunday, before even sharing the post, to get feedback, since I didn't want any confusion like you had last time on Reddit.
And I genuinely like both databases and other awesome developer tools.
My app is focused on providing a quick and easy way to share initial project concepts with others. The key is feedback gathering. I use an LLM just to quickly generate a project brief for the user, but what matters is the feedback from the audience you have.