Great post. I agree that a critical factor in why I'm getting good results from LLM-assisted coding is that _I know this shit_. I flag the model when it's going down the wrong path, and I include hints in my prompts that I know will steer things in the right direction.

If you don't know how to do that, you get shit-quality output.
"It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031."
https://techtrenches.dev/p/the-west-forgot-how-to-make-things