I use AI assistants all day, every day - Claude Code, Codex, Gemini, Ollama, and more.
These are the practical tips I've learned that actually make a difference in getting better results.
At the end of any project request, I add: 'Please plan that out, and then ask me a long set of questions that clarify any ambiguities or anything that might be handy to know to complete the task.'
Spend time on your global agents.md/claude.md. Every time the assistant does something you don't like, add a rule blocking it in your agents.md - and soon it will be doing a whole lot better.
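As a sketch of what those accumulated rules can look like (the entries below are examples, not my actual file):

```markdown
## Rules added after things went wrong
- Never delete or weaken a failing test to make the build pass.
- Never add a new dependency without asking me first.
- Run the full test suite before declaring a task done.
- Ask before rewriting files you weren't told to touch.
```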
If you use multiple agents - keep a single agents.md and symlink all the tool-specific md files to that common one. That way they all act the same, and changing one changes them all.
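A minimal sketch of that setup in Python (the per-tool filenames below are examples - adjust to whichever agents you use):

```python
# Keep one canonical agents.md and point the tool-specific files at it.
from pathlib import Path

ALIASES = ["CLAUDE.md", "GEMINI.md"]  # assumed per-tool filenames

def link_aliases(directory: Path, canonical: Path) -> None:
    """Replace each alias in `directory` with a symlink to the canonical file."""
    for name in ALIASES:
        alias = directory / name
        if alias.is_symlink() or alias.exists():
            alias.unlink()  # remove any stale copy first
        alias.symlink_to(canonical)
```

Run it once per project (or per home directory) and every agent reads the same rules.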
When working with AI, nothing is more important than QA - and for any web UI, testing the UI is part of it. Make sure that new Playwright tests are part of every check-in.
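A minimal smoke-test sketch using Playwright's Python API (assumes `pip install playwright` and `playwright install chromium`; the localhost URL is just an example dev server):

```python
def smoke_test(url: str = "http://localhost:3000") -> str:
    """Load the page headlessly and check it renders a title at all."""
    from playwright.sync_api import sync_playwright  # deferred so the file imports without the package

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        title = page.title()
        browser.close()
    assert title, "page should have a non-empty <title>"
    return title
```

Real check-in tests should assert on actual UI behaviour, not just the title - but even a smoke test like this catches the "doesn't render at all" failures the AI would otherwise never notice.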
Testing with mocks is one of the worst things a developer can do: it proves absolutely nothing, encourages terrible, hard-to-test program structures, and lets through code that doesn't work at all with any other part of the system. This is way more important with AI systems, which don't do any other sort of testing. Make sure your AI can't use mocks in any way. I actually have a non-AI validation on the commit hook that searches for words like mock, placeholder, and skip anywhere in the repository, and hard-blocks any check-in that contains them.
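A sketch of that kind of non-AI validation - a script a pre-commit hook could run over the staged files (the banned-word list and exit-code convention are illustrative):

```python
import re
import sys
from pathlib import Path

# Words that suggest faked-out tests or stubbed code.
BANNED = re.compile(r"\b(mock|placeholder|skip)\b", re.IGNORECASE)

def scan(paths):
    """Return (path, line_number, line) for every banned-word hit."""
    hits = []
    for p in paths:
        try:
            text = Path(p).read_text(errors="ignore")
        except OSError:
            continue  # deleted or unreadable file
        for i, line in enumerate(text.splitlines(), 1):
            if BANNED.search(line):
                hits.append((str(p), i, line.strip()))
    return hits

if __name__ == "__main__":
    found = scan(sys.argv[1:])
    for path, lineno, line in found:
        print(f"{path}:{lineno}: {line}")
    sys.exit(1 if found else 0)  # non-zero exit blocks the commit
```

Wire it up from `.git/hooks/pre-commit` (or a pre-commit framework) with the list of staged files as arguments.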
For every language you code in, make a skill with all your coding rules and lots of examples - this way the assistants will produce code in your style - e.g. Python and dotnet (those being the languages I use most at the moment).
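The shape of such a skill file might look like this (the rules below are examples of the kind of thing to capture, not my actual list):

```markdown
# Skill: Python coding style

## Rules
- Type hints on every public function.
- pathlib, never os.path.
- No bare `except:`; catch the narrowest exception.

## Example (the style I want)
    def load_users(path: Path) -> list[User]:
        raw = json.loads(path.read_text())
        return [User(**row) for row in raw]
```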
When you want to use AI locally in other tools, use this skill: ai. The tools always want to use an API, which can cost you a fortune - this allows you to do anything within your existing subscription.
Pick a secret provider like keyring and use it for every secret - never allow secrets to be embedded in code, local files, or environment variables.
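A minimal sketch using the `keyring` package (assumes `pip install keyring`; the service and username strings are examples):

```python
def get_api_key(service: str = "myapp", username: str = "api") -> str:
    """Fetch a secret from the OS keychain instead of code or env vars."""
    import keyring  # deferred so the sketch loads without the package installed

    secret = keyring.get_password(service, username)
    if secret is None:
        raise RuntimeError(
            f"No secret stored for {service}/{username}; "
            f"run `keyring set {service} {username}` once to add it."
        )
    return secret
```

Store the secret once with the `keyring` CLI, and every script reads it from the OS keychain - nothing ends up in the repo for an agent to leak.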