
| The next article initially appeared on Angie Jones’s web site and is being republished right here with the creator’s permission. |
I’ve been seeing an increasing number of open supply maintainers throwing up their arms over AI-generated pull requests. Going as far as to cease accepting PRs from exterior contributors.
In case you’re an open supply maintainer, you’ve felt this ache. All of us have. It’s irritating reviewing PRs that not solely ignore the mission’s coding conventions but additionally are riddled with AI slop.
However yo, what are we doing?! Closing the door on contributors isn’t the reply. Open supply maintainers don’t wish to hear this, however that is the best way folks code now, and that you must do your half to arrange your repo for AI coding assistants.
I’m a maintainer on goose which has greater than 300 exterior contributors. We felt this frustration early on, however as an alternative of pushing well-meaning contributors away, we did the work to assist them contribute with AI responsibly.
1. Inform people the way to use AI in your mission
We created a HOWTOAI.md file as a simple information for contributors on the way to use AI instruments responsibly when engaged on our codebase. It covers issues like:
- What AI is sweet for (boilerplate, checks, docs, refactoring) and what it’s not (safety essential code, architectural modifications, code you don’t perceive)
- The expectation that you’re accountable for each line you submit, AI-generated or not
- The way to validate AI output earlier than opening a PR: construct it, check it, lint it, perceive it
- Being clear about AI utilization in your PRs
This welcomes AI PRs but additionally units clear expectations. Most contributors need to do the appropriate factor, they only must know what the appropriate factor is.
And whilst you’re at it, take a contemporary take a look at your CONTRIBUTING.md too. Numerous the issues folks blame on AI are literally issues that at all times existed, AI simply amplified them. Be particular. Don’t simply say “comply with the code type”; say what the code type is. Don’t simply say “add checks”; present what a great check seems like in your mission. The higher your docs are, the higher each people and AI brokers will carry out.
2. Inform the brokers the way to work in your mission
Contributors aren’t the one ones who want directions. The AI brokers do too.
We now have an AGENTS.md file that AI coding brokers can learn to grasp our mission conventions. It contains the mission construction, construct instructions, check instructions, linting steps, coding guidelines, and express “by no means do that” guardrails.
When somebody factors their AI agent at our repo, the agent picks up these conventions robotically. It is aware of what to do and the way to do it, what to not contact, how the mission is structured, and the way to run checks to test their work.
You possibly can’t complain that AI-generated PRs don’t comply with your conventions when you by no means instructed the AI what your conventions are.
3. Use AI to overview AI
Investing in an AI code reviewer as the primary touchpoint for incoming PRs has been a sport changer.
I already know what you’re pondering… They suck too. LOL, honest. However once more, you need to information the AI. We added customized directions so the AI code reviewer is aware of what we care about.
We instructed it our precedence areas: safety, correctness, structure patterns. We instructed it what to skip: type and formatting points that CI already catches. We instructed it to solely remark when it has excessive confidence there’s an actual difficulty, not simply nitpick for the sake of it.
Now, contributors get suggestions earlier than a maintainer ever seems on the PR. They will clear issues up on their very own. By the point it reaches us, the apparent stuff is already dealt with.
4. Have good checks
No, critically. I’ve been telling y’all this for YEARS. Anybody who follows my work is aware of I’ve been on the check automation soapbox for a very long time. And I want everybody to listen to me after I say the significance of getting a strong check suite has by no means been larger than it’s proper now.
Assessments are your security internet towards dangerous AI-generated code. Your check suite can catch breaking modifications from contributors, human or AI.
With out good check protection, you’re doing guide overview on each PR making an attempt to cause about correctness in your head. That’s not sustainable with 5 contributors, not to mention 50 of them, half of whom are utilizing AI.
5. Automate the boring gatekeeping with CI
Your CI pipeline also needs to be doing the heavy lifting on high quality checks so that you don’t must. Linting, formatting, kind checking all ought to run robotically on each PR.
This isn’t new recommendation, nevertheless it issues extra now. When you might have clear, automated checks that run on each PR, you create an goal high quality bar. The PR both passes or it doesn’t. Doesn’t matter if a human wrote it or an AI wrote it.
For instance, in goose, we run a GitHub Motion on any PR that entails reusable prompts or AI directions to make sure they don’t comprise immediate injections or the rest that’s sketchy.
Take into consideration what’s distinctive to your mission and see when you can throw some CI checks at it to maintain high quality excessive.
I perceive the impulse to lock issues down, however y’all we are able to’t quit on the factor that makes open supply particular.
Don’t shut the door in your initiatives. Elevate the bar, then give folks (and their AI instruments) the knowledge they should clear it.
On March 26, be a part of Addy Osmani and Tim O’Reilly at AI Codecon: Software program Craftsmanship within the Age of AI, the place an all-star lineup of consultants will go deeper into orchestration, agent coordination, and the brand new expertise builders must construct wonderful software program that creates worth for all individuals. Join free right here.
