AI alignment for OSS moderation
The question of 'how to moderate AI contributions' is a hot topic:
My OSS project is over two years old and leverages AI if the user chooses to use it. However, this also seems to attract vibe coders who submit pull requests that absolutely do not follow coding standards. They're sloppy, include random changes, add complexity, and contain plainly useless code that isn't even used. - recent /r/opensource Reddit post
The AI in Open Source Alignment Working Group (special thanks to @MoralCode / Adrian Edwards) is compiling this list of challenges, discussions, resources, and emerging policies to help the community navigate the growing problem of low-quality AI-generated contributions.
Contributors and maintainers are approaching this issue from multiple angles: automated detection, policy frameworks, and, in some cases, outright bans. We'll continue tracking progress and resources as they develop.
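To make "automated detection" concrete, here is a minimal, hypothetical triage sketch in Python. Every signal and threshold below (boilerplate assistant phrasing, large undisclosed change sets, unusually dense diffs) is an illustrative assumption on my part, not a published detection method or a tool the working group has endorsed.

```python
# Hypothetical triage sketch: flag pull requests for closer human review using
# simple heuristics some maintainers use to spot low-effort AI-generated
# submissions. Signals and thresholds are illustrative assumptions only.

from dataclasses import dataclass

# Phrases that frequently survive copy-paste from chat assistants (assumed list).
SUSPECT_PHRASES = [
    "as an ai language model",
    "certainly! here is",
    "i hope this helps",
]


@dataclass
class PullRequest:
    title: str
    body: str
    files_changed: int
    lines_changed: int
    discloses_ai_use: bool  # e.g. a checked disclosure box in a PR template


def triage_flags(pr: PullRequest) -> list[str]:
    """Return human-readable reasons a PR deserves closer scrutiny."""
    flags = []
    text = f"{pr.title}\n{pr.body}".lower()
    if any(phrase in text for phrase in SUSPECT_PHRASES):
        flags.append("PR text contains boilerplate assistant phrasing")
    if pr.files_changed > 20 and not pr.discloses_ai_use:
        flags.append("large undisclosed change set; ask contributor about tooling")
    if pr.files_changed > 0 and pr.lines_changed / pr.files_changed > 300:
        flags.append("unusually dense diff; check for unrelated or unused code")
    return flags


if __name__ == "__main__":
    pr = PullRequest(
        title="Fix typo",
        body="Certainly! Here is the corrected code.",
        files_changed=34,
        lines_changed=5200,
        discloses_ai_use=False,
    )
    for reason in triage_flags(pr):
        print("FLAG:", reason)
```

Heuristics like these will inevitably produce false positives; the point of a sketch like this is to route suspect PRs to a human reviewer, not to auto-reject contributions.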
Attribution as Accountability
The fight against AI slop may also benefit from improved contribution attribution (something I have been chatting with a lot of people about this month). While my attribution proposal focuses on positive impact, there is significant potential in tracking negative impact as well: wasted maintainer time, useless code, and review burden. Tracking both sides of the ledger could help enforce accountability.
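As a rough illustration of what tracking negative impact alongside positive impact could look like, here is a hypothetical ledger sketch in Python. The event types and weights are invented for this example and are not part of my attribution proposal or any existing system.

```python
# Illustrative sketch only: one way an attribution system might accumulate
# signed impact per contributor. All event types and weights are hypothetical.

from collections import defaultdict

# Assumed weights: positive events add credit, negative events subtract it.
EVENT_WEIGHTS = {
    "pr_merged": +5,
    "issue_triaged": +1,
    "pr_reverted": -8,
    "review_hour_consumed": -2,  # maintainer time spent on a rejected PR
}


class AttributionLedger:
    """Accumulates signed impact per contributor."""

    def __init__(self) -> None:
        self._scores: dict[str, int] = defaultdict(int)

    def record(self, contributor: str, event: str, count: int = 1) -> None:
        self._scores[contributor] += EVENT_WEIGHTS[event] * count

    def net_impact(self, contributor: str) -> int:
        return self._scores[contributor]


ledger = AttributionLedger()
ledger.record("alice", "pr_merged", 3)
ledger.record("bob", "pr_reverted")
ledger.record("bob", "review_hour_consumed", 4)
print(ledger.net_impact("alice"))  # 15
print(ledger.net_impact("bob"))    # -16
```

The interesting design question is not the arithmetic but who records the events and how weights are agreed on; any real system would need maintainer consensus on both.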
Get Involved
The AI in Open Source Alignment Working Group meets every two weeks. We'll also be hosting an evening Birds of a Feather (BoF) session at CHAOSScon, just before FOSDEM.
Join us!