
After the Pull Request: Four Proposed Areas of Work for Open Community Representation

Photo: an unlit "open" sign, by Chelaxy Designs / Unsplash

Things that made me happy this week:

  • Creative Commons paused before charging forward. Their update on CC Signals made me happy: they responded to community push-back and took time, rather than rushing to something.
  • A focus for my future. I have decided to find a way to formalize my research and related work, and am now focused on applying to fellowships and/or a master's program around AI, ethics + open source. Scoping my goals and personal statements has been the most creative and purposeful I have felt in a long time (professionally). Hopeful.
  • I finished refinishing my dresser. Before/after at the bottom of this post for subscribers. I absolutely loved the process, and am on the hunt for more woodworking projects! New hobby, maybe.

Last week I argued that the question for open source is not whether AI will kill it. It is whether the communities whose work fuels AI have consent and voice in what earns reputation and trust. The pull request did that for software (?). AI needs an equivalent.

So that's stating a problem, but I also want to (at least for myself) begin defining what the work might look like by drafting this proposal for our AI Alignment Working Group. I believe CHAOSS, as a 'community of communities', is a perfect place to lead with an opinion and a practice - and also to test some hypotheses. My first attempt at a problem statement went like this:

Open source communities currently have no standard way to proactively govern AI. Not in the tools and platforms they depend on, and not in the community and project spaces they own.

I am especially interested in defining enforceable consent, and drafted four categories:

  • Use of models: whether AI can be used at all in a given tool, platform, or community space. Trust starts with consent.
  • Approval of model type: which specific models (as a standard, or by name) are acceptable for interacting with a project's code, contributors, or conversations. This might include dataset, open-code, and environmental considerations, among others. Reputation can be built by models responding to community standards.
  • Actions taken by an approved model: what AI is allowed to do in community workflows (submit PRs, post comments, rewrite docs, review issues, triage security reports) - and whether it can train on community resources. Trust is tested by what a model does in practice.
  • Improvements ("PR on models" concept): human feedback already shapes AI models continuously - but only through labs' internal channels. Open source communities have no equivalent way to contribute: no "pull request" on AI, despite training data relying largely on their work. Reputation has to be maintained through community feedback.
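To make "enforceable consent" a little more concrete for myself, here is a minimal sketch of what a machine-readable community AI policy covering these four categories might look like. Everything here is hypothetical - the class, field names, and check function are my own illustration, not an existing CHAOSS, CC Signals, or platform standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIPolicy:
    """A hypothetical policy a community might publish in its repo."""
    ai_allowed: bool                                      # 1. use of models at all
    approved_models: set = field(default_factory=set)     # 2. which models are acceptable
    allowed_actions: set = field(default_factory=set)     # 3. what approved models may do
    feedback_channel: str = ""                            # 4. where "PRs on models" would go

def action_permitted(policy: AIPolicy, model: str, action: str) -> bool:
    """Check the three enforcement gates in order: use, model approval, action scope."""
    if not policy.ai_allowed:
        return False
    if model not in policy.approved_models:
        return False
    return action in policy.allowed_actions

# Example: a community that allows one named model to triage and comment, but not train.
policy = AIPolicy(
    ai_allowed=True,
    approved_models={"example-open-model"},
    allowed_actions={"triage_issues", "post_comments"},
    feedback_channel="https://example.org/model-feedback",
)

print(action_permitted(policy, "example-open-model", "post_comments"))  # True
print(action_permitted(policy, "example-open-model", "train_on_repo"))  # False
print(action_permitted(policy, "unapproved-model", "post_comments"))    # False
```

The point of the sketch is only that the first three categories are checkable gates a tool or platform could enforce, while the fourth (feedback) is a channel communities would have to be given.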

These align (I think) with what Creative Commons is teasing, which is encouraging:

AI relies on the commons, not the other way around. - Creative Commons Update on Signalling project

If this is work you want to do, come find the CHAOSS AI Alignment Working Group. Issue #61 - it's a proposal at this point; where we take it, who knows. The issue continues into hypotheses, related work, etc. Any input and reactions appreciated.


Photo: my refinished dresser, as promised

This post is for paying subscribers only


Subscribe to Emma's open notes

Licensed under CC BY-SA 4.0