New CHAOSS AI Alignment Working Group
I'm really interested in understanding how the work of CHAOSS (Community Health Analytics in Open Source Software) can be applied to the efforts around AI alignment/safety.
AI alignment is the effort to design Artificial Intelligence systems so that their goals, behaviors, and decisions are consistent with human values and intentions, making them safe, helpful, and reliable.
CHAOSS metrics have been built by humans with these same intentions in mind, so why not build on them? Maybe we can! To explore the idea, we're setting up a working group to talk about what this might look like. Right now we are just a channel (wg-ai-alignment) on the CHAOSS Slack, organizing our first meeting. We would love to have others join the discussion!