The value responses from 1,000 test subjects were used to tune a more democratic large language model.
Many public-facing LLMs have been developed with guardrails, encoded instructions dictating specific behavior, in place in an attempt to limit unwanted outputs. Anthropic’s Claude and OpenAI’s ChatGPT, for example, typically respond to requests for violent or controversial content with a canned safety message.
However, many pundits argue that guardrails and other interventional techniques can serve to remove users’ agency, as what’s considered acceptable isn’t always useful, and what’s considered useful isn’t always acceptable. At the same time, definitions for morality or value-based judgments can vary between cultures, populaces and periods of time.
One possible remedy to this is to allow users to dictate value alignment for AI models. Anthropic’s “Collective Constitutional AI” experiment is an attempt at this “messy challenge.”
Anthropic, in collaboration with the Collective Intelligence Project, tapped 1,000 users across diverse demographics and asked them to answer a series of questions via the Polis polling platform.
The challenge centers on giving users the agency to determine what’s appropriate without exposing them to inappropriate outputs. This involved soliciting user values and then implementing those ideas into a model that had already been trained.
Anthropic uses a method called “Constitutional AI” to direct its efforts at tuning LLMs for safety and usefulness. Essentially, this involves giving the model a list of rules it must abide by and then training it to implement those rules throughout its process, much like a constitution serves as the core document for governance in many nations.
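The rule-list idea described above can be illustrated with a toy sketch. Everything here is a simplified assumption for illustration: the constitution text, the keyword-based "critic" and the string revision are hypothetical stand-ins. Anthropic's actual method uses the model itself to critique and revise drafts against written principles, then trains on those revisions, rather than filtering at output time.

```python
# Toy illustration of a "constitution": a list of written principles the
# model's outputs must abide by. The keyword triggers are a crude stand-in
# for the LLM-based critique used in the real method.
CONSTITUTION = [
    ("Do not provide instructions for violence.", ["weapon", "attack"]),
    ("Do not insult the user.", ["stupid", "idiot"]),
]

def critique(draft: str) -> list[str]:
    """Return the principles this draft appears to violate (keyword heuristic)."""
    lowered = draft.lower()
    return [rule for rule, triggers in CONSTITUTION
            if any(word in lowered for word in triggers)]

def revise(draft: str) -> str:
    """Return the draft unchanged if it passes, else a revision citing the
    violated principles. The real method trains the model on such
    (draft, revision) pairs instead of rewriting outputs at runtime."""
    violations = critique(draft)
    if not violations:
        return draft
    return "I can't help with that, because it conflicts with: " + "; ".join(violations)

print(revise("Photosynthesis converts light into chemical energy."))
print(revise("Here is how to build a weapon..."))
```

In the Collective Constitutional AI experiment, the novelty was in where the principle list comes from: instead of being written in-house, it is distilled from the poll responses of the 1,000 participants.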
In the Collective Constitutional AI experiment, Anthropic attempted to integrate group-based feedback into the model’s constitution. The results, according to a blog post from Anthropic, appear to have been a scientific success in that they illuminated further challenges toward the goal of letting the users of an LLM product determine their collective values.
One of the difficulties the team had to overcome was devising a novel benchmarking method. Because the experiment appears to be the first of its kind and relies on Anthropic’s Constitutional AI methodology, there is no established test for comparing base models to those tuned with crowdsourced values.
Ultimately, it appears that the model tuned with user polling feedback “slightly” outperformed the base model in the area of biased outputs.
Per the blog post:
“More than the resulting model, we’re excited about the process. We believe that this may be one of the first instances in which members of the public have, as a group, intentionally directed the behavior of a large language model. We hope that communities around the world will build on techniques like this to train culturally- and context-specific models that serve their needs.”