Most efforts by governments to set guardrails around the ethical and legal use of artificial intelligence are in their infancy. While the risks and benefits of the technology are being debated, voluntary standards are an increasingly popular tactic. The White House this week announced eight companies have joined a group that has pledged to honor several principles around the development and implementation of AI.
Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI joined seven others—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—that had made the same pledge in July.
The firms, which include some of the largest and most advanced companies working on the technology, have committed to a set of voluntary measures: conducting independent security testing of their systems, including for cyber and biosecurity risks; making it evident to consumers when content is AI-generated; and publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use, among other commitments.