In response to concerns over job losses and copyright infringement stemming from emerging AI technologies, the White House has announced that seven major AI companies have made voluntary commitments to develop the technology safely, securely, and transparently. The companies involved are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The commitments cover cybersecurity, biosecurity risks, misuse prevention, safe testing, privacy protections, and public transparency.
However, the effectiveness of these voluntary commitments is unclear. Google, Meta, and OpenAI are already facing lawsuits over copyright infringement and misuse of user information. Experts in art and technology are skeptical that the commitments will have much impact, arguing that their voluntary nature renders them largely meaningless and that stronger, well-defined regulation is needed.
One technical challenge singled out is watermarking generative content, both text and images, for which no robust solution currently exists. Critics argue that relying on AI companies to voluntarily develop such complex systems is naive and that regulation with clear goals and enforcement mechanisms is needed.
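To give a sense of what watermarking text even means, here is a minimal, hypothetical sketch of one idea discussed in the research community: biasing generation toward a keyed, pseudo-random "green list" of tokens and then detecting that bias statistically. The toy vocabulary, toy generator, and SECRET_KEY below are invented for illustration and do not represent any company's actual scheme.

```python
# Toy sketch of a "green list" text watermark: the generator is nudged toward a
# keyed pseudo-random subset of the vocabulary, and a detector with the same key
# measures how often tokens land in that subset. Purely illustrative.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]
SECRET_KEY = "demo-key"  # hypothetical key shared by generator and detector

def green_list(prev_token: str) -> set:
    """Derive a keyed pseudo-random half of the vocabulary from the previous token."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, k=len(VOCAB) // 2))

def generate(length: int = 40, bias: float = 0.9) -> list:
    """Toy 'model': with probability `bias`, pick the next token from the green list."""
    tokens = ["the"]
    rng = random.Random(0)
    for _ in range(length):
        greens = green_list(tokens[-1])
        pool = list(greens) if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that fall in the green list keyed on their predecessor."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / (len(tokens) - 1)

if __name__ == "__main__":
    watermarked = generate(bias=0.9)
    unwatermarked = generate(bias=0.0)  # ignores the green list entirely
    print(f"green fraction (watermarked):   {green_fraction(watermarked):.2f}")
    print(f"green fraction (unwatermarked): {green_fraction(unwatermarked):.2f}")
```

Even a toy detector like this is weakened by paraphrasing or translation, and the approach does not carry over directly to images, which hints at why robust watermarking remains an open problem rather than a box companies can simply check.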
The Concept Art Association, an organization supporting concept artists, has also emphasized the importance of including artists and creators in legislation around generative AI, arguing that their intellectual property drives the industry and should not be ignored in policy discussions.
The White House has said it is developing an executive order and will pursue legislation to protect the public in the era of AI. In earlier meetings, Vice President Kamala Harris and AI company CEOs discussed the importance of developing AI responsibly. Congressional hearings on AI policy have also taken place, with testimony highlighting the need for stronger protections for creators' rights.
The voluntary commitments from leading AI companies are a step in the right direction, but concerns remain about their effectiveness, the need for stronger regulation, and the involvement of creators in the process.