“Apple joins 15 other technology companies — including Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — that committed to the White House’s ground rules for developing generative AI in July 2023.
As a frequent target of federal regulators, Apple wants to signal early that it’s willing to play by the White House’s rules on AI — a possible attempt to curry favor before any future regulatory battles on AI break out.
But how much bite do Apple’s voluntary commitments to the White House have? Not much, but it’s a start. The White House calls this the “first step” toward Apple and 15 other AI companies developing AI that is safe, secure and trustworthy.
Under the commitment, AI companies promise to red-team AI models before a public release — acting as adversarial hackers to stress-test an organization’s safety measures — and to share that information with the public.
The White House’s voluntary commitment also asks AI companies to treat unreleased AI model weights confidentially. Apple and other companies agree to work on AI model weights in secure environments, limiting access to model weights to as few employees as possible.
Lastly, the AI companies agree to develop content labeling systems, such as watermarking, to help users distinguish what is and isn’t AI generated.”