The White House is expected to host an event Monday on “safe, secure and trustworthy artificial intelligence.”
That’s according to Axios, which obtained an invitation to the event, with multiple sources telling the outlet that an AI executive order will be announced. President Biden vowed earlier this year to take executive action to ensure “America leads the way toward responsible AI innovation.”
“The release is timed just before Vice President Kamala Harris is expected to travel to the U.K. for an AI summit scheduled for Nov. 1-2. Commerce Secretary Gina Raimondo is expected to go to the U.K. summit as well, per a source,” Axios reported.
The White House reportedly declined to comment.
Axios noted that there has been some progress in regulating AI, reporting that the Biden administration “earlier this year got AI companies to make voluntary safety, security and transparency commitments.”
Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, supports the effort.
“I applaud the administration for taking the first step,” Siegel told Fox News Digital. “We should applaud the first step through the EO but quickly need a framework for the detailed steps beyond that truly safeguard our freedoms.”
Siegel cited “four pillars” of regulation that would address concerns about AI safety.
The first pillar, Siegel said, is to protect children and other vulnerable populations from "scams and other harms." The second is to pass new rules in the criminal justice code to ensure AI cannot be used as cover for criminals. The third, according to Siegel, is to ensure "fairness" by preventing existing biases from being baked into AI data and models. The fourth is to ensure a focus on "trust and safety" in AI systems that "includes agreement on how the systems are used and not used."
“We need to put the onus on the algorithm providers to make sure customers are not using it for nefarious purposes much like we ask banks to certify their customers are not money laundering,” Siegel said. “We need to make sure AI use is disclosed (for example in advertising) to not mislead.”
There are also concerns about the technology being used for surveillance and about AI's potential impact on jobs. The voluntary commitments made earlier this year were signed by 15 major AI developers, who agreed to share AI safety data with the government.
“Many Democrats and progressive groups have fallen into the trap that AI regulations need to be mostly focused on misinformation and policing police — we’re holding out hope that President Biden’s executive order strays away from this and falls more towards the practical efforts with artificial intelligence,” Aiden Buzzetti, president of the Bull Moose Project, told Fox News Digital. “We believe that responsible safeguards to AI can promote both innovation and a reasonable amount of data privacy and security for Americans, and there is absolutely no need to privilege one over the other.”
“We need regulations that provide basic security for consumers without privileging the same companies that fight tooth and nail to avoid regulations except for this one particular instance where they hold the advantage in time and resources,” Buzzetti added.