Excessive regulation could endanger AI industry, warns JD Vance

The United States believes that excessive regulation could seriously endanger the artificial intelligence (AI) industry, US Vice President JD Vance said on Tuesday, taking Donald Trump's fight against restrictions on AI development to the global stage.

The warning against excessive regulation

"We believe that excessive regulations in the AI ​​sector could endanger a transformative industry, especially now that it is in the upswing," said Vance in front of head of state and CEOs, who had gathered for Artificial Intelligence Action Summit in Paris. "I want this deregulated approach to flow into many conversations at this conference."

A step back on safety precautions

Vance's speech followed Trump's repeal last month of a sweeping executive order signed by his predecessor Joe Biden, which aimed to address national security risks associated with AI and to prevent discrimination by AI systems.

Artificial intelligence as an opportunity

"I'm not here to talk about the security of AI," emphasized Vance. "I'm here to talk about the chances of AI." While he underlined the potential of AI to improve people's life, he referred to the numerous risks that need to be regulated. So AI can create pictures, audio and videos that appear that a person has said or did something that she never did. Such technologies could be used to influence elections or create fake pornographic content.

The dangers of technology

The dangers posed by the technology are considerable, ranging from the easy availability of information useful for committing crimes to fears that AI could slip beyond human control. "AI can be used to develop autonomous weapon systems that can harm the world," Manoj Chaudhary, CTO of the US software company Jitterbit, told CNN in November.

The need for regulatory measures

A report produced in March on behalf of the US State Department warned of "catastrophic" national security risks arising from the rapid development of the technology and called for "emergency measures" to counter them through regulatory requirements.

Optimism and innovation

Vance said that the new US administration will pursue an innovation-friendly approach to AI. However, this does not mean that "all safety concerns are ignored". "Focus matters, and we now have to focus on the opportunity to get the best out of our innovations and to use AI, with great confidence, to improve the well-being of our nations and their citizens. I can say that the Trump administration will not waste this opportunity."

Education and dealing with AI

One way the new US administration wants to put this into practice is by ensuring that US schools teach students "how to use, supervise and interact with AI-supported tools" as they become an ever larger part of everyday life. "I really believe that AI ... will make people more productive. It will not replace people," he said.

Regulations and European approaches

Vance warned against regulations that "suffocate" AI and called for more "optimism" towards the technology, particularly from Europe. The European Union recently passed a pioneering AI law that prohibits some "unacceptable" applications of the technology and sets strict requirements for other applications classified as "high-risk". For example, the EU AI Act bans any biometric application used to infer a person's race, political views or sexual orientation.

Competitiveness of the USA

Vance also outlined other principles of the Trump administration, emphasizing that to secure America's advantage, the government will ensure that the most powerful AI systems are built in the USA with American-designed and manufactured chips. The new administration will also ensure that AI systems developed in America are free of "ideological bias and never restrict the right of our citizens to freedom of expression," he added.

Annoa Abekah-Mensah contributed to this report.