Thu. May 16th, 2024

In May, more than 100 professors of artificial intelligence (AI), along with some of the field's top experts, academics from other disciplines, and AI industry leaders, signed a statement consisting of only one sentence:

"Reducing the risk of extinction from AI must be a global priority, alongside other societal-scale risks such as pandemics and nuclear war."

Many of the researchers who signed explained their concern: the risk is that we might build powerful AI systems that we cannot control within this decade or the next.

Some see this statement as an industry ploy to promote AI companies' products or to influence regulation, unaware that most of the signatories, including us, are academics who do not work in industry. But the most common objection we hear is that, by focusing on extinction, the statement distracts from the harms AI is causing right now. As volunteers involved in collecting signatures and drafting the statement, we disagree, because the two issues share a common core. The false dichotomy between ongoing harms and emerging risks makes it unnecessarily difficult to address their common root causes.


AI poses many ongoing problems. AI systems can facilitate human rights violations, perpetuate systemic discrimination, and create power imbalances. Because AI systems are often widely deployed, the damage can be enormous. For example, tens of thousands of families in the Netherlands were pushed into poverty when the Dutch tax and customs authority, relying on an AI-generated risk profile, accused them of fraud and demanded that they repay large sums of money.

The root causes of AI harm
