
The Fight Over Which Uses of Artificial Intelligence Europe Should Outlaw



In 2019, guards on the borders of Greece, Hungary, and Latvia began testing an artificial-intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to try to spot signs that a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding, and almost 20 years of research at Manchester Metropolitan University, in the UK.

The trial sparked controversy. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn’t work, and the project’s own website acknowledged that the technology “may imply risks for fundamental human rights.”

This month, Silent Talker, a company spun out of Manchester Met that made the technology underlying iBorderCtrl, dissolved. But that’s not the end of the story. Lawyers, activists, and lawmakers are pushing for a European Union law to regulate AI, which would ban systems that claim to detect human deception in migration—citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.

A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU nations and members of the European Parliament. The legislation is intended to protect EU citizens’ fundamental rights, like the right to live free from discrimination or to claim asylum. It labels some use cases of AI “high-risk,” some “low-risk,” and slaps an outright ban on others. Those lobbying to change the AI Act include human rights groups, trade unions, and companies like Google and Microsoft, which want the law to draw a distinction between those who make general-purpose AI systems and those who deploy them for specific uses.

Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow use of systems like iBorderCtrl, adding to Europe’s existing “publicly funded border AI ecosystem.” The analysis calculated that over the past two decades, roughly half of the €341 million ($356 million) in funding for use of AI at the border, such as profiling migrants, went to private companies.

The use of AI lie detectors at borders effectively creates new immigration policy through technology, labeling everyone as suspicious, says Petra Molnar, associate director of the nonprofit Refugee Law Lab. “You have to prove that you are a refugee, and you’re assumed to be a liar unless proven otherwise,” she says. “That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders.”

Molnar, an immigration lawyer, says people often avoid eye contact with border or migration officials for innocuous reasons—such as culture, religion, or trauma—but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle with cross-cultural communication or with speaking to people who have experienced trauma, she says, so why would people believe a machine can do better?


