AI experts demand strict regulation of deepfakes: open letter warns of a threat to society

by alex

Signatories call for the full criminalization of deepfake child sexual abuse material

Hundreds of members of the artificial intelligence community, including prominent experts and activists such as Jaron Lanier, Frances Haugen, and Stuart Russell, have signed an open letter calling for strict regulation of deepfakes. The letter, backed by more than 500 scientists and practitioners in the AI industry, urges governments around the world to take mandatory action to stop the spread of misinformation and harmful deepfakes.

The letter, whose signatories include experts from companies and organizations such as OpenAI, DeepMind (a division of Google), Anthropic, Amazon, Apple, and Microsoft, argues that deepfakes pose a serious threat to society. It calls for the full criminalization of deepfake child sexual abuse material and proposes that anyone who creates or knowingly distributes such material be held criminally liable, regardless of whether the figures depicted are real or fictitious.

A key part of the letter is a call for AI developers to take responsibility for preventing the creation and spread of malicious deepfakes, with penalties proposed for developers whose preventive measures prove inadequate.

Scientists and experts from around the world who joined the call emphasized the importance of these measures and the need for legislative regulation of the problem. In their letter, they note that the European Union's recognition of the challenge and its willingness to propose measures, along with the slow progress of the Kids Online Safety Act (KOSA) in the United States, may have stimulated active discussion and drawn public attention to the problem of deepfakes.

Even so, it remains unclear whether legislators will take these proposals into account. A previous letter calling for a pause in AI development did not receive the attention it deserved. This new appeal, however, is more practical and offers concrete steps toward solving the problem. The authors hope that the list of signatories will help legislators gauge the global response to the issue and shape a strategy for regulating artificial intelligence.
