A group of artificial intelligence experts and industry executives, led by AI pioneer Yoshua Bengio, has issued an open letter urging increased regulation of the creation and dissemination of deepfake content, citing potential risks to society.
Authored by Andrew Critch, an AI researcher at UC Berkeley, the letter emphasises the growing prevalence of deepfakes: realistic yet fabricated images, audio, and video created by AI algorithms. Advances in AI technology have made deepfakes increasingly indistinguishable from human-created content, posing significant challenges to society.
Titled “Disrupting the Deepfake Supply Chain,” the letter proposes several recommendations for regulating deepfakes: fully criminalising deepfake child pornography, imposing criminal penalties on individuals who knowingly create or spread harmful deepfakes, and requiring AI companies to implement safeguards that prevent their products from generating harmful deepfakes.
Over 400 individuals from various sectors, including academia, entertainment, and politics, have signed the letter as of Wednesday morning. Notable signatories include Harvard psychology professor Steven Pinker, Joy Buolamwini, founder of the Algorithmic Justice League, and two former Estonian presidents, along with researchers from Google, DeepMind, and OpenAI.
The call for regulation comes amid concerns about the potential societal impacts of deepfake technology. Recent developments, such as Microsoft-backed OpenAI’s ChatGPT, which can engage users in human-like conversation, have underscored the need for proactive measures to address AI-related risks.
Prominent figures, including Elon Musk, have previously raised alarms about AI risks. In an open letter last year, Musk and others called for a six-month pause in the development of AI systems more powerful than OpenAI’s GPT-4 model, highlighting the importance of prioritising the ethical implications of AI advancements.
As AI technology continues to evolve, experts and industry leaders are increasingly emphasising the importance of regulatory frameworks to mitigate the potential harms associated with deepfakes and other AI-driven applications.