The misuse of AI has led to the creation of deepfake videos showing Emma Watson in suggestive situations. These clips turned out to be promotional videos for the FaceMega application.
With the advance of AI, deepfake videos are becoming increasingly common. The technology uses machine-learning algorithms to superimpose one person’s face onto another’s body, producing videos that look authentic but are in fact deceptive manipulations.
Emma Watson videos: a shocking advertising campaign for a deepfake application
The rise of artificial intelligence (AI) continues to influence our daily lives, but its use can sometimes have harmful consequences. Recently, hundreds of deepfake videos were shared on Facebook and Instagram, featuring a face that closely resembles that of actress Emma Watson.
The videos were apparently an advertising campaign for the deepfake AI app FaceMega. They shocked many users by showing the actress, best known for playing Hermione Granger, in sexually explicit situations.
For a weekly subscription fee of $8, the app lets users swap the faces of people in a video. While the technology may sound fascinating, FaceMega’s marketing strategy highlights the risks of using deepfakes to deceive people. In response to the controversy, FaceMega has been removed from the Apple App Store and Google Play Store.
FaceMega: Misleading advertising and privacy risks
In this situation, FaceMega appears to contradict its own terms and conditions. The app’s ads, which feature celebrities such as Emma Watson, rely on deceptive tactics to attract users, even though the app’s own rules explicitly prohibit impersonating any person or organization.
Ufoto Limited, the studio that designed the ads, is now under fire. Although FaceMega was created with entertainment in mind, it can be used for illegal activities such as creating non-consensual pornographic videos or spreading false information.
Furthermore, FaceMega’s developers have a responsibility to ensure that their technology is not used in malicious or dangerous ways. To that end, they must put measures in place to protect users’ privacy and security.
Deepfake: a technology under intense legal and media scrutiny
A deepfake is a digital manipulation technique that uses artificial intelligence to create videos that look real. Legislation on the use of this technology is relatively new and constantly evolving. In many countries, creating and distributing such content for malicious purposes, such as defamation or the manipulation of public opinion, is illegal.
In the USA, for example, federal law prohibits the creation of pornographic deepfakes involving minors, as well as the distribution of videos with the aim of misleading the public for political purposes.
In Europe, the General Data Protection Regulation (GDPR) also imposes provisions relevant to facial synthesis. Social media platforms such as Facebook have likewise begun taking steps to combat deepfakes: the platform removes manipulated videos and labels content suspected of manipulation.