Deepfake. What you need to know about artificial intelligence-powered fake media.


        The ability of computers to simulate reality has improved steadily over time. Modern film, for instance, relies heavily on computer-generated sets, scenery, and characters in place of the physical locations and props that were once standard, and most of the time these scenes are virtually indistinguishable from reality.

 

         Deepfake technology has been in the news recently. Deepfakes, the latest advance in computer imagery, are created by training artificial intelligence (AI) to replace one person's likeness with another's in recorded video.

What are deepfakes and how do they work?


         The name "deepfake" comes from the underlying artificial intelligence (AI) technique known as "deep learning." Deep learning algorithms, which teach themselves how to solve problems when given enormous amounts of data, are used to swap faces in video and other digital content.

       There are several techniques for producing deepfakes, but the most popular one uses deep neural networks built around autoencoders and a face-swapping algorithm. You first need a large set of video clips of the person you want to insert, plus a target video to serve as the foundation for the deepfake.
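The autoencoder approach described above can be sketched in a few lines. This is a toy, untrained NumPy illustration (the array sizes, weights, and data here are made-up assumptions, not a real implementation): two people share one encoder, but each has their own decoder, and swapping decoders at inference time is what performs the face swap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: flattened 8x8 grayscale patches for persons A and B.
faces_a = rng.normal(size=(100, 64))
faces_b = rng.normal(size=(100, 64))

# Shared encoder, separate decoders: the classic face-swap autoencoder layout.
# Linear layers only, to keep the sketch short; real systems use deep conv nets.
W_enc = rng.normal(scale=0.1, size=(64, 16))    # shared encoder weights
W_dec_a = rng.normal(scale=0.1, size=(16, 64))  # decoder that reconstructs A
W_dec_b = rng.normal(scale=0.1, size=(16, 64))  # decoder that reconstructs B

def encode(x):
    return x @ W_enc

def decode(z, W_dec):
    return z @ W_dec

# Training (omitted here) would minimise reconstruction error, so that
# decode(encode(faces_a), W_dec_a) ~= faces_a, and likewise for B.

# The face swap itself: encode a frame of person A with the SHARED encoder,
# then decode it with person B's decoder. The result is B's face carrying
# A's pose and expression.
swapped = decode(encode(faces_a[:1]), W_dec_b)
print(swapped.shape)  # (1, 64)
```

Because both people pass through the same encoder, the latent code captures pose and expression rather than identity; identity lives in the decoder, which is why swapping decoders swaps faces.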

       The Chinese app Zao, DeepFaceLab, FaceApp (a photo-editing app with built-in AI), Face Swap, and the since-removed DeepNude (a particularly harmful app that generated fake nude photographs of women) all make creating deepfakes easy even for amateurs.

Many deepfake programmes are also available on GitHub, a community for open-source software development.

      Some of these programmes are far more likely to be used maliciously, while others are used mainly for entertainment, which is one reason deepfake development has not been prohibited.


According to several specialists, as technology advances, deepfakes will become much more sophisticated and pose more serious hazards to the public, including potential election meddling, heightened political tension, and increased criminal activity.

Uses for deepfakes

While the ability to automatically swap faces to produce convincing, realistic-looking synthetic video has some intriguing, harmless applications (such as in gaming and film), it is undoubtedly a risky technology with unsettling implications. One of the first real-world uses of deepfakes was to create fake pornography.

 

In 2017, a Reddit user going by the handle "deepfakes" set up a forum for pornographic videos featuring face-swapped actors. Since then, deepfake pornography (especially revenge porn) has repeatedly made headlines, severely damaging the reputations of famous people. According to a Deeptrace report, 96% of the deepfake videos found online in 2019 were pornographic.

Are deepfakes limited to videos?

Deepfakes are not limited to video, either. Deepfake audio is a rapidly growing field with many applications.

 

A model of a person's voice can be created, and once it exists, that voice can be made to say anything, as happened last year when fake audio of a CEO was used to commit fraud. Realistic audio deepfakes can now be produced with deep learning algorithms and just a few hours (or in some cases, minutes) of audio of the person whose voice is being cloned.

How to recognise a deepfake

Deepfakes are likely to become more prevalent, and society as a whole will need to adapt to spotting deepfake videos in the same way internet users have become adept at spotting other kinds of fake news.

 

As is often the case in cybersecurity, more deepfake technology frequently has to be developed in order to detect deepfakes and stop them from spreading. This can create a vicious cycle and potentially cause more harm.

Deepfakes can be detected by a few signs, including:

 

Current deepfakes often struggle to animate faces realistically, producing videos in which the subject either never blinks, or blinks too frequently or unnaturally. However, after researchers at the University at Albany published a paper on detecting irregular blinking, newer deepfakes appeared that no longer had this flaw.

Look out for skin or hair problems, or a face that seems blurrier than the environment it appears in; the focus may look unnaturally soft.

Does the lighting look unnatural? Deepfake algorithms often retain the lighting of the clips that served as models for the fake video, which may not match the lighting in the target video.
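The blink cue above is the basis of a common detection heuristic: track how open each eye is across frames and count how often it closes. A minimal sketch follows, assuming facial landmarks already come from an external detector (the landmark ordering, threshold, and trace data here are illustrative assumptions, following the eye-aspect-ratio idea from Soukupova and Cech, 2016).

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks, ordered:
    outer corner, two top points, inner corner, two bottom points.
    Roughly: vertical eye opening divided by horizontal eye width."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at
    least `min_frames` consecutive frames with EAR below `threshold`.
    Too few (or far too many) blinks in a clip is a deepfake red flag."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Synthetic EAR trace: eyes open (~0.30) with two dips below the threshold.
trace = [0.30] * 10 + [0.15, 0.12, 0.14] + [0.30] * 10 + [0.10, 0.11] + [0.30] * 5
print(count_blinks(trace))  # 2
```

In practice the landmarks would come from a face-tracking library run on every frame; a clip of normal length in which this counter stays at zero is exactly the "subject doesn't blink at all" symptom described above.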
