FakeCatcher technology can detect deepfakes in milliseconds
The U.S. chipmaker Intel has introduced a new artificial-intelligence-based service capable of identifying fakes. The FakeCatcher algorithm spots deepfakes in less than a second, works on video, and runs in real time.
According to the developers, the algorithm distinguishes footage of a real person from a deepfake by analyzing the subtle changes in skin color caused by blood flowing through the vessels of the face. The AI also draws on other cues, and the results are strong: in tests, it correctly identified fake people in videos in 96% of cases.
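To make the blood-flow idea concrete, here is a minimal, purely illustrative sketch of that kind of color-rhythm check. It is not Intel's FakeCatcher implementation; it assumes OpenCV and NumPy are available, uses a hypothetical fixed face region and a toy frequency test, and real systems would add face tracking, spatial signal maps, and a trained classifier.

```python
# Illustrative only: a toy photoplethysmography-style check, not Intel's
# FakeCatcher. It averages the green channel over a fixed (hypothetical)
# face region per frame and asks whether the dominant frequency of that
# signal falls in a plausible human heart-rate band (~0.7-4 Hz).
import cv2
import numpy as np

def green_channel_signal(video_path, roi=(100, 100, 200, 200)):
    """Mean green-channel intensity per frame inside a fixed face ROI."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        values.append(patch[:, :, 1].mean())  # channel 1 = green in BGR
    cap.release()
    return np.asarray(values), fps

def has_pulse_like_rhythm(signal, fps, band=(0.7, 4.0)):
    """Check whether the strongest frequency sits in the human pulse band."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    dominant = freqs[spectrum.argmax()]
    return band[0] <= dominant <= band[1]

sig, fps = green_channel_signal("clip.mp4")
print("pulse-like color rhythm:", has_pulse_like_rhythm(sig, fps))
```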
According to FakeCatcher's authors, the algorithm will help detect malicious videos on social networks and news sites in time to prevent manipulation by fraudsters. Demand for such a service is high, given how many deepfakes now circulate in which fabricated celebrities say or do things that could harm their careers or mislead their millions of followers.
The technology used to create deepfakes synthesizes images and soundtracks with generative adversarial networks (GANs). In a GAN, two neural networks compete: a generator produces synthetic faces or voices, while a discriminator tries to tell them apart from real ones, and each improves against the other. The models are trained on huge numbers of faces and voices, from which they learn common features.
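The adversarial setup described above can be sketched in a few lines. The following is a minimal illustration of the GAN training loop, assuming PyTorch; the network sizes and the stand-in "real" data are placeholders, not the models behind any actual deepfake tool.

```python
# Minimal sketch of the GAN idea: a generator learns to produce samples the
# discriminator cannot distinguish from real data, while the discriminator
# learns to tell them apart. Sizes and data are placeholders.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
discriminator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 32) + 2.0   # stand-in for real samples
    noise = torch.randn(32, 16)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```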
It should be noted that deepfake technology is not used only for pranks or fraud schemes. Amazon, for example, uses it to improve the voice of its virtual assistant Alexa, and Disney uses it to make facial changes smoother and more natural in animated films. The technology itself is extremely useful, but it is also available to fraudsters, which is why Intel and other companies are developing algorithms to identify fakes.
Major cases of deepfake-enabled fraud first came to light in 2019, when Symantec reported incidents at several businesses in which faked executive voices were used to have millions of dollars transferred to third-party accounts. Deepfakes also complicate the work of the courts: evidence submitted in proceedings can turn out to be fake, rendering an entire defense or prosecution strategy useless. Developments like FakeCatcher are therefore important across many sectors, and over time they will take on more complex tasks than moderating video content on social networks. The problem of deepfakes will keep evolving, however, and so will the programs that detect them.