In 2018, NVIDIA researchers used artificial intelligence to synthesize photos of faces that do not exist, relying on an architecture called a generative adversarial network (GAN). Two neural networks are pitted against each other: network A tries to spot the fakes, while network B tries to generate ever more realistic fakes to fool A. This back-and-forth can continue indefinitely, meaning that given enough time, a GAN can produce a fake that looks more convincingly human than a photo of a real person.
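To make the adversarial setup concrete, here is a minimal training-loop sketch in PyTorch. The tiny fully connected networks, learning rates, and image size are illustrative assumptions only; NVIDIA's actual face-synthesis models are vastly larger.

```python
# A minimal sketch of the adversarial game described above, using PyTorch.
# Model sizes, learning rates, and the flattened-image format are assumptions.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g. flattened 28x28 images

# B: the generator tries to turn random noise into convincing fakes.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# A: the discriminator tries to tell real images from generated ones.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Train A (discriminator): reward it for spotting fakes.
    noise = torch.randn(batch, latent_dim)
    fakes = generator(noise)
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fakes.detach()), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train B (generator): reward it for fooling A into saying "real".
    g_loss = loss_fn(discriminator(fakes), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to `train_step` improves both players a little, which is the "continuing game" the paragraph above describes.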
Since then, AI's ability to generate human images has improved dramatically, but it has also caused real harm. Scammers can use these generated fakes to deceive people, faces can be spliced into pornographic videos without their owners' consent, and the technology can erode trust in online media more broadly. Worse, generated fake photos can be more persuasive than real ones. Although deepfakes can in principle be detected using AI itself, technology companies have struggled to moderate this material effectively, so detection alone does not appear to be a workable answer.
A more fundamental question is whether humans themselves can tell fake photos from real ones. A study published in PNAS suggests the answer is discouraging: people's accuracy at spotting fakes was below chance, and they even rated fake faces as more trustworthy than real ones. As the study's authors write: "Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have reached an astonishing level: they can create faces that are indistinguishable from, and more trustworthy than, real faces."
To test responses to fake faces, the researchers used an updated version of NVIDIA's GAN to generate 400 fake faces, evenly split by gender, with 100 from each of four groups: Black, Caucasian, East Asian, and South Asian. They paired each of these with a real face drawn from the database originally used to train the GAN, choosing matches that a separate neural network judged to be similar.
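The article does not detail how this similarity matching was performed; a plausible sketch is to embed every face with a pretrained feature extractor and pair each synthetic face with its nearest real neighbour. The `embed` helper, the model argument, and the cosine-similarity criterion below are assumptions for illustration, not the paper's published procedure.

```python
# Hedged sketch: pair each synthetic face with its most similar real face,
# where similarity is cosine similarity between embeddings from some
# pretrained network (the `model` callable is a hypothetical stand-in).
import numpy as np

def embed(images, model):
    """Map images to feature vectors with a pretrained network (assumed)."""
    return np.stack([model(img) for img in images])

def match_faces(fake_embs, real_embs):
    """Pair each fake face with the most similar unused real face."""
    # Normalise so dot products become cosine similarities.
    f = fake_embs / np.linalg.norm(fake_embs, axis=1, keepdims=True)
    r = real_embs / np.linalg.norm(real_embs, axis=1, keepdims=True)
    sims = f @ r.T                      # (n_fake, n_real) similarity matrix
    pairs, used = [], set()
    for i in range(sims.shape[0]):
        # Greedily take the closest real face not yet assigned.
        for j in np.argsort(-sims[i]):
            if j not in used:
                pairs.append((i, int(j)))
                used.add(int(j))
                break
    return pairs
```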
They then recruited 315 participants from the Amazon Mechanical Turk crowdsourcing platform. Each was asked to judge 128 faces drawn from the combined data set and decide whether each one was fake. Their final accuracy was only 48%, below the 50% expected from random guessing.
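To see why 48% counts as meaningfully below chance rather than noise, one can pool the judgments and run a binomial test against 50%. The trial count below is inferred from the figures above (315 participants, 128 faces each), and pooling ignores per-participant variation, so this is only a rough sketch of the idea.

```python
# Rough check of "below chance": binomial test of 48% accuracy vs. 50%
# guessing. Trial counts are inferred from the article, not taken from
# the study's data files.
from scipy.stats import binomtest

n_trials = 315 * 128          # judgments pooled across participants
n_correct = round(0.48 * n_trials)

result = binomtest(n_correct, n_trials, p=0.5, alternative='two-sided')
print(f"accuracy = {n_correct / n_trials:.3f}, p = {result.pvalue:.3g}")
```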
Deepfake photos often contain characteristic flaws and artifacts that can help people identify them, so the researchers ran a second experiment with another 219 participants. Before judging the same number of faces, these participants received basic training on what to look for. Their performance improved only slightly, to 59%, a gain of just 11 percentage points.
In a final experiment, the team tested whether intuitive responses to faces could improve accuracy. People often judge hard-to-evaluate things by an instant first impression, and for faces, trustworthiness is one of the first things we assess. Yet when another 223 participants rated 128 faces for trustworthiness, the researchers found that ratings for fake faces were actually about 8% higher than for real faces, a small but statistically significant difference.
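Here is a sketch of the kind of analysis behind a "small but statistically significant" claim: comparing mean trustworthiness ratings for fake versus real faces with a two-sample t-test. The ratings below are simulated placeholders on an assumed 1-to-7 scale, not the study's data; only the analysis pattern is shown.

```python
# Illustrative significance test on simulated trustworthiness ratings.
# The means, spread, and rating scale are placeholder assumptions chosen
# to produce roughly the ~8% gap reported above.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
real_ratings = rng.normal(4.48, 0.6, size=128)   # simulated per-face means
fake_ratings = rng.normal(4.82, 0.6, size=128)   # ~8% higher on average

stat, pvalue = ttest_ind(fake_ratings, real_ratings)
lift = fake_ratings.mean() / real_ratings.mean() - 1
print(f"fake faces rated {lift:.1%} higher, t = {stat:.2f}, p = {pvalue:.3g}")
```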
The researchers suggest that fake faces look more credible because they tend to resemble average faces, and people are inclined to trust average, harmless-looking faces. This pattern held at the extremes: the four least trustworthy faces were all real, while the three most trustworthy faces were all synthetic.
The study argues that those developing the underlying technology behind deepfakes need to think seriously about what they are doing and ask whether the benefits of the technology outweigh the risks it brings. The field should also consider building in safeguards, such as having synthesis tools watermark their output photos. As the authors put it: because misuse of this powerful technology poses a serious threat to people's lives, we should reconsider the practice of releasing deepfake code openly and without restriction for anyone to incorporate into any program, and ask whether some limits should be added. Unfortunately, it may already be too late: openly available models can produce highly realistic deepfake photos, and we are unlikely to get that genie back in the bottle.
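As one concrete illustration of the watermarking idea, a synthesis tool could stamp every output image before release. The sketch below uses Pillow with an arbitrary label and placement; real provenance schemes, such as robust invisible watermarks, are considerably more involved.

```python
# Illustrative visible watermark applied to a generated image with Pillow.
# The label text, position, and opacity are arbitrary choices for the sketch.
from PIL import Image, ImageDraw

def watermark(image: Image.Image, label: str = "AI-GENERATED") -> Image.Image:
    """Return a copy of `image` with a semi-transparent label in the corner."""
    marked = image.convert("RGBA")
    overlay = Image.new("RGBA", marked.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Draw the label near the bottom-left corner at 50% opacity.
    draw.text((10, marked.height - 20), label, fill=(255, 255, 255, 128))
    return Image.alpha_composite(marked, overlay).convert("RGB")

# Example with a placeholder image standing in for a generated face.
out = watermark(Image.new("RGB", (256, 256), "gray"))
out.save("face_watermarked.png")
```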
Original link: /QQ_43529978/article/details/123109543