Deepfakes' challenge to trust and truth – EPFL Research

Will deepfakes become the most powerful tool of misinformation ever seen? Can we mitigate, or govern, against the coming onslaught of synthetic media?

Our research focuses on the risks that deepfakes create. We highlight risks at three levels: the individual, the organizational and the societal. In each case, knowing how to respond means investigating which risks arise and who bears them. And it's important to note that these risks don't necessarily involve malicious intent. Typically, if an individual or an organization faces a deepfake risk, it's because they've been targeted in some way – for example, non-consensual pornography at the individual level, or fraud against an organization. But at the societal level, one of the things our research highlights is that the potential harm from deepfakes is not necessarily intentional: the growing prevalence of synthetic media can erode fundamental social values like trust and truth.

Source & full article: EPFL
