Will deepfakes become the most powerful tool of misinformation ever seen? Can we mitigate, or govern, against the coming onslaught of synthetic media?
Our research focuses on the risks that deepfakes create. We highlight risks at three levels: the individual, the organizational, and the societal. In each case, knowing how to respond means investigating the risks more closely: risks of what, and to whom. And it's important to note that these risks don't necessarily involve malicious intent. Typically, if an individual or an organization faces a deepfake risk, it's because they've been targeted in some way – for example, non-consensual pornography at the individual level, or fraud against an organization. But at the societal level, one of the things our research highlights is that the potential harm from deepfakes is not necessarily intentional: the growing prevalence of synthetic media can stoke concerns about fundamental social values like trust and truth.
“Echo chambers are not a new thing, but the real replication of these memes happens online. Until now, a lot of polarization research has been around content production, because this is what we can easily measure. This population of Firefox users consented to share their data – it’s like some people donate blood for the common good; here, people donated their data for the common good,” said Assistant Professor Robert West, Head of the Data Science Lab and the study’s lead author.
While previous, smaller studies measured engagement in various ways, this new study was uniquely conducted in vivo, with users going about their normal daily lives, allowing researchers to follow people in their natural habitat. With access to browsing history, and unlike earlier studies, this research measured the time users spent on particular websites and articles, rather than simply whether a user had visited a site or not.
This additional data provided new evidence of a greater degree of polarization than observed in the prior literature: people engaged far more deeply with articles matching their political persuasion, spending more time on news sources aligned with their partisan beliefs than on other information sources. […]
The value of trust in (dis)information – investi(gati)ng in trustworthy online news
4.66 billion people use social media. Social media has emerged as a revolutionary and powerful means of communication for spreading (dis)information. What are the benefits, the threats, the risks, the opportunities? How do we come to trust online information? Is investi(gati)ng in online news company stocks risky or trustworthy? The advent and ease of online publishing make it harder to evaluate and trust online (dis)information.
The objective of this presentation was to describe new challenges in terms of intelligence, governance, control, accountability, ethics, and reputation surrounding social media (dis)information, and to explain the impact on stakeholders. Drawing on a recent Swiss case that generated disruptive reactions from the Swiss population, as well as a more global case, the session showed how harmful narratives and financial market manipulation can cause tremendous losses of both trust and money. In parallel, the session highlighted the importance of online threat intelligence and cooperation between the private and public sectors. It was also designed to be interactive, inviting everyone to share their thoughts and concerns.
The webinar was presented by Paul Wang and facilitated by Christopher H. Cordey, futuratinow & digiVolution
CEO and co-founder of ZeNPulsar, Paul is an entrepreneur, seasoned speaker, forensic examiner, blockchain enthusiast, senior executive, cybersecurity advocate, and a university lecturer in Economic Crime Investigation. Also Head of Corporate Governance Insight at Geneva Macro Labs, Paul sits on the advisory boards of several companies. ZeNPulsar responds to the integrity threats posed by online disinformation campaigns and harmful narratives. A RegTech, LegalTech, and FinTech specialist with over 20 years of experience at Big 4 companies, he performs engagements in the fields of cybersecurity, fraud investigation, litigation support, data privacy, and risk & compliance.
Previously, Paul was a Partner and regional Head of Fraud Investigation & Dispute Services at a Big 4 firm, as well as Head of Forensic Technology & Discovery Services. He has cooperated with global supervisory and regulatory authorities, compliance officers, legal counsel, and law enforcement, and also acts as a mediator in dispute cases. He holds a Master's degree in Computer Science from EPFL, the CISA, CISSP, CISM, and CRISC certifications, and an MIT Sloan School of Management certificate in Blockchain Technologies, and is an active member of (ISC)² and ISACA.