Facebook to ban deepfakes before 2020 election
Security
CIO Bulletin
2020-01-07
Facebook has had its fair share of political scandals, and ahead of the 2020 election the company plans to ban deepfakes and other misleadingly manipulated videos. According to Facebook, the move is intended to stop the spread of misinformation on the world's largest social network.
The new policy does not ban all manipulated videos, however; the guidelines still allow certain viral clips, such as the edited video of House Speaker Nancy Pelosi, to remain on the network.
Deepfake technology uses artificial intelligence to make it appear that politicians, celebrities and other influential people said or did things they never did, a serious problem for social media platforms trying to keep misinformation off their services. Several such videos of major figures appeared last year, including ones of Facebook CEO Mark Zuckerberg and former president Barack Obama.
Facebook's latest policy bans videos that have been edited or synthesized using techniques such as AI in ways that are not easy to identify as fake. The ban does not extend to videos edited for satire or parody, according to the policy.
Earlier, in September, Facebook teamed up with Microsoft, the Partnership on AI and academics from about six colleges to launch a challenge aimed at improving the detection of deepfakes. The effort came after the US intelligence community's 2019 Worldwide Threat Assessment warned that adversaries would likely use deepfakes to influence people in the US, raising security concerns.