A “fakeness score” could help people identify AI generated content

New deepfake detection tool helps crack down on fake content

The free-to-use tool assigns a “deepfake score” to help users spot AI-generated video and audio and mitigate the impact of fake content

Deepfake technology uses artificial intelligence to create realistic but entirely fabricated images, videos, and audio. The manipulated media often imitates famous individuals or ordinary people for fraudulent purposes, including financial scams, political disinformation, and identity theft.

To combat the rise in such scams, security firm CloudSEK has launched a new Deep Fake Detection Technology, designed to counter the threat of deepfakes by giving users a way to identify manipulated content.

CloudSEK’s detection tool aims to help organizations identify deepfake content and prevent potential damage to their operations and credibility. It assesses the authenticity of video frames, focusing on facial features and movement inconsistencies that can indicate tampering, such as unnatural transitions in facial expressions and unusual textures on faces and in the background.

The rise of deepfakes, and a possible solution

Audio analysis is also used, where the tool detects synthetic speech patterns that signal the presence of artificially generated voices. The system also transcribes audio and summarizes key points, allowing users to quickly assess the credibility of the content they are reviewing. The final result is an overall “Fakeness Score,” which indicates the likelihood that the content has been artificially altered.

This score helps users understand the level of potential manipulation, offering insights into whether the content is AI-generated, mixed with deepfake elements, or likely human-generated.

A Fakeness Score of 70% or above indicates AI-generated content; 40% to 70% is dubious, possibly a mix of original and deepfake elements; and 40% or below is likely human-generated.
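The three bands above can be expressed as a simple threshold function. This is an illustrative sketch, not CloudSEK's actual code; the function name and label strings are assumptions, with only the percentage cut-offs taken from the article.

```python
def classify_fakeness(score: float) -> str:
    """Map a Fakeness Score (0-100) to the article's three bands.

    Hypothetical helper for illustration; only the thresholds
    (70 and 40) come from the reported scoring scheme.
    """
    if score >= 70:
        return "likely AI-generated"
    if score > 40:
        return "dubious (possible mix of original and deepfake elements)"
    return "likely human-generated"
```

Note that a score of exactly 40 falls into the “likely human-generated” band, since the article places “40% and below” there.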

In the finance sector, deepfakes are being used for fraudulent activities like manipulating stock prices or tricking customers with fake video-based KYC processes.


The healthcare sector has also been affected, with deepfakes being used to create false medical records or impersonate doctors, while government entities face threats from election-related deepfakes or falsified evidence.

Similarly, media and IT sectors are equally vulnerable, with deepfakes being used to create fake news or damage brand reputations.

“Our mission to predict and prevent cyber threats extends beyond corporations. That’s why we’ve decided to release the Deepfakes Analyzer to the community,” said Bofin Babu, Co-Founder, CloudSEK.

Original Author: Efosa Udinmwen | Source: TechRadar

About

Shark’s Data Den provides data-driven insights and analysis on technology, business, and innovation.

