Google Releases Creepy Deepfake Dataset to Help Developers Create Detection Methods

Deepfakes are a growing concern for society. This year, a deepfaked voice of a CEO was reportedly used to steal $243,000 from a company.

The technology allows for uncanny, lifelike clips of politicians saying things they never said. It isn't hard to imagine how such clips could be used to mislead or unsettle the public.

That's why Google has released a trove of deepfake videos to help researchers come up with detection methods.


Harmful to society

As Google points out in a new blog post, "while many [deepfake videos] are likely intended to be humorous, others could be harmful to individuals and society."

"Google considers these issues seriously. As we published in our AI Principles last year, we are committed to developing AI best practices to mitigate the potential for harm and abuse," the post continues.

Last year, the company released a dataset of synthetic speech for an international challenge that asked developers to devise detectors capable of catching deepfake audio clips.

This time, in collaboration with Jigsaw, Google has announced the release of a large dataset of deepfake videos. The dataset, the company says in its blog post, has been incorporated into the new FaceForensics benchmark from the Technical University of Munich and the University Federico II of Naples, an initiative that Google co-sponsors.

Which one's real?

The new data was created with the help of consenting actors. Over the past year, Google paid the actors to record hundreds of videos. As Google says, "the resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts."

A few examples of the videos side by side, where it's hard to tell which one is real, can be seen below.

The incorporation of the data into the FaceForensics video benchmark was carried out with the help of leading researchers, including Prof. Matthias Niessner, Prof. Luisa Verdoliva, and the FaceForensics team.

The deepfake video data is free to download on the FaceForensics GitHub page.
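For researchers who download the clips, the first step toward building a detector is turning the raw files into labeled training examples. Below is a minimal sketch of that step; the `real/` and `fake/` subfolder layout is an assumption for illustration, as the actual FaceForensics download is organized differently (see the repository's own documentation).

```python
from pathlib import Path

def label_videos(root):
    """Return a list of (video_path, label) pairs for detector training.

    Label 0 marks real clips and 1 marks deepfakes. NOTE: the
    'real/' and 'fake/' subfolder layout is a hypothetical example,
    not the official FaceForensics directory structure.
    """
    samples = []
    for label, subdir in ((0, "real"), (1, "fake")):
        # Sort for a deterministic ordering across runs.
        for path in sorted(Path(root, subdir).glob("*.mp4")):
            samples.append((str(path), label))
    return samples
```

A list like this can then be split into train/validation sets and fed to whatever frame-level or video-level classifier a team is experimenting with.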
