
Create a deepfake video by just uploading the original video and specifying the text the character will read


nsourlos/end-to-end_deepfake_colab


End-to-End DeepFake Video Generation


Overview

This Colab notebook provides a step-by-step guide to generate a deepfake video by cloning a voice onto a video. The process involves uploading video and voice files, renaming them, extracting audio, creating audio chunks, and finally using Wav2Lip for deepfake generation.

Open In Colab

Steps

Before executing this notebook, you need a folder in your Google Drive named deepfake containing at least a video file (mp4 format). It is strongly recommended to also include an audio file (mp3 format) to clone the voice from. For videos in a language other than English in particular, it is essential to upload an English audio file as well.

Caution: The text prompt should be split with '|' every one to two sentences (roughly every ~20 seconds of reading time), e.g. 'This is the first sentence. Here is the second. | Now the third sentence begins.' If a warning suggests restarting the session after a library is installed (e.g. librosa, as shown in the figure below), click 'Cancel'. On the free tier (T4 or V100 with 15GB VRAM and ~13GB RAM) the maximum audio/video duration is ~50 seconds, and the script takes ~30 minutes to run and produce results. A longer text prompt needs a larger GPU on the paid tier: an L4 with 22.5GB VRAM and ~63GB RAM, or an A100 with 40GB VRAM and ~84GB RAM (the latter consumes more compute units per hour).

Figure: Colab warning dialog suggesting a session restart after a library install.

1. Upload Video and Voice Files

  • Mount Google Drive to access files.
  • Change directory to the specified path.
from google.colab import drive

# Mount Google Drive so the notebook can access the deepfake folder
drive.mount('/content/gdrive')

# Change into the project folder (Colab line magic)
%cd gdrive/MyDrive/deepfake

2. Set Base Path

Specify the base path for video and audio files.

base_path='/content/gdrive/MyDrive/deepfake'

3. Install Dependencies

Install the TTS, pydub, and moviepy libraries.

!pip install -q pydub==0.25.1 TTS==0.22.0 moviepy==1.0.3

4. Set Text to Read

Set the English text that will be read with the cloned voice.

text_to_read = "Joining two modalities results in a surprising increase in generalization! \
 What would happen if we combined them all?"

5. Rename Audio and Video Files

Rename the uploaded audio and video files to input_voice.mp3 and video_full.mp4, respectively.
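
A minimal sketch of this step, assuming the deepfake folder contains exactly one uploaded .mp3 and one .mp4 (the discovery logic here is illustrative, not the notebook's exact code):

import os

# Rename the uploaded files to the fixed names the later steps expect
# (assumes one .mp3 and one .mp4 were uploaded to the deepfake folder)
for f in os.listdir(base_path):
    if f.endswith('.mp3') and f != 'input_voice.mp3':
        os.rename(os.path.join(base_path, f), os.path.join(base_path, 'input_voice.mp3'))
    elif f.endswith('.mp4') and f != 'video_full.mp4':
        os.rename(os.path.join(base_path, f), os.path.join(base_path, 'video_full.mp4'))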

6. Extract Audio from Video (if needed)

If only a video is provided, extract its audio track to use for voice cloning.
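
This can be done with the moviepy library installed in step 3; a sketch, assuming the naming from step 5 (saving the extracted track as input_voice.mp3 is an assumption):

import os
from moviepy.editor import VideoFileClip

# Extract the audio track from the video and save it as an mp3
clip = VideoFileClip(os.path.join(base_path, 'video_full.mp4'))
clip.audio.write_audiofile(os.path.join(base_path, 'input_voice.mp3'))
clip.close()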

7. Create Audio Chunks

Create a folder with 10-second chunks of audio to be used as input to Tortoise.
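
A sketch using pydub (the folder name audio_chunks and the .wav output format are assumptions; Tortoise voice cloning typically takes short reference clips of the target speaker):

import os
from pydub import AudioSegment

audio = AudioSegment.from_file(os.path.join(base_path, 'input_voice.mp3'))
chunk_ms = 10 * 1000  # 10-second chunks, sliced in milliseconds

os.makedirs(os.path.join(base_path, 'audio_chunks'), exist_ok=True)
for i, start in enumerate(range(0, len(audio), chunk_ms)):
    audio[start:start + chunk_ms].export(
        os.path.join(base_path, 'audio_chunks', f'chunk_{i}.wav'), format='wav')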

8. Confirm Audio and Video Duration

Ensure audio and video have the same duration. If not, trim the longer one to match the shorter one (or cut them both to 20 seconds).
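
A sketch of the duration check with moviepy, trimming both streams to the shorter duration (the output filenames are illustrative):

import os
from moviepy.editor import VideoFileClip, AudioFileClip

video = VideoFileClip(os.path.join(base_path, 'video_full.mp4'))
audio = AudioFileClip(os.path.join(base_path, 'input_voice.mp3'))

# Trim both streams to the shorter of the two durations
target = min(video.duration, audio.duration)
video.subclip(0, target).write_videofile(os.path.join(base_path, 'video_trimmed.mp4'))
audio.subclip(0, target).write_audiofile(os.path.join(base_path, 'voice_trimmed.mp3'))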

9. Clone Wav2Lip Repository and Download Models

Clone Wav2Lip GitHub repository, download pre-trained models, and install dependencies.
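
In a Colab cell this typically looks like the following; the face-detection weights URL comes from the Wav2Lip README, while the lip-sync checkpoint itself (e.g. wav2lip_gan.pth) must be fetched separately from the links in that README and placed in Wav2Lip/checkpoints/:

!git clone https://github.com/Rudrabha/Wav2Lip.git
%cd Wav2Lip
!pip install -r requirements.txt

# Face-detection model used by Wav2Lip's preprocessing
!wget "https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth" -O "face_detection/detection/sfd/s3fd.pth"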

10. Generate Deepfake

Run the Wav2Lip inference script to generate the deepfake video.
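
A typical invocation of Wav2Lip's inference.py, run from inside the cloned repository; the --checkpoint_path, --face, --audio, and --outfile flags are from the upstream repository, while the cloned-voice filename cloned_voice.wav and the output path are illustrative:

!python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \
  --face /content/gdrive/MyDrive/deepfake/video_full.mp4 \
  --audio /content/gdrive/MyDrive/deepfake/cloned_voice.wav \
  --outfile /content/gdrive/MyDrive/deepfake/deepfake_result.mp4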

11. Cleanup

Remove temporary files and folders.
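
A sketch of the cleanup, assuming the intermediate artifacts named in the steps above (the exact set of files the notebook removes may differ):

import os
import shutil

# Remove the audio chunks and the cloned Wav2Lip repository from Drive
shutil.rmtree(os.path.join(base_path, 'audio_chunks'), ignore_errors=True)
shutil.rmtree(os.path.join(base_path, 'Wav2Lip'), ignore_errors=True)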
