
Empower Your Creativity with ByteDance’s AnimateDiff-Lightning Technology

Introducing AnimateDiff-Lightning, a model designed for lightning-fast text-to-video generation. It generates videos more than ten times faster than the original AnimateDiff. To dive deeper into the subject, read our research paper: AnimateDiff-Lightning: Cross-Model Diffusion Distillation. Our team is excited to release this model as part of our research initiatives.

The models are distilled from AnimateDiff SD1.5 v2. This repository contains checkpoints for 1-step, 2-step, 4-step, and 8-step distilled models; the 2-step, 4-step, and 8-step models offer excellent generation quality, while the 1-step model is provided strictly for research purposes.

Try It Out: Demo

Experience the power of AnimateDiff-Lightning firsthand with our interactive text-to-video generation demo.

Recommended Base Models for Optimal Results

For best results using AnimateDiff-Lightning, we recommend pairing it with stylized base models. Here are some suggestions:

Realistic Models

Anime & Cartoon Models

We encourage you to experiment with different settings. For example, running the 2-step model with 3 inference steps produces excellent results. Certain base models also respond better with classifier-free guidance (CFG). For stronger motion, we recommend Motion LoRAs at strengths of 0.7 to 0.8 to avoid introducing watermarks; a sketch of these settings follows the Diffusers example below.

Implementing AnimateDiff-Lightning: Diffusers Usage

Below is a sample code snippet illustrating how to implement AnimateDiff-Lightning using Diffusers:

import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, EulerDiscreteScheduler
from diffusers.utils import export_to_gif
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

device = "cuda"
dtype = torch.float16

step = 4  # Options: [1,2,4,8]
repo = "ByteDance/AnimateDiff-Lightning"
ckpt = f"animatediff_lightning_{step}step_diffusers.safetensors"
base = "emilianJR/epiCRealism"  # Choose your favorite base model.

# Load the distilled motion-module weights into the motion adapter.
adapter = MotionAdapter().to(device, dtype)
adapter.load_state_dict(load_file(hf_hub_download(repo, ckpt), device=device))

# Build the AnimateDiff pipeline on top of the chosen base model.
pipe = AnimateDiffPipeline.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)

# The distilled checkpoints expect trailing timestep spacing and a linear beta schedule.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")

# Guidance is disabled (guidance_scale=1.0); num_inference_steps must match the chosen checkpoint.
output = pipe(prompt="A girl smiling", guidance_scale=1.0, num_inference_steps=step)
export_to_gif(output.frames[0], "animation.gif")
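
As a variation on the snippet above, here is a minimal sketch of the settings mentioned earlier: loading a Motion LoRA at roughly 0.7–0.8 strength and enabling CFG with a higher guidance scale. It assumes pipe, step, and export_to_gif from the snippet above; the LoRA repository name (guoyww/animatediff-motion-lora-zoom-in), the negative prompt, and the guidance value are illustrative choices rather than official recommendations.

# Continuation of the snippet above (assumes `pipe`, `step`, and `export_to_gif`).
# Load a Motion LoRA for stronger motion; the repo name is an illustrative choice.
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", adapter_name="zoom-in")
pipe.set_adapters(["zoom-in"], adapter_weights=[0.75])  # 0.7-0.8 helps avoid watermarks

# Some base models respond better with CFG: raise guidance_scale above 1.0
# and optionally add a negative prompt (values here are assumptions to tune).
output = pipe(
    prompt="A girl smiling",
    negative_prompt="low quality, blurry",
    guidance_scale=2.0,
    num_inference_steps=step,
)
export_to_gif(output.frames[0], "animation_lora.gif")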

Using AnimateDiff-Lightning with ComfyUI

  1. Download the animatediff_lightning_workflow.json file and import it into ComfyUI.
  2. Install the required nodes (including ComfyUI-AnimateDiff-Evolved), either manually or with ComfyUI-Manager.
  3. Download your preferred base model checkpoint and place it in the /models/checkpoints/ directory.
  4. Download the AnimateDiff-Lightning checkpoint animatediff_lightning_Nstep_comfyui.safetensors and place it in the /custom_nodes/ComfyUI-AnimateDiff-Evolved/models/ directory.
ComfyUI Workflow

Video-to-Video Generation with AnimateDiff-Lightning

AnimateDiff-Lightning excels in video-to-video generation, simplifying the workflow through ComfyUI with ControlNet. Follow these steps:

  1. Download the animatediff_lightning_v2v_openpose_workflow.json file and import it into ComfyUI.
  2. Install the required nodes, either manually or via ComfyUI-Manager.
  3. Download your selected base model checkpoint and save it to /models/checkpoints/.
  4. Download the AnimateDiff-Lightning checkpoint animatediff_lightning_Nstep_comfyui.safetensors and place it in the /custom_nodes/ComfyUI-AnimateDiff-Evolved/models/ directory.
  5. Download the ControlNet OpenPose checkpoint control_v11p_sd15_openpose.pth and move it to /models/controlnet/.
  6. Upload your video and execute the pipeline.

Additional Notes:

  1. Keep videos short and at low resolution. For testing, we used 576×1024 resolution at 30fps for 8 seconds (see the preprocessing sketch at the end of this section).
  2. Adjust frame rates to align with the input video for synchronized audio.
  3. DWPose downloads necessary checkpoints automatically upon the initial run.
  4. If DWPose seems frozen in the UI, check the ComfyUI log and your output folder for processing updates.
ComfyUI OpenPose Workflow
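
Because short, low-resolution clips work best, it can help to preprocess the input video before uploading it. Below is a minimal sketch using OpenCV (an assumption; any video tool works) that trims a clip to roughly 8 seconds and resizes it to 576×1024 while keeping the source frame rate; the file names are placeholders.

import cv2  # assumption: OpenCV (pip install opencv-python)

# Trim the input to ~8 seconds and resize to 576x1024 (width x height),
# keeping the source frame rate so timing stays aligned with the audio.
src = cv2.VideoCapture("input.mp4")          # placeholder input path
fps = src.get(cv2.CAP_PROP_FPS) or 30
size = (576, 1024)
max_frames = int(fps * 8)

writer = cv2.VideoWriter("input_short.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
written = 0
while written < max_frames:
    ok, frame = src.read()
    if not ok:
        break
    writer.write(cv2.resize(frame, size))
    written += 1

src.release()
writer.release()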

Source: original source link

