DreamGaussian

This repository contains the official implementation for DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation.

(Demo video: accelerate.mp4)

News

  • Image-to-3D: Open In Colab
  • Text-to-3D: Open In Colab
  • Run Gradio demo on Colab: Open In Colab

Install

pip install -r requirements.txt

# a modified gaussian splatting (+ depth, alpha rendering)
git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
pip install ./diff-gaussian-rasterization

# simple-knn
pip install ./simple-knn

# nvdiffrast
pip install git+https://github.com/NVlabs/nvdiffrast/

# kiuikit
pip install git+https://github.com/ashawkey/kiuikit

# To use MVDream, also install:
pip install git+https://github.com/bytedance/MVDream

# To use ImageDream, also install:
pip install git+https://github.com/bytedance/ImageDream/#subdirectory=extern/ImageDream

Tested on:

  • Ubuntu 22 with torch 1.12 & CUDA 11.6 on a V100.
  • Windows 10 with torch 2.1 & CUDA 12.1 on a 3070.
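
A quick way to confirm that the CUDA extensions built correctly is to import them from Python. This is a minimal sanity check; the module names below are the usual ones for these packages and are assumed here rather than documented by this repo:

# sanity_check.py: import the compiled extensions (module names assumed)
import torch
import diff_gaussian_rasterization        # modified rasterizer (+ depth, alpha)
from simple_knn._C import distCUDA2       # compiled KNN extension (import path assumed)
import nvdiffrast.torch as dr             # differentiable rasterizer for the mesh stage
import kiui                               # kiuikit utilities (the `kire` viewer)

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())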

Usage

Image-to-3D:

### preprocess
# background removal and recentering, save rgba at 256x256
python process.py data/name.jpg

# save at a larger resolution
python process.py data/name.jpg --size 512

# process all jpg images under a dir
python process.py data
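
For reference, the preprocessing can be approximated with rembg and Pillow. This is a simplified sketch of what process.py does (the recentering logic is omitted), not the script itself:

# preprocess_sketch.py: rough stand-in for process.py (assumes rembg + Pillow)
from PIL import Image
from rembg import remove

img = Image.open("data/name.jpg").convert("RGB")
rgba = remove(img)                 # background removal, returns an RGBA image
rgba = rgba.resize((256, 256))     # process.py also recenters the object; omitted here
rgba.save("data/name_rgba.png")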

### training gaussian stage
# train 500 iters (~1min) and export ckpt & coarse_mesh to logs
python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name

# gui mode (supports visualizing training)
python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name gui=True

# load and visualize a saved ckpt
python main.py --config configs/image.yaml load=logs/name_model.ply gui=True

# use an estimated elevation angle if the image is not front-view (e.g., a typical looking-down image can use -30)
python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name elevation=-30
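
The key=value arguments after --config are overrides merged on top of the YAML file. A minimal sketch of that pattern, assuming an OmegaConf-style merge (the repo's exact loading code may differ):

# config_merge_sketch.py: how key=value overrides typically merge into a YAML config
from omegaconf import OmegaConf

base = OmegaConf.load("configs/image.yaml")                                   # defaults from the config file
cli = OmegaConf.from_dotlist(["input=data/name_rgba.png", "elevation=-30"])   # the key=value pairs
opt = OmegaConf.merge(base, cli)                                              # overrides win over the YAML
print(opt.input, opt.elevation)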

### training mesh stage
# auto load coarse_mesh and refine 50 iters (~1min), export fine_mesh to logs
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name

# specify the coarse mesh path explicitly
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name mesh=logs/name_mesh.obj

# gui mode
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name gui=True

# export glb instead of obj
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name mesh_format=glb
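
The exported mesh is a regular textured obj/glb, so it can be inspected with any mesh library. A quick check using trimesh (trimesh is an assumption here; any obj/glb loader works):

# inspect_mesh.py: quick look at the exported fine mesh
import trimesh

mesh = trimesh.load("logs/name.obj", force="mesh")
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
print("watertight:", mesh.is_watertight)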

### visualization
# gui for visualizing mesh
# `kire` is short for `python -m kiui.render`
kire logs/name.obj

# save 360 degree video of mesh (can run without gui)
kire logs/name.obj --save_video name.mp4 --wogui

# save 8 view images of mesh (can run without gui)
kire logs/name.obj --save images/name/ --wogui

### evaluation of CLIP-similarity
python -m kiui.cli.clip_sim data/name_rgba.png logs/name.obj
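
The CLIP-similarity score is the cosine similarity between CLIP image embeddings of the reference image and rendered views of the mesh. Below is a minimal sketch using the transformers CLIP model; the backbone and the rendered-view filename are assumptions, and kiui.cli.clip_sim may differ in details:

# clip_sim_sketch.py: cosine similarity of CLIP image embeddings
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# "render_view.png" is a placeholder for one of the saved views (e.g., from `kire --save`)
imgs = [Image.open(p).convert("RGB") for p in ["data/name_rgba.png", "render_view.png"]]
inputs = processor(images=imgs, return_tensors="pt")
with torch.no_grad():
    feats = model.get_image_features(**inputs)
feats = feats / feats.norm(dim=-1, keepdim=True)
print("CLIP similarity:", (feats[0] @ feats[1]).item())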

Please check ./configs/image.yaml for more options.

Image-to-3D (stable-zero123):

### training gaussian stage
python main.py --config configs/image_sai.yaml input=data/name_rgba.png save_path=name

### training mesh stage
python main2.py --config configs/image_sai.yaml input=data/name_rgba.png save_path=name

Text-to-3D:

### training gaussian stage
python main.py --config configs/text.yaml prompt="a photo of an icecream" save_path=icecream

### training mesh stage
python main2.py --config configs/text.yaml prompt="a photo of an icecream" save_path=icecream

Please check ./configs/text.yaml for more options.

Text-to-3D (MVDream):

### training gaussian stage
python main.py --config configs/text_mv.yaml prompt="a plush toy of a corgi nurse" save_path=corgi_nurse

### training mesh stage
python main2.py --config configs/text_mv.yaml prompt="a plush toy of a corgi nurse" save_path=corgi_nurse

Please check ./configs/text_mv.yaml for more options.

Image+Text-to-3D (ImageDream):

### training gaussian stage
python main.py --config configs/imagedream.yaml input=data/ghost_rgba.png prompt="a ghost eating hamburger" save_path=ghost

### training mesh stage
python main2.py --config configs/imagedream.yaml input=data/ghost_rgba.png prompt="a ghost eating hamburger" save_path=ghost

Helper scripts:

# run all image samples (*_rgba.png) in ./data
python scripts/runall.py --dir ./data --gpu 0

# run all text samples (hardcoded in runall_sd.py)
python scripts/runall_sd.py --gpu 0

# export all ./logs/*.obj to mp4 in ./videos
python scripts/convert_obj_to_video.py --dir ./logs
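
The last script essentially loops over the obj files and calls the renderer; a rough equivalent using the documented kire flags (the real script may differ):

# batch_video_sketch.py: rough equivalent of scripts/convert_obj_to_video.py
import glob, os, subprocess

os.makedirs("videos", exist_ok=True)
for obj in glob.glob("logs/*.obj"):
    name = os.path.splitext(os.path.basename(obj))[0]
    # same flags as the single-file kire example above
    subprocess.run(["kire", obj, "--save_video", f"videos/{name}.mp4", "--wogui"], check=True)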

Gradio Demo:

python gradio_app.py

Tips

  • The world & camera coordinate system is the same as OpenGL (see the camera-position sketch at the end of this section):
    World            Camera        
  
     +y              up  target                                              
     |               |  /                                            
     |               | /                                                
     |______+x       |/______right                                      
    /                /         
   /                /          
  /                /           
 +z               forward           

elevation: in (-90, 90), from +y to -y is (-90, 90)
azimuth: in (-180, 180), from +z to +x is (0, 90)
  • Troubleshooting OpenGL errors (e.g., [F glutil.cpp:338] eglInitialize() failed):
# either try to install OpenGL correctly (usually installed with the Nvidia driver), or use force_cuda_rast:
python main.py --config configs/image_sai.yaml input=data/name_rgba.png save_path=name force_cuda_rast=True

kire mesh.obj --force_cuda_rast
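
Under the angle convention above, a camera orbiting the origin can be placed from (elevation, azimuth, radius) as follows. This only illustrates the stated convention; the repository's own camera utilities may differ in details:

# camera_pose_sketch.py: elevation/azimuth to a camera position in the OpenGL world frame
import numpy as np

def orbit_position(elevation_deg, azimuth_deg, radius=2.0):
    elev, azim = np.deg2rad(elevation_deg), np.deg2rad(azimuth_deg)
    x = radius * np.cos(elev) * np.sin(azim)   # azimuth 0 -> +z, 90 -> +x
    y = -radius * np.sin(elev)                 # elevation -90 -> +y (top-down), +90 -> -y
    z = radius * np.cos(elev) * np.cos(azim)
    return np.array([x, y, z], dtype=np.float32)

print(orbit_position(-30, 0))  # the looking-down viewpoint used with elevation=-30 above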

Acknowledgement

This work is built on many amazing research works and open-source projects, thanks a lot to all the authors for sharing!

Citation

@article{tang2023dreamgaussian,
  title={DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation},
  author={Tang, Jiaxiang and Ren, Jiawei and Zhou, Hang and Liu, Ziwei and Zeng, Gang},
  journal={arXiv preprint arXiv:2309.16653},
  year={2023}
}
