In this work, we achieve highly controllable multi-modal image colorization. Our method is built on Stable Diffusion with
new designs such as stroke control and a content-guided deformable autoencoder.
These designs enable realistic outputs and support a wide range of applications,
including text-, stroke-, and exemplar-based colorization, hybrid-control colorization,
recolorization, iterative editing, and the pioneering implementation of local region
colorization. Our approach produces a more diverse range of colors and offers high user interactivity.
Applications
Unconditional Colorization
Produce diverse colorizations without any user-provided conditions.
Multi-control Colorization
Colorize images according to user-provided controls (text prompts, strokes, and exemplars).
Iterative Editing
Our method allows users to edit images freely and iteratively.
Region Colorization
Our method supports colorization of specific regions within an image.