aditya-gupta-dev/Image-Colorization

Image-Colorization

A Python application to convert old or grayscale images into color using OpenCV and deep learning.

Technical Part

  • The technique covered here comes from Zhang et al.'s 2016 ECCV paper, Colorful Image Colorization, developed at the University of California, Berkeley by Richard Zhang, Phillip Isola, and Alexei A. Efros.

  • Previous approaches to black and white image colorization relied on manual human annotation and often produced desaturated results that were not “believable” as true colorizations.

  • Zhang et al. decided to attack the problem of image colorization by using Convolutional Neural Networks to “hallucinate” what an input grayscale image would look like when colorized.

  • To train the network Zhang et al. started with the ImageNet dataset and converted all images from the RGB color space to the Lab color space.

  • Like the RGB color space, the Lab color space has three channels, but it encodes color information differently:

    • The L channel encodes lightness (intensity) only.
    • The a channel encodes the green-red axis.
    • The b channel encodes the blue-yellow axis.
  • As explained in the original paper, the authors embraced the underlying uncertainty of the problem by posing it as a classification task, using class rebalancing at training time to increase the diversity of colors in the result. The approach is implemented as a feed-forward pass in a Convolutional Neural Network (CNN) at test time and is trained on over a million color images.

  • The color photos were decomposed using the Lab model: the L channel is used as the input feature and the a and b channels as the classification labels, as shown in the diagram below.

  • The trained model (available publicly in the models folder of this repo, or by downloading it here) can then colorize a new B&W photo: the photo serves as the model's input, the L component. The model outputs the other two components, a and b, which, once merged with the original L, yield a fully colorized image.
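The classification-with-rebalancing idea above can be illustrated in plain NumPy. This is a sketch under stated assumptions: the paper quantizes the in-gamut ab plane into 313 bins of grid size 10 and computes the color statistics over ImageNet, while here a coarse 10x10 grid and randomly generated ab samples stand in; the rebalancing weight w proportional to ((1 - lam) * p + lam / Q) ** -1 follows the paper's formulation:

```python
import numpy as np

# Coarse 10 x 10 grid over the ab range [-110, 110] (the paper uses 313
# in-gamut bins of grid size 10; this keeps the sketch small).
BINS = 10
Q = BINS * BINS

# Fake ab samples in place of the real ImageNet color statistics.
ab = np.random.default_rng(0).normal(0.0, 25.0, size=(1000, 2))
idx = np.clip(((ab + 110) // 22).astype(int), 0, BINS - 1)
flat = idx[:, 0] * BINS + idx[:, 1]

# Empirical class distribution over the bins.
p = np.bincount(flat, minlength=Q) / len(ab)

# Rebalancing weights: mix the empirical distribution with a uniform one,
# invert, and normalize so the expected weight under p is 1.
# Rare colors (small p) receive the largest weights.
lam = 0.5
w = 1.0 / ((1.0 - lam) * p + lam / Q)
w /= (p * w).sum()
```

During training, each pixel's loss is scaled by the weight of its ground-truth ab bin, which is what pushes the network toward rarer, more saturated colors.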

The entire (simplified) process can be summarized as:

  • Convert all training images from the RGB color space to the Lab color space.
  • Use the L channel as the input to the network and train the network to predict the ab channels.
  • Combine the input L channel with the predicted ab channels.
  • Convert the Lab image back to RGB.
