Deep Halftoning

Deep context-aware descreening and rescreening of halftone images: a fully automatic method for descreening halftone images, based on convolutional neural networks with end-to-end learning.

Figure: Halftoning and rescreening example

This project implements an automated descreening process. Descreening is the task of reconstructing the original continuous-tone image from its halftoned version (halftoning is a mandatory step whenever images interact with printers, scanners, monitors, etc.) while minimizing data loss. For more information, please see the original paper.

  • The first and only fully open-source implementation of this paper, written in PyTorch
  • The implementation is divided into the following separate components:
    • CoarseNet: a modified U-Net architecture that acts as a low-pass filter to remove halftone patterns
    • DetailsNet: a deep CNN generator and two discriminators, trained simultaneously, that improve image quality by adding fine details (see the pipeline sketch after this list)
    • EdgeNet: a simple CNN that extracts Canny edge features to help preserve details
    • ObjectNet: a modified version of the "Pyramid Scene Parsing Network" that returns only the 25 major segmentation classes out of the original 150
    • Halftoning Algorithms: implementations of several halftoning algorithms from recent digital color halftoning books, used to generate ground truth (see the error-diffusion sketch below)
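
As a rough illustration of how the components above might fit together at inference time, here is a minimal PyTorch sketch. The module names (`CoarseNet`, `EdgeNet`, `DetailsNet`), their forward signatures, and the residual composition of their outputs are assumptions made for illustration, not the repository's actual API.

```python
import torch

# Hypothetical composition of the networks described above; the constructors
# and forward signatures are assumed, not taken from the repository.

def descreen(halftone: torch.Tensor,
             coarse_net: torch.nn.Module,
             edge_net: torch.nn.Module,
             details_net: torch.nn.Module) -> torch.Tensor:
    """Descreen a batch of halftone images shaped (N, C, H, W) in [0, 1]."""
    with torch.no_grad():
        # Low-pass stage: suppress the halftone screen pattern.
        coarse = coarse_net(halftone)
        # Edge stage: recover edge information lost by the low-pass filter.
        edges = edge_net(halftone)
        # Detail stage: the generator adds high-frequency details back,
        # conditioned on the coarse image and the edge map.
        details = details_net(torch.cat([coarse, edges], dim=1))
    return (coarse + details).clamp(0.0, 1.0)
```

The split mirrors the description above: a low-pass network first removes the screen, and a detail generator then restores the high-frequency content it inevitably smooths away.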
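On the ground-truth side, classic error diffusion can be written in a few lines. The sketch below uses Floyd-Steinberg dithering on a grayscale NumPy array purely to illustrate the kind of halftone pattern the networks learn to remove; it is not necessarily one of the algorithms shipped in this repository.

```python
import numpy as np

def floyd_steinberg(gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image in [0, 1] using Floyd-Steinberg error diffusion."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # Distribute the quantization error to the unprocessed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```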

You can find the implementation on GitHub.