Note (General): Type: Thesis
The adage "A picture is worth a thousand words" attests to the effectiveness of images and video in conveying information. The Internet has become a powerful medium in this digital era precisely because we can share image and video media with people worldwide, and it would be even more compelling if that media could show exactly what we see in real life with our own eyes. Unfortunately, due to natural causes (e.g., shooting devices and environments) or artificial causes (e.g., image/video compression that sacrifices information for better transmission), image and video media do not always reach the visual quality a viewer expects (the ground truth), degrading the user's experience of the information. The loss in an image relative to its ground truth is called degradation, and the act of undoing degradation is called restoration. Although many advanced techniques have been proposed to restore degraded images and videos, real-world degradation remains unsolved. This thesis therefore dives into and addresses specific types of real-world degradation: (1) artificial degradation in image/video compression and (2) natural degradation in smartphone photo scanning.

Regarding (1), we leverage deep learning to remove compression degradation and to recover the additional information we deliberately discard to reduce compression complexity. Concretely, we sacrifice numerous pixels through down-sampling and remove color information, creating a new challenge: compensating for the information massively lost through down-sampling, color removal, and compression. Adopting advanced computer-vision techniques, we propose a dedicated deep neural network, the restoration-reconstruction deep neural network (RR-DnCNN), to solve super-resolution under compression degradation. We further introduce a scheme that compensates for color information with Color Learning and enhances image quality with Deep Motion Compensation for P-frame coding.
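The encoder-side pre-processing described above (down-sampling plus color removal before the codec runs) can be illustrated with a minimal sketch. This is not the thesis's implementation; the function name, the nearest-neighbour striding, and the BT.601 luma weights are illustrative assumptions standing in for whatever down-sampling and color-removal steps the actual pipeline uses:

```python
import numpy as np

def encoder_preprocess(rgb, scale=2):
    """Hypothetical encoder-side step: down-sample and drop chroma.

    rgb: H x W x 3 float array in [0, 1].
    Returns an (H/scale) x (W/scale) luma image -- the only signal the
    codec then compresses, so the decoder-side network must recover
    resolution and color on top of compression artifacts.
    """
    # Nearest-neighbour down-sampling by simple striding (an assumption;
    # any down-sampling filter could be used here).
    small = rgb[::scale, ::scale, :]
    # Standard BT.601 luma weights: keep brightness, discard color.
    luma = small @ np.array([0.299, 0.587, 0.114])
    return luma
```

The sketch makes the challenge concrete: for a 2x scale, three quarters of the pixels and all chroma are gone before compression even begins, and the restoration network must reconstruct all of it.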
As a result, our methods outperform the standard codec and previous works in the field.

Regarding (2), one solution is to train a supervised deep neural network on many digital images paired with their smartphone-scanned versions. However, collecting such pairs carries a high labor cost, which limits the available training data. Previous works create training pairs by simulating degradation with low-level image processing techniques; their synthetic images are then paired with perfectly scanned photos in latent space. Even so, real-world degradation in smartphone photo scanning remains unsolved, since it is complicated by lens defocus, low-cost cameras, and details lost in printing. Besides, locally structural misalignment persists in the data because distorted shapes are captured in a 3-D world, reducing both restoration performance and the reliability of quantitative evaluation. To address these problems, we propose a semi-supervised Deep Photo Scan (DPScan). First, we present a way to produce real-world degradation and provide the DIV2K-SCAN dataset for smartphone-scanned photo restoration; we also propose Local Alignment to reduce the minor misalignment remaining in the data. Second, we simulate many variants of real-world degradation using low-level image transformations to generalize over smartphone-scanned image properties, and we train a degradation network to learn how to degrade unscanned images as if a smartphone had scanned them. Finally, we propose a semi-supervised learning scheme that allows our restoration network to be trained on both scanned and unscanned images, diversifying training image content. As a result, the proposed DPScan quantitatively and qualitatively outperforms its baseline architecture, state-of-the-art academic research, and industrial products in the field.
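The semi-supervised idea above can be sketched as follows: a degradation model turns unscanned images into pseudo scanned/clean pairs, which are mixed with real scanned pairs to train the restoration network. This is a minimal sketch, not DPScan itself; the learned degradation network is replaced here by a hand-written stand-in (`simulate_scan`, a box blur for defocus plus Gaussian noise), and both function names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scan(img, blur=1, noise=0.02):
    """Stand-in for the degradation network: box blur + sensor noise.

    img: H x W float array in [0, 1]. In DPScan this mapping is learned,
    not hand-written; the blur/noise model here is only illustrative.
    """
    k = 2 * blur + 1
    pad = np.pad(img, blur, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):          # accumulate the k x k box-blur window
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    out /= k * k
    return np.clip(out + rng.normal(0.0, noise, img.shape), 0.0, 1.0)

def make_training_batch(scanned_pairs, unscanned_imgs):
    """Mix real scanned/clean pairs with pseudo-pairs built by degrading
    unscanned images, so the restoration network sees both sources."""
    pseudo = [(simulate_scan(x), x) for x in unscanned_imgs]
    return scanned_pairs + pseudo
```

The point of the mixing step is the one the abstract makes: unscanned images are cheap and plentiful, so pseudo-pairs diversify training content far beyond what hand-collected scanned pairs alone could cover.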
Collection (particular): National Diet Library Digital Collections > Digitized Materials > Doctoral Dissertations
Date Accepted (W3CDTF): 2022-07-05T02:30:21+09:00
Data Provider (Database): National Diet Library: National Diet Library Digital Collections