The full text of this material may be freely viewable on the degree-granting institution's website or on CiNii Dissertations, linked under fields such as the source journal (URI).
Doctoral thesis
Available only on the premises of the National Diet Library
National Diet Library Digital Collections
Digital data available
Image Classification and Its Applications on Insect Pest Recognition
- NDL persistent identifier
- info:ndljp/pid/11679607
Bibliographic information
You can check the details of this material and its authority data (keywords and author names that identify materials on the same subject).
Digital
- Material type
- Doctoral thesis
- Author/editor
- 劉, 文傑
- Author heading
- Date of publication
- 2021-03-23
- Year of publication (W3CDTF)
- 2021-03-23
- Parallel title
- 画像分類とその害虫認識への応用
- Degree-granting institution
- Tokushima University
- Date conferred
- 2021-03-23
- Date conferred (W3CDTF)
- 2021-03-23
- Report number
- 甲第3521号
- Degree
- Doctor of Engineering
- Dissertation conferral number
- 甲第3521号
- Language code of the text
- eng
- Subject heading
- Intended audience
- General
- General note
- In the computer vision community, many excellent convolutional neural networks have been proposed in recent years, and various approaches are used to improve model performance, such as increasing model depth, feature fusion, and attention mechanisms. However, how to effectively extract and utilize features within a model remains a critical problem for convolutional neural networks. In this thesis, we focus on constructing more effective feature-extraction units based on residual networks for image classification, and we propose three variant residual networks: the feature reuse residual network (FR-ResNet), the deep feature fusion residual network (DFF-ResNet), and the deep multi-branch fusion residual network (DMF-ResNet). Meanwhile, insect pests are regarded as the main threat to commercially important crops; an effective classification method can significantly reduce economic losses, and earlier detection helps decrease agricultural losses. Traditional classification methods require experts to distinguish the categories of insect pests, which is expensive and inefficient. As deep learning attracts more attention, it has been applied in this domain as well. In this thesis, we apply our models to insect pest recognition and achieve considerable improvement over other convolutional neural networks.
Feature reuse is an effective way to improve model performance, and we adopt it in our models. Based on the original residual block, we combine the features of the block's input signal with the residual signal, reusing the features from the previous layer in a new and simple way; we therefore call it a feature reuse residual block. Each block enhances its representational capacity by learning half of its features and reusing the other half (illustrative sketches of the blocks described in this note are given after the bibliographic fields below). By stacking feature reuse residual blocks, we obtain the feature reuse residual network (FR-ResNet) and evaluate its performance on several benchmark datasets, including CIFAR, SVHN, and IP102. The experimental results show that FR-ResNet achieves a significant performance improvement for image classification. Moreover, to demonstrate the adaptability of our approach, we apply it to various residual networks, including ResNet, Pre-ResNet, and WRN, and the results show a clear improvement over the original networks. These experiments on several benchmark datasets demonstrate the effectiveness of our approach.
A drawback of FR-ResNet is that increasing the model width brings more parameters. Therefore, to obtain a good trade-off between performance and parameter count, we modify the architecture of the feature reuse residual block and propose the feature fusion residual block. In each feature fusion residual block, the features from the previous layer are fused in between the two 1×1 convolution layers of the residual branch so that more features are extracted for the task. Meanwhile, we explore the contribution of each residual group to the entire model and find that adding residual blocks to the earlier residual groups promotes performance significantly and gives the model better generalization. Following the architecture of FR-ResNet, we construct the deep feature fusion residual network (DFF-ResNet). Furthermore, we combine our approach with two common residual networks (Pre-ResNet and WRN) to prove its validity and adaptability. We validate these models on several benchmark datasets, including CIFAR and SVHN. The empirical results indicate that our models outperform FR-ResNet and other state-of-the-art methods. We then apply our models to insect pest recognition and evaluate them on the IP102 dataset. With a similar total number of parameters, DFF-ResNet surpasses FR-ResNet on several benchmark datasets while using fewer parameters, and it also achieves better test accuracy than other state-of-the-art methods.
Motivated by the preceding work, and in order to learn multi-scale representations that improve model performance, we fuse the features extracted from three branches in each residual block. Specifically, the new residual block contains three branches: a basic branch, a bottleneck branch, and a branch that applies a linear conversion to the input. On top of this structure, we further propose the SFR module to recalibrate channel-wise feature responses and to model the relationships between these branches. The experimental results verify the effectiveness of our approach on the CIFAR-10 and CIFAR-100 datasets, and even an extremely deep DMF-ResNet achieves compelling results. Compared with the baseline models and other state-of-the-art methods, our model obtains the best performance on the IP102 dataset, which proves the validity of our approach for the high-resolution image classification task. By visualizing the highlighted regions of images, we further explain the effect of our approach on the image classification task.
In this thesis, we propose FR-ResNet, DFF-ResNet, and DMF-ResNet and evaluate them on several benchmark datasets. The experimental results demonstrate that our approaches effectively improve model performance and verify that the proposed models extract more useful features for image classification tasks.
- NDL persistent identifier
- info:ndljp/pid/11679607
- Collection (common)
- Collection (materials for persons with disabilities: level 1)
- Collection (individual)
- National Diet Library Digital Collections > Digitized materials > Doctoral theses
- Basis for acquisition
- Doctoral theses (automatically collected)
- Date accepted (W3CDTF)
- 2021-06-07T02:06:26+09:00
- Format (IMT)
- application/pdf
- Scope of online access
- Available only within the National Diet Library
- Digitized material transmission service
- Not available for transmission to libraries or individuals
- Remote photocopying (NDL)
- Available
- Partner institutions and databases
- National Diet Library: National Diet Library Digital Collections
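
The general note above describes the feature reuse residual block as learning half of its features while reusing the other half from the previous layer. The record does not give the exact layer layout, so the following is a minimal PyTorch sketch under the assumption that the reuse is a channel-wise concatenation of the block input with a standard residual output, followed by a 1×1 convolution; the class name `FeatureReuseBlock` and all layer choices are illustrative, not taken from the thesis.

```python
# Minimal sketch of a "learn half, reuse half" residual block (assumed layout).
import torch
import torch.nn as nn


class FeatureReuseBlock(nn.Module):
    """Hypothetical feature-reuse residual block: the output fuses reused input
    features with an equally sized set of newly learned residual features."""

    def __init__(self, channels: int):
        super().__init__()
        # Residual branch learns new features from the full input.
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)
        # 1x1 convolution maps the concatenated (2 * channels) map back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        learned = self.relu(x + self.residual(x))   # standard residual signal
        reused = torch.cat([x, learned], dim=1)     # reuse previous-layer features
        return self.relu(self.fuse(reused))


if __name__ == "__main__":
    block = FeatureReuseBlock(channels=64)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```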
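The note also describes the feature fusion residual block of DFF-ResNet as fusing the previous layer's features "between the two 1×1 convolution layers" of the residual branch. A plausible reading is a bottleneck branch (1×1, 3×3, 1×1) whose final 1×1 convolution sees the intermediate features concatenated with the raw block input; the sketch below assumes that reading, and the bottleneck width is an arbitrary illustrative choice.

```python
# Minimal sketch of a feature-fusion residual block (assumed bottleneck layout).
import torch
import torch.nn as nn


class FeatureFusionBlock(nn.Module):
    """Hypothetical feature-fusion residual block: the block input is fused
    into the residual branch between its two 1x1 convolutions."""

    def __init__(self, channels: int, bottleneck: int = 16):
        super().__init__()
        self.reduce = nn.Sequential(                  # first 1x1 convolution
            nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
        )
        self.spatial = nn.Sequential(                 # 3x3 convolution in the middle
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
        )
        # Second 1x1 convolution sees the bottleneck features fused with the raw input.
        self.expand = nn.Sequential(
            nn.Conv2d(bottleneck + channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.spatial(self.reduce(x))
        h = torch.cat([h, x], dim=1)                  # fuse previous-layer features
        return self.relu(x + self.expand(h))          # residual connection


if __name__ == "__main__":
    block = FeatureFusionBlock(channels=64)
    print(block(torch.randn(2, 64, 32, 32)).shape)    # torch.Size([2, 64, 32, 32])
```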
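Finally, the DMF-ResNet block is described as fusing a basic branch, a bottleneck branch, and a linearly converted input, with an SFR module recalibrating channel-wise feature responses. The sketch below assumes the fusion is an element-wise sum and models the recalibration as a squeeze-and-excitation-style gate; the thesis's actual SFR module and branch definitions may differ.

```python
# Minimal sketch of a three-branch residual block with channel recalibration
# (branch layouts and the SE-style gate are assumptions for illustration).
import torch
import torch.nn as nn


def conv_bn(cin: int, cout: int, k: int) -> nn.Sequential:
    """Convolution + batch norm + ReLU with 'same' padding."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=k, padding=k // 2, bias=False),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )


class MultiBranchFusionBlock(nn.Module):
    """Hypothetical three-branch residual block with channel-wise recalibration."""

    def __init__(self, channels: int, bottleneck: int = 16, reduction: int = 4):
        super().__init__()
        self.basic = nn.Sequential(conv_bn(channels, channels, 3),
                                   conv_bn(channels, channels, 3))
        self.bottleneck = nn.Sequential(conv_bn(channels, bottleneck, 1),
                                        conv_bn(bottleneck, bottleneck, 3),
                                        conv_bn(bottleneck, channels, 1))
        self.linear = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        # Squeeze-and-excitation-style gate over the summed branch features.
        self.recalibrate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.basic(x) + self.bottleneck(x) + self.linear(x)
        fused = fused * self.recalibrate(fused)       # channel-wise reweighting
        return self.relu(x + fused)                   # residual connection


if __name__ == "__main__":
    block = MultiBranchFusionBlock(channels=64)
    print(block(torch.randn(2, 64, 32, 32)).shape)    # torch.Size([2, 64, 32, 32])
```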