The full text of this material may be freely viewable on the degree-granting institution's website or on CiNii Dissertations, via the source (URI) links.
Doctoral dissertation
Available only within the National Diet Library
USING COMPUTING FIRST PRINCIPLES TO IMPROVE THE SYMBIOTIC PERFORMANCE IN ALGORITHMS AND PROCESSORS USED IN LOW-POWERED MACHINE LEARNING
- National Diet Library persistent identifier
- info:ndljp/pid/12361091
Bibliographic information
- Material type
- Doctoral dissertation
- Author/editor
- Nsinga, Robert
- Author heading
- Date of publication
- 2022-09-20
- Year of publication (W3CDTF)
- 2022-09-20
- Parallel title
- コンピューティングの第一原理を使用して、低電力の機械学習で使用されるアルゴリズムとプロセッサの共生パフォーマンスを向上させる
- Degree-granting institution
- Tokushima University
- Date of conferral
- 2022-09-20
- Date of conferral (W3CDTF)
- 2022-09-20
- Report number
- 甲第3652号
- Degree
- Doctor of Engineering
- Dissertation grant number
- 甲第3652号
- Language code of text
- eng
- Intended users
- General
- General note
- Reducing electric power consumption and speeding up processing are catching the interest of researchers in deep learning. Models have grown in complexity and size, using as much numerical precision as can be computationally supported, regardless of the cost of the cooling systems required. Quantization has eased deployment to small devices that lack floating-point capability, but little has been proposed about the floating-point numbers themselves. This thesis evaluates hardware acceleration for embedded devices that cannot support the energy requirements of floating-point arithmetic, proposes solutions that challenge the limits of power consumption, and measures their effectiveness in terms of energy demand and processing speed.
- Experts have declared the end of Moore's law, with the current state of nanotechnology coming to terms with its inability to keep increasing the performance-per-transistor-density ratio. Accelerators, although providing a countermeasure, have also raised their power needs to unsustainable levels. At the same time, there has been a sufficient increase in knowledge, such as distributed computing, to branch off into approaches that could reduce power demands while maintaining, or possibly increasing, microprocessor performance. This thesis highlights some important challenges born out of the rapid rise of deep learning.
- We present experimental results showing that low-powered devices can serve as powerful tools in low-cost deep learning research. In doing so, we aim to slow the ongoing trend that favors expensive investment in deep learning computers. Using known properties of computer architecture, hardware acceleration, and digital arithmetic, we implement ways to design algorithms that symbiotically match their performance to the theoretical limits afforded by the hardware components that run them.
- Computer processors are utilized according to their ability to execute instructions defined in code or machine-readable format. Some processors are general-purpose and perform well across a wide range of tasks; others are domain-specific and focus only on particular tasks. While executing any task, an ideal processor would engage all of its transistors so that no part is left underutilized. In practice this is rarely the case, which is why domain-specific processors are optimized to execute only the instructions to which they can fully commit their components. It is considered good practice to design algorithms that encourage maximum use of the available capacity for any execution. Our proposed method improves the symbiotic complementarity between peak algorithm performance and theoretical hardware capacity.
- Collection (common)
- Collection (accessible materials: level 1)
- Collection (individual)
- National Diet Library Digital Collections > Digitized materials > Doctoral dissertations
- Basis for collection
- Doctoral dissertations (automatically collected)
- Date accepted (W3CDTF)
- 2022-11-07T16:56:35+09:00
- Format (IMT)
- PDF
- Online access scope
- Available only within the National Diet Library
- Digitized material transmission
- Not eligible for transmission to libraries or individuals
- Remote copying availability (NDL)
- Available
- Partner institutions and databases
- National Diet Library: National Diet Library Digital Collections
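The general note above mentions quantization as a route to deploying models on devices that lack floating-point capability. The record itself contains no code, so the following is only a minimal sketch of standard symmetric 8-bit quantization (a generic technique, not necessarily the method developed in the thesis); the function names and the example array are illustrative assumptions.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

# Hypothetical weight values for illustration.
weights = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# q == [50, -127, 0, 100]; integer ops can then replace float ops on
# hardware without an FPU, at the cost of at most scale/2 error per element.
```

On such hardware the int8 tensor is what gets stored and multiplied; the float reconstruction is only needed where full precision is required, which is why the per-element error bound of half the scale factor matters for the energy/accuracy trade-off the note describes.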