Parallel title: 音楽ゲームのステージデータの自動生成における適切な難易度調整手法について (On appropriate difficulty adjustment methods in the automatic generation of stage data for music games)
General note: The music game is a popular game genre in which the player is asked to take specific actions at specific moments corresponding to prominent acoustic events, such as onsets, in the music playing in the background. Stage data for such games record the sequence of actions the player has to take (e.g., hitting a pad), and the timing of each action is often represented by a moving target overlapping a stationary object on the screen. Creating such data is a difficult task that requires skilled artists. It is also time-consuming, because such games require multiple sequences with different difficulty levels.

Multiple attempts at generating stage data with machine learning have been made, including Donahue's [1] Dance Dance Convolution (DDC), which generates stage data for the music game Dance Dance Revolution (DDR) from user-supplied audio data of a piece of music and a desired difficulty level. DDC uses two models, step placement and step selection, to generate stage data. The step placement model finds onsets using a convolutional neural network (CNN) and long short-term memory (LSTM), while the step selection model determines the type of action required (e.g., which button to press). It is found that when the desired difficulty level is lower, the generated stage data contains significantly more action requirements (targets) than its man-made counterpart.

We propose a method for generating sparse stage data at specified difficulty levels using two approaches: (1) quantifying difficulty by examining the movement the game requires of the player, and (2) collecting statistics from man-made stage data analyzed with musical knowledge.

The difficulty of given stage data is determined mainly by the density of targets, but the complexity of the actions also plays a role. Movement Cost (MC) is a value we propose that quantifies the difficulty of any stage data by computing the optimal sequence of actions the player must perform to play the game perfectly. By comparing the MC of the generated stage data to the average MC of man-made stage data, the two models can be adjusted to generate more or fewer targets, or different types of targets.

On the other hand, introducing musical knowledge such as rhythm representations enables us to collect statistics from man-made stage data, because rhythm representations can be used to classify the targets. In the proposed method, the music is first divided into units called measures, and each measure is subdivided into a fixed number of sections. The onsets of the subdivisions are marked as significant points in the music, which are then used to classify the targets. Using this method, we can determine a preferred number of targets per class for each difficulty level, which we can reference to refine the generated stage data so that it has the difficulty level desired by the user.
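A minimal sketch of the rhythm-based classification idea described in the note, not the thesis implementation: each measure is split into a fixed number of subdivisions, every target is assigned to the nearest subdivision boundary, and counting targets per subdivision index yields the per-class statistics that can be compared against man-made stage data. The measure length, subdivision count, and target times below are hypothetical inputs chosen only for illustration.

from collections import Counter

def classify_targets(target_times, measure_len=2.0, subdivisions=16):
    """Map each target time (in seconds) to a rhythm class 0..subdivisions-1."""
    classes = []
    for t in target_times:
        pos_in_measure = t % measure_len                  # position inside its measure
        grid = pos_in_measure / measure_len * subdivisions
        classes.append(int(round(grid)) % subdivisions)   # nearest subdivision index
    return classes

# Hypothetical stage data: targets mostly on strong beats, one off-beat target.
targets = [0.0, 0.5, 1.0, 1.5, 2.0, 2.25, 3.0]
counts = Counter(classify_targets(targets))
print(counts)  # Counter({0: 2, 8: 2, 4: 1, 12: 1, 2: 1})

Such per-class counts, collected separately for each difficulty level of man-made stage data, would give the "preferred number of targets per class" referenced above.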
(Chief examiner) Professor 土橋 宜典; Professor 坂本 雄児; Professor 長谷山 美紀; Specially Appointed Professor 荒木 健治
Graduate School of Information Science and Technology (Division of Information Science)
Collection: National Diet Library Digital Collections > Digitized Materials > Doctoral Dissertations
Acceptance date (W3CDTF): 2023-07-08T03:42:31+09:00
Contributing institution / database: National Diet Library : National Diet Library Digital Collections