This study aims to identify the developmental features of musical expression in young children from the perspective of changing aspects of body movement, in order to build an evaluation model based on those feature quantities.
In this study, the author examined new feature quantities for machine learning classification and discrimination of the level of musical expression in young children. First, the author presented findings from a statistical analysis of movement components in early childhood musical expression, using 3D motion capture together with machine learning to assess levels of musical development. Full-body motion was first subjected to analysis of variance: a three-way non-repeated-measures ANOVA was applied to motion capture data from 3-, 4-, and 5-year-old children at childcare facilities (n = 178). The analysis showed statistically significant differences in how individual body parts moved; in particular, right-hand features such as moving distance and moving average acceleration differed significantly.
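As a rough illustration of this kind of analysis, the sketch below computes moving distance and moving average acceleration from a marker trajectory and runs a three-way ANOVA. The sampling rate, column names, and the three factors (age group, song, and session) are assumptions for illustration, not the study's actual design or pipeline.

```python
# Minimal sketch: movement feature quantities and a three-way ANOVA.
# FPS, factor names, and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

FPS = 100  # assumed motion-capture sampling rate (frames per second)

def movement_features(xyz: np.ndarray) -> dict:
    """Compute features for one marker trajectory of shape (frames, 3)."""
    step = np.diff(xyz, axis=0)               # frame-to-frame displacement vectors
    dist = np.linalg.norm(step, axis=1)       # displacement magnitude per frame
    vel = dist * FPS                           # speed per frame (units/s)
    acc = np.abs(np.diff(vel)) * FPS           # magnitude of acceleration per frame
    return {
        "moving_distance": float(dist.sum()),            # total path length
        "moving_avg_acceleration": float(acc.mean()),     # mean acceleration magnitude
    }

def three_way_anova(df: pd.DataFrame, feature: str) -> pd.DataFrame:
    """Three-way (non-repeated) ANOVA on one feature; factors are placeholders."""
    model = ols(f"{feature} ~ C(age) * C(song) * C(session)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

if __name__ == "__main__":
    # Synthetic random-walk trajectories stand in for right-hand marker data.
    rng = np.random.default_rng(0)
    rows = []
    for age in (3, 4, 5):
        for song in ("A", "B"):
            for session in (1, 2):
                for _ in range(10):
                    traj = rng.normal(size=(300, 3)).cumsum(axis=0)
                    rows.append({"age": age, "song": song, "session": session,
                                 **movement_features(traj)})
    df = pd.DataFrame(rows)
    print(three_way_anova(df, "moving_distance"))
```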
Second, machine learning methods, including decision trees, the sequential minimal optimization (SMO) algorithm, support vector machines (SVM), and neural networks (multi-layer perceptrons), were used to construct classification models for evaluating the degree of musical development as rated by educators from simultaneously recorded video of the children and the corresponding motion capture data. Among the trained classification models, the multi-layer perceptron produced the best confusion matrix results, showing reasonable classification precision and practical utility for supporting educators in assessing children's stage of musical development. The multi-layer perceptron results also indicated that movement of the pelvis correlates strongly with the degree of musical development. The model's consistent classification accuracy suggests that it could be used to assist educators in judging how well young children express themselves musically.
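The following sketch shows one way such a classifier comparison could be set up, using scikit-learn as a stand-in for the tools actually used in the study. The feature matrix, educator ratings, and hyperparameters are assumptions; scikit-learn's SVC is used here in place of the SMO implementation named above, since it trains an SVM with an SMO-style solver.

```python
# Minimal sketch: compare decision tree, SVM, and multi-layer perceptron
# classifiers on motion-capture features labelled with educators' ratings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC                       # SVM trained with an SMO-style solver
from sklearn.neural_network import MLPClassifier  # multi-layer perceptron
from sklearn.metrics import confusion_matrix, accuracy_score

def compare_models(X: np.ndarray, y: np.ndarray, seed: int = 0) -> None:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed
    )
    models = {
        "decision tree": DecisionTreeClassifier(random_state=seed),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "MLP": make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                          random_state=seed),
        ),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(name, "accuracy:", round(accuracy_score(y_te, pred), 3))
        print(confusion_matrix(y_te, pred))

if __name__ == "__main__":
    # Synthetic data standing in for 178 children with 12 movement features
    # and 3 rated levels of musical development (placeholders only).
    from sklearn.datasets import make_classification
    X_demo, y_demo = make_classification(n_samples=178, n_features=12,
                                         n_informative=6, n_classes=3,
                                         random_state=0)
    compare_models(X_demo, y_demo)
```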
The author then presented eye-tracking results on musical expression from a recent study, building on the machine learning classification and discrimination of the developmental degree of musical expression, in order to identify additional feature quantities. Eye tracking is now widely used to study bodily responses in relation to cognitive and emotional components. According to the author, eye-tracking data on gaze path, fixations, and saccades provide information that can help characterise musical expression. Children aged 3, 4, and 5 years at childcare facilities (n = 118) sang a song while wearing an eye tracker (Tobii3). The computed data were analysed quantitatively with ANOVA. Measures including the frequency and amplitude of saccades, as well as the saccades' moving average velocity, were greater when children sang in major keys than in minor keys, indicating that saccades during early childhood musical expression tend to be larger in major keys. The results demonstrated that useful feature values for machine learning can be extracted from the computed eye-movement data recorded during musical expression.
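As an illustration of how such saccade-related feature quantities might be derived, the sketch below applies a simple velocity-threshold detector to raw gaze samples. The sampling rate, threshold, and data layout are assumptions, not the study's actual Tobii processing pipeline.

```python
# Minimal sketch: saccade frequency, amplitude, and moving average velocity
# from gaze samples, using an assumed velocity-threshold (I-VT-style) detector.
import numpy as np

FS = 100            # assumed gaze sampling rate (Hz)
VEL_THRESHOLD = 30  # assumed saccade velocity threshold (deg/s)

def saccade_features(gaze_deg: np.ndarray) -> dict:
    """gaze_deg: (samples, 2) gaze positions in degrees of visual angle."""
    step = np.diff(gaze_deg, axis=0)
    speed = np.linalg.norm(step, axis=1) * FS        # angular speed per sample
    is_sacc = speed > VEL_THRESHOLD                   # samples inside saccades
    onsets = np.flatnonzero(np.diff(is_sacc.astype(int)) == 1)  # saccade onsets
    duration_s = len(gaze_deg) / FS
    n_sacc = max(len(onsets), 1)
    return {
        "saccade_frequency": len(onsets) / duration_s,  # saccades per second
        # Rough amplitude estimate: total displacement inside saccade samples
        # divided by the number of detected saccades.
        "mean_saccade_amplitude": float(
            np.linalg.norm(step[is_sacc], axis=1).sum() / n_sacc
        ),
        "moving_avg_saccade_velocity": float(speed[is_sacc].mean())
        if is_sacc.any() else 0.0,
    }

if __name__ == "__main__":
    # Synthetic gaze trace standing in for one singing trial.
    rng = np.random.default_rng(1)
    gaze = rng.normal(scale=0.2, size=(FS * 10, 2)).cumsum(axis=0)
    print(saccade_features(gaze))
```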
Author(s) Details:
Mina Sano,
Tokoha University, Japan.
Please see the link here: https://stm.bookpi.org/CRLLE-V7/article/view/7540