Comprehensive Summary
In this study by Barki, Mai, and Chung, a classification algorithm detects emotional states from physiological signals collected by a wearable device that records EEG and PPG. Gradient boosting classifiers reduce classification error by building an ensemble of decision trees sequentially, each new tree correcting the errors of the trees before it. Barki et al. use XGBoost, an optimized "extreme" implementation of gradient boosting whose built-in regularization controls model complexity and reduces overfitting, the failure mode in which a model fits its training data closely but generalizes poorly to new data. EEG and PPG sensors were fitted to each of twenty-one participants while they sat comfortably in front of a monitor that played a video clip corresponding to each of four target emotions: fear, happiness, calmness, and sadness. The EEG and PPG signals were recorded for two minutes per clip, and the recordings were checked to ensure their validity. The data were preprocessed to reduce noise, and features capturing emotion-related physiological changes were extracted and classified with the XGBoost model. The study found that fusing EEG and PPG features was more effective for detecting and classifying emotions than using either signal alone (97.58% accuracy versus 95.63% for EEG and 91.49% for PPG). This result supports a multimodal approach for more accurate identification of emotions.
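The pipeline described above can be sketched in a few lines. The sketch below is illustrative only: the feature values are synthetic stand-ins for the extracted EEG and PPG features, the dimensions and hyperparameters are assumptions, and scikit-learn's GradientBoostingClassifier stands in for the study's XGBoost model to demonstrate the same sequential-tree principle (shallow trees plus shrinkage to limit complexity).

```python
# Illustrative sketch of the study's setup: four emotion classes predicted
# from fused EEG + PPG feature vectors. All data here is synthetic; the
# study used XGBoost, but scikit-learn's GradientBoostingClassifier shows
# the same gradient-boosting idea of sequentially added decision trees.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
EMOTIONS = ["fear", "happiness", "calmness", "sadness"]

# Synthetic stand-ins for extracted features (dimensions are assumptions):
# 8 EEG-derived features and 4 PPG-derived features per sample.
n_per_class = 50
X_parts, y_parts = [], []
for label in range(len(EMOTIONS)):
    eeg = rng.normal(loc=label, scale=1.0, size=(n_per_class, 8))
    ppg = rng.normal(loc=label * 0.5, scale=1.0, size=(n_per_class, 4))
    X_parts.append(np.hstack([eeg, ppg]))  # feature-level fusion of modalities
    y_parts.append(np.full(n_per_class, label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Shallow trees and a small learning rate keep complexity in check,
# the role XGBoost's regularization plays in the original study.
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print(f"fused-feature accuracy: {clf.score(X_te, y_te):.2f}")
```

On real recordings, the same fit/score comparison could be repeated on EEG-only and PPG-only feature sets to reproduce the study's unimodal-versus-fused comparison.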
Outcomes and Implications
The applications of this research are wide-ranging, from detecting mental health disorders to developing technologies that adapt and respond to human emotional cues. The XGBoost model provides a solid foundation for future work, but several directions could further enhance its capabilities. Incorporating deep learning or neural networks could help the approach generalize to a more diverse population, a prerequisite for medical implementation. Further, additional physiological signals such as body temperature or skin conductance may improve results and offer a more feasible clinical option. Ultimately, the hope is that models like XGBoost will enable better human-machine interaction.