Comprehensive Summary
Park et al. investigated polysomnography, the standard method for sleep staging, which is tedious to perform because of the numerous instruments it requires. They developed DistillSleep, a system that performs sleep staging from a single-channel electroencephalogram (EEG), allowing less resource-intensive devices to be used. It relies on two models, a high-capacity teacher and a 109k-parameter student, both built from a Multi-Wavelength Pyramid module and a Transformer structure to capture intra- and inter-epoch context. Knowledge is transferred from the teacher to the student through feature-level and prediction-level distillation. The teacher models were trained on over 10,000 overnight recordings, and performance was evaluated with Macro-F1. Both the teacher and the student performed strongly under this metric, with the student reaching up to 79.7% Macro-F1.
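To make the teacher-to-student transfer concrete, the sketch below shows a minimal feature-level plus prediction-level distillation loss of the kind described above. It is an illustrative assumption, not the authors' actual DistillSleep code: the tiny `TinyEncoder` stands in for the Multi-Wavelength Pyramid and Transformer encoder, and the loss weights, temperature, and tensor shapes are hypothetical.

```python
# Hedged sketch of teacher->student distillation for single-channel EEG sleep staging.
# TinyEncoder, the hyperparameters, and the toy data are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_STAGES = 5  # W, N1, N2, N3, REM

class TinyEncoder(nn.Module):
    """Stand-in for the paper's Multi-Wavelength Pyramid + Transformer encoder."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(1, feat_dim, kernel_size=7, stride=4, padding=3)
        self.head = nn.Linear(feat_dim, NUM_STAGES)

    def forward(self, x):                      # x: (batch, 1, samples)
        feats = self.conv(x).mean(dim=-1)      # pooled features: (batch, feat_dim)
        return feats, self.head(feats)         # features and sleep-stage logits

def distillation_loss(student_logits, teacher_logits,
                      student_feats, teacher_feats,
                      labels, temperature=2.0, alpha=0.5, beta=0.5):
    """Combine hard-label, prediction-level, and feature-level terms."""
    ce = F.cross_entropy(student_logits, labels)
    # Prediction-level distillation: match the teacher's softened probabilities.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Feature-level distillation: match intermediate representations.
    feat = F.mse_loss(student_feats, teacher_feats)
    return ce + alpha * kd + beta * feat

# Toy usage: the teacher is frozen; only the student would be updated.
teacher, student = TinyEncoder(), TinyEncoder()
teacher.eval()
eeg = torch.randn(8, 1, 3000)                  # 8 EEG epochs, e.g. 30 s at 100 Hz
labels = torch.randint(0, NUM_STAGES, (8,))
with torch.no_grad():
    t_feats, t_logits = teacher(eeg)
s_feats, s_logits = student(eeg)
loss = distillation_loss(s_logits, t_logits, s_feats, t_feats, labels)
loss.backward()
```

In this arrangement the student sees three signals at once: the ground-truth stage labels, the teacher's soft predictions, and the teacher's intermediate features, which is what lets a 109k-parameter model approach the larger model's performance.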
Outcomes and Implications
DistillSleep’s compact model delivers high sleep-staging accuracy from a single-channel EEG. Thanks to its accessibility and ease of use, it could help expand access to sleep assessment and therapy on a large scale.