Showing posts from July, 2023

Enhancing DNN Explainability in BCI

This project tackled the challenge posed by the opaque nature of Deep Neural Network (DNN) models, which is especially critical in safety-driven fields such as Brain-Computer Interface (BCI) applications. The primary aim was to make DNN models for analyzing EEG signals more interpretable and trustworthy. In this initial study, I highlighted the importance of complementing quantitative performance metrics with qualitative evaluation. The investigation began with a comprehensive analysis of a key paper in the field, which served as the starting point. I then compared the model against common alternatives, tweaking various hyper-parameters to validate its performance.

To open up the "black box," I applied the Layer-wise Relevance Propagation (LRP) method, which revealed insightful details about each model's reliability. LRP proved invaluable for explaining the inner workings of the DNN, making it a useful tool for interpreting complex neural network decisions and enhancing transparency.
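To give a flavor of how LRP works, here is a minimal sketch of the epsilon rule on a toy fully connected network. The architecture, random weights, and input are illustrative placeholders only, not the actual EEG model from this project: the idea is that a chosen class score is redistributed backwards, layer by layer, in proportion to each connection's contribution, until every input feature carries a relevance score.

```python
import numpy as np

# Toy two-layer network: 8 input features -> 4 hidden units -> 2 classes.
# All weights and the input are random placeholders (not the real model).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 2))

def forward(x):
    """Forward pass; activations are kept for the backward relevance pass."""
    a1 = np.maximum(0.0, x @ W1)   # ReLU hidden layer
    a2 = a1 @ W2                   # linear class scores
    return a1, a2

def lrp_backward(a_in, W, R_out, eps=1e-9):
    """LRP epsilon rule: redistribute R_out over one linear layer's inputs."""
    z = a_in[:, None] * W                    # per-connection contributions
    denom = z.sum(axis=0)                    # each output's pre-activation
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)  # stabilizer
    return z @ (R_out / denom)               # relevance of each input unit

def explain(x, target):
    """Relevance of each input feature for the chosen output class."""
    a1, a2 = forward(x)
    R2 = np.zeros_like(a2)
    R2[target] = a2[target]        # start from the target class score only
    R1 = lrp_backward(a1, W2, R2)  # output layer -> hidden layer
    R0 = lrp_backward(x, W1, R1)   # hidden layer -> input features
    return R0, a2[target]

x = rng.normal(size=8)             # stand-in for extracted EEG features
R0, score = explain(x, target=0)
# Conservation property: the input relevances sum (approximately)
# back to the class score that was propagated.
print(R0.shape, np.isclose(R0.sum(), score, rtol=1e-3))
```

In the real study this backward redistribution runs through convolutional EEG models rather than a toy dense network, but the conservation property shown here is what makes the resulting relevance maps comparable across models.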