Explaining machine learning model predictions in the context of adversarial data

Defense Date:

In my thesis, I investigate the impact of robust learning on the quality of explanations of machine learning models. The growing use of such models in many areas of life increases the demand for robustness and credibility of their predictions. To this end, I propose three experimental methods for assessing the quality of explanations and analyze four methods for generating explanations. I compare robust and classic variants of the random forest and XGBoost models using the proposed evaluation methods, and I compare the explanations obtained on three data sets with insights from the data mining process.
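The comparison described above can be sketched as follows. This is a minimal, hypothetical illustration only (it is not the thesis's actual pipeline): a "robust" variant is simulated here by training on noise-perturbed inputs as a simple stand-in for adversarial training, and permutation importance from scikit-learn stands in for the explanation methods analyzed in the thesis.

```python
# Hypothetical sketch: compare feature-importance explanations of a
# classic model and a "robust" variant. The robust variant is simulated
# by training on noise-perturbed inputs (a crude stand-in for
# adversarial/robust training); permutation importance stands in for
# the explanation methods evaluated in the thesis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data with a few informative features.
X, y = make_classification(
    n_samples=500, n_features=8, n_informative=4, random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classic variant: trained on the original data.
classic = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Simulated "robust" variant: trained on Gaussian-perturbed inputs.
rng = np.random.default_rng(0)
robust = RandomForestClassifier(random_state=0).fit(
    X_tr + rng.normal(0.0, 0.3, X_tr.shape), y_tr
)

# Compare the explanations (here: permutation importances) of the two.
for name, model in [("classic", classic), ("robust", robust)]:
    imp = permutation_importance(
        model, X_te, y_te, n_repeats=10, random_state=0
    )
    top = np.argsort(imp.importances_mean)[::-1][:3]
    print(name, "top features:", top.tolist())
```

Under this setup, one can check whether the two variants attribute predictions to the same features, which is one simple proxy for the agreement (or divergence) of explanations between robust and classic models.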