Application aiding in testing resistance of classifier models to attacks using adversarial examples
Defense Date:
This thesis presents the development of an application designed to assist in testing the robustness of classifier models against adversarial attacks. The work focuses on evaluating the resistance of machine learning models to adversarial examples: specially crafted inputs designed to fool classification algorithms. The application provides tools for generating adversarial examples and for systematically probing model vulnerabilities.
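To illustrate the kind of attack such an application tests against, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one common technique for crafting adversarial examples. The logistic-regression model, its weights, and the `fgsm` helper are all hypothetical illustrations, not the models or tooling developed in this thesis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb input x in the direction that increases the
    cross-entropy loss of a logistic-regression classifier."""
    p = sigmoid(w @ x + b)          # predicted probability of class 1
    grad = (p - y) * w              # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)  # FGSM step: sign of the gradient

# Toy classifier and a correctly classified input (class 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.2, 0.1])

x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.3)

print(sigmoid(w @ x + b) > 0.5)      # original prediction: class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # flipped by the perturbation
```

A small perturbation bounded by `eps` in each coordinate is enough to flip the toy model's decision, which is exactly the kind of vulnerability the application is designed to expose systematically.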
