Adversarial Robustness Toolbox: trouble reproducing the attack and defense example (black-box attack using HopSkipJump)
I have been studying attacks on machine learning systems.
Why is the output of the HopSkipJump attack so weird?
In this notebook I am trying to apply black-box attacks to my model. I tried to follow the example notebook provided with the Adversarial Robustness Toolbox library, but I could not reproduce its results:
https://github.com/mostaf7583/bacheloe/blob/master/blackbox.ipynb
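For context, here is a toy sketch of the core idea behind decision-based (hard-label) attacks such as HopSkipJump: given only the model's predicted label, binary-search along the line between a clean input and an adversarial starting point to locate a point near the decision boundary. This is a pure-Python illustration with a made-up stand-in classifier (`predict_label`), not the ART implementation, and it is not a substitute for debugging the notebook itself.

```python
# Stand-in black-box classifier: we only see its hard label.
# (Hypothetical 1-D model with a decision boundary at 0.5.)
def predict_label(x):
    return 1 if x > 0.5 else 0

def boundary_search(x_clean, x_adv, steps=30):
    """Binary-search between a clean point and an adversarial point
    for the closest point to x_clean that still flips the label.
    This boundary-search step is the building block that decision-based
    attacks like HopSkipJump repeat with gradient estimation on top."""
    target = predict_label(x_adv)      # the adversarial (wrong) label
    lo, hi = x_clean, x_adv            # invariant: hi is adversarial
    for _ in range(steps):
        mid = (lo + hi) / 2
        if predict_label(mid) == target:
            hi = mid                   # still adversarial: move toward x_clean
        else:
            lo = mid                   # not adversarial: move toward x_adv
    return hi

adv = boundary_search(x_clean=0.0, x_adv=1.0)
print(adv)  # a point just past the boundary at 0.5
```

If the adversarial outputs from the notebook look "weird" (e.g. noisy or far from the originals), one common cause is the attack stopping early or using too few queries; in ART, `HopSkipJump` exposes iteration and evaluation budgets that control how closely the search converges to the boundary.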