Rearranging Pixels Is a Powerful Black-Box Attack for RGB and Infrared Deep Learning Models
Recent research has shown that neural networks for computer vision are vulnerable to several types of external attacks that modify the model's input with the malicious intent of producing a misclassification. As the number of feasible attacks has grown, many defence approaches have been proposed to mitigate their effect.
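To make the threat model concrete, the following is a minimal sketch of a pixel-rearrangement perturbation used in a query-only (black-box) setting. The patch-based shuffling strategy, the `predict_label` interface, and all parameter values here are illustrative assumptions for exposition, not the method proposed in this paper.

```python
import numpy as np

def shuffle_patch(image: np.ndarray, top: int, left: int, size: int,
                  rng: np.random.Generator) -> np.ndarray:
    """Return a copy of `image` (H, W, C) with the pixels inside one
    size x size patch randomly rearranged among themselves.

    Note: pixel *values* are preserved; only their positions change."""
    adv = image.copy()
    channels = image.shape[2]
    patch = adv[top:top + size, left:left + size].reshape(-1, channels)
    rng.shuffle(patch)  # permute pixel positions within the patch
    adv[top:top + size, left:left + size] = patch.reshape(size, size, channels)
    return adv

def attack(image, predict_label, patch_size=8, max_queries=1000, seed=0):
    """Hypothetical black-box loop: query the model with randomly
    shuffled patches until the predicted label flips."""
    rng = np.random.default_rng(seed)
    clean_label = predict_label(image)
    h, w = image.shape[:2]
    for _ in range(max_queries):
        top = int(rng.integers(0, h - patch_size + 1))
        left = int(rng.integers(0, w - patch_size + 1))
        candidate = shuffle_patch(image, top, left, patch_size, rng)
        if predict_label(candidate) != clean_label:
            return candidate  # misclassification achieved
    return None  # no adversarial example found within the query budget
```

Because the perturbation only reorders existing pixel values, it needs no gradient information and can be driven purely by the model's predicted labels, which is what makes such attacks black-box.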