ConvNeXt - Adversarial images generation

I implemented Stanislav Fort’s project in PyTorch. The GitHub repo includes a notebook that generates adversarial images to “fool” the ConvNeXt model’s image classification. ConvNeXt was released earlier this year (2022) by Meta AI.

FGSM (Fast Gradient Sign Method) is a classic white-box attack whose goal is misclassification. Noise is added to the input image, not randomly, but in the direction of the sign of the gradient of the loss function with respect to the input.
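A minimal FGSM sketch in PyTorch, using a tiny stand-in classifier (the notebook attacks ConvNeXt; the model, input size, and `epsilon` value here are illustrative assumptions):

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    # Enable gradient tracking on the input image itself
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the sign of the gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()
    # Keep pixel values in the valid [0, 1] range
    return x_adv.clamp(0.0, 1.0).detach()

# Demo with a small random classifier in place of ConvNeXt
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.rand(1, 3, 8, 8)   # a fake "image" batch
y = torch.tensor([3])        # the true label
x_adv = fgsm_attack(model, x, y, epsilon=0.03)
```

Each pixel moves by at most `epsilon`, so the perturbation is nearly invisible while still shifting the model's prediction.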
