BrA.I.Smasher is a platform built to help pentesters, researchers, students, and A.I. cybersecurity engineers practice and learn techniques for exploiting commercial A.I. applications, by working on specially crafted labs that reproduce several systems, such as face recognition, speech recognition, ensemble image classification, autonomous driving, malware evasion, chatbots, data poisoning, etc.
Every month, a lab covering a topic found in commercial A.I. applications will be posted, in three different difficulties (called challenges), to guide the user through the mechanics behind it and let them practice different exploitation approaches.
Since A.I. applications are relatively new, the harder challenges for some labs may have no publicly known exploitation method, so it's up to you to find the correct solution. Some challenges may even require combining "standard" cybersecurity techniques with machine learning adversarial attacks ;)
The platform, currently in beta, will soon also feature paid competitions, job postings, a ranking system, and tutorials on several A.I. exploitation topics. You will also be able to earn money by proposing your own labs, or new challenges for already existing A.I. lab applications, for the community to use, as well as by proposing modifications to existing challenges in order to harden them against the various attacks.
All the material and techniques for A.I. exploitation will be posted in a dedicated section of HackTricks.
While we are in beta and finishing the implementation of all the features described above, registration and all the labs already posted, together with their challenges, are free. So start learning how to exploit A.I. for free while you can on the BrA.I.Smasher website. ENJOY ;)
A big thanks to Hacktricks and Carlos Polop for giving us this opportunity
Walter Miele from BrA.I.nsmasher
In order to register in BrA.I.Smasher, you need to solve an easy challenge (here). Just think about how you can confuse one neural network without confusing the other, knowing that one detects the panda better while the other is worse at it...
There are easier ways to pass the challenge, but this solution is awesome because you will learn how to pass it by crafting an adversarial image with a Fast Gradient Sign Method (FGSM) attack.
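To make the FGSM idea concrete, here is a minimal sketch in pure Python on a toy logistic classifier (not the challenge's networks; the weights, epsilon, and input are all illustrative assumptions). FGSM perturbs the input by a small step in the direction of the sign of the loss gradient with respect to the input, which is enough to push a correctly classified point across the decision boundary:

```python
# Minimal FGSM sketch on a toy logistic model (illustrative only;
# the real challenge uses image classifiers, not this linear model).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """One FGSM step: x_adv = x + epsilon * sign(dLoss/dx).

    For p = sigmoid(w.x + b) with binary cross-entropy loss,
    the gradient of the loss w.r.t. the input is (p - y_true) * w.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    grad = [(p - y_true) * wi for wi in w]          # dLoss/dx
    sign = lambda g: (g > 0) - (g < 0)              # elementwise sign
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Toy example: x is correctly classified as class 1 (w.x + b = 0.8 > 0),
# and a single FGSM step with epsilon = 0.5 flips the prediction.
w, b = [2.0, -1.0], 0.0
x = [0.5, 0.2]
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=0.5)
z_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b   # now negative
```

In a real attack on image classifiers you would compute the gradient with a deep learning framework's autodiff instead of by hand, but the perturbation rule is the same one shown here.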