Nicholas Boucher is a PhD candidate at the University of Cambridge, where his research focuses on security topics including homomorphic encryption, voting systems, and adversarial machine learning. He is the lead author of a fascinating new paper – “Bad Characters: Imperceptible NLP Attacks” – which provides a taxonomy of attacks against text-based NLP models based on Unicode and other encoding systems.
Download a FREE copy of our recent NLP Industry Survey Results: https://gradientflow.com/2021nlpsurvey/
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
Detailed show notes can be found on The Data Exchange web site.
Subscribe to The Gradient Flow Newsletter.