LessWrong (Curated & Popular)

Ironing Out the Squiggles

19 min • May 1, 2024
Adversarial Examples: A Problem

The apparent successes of the deep learning revolution conceal a dark underbelly. It may seem that we now know how to get computers to (say) check whether a photo is of a bird, but this façade of seemingly good performance is belied by the existence of adversarial examples—specially prepared data that looks ordinary to humans, but is seen radically differently by machine learning models.

The differentiable nature of neural networks, which makes it possible to train them at all, is also responsible for their downfall at the hands of an adversary. Deep learning models are fit using stochastic gradient descent (SGD) to approximate the function between expected inputs and outputs. Given an input, an expected output, and a loss function (which measures "how bad" it is for the actual output to differ from the expected output), we can calculate the gradient of the [...]
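The same gradient machinery can be turned against the model: instead of taking the gradient of the loss with respect to the weights (to make the loss go down), an adversary takes it with respect to the input (to make the loss go up). A minimal sketch, in the spirit of the classic fast gradient sign method, using a toy linear classifier so the gradient can be written by hand; the weights, input, and perturbation budget here are all illustrative, not from the original post:

```python
import numpy as np

def loss(x, w, y):
    """Logistic loss of a linear classifier on input x with label y in {-1, +1}."""
    return np.log1p(np.exp(-y * np.dot(w, x)))

def input_gradient(x, w, y):
    """Gradient of the loss with respect to the *input* x (not the weights)."""
    s = -y / (1.0 + np.exp(y * np.dot(w, x)))
    return s * w

w = np.array([1.0, -2.0, 0.5])   # fixed "trained" weights (assumed for illustration)
x = np.array([0.3, -0.1, 0.8])   # a benign input
y = 1.0                          # its true label

eps = 0.1                        # small perturbation budget
# Step in the direction that increases the loss: x' = x + eps * sign(grad_x L)
x_adv = x + eps * np.sign(input_gradient(x, w, y))

assert loss(x_adv, w, y) > loss(x, w, y)  # the perturbed input is scored worse
```

A real attack does the same thing, but the gradient comes from backpropagation through a deep network rather than a hand-derived formula, and `eps` is kept small enough that the change is imperceptible to humans.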

The original text contained 5 footnotes which were omitted from this narration.

---

First published:
April 29th, 2024

Source:
https://www.lesswrong.com/posts/H7fkGinsv8SDxgiS2/ironing-out-the-squiggles

---

Narrated by TYPE III AUDIO.
