Words are a window into human psychology, society, and culture, says Stanford linguist and computer scientist Dan Jurafsky. The words we choose reveal what we think, how we feel and even what our biases are. And, more and more, computers are being trained to comprehend those words, a fact easily apparent in voice-recognition apps like Siri, Alexa and Cortana.
Jurafsky says that his field, known as natural language processing (NLP), is now in the midst of a shift from simply trying to understand the literal meaning of words to digging into the human emotions and the social meanings behind those words. In the social sciences, our great digital dialog is being analyzed to tell us who we are. And, by looking at the language of the past, language analysis promises to reveal who we once were. Meanwhile, in fields such as medicine, NLP is being used to help doctors diagnose mental illnesses, like schizophrenia, and to measure how patients respond to treatment.
The next generation of NLP-driven applications must not only hear what we say, but understand and even reply in more human ways, as Dan Jurafsky explains in his own words to host Russ Altman in this episode of Stanford Engineering’s The Future of Everything podcast. Listen and subscribe here.
Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook