Artificial intelligence began with hand-programmed systems, in which programmers manually encoded human expert knowledge. In sharp contrast, today's artificial neural networks – deep learning – learn from experience and perform at human-like levels on perceptual categorization, language production, and other cognitive tasks. This difference has been portrayed as roughly parallel to the philosophical divide between rationalists or nativists on the one hand, and empiricists on the other.
In From Deep Learning to Rational Machines (Oxford UP, 2024), Cameron Buckner lays out a program for future AI development based on discussions of the human mind by such figures as David Hume, Ibn Sina (Avicenna), and Sophie de Grouchy, among others. Buckner, who is an associate professor of philosophy at the University of Houston, offers a conceptual framework that occupies a middle ground between the extremes of 'blank slate' empiricism and innate, domain-specific faculty psychology, and he defends the claim that neural network modelers have found, at least in some cases, a sweet spot of abstraction – one that sets aside the messy details of biological cognition while capturing the relevant similarities in their artificial networks.
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/technology