
The Stephen Wolfram Podcast

Future of Science and Technology Q&A: Live from the Wolfram Summer School (July 3, 2024)

104 min • September 6, 2024

Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa


Questions include:

- How do you see electricity being transmitted or provided to households in the future? These power poles and lines are over-100-year-old technology.
- How often will AI be revisited in future science and technology? Or do you think AI has firmly cemented its place?
- Do you think LLMs have already passed the Turing test (which is currently being asserted by many "experts")? If yes, what does that mean for the future direction of AI research? If no, what's missing?
- Over time, AI training data will increasingly be AI-generated. Will this feedback loop amplify errors and cause AI to self-destruct?
- If we can sustain mini-brains or large clusters of human neurons for years, this approach might achieve artificial general intelligence before synthetic methods do. What do you think?
- Are those neural cats behind you?
- Is it possible that human-machine integration or radical genetic modification can allow humans to make significant leaps in rulial space?
- What role do emotions play in language and information processing? Do emotions speed up communication? What other elements are important for AI development in communication beyond language?
- Will AI make interdisciplinary learning and collaboration easier by facilitating that process, or will it create more misunderstanding between fields?
- When people discuss whether an LLM is sentient or not, a question that always comes up is whether it "understands" the prompts and its replies, with the Chinese room thought experiment typically brought up in such discussions. I see two ways to look at this. One is that an LLM is just an advanced predictive text generator and that sentience is something more than that. Another is that we sentient beings are actually just advanced predictive text/action generators. What do you feel sentience really is?
- Is it possible for AI to achieve true randomness?
- Why is there no latency when we are looking around and constructing a scene on the fly? Or is it our perception that makes it seem like there is no latency?
- What new types of auxiliary jobs do you think will be necessary for the ubiquitous integration of AI into society to properly balance AI with human interests, such as the alignment problem? And what role, if any, do you see Wolfram Research playing in that "AI economy"?
- Do you see there being more specialized computing hardware in the future, where the computations are more directly embedded in physical processes rather than needing to construct a given computation within a universal computer?
- How do you envision hypergraph-based models advancing our understanding of quantum mechanics, general relativity and their potential unification? Specifically, how might these models address challenges like quantum gravity, the nature of spacetime and the emergence of fundamental particles?
- Are we programmed by evolution to be sentient? If so, can't we program a machine to be sentient?
- Do you think hydrogen has a future in computing, and will it play a major role in energy and possible propulsion to get us to Mars?
- Is the ruliad a meta-theory, or does it actually exist?
- If the ruliad is correct, what kind of technology do you think it can bring us?
