Stephen Wolfram answers general questions from his viewers about science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include:
- I've been hearing about AI and LLMs in the context of an "arms race" between countries. What would LLMs look like scaled up in that manner (vs. a global LLM)?
- What about model interoperability? Where are we in the research on that? Do we need to develop new and more sophisticated mathematics to begin to understand these black-box models? Do you think in time we will be able to do causal inference with them?
- Do you agree with Yann LeCun and Andrew Ng's recent assertion that AGI is still decades away and cannot be achieved with the current transformer architectures, regardless of parameter and token count?
- Where is the line, then, between a program with an inner experience and one without?
- So with unlimited intelligence, maybe everything can be predicted with accuracy.
- When will an AI write a work worth feeding into another AI?