AI, body, and soul.
Hosted on Acast. See acast.com/privacy for more information.
The podcast I, scientist with Balazs Kegl is created by Balazs Kegl.
We get deep into identity and death. If you are fascinated by Michael Levin and Jonathan Pageau, this conversation is for you.
Fascinating discussion where we go beyond the Free Energy Principle, and even Levin's cognitive cones, and postulate that crucial internal structures, a subset of me, are what define identity. If they get damaged, I lose my identity. Deep self-transformation may also rupture my continuity. When does it happen, when do I die, as opposed to my subsystems dying? Very much a sequel to my conversation with Yogi Jaeger (https://youtu.be/5UJ4y2L2qpk). The paradox of self-preservation vs self-sacrifice. Maverick computing and organoids.
Alexander is an Assistant Professor of Computer Science at Rochester Institute of Technology. He directs the Neural Adaptive Computing (NAC) Laboratory where they work on developing new learning procedures and computational architectures that embody various properties of biological neurocircuitry and are guided by theories of mind and brain functionality. His research focuses on predictive processing, active inference, spiking neural networks, competitive neural learning, neural-based cognitive modeling, and metaheuristic optimization.
Alexander's webpage: https://www.rit.edu/directory/agovcs-alexander-ororbia
Paper link: https://arxiv.org/abs/2311.09589
Biological learning survey: https://arxiv.org/abs/2312.09257
00:00:00 Intro
00:01:37 Geoff Hinton's "mortal computation" idea: what if software and hardware were inseparable? Cybernetics, thermodynamics, 4e cognitive science, basal cognition.
00:11:20 Hinton's energy consumption argument.
00:21:01 The self-preservation directive is the source of intelligence. The free energy principle. The non-equilibrium steady state is your identity.
00:32:21 The paradox of the membrane: a permeable separation of inside and outside. The nestedness of inside and outside: fractal structure. Beyond FEP: identity-defining internal structures. Subsystemic death. Bottom-up and top-down causation.
00:52:12 Why mortal (as opposed to living) computation? The crucial role of finite time horizon.
01:01:49 The paradox of self-preservation vs self-sacrifice. Life drive vs death drive. Is death a must? Maverick computing and organoids.
01:15:38 My take on embodiment and mortality.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
This is my second conversation with the philosopher Alexander Bard. Yoga and alignment, downwards in the body and upwards in the sociont. Archetypology and rituals. Tantric sex and the sex/religion/spirituality triad. The masculine drive towards the project. Collective purpose and men's work.
The first part of our conversation: https://youtu.be/Y8cuqy6y0fo
Process and Event seminar series on Parallax: https://www.youtube.com/playlist?list=PLSc7jFP5VeNHTwj4vzq7Tv6UIhAPCC4vp
Alexander Bard is a philosopher and futurist specializing in the relationship between humanity and technology, with a profound interest in psychoanalysis and anthropology to understand the impact of technological change on interpersonal relationships.
00:00:00 Intro
00:03:19 My Yoga experience and the general notion of alignment. Relation to the sociont and tribology. Relation to couple therapy. The chemistry of a music band.
00:12:31 Archetypology. Psychology, psychoanalysis, sociology, data anthropology. Internal family system. Families, clans, and tribes.
00:25:51 Rituals: Eliade and Van Gennep, idiosyncratic nuclear family. Event/process in wedding/marriage.
00:29:10 Sex. The sexualized process and event. Sexuality and spirituality. Taoist sex. Meditation vs absolute bliss: sexuality and psychedelics.
00:38:09 Tantric/Taoist history and modern teachers.
Mantak Chia, David Deida, John Wineland, Kim Anami. Dualist Islam Sufism and monist Zoroastrian Zufism.
00:42:15 Alexander's six-episode course on Process and Event, relation to Awakening from the Meaning Crisis. But P&E doesn't avoid the bliss like AFTMC. The ego appears in the dialectics of the pure experience (meditation) and bliss (tantric sex).
00:45:20 Purpose. My tribal experience in the Pierre Auger experiment. Monastery = tribe + mission. Project precedes subject and object.
00:54:46 Relevance realization: where does the project come from?
01:01:27 Collective purpose, flow, and men's work.
This is my first, and definitely not my last, encounter with the philosopher Alexander Bard. We discuss his grand oeuvre of metaphysics: Process and Event.
Why should you only trust a philosopher with a great sex life? Shit happens: a philosophy that starts in the here and now. Boy pharaohs and pillar saints: the two faces of Gnosticism. Our issues with Christianity: lack of embodiment, lack of mature adulthood. What does the day of a good Zoroastrian look like?
Exciting conversation, brotherly love at first sight. Enjoy!!
Alexander Bard is a philosopher and futurist specializing in the relationship between humanity and technology, with a profound interest in psychoanalysis and anthropology to understand the impact of technological change on interpersonal relationships.
00:00:00 Intro
00:05:15 Don't trust a philosopher who doesn't have a great sex life. What's wrong with Gnosticism. Philosophy and the psychology of the philosopher.
Convincing vs aspiration, manipulation vs seduction, winning vs going somewhere together: the dance.
00:10:47 Process and Event. East over west. Uniting theology and philosophy. Yin-yang philosophy. Start from the nomadic. If you hide the process, you have to repeat the event. Keep the dichotomy and contradiction.
00:23:57 Zoroastrianism. The love of wisdom. Only archetypes reincarnate. The adult religion.
00:30:53 Religion. How to create stable communities? Fire temples, initiation, Nowruz.
00:39:08 Why am I not a Christian? My son's baptism. God's omnipotence and Gnosticism. The fear of the ecstatic.
00:46:11 Jonathan Pageau's monist Christianity. The philosophical vs the supernatural tradition.
00:49:18 Transcendental emergentism. Boy pharaohs and pillar saints. The king and the priest. What's wrong with reductionism (both physicalism and panpsychism)? Shit happens. Emergence vectors (physics, chemistry, biology, consciousness).
01:01:13 Yoga, alignment, Taoism, Buddhism, and Zoroastrianism, Gnosticism and omnipotence. From a Child of a God, becoming an Adult of an Adult: enlightenment.
Dan Shipper (http://danshipper.com/) is an entrepreneur, writer, and the CEO and co-founder of Every (https://every.to/@danshipper), a daily newsletter on business, AI, and personal development read by nearly 75,000 founders, operators, and investors. He writes a weekly column at Every called Chain of Thought, where he covers topics like AI, tools for thought, and the psychology of work.
I contacted Dan because he makes his living off the writing industry, yet instead of panicking, he took the LLM revolution head on, choosing to lead it instead of ducking.
Join us to explore creative uses of generative AI for writing and personal development.
Fascinating discussion about science at the end where we are both at the edge of our chairs and knowledge.
If you like this, you may also be interested in my conversation with Tatjana (https://www.youtube.com/watch?v=_KJ29Nslbl8) and Joel (https://www.youtube.com/watch?v=mmIHnZRY5zo) on creative writing and teaching AI.
00:00:00 Intro.
00:04:18 Dan's journey: software and writing.
00:13:45 The writing industry and the business model of Every.
00:19:45 How the future works. AI, ChatGPT, creativity, psychology: Dan's writing.
00:25:25 ChatGPT and personal development. Mirroring and motivational interviewing. Dealing with dragons, cognitive behavioral therapy. https://www.maxyourmind.xyz
00:33:34 Therapy. Dan's fight with OCD. Embodiment, somatic therapy. Podcast as a therapy.
00:46:28 GPT and creativity. Link to interview with David Perell. How to get rid of the GPT smell. Link to Dan's app. Dan's definition of art.
01:00:14 GPT and writing. Personalized content vs shared stories and shared game-like media.
01:06:52 AI and science. Prediction without theory. Complexity and embodied intuition. Multiplicity of causes and categorization. Connectionist induction.
Csaba Szepesvári is a prominent figure in machine learning and artificial intelligence, particularly known for his contributions to reinforcement learning. He is a professor in the Department of Computing Science at the University of Alberta and a Senior Staff Research Scientist at Google DeepMind.
Our goal in life: keep eating Yann LeCun's cherry (inside joke, ask GPT).
This was a wide-reaching conversation with my good friend Csaba. The first 50 minutes are about AI: we give you insight into what AI researchers and practitioners are dealing with behind the scenes, with a bit of the history and practice of AI.
At around 49 minutes we discuss Yann LeCun's view on model-based planning and reinforcement learning, one of the most interesting and far-reaching debates within AI (should we model the world or just learn to reach goals?).
This naturally leads into relevance realization: what to learn to predict and what to ignore, which is the most important question of AGI.
In the last 20 minutes the conversation turns more personal: we talk about our lives as scientists, the metaphysical no man's land (nonreductionist naturalist metaphysics), and my wrestling with Christianity.
If you like this, check out my conversations with Yogi Jaeger https://www.youtube.com/watch?v=5UJ4y2L2qpk, Anna Riedl https://www.youtube.com/watch?v=w2ZiSWZNQsg, and Giuseppe Paolo https://www.youtube.com/watch?v=R7tEd65e2i8.
00:00:00 Intro.
00:02:39 Google DeepMind and leading the Foundations team. Theory tells us what is possible and how to do it. What does theory add to practice in machine learning?
00:08:58 Probabilistic guarantees. Sorting vs machine learning. Random algorithms and random data.
00:15:57 Theory and practice. Support vector machines, boosting, and neural nets. History of AI. Practice-driven ML.
00:24:34 Neural nets: history and practice. The fiddliness and the robustification. Overfitting. Overparameterization vs classical statistics. Label noise and regularization.
00:35:06 Reinforcement learning. Learning to behave. Learning to collect data.
00:39:10 Why did we get into AI: to understand intelligence and ourselves.
00:41:48 Reinforcement learning: model of an intelligent agent. Theory: with respect to a fully informed agent, how much do you lose by having to learn?
00:49:17 To model or not to model? Model-based reinforcement learning.
00:58:18 Latent representation of importance, relevance realization. Steelmanning and criticizing Yann LeCun's cherry metaphor. Relevance vs simplicity. The shiny object syndrome.
01:12:40 The Jaeger - Riedl - Djedovic - Vervaeke - Walsh paper on naturalizing relevance realization.
01:24:12 Fear and technology. Regulation, freedoms, open source.
01:33:00 Framing precedes observation.
01:36:00 Subjectivity of science. Can we be part of the world we study?
01:43:55 The metaphysical no man's land: nonreductionist naturalism.
01:48:00 Why am I not yet a Christian? Omnipotence and lack of embodiment, rituals that focus on our responsibility of what is below.
Yogi Jaeger is a philosopher at the University of Vienna, specializing in the philosophy of science and biology in particular.
Are we rushing ahead into the unknown, scaring people, while also producing fake shelters, artificial small worlds for them to retreat into? Are we losing our grip on the world or realizing our humanness in the mirror of AI? Will AI take our agency away? Are we willingly giving it away? Or will it extend our agency?
Passionate conversation about the philosophy of science and the non-computational and non-formalizable nature of cognition. We shed light on AI with a non-reductive naturalist torch and see the rough edges and the strange shadows it casts over society.
Project website: https://www.expandingpossibilities.org
Personal website: http://www.johannesjaeger.eu
Relevant papers and blog:
https://osf.io/preprints/osf/pr42k
https://arxiv.org/abs/2307.07515
https://www.frontiersin.org/articles/10.3389/fevo.2021.806283/full
http://www.johannesjaeger.eu/blog/machine-metaphysics-and-the-cult-of-techno-transcendentalism
I warmly recommend his course on the philosophy of science: https://www.youtube.com/playlist?list=PL8vh-kVsYPqPVrV0m4HjZexgO6oDkgkK0
00:00:00 Intro.
00:04:56 My excitement about and interest in Yogi's work. Personal: non-reductionist naturalism beats lived dualism. Professional: tech is becoming organismic.
00:08:49 Definitions, naturalizing relevance realization. Formalization, computation and framing, small/closed and large/open worlds.
00:15:55 The paradox of living in a small world is that we don't know that we are living in a small world.
00:18:07 Computational approaches do work, just not for (all) cognition. The brain evolved not to compute but to keep you alive.
00:20:12 Church-Turing-Deutsch conjecture.
00:23:04 Approximation. Can cognition be well-approximated by computation? Csaba Szepesvári's third question. Yes, but that does not mean cognition works that way.
00:26:36 Nothingbutism. Prediction and understanding, are they the same? Understanding needs to have the concepts right.
00:29:15 AGI doom only happens in the small world.
00:30:45 Philosophy and science. Any scientific test for the non-computation claim? Csaba's second question.
00:33:27 Framing precedes science.
00:35:17 Non-reductionist but non-woowoo naturalism.
00:43:08 Relevance realization: the map of map-making. Enactivism. Non-formalizability. Fragility of the organism and death. The work to exist.
00:50:55 Goal-orientation: intrinsic or extrinsic? Paperclip maximizer.
00:54:44 Far-from-equilibrium thermodynamics. Physics-free simulation vs self-manufacturing experiential systems. Immediate and pre-conceptual access to the world.
01:04:58 Biotech and AI. The value of slowing down.
01:07:47 TAME: Michael Levin's technological approach to mind everywhere.
01:18:55 LLMs and hallucinations.
01:23:05 Social media AI and recommendation systems: taking away our agency but trying to optimize for an unoptimizable goal.
01:30:34 AGI, slowing down, losing our grip on the world or realizing our humanness in the mirror of AI?
01:39:30 Intelligence and rationality.
01:41:58 Yogi's journey: a natural philosopher, a biologist using philosophical approaches.
01:48:15 Where do I want to see AI going? Yogi's question to me.
Baudouin Saintyves is an artist, physicist, and roboticist, inventor of Granulobots.
https://www.youtube.com/watch?v=Gnbdneho78M
https://arxiv.org/abs/2304.03125
Research: https://voices.uchicago.edu/bsaintyveslab/
Art: https://www.shapesofemergence.com/
Fascinating conversation, starting with an intro to soft and swarm robotics and his work on designing Granulobots, self-organizing aggregates of small wheels with magnets and a simple motor.
We then dive deep into dialogical, out-loud thinking about self-organization, self-coordination, synchronization, and emergence. We are both at the edge of what we know, dancing around and with these fascinating subjects.
At around the hour mark, I ask Michael Levin's three questions: 1) Any surprising system-level behavior? (yes!!) 2) Any way to control the system at the top level? (hard to say, but Baudouin really riffs off this) 3) What does the world look like for a Granulobot? (fascinating :) ).
The last half hour is Baudouin's amazing journey, triangulating between art, science, and engineering, a perfect illustration of the perspectival philosophy of science that we discuss with Yogi Jaeger in the next episode.
Watch it, and please give it a thumbs up and subscribe to the YouTube channel if you like it; it's a small gesture that could help these sober conversations about AI and science reach more people.
00:00:00 Intro
00:03:40 Robots, hard and soft, distributed and liquid.
00:14:25 Mobile Lego. Modular robots, swarms and flocks, decentralized embodied intelligence.
00:21:29 Granulobots. Machine is the material vs the material is the machine. Motorized grains of sand.
00:29:02 Self-organization, self-coordination, synchronization, emergence.
Hysteresis as a precondition of control. Solid and liquid.
01:03:28 Michael Levin's three questions: 1) any surprising high-level behavior?
01:13:30 2) Top-down: any way to control the system at the top level?
01:22:22 3) First-person systems. What does the world look like from the system's perspective?
01:30:36 Music, visual art, self organization, physics, shapes of emergence, contemplative cinema, robotics. Baudouin's journey across art and science.
01:38:37 The marvelous triangle of science, engineering, and art.
01:43:39 TAME: the link to Baudouin's work.
Why are LLMs so alien?
Is it because they lack goal-oriented behavior?
Or because they are designed to please humans, which can be unsettling?
Join me and Laura in this new episode where we discuss her paper "Active inference goes to school", AI and 4E cognitive science, active inference and child development, and AI in the classroom.
My notes on Laura's paper: https://balazskegl.substack.com/p/active-inference-goes-to-school
00:00:00 Intro
00:05:44 4E cognitive science. Embodied, enacted, embedded, extended. Mirror neurons and meaning. Scaffolding.
00:20:03 LLMs and extendedness. Confabulations.
00:29:53 LLM vs social media AI. The role of embodiment and the scariness of disembodied LLM. Loopholes.
00:37:20 Active inference. LLMs are trained like kids in a classical school. Building artifacts to align reality with internal model. Pragmatic vs objective truth.
00:56:45 Child development and active inference. Cognitive and physical are intertwined. The role of history and culture.
01:07:02 Montessori. Flow and the speed of learning. Prepared environment.
01:13:46 AI in school and work.
01:19:40 Laura's journey. Marx, cinema, history of science, biology, running a bar, having a child, second PhD.
01:22:43 The role of artifacts in forming the mind. Memorabilia.
01:29:37 Having a child. Starting school during Covid. The role of imitation in learning and its lack in school. Aspiration. Copying. Learning and sense.
01:40:47 Embodied AI, when can I read it? Laura's question to me. Martial arts. Meditation.
What, if anything, differentiates us from machines? Is cognition computational? Can self-preservation be externally imposed? How do you feel in your body when you hear the word "singularity"?
Join us in a thought-provoking conversation with Anna Riedl, a distinguished cognitive scientist from Vienna, known for her work on rationality.
We dive deep into psychology, cognitive science, and artificial intelligence. What I find exemplary about Anna is how she plants her feet firmly in the real world while tackling deep, wide-ranging, and sometimes controversial topics in one of the fastest-moving fields in science.
Anna recently co-authored an insightful paper on "Naturalizing Relevance Realization," shedding light on how our understanding of relevance in cognitive processes can be grounded in naturalistic terms. For a deeper dive into the themes of our conversation, including my summary and notes on this groundbreaking paper, visit https://balazskegl.substack.com/p/naturalizing-relevance-realization.
00:00:00 Intro.
00:05:51 Rationality and intelligence. Small and large worlds, the frame problem.
00:12:58 Non-computational (but physical!) framing, relevance realization, natural agency, cognition. Naturalistic teleology.
00:20:07 Algorithm vs organism through reinforcement learning lens.
00:33:18 Relevance realization. Bayesian rationality and the paradox of timescale. Social media AI.
00:47:40 Self-preservation: intrinsic or external goal?
00:51:23 Opponent processing.
00:57:08 Circularity argument for the non-learnability of relevance realization.
01:00:27 Ecology of agents. Where does the agential world start?
01:04:47 Singularity and self-manufacturing. Parasitic processing. Rationality. How do we construe the world? How does a topic make you feel?
01:11:16 What's next for Anna? Naturalizing decision-making.
01:14:15 Insight.
01:16:49 Prophecy, anticipation, forecasting, threshold points and agency, faith.
01:20:13 How and why did Anna become a cognitive scientist? Creativity, clear thinking, psychology, cognitive science.
01:25:02 Black-box vs constructive predictive models. Implicit vs explicit.
01:29:33 Collective expertise, higher-level organisms.
01:36:52 What got me into tech? Anna's question to me.
Jakob Nordin is an observational astrophysicist at Humboldt University Berlin and a member of the LSST collaboration, which is building the Vera Rubin Observatory and the information infrastructure to manage the immense data it will collect.
Real-time astronomy & LSST: We discuss the intriguing potential of real-time astronomy, focusing on the Large Synoptic Survey Telescope (LSST), its technology and data processing challenges.
Multi-messenger astrophysics & brain-like structure: We explore the fascinating concept of multi-messenger astrophysics, where various observatories act like sensors in a brain-like structure, collaboratively detecting cosmic events. We discuss how this network uses a semi-automatic system of alerts to direct scientific attention efficiently, highlighting the organic and evolving nature of this expansive, interconnected system.
AI in astronomy: We explore how artificial intelligence is integrated into astronomical research, particularly image analysis and classification within massive data streams like those expected from LSST.
00:00:00 Intro. LSST.
00:03:43 Universe as a lab. Astronomer, astrophysicist, observational astrophysicist.
00:09:15 The space of astronomical observatories: what messenger, what energy, what field of view (patch of sky), what timescale?
00:11:15 Vera Rubin Observatory and LSST collaboration: optical wavelengths, large field of view, detailed map and daily transients. Several instruments in one.
00:13:17 Space or ground? Trade-offs and complementarity.
00:16:16 Technology: Mirror and camera.
00:19:20 Atmosphere. How to deal with turbulence? Exotic locations.
00:22:01 Numbers. 7 trillion detections, 37 billion objects (transients).
00:23:16 Categories of transients: noise, Solar system objects (planets, asteroids), Milky way (variable stars), extragalactic (giant black holes, supernovae).
00:27:57 Alert rate: 10 million nightly events. 60 sec latency.
00:30:54 Multi-messenger astrophysics.
00:39:27 Sociological paradigm change.
00:43:45 Unknown unknowns. The similarities between multi-messenger astrophysics and a living multi-sensorial perceptual system.
00:47:44 AI: what works and what does not.
00:52:38 Data brokers. Scarcity of resources forces a distributed TAME-like organization in science.
00:59:02 AI. Noise filtering. Transient categorization.
01:03:30 Training data: mostly simulations. Data augmentation.
01:07:50 The sociological paradigm change: using telescope time to verify AI pipelines.
01:09:57 Instrumental meditation: AI models need to be calibrated.
01:11:08 The price of knowledge. How to calibrate exploration?
01:14:06 Why astrophysics? Curiosity + coincidence.
01:18:16 What does a scientist do?
01:21:05 Organization. Small groups evolving towards large-scale experiments.
01:24:56 AI, software, data science, computer science, computer engineering.
01:29:20 Productionalizing AI. Big data vs. small data. AI is tough when models are exposed to reality.
01:34:40 GPT for writing and coding. Hallucination and novelty.
01:40:18 "Do I get the money?" Would I finance real-time astronomy? How to choose what to work on?
01:43:02 Debates are overrated. Jakob's question: Can a podcast change what we do? Can a guest change how I live? You can change me if I want to become like you.
01:51:05 Epistemic authority. How do we make decisions? How can we trust information?
The second hour is really just two guys trying to make sense of what's going on in the world as a reaction to AI and the meaning crisis, explored through the works of John Vervaeke. I'm sure the insights from this conversation will resonate with many educators and technologists alike. If you like it, please help the channel by signing up!
00:00:00 Intro
00:06:53 Path to teaching writing.
00:15:36 Why do we write? Making an impact vs having a voice vs thinking something through.
00:20:37 GPT vs human writing. The voice of GPT. Simile and metaphors.
00:34:05 Teaching and AI. Assisting a lesson plan.
00:37:48 Assessment in the age of GPT: policing or integration?
00:46:51 OER: teaching LLMs within Open Education Resources.
00:53:11 AI assistance: feedback generated by LLMs. Should we learn to drive a stick or read maps?
01:03:32 GPT as a dialogical partner.
01:10:14 Vervaeke's AI video essay. https://balazskegl.substack.com/p/notes-on-john-vervaekes-ai-the-coming
01:15:02 Opponent processing: dialog, jujitsu.
01:22:37 Jeremy England, life, entropy, dissipation, and e/acc.
01:26:20 Doom vs zoom: I don't agree with the framing.
01:34:49 Relevance realization and Vervaeke's trinity: nomological, normative, and narrative order.
01:41:35 Open theology vs closed worlds.
01:48:05 The irony of Enlightenment.
01:52:11 What should educators be aware of around AI? Joel's question to me.
00:00:00 Intro.
00:03:18 Stories. Why do they fascinate us? Mechanism of emotional manipulation.
00:06:45 True to life vs realistic. Star Trek. Dramatic real stories are rarely good.
00:10:08 GPT and stories. How to build stories word-by-word. What is creativity? Information and meaning. Saturation in the story space. Alignment of living and writing.
00:19:25 Yellowstone. What makes a story good? Lioness. Average is OK but don't expect to get paid for it.
00:23:24 GPT and storywriting. Write us a joke about migrants. A new episode of Sherlock.
00:28:54 Art = fire + algebra. A good story makes you want to stop watching it and reengage with your life.
00:31:53 Fire in children and mystics. Chickens: individuals vs a category. Surprise and predictability. Intelligence is overrated.
00:39:46 Christianity. The tension and harmony between the transcendent and the particular. The role of lived experience in storywriting.
00:45:47 The zombie myth. Metaverse. Transhumanism. What is AI after? Zuckerberg and jujitsu.
00:57:44 AGI singularity vs narrow social media AI. Cautious humbleness and exploration. Personalized storytelling and an autistic world. Disembodiment.
01:15:50 GPT: how to use it for writing better stories? Support the research process then go and face real life. Paradoxically it will slow down the writing process.
01:22:16 Love: learning by loving vs just downloading knowledge.
00:00:00 Intro. Why philosophy of mind is important for AI and for me as a human.
00:04:53 Schools on consciousness, a historical perspective. Conscious person and conscious mental state, thought, experience, or perception.
00:13:16 Qualia and the hard problem. The objective/subjective divide.
00:20:40 Some options: physicalism, dualism, panpsychism, illusionism.
00:24:14 My reaction to the word "illusion" is that I get angry.
00:26:37 Pain _is_ real in illusionism. What we can be wrong about is _what_ it actually is.
00:29:15 Dual aspect monism and Mark Solms. Functional view of consciousness.
00:33:35 Zombies: the philosophical, the existential, and the AI zombie.
00:39:09 Panpsychism, the inner light, what it is to be like an electron. The combination problem. Psychological function.
00:44:19 Why are we positing a private world? Because we have one. But what if it is a sort of illusion?
00:46:01 There is no magical awareness of anything. Awareness requires a mechanism. First of Keith's points from this presentation: https://www.youtube.com/watch?v=M1jnW0MHlb8&t=2081s
00:49:04 The same goes for self-awareness and for mental states and emotions.
00:50:43 You can't know being in a state by being in it. Do cats know they are angry? What do you mean by knowing? Reflection and transparency.
00:54:19 Money. The phenomenal properties, the stories we build around conscious feelings, are similar to money: one can say that they are both illusory constructs. Daniel Dennett. Illusory does not mean meaningless.
01:01:04 But isn't all perception illusory then? Should we have a concept of useful and non-useful illusions?
01:05:39 Gripped by pain. Nothing above the physiological reactions. What about altered states of consciousness? Dissociation?
01:10:47 If you seem to be aware of something scientifically inexplicable, it is more likely that you are misperceiving or misinterpreting something.
01:17:56 Illusionism as a way of life? Nagel on Dennett (https://www.nybooks.com/articles/2017/03/09/is-consciousness-an-illusion-dennett-evolution).
01:20:57 Does illusionism deny the _authority_ of the first-person view over the third-person view?
01:24:42 How did Keith become an illusionist?
01:27:51 Stage magic, feeling cheated or not, stage vs real magic.
01:28:46 Music. All that evolution cares about is the effect. Evolution selected the mechanism that made us feel metaphysically special.
01:31:41 How do we decide what is real, what is not? Consciousness = awareness? Can you be in pain without being aware of it?
01:35:38 1st person falsifiability: can you imagine any experience you would go through that would change your mind about illusionism?
01:43:47 AI. Keith's question: am I enthusiastic, am I frightened? AGI, social media recommendation, GPT, embodiment.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
00:00:00 Intro: empirical vs armchair philosopher. Visual vs tactile understanding.
00:06:08 How subjective experience arises from physical matter. Entering wine into your body vs seeing a tomato. Perception of color, pain, interoception.
00:14:09 Consciousness is not a thing; "conscious" is an adjective.
00:18:24 Brain is (part of) the body. The developmental biology view.
00:21:23 Immune system: what is you, what is not you?
00:24:30 Cells, tissue, (https://www.youtube.com/shorts/Rvmvt7gscIM) organs, body, how does hierarchical agency function? The relational ontology.
00:27:08 Love, hate, and self, me and not me, at every level.
00:28:24 Pregnancy. How selves are created and negotiated.
00:30:45 Homeostasis and autopoiesis. Allostasis and homeorhesis. We are systems that create themselves. How do we learn to deal with gravity? Self disorders.
00:38:51 How to do science about first person experience? Reported lived experiences, physiological measurements, brain imaging.
00:44:36 Depersonalization. No self-organizing system without movement. Transparent background and its crack. Can't afford processing the self in the background. Sense of touch or odor can bring you back. Fetuses touch themselves and each other.
00:55:14 John Vervaeke's 4 Ps. Is depersonalization a disorder of relevance realization? Automatic vs automaton.
01:01:09 Meditation vs depersonalization, phenomena of the same system? Psychosis and depersonalization.
01:07:29 Movement and depersonalization.
01:09:55 Autism and depersonalization. Why is Anna interested in depersonalization?
01:13:23 Feeling like a zombie or a ghost.
01:16:22 Dissociation and depersonalization.
01:19:01 The detachment of scientists from their subjects. Anna's question to me: what drove me to create the podcast? Soul hunting through interacting with people. We can't do it alone. Movement medicine and contact dance.
00:00:00 Intro: Mark's journey from undergrad physics to psychology, philosophy, and theology.
00:05:17 Philosophy: "Who you are is directly related to what you know".
00:06:25 The psychology of being a scientist. Lived dualism.
00:07:49 Becoming a scientist is to become a certain kind of person, and this very much shapes the scientific worldview.
00:08:46 AI is leaked from the lab and becoming a focal point around which the world is turning.
00:12:05 Psychology and theology. How are they related? Developmental psychology. Thresholds, crisis moments, self-transcendence.
00:17:23 Intelligence is a deeply felt notion. Crisis, suffering, struggle, not knowing who we will become, are part of our intelligence. It is not isolated but part of the cosmos, which is why science can be done.
00:20:11 Self-transformation is optional as an adult. Dante and Barfield: are individual and collective transformations similar?
00:32:28 Dante and AI. In hell there is no novelty: closed data set, no imagination, frozen world. Paradise: the joy of knowing more. Music.
00:36:00 Fear.
00:37:15 Addiction. AI recommendation engines. The infinite scroll.
00:41:25 Francis Bacon: "technology was given to humanity by God to bring us back to the garden of Eden, to relieve suffering". Infinite as more and more vs the one thing opening onto all things.
00:43:53 The therapeutic use of addictive AI. https://balazskegl.substack.com/p/mental-jujitsu-between-me-and-the My story with social media addiction and martial arts and its theological interpretation.
00:49:48 Turing test. First of Mark's ten points. https://www.youtube.com/watch?v=LHIvKFY2kbk The dialogical Turing test. https://balazskegl.substack.com/p/gpt-4-in-conversation-with-itself GPT-4 is a much more sophisticated hell.
00:55:53 AI leaders are incentivized to oversell AI. Governments do have levers. "We need more science."
00:58:57 AI and feelings. What is to be human?
01:01:20 Model of cognition is not cognizant itself.
01:04:21 Metaphors of mind. Engineers realize metaphors.
01:07:12 Engineering organisms vs machines. Michael Levin.
01:09:40 We dwell in presence, not just compute. We participate in reality, not just observe it. The field metaphor. Memory is not stored in the brain.
01:16:30 Technology is unadaptive. It is not built into reality, but our reality is built in a way into which technology fits. Thinking machine is an ancient dream.
01:19:13 Attention is a moral act. How to manage fear. The GPT panic.
01:23:35 Love of life is part of intelligence. Intellect is driven towards what is loved. The silence around the words.
01:26:40 Embrace the boredom. Dwell in uncertainty. Convert suffering to hope. Think about our own psychology. The purgatorial state.
01:29:27 Mark's question to me: do I feel that this crisis moment can be a turn for the better, rather than this panic-driven fear of what's going to happen. Agency in AI. Cybernetics.
01:34:40 Alignment. John Vervaeke's program of bringing up AI.
01:38:59 The exponential take-off is a theology. Agency gets more dependent as it gets more sophisticated.
01:41:31 The high-functioning zombie metaphor. The fear of zombie AI is what will create it.
00:00:00 Intro. Gael's journey from physics to AI through coding, then health and social science applications.
00:06:17 Looking for impact. Are we using our energy to solve the best problems? How to estimate future impact?
00:12:18 How did interacting with a wide variety of sciences change you as an AI researcher? Out-of-the-box. Empirical research.
00:13:19 Benchmarks. How they incorporate value and drive AI research. AI went from a mathematical to an empirical science. Fei-Fei Li and ImageNet.
00:19:07 The Autism Challenge: predict the condition from brain imaging. How to avoid fooling ourselves?
00:25:24 How did the medical community react? The clash between what is true and what is valuable.
00:27:15 How do you measure your scientific impact?
00:31:09 Scientific/technological and societal progress.
00:33:01 Recommender AI and the 2007 Netflix challenge.
00:35:14 How to deal with social media addiction.
00:42:06 Scikit-learn. The Toyota of AI. Origin story.
A well-designed tool for scientists is also useful for business.
00:47:03 Open source organizational structure. Ecosystem building.
00:55:57 Deep learning and scikit-learn.
00:59:37 Sociology and psychology of scikit-learn.
01:09:57 How to bring science home. AI has become an ice breaker.
01:13:13 Gael's question: what excites me these days. Being seen; clarifying my thoughts through dialogs and writing; agency, RL, and putting AI in hardware we connect with; moving my body.
00:00:00 Intro
00:04:20 The LHCb experiment. Fundamental particle physics. Why isn't there as much antimatter as matter? Timescales.
00:17:18 Shortcomings of the Standard Model. Dark matter. The LHC.
00:22:24 The LHCb collaboration. Organization of a scientific experiment beyond the Dunbar number. Career development in academia and physics.
00:27:03 The skills and day job of an LHC physicist. The messy organization of a big experiment. Technical vs physics work.
00:32:33 The (lack of) management levers and incentives. 70+ institutes. Where does meaning come from?
00:38:26 Vava's journey from war-torn Yugoslavia through Vienna and Oxford to CERN in Geneva and permanent position in Paris.
00:41:23 Why physics? Curiosity and introversion. The helpers on a hero's journey.
00:45:03 The real-time aspects: how we take the data. 30 million+ proton-proton collisions, a few to 30+ Terabits per second. The real-time trigger reduces the rate by 3-4 orders of magnitude.
00:48:39 Working in a small group. Career without planning in the early days vs students today.
00:51:48 Early adoption of AI and GPUs in the real-time trigger. Separate signal (interesting events) from background (known particles) in a million-dimensional space. Reconstruction cuts it down to 10-20 features where we apply Boosted Decision Trees. Training data and simulation. Neural nets? Sometimes, in complex feature spaces, for example in the calorimeter.
01:01:16 Simulation to real data: systematic uncertainties. How to prioritize what to care about? The soft process and social structure of scrutinizing results. The effect of the aggregated knowledge of the collaboration.
01:09:29 The delicacies of the scientific method: the look-elsewhere effect and unknown unknowns. The soft side of the Popperian ideal.
01:16:46 Who decides what to go after in physics? LHCb: 20 years × 100 PhD theses is a lot of investment. The role of the critical mass.
01:20:27 The International Linear Collider and the sociology of the next big experiment.
01:24:10 ATLAS = Cathedral. The deep metaphor: multi-generational experiments. The sacrifice of early-career scientists.
01:29:56 Vava's dream for the rest of his career. Survival guilt.
01:36:01 Science at home.
01:39:43 Why the podcast? - Vava's question to me. Spirituality and science. Anger and separation anxiety. This little corner of the internet. Truth and importance: the daily dilemma at the Paris-Saclay Center for Data Science.
00:00:00 Intro. Bogdan's journey through the French system towards PhD in AI. Inspiration by early DeepMind papers, research on LSTM and other recurrent architectures.
00:05:29 Oxford postdoc between ML and Neuroscience, theory of mind. Turn towards safety. Influence of Nick Bostrom's 2014 book Superintelligence https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies.
00:07:29 AI acceleration. AlphaGo, StarCraft, BART, GPT-2. Language. Learning people's preferences.
00:10:24 The curious path of an independent AI safety researcher. 80,000 Hours https://80000hours.org/, the Alignment Forum https://www.alignmentforum.org/, LessWrong https://www.lesswrong.com/, effective altruism.
00:18:15 GPT: in-context learning, math.
00:23:47 GPT: planning and thinking. Planning in the real world (reinforcement learning) vs planning in a math proof, planning as problem solving.
00:27:29 GPT: chain of thought. "Let's think about this step by step."
00:31:47 GPT: lying? HAL from 2001: A Space Odyssey. Does GPT have the will to do something? Simulators, Bayesian inference, simulacra, autoregressivity. The surprising coherence of GPT-4. Playing personas.
00:43:38 GPT: reinforcement learning with human feedback. RLHF is like an anti-psychotic drug? Or cognitive behavioral therapy?
00:45:38 GPT: Vervaeke's dialog Turing test https://balazskegl.substack.com/p/gpt-4-in-conversation-with-itself.
00:52:36 AI Safety. The issue of timescale. The OpenAI initiative https://openai.com/blog/our-approach-to-ai-safety. Aligning by debating.
00:57:46 Direct alignment research; Bogdan's pessimism.
The 2-step approach: automate alignment research. Who will align the aligner AI?
01:04:11 Alignment by giving agency to AI. Embodiment. Let them confabulate but confront reality.
01:12:09 Max Tegmark's waterfall metaphor. Munk debate on AI https://www.youtube.com/watch?v=144uOfr4SYA, Yoshua Bengio's interview https://www.youtube.com/watch?v=0RknkWgd6Ck.
01:22:21 Open source AI. George Hotz interview https://www.youtube.com/watch?v=dNrTrx42DGQ. Bogdan's counterargument: engineering a pandemic. Some tools make a few people very powerful.
01:28:15 Adversarial examples.
01:31:32 Bogdan's dreams and fears, where are we heading?
00:00:00 Intro.
00:01:34 Jonas' story. Math, physics towards computer science, AI, and robotics.
00:05:09 Embodiment in intelligence.
00:07:25 LLMs in robots. Should we put LLMs in robots? Can we teach them as kids if they can speak? Will they develop their personalities depending on their unique experience? Should we add episodic memory to them?
00:19:02 Relevance realization. How to filter important information from the immense incoming flow of signals? The top-down aspect of perception. Cultural learning and binding.
00:23:30 Limits of general intelligence. Is there an inherent limit to how intelligent a being can be? What if too much intelligence makes the map take over the agent, leading to something like schizophrenia?
00:29:20 Continual learning. The brain is a little scientist. The scientific method: where do the hypotheses come from? Where does the value of a proposition come from? How do we decide what proposition to prove or what experiment to run? Why did I work on the Higgs boson?
00:37:37 Dog intelligence. Do dogs want to "go beyond" what is "visible", or is it a purely human drive?
00:40:00 Collective alignment. Higher level collective consciousness and its relationship to human and AI alignment.
00:47:21 AGI. How far are we from AGI?
00:50:38 Robots. Bodies are the bottleneck of robotics research.
00:57:09 Jonas' dream: connecting the dots, merging the cognitive modules, and experimenting in the real world, towards a dog intelligence in 5-10 years.
01:01:13 High-functioning zombies. Should we be afraid of them: an agent smart enough to plan, but not smart enough to see the harm that some of the planned actions may cause?
1:35 Screwdriver hands: how Giuseppe became a scientist.
6:07 Reinforcement learning (RL) explained to Giuseppe's grandma.
9:01 Model-based reinforcement learning: how to fry an egg.
10:45 The three components of model-based reinforcement learning: the actor, the model, and the planner.
16:29 Planning = thinking: ham & eggs and Google Maps.
19:01 RL is responsible for collecting its own data
22:05 Vervaeke's first two Ps: propositional (GPT) and procedural (RL)
24:05 Fear of AI: the paperclip scenario, evil, indifference, and foolishness.
28:19 Should we add agency into AI? Can we socialize AI as we do with kids?
32:22 Do we need to embody AI to align it? Can GPT bike? Is textual knowledge everything?
36:28 Why aspire to the dream of creating AI? 1) Why not? :) 2) Curiosity. 3) To understand ourselves better.
41:52 How AI changed our lives. Recommendation engines, addiction, jujitsu, conspiracy theories, the attention economy.
49:12 Open source large language models. AI as a mirror. Individuation of LLMs.
51:16 Putting AI into things: they will individuate.
52:56 Vervaeke's 2nd person Turing test. Let GPTs talk to each other.
54:37 Could AI manipulate us?
56:11 Closing.