Speaker
Description
Understanding the neural basis of mental phenomena remains a great challenge. Statistical physics contributed to the development of attractor neural networks, which are our best models linking mental states to the physical properties of the brain. Information from the senses and from memory is embedded in high-dimensional patterns of neural activity that can be visualized with fMRI scans. Generative neural models based on diffusion processes resemble the formation of brain activity patterns, illustrating associative aspects of thinking in large language models. Transitions between attractor states simulate chains of thought.

Higher-level processes needed for different reasoning strategies are based on Good Old-Fashioned symbolic AI (GOFAI), which has reached superhuman levels in chess, Go, and many other games. New autonomous-learning model architectures may lead to similar results in many real-world applications, including medicine. Large language models (LLMs) internalize information in rich contexts, learning from text, spoken language, images, videos, and various other signals. Augmented reality glasses provide an additional source of visual and behavioral data. LLMs capable of associative thinking have been scaled up to billions of neural network parameters, compressing and internalizing most of the human knowledge stored in texts and multimedia.

Deeper understanding requires high-level abstractions drawn from observations, whereas current systems mostly approximate observations. Physics-informed systems can first be trained to respect constraints based on the laws of physics before they are applied to predict complex real-world phenomena. Attractor network simulations show how temporo-spatial processing disorders can be related to the properties of networks and individual neurons, and offer a neural interpretation of psychological phenomena.
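To make the attractor picture concrete, below is a minimal sketch (not taken from the talk) of a Hopfield-style attractor network in NumPy: a few binary patterns are stored with a Hebbian rule, and a noisy cue is then pulled by the network dynamics toward the nearest stored pattern, illustrating associative recall and convergence to an attractor state. The network size, number of patterns, and noise level are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N = 200                                        # number of model neurons
patterns = rng.choice([-1, 1], size=(3, N))    # three stored "memories"

# Hebbian weights: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-connections
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Asynchronous updates: each neuron takes the sign of its local field."""
    s = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 25% of one stored pattern and let the dynamics clean it up
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1

retrieved = recall(cue)
overlap = (retrieved @ patterns[0]) / N        # 1.0 means perfect retrieval
print(f"overlap with stored pattern: {overlap:.2f}")

In this toy setting the corrupted cue typically converges back to the stored pattern within a few sweeps; transitions between such attractor states are what the talk relates to chains of thought.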
How long will human expertise retain its value?