Description
Memory effects are ubiquitous in nanoscale systems, arising in particular from resistive junctions that exhibit memristive behavior. In this work, we investigate the learning capacity of memristive networks, with a focus on nanowire and nanoparticle architectures. We discuss two examples of learning, namely two-phase and contrastive learning, in resistive and memristive networks. We show that learning capacity can be characterized by the fixed points of the network dynamics, which serve as attractors encoding stable computational states. This perspective naturally connects the analysis of memristive networks to algebraic geometry, where the fixed-point structure is captured by the Gröbner basis of a system of polynomial equations. We thereby establish a mathematical framework that links material-level memory dynamics to the emergent information-processing abilities of physical systems.
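To make the algebraic-geometry connection concrete, the following is a minimal sketch (our own construction, not taken from the talk): for a hypothetical network whose internal memristive states reach steady state when a pair of polynomial equations vanish, a Gröbner basis triangularizes the system so the fixed points can be enumerated by back-substitution.

```python
# Toy illustration: fixed points of a hypothetical two-memristor network as the
# roots of a polynomial system, analyzed via a Groebner basis with sympy.
import sympy as sp

w1, w2 = sp.symbols("w1 w2")

# Hypothetical steady-state conditions dw_i/dt = 0 for internal states w_i,
# chosen only so that the system is polynomial with isolated solutions.
f1 = w1**2 + w2 - 1   # dw1/dt = 0
f2 = w1 + w2**2 - 1   # dw2/dt = 0

# Lexicographic Groebner basis eliminates w1, leaving a univariate
# polynomial in w2 whose roots index the fixed points.
G = sp.groebner([f1, f2], w1, w2, order="lex")
print(list(G.exprs))

# Each solution of the system is one fixed point (attractor candidate).
sols = sp.solve([f1, f2], [w1, w2], dict=True)
print(len(sols))
```

Here the basis contains a polynomial in `w2` alone, so the four fixed points follow from its roots together with the relation defining `w1`; in larger networks the same elimination idea applies, though the basis computation grows quickly.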