The symposium will take place in Kraków at the Institute of Physics of the Jagiellonian University.
Abstract submission (oral or poster presentation) is open until 31 May 2019. Information about acceptance will be sent by 15 July 2019.
Fees:
early registration (payment by 30 June 2019): 250 EUR (1000 PLN)
regular registration (payment by 31 July 2019): 300 EUR (1250 PLN)
PhD students: 200/250 EUR (early/regular), 850/1000 PLN (early/regular)
accompanying person: 50 EUR (250 PLN)
The conference fee covers conference materials, the get-together party, the conference dinner (banquet), coffee breaks, and lunches during the conference. Accommodation is not included in the conference fee.
Payment is accepted via bank transfer only:
Bank Pekao S.A.
Account holder: KONTO WYDZIALOWE WFAIS
Address: ul. prof. S. Lojasiewicza 11, 30-348 Krakow
IBAN code: PL07 1240 4722 1111 0000 4855 9692
SWIFT CODE: PKOPPLPW
with reference to: "32MSS Last Name"
In case of a foreign transfer, please use the SEPA option (within the EU) or the OUR option (outside the EU); bank charges are to be covered by the participant.
Please note that we cannot handle cash payments.
We consider quantum dynamics on a graph, with repeated strong measurements performed locally at a fixed time interval $\tau$: for example, a particle starting on node $x$ with measurements performed on another node $x'$. By the basic postulates of quantum mechanics, the string of measurements yields a sequence no, no, no, $\ldots$ until finally, in the $n$-th attempt, a yes: the particle is detected. Statistics of the first detection time $n\tau$ are investigated and compared with the corresponding classical first-passage problem. Dark states, Zeno physics, a quantum renewal equation, the winding number for the first return problem (work of A. Grünbaum et al.), the total detection probability, detection time operators, and time wave functions are discussed.
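The stroboscopic detection protocol described above can be sketched numerically. The following is our illustration, not the speaker's code: the graph is assumed to be a tight-binding ring of $L$ sites, and all parameters are arbitrary. After each "no" outcome the (unnormalized) state is projected off the detection node, so $F_n = |\langle x'|\psi_n\rangle|^2$ is the probability of first detection at the $n$-th attempt.

```python
import numpy as np

# Assumed setup: tight-binding ring, particle starts on node x,
# detection attempted on node xd every time interval tau.
L, x0, xd, tau = 6, 0, 3, 1.0

# Tight-binding Hamiltonian on a ring
H = np.zeros((L, L))
for i in range(L):
    H[i, (i + 1) % L] = H[(i + 1) % L, i] = -1.0

# Unitary evolution over one detection interval tau
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T

# Projector onto the "not detected" subspace (all sites except xd)
P_no = np.eye(L)
P_no[xd, xd] = 0.0

psi = np.zeros(L, complex)
psi[x0] = 1.0

F = []  # F[n-1] = probability that the first detection occurs at attempt n
for n in range(50):
    psi = U @ psi                  # free evolution for time tau
    F.append(abs(psi[xd]) ** 2)    # detection probability at this attempt
    psi = P_no @ psi               # "no" outcome: project out the detector node

P_det = sum(F)  # total detection probability within 50 attempts
```

By construction the total probability is conserved: the sum of all $F_n$ plus the surviving norm equals one, so $P_\text{det} \le 1$ (it can stay below one when dark states exist).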
An evolving stochastic process, when interrupted at random epochs and reset to its initial condition, reaches a new nonequilibrium stationary state. The approach to the stationary state is accompanied by an unusual `dynamical phase transition'. Moreover, the mean first-passage time to a fixed target attains a minimum at an optimal value of the resetting rate. This makes the diffusive search process rather efficient. Resetting dynamics has been studied intensively in the last few years and is a rapidly emerging field within stochastic processes and nonequilibrium systems. In this talk, I'll give an overview of this evolving field.
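The optimal resetting rate mentioned above can be seen directly from the known closed-form result for 1D diffusive search with resetting, $T(r) = \big(e^{x_0\sqrt{r/D}} - 1\big)/r$, for a particle reset at rate $r$ to $x_0$ with a target at the origin (parameters below are illustrative):

```python
import numpy as np

# Mean first-passage time T(r) of 1D diffusion with stochastic resetting,
# evaluated from the closed-form expression (assumed parameters D, x0).
D, x0 = 1.0, 1.0
r = np.linspace(0.01, 10.0, 1000)
T = (np.exp(x0 * np.sqrt(r / D)) - 1.0) / r

r_opt = r[np.argmin(T)]
# T diverges as r -> 0 (free diffusion has infinite MFPT) and grows again for
# large r (the particle is pinned near x0), so a finite optimal rate exists.
```

For these units the minimum sits near $r^{*} x_0^2 / D \approx 2.54$, the root of the transcendental optimality condition for this model.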
In this work we consider the role of active inclusions in a growing interface, for example membrane-binding proteins which catalyse growth in the plasma membrane of eukaryotic cells. The interface is thus rendered active and is described by two coupled fields: the height field of the interface and the density of the inclusions. The equations generalise to active interface growth the Kardar-Parisi-Zhang (KPZ) equation, which describes nonequilibrium growth and also represents many other systems driven out of equilibrium. In our model, inclusions gravitate towards minima of the height field and then catalyse growth, which generates interface waves. This leads to complex kinematic waves and pattern formation, and the proteins are able to surf the waves they create. The interface width displays a novel superposition of scaling and sustained oscillations distinct from KPZ physics.
F. Cagnetta, M. R. Evans and D. Marenduzzo, Phys. Rev. Lett. 120, 258001 (2018)
F. Cagnetta, M. R. Evans and D. Marenduzzo, Phys. Rev. E 99, 042124 (2019)
We analyze a couple of simple systems without a stationary probability distribution in order to show how to obtain detailed as well as integral fluctuation theorems for such systems. To reach this goal, we exploit a path-integral approach that is well suited to this kind of study. This methodology, together with a variational approach, is also exploited to analyze fluctuation theorems in the paradigmatic KPZ equation, as well as to determine a large deviation function. This leads us to conjecture that an upper critical dimension does not exist for the KPZ system.
The treatment of cancer by boosting the immune system is a recent and promising therapeutic strategy. During interactions, the immune system cells learn to recognize cancer cells. Analogously, the cancer cells can develop the ability to blend into the surrounding tissue and mislead the immune system cells.
I will present a model of cell interactions in the framework of thermostatted kinetic theory [1,2]. Cell activation, learning processes, and memory loss due to cell death are reproduced by regulating the cell activity introduced in the model. By analogy with energy dissipation in a mechanical system, the control of the activity fluctuations is achieved by a so-called thermostat. Proliferation of cancer cells is reproduced by autocatalytic processes. For each cell type, I will write down the thermostatted kinetic equations for the distribution functions of position, velocity, and activity and explain how the direct simulation Monte Carlo (DSMC) method has been adapted to solve them.
The numbers and activities of cancer cells and immune system cells are followed for different initial distributions of cells. The effect of the thermostat on cancer evolution will be compared to unexplained clinical observations. I will show that the model is able to reproduce an apparent elimination of the tumor preceding a long period of equilibrium, eventually followed by the proliferation of the cancer cells, according to a process identified as "the three E's" of immunoediting, for "Elimination, Equilibrium and Escape" [3,4].
We calculate the time-dependent probability distribution function (PDF) of an overdamped Brownian particle moving in a one-dimensional periodic potential energy $U(x)$. The PDF is found by solving the corresponding Smoluchowski diffusion equation. We derive the solution for any periodic even function $U(x)$ and demonstrate that it is asymptotically (at large times $t$) correct up to terms decaying faster than $1/t^{3/2}$. As part of the derivation, we also recover the Lifson-Jackson formula for the effective diffusion coefficient of the dynamics. The derived solution exhibits agreement with Langevin dynamics simulations. The approach is generalized for inhomogeneous systems where, in addition to the periodic potential, the particle also experiences a periodic diffusion coefficient. The application of a one-dimensional (Fick-Jacobs) diffusion equation for describing Brownian dynamics in periodic corrugated channels is also discussed.
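The Lifson-Jackson formula recovered in this derivation, $D_\text{eff} = D_0 / \big(\langle e^{U/k_BT}\rangle \langle e^{-U/k_BT}\rangle\big)$ with $\langle\cdot\rangle$ the spatial average over one period, is easy to evaluate numerically. A minimal sketch, assuming the example potential $U(x) = 2\cos x$ (our choice, not from the abstract):

```python
import numpy as np

# Lifson-Jackson effective diffusion coefficient for a periodic potential.
D0, kT = 1.0, 1.0
period = 2.0 * np.pi
U = lambda x: 2.0 * np.cos(x)   # assumed example potential, barrier 4 kT

# Spatial averages over one period on a uniform grid (endpoint excluded,
# so the mean is the exact periodic average up to discretization error).
x = np.linspace(0.0, period, 200_000, endpoint=False)
avg_plus = np.exp(U(x) / kT).mean()
avg_minus = np.exp(-U(x) / kT).mean()

D_eff = D0 / (avg_plus * avg_minus)   # always <= D0: barriers slow diffusion
```

For $U = A\cos x$ both averages equal the modified Bessel function $I_0(A/k_BT)$, so here $D_\text{eff} = D_0/I_0(2)^2 \approx 0.19\,D_0$.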
In this talk, we explore an approach to understanding price fluctuations within a market via considerations of functional dependencies between asset prices. Interestingly, this approach suggests a class of models of a type used earlier to describe the dynamics of real and artificial neural networks. Statistical physics approaches turn out to be suitable for an analysis of their collective properties. We first motivate the basic phenomenology and modeling arguments before moving on to discussing some major issues with inference and empirical verification. In particular, we focus on the natural creation of market states through the inclusion of interactions and how these then interfere with inference. This is primarily addressed in a synthetic setting. Finally we investigate real data to test the ability of our approach to capture some key features of the behavior of financial markets.
A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This behaviour has been interpreted as resulting from diffusion in inhomogeneous environments and represented mathematically through a variable, stochastic diffusion coefficient. Indeed, different models describing a fluctuating diffusivity have been studied. In particular, we focus on the theory of diffusing diffusivity and consider the very generic class of the generalised Gamma distribution for the random diffusion coefficient. Moreover, addressing the first-passage problem for a specific diffusing diffusivity model, we emphasize that even when the non-Gaussian character appears only in certain regimes and in the tails of the distributions (thus with low probability), it may be essential for those systems in which rare events dominate triggered actions.
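The "Brownian yet non-Gaussian" phenomenology can be reproduced with a minimal superstatistical sketch (our illustration, with a plain Gamma diffusivity, a special case of the generalised Gamma class mentioned above): each particle carries its own diffusivity $D$, and displacements are Gaussian given $D$. The ensemble MSD is still linear in $t$, but the displacement PDF develops heavy tails, visible as positive excess kurtosis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, t = 200_000, 1.0

# Random diffusivities with <D> = 1; shape=1 gives an exponential
# distribution of D, for which the displacement PDF is exactly Laplace.
D = rng.gamma(shape=1.0, scale=1.0, size=N)

# Conditionally Gaussian displacements: x | D ~ N(0, 2 D t)
x = rng.normal(0.0, np.sqrt(2.0 * D * t))

msd = np.mean(x ** 2)                    # = 2 <D> t, i.e. ordinary Brownian MSD
excess_kurt = np.mean(x ** 4) / msd ** 2 - 3.0   # 0 for a Gaussian, 3 for Laplace
```

The positive excess kurtosis quantifies exactly the non-Gaussian tails that dominate rare-event-triggered actions discussed in the abstract.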
Chemical processes in closed systems inevitably relax to equilibrium. Living systems avoid this fate and give rise to a much richer diversity of phenomena by operating under nonequilibrium conditions. Recent experiments in dissipative self-assembly also demonstrated that by opening reaction vessels and steering certain concentrations, an ocean of opportunities for artificial synthesis and energy storage emerges. To navigate it, thermodynamic notions of energy, work and dissipation must be established for these open chemical systems. Here, we do so by building upon recent theoretical advances in nonequilibrium statistical physics. As a central outcome, we show how to quantify the efficiency of such chemical operations and lay the foundation for performance analysis of any dissipative chemical process.
The sensitivity to perturbations of the Fisher, Kolmogorov, Petrovskii, and Piskunov (FKPP) wave front is used to find a quantity revealing the perturbation of diffusion in a concentrated solution. We consider two chemical species A and B engaged in the reaction A + B $\rightarrow$ 2A. When A and B have different diffusivities $D_A$ and $D_B$, the deterministic dynamics includes cross-diffusion terms due to the deviation from the dilution limit [1].
The behaviors of the front speed, the shift between the concentration profiles of the two species, and the width of the reactive zone are investigated, both analytically and numerically. The analytic results are deduced from a perturbation approach in the limit of small diffusion terms with respect to reaction terms. The shift between the two profiles turns out to be a well-adapted criterion presenting noticeable variations with the deviation from the dilution limit in a wide range of parameter values. In particular, the difference between the shifts obtained in a dilute system and a concentrated system increases as $D_B$ differs from $D_A$, especially in the case $D_B>D_A$ [2].
[1] L. Signon, B. Nowakowski, and A. Lemarchand, Phys. Rev. E 93, 042402 (2016).
[2] G. Morgado, B. Nowakowski, and A. Lemarchand, Phys. Rev. E 99, 022205 (2019).
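In the dilute limit the dynamics above reduces to the classical FKPP front, whose pulled-front speed $c = 2\sqrt{Dk}$ for $\partial_t u = D\,\partial_x^2 u + k\,u(1-u)$ can be checked with a minimal explicit integration (our illustrative sketch; grid and rate constants are assumed, and the cross-diffusion corrections of the concentrated case are not included):

```python
import numpy as np

# Explicit finite-difference integration of the FKPP equation
# u_t = D u_xx + k u (1 - u); front speed should approach 2*sqrt(D*k) = 2.
D, k = 1.0, 1.0
dx, dt = 0.1, 0.002          # dt*D/dx^2 = 0.2 < 0.5: scheme is stable
N = 2500
x = np.arange(N) * dx
u = (x < 5.0).astype(float)  # step initial condition

def step(u, n_steps):
    for _ in range(n_steps):
        lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
        lap[0] = lap[-1] = 0.0           # pin boundary values (u=1 left, u=0 right)
        u = u + dt * (D * lap / dx ** 2 + k * u * (1.0 - u))
    return u

def front_position(u):
    return x[np.argmax(u < 0.5)]         # first grid point where u drops below 1/2

u = step(u, 15000)                        # let the front relax (t = 30)
x1 = front_position(u)
u = step(u, 15000)                        # measure displacement over t = 30
x2 = front_position(u)

c = (x2 - x1) / (15000 * dt)              # measured front speed
```

The measured speed approaches $2$ from below, consistent with the slow $\sim 1/t$ convergence of pulled fronts; perturbations of the diffusion terms, as in the concentrated solution, shift this speed and the profile shift studied above.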
Quantum analogs of classical random walks have been defined in quantum information theory as a useful concept to implement algorithms. Due to interference effects, statistical properties of quantum walks can drastically differ from their classical counterparts, leading to much faster computations.
We shall present various statistical properties of continuous-time quantum walks on a lattice, such as: survival properties of quantum particles in the presence of traps (i.e. a quantum generalization of the Donsker-Varadhan stretched exponential law), the growth of a quantum population in the presence of a source, quantum return probabilities and Loschmidt echoes.
The classical first-passage theory for random walks is generalized to quantum systems by using repeated attempts with a fixed frequency $1/\tau$ to find the system in the detection state $| \psi_\text{d}\rangle$. The first successful attempt defines the time $T = N \tau$ of first detected arrival. Here, the Zeno limit $\tau\to0$ of diverging detection frequency is investigated. The repeated detection setup is compared with a non-Hermitian Schrödinger equation. Using an electrostatic analogy we can determine all absorption modes in the Zeno limit and find the pdf as well as all moments of $T$ for systems with a discrete energy spectrum. The pdf has a scaling form in $\tau$. Applying known results from the repeated detection setup to the non-Hermitian equation shows that the mean dissipation time in the latter system is quantized.
The model of a step quantum heat engine (SQHE) is defined as a working body, given by a two-level system (TLS), acting separately (i.e. in steps) with the heat baths and the energy storage system (a battery). A single step of the engine is defined as a unitary and energy-conserving operation. For the general SQHE we prove a fundamental bound on the attainable efficiency, given as a function of the cold and hot temperatures, which lies below the Carnot efficiency. The reason is that the engine is quasi-autonomous, i.e. there is no extra external control, such as the fields commonly used in non-autonomous settings; on the contrary, the SQHE is realised by a unique physical process of TLS population inversion via a strong coupling with the heat bath. For our model of the SQHE we additionally discuss the problem of defining work for fully quantum systems. So far, one of the reasonable definitions of work (consistent with the fluctuation theorems) is given by the change in the mean energy of the battery, which additionally has a translational symmetry, i.e. these changes do not depend on how much energy is currently stored in the battery. However, this symmetry imposes the nonphysical property that the battery cannot have a ground state. We solve this problem by showing that a battery with a ground state can be used as a proper energy storage system only if the work is defined as a change of ergotropy instead of mean energy.
The development of multicellular organisms is a dynamic process in which cells divide, rearrange, and interpret molecular signals to adopt specific cell fates. Despite the intrinsic stochasticity of cellular events, the cells identify their position within the tissue with a striking precision of one cell diameter in the fruit fly or three cell diameters in the vertebrate spinal cord. How do cells acquire this positional information? Where is this information encoded and how do cells decode it to achieve the observed level of cell fate reproducibility? These are fundamental questions in biology that are still poorly understood. In this talk, I will combine information theory methods and mechanistic models to address these questions. I will investigate to what extent the level of noise in the input signals affects the precision of the resulting gene expression pattern. I will present a data-driven analysis of the gene regulatory network that interprets two positional cues in the developing spinal cord. Interestingly, the observed precision of the gene expression pattern is close to the theoretical limit of precision of decoding noisy signals.
The Eliazar-Klafter targeted stochasticity concept, together with that of reverse engineering (reconstruction of the stochastic process once a target pdf is a priori given), was originally devised for Lévy-driven Langevin systems. Its generalization, discussed in [PRE 84, 011142 (2011)], involves a non-Langevin alternative which associates with the same Lévy driver and the same target pdf another (Feynman-Kac formula related) confinement mechanism for Lévy flights, based on a direct response to energy (potential) landscapes instead of to conservative forces. We revisit the problem of Lévy motion in steep potential wells, addressed in [A. A. Kharcheva et al., J. Stat. Mech. (2016), 054029] and [B. Dybiec et al., PRE 95, 05201 (2017)], and discuss the alternative semigroup (Feynman-Kac) motion scenario. Our focus is on a link with the problem of boundary data (Dirichlet versus Neumann, or absorbing versus reflecting) for the Lévy motion and its generator on an interval (or, in general, a bounded domain).
Lévy walk processes with rests, restricted to a region bounded by two absorbing barriers, are discussed. The waiting time between the jumps is given by an exponential distribution, with either a constant or a position-dependent jumping rate. The time of flight for both ranges of $\alpha$, lower $(0,1)$ and higher $(1,2)$, is considered.
For a constant jumping rate, two limits are taken into account: the short-waiting-time limit, which corresponds to Lévy walks without rests, and the long-waiting-time limit, which exhibits properties of the Lévy flight model. The quantities describing the escape process, the first-passage time distribution and the mean first-passage time, are analysed. The analytical results are compared with Monte Carlo trajectory simulations.
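A Monte Carlo trajectory simulation of this kind can be sketched as follows (our illustration: the barriers, speed, jumping rate, and the Pareto flight-time law with $\alpha = 1.5$ from the higher range are all assumed for concreteness):

```python
import numpy as np

# Levy walk with rests between two absorbing barriers at x = 0 and x = L:
# exponential rests with constant rate, ballistic flights with speed v and
# Pareto-distributed flight durations (index alpha in the higher range (1,2)).
rng = np.random.default_rng(1)
L, v, alpha, rate = 10.0, 1.0, 1.5, 1.0
n_traj = 2000

fpts = []
for _ in range(n_traj):
    x, t = L / 2.0, 0.0                            # start in the middle
    while 0.0 < x < L:
        t += rng.exponential(1.0 / rate)            # rest before the next flight
        dur = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto(alpha) flight time, >= 1
        direction = rng.choice([-1.0, 1.0])
        x_new = x + direction * v * dur
        if x_new <= 0.0:                            # flight crosses the left barrier:
            t += x / v                              # absorb at the crossing time
            x = 0.0
        elif x_new >= L:                            # flight crosses the right barrier
            t += (L - x) / v
            x = L
        else:
            t += dur
            x = x_new
    fpts.append(t)

mfpt = np.mean(fpts)   # Monte Carlo estimate of the mean first-passage time
```

Note that absorption happens mid-flight at the barrier-crossing time, which is what distinguishes a Lévy walk (finite velocity) from a Lévy flight (instantaneous jumps) in an escape problem.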
I will present the solutions of Volterra equations with fading memory given by the Prabhakar function with negative upper parameter, which is relevant to the standard non-Debye models of dielectric relaxation, namely the Cole-Cole, Cole-Davidson, and Havriliak-Negami models. These integro-differential equations are solved using umbral calculus and the Laplace transform method, whose results are identical for the same fixed values of the parameters.
Although there is not a complete “proof” of the second law of thermodynamics based on microscopic dynamics, two properties of Hamiltonian systems have been used to prove the impossibility of work extraction from a single thermal reservoir: Liouville’s theorem and the adiabatic invariance of the volume enclosed by an energy shell (Helmholtz's theorem). In this talk, I will review these two properties and analyze the dynamics of isothermal and microcanonical Szilard engines in the phase space. In particular, we will see that ergodicity breaking plays a crucial role in all these variants of the Maxwell demon because the enclosed volume is no longer an adiabatic invariant in non-ergodic systems.
We show that the entropy production in small open systems coupled to environments made of extended baths is predominantly caused by the displacement of the environment from equilibrium rather than, as often assumed, the mutual information between the system and the environment. The latter contribution is strongly bounded from above by the Araki-Lieb inequality, and therefore is not time-extensive, in contrast to the entropy production itself. Furthermore, we show that in the thermodynamic limit the entropy production is associated mainly with generation of the mutual information between initially uncorrelated environmental degrees of freedom. We confirm our results with exact numerical calculations of the system-environment dynamics.
Transitions to chaos have been previously extensively studied in different setups of randomly connected networks. The prevailing assumption is that, due to the central limit theorem, synaptic input can be modeled as a Gaussian random variable. In this scenario, a continuous transition has been found in rate models with smooth activation functions. However, these models do not take into account that neurons feature thresholds that cut off small inputs. With such thresholds, the transition to chaos in Gaussian networks becomes discontinuous, making it impossible for the network to stay close to the edge of chaos and to reproduce biologically relevant low activity states.
Here we introduce a model with a biologically motivated, heavy-tailed distribution of synaptic weights and analytically show that it exhibits a continuous transition to chaos. Notably, in this model the edge of chaos is associated with well-known avalanches. We validate our predictions in simulations of networks of binary as well as leaky integrate-and-fire neurons. Our results uncover an important functional role of non-Gaussian distributions of synaptic efficacy and suggest that their heavy tails may form a weak sparsity prior that can be useful in biological and artificial adaptive systems.
A $q$-neighbor majority-vote model for the opinion formation is introduced in which agents
represented by two-state spins update their opinions on the basis of the opinions of
randomly chosen subsets of $q$ of their neighbors ($q$-lobbies). The agents with probability
$(1-2p)$, $0\le p\le1/2$, obey the majority-vote
rule in which the probability of the opinion flip depends only on the sign of the resultant
opinion of the $q$-lobby, and with probability $2p$ act independently and change opinion or
remain in the current state with equal probabilities. Thus, the parameter $p$ controls the
degree of stochasticity in the model. In the model under study the
agents are located in the nodes of complex networks, e.g., Erd\"os-R\'enyi graphs or
scale-free networks, and the neighborhood of each agent consists of all agents
connected with him/her by edges, out of which the $q$-lobby is chosen randomly at each
step of the Monte Carlo simulation. This model is related to a recently introduced
$q$-neighbor Ising model [A.\ J\c{e}drzejewski et al., Phys.\ Rev.\ E 92, 052105 (2015);
A.\ Chmiel et al., Int.\ J.\ Modern Phys.\ C 29, 1850041 (2018)], with agents obeying the
Metropolis opinion update rule, in which, in particular, a first-order ferromagnetic transition was
reported, with the width of the hysteresis loop oscillating with $q$. In contrast, in the
$q$-neighbor majority-vote model only a second-order ferromagnetic transition is observed.
Theory for this transition is presented both in the mean-field approximation, valid for
large mean degrees of nodes and large $q$, and in a more elaborate pair approximation. In the
latter case the predicted location of the critical point $p_{c}$ agrees quantitatively with that
obtained from Monte Carlo simulations for various complex networks with broad range of
mean degrees of nodes and sizes of the $q$-lobby. Finite size scaling analysis shows that
in the vicinity of the critical point the magnetization shows scaling typical for the
mean-field Ising model, with the critical exponent $\beta = 1/2$, but other critical
exponents depend on the topology of the underlying complex network.
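A single Monte Carlo update of the model described above can be sketched as follows (our illustration: the network here is a complete graph for simplicity, and the convention that a tied lobby leaves the opinion unchanged is our assumption, since the abstract does not specify the tie rule):

```python
import numpy as np

rng = np.random.default_rng(42)

def mv_update(spins, neighbors, q, p):
    """One update of the q-neighbor majority-vote model: pick a random agent;
    with probability 2p it acts independently (flip or stay with prob. 1/2),
    otherwise it adopts the majority opinion of a random q-lobby."""
    i = rng.integers(len(spins))
    if rng.random() < 2.0 * p:
        if rng.random() < 0.5:          # independent behaviour
            spins[i] = -spins[i]
    else:
        lobby = rng.choice(neighbors[i], size=q, replace=False)
        s = np.sum(spins[lobby])
        if s != 0:                       # follow the sign of the lobby's opinion
            spins[i] = 1 if s > 0 else -1
        # s == 0 (possible for even q): opinion unchanged -- assumed convention

# Usage: N agents on a complete graph (stand-in for a dense complex network)
N, q, p = 100, 4, 0.1
spins = rng.choice([-1, 1], size=N)
neighbors = [np.array([j for j in range(N) if j != i]) for i in range(N)]
for _ in range(10_000):
    mv_update(spins, neighbors, q, p)

m = abs(spins.mean())   # magnetization, the order parameter of the transition
```

Sweeping $p$ and recording $m$ for increasing $N$ is how the second-order transition point $p_c$ would be located numerically.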
Investigations of resistive switching have recently attracted much attention. Electronic devices whose functioning is based on resistive switching are called memristors. The memristor, a new fundamental element of the electrical circuit that dissipates energy and has memory, was theoretically predicted by Chua in 1971, but found its hardware implementation only in 2008. It consists of a thin (from several nanometers to several tens of nanometers in thickness) dielectric film sandwiched between two conductive electrodes. The switching of a memristor from the low resistance state (LRS) to the high resistance state (HRS) is achieved by the rupture of a conducting filament by a voltage pulse (the so-called RESET process). The filament can be restored by a voltage pulse of the opposite polarity, which results in switching from the HRS back to the LRS (the so-called SET process). As a result, its current-voltage characteristic is nonlinear and takes the form of a hysteresis loop. At present, memristors have found application in diverse areas of science and technology, ranging from information processing to biologically inspired systems. In particular, they are considered promising for application in next-generation non-volatile computer memory (Resistive Random Access Memory, ReRAM), in neuromorphic computer systems, etc.
All previous theoretical and experimental studies have neglected the important effect of noise on the memory properties of these elements. As a result, an adequate stochastic model of the memristor, taking into account many different factors as well as internal and external noises, is still far from being constructed. Difficulties in creating an adequate model are associated with the complex physico-chemical reactions occurring inside the film under the action of an applied electric field, the structure of the conducting filament, setting the right conditions at the boundaries with the contacts, the variety of memristor materials, etc. It has already become clear that, to create a realistic model of the device, the ideal memristor models proposed by Chua are not enough, and it is necessary to consider the system as multistable in terms of a statistical physics approach.
In this report, after a brief overview of previous achievements in this area, new results of both theoretical and experimental studies of memristors performed in the "Laboratory of Stochastic Multistable Systems" of the National Research Lobachevsky State University of Nizhni Novgorod will be presented. Among them: experimental investigations of resistive switching in a memristor based on a thin-film ZrO2(Y)/Ta2O5 stack under a random noise voltage in the form of a white Gaussian noise signal with certain parameters; measurements of the activation energies of oxygen ion diffusion in yttria-stabilized zirconia by flicker-noise spectroscopy; probabilistic analysis of the voltage-controlled and the current-controlled ideal memristor under the action of an external voltage in the form of Gaussian noise; and non-stationary distributions and relaxation times in a stochastic model of a memristor.
Enormous progress in machine learning, together with excellent implementations on user-friendly platforms, has pushed many of us towards this methodology. Can we get better explanations for the studied data? Can we get these explanations more easily? In the following we deal with data formed from recordings of healthy people of different ages and sexes. The problem is how age and/or sex influence the normal rhythm of a healthy heart.
The healthy human heart remains under the permanent influence of both branches of the autonomic nervous system (ANS): the parasympathetic (considered to slow down heart rate) and the sympathetic (considered to speed up heart rate). Many measures estimating heart rate variability (HRV) have been proposed in order to quantify the regulatory function of the ANS. Intensive studies of healthy populations have found a correlation between an increase in age and a decrease in many HRV indices. Therefore, higher values of HRV have been attributed to better organization of the feedback reflexes driving an organism's response to actual body needs. However, there are observations suggesting that abnormal levels of some indices should be related to erratic rhythms, i.e., rhythms resulting from remodeling of the cardiac tissue due to disease or aging. We hypothesize that an increase of measures of dynamical patterns in the elderly indicates unhealthy autonomic activity, or a possible erratic rhythm associated with degradation of cardiac tissue, or both. Such erratic rhythms might be the first stage of silently developing arrhythmogenesis.
The task of separating different groups of cardiac patients on the basis of HRV parameters is an urgent problem. If there are differences, it might be possible to find a noninvasive marker for specific cardiac diseases. Answering these questions demands wide knowledge about the way in which information hidden in heart rate variability reflects the actual state of the heart regulatory system.
Single-particle trajectories measured in microscopy experiments contain important information about dynamic processes occurring in a range of materials, including living cells and tissues. However, extracting that information is not a trivial task due to the stochastic nature of the particles' movement and the sampling noise. It usually starts with the detection of the corresponding motion type of a molecule, because this information may already provide insight into the mechanical properties of the molecule's surroundings.
The most common analysis method uses mean square displacement (MSD) curves. Within this approach, one fits the theoretical curves for various physical models to the data and then selects the best fit with statistical analysis. However, in many cases, the actual trajectories are too short for extracting meaningful information from the time-averaged MSDs. Moreover, the finite localization precision adds a term to the MSD, which can limit the interpretation of the data.
Classification of trajectories with machine learning (ML) algorithms, rooted in computer science and statistics, is one of the possible approaches to overcoming the problems of the MSD method. It is very appealing because it would enable automated analysis of many hundreds or even thousands of trajectories with a reduced amount of manual intervention and initial data curation.
Several attempts to analyze single-particle tracking (SPT) trajectories with traditional ML methods have already been carried out. However, the methods used for that purpose (e.g. the Bayesian approach, random forests, and simple back-propagation neural networks) belong to the class of so-called feature-based methods. Within this approach, each trajectory is described by a set of human-engineered features, and only those features are provided as input to a classifier model. Thus, similarly to the MSD-based methods, they require preprocessing of the raw data, and their interpretability may be limited for short and noisy trajectories.
In the talk, we will present a novel classification method based on convolutional neural networks (CNNs). CNNs are among the most popular deep learning algorithms and excel in image classification. Using them is very appealing because they operate on raw data: they do not require any feature selection or extraction carried out by a human expert. Instead, they use a cascade of multiple layers of nonlinear processing units for feature identification, extraction and transformation in order to learn multiple levels of data representation. The performance of the CNN classifier trained on artificial trajectories will be compared with two popular feature-based methods (random forest and gradient boosting).
References:
1. Patrycja Kowalek, Hanna Loch-Olszewska, Janusz Szwabiński, Classification of diffusion modes in single particle tracking data: feature based vs. deep learning approach, arXiv:1902.07942, submitted to Phys. Rev. E