Showing posts with label dynamical systems.

Saturday, 24 February 2024

Inducing time-asymmetry on reversible classical statistical mechanics via Interventional Thermodynamic Ensembles (ITEs).

Preamble 

Probably one of the most fundamental issues in classical statistical mechanics is extending reversible dynamics to many-particle systems that behave irreversibly. In other words, how does time's arrow appear even though the constituent systems evolve under reversible dynamics? This is the core of Loschmidt's paradox. A resolution to this paradox lies in something called interventional thermodynamic ensembles (ITEs).

Leaning Tower of Pisa: recall Galileo's experiments (Wikipedia)

Time-asymmetry is about different histories: counterfactual dynamics

Before trying to understand how ITEs resolve Loschmidt's paradox, we observe that inducing different trajectories on an identical dynamical system in "a parallel universe" implies time-asymmetry, while a single trajectory provides reversibility. The so-called "parallel universe" amounts to imagining a different dynamics via sampling; this corresponds to counterfactuals within causal-inference frameworks.

Interventional Thermodynamic Ensembles (ITEs)

An interventional ensemble builds upon another ensemble; for the sake of simplicity, we can think of an ensemble as an associated chosen sampling scheme. From this perspective, a sampling scheme $\mathscr{E}$ has an interventional counterpart $do(\mathscr{E})$ if the adjusted scheme introduces only a change that does not alter the inherent dynamics but does affect the dynamical history. One of the first examples of this appeared recently: single-spin-flip vs. dual-spin-flip dynamics [suezen23], demonstrated with simulations.
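As an illustration of such an intervention, here is a minimal Python sketch, not the code of [suezen23], of single-spin-flip vs. dual-spin-flip Metropolis dynamics on a 1D Ising ring: the scheme $do(\mathscr{E})$ changes only the proposal (two flips per move instead of one), leaving the Hamiltonian and acceptance rule untouched, yet it produces a different dynamical history.

```python
import math
import random

def metropolis_ising(n=64, beta=0.5, steps=5000, flips_per_move=1, seed=42):
    """Metropolis sampling on a 1D Ising ring.

    flips_per_move=1 is the baseline scheme E (single-spin-flip);
    flips_per_move=2 is the interventional scheme do(E) (dual-spin-flip):
    same Hamiltonian and same acceptance rule, only the proposal differs.
    """
    rng = random.Random(seed)
    spins = [1] * n  # ordered initial configuration

    def energy(s):
        # Nearest-neighbour Ising energy with periodic boundary
        return -sum(s[i] * s[(i + 1) % n] for i in range(n))

    e = energy(spins)
    magnetisation = []
    for _ in range(steps):
        sites = rng.sample(range(n), flips_per_move)
        for i in sites:
            spins[i] = -spins[i]
        e_new = energy(spins)
        if e_new > e and rng.random() >= math.exp(-beta * (e_new - e)):
            for i in sites:  # reject: undo the proposed flips
                spins[i] = -spins[i]
        else:
            e = e_new
        magnetisation.append(sum(spins) / n)
    return magnetisation

m_single = metropolis_ising(flips_per_move=1)
m_dual = metropolis_ising(flips_per_move=2)
# Same seed and same Hamiltonian, yet the two histories differ.
```

Comparing `m_single` and `m_dual` step by step shows the intervention leaves the physics intact while the trajectory, the dynamical history, changes.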

Outlook

Reversibility and time-asymmetry in classical dynamics are long-standing issues in physics. Introducing a causal-inference perspective into computing the dynamical evolution of many-body systems leads to a reconciliation of reversibility and time-asymmetry, i.e., via the interpretation of the $do$-operator.

References

[suezen23] H-theorem do-conjecture (2023) arXiv:2310.01458 (simulation code GitHub).

Please Cite as:

 @misc{suezen24ite, 
     title = {Inducing time-asymmetry on reversible classical statistical mechanics via  Interventional Thermodynamic Ensembles (ITEs)}, 
     howpublished = {\url{https://memosisland.blogspot.com/2024/02/inducing-time-asymmetry-on-reversible.html}}, 
     author = {Mehmet Süzen},
     year = {2024}
}  





Saturday, 14 October 2023

Ising-Conway lattice-games: Understanding increasing entropy

Preamble

Entropy is probably one of the most difficult physical concepts to grasp. Its inception is rooted in the efficiency of engines and in the foundational connection of multi-particle classical mechanics to thermodynamics, i.e., of kinetic theory to thermo-statistics. However, computing the entropy of a physical system is a difficult task, as most real physical systems lack an explicit formulation. Apart from advanced simulation techniques that invoke thermodynamic expressions, a pedagogically accessible and physically plausible system is lacking in the literature. Addressing this, we explore here the recently proposed Ising-Conway Games.

Figure: Evolution of Ising-Conway Game (arXiv:2310.01458)
Ising-Conway Lattice-Games (ICG)

The Ising-Lenz model is probably one of the landmark models of physics; remarkably, beyond its idealised case of magnetic domains, it now impacts even quantum computational research. However, computing the entropy of Ising-Lenz models is still quite difficult. On the other hand, Conway introduced a game whose simple dynamical rules generate complexity of various orders. By analogy to these two modelling approaches, we recently introduced a game-like physical system of spins, or lattice sites, on a finite space with constraints. This gives physically plausible but simpler dynamical evolution for generating trajectories, whereas vanilla Ising models require more complicated Monte Carlo techniques. Here are the configuration and dynamics of Ising-Conway games:

  1. $M$ sites form a fixed space.
  2. $N$ occupied sites, or 1s.
  3. The configuration $C(M,N,t)=C(i)$ changes over time, but at $t=0$ all occupied sites sit at one corner.
  4. A configuration can only change by moving a 1 to a neighbouring empty site. This is closely related to the spin-flip dynamics of the Ising model.
  5. No two sites occupy the same lattice cell (Pauli exclusion).
  6. The dynamics should be contained within the $M$ cells.
An example evolution is shown in the Figure.
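The rules above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (uniformly random choice of occupied site and direction); the reference simulation in the h-do-conjecture repository may differ in details.

```python
import random

def initial_state(M=20, N=8):
    """Rule 3: at t = 0 all N occupied sites sit at one corner."""
    return [1] * N + [0] * (M - N)

def step(state, rng):
    """One move: pick an occupied site at random and move it to a
    randomly chosen empty neighbour (rules 4 and 5), staying within
    the M-cell lattice (rule 6). If both neighbours are blocked,
    nothing happens this step."""
    M = len(state)
    occupied = [i for i, s in enumerate(state) if s == 1]
    i = rng.choice(occupied)
    empty_neighbours = [j for j in (i - 1, i + 1)
                        if 0 <= j < M and state[j] == 0]
    if empty_neighbours:
        j = rng.choice(empty_neighbours)
        state[i], state[j] = 0, 1
    return state

rng = random.Random(1)
state = initial_state()
for _ in range(200):
    state = step(state, rng)
print(state)  # the initial block of 1s has diffused away from the corner
```

The number of occupied sites is conserved by construction, so only the spatial arrangement evolves, as in the Figure.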

Defining ensemble Entropy on ICG

Now we are in a position to define the entropy for ICGs, which is easy to grasp both conceptually and computationally. $C(i, t) \in \{1,0\}$ defines the state of the game. We build an ensemble at a given time $t$ from the region enclosed by the outermost 1s. The dimension of the ensemble is then $k(t) = \max[\mathbb{I}(C(i))] - \min[\mathbb{I}(C(i))]$, where $\mathbb{I}$ returns the indices of the $1$s on the lattice. This ensemble closely tracks the maximum entropy of the system at a given time.
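The ensemble dimension $k(t)$ is straightforward to compute. Below is a minimal Python sketch, together with a Boltzmann-style entropy estimate that counts the ways to place the $N$ occupied sites inside the enclosing region of $k(t)+1$ cells; the counting formula is an illustrative assumption, not necessarily the exact expression used in arXiv:2310.01458.

```python
import math

def ensemble_dimension(state):
    """k(t): max index of a 1 minus min index of a 1,
    i.e. the width of the region enclosed by the outermost 1s."""
    idx = [i for i, s in enumerate(state) if s == 1]
    return max(idx) - min(idx)

def ensemble_entropy(state):
    """Boltzmann-style estimate S = log W, where W counts the ways to
    place the N occupied sites within the enclosing k(t)+1 cells.
    (An illustrative choice of W, not necessarily the paper's.)"""
    n_occupied = sum(state)
    cells = ensemble_dimension(state) + 1
    return math.log(math.comb(cells, n_occupied))

packed = [1, 1, 1, 0, 0, 0, 0, 0]  # t = 0: all 1s at the corner
spread = [1, 0, 1, 0, 0, 1, 0, 0]  # later: the enclosing region grew
print(ensemble_dimension(packed), ensemble_entropy(packed))  # 2 0.0
print(ensemble_dimension(spread), ensemble_entropy(spread))
```

At $t=0$ the 1s are packed, only one arrangement fits the enclosing region, and the estimate is zero; as the region grows, the count of arrangements, and with it the entropy, increases.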

Conclusions

A new game-like system helps us to understand entropy increase: it has plausible physical characteristics and one can easily simulate it.

Further reading

  • H-theorem do-conjecture, M. Süzen, arXiv:2310.01458
  • Effective ergodicity in single-spin-flip dynamics, Mehmet Süzen, Phys. Rev. E 90, 03214 url
  • The do_ensemble module provides such a simulation via simulate_single_spin_flip_game from the repo h-do-conjecture

Please cite as 

 @misc{suezen23iclg, 
     title = {Ising-Conway lattice-games: Understanding increasing entropy}, 
     howpublished = {\url{https://memosisland.blogspot.com/2023/10/ising-conway-games-entropy-increase.html}}, 
     author = {Mehmet Süzen},
     year = {2023}
}  


Sunday, 22 March 2020

Computational Epidemiology and Data Scientists: Don't post analysis on outbreak arbitrarily

Summary

Many data scientists are trained or experienced in using tools for statistical modelling, forecasting, or machine learning solutions. This doesn't necessarily mean that they should jump in, do an ad-hoc analysis of the available public data on the COVID-19 outbreak, draw policy conclusions, and publish them in their blogs or other media. A rule of thumb for doing such a thing: you should have at least one published paper, article, or software solution related to outbreaks that appeared before December 2019. Please be considerate; epidemiological modelling is not merely fitting an exponential distribution.

Conclusion
Please refrain from posting a blog or similar post on infection modelling, or from giving advice based on ad-hoc data analysis you did over your lunch break, if you have not worked on computational epidemiology before. There is a vast academic literature on computational epidemiology. Let the experts in those fields express their modelling efforts first. Let us value expertise in an area.

Appendix: Computational epidemiology introductory resources

Here we provide limited pointers to the computational epidemiology literature. Google Scholar is your friend for finding many more resources.

  • Computational Epidemiology
    Madhav Marathe, Anil Kumar S. Vullikanti
    Communications of the ACM, July 2013, Vol. 56 No. 7, Pages 88-96 doi
  • Broadwick: a framework for computational epidemiology
    O’Hare, A., Lycett, S.J., Doherty, T. et al. Broadwick
    BMC Bioinformatics 17, 65 (2016).
    doi
  • Mathematical Tools for Understanding Infectious Disease Dynamics
    (Princeton Series in Theoretical and Computational Biology)
    Odo Diekmann, Hans Heesterbeek, and Tom Britton
    Princeton Press
  • Agent-Based Simulation Tools in Computational Epidemiology
    Patlolla P., Gunupudi V., Mikler A.R., Jacob R.T. (2006)
    doi
  • DIMACS 2002-2011 Special Focus on Computational and Mathematical Epidemiology Rutgers working group
  • Containment strategy for an epidemic based on fluctuations in the SIR model
    Philip Bittihn, Ramin Golestanian
    Oxford/Max Planck
    arXiv:2003.08784
  • SIAM Epidemiology Collection (2020)
  • The collection provided by the American Physical Society (APS) Physical Review COVID-19 collection
  • Modeling epidemics by the lattice Boltzmann method Alessandro De Rosis Phys. Rev. E 102, 023301
  • Implications of vaccination and waning immunity, Heffernan-Keeling,  2009 Jun 7; 276(1664): 2071–2080, url



Monday, 25 July 2016

Economy and dynamic modelling: Haavelmo's approach

Updated on 25 August 2017
Preamble:
Predictions using dynamic modelling
Machine learning and neural networks are not the only ways to do data science or AI. There are other techniques to explore, for example from quantitative economics. Apart from game theory, dynamic modelling is suited to many prediction problems, especially those with temporal datasets. Here is one example technique from the forgotten Norwegian Nobel-prize-winning economist/data scientist Trygve Haavelmo. An advantage of such models is that they can be explained; they are not fully black box. Make no mistake: for very large systems with large datasets this is not trivial to implement, and GPUs are well suited. In this post we give one hands-on exercise.

Summary

Econometrics aims at estimating observables in the economy and their inter-dependencies, and at testing the estimates against economic reality. A quantitative approach to expressing these inter-dependencies appears as simultaneous equations, i.e., a system of linear equations; this mathematical structure of economic relationships was made possible by the pioneering work of the Nobel-prize-winning economist Trygve Haavelmo [1,2]. This approach and its dynamic variants are now used routinely in the dynamic modelling of econometric systems. From a computational perspective, the R project provides an efficient and very rich computational environment and a large set of extensions for econometrics in general [3].

Dynamic Modelling

The simplest relationship that can be constructed with two arbitrary economic variables, or instruments, $X(t)$ and $Y(t)$ is shown by Haavelmo [1]. For example, these variables could be unemployment rate and Gross Domestic Product (GDP), as in Okun's law. Hence, the simplest bi-variate simultaneous system of equations looks as follows,

$X(t) =a Y(t) + \epsilon_{x}(t)$,
$Y(t) =b X(t) + \epsilon_{y}(t)$,

where $a$ and $b$ are constant coefficients, and $\epsilon_{x}$ and $\epsilon_{y}$ appear as non-deterministic disturbances that are not observed in the modelled economic system. Disturbances are usually expressed as random variables drawn from the normal distribution. At this stage, one can follow two approaches in doing economic scenario analysis for forecasting: we can aim at finding the coefficients $a$ and $b$ from economic data via dynamic regression, or, if $a$ and $b$ are known, we can study the effect of different disturbances over time.
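To make the two-equation system concrete, the following is a minimal sketch, in Python for brevity and with illustrative parameter values, that draws normal disturbances and solves the two structural equations jointly at each time step via the reduced form.

```python
import random

def solve_once(a, b, ex, ey):
    """Solve the simultaneous pair
         X = a*Y + ex,   Y = b*X + ey
    jointly for X and Y (the reduced form), given the disturbances."""
    x = (ex + a * ey) / (1.0 - a * b)  # substitute the Y equation into X
    y = b * x + ey
    return x, y

def simulate_bivariate(a=0.5, b=0.3, T=100, sigma=1.0, seed=7):
    """Simulate T periods with normal disturbances (illustrative values)."""
    rng = random.Random(seed)
    series = [solve_once(a, b, rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
              for _ in range(T)]
    xs = [p[0] for p in series]
    ys = [p[1] for p in series]
    return xs, ys

xs, ys = simulate_bivariate()
# Every (x, y) pair satisfies both structural equations simultaneously.
```

The same reduced-form substitution is what a simultaneous-equation estimator exploits in reverse: given observed series, it recovers $a$ and $b$.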
Dynamic Haavelmo Model
A model of the propensity to consume in an economic system is given by Haavelmo [4], based on his analysis of US economic conditions between 1929 and 1941. His analysis leads to the following simultaneous system of equations,
$c(t)= \alpha y(t) + \beta + u(t)$,
$r(t)= \mu (c(t)+x(t)) +\nu + w(t)$,
$y(t)= c(t)+x(t)-r(t)$,
where $\alpha,\beta,\mu,\nu$ are constants and $u(t)$ and $w(t)$ are disturbances. Economic variables have the following meanings,

c(t) : personal consumption expenditures,
y(t) : personal disposable income,
r(t) : gross business savings,
x(t) : gross investment.

However, this model is considered static, since all the relationships are given at the same time point. Zellner and Palm [5] provided a dynamic version of Haavelmo's model. Here we write down a version of it,

$c(t)= \alpha Dy(t) + \beta + u(t)$,
$r(t)= \mu D(c(t)+x(t)) +\nu + w(t)$,
$y(t)= c(t)+x(t)-r(t)$. 

where the difference operator is defined as $Dy(t) = y(t)-y(t-1)$.

Krokozhia Case Study

Data

Krokozhia is a fictional country depicted in Steven Spielberg's movie The Terminal. Let's generate fictional data for our dynamic Haavelmo model's economic instruments for this country from 1950 to 2013 in R,

set.seed(4242)
# KH: Krokozhia Haavelmo Model
KH.df <- data.frame(Year = seq(1950, 2013),
                    c = sample(300:1500, 64, replace = TRUE),
                    y = sample(1000:5000, 64, replace = TRUE),
                    r = sample(100:500, 64, replace = TRUE),
                    x = sample(100:500, 64, replace = TRUE))
# Add lagged series for the difference operator D
KH.df$y.lag <- c(NA, KH.df$y[1:63])
KH.df$c.lag <- c(NA, KH.df$c[1:63])
KH.df$x.lag <- c(NA, KH.df$x[1:63])


Determining Constants

Two-stage least-squares regression is needed in order to fit the data and determine the constants of the dynamic model. The R package sem, developed by John Fox, provides such an analysis.

library(sem)
# Two-stage least squares
#   Eq1: c(t) = alpha*Dy(t) + beta + u(t),
#        beta is the intercept; u(t) is not used
#   Eq2: r(t) = mu*D(c(t)+x(t)) + nu + w(t),
#        nu is the intercept; w(t) is not used
KH.eq1 <- tsls(c~I(y-y.lag), ~c+y+r, data=KH.df)
KH.eq2 <- tsls(r~I(c-c.lag+x-x.lag), ~c+y+r, data=KH.df)
coef(KH.eq1)  # beta=875.414 (intercept), alpha=0.015
coef(KH.eq2)  # nu=300.675 (intercept),   mu=-0.028


Here tsls performs a two-stage least-squares analysis.

Propagating a disturbance in the economy

We have not used any disturbances in determining the system coefficients, the constants, above. However, we can propagate the values of the economic observables using the dynamic model if we set a disturbance value at a given time. Imagine we set the disturbances in the year 2001 to $u=200$ and $w=150$. Hence, at $t=2001$ the dynamic model reads

     $c = 0.015 (y-2570) + 875.414 + 200$,
     $r = -0.028(c+x-1281-479) + 300.675 + 150$,
     $y = c+x-r$.

By solving this simultaneous system of equations we can determine the values in the year 2001, and similarly propagate the dynamics for all years after 2001. The resulting new series gives us a quantitative idea of the effect of a single disturbance in the simulated economy.

Conclusions and outlook

In this post, we have briefly reviewed possible uses of R in simulating dynamic econometric models, in particular simultaneous-equation models. A simple demonstration of determining the model coefficients of a Haavelmo-type toy model with generated synthetic data is provided. One use case of this type of approach in economic scenario analysis and forecasting, monitoring the propagation of the econometric instruments over time, is also mentioned.


References

[1] The Statistical Implications of a System of Simultaneous Equations,
     Trygve Haavelmo, Econometrica, Vol. 11, No. 1. (Jan., 1943), pp. 1-12
[2] Econometric Analysis, William H. Greene, Prentice Hall (2011)
[3] Applied Econometrics with R,
     Christian Kleiber and Achim Zeileis, Springer (2008)
[4] Methods of measuring the marginal propensity to consume,
     T. Haavelmo, Journal of the American Statistical Association,
     1947 - Taylor & Francis.
[5] Time series analysis and simultaneous equation econometric models,
      Zellner, Arnold and Palm, Franz, Journal of Econometrics,
      Vol.2, Num.1, p17-54 (1974)

(c) Copyright 2008-2024 Mehmet Suzen (suzen at acm dot org)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License