On the one dimensional “learning from neighbours” model. (English) Zbl 1225.60123

Summary: We consider a model of a discrete time “interacting particle system” on the integer line where infinitely many changes are allowed at each time step. We describe the model using chameleons of two different colours, viz., red (R) and blue (B). At each time step, each chameleon performs an independent but identical coin toss experiment with success probability \(\alpha\) to decide whether to change its colour or not. If the coin lands heads, then the creature retains its colour (this is to be interpreted as a “success”); otherwise it observes the colours and coin tosses of its two nearest neighbours and changes its colour only if, among its neighbours and including itself, the proportion of successes of the other colour is larger than the proportion of successes of its own colour. This produces a Markov chain with infinite state space. This model was studied by K. Chatterjee and S. H. Xu [Adv. Appl. Probab. 36, No. 2, 355–376 (2004; Zbl 1074.91005)] in the context of diffusion of technologies in a set-up of myopic, memoryless agents. In their work, they assumed different success probabilities of coin tosses according to the colour of the chameleon. In this work, we consider the symmetric case where the success probability \(\alpha\) is the same irrespective of the colour of the chameleon. We show that, starting from any initial translation invariant distribution of colours, the Markov chain converges to a limit of a single colour, i.e., even in the symmetric case there is no “coexistence” of the two colours in the limit. As a corollary, we also characterize the set of all translation invariant stationary laws of this Markov chain. Moreover, we show that starting with an i.i.d. colour distribution with density \(p\in[0,1]\) of one colour (say red), the limiting distribution is all red with probability \(\pi(\alpha,p)\), which is continuous in \(p\), and for \(p\) “small” \(\pi(\alpha,p)>p\).
The last result can be interpreted as saying that the model favours the “underdog”.
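The update rule above can be sketched as a simulation. The following is a minimal sketch, not the authors' code, and it makes two assumptions not fixed by the summary: the infinite line is approximated by a finite ring, and the “proportion of successes” of a colour is taken to be the success rate among the agents of that colour within the three-site neighbourhood (a colour absent from the neighbourhood cannot be adopted).

```python
import random

def step(colours, alpha, rng):
    """One synchronous update of the neighbour-learning dynamics on a ring.

    colours: list of 'R'/'B'. Each site tosses a coin with success
    probability alpha; a successful site retains its colour, while an
    unsuccessful site compares, over its 3-site neighbourhood
    {left, self, right}, the success rate of each colour and switches
    only if the other colour's rate is strictly larger (assumption:
    rates are taken per colour within the neighbourhood).
    """
    n = len(colours)
    success = [rng.random() < alpha for _ in range(n)]
    new = list(colours)
    for i in range(n):
        if success[i]:
            continue  # success: colour is retained
        nbhd = [(i - 1) % n, i, (i + 1) % n]
        # success rate of each colour present in the neighbourhood
        rates = {}
        for c in ('R', 'B'):
            sites = [j for j in nbhd if colours[j] == c]
            if sites:
                rates[c] = sum(success[j] for j in sites) / len(sites)
        own = colours[i]
        other = 'B' if own == 'R' else 'R'
        if other in rates and rates[other] > rates.get(own, 0.0):
            new[i] = other
    return new

# Illustrative run: i.i.d. initial colours with red density p
rng = random.Random(0)
alpha, n, p = 0.5, 200, 0.3
colours = ['R' if rng.random() < p else 'B' for _ in range(n)]
for _ in range(500):
    colours = step(colours, alpha, rng)
print(colours.count('R'), colours.count('B'))
```

On a finite ring the chain is absorbed in a monochromatic state; the run above merely illustrates the drift toward a single colour, mirroring the no-coexistence result stated for the infinite line.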

MSC:

60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
60K35 Interacting random processes; statistical mechanics type models; percolation theory
60C05 Combinatorial probability
62E10 Characterization and structure theory of statistical distributions
90B15 Stochastic network models in operations research
91D30 Social networks; opinion dynamics

Citations:

Zbl 1074.91005