
Distortion inequality for a Markov operator generated by a randomly perturbed family of Markov Maps in ℝd

Peter Bugiel, Stanisław Wędrychowicz and Beata Rzepka

Abstract

Asymptotic properties of the sequences

(a) {P^j g}_{j=1}^∞ and

(b) {j^{−1} Σ_{i=0}^{j−1} P^i g}_{j=1}^∞

are studied for g ∈ G = {f ∈ L1(I) : f ≥ 0 and ‖f‖ = 1}, where P : L1(I) → L1(I) is a Markov operator defined by Pf := ∫ P_y f dp(y) for f ∈ L1; {P_y}_{y∈Y} is the family of the Frobenius-Perron operators associated with a family {φ_y}_{y∈Y} of nonsingular Markov maps defined on a subset I ⊆ ℝ^d; and the index y runs over a probability space (Y, Σ(Y), p). The asymptotic properties of the sequences (a) and (b) of the Markov operator P are closely connected with the asymptotic properties of the sequence of random vectors x_j = φ_{ξ_j}(x_{j−1}), j = 1,2,..., where {ξ_j}_{j=1}^∞ is a sequence of Y-valued independent random elements with common probability distribution p.

An operator-theoretic analogue of Rényi’s Condition is introduced for the family {P_y}_{y∈Y} of the Frobenius-Perron operators. It is proved that under some additional assumptions this condition implies the L1-convergence of the sequences (a) and (b) to a unique g_0 ∈ G. The general result is applied to some families {φ_y}_{y∈Y} of smooth Markov maps in ℝ^d.

1 Introduction

Let a semi-dynamical system evolve according to the rule

(1.1) x_j := φ(x_{j−1}),

where φ is a point transformation from a subset I ⊆ ℝ^d (bounded or not), d ≥ 1, into itself. Clearly, the state transition x_{j−1} → x_j (j = 1,2,...) in such a system is carried out by applying the single transformation φ.

In [12, 27] the authors began to study the more general situation in which one applies in turn different transformations chosen at random from some family of transformations. Such an approach leads to more realistic models in a variety of real situations: the conditions of a repeated process may differ slightly at each stage, and the exact rules governing the evolution of the modeled process may not be known. It also makes it possible to model numerical simulations of real systems, which involve round-off approximations.

A version of the above ideas can be mathematically set up as follows. Let {φ_y}_{y∈Y} be a family of point transformations from a subset I ⊆ ℝ^d (bounded or not), d ≥ 1, into itself, and let the index y run over a measurable space (Y, Σ(Y)). Let {ξ_j}_{j=1}^∞ be a sequence of Y-valued independent and identically distributed random elements (indices) over a probability space (Ω, Σ(Ω), p_1).

Since the state transition x_{j−1} → x_j (j = 1,2,...) in the above situation is executed by a transformation φ_{ξ_j}, which is drawn independently from the given family according to the probability distribution of ξ_j, the relation between x_{j−1} and x_j is given by

(1.2) x_j(x, ω) := x for j = 0, and x_j(x, ω) := φ_{ξ_j(ω)}(x_{j−1}) for j ≥ 1,

where (x, ω) ∈ I × Ω.
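For intuition, here is a minimal numerical sketch (not from the paper) of the random iteration (1.2). The family φ_y(x) = yx mod 1 on I = [0, 1), with the slope y drawn uniformly from [2, 3], is purely a hypothetical illustration and is not claimed to satisfy the structural assumptions introduced later.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(y, x):
    # Hypothetical family of expanding maps on I = [0, 1): phi_y(x) = y*x mod 1.
    return (y * x) % 1.0

def random_orbit(x0, n_steps):
    # Realization of (1.2): draw xi_1, xi_2, ... i.i.d. with distribution p
    # (here p = Uniform[2, 3]) and apply phi_{xi_j} in turn.
    xs = [x0]
    for _ in range(n_steps):
        y = rng.uniform(2.0, 3.0)      # xi_j(omega)
        xs.append(phi(y, xs[-1]))      # x_j = phi_{xi_j}(x_{j-1})
    return np.array(xs)

orbit = random_orbit(x0=0.1234, n_steps=10_000)
# The empirical distribution of the orbit approximates the stationary density
# whose existence and uniqueness are the subject of this paper.
print(np.histogram(orbit, bins=10, range=(0.0, 1.0), density=True)[0])
```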

Let L1(I, m) be the Banach space of all integrable real-valued functions, where m is the Lebesgue measure on I. There exists a very useful tool for studying the asymptotic stability of the semi-dynamical system (1.1) from the point of view of statistical mechanics. Namely, if φ is nonsingular, then the formula

P_φ f := (d/dm)(m_f ∘ φ^{−1}) for f ∈ L1,

where dm_f = f dm, and d/dm denotes the Radon-Nikodym derivative, defines a Markov operator (i.e., P_φ is a linear operator and for any f ∈ L1(m) with f ≥ 0, P_φ f ≥ 0 and ‖P_φ f‖ = ‖f‖). It is called the Frobenius-Perron operator (F-P operator, in short) associated with φ [20, 26]. Thus instead of (1.1) one studies the semi-dynamical system

(1.3) g̃_j = P_φ g̃_{j−1}, j = 1,2,...,

where g̃_0 (g̃_0 ≥ 0, ‖g̃_0‖ = 1) stands for the probability distribution of the initial state x_0.
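The paper works with the exact Frobenius-Perron operator; for a numerical feel of the iteration (1.3) one can use the standard Ulam discretization, sketched below for the doubling map φ(x) = 2x mod 1. The grid size and sampling scheme are arbitrary choices of this sketch, not anything prescribed by the paper.

```python
import numpy as np

def ulam_matrix(phi, n_bins=200, samples_per_bin=500, seed=0):
    # Ulam-type discretization of the Frobenius-Perron operator P_phi on [0, 1):
    # entry M[j, i] estimates m(I_i ∩ phi^{-1}(I_j)) / m(I_i), so densities
    # (stored as vectors of cell averages) evolve by g_new = M @ g_old.
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    M = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
        idx = np.minimum((phi(x) * n_bins).astype(int), n_bins - 1)
        np.add.at(M[:, i], idx, 1.0 / samples_per_bin)
    return M

phi = lambda x: (2.0 * x) % 1.0                     # doubling map, a simple Markov map
M = ulam_matrix(phi)
g = np.zeros(M.shape[0]); g[:20] = M.shape[0] / 20  # an initial density g~_0
for _ in range(30):                                 # iterate g~_j = P_phi g~_{j-1} as in (1.3)
    g = M @ g
print(round(g.min(), 3), round(g.max(), 3))         # both near 1: Lebesgue measure is invariant
```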

For studying the asymptotic stability of the system (1.2) one uses another Markov operator. Let m̃ be a probability measure on Σ(I), the σ-algebra of all Borel subsets of I. Then the sequence {x_j}_{j=0}^∞ defined by (1.2) is a sequence of random vectors over (Ω̃, Σ(Ω̃), Prob), where Ω̃ = I × Ω and Prob = m̃ × p_1. Its initial probability distribution equals Prob(x_0 ∈ B) = m̃(B) for any B ∈ Σ(I).

Now let {φ_y}_{y∈Y} be a family of nonsingular point transformations and let the initial probability distribution be absolutely continuous, that is, Prob(x_0 ∈ B) := ∫_B g dm for all B ∈ Σ(I), where g ∈ L1(m) and g ≥ 0, ‖g‖ = 1.

Under these circumstances we have (see Prop. 3.1)

(1.4) Prob(x_j ∈ B) = ∫_B P^j g dm (j = 1,2,...)

for all BΣ(I), where P is defined by (3.1), and Pj, its j-th iterate, is given by (3.3).

In [7] conditions have been established which ensure the convergence (in L1) of the sequence {P^j g} to a unique stationary (i.e., P-invariant) density g_0 (g_0 ≥ 0, ‖g_0‖ = 1) in the case where {φ_y}_{y∈Y} is a family of multidimensional expanding Markov maps. Similar problems were also studied, e.g., in [2, 11, 13, 17, 18, 19], and [3].

In [6] a general approach has been presented to the study of asymptotic properties of the sequences:

(ã) {P_φ^j g}_{j=1}^∞ and

(b̃) {j^{−1} Σ_{i=0}^{j−1} P_φ^i g}_{j=1}^∞ for g ∈ G = {f ∈ L1 : f ≥ 0, and ‖f‖ = 1}.

Namely, an operator-theoretic analogue of Rényi’s Condition has been introduced there (condition (3.H1), called the Distortion Inequality). It has been proved that under some additional assumptions this condition implies the L1-convergence of the sequences (ã) and (b̃) to a unique g_0 ∈ G. The general result has then been applied to various particular classes of smooth Markov maps in ℝ^d, which has made it possible to unify many cases that were previously considered separately.

The aim of this paper is to extend the above general approach to the random case described above. Thus we consider a family {φ_y}_{y∈Y} of Markov maps defined on a σ-finite measure space (I, Σ, m) (see Defs. 2.1, 2.2, 2.3), where the index y runs over a probability space (Y, Σ(Y), p) and Y is a complete metric space. That family of Markov maps generates the family {P_y}_{y∈Y} of the Frobenius-Perron operators. Our first task is to establish a probabilistic analogue of the Distortion Inequality in [6], condition (3.H1). Then we study asymptotic properties of the sequences

  1. {P^j}_{j=1}^∞, and

  2. {j^{−1} Σ_{i=0}^{j−1} P^i}_{j=1}^∞, where P is given by (3.1).

It is useful to distinguish, in the random case, two kinds of the Distortion Inequality: conditions (3.H1) and (3.H̃1). The first kind of Distortion Inequality is more general, while the second one makes it possible to get more information about the P-invariant density (compare Th. 3.3 with Ths. 3.4 and 3.5). Moreover, an application of the second kind of Distortion Inequality to smooth Markov maps implies the convergence of the sequence (a), or (b), respectively, to a unique g_0 ∈ G not only in the norm of L1, but also in the norm of uniform convergence on each I_k (see Ths. 4.4, 4.8, and 4.5, respectively).

As for the convergence of the two sequences (a) and (b), we prove three abstract theorems. The first theorem (Th. 3.3) is the most general; it states the convergence of (a) in L1 under conditions (3.H1) and (3.H2). The second and the third theorems state the convergence in L1 of (a) and (b) under the more restrictive condition (3.H̃1) together with conditions (3.H2) and (3.H3), respectively (see Ths. 3.4 and 3.5).

Our last task is to apply the three abstract theorems to various particular families of smooth Markov maps. We show, among other things, that Ths. 3.1 and 3.2 in [7] (in this paper Ths. 4.7, and 4.8, respectively) are special cases of Ths. 3.3, and 3.4 in this paper.

It is worth mentioning that the applicability of operators of diverse kinds has recently been considered in many papers (cf. [23], for example). Moreover, some authors discuss various classes of function spaces with interesting properties in order to characterize solutions of the operator equations considered (see [1]).

The paper is divided into four sections. In Section 2 basic definitions and notations are introduced. In Section 3 two probabilistic analogues of the Distortion Inequality in [6], condition (3.H1), are introduced, and the convergence in L1 of the above-mentioned sequences (a) and (b) is proved under those conditions.

To illustrate the generality and usefulness of the results of Section 3, Theorems 3.3, 3.4 and 3.5 are applied in Section 4 to some families of smooth Markov maps in ℝd. The proofs of the theorems of Section 4 reveal that several combinations of probabilistic analogues of already known conditions imply condition (3.H1) or condition (3.H˜1). This makes it possible to derive, in a uniform way, many separate results from three general ones: Theorems 3.3, 3.4 and 3.5, and thereby to unify them. Assuming smoothness of the transformations considered, one additionally gets smoothness of their invariant densities.

2 Basic definitions and notations

Let (I, Σ, m) be a σ-finite atomless (non-negative) measure space. Quite often the notions or relations occurring in this paper (in particular, the considered transformations) are defined or hold only up to the sets of m-measure zero. Henceforth we do not mention this explicitly.

The restriction of a mapping τ : X → Y to a subset A ⊆ X is denoted by τ|A and the indicator function of a set A by 1_A.

Let τ : I → I be a measurable transformation, i.e., τ^{−1}(A) ∈ Σ for each A ∈ Σ. It is called nonsingular iff m ∘ τ^{−1} ∼ m, i.e., for each A ∈ Σ, m(τ^{−1}(A)) = 0 ⇔ m(A) = 0.

We begin with the following definition:

Definition 2.1

A nonsingular transformation φ from I into itself is said to be piecewise invertible iff

(2.M1) one can find a finite or countable partition π = {I_k : k ∈ K} of I, which consists of measurable subsets (of I) such that m(I_k) > 0 for each k ∈ K, and sup{m(I_k) : k ∈ K} < ∞;

(2.M2) for each I_k ∈ π, the mapping φ_k = φ|I_k is one-to-one from I_k onto J_k = φ_k(I_k) and its inverse φ_k^{−1} is measurable.

Several important classes of piecewise invertible transformations, e.g., Anosov diffeomorphisms [15], some expanding mappings [24, 25], or unimodal mappings [16], admit partitions with the so-called Markov property.

In this paper we study random families of such piecewise invertible transformations. First however we give a definition of a single Markov map:

Definition 2.2

A piecewise invertible transformation φ is said to be a Markov map iff its corresponding partition π satisfies the following two conditions:

(2.M3) π is a Markov partition, i.e., for each k ∈ K,

φ(I_k) = ∪{I_j : m(φ(I_k) ∩ I_j) > 0};

(2.M4) φ is indecomposable (irreducible) with respect to π, i.e., for each (j, k) ∈ K² there exists an integer n > 0 such that I_k ⊆ φ^n(I_j).

In what follows we denote by ‖ · ‖ the norm in L1 = L1(I, Σ, m) and by G = G(m) the set of all (probabilistic) densities i.e.,

G := {g ∈ L1 : g ≥ 0, and ‖g‖ = 1}.

Let τ : I → I be a nonsingular transformation. Then the formula

(2.1) P_τ f := (d/dm)(m_f ∘ τ^{−1}) for f ∈ L1,

where dm_f = f dm, and d/dm denotes the Radon-Nikodym derivative, defines a linear operator from L1 into itself. It is called the Frobenius-Perron operator (F-P operator, in short) associated with τ [20, 26].

From the definition of P_τ it follows that it is a Markov operator, i.e., P_τ is a linear operator and for any f ∈ L1(m) with f ≥ 0, P_τ f ≥ 0 and ‖P_τ f‖ = ‖f‖. Further, P_τ G ⊆ G, and P_τ is a contraction, i.e., ‖P_τ‖ ≤ 1.

Let τ_1, ..., τ_j (j ≥ 2) be some nonsingular transformations. Denote by P_(j,...,1), P_(j,...,i+1) and P_(i,...,1), 1 ≤ i < j, the F-P operators associated with the transformations τ_(j,...,1) = τ_j ∘ ··· ∘ τ_1, τ_(j,...,i+1) = τ_j ∘ ··· ∘ τ_{i+1} and τ_(i,...,1) = τ_i ∘ ··· ∘ τ_1, respectively. Then

(2.2) P_(j,...,1) = P_(j,...,i+1) P_(i,...,1),

in particular P_(j,...,1) = P_j ··· P_1.

Definition 2.3

In what follows we consider a family {φy}y∈Y of Markov maps such that:

(2.My5) Y is a complete separable metric space and the map I × Y ∋ (x, y) → φ_y(x) ∈ I is Σ(I × Y)/Σ(I)-measurable;

(2.My6) there exists a partition π of I such that π_y = π for each y ∈ Y, where π_y is a Markov partition associated with φ_y.

For j ≥ 1 and y_1, ..., y_j ∈ Y, we denote y(j) = (y_j, ..., y_1) and then we set

(2.3) φ_{y(j)} := φ_{y_j} ∘ ··· ∘ φ_{y_1}.

Clearly, φ_{y(j)} : I → I is a Markov map. Its Markov partition is given by

π^{y(j)} := π ∨ φ_{y(1)}^{−1}(π) ∨ φ_{y(2)}^{−1}(π) ∨ ··· ∨ φ_{y(j−1)}^{−1}(π), provided j ≥ 2.

It consists of the sets of the form:

(2.4) I_{k(j)}^{y(j−1)} := I_{k_0} ∩ φ_{y(1)}^{−1}(I_{k_1}) ∩ φ_{y(2)}^{−1}(I_{k_2}) ∩ ··· ∩ φ_{y(j−1)}^{−1}(I_{k_{j−1}}),

where k(j) = (k_0, k_1, ..., k_{j−1}) ∈ K^j.

Let φ_{y(j)k(j)} := (φ_{y(j)})|I_{k(j)}^{y(j−1)}; then by condition (2.M2), φ_{y(j)k(j)} is a one-to-one mapping of I_{k(j)}^{y(j−1)} onto J_{k(j)}^{y(j)} := φ_{y(j)k(j)}(I_{k(j)}^{y(j−1)}) = φ_{y_j}(I_{k_{j−1}}). It is nonsingular, and φ_{y(j)k(j)}^{−1}, the mapping inverse to φ_{y(j)k(j)}, is measurable. By Def. 2.3, π^{y(1)} = π_{y_1} = π and therefore I_{k(1)}^{y(0)} = I_{k_0}; consequently φ_{y(1)k(1)} = (φ_{y_1})|I_{k_0} = φ_{y_1 k_0} and, according to (2.M2), φ_{yk} is a one-to-one mapping of I_k onto J_k^y = φ_{yk}(I_k).

In the case of a family {φ_y}_{y∈Y} of Markov maps we use the following analogue of the indecomposability condition (2.M4):

(2.My4) for each (j, k) ∈ K² there exist an integer s > 0 and a subset Ỹ_s ⊆ Y^s with p^s(Ỹ_s) > 0 such that I_k ⊆ φ_{y(s)}(I_j) for all y(s) ∈ Ỹ_s. Here p is a probability measure on Σ(Y), and p^s = p × ··· × p (s times).

Note that in the particular case {φ_y}_{y∈Y} with Y = {y} and p({y}) = 1, the above condition yields condition (2.M4) (see also the note at the end of Rem. 2.4).

Since φ_{y(r)k(r)} := (φ_{y(r)})|I_{k(r)}^{y(r−1)} is a nonsingular mapping of I_{k(r)}^{y(r−1)} onto J_{k(r)}^{y(r)}, the formula

(2.5) m_{y(r)k(r)}(A) := m(φ_{y(r)k(r)}^{−1}(A)) for A ∈ Σ,

defines an absolutely continuous measure which is concentrated on J_{k(r)}^{y(r)} (i.e., m_{y(r)k(r)}(A) = m_{y(r)k(r)}(A ∩ J_{k(r)}^{y(r)})), and whose Radon-Nikodym derivative satisfies dm_{y(r)k(r)}/dm > 0 a.e. on J_{k(r)}^{y(r)}.

To see the latter property of the measure m_{y(r)k(r)}, note first that if dm_{y(r)k(r)}/dm = 0 on A ⊆ J_{k(r)}^{y(r)}, then φ_{y(r)k(r)}^{−1}(A) ⊆ I ∖ I_{k(r)}^{y(r−1)} a.e., because m(φ_{y(r)k(r)}^{−1}(A) ∩ I_{k(r)}^{y(r−1)}) = ∫_{A∩J_{k(r)}^{y(r)}} (dm_{y(r)k(r)}/dm) dm = 0. Therefore, A = ∅ a.e.

We put (r = 1,2, . . .)

(2.6) σ_{y(r)k(r)} := dm_{y(r)k(r)}/dm on J_{k(r)}^{y(r)}, and σ_{y(r)k(r)} := 0 on I ∖ J_{k(r)}^{y(r)},

and next,

(2.7) f_{y(r)k(r)} := f ∘ φ_{y(r)k(r)}^{−1} on J_{k(r)}^{y(r)}, and f_{y(r)k(r)} := 0 on I ∖ J_{k(r)}^{y(r)}.

Then the F-P operator P_{y(r)} = P_{y_r} ··· P_{y_1} corresponding to the Markov map φ_{y(r)}, given by (2.3), can be written in the following form

(2.8) P_{y(r)} f = Σ_{k(r)} f_{y(r)k(r)} σ_{y(r)k(r)}.

Indeed, from (2.1), Defs. 2.2, 2.3 and (2.5) it follows that for any f ∈ L1, f ≥ 0, the following equalities hold:

∫_A P_{y(r)} f dm = m_f(φ_{y(r)}^{−1}(A)) = Σ_{k(r)} ∫_{A_{k(r)}^{y(r−1)}} f dm = Σ_{k(r)} ∫_A (f ∘ φ_{y(r)k(r)}^{−1}) dm_{y(r)k(r)} = ∫_A (Σ_{k(r)} f_{y(r)k(r)} σ_{y(r)k(r)}) dm,

where A_{k(r)}^{y(r−1)} = φ_{y(r)k(r)}^{−1}(A). Hence, (2.8) follows.
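Formula (2.8) is easy to evaluate in concrete low-dimensional cases. Below is a hedged worked instance for a single piecewise-linear Markov map (r = 1), assuming I = [0, 1), the partition π = {[0, 1/2), [1/2, 1)}, and the two full branches φ|[0,1/2)(x) = 2x and φ|[1/2,1)(x) = 2x − 1, so that J_k = [0, 1) and σ_k ≡ 1/2 for both branches; this toy example is not taken from the paper.

```python
import numpy as np

# A single piecewise-linear Markov map on I = [0, 1): two full branches of slope 2.
# Branch inverses phi_k^{-1} and densities sigma_k = d(m o phi_k^{-1})/dm = 1/|slope|.
branch_inverse = [lambda x: x / 2.0, lambda x: x / 2.0 + 0.5]
sigma = [0.5, 0.5]                      # constant on J_k = [0, 1) for both branches

def frobenius_perron(f, x):
    # Formula (2.8) with r = 1: (P f)(x) = sum_k f(phi_k^{-1}(x)) * sigma_k(x);
    # here J_k = [0, 1) for every branch, so no indicator functions are needed.
    return sum(s * f(inv(x)) for inv, s in zip(branch_inverse, sigma))

f = lambda x: 2.0 * x                   # a density on [0, 1)
x = np.linspace(0.0, 1.0, 5, endpoint=False)
# (P f)(x) = 0.5*f(x/2) + 0.5*f(x/2 + 1/2) = x + 1/2, again a density.
print(frobenius_perron(f, x))
```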

Remark 2.4

The studies of this paper can be extended to families {φ_y}_{y∈Y} which satisfy condition (2.My5) of Def. 2.3 and the following one:

(2.M̃y6) there is a Markov map φ_ỹ such that for each y ∈ Y:

  1. π_y refines π_ỹ, i.e., for each V ∈ π_y, there exists U ∈ π_ỹ which contains V, and

  2. for each V ∈ π_y, φ_y(V) is a union of a number of elements of π_ỹ.

In this situation each φ_{y(j)} = φ_{y_j} ∘ φ_{y_{j−1}} ∘ ··· ∘ φ_{y_1} is defined on sets of the form:

I_{k(j)}^{y(j)} := I_{k_0}^{y_1} ∩ φ_{y(1)}^{−1}(I_{k_1}^{y_2}) ∩ φ_{y(2)}^{−1}(I_{k_2}^{y_3}) ∩ ··· ∩ φ_{y(j−1)}^{−1}(I_{k_{j−1}}^{y_j}) ∈ π^{y(j)}.

The following family can serve as a simple example: {φ_i}_{i=1}^∞, where φ_i = φ^i and φ is a Markov map. Note that conditions (2.M4) and (2.My4) are equivalent in this case, if p({i}) > 0 for i ≥ 1.

Let P : L1(m) → L1(m) be a Markov operator. The following criterion of the convergence of {P^j g}_{j=1}^∞ is used in this paper (see [14], Th. 2, and Rem. 1):

Theorem 2.5

Let there exist h ∈ L1, h ≥ 0 with ‖h‖ > 0, and a dense subset G_0 ⊆ G such that

lim_{j→∞} ‖(P^j g − h)^−‖ = 0 for g ∈ G_0 (here f^− denotes the negative part of f).

Then there is a unique g_0 ∈ G such that lim_{j→∞} P^j g = g_0 for all g ∈ G.

3 Convergence theorems

Let {φy}y∈Y be a family of Markov maps in the sense of Def. 2.3, Σ(Y) – σ-algebra of all Borel-measurable subsets of Y, and let p be a probability measure on (Y, Σ(Y)).

For a fixed f ∈ L1(m), the mapping Y ∋ y → P_y f ∈ L1(m), where P_y is the F-P operator associated with φ_y, is a Borel random element in L1(m), i.e., the pre-image of any set in Σ(L1(m)) belongs to Σ(Y).

From the above fact and the inequality ‖P_y f‖ ≤ ‖f‖ < ∞ it follows that the Bochner integral (with respect to p) of y → P_y f exists. We put

(3.1) Pf := ∫ P_y f dp(y) for f ∈ L1(m).

From this definition and the fact that the F-P operator Py : L1(m) → L1(m) is a Markov operator it follows that P : L1(m) → L1(m) is a Markov operator too.

Let Y ∋ y → f_y ∈ L1(m) be a Bochner-integrable mapping and let L : L1(m) → L1(m) be a bounded linear operator. It follows from the properties of the Bochner integral that

(3.2) L ∫ f_y dp(y) = ∫ L f_y dp(y).

Let P^j = P P^{j−1} (j ≥ 2); then from (3.1), (3.2) and (2.1) it follows that

(3.3) P^j f = ∫ P_{y(j)} f dp^j(y_j, ..., y_1),

where P_{y(j)} = P_{y_j} ··· P_{y_1} is the F-P operator corresponding to φ_{y(j)}, defined by (2.3), and p^j = p × ··· × p (j times).

Conversely, if one defines P^j by (3.3), then from (2.1), (3.2), and (3.1) it follows that P^j = P P^{j−1}.
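Numerically, the Bochner integral in (3.1) can be approximated by a Monte Carlo average of discretized operators P_y over indices y sampled from p. The sketch below does this for the hypothetical family φ_y(x) = yx mod 1, y ~ Uniform[2, 3], used earlier; it only illustrates the averaging in (3.1) and (3.3) and is not claimed to satisfy the structural conditions of Def. 2.3.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                                    # number of Ulam cells on I = [0, 1)

def ulam(phi, samples=400):
    # Column-stochastic Ulam matrix approximating the F-P operator P_y of one map.
    M = np.zeros((N, N))
    for i in range(N):
        x = rng.uniform(i / N, (i + 1) / N, samples)
        idx = np.minimum((phi(x) * N).astype(int), N - 1)
        np.add.at(M[:, i], idx, 1.0 / samples)
    return M

# Monte Carlo version of (3.1): P is approximated by the average of P_{y_k}
# for y_1, ..., y_n drawn i.i.d. from p (here p = Uniform[2, 3]).
ys = rng.uniform(2.0, 3.0, 50)
P = sum(ulam(lambda x, y=y: (y * x) % 1.0) for y in ys) / len(ys)

g = np.zeros(N); g[:N // 5] = 5.0          # an initial density with ||g|| = 1
for _ in range(40):                        # iterate P^j g as in (3.3)
    g_prev, g = g, P @ g
print(round(float(g.mean()), 3),                     # the integral of the density stays ~1
      round(float(np.mean(np.abs(g - g_prev))), 4))  # successive iterates stabilize
```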

The following probabilistic interpretation of the Markov operator P given by (3.1) is useful in practice. Let Ω = Y^∞ be a direct product space, Σ(Ω) the σ-algebra of all Borel measurable subsets of Y^∞, p_1 = p × p × ··· the direct product measure, and ξ_j(ω) = ω_j for ω ∈ Ω, where ω_j is the j-th coordinate of ω = (ω_1, ω_2, ...) ∈ Y^∞. Then {ξ_j}_{j=1}^∞ is a sequence of identically p-distributed independent Y-valued random elements (indices).

For each (x, ω) ∈ I × Ω we put

(3.4) x_j(x, ω) := x for j = 0, and x_j(x, ω) := φ_{ξ_j(ω)}(x_{j−1}) = φ_{ξ(j)(ω)}(x) for j ≥ 1,

where ξ(j)(ω) = (ξ_j(ω), ..., ξ_1(ω)).

Now let (Ω̃, Σ(Ω̃), Prob) be a probability space with Ω̃ = I × Ω and Prob = m̃ × p_1, where m̃ is a probability measure on Σ(I). Then the sequence {x_j}_{j=0}^∞ defined by (3.4) is a sequence of random vectors over (Ω̃, Σ(Ω̃), Prob). Note that {x_0 ∈ B} = B × Ω, hence Prob(x_0 ∈ B) = m̃(B) for any B ∈ Σ(I).

It turns out that if the initial probability distribution is absolutely continuous, then the probability distribution of each random vector xj, defined by (3.4), is absolutely continuous, too:

Proposition 3.1

If Prob(x_0 ∈ B) := ∫_B g dm for all B ∈ Σ(I), where g ∈ L1(m) and g ≥ 0, ‖g‖ = 1, then

Prob(x_j ∈ B) = ∫_B P^j g dm (j = 1,2,...)

for all B ∈ Σ(I), where P^j is the j-th iterate of the Markov operator P defined by (3.1).

Proof. We refer to [7], Prop. 3.1.

In [6] an inequality has been established for the Frobenius-Perron operator P_φ associated with a single Markov map φ. That inequality (called the Distortion Inequality) was intended as an operator-theoretic analogue of Rényi’s Condition [21, 22]. Below we formulate a probabilistic analogue of that inequality for a family {P_y}_{y∈Y} of the Frobenius-Perron operators corresponding to a given family {φ_y}_{y∈Y} of Markov maps.

Let φ_{y(j)} be a Markov map given by (2.3) and let P_{y(j)} be its F-P operator given by (2.8). We put

(3.5) A_{y(j)k}(g) := ess sup{P_{y(j)} g(x) : x ∈ I_k ∩ spt(P_{y(j)} g)}, a_{y(j)k}(g) := ess inf{P_{y(j)} g(x) : x ∈ I_k ∩ spt(P_{y(j)} g)},

where spt(g) := {x : g(x) > 0}.

Now we give the following definition:

Definition 3.2

A density g ∈ G belongs to G(C*), 0 < C* < ∞, iff there exist constants C_{y(j)}(g) ≥ 1, y(j) ∈ Y^j, j ≥ j_1(g), such that the following two conditions are satisfied:

  (a) A_{y(j)k}(g) ≤ C_{y(j)}(g) a_{y(j)k}(g) a.e. [p^j], j ≥ j_1(g), and

  (b) lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j < C*.

Note that the above definition is consistent with the definition of the set G(C*) given in [6] by formula (3.0) in the case φ_y = φ for each y ∈ Y, where φ is a fixed Markov map. Therefore the two questions discussed there in connection with the set G(C*), in the case of a single F-P operator, remain valid in the case of a family of F-P operators too. Thus in the first place we must assume the following property of G(C*):

(3.H1) (Distortion Inequality for the family {P_y}_{y∈Y})

There exists a constant 0 < C* < ∞ such that the set G(C*) defined by Def. 3.2 contains a subset dense in G.
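For a concrete feel of the constants in Def. 3.2, the sketch below estimates the ratios A_{y(j)k}(g)/a_{y(j)k}(g) along one sampled sequence y(j) for an illustrative two-map family on I = [0, 1): the doubling map and the tent map, which share the partition π = {[0, 1/2), [1/2, 1)} and have full linear branches. The family, the initial density and the grid are all hypothetical choices of this sketch, not taken from the paper.

```python
import numpy as np

xgrid = np.linspace(0.0, 1.0, 2001)

# Exact F-P operators (evaluated on a grid) of two Markov maps with the common
# partition pi = {[0,1/2), [1/2,1)}: the doubling map 2x mod 1 and the tent map 1-|2x-1|.
def fp_doubling(g):
    return 0.5 * (np.interp(xgrid / 2, xgrid, g) + np.interp(xgrid / 2 + 0.5, xgrid, g))

def fp_tent(g):
    return 0.5 * (np.interp(xgrid / 2, xgrid, g) + np.interp(1.0 - xgrid / 2, xgrid, g))

rng = np.random.default_rng(3)
g = 3.0 * xgrid**2                           # a smooth density on [0, 1)
for j in range(1, 9):
    g = (fp_doubling if rng.random() < 0.5 else fp_tent)(g)   # g becomes P_{y(j)} g
    # Grid estimate of A_{y(j)k}(g) / a_{y(j)k}(g) from Def. 3.2 on each cell I_k.
    ratios = [g[mask].max() / g[mask].min() for mask in (xgrid < 0.5, xgrid >= 0.5)]
    print(j, np.round(ratios, 4))            # the ratios stay bounded and shrink towards 1
```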

Next, it follows from the second question discussed in the cited paper [6] that condition (3.H1) alone does not yet ensure the existence of a P-invariant density. In [6] two complementary conditions have been used. Each of those two conditions completes condition (3.H1) there in an optimal way. In this paper we make use of their probabilistic analogues. They complete condition (3.H1) here in an optimal way too (see Rem. 3.6).

To formulate a probabilistic analogue of the first of the two mentioned conditions we introduce the following auxiliary function:

(3.6) u_{w(2r)}(x) := inf{g_{w(2r)k(r)}(x) : k(r) ∈ K^r, and I_{k(r)}^{w(r−1)} ≠ ∅},

where

(3.7) g_{w(2r)k(r)} := Σ_{k̃(r)} σ̃_{w̃(r)k̃(r)} ∫_{I_{k̃(r)}^{w̃(r−1)}} σ̃_{w(r)k(r)} dm,

w(2r) = (w̃(r), w(r)) ∈ Y^r × Y^r, and

(3.8) σ̃_{w(r)k(r)} := σ_{w(r)k(r)} / m(I_{k(r)}^{w(r−1)}),

and I_{k(r)}^{w(r−1)} and σ_{w(r)k(r)} are defined by (2.4) and (2.6), respectively.

The first complementary condition reads as follows:

(3.H2) There exists r̃ ≥ 1 such that 0 < ∫ u_{w(2r̃)} dp^{2r̃} < ∞.

The theorem below has already been proved in [9]. It states that the semi-dynamical system given by (3.4) evolves to a stationary distribution under the above two conditions.

Theorem 3.3

(First Convergence Theorem). Assume that a family {φ_y}_{y∈Y} of Markov maps satisfies conditions (3.H1) and (3.H2).

Then there exists exactly one P-invariant density g0 such that

lim_{j→∞} P^j g = g_0 for all g ∈ G.

Proof. (main idea) By condition (3.H2) there exists A ⊆ Y^{2r̃} × I such that ∫_A |ln u_{w(2r̃)}| d(p^{2r̃} × m) < ∞. We assume for simplicity that A = Y^{2r̃} × B for some B ∈ Σ(I). Then the idea of the proof consists in showing that the function

(3.9) u_{2r̃} := Ĉ exp(∫ ln u_{w(2r̃)} dp^{2r̃}) on B, and u_{2r̃} := 0 on I ∖ B,

where Ĉ = exp(−2C*), satisfies the relation:

(3.10) lim_{j→∞} ‖(P^j g − u_{2r̃})^−‖ = 0 for all g ∈ G.

Then the proof of the theorem is completed by an appeal to Th. 2.5.

The theorem above states only that the semi-dynamical system given by (3.4) evolves to a unique P-invariant density g_0. It gives, however, no further information about the limit density itself. Below we prove some properties of the limit density g_0 under a somewhat more restrictive condition than condition (3.H1). Namely, we assume that the constants C_{y(j)}(g) in Def. 3.2 depend on j and g but not on y(j) ∈ Y^j. In that case the set G(C*) in condition (3.H1) satisfies the following equality: G(C*) = {g ∈ G : lim sup_{j→∞} A_j(g) < C*}, where

A_j(g) := sup_{y(j)∈Y^j} sup_{k∈K} A_{y(j)k}(g)/a_{y(j)k}(g),

and A_{y(j)k}(g), a_{y(j)k}(g) are defined by (3.5).

Accordingly, the new version of condition (3.H1) can be formulated as follows:

(3.H̃1) (Uniform Distortion Inequality for the family {P_y}_{y∈Y})

There exists a constant C* > 0 such that the set

G̃(C*) := {g ∈ G : lim sup_{j→∞} A_j(g) < C*}

contains a subset dense in G.

We are going now to prove the following theorem:

Theorem 3.4

(Second Convergence Theorem). Let a family {φ_y}_{y∈Y} of Markov maps satisfy conditions (3.H̃1) and (3.H2).

Then:

  1. There exists exactly one P-invariant density g_0 such that

    lim_{j→∞} P^j g = g_0 for all g ∈ G.
  2. There exists a density of the form

g̃_0 := ∫ Σ_{k(r̃)} (σ̃_{w(r̃)k(r̃)} 1_{J_{k(r̃)}^{w(r̃)}} ∫_{I_{k(r̃)}^{w(r̃−1)}} g_0 dm) dp^{r̃},

with r̃ as in (3.H2), such that

g̃_0/C* ≤ g_0 ≤ C* g̃_0, and g̃_0 > 0,

where C* is the constant in condition (3.H̃1).

(b1) Additionally, the unique P-invariant density g_0 is estimated as follows:

C*^{−2} ∫ u_{w(2r̃)} dp^{2r̃} ≤ g_0 ≤ C*^{2} ∫ U_{w(2r̃)} dp^{2r̃},

where u_{w(2r̃)} is defined by (3.6), and

U_{w(2r)}(x) := sup{g_{w(2r)k(r)}(x) : k(r) ∈ K^r, and I_{k(r)}^{w(r−1)} ≠ ∅}.

The upper estimate holds provided ∫ U_{w(2r̃)} dp^{2r̃} < ∞.

Proof. (a) Clearly, that assertion follows from Th. 3.3. Nevertheless we prove it directly (in a way close to the proof of Th. 3.3), because the proof contains a result which is exploited in the course of the proof of the two remaining assertions.

The proof is based on the fact that the function

(3.11) ũ_{2r} := C*^{−2} ∫ u_{w(2r)} dp^{2r},

where C* is the constant in condition (3.H̃1), satisfies relation (3.10).

By condition (3.H̃1) there exists a subset G* ⊆ G̃(C*) dense in G. Then for any g ∈ G* there exists j_1 = j_1(g) such that for any j ≥ j_1 and all I_k ∈ π one has

(3.12) 1/C* ≤ P_{y(j)} g(x) / P_{y(j)} g(y) ≤ C* for m × m-a.e. (x, y) ∈ I_k × I_k.

From the above inequalities the following basic double inequality follows (see [7], Lemma 2.3):

(3.13) F_{rw(r)}(P_{y(j)} g)/C* ≤ P_{w(r)} P_{y(j)} g ≤ C* F_{rw(r)}(P_{y(j)} g)

for g ∈ G*, all w(r) ∈ Y^r, y(j) ∈ Y^j, r ≥ 1, and all j ≥ j_1(g); where F_{rw(r)} is defined by

(3.14) F_{rw(r)}(g) := Σ_{k(r)} σ̃_{w(r)k(r)} ∫_{I_{k(r)}^{w(r−1)}} g dm.

In the last formula σ̃_{w(r)k(r)} and I_{k(r)}^{w(r−1)} are defined by (3.8) and (2.4), respectively.

Indeed, from (3.12) we obtain

C*^{−1} (P_{y(j)} g)_{w(r)k(r)}(x) σ_{w(r)k(r)}(x) ≤ (P_{y(j)} g)_{w(r)k(r)}(y) σ_{w(r)k(r)}(x) ≤ C* (P_{y(j)} g)_{w(r)k(r)}(x) σ_{w(r)k(r)}(x)

for each J_{k(r)}^{w(r)} = φ_{w(r)k(r)}(I_{k(r)}^{w(r−1)}), all x, y ∈ J_{k(r)}^{w(r)}, and j ≥ j_1(g);

where

(P_{y(j)} g)_{w(r)k(r)}(x) := (P_{y(j)} g) ∘ φ_{w(r)k(r)}^{−1}(x) for x ∈ J_{k(r)}^{w(r)}, and := 0 for x ∈ I ∖ J_{k(r)}^{w(r)}.

Integrating the above inequalities with respect to x over J_{k(r)}^{w(r)}, multiplying by σ̃_{w(r)k(r)}(y), then summing the resulting inequalities over all k(r) and finally using the equality (2.8), one gets the desired double inequality (3.13).

Let w(2r) = (w̃(r), w(r)) ∈ Y^r × Y^r; then iterating the double inequality (3.13), by using the equalities P_{w(2r)} = P_{w̃(r)} P_{w(r)},

(3.15) Σ_{k(r)} ‖1_{I_{k(r)}^{w(r−1)}} P_{y(j)} g‖ = 1,

and the definition (3.14), one gets for every r ≥ 1, j ≥ j_1(g), and all w̃(r), w(r) ∈ Y^r, and y(j) ∈ Y^j:

(3.16) (a) P_{w(2r)} P_{y(j)} g ≥ C*^{−2} F_{rw̃(r)}(F_{rw(r)} P_{y(j)} g) ≥ C*^{−2} u_{w(2r)}, (b) P_{w(2r)} P_{y(j)} g ≤ C*^{2} U_{w(2r)}.

Integrating the above two inequalities with respect to w(2r) = (w̃(r), w(r)) and y(j), and applying formulas (2.2), (3.2), and (3.3), gives

(3.17) C*^{−2} ∫ u_{w(2r)} dp^{2r} ≤ P^{j+2r} g ≤ C*^{2} ∫ U_{w(2r)} dp^{2r},

for r ≥ 1 and j ≥ j_1(g).

It follows from the first part of the just derived double inequality (3.17) and the two facts that G* is dense in G and P is a contraction, that for each r ≥ 1 the function ũ_{2r}, given by (3.11), satisfies relation (3.10).

Since, however, it may happen that ũ_{2r} = 0 for each r ≥ 1 (see [4, 5] and recently [8]), one has to assume condition (3.H2), which excludes such a possibility. The proof of assertion (a) is completed by an appeal to Th. 2.5.

(b) Integrating (3.13) with respect to w(r) and y(j), and applying formulas (2.2), (3.2), and (3.3), gives

C*^{−1} ∫ F_{rw(r)}(P^j g) dp^r ≤ P^r P^j g ≤ C* ∫ F_{rw(r)}(P^j g) dp^r,

for r ≥ 1 and j ≥ j_1(g).

Therefore the double inequality of the assertion under consideration follows from the last relation and assertion (a).

Hence for each B ∈ Σ, μ_0(B) = 0 ⇔ μ̃_0(B) = 0, where dμ̃_0 = g̃_0 dm; that is, A := spt(g_0) = spt(g̃_0), where spt(g) = {x ∈ I : g(x) > 0}. To prove that g̃_0 > 0, note first that A consists exclusively of I_k's; this follows from the form of g̃_0 given in Th. 3.4 (b). We claim that ‖1_{I_k} g_0‖ = ‖1_{I_k∩A} g_0‖ > 0, i.e., I_k ⊆ A, for any I_k ∈ π.

Take any I_k̃ ⊆ A, and for an arbitrary I_k let s be such that I_k ⊆ φ_{y(s)}(I_k̃) ⊆ φ_{y(s)}(A) for all y(s) ∈ Ỹ_s, where Ỹ_s ⊆ Y^s with p^s(Ỹ_s) > 0 (see condition (2.My4)).

Then we have

f_{y(s)} := ∫_{I_k ∩ φ_{y(s)}(I_k̃)} P_{y(s)}(1_{I_k̃} g_0) dm > 0 on Ỹ_s.

Therefore

∫_{I_k} g_0 dm = ∫_{I_k} {∫ P_{y(s)} g_0 dp^s} dm = ∫ {∫_{I_k ∩ φ_{y(s)}(A)} P_{y(s)}(1_A g_0) dm} dp^s ≥ ∫_{Ỹ_s} f_{y(s)} dp^s > 0.

This shows that spt(g̃_0) = I.

(b1) The estimates under consideration follow from (3.17) and assertion (a) of the theorem.

We conclude this section with a result on the convergence of {j^{−1} Σ_{i=0}^{j−1} P^i}_{j=1}^∞. Namely, the convergence of that sequence is proved under condition (3.H̃1) and a probabilistic analogue of condition (3.H3) in [6].

Let {V_n}_{n=1}^∞ be a sequence of subsets of I such that each V_n consists of a finite number of I_k's, V_n ⊆ V_{n+1}, ∪_{n=1}^∞ V_n = Ĩ and m(I ∖ Ĩ) = 0. Then we define

(3.18) d_{rn} := sup_{w(r)} sup_{k(r)} ∫_{I∖V_n} σ̃_{w(r)k(r)} dm.

The probabilistic analogue of that condition reads as follows:

(3.H3) There exists r̃ ≥ 1 such that lim_{n→∞} d_{r̃n} = 0.

The following theorem holds:

Theorem 3.5

(Third Convergence Theorem). Let a family {φ_y}_{y∈Y} of Markov maps satisfy conditions (3.H̃1) and (3.H3).

Then:

(a) There exists exactly one P-invariant density g_0 such that lim_{j→∞} S_j g = g_0 in L1 for all g ∈ G, where

S_j := j^{−1} Σ_{i=0}^{j−1} P^i.

(b) The two assertions (b) and (b1) of the previous theorem hold.

Proof. (a) Let V_n consist of a finite number of I_k's. Integrating first the second part of the double inequality (3.13), with respect to y and over I_k ⊆ V_n, one gets m(I_k) P_{y(j)} g(x) ≤ C* ∫_{I_k} P_{y(j)} g dm. Then integrating the obtained inequality with respect to y(j) ∈ Y^j one gets

(3.19) P^j g ≤ C* m̃(V_n)^{−1} m-a.e. on V_n,

where m̃(V_n) = min{m(I_k) : I_k ⊆ V_n}.

Next, it follows from the second part of the double inequality (3.13) and the definition (3.18) that

(3.20) sup_{j≥j_1} ∫_{I∖V_n} P^j g dm ≤ C* d_{rn} for r, n = 1,2,....

Then the last two inequalities and condition (3.H3) imply weak compactness of {P^j g} for g ∈ G* ([10], Th. IV.8.9).

Finally, assertion (a) follows from this, the Yosida-Kakutani Ergodic Theorem ([10], Th. VIII.5.1), and the denseness of G* in G.

(b) This follows from assertion (a) of this theorem in the same way as assertions (b) and (b1) of the previous theorem follow from assertion (a) of that theorem.
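As a numerical illustration of the Cesàro averages S_j in Theorem 3.5(a), the sketch below iterates the averaged operator P of the illustrative two-map family {doubling map, tent map} with p({doubling}) = p({tent}) = 1/2; both maps preserve Lebesgue measure, so in this toy case g_0 = 1. The family and the grid discretization are assumptions made only for the example.

```python
import numpy as np

xgrid = np.linspace(0.0, 1.0, 2001)

def P(g):
    # Markov operator (3.1) for the illustrative family {doubling, tent}, p = (1/2, 1/2):
    # P g = 0.5 * P_doubling g + 0.5 * P_tent g, evaluated on a grid.
    g_a = np.interp(xgrid / 2, xgrid, g)           # g(x/2), shared by both maps
    g_b = np.interp(xgrid / 2 + 0.5, xgrid, g)     # g(x/2 + 1/2), doubling branch
    g_c = np.interp(1.0 - xgrid / 2, xgrid, g)     # g(1 - x/2), tent branch
    return 0.5 * (0.5 * (g_a + g_b)) + 0.5 * (0.5 * (g_a + g_c))

g = np.where(xgrid < 0.25, 4.0, 0.0)               # a rough initial density, ||g|| = 1
iterates = [g]
for _ in range(60):
    iterates.append(P(iterates[-1]))

# Cesàro averages S_j g = j^{-1} * sum_{i=0}^{j-1} P^i g, as in Theorem 3.5(a);
# the printed values approximate ||S_j g - g_0|| with g_0 = 1 and decrease with j.
for j in (5, 20, 60):
    S_j = np.mean(iterates[:j], axis=0)
    print(j, round(float(np.mean(np.abs(S_j - 1.0))), 4))
```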

Remark 3.6

It follows from (3.6), (3.7), (3.16)(a), and the first part of the double inequality (3.17) that condition (3.H2) is optimal. Likewise, it follows from (3.18), the second part of the inequality (3.13), and (3.20) that condition (3.H3) is optimal too.

Note also that the first complementary condition is essentially less restrictive than the second one, but the second condition is readily verifiable in practice (especially in the case m(I) < ∞, see Fact 4.1).

4 Application to families {φ_y}_{y∈Y} of C^{1+α}, 0 < α ≤ 1, and C^2-smooth Markov maps in ℝ^d

In what follows the following notation will be used: ℝ^d – the d-dimensional Euclidean space (d ≥ 1); | · | – the Euclidean norm; I – a domain in ℝ^d, i.e., an open, connected subset of ℝ^d; Σ – the σ-algebra of all Borel-measurable subsets of I; m – the Lebesgue measure on ℝ^d; diam(A) – the diameter of the set A; Df – the derivative of f.

A C^{1+α}, 0 < α ≤ 1, or C^2-smooth Markov map φ means a Markov map in the sense of Def. 2.2 such that the partition π of φ consists of domains, and the restriction φ_k of φ to any I_k ∈ π is a C^{1+α} (or C^2) diffeomorphism.

First we consider a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps which satisfy the following C^{1+α}-variant of the so-called Rényi Condition (see e.g. [15] or [21]):

(4.Hy4) (Local Case) Let {φ_y}_{y∈Y} be a family of C^{1+α}-smooth Markov maps. There exist constants C_{10,y(r)} > 0, y(r) ∈ Y^r, such that for r = 1,2,..., k(r) ∈ K^r, and all I_k ∈ π,

(a) |σ_{y(r)k(r)}(x) − σ_{y(r)k(r)}(y)| ≤ C_{10,y(r)} σ_{y(r)k(r)}(y) |x − y|^α

for all x, y ∈ J_{k(r)}^{y(r)} ∩ I_k, where σ_{y(r)k(r)} is defined by (2.6), and J_{k(r)}^{y(r)} = φ_{y(r)k(r)}(I_{k(r)}^{y(r−1)}).

Furthermore, the constants C_{10,y(r)} > 0 satisfy either

(b1) lim sup_{j→∞} ∫ C_{10,y(j)} dp^j < ∞; or (b2) lim sup_{j→∞} sup_{y(j)∈Y^j} C_{10,y(j)} < ∞.

For further use we formulate below, in the case m(I) < ∞, the global analogue of the above condition. It reads:

(4.H̃y4) (Global Case, m(I) < ∞) Let {φ_y}_{y∈Y} be a family of C^{1+α}-smooth Markov maps. There exist constants C̃_{10,y(r)} > 0, y(r) ∈ Y^r, such that for r = 1,2,..., k(r) ∈ K^r,

(ã) |σ_{y(r)k(r)}(x) − σ_{y(r)k(r)}(y)| ≤ C̃_{10,y(r)} σ_{y(r)k(r)}(y) |x − y|^α for all x, y ∈ J_{k(r)}^{y(r)}.

Furthermore, the constants C̃_{10,y(r)} > 0 satisfy either

(b̃1) lim sup_{j→∞} ∫ C̃_{10,y(j)} dp^j < ∞; or
(b̃2) lim sup_{j→∞} {sup_{y(j)∈Y^j} C̃_{10,y(j)}} < ∞.

It is also possible, in the case m(I) < ∞, to replace condition (3.H3) by the following simpler one:

(4.Hy6) C_{11} := inf{m(φ_y(I_k)) : k ∈ K, y ∈ Y} > 0.

This is so because condition (4.H̃y4)(ã), (b̃2) together with the last one implies the following fact:

Fact 4.1. If a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps is defined on I with m(I) < ∞ and satisfies conditions (4.H̃y4)(ã), (b̃2) and (4.Hy6), then condition (3.H3) holds.

Proof. First note that condition (4.H̃y4)(ã), (b̃2) implies a global version of the so-called Rényi Condition ([21] or [22]). Namely, the following inequality holds:

(4.H̃y5) (Global Case, m(I) < ∞) There exist a constant C̃ > 0 and an integer r_0 such that:

sup_{x∈J_{k(r)}^{y(r)}} σ_{y(r)k(r)}(x) ≤ C̃ inf_{x∈J_{k(r)}^{y(r)}} σ_{y(r)k(r)}(x),

for all r ≥ r_0, y(r) ∈ Y^r, and k(r) ∈ K^r.

The above inequality and condition (4.Hy6) imply

σ̃_{y(r)k(r)} ≤ C̃/C_{11} on J_{k(r)}^{y(r)},

where σ̃_{y(r)k(r)} is defined by (3.8). Thus condition (3.H3) holds.

Let {φ_y}_{y∈Y} be a given family of C^{1+α}-smooth Markov maps, and let {π^{y(r)} : r = 1,2,..., y(r) ∈ Y^r} be a family of partitions whose elements are defined by (2.4). We assume

(4.Hy7) (Generating Condition on {π^{y(r)} : r = 1,2,..., y(r) ∈ Y^r}) The family of partitions satisfies either

(c1) lim_{j→∞} ∫ {sup_{k(j+1)} diam(I_{k(j+1)}^{y(j)})^α} dp^j = 0; or (c2) lim_{j→∞} {sup_{y(j)∈Y^j, k(j+1)} diam(I_{k(j+1)}^{y(j)})^α} = 0.
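Here is a hedged numerical check of the Generating Condition for the illustrative two-map family {doubling map, tent map} on I = [0, 1): the cells of the refined partition π^{y(j)} are the sets of points sharing the same itinerary (left or right half of [0, 1)) along the composition φ_{y(j)}, and their maximal diameter can be estimated on a fine grid. Everything below (family, grid, number of steps) is an assumption made only for this illustration.

```python
import numpy as np

doubling = lambda x: (2.0 * x) % 1.0
tent     = lambda x: 1.0 - np.abs(2.0 * x - 1.0)

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200_001, endpoint=False)
code = (x >= 0.5).astype(np.int64)        # first itinerary symbol: the partition pi itself
for j in range(1, 9):
    x = (doubling if rng.random() < 0.5 else tent)(x)   # apply a randomly drawn phi_{y_j}
    code = 2 * code + (x >= 0.5)          # refine the itinerary by the next symbol
    # Maximal cell diameter ~ longest run of equal itineraries times the grid step.
    breaks = np.flatnonzero(np.diff(code) != 0)
    runs = np.diff(np.concatenate(([0], breaks + 1, [code.size])))
    print(j + 1, round(float(runs.max()) / code.size, 6))   # roughly 2^{-(j+1)} -> 0, so (c2) holds
```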

In [9] we have proved, by using Th. 3.3 there, the following result (see Th. 4.2 there):

Theorem 4.2

Let a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps satisfy conditions (4.Hy4(a), (b1)), (4.Hy7(c1)) and (3.H2). Then there exists exactly one P-invariant density g_0 such that lim_{j→∞} P^j g = g_0 for all g ∈ G.

Proof. (main idea) One has only to check condition (3.H1). As a dense subset contained in G(C*) a class G_α of densities, whose definition is given below, has been chosen in [9]. The constant C* involved in condition (3.H1) has been defined as an arbitrary (but fixed) number such that:

C* > lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j,

where

C_{y(j)}(g) := {1 + C(g) sup_{k(j+1)∈K^{j+1}} diam(I_{k(j+1)}^{y(j)})^α}{1 + C̃_0^α C_{10,y(j)}},

and C̃_0 := sup{diam(I_k) : k ∈ K}, for j = 1,2,....

Note that the definition of C* is correct because, by the definition of the C_{y(j)}'s and conditions (4.Hy4(b1)) and (4.Hy7(c1)), one has lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j < ∞.

Definition 4.3

We denote by G_α, 0 < α ≤ 1, the class of all densities g ∈ G satisfying the following three conditions:

  1. spt(g) := {x ∈ I : g(x) > 0} is a union of a number of I_k's;

  2. for each I_k ∈ π, g|I_k ∈ C^{0+α}(I_k), and

  3. |g(x) − g(y)| ≤ C(g) g(y) |x − y|^α for all x, y ∈ spt(g) ∩ I_k, where C(g) is a constant depending on g.

We are now going to examine the convergence of {P^j g} under conditions (3.H2), (4.Hy4(a), (b2)) and (4.Hy7(c2)). We show that the last two assumptions imply condition (3.H̃1), which enables us to establish, by Th. 3.4, further properties of the P-invariant density.

Moreover, it is possible in the case under consideration to establish the convergence of {P^j g} not only in L1, but also in the topology of uniform convergence (on every I_k ∈ π), and the C^{0+α}-smoothness of the unique P-invariant density.

As an application of Th. 3.4 we prove the following:

Theorem 4.4

Let a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps satisfy:

(A) Conditions (3.H2), (4.Hy4(a), (b2)), and (4.Hy7(c2)) hold.

Then:

(D1) The assertions (a), (b) and (b1) of Theorem 3.4 hold.

(D2.c) For each k ∈ K, lim_{j→∞} ‖g_0 − P^j g‖_k = 0 for all g ∈ G_α, where ‖g‖_k = sup{|g(x)| : x ∈ I_k};

(D2.d) |g_0(x) − g_0(y)| ≤ (C_{10} C_{10,α}/m(I_k)) |x − y|^α for x, y ∈ I_k.

Proof. (D1) By Th. 3.4 it is enough to check condition (3.H̃1). To this end we show that

(4.1) G_α ⊆ G̃(C*) for a fixed number C* such that C* > lim sup_{j→∞} C_j(g),

where

(4.2) C_j(g) := {1 + C(g) sup_{y(j)} sup_{k(j+1)∈K^{j+1}} diam(I_{k(j+1)}^{y(j)})^α}{1 + C̃_0^α sup_{y(j)} C_{10,y(j)}},

C̃_0 := sup{diam(I_k) : k ∈ K}, and j = 1,2,....

Let g ∈ G_α; then for any y(j) ∈ Y^j, k(j) ∈ K^j, j = 1,2,..., and for any x, z ∈ I_k the following inequality holds:

g ∘ φ_{y(j)k(j)}^{−1}(x) / g ∘ φ_{y(j)k(j)}^{−1}(z) ≤ 1 + C(g) sup_{k(j+1)} diam(I_{k(j+1)}^{y(j)})^α.

Next, by condition (4.Hy4(a)), we have the following inequality (for any y(j) ∈ Y^j, k(j) ∈ K^j, j = 1,2,..., and for any x, z ∈ I_k):

σ_{y(j)k(j)}(x)/σ_{y(j)k(j)}(z) ≤ 1 + C̃_0^α C_{10,y(j)}.

Then the last two inequalities imply that for any y(j) ∈ Y^j, j = 1,2,..., I_k ∈ π, and for any x, z ∈ I_k we have

P_{y(j)} g(x) ≤ C_j(g) P_{y(j)} g(z),

where C_j(g) are given by (4.2).

Finally, it follows from (4.2) and conditions (4.Hy4(b2)) and (4.Hy7(c2)) that lim sup_{j→∞} C_j(g) < ∞. Therefore this and the last inequality imply (4.1). Since G_α is dense in G, condition (3.H̃1) holds. This finishes the proof of (D1).

(D2) By condition (4.Hy4(a), (b2)), there exists a constant C_{10} > 0 such that

(4.3) |σ_{y(r)k(r)}(x) − σ_{y(r)k(r)}(y)| ≤ C_{10} σ_{y(r)k(r)}(y) |x − y|^α

for all x, y ∈ J_{k(r)}^{y(r)} ∩ I_k and r ≥ r_0.

Let g ∈ G_α; then it follows from the last inequality that for any I_k, and j = 1,2,..., the following two inequalities hold:

(4.4) P^j g(x) ≤ C_α(g) C_{10,α} / m(I_k) for x ∈ I_k,

and

(4.5) |P^j g(x) − P^j g(y)| ≤ C(g)|x − y|^α P^j g(x) + C_{10}|x − y|^α P^j g(y) = (C(g) P^j g(x) + C_{10} P^j g(y)) |x − y|^α for x, y ∈ I_k,

where C_α(g) := 1 + C(g) C̃_0^α and C_{10,α} := 1 + C_{10} C̃_0^α.

The above two inequalities imply that for each I_k, the family {P^j g}_{j≥1}, restricted to I_k, is bounded and equicontinuous in the space C(I_k) of all bounded and continuous functions f : I_k → ℝ equipped with the norm ‖f‖_k := sup{|f(x)| : x ∈ I_k}.

Thus conclusion (D2.c) follows from the Ascoli-Arzelà Lemma and (D1.a) of the theorem.

The second conclusion (D2.d) follows from (D2.c), the last two inequalities (4.4) and (4.5), and from the fact that w_k = 1_{I_k}/m(I_k) ∈ G_α with C(w_k) = 0.

We end our considerations on the family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps with the following C^{1+α}-counterpart of Th. 3.5:

Theorem 4.5

Let a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps satisfy either

  1. (B) Conditions (3.H3), (4.Hy4(a), (b2)) and (4.Hy7(c2)); or

  2. (C) m(I) < ∞, conditions (4.H̃y4)(ã), (b̃2), (4.Hy6) and (4.Hy7(c2)).

Then:

(D1) The assertions (a), (b) and (b1) of Theorem 3.4 hold.

(D2.c) For each k ∈ K, lim_{j→∞} ‖g_0 − S_j g‖_k = 0 for all g ∈ G_α,

where ‖g‖_k = sup{|g(x)| : x ∈ I_k};

(D2.d) |g_0(x) − g_0(y)| ≤ (C_{10} C_{10,α}/m(I_k)) |x − y|^α for x, y ∈ I_k.

Proof. (B) ⇒ (D1) As in the proof of assertion (D1) of Th. 4.4, the proof is based on the fact that the last two conditions of assumption (B) imply condition (3.H̃1).

(C) ⇒ (D1) By Fact 4.1 and the previous case.

The two remaining implications hold for reasons analogous to those of the previous theorem.

Now we are going to consider a family {φ_y}_{y∈Y} of C^2-smooth Markov maps. Those maps satisfy

(4.Hy8) Uniformly Expanding (in All Directions) Condition: There exist constants C_{1y} > 1, y ∈ Y, such that at each point x ∈ Ĩ = ∪_{k∈K} I_k the derivative matrix Dφ_y(x) of φ_y satisfies:

(a) |Dφ_y(x)v| ≥ C_{1y}|v| for each v ∈ ℝ^d.

Next, we assume that the constants C_{1y}, y ∈ Y, satisfy either

(d1) C̃_1 := ∫ C_{1y}^{−1} dp(y) < 1; or (d2) C_1 := sup{C_{1y}^{−1} : y ∈ Y} < 1.

Together with condition (4.Hy8(a), (d1) or (d2)) we assume that the family {φ_y}_{y∈Y} of C^2-smooth Markov maps satisfies

(4.Hy9) Second Derivative Condition: For each k ∈ K, φ_y ∈ C^2(I_k); and

(a) C_{2y} := sup{Reg(σ_{yk}) : k ∈ K} < ∞,

where σ_{yk} is defined by (2.6), and Reg(f) by

(4.6) Reg(f) := sup{|Df(x)|/|f(x)| : x ∈ I, |f(x)| > 0, Df(x) exists}.

Furthermore, we assume that the constants C_{2y}, y ∈ Y, satisfy either

(e1) C̃_2 := ∫ C_{2y} dp(y) < ∞; or

(e2) C_2 := sup{C_{2y} : y ∈ Y} < ∞.

Finally, we assume that the domains J_k^y = φ_{yk}(I_k), k ∈ K, y ∈ Y, satisfy the following condition:

(4.M13) There is a constant C_0 > 0 such that any two points x, y in any J_k^y = φ_{yk}(I_k) can be joined by a piecewise straight arc of length at most C_0|x − y|.
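Conditions (4.Hy8) and (4.Hy9) are easy to check numerically for explicit families. The sketch below does this for a hypothetical family of C^2 expanding circle maps φ_y(x) = (2x + y sin(2πx)) mod 1 with y ~ Uniform[−0.1, 0.1]; it estimates only the derivative constants C_{1y} and a one-dimensional bound for Reg(σ_{yk}), and does not verify the common-partition requirement (2.My6) or the other structural assumptions.

```python
import numpy as np

# Hypothetical family phi_y(x) = (2x + y*sin(2*pi*x)) mod 1, y ~ Uniform[-0.1, 0.1].
# For |y| < 1/(2*pi) the derivative satisfies D phi_y(x) = 2 + 2*pi*y*cos(2*pi*x) > 1.
# In one dimension Reg(sigma_yk) is bounded by sup_x |D^2 phi_y(x)| / (D phi_y(x))^2.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 10_001)

C1_inv, C2 = [], []
for y in rng.uniform(-0.1, 0.1, 2000):
    d1 = 2.0 + 2.0 * np.pi * y * np.cos(2.0 * np.pi * x)     # D phi_y on a grid
    d2 = -4.0 * np.pi**2 * y * np.sin(2.0 * np.pi * x)       # D^2 phi_y on a grid
    C1_inv.append(1.0 / d1.min())                            # C_{1y}^{-1} = 1 / inf |D phi_y|
    C2.append(float(np.max(np.abs(d2) / d1**2)))             # estimate of C_{2y}

print("C~_1 =", round(float(np.mean(C1_inv)), 4), "(< 1, so (d1) holds)")
print("C~_2 =", round(float(np.mean(C2)), 4), "(finite, so (e1) holds)")
```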

In [7] it is proved directly, under conditions (4.Hy8(a), (d1)), (4.Hy9(a), (e1)) and (3.H2), that the sequence P^j g converges to a unique P-invariant density g_0 (see Th. 3.1 there).

Here we show that the result in question is a consequence of Th. 3.3 in this paper. Clearly, it is enough to check condition (3.H1). To this end we first define a dense subset of G occurring in that condition.

Definition 4.6

We denote by G̃^{(1)} the class of all densities g ∈ G satisfying the following four conditions:

  1. spt(g) := {x ∈ I : g(x) > 0} is a union of a number of I_k's;

  2. for each I_k ∈ π, g|I_k ∈ C^1(I_k);

  3. Reg(g) < ∞, where Reg(g) is defined by (4.6);

  4. sup{g(x) : x ∈ I_k} < ∞ for each I_k ∈ π.

Now Th. 3.1 in [7] can be formulated as follows:

Theorem 4.7

Let a family {φ_y}_{y∈Y} of C^2-smooth Markov maps satisfy

(A) conditions (3.H2), (4.Hy8(a), (d1)) and (4.Hy9(a), (e1)).

Then there exists exactly one P-invariant density g_0 such that lim_{j→∞} P^j g = g_0 for all g ∈ G.

Proof. We show that

(4.7) G̃^{(1)} ⊆ G(C*) for an arbitrary (but fixed) number C* such that C* > lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j,

where

(4.8) C_{y(j)}(g) := exp{C_0 Reg(g) sup_{k(j+1)} diam(I_{k(j+1)}^{y(j)}) + C_0 C̃_0 Reg(σ_{y(j)k(j)})}.

To this end we note that any g ∈ G̃^{(1)} satisfies the inequality [6]

(4.9) g(x) ≤ g(y) exp{Reg(g) C_0 |x − y|} for any x, y ∈ I_k.

Next we also note that [6]

(4.10) σ_{y(j)k(j)}(x)/σ_{y(j)k(j)}(y) ≤ exp{Reg(σ_{y(j)k(j)}) C_0 |x − y|}

for each J_{k(j)}^{y(j)}, x, y ∈ J_{k(j)}^{y(j)}, and j = 1,2,....

Then (4.9) and (4.10) imply that

(4.11) P_{y(j)} g(x) ≤ C_{y(j)}(g) P_{y(j)} g(z),

where the C_{y(j)}(g)'s are defined by (4.8). Hence condition (a) of Def. 3.2 holds for g ∈ G̃^{(1)}.

To check condition (b) of that definition we first note that

(4.12) Reg(σ_{y(j)k(j)}) ≤ C_{2y_j} + C_{1y_j}^{−1} C_{2y_{j−1}} + C_{1y_j}^{−1} C_{1y_{j−1}}^{−1} C_{2y_{j−2}} + ··· + C_{1y_j}^{−1} C_{1y_{j−1}}^{−1} ··· C_{1y_2}^{−1} C_{2y_1},

and

(4.13) diam(I_{k(j+1)}^{y(j)}) ≤ C_0 C_{1y_1}^{−1} C_{1y_2}^{−1} ··· C_{1y_j}^{−1}

for any y(j) and k(j + 1), j = 1,2,....

Then it follows from (4.8), (4.12), (4.13), conditions (4.Hy8(a), (d1)) and (4.Hy9(a), (e1)) that

lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j ≤ C_0 C̃_0 C̃_2/(1 − C̃_1)

for any g ∈ G̃^{(1)}. The last inequality together with (4.11) implies (4.7). Since G̃^{(1)} is dense in G, condition (3.H1) holds.

At the end we show that Th. 3.2 in [7] is a special case of Th. 3.4 in this paper:

Theorem 4.8

Let a family {φy}y∈Y of C2-smooth Markov maps satisfy

(A) conditions (3.H2), (4.Hy8(a), (d2)) and (4.Hy9(a), (e2)).

Then:

(D1) The assertions (a), (b) and (b1) of Theorem 3.4 hold.

(D2.c) For each k ∈ K, lim_{j→∞} ‖g_0 − P^j g‖_k = 0 for all g ∈ G_α,

where ‖g‖_k = sup{|g(x)| : x ∈ I_k};

(D2.d) |g_0(x) − g_0(y)| ≤ (C_{10} C_{10,α}/m(I_k)) |x − y|^α for x, y ∈ I_k.

In the last two assertions α = 1.

Proof. (D1) It follows from (4.8), (4.12), (4.13) and conditions (4.Hy8(a), (d2)) and (4.Hy9(a), (e2)) that

lim sup_{j→∞} {sup_{y(j)} C_{y(j)}(g)} ≤ exp{C_0 C̃_0 C_2/(1 − C_1)}.

It implies

C_{y(j)}(g) < C* for y(j) ∈ Y^j, and j ≥ j_1(g),

where C* > exp{C_0 C̃_0 C_2/(1 − C_1)} is arbitrary (but fixed). This together with (4.11) implies condition (3.H̃1) because G̃^{(1)} is dense in G. Thus the assertion under consideration holds by Th. 3.4.

(D2.c) and (D2.d) From the inequalities (4.10), (4.12), and conditions (4.Hy8(a), (d2)) and (4.Hy9(a), (e2)) it follows that (4.3) holds for some constant C_{10} > 0 and α = 1 (see e.g. [6], Fact 4.2.2). This implies inequalities (4.4) and (4.5) for g ∈ G̃^{(1)} and α = 1. Thus the assertions under discussion hold for the reasons explained in the proof of Th. 4.4.

Conflict of interest

Conflict of interest statement: Authors state no conflict of interest.

References

[1] J. Banaś and T. Zając, On a measure of noncompactness in the space of regulated functions and its applications, Adv. Nonlinear Anal. 8 (2019), 1099–1110. DOI: 10.1515/anona-2018-0024.

[2] T. Bogenschütz and V.M. Gundlach, Symbolic dynamics for expanding random dynamical systems, Random Comput. Dynamics 1 (1992/93), 219–227.

[3] T. Bogenschütz and Z. Kowalski, A condition for mixing of skew products, Aequationes Math. 59 (2000), 222–234. DOI: 10.1007/s000100050122.

[4] P. Bugiel, A note on invariant measures for Markov maps of an interval, Z. Wahrsch. Verw. Gebiete 70 (1985), 345–349. DOI: 10.1007/BF00534867.

[5] P. Bugiel, Correction and addendum to "A note on invariant measures for Markov maps of an interval", Z. Wahrsch. Verw. Gebiete 70 (1985), 345–349; Probab. Theory Related Fields 76 (1987), 255–256.

[6] P. Bugiel, Distortion inequality for the Frobenius-Perron operator and some its consequences in ergodic theory of Markov maps in ℝ^d, Ann. Polon. Math. LXVIII.2 (1998), 125–157. DOI: 10.4064/ap-68-2-125-157 (available online at https://www.impan.pl/pl/wydawnicrwa/czasopisma-i-serie-wydawnicze/annales-polonici-mathematici).

[7] P. Bugiel, Ergodic properties of a randomly perturbed family of piecewise C2-diffeomorphisms in ℝ^d, Math. Z. 224 (1997), 289–311. DOI: 10.1007/PL00004585.

[8] P. Bugiel, S. Wędrychowicz and B. Rzepka, A few problems connected with invariant measures of Markov maps – verification of some claims and opinions that circulate in the literature, Adv. Nonlinear Anal. 9 (2020), 1607–1616. DOI: 10.1515/anona-2020-0221.

[9] P. Bugiel, S. Wędrychowicz and B. Rzepka, Fixed point of some Markov operator of Frobenius-Perron type generated by a random family of point-transformations in ℝ^d, Adv. Nonlinear Anal. 10 (2021), 972–981. DOI: 10.1515/anona-2020-0163.

[10] N. Dunford and J.T. Schwartz, Linear Operators. Part I: General Theory, Wiley, New York, 1963.

[11] K. Horbacz, Invariant densities for one-dimensional random dynamical systems, Univ. Iagel. Acta Math. Fasc. 28 (1991), 101–106.

[12] S. Kakutani, Random ergodic theorems and Markoff processes with a stable distribution, Proc. 2nd Berkeley Symp. (1951), 247–261.

[13] Y. Kifer, Equilibrium states for random expanding transformations, Random Comput. Dynamics 1 (1992/93), 1–31.

[14] A. Lasota and J. Yorke, Exact dynamical systems and the Frobenius-Perron operator, Trans. Amer. Math. Soc. 273 (1982), 375–384. DOI: 10.1090/S0002-9947-1982-0664049-X.

[15] R. Mañé, Ergodic Theory and Differentiable Dynamics, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer, Berlin and New York, 1987. DOI: 10.1007/978-3-642-70335-5.

[16] W. de Melo and S. van Strien, One-dimensional Dynamics, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer, Berlin and New York, 1993.

[17] T. Morita, Random iteration of one-dimensional transformations, Osaka J. Math. 22 (1985), 489–518.

[18] T. Ohno, Asymptotic behavior of dynamical system with random parameters, Publ. RIMS Kyoto Univ. 19 (1983), 83–98. DOI: 10.2977/prims/1195182976.

[19] S. Pelikan, Invariant densities for random maps of the interval, Trans. Amer. Math. Soc. 281 (1984), 813–825. DOI: 10.1090/S0002-9947-1984-0722776-1.

[20] O. Rechard, Invariant measures for many-one transformations, Duke Math. J. 23 (1956), 477–488. DOI: 10.1215/S0012-7094-56-02344-4.

[21] A. Rényi, Representation of real numbers and their ergodic properties, Acta Math. Acad. Sci. Hungar. 8 (1957), 477–493. DOI: 10.1007/BF02020331.

[22] V.A. Rochlin, Exact endomorphisms of Lebesgue spaces, Amer. Math. Soc. Transl. 39(2) (1964), 1–36; Izv. Akad. Nauk SSSR Ser. Mat. 25 (1961), 490–530.

[23] B. Rzepka and J. Ścibisz, The superposition operator in the space of functions continuous and converging at infinity on the real half-axis, Adv. Nonlinear Anal. 9 (2020), 1205–1213. DOI: 10.1515/anona-2020-0046.

[24] F. Schweiger, Ergodic Theory of Fibred Systems, Institut für Mathematik der Universität Salzburg, A-5020 Salzburg, 1989.

[25] W. Szlenk, An Introduction to the Theory of Smooth Dynamical Systems, PWN, Warsaw, 1984.

[26] S.M. Ulam, A Collection of Mathematical Problems, Interscience Tracts in Pure and Appl. Math. no. 8, Interscience, New York, 1960.

[27] S. Ulam and J. von Neumann, Random ergodic theorems, Bull. Amer. Math. Soc. 51 (1947), no. 9, 660.

Received: 2021-01-10
Accepted: 2021-04-26
Published Online: 2021-07-17

© 2021 Peter Bugiel et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
