Usuário(a):Horadrim/Testes11



Start page: Testes

Completed edits: Mecânica estatística  • Modelo Hodgkin-Huxley  • Neurociência computacional  • Modelo probabilístico para redes neurais  • Teoria de campo médio  • Modelo FitzHugh–Nagumo  • Processo Lévy  • Cadeias de Markov  • Processo de Poisson  • Galves–Löcherbach model  • Stochastic chains with memory of variable length  • Lesão do plexo braquial  • Somatotopia  • Função densidade  • Modelos de grafos aleatórios exponenciais • Processo de Gram-Schmidt  • Equação de Chapman–Kolmogorov  • Predefinição:Processos estocásticos  • Algoritmo de autovalores  • Transição de fase  • Hipótese do cérebro crítico  • Critical brain hypothesis  • Passeio aleatório  • Plasticidade sináptica  • Potencial pós-sináptico excitatório  • Potencial pós-sináptico inibitório  • Modelo de Morris-Lecar  • Plexo braquial  • Processo gaussiano  • Campo aleatório de Markov  • Eletroencefalografia  • Modelo de Hindmarsh-Rose  • Sistemas de partícula em interação  • Medida de Gibbs  • Nervo escapular dorsal  • Nervo radial  • Nervo peitoral lateral  • Nervo musculocutâneo  • Medida de Dirac  • Nervo torácico longo  • Sigma-álgebra  • Nervo peitoral medial  • Nervo ulnar  • Potencial evocado  • Estimulação magnética transcraniana repetitiva  • Teorema de Donsker  • Desigualdade de Boole  • Codificação neural  • Aprendizado de máquina  • Independência condicional  • Inferência estatística  • Nervo subclávio  • Nervo supraescapular  • Nervo mediano  • Nervo axilar  • Movimento browniano geométrico  • Caminho autoevitante  • Tempo local  • Nervo subescapular superior  • Nervo toracodorsal  • Nervo subscapular inferior  • Caminho (teoria dos grafos)  • Campo aleatório  • Lei do logaritmo iterado

Edits in progress: Nervo cutâneo medial do braço (N)  • Nervo cutâneo medial do antebraço (N)  • Cérebro estatístico (N)  • Statistician brain  • Distribuição de probabilidade condicional (N)  • Esperança condicional (N)  • Integral de Itō (N)  • Martingale  • Variação quadrática (N)  • Processo Ornstein–Uhlenbeck  • Ruído branco  • Teoria ergódica  • Avalanche neuronal (N)  • Teoria da percolação (N)  • Função totiente de Euler  • Ruído neuronal (N)  • Distribuição de Poisson  • Córtex cerebral  • Estímulo (fisiologia)




Reference: Cadeias estocásticas com memória de alcance variável

Stochastic chains with memory of variable length are a family of stochastic chains of finite order in a finite alphabet in which, at each time step, only a finite suffix of the past, called the context, is needed to predict the next symbol. These models were introduced in the information theory literature by Jorma Rissanen in 1983[1] as a universal tool for data compression, but have more recently been used to model data in different areas such as biology[2], linguistics[3] and music[4].

Definition

A stochastic chain with memory of variable length is a stochastic chain $(X_n)_{n \in \mathbb{Z}}$, taking values in a finite alphabet $A$, and characterized by a probabilistic context tree $(\tau, p)$, so that

  •  $\tau$ is the set of all contexts. A context $X_{n-\ell}, \ldots, X_{n-1}$, where $\ell$ is the size of the context, is a finite suffix of the past $\ldots, X_{n-2}, X_{n-1}$ that is relevant to predict the next symbol $X_n$;
  •  $p$ is a family of transition probabilities associated with each context.
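The definition above can be made concrete with a minimal sketch (all names and probabilities below are hypothetical, chosen only for illustration): a probabilistic context tree over the binary alphabet, where predicting the next symbol only requires scanning the past backwards for the shortest matching suffix.

```python
# A hypothetical probabilistic context tree over A = {0, 1}: each context
# (written left to right, oldest to newest symbol) maps to its transition
# probabilities p(. | context). The numbers are made up for illustration.
TREE = {
    "1":   {"0": 0.2, "1": 0.8},
    "10":  {"0": 0.6, "1": 0.4},
    "100": {"0": 0.9, "1": 0.1},
}

def find_context(past, tree):
    """Return the suffix of `past` that is a context of the tree."""
    for length in range(1, len(past) + 1):
        suffix = past[-length:]
        if suffix in tree:
            return suffix
    raise ValueError("no context of the tree matches this past")

# Only the finite suffix "10" of the past is needed to predict the next symbol;
# the earlier symbols "1110" are irrelevant.
past = "111010"
context = find_context(past, TREE)
print(context)
print(TREE[context])
```

Note that the tree above is incomplete on purpose (a past consisting only of zeros matches no finite context), which is exactly the situation of the interrupted light source example below.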

History

The class of stochastic chains with memory of variable length was introduced by Jorma Rissanen in the article A Universal Data Compression System[1]. This class of stochastic chains was popularized in the statistical and probabilistic community by P. Bühlmann and A. J. Wyner in 1999, in the article Variable Length Markov Chains. Named by Bühlmann and Wyner as “variable length Markov chains” (VLMC), these chains are also known as “variable-order Markov models” (VOM), “probabilistic suffix trees”[2] and “context tree models”[5]. The name “stochastic chains with memory of variable length” seems to have been introduced by Galves and Löcherbach, in 2008, in the article of the same name[6].

Examples

Interrupted light source

Consider a system composed of a lamp, an observer and a door between the two. The lamp has two possible states: on, represented by 1, or off, represented by 0. When the lamp is on, the observer may see the light through the door, depending on the state of the door at that time: open, 1, or closed, 0. The states of the door are independent of the state of the lamp.

Let $(\xi_n)_{n \geq 0}$ be a Markov chain that represents the state of the lamp, with values in $A = \{0, 1\}$ and with a transition probability matrix $Q$. Also, let $(U_n)_{n \geq 0}$ be a sequence of independent random variables that represents the door's states, also taking values in $A$, independent of the chain $(\xi_n)_{n \geq 0}$ and such that

$\mathbb{P}(U_n = 1) = 1 - \varepsilon,$

where $0 < \varepsilon < 1$. Define a new sequence $(X_n)_{n \geq 0}$ such that

$X_n = \xi_n U_n$ for every $n \geq 0$.

In order to predict the next symbol, the observer only needs to identify the last instant at which the lamp was seen on, that is, the largest instant $k$, with $k < n$, such that $X_k = 1$.

Using a context tree, it is possible to represent the past states of the sequence, showing which of them are relevant to identifying the next state.

The stochastic chain $(X_n)_{n \geq 0}$ is, then, a chain with memory of variable length, taking values in $A$ and compatible with the probabilistic context tree $(\tau, p)$, where

$\tau = \{1, 10, 100, 1000, \ldots\} \cup \{0^\infty\}.$
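Under the assumptions of this example, the chain is easy to simulate. The sketch below (all parameter values are arbitrary choices, not taken from the source) generates $X_n = \xi_n U_n$ and recovers the context of the current time, i.e. the string $10^k$ reaching back to the last visible 1.

```python
import random

def simulate(n, q01=0.3, q10=0.4, eps=0.2, seed=7):
    """Generate X_0, ..., X_{n-1} with X_t = xi_t * U_t (arbitrary parameters)."""
    rng = random.Random(seed)
    xi, xs = 1, []
    for _ in range(n):
        u = 1 if rng.random() < 1 - eps else 0   # door open with probability 1 - eps
        xs.append(xi * u)                        # lamp visible only through an open door
        flip = q01 if xi == 0 else q10           # lamp's Markov transition
        xi = 1 - xi if rng.random() < flip else xi
    return xs

def context(xs):
    """Suffix 1 0^k back to the last observed 1 (all zeros if 1 was never seen)."""
    k = len(xs) - 1
    while k >= 0 and xs[k] == 0:
        k -= 1
    return ("1" if k >= 0 else "") + "0" * (len(xs) - 1 - k)

sample = simulate(50)
print("last symbols:", "".join(map(str, sample[-10:])), "context:", context(sample))
```

The `context` function makes the variable-length memory visible: however long the history, only the stretch since the last observed 1 matters.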

Probabilistic properties

Existence

Perfect simulation

Inference in chains with variable length

Given a sample $X_0, \ldots, X_{n-1}$, one can find the appropriate context tree using the following algorithms.

The context algorithm

In the article A Universal Data Compression System[1], Rissanen introduced a consistent algorithm to estimate the probabilistic context tree that generates the data. The algorithm can be summarized in two steps:

  1. Given the sample produced by a chain with memory of variable length, start with the maximal tree whose branches are all the candidate contexts for the sample;
  2. The branches of this tree are then pruned until the smallest tree that is well adapted to the data is obtained. Whether or not to shorten a context is decided through a given gain function, such as the log-likelihood ratio.

Let $X_0, \ldots, X_{n-1}$ be a sample of a finite probabilistic context tree $(\tau, p)$. For any sequence $w$ with $|w| \leq n$, denote by $N_n(w)$ the number of occurrences of the sequence in the sample, i.e.,

$N_n(w) = \sum_{t=0}^{n - |w|} \mathbf{1}\left\{ X_t^{t+|w|-1} = w \right\}.$

Rissanen first built a maximal context candidate, given by $X_{n-K(n)}^{n-1}$, where $K(n) = C \log n$ and $C$ is an arbitrary positive constant. The intuitive reason for the choice of $C \log n$ comes from the impossibility of estimating the probabilities of sequences with length greater than $\log n$ based on a sample of size $n$.

From there, Rissanen shortens the maximal candidate by successively cutting branches, according to a sequence of tests based on the statistical likelihood ratio. More formally, if $N_n(w) > 0$, define the estimator of the transition probability $p$ by

$\hat{p}_n(a \mid w) = \frac{N_n(wa)}{N_n(w)},$

where $wa$ is the sequence $w$ followed by the symbol $a$. If $N_n(w) = 0$, define $\hat{p}_n(a \mid w) = 1/|A|$.
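These empirical quantities are straightforward to compute. Below is a minimal sketch (binary alphabet assumed) of the counter $N_n(w)$ and of the estimator $\hat{p}_n(a \mid w) = N_n(wa)/N_n(w)$, with the uniform fallback $1/|A|$ when the context never occurs.

```python
ALPHABET = "01"

def count(sample, w):
    """N_n(w): number of positions t with X_t ... X_{t+|w|-1} = w."""
    return sum(sample[t:t + len(w)] == w for t in range(len(sample) - len(w) + 1))

def p_hat(sample, a, w):
    """Estimated transition probability of symbol a after context w."""
    n_w = count(sample, w)
    if n_w == 0:
        return 1 / len(ALPHABET)      # convention when the context never occurs
    return count(sample, w + a) / n_w

sample = "0100110100"
print(count(sample, "01"))            # -> 3
print(p_hat(sample, "1", "0"))        # -> 0.5
```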

For $i \geq 1$, define

$\Lambda_n(i) = 2 \sum_{a \in A} N_n\!\left(X_{n-i}^{n-1}\, a\right) \log \frac{\hat{p}_n\!\left(a \mid X_{n-i}^{n-1}\right)}{\hat{p}_n\!\left(a \mid X_{n-i+1}^{n-1}\right)},$

where $X_{n-i}^{n-1} = (X_{n-i}, \ldots, X_{n-1})$ and

$\hat{p}_n\!\left(a \mid X_{n-i+1}^{n-1}\right) = \frac{N_n\!\left(X_{n-i+1}^{n-1}\, a\right)}{N_n\!\left(X_{n-i+1}^{n-1}\right)}.$

Note that $\Lambda_n(i)$ is the log-likelihood ratio statistic for testing the consistency of the sample with the probabilistic context tree $\tau$ against the alternative that it is consistent with $\tau'$, where $\tau$ and $\tau'$ differ only by a set of sibling nodes.

The length of the current estimated context is defined by

$\hat{\ell}_n = \max\left\{ i \leq K(n) : \Lambda_n(i) > C \log n \right\},$

where $C$ is any positive constant. Finally, by Rissanen[1], there is the following consistency result: given a sample $X_0, \ldots, X_{n-1}$ of a finite probabilistic context tree $(\tau, p)$,

$\mathbb{P}\left( \hat{\ell}_n \neq \ell\left(X_{-\infty}^{n-1}\right) \right) \longrightarrow 0$

when $n \to \infty$, where $\ell\left(X_{-\infty}^{n-1}\right)$ is the length of the true context.
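The pruning step can be sketched as follows. The statistic below follows the likelihood-ratio shape described above, comparing a context against its shorter suffix; treat the exact form and the threshold constant as assumptions of this sketch, not as Rissanen's precise formulation.

```python
import math

ALPHABET = "01"

def count(sample, w):
    """N_n(w): occurrences of the string w in the sample."""
    return sum(sample[t:t + len(w)] == w for t in range(len(sample) - len(w) + 1))

def p_hat(sample, a, w):
    n_w = count(sample, w)
    return count(sample, w + a) / n_w if n_w else 1 / len(ALPHABET)

def lam(sample, w):
    """2 * sum_a N_n(wa) * log( p_hat(a|w) / p_hat(a|suf(w)) ), with suf(w) = w[1:]."""
    total = 0.0
    for a in ALPHABET:
        n_wa = count(sample, w + a)
        if n_wa:
            total += n_wa * math.log(p_hat(sample, a, w) / p_hat(sample, a, w[1:]))
    return 2 * total

def keep_branch(sample, w, c=1.0):
    """Keep the longer context w only when the test statistic exceeds c * log n."""
    return lam(sample, w) > c * math.log(len(sample))

# In "010101..." the suffix "1" already determines the next symbol, so the
# longer context "01" is pruned; in "100100..." the extra symbol of "00"
# (versus "0") genuinely changes the prediction, so that branch is kept.
print(keep_branch("01" * 200, "01"))    # -> False
print(keep_branch("100" * 133, "00"))   # -> True
```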

Bayesian Information Criterion (BIC)

The estimator of the context tree by BIC with a penalty constant $c > 0$ is defined as

$\hat{\tau}_{\mathrm{BIC}} = \underset{\tau \in \mathcal{T}_n}{\arg\max} \left\{ \log L_\tau\!\left(X_1^n\right) - c\, \mathrm{df}(\tau) \log n \right\},$

where $L_\tau\!\left(X_1^n\right)$ is the maximum likelihood of the sample under the tree $\tau$ and $\mathrm{df}(\tau)$ is the number of free parameters of $\tau$.
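A sketch of how a BIC score can be computed for a candidate tree (the penalty $\mathrm{df}(\tau) = |\tau|\,(|A| - 1)$ and the constant $c$ are the usual BIC ingredients; the details here are assumptions of this sketch): the sample's log-likelihood under each context's empirical transition probabilities, minus $c\, \mathrm{df} \log n$.

```python
import math

ALPHABET = "01"

def find_context(past, tau):
    """Shortest suffix of `past` that belongs to the candidate tree tau."""
    for l in range(1, len(past) + 1):
        if past[-l:] in tau:
            return past[-l:]
    raise ValueError("tau does not cover this past")

def log_likelihood(sample, tau):
    """Maximized log-likelihood: sum over contexts of n_wa * log(n_wa / n_w)."""
    k = max(map(len, tau))
    counts = {}
    for t in range(k, len(sample)):
        w = find_context(sample[:t], tau)
        trans = counts.setdefault(w, {})
        trans[sample[t]] = trans.get(sample[t], 0) + 1
    total = 0.0
    for trans in counts.values():
        n_w = sum(trans.values())
        for n_wa in trans.values():
            total += n_wa * math.log(n_wa / n_w)
    return total

def bic_score(sample, tau, c=0.5):
    df = len(tau) * (len(ALPHABET) - 1)   # free parameters of the tree
    return log_likelihood(sample, tau) - c * df * math.log(len(sample))

# The periodic sample "100100..." is generated by the tree {1, 10, 00}, which
# should beat the plain order-1 Markov tree {0, 1} in BIC score.
sample = "100" * 133
print(bic_score(sample, {"1", "10", "00"}) > bic_score(sample, {"0", "1"}))   # -> True
```

In practice the maximization over all candidate trees is done efficiently by dynamic programming on the suffix tree of the sample rather than by scoring each tree separately, as above.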

Smallest maximizer criterion (SMC)

The smallest maximizer criterion[3] is calculated by selecting the smallest tree $\tau$ of a set of champion trees $\mathcal{C}$ such that

$\lim_{n \to \infty} \frac{\log L_\tau\!\left(X_1^n\right)}{n} = \max_{\tau' \in \mathcal{C}} \lim_{n \to \infty} \frac{\log L_{\tau'}\!\left(X_1^n\right)}{n}.$

See also

  •  Markov chain
  •  Stochastic process

Referências

  1. a b c d Rissanen, J (Sep 1983). «A Universal Data Compression System». IEEE Transactions on Information Theory. 29 (5): 656–664. doi:10.1109/TIT.1983.1056741
  2. a b Bejerano, G (2001). «Variations on probabilistic suffix trees: statistical modeling and prediction of protein families». Bioinformatics. 17 (1): 23–43. doi:10.1093/bioinformatics/17.1.23
  3. a b Galves, A; Galves, C; García, J; Garcia, N L; Leonardi, F (2012). «Context tree selection and linguistic rhythm retrieval from written texts». The Annals of Applied Statistics. 6 (5): 186-209. doi:10.1214/11-AOAS511 
  4. Dubnov, S; Assayag, G; Lartillot, O; Bejerano, G (2003). «Using machine-learning methods for musical style modeling». Computer. 36 (10): 73–80. doi:10.1109/MC.2003.1236474
  5. Galves, A; Garivier, A; Gassiat, E (2012). «Joint estimation of intersecting context tree models». Scandinavian Journal of Statistics. 40 (2): 344-362. doi:10.1111/j.1467-9469.2012.00814.x 
  6. Galves, A; Löcherbach, E (2008). «Stochastic chains with memory of variable length». TICSP Series. 38: 117-133 

Template:Stochastic processes

Category:Stochastic processes Category:Statistical models