Model

Here, the equations of the mixed LiNGAM are presented and the corresponding command-line options are described.

In the mixed LiNGAM for two random variables \(x_{1}\) and \(x_{2}\), the relationship between these variables is modeled as:

\[\begin{split}x_{1}^{(i)} &= \mu_{1} + \tilde{\mu}_{1}^{(i)} + e_{1}^{(i)} \\ x_{2}^{(i)} &= \mu_{2} + \tilde{\mu}_{2}^{(i)} + b_{21}x_{1}^{(i)} + e_{2}^{(i)}\end{split}\]

or

\[\begin{split}x_{1}^{(i)} &= \mu_{1} + \tilde{\mu}_{1}^{(i)} + b_{12}x_{2}^{(i)} + e_{1}^{(i)} \\ x_{2}^{(i)} &= \mu_{2} + \tilde{\mu}_{2}^{(i)} + e_{2}^{(i)}.\end{split}\]

The former pair of equations, denoted by \({\cal M}_{1}\), models causation from \(x_{1}\) to \(x_{2}\), while the latter, \({\cal M}_{2}\), represents the opposite direction. The intercepts \(\mu_{1}\) and \(\mu_{2}\), the individual specific effects \(\tilde{\mu}_{1}^{(i)}\) and \(\tilde{\mu}_{2}^{(i)}\), the regression coefficients \(b_{21}\) and \(b_{12}\), and the error variables \(e_{1}^{(i)}\) and \(e_{2}^{(i)}\) are unknown and must be estimated.
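
For concreteness, the following is a minimal sketch of generating synthetic data from \({\cal M}_{1}\) in Python. The parameter values, the Gaussian individual specific effects, and the Laplace errors are illustrative choices only, not the software's defaults:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical parameter values for M1 (x1 -> x2).
    mu1, mu2, b21 = 1.0, -0.5, 0.8

    # Individual specific effects, drawn here from a correlated Gaussian.
    L = np.array([[1.0, 0.5], [0.5, 1.0]])
    mu_tilde = rng.multivariate_normal([0.0, 0.0], L, size=n)

    # Error variables (Laplace noise, one of the options described later).
    e1 = rng.laplace(0.0, 1.0 / np.sqrt(2), size=n)
    e2 = rng.laplace(0.0, 1.0 / np.sqrt(2), size=n)

    x1 = mu1 + mu_tilde[:, 0] + e1
    x2 = mu2 + mu_tilde[:, 1] + b21 * x1 + e2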

Causal inference amounts to choosing one of the above models based on the observations \({\cal D}=\{(x_{1}^{(i)}, x_{2}^{(i)})\}_{i=1}^{n}\). To do this, we put prior distributions on the unknown variables and perform Bayesian model selection. The marginal likelihoods of the two models are:

\[\begin{split}p({\cal D}|{\cal M}_{1}) &= \int p({\cal D}|\mu, b_{21}, \tilde{\mu}, e,{\cal M}_{1})p(\mu)p(b_{21})p(\tilde{\mu})p(e)d\mu d b_{21} d\tilde{\mu} de \\ p({\cal D}|{\cal M}_{2}) &= \int p({\cal D}|\mu, b_{12}, \tilde{\mu}, e,{\cal M}_{2})p(\mu)p(b_{12})p(\tilde{\mu})p(e)d\mu d b_{12} d\tilde{\mu} de,\end{split}\]

where \(\tilde{\mu}\equiv\{\tilde{\mu}_{1}^{(i)}, \tilde{\mu}_{2}^{(i)}\}_{i=1}^{n}\) and \(e\equiv\{e_{1}^{(i)}, e_{2}^{(i)}\}_{i=1}^{n}\). In the BMLiNGAM software, the marginal likelihood is calculated by a naive Monte Carlo procedure.
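
A naive Monte Carlo estimate of a marginal likelihood draws parameters from the prior and averages the likelihood over the draws. The following generic sketch illustrates the idea (it is not the BMLiNGAM implementation); sample_prior and loglike are hypothetical callables:

    import numpy as np

    def naive_mc_log_marginal_likelihood(sample_prior, loglike, n_mc=10000):
        """Estimate log p(D|M) = log E_prior[p(D|theta, M)] by prior sampling.

        sample_prior() -> one parameter draw theta from the prior (hypothetical)
        loglike(theta) -> log p(D|theta, M)                        (hypothetical)
        """
        logls = np.array([loglike(sample_prior()) for _ in range(n_mc)])
        m = logls.max()  # log-mean-exp for numerical stability
        return m + np.log(np.mean(np.exp(logls - m)))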

When a set of prior distributions on all of the unknown variables is fixed, we only need to compare these two quantities. However, appropriate prior distributions are difficult to determine a priori. Thus we apply various sets of prior distributions to \({\cal M}_{1}\) and \({\cal M}_{2}\) and choose the model with the maximum marginal likelihood.

Specifically, the procedure taken in this software (and the original paper of the mixed LiNGAM) is as follows. For \(m\) sets of prior distributions, let us denote the two models with the \(j\)-th prior set (\(1\leq j\leq m\)) as \({\cal M}_{1,j}\) and \({\cal M}_{2,j}\). The optimal models for both causal directions are:

\[\begin{split}{\cal M}_{1*} &= {\rm arg}\max_{{\cal M}_{1,j}}p({\cal D}|{\cal M}_{1,j}) \\ {\cal M}_{2*} &= {\rm arg}\max_{{\cal M}_{2,j}}p({\cal D}|{\cal M}_{2,j}).\end{split}\]

Then, \(p({\cal D}|{\cal M}_{1*})\) and \(p({\cal D}|{\cal M}_{2*})\) are compared, and the model with the larger marginal likelihood is finally chosen:

\[{\cal M}_{*}={\rm arg}\max_{{\cal M}_{1*},{\cal M}_{2*}}(p({\cal D}|{\cal M}_{1*}), p({\cal D}|{\cal M}_{2*})).\]

Note

Obviously, it holds that \({\cal M}_{*}={\rm arg}\max_{{\cal M}_{c,j} (c\in\{1,2\}, j=1,\cdots,m)}p({\cal D}|{\cal M}_{c,j})\).

When a parametrized family of prior distributions is assumed, as in this software, this procedure is referred to as hyperparameter optimization.
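
In code, the whole procedure reduces to a maximization over all (causal direction, prior set) pairs. A minimal sketch, where log_marginal is a hypothetical function returning \(\log p({\cal D}|{\cal M}_{c,j})\):

    def select_model(log_marginal, prior_sets):
        """Pick the (direction c, prior set j) pair with the largest log marginal likelihood."""
        candidates = [(c, j) for c in (1, 2) for j in range(len(prior_sets))]
        logmls = {(c, j): log_marginal(c, prior_sets[j]) for (c, j) in candidates}
        best = max(logmls, key=logmls.get)  # equals M_* in the note above
        return best, logmls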

Posterior probability of models

The posterior probability of a model given the data is calculated by normalizing the joint probability:

\[P({\cal M}_{c,j}|{\cal D})= \frac{P({\cal D}, {\cal M}_{c,j})}{P({\cal D})}= \frac{P({\cal D}|{\cal M}_{c,j})P({\cal M}_{c,j})}{P({\cal D})},\]

where \(P({\cal D})=\sum_{c,j}P({\cal D}, {\cal M}_{c,j})\). Since we assume a uniform prior \(P({\cal M}_{c,j})\) over the candidate models, the chance level, i.e., the posterior probability obtained by selecting a model at random, is the reciprocal of the number of candidate models.
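
Under the uniform prior over models, this normalization can be computed directly from the log marginal likelihoods. A small sketch:

    import numpy as np

    def model_posteriors(log_marginals):
        """Posterior model probabilities P(M_{c,j}|D) under a uniform model prior.

        log_marginals: array of log p(D|M_{c,j}) over all candidate models.
        """
        log_marginals = np.asarray(log_marginals, dtype=float)
        w = np.exp(log_marginals - log_marginals.max())  # avoid overflow
        return w / w.sum()                               # chance level = 1 / len(w)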

Standardization

Before describing the prior distributions, we should mention standardization of the data. When --standardize_on is given to bmlingam-causality, the observations are standardized to have mean 0 and standard deviation 1. Although this preprocessing was not employed in the original paper, it is expected that, after standardization, causal inference does not depend on the scale of the observed variables.
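
A minimal sketch of this preprocessing (column-wise standardization; the actual implementation may differ in details such as the variance estimator):

    import numpy as np

    def standardize(X):
        """Standardize each column to mean 0 and standard deviation 1."""
        X = np.asarray(X, dtype=float)
        return (X - X.mean(axis=0)) / X.std(axis=0)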

Prior distributions

In the following subsections, we describe the prior distributions and the options of bmlingam-causality that define the possible hyperparameter values.

Scale parameters

There are two types of scale parameters. The first is the common scale parameter \(\tau^{cmmn}_{1,2}\), which controls the variances of the prior distributions on the intercepts, regression coefficients, and error variables. We set \(\tau^{cmmn}_{1}=max_{c}\times\hat{\rm var}(x_{1})\) and \(\tau^{cmmn}_{2}=max_{c}\times\hat{\rm var}(x_{2})\), where \(\hat{\rm var}(x)\) denotes the sample variance. \(max_{c}\) is specified with the option max_c.

The other scale parameters, denoted by \(\tau^{indvdl}_{1}\) and \(\tau^{indvdl}_{2}\), are used for the prior distribution on the individual specific effects (described later). We set \(\tau^{indvdl}_{1}=c\times\hat{\rm var}(x_{1})\) and \(\tau^{indvdl}_{2}=c\times\hat{\rm var}(x_{2})\), where \(c\) is a hyperparameter specified with the option cs. This option is a list of nonnegative real values, each of which will be tested. Default is 0, .2, .4, .6, .8.
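
A sketch of how these scale parameters are derived from the data, assuming np.var as the sample variance estimator:

    import numpy as np

    def scale_params(x1, x2, max_c, c):
        """Compute tau_cmmn and tau_indvdl as described above."""
        tau_cmmn = (max_c * np.var(x1), max_c * np.var(x2))
        tau_indvdl = (c * np.var(x1), c * np.var(x2))
        return tau_cmmn, tau_indvdl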

Intercepts

If --fix_mu_zero_on is given to bmlingam-causality, \(\mu_{1}=\mu_{2}=0\) (constant). This option is reasonable when standardization is applied. Otherwise (--fix_mu_zero_off), normal distributions are assumed as priors on \(\mu_{1,2}\):

\[\begin{split}\mu_{1} &\sim N(0, \tau^{cmmn}_{1}) \\ \mu_{2} &\sim N(0, \tau^{cmmn}_{2}).\end{split}\]

Regression coefficient

The prior on the regression coefficient \(b_{21}\) or \(b_{12}\) is a normal distribution:

\[\begin{split}b_{21} &\sim N(0, \tau^{cmmn}_{2}) \\ b_{12} &\sim N(0, \tau^{cmmn}_{1}).\end{split}\]

Scale of error variables

If prior_scale=tr_normal, the priors on the scales of the error variables are:

\[\begin{split}\tilde{h_{1}} &\sim N(0,\tau^{cmmn}_{1}) \\ \tilde{h_{2}} &\sim N(0,\tau^{cmmn}_{2}) \\ h_{1} &= |\tilde{h}_{1}| \\ h_{2} &= |\tilde{h}_{2}|.\end{split}\]

If prior_scale=log_normal,

\[\begin{split}\log h_{1} &\sim N(0,\tau^{cmmn}_{1}) \\ \log h_{2} &\sim N(0,\tau^{cmmn}_{2}).\end{split}\]
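
A sketch of drawing the error scale \(h\) under the two options, interpreting the second argument of \(N(0,\tau)\) as a variance (an assumption; the software may parametrize it differently):

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_error_scale(tau_cmmn, prior_scale="tr_normal"):
        """Draw one error scale h for a given tau_cmmn."""
        if prior_scale == "tr_normal":
            return abs(rng.normal(0.0, np.sqrt(tau_cmmn)))     # h = |h~|, h~ ~ N(0, tau)
        if prior_scale == "log_normal":
            return np.exp(rng.normal(0.0, np.sqrt(tau_cmmn)))  # log h ~ N(0, tau)
        raise ValueError(prior_scale)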

Error variables

If dist_noise=laplace,

\[\begin{split}p(e_{1}) &= Laplace(0, h_{1}/\sqrt{2}) \\ p(e_{2}) &= Laplace(0, h_{2}/\sqrt{2}).\end{split}\]

If dist_noise=gg,

\[\begin{split}p(e_{1}) &= GG(1, m_{1}^{err}, \beta^{err}) \\ p(e_{2}) &= GG(1, m_{2}^{err}, \beta^{err}),\end{split}\]

where \(GG\) denotes the generalized Gaussian distribution (see below). Possible values of the shape parameter \(\beta^{err}\) are specified with the option betas_noise. Default is .25,.5,.75,1. (comma-separated float values). The scale parameter \(m_{1}^{err}\) (\(m_{2}^{err}\)) is determined such that the standard deviation of \(e_{1}\) (\(e_{2}\)) is \(h_{1}\) (\(h_{2}\)), as in the Laplace case.
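
As a quick check of the Laplace parametrization, a scale of \(h/\sqrt{2}\) gives variance \(2(h/\sqrt{2})^{2}=h^{2}\), i.e., standard deviation \(h\):

    import numpy as np

    rng = np.random.default_rng(2)

    def sample_laplace_noise(h, n):
        """Laplace(0, h/sqrt(2)) noise, which has standard deviation h."""
        return rng.laplace(0.0, h / np.sqrt(2), size=n)

    print(sample_laplace_noise(1.5, 100_000).std())  # approximately 1.5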

Individual specific effects

Individual specific effects implicitly model the correlation of the observed variables. To do this, a correlation matrix \(L\) is introduced, where \(L_{pp}=1\) (\(p=1,2\)) and \(L_{12}=L_{21}=\sigma_{12}\). \(\sigma_{12}\) determines the strength of the correlation. For hyperparameter optimization, \(\sigma_{12}\) is varied over \(-0.9,-0.7,-0.5,-0.3,0,.3,.5,.7,.9\), which is specified by --L_cov_21s.

The prior distribution on \([\tilde{\mu}_{1}^{(i)}, \tilde{\mu}_{2}^{(i)}]\) is chosen from the following options (a sampling sketch is given after this list):

  • T distribution (default, --prior_indvdls=t):

    \[\left[ \tilde{\mu}_{1}^{(i)}/\sqrt{\tau_{1}^{indvdl}}, \tilde{\mu}_{2}^{(i)}/\sqrt{\tau_{2}^{indvdl}} \right] \sim T_{\nu}(0, L_{t(\nu)}),\]

    where \(\nu\) is the degrees of freedom of the distribution. Default is \(\nu=8\). \(L_{t(\nu)}\) is proportional to \(L\) and scaled such that \({\rm var}(\tilde{\mu}_{1, 2}^{(i)}/\sqrt{\tau_{1, 2}^{indvdl}})=1\). Thus, the standard deviation of \(\tilde{\mu}_{1}^{(i)}\) (\(\tilde{\mu}_{2}^{(i)}\)) is \(\sqrt{\tau_{1}^{indvdl}}\) (\(\sqrt{\tau_{2}^{indvdl}}\)).

  • Normal distribution (--prior_indvdls=gauss):

    \[\left[ \tilde{\mu}_{1}^{(i)}/\sqrt{\tau_{1}^{indvdl}}, \tilde{\mu}_{2}^{(i)}/\sqrt{\tau_{2}^{indvdl}} \right] \sim N(0, L).\]

  • Generalized Gaussian distribution (--prior_indvdls=gg):

    \[\left[ \tilde{\mu}_{1}^{(i)}/\sqrt{\tau_{1}^{indvdl}}, \tilde{\mu}_{2}^{(i)}/\sqrt{\tau_{2}^{indvdl}} \right] \sim GG(L, m^{indvdl}, \beta^{indvdl}),\]

    where \(m^{indvdl}\) is determined such that \({\rm var}(\tilde{\mu}_{1, 2}^{(i)}/\sqrt{\tau_{1, 2}^{indvdl}})=1\). \(\beta^{indvdl}\) is varied during hyperparameter optimization and its possible values are set with betas_coeff (default .25,.5,.75,1.).
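
The following sketch draws the (rescaled) individual specific effects for the t and Gaussian options. The scaling \(L_{t(\nu)}=L(\nu-2)/\nu\) is inferred from the unit-variance requirement above and is an assumption about the implementation:

    import numpy as np

    rng = np.random.default_rng(3)

    def sample_indvdl(n, sigma12, tau1, tau2, prior="t", nu=8):
        """Draw n samples of [mu~1, mu~2] with std sqrt(tau1), sqrt(tau2)."""
        L = np.array([[1.0, sigma12], [sigma12, 1.0]])
        if prior == "gauss":
            z = rng.multivariate_normal(np.zeros(2), L, size=n)
        elif prior == "t":
            # A multivariate t with scale matrix L*(nu-2)/nu has unit marginal variances.
            L_t = L * (nu - 2.0) / nu
            g = rng.chisquare(nu, size=n) / nu
            z = rng.multivariate_normal(np.zeros(2), L_t, size=n) / np.sqrt(g)[:, None]
        else:
            raise ValueError(prior)
        # Rescale so that std(mu~_p) equals sqrt(tau_p^indvdl).
        return z * np.sqrt(np.array([tau1, tau2]))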

Generalized Gaussian distribution

The density function of the \(p\)-dimensional Generalized Gaussian distribution with mean 0 is defined as [FLJY2013]:

\[\begin{split}GG(M,m,\beta) &= p(x|M,m,\beta) \\ &= \frac{1}{|M|^{1/2}}h_{m,\beta}(x'M^{-1}x) \\ h_{m,\beta}(y) &= \frac{\beta\Gamma(p/2)}{\pi^{p/2}\Gamma(p/(2\beta))2^{p/(2\beta)}} \frac{1}{m^{p/2}} \exp\left(-\frac{y^{\beta}}{2m^{\beta}}\right),\end{split}\]

where \(M\) is the normalized scale matrix such that \({\rm tr}(M)=p\), \(m\) is the scale parameter, and \(\beta\) is the shape parameter.
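
A direct transcription of this density into Python (log scale) is given below, which is useful for checking the normalization; setting \(\beta=1\) and \(m=1\) recovers the multivariate Gaussian \(N(0, M)\):

    import numpy as np
    from scipy.special import gammaln

    def log_gg_density(x, M, m, beta):
        """Log density of the p-dimensional GG(M, m, beta) with mean 0."""
        x = np.atleast_1d(np.asarray(x, dtype=float))
        p = x.shape[-1]
        y = np.einsum('...i,ij,...j->...', x, np.linalg.inv(M), x)  # x' M^{-1} x
        log_h = (np.log(beta) + gammaln(p / 2.0)
                 - (p / 2.0) * np.log(np.pi)
                 - gammaln(p / (2.0 * beta))
                 - (p / (2.0 * beta)) * np.log(2.0)
                 - (p / 2.0) * np.log(m)
                 - y ** beta / (2.0 * m ** beta))
        return -0.5 * np.log(np.linalg.det(M)) + log_h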

[FLJY2013] Frédéric Pascal, Lionel Bombrun, Jean-Yves Tourneret, and Yannick Berthoumieu. Parameter Estimation for Multivariate Generalized Gaussian Distributions. IEEE Transactions on Signal Processing, 61(23), 2013.