From Surf Wiki (app.surf) — the open knowledge base
Generalized chi-squared distribution
Kind of probability distribution
Parameters: \boldsymbol{w}, vector of weights of the noncentral chi-square components; \boldsymbol{k}, vector of degrees of freedom of the noncentral chi-square components; \boldsymbol{\lambda}, vector of non-centrality parameters of the chi-square components; s, scale of the normal term; m, offset.
In probability theory and statistics, the generalized chi-squared distribution (or generalized chi-square distribution) is the distribution of a quadratic function of a multinormal variable (normal vector), or a linear combination of different normal variables and squares of normal variables. Equivalently, it is also a linear sum of independent noncentral chi-square variables and a normal variable. There are several other such generalizations for which the same term is sometimes used; some of them are special cases of the family discussed here, for example the gamma distribution.
Definition
The generalized chi-squared variable may be described in multiple ways. One is to write it as a weighted sum of independent noncentral chi-square variables {{\chi}'}^2 and a standard normal variable z:
:\tilde{\chi}(\boldsymbol{w}, \boldsymbol{k}, \boldsymbol{\lambda},s,m)=\sum_i w_i {{\chi}'}^2 (k_i,\lambda_i) + sz+m.
Here the parameters are the weights w_i, the degrees of freedom k_i and non-centralities \lambda_i of the constituent non-central chi-squares, and the coefficients s and m of the normal. Some important special cases of this have all weights w_i of the same sign, or have central chi-squared components, or omit the normal term.
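The weighted-sum definition above translates directly into a sampler. The following is a minimal sketch (not from the original article; the helper name `sample_gx2` is my own) using NumPy's noncentral chi-square generator:

```python
import numpy as np

def sample_gx2(w, k, lam, s, m, n, seed=None):
    """Draw n samples of sum_i w_i * chi'^2(k_i, lambda_i) + s*z + m."""
    rng = np.random.default_rng(seed)
    # one noncentral chi-square draw per component, weighted and summed
    total = sum(wi * rng.noncentral_chisquare(ki, li, size=n)
                for wi, ki, li in zip(w, k, lam))
    return total + s * rng.standard_normal(n) + m
```

The sample mean should approach \sum_i w_i(k_i+\lambda_i) + m, the mean of the distribution.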
Since a non-central chi-squared variable is a sum of squares of normal variables with different means, the generalized chi-square variable is also defined as a sum of squares of independent normal variables, plus an independent normal variable: that is, a quadratic in normal variables.
Another equivalent way is to formulate it as a quadratic form of a normal vector \boldsymbol{x}:
:\tilde{\chi}=q(\boldsymbol{x}) = \boldsymbol{x}' \mathbf{Q_2} \boldsymbol{x} + \boldsymbol{q_1}' \boldsymbol{x} + q_0.
Here \mathbf{Q_2} is a matrix, \boldsymbol{q_1} is a vector, and q_0 is a scalar. These, together with the mean \boldsymbol{\mu} and covariance matrix \mathbf{\Sigma} of the normal vector \boldsymbol{x}, parameterize the distribution.
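As a quick illustration of the quadratic-form view, the mean of q(\boldsymbol{x}) follows from the standard identity E[\boldsymbol{x}' \mathbf{Q_2} \boldsymbol{x}] = \operatorname{tr}(\mathbf{Q_2 \Sigma}) + \boldsymbol{\mu}' \mathbf{Q_2} \boldsymbol{\mu}. A Monte Carlo sketch, with illustrative parameter values of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) parameters of the quadratic form and the normal vector
Q2 = np.array([[2.0, 0.5], [0.5, 1.0]])     # symmetric quadratic-term matrix
q1 = np.array([1.0, -1.0])                  # linear-term vector
q0 = 0.5                                    # constant term
mu = np.array([0.3, -0.2])                  # mean of x
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])  # covariance of x

# Monte Carlo draws of q(x) = x'Q2 x + q1'x + q0 for x ~ N(mu, Sigma)
x = rng.multivariate_normal(mu, Sigma, size=200_000)
q = np.einsum('ni,ij,nj->n', x, Q2, x) + x @ q1 + q0

# E[q(x)] = tr(Q2 Sigma) + mu'Q2 mu + q1'mu + q0
mean_theory = np.trace(Q2 @ Sigma) + mu @ Q2 @ mu + q1 @ mu + q0
```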
For the most general case, a reduction towards a common standard form can be made by using a representation of the following form:
:X=(z+a)^\mathrm T A(z+a)+c^\mathrm T z= (x+b)^\mathrm T D(x+b)+d^\mathrm T x+e ,
where D is a diagonal matrix and where x represents a vector of uncorrelated standard normal random variables.
Parameter conversions
A generalized chi-square variable or distribution can be parameterized in two ways. The first is in terms of the weights w_i, the degrees of freedom k_i and non-centralities \lambda_i of the constituent non-central chi-squares, and the coefficients s and m of the added normal term. The second parameterization uses the quadratic form of a normal vector, where the parameters are the matrix \mathbf{Q_2}, the vector \boldsymbol{q_1}, and the scalar q_0, together with the mean \boldsymbol{\mu} and covariance matrix \mathbf{\Sigma} of the normal vector.
The parameters of the first expression (in terms of non-central chi-squares, a normal and a constant) can be calculated in terms of the parameters of the second expression (quadratic form of a normal vector). There exists open-source Matlab code to convert from one set of parameters to the other.
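The conversion itself is elementary linear algebra: standardize \boldsymbol{x}, diagonalize the quadratic part, and complete the square along each eigen-direction. Below is a Python sketch of this standard procedure (the function name and the zero-eigenvalue tolerance are my own; the cited Matlab code is the reference implementation):

```python
import numpy as np

def quad_to_gx2(Q2, q1, q0, mu, Sigma):
    """Convert quadratic-form parameters (Q2, q1, q0, mu, Sigma) of
    q(x) = x'Q2 x + q1'x + q0, x ~ N(mu, Sigma), into weighted-sum
    parameters (w, k, lam, s, m). Each nonzero eigenvalue yields a
    1-dof noncentral chi-square component."""
    S = np.linalg.cholesky(Sigma)         # standardize: x = mu + S z
    d, R = np.linalg.eigh(S.T @ Q2 @ S)   # diagonalize the quadratic part
    b = R.T @ S.T @ (2 * Q2 @ mu + q1)    # linear coefficients in y = R'z
    c = mu @ Q2 @ mu + q1 @ mu + q0       # constant term
    nz = np.abs(d) > 1e-12
    w = d[nz]                             # weights = nonzero eigenvalues
    k = np.ones_like(w)                   # one degree of freedom each
    lam = (b[nz] / (2 * w)) ** 2          # noncentralities (complete the square)
    s = np.linalg.norm(b[~nz])            # leftover linear part -> normal term
    m = c - np.sum(w * lam)               # offset absorbs the completed squares
    return w, k, lam, s, m
```

Components with equal weights can afterwards be merged into single chi-square terms with higher degrees of freedom.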
Support and tails
When s=0 and the w_i are all positive or all negative, the quadratic function is ellipsoidal. The distribution then starts from the point m at one end, which is called a finite tail; the other end tails off to +\infty or -\infty respectively, which is called an infinite tail. When the w_i have mixed signs, and/or there is a normal term (s \neq 0), both tails are infinite and the support is the entire real line. The methods to compute the CDF and PDF of the distribution behave differently in finite vs. infinite tails (see the table below for the best method to use in each case).
Computing the PDF/CDF/inverse CDF/random numbers
The probability density, cumulative distribution, and inverse cumulative distribution functions of a generalized chi-squared variable do not have simple closed-form expressions, but several methods exist to compute them numerically: Ruben's method, Imhof's method, the inverse fast Fourier transform (IFFT) method, the ray method, and Pearson's approximation. Numerical algorithms and computer code (Fortran and C, Matlab, R, Python, Julia) have been published that implement some of these methods to compute the PDF, CDF, and inverse CDF, and to generate random numbers. The following table shows the best methods to use to compute the CDF and PDF for the different parts of the generalized chi-square distribution in different cases.
| \tilde{\chi} type | part | best cdf/pdf method(s) |
|---|---|---|
| ellipse: w_i same sign, s=0 | body | Ruben, Imhof, IFFT, ray |
| | finite tail | Ruben, ray (if \lambda_i=0), ellipse |
| | infinite tail | Ruben, ray, tail |
| not ellipse: w_i mixed signs, and/or s \neq 0 | body | Imhof, IFFT, ray |
| | infinite tails | ray, tail |
| sphere: non-central \chi^2 (only one term) | body | Matlab's `ncx2cdf`/`ncx2pdf` |
| | finite tail | `ncx2cdf`/`ncx2pdf`, ellipse |
| | infinite tail | `ncx2pdf`, ray, tail |
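As one example of these methods, Imhof-type numerical inversion integrates the characteristic function of the distribution, \varphi(t) = e^{imt - s^2 t^2/2} \prod_j (1-2iw_j t)^{-k_j/2} e^{i w_j \lambda_j t/(1-2iw_j t)}, using the Gil-Pelaez formula. A rough Python sketch, not a production implementation (the truncation point is a crude choice of my own; real codes control the truncation error carefully):

```python
import numpy as np
from scipy.integrate import quad

def gx2_cdf(x, w, k, lam, s, m):
    """CDF of sum_i w_i chi'^2(k_i, lam_i) + s*z + m by numerically
    inverting the characteristic function (Imhof/Davies-style sketch)."""
    def phi(t):  # characteristic function of the distribution
        r = np.exp(1j * m * t - 0.5 * (s * t) ** 2)
        for wi, ki, li in zip(w, k, lam):
            d = 1 - 2j * wi * t
            r *= np.exp(1j * wi * li * t / d) / d ** (ki / 2)
        return r
    # Gil-Pelaez: F(x) = 1/2 - (1/pi) int_0^inf Im[phi(t) e^{-itx}]/t dt;
    # truncated at t = 100, adequate here since the integrand decays polynomially
    integrand = lambda t: (phi(t) * np.exp(-1j * t * x)).imag / t
    val, _ = quad(integrand, 0, 100, limit=500)
    return 0.5 - val / np.pi
```

With a single component (w=1, k=2, \lambda=0, s=0, m=0) this reduces to the ordinary \chi^2(2) CDF, which gives a simple sanity check.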
Asymptotic expressions in the tails
Asymptotic expressions for the PDF and CDF of the distribution in the lower or upper infinite tail are given by the infinite-tail approximation:
:\lim_{x \to \pm \infty} f(x) = \frac{a}{\left| w_* \right|} f_*\!\left(\frac{x}{w_*}\right),
:\lim_{x \to -\infty} F(x) = \lim_{x \to \infty} \bar{F}(x) = a \, \bar{F}_*\!\left(\frac{x}{w_*}\right) = a \, Q_{k_*/2}\!\left(\sqrt{\lambda_*},\sqrt{x/w_*}\right),
:\text{where the constant } a = e^{\frac{m}{2w_*} + \frac{s^2}{8w_*^2}} \prod_{j \neq *} \frac{\exp\!\left(\frac{\lambda_j w_j}{2(w_*-w_j)}\right)}{\left(1-\frac{w_j}{w_*} \right)^{k_j/2}}.
Here, w_* is the largest positive or negative weight if we are looking at the upper or lower tail respectively, and k_* and \lambda_* are its corresponding degrees of freedom and non-centrality. f_* and F_* are the PDF and CDF of the noncentral chi-square distribution with parameters k_* and \lambda_* (\bar{F}_* = 1 - F_* is its complementary CDF), and Q is the Marcum Q-function.
In the far tails, these expressions can be further simplified to ones that are then identical for the pdf f(x) and the tail CDF p(x) (the CDF at a point in the lower tail, or the complementary CDF at a point in the upper tail):

:f(x) \approx p(x) \approx \begin{cases} \left(\tfrac{x}{w_*}\right)^{\tfrac{k_*-2}{2}} e^{-\tfrac{x}{2w_*}}, & \text{if } \lambda_*=0, \\[1ex] \left(\tfrac{x}{w_*}\right)^{\tfrac{k_*-3}{4}} e^{-\tfrac{x}{2w_*} + \sqrt{\lambda_* x/w_*}}, & \text{if } \lambda_*>0. \end{cases}
Here again, w_* is the largest positive or negative weight if we are looking at the upper or lower tail respectively.
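For a single central chi-square component (w_*=1, k_*=4, \lambda_*=0), the far-tail expression reduces to x^{(k_*-2)/2} e^{-x/2} up to a constant factor. A quick numerical check, of my own construction, that the ratio of the exact survival function to this expression stabilizes in the far tail:

```python
import numpy as np
from scipy.stats import chi2

# Single central chi-square component: w_* = 1, k_* = 4, lambda_* = 0.
# Far-tail expression, up to a constant: x^((k-2)/2) * exp(-x/2).
k = 4
approx = lambda x: x ** ((k - 2) / 2) * np.exp(-x / 2)

# Ratio of the exact survival function to the approximation at two
# far-tail points; it should be nearly the same at both.
r1 = chi2.sf(50.0, k) / approx(50.0)
r2 = chi2.sf(80.0, k) / approx(80.0)
```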
Applications
In model fitting and selection
If a predictive model is fitted by least squares, but the residuals have either autocorrelation or heteroscedasticity, then alternative models can be compared (in model selection) by relating changes in the sum of squares to an asymptotically valid generalized chi-squared distribution.
Classifying normal vectors using Gaussian discriminant analysis
If \boldsymbol{x} is a normal vector, its log likelihood is a quadratic form of \boldsymbol{x}, and is hence distributed as a generalized chi-squared variable. The log likelihood ratio that \boldsymbol{x} arises from one normal distribution versus another is also a quadratic form, and is hence also distributed as a generalized chi-squared variable.
In Gaussian discriminant analysis, samples from multinormal distributions are optimally separated by using a quadratic classifier, a boundary that is a quadratic function (e.g. the curve defined by setting the likelihood ratio between two Gaussians to 1). The classification error rates of different types (false positives and false negatives) are integrals of the normal distributions within the quadratic regions defined by this classifier. Since this is mathematically equivalent to integrating a quadratic form of a normal vector, the result is an integral of a generalized-chi-squared variable.
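A concrete one-dimensional illustration (example values of my own choosing): classifying between N(0,1) and N(0,4), the log likelihood ratio is quadratic in x, and the error rate is a chi-square tail probability (a chi-square being a special generalized chi-square):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Two classes: x ~ N(0,1) (class 1) vs x ~ N(0,4) (class 2).
# Log likelihood ratio: llr(x) = log f2(x) - log f1(x) = (3/8) x^2 - log 2,
# a quadratic in x, so llr(x) is generalized-chi-squared distributed.
llr = lambda x: 0.375 * x ** 2 - np.log(2)

# Error "true class 1, classified as 2" = P(llr(x) > 0 | x ~ N(0,1)).
# llr(x) > 0 iff x^2 > (8/3) log 2, and x^2 ~ chi-square(1) under class 1:
exact = chi2.sf(8 / 3 * np.log(2), 1)

# Monte Carlo estimate of the same error rate
x = rng.standard_normal(1_000_000)
mc = np.mean(llr(x) > 0)
```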
In signal processing
The following application arises in the context of Fourier analysis in signal processing, renewal theory in probability theory, and multi-antenna (MIMO) systems in wireless communication. What these areas have in common is that the sum of exponentially distributed variables is of importance (or equivalently, the sum of squared magnitudes of circularly-symmetric, centered complex Gaussian variables).
If Z_i are k independent, circularly-symmetric centered complex Gaussian random variables with mean 0 and variance \sigma_i^2, then the random variable
:\tilde{Q} = \sum_{i=1}^k |Z_i|^2
has a generalized chi-squared distribution of a particular form. The difference from the standard chi-squared distribution is that the Z_i are complex and can have different variances, and the difference from the more general generalized chi-squared distribution is that the relevant scaling matrix A is diagonal. If \sigma_i^2=\mu for all i, then \tilde{Q}, scaled down by \mu/2 (i.e. multiplied by 2/\mu), has a chi-squared distribution \chi^2(2k), also known as an Erlang distribution. If the \sigma_i^2 have distinct values for all i, then \tilde{Q} has the pdf

:f(x; k,\sigma_1^2,\ldots,\sigma_k^2) = \sum_{i=1}^k \frac{e^{-\frac{x}{\sigma_i^2}}}{\sigma_i^2 \prod_{j=1, j\neq i}^k \left(1- \frac{\sigma_j^2}{\sigma_i^2}\right)} \quad\text{for } x\geq 0.
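This density is straightforward to evaluate directly; below is a sketch implementing the distinct-variance formula above and verifying that it integrates to 1 (the helper name is my own):

```python
import numpy as np
from scipy.integrate import quad

def pdf_distinct(x, sigma2):
    """pdf of sum_i |Z_i|^2 with distinct variances sigma2[i]
    (each |Z_i|^2 is exponential with mean sigma2[i])."""
    sigma2 = np.asarray(sigma2, float)
    total = 0.0
    for i, s2 in enumerate(sigma2):
        others = np.delete(sigma2, i)
        total += np.exp(-x / s2) / (s2 * np.prod(1 - others / s2))
    return total

# Sanity check: the density integrates to 1 for distinct variances
area, _ = quad(lambda x: pdf_distinct(x, [1.0, 2.0, 3.0]), 0, np.inf)
```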
If there are sets of repeated variances among the \sigma_i^2, assume that they are divided into M sets, each representing a certain variance value. Denote \mathbf{r}=(r_1, r_2, \dots, r_M) to be the number of repetitions in each group; that is, the mth set contains r_m variables that have variance \sigma^2_m. Then \tilde{Q} represents a linear combination of independent \chi^2-distributed random variables with different degrees of freedom:

:\tilde{Q} = \sum_{m=1}^M \frac{\sigma^2_m}{2} Q_m, \quad Q_m \sim \chi^2(2r_m).
The pdf of \tilde{Q} is
:f(x; \mathbf{r}, \sigma^2_1, \dots, \sigma^2_M) = \prod_{m=1}^M \frac{1}{\sigma^{2r_m}_m} \sum_{k=1}^M \sum_{l=1}^{r_k} \frac{\Psi_{k,l,\mathbf{r}}}{(r_k-l)!} (-x)^{r_k-l} e^{-\frac{x}{\sigma^2_k}}, \quad \text{for } x\geq 0,
where
:\Psi_{k,l,\mathbf{r}} = (-1)^{r_k-1} \sum_{\mathbf{i} \in \Omega_{k,l}} \prod_{j \neq k} \binom{i_j + r_j-1}{i_j} \left(\frac{1}{\sigma^2_j}-\frac{1}{\sigma^2_k} \right)^{-(r_j + i_j)},
with \mathbf{i}=[i_1,\ldots,i_M]^T from the set \Omega_{k,l} of all partitions of l-1 (with i_k=0) defined as
:\Omega_{k,l} = \left\{ [i_1,\ldots,i_M]\in \mathbb{Z}^M;\ \sum_{j=1}^M i_j = l-1,\ i_k=0,\ i_j\geq 0 \text{ for all } j \right\}.
References
- Davies, R. B. (1973). "Numerical inversion of a characteristic function". Biometrika.
- Davies, R. B. (1980). "Algorithm AS155: The distribution of a linear combination of χ² random variables". Journal of the Royal Statistical Society.
- (1977). "Algorithm AS106: The distribution of non-negative quadratic forms in normal variables". Journal of the Royal Statistical Society.
- (1962). "Probability content of regions under spherical normal distributions, IV: The distribution of homogeneous and non-homogeneous quadratic functions of normal variables". The Annals of Mathematical Statistics.
- (1961). "Computing the distribution of quadratic forms in normal variables". Biometrika.
- Jones, D. A. (1983). "Statistical analysis of empirical models fitted by optimisation". Biometrika.
- Hammarwall, D.; Bengtsson, M.; Ottersten, B. (2008). "Acquiring partial CSI for spatially selective transmission by instantaneous channel norm feedback". IEEE Transactions on Signal Processing, 56, 1188–1204.
- Björnson, E.; Hammarwall, D.; Ottersten, B. (2009). [http://kth.diva-portal.org/smash/get/diva2:402940/FULLTEXT01 "Exploiting quantized channel norm feedback through conditional statistics in arbitrarily correlated MIMO systems"]. IEEE Transactions on Signal Processing, 57, 4027–4041.
This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.