Likelihood-ratio test
From Academic Kids

A likelihood-ratio test is a statistical test relying on a test statistic computed by taking the ratio of the maximum value of the likelihood function under the constraint of the null hypothesis to the maximum with that constraint relaxed. If that ratio is Λ and the null hypothesis holds, then for commonly occurring families of probability distributions, −2 log Λ has a particularly handy asymptotic distribution. Many common test statistics, such as those of the Z-test, the F-test, Pearson's chi-square test and the G-test, can be phrased as log-likelihood ratios or approximations thereof.
Many of these approximations were quite useful when computers did not exist, but now that taking a logarithm is no more vexing than multiplying two numbers, other approximations may be more useful, especially in special cases where the usual approximations are suspect.
A statistical model is often a parametrized family of probability density functions or probability mass functions f_{θ}(x). A null hypothesis is often stated by saying the parameter θ lies in a specified subset Θ_{0} of the parameter space Θ. The likelihood function L(θ) = L(θ | x) = f_{θ}(x) is a function of the parameter θ with x held fixed at the value that was actually observed, i.e., the data. The likelihood ratio is
 <math>\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.</math>
This is a function of the data x, and is therefore a statistic. The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small, and is justified by the Neyman–Pearson lemma. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable (a "Type I error" consists of rejecting a null hypothesis that is true).
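As a concrete illustration (not part of the original text), the ratio can be computed directly for a single binomial sample, testing the null hypothesis that a coin is fair against an unrestricted heads-probability. The data and function names below are hypothetical:

```python
import math

def binom_loglik(p, k, n):
    # log-likelihood of observing k heads in n tosses with heads-probability p
    return k * math.log(p) + (n - k) * math.log(1 - p)

k, n = 62, 100        # hypothetical data: 62 heads in 100 tosses
p0 = 0.5              # Theta_0 = {1/2}: the null hypothesis says the coin is fair
p_hat = k / n         # the MLE maximizes the likelihood over all of Theta

log_lambda = binom_loglik(p0, k, n) - binom_loglik(p_hat, k, n)
Lambda = math.exp(log_lambda)
print(Lambda, -2 * log_lambda)   # Lambda always lies in (0, 1]
```

Because the numerator maximizes over a subset of the denominator's parameter space, Λ never exceeds 1; values close to 0 speak against the null hypothesis.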
If the null hypothesis is true, then −2 log Λ will be asymptotically χ^{2} distributed with degrees of freedom equal to the difference in dimensionality of Θ and Θ_{0}.
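This asymptotic claim can be checked by simulation. The sketch below (a hypothetical check, not from the original text) repeatedly tests a fair coin at the 5% level using the χ²(1) critical value; if the asymptotic distribution is accurate, the rejection rate should be close to nominal:

```python
import math
import random

random.seed(0)
n, p0, trials = 200, 0.5, 2000
crit = 3.841   # upper 5% point of the chi-square(1) distribution

def neg2_log_lambda(k, n, p0):
    # -2 log Lambda for H0: p = p0 against an unrestricted binomial model
    ll_0 = k * math.log(p0) + (n - k) * math.log(1 - p0)
    if k == 0 or k == n:      # MLE on the boundary: maximized likelihood is 1
        ll_hat = 0.0
    else:
        p_hat = k / n
        ll_hat = k * math.log(p_hat) + (n - k) * math.log(1 - p_hat)
    return -2 * (ll_0 - ll_hat)

rejections = 0
for _ in range(trials):
    k = sum(random.random() < p0 for _ in range(n))   # toss a fair coin n times
    if neg2_log_lambda(k, n, p0) > crit:
        rejections += 1

rate = rejections / trials
print(rate)   # should be close to the nominal 0.05
```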
For instance, in the case of Pearson's test, we might try to compare two coins to determine whether they have the same probability of coming up heads. Our observation can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times the coin for that row came up heads or tails. The contents of this table are our observation X.
         Heads    Tails
Coin 1   k_{1H}   k_{1T}
Coin 2   k_{2H}   k_{2T}
Here ω consists of the parameters p_{1H}, p_{1T}, p_{2H}, and p_{2T}, where p_{ij} is the probability that coin i comes up with result j. The hypothesis space H is defined by the usual constraints on a distribution: 0 ≤ p_{ij} ≤ 1 and p_{iH} + p_{iT} = 1. The null hypothesis H_{0} is the subspace where p_{1j} = p_{2j}. In all of these constraints, i = 1, 2 and j = H, T.
The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the log-likelihood ratio to have the desired nice distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H_{0}, the asymptotic distribution for the test will be χ^{2}(1), the χ^{2} distribution with one degree of freedom.
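A sketch of the two-coin test in Python, using hypothetical counts: under H_{0} the maximum-likelihood estimate pools both coins into a single heads-probability, while under H each coin gets its own proportion.

```python
import math

def two_coin_lrt(k1h, k1t, k2h, k2t):
    # -2 log Lambda for H0: both coins share a single heads-probability
    def ll(kh, kt, p):
        # log-likelihood, guarding against log(0) at boundary estimates
        out = 0.0
        if kh:
            out += kh * math.log(p)
        if kt:
            out += kt * math.log(1 - p)
        return out

    n1, n2 = k1h + k1t, k2h + k2t
    p_pool = (k1h + k2h) / (n1 + n2)   # MLE under H0 (one free parameter)
    p1, p2 = k1h / n1, k2h / n2        # MLEs under H (two free parameters)
    return -2 * (ll(k1h, k1t, p_pool) + ll(k2h, k2t, p_pool)
                 - ll(k1h, k1t, p1) - ll(k2h, k2t, p2))

stat = two_coin_lrt(43, 57, 60, 40)   # hypothetical observed counts
print(stat)   # reject at the 5% level if stat > 3.841, the chi-square(1) cutoff
```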
For the general contingency table, we can write the log-likelihood ratio statistic as
 <math>-2 \log \Lambda = 2\sum_{i,j} k_{ij} \log \frac{k_{ij}}{m_{ij}},</math>
where k_{ij} is the observed count in cell (i, j) and m_{ij} is the corresponding expected count under the null hypothesis.
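A short sketch of this statistic in Python, with the expected counts estimated from the row and column totals under a null hypothesis of independence (the table values are hypothetical):

```python
import math

def lrt_statistic(table):
    # table: list of rows of observed counts k_ij
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, r in enumerate(table):
        for j, k in enumerate(r):
            m = row_tot[i] * col_tot[j] / total   # expected count under H0
            if k:                                 # empty cells contribute 0
                stat += k * math.log(k / m)
    return 2 * stat

stat = lrt_statistic([[43, 57], [60, 40]])   # hypothetical observed counts
print(stat)   # compare with chi-square critical values
```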
Bayesian criticisms of classical likelihood ratio tests focus on two issues:
- the use of the supremum in the calculation of the likelihood ratio: this takes no account of the uncertainty about θ, and using maximum likelihood estimates in this way can promote complicated alternative hypotheses with an excessive number of free parameters;
- the practice of testing the probability that the sample would produce a result as extreme or more extreme under the null hypothesis: this bases the test on the probability of extreme events that did not happen.
Instead they put forward methods such as Bayes factors, which explicitly take uncertainty about the parameters into account, and which are based on the evidence that did occur.
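For contrast, a Bayes factor replaces the supremum in the numerator with an average of the likelihood over a prior. The sketch below (a hypothetical illustration; the uniform Beta(1, 1) prior is an assumed choice, not prescribed by the text) compares an unrestricted binomial heads-probability against a point null, using the fact that the marginal likelihood under a uniform prior is the Beta function B(k+1, n−k+1):

```python
import math

def log_bayes_factor(k, n, p0):
    # log Bayes factor for an unrestricted heads-probability p (assumed
    # uniform Beta(1,1) prior) against the point null H0: p = p0.
    # Under the uniform prior the marginal likelihood is B(k+1, n-k+1).
    log_marginal = math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)
    log_null = k * math.log(p0) + (n - k) * math.log(1 - p0)
    return log_marginal - log_null

log_bf = log_bayes_factor(62, 100, 0.5)   # hypothetical data: 62 heads in 100
print(math.exp(log_bf))   # values above 1 favour the free-parameter model
```

Unlike the likelihood-ratio statistic, this quantity penalizes the free-parameter model automatically through the averaging over the prior, rather than rewarding it for its best-case fit.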