Search results: showing 151–165 of 860 records matching "Knowledge Base: Mathematical Statistics". Query time: 5.042 seconds.
We consider the group lasso penalty for the linear model. We note that the standard algorithm for solving the problem assumes that the model matrices in each group are orthonormal. Here we consider a ...
Genome-wide Association Analysis by Lasso Penalized Logistic Regression
Genome-wide Association Analysis Lasso Penalized Logistic Regression
2015/8/21
In ordinary regression, imposition of a lasso penalty makes continuous model selection straightforward. Lasso penalized regression is particularly advantageous when the number of predictors far exceed...
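The abstract above describes continuous model selection through an L1 penalty on a logistic model. A minimal numpy sketch of lasso-penalized logistic regression via proximal gradient descent (ISTA); the step size, penalty level, and simulated data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of the L1 penalty: shrink toward zero, clip at zero
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_logistic(X, y, lam, step, iters=3000):
    """Proximal-gradient (ISTA) sketch of L1-penalized logistic regression."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        grad = X.T @ (mu - y) / n              # gradient of mean neg. log-likelihood
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Simulated data (an assumption for illustration): many predictors,
# only the first three actually influence the response.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
truth = np.zeros(p)
truth[:3] = 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ truth)))).astype(float)

step = 4 * n / np.linalg.norm(X, 2) ** 2       # 1/L for the logistic loss
beta_hat = lasso_logistic(X, y, lam=0.1, step=step)
```

The L1 penalty drives most coefficients exactly to zero, which is the "continuous model selection" the abstract refers to.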
Strong Rules for Discarding Predictors in Lasso-type Problems
Strong Rules Discarding Predictors Lasso-type Problems
2015/8/21
We consider rules for discarding predictors in lasso regression and related problems, for computational efficiency. El Ghaoui et al. (2010) propose “SAFE” rules, based on univariate inner products bet...
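The screening idea sketched in this abstract can be illustrated with the basic (global) strong rule; the data and penalty values below are assumptions for demonstration, not the paper's experiments:

```python
import numpy as np

def strong_rule_keep(X, y, lam):
    """Basic strong rule for the lasso: keep predictor j only if
    |x_j' y| >= 2*lam - lam_max, where lam_max = max_j |x_j' y| is the
    smallest penalty at which the lasso fit is entirely zero.  The rule
    is a heuristic: it can occasionally discard an active predictor, so
    the fitted solution is normally re-checked against the KKT conditions."""
    scores = np.abs(X.T @ y)
    lam_max = scores.max()
    return scores >= 2.0 * lam - lam_max

# Illustrative data: only the first two predictors drive the response.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 30))
y = X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(100)

lam_max = np.abs(X.T @ y).max()
keep = strong_rule_keep(X, y, lam=0.8 * lam_max)
```

The sequential variant in the paper screens at penalty lam_k using the solution at the previous lam_{k-1} in place of lam_max, which discards far more predictors along a regularization path.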
The graphical lasso: New insights and alternatives
Graphical lasso sparse inverse covariance selection precision matrix convex analysis/optimization positive definite matrices sparsity semidefinite programming
2015/8/21
The graphical lasso [5] is an algorithm for learning the structure in an undirected Gaussian graphical model, using ℓ1 regularization to control the number of zeros in the precision matrix Θ = Σ...
Applications of the lasso and grouped lasso to the estimation of sparse graphical models
lasso and grouped lasso sparse graphical models
2015/8/21
We propose several methods for estimating edge-sparse and node-sparse graphical models based on lasso and grouped lasso penalties. We develop efficient algorithms for fitting these models when the numbe...
Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso
sparse inverse covariance selection sparsity graphical lasso Gaussian graphical models graph connected components concentration graph large scale covariance estimation
2015/8/21
We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sampl...
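The screening step this abstract describes, thresholding the sample covariance and splitting the problem over connected components, can be sketched in plain numpy; the example matrix is an illustrative assumption:

```python
import numpy as np

def connected_components_by_thresholding(S, lam):
    """Threshold the entries of the sample covariance S at lam and label
    the connected components of the resulting adjacency graph.  The
    paper's result is that the graphical-lasso solution with penalty lam
    is block-diagonal over exactly these components, so each block can
    be solved as a separate, much smaller problem."""
    p = S.shape[0]
    adj = np.abs(S) > lam
    np.fill_diagonal(adj, True)
    labels = -np.ones(p, dtype=int)
    comp = 0
    for start in range(p):
        if labels[start] >= 0:
            continue
        labels[start] = comp
        stack = [start]                      # depth-first search over the graph
        while stack:
            i = stack.pop()
            for j in np.flatnonzero(adj[i]):
                if labels[j] < 0:
                    labels[j] = comp
                    stack.append(j)
        comp += 1
    return labels

# A 4x4 covariance with two weakly linked 2x2 blocks (illustrative).
S = np.array([[1.0, 0.8, 0.0, 0.1],
              [0.8, 1.0, 0.1, 0.0],
              [0.0, 0.1, 1.0, 0.5],
              [0.1, 0.0, 0.5, 1.0]])
labels = connected_components_by_thresholding(S, lam=0.3)
```

At lam = 0.3 the 0.1 entries fall below the threshold, so the graph splits into the components {0, 1} and {2, 3}.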
Optimal Multiple Testing Under a Gaussian Prior on the Effect Sizes
Effect Sizes Multiple Testing
2015/8/21
We develop a new method for frequentist multiple testing with Bayesian prior information. Our procedure finds a new set of optimal p-value weights called the Bayes weights. Prior information is relev...
Bootstrapping data arrays of arbitrary order
Bayesian pigeonhole bootstrap online bagging online bootstrap
2015/8/21
In this paper we study a bootstrap strategy for estimating the variance of a mean taken over large multifactor crossed random effects data sets. We apply bootstrap reweighting independently to the lev...
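The independent reweighting of factor levels that the abstract mentions can be sketched for a two-factor crossed array; the Poisson(1) weighting, replicate count, and simulated data are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def pigeonhole_bootstrap_var(Y, B=500, rng=None):
    """Sketch of a two-factor pigeonhole bootstrap for the grand mean of
    an I x J crossed-effects array Y: reweight row levels and column
    levels independently with Poisson(1) weights (online-bootstrap
    style) and take the variance of the reweighted means over B
    replicates."""
    rng = np.random.default_rng(rng)
    I, J = Y.shape
    means = np.empty(B)
    for b in range(B):
        wr = rng.poisson(1.0, I).astype(float)   # row-level weights
        wc = rng.poisson(1.0, J).astype(float)   # column-level weights
        W = np.outer(wr, wc)                     # product weight per cell
        total = W.sum()
        means[b] = (W * Y).sum() / total if total > 0 else Y.mean()
    return means.var()

# Illustrative crossed random-effects data: Y_ij = a_i + b_j + noise.
rng = np.random.default_rng(2)
I, J = 30, 25
a = rng.standard_normal(I)
b = rng.standard_normal(J)
Y = a[:, None] + b[None, :] + 0.5 * rng.standard_normal((I, J))

v = pigeonhole_bootstrap_var(Y, B=300, rng=3)
```

Resampling cells i.i.d. would badly underestimate this variance, since it ignores the row and column dependence; reweighting whole levels is what makes the estimate honest.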
Ideal Denoising in an orthonormal basis chosen from a library of bases
Wavelet Packets Cosine Packets weak-ℓp spaces
2015/8/20
Suppose we have observations yᵢ = sᵢ + zᵢ, i = 1, …, n, where (sᵢ) is signal and (zᵢ) is i.i.d. Gaussian white noise. Suppose we have available a library L of orthogonal bases, such as the Wavelet ...
Ideal Spatial Adaptation by Wavelet Shrinkage
Minimax estimation subject to doing well at a point Orthogonal Wavelet Bases of Compact Support
2015/8/20
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or vari...
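The wavelet shrinkage this abstract refers to mimics the spatial-adaptation oracle by soft-thresholding wavelet coefficients at the universal level σ√(2 log n). A one-level Haar sketch (real applications use a full multilevel orthonormal wavelet transform; the signal below is an illustrative assumption):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrink(y, sigma):
    """One-level Haar wavelet shrinkage with the universal threshold
    sigma * sqrt(2 log n); n must be even for the pairwise transform."""
    n = y.size
    s = (y[0::2] + y[1::2]) / np.sqrt(2)   # Haar scaling coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)   # Haar detail coefficients
    t = sigma * np.sqrt(2.0 * np.log(n))
    d = soft_threshold(d, t)               # shrink only the details
    out = np.empty_like(y, dtype=float)
    out[0::2] = (s + d) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (s - d) / np.sqrt(2)
    return out

# Illustrative piecewise-constant signal observed in Gaussian noise.
rng = np.random.default_rng(4)
n = 1024
signal = np.where(np.arange(n) < n // 2, 0.0, 4.0)
sigma = 1.0
y = signal + sigma * rng.standard_normal(n)
est = haar_shrink(y, sigma)
```

Away from the jump the true detail coefficients are zero, so thresholding removes roughly half the noise energy while leaving the signal essentially unbiased.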
Consider estimating the mean vector θ from data Nₙ(θ, σ²I) with ℓq norm loss, q ≥ 1, when θ is known to lie in an n-dimensional ℓp ball, p ∈ (0, 1). For large n, the ratio of minimax linear risk to...
On minimax estimation of a sparse normal mean vector
nearly black object robustness white noise model
2015/8/20
Mallows has conjectured that among distributions which are Gaussian but for occasional contamination by additive noise, the one having least Fisher information has (two-sided) geometric contaminatio...
Minimax Bayes, asymptotic minimax and sparse wavelet priors
Minimax Decision theory Minimax Bayes estimation
2015/8/20
Pinsker (1980) gave a precise asymptotic evaluation of the minimax mean squared error of estimation of a signal in Gaussian noise when the signal is known a priori to lie in a compact ellipsoid in Hi...
Principal components analysis (PCA) is a classical method for the reduction of dimensionality of data in the form of n observations (or cases) of a vector with p variables. Contemporary data sets of...
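The classical PCA described above reduces to a singular value decomposition of the centered data matrix. A minimal sketch with illustrative simulated data (the low-rank structure is an assumption for demonstration):

```python
import numpy as np

def pca(X, k):
    """Minimal PCA sketch: center the n x p data matrix and take the
    top-k right singular vectors as the principal directions."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                     # k x p principal directions
    scores = Xc @ components.T              # n x k projected coordinates
    explained_var = s[:k] ** 2 / (X.shape[0] - 1)
    return components, scores, explained_var

# Illustrative data with two dominant latent directions plus noise.
rng = np.random.default_rng(5)
n, p, k = 500, 10, 2
latent = rng.standard_normal((n, k)) * np.array([5.0, 2.0])
W = rng.standard_normal((k, p))
X = latent @ W + 0.1 * rng.standard_normal((n, p))

components, scores, explained_var = pca(X, k)
```

The directions come out orthonormal and ordered by the variance they explain, which is exactly the dimension-reduction property the abstract describes.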
This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of p...