Search results: showing 136-150 of 860 records found for "Knowledge base: Mathematical statistics". Query time: 2.662 seconds
Presence-Only Data and the EM Algorithm
Boosted trees EM algorithm Logistic model Presence-only data Use-availability data
2015/8/21
In ecological modeling of the habitat of a species, it can be prohibitively expensive to determine species absence. Presence-only data consist of a sample of locations with observed presences and a sep...
We consider the least angle regression and forward stagewise algorithms for solving penalized least squares regression problems. In Efron,Hastie, Johnstone & Tibshirani (2004) it is proved that the le...
There has been considerable interest in random projections, an approximate algorithm for estimating distances between pairs of points in a high-dimensional vector space. Let A ∈ Rn×D be our n points i...
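As a rough illustration of the technique this abstract describes, here is a minimal numpy sketch of estimating pairwise distances with Gaussian random projections (an illustrative baseline; the dimensions, seed, and scaling are assumptions, and this is not the paper's own estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, k = 5, 10_000, 400            # n points in D dimensions, projected down to k
A = rng.normal(size=(n, D))

# Multiply by a matrix of i.i.d. N(0, 1/k) entries; pairwise l2 distances
# are then approximately preserved, with O(1/sqrt(k)) relative error.
R = rng.normal(size=(D, k)) / np.sqrt(k)
B = A @ R

true_d = np.linalg.norm(A[0] - A[1])   # distance in the original space
est_d = np.linalg.norm(B[0] - B[1])    # distance estimated from the projection
```

Only the k-dimensional projections need to be stored to answer distance queries between any pair of the n points.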
DISCUSSION: THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n
DANTZIG SELECTOR STATISTICAL ESTIMATION p IS MUCH LARGER THAN n
2015/8/21
This is a fascinating paper on an important topic: the choice of predictor variables in large-scale linear models. A previous paper in these pages attacked the same problem using the “LARS” algorithm ...
A Unified Near-Optimal Estimator For Dimension Reduction in lα (0 < α ≤ 2) Using Stable Random Projections
Near-Optimal Estimator Dimension Reduction Stable Random Projections
2015/8/21
Many tasks (e.g., clustering) in machine learning only require the lα distances instead of the original data. For dimension reductions in the lα norm (0 < α ≤ 2), the method of stable random projectio...
We consider “one-at-a-time” coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L1-penalized regression (lasso) in the l...
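A minimal sketch of such a one-at-a-time coordinate descent for the lasso, assuming the objective (1/(2n))‖y − Xβ‖² + λ‖β‖₁ (illustrative only; the function names and iteration count are assumptions, not the authors' implementation):

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: shrink z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Cycle through the coordinates; each update soft-thresholds the
    # univariate least-squares fit to the partial residual.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # residual leaving out coord j
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
    return beta
```

Each coordinate update is closed-form, which is what makes this style of algorithm attractive for L1-penalized problems.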
Response to Mease and Wyner, Evidence Contrary to the Statistical View of Boosting
Mease and Wyner Evidence Contrary Statistical View
2015/8/21
This is an interesting and thought-provoking paper. We especially appreciate the fact that the authors have supplied R code for their examples, as this allows the reader to understand and assess their...
BOOSTING ALGORITHMS: REGULARIZATION, PREDICTION AND MODEL FITTING
Generalized linear models Generalized additive models Gradient boosting Survival analysis Variable selection Software
2015/8/21
We present a statistical perspective on boosting. Special emphasis is given to estimating potentially complex parametric or nonparametric models, including generalized linear and additive models as we...
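As a toy illustration of the gradient-boosting idea for squared-error loss with stump base learners (a sketch under assumed step size and step count, not the paper's software):

```python
import numpy as np

def fit_stump(x, r):
    # Best single-split regression stump for residuals r: search split
    # points and fit a constant on each side by least squares.
    best = (np.inf, None)
    for s in np.unique(x)[:-1]:
        left = x <= s
        cl, cr = r[left].mean(), r[~left].mean()
        sse = ((r - np.where(left, cl, cr)) ** 2).sum()
        if sse < best[0]:
            best = (sse, (s, cl, cr))
    return best[1]

def l2_boost(x, y, n_steps=300, nu=0.1):
    # L2 boosting: repeatedly fit the current residuals with a stump
    # and add a shrunken version of the fit (shrinkage factor nu).
    f = np.zeros_like(y, dtype=float)
    for _ in range(n_steps):
        s, cl, cr = fit_stump(x, y - f)
        f += nu * np.where(x <= s, cl, cr)
    return f
```

The shrinkage factor nu acts as a regularizer: smaller values require more steps but typically generalize better.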
Comment: Boosting Algorithms: Regularization, Prediction and Model Fitting
Boosting Algorithms Regularization Prediction Model Fitting
2015/8/21
We congratulate the authors (hereafter BH) for an interesting take on the boosting technology, and for developing a modular computational environment in R for exploring their models. Their use of low-...
Nonlinear Estimators and Tail Bounds for Dimension Reduction in l1 Using Cauchy Random Projections
dimension reduction l1 norm Johnson-Lindenstrauss (JL) lemma Cauchy random projections
2015/8/21
For dimension reduction in the l1 norm, the method of Cauchy random projections multiplies the original data matrix A ∈ Rn×D with a random matrix R ∈ RD×k (k ≪ D) whose entries are i.i.d. samples of ...
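A minimal numpy sketch of the setup: each projected difference is Cauchy-distributed with scale equal to the l1 distance, so a simple nonlinear estimator such as the sample median of absolute values recovers it (the median estimator here is an illustrative choice, not necessarily the paper's tail-optimal one; dimensions and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, D, k = 3, 500, 1000
A = rng.uniform(-1.0, 1.0, size=(n, D))

# Entries of R are i.i.d. standard Cauchy, so each coordinate of
# B[i] - B[j] is Cauchy with scale ||A[i] - A[j]||_1. The Cauchy has
# no mean, so a linear (averaging) estimator fails; the median of the
# absolute values is a consistent nonlinear estimator of the scale.
R = rng.standard_cauchy(size=(D, k))
B = A @ R

true_l1 = np.abs(A[0] - A[1]).sum()
est_l1 = np.median(np.abs(B[0] - B[1]))
```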
“PRECONDITIONING” FOR FEATURE SELECTION AND REGRESSION IN HIGH-DIMENSIONAL PROBLEMS
Model selection prediction error lasso
2015/8/21
We consider regression problems where the number of predictors greatly exceeds the number of observations. We propose a method for variable selection that first estimates the regression function, yiel...
We consider the problem of performing interpretable classification in the high-dimensional setting, in which the number of features is very large and the number of observations is limited. This settin...
One Sketch For All: Theory and Application of Conditional Random Sampling
One Sketch For All Theory and Application Conditional Random Sampling
2015/8/21
Conditional Random Sampling (CRS) was originally proposed for efficiently computing pairwise (l2, l1) distances, in static, large-scale, and sparse data. This study modifies the original CRS and exten...
Sparse inverse covariance estimation with the lasso
Sparse inverse covariance estimation lasso
2015/8/21
We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm, the ...
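A minimal numpy illustration of the model being estimated here: zeros in the inverse covariance (precision) matrix of a Gaussian encode missing edges, i.e. conditional independence (this sketches the graphical model itself, with an assumed 3-variable chain graph, not the authors' coordinate-descent algorithm):

```python
import numpy as np

# Precision matrix of a chain graph 0 - 1 - 2: Theta[0, 2] == 0 means
# variables 0 and 2 are conditionally independent given variable 1.
Theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.6],
                  [0.0, 0.6, 2.0]])

rng = np.random.default_rng(3)
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(Theta), size=20_000)

# With many samples, inverting the sample covariance roughly recovers
# Theta; the lasso penalty is what makes this work (and stay sparse)
# when samples are scarce.
S = np.cov(X, rowvar=False)
Theta_hat = np.linalg.inv(S)
```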
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
collaborative filtering nuclear norm spectral regularization netflix prize large scale convex optimization
2015/8/21
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and...
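A minimal numpy sketch of nuclear-norm-regularized matrix completion by iterative SVD soft-thresholding (the generic idea behind such spectral regularization algorithms; the function names, λ value, and iteration count are assumptions, not the paper's exact procedure):

```python
import numpy as np

def svd_soft_threshold(Z, lam):
    # Soft-threshold the singular values of Z: this is the proximal
    # step for the nuclear-norm penalty and yields a low-rank matrix.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def soft_impute(X, mask, lam, n_iter=100):
    # Alternate between (1) a low-rank fit of the current completed
    # matrix and (2) restoring the observed entries of X.
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        Z_hat = svd_soft_threshold(Z, lam)
        Z = np.where(mask, X, Z_hat)
    return Z_hat
```

Each iteration costs one SVD, and the soft-thresholding keeps the iterates low-rank, which is what makes the approach viable at scale.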