Title: Solving Least Squares or Quadratic Programming Problems under Equality/Inequality Constraints
Description: Functions that solve least squares linear regression problems under linear equality/inequality constraints. Functions for solving quadratic programming problems are also available; they transform such problems into least squares ones first. The package is based on the 'Fortran' program of Lawson and Hanson (1974, 1995), which is in the public domain and available at <http://www.netlib.org/lawson-hanson/>.
Authors: Yong Wang [aut, cre], Charles L. Lawson [aut], Richard J. Hanson [aut]
Maintainer: Yong Wang <[email protected]>
License: GPL (>= 2)
Version: 1.3-0
Built: 2025-03-07 03:17:58 UTC
Source: https://github.com/cran/lsei
Solves the least squares problem using Householder forward triangulation with column interchanges. It is an R interface to the HFTI function that is described in Lawson and Hanson (1974, 1995). Its Fortran implementation is public domain and is available at http://www.netlib.org/lawson-hanson/.
hfti(a, b, tol = 1e-07)
a: Design matrix.
b: Response vector or matrix.
tol: Tolerance for determining the pseudo-rank.
Given matrix a and vector b, hfti solves the least squares problem:

    minimize  || a x - b ||.
b: First ncol(a) elements contain the solution.
krank: Pseudo-rank.
rnorm: Euclidean norm of the residual vector.
Yong Wang <[email protected]>
Lawson and Hanson (1974, 1995). Solving Least Squares Problems. Englewood Cliffs, N.J., Prentice-Hall.
a = matrix(rnorm(10*4), nrow=10)
b = a %*% c(0,1,-1,1) + rnorm(10)
hfti(a, b)
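As a quick sanity check (a minimal sketch, not taken from the package documentation), the leading ncol(a) entries of the returned b should agree with an unconstrained least squares fit when a has full column rank:

set.seed(1)
a = matrix(rnorm(10*4), nrow=10)
b = drop(a %*% c(0,1,-1,1) + rnorm(10))
## hfti stores the solution in the first ncol(a) elements of $b
hfti(a, b)$b[1:4]
qr.solve(a, b)   # ordinary least squares solution, for comparison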
For each of the given values, indx finds the index of the largest element of a vector, sorted in ascending order, that the given value is greater than or equal to.
indx(x, v)
x: Vector of numeric values, the indices of which are to be found.
v: Vector of numeric values sorted in ascending order.
For each x[i], the function returns the integer j such that

    v[j] <= x[i] < v[j+1],

where v[0] = -Inf and v[k+1] = Inf, with k the length of v.
Returns a vector of integers, which are the indices of the x-values in the vector v.
Yong Wang <[email protected]>
indx(0:6, c(1:5,5))
indx(sort(rnorm(5)), -2:2)
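Under these conventions, indx appears to agree with base R's findInterval (an observation, not an equivalence documented by the package); a minimal sketch to check:

v = c(1:5, 5)        # sorted, with a tie at the top
x = c(0:6, 2.5)
indx(x, v)
findInterval(x, v)   # expected to match under the conventions above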
These functions can be used for solving least squares or quadratic programming problems under general equality and/or inequality constraints.
lsei(a, b, c=NULL, d=NULL, e=NULL, f=NULL, lower=-Inf, upper=Inf)
lsi(a, b, e=NULL, f=NULL, lower=-Inf, upper=Inf)
ldp(e, f)
qp(q, p, c=NULL, d=NULL, e=NULL, f=NULL, lower=-Inf, upper=Inf, tol=1e-15)
a: Design matrix.
b: Response vector.
c: Matrix of numeric coefficients on the left-hand sides of equality constraints. If NULL, no equality constraint is imposed.
d: Vector of numeric values on the right-hand sides of equality constraints.
e: Matrix of numeric coefficients on the left-hand sides of inequality constraints. If NULL, no inequality constraint is imposed.
f: Vector of numeric values on the right-hand sides of inequality constraints.
lower, upper: Bounds on the solution, as a way to specify such simple inequality constraints.
q: Matrix of numeric values for the quadratic term of a quadratic programming problem.
p: Vector of numeric values for the linear term of a quadratic programming problem.
tol: Tolerance, used by qp for calculating the pseudo-rank.
The lsei function solves a least squares problem under both equality and inequality constraints. It is an implementation of the LSEI algorithm described in Lawson and Hanson (1974, 1995).

The lsi function solves a least squares problem under inequality constraints. It is an implementation of the LSI algorithm described in Lawson and Hanson (1974, 1995).

The ldp function solves a least distance programming problem under inequality constraints. It is an R wrapper of the Fortran LDP function described in Lawson and Hanson (1974, 1995).

The qp function solves a quadratic programming problem by transforming it into a least squares problem under the same equality and inequality constraints, which is then solved by function lsei.

The NNLS and LDP Fortran implementations used internally were downloaded from http://www.netlib.org/lawson-hanson/.
Given matrices a, c and e, and vectors b, d and f, function lsei solves the least squares problem under both equality and inequality constraints:

    minimize  || a x - b ||,  subject to  c x = d  and  e x >= f.

Function lsi solves the least squares problem under inequality constraints:

    minimize  || a x - b ||,  subject to  e x >= f.

Function ldp solves the least distance programming problem under inequality constraints:

    minimize  || x ||,  subject to  e x >= f.

Function qp solves the quadratic programming problem:

    minimize  (1/2) x' q x + p' x,  subject to  c x = d  and  e x >= f.
A vector of the solution values.
Yong Wang <[email protected]>
Lawson and Hanson (1974, 1995). Solving Least Squares Problems. Englewood Cliffs, N.J., Prentice-Hall.
beta = c(rnorm(2), 1)
beta[beta<0] = 0
beta = beta / sum(beta)
a = matrix(rnorm(18), ncol=3)
b = a %*% beta + rnorm(6, sd=.1)
c = t(rep(1, 3))
d = 1
e = diag(1,3)
f = rep(0,3)
lsei(a, b)                       # under no constraint
lsei(a, b, c, d)                 # under eq. constraints
lsei(a, b, e=e, f=f)             # under ineq. constraints
lsei(a, b, c, d, e, f)           # under eq. and ineq. constraints
lsei(a, b, rep(1,3), 1, lower=0) # same solution
q = crossprod(a)
p = -drop(crossprod(b, a))
qp(q, p, rep(1,3), 1, lower=0)   # same solution

## Example from Lawson and Hanson (1974), p.140
a = cbind(c(.4302,.6246), c(.3516,.3384))
b = c(.6593, .9666)
c = c(.4087, .1593)
d = .1376
lsei(a, b, c, d)                 # Solution: -1.177499 3.884770

## Example from Lawson and Hanson (1974), p.170
a = cbind(c(.25,.5,.5,.8), rep(1,4))
b = c(.5,.6,.7,1.2)
e = cbind(c(1,0,-1), c(0,1,-1))
f = c(0,0,-1)
lsi(a, b, e, f)                  # Solution: 0.6213152 0.3786848

## Example from Lawson and Hanson (1974), p.171
e = cbind(c(-.207,-.392,.599), c(2.558, -1.351, -1.206))
f = c(-1.3,-.084,.384)
ldp(e, f)                        # Solution: 0.1268538 -0.2554018
Finds either row or column maximum values of a matrix.
matMaxs(x, dim = 1)
x: Numeric matrix.
dim: = 1 (default), for row maximum values; = 2, for column maximum values.
Matrix x may contain Inf or -Inf, but not NA or NaN.

Returns a numeric vector with row or column maximum values.

The function is very much the same as using apply(x, 1, max) or apply(x, 2, max), but is faster.
Yong Wang <[email protected]>
x = cbind(c(1:4,Inf), 5:1)
matMaxs(x)
matMaxs(x, 2)
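A minimal sketch checking the claimed equivalence with apply (assuming x contains no NA or NaN):

x = cbind(c(1:4, Inf), 5:1)
stopifnot(all.equal(matMaxs(x), apply(x, 1, max)),
          all.equal(matMaxs(x, 2), apply(x, 2, max)))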
These functions are particularly useful for solving least squares or quadratic programming problems when some or all of the solution values are subject to a nonnegativity constraint. One may further restrict the NN-restricted coefficients to have a fixed positive sum.
nnls(a, b)
pnnls(a, b, k=0, sum=NULL)
pnnqp(q, p, k=0, sum=NULL, tol=1e-20)
a: Design matrix.
b: Response vector.
k: Integer, meaning that the first k coefficients are not NN-restricted.
sum: = NULL, if the NN-restricted coefficients are not further restricted to have a fixed sum; = a positive value, if they are further restricted to have this fixed positive sum.
q: Positive semidefinite matrix of numeric values for the quadratic term of a quadratic programming problem.
p: Vector of numeric values for the linear term of a quadratic programming problem.
tol: Tolerance used for calculating the pseudo-rank of q.
Function nnls solves the least squares problem under nonnegativity (NN) constraints. It is an R interface to the NNLS function described in Lawson and Hanson (1974, 1995). Its Fortran implementation is public domain and available at http://www.netlib.org/lawson-hanson/ (with slight modifications by Yong Wang for compatibility with the latest Fortran compiler).
Given matrix a and vector b, nnls solves the nonnegative least squares problem:

    minimize  || a x - b ||,  subject to  x >= 0.
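Equivalently (a minimal sketch, not from the package documentation), the same problem can be posed to lsei with a zero lower bound, though nnls returns exact zeros while lsei may not:

set.seed(1)
a = matrix(rnorm(40), nrow=10)
b = drop(a %*% c(0,1,-1,1)) + rnorm(10)
nnls(a, b)$x          # exact zeros where constraints are active
lsei(a, b, lower=0)   # same minimizer, up to numerical error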
Function pnnls also solves the above nonnegative least squares problem when k=0, but it may also leave the first k coefficients unrestricted. The output value of k can be smaller than the input one if a has linearly dependent columns. If sum is a positive value, pnnls solves the problem with the further restriction that the NN-restricted coefficients must sum to the given value.

Function pnnqp solves the quadratic programming problem when only some or all coefficients are restricted by nonnegativity. The quadratic programming problem is solved by transforming it into a least squares problem under the same constraints, which is then solved by function pnnls. Arguments k and sum have the same meanings as for pnnls.
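The reduction from quadratic programming to least squares can be sketched with a Cholesky factorization when q is positive definite (an illustrative route under that assumption, not necessarily the package's exact internal one; pnnqp itself also handles the semidefinite case):

set.seed(1)
a = matrix(rnorm(40), nrow=10)
b = drop(a %*% c(0,1,-1,1)) + rnorm(10)
q = crossprod(a)                      # quadratic term, positive definite here
p = -drop(crossprod(b, a))            # linear term
r = chol(q)                           # q = r'r
z = backsolve(r, -p, transpose=TRUE)  # solve r'z = -p
## minimizing (1/2) x'qx + p'x is equivalent to minimizing ||r x - z||
nnls(r, z)$x
pnnqp(q, p)$x                         # should agree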
Functions nnls, pnnls and pnnqp are able to return any zero-valued solution component as exactly 0. This differs from functions lsei and qp, which may produce very small values in place of exact zeros, due to numerical error.
x: Solution.
r: The upper-triangular matrix produced by the orthogonal transformations applied to a.
b: The vector produced by applying the same orthogonal transformations to b.
index: Indices of the columns of a, in the pivoted order produced by the algorithm.
rnorm: Euclidean norm of the residual vector.
mode: = 1, successful computation; = 2, bad dimensions of the problem; = 3, iteration count exceeded (more than 3 times the number of variables).
k: Number of the first few coefficients that are truly not NN-restricted.
Yong Wang <[email protected]>
Lawson and Hanson (1974, 1995). Solving Least Squares Problems. Englewood Cliffs, N.J., Prentice-Hall.
Dax (1990). The smallest point of a polytope. Journal of Optimization Theory and Applications, 64, pp. 429-432.
Wang (2010). Fisher scoring: An interpolation family and its Monte Carlo implementations. Computational Statistics and Data Analysis, 54, pp. 1744-1755.
a = matrix(rnorm(40), nrow=10)
b = drop(a %*% c(0,1,-1,1)) + rnorm(10)
nnls(a, b)$x                     # constraint x >= 0
pnnls(a, b, k=0)$x               # same as nnls(a, b)
pnnls(a, b, k=2)$x               # first two coeffs are not NN-constrained
pnnls(a, b, k=2, sum=1)$x        # NN-constrained coeffs must sum to 1
pnnls(a, b, k=2, sum=2)$x        # NN-constrained coeffs must sum to 2
q = crossprod(a)
p = -drop(crossprod(b, a))
pnnqp(q, p, k=2, sum=2)$x        # same solution
pnnls(a, b, sum=1)$x             # zeros found exactly
pnnqp(q, p, sum=1)$x             # zeros found exactly
lsei(a, b, rep(1,4), 1, lower=0) # zeros not so exact