How can I implement a "squared l2norm.linear"?

Example implementation for Fused Lasso:
import regreg.api as rr
loss = rr.squared_error(X,Y)
sparsity = rr.l1norm(p, lagrange=1)
fused = rr.l1norm.linear(D, lagrange=10)
problem = rr.container(sparsity, fused, loss)
where D is the difference operator that penalizes first differences in the coefficient vector.
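For reference, the first-difference operator D can be built explicitly. A plain NumPy sketch (p and the dense construction are illustrative; for large p a sparse matrix would be preferable):

```python
import numpy as np

p = 5  # illustrative dimension

# D maps beta to its first differences: (D @ beta)[i] = beta[i+1] - beta[i]
D = np.diff(np.eye(p), axis=0)     # shape (p - 1, p)

beta = np.arange(p, dtype=float)   # beta = [0, 1, 2, 3, 4]
diffs = D @ beta                   # all first differences equal 1
```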
--> So, what I would need is to replace fused in the example above with something along the lines of fused = rr.l2norm.linear(D, lagrange=10)^2, where I write the "^2" only for exposition; I am aware that this is not valid code.
Workaround:
I found a workaround that maps the sparse fused ridge problem into a lasso problem by artificially expanding my X and Y matrices (Lemma 1 in Hebiri and van de Geer (2011)). I can then use the ordinary lasso to solve the problem.
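If it helps others, the augmentation itself can be sketched in plain NumPy. The dimensions and lam are illustrative, and this covers only the smooth ||D beta||^2 part of the penalty (the l1 penalty is then applied to the augmented lasso as usual):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 30, 5, 10.0
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)
D = np.diff(np.eye(p), axis=0)   # first-difference operator

# min ||Y - X b||^2 + lam * ||D b||^2  ==  min ||Y_aug - X_aug b||^2
X_aug = np.vstack([X, np.sqrt(lam) * D])
Y_aug = np.concatenate([Y, np.zeros(D.shape[0])])

# Check: least squares on the augmented system matches the ridge-type
# closed form (X'X + lam D'D)^{-1} X'Y
b_aug = np.linalg.lstsq(X_aug, Y_aug, rcond=None)[0]
b_closed = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ Y)
assert np.allclose(b_aug, b_closed)
```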
However, to create these artificial matrices I need to know whether rr.squared_error() indeed minimizes only 1/2 ||Y - X beta||^2, as documented, or internally normalizes by the sample size, i.e. 1/(2N) ||Y - X beta||^2.
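Either way, the two conventions differ only by a rescaling of the penalty weight: minimizing 1/(2N) ||Y - X beta||^2 with weight lambda gives the same solution as minimizing 1/2 ||Y - X beta||^2 with weight N * lambda. A ridge sketch of this equivalence (pure NumPy, closed form used purely for illustration, not regreg code):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 50, 4, 0.3
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)

# Minimizer of (1/(2n)) ||Y - X b||^2 + (lam/2) ||b||^2 ...
b_scaled = np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ Y / n)
# ... equals the minimizer of (1/2) ||Y - X b||^2 + (n*lam/2) ||b||^2
b_unscaled = np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)
assert np.allclose(b_scaled, b_unscaled)
```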
EDIT:
Is the following a possible solution?
import regreg.api as rr
loss = rr.squared_error(X,Y)
sparsity = rr.l1norm(p, lagrange=1)
fused = rr.quadratic_loss(p, Q=D, coef=lam)  # note: 'lambda' is a reserved word in Python
problem = rr.container(sparsity, fused, loss)
where D is the difference operator that penalizes first differences in the coefficient vector. I would then control the amount of penalization by choosing different values for coef.
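One caveat on this proposal (an assumption on my side; please check the regreg docs for the exact form rr.quadratic_loss represents): if the loss is coef/2 * beta' Q beta, then penalizing the squared first differences ||D beta||^2 needs Q = D'D rather than Q = D, since beta' (D'D) beta = ||D beta||^2. A quick NumPy check of that identity:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
D = np.diff(np.eye(p), axis=0)   # first-difference operator
beta = rng.standard_normal(p)

# ||D beta||^2 == beta' (D'D) beta, so Q = D'D encodes the
# fused ridge penalty as a quadratic form
assert np.isclose(np.linalg.norm(D @ beta) ** 2,
                  beta @ (D.T @ D) @ beta)
```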
Thanks in advance!