
RAM issues with Newton iteration #41

Open
orcahn opened this issue Dec 15, 2018 · 4 comments
orcahn commented Dec 15, 2018

I tried to replicate some results from the "Approximation of 2^d x 2^d matrices" paper, which reports the approximate inverse of a 2D Laplacian computed with the Newton method. However, my code already needs more than 20 GB of RAM to run this Newton iteration for a 2D Laplacian with d=9. What could be the reason for this? And am I even supposed to minimize the absolute, as opposed to the relative, residual?
The code:

% Approximate inversion of the 2D Laplace operator for varying n
d = [5,6,7,8,9,10,11,12];

max_rank = 75;
eps = 1e-5;

for i = 1:8
    A = tt_qlaplace_dd([d(i), d(i)]);     % QTT 2D Laplacian
    I = tt_matrix(tt_eye(2, d(i)*2));     % identity in QTT matrix format
    X = 0.2*I;                            % initial guess
    err = 2;
    j = 0;

    % Newton iteration X <- X*(2I - A*X), truncated to max_rank each step
    while norm(err,'F') > 1
        Z = round(2*I - A*X, eps, max_rank);
        X = round(X*Z, eps, max_rank);
        err = I - A*X;
        j = j + 1;
    end
    disp("n="+num2str(2^d(i))+"²" + " Iters: "+num2str(j) + " Error (Frob): " + num2str(norm(err,'F'),'%10.1e\n'));
end

And the output:

n=32² Iters: 7 Error (Frob):8.0e-01
n=64² Iters: 9 Error (Frob):7.8e-01
n=128² Iters: 11 Error (Frob):7.7e-01
n=256² Iters: 13 Error (Frob):7.7e-01
Out of memory. Type "help memory" for your options.

Error in tt_tensor/round (line 55)
cr=cr(1:pos1); %Truncate storage if required

Error in tt_matrix/round (line 24)
tt.tt=round(tt.tt,eps,rmax);

Error in testinv (line 16)
    X = round(X*Z,eps,max_rank);
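
For context, the loop above implements the Newton (Hotelling-Schulz) iteration X_{k+1} = X_k*(2I - A*X_k), which converges quadratically whenever norm(I - A*X_0) < 1. A minimal dense sanity check of the same iteration, without any TT arithmetic (the initial guess X0 = A'/(norm(A,1)*norm(A,inf)) is the classical choice that guarantees convergence; the 30-iteration cap is an illustrative assumption, not from the thread):

% Dense sanity check of the Newton (Hotelling-Schulz) iteration
% X_{k+1} = X_k*(2*I - A*X_k); converges when norm(I - A*X0) < 1.
n = 50;
A = full(gallery('tridiag', n));       % 1D Laplacian stencil, SPD
X = A' / norm(A,1) / norm(A,inf);      % classical convergent initial guess
I = eye(n);
for k = 1:30
    X = X*(2*I - A*X);                 % Newton-Schulz update
end
fprintf('residual: %g\n', norm(I - A*X, 'fro'));

The residual shrinks quadratically once it drops below 1, which is why the TT version above needs only a handful of iterations per grid size.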

dolgov commented Dec 15, 2018

Hi,

the devil is in the line X = round(X*Z,eps,max_rank);
The TT ranks of the inverse of A are already pretty large, and they get squared by the exact * operation before round() can truncate them.
There is an iterative ALS-type method amen_mm that computes directly the approximate result of the matrix product (without constructing a TT matrix with squared ranks on the way).
Here is a code for e.g. d=10 that works fine on my laptop:

d = 10;
A = tt_qlaplace_dd([d d]);
I = tt_eye(2, 2*d);
X = 0.2*I;
err = 2;
while (norm(err) > 1)
    X = amen_mm(X, 2*I - A*X, 1e-7);
    err = I - A*X;
    fprintf('%g\n', norm(err));
end
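
To see why the exact product is the culprit, one can inspect the TT ranks directly. A sketch, assuming TT-Toolbox is on the path and that its rank() overload for TT objects is available (not part of the thread itself):

% TT ranks add under + and multiply elementwise under the exact *,
% so the product's ranks blow up before any round() is applied.
d = 8;
A = tt_qlaplace_dd([d d]);
I = tt_matrix(tt_eye(2, 2*d));
X = A + I;      % a nontrivial TT matrix: ranks add under +
P = A*X;        % exact product: ranks multiply
fprintf('max rank of A:   %d\n', max(rank(A)));
fprintf('max rank of X:   %d\n', max(rank(X)));
fprintf('max rank of A*X: %d\n', max(rank(P)));

amen_mm avoids forming P at all: it fits a low-rank approximation of the product iteratively, so the intermediate squared-rank representation never exists in memory.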


orcahn commented Dec 15, 2018

Thank you very much.

oseledets (Owner) commented
@dolgov Maybe add this to examples/tutorial? This is a neat example.


dolgov commented Dec 16, 2018

Well, I can add this to the tests, but not to the quick_start, as I don't have its LaTeX source.
