HIGH-ACCURACY METHOD FOR COMPUTING THE INVERSE OF A MATRIX IN PYTHON

Asked 2 years ago, Updated 2 years ago, 60 views

I am computing an inverse matrix needed for a simulation in Python.
The data is stored in a variable A of type ndarray with dtype=np.float64, where A is roughly a 400x400 matrix. I computed the inverse as follows:

import numpy as np
Ainv = np.linalg.inv(A)

Up to this point the calculation runs without errors, but after this,

AAi = np.dot(A, Ainv)
AiA = np.dot(Ainv, A)

computing the products of the inverse with the original matrix as above returns values that are completely different from the identity (for example, AAi[0][0] = 6.68 and AiA[0][0] = 5.8e+15).

So, if I first take a submatrix in advance, e.g.

A = A[:100, :100]

and do the same calculation on this smaller range, the entries that should be 1 come out as about 9.9997e-01, the entries that should be 0 come out between ***e-05 and ***e-11, and the number of accurate digits varies. (Specifically, the first digit no longer matches once the size exceeds about 106 x 106; on the other hand, at 80 x 80 the product is an identity matrix to roughly 6 digits of precision.)

I suspect that as the number of elements in the matrix grows, the product of the original matrix and its inverse stops being the identity matrix, but is there a way to compute the inverse accurately? It does not matter if the calculation takes some time.
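For reference, here is a minimal sketch of how I quantify the deviation of these products from the identity (the matrix below is just a random placeholder for the actual data):

import numpy as np

# Random placeholder standing in for the actual 400x400 float64 data.
A = np.random.rand(400, 400)
Ainv = np.linalg.inv(A)
I = np.eye(400)

# How far the products are from the identity matrix, measured with the
# Frobenius norm and with the largest absolute elementwise deviation.
print(np.linalg.norm(np.dot(A, Ainv) - I))
print(np.abs(np.dot(Ainv, A) - I).max())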

Environment:
python 2.7.9
numpy 1.14.0 (lapack and blas are installed)

python numpy

2022-09-30 14:21

2 Answers

In numerical computation, not just in Python, computing an inverse matrix directly tends to magnify error. Theoretically this can be analyzed through the condition number of the matrix: in general, the larger the condition number, the larger the error that the computed inverse may contain. The condition number of the questioner's matrix is around 10⁸, which is very large, so it is not surprising that the inverse obtained from np.linalg.inv(A) contains a lot of error.
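As a quick check, the condition number can be computed directly with np.linalg.cond; a minimal sketch, using a random placeholder matrix:

import numpy as np

# Placeholder matrix; replace with the actual 400x400 data.
A = np.random.rand(400, 400)

# 2-norm condition number of A. Roughly speaking, a condition number
# around 1e8 means about 8 of the ~16 significant decimal digits of
# float64 can be lost when inverting A or solving with it.
print(np.linalg.cond(A))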

For this reason, when an inverse matrix is needed in the middle of a formula in a numerical computation, it is common not to call inv() but to work indirectly through an LU decomposition, i.e. to solve linear systems instead of forming the inverse explicitly.
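A sketch of that pattern, assuming SciPy is available alongside NumPy (the matrix and right-hand side are random placeholders):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Placeholder data; replace with the real matrix and right-hand side.
A = np.random.rand(400, 400)
b = np.random.rand(400)

# Factor A once (LU with partial pivoting)...
lu, piv = lu_factor(A)

# ...then, wherever "A^-1 b" appears in a formula, solve A x = b with
# the stored factors instead of forming np.linalg.inv(A) explicitly.
x = lu_solve((lu, piv), b)
print(np.linalg.norm(np.dot(A, x) - b))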

That said, here I would first like to consider whether the formula can be modified so that the condition number of the matrix becomes smaller.

As a simple example, scaling the components of a matrix can reduce its condition number. For example, following the reference "Numerical Solution Quality: Conditioning, Stability, and Error Analysis", the matrix A below has a large condition number (that is, it is ill-conditioned):

A=\left(\begin{array}{cc}2\times10^{9} & 10^{9} \\ 10^{-9} & 2\times10^{-9}\end{array}\right)

However, the condition number can be reduced by multiplying A by an appropriate diagonal matrix:

B=\left(\begin{array}{cc}10^{-9} & 0 \\ 0 & 10^{9}\end{array}\right)\left(\begin{array}{cc}2\times10^{9} & 10^{9} \\ 10^{-9} & 2\times10^{-9}\end{array}\right)=\left(\begin{array}{cc}2 & 1 \\ 1 & 2\end{array}\right)

This is scaling. Depending on how the problem is set up, this calculation can be done in advance so that the inverse matrix can be determined accurately. (However, LAPACK does some of this automatically, so I do not know how much difference it will make...) Also, although I am not very familiar with it, preconditioning seems to be a related topic, and the English Wikipedia article on preconditioning gives a comprehensive explanation.
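Returning to the 2x2 example, here is a sketch of the scaling in NumPy, checking the condition numbers before and after and recovering the inverse of A from the inverse of B:

import numpy as np

# The ill-conditioned 2x2 matrix A from the text.
A = np.array([[2e9,  1e9],
              [1e-9, 2e-9]])

# Left-multiplying by the diagonal matrix D gives B = D A = [[2, 1], [1, 2]].
D = np.diag([1e-9, 1e9])
B = np.dot(D, A)

print(np.linalg.cond(A))   # very large
print(np.linalg.cond(B))   # about 3

# Since A = D^-1 B, the inverse of A is recovered as A^-1 = B^-1 D.
Ainv = np.dot(np.linalg.inv(B), D)

# Exact inverse of this 2x2 A for comparison: (1/det) [[d, -b], [-c, a]], det = 3.
exact = np.array([[ 2e-9, -1e9],
                  [-1e-9,  2e9]]) / 3.0
print(np.abs((Ainv - exact) / exact).max())   # elementwise relative error, near machine epsilon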

Furthermore, it seems that it sometimes works to compute the inverse after taking a singular value decomposition of the original matrix. I asked a question about this on the sister site Computational Science Stack Exchange and this approach was suggested there; see that question for details.
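A sketch of that SVD route (np.linalg.pinv follows essentially the same construction; the rcond value here is only an illustrative choice, and the matrix is a random placeholder):

import numpy as np

# Placeholder for the actual matrix.
A = np.random.rand(400, 400)

# Singular value decomposition: A = U diag(s) Vt
U, s, Vt = np.linalg.svd(A)

# Inverse built from the SVD; very small singular values could be
# truncated here if the matrix is close to singular.
Ainv_svd = np.dot(Vt.T, np.dot(np.diag(1.0 / s), U.T))
print(np.linalg.norm(np.dot(A, Ainv_svd) - np.eye(400)))

# np.linalg.pinv implements the same idea, dropping singular values
# below rcond times the largest one.
Ainv_pinv = np.linalg.pinv(A, rcond=1e-12)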

These techniques do not always work, but depending on the nature of your problem they are worth trying.



2022-09-30 14:21

It may depend on the nature of the data, but for a random matrix with uniformly distributed entries, as below, the diagonal entries of both products come out as almost exactly 1:

import numpy as np

A = np.random.rand(400, 400)

Ainv = np.linalg.inv(A)

AAi = np.dot(A, Ainv)
AiA = np.dot(Ainv, A)

for i in range(400):
    print(AAi[i][i], AiA[i][i])

Results

 (0.9999999999984, 0.999999999999889)
(1.0000000000000198, 0.9999999999999999)
(0.9999999999999643, 1.0000000000002853)
(1.0000000000000118, 1.0000000000000686)
(1.0000000000001563, 0.9999999999999708)
(1.000000000000004, 1.0000000000000784)
(remaining rows omitted)

However, I have heard that inv() may not give good accuracy, and that in such cases you should use solve():

Ainv = np.linalg.solve(A, np.eye(400))
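As a rough comparison of the two approaches, the deviation of each product from the identity can be checked like this (a sketch with the same kind of random matrix as above):

import numpy as np

A = np.random.rand(400, 400)
I = np.eye(400)

Ainv_inv = np.linalg.inv(A)
Ainv_solve = np.linalg.solve(A, I)

# Deviation of A * Ainv from the identity for each variant.
print(np.linalg.norm(np.dot(A, Ainv_inv) - I))
print(np.linalg.norm(np.dot(A, Ainv_solve) - I))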

With solve() I also got values of almost 1. For reference, the output of np.show_config() is shown below. I am not sure how useful it is, since it seems to reflect compile-time rather than run-time settings, but it shows the status of lapack, blas, openblas, atlas, and so on:

lapack_info:
    libraries = ['lapack', 'lapack' ]
    library_dirs=['/usr/lib']
    language=f77
lapack_opt_info:
    libraries=['lapack', 'lapack', 'blas', 'blas']
    library_dirs=['/usr/lib']
    language=c
    define_macros=[('NO_ATLAS_INFO',1),('HAVE_CBLAS', None)]
openblas_lapack_info:
  NOT AVAILABLE
blas_info:
    libraries=['blas', 'blas']
    library_dirs=['/usr/lib']
    define_macros=[('HAVE_CBLAS', None)]
    language=c
atlas_3_10_blas_threads_info:
  NOT AVAILABLE
atlas_threads_info:
  NOT AVAILABLE
atlas_3_10_threads_info:
  NOT AVAILABLE
atlas_blas_info:
  NOT AVAILABLE
atlas_3_10_blas_info:
  NOT AVAILABLE
atlas_blas_threads_info:
  NOT AVAILABLE
openblas_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
blas_opt_info:
    libraries=['blas', 'blas']
    library_dirs=['/usr/lib']
    language=c
    define_macros=[('NO_ATLAS_INFO',1),('HAVE_CBLAS', None)]
atlas_info:
  NOT AVAILABLE
atlas_3_10_info:
  NOT AVAILABLE
lapack_mkl_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE


2022-09-30 14:21


