- Linear Algebra Libraries by Claire Mouton, March 2009.
A detailed comparison between CPPLapack, Eigen, Flens, Gmm++, GNU Scientific Library, IT++, Lapack++, MTL, PETSc, Seldon, SparseLib++, TNT, Trilinos, uBlas, and others, listing developers, license, interface, performance, portability, dependencies, and some limitations. A very clear and useful document.
- On the Reusability and Numeric Efficiency of C++ Packages in Scientific Computing
http://www.linuxclustersinstitute.org/conferences/archive/2003/PDF/Mello_U.pdf
by Ulisses Mello and Ildar Khabibrakhmanov, IBM T.J. Watson Research Center, Yorktown, NY, USA
Performance comparison plots for various BLAS level 1, 2, and 3 experiments. It compares ATLAS, Goto BLAS, Blitz, uBlas, an STL implementation, Fortran 77 code, C code, bsm, MTL, and A++. It is essentially a pure performance comparison, regardless of interface or whether an OO wrapper is involved. It also offers observations on why one implementation beats another. (If the BLAS levels are unfamiliar, see the sketch after this item.)
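For orientation, here is a minimal C++ sketch of what the three BLAS levels mean, written against the standard CBLAS interface. The header name and link flags vary by distribution (ATLAS, Goto BLAS, etc.), so treat this as an assumption about your setup rather than a recipe:

```cpp
#include <cblas.h>   // CBLAS interface; exact header/location depends on your BLAS distribution
#include <vector>

int main() {
    const int n = 4;
    std::vector<double> x(n, 1.0), y(n, 2.0);           // vectors
    std::vector<double> A(n * n, 0.5), B(n * n, 0.25),  // row-major matrices
                        C(n * n, 0.0);

    // Level 1 (vector-vector): y = 2.0 * x + y
    cblas_daxpy(n, 2.0, x.data(), 1, y.data(), 1);

    // Level 2 (matrix-vector): y = 1.0 * A * x + 0.0 * y
    cblas_dgemv(CblasRowMajor, CblasNoTrans, n, n,
                1.0, A.data(), n, x.data(), 1, 0.0, y.data(), 1);

    // Level 3 (matrix-matrix): C = 1.0 * A * B + 0.0 * C
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, n, n, n,
                1.0, A.data(), n, B.data(), n, 0.0, C.data(), n);
    return 0;
}
```

Level 3 routines do O(n^3) work on O(n^2) data, which is why tuned GEMM kernels such as ATLAS and Goto BLAS dominate these benchmarks.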
- Help choosing a C++ matrix library
http://old.nabble.com/Help-choosing-a-C%2B%2B-matrix-library-td18857631.html
An in-depth forum discussion. It points out several requirements for a good matrix library.
- Goto Blas
http://www.utexas.edu/features/2006/goto/
The author's story. Goto is the author's Japanese surname. GotoBLAS is known as one of the fastest BLAS libraries in the world; it is hand-tuned down to assembly code for specific architectures.
- Discussion about adopting either uBlas or MTL4
http://stackoverflow.com/questions/1067821/ublas-vs-matrix-template-library-mtl4
One commenter mentioned Eigen.
- Eigen benchmark
http://eigen.tuxfamily.org/index.php?title=Benchmark
OO abstraction, while it yields readable and maintainable code, is usually assumed to carry a performance penalty. Eigen is one of the actively developed libraries that claims native C++ performance; others I know of are MTL and Armadillo. One reason many libraries do not claim performance for their C++ code is that, although they provide an OO interface, they are really "wrappers" around BLAS and LAPACK (IT++ and Lapack++, for example), aiming for Fortran-level performance rather than being C++ implementations. It is interesting that Eigen claims speed comparable to ATLAS (Automatically Tuned Linear Algebra Software) and GotoBLAS, whose critical parts are written in assembly. Another thing worth noting is that the FLOPS drop when performing vector-vector additions beyond a certain vector size, while in the IBM report above the drop shows up in matrix-vector multiplication instead. Also, the performance results are all about BLAS levels 1, 2, and 3; none of them compare solving linear systems or more advanced decompositions such as SVD, Cholesky, or eigenvalue problems. A small Eigen sketch follows below.
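To give a feel for Eigen's interface, here is a minimal sketch against its dense module; the sizes are my own illustration and have nothing to do with the official benchmark:

```cpp
#include <Eigen/Dense>   // Eigen dense matrices and vectors (header-only)
#include <iostream>

int main() {
    const int n = 512;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(n, n);
    Eigen::VectorXd x = Eigen::VectorXd::Random(n);
    Eigen::VectorXd y = Eigen::VectorXd::Random(n);

    // Level-1-like: vector-vector addition (expression templates avoid temporaries)
    Eigen::VectorXd z = x + y;

    // Level-2-like: matrix-vector product
    Eigen::VectorXd w = A * x;

    // Level-3-like: matrix-matrix product
    Eigen::MatrixXd C = A * B;

    std::cout << C(0, 0) + w(0) + z(0) << "\n";  // keep results from being optimized away
    return 0;
}
```

Readable operator syntax that compiles down to code competitive with hand-tuned BLAS is essentially what the benchmark page is arguing for.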
The last thing: license. GPL vs. LGPL used to be a very big drawback of Qt before version 4.5, since the GPL effectively means that using the library requires opening your source code to everyone who has access to the GPL'd code (virtually every human being), while the LGPL provides flexibility for commercial software and permits closed code. Eigen is available under either the GPL or the LGPL, whichever you choose. MTL4 seems less liberal.
Hi,
thanks for this page.. I feel like selecting Eigen for my work....
suresh
Hey Suresh,
Didn't know why your post didn't pop up in my notifications. My personal choice for work ended up being pure LAPACK+BLAS due to very stringent performance requirements. How's your experience with Eigen? It would be great to hear about it too!
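For reference, a minimal sketch of what calling LAPACK directly from C++ looks like, using the standard dgesv routine to solve a dense linear system. The extern "C" prototype with a trailing underscore is an assumption about how your LAPACK is built and linked:

```cpp
#include <vector>
#include <iostream>

// Fortran LAPACK routine solving A * X = B in place (column-major storage).
// The trailing-underscore name mangling is an assumption; it varies by platform.
extern "C" void dgesv_(const int* n, const int* nrhs, double* a, const int* lda,
                       int* ipiv, double* b, const int* ldb, int* info);

int main() {
    const int n = 3, nrhs = 1;
    // 3x3 system A * x = b, with A stored column by column
    std::vector<double> A = {4, 1, 0,   1, 3, 1,   0, 1, 2};
    std::vector<double> b = {1, 2, 3};   // right-hand side, overwritten with the solution
    std::vector<int> ipiv(n);
    int info = 0;

    dgesv_(&n, &nrhs, A.data(), &n, ipiv.data(), b.data(), &n, &info);

    if (info == 0)
        std::cout << "x = " << b[0] << ", " << b[1] << ", " << b[2] << "\n";
    else
        std::cout << "dgesv failed, info = " << info << "\n";
    return 0;
}
```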