Google Summer of Code Weeks 3 and 4: Matrix Optimizations

Posted on June 21, 2019 by Ankit Pandey

See the previous post for Week 2.

I spent a large part of last week travelling, so I’m combining the blog posts for the last two weeks.

Finishing up with LFortran

I’ve wrapped up the pull request for the LFortran code printer for now, though it’s still far too incomplete to be merged. The code passes most of the rudimentary tests I’ve added.

Here’s a simple example of one of the failing LFortran tests. Suppose we want to generate Fortran code (using LFortran) from the mathematical expression -x. SymPy sees this expression as a multiplication by -1, since its arithmetic is built out of only addition (Add) and multiplication (Mul):
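A minimal illustration (my own snippet, not the exact test from the PR):

```python
from sympy import Symbol, srepr

x = Symbol("x")

# SymPy has no dedicated negation node: -x is stored as Mul(-1, x).
print(srepr(-x))
# Mul(Integer(-1), Symbol('x'))
```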

Translating the same mathematical expression directly into Fortran as -x, we can see that LFortran instead parses it as a unary subtraction:
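For instance, here is what SymPy’s existing Fortran printer emits for -x; when LFortran parses that string back, the leading minus becomes a unary minus node rather than a multiplication (a small sketch showing only the SymPy side):

```python
from sympy import Symbol, fcode

x = Symbol("x")

# fcode produces the string "-x"; LFortran's parser reads the leading
# "-" as a UnaryOp (unary subtraction) rather than a BinOp multiplication.
print(fcode(-x, source_format='free'))
# -x
```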

This is a major problem for the tests, which currently check whether the LFortran-parsed output of fcode (SymPy’s current Fortran code generator) on an expression matches the AST obtained by translating the same expression directly. This won’t be true for -x, since the directly translated expression is a multiplication BinOp while the parsed expression is a UnaryOp.

One solution might be to skip parsing fcode’s output and instead just check the generated strings for equality. This would mean dealing with the quirks of the code printers (such as their tendency to produce excessive parentheses), and it would take away some of the advantages of direct translation. The more likely solution is to introduce substitution rules within the LFortran AST, so that equivalent forms are normalized before comparison.

Missing matrix nodes

I filed issue #17006 for a bug where lambdify misinterpreted identity matrices as the imaginary unit. The fix in #17022 is pretty simple: generate identity matrices with np.eye whenever we can.
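A quick check of the intended behavior (my own example, not the test from the PR):

```python
import numpy as np
from sympy import Identity, MatrixSymbol, lambdify

A = MatrixSymbol("A", 2, 2)
f = lambdify(A, A + Identity(2))

# With the fix, the generated function adds np.eye(2) instead of
# treating the identity matrix as the imaginary unit 1j.
print(f(np.array([[1.0, 2.0], [3.0, 4.0]])))
# [[2. 2.]
#  [3. 5.]]
```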

I also went through the matrix expression classes to see which ones weren’t supported by the NumPy code printer and filed issue #17013. These are addressed by another contributor in #17029.

Rewriting matrix inversion

Most of this week was spent on implementing an optimization for the NumPy generator suggested by Aaron: given the expression A^{-1} b, where A is a square matrix and b a vector, generate np.linalg.solve(A, b) instead of np.linalg.inv(A) * b. While both solve and inv use the same LU-decomposition based LAPACK ?gesv functions [1], solve applies the substitution step to a single vector while inv applies it to a (much larger) identity matrix. In addition to cutting down on the number of operations, this optimization should also avoid some of the numerical error introduced by explicitly forming the inverse.
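Here’s a small NumPy comparison of the two forms (not part of the PR, just an illustration of why the rewrite helps):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Naive translation of A**(-1) * b: explicitly form the inverse.
x_inv = np.linalg.inv(A) @ b

# Optimized form: solve the linear system directly.
x_solve = np.linalg.solve(A, b)

# Both agree up to floating-point error, but solve does less work
# and never materializes the explicit inverse.
print(np.allclose(x_inv, x_solve))
```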

My pull request for this optimization is available at #17041; it uses SymPy’s assumption system to make sure that A is full-rank (a constraint imposed by solve). My initial approach was to embed these optimizations directly in the code printing base classes. After some discussion with Björn, we decided it would be better to separate optimization from printing as much as possible, leading to the solving operation being represented as its own distinct AST node. This approach is much better than the original, since it made it fairly easy to add the same optimization to the Octave/Matlab code printer.
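A rough sketch of the idea (my own simplification, not the code from the PR): inspect a matrix expression and, when the assumptions guarantee that A is invertible, replace Inverse(A)*b with a dedicated solve node. MatrixSolve below stands in for that node; the exact name and import path are assumptions on my part.

```python
from sympy import MatrixSymbol, MatMul, Inverse, Q, ask
from sympy.codegen.matrix_nodes import MatrixSolve  # stand-in for the PR's solve node

def rewrite_inverse_as_solve(expr, assumptions=True):
    """Rewrite the simple pattern Inverse(A) * b as MatrixSolve(A, b)."""
    if (isinstance(expr, MatMul) and len(expr.args) == 2
            and isinstance(expr.args[0], Inverse)):
        A = expr.args[0].arg
        b = expr.args[1]
        # Only rewrite when the assumptions tell us A is invertible.
        if ask(Q.invertible(A), assumptions):
            return MatrixSolve(A, b)
    return expr

A = MatrixSymbol("A", 3, 3)
b = MatrixSymbol("b", 3, 1)
print(rewrite_inverse_as_solve(Inverse(A) * b, Q.invertible(A)))
```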

Next Steps

For this week, I’ll continue with the matrix optimization PR. I’ll try to find other optimizations that can be applied (such as choosing a better evaluation order for complicated matrix expressions) and look into using SymPy’s unification capabilities to simplify how the optimization rules are expressed.


  [1] You can find the C definitions for the functions eventually called by inv and solve in NumPy’s source. These are written in a special templated version of C, but the template variable definitions appear a bit higher up in the same source file.