
pull request from last week, in addition to filing a new matrix wildcard pull request.
Since #17223 was merged this week, I started with an implementation of matrix wildcards that takes advantage of the functionality included in that pull request. I thought this would be relatively straightforward, with an implementation of the `matches` method for the `MatrixWild` subclass being enough. There was one problem, though: the underlying matching implementation assumes that all powers in the expression are instances of the `Pow` class. However, this isn't true for matrix expressions: the `MatPow` class, which represents matrix powers, is a separate class that doesn't inherit from `Pow`. I'm not exactly sure what the reason for this is, since a quick change of `MatPow` to inherit from `Pow` doesn't seem to break anything. I'll probably look into this a bit more, since I think it might have something to do with the fact that matrix exponents can also include other matrices.

My solution was to temporarily allow expansion of powers by recursing through the expression tree, setting the `is_Pow` field of each matrix power to `True`, and reverting these states afterwards. It doesn't look pretty, but it does seem to work (you can see the code here).
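The save-and-restore idea can be sketched with a toy tree. Everything below (the `Node` class, `pow_flags_enabled`) is a hypothetical illustration of the pattern, not the actual SymPy code:

```python
from contextlib import contextmanager

class Node:
    """Toy expression node standing in for a SymPy tree node (hypothetical)."""
    def __init__(self, *children, is_Pow=False):
        self.children = children
        self.is_Pow = is_Pow

@contextmanager
def pow_flags_enabled(root):
    """Recursively set `is_Pow` on every node, restoring the old values on exit."""
    saved = []
    def visit(node):
        saved.append((node, node.is_Pow))
        node.is_Pow = True
        for child in node.children:
            visit(child)
    visit(root)
    try:
        yield root
    finally:
        for node, old in saved:
            node.is_Pow = old
```

Inside the `with` block every node reports `is_Pow=True`; afterwards the original flags come back, even if matching raises an exception.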

I’ll try to get started with some optimizations that utilize this wildcard class once the pull request gets merged.

For this week, I've started with #17299. This pull request is meant to extend support for the `MatrixExpr` class by allowing conversion into an `Indexed` expression that represents the contractions equivalent to the matrix expression.

`Indexed`

The `as_indexed` method that the pull request introduces is pretty self-explanatory:

```
>>> from sympy import symbols, MatrixSymbol
>>> n, m = symbols('n m', integer=True)
>>> A = MatrixSymbol('A', n, m)
>>> B = MatrixSymbol('B', m, n)
>>> e = A*B
>>> e.as_indexed()
A[i, j]*B[j, k]
```

A matrix multiplication between two matrix symbols is equivalent to a contraction along the shared index *j*, since matrix multiplication contracts exactly one index.
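As a sanity check on that claim, a plain-Python matrix product makes the contracted index explicit (toy code for illustration, not part of the PR):

```python
def matmul(A, B):
    """C[i][k] = sum over the shared index j of A[i][j] * B[j][k]."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul(A, B) == [[19, 22], [43, 50]]
```

The only summation runs over *j*, which is exactly the index that `as_indexed` reports as shared.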

The purpose of the pull request is to help in generating code for matrix expressions: there's already existing infrastructure for code generation through contractions (something that the still-WIP #17170 addresses), and this conversion is meant to aid in extending code generation to matrix expressions instead of just `Indexed` objects. The same conversion might also be possible using the `Codegen*` classes in `array_utils`, though this way seems to make more sense since it's entirely possible to use the function for non-codegen purposes.

My plans for this week are to continue working on the pull request and start with the new Matrix Wildcard pull request.

I spent most of this week rewriting the non-commutative matching code in SymPy's core, as Aaron suggested. The pull request for this rewrite is available at #17223.

SymPy already supports matching within non-commutative multiplication expressions. While I mentioned in my last blog post that this matching support was limited, I’ll go into a bit more detail about what those limitations (which sometimes produce wrong results) are:

Matching within commutative SymPy expressions allows for taking the structure of expressions into account. Two commutative SymPy expressions match only if both contain the same non-wildcard symbols:

```
>>> from sympy import Wild
>>> from sympy.abc import a, x, y, z
>>> w = Wild('w')
>>> m = x*y*w
>>> m.matches(x*y*z)
{w_: z}
>>> m.matches(a*x*z)
```

`m` specifies that the expression must contain both `x` and `y` in addition to whatever the wildcard matches. For this reason, `m` matches `x*y*z` but not `a*x*z`.

The corresponding example for non-commutative expressions does not work as expected, as it does not match when we expect it to:

```
>>> A, B, C, D = symbols('A:D', commutative=False)
>>> W = Wild('W', commutative=False)
>>> M = A*B*W
>>> M.matches(A*B*C)
>>> M.matches(A*D*C)
```

In instances where matching does seem to work, the non-commutativity of expressions is not respected:

```
>>> A, B, C, D = symbols('A:D', commutative=False)
>>> w = Wild('w')
>>> from sympy.abc import x
>>> (w*A*B*C).matches(x*C*B*A)
{w_: x}
```

The two expressions should *not* have matched, since the order of the non-commutative factors was different. I reported this same error for matrix expressions in issue #17172.

The matching code should also be able to match portions of powers, which are represented differently in the SymPy AST. As an example, a non-commutative pattern such as `A*W` (where `W` is a wildcard) should match `A**2` with `{W: A}`. I wasn't able to create a working example of this using the existing matching code.

Since order needs to be taken into account when matching non-commutative expressions, the new matching code essentially does what a regular expression matcher would do, with nodes taking the place of characters and wildcards taking the place of the `.+` regular expression.
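The regex analogy can be made concrete with a toy sequence matcher, where a wildcard behaves like `.+` over an ordered list of factors (the names here are hypothetical, not the PR's actual implementation):

```python
def match_factors(pattern, factors, binding=None):
    """Match a pattern (concrete symbols plus 'W'-prefixed wildcards) against
    an ordered factor list, like a regex where a wildcard plays `.+`.
    Returns a dict binding wildcards to non-empty factor tuples, or None."""
    binding = dict(binding or {})
    if not pattern:
        return binding if not factors else None
    head, rest = pattern[0], pattern[1:]
    if head.startswith('W'):
        # Wildcard: try every non-empty prefix, backtracking on failure.
        for split in range(1, len(factors) + 1):
            trial = dict(binding)
            trial[head] = tuple(factors[:split])
            result = match_factors(rest, factors[split:], trial)
            if result is not None:
                return result
        return None
    # Concrete symbol: must equal the next factor, preserving order.
    if factors and factors[0] == head:
        return match_factors(rest, factors[1:], binding)
    return None
```

Note that a wildcard must consume at least one factor, mirroring `.+` rather than `.*`, and that order is respected: `['A', 'B', 'W1']` matches `A*B*C` but not `A*D*C`.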

The matching PR still needs to be polished, and the related documentation needs to be updated, so I’ll be working on that. I’ll also start with extending matrix matching from this PR.

I spent most of this week on extending wildcard support for matrix expressions, along with some more explorations in printing array contractions.

As I've mentioned in the last two blog posts, SymPy's support for matching matrix expressions through the `Wild` class is currently severely limited (when it works at all). While it is possible to construct a non-commutative `Wild`, it isn't able to match expressions in a matrix multiplication:

```
>>> from sympy import symbols, Wild, MatrixSymbol
>>> W, X = symbols('W, X', cls=Wild, commutative=False)
>>> from sympy.abc import N
>>> A, B = MatrixSymbol('A', N, N), MatrixSymbol('B', N, N)
>>> type((A * B).match(W * X))
<class 'NoneType'>
```

It’s also currently not possible to combine matrices and wildcards in expressions, since wildcards don’t have a defined shape and so may only function as scalars:

```
>>> W + A
TypeError: Mix of Matrix and Scalar symbols
>>> W * A
NotImplementedError: noncommutative scalars in MatMul are not supported.
```

`MatrixWild`

I spent most of this week working on #17177, which implements a `MatrixWild` class that functions as both a wildcard and a matrix expression. In order to construct the wildcard, we need to give it a shape:

```
>>> from sympy.abc import N
>>> from sympy.matrices.expressions.matexpr import MatrixWild
>>> W, X = MatrixWild('W', N, N), MatrixWild('X', N, N)
```

Unlike in the example above using `Wild`, compound expressions are able to match against a matrix multiplication. Note that in order for matrix wildcards to match, their shape must agree with that of the target expression:

```
>>> x = MatrixSymbol('x', N, 1)
>>> e = A * x
>>> e.shape
(N, 1)
>>> type(e.match(W))
<class 'NoneType'>
```

However, if we don’t care about dimension, we can include another wildcard in the matrix wildcard’s shape:

```
>>> M = MatrixSymbol('M', 3, 3)
>>> w = Wild('w')
>>> Y = MatrixWild('Y', w, w)
>>> M.match(Y)
{w_: 3, Y_: M}
```

While this is a good first step toward the matching functionality I was looking for with `unify` for rewriting matrix expressions, there is still quite a bit of functionality (and tests) to be implemented, along with an unknown number of bugs to fix.

I've also been working on a small pull request to improve the printing of `IndexedBase` objects so that it uses intermediate variables (represented through the new code generation classes) to accumulate the values of contractions. Currently, this does nothing but break existing compatibility (Fortran versions older than Fortran 95 don't support variable declarations in arbitrary locations, and the variable currently defaults to a 32-bit floating point number), though I think this is a good first step toward supporting the printing of more complex contractions.

For this week, I plan to finish the implementation of `MatrixWild` (and hopefully get started with using it for rewriting matrix expressions), along with making some more progress on the indexed bases pull request.

This week I’ve made some progress on matching and tensors, though I haven’t filed any pull requests.

I have a working implementation of rewriting non-commutative expressions using SymPy's `unify`. It works by generating a `ReplaceOptim` object that applies the rewriting rules to any term it's called with. Here's how we specify the rewriting rules:

```
>>> from sympy import Symbol, MatrixSymbol
>>> n = Symbol('N_matcher', integer=True)
>>> X = MatrixSymbol('X_matcher', n, n)
>>> Y = MatrixSymbol('Y_matcher', n, 1)
>>> variables = [n, X, Y]
>>> matcher = X**(-1) * Y
>>> goal = MatrixSolve(X, Y)
```

Here, the combination of `matcher` and `variables` specifies that we're looking for any expression of the form *X*^{ − 1}*Y*, where both *X* and *Y* can be any compound matrix expression. The inclusion of `n` in `variables` imposes the additional restriction that the matrix expression matched by *X* must be square (i.e. *n* × *n*) while the expression matched by *Y* must be a vector (i.e. *n* × 1). `goal` specifies what the matched expression should be replaced with, where `X` and `Y` serve as stand-ins for the matched terms.

After specifying our goals, we can construct the object and apply the replacement to some expressions:

```
>>> replacer = gen_replacement_operator(matcher, goal, variables)
>>> A, B, x = MatrixSymbol('A', 3, 3), MatrixSymbol('B', 3, 3), MatrixSymbol('x', 3, 1)
>>> replacer(A ** (-1) * x)
MatrixSolve(A, vector=x)
>>> replacer(A ** (-1) * B)
A ** (-1) * B
```

The first term was replaced since the dimensions of `A` and `x` agreed with what was specified in `matcher`, while the second expression was left untouched since `B` is not a vector.

While the matcher does work, I haven’t filed a pull request because of some problems which don’t seem like they could be easily addressed:

- I had to add the suffix `_matcher` to the variable names to avoid variable capture, since SymPy symbols are considered equal if they have the same name. `unify` does not support `Dummy` symbols as variables.
- Some compound expressions are not matched. I've narrowed this down to the way the variables are being passed to `unify`, since they need to be converted to symbols. It seems like this conversion sometimes causes expressions to no longer be unifiable.
- Unification doesn't seem to work for a mixture of commutative and non-commutative expressions. I'm not sure if this is a problem with `unify` itself or the way that I'm using it, since the only test of `unify` in the SymPy codebase involving matrix expressions is on matrix multiplication.

As I mentioned in my last blog post, SymPy already supports this sort of pattern matching through `Wild`, though it currently does not support expressions involving matrices. Before trying to address these issues, I think it would be worthwhile to look into extending the functionality of `Wild` as an alternative.

I've made some progress in low-level code generation of matrix expressions. I tried seeing if instances of classes in the `array_utils` module could be converted to SymPy's AST representation before being passed off to the code generators. This doesn't seem possible at the moment, since the AST has a number of limitations (such as not supporting variables in `for` loop ranges). The `IndexedBase` printer already has some of the functionality that I'm trying to implement, so I've settled on extending that printer to support arbitrary contractions. The same functionality can probably be reused for the `array_utils` printers. The implementation will hopefully be straightforward.

My goal for this week is to have a pull request for the tensor code generation ready, along with a plan for what to do with matching.

This week I've been mostly doing background reading; this post is a summary of what I learned.

In short, unification is the process of finding substitutions for variables within two terms that make them identical. For example, if we have the expressions *x* + 2*y* and *a* + 3*b*, the substitution {*x* ↦ *a*, *y* ↦ 3, *b* ↦ 2} is a unifier, since applying it to both expressions gives *a* + 6 in both cases. While this particular substitution includes variables from both expressions, we're mostly interested in rules involving substitutions of variables from just one expression (a case of unification known as matching). Several well-known algorithms for unification already exist.
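We can replay that worked example with SymPy's own substitution machinery (assuming an ordinary SymPy install):

```python
from sympy import symbols

x, y, a, b = symbols('x y a b')
e1 = x + 2*y
e2 = a + 3*b
sigma = {x: a, y: 3, b: 2}  # the unifier from the text
# Both expressions collapse to the same term after substitution.
assert e1.subs(sigma) == e2.subs(sigma) == a + 6
```

Applying a unifier to both sides and checking for syntactic equality is exactly the property that the unification algorithms below are searching for.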

SymPy also has an implementation of a unification algorithm that is able to take the commutativity of operations into account. Suppose we wanted to unify the matrix expressions *A*^{T}*B*^{2}*C*^{ − 1} and *X**Y*^{ − 1}. This is essentially the problem of finding a substitution that makes these two expressions equal. Using the `sympy.unify.usympy` module, we can discover what this substitution is:

```
>>> from sympy.unify.usympy import *
>>> from sympy import MatrixSymbol
>>> from sympy.abc import N
>>> m = lambda x: MatrixSymbol(x, N, N)
>>> A, B, C, X, Y = map(m, ['A', 'B', 'C', 'X', 'Y'])
>>> e1 = A.T * B**2 * C.I
>>> e2 = X * Y **(-1)
>>> next(unify(e1, e2, variables=[X, Y]))
{X: A.T*B**2, Y: C}
```

We've reduced this to a matching problem in which the variables are specified only in `e2`. What's important to note here is that the matching rule we specified within `e2` (*X**Y*^{ − 1}) was a compound expression. This is something that is currently not possible for non-commutative expressions (such as matrix multiplication) using SymPy's `Wild` interface. `unify` allows us to express substitution rules that are able to match across sub-expressions in a matrix multiplication.

Through unification, we can express substitution rules for optimization as simple term-rewriting rules. In my previous blog post, I mentioned rewriting the matrix expression *A*^{ − 1}*x* as the solving operation `MatSolve(A, x)` under certain assumptions. The actual implementation is restricted to cases where both `A` and `x` are matrix symbols, and the optimization can't identify cases where either `A` or `x` is a compound expression. With unification, we can identify the same pattern in more complex subexpressions. If we're given the matrix expression *A*^{T}(*A**B*)^{ − 1}*x**y*, a unification-based transformer can produce `MatSolve(AB, x)`, provided that the shapes of the matrices match the given rule.

I also looked into generating C and Fortran code from SymPy matrix expressions. For the purposes of code generation, SymPy has a relatively new `array_utils` module. The AST nodes in this module express generalizations of operations on matrices, which require a bit of background in tensors.

Many array operations (including matrix multiplication) involve *contraction* along an axis. Contractions are a combination of multiplication and summation along certain axes of a tensor^{1}. In assigning the matrix multiplication *A**B* to the *n* × *n* matrix *C*, we can explicitly write the summations (using subscripts for indexing matrix elements) as

$$C_{ik} = \sum_{j = 1}^{n} A_{ij} B_{jk}$$

The index *j* is contracted, as it is shared between both *A* and *B*; describing this summation operation as a whole boils down to recording which indices are shared between the matrices. This is essentially what the `array_utils` classes do. Here is what happens when we use `array_utils` to convert the matrix multiplication to an equivalent contraction operation:

```
>>> from sympy.codegen.array_utils import CodegenArrayContraction
>>> from sympy.abc import N
>>> from sympy import MatrixSymbol
>>> A = MatrixSymbol('A', N, N)
>>> B = MatrixSymbol('B', N, N)
>>> CodegenArrayContraction.from_MatMul(A * B)
CodegenArrayContraction(CodegenArrayTensorProduct(A, B), (1, 2))
```

We're given a new `CodegenArrayContraction` object that stores, along with the variables `A` and `B`, tuples of integers representing contractions along certain indices. Here, `(1, 2)` means that the indices at positions 1 and 2 (positions start at 0) are shared. We can confirm this by looking at the above summation, since the second and third of all the indices appearing in the expression are both *j*.
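The same pairing of axes can be checked numerically with NumPy's `einsum`, where the shared index *j* in `'ij,jk->ik'` plays the role of the `(1, 2)` contraction:

```python
import numpy as np

# Contracting the second axis of A against the first axis of B -- the
# shared index j in 'ij,jk->ik' -- is exactly a matrix product.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)
```

This is the numerical counterpart of what `CodegenArrayContraction` records symbolically.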

For next week, I'll try to re-implement the rewriting optimization in terms of `unify`. This will both make it easier to express rules and extend them to sub-expressions. I'll also start implementing additional printers for C and Fortran. The printer will probably just print naive `for` loops to keep things simple (and it would probably be better to use something like Theano for highly optimized code).

For our purposes, we can think of tensors as just *n*-dimensional arrays. Most of my reading on tensors was Justin C. Feng's The Poor Man's Introduction to Tensors.↩

I spent a large part of last week travelling, so I’m combining the blog posts for the last two weeks.

I’m finished with the pull request for the LFortran code printer for now, though it’s definitely way too incomplete to be merged. The code passes *most* of the rudimentary tests I’ve added.

Here's a simple example of one of the failing LFortran tests. Suppose we want to generate Fortran code (using LFortran) from the mathematical expression − *x*. SymPy sees this expression as multiplication by -1, as it implements only addition and multiplication in its arithmetic operations.
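A quick way to see this representation is `srepr`, which prints the full SymPy tree (assuming an ordinary SymPy install):

```python
from sympy import srepr, symbols

x = symbols('x')
# -x is stored as a Mul node: SymPy's tree has no unary-minus node.
assert srepr(-x) == "Mul(Integer(-1), Symbol('x'))"
```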

Directly converting the same mathematical expression in Fortran as `-x`, we can see that LFortran instead sees it as unary subtraction:

```
>>> from lfortran import *
>>> src_to_ast("-x", False)
<lfortran.ast.ast.UnaryOp object at 0x7f9027f1aba8>
```

This is a major problem for the tests, which right now check whether the LFortran-parsed output of `fcode` (SymPy's current Fortran code generator) on an expression matches the same directly translated AST. This won't be true for − *x*, since the translated expression is a multiplication `BinOp` while the parsed expression is a `UnaryOp`.

One solution might be to not parse `fcode`'s output and instead just check for equivalence between strings. This would mean dealing with the quirks of the code printers (such as their tendency to produce excessive parentheses), and would take away some of the advantages of direct translation. The more probable solution is to introduce substitution rules within the LFortran AST.

I filed issue #17006, in which `lambdify` misinterpreted identity matrices as the imaginary unit. The fix in #17022 is pretty simple: just generate identity matrices with `np.eye` when we can.

I also went through the matrix expression classes to see which ones weren’t supported by the NumPy code printer and filed issue #17013. These are addressed by another contributor in #17029.

Most of this week was spent on implementing an optimization for the NumPy generator suggested by Aaron: given the expression *A*^{ − 1}*b* where *A* is a square matrix and *b* a vector, generate the expression `np.linalg.solve(A, b)` instead of `np.linalg.inv(A) * b`. While both `solve` and `inv` use the same LU-decomposition-based LAPACK `?gesv` functions^{1}, `solve` operates on a vector while `inv` operates on a (much larger) matrix. In addition to cutting down on the number of operations, this optimization also avoids the numerical error introduced in explicitly calculating the inverse.
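The intent of the optimization can be illustrated with NumPy directly; the `solve` route never materializes the inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))  # square, almost surely full-rank
b = rng.standard_normal(50)
x_solve = np.linalg.solve(A, b)    # one LU factorization + triangular solves
x_inv = np.linalg.inv(A) @ b       # full inverse, then a matrix-vector product
assert np.allclose(x_solve, x_inv)
```

Both paths agree numerically here, but `solve` does strictly less work and is the better-conditioned computation.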

My pull request for this optimization is available at #17041, which uses SymPy's assumption system to make sure that *A* is full-rank (a constraint imposed by `solve`). My initial approach was to embed these optimizations directly in the code printing base classes. After some discussion with Björn, we decided it would be better to separate optimization from printing as much as possible, leading to the representation of the solving operation as its own distinct AST node. This approach is much better than the original, since it made it fairly easy to add the optimization to the Octave/Matlab code printer.

For this week, I'll be continuing with the matrix optimization PR. I'll try to find other optimizations that can be applied (such as the evaluation order of complicated matrix expressions) and look into using SymPy's unification capabilities to simplify the expression of optimization rules.

You can find the C definitions for the functions eventually called by `inv` and `solve`. These are written in a special templated version of C, but you can find the template variable definitions a bit higher up in the source.↩

For this week, I’ve continued working on adding support for LFortran to SymPy’s code generation capabilities. This week mostly involved getting the infrastructure for testing the functionality of the new code generator working. I also extended the number of expressions the generator can handle, in addition to adding to LFortran’s ability to parse numbers upstream.

I've added support for four more expression types that the generator can handle: `Float`, `Rational`, `Pow` and `Function`. Since our base translation class was already in place from last week, implementing these was relatively straightforward and involved just defining the node visitors for each expression type (the commit that implements this can be found here). Here's a demonstration showing the abstract syntax tree generated from translating the expression $\left(\frac{4}{3}\right)^{x}$:

```
>>> from sympy.abc import x
>>> from sympy import Rational
>>> from sympy.codegen.lfort import sympy_to_lfortran
>>> from lfortran.asr.pprint import pprint_asr
>>> pprint_asr(sympy_to_lfortran(Rational(4, 3) ** x))
expr.BinOp
├─left=expr.BinOp
│ ├─left=expr.Num
│ │ ├─n='4_dp'
│ │ ╰─type=ttype.Real
│ │ ├─kind=4
│ │ ╰─dims=[]
│ ├─op=operator.Div
│ ├─right=expr.Num
│ │ ├─n='3_dp'
│ │ ╰─type=ttype.Real
│ │ ├─kind=4
│ │ ╰─dims=[]
│ ╰─type=ttype.Real
│ ├─kind=4
│ ╰─dims=[]
├─op=operator.Pow
├─right=x
╰─type=ttype.Real
├─kind=4
╰─dims=[]
```

However, the translator fails for expressions that should in theory work. Right now, we can't add an integer to a symbol, because symbols default to real numbers, resulting in a type mismatch. Fortran allows the implicit conversion of an integer to a real, so the expression shouldn't generate an error. This is functionality that will hopefully be implemented by the time I come back to this project close to the end of the summer.

I also added the initial infrastructure for testing the new code generation functions, with the starting commit available here. As Aaron mentioned in one of our meetings, the plan right now is for code generated by the LFortran backend to be equivalent to the output generated by the existing `fcode` at the AST level. Each test should be in the form of an assertion that checks the (parsed) output of `fcode` applied to a SymPy expression against the same AST generated by our newly implemented `sympy_to_lfortran`. The LFortran project already has code to check generated ASTs against expected values, so I adapted this for the testing library of our code generator (I'm also not sure how this works in terms of licensing, since both SymPy and LFortran use the BSD-3 license).

One problem that immediately became apparent was the way that LFortran represents numbers. Looking at the expression tree above, the real numbers are actually stored as strings. On the parser side, LFortran stores a real number as the string used to represent that number. This means that the ASTs of two expressions that represent the same number in different ways are not identical (for example, `1.0_dp` and `1.d0` both represent the same double-precision floating point number, but the strings stored by LFortran will be different). It's only at the “annotation” stage of evaluation that LFortran canonicalizes floating point representations. For now, the tests use the annotation function of this stage, and I filed a merge request on the LFortran project to add support for parsing numbers in the way that `fcode` generates them.
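The canonicalization problem is easy to state in code. Here is a toy normalizer (hypothetical, and far simpler than LFortran's annotation stage) that maps both spellings of a double-precision literal to the same value:

```python
import re

def normalize_real(literal):
    """Toy canonicalizer (hypothetical, not LFortran's): map Fortran real
    literals such as '1.d0' and '1.0_dp' to the same Python float."""
    s = literal.lower()
    s = re.sub(r'_(dp|sp|\d+)$', '', s)     # strip kind suffix: 1.0_dp -> 1.0
    s = re.sub(r'd([+-]?\d+)$', r'e\1', s)  # d exponent -> e exponent: 1.d0 -> 1.e0
    return float(s)
```

Under this scheme `normalize_real('1.d0')` and `normalize_real('1.0_dp')` agree, which is exactly the property the AST comparison needs.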

While the initial infrastructure is in place, I haven't added any tests yet. Since the LFortran project is still in early alpha, the functionality needed to compare the syntax tree made by the builder API against the syntax tree parsed from the output of `fcode` hasn't been implemented yet. Again, this is something that will hopefully be implemented in LFortran near the end of the summer, when I start on this portion of the project again.

After I filed the merge request to add the functionality I needed to LFortran, Ondřej (the creator of LFortran and one of my mentors) mentioned that he was planning on eventually removing the module I contributed to. The merge request I filed actually wasn't the one I had in mind at first: I thought about adding support for canonicalizing number nodes right after they're created in the builder API, but I decided against this because I felt that any changes I made would have to be minimally invasive. In retrospect, this was probably a misplaced concern, since it's important to consider the development stage of a project when deciding how much of it should be changed. Because of this, LFortran will probably end up with something I opted not to implement at the time.

There's still some work left to be done with LFortran, such as filing the issues I encountered and preparing the pull request for a merge (though it'll probably remain a work in progress for some time). After that, I'll be finished with LFortran for the time being and move on to extending support for matrix expressions in the Python code generator. The Python code generator can already convert (most) matrix expressions through NumPy, though there are still some bugs owing to an incomplete implementation. For next week, I'll have to figure out what this missing functionality is and how it can be implemented.

`fcode`, which converts a SymPy expression to an equivalent expression in Fortran, utilizing only LFortran as a backend. This post is an outline of what I've done (and learned) over the last week.
LFortran is a Fortran (with some extensions) to LLVM compiler. One advantage that this design provides is that it enables interactive execution of Fortran code. LFortran can also be used as a Jupyter kernel, which means it can be used in a Jupyter notebook environment (you can even find an online interactive demo here).

In addition to being able to parse code, LFortran also provides the functionality of traversing a parse tree and generating the equivalent Fortran code. This means that if we want to generate Fortran code from a SymPy expression, the only work that we have to do is convert the SymPy expression tree to its LFortran equivalent.

LFortran provides a number of convenience functions for building a Fortran AST. Since LFortran is still in early alpha, there are currently only about a dozen builder functions. However, these few basic functions are enough for constructing simple expressions in the Fortran AST. As an example, if we wanted to construct the expression represented by `c = a + b`, where each variable involved is an integer, we could do something like:

```
>>> import lfortran.asr.builder as builder
>>> import lfortran.asr.asr as asr
>>> integer = builder.make_type_integer()
>>> a = asr.Variable(name="a", type=integer)
>>> b = asr.Variable(name="b", type=integer)
>>> c = asr.Variable(name="c", type=integer)
>>> sum = builder.make_binop(a, asr.Add(), b)
>>> expr = asr.Assignment(c, sum)
```

LFortran also provides functionality to visualize what the expression tree looks like:

```
>>> import lfortran.asr.pprint as pprint
>>> pprint.pprint_asr(expr)
stmt.Assignment
├─target=c
╰─value=expr.BinOp
├─left=a
├─op=operator.Add
├─right=b
╰─type=ttype.Integer
├─kind=4
╰─dims=[]
```

I've started with the implementation of a basic SymPy to LFortran converter utilizing the AST builder described above, with the current pull request available on the SymPy GitHub. The converter follows the same node visitor class structure as all of the other code printers (it even inherits from the `CodePrinter` class, despite the methods producing AST nodes rather than strings). Here's a simple example that demonstrates the conversion of a simple expression to an equivalent in LFortran:

```
>>> from sympy.abc import x
>>> from sympy.codegen.lfort import sympy_to_lfortran
>>> import lfortran
>>> e = x + 1
>>> e_converted = sympy_to_lfortran(e)
>>> lfortran.ast_to_src(lfortran.asr_to_ast(e_converted)).replace('\n', '')
'(x) + (1)'
```

There are two things to notice here. The first is that I had to replace all the newlines in the generated expression, since a bug in LFortran causes too many newlines to be printed. The second is that there are a number of redundant parentheses in the printed expression. While this isn’t an outright bug, it’s another aspect of LFortran that is currently being improved upon.

I've also added another function, `sympy_to_lfortran_wrapped`, which wraps an expression in a function definition, (poorly) emulating the wrapping part of `autowrap`:

```
>>> from sympy.codegen.lfort import sympy_to_lfortran_wrapped
>>> e_wrapped = sympy_to_lfortran_wrapped(e)
>>> print(lfortran.ast_to_src(lfortran.asr_to_ast(e_wrapped)))
integer function f(x) result(ret)
integer, intent(in) :: x
ret = 1 + x
end function
```

Since LFortran can directly compile the AST to an LLVM intermediate representation, a future implementation of `autowrap` might compile the output of this function directly (instead of first generating the complete source code and then feeding it to `gfortran`, as is done right now).

For the next couple of days, I will try to extend the types of SymPy expressions that may be converted. One thing to note is that there isn't a perfect correspondence between SymPy and LFortran AST nodes: LFortran has nodes for operations like unary subtraction and division, which SymPy instead represents in terms of multiplication and powers. On top of this, I'll also add some tests for the functionality that I have implemented so far. After that, I'll start work on SymPy's matrix expression code generation (the second part of my GSoC project) and pick LFortran up again close to the end of the summer.
