A Python Front-End To GCC Is Brewing This Summer
Phoronix: A Python Front-End To GCC Is Brewing This Summer
It turns out there's another fairly interesting Google Summer of Code project being worked on this summer, beyond the Mesa/X/Wayland projects that have piqued our interest this year. This project was somehow overlooked when we looked through the GSoC information earlier, but it's a continuing effort (by the same student as last year) to write a Python front-end to GCC...
is this good news for python coders?
Originally Posted by NomadDemon
If I got it right, when this project gets finished, it will mean you will be able to run something like

gcc main.py -o myprog

and get a compiled executable out of your Python sources, thus leading to a 10x-100x speed gain. So, great news!
great, I code in python
There's a lot of talk in the blog about handling dynamic typing, but that seems to me a relatively small matter compared to the ability of programs to modify themselves at runtime: evaluating strings as Python expressions, adding and removing methods and fields on objects, and so on. Perhaps not impossible with a static compiler, but I think a good JIT runtime is much better suited to a dynamic language like Python than this...
Such a speed gain is very unlikely, except in specific cases (e.g. integer math, and only when the compiler can prove at compile time that a group of Python lines operates purely on integers).
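The "specific case" described above would look something like the loop below, where every value is provably an integer, so a compiler could in principle lower the arithmetic to native machine integers. The function name is illustrative, not from the project.

```python
# A tight loop whose operands stay ints throughout: the kind of code
# a static compiler has a realistic chance of speeding up 10x-100x.

def sum_squares(n):
    total = 0
    for i in range(n):   # i and total are always ints here
        total += i * i
    return total

assert sum_squares(10) == 285
```

Once any operand could be a float, string, or user-defined object, that proof fails and the code must fall back to generic runtime dispatch.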
Originally Posted by kbios
Like Delgarde already wrote: a JIT compiler is more likely to be able to optimize such code effectively (and that's what the PyPy project does).
In theory, JIT compilation has the advantage of using data such as machine specifics and runtime information (it can change the code during execution) to increase the effectiveness of the compiled result. In practice, though, performance strongly favours AOT (ahead-of-time) compilation: the JIT window (the amount of time the just-in-time compiler is allowed to run) is kept very small for practical reasons, and optimizing during execution is very expensive (and therefore seldom done). This means a JIT can't apply optimizations as aggressively as an AOT compiler, whose compile time is practically always far beyond what would be acceptable in a JIT situation.
Originally Posted by JanC
I agree with Delgarde, though, that due to Python's dynamic properties JIT compilation may be a much better fit overall. It will be interesting to see benchmarks once this project matures, perhaps a future comparison of PyPy, Unladen Swallow and gccpy.
Hi everyone, this is me, redbrain, the author of this Python front-end. I'd just like to say I was very humbled that Phoronix put this on their site, and thank you.
For those of you out there, please feel free to ask any questions you may have.
One thing that comes up time and time again from people I tell about this project is: "there is no way this can work" or "this will _not_ give a speed-up, you should work on an existing JIT implementation".
Very academically-minded people generally say that if you look at Python code you can't assume anything about it when compiling it, mostly because of dynamic typing and how objects and calls work. But by that logic, someone looking at a piece of Python code would hardly be able to understand it, and that clearly isn't true.
It's true that, because of dynamic typing, expressions and binary operators have to be handled at runtime, but this is fairly unavoidable and, in the end, not that much of a slowdown. At the same time, I'm very lucky that all this code already gets optimised by GCC's huge set of optimisers, although I have to write my own constant-folding pass for various reasons.
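To illustrate why binary operators have to be handled at runtime: the same `a + b` expression dispatches through the operands' runtime types via `__add__`, so one compiled function body has to cover every possibility. The `Meters` class below is a made-up example.

```python
# One expression, three meanings, all decided at runtime.

class Meters:
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return Meters(self.value + other.value)

def add(a, b):
    return a + b  # dispatches on the runtime types of a and b

assert add(1, 2) == 3                        # integer addition
assert add("foo", "bar") == "foobar"         # string concatenation
assert add(Meters(2), Meters(3)).value == 5  # user-defined __add__
```

A static compiler can emit the dispatch machinery ahead of time, but it cannot in general pick one of these three paths at compile time.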
In my opinion, the real speed-ups come where people define classes or other language constructs like loops and conditionals: instead of being evaluated at runtime, these can easily be statically compiled. Classes pose some challenges, but I have this working. Where things get really tricky is evaluating imports and statements like yield. For imports, I plan on using GCC's LTO so that I can see everything available for linking calls together instead of having to evaluate each one at runtime. Runtime call evaluation would have been a horrible speed degradation, but since LTO exists I can work around it, and that in itself should be a large speed increase.
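For readers wondering why `yield` in particular is tricky for an ahead-of-time compiler: a generator suspends mid-function and keeps its locals alive between calls, so a single function body has to be compiled into a resumable state machine. A minimal example (the function name is illustrative):

```python
# A generator: execution pauses at each yield and resumes later
# with all local state (here, n) preserved.

def countdown(n):
    while n > 0:
        yield n   # suspend here; n survives until the next call
        n -= 1

g = countdown(3)
assert next(g) == 3   # runs until the first yield
assert next(g) == 2   # resumes after the yield, n decremented
assert list(g) == [1] # drains the remaining values
```

An ordinary compiled function returns once and its stack frame is gone; compiling this requires the frame's state to be heap-allocated and re-entered, which is why it needs special handling.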
If, in the end, there were no speed-ups in any area of gccpy, I would be extremely surprised. But even if that were the case, I still feel it's an extremely valuable piece of software, given that you can compile your code like:
gccpy main.py -o myprog.exe
It may be interesting to see whether people adopt it for plugin development or embedded scripting in large software projects. It should also have much lower memory usage than other Python implementations, which may be interesting on embedded devices. I seriously don't think it will ever become mainstream, though.
But I do passionately believe it will bring about a lot of understanding of program transformation for me, and maybe for others who are interested, and deliver some fairly large speed increases in certain areas. Even if you compare PyPy, CPython or Jython, they are all, in the end, interpreters. CPython is a fairly strong, well-made interpreter, which is rare in the world of interpreters, but even with a JIT there is only so much speed-up you can get, and the biggest problem with a JIT is complexity: the level of complexity a JIT introduces into an implementation is kind of insane, and it becomes hard to maintain when you want to develop new ideas.
Coupled with that, interpreters have HUGE amounts of front-end and middle-end code to work through before they actually evaluate the user's code. You would be shocked at how long it takes projects like Haskell, or even Perl or Python, to do simple things.
With gccpy, once the code is compiled there is no need for any of this; it just works. Although I know it probably won't be the 100% solution to speed in dynamic languages, I really do believe it has its place and will demonstrate strong ideas in this area of language optimisation. I will be putting up posts on my blog with more detail on how this stuff works, but I was sick for a few days and kind of burnt out after the GCC developers' meet-up in London, so please believe in me rather than doubt me.
Last edited by redbrain; 07-02-2011 at 07:19 AM.
Thanks for the insight, redbrain. Very interesting to see Python performance being tackled from this angle. Your blog has been bookmarked.