
wow. i ran their prime number python accelerator example with an upper bound of 10,000,000:

(taichi) [X@X taichi]$ python primes.py

[Taichi] version 1.4.1, llvm 15.0.4, commit e67c674e, linux, python 3.9.14

[Taichi] Starting on arch=x64

Number of primes: 664579

time elapsed: 93.54279175889678/s

Number of primes: 664579

time elapsed: 0.5988388371188194/s
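
for reference, the example is roughly the prime-counting demo from the taichi docs; i'm paraphrasing it from memory with my own timing wrapper, so treat it as a sketch rather than the exact primes.py:

    import taichi as ti
    import time

    ti.init(arch=ti.cpu)

    @ti.func
    def is_prime(n: int):
        result = True
        # trial division up to sqrt(n)
        for k in range(2, int(n ** 0.5) + 1):
            if n % k == 0:
                result = False
                break
        return result

    @ti.kernel
    def count_primes(n: int) -> int:
        count = 0
        # the outermost loop of a taichi kernel is parallelized automatically
        for k in range(2, n):
            if is_prime(k):
                count += 1
        return count

    start = time.perf_counter()
    print('Number of primes:', count_primes(10_000_000))
    print('time elapsed:', time.perf_counter() - start, '/s')

i take the two timings above to be a plain-python run of the same functions followed by the taichi-accelerated one.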



It seems like Taichi is a fast language, but it can also never be overstated how slow Python is on contemporary architectures.


Right, I'd like to see a comparison to Nim or Julia, or to another compiled high-level language that isn't particularly performance-oriented, like Haskell, Clojure, or Common Lisp, or even Ruby with its new JIT(s). Or, for that matter, Python with Numba or one of the other JIT implementations (PyPy, Pyston, Cinder).
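
For what it's worth, a Numba version of the same benchmark would look roughly like this (an untested sketch on my part; @njit(parallel=True) with prange is the closest analogue I know of to Taichi's auto-parallelized outer loop):

    from numba import njit, prange

    @njit
    def is_prime(n):
        for k in range(2, int(n ** 0.5) + 1):
            if n % k == 0:
                return False
        return True

    @njit(parallel=True)
    def count_primes(n):
        count = 0
        # prange parallelizes the loop; numba treats the += as a reduction
        for k in prange(2, n):
            if is_prime(k):
                count += 1
        return count

    print(count_primes(10_000_000))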


It doesn't support 3.11 yet. 3.10 is slow compared to 3.11.


yeah, but not to the degree shown in the top-level comment...


Is that improvement because the program is automatically parallelized, or because the code is compiled/JITed? A ~150x improvement seems too large for either alone, so I suspect both contribute.


they say the example i used is JIT-compiled into machine code. i haven't looked into the codebase yet, but i presume that means it just un-pythons it into something like C? not sure.

fwiw, i tried the gpu target (cuda) and it was faster than vanilla python, but about 4x slower than the accelerated cpu target.
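
one way to tease the two apart (compilation vs. the automatic parallelization of the outer loop) would be to pin taichi to a single cpu thread and time it again. i believe ti.init takes a cpu_max_num_threads option for that, but double-check the docs:

    import taichi as ti

    # single-threaded run: whatever speedup remains over plain python is the JIT alone
    ti.init(arch=ti.cpu, cpu_max_num_threads=1)

    @ti.kernel
    def count_primes(n: int) -> int:
        count = 0
        for k in range(2, n):  # still written as a parallel loop, just capped at 1 thread
            prime = True
            for d in range(2, int(k ** 0.5) + 1):
                if k % d == 0:
                    prime = False
                    break
            if prime:
                count += 1
        return count

    print(count_primes(10_000_000))

    # rerunning with plain ti.init(arch=ti.cpu) (default thread count) then shows
    # how much the parallelization adds on top of the compilation itself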


How does it compare with numba?

(I don't know enough about the python ecosystem, but I have to tweak code from one of my coworkers and he uses numba.)


And if you run it on a number bigger than 2^64, does it error because taichi automatically assumes python's BigInts are Int64s?
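
Something like this is what I have in mind (a sketch on my part; I believe kernel arguments take type annotations like ti.i64 and that ti.init has a default_ip option for the default integer width, but I haven't checked what actually happens on overflow):

    import taichi as ti

    ti.init(arch=ti.cpu, default_ip=ti.i64)  # default integer width; i32 otherwise

    @ti.kernel
    def echo(n: ti.i64) -> ti.i64:
        return n

    print(echo(2 ** 63 - 1))  # fits in a signed 64-bit integer
    print(echo(2 ** 64))      # python handles this fine; a 64-bit kernel argument cannot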



