I often find Miniconda’s Python to be faster than Clear’s Python. Is this expected? A faster Python would certainly be welcome. Is Clear’s Python slow because I’m running on an AMD box?
I don’t know of any reason for the performance difference, since I don’t know what Miniconda is doing differently. On a different topic, have you looked at Pyston at all? They make some strong claims for optimized Python: Pyston | Python Performance
I added results for PyPy 7.3.13 above. Is there a reason Clear has no bundle for Pyston or PyPy? Is it because those implementations do not yet support Python 3.12.1?
$ pyston --version
Python 3.8.12 (remotes/origin/release_2.3.5:4b858b5062, Sep 25 2022, 18:56:33)
[Pyston 2.3.5, GCC 9.4.0]
$ pypy3 --version   # note: Fedora 39 binary, running on Clear
Python 3.10.13 (6ff4c5778e99, Oct 05 2023, 11:29:33)
[PyPy 7.3.13 with GCC 13.2.1 20230918 (Red Hat 13.2.1-3)]
There is another test I run: the time to complete the os-scheduler responsiveness test. Clear’s Python previously took more than 20 seconds, so this is some improvement over before.
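For context, here is a minimal sketch of the kind of CPU-bound micro-benchmark that can be run unchanged under each interpreter to compare them (the workload and repeat count are illustrative, not the actual responsiveness test):

```python
import timeit

def workload():
    # Simple CPU-bound loop; JIT-based interpreters (PyPy, Pyston)
    # typically finish this much faster than CPython.
    return sum(i * i for i in range(10**6))

# Save as bench.py and run under each interpreter, e.g.:
#   python3 bench.py ; pypy3 bench.py ; pyston bench.py
elapsed = timeit.timeit(workload, number=5)
print(f"5 runs: {elapsed:.3f}s")
```

The same file works on any of the interpreters above, so the timings are directly comparable.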
Thank you for introducing me to Taichi recently. Taichi is amazing. Currently I am running the taichi-nerfs demonstration. I have an RTX 3070, so I need to lower the batch_size to 2048.
Training the Lego scene from scratch takes 3m54s with batch_size 2048 and consumes 4.3 GB of GPU memory. My RTX 3070 is power-limited to a 175 W maximum (via a service file at startup), which NVIDIA GPUs support. I never worry about my GPU overheating, and the fans spin at 56% maximum.
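For anyone curious, the startup power limit can be sketched as a systemd unit like the one below. The unit name and paths are my own choices, and 175 is the wattage cap applied with `nvidia-smi -pl`:

```ini
# /etc/systemd/system/nvidia-power-limit.service (hypothetical name)
[Unit]
Description=Cap NVIDIA GPU power draw at boot
After=multi-user.target

[Service]
Type=oneshot
# -pm 1 enables persistence mode so the setting sticks;
# -pl sets the board power limit in watts
ExecStart=/usr/bin/nvidia-smi -pm 1
ExecStart=/usr/bin/nvidia-smi -pl 175

[Install]
WantedBy=multi-user.target
```

Enable it once with `sudo systemctl enable --now nvidia-power-limit.service`; the cap then applies on every boot.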