Clear Linux performance

Hi, I’ve recently switched to Clear Linux from Manjaro for the rumored performance gains, and I’ve noticed some anomalies. While checking my power consumption on my UPS, I noticed that it was “revving up” constantly at idle. I checked conky, and sure enough, my CPU frequency is constantly jumping between minimum, mid, and max frequency while sitting idle, whereas on Manjaro an idle PC would stay at the minimum frequency.
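For anyone wanting to reproduce the idle-frequency observation without conky, the standard sysfs/procfs interfaces are enough to watch it. A minimal sketch, assuming a typical Linux cpufreq setup (the sysfs paths are standard but may be absent inside VMs or containers):

```shell
# Show the active cpufreq driver and governor (cpu0 as a representative core)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver 2>/dev/null || true
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null || true

# Sample the current clocks of the first few cores a couple of times at idle
for i in 1 2 3; do
    grep "cpu MHz" /proc/cpuinfo | head -4
    sleep 1
done
```

If the clocks bounce between minimum and maximum with nothing running, the governor reported above is the first thing to compare between the two distros.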

The second anomaly is in my primary computational task, photogrammetry with Meshroom. In most of the steps, Clear Linux outperformed Manjaro on my benchmark job, except for one, the heaviest CPU user, where it performed significantly worse than the last benchmark I ran before switching distros. While the job ran, htop showed only half of the available CPU threads on this dual-CPU system in use, with a few of the remaining threads at a time randomly ramping up to 100% kernel use. I believe I fixed this by updating a Python path, because htop now shows all threads fully used; however, this did not improve performance, and the task took just as long as when only half of the threads were working.
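On a dual-socket box, “half the threads idle” is often an affinity or NUMA-binding symptom rather than a hardware one. A few generic checks worth running (these are assumptions about what to look at, not a known Meshroom fix; in a real check the PID would be the Meshroom worker’s, and `$$` — this shell — stands in here):

```shell
# Checks for a dual-socket system where only half the threads are working
nproc                                        # logical CPUs the kernel exposes
taskset -cp $$                               # CPU affinity mask of a process (here: this shell)
command -v numactl >/dev/null && numactl --hardware || true   # NUMA topology, if numactl is installed
```

If `nproc` reports only half the expected threads, or the affinity mask of the worker covers only one node, the problem is in scheduling/binding rather than the CPUs themselves.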

I’m curious if anyone has any insights or similar experiences.

From my understanding, CL aims to keep the CPU at higher clock levels to be more responsive, since on a modern CPU a high clock rate without a workload does not significantly increase power usage.
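That matches the governor choice: Clear Linux is known to default to the `performance` cpufreq governor, where most distros default to `powersave` or `ondemand`. If the sysfs cpufreq interface is present, this is easy to confirm:

```shell
# Confirm which governor is active (cpu0 as a representative core);
# these sysfs paths are standard but may be missing inside VMs/containers
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null || true
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors 2>/dev/null || true
```

If it reads `performance`, writing `powersave` into each core’s `scaling_governor` (as root) switches it until reboot; note the available governor names depend on the driver (intel_pstate in its default mode exposes only `performance` and `powersave`).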

No insight from developers as to why the most CPU-heavy task performed almost 60% slower on Clear Linux? Is there some way I can collect and provide more data? From what I’ve observed of system resources during processing, it pegs all threads at 100%, then, as it nears completion, fewer and fewer threads are utilized. It is this final stage, when the task is around 90% finished and only a handful of threads are still working, where Clear Linux seems to lag way behind Manjaro in performance.
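To capture that tapering tail phase as data rather than an htop impression, a small sampler over `/proc/stat` can log how many CPUs are busy over the course of the run. This is a minimal sketch, nothing Meshroom-specific; the 0.5 “busy” threshold is an arbitrary choice for counting active threads:

```python
import time

def per_cpu_busy(interval=1.0):
    """Return {cpu_name: busy_fraction} sampled over `interval` seconds,
    computed from two snapshots of /proc/stat (Linux only)."""
    def snapshot():
        stats = {}
        with open("/proc/stat") as f:
            for line in f:
                fields = line.split()
                # per-CPU lines look like: "cpu0 user nice system idle iowait irq softirq ..."
                if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                    idle = int(fields[4]) + int(fields[5])      # idle + iowait
                    total = sum(int(x) for x in fields[1:8])
                    stats[fields[0]] = (idle, total)
        return stats

    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    return {
        cpu: 1.0 - (after[cpu][0] - before[cpu][0]) /
                   max(1, after[cpu][1] - before[cpu][1])
        for cpu in before
    }

if __name__ == "__main__":
    for _ in range(3):   # short demo; loop longer and redirect to a file for a real run
        busy = per_cpu_busy(0.5)
        active = sum(1 for v in busy.values() if v > 0.5)
        print(f"{time.strftime('%H:%M:%S')} active_cpus={active} of {len(busy)}")
```

Running this in the background during the job on both distros would show whether the 90%-onward phase really drops to the same handful of threads on each, or whether Clear is serializing earlier.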

I’ve seen it take as long to go from 90% to 100% progress as it did to go from 0% to 90%. Manjaro also slowed down at around 90%, when only a handful of threads were still working, but the time required for that last 10% was significantly lower than what I’m seeing in Clear.

Hard to say without more data. I would check your hardware if half the threads weren’t working. Also consider whether you have auto updates enabled; CL ramps up a bit during delta updates.

I tend to get about 20% (!) better performance than Windows at about half the power consumption running a desktop – on a Canyon NUC, on CPU-intensive tasks and especially TCP (let’s see, who can ping 127.0.0.1 the most times in 10 minutes? lmao). (About 8% better on larger systems – a Supermicro workstation with a Bronze Xeon, and a Dell XPS.) The NUC is my daily driver, and I swear this little machine running Clear is the fastest desktop I’ve ever worked on by a mile! I only use Mac on a laptop, so it’s hard to compare anything there; it works about the same as my CL laptop, but most of my daily laptop tasks just bottleneck on wifi or are presentation related. About half the Windows difference compared to RHEL and Debian desktops. The only thing comparable to CL in the TCP arena that I’ve seen are FreeBSD appliances.

I get slightly better IO, RAM, CPU and reboot times out of Red Hat (or really CentOS) than CL under heavy server workloads with complicated shutdown and boot procedures (involving NAS and SAN), running KVM in the lab on Dell EMC. CL containers are a little better, aside from the occasional runaway process. Strangely enough, though, CL stomps all others as a nested VM lab utilizing CPU pass-through under a CentOS host. RHEL VMs and container servers seem to do better than CL running blocking web software like Python, PHP, Perl, etc. CL seems to just not respond at all sometimes running CGI, but is way better with more concurrent systems like Go, Java, C, etc., and even async processes like Node. I could go on . . .

I know it’s not the hardware; the only change I made was wiping my Manjaro drive to install Clear. I don’t doubt that Clear is fast, I’m just curious about this anomaly I’m seeing. Going from Windows to Manjaro, I saw a 22% overall speedup on this specific photogrammetry job, and the anomalous node I’ve been referencing was 30% faster. I further improved performance with some hardware upgrades, eventually ending up at half the original processing time of where I started. At that point, I learned about Clear and jumped through the hoops of getting NVIDIA drivers and CUDA installed. Almost every node got faster, with DepthMap (heavy CUDA) about 20% faster and texturing (heavy CPU/memory) almost twice as fast. However, even with these two huge performance boosts, the overall job time was almost exactly the same as my last Manjaro benchmark because of the 60% slowdown in the feature matching section.

It’s a fairly unfamiliar area for me. TBH, I have had no end of trouble on other systems I’ve tried with CL and an NVIDIA Quadro (with or without the proprietary drivers): crashes daily, and journalctl always reveals garish and horrific CPU errors. I only ever used NVIDIA and/or Windows for Autodesk anyway. Try installing that on Clear Linux!