Run Clear Linux on Raspberry Pi (8GB)?

The Raspberry Pi Foundation confirmed the 8 GB RAM Raspberry Pi 4. My brothers and I plan to build a cluster using these boards; can CL run on these ‘cuties’, architecture and all?

Intended use case: server; running bots/scripts and machine-learning tasks.


(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)


I just recently got Clear Linux working on a Beelink T4, which has an x86_64 processor. The install was easy enough, but I had issues with the bootloader and needed to manually add/edit the boot entry to make it bootable post-install.


I did not mean to delete that; mouse error. Is there a way to restore it?

I flagged it and asked for it to be restored.

Restore is on its way :wink:


Probably not. CL is custom-built by Intel for Intel CPUs. While the performance benefit is one good reason… CL’s overall packaging and design are great, and having a single OS in the DC would make me happy.

This is the post that was deleted by mistake:


Possible alternative


I’m very sorry to be someone advocating for other distros, but Clear being what it is, aren’t there better distro options to fully optimize the Pi box, presumably with less headache? Just as you wouldn’t use something built for AMD on Intel. Especially when you’re doing machine learning, you want to spend more time working on that than on just getting the distro to breathe.
Just my $0.02


This is your best option. Small enough to be handled quite nicely and powerful enough for any type of application.


Awesome! Thank you for the alternatives… Given my location (:round_pushpin: South Africa), however, it is harder to get some of the recommendations delivered, unlike Pi boards, which at least have a local distributor.

I’ve played around with the new Pis a bit and they are pretty cool. The only trouble for me is that most stuff in Docker land and machine-code land still requires the x86 instruction set, and it can be tough to find things that support ARM. There are workarounds, but rebuilding containers and cross-compiling code can be a headache. I do use something similar as an AI edge device, though: the Nvidia Jetson Xavier NX. I’m sure things will get better for ARM over time, but it’s still a bit of a pain for many things atm.
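One small mitigation for the ARM-vs-x86 image problem is to detect the host architecture before pulling or tagging container images. Here is a minimal sketch; the `image_tag_for_arch` helper and the tag suffixes are my own illustrative assumptions, not any Docker API:

```python
import platform

def image_tag_for_arch(machine: str) -> str:
    """Map a platform.machine() string to a hypothetical image tag suffix.

    On a Pi 4 with a 64-bit OS, platform.machine() typically returns
    'aarch64'; on a typical PC it returns 'x86_64' (or 'AMD64' on Windows).
    """
    if machine in ("aarch64", "arm64"):
        return "arm64"
    if machine in ("x86_64", "AMD64"):
        return "amd64"
    raise ValueError(f"unsupported architecture: {machine}")

# Pick the right tag for whatever machine this script runs on.
print(image_tag_for_arch(platform.machine()))
```

This only helps when an ARM build of the image actually exists, of course; otherwise you are back to rebuilding or cross-compiling.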

The Cortex-A72 (6.52 FLOPs/clock/core, 33.90 GFLOPS total) is the worst choice possible.
The Pi 4B is even worse according to real tests (13.5 GFLOPS).

Considering the 88-euro price tag I see on Amazon today (for the 8 GB model), I’d say that a full-fledged workstation/server can be far cheaper than a cluster of Pi 4Bs.

I bought a used R9 280X on Amazon (DE) for 169 euros (yep, it was really cheap).
It has a theoretical limit of 4096 GFLOPS (3481.6 GFLOPS in reality) and it consumes a rated max of 255 watts (it tops 300 watts in practice).
Even supposing the Cortex-A72 reaches 33.9 GFLOPS (which is NOT the case, tbh), you’d need 103 (102.7…) boards to match that amount of floating-point throughput; you’d spend 9064 euros on Pis and burn 751.9 watts of power.
That is enough to make any extra import taxes affordable, even in South Africa, because no matter how far away you are, you will definitely save money if you do the math. :sweat_smile:
If you really want to have fun with machine learning, invest in a GPU and forget the Pi.
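The back-of-envelope math above can be checked in a few lines. The GPU and Pi figures are the ones quoted in the post; the per-board wattage is my assumption, derived from the post’s 751.9 W total over 103 boards:

```python
import math

# Figures quoted in the post:
GPU_GFLOPS = 3481.6    # R9 280X, measured
GPU_PRICE_EUR = 169    # used, Amazon (DE)
GPU_WATTS = 300        # observed peak draw

PI_GFLOPS = 33.9       # Cortex-A72 theoretical peak (generous)
PI_PRICE_EUR = 88      # 8 GB Pi 4 price on Amazon
PI_WATTS = 7.3         # assumed per-board draw (751.9 W / 103 boards)

# Boards needed to match the GPU's floating-point throughput:
boards = math.ceil(GPU_GFLOPS / PI_GFLOPS)
print(boards)                            # 103
print(boards * PI_PRICE_EUR)             # 9064 euros
print(round(boards * PI_WATTS, 1))       # 751.9 watts

# Efficiency comparison the post recommends (GFLOPS per watt):
print(round(GPU_GFLOPS / GPU_WATTS, 1))  # 11.6
print(round(PI_GFLOPS / PI_WATTS, 1))    # 4.6
```

Even granting the Pi its theoretical peak, the GPU comes out ahead on price, power, and GFLOPS/watt.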

But if it’s just aesthetics, or if you like small things, an Intel TV stick is way more powerful and can be bought for the same price or even less.
Take this board, for example. It has a Celeron J3160 whose GPU has been tested at 147 GFLOPS alone. That doesn’t take into account the GFLOPS provided by the CPU, which would boost that number by another fair amount.
It’s x86-64 and it is cheaper than the Raspberry.

There are also PICO-ITX boards that are affordable enough to compete with the Pi 4 and that are definitely more powerful.
Raspberry Pis are a well-marketed joke, but they provide OVERPRICED potato performance. That’s all there is to them…
Be wise. Choose the best bang for the buck and always consider GFLOPS/watt or MIPS/watt, depending on the application. You might also want to pay attention to the ISA.

You can also develop and train your networks on a powerful GPU/CPU and run the resulting model on a smaller machine, like the ones mentioned above.