Should we install your server iso with GUI so we can continue to get updates to Clear Linux and still have a desktop? Or is the desktop distro still going to get important version updates so we can still use it with confidence?
The server and desktop ISOs all use the same content stream: you can install the server ISO and then add the
desktop bundle to get exactly what is on the desktop ISO. Similarly, you can install the desktop ISO and remove the
desktop bundle to get essentially what is on the server ISO.
In other words: We’re not removing the desktop ISO, and reinstalling is not needed. You will get updates no matter what - regardless of what image you used to install.
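For concreteness, that round trip looks something like this with swupd. The exact bundle names are an assumption on my part (check `swupd bundle-list --all` on your own install), and these commands obviously only run on a Clear Linux system:

```shell
# Starting from a server-ISO install, pull in the desktop stack
# (bundle names assumed; verify with `swupd bundle-list --all`):
sudo swupd bundle-add desktop desktop-autostart

# Or, starting from a desktop-ISO install, strip it back down:
sudo swupd bundle-remove desktop-autostart
```

Either way you end up on the same content stream, so `swupd update` behaves identically afterwards.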
I think the decision is consistent with this. Why are you disappointed?
“In other words: We’re not removing the desktop ISO, and reinstalling is not needed. You will get updates no matter what - regardless of what image you used to install.”
This is good news for people who love to use Clear Linux desktop (like me) as a relief from Win10.
I guess desktop programs such as Firefox, Geary, Wine, etc. will have to be updated via Flatpaks at some future point, when you decide to slow support for desktop apps. The important thing for me is that system updates will not be dropped.
I think it’s premature to come to that conclusion. Obviously, you’re going to do what you’re going to do.
The CL team has said they’ll continue to build the desktop ISO with optimizations (which is why we’re all here, right? For the optimizations?)
It doesn’t sound like anyone will be prevented from running the distro on their desktop. The change in direction seems to be more along the lines of, “We’re not going to spend time making a supported bundle for Aisleriot and Minesweeper, nor is the core team able to help you install Nvidia drivers or ZFS (but the answer is in the forums). Instead we’re going to spend our time getting better performance out of your CPUs under Docker or on Azure.”
I am not a CL team member, so I may be misinterpreting (which, to be fair, would be easy to do given the sorta fuzzy announcement), but if I’ve got it right, it’s exactly what I want – faster stuff at the core.
Will do – thanks.
I think you guys shot yourselves in the foot with the early direction of “If you want something bundled, just ask us and we’ll probably add it.” Because: “I want ZFS, will you bundle it? No? How about now? No? Why not? Now?”
I seriously don’t expect the core team to add bundles for everything. I think it’d be a good evolution if there were a single, curated, community source for unofficial bundles – sorta like Arch has the AUR. The challenge is that bundling stuff for Clear is not as easy as it is for traditional packaging systems. Maybe some people want to learn and lead the way, though?
But could you imagine answering the next “Will you bundle ZFS?” with, “No, but you can get it here at the Clear 3rd party bundle site.” That’d be great.
Well, you’re human, the Internet is imperfect, and maybe I overreacted. I think it’d be wise to communicate exactly what you’ve said above outside the forums – somewhere it’s obvious that expecting to be able to bug a core team member daily about ZFS is asking too much. To be fair, you did put it in the FAQ.
Again: thanks for all you do.
Between the ‘About’ documentation and the FAQ we aim to give a broad indication of our direction and intent, but having said that, we can’t (and maybe don’t want to) cover all eventualities.
I’d definitely encourage more people to look into 3rd-party bundling - we want more people to be able to help themselves. If there are other issues that need to be addressed in the FAQ, please file a bug on GitHub.
I agree on the necessity of a “one and only” 3rd-party repository. It’s 2020 and we still haven’t learned anything from the iOS App Store or Google Play? People want a “one place to find them all” thing… How can I know whether a bundle exists if I don’t know whether a repository exists? How can I package a bundle if I don’t have a server to put it on? How can I be sure a repository isn’t actually shipping malware? Centralised stores can guarantee security and avoid duplication, deprecation, and garbage bundles (or at least minimize them). It’s harder to do on the CL devs’ side, sure, but it’s a hundred times better than user-managed repositories, and TBH Intel isn’t about to go bankrupt over it, so it shouldn’t be too much of an effort. Sure, it still is an effort, so financially it all depends on the “seriousness” of the CL project (by seriousness I really mean the amount of money Intel is willing to put into it, not the dedication of the devs).
Why should Intel host or maintain a 3rd-party repository? There is a reason it is called 3rd-party: it is not maintained by the 1st party and/or a direct partner. I don’t think it is a matter of money; maintaining a repository and guaranteeing that the packages are “safe” needs a lot of knowledgeable human resources.
It makes more sense that the developer of an application (and the team and/or community around it) maintains and delivers the application to the end user. The distributor of a Linux-based distribution (Linux is more or less only the kernel; the rest are applications built on top of it) is responsible for its own modifications and/or applications, but not for every application that can run on Linux.
I fully understand your point, and I think it would make sense for Clear Linux (and other distributions too) to use Flatpak as a centralized service to deliver most applications to end users. It would be great if Flatpak were accepted by developers as the way to distribute their applications. This would be like “the app stores”.
From a distro-support perspective, Flatpak is great because you don’t have to do anything to gain more packages. From a user perspective, though, it nullifies the advantages of CL. The main reason cited for using CL as a desktop is the performance benefits (to offset the disadvantages): packages are built by default for westmere (SSE4.2) with a few extra flags, and in a few places there are extra libraries optimized for AVX2/AVX-512, where generic 64-bit Linux is back on SSE2.
If you use a Flatpak, you forfeit all of that and are running the generic, unoptimized packages that Flatpak provides. You may as well go elsewhere at that point.
Also, let’s not rely on Phoronix to determine the performance difference between distros. Many of the tests don’t even test the distribution (i.e. what you download from the package manager) and are just flung together with zero thought to pump out another article for ad views.
I totally agree with you. I don’t know what would happen if you ran a poll here on “how many of you like Flatpaks”, but I for sure don’t and never will. I already hate seeing iPhones perform much better than my Android flagship just because Android is a generic OS… So I wouldn’t like to cripple my PC too: a +10% performance gain in hardware can easily cost you a good +20% in price, and it’s not very nice to have a 1000€ PC perform like a 600€ one because “Flatpaks are so easy to distribute”. If that were the point, I’d be using Ubuntu; the fact is I love the philosophy behind CL: speed and order.

I read someone saying that CL isn’t good because swupd is too strict. I believe it is exactly such a structured, “strict” model that leads to quality software. Having (theoretically) 100 3rd-party repositories scattered around developers’ home servers won’t be any better than just settling for apt/snap or even Flatpak altogether and letting users add PPAs.

Note that having a “maintained” repository is not the same as a “controlled” repo. In the latter, CL devs only act as moderators: they could go as far as reviewing submitted bundles, but they wouldn’t have to fix or change anything, just accept or reject. The cleanup work can be done almost automatically by checking the bundled version against the upstream version; if it is too old, it can be marked as deprecated and eventually removed… I strongly believe a central, unique 3rd-party repository is the way to go.
@sunnyflunk @SPAstef Sorry, maybe my post was a bit too generic. I was talking about pure end-user desktop apps; I was not talking about the “core stack” of the distribution, or applications where it totally makes sense to optimize, like virtualizers, compilers, database engines, server engines, DEs, coding-language core components, and such. Nearly all the desktop apps I use are Flatpaks (Firefox, Thunderbird, Gimp, Inkscape, LibreOffice, VLC…), because I do not notice a difference in performance.
I simply don’t think that the team around Clear Linux could deliver an optimized version of every application, or monitor/moderate a 3rd-party repo; maybe the community could.
I mean, there is GitHub and such, which could hold a 3rd-party repo, I guess. - After reading an answer from @ahkok in another thread, I noticed that GitHub would not fit the requirements for a 3rd-party repo. This means it is even harder to provide a centralized 3rd-party repo for Clear Linux, because you need to build the infrastructure first. If a “community organization” maintained such a 3rd-party repo seriously for a while, maybe then Intel would be interested in supporting that community, but I don’t think Intel will be the initiator of this 3rd-party repo.
@ahkok, are there public numbers on the data and network usage of the Clear Linux repo, for reference? How much storage, how much bandwidth/download/upload (just to get an understanding of the hosting costs)?
The server-oriented changes to Clear Linux make sense if Microsoft is able to deliver on the hypervisor-based WSL 2 (Windows Subsystem for Linux version 2) promised in Windows 10 version 2004, rumored to be released in mid-May. I am not usually a Microsoft fan-boy, but if some tech works as advertised (and that is a big IF) I may become a believer.
If WSL 2 works as well as advertised in YouTube videos (I grew up watching outrageous toy commercials spoofed in Mad magazine), it will reduce the need to install Linux on raw silicon; instead one installs into the WSL 2 hypervisor layer provided by Microsoft’s Hyper-V. WSL comes pre-configured to provide access to the Windows C: and D: drives via Linux mount points at /mnt/c and /mnt/d. Windows goes from a hardwired command interpreter to a Windows Terminal app that can access a Bash shell in WSL 2 (reducing Apple Mac envy) as well as continuing to access DOS/Windows and PowerShell (via a drop-down menu and a browser-like tabbed interface).
WSL 2 provides a true Linux kernel on top of which one can install one or more Linux distributions (taking a page out of Docker’s book). Microsoft already has Ubuntu 20.04 in the Microsoft Store (the entry without a version number).
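Distro management happens on the Windows side with the `wsl.exe` tool. A couple of illustrative commands (run from PowerShell or Windows Terminal, not from inside Linux; the distro name is whatever `--list` reports on your machine):

```shell
# List installed distributions and which WSL version each one uses
wsl --list --verbose

# Move an already-installed distribution onto the WSL 2 backend
wsl --set-version Ubuntu-20.04 2

# Make WSL 2 the default for anything installed afterwards
wsl --set-default-version 2
```

The per-distro version switch is what lets the WSL 1 translation layer and the WSL 2 real-kernel VM coexist side by side.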
There is also a terminal app as part of Visual Studio Code for Mac (search for it on YouTube).
Having an easy-to-access true Linux Bash command line means one no longer needs Git Bash’s MinGW (Minimalist GNU for Windows) pseudo-Linux environment, nor a separate virtual machine for Linux/Docker!
Meanwhile, the open-source Visual Studio Code IDE is pre-configured to be client-server, so a Windows-installed Visual Studio Code can, as a client, transparently access the WSL 2 Linux file system and servers. Visual Studio Code is pre-configured for Git or SVN (Subversion) version control.
Meanwhile Docker is coming out with a version designed for WSL 2.
I have an Intel NUC (Next Unit of Computing) and I am thinking of re-configuring it from native Linux back to Windows with WSL2; my primary concerns are performance and the security implications of the larger attack surface.
As far as the GNOME desktop goes: on the desktop side it is great for apps, including LibreOffice. On the server side I like a desktop interface for the file manager, where for one-off tasks it is easier to drag and drop files. But since things on the server side are rarely “one-off”, it might be nice to have a file manager that logs all interactions as Bash commands, so that, for instance, a move from one deeply nested directory to another would be logged accurately without typing. Otherwise, on a server one wants to avoid the CPU and RAM overhead of a complex screen saver, a GUI with a transparent interface, or just the sudden CPU load of a rapidly moving mouse. Client-server apps help by putting the GUI and mouse load on the client computer: for PostgreSQL, for example, the pgAdmin 4 interface, and more recently the Visual Studio Code client/server configuration.
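That “log every interaction as a shell command” idea can be sketched in a few lines of POSIX sh. This is only an illustration of the concept, not any existing tool; `mv_logged` and the `LOGFILE` variable are names I made up:

```shell
#!/bin/sh
# Sketch: perform a move and append the equivalent mv command to a log,
# so one-off drag-and-drop-style operations leave a replayable trail.
# mv_logged and LOGFILE are hypothetical names for illustration.

mv_logged() {
    src=$1; dst=$2
    # Only log the command if the move actually succeeded.
    mv -- "$src" "$dst" && printf 'mv -- %s %s\n' "$src" "$dst" >>"$LOGFILE"
}

# Demo: move a file out of a deeply nested directory and show the log.
workdir=$(mktemp -d)
LOGFILE=$workdir/ops.log
mkdir -p "$workdir/a/deep/nested" "$workdir/b"
touch "$workdir/a/deep/nested/report.txt"
mv_logged "$workdir/a/deep/nested/report.txt" "$workdir/b/report.txt"
cat "$LOGFILE"
```

Replaying the log with `sh ops.log` would repeat the same operations, which is exactly the “logged accurately without typing” property described above.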
On second thought, one question I would have about Microsoft’s WSL 2 is how it (or the hypervisor) allocates RAM. Would I be able to run memory-intensive workloads like a large PostgreSQL database, or a machine-learning package in R (which likes 16 gigabytes of RAM)? I have seen documentation on WSL 2’s use of disk space, but no reference to RAM. Even if one plans to deploy to the cloud, one would still like a realistic test on the developer’s notebook.
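For what it’s worth, WSL 2 does expose knobs for this: recent builds read a `.wslconfig` file in the Windows user profile that caps the VM’s memory, CPUs, and swap (by default the VM can claim a large fraction of host RAM). The values below are illustrative, not recommendations, and the details may have changed since this was written:

```ini
# %UserProfile%\.wslconfig
[wsl2]
# cap the RAM the WSL 2 VM may claim from the host
memory=16GB
# cap the number of virtual CPUs
processors=8
# size of the swap file backing the VM
swap=8GB
```

A `wsl --shutdown` from the Windows side is needed for the new limits to take effect.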
- You cannot access serial or USB devices
- You cannot access GPUs
- There are still some problems and conflicts (e.g. inability to boot) with other VM systems, like VirtualBox. These were claimed to be fixed, but as of December 2019 I had not been able to run WSL and VirtualBox together; maybe it works now.
For me WSL is the “next big thing” for Windows; actually, it is the only interesting aspect of that OS (unless you consider the six-years-painfully-long transition between Control Panel and Settings somehow interesting). Unfortunately, the inability to use GPUs is alone enough for me to say they’re still far from competitive. And even then, I don’t really see why one would go through WSL for a server infrastructure when you can use literally anything else to deploy Clear Linux OS… On the desktop it would already make more sense.
And for those of us able and willing to customize our own CL images, nothing changes. We can keep using CL as a desktop/production machine and still benefit from all upstream enhancements that are also good for a desktop use case.
However, I do hope that the CL team maintains the Plasma/KDE Frameworks/Qt bundles and desktop combo. The announcement talks about focusing on server and even IoT, and big new updates to Qt for microcontrollers (MCUs) are being released (https://www.qt.io/blog/qt-for-mcus-1-1-released).
I’m not a priori against GNOME, but I think it would be wise to keep paying attention to KDE/Qt, because it is growing bigger than ever.
So, stripped of marketing hype, what Microsoft is offering with Windows Subsystem for Linux 2 (WSL 2) is Hyper-V used as a bare-metal (type 1) hypervisor running both Windows 10 and a bare Linux kernel as guests. Various Linux distributions (distros) are then loaded on top of the bare Linux kernel in a Docker-like way.
So it looks like it would be possible for competitors to offer a similar setup using a different type 1 hypervisor (since we already pay a “Windows tax”, there is no additional cost for Windows on a Windows laptop), and there are open-source type 1 hypervisors, including KVM and Xen.
I did some research on the cloud: although Microsoft Azure uses a variant of Hyper-V, Amazon Web Services (AWS) used Xen and has converted to KVM, while Google Cloud Platform (GCP) uses KVM.
So, the likely competitors to Microsoft’s WSL2 are either existing VM vendors (IBM/Red Hat Enterprise or Dell/VMWare) or a combination of a CPU manufacturer and a large Cloud provider (Intel/AWS or Intel/Google).
Intel/AWS is interesting because AWS is the largest cloud vendor; Intel/Google is interesting because of its Chrome operating system (who knows what lurks in Chrome? and Google has laptop hardware experience).
I would suspect that some Google employees already run Linux KVM on their laptops and could tweak it to emulate the advantageous parts of WSL2 because Google employees have both the itch/need and the skillset to scratch the itch. I once worked at Chase Econometrics / Interactive Data Corporation (CE/IDC) and know that time sharing (ancestor of cloud) company employees often had configurations and projects out in front of official company offerings.
The incentive for Google is similar to the reasons they developed Android and Chrome; Google wants some control over the gateway to their online services.
Both Amazon and Google would want their cloud developer clients to have an easy way to test configurations on their laptop before loading to the cloud and I think they would be concerned that WSL2 is primed for users to move to Microsoft Azure rather than their offerings.
So, back to Intel. Intel is developing products to work with Microsoft Azure, but it is probably not leaving its other cloud data center customers (Amazon and Google) behind; instead Intel probably has some unannounced KVM projects developed in cooperation with its lead customers. Or at least, like Google, it probably has some clever engineers running KVM as a type 1 hypervisor on their laptops; it is what clever engineers do – come up with new and novel configurations using either cutting-edge tech, or older tech in very novel ways.
In fact, now that I realize what I am looking for (a type 1 KVM hypervisor supporting both Windows and Linux), I was able to find this on GitHub. It doesn’t explicitly say type 1, but since step 3.4 involves modifying GRUB it sounds very type-1-like to me. The neat thing is that this Linux KVM setup does have GPU pass-through, and it is in an Intel GitHub repository. The steps might need a little simplification for consumer use. ;}
Oh, you’ve got me on unfamiliar ground here. I am no expert when it comes to VMs; I always stayed well away from them after the stuttering experiences I had. I only tinkered a bit with VMware/VirtualBox/Hyper-V and WSL, but always left disappointed by the graphics, the I/O, the overall slowness, and the difficulty of getting everything working. So IMHO, the fewer layers over the metal the better. That said, what you found is actually pretty interesting. I know web stuff is probably more CPU-intensive, or rather Google/Amazon and co. prefer using thousands of efficient CPUs rather than GPUs (full speculation here), so that may be why getting GPUs working on VMs can be a pain… Anyway, we’re going a bit off-topic here.
As for a type 1 KVM hypervisor: not only is there the GitHub repo I referenced above, there is an entire
Linux Foundation project, Project ACRN (pronounced “Acorn”).
This presentation shows Project ACRN; although it says it is developed for the Internet of Things (IoT), it seems to have the automotive industry in mind. But it is designed to run a real-time operating system (Project Zephyr), Clear Linux, Android, and Windows!
In this presentation look at “Industry Scenario” with and without a safety VM (Project Zephyr).
This shows both Clear Linux and Windows.
Next there is this video presentation (May 14, 2019) on YouTube, which shows how to install Clear Linux on an Intel NUC (Next Unit of Computing), flip it to a type 1 hypervisor, and then install another copy of Clear Linux as the user VM. Unfortunately, it doesn’t say how to install Windows, nor Zephyr. Be patient; it doesn’t really get to the hypervisor until about four minutes into the video.
So, would ACRN make a good type 1 VM for local development of systems intended to run in Amazon AWS or Google GCP? Could ACRN be configured to look like WSL 2 (Windows Subsystem for Linux version 2), particularly in its interaction with Windows Terminal, Microsoft Visual Studio Code, and Docker? I assume the Windows C: and D: drives could be mounted into the Linux file system as c and d.
The advantage of this setup is that there would be GPU pass-through, and one would hope for greater compatibility with Linux-based clouds such as Amazon AWS and Google GCP, since one would be running native Linux under KVM.