Linux sucks, and there’s nothing you can do about it

Recently I started working on a startup tech business to implement some of my tech ideas. I got a work-space on campus at the University of Memphis: a desk and a computer in a shared area where other entrepreneurs work. The computer they provided is an iMac, configured for the campus network, and it forces me to sign in with my University credentials.

To do my work, however, I need complete control over the computer so I can install software and develop programs, so my idea was to bring an external drive with me and install a Linux distribution onto it. I pulled a solid-state drive out of my desktop at home, grabbed an SSD-to-USB adapter, and installed a copy of Linux Mint (which is based on Ubuntu (which is based on Debian)).

I used the iMac to install the OS, and under normal circumstances, this should be no problem. Ideally, the installer (running off of a flash drive) would run in RAM and not modify the internal drives on the iMac. Installing the OS would put the bootloader onto the external drive and put the OS installation in a separate partition after the bootloader.

Unfortunately this did not go as planned. The installer gives users two options: the easy way, where you tell Mint which drive to install to, or the hard way, where you have to repartition the drive yourself according to the needs of the OS, which requires knowing what the OS is going to do before it does it: a catch-22. So I picked the easy option, and I selected LVM (Logical Volume Management) as the partition type so that I could resize partitions easily after installing. LVM allows the user to create other partition types inside of it, which can then be resized easily and moved to other drives that are also formatted with LVM.

In the advanced mode, it is impossible to set the system up with LVM, as it does not allow creating partitions inside of an LVM partition.
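
For reference, this is roughly the layout the easy option builds, done by hand from a live session; the device name, volume group name, and sizes below are all made-up examples:

    # Turn the external drive's main partition into an LVM physical volume
    sudo pvcreate /dev/sdb1
    sudo vgcreate mint-vg /dev/sdb1

    # Carve out logical volumes for root and swap
    sudo lvcreate -n root -L 40G mint-vg
    sudo lvcreate -n swap -L 4G mint-vg

    # The payoff: growing the root volume later is two commands
    sudo lvextend -L +20G /dev/mint-vg/root
    sudo resize2fs /dev/mint-vg/root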

In the easy installation mode, I selected the external drive, and Mint’s installer put the OS on the selected drive, but then it put the bootloader on the internal hard drive of the system. It does this because the internal drives are marked as “bootable” (the boot flag is set). The installer assumes the user must want to dual-boot, so it replaces the internal bootloader. Since the iMac boots using EFI (or UEFI, which is newer), the firmware reads its boot entries from NVRAM, a small piece of memory on the motherboard that lists bootable devices and remembers the default one. The Mint installer decided to wipe these NVRAM entries and replace them with its own entry pointing to my external drive’s OS installation. The OS installation then would point to the internal drive’s bootloader partition. The bootloader would then let me choose which OS to boot.

Installer actions:

  • Overwrote the NVRAM boot entries
  • Overwrote the bootloader on the internal drive
  • Pointed the NVRAM at the external drive
  • Pointed the external drive back to the internal drive’s bootloader

The dysfunctional configuration:

NVRAM --> external drive --> internal drive
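
From the Linux live session, that wiring can at least be inspected, and partly repaired, with efibootmgr; the entry number, device, and loader path below are made-up examples:

    # List the boot entries stored in NVRAM
    sudo efibootmgr -v

    # Delete the entry the installer created (0003 is an example entry number)
    sudo efibootmgr -b 0003 -B

    # Re-create an entry pointing directly at the external drive's EFI partition
    sudo efibootmgr -c -d /dev/sdb -p 1 -L "Mint (external)" -l '\EFI\ubuntu\shimx64.efi'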

This is *incredibly* stupid as a default. I reported this to Linux Mint on their forums, but Mint can’t fix this, because they rely on Ubuntu to provide the installer code, so this has to be reported again to Ubuntu, but they might not even fix it, even though it is an obvious logic error that has an easy fix.

Boot flag work-around

The internal drive can be set to “unbootable” during installation as a work-around for this installer bug. To do this, open GParted from the live session and clear the boot flag on the internal drive before installing. After installing, reboot into the Linux live CD (or the installed OS) and restore the flag.
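
The command-line equivalent of those GParted clicks, assuming the internal drive is /dev/sda and its boot partition is partition 1 (both examples), looks like this:

    # See which partition carries the boot flag
    sudo parted /dev/sda print

    # Clear it before running the installer
    sudo parted /dev/sda set 1 boot off

    # ...install Mint, then restore the flag from the live session
    sudo parted /dev/sda set 1 boot on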

Fixing the University’s Computer

I was unable to fix the University’s computer on my own. After several hours of research, the only fix to restore the NVRAM and bootloader was to log in as an admin on the MacOS installation, run the “Startup Disk” selection program, and click on the internal drive to repair it, and that requires administrator privileges. The only other option was to re-install the operating system, and this meant giving it back to the tech people, who would take 2 weeks to figure out a 2-minute problem and then probably still re-install MacOS.

Most operating systems allow users to fix bootloaders by running a tool from an installer CD or USB drive. There is no such tool for MacOS.
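
(The closest thing MacOS has is the bless command, but it, too, must run from an admin account on the installed system rather than from install media. The volume name here is an example:)

    # Make the internal MacOS volume the default boot volume again
    sudo bless --mount "/Volumes/Macintosh HD" --setBoot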

Luckily, I managed to get someone who knew the admin password to help me fix the computer.

After Installation

After installation, Linux Mint seemed pretty good. I have tried many other distros and had many issues after installing. Mint seemed to avoid most of them, but a few major ones showed up.

First, Mint was unable to install certain programs from the app store (Software Manager), including Discord and another program, due to a missing software package that these two programs relied on. Later on the problem went away (I can’t remember why), but it was a problem out of the box.
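
I never pinned down which package was missing, so all I can offer is the generic recovery dance for a .deb that dies on a dependency (the file name below is hypothetical):

    # Installing the .deb fails, and dpkg names the missing dependency
    sudo dpkg -i discord-0.0.9.deb

    # Ask apt to pull in whatever dpkg just reported as missing
    sudo apt-get install -f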

The other major problem is the drivers. I intended to use this SSD on a few different computers, including my home desktop and my laptop, so that I can maintain a consistent development environment across different machines. Unfortunately, the drivers for hardware on several of the machines are missing or broken.

WiFi on Macbook Air 2013

The first one I noticed was the WiFi driver on my laptop (Macbook Air, mid-2013). Because of it, I cannot use Mint on the laptop at all. The internet is so integral to programming that this is a real problem.
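
Assuming the Air has the Broadcom chip these models usually ship with (an assumption on my part), the standard fix is the proprietary bcmwl-kernel-source package, which has to be fetched on a machine that does have internet: a catch-22 for a laptop with no Ethernet port.

    # On some other machine with a connection, download the driver package
    apt-get download bcmwl-kernel-source

    # Carry the .deb over on a USB stick, then on the Air:
    sudo dpkg -i bcmwl-kernel-source_*.deb   # may itself demand dkms/build deps
    sudo modprobe wl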

Sound card on iMac

The sound card on the iMac also was (and is) not working. My research turned up no known fix, and the other reports of the same problem show different symptoms in the system programs used to diagnose it.

https://www.linuxquestions.org/questions/linux-hardware-18/no-sound-modules-after-installing-mint-mate-18-a-4175593442/
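
For anyone landing here from a search, these are the standard first-pass diagnostics that threads like that one ask for:

    # List the playback devices ALSA can see
    aplay -l

    # Confirm the sound card shows up on the PCI bus at all
    lspci -nn | grep -i audio

    # Kernel messages from the HDA sound driver
    dmesg | grep -iE 'snd|hda'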

From the research, it becomes apparent that nobody knows what they’re doing, and that fixing the problem is mired in a mixture of political red tape, lack of responsibility, and technical incompetence.

What the real problem is

There are two real problems here:

Too many options!

First, there are too many options for how to debug the problem, and none of them are simple. Linux breaks the UNIX manifesto: “Do one thing and do it well, and make things work together”. Debugging a problem in Linux requires users to do too much research. The commands that are relevant for debugging do not do one thing and do it well, and there are too many options for how to approach the problem. This is a fundamental design flaw in modern UNIXy operating systems and should be a warning to future OS designers on what not to do.

Too much reading!

The other problem is that Linux’s debugging process is command-based. Tools on the command line are terribly inconsistent in their interfaces. The syntax for using the tools is unintuitive and finicky, and the presentation of information is typically organized according to the mental handicaps of the developer and is often overloaded with details that are irrelevant to the user. This requires users to memorize commands instead of providing a way to debug configuration problems by exploring and inspecting a visual representation of the system’s configuration. While the terminal is a uniform interface, the languages and syntaxes of the programs within it are wildly inconsistent and require too much reading to be used efficiently.
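
A small taste of that inconsistency, using nothing but everyday tools:

    ps aux                            # BSD style: flags with no dash
    ps -ef                            # System V style: same tool, dashed flags
    dd if=/dev/sda of=disk.img bs=4M  # key=value arguments, no dashes at all
    find . -name '*.log' -delete      # long options behind a single dash
    tar -xzf archive.tar.gz           # bundled single-letter flags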

General principle: Linux is not brain-friendly

The general principle behind “Too many options” is that Linux is not compatible with how the brain learns to do things. Likewise, the general principle behind “Too much reading” is a combination of “too much memorization” and “too many irrelevant details”. The UNIXy command lines are hard on the brain’s auditory cortex, temporal lobes, hippocampus, and prefrontal cortex (PFC), and they do not make use of the visual cortex efficiently.

What do these pieces of the brain do? From Wikipedia:

The auditory cortex is the part of the temporal lobe that processes auditory information in humans and many other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing, such as possible relations to language switching.

Auditory Cortex

The temporal lobe consists of structures that are vital for declarative or long-term memory. The temporal lobe is involved in processing sensory input into derived meanings for the appropriate retention of visual memory, language comprehension, and emotion association.

Temporal Lobe

The hippocampus (from the Greek ἱππόκαμπος, “seahorse”) is a major component of the brain of humans and other vertebrates. Humans and other mammals have two hippocampi, one in each side of the brain. The hippocampus is part of the limbic system, and plays important roles in the consolidation of information from short-term memory to long-term memory, and in spatial memory that enables navigation.

Hippocampus

The visual cortex is powerful

The visual cortex is the strongest part of the brain for processing data. For example, videos, demonstrations, and diagrams are very effective teaching tools, and the world around us is full of visual stimuli that must be processed quickly and efficiently in order to respond and orient ourselves in a complex environment. The only way to do this is with vision. A simple test: walk outside, close your eyes, and try to find your way around for a few hours. It is incredibly slow and error-prone.

Vision is much more powerful than other senses:

In the brain itself, neurons devoted to visual processing number in the hundreds of millions and take up about 30 percent of the cortex, as compared with 8 percent for touch and just 3 percent for hearing.

The Vision Thing: Mainly in the Brain, Discover Magazine

The UNIX command line and most programming languages are ineffective and slow tools, because they do not make use of the brain’s ability to process visual inputs quickly, in parallel, and to contextualize large amounts of detail into a cohesive idea. The visual cortex is the largest single system in the human brain, occupying about 30 percent of the cortex. Humans evolved to process visual stimuli with massive parallel processing, so it is inefficient to have to sit and read one character or even one word at a time. This bottleneck is partly why GUIs were invented.

So, the next time you pop open your command line or a text editor, ask yourself, “do I want to use 3% of my brain or 30%?”.

Linux programmers are stupid

Because of the Linux community’s inability to grasp these basic psychological concepts, Linux will forever be crap. Linux programmers are so smart that they are in fact stupid. While they sit and worry about every detail of their software’s architecture, they ignore the brain’s architecture and its limitations and abilities.

Details

  • Operating System: Linux Mint 19.2 Cinnamon
  • Cinnamon Version: 4.2.4
  • Linux Kernel: 4.15.0-66-generic

28 thoughts on “Linux sucks, and there’s nothing you can do about it”

  1. Every problem in this post is entirely your fault. Carrying around a bootable SSD that you plug into multiple machines is a terrible idea, and if you’re trying to do something fancy, you shouldn’t rely on handholding-tier Linux distros. You very clearly don’t know what you’re doing (you can’t get wifi working on a Macbook Air 2013 — just install the broadcom drivers lmao) and you Google random details to look like an expert on a topic you don’t know anything about. Then you bring in some nonsense about the brain (completely irrelevant — more grandstanding) to claim GUI supremacy over a command line (because you’ve never found a decent use case for bash scripting).

    Stick to video games and schoolwork. Leave the discussion on operating system engineering to people who aren’t just tourists.

    1. First, carrying around a bootable SSD to use on multiple machines is an excellent idea. This is the ideal use case for a portable storage medium.

      I tried Mint based on a suggestion. I was considering Arch Linux but decided not to use it because it is rolling release, and I got sick of reinstalling when it broke my system’s state with its updates.

      “You clearly don’t know what you’re doing.” I have plenty of general-purpose knowledge. The only problem here is mapping it onto a specific distro’s implementation. “Can’t get wifi working on a Macbook Air just install blah blah shit blah lmao”. No, you need WiFi on a device with no Ethernet to install a driver for WiFi. Cyclical dependency much? Installing a driver from another computer is not a reproducible option. A younger me would not have had that option, and probing devices and researching which kernel module to install is not a procedure I want to memorize, as it is distro-specific and will inevitably become outdated.

      Again, you and others who think like you don’t get it: memorization of a text command is an idiot’s answer to a problem. The next time you pick up a rock and want to throw it, I want you to think about what command you were supposed to give it. Computers are supposed to be indistinguishable from the objective world. They present an interface; you should not have to memorize an arbitrary string to interface with a computer: it should be self-explanatory. Strings are explicit declarative details in the brain, not implicit procedural concepts.

      In any human interface, the details should be interchangeable between two different instances of an interface. This is why you can apply a skin to a game or a map and still recognize its elements. As a programmer, you are designing an interface for users (downstream programmers or end-users), so quit shunting the responsibility of the details onto the user’s explicit declarative memory. Remedial learning for you: https://www.livescience.com/43713-memory.html

  2. Welcome to the Linux world.

    You have to accept getting something that isn’t perfect, and contribute to fixing it.

    If you don’t do that, you’ll be considered a stupid Linux guy who does not fix the problems of others… as you’ve just claimed! 😀

    1. But you are missing the point. You can’t fix something when everything you depend on drags you back down to the level of stupidity that it was designed at. It is naive to assume that you as a programmer are going to contribute every line of code, but you can make it so that transitioning your expertise to another domain of knowledge is efficient. The fact that we have to continually interface with the C language from every other language because Linus Torvalds doesn’t like the idea of updating to a more digestible, expressive, and discoverable tool set/language means we are constantly being dragged back down into the problems associated with C. It’s infectious. More accurately, it is a network-effect problem (economics concept).

  3. Quite useless read. Life sucks. So what?
    You think this complaining will fix one thing?
    Have you considered contributing?
    How much did you pay to get your specific use case supported?

    1. Have I considered contributing? Yeah plenty of times, to plenty of projects. Do I have the patience to learn their preferred combination of tools/languages and sit and decrypt their particular style of coding and navigate all the research tangents that it will inevitably send me on to memorize and validate their code? No. For free? Certainly not. There are easier ways to inflict pain on myself 🙂 Here’s a thought: if the system were simpler to navigate, then people would be more likely to contribute for free to making it better and more navigable. It’s a cyclic dependency. Shit begets shit.

  4. So you are complaining that you do not know how to install Linux. Quite ignorant. I have installed Linux myriad times with different setups and never had problems like yours. If you don’t want these ‘issues’ (namely, learning how the tools work) then stick to Mac or Windows; you pay for those systems to do everything for you. Linux is in large part a community effort: either you can appreciate this, or go elsewhere.

    1. Actually, you’re misreading my blog post and obviously did not read it for understanding. Quite ignorant. I’ve installed Linux many times before this. Perhaps the word “bug” did not tip you off enough. Need I be more obvious?

  5. That’s kind of weird that you’re complaining about Linux not working on the most closed-source platform in existence.

    1. I’m too stupid to use an installer and it’s all the developers’ fault.
      If you needed more control over the OS, instead of absolutely borking the company’s equipment with your ineptitude, why didn’t you request an administrative account?
      I genuinely hope they took notice of what you did and took the appropriate action of terminating your employment.

      1. You’re an actual idiot. You clearly didn’t read–and you didn’t even bother to punctuate correctly. “Request an administrative account”? Hello, I’m a student at a University. It’s the programmers’ fault for hard-coding an idiotic assumption into the installer about where the bootloader should be installed while failing to allow me to manually specify a configuration that uses LVM. This is why I don’t trust Linux programmers and their obvious lack of concern for reasonable defaults. Case in point: NixOS is complete shit because of its refusal to set reasonable defaults. Forcing the user to spend inordinate amounts of time circumventing red tape is no way to grow a project.

    2. OK there is an ambiguity in your sentence structure, but obviously you don’t know what platform I am writing this on.

      Yes, Apple is closed-source. I’m not complaining about Linux not working on a specific closed platform. I’m complaining about the programming profession’s problem with keeping things simple for people to learn and acquire. Languages are acquired, just like natural languages: after you learn one, you think in it. I use the term discoverable a lot, and none of these languages are discoverable. Linux just happens to be ground zero for this shitstorm. Linux also portrays itself as an escape path from the closed-source jungle of Windows and Apple, but it presents more complexity and uncertainty than it is worth, and its developers provide no reasonable way to manage that complexity.

  6. Stop using BASH; the shell is old as dirt, and as visual interfaces go it is pretty archaic. Try zsh or fish instead.
    Use apps that use ncurses to bring some visual appeal to the command line (htop, ncdu, calcurse, midnight commander/mc, bmon, etc.)
    “ranger” gets one around the file structure without too much hassle
    “tldr” gets one past the lengthy (and rarely updated) Linux man pages
    “bat” brings syntax highlighting and default enumeration to cat
    “micro” solves the wonkiness of Vim but still gives you syntax highlighting, or just use nano
    Developers aren’t focusing on HI (human interface) design principles with Linux, but on speed and other concerns.
    If you have a complaint about something not being visual enough, develop an ncurses solution to fix it.
    You could have avoided this mess by using Oracle VirtualBox, or just installing Homebrew for Mac.
    I agree “out-of-the-box” Linux isn’t meant for humans; Linus Torvalds didn’t consider this a necessity. Ian Murdock, who created Debian, made it as a side experiment while in college, as I recall.

    Business loves free. The C programming language is “free”; because of C, Linux was born, then Make, then Ant, and all the other stuff. C is a funky language, which is why Java was born; Java runs on a virtual machine with poor memory management, which is why Go was born, etc., etc. ….. so it all goes back to C, which was never considered a language to be used outside of a lab (C was developed within the labs of AT&T).

    When things are “free” you get what you get.

    Remember Beethoven and Mozart (the composers)? Rich families let them live in their mansions for free and paid them a salary to just write beautiful compositions. If this were done for developers, your issues wouldn’t exist.

    I love Linux, but I also know it’s free, so I know it’s as good as it gets.

    1. Developers are shooting themselves in the foot if they are not focused on HID principles. They are humans too, and every developer was a user at one point (except a few). This is why I say Linux developers are idiots. They’re sinking their own ship. I don’t want to wear their old smelly socks or go looking for their 1975 paper on “whatever” in their office with papers strewn across the entire room and stacked to the ceiling in an arbitrary fashion. They’ve created a complexity problem, and every attempt to solve it has resulted in more complexity, because they won’t address the root problem, which is the nature of the abstractions they have used. People are content to create work-arounds for problems, and then create work-arounds for the work-arounds when their work-arounds introduce more problems than they solved.

      I appreciate the suggestions. However, despite how good the advice is, the problem is a human efficiency and sociological one. We were sold a lie a long time ago that technology makes less work for us by solving problems, and it has failed to deliver on this: all it has done is re-route the work to specialists who memorize their particular answer to a particular problem, because it generates more complexity (entropy), making the system as a whole less understandable to the individual programmer. This is why software keeps hitting an upper limit on how useful it is and how well-made it is. Software just sucks, because we as humans keep holding on to old broken abstractions and assume that if it ain’t our problem, someone else will fix it. It’s a lazy and irresponsible approach to system design.

      “You could have avoided this mess by using Oracle virtualbox, or just installing Homebrew for Mac”
      I chose a workspace with a desktop so that I can stay on task better. I didn’t want to terraform the University’s computer into a Linux machine, because I thought it’d be nice for others to have access to the computer using their Uni credentials when I wasn’t there.

      “Developers aren’t focusing on HI (human interface) design principles with Linux, but speed and other concerns.”
      Yes but this is the problem. They are imposing a learning overhead on later developers (downstream and new programmers). This is a compromise that is arguably worth it in the short-run, but overall really not worth it in the long-run. These languages discourage discoverability (conceptual transparency). I used black-box reverse-engineering strategies to derive the principles of computer science when I was in middle and high school, but this did not teach me how to program in C, because C’s execution model involves non-scientific/arbitrary characteristics. You have to learn the execution model from someone who knows how it works or be especially good at picking up on computer science concepts through observation to figure them out, but certain details are still not discoverable even when using the scientific method, or a black-box reverse-engineering strategy. I did not have a teacher who explained these specific implementation details until I was in college. I did not have the internet to teach me, either, and I believe the internet is a distraction and cannot be relied upon as a teaching tool. I observed various computers’ and a calculator’s behaviors, tested theories, and made deductions. Yet, even after being fully proficient in programming a structured language on the calculator, I still could not transfer that knowledge over to the C language without a teacher explaining what the stack and heap were, so the C language is not discoverable. Yes, we have code samples online today, but code samples do not explain the execution model and are therefore not comprehensible. Yes, we have developer docs, but developer docs are written for experienced developers–in a manner that is not suitable for the learning process. Books can work, but books require too much effort to read. When someone explains something they are highly familiar with to someone who is completely unfamiliar, it doesn’t work, because of a cognitive bias/phenomenon called the Curse of Knowledge, where the skilled teacher can’t remember what it was like to not understand a concept. This is why it is so difficult to find excellent teaching materials much of the time for difficult concepts. I struggle with this bias in my writing as well. I attempt to compensate for this, but inevitably, the ball will be dropped. https://www.psychologicalscience.org/observer/the-curse-of-knowledge-pinker-describes-a-key-cause-of-bad-writing

    1. I appreciate you leaving a comment. Yeah we’ve been using mouse and keyboard exclusively on the desktop for 50 years, and that one blows my mind. Touch interface hasn’t been integrated into the desktop environment properly. Microsoft failed to implement it in a meaningful way and Apple made a tepid attempt with the Macbook touch bar and never broached the topic on the desktop.

  7. This rant sounds like trolling, so of course I will bite and reply. I am a long-time Linux user, and so I have suffered through most of the frustrations that you describe, but I think your conclusions are quite wrong. You’re making a false dichotomy by setting a faulty command-line-based system against an idealized, perfectly working GUI. In reality, the polished GUI systems work because, e.g., Apple controls the entire environment and makes it work. You, however, want to boot your system on multiple machines, and Linux basically tries to do that and almost succeeds, whereas MacOS or Windows would never be able to do that without stripping the OS to bare essentials using the equivalent of MS sysprep.
    Yes, Linux is quirky, and the boot process is doubly so, what with all the different BIOS/UEFI/partitioning schemes. But it mostly works, and it gets better over time.

    Here’s what I have seen happen many times with your idealized GUI: you click a button, and it works flawlessly. You try it in a different situation, and it doesn’t, and there’s nothing you can do about it, because the symptoms are hidden behind the pretty graphical window. Easy things are super easy, but hard things are impossible.
    In contrast, on an old-style system, you can see the individual steps and their results; you can tweak them and adapt. You have a fighting chance. Yes, you have to use your analytical brain, but that’s the deal: someone has to do the hard work, and unless someone else did the work for you perfectly, your visual brain is not going to help you fix your non-visual computer.

    1. No, I’m saying there is an equivalence between the capabilities of graphical representations/diagrams and strings. The world you live in is a giant string. Your brain is equivalent to a Turing Machine in processing capabilities (with limitations on the tape size). How does your brain process your environment so quickly? This equivalence can and needs to be applied to the computational devices that we use. Instead of forcing your brain to meet the machine at the string level, force the machine to meet your brain at the pictorial level. They are equivalent, but your brain processes one faster.

      How do you know what another programmer was intending for you to see on a website? Answer: your brain decomposes the website into elements and forges relationships between them based on a “design language” that the designer implemented when they thought about how the website should look. It is no accident that they call it a design language. It is functionally equivalent to a computer language, but your brain can pick out points at which to start processing the meaning of the page arbitrarily and then fill in the rest of the context. On the contrary, expressions in programming languages require you to parse them front-to-back (left-to-right, typically). This demonstrates that strings can be converted into a spatial representation. If you think about it, this means the reverse is true, too.

      So when our brains are optimized to decode and encode spatial/visual media, why are we still forcing people to write programs using a sequential system of symbols? Because we’ve been told since day one that a programming language must be written, not drawn. What are letters and symbols anyway? They are just a sequence of smaller pictures with special significance when presented in a specific order (permutation). The concepts of “before” and “after” can be represented visually. If I draw a white box under a white box of the exact same dimensions, what can you deduce from this? That you have the exact same entity repeated. This could be a block of code or a function. If you have a different entity, you give it a different shape. You can differentiate between the two visually to know what kind of results to expect from the shape. If it mutates the state of the program, that would be one; if it executes pure mathematical functions, that would be another.

      I’m not sure what you mean by my idealized GUI. GUIs are but models of an internal configuration. Existing widget toolkits are inept at representing many kinds of ideas, and the ineptitude shows worse in open-source GUIs and almost any Linux app. KDE attempts to bring some powerful features to their GUI apps, but the widget toolkits still use the same metaphors that limit the UX design. Forging new UI widgets requires–you guessed it–memorizing someone else’s code, which is individualistic in style and has a chance of not agreeing with your intuitive understanding of the structure of the software system.

      You are still thinking of the GUI as functionally separate from the code. We should be programming in one structured representation that we think in, and letting the computer handle the symbolic conversion from that static representation to a functional (runnable) representation. We already use an encoding schema via characters and ANSI codes. The problem is that these are not efficient to navigate. They require you to memorize the black box in between to understand what kinds of effects it has on the system. Like you were saying, you click a button and it works flawlessly because of the implicit assumptions the code made about its environment. You try it on a different computer or later on the same machine, and it fails, because it previously made a change to the system’s state but it depends on that stateful information to be set a certain way to operate correctly. When you are programming, do you like to memorize all the implicit assumptions that go into that black box? If so, you must really love surprises–and bugs.
