Recently I started working on a startup tech business to implement some of my tech ideas. I got a work-space on campus at the University of Memphis: a desk and a computer in a shared area where other entrepreneurs work. The computer they provided is an iMac, configured for the campus network, and it forces me to sign in with my University credentials.
To do my work, however, I need complete control over the computer so that I can install the software I use for development. My idea was to bring an external drive with me and install a Linux distribution onto it. I pulled a solid-state drive out of my desktop at home, grabbed an SSD-to-USB adapter, and installed a copy of Linux Mint (which is based on Ubuntu, which is in turn based on Debian).
I used the iMac to install the OS, and under normal circumstances, this should be no problem. Ideally, the installer (running off of a flash drive) would run in RAM and not modify the internal drives on the iMac. Installing the OS would put the bootloader onto the external drive and put the OS installation in a separate partition after the bootloader.
Unfortunately this did not go as planned. The installer gives users two options: the easy way, where you tell Mint which drive to install to, or the hard way, where you have to repartition the drive yourself according to the needs of the OS, which requires knowing what the OS is going to do before it does it, which is a catch-22. So I picked the easy option and selected LVM (Logical Volume Management) as the partitioning scheme, so that I could resize partitions easily after installing. LVM lets the user create volumes inside of it that can be resized easily and moved to other drives that are also set up with LVM.
In the advanced mode, it is impossible to set the system up with LVM, as it does not allow creating volumes inside an LVM partition.
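For the curious, here is roughly what that LVM arrangement looks like from the command line. This is a minimal sketch, not what the installer actually runs; it assumes the external drive shows up as /dev/sdb with its second partition set aside for LVM, and the volume group and logical volume names are placeholders:

```bash
# Turn the partition into an LVM physical volume, group it, and carve out
# a root logical volume (device, names, and sizes are placeholders).
sudo pvcreate /dev/sdb2
sudo vgcreate mint-vg /dev/sdb2
sudo lvcreate -n root -L 40G mint-vg
sudo mkfs.ext4 /dev/mint-vg/root

# The payoff: growing the volume later is two commands, not a repartition.
sudo lvextend -L +20G /dev/mint-vg/root
sudo resize2fs /dev/mint-vg/root
```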
In the easy installation mode, I selected the external drive, and Mint’s installer put the OS on the selected drive, but then it put the bootloader on the internal hard drive. It does this because the internal drive is marked as “bootable” (its boot flag is set). The installer assumes that the user must want to dual-boot, so it replaces the internal bootloader. Since the iMac boots using EFI (or UEFI, the newer revision), the firmware chooses what to boot from entries stored in NVRAM, a small piece of non-volatile memory on the motherboard that lists the bootable devices and remembers the default one. The Mint installer overwrote those NVRAM entries with its own entry pointing to the OS installation on my external drive. That OS installation then pointed back to the bootloader partition on the internal drive, and that bootloader would then let me choose which OS to boot.
Installer actions:
- Overwrote the NVRAM boot entries
- Overwrote bootloader on internal drive
- Pointed the NVRAM to the external drive
- Pointed external drive back to internal drive’s bootloader
The dysfunctional configuration:
NVRAM --> external drive --> internal drive
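For what it’s worth, the EFI entries the installer touched are visible from Linux and can be pruned by hand. A rough sketch with efibootmgr; the entry numbers below are hypothetical, so check the listing before deleting anything:

```bash
# List the current NVRAM boot entries and the boot order.
sudo efibootmgr -v

# Suppose the entry the installer added shows up as Boot0002 "ubuntu":
# delete it, then restore the original boot order (numbers are made up).
sudo efibootmgr -b 0002 -B
sudo efibootmgr -o 0000,0001
```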
This is *incredibly* stupid as a default. I reported this to Linux Mint on their forums, but Mint can’t fix it themselves, because they rely on Ubuntu to provide the installer code. So it has to be reported again to Ubuntu, and they might not even fix it, even though it is an obvious logic error with an easy fix.
Boot flag work-around
The internal drive can be set to “unbootable” during installation as a work-around for this installer bug. To do this, open up GParted and change the flags before installing. After installing, reboot into the Linux Live CD (or installed OS) and change the flags back.
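From the live session, the same flag change can be done on the command line instead of GParted. A minimal sketch, assuming the internal drive shows up as /dev/sda and its EFI system partition is partition 1 (check with lsblk first):

```bash
# Before installing: clear the boot flag on the internal drive's EFI
# partition so the installer leaves it alone.
sudo parted /dev/sda set 1 boot off

# After installing: boot the live session again and restore the flag.
sudo parted /dev/sda set 1 boot on
```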
Fixing the University’s Computer
I was unable to fix the University’s computer on my own. After several hours of research, the only fix I found to restore the NVRAM and bootloader was to log in as an administrator on the MacOS installation, open the “Startup Disk” preference pane, and select the internal drive again. The only other option was to re-install the operating system, which meant giving it back to the tech people, who would take two weeks to figure out a two-minute problem and then probably still re-install MacOS.
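For reference, what the “Startup Disk” pane does can also be driven from a MacOS terminal, though it still requires an admin session, so it would not have helped me here. A sketch, assuming an Intel Mac of this era with the internal volume mounted at /Volumes/Macintosh HD:

```bash
# Re-bless the internal MacOS volume and make it the default startup disk.
sudo bless --mount "/Volumes/Macintosh HD" --setBoot
```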
Most operating systems allow users to fix bootloaders by running a tool from an installer CD or USB drive. There is no such tool for MacOS.
Luckily, I managed to get someone who knew the admin password to help me fix the computer.
After Installation
After installation, Linux Mint seemed pretty good. I have tried many other distros and had many issues after installing. Mint seemed to avoid most of them, but a few major ones showed up.
First, Mint was unable to install certain programs from the app store (Software Manager), including Discord and one other program, due to a missing software package that both of them relied on. Later on the problem went away (I can’t remember why), but it was a problem out of the box.
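I never tracked down which package it was, but for the record, the usual first steps for a broken dependency in the apt system underneath the Software Manager look something like this (the package name below is just a placeholder):

```bash
# Check for and repair broken or half-installed dependencies.
sudo apt-get check
sudo apt-get install --fix-broken

# See which versions and repositories a package would come from.
apt policy some-missing-package
```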
The other major problem is the drivers. I intended to use this SSD on a few different computers, including my home desktop and my laptop, so that I can maintain a consistent development environment across different machines. Unfortunately, the drivers for hardware on several of the machines are missing or broken.
WiFi on Macbook Air 2013
The first problem I noticed was the WiFi driver on my laptop (Macbook Air mid-2013). Because of this, I cannot use Mint on the laptop at all. The internet is so integral to programming that this is a real problem.
Sound card on iMac
The sound card on the iMac also was (and is) not working. After doing some research on the issue, I found no known fix. Other reports of the same problem show different symptoms in the system programs used to diagnose it.
From the research, it becomes apparent that nobody knows what they’re doing, and fixing the problem is a mixture of political red tape, lack of responsibility, and technical incompetence.
What the real problem is
There are two real problems here:
Too many options!
First, there are too many options for how to debug the problem, and none of them are simple. Linux breaks the UNIX philosophy: “Do one thing and do it well, and write programs that work together.” Debugging a problem in Linux requires users to research too much. The commands that are relevant to debugging a problem do not do one thing and do it well, and there are too many ways to approach the same problem. This is a fundamental design flaw in modern UNIXy operating systems and should be a warning to future OS designers about what not to do.
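The sound-card problem is a good example of what I mean. Just to get a first look at it, the advice scattered across the forums has you running some mixture of the following, each with its own output format and its own idea of what the “sound device” even is:

```bash
# Half a dozen overlapping ways to ask "what is my sound hardware doing?"
lspci | grep -i audio        # the PCI bus's view of the hardware
aplay -l                     # ALSA's list of playback devices
cat /proc/asound/cards       # the kernel's view of the sound cards
pactl list short sinks       # PulseAudio's view of the outputs
amixer scontrols             # mixer controls, in yet another format
sudo dmesg | grep -i snd     # kernel log lines from the sound drivers
```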
Too much reading!
The other problem is that Linux’s debugging process is command-based. Tools on the command line are terribly inconsistent in their interfaces. The syntax for using the tools is unintuitive and finicky, and the presentation of information is typically organized according to the mental handicaps of the developer and is often overloaded with details that are irrelevant to the user. This forces users to memorize commands instead of letting them debug configuration problems by exploring and inspecting a visual representation of the system’s configuration. While the terminal is a uniform interface, the languages and syntaxes of the programs within it are wildly inconsistent and require too much reading to be used efficiently.
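To make the inconsistency concrete, here are a few everyday commands; every one of them has its own convention for what an option even looks like, and none of it is guessable, only memorizable:

```bash
tar -xzf archive.tar.gz      # single dash, bundled single-letter flags
dd if=disk.img of=/dev/sdb   # no dashes at all, key=value operands
ps aux                       # BSD-style options with no dash
find . -name "*.log"         # multi-character options with a single dash
ip address show              # no flags, subcommand words instead
```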
General principle: Linux is not brain-friendly
The general principle behind “Too many options” is that Linux is not compatible with how the brain learns to do things. Likewise, the general principle behind “Too much reading” is a combination of “too much memorization” and “too many irrelevant details”. The UNIXy command lines are hard on the brain’s auditory cortex, temporal lobes, hippocampus, and prefrontal cortex (PFC), and they do not make use of the visual cortex efficiently.
What do these pieces of the brain do? From Wikipedia:
The auditory cortex is the part of the temporal lobe that processes auditory information in humans and many other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing, such as possible relations to language switching.
Auditory Cortex
The temporal lobe consists of structures that are vital for declarative or long-term memory.
…
The temporal lobe is involved in processing sensory input into derived meanings for the appropriate retention of visual memory, language comprehension, and emotion association.
Temporal Lobe
The hippocampus (from the Greek ἱππόκαμπος, “seahorse”) is a major component of the brain of humans and other vertebrates. Humans and other mammals have two hippocampi, one in each side of the brain. The hippocampus is part of the limbic system, and plays important roles in the consolidation of information from short-term memory to long-term memory, and in spatial memory that enables navigation.
Hippocampus
The visual cortex is powerful
The visual cortex is the strongest part of the brain for processing data. For example, videos, demonstrations, and diagrams are very effective teaching tools, and the world around us is full of visual stimuli that must be processed quickly and efficiently in order to respond and orient ourselves in a complex environment. The only way to do this is with vision. A simple test: walk outside, close your eyes, and try to find your way around for a few hours. It is incredibly slow and error-prone.
Vision is much more powerful than other senses:
In the brain itself, neurons devoted to visual processing number in the hundreds of millions and take up about 30 percent of the cortex, as compared with 8 percent for touch and just 3 percent for hearing.
The Vision Thing: Mainly in the Brain, Discover Magazine
The UNIX command line and most programming languages are ineffective and slow tools, because they do not make use of the brain’s ability to process visual inputs quickly in parallel and contextualize large amounts of detail into a cohesive idea. The visual cortex is the largest single processing system in the human brain, with visual processing occupying roughly 30 percent of the cortex. Humans evolved to process visual stimuli with massive parallel processing, so it is inefficient to have to sit and read one character, or even one word, at a time. This bottleneck is partly why GUIs were invented.
So, the next time you pop open your command line or a text editor, ask yourself, “do I want to use 3% of my brain or 30%?”.
Linux programmers are stupid
Because of the Linux community’s inability to grasp these basic psychological concepts, Linux will forever be crap. Linux programmers are so smart that they are in fact stupid. While they sit and worry about every detail of their software’s architecture, they ignore the brain’s architecture and its limitations and abilities.
Details
- Operating System: Linux Mint 19.2 Cinnamon
- Cinnamon Version: 4.2.4
- Linux Kernel: 4.15.0-66-generic
Stop using BASH, the shell is old as dirt and, as far as visual interfaces go, pretty archaic; try zsh or fish instead.
Use apps that use ncurses to bring some visual appeal to the command line (htop, ncdu, calcurse, midnight commander / mc, bmon, etc.)
“ranger” gets one around the file structure without too much hassle
“tldr” gets one past the lengthy (and rarely updated) Linux man pages
“bat” brings syntax highlighting and default enumeration to cat
“micro” solves the wonkiness of Vim but still gives you syntax highlighting, or just use nano
Developers aren’t focusing on HI (human interface) design principles with Linux, but speed and other concerns.
If you have a complaint about something not being visual enough, develop an ncurses solution to fix it.
You could have avoided this mess by using Oracle virtualbox, or just installing Homebrew for Mac
I agree “out-of-the-box” Linux isn’t meant for humans; Linus Torvalds didn’t consider this a necessity. Ian Murdock, who created Debian, created it as a side experiment while in college, as I recall.
Business loves free. The C programming language is “free”; because of C, Linux was born, then Make, then Ant, and all the other stuff. C is a funky language, which is why Java was born; Java runs on a virtual machine with poor memory management, which is why Go was born, etc. etc. ….. so it all goes back to C, which was never considered a language to be used outside of a lab (C was developed within the labs of AT&T).
When things are “free” you get what you get.
Remember Beethoven and Mozart (the composers)? Rich families let them live in their mansions for free and paid them a salary just to write beautiful compositions. If this were done for developers, your issues wouldn’t exist.
I love Linux, but I also know it’s free, so I know it’s as good as it gets
Developers are shooting themselves in the foot if they are not focused on HCI principles. They are humans too, and every developer was a user at one point (except a few). This is why I say Linux developers are idiots. They’re sinking their own ship. I don’t want to wear their old smelly socks or go looking for their 1975 paper on “whatever” in an office with papers strewn across the entire room and stacked to the ceiling in an arbitrary fashion. They’ve created a complexity problem, and every attempt to solve it has resulted in more complexity, because they won’t address the root problem, which is the nature of the abstractions they have used. People are content to create work-arounds for problems, and then create work-arounds for the work-arounds when the first work-arounds introduce more problems than they solve.
I appreciate the suggestions. However, despite how good the advice is, the problem is a human efficiency and sociological one. We were sold a lie a long time ago that technology makes less work for us by solving problems, and it has failed to deliver on this: all it has done is re-route the work to specialists who memorize their particular answer to a particular problem, because it generates more complexity (entropy), making the system as a whole less understandable to the individual programmer. This is why software keeps hitting an upper limit on how useful it is and how well-made it is. Software just sucks, because we as humans keep holding on to old broken abstractions and assume that if it ain’t our problem, someone else will fix it. It’s a lazy and irresponsible approach to system design.
“You could have avoided this mess by using Oracle virtualbox, or just installing Homebrew for Mac”
I chose a workspace with a desktop so that I can stay on task better. I didn’t want to terraform the University’s computer into a Linux machine, because I thought it’d be nice for others to have access to the computer using their Uni credentials when I wasn’t there.
“Developers aren’t focusing on HI (human interface) design principles with Linux, but speed and other concerns.”
Yes, but this is the problem. They are imposing a learning overhead on later developers (downstream and new programmers). This is a compromise that is arguably worth it in the short run, but really not worth it in the long run. These languages discourage discoverability (conceptual transparency).
I used black-box reverse-engineering strategies to derive the principles of computer science when I was in middle and high school, but this did not teach me how to program in C, because C’s execution model involves non-scientific, arbitrary characteristics. You have to learn the execution model from someone who knows how it works, or be especially good at picking up computer science concepts through observation, and certain details are still not discoverable even with the scientific method or a black-box reverse-engineering strategy. I did not have a teacher who explained these specific implementation details until I was in college. I did not have the internet to teach me, either, and I believe the internet is a distraction and cannot be relied upon as a teaching tool. I observed various computers’ and a calculator’s behaviors, tested theories, and made deductions. Yet, even after being fully proficient at programming a structured language on the calculator, I still could not transfer that knowledge over to the C language without a teacher explaining what the stack and heap were, so the C language is not discoverable.
Yes, we have code samples online today, but code samples do not explain the execution model and are therefore not comprehensible. Yes, we have developer docs, but developer docs are written for experienced developers, in a manner that is not suitable for the learning process. Books can work, but books require too much effort to read. When someone explains something they are highly familiar with to someone who is completely unfamiliar with it, it doesn’t work, because of a cognitive bias called the Curse of Knowledge: the skilled teacher can’t remember what it was like to not understand the concept. This is why it is so difficult to find excellent teaching materials for difficult concepts. I struggle with this bias in my writing as well. I attempt to compensate for it, but inevitably the ball will be dropped. https://www.psychologicalscience.org/observer/the-curse-of-knowledge-pinker-describes-a-key-cause-of-bad-writing
Agree. The entire software industry is broken due to Linux/Windows dominance … Stuck in 1990…
I appreciate you leaving a comment. Yeah we’ve been using mouse and keyboard exclusively on the desktop for 50 years, and that one blows my mind. Touch interface hasn’t been integrated into the desktop environment properly. Microsoft tried and failed to implement it in a meaningful way, and Apple made a tepid attempt with the Macbook touch bar, received negative feedback due to their lack of vision for it, ditched it altogether, and never broached the topic on the desktop where it would have made more impact for certain workflows.
This rant sounds like trolling, so of course I will bite and reply. I am a long-time Linux user, and I have suffered through most of the frustrations that you describe, but I think your conclusions are quite wrong. You’re making a false dichotomy by pitting a faulty command-line-based system against an idealized, perfectly working GUI. In reality, the polished GUI systems work because e.g. Apple controls the entire environment and makes it work. You, however, want to boot your system on multiple machines—and Linux basically tries to do that and almost succeeds, whereas MacOS or Windows would never be able to do that without stripping the OS to bare essentials using the equivalent of MS sysprep.
Yes, Linux is quirky, and the boot process is doubly so, what with all the different BIOS/UEFI/partitioning schemes. But it mostly works, and it gets better over time.
Here’s what I have seen happen many times with your idealized GUI: you click a button, and it works flawlessly. You try it in a different situation, and it doesn’t, and there’s nothing you can do about it, because the symptoms are hidden behind the pretty graphical window. Easy things are super easy, but hard things are impossible.
In contrast, on an old-style system, you can see the individual steps and their results, and you can tweak them and adapt. You have a fighting chance. Yes, you have to use your analytical brain, but that’s the deal: someone has to do the hard work, and unless someone else did the work for you perfectly, your visual brain is not going to help you fix your non-visual computer.
“This rant sounds like trolling.” Actually, your response to this is trolling. Maybe you would feel a hit of dopamine in seeing that your comment was approved, but don’t bother thanking me. I chose to publish your response and so many troll ones like it just to show how ignorant the average programmer really is. It’s your funeral.
I’m saying there is an equivalence between the capabilities of graphical representations/diagrams and strings. The world you live in is a giant string. Your brain is equivalent to a Turing Machine in processing capabilities (with limitations on the tape size). This equivalence can and needs to be applied to computational devices that we use. You have to ask yourself, how does your brain process your environment so quickly? It’s due to massive parallel processing capacities in the visual cortex. Instead of forcing your brain to perform a single-threaded operation such as reading text characters and words sequentially and meeting the machine at the string level, you need to force the machine to meet your brain at the pictorial level where it can do massively parallel processing and resolve ambiguities faster.
The two are equivalent, but your brain processes one faster. As a test of your brain’s ability to automatically sequence large amounts of data, ask yourself: how do you know what a web designer intended for you to see on a website? Answer: Your brain decomposes the website into elements and forges relationships between them based on a “design language” that the designer implemented when they thought about how the website should look. Some design languages are more difficult to use than others, which is why your typical PHP bulletin board forum is a nightmare to use, while Google Docs and most video-games are a joy to use. It is no accident that they call this a design language.
It is operationally equivalent to a computer language, but instead of sequential lines, your brain picks out arbitrary points on the page as candidates for where to start processing the rest of the page’s contents (it has to start sequencing things only because your short-term memory fills up quickly). In order to do this, it allows multiple neuronal circuits within your brain to respond to the input stimulus, and either reacts to the inputs based on arbitrary weights and biases rooted in human perceptual architecture (such as color brightness, saturation, hue, shape, and the associated personal and cultural significance of those aspects of perception), or it proactively allocates mental resources and attention to the ones it expects will be the most useful.
To do that, it uses dopaminergic pathways to form a prediction, expectation, or belief about the importance of those heuristic selectors (brightness, saturation, hue, shape, etc.), and then boosts the amount of attention allocated to those selectors (weights and biases implemented as neurotransmitter signalling frequency and amount), as well as to the supporting processing loops. Essentially, your brain ponies up a bunch of energy and neurotransmitters without any guarantee that they will be used. If they get used, it reinforces the strength of the neural circuit. If they don’t get used, the circuit is not reinforced, and the brain has to clean up the mess with neurotransmitter reuptake molecules. It also has to clean up after the metabolic side-effects of that anticipatory response.
This is why you experience the anticipation of a reward as better than the reward itself, after the first time you experience that reward. Forging the neural computation path or association is more exciting than simply repeating the path a second time. For some people, this novelty experience is enjoyable. For other people, it’s not as enjoyable. It may have something to do with their natural levels of dopamine, which would explain why depressed people don’t enjoy things as much. They’re “stuck in a rut”, so to speak, repeating the same neural pattern without getting a reward for it.
Expressions in programming languages, by contrast, require you to parse them sequentially (typically left to right). The fact that your web page is a sequential program but is parsed and reconstructed by your brain using a non-sequential methodology demonstrates that strings and spatial representations are one and the same. This means the reverse is also true: spatial representations get converted into strings in a non-sequential manner, but a “sequencer” within the brain resolves the dependencies and ambiguities between the outputs created by the non-sequential processing programs.
Because our brains are optimized to decode and encode spatial/visual media, you have to ask yourself, “why are we still forcing people to write programs using a sequential system of symbols?” and “why are we forcing people to consume their program output with the sequential reasoning part of the brain, via a terminal?”
The answer is because we’ve been told since day one that a programming language must be written, not drawn. What are letters and symbols at their most fundamental? They are just a sequence of icons. When presented in a specific order (permutation) they represent other concepts that can be represented via diagrams.
To see how a small programming language snippet correlates to a diagram, consider the concepts of “before” and “after”. These can easily be represented visually in various forms:
For example, if I draw a white box under a white box with the exact same dimensions, what can you deduce from this? That you have the exact same entity repeated. Your brain identifies an object, notices it repeats (matches the visual contract against a second object and either accepts or rejects it as the same), and then provides this mental program or the repetitive essence of it as an input to another program that replays and checks the internal structure of the first program. The checker program disrupts the original program at various points by messing with its signalling (neurotransmitter frequency and levels). That is the process of analysis: your brain takes a model and tries to disrupt its own program or break it. It identifies different kinds of feedback received from the program when it disrupts it and compares the outputs at different points.
To use the above example to improve language constructs, the white box could represent a block of code or a function. If you have a different kind of entity, you give it a different shape. You can then differentiate between the two visually and know what kind of results to expect from each shape: if it mutates the state of the program, that would be one shape; if it executes pure mathematical functions, that would be another.
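In today’s text-only languages, that distinction is invisible until you read the body of the code. A small sketch of the two “shapes” in plain text (the function names are made up), just to show how little the surface form tells you:

```bash
counter=0

bump_counter() {            # the "state-mutating shape": changes a global
    counter=$((counter + 1))
}

square() {                  # the "pure shape": output depends only on input
    echo $(( $1 * $1 ))
}

bump_counter
echo "counter is now $counter"      # counter is now 1
echo "7 squared is $(square 7)"     # 7 squared is 49
```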
I’m not sure what you mean by my idealized GUI. GUIs are but models of an internal configuration. Existing widget toolkits are inept at representing many kinds of ideas, and the ineptitude shows worse in open-source GUIs and almost any Linux app. KDE attempts to bring some powerful features to their GUI apps, but the widget toolkits still use the same metaphors that limit the UX design. Forging new UI widgets requires–you guessed it–memorizing someone else’s code, which is individualistic in style and has a chance of not agreeing with your intuitive understanding of the structure of the software system.
You are still thinking of the GUI as functionally separate from the code. We should be programming in one structured representation that we think in, and letting the computer handle the symbolic conversion from that static representation to a functional (runnable) representation. We already use an encoding schema via characters and ANSI codes. The problem is that these are not efficient to navigate. They require you to memorize the black box in between to understand what kinds of effects it has on the system. Like you were saying, you click a button and it works flawlessly because of the implicit assumptions the code made about its environment. You try it on a different computer or later on the same machine, and it fails, because it previously made a change to the system’s state but it depends on that stateful information to be set a certain way to operate correctly. When you are programming, do you like to memorize all the implicit assumptions that go into that black box? If so, you must really love surprises–and bugs.
And, just like all of the other trolls who posted here, you are a coward, hiding behind a fake username and email:
life@mailinator.com
IP: 129.6.122.133 Your location: Germantown, Maryland https://www.speedguide.net/ip/129.6.122.133
“Windows and Mac Are Awful, and There’s Nothing You Can Do About It”
Wait, there is, you can install Linux
Such a valuable comment. Not on topic. I bet you didn’t even read.
You are a coward hiding behind a fake email and username:
bryanjohnson@wordpres.com
Your IP: 82.118.21.59 Location: https://www.speedguide.net/ip/82.118.21.59
Ah, I see it says you’re from Poland. More like Trolland