Title: Multihead Display Part 2
Date: 21st April 2021

I managed to get it so startx will automatically detect if I have an extra monitor plugged into my laptop and, if so, extend my screen and set the resolution. But it was a bit more time-consuming than I expected.
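
In the end the detection itself boils down to a few lines of shell. Something like this sketch captures the idea (the output names LVDS1 and VGA1 are placeholders for the laptop panel and the external monitor; `xrandr --query` tells you what yours are actually called):

```bash
#!/bin/sh
# Sketch only: LVDS1 (laptop panel) and VGA1 (external monitor) are
# placeholder output names; run `xrandr --query` to see the real ones.
if xrandr --query | grep -q '^VGA1 connected'; then
    # External monitor plugged in: extend the desktop to the right of the panel.
    xrandr --output LVDS1 --auto --output VGA1 --auto --right-of LVDS1
else
    # Laptop panel only.
    xrandr --output LVDS1 --auto --output VGA1 --off
fi
```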

Point of Pain: Xorg configuration - where am I supposed to configure my startup settings? It seems like there are a million and one places. I ended up putting my function call in /etc/X11/xinit/xinitrc.d/.
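
As far as I can tell, the stock xinitrc sources every executable *.sh file in that directory when startx runs, so the hook itself is tiny. Something like this (the file name and the path to the layout script are made up for illustration):

```bash
#!/bin/sh
# Hypothetical hook file: /etc/X11/xinit/xinitrc.d/80-monitor-layout.sh
# Executable *.sh files here get sourced by the default xinitrc when
# startx runs, so all it has to do is call the layout script.
# The path below is illustrative.
/home/me/bin/monitor-layout.sh
```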

Not actually a pain: Generating the xrandr script which sets up my monitors. Why wasn't this a pain? Even though it pains my command-line-loving soul to admit it, it's because I configured it through the GUI program arandr and used the layout to generate the output script. When it comes to positioning your screens relative to each other, the GUI is the ideal medium. That said, some of the ways in which it's easier are only easier because we just accept substandard scripting environments.
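
For anyone who hasn't used it: saving a layout in arandr just writes out a small shell script wrapping a single xrandr call. The result looks roughly like this (the output names, modes and positions here are illustrative, not my actual values):

```bash
#!/bin/sh
# Roughly what arandr saves: one xrandr call describing every output.
# Output names, modes and positions below are illustrative.
xrandr --output LVDS1 --mode 1366x768 --pos 0x0 --rotate normal \
       --output VGA1 --mode 1920x1080 --pos 1366x0 --rotate normal
```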

For example, setting the resolution is easy in the GUI: right click -> resolution -> pick your resolution. On the command line there's a whole rigmarole of finding the right command, working out the right resolution argument, and so on. How could we do better?
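
To be concrete about the rigmarole, the CLI route today goes something like this (the output name and mode are placeholders again):

```bash
# Dig the output name and its list of valid modes out of xrandr...
xrandr --query | grep -A5 '^VGA1 connected'
# ...then set the one you want.
xrandr --output VGA1 --mode 1920x1080
```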

Again a pain:
Scripting anything in Bash. Yes, it's flexible, but god damn is it a mess. Obviously it's always going to be tricky to do something in an unfamiliar language, but I'm not sure I want to clutter my brain with something so syntactically wretched.

As a side thought, writing a Bash script is much more of a pain than working interactively at the CLI. At the CLI, files and folders really are first-class objects that you can auto-complete on. All of that disappears in Vim. Also, it really is a pain to copy stuff from the CLI to Vim and vice versa. Didn't they put a shell mode into Vim? You can't stop the bloat when your interfaces don't compose nicely.

Final Thoughts

Many would argue that a GUI is easier than a CLI. That's definitely true in this case, although I keep asking: does it have to be? The GUI is fine up until you need to do something that wasn't already planned for, at which point scripting offers complete freedom. But then maybe most of the common cases will get covered off in the GUI and you can just work around the rest. Or maybe our minds are blighted by the limitations of the interfaces we use, and dramatically reduced friction would radically change the things we want to do with a computer.

* A soundboard is less flexible than the spoken word.
* GUIs haven't really panned out for programming languages.
* We need more responsive languages, better CLIs, more composable programs.

How should this multi-screen thing have played out? I start typing:

>Dis

This automatically completes to:

>Dis[play] -> automatically showing relevant info:

LCDA1: Resolution / orientation / available commands
VGA1: Resolution / orientation

Display.Update  
{
    LCDA1 { Resolution:<valid choices>, orientation:xxx }  
    VGA1 { Resolution:xxx, orientation:xxx }  
}

You hit tab to jump between argument positions, ctrl-n to switch between valid inputs to those arguments or just start typing with auto-complete. Perhaps as you fill in valid arguments you can update the screen immediately while keeping the command in place beneath the cursor.

When you are happy:
find object:Display in:Startup
-> bring up the section with the current display setting
-> select -> paste from command buffer -> wq

and we are done.