X11 Must DIE

Unix, C, computing like it's 1980.

Month: October, 2013

eyestrain: it’s a pain

killx with zenburn theme

One of the few things that I’ve been working on recently is actually one of the more fun projects I’ve done in a while.  Luckily, I’ve had some great help (thanks, Wuxmedia and MachineBacon), and the work has been playful in some ways.  It’s a simple little theme switcher for console and framebuffer systems.

One of the more useful parts of this project is that some of the themes are lower contrast, and lower-contrast color themes can help to reduce eyestrain.  To quote the medical community: “This type of eye fatigue or eye strain is sometimes known as computer vision syndrome. It affects about 50%-90% of computer workers. Some estimates say computer-related eye symptoms may be responsible for up to 10 million primary care eye examinations each year.”

 

“The problem is expected to grow as more people use smartphones and other hand-held digital devices. Research shows that people hold digital devices closer to their eyes than they hold books and newspapers. That forces their eyes to work harder than usual as they strain to focus on tiny font sizes.”

“Digital devices may also be linked to eye fatigue because of a tendency to blink less often when staring at a computer screen. People usually blink about 18 times a minute. This naturally refreshes the eyes. But studies suggest that people only blink about half as often while using a computer or other digital device. This can result in dry, tired, itching, and burning eyes.”

So, allow me to suggest finding a theme that helps you avoid eye strain.  Even if you’re not using a harsh text-only system, mild colors are good if you’re going to be staring at a screen for an extended period of time.  The vt100 is certainly hip, but it’s good to take advantage of some modern conveniences if they help you work longer on fun projects.
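For a taste of how console theming works under the hood, the Linux console lets you redefine its palette with the escape sequence documented in console_codes(4).  Here is a minimal sketch in C; the hex values are zenburn-ish approximations, not our project’s exact palette:

/* Recolor Linux console palette entries using the
   ESC ] P <n><rrggbb> sequence from console_codes(4).
   The colors below are illustrative approximations. */
#include <stdio.h>

int main(void)
{
    printf("\033]P03f3f3f");   /* entry 0 (black) -> soft dark grey   */
    printf("\033]P1cc9393");   /* entry 1 (red)   -> low-contrast red */
    printf("\033]P7dcdccc");   /* entry 7 (white) -> muted off-white  */
    fflush(stdout);            /* push the sequences out to the console */
    return 0;
}

Note that these sequences work on the actual Linux virtual console; terminal emulators under X have their own theming mechanisms.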

Should you like to try out some themes (not all are low contrast), you are welcome to check out our project.


Keep Commits Clean (a pet peeve)


Having just spent the last few hours reordering and recommenting a little group project, I felt the need to bring up a few things that drive me totally nuts about the FOSS world from time to time.  I am not particularly obsessive-compulsive, but there are a few things that I think everyone who works on a project should take into consideration.  If I’m getting help or working as a team on a project, I don’t care if someone has taken a bath today, as I’m probably working thousands of miles away from them.  I don’t care if their house is a mess.  I DO care, though, if their code is clean.

There are standards in place for many languages, and there are agreed-upon “best practices,” but some projects fall outside of those areas.  If there is no agreed-upon standard, then it’s up to the team working on the project to make one.  All that should matter is that there’s a standard, and not just “code that the compiler/interpreter will accept.”  In many cases, if someone’s work is worthwhile, I’ll accept a pull request and try to clean up some variations on my own.  Trailing whitespace is acceptable if the work is significant, so long as I can tell that someone was simply too occupied with solving a problem to pay attention to these little details.  If it’s a pointless commit, and it has disorganized comments, useless comments, poor standardization, or unreadable formatting… I won’t even bother with it.

For example:

void my_funtion (int x, int y)
{
    int y, x;
 int f;
/*for loop*/
for (x= 0; x <10; x++)
 do this to int f;
      also do this;

This is an obviously exaggerated example, but I hope that you can see why this would drive someone totally insane if it were committed to an otherwise well-formatted program.  For contrast, a cleaned-up version of roughly the same fragment follows.
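Since the original is deliberate pseudocode, the loop body below is hypothetical; the point is only the consistent layout and comments that earn their place:

/* Same fragment, tidied: one indentation style, no redeclared
   parameters, and a comment that says why instead of what. */
int my_function(int x, int y)
{
    int f = 0;

    /* hypothetical body: accumulate y into f ten times */
    for (x = 0; x < 10; x++)
        f += y;

    return f;
}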

So what are the main things to be concerned with before pushing a commit?

1). That your patch works and hopefully doesn’t cause issues for other users.  Never break something that works.  Test, test, test.

2). That your patch follows the project’s format, and that it is internally consistent as well.  Even if you hate K&R, if that’s what the project you’re patching uses, then I’d suggest maintaining the pattern.  On the other hand, you might wish to offer to clean it up for the rest of the team.

3). That you’ve deleted trailing whitespace from your patch.  This may be a huge pet peeve, but it drives me totally nuts when I switch to whitespace mode in emacs.  It’s simply wasted space.  As a fun fact, many Linux kernel developers get started by removing whitespace.

4). That your comments make the patch easier to understand.  You should always strive to write comments that add value.  I’ve seen people use a comment to explain how something in the language works.  There’s no reason to do this for a patch, as you can assume that the author knows how the language works.  On the other hand, they may not understand what you’re hoping to accomplish.

5). That your patch is useful and elegant.  Always attempt a samurai-like approach to coding.  If you can make that perfect cut with a minimal piece of code, and it makes exactly the right difference, then people will appreciate and remember your work.

The projects that we work on may die out, or they may be forked and picked up by others.  Always remember that there is the possibility that your work won’t end with you.  Try to make life easy for those that come behind you.

Good Ideas: Good Everywhere.

Design Your Death-Star without Exhaust Ports.

One of the principles that I am and have long been a big fan of is simplifying a system down to only essential components without sacrificing usability.  While this is a fantastic strategy for surviving unstable and experimental distributions at home, in a corporate environment it has the added benefit of longevity.

While it’s always a temptation to simplify the work itself by using newer and more interesting tools and libraries, designers should keep in mind that the Unix platforms grow and evolve at an exponential rate.  When I see a project that has been out of development for the last 7-8 years, but that still installs and runs on even the most modern of systems without any modification, I credit the developer with having great foresight.  Sure, there is now a lot of designed-in backwards compatibility, but this only extends to projects that have reached a level of popularity sufficient to create a substantial need for long-term support.  Gtk2, for instance, is not something that people were willing to part with just because gtk3 came along.  Eventually it too will go the way of the dodo.

There are tons of interesting technologies being worked on currently, especially in the display server world.  While I’m all for developing for newer platforms, I don’t think that this is a wise decision if you’re doing so with corporate platforms in mind.  The last thing that any company is looking for is to invest time and money into a prospective project that will be deprecated in 6 months.  This is where simplicity in a design can be of great benefit.  ANSI C and shell scripting have stood the test of time.  Both have seen modifications over the years, but they’ve proven to be a wonderful way to minimize external influence on how a particular program survives.  Systems designed on this very principle have outlived far more popular MS-Windows-based programs, and that’s a huge benefit to anyone looking to maintain functionality regardless of the shifting tides of technology.

I cannot count how many times I’ve personally worked on systems designed on a curses platform, even today.  They still work on the hardware that they were originally designed for, and can often be ported forward for a very long time.  Even the most modern of companies will not invest in a new project if their current solution is perfectly successful.  As a designer, this can be a two-edged sword.  On one hand, if you’re basing your design on having a chance to rewrite it every few years (engineering job security), then it might seem like a bad idea to make something that will possibly still work perfectly 15 years in the future.  On the other hand, it’s a competitive market, and designers who provide solutions that are elegant and rock solid will not be forgotten.  By designing a firm foundation, you are allowing your work to speak for you.  While you may not be needed for long-term support, the companies who benefit from such design are far more likely to approach you for new projects.  This is how I personally would prefer to work.

Would you rather be called with “Your product broke again…” or “Since the last project was so successful, we were wondering if you would be willing to work on….”?  I know that I’d prefer the latter.

So, by sticking to the standards of design that have proven themselves over time, you not only create an easily maintainable system, but also display a wisdom about the workings of the technology industry.  Plus, all of the time that isn’t spent trying to fix a broken project can be spent on developing better answers to new problems.  This pushes the market forward, allowing newer projects to grow from your decision to stand on the shoulders of giants.

killx, rsi, and the ‘mother of invention.’

rsi & /dev/fb0

This was a much longer post, but after reading it again, I decided that I hated it.  So, let me make a simple request for everyone who reads this:

Do something new, that you suck at currently.

I don’t care what it is.  If you write in C, use Lisp.  If you write Bash, try Python.  If you have mastered all forms of coding, go outside and play some football.  Just don’t do what you’re good at.  Try something that makes you have to work to get it done.

My final thought on the matter is that it’s good to get out of your box and try new things to fill a need that you have.  It doesn’t matter if anyone else benefits from your work so long as you actually gain some quality of life from it.  A friend of mine said, “I believe that most musicians make music for themselves, and that being heard is just a bonus.”  Make it your goal to make software that YOU enjoy and that fills YOUR needs.  If nobody else gets it, then that’s fine.  In the FOSS world, it’s not like most of us are making money at it anyhow.  You might as well enjoy the ride.

I hate Lua with a passion almost equal to how I hate Java.

local local local local = print

 

Lua is a fantastically easy-to-learn scripting language that is easily embedded into other programs via its C/C++ bindings and is under a BSD-style license.

Now that we have that out of the way, let’s talk about my intense loathing for Lua.  It has no strict structural conventions; anything that parses is accepted.  Enforced structure is one of the major advantages of Python, because a scripting language should make sense to anyone who knows how the language works.  Lua throws the “one right way” out the window for a much more “however you feel like doing it” approach.  Since it is often one of the scripting languages that people learn early in their programming careers, it almost breeds bad practices.

Add to this the fact that it defaults everything to global scope, and you’ve got a few really interesting options: use global scope for everything (a horrible idea, as it’s slow and creates serious debugging issues), or do as most Lua programmers do and explicitly type “local” every time you wish to use a variable locally.  This leads to great complications when dealing with lower-level languages, where most variables should be processed locally and then passed by value.  Python’s functions handle the passing of global and local variables in a much more standard way.  As each is a scripting language, this isn’t a big issue… but one leads to a far more reasonable understanding of lower-level languages.  Standardized practices make learning new languages far easier.  Also, polluting the global namespace is disgusting.
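Since I opened by praising how easily Lua embeds into C, here is a minimal sketch (Lua 5.1/5.2-era API) that also demonstrates the global-by-default scoping; the variable names are, of course, made up:

/* Embed Lua in C and show global-by-default scoping.
   Build with something like: cc demo.c -llua -lm -ldl */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();   /* a fresh interpreter state */
    luaL_openlibs(L);                 /* load the standard libraries */

    /* 'leaked' has no 'local', so it lands in the global table */
    luaL_dostring(L, "local kept = 1; leaked = 2");

    lua_getglobal(L, "leaked");
    printf("leaked = %d\n", (int)lua_tointeger(L, -1));  /* prints 2 */

    lua_getglobal(L, "kept");         /* nil: locals never escape */
    printf("kept is %s\n", lua_isnil(L, -1) ? "nil" : "global");

    lua_close(L);
    return 0;
}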

Lua arrays often begin at ‘1’, which makes no sense to anyone who has ever studied even the least bit of computer science.  I realize that this is not required, and that it is due to the programmer having the freedom to start the index wherever they wish, but let’s not get in the habit of pretending that 1 is the first number that we count from in computing.  Nulls are hard enough to explain to people coming from a scripting language to a medium-level language without having to remind them that 0x00000000 is a valid memory address, and that we start counting with 0.

The biggest thing that I hear in defense of Lua is that it’s “high performance” as long as you don’t allow it to write to the global namespace and actually handle the locals correctly.  While this is true, it’s still a scripting language.  It’s not as fast as a decent compiled language, and it lacks the power and flexibility of either Python or Lisp.  I have yet to determine the need for so many different ways to skin a cat, and one that promotes bad coding while being marginally used at best is certainly not the way to win me over as a fan.

Things that are worth the Bloat #3: emacs

OH NO, I’m using X!

This one is not going to fit for everyone, and I’m not going to cover everything that is possible with GNU Emacs, because this would be the longest post in the history of posts.  I will leave a few links, and you’re more than welcome to start the journey on your own should you so desire.

The reason I have emacs on every single install of any OS that I have is that it’s so much more than just a text editor.  As I mentioned before, vim is, in my opinion, a more efficient way to deal with huge blocks of code.  What emacs does well is… well, everything.  When working outside of X, it is totally reasonable to never leave emacs.  I guess that working in X would apply the same way, but I often only install the no-X build of emacs, and since I’m very comfortable with using keybinds, I don’t need any menu support.  I tend to use what I have regardless of where I am and what I’m doing.

Still, having emacs is the one constant between my Sid/BBQ/killx/Arch/whatever installs.  I keep my emacs init file pushed to a git repo so that when I’m starting work on a new project, I can install the base package, curl my files down from the cloud (if I don’t plan to install git), and be in a familiar working environment.  In a graphical or cli environment, emacs provides me with file management, syntax highlighting, irc, email, RSS feed reading, a calculator, a basic lisp interpreter, frames and windows, and a myriad of other tools.  With a few simple plug-ins, it becomes a multi-tool for doing everything.  (I strongly recommend emacs-w3m for web browsing if you’re looking for technical documents.)

It may not be the BEST tool for each task, but it’s a huge toolkit for many individual tasks.  This makes it one of the things that I am truly not sure I could do without.  The learning curve is pretty steep.  Oh, it’s simple enough to just edit text, but once you start shifting buffers and launching scripts from it, things do increase in difficulty a bit.  One of its biggest strengths is that it is actually configured via emacs-lisp, a legitimate lisp dialect similar to common-lisp.  This allows it to be nearly infinitely extensible.  This is a big advantage when using a multi-tool.

I won’t bore you with the minute details, but if you’re willing to take the next few years to learn what may be one of the most flexible tools in existence, check out the emacs wiki.  There should be plenty to keep you busy.

Static vs. Dynamic Linking


Another Wuxmedia FBterm Shot

…nothing to do with this article, but it looks nice.

One of the factors in how most Unix-like operating systems work is that binaries are normally built using dynamically linked libraries.  For those who are unaware of what this means, it’s a standard for pulling in C/C++ libraries that can either be linked at compile time or loaded and unloaded during execution.  The traditional naming for these is libfoobar.so.  They allow a great deal of flexibility in how they can be used, and since a single library can be shared across multiple binaries, a significant amount of disk space can be saved.
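As a minimal sketch of the “loaded and unloaded during execution” case, here is the classic dlopen(3) pattern; libm’s cos() is used only because it’s a conveniently universal symbol:

/* Load a shared library at run time, look up a symbol, use it,
   and unload it again.  Build with: cc demo.c -ldl */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* look up the symbol and cast it to the proper function type */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);   /* unload when finished */
    return 0;
}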

Static linking involves creating libraries that are linked with, and thus become a part of, the code for the application.  If you take the time to read the IBM AIX documents, you’ll notice that they state “You can reduce the size of your programs by using dynamic linking, but there is usually a trade-off in performance. The shared library code is not present in the executable image on disk, but is kept in a separate library file. Shared code is loaded into memory once in the shared library segment and shared by all processes that reference it.”  The traditional naming for these is libfoobar.a.

I won’t go into the specifics of how to create the different types of libs, because if you’re writing libs, then you probably already know how to link them using compiler flags.

The question, and one that I’ve been wondering about for a good while now, is: how substantial would the difference in speed be if you converted the entire GNU/Linux core system to statically linked binaries?  I’ve seen a few projects take on this monumental task, but there has yet to be the kind of support I would have hoped to see.  Starch Linux is an interesting idea, but one that I’ve yet to check into.  I did notice that Sta-li looks to be in development, and since they’re already using many tools that I prefer in my own environment, I am looking forward to the end product.

One of the other factors that I see as highly important in this endeavor is the movement towards using musl instead of the Glibc that we’ve all come to know far too well.  Musl is known for being far simpler and smaller than the older GNU libraries.  For a decent bit of reading, check out http://www.etalabs.net/compare_libcs.html to see a side-by-side comparison of the different libraries.  What makes this particular choice important is that the individual statically linked binaries may actually not be significantly larger (and could possibly be smaller) than their dynamically linked counterparts.

Still, as a programmer, which should you use?  Once again, the IBM AIX documents have a really good suggestion.

One method of determining whether your application is sensitive to the shared-library approach is to recompile your executable program using the nonshared option.

If the performance is significantly better, you may want to consider trading off the other advantages of shared libraries for the performance gain. Be sure to measure performance in an authentic environment, however. A program that had been bound nonshared might run faster as a single instance in a lightly loaded machine. That same program, when used by a number of users simultaneously, might increase real memory usage enough to slow down the whole workload.

So, when in doubt… test it.  (On a GNU toolchain, rebuilding a test binary with gcc -static is a quick way to make the comparison.)

RSI: Let’s do a quick install

Not the same as Yesterday’s Shot.

For all of my love for killx, I’ve often found myself wishing for a way to work on some of the things (bash scripts, little binaries, etc.) that I use with it, while still being able to test them as I would on killx.  Today, I decided to try a quick way to get a similar environment without having to go back and rebuild all of the libs that I use.

I had mentioned LinuxBBQ-rsi in a previous post, and already had a USB drive that had been dd’ed with the image on it.  I took a few minutes to use cfdisk to cut out a new 11GB partition, and decided to install rsi on it as a test bed.

The main step of actual importance was to push the dotfiles and little projects from killx to a repo via git.  Everything else was extremely easy.  The “sudo bbqinstaller” script works fantastically.  After reboot, I pulled in some build tools and pulled the git repos onto the new system.  Debian will almost spoil you with how easy it is to bring in new packages.  From the time that I made the decision to start to the time that I booted in as the new user was easily under an hour.

All of that being said, I realized from a question that was asked today that a great many people have very little concept of just how dependencies can be masked by a package manager.  The issue isn’t that the package manager doesn’t tell you what it’s doing, but rather that many folks seem not to pay attention, which creates a false sense of simplicity.  This particular user said, “I don’t see why I should download 4 different packages when this other 1 does them all.”  Rather than debate the issue (because I generally don’t feel like arguing with people over things that really don’t concern me), I simply went to check how many dependencies this “1 package” had that would be pulled in by the package manager (apt-get install -s <package> will simulate the install and list everything it would bring along).  With a simple “install this one package,” the user was actually pulling in 25 separate packages.  Sure, it was a simple “apt-get” away, but the final result was much more involved than the 4 packages that were being suggested would have been.

While I think that package managers are a fantastic tool, they come at a cost.  The cost is that it’s harder to know and understand the pieces that are being put into place to make every little program work in a modular system.  There is a self-inflicted ignorance being adopted into a world where one of the strong points is that very little is actually hidden from view.  I’m not a huge fan of this idea, and as someone who’s worked with both IDEs and simple editors… I don’t think that not knowing how something is being done will ever lead to better final results.

With that being said, there are still things that happen that are simply above my ability to fully understand yet.  My quest involves trying to better understand, down to the smallest detail, how what I already have installed works.  I guess for others, this isn’t quite as important.  Each of us will have to find our own paths and our own destinations, but allow me to make the request that each user at least attempt to understand the basics of how their system works.

Plus, if you actually look at the comments in the bbqinstaller script while installing RSI, a commented line tells you the easiest way to set up a new user, which answers a great many questions that I’ve heard regarding the subject.

One more day of killx


In my initial post, I gave respect to killx.  Since I spent about 6 hours just “playing” on it today, I thought I’d share part of my love for the little guy, and why I actually like it.

When I was about 5 years old, my father bought a TRS-80 Color Computer.  It was a horrible egg-shell colored box that was totally enclosed under the keyboard.  It had an external tape-deck that you could save your programs to.  It was programmable in BASIC.  He bought me a book about how to program games in BASIC, which was actually targeted towards the Commodore-64, but with some really minor tweaking, I managed to get some of them to run.

I spent a long time learning as much as I could about the system, which was not very much, and even that took years.  Every little victory was a celebration.  I would write a little program to print my name over and over, but alternate through the colors 1-8.  I would write a program that drew 2 rectangles, and a line that would draw itself, erase itself, and then redraw itself 5 degrees beyond where it was previously.  This was my rudimentary “helicopter” animation.  None of this was really very impressive by today’s standards, but every little thing was very exciting.

A few months back, I was looking at sending GObject signals in a program, and it caused me to think, “I really miss that TRS-80, back when computers made simple sense.”

That’s what killx does for me.  It takes all of the abstraction layers over a Linux system and simply tosses them away.  Since everything that I have added to it is built from source, I can tell you the dependencies for everything that I have installed.  There may be some “ease of use” in much more advanced programs and interfaces, but when things break, they get drastically more complicated.  There is a simple ease that comes from having a better understanding of how everything is set up to work.  It might actually be Zen-like in a way.  When you silence all of those widgets and graphical layers, the simple presence of the system has a chance to shine through.  You find yourself being excited about the little things again.  The little victories are that much bigger.

Now, I’m not going to go so far as to call it “user-friendly.”  It lacks much of the automation you’ll find in other distributions, and it requires some reasonable understanding from the user of how the system should work.  Much like the TRS-80, you should not expect to just start typing things and have them run.  It didn’t work like that in ’83, and it shouldn’t work like that now.

Still, I really enjoy it, and if you have a humble heart and an open mind, you may love it too.

Extensible Design vs. Solid Design

One of the things that originally attracted me to the world of Linux was the flexibility in configurations for almost everything.  I liked being able to make changes to every little aspect of all of my applications.  One of the things that was recently brought up to me was that extensible configuration would never make up for poor design choices.

I have been thinking about this one a good deal recently, and I think I’m going to have to agree that there are limits to how far configurations should be able to impact software.  The examples that keep coming to mind are some of the suckless tools.  dwm is probably my favorite tiling window manager for X environments.  One of the things that originally turned me off about it was that in order to change the color of anything in it or to adjust keybinds, you had to recompile the entire program.  It’s not that it’s difficult to do, but it just seemed like a great deal of trouble for something so minor.
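For those who haven’t seen it, dwm’s configuration is just a C header that gets compiled into the binary.  Something in this spirit (the names and values are illustrative; check your version’s config.def.h for the real ones):

/* dwm-style compile-time configuration: colors and border width
   live in a header, so any change means rebuilding the binary */
static const char normbordercolor[] = "#444444";
static const char normbgcolor[]     = "#222222";
static const char normfgcolor[]     = "#bbbbbb";
static const char selbordercolor[]  = "#005577";
static const unsigned int borderpx  = 1;   /* window border width */

Changing any of these means editing the header and re-running make clean install, which is exactly the recompile step mentioned above.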

After using it for a while, I came to really appreciate the fact that when it started up, it was simply loading the binary.  There’s something “pure” about not having to source an external configuration file to make a program work.  It’s a very clean answer, and it’s not being controlled from files scattered throughout the entire directory structure.

On the flip side of this is my love for emacs.  It is one of the most customization-friendly editors ever written.  I have a pretty extreme set of personal files that it sources on start-up.  If I don’t like something, I’m always glad to be able to try a few different lisp routines to see what the effects will end up being before I commit the changes to the init files.

Having really taken the time to think about what the “best of all possible worlds” is, I’ve just about determined that, for me, it comes down to being application-specific.  There are some places where I want flexibility, and other places where I’m looking for simple execution.  I’m not 100% sure that there is a “correct answer” as to what the best choice for programmers would be.  It stands to reason that if you’ve achieved programming perfection, there’s no reason to need any customization options.  Still, the end user may not always agree with your design decisions.  I certainly wouldn’t want to force my web browser’s bookmarks on everyone.

To quote the Zen of Python:

There should be one-- and preferably only one --obvious way to do it.

I’m not sure that I totally agree with this as a general principle for everything, but the world is much simpler when there is an accepted solution to a problem.  Simplicity leads to understanding, and understanding leads to advancements.  Perhaps what we’re all looking for is that perfect balance of freedom and simplicity.  I’m still searching for it.