We never know anything until we test it
Feel free to look carefully at this one.
There was recently a short discussion regarding a bash script that had been written, and how it could be made better than it currently was. One of the big suggestions was replacing spaghetti-style “if, elif, else” statements with “case” switches. While I totally agree with this, I also realized that although I could write a good deal about how a compiler can optimize a case switch, I wasn’t 100% sure what the difference would be in an interpreted language. Since this was about bash, it only seemed right to set up some kind of test in bash.
So, I created two scripts that are essentially identical, save for the fact that one uses the “if/elif/else” syntax and the other uses the “case” syntax inside a for loop. The for loop should iterate enough times that we can see a difference…so let’s go with 10,000 iterations. Since I want both scripts to do the same work, we’ll set up a second variable that is assigned a new value on every iteration in the exact same way (by direct assignment), and that value is then checked using the two differing methods.
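The original scripts aren’t reproduced in this post, so here is a minimal sketch of what such a pair might look like. The function names, the no-op branch bodies, and the modulo-based assignment (used so different branches are hit at different iterations) are my own assumptions, not the author’s code:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the two test loops.

# Version 1: if/elif/else checks
if_test() {
  local i check
  for ((i = 0; i < 10000; i++)); do
    check=$((i % 4))              # direct assignment, identical in both versions
    if [[ $check -eq 0 ]]; then
      :                           # no-op stand-in for real work
    elif [[ $check -eq 1 ]]; then
      :
    elif [[ $check -eq 2 ]]; then
      :
    else
      :
    fi
  done
}

# Version 2: case switch
case_test() {
  local i check
  for ((i = 0; i < 10000; i++)); do
    check=$((i % 4))              # same assignment as above
    case $check in
      0) : ;;
      1) : ;;
      2) : ;;
      *) : ;;
    esac
  done
}
```

Each script would then simply call its function (or inline the loop), so the only variable between the two runs is the conditional construct itself.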
It stands to reason that because a case switch stops at the first matching pattern, we’d see a measurable difference from if/elif/else, which runs a separate test command for each branch until one succeeds. I considered checking whether hitting only the final “else” or “*)” default on every iteration would make a difference, but that’s not a practical use. To be able to generalize, it makes sense to hit different levels within the conditional construct at different iterations.
It turns out that under MY test (results could certainly differ on other systems, and certainly with other scripts) the case-switch script ran almost twice as fast as the if/elif/else one. I tested it quite a few times using “time /path/to/script” and made certain to screenshot at least a few of the results. The case script ran consistently in the 0.35-second range, while the if/elif/else script ran in the 0.56–0.58-second range. That isn’t quite 2x the speed, but it’s certainly close enough to warrant consideration when writing your own scripts.
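For anyone who wants to repeat the measurement, the same “time” approach can be run against a quick inline loop; the loop body here is my own stand-in, not the author’s script, and the actual numbers will vary by machine:

```shell
#!/usr/bin/env bash
# The bash "time" keyword prints real/user/sys timings to stderr,
# which is where the 0.35s-vs-0.56s comparison above comes from.
time for ((i = 0; i < 10000; i++)); do
  case $((i % 4)) in
    0) : ;;    # no-op branches standing in for real work
    1) : ;;
    *) : ;;
  esac
done
```

Timing whole scripts with “time /path/to/script” also counts shell startup, so running each script several times (as the author did) helps smooth out that noise.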
I am always wary of posting anything I do in bash, as I’m not a bash-scripting artist. If you believe I could have made these tests better, should have done them differently, or have any additional input…feel free to share it in the comments. I’m always up for seeing testable results, so if there’s some way I could have made the test more accurately isolate the difference between the two constructs, please let me know.