A Brief Conspiracy Theory

Humour me for a few minutes on the whole topic of Apple disposing of the 3.5mm jack.

I consider myself to be a fairly technically advanced guy (as do many of the people around me). I keep up to date with what’s going on with the avid engagement of a fashionista, but I refuse to be a victim. I signed up for what was arguably the first production smartwatch (Pebble) when it was on Kickstarter, but I didn’t go for the Apple Watch when it came out because, though sexy, it was about four times the price. The Pebble, though ugly and crude, had most of the functionality I needed, and it was cheap. To me that is “good technology” – the Apple Watch had most of the same stuff, and a couple of other bits besides, but there’s a cognitive cost to every new gadget, as well as a money cost, and I just couldn’t be bothered in either sense.

So it goes: I love Apple Macs. Love them not quite passionately, but ardently. My MacBook Pro is one of my most prized possessions. Why? Because it has given me very little trouble. I wouldn’t go so far as to say “no” trouble, because of the hugely irritating Yosemite release of OS X (I am now two releases behind – first El Capitan, then Sierra – because I’m too afraid of breaking my computer again). Then there was that stupidity with mDNSResponder/discoveryd, which is one of the few issues I believe Apple ever admitted they were wrong about.

For any small issues I’ve had with my MacBook though, it’s important to remember that “computers are hard”, and to manage expectations accordingly, even when paying top dollar. It’s still better than most of the alternatives, though it looks like Windows 10 is closing the gap.

Same goes for my iPhone. I get frustrated with it from time to time – in particular, my current iPhone 5s just isn’t as good as my first iPhone 4 – but I’d be hard pressed to find an alternative. The Android devices just seem a little too finicky. If webOS were still around I might go for that …

We’re not quite in the good old “it just works” days of 10 years ago, but, keeping things in perspective, they are still “okay”, “better than the alternatives”, and fairly reliable.

But obviously I’m grumbling about something, and I think it’s Apple’s attitude to product development. Everything seems to be driven entirely from the industrial design end of things. The entire strategy seems to be about making new “cool” and “beautiful” things, and not so much about utility, which is what I always felt was the biggest draw of Apple stuff – not just the shiny-shiny love we Apple fanbois are so often admonished for.

… and a cornerstone of Apple’s practicality and innovation was their software, and I just don’t think they have that any more. Like I say, Apple is now being driven by an industrial design doctrine, the two key figures being Tim Cook of course (manufacturing operations) and Jony Ive (industrial design). Phil Schiller, as far as I can make out, is some kind of vacuous PR/marketing cardboard cutout.

Apple’s software died the day Apple Maps came out. I don’t know, dear reader, if you remember that unmitigated disaster, but one day you were happily using the excellent Google Maps app, and the next, after an upgrade to iOS 6, it was gone, replaced with a shambolic alternative. Those customers who got wise beforehand (I was one) delayed their upgrade for about six months, until Google took pity and provided us with Google Maps once more as a third-party app (probably laughing all the while).

But that wasn’t what killed Apple’s software (though I suspect it was a facet of its long, beleaguered defeat) – I suspect it was the sacking of Scott Forstall (he of the weird obsession with skeuomorphism that gave us a “leather” Contacts app and a “spiral-bound” Calendar app). I don’t know if he was brilliant (to a mere mortal at my level he obviously is, but I mean at his level), but he was a “software guy” at executive level, and that is what is sorely missed.

In the years since, I’ve watched the quality and utility of Apple’s software gradually deteriorate. An Apple Mac used to have all the software you needed to effectively manage your life. iTunes and iPhoto, it might shock you to learn, were both great apps back in 2010. Their newer iterations are absolute (pardon my French) shite!

There seems to be little appetite for doing “software engineering” in Apple any more – by which I mean developing solid, maintainable, reliable, functional software cost-effectively. Sure, skeuomorphism was a bit weird, but like art deco it was an embellishment, a veneer that didn’t inhibit the quality of the software. What we have now veers more towards style over function. From “it just works” to “just make it work”. Software for Apple has, I believe, become just another “value added” function – which is a pity, because in the early days it was just as much the software that made Apple stuff sing. There is a well-known anecdote about how Steve Jobs was unhappy with the early Mac’s boot time, and how he wanted ten seconds off it because it would “save lives”! This was a guy who recognised the role good software plays in the overall product experience. He made it integral. Furthermore, software drove the hardware, not just the other way round: famously, Steve Wozniak reduced the cost of the early Apples by implementing certain hardware features in software rather than bear the cost of additional hardware.

But we don’t have these strategic sensibilities in Apple any more. Apple are in the “devices” business now. They’re very good at this, but concerns about production costs, supply chains and shipping now dominate. Basically it’s all about producing the most expensive devices at the cheapest cost. This style of product development follows a very simple equation: Sales – Costs = Profit. There are two ways you can increase your profit. One is to increase sales, by selling more, and more expensive, devices. The other is cutting costs, and not only is software development expensive, it’s also hard to scale in any conventional sense. It can be done, but not within a conventional manufacturing mindset, and this is where somebody at senior executive level who “understands” software is sorely lacking.

I don’t think Scott Forstall was necessarily sacked because Apple Maps was bad. It probably wasn’t his fault. There had to have been an issue with Google swallowing a large amount of Apple’s revenue through their apps, and those apps had to be got rid of eventually. Forstall had an impossible task, I’d say: catch up with Google, who had been blazing a trail in the mapping space, in probably something like a year. It could never have worked, and when he refused to be scapegoated for it he got fired.

Now software is a cost. The effort that goes into producing it is mere “labour” rather than the creative, generative exercise it needs to be. The ways you cut costs in manufacturing (e.g. subdividing tasks ever further, enforcing uniformity, driving scale and, to a certain extent, getting cheaper labour) just don’t work in software. So I say anyway, and so I have observed in my ten years or so of watching businesses try, time and time again, to apply these principles and end up either losing customers or being bled dry on support costs because the software just isn’t good enough.

We hear ever more these days about the 10x programmer (not saying I am one, mind), who is ten times more productive while being maybe twice as expensive, and about “full-stack” developers – the very antithesis of the subdivision of tasks. These are the economics of software: you can do more with less. It’s like getting beaten up by 3 guys instead of 10: your assailants will do more damage (and thus be more productive) if they’re coordinated and not getting in each other’s way.

So, veering back to the headphones. Sorry – I swear we’ve got to traverse this ellipse a little further before I actually get to make my point, but we are starting to converge … headphones, production costs, software quality, headphones … great.

Have you noticed that you’re having to make more attempts to get typos right when typing on your iPhone? I have. It’s not just that the dictionary and its capability to learn are bolloxed; I also notice that I’m mis-hitting the keyboard a lot more. I remember being struck in the earlier days by how well my iPhone compensated for mis-hits and usually guessed my intention quite well. I don’t get this feeling any more …

Have you noticed that new “non-feature” too, whereby the top of the keyboard provides you with a list of possible words you might type, thereby taking the load off the predictive text engine?

Touch-screen device drivers and predictive text are “hard” to do well, and they also have the curious property that most people won’t notice right away if you dumb them down. If they’re hard to “do”, they’re even harder to “manage” in a conventional manufacturing mindset. Hard to do and hard to manage means you need smart, high-cost people, which are “costs”. How does an industrial designer reduce such costs? By changing the product so they’re no longer relevant.

With a large screen you don’t need clever device drivers; you have fewer mis-hits, so you don’t need such sophisticated predictive text, and you can afford the extra screen space for a word list at the top of the keyboard so that the user can assist the keyboard.

Boom: everything just became much more manageable, at a user-experience cost barely noticeable to most – unless they inconveniently stick with a small screen (which I do, because I want my phone to sit comfortably in my pocket!). So now everybody has phablets. In fairness it’s not just Apple – they’ve been relatively late to the party – but it does explain why we’re less and less likely to have the “option” of a reasonably sized phone any more, and why manufacturers are pushing the larger screens really hard.

But that’s not the conspiracy – remember I started off talking about the new headphones? Or rather the elimination of the headphone jack? It’s all about making the iPhone “slimmer”, they say, but why do they care about slimmer? Really, as a customer I would much prefer a small fat phone to a big thin one. Already there are stories going around about how, though the thinner phones are bendy, the internals aren’t, and the issues this leads to.

Thinner is cheaper again. A big screen, if you buy my argument, makes development of the device simpler and thus cheaper. But now what you have is a device that is bigger and heavier, and thus requires more materials to produce and costs more to ship. Marginally, per unit, but at the scale Apple operates at this works out at a lot of money – and remember, Tim Cook is a manufacturing operations guy.

So if we go back to our earlier equation: if we reduce manufacturing and shipping costs by making the device thinner (and thereby lighter, compensating for the larger screen whose purpose was to reduce development costs), you have more $$$$$$.

Elimination of the 3.5mm jack is integral to this. If it works, it translates into megabucks, even if people don’t go for the vendor-locked-in headphones, or the EarPods and AirPods. If people just buy iPhones the same way they always did and just use the adapter, then Apple have already succeeded. The other stuff is a bonus.

Yes, it does further degrade the experience, but the gamble is that most people won’t notice or care. It may be that with the headphone jack they’ve pushed it too far, but we shall see.

I remember when they got rid of the optical drive I was annoyed. They were charging me more for a computer with one less drive. But I got used to it, and in a completely solid-state device such as my MacBook Pro it makes complete sense. There are no moving parts. Nothing to break. It feels so sturdy and durable. I still do miss optical disks, but in fairness I could have bought an external drive ages ago if I really wanted one.

Then there’s the fact that I’m simply outside the norm on the bell curve. Most people want a device that “just works” (with the emphasis on “just”). I’m a demanding power user, and maybe there’s just not enough money to be made fawning over my every desire versus doing the bare minimum to keep the less demanding users happy. Even then, maybe it’s enough to just be “better than the alternatives”. My Mum’s iPad still works as well as the day she got it, about 4 years ago. My 3-year-old Google Nexus, however, does not.

We’ll see. Google Maps (which was swiftly restored!) is one thing; optical disks are another; let’s not even go there with mDNSResponder (also reconstituted!) – but the trusty headphone jack is quite a bold (ha ha, maybe “courageous”) step. What about all those kids with Skullcandy headphones, or my friend who spent €350 on his Bose noise-cancelling headphones? There’s an adapter, which I can lose or – noting the reliability of Lightning thus far – more likely break. And what happens at a party when somebody passes you the aux lead and you forgot the adapter?

I’m still annoyed I can’t get a phone that fits comfortably in my pocket.

Now this?

Will it stop me buying one eventually??? When I run out of options (thankfully the SE is still available) and I’m caught between that and an Android device? I can’t see it going any other way.

Refactoring the Healthcare System

Outgoing Health Minister Leo Varadkar made quite an odd statement a few weeks ago, referring to the shortage of beds and other resources in the public health system. It caused much rancour on my Twitter feed, and I have to wonder if he actually believed it himself, or whether it was a party line he was relaying, safe in the knowledge that, utter nonsense or not, he would retain his seat. It turns out that he was right – about his seat. But anyway, the statement:

“What can happen in some hospitals is sometimes, when they have more beds and more resources, that’s what kind of slows it down.”

When asked why, he replied: “Because they [hospital staff] don’t feel as much under pressure.
“When a hospital is very crowded, there will be a real push to make sure people get their x-rays, get their tests and, you know, ‘let’s get them out in four days.’

“When a hospital isn’t under as much pressure, you start to see things slowing down and it might take five, six, seven days to get the person discharged and that’s [the] length of stay, so it’s all these different factors come into play all the time.”

I suppose there is a certain logic to this … yes, if there is a shortage of beds, it does add an additional incentive to turn patients over and get them out the door. I’d question, though, why this turnover is necessarily seen as a measure of productivity; there’s an implication in his statement that patients staying an extra few days is down to some inefficiency – surely not “laziness” – but perhaps a lack of motivation on the part of the hospital staff.

But perhaps there are other reasons why a patient could be kept in a little longer. Perhaps it’s simply too soon for them to be discharged? Maybe it’s in the patient’s interest to stay in for observation a day or two more. In any case, surely there are better ways to ensure “productivity” than limiting available resources. Roughly speaking, it’s like taking a player off your football team and expecting the remaining ten men to “play harder” (you lose the match), or restricting the flow of petrol to your engine to increase fuel efficiency (you most likely damage your engine).

Anyway, I am obviously not a healthcare professional, nor a political actor, and while these remarks caused me some concern as a taxpayer and potential health-system user, they didn’t really anger me. I see a healthcare system that is a black hole for money, a government whose whole policy is putting a rein on public spending, and a minister trying to put lipstick on the pig. For all I know Leo knew full well what he was saying was stupid, and he may even have been making a discordant statement on the Sisyphean absurdity of being an impugned vassal state trying to pay off a debt with money loaned by its creditors.

But it did get me thinking about the whole dynamic of “productivity” vs “utilisation”, and how such statements effectively confuse the two terms. I have a small bit of knowledge on such things rattling around in my head from my undergraduate days (specifically relating to network switches, but, abstracted as “systems”, a perfectly suitable analogue). In a nutshell: you could run around in a circle all day and be fully utilised, yet you would have got nowhere, so your productivity would be effectively zero.

For a network switch, if it has a maximum capacity of 1 gigabit per second and you are relaying 900 megabits per second, then your utilisation is 90%. That’s generally a good thing, but if you relate the “value” of this data directly to that number, then you only have the remaining 10% of capacity to get more value out of your network before you have to buy a new switch. And then what happens if you have a spike in traffic?
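As an aside, utilisation is easy enough to measure for yourself. Here’s a minimal sketch, assuming a Linux box with the usual sysfs counters, and where the interface name “eth0” and the 1 Gbit/s capacity are stand-ins for your own:

#!/bin/sh
# rough-and-ready utilisation check: sample the receive counter twice,
# one second apart ("eth0" and the capacity figure are assumptions)
IFACE=eth0
CAPACITY=1000000000                   # link capacity in bits per second
B1=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
sleep 1
B2=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
BITS=$(( (B2 - B1) * 8 ))             # bits relayed during that second
echo "utilisation: $(( BITS * 100 / CAPACITY ))%"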

As a software engineer, I sometimes find myself faced with these problems. In the rush to get to market, shortcuts are taken, sometimes the implementation isn’t optimal, and sometimes that needs to be fixed. This often involves a broad system-level analysis to determine specific bottlenecks, and then a root-and-branch analysis of particular elements. Often a one-line fix here or there is sufficient to get a significant improvement; other times an element needs to be reorganised, or even the whole system re-orchestrated. Obviously the first of these options is the best, but sometimes you just need to upend things and swallow the associated risks.

This kind of pragmatism is often scoffed at by commercial personalities – “why bother when you can just buy more hardware?” or “just increase resource allocation on [cloud platform]?” – which is, in the near term, much cheaper than paying for expensive engineering time. Bjarne Stroustrup’s answer to this is that when you’re dealing with a server farm, doubling your resource consumption means you need another server farm – and they aren’t cheap to buy or to run. There are times when you need to look at application efficiency (productivity, basically); not all the time, but there are critical instances where it very definitely is commercially necessary.

So when you look at your switch running at 90% utilisation and you want to get more value from it without buying a new switch, what are the options? You need to get more value from that 90%. You look at the traffic going through it and see what you can do with it: you could add compression to some of the applications using it; perhaps identify and throttle some high-volume activity; there may even be some unnecessary activity on there, or some misbehaving applications that need tuning.

Now, you’ve still spent money – in expensive engineer-hours rather than on a shiny new switch. There is some comfort in owning your switch, since people are fickle, but you would have had to pay somebody to install the new switch anyway. And if you got your utilisation down to 45%, you’ve now got twice as long before you have to buy a new switch, and when you do, it will last twice as long again (factor in ever-plummeting equipment costs and you can put the depreciation value in your pocket too).

So where does this leave Leo and his under-productive but over-utilised health service? Well, after hearing what he said I felt some correction was in order, so I googled “utilisation vs productivity” and came up with this beautiful article. I still haven’t finished reading it, but it has so much good stuff in it that I can directly relate to from my days as a junior engineer. You could pretty much take the term “product development” and replace it with “healthcare”. It covers some of the common fallacies of management seemingly reapplied rote from a widget-manufacturing context, à la Adam Smith.

Leo isn’t in a position where he can “buy a new switch” to increase capacity. Even if additional healthcare workers were readily available, there is a reluctance to splurge extra money: during the nineties we did just that, and just ended up making things worse. To wheel out yet another oft-neglected software engineering trope: “adding manpower to a late software project makes it later”.

So again, what is he to do, when there’s nothing he can buy, and he can’t throw money at it anyway? When conventional business wisdom fails but he needs to get more bang for his buck? Rather than treat the system as a black box (money goes in, productivity – whatever that is – comes out), he might employ a more enlightened “white box” approach.

Just as a business sometimes needs to concede that good old-fashioned sciencing is vital, he could start by consulting actual HSE staff. Some of the smartest people in the country work in medicine – it couldn’t possibly be the case that they have no idea how things could be done better.

The “box” is populated with many highly qualified, motivated people who would be only too happy to offer their advice and insight if only somebody would ask. Traditionally it was the role of the unions to channel the concerns of workers, but that kind of process exists more to protect workers than to advise employers, so I don’t think those organisations have the capability.

There is a gaping chasm in cross-organisational communication, and for my money addressing that would be the best place to start. A cross-disciplinary convention of HSE staff (similar to the constitutional convention) could be a good way to get soundings before embarking upon some subtle organisational transformations. Reforming such a gigantic organisation outright would be a Herculean and politically toxic task – that being the case, maybe it’s time to apply another popular engineering method and break the big problem down into smaller, simpler parts.


I suppose it is worth noting, for the benefit of more pedantic readers, that depending on what happens over the next few weeks it may not end up being Leo’s problem at all. But seeing as I started the narrative with his statement, it made sense to have him continue pushing the rock uphill throughout 🙂

Moving on from Vagrant

My daily routine for starting each of my Virtual Machines has become:

  • First check the UUIDs for the VM:
 $ VBoxManage list vms | grep -i hero7
"hero7" {2856ae7c-bbb0-4110-8b5d-126abb6f2135}
  • Then check the UUIDs that Vagrant has stored:
 $ find .vagrant/ -type f -print -exec cat {} \; -exec echo \;
.vagrant/machines/hero7/virtualbox/action_provision
1.5:ccd5fc98-f80f-45d9-9e26-40e1ad92ee08
.vagrant/machines/hero7/virtualbox/action_set_name
1429893049
.vagrant/machines/hero7/virtualbox/id
ccd5fc98-f80f-45d9-9e26-40e1ad92ee08
.vagrant/machines/hero7/virtualbox/index_uuid
9068bc65efae46509227f81413018494
.vagrant/machines/hero7/virtualbox/synced_folders
{"virtualbox":{"/media/sf_sharedVMFolder":{"guestpath":"/media/sf_sharedVMFolder","hostpath":"C:/sharedVMFolder","disabled":false},"/nasnfs":{"guestpath":"/nasnfs","hostpath":"//NAS/nasnfs","disabled":false},"/vagrant":{"guestpath":"/vagrant","hostpath":"C:/Users/rrusk/Documents/vagrant/hero7","disabled":false}}}

If the UUIDs match I can go ahead and start my VM in the usual way with “vagrant up”.
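In fact that morning check-then-up dance can be scripted. Here’s a minimal sketch – a hypothetical safe-up.sh, assuming the single-machine layout shown above – that only proceeds when the UUIDs agree:

#!/bin/sh
# safe-up.sh (hypothetical): refuse to run "vagrant up" when the UUID
# Vagrant has stored doesn't match what VirtualBox reports
NAME="${1:-hero7}"
STORED=$(cat ".vagrant/machines/$NAME/virtualbox/id")
ACTUAL=$(VBoxManage list vms | grep "\"$NAME\"" | awk '{print $2}' | tr -d '{}')
if [ "$STORED" = "$ACTUAL" ]; then
    vagrant up
else
    echo "UUID mismatch: Vagrant has $STORED, VirtualBox has $ACTUAL" >&2
    exit 1
fi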

If they don’t match, however, I have to take remedial action, or Vagrant will actually trash my VM! Vagrant uses the same command (“vagrant up”) for starting an existing VM as it does for creating a new VM from scratch, and it decides whether or not to destroy all your work to date based on whether or not these UUIDs match!

First I’ve got to edit each of the files with the faulty UUID, and replace it with the one provided by VBoxManage:

 $ vi .vagrant/machines/hero7/virtualbox/action_provision
 $ vi .vagrant/machines/hero7/virtualbox/id

This used to be sufficient as a fix, but then I upgraded to a newer version of Vagrant hoping the issue would be resolved. It wasn’t, and now I have to replace some stupid SSH private key as well!

 $ rm .vagrant/machines/hero7/virtualbox/private_key

Then on boot I’ll get a message saying it is generating a new private key (presumably tied to the UUID …), but sometimes I don’t, and I have to ssh into the VM manually and replace the public key!

 $ curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub -o .ssh/authorized_keys
 $ chmod 600 .ssh/authorized_keys
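Again, all of this is scriptable. A sketch of the whole remedial procedure in one go – a hypothetical fix-uuid.sh, with the same assumptions as the guard above; the sed expression relies on the “version:uuid” format of action_provision shown earlier:

#!/bin/sh
# fix-uuid.sh (hypothetical): splice the real VirtualBox UUID back into
# the files Vagrant consults, and force the private key to be regenerated
NAME="${1:-hero7}"
ACTUAL=$(VBoxManage list vms | grep "\"$NAME\"" | awk '{print $2}' | tr -d '{}')
sed -i "s/:.*/:$ACTUAL/" ".vagrant/machines/$NAME/virtualbox/action_provision"
echo "$ACTUAL" > ".vagrant/machines/$NAME/virtualbox/id"
rm -f ".vagrant/machines/$NAME/virtualbox/private_key"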

It is unclear how or why the stored UUIDs get out of sync with VirtualBox. I’ve tried various ways to prevent this happening: ensuring graceful shutdown of my VMs, getting Vagrant to shut them down itself (vagrant halt), and even being more patient when shutting down my own work machine (the “host” environment), but to no avail. There is no predictable pattern; some days I come in and the UUIDs just don’t match!

But that is forgivable. As a programmer myself I know how hard it can be to synchronise state between two separate applications. What is unforgivable is that Vagrant has a single command for both starting and trashing your VM (vagrant up), and takes its cue on which to do from this single flimsy premise!

Could they not have a separate command for each? I’ve raised a ticket asking just that, but after a few weeks I have yet to see any response, and in the meantime the other ticket, where people are seeking remedies, continues to hop!

I’ve been dealing with this problem for a few months now, and I’ve had to restore my work from scratch each time. As an exercise in developing understanding of the stuff I’m working on it has been okay, but it hasn’t been kind to my deadlines.

Today I’ve had enough and I’ve just decided to start using “VBoxManage” directly to handle my VirtualBox VMs. The convenience that Vagrant provides just isn’t worth the pain it regularly inflicts upon me.

For starters then, to boot my VM headless all I need to do is:

 $ VBoxManage startvm hero7 --type headless

To find out what port I can ssh in on, I can do:

 $ VBoxManage showvminfo hero7 --machinereadable | grep "^Forwarding"  
Forwarding(0)="ssh,tcp,,1122,,22"

If I need to change the port (perhaps due to another clashing VM):

 $ VBoxManage controlvm hero7 natpf1 ssh,tcp,,2222,,22

Then just ssh in the usual way:

 $ ssh vagrant@localhost -p 2222

Then once logged in I can (as root) mount my “/vagrant” folder with:

 # mount.vboxsf vagrant /vagrant

I’ll add more commands as I think of them, for my own reference as much as for the benefit of others! It’s a little trickier than using Vagrant to be sure, but at least it’s safe!
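For convenience I might wrap the common ones in a couple of shell functions – a sketch for my own profile, where “vmup” and “vmssh” are just names I’ve made up, and the field positions assume the “Forwarding” output format shown above:

# sketches for my ~/.bashrc: thin wrappers around the commands above
vmup() {    # vmup <vmname> : boot the VM headless
    VBoxManage startvm "$1" --type headless
}
vmssh() {   # vmssh <vmname> [user] : ssh in via the forwarded port
    port=$(VBoxManage showvminfo "$1" --machinereadable \
        | grep '^Forwarding' | grep ',22"$' | cut -d, -f4)
    ssh "${2:-vagrant}@localhost" -p "$port"
}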

Vagrant is very definitely a cool tool, by the way. If you want to quickly create and provision a VM it’s your only man, particularly in a devops setting, and it has some very powerful companions such as Packer and Vagrant Cloud that make it indispensable!

For day-to-day development, however, it just has too much potential for creating a mess, and so for that reason I’m done with it as a developer tool. Quite embarrassing really, as up to now I have been championing it amongst my colleagues, but sure we all make mistakes!

Pair Programming

Pair programming is a collaborative practice in which two programmers sit together at one computer and write code. It flies in the face of the conventional image of the lone hacker sweating over a hot keyboard late into the night, and it is useful food for thought when considering how programming can often be better done as a collaborative activity.

Pair programming as a practice emerged from pioneering work done by Kent Beck in the ’90s under the banner of “Extreme Programming”, which proposes a set of practices for developing software more rigorously, with fewer bugs, in a more timely manner. Pretty much all of these practices are commonly applied in the industry today, yet pair programming languishes for some reason.

If it sounds uncomfortable, it is. Two grown men sitting together in intimate circumstances for extended periods of time (typically men, I say, though inter-gender PP could likely make an even more squirmish combination), quietly arguing the benefits or otherwise of indentation styles and data-structure selection. This probably has a lot to do with it. Never mind that the kind of person who usually becomes a programmer is typically quite driven, a little individualistic, and quite confident of their ability to make judicious implementation decisions without somebody looking over their shoulder.

Also, I believe the need for enabling collaboration in this way has been somewhat obviated by technology itself (instant messaging, screen sharing and code-review tools being commonplace) and by the development of more social development methodologies such as Scrum.

I wrote what remains here as a comment to this article, but feedback seems to be disabled, and I didn’t want it to go to waste. The author discusses how pair programming is good because we only spend 10% of our time actually writing code. My position is the opposite …

Programmers’ division of time

I would question this reasoning, but the pie-chart does seem appropriate to me.

Spending 70% of your time reading code does seem quite high, but I would definitely support the aspiration that we should be reading, and thus reusing, old code more than we should be writing new code (if not the code itself, at least the knowledge therein). It is also true that the code is often the only definitive documentation for the software it constitutes, and any programmer must be able to understand what went before, before they build upon it.

20% of your time solving problems seems okay. This is the time spent in the “problem domain”, getting acquainted with the “real world” aspects of the problem, and in the “solution space”, determining how best to implement it and what tools and technologies to use. I think this is highly dependent on the nature and novelty of the problem itself, but as a “what’s left” after reading (70%) and writing (10%) code, it’s an okay number.

I would contend that the foregoing categories of activity should always be highly collaborative (and that some degree of coding is involved in them too – a note for managers of the behavioural persuasion), but that “pair programming” is all too limiting a collaboration model for them. The nature of reading may always be quite solitary, though it should be supported by questioning and open discussion. Problem solving should be aided by group discussions, the scrum stand-up, whiteboard sessions, simulations, prototyping and adversarial testing, as well as an open and friendly work environment.

Pair programming, then, can only really be applied to the remaining 10% of activities, at which point we have to ask: is the discomfort worth the payoff? As you say, “one keyboard for two pairs of hands can only be a limiting factor”, but at this stage I would consider slowness a virtue. It is at this stage that you commit the work of the other 90% to code, and mistakes can be costly. For this reason being forced to go more slowly is beneficial, as is the second pair of eyes. The second brain engaged can also prevent idiosyncratic “hero” code slipping in …

But all told, many of these benefits can be enjoyed through code reviews, some self-discipline, and strong technical leadership. That’s not to say PP doesn’t have its place, but the benefits are often overstated, and as a practice that is as uncomfortable to the typically individualistic programmer as I know it to be, it will rarely be among the first things they call upon.

Plans for the New Year

Not going to say “New Year’s Resolutions”, but I’m going to put together some “plans” for early 2015.

Some simple discrete goals:

  • Lose a stone.
  • Cycle a bit more.
  • Drink less.
  • Book a Spring Break.

Nothing too strenuous there, and some are complementary (in particular, cycling and booze reduction should help shift the stone). So let’s say: lose the stone by 17th March (Paddy’s Day); do a 10k cycle once a week (to commence New Year’s Day); and midweek drinking to be abolished (till Paddy’s – birthdays and other special events notwithstanding). The spring break can be a reward for a job well done, and a further opportunity to reflect and set the next set of goals!

Unformed thoughts from scripting

This is the post I was talking about in the previous post. I had a couple of tabs open for the last few weeks, loosely related to bash-scripting. I want to capture these links so I can allow myself to close the tabs, and I also want to explain why I think they are useful. Then I’ll ramble on for a couple of paragraphs about some other thoughts that arose in the process of pulling these explanations together (accompanied by other links of course).

  1. the ‘test’ man-page
  2. tldp discussion of ‘here’ documents

I’ve come to like the “test” command as a handy tool when automating any tasks that act upon the file-system. This is probably something that many would think is ugly, but I like it and here’s why …

So obviously you understand why I need to check the status of files (existence, readability, writability). No need for me to explain that. If you think using the ‘test’ command is bad, it’s probably because you think it inelegant. You believe that using more idiomatic language features is the way to do this kind of thing. You might be smarter than me; you might think that my reasons for choosing this approach relate to cognitive limitations (which is true, but doesn’t invalidate the approach), or to laziness in not taking the time to learn the details of said language features.

I like the ‘test’ command precisely because it is not idiomatic. In the spirit of the Unix philosophy, it is a small, simple program that does one thing well. I know exactly what it is going to do in any setting. There are no obscure syntax rules surrounding it. I don’t need to memorise any special cases. I can copy and paste single-line expressions till the cows come home, and I can even use them on the command line unmodified.

# print "file exists" if myfile exists
$ test -f myfile && echo "file exists"

could alternatively be expressed as:

$ if [ -f myfile ]; then
>     echo "file exists"
> fi

which is more verbose and idiomatic, but has the drawback that it spans three lines as opposed to one. It relies quite heavily on some loaded syntax rules. It can’t be easily transplanted to another context. It can’t easily be pasted to the command line, or live in your history. There are also various language subtleties at play here: spaces don’t matter in some places (e.g. indentation) but do in others (e.g. the spacing between ‘if’ and ‘[’, or before ‘-f’).

There is also a fundamental dishonesty in the picture here, because the ‘[’ appears to be a syntactical element when it is in fact a command!

 $ which [
/usr/bin/[

so effectively you could just do our sample logic like this:

$ [ -f myfile ] && echo "file exists"

“Here Be Dragons”: you are now using punctuation to name a command; you are misrepresenting a command as punctuation. For somebody who knows what’s going on this is fine (in fact, on a course a few years ago where this concept was introduced, the instructor took pains to point it out), but if you don’t, there is a possibility for confusion.

Punctuation can mean different things in different circumstances. Sometimes a ‘[’ may need to be escaped; sometimes it may not. I’d like to be able to wallop out a simple command on the command line without having to painfully construct weird syntax. I can’t keep all the different rules for all the different languages I use salient in my mind at all times.

Which brings me to the argument that maybe this approach just reflects my (our?) limited mental capabilities. In a world where people often overestimate their capabilities, admitting this is perhaps a virtue. I also like to consider that the code I produce isn’t just a showcase of my own intimate knowledge of a particular language, and to consider the perspective of somebody else looking at what I’ve done, who might not have the luxury of teasing out the latent meaning carried by an arcane combination of punctuation and spaces.

I often visualise the tired and stressed-out support engineer: one of my colleagues (or even myself) sitting in the office in the late evening trying to determine why a red light has gone off. In such a scenario spirits may be low and tempers frayed, and it almost certainly isn’t the case that they have the capacity to instantly recall the rules that shaped the various “musical notes” they now see in front of them. How are they to spot the error amid all that opacity? How to make the fix? “Should this be an asterisk [*] or a plus [+]?” “Is that a tab or four spaces?”

Sometimes it’s just enough to visualise myself in 6 months looking at my own code and going W. T. F. !

I can also use ‘test’ for examining environment-variables:

$ test "$myvar" || echo "myvar is not set"

carries a lot less baggage than:

$ [ "x$myvar" = "x" ] && echo "myvar is not set"

“here” docs

If I have a load of different items to process I like to express it thus:

$ while read package_name; do
>   yum install -y $package_name
> done <<-END
>     httpd
>     screen
>     git
>     svn
>     gcc-c++
>     autoconf
>     automake
>     glibc-devel.i686
>     libstdc++-devel.i686
> END

which is how I might describe the installation of a bunch of pre-requisite packages for a 32-bit cross-compiling environment.

What I have done here is separate the command performing the installation from the list of packages to be installed. At a stroke you have separation of concerns, but also locality and simplicity.

There are two alternatives here that spring to mind. The first is to simply repeat the command multiple times, each with the parameter changed. Instantly people will say this breaks the “don’t repeat yourself” principle, with the very pragmatic concern that any mistake in the composition of the command needs to be corrected for each repetition, and that later fixes provide fertile ground for further bugs where one or more repetitions of a fix are not perfectly reproduced.

The alternative approach is to separate the list of parameters into another file, which is “sounder engineering practice” but also increases complexity, as you’ve now created two files to maintain rather than one. The practice of externalising parameters makes less sense for scripting languages than for compiled programs, since a script can be easily edited.
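For comparison, the externalised version might look something like this (packages.txt being a hypothetical file holding one package name per line):

$ cat packages.txt
httpd
screen
git
$ while read -r package_name; do
>   yum install -y $package_name
> done < packages.txt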

Some good thoughts on this topic are here.

In the course of putting this little essay together I found this quote from Linus:

“Bad programmers worry about the code. Good programmers worry about data structures and their relationships.”

which I suppose could be deployed either in support of some of my arguments here, or against the whole process of writing about it 🙂

Also, a quick note on the syntax of here-documents: what happens is that each line up until ‘END’ (which may be any arbitrary string, as specified after ‘<<’) is fed to the loop’s standard input, where ‘read’ places it in the variable $package_name. The use of ‘-’ may be less clear: it means that leading tabs on each line are stripped before the line is read, which is handy for presentation purposes (although in this case leading tabs wouldn’t be a problem anyway).

Also, something to bear in mind is that any word prefixed with ‘$’ will be treated as a variable reference and expanded. This is very cool if you want to generate a here-doc based on parameters inferred elsewhere in your script.

If you want to output your here-doc without expansion surround your termination string (which is “END” in my example) with single-quotes e.g.

$ while read f; do
>   echo $f
> done <<-'END'
>     normal text
>     $dont_expand_this
> END

A complete discussion of here-docs is available from the Linux Documentation Project.


Inspiration for a first post

Up until now, I have just been maintaining this site as a jumble of pages containing things I might want access to, or that other people with similar interests might find useful. There hasn’t been a lot, admittedly, but I always felt I had to publish “just enough” content to justify having a website.

Typically, stuff gets thrown up here when I’ve been keeping a tab open in my browser for a few weeks. It’s usually something interesting or useful that was hard to find, or that I have no immediate use for but know will come in handy in the future. The tab will usually stay there until I either bookmark it (rare enough, because I absolutely hate maintaining bookmarks) or find some way to apply it, so that the knowledge becomes part of the milieu of information surrounding me.

Sometimes I do just let it go, but lately I’ve been adding these links to my website, which on the one hand does retain the knowledge, but on the other risks eventually turning the site into a link directory, which isn’t something I want. Apart from the fact that this would be tremendously uninteresting, the same reasons for not bookmarking roughly apply: it takes time to maintain, has to be organised into a hierarchy, has to be kept up to date, and so on and so forth.

But anyway, this practice of adding links to pages that I had created to be topic-specific has been creeping in. I’ve had a bunch of tabs open for a while now containing some bash-scripting gems. I thought to add links to the page where I present my bash profile, but as I added them I started to feel the need to explain why I thought they were useful.

As I started to add these explanations, I began looking around the web for articles supporting my opinions, and as I dug I found ever more links I wanted to keep, and my explanation started to seem ever more essay-like. I had hoped to find a neatly encapsulated argument or phrase that would illustrate my position so I could hand off to that, but there wasn’t really one forthcoming.

I’m kind of wary not just of diluting my “topic specific” pages with more general information, but also that having to condition my information so that it fits will mean throwing away related info that’s also pretty good (because it doesn’t fit, you see).

So a short informal essay, capturing the information surrounding a small specific topic, is a good reason to write a blog post. It’s a grandiose description of something quite trivial, but as a lazy programmer I need it to justify doing something new. Blame years of having to justify my estimates to skeptical resource-planners 🙂