While browsing Hacker News, I came across this post by Jonathan Rockway:
Why I stick with Perl
I found it interesting - not so much for the points he makes about Perl (though those were interesting too), but for what he says about libraries.
[ BTW, Jonathan is one of the Catalyst Core Team members. Catalyst is a web application development framework for Perl, which is similar to Ruby on Rails in some ways. He's also the author of this book about Catalyst. ]
One such point of his:
"But it irritates me when I need to get at gpg from a web application, and can't just use a "libgpg". I have to fork a gpg process, setup file descriptors just right, write input to a pipe, wait for input on another pipe, and then parse the result -- all for what amounts to a few XOR operations and bit shifts. How could anyone think this is a good idea? (There is libgpgme, but this just hides the fork inside a library. There is still tons of totally unnecessary work going on.)"
I strongly agree with that. I really think most programs (even those ultimately meant to run standalone) should be written not as monolithic apps, but as one or more libraries in the first place. One should then just write a main function or main class that calls those libraries to do the bulk of the app's work. This idea applies even more to open source apps. If the developer of gpg had written a libgpg as Jonathan suggests above (and had then used libgpg to write gpg itself), Jonathan - and anyone else in the world - could use libgpg in their own work without reinventing the wheel. They would thereby improve their productivity a lot, for that particular piece of code that needed the libgpg (or other) functionality. Multiply this by tens of thousands of such cases and you can see how much time could be saved overall.
Not to mention that (if the libraries in question were of high quality) a lot more time would be saved by not having to fix the errors that often creep in during cut-and-paste, as any programmer knows: "Oops, I copied that function, but forgot to copy its declaration into the header file" (for C or C++ - substitute any other language and type of error here), etc.
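To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from Jonathan's post) of the two approaches: forking a child process and talking to it over pipes versus calling the same functionality as an in-process library. Since gpg may not be installed everywhere, a short Python one-liner stands in for the external tool, and ROT13 stands in for the crypto.

```python
import codecs
import subprocess
import sys

# The "pipe dance" Jonathan describes: spawn a child process, feed it
# input on stdin, wait for it to finish, then collect its stdout.
# (A Python one-liner is used as a portable stand-in for a tool like gpg.)
def rot13_via_subprocess(text: str) -> str:
    child = [
        sys.executable, "-c",
        "import sys, codecs; "
        "sys.stdout.write(codecs.encode(sys.stdin.read(), 'rot13'))",
    ]
    result = subprocess.run(child, input=text, capture_output=True,
                            text=True, check=True)
    return result.stdout

# The library-first alternative: the same operation as a plain
# in-process function call - no fork, no pipes, no output parsing.
def rot13_via_library(text: str) -> str:
    return codecs.encode(text, "rot13")

if __name__ == "__main__":
    msg = "attack at dawn"
    # Both paths produce the same answer; only one pays the process overhead.
    assert rot13_via_subprocess(msg) == rot13_via_library(msg)
    print(rot13_via_library(msg))
```

The two functions are interchangeable from the caller's point of view, which is exactly the point: the subprocess version does the same few character transformations, plus a pile of process-management work that a library API would make unnecessary.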
The point is not so much the errors themselves (which are often, though not always, easy to fix), but the fact that, once they have been introduced, YOU NOW HAVE TO FIX THEM - when you might not have needed to at all (because they might never have existed) if you could just have called a library function or three and passed some arguments. Basically, more code leads to more errors, and less code to fewer errors (in general); and cut-and-paste leads to more errors (in particular).
Even in proprietary software shops, this point still makes sense - developers on the same team, or on other teams in the same shop, can reuse code much more easily later if it's a well-designed, modularized and refactored library, rather than having to cut-and-paste the necessary parts out of one huge (monolithic) app - or out of a blog :-) as Jonathan says of Rails programmers (*). And I've seen this advice honored in the breach (i.e. not followed) even at some big companies.
(*) Jonathan is off the mark, though, when he claims that MOST Rails programmers cut-and-paste code from blogs. That has to be a generalization without enough data to back it up. I'd say that, across the board, good programmers try to find or write libraries.
This concept is actually similar to this rule in The Art of UNIX Programming by Eric Raymond, the Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
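As a small illustration of that rule (my own sketch, not an example from Raymond's book), here is a word-frequency counter split along exactly that line: a mechanism ("engine") that does pure computation, and a policy ("interface") layer that decides how results are ranked and presented.

```python
# Mechanism ("engine"): pure computation, no I/O and no presentation
# decisions.  This is the part that belongs in a reusable library.
def word_frequencies(text):
    freq = {}
    for word in text.lower().split():
        freq[word] = freq.get(word, 0) + 1
    return freq

# Policy ("interface"): decides ordering, cutoff, and display format.
# A different front end (web page, GUI, JSON API) could swap this out
# while reusing the engine unchanged.
def report(text, top=3):
    freq = word_frequencies(text)
    # Rank by descending count, then alphabetically for stable output.
    ranked = sorted(freq.items(), key=lambda kv: (-kv[1], kv[0]))
    return [f"{word}: {count}" for word, count in ranked[:top]]

if __name__ == "__main__":
    for line in report("the quick fox and the lazy dog and the cat"):
        print(line)
```

Changing the report format, or the sort order, touches only the policy layer; the engine never needs to know.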
I disagree about this, though:
"If you are like me, most programming you do is about gluing things together with libraries."
I'd say it depends, case by case.
Vasudev Ram - Dancing Bison Enterprises