The hot library topic of the week has been the creation of NGC4Lib -- the Next Generation Catalogs for Libraries mailing-list. It has already had a flurry of emails, almost all of them really good. But.
Before we go out and define what such future systems should do and be like, we need to give careful consideration to how these systems are measured in terms of success and failure. Make no mistake about it; we're in the library world now, and the normal rules like "project success is determined by resources and time spent creating it, budget for maintenance, and return on investment" don't apply here. Currently libraries fake these factors more than anything; if something isn't delivered on time, no one gets fired. If a project doesn't satisfy a given set of requirements, no one gets shafted. If a project is functional but completely useless, no one gets kicked up the backside. That's just the way of the library world, mostly funded through academic and / or government channels. And of course, there is no return on investment, as it is measured in patron-positivism, a magical force that comes and goes with the feelings of the librarians who run the project.
Now, I'm not writing this to blame librarians for anything here; it is of course hard to claim return on investment when the return is measured in something as flaky as "patron feelings." It's just not doable. In fact it is symptomatic of rules and measures brought upon the library world by government systems that have no clue about what it is we're really doing; it is terribly frustrating! And it's an important point insofar as even when we would love to do X, we can only do Y, because Y fits the balanced scorecard and X doesn't. Needless to say, I don't like the Balanced Scorecard, not because that system can't help when implemented right, but because it is hard to implement right (which is why there are bucketloads of friendly consultancy companies that specialise in helping organisations out with it all; if it sustains a consultancy business, it's the wrong model to use in my eyes ... and I used to be a consultant, I should know!). Other libraries have other measurement and management systems in place, and I don't think they're all that different.
It comes back to government funding models; "we give you X if you can prove yourself in Y and Z, with a return of G, a deficit of H and a turnover of J." The trouble is of course that libraries are not easy to classify; we land in the shady areas of most government models, as we're collectors, preservationists, archivists, grantors of public access, lenders, national institutions with constitutional tasks (collecting and archiving, for example), academia, special-interest research, broad-interest suppliers, service providers, intellectual-property aggregators, funding bodies, practitioners of copyright and rights, educationalists, organisers and hosts of conferences, concerts and presentations, and even more! We do so many things that we're stuck in a political minefield!
So how can we measure the success of the OPAC, for example, not internally (where success is measured by librarians, and rewarded by being given more projects. Hmm.), but to the outside world? How can we create systems that give us success in raising awareness of the need for libraries? That's my important question of the week.