Panlibus Blog

Archive for the 'Systems and technologies' Category

Perceptions 2009: An international survey of library automation

In the latest Perceptions survey, the most popular library management system is from a relatively new supplier to libraries and is available exclusively on a Software as a Service basis. The survey also reveals that interest in open source library management systems is weak outside the community of libraries that has already adopted one.

The Perceptions series of surveys is three years old now, and is part of Marshall Breeding's armoury of library technology commentaries, the most heavily used of which is Library Technology Guides. Perceptions 2009: An international survey of library automation, like its predecessors, aims to ascertain levels of satisfaction within libraries with their library management system and the suppliers thereof. Despite disruption in the library software arena, the library management system (LMS), or integrated library system (ILS) as it is known in the US, remains important:

The integrated library system (ILS) for most libraries represents the most critical component of its technology infrastructure and can do the most to help or hinder a library in fulfilling its mission to serve its patrons and in operating efficiently.

Interest may be waning in open source

One of Marshall’s central aims this year is to gauge interest in open source ILS products, which he describes as “one of the major issues brewing in the industry”.

A key overall finding was that companies supporting proprietary library management systems tend to receive higher satisfaction scores than companies involved with open source library management systems. Marshall notes explicitly that LibLime received particularly low marks for customer satisfaction, whilst libraries that undertook to implement Koha without external support were highly satisfied with this arrangement.

Respondents who had made use of other support firms such as PTFS, Nusoft and ByWater Solutions (it should be noted that support companies servicing open source products are still not prevalent in the UK) were not sufficiently numerous to be included in the report's summary tables. Likewise, Talis had only 14 respondents and therefore does not figure in the main tables, although as a UK supplier we are happy to be placed 10th for LMS satisfaction in an international survey.

As Marshall told the audience at the SCONUL conference here in the UK in June 2009, levels of interest in open source library management systems are low outside the community of libraries already using one. Even libraries that are dissatisfied with their current proprietary system fail to demonstrate interest in open source.

But Software as a Service is top of the pops

Biblionix, described by Marshall as a relatively new company, gained the top satisfaction scores for its product Apollo in each of the ILS product, company, and support categories. This is interesting not just because Biblionix is a relatively new entrant in the library software marketplace, but because the product is offered exclusively as Software as a Service. As Marshall comments:

The responses for Apollo were overwhelmingly positive, the only product to receive 9 as either the mode or median response. The comments offered gave effusive praise for the company, the product, the ease of migration and for support.

It should be noted that take-up of Apollo is currently limited to small public libraries in the US.

Although UK suppliers don’t feature strongly in this international survey, it remains an important source in terms of looking at the key trends in our world.

Google Book Settlement will help stimulate eBook availability in libraries

So says former Google Book Search product manager Frances Haugen in her contribution to the debate on the September Library 2.0 Gang.

This month's Gang was kicked off by Orion Pozo from NCSU, where they have rolled out dozens of Kindles and a couple of Sony Readers. The comparative success of their Kindles over the Sony Readers appears to be down to the simpler process of distributing purchased books across sets of readers, and a broader selection of titles at lower cost. Currently users request books for the Kindle via an online selection form; the books are then purchased and downloaded on to the devices, which are then loaned out. There are no restrictions on the titles purchased, and the collection splits roughly 50/50 between fiction and non-fiction.

The Gang discussed the drivers that will eventually lead to the wide adoption of eBooks. These included the emergence of open eBook standards, and the evolution of devices, other than dedicated readers, that can provide an acceptable reading experience. Carl Grant shared his experience of starting a read on his Kindle and then picking it up from where he left off on his iPhone (as he joined his wife whilst shopping).

An obvious issue influencing the availability of eBooks is licensing and author and publisher rights.  This is where the Google Book Settlement comes in to play.  If it works out as she hopes, Frances predicts that over time this will facilitate broader availability of currently unavailable titles.  I paraphrase:

[From approx 26:50] Institutional subscriptions will become available on the 10M books that Google has scanned so far. Imagine in the future a user with a reader that accepts open formats being able to get access to the books this institutional licence would provide. Imagine school children having access to 10M books that their library subscribes to, instead of having to formally request one-off books to be added to their device.

[From approx 44:50] There are a huge number of books that are no longer commercially available in the US, for several reasons. If the rights holders of those books do not opt out, the books will become available for people to purchase access to. One of the interesting things about the way the settlement is set up is that you will be able to purchase access either directly or through an institutional subscription. What is neat is that this cycle will put a check on prices, as prices for individual books are based upon demand: less popular books will cost less. So if the price of the institutional subscription ever gets too high, libraries can decide to buy one-offs of these books. I think that whole economic mechanism will substantially increase access to books.

The Gang were in agreement that eBooks will soon overtake paper ones as the de facto delivery format; it is just a question of how soon. Some believe that this will happen much more rapidly than many librarians expect. The challenge for librarians is to take their services into this eReading world.

OLE – $5.2m to get from Diagrams to an ILS Replacement in two Years

I'm currently reading my way through the final draft of the OLE Project Final Report. The one-year, Mellon-funded Open Library Environment (OLE) project "convened a multi-national group of libraries to analyze library business processes and to define a next-generation library technology platform". As the report puts it:

the project planners produced an OLE design framework that embeds libraries directly in the key processes of scholarship generation, knowledge management, teaching and learning by utilizing existing enterprise systems where appropriate and by delivering new services built on connections between the library’s business systems and other technology systems.

We at Talis, along with some 200 other organisations, participated in the process by feeding back our experiences of implementing live integrations between Library Management Systems and the other institutional systems that the report's authors recognise as being key to delivering a seamless workflow. Our experience indicated that successful integration between systems is as much to do with local departmental motivations, understanding, and politics as it is to do with technology. This was discussed in more depth on the March Library 2.0 Gang show, where Tim McGeary from the OLE project and Talis' Andy Latham were guests.

The body of the report consists of many process-model diagrams describing the required interactions between library and other processes/components which, when brought together, will enable the construction of library-associated workflows for the next-generation library service that will utilise this next-generation library technology platform.

This first-year project is, in its own terms, a success: "The OLE Project met all of its objectives and was completed on time and within budget". One cannot deny the thought, effort, commitment and enthusiasm that has gone into the production of this report. Without rerunning the analysis they undertook, it would be difficult to criticise the model they have described. The proof of the pudding, of course, will come in the next phase, when they move on from describing a new technology platform to building it.

The planning phase of this project is complete. The next steps are to identify a group of build partners to provide investment funds and to develop and test the initial software. A build partner  can be an individual library, a consortium or a vendor.

The total partnership cost of the OLE Project over two years is projected to be $5.2 million, a figure that includes all programming effort as well as project management and quality assurance staffing. In addition to OLE Project costs, costs of participation would include some local staff, governance and travel funding. Project partners intend to contribute half of the OLE partnership costs and seek the other half from The Andrew W. Mellon Foundation.

Viewing the process diagrams in the report takes me back to 1990, to a snow-covered hut in the grounds of the University of Birmingham. I shared that hut for several weeks with Talis (then BLCMP) staff and a group of folks from a Dutch library system vendor (long since subsumed into the OCLC global organisation), with the objective of designing the next-generation library technology platform. Several years, and a few £ million in investment, led to the development of a very successful library system from which the current Talis Library System, Alto, has since evolved.

There are many parallels between that 1990 development process and the road that OLE are about to embark upon, if their bids for continued funding are successful. Not least that BLCMP was a library cooperative during that period, and that we had the luxury of being able to step back from previous systems and start with a clean set of library process requirements.

I wish the OLE project continued success. Whatever is achieved, I believe the exercise they are undertaking is massively valuable to the whole library domain.

Will they be able to translate their clean diagrams [uncluttered by interaction issues with systems over which they have little influence, and uncoloured by local institutional inter-departmental politics and 'traditional practices'] into an installable, manageable collection of components suitable for delivering format-agnostic library services? Possibly. Will they be able to do it in two years for a mere $5.2m? Experience tells me to be a little more sceptical on that last point.

OCLC takes aim at the library automation market from the Cloud

Over the last few years OCLC, the US-based not-for-profit cataloguing cooperative, has been acquiring for-profit organisations from the world of library automation, such as PICA, Fretwell-Downing Informatics, and Sisis Information Systems.

About fifteen months ago, Andrew Pace joined OCLC from North Carolina State University Libraries and was given the title of Executive Director, Networked Library Services. After joining OCLC, Andrew, who had a reputation for promoting change in the library technology sphere, almost disappeared from the radar.

Putting these two things together, it was clear that the folks from Dublin were up to something beyond just owning a few non-US ILS vendors.

From a recent post on Andrew's Hectic Pace blog, and press releases from OCLC themselves, we now know what that something was. It is actually a few separate things, but the overall approach is to deliver the functionality traditionally provided by the ILS vendors (Innovative, SirsiDynix, Polaris, Ex Libris, etc.) as services from OCLC's data centres. This moves OCLC's reach beyond cataloguing into the realms of acquisitions, licence management, and even circulation.

The idea of breaking up the monolithic ILS (or LMS, as UK libraries refer to it) is not a new one, as followers of Panlibus will know. Equally, delivering functionality as Software-as-a-Service (SaaS) has been native to the Talis Platform since its inception; it underpins the already established SaaS applications Talis Prism, Talis Aspire and Talis Engage.

Both OCLC, with WorldCat Local, and Talis, with Prism, have been delivering public discovery interfaces (OPACs) as SaaS applications for a while now, and ‡biblios.net have recently launched their social cataloguing as a service [check out the podcast with Josh Ferraro], but I think this is the first significant announcement of circulation as a service that I have been aware of.

The move to Cloud Computing, with its obvious benefits of economies of scale and the removal of the need for libraries to be machine minders and data centre operators, is a reflection of a much wider computing industry trend. The increasing customer base of Salesforce.com, and the number of organisations letting Google take care of their email, or even their whole office operation (such as the Guardian), are testament to this trend. So the sales pitch from OCLC, and others including ourselves here at Talis, about the total cost of ownership benefits of a Cloud Computing approach is supported and validated industry-wide.

So, as a long-time predictor of computing transforming from a set of locally managed and hosted applications to services delivered as utilities from the cloud, mirroring the same transformation for electricity generation and supply a century ago, I welcome this initiative by OCLC. That's not to say that I don't have reservations. I do.

The rhetoric emanating from OCLC in these announcements is reminiscent of the language of the traditional ILS vendors, who are probably very concerned by this new and different encroachment on their market place. There is an assumption that if you get your OPAC from WorldCat (and as a FirstSearch subscriber, faced with this on-the-surface 'free' offer, you are probably thinking that way), you will get circulation and cataloguing and all the rest from a single supplier – OCLC.

The question that comes to mind, as with all ILS systems, is: will you be able to mix and match modules (or, in this case, services) from different suppliers, so that libraries can choose what is best for them? Will OCLC open up the protocols (or, to be technical for a moment, the hopefully RESTful APIs) to these application/service modules, so that they can be used not only with other OCLC services but with services and applications from open source and other commercial vendors? Will they take note of, or even adopt, the recommendations that will come from the OLE group [discussed in last month's Library 2.0 Gang], which should lead towards such choice?

Some have also expressed concern that a library going down the OCLC cloud services route will be exposing itself to the risk of ceding to OCLC control of how all its data is used and shared, not just the bibliographic data that has been at the centre of the recent storm about record reuse policies. Against that background, one can but wonder what OCLC's reaction would be to a library's request to openly share circulation statistics from its OCLC-hosted circulation service.

This announcement brings to the surface many thoughts, issues, concerns, technological benefits and questions that will no doubt rattle around the library podcasting and blogosphere for many months to come. I also expect that in the board rooms of the well-known commercial [buy our ILS and a machine to run it on] providers, there will be many searching questions asked about how to deal with the 500lb [not-for-profit] gorilla that has just moved from the corner of the room to start dining from their [for-profit] table.

This will be really interesting to watch…

The composite image was created using pictures published on Flickr by webhamser and Crystl.

UKSG09 Uncertain vision in sunny Torquay

Glorious sunshine greeted the opening of the first day of UKSG 2009 in Torquay yesterday. The stroll along the seafront from the conference hotel (Grand in name and all facilities, except Internet access – £1/minute for dialup indeed!) was in delightful sharp contrast to the often depressing plane and taxi rides to downtown conference centres.

The seaside theme was continued with the bright conference bags – someone had obviously got hold of a job lot of old deckchair canvas. 700-plus academic librarians, publishers and supplier representatives settled down in the auditorium of the Riviera Centre to hear about the future of their world.

The first keynote speakers were very different in topic and delivery, but all three left you with the impression of change coming over the next few years, the shape of which they were not totally sure.

First up was Knewco Inc's Jan Velterop, whose pitch was a somewhat meandering treatise on the wonders and benefits of storing metadata in triples – something he kept saying he would explain later. The Twitter #uksg09 channel was screaming "when is he going to tell us about triples" and "what's a triple" whilst he was talking. He eventually got there, but I'm not sure how many of the audience understood the massive benefits of storing and linking data in triples, benefits that we at Talis are fully aware of. Coincidentally, for those who did get his message, I was posting about the launch of the Talis Connected Commons for free, open storage of data – in triples, in the Talis Platform.

Next up was Sir Timothy O'Shea from the University of Edinburgh, who talked about the many virtual things they are doing up in Scotland. You can take your virtual sheep from your virtual farm to the virtual vet, and even on to a virtual post mortem. His picture of the way information technology is playing its part in changing life at the university, apart from being a great sales pitch for it, left him predicting that this is only the early stage of a massive revolution. As to where it is going to lead us in a few years, he was less clear.

Joseph Janes, of the University of Washington Information School, was one of those great speakers who dispense with any visual aids or prompts; he delivered a very entertaining 30 minutes comparing entry into this new world of technology-enhanced information access with his experience as an American wandering around a British seaside town. His message was that the next few years will feel very similar on the surface, as we will recognise most of the components, but will actually be very different when you analyse them. As an American he recognises cars, buses, adverts, and food, but in Britain they travel on the wrong side of the road, are different shapes, and are products he doesn't recognise. As we travel into an uncertain but exciting future, don't be fooled by recognising a technology – watch how it is being used.

A great start to the day, which also included a good break-out session from Huddersfield's Dave Pattern. He ended his review of OPACs and predictions about the development of OPAC 2.0 and beyond with a heads-up about my session today, which caused me to spend a couple of hours in the hotel bar, the only place with WiFi, tweaking my slides. It would be much easier to follow Mr Janes' example and deliver my message off the cuff without slides – not this time perhaps 😉

Looking forward to another good day – even if the sun seems to have deserted us.

Code4lib final day in Providence – looking forward to Asheville

As always, a slightly shorter day for the last day of the conference, but no less stimulating. Talis CTO Ian Davis provided the keynote for the day, entitled "If you love something… set it free".

He provided a broad view of how the linking capability of the web has changed the way things are connected, and how, with participation, network effects have resulted. But that is still linking at the level of documents. The Semantic Web fundamentally changes how information, machines, and people are connected. Information semantics have been around for a while; it is the coupling with the web that makes the difference. He conjectured that data outlasts code, meaning that Open Data is more important than Open Source; that there is more structured data than unstructured, so people who understand structure are important; and that most of the value in data is unexpected or unintended, so we should engineer for serendipity.

He gave a couple of warnings: be very clear about how you license your data, so that people know what they can and can't do with it, and take care over how you control the use of the personal parts of data. He made it clear that we have barely begun on the road, but that the goal is not to build a web of data for its own sake; it is to enrich lives through access to information – making the world a better place.

Edward M. Corrado of Binghamton University gave us an overview of the Ex Libris Open Platform strategy, the topic of a previous Talking with Talis podcast with Ex Libris CSO Oren Beit-Arie. Edward set the scene as to why APIs are important for getting data out of a library system. He then explained the internal (formalised design, documentation, implementation and publishing of APIs) and external (published documentation, hosted community code, tools, and opportunities for face-to-face meetings with customers) initiatives from Ex Libris. The fact that you need to log in to an ostensibly open area raised, as it has before, some comments on the background IRC channel.

The final two full presentations of the day demonstrated two very different results of applying linked data to services. Adam Soroka, of the University of Virginia, showed how geospatial data could be linked to bibliographic data, with fascinating results, while Chris Beer and Courtney Michael, from WGBH Media Library and Archives, showed some innovative, simple techniques for representing relationships between people and data.

The day was drawn to a close with a set of 5-minute lightning talks, a feature of all three days. These lightning talks are one of the gems of the Code4lib conference: a rapid dip into what people are doing or thinking about. They are unstructured, and folks put their name on a list to talk about whatever they want. The vast majority are fascinating to watch.

During the conference the voting for Code4lib 2010 was completed so we now know that it will all take place again next year in Asheville, NC.  From the above picture, I can’t wait.


Code4lib 2009 Day 1

We were protected from the bitterly cold wind whipping across the state of Rhode Island for day one of  the excellent Code4lib Conference 2009.  Cocooned in the warm expansive basement ballroom of the former Masonic temple that is the Renaissance hotel, the day kicked off with a thought provoking keynote from Stefano Mazzocchi of Metaweb, the folks behind Freebase.

His talk was based around the way we humans have evolved communication over the centuries – from speech (instant, portable, but transient) through cave paintings (needs tools, permanent, but not very portable) and on through writing on clay tablets and fibre materials, to books. With all of these there is a cost – paint the cave, print the books, etc. In the current online world the cost of a communication is virtually zero, which leads to questions such as "if it is zero cost, why do we need to keep it in a library?" Libraries may start to evolve towards being museums of rare physical items. If everything goes online, what happens to things like serendipitous discovery – do we lose the shelf-browse experience?

Stefano, despite some severe network difficulties, proceeded to demonstrate various aspects of Freebase, the data it holds and the way it provides answers to difficult questions. He also demonstrated how humans interacting with Freebase are improving the data: saying whether a person in an image is male or female, placing a location on a map, and so on.

This great start was followed by a series of excellent speakers delivering 20-minute sessions – too many to mention them all, but here are some of my highlights:

Anders Söderbäck, unfortunately without his colleague Martin Malmsten (a previous Talking with Talis interviewee), from the National Library of Sweden, described the value they gained by exposing their catalogue as Linked Open Data: "Linked Open Data turns the web into an API".

Anders was followed by our own Ross Singer, describing how the open source project Jangle had matured to a version 1.0 state, with connectors being produced, including one for Talis Alto. He explained that Jangle follows the principles of the AtomPub standard, as used by Google, Microsoft, IBM, etc. for transferring and updating data, and showed how the architecture of a Jangle implementation looks. His talk laid a great foundation for a breakout session later in the day.

Glen Newton, of CISTI, National Research Council, talked about LuSql – a performant way to get large amounts of data indexed and into Lucene.

We had an entertaining session from the National Library of Australia’s Terence Ingram who took us through their experiences in delivering REST based services – the pitfalls and the unexpected benefits of being able to link things together easily.

Ed Summers and Mike Giarlo, Library of Congress, with the help of a Light Sabre iPhone app, spoke about SWORD – a lightweight protocol for depositing repository objects.
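For the curious, SWORD is essentially AtomPub with deposit conventions: a client POSTs a packaged file to a repository's collection URI, with headers describing the package. A minimal sketch in Python of what such a deposit request looks like – the collection URL and package-format URI below are purely illustrative, and the request is built but deliberately not sent:

```python
import urllib.request

# Hypothetical repository collection URI. A real SWORD client would first
# fetch the repository's service document to discover its collections.
COLLECTION_URI = "https://repository.example.edu/sword/collection/theses"

def build_sword_deposit(package_bytes: bytes, filename: str) -> urllib.request.Request:
    """Build (but do not send) a SWORD-style AtomPub deposit request."""
    return urllib.request.Request(
        COLLECTION_URI,
        data=package_bytes,
        method="POST",
        headers={
            "Content-Type": "application/zip",
            "Content-Disposition": f"filename={filename}",
            # SWORD 1.x names the package format with an X-Packaging header
            "X-Packaging": "http://purl.org/net/sword-types/METSDSpaceSIP",
        },
    )

req = build_sword_deposit(b"...zip bytes...", "thesis.zip")
print(req.get_method(), req.full_url)
```

Sending the request (with authentication) would return an Atom entry describing the deposited item; the point here is simply how little protocol there is beyond plain HTTP.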

Godmar Back, of Virginia Tech, took us through the 2.0 developments of the LibX browser plug-in, which will enable libraries to embed their services into other websites. He also mentioned the building of an environment in which these can be shared between libraries.

The day was brought to a close with a series of 5 minute lightning talks – agreed by many to be the best bit of Code4lib. 

Looking forward to another great day tomorrow.

Flickr photo of hotel by jdn

DS Disappears

Following DS's merger with the Axiell Library Group back in April 2008, they have made the unsurprising announcement that the DS name is to disappear as they become the UK Division of Axiell.

Jerk Sintorn, the Scandinavia-based Axiell Library Group CEO, commented that "Changing the name of DS to Axiell is a logical step". From the Axiell group's point of view I'm sure it is, but it means there now remains only one significant UK supplier to the UK public library market – Talis.

In the web site refresh that accompanies the name change, it is good to see that their blog and forum are prominent, so that they can engage with the community (although I note that the forum is login-access only).


Catching the next wave

Catching the next wave was the title of my opening track keynote presentation in the “Catching the semantic wave – or down in a sea of content?” session of the “Order out of chaos – creating structure in our information universe” track at the Online Information Conference 2008.  Presentation below from Slideshare.

[slideshare id=812920&doc=rjwonlinedec08-1228306147696648-8&w=425]

This was a very well attended track – standing room only in most of the sessions – with great interest in the Semantic Web, Web 2.0, and associated concepts and technologies. From a lightly attended single session last year, this topic has grown into an oversubscribed second track this year. Having spent some time last year bending the ear of conference chair Adrian Dale about what was coming, I can wear my virtual 'I told you so' hat with pride this year.

My job as keynote was to provide a broad introduction to, and context for, things like Linked Open Data, the Semantic Web, Cloud Computing and clouds of data, setting the scene for the day. Hopefully I was successful in my objective; the number of attendees is definitely a measure of the interest in the topics covered.

Considering that a large proportion of the conference attendees are librarians, it is gratifying to note that they are already looking beyond the current Web 2.0 meme towards what will be washing over us next. Thinking about it, this is hardly surprising. The next wave is far more associated with data, metadata, linking and recommending than with the Web 2.0 meme of social networking, blogging and wikis. Dare I say it out loud, but as a generalisation librarians appear far more comfortable with the concerns of data than with social interaction.

I get the feeling that these concepts are going to be adopted in libraries far quicker than we would expect once they start to gain momentum. This would be helped if we could get past some of the terminology confusion, the main culprit being the confusion between semantics/semantic analysis and the Semantic Web. The web of data, as against [or, to be more correct, in addition to] the current web of documents, is how I see the Semantic Web. A great example of the web of data in action is the Linking Open Data project.
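To make the "web of data" idea concrete, here is a minimal sketch in plain Python of what a set of triples looks like and why shared URIs matter. The example.org URIs and the book data are purely illustrative, not drawn from any real dataset:

```python
# A triple is just (subject, predicate, object). Subjects and predicates
# are URIs, so independently published graphs that mention the same URI
# join up automatically -- that is the "web of data".
EX = "http://example.org/"  # hypothetical namespace for illustration

library_data = {
    (EX + "book/moby-dick", EX + "title", "Moby-Dick"),
    (EX + "book/moby-dick", EX + "author", EX + "person/melville"),
}

# A second, independently published dataset about the same author URI.
biography_data = {
    (EX + "person/melville", EX + "name", "Herman Melville"),
    (EX + "person/melville", EX + "born", "1819"),
}

# Merging the two graphs is just set union -- no schema negotiation needed.
web_of_data = library_data | biography_data

def describe(subject, graph):
    """Everything a graph says about one resource ('follow your nose')."""
    return {(p, o) for s, p, o in graph if s == subject}

print(describe(EX + "person/melville", web_of_data))
```

Following the author URI from the library graph into the biography graph is exactly the kind of cross-dataset linking the Linking Open Data project demonstrates at web scale.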

Mashed Libraries

I'm sat amongst such a collection of UK library geekdom as I've not experienced before. I'm in the basement of Birkbeck College in London for the Mashed Libraries UK 2008 event, sponsored by UKOLN and organised by Owen Stephens.

Apart from the acrobatics of trying to get on to the wifi, which I'm sure could be made more than a little simpler and less frustrating, the day got off to a great start. Rob Styles did a great, mostly command-line-driven, introduction to using Talis Platform stores. He was followed by Tony Hirst sharing his experiences, tips, and tricks for using online tools such as Yahoo Pipes and the spreadsheet elements of Google Docs. This was an excellent session – each time I return to Yahoo Pipes I am amazed anew and wonder why I don't use it more.

Next we had Timm-Martin Siewert from Ex Libris, who gave an overview of their Open Platform Strategy and a peek into EL Commons. This was the subject of a recent Talking with Talis podcast with Oren Beit-Arie, Ex Libris Chief Strategy Officer. Like myself in the podcast, others today questioned why EL Commons, being a commons, is not open to all.

A previous colleague of mine from way back, Mark Allcock, now with OCLC, then gave us a brief overview of the readily available APIs from them. Finally, Ashley Sanders talked about some API work at COPAC.

After an excellent lunch, small groups formed resulting in much chatting and coding.

The afternoon was punctuated by a presentation from Paul Bevan, of the National Library of Wales.  Paul took us through the issues in how they are taking their resources to the majority of visitors – online.

That brought us to the end of the afternoon and some short reports on what people had been working on. Unsurprisingly, given the presentations that started the day, several groups had made great progress using Yahoo Pipes and the Talis Platform – in several cases both. For example, via Pipes one group were pulling book records from Amazon, adding jacket images, then augmenting them with holdings data from the Platform. Another plotted library locations for records from the Platform on a Google Map, again using holdings data plus location data from the Silkworm Directory.

All in all an excellent day, enjoyed by thirty-plus people interested in using technology to improve libraries. There is already talk of the next one. Well done Owen for organising this one.

Update: Dave Pattern has uploaded several photos of the day to Flickr – the image above being one of them.