Panlibus Blog

Archive for May, 2008

Microsoft abandons Digitization, Book Search and Academic Search

The day after I highlighted OCLC and Google getting closer together to make it easier to find books and their digitised versions, Microsoft announces they are getting out of that game.

The Microsoft Live Search team announces on their blog that they are winding down Book Search:

Today we informed our partners that we are ending the Live Search Books and Live Search Academic projects and that both sites will be taken down next week. Books and scholarly publications will continue to be integrated into our Search results, but not through separate indexes.

With Live Search Books and Live Search Academic, we digitized 750,000 books and indexed 80 million journal articles. Based on our experience, we foresee that the best way for a search engine to make book content available will be by crawling content repositories created by book publishers and libraries.

Not continuing the service they developed in partnership with the Internet Archive, CCS, and others, they are giving away what they have amassed:

… we intend to provide publishers with digital copies of their scanned books. We are also removing our contractual restrictions placed on the digitized library content and making the scanning equipment available to our digitization partners and libraries to continue digitization programs. We hope that our investments will help increase the discoverability of all the valuable content that resides in the world of books and scholarly publications.

It is interesting that they have come to the realisation that the best way for a search engine to make book content available will be by crawling content repositories created by book publishers and libraries. The question, of course, is whose search engine.

Without doing much reading between the lines, it is clear that Microsoft have failed to see a business model in the worthy job of digitizing the world’s books.  I wonder if there is one, or does the answer lie with open data projects like the Open Library, the Million Book Project, and the sharing of libraries.


OCLC announce more links with Google

From the press release:

DUBLIN, Ohio, USA, 19 May 2008—OCLC and Google Inc. have signed an agreement to exchange data that will facilitate the discovery of library collections through Google search services.

Digging into the detail, it looks like this will mean a few things.  Apart from Google Book Search providing a ‘Find this book in a library’ link, as is already available from other parts of Google, it appears to be only relevant to OCLC member libraries which also participate in the Google Book Search program.

It means that these libraries are able to share with Google the MARC records for the books they have contributed [to Google Book Search], to make those books easier to discover.  I am not clear whether OCLC rules prevented this sharing from happening prior to the agreement.

Implicitly this also means that Google, at least in the Book Search team, recognise the value of metadata created by the library profession for making books more discoverable.  Something the library community have been saying for a long time – parsing and indexing the content is only part of the solution to making books findable.

Also in the press release:

The new agreement enables OCLC to create MARC records describing the Google digitized books from OCLC member libraries and to link to them.

OCLC should therefore be creating catalogue records for the digitized books held by Google.  This means that a search in WorldCat will direct a searcher to the digitized manifestation as well as to the library that contributed it.  A great way to gain wider exposure for a library’s collection without necessarily increasing the number of people through its doors.

To enable OCLC to create catalogue records for items in the Google Book Search collection, Google must, I presume, have made some commitment to creating and maintaining a permanent  URI for each digitized book.  I wonder if those URIs are generally available, with a commitment to maintain them, in a way that others could reliably catalogue them?

The announcement is one of a continuing series of additions to the Google Book Search service, such as the recent release of their API.

Listening to Google Product Manager Frances Haugen in her guest slot on the Library 2.0 Gang, it is obvious that at least one person in the Book Search team is interested in and motivated by libraries.  Let’s hope we see even more links between them and the wider library community.


MLA Setting Course For The Next Five Years

Last week England’s Museums, Libraries and Archives Council (MLA) launched an Action Plan designed to set the course for libraries over the next five years.

The Action Plan (PDF 70KB) from the Museums, Libraries and Archives Council is the result of extensive consultation and engagement with stakeholders and sets out an agenda for change for public libraries in England with the aspiration of making every public library a great public library.

From the slim report, subtitled ‘towards 2013‘:

This Action Plan reflects the outcome of extensive consultation and engagement with stakeholders. MLA will work in partnership to deliver this ambitious agenda over the next five years.

The report doesn’t go into detail about the extensive consultation, or the stakeholders, but after some reflection on ‘WHAT DOES ‘GOOD’ LOOK LIKE?’, it lays out how, in 2008/9, MLA, in consultation with DCMS, ACL, SCL, LGA and others, will make a start on four key challenges:

  • Challenge one: research and evidence
  • Challenge two: best practice
  • Challenge three: innovation
  • Challenge four: digital change

Each of these four challenges has a set of actions listed under the ‘MLA will‘ heading.  It is clear from the initial wording of these actions (Invest in impact research, Analyse, Identify and promote, Promote, Actively promote, Consult, Highlight, Identify, Encourage, Examine, Maintain support, Make an effective case, Manage, Promote, Advance) that MLA are planning to be central to encouraging libraries to improve the service they deliver.

I read the report with interest, but it left me with a feeling that the MLA are going to have a difficult job on their hands.  Not so much in doing that encouraging, promoting, consulting, examining and advancing, but in how they are going to measure the effect of it all.  In five years’ time, how will they know that they have succeeded?  Perhaps more importantly, how, in one or two years’ time, will they know that they are on target to succeed?


Multi-platform OPAC Widget from Bremen via Netvibes

The Jacobs University Information Resource Center (IRC), in Bremen, Germany, and the University Library have been working on some Web 2.0 tools.  Their first tool is the jOPAC Widget.  Thanks to Michael over at Tame the Web for surfacing this.

This isn’t the first OPAC widget by any means; cast your mind back to Talis Mashing up the Library winner John Blyberg’s Go-Go-Google-Gadget, and of course Jon Udell’s LibraryLookup Project.  So what is different about this one?  Multi-platform support, that’s what.

The problem with widget platforms is that there are so many to choose from: Firefox plug-ins, Google Gadgets, Windows Vista Sidebar Widgets, Mac Dashboard Widgets, the list goes on and on.  This means that by producing a widget for a particular widget platform, you are immediately limiting your potential audience, and hence the take-up of all the effort put into it.  Netvibes have addressed this problem by coming up with the Universal Widget API (UWA).  Of widgets developed using the UWA, they say:

Write once, run everywhere. UWA widgets are compatible with all major widget platforms (iGoogle, Windows Vista, Apple Dashboard, iPhone, Opera, blogs, MySpace, etc.).

The effect of this is that with very little effort your widget becomes simultaneously available in lots of widget environments.  Something that a single project could never expect to achieve on its own.

Take a look at this nicely laid out OPAC search widget, which works on your favourite widget platform.  The screenshot above is from my favourite, the Mac Dashboard.  The folks at Jacobs are demonstrating the traditional development approach of building on the shoulders of others.


LCSH as Linked Data

A small number of folks, including our own Rob Styles, recently flagged up the work by Ed Summers  in producing:

an experimental service that makes the Library of Congress Subject Headings available as linked-data using the SKOS vocabulary.

At first look, the result of this work is a deceptively simple site, with a wealth of information and relationships lurking beneath the surface.  Viewing the site in your web browser, although interesting, is not the point.  It is designed to be used by other applications, using the subject headings and the relationships between them.  Take up the offer on the site of browsing some of the subjects using a linked data browser (I found Zitgist easy to use); follow the narrower, broader, and related terms, and it is amazing where you end up.

Ed has used SKOS (Simple Knowledge Organization System), a formal vocabulary built on RDF, to describe the concepts in LCSH.
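As a rough illustration of the idea (in Python rather than RDF, and with made-up identifiers and labels rather than real LCSH data), a SKOS concept scheme boils down to concepts carrying a preferred label plus broader/narrower links that an application can walk:

```python
# Hypothetical slice of a subject heading scheme, modelled SKOS-style:
# each concept has a prefLabel and broader/narrower links to other concepts.
concepts = {
    "c1": {"prefLabel": "Libraries", "broader": [], "narrower": ["c2"]},
    "c2": {"prefLabel": "Library science", "broader": ["c1"], "narrower": []},
}

def broader_chain(concept_id):
    """Walk skos:broader links from a concept up to its top concept."""
    chain = []
    current = concept_id
    while concepts[current]["broader"]:
        current = concepts[current]["broader"][0]
        chain.append(concepts[current]["prefLabel"])
    return chain

print(broader_chain("c2"))  # ['Libraries']
```

In the real service each concept identifier would be a dereferenceable URI and the links would be RDF triples rather than a Python dictionary, but the traversal an application performs is essentially this.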

Recently Alistair Miles from the University of Oxford, a key developer of SKOS, gave a presentation at the Library of Congress on SKOS in the context of Semantic Web Deployment.  From the slides it looks like one of those events that you wish you had been around to attend.

It is initiatives like this that are the early green shoots of the benefits of the Semantic Web appearing in and around library data.  With a reliable unique URI for each of the concepts in LCSH, leading to a SKOS-encoded definition for that concept, which in turn includes URIs for both broader and narrower terms in the subject heading hierarchy, why would anyone bother encoding such information into their own application?

For the moment it is an experimental site which hopes to inform developments inside the Library of Congress.  If LC do follow this lead and open up LCSH as reliable Open Linked Data, and applications start to use it, we will rapidly end up with a network of applications that are semantically linked together by the fact that they share the same URIs to define the same concepts.

There is much more to the Semantic Web, but just extrapolate this little bit of interlinking across other authoritative data sets in the library world  such as authors, publishers, and the like, then on in to data sets in other domains that share things like geographical locations, movie databases, Wikipedia etc., and you will end up with something significantly more powerful than the Web we have produced so far.

Early days yet, but pioneers like Ed, Alistair, and a growing band of linked data enthusiasts many of whom were to be found at the Linked Data Workshop at WWW2008 co-chaired by our own Tom Heath, deserve a hat tip.


The Iron Fist of Interoperability

“Any customer can have a car painted any colour that he wants so long as it is black.”  — Henry Ford, My Life and Work (1922)

It’s not easy to get people to agree on anything.  This is why joining a standards committee is generally considered something akin to being sent to a Siberian work camp.  Every party will have competing priorities, ideals and agendas, and the resulting work generally comes out worse for wear after all of the stretching and mending from all sides.

At the same time, without some kind of agreement, there is no hope of adoption.  It is really an unenviable position.

So with this in mind, I have great awe and respect for the Digital Library Federation ILS and Discovery System Task Force for wading into these waters.  While the task force is not a standards body, per se, the fact that they are trying to promote interoperability through the recommendation of a specification makes it seem like splitting hairs not to cast them in the same lot.

The recommendation draft is certainly welcome to library developers who have been craving something, anything, to help unlock the data in their proprietary systems (even though, as Marshall Breeding pointed out in the Library 2.0 Gang podcast on the subject, it’s about a year late).  The current draft lays out desirable pseudocode-type methods and then gives options of existing, off-the-shelf standards and protocols that could be used to enable the functionality defined.

The problem here is that they generally give multiple options for achieving the goal of any given method.  So this means that any ILS vendor can choose from a variety of protocols for implementing the spec and that a different vendor can choose alternate standards for the exact same functionality.  The most striking example of this would be a GetAvailability service (basically, “what is the current status and location of a given item”) which the recommendation says could be implemented via NCIP, a REST interface, SRU or OpenURL.

The point of a standardized API isn’t to make it simple for the vendors to implement.  The point is to make it simple for developers to implement.  The more options that the developer has to account for, the more complicated the client library must be to access the API.  This then gets to be rather chicken or egg.  If there are no programming language specific client libraries to access the API, there will be a slower rate of adoption (especially if non-library programmers have to learn the basics of SRU, OpenURL or NCIP).  If the spec is too complicated or allows for too much variation in protocols or output formats, it will be hard to find volunteers to build the domain-specific languages to help spread the proliferation of library data in other kinds of applications.
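To make that concrete, here is a hypothetical Python sketch: the protocol names come from the draft recommendation, but the vendor labels and request formats are invented, purely to show what it costs a client when every vendor picks a different protocol for the same question:

```python
# Hypothetical: three ILSs expose the same GetAvailability question via
# three different protocols, so the client library needs one code path
# (request builder plus, in reality, response parser) per protocol.
def availability_request_ncip(item_id):
    return f"<NCIPMessage><LookupItem>{item_id}</LookupItem></NCIPMessage>"

def availability_request_sru(item_id):
    return f"query=rec.id={item_id}"

def availability_request_openurl(item_id):
    return f"rft_id={item_id}&svc_id=availability"

# Invented vendor-to-protocol mapping, for illustration only.
PROTOCOL_FOR_VENDOR = {
    "vendor_a": availability_request_ncip,
    "vendor_b": availability_request_sru,
    "vendor_c": availability_request_openurl,
}

def get_availability_request(vendor, item_id):
    """One question, three request formats, depending on the vendor."""
    return PROTOCOL_FOR_VENDOR[vendor](item_id)
```

And each of those request formats comes with its own response format to parse, which multiplies the client-side work again.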

This is not an argument against using library standards.  An OpenURL interface on a library catalog to resolve holdings data would be incredibly useful.  Being able to harvest bibliographic records via OAI-PMH seems like a no-brainer.

However, it’s the combination of these where things begin to break down.  Imagine this hypothetical scenario:

  • SirsiDynix conforms to the recommendation by providing an OAI-PMH interface to their bibliographic and authority records and NCIP for item availability.
  • VTLS conforms to the recommendation by providing an SRU interface that has the capability of exposing all bibliographic records with the option to filter by date and a proprietary RESTful interface for item availability.
  • Talis conforms to the recommendation by providing a SPARQL endpoint for bibliographic records and a proprietary RESTful interface for item availability (with different syntax/responses than VTLS).
  • Ex Libris conforms to the recommendation by providing OpenURL interfaces for bibliographic, authority and holdings records.

These are entirely fictitious, of course, and somewhat facetious.  I wouldn’t expect Ex Libris to use OpenURL for everything, nor would Talis just say, “here’s SPARQL, have at it!”.  After all, Platform stores already have an OAI-PMH interface.  Replace the names with any other vendor’s; the point is that they could all do the above and all claim compliance.

Now imagine being a developer trying to write an application that uses data from the ILS.  Maybe the main library has a Voyager catalog and the law library uses Innovative.  Maybe the library is part of a consortium with libraries that have Aleph, Polaris and Unicorn.  Now let’s say that the developer doesn’t actually work for the library, but for the central IT department, and he or she is trying to integrate these services into courseware or a city government portal.  If all of these disparate systems use different protocols to access the same kinds of data, the odds lessen greatly that many of these catalogs will ever make it into the portal or VLE.

With the Jangle project, we’re trying to eliminate as much of this flexibility in implementation as possible.  It is a difficult balance, certainly, to prescribe a consistent interface while also accounting for extensibility.  But the point here is consistency.  One of the reasons we chose the Atom Publishing Protocol to interact with Jangle is because we think it will provide the lower level functionality needed to manage the data in a simple and consistent way.  On top of the AtomPub interface, simple OAI-PMH, OpenURL or NCIP servers can be built, using the AtomPub REST interface, to ensure that our library services can interact with existing library standards based applications.  At the same time, developers can use common client libraries (such as those for Google’s GData or WordPress, for example) to have a congruous means of accessing different kinds of data and services.  By only allowing Atom, we can focus on interacting with the data instead of requiring developers to focus on the protocols.
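As an illustration of the consistency argument (the feed below is invented, not real Jangle output), settling on a single Atom-based representation means the client side needs just one parser, written with nothing more than a standard XML library:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# A made-up Atom feed of the kind an AtomPub-based ILS API might return,
# with an item's status carried in the entry summary for simplicity.
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Items</title>
  <entry>
    <id>urn:example:item:1</id>
    <title>The Wealth of Networks</title>
    <summary>available</summary>
  </entry>
  <entry>
    <id>urn:example:item:2</id>
    <title>My Life and Work</title>
    <summary>on loan</summary>
  </entry>
</feed>"""

def item_statuses(xml_text):
    """Return (title, status) pairs from an Atom feed of items."""
    root = ET.fromstring(xml_text)
    return [(entry.find(ATOM_NS + "title").text,
             entry.find(ATOM_NS + "summary").text)
            for entry in root.findall(ATOM_NS + "entry")]

print(item_statuses(feed_xml))
# -> [('The Wealth of Networks', 'available'), ('My Life and Work', 'on loan')]
```

Whatever kind of resource the feed carries, bibliographic records or item availability, the same few lines of Atom handling apply, which is exactly the congruence the post argues for.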

After all, sometimes to get from point A to point B, you just need a car.  The color doesn’t matter all that much.

JISC & SCONUL Talk with Talis about Library Management System Study

Rachel Bruce of JISC and Anne Bell of SCONUL join me in the latest Talking with Talis podcast to discuss the recently published JISC & SCONUL Library Management Systems Study – An Evaluation and horizon scan of the current library management systems and related systems landscape for UK higher education.

We discuss the report, the reasons for commissioning it, how it will inform the ongoing debate about the future of academic libraries, and how libraries could use it.


During the conversation we reference the following resources:

This conversation was recorded on Friday 9th May  and edited on a Mac with Garageband.


CILIP Podcasts Syndicate Library 2.0 Gang

I was delighted to see that via the newly launched Podcasts area of the CILIP Communities site they are syndicating the Library 2.0 Gang.

The combined feed of podcasts that CILIP have launched is a great service to CILIP members, and a recognition that the traditional ways of learning and keeping up to date are being powerfully supplemented by blogs and podcasts.  I will be interested to see how this develops.

The Gang is the monthly round-table listen for those interested in libraries and the technologies that influence them, and I am eager to make it available to all who will benefit from the insights and opinions of the librarians, vendors, journalists, and commentators who join it.

To that end, whilst welcoming CILIP’s recognition of the Gang, I invite others in different sectors and geographies who are interested in enriching their site, and adding value for its visitors, to drop me a line about syndicating the Library 2.0 Gang series.


Conversations in the Market

… or, in the words of the Cluetrain Manifesto, markets are conversations.

The May 2008 Library 2.0 Gang ticks a couple of boxes on the list of things that show that the best way to move forward is to talk and form a consensus.

Firstly the subject of the conversation – The Digital Library Federation (DLF) working group that are recommending a generic API for all Library Systems to support, and the ‘Berkeley Accord’ that most vendors have signed in support of this.

Secondly, the fact that senior people from at least three of the major vendors are comfortable joining the Library 2.0 Gang for an open recorded conversation, about how they might support the API recommendations in their product sets.

As facilitating host and chair for the conversation, it was very refreshing to hear how open Talin Bingham from SirsiDynix, Oren Beit-Arie from Ex Libris, and Talis’ Dan Mullineux were about their plans and support for the DLF initiative.   One point of discussion in the show was the position of Innovative Interfaces, who were the only vendor who explicitly abstained from supporting the Berkeley Accord.  All others that expressed a position supported it.   Although unable to take part in the conversation, it is clear from the blog post by Betsy Graham, Vice President of Product Management, that their position is not as negative as some have painted it.

If from this you think that the show is a vendor love-in, you would be wrong.  The Gang for this show also included Andrew Nagy, lead developer and passionate promoter of VuFind the Open Source Library OPAC, and the well known watcher of, and commentator on, the Library Systems world, Marshall Breeding.  Appropriately the show guest was John Mark Ockerbloom who is chair for the DLF’s working group.

During the show it was obvious that all were enthusiastic about the initiative, whilst in agreement that these first baby-steps to opening up access to library systems should be implemented  widely as soon as possible.

This third show consolidates the position of the Gang as the monthly listen for those interested in libraries and the technologies that influence them.  As Gang host it is my goal to foster open conversations between vendors, their customers, and opinion formers in the library market.  I know, as an Evangelist employed by Talis, that some initially viewed this with some skepticism.  All three shows so far, I believe, demonstrate that open conversations between open-minded players in our world both move things forward and make for an interesting and informative listen.


Evangelist or Guru?

According to the musings of Boris Zetterlund on Axiell’s recently introduced blog, you are reading a blog post written by "a prejudiced, narrow minded, message driven, company edged fan boy".  Wow! I really must get a tee-shirt made up with that emblazoned across the chest!

I’ve been asked many times what a Technology Evangelist, especially one working for Talis, is and/or does.  Strangely enough I have never used the Boris definition as a reply.

Boris thinks the term guru would be more appropriate for somebody who has the relative knowledge to be an authority of foresight in their area.  He quotes the venerable Wikipedia in support of his definition:

In a further Western metaphorical extension, guru is used to refer to a person who has authority because of his or her perceived secular knowledge or skills.

As an example of a possible library technology guru he points to the excellent Marshall Breeding, of Library Technology Guides fame.  I’ve got a lot of respect for Marshall; he provides a great service to libraries, and I enjoy his presentations.  His session at the Talis Insight Conference last year [Video selectable from conference program] was received really well, opening many eyes as to the influence of Open Source on our industry.  So leaning towards a guru he is, but I wouldn’t class him as an evangelist.

Following through on the religious metaphor (which is no doubt dangerous: using religious references to make points about secular issues), I see a guru as an expert in his or her particular bounded area of interest, someone you would metaphorically go to for expert insight into that field.  An evangelist, on the other hand, is someone whose role in life is to spread the word about new ways of doing, looking at, or thinking about things.

I know that it is probably a generalisation, but I believe gurus act in an introverted way, confining their interactions to those that already believe in what they are expert in and want to know more.  Evangelists, by contrast, are extroverts, spreading the word about what they are passionate about to those that have not heard it yet.

Having said all that, there are in my experience two types of evangelist in the technology world, and one of those could possibly attract the company edged fan boy label that Boris uses.  We’ve all seen them promoting the religion of their company’s product portfolio, rather than the generic architectural/philosophical/technical/standards-based approach used to develop it.

Don’t get me wrong: if I believe Talis has a good, the best, or even the only, example of something I believe our industry should be taking notice of, I will not shy away from saying so.  But blindly promoting only things that your company or project produces in the end becomes self-defeating.

We operate in a community, and open conversation and debate is the only real way forward.  The most valuable part of the evangelist’s role is not necessarily the presentation or blog post; it is the debate, discussion, and comments that they generate.

I could prattle on for ages on this, but the bottom line is that this evangelist is an evangelist, not a guru, and definitely not a prejudiced, narrow minded, message driven, company edged fan boy!

