Panlibus Blog

Archive for the 'Resource sharing' Category

RIN’s Michael Jubb Talks with Talis about bibliographic records in a networked world

Dr Michael Jubb, Director of the Research Information Network, is my guest for this podcast.

The RIN was established by the higher education funding councils, the research councils, and the national libraries in the UK to investigate the efficiency and effectiveness of the information services provided for the UK research community.

As part of their role, they publish many reports to inform and create debate that leads to real change.  Our conversation focuses on the recently published “Creating catalogues: bibliographic records in a networked world”, which explores the production and supply chain of bibliographic metadata for physical and electronic books, journals, and articles.  We discuss the need for the report – and therefore for change in this area – its recommendations, and possible ways forward.

OCLC’s Andrew Pace Talks with Talis about Web-Scale ILS

To find out about OCLC’s move into providing hosted, Web-scale, Software as a Service functionality for managing libraries, who better to ask than the person responsible for the programme.

Andrew Pace, Executive Director, Networked Library Services, has been working on this for the last fifteen months, and as you can hear from our conversation he is pleased that he can now talk openly about it.

Our wide-ranging conversation takes us from the epiphany moment when Andrew announced he wanted to be a librarian through to the strategic and architectural decisions behind this significant OCLC initiative.

Andrew’s answers to my questions add depth and background to the brief details so far released in his blog posts and OCLC’s press releases.

Talking to Herbert van de Sompel about repositories

Over on our Xiphos blog, I’ve just published a podcast conversation I had with Herbert van de Sompel earlier this week.

It’s a nice example of the synergies between issues discussed here on Panlibus and those we’re exploring within Project Xiphos. Have a listen, and see what you think.

Google Book Search – not so free with their jacket images

When, on the April Library 2.0 Gang, Tim Spalding asked Google Product Manager Frances Haugen about the uses of Google data, specifically book jacket images received via their new API, we got the impression that there were no restrictions on using them for display in your OPAC.

As Tim posted last week, things seem to have changed:

A few months ago when the Google Book Search API came out, I was among the first to notice that GBS covers could be used to deck out library catalogs (OPACs) with covers, potentially bypassing other providers, like Amazon and Syndetics. I subsequently promoted the idea loudly on a Talis podcast, where a Google representative ducked licensing questions, giving what seemed like tacit approval.

It seemed so great–free covers for all. Unfortunately, it now seems that it was too good to be true. At a minimum, the whole thing is thrown into confusion.

Tim was contacted by ‘a major cover supplier’ saying that a large percentage of the Google covers were, in fact, licensed to Google by them, and that they never intended this to be a "back door" to their covers, undermining their core business – oops!

This, coupled with the recent alteration to the Amazon Web Services customer agreement:

5.1.3. You are not permitted to use Amazon Associates Web Service with any Application or for any use that does not have, as its principal purpose, driving traffic to the Amazon Website and driving sales of products and services on the Amazon Website.

… means that those looking for a free source of book jackets will have to look elsewhere.


OCLC announce more links with Google

From the press release:

DUBLIN, Ohio, USA, 19 May 2008—OCLC and Google Inc. have signed an agreement to exchange data that will facilitate the discovery of library collections through Google search services.

Digging into the detail, it looks like this will mean a few things.  Apart from Google Book Search providing a “Find this book in a library” link, as is already available from other parts of Google, it appears to be relevant only to OCLC member libraries which also participate in the Google Book Search program.

It means that these libraries are able to share with Google the MARC records for the books they have contributed [to Google Book Search], making those books easier to discover.  I am not clear whether OCLC rules prevented this sharing from happening prior to the agreement.

Implicitly this also means that Google, at least in the Book Search team, recognise the value of metadata created by the library profession for making books more discoverable.  This is something the library community has been saying for a long time – parsing and indexing the content is only part of the solution to making books findable.

Also in the press release:

The new agreement enables OCLC to create MARC records describing the Google digitized books from OCLC member libraries and to link to them.

OCLC should therefore be creating catalogue records for the digitized books held by Google.   This means that a search in WorldCat will direct a searcher to the digitized manifestation as well as to the library that contributed it.   A great way to gain wider exposure for a library’s collection without necessarily increasing the number of people through its doors.

To enable OCLC to create catalogue records for items in the Google Book Search collection, Google must, I presume, have made some commitment to creating and maintaining a permanent URI for each digitized book.  I wonder whether those URIs are generally available, with a commitment to maintain them, in a way that would let others reliably catalogue them too.

The announcement is one of a continuing series of additions to the Google Book Search service, such as the recent release of their API.

Listening to Google Product Manager, Frances Haugen in her guest slot on the Library 2.0 Gang, it is obvious that at least one person in the Book Search team is interested in and motivated by libraries – let’s hope we see even more links between them and the wider library community.


The Iron Fist of Interoperability

“Any customer can have a car painted any colour that he wants so long as it is black.”  — Henry Ford, My Life and Work (1922)

It’s not easy to get people to agree on anything.  This is why joining a standards committee is generally considered something akin to being sent to a Siberian work camp.  Every party has competing priorities, ideals and agendas, and the resulting work generally comes out worse for wear after all of the stretching and mending from all sides.

At the same time, without some kind of agreement, there is no hope of adoption.  It is really an unenviable position.

So with this in mind, I have great awe and respect for the Digital Library Federation ILS and Discovery System Task Force for wading into these waters.  While the task force is not a standards body, per se, the fact that they are trying to promote interoperability through the recommendation of a specification makes it seem like splitting hairs not to cast them in the same lot.

The recommendation draft is certainly welcome to library developers who have been craving something, anything, to help unlock the data from their proprietary systems (even though, as Marshall Breeding pointed out in the Library 2.0 Gang podcast on the subject, it’s about a year late).  The current draft lays out desirable pseudocode-type methods and then gives options of existing, off-the-shelf standards and protocols that could be used to enable the functionality that is defined.

The problem here is that they generally give multiple options for achieving the goal of any given method.  So this means that any ILS vendor can choose from a variety of protocols for implementing the spec and that a different vendor can choose alternate standards for the exact same functionality.  The most striking example of this would be a GetAvailability service (basically, “what is the current status and location of a given item”) which the recommendation says could be implemented via NCIP, a REST interface, SRU or OpenURL.

The point of a standardized API isn’t to make it simple for the vendors to implement.  The point is to make it simple for developers to implement.  The more options that the developer has to account for, the more complicated the client library must be to access the API.  This then gets to be rather chicken-and-egg.  If there are no programming-language-specific client libraries to access the API, there will be a slower rate of adoption (especially if non-library programmers have to learn the basics of SRU, OpenURL or NCIP).  If the spec is too complicated or allows for too much variation in protocols or output formats, it will be hard to find volunteers to build the domain-specific languages that help spread the proliferation of library data into other kinds of applications.

This is not an argument against using library standards.  An OpenURL interface on a library catalog to resolve holdings data would be incredibly useful.  Being able to harvest bibliographic records via OAI-PMH seems like a no-brainer.
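That no-brainer really is simple from the client side. As a rough illustration (the endpoint URL and `metadataPrefix` value below are invented, not taken from any real catalogue), an incremental OAI-PMH harvest is little more than a GET request built from a handful of well-known parameters:

```python
# Hypothetical sketch: constructing an OAI-PMH ListRecords request.
# The base URL and metadataPrefix are made up for illustration; any
# OAI-PMH-conformant server accepts requests of this general shape.

from urllib.parse import urlencode

base = "https://opac.example.org/oai"   # hypothetical endpoint
params = {
    "verb": "ListRecords",              # harvest full records
    "metadataPrefix": "marc21",         # assumed format name
    "from": "2008-01-01",               # incremental harvest since a date
}

url = f"{base}?{urlencode(params)}"
print(url)
```

A harvester then just follows `resumptionToken`s from each response page until none remains – which is exactly why exposing bibliographic records this way is such an easy win.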

However, it’s the combination of these where things begin to break down.  Imagine this hypothetical scenario:

  • SirsiDynix conforms to the recommendation by providing an OAI-PMH interface to their bibliographic and authority records and NCIP for item availability.
  • VTLS conforms to the recommendation by providing an SRU interface that has the capability of exposing all bibliographic records with the option to filter by date and a proprietary RESTful interface for item availability.
  • Talis conforms to the recommendation by providing a SPARQL endpoint for bibliographic records and a proprietary RESTful interface for item availability (with different syntax/responses than VTLS).
  • Ex Libris conforms to the recommendation by providing OpenURL interfaces for bibliographic, authority and holdings records.

These are entirely fictitious, of course, and somewhat facetious.  I wouldn’t expect Ex Libris to use OpenURL for everything, nor would Talis just say, “here’s SPARQL, have at it!”.  After all, Platform stores already have an OAI-PMH interface.  Replace the names with any other vendor’s; the point is that they could all do the above and each claim compliance.

Now imagine being a developer trying to write an application that uses data from the ILS.  Maybe the main library has a Voyager catalog and the law library uses Innovative.  Maybe the library is part of a consortium with libraries that have Aleph, Polaris and Unicorn.  Now let’s say that the developer doesn’t actually work for the library, but for the central IT department, and he or she is trying to integrate these services into courseware or a city government portal.  If all of these disparate systems use different protocols to access the same kinds of data, the odds lessen greatly that many of these catalogs will ever make it to the portal or VLE.
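To make that pain concrete, here is a hypothetical Python sketch of what such a developer ends up writing: one request builder per protocol, just to ask the same “is this item available?” question. The endpoint paths, parameter choices and message shapes below are simplified illustrations, not copied from any vendor’s documentation or from the specifications themselves:

```python
# Hypothetical sketch: one portal, three protocols, one question.
# Everything here (paths, parameters, XML shape) is invented for
# illustration of the maintenance burden, not real vendor APIs.

from urllib.parse import urlencode


def build_availability_request(protocol, base_url, item_id):
    """Return (method, url, body) for a single item-availability lookup."""
    if protocol == "ncip":
        # NCIP is message-oriented: POST an XML LookupItem document.
        body = (
            "<NCIPMessage><LookupItem><ItemId>"
            f"<ItemIdentifierValue>{item_id}</ItemIdentifierValue>"
            "</ItemId></LookupItem></NCIPMessage>"
        )
        return "POST", base_url, body
    if protocol == "sru":
        # SRU is a GET carrying a CQL query string.
        qs = urlencode({"version": "1.1",
                        "operation": "searchRetrieve",
                        "query": f'rec.id="{item_id}"'})
        return "GET", f"{base_url}?{qs}", None
    if protocol == "openurl":
        # OpenURL encodes the request as key-value ContextObject fields.
        qs = urlencode({"url_ver": "Z39.88-2004",
                        "rft_id": item_id})
        return "GET", f"{base_url}?{qs}", None
    raise ValueError(f"no client code written for {protocol!r}")


# Three catalogues, three code paths -- and three response formats
# still left to parse after this.
for proto, endpoint in [("ncip", "https://main.example.edu/ncip"),
                        ("sru", "https://law.example.edu/sru"),
                        ("openurl", "https://consortium.example.org/resolve")]:
    method, url, body = build_availability_request(proto, endpoint, "b1234")
    print(proto, method)
```

And this sketch only builds the requests; each branch also returns a different XML vocabulary that the portal must parse separately.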

With the Jangle project, we’re trying to eliminate as much of this flexibility in implementation as possible.  It is a difficult balance, certainly, to prescribe a consistent interface while also accounting for extensibility.  But the point here is consistency.  One of the reasons we chose the Atom Publishing Protocol to interact with Jangle is because we think it will provide the lower level functionality needed to manage the data in a simple and consistent way.  On top of the AtomPub interface, simple OAI-PMH, OpenURL or NCIP servers can be built, using the AtomPub REST interface, to ensure that our library services can interact with existing library standards based applications.  At the same time, developers can use common client libraries (such as those for Google’s GData or WordPress, for example) to have a congruous means of accessing different kinds of data and services.  By only allowing Atom, we can focus on interacting with the data instead of requiring developers to focus on the protocols.
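As a rough sketch of what “only Atom” buys the client side, the same few lines of standard-library Python can walk any Atom feed of entries, whatever system sits behind it. The feed below is fabricated for illustration – Jangle’s real payloads will differ – but the point is that this one parser serves every backend:

```python
# Hypothetical sketch: one Atom parser for every data source.
# The feed content is invented; only the Atom envelope matters here.

import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace

feed_xml = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Items</title>
  <entry>
    <title>Copy 1 of My Life and Work</title>
    <id>urn:x-example:item/42</id>
    <updated>2008-06-01T12:00:00Z</updated>
  </entry>
  <entry>
    <title>Copy 2 of My Life and Work</title>
    <id>urn:x-example:item/43</id>
    <updated>2008-06-01T12:00:00Z</updated>
  </entry>
</feed>"""

root = ET.fromstring(feed_xml)
titles = [entry.findtext(f"{ATOM}title")
          for entry in root.findall(f"{ATOM}entry")]
print(titles)
```

Swap the feed URL and the same client code lists bibliographic records, items or patrons alike – which is precisely the consistency argument being made for AtomPub here.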

After all, sometimes to get from point A to point B, you just need a car.  The color doesn’t matter all that much.

A Source of value


I’ve blogged before on the importance of libraries – and those who provide them with their systems – doing more to minimise the barriers to use and reuse of the publicly funded data they hold. This is an issue that is currently exercising the Technology team over at The Guardian, with their ‘Free our Data’ campaign, and it is an issue with which we grapple here at Talis.

With Source, we’re sweeping away the frankly ridiculous model whereby libraries pay to share basic data about the books they hold with one another, and then pay again for the privilege of being allowed to look in the shared pool; a tradition that has held sway for far too long, and one that we are encouraging people to raise questions about. With research prototypes such as Whisper, we go a stage further and demonstrate ways in which everyone stands to benefit from access to such a pool of data; it has value beyond the facilitation of interlending of books between libraries. With the TDN, we are facilitating a community around shared innovation atop the Talis Platform, and the extensive documentation about to be released around our first set of Platform web services will enable anyone to begin realising some of that potential for themselves.

Over on our new Source blog, my colleague Fiona Leslie discusses the public release of scripts that will allow any library with one of our library systems to painlessly and regularly export data for sharing with a data pool such as that underlying the growing Talis Platform. In the same post, she highlights our engagement with our peers elsewhere in the industry, and trails the welcome news that many of those peers share our view on that which is right and that which is so clearly wrong here. Sadly, some with a vested interest in grimly hanging on to their unfair and innovation-stifling practices of charging, charging, and charging yet again are less willing to contemplate change. We look forward to continuing the debate, and welcome further opportunities to assist customers of these dinosaurs in persuading them to change their ways. You may very well have sound reasons for selecting them and their systems to meet other needs in your library, but there can surely be no reason for such restrictive practices to be tolerated in this day and age.

Keep an eye on the Source blog for further announcements as we work with like-minded organisations across the library space to ensure that the public funds spent on procuring library materials and the data recording them have maximum impact, and that any library – anywhere – is able to share public information such as that pertaining to their holdings with any third party they wish, certainly without penalty and ideally with the assistance of their software supplier.


Resource Discovery and Sharing – exploring the benefits of a Platform approach

We’re holding the fifth of our successful Talis Research Days on Tuesday 9 May.

This event focuses upon issues related to resource discovery and sharing (Inter Library Loan, for example). Participants will explore ways in which a freely available (and rapidly growing) body of holdings data, aligned with a robust technology Platform and the ideas of Web 2.0 can begin to break down some of the increasingly pointless silos with which our domain is overly replete, and deliver comprehensive services from which libraries, vendors and end users can all benefit.

If this is an area that you’re interested in exploring, please get in touch. These events have always included representatives of other vendors, so why not agitate to get yours along?

The formal invitation is included, below.

Talis Research Day 5

Building Resource Discovery and Sharing Systems on the Talis Platform

Talis is pleased to announce its next Research Day, to be held at our offices in Birmingham on Tuesday May 9th, 2006.

This free, invitation only seminar will bring together those interested in commissioning, building, and using resource discovery and sharing systems, to hear about and discuss the latest exciting development in this area. This event will examine this changing landscape within the wider context of technological advancements in Web 2.0, Web Services and the Talis Platform.

More specifically, we will explore key issues including:

  • How resource discovery projects have been designed, built, and funded in the past
  • How a shared, Web2.0 platform approach dramatically reduces the costs and complexity of such systems
  • How it is now possible for ILL modules to access discovery services from the Platform
  • The challenges around libraries’ ability to easily contribute their holdings to multiple union catalogues

This event will be of particular interest to two groups:

  • ILL and LMS vendors who need to understand how to make their applications interoperate with services from the Talis Platform
  • Consortia and regional associations, whether you are considering a new shared resource discovery system, or refreshing an existing system. If you are a consultant or developer involved in this field you will also find this seminar useful.

This event is an opportunity to understand the possibilities of applying the Talis Platform to the challenges of resource discovery and sharing. During the day we will present some real applications that have been built on the Talis Platform, including the recently announced Talis Source national resource discovery and sharing solution.

For further information, please contact


Talis Source a new name for resource discovery

Talis have announced a new name in resource sharing and with it comes a new direction. Talis Source will launch on 19th May 2006, when the current contract to provide a system for The Combined Regions expires. The announcement comes after undertaking a major strategic review of the area. As a result, the decision was taken to embark on a long-term strategy of basing our next generation resource discovery and sharing solutions on a new software platform. But what does all that actually mean for libraries?


JISC seeks to review Resource Discovery Services


The Joint Information Systems Committee (JISC) here in the UK, a principal funder of my old role at the Common Information Environment, has issued an Invitation To Tender (ITT) for a strategic review of their Resource Discovery Services;

“The purpose of the review is twofold: it will assess potential for convergence and economies of scale in the underlying technologies used to deliver the services; it will look at how the services may be enhanced with new technologies in order to support flexible integration across the services and within the wider resource discovery environment such as library management systems, digital repositories or search engines.

It is anticipated that the review will begin in April 2006 and will provide the evaluative report by the middle of May 2006. A final advisory report will be expected by the end of July 2006. The recommendations from the review will inform decisions about the provision of services and also development activity that might be required to enhance the services or their use.”

Specifically, the Review is meant to consider the Archives Hub, COPAC, SUNCAT and Zetoc, and to consider related non-service activities such as the central OpenURL Router, GetRef, and others.

I wonder about the extent to which end users continue to find the distinction between the types of resources offered by these services and those from something like the Resource Discovery Network (RDN) sensible? Is it time to go further than ‘simply’ seeking “convergence and economies of scale” across such similar services as these? Indeed, before I opened the Word document, I’d assumed that the ITT probably was for (yet another) review of the RDN…

Discovering RDN-type resources (web pages, etc) probably needs to fall within scope somewhere along the line. As, surely, does request fulfillment. If I find a book in COPAC or a journal in SUNCAT or Zetoc, how do I quickly, easily, and affordably lay my hands on the thing? How can we leverage the stock sitting idle in libraries around the country? How can we leverage request data to ensure that out-of-print materials that are in demand get reprinted? How can we offer informed choices to searchers about the availability and relative merits of borrowing, buying new, and buying second hand? How do we make it easier, more efficient, and more reliable to shop around?

A few of the terms of reference do appear to point in particularly interesting directions, however;

“whether there is potential for increased takeup of services for example by opening them for contributions across the community”

Participation in the population of national infrastructure. Now that could be interesting. It would also be valuable to look beyond JISC and the mentioned stakeholders (the BL and RIN) to think about how some of these services might become embedded in other domains. I spent some time in my last job, for example, exploring whether or not Zetoc might sensibly be opened up to public libraries, taking a short-term revenue hit on subscriptions to the charged equivalent in order to deliver a better service and – I reckoned – to actually earn loads more in terms of document supply transactions down the line. It still seems like a good idea, but the people I talked to at the BL were never convinced. Short-sighted money grabbing, or a more profound grasp of the issues than I managed?

“how the services might be further enhanced by working with others or by making use of provision offered by other organisations for example OCLC, library systems vendors such as TALIS or search engine providers”

And, presumably, questions about whether or not one of these is actually going to solve the problem for your users, whether you build something else or not. Google, for example, could throw a lot of effort behind any number of these problems. So could some of the others mentioned. At Talis, we’d much prefer a cooperative approach by which we all get to recoup far higher levels of our various ways of measuring value on value-add services, rather than wasting resources on building innumerable competing Big Buckets of essentially value-light data. The value is shifting. We all just need to work out where it’s going, and at least keep up with it.

So we’d certainly like to talk to whoever does the work.

“whether the services can be offered in a modular fashion in order that they might for example be integrated within other systems provided to learners and researchers, such as virtual learning environments (VLEs), library portals or catalogues or other examples of managed learning environments (MLEs)”

Web Services. APIs. Portlets. Yes!!!

“how services might better support personalisation, in particular with reference to the JISC personalisation report”

Some of them could start by supporting it at all. It’s a real shame that some of the good work in the Personalisation report hasn’t been more widely picked up upon.

If you’re interested, you have until 9 March to submit a proposal.
