Panlibus Blog

Archive for the 'Standards' Category

JISC seeks to review Resource Discovery Services

The Joint Information Systems Committee (JISC) here in the UK, a principal funder of my old role at the Common Information Environment, has issued an Invitation To Tender (ITT) for a strategic review of their Resource Discovery Services:

“The purpose of the review is twofold: it will assess potential for convergence and economies of scale in the underlying technologies used to deliver the services; it will look at how the services may be enhanced with new technologies in order to support flexible integration across the services and within the wider resource discovery environment such as library management systems, digital repositories or search engines.

It is anticipated that the review will begin in April 2006 and will provide the evaluative report by the middle of May 2006. A final advisory report will be expected by the end of July 2006. The recommendations from the review will inform decisions about the provision of services and also development activity that might be required to enhance the services or their use.”

Specifically, the Review is meant to consider the Archives Hub, COPAC, SUNCAT and Zetoc, along with related non-service activities such as the central OpenURL Router, GetRef, and others.

I wonder to what extent end users still find the distinction between the types of resources offered by these services and those from something like the Resource Discovery Network (RDN) sensible. Is it time to go further than ‘simply’ seeking “convergence and economies of scale” across such similar services as these? Indeed, before I opened the Word document, I’d assumed that the ITT probably was for (yet another) review of the RDN…

Discovering RDN-type resources (web pages, etc) probably needs to fall within scope somewhere along the line. As, surely, does request fulfillment. If I find a book in COPAC or a journal in SUNCAT or Zetoc, how do I quickly, easily, and affordably lay my hands on the thing? How can we leverage the stock sat idle in libraries around the country? How can we leverage request data to ensure that out-of-print materials that are in demand get reprinted? How can we offer informed choices to searchers about the availability and relative merits of borrowing, buying new, and buying second hand? How do we make it easier and more efficient and reliable to shop around?

A few of the terms of reference do appear to point in particularly interesting directions, however:

“whether there is potential for increased takeup of services for example by opening them for contributions across the community”

Participation in the population of national infrastructure. Now that could be interesting. It would also be valuable to look beyond JISC and the mentioned stakeholders (the BL and RIN) to think about how some of these services might become embedded in other domains. I spent some time in my last job, for example, exploring whether or not Zetoc might sensibly be opened up to public libraries, taking a short-term revenue hit on subscriptions to the charged equivalent in order to deliver a better service and – I reckoned – to actually earn loads more in terms of document supply transactions down the line. It still seems like a good idea, but the people I talked to at the BL were never convinced. Short-sighted money grabbing, or a more profound grasp of the issues than I managed?

“how the services might be further enhanced by working with others or by making use of provision offered by other organisations for example OCLC, library systems vendors such as TALIS or search engine providers”

And, presumably, questions about whether or not one of these is actually going to solve the problem for your users, whether you build something else or not. Google, for example, could throw a lot of effort behind any number of these problems. So could some of the others mentioned. At Talis, we’d much prefer a cooperative approach in which we all recoup far more value (however each of us chooses to measure it) from value-add services, rather than wasting resources on building innumerable competing Big Buckets of essentially value-light data. The value is shifting. We all just need to work out where it’s going, and at least keep up with it.

So we’d certainly like to talk to whoever does the work.

“whether the services can be offered in a modular fashion in order that they might for example be integrated within other systems provided to learners and researchers, such as virtual learning environments (VLEs), library portals or catalogues or other examples of managed learning environments (MLEs)”

Web Services. APIs. Portlets. Yes!!!

“how services might better support personalisation, in particular with reference to the JISC personalisation report”

Some of them could start by supporting it at all. It’s a real shame that some of the good work in the Personalisation report hasn’t been more widely picked up on.

If you’re interested, you have until 9 March to submit a proposal.

Jon Udell does it again

The first time I saw LibraryLookup, to be honest, it blew me away. It was wonderful. It was (apparently) so simple, yet it bridged a yawning chasm between the world of lousy interfaces and free books, and the world of Amazon and its ilk.

And it wasn’t built by a library systems company, or a librarian. It was assembled by Jon Udell, the first superpatron of whom I became truly aware.

And yet, even the brilliance that was LibraryLookup wasn’t perfect. It needed an individual to install it within their web browser before it worked. It needed a library to register information about its query interfaces. And it needed someone, somewhere, to be watching all the time, for the all-too-frequent occasions when libraries (or their vendors) tweaked something, didn’t tell anyone, and remote services such as LibraryLookup lost their ability to see inside the library.

As Richard Wallis reported back in June, slotting our Directory into the process helped (a bit) with the second problem, and (a lot) with the third, so that was progress.

In the meantime, a host of others began to offer innovative and functional tweaks to systems, either changing things from inside their library, as John Blyberg and Dave Pattern have been doing, or working with what was available outside, as another superpatron, Ed Vielmetti, has so consistently demonstrated.

Jon Udell has just done it again, offering a mechanism to link between books on an Amazon wishlist and books held in the local library.

These hacks and tweaks and modifications are all excellent, and most definitely point towards functions that it would be beneficial to make much more widely available. And they make it all look so easy. We’ve been showing some of the similar functions we’ve been exploring, albeit from the other end, within Whisper.

Efforts such as ours around the Talis Platform and Talis Developer Network are part of looking for scalable and sustainable futures for these individual acts of creativity and innovation. The bar is currently too high. You need to be Jon Udell or Ed Vielmetti or John Blyberg or Dave Pattern. You shouldn’t have to be that good to do so much of this, and it should be easier to take what they’ve done for their library and deploy it in your own.

Just as a platform like Windows prevents programmers from having to know how to make a disk spin, or exactly how to light up a particular point on the screen, so a Library Platform makes it possible to consistently and reliably build new and innovative applications on top of a rich and generic set of library services, which these new applications simply have to call.

Key to this has to be more effective use of the standards we have, and more rapid agreement of the standards we still need. It’s easy enough, perhaps, for John or Dave or Jon or Ed to knock something together to meet a particular set of needs at a given point in time, on a specific system. To scale, and to last, and to translate, a greater degree of standardisation is required.

We’re working towards that, and we’re seeking to be inclusive. The Platform mustn’t require you to work for Talis, and it mustn’t require you to take a Talis ILS/LMS to gain the benefits. That’s our vision, and that’s where we’re going.

Want to come too?

Just RESTing

John Blyberg over at AADL posted last week about the REST interface he has grafted on to his OPAC.

Great piece of work, John.

Spurred on by the efforts over at Ann Arbor, Dave Pattern of the University of Huddersfield has produced something similar. Great piece of work, Dave, especially the staff OPAC examples you reference, which orchestrate other web services together to add value from xISBN, jacket images, loan history, etc.

I suspect it will not be long before this list of great pieces of work grows and grows.

Whilst these are excellent and to be welcomed, because of all the applications and plug-ins that can and will be built upon these services, can I just shout “hang on a minute” before all the REST-enabled OPAC implementations stampede off in a Library 2.0 cloud of dust?

Looking at the examples provided by Dave [http://161.112.232.203:4128/rest/keyword/author/tolkien] and John [http://www.aadl.org/cat/rest/search/X?Harry%20Potter&searchscope=26&m=&SORT=D], it soon becomes very clear that they have both created XML schemas for their output that differ wildly, and URL query syntaxes that look very different. Their approaches to getting data also differ. From the AADL interface you get a list of record IDs and an xlink to follow to get the full record. From the Huddersfield interface you get an XML structure containing brief details, plus xlinks to the full record. Apart from being different from each other, neither seems to have much in common with any REST/data standards already out there, such as SRU, OpenURL, Dublin Core, and OpenSearch.

The key reason for the explosion of Web One and Web Two [with a nod towards Dr. Seuss], which underpins the technology part of Library 2.0, is the emergence of simple standards (HTTP, HTML, XML, RSS, et al.). So can I encourage John, Dave, and those that follow them to take a look at these standards, as we have (the Talis Prism OPAC has supported OpenURL searching from its inception, and we demonstrated OpenSearch capability a while back).

If both had used OpenSearch as their interface, there would be a whole world of clients for their OPACs when IE7 is released with its OpenSearch capability.

There will now follow a heated debate about the various merits of OpenSearch, OpenURL, and SRU. But that’s another story…

Just look at it from the point of view of an AJAX jockey producing a wizzo interface that mashes in AADL’s REST interface. Then imagine his response when asked to do the same for Huddersfield’s, and another, and another.
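
To make that pain concrete, here is a minimal sketch. The element and attribute names are invented for illustration, since neither schema is standard; the shape of the problem is what matters — every new OPAC means another bespoke parser:

```python
# A sketch of the integration pain: one bespoke parser per OPAC.
# Element and attribute names are invented for illustration; the real
# schemas at AADL and Huddersfield simply differ, which is the point.
from xml.dom.minidom import parseString

def parse_aadl(xml_text):
    # AADL-style response: record IDs, each with an xlink to the full record.
    doc = parseString(xml_text)
    return [(r.getAttribute("id"), r.getAttribute("xlink:href"))
            for r in doc.getElementsByTagName("record")]

def parse_huddersfield(xml_text):
    # Huddersfield-style response: brief details inline, plus xlinks.
    doc = parseString(xml_text)
    return [(i.getElementsByTagName("title")[0].firstChild.nodeValue,
             i.getAttribute("xlink:href"))
            for i in doc.getElementsByTagName("item")]

# ...and a third parser for the next OPAC, and a fourth, and so on.
```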

Keep up the good work, guys, but maybe a bit of discussion and agreement around standards, formal or informal, would greatly ease the use of library services by the communities we want to reach out to.

CrossRef adds Web Services

Amongst a set of eleven topics with reasonably current draft posts (some of which I think I’ll have to give up on and just delete…) sitting in ecto, this recent press release from CrossRef is worth bubbling to the top of the pile, as it can be dealt with briefly.

“CrossRef Web Services will create an easy-to-use tool for authorized search and web services partners to gather metadata to streamline web crawling. The CrossRef metadata database contains records for the more than 18 million items from over 1,500 publishers, the majority of whom are expected to choose to participate in this optional new service. The CrossRef Search Partner program provides standard terms of use for search engines, libraries, and other partners to use the metadata available from CrossRef Web Services – terms that promote copyright compliance and the important role published works of record play in scholarly communication.

CrossRef Web Services also provides search partners with a map to published scholarly content on the Web. In this way, it functions as a notification, or ‘ping’, mechanism for the publication of new content. Alerting crawlers to new or revised content to be indexed greatly reduces the need for ongoing re-crawling of publisher sites. ‘Search engines want better ways to gather standard, cross-publisher metadata to enhance their search indexes. Publishers want to streamline the way they provide metadata to a growing number of partners. CrossRef Web Services and the Search Partner Program fill this void,’ said Ed Pentz, Executive Director of CrossRef. ‘With CrossRef repurposing parts of its metadata database and using common protocols like standardized XML and OpenURL (and SOAP, RSS and other protocols in future), these services can significantly enhance indexes.’”

Outside the internal systems of big publishers like Wiley, where Digital Object Identifiers (DOIs) play a largely invisible role in tying the whole thing together, the DOI hasn’t really gained the traction that I initially believed it would. This seems a shame, and is presumably largely due to the financial models currently deployed in supporting the central organisation and associated services such as CrossRef.

The relative complexity or effort involved in deploying DOIs within a system such as an ILS and then making them actionable must also play a significant role here, though. As such, this statement of intent must surely be welcomed as a useful step towards making the value of the DOI more easily realised in a variety of contexts and through a range of interfaces.
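
To illustrate what “actionable” means in practice — and this is only a sketch, not any particular ILS’s implementation — a stored DOI becomes a clickable link simply by prefixing it with the address of the central resolver, which redirects to the publisher’s copy of the item:

```python
# A sketch only: turning a bare DOI into an actionable link via the
# central DOI resolver, which redirects to the publisher's copy.
from urllib.parse import quote

def doi_to_url(doi):
    # quote() escapes awkward characters but leaves the '/' intact.
    return "http://dx.doi.org/" + quote(doi)

# The DOI Handbook's own DOI, used here purely as an example:
print(doi_to_url("10.1000/182"))  # -> http://dx.doi.org/10.1000/182
```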

I’m still not sure about the business model, though…

The release was originally brought to my attention by Peter Scott, and is highly relevant, given some of the possibilities to be explored at our next Research Day, being organised with BIC.

Talking with Talis – Inviting your questions for Jim Michalko of RLG

I am recording a new Talking with Talis programme with James Michalko, President and CEO of RLG, on Tuesday 13 December.

Jim will be talking around a number of areas in which RLG are currently active, doubtless including their RedLightGreen service, and their membership of the new Open Content Alliance.

Quoting from the RLG web site,

“RLG supports researchers and learners worldwide by expanding access to research materials held in libraries, archives, and museums.

RLG works with and for its member organizations enhancing their ability to provide research resources. RLG designs and delivers innovative information discovery services, organizes collaborative programs, and takes an active role in creating and promoting relevant standards and practices.”

If you have any questions that you would like put to Jim, please send them to podcasts [at] talis [dot] com by Friday 9 December.

Talis whispers about Library 2.0 possibilities

Talis Whisper front page

Along with wider discussion of Library 2.0, such as that captured in our Do Libraries Matter white paper [PDF download], last week’s Insight conference included several examples of Library 2.0 concepts surfacing for real in forthcoming products and proof-of-concept demonstrations.

Attendees were able to see the next generation of Talis’ Prism OPAC, Prism 3, and to see – and hear – enrichments appearing alongside more traditional entries, as well as realising the ease with which the interface could be switched in order to fulfill different requirements.

An experimental Whisper (experimental, and best in Firefox for now) also attracted interest from attendees, out on the blogosphere, and even on Flickr (it wasn’t us!).

Whisper offers a visualisation of some of the ways in which library content might be aggregated with content from elsewhere in the library, from other library domain systems, or from elsewhere entirely in order to deliver rich and meaningful services to users.

As Lorcan spots, the Whisper interface builds upon the now well-established “MODELS verbs” of Discover, Locate, Request, Deliver, and offers a tabbed interface comprising Discover, Locate, Directory, Borrow and Monitor.

Discover pulls together bibliographic data, enrichments such as book jackets, holdings data from participating libraries, and pricing from Amazon. From a single screen, the user can find a book (assisted by smart suggestions as they type, drawn from the titles of actual items known to the system), discover whether or not it is available to borrow or buy and – for those systems already known to the Directory – link straight through to detailed information from the ILS (Talis or otherwise) of the holding library. By default, the system searches every book and library that it knows about, but this is easily altered either to search only for books that are actually available to borrow, or to search only your own library. It would be straightforward to expand a search of your own library, say, to take in only those nearby libraries likely to allow you access to the item.

Locate interprets the word literally, and uses Google Maps to display the locations of library branches, sorted by type. Selecting an individual library causes further details to pop up. For those libraries known to the Directory, a search entered here will be directed straight into the library’s own system. There are various ways in which such functionality might usefully deliver value to a range of different users, and it should be feasible to provide the types of segmentation and subsetting that real-world uses would require.

The Directory provides much of the power behind the applications being shown, and also now drives aspects of third-party systems outside Talis. The Directory recognises that information about libraries and their systems changes with depressing frequency, and that time-pressed library systems staff rarely manage to inform all those linking through to them of any change. With the Directory, however, it becomes a simple task for a change to be spotted and recorded once (by anyone with access, not just Talis or library staff), and for that change to propagate out to any services requiring the information. The scripts running behind the Google Maps mash-up on the Locate tab, for example, do not require knowledge of the URL for a given library’s OPAC in order to offer the search of that catalogue. All that the script needs to know is a way to identify any individual library, allowing it to pass that identification to the Directory and receive back information to allow the formulation of a query. Any other system inside or outside Talis should be able to do the same thing.
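
The Directory’s actual interface isn’t spelled out in this post, so the endpoint URL and field names in the following sketch are invented; what matters is the shape of the interaction, in which the client holds nothing but a library identifier and the Directory supplies the rest:

```python
# Hypothetical sketch of the interaction described above. The endpoint
# URL and field names are invented; the point is that the client knows
# only a library's identifier, and the Directory supplies the rest.
import json
from urllib.parse import quote
from urllib.request import urlopen

DIRECTORY = "http://directory.example.com/libraries/"  # invented endpoint

def opac_search_url(library_id, terms):
    # 1. Ask the Directory how to talk to this library's catalogue.
    info = json.load(urlopen(DIRECTORY + quote(library_id)))
    # 2. Formulate the query from the returned template. If the library
    #    moves its OPAC, only the Directory record needs changing.
    return info["search_template"].replace("{terms}", quote(terms))

print(opac_search_url("huddersfield", "tolkien"))
```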

Before you try it with your own library, it is worth noting that not all libraries listed via the Locate tab currently link through to the back-end library system. This is not some technical fault or major failing with the system. Rather, it is a reflection of the difficulty that anyone currently faces in building an accurate picture of libraries, their services, systems and capabilities. We are working to populate the Directory more fully, and welcome participation from customers and non-customers alike. Once more comprehensively populated, the Directory will be capable of powering a host of applications, from Talis and from anyone else able to consume the underlying services.

Borrow demonstrates the way in which an Inter-Library Loan request might be integrated into the offering, whilst Monitor again utilises the Directory, this time to poll known systems for their status.

Whisper draws together a range of functions that, individually, would actually benefit quite different people. With current models, it is unlikely that the same person would be finding out where their local library was, submitting a full-blown ILL request to a different library, and monitoring the availability of various library systems. Nevertheless, the technologies behind these functions, and the way in which they have been drawn together in this demonstration interface, certainly serve to enable innovative thinking around ways in which different user communities might be given access to a range of tools tailored to their requirements and powered by robust, easily updated and ubiquitously accessible pieces of Platform infrastructure such as the Directory.

Demonstration of Library 2.0 web services

Ian Davis also showed a more bare-bones view on the same services, in which the user could consciously and visibly enable and disable individual services. Lacking a recognisable ‘library’ interface, Ian’s demonstration underlined the point that these services might actually be surfaced anywhere, and in any combination, not just in an application that looks like an ‘obvious’ evolution from the OPAC.

Behind all of this lie web services and other systems constructed in accordance with current thinking around the most appropriate standards and specifications from W3C, NISO, OASIS and others. The technologies behind Whisper are far from closed and proprietary and, where appropriate, Talis is continuing its practice of engaging with the appropriate standards bodies in order to ensure that emerging specifications are shaped in the light of the experiences we are gaining from Whisper and other developments.

Playing nicely with the other side

In a post on RLG‘s blog, hangingtogether.org, Günter writes about the value of cooperation between the public and private sectors, and the difficulty of getting such cooperation off the ground.

As a recent mover from public to private, I can sympathise with his point, and would emphasise the value to both parties when we manage to get it right.

Günter is principally concerned with getting private sector partners to adopt and engage with a range of technical standards related to digital preservation, but his points are more widely applicable.

There are, of course, commercial organisations that engage actively and willingly with the standards process. Talis is one of those, and will become more so, but others such as Sun are also often to be found at the table.

In talking to some of my public sector peers after my intention to change job had been announced, I encountered many of the stereotypes to which Günter refers.

“How can Talis offer a platform upon which we can build? Surely they’re going to charge exorbitant fees as soon as we’re committed?”, went one line of thought.

“Talis is a vendor. They have no role in shaping future public services. We write the Invitation To Tender. They tender. We award them a contract to build what we tell them to.”, went another.

And, through much of it, “How are they going to make money out of all this? They must know…!”, was a common refrain.

There are many answers to these questions, and none of the serious ones are even remotely sinister. Talis is not out to lure poor unsuspecting libraries, learners and the rest into honey-trap services that appear too good to be true, only to turn round tomorrow and apply punitive commercial fees across the board.

A company like Talis has an interest in standards. They make our job easier. They make it easier for us to link to content and services provided by other people, in order to build better services for our customers. They make it easier for Talis’ developers to construct the core functions that our customers expect, in order to spend more time on the value-add that differentiates a Talis implementation of standard X from someone else’s. That’s good for us, and good for you.

A company like Talis has an interest in shaping the evolution of standards. This is not so that the standard ends up reflecting what our products can do. It is, however, about making sure that the pure ideals of the standard at least reflect reality… and about making sure that it will be possible to build systems that comply.

A company like Talis has an interest in seeing their technologies used in a wide range of situations, by large numbers of people. We learn from users’ experiences with our technology. We see increased capabilities amongst those users as they become familiar with possibilities, which in turn raises expectations for what the next generation of products should be able to offer. Maybe that is where the money is to be made.

Engaging with a company like Talis, which is thinking about this space, is undoubtedly good for Talis. It is also good for those with whom we cooperate in the public sector, whether they buy from us or not.

And if Günter or any of his colleagues from RLG (Hi Jim, Hi Merrilee!) want a chat, they know where we are… And, rather more usefully, a couple of us will be here next month…

Standards don’t just grow on trees

Lorcan Dempsey highlights the report of the NISO-commissioned Blue Ribbon Panel, whose brief was to advise NISO on its strategic planning process.

The report is now available [pdf] and, having read it, I can only echo Lorcan’s description that it:

makes compelling reading for anybody interested in how standards work is organized – or not organized – in our space.

The old joke that “the trouble with standards is that there are so many to choose from” is just as relevant today as it has always been. This report gives interesting insight into how NISO, a well-established industry-specific standards body, needs to look to itself and work out how it should operate in the current iteration of the ever-changing world in which we operate.

Is XML really a standard?

The Shifted Librarian writes, from a somewhat frustrated point of view, on using the XML-enabled backend to her Innovative catalogue to “just build what I want”:

I’m sometimes told that III has an XML backend so I should just be able to build what I want on my own. Of course, my first response (of many) is that I’m not a programmer so I can’t just build what I want, but Casey Bisson at Plymouth State University is, and he’s trying to build weird and wonderful things with his own Innovative catalog.

She quotes Casey:

…their XML schema is non-standard, is even more difficult to work with than MARC, and is prone to parsing errors. So here we have an ILS vendor that claims to have an XML backend you can do whatever you want with, except that it’s incredibly difficult to do whatever you want with it, especially if you want to do something nutty like integrate your catalog’s content into your university’s way cool portal

So, if her interpretation is correct, here is a classic example of “encoding your data using a standard, such as XML, does not necessarily make it ‘standard’ data”.

Exposing the catalogue encoded in XML is a powerful first step to enabling integration, but if the schema for that data is non-standard and difficult for people unfamiliar with the inner workings of the catalogue to work with, it is about as much use as a chocolate teapot.

The wide distribution of library functionality into non-library system interfaces is becoming an unstoppable trend, which in itself gives rise to many non-technical issues, which I discuss here. The implementers of these interfaces will in general have zero understanding of the inner workings of a library catalogue [why should they?], and most likely will not have the time or inclination to get their heads around library domain search protocols such as SRU, SRW, Z39.50, OpenURL, etc.

I suspect that in the not-too-distant future one of the most oft-repeated statements in integration discussions will be “Give me the URL for your OpenSearch Description Document and I’ll have it integrated in a jiffy!”.

Why OpenSearch? Well, it [clearly has the potential to become] a de facto simple search standard, and most importantly it is simple. Simple to understand, and simple to implement. I challenge anyone who can read XML to construct an OpenSearch search against our Talis Prism OPAC demonstrator in the address bar of their favourite browser within a few minutes of looking at the Url element of our OpenSearch Description Document. OK, the results come back as RSS ‘items’, but any implementer worth their salt will soon have those appearing in your portal.
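
For anyone who wants to take up that challenge programmatically rather than in the address bar, the steps look something like this. The description document’s address below is a placeholder (I won’t hard-code the demonstrator’s URL here), and the sketch assumes the OpenSearch 1.1 form in which the Url element carries a template attribute:

```python
# A sketch of the challenge: read the OpenSearch Description Document,
# pull out the search URL template, substitute the terms, fetch the RSS.
# The description document's address is a placeholder, and this assumes
# the OpenSearch 1.1 form where Url carries a 'template' attribute.
from urllib.parse import quote
from urllib.request import urlopen
from xml.dom.minidom import parseString

DESCRIPTION_DOC = "http://opac.example.com/opensearch.xml"  # placeholder

def opensearch(terms):
    desc = parseString(urlopen(DESCRIPTION_DOC).read())
    # e.g. template="http://opac.example.com/search?q={searchTerms}&format=rss"
    template = desc.getElementsByTagName("Url")[0].getAttribute("template")
    rss = parseString(urlopen(template.replace("{searchTerms}", quote(terms))).read())
    # The results come back as RSS items: a title and a link per hit.
    for item in rss.getElementsByTagName("item"):
        print(item.getElementsByTagName("title")[0].firstChild.nodeValue)

opensearch("harry potter")
```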

That’s what I mean by standard XML, even though OpenSearch has not, as yet, ventured anywhere near a standards organisation.

If library systems get the reputation of being difficult to integrate with, the portal implementers will end up bypassing them. It is up to the industry, both individually and in groups such as VIEWS, to do something about it. We at Talis intend to play our part in this.

Building an Information Infrastructure

I’ve been re-reading Theo van Veen’s D-Lib article Renewing the Information Infrastructure of the Koninklijke Bibliotheek about the proposals for the major renewal of the information infrastructure in the National Library of the Netherlands, which was recommended for reading by Lorcan Dempsey.

I agree with Lorcan: it is well worth looking at. Firstly, from a position of envy. Theo hints that they will be building on their current infrastructure, but the recommendations do have the flavour of a ‘clean sheet starting point’ about them. I’m always envious of those who can start from a clean sheet.

Mainly, though, it is the clarity of thought that is interesting:

– using XML throughout;
– using standard protocols (SRU) to search and retrieve everything;
– separating the data returned as a result from the processing required to use it;
– routing requests via an organisational resolver;
– using [and then building upon] Dublin Core as a minimum to allow generic search access;
– using only XSLT for creating “data driven” interfaces;
– using the power of the browser to render added-value interfaces (Ajax is worth a look in this regard);
– etc.

There is one assertion that recent experience makes me question:

In the recommendations URL-based requests have been assumed, but the proposed concept is also applicable in web services based on SOAP [17]. The implementation barrier for URL-based services is, however, assumed to be lower than for web services based on SOAP.

I shared that assumption until working on some WSDL-described SOAP services in both Java and .NET. Sitting back and watching the development environments automatically build all the utility classes I needed to create and interpret the XML payloads of the SOAP messages was a joy to behold! So, no, you can’t easily type a URL into your browser to test a SOAP call as you can with a URL-based service, but when you get around to building an application on top of the service, WSDL and SOAP pay back handsomely.

Wanting the best of all worlds, my recommendation is that wherever possible all Web Services should be made available via both URL and SOAP: the former for simple testing and integration, the latter to underpin serious application integration.
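
To show just how low the barrier is on the URL side, here is a sketch of an SRU-style searchRetrieve request. The endpoint is hypothetical; the whole “service call” is nothing more than a URL you could equally paste into a browser:

```python
# The URL half of the recommendation: an SRU-style searchRetrieve request
# is just a URL, equally pasteable into a browser for a quick test.
# The endpoint below is hypothetical.
from urllib.parse import urlencode
from urllib.request import urlopen

SRU_BASE = "http://sru.example.org/catalogue"  # hypothetical endpoint

params = urlencode({
    "operation": "searchRetrieve",      # standard SRU 1.1 parameters
    "version": "1.1",
    "query": 'dc.creator = "tolkien"',  # CQL, using the Dublin Core context set
    "maximumRecords": "10",
})
print(urlopen(SRU_BASE + "?" + params).read()[:500])  # peek at the XML response
```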