Panlibus Blog

Archive for the 'Search' Category

Remember OPAC Suckiness

It was all the rage three years or so ago.  Karen Schneider even did a three-part series on ALA TechSource exploring How OPACs Suck, in which she listed elements of OPAC Suckitude and desirable features in a non-sucky OPAC.  Karen was not on her own, as this 2006 post from Jennifer Macaulay reminds us.

What brought this to mind, you may wonder?  I was preparing content for a presentation when I was struck by the massive contrast between two sites I was taking screenshots of.  The first is a classic site which does better than any other to show how libraries were being left behind by the rest of the Web.  If Amazon sucked like our old OPAC was a humorous facade on to web services, built by David Walker of California State University, to make that well-known Internet retailer look like it had been styled by a well-known library system supplier.  Until recently it was a fully working OPAC-style interface on to Amazon.  Unfortunately I think recent changes to Amazon web services may have broken it beyond the first couple of clicks. (If you are listening David, fancy trying to fix it?)

I was contrasting this with the impressive recently launched interface for the Royal Scottish Academy of Music and Drama (RSAMD).  Comparing these two drives home just how far OPACs (if that is what we should still be calling them), and more importantly the aspirations of the librarians responsible for them, have come in the last few years.

Are we there yet?  Checking out some of Karen’s 2006 list, you can tick off many items that are now standard in so-called next-generation OPACs, such as relevance ranking, spelling suggestions, and faceted browsing, so we are well on the way.  As the RSAMD interface shows, it is now possible for a library search interface to hold its head high amongst some of the best of the web.

There is still progress to be made, but should we still be concentrating on a destination site that puts the library’s catalogue online, or should we be looking more broadly at how the web presence of the whole library can be an integral part of the web?  I think the answer is both – stunning catalogue interfaces should become the norm, not the exception to be admired and pointed at.  Meanwhile, delivering all library services seamlessly as part of our users’ web experience should be our next goal.

I wonder what contrasts I’ll be reflecting upon in another three years…

Stephen Arnold – A conversation with the closing keynote speaker for Online Information 2008

Stephen E. Arnold’s career has led him to be a prolific writer, speaker, and expert on web technologies and their application both inside the commercial enterprise and across the Internet.  He is best known for his work on search and his insights into the Google phenomenon.

He is presenting the keynote in the closing session of the Online Information Conference 2008, which is being held at Olympia in London from 2nd – 4th December and will have a wide range of speakers of broad interest to information professionals from all sectors – libraries, academia, government, and commerce.

Stephen talks about his career so far and the themes for his presentation, explaining how the technologies that we have seen emerging over the last few years are ready for use inside the enterprise as well as maturing into delivering services across the web.  He also explores how the componentised nature of these technologies and the applications they power, enables them to be moulded to satisfy the needs of their users.

Innovative and OCLC join the Gang

Following in the footsteps of their counterparts from Ex Libris, SirsiDynix, and Talis, Betsy Graham, Vice President Product Management for Innovative Interfaces, and Matt Goldner, Executive Director End User Services at OCLC, joined The Library 2.0 Gang for the June show.

The topic for the show is Bolt-on OPACs – search and discovery interfaces sourced from the open source community or vendors other than the incumbent ILS supplier.

AquaBrowser was the first commercial product of this type.  Taco Ekkel, Director of Development for Medialab Solutions, the Amsterdam-based company which produced AquaBrowser, is a guest on the show.

Matt, reflecting on the OCLC experience with WorldCat Local, and Betsy, with Innovative’s Encore product, are joined by Andrew Nagy, lead developer on the VuFind project, Marshall Breeding, and Carl Grant, in an open discussion about the way such products are evolving.

Apart from being an interesting discussion, it is yet another example of how key commercial players in the library systems marketplace are starting to open up and join a conversation about the opportunities for libraries, and their users, as well as the issues behind creating those opportunities.


The beauty is in the API of the beholder

I published my posting about the announcement of Project Cenote while Paul Miller was up talking about it at Access 2006 in Ottawa. Whilst I was doing it I was monitoring the #code4lib IRC chat channel, which seemed to be totally populated by people in his audience. I could tell when Paul mentioned Cenote for the first time – the following comment appeared in the channel: “wow, a project named after a deep water-filled hole where humans were tossed as sacrifices…”

I could do a whole posting about the idiosyncrasies of Talis project naming, but you are safe; I’ll refrain for the moment. Still, there is a tenuous connection between a deep water-filled hole and the distinctive application that is Project Cenote – ‘hidden depths’. (I said it was tenuous!)

Underpinning the sleek black Cenote UI is a set of powerful new Talis Platform APIs, joining those already driving things such as Talis Whisper, LibraryThingThing, and Herefordshire’s LibMap. These APIs are so new that the documentation for them is not yet published on TDN.

So pin your eyelids back – here comes a pre-documentation sneak preview.

Anyone who has played with APIs before is probably sceptically wondering how I can sensibly talk about an API without the documentation. Well, these APIs were designed and written with ease of discovery in mind. Like all APIs, you need a base URL to start from; in this case, the URL for the API to search UK bibliographic items. Also like most APIs, you need to add some parameters to get the call to work for you, but where these Platform APIs differ is in what they do when you don’t supply such parameters – no ‘page not found’, 404, or other unhelpful HTML error. What you get is a helpful HTML page giving you direct access to the API – go on, click the link and see. Once there, type in a query and click search.

You should have ended up with a page that looks like this. Yes, I know it looks like XML gobbledygook, but if you scroll down a bit you will see the bibliographic results nicely wrapped, waiting for an application to pick them out.

The default page you are presented with has a single query prompt; type in a search, click search, and you will be presented with two things: firstly, the XML/RDF formatted results, and secondly, in your browser address prompt, the API call that returned them. For the bibuk store you can enter keywords, or use terms prefixed by a search type (e.g. ‘title:war and peace’, ‘author:rowling’, ‘subject:history’, etc.). There are other stores: wikipedia, containing Wikipedia article abstracts; holdings, containing holdings details for libraries which have contributed to the Platform (currently ISBN is the only search query for holdings); and cnimages, for book jacket images (again, ISBN is the currently supported search).
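To make that concrete, here is a minimal sketch of how such a call might be assembled in code. The base URL and parameter names here are invented stand-ins (the real endpoints will be in the TDN documentation when it appears); the point is the shape of the call – a store name, a query with optional prefixed terms, and a couple of parameters:

```python
from urllib.parse import urlencode

# Hypothetical base URL -- the real Platform endpoints are documented on TDN.
BASE = "http://api.example.com/stores"

def search_url(store, query, max_results=10):
    """Build a search call against a named store.

    `query` may contain prefixed terms such as 'title:war and peace'
    or 'author:rowling', exactly as typed into the default HTML page.
    """
    params = urlencode({"query": query, "max": max_results})
    return f"{BASE}/{store}/search?{params}"

# The same call shape works for any of the stores mentioned above.
print(search_url("bibuk", "author:rowling"))
print(search_url("wikipedia", "history"))
```

Paste the resulting URL into a browser and you are looking at exactly what an application would consume – which is the whole point of discoverable APIs.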

Pretty cool, but that’s only the half of it.

With applications like Cenote you want to add value to the bib results with information such as book jackets, holdings information, etc. Yes, you could call the Wikipedia abstract store API with the id for each item, but that would be a bit long-winded. Click on this link; you should be looking at the default page for the augmentation service for the Wikipedia abstracts store. Copy this URL into the prompt, click ‘Augment’, and see what you get. A squint at the returned XML should reveal that the bib results now have Wikipedia abstract data included with them. The same effect can be obtained from the augment services of the book jacket images and holdings stores – now that is impressive.

Here are the results from augmenting bib results with library holdings information – very cool!
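The pattern at work here is worth spelling out: the augmentation service takes the URL of an existing result set as its input, and hands back the same XML with extra data merged in. Again with invented URLs (the parameter name `data` is my assumption, not the documented one), the chaining looks roughly like this:

```python
from urllib.parse import urlencode

# Hypothetical base URL -- see TDN for the real Platform endpoints.
BASE = "http://api.example.com/stores"

def augment_url(augmenting_store, results_url):
    """Point a store's augmentation service at an existing result set.

    The URL of a prior search is passed as a parameter; the service
    returns the same records with (e.g.) Wikipedia abstracts or
    holdings data merged into each one.
    """
    return f"{BASE}/{augmenting_store}/augment?" + urlencode({"data": results_url})

# A bibuk search result, fed through the Wikipedia abstracts augmenter:
results = f"{BASE}/bibuk/search?query=author%3Atolkien"
print(augment_url("wikipedia", results))
```

Because each step is just a URL, the output of one service becomes the input of the next with nothing more than string handling – no client library required.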

I know I work with the guys who are producing this stuff, but I can’t hold back from a hat tip in their direction. This is how APIs should be built – designed to be easily understood and with the consumer in mind. You should be able to test out and see the results of what you want to do without having to write a single line of code.

I’m sure someone out there is thinking, “How do you augment a set of results with data from more than one store?”. Well, that has been thought of, and the orchestration of such things is part of another Platform API set which is well on its way to being released. You’ll just have to be a little patient.

For the XML-averse among you this posting might have been a bit technical [sorry], but hopefully you will see that the people who produced Cenote only had to worry about how it looked and felt, leaving the heavy lifting of searching the data and augmenting it from other sources to the Talis Platform. And I think you will agree, only having to concentrate on the UI shows in the resultant application.

For the Talis Project name spotters reading this, you have probably identified that these APIs come from a Platform component called Bigfoot. Suffice to say the vision behind Bigfoot is:

“Bigfoot is a zero-setup, multi-tenant content and metadata storage facility capable of storing and querying across very large datasets.”

Anyway, I’m all API’d out now. I’m hoping to expand this into a TDN API user guide, so watch out for that. If in the meantime you want to know more, post a message on the TDN Talis Platform Forum or drop me a line.

Striking a new Cenote

Paul gets all the fun!

Not only does he get to show the first results of Medialab becoming a Talis Platform Partner, using data contributed to the Platform in their AquaBrowser Online service that I posted about earlier, but he also gets to be the first to show off Project Cenote.

Project Cenote (pronounced suh-noh-tee) joins its cousin Talis Whisper as a visible demonstration of building applications on the Talis Platform. Whisper is an AJAX application with the entire user interface running within the browser. Cenote demonstrates the power of using the Platform’s services to create a website-based application.

A glance at the screen shot above, or better still a play with Project Cenote itself, clearly shows that we have taken a fundamentally different approach to its user interface design. But its look is not the only thing that makes it different from other interfaces for searching publicly visible library resources.

Fire off a few searches and you will soon see that results are returned for items held by many libraries. Where available, the user can click through a deep link to the OPAC of the holding library. Bibliographic results are enhanced with book jacket images from more than one source. In addition, book descriptions and pricing information are displayed.

What is different is the power of the recently enhanced Platform APIs that Cenote uses to deliver the functionality it wraps in its distinctive UI. More on the technical detail of these in a future posting, but for now suffice it to say that it is the Platform that is doing ‘the heavy lifting’ of searching the large-scale content stores holding the data, and then orchestrating the augmentation of the basic bibliographic data with associated images, descriptions, library holdings information, etc. All the Cenote application is doing is presenting those results to the user.

Another difference is that every result and search has a static URL. For instance, one URL takes you to the page displaying information for the book with the ISBN 9780747571667; another takes you to an author search for Rowling. The same URL format also works for title, subject, publisher, etc. – have a try.

The search power, although not yet visible through an advanced search page, is available through Cenote’s single search prompt. As with many internet search engines, words typed into the prompt are treated as keywords unless prefixed by a search type. So the search “title:war and peace author:Tolstoy” will give you these results.

So what is the purpose of Project Cenote? It has two main purposes. Firstly, like Whisper, it is a visible demonstration of the power of the Platform built upon openly contributed data, and along with the partnership with AquaBrowser Online a working proof that the Platform approach fosters rapid innovation in the development of real solutions. Secondly, it is a tool to drive the discussion around what future user interfaces to library data may look like, and how they will operate.

A Cenote TDN discussion forum has been created for this discussion to grow within. Like it, hate it, intrigued by it, think it should do more, think it is applicable to your situation – or not? Let us know; it will help us, and others who will build on the Platform, build the tools that users want.



Superpatron (Ed Vielmetti) posted ‘a note to self’ yesterday entitled Zed REST. I’ve probably got my national stereotypes wrong, but surely Ed should have called it Zee REST, leaving it to the British to call it Zed REST.

Anyway, Ed was musing on the possibility of writing a Z39.50 adaptor so that research library catalogues could export a PatREST (pdf) interface for public use. That way a simple client could be constructed to talk to a wide range of catalogues, without having to mess about with horribly binary-encoded formats.

He then goes on to add:

A variant on this theme would be SRU REST, which would start from SRU/SRW and export PatREST on the user side.

The implicit message being that, as SRU is a RESTful interface that returns XML, it would be far easier to work with. This is most certainly true.

It is no accident that in the recent free upgrade (which I talked about earlier) to Talis’ own product range, we are providing open access to Z39.50, SRW & SRU by default. If you already have a Z39.50 client, by all means continue to use it. But if you are starting from scratch, don’t touch it with a barge-pole; go for SRU if you have the option.

So Ed, as a footnote to your note, can I suggest you start with SRU, and then look at a way of getting Z39.50 targets visible via SRU? Then you only need to build one PatREST adaptor. Ah, the joy of reuse – very Library 2.0!

Where do you get a Z39.50 to SRU adaptor to make this possible? You get YAZ Proxy.

YAZ Proxy, from the Danish company Index Data, is a GPL-licensed application which, amongst other things, provides “SRU/SRW server function, to allow any Z39.50 server to also support the ZiNG protocols“.

With its in-built XSLT capabilities, this could be just the thing to get Ed closer to his ambitions without having to mess with those horribly binary-encoded formats.

The only thing I would question is: why invent a PatREST search standard, when a perfectly serviceable RESTful search interface is, or can easily be, made available for the many visible Z39.50 catalogues? I’ve studied PatREST, which as a work in progress has a lot going for it, and John Blyberg should be praised for his efforts in producing it, but looking at the searching elements of the standard I can’t help feeling that SRU has solved many of the issues that PatREST may have difficulty in solving – things like the ability to request an Author + Title search, or a search limited by published date ranges.
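For a flavour of what those richer queries look like, here is a sketch of SRU 1.1 searchRetrieve URLs built with CQL. The base URL is a stand-in, and the index names assume a Dublin Core context set (a real SRU server advertises the indexes it actually supports via its Explain record):

```python
from urllib.parse import urlencode

def sru_search(base, cql, max_records=10):
    """Build an SRU 1.1 searchRetrieve URL for the given CQL query."""
    return base + "?" + urlencode({
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql,
        "maximumRecords": max_records,
    })

base = "http://catalogue.example.org/sru"  # stand-in for a real SRU target

# Author + title search -- awkward in flat 'simple' protocols, natural in CQL
# (the DC context set calls the author index 'creator').
print(sru_search(base, 'dc.creator=tolstoy and dc.title="war and peace"'))

# A search limited by publication date range.
print(sru_search(base, 'dc.title=history and dc.date>=1990 and dc.date<=1999'))
```

Both queries are ordinary GET requests returning XML – RESTful enough for any mashup, while the CQL carries the boolean and range logic that simpler protocols struggle to express.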

Don’t get me wrong, as I have said before SRU/W is not perfect, but I think it addresses many of the search query issues that may come to haunt PatREST, OpenSearch, and other ‘simple’ search protocols. Maybe there should be some combining of standards going on in this area. I wonder what happened to this initiative?


Free to SRU

When you approach things in a different way, things just start dropping into place.

Paul, in his post yesterday commenting on Peter Murray‘s latest episode in his excellent series of posts around Service Oriented Architecture (SOA), referenced some of the work we are doing here at Talis…

Programmatic access that means the user need only interact with a library user interface if they want to? Programmatic access that means software can pull this stuff together and act upon it on our behalf? Bring it on! Which, of course, we are… 🙂 Platform APIs. Talis Keystone integration pieces. Large, scalable and affordable stores of data, freely contributed, freely shared and freely consumed by the wider community. Watch this space for the next piece in a puzzle that looks more fascinating every day.

Most of the developments Paul references are major, broad advancements in the way library services and library data can begin to interoperate with other library and non-library systems and services. But it’s not all just about the big picture. As an old colleague of mine used to say, “remember that screen full of information is just a cunning arrangement of individual pixels!“

Like the morning mist rolling away from the rising sun, the approach that is delivering the Keystone Sandbox, the Platform APIs, and the open development community is rolling across the way we, at Talis, approach everything we do. We not only talk, and in my role evangelize, about what is currently termed the Web 2.0 or Library 2.0 way – we practice what we preach.

The latest example of this comes from something called Project Lyra. [For the Talis customers reading this, switch off for the next couple of sentences, as you are probably Lyra’d-out by now and will be glad when it’s all over!] For the rest of you, Project Lyra is the process of migrating the Talis customer base on to two new [to them] standards – ISBN-13 and MARC21. Much easier to type than to do, without disrupting the daily operation of those libraries. As the bibliographic core of a library system is based around the local flavour of MARC, changing the version of MARC is not a snip. The approach to the developments required for the Lyra enhancements to our Library Management System product was different to the way we have approached this sort of thing before. It has been about cross-team cooperation, componentised developments, and disparate modules of the product suite sharing APIs to common components. For instance, the cataloguing component needs to find bibliographic records to edit; the OPAC needs to find bibliographic records to display to users; the Z39.50 target needs to find bibliographic records to deliver to other library systems. Somewhat radically, in historical terms, we are now using the same searching and indexing component to support all that functionality.

All very interesting, but what is the benefit/lesson for me in all this, I hear you say. Well [sticking with the searching and indexing bit], if that component can support three disparate internal system operations, it certainly should be able to support other external requirements. So why not make that component externally accessible for others to use?

That is why, as Talis customers take the latest upgrades in the Lyra program, their systems will by default, and for free, become Z39.50, SRU & SRW capable. This addition dramatically increases the interoperability capabilities of those systems. For the uninitiated, SRU & SRW are Web Service based standards which make it far easier, especially for the non-library community, to integrate library search into other applications than it ever was with Z39.50. Show Z39.50 to a mashup developer and they will run a mile!
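To see why the mashup developer stays put for SRU, here is all it takes to read the hit count out of a searchRetrieve response using nothing but a scripting language’s standard library. The XML below is a trimmed, hand-written sample in the SRW/SRU 1.1 namespace, not output from a real catalogue:

```python
import xml.etree.ElementTree as ET

# A trimmed, hand-written sample of an SRU 1.1 searchRetrieveResponse.
SAMPLE = """<?xml version="1.0"?>
<searchRetrieveResponse xmlns="http://www.loc.gov/zing/srw/">
  <version>1.1</version>
  <numberOfRecords>42</numberOfRecords>
</searchRetrieveResponse>"""

NS = {"srw": "http://www.loc.gov/zing/srw/"}

root = ET.fromstring(SAMPLE)
hits = int(root.find("srw:numberOfRecords", NS).text)
print(hits)  # plain XML over HTTP: no ASN.1/BER decoding in sight
```

Contrast that with decoding Z39.50’s binary-encoded records, and the barge-pole advice above needs no further argument.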

Although great for the users, and potential users, of the Talis Library Management Suite in UK & Ireland libraries, that wasn’t the point of giving you a little insight into how changing your approach in this way can have many unanticipated benefits.

My point is that the world is changing, whether we like it or not, and the products we and you use will have to change to enable you to reap the benefits that will flow from this gear-change in the power of the Internet.

Putting a sticker on the box in which the software is dispatched saying “The contents of this box conform to the aspirations of Library 2.0“, or firing up a token blog, won’t crack it. To do their bit to help the whole library community realize its potential in an increasingly information-rich world, the vendors, the open source community, the library system managers, and the librarians need to start thinking differently. (I’m glad to say at least some are already.)

To lift a quote from Roy Tennant talking about OPAC developments, it is not about “Putting lipstick on pigs”. There is a fundamental shift in the application of technology and the thinking behind it going on, and I for one am enjoying the ride with an organization which understands it.


Red light for RedLightGreen

I was sad to read the report of the demise of the innovative RedLightGreen, from the OCLC-absorbed RLG, which has been providing a book location and citation service since 2003.

An early and obvious casualty of the OCLC/RLG ‘combining’ back in July.  The announcement confirms Paul Miller’s prophecy:

What, for example, does this mean for the freely accessible and oft-praised RedLightGreen? Will it disappear from the public web, locked forever inside the subscription-powered monolith that is WorldCat?

As the announcement indicates, it could not compete with the features [or soon-to-be-available-in-2007 features] of WorldCat.

So time, and OCLC, move on.  I shall miss it.  Not only because RedLightGreen was the first non-Talis consumer of the Talis Platform (using the Bibliographic Deep Linking API to link users directly into library OPACs) but also as a trail-blazer in providing simple but effective web technology to do what users wanted.  To quote RedLightGreen’s own words:

What was once available to major university libraries and research institutions only can now be accessed by all who come to the Web.

In 2003 that was quite an innovative approach, providing a service to be accessed by all.  In many ways it still would be, if it wasn’t closing down.


Out-of-copyright book download from Google

Reported by Michael Arrington on TechCrunch, Google Book Search has added the ability to allow PDF downloads of out-of-copyright books.

Until now, Google only allowed people to read the out-of-copyright books online (and only snippets of copyrighted works). To search the database of available full titles, go to and click the “full view books” option when searching. This new move contradicts earlier statements by Google that scans of out-of-copyright books would not be made available for printing.

SRU / OpenSearch get together

Well at least the people behind them…

DeWitt reports in the A9 Developer Blog on the visit of Dr. Robert Sanderson to A9’s offices. With Dr Sanderson, the co-creator of the SRU search standard, visiting A9, the home of OpenSearch, it obviously wasn’t just a pop-in chat over coffee and bagels.

Rob and I were both pleased to find that we see eye-to-eye on nearly everything we discussed. We both view SRU and OpenSearch as complementary, not competing, technologies. The SRU (née Z39.50) community has long been tackling some of the toughest problems in search syndication, and they have done a commendable job at working out many of the intricacies of integrating and exposing diverse and distributed collections of rich data. The OpenSearch philosophy has always been to leverage other formats when possible, and the incorporation of SRU and CQL elements into OpenSearch via extensions will help create a smooth gradient connecting the two technologies.

As regular readers of Panlibus will be aware, I’ve been predicting the rise of OpenSearch for quite a while, whilst recognizing its limitations and that it could learn from the library search world and protocols such as Z39.50 and SRU. Similarly, the library protocols, complex to most non-library developers, have much to learn from OpenSearch. So it is with great interest that I will be watching Palo Alto and Liverpool to see what develops.

I believe that this is just the first step along a path that will ultimately benefit both search providers and searchers alike. Expect to hear a lot more over the months to come about ways in which SRU can be leveraged by OpenSearch users and vice-versa.

Thanks to Lorcan Dempsey for the heads up on this.
