Panlibus Blog

Archive for the 'TDN' Category

Take your data with you….

As InfoWorld reported earlier this week, Google CEO Eric Schmidt said at the Web 2.0 Summit in San Francisco that Google wants to make the information it stores for its users easily portable, so they can export it to a competing service if they are dissatisfied. He went on:

Making it simple for users to walk away from a Google service with which they are unhappy keeps the company honest and on its toes, and Google competitors should embrace this data portability principle.

If you look at the historical large company behavior, they ultimately do things to protect their business practices or monopoly or what have you, against the choice of the users.

The more we can, for example, let users move their data around, never trap the data of an end-user, let them move it if they don’t like us, the better.

I wonder what Google’s opinions are on sharing data. It’s one thing to be happy for your departing customers to take their data with them [in a usable format]; it is another to be happy to share the data of your current customers [with their permission] with your competitors, to add value to those customers’ lives.

Obviously Schmidt’s comments are aimed at individual users of Google’s hosted software-as-a-service applications. Will this attitude cover aggregations of broader data – digitized book contents, for instance? One key to the free movement of open data between the organizations that hold and allow access to it is licensing, and discussion around licensing in the open data world is growing in volume.

In presentations I attended at the recent Stellenbosch Symposium it was made clear that the research community should be discouraged from signing over all rights to their publications to publishers – some right to hold Open access copies in their institution’s repository should be retained. Then there is the constant justification by Google around what they are doing with the digitization of books. There is the Free Our Data campaign in the UK, and many other examples.

There are also discussions around not only how things are licensed, but what can be the subject of a license.

The Talis Community License (TCL), which has received some Web 2.0 Summit coverage of its own on TechWeb, addresses a hole in the current spectrum of open data licensing: one covered neither by open source licenses such as the GPL at one end, nor by the Creative Commons movement at the other. Both of these cover creative output – source code and creative works respectively.

The problem comes when you try to protect [or enable] the use of ‘an aggregation’ of either facts [which in themselves cannot be copyrighted] or individually protected/licensed elements in a data set. In Europe access to an aggregation is covered by something called Database Right, but this is not a global phenomenon.

To many this hole in the spectrum of open data licensing is not obvious, and only becomes apparent after working through some examples. As the realisation of the need for something like the TCL spreads across the community, we hope it will prove a useful contribution to the evolution of open licensing.

In a separate but associated discussion emerging around the open availability of bibliographic records, LibraryThing’s Tim Spalding made the following comment on the Code4Lib listserv:

As I’ve been saying at conferences, anyone who wants to build an open-source repository of MARC records, with or without wiki-like access, will get my (and LibraryThing’s) direct support. I think it’s going to happen. If only we had the time to do it directly. Maybe we’ll get to it if no one else does….

…An open-source alternative to the current system is going to happen. The only question is when. The project is doable, and would be of enormous importance.

So where do the non-libraries and small libraries who do not want, or more likely cannot afford, to pay expensive fees to get at bibliographic records go at the moment? This has to change.

One of Tim O’Reilly‘s original key aspects of Web 2.0 is ‘Data as the driving force’ – it’s been a slow burner, but it is becoming more obvious by the day.


Get somebody else to do it!

37,000 feet above what I think should be the Sahara desert (not that I can tell, as it is pitch black outside the window of this South African Airways 747), and in a mini power cut.

How smug did I feel, after listening to Paul Miller’s complaint in his Access 2006 presentation (podcast here) that he had no seat-back screen on his flight to Canada, to find just the thing in my seat on this flight to Johannesburg, heading towards the upcoming Stellenbosch Symposium. My smugness bubble soon burst upon discovering that I was in the middle of a block of twelve seats with a power failure – no reading light, no music, no personal entertainment system! ;-{ So the group of ten Belgian tourists I seem to have ended up in the middle of and I have had to resort to that traditional participative pastime of conversation – there are some traditions that are worth maintaining.

There are some things, though, that benefit from technological advances. From my earlier postings you would quite rightly get the impression that I think some of the things Amazon are doing with their utility web services (S3, SQS, EC2, MT) are pretty damn cool. I already personally use a nifty tool called JungleDisk to back up the 4GB of data on my home PC (when do they get the time to listen to all that music, and will they ever stop storing their MP3s in with their documents and spreadsheets?) to the Amazon Simple Storage Service (S3) for less than $2 per month.

S3 came to the rescue on another front. Because I like using images to liven up my presentations, the PowerPoint file for my keynote in Stellenbosch runs to a whole 22MB. Getting something that size to a couple of people in advance is not the easiest of tasks, as it would give even the most accommodating email systems indigestion. Whilst scratching my head about this problem, I suddenly had one of those well durrr moments that we get from time to time. Upload the file to S3, make it publicly visible, and let Amazon and the recipients’ web browsers do the work for me – simple. So, with the aid of another bit of nifty software I can recommend – John Spurlock’s NS3 – that’s exactly what I did. Another knock-on benefit that didn’t initially occur to me is the peace of mind that if I lose the memory stick in my pocket, and the back-up CD goes missing with my luggage, at the same time as my laptop has a nervous breakdown, all I need is access to a browser and I can get my presentation online in a few minutes.
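
Today the same trick is a few lines of Python with the boto3 library (which postdates this post; NS3 was the tool actually used). The bucket and key names here are invented for illustration:

```python
def public_url(bucket: str, key: str) -> str:
    """The predictable HTTPS address S3 gives a publicly readable object."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def share_presentation(path: str, bucket: str, key: str) -> str:
    """Upload a file to S3 with a public-read ACL and hand back its URL.
    Needs AWS credentials configured and the boto3 library installed."""
    import boto3  # imported here so the sketch loads even without boto3
    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, key, ExtraArgs={"ACL": "public-read"})
    return public_url(bucket, key)

# e.g. share_presentation("keynote.ppt", "my-talks", "stellenbosch/keynote.ppt")
print(public_url("my-talks", "stellenbosch/keynote.ppt"))
```

Anyone with the resulting URL can then pull the file straight from Amazon’s servers, with no mail system in the way.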

I don’t think I’m alone in having a recent well durrr moment. I think the technical team behind Second Life had one too:

The client you download may just seem like a 5-minute nuisance to you. Magnified ten thousand times, it becomes a severe issue for our webservers on days when we release a new version – tens of thousands of people all rushing to download them at the same time. An average of 30 MB per download, multiplied by however many folks who want to login to this Second Life thing, comes out to a lot of bits.

Rather than continue to pile on webservers just for this purpose, which has somewhat diminishing returns, we have elected to move the client download over to Amazon’s S3 service, which is basically a big file server.
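
It is easy to put a rough number on that “a lot of bits”; the 30 MB figure comes from the quote, while the download count below is my assumption, not Linden Lab’s:

```python
# Back-of-envelope release-day transfer estimate.
download_mb = 30          # average client download size, per the quote
downloads = 50_000        # assumed number of players fetching the new build
total_gb = download_mb * downloads / 1024
print(f"about {total_gb:,.0f} GB served in a day")  # about 1,465 GB
```

Around a terabyte and a half in a day – exactly the kind of bursty load that is painful to provision your own webservers for, and cheap to hand to S3.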

How many teams behind academic/library projects, startups, etc. must there be out there worrying about sizing their servers, backing their data up, and guesstimating the bandwidth required if they become popular? If I were in their position I would be seriously considering offloading the job to someone else for a few dollars a month on my credit card. Ah, no credit card – that is a massive obstacle for many an institution!

This is starting to sound like a sales pitch for Amazon; it’s not intended to be (but if Jeff is listening – remember your friends at Christmas time). If you want raw compute power, or the storage and distribution of files – heavy lifting done for you – you could do yourself a favor and take a look at what Amazon are doing.

But what if you need more specialized heavy lifting? What about the storage, indexing, and searching of bibliographic data? What about augmenting such data with book-jacket images; links to disparate but related information such as articles in Wikipedia, reviews, etc.; library holdings records; and links into those libraries’ OPACs? All doable individually by many a project team – but can they do all of it without compromising the pressure to deliver it yesterday with a cool new user interface? And without having to create yet another updated version of the last application they built, from scratch?

The Talis Platform – or more specifically its component services Silkworm (an open directory for collections, locations, OPAC deep-link definitions, collection groupings, and potentially much, much more), Bigfoot (highly scalable large data stores, designed to hold, index, search, and augment generic data), and Symphony (possibly a new one for you Talis project-name spotters out there – it orchestrates the interaction between other platform services) – is getting ready to saddle up and deliver a few well durrr moments in our world.

I say getting ready, as we are still putting a few things in place, like expanding the API documentation in TDN to cover the Bigfoot APIs (mind you, based on the play-with-it-and-discover-how-to-use-it-yourself approach that I blogged about recently, it’s questionable how much documentation you need), but as demonstrated by Project Cenote there is plenty there already.

Like it or hate it, the Cenote interface is very different in its look. It is also very different in its construction – it’s all UI and no application. By that I mean that all the Cenote team had to worry about was capturing user input and displaying bibliographic results in a stunning interface. How the data behind it was collected, stored, indexed, and searched was never a concern for them – they got somebody else to do that. The Platform is doing all the heavy lifting for them. It is doing it, and can and will do it, for others – well durrr.

Want to know a bit more? Just ask, either here or in the TDN.


Is there a place for P2P?

David Bigwood was thinking out loud the other day in his Catalogablog posting P2P OPACs:

Here’s an idea, not even half-baked, how about peer-to-peer (P2P) networks of OPACs? Only available items would display. I’d get to pick the institutions I’d have display and whether to display non-circulating items. Something like Limewire.

Having struggled both with the effects of teenage family members installing Limewire and its predecessors on the home PC, and with how we scale the traditional search of a single library’s collection up to a reliable, performant query of information within overlapping ad hoc groups of library collections, I have also wondered whether the P2P (peer-to-peer) technologies underpinning the former could be helpful with the latter.

David’s thought – using P2P, with the music-sharing application Limewire as an example – is, when you deconstruct it, attempting to address a few well-known problems in the library domain.

  • Identifying and locating library collections – how the collection is described, physically located, and accessed electronically are all concerns in this area, which resource directories, many of which have come and gone, have attempted to address. In the music-sharing P2P world the major concern is getting a copy of the file, with little care as to where it comes from.

    There are several current examples of these library directories around, often limited by project, type/size of library, geographic location, commercial constraints, etc. Then there is the Silkworm Directory in the Talis Platform: an open directory, wiki-like in philosophy, in which anyone can enter any library collection and then use an open API to query that information.

  • The grouping together of an ad hoc set of library collections to search within. These could be as organized as all the academic libraries within 50 miles of a city, or as random as a student’s university library, the local library near her dorm, and the library in her home town – totally logical to the student, random to everyone else.

    A little-known aspect of the Silkworm Directory – little known because Paul Miller only mentioned it in his Access 2006 presentation (pdf) last week – is its ability to create ad hoc groups and then query by the members of those groups.

  • Consistent searching across many dissimilar collections. Anyone who has used, or tried to pull together, a federated search across many library catalogs, traditionally using Z39.50, will have horror tales of the way locally implemented indexing rules can make a mockery of searching and results ranking.

    Now if we could consistently index, search, and rank all the holdings of the collections we are interested in, as defined in a directory, in a single store – providing it was scalable and performant – this problem would disappear. This is the approach successfully taken by the Googles of the world. It is also how the Bigfoot element of the Talis Platform operates (see my recent posting for a description of how Bigfoot APIs are driving the recently announced Project Cenote interface).

  • Filtering the results of a search by the libraries in a group that have holdings. P2P, in the same way that Z39.50 federated search does, could help in this area by querying individual library collections directly. But I suspect it would suffer the same problem as current federated search: the overall response can only be as fast as the slowest resource. P2P addresses this with caching and by downloading from several places simultaneously, neither of which really applies when you are trying to get information from one specific collection.

    The Talis Platform’s holdings stores address these issues by storing holdings statements – aggregated across many collections and freely contributed by libraries – alongside the bibliographic stores. This is done in such a way that bibliographic results can be augmented with holdings information on the fly as they are returned from an API call.

  • Filtering the results of a search by libraries that have items in stock. This final step is probably the most difficult to solve in a live situation, as any store can become out of date the moment a book is borrowed from a particular collection. P2P may well have a valuable application in this area, be it filtering a result set of known holdings or keeping stores up to date on a minute-by-minute basis.
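
The slowest-resource problem above, and why a consolidated store sidesteps it, can be simulated in a few lines of Python. The catalog names and response times are made up; a real federated search would be doing Z39.50 round trips where the sleeps are:

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

# Pretend response times in seconds for three catalogs (invented numbers).
CATALOGS = {"central": 0.1, "campus": 0.2, "branch": 1.0}

def query(name: str) -> str:
    time.sleep(CATALOGS[name])  # stand-in for a slow Z39.50 round trip
    return f"results from {name}"

def federated_search(timeout: float) -> list[str]:
    """Fan out to every catalog and keep whatever has answered by the deadline."""
    pool = ThreadPoolExecutor(max_workers=len(CATALOGS))
    futures = [pool.submit(query, name) for name in CATALOGS]
    done, _ = wait(futures, timeout=timeout)
    pool.shutdown(wait=False)
    return sorted(f.result() for f in done)

# Waiting for everyone means waiting for the slowest; a 0.5-second cut-off
# returns the two fast catalogs and silently drops the 1-second straggler.
print(federated_search(timeout=0.5))
```

Either you wait for the slowest source, or you impose a deadline and lose results – which is exactly the trade-off a single pre-aggregated index avoids.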

It remains to be seen how P2P could be used, but it should not be dismissed as merely a technique for [often illegal] music downloading.

David says his thought might be ‘half-baked’, but there are some useful ingredients in his recipe. How well some of them would scale in the wider library environment I’m not so sure, but a hybrid of P2P with some of the high-volume, scalable, performant, open data, open API aspects of the Talis Platform – now that may well have legs.


The beauty is in the API of the beholder

I published my posting about the announcement of Project Cenote while Paul Miller was up talking about it at Access 2006 in Ottawa. Whilst doing so I was monitoring the #code4lib IRC channel, which seemed to be populated entirely by people in his audience. I could tell when Paul mentioned Cenote for the first time – the following comment appeared in the channel: “wow, a project named after a deep water-filled hole where humans were tossed as sacrifices…”

I could do a whole posting about the idiosyncrasies of Talis project naming, but you are safe – I’ll refrain for the moment. Still, there is a tenuous connection between a deep water-filled hole and the distinctive application that is Project Cenote – ‘hidden depths’. (I said it was tenuous!)

Underpinning the sleek black Cenote UI is a set of new, powerful Talis Platform APIs, joining those already driving things such as Talis Whisper, LibraryThingThing, and Herefordshire’s LibMap. These APIs are so new that the documentation for them is not yet published in TDN.

So pin your eyelids back – here comes a pre-documentation sneak preview.

Anyone who has played with APIs before is probably sceptically wondering how I can sensibly talk about an API without the documentation. Well, these APIs were designed and written with ease of discovery in mind. Like all APIs, you need a base URL to start from – in this case, the URL for the API that searches UK bibliographic items. Also like most APIs, you need to add some parameters to get the call to work for you, but where these Platform APIs differ is in what they do when you don’t supply such parameters – no ‘page not found‘, 404, or other unhelpful HTML error. What you get is a helpful HTML page giving you direct access to the API – go on, click the link and see. Once there, type in a query and click search.

You should have ended up with a page that looks like this – yes, I know it looks like XML gobbledygook, but if you scroll down a bit you will see the bibliographic results nicely wrapped, waiting for an application to pick them out.

The default page you are presented with has a single query prompt; type in a search, click search, and you will be presented with two things: firstly, the XML/RDF-formatted results, and secondly, in your browser address bar, the API call that returned them. For the bibuk store you can enter keywords, or terms prefixed by a search type (e.g. ‘title:war and peace’, ‘author:rowling’, ‘subject:history’, etc.). There are other stores: wikipedia, containing Wikipedia article abstracts; holdings, containing holdings details for libraries which have contributed to the Platform (currently ISBN is the only search query for holdings); and cnimages, for book jacket images (again, ISBN is the currently supported search).
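
As a sketch of what ‘picking them out’ looks like from an application’s point of view, here is a few lines of Python over a miniature response. The element names below are invented stand-ins; the real Bigfoot response is RDF/XML with its own vocabulary:

```python
import xml.etree.ElementTree as ET

# A made-up miniature of a search response -- real element names will differ.
SAMPLE = """\
<results>
  <item><title>War and Peace</title><isbn>9780140447934</isbn></item>
  <item><title>Anna Karenina</title><isbn>9780140449174</isbn></item>
</results>"""

def titles(xml_text: str) -> list[str]:
    """Pull every result title out of the gobbledygook."""
    root = ET.fromstring(xml_text)
    return [item.findtext("title") for item in root.iter("item")]

print(titles(SAMPLE))  # ['War and Peace', 'Anna Karenina']
```

A dozen lines to turn the raw XML into something a UI can render – which is the whole point of exposing the results this way.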

Pretty cool, but that’s only the half of it.

With applications like Cenote you want to add value to the bib results with information such as book jackets, holdings information, etc. Yes, you could call the Wikipedia abstract store API with the id for each item, but that would be a bit long-winded. Click on this link, and you should be looking at the default page for the augmentation service of the Wikipedia abstracts store. Copy this URL into the prompt, click ‘Augment’, and see what you get. A squint at the returned XML should reveal that the bib results now have Wikipedia abstract data included with them. The same effect can be obtained from the augment services of the book jacket images and holdings stores – now that is impressive.
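
The pattern being described – handing one API call’s URL to another service as a parameter – looks roughly like this. Both URLs and the parameter name here are placeholders of my own, not the Platform’s real endpoints:

```python
from urllib.parse import urlencode

# Placeholder endpoints -- the real ones are documented in TDN.
AUGMENT_BASE = "http://api.example.org/wikipedia/augment"
RESULTS_URL = "http://api.example.org/bibuk/search?query=tolstoy"

def augment_call(results_url: str) -> str:
    """URL-encode one service's results URL as a parameter to another:
    the augment service fetches those results and folds its own data in."""
    return AUGMENT_BASE + "?" + urlencode({"uri": results_url})

print(augment_call(RESULTS_URL))
```

Because the augmented output is itself just a results URL, calls like this can in principle be chained – jackets, abstracts, holdings – one service feeding the next.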

Here are the results from augmenting bib results with library holdings information. – Very cool!

I know I work with the guys who are producing this stuff, but I can’t hold back from a hat tip in their direction. This is how APIs should be built – designed to be easily understood, and with the consumer in mind. You should be able to test out and see the results of what you want to do without having to write a single line of code.

I’m sure someone out there is thinking: how do you augment a set of results with data from more than one store? Well, that has been thought of, and the orchestration of such things is part of another Platform API set which is well on its way to being released. You’ll just have to be a little patient.

For the XML-averse among you this posting might have been a bit technical [sorry], but hopefully you will see that the people who produced Cenote only had to worry about how it looked and felt, leaving the heavy lifting of searching the data and augmenting it from other sources to the Talis Platform. And I think you will agree, only having to concentrate on the UI shows in the resultant application.

For the Talis Project name spotters reading this, you have probably identified that these APIs come from a Platform component called Bigfoot. Suffice to say the vision behind Bigfoot is:

“Bigfoot is a zero-setup, multi-tenant content and metadata storage facility capable of storing and querying across very large datasets.”

Anyway, I’m all API’d out now. I’m hoping to expand this into a TDN API user guide, so watch out for that. If in the meantime you want to know more, post a message on the TDN Talis Platform Forum or drop me a line.

Striking a new Cenote

Paul gets all the fun!

Not only does he get to show the first results of Medialab becoming a Talis Platform Partner, using data contributed to the Platform in their AquaBrowser Online service that I posted about earlier, but he also gets to be the first to show off Project Cenote.

Project Cenote (pronounced suh-noh-tee) joins its cousin Talis Whisper as a visible demonstration of building applications on the Talis Platform. Whisper is an AJAX application with the entire user interface running within the browser. Cenote demonstrates the power of using the Platform’s services to create a web site based application.

A glance at the screenshot above, or better still a play with Project Cenote itself, clearly shows that we have taken a fundamentally different approach to its user interface design. But its look is not the only thing that makes it different from other interfaces for searching publicly visible library resources.

Fire off a few searches and you will soon see that results are returned for items held by many libraries. Where available, the user can click through a deep link to the OPAC of the holding library. Bibliographic results are enhanced with book jacket images from more than one source. In addition, book descriptions and pricing information are displayed.

What is different is the power of the recently enhanced Platform APIs that Cenote uses to deliver the functionality it wraps in its distinctive UI. More on the technical detail of these in a future posting, but for now suffice it to say that it is the Platform that is doing ‘the heavy lifting’ of searching the large-scale content stores holding the data, and then orchestrating the augmentation of the basic bibliographic data with associated images, descriptions, library holdings information, etc. All the Cenote application is doing is presenting those results to the user.

Another difference is that every result and search has a static URL. For instance, one URL takes you to the page displaying information for the book with the ISBN 9780747571667, and another takes you to an author search for Rowling. The same URL format also holds for title, subject, publisher, etc. – have a try.

The search power, although not yet visible through an advanced search page, is available through Cenote’s single search prompt. As with many internet search engines, words typed into the prompt are treated as keywords unless prefixed by a search type. So the search “title:war and peace author:Tolstoy” will give you these results.
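
A sketch of that prompt behaviour – not Cenote’s actual parser, just an illustration of the rule described above – splitting a typed query into fielded terms, with anything unprefixed collected as plain keywords:

```python
import re

FIELDS = ("title", "author", "subject", "publisher")

def parse_query(q: str) -> dict[str, str]:
    """Split a query like 'title:war and peace author:Tolstoy' into fields;
    words without a recognised prefix are gathered up as keywords."""
    parts = re.split(r"(?=\b(?:%s):)" % "|".join(FIELDS), q)
    terms: dict[str, str] = {}
    for part in parts:
        part = part.strip()
        if not part:
            continue
        field, _, value = part.partition(":")
        if field in FIELDS and value:
            terms[field] = value.strip()
        else:
            terms["keyword"] = (terms.get("keyword", "") + " " + part).strip()
    return terms

print(parse_query("title:war and peace author:Tolstoy"))
# {'title': 'war and peace', 'author': 'Tolstoy'}
```

Note how everything between one prefix and the next belongs to the earlier field, which is what lets multi-word titles like ‘war and peace’ work without quoting.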

So what is the purpose of Project Cenote? It has two main purposes. Firstly, like Whisper, it is a visible demonstration of the power of the Platform built upon openly contributed data, and – along with the partnership with AquaBrowser Online – a working proof that the Platform approach fosters rapid innovation in the development of real solutions. Secondly, it is a tool to drive the discussion around what future user interfaces to library data may look like, and how they will operate.

A Cenote TDN discussion forum has been created for this discussion to grow within. Like it, hate it, intrigued by it, think it should do more, think it is applicable to your situation – or not. Let us know; it will help us, and others that will build on the Platform, build the tools that users want.


OPAC as a Service

One of the joys of watching and commenting on technology is the great sport of new-acronym spotting, rivaled in its field only by new-acronym inventing. Lately a small series of acronyms has been forming with the suffix ‘aaS’.

First we had SaaS – Software as a Service. To quote Wikipedia: “Software as a Service (SaaS) is a model of software delivery where the software company provides maintenance, daily technical operation, and support for the software provided to their client. SaaS is a model of software delivery rather than a market segment; software can be delivered using this method to any market segment including home consumers, small business, medium and large business.” The best-known example of SaaS is Salesforce. In simple terms, Salesforce’s business is based upon companies large and small renting a CRM system as a service over the Internet. For the customer this means no investment in hardware or software, or the associated overheads – they just get on and use it.

Following Amazon’s announcement of their Elastic Compute Cloud (EC2) service, where you can rent time as and when you need it on a virtual computer for as little as 10 cents an hour, Jeff Barr (Amazon Web Services Evangelist) coined the term Hardware as a Service (HaaS). In the presentation where I first saw Jeff use the term, he also talked about the Amazon Mechanical Turk service. Mechanical Turk is a way of interacting with people to get done the things that only humans can do, such as image recognition, sheep drawing, and the like. This got me thinking that it was really People as a Service – PaaS.

Well now, following an announcement from Medialab, the Dutch company behind AquaBrowser, we have OaaS – OPAC as a Service. This may be stretching the point a little, as what they are proposing to deliver could be described as providing OPACs using a SaaS technique and business model, but if others agree that OaaS is a good acronym I would like to stake my claim to its creation.

The announcement of AquaBrowser OnLine is featured in Library Technology Guides. From the AquaBrowser OnLine site:

AquaBrowser Online is a unique web based library catalog search service. All your library needs is an Internet browser. Without any investment in software or servers, this is a whole new paradigm in library search solutions.


AquaBrowser Online brings the best features from AquaBrowser Library to the budget conscious libraries with up to 150.000 titles in their catalogs. No longer will smaller libraries have to compromise their wishes on search and web-accessibility.

So let the guys from AquaBrowser Online have access to index your catalog, and you can have a whizzo new OPAC interface for as little as $99/month – all without having to jump through the usual hoops of justifying the purchase of hardware and software to run it on.

This is not the first example of being able to obtain some or all of your library system via a service, but it is a high-profile one. Building on their reputation for providing new OPAC interfaces for old monolithic library systems – or, to echo Roy Tennant‘s words, putting lipstick on pigs – this initiative may gain some traction.

A library wishing to dip its toe in this water will probably be taking a low-risk decision. There is no hardware investment, and they are not committing their whole library service to this as-yet-untested option; if they don’t like it they can pull out and go back to what they are doing now, or even move on to other OaaS solutions that will undoubtedly appear in the future.

I note that there is currently a limit of 150,000 titles (at $256/month); it would be interesting to see how this could or would be scaled for larger systems.

Back in April I predicted [again] that the future of the monolithic library system was uncertain: the library system of the future would consist of many specialized components, loosely coupled by web services, delivering a whole solution relevant to a specific library’s requirements. Joining some early shoots in this very different way of providing and purchasing the technology to run your library, AquaBrowser Online will be very interesting to watch as it moves from announcement through to a live service.

Even the most casual reader of the TDN and this blog will know that here at Talis we are convinced that this sort of approach to delivering library services and software is the way forward – so much so that it is benefiting us and our customers in everything that we do. No matter how certain you are of something, it is always gratifying to see others who also ‘get it’. We will be watching AquaBrowser Online with great interest.

(Waiter photo taken by vanillasky, displayed on Flickr)


A Directory of Innovation

The ethos behind the Talis Developer Network (TDN) is one of shared innovation. The problem can be finding that innovation in the first place, so that you can share it with others.

Sharing of ideas and discussion of issues take place all the time in the TDN Discussion Forums. The re-energised Mashing up the Library competition celebrates and showcases practical examples of innovation at today’s cutting edge.

So what about the great bits of innovation out there that were never submitted by their creators as competition entries? Things like E34ST, BookBurro, and the granddaddy of them all LibraryLookup. What, too, about all those entries submitted to earlier rounds of the Competition?

The TDN Innovation Directory has been launched to provide a simple open directory where you can find a pointer to these examples of innovation.

Be they web sites, widgets, gadgets, browser extensions, plug-ins, or ‘mashups’ of any type that demonstrate innovation in the display, use, and reuse of data from and about libraries, you should find them in here.

So check it out; if your favorite example of Library 2.0 technology is not in there, let us know.

The Innovation Directory marks an ongoing commitment from Talis to nurture and celebrate creative thinking and new approaches relevant to the library domain. We will continue to add to the Directory, and welcome you to do the same in order to build a truly useful resource for us all.

Currently, the Innovation Directory is technologically simple. This is deliberate. We want you to tell us what structure you would find most useful. We want you to tell us the sorts of typology or folksonomy that would add most value. Tell us what you want, and help us build the resource that meets your needs, now and in the future.


Mashing up the Library Competition – attracts many excellent entries

Mashing up the Library competition logo

The competition closed last night with a total of eighteen excellent entries – all of which can be viewed here.

Now the job of the Judges starts. Watch this space for the results of their cogitations.

On a Library/Web 2.0 note – posting this from the deck of a canal boat, with the aid of a 3G enabled laptop, is certainly a surreal experience.

Still, I shouldn’t moan, as I have been preaching for many months that the point of Library 2.0 is to get services to users wherever they are – even if that is on holiday on a canal boat!

Open Directory now open for opening hours

The Talis Directory of Library Collections, which already underpins many open services, has added opening hours to the set of attributes you can enter about a location.

Not a massive leap forward, you may say; in fact the facility appeared a few days ago without many people noticing. What it demonstrates, though, is the simple flexibility of using RDF in the underlying semantic data store for the directory. A traditional relational-database-powered application would have required re-engineering to add extra columns to its tables. In an RDF world, opening hours are now just associated with a location; in fact the major piece of work is around updating the user interface to manage them.
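
The ‘just associated with a location’ point is easiest to see with triples reduced to plain tuples. The subjects and predicate names (lib:opensOn and friends) are invented for illustration, not the Directory’s actual vocabulary:

```python
# A toy triple store: plain (subject, predicate, object) statements.
triples = {
    ("lib:central", "lib:name", "Central Library"),
    ("lib:central", "lib:city", "Birmingham"),
}

# Adding opening hours needs no schema migration -- no ALTER TABLE, no new
# columns -- just assert new statements about the location.
triples.add(("lib:central", "lib:opensOn", "Sunday"))

def open_on(day: str) -> set[str]:
    """Every location with an opensOn statement for the given day."""
    return {s for (s, p, o) in triples if p == "lib:opensOn" and o == day}

print(open_on("Sunday"))  # {'lib:central'}
```

Any new attribute is just another predicate, which is why only the user interface, and not the store, needs changing.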

There are many other attributes that the Directory could store about Library Collections and Locations, and introducing them will be a much simpler process because of the choice of RDF as the architecture for the Directory. If you have thoughts on what information should be stored in a directory, join the discussion on the TDN Talis Platform Forum.

As with everything in the Directory, these attributes are available to be retrieved and queried via the SPARQL query API. So using the Platform APIs, it is not only possible to discover which libraries hold a particular item, but also to refine that selection to show only the ones that are open on a Sunday.
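The shape of that "holds the item AND open on Sunday" query can be sketched as a join over two triple patterns. This is a toy pure-Python equivalent with invented identifiers, not the Directory's actual SPARQL endpoint or vocabulary:

```python
# Illustrative SPARQL shape (hypothetical vocabulary):
#   SELECT ?lib WHERE {
#     ?lib dir:holds <isbn:0747532699> .
#     ?lib dir:opensOn "Sunday" .
#   }

triples = {
    ("lib:a", "dir:holds", "isbn:0747532699"),
    ("lib:b", "dir:holds", "isbn:0747532699"),
    ("lib:a", "dir:opensOn", "Sunday"),
    ("lib:b", "dir:opensOn", "Saturday"),
}

def subjects_with(p, o):
    """All subjects that have the given predicate/object pair."""
    return {s for (s, pred, obj) in triples if pred == p and obj == o}

# Intersect the two patterns: holders of the item that open on Sunday
open_sunday_holders = (subjects_with("dir:holds", "isbn:0747532699")
                       & subjects_with("dir:opensOn", "Sunday"))
```

Each new attribute added to the store becomes one more pattern you can join against, with no change to the query machinery itself.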

Maybe wishful thinking, but with three days left to run for the Mashing up the Library competition, I wonder if we will see library opening hours being used in any of the entries?


Reliable Dependency

If you have been monitoring the discussions on TDN and the LibraryThing Google Group today, you will be aware that people have been beavering away in Portland, Maine; Birmingham, UK; and Dublin, Ohio over the last 24 hours or so, madly trying to fix broken stuff.

Firstly, the very useful xISBN Web Service from OCLC Research stopped working. I first noticed this when users of the recently published LibraryThingThing Firefox extension reported that it had stopped working. After some analysis I identified the cause: the LibraryThingThing code didn't cope too elegantly when its calls to xISBN did not return the XML it expected.

As LibraryThingThing is an extension that only functions when a Web page from LibraryThing is displayed in the browser, I obviously needed to access LibraryThing whilst fixing and testing my code. LibraryThingThing also uses the ThingISBN web service, and this is when I discovered that it was also broken: Tim Spalding was up in the middle of his night fixing it. Eventually, in the middle of my day, LibraryThing came back on air, my code was tested, and an update to LibraryThingThing that could cope with an off-air xISBN service (which it still was) was soon released.
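The fix amounts to treating "not the XML we expected" as an ordinary outcome rather than an error to crash on. Here is a hedged sketch of that defensive pattern; the function name and element names are invented, not LibraryThingThing's actual code or the xISBN response schema:

```python
import xml.etree.ElementTree as ET

def parse_isbn_response(body):
    """Parse an xISBN-style XML response defensively.

    Returns a list of ISBN strings, or None when the service returns
    something other than the XML we expect (an HTML error page, an
    empty body, a truncated document, and so on). The caller treats
    None as "service off air" and degrades gracefully instead of
    crashing the whole extension.
    """
    if not body:
        return None
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return None
    # Collect any <isbn> elements; an unexpected-but-well-formed
    # document (e.g. an error page) simply yields none of them.
    isbns = [el.text for el in root.iter("isbn") if el.text]
    return isbns or None
```

The key design choice is that every failure mode collapses to the same sentinel, so the UI code has exactly one "service unavailable" branch to handle.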

Just when I was starting to relax, it became clear that a known defect in the Firefox extension update code was causing problems for some LibraryThingThing users – but that is another story, for which there is at least a temporary work-around!

So is this tale of interdependency woe a lesson in why not to use web services? Some might say it is – "You wouldn't get that trouble if you produced all the functionality and ran it in house!" – but I would disagree.
Analyzing what went wrong, and where, over the last 24 hours shows that the key service which started these problems is provided by the Research arm of OCLC: "As an experimental project of OCLC Research, this service is available without charge or guarantee." My LibraryThingThing extension is an "example of mashing together Web Services" from the Talis Developer Network. LibraryThing is an excellent, but nevertheless beta, site.

The first observation one might draw is that it is amazing how [normally] well these services work together, considering that there was no working together to produce the result. Each Web Service author published simple documentation that was picked up by the others and coded against. No training courses, no voluminous manuals, no synchronized development environments.

My second observation is that a useful tool, or service, is useful if it is available most of the time. The Internet, especially from its early modem days, has taught us to expect the occasional glitch or web site that is off air today; people only tend to comment if it is still off air the next day. There are exceptions to this, though. If Amazon, or Google, or eBay go away for more than a handful of minutes, the discussion groups light up. It is the same for the Web Services these organizations provide. Why? Because people depend on these services to do their business – they are not Research/Beta services like xISBN. Would you build a business that depended on xISBN? Not unless it moved from being a research service to become a reliable, supported service.

So the moral I draw from recent events is that if you are going to depend on a Web Service, it must be reliable. On the flip side, if you want people to use your Web Services, make them dependable.

Gone are the days of signing up for, probably unenforceable, Service Level Agreements for access to proprietary services. Take a look at the terms of use for the Amazon Web Services API – in legal terms, "consumer beware". But in practical terms, how long would even the massive Amazon last if it could not provide a reliable, dependable service? Commercial self-interest is the best SLA you can get from any organization.

So if you are going to depend on something, make sure it is reliable, supported and dependable. That is why there was a long pause between the initial announcement and research prototypes for the Talis Platform and the release of the APIs for it. We knew that consumers of the services would want to depend upon them, so that meant production quality supported services.

If you think I'm advocating that you should only ever use established, fully robust Web Services and no betas, you would be wrong. Apart from the core services of your application, for which you would consume services such as Amazon's or the Talis Platform, there is a great deal of functionality that can add value for your processes and your users but is not operationally critical.

Who would complain if reader reviews or ‘who bought this also bought that’ disappeared for a few hours from the Amazon site, as long as you could still buy your books?
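That distinction between core and nice-to-have suggests a simple pattern: wrap every non-critical enrichment call so that any failure falls back to a harmless default. A minimal sketch, with a deliberately broken stand-in for a flaky beta service:

```python
def optional(fetch, default=None):
    """Call a non-critical enrichment service; on any failure,
    fall back to a default rather than breaking the core page."""
    try:
        return fetch()
    except Exception:
        return default

def flaky_reviews():
    # Stand-in for a beta web service that is currently off air
    raise IOError("beta service off air")

page = {
    "title": "The Hobbit",                           # core data: must be reliable
    "reviews": optional(flaky_reviews, default=[]),  # nice-to-have: may vanish
}
```

The core data renders regardless; the reviews slot quietly comes back when the beta service does.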

The Web Services/Web 2.0 world has changed people's attitudes. Expect 100% reliability for core functionality; but for the nice-to-haves, the fun stuff, expectations are lower and beta services are accepted. Of course, what then happens is that new things become so useful and expected that the service providing them transitions into a reliable, dependable service because so many use it. This cycle repeats itself over and over.

So what was today's experience about? Just early days, and a step along the way.

Update: I see xISBN is back, well done in Dublin!
