Panlibus Blog

Archive for the 'Organisation of knowledge' Category

Google Book Settlement will help stimulate eBook availability in libraries

So says former Google Book Search product manager Frances Haugen in her contribution to the debate on the September Library 2.0 Gang.

This month’s Gang was kicked off by Orion Pozo from NCSU, where they have rolled out dozens of Kindles and a couple of Sony Readers.  The comparative success of the Kindles over the Sony Readers appears to stem from the simpler process of distributing purchased books across sets of readers, and from a broader selection of titles at lower cost.  Currently users request books for the Kindle via an online selection form; the books are then purchased and downloaded on to the devices, which are then loaned out.  There are no restrictions on the titles purchased, and there is an approximate 50/50 split between fiction and non-fiction.

The Gang discussed the drivers that will eventually lead to the wide adoption of eBooks, including the emergence of open eBook standards and the evolution of devices, other than dedicated readers, that can provide an acceptable reading experience.  Carl Grant shared his experience of starting a read on his Kindle and then picking it up from where he left off on his iPhone (as he joined his wife whilst shopping).

An obvious issue influencing the availability of eBooks is licensing, and author and publisher rights.  This is where the Google Book Settlement comes into play.  If it works out as she hopes, Frances predicts that over time it will facilitate broader availability of currently unavailable titles.  I paraphrase:

[From approx 26:50] Institutional subscriptions will become available on the 10M books that Google has scanned so far.  Imagine in the future a user with a reader that accepts open formats being able to get access to the books this institutional license would provide.  Imagine school children having access to the 10M books that their library subscribes to, instead of having to formally request one-off books to be added to their device.

[From approx 44:50] There are a huge number of books that are no longer commercially available in the US, for several reasons.  If the rights holders of those books do not opt out, the books will become available for people to purchase access to.  One of the interesting things about the way the settlement is set up is that you will be able to purchase access either directly or through an institutional subscription.  What is neat is that this cycle will put a check on prices, as prices for individual books are based upon demand – so less popular books will cost less.  If the price of the institutional subscription ever gets too high, libraries can decide to buy one-offs of these books.  I think that whole economic mechanism will substantially increase access to books.
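
The pricing check described above can be sketched as a toy model (the function, titles, and numbers below are entirely hypothetical, not settlement figures): a library compares the institutional subscription price against buying one-off access to only the titles its users actually request.

```python
# Toy model of the settlement's pricing check (all numbers hypothetical).
# A library takes the subscription only while it beats buying one-offs.

def cheaper_option(subscription_price, one_off_prices, requested_titles):
    """Return ('subscription', cost) or ('one-offs', cost), whichever is cheaper."""
    one_off_cost = sum(one_off_prices[title] for title in requested_titles)
    if subscription_price <= one_off_cost:
        return ("subscription", subscription_price)
    return ("one-offs", one_off_cost)

# Demand-based pricing: the popular title costs more than the rare one.
prices = {"rare-title": 4.0, "popular-title": 12.0}

# A library whose users only want one rare title skips the subscription:
print(cheaper_option(5000.0, prices, ["rare-title"]))
```

The point of the mechanism is visible in the toy: if the subscription price drifts too high relative to actual demand, the one-off route wins, which in turn pressures the subscription price back down.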

The Gang were in agreement that eBooks will soon overtake paper ones as the de facto delivery format; it is just a question of how soon.  Some believe the change will be much more rapid than many librarians expect.  The challenge for librarians is to take their services into this eReading world.

OCLC’s Andrew Pace Talks with Talis about Web-Scale ILS

To find out about OCLC’s move into providing hosted, Web-scale, Software as a Service functionality for managing libraries, who better to ask than the person responsible for the programme?

Andrew Pace, Executive Director, Networked Library Services, has been working on this for the last fifteen months, and as you can hear from our conversation, is pleased that he can now talk openly about it.

Our wide-ranging conversation takes us from the epiphany moment when Andrew announced he wanted to be a librarian, through to the strategic and architectural decisions behind this significant OCLC initiative.

Andrew’s answers to my questions add depth and background to the brief details so far released in his blog posts and OCLC’s press releases.

Library of Congress launch Linked Data Subject Headings

Back in December I was very critical of the Library of Congress for forcing the take-down of a Linked Data service that LoC employee, and Talking with Talis interviewee, Ed Summers had created: a powerful and useful demonstration of how applying Linked Data principles to an LoC dataset such as the Library of Congress Subject Headings could deliver an open asset that adds value to other systems.  Almost immediately after its initial release, another Talking with Talis interviewee, Martin Malmsten from the Royal Library of Sweden, made use of the links to the LCSH data.  Ed was nevertheless asked to take the service down, ahead of the LoC releasing its own equivalent in the future.

I still wonder at the LoC approach to this, but that is all water under the bridge now, as they have launched their own service under the snappy title of “Authorities & Vocabularies”.

The Library of Congress Authorities and Vocabularies service enables both humans and machines to programmatically access authority data at the Library of Congress via URIs.

The first release under this banner is the aforementioned Library of Congress Subject Headings.

As well as delivering access to the information via a Linked Data service, they also provide a search interface, and a ‘visualization’ via which you can see the relationship between terms, both broader and narrower, that are held in the data.

To quote Jonathan Rochkind, the new service “is AWESOME”:

Not only is it the first (so far as I know) online free search and browse of LCSH (with in fact a BETTER interface than the proprietary for-pay online alternative I’m aware of).

But it also gives you access to the data itself via BOTH a bulk download AND some limited machine-readable APIs. (RSS feeds for a simple keyword query; easy lookup of metadata about a known-item LCSH term, when you know the authority number; I don’t think there’s a SPARQL endpoint? Yet?).

On the surface, to those not yet bought into the potential of Linked Data, and especially Linked Open Data, this may seem like an interesting but not necessarily massive leap forward.  I believe that what underpins the fairly simple functional user interface they provide will gradually become core to bibliographic data becoming a first-class citizen in the web of data.

Overnight, the URI for the subject heading ‘Elephants’ has become the globally available, machine- and human-readable, reliable source for the description of that heading, containing links to its related terms (in a way that both machines and humans can navigate).  This means that system developers and integrators can rely upon that link to represent a concept, not necessarily the way they want to [locally] describe it.  This should facilitate the ability for disparate systems and services to simply share concepts, and therefore understanding – one of the basic principles behind the Semantic Web.
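
To make “machine readable” concrete, here is a minimal sketch in Python: a hand-written SKOS/RDF fragment in the general style of such subject-heading data (the example.org URIs and labels are illustrative assumptions, not the live LoC records), parsed with the standard library to walk from a concept to its broader term.

```python
# Parse a minimal, hand-written SKOS fragment (illustrative URIs, not the
# live LoC data) and follow the skos:broader link between concepts.
import xml.etree.ElementTree as ET

SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

rdf_xml = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                      xmlns:skos="http://www.w3.org/2004/02/skos/core#">
  <skos:Concept rdf:about="http://example.org/subjects/elephants">
    <skos:prefLabel>Elephants</skos:prefLabel>
    <skos:broader rdf:resource="http://example.org/subjects/mammals"/>
  </skos:Concept>
</rdf:RDF>"""

root = ET.fromstring(rdf_xml)
for concept in root.findall(f"{{{SKOS}}}Concept"):
    label = concept.findtext(f"{{{SKOS}}}prefLabel")
    broader = concept.find(f"{{{SKOS}}}broader").get(f"{{{RDF}}}resource")
    print(f"{label} -> broader: {broader}")
```

Nothing in the loop depends on what the concepts mean; a system that has never heard of elephants can still follow the broader/narrower links, which is exactly the navigability the post describes.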

This move by the LoC has two aspects that should make it a success.  The first is technical: adopting the approach, standards, and conventions promoted by the Linked Data community ensures a ready-made developer community to use it and spread the word.  The second is openness: no one will have to stop and ask “is it OK to use this stuff?” before taking advantage of this valuable asset.  Many in the bibliographic community, who seem to spend far too much time on licensing and logins, should watch and learn from this.

A bit of a bumpy ride to get here, but nevertheless a great initiative from the LoC that should be welcomed – one that I hope they and many others will build upon in many ways.  Bring on the innovation that this will encourage.

Image from the Library of Congress Flickr photostream.

Peter Brantley Talks with Talis as he moves to the Internet Archive

I first interviewed Peter Brantley, in the Talking with Talis series, in July 2007 about his role in the Digital Library Federation and its place in the world of digital libraries.

In this conversation we look back over the last couple of years at the DLF, and then forward into his new challenge and opportunity at the Internet Archive.

We go on to discuss his thoughts and plans to make it easy to identify books and information, and their locations, in a way that is not possible with the processes and protocols we use today.

UKSG09 Uncertain vision in sunny Torquay

Glorious sunshine greeted the opening of the first day of UKSG 2009 in Torquay yesterday.  The stroll along the seafront from the conference hotel (Grand in name and in all facilities except Internet access – £1/minute for dialup indeed!) was in delightful sharp contrast to the often depressing plane and taxi rides to downtown conference centres.

The seaside theme was continued with the bright conference bags – someone had obviously got hold of a job lot of old deckchair canvas.  700-plus academic librarians, publishers, and supplier representatives settled down in the auditorium of the Riviera Centre to hear about the future of their world.

The three keynote speakers were very different in topic and delivery, but all left you with the impression of change coming over the next few years whose shape they were not totally sure of.

First up was Knewco Inc’s Jan Velterop, whose pitch was a somewhat meandering treatise on the wonders and benefits of storing metadata in triples – something he kept saying he would explain later.  The Twitter #uksg09 channel was screaming “when is he going to tell us about triples?” and “what’s a triple?” whilst he was talking.  He eventually got there, but I’m not sure how many of the audience understood the massive benefits of storing and linking data in triples – benefits that we at Talis are fully aware of.  Coincidentally, for those who did get his message, I was posting about the launch of the Talis Connected Commons for open, free storage of data – in triples, in the Talis Platform.
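
For those still wondering what Velterop was getting at: a triple is simply a (subject, predicate, object) statement, and linking happens when one triple’s object is another’s subject.  A minimal sketch of storing and pattern-matching triples (a toy illustration of the idea, not Talis Platform code):

```python
# A toy in-memory triple store: facts are (subject, predicate, object)
# statements, and queries are patterns where None matches anything.

triples = {
    ("UKSG 2009", "held_in", "Torquay"),
    ("Jan Velterop", "spoke_at", "UKSG 2009"),
    ("Jan Velterop", "works_for", "Knewco"),
}

def match(pattern):
    """Yield stored triples matching an (s, p, o) pattern; None is a wildcard."""
    for triple in triples:
        if all(q is None or q == v for q, v in zip(pattern, triple)):
            yield triple

# Everything we know about Jan Velterop:
for fact in sorted(match(("Jan Velterop", None, None))):
    print(fact)
```

Because every fact has the same three-part shape, data from different sources merges by simply pooling the sets of triples – which is the benefit that is hard to convey in a conference keynote.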

Next up was Sir Timothy O’Shea from the University of Edinburgh, who talked about the many virtual things they are doing up in Scotland.  You can take your virtual sheep from your virtual farm to the virtual vet, and even on to a virtual post mortem.  His picture of the way information technology is playing its part in changing life at the university, apart from being a great sales pitch for it, left him predicting that this is only the early stage of a massive revolution.  As to where it is going to lead us in a few years, he was less clear.

Joseph Janes, of the University of Washington Information School, was one of those great speakers who dispense with any visual aids or prompts; he delivered a very entertaining 30 minutes comparing the entry into this new world of technology-enhanced information access with his experience as an American wandering around a British seaside town.  His message was that the next few years will feel very familiar on the surface, as we will recognise most of the components, but will actually be very different when you analyse them.  As an American he recognises cars, buses, adverts, and food, but in Britain they travel on the wrong side of the road, are different shapes, and are products he doesn’t recognise.  As we travel into an uncertain but exciting future, don’t be fooled by recognising a technology – watch how it is being used.

A great start to the day, which included a good break-out session from Huddersfield’s Dave Pattern.  He ended his review of OPACs, and predictions about the development of OPAC 2.0 and beyond, with a heads-up about my session today – which caused me to spend a couple of hours in the hotel bar, the only place with WiFi, tweaking my slides.  It would be much easier to follow Mr Janes’ example and deliver my message off the cuff without slides – not this time perhaps 😉

Looking forward to another good day – even if the sun seems to have deserted us.

Stephen Arnold – A conversation with the closing keynote speaker for Online Information 2008

Stephen E. Arnold’s career has led him to become a prolific writer, speaker, and expert on web technologies and their application, both inside the commercial enterprise and across the Internet.  He is best known for his work on search and his insights into the Google phenomenon.

He is presenting the keynote in the closing session of the Online Information Conference 2008, being held at Olympia in London from 2nd – 4th December, which will have a wide range of speakers of interest to information professionals from all sectors – libraries, academia, government, and commerce.

Stephen talks about his career so far and the themes for his presentation, explaining how the technologies that we have seen emerging over the last few years are ready for use inside the enterprise as well as maturing into delivering services across the web.  He also explores how the componentised nature of these technologies and the applications they power, enables them to be moulded to satisfy the needs of their users.

If Google Maps had built a bookstore – Zoomii, The _Real_ Online Bookstore

We are always on the lookout for new user interface paradigms – remember that jaw-dropping moment when you first clicked-and-dragged a Google map?

I had a similar experience when visiting Zoomii.  Basically a cool storefront for Amazon books, it is an exercise in creating a virtual bookstore where the books are displayed on [virtual] real shelves for you to browse.  Using the mouse to zoom in and out, and to drag left, right, up, and down, feels natural, and makes for a totally different experience.

On arrival at the site you are presented with a video to watch; it’s only short, so take a look to see how to make the most of your first experience at Zoomii Books.

From the Amazon Web Services blog, we are told that Zoomii was developed on, and runs on, Amazon EC2 virtual computer instances, and uses Amazon’s S3 storage.  So it is not only an innovative way of browsing through books, but also a great example of cloud computing.

Thinks… if they can build this to front the web services that Amazon provides, how easy would it be to produce a library version?  All we would need is book jackets for all our stock.


I wish I’d been there…

LibLime’s Evangelist Nicole Engard does an excellent job of not only capturing the content, but also the spirit of a conversation which opened the SLA Conference this weekend.

The conversation took the form of an interview of Dr. Vint Cerf, vice president and chief internet evangelist for Google, by television journalist Charlie Rose.

Go take a read.

As Nicole says in her conclusion:

When Stephen Abram came up after the talk was over, I have to agree with him, “O.M.G. don’t you feel smarter just being in the room with those two???”

Image of Dr Vint Cerf, from Nicole’s Flickr collection


The Wikimedia Foundation, the international non-profit organization behind some of the largest collaboratively-edited reference projects in the world, including Wikipedia, has a project that has been running for the last few months, named Wikicat.

Wikicat’s basic premise is to become the bibliographic catalog used by the Wikicite and WikiTextrose projects.  The Wikicite project recognizes that “A fact is only as reliable as the ability to source that fact, and the ability to weigh carefully that source”, and because of this the need to cite sources is recognized in the Wikipedia community standards.  WikiTextrose is a project to analyze relationships between texts, and is “inspired by long-established theories in the field of citation analysis”.

In simple terms, the Wikicat project is attempting to assemble a bibliographic database [yes, another one] of all the works cited in Wikimedia pages.

It is going to do this initially by harvesting records via Z39.50 from catalogues such as the Library of Congress, the National Library of Medicine, and others as they are added to the List of Wikicat OPAC Targets.  Then, when a citation that includes a recognizable identifier, such as an ISBN or LoC number, is included in a page, the authoritative bibliographic record can be used to create a ‘correct’ citation.  Eventually the act of citing a previously unknown [to Wikicat] work should automatically help to populate the Wikicat catalogue – participative cataloguing without needing to use the word folksonomy!
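
The “recognizable identifier” step implies validating what looks like an ISBN before trusting it as a lookup key.  A sketch of the standard ISBN-10 check-digit test (my own illustration of that idea, not Wikicat code):

```python
# Validate an ISBN-10 check digit before using it as a lookup key.
# The weighted sum of all ten digits must be divisible by 11 ('X' = 10).

def is_valid_isbn10(isbn):
    digits = isbn.replace("-", "").replace(" ", "").upper()
    if len(digits) != 10:
        return False
    total = 0
    for position, ch in enumerate(digits):
        if ch == "X" and position == 9:  # 'X' is only legal as the check digit
            value = 10
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += (10 - position) * value
    return total % 11 == 0

print(is_valid_isbn10("0-306-40615-2"))  # a well-formed ISBN-10
```

A harvester that screens citations this way can reject mistyped identifiers cheaply, before spending a Z39.50 round trip on them.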

Putting aside the tempting discussion about can a Z39.50 target be truly described as an OPAC, the thing that is different about this cataloguing project is not what they are attempting to achieve but how they are going about it. The Wikicat home page states:

It will be implemented as a Wikidata dataset using a datamodel design based upon IFLA‘s Functional Requirements for Bibliographic Records (FRBR) [1], the various ISBD standards, the Library of Congress‘s MARC 21 specification, the Anglo-American Cataloguing Rules, The Logical Structure of the Anglo-American Cataloguing Rules, and the International Committee for Documentation (CIDOC)‘s Conceptual Reference Model (CRM) [2].

So it isn’t just going to be a database of MARC records then!

Reading more, it is clear that once the initial objective of creating an automatic lookup of bibliographic records for citations has been achieved, this could become a far more general open participative cataloguing project, complete with its own cataloguing rules managed by the WikiProject Librarians.

Because they are starting with FRBR at the core of the project, the relationships between bibliographic entities could potentially be of the highest quality, authority, and granularity.  This could lead to many benefits for the bibliographic community, not least a wikiXisbn service [my name] that is ‘better’ than OCLC’s xISBN.

So does the world need yet another cooperative cataloguing initiative?  Working for an organisation that has had cooperative cataloguing in its DNA for over thirty-five years, I should be careful how I answer this!

Throwing care to the wind – yes, when you consider that all the other cooperative cataloguing initiatives [including, as of today, the one traditionally supported by Talis] are bounded by project, geographical, institutional, political, subject-area, commercial, exclusive-licensing, or high-financial-barrier-to-entry issues.  What is refreshing about Wikicat is that, like Wikipedia, the only barrier to entry, both for retrieving and adding data, is Internet connectivity.

Unlike Wikipedia, where some concerns about data quality are overridden by the value of its totally participative nature, the Wikicat team are clearly aware that the value of a bibliographic database is directly connected to the quality, consistency, and therefore authority of the data that it holds.  For this reason, the establishment of cataloguing rules, and training for potential editors overseen by the WikiProject Librarians, is already well detailed in the project operational stages roadmap.

I will be watching Wikicat with interest to see how it develops.


When community and technology combine


Tim Spalding over at LibraryThing provides a nice write-up of Richard Wallis’ LibraryThingThing extension to the Firefox web browser. A number of interesting points get raised in his post, and in the comments shared by members of LibraryThing’s community, and I thought it might be useful to offer a few thoughts in response.

Firstly, Tim writes;

“This is an exceedingly cool mashup, and a very good demonstration of all the components. To my mind, it would be more useful if it did less, telling you only if the book was in your library.”

With straightforward access to a raft of Platform APIs and a solid body of data on library holdings, it becomes feasible to slice and dice the results in whatever way makes most sense to the users themselves, rather than insisting upon any ‘one size fits all’ solution. I can, personally, think of a whole host of reasons why you might wish to view holdings from a user-selected set of libraries, and the real technology lying behind Richard’s simple browser extension is certainly capable of supporting these use cases.

I, for example, live in one place and work in another, 150 miles away. I’d like to see the library local to my home and the library local to my office. I have no interest (no offence intended!) in the libraries of North Lincolnshire, South Yorkshire, Derbyshire, Nottinghamshire, Leicestershire, and wherever else lies along my route.

Or what about the university student who wishes to see their own university library, the public library of their university’s city, the public library in the town where their parents live, and the public and university libraries in the city where their boy/girlfriend is studying?

We are also seeing a welcome (and long overdue) growth in interest around the notion of collaborative access arrangements between neighbouring libraries, which is ultimately to the benefit of all library users. Rather than conducting painfully slow and eye-wateringly expensive procurements for yet another monolithic dinosaur of a system (believe me, I’ve read some of the procurement documents!), technologies such as those behind Richard’s tool might usefully and easily be aligned with existing library systems, in order that a borrower is able to see holdings data from all the institutions participating in a particular scheme. Indeed, if nothing fancier were required, Richard’s existing code could easily be modified for deployment on top of an existing OPAC. Imagine looking for a book in the library of the university at which you are studying, finding that the book is on loan, and having a browser extension very similar to LibraryThingThing let you know that there’s a copy in the local public library…?

LibraryThingThing is a rapidly produced (one afternoon, essentially) illustration of a number of possibilities. A tool deployed to best advantage in day to day use would doubtless concentrate upon fulfilling a smaller set of purposes with greater focus. Given the open nature of the APIs behind LibraryThingThing, there’s nothing to stop any of you experimenting and producing the tool that does what you want it to. If you like the idea of wrapping the tool up for delivery as a Greasemonkey plugin or Firefox browser extension as Richard did, the source of the Greasemonkey plugin is also available for you to modify.

Tim goes on to add;

“How should LibraryThing tie into libraries?  As always, your thoughts are much appreciated.

We were, actually, planning on doing something like this, and even started the code. When we bring something live it will be a lot less technically elegant—good old server-side programming—but also not browser- and extension-dependent.”

Excellent! We’d (obviously) be keen to see LibraryThing extend in this way with the help of the underlying Platform technologies that made Richard’s browser extension so easy to produce. The Platform and its APIs are neither browser nor extension-dependent; Firefox and Greasemonkey simply provided an easy way for Richard to bring LibraryThing and some of our Platform components together without needing to get inside LibraryThing’s codeline. Tim would be able to use the same Platform components, but in a way that integrated them far more closely with LibraryThing without the need for particular browsers or extensions. That sounds like a win-win to me, and one we’d of course be happy to lend assistance to…

Now to the comments…

James Darlack writes;

“Perhaps rather than having LTThing look up only a specific library, it would be helpful if it could look up libraries within a preset distance of a zip code, similar to the way Open WorldCat works.”

Absolutely. Behind the scenes, one of the places that LibraryThingThing looks for data is to the Talis Directory. This can hold various details about libraries, including their postal address and their latitude and longitude. The Directory is an open repository of information about a growing body of libraries, and if your local library isn’t listed you are free (indeed hereby encouraged!) to add it. The information you contribute is governed by a flexible and permissive licence, and a growing body of Platform APIs ensure that the data can be consumed by a range of third party applications to provide the sort of capability that you would like to see. The open nature of the APIs ensures that you actually have a far greater degree of flexibility than Open WorldCat achieves by drawing you back to an Open WorldCat-controlled web page every time you use it, meaning that you could do all sorts of quite clever things with the location data if you had the will and the ability. Libraries within a preset distance of a zip code, but on a bus route? Libraries within a preset distance of a zip code, but close to a Starbucks? Libraries within a preset distance of a zip code, with convenient parking and a copy of the book on the shelf? These applications aren’t necessarily for Talis to build. We simply provide the tools to enable the community to do so.
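
Once a directory record carries latitude and longitude, “libraries within a preset distance” reduces to a great-circle distance filter.  A minimal sketch with made-up coordinates (an illustration of the idea, not the actual Directory API):

```python
# Filter a directory of libraries (made-up entries) to those within a
# radius of the user's location, using the haversine formula.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

libraries = [  # name, lat, lon -- illustrative values only
    ("Central Library", 51.507, -0.128),
    ("Riverside Branch", 51.515, -0.140),
    ("Far Away Library", 53.480, -2.242),
]

def nearby(user_lat, user_lon, radius_km):
    return [name for name, lat, lon in libraries
            if haversine_km(user_lat, user_lon, lat, lon) <= radius_km]

print(nearby(51.507, -0.128, 5.0))
```

The “on a bus route” or “near a Starbucks” variants are just further predicates in the same filter, which is why open location data plus open APIs invites this kind of third-party experimentation.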

Jonathan Cohen adds;

“When I click on the LTThing link, the only libraries it finds are British ones. Is Talis a British-only service, or is there some other reason?”

The Talis Platform, and the open and inclusive model that it represents, is a relatively recent activity for Talis and it will take time to work with the community on increasing the (already large) number of libraries represented. The holdings data visible to the Talis Platform today are predominantly those contributed to the Platform as part of library participation in a UK service we also run, called Talis Source.

The Platform itself is not restricted to the UK, and nor are the tools and applications built on top of it. If your local library is interested in contributing holdings data to the Platform (free of charge) so that it can be visible in LibraryThingThing and a growing number of other contexts, you should certainly encourage them to get in touch.

In investing in the Talis Platform, we at Talis are demonstrating our commitment to the continued development of libraries. We are also showing, quite explicitly, that library data has a value far beyond the walls of the library. Sites such as LibraryThing, complete with their significant (53,940 when I checked) communities of passionate bibliophiles offer one obvious place in which it makes sense to bring as many library-sourced resources as possible. Why make it hard for LibraryThing’s members to take the logical step into a convenient library? Why require those libraries to join some expensive club, just to make their holdings (or their very existence) visible?

Free participation. Easy contribution. Open APIs and a permissive license. It really does make sense, and every day it becomes harder to justify the monolithic technologies, closed clubs and exorbitant charges of the past with which libraries and their users continue to grapple today. There really is a better way. Come and see, then help build it.
