In this podcast, Sarah Bartlett talks with Jo Rowley, Head of Library Services, and Laurie Roberts, Liaison Librarian, at Queen Margaret University, Edinburgh. With only 4000 FTE students, the library at Queen Margaret University is working at a different scale from many other universities we hear about, and we discuss the impact its small size has on its ability to adopt agile ways of working. Laurie talks about her successes with Web 2.0 technologies such as RSS, blogs and iTunesU in this flexible and innovative culture. Queen Margaret University’s converged services add to the potential for open-ended experimentation by making technological expertise readily available to the library. Laurie also makes the useful point that the inexpensive nature of Web 2.0 technologies makes experimentation easier to justify. We also discuss library presence on the university’s corporate systems, such as the virtual learning environment. The institution gained university status in 2007, and in the same year, a Learning Resource Centre was opened as part of the university’s new campus. Jo gives us a detailed tour through the Learning Resource Centre, and we talk about how students respond to the facilities offered. We also discuss the contribution that the library makes to the Queen Margaret University Strategic Plan 2007-12, and the challenges that the library will unquestionably face in terms of funding, along with the rest of the sector.
As part of the “Shock of the New” strand at the UK Umbrella conference this year, Lucy Tedd from Aberystwyth University led a session entitled “Integrated library management systems: what we need”. Attendance at this session turned out to be very supplier-heavy, and I’m not sure that’s what she anticipated. I was moderately surprised too, but thinking about it afterwards, I felt that the lack of interest from practitioners was reflective of the growing irrelevance of the traditional library management system (or ILS if you’re North American) to the needs of the modern library, particularly in academia.
It’s not that the library technology landscape has stood still, of course. Lucy was able to list quite a few innovative products – from the now-established Aquabrowser to Talis’ own Aspire resource list tool – a great product that we’re all very proud of here. But taking one step back and looking at what the library has to deliver in 2009, the library technology marketplace as a whole is failing to keep up with the pace of change.
Lucy Tedd highlighted some of the key developments of this decade. Some of them, though – such as the consolidation of the library technology marketplace with mergers, acquisitions and the increasing intervention of venture capitalists in the businesses of existing suppliers – may be symptomatic of underlying trends rather than drivers.
I felt that to get a firmer grip on the fundamental shifts in our world, I had to refer back to a session I saw last month at the annual SCONUL conference, given by Marshall Breeding (a member of Talis’ Library 2.0 Gang). For the uninitiated, Marshall Breeding is an American library technology guru, author of an ongoing series of library technology guides. Where he wins out over other commentators such as Lucy Tedd is his ability to look behind headline trends, take them apart, examine the implications and project them forward. So although both Tedd and Breeding identify industry consolidation as a key trend, Breeding will go on to alert us to the disruptive impact that this has on product development, and the adverse effect this has on the lead time that libraries have to plan for a product enhancement.
Marshall Breeding hears a lot of frustration with LMS products and vendors, and is adamant that systems are not keeping up with the pace of change in libraries. Innovation, then, is falling below expectations, and Marshall reports that many US libraries are unhappy with the current state of affairs. He admitted that he wasn’t so sure about UK libraries, but following the group activity at the end of Lucy Tedd’s session, I’m quite clear that the mood here is similar to that of the US. In my group there was one librarian from the Open University and one from the University of Hertfordshire. Each group was asked to identify its most pressing requirement of the LMS. Both librarians agreed that the inadequacy of the LMS in managing e-resources was the biggest problem in an era in which the issuing of books is no longer the primary activity.
Marshall Breeding described the conventional LMS as untenable, now that a whole series of products required to manage fundamental library processes – such as ERM systems and knowledgebases – are located outside the LMS. In the electronic era, circulation becomes fulfilment, cataloguing is no longer MARC-centred, for example. So as the traditional modules of the LMS become less important, we need to think more in terms of SOA (Service-Oriented Architecture) – dividing functionality into small chunks that can be fitted together for multifarious purposes (a shift that my colleague Richard Wallis identified back in 2007 on this blog). This is very much the thinking of the OLE (Open Library Environment) Project, of which Marshall Breeding is a proponent.
But it’s not just a back-office problem, of course. The library OPAC, traditionally another module of the LMS, also suffers from the same problem, in failing to reflect the eJournals and digital objects that libraries spend so much money on. Breeding did identify further issues with library OPACs, highlighting their clunky interfaces, poor eCommerce facilities, and more worryingly, relatively weak search engines and poor relevancy ranking.
Open Source has, in the context of these difficulties, generated a lot of interest, though more in the US at present. However, Breeding pointed out that Open Source offerings currently rank middle to low in terms of customer satisfaction, and the only libraries that are interested are the ones that are already doing it. There is no groundswell of interest, despite the pockets of evangelistic fervour.
Marshall Breeding also turned his attention to Web 2.0 tools, and argued persuasively against the tendency to adopt disparate tools without a broader strategy in place, which has the effect of “jettisoning library users away from our websites”. Instead, he says, Web 2.0 capabilities need to be built into the guts of our systems. I’m assuming here that he doesn’t mean library vendors reinventing social networking tools in a creepy treehouse kind of way, and that instead he’s advocating seamless integration with applications such as the VLE and Web 2.0 tools such as Twitter. Incidentally, Richard Wallis has recently been demonstrating a Juice extension enabling integration between Twitter and the OPAC.
Breeding looks forward to a future in which the library can offer a single point of access to the inside of all the eJournals that the library subscribes to. Scale is not the issue, he argues, and cites OCLC’s Lorcan Dempsey as pointing out that the whole of WorldCat will now fit on an iPod. Instead we should be looking at what the world outside the library is doing – searching the deep content directly, and identifying and examining the tools that people are using to do this. In this way, it becomes clear that the likes of Google Scholar, Amazon, Waterstones and ask.com are the competitors of the library in the 21st century, and it is incumbent upon the vendor community to help libraries with that gargantuan challenge if they are to survive.
Following on from OCLC’s recent Mashathon, Dave Pattern’s Mashed Library UK 2009, and the imminent publication of the Library Mashups book edited by Nicole Engard, The Library 2.0 Gang turn their attention to the Library Mashup.
Tallin Bingham from SirsiDynix, Marshall Breeding of Library Technology Guides, LibLime’s Nicole Engard, and Google’s Frances Haugen dip into this topic for the July show. It is soon clear that successful mashups are all about openly publishing data in a reliable, easy form via simple APIs. Library mashups are not just about bibliographic data. Usage data, statistical data, and anonymised patron data are all valuable library sources for mashups.
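To make the idea concrete, here is a toy sketch in Python. The two dictionaries stand in for parsed responses from two hypothetical library APIs (the ISBNs, titles and loan counts are invented for illustration); the mashup is simply a join across the two sources in the interface layer.

```python
# Toy mashup: combine bibliographic records with circulation (usage) data.
# Both dicts simulate parsed responses from two hypothetical library APIs.

bib_api = {
    "9780596516499": {"title": "Library Mashups", "author": "Engard, N. (ed.)"},
    "9780131103627": {"title": "The C Programming Language", "author": "Kernighan & Ritchie"},
}

usage_api = {
    "9780596516499": {"loans": 42},
}

def mashup(isbn):
    """Join the two sources on ISBN, tolerating a missing usage record."""
    record = dict(bib_api.get(isbn, {}))
    record["loans"] = usage_api.get(isbn, {}).get("loans", 0)
    return record

print(mashup("9780596516499"))
# → {'title': 'Library Mashups', 'author': 'Engard, N. (ed.)', 'loans': 42}
```

The point is that neither source needs to know about the other: as long as each publishes its data openly with a shared key, anyone can combine them.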
As with many other technology trends, libraries are going to have to move quickly to keep up with and take advantage of mashups.
Check out the July Library 2.0 Gang Show.
Competition! - Listening to the show should inspire you to enter the Library 2.0 Gang Mashup Idea competition. Send in your idea for a library mashup. It can be as simple or complex as you like; the only restriction is that it must include library data or functionality somewhere within it. The best three, as judged by Nicole Engard and myself, will each receive a copy of the Library Mashups book she has edited. The closing date is August 31st; send your entries to email@example.com.
Following on from the success of the first UK Mashed Library event organised in London last year by Owen Stephens, Dave Pattern and his colleagues from the University of Huddersfield are organising another one on Tuesday July 7th. Talis are sponsoring the sustenance that should help keep the ideas and mashups flowing through the day.
Appropriately entitled Mash Oop North! [best attempted with a Yorkshire accent], it promises to be another great event to get yourself registered for.
Dave is hoping to attract more than just the usual library-techno-geek suspects. The day is also for the non-technical with ideas about what the technologies could be doing for libraries and the communities they serve.
Mashed Library is about "bringing together interested people and doing interesting stuff with libraries and technology".
Register now – you know you want to!
Back in December I was very critical of the Library of Congress for forcing the take down of the Linked Data service at lcsh.info. LoC employee, and Talking with Talis interviewee, Ed Summers had created a powerful and useful demonstration of how applying Linked Data principles to a LoC dataset such as the Library of Congress Subject Headings could deliver an open asset that adds value to other systems. Very soon after its initial release, another Talking with Talis interviewee, Martin Malmsten from the Royal Library of Sweden, made use of the links to the LCSH data. Ed was asked to take the service down, ahead of the LoC releasing their own equivalent in the future.
I still wonder at the LoC approach to this, but that is all water under the bridge now, as they have now launched their service, under the snappy title of “Authorities & Vocabularies” at http://id.loc.gov/authorities/.
The Library of Congress Authorities and Vocabularies service enables both humans and machines to programmatically access authority data at the Library of Congress via URIs.
The first release under this banner is the aforementioned Library of Congress Subject Headings.
As well as delivering access to the information via a Linked Data service, they also provide a search interface, and a ‘visualization’ via which you can see the relationship between terms, both broader and narrower, that are held in the data.
To quote Jonathan Rochkind “id.loc.gov is AWESOME”:
Not only is it the first (so far as I know) online free search and browse of LCSH (with in fact a BETTER interface than the proprietary for-pay online alternative I’m aware of).
But it also gives you access to the data itself via BOTH a bulk download AND some limited machine-readable APIs. (RSS feeds for a simple keyword query; easy lookup of metadata about a known-item LCSH term, when you know the authority number; I don’t think there’s a SPARQL endpoint? Yet?).
On the surface, to those not yet bought in to the potential of Linked Data, and especially Linked Open Data, this may seem like an interesting but not necessarily massive leap forward. I believe that what underpins the fairly simple functional user interface they provide will gradually become core to bibliographic data becoming a first-class citizen in the web of data.
Overnight, the URI ‘http://id.loc.gov/authorities/sh85042531’ has become the globally available, machine- and human-readable, reliable source for the description of the subject heading ‘Elephants’, containing links to its related terms (in a way that both machines and humans can navigate). This means that system developers and integrators can rely upon that link to represent a concept, not necessarily the way they want to [locally] describe it. This should make it simple for disparate systems and services to share concepts and therefore understanding – one of the basic principles behind the Semantic Web.
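A minimal sketch of what dereferencing that URI looks like from code. The Accept header asks the server for RDF rather than the HTML page; that the service honours content negotiation in exactly this way is an assumption here, and the request is only constructed, not sent.

```python
# Sketch: prepare to dereference the LCSH URI for 'Elephants' as Linked Data.
# The Accept header requests RDF/XML instead of the human-facing HTML page;
# the content-negotiation behaviour of id.loc.gov is assumed, not verified here.
import urllib.request

uri = "http://id.loc.gov/authorities/sh85042531"
req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})

# The same URI serves humans (HTML visualization) and machines (RDF):
print(req.get_full_url())
print(req.get_header("Accept"))
# urllib.request.urlopen(req) would fetch the description; not executed here.
```

The design choice worth noticing is that there is one identifier for the concept, and the representation you get back depends on who (or what) is asking.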
This move by the LoC has two aspects to it that should make it a success. The first one is technical. Adopting the approach, standards, and conventions promoted by the Linked Data community ensures a ready-made developer community to use and spread the word about it. The second is openness. Anyone and everyone will not have to think “is it OK to use this stuff?” before taking advantage of this valuable asset. Many in the bibliographic community, who seem to spend far too much time on licensing and logins, should watch and learn from this.
A bit of a bumpy ride to get here, but nevertheless this is a great initiative from the LoC that should be welcomed, and one that I hope they and many others will build upon in many ways. Bring on the innovation that it will encourage.
Image from the Library of Congress Flickr photostream.
Have you ever looked at the OPAC from another library that sports links to WorldCat, or Copac, or Amazon, or Google Book Search, or Del.icio.us, or a shelf mapping program, an author video, or something similar and thought “I wish I could have that on our interface!”? Have you attended a presentation about next generation OPACs, heard the presenter say “… and I added a link to an external service”, and wished you had them on your library staff to do cool things like that for you?
Even in the so called library-geek community, where they know how to do these kind of things, great ideas for extending their interfaces are only copied between them, each implementing them in their own way for their own application. Because, until now, there has been no easy way to share the great innovation demonstrated by the few, we are seeing a massive waste of what could benefit the many.
The Juice Project is an open source initiative, which I launched at the recent Code4lib conference, with the specific objectives of making it easy to create extensions for web interfaces such as OPACs, and then making it even easier to share those extensions in an open community of those who want to enhance their interfaces but do not have the skill or experience to do so.
The design of the extension framework that is Juice separates the extension itself from the code that interfaces to a particular web application. The result is that an extension created to be used on, say, a VuFind OPAC can be re-used to extend a Talis, or a Horizon, or any other OPAC, or indeed any other suitable interface.
Obviously if you are going to make changes to your interface, you need some ability to access and change the mark-up that creates the web pages. Many libraries have staff that are capable and confident enough to make a simple change to an interface – adding a link to another site in the footer, changing a bit of text on the home page etc. Juice is targeted at exactly those staff. On the Juice Project site there are simple ‘How-to’ documents that step you through adding the couple of lines of code to introduce Juice into your interface, and then through copying & pasting examples into your version of Juice to add shared extensions.
Juice is already enhancing live library interfaces; for instance we are using it at Talis to introduce Google Analytics site usage monitoring into our Talis Prism OPAC tenancies, as this Prism Blog post highlights.
Juice is an open source project that I have initiated, which is hosted on Google Code. Talis are supporting it by letting me contribute my code and time to kick-start it and play an active part in it. This kind of initiative, which will benefit all, can only be really successful if it is owned by the community that will use and enhance it.
So, calling all those that want to add value to library and other web interfaces, take a look at and join the Juice Project. It is early days and we haven’t as yet got many interface types identified and supportable in Juice, but the more that join in and share what they know the sooner we will be able to share the innovation between all libraries.
Once you have had a browse around the Juice site, and maybe dipped your toe in to using it, I would love to hear your thoughts either in the comments on this blog, or in the Juice Project Discussion forum.
Glorious sunshine greeted the opening of the first day of UKSG 2009 in Torquay yesterday. The stroll along the seafront from the conference hotel (Grand in name and all facilities, except Internet access – £1/minute for dialup indeed!) was in delightful sharp contrast to the often depressing plane and taxi rides to downtown conference centres.
The seaside theme was continued with the bright conference bags. Someone had obviously got hold of a job lot of old deckchair canvas. 700 plus academic librarians and publishers and supplier representatives settled down, in the auditorium of the Riviera Centre, to hear about the future of their world.
The first keynote speakers were very different in topic and delivery, but all three left you with the impression of change coming over the next few years, the shape of which they were not totally sure.
First up was Knewco Inc’s Jan Velterop, whose pitch was a somewhat meandering treatise on the wonders and benefits of storing metadata in triples – something he kept saying he would explain later. The Twitter #uksg09 channel was screaming “when is he going to tell us about triples” and “what’s a triple” whilst he was talking. He eventually got there, but I’m not sure how many of the audience understood the massive benefits of storing and linking data in triples – benefits that we at Talis are fully aware of. Coincidentally, for those who did get his message, I was posting about the launch of the Talis Connected Commons for open free storage of data – in triples, in the Talis Platform.
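For anyone still wondering what a triple actually is, here is a minimal sketch: a statement of the form (subject, predicate, object), with a store being nothing more than a collection of such statements you can query by pattern. The identifiers below are toy values loosely modelled on the subject-heading world, not real LCSH output.

```python
# A triple is just (subject, predicate, object); a store is a set of them.
# The lcsh:/skos: identifiers below are illustrative, not real service data.
triples = {
    ("lcsh:Elephants", "skos:broader", "lcsh:Mammals"),
    ("lcsh:Elephants", "skos:narrower", "lcsh:African_elephant"),
    ("lcsh:Mammals", "skos:broader", "lcsh:Vertebrates"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard (SPARQL-style)."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# What is broader than Elephants?
print(match(s="lcsh:Elephants", p="skos:broader"))
# → [('lcsh:Elephants', 'skos:broader', 'lcsh:Mammals')]
```

Because every statement has the same three-part shape, data from different sources can be poured into one store and queried together – which is where the linking benefit comes from.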
Next up was Sir Timothy O’Shea from the University of Edinburgh, who talked about the many virtual things they are doing up in Scotland. You can take your virtual sheep from your virtual farm to the virtual vet, and even on to a virtual post mortem. His picture of the way information technology is playing its part in changing life at the university, apart from being a great sales pitch for it, left him predicting that this was only the early stages of a massive revolution. As to where it was going to lead us in a few years, he was less clear.
Joseph Janes, of the University of Washington Information School, was one of those great speakers who dispense with any visual aids or prompts; he delivered a very entertaining 30 minutes comparing the entry into this new world of technology-enhanced information access with his experience as an American wandering around a British seaside town. His message was that we should expect the next few years to feel very similar on the surface, as we will recognise most of the components, but to actually be very different when you analyse them. As an American he recognises cars, buses, adverts, and food, but in Britain they travel on the wrong side of the road, are different shapes, and are products he doesn’t recognise. As we travel into an uncertain but exciting future, don’t be fooled by recognising a technology – watch how it is being used.
A great start to the day, which included a good break-out session from Huddersfield’s Dave Pattern. He ended his review of OPACs and predictions about the development of OPAC 2.0 and beyond with a heads-up about my session today, which caused me to spend a couple of hours in the hotel bar, the only place with WiFi, tweaking my slides. It would be much easier to follow Mr Janes’ example and deliver my message off the cuff without slides – but not this time, perhaps.
Looking forward to another good day – even if the sun seems to have deserted us.
Catching the next wave was the title of my opening track keynote presentation in the “Catching the semantic wave – or down in a sea of content?” session of the “Order out of chaos – creating structure in our information universe” track at the Online Information Conference 2008. Presentation below from Slideshare.
This is a very well attended track. Standing room only in most of the sessions, great interest in the Semantic Web, Web 2.0, and associated concepts and technologies. From a lightly attended single session last year, this topic has grown into an oversubscribed second track this year. Having spent some time bending the ear of conference chair Adrian Dale last year about what was upcoming, I can wear my virtual ‘I told you so’ hat with pride this year.
My job as keynote was to provide a broad introduction to, and context for, things like Linked Open Data, the Semantic Web, Cloud Computing and clouds of data, setting the scene for the day. Hopefully I was successful in my objective; the number of attendees is definitely a measure of the interest in the topics covered.
Considering that a large proportion of the attendees of the conference are librarians, it is gratifying to note that they are already looking beyond the current Web 2.0 meme towards what will be washing over us next. Thinking about this, it is hardly surprising. The next wave is far more associated with data, metadata, linking and recommending than the Web 2.0 meme of social networking, blogging and wiki-ing. Dare I say it out loud, but as a generalisation librarians appear to be far more comfortable with the concerns of data than with social interaction.
I get the feeling that these concepts are going to get adopted in libraries far quicker than we would expect once they start to gain momentum. This would be helped if we could get past some of the terminology confusion. The main culprit in this confusion is the distinction between semantics/semantic analysis and the Semantic Web. The web of data, as against [or, to be more correct, in addition to] the current web of documents, is how I see the Semantic Web. A great example of the web of data in action is the Linking Open Data project.
Edinburgh’s Scottish Storytelling Centre was a great venue for the 3rd SLIC FE Conference on Friday, well organised by Catherine Kearney and chaired by Charles Sweeney.
With such topics as LMS, Web 2.0 and IPR in digital repositories on the agenda, you might think the day might have been disjointed. Far from it. The day hung together very well, with yours truly setting the context of the wider waves of technology and innovation that have washed, and will continue to wash, across the wider web, influencing the world of academia and libraries. Although this is being seen in the library systems world with the emergence of so-called Next Generation OPACs, is this just doing the same old thing, but better? We need to extend the user interface and the underlying systems and data to integrate with the systems and organisations around us. [Presentation available on SlideShare]
The theme continued with Phil Bradley taking us through Web 2.0 usage and techniques applicable to everyone in general and libraries in particular. Next on the bill was Charles Duncan, Intrallect CEO, taking us through the way repositories should be integrated into institutions and the wider national and international landscape – Web Services are the key.
An afternoon of presentations: NewsFilm Online – a fascinating resource introduced by Vivienne Carr from EDINA; Intellectual Property Rights issues as applied to the output of, and material used by, e-learning; drawn to a close by the inimitable Dave Pattern, sharing his experience at Huddersfield University applying Web 2.0 principles to their OPAC.
The whole day was drawn to a close with a JISC-sponsored round table discussion, which I was invited to join. It served to reinforce my impression that libraries and educationalists have, over the last few years, found themselves in the unusual position of striving to catch up with the rest of the world.
Traditionally they have been in the role of helping to introduce new technologies & techniques to their students and the wider world. For a whole generation the OPAC was their first interaction with publicly accessible computing. With the web and now so called Web 2.0 the boot is on the other foot. We are in danger of making too big a deal out of it – many of our users are already more in tune with the things we are worrying about how to introduce.
It’s a little disconcerting when your own words from months ago are quoted back at you from a distance. That’s the trouble with the blogosphere, it is so easy for connections to what you have said to be linked in to the conversation in ways you never expected. Trouble? – No it is one of its major benefits – disconcerting or not!
Recently Mark Dahl quoted something I said a while back. I was discussing how we must stop developing destination applications and start delivering the information and functionality that users want to where they are working – for instance inside the Learning Management System/eLearning System/VLE (or whatever you call them down your way) – and apparently I boasted that the new Reading List (Course Reserves) application Talis are working on "doesn’t even have a user interface". The reason I gave, at the time, was that students don’t need yet another destination to go to in order to find the information they need – so why build one?
Providing the functionality to link resources to courses in a way that adds value well beyond the simple attempts to be found in ILS/LMS systems, and their course management system counterparts, is an obvious development. What is less obvious, at first, is that you don’t need to build a user interface for it – the student is already in a library system, or a learning management system, or a portal, or Facebook, or whatever – so why can we not deliver the functionality directly into that environment? Well, today the answer to that question is that those applications are not very good at embedding Web Services directly into their interfaces.
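A hypothetical sketch of the idea: instead of a destination site, the reading-list functionality is exposed as a small service returning JSON that a VLE, portal or Facebook app could render in place. The course code, titles and payload shape below are all invented for illustration; this is not the actual API of the Talis product discussed above.

```python
# Hypothetical sketch: a reading-list lookup exposed as a JSON payload that a
# host application (VLE, portal, etc.) could embed in its own interface.
# Course codes, titles and payload shape are invented for illustration only.
import json

READING_LISTS = {
    "HIST101": ["The Historian's Craft", "What Is History?"],
}

def reading_list_service(course_code):
    """Return a JSON payload that a host application could render in place."""
    titles = READING_LISTS.get(course_code, [])
    return json.dumps({"course": course_code, "items": titles})

print(reading_list_service("HIST101"))
```

The point of the design is that there is no screen of its own here at all: the host system asks for the data and presents it wherever the student already is.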
This is why Talis development team member Julian Higman (featured in the February issue of the Library Platform News) was very quick to comment on Mark’s post "I’m working on the reading list application at Talis that you mention, and it certainly does have a user interface!" – Having calmed Julian down (I jest), we both agreed that the fact it was necessary to build a user interface for this product is symptomatic of the inability of most applications, in the University domain, to consume web services and usefully integrate their functionality into a user’s workflow.
As I commented previously, the online university today is a collection of many silos that the user [student, professor, researcher] is expected to know how to navigate, let alone be able to identify the connections between data in those silos. I expect that this comes as a bit of a shock to the average new student: "I thought I had come to this university to learn about my chosen subject, not to spend a significant amount of time and effort becoming an expert in the use of a multiplicity of different applications and services that are supposedly here to help me."
Peter Brantley was on the money for Mark in his post, about building a Flickr-like system for academia, when he said "However, what will make the application ultimately successful is the availability of open services that permit re-use: mashups that encourage integration with other services and content."
I heartily agree, but only as an interim step. Most of today’s systems are not integrated in any way, so mashing their outputs, exposed via APIs, together in a Web 2.0 way will be a major step forward. Doing this still misses the underlying links that are usually only apparent as connections in the eye of the user, if they happen to appear on the screen together. When we can follow those links between data across silos we will remove the false barriers, imposed by technology thus far, and expose our users to the world of linked data.
Below is a diagram I am working on to hopefully help people visualise what I mean. Utilising Web 2.0 technologies we bring together [mashup] the output from various application silos into one interface. This is a great improvement over Web 1.0, where each application would present its data on its own independent, and different, screen. Utilising Web 3.0 [Semantic Web] technologies, links between data in separate silos can be identified and presented as connections and relationships in a single Web of Data – much closer to a representation of the real world.
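The same contrast can be sketched in code, with invented data and URIs. In the mashup style, the interface juxtaposes the outputs of two silos; in the linked-data style, a shared subject URI lets software follow the connection between silos itself.

```python
# Toy contrast with invented data and example.org URIs. A library silo and an
# e-learning silo each hold their own records, but both point at the same
# subject URI – so software can traverse the link without human eyeballing.

library_silo = {
    "book1": {"title": "On the Origin of Species",
              "subject": "http://example.org/concept/evolution"},
}
elearning_silo = {
    "course7": {"name": "Biology 101",
                "subject": "http://example.org/concept/evolution"},
}

def courses_about(book_id):
    """Follow the shared subject URI from a book to related courses."""
    uri = library_silo[book_id]["subject"]
    return [c["name"] for c in elearning_silo.values() if c["subject"] == uri]

print(courses_about("book1"))
# → ['Biology 101']
```

In the mashup world that connection only exists if the two records happen to appear on screen together; in the web of data it is a link any application can follow.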
I would be interested in feedback on this diagram. Does it help, or does it make things more confusing?
Megaphone picture published by Paul Keleher on Flickr.