Panlibus Blog

Archive for the 'Users and usage' Category

Remember OPAC Suckiness

It was all the rage three years or so ago. Karen Schneider even did a three-part series on ALA TechSource exploring How OPACs Suck, in which she listed elements of OPAC Suckitude and desirable features in a non-sucky OPAC. Karen was not alone, as this 2006 post from Jennifer Macaulay reminds us.

What brought this to mind, you may wonder? I was preparing content for a presentation when I was struck by the massive contrast between two sites I was taking screenshots of. The first is a classic site which does better than any other to show how libraries were being left behind by the rest of the Web. "If Amazon sucked like our old OPAC" was a humorous facade onto Amazon.com web services, built by David Walker of California State University, to make that well known Internet retailer look like it had been styled by a well known library system supplier. Until recently it was a fully working OPAC-style interface onto Amazon. Unfortunately I think recent changes to Amazon web services may have broken it beyond the first couple of clicks. (If you are listening David, fancy trying to fix it?)

I was contrasting this with the impressive recently launched interface for the Royal Scottish Academy of Music and Drama (RSAMD). Comparing these two drives home just how far OPACs (if that is what we should still be calling them), and more importantly the aspirations of the librarians responsible for them, have come in the last few years.

Are we there yet? Checking out some of Karen's 2006 list, you can tick off many items that are now standard in so-called next generation OPACs, such as relevance ranking, spelling suggestions, and faceted browsing, so we are well on the way. As the RSAMD interface shows, it is now possible for a library search interface to hold its head high amongst some of the best of the web.

There is still progress to be made, but should we still be concentrating on a destination site that puts the library's catalogue online, or should we be looking more broadly at how the web presence of the whole library can be an integral part of the web? I think the answer is both. Stunning catalogue interfaces should become the norm, not the exception to be admired and pointed at. Meanwhile, delivering all library services seamlessly as part of our users' web experience should be our next goal.

I wonder what contrasts I'll be reflecting upon in another three years…

Juice up your OPAC

Have you ever looked at the OPAC from another library that sports links to WorldCat, or Copac, or Amazon, or Google Book Search, or Del.icio.us, or a shelf mapping program, an author video, or something similar and thought, "I wish I could have that on our interface!"? Have you attended a presentation about next generation OPACs, heard the presenter say "… and I added a link to an external service", and wished you had them on your library staff to do cool things like that for you?

Even in the so-called library-geek community, where they know how to do these kinds of things, great ideas for extending interfaces are only copied between individuals, each implementing them in their own way for their own application. Because, until now, there has been no easy way to share the innovation demonstrated by the few, we are seeing a massive waste of what could benefit the many.

The Juice Project is an open source initiative, which I launched at the recent Code4lib conference, with two specific objectives: to make it easy to create extensions for web interfaces such as OPACs, and then to make it even easier to share those extensions in an open community of those who want to enhance their interfaces but do not have the skill or experience to do so.

Open and easy are two key facets of the approach used for this project. JavaScript code gurus may find the way Juice is implemented a little over-complex, but it is that approach which should make it simple for the non-gurus to adopt and use.

The design of the extension framework that is Juice separates the extension itself from the code that interfaces to a particular web application. The result is that an extension created for, say, a VuFind OPAC can be reused to extend a Talis, a Horizon, or any other OPAC, or indeed any other suitable interface.
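
To make that separation concrete, here is a minimal sketch of the pattern. The names and structure below are mine, invented for this post, not Juice's actual API:

```javascript
// Adapter: the only part that knows a particular OPAC's markup.
// You would write (or reuse) one of these per interface type.
const vufindAdapter = {
  getIsbn: (doc) => {
    const el = doc.querySelector('.isbn'); // wherever this OPAC exposes it
    return el ? el.textContent.trim() : null;
  }
};

// Extension: written once, reusable over any adapter.
function googleBooksLink(adapter, doc) {
  const isbn = adapter.getIsbn(doc);
  if (!isbn) return;
  const a = doc.createElement('a');
  a.href = 'https://books.google.com/books?vid=ISBN' + isbn;
  a.textContent = 'Find on Google Book Search';
  doc.body.appendChild(a);
}

// Swap in a Talis or Horizon adapter and the extension itself is untouched.
googleBooksLink(vufindAdapter, document);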

Obviously if you are going to make changes to your interface, you need some ability to access and change the mark-up that creates the web pages. Many libraries have staff who are capable and confident enough to make a simple change to an interface – adding a link to another site in the footer, changing a bit of text on the home page, etc. Juice is targeted at exactly those staff. On the Juice Project site there are simple 'How-to' documents that step you through adding the couple of lines of code to introduce Juice into your interface, and then how to copy and paste examples into your version of Juice to add shared extensions.
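
In spirit, that first step amounts to nothing more than the following. The file name and registration call here are placeholders of mine; the How-to documents give the real lines:

```html
<!-- Step 1: pull in the Juice framework (file name illustrative only). -->
<script type="text/javascript" src="juice.js"></script>

<!-- Step 2: paste in a shared extension from the community. -->
<script type="text/javascript">
  juice.register(worldcatLinkExtension); // hypothetical registration call
</script>
```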

Juice is already enhancing live library interfaces; for instance, we are using it at Talis to introduce Google Analytics site usage monitoring into our Talis Prism OPAC tenancies, as this Prism Blog post highlights.
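
For the curious, what such an extension ends up injecting is essentially Google's standard asynchronous tracking snippet; only the account ID (a placeholder below) is yours:

```javascript
// Google's published ga.js boilerplate, as an extension might inject it.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-X']); // placeholder account ID
_gaq.push(['_trackPageview']);

(function () {
  var ga = document.createElement('script');
  ga.type = 'text/javascript';
  ga.async = true;
  ga.src = ('https:' === document.location.protocol
    ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);
})();
```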

Juice is an open source project that I have initiated, which is hosted on Google Code. Talis are supporting it by letting me contribute my code and time to kick-start it, and play an active part in it. This kind of initiative, which will benefit all, can only be really successful if it is owned by the community that will use and enhance it.

So, calling all those who want to add value to library and other web interfaces: take a look at, and join, the Juice Project. It is early days and we haven't yet got many interface types identified and supportable in Juice, but the more who join in and share what they know, the sooner we will be able to share the innovation across all libraries.

Once you have had a browse around the Juice site, and maybe dipped your toe into using it, I would love to hear your thoughts, either in the comments on this blog or in the Juice Project Discussion forum.

Dave Pattern challenges libraries to open their goldmine of data

The simple title of Dave’s recent blog post ‘Free book usage data from the University of Huddersfield’ hides the significance of what he is announcing.

I’m very proud to announce that Library Services at the University of Huddersfield has just done something that would have perhaps been unthinkable a few years ago: we’ve just released a major portion of our book circulation and recommendation data under an Open Data Commons/CC0 licence. In total, there’s data for over 80,000 titles derived from a pool of just under 3 million circulation transactions spanning a 13 year period.

Thirteen years' worth of library circulation data opened up for anyone to use – he is right about it being unthinkable a few years ago. I suggest that for many it is probably still unthinkable now; to them I would ask the question: why not?

In isolation the University of Huddersfield's data may only be of limited use, but if others did the same, the potential for trend analysis, and the ability to offer recommendations and who-borrowed-this-borrowed-that services, could be significant.
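
To see why this kind of data is so valuable, consider how little code a basic who-borrowed-this-borrowed-that recommender needs once the transactions are open. A toy sketch, with field names invented rather than taken from Dave's actual release:

```javascript
// Raw circulation transactions: who borrowed what (identifiers invented).
const transactions = [
  { borrower: 'u1', title: 't-aquinas' },
  { borrower: 'u1', title: 't-ockham' },
  { borrower: 'u2', title: 't-aquinas' },
  { borrower: 'u2', title: 't-ockham' },
  { borrower: 'u2', title: 't-scotus' },
];

function recommendationsFor(title, txns) {
  // Everyone who borrowed the title in question...
  const borrowers = new Set(
    txns.filter(t => t.title === title).map(t => t.borrower));
  // ...and a co-occurrence count of everything else they borrowed.
  const counts = {};
  for (const t of txns) {
    if (borrowers.has(t.borrower) && t.title !== title) {
      counts[t.title] = (counts[t.title] || 0) + 1;
    }
  }
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}

console.log(recommendationsFor('t-aquinas', transactions));
// => [ ['t-ockham', 2], ['t-scotus', 1] ]
```

Pool data from many libraries and those counts stop reflecting one campus's quirks and start reflecting genuine reading patterns.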

If you have 14 minutes to spend I would recommend viewing Dave's slidecast from the recent TILE project meeting, where he announced this, so you can see how he uses this data to add value to the Huddersfield University search experience.

Patrick Murray-John picked up on Dave's announcement and within a couple of days had produced an RDF-based view of this data – I recommend you download the Tabulator Firefox plug-in to help you navigate his data.

Patrick was alerted to Dave's announcement by Tony Hirst, who amplified Dave's challenge: "DON'T YOU DARE NOT DO THIS…"

As Dave puts it, your library is sitting on a goldmine of useful data that should be mined (and refined by sharing with that of other libraries).  A hat tip to Dave for doing this, and another one for using a sensible open licence to do it with.

Picture published by ToOliver2 on Flickr

Clay Shirky opens Online Information Conference 2008

Well, actually, he was preceded by Conference Chair Adrian Dale, who popped up this fascinating counter on the screen. Although a simulation, it drives home just how much information is being created.
(Counter: bytes created – click for the animated version)

Against this background Clay then presented on the theme from his latest excellent book Here Comes Everybody, that also formed the starting point for the Talking with Talis podcast I recorded with him for the Online Information Conference series.

Don't get me wrong, Clay's book is good, but you can't beat having him stood up there telling you about it. By using examples – such as readers of a blog covering political unrest in Thailand who got upset when its author then blogged about her new pink mobile phone, or flash-mobs being arrested in Belarus for 'eating ice cream', and many others – he showed how the cost of publishing has fallen to zero for most people, which means we can all do it, the ramifications of which are enormous.

For a more detailed commentary on his presentation check out Ewan McIntosh's post, which appeared whilst Clay was still answering questions from the stage – a feat I could never attempt to compete with!

It’s all about links. The Future of Bibliographic Control


So, the Library of Congress Working Group on The Future of Bibliographic Control released their draft for public comment last week.

The draft contains five high-level recommendations, broken down into many smaller, detailed recommendations. The amount of detail in the 41-page report is impressive, with some very focussed thoughts and very clear statements about what to do next, both for the LC and the wider community.

As well as reading the draft report you can watch the working group present their findings to LC, recorded a few weeks ago.

1. INCREASE THE EFFICIENCY OF BIBLIOGRAPHIC PRODUCTION

The first recommendation is about making the production, gathering, editing and flow of data as efficient as possible. The report makes recommendations ranging from mandating that publishers supply metadata as part of the CIP process (which LC already do) to ensuring that purchased datasets are not embargoed against sharing (hard to do given the business models of commercial data suppliers).

OCLC gets a special mention, saying that "OCLC's business model has a real impact on the distributed system of bibliographic data exchange." In our opinion that's somewhat understating the case, certainly for those who aren't members of the OCLC club. I've met many of the OCLC folks and they're doing some great things, but they have a business model established before the net, and having to protect it is damaging their members' ability to participate in this networked world.

The thrust of the efficiency recommendations is about using data that's already available. This comes in three flavours: contractual, ensuring that data is provided by suppliers and that data that has been purchased can be freely shared; technical, ensuring that crosswalks, converters, etc. are available to get this data into catalogues; and social, relaxing standards to accept existing data as it stands rather than insisting on perfect data created from scratch.
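
As an illustration of the 'technical' flavour: a crosswalk is, at heart, just a mapping from one scheme's fields to another's. A toy sketch – the tags 100$a, 245$a and 020$a are genuine MARC fields for author, title and ISBN, but the record structure is simplified for illustration:

```javascript
// A toy crosswalk: map MARC tags/subfields to a flat citation object.
const marcToSimple = { '245.a': 'title', '100.a': 'author', '020.a': 'isbn' };

function crosswalk(record) {
  const out = {};
  for (const field of record.fields) {
    for (const sub of field.subfields) {
      const target = marcToSimple[field.tag + '.' + sub.code];
      if (target) out[target] = sub.value;
    }
  }
  return out;
}

console.log(crosswalk({
  fields: [
    { tag: '100', subfields: [{ code: 'a', value: 'Shirky, Clay.' }] },
    { tag: '245', subfields: [{ code: 'a', value: 'Here comes everybody' }] },
  ]
}));
// => { author: 'Shirky, Clay.', title: 'Here comes everybody' }
```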

There’s also an interesting piece on LC’s costs:

According to current congressional regulations, LC is permitted to recover only direct costs for services provided to others. As a result, the fees that the Library charges do not cover the most expensive aspect of cataloging: namely, the cost of the intellectual work. The economics of creating LC's products have changed dramatically since the time when the Library was producing cards for library catalogs. It is now time to reevaluate the pricing of LC's product line in order to develop a business model that allows LC to more substantially recoup its actual costs.

Finding a business model that both allows distributed responsibility for the bibliographic data and allows LC to bring in more money is a big ask. Commercial organisations are struggling with exactly that right now; OCLC being the biggest amongst them. Given that LC data is not covered by any intellectual property rights (the work of federal employees does not qualify for Copyright protection and the US has no Database Right) I don’t see any practical way for LC to achieve both objectives.

If the management of the data can be successfully distributed then much of the cost would also be distributed. In this case, with the community as a whole producing much of the data, it becomes even more important to clarify the rights people have over the data. We’ve been working on this problem for a little while, releasing an initial draft license for this purpose (licensed community generated data) more than a year ago and having recently released further drafts for comment. We’ve funded legal work and expect to have some more news on progress with our data license here very soon.

Also included in this first recommendation is some discussion of internationalising and expanding the authorities data. I'm not sure I see this as an efficiency gain, but bringing together authorities from national libraries in many languages and reconciling them has the potential to revolutionise global bibliographic search. This work is already underway and, as long as the data is free for all to use, will be a great endeavour. This, in my mind, is the most compelling reason for LC to be asking for more funding and a clearer national mandate.

2. ENHANCE ACCESS TO RARE AND UNIQUE MATERIALS

This second section focusses on one aspect of libraries that has a competitive advantage over anything else – obscure stuff. The obvious example here in the UK is the John Rylands Library with its manuscripts, but every library has its own unique and interesting pieces. The same arguments around data as in section 1 come up again – with a recommendation to focus on at least some access to all resources rather than having some perfectly catalogued and others not at all.

There are also calls to digitise these assets where possible and make them available online, partnering to do this where necessary.

I'll jump on now, as sections 3 and 4 float my boat a bit more.

3. POSITION OUR TECHNOLOGY FOR THE FUTURE

We have become slaves to MARC, so too have our systems vendors

That has to be the headline of this section; it’s a quote from Brian E. C. Schottlaender from the working group presentation of the draft to LC.

In my mind, MARC stands in that sentence as a placeholder for all of the library-centric, complex and web-hostile standards that we currently rely on. You could easily add Z39.50, NCIP and a host of others to the list.

The point of all of the recommendations in this section is to be able to play nicely with the web, but even in the draft report there are still signs of the record-centric thinking that MARC forces us into.

Library bibliographic data will move from the closed database model to the open Web-based model wherein records are addressable by programs and are in formats that can be easily integrated into Web services and computer applications. This will enable libraries to make better use of networked data resources and to take advantage of the relationships that exist (or could be made to exist) among various data sources on the Web.

Overall, though, the recommendations are a very good start. The data needs to be made web-accessible, the vocabularies need to be published, freely, in machine-readable forms, and software everywhere must be allowed to link to elements of the data. The description is very much in line with the work that the W3C and others are doing around publishing data on the Semantic Web – it would be nice to have the report come down explicitly in support of this, to save everyone two years of arguing about how to publish data on the web. Currently the Semantic Web is only mentioned in passing, in 3.2.1.2.

There are developers at the LC who are interested and active in the Semantic Web space already. LC should invite them to show the rest of LC what they've been playing with and what the impact of it could be. The recommendations mention SKOS, a way of representing subject headings. I've seen work to represent LCSH in SKOS and it looks great – LC should be opening up and promoting this kind of work. This is specifically covered in section 4.
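
To see why SKOS matters, consider what becomes possible once each heading is a web-addressable concept with labels and broader/narrower links. A toy sketch with hand-made data and invented identifiers, not LC's actual output:

```javascript
// A few SKOS-style concepts: preferred label plus links to broader headings.
const concepts = {
  'sh:semantic-web': { prefLabel: 'Semantic Web', broader: ['sh:www'] },
  'sh:www':          { prefLabel: 'World Wide Web', broader: ['sh:internet'] },
  'sh:internet':     { prefLabel: 'Internet', broader: [] },
};

// Walk a concept's chain of broader headings -- the kind of traversal a
// catalogue could use to widen a subject search automatically.
function broaderChain(id, pool) {
  const chain = [];
  let current = pool[id];
  while (current && current.broader.length > 0) {
    current = pool[current.broader[0]];
    if (current) chain.push(current.prefLabel);
  }
  return chain;
}

console.log(broaderChain('sh:semantic-web', concepts));
// => ['World Wide Web', 'Internet']
```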

Section 3 also contains probably the biggest shocker: "Suspend further new work on RDA". The reason is to spend time getting FRBR tested and straightened out. I hope it's just a question of naming, as many of the names involved in the RDA work are now pushing actively on getting FRBR sorted. Judging by the traffic on mailing lists like NGC4LIB, the recommendation at least got everyone to sit up and listen.

The working group see the results of work done on FRBR as a tantalising sign of what could be done to really change the search experience for library users. Having done work on record clustering myself, I have to agree. FRBR is only one step along that road, though. With the right model, many relationships can be mined in the data, making it explorable in a way that catalogues just aren't today.
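
For a flavour of what that clustering involves, here is a deliberately naive sketch that groups records into 'works' on a normalised title-plus-author key. Real clustering needs uniform titles and name authorities, but the shape is the same:

```javascript
// Three records, two of which are really the same work in different bindings.
const records = [
  { title: 'Here Comes Everybody',  author: 'Shirky, Clay',    format: 'hbk' },
  { title: 'Here comes everybody.', author: 'SHIRKY, CLAY.',   format: 'pbk' },
  { title: 'Ambient Findability',   author: 'Morville, Peter', format: 'pbk' },
];

// Lower-case and strip punctuation so trivial variations collapse together.
const normalise = (s) => s.toLowerCase().replace(/[^a-z0-9]+/g, ' ').trim();

function clusterIntoWorks(recs) {
  const works = new Map();
  for (const r of recs) {
    const key = normalise(r.title) + '|' + normalise(r.author);
    if (!works.has(key)) works.set(key, []);
    works.get(key).push(r);
  }
  return works;
}

console.log(clusterIntoWorks(records).size); // => 2 works for 3 records
```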

4. POSITION OUR COMMUNITY FOR THE FUTURE

The changing demands of library users, the increasing diversity of uses both of library resources and the metadata about them, and the recurring references to making the data machine-usable will all be very familiar to those following the biblioblogosphere. That's a very good thing: there are a lot of smart people on the working group, and many more smart people not on the working group.

I was expecting more in this section around people; they make up the community, after all. Instead the report focusses on some more technical efforts: to link library data with other resources, integrate user-contributed data, make FRBR happen, and open LCSH up for re-use wherever desired. It seems to me that these recommendations could easily have been viewed as technology recommendations. It seems almost as though they sit as surrogates or euphemisms for what is said more openly on the mailing lists – that people will have to change. Of course many people are changing already and starting to do much of what the report discusses; others are still reticent, unsure of the need for change and fearful of losing technologies they believe have served libraries well for forty years or more.

The recommendations in section 4, if viewed as surrogates for the real point, are in effect saying "Change is coming; these are the first steps and you need to get on board with them". If that's what the working group intended to say, I'd like to see it said explicitly.

If we really want an active, world-wide bibliographic community – which would be necessary for many of the recommendations in previous sections – it would be great to see some discussion of how that might coalesce. Distributed management of bibliographic data is a fine start, but Talis, OCLC and others already do that. What factors are necessary to really form a community, as Flickr, Facebook, MySpace or Second Life have?

5. STRENGTHEN THE LIBRARY AND INFORMATION SCIENCE PROFESSION

Step one in getting better at this is apparently to build an evidence base of the costs and benefits of various initiatives – that is, get professional about knowing what is worth the time, effort and money. This seems like a sensible thing to do, and it is slightly odd that it's not already being done. As a majority employee-owned software company we all keep an eye on what we're doing, to make sure we're doing the most important and valuable things.

The next step is to support ongoing research – that is, if you have people in your teams interested in where this should go, let them play! Let them try stuff out; give them some time to try mining the MARC data or designing an ontology, like Martha Yee did.

Then, to ensure the future of these efforts, change LIS education: convince educators to do more on bibliographic control fundamentals, and convince them to share that material widely. A nice set of efforts, but judging by the way computer science education affects the wider industry, a ten-year strategy at a minimum.

6. JFDI

During the presentation of this report to LC, the working group were challenged that LC are already doing much of this, and that others are doing much of it too. That's a good thing: it means the recommendations and our actions are in agreement.

What's notable by its absence in the report is input and commitment from ILS vendors. As ILSs are so intimately tied to MARC and the language of MARC, very little can change without vendors' involvement, even with the profusion of free and open-source tools we hope to see appear.

But, doing much of it and having done much of it are a world apart. I think there’s still time to change and to be a part of the web. The semantic web offers real opportunity for libraries to have the best of both worlds; and then some. To close the stable door in time, however, will require faster change than we’ve ever seen before.

Let me know what you think of the report by commenting below.

A cloud of clouds

Let me start with a question – what is the collective noun for clouds? In trying to dream up a catchy title for this post, which you will discover once I’ve stopped waffling is about Word Clouds, I tried to discover from colleagues and places like answers.com what you call a collection of clouds. Answers received so far: a host, a storm, a front, and the one I chose – a cloud. I’m sure someone out there will be able to put me right on this, I’ll be monitoring the comments with interest.

Anyway, why am I so interested in [word] clouds all of a sudden? Well, it is not all of a sudden; I've been interested for a while in word/tag clouds as a device for serendipitous browsing through a set of metadata, based upon the popularity of words within, or tags associated with, information.

Flickr, Technorati, and LibraryThing are all well known examples of the use of these clouds in a user interface. More examples are appearing almost daily.

The thing that triggered me to write this post was the appearance of a word cloud on the site for the BBC's radio station Radio 1. Scroll down to the bottom of the page and you should see a display of the most popular words contained in SMS text messages sent to the station. This is refreshed every couple of minutes or so, giving an insight into what the station's audience is thinking about. With the station often receiving in excess of 1,000 messages per hour, the theme behind the words displayed is an aggregate of a fair amount of input. The tool that displays this also checks for well known words, like the name of a group or DJ, and makes them a clickable link to more information.

The thing that struck me about this implementation is that the BBC just put it there with no explanation or hints, expecting that their online audience will understand that words in larger fonts are more popular than those in smaller fonts, and that the ones in blue are clickable. Not that many months ago I remember having to explain those concepts to people seeing Flickr and del.icio.us tag clouds for the first time.
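
For the curious, the calculation behind the metaphor is simple: count word frequencies in a batch of messages and scale each word's font size between a minimum and a maximum. A minimal sketch:

```javascript
// Count word frequencies and map them onto a font-size range.
function wordCloud(messages, minPx = 10, maxPx = 32) {
  const counts = {};
  for (const msg of messages) {
    for (const word of msg.toLowerCase().match(/[a-z']+/g) || []) {
      counts[word] = (counts[word] || 0) + 1;
    }
  }
  const max = Math.max(...Object.values(counts));
  return Object.entries(counts).map(([word, n]) => ({
    word,
    fontSize: Math.round(minPx + (maxPx - minPx) * (n / max)),
  }));
}

console.log(wordCloud(['love this song', 'this song rocks']));
// 'this' and 'song' come out largest; re-run every few minutes for a
// rolling picture of what listeners are texting about.
```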

The Web 2.0/Library 2.0 world is one where new user interface metaphors appear and become accepted very rapidly. I am still aware, though, of some libraries that shy away from making changes to their OPACs until 'there has been training'. All I can say to such organizations is that I think you will find your online audience is more astute and open to change than you think. By all means offer some 'How to get the most from the new features' sessions, but if you have to train in the basics you have probably got your interface wrong.

Another thing that made me think about word clouds today was a comment somebody made in a telephone conversation about the Aquabrowser OnLine trials by libraries, such as Islington Libraries, that have contributed to the Talis Platform, which I posted about the other day. The comment, passed on from a further education college, was that the word cloud in the Aquabrowser OnLine interface could be of great help to those with dyslexia in identifying different spellings and the like. Another good example of how offering access to data through new and innovative user interface metaphors, in addition to the traditional ones, can have unexpected beneficial consequences.


When community and technology combine


Tim Spalding over at LibraryThing provides a nice write-up of Richard Wallis’ LibraryThingThing extension to the Firefox web browser. A number of interesting points get raised in his post, and in the comments shared by members of LibraryThing’s community, and I thought it might be useful to offer a few thoughts in response.

Firstly, Tim writes;

“This is an exceedingly cool mashup, and a very good demonstration of all the components. To my mind, it would be more useful if it did less, telling you only if the book was in your library.”

With straightforward access to a raft of Platform APIs and a solid body of data on library holdings, it becomes feasible to slice and dice the results in whatever way makes most sense to the users themselves, rather than insisting upon any ‘one size fits all’ solution. I can, personally, think of a whole host of reasons why you might wish to view holdings from a user-selected set of libraries, and the real technology lying behind Richard’s simple browser extension is certainly capable of supporting these use cases.

I, for example, live in one place and work in another, 150 miles away. I’d like to see the library local to my home and the library local to my office. I have no interest (no offence intended!) in the libraries of North Lincolnshire, South Yorkshire, Derbyshire, Nottinghamshire, Leicestershire, and wherever else lies along my route.

Or what about the university student who wishes to see their own university library, the public library of their university’s city, the public library in the town where their parents live, and the public and university libraries in the city where their boy/girlfriend is studying?

We are also seeing a welcome (and long overdue) growth in interest around the notion of collaborative access arrangements between neighbouring libraries, which is ultimately to the benefit of all library users. Rather than conducting painfully slow and eye-wateringly expensive procurements for yet another monolithic dinosaur of a system (believe me, I’ve read some of the procurement documents!), technologies such as those behind Richard’s tool might usefully and easily be aligned with existing library systems, in order that a borrower is able to see holdings data from all the institutions participating in a particular scheme. Indeed, if nothing fancier were required, Richard’s existing code could easily be modified for deployment on top of an existing OPAC. Imagine looking for a book in the library of the university at which you are studying, finding that the book is on loan, and having a browser extension very similar to LibraryThingThing let you know that there’s a copy in the local public library…?
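
Under the hood, each of those scenarios reduces to the same operation: filter a pool of holdings records by a user- or scheme-selected set of libraries. A minimal sketch, with the record structure and library identifiers invented; real data would come from the Platform's holdings APIs:

```javascript
// Holdings for one ISBN (hard-coded here for illustration).
const holdings = [
  { library: 'huddersfield-uni', status: 'on loan' },
  { library: 'islington-public', status: 'available' },
  { library: 'leeds-public',     status: 'available' },
];

// The user's (or consortium scheme's) chosen set of libraries.
const myLibraries = ['islington-public', 'huddersfield-uni'];

const relevant = holdings.filter(h => myLibraries.includes(h.library));
console.log(relevant);
// => only the two libraries this user cares about, wherever they may be
```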

LibraryThingThing is a rapidly produced (one afternoon, essentially) illustration of a number of possibilities. A tool deployed to best advantage in day to day use would doubtless concentrate upon fulfilling a smaller set of purposes with greater focus. Given the open nature of the APIs behind LibraryThingThing, there’s nothing to stop any of you experimenting and producing the tool that does what you want it to. If you like the idea of wrapping the tool up for delivery as a Greasemonkey plugin or Firefox browser extension as Richard did, the source of the Greasemonkey plugin is also available for you to modify.

Tim goes on to add;

“How should LibraryThing tie into libraries. As always, your thoughts are much appreciated.

We were, actually, planning on doing something like this, and even started the code. When we bring something live it will be a lot less technically elegant—good old server-side programming—but also not browser- and extension-dependent.”

Excellent! We’d (obviously) be keen to see LibraryThing extend in this way with the help of the underlying Platform technologies that made Richard’s browser extension so easy to produce. The Platform and its APIs are neither browser nor extension-dependent; Firefox and Greasemonkey simply provided an easy way for Richard to bring LibraryThing and some of our Platform components together without needing to get inside LibraryThing’s codeline. Tim would be able to use the same Platform components, but in a way that integrated them far more closely with LibraryThing without the need for particular browsers or extensions. That sounds like a win-win to me, and one we’d of course be happy to lend assistance to…

Now to the comments…

James Darlack writes;

“Perhaps rather than having LTThing look up only a specific library, it would be helpful if it could look libraries within a preset distance of a zip code, similar to the way Open WorldCat works.”

Absolutely. Behind the scenes, one of the places that LibraryThingThing looks for data is the Talis Directory. This can hold various details about libraries, including their postal address and their latitude and longitude. The Directory is an open repository of information about a growing body of libraries, and if your local library isn't listed you are free (indeed hereby encouraged!) to add it. The information you contribute is governed by a flexible and permissive licence, and a growing body of Platform APIs ensures that the data can be consumed by a range of third party applications to provide the sort of capability that you would like to see. The open nature of the APIs gives you a far greater degree of flexibility than Open WorldCat, which draws you back to an Open WorldCat-controlled web page every time you use it, meaning that you could do all sorts of quite clever things with the location data if you had the will and the ability. Libraries within a preset distance of a zip code, but on a bus route? Libraries within a preset distance of a zip code, but close to a Starbucks? Libraries within a preset distance of a zip code, with convenient parking and a copy of the book on the shelf? These applications aren't necessarily for Talis to build. We simply provide the tools to enable the community to do so.
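
The distance part is the easy bit: given the latitude and longitude the Directory can hold, great-circle distance is a standard haversine calculation. A sketch, with made-up coordinates and the zip-code-to-coordinates lookup assumed to have already happened:

```javascript
// Haversine great-circle distance between two lat/lon points, in km.
function distanceKm(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const R = 6371; // Earth radius, km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

const libraries = [
  { name: 'Library A', lat: 53.48, lon: -2.24 },
  { name: 'Library B', lat: 51.51, lon: -0.13 },
];
const me = { lat: 53.38, lon: -1.47 }; // resolved from a postcode/zip lookup

const nearby = libraries.filter(
  l => distanceKm(me.lat, me.lon, l.lat, l.lon) < 60);
console.log(nearby.map(l => l.name)); // => ['Library A']
```

Swap the distance predicate for one about bus routes, parking or coffee and you have the other applications; the data and the APIs are the same.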

Jonathan Cohen adds;

“When I click on the LTThing link, the only libraries it finds are British ones. Is Talis a British-only service, or is there some other reason?”

The Talis Platform, and the open and inclusive model that it represents, is a relatively recent activity for Talis and it will take time to work with the community on increasing the (already large) number of libraries represented. The holdings data visible to the Talis Platform today are predominantly those contributed to the Platform as part of library participation in a UK service we also run, called Talis Source.

The Platform itself is not restricted to the UK, and nor are the tools and applications built on top of it. If your local library is interested in contributing holdings data to the Platform (free of charge) so that it can be visible in LibraryThingThing and a growing number of other contexts, you should certainly encourage them to get in touch.

In investing in the Talis Platform, we at Talis are demonstrating our commitment to the continued development of libraries. We are also showing, quite explicitly, that library data has a value far beyond the walls of the library. Sites such as LibraryThing, complete with their significant (53,940 when I checked) communities of passionate bibliophiles, offer one obvious place to which it makes sense to bring as many library-sourced resources as possible. Why make it hard for LibraryThing's members to take the logical step into a convenient library? Why require those libraries to join some expensive club, just to make their holdings (or their very existence) visible?

Free participation. Easy contribution. Open APIs and a permissive license. It really does make sense, and every day it becomes harder to justify the monolithic technologies, closed clubs and exorbitant charges of the past with which libraries and their users continue to grapple today. There really is a better way. Come and see, then help build it.


When is Local Global?

I am currently sat at the back of a hotel conference room in Leeds, where the morning session of one of the Talis Customer Days is in full flow. I am presenting on the future, Web/Library 2.0, and the Talis Platform during the afternoon session. Through the wonders of hotel broadband [it's a wonder it ever works], bringing a wireless router with me, and Virtual Private Networking [VPN], I'm not only physically in Leeds, I am virtually at the Talis offices near Birmingham.

This is, frustratingly, very real – I now know via internal email that I have just missed the breakfast van; the sandwich delivery has just taken place at the Talis office, and it's still two hours to lunch here in Leeds! An example of virtual, local activity in a global context – because of ubiquitous Internet connectivity I could have been equally frustrated by messages about sandwiches anywhere on the planet.

This got me thinking about the comments that have flowed from Paul Miller's posting of a couple of days ago.

In discussing shared participation Paul commented:

there’s certainly a place for viewing comments by those geographically nearby. There must surely also be value in viewing comments across a community of interest, regardless of space. Yes, there’s already Amazon, but the comments are locked up there. A shareable pool of comments contributed to and consumed by libraries

This drew a comment, somewhat surprising to me, from John Blyberg at Ann Arbor:

In regards to shared participation, yes, I agree with you Paul that building a pool of contributed content could be a powerful and useful addition to any PAC. However, in a community such as Ann Arbor where both Ed and I live, my intuition tells me that we would want to avoid such a clearinghouse and opt for a community-built social software program. The reason is that (as most people in Ann Arbor would agree), our community is very unique and filled to the brim with book-lovers and library-users who could start building a database that belongs solely to our community and reflects the tastes and interests of the community, not the world at large. The main problem with a large shared database is that it is no longer unique and will ultimately align itself with the likes of Amazon.

It is at times like these that you have the realization that your assumptions are not always in line with everyone else’s.

So what are my assumptions, then? Firstly, the contributions of the citizens of Ann Arbor would be of great use, interest, and value to a far wider audience than just their district. Secondly, contributions to any global pool should be tagged as to their source and type. Thirdly, because of that tagging, selection of results should be possible via many filters, such as library, library authority or institution, library type, country, language, etc.

So, following those assumptions through in John's situation, I would hope that contributions from my community would add value to the global pot; be displayable locally, in isolation, as a coherent set; and optionally be supplemented by those from other appropriate communities around the country and the rest of the world.

To answer my own question in the title of this posting, providing data is tagged as to its source and type, Local is just a filtered view of Global resources so under the hood they can be the same thing.
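
As a sketch of that conclusion, with the data and tags invented for illustration:

```javascript
// Every contribution in the global pool carries source and type tags.
const globalPool = [
  { text: 'Loved it',           source: 'ann-arbor',        type: 'public',   country: 'us' },
  { text: 'Dense but worth it', source: 'islington',        type: 'public',   country: 'uk' },
  { text: 'Core reading',       source: 'huddersfield-uni', type: 'academic', country: 'uk' },
];

// Ann Arbor's "community-only" view is just a filter over the same pool...
const local = globalPool.filter(c => c.source === 'ann-arbor');

// ...optionally widened to, say, all public libraries worldwide.
const widened = globalPool.filter(c => c.type === 'public');

console.log(local.length, widened.length); // => 1 2
```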


Privacy and Library 2.0

Writing on LISNews, Blake draws my attention to a post by Rory Litwin on Library Juice.

“Privacy is a central, core value of libraries”

Is it? Ensuring access to a wide range of material, yes. Protecting the individual’s right to go where they wish without censorship or censure, yes. But ‘Privacy’ is a term that can quickly become overly loaded, and can equally quickly become a quite ridiculous justification for not doing anything interesting.

“As serious as privacy concerns may turn out to be, the features of Web 2.0 applications that make them so useful and fun all depend on users sharing private information with the owners of the site, so that it can be processed statistically or shared with others. This presents a problem for librarians who are interested in offering Library 2.0 types of services. If we value reader privacy to the extent that we always have, I think it’s clear that our experiments with Library 2.0 services will have uncomfortable limitations. This is probably going to lead many librarians to say that privacy is not as important a consideration as it once was. They will say that the Millennial generation doesn’t have the same expectations of libraries in terms of privacy that older generations do, and that we should simply adjust.”

Mungeing of data streams to create large anonymised sets, opt-in, informed consent. Each is a powerful tool in ensuring that we can leverage value in the aggregate whilst protecting the individual, and we should not be afraid to make full use of them in delivering services to those users.
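
To make the anonymisation point concrete, here is a minimal sketch, with an invented event structure and an arbitrary suppression threshold:

```javascript
// Reduce an event stream to anonymised aggregates: patron identifiers are
// never copied anywhere, and small counts that might identify an individual
// are withheld (the threshold of 5 is illustrative, not a standard).
function anonymisedLoanCounts(events, minCount = 5) {
  const counts = {};
  for (const e of events) {
    counts[e.title] = (counts[e.title] || 0) + 1; // only the title survives
  }
  return Object.fromEntries(
    Object.entries(counts).filter(([, n]) => n >= minCount));
}

const events = Array.from({ length: 6 }, (_, i) =>
  ({ patron: 'p' + i, title: 'popular-title' }));
events.push({ patron: 'p9', title: 'rare-title' });

console.log(anonymisedLoanCounts(events)); // => { 'popular-title': 6 }
```

The aggregate value is kept; the individual's trail is not.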

There is a balance to be struck between surrendering ourselves fully to the Cloud in order to gain maximum ‘benefit’ from personalisation, network effects, wise crowds of strangers with long tails and all the rest on one hand, and presenting an opaque and impermeable face to the outside world that reduces our capability to benefit from the network’s intelligence on the other.

That balance is not for the library to strike on my behalf, although the library may well have a role to play (along with many others) in ensuring that my decisions can be informed.

Rory raises a number of points that are worth thinking through, but whilst recognising the importance of upholding and protecting personal freedoms we must not become trapped in endless agonising over whether or not our poor misguided users should be ‘allowed’ to ‘give up their privacy’.


“MySpace for older set” sounds quite like bits of the library?

ZDNet carries a piece from Reuters, entitled “Bertelsmann looking to create ‘MySpace’ for older set”, which continues;

“German media group Bertelsmann plans a return to the Internet and is looking at transforming its Direct Group of book, CD and DVD clubs into an Internet networking scene for older people.

The company believes that Direct Group can turn its aging customer base of around 35 million to its advantage by changing its traditional clubs into Internet communities of like-minded people united by their similar cultural interests.”

In many ways an interesting idea, but I can't help wondering what role the library could play here, were the community to demonstrate sufficient will.

“Book club” ?

“similar cultural interests” ?

Sound familiar? And would you rather have that with or without Bertelsmann’s “lucrative” proposition that is to be “driven by advertising” and [presumably with] an “intrusive brand”?
