Panlibus Blog

Archive for the 'E-resources, digital content' Category

The SCONUL Shared Services Study – 3

In the final post of this series, I discuss the Electronic Resource Management (ERM) element of the SCONUL Shared Services Report. Electronic resource licensing and management is identified early in the report as one of the three domains for proposed shared services. For fairly obvious reasons, ERM is a high priority operational area for academic libraries. With electronic content licences identified as one of the four areas of cost under consideration, ERM is explicitly identified as the focus of the shared service initiative:

“The core shared service will be centred on ‘e-resource lifecycle and access management’ encompassing e-journals, e-books, abstracts and other digital content.”

And this is the clearest passage in terms of establishing the scope of ERM within this initiative:

In the target scenario, a shared ERM service will be used by the service provider and customer institutions to keep track of electronic information resources, supporting acquisition and management of licensed e-resources. This will include resources licensed at a UK level where all students and staff in the UK can access them, resources with a UK framework agreement where any UK institution can obtain discounted access for its staff and students with standard licenses. The system will handle the metadata for resources and machine-readable versions of all licence agreements. The ERM system will include usage statistics related to the electronic resources.

Can ERM be centralised?

This is surely the fundamental question here. The report claims that 90% of respondents either agree or strongly agree that much ERM work is repeated unnecessarily across institutions, and I see no reason to dispute that. The report somewhat boldly asserts that ERM is a function no longer needed locally, and refers to a “community source platform”. Exactly what a community source platform is remains unclear to me. In the States around 2008 there was a ground-up initiative started at the University of Florida called Library Okra. Although sadly defunct, it argued cogently for a community-based ERM approach along the lines of cooperative cataloguing, where, say, the relationships between a journal title and a package would only have to be entered once, as would the core clauses of a licence. If this is what the authors mean by a “community source platform”, then I applaud it unreservedly.

The fundamental ERM problem

I’ve touched upon the fundamental ERM problem there, namely the disparity between the resource acquired by the library (e.g. a journal package) and the resource that is the focus of the user’s attention (e.g. the journal title or the article). By adopting a “cooperative cataloguing” approach, we solve one part of the problem and ensure that it’s always updated in a timely fashion, but part of the reason why the first generation of ERM systems has failed to deliver on expectations is that this is also a design problem. So, for example, an academic complains that s/he is no longer able to access an important e-journal. As the e-resource librarian investigates, the journal title needs to be mapped to commercial constructs such as the package, right across the e-resource lifecycle in the system. The library management system has never had to handle this complexity.
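To make the disparity concrete, here is a minimal, purely hypothetical sketch (all names invented, nothing here comes from the report) of the title-to-package mapping an e-resource librarian has to perform when investigating such a complaint:

```python
# Illustrative only: modelling the relationship between user-facing journal
# titles and the commercial packages through which they are licensed.

from dataclasses import dataclass, field

@dataclass
class Package:
    name: str                       # a publisher "big deal", for example
    titles: set = field(default_factory=set)
    active: bool = True             # is the subscription currently live?

def packages_for_title(title: str, packages: list) -> list:
    """Resolve a journal title to every package that licenses it."""
    return [p for p in packages if title in p.titles]

# Tracing an access complaint for one title back through its package:
deal = Package("MegaPublisher Complete", {"Journal of Examples"}, active=False)
lapsed = [p.name for p in packages_for_title("Journal of Examples", [deal])
          if not p.active]
# lapsed identifies the lapsed package – the commercial entity, not the title
```

The point of the sketch is that the user's complaint arrives at the title level, while the diagnosis and remedy live at the package level, and the system has to bridge the two.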

Think local

Has the report demonstrated sufficient sensitivity to local information resource needs? The report proposes:

Guarantee equality of access to electronic resources for students and researchers across institutions, potentially including colleges delivering courses such as Foundation Degrees.

I therefore wonder how much local flexibility will remain in place. It reminds me a little of the supplier selection issue in public libraries: in this instance the risk is of disintermediating the close relationship that currently exists between academics and liaison librarians, a relationship that helps to ensure responsiveness to academic needs. The packaging of journals and national deals already compromises this. One of the most important takeaways from last month’s UKSG conference was the growing resentment of libraries towards national deals, as the need to manage costs brings increased attention to individual titles.

Can the authors of the report guarantee that local responsiveness will remain in place? Is there a risk that academics will simply take matters into their own hands and effectively disintermediate the library in order to get access to what they need? And if some vestige of local control is retained, we need to bear in mind that currently the management of locally procured materials alongside the national deals is yet another significant problem in ERM.

On the other hand, the enhanced aggregated usage statistics that the report proposes do offer the possibility of improved decision-making in acquisitions.
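As a hedged illustration of what that improved decision-making might look like, here is a toy cost-per-use calculation over usage statistics aggregated across institutions (all figures and names are invented):

```python
# Toy example: aggregated usage statistics feeding an acquisitions decision.
# Costs and download counts are invented for illustration.

def cost_per_use(annual_cost: float, downloads_by_institution: dict) -> float:
    """Cost per download for a resource, with usage summed across institutions."""
    total_downloads = sum(downloads_by_institution.values())
    return annual_cost / total_downloads if total_downloads else float("inf")

usage = {"Institution A": 420, "Institution B": 130, "Institution C": 50}
cpu = cost_per_use(1200.0, usage)   # 1200 / 600 = 2.0 per download
```

Shared, consistent figures of this kind are exactly what individual institutions struggle to assemble on their own.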

National level licensing

On licensing, the report states that

88% of respondents either agreed or agreed strongly that ERM linked to licensing at a national level would be liberating.

I agree that we should be questioning the need for local divergence in licensing terms; ERM would be greatly simplified with standard licences. Even if there remained some localised needs, a cooperative approach to licensing would ensure that e-resource librarians would only have to input those localised clauses. A system that could be readily queried for licensing terms of individual journal titles would be a great step forward, although the problem remains of the disparity between the commercial entity and the individual title.
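A sketch of how that cooperative input of localised clauses might work, assuming a shared standard licence that locally negotiated clauses override (the clause names are invented for illustration):

```python
# Hypothetical sketch of cooperative licensing: a shared standard licence,
# with each institution recording only its locally negotiated clauses.

STANDARD_LICENCE = {
    "interlibrary_loan": True,
    "walk_in_use": True,
    "course_packs": False,
}

def effective_terms(local_clauses: dict) -> dict:
    """Standard terms, overridden by any locally negotiated clauses."""
    return {**STANDARD_LICENCE, **local_clauses}

# An institution that negotiated course-pack rights records only that clause:
terms = effective_terms({"course_packs": True})
# Everything not locally negotiated inherits the shared standard licence.
```

Under this model the e-resource librarian's input shrinks to the handful of clauses that genuinely diverge, which is the simplification the survey respondents seem to be asking for.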

Melting Pot

Overall, in terms of the ERM-specific proposals, I fear that the Shared Services report is conflating the need to eliminate duplication of back-office effort with ongoing acute problems in the ERM sphere, and that the scope creep which I alluded to in my first blog will be particularly problematic with this area of the initiative.

How college students seek information in the digital age

How college students seek information in the digital age is a report of findings from 2,318 US students surveyed in spring 2009, which seeks to understand how students search for information and approach research-type activities. Having read the report, I now understand fully why I’ve seen so many tweets about it along the lines of “If you read nothing else from now to the end of the year…”

The report introduces a useful typology of students’ research activities:

1. Big picture: Background information on a specific topic
2. Language: Finding out more about the words and terms around that topic
3. Situational: Judging the extent to which an area needs to be researched
4. Information-gathering: “Finding, accessing, and securing relevant research resources.”

… and points out that students experience needs in all these areas on a frequent basis.

So here we are deep in the digital age, characterised eloquently by the report as “a fast-paced, fragmented, and data-drenched time that is not always in sync with the pedagogical goals of colleges”. Since the “digital native” archetype has been all but discredited, what can we say about the online behaviours of that generation in this confusing and sometimes overwhelming landscape?

First of all, I was impressed by reference to broader forces (i.e. those that transcend technological advances), as articulated here:

… today’s students have defined their preferences for information sources in a world where credibility, veracity, and intellectual authority are less of a given – or even an expectation from students – with each passing day.

So it’s not just the technology that is a catalyst for change in the scholarly environment.

At a general level, librarians will be struck by the gaps identified between the students’ conceptualisation of research and that of instructors and librarians. The librarian approach is broadly characterised by thoroughness – advising students to move from the general to the specific when information searching, using scholarly resources to that end. Students surveyed, on the other hand, used a whole range of resources that delivered large numbers of results early on in the searching process, irrespective of their scholarly status.

The quantitative findings are interwoven with quotations from the students interviewed, and all have a ring of authenticity, such as this one:

When I’m doing research, usually it’s the material that I have from the class, or the stuff I’m looking up from the library databases. But if I don’t understand something from those things like a word or a concept, then I’ll go [sic] a search engine, or if I just need quick facts or something like that, I’ll use a search engine to find them.

“Information overload”

Students in all institutions used Google to complement the scholarly resources they found, drawing on its much larger result set, although they did not always use Google first or exclusively. The resulting “information overload” gave rise to considerable frustration:

In general, students reported little information-seeking solace in the age of the Internet and digital information. Frustrations were exacerbated, not resolved by their lack of familiarity with a rapidly expanding and increasingly complex digital landscape in which ascertaining the credibility of sources was particularly problematic.

“A risk-averse and predictable information-seeking strategy”

Another key finding is that

… nearly all of the students in our sample had developed an information-seeking strategy reliant on a small set of common information sources – close at hand, tried and true. Moreover, students exhibited little inclination to vary the frequency or order of their use, regardless of their information goals and despite the plethora of other online and in-person information resources – including librarians – that were available to them.

This, coupled with findings around “information overload”, suggests that students are dealing with the immensity of the information landscape by creating some kind of self-imposed walled garden, or what the report calls “a risk-averse and predictable information-seeking strategy.”

Scholarly databases

Students valued the “credible content, in-depth information, and the ability to meet instructors’ expectations” of scholarly research databases such as ProQuest (the sponsor of this research). These databases were used in all of the research activities of the typology outlined above.

Most students used such databases for three reasons:

1. Quality of content
2. To meet lecturers’ expectations of resources consulted
3. Perceived simplicity of search interfaces.

The 24/7 availability of those resources was surprisingly less important.

Course readings

Almost every respondent turned first to course readings for assignments, because these resources are “inextricably tied to the course and the assignment”, as well as being readily available and sanctioned by the lecturer.

Contact with lecturers and librarians

Lecturer availability was most important to students for answering questions submitted by email. 76% also found the setting of standards for resources consulted to be useful. Lecturers, then, unlike librarians, were seen as an integral part of the research workflow.

This contrasts sharply with contact with librarians. The report goes so far as to talk of a “student librarian disconnect”. So even though 78% of respondents are still using the OPAC to find books and other library materials, and 72% are making use of library study areas in the course of their research activities, only 12% made use of “on-site, non-credit library training sessions”, and 20% consulted librarians about their assignments.

As one student said:

Generally, it is not necessary to talk to a librarian – if the library is well laid out, you can search for material online, once you find it, you can request that they put them on hold for you and then just go and collect them. Or, if you know the physical location, you can just go and collect it yourself. When those ways fail, I’ll go bug a librarian. But otherwise, it just seems like there are resources to be used, rather than taking up someone’s time.

Finally…

This is an exceptionally useful report for anyone interested in student searching behaviours and student engagement in academic libraries more generally. Its sophisticated and rigorous methodology enables it to transcend received understanding and offer some really valuable insights. Academic librarians will justifiably be concerned about this “student librarian disconnect” which manifests itself not only in an ever-lessening of direct contact, but also in students’ own search behaviour. I believe that librarians are responding to this by making themselves available at the point of need, and working closely with academics to improve information literacy among undergraduates. I don’t believe that the findings of this report will be altogether surprising to the UK academic library community, but it’s an exceptionally valuable report all the same.

Google Book Scanning Project – Issues and Updates

Last night I listened to another Educause webinar – something that is developing into a (good) habit. This week’s was entitled The Google Book Scanning Project – Issues and Updates, and featured presentations and discussion between Dan Clancy, Engineering Director of Google Book Search, and Jonathan Band from the Library Copyright Alliance.

Even though the current negotiations are US-specific, it’s still a good idea for librarians everywhere to keep themselves up-to-date on progress in this area. This webinar provides a useful overview of the project, but if you haven’t got a full hour to spare, a recent article written by William Skidelsky in The Observer – Google’s plan for world’s biggest online library: philanthropy or act of piracy? – should also do the job.

So I’ll leave it to those two sources to cover the basics. However, there are a number of concepts that are important to understand in order to follow the debate between the two sides, which is what this blog posting is really about.

First of all, Google is categorising all the books it scans into one of the following:
a. Public domain – defined as having been published before 1923.
b. Books published after 1923, but which are either out of print or orphaned works (around 75% of all books scanned).
c. Books still in print.
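The three-way categorisation can be summarised in a few lines; a sketch (the 1923 public-domain cut-off is from the webinar, but the function itself is mine):

```python
# Sketch of Google's three-way book categorisation as described above.

def categorise(year_published: int, in_print: bool) -> str:
    """Classify a scanned book into one of the three settlement categories."""
    if year_published < 1923:
        return "public domain"
    return "in print" if in_print else "out of print / orphan"

# Around 75% of scanned books fall into the middle category:
example = categorise(1960, in_print=False)   # "out of print / orphan"
```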

Secondly, Google is planning to offer a number of diverse access models, the most noteworthy being:
a. Preview uses
b. Online consumer access – enabling users to buy online access to individual works under a pricing regime set by either the rightsholder or Google.
c. Institutional subscription – on a FTE basis, for HEIs and corporations
d. Public Access Terminal – one free terminal per US public or university library.

Thirdly, an independent Books Rights Registry (no website as yet) will be set up to represent rightsholders and to collect and distribute revenues as well as resolve disputes.

Well that covers a lot of Dan Clancy’s presentation, although it’s worth mentioning in passing that Clancy does come across as being genuinely philanthropic, as the Observer article also noted.

So now let’s move onto Jonathan Band, who was there really to cover the pros and cons of the project as it currently stands.

Band had many good things to say about the Google Book Settlement, painting a rosy picture of where we’ll be if the Settlement is approved. Firstly, of course, Google will be able to continue scanning books into its search index. Notable benefits for users include free access to full text through public access service terminals, and the ability to purchase access to out-of-print books at relatively low cost. Meanwhile, institutions will be able to purchase access to the full text of millions of books, and those that are participating in the project will receive digital copies of their collections.

As Band said, all in all there’s a lot to like.

And yet the project has generated considerable controversy. Why is this?

One frequently made argument concerns the absence of competition for what is bound to become an essential facility. Google has already scanned 10 million books in five years, so it has a huge competitive advantage. Here, then, is a situation in which there is enormous demand yet no other supplier, so there is a risk of a cost-prohibitive subscription which might undermine equity of access, privacy and intellectual freedom.

The business model is also contentious. Together, Google and the Books Rights Registry (with arbitration if necessary) will set the price of the institutional subscription. Google’s objectives in pricing are the realisation of revenues at market rates and of broad access to books. The parameters for pricing include pricing of “similar products and services”, and Band is concerned that if eJournal subscriptions are used as a benchmark, then the subscription could be cost-prohibitive for many institutions.

Only Google’s library partners have the right to a separate price negotiation route. And even then, the refund is limited to Google’s share (37% of the price).

For Band, the solution is that rather than ask the court to reject the Settlement, we should ask the court to closely supervise the interpretation and implementation of the Settlement, given that this is a natural monopoly needing regulation. Band is also anxious to ensure diverse composition of the Books Rights Registry, encompassing author representation in particular.

Clancy countered this by emphasising that Google cares deeply about the pricing, and is making this investment because it believes in broad access; a limited access project will be inconsistent with their vision. Clancy compared the planned price of a typical book under the terms of the Settlement with the price of a journal article, which can cost around $30. To me this seemed like a fudge. The original argument that Band made was around the cost of the institutional subscription, so why didn’t Clancy use the price of an eJournal subscription as a comparator? He also argued, though, that the vast majority of books will be cheaper than ILLs.

Clancy didn’t touch the issue of competition, emphasising customer choice: libraries can decide that the subscription is too expensive and opt for the free services instead. Again, this lacked conviction. No library worth its salt would build its collection on such a restrictive basis. He did, though, mention the lack of competition and choice in the eJournal marketplace.

He also dismissed as stupid the suggestion that people will get rid of their physical books. This seemed strange, as Band hadn’t actually made that argument.

The killer argument for me was made by Band towards the end of the webinar. He argued that we all want to trust Google. The Settlement is fundamentally desirable. And the people who are at Google right now seem eminently trustworthy. However, ownership can change, and that is why some degree of quasi-regulation is necessary. Clancy could only reply by saying that Google’s library partners (i.e. only the partners and not libraries as a whole) would have the right to arbitrate with Google if they felt the pricing was unfair.

e-Readers and e-Textbooks: current reality and future possibilities

e-Readers and e-Textbooks: Current reality and future possibilities turned out to be easily the most interesting webinar I’ve ever attended. This Educause webinar featured Dr Jon T Rickman and Dr Roger Von Holzen from North West Missouri State University in the States, describing an initiative there to evaluate e-Readers and e-Textbooks over the past year.

Like other universities, North West Missouri State University had found itself under considerable pressure to deliver content electronically, and the introduction of new devices in the marketplace has acted as a catalyst for an explosion in sales. The focus on textbooks specifically comes from the relentless pursuit of cost reductions. The university is, in terms of its computing provision, unique: it has had a computer rental scheme in place for over two decades, charging students $360 for a wireless notebook computer.

They set about evaluating the e-Readers out in the marketplace and chose the Sony Reader. (The Kindle people at Amazon weren’t really interested in participating in the project.) The Sony Reader looked attractive for a number of reasons. It was going to cost $250 per unit with a bulk purchase (the Kindle would have been $299 plus shipping). Sony will be transitioning to the EPUB format. The device has a 6-inch (15cm) display, with text available in three sizes. It also uses electronic ink technology, which is almost like paper and retains good readability even in strong sunlight, as well as having low power consumption and thus great battery life.

They had discounted the idea of offering a paper textbook rental service as the notional cost savings would have been cancelled out by the difficulties in running such a service.

The difficulties they ran into with e-Readers turned out to be considerable. For example, formatting content for e-Readers can take weeks. For campus-wide deployments there are currently not enough e-Reader-compatible e-Textbooks. Keyword searching and annotating are very important features for both students and academics, so despite the strong affinity that students have for hand-held devices, enthusiasm waned without those functions.

They also encountered a number of issues intrinsic to the e-Textbook format rather than the device. For example, the multiple components of the textbook, including graphs and images, all have separate copyright. PDF-format textbooks provide very restrictive options. And it turned out that what students really want from e-Textbooks is interactivity, animation and the ability to integrate content into other online tools.

They accepted that the whole area of e-Readers and e-Textbooks is subject to rapid change. It’s already the case, for instance, that keyword searching is now offered by e-Reader suppliers even though it wasn’t at the time of evaluation. Nevertheless, they were happy with their decision to move away from e-Reader provision, and instead set about making e-Textbooks available on the notebook computers that they were already renting out to students. They perceived that e-Reading devices and notebook computers are merging into each other. They also felt uneasy that e-Readers aren’t the platform that authors are creating on – they’re actually creating the content on notebooks. With issues such as these in mind, it was hard to justify an additional $2 million in costs to add e-Readers to their raft of student services.

A Notebook approach to e-Textbook provision would also integrate with other software and services, including email and web access, thus meeting a key student requirement. And user support was already in place.

The delivery of a range of e-Textbooks provided by five publishers to students via notebooks turned out to be simple and efficient. Students were able to complete the download of e-Textbooks with little support.

Rickman and Von Holzen don’t expect e-Textbooks to replace the traditional textbook any time soon. They foresee a transition, but expect academics to continue to select resources on the basis of content. In the meantime, they will continue their search for a new delivery platform, seeing the tablet PC with integrated eReader as an option. Overall, then, they’ve found that e-Readers simply don’t have the functionality to support the richness of e-Textbooks right now, and are more suited to a leisure-type read.

M-Libraries: Information use on the move

M-Libraries: Information use on the move is a report from Keren Mills of the Arcadia Programme, based at Cambridge University Library. With an eye on developments in mobile technologies and their increasing adoption, the report sets out to assess requirements, so that libraries avoid expending considerable resources before there is a real need.

The analysis and recommendations of the report are rooted in a survey carried out by the Arcadia Programme, in which staff and students from Cambridge University and Open University were questioned about their use of mobile phones. In the survey, most respondents said that they currently use their phones primarily for phone calls, SMS and photos. Only a small number had read e-books or journal articles on their mobile phones – for example, 91.5% of Cambridge students have never read a journal article on their mobile phones, and there’s a similar (slightly higher) figure for reading eBooks.

These seem fairly predictable findings, but I don’t know whether it follows that:

These results suggest it is not worth libraries putting development resource into delivering content such as eBooks and e-journals to mobile devices at present.

I’m not sure that really stands up on its own: it implies that technology development should be demand-driven, and that isn’t obviously true.

The report refers to other successful developments such as the Athabasca University Library’s Digital Reading Room in Canada “which allows readers to access full eBooks and journal articles through their library’s subscriptions on any mobile device.” However the report dismisses the possibility of similar developments in the UK right now partly because of the low usage figures encountered in the survey.

The other reason for not going down the Athabasca Digital Reading Room route is that the technology to make mobile e-journal access possible without such purpose-built platforms is now just around the corner:

… the key difference between the iPhone and previous web browsing mobile phones is that the iPhone can comfortably access websites intended for larger screens. As this type of device becomes increasingly available it will no longer be necessary to develop mobile-ready websites. Several manufacturers have announced that they intend to release touch-screen phones similar to the iPhone in 2009.

The report makes the point over and over again that the iPhone is revolutionising the mobile phone market. For example, although extremely low numbers of respondents had accessed e-resources on their phone:

iPhone users are already more inclined to read eBooks on their phones, according to comments from the respondents to this survey.

The report also comments on the increased uptake of mobile phone applications since the launch of the iPhone (2009 findings from ComScore), although in its own survey only 21% of respondents had downloaded applications to their phones and would do so again.

Thus the report gives us a combined reason for hanging back from M-Library developments at this stage – demand isn’t strong and more suitable technology is imminent without the library world having to develop its own bespoke solution. I think this is a fairly rational position.

The problem that I do have is that the recommendations made are strikingly conservative. Text alerting services, text reference services, audio tours and a mobile OPAC interface all seem to me to be excessively anchored to the current library offering, rather than using shifts in the use of mobile devices, accelerated by the iPhone, to re-imagine the library and its services. The following quotation is enough to make you feel that you’ve gone back in time to the print-only era:

… a significant portion of respondents currently use text alerting services in some form, and would be in favour of receiving text alerts from the library to let them know when reserved items are ready for collection, when books are due for renewal or are overdue.

I’m very sympathetic to Lorcan Dempsey’s take on the report: this is a rapidly changing area, and is very difficult to capture. It reminds me of all the years I’ve spent at Silverstone trying desperately to capture Formula One racing cars as they fly past me at 100+mph. Who knows how respondents will be describing their mobile phone habits in even 12 months’ time. But I do feel that even at this point in time we could be formulating a much more exciting vision of the transformed library services that widespread take-up of smartphones might bring about.

Will the eBook make it across the chasm?

I’m currently hurtling through the English countryside on a Wifi-enabled train, having spent the day at E-books and E-content 2009, held at University College London. An interesting and stimulating day with a well-matched but varied set of speakers, including yours truly (presentation on SlideShare). The eighty-strong audience were also a varied selection from academic libraries, academia in general, publishers and the information media.

The move towards a web of data, enabled by the emergence of semantic web technologies and practices, was one of my themes. Another was a plea for content publishers and providers to deliver their content to users where they are, rather than expecting them to be driven to the provider’s site with a totally different interface. This is a difficult one for the eContent industry at a time when the publishers are in the middle of a “my platform is better than yours” battle. Nevertheless, a student wants the content their course has recommended, not caring who published it or which aggregator their library licensed it from.

In laying the ground, I initially discussed the technology adoption curve and how technologies don’t become mainstream overnight. Any new technology, or new way of doing things, follows a standard pattern, with a small number of innovators taking the initial, often enthusiastic, risk. The early adopters then build on the innovators’ success and join in, still very early and with some risk. When the new way has been proven, adoption has increased and both costs and risk have fallen, the early and late majorities take it to mass acceptance and adoption. This only leaves the laggards, who will come on board only if forced by circumstance.
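The standard pattern I described can be sketched in code using Rogers’ classic adopter segments (the textbook boundary figures; the function itself is just an illustration):

```python
# Rogers' classic adopter segments, as shares of the eventual adopting
# population. The boundaries are the standard textbook figures.
SEGMENTS = [
    ("innovators", 0.025),
    ("early adopters", 0.135),
    ("early majority", 0.34),
    ("late majority", 0.34),
    ("laggards", 0.16),
]

def segment_at(cumulative_adoption: float) -> str:
    """Which adopter segment a given cumulative adoption level falls in."""
    running = 0.0
    for name, share in SEGMENTS:
        running += share
        if cumulative_adoption <= running:
            return name
    return "laggards"

# The chasm sits at roughly 16% cumulative adoption, between the early
# adopters and the early majority:
edge = segment_at(0.16)   # still "early adopters" – the far side of the chasm
```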

As an adjunct to the adoption curve, I spoke about a chasm which technologies have to cross, between the early adopters and the early majority before they take off.  There are many promising technologies that failed to cross that chasm.  For example, technology watchers at the time predicted that the mini-disc would replace the cassette tape, but as we know the CD took that prize.

Today’s conference was mostly focussed on the eBook and its impact on libraries and publishers – on the assumption that it will be the way of delivering book-sized pieces of content in the approaching digital world. In answer to a challenging question put to the end-of-day panel, I concluded that this is by no means certain. I believe direct access to articles will eventually see the end of the traditional journal issue format. In a similar way, I believe there is a good chance that chunks of content that are today of book size may well be assembled and delivered in a digital object yet to be identified.

So will the eBook jump the adoption chasm? If I were a betting man I would only back it each way. I believe that anyone betting their whole business model on it being a certain winner may just be taking too much of a risk.

Photo from mstorz published on Flickr

OCLC’s Andrew Pace Talks with Talis about Web-Scale ILS

To find out about OCLC’s move into providing hosted, Web-scale, Software-as-a-Service functionality for managing libraries, who better to ask than the person responsible for the programme?

Andrew Pace, Executive Director of Networked Library Services, has been working on this for the last fifteen months and, as you can hear from our conversation, is pleased that he can now talk openly about it.

Our wide-ranging conversation takes us from the epiphany moment when Andrew announced he wanted to be a librarian through to the strategic and architectural decisions behind this significant OCLC initiative.

Andrew’s answers to my questions add depth and background to the brief details so far released in his blog posts and OCLC’s press releases.

Have vendors helped make the ERM mess worse?

Listening to the Library 2.0 Gang show this month, you would probably have to agree that they may well have.

I’m joined by Gang regulars Marshall Breeding, of Library Technology Guides and Vanderbilt University, and Oren Beit-Arie, Ex Libris CTO, to pick over the current state of Electronic Resource Management in academic libraries.  When tools built to help manage a service that often consumes over half a library’s resource budget, and vast amounts of backroom time, see a surprising lack of take-up, it is symptomatic of something not being quite right.

Electronic Resource Management has evolved alongside Integrated Library Systems over the last decade, reaching a point today where many would agree it is a bit of a mess.  Where does the blame lie?  Is it with the world of electronic publishing, with its very messy business models, terms, delivery platforms and standards compliance (or lack thereof)?  Is it with the libraries, where the approach has been at the wrong level of granularity – approaching eJournal content at the level of the journal itself rather than the article, which is more often than not the target of a user’s discovery exercise?  This is aggravated by the attempt to catalogue the electronic in the same way as the physical – an article in an issue, of a volume, of a journal, on a virtual shelf.  Or is it with the ERM system builders, who may need to look closely at the design of some of those systems, as they may be helping to cause some of that mess?

Blame is probably too strong a word; the state we find ourselves in is definitely a shared responsibility of all three groups.  So where do we go from here to get to a more efficient, more relevant ERM environment – will it be evolution or revolution?

Check out this month’s Gang to hear their thoughts on this.

Considering the academic library at the 2009 JISC conference

I was pleased to see that JISC had put most of the content of last week’s JISC Conference 2009 onto their website. I’ve spent some time this week listening to the content and there’s quite a lot in there for university libraries, if you, like me, were unable to get to Edinburgh for the event itself.

Obviously, I selected the session entitled Towards the academic library of the future first. Sarah Porter from JISC introduced the session, sharing her perception that academic libraries have now reached a tipping point in terms of many of the pressures and issues we’ve all been aware of for some time. So bearing in mind the pressures she itemised, namely:
* the challenge to scholarly publishing that is Open Access;
* how to support research amid the data deluge;
* changing demographics, and how to support teachers in that context.
… the question is, how can the academic library support the academic endeavour in a positive way?

With this in mind, Mark Brown from University of Southampton explored potential roles for the academic library, noting that increasingly they are acting as trusted curators of content as individuals and institutions collaborate. This gives the library a publishing role, around institutional repositories, curation of digital content and involvement with open content. David Kay from Project TILE pointed out in the same session that the library has some amazing business intelligence around activities on the network, and wondered whether it could perform a role of aggregating that intelligence. This is a vision that Talis certainly shares, with our developers working at optimising the value of user transactional data for applications such as Prism 3.

However, Mark Brown questioned whether the prevalence of information exchange made the role of the library problematic even if the traditional mediating role remained intact. So much activity and data now bypasses the library (and not just the old bête noire, Google).

It was cheering to hear that Professor Derek Law, who has been working on the Libraries Horizon Scan, thinks that libraries have never been better managed, funded or staffed than at present. And yet academic libraries are not engaging with the academy as much as they need to. Going back to the pressures that Sarah Porter identified, we can see that there are layers to this engagement – the lecturer and the researcher as well as the student. Law noted that the academy continues to build libraries, spending millions almost as an act of faith, and it’s worth pondering why, given the sheer weight of evidence about changing user demands.

The session Mind the gap: understanding the tensions between the institution and the learner provided a useful summary of the characteristics of today’s student. Rhona Sharpe from Oxford Brookes University described the complex lives that many learners are living. One student she spoke to has three part-time jobs and is only a full-time student for two days a week. Visible, robust, reliable resources are therefore needed to let students access material any time they like, and students are critical of complex applications that are difficult to navigate. Time is a huge constraint in their lives, and is particularly problematic for students with disabilities.

Going back to the Towards the academic library of the future session, listening to Professor Hector MacQueen from Edinburgh Law School served as a powerful reminder of how much change the university library has succeeded in adapting to over the past few decades. Describing research at the start of his career as “very physical”, he recalled being highly dependent on the library, and indeed on a multitude of libraries around the country. He needed to get himself to far-flung libraries on a fairly regular basis to access material that wasn’t available via inter-library loan. This was frustrating, expensive and tiring. We surely wouldn’t want to go back to subjecting our researchers to those experiences.

And yet, even though the imperative to manage those scarce resources has now gone away, we are still spending a lot of time managing legacy systems, as David Kay remarked. This reflects the fact that the local model of delivery has not adjusted to that change.

It was definitely useful to catch up on the adoption of eBooks in higher education in the JISC e-Books observatory project session. The project, as many of you will be aware, has been exploring in real time what students are doing now with eBooks, and many of the findings are extremely interesting with regard to the academic library. 61% of students said they’d used an eBook at some point, but only 47% had used one that had been provided by their library. As Ian Rowlands himself pointed out, that needs to be taken with a pinch of salt, because people don’t necessarily realise where material comes from.

Librarians were found to be positive in a measured way, and the overwhelming consensus was characterised as one of “cautious optimism”. The most cited benefits of eBooks were the ability to support part-time and distance learners, and the ability to manage intense peaks of demand for materials, which has long been a problem for libraries and students alike.

The project found, for example, that male undergraduates felt much less dependent on the library and happier with their own ability to go out and buy material. This reflects a broader reality – namely that printed and electronic books currently enjoy a complementary relationship. The availability of eBooks had no impact on circulation statistics, nor on sales of printed books.

Another interesting outcome was that deep log analysis showed that the majority of users went through the library website and the OPAC to access eBooks.

The conference also had a session on Making the most of your physical learning spaces. Les Watson from JISC highlighted the importance of student opinion, wondering whether it will one day be more important than a visit from the QAA. It’s all about student satisfaction and happiness, and buildings and spaces are an important part of that.

I recently visited the University of Bradford, where I studied for my first degree. As is the case with many universities, extensive work has been carried out on the main university foyer. To the right of the foyer, back in the day, there used to be a car park. That has now been replaced by a gorgeous-looking atrium. What really makes that space, which many students use as an informal learning space, is the quality of light. The roof is made of essentially the same material as the Eden Project in Cornwall, and this gives a great feel not just of light but also of space. The provision of this space was seen as a priority by a university with large numbers of Muslim students, for whom social spaces based around alcohol are wholly inappropriate. It’s a very popular space for all students, and even attracts students from Bradford College.

So although this session wasn’t library-specific, it said things that are useful for libraries to take on board. Brett Bligh from the University of Nottingham warned against the tyranny of heavy use. These spaces are expensive, so usage is often seen as the key justification; but we need to look more closely at how they’re being used. He said we need to move away from a top-down approach to learning spaces, where people at the top spend vast amounts of money and students are expected to say how good the results are, and towards a situation in which students have some scope to design that space for themselves. From the floor, Penny Charlish-Jackson, Head of Learning Resource Centres and Teaching Accommodation at the University of Hertfordshire, made the point that students will, at the end of the day, decide how to use a space, and so we should avoid over-evaluating. Brett qualified this by saying that the less formal the space, the harder evaluation gets.

In the same session, John Tuck, Director of Library Services at Royal Holloway, described their newly transformed library space. The vision had been “a pilot development of a 21st century social learning, café-style space”, with a range of group learning environments from open plan to private (accommodating different styles from conversational to group learning), plus some silent study spaces, with varied seating giving students the ability to shape their own environment as they work.

Students were invited to join a Facebook group called “Love your library” and were encouraged to post their likes and dislikes of the current library service. This generated significant interest and shaped the plans, leading, for example, to a scaling down of the café element.

They were delighted to see that students moved in with immediate effect and adopted it as their space, which was wonderful, but formal evaluative mechanisms were put in place as well.

Students have mixed feelings about the change, although footfall has increased significantly at varying times of day and there is apparently a palpable buzz about the place at all times. The qualitative feedback is very interesting. One student said, “If I were at a library in the future, I imagine that this is what it would feel like.” But others are less enthusiastic; there is a feeling in some quarters that investment should instead be focused on the provision of 24/7, ubiquitous, good-quality information resources.

Derek Law had a powerful if painful message for academic librarians when he spoke of the need to move up to the macro level and stop navel-gazing. This resonated, sadly, as there had been much more focus on the library’s role than on the difference that libraries could make to the external environment. His statement “I’d rather channel the change than simply measure it” is something we should all take on board. He recommended greater advocacy: talking to vice-chancellors and the other stakeholders who set the budgets, and asking them what they want to get out of libraries. I remember how hard this was when I was myself a Head of Library Services (in the special libraries sector). But, to be honest, I also remember one occasion when I missed a crucial point while presenting a business case for a new library system to the Managing Director. That happened because I had become over-preoccupied with internal library considerations, and the big picture (as well as my view of the impact of change on other parts of the organisation) had become skewed as a result.

Talking to Herbert van de Sompel about repositories

Over on our Xiphos blog, I’ve just published a podcast conversation I had with Herbert van de Sompel earlier this week.

It’s a nice example of the synergies between issues discussed here on Panlibus and those we’re exploring within Project Xiphos. Have a listen, and see what you think.