Panlibus Blog

Archive for the 'Web Services' Category

Opening the Walls of the Library – SOA & web services

It doesn’t happen often, but it is really nice when something produced for one purpose turns out to have been produced so well that it is good for so much more.  Let me explain….

My colleague Andy Latham has been pulling together a white paper, Opening the Walls of the Library – SOA and web services at Talis [pdf].  Its main purpose is to support the marketing effort behind Talis Keystone, our SOA platform that underpins Talis Library Integration Services.  To help explain those services to the not necessarily technical people in library and other departments considering integration, he needed to explore the history, principles, and practical considerations of this approach.  It is in this explanation, I believe, that he has produced a document that is a great introduction to the application of SOA and library web services in general.

Because of its original purpose, and because for obvious reasons the examples and case studies come from Talis products and customers, the document could be considered by some as being a bit marketingy.  Nevertheless, if you want an overview of library SOA – its real-world issues (many of which are to do with people, not technology), business models, web service functions, or why to choose REST in favour of SOAP – I can recommend this White Paper as an informative, easy way in.

As Andy says in the conclusion:

SOA is not all about technology; SOA is a business journey that needs to follow a path with small commercial and technical steps towards a known vision of business maturity. Commercial and Open Source technology has paved a way for businesses to begin introducing an SOA strategy. Introducing an SOA strategy is as much of a technical challenge as it is an operational challenge as the technology will break down silos between teams, departments and organisations and conflicting business processes which worked well in the silo will need to be redeveloped to meet the new needs of the more agile business.

The release of the OLE project’s report, which I commented upon previously, plus vendor initiatives such as OCLC’s Web Services and Ex Libris’ URM, have served to raise the prominence of web services in the world of libraries.  On a recent Library 2.0 Gang show about the OLE project it was clear, in the discussions between Andy, OLE’s Tim McGeary, Marshall Breeding and Ex Libris’ Oren Beit-Arie, that there is much more to integration than just technology.

I think it is fair to say that libraries as a sector have not been at the leading edge of the SOA/web services debate.  It is also fair to say that, for whatever reason, the UK seems to have been a few years ahead of some areas in reaping the benefits of such integration in libraries.  As Andy’s document shows, there is the potential for significant financial and organisational benefits when undertaking integration in this way.

“The 25,000 students at one of the largest Universities in the UK are now able to pay their library charges online using either debit or credit cards, enabling further efficiency savings for library staff and improving student services.”

“Getting relevant information from Voyager into personalised portal sites has been a key requirement for the University for some time…..  By building a SharePoint integration we are maximising the positive impact of our new VLE and enhancing elements of the Library service.”

“The University of Salford is in the process of transforming the way that the identities of its entire user population are managed across all key systems in the organisation. An essential part of the solution employed (using Sun Microsystems’ IdM suite) is the transition and management of up to 23,000 Talis LMS borrower identities via Talis Keystone.”

To reap this sort of benefit in a sustainable way a library has to be aware of, and have, an SOA strategy.  There is much in this white paper that can help those new to the subject to understand the issues.  As someone who thinks he knows about these things, I also found it very useful for checking and clarifying my assumptions.

So as I say, a recommended read….

Mash Oop North!

Following on from the success of the first UK Mashed Library event, organised in London last year by Owen Stephens, Dave Pattern and his colleagues from the University of Huddersfield are organising another one on Tuesday July 7th.  Talis are sponsoring the sustenance that should help keep the ideas and mashups flowing through the day.

Appropriately entitled Mash Oop North!  [best attempted with a Yorkshire accent], it promises to be another great event to get yourself registered for.

Dave is hoping to attract more than just the usual library-techno-geek suspects.  The day is also for the non-technical with ideas about what the technologies could be doing for libraries and the communities they serve.

Mashed Library is about "bringing together interested people and doing interesting stuff with libraries and technology".

For more information, reasons to attend, and the registration form – check out the event blog.

Register now – you know you want to!

If you can’t make it, you could always monitor the event tag mashlib09 (or #mashlib09 for tweets).

Library of Congress launch Linked Data Subject Headings

Back in December I was very critical of the Library of Congress for forcing the takedown of the Linked Data service at lcsh.info.  LoC employee, and Talking with Talis interviewee, Ed Summers had created a powerful and useful demonstration of how applying Linked Data principles to an LoC dataset such as the Library of Congress Subject Headings could deliver an open asset to add value to other systems.  Very soon after its initial release, another Talking with Talis interviewee, Martin Malmsten of the Royal Library of Sweden, made use of the links to the LCSH data.  Ed was then asked to take the service down, ahead of the LoC releasing their own equivalent in the future.

I still wonder at the LoC approach to this, but that is all water under the bridge, as they have now launched their service, under the snappy title of “Authorities & Vocabularies”, at http://id.loc.gov/authorities/.

The Library of Congress Authorities and Vocabularies service enables both humans and machines to programmatically access authority data at the Library of Congress via URIs.

The first release under this banner is the aforementioned Library of Congress Subject Headings.

As well as delivering access to the information via a Linked Data service, they also provide a search interface, and a ‘visualization’ via which you can see the relationships between terms, both broader and narrower, that are held in the data.

To quote Jonathan Rochkind “id.loc.gov is AWESOME”:

Not only is it the first (so far as I know) online free search and browse of LCSH (with in fact a BETTER interface than the proprietary for-pay online alternative I’m aware of).

But it also gives you access to the data itself via BOTH a bulk download AND some limited machine-readable APIs. (RSS feeds for a simple keyword query; easy lookup of metadata about a known-item LCSH term, when you know the authority number; I don’t think there’s a SPARQL endpoint? Yet?).

On the surface, to those not yet bought into the potential of Linked Data, and especially Linked Open Data, this may seem like an interesting but not necessarily massive leap forward.  I believe that what underpins the fairly simple, functional user interface they provide will gradually become core to bibliographic data becoming a first-class citizen in the web of data.

Overnight, the URI ‘http://id.loc.gov/authorities/sh85042531’ has become the globally available, machine- and human-readable, reliable source for the description of the subject heading ‘Elephants’, containing links to its related terms (in a way that both machines and humans can navigate).  This means that system developers and integrators can rely upon that link to represent a concept, not necessarily the way they want to [locally] describe it.  This should make it simple for disparate systems and services to share concepts, and therefore understanding – one of the basic principles behind the Semantic Web.
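
To make that concrete, here is a minimal sketch, in Python with the requests library, of how a machine might dereference that URI.  It assumes the service honours HTTP content negotiation for an RDF serialisation, which is the usual Linked Data convention; the exact media types id.loc.gov supports are an assumption here, so treat this as illustrative rather than definitive.

    import requests

    # The LCSH concept URI for 'Elephants' -- a stable, global identifier
    # that any system can cite instead of a local string.
    uri = "http://id.loc.gov/authorities/sh85042531"

    # A browser asking for this URI gets the human-readable page; asking
    # for RDF via the Accept header (assumed to be supported) gets the
    # machine-readable description, broader/narrower terms included.
    response = requests.get(uri, headers={"Accept": "application/rdf+xml"})
    response.raise_for_status()

    print(response.headers.get("Content-Type"))
    print(response.text[:500])  # the start of the RDF description

One URI serving both audiences is exactly what lets systems share the concept itself rather than a local description of it.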

This move by the LoC has two aspects to it that should make it a success.  The first is technical: adopting the approach, standards, and conventions promoted by the Linked Data community ensures a ready-made developer community to use it and spread the word about it.  The second is openness: anyone and everyone can take advantage of this valuable asset without first having to ask “is it OK to use this stuff?”.  Many in the bibliographic community, who seem to spend far too much time on licensing and logins, should watch and learn from this.

A bit of a bumpy ride to get here, but nevertheless this is a great initiative from the LoC that should be welcomed, and one that I hope they and many others will build upon in many ways.  Bring on the innovation that this will encourage.

Image from the Library of Congress Flickr photostream.

OCLC Take aim at the library automation market from the Cloud

Over the last few years OCLC, the US-based not-for-profit cataloguing cooperative, has been acquiring many for-profit organisations from the world of library automation, such as PICA, Fretwell-Downing Informatics, and Sisis Information Systems.

About fifteen months ago, Andrew Pace joined OCLC from North Carolina State University Libraries and was given the title of Executive Director, Networked Library Services.  After joining OCLC, Andrew, who had a reputation for promoting change in the library technology sphere, almost disappeared from the radar.

Putting these two things together, it was clear that the folks from Dublin were up to something beyond just owning a few non-US ILS vendors.

From a recent post on Andrew’s Hectic Pace blog, and press releases from OCLC themselves, we now know what that something was.  It is actually a few separate things, but the overall approach is to deliver the functionality traditionally provided by the ILS vendors (Innovative, SirsiDynix, Polaris, Ex Libris, etc.) as services from OCLC’s data centres.  This moves OCLC’s reach beyond cataloguing into the realms of acquisitions, license management, and even circulation.

The idea of breaking up the monolithic ILS (or LMS, as UK libraries refer to it) is not a new one – as followers of Panlibus will know. Equally, delivering functionality as Software-as-a-Service (SaaS) has been native to the Talis Platform since its inception.  It is this that underpins the already established SaaS applications Talis Prism, Talis Aspire and Talis Engage.

Both OCLC, with WorldCat Local, and Talis, with Prism, have been delivering public discovery interfaces (OPACs) as SaaS applications for a while now, and ‡biblios.net have recently launched their social cataloguing as a service [check out the podcast with Josh Ferraro].  But I think this is the first significant announcement of circulation as a service that I have been aware of.

The move to Cloud Computing, with its obvious benefits of economies of scale and the removal of the need for libraries to be machine minders and data centre operators, is a reflection of a much wider computing industry trend.  The increasing customer base of Salesforce.com, and the number of organisations letting Google take care of their email, or even their whole office operation (such as the Guardian), are testament to this trend.  So the sales pitch from OCLC, and others including ourselves here at Talis, about the total cost of ownership benefits of a Cloud Computing approach is supported and validated industry-wide.

So, as a long-time predictor of computing transforming from a set of locally managed and hosted applications to services delivered as utilities from the cloud – mirroring the same transformation for electricity generation and supply a century ago – I welcome this initiative by OCLC.  That’s not to say that I don’t have reservations. I do.

The rhetoric emanating from OCLC in these announcements is reminiscent of the language of the traditional ILS vendors, who are probably very concerned by this new and different encroachment onto their marketplace.  There is an assumption that if you get your OPAC from WorldCat (and as a FirstSearch subscriber, with this on-the-surface ‘free’ offer, you are probably thinking that way), you will get circulation and cataloguing and all the rest from a single supplier – OCLC.

The question that comes to mind, as with all ILS systems, is: will you be able to mix and match different modules (or in this case, services) from different suppliers, so that libraries can choose what is best for them?  Will OCLC open up the protocols (or, to be technical for a moment, the hopefully RESTful APIs) for accessing these application/service modules, so that they can be used not only with other OCLC services but also with services/applications from Open Source and other commercial vendors?  Will they take note of, or even adopt, the recommendations that will come from the OLE group [discussed in last month’s Library 2.0 Gang], which should lead towards such choice?

Some have also expressed concern that a library going down the OCLC cloud services route will be exposing itself to the risk of ceding to OCLC control of how all its data is used and shared – not just the bibliographic data that has been at the centre of the recent storm about record reuse policies.  Against that background, one can but wonder what OCLC’s reaction would be to a library’s request to openly share circulation statistics from the use of their OCLC-hosted circulation service.

This announcement brings to the surface many thoughts, issues, concerns, technological benefits and questions that will no doubt rattle around the library podcasting and blogosphere for many months to come.  I also expect that in the boardrooms of the well-known commercial [buy our ILS and a machine to run it on] providers, there will be many searching questions asked about how to deal with the 500 lb [not-for-profit] gorilla that has just moved from the corner of the room to start dining from their [for-profit] table.

This will be really interesting to watch…..

The composite image was created using pictures published on Flickr by webhamser and Crystl.

Come on in – it’s open (with your Ex Libris key)

I was one of the first to welcome the Ex Libris announcement of El Commons:

“a collaborative Web-based platform hosting the Developer Zone, where community members can access documentation for the open interfaces, upload software components that they have written and want to share, and download components from other community members, adapting such components to their needs”

I recently recorded a podcast conversation with Ex Libris Chief Strategy Officer and Library 2.0 Gang regular, Oren Beit-Arie about their Open Platform Strategy, of which El Commons is part.  From the transcript:

Richard: [28:11] So that won’t be limited to just Ex Libris customers then? You could be anybody that would want to interface with an Ex Libris system?

Oren: [28:21] This is something that we are still working on, some of the calls for these. I definitely think, and this is our goal, to enable access to everybody who is interested.

[28:31] We definitely see this as an opportunity. For example, at some point at least to enable noncustomers to go in and for example perhaps even end users will be interested.

Well, El Commons is up and running, and accessible at http://www.exlibrisgroup.org.  Unfortunately, as Oren hinted, you can only enter the commons with your Ex Libris Documentation Center or SupportWeb username and password – a bit of a misuse of the generally understood idea behind a commons, methinks.

To be fair to Oren and his colleagues, really opening up is a massive shift in culture that I suppose is a bit like turning a large oil tanker, but the sooner you bite the bullet and walk the talk, the better for you and the whole community.  Come on in, the open water is great – and it is only a bit scary when you first jump in.

Photo published on Flickr by Daquella manera

Open Library Environment Project – is SOA right?

OLE – the Open Library Environment Project – has been around for about a year now, and I am guilty of not monitoring it as closely as I would have liked.  So the opportunity to listen to their recent webcast seemed a great way to get up to speed again.

Following the instructions on the OLE Project site to replay the webcast led me to one of the most unusual webcast playback experiences I’ve had for a while.  To see the slides you have to click through to a service run by Adobe Acrobat, which provides a good representation of the webcast environment, complete with chat traffic in real time.  The problem then is that you have to use the telephone system to get the audio.  This is not a cheap exercise for those of us having to dial internationally – at least with Skype Out you can keep the costs down a bit.  Synchronising the listening with the viewing is then a bit of a challenge, especially if you have to pause and restart.

Anyway enough about the experience – what about the content?

What is clear is that the Mellon-funded project has attracted a great deal of attention and significant partners from academic and national libraries.  They also have a challenging and worthy goal, which they are taking significant early steps towards:

“By the end of our project, we will have a design for a next-generation library system using Service Oriented Architecture. We also will have built a community of interest that can be tapped to help build the OLE framework.”

The webcast inevitably, especially in the Q&A section, swung between low-level detail, the strategic approach, and things like privacy, which are more the policy concern of potential implementing libraries than of the project itself.

Having listened to it, it is clear that they are working on an assumption that implementing libraries would have to throw away their current investment in commercial or open source systems and build all this from scratch – this being based on experience of the current generation of systems not being capable of integrating easily, or of not dealing with electronic resources.  That is a heck of a large chunk to bite off, even if you pull in things like circulation and cataloguing from other projects.

Experience also leads me to strongly question the emphasis on Service Oriented Architecture (SOA) – that is, if SOA is being used in its generally understood sense, as against as a generic term for systems being connected via web-style calls.

A bit of background on that ‘experience’ I mention: there are [in general terms] two approaches to Web Services – tightly coupled SOA, and loosely coupled REST-based services.  The difference is that an SOA developer/integrator trying to embed a service into their application needs access to web service descriptions and other enterprise integration tools, whereas in the RESTful world integration calls can often be tested using a web browser, and integrators/developers need no more development tools than they currently use.
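
As a rough illustration of that difference in barrier, here is a sketch of the same imaginary ‘borrower charges’ lookup done both ways, in Python.  The endpoints, parameter names, and message shapes are all invented for the example – they are not taken from Keystone or any other real product.

    import requests

    # RESTful style: the request is just a URL. You could paste it into
    # a web browser to test it (the endpoint is hypothetical).
    rest = requests.get(
        "http://library.example.ac.uk/services/borrowers/12345/charges")
    print(rest.text)

    # SOAP style: the same question needs an XML envelope, an HTTP POST,
    # a SOAPAction header, and in practice a WSDL description plus client
    # tooling to generate and validate the messages.
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetCharges xmlns="http://example.ac.uk/library">
          <BorrowerId>12345</BorrowerId>
        </GetCharges>
      </soap:Body>
    </soap:Envelope>"""
    soap = requests.post(
        "http://library.example.ac.uk/services/soap",
        data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.ac.uk/library/GetCharges"})
    print(soap.text)

The REST call is one line an integrator can try immediately; the SOAP call already assumes a toolchain.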

Both SOA and REST have their benefits and their, sometimes religious, proponents.  Since our first use of SOAP (the underlying messaging protocol for SOA) back in the late 1990s, I have been using both of these competing approaches for some time.  Over that time Talis has developed, rolled out, and established a significant user community for a product known as Talis Keystone.  Keystone is a web service integration component designed to enable external enterprise services (student registries, finance systems, student portals, e-payment services, CRM systems, etc.) to easily and reliably integrate library system data and functionality into their workflow.

Keystone is now in use in many Talis customer libraries in the UK, and in some libraries running a system from another vendor.  Successful integrations have been completed with products such as the Agresso, Civica, Oracle, and SAP finance systems; the Microsoft SharePoint, uPortal, Moodle, and Blackboard learning and portal environments; and WorldPay e-payment services.  Integrations with systems from other suppliers are already in the pipeline.

From day one, Talis Keystone has had the capability to support both SOA and RESTful integration. It may be useful for projects such as OLE to reflect on the experience of rolling out these integrations, and on the take-up of the REST and SOA options.  The vast majority of these integrations have taken the RESTful approach, with only one or two going for SOA.  There are many reasons for this, but they all fall under the heading of there being a much lower barrier to implementing REST than SOAP.  Pragmatically, I am of the opinion that a lack of SOA capability would not have prevented any of these integrations taking place, whereas if SOA were the only choice many would not have been undertaken at all.

I/We would be more than happy to share some of these experiences in implementing and rolling out a product that addresses many of the concerns of the OLE Project.


The Iron Fist of Interoperability

“Any customer can have a car painted any colour that he wants so long as it is black.”  — Henry Ford, My Life and Work (1922)

It’s not easy to get people to agree on anything.  This is why joining a standards committee is generally considered something akin to being sent to a Siberian work camp.  Every party will have competing priorities, ideals and agendas, and the resulting work generally comes out worse for wear after all of the stretching and mending from all sides.

At the same time, without some kind of agreement, there is no hope of adoption.  It is really an unenviable position.

So with this in mind, I have great awe and respect for the Digital Library Federation ILS and Discovery System Task Force for wading into these waters.  While the task force is not a standards body per se, the fact that they are trying to promote interoperability through the recommendation of a specification makes it seem like splitting hairs not to cast them in the same lot.

The recommendation draft is certainly welcome to library developers, who have been craving something, anything, to help unlock the data from their proprietary systems (even though, as Marshall Breeding pointed out in the Library 2.0 Gang podcast on the subject, it’s about a year late).  The current draft lays out desirable pseudocode-type methods and then gives options of existing, off-the-shelf standards and protocols that could be used to enable the functionality that is defined.

The problem here is that they generally give multiple options for achieving the goal of any given method.  So this means that any ILS vendor can choose from a variety of protocols for implementing the spec and that a different vendor can choose alternate standards for the exact same functionality.  The most striking example of this would be a GetAvailability service (basically, “what is the current status and location of a given item”) which the recommendation says could be implemented via NCIP, a REST interface, SRU or OpenURL.

The point of a standardized API isn’t to make it simple for the vendors to implement.  The point is to make it simple for developers to implement.  The more options the developer has to account for, the more complicated the client library must be to access the API.  This then gets to be rather chicken-or-egg.  If there are no programming-language-specific client libraries to access the API, there will be a slower rate of adoption (especially if non-library programmers have to learn the basics of SRU, OpenURL or NCIP).  If the spec is too complicated or allows for too much variation in protocols or output formats, it will be hard to find volunteers to build the domain-specific languages to help spread the proliferation of library data in other kinds of applications.
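
To make the developer’s side of that concrete, here is a sketch of the dispatch code a client library ends up carrying when the same question can be answered by several protocols.  Every endpoint, parameter and message below is hypothetical and simplified; the point is the multiplication of cases, not the details of any one standard.

    import requests

    def get_availability(vendor_protocol, base_url, item_id):
        """One logical question -- 'what is the current status and
        location of this item?' -- but a different protocol per vendor."""
        if vendor_protocol == "ncip":
            # NCIP: POST an XML LookupItem message (schema simplified)
            body = ("<NCIPMessage><LookupItem><UniqueItemId>%s"
                    "</UniqueItemId></LookupItem></NCIPMessage>" % item_id)
            return requests.post(base_url, data=body,
                                 headers={"Content-Type": "application/xml"}).text
        if vendor_protocol == "sru":
            # SRU: a searchRetrieve GET carrying a CQL query
            return requests.get(base_url, params={
                "operation": "searchRetrieve", "version": "1.1",
                "query": 'rec.id="%s"' % item_id}).text
        if vendor_protocol == "openurl":
            # OpenURL: the request encoded as context-object key/values
            return requests.get(base_url, params={
                "url_ver": "Z39.88-2004", "rft_id": item_id}).text
        # ...plus each vendor's own proprietary REST flavour
        return requests.get("%s/items/%s/availability"
                            % (base_url, item_id)).text

Each branch is more code to write, test and document – and that cost is paid again in every programming language that wants a client.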

This is not an argument against using library standards.  An OpenURL interface on a library catalog to resolve holdings data would be incredibly useful.  Being able to harvest bibliographic records via OAI-PMH seems like a no-brainer.

However, it’s the combination of these where things begin to break down.  Imagine this hypothetical scenario:

  • SirsiDynix conforms to the recommendation by providing an OAI-PMH interface to their bibliographic and authority records and NCIP for item availability.
  • VTLS conforms to the recommendation by providing an SRU interface that has the capability of exposing all bibliographic records with the option to filter by date and a proprietary RESTful interface for item availability.
  • Talis conforms to the recommendation by providing a SPARQL endpoint for bibliographic records and a proprietary RESTful interface for item availability (with different syntax/responses than VTLS).
  • Ex Libris conforms to the recommendation by providing OpenURL interfaces for bibliographic, authority and holdings records.

These are entirely fictitious, of course, and somewhat facetious.  I wouldn’t expect Ex Libris to use OpenURL for everything, nor would Talis just say, “here’s SPARQL, have at it!”.  After all, Platform stores already have an OAI-PMH interface.  Replace the names with any other vendor’s; the point is that they could all do the above and all claim compliance.

Now imagine being a developer trying to write an application that uses data from the ILS.  Maybe the main library has a Voyager catalog and the law library uses Innovative.  Maybe the library is part of a consortium with libraries that have Aleph, Polaris and Unicorn.  Now let’s say that the developer doesn’t actually work for the library, but for the central IT department, and he or she is trying to integrate these services into courseware or a city government portal.  If all of these disparate systems use different protocols to access the same kinds of data, the odds lessen greatly that many of these catalogs will ever make it to the portal or VLE.

With the Jangle project, we’re trying to eliminate as much of this flexibility in implementation as possible.  It is a difficult balance, certainly, to prescribe a consistent interface while also accounting for extensibility.  But the point here is consistency.  One of the reasons we chose the Atom Publishing Protocol to interact with Jangle is because we think it will provide the lower level functionality needed to manage the data in a simple and consistent way.  On top of the AtomPub interface, simple OAI-PMH, OpenURL or NCIP servers can be built, using the AtomPub REST interface, to ensure that our library services can interact with existing library standards based applications.  At the same time, developers can use common client libraries (such as those for Google’s GData or WordPress, for example) to have a congruous means of accessing different kinds of data and services.  By only allowing Atom, we can focus on interacting with the data instead of requiring developers to focus on the protocols.
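
As a sketch of what that consistency buys, here is roughly what consuming a Jangle-style Atom feed of items might look like in Python.  The connector URL is hypothetical and the feed is simplified; this illustrates the Atom idea, not the actual Jangle specification.

    import requests
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    # A hypothetical Jangle-style connector exposing library items as
    # an Atom feed.
    feed = ET.fromstring(
        requests.get("http://connector.example.org/items").content)

    # Whichever ILS sits behind the connector, the client parses the
    # same Atom shapes -- no per-vendor protocol branches needed.
    for entry in feed.findall(ATOM + "entry"):
        print(entry.findtext(ATOM + "title"),
              entry.findtext(ATOM + "updated"))

An OAI-PMH, OpenURL or NCIP façade then becomes one adapter written once over this kind of interface, rather than something every client has to re-implement.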

After all, sometimes to get from point A to point B, you just need a car.  The color doesn’t matter all that much.

Conversations in the Market

…. or in the words of the Cluetrain Manifesto – Markets are conversations.

The May 2008 Library 2.0 Gang ticks a couple of boxes on the list of things that show that the best way to move forward is to talk and form a consensus.

Firstly, the subject of the conversation: the Digital Library Federation (DLF) working group that is recommending a generic API for all library systems to support, and the ‘Berkeley Accord’ that most vendors have signed in support of it.

Secondly, the fact that senior people from at least three of the major vendors are comfortable joining the Library 2.0 Gang for an open, recorded conversation about how they might support the API recommendations in their product sets.

As facilitating host and chair of the conversation, I found it very refreshing to hear how open Talin Bingham from SirsiDynix, Oren Beit-Arie from Ex Libris, and Talis’ Dan Mullineux were about their plans and support for the DLF initiative.  One point of discussion in the show was the position of Innovative Interfaces, the only vendor that explicitly abstained from supporting the Berkeley Accord.  All others that expressed a position supported it.  Although Innovative were unable to take part in the conversation, it is clear from the blog post by Betsy Graham, their Vice President of Product Management, that their position is not as negative as some have painted it.

If from this you think that the show is a vendor love-in, you would be wrong.  The Gang for this show also included Andrew Nagy, lead developer and passionate promoter of VuFind, the Open Source library OPAC, and the well-known watcher of, and commentator on, the Library Systems world, Marshall Breeding.  Appropriately, the show guest was John Mark Ockerbloom, who chairs the DLF’s working group.

During the show it was obvious that all were enthusiastic about the initiative, and in agreement that these first baby steps towards opening up access to library systems should be implemented widely as soon as possible.

This third show consolidates the position of the Gang as the monthly listen for those interested in libraries and the technologies that influence them.  As Gang host it is my goal to foster open conversations between vendors, their customers, and opinion formers in the library market.  I know, as an Evangelist employed by Talis, that some initially viewed this with some skepticism.  All three shows so far, I believe, demonstrate that open conversations between open-minded players in our world both move things forward and make for an interesting and informative listen.


Take your data with you….

As InfoWorld reported earlier this week, Google CEO Eric Schmidt, at the Web 2.0 Summit in San Francisco, said Google wants to make the information it stores for its users easily portable so they can export it to a competing service if they are dissatisfied. He went on:

Making it simple for users to walk away from a Google service with which they are unhappy keeps the company honest and on its toes, and Google competitors should embrace this data portability principle

If you look at the historical large company behavior, they ultimately do things to protect their business practices or monopoly or what have you, against the choice of the users.

The more we can, for example, let users move their data around, never trap the data of an end-user, let them move it if they don’t like us, the better.

I wonder what Google’s opinions are on sharing data. It’s one thing to be happy for your departing customers to take their data with them [in a usable format]; it is another to be happy to share the data of your current customers [with their permission] with your competitors to add value to those customers’ lives.

Obviously Schmidt’s comments are aimed at individual users of Google’s hosted software-as-a-service applications. Will this attitude cover aggregations of broader data – digitized book contents, for instance? One key to the open movement of data between the organizations who hold and allow access to it is licensing. Discussion around licensing in the open data world is a topic increasing in volume.

In presentations I attended at the recent Stellenbosch Symposium it was made clear that the research community should be discouraged from signing over all rights to their publications to publishers – some right to hold Open access copies in their institution’s repository should be retained. Then there is the constant justification by Google around what they are doing with the digitization of books. There is the Free Our Data campaign in the UK, and many other examples.

There are also discussions around not only how things are licensed, but what can be the subject of a license.

The Talis Community License (TCL), which has received some Web 2.0 Summit coverage of its own on TechWeb, addresses a hole in the current spectrum of open data licensing which is not covered by the Open Source licenses such as GPL at one end, or by the Creative Commons movement at the other. Both of these cover creative output – source code and creative works respectively.

The problem comes when you try to protect [or enable] the use of ‘an aggregation’ of either facts [which in themselves cannot be copyrighted] or individually protected/licensed elements in a data set. In Europe access to an aggregation is covered by something called Database Right, but this is not a global phenomenon.

To many this hole in the spectrum of Open Data Licensing is not obvious, and only becomes apparent after working through some examples. As the realisation of the need for something like the TCL spreads across the community, we hope that it represents a useful contribution to the evolution of Open Licensing.

In a separate but associated discussion that is emerging around the Open availability of bibliographic records, LibraryThing‘s Tim Spalding made the following comment on the Code4Lib listserv:

As I’ve been saying at conferences, anyone who wants to build an open-source repository of MARC records, with or without wiki-like access, will get my (and LibraryThing’s) direct support. I think it’s going to happen. If only we had the time to do it directly. Maybe we’ll get to it if no one else does….

…An open-source alternative to the current system is going to happen. The only question is when. The project is doable, and would be of enormous importance.

So where do the non-libraries and small libraries who do not want, or more likely cannot afford, to pay expensive fees to get at bibliographic records go at the moment? This has to change.

One of Tim O’Reilly‘s original key aspects of Web 2.0 is ‘Data as the driving force’ – it’s been a slow boiler, but that is starting to become more obvious by the day.


Why Nodalities?

“I read the Panlibus blog – I note Talis has another house blog called Nodalities – why is this, and why/who should be reading it?”

One of the major recurring themes from myself and others in Panlibus postings is Library 2.0 and its more general cousin Web 2.0. If you followed the links I provided to their descriptions in Wikipedia you will have discovered that they are both labels for a collection of attributes as against specifications.

I have yet to read a complete, concise definition of what Web 2.0 or Library 2.0 ‘is’ [and probably never will].  Nevertheless, it is far simpler to look at an application or service and pronounce to the world that it is very Web 2.0, and be fairly confident that people will understand what you mean.

Web 2.0 is virtually all about technology – Web Services, Service Oriented Architecture, Social Networking tools, etc. – whereas its Library relative mixes all of that with a heavy dose of using those Web 2.0 tools and the customer-handling and social skills of the library community to provide a better service to library users.  Debates about the use of mobile phones, and the provision of coffee, in a Library environment are often found in the Library 2.0 world.

We at Talis are the ‘Technology Guys’ in the Library equation, and although interested in all that is debated, our motivations are all about how new and emerging technologies [currently labelled Web 2.0] can be beneficially applied in the Library world. To this end you will find me and my colleagues evangelising on the subject both here and at conferences around the world such as these: Access2006, Internet Librarian International, Stellenbosch Symposium, Internet Librarian 2006, and the Charleston Conference.

The Talis Platform is an excellent example of applying Web 2.0, Semantic Web [to mention another ‘label’], SOA, and other technologies to provide innovative solutions for liberating library data, functionality, and services for the benefit of all.

In the process of proposing and delivering those [currently library-specific] solutions, we are pushing both the theoretical and practical boundaries of web technologies and the theories and standards behind them – especially in the World Wide Web Consortium, where you will find Talis involved with several committees. In doing this we are very active members, with much to contribute and say, of the world community driving these technologies forward.

This is where Nodalities comes in. You will note [today] that there is a posting from me picking up points from the blogs of Ian Davis and Sam Tunnicliffe, from our Platform Team, who are currently at the Web 2.0 Summit in San Francisco. If you are interested, as I am, in the way that all things Web are [and are being predicted to be] moving, you will find what they are reporting most engrossing.

Reading between the lines of what is being presented, it is clear that the advances already being demonstrated by the Talis Platform are only the first step in a massive change in the way large sets of data and metadata (often linked only by semantics) can be marshalled, related together, and combined to change the way information is used in the future.

Depending on the context, you will find Talis people attending and/or speaking at both Library and more general conferences across the world. Our knowledge and understanding of the issues surrounding the library and information industries is very valuable input into the wider technology world. As we have demonstrated, this is a two-way street. It is absolutely certain that our knowledge and understanding of the Web 2.0 world is already adding unique value to the world of libraries.

So to answer the question at the start of this posting…..

If you are in the library community and want to keep abreast of technology advancements – read Panlibus. If you are in the wider web community and are interested in what we are doing with, and have to say about, applying these technologies as a Platform in real-world situations – read Nodalities. I suspect most people, although concentrating on one, will find postings of interest in both Panlibus and Nodalities.
