Panlibus Blog

Archive for the 'Standards' Category

Google Book Settlement will help stimulate eBook availability in libraries

So says former Google Book Search product manager Frances Haugen in her contribution to the debate on the September Library 2.0 Gang.

This month’s Gang was kicked off by Orion Pozo from NCSU, where they have rolled out dozens of Kindles and a couple of Sony Readers.  The Kindles have been comparatively more successful than the Sony Readers, apparently because of the simpler process for distributing purchased books across sets of readers, and a broader selection of titles at lower cost.  Currently users request books for the Kindle via an online selection form; the books are then purchased and downloaded onto the devices, which are then loaned out.  There are no restrictions on the titles purchased, and the split between fiction and non-fiction is roughly 50/50.

The Gang discussed the drivers that will eventually lead to the wide adoption of eBooks.  These included the emergence of open eBook standards, and the evolution of devices, other than dedicated readers, that can provide an acceptable reading experience.   Carl Grant shared his experience of starting a book on his Kindle and then picking it up from where he left off on his iPhone (as he joined his wife whilst shopping).

An obvious issue influencing the availability of eBooks is licensing, and author and publisher rights.  This is where the Google Book Settlement comes into play.  If it works out as she hopes, Frances predicts that over time it will facilitate broader availability of currently unavailable titles.  I paraphrase:

[From approx 26:50] Institutional subscriptions will become available to the 10M books that Google has scanned so far.  Imagine in the future a user with a reader that accepts open formats being able to access the books this institutional license would provide.  Imagine school children having access to 10M books that their library subscribes to, instead of having to formally request one-off books to be added to their device.

[From approx 44:50] There are a huge number of books that are no longer commercially available in the US, for several reasons.  If the rights holders of those books do not opt out, the books will become available for people to purchase access to.  One of the interesting things about the way the settlement is set up is that you will be able to purchase access either directly or through an institutional subscription.  What is neat is that this cycle will put a check on prices, as prices for individual books are based upon demand. So less popular books will cost less…  If the price of the institutional subscription ever gets too high, libraries can decide to buy one-offs of these books.   I think that whole economic mechanism will substantially increase access to books.

The Gang were in agreement that eBooks will overtake paper as the de facto delivery format; it is just a question of how soon.  Some believe that the change will be much more rapid than many librarians expect.  The challenge for librarians is to take their services into this eReading world.

Can RFID get it together to jump the chasm?

After what seems an age of working from home and in the office over the summer, I’m out on the road again.  This post is coming from the departure lounge of the airport serving the wonderful city of Glasgow.  I’m on my way back from speaking at a one-day conference – Introducing RFID – Are you on the right wavelength? – jointly organised by JISC and the Scottish Library & Information Council.

RFID, that wonderful technology that makes self-service a much more engaging and simple process for library users, has been around for many years.  Yet for many libraries it is still a new technology to be concerned about, not least because of the substantial financial and time investment required to deploy it.  It is telling of where we are with the general take-up of this technology that almost without exception every speaker [including yours truly] felt the need to provide the audience with their description of what RFID is and the potential future benefits that may come from adopting it.

The best simple description of what RFID is today came from JISC’s Gaynor Backhouse – RFID is barcodes on steroids.  It is a way of attaching a machine-readable identity to a physical item that is easier to handle than a barcode and can also act as an overt security device.  Being able to read multiple items, without the need for contact or direct line of sight, has revolutionised the self-issue and return processes, finally realising the benefits for library staff and customers that were bandied about many years ago when self-issue was first promoted.  Many of the speakers also emphasised the extra benefits for staff undertaking mind-numbing, labour-intensive tasks such as stock taking, weeding, finding, and checking, with the introduction of RFID reading wands and smart shelving.

There was much agreement as to these benefits, which are available to all libraries.  There were a few mutterings about interoperability issues between the offerings from different RFID system suppliers, but I get the impression that these concerns are rapidly fading.

Where there was far less clarity and agreement was on the future of RFID beyond being just a better barcode.  An RFID chip is not only capable of storing far more data than just an identifier; it also allows that data to be changed and added to.

As a techie at heart, the prospect of having the equivalent of a radio-accessed memory stick stuck to every book cover gets my creative juices flowing: the item’s loan history could follow it around; the book could arrive from the publisher with its catalogue record on board; it could attract the attention of an RFID-enabled phone to tell its owner that it is overdue and needs taking back to the library – to mention just a few of the more sensible ones.

There is a major blockage to the adoption of what could be described as these RFID 2.0 visions.  Nobody can agree on how to store the data on the RFID chips – as of today there is no standard for this.  In this standards-less vacuum each supplier is doing their own incompatible thing.  That is not to say that there are no standards for RFID.  As independent RFID consultant Mick Fortune testified, there are more standards in this area than it is wise to display on a single PowerPoint slide, but none of them address the issue of how to store this extended book/library data.

For a technology to become generally adopted, crossing the chasm between the early adopters and take-up by the early majority of users, there needs to be a standardised market in operation, reducing costs and risks.  Would the CD have been widely adopted if each record label, or equipment manufacturer, had used its own proprietary encoding format?

Mick Fortune went on to describe some light on the horizon in the form of a proposed standard – ISO 28560-1 – which codifies 25 data elements.  Its adoption would be a major step forward. Unfortunately, as always seems to be the way in the world of standards, ISO 28560-1 is not the whole story.  There are also two competing, and apparently mutually exclusive, standards, ISO 28560-2 and ISO 28560-3, which describe how these elements would be encoded on a chip – that’s the trouble with standards, there are so many to choose from!
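
To make that concrete, here is a minimal sketch of the idea – a handful of named data elements written to, and read back from, a tag. The element names and the naive key=value encoding are purely illustrative assumptions on my part; the whole point of ISO 28560 is to standardise the real versions of both:

```python
# Purely illustrative: named data elements written to, and read back from,
# a library RFID tag. The element names and this naive key=value encoding
# are assumptions -- ISO 28560 defines the real versions of both.

RFID_ELEMENTS = {
    "primary_item_id": "3011188231",   # the item's barcode-equivalent identity
    "owner_institution": "GB-GlaCL",   # an ISIL-style library identifier
    "set_info": "1/3",                 # e.g. part 1 of a 3-volume set
}

def encode_tag(elements: dict) -> bytes:
    """Pack elements as 'key=value' pairs -- a stand-in for a real encoding."""
    return ";".join(f"{k}={v}" for k, v in elements.items()).encode("utf-8")

def decode_tag(payload: bytes) -> dict:
    """Reverse of encode_tag; only works if both sides agree on the format."""
    return dict(pair.split("=", 1) for pair in payload.decode("utf-8").split(";"))

assert decode_tag(encode_tag(RFID_ELEMENTS)) == RFID_ELEMENTS
```

Two suppliers whose readers disagree on even this much cannot read each other’s tags – which is exactly the interoperability problem a ratified encoding standard removes.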

If these standards are agreed, ratified and adopted by the industry, I believe we will have removed a substantial barrier to the wider use of RFID for things beyond barcode replacement. The next problem will be to gain some agreement as to what those uses might be.   I may be short-sighted, but from my current point of view RFID 2.0 (I know I’m going to regret calling it that) looks like a great solution searching for a problem to solve.


Talking to Herbert van de Sompel about repositories

Over on our Xiphos blog, I’ve just published a podcast conversation I had with Herbert van de Sompel earlier this week.

It’s a nice example of the synergies between issues discussed here on Panlibus and those we’re exploring within Project Xiphos. Have a listen, and see what you think.

Does anyone have a marker-pen? An open letter to Karen Calhoun.

December 21, 2007

Karen Calhoun
Vice President, WorldCat and Metadata Services
OCLC Online Computer Library Center, Inc.
6565 Kilgour Place
Dublin, OH 43017-3395 USA

Dear Karen,

Thank you for publishing your response to the Library of Congress Working Group on the Future of Bibliographic Control. You’ll forgive me if my open letter to you is a little less formal.

The working group’s draft presents the library world with a rallying point around which it can choose to really move forward into the internet age in a way that it has not managed so far. You hold a unique position of power; that much is clear from the fact that you get a mention in the working group’s report. But with great power comes great responsibility, and for anyone who’s watched Spider-Man, it’s clear that often means giving things up.

Reading between the lines, your response seems to be saying that you are, in fact, better placed and better qualified than the Library of Congress to take on the role of national data provider. That may very well be the case; you are certainly better funded – their budget of around $387 million for the national library has to work very hard looking after more than 134 million items, as well as doing all the other great work they do.

On the other hand, you have more than $234 million of your customers’ money to spend each year, without any items to look after. You also have nearly half a billion dollars set aside to dip into should you have to do something big and important.

So if the community were to rally behind you as the centre of an effort to truly modernise, what would they have to demand of you?

Firstly, the community should insist you open up more. OCLC’s Office of Research gives quite a lot, but on the whole you could do more. The Library of Congress data is unambiguously in the public domain, at least within the US. The community should demand that the data you manage on their behalf be Open too. That would mean removing some of the technical controls you have in place, but most of all removing the restrictive terms you put in your membership contracts. The community should then protect any further contributions of their data by making them under a licence such as the Open Data Commons License. Of course, lots of people would like to see LC move further in opening up too.

Secondly, the community should insist that the software they have paid you to write should be open source. To say that you have released your FRBR algorithm when most libraries have no real chance of implementing it is somewhat disingenuous; give them the code. To say that crosswalks can be accessed through a web service is great, but not for everyone; give them the code. That would make Devon happy too; he wants his awesome work shared.

Thirdly, the community should insist you become more inclusive. Invite some vendors’ developers to join in the Grid Developer Network, or just invite everyone and see who comes. Tell everyone they’re free to blog about anything you’re doing – including your staff. Invite me to your symposium at ALA in June. Start a series of webcasts with great speakers who can influence the community.

Finally, the community should insist that you let go of control. That means not locking people in; it means not promoting worse solutions over better ones just because you own the copyright on the worse one – it means providing the tools for libraries to move from DDC to LCSH or MeSH, as well as the tools to move to it.

This last point is the key: letting go of control, as Rod Beckstrom and Ori Brafman describe it – the difference between the starfish and the spider. At the moment you’re a spider, but using the web as a platform means being a starfish. Skype, Craigslist, the many peer-to-peer networks, email and the very web itself all work precisely because Tim Berners-Lee, and Vint Cerf before him, understood they had to let go of centralised control. Tim Berners-Lee got a knighthood for letting go of control. We can’t promise you one, but you’d certainly earn the adulation of your peers.

Right now you’re too controlling, too centralised, too judgemental of your members: you’re a spider. Even your new logo with its big blue abdomen, green body and cute little orange head is just crying out for eight marker-pen legs, two marker-pen eyes and a cheeky little marker-pen smile. But the web wasn’t spun by spiders; it’s far more like the communication trails left by ants, another interesting social species.

The WoGroFuBiCo (as William Denton calls them) are asking the community to become a starfish; you need to stop being a spider.

Again, many thanks for making your response open and giving us a chance to widen the debate. Merry Christmas to all at OCLC from all of us here at Talis.

Respectfully,

Rob Styles
Geek, Talis

Zee REST

Superpatron (Ed Vielmetti) posted ‘a note to self’ yesterday entitled Zed REST. I’ve probably got my national stereotypes wrong, but surely Ed should have called it Zee REST, leaving it to the British to call it Zed REST.

Anyway, Ed was musing on the possibility of writing a Z39.50 adaptor so that research library catalogues could export a PatREST (pdf) interface for public use. That way a simple client could be constructed to talk to a wide range of catalogues, without having to mess about with horribly binary-encoded formats.

He then goes on to add:

A variant on this theme would be SRU REST, which would start from SRU/SRW and export PatREST on the user side.

The implicit message being that, as SRU is a RESTful interface that returns XML, it would be far easier to work with. This is most certainly true.
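
To illustrate just how easy, here is a minimal sketch of an SRU searchRetrieve request – one URL, plain XML back. The endpoint is a placeholder; only the parameter names come from the SRU specification:

```python
# A minimal sketch of an SRU searchRetrieve request. The endpoint is a
# placeholder; the parameter names are standard SRU 1.1.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

endpoint = "http://catalogue.example.org/sru"  # placeholder SRU target
params = {
    "version": "1.1",
    "operation": "searchRetrieve",
    "query": 'dc.title = "neuromancer"',  # a CQL query
    "maximumRecords": "5",
}
url = endpoint + "?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)  # plain XML back -- no binary decoding needed
print(tree.getroot().tag)
```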

It is no accident that in the recent free upgrade (which I talked about earlier) to Talis’ own product range, we are providing open access to Z39.50, SRW and SRU by default. If you already have a Z39.50 client, by all means continue to use it. But if you are starting from scratch, don’t touch it with a barge-pole; go for SRU if you have the option.

So Ed, as a footnote to your note, can I suggest you start with SRU, and then look at a way of getting Z39.50 targets visible via SRU. Then you only need to build one PatREST adaptor. Ah the joy of reuse, very Library 2.0!

Where do you get a Z39.50-to-SRU adaptor to make this possible? You get YAZ Proxy.

YAZ Proxy, from the Danish company Index Data, is a GPL-licensed application which amongst other things provides “SRU/SRW server function, to allow any Z39.50 server to also support the ZiNG protocols“.

With its in-built XSLT capabilities this could be just the thing to get Ed closer to his ambitions without having to mess with those horribly binary-encoded formats.

The only thing I would question is why invent a PatREST search standard when a perfectly serviceable RESTful search interface is, or can easily be, made available for the many visible Z39.50 catalogues? I’ve studied PatREST, which as a work in progress has a lot going for it – John Blyberg should be praised for his efforts in producing it – but looking at the searching elements of the standard I can’t help feeling that SRU has already solved many of the issues that PatREST may have difficulty solving: things like the ability to request an Author + Title search, or a search limited by publication date ranges.
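
CQL, the query language behind SRU, already expresses exactly those compound searches – a quick sketch, assuming the dc context set for illustration:

```python
# CQL, as used by SRU, already handles the compound queries mentioned above.
# The 'dc' context set is assumed here for illustration.
author_and_title = 'dc.creator = "gibson" and dc.title = "neuromancer"'
published_range  = 'dc.date >= "1984" and dc.date <= "1990"'
```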

Don’t get me wrong, as I have said before SRU/W is not perfect, but I think it addresses many of the search query issues that may come to haunt PatREST, OpenSearch, and other ‘simple’ search protocols. Maybe there should be some combining of standards going on in this area. I wonder what happened to this initiative?


A Library SOA example – not just Buzzword Bingo!

The Disruptive Library Technology Jester [Peter Murray], in his post Services in a Service Oriented Architecture (SOA) – “the second in a series about the application of the Service Oriented Architecture (SOA) system design pattern to library services” – uses a hypothetical use case, ‘Reflection of Local Library Holdings in Open WorldCat’, to demonstrate how SOA could add great value to Libraries and Library Systems.

For those of you who are not avid players of Buzzword Bingo, or who are not familiar with the term SOA, the preceding paragraph is probably totally obscure, and you are probably wondering what on earth this has got to do with libraries. Stick with me for a while and I will try to explain.

Firstly I would recommend a read of the first post in Peter’s series, ‘Defining “Service Oriented Architecture” by Analogy‘, which uses a transportation scenario to help enlighten the reader about what is meant by a Service Oriented Architecture – is it more efficient to just get in your car and drive from your home in Cleveland, Ohio to a hotel in Denver, Colorado, or do you use the car, bus, plane, and taxi to achieve your goal? Never having driven that journey (except once in 1976 via Austin Texas, Phoenix, LA, San Francisco, and Seattle – but that is another story!) I don’t know if that was the right analogy for me, but nevertheless it does work.

Having done that, you should have an idea that in the emerging SOA world, bits of functionality will be provided by service providers. What they use, and how they use it, will be hidden from you – all you need to know is how to send a request to their service and the format of the response they will send back, and to have the confidence that they will do this reliably. There are well-known examples of this for all to see:

  • Users of Amazon Web Services (AWS) send a URL to Amazon containing an ISBN, and get some XML back containing book information, pricing, and a link to a book jacket image.
  • Web site designers send a bit of XML to Google Maps, which then pops a very powerful mapping application into their web page, complete with map pins relevant to their site.
  • Send an ISBN to OCLC’s xISBN service and you receive an XML-formatted list of all the FRBR-related ISBNs (see the sketch after this list).
  • Send a URL in a standard format, containing a Library Code and a search term, to the Talis Deep-linking service API and find yourself redirected into that Library’s OPAC displaying the results of that search.
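
As a flavour of how low the barrier is, here is a sketch of calling the xISBN service from the third example above. The URL shape is my assumption of the service as published at the time, so treat it as illustrative rather than definitive:

```python
# Sketch of calling OCLC's xISBN service (third example in the list above).
# The URL shape is an assumption based on the service as published at the
# time; check OCLC's documentation for the current form.
import urllib.request
import xml.etree.ElementTree as ET

isbn = "0596002815"
url = "http://labs.oclc.org/xisbn/" + isbn  # assumed endpoint
with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# The response is a simple XML list of FRBR-related ISBNs.
print([el.text for el in tree.getroot()])
```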

The key to the adoption of SOA in any environment is standards, either commercial [Google and Amazon’s standards are simply there, published, and people are using them] or open. Take for instance PatREST (pdf), the standard being floated by Mashing up the Library competition winner John Blyberg of Ann Arbor District Library.

PatREST (Patron REST) is an XML specification developed at the Ann Arbor District Library for the purpose of providing a simple and easy method of accessing various data and methods.

This is a standard for accessing data from an individual local Library System and using it, like John does in his winning entry, to drive another application. PatREST is the application programmer interface (API) for a library service.

So how does this fit with the WorldCat use case in Peter’s latest posting? Well, if each Library had a System Manager capable of implementing PatREST on their local system, it would fit very well. Unfortunately, the world does not have a massive population of John Blybergs working as Library System Managers.

Those who read Panlibus regularly, and have digested some of our white papers, will know that we are passionate about getting the library’s services and data to where the users are spending their time – not usually inside a library system interface.

We are already showing the power of this with the Talis Platform and the open Platform APIs which we have released, and will be adding to in the not too distant future. We also recognize that providing APIs to the local library system is the keystone in enabling those systems to become services that can participate in a SOA world. Watch this space for more news on this front soon!

We also recognize that no one organization in the library sector can, on its own, solve the problem of delivering Library [regardless of vendor, location, or institution] as a service to be consumed by all. We are doing all we can to promote dialogue on this, but to be successful for the benefit of all we need to be joined by the promoters of the technology such as Peter, the likes of OCLC, our fellow system vendors, and the open source community. We have invited, and will continue to invite, all of you to join in the dialogue with us.

Knock on the door of your local system vendor (as many of them don’t appear to be blogging, except for a few notable and welcome exceptions), and your bibliographic service provider. Ask them how they are going to enable you, your users, and your/their data to take your rightful place in the open distributed community emerging as a result of SOA being adopted globally.

I could go on about the Open Data issues that need to be addressed to facilitate this, but that is a whole other story which I will return to in future postings.

Peter closes his posting with the following:

But What of the “Integrated Library System”?
If you read closely and have your internal sensors calibrated to such things, you may have noticed the juxtaposition of “inventory control system” with “local catalog system” in the descriptions above. That is no mistake — in the next posting of this series we’ll take a look at the disaggregation of the traditional integrated library system in a SOA environment.

Peter, come talk to us – we are not only aware of this but are actively working towards enabling it.

Wikicat

The Wikimedia Foundation, the international non-profit organization behind some of the largest collaboratively-edited reference projects in the world, including Wikipedia, has a project that has been running for the last few months named Wikicat.

Wikicat’s basic premise is to become the bibliographic catalog used by the Wikicite and WikiTextrose projects. The Wikicite project recognizes that “A fact is only as reliable as the ability to source that fact, and the ability to weigh carefully that source”, and because of this the need to cite sources is recognized in the Wikipedia community standards. WikiTextrose is a project to analyze relationships between texts and is “inspired by long-established theories in the field of citation analysis”.

In simple terms, the Wikicat project is attempting to assemble a bibliographic database [yes, another one] of all the bibliographic works cited in Wikimedia pages.

It is going to do this initially by harvesting records via Z39.50 from other catalogues such as the Library of Congress, the National Library of Medicine, and others as they are added to their List of Wikicat OPAC Targets. Then when a citation, that includes a recognizable identifier such as ISBN or LOC number, is included in a page the authoritative bibliographic record can then be used to create a ‘correct’ citation. Eventually the act of citing a previously unknown [to Wikicat] work should automatically help to populate the Wikicat catalogue. – Participative cataloguing without needing to use the word folksonomy!
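
For the curious, here is a sketch of what that harvesting step might look like, using the PyZ3950 library and the commonly published Library of Congress target details – illustrative rather than Wikicat’s actual code:

```python
# Illustrative harvesting step: fetch a MARC record for a cited ISBN from a
# Z39.50 target, here the Library of Congress (connection details as
# commonly published). PyZ3950 is one open-source library that speaks the
# protocol; this is not Wikicat's actual code.
from PyZ3950 import zoom

conn = zoom.Connection("z3950.loc.gov", 7090)
conn.databaseName = "VOYAGER"
conn.preferredRecordSyntax = "USMARC"

results = conn.search(zoom.Query("CCL", "isbn=0596002815"))
if len(results) > 0:
    print(results[0].data)  # the raw MARC record to base the citation on
conn.close()
```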

Putting aside the tempting discussion about whether a Z39.50 target can truly be described as an OPAC, the thing that is different about this cataloguing project is not what they are attempting to achieve but how they are going about it. The Wikicat home page states:

It will be implemented as a Wikidata dataset using a datamodel design based upon IFLA’s Functional Requirements for Bibliographic Records (FRBR) [1], the various ISBD standards, the Library of Congress’s MARC 21 specification, the Anglo-American Cataloguing Rules, The Logical Structure of the Anglo-American Cataloguing Rules, and the International Committee for Documentation (CIDOC)’s Conceptual Reference Model (CRM) [2].

So it isn’t just going to be a database of MARC records then!

Reading more it is clear that once the initial objective of creating an automatic lookup of bibliographic records to create citations has been achieved, this could become a far more general open participative cataloguing project, complete with its own cataloguing rules managed by the WikiProject Librarians.

Because they are starting with FRBR at the core of the project, the authority and granularity of the relationships between bibliographic entities could potentially be of the highest quality. This could lead to many benefits for the bibliographic community, not least a wikiXisbn service [my name] that is ‘better’ than OCLC’s xISBN.

So does the world need yet another cooperative cataloguing initiative? Working for an organisation that has had cooperative cataloguing in its DNA for over thirty-five years, I should be careful how I answer this!

Throwing care to the wind – yes. Consider that all the other cooperative cataloguing initiatives [including, as of today, the one traditionally supported by Talis] are bounded by issues of project, geography, institution, politics, subject area, commerce, exclusive licensing, or high financial barriers to entry. What is refreshing about Wikicat is that, like Wikipedia, the only barrier to entry, both for retrieving and adding data, is Internet connectivity.

Unlike Wikipedia, where some concerns about data quality are overridden by the value of its totally participative nature, the Wikicat team are clearly aware that the value of a bibliographic database is directly connected to the quality, consistency, and therefore authority of the data that it holds. For this reason, the establishment of cataloguing rules, and training for potential editors overseen by the WikiProject Librarians, is already well detailed in the project’s operational stages roadmap.

I will be watching Wikicat with interest to see how it develops.


A Licence to build…

Earlier today, Talis released a draft of our Talis Community Licence on the TDN.

This draft licence builds upon our existing commitment to free contribution into, and basic discovery from, Talis Platform-powered applications such as Talis Source, and codifies our intentions for data shared via the Platform in an open and unambiguous manner.

It is an important step forward in helping the whole sector to move beyond the formation of closed ‘clubs’ for the exchange of data, toward a model in which basic data is actually or essentially free, and where the differentiating services built on top of those data are where the value will be found. As such, we invite all those involved in this space to join with us in exploring the opportunities to further extend the shared pool upon which we, our customers, and their beneficiaries might build.

As licence drafter Ian Davis comments on his own blog, it

“is going to play a key role in our technology platform. It gives users and contributors of all kinds of platform data some fundamental rights with one important restriction.” [that they can’t deny those freedoms to another]

Whilst we are, of course, loath to introduce yet another licence into the morass, our research to date suggests that there is no existing licence with sufficient reach and flexibility to give contributors confidence that their rights are protected whilst making it as easy as possible for third parties to reuse their contributions.

As one who, personally, has been a long-time proponent of Creative Commons, and who argued hard in a previous role to have their licenses looked at sensibly, I see this as an important step forward in filling a gap in the current universe of remix-friendly licenses. I look forward to engaging with Creative Commons and others to ensure that this new licence moves outside Talis and genuinely meets a wide set of requirements in the library sector and far beyond.

Please do take a look, have a think, and share your thoughts in the forum.


Utility Library Services

Over on the Amazon Web Services Blog, Jeff posts to dispel the ‘it must be more difficult than this’ mists around subscribing to, consuming, and [in the case of their S3 simple storage service] paying for Web Services from Amazon.

With this post I would like to make clear just how easy it is to use this type of web service. I’m writing this because some of the developers that I talk to seem to think that it really has to be harder than it really is, and I want to correct that notion as soon as possible.

In previous Panlibus postings from Paul & myself; in Library 2.0 Gang conversations; in Library 2.0 white papers [1][2]; in conference presentations; and in Paul Miller’s latest D-Lib Article on Library 2.0, you will find a theme behind the technological part of Library 2.0 – small pieces loosely coupled.

The Lego brick analogy, that Paul used in his CiL2006 presentation, seems to have helped people get the Web 2.0 principles behind what is being discussed.

Stretching the analogy a little further – assembling/orchestrating your [Library] Service from smaller bricks [bits of library] that easily interconnect is fine, provided that it is easy to get hold of the bricks [bits of library service] in the first place. If you need a very special [but still standard] brick to finish your Model Library, you just pop to Toys R Us to get it. So where do you go to get the [bits of library] Web Services? Today, the answer is all over the place, and at the moment they are not very Lego-ish.

That is why we at Talis believe that a Platform of, and for, Library Services must and will be established to deliver cross-institution and cross-vendor standardization [Lego-ness] and ease of consumption [Toys R Us-ness]. This second attribute is of equal importance to the first. You may have the best Web Service in the world, but if a potential user has to jump through several hoops, sign loads of agreements, and send you a cheque before they can try it, they almost certainly won’t.

Whilst building services on the Platform, we [the Vendors and the Libraries] need to take this, and Amazon’s example, into account. Consuming a library service, regardless of the Library or the Library System Vendor, should be as easy as consuming an Amazon/eBay/Google/SalesForce.com service – sign up and away you go. The consumers of the services will expect the full subscriber experience [thanks again, Amazon, for the example] – to be able to manage their accounts, get reports on usage, etc. And if some of the services incur usage costs, that should simply be handled via their normal account management.
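
By way of illustration, this is the whole of what consuming such a service should feel like. Everything here – the endpoint, the parameter names, the key – is hypothetical; the point is that one key from self-service sign-up and one HTTP call should suffice:

```python
# Hypothetical throughout -- the endpoint, parameter names, and key are all
# invented for illustration. The point: one key from self-service sign-up
# and one HTTP call, with any usage costs billed to the account behind it.
import urllib.parse
import urllib.request

API_KEY = "your-key-from-signup"                    # issued when you sign up
ENDPOINT = "http://platform.example.org/holdings"   # hypothetical service

params = urllib.parse.urlencode({"isbn": "0596002815", "key": API_KEY})
with urllib.request.urlopen(ENDPOINT + "?" + params) as response:
    print(response.read())
```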

If we get this right, consuming Utility Library Services should be as easy as consuming any other utility computing services. Then the creative Librarian 2.0 community can concentrate on the what, instead of the how of Library 2.0.

As has already been identified by many, this will be a disruption to the way we all provide and consume systems and services. It will be an interesting journey. Join the open community and the discussions to help shape the route.


SRU / OpenSearch get together

Well at least the people behind them…

DeWitt reports in the A9 Developer Blog on the visit of Dr. Robert Sanderson to A9.com’s offices. With Dr Sanderson, co-creator of the SRU search standard, visiting A9, the home of OpenSearch, it obviously wasn’t just a pop-in chat over coffee and bagels.

Rob and I were both pleased to find that we see eye-to-eye on nearly everything we discussed. We both view SRU and OpenSearch as complementary, not competing, technologies. The SRU (née Z39.50) community has long been tackling some of the toughest problems in search syndication, and they have done a commendable job at working out many of the intricacies of integrating and exposing diverse and distributed collections of rich data. The OpenSearch philosophy has always been to leverage other formats when possible, and the incorporation of SRU and CQL elements into OpenSearch via extensions will help create a smooth gradient connecting the two technologies.

As regular readers of Panlibus will be aware, I’ve been predicting the rise of OpenSearch for quite a while, whilst recognizing its limitations and that it could learn from the library search world and protocols such as Z39.50 and SRU. Similarly, the library protocols – complex to most non-library developers – have much to learn from OpenSearch. So it is with great interest that I will be watching Palo Alto and Liverpool to see what develops.
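
A sketch of what that “smooth gradient” could look like: an OpenSearch-style URL template whose target happens to be an SRU server. The template and the parameter mapping below are my own illustration, not taken from either specification’s extension documents:

```python
# Sketch of the "smooth gradient": an OpenSearch-style URL template whose
# target happens to be an SRU server. The template and parameter mapping
# are illustrative, not taken from either spec's extension documents.
import urllib.parse

template = ("http://catalogue.example.org/sru?version=1.1"
            "&operation=searchRetrieve&maximumRecords={count}"
            "&query={searchTerms}")

def fill(template: str, **values: str) -> str:
    """Substitute OpenSearch-style {name} slots with URL-escaped values."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", urllib.parse.quote(value))
    return template

print(fill(template, searchTerms='dc.title = "neuromancer"', count="10"))
```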

I believe that this is just the first step along a path that will ultimately benefit both search providers and searchers alike. Expect to hear a lot more over the months to come about ways in which SRU can be leveraged by OpenSearch users and vice-versa.

Thanks to Lorcan Dempsey for the heads up on this.
