Panlibus Blog

Archive for June, 2005

Google Maps API officially released

Following on from the large number of Google Maps hacks, Google have now released their API.

With several caveats about possible future advertising, changes to the API over time, and not using the Maps for unsuitable purposes:

There are some uses of the API that we just don’t want to see. For instance, we do not want to see maps that identify the places to buy illegal drugs in a city, or any similar illegal activity. We also want to respect people’s privacy, so the API should not be used to identify private information about private individuals.

Stand by for a tidal wave of Maps-apps…
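To give a flavour of what those Maps-apps look like, here is a hedged sketch against the launch-era API. In a real page the GMap, GSmallMapControl and GPoint constructors are globals supplied by Google's hosted script include; here they are passed in as a parameter purely so the sketch is self-contained, and the coordinates are illustrative.

```javascript
// Sketch of minimal 2005-era Maps API usage. The `g` object stands in
// for the globals that Google's <script> include would provide.
function initMap(container, g) {
  var map = new g.GMap(container);               // bind a map to a DOM node
  map.addControl(new g.GSmallMapControl());      // basic pan/zoom widget
  map.centerAndZoom(new g.GPoint(-122.14, 37.44), 4); // lng/lat point + zoom
  return map;
}
```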

Tag – you’re it!

The Yahoo! Search Blog has just published a very interesting entry on a new form of searching they are calling My Web 2.0. You can read more about it at Search with a little help from your friends.

The premise behind social search is that no matter how powerful the search engine, it can’t contextualise the search results for you personally… yet. So how can the search engine know that the results it is presenting are going to be relevant to you and your own tastes and preferences?

Yahoo have devised their own solution to this problem. It seems that they want to start harnessing some of the participative energy we are already seeing in the Web 2.0 environment. How about ranking search results based on what your trusted community has saved, tagged and shared?

“Much like links and anchor text enabled major improvements in web search by becoming a new source of authority for search engines, people and trust networks are now an additional source of authority for social search engines. In the same way that blogs and RSS are empowering individuals to participate in publishing, individuals and communities can now participate in search, using tools like My Web2.0 that let them define what is valuable to them and their community.”
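As a thought experiment, the re-ranking idea can be sketched in a few lines: boost each result's engine score by how many people in your trust network have saved or tagged it. Everything here (the function, the field names, the flat additive scoring) is hypothetical, not how Yahoo actually does it.

```javascript
// Hypothetical sketch of social re-ranking: each result's base engine
// score is boosted by the number of trusted-community members who
// have saved that URL.
function socialRank(results, savesByUrl) {
  return results.slice().sort(function (a, b) {
    var scoreA = a.score + (savesByUrl[a.url] || 0);
    var scoreB = b.score + (savesByUrl[b.url] || 0);
    return scoreB - scoreA; // highest combined score first
  });
}
```

A result your friends have saved five times can outrank one the engine scored higher on its own.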

I like this concept. I was recently having a conversation with some colleagues about book reviews on Amazon. Although Amazon goes to great lengths to indicate how valuable a review might be (ranking the review, ranking the reviewer, etc.), I still feel that with something like a review I want to “know” the reviewer in a way that means I trust their opinion.

It’s also quite interesting that Yahoo use an example in their blog about plasma TV screens. In a conversation just a few days ago, I expressed an interest in buying a plasma TV screen to my colleagues, who then bombarded me with a list of good sites for getting a good deal on plasmas. In retrospect, it was a conscious decision on my part to qualify my investigation with trusted sources first, before I embarked on a web investigation.

Yahoo want to see My Web 2.0 evolve organically, enabling entire communities of interest to start using their platform to create their own search engines populated with content that has been tagged and shared within the community. I have two questions about this – what if you have been merrily tagging your content in del.icio.us? Or Flickr? Are Yahoo and del.icio.us going to create the APIs to enable tags to be imported? In a true spirit of Web 2.0, I would like to see Yahoo Search enable this interoperability.

My other question is about the impact this will have on libraries. The defence put up by libraries concerned about Google et al. has been that library search enables its community to find not just any source but a trusted source. In this context, that generally means a resource that has been recommended either by faculty or by the library, based on the vast amount of knowledge and understanding they have of the information landscape. So does “My Web 2.0” infringe even further on the library space?

Two further observations to make at this stage:
1. Can “My Web2.0” make tagging the hidden web any easier? Academics currently have a method of tagging the resources they want their community (otherwise known as the students in their class) to read: it’s called a reading list, and not surprisingly a lot of the material that gets tagged within a reading list is physical content held by the library. Here Silkworm could really have a role to play: could a tagging tool like del.icio.us or “MyWeb2.0” use the Silkworm directory to locate resources that are physical but which have metadata that can describe the resource and point you to the location?
2. Is “MyWeb2.0” introducing layers of tagging based on us tagging the tagger? So for instance, you can see the following scenarios evolving:
a) Tags that are made by those people who I “know”, whom I have some form of direct relationship with. This is where I see “MyWeb2.0” being of immediate value. I have a community of interest based on my academic affiliation, or my research group.
b) Tags that are created by those taggers that I “know about”, whose reputation I have come to respect or admire because I am part of that circle of participation. This is very much where del.icio.us and “MyWeb2.0” overlap.
c) Tags that are created by taggers who are an “unknown”, but whom I come across in a serendipitous fashion and who may in time become “know about” taggers. This is again where del.icio.us and indeed Technorati become useful.

To an extent, all three scenarios play well in an academic context. However, we have found from our experience with Talis List that some academics are not keen to publish their reading lists to the wider community; they see it as their intellectual property. But if we see reading lists as simply a set of tags, then the “MyWeb2.0” application could work very well.

Project Atlas – Microsoft and AJAX

Following on from Richard’s cool work building useful apps from distributed services (see these fancy demos), it seems the *really* big boys are getting excited about it too.
Project Atlas is a Microsoft ASP.NET development project which aims to produce a framework and set of tools to make building AJAX applications easy for those without great expertise in client scripting.
Here, Scott Guthrie of Microsoft outlines the thinking around Atlas.

Interestingly, one of the key goals of the Atlas project is “building an active community” – sounds like a recurring theme.

The New Breed of Web Service Clients

del.icio.us director is an interesting service that has wide ramifications for the web service world.

It’s a Javascript application that creates a new, interactive user interface for the del.icio.us service. It runs inside the web browser, not on a server. It’s significant because it demonstrates a technique for interfacing to web applications without using an intermediary web service broker. I think that we’re in the middle of a sea change in the way web services are brokered and consumed. The reliance on centralised web service orchestration, proxying and coordination is diminishing already, and del.icio.us director may herald its disappearance entirely.

The first indication that something big was happening was the release of Greasemonkey, which demonstrated the ability to inject user-controlled Javascript into a web page. The big deal here is the demonstration of the dynamism of the browser DOM. Simply by adding a script node, new behaviour can be inserted, removed or updated at runtime. This has recently been simplified by the release of the behaviour library, which allows Javascript to be added to a page declaratively. Over the past 5-6 years there have been many proposals for ways to separate behaviour from content and styling. This is the first I’ve seen that is usable right now, without having to wait for the browser vendors to implement a new standard. Technically it needed the dynamic DOM, but socially it needed GMail, AJAX and Greasemonkey all to have existed first.
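The injection trick itself is tiny. This is a hedged sketch of the technique, not Greasemonkey's actual source: create a script element at runtime and append it to the document head, and the browser fetches and executes it immediately. The document is passed in as a parameter only so the function is easy to exercise outside a browser.

```javascript
// Sketch of runtime script injection, the mechanism behind
// Greasemonkey-style behaviour changes. `doc` is a browser document
// (or any object with the same two methods).
function injectScript(doc, src) {
  var s = doc.createElement("script");
  s.type = "text/javascript";
  s.src = src;                                    // browser fetches and runs it
  doc.getElementsByTagName("head")[0].appendChild(s);
  return s;                                       // handle for later removal/update
}
```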

The second indicator was Google’s in-browser XSLT engine, which enables XSLT to be used client-side in browsers that don’t yet allow programmatic control of the in-built XSLT engine (Safari), those that reject XSLT on philosophical grounds (Opera) or those that are tied to unsafe technologies (IE + ActiveX). This engine is pure Javascript (but leverages native processors where possible) and unlocks a major barrier for self-contained web service clients such as del.icio.us director. I’m sure that Google have other tricks up their sleeves waiting to be released.

How are these techniques going to change the web service landscape?

The current landscape is a complex tangle of specifications concerning identity, authorisation, orchestration, transactions, messaging etc. (see here and here). The assumption is that all the pain of these specifications will be handled by the platform vendors who will build new platforms for new applications. The cost of entry into this world is high in both time and money.

The community of web developers who actually want to go ahead and build useful applications has largely ignored these standards and have instead forged ahead with an existing platform that supports existing applications: the web browser. The browser already has mechanisms for authentication and identity. It has rich UI capabilities and communications facilities. It runs in a trusted security context. A missing piece has been around orchestration and coordination. For this the developers turned to Javascript and now, with dynamic DOM and runtime Javascript injection, it’s become possible to externalise application behaviour.

The result has been a transition from data-mobility to code-mobility. Rather than submitting data to a web service and getting results, the model is to download the required behaviour and apply it to the data which may be local or remote. The data can be dispersed across any number of specialised stores. The informative diagram halfway down the director page illustrates this quite nicely: the coordination happens on the client, not on the server.
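In code terms, that client-side coordination is just the browser fanning requests out to several stores and merging what comes back. A sketch of the pattern follows; the transport is injected (in a browser it would be an XMLHttpRequest wrapper), and every name and URL here is illustrative rather than anything del.icio.us director actually uses.

```javascript
// Sketch of client-side coordination: fetch from several specialised
// data stores and invoke `done` once every response has arrived.
// `get(url, callback)` is the injected transport.
function coordinate(get, urls, done) {
  var results = {};
  var pending = urls.length;
  for (var i = 0; i < urls.length; i++) {
    (function (url) {               // closure so each callback sees its own url
      get(url, function (data) {
        results[url] = data;
        if (--pending === 0) done(results);  // all stores have answered
      });
    })(urls[i]);
  }
}
```

Nothing server-side orchestrates this; the "workflow" lives entirely in the page.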

Horizontal & Vertical Platforms

Lorcan Dempsey discusses the advance of the Web 2.0 Service Platforms following OCLC’s release of screenshots of a Google Maps – Open WorldCat demo.

It’s good to see others following our demonstration of the power of Google Maps in the library world.

OCLC gets on the Maps

It’s good to see OCLC following us in producing a screenshot of a Google Maps demo, as preview-announced by Thom Hickey in Outgoing and discussed as an example of using Web 2.0 platform services by Lorcan Dempsey in his blog posting Libraries, maps and platforms.

They are both correct to identify:

This sort of thing is possible to do quickly because of the open way that Google Maps is implemented as web services.

What we are looking at here are ‘platform’ services, using ‘platform’ in a Web 2.0 sort of way. In this sense, a platform is a resource which makes services available through machine interfaces. Others build applications drawing on platform services through APIs/web services. Amazon, Google and eBay are major platform service providers.

As with our Silkworm demonstration [Screencast - ], they have used the Google Maps service to add value to one of their services – Open WorldCat.

As Lorcan points out, the Talis Silkworm initiative is explicitly positioned as a ‘horizontal’ platform play in an environment of vertical ILS application builders. The demonstration shows how the Silkworm Directory and Deep-Link Access Services can be used to deliver innovative new Wapplications [a new word of mine to describe applications constructed from Web 2.0-style services, distributed across the network - I bet it doesn't catch on]. The diagram at the end of the screencast shows how the services are pulled together to construct it, with very little application-specific code living anywhere.

For more on Silkworm, check out the Silkworm web site. I can recommend the White Paper [pdf] and the other demo screencast, which incidentally utilizes OCLC’s xISBN service.
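xISBN is a good illustration of how thin these platform services are: you hand it one ISBN and get back a small XML list of related ISBNs. Here is a hedged sketch of consuming such a response client-side; the `<isbn>` element name and the regex-based parsing are illustrative assumptions, not the service's documented contract (a real client would use a proper XML parser).

```javascript
// Sketch: pull ISBN values out of an xISBN-style XML response.
// The element name is assumed; regex parsing keeps the example
// self-contained for illustration only.
function parseIsbns(xmlText) {
  var isbns = [];
  var re = /<isbn[^>]*>([^<]+)<\/isbn>/g;
  var m;
  while ((m = re.exec(xmlText)) !== null) {
    isbns.push(m[1]); // captured element text
  }
  return isbns;
}
```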

Longhorn – IE7 – RSS

Check out the IE Blog – Longhorn loves RSS

Watch the Channel 9 video Longhorn (heart) RSS. Warning: it’s 58 minutes long!

Microsoft announced at Gnomedex 5.0 that they are building RSS as a Platform in to Longhorn/IE7.

A few things from the video:

  • First we had browse, then browse and search, now we have browse and search and subscribe
  • RSS is too good to only be used for news feeds
  • New Microsoft extensions to RSS 2.0 released under Creative Commons
  • Extensions so that feeds can identify tags that items can be sorted/filtered by
  • Feeds as lists – changes in position or dropping off a list can be important events
  • IE spots feeds in a page, one-click to subscribe
  • My feeds, maintained by IE, available via Windows API to all applications
  • Demo – Outlook getting Calendar events from feed enclosures
  • Demo – Screen-saver getting picture enclosures from a photo-blog and overlaying them with the feed item title & description
  • RSS is everywhere

Well, we all suspected Microsoft were dipping their toe in the RSS pond; from this, it looks like we are going to get drenched by the splash they make!
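The "IE spots feeds in a page" point refers to feed autodiscovery: pages advertise their feeds with `<link rel="alternate">` elements carrying a feed MIME type, and the browser scans for them. A sketch of that scan (the function and the plain-object inputs are illustrative, not IE7's implementation):

```javascript
// Sketch of RSS feed autodiscovery: look through a page's <link>
// elements for rel="alternate" entries with a feed MIME type.
function findFeeds(links) {
  var feeds = [];
  for (var i = 0; i < links.length; i++) {
    var l = links[i];
    if (l.rel === "alternate" &&
        (l.type === "application/rss+xml" ||
         l.type === "application/atom+xml")) {
      feeds.push(l.href);   // candidate for the one-click subscribe button
    }
  }
  return feeds;
}
```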

They are certainly making the moves to unlock the power of RSS, having seen the first opening crack of the giant RSS door when Dave Winer launched the enclosure tag on the world.

Talking of Dave Winer, some people have been sniping at him for cooperating with Microsoft on these developments. Microsoft also got some flak about the way they implemented their extensions to RSS 2.0. Dave answers both of these in his RSS Blog. Good to see Microsoft are listening, even after their announcement.

I’ve been saying for ages that RSS is much bigger than just news feeds. It’s nice to have the odd person in a far away place like Seattle agree with you ;-}

Google AJAXSLT – more useful tools to create wizo browser apps

Google Code have announced a new Open Source project, AJAXSLT, which delivers XSLT & XPATH processing in a standard [so theoretically cross-browser] Javascript implementation.

I discuss it in more depth on the Silkworm Blog here.

So why is it so interesting? Well, it will make it far easier to produce browser-based applications that will not need ActiveX controls, Applets, etc. to be downloaded to run them. And you should be able to do it regardless of the label on the browser.

Having put together some example applications to demonstrate the power of the Silkworm Platform, I can appreciate the benefits that Google are bestowing on the rest of us. Keep it up, guys; you don’t know just how much you are helping the rest of us.

Google AJAXSLT – another step towards a browser-based networked OS?

Google’s announcement of AJAXSLT, a Javascript implementation of an XSLT transformation engine and associated XPATH engine, is an interesting development on the AJAX front. AJAX, promoted by Jesse James Garrett at Adaptive Path, is all about bringing together services in the browser to produce an application.

Great examples of this are Google Maps and Google Suggest.

What’s exciting about this latest development is that by introducing these tools in a standard way, regardless of browser type, it makes it easier to develop [or glue together] applications that run in a browser.

Traditionally, full-blown browser-based applications have been difficult to construct. For very obvious security reasons, access to PC-based services from the browser [running on that PC] has been difficult if not impossible. This has given rise to the joyous experience of downloading applets or ActiveX controls to enable you to do anything serious. A compromise, but not a very good one.

The new approach in the emerging service-oriented world is to get ‘services’ [out there somewhere on the net] to do the hard work of data manipulation, data access, business processing, calculation, etc. for you. These services return their output in XML. Amazon AWS, Google Web Services, OCLC’s xISBN and, of course, the services that will be supplied by the Silkworm Platform are examples. The browser then uses a combination of CSS, XHTML, etc. to render the application in the browser window. With the use of XMLHttpRequest, the browser asynchronously communicates with remote services whilst interacting with the user, delivering a fully interactive application experience. If you haven’t played with Google Maps yet, just do it and you’ll see what I mean.

Obviously, as the data is in a service-specific XML format [from a service], it is not always how the application constructor [are we developers, constructors or authors when we are working in this way?] would want it. This is where XSLT & XPATH are needed. The trouble is, prior to AJAXSLT, getting those to work depended on browser-specific plug-ins and techniques. Hence you will find many useful applications that will only work in Firefox, or Safari, or IE. These tools, produced in raw standard Javascript and available for use in constructing applications, start to standardize away the browser-dependency issues.

This dramatically lowers the barriers to the evolution of an operating environment which reliably surfaces itself within the browser, any browser.
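Putting the pieces together, a client-side transform step might look like the sketch below. The `xmlParse` and `xsltProcess` names follow AJAXSLT's published entry points, but treat the exact signatures as assumptions; the wrapper just shows where the library slots into an AJAX page, between the XML a service returns and the HTML the page renders.

```javascript
// Hedged sketch: transform a service's XML response in the browser.
// `lib` stands for the AJAXSLT library; its xmlParse/xsltProcess
// entry points are assumed here, not verified signatures.
function renderResponse(lib, xmlText, xslText) {
  var doc = lib.xmlParse(xmlText);          // service data as a DOM
  var stylesheet = lib.xmlParse(xslText);   // an XSLT stylesheet is itself XML
  return lib.xsltProcess(doc, stylesheet);  // markup fragment for the page
}
```

The point is that this runs identically whatever the label on the browser, because the engine is plain Javascript rather than a native plug-in.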