Steven Cohen at LibraryStuff points us to the University of Pennsylvania’s Online Books Page, and specifically to the RSS Feed for new books listed.
The Online Books Page — completely new to me — lists more than 20,000 full-text English-language books available around the Internet. While only the new books listing is available via RSS, the site itself lists them by author, subject, title, and LC subject heading.
Google Does an RSS Aggregator
Google has entered the RSS aggregator fray with a new beta product called Google Lens. I suppose the only surprise is that Google waited this long to release a product, even in beta. Weblogsinc offers a thorough review of the service.
Importing an OPML file is slow — I exported my roughly 90-feed subscription list from Bloglines and had Google Lens import it. Ten minutes later, 7 feeds showed up in my subscribed list in Google Lens, with a new one appearing every minute or two. My groupings were lost; it seems Google doesn’t have the concept of “folders.”
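For the curious, an OPML subscription list is just XML of nested outline elements — folders are outlines without an xmlUrl, feeds are outlines with one. A minimal Python sketch of reading one (the two-feed sample here is made up for illustration):

```python
import xml.etree.ElementTree as ET

SAMPLE_OPML = """<?xml version="1.0"?>
<opml version="1.1">
  <body>
    <outline title="Libraries">
      <outline title="LibraryStuff" type="rss"
               xmlUrl="http://www.librarystuff.net/feed/"/>
    </outline>
    <outline title="TheShiftedLibrarian" type="rss"
             xmlUrl="http://www.theshiftedlibrarian.com/rss.xml"/>
  </body>
</opml>"""

def list_feeds(opml_text):
    """Return (folder, title, url) tuples for every feed in an OPML file."""
    root = ET.fromstring(opml_text)
    feeds = []

    def walk(node, folder):
        for child in node.findall("outline"):
            url = child.get("xmlUrl")
            if url:  # an actual feed subscription
                feeds.append((folder, child.get("title"), url))
            else:    # a folder/grouping; recurse into it
                walk(child, child.get("title"))

    walk(root.find("body"), None)
    return feeds

for folder, title, url in list_feeds(SAMPLE_OPML):
    print(folder, title, url)
```

An importer that flattens this structure — keeping only the xmlUrl leaves and discarding the enclosing outlines — would lose groupings in exactly the way described above.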
I wonder when this and Google Mail will be integrated?
One-Stop Tag Searching: Kebberfegg
Kebberfegg is “a tool to help you generate large sets of keyword-based RSS feeds at one time.” What it does is quite simple, and in that lies its utility and cleverness. You enter a list of tags — user-supplied keywords to describe an RSS feed — and select one or more subject areas (Medical, News and News Search Engines, Technology, Web Search Engines, etc.). Kebberfegg translates those tags into URLs that work at all the various sites that employ tags (e.g., Technorati, Del.icio.us, Google Blog Search, and Daypop).
The list is displayed either as HTML or as an OPML file. The HTML is OK for a quick review of RSS feeds that you can select from and add to your favorite aggregator — an “Add to Yahoo!” link is provided for each feed. The OPML file is in many ways better, once you have honed your search, since you can import it directly into your aggregator of choice.
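The core trick — expanding one tag into many site-specific feed URLs — is a few lines of code. A sketch in Python, with made-up URL templates standing in for each site’s real (and varying) feed-URL patterns:

```python
from urllib.parse import quote

# Placeholder templates: each tagging site has its own URL pattern for
# tag feeds, so treat these as illustrative, not documented endpoints.
FEED_TEMPLATES = {
    "Technorati":  "http://technorati.example.com/tag/{tag}?format=rss",
    "Del.icio.us": "http://del.icio.us.example.com/rss/tag/{tag}",
}

def feeds_for_tags(tags, templates=FEED_TEMPLATES):
    """Expand each tag into one feed URL per tagging site."""
    return [(site, tag, tpl.format(tag=quote(tag)))
            for tag in tags
            for site, tpl in templates.items()]

for site, tag, url in feeds_for_tags(["rss", "libraries"]):
    print(site, tag, url)
```

Wrapping each resulting URL in an OPML outline element (as in the subscription-list format above) is what makes the output directly importable into an aggregator.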
The list of sites that use some form of tagging is impressive in itself — over 15!
What’s New in OAI-Compliant Repositories
First, some background. The Open Archives Initiative is a project to share the resources held in digital libraries. It defines a format for describing information about digital resources — articles, images, sounds, recordings, or virtually anything else — so that the holdings of various repositories can be easily shared among institutions. There are dozens of repositories, and hundreds of thousands of resources, in OAI-compliant digital collections.
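The sharing protocol (OAI-PMH) is deliberately simple: a harvester sends plain HTTP requests — a “verb” plus a few query parameters — and gets back XML records, typically in Dublin Core. A sketch of building such a request, against a hypothetical repository address:

```python
from urllib.parse import urlencode

def oai_request_url(base_url, verb="ListRecords", **kwargs):
    """Build an OAI-PMH request URL against a repository's base URL.

    OAI-PMH is plain HTTP: a verb (ListRecords, Identify, etc.) plus a
    few query parameters, answered with XML metadata records.
    """
    params = {"verb": verb, **kwargs}
    return base_url + "?" + urlencode(params)

# Harvest a (hypothetical) repository's holdings as Dublin Core records:
url = oai_request_url("http://repository.example.edu/oai",
                      metadataPrefix="oai_dc")
print(url)
```

Because every compliant repository answers the same requests, one harvester can aggregate metadata from dozens of institutions without per-site code.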
The Ockham Initiative builds tools based on the vast (and growing) universe of resources described by OAI. One of these tools, just released, is a search tool that provides an RSS feed of the search results in addition to the static view within the web interface.
As an example, here’s an RSS feed for an OAI search for RSS. Now this is a straight keyword search, so it pulls down some false positives (it turns out that “RSS” is a common acronym in other subject fields), but several clearly useful items are returned.
This is a great way to highlight otherwise hard-to-find resources on almost any topic.
On-The-Fly RSS by LC Number for Voyager
Wally Grotophorst, Associate University Librarian at George Mason University’s library, posts a small Perl application that searches his Voyager online catalog for a specific Library of Congress call number and returns the results as an RSS feed. He has an example of this feed embedded on GMU’s Library Systems Office home page (appropriately, new books on programming).
Wally embeds the code in a page using Feed2JS — but it would also be accessible to anyone who wants to track new books available in GMU’s libraries by subscribing to the RSS feed. The feed has a simple URL structure — in the example he posts, it is http://breeze.gmu.edu/cgi-bin/newrss.pl?QA76.
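Wally’s script is Perl, but the recipe — query the catalog for a call-number range, render the hits as RSS 2.0 — is small in any language. A Python sketch of the rendering half, with pretend catalog rows and a hypothetical catalog URL (the actual Voyager query is the part that varies by installation):

```python
import xml.etree.ElementTree as ET

def catalog_to_rss(title, link, items):
    """Render (title, url) catalog hits as a minimal RSS 2.0 document."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item_title, item_link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
    return ET.tostring(rss, encoding="unicode")

# Pretend these rows came back from a QA76 call-number search:
feed = catalog_to_rss(
    "New books: QA76",
    "http://catalog.example.edu/",
    [("Programming Perl", "http://catalog.example.edu/record/1")],
)
print(feed)
```

Passing the call number in the query string, as in Wally’s newrss.pl?QA76 URL, means one script can serve a feed for any LC range.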
The Perl code is pretty straightforward, though Wally says it’s not particularly well optimized as yet.
Publishers Missing a Niche?
I’ve stumbled on blogs describing new table-of-contents feeds direct from publishers. Aside from wondering what’s taking them so long, I’ve started wondering why publishers don’t better aggregate their own data rather than leaving that to other parties.
Why wouldn’t a publisher with a few dozen titles provide subject-based feeds across all their own journals? Or, for publishers with many titles, offer author-specific or institution-specific feeds? (Aggregators sometimes offer the former; I don’t think I’ve seen the latter anywhere.) While a prolific author may only have a couple of articles a year, if you’re interested in the same research area as scholar Waldo McGillicudy, you probably know his name and would want an easy way to be notified — pre-publication, even — when something new is coming out.
It would also be interesting to see institution-specific feeds: everything that comes from a faculty member at a particular research university or, for large institutions, a particular department.
Mixing Z39.50 and RSS
I’ve talked about lots of ways libraries are making it possible to learn about new materials via RSS — but what if you’re an eager beaver and want to get on the waiting list the second the book is in the catalog, not when it’s ready for circulation?
The Paranoid Agnostic writes, in a recent post, about Using RSS and Z39.50 to Find Books Your Library Doesn’t Have — Yet. He offers a Perl script that will query his library’s catalog (using Z39.50), find the most recently added items, and republish them via RSS — so he can then jump into the holds queue a bit ahead of the rest of the crowd.
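His code isn’t public yet (see below), but once a Z39.50 client has fetched the current batch of records, the “find the most recently added items” half reduces to simple bookkeeping: remember which record IDs you’ve already seen, and publish only the new ones. A sketch of that bookkeeping, with made-up record IDs:

```python
def newly_added(current_ids, seen_ids):
    """Return record IDs not seen on any previous poll (preserving
    order), plus the updated set of seen IDs."""
    new = [rid for rid in current_ids if rid not in seen_ids]
    return new, seen_ids | set(current_ids)

# Two successive polls of the catalog (hypothetical bib record IDs):
seen = set()
batch1, seen = newly_added(["b1001", "b1002"], seen)
batch2, seen = newly_added(["b1002", "b1003"], seen)
print(batch1, batch2)  # on the second poll, only b1003 is new
```

Each batch of new IDs would then be rendered as RSS items, exactly as in the Voyager example above.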
He’s not offering the code to the public yet — but tells you to watch his RSS feed for details. The author promises it before the Access 2005 conference in mid-October.
Open OpenURL Resolver
This is a bit far afield, but it got me wondering (not always a good thing).
OCLC has launched an alpha version of an OpenURL resolver. The idea behind this is fairly straightforward, but the devil, as always, will be in the details. OpenURL is a standard for formatting citations (of books, journal articles, etc.) as URLs that can be passed between a citation database and a full-text service to which the user (or the user’s library) has access.

For example, if you do a search in a database your library subscribes to and has activated OpenURL links for, you would see a link after each citation for finding the full text. That link would take you to a “link resolver” provided by your library, which would determine, based on the citation information in the link, the best full-text source for that particular item. It might be a full-text database, it might be a paper copy of the journal in the library stacks, or it might be interlibrary loan. You’d see a list of possible sources of the full text, which you could click through to.
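Concretely, an OpenURL is just citation fields appended to a resolver’s base URL. A sketch with a hypothetical resolver address (field names here follow the common genre/title/volume/spage pattern, but exact key names vary by OpenURL version):

```python
from urllib.parse import urlencode

def openurl(resolver_base, **citation):
    """Append citation fields to a resolver base URL, OpenURL-style."""
    return resolver_base + "?" + urlencode(citation)

# A hypothetical resolver and an illustrative article citation:
link = openurl(
    "http://resolver.example.edu/openurl",
    genre="article",
    title="Library Journal",
    volume="130",
    spage="24",
)
print(link)
```

The citation travels in the URL itself, which is why any database (or RSS feed) can emit such links without knowing anything about the resolver that will eventually answer them.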
Where this great system falls apart, a bit, is if you do not have access to a link resolver or if you are providing citations of one kind or another to people who are not part of your library’s licensing arrangements for full-text resources. For example, to bring it back to RSS, if you maintained a list of publications by your patrons (or your library staff) and published that list of citations by RSS, you’d want to make it easy for your RSS subscribers to get to the full text. Since you don’t know the URL of the link resolver each of your RSS subscribers uses (if they even have access to one at all), this becomes difficult.
Hence the OCLC OpenURL resolver. The idea is to provide a central resolver that will guess, based on the IP address of the particular user clicking on the link, what the appropriate link resolver might be. So if you are on the Tufts campus, for example, it will know (because Tufts told OCLC) what the Tufts resolver address is; in another similar environment, the OCLC resolver would likewise know where you are. This works less well, at least initially, for home users on broadband, but I’d guess it would still be possible to make good guesses, based on city, about what the public library’s link resolver would be (assuming cable and telephone companies assign blocks of IP addresses in a somewhat systematic way).
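The lookup itself is a table of registered address blocks mapped to resolver URLs. A sketch with invented blocks and hostnames (the real registrations live with OCLC, not in anyone’s script):

```python
import ipaddress

# Hypothetical registrations: institutions tell the central service
# which address blocks they own and where their resolver lives.
RESOLVERS = {
    "130.64.0.0/16":  "http://resolver.tufts.example.edu/",
    "129.174.0.0/16": "http://resolver.gmu.example.edu/",
}

def resolver_for(ip, table=RESOLVERS):
    """Return the registered resolver for an IP, or None if no block matches."""
    addr = ipaddress.ip_address(ip)
    for block, url in table.items():
        if addr in ipaddress.ip_network(block):
            return url
    return None

print(resolver_for("130.64.12.7"))  # falls in the first registered block
print(resolver_for("8.8.8.8"))      # unregistered, so None
```

The home-broadband problem is visible right in the table: a cable subscriber’s IP falls in their provider’s block, not their library’s, so the service would need provider-to-city mappings to make even a rough guess.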
Library Thing
There’s been lots of traffic in blogland about Library Thing, a service that lets you build your own personal catalog of books you’ve read, link to them on Amazon, pull subject and library cataloging information from the Library of Congress, and tag them with your own ad hoc subject terms.
Steve Cohen of Library Stuff, among others, beat me to the punch by suggesting RSS feeds would be a great add-on feature for Library Thing. And he’s right — it would open up collective book clubs, reading lists of your friends, and so on. And it’s the sort of thing that libraries in general should be adding to their catalogs and patron services. Why not allow those patrons who wish to publicize their reading list to do so? Let them create book lists and tell their friends and family where their book feed is.
Subject-Specific New Acquisitions via RSS
Check out the University of Alabama Library’s Recently Cataloged Titles Via RSS. Alabama faculty, staff, and students can subscribe to an RSS feed of new books as they are added to the library catalog. There are a whopping 325 subject feeds to choose from — which should make the topics sufficiently narrow that everyone will find something they want without feeling overburdened by books that are of no interest. I’ll bet that as this catches on, new books will have an instant waiting list.
According to Douglas Anderson, who developed this application, the service is brand new, so he’s not sure what the adoption rate will be on campus. But the library is planning several promotional activities during the fall term, primarily targeting faculty at first.