Facebook Now Offers Chat

The list of ways it’s possible to communicate with friends (in either the traditional “I’ve-known-you-since-childhood” sense or the current “who-the-heck-is-this-person” sense of the word) has just expanded by one more. Facebook now offers chat with any of your Facebook friends who happen to be visiting Facebook at the same time. Check out the new tool, which sits in the lower right corner of, apparently, every Facebook window, always on hand while you’re conducting important business on Facebook:

Facebook chat application

It helpfully shows you how many friends have a browser page open to the Facebook site, and (when you click the “Online Friends” link), how many are idle and how many are actively Facebooking something. It shows your current Facebook status message, too, when others look to see if you’re online.
The number of ways to stay in touch with people is exploding! I could Twitter you about the recent change to my Facebook status talking about the profile update I made in LinkedIn saying I posted a new entry to my blog, but I won’t. I’ll rely on good old RSS. The mind reels.

Libraries in Facebook

Facebook has a new tool, Lexicon, that “looks at the usage of words and phrases on profile, group and event Walls.” Similar to Google’s Zeitgeist, Facebook’s Lexicon shows how much people are talking about something.
So I tried a Lexicon search for “library”:

Facebook Lexicon for 'Library'

Predictably, Facebook’s users talk more about the library as the semester goes on — the rise through Fall 2007 is clear, peaking on December 10, plummeting over the winter holiday, and then slowly building through the winter until now; I suspect the peak is still a few weeks away.
“Research” and “study” show similar trends — you can see all three terms on one graph.

Next Generation Discovery Tools

There was a fair amount of discussion at the recent Computers in Libraries conference about “next generation discovery tools” — the technologies that, many of us hope, will supplant the now-aging OPAC concept and provide better, more interactive, and more extensive access to our libraries’ holdings.
Marshall Breeding has posted a guide to who is using these new interfaces at http://www.librarytechnology.org/discovery.pl, part of his extensive Library Technology Guides site. (I blogged about Marshall’s two presentations at CIL here and here.)
These next-generation tools, whether commercial offerings from the usual-suspect ILS vendors or open source projects from various places, hold great promise for improving our users’ ability to browse and find items our libraries have already acquired.

[Via Guideposts.]

CIL2008: Information not Location

My colleague Mike Creech and I presented “Findability: Information not Location” (3.3 MB, PPT) this afternoon. The talk abstract:

Learn how to foster user-friendly digital information flows by eliminating silos, highlighting context and improving findability to create a unified web presence. Hear how the University of Michigan Libraries (MLibrary) are reinventing the libraries’ web sites to emphasize information over the path users previously took to access it. By elevating information over its location, users are not forced to know which library is the “right” starting place. The talk includes tips for your library’s web redesign and user-centered design processes.

Our talk was blogged by Librarian In Black.
I had a great time at Computers in Libraries — there were more interesting talks than I could attend, let alone blog. I have some catching up to do through the CIL2008 tag cloud, clearly.

CIL2008: Open Source Applications

Open Source Applications

Glen Horton is with the SouthWest Ohio and Neighboring Libraries consortium.
Libraries and Open Source both:
– believe information should be freely accessible to everyone
– give stuff away
– benefit from the generosity of others
– are about communities
– make the world a better place
Libraries create open source applications (LibraryFind, Evergreen, Koha, VuFind, Zotero, LibX, etc.)
Miami University of Ohio has a Solr/Drupal OPAC in beta (beta.lib.muohio.edu). Not even a product — just a test environment.
How can you do this without a developer? You can contribute to the community in other ways. Teach how to use the open source tools your library has installed — even if not developed there. Hold classes for your patrons on how to use the tools that are available. Help build a user community around the open source tools that you think are of value.
You can document open source software — improve the documentation for other libraries. When you figure it out, help others down the same path. Documentation is often hit or miss; developers are not necessarily good documentation writers, or don’t have time to write it. You can also help debug open source tools. Report bugs! Influence the development path for the software. Bigger projects often have active support forums — lots of people reporting and fixing bugs. Smaller projects may not have that infrastructure.
Even if you don’t create or use open source software, you can promote it by linking to it from your web site, distributing it on CDs or thumb drives, etc.
“Open Source or Die.” Libraries benefit from open source — make sure that you are giving back to equal the benefit. Teach it, use it, document it, evangelize it.
Slides are at http://www.glengage.com/.

Open Source Desktop Applications

Julian Clark is at Georgetown University Law Library.
Why open source? It’s free! As in kittens. Which means acquisition costs nothing, but you’ve got a lifetime of maintenance and upkeep ahead of you. But even more so, you have control and customization: you can change it to make it look and act the way you want. And security — active communities keep applications safe and updated against whatever the latest attack might be.
Why now? FUD about Open Source is declining. (FUD = Fear, Uncertainty, and Doubt). As open source becomes more mainstream, gut reaction against it is on the decline.
When is the best time to adopt? When you’re ready; there’s no easy way to gauge this. It depends on your IT support, library management, colleagues… But it can fit into your major upgrade cycle: if you’re planning a major upgrade anyway, why not consider a switch rather than an upgrade? These upgrades often have long lead times; why not take advantage of that planning process to migrate? Adoption could also be triggered by reduced capital funding — when you have staff, but not money, to spend on your systems.
Can you do this? Do you have the right hardware to run the tool? (This applies to both back-end or web-based systems as well as to the operating system for public use computers — a replacement for Windows, for example.) Does your organization’s IT group support open source — how much can you do, with whom do you have to collaborate?
Support options — purchased 3rd-party support; often available, varying degrees of quality and availability depending on the software being supported. Can often hire for a project, for long-term, etc. Flexibility. Of course, there’s always in-house — someone on your staff who knows (or can learn) the software and who knows and understands your organization.
Q (for Glen): What are the risks of providing open source software to patrons who then want support from you for it?
A: Well, you can provide it explicitly as-is.

CIL2008: The Open Source Landscape

This is the presentation, from Marshall Breeding, that I had hoped for in yesterday’s keynote.

Marshall maintains a list of who has what catalogs on his Library Technology Guides site.
Federated search systems: LibraryFind; dbWiz (Simon Fraser); Masterkey (developed by Index Data). masterkey.indexdata.com for a demo.
OCLC offers some open source software — but not cutting edge stuff. Fedora is a major digital repository engine. VTLS Vital is based on Fedora. Fedora Commons is a support service around it. Keystone — also by Index Data.

Open Source Discovery Products (i.e., Next Generation Catalogs)

– VuFind. Built on Apache Solr/Lucene.
– eXtensible Catalog (Mellon-funded). Not a product now, but will be one day. XC is currently seeking institutional participation. This will “probably become a player” in the coming years.
– Others, such as Fac-Back-OPAC and Scriblio (formerly WPopac).

Open Source in the ILS Arena

Shifting from open source being risky to open source being mainstream. Medium-sized public libraries are going with open source solutions for their catalogs; it no longer requires massive technological effort or carries as much risk as it once did.
In 2002, the open source ILS was a distant possibility — 3 of 4 tools Marshall reviewed then (Avanti, Pytheas, OpenBook, and Koha) are now defunct. In 2002, open source ILS wasn’t a trend.
In 2007, world starting to change. Slowly. A few hundred libraries had purchased an open source ILS; 40,000 had purchased a commercial product. In March 2008 — early adopters are now catalysts for others. There’s a small installed base, which makes others see the possibilities as being real. It seems now that we have a bona fide trend.
The ILS industry is “in turmoil.” Companies are merging; libraries are faced with fewer choices from commercial vendors; this lends the open source ILS arena more credence from the standpoint of competition.
The decision to go open source is still primarily a business decision — as a library, you need to demonstrate that the open source ILS best supports the mission of the library.

Current Product Options

Koha was the first open source ILS. Based on Perl, Apache, MySQL, and the Zebra search engine (from Index Data). Has 300+ libraries using it, including Santa Cruz Public Library (10 sites and 2 million volumes). Has relevance-ranked search, book jackets, facets, all that jazz.
Evergreen. Developed by the Georgia Public Library consortium. Two-year development cycle (6/2004 – 9/2006). A single environment shared by all libraries. One library card. Switched from Sirsi Unicorn. Succeeded in part because of standardization of policies across libraries (lending policies, etc.). Used in Georgia, British Columbia, and Kent County (Maryland), and under consideration by a group of academic libraries in Canada. So far, only public libraries have adopted it.
OPALS (Open Source Automated Library System). Developed by Media Flex. Offered as both installed ($250) and hosted ($170) services. Used by a consortium of K-12 schools in NY.
NewGenLib. An ILS designed for the developing world. 122 installations (India, Syria, Sudan, Cambodia). Originally closed source, converted to open source in early 2008. More information at Library Technology Guides.
Learning Access ILS. Designed for underserved rural public and tribal libraries — a turnkey solution. But may be defunct, according to Marshall. Built on an early version of Koha, but customized.

Open Source Business Front

Several companies offer commercial support for open source ILS software: Index Data, LibLime (Koha), Equinox (Evergreen), Care Affiliates, and Media Flex.
Duke is working on an open source ILS for higher education (looking for funding from Mellon; Marshall is involved).

Open Source Issues

The rise in interest is led by disillusionment with traditional vendors. But total cost of ownership is probably about the same between open source and traditional tools. Libraries hope that they are less vulnerable to mergers and acquisitions. There’s no lump-sum payment (though you still need hardware, support — internal or external — and development money). It’s not always clear who is funding the next generation of the current system.
Risk factors: dependency on community organizations and commercial companies. Decisions are often based on philosophical reasons, but they shouldn’t be — you need to consider the merits of the system itself. Make sure features and functionality are what you need.
Open Source vendors/providers need to develop and present their total cost of ownership — with documentation.
“Urgent need for a new generation of library automation designed for current and future-looking library missions and workflows.” That is, systems built for our digital and print collections. Open source tools do OK for systems of yesterday; will they meet the needs of the new library?
Q: How close are we to a system that does not utilize MARC records?
A: Not very. We need systems that do MARC, and Dublin Core, and ONIX, and RDF, etc., etc. The value in existing MARC records is too large to ditch. (Of course, it needs to be MARC XML.)

CIL2008: Drupal and Libraries

Presented by Ellyssa Kroski.
Uses a course page she set up for her library school course as an example. Students each had a blog; they could tag their blogs and posts, favorite things within the community, share things via email, upload videos and photos, create and take user polls, keep buddy lists, and sign a guest book (i.e., a Facebook Wall). There was a class chat room and a tag cloud for the site’s tagged content, plus a “what’s new on the site” view of recently added and updated content.
Drupal runs on Apache, MySQL, and PHP. It has 3 components. 1) The core CMS that lets you organize and publish content to the web. This core functionality is well maintained, with a release schedule and bug fixing. 2) Contributed modules — things added by the user community. A bit of the “wild west” with these; not much oversight or control. Some are very well done; others not. 3) Themes — the skin on the site. Created with a combination of HTML, PHP, and CSS.
A very active, engaged user community, including many libraries. Most recognized, probably, is the Ann Arbor District Library, which wrote a custom module to place its OPAC into the Drupal framework. The L-Net staff intranet manages 65,000 virtual reference transcripts. Franklin Park Public Library uses Drupal — done by one person, not an IT guy. St. Lawrence University Library built a staff intranet as a communication tool for student workers on evenings and weekends, and is using Drupal to plan a redesign; the public web site, launching in fall 2008, will combine all the library web sites and includes a course resources module that will allow faculty to build course resource lists, with students able to vote on them, upload images, etc. IUPUI Library pulls databases from Metalib, via X-Server, and organizes them into appropriate subject guides by category; librarians’ subject guides are more frequently updated than before, thanks to the ease of updating.
Simon Fraser University Library uses Drupal for its workshops page. Users can register, be wait-listed, etc., and staff can manage registration lists; it uses the Drupal events module. Florida State University Libraries currently manage content through pages, but are moving toward a true CMS implementation. Red Deer Public Library, too. And many other examples.
Slides and links are available at
http://oedb.org/blogs/ilibrarian/2008/drupal-and-libraries-at-cil2008/

CIL2008: The New Generation of Library Interfaces

Presented by Marshall Breeding, Director for Innovative Technologies and Research, Vanderbilt University
Marshall Breeding maintains Library Technology Guides site. Today’s topic is next-generation catalogs.
Patrons are steering away from the library. Scarily low percentages of users think to start their research at the library. Libraries live in an ever-more-crowded landscape — there are so many places information seekers could go. Our catalogs and sites do not meet the expectations of our patrons. Commercial sites are engaging and intuitive. “Nobody had to take a bibliographic instruction class to buy a book on Amazon.com.”
A demand for compelling library information interfaces. Need a “less underwhelming experience” at a minimum.

Scope

Current public interfaces have a wealth of defects: poor search, poor presentation, confusing interfaces, etc. Users need to go here, or there, or elsewhere, to find the kind of information they’re looking for. We make them make choices. The entire audience agreed (by show of hands) that the current state of OPACs is dismal.
We need to decouple front end from the back end. Back end systems are purpose-built and useful (to us). Front end systems should be useful for users.
Features Breeding expects to see in next generation.
Redefinition of “library catalog” — needs a new name. Library interface? Isn’t just an item inventory. Must deliver information better. Needs more powerful search. Needs, importantly, a more elegant presentation. Keep up with the dot com world.
It must be more comprehensive — all books, articles, DVDs, etc. Print and digital materials must be treated equally in the interface. Users must not be forced to start in a particular place to find the material they want. They want information, not format. More consolidated user interface environment is on the horizon.
Search — not federated, but something more like OAI — searching metadata harvested from databases, not just the first results returned by each database. Coordinated search based on harvested/collected metadata. Reduces problems of scale. Still great problems of cooperation. Also — questions of licensing.
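The harvested-metadata approach boils down to pulling records out of each database ahead of time (OAI-PMH is the usual protocol) and indexing them locally, rather than broadcasting live queries. As a rough sketch, here is how a harvester might parse an OAI-PMH ListRecords response; the sample response and identifier below are invented, but the element structure and namespaces follow the real oai_dc format:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up OAI-PMH ListRecords response in oai_dc format. A real
# harvester would fetch pages like this over HTTP from each repository,
# follow resumptionTokens, and feed the records into a local index.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Library Science Quarterly</dc:title>
          <dc:creator>Example Author</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def parse_records(xml_text):
    """Pull (identifier, title) pairs out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    records = []
    for rec in root.findall(".//oai:record", NS):
        identifier = rec.findtext("oai:header/oai:identifier", namespaces=NS)
        title = rec.findtext(".//dc:title", namespaces=NS)
        records.append((identifier, title))
    return records

print(parse_records(SAMPLE_RESPONSE))
```

Once harvested, all the records live in one local index, which is what sidesteps the scale problems of live federated search (though not, as noted, the cooperation and licensing problems).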
Web 2.0 influences. Whatever the next system is, it needs to have a social and collaborative approach — tools and technologies that foster collaboration. That means integrating blogs, wikis, tagging, bookmarking, user ratings, user reviews, etc. Bring people into the catalog. At the same time, it’s important not to create web 2.0 information silos. Don’t put the interactive features off on the side — integrate them. Make it all mutually searchable.
Supporting technologies: Web services, XML APIs, AJAX, Widgets. The usual suspects.
The new system needs a unified interface: one front end, one starting point. Link resolver, federated search, catalog, web — all in the same place, same interface. It combines print and electronic, local and remote, locally created content, and even — gasp — user-contributed content.

Features and Functions

Even if there is a single point of entry, there should be an advanced search that lets advanced users get to specific interfaces. Relevancy-ranked results. Facets are big and growing. Query enhancement (spell check, did you mean, etc.) — to get people to the right resources. Related results, breadcrumbs, single sign-on, etc.
Relevancy ranking — Endeca and Lucene are built for relevancy. Many catalogs have default results lists by date acquired. However it’s done, the “good stuff” should be listed first. Objective matching criteria need to be supplemented by popularity and relatedness factors.
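A back-of-the-envelope illustration of supplementing an objective matching score with a popularity factor, as described above. The records, text scores, and the 0.3 weight are all invented for the example; a real system would take its text score from the search engine (Lucene, Endeca, etc.) and tune the blend:

```python
# Hypothetical records: (title, text_score, annual_circulation). The
# text_score stands in for whatever objective matching measure the
# search engine produces.
records = [
    ("Intro to Cataloging", 0.90, 2),
    ("Popular Reference Guide", 0.85, 400),
    ("Obscure Pamphlet", 0.88, 0),
]

def ranked(records, circ_weight=0.3):
    """Blend text relevance with normalized circulation frequency.
    The 0.3 weight is arbitrary, chosen just for illustration."""
    max_circ = max(c for _, _, c in records) or 1
    def score(rec):
        _, text_score, circ = rec
        return (1 - circ_weight) * text_score + circ_weight * (circ / max_circ)
    return sorted(records, key=score, reverse=True)

# The heavily circulated item rises above slightly better text matches.
for title, *_ in ranked(records):
    print(title)
```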
Faceted browsing — users won’t use Boolean logic, need a point-and-click interface to add and remove facets. Users will do an overly broad search; you can’t stop them. Let them, but give tools that allow them to correct their “mistake” easily. Don’t force them to know what you have before they search.
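The mechanics behind those point-and-click facets are simple enough to sketch: count the values of each field across the broad result set to build the clickable links, then filter when the user clicks one. The records and field names here are hypothetical:

```python
from collections import Counter

# A hypothetical, overly broad result set, exactly the kind of search
# users will do whether we like it or not.
results = [
    {"title": "US History", "format": "Book", "subject": "History"},
    {"title": "History on Film", "format": "DVD", "subject": "History"},
    {"title": "Art History", "format": "Book", "subject": "Art"},
]

def facet_counts(results, field):
    """Count results per field value; these counts become the clickable
    facet links shown beside the result list."""
    return Counter(r[field] for r in results)

def apply_facet(results, field, value):
    """Narrow the set when a facet is clicked. No Boolean syntax needed,
    and removing the facet just means re-running without this filter."""
    return [r for r in results if r[field] == value]

print(facet_counts(results, "format"))
print(apply_facet(results, "format", "Book"))
```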
Need spell check, automatic inclusion of authorized and related terms (so search tool includes synonyms without user having to know them). Don’t give them a link from “Did you mean…” to “no results found.” That’s rude. Improve the query and the results without making the user think about it.
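A toy sketch of both ideas: silently including synonyms, and a “did you mean” that only ever suggests terms actually present in the vocabulary (so it can never link to an empty result page). The synonym table and word list are invented; a real system would draw on LCSH, FAST, or another thesaurus:

```python
import difflib

# Invented stand-ins for an authority file / controlled vocabulary.
SYNONYMS = {"cars": {"automobiles"}, "automobiles": {"cars"}}
VOCABULARY = ["automobiles", "cars", "catalogs", "libraries"]

def expand_query(term):
    """Include synonyms without making the user know or type them."""
    return {term} | SYNONYMS.get(term, set())

def did_you_mean(term):
    """Suggest the closest vocabulary term for a likely misspelling.
    Because suggestions come from the indexed vocabulary, following
    one always yields results."""
    matches = difflib.get_close_matches(term, VOCABULARY, n=1)
    return matches[0] if matches else None

print(expand_query("cars"))       # both 'cars' and 'automobiles'
print(did_you_mean("libarries"))  # a real vocabulary term
```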
Don’t get hung up on LCSH — think about FAST. Describe collections with appropriate metadata standards. Good search tools can index them all, anyway. Use discipline-specific ontologies — even if not invented by librarians! — as they are the language of the users.
More visually enriched displays. Make them look nice. Book jackets, ratings, rankings.
Need a personalized approach. Single sign-on: users log in once, the system knows who they are, and that’s it. No repeated signing on. The ability to save, tag, comment on, and share content — all based on the user’s credentials — allows them to take the library into the broader campus environment.
Deep Search. We’re entering a “post-metadata search era”. We’re not just searching the headings of a cataloger, but we’re searching the full text of books and across many books. And we can soon search across video, sound, etc. Need “search inside this book” within the catalog.
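The core of “post-metadata” search is an inverted index built over full text rather than over catalog headings alone. A minimal sketch, with invented snippets standing in for digitized book text:

```python
from collections import defaultdict

# Tiny stand-in texts; in reality these would be full digitized books.
books = {
    "Moby-Dick": "call me ishmael some years ago",
    "Walden": "i went to the woods because i wished to live deliberately",
}

# Inverted index: word -> titles whose full text contains it.
index = defaultdict(set)
for title, text in books.items():
    for word in text.split():
        index[word].add(title)

def search_inside(word):
    """Find books by words in their text, not just their catalog record."""
    return sorted(index.get(word.lower(), set()))

print(search_inside("woods"))
```

The same index structure extends to “search inside this book” by storing positions per title, and, in principle, to transcripts of video and sound.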
Libraries aren’t selling things; we’re interested in an objective presentation of the breadth of resources available. Appropriate relevancy for us might include keyword rankings, library-specific weightings on those keywords, circulation frequency, OCLC holdings. Group results (i.e., FRBR). Focus results on collections, not sales.
What we do must integrate into our “enterprise” — university, government body, city government, etc. We need to put our tools out where the users are since (as we know) we’re losing the battle to make them come to us. Systems must be interoperable — get data out of ILS and into next generation systems. And hooks back into ILS from front end.
This won’t be cheap, in terms of money and effort both. But we can’t afford not to make this transition. We don’t have years to study and work to catch up with where we should have been years ago.
Is there an open source opportunity? Yes, but implemented systems are not taking the open source approach, for the most part.

I had hoped for a product review in this session, but the overview of features and desiderata was very helpful. There was a whirlwind tour of products at the end, but I would have liked a fuller overview of what’s out there.

CIL2008: Keynote on “Libraries: Innovative & Inspiring”

Erik Boekesteijn, Delft Public Library
Jaap van de Geer, Delft Public Library
Geert van den Boogaard, Delft Public Library
This session was a presentation and discussion of their Shanachietour 2007, in which they crossed the United States in an RV interviewing and filming librarians and patrons. They played a segment with the head of the NYPL, Paul _____, in which he talked about his efforts to “reoxygenate” the library.
Then they went to the Public Library of Charlotte & Mecklenburg County (North Carolina) to talk with Matt Gullett (of the ImaginOn) about gaming. “Containers” of information will change — but books aren’t going away. Technology will allow genres of information to find their appropriate digital (or analog) form. “The book is one of the best technologies ever invented, but it is a technology.” We forget that.
Next stop, Michael Stephens’ library school class at Dominican University. They brought up a library student from UIUC; she concluded her conversation with the filmmakers by saying that the best skill a librarian can have is to be open to change.
Ended back in Delft, at the Delft Library Concept Center (DOK) — a future-looking library. It still has books, of course, but also has all sorts of digital media and tools to use it with. Gaming, too, of course. The DOK is all about people, according to its director — people are the most important collection. From the video, the DOK has the feel more of a bookstore (à la Borders) than of a library: open, airy, inviting, and filled with people using the print and digital collections. Brings the digital into the library, rather than having the library be the access point to it.

CIL2008: User-Generated Content

Roy Tennant
Not an overview of ways users are creating content. If you want that, go buy Social Software in Libraries by Meredith Farkas. The focus will be on user-generated content on library-managed sites.
Roy’s tenets for user-generated content: More content is better. More access is better. Can provide more personalized service. Can foster interaction and community. We don’t know everything — we don’t know all we can know about our own collections. Our users can help remedy this. More data trumps better algorithms. (Google learned that the more data you have, the better your algorithms are. Code can’t make up for lack of data.)
Contributions of content. Institutional Repositories are a collecting point for user-created content. (This is often not thought of as a user-generated source.) Even if faculty aren’t doing it themselves, faculty are still getting their content into the library.
Kete.net is an open repository for whatever anyone wants to contribute. (Kete developed by the folks who did Koha.) They’re digitizing the Cyclopedia of New Zealand and are transcribing text. Also enabled software to handle genealogical information well. So a community can start to get a handle on genealogical past.
Descriptive contributions. Example of the Great Lakes Images, where they post photos and get community members to fill in details (names of subjects in photo, places, etc.) Library of Congress’s Flickr project is similar. 5.4 million views of content in first month. Immensely successful.
What has LC accomplished? Higher profile for collections. Enabled community engagement. And corrected metadata. But more importantly, sparked comments and conversation around the images being tagged. People became very involved in the images. And higher visibility for LC blog. Boston Public has done this, too. But they’ve had less traffic than LC.
Exploits knowledge of the masses. Library staff may not be closely connected to the collections they manage. They may not know much about the specific collections being featured. Web offers a feedback loop.
Bookspace at Hennepin County Library — offers community space around books. Has readers’ lists — on wide range of subjects, created by library users. Also guides by librarians; these are likely less specific and focused (not to mention less numerous).
Tags. They use user terminology — even if it’s “stupid,” it’s the user’s. There’s a very low barrier to use: type and click. Tagging is useful to the tagger (or else they wouldn’t be doing it), but it is also useful to others. However, tags can be redundant (for example, “blogs,” “blogging,” and “blog” are all, probably, the same). Phrases are often complicated and inconsistent. Steve is a tagging project run by several museums. A few tags often get applied by many users.
LibraryThing’s Tagmash brings together tags that are really synonymous. It works “pretty darn well” for bringing together works on a similar topic. The more data you have (the more users), the better the results.
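Even a crude normalizer shows how redundant tag forms can be collapsed. This is a deliberately naive stand-in for what a tag-merging feature like Tagmash must do with real user data at scale; the suffix rules and tags are invented for the example:

```python
from collections import Counter

raw_tags = ["blogs", "Blogging", "blog", "library", "blog"]

def normalize_tag(tag):
    """Collapse trivially redundant forms: blogs/Blogging/blog -> blog.
    A real system would use proper stemming plus co-occurrence data."""
    tag = tag.lower().strip()
    for suffix in ("ging", "s"):
        if tag.endswith(suffix) and len(tag) > len(suffix) + 2:
            return tag[: -len(suffix)]
    return tag

# Merging turns five raw tags into two meaningful groups.
merged = Counter(normalize_tag(t) for t in raw_tags)
print(merged)
```

As with Tagmash itself, the more taggers you have, the better this works: with enough data, the dominant merged form overwhelms the noise from bad normalizations.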
Third-party providers in this general space. SpringShare (LibGuides and LibMarks), LibraryThing for Libraries, ChiliFresh (book reviews by readers).
Things to keep in mind…
Our idea of content might not be our users’ idea. People are going to do weird things. It’s going to be messy, and that’s OK.
Need to know what your goals are. How do you distinguish between user content and library content? Will you need to moderate in some way?
We (libraries) need to do better at inviting our users in. We need to figure out how to get better at using these technologies.