CIL2008: The Open Source Landscape

This is the presentation I had hoped for in yesterday's keynote.

Marshall maintains a list of who has what catalogs on his Library Technology Guides site.
Federated search systems: LibraryFind; dbWiz (Simon Fraser); Masterkey (developed by Index Data). See masterkey.indexdata.com for a demo.
OCLC offers some open source software — but not cutting edge stuff. Fedora is a major digital repository engine. VTLS Vital is based on Fedora. Fedora Commons is a support service around it. Keystone — also by Index Data.

Open Source Discovery Products (i.e., Next Generation Catalogs)

VUFind. Apache Solr/Lucene.
– eXtensible Catalog (Mellon funded). Not a product now, but will be one day. XC is currently seeking institutional participation. This will “probably become a player” in the coming years.
– Others, such as Fac-Back-OPAC and Scriblio (formerly WPopac).

Open Source in the ILS Arena

Shifting from open source being risky to open source being mainstream. Medium-sized public libraries are going with open source solutions for the catalog; it no longer requires massive technological effort or as much risk as it once did.
In 2002, the open source ILS was a distant possibility — 3 of the 4 tools Marshall reviewed then (Avanti, Pytheas, OpenBook, and Koha) are now defunct. In 2002, open source ILS wasn’t a trend.
In 2007, the world was starting to change. Slowly. A few hundred libraries had purchased an open source ILS; 40,000 had purchased a commercial product. By March 2008, early adopters had become catalysts for others. There’s a small installed base, which makes others see the possibilities as real. It seems now that we have a bona fide trend.
The ILS industry is “in turmoil”. Companies are merging; libraries are faced with fewer choices from commercial vendors; this gives the open source ILS more credence from the standpoint of competition.
The decision to go open source is still primarily a business decision — as a library, you need to demonstrate that the open source ILS best supports the mission of the library.

Current Product Options

Koha was the first open source ILS. Based on Perl, Apache, MySQL, and the Zebra search engine (from Index Data). Has 300+ libraries using it, including Santa Cruz Public Library (10 sites, 2 million volumes). Has relevance-ranked search, book jackets, facets, all that jazz.
Evergreen. Developed by the Georgia Public Library consortium. Two-year development cycle (6/2004 – 9/2006). A single environment shared by all libraries. One library card. Switched from Sirsi Unicorn. Succeeded in part because of standardization of policies across libraries (lending policies, etc.). Used in Georgia, British Columbia, and Kent County (Maryland), and under consideration by a group of academic libraries in Canada. So far, only publics have adopted it.
OPALS (Open Source Automated Library System). Developed by Media Flex. Offered as both installed ($250) and hosted ($170) services. Used by a consortium of K-12 schools in NY.
NewGenLib. An ILS designed for the developing world. 122 installations (India, Syria, Sudan, Cambodia). Originally closed, converted to open source in early 2008. More information at Library Technology Guides.
Learning Access ILS. Designed for underserved rural public and tribal libraries — a turnkey solution. But may be defunct, according to Marshall. Built on an early version of Koha, but customized.

Open Source Business Front

Lots of companies offer business plans to help support ILS software: Index Data, LibLime (Koha), Equinox (PINES), Care Affiliates, and Media Flex.
Duke is working on an open source ILS for higher education (looking for funding from Mellon; Marshall is involved).

Open Source Issues

Rise in interest led by disillusionment with traditional vendors. But total cost of ownership is probably about the same between open source and traditional tools. Libraries hope that they are less vulnerable to mergers and acquisitions. There’s no lump sum payment (though you still need hardware, support — internal or external — and development costs). Not always clear who is funding the next generation of the current system.
Risk factors: dependency on community organizations and commercial companies. Decisions are often based on philosophical reasons, but they shouldn’t be — you need to consider the merits of the system itself. Make sure features and functionality are what you need.
Open Source vendors/providers need to develop and present their total cost of ownership — with documentation.
“Urgent need for a new generation of library automation designed for current and future-looking library missions and workflows.” That is, systems built for our digital and print collections. Open source tools do OK for systems of yesterday; will they meet the needs of the new library?
Q: How close are we to a system that does not utilize MARC records?
A: Not very. We need systems that do MARC, and Dublin Core, and ONIX, and RDF, etc., etc. The value in existing MARC records is too large to ditch. (Of course, it needs to be MARC XML.)

CIL2008: Drupal and Libraries

Presented by Ellyssa Kroski.
Uses a course page she set up for her library school course as an example. Students each had a blog; could tag their blogs and posts; favorite things within the community; share things via email; upload videos and photos; create and take user polls; buddy lists; guest book (i.e., Facebook Wall). A class chat room and tag cloud for site’s tagged content. What’s new on site — recently added/updated content.
Drupal runs on Apache, MySQL, and PHP. Has 3 components. 1) The core CMS that lets you organize and publish content to the web. This core functionality is well maintained, with a release schedule and bug fixing. 2) Contributed modules — things added by the user community. A bit of the “wild west” with these; not much oversight or control. Some are very well done; others not. 3) Themes. The skin on the site. Created with a combination of HTML, PHP, and CSS.
A very active/engaged user community, including many libraries. Most recognized, probably, is Ann Arbor District Library, which wrote a custom module to place the OPAC into the Drupal framework. L-Net staff intranet manages 65,000 virtual reference transcripts. Franklin Park Public Library uses Drupal — done by one person, not an IT guy. St. Lawrence University Library built a staff intranet as a communication tool for student workers on evenings and weekends, and is using Drupal to plan a redesign. The public web site, launching in fall 2008, will combine all library web sites. Includes a course resources module that will allow faculty to build course resource lists; students will be able to vote on them and upload images, etc. IUPUI Library pulls databases from Metalib, via X-Server, and organizes them into appropriate subject guides by category. Librarians have subject guides, more frequently updated than before (ease of updating).
Simon Fraser University Library uses Drupal for its workshops page. Users can register, be wait-listed, etc. Staff can manage registration lists. Uses the Drupal events module. Florida State University Libraries: content is currently managed through pages, but they are moving into more of a true CMS implementation. Red Deer Public Library. And many other examples.
Slides and links are available at
http://oedb.org/blogs/ilibrarian/2008/drupal-and-libraries-at-cil2008/

CIL2008: The New Generation of Library Interfaces

Presented by Marshall Breeding, Director for Innovative Technologies and Research, Vanderbilt University
Marshall Breeding maintains Library Technology Guides site. Today’s topic is next-generation catalogs.
Patrons are steering away from the library. Scarily low percentages of users think to start their research at the library. Libraries live in an ever-more crowded landscape — there are so many places information seekers could go. Our catalogs and sites do not meet the expectations of our patrons. Commercial sites are engaging and intuitive. “Nobody had to take a bibliographic instruction class to use a book on Amazon.com.”
A demand for compelling library information interfaces. Need a “less underwhelming experience” at a minimum.

Scope

Current public interfaces have a wealth of defects: poor search, poor presentation, confusing interfaces, etc. Users need to go here, or there, or elsewhere, to find the kind of information they’re looking for. We make them make choices. The entire audience agreed (by show of hands) that the current state of OPACs is dismal.
We need to decouple front end from the back end. Back end systems are purpose-built and useful (to us). Front end systems should be useful for users.
Features Breeding expects to see in the next generation:
Redefinition of “library catalog” — needs a new name. Library interface? Isn’t just an item inventory. Must deliver information better. Needs more powerful search. Needs, importantly, a more elegant presentation. Keep up with the dot com world.
It must be more comprehensive — all books, articles, DVDs, etc. Print and digital materials must be treated equally in the interface. Users must not be forced to start in a particular place to find the material they want. They want information, not format. More consolidated user interface environment is on the horizon.
Search — not federated, but something more like OAI — searching metadata harvested from databases, not just the first results returned by each database. Coordinated search based on harvested/collected metadata. Reduces problems of scale. Still great problems of cooperation. Also — questions of licensing.
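The harvest-then-search idea can be sketched as a tiny local index built from pre-harvested records. Everything here (record shapes, source names) is hypothetical; a real system would harvest via a protocol such as OAI-PMH on a schedule.

```python
# A sketch of coordinated search over harvested metadata (not live
# federated search): records are harvested ahead of time into a local
# index, so searching contacts no remote database at query time.

from collections import defaultdict

class HarvestedIndex:
    def __init__(self):
        self._postings = defaultdict(set)  # term -> set of record ids
        self._records = {}                 # record id -> metadata dict

    def harvest(self, record_id, metadata):
        """Ingest one pre-harvested record (e.g. from an OAI-PMH feed)."""
        self._records[record_id] = metadata
        for value in metadata.values():
            for term in str(value).lower().split():
                self._postings[term].add(record_id)

    def search(self, query):
        """AND together all query terms against the local index."""
        ids = None
        for term in query.lower().split():
            matches = self._postings.get(term, set())
            ids = matches if ids is None else ids & matches
        return [self._records[i] for i in sorted(ids or [])]

index = HarvestedIndex()
index.harvest("db1:42", {"title": "Open Source Library Systems", "source": "db1"})
index.harvest("db2:7",  {"title": "Library Automation Trends", "source": "db2"})
print([r["title"] for r in index.search("library systems")])
```

The scale problem shrinks because the expensive work (harvesting) happens offline; the cooperation and licensing questions are about getting that metadata in the first place.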
Web 2.0 influences. Whatever the next system is, it needs to have a social and collaborative approach. Tools and technologies that foster collaboration. That means integrating blogs, wikis, tagging, bookmarking, user ratings, user reviews, etc. Bring people into the catalog. At the same time, it’s important not to create web 2.0 information silos. Don’t put the interactive features off to the side — integrate them. Make it all mutually searchable.
Supporting technologies: Web services, XML APIs, AJAX, Widgets. The usual suspects.
The new catalog needs a unified interface. One front end, one starting point. Link resolver, federated search, catalog, web — all in the same place, same interface. Combines print and electronic. Local and remote. Locally created content, and even — gasp — user contributed content.

Features and Functions

Even if there is a single point of entry, there should be an advanced search that lets advanced users get to specific interfaces. Relevancy-ranked results. Facets are big and growing. Query enhancement (spell check, did you mean, etc.) — to get people to the right resources. Related results, breadcrumbs, single sign-on, etc.
Relevancy ranking — Endeca and Lucene are built for relevancy. Many catalogs have default results lists by date acquired. However it’s done, the “good stuff” should be listed first. Objective matching criteria need to be supplemented by popularity and relatedness factors.
Faceted browsing — users won’t use Boolean logic, need a point-and-click interface to add and remove facets. Users will do an overly broad search; you can’t stop them. Let them, but give tools that allow them to correct their “mistake” easily. Don’t force them to know what you have before they search.
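The point-and-click facet interaction described above can be sketched with made-up records: count the values of a facet field across the current result set, then narrow when the user clicks one.

```python
# A minimal facet sketch (record shapes are hypothetical): count each
# facet value in the current results, then filter on a clicked value so
# users can correct an overly broad search without Boolean logic.

from collections import Counter

def facet_counts(results, field):
    """Count values of one facet field across the current result set."""
    return Counter(r[field] for r in results if field in r)

def apply_facet(results, field, value):
    """Narrow the result set when the user clicks a facet value."""
    return [r for r in results if r.get(field) == value]

results = [
    {"title": "Dune",        "format": "Book", "language": "English"},
    {"title": "Dune (film)", "format": "DVD",  "language": "English"},
    {"title": "Duna",        "format": "Book", "language": "Spanish"},
]
print(facet_counts(results, "format"))  # Counter({'Book': 2, 'DVD': 1})
print([r["title"] for r in apply_facet(results, "format", "Book")])
```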
Need spell check, automatic inclusion of authorized and related terms (so search tool includes synonyms without user having to know them). Don’t give them a link from “Did you mean…” to “no results found.” That’s rude. Improve the query and the results without making the user think about it.
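One way to implement a "did you mean" step that never leads to an empty result page is to suggest only terms that exist in the index with a nonzero hit count. This sketch uses Python's difflib against a hypothetical vocabulary.

```python
# A sketch of a "did you mean" step that only offers suggestions that
# actually return results. The index contents are hypothetical.

import difflib

INDEX = {
    "phonology": 12,     # term -> number of matching records (assumed)
    "photography": 87,
    "phenomenology": 4,
}

def did_you_mean(query):
    """Suggest the closest indexed term, but never one with zero hits,
    so the user is never sent from 'Did you mean...' to 'no results'."""
    if query in INDEX:
        return None  # query already matches; no suggestion needed
    candidates = difflib.get_close_matches(query, INDEX, n=3, cutoff=0.6)
    for term in candidates:
        if INDEX[term] > 0:
            return term
    return None

print(did_you_mean("fotography"))  # photography
```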
Don’t get hung up on LCSH — think about FAST. Describe collections with appropriate metadata standards. Good search tools can index them all, anyway. Use discipline-specific ontologies — even if not invented by librarians! — as they are the language of the users.
More visually enriched displays. Make them look nice. Book jackets, ratings, rankings.
Need a personalized approach. Single sign-on. Users log in once, the system knows who they are, and that’s it. No repeated signing on. Ability to save, tag, comment, and share content — all based on the user’s credentials. Allows them to take the library into the broader campus environment.
Deep Search. We’re entering a “post-metadata search era”. We’re not just searching the headings of a cataloger, but we’re searching the full text of books and across many books. And we can soon search across video, sound, etc. Need “search inside this book” within the catalog.
Libraries aren’t selling things; we’re interested in an objective presentation of the breadth of resources available. Appropriate relevancy for us might include keyword rankings, library-specific weightings on those keywords, circulation frequency, OCLC holdings. Group results (i.e., FRBR). Focus results on collections, not sales.
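The library-appropriate relevancy factors listed above can be sketched as a weighted score. The field names and weights here are arbitrary placeholders, not anything Breeding specified.

```python
# A sketch of library-oriented relevancy: blend a keyword-match score
# with popularity factors (circulation frequency, OCLC holdings).
# Weights and record fields are hypothetical.

def relevancy_score(record, query_terms,
                    w_match=1.0, w_circ=0.3, w_holdings=0.1):
    """Objective matching supplemented by popularity factors."""
    text = (record["title"] + " " + record.get("subjects", "")).lower()
    match = sum(1 for t in query_terms if t.lower() in text)
    return (w_match * match
            + w_circ * record.get("circulation", 0)
            + w_holdings * record.get("oclc_holdings", 0))

records = [
    {"title": "Cataloging Rules",        "circulation": 2,  "oclc_holdings": 10},
    {"title": "Cataloging for the Web",  "circulation": 40, "oclc_holdings": 25},
]
ranked = sorted(records,
                key=lambda r: relevancy_score(r, ["cataloging"]),
                reverse=True)
print([r["title"] for r in ranked])
```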
What we do must integrate into our “enterprise” — university, government body, city government, etc. We need to put our tools out where the users are since (as we know) we’re losing the battle to make them come to us. Systems must be interoperable — get data out of ILS and into next generation systems. And hooks back into ILS from front end.
This won’t be cheap, in terms of money and effort both. But we can’t afford not to make this transition. We don’t have years to study and work to catch up with where we should have been years ago.
Is there an open source opportunity? Yes, but implemented systems are not taking the open source approach, for the most part.

I had hoped for a product review in this session, but the overview of features and desiderata was very helpful. There was a whirlwind tour of products at the end, but I would have liked a fuller overview of what’s out there.

CIL2008: Keynote on “Libraries: Innovative & Inspiring”

Erik Boekesteijn, Delft Public Library
Jaap van de Geer, Delft Public Library
Geert van den Boogaard, Delft Public Library
This session was a presentation and discussion of their Shanachietour 2007, in which they crossed the United States in an RV interviewing and filming librarians and patrons. They played a segment with the head of the NYPL, Paul _____, in which he talked about his efforts to “reoxygenate” the library.
Then went to the Public Library of Charlotte & Mecklenburg County (North Carolina) to talk with Matt Gullett (of the ImaginOn) about gaming. “Containers” of information will change — but books aren’t going away. Technology will allow genres of information to find their appropriate digital (or analog) form. “The book is one of the best technologies ever invented, but it is a technology.” We forget that.
Next stop, Michael Stephens’ library school class at Dominican University. Brought up a library student from UIUC; she concluded her conversation with the filmmakers by saying that the best skill a librarian can have is to be open to change.
Ended back in Delft, at the Delft Library Concept Center (DOK) — a future-looking library. It still has books, of course, but also has all sorts of digital media and tools to use it with. Gaming, too, of course. The DOK is all about people, according to its director — people are the most important collection. From the video, the DOK has the feel more of a bookstore (à la Borders) than of a library: open, airy, inviting, and filled with people using the print and digital collections. Brings the digital into the library, rather than having the library be the access point to it.

CIL2008: User-Generated Content

Roy Tennant
Not an overview of ways users are creating content. If you want that, go buy Social Software in Libraries by Meredith Farkas. Focus will be on user-generated content on library-managed sites.
Roy’s tenets for user-generated content: More content is better. More access is better. Can provide more personalized service. Can foster interaction and community. We don’t know everything — we don’t know all we can know about our own collections. Our users can help remedy this. More data trumps better algorithms. (Google learned that the more data you have, the better your algorithms are. Code can’t make up for lack of data.)
Contributions of content. Institutional Repositories are a collecting point for user-created content. (This is often not thought of as a user-generated source.) Even if faculty aren’t doing it themselves, faculty are still getting their content into the library.
Kete.net is an open repository for whatever anyone wants to contribute. (Kete developed by the folks who did Koha.) They’re digitizing the Cyclopedia of New Zealand and are transcribing text. Also enabled software to handle genealogical information well. So a community can start to get a handle on genealogical past.
Descriptive contributions. Example of the Great Lakes Images collection, where they post photos and get community members to fill in details (names of subjects in the photo, places, etc.). The Library of Congress’s Flickr project is similar: 5.4 million views of content in the first month. Immensely successful.
What has LC accomplished? Higher profile for collections. Enabled community engagement. And corrected metadata. But more importantly, sparked comments and conversation around the images being tagged. People became very involved in the images. And higher visibility for LC blog. Boston Public has done this, too. But they’ve had less traffic than LC.
Exploits knowledge of the masses. Library staff may not be closely connected to the collections they manage. They may not know much about the specific collections being featured. Web offers a feedback loop.
Bookspace at Hennepin County Library — offers community space around books. Has readers’ lists — on wide range of subjects, created by library users. Also guides by librarians; these are likely less specific and focused (not to mention less numerous).
Tags. Uses user terminology. Even if it’s “stupid,” it’s the user’s. There’s a very low barrier to use for users — type and click. It’s useful to the tagger (or else they wouldn’t be doing it anyway). But it is also useful to others. However, tags can be redundant (for example, “blogs,” “blogging,” and “blog” are all, probably, the same). Phrases are often complicated and inconsistent. Steve is a tagging project by several museums. A few tags often get applied by many users.
LibraryThing’s Tagmash brings together tags that are really synonymous. It works “pretty darn well” for bringing together works on a similar topic. The more data you have (the more users), the better the results.
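The tag-merging idea can be sketched with a crude suffix-stripping stemmer that collapses "blog", "blogs", and "blogging" into one group. Real systems, Tagmash included, rely on far more data and smarter matching than this.

```python
# A crude sketch of collapsing redundant tags into groups. The suffix
# list is a toy heuristic, not how LibraryThing actually does it.

from collections import defaultdict

def tag_stem(tag):
    """Very rough stemmer: lowercase, then strip a common suffix."""
    t = tag.lower().strip()
    for suffix in ("ging", "ing", "s"):
        if t.endswith(suffix) and len(t) - len(suffix) >= 3:
            return t[: -len(suffix)]
    return t

def merge_tags(tags):
    """Group raw tag variants under a shared stem."""
    groups = defaultdict(list)
    for tag in tags:
        groups[tag_stem(tag)].append(tag)
    return dict(groups)

print(merge_tags(["blog", "blogs", "blogging", "tag", "tags", "tagging"]))
```

As the talk notes, the more users (and thus the more tag data) you have, the better such grouping works.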
Third-party providers in this general space. SpringShare (LibGuides and LibMarks), LibraryThing for Libraries, ChiliFresh (book reviews by readers).
Things to keep in mind…
Our idea of content might not be our users’ idea. People are going to do weird things. It’s going to be messy, and that’s OK.
Need to know what your goals are. How do you distinguish between user content and library content? Will you need to moderate in some way?
We (libraries) need to do better at inviting our users in. We need to figure out how to get better at using these technologies.

CIL2008: Text Mining and Visualization of Open Sources

Patrice Slert
We’re talking about structured data from open sources (Web of Science, Dialog, Silobreaker, the Internet), not necessarily free sources. This is in contrast to intelligence data, where a lot of the technologies have applications, as well.
Visualization can mislead you in terms of cause and effect. It can also lead to false similarities (such as New England and England being presented as the same place).
Open Source Information (OSI) is growing. Intelligence community is recognizing the value of librarians in searching the open source information space.
ISI Web of Knowledge includes visualization and text mining capabilities. However, limited to databases provided through ISI. To mix and match with data available through other vendors, need to use other products, such as VantagePoint. VantagePoint allows you to create filters for importing data from various sources.
SiloBreaker — a news analysis tool, commercially available. It lets you mine for information via word searches, visual searches, people, organizations, and industries — ways of pulling together relationships among these facets. It pulls out networks of people, as reported in news stories. It provides a way to look at the news and see who is appearing in news articles about the subject. You can expand your search — or refocus it — by diving deeper into related people, organizations, companies, etc.

CIL2008: Library Web Presence

Widgets at Penn State

Emily Rimland. A Facebook application for the library led them to think about simple pages. Built “Research JumpStart,” aimed at beginning users. Uses widgets — little bits of content taken from their source and dumped into another page.
Widgets provide easy access to popular, most valuable resources. Once you have widgets, you can place them in other environments (iGoogle, for example). Widgets help you compartmentalize your information and provide just what’s needed, when it’s needed.
Widgets on JumpStart page: 1) Catalog search. 2) ProQuest 3) Research guides for specific courses/subjects — just the guides that are most used by undergraduates. 4) Chat widget (they use AIM).
iGoogle widgets have proved very popular. Faculty and students have liked taking the search tools and RSS feeds and creating a personalized page.
Binky Lush: how these were developed. They use WidgetBox, which provides widgets for all sorts of services (iGoogle, PageFlakes, social networking sites, etc.) and provides code for your own site, to include in a blog, etc. All of Penn State’s widgets are hosted by WidgetBox. The “get widget” chicklet that appears on the JumpStart page in each gadget gives you a window with options for all sorts of places the widget can be embedded — code customized for each site — or raw HTML.

I wonder whether it makes sense to host this sort of content on an external site. What are the advantages? There’s the obvious ease of creating the widget, but shouldn’t core services be hosted locally?

WidgetBox lets you create a Facebook widget, but doesn’t fully take advantage of Facebook’s social graph — so Penn State is developing their own.

LibGuides at Temple

Derik Badman and Kristina DeVoe
Original subject guides were static pages, long lists of annotated links. They were based on Contribute, which was not easy to use, according to Derik. No functionality other than what was on the page.
Brought in LibGuides in spring 2007. Had a semester to migrate all 90+ guides into LibGuides; it was fairly easy to do. Creating and maintaining guides is easy. Also very flexible. Content of a guide can be organized by resource type (as always), but also by any other categories the library wants — time period, topic, etc. Units of a class, paper topics, anything that’s needed. And that the librarian has time for.
Content is modular. Easy to take a content block from one guide to another. Easy to share.
Users can find guides by subject, by tags, by “featured resources”, by recently updated, by ratings. Users can comment on guides — either on guide as a whole or on a section. Allows community building to start. LibGuides also has a polls feature — about the guide, or anything else.
They’ve added widgets (chat, calendar, etc.) as well as direct search boxes so that users can search directly in featured resources without having to first go to a page and then search. Similarly, tailored federated search. Pull in RSS feeds from various sources — for example, table of contents for specific journals or news.
Have used for course guides — a guide not just for a subject, but for a particular course. Resources are targeted to specific classes and contain resources that are relevant at that point in the semester.
Usage has gone up significantly (static guides vs. dynamic guides).
Marketing is important. Students need to know the new guides exist, that they are better than the old.
What else can LibGuides be used for? Ideas… 1) Information literacy. For example, adding descriptions of “primary sources” to the Temple history guide. 2) Co-opt faculty; invite them to get involved and become partners in creating the resources, tailored for their needs.
Question: What are the privacy implications of using a service like WidgetBox or LibGuides?
Answer: LibGuides doesn’t save any data. No user accounts are created. It is hosted at LibGuides. WidgetBox is similar, but it’s not clear how much data is stored.
Question: How easy is it to integrate guides into the local web site?
Answer: We don’t know yet. Redirected old URLs to new. But since LibGuides is hosted, it’s not on the same server.
Question: Are other sites embedding PSU’s widgets in their sites?
Answer: We don’t know — don’t have that level of detail as to where it gets embedded.
Update 4:30 PM 7 April Corrected name of first PSU speaker and corrected link. Update 11:20 PM 7 April Corrected Derik’s name. Not my day for getting names right.

CIL2008: Mobile Search

Megan Fox and Gary Price
Slides and more will be available at web.simmons.edu/~fox/mobile/

Mobile Market

3.3 billion mobile phones. 46 million wireless subscribers used mobile search (mostly through text, not web browsers, on the phone).
iPhone users are responsible for 50 times the traffic in mobile search. 85% of iPhone users accessed news and information on their phone (compared to 58% of other wireless users). Most searches are simple, single words (it’s hard to enter text on a mobile device). Gary thinks that next year voice search will be the new thing — you say your query, you get results by text or email.
Some search tools are carrier-specific; some are phone-specific.
People who search from mobile devices are generally looking for “ready reference” information (facts, figures, stock prices, weather, etc.). Rarely in-depth research. Search engines have mobile search interfaces, aimed at handheld devices. They assume that the mobile user wants facts and information, and that users don’t want to type much. Searches are aggregated across silos otherwise provided to web users (so news, images, sites, etc., are listed on one page, not on several). This trend — “one search” for Yahoo, “universal search” for Google — is on the rise in web searching, too.
How to deliver high-bandwidth content to mobile devices with different capabilities, and with providers that allow different traffic, is a challenge.
Yahoo’s mobile search has ‘snippets’ — stripped down ‘widgets’ — that give you a preview of web content you frequently access.
Google indicates pages tailored for mobile devices with a tiny green icon. There are sites that transcode — convert for mobile use — regular web pages to mobile pages. They work differently, though; some handle different kinds of content better than others.
Live Search — Live Mobile. Makes assumptions about your future searches based on past use. Also uses personal search histories; things you’ve searched before are remembered and influence future searches.
4info — lets you search by text.
Alerts — services will watch news (sports, etc.) for certain thresholds, and will send you a text or email alert when something happens (a score is close in the 7th inning, etc.)
Medio — Working on a “predictionary” — predicts the words you are going to finish typing, based on words you’ve typed in the past. Does on the mobile device what your browser does in remembering past search queries.
Lots of mobile meta-search/federated search tools. MCN, obovo.com, upsnap are up-and-coming players in this market.
Using your phone’s camera, take a picture of something and send it to omoby, mobot, or snapnow; it sends back a search response based on the photo. Also new 2D barcodes — take a picture, and it prompts your phone to pull down a URL, send a text message, etc. These are much more common in Europe/Asia.
chacha — call 1-800-2chacha or text “chacha”, say your question, get an answer by text. Humans do the answering. They provide an answer and a source URL. Not clear who is doing research (probably not librarians!)
Location-based search — based on where your phone says it is, gives you localized search results.
Location-based search — actually, more like a directory. You say where you are, it offers you categories that you can look through. The return of Gopher!
Clusty, a search clustering engine, works well in the mobile environment. It brings back results by kind (a search for “apple” offers company, fruit, etc., categories as filters).
Behavioral targeting on mobile devices is coming. Real estate is small, importance of what gets sent there is critical. Making sure that the right content gets to the mobile device is important.
Spinvox — Listens to your phone calls, sends you information on topics you discuss. Can also update your blog from dictation.
Searchme uses a presentation of search results like “cover-view” in iTunes or iPhone. Results pages are presented in thumbnail view that you can flip through one at a time.
A directory of hundreds of search tools is available for the next two weeks: go to mlvb.net and log in with rubble888 and cil2008.

CIL2008: Going Local in the Library

Presented by Charles Lyon (SUNY Buffalo).

What is the local web

The web viewed through a lens of where you are. Not just spatial, but lots of other information you need. Which stores are open now? Which are in good neighborhoods? Which can handle my particular needs? Doing local information is hard; it’s very individualized.
Google does this better than anyone. Search results are customized to where you are. But they don’t include the really useful information a true local could give you.
Google spends a lot of effort on this — so libraries should, too. Google is the bellwether.
So what is the local web? Some pieces:

  1. local search engine
  2. maps
  3. local media
  4. local photos/data/video/blogs
  5. local social networks
  6. local people — this is the most important part.

The local web is social. It’s user-generated, participatory, amateur, civic, grassroots, citizen’s journalism. It’s by and from the community it serves.
It’s localized — about neighborhoods, communities, blocks, streets, buildings. Not just geographical areas, but about “imagined communities” — people who see themselves as part of a small unit.
Local web is joining the real world and the virtual world. Interconnection between the two. It brings the placeless infosphere — the cloud — down to wherever you are. It reverses the “antisocialization” that was feared in the early days of the Internet.
Local web brings a sense of place to the Internet. It’s becoming big business — lots of companies competing in this space.

What do libraries bring to the local web

Information — local information (events, community directories, guides to local events and communities).
What can libraries do that extends this?
Everyday life is still local. The internet is getting more local. Web 2.0 has many local applications. Libraries are community-focused institutions. Libraries have experience with local information… There is an opportunity for libraries to become even more locally focused in the web environment.
Strategies: become expert users of local resources. Raise awareness and assist the community in using online local resources. Broaden the scope of local data collection. Become active participants in community-focused resources. And create locally-focused content.

Examples of local 2.0

Local search: Libraries can enhance their own listings in local search engines and advertise (at no cost!) in the local search engine. Or create your own search engine that searches only the sites you specify — libraries can build a search tool that includes only the stuff that’s relevant to their clientele.
Local blogs: placeblogs, metroblogs, neighblogs. Create a local blog directory. And once you’ve found them, add them to your local search engine. And libraries can blog themselves — not about the library, per se, but about the community it serves. Whether broadly or narrowly focused, you can take advantage of library’s knowledge (or librarian’s knowledge).
Local News: News refocused on local geography — the news that happens close to you. They’re blog-like: people can comment on news articles, set up profiles, learn about neighbors.
Locally-focused online communities (Skokie Talk, MyHamilton.ca). Wikis focused on local area, open to contribution by community.
Local data: HelloMetro.com and EveryBlock.com (San Francisco, Chicago, NYC only). News for your neighborhood at block level. Building permits, restaurant inspections, graffiti, all sorts of things that are important to the neighborhood. Much of this is already available — but not aggregated by address. EveryBlock is grant-funded and will open-source its code at the conclusion of the project.
Local Photos: Geotagging attaches geographic metadata to online information. It can be as simple as a zip code, as complex as latitude-longitude. Geotagging makes it easy to find things. Flickr is leading the drive for this in photos. Libraries can aggregate local photos.
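To make the idea concrete, here is a minimal sketch (the photo records, coordinates, and bounding box are my own made-up illustration, not from the talk): a geotag is just latitude/longitude metadata attached to content, and once content carries it, aggregating by place is a simple filter.

```python
# Hypothetical photo records with geotags (lat/lon in decimal degrees).
photos = [
    {"title": "Main St. branch", "lat": 41.88, "lon": -87.63},
    {"title": "Vacation shot",   "lat": 48.86, "lon": 2.35},
]

def in_neighborhood(photo, south, north, west, east):
    """True if the photo's geotag falls inside the bounding box."""
    return south <= photo["lat"] <= north and west <= photo["lon"] <= east

# Aggregate only the locally tagged photos (a Chicago-ish box, for illustration).
local = [p["title"] for p in photos
         if in_neighborhood(p, 41.6, 42.1, -88.0, -87.5)]
print(local)  # ['Main St. branch']
```

A real library project would pull these records from a photo-sharing service’s geo-search rather than a hand-built list, but the principle is the same: the metadata does the work.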
Maps: It’s easy to create a custom map.

Why are libraries primed for local?

Local is cheap. Using free services. Guidespot, ineighbors. Local sites generally don’t generate revenue — they’re labors of love. Perfect for libraries. Also, it’s not too late — there’s no winner in the local web. There are lots of kinds of local data that aren’t web accessible yet. Much of local data is not easily automated; still requires people to determine relevance to the locale. Helps build good will.
This can be applicable to academic libraries, too — local as the campus, not just the community.

CIL2008: Keynote on “Libraries Solve Problems”

I’m attending Computers in Libraries 2008 and will be blogging many of the sessions I attend… I’ll post my (mostly unedited) notes. If you’re at CIL, look me up!
Presented by Lee Rainie, Director of Pew Internet & American Life Project.
Blogging is about information and communication. This is what makes the Internet so wonderful. That’s what the era of user-generated content is all about.
Information used to be scarce, expensive, and institutionally oriented. Now it’s abundant, cheap, and personally oriented.
In 2000: 46% of adults used the internet; 73% of teenagers did. 5% had broadband at home. 50% owned a cell phone. Nobody connected wirelessly; the phone line ruled.
2008: 75% of adults and 93% of teens use the internet. 54% have broadband at home. 78% own a cell phone. 62% connect wirelessly (42% via wireless laptop connections, 59% via cell phones on data networks; the combined figure is 62%). Cell phone users tend to be minorities and the less well educated, which reverses earlier digital-divide fears. Wireless connectivity is a determinant of Internet behavior. It results in a resurgence of email: on a cell phone, email matters a lot. News, broadly defined, becomes more important, too. Fast and mobile connections rule.
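A quick back-of-envelope check of those connectivity figures (my own arithmetic, not from the talk): if 42% connect via wireless laptops, 59% via cell-phone data networks, and 62% connect wirelessly in any way, inclusion–exclusion gives the implied share who do both.

```python
# |A ∩ B| = |A| + |B| - |A ∪ B|
wireless = 0.42  # wireless laptop connections
cell = 0.59      # cell phones on data networks
either = 0.62    # connect wirelessly by any means

both = wireless + cell - either
print(round(both, 2))  # 0.39 → about 39% use both kinds of connection
```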
The home media ecology is immensely complex. Data moves from this to that (TiVo to computer, cell phone to cable box, etc.). Internet becomes “cloud” — it’s where important stuff is stored. The Internet is the computer and storage device. This has huge, not yet understood, implications.
Content creation: 62% of young adult users have uploaded photos to the internet; 34% of all users have done this. It’s an obligation of sorts to photo-document their lives. Pictures are the currency of community building and communication.
58% have created a profile on social networks (33% of adults) on MySpace, Facebook, etc. 39% of online teens (13% of online adults) share and create content online.
A quarter of online teens help others get their stuff online.
33% of online college students keep blogs. 54% of online college students read blogs. 12% of online adults have blogs; 35% read them. This gets hard to measure because blogging is baked into all sorts of tools. Reading blogs even more so; what’s a blog? What do people recognize as a blog?
19% of online young adults have created an avatar that interacts with others. 6% of online adults do this.
New research on libraries in the information ecosystem. Original question was from GPO — how do people want government documents (online, print, mail, etc.)? Survey grew to be much broader: How do people get information to help them solve problems that could have a government connection or be aided by government resources?
Asked about 10 broad areas: health, schooling, taxes, jobs, Medicare, Social Security, voter registration, local government, legal actions, immigration. About 80% of respondents had been through at least one of these problem classes and needed information; that’s about 169 million adults. The survey asked where they found information, with libraries included in the possible responses. 53% of adults had been to a local library in the past year. By generation: Gen Y (18-30): 62%; Gen X (31-42): 59%; Trailing Boomers (43-52): 57%; Leading Boomers (53-61): 46%; Matures (62-71): 42%; After Work (72+): 32%. The youngest cohort had the highest use of libraries. Teen use of libraries: 60% of online teens use the internet at libraries, up from 36% in 2000. Youth use libraries, contrary to expectations.
Those who use libraries are more likely to come from higher-income households, more likely to be Internet users, and more likely to have broadband at home. Parents with minor children at home are also more likely to be patrons. Libraries matter more in the Internet age, not less (contrary to previous expectations). Internet users are more active in information gathering and usage than non-users. No real difference in patronage based on race or ethnicity.
How do people solve problems? What sources did you use when you confronted the most recent problem you faced? 58% used the Internet overall; 53% turned to professionals, then other sources. Library use was higher among young adults (18-29): 21%; Blacks: 26%; Latinos: 22%. Younger people relied on libraries, as did minorities and lower-income users.
Most popular problem-solving searches at libraries: schooling/education and finding ways to pay for it. Then jobs, serious illness, taxes, Medicare/Medicaid.
Once people are at the library… 69% got help from staff. 68% used computers (38% got technical assistance). 58% sought reference materials. People and resources matter. Libraries are a social learning experience.
Future intentions: Would you go back to the library for a future problem? Overall, about 29% were at least somewhat likely. But the numbers were higher for the less well off (40%), Gen Y (41%), the less educated (41%), Latinos (42%), and Blacks (48%).
Why are youth so library-centric? Lee’s hypothesis: they have the most recent experience with libraries (through school assignments). Based on recent experience, they are more aware of how libraries have changed, more than other age groups. They know libraries can help.

Takeaways and Implications

Public education efforts about what libraries do and how we have changed are likely to pay off. Focus on success stories and competence. The people who know us best are the ones who keep coming back.
Patrons are happy and zealous advocates. Encourage your patrons to evangelize on your behalf. Give them Web 2.0 tools and, if needed, training to use them. They are eager to give you feedback.
Your “un-patrons” are primed to think of libraries. Need to let them know what you offer: tools available, training, mentoring skills, comfortable environment.
This is the era of social networks. People rely more now on social networks than ever before. They are for learning, news/navigation, support, and problem solving. This last point is very important. Libraries can have a huge role in this. How can the library be a node in a social network?
Virtual communities are becoming more person-centric: not created by a “publisher”, but built ad hoc around your friends and people you trust.