Reference is Fun!

Sep 27, 2009 12:48


Well, ok, not always.  But I haven't done it in a while and today was my first real shift doing online reference.  Kinda cool to get paid to learn how matches were invented and other stuff like that.

aaaaaaaaaand...so much better than writing schedules :)

Bonus Material:  anybody wishing to read my first essay...



When readers of The Magic Tree House series came into the bookstore I worked at, they quite often asked for these books in ways that had absolutely nothing to do with how our product was organized: by subject, title, and author. Instead, fans would often ask for "the Jack and Annie books" - all common words that seemingly give us little useful information. And yet, just about every child who asked for "the Jack and Annie books" was quickly directed to the correct shelf. Document retrieval often appears so simple as to be unremarkable when it works well, but the seemingly effortless tasks of first determining the unique perspective of the user, using that information to interpret their question and retrieve relevant information, and then determining whether the query has been successfully answered are in fact the core problems that Information Retrieval sets out to solve.

The ability to determine whether we have found what we are looking for may seem as though it is hardly worthy of being designated a task at all, much less a fundamental problem. That assumption, however, is built on a technology-driven tradition of asking one question at a time, looking for very specific information, and asking an information specialist to assist us in our query (Meadow, Boyce, Kraft & Barry, 2008). Information searches are more accurately described as both evolving and segmented. What people are looking for changes throughout their search, as each new data set they review gives them more information and possible inspiration (Bates, 1989). Determining the relevance of query responses is in fact often a complicated process rather than a simple one, in large part because relevance not only varies from user to user but also changes over time (Morville, 2005). The child who simply wants to browse the collection today may tomorrow be looking for the latest release instead.

The capacity to evaluate the relevance of query answers is a learned behavior (Meadow et al.). Many seven-year-olds could not definitively answer in the affirmative when I asked them to verify that "the Jack and Annie books" they wanted were those in The Magic Tree House series; almost none recognized the author. Children, of course, are novices at all of this, but so in a sense is every student working on a research paper, every adult child wanting to learn about a parent's illness, and every web user looking for information on a subject that is relatively unknown to them (Meadow et al.). This is why we often feel overloaded with information. It is not just because the amount of information being generated has been increasing exponentially since digitization (Morville); it is also because our historical filters of limited publishing and authoritative gatekeepers are no longer always assisting us in narrowing our search and evaluating the credibility of material (Shirky, 2008). Information retrieval requires that users themselves be able to determine whether the data uncovered is relevant in terms of how easily they can access and make use of the information, how credible they deem the source, and how difficult it was to find the information (Morville).

As a children's bookseller or librarian, you quickly learn what Jack and Annie's fans are asking for not just by experience but also out of necessity, as most seven-year-olds' ability to come up with additional metadata is rather limited. You learn to use the information you are able to gather about the patron - in this case age and reading level - in order to make better guesses about what they want. As users get older, we expect them instead to understand and conform to the set rules, metadata, and search vocabularies we have created in an attempt to increase the relevance and precision of search queries (Weinberger, 2007). However, just as we cannot expect users to arrive already knowing what they are looking for, neither can we ignore issues of perspective and semantics by assuming that everyone is familiar not only with the terms a database uses, but also with the precise meaning of each term in that particular instance. Instead, we need to work on creating systems that are able to recognize flexible relationships between words and adjust search results according to data gathered about the searcher (Meadow et al.). They also need to do this in a way that requires minimal time and effort on the part of the user (Morville), responds to concerns about privacy and security (Morville), and recognizes copyright issues (Weinberger).

In creating the systems that assist us in retrieving information, we necessarily spend considerable time and effort considering various design features, such as the organization of information and the manner in which the query program works, but the hardest task of all is understanding users (Meadow et al.). Luckily, the uniqueness of the digital age is not just that metadata and data are one and the same (Weinberger); it is also that every user is also a producer and that failure often costs much less than it used to (Shirky).

Bibliography

Bates, Marcia J. (1989) The Design of Browsing and Berrypicking Techniques for the Online Search Interface. Retrieved September 10th, 2009, from http://www.gseis.ucla.edu/faculty/bates/berrypicking.html

Meadow, Charles T., Boyce, Bert R., Kraft, Donald H., & Barry, Carol. (2008) Text Information Retrieval Systems. Bingley, UK: Emerald.

Morville, Peter. (2005) Ambient Findability. Sebastopol, CA: O'Reilly.

Shirky, Clay. (2008) Here Comes Everybody. New York: Penguin Press.

Weinberger, David. (2007) Everything is Miscellaneous. New York: Times Books/Henry Holt and Co.

