Last Friday, Shirley and I headed down to London for the TiLE workshop: '"Sitting on a gold mine" — Improving Provision and Services for Learners by Aggregating and Using Learner Behaviour Data.' The aim of the workshop was to take a 'blue skies' (but also practical) view of how usage data can be aggregated to improve resource discovery services at a local, national, and potentially global level. Chris Keene from the University of Sussex library has written a really useful and comprehensive post about the proceedings (I had no idea he was feverishly live-blogging across the table from me — but thanks, Chris!)
I was invited to present a 'Sector Perspective' on the issue, and specifically on the 'Pain Points' identified around 'Creating Context' and 'Enabling Contribution.' The TiLE project suggests a lofty vision in which, given sufficient context data about a user (derived from goldmines such as attention data pools and the profile data stored within VLEs, library service databases, and institutional profiles — you know, simple enough ;-), services could become much more Amazon-like. OPACs could suggest to users, 'First Year History students who used this textbook also highly rated this textbook…' and such. The OPAC would thus be transformed from a relic of the past into a dynamic online space enabling robust 'architectures of participation.'
This vision is very appealing, and certainly at Copac we're doing our part to interrogate how we can support *effective* adaptive personalisation. Nonetheless, as a former researcher and teacher, I've always had my doubts as to whether the library catalogue per se is the right 'place' for this type of activity.
We might be able to 'enable contribution' technically, but will it make a difference? An area that perhaps most urgently needs attention is research on the social component of, and drivers for, contributing user-generated content. As the TiLE project has identified, the 'goldmine' that could galvanise such usage is 'context' or usage data. But is it enough, especially in the context of specialised research?
As an example of the potential 'cultural issues' that might emerge, the TiLE project cites the questionably nefarious tag 'wkd bk m8' submitted against a record. They ask, "Is this a low-quality contribution, or does it signal something useful to other users, particularly to users who are similar to the contributor?"
I'd tend to agree with the latter, but would also say that this is just the tip of the iceberg when it comes to rhetorical context. Consider, for example, the user-generated content that might arise around contentious works on the 'State of Israel.' The fact that Wikipedia has multiple differing and 'sparring' entries on this topic is a good indicator of the complexity that emerges. This is incredibly rich complexity, but on a practical level it is potentially very difficult for users to negotiate. Which UGC-derived 'context' is relevant for which users? Will our user model be granular or precise enough to adjust accordingly?
One of the challenges of accommodating a system-wide model is tackling semantic context. Right now, for instance, Mimas and EDINA have been tasked with producing a demonstrator for a tag recommender that could be implemented across JISC services. This seems like a relatively simple proposition, but as soon as we start thinking about semantic context we are immediately confronted with the question of which concept models or ontologies to draw from.
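To make the idea concrete, here is a minimal sketch of the kind of co-occurrence-based tag recommender such a demonstrator might start from. The data and function names are invented for illustration, and the sketch is deliberately ontology-agnostic: it treats tags as opaque strings, which is exactly why the semantic-context question still bites.

```python
from collections import Counter, defaultdict
from itertools import combinations

def tag_cooccurrence(tagged_records):
    """Build a tag -> Counter map of how often tags appear together
    on the same record. `tagged_records` is an iterable of tag sets."""
    cooc = defaultdict(Counter)
    for tags in tagged_records:
        for a, b in combinations(sorted(tags), 2):
            cooc[a][b] += 1
            cooc[b][a] += 1
    return cooc

def suggest_tags(cooc, existing_tags, n=3):
    """Suggest up to n tags that most often co-occur with the tags
    already applied to a record."""
    scores = Counter()
    for tag in existing_tags:
        scores.update(cooc[tag])
    for tag in existing_tags:
        scores.pop(tag, None)  # don't re-suggest tags already present
    return [tag for tag, _ in scores.most_common(n)]

# Toy catalogue: each set is the tags users have applied to one record.
records = [
    {"medieval", "history"},
    {"medieval", "history", "manuscripts"},
    {"history", "tudor"},
]
cooc = tag_cooccurrence(records)
print(suggest_tags(cooc, {"medieval"}))  # ['history', 'manuscripts']
```

A real demonstrator would need to go much further, normalising tag variants and mapping tags onto whichever vocabularies or ontologies the participating services settle on; that is precisely where the 'which ontology?' question arises.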
Semantic harvesting and text-mining projects such as the Intute Repository Search have pinpointed the challenge of 'ontological drift' between disciplines and levels. As we move into this new terrain of Library 2.0, this drift will likely become all the more evident.
Is the OPAC too generic to support the kind of semantic precision that enables meaningful contribution? I have a hunch it is, a hunch shared by other participants when we broke out into discussion sessions.
But perhaps the goldmine of context data, that 'user DNA,' will provide us with new ways to tackle the challenge, and there was also a general sense that we need to forge ahead on this issue — try things out and experiment with attention data. A service that aggregates both user-generated and attention/context data would be of tremendous benefit, and Copac (and other services like it) could then move to a model where adaptive personalisation is supported. Indeed, as a system-wide service, Copac has great potential as an aggregator in this regard.
There is risk involved around these issues, but there are some potential 'quick wins' of clear immediate benefit. Another speaker on Friday was Dave Pattern, who, within a few minutes of 'beaming to us live via video from Huddersfield,' had released the University of Huddersfield's book usage data (check it out).
This is one goldmine we're only too happy to dig into, and we're looking forward to collaborating with Dave over the next year to find ways to exploit and further his work in a national context. We want to implement recommender functions in Copac, but also (more importantly) to work at Mimas on developing a system for storing and sharing usage data from multiple UK libraries (any early volunteers?!). The idea is that this data can also be reused to improve services at a local level. We're just at the proposal stage in this whole process, but we feel very motivated, and the energy of the TiLE workshop has only motivated us more.
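As a rough illustration of the sort of 'quick win' circulation data makes possible (the loan data and names below are invented for the example, not drawn from the Huddersfield release), a 'borrowers of this item also borrowed…' recommender can be built from nothing more than anonymised borrower/item loan pairs:

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(loans):
    """loans: iterable of (borrower_id, item_id) pairs from anonymised
    circulation records. Returns item -> Counter of co-borrowed items."""
    items_by_borrower = defaultdict(set)
    for borrower, item in loans:
        items_by_borrower[borrower].add(item)

    co_borrowed = defaultdict(Counter)
    for items in items_by_borrower.values():
        for a, b in combinations(sorted(items), 2):
            co_borrowed[a][b] += 1
            co_borrowed[b][a] += 1
    return co_borrowed

def also_borrowed(co_borrowed, item, n=3):
    """'Borrowers of this item also borrowed...', ranked by co-loan count."""
    return [other for other, _ in co_borrowed[item].most_common(n)]

# Toy anonymised loan data: (borrower, item) pairs.
loans = [
    ("u1", "hist101"), ("u1", "hist205"),
    ("u2", "hist101"), ("u2", "hist205"), ("u2", "geog300"),
    ("u3", "hist101"), ("u3", "geog300"),
]
co = build_cooccurrence(loans)
print(also_borrowed(co, "hist101"))
```

Part of the appeal of a shared national store is that co-loan counts aggregated across many libraries would smooth out the sparseness that any single institution's data suffers from.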