New & Noteworthy

Heidi Hanson (University of Maryland, College Park, Maryland, USA)
Zoe Stewart-Marshall (Honolulu, Hawaii, USA)

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 5 October 2015

Citation

Hanson, H. and Stewart-Marshall, Z. (2015), "New & Noteworthy", Library Hi Tech News, Vol. 32 No. 8. https://doi.org/10.1108/LHTN-08-2015-0058

Publisher: Emerald Group Publishing Limited


Article Type: New & Noteworthy From: Library Hi Tech News, Volume 32, Issue 8

CSUDH receives grant to lead digitization of Japanese American internment documents

The National Park Service has awarded a two-year, $321,554 grant to California State University, Dominguez Hills’ (CSUDH) Archives and Special Collections to serve as principal investigator on a collaborative project among archives at 15 CSU campuses to digitize nearly 10,000 documents and more than 100 oral histories related to the confinement of Japanese Americans during World War II. The CSU Japanese American Digitization Project will make these materials available on a CSU-sponsored Web site and will also result in a teaching guide and traveling exhibit for schools and the public.

“It is heartening to have the National Park Service acknowledge the scale and importance of the CSU’s collections”, said CSUDH Director of Archives and Special Collections, Greg Williams. “The grant will ensure that this significant part of our history can be studied for generations to come”.

Many campuses throughout the CSU system were located near California’s incarceration camps and Japanese American communities. Throughout the past half century, their archives, libraries, oral history projects and history departments have collected archival and manuscript materials, objects and media relating to Japanese internment that have yet to be digitized.

With the grant money, CSUDH, along with participating CSU archives at Bakersfield, Channel Islands, East Bay, Fresno, Fullerton, Long Beach, Northridge, Sacramento, San Jose, San Bernardino, San Diego, San Luis Obispo, San Francisco and Sonoma, will have the resources to digitize and catalog more of their records.

The grant was one of 20 awarded by the National Park Service totaling more than $2.8 million to help preserve and interpret the World War II confinement sites of Japanese Americans. More than 120,000 Japanese Americans, two-thirds of whom were American citizens, were imprisoned by the US Government following Japan’s attack on Pearl Harbor on December 7, 1941.

This is the second grant the CSUDH archives department has received in support of this project, which Williams said is the first time for such a large-scale collaboration of archival collections within the CSU. In summer 2014, the National Endowment for the Humanities awarded CSUDH a $40,000 planning grant to create the centralized Web site.

CSU Japanese American Digitization Project: http://www.csujad.com/

U of Minnesota libraries receive National Endowment for the Humanities grant to preserve Guthrie theater archives

The University of Minnesota Libraries’ Performing Arts Archives has been awarded $100,000 from the National Endowment for the Humanities (NEH) to arrange, preserve and describe the records of the Guthrie Theater. The libraries also will work with the Guthrie to create a sustainable plan for preserving its twenty-first-century materials, including digital records.

As part of this year-long project, the libraries also will identify materials that may be digitized and included in the Digital Public Library of America and other national digital libraries and online sources.

The Guthrie Theater collection at the University of Minnesota contains documentation from every play produced at the Guthrie and the theater’s organizational history. Included are annotated scripts, performance reports, programs, reviews, photographs, costume bible scrapbooks, audio and video recordings and more.

Though much of the Guthrie Theater history is already accessible at the Elmer L. Andersen Library, more recent materials have remained largely uncataloged. This project will allow the collection in its entirety to be accessed through online inventories and in the libraries’ reading room.

“We have a wonderful opportunity to bring one of the Performing Arts Archives’ premier collections into the twenty-first century with a flourish”, said Cecily Marcus, curator for the Performing Arts Archives. “We will have new boxes, improved online access, access to the complete collection, and plans in place to preserve a new era in the Guthrie’s history”.

With this grant, the libraries propose to:

  • arrange and describe nearly 800 cubic feet of records following current archival standards and best practices;

  • create an online archival finding aid;

  • photocopy at-risk materials;

  • identify priority content for digitization; and

  • create a twenty-first century records management plan in collaboration with Guthrie staff and leadership that addresses born-digital preservation and access.

The project will end in July 2016, with public access available to the full collection.

Full press release: http://www.continuum.umn.edu/2015/05/neh-grant-will-help-libraries-preserve-guthrie-archives/

The Guthrie Theater: http://www.guthrietheater.org/

Stewardship of the evolving scholarly record: New report from OCLC research

The scholarly record is increasingly digital and networked, while, at the same time, expanding in both the volume and diversity of the material it contains. The long-term future of the scholarly record cannot be effectively secured with traditional stewardship models developed for print materials. Stewardship of the Evolving Scholarly Record: From the Invisible Hand to Conscious Coordination, by Brian Lavoie and Constance Malpas, describes the key features of future stewardship models adapted to the characteristics of a digital, networked scholarly record and discusses some practical implications of implementing these models.

Key highlights include:

  • As the scholarly record continues to evolve, conscious coordination will become an important organizing principle for stewardship models.

  • Past stewardship models were built on an “invisible hand” approach that relied on the uncoordinated, institution-scale efforts of individual academic libraries acting autonomously to maintain local collections.

  • Future stewardship of the evolving scholarly record requires conscious coordination of context, commitments, specialization and reciprocity.

  • With conscious coordination, local stewardship efforts leverage scale by collecting more of less.

  • Keys to conscious coordination include right-scaling consolidation, cooperation and community mix.

  • Reducing transaction costs and building trust facilitate conscious coordination.

  • Incentives to participate in cooperative stewardship activities should be linked to broader institutional priorities.

The long-term future of the scholarly record in its fullest expression cannot be effectively secured with stewardship strategies designed for print materials. The features of the evolving scholarly record suggest that traditional strategies, built on an “invisible hand” approach of uncoordinated, institution-scale efforts by individual academic libraries maintaining local collections autonomously, are no longer suitable for collecting, organizing, making available and preserving the outputs of scholarly inquiry.

As the scholarly record continues to evolve, conscious coordination will become an important organizing principle for stewardship models. Conscious coordination calls for stewardship strategies that incorporate a broader awareness of the system-wide stewardship context; declarations of explicit commitments around portions of the local collection; formal divisions of labor within cooperative arrangements; and robust networks for reciprocal access. Stewardship strategies based on conscious coordination involve an acceleration of an already perceptible transition away from relatively autonomous local collections to ones built on networks of cooperation across many organizations, within and outside the traditional cultural heritage community.

This work is part of OCLC Research’s Understanding the System-wide Library theme, exploring the characteristics and organization of collections, services and infrastructure at scale. The goal of this work is to improve our understanding of the factors that guide institutions in their sourcing and scaling choices, as they seek maximum impact and efficient provision of library collections and services.

Read/download the report: http://www.oclc.org/content/dam/research/publications/2015/oclcresearch-esr-stewardship-2015.pdf

University of California, Los Angeles library builds a collaborative model for digital scholarship; video from CNI

The University of California, Los Angeles (UCLA) Library recently launched the Los Angeles Aqueduct Digital Platform (LAADP), an online educational resource featuring an archives portal, which allows users to search thousands of digitized primary sources across the holdings of seven institutions, and a scholarship section, which showcases digital research projects created by UCLA graduate students. The LAADP was developed in UCLA Library Special Collections by the Center for Primary Research and Training (CFPRT), an innovative program that equips scholars with the skills to use special collections effectively in their research and teaching careers.

The CFPRT served as a nexus for the LAADP project, pulling together the expertise of UCLA archivists, librarians, technologists, faculty and students to accomplish two programmatic objectives. First, the CFPRT conceived the project as a method for better integrating the library into contemporary methods of learning and research. The project was strategically designed to inspire students to challenge traditional notions of academic work and realize new pathways for archival study and scholarship. For the LAADP, CFPRT graduate scholars participated in mass digitization projects, archival processing, digital storytelling and a summer-long, team-based digital humanities project. This hands-on practice provided the scholars the opportunity to enhance their digital and archival research skills, while strengthening their abilities in project management and inter-disciplinary collaboration.

Second, the CFPRT used the project as a vehicle for establishing a scalable infrastructure at UCLA library that supports creating and providing access to digital primary source scholarship. The LAADP produced new workflows and procedures that Special Collections now use for in-house digitization and metadata creation and established efficient methods for conducting copyright risk assessment for digitized primary sources. It also enabled the Digital Library Program to investigate new approaches and technologies for providing access to digital collections and content. Many of these practices have been codified into a Digital Projects Toolkit, which has enabled UCLA library to undertake subsequent high-quality, large-scale digital projects with more agility and confidence. The Toolkit is now widely available online so that other institutions can replicate these processes.

This presentation from the Coalition for Networked Information’s (CNI) spring 2015 meeting describes the building of the Los Angeles Aqueduct Digital Platform. In this session, archivists, a librarian and a PhD candidate discuss working across departments, disciplines and skill sets to successfully build a model for supporting and producing digital scholarship in the library. “A Platform for Partnership: Collaborating Across UCLA Library and Campus” is now available online from CNI’s video channels on YouTube and Vimeo.

YouTube: https://youtu.be/CPt8gKsHe1s

Vimeo: https://vimeo.com/126294905

Los Angeles Aqueduct Digital Platform: http://digital.library.ucla.edu/aqueduct/

UCLA Library Special Collections Digital Project Toolkit: http://library.ucla.edu/special-collections/programs-projects/digital-projects-special-collections

Open Preservation Foundation launches new membership, software supporter models

The Open Preservation Foundation (OPF) has introduced a new membership model to make participation more accessible to all organizations with a commitment to ensuring long-term access to digital content. Alongside the new membership model, the OPF has also established a software supporter model to ensure the sustainability and future development of the open source software tools in its stewardship.

“Over the last year the OPF has established a solid foundation for ensuring the sustainability of digital preservation technology and knowledge”, explains Dr Ross King, Chair of the OPF Board. “Our new strategic plan was introduced in November 2014 along with community surveys to establish the current state of the art. We developed our annual plan in consultation with our members and added JHOVE to our growing software portfolio. The new membership and software supporter models are the next steps toward realising our vision and mission”.

“We are an international not-for-profit organization”, adds Ed Fay, Executive Director of OPF. “We carry out a range of activities to meet our members’ priorities but we also support a number of digital preservation services and tools which are open to the whole community. These all require time and resources to sustain. It is only through commitment and contributions from our members and supporters that we are able to continue our work to sustain digital preservation technology and knowledge. We look forward to welcoming new collaborators who share this commitment to advancing digital preservation”.

The new membership model opens participation to organizations of all sizes, with membership tiers based on an organization’s annual operating budget. Membership is available in two categories: Charter and Affiliate. Charter members steer the OPF’s strategy and annual planning; they also benefit from exclusive or priority access to interest groups, training and events, and from support in adopting and maintaining the OPF’s open source software products. Affiliate members have access to the outputs of the OPF’s activities and may choose to contribute effort in lieu of subscription fees to further digital preservation for the benefit of the community.

The software supporter model allows organizations to support individual digital preservation software products and to ensure their ongoing sustainability and maintenance. The OPF is launching support for JHOVE based on its broad adoption and need for active stewardship; JHOVE is also a component in several leading commercial digital preservation solutions. While the software remains fully open source, supporters can steer the OPF’s stewardship and maintenance activities and receive varying levels of technical support and training.

For more information about becoming an OPF member or software supporter visit: http://openpreservation.org/about/join/

PREMIS Data Dictionary, Version 3.0, now available

The PREMIS Editorial Committee has announced the availability of PREMIS Data Dictionary for Preservation Metadata, Version 3.0. This is a major new version with a revised data model, enhancing the ability to express information about software and hardware environments and intellectual entities.

Specific changes in this version include:

  • Intellectual Entities become another category of PREMIS Object. In versions 1 and 2, an Intellectual Entity was a separate entity and was out of scope for description using PREMIS except for an identifier to link to it from other PREMIS entities. This change will enable a repository to represent an aggregate, such as a collection, FRBR work or expression, fonds or series; to capture descriptive metadata; to associate business requirements with it (such as significant characteristics, risk definitions, guidelines for preservation actions, etc.); to support structural and derivative relationships; to make rights statements; and to establish relationships to preservation events. In addition, it will allow the repository to capture versioning information and metadata update events at the Intellectual Entity level for resources such as articles or issues.

  • The data model is revised so that software and hardware environments can be described and preserved by reusing the Object entity. To preserve Digital Objects, repositories need information about the elements of the technical stack of software, hardware and other dependencies needed to correctly interpret the representations, files and bitstreams. This is particularly important for resources that depend on combinations of hardware and software for their use, for example multimedia or Web sites. In previous PREMIS versions, environment descriptions were associated with each individual Object; now they may be described as Intellectual Entities and preserved as Representation, File or Bitstream Objects. Semantic units that are specific to Environment descriptions capture the function and designation of the Environment and may link to environment descriptions in external registries. Environments can be represented as aggregates or as individual components (e.g. an executable file, a stylesheet); therefore, relationships become crucial. A direct relationship between Agents and Objects will now be used to capture the Environment that acted as the Agent in an Event.

  • Physical Objects can be described as Representations and related to digital Objects and are thus no longer out of scope for PREMIS descriptions.

  • preservationLevelType is added as a new semantic unit to indicate the type of preservation functions expected to be applied to the object for the given preservation level. An example might be where the preservation level type is “bit preservation level” and the repository elaborates by assigning “low”, “medium” or “high”.

  • agentVersion is added to the Agent entity to express the version of software Agents.

  • compositionLevel is no longer restricted to an integer, so that “unknown” may be used if the information is not available. An unknown value may also apply to formatName, which is a mandatory semantic unit, for unidentified formats.

The PREMIS Data Dictionary for Preservation Metadata, Version 3.0 includes extensive discussion of the revised data model and the expanded description of environments in its Introduction and Special Topics sections.

To enable automated workflows, there are many PREMIS semantic units that recommend that the value be taken from a controlled vocabulary. In previous versions, the Data Dictionary included suggested values; most of these were included in the Library of Congress’s Linked Data Service for Authorities and Vocabularies. In this version, some examples rather than suggested values are given, and the semantic unit refers to the specific vocabulary in the id.loc.gov system. Additional terms will be added to accommodate the new or revised semantic units in version 3.0.
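To make the controlled-vocabulary recommendation concrete, the following Python sketch builds minimal PREMIS-style Object and Event descriptions as plain dictionaries. This is illustrative only: the field names mirror semantic units discussed above but do not follow the official PREMIS XML serialization, and the id.loc.gov event-type URI is shown merely as an example of the kind of vocabulary URI the Data Dictionary points to.

```python
# Illustrative sketch only: plain-dictionary stand-ins for a few PREMIS 3.0
# semantic units. Not the official PREMIS XML serialization.

def make_object_entry(identifier, format_name="unknown", composition_level="unknown"):
    """Build a minimal PREMIS-style Object description.

    In version 3.0, compositionLevel is no longer restricted to an integer,
    and formatName may likewise be recorded as "unknown" for unidentified
    formats.
    """
    return {
        "objectIdentifier": identifier,
        "objectCategory": "file",
        "compositionLevel": composition_level,   # integer or "unknown"
        "format": {"formatName": format_name},
    }

def make_event_entry(event_id, event_type_uri, agent_name, agent_version):
    """Build a minimal Event linked to a software Agent.

    eventType values are recommended to come from a controlled vocabulary;
    the URI passed in would typically point at the Library of Congress
    id.loc.gov Linked Data Service. agentVersion is new in version 3.0.
    """
    return {
        "eventIdentifier": event_id,
        "eventType": event_type_uri,
        "linkingAgent": {"agentName": agent_name, "agentVersion": agent_version},
    }

obj = make_object_entry("ark:/12345/abc", format_name="unknown")
evt = make_event_entry(
    "evt-001",
    "http://id.loc.gov/vocabulary/preservation/eventType/mig",  # illustrative URI
    "ImageMagick",
    "7.0.1",
)
```

A real implementation would validate these values against the published vocabularies rather than accept arbitrary strings.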

The PREMIS XML schema is undergoing revision and will be available in the near future. When a draft is available, it will be announced, and PREMIS implementers are encouraged to experiment with it and provide feedback. In addition, the PREMIS Web Ontology Language (OWL) will be revised to reflect these changes.

The PREMIS Data Dictionary for Preservation Metadata, Version 3.0 is available from: http://www.loc.gov/standards/premis/v3/

Schema.org 2.0 public release announced

Schema.org is a collaborative, community activity with a mission to create, maintain and promote schemas for structured data on the Internet, on Web pages, in email messages and beyond.

Schema.org vocabulary can be used with many different encodings, including RDFa, Microdata and JSON-LD. These vocabularies cover entities, relationships between entities and actions, and can easily be extended through a well-documented extension model. Over 10 million sites use Schema.org to mark up their Web pages and email messages. Many applications from Google, Microsoft, Pinterest, Yandex and others already use these vocabularies to power rich, extensible experiences.
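As a concrete illustration of the JSON-LD encoding mentioned above, the following Python sketch assembles a small Schema.org description and serializes it. The type and property names (Book, Person, name, author) are core Schema.org terms; the specific values are invented for illustration.

```python
import json

# A minimal JSON-LD description using the Schema.org vocabulary.
# The values are illustrative, not real catalog data.
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Library Linked Data in the Cloud",
    "author": {"@type": "Person", "name": "Carol Jean Godby"},
}

# Embedded in a Web page, this JSON would typically sit inside a
# <script type="application/ld+json"> element.
markup = json.dumps(book, indent=2)
```

Search engines and other consumers that understand Schema.org can extract the entity and its relationships directly from such markup.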

R.V. Guha, one of Schema.org’s co-founders and current head of the steering group, recently announced on the schema blog the public release of Schema.org 2.0. This release brings several significant changes and additions, not just to the vocabulary, but also to how the community will grow and manage it, from both technical and governance perspectives.

As schema.org adoption has grown, a number of groups with more specialized vocabularies have expressed interest in extending schema.org with their terms. Examples of this include real estate, product, finance, medical and bibliographic information. Even in something as common as human names, there are groups interested in creating the vocabulary for representing all the intricacies of names. Groups that have a special interest in one of these topics often need a level of specificity in the vocabulary and operational independence. Schema.org version 2.0 introduces a new extension mechanism intended to enable these and many other groups to extend schema.org.

Over the years, Schema.org has taken steps to become more transparent. Today, there is more community participation than ever before. The newly formed W3C Schema.org Community Group is now the main forum for schema collaboration and provides the public-schemaorg@w3.org mailing list for discussions. Schema.org issues are tracked on GitHub. The day-to-day operations of Schema.org, including decisions regarding the schema, are handled by a newly formed steering group, which includes representatives of the sponsor companies, the W3C and some individuals who have contributed substantially to Schema.org. Discussions of the steering group are public.

Schema.org is a “living” spec that is constantly evolving. Sometimes, this evolution can be an issue, such as when other standard groups want to refer to it. So, from this release on, schema.org will be providing snapshots of the entire vocabulary.

Read more at schema blog: http://blog.schema.org/

Schema.org on GitHub: https://github.com/schemaorg

University College London Press joins Open Access Publishing in European Networks

Open Access Publishing in European Networks (OAPEN) has welcomed University College London (UCL) Press to the OAPEN Library. UCL Press was officially relaunched in June 2015 as the first fully Open Access university press in the UK. UCL Press will publish scholarly research monographs, edited volumes, scholarly editions, textbooks and journals. Its first official publication is Temptation in the Archives: Essays in Golden Age Dutch Culture, by Professor Lisa Jardine CBE.

Professor Jardine was the key speaker at the launch event, attended by 150 people. Recently made a Fellow of the Royal Society, she gave a passionate and inspiring address on why she had published in Open Access and, therefore, why she had chosen UCL Press as her publisher. Professor Jardine underlined that Open Access was a groundbreaking development in the world of scholarship, freeing authors and their works from the paywalls behind which they had been hidden. Her book has already been selected as “Book of the Day” on unglue.it, a site dedicated to promoting freely available e-books.

Temptation in the Archives explores the fascinating cultural exchange that took place between the English and Dutch in the seventeenth century. Through a range of primary sources, including letters by such key figures as Sir Constantijn Huygens and his family, Lisa Jardine reveals the Anglo-Dutch cultural connections that were formed against the backdrop of unfolding political events in England, alongside the temptations and uncertainty of archival research.

Eelco Ferwerda, Director of OAPEN Foundation, said: “We are delighted to feature this new book by such a distinguished scholar and author. The support for Open Access by Professor Jardine demonstrates a sea change in academia and the way in which academics wish to see their work disseminated. UCL has taken a bold step toward changing the future of scholarly communication by establishing an Open Access university press”.

Publishing Manager Lara Speicher said: “UCL Press is delighted to work with OAPEN, the central platform for publishers around the world to make their Open Access books available to individuals and libraries. The fact that it recently reached the milestone of 2 million downloads since August 2013 is testament to its effectiveness, and to the increasing demand for Open Access”.

UCL Press will publish nine other books this year. Two of those are already available: Treasures from UCL by Gillian Furlong, which celebrates the rare and beautiful manuscripts, books and archives in UCL Library Service’s Special Collections, and The Petrie Museum of Egyptian Archaeology, which celebrates the centenary of the opening of the UCL Petrie Museum. Seven other titles will be announced later this summer. All will be made available as Open Access PDF downloads from the UCL Press Web site (http://www.ucl.ac.uk/ucl-press) as well as via OAPEN. All are published under CC-BY licenses.

Temptation in the Archives is available through the OAPEN Library and UCL Press.

Search the OAPEN Online Library: http://www.oapen.org/search

UCL Press: http://www.ucl.ac.uk/ucl-press

Earlier this year, OAPEN reached the milestone of two million book downloads from the OAPEN Library. Downloads have been measured since August 2013 using a COUNTER-compliant method, with the support of IRUS-UK.

OAPEN: http://www.oapen.org/home

USMAI library consortium and partners launch Maryland Shared Open Access Repository

The Maryland Shared Open Access Repository (MD-SOAR) was launched recently as a two-year pilot project supported by 11 colleges and universities in the state. MD-SOAR provides a single, centrally administered platform on which all partners can build library-based digital repository collections and services. Library-based digital repositories often contain a wide variety of content, such as faculty and student works, instructional content and presentations, digitized institutional heritage collections and other digital assets. The MD-SOAR initiative involves both public and private educational institutions and will avoid more than $100,000 in startup costs that otherwise would have to be spent to provide the same capabilities for each participating institution.

MD-SOAR is hosted through an arrangement with the USMAI Library Consortium but is jointly governed by all participating partners. During the two-year pilot project, these college and university libraries will build digital repository collections and services suited to each institution’s needs but will also develop shared policies, workflows and other agreements that are necessary in a shared platform environment. The joint governance group also will develop an effective evaluation strategy for the collaboration, a long-term sustainability plan to continue beyond the initial two-year period and an ongoing statement of lessons learned during the pilot. All of these materials will be made freely available as a planning toolkit for other libraries that may wish to consider similar collaborative approaches.

MD-SOAR was built using the open-source DSpace institutional repository software and contains links to each partner institution’s site. In the coming months, these partner sites will grow as each library adds new collections, services and guidance to best serve their local campus communities.
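Because MD-SOAR runs on DSpace, harvesting its metadata would typically go through the OAI-PMH protocol that DSpace implements. The sketch below only constructs an OAI-PMH ListRecords request URL; the endpoint path shown is an assumption for illustration, not a documented MD-SOAR address.

```python
from urllib.parse import urlencode

# Hypothetical endpoint for illustration; DSpace installations commonly
# expose OAI-PMH at a path like /oai/request, but verify before use.
BASE = "https://mdsoar.org/oai/request"

def list_records_url(base, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH 2.0 ListRecords request URL.

    oai_dc (unqualified Dublin Core) is the metadata format every
    OAI-PMH repository is required to support.
    """
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec  # optional selective harvesting by set
    return f"{base}?{urlencode(params)}"

url = list_records_url(BASE)
```

A harvester would fetch this URL, parse the XML response and follow any resumptionToken to page through the full record stream.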

Pilot project partners include the libraries of:

  • Frostburg State University;

  • Goucher College;

  • Loyola/Notre Dame Library;

  • Maryland Institute College of Art;

  • Morgan State University;

  • Salisbury University;

  • St. Mary’s College of Maryland;

  • Towson University;

  • University of Baltimore;

  • University of Maryland, Baltimore;

  • University of Maryland, Baltimore County; and

  • University of Maryland, College Park (hosting MD-SOAR).

MD-SOAR: http://mdsoar.org

USMAI Library Consortium: http://usmai.org/

Videos from the 10th International Conference on Open Repositories (OR2015) now available

Video recordings of talks given at the 10th International Conference on Open Repositories (OR2015) by keynote speaker Kaitlin Thaney, director of the Mozilla Science Lab, and featured speaker Anurag Acharya of Google Scholar are now available for online viewing.

Kaitlin Thaney delivered the OR2015 keynote presentation, “Leveraging the power of the Web for science”. Ms Thaney is the director of the Mozilla Science Lab, an open science initiative of the Mozilla Foundation focused on innovation, best practice and skills training for research. She also advises the UK Government on infrastructure for data intensive science and business, serves as a Director for DataKind UK and is the founding co-chair for the Strata Conference series in London on big data.

OR2015 Featured Speaker, Anurag Acharya spoke on “Indexing repositories: pitfalls and best practices”. Anurag Acharya is a Distinguished Engineer at Google and a co-creator of Google Scholar. Prior to joining Google, he was a postdoctoral researcher at the University of Maryland, College Park and an assistant professor at the University of California, Santa Barbara.

You may view all videos from the OR2015 conference at: https://media.dlib.indiana.edu/catalog?f[collection_ssim][]=Open+Repositories+2015

Or follow links in the online program to view videos by session: http://www.conftool.com/or2015/index.php?page=browseSessions&form_room=8

Japanese National Diet Library publishes new Web pages about Linked Open Data

The National Diet Library, Japan (NDL) has announced the publication of new English-language Web pages introducing the NDL’s Linked Open Data (LOD). The NDL provides several kinds of data as Linked Open Data.

The following pages provide further details about NDL’s LOD:

Use and Connect: What is NDL’s Linked Open Data? Overview of data source, metadata vocabulary, schema, etc. (http://www.ndl.go.jp/en/aboutus/standards/lod.html)

Use and Connect: How to use NDL LOD? Access, download, terms of use, etc. (http://www.ndl.go.jp/en/aboutus/standards/lod_download.html)

Use and Connect: How to link to NDL LOD? Examples of utilization (http://www.ndl.go.jp/en/aboutus/standards/lod_usecase.html)

Free Data Service. Data set files are available for download. There are no restrictions on the use of these data sets (http://www.ndl.go.jp/en/aboutus/standards/opendataset.html)

National Diet Library Standards for Digital Information (http://www.ndl.go.jp/en/aboutus/standards/)

Library-linked data in the cloud: OCLC’s experiments detailed in new book

OCLC’s work to help increase the visibility of library collections on the Web through the creation of library-linked data – moving from a Web of documents to a Web of data – is described in a new book, Library Linked Data in the Cloud: OCLC’s Experiments with New Models of Resource Description.

Written by OCLC Research staff members, Carol Jean Godby, Shenghui Wang and Jeffrey K. Mixter, the book focuses on the conceptual and technical challenges involved in publishing linked data derived from traditional library metadata. This transformation is urgent, the book maintains, because it is common knowledge that most searches for information start not in a library, or even in a Web-accessible library catalog, but elsewhere on the Internet. Modeling data in a form that the broader Web understands may help keep libraries relevant in the network environment.

In the book, the authors explain how the new Web is a growing “cloud” of interconnected resources that identify the people, places, things and concepts that people want to know about when they approach the Internet with an information need. They also explain why linked data is an appropriate architecture for the description of library resources.
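The “cloud of interconnected resources” idea can be illustrated with a toy example: a bibliographic resource described as subject-predicate-object triples. The URIs, literal values and helper function below are invented for illustration; a production system would use an RDF library and established identifiers such as WorldCat or VIAF URIs.

```python
# A toy triple store: each statement is a (subject, predicate, object) tuple.
# All URIs and values here are made up for illustration.
triples = [
    ("http://example.org/book/1", "http://schema.org/name",
     "Library Linked Data in the Cloud"),
    ("http://example.org/book/1", "http://schema.org/author",
     "http://example.org/person/godby"),
    ("http://example.org/person/godby", "http://schema.org/name",
     "Carol Jean Godby"),
]

def objects_of(subject, predicate, graph):
    """Return every object of the given subject and predicate in the graph."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Following the author link from the book leads to a person resource,
# which carries its own descriptive statements: that is the "Web of data".
names = objects_of("http://example.org/book/1", "http://schema.org/name", triples)
```

Because objects can themselves be URIs, descriptions link outward to other resources rather than ending at a flat catalog record.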

“This work represents significant contributions OCLC is making with library linked data”, said Lorcan Dempsey, OCLC Vice President, Research and Chief Strategist. “Our researchers are participating in the development of Web standards for machine-understandable data, contributing to the debate on how the key values of librarianship are represented in linked data techniques, and publishing some of the most widely used linked data enabled authoritative hubs in the library community”.

“Linked data has achieved a critical mass in the library community coincident with its growing importance in the wider Web. The evolutionary path from traditional library metadata standards toward a global Web of linked library resources is becoming clear”, said Carol Jean Godby, OCLC Senior Research Scientist and lead author of the book. “Our goal with this book is to explain how this new architecture promises to simplify and promote the quest for knowledge”.

The publication aims to achieve a balanced treatment of theory, technical detail and practical application for librarians, archivists, computer scientists and other professionals interested in modeling bibliographic descriptions as linked data.

Find Library Linked Data in the Cloud in a library: http://www.worldcat.org/oclc/909811018

Major content and discovery service providers declare conformance with National Information Standards Organization’s Open Discovery Initiative

In the cooperative spirit of the National Information Standards Organization (NISO), four leading academic content providers – Credo; Gale, a part of Cengage Learning; IEEE (Institute of Electrical and Electronics Engineers); and SAGE – have publicly disclosed their support of the Open Discovery Initiative (ODI). In NISO’s Open Discovery Initiative: Promoting Transparency in Discovery (RP-19-2014), content providers are encouraged to take specific measures to declare their conformance with ODI’s recommended practice for pre-indexed “web-scale” discovery services, with the goal of increasing open communication and collaboration.

Each organization has published an ODI Conformance Statement, which articulates its efforts to meet ODI’s discovery service indexing recommendations. For example, using ODI’s conformance checklists, all four participating content providers make their full-text articles available to all discovery service providers for indexing. These providers state that core metadata for their publications is distributed routinely for indexing and that these data provide “the content item and additional descriptive content for as much of their content as possible”. In the spirit of transparency, the checklists from these four publishers note any relevant exceptions or gaps in their conformance to the ODI-recommended practice.

The four publishers have declared the following:

We jointly release this statement in order to support and improve cross-sector collaboration in library discovery. We share the belief that clear and open communication is paramount in supporting the libraries that license pre-indexed discovery services, and in progressing industry-wide improvements to the discoverability of scholarly content in today’s library ecosystem.

We see our conformance statements and checklists as living documents, made possible by ODI’s standardized format for communicating the indexing and discoverability efforts of publishers with the library community. We encourage other publishers and discovery service providers to embrace ODI conformance, as a method for opening up conversations across the industry about discovery service indexing.

Furthermore, we believe that collaboration is central to improving academic resource discoverability, and that active and transparent disclosure of participation in discovery services is critical to our collective successes. By shedding light on the often misunderstood aspects of resource indexing, we believe that solutions and opportunities are brought to light.

“The ODI Conformance Checklist was created to enable standardized methods for content providers to assert their ODI support of libraries, discovery services, and other industry stakeholders”, commented Laura Morse, ODI Co-chair and Director, Library Systems, Harvard University. “We are delighted that these four publishers are leading the way in implementing ODI’s recommendations, and we truly believe that their compliance will enhance the entire scholarly communication community”.

“It’s gratifying to see the support for the Open Discovery Initiative recommendations from such preeminent content providers, and we hope that others will follow this lead”, said Todd Carpenter, Executive Director of NISO. “We recognize that these statements represent their commitment to transparency and their willingness to communicate openly with all stakeholders in the discovery arena”.

Soon after the joint announcement from these publishers, additional library service providers declared their support for ODI and the efforts they are pursuing toward full conformance with NISO RP-19-2014.

Ex Libris® Group announced its support of the Open Discovery Initiative and conformance with ODI’s recommended practice for pre-indexed “web-scale” discovery services. As one of the founders and an active member of ODI, Ex Libris remains committed to the ODI objectives and has published an ODI Conformance Statement, which articulates its efforts to meet ODI’s recommendations for discovery service providers. In line with the ODI Conformance Checklist, Ex Libris already fully complies with the recommendations that pertain to content neutrality. The Ex Libris Primo® discovery and delivery solution displays search results that are ranked according to their relevance to the user’s search context and the preferences set by libraries for their users. The ranking of search results, linking to content, inclusion of materials in Primo Central and discovery of open access content all uphold the principles of content neutrality. Ex Libris continues to work toward full compliance with ODI recommendations.

“We are very pleased to be part of the ODI Working Group and to advance the concept of open discovery for the benefit of customers and industry stakeholders”, said Shlomi Kringel, Ex Libris Corporate Vice President for discovery and delivery solutions. “We continue to work closely with the group members to strengthen collaboration and transparency among content and discovery providers”.

In a recent post on the EBSCO Pulse Blog discussing a post-modern technology world where library choice is paramount, Scott Bernier, Senior Vice President at EBSCO, introduces the concept of “EBSCO Open” and discusses EBSCO’s support of, and conformance to, the recommended practices of NISO’s Open Discovery Initiative.

“With representation on the NISO ODI committees (both the original working group and the current standing committee), EBSCO remains actively involved in the goal of pushing further collaboration that will ultimately lead to greater value and usage of our libraries and their resources”, writes Bernier. “According to Todd Carpenter, NISO’s Executive Director, EBSCO’s Policy for Open Metadata Sharing and Collaboration is ‘consistent with the ODI goals, and is a significant step toward greater vendor collaboration that enhances overall value for libraries and end users’. As requested by NISO, EBSCO is happy to report its progress and conformance with ODI recommended practices as both a content provider and a discovery service provider”.

Brent Cook, Director of Product Management at ProQuest, also wrote a post on the ProQuest blog affirming ProQuest’s continuing commitment to the NISO Open Discovery Initiative and to ODI standards conformance. “ProQuest supports the National Information Standards Organization (NISO), and ProQuest leaders such as Steven Guttman, Senior Director of Product Management, have long-standing participation in efforts including the Open Discovery Initiative Standing Committee to help the library industry improve the research and discovery experience. ProQuest remains committed to meeting and exceeding the standards and recommended practices outlined in the Open Discovery Initiative”.

View the ODI Conformance Checklists and Statements at: http://www.niso.org/workrooms/odi/conformance/

For more information about ODI’s goals and requirements: http://www.niso.org/workrooms/odi/

Networked Digital Library of Theses and Dissertations announces Global Electronic Thesis and Dissertation Search

The Networked Digital Library of Theses and Dissertations (NDLTD) has announced the launch of an exciting new service. Researchers now have a central portal to search and locate Electronic Theses and Dissertations (ETDs) from universities around the world.

NDLTD is an international organization dedicated to promoting the adoption, creation, use, dissemination and preservation of ETDs. It supports electronic publishing and open access to scholarship to enhance the sharing of knowledge worldwide.

Global ETD Search – developed in partnership with the University of Cape Town, South Africa – is a free service that allows researchers to find ETDs based on keyword, date, institution, language and subject. Researchers may submit queries and view results using the familiar style of popular Web search engines. This new search service will allow researchers to locate relevant theses and dissertations far more effectively than current tools.

Global ETD Search is based on a growing collection of approximately four million ETDs from more than 200 universities on all continents. Metadata records are automatically collected daily from individual institutions and consortia. These records form the basis of the search service. Once researchers have located the ETDs of interest, they are able to access the original documents from the originating institutions.
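Automated daily collection of metadata records from institutional repositories is typically done with a harvesting protocol such as OAI-PMH, which is widely used for theses and dissertations. The Python sketch below parses a minimal OAI-PMH-style response; the sample XML, repository URL and record values are illustrative, not NDLTD’s actual feed.

```python
import xml.etree.ElementTree as ET

# A minimal OAI-PMH-style ListRecords response with Dublin Core fields.
# In practice, a harvester would fetch pages like this over HTTP from
# each institution's repository; this sample is entirely hypothetical.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns:dc="http://purl.org/dc/elements/1.1/">
  <ListRecords>
    <record>
      <metadata>
        <dc:title>A Study of Something</dc:title>
        <dc:identifier>http://example.edu/etd/1</dc:identifier>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def harvest(xml_text):
    """Extract (title, identifier) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    records = []
    for record in root.iter("record"):
        title = record.findtext(f".//{DC}title")
        ident = record.findtext(f".//{DC}identifier")
        records.append((title, ident))
    return records

print(harvest(SAMPLE))
```

The harvested pairs are what a central index needs: a searchable description plus a link back to the originating institution, which matches the article’s note that researchers retrieve the full documents from the source repositories.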

Visit the new search service at Global ETD Search: http://search.ndltd.org/

Universities that wish to participate in this service can find further information about the NDLTD and the Global ETD Search service at:

http://www.ndltd.org/resources/manage-etds/help-build-global-etd-search

GreyGuide offers common ground for good practices and resources in grey literature

The GreyGuide is an online forum and repository of good practice and resources in grey literature. The GreyGuide seeks to capture proposed as well as published practices dealing with the supply and demand sides of grey literature. This initiative is undertaken by GreyNet International (content provider) and the Institute of Information Science and Technologies of the Italian National Research Council (ISTI-CNR) (service provider). The launch of the GreyGuide Repository took place in December 2013 and the acquisition of both proposed and published good practices is underway.

In 2014, the GreyGuide further developed as GreyNet’s Web access portal and both GreyNet and ISTI-CNR are involved in the process of migrating Web-based content to the GreyGuide. This migration of records and content commenced with the GreySource Index (GSI), the Who is in Grey Literature (BIO), along with the GL16 and GL17 Conference Proposals submitted and accepted for presentation in the International Conference Series on Grey Literature. These collections are now accessible via combined search and browse capability and their metadata as well as full-text content can be harvested online.

GreyGuide Current Collections:

  • Proposed Good Practices;

  • Published Good Practices;

  • GL16 Conference Proposals;

  • GL17 Conference Proposals;

  • GSI – GreySource Index; and

  • BIO – Who is in Grey Literature.

GreyGuide is a community-driven open resource project. The initial goal of the project was to develop an open source repository of good practices in the field of grey literature. What originated in monographic form would now be opened and expanded to include content from the international grey literature community. Such practices will range from the production and processing of grey literature through to its distribution, uses and preservation. The repository will contain guidelines on handling theses and dissertations, writing research reports and the metadata required for working papers, as well as good practices in subject areas such as agriculture, health, education, energy and the environment. An online repository of good practice in grey literature will provide the many stakeholders in government, academia, business and industry with the benefits of experience, sustained management and proven results.

The procedure initially applied in this project deals with the design and development of a template that will capture data and information about published as well as proposed good practices within a standard format. While the metadata captured in the template are indeed standardized, their accompanying full-text documents need not be. Furthermore, the template seeks to identify intended users of a good practice, as well as metadata that will facilitate the search and retrieval of records in the repository.
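A standardized template of this kind can be thought of as a required-field check over each submitted record. The sketch below is a rough illustration only: the field names are hypothetical stand-ins, not the GreyGuide’s actual schema.

```python
# Hypothetical sketch of a standardized good-practice template;
# these field names are illustrative, not the GreyGuide's real schema.
REQUIRED_FIELDS = {"title", "status", "intended_users", "subject", "submitted_by"}

def missing_fields(record):
    """Return the set of required metadata fields absent from a record."""
    return REQUIRED_FIELDS - record.keys()

record = {
    "title": "Guidelines for depositing research reports",
    "status": "proposed",                    # "proposed" or "published"
    "intended_users": ["repository managers"],
    "subject": "grey literature",
    "submitted_by": "example contributor",
    "fulltext_url": "http://example.org/doc.pdf",  # optional attachment
}
print(missing_fields(record))  # set() -> all required fields present
```

This mirrors the division the project describes: the metadata fields are standardized and machine-checkable, while the attached full-text document (here an optional URL) need not be.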

Technical developments related to the design and construction of the repository, its eventual platform as well as its maintenance are other related issues addressed in the project. While there are no direct costs associated with the project, each partner is committed to allocate human and material resources needed to carry out their related tasks.

It is expected that the initial phase in acquiring records for the repository will rely on channels available through the Grey Literature Network Service. Populating the repository will be somewhat time consuming and the first harvest is not expected to produce an abundance of records; however, as the project is long term, it is considered all the more worthwhile. The GreyGuide will provide a unique resource in the field of grey literature that is long awaited and which responds to the information needs of a diverse, international grey literature community.

GreyGuide: http://greyguide.isti.cnr.it/

National Information Standards Organization releases draft technical report on SUSHI-Lite for public trial and comment

The National Information Standards Organization (NISO) is seeking trial users and comments on the draft technical report, SUSHI-Lite: deploying SUSHI as a lightweight protocol for exchanging usage via Web services, NISO TR-06-201X. This technical report proposes and describes a method of exchanging COUNTER statistics ranging from usage for a single article to a complete COUNTER report, using commonly used approaches to Web services. The SUSHI-Lite technical report does not replace the SUSHI standard but rather supplements it with an alternative approach for requesting and exchanging usage.

“Librarians and publishers are well aware of how critical the SUSHI standard is for communicating measurement of electronic resources”, states Oliver Pesch, Chief Product Strategist, EBSCO Information Services and Co-chair of the SUSHI-Lite Working Group.

“However, since SUSHI was originally published by NISO in 2007, there have been numerous changes to the online environments in which we work, such as alternative metrics, the COUNTER Journal Usage Factor, and the rise of institutional repositories and the need to measure their use. There is a need for more lightweight technologies to allow smaller sets of usage to be exchanged in real-time, and the technologies and approaches described in the SUSHI-Lite technical report can support these newer requirements. We anticipate that in time and with further practical experiences applied to it, the SUSHI-Lite protocol will become part of ANSI/NISO Z39.93, the Standardized Usage Statistics Harvesting Initiative (SUSHI) Protocol”.

“SUSHI-Lite supports modern, commonly used approaches to web services, such as RESTful interfaces and JSON data formats”, adds Paul Needham, Research and Innovation Manager, Cranfield University and Co-chair of the SUSHI-Lite Working Group. “This optional implementation of SUSHI using current-day practices also includes mechanisms for including additional filters and report attributes which support limits of scope for data and further controls for format and completeness of data in returned sets. We think that ultimately, the SUSHI-Lite protocol is much easier to implement and will help ensure acceptance of SUSHI and COUNTER by the mainstream web development community”.
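The RESTful-plus-JSON style described in the quote can be sketched as a filtered GET request that returns usage as a small JSON body. Everything below is hypothetical: the endpoint, parameter names and response fields are illustrative stand-ins, not the draft report’s actual schema.

```python
import json
from urllib.parse import urlencode

# Hypothetical SUSHI-Lite-style request: filters are passed as URL
# parameters rather than a SOAP envelope. Endpoint and names invented.
params = {
    "report": "JR1",
    "item_identifier": "doi:10.1000/example.123",
    "begin_date": "2015-01",
    "end_date": "2015-06",
}
url = "https://stats.example.org/sushilite/GetReport?" + urlencode(params)

# A server might answer with a compact JSON body like this, letting a
# client request usage for a single article instead of a full report:
response_body = (
    '{"item": "doi:10.1000/example.123",'
    ' "usage": [{"month": "2015-01", "ft_total": 42},'
    '           {"month": "2015-02", "ft_total": 17}]}'
)
report = json.loads(response_body)
total = sum(entry["ft_total"] for entry in report["usage"])
print(total)  # 59
```

The contrast with classic SUSHI is the weight of the exchange: no SOAP envelope or XML report wrapper, just query parameters in and a JSON fragment out, which is what makes per-article, near-real-time requests practical.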

“NISO is soliciting users and feedback on this draft Technical Report from any organization that uses SUSHI or would like to apply it to their data transmission project”, states Nettie Lagace, NISO Associate Director for Programs. “This feedback will be used to make any needed revisions to the document before final publication”.

The draft technical report is open for public comment through September 30, 2015. To download the draft or submit online comments, visit the SUSHI-Lite Working Group Web page at: http://www.niso.org/workrooms/sushi/sushi_lite/

veraPDF consortium issues first public prototype of veraPDF’s validation software

veraPDF is developing the definitive open source, file-format validator for all parts and conformance levels of ISO 19005 (PDF/A). The software is designed to meet the needs of memory institutions responsible for preserving digital content for the long term. The project is led by the Open Preservation Foundation and the PDF Association and is supported by leading members of the PDF software development community through their Technical Working Group.

This initial public release of veraPDF’s software is incomplete and is not to be used as a validator; it is currently more a proof of concept than a usable file-format validator.

The veraPDF consortium is a unique collaboration, bringing together an end-user community of digital preservationists and a software industry rooted in the principle of interoperability based on ISO standardized technology to develop the definitive conformance checker for PDF/A. veraPDF’s Web site is now up and contains information on the project’s software and roadmap, the team behind it and how you can get involved.
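For a sense of what conformance checking involves at its very simplest: a PDF/A file declares its claimed conformance level in embedded XMP metadata (the pdfaid:part and pdfaid:conformance properties). The toy sketch below scans raw bytes for that declaration; it is in no way a validator, since ISO 19005 requires checking many structural and content clauses, which is exactly the job veraPDF is being built to do. The sample bytes are a hypothetical stand-in for a real XMP packet.

```python
import re

# Toy illustration only: a real PDF/A validator such as veraPDF checks
# far more than this self-declared claim. PDF/A files embed an XMP
# packet containing pdfaid:part (1, 2 or 3) and pdfaid:conformance.
PDFAID = re.compile(rb'pdfaid:part[">=\s]+(\d)')

def claimed_pdfa_part(pdf_bytes):
    """Return the claimed PDF/A part number, or None if not declared."""
    match = PDFAID.search(pdf_bytes)
    return int(match.group(1)) if match else None

# Hypothetical stand-in for the XMP packet inside a PDF/A-1b file:
sample = b'... <pdfaid:part>1</pdfaid:part> <pdfaid:conformance>B</pdfaid:conformance> ...'
print(claimed_pdfa_part(sample))  # 1
```

The gap between this self-declared claim and actual conformance is the whole motivation for a definitive checker: a file can assert PDF/A-1b while violating the standard, so preservation institutions need independent validation.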

veraPDF home page: http://verapdf.org/

The software can be downloaded at: http://verapdf.org/software/

The release notes are published at: https://github.com/veraPDF/veraPDF-library/

Re-envisioning the Master of Library Science: final report from the University of Maryland iSchool

A recent posting on blogMLS, the official blog of the Master of Library Science (MLS) Program at Maryland’s iSchool, announces the release of a new report, Re-Envisioning the MLS: Findings, Issues, and Considerations.

Discussions on the future of libraries have become commonplace in the profession; from Pew and Aspen Institute studies to forums to articles, the topic is clearly on the minds of many. But the Future of Libraries discussion also necessitates a deeper look at emerging technology, information, demographic and other trends, and a discussion regarding the future of librarians and information professionals.

In August 2014, the iSchool and the Information Policy & Access Center (iPAC) at the University of Maryland launched the Re-Envisioning the MLS initiative as part of a three-year process that explores what a future MLS degree should be. That is, as we think about the mix of changes in the information landscape, our communities, information organizations, technology, the economy, workforce needs and trends and other factors, what does a future MLS degree look like?

To answer this (and other) questions, we hosted a speaker series, held engagement events, conducted regional visits, spoke with a range of leaders in the information professions, worked with our inaugural MLS Advisory Board, conducted extensive analysis and scanning, published a white paper, and more.

A summary of key findings from the Re-Envisioning the MLS report:

  • The Shift in Focus to People and Communities: A significant shift has occurred in information organizations, from collections to the individuals and communities that they serve.

  • Core Values Remain Essential: The values of an MLS degree and information professionals remain essential, in particular ensuring access, equity, intellectual freedom, privacy, inclusion, human rights, learning, social justice, preservation and heritage, open government and civic engagement.

  • Competencies for Future Information Professionals: Information professionals need to have a set of core competencies that include (among others) the ability to lead and manage projects and people; to facilitate learning and education either through direct instruction or other interactions; to work with, and train others to use, a variety of technologies; a strong desire to work with the public; problem-solving and the ability to think and adapt instantaneously; policy-making; and relationship building.

  • Access for All: The tension between growing societal gaps (income and other), a shrinking public sphere and social safety net, the desire to help those with acute needs, the lack of resources or skills to do so, and questions about whether this is an appropriate role for information organizations and professionals was a recurring theme throughout Re-Envisioning the MLS.

  • Social Innovation and Change: By forming partnerships, information organizations are essential catalysts for creative solutions to community challenges in a wide range of areas, such as health, education and learning, economic development, poverty and hunger, civic engagement, preservation and cultural heritage and research innovation.

  • Working with Data and Engaging in Assessment: The data role for information professionals is at least threefold: helping the communities that they serve engage in a range of data-based activities; helping communities leverage data to better understand their communities, community needs and develop solutions to community challenges; and using data to demonstrate the contributions of their libraries, archives, etc., to the community(ies) that they serve.

  • Knowing and Leveraging the Community: There is a need for information professionals who can fully identify the different populations and needs of the communities that they serve, their challenges and underlying opportunities.

  • Learning/Learning Sciences, Education and Youth: Information organizations have a particular opportunity to foster learning by attending to an individual’s particular interests, needs and educational goals. A particular opportunity exists in focusing on youth learning, particularly STEAM (Science, Technology, Engineering, Arts and Math).

These findings have a number of implications for MLS education, selectively summarized below:

  • Attributes of Successful Information Professionals: The findings indicate that successful information professionals are not those who wish to seek a quiet refuge out of the public’s view. They need to be collaborative, problem solvers, creative, socially innovative, flexible and adaptable and have a strong desire to work with the public.

  • Ensure a Balance of Competencies and Abilities: The debate over whether MLS programs should produce graduates with a “toolkit” of competencies or instead provide them with a conceptual foundation that will enable them to grow and adapt over time ran throughout Re-Envisioning the MLS. Further interjected into this debate was the notion of “aptitude” (specific skills) versus “attitude” (“can do”, “change agent”, “public service”). Any MLS curriculum needs to balance aptitude with attitude.

  • Re-Thinking the MLS Begins with Recruitment: Neither a love of books nor a love of libraries is enough for the next generation of information professionals. Instead, they must thrive on change, embrace public service and seek challenges that require creative solutions. MLS programs must seek and recruit students who reflect these attributes.

  • Be Disruptive, Savvy and Fearless: Through creativity, collaboration and entrepreneurship, information professionals have the opportunity to disrupt current approaches and practices to existing social challenges. The future belongs to those who are able to apply critical thinking skills and creativity to better understand the communities they serve today and will serve 5-10 years down the road, and to those who are bold, fearless, willing to take risks, go “big” and go against convention.

Over the next year, the iSchool will consider these findings and implications as it revises its MLS curriculum.

Read/download the final report on Re-Envisioning the MLS: http://mls.umd.edu/wp-content/uploads/2015/08/ReEnvisioningFinalReport.pdf
