One of the more curious African scripts around is one I was introduced to by Joe Lauer, a librarian at Michigan State University. In 2001, he published an article in the journal Mande Studies called “A transliteration scheme for cataloging Nko publications.” He brought some examples of N’ko publications to the Africana Librarians Council’s spring conference in 2003, held that year at the Yale University Library. I was still just starting out in librarianship and hadn’t yet become very involved in professional associations, and the script struck me as something worth putting in the time to learn. The history of the N’ko script, which began as an invention of Souleymane Kanté in 1949, has been well documented. The bibliographic output of the Mande-speaking communities who publish in N’ko script has been growing for some time, with maps, newspapers, dictionaries, and texts of many other kinds produced.
Fig. 1. Ibrahima Sory 2 Condé in front of the Librairie Centrale N’ko, Conakry, Guinea, 2009. Photo by Charles Riley.
UNESCO’s Initiative B@bel, in collaboration with the Script Encoding Initiative at UC Berkeley, provided a critical boost of support that helped ensure N’ko was encoded with the release of version 5.0 of the Unicode Standard in 2006. Several fonts had been developed prior to the Unicode release, but a system font supporting N’ko natively on Windows, Ebrima, finally arrived with Windows 7 in 2009. Other options became available for Mac and Linux: https://www.evertype.com/fonts/nko/. Ebrima initially shipped without ligatures and proper rendering support, but those features came together in successive versions of Windows.
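For readers who work with the encoded text directly: the N’ko block occupies U+07C0 through U+07FF in the Unicode Standard. A minimal Python sketch (my own illustration, not part of any tool mentioned here) for checking whether a character belongs to that block:

```python
import unicodedata

# The N'ko block spans U+07C0..U+07FF: digits at U+07C0-U+07C9,
# letters beginning at U+07CA (NKO LETTER A).
NKO_START, NKO_END = 0x07C0, 0x07FF

def is_nko(char: str) -> bool:
    """Return True if the character falls in the N'ko block."""
    return NKO_START <= ord(char) <= NKO_END

sample = "\u07CA"
print(is_nko(sample))            # True
print(unicodedata.name(sample))  # NKO LETTER A
```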
Fig. 2. Map of Guinea in N’ko, created by the Librairie N’ko. Used with permission.
One thing that the encoding and the system font support started to allow was for the accurate production of bibliographic metadata, but Lauer’s transliteration system would need an update first. An ALA-LC romanization table was drafted, circulated to stakeholders, revised, and approved for use between June of 2014 and February of 2015. Many thanks are due to Bruce Johnson and Coleman Donaldson for their key roles in this process. In 2016, OCLC, a library cooperative non-profit, upgraded the version of its Connexion software to 2.63, allowing for greater use of the Unicode Standard in bibliographic records. With that, enough was in place for librarians at Harvard and Yale to agree on an informal joint project beginning in October of 2017 to produce catalog records with access points in N’ko script (an example shown in OCLC’s Worldcat is here, shown in the Harvard catalog here, and in the Yale catalog here.) N’ko was showing up well at key points in the toolchain, including Connexion, Voyager, Alma, and MarcEdit, with minimal tweaks to the system that would allow for validation of the script’s characters. Valentin Vydrin’s bibliography (http://llacan.vjf.cnrs.fr/PDF/Mandenkan48/48Vydrin.pdf), published in 2012, proved very useful, although it had been developed using a pre-Unicode font. A helpful conversion tool was developed by Andrij Rovenchak and made available here. My colleagues at Harvard who were involved include Chiat Naun Chew, Isabel Quintana, Boubacar Diakite, Bassey Irele, and Richard Lesage. To date, according to the last OCLC report, there are 109 bibliographic records cataloged using the N’ko script, approximately 60 of which were produced in the joint Yale-Harvard project.
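Nothing here describes how Rovenchak’s converter works internally, but the general shape of such a tool is a code point remapping. A minimal sketch in Python, assuming an entirely invented legacy layout (pre-Unicode fonts typically reused Latin-range positions; the slots below do not correspond to any real font):

```python
# Hypothetical legacy-to-Unicode remapping, in the spirit of a
# converter for pre-Unicode N'ko fonts. The legacy slots below are
# invented for illustration and do not reflect any real font's layout.
LEGACY_TO_NKO = str.maketrans({
    "a": "\u07CA",  # hypothetical slot -> NKO LETTER A
    "e": "\u07CB",  # hypothetical slot -> NKO LETTER EE
    "i": "\u07CC",  # hypothetical slot -> NKO LETTER I
})

def convert_legacy(text: str) -> str:
    """Remap legacy-font code points to Unicode N'ko characters."""
    return text.translate(LEGACY_TO_NKO)

print(convert_legacy("aei"))  # prints three Unicode N'ko vowels
```

A one-to-one translate table covers the simple case; a real converter would also have to handle diacritics and any positions where the legacy encoding diverges structurally from Unicode.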
What lies ahead for N’ko in bibliographic metadata? Thanks to Youssouf Diaby and many others, an incubated Wikipedia project has started. This could lead to contributions to Wikidata, which could then find their way into the Virtual International Authority File (VIAF), allowing authorized forms of names in N’ko script to be used as linked data in authority files.
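As a rough sketch of what consuming such data could look like, here is a lookup against the Wikidata API for an item’s label in N’ko (ISO 639-3 code nqo). The entity ID is left as a placeholder, and whether a given item carries an nqo label depends on community contributions:

```python
import requests

# Sketch: fetch an item's label in N'ko ("nqo") from the Wikidata API.
# "Q..." is a placeholder; a real workflow would first resolve the
# Wikidata item corresponding to a given authority heading.
API = "https://www.wikidata.org/w/api.php"

def nko_label(qid: str) -> str | None:
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "props": "labels",
        "languages": "nqo",
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=10).json()
    labels = data.get("entities", {}).get(qid, {}).get("labels", {})
    return labels.get("nqo", {}).get("value")  # None if no N'ko label yet

# print(nko_label("Q..."))  # placeholder entity ID
```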