Version 3 due in June!

We have been busy, with both software and content development. Version 3 of the World Historical Gazetteer has been in development since February 2023, and a beta version will be available mid-2024. What follows is a brief outline of what we have been working on, much of which came as suggestions from our user community. Details will follow in the coming weeks and months, on this blog and on Twitter. We do expect to establish a Mastodon account soon as well.

Version 3 (alpha) home page

New “Gazetteer Builder” feature

  • Link multiple datasets in a single collection, e.g. for a group or individual to assemble a “Historical Gazetteer of {x}”
  • Merge multiple datasets into a new dataset

Home page

  • A map(!), with search and advanced search
  • ‘Carousels’ of published datasets and collections, with extents previewed on the map
  • Improved explanation of what the WHG offers
  • News and announcements

Maps

  • All 14 maps on the site significantly upgraded
  • Most maps now have temporal controls: a timespan ‘slider’ and/or a sequence ‘player’
  • Faster display of large datasets and collections, thanks to WHG’s own new “tileboss” server

Search

  • Search now spans all published records; the confusing "search the index or database" choice is gone!
  • Options for "starts with", "contains", and "similar to" (i.e. fuzzy), as well as "exact"
  • Spatial filter on search results
  • More information returned in search result items

Place Portal pages

  • Complete design makeover
  • Physical geographic context: ecoregions, watersheds, rivers, boundaries
  • Nearby places
  • Preview of annotated collections that include the place

Publication and editorial workflow

  • We are now especially highlighting three types of publications: Datasets, annotated Place Collections, and Dataset Collections
  • Expanded Managing Editor role
  • Improved tracking of contributors and data, from ‘interested’ to full accessioning
  • DOIs for data publications, enhanced metadata, significantly enhanced presentation pages
  • Improved download options

Annotated place collections for teaching

  • Support for class and workshop group scenarios
  • Optional image per annotation
  • Order places sequentially with or without dates
  • Enhanced display and temporal control options
  • Optional gallery per class
  • Site-wide student gallery

“My Data” dashboard and profile

  • Single page, simpler

Study Areas

  • Discontiguous areas, e.g. the Iberian Peninsula and South America as a single study area

API and data dumps

  • More endpoints, better documented
  • Regular dumps of published data in multiple formats
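
As a rough illustration of what working with such an endpoint might look like, here is a minimal Python sketch. The base URL, path, and query parameters shown are placeholders for illustration only, not the documented WHG API; consult the published API documentation for the real ones.

```python
import requests

# Placeholder base URL and endpoint, used here only for illustration;
# check the published WHG API documentation for the actual paths.
BASE_URL = "https://whgazetteer.org/api"

def search_places(name, dataset=None):
    """Query a hypothetical place-search endpoint and return parsed JSON."""
    params = {"name": name}
    if dataset:
        params["dataset"] = dataset  # hypothetical filter parameter
    resp = requests.get(f"{BASE_URL}/places/", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example: look up published attestations of "Acapulco".
results = search_places("Acapulco")
print(len(results.get("features", [])), "matching records")
```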

Codebase

  • Improved file upload validation and error reporting
  • The codebase is now “dockerized,” making it much easier to contribute to the platform’s development
  • Upgraded versions of all major components: Django, PostgreSQL, Elasticsearch, etc.
  • All map-related functions refactored for efficiency

Of Historical Mapathons, Seeds, and Graphs

“To make a digital historical gazetteer, it would help to have a digital historical gazetteer.” — anonymous

The World Historical Gazetteer (WHG) project is soliciting contributions of place data (attestations of places in historical sources) from any region and period, in any quantity, in order to link them in a "union index" and thereby link the research that discovered them. Almost all of the historical sources are texts or tabular datasets, and although text sources often include descriptions of relative location (e.g. in a province or near a river), rarely does either kind include geographical coordinates. Considering that one of the main reasons we record place names in such sources is to map them, or to analyze and compare them further, this presents a problem.

A Problem

If we look up the names in global modern place authorities like GeoNames, the Getty Thesaurus of Geographic Names (TGN), DBpedia, or Wikidata (a process commonly called "reconciliation"), we obtain generally poor results. Many historical names are no longer in use, many refer to multiple places, and many potential matches get lost in the shuffle due to varying transliteration schemes, alternate spellings, and OCR transcription errors.
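
To make the failure mode concrete, here is a minimal sketch of that kind of lookup against the GeoNames search web service (it requires a registered username, shown as a placeholder). A modern name typically returns a pile of candidates with nothing to indicate which, if any, is the attested place, while a historical or variant spelling often returns none at all.

```python
import requests

GEONAMES_USER = "your_username"  # placeholder; register at geonames.org

def reconcile(toponym, max_rows=10):
    """Look up a toponym in the GeoNames search web service."""
    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": toponym, "maxRows": max_rows, "username": GEONAMES_USER},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("geonames", [])

# "Acapulco" returns many candidates; "Akapulko" stands in here for the
# kind of variant transliteration that often returns nothing at all.
for name in ["Acapulco", "Akapulko"]:
    candidates = reconcile(name)
    print(name, "->", [(c["name"], c.get("countryCode")) for c in candidates[:3]])
```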

A typical scenario we encounter is that a sizable majority of the places referenced in a historical text or corpus remain un-located after reconciliation against modern authorities, and are therefore not mappable. Granted, mapping is not always the point, but it is often an essential goal. Even when it isn't, some graphical or otherwise computable representation of the spatio-temporality in a historical source usually is.

We are coming to realize there are some steps we as a community can take to improve this situation: “historical mapathons,” “prioritizing seed datasets,” and “geographical graphs.”

Historical Mapathons

To quote Wikipedia, "a mapathon is a coordinated mapping event." Until now they have almost always involved adding features to OpenStreetMap in an area where coverage is relatively sparse, often in response to a disaster. Mapathons might occur in a single room, where some guidance (and/or pizza) is provided to participants, or "virtually," where anyone across the globe can contribute using web-based software like the iD map editor.

A historical mapathon is a coordinated mapping event where the activity is "feature extraction" from one or more historical maps: tracing features as point, line, or area geometries, along with the associated place name and potentially other attributes the cartography offers. This activity has been performed routinely by creators of historical GISes and others, but we're aware of only one web-based, crowd-sourced virtual mapathon: GB1900, the self-described "name transcription" project of the University of Portsmouth in 2017-18 (results). Over a period of many months, volunteers transcribed over 2.5 million text strings from Ordnance Survey six-inch-to-the-mile County Series maps published between 1888 and 1914. The result is a dataset that should prove immensely valuable to historians of Britain in that period. Once the GB1900 data is indexed by WHG (our intention), future efforts to map texts of the period should improve greatly. This is one example of the "seed" principle discussed below.

Seed Datasets

Thanks to the Pleiades project, “a community-built gazetteer and graph of ancient places,” researchers wishing to study the geographies of texts of and about the ancient Mediterranean can locate a very high proportion of place references found in their sources. The Pelagios Commons’ Peripleo project has used Pleiades data as a “seed” in a growing index of place attestations. Records of places within the index are continually augmented with further attestations of places contributed by others. Over time the index has been expanding, spatially and temporally. WHG is following on from Peripleo, extending its aims in a few ways and offering unlimited spatial and temporal coverage. Therefore, seed datasets for particular regions and periods are highly desirable for us.

A seed dataset can take a few forms and might come from a few directions: 1) a repository of attestations laboriously curated by a group of scholars over time (e.g. Pleiades); 2) a historical geographic information system (HGIS), also laboriously developed by one or more scholars and derived from cited primary and secondary sources (e.g. China HGIS, HGIS de las Indias); 3) a historical mapathon.

An Example

Two datasets at the top of our long queue of pending contributions are Werner Stangl's HGIS de las Indias and "the Alcedo gazetteer" [1]. Each is fairly comprehensive for 18th-century Latin America (~15k and ~18k records respectively). Alcedo happens to be one of over 200 sources for HGIS de las Indias, and there is considerable overlap in coverage. The LatAm Gazetteer project, recently initiated under a Pelagios Commons micro-grant, developed digital text versions of Alcedo and its English translation from scanned images and, from those, a dataset of headwords and place types. At that stage, effective reconciliation with modern gazetteers was impossible: for example, Getty TGN has eight distinct Acapulco listings in modern-day México. The original entry text reads "situada en la Costa de la Mar del S" ("situated on the coast of the South Sea"), which could narrow the possibilities considerably, but there is no ready way to send that phrase as actionable context when searching TGN.

LatAm research developer Nidia Hernández was then able to match 60% of the Alcedo entries to an HGIS de las Indias entry, and because that dataset records containing districts, provinces, and countries (with geometry), we can record those topological relations and ultimately improve the results of reconciliation against Getty TGN. Still, we have thousands of records that are locatable only by painstaking reading of the original entries, which may place them only in relation to entities that no longer exist or whose names have changed. A mess! And commonplace.
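
As a sketch of how those containment relations can help, suppose we have the containing province's geometry from HGIS de las Indias as GeoJSON and a list of candidate matches with coordinates from a modern gazetteer; a simple point-in-polygon test discards candidates that fall outside the province. The data below is purely illustrative.

```python
from shapely.geometry import shape, Point

def filter_by_container(candidates, container_geojson):
    """Keep only candidate matches whose coordinates fall inside the
    containing unit's geometry (e.g. a province from HGIS de las Indias)."""
    container = shape(container_geojson)
    return [c for c in candidates
            if container.contains(Point(c["lon"], c["lat"]))]

# Crude stand-in polygon for a province, plus two of the many "Acapulco"
# candidates a modern gazetteer might return (coordinates approximate).
province = {"type": "Polygon",
            "coordinates": [[[-101.0, 16.0], [-98.0, 16.0],
                             [-98.0, 18.5], [-101.0, 18.5], [-101.0, 16.0]]]}
candidates = [{"name": "Acapulco de Juárez", "lon": -99.89, "lat": 16.86},
              {"name": "Acapulco (another candidate)", "lon": -96.5, "lat": 19.2}]
print([c["name"] for c in filter_by_container(candidates, province)])
```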

What we can take away from this is a) it helps to have a large authoritative "seed" for any given region and period, like HGIS de las Indias in this case; and b) in any case, it would help immeasurably to have data from historical maps of the period in place before working with texts, i.e. data developed in historical mapathons. Maps provide approximate geometry and a hierarchy of "within" relationships, making reconciliation to modern gazetteers easier and, in some cases, a "nice to have" rather than essential.

Realizing a Historical Mapathon

Extracting (tracing) place data from old maps can be tedious and time-consuming, so it's best if a) highly motivated groups do it; b) it's limited to a few key maps per group; and c) tools are available to make it as easy as possible. The steps involved are:

  1. Choose a few maps having the desired coverage and a viable license (e.g. using Old Maps Online or the David Rumsey Map Collection).
  2. Decide on an encoding strategy: what to digitize and whether to geo-rectify each map to a modern map. This will vary according to the group’s purposes and the individual maps’ cartography. If the distortion is not too extreme, digitizing features will produce estimated geometries that may be of value.
  3. If indicated, geo-rectify maps using desktop GIS (QGIS, ArcMap) or a web-based tool like MapWarper or Georeferencer (built into the Rumsey site or standalone). This important step is best done by someone with experience or the willingness to master it.
  4. Have group members use an online tool to view the map(s) (overlaid on a “real-world map” if geo-rectified) and create point, line, and polygon features according to the encoding strategy of Step 2. Saved data can be downloaded and mapped at any stage, and when complete, uploaded to WHG as a contribution.
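
The output of Step 4 is essentially a GeoJSON FeatureCollection of traced features, each carrying the transcribed toponym and whatever other attributes the group's encoding strategy calls for. Here is a minimal sketch of assembling one in Python; the property names are illustrative, not a required schema.

```python
import json

def traced_feature(toponym, lon, lat, feature_type, source_map):
    """Build one traced map feature as a GeoJSON point. The property names
    here are illustrative, not a prescribed schema."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "toponym": toponym,            # name as written on the map
            "feature_type": feature_type,  # e.g. settlement, river, region
            "source_map": source_map,      # citation for the traced map sheet
        },
    }

collection = {
    "type": "FeatureCollection",
    "features": [
        traced_feature("Acapulco", -99.9, 16.9, "settlement",
                       "illustrative 18th-century map sheet"),
    ],
}

# Saved as GeoJSON, the data can be inspected in QGIS or a web map at any
# stage, and later converted to Linked Places format for upload to WHG.
with open("mapathon_features.geojson", "w", encoding="utf-8") as f:
    json.dump(collection, f, ensure_ascii=False, indent=2)
```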

Once this is complete, subsequent groups attempting to map texts of the region and period in tools like Recogito should see radically better results. Over time, more seeds like this and an easier contribution workflow to WHG mean better coverage generally, leading to a true world historical gazetteer resource.

Next steps

The roadblocks to staging a historical mapathon right now are a) the lack of a single tool designed specifically for Step 4*; and b) the lack of a straightforward tutorial for Steps 1-4. The WHG team is committed to working on both, and to having them in place by mid-July 2019. We'll post progress periodically. In the meantime, think about which maps you'd really like to mine in this way, and who you might recruit to join your mapathon team.

*It should be noted that, for non-georeferenced maps, the Recogito tool can be used as is in some scenarios. In that case, all that's missing is a workflow for converting "within" relationship tags to the Linked Places format used for contributions to WHG. The result would be a graph dataset that could be useful for some purposes.
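
As a rough sketch of what that conversion might involve, here is how a Recogito-style annotation carrying a "within" tag could be turned into a Linked Places-style feature with a part-of relation and no geometry. The property names follow our reading of the Linked Places format and should be verified against the current specification before use; the input record is invented for illustration.

```python
def to_linked_places(annotation):
    """Convert one annotation (toponym plus a 'within' tag) into a Linked
    Places-style feature. Field names should be checked against the current
    Linked Places format specification."""
    return {
        "@id": annotation["id"],  # contributor-assigned identifier
        "type": "Feature",
        "properties": {"title": annotation["toponym"]},
        "names": [{"toponym": annotation["toponym"]}],
        # No coordinates: the 'within' tag becomes a part-of relation,
        # yielding a graph record rather than a mappable point.
        "relations": [{
            "relationType": "gvp:broaderPartitive",  # i.e. "within"
            "relationTo": annotation["within_id"],
            "label": annotation["within_label"],
        }],
    }

# Invented example: an Alcedo-style entry tagged as lying within a province.
record = to_linked_places({
    "id": "alcedo_0001",
    "toponym": "Acapulco",
    "within_id": "alcedo_prov_mexico",
    "within_label": "Provincia de México",
})
print(record["relations"][0]["label"])
```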

————-

[1] The 1787 "Diccionario geográfico-histórico de las Indias Occidentales o América" (Alcedo) and its English translation (Thompson, 1812).