For Developers

Web Services

Bloodhound data are exposed as csv or JSON-LD documents on publicly shared profile pages. Individual occurrence records are exposed as JSON-LD documents.

Specimen Record

Where /occurrence/477976412 is the occurrence identifier provided by the Global Biodiversity Information Facility (GBIF). This same service is used in two browser extensions, a practice the developer community calls dogfooding.
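A minimal sketch of fetching one occurrence record as JSON-LD. The host name and the Accept-header content negotiation here are assumptions for illustration; only the /occurrence/477976412 path shape comes from the text above.

```ruby
require "net/http"
require "uri"
require "json"

# Hypothetical host; the live service may be served from a different domain.
BASE_URL = "https://bloodhound-tracker.net"

# Build the URI for an occurrence record, keyed by its GBIF occurrence ID.
def occurrence_uri(gbif_id)
  URI("#{BASE_URL}/occurrence/#{gbif_id}")
end

# Fetch the JSON-LD representation of a single occurrence record.
def fetch_occurrence(gbif_id)
  request = Net::HTTP::Get.new(occurrence_uri(gbif_id))
  request["Accept"] = "application/ld+json" # ask for the JSON-LD document
  response = Net::HTTP.start(request.uri.host, request.uri.port, use_ssl: true) do |http|
    http.request(request)
  end
  JSON.parse(response.body)
end
```

The parsed hash can then be walked like any other JSON document; the exact keys depend on the JSON-LD context the service publishes.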


The MIT-licensed code is available on GitHub. Technologies at play include Apache Spark to group occurrence records by raw entries in recordedBy and identifiedBy and to import them into MySQL, Neo4j to store the similarity scores between similarly structured people names, Elasticsearch to aid in searching people names once parsed and cleaned, Redis to coordinate the processing queues, and Sinatra/Ruby for the application layer. A stand-alone Ruby gem, dwc_agent, may be used to parse people names and additionally score them for structural similarity.
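The first step in that pipeline, grouping occurrence records by their raw recordedBy strings, can be illustrated in plain Ruby. This is a toy stand-in for the Spark job with invented sample records; at scale the same grouping is done with Apache Spark as described above.

```ruby
# Toy stand-in for the Spark grouping step: bucket occurrence records by the
# raw, unparsed recordedBy string. Sample records are invented for illustration.
records = [
  { gbifID: 101, recordedBy: "Smith, J." },
  { gbifID: 102, recordedBy: "J. Smith"  },
  { gbifID: 103, recordedBy: "Smith, J." }
]

# Group by the verbatim agent string, keeping only the GBIF IDs per group.
grouped = records
  .group_by { |r| r[:recordedBy] }
  .transform_values { |rs| rs.map { |r| r[:gbifID] } }

puts grouped["Smith, J."].inspect # => [101, 103]
```

Note that "Smith, J." and "J. Smith" land in different buckets; collapsing such variants is exactly why the names are later parsed, cleaned, and scored for structural similarity.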

Raw Data

List of Public Profiles

Where the above csv includes a header, "Family, Given, wikidata, ORCID, URL"
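A short sketch of reading that csv with Ruby's standard CSV library. The header comes from the text above; the data row is an invented placeholder, not a real profile entry.

```ruby
require "csv"

# Sample in the shape of the public-profiles download; the data row is an
# invented placeholder, not a real profile.
sample = <<~CSV
  Family,Given,wikidata,ORCID,URL
  Doe,Jane,Q00000000,0000-0000-0000-0000,https://example.org/profile
CSV

profiles = CSV.parse(sample, headers: true)
profiles.each do |row|
  # Prefer the ORCID when present; fall back to the wikidata Q number.
  identifier = row["ORCID"].to_s.empty? ? row["wikidata"] : row["ORCID"]
  puts "#{row['Family']}, #{row['Given']}: #{identifier}"
end
```

For the real download, replace `CSV.parse(sample, ...)` with `CSV.foreach(path, headers: true)` to stream rows instead of loading the whole file.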

All Claims from Public Profiles
Daily build, bloodhound-public-claims.csv.gz (14.02 MB)

Where the above gzipped csv includes a header, "Subject, Predicate, Object", and rows are expressed as comma-separated subject, predicate, and object values in that order.
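Because the dump is gzipped and sizeable, it is worth streaming rows rather than decompressing the whole file into memory. The sketch below builds a tiny in-memory gzipped csv in the same shape (the triple values are invented) so the streaming code runs without the download; point `Zlib::GzipReader` at the real file instead.

```ruby
require "zlib"
require "csv"
require "stringio"

# Build a small in-memory gzipped csv in the shape of the claims dump; the
# triple values below are invented placeholders.
raw = "Subject,Predicate,Object\n" \
      "https://example.org/p/1,recorded,https://example.org/occ/9\n"
gz_bytes = StringIO.new
writer = Zlib::GzipWriter.new(gz_bytes)
writer.write(raw)
writer.finish # flush the gzip footer without closing the underlying StringIO

# Stream rows one at a time. For the real dump, use
# Zlib::GzipReader.open("bloodhound-public-claims.csv.gz") instead.
triples = []
Zlib::GzipReader.wrap(StringIO.new(gz_bytes.string)) do |gz|
  CSV.new(gz, headers: true).each do |row|
    triples << [row["Subject"], row["Predicate"], row["Object"]]
  end
end

puts triples.first.inspect
```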

Unverified, Unauthenticated Agents
bloodhound-agents.gz (474 MB)

The above gzipped csv includes a header, "agents, gbifIDs_recordedBy, gbifIDs_identifiedBy", and was constructed using a Scala / Apache Spark script; the gbifIDs_recordedBy and gbifIDs_identifiedBy columns each contain an array of GBIF IDs. The "agents" column is as presented on GBIF and will require additional parsing.
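A sketch of unpacking those array columns. The serialization of the arrays ("[101, 103]" here) is an assumption about the Spark export, and the sample row is invented; inspect the real dump before relying on this parsing.

```ruby
require "csv"

# Invented sample row in the shape described above. The bracketed,
# comma-separated array format is an assumption about the Spark export.
sample = <<~CSV
  agents,gbifIDs_recordedBy,gbifIDs_identifiedBy
  "Smith, J.; R. Jones","[101, 103]","[]"
CSV

# Split an assumed bracketed, comma-separated list into integer GBIF IDs.
def parse_id_array(cell)
  cell.to_s.delete("[]").split(",").map { |s| s.strip.to_i }
end

CSV.parse(sample, headers: true).each do |row|
  recorded   = parse_id_array(row["gbifIDs_recordedBy"])
  identified = parse_id_array(row["gbifIDs_identifiedBy"])
  # The raw agents cell ("Smith, J.; R. Jones") is as presented on GBIF and
  # still needs name parsing, e.g. with the dwc_agent gem mentioned above.
  puts "#{row['agents']}: recorded=#{recorded.inspect} identified=#{identified.inspect}"
end
```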