Microservice API to expose and maintain a dump-things instance #13

Open
opened 2025-01-14 14:57:04 +00:00 by jsheunis · 0 comments
jsheunis commented 2025-01-14 14:57:04 +00:00 (Migrated from hub.datalad.org)

https://github.com/psychoinformatics-de/datalad-concepts/pull/206 brings the `dump-things` specification into existence. The idea being:

> This is a knowledge base/graph dump specification, and a companion of the Things schema and its derivatives and extensions.
>
> It defines a data structure for dumping arbitrarily complex information, expressed in these data models, in a version-controllable fashion directly on a filesystem.

Such a "dump" of metadata is likely to serve (sub)sets of information to various client applications, and is likely to be updated over time by various sources/actors. This points to the need for functionality to expose and maintain the information in a dump.

A microservice component running on top of a dump instance could for example:

  • provide endpoints to "query" specific subsets of metadata (e.g. give me all `Person` records)
  • allow records to be returned in different formats (e.g. a zip file following the `dump-things` spec, or a TTL file with RDF)
  • periodically check and merge different sources of metadata for its dump

To define the functionality/API for such a microservice, we can work from the perspective of requests that would be made to its API (TODO):

  • return all records
  • return all records of a specific class
  • return a record with a specific ID, if available
  • allow specification of return format (json, yaml, rdf, ...)