

Simon Spero

There can be a bit of a tradeoff here, depending on the granularity of the dependent entities and the design of the backend storage system.

One problem is latency; the dependent entities cannot be requested for at least one RTT. If there are chains of dependencies, this can quickly add up (HTTP/2 can reduce this effect somewhat).
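The cost of chained fetches can be sketched with a toy model (the RTT value and chain depth below are invented numbers for illustration, not from the comment):

```python
# Toy model: dependent entities fetched one after another, each
# discovered only from the previous response, vs. one bundled fetch.

RTT_MS = 50  # assumed round-trip time to the server (hypothetical)

def sequential_fetch_latency(chain_depth: int, rtt_ms: float = RTT_MS) -> float:
    """Each link in the dependency chain costs at least one full RTT."""
    return chain_depth * rtt_ms

def bundled_fetch_latency(rtt_ms: float = RTT_MS) -> float:
    """If the server ships the whole dataset at once, one RTT suffices
    (ignoring transfer time for the larger payload)."""
    return rtt_ms

print(sequential_fetch_latency(5))  # 250 ms for a 5-deep chain
print(bundled_fetch_latency())      # 50 ms
```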

If most of the dependent entities are used most of the time then it can be more efficient to send a dataset containing multiple entities.

This is especially true if the data is compressed (either ahead of time or on-the-fly). Most compression algorithms require a bit of a run-up to get started.
This is very much the case for RDF, especially N-Triples / N-Quads, where there aren't any prefixes, and for RDF/XML, which is XML...
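The run-up effect is easy to demonstrate with zlib (a sketch; the sample triples and URIs are invented, but the repetition mimics the long unabbreviated URIs of N-Triples):

```python
import zlib

# Invented N-Triples lines with the long repeated URIs typical of
# formats that lack prefix abbreviation.
triples = [
    f'<http://example.org/entity/{i}> <http://example.org/prop/label> "Entity {i}" .\n'
    for i in range(100)
]

# Compressing each entity's data separately: the compressor never
# builds up a useful history, and every chunk pays fixed overhead.
separate = sum(len(zlib.compress(t.encode())) for t in triples)

# Compressing the whole dataset as one stream: later triples are
# encoded as cheap back-references to earlier ones.
together = len(zlib.compress("".join(triples).encode()))

print(separate, together)  # the single stream is far smaller
```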

This does not lessen the importance of making the other entities available by name (blank nodes must die).

[If the client requests an RDF format that supports named graphs, then each entity can go in a separate graph. This can help with caching.]
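As an illustration of the bracketed suggestion, a multi-entity bundle could be serialized as TriG with one named graph per entity (all URIs, properties, and the helper function here are made up for the sketch):

```python
# Sketch: serialize a bundle as TriG, one named graph per entity,
# keyed by the entity's own URI, so each entity can be cached and
# invalidated independently. All data below is invented.

entities = {
    "http://example.org/entity/1": [
        ("<http://example.org/entity/1>",
         "<http://example.org/prop/label>", '"Entity 1"'),
    ],
    "http://example.org/entity/2": [
        ("<http://example.org/entity/2>",
         "<http://example.org/prop/label>", '"Entity 2"'),
    ],
}

def to_trig(entities: dict) -> str:
    """Emit one named graph per entity in TriG syntax."""
    chunks = []
    for graph_uri, triples in entities.items():
        body = "\n".join(f"    {s} {p} {o} ." for s, p, o in triples)
        chunks.append(f"<{graph_uri}> {{\n{body}\n}}")
    return "\n".join(chunks)

print(to_trig(entities))
```

A client that understands TriG can then map each graph name straight back to the entity it requested, without guessing which triples belong to which entity.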
