There are many cases in which RDF data should be retrieved from remote sources only when actually needed. For example, a scheduling application may read personal calendars from the personal sites of its users. Calendar data expire quickly, so there is no reason to re-load them frequently in the hope that they will be queried before they expire.
Virtuoso extends SPARQL so that it is possible to download an RDF resource from a given IRI, parse it, and store the resulting triples in a graph, with all three operations performed during SPARQL query execution. The IRI of the graph where the triples are stored is usually the IRI from which the resource was downloaded, so the feature is named "IRI dereferencing". There are two different use cases for this feature. In the simple case, a SPARQL query contains FROM clauses that enumerate the graphs to process, but DB.DBA.RDF_QUAD contains no triples for some of these graphs. Query execution starts by dereferencing those graphs, and the rest runs as usual. In the more sophisticated case, the query is executed many times in a loop. Every execution produces a partial result. The SPARQL processor checks the result for IRIs of resources that may contain relevant data but have not yet been loaded into DB.DBA.RDF_QUAD, and loads them. After some iteration, the partial result is identical to the result of the previous iteration because there is no more data to retrieve. As the last step, the SPARQL processor builds the final result set.
Virtuoso extends the SPARQL syntax of FROM and FROM NAMED clauses by allowing an additional list of options at the end of the clause: option ( param1 value1, param2 value2, ... ), where parameter names are QNames that start with the get: prefix and values are "precode" expressions, i.e. expressions that do not contain variables other than external parameters. The allowed parameters are demonstrated in the examples below.
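For instance, a FROM NAMED clause carrying such options might look like the following sketch (the graph IRI is illustrative; get:soft and get:refresh are among the parameters used in the examples that follow):

```
SPARQL
SELECT ?s
FROM NAMED <http://example.com/people.ttl>   -- hypothetical source resource
     OPTION (get:soft "soft", get:refresh "3600")
WHERE { graph ?g { ?s ?p ?o } }
LIMIT 10;
```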
```
SQL> SPARQL
define get:uri "http://myopenlink.net/dataspace/person/kidehen"
SELECT ?id
FROM NAMED <http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/sioc.ttl>
WHERE { graph ?g { ?id a ?o } }
LIMIT 10;

id
VARCHAR
_______________________________________________________________________________
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1231
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1231
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1243
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1243
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com#this
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D

10 Rows. -- 10 msec.
```
```
SQL> SPARQL
define get:refresh "3600"
SELECT ?id
FROM NAMED <http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/sioc.ttl>
WHERE { graph ?g { ?id a ?o } }
LIMIT 10;

id
VARCHAR
_______________________________________________________________________________
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1231
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1231
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1243
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1243
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com#this
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D

10 Rows. -- 10 msec.
```
```
SQL> SPARQL
define get:proxy "www.openlinksw.com:80"
define get:method "GET"
define get:soft "soft"
SELECT ?id
FROM NAMED <http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/sioc.ttl>
WHERE { graph ?g { ?id a ?o } }
LIMIT 10;

id
VARCHAR
_______________________________________________________________________________
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1231
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1231
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1243
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1243
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1261
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com#this
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D

10 Rows. -- 10 msec.
```
If the value of some get:... parameter is the same for every FROM clause, it can be written once as a global pragma, like define get:soft "soft". The following two queries work identically:
```
SQL> SPARQL
SELECT ?id
FROM NAMED <http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/sioc.ttl>
     OPTION (get:soft "soft", get:method "GET")
FROM NAMED <http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/sioc.ttl>
     OPTION (get:soft "soft", get:method "GET")
WHERE { graph ?g { ?id a ?o } }
LIMIT 10;

id
VARCHAR
_______________________________________________________________________________
http://www.openlinksw.com/dataspace/person/oerling#this
http://www.openlinksw.com/mt-tb
http://www.openlinksw.com/RPC2
http://www.openlinksw.com/dataspace/oerling#this
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/958
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/958
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/949
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/949

10 Rows. -- 862 msec.
```
```
SQL> SPARQL
define get:method "GET"
define get:soft "soft"
SELECT ?id
FROM NAMED <http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/sioc.ttl>
FROM NAMED <http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/sioc.ttl>
WHERE { graph ?g { ?id a ?o } }
LIMIT 10;

id
VARCHAR
_______________________________________________________________________________
http://www.openlinksw.com/dataspace/person/oerling#this
http://www.openlinksw.com/mt-tb
http://www.openlinksw.com/RPC2
http://www.openlinksw.com/dataspace/oerling#this
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/958
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/958
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/949
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/949

10 Rows. -- 10 msec.
```
This makes the query text shorter, and it is especially useful when the query text comes from a client but a parameter should have a fixed value for security reasons: the values set by define get:... pragmas cannot be redefined inside the query, and the application may prepend the desired pragmas to the text before execution.
Note that the user must have the SPARQL_UPDATE role in order to execute such a query. By default, the SPARQL web service endpoint is owned by the SPARQL user, which has SPARQL_SELECT but not SPARQL_UPDATE. It is possible in principle to grant SPARQL_UPDATE to SPARQL, but this breaches the whole security of the RDF storage.
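For completeness, granting the role is a single statement, shown here only to make the warning concrete; as stated above, doing this for the account of a public endpoint effectively disables RDF storage security:

```
-- Not recommended for public endpoints; see the warning above.
GRANT SPARQL_UPDATE TO "SPARQL";
```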
```
SQL> SPARQL
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?friend
FROM NAMED <http://myopenlink.net/dataspace/person/kidehen>
     OPTION (get:soft "soft", get:method "GET")
WHERE { <http://myopenlink.net/dataspace/person/kidehen#this> foaf:knows ?friend . };

friend
VARCHAR
_______________________________________________________________________________
http://www.dajobe.org/foaf.rdf#i
http://www.w3.org/People/Berners-Lee/card#i
http://www.w3.org/People/Connolly/#me
http://my.opera.com/chaals/xml/foaf#me
http://www.w3.org/People/Berners-Lee/card#amy
http://www.w3.org/People/EM/contact#me
http://myopenlink.net/dataspace/person/ghard#this
http://myopenlink.net/dataspace/person/omfaluyi#this
http://myopenlink.net/dataspace/person/alanr#this
http://myopenlink.net/dataspace/person/bblfish#this
http://myopenlink.net/dataspace/person/danja#this
http://myopenlink.net/dataspace/person/tthibodeau#this
...

36 Rows. -- 1693 msec.
```
Consider a set of personal data in which one resource can list many persons and point to resources where those persons are described in more detail. For example, a resource about user1 describes that user and also contains statements that user2 and user3 are persons and that more data about them can be found in user2.ttl and user3.ttl; user3.ttl can in turn state that user4 is also a person with more data in user4.ttl, and so on. The query should find as many users as possible and return their names and e-mails.
If all data about all users were loaded into the database, the query could be quite simple:
```
SQL> SPARQL
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?id ?firstname ?nick
where {
    graph ?g {
        ?id rdf:type foaf:Person .
        ?id foaf:firstName ?firstname .
        ?id foaf:knows ?fn .
        ?fn foaf:nick ?nick .
      }
  }
limit 10;

id                                                     firstname  nick
VARCHAR                                                VARCHAR    VARCHAR
_______________________________________________________________________________
http://myopenlink.net/dataspace/person/pmitchell#this  LaRenda    sdmonroe
http://myopenlink.net/dataspace/person/pmitchell#this  LaRenda    kidehen{at}openlinksw.com
http://myopenlink.net/dataspace/person/pmitchell#this  LaRenda    alexmidd
http://myopenlink.net/dataspace/person/abm#this        Alan       kidehen{at}openlinksw.com
http://myopenlink.net/dataspace/person/igods#this      Cameron    kidehen{at}openlinksw.com
http://myopenlink.net/dataspace/person/goern#this      Christoph  captsolo
http://myopenlink.net/dataspace/person/dangrig#this    Dan        rickbruner
http://myopenlink.net/dataspace/person/dangrig#this    Dan        sdmonroe
http://myopenlink.net/dataspace/person/dangrig#this    Dan        lszczepa
http://myopenlink.net/dataspace/person/dangrig#this    Dan        kidehen

10 Rows. -- 80 msec.
```
It is possible to enable IRI dereferencing in such a way that all appropriate resources are loaded during query execution, even if the names of some of them are not known a priori.
```
SQL> SPARQL
define input:grab-var "?more"
define input:grab-depth 10
define input:grab-limit 100
define input:grab-base "http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1300"
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?id ?firstname ?nick
WHERE {
    graph ?g {
        ?id rdf:type foaf:Person .
        ?id foaf:firstName ?firstname .
        ?id foaf:knows ?fn .
        ?fn foaf:nick ?nick .
        OPTIONAL { ?id rdfs:SeeAlso ?more }
      }
  }
LIMIT 10;

id                                                         firstname  nick
VARCHAR                                                    VARCHAR    VARCHAR
_______________________________________________________________________________
http://myopenlink.net/dataspace/person/ghard#this          Yrjänä     kidehen
http://inamidst.com/sbp/foaf#Sean                          Sean       d8uv
http://myopenlink.net/dataspace/person/dangrig#this        Dan        rickbruner
http://myopenlink.net/dataspace/person/dangrig#this        Dan        sdmonroe
http://myopenlink.net/dataspace/person/dangrig#this        Dan        lszczepa
http://myopenlink.net/dataspace/person/dangrig#this        Dan        kidehen
http://captsolo.net/semweb/foaf-captsolo.rdf#Uldis_Bojars  Uldis      mortenf
http://captsolo.net/semweb/foaf-captsolo.rdf#Uldis_Bojars  Uldis      danja
http://captsolo.net/semweb/foaf-captsolo.rdf#Uldis_Bojars  Uldis      zool
http://myopenlink.net/dataspace/person/rickbruner#this     Rick       dangrig

10 Rows. -- 530 msec.
```
IRI dereferencing is controlled by input:grab-... pragmas; those used in the examples below are input:grab-iri, input:grab-var, input:grab-all, input:grab-depth, input:grab-limit, input:grab-base, and input:grab-seealso.
```
SQL> SPARQL
define input:storage ""
define input:grab-iri <http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/sioc.ttl>
define input:grab-var "id"
define input:grab-depth 10
define input:grab-limit 100
define input:grab-base "http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1300"
SELECT ?id WHERE { graph ?g { ?id a ?o } } LIMIT 10;

id
VARCHAR
_______________________________________________________________________________
http://www.openlinksw.com/virtrdf-data-formats#default-iid
http://www.openlinksw.com/virtrdf-data-formats#default-iid-nullable
http://www.openlinksw.com/virtrdf-data-formats#default-iid-nonblank
http://www.openlinksw.com/virtrdf-data-formats#default-iid-nonblank-nullable
http://www.openlinksw.com/virtrdf-data-formats#default
http://www.openlinksw.com/virtrdf-data-formats#default-nullable
http://www.openlinksw.com/virtrdf-data-formats#sql-varchar
http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-nullable
http://www.openlinksw.com/virtrdf-data-formats#sql-longvarchar
http://www.openlinksw.com/virtrdf-data-formats#sql-longvarchar-nullable

10 Rows. -- 530 msec.
```
```
SQL> SPARQL
define input:grab-all "yes"
define input:grab-depth 10
define input:grab-limit 100
define input:grab-base "http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1300"
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?id ?firstname ?nick
where {
    graph ?g {
        ?id rdf:type foaf:Person .
        ?id foaf:firstName ?firstname .
        ?id foaf:knows ?fn .
        ?fn foaf:nick ?nick .
      }
  }
limit 10;

id                                                     firstname  nick
VARCHAR                                                VARCHAR    VARCHAR
____________________________________________________________________
http://myopenlink.net/dataspace/person/pmitchell#this  LaRenda    sdmonroe
http://myopenlink.net/dataspace/person/pmitchell#this  LaRenda    kidehen{at}openlinksw.com
http://myopenlink.net/dataspace/person/pmitchell#this  LaRenda    alexmidd
http://myopenlink.net/dataspace/person/abm#this        Alan       kidehen{at}openlinksw.com
http://myopenlink.net/dataspace/person/igods#this      Cameron    kidehen{at}openlinksw.com
http://myopenlink.net/dataspace/person/goern#this      Christoph  captsolo
http://myopenlink.net/dataspace/person/dangrig#this    Dan        rickbruner
http://myopenlink.net/dataspace/person/dangrig#this    Dan        sdmonroe
http://myopenlink.net/dataspace/person/dangrig#this    Dan        lszczepa
http://myopenlink.net/dataspace/person/dangrig#this    Dan        kidehen

10 Rows. -- 660 msec.
```
```
SQL> SPARQL
define input:grab-iri <http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/sioc.ttl>
define input:grab-var "id"
define input:grab-depth 10
define input:grab-limit 100
define input:grab-base "http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1300"
define input:grab-seealso <foaf:maker>
prefix foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?id where { graph ?g { ?id a foaf:Person . } } limit 10;

id
VARCHAR
_______________________________________________________________________________
mailto:somebody@example.domain
http://localhost:8895/dataspace/person/dav#this
http://localhost:8895/dataspace/person/dba#this
mailto:2@F.D
http://localhost:8895/dataspace/person/test1#this
http://www.openlinksw.com/blog/~kidehen/gems/rss.xml#Kingsley%20Uyi%20Idehen
http://art.weblogsinc.com/rss.xml#
http://digitalmusic.weblogsinc.com/rss.xml#
http://partners.userland.com/nytrss/books.xml#
http://partners.userland.com/nytrss/arts.xml#

10 Rows. -- 105 msec.
```
```
SQL> SPARQL
define input:grab-depth 10
define input:grab-limit 100
define input:grab-var "more"
define input:grab-base "http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com%27s%20BLOG%20%5B127%5D/1300"
prefix foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?id
where {
    graph ?g {
        ?id a foaf:Person .
        optional { ?id foaf:maker ?more }
      }
  }
limit 10;

id
VARCHAR
_______________________________________________________________________________
mailto:somebody@example.domain
http://localhost:8895/dataspace/person/dav#this
http://localhost:8895/dataspace/person/dba#this
mailto:2@F.D
http://localhost:8895/dataspace/person/test1#this
http://www.openlinksw.com/blog/~kidehen/gems/rss.xml#Kingsley%20Uyi%20Idehen
http://art.weblogsinc.com/rss.xml#
http://digitalmusic.weblogsinc.com/rss.xml#
http://partners.userland.com/nytrss/books.xml#
http://partners.userland.com/nytrss/arts.xml#

10 Rows. -- 115 msec.
```
The default resolver procedure is DB.DBA.RDF_GRAB_RESOLVER_DEFAULT(). Note that the function produces two absolute URIs, abs_uri and dest_uri. The default procedure returns two equal strings, but other resolvers may return different values, e.g., returning the primary, permanent location of the resource as dest_uri and the fastest known mirror location as abs_uri, thus saving HTTP retrieval time. A resolver can even signal an error to block the downloading of an unwanted resource.
```
DB.DBA.RDF_GRAB_RESOLVER_DEFAULT (
  in base varchar,         -- base IRI as specified by the input:grab-base pragma
  in rel_uri varchar,      -- IRI of the resource as specified by input:grab-iri or by the value of a variable
  out abs_uri varchar,     -- the absolute IRI that should be downloaded
  out dest_uri varchar,    -- the graph IRI where triples should be stored after download
  out get_method varchar ) -- the HTTP method to use, should be "GET" or "MGET"
```
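As a minimal sketch of a custom resolver with this signature, the procedure below keeps the original IRI as the graph name but fetches certain resources from a hypothetical mirror host; mirror.example.com and blocked.example.org are assumptions, as is the use of WS.WS.EXPAND_URL to absolutize the IRI:

```
CREATE PROCEDURE DB.DBA.RDF_GRAB_RESOLVER_MIRROR (
  in base varchar, in rel_uri varchar,
  out abs_uri varchar, out dest_uri varchar, out get_method varchar )
{
  -- Store triples under the original absolute IRI.
  dest_uri := WS.WS.EXPAND_URL (base, rel_uri);
  -- Hypothetical optimization: download from a faster mirror.
  abs_uri := replace (dest_uri, 'http://www.openlinksw.com/', 'http://mirror.example.com/');
  -- A resolver may also block unwanted resources by signalling an error.
  if (strstr (dest_uri, 'blocked.example.org') is not null)
    signal ('42000', 'Download of this resource is not allowed');
  get_method := 'GET';
}
;
```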
URL rewriting is the act of modifying a source URL prior to the final processing of that URL by a Web Server.
The ability to rewrite URLs may be desirable for many reasons that include:
URI naming schemes don't resolve the challenges associated with referencing data. To reiterate, this is demonstrated by the fact that the URIs http://demo.openlinksw.com/Northwind/Customer/ALFKI and http://demo.openlinksw.com/Northwind/Customer/ALFKI#this both appear as http://demo.openlinksw.com/Northwind/Customer/ALFKI to the Web Server, since data following the fragment identifier "#" never makes it that far.
The only way to address data referencing is by pre-processing source URIs (e.g. via regular expression or sprintf substitutions) as part of a URL rewriting processing pipeline. The pipeline process has to take the form of a set of rules that cater for elements such as HTTP Accept headers, HTTP response code, HTTP response headers, and rule processing order.
An example of such a pipeline is depicted in the table below.
| URI Source (Regular Expression Pattern) | HTTP Accept Headers (Regular Expression) | HTTP Response Code | HTTP Response Headers | Rule Processing Order |
|---|---|---|---|---|
| /Northwind/Customer/([^#]*) | None (meaning default) | 200, or 303 redirect to a resource with default representation | None | Normal (order irrelevant) |
| /Northwind/Customer/([^#]*) | (text/rdf.n3)\|(application/rdf.xml) | 303 redirect to location of a descriptive and associated resource (e.g. RESTful Web Service that returns the desired representation) | None | |
| /Northwind/Customer/([^#]*) | (text/html)\|(application/xhtml.xml) | 406 (Not Acceptable), or 303 redirect to location of resource in requested representation | Vary: negotiate, accept; Alternates: {"ALFKI" 0.9 {type application/rdf+xml}} | |
The source URI patterns refer to virtual or physical directories, for example at http://demo.openlinksw.com/. Rules can be placed at the head or tail of the pipeline, or applied in the order they are declared, by specifying a Rule Processing Order of First, Last, or Normal, respectively. The decision as to which representation to return for the URI http://demo.openlinksw.com/Northwind/Customer/ALFKI is based on the MIME type(s) specified in any Accept header accompanying the request.
In the case of the last rule, the Alternates response header applies only to response code 406, which would be returned if there were no (X)HTML representation available for the requested resource. In the example shown, an alternative representation is available in RDF/XML.
When applied to matching HTTP requests, the last two rules might generate responses similar to those below:
$ curl -I -H "Accept: application/rdf+xml" http://demo.openlinksw.com/Northwind/Customer/ALFKI HTTP/1.1 303 See Other Server: Virtuoso/05.00.3016 (Solaris) x86_64-sun-solaris2.10-64 PHP5 Connection: close Content-Type: text/html; charset=ISO-8859-1 Date: Mon, 16 Jul 2007 22:40:03 GMT Accept-Ranges: bytes Location: /sparql?query=CONSTRUCT+{+%3Chttp%3A//demo.openlinksw.com/Northwind/Customer/ALFKI%23this%3E+%3Fp+%3Fo+}+FROM+%3Chttp%3A//demo.openlinksw.com/Northwind%3E+WHERE+{+%3Chttp%3A//demo.openlinksw.com/Northwind/Customer/ALFKI%23this%3E+%3Fp+%3Fo+}&format=application/rdf%2Bxml Content-Length: 0
In the cURL exchange depicted above, the target Virtuoso server redirects to a SPARQL endpoint that retrieves an RDF/XML representation of the requested entity.
$ curl -I -H "Accept: text/html" http://demo.openlinksw.com/Northwind/Customer/ALFKI HTTP/1.1 406 Not Acceptable Server: Virtuoso/05.00.3016 (Solaris) x86_64-sun-solaris2.10-64 PHP5 Connection: close Content-Type: text/html; charset=ISO-8859-1 Date: Mon, 16 Jul 2007 22:40:23 GMT Accept-Ranges: bytes Vary: negotiate,accept Alternates: {"ALFKI" 0.9 {type application/rdf+xml}} Content-Length: 0
In this second cURL exchange, the target Virtuoso server indicates that there is no resource to deliver in the requested representation. It provides hints in the form of an alternate resource representation and URI that may be appropriate, i.e., an RDF/XML representation of the requested entity.
Virtuoso provides a URL rewriter that can be enabled for URLs matching specified patterns. Coupled with customizable HTTP response headers and response codes, this lets Data-Web server administrators configure highly flexible rules for driving content negotiation and URL rewriting. The key elements of the URL rewriter are:
A Virtuoso virtual directory maps a logical path to a physical directory that is file system or WebDAV based. This mechanism allows physical locations to be hidden or simply reorganised. Virtual directory definitions are held in the system table DB.DBA.HTTP_PATH. Virtual directories can be administered in three basic ways:
Although we are approaching the URL Rewriter from the perspective of deploying linked data, the Rewriter was developed with additional objectives in mind, which have influenced some of the formal argument names in the Configuration API function prototypes. In the following sections, long URLs are those containing a query string with named parameters; nice (a.k.a. source) URLs have data encoded in some other format. The primary goal of the Rewriter is to accept a nice URL from an application and convert it into a long URL, which then identifies the page that should actually be retrieved.
When an HTTP request is accepted by the Virtuoso HTTP server, the received nice URL is passed to an internal path translation function. This function takes the nice URL and, if the current virtual directory has a url_rewrite option set to an existing ruleset name, tries to match the corresponding rulesets and rules; that is, it performs a recursive traversal of any rulelist associated with it. For every rule in the rulelist, the same logic is applied (only the logic for regex-based rules is described; that for sprintf-based rules is very similar):
The path translation function described above is internal to the Web server, so its signature is not appropriate for Virtuoso/PL calls and thus is not published. Virtuoso/PL developers can harness the same functionality using the DB.DBA.URLREWRITE_APPLY API call.
Virtuoso is a full-blown HTTP server in its own right. The HTTP server functionality co-exists with the product core (i.e., DBMS Engine, Web Services Platform, WebDAV filesystem, and other components of the Universal Server). As a result, it has the ability to multi-home Web domains within a single instance across a variety of domain name and port combinations. In addition, it also enables the creation of multiple virtual directories per domain.
In addition to the basic functionality, Virtuoso facilitates the association of URL Rewriting rules with the virtual directories associated with a hosted Web domain.
In all cases, Virtuoso enables you to configure virtual domains, virtual directories and URL rewrite rules for one or more virtual directories, via the (X)HTML-based Conductor Admin User Interface or a collection of Virtuoso Stored Procedure Language (PL)-based APIs.
The steps for configuring URL Rewrite rules via the Virtuoso Conductor are as follows:
Figure 14.12.3.6.1: URL-rewrite UI using Conductor
The vhost_define() API is used to define virtual hosts and virtual paths hosted by the Virtuoso HTTP server. URL rewriting is enabled through this function's opts parameter. opts is of type ANY, e.g., a vector of field-value pairs. Numerous fields are recognized, controlling different options; the field url_rewrite controls URL rewriting, and the corresponding value is the IRI of a rule list to apply.
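For example, a virtual directory definition enabling URL rewriting might look like the following sketch; the paths and the rule list name are illustrative, and the same call appears in the TCN example later in this section:

```
DB.DBA.VHOST_DEFINE (
  lpath=>'/Northwind',       -- logical path (illustrative)
  ppath=>'/DAV/Northwind/',  -- physical path (illustrative)
  is_dav=>1,
  vsp_user=>'dba',
  opts=>vector ('url_rewrite', 'http_rule_list_1'));  -- IRI of the rule list to apply
```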
Virtuoso includes the following functions for managing URL rewriting rules and rule lists. The names are self-explanatory.
```
-- Deletes a rewriting rule
DB.DBA.URLREWRITE_DROP_RULE
-- Creates a rewriting rule which uses sprintf-based pattern matching
DB.DBA.URLREWRITE_CREATE_SPRINTF_RULE
-- Creates a rewriting rule which uses regular expression (regex) based pattern matching
DB.DBA.URLREWRITE_CREATE_REGEX_RULE
-- Deletes a rewriting rule list
DB.DBA.URLREWRITE_DROP_RULELIST
-- Creates a rewriting rule list
DB.DBA.URLREWRITE_CREATE_RULELIST
-- Lists all the rules whose IRIs match the specified 'SQL like' pattern
DB.DBA.URLREWRITE_ENUMERATE_RULES
-- Lists all the rule lists whose IRIs match the specified 'SQL like' pattern
DB.DBA.URLREWRITE_ENUMERATE_RULELISTS
```
Rewriting rules take two forms: sprintf-based and regex-based. When used for nice-to-long URL conversion, the only difference between them is the syntax of the format strings. The reverse, long-to-nice conversion works only for sprintf-based rules; regex-based rules are unidirectional.
For the purposes of describing how to make dereferenceable URIs for linked data, we will stick with the nice to long conversion using regex-based rules.
Regex rules are created using the URLREWRITE_CREATE_REGEX_RULE() function.
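The call below is a sketch of such a rule for the Northwind example. All argument values are illustrative, and the parameter list (rule IRI, update flag, nice-URL regex, captured parameter names, minimum parameter count, long-URL compose string, compose parameters, optional expression, Accept-header pattern, continuation flag, and redirect code) should be checked against the function reference for your Virtuoso version:

```
-- A sketch only: redirect RDF-capable clients asking for a Northwind customer
-- to a SPARQL DESCRIBE URL. Values are illustrative, not a tested configuration.
DB.DBA.URLREWRITE_CREATE_REGEX_RULE (
  'nw_rdf_rule', 1,                       -- rule IRI, allow update
  '/Northwind/Customer/([^#]*)',          -- nice URL pattern
  vector ('cust_id'), 1,                  -- captured parameter names, minimum count
  '/sparql?query=DESCRIBE+<http://demo.openlinksw.com/Northwind/Customer/%U>&format=application/rdf%%2Bxml',
  vector ('cust_id'),                     -- parameters substituted into the long URL
  null,                                   -- no extra expression
  '(text/rdf.n3)|(application/rdf.xml)',  -- Accept pattern, as in the rule table above
  0,                                      -- continue rule processing normally
  303);                                   -- issue a 303 redirect
```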
The Northwind schema comprises commonly understood SQL tables, including Customers, Orders, Employees, Products, Product Categories, Shippers, Countries, Provinces, etc.
An RDF View of SQL data is an RDF named graph (RDF data set) comprised of RDF Linked Data (triples) stored in a Virtuoso Quad Store (the native RDF Data Management realm of Virtuoso).
In this example we are going to interact with Linked Data deployed into the Data-Web from a live instance of Virtuoso, which uses the URL rewrite rules from the prior section.
The components used in the example are as follows:
The curl utility is a useful tool for verifying HTTP server responses and rewriting rules. The curl exchanges below show the URL rewriting rules defined for the Northwind RDF view being applied.
Example 1:
$ curl -I -H "Accept: text/html" http://demo.openlinksw.com/Northwind/Customer/ALFKI HTTP/1.1 303 See Other Server: Virtuoso/05.00.3016 (Solaris) x86_64-sun-solaris2.10-64 PHP5 Connection: close Content-Type: text/html; charset=ISO-8859-1 Date: Tue, 14 Aug 2007 13:30:02 GMT Accept-Ranges: bytes Location: http://demo.openlinksw.com/about/html/http/demo.openlinksw.com/Northwind/Customer/ALFKI Content-Length: 0
Example 2:
$ curl -I -H "Accept: application/rdf+xml" http://demo.openlinksw.com/Northwind/Customer/ALFKI HTTP/1.1 303 See Other Server: Virtuoso/05.00.3016 (Solaris) x86_64-sun-solaris2.10-64 PHP5 Connection: close Content-Type: text/html; charset=ISO-8859-1 Date: Tue, 14 Aug 2007 13:30:22 GMT Accept-Ranges: bytes Location: /sparql?query=CONSTRUCT+{+%3Chttp%3A//demo.openlinksw.com/Northwind/Customer/ALFKI%23this%3E+%3Fp+%3Fo+}+FROM+%3Chttp%3A//demo.openlinksw.com/Northwind%3E+WHERE+{+%3Chttp%3A//demo.openlinksw.com/Northwind/Customer/ALFKI%23this%3E+%3Fp+%3Fo+}&format=application/rdf%2Bxml Content-Length: 0
Example 3:
$ curl -I -H "Accept: text/html" http://demo.openlinksw.com/Northwind/Customer/ALFKI#this HTTP/1.1 404 Not Found Server: Virtuoso/05.00.3016 (Solaris) x86_64-sun-solaris2.10-64 PHP5 Connection: Keep-Alive Content-Type: text/html; charset=ISO-8859-1 Date: Tue, 14 Aug 2007 13:31:01 GMT Accept-Ranges: bytes Content-Length: 0
The output above shows how RDF entities from the Data-Web, in this case customer ALFKI, are exposed in the Document Web. The power of SPARQL coupled with URL rewriting enables us to produce results in line with the desired representation. A SPARQL SELECT or CONSTRUCT query is used depending on whether the requested representation is text/html or application/rdf+xml, respectively.
The 404 response in Example 3 indicates that no HTML representation is available for entity ALFKI#this. In most cases, a URI of this form (containing a '#' fragment identifier) will not reach the server. This example supposes that it does, i.e., that the RDF client and network routing allow the suffixed request. The presence of the #this suffix implicitly states that this is a request for a data resource in the Data-Web realm, not a document resource from the Document Web.
Rather than return 404, we could instead choose to construct our rewriting rules to perform a 303 redirect, so that the response for ALFKI#this in Example 3 becomes the same as that for ALFKI in Example 1.
So as not to overload our preceding description of Linked Data deployment with excessive detail, the description of content negotiation presented thus far was kept deliberately brief. This section discusses content negotiation in more detail.
Recall that a resource (conceptual entity) identified by a URI may be associated with more than one representation (e.g. multiple languages, data formats, sizes, or resolutions). If multiple representations are available, the resource is referred to as negotiable and each of its representations is termed a variant. For instance, a Web document resource named 'ALFKI' may have three variants, alfki.xml, alfki.html and alfki.txt, all representing the same data. Content negotiation provides a mechanism for selecting the best variant.
As outlined in the earlier brief discussion of content negotiation, when a user agent requests a resource, it can include with the request Accept headers (Accept, Accept-Language, Accept-Charset, Accept-Encoding etc.) which express the user preferences and user agent capabilities. The server then chooses and returns the best variant based on the Accept headers. Because the selection of the best resource representation is made by the server, this scheme is classed as server-driven negotiation.
An alternative content negotiation mechanism is Transparent Content Negotiation (TCN), a protocol defined by RFC 2295. TCN offers a number of benefits over standard HTTP/1.1 negotiation for suitably enabled user agents.
RFC2295 introduces a number of new HTTP headers including the Negotiate request header, and the TCN and Alternates response headers. (Krishnamurthy et al. note that although the HTTP/1.1 specification reserved the Alternates header for use in agent driven negotiation, it was not fully specified. Consequently under a pure HTTP/1.1 implementation as defined by RFC2616, server-driven content negotiation is the only option. RFC2295 addresses this issue.)
Weaknesses of server-driven negotiation highlighted by RFCs 2295 and 2616 include:
Rather than rely on server-driven negotiation and variant selection by the server, a user agent can take full control over deciding the best variant by explicitly requesting transparent content negotiation through the Negotiate request header. The negotiation is 'transparent' because it makes all the variants on the server visible to the agent.
Under this scheme, the server sends the user agent a list, represented in an Alternates header, containing the available variants and their properties. The user agent can then choose the best variant itself. Consequently, the agent no longer needs to send large Accept headers describing in detail its capabilities and preferences. (However, unless caching is used, user-agent driven negotiation does suffer from the disadvantage of needing a second request to obtain the best representation. By sending its best guess as the first response, server driven negotiation avoids this second request if the initial best guess is acceptable.)
As well as variant selection by the user agent, TCN allows the server to choose on behalf of the user agent if the user agent explicitly allows it through the Negotiate request header. This option allows the user agent to send smaller Accept headers containing enough information to allow the server to choose the best variant and return it directly. The server's choice is controlled by a 'remote variant selection algorithm' as defined in RFC2296.
A further option is to allow the end user to select a variant, in case the choice made by the negotiation process is not optimal. For instance, the user agent could display an HTML-based 'pick list' of variants constructed from the variant list returned by the server. Alternatively, the server could generate this pick list itself and include it in the response to a user agent's request for a variant list. (Virtuoso currently responds this way.)
The following section describes the Virtuoso HTTP server's TCN implementation which is based on RFC2295, but without "Feature" negotiation. OpenLink's RDF rich clients, iSparql and the OpenLink RDF Browser, both support TCN. User agents which do not support transparent content negotiation continue to be handled using HTTP/1.1 style content negotiation (whereby server-side selection is the only option - the server selects the best variant and returns a list of variants in an Alternates response header).
In order to negotiate a resource, the server needs to be given information about each of the variants. Variant descriptions are held in SQL table HTTP_VARIANT_MAP. The descriptions themselves can be created, updated or deleted using Virtuoso/PL or through the Conductor UI. The table definition is as follows:
```
create table DB.DBA.HTTP_VARIANT_MAP (
  VM_ID integer identity,       -- unique ID
  VM_RULELIST varchar,          -- HTTP rule list name
  VM_URI varchar,               -- name of requested resource e.g. 'page'
  VM_VARIANT_URI varchar,       -- name of variant e.g. 'page.xml', 'page.de.html' etc.
  VM_QS float,                  -- source quality, a number in the range 0.001-1.000, with 3-digit precision
  VM_TYPE varchar,              -- content type of the variant e.g. text/xml
  VM_LANG varchar,              -- content language e.g. 'en', 'de' etc.
  VM_ENC varchar,               -- content encoding e.g. 'utf-8', 'ISO-8892' etc.
  VM_DESCRIPTION long varchar,  -- a human-readable description of the variant e.g. 'Profile in RDF format'
  VM_ALGO int default 0,        -- reserved for future use
  primary key (VM_RULELIST, VM_URI, VM_VARIANT_URI)
)
create unique index HTTP_VARIANT_MAP_ID on DB.DBA.HTTP_VARIANT_MAP (VM_ID)
```
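Because variant descriptions are ordinary rows in this table, they can be inspected with a plain query; the rule list name below is the one used in the example later in this section:

```
-- List the variants registered under a given rule list.
SELECT VM_URI, VM_VARIANT_URI, VM_TYPE, VM_QS, VM_DESCRIPTION
  FROM DB.DBA.HTTP_VARIANT_MAP
 WHERE VM_RULELIST = 'http_rule_list_1';
```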
Two functions are provided for adding or updating, and for removing, variant descriptions using Virtuoso/PL:
```
-- Adding or updating a resource variant:
DB.DBA.HTTP_VARIANT_ADD (
  in rulelist_uri varchar,        -- HTTP rule list name
  in uri varchar,                 -- requested resource name e.g. 'page'
  in variant_uri varchar,         -- variant name e.g. 'page.xml', 'page.de.html' etc.
  in mime varchar,                -- content type of the variant e.g. text/xml
  in qs float := 1.0,             -- source quality, a floating-point number with 3-digit precision in the 0.001-1.000 range
  in description varchar := null, -- a human-readable description of the variant e.g. 'Profile in RDF format'
  in lang varchar := null,        -- content language e.g. 'en', 'bg', 'de' etc.
  in enc varchar := null          -- content encoding e.g. 'utf-8', 'ISO-8892' etc.
)

-- Removing a resource variant:
DB.DBA.HTTP_VARIANT_REMOVE (
  in rulelist_uri varchar,        -- HTTP rule list name
  in uri varchar,                 -- name of requested resource e.g. 'page'
  in variant_uri varchar := '%'   -- variant name filter
)
```
The Conductor 'Content negotiation' panel for describing resource variants and configuring content negotiation is depicted below. It can be reached by selecting the 'Virtual Domains & Directories' tab under the 'Web Application Server' menu item, then selecting the 'URL rewrite' option for a logical path listed amongst those for the relevant HTTP host, e.g. '{Default Web Site}'.
The input fields reflect the supported 'dimensions' of negotiation which include content type, language and encoding. Quality values corresponding to the options for 'Source Quality' are as follows:
| Source Quality | Quality Value |
|---|---|
| perfect representation | 1.000 |
| threshold of noticeable loss of quality | 0.900 |
| noticeable, but acceptable quality reduction | 0.800 |
| barely acceptable quality | 0.500 |
| severely degraded quality | 0.300 |
| completely degraded quality | 0.000 |
When a user agent instructs the server to select the best variant, Virtuoso does so using the selection algorithm below:
If a virtual directory has URL rewriting enabled (has the 'url_rewrite' option set), the web server:
The server may return the best-choice resource representation or a list of available resource variants. When a user agent requests transparent negotiation, the web server returns the TCN header "choice". When a user agent asks for a variant list, the server returns the TCN header "list".
In this example we assume the following files have been uploaded to the Virtuoso WebDAV server, each containing the same information in a different format: page.html (HTML), page.txt (plain text), and page.xml (XML).
We add TCN rules and define a virtual directory:
```
DB.DBA.HTTP_VARIANT_ADD ('http_rule_list_1', 'page', 'page.html', 'text/html',  0.900000, 'HTML variant');
DB.DBA.HTTP_VARIANT_ADD ('http_rule_list_1', 'page', 'page.txt',  'text/plain', 0.500000, 'Text document');
DB.DBA.HTTP_VARIANT_ADD ('http_rule_list_1', 'page', 'page.xml',  'text/xml',   1.000000, 'XML variant');
DB.DBA.VHOST_DEFINE (lpath=>'/DAV/TCN/', ppath=>'/DAV/TCN/', is_dav=>1, vsp_user=>'dba',
                     opts=>vector ('url_rewrite', 'http_rule_list_1'));
```
Having done this, we can now test the setup with a suitable HTTP client, in this case the curl command-line utility. In the following examples, the curl client supplies Negotiate request headers containing content negotiation directives; those used below are "*", which allows the server to choose and return the best variant, and "vlist", which asks the server for the list of available variants.
The server returns a TCN response header signalling that the resource is transparently negotiated and either a choice or a list response as appropriate.
In the first curl exchange, the user agent indicates to the server that, of the formats it recognizes, HTML is preferred and it instructs the server to perform transparent content negotiation. In the response, the Vary header field expresses the parameters the server used to select a representation, i.e. only the Negotiate and Accept header fields are considered.
$ curl -i -H "Accept: text/xml;q=0.3,text/html;q=1.0,text/plain;q=0.5,*/*; q=0.3" -H "Negotiate: *" http://localhost:8890/DAV/TCN/page HTTP/1.1 200 OK Server: Virtuoso/05.00.3021 (Linux) i686-pc-linux-gnu VDB Connection: Keep-Alive Date: Wed, 31 Oct 2007 15:43:18 GMT Accept-Ranges: bytes TCN: choice Vary: negotiate,accept Content-Location: page.html Content-Type: text/html ETag: "14056a25c066a6e0a6e65889754a0602" Content-Length: 49 <html> <body> some html </body> </html>
Next, the quality values in the Accept header are adjusted so that the user agent indicates that XML is its preferred format.
$ curl -i -H "Accept: text/xml,text/html;q=0.7,text/plain;q=0.5,*/*;q=0.3" -H "Negotiate: *" http://localhost:8890/DAV/TCN/page HTTP/1.1 200 OK Server: Virtuoso/05.00.3021 (Linux) i686-pc-linux-gnu VDB Connection: Keep-Alive Date: Wed, 31 Oct 2007 15:44:07 GMT Accept-Ranges: bytes TCN: choice Vary: negotiate,accept Content-Location: page.xml Content-Type: text/xml ETag: "8b09f4b8e358fcb7fd1f0f8fa918973a" Content-Length: 39 <?xml version="1.0" ?> <a>some xml</a>
In the final example, the user agent wants to decide itself which is the most suitable representation, so it asks for a list of variants. The server provides the list, in the form of an Alternates response header, and, in addition, sends an HTML representation of the list so that the end user can decide on the preferred variant himself if the user agent is unable to.
$ curl -i -H "Accept: text/xml,text/html;q=0.7,text/plain;q=0.5,*/*;q=0.3" -H "Negotiate: vlist" http://localhost:8890/DAV/TCN/page HTTP/1.1 300 Multiple Choices Server: Virtuoso/05.00.3021 (Linux) i686-pc-linux-gnu VDB Connection: close Content-Type: text/html; charset=ISO-8859-1 Date: Wed, 31 Oct 2007 15:44:35 GMT Accept-Ranges: bytes TCN: list Vary: negotiate,accept Alternates: {"page.html" 0.900000 {type text/html}}, {"page.txt" 0.500000 {type text/plain}}, {"page.xml" 1.000000 {type text/xml}} Content-Length: 368 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html> <head> <title>300 Multiple Choices</title> </head> <body> <h1>Multiple Choices</h1> Available variants: <ul> <li> <a href="page.html">HTML variant</a>, type text/html</li> <li><a href="page.txt">Text document</a>, type text/plain</li> <li><a href="page.xml">XML variant</a>, type text/xml</li> </ul> </body> </html>
Example of LSIDs: A scientific name from UBio
SQL>SPARQL define get:soft "soft" SELECT * FROM <urn:lsid:ubio.org:namebank:11815> WHERE { ?s ?p ?o } LIMIT 5; s p o VARCHAR VARCHAR VARCHAR _______________________________________________________________________________ urn:lsid:ubio.org:namebank:11815 http://purl.org/dc/elements/1.1/title Pternistis leucoscepus urn:lsid:ubio.org:namebank:11815 http://purl.org/dc/elements/1.1/subject Pternistis leucoscepus (Gray, GR) 1867 urn:lsid:ubio.org:namebank:11815 http://purl.org/dc/elements/1.1/identifier urn:lsid:ubio.org:namebank:11815 urn:lsid:ubio.org:namebank:11815 http://purl.org/dc/elements/1.1/creator http://www.ubio.org urn:lsid:ubio.org:namebank:11815 http://purl.org/dc/elements/1.1/type Scientific Name 5 Rows. -- 741 msec.
Example of LSIDs: A segment of the human genome from GDB
SQL>SPARQL define get:soft "soft" SELECT * FROM <urn:lsid:gdb.org:GenomicSegment:GDB132938> WHERE { ?s ?p ?o } LIMIT 5; s p o VARCHAR VARCHAR VARCHAR _______________________________________________________________________________ urn:lsid:gdb.org:GenomicSegment:GDB132938 urn:lsid:gdb.org:DBObject-predicates:accessionID GDB:132938 urn:lsid:gdb.org:GenomicSegment:GDB132938 http://www.ibm.com/LSID/2004/RDF/#lsidLink urn:lsid:gdb.org:DBObject:GDB132938 urn:lsid:gdb.org:GenomicSegment:GDB132938 urn:lsid:gdb.org:DBObject-predicates:objectClass DBObject urn:lsid:gdb.org:GenomicSegment:GDB132938 urn:lsid:gdb.org:DBObject-predicates:displayName D20S95 urn:lsid:gdb.org:GenomicSegment:GDB132938 urn:lsid:gdb.org:GenomicSegment-predicates:variantsQ nodeID://1000027961 5 Rows. -- 822 msec.
Example of OAI: an institutional / departmental repository.
SQL>SPARQL define get:soft "soft" SELECT * FROM <oai:etheses.bham.ac.uk:23> WHERE { ?s ?p ?o } LIMIT 5; s p o VARCHAR VARCHAR VARCHAR _____________________________________________________________________________ oai:etheses.bham.ac.uk:23 http://purl.org/dc/elements/1.1/title A study of the role of ATM mutations in the pathogenesis of B-cell chronic lymphocytic leukaemia oai:etheses.bham.ac.uk:23 http://purl.org/dc/elements/1.1/date 2007-07 oai:etheses.bham.ac.uk:23 http://purl.org/dc/elements/1.1/subject RC0254 Neoplasms. Tumors. Oncology (including Cancer) oai:etheses.bham.ac.uk:23 http://purl.org/dc/elements/1.1/identifier Austen, Belinda (2007) A study of the role of ATM mutations in the pathogenesis of B-cell chronic lymphocytic leukaemia. Ph.D. thesis, University of Birmingham. oai:etheses.bham.ac.uk:23 http://purl.org/dc/elements/1.1/identifier http://etheses.bham.ac.uk/23/1/Austen07PhD.pdf 5 Rows. -- 461 msec.
Example of DOI
To execute queries with the DOI resolver correctly, you need the hslookup plugin loaded in the [Plugins] section of the Virtuoso INI file:
```
[Plugins]
LoadPath = ./plugin
...
Load6 = plain,hslookup
```
SQL>SPARQL define get:soft "soft" SELECT * FROM <doi:10.1045/march99-bunker> WHERE { ?s ?p ?o } ; s p o VARCHAR VARCHAR VARCHAR _______________________________________________________________________________ http://www.dlib.org/dlib/march99/bunker/03bunker.html http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.openlinksw.com/schemas/XHTML# http://www.dlib.org/dlib/march99/bunker/03bunker.html http://www.openlinksw.com/schemas/XHTML#title Collaboration as a Key to Digital Library Development: High Performance Image Management at the University of Washington 2 Rows. -- 12388 msec.
Other examples
```
SQL> SPARQL
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX doap: <http://usefulinc.com/ns/doap#>
SELECT DISTINCT ?name ?mbox ?projectName
WHERE {
    <http://dig.csail.mit.edu/2005/ajar/ajaw/data#Tabulator> doap:developer ?dev .
    ?dev foaf:name ?name .
    OPTIONAL { ?dev foaf:mbox ?mbox }
    OPTIONAL { ?dev doap:project ?proj . ?proj foaf:name ?projectName }
  };

name                 mbox     projectName
VARCHAR              VARCHAR  VARCHAR
____________________ ___________________________________________
Adam Lerer           NULL     NULL
Dan Connolly         NULL     NULL
David Li             NULL     NULL
David Sheets         NULL     NULL
James Hollenbach     NULL     NULL
Joe Presbrey         NULL     NULL
Kenny Lu             NULL     NULL
Lydia Chilton        NULL     NULL
Ruth Dhanaraj        NULL     NULL
Sonia Nijhawan       NULL     NULL
Tim Berners-Lee      NULL     NULL
Timothy Berners-Lee  NULL     NULL
Yuhsin Joyce Chen    NULL     NULL

13 Rows. -- 491 msec.
```
```
SQL> SPARQL
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?friendsname ?friendshomepage ?foafsname ?foafshomepage
WHERE {
    <http://myopenlink.net/dataspace/person/kidehen#this> foaf:knows ?friend .
    ?friend foaf:mbox_sha1sum ?mbox .
    ?friendsURI foaf:mbox_sha1sum ?mbox .
    ?friendsURI foaf:name ?friendsname .
    ?friendsURI foaf:homepage ?friendshomepage .
    OPTIONAL {
        ?friendsURI foaf:knows ?foaf .
        ?foaf foaf:name ?foafsname .
        ?foaf foaf:homepage ?foafshomepage .
      }
  }
LIMIT 10;

friendsname      friendshomepage                        foafsname       foafshomepage
ANY              ANY                                    ANY             ANY
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Dan Connolly    http://www.w3.org/People/Connolly/
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Henry J. Story  http://bblfish.net/
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Henry Story     http://bblfish.net/
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Henry J. Story  http://bblfish.net/people/henry/card
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Henry Story     http://bblfish.net/people/henry/card
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Ruth Dhanaraj   http://web.mit.edu/ruthdhan/www
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Dan Brickley    http://danbri.org/
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Dan Brickley    http://danbri.org/
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Daniel Krech    http://eikeon.com/
Tim Berners Lee  http://www.w3.org/People/Berners-Lee/  Daniel Krech    http://eikeon.com/
```
Faceted views over structured and semi-structured data have been popular in user interfaces for some years. Deploying such views over arbitrary linked data at arbitrary scale has been hampered by the lack of suitable back-end technology. Many ontologies are also quite large, with hundreds of thousands of classes.
Also, the linked data community has been concerned with the processing cost and potential for denial of service presented by public SPARQL end points.
This section discusses how we use Virtuoso Cluster Edition for providing interactive browsing over billions of triples, combining full text search, structured querying and result ranking. We discuss query planning, run-time inferencing and partial query evaluation. This functionality is exposed through SPARQL, a specialized web service and a web user interface.
The transition of the web from a distributed document repository into a universal, ubiquitous database requires a new dimension of scalability for supporting rich user interaction. If the web is the database, then it also needs a query and report writing tool to match. A faceted user interaction paradigm has been found useful for aiding discovery and query of variously structured data. Numerous implementations exist but they are chiefly client side and are limited in the data volumes they can handle.
At the present time, linked data is well beyond prototypes and proofs of concept. This means that what was previously done in limited specialty domains must now be done at real-world scale, in terms of both data volume and ontology size. On the schema, or T-box, side, there exist many comprehensive general-purpose ontologies such as Yago [1], OpenCyc [2], Umbel [3], and the DBpedia ontology [4], and many domain-specific ones, such as [5]. For these to enter into the user experience, the platform must be able to support the user's choice of terminology or terminologies as needed, preferably without blow-up of data and concomitant slowdown.
Likewise, in the LOD world, many link sets have been created for bridging between data sets. Whether such linkage is relevant will depend on the use case. Therefore we provide fine grained control over which owl:sameAs assertions will be followed, if any.
Against this background, we discuss how we tackle incremental interactive query composition on arbitrary data with Virtuoso Cluster.
Using SPARQL or a web/web service interface, the user can form combinations of text search and structured criteria, including joins to an arbitrary depth. If queries are precise and select a limited number of results, the results are complete. If queries would select tens of millions of results, partial results are shown.
The system being described is being actively developed as of this writing, early March of 2009 and is online at http://lod.openlinksw.com/. The data set is a combination of DBpedia, MusicBrainz, Freebase, UniProt, NeuroCommons, Bio2RDF, and web crawls from PingTheSemanticWeb.com.
The hardware consists of two 8-core servers with 16G RAM and 4 disks each. The system runs on Virtuoso 6 Cluster Edition. All application code is written in SQL procedures with limited client-side Ajax; the Virtuoso platform itself is written in C.
The facets service allows the user to start with a text search or a fixed URI and to refine the search by specifying classes, property values etc., on the selected subjects or any subjects referenced therefrom.
This process generates queries involving combinations of text and structured criteria, often dealing with property and class hierarchies and often involving aggregation over millions of subjects, especially at the initial stages of query composition. To make this work within interactive time, two things are needed:
It is often the case, especially at the beginning of query formulation, that the user only needs to know whether there are relatively many or few results of a given type or involving a given property. Thus partially evaluating a query is often useful for producing this information. This must, however, be possible with an arbitrary query; simply citing precomputed statistics is not enough.
It has long been a given that any search-like application ranks results by relevance. Whenever the facets service shows a list of results, rather than an aggregation of result types or properties, the list is sorted on a composite of text-match score and link density.
The section is divided into the following parts:
Virtuoso has for a long time had built-in superclass and superproperty inference. This is enabled by specifying the DEFINE input:inference "context" option, where context has previously been declared to comprise all the subclass, subproperty, equivalence, inverse functional property, and sameAs relations defined in a given graph. The ontology file is loaded into its own graph, which is then used to construct the context. Multiple ontologies and their equivalences can be loaded into a single graph, which then makes another context holding the union of the ontology information from the merged source ontologies.
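Declaring such a context is a one-line call. The sketch below assumes the Yago ontology has been loaded into the graph <http://dbpedia.org/yago.owl>, the graph that is also queried for subclass counts later in this section:

```
-- Declare the inference context 'yago' from the ontology graph.
rdfs_rule_set ('yago', 'http://dbpedia.org/yago.owl');
```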
Let us consider a sample query combining a full text search and a restriction on the class of the desired matches:
DEFINE input:inference "yago" PREFIX cy: <http://dbpedia.org/class/yago/> SELECT DISTINCT ?s1 AS ?c1 ( bif:search_excerpt ( bif:vector ( 'Shakespeare' ), ?o1 ) ) AS ?c2 WHERE { ?s1 ?s1textp ?o1 . FILTER ( bif:contains (?o1, '"Shakespeare"') ) . ?s1 a cy:Performer110415638 } LIMIT 20
This selects all Yago performers that have a property that contains "Shakespeare" as a whole word.
The DEFINE input:inference "yago" clause means that subclass, subproperty and inverse functions property statements contained in the inference context called yago are considered when evaluating the query. The built-in function bif:search_excerpt makes a search engine style summary of the found text, highlighting occurrences of Shakespeare.
The bif:contains function in the filter specifies the full text search condition on ?o1.
This query is a typical example of the queries that are executed all the time as a user refines a search. We will now look at how to make an efficient execution plan for the query. First, we must know the cardinalities of the search conditions:
To see the count of subclasses of Yago performer, we can do:
```
SPARQL
PREFIX cy: <http://dbpedia.org/class/yago/>
SELECT COUNT (*)
FROM <http://dbpedia.org/yago.owl>
WHERE { ?s rdfs:subClassOf cy:Performer110415638 OPTION (TRANSITIVE, T_DISTINCT) }
```
There are 4601 distinct subclasses, including indirect ones. Next we look at how many Shakespeare mentions there are:
```
SPARQL
SELECT COUNT (*)
WHERE { ?s ?p ?o . FILTER ( bif:contains (?o, 'Shakespeare') ) }
```
There are 10267 subjects with Shakespeare mentioned in some literal.
SPARQL DEFINE input:inference "yago" PREFIX cy: <http://dbpedia.org/class/yago/> SELECT COUNT (*) WHERE { ?s1 a cy:Performer110415638 }
There are 184885 individuals that belong to some subclass of performer.
This is the data that the SPARQL compiler must know in order to produce a valid query plan. Since these values vary wildly depending on the specific constants in the query, the actual database must be consulted as needed while preparing the execution plan. This is regular query-processing technology, but here it is specially adapted for deep subclass and subproperty structures.
Conditions in the queries are not evaluated twice, once for the cardinality estimate and once for the actual run. Instead, the cardinality estimate is produced by a rapid sampling of the index trees that reads at most one leaf page.
Consider a B-tree index, which we descend from the top to the leftmost leaf containing a match of the condition. At each level, we count how many children would match and always select the leftmost one. When we reach a leaf, we see how many entries on the page match. From these observations we extrapolate the total count of matches; roughly, the matching-child counts observed at each level are multiplied together with the count of matching leaf entries, assuming the sampled path is representative of the whole tree.
With this method, the guess for the count of performers is 114213, which is acceptably close to the real number. Given these numbers, we see that it makes sense to first find the full-text matches and then retrieve the actual classes of each match, checking whether the class is a subclass of performer. This last check is done against a memory-resident copy of the Yago hierarchy, the same copy that was used for enumerating the subclasses of performer.
However, the query
SPARQL DEFINE input:inference "yago" PREFIX cy: <http://dbpedia.org/class/yago/> SELECT DISTINCT ?s1 AS ?c1 ( bif:search_excerpt ( bif:vector ('Shakespeare'), ?o1 ) ) AS ?c2 WHERE { ?s1 ?s1textp ?o1 . FILTER ( bif:contains (?o1, '"Shakespeare"') ) . ?s1 a cy:ShakespeareanActors }
will start with Shakespearean actors, since this is a leaf class with only 74 instances, and then check whether the properties contain Shakespeare, returning their search summaries.
In principle, this is common cost-based optimization, here adapted to deep hierarchies combined with text patterns. An unmodified SQL optimizer could not arrive at these results.
The implementation reads the graphs designated as holding ontologies when first needed, and thereafter keeps a memory-based copy of the hierarchy on all servers. This is used for quick iteration over sub- and superclasses or properties, as well as for checking whether a given class or property is a subclass/subproperty of another. Triples with the OWL predicates equivalentClass, equivalentProperty, and sameAs are also cached in the same data structure if they occur in the ontology graphs.
Cardinality estimates for members of classes near the root of the class hierarchy also take some time, since a sample of each subclass is needed. These estimates are cached for some minutes in the inference context, so repeated queries do not redo the sampling.
Especially when navigating social data, as in FOAF and SIOC spaces, there are many blank nodes that are identified by their properties only. For such cases, we offer an option for automatically joining to subjects that share an IFP value with the subject being processed. For example, the query for the friends of friends of Kjetil Kjernsmo returns an empty result:
SPARQL SELECT COUNT (?f2) WHERE { ?s a foaf:Person ; ?p ?o ; foaf:knows ?f1 . ?o bif:contains "'Kjetil Kjernsmo'" . ?f1 foaf:knows ?f2 }
But with the option
SPARQL DEFINE input:inference "b3sifp" SELECT COUNT (?f2) WHERE { ?s a foaf:Person ; ?p ?o ; foaf:knows ?f1 . ?o bif:contains "'Kjetil Kjernsmo'" . ?f1 foaf:knows ?f2 }
we get 4022. Note that there are many duplicates, since the data consists of blank nodes only, with a person easily represented 10 times. The context b3sifp simply declares that foaf:name and foaf:mbox_sha1sum should be treated as inverse functional properties (IFPs). The name is not an IFP in the strict sense, but treating it as one for the purposes of this one query makes sense; otherwise nothing would be found.
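A declaration to this effect could be sketched as follows (hedged: the graph IRI is illustrative, and we infer from the b3s:any_ifp test shown below that the context marks the properties as subproperties of a common IFP marker):
SQL> SPARQL PREFIX foaf: <http://xmlns.com/foaf/0.1/> INSERT IN GRAPH <urn:b3s:ifp> { foaf:name rdfs:subPropertyOf <b3s:any_ifp> . foaf:mbox_sha1sum rdfs:subPropertyOf <b3s:any_ifp> . };
SQL> rdfs_rule_set ('b3sifp', 'urn:b3s:ifp');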
This option is controlled by the choice of the inference context, which is selectable in the interface discussed below.
The IFP inference can be thought of as a transparent addition of a subquery into the join sequence. The subquery joins each subject to its synonyms given by sharing IFPs. This subquery has the special property that it has the initial binding automatically in its result set. It could be expressed as:
SPARQL SELECT ?f WHERE { ?k foaf:name "Kjetil Kjernsmo" . { SELECT ?org ?syn WHERE { ?org ?p ?key . ?syn ?p ?key . FILTER ( bif:rdf_is_sub ( "b3sifp", ?p, <b3s:any_ifp>, 3 ) && ?syn != ?org ) } } OPTION ( TRANSITIVE , T_IN (?org), T_OUT (?syn), T_MIN (0), T_MAX (1) ) FILTER ( ?org = ?k ) . ?syn foaf:knows ?f . }
It is true that each subject shares IFP values with itself, but the transitive construct with minimum depth 0 and maximum depth 1 allows passing the initial binding of ?org directly to ?syn, thus producing first results more rapidly. The rdf_is_sub function is an internal function that simply tests whether ?p is a subproperty of b3s:any_ifp.
Internally, the implementation has a special query operator for this; the internal form is more compact than what would result from the above, but the above could be used to the same effect.
Our general position is that identity criteria are highly application-specific, and thus we offer the full spectrum of choice between run-time inference and precomputation. Furthermore, identity statements weaker than sameness are difficult to use in queries; we therefore prefer identity with the semantics of owl:sameAs, but make it an option that can be turned on and off query by query.
It is a common end user expectation to see text search results sorted by their relevance. The term entity rank refers to a quantity describing the relevance of a URI in an RDF graph.
This is a sample query using entity rank:
SPARQL PREFIX yago: <http://dbpedia.org/class/yago/> PREFIX prop: <http://dbpedia.org/property/> SELECT DISTINCT ?s2 AS ?c1 WHERE { ?s1 ?s1textp ?o1 . ?o1 bif:contains 'Shakespeare' . ?s1 a yago:Writer110794014 . ?s2 prop:writer ?s1 } ORDER BY DESC ( <LONG::IRI_RANK> (?s2) ) LIMIT 20 OFFSET 0
This selects works whose writer has "Shakespeare" in some property.
Here the query returns subjects and no text search summaries, so only the entity rank of the returned subject is used for ordering. In general, we order text results by a composite of the text hit score and the entity rank of the RDF subject where the text occurs. The entity rank of a subject is defined by the count of references to it, weighted by the ranks of the referrers and the outbound link counts of the referrers. Such techniques are commonly used in text-based information retrieval.
Example with Entity Ranking and Score
## Searching over labels, with text match
## scores and additional ranks for each
## iri / resource:
SELECT ?s ?page ?label ?textScore AS ?Text_Score_Rank ( <LONG::IRI_RANK> (?s) ) AS ?Entity_Rank WHERE { ?s foaf:page ?page ; rdfs:label ?label . FILTER ( lang ( ?label ) = "en" ) . ?label bif:contains 'adobe and flash' OPTION ( score ?textScore ) . }
One interesting application of entity rank combined with inference on IFPs and owl:sameAs is locating URIs for reuse. We can easily list synonym URIs in order of popularity, as well as locate URIs based on associated text. This can serve in applications such as the Entity Name Server.
Entity ranking is one of the few operations where we take a precomputing approach. Since a rank is calculated over a possibly long chain of references, there is little choice but to precompute. The precomputation itself is straightforward: first, all outbound references are counted for all subjects. Next, the rank of each referenced subject is incremented by 1 divided by the referrer's outbound link count. On successive iterations, the increment is based on the rank increment the referrer received in the previous round.
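Written as formulas, this is one reading of the procedure just described (our notation: r -> s ranges over the references to s, and out(r) is the outbound link count of r):
\Delta_1(s) = \sum_{r \to s} \frac{1}{\mathrm{out}(r)}, \qquad \Delta_{k+1}(s) = \sum_{r \to s} \frac{\Delta_k(r)}{\mathrm{out}(r)}, \qquad \mathrm{rank}(s) = \sum_{k \ge 1} \Delta_k(s)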
The operation is easily partitioned, since each partition increments the ranks of the subjects it holds. The referrers, though, are spread throughout the cluster, so when rank is calculated, each partition accesses every other partition. This is done with relatively long messages; referrer ranks are accessed in batches of several thousand at a time, thus absorbing network latency.
On the test system, this operation performs a single pass over the corpus of 2.2 billion triples and 356 million distinct subjects in about 30 minutes, with 100% utilization of all 16 cores. Adding hardware would speed it up, as would implementing it in C instead of the SQL procedures in which it is presently written.
The main query in rank calculation is:
SELECT O, P, iri_rank (S) FROM RDF_QUAD TABLE OPTION (NO CLUSTER) WHERE isiri_id (O) ORDER BY O
This is the SQL cursor iterated over by each partition. The NO CLUSTER option means that only rows in this process's partition are retrieved. The RDF_QUAD table holds the RDF quads in the store, i.e., triple plus graph. The S, P, and O columns are the subject, predicate, and object, respectively; the graph column is not used here. iri_rank is a partitioned SQL function: the S argument determines which cluster node runs the function, with the specifics of the partitioning declared elsewhere. The calls are batched for each intended recipient and sent when the batches are full; the SQL compiler automatically generates the relevant control structures. This is like an implicit map operation in map-reduce terminology.
An SQL procedure loops over this cursor, adding up the rank; when a new O is seen, the accumulated rank is persisted into a table. Since links in RDF are typed, we can use the semantics of the link to determine how much rank a reference transfers. With extraction of named entities from text content, we can further place a given entity into a referential context and use this as a weighting factor; this is to be explored in future work. The experience thus far shows that we benefit greatly from Virtuoso being a general-purpose DBMS, as we can create application-specific data structures and control flows where these are efficient. For example, it would make little sense to store entity ranks as triples, for reasons of space consumption and locality. With these tools, the whole ranking functionality took under a week to develop.
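The accumulation loop could be sketched in Virtuoso/PL as follows (a rough sketch of a single pass; the procedure name, target table, and persistence details are illustrative, not the actual implementation):
create procedure RANK_PASS ()
{
  declare prev_o any;
  declare acc double precision;
  prev_o := null;
  acc := 0;
  -- iterate this partition's rows in O order; iri_rank (S) fetches referrer ranks in batches
  for (select O, iri_rank (S) as r_inc
         from RDF_QUAD table option (no cluster)
        where isiri_id (O) order by O) do
    {
      if (prev_o is not null and prev_o <> O)
        {
          -- a new O begins: persist the rank accumulated for the previous subject
          insert replacing RANK_TBL (RNK_IRI, RNK_VALUE) values (prev_o, acc);
          acc := 0;
        }
      acc := acc + r_inc;
      prev_o := O;
    }
  if (prev_o is not null)
    insert replacing RANK_TBL (RNK_IRI, RNK_VALUE) values (prev_o, acc);
}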
Note: in order to use the IRI_RANK feature, you need to have the Facet (fct) VAD package installed, as the procedure is part of that VAD.
When scaling the Linked Data model, we have to take it as a given that the workload will be unexpected and that query writers will often be unskilled in databases. Insofar as possible, we wish to promote a culture of creative reuse of data. To this end, even poorly formulated questions deserve an answer that is better than just a timeout.
If a query produces a steady stream of results, interrupting it after a certain quota is simple. However, most interesting queries do not work this way: they contain aggregation, sorting, maybe transitivity.
When evaluating a query with a time limit in a cluster setup, all nodes monitor the time left for the query. Since the query is potentially partial to begin with, there is little point in transactionality; the facet service therefore uses read committed isolation. A read committed query never blocks, since it sees the before-image of any transactionally updated row. There is no waiting for locks, and timeouts can be managed locally by each server in the cluster.
Thus, with a partitioned count, for example, we expect all partitions to time out at around the same time and to send a ready message with the timeout information to the node coordinating the query. The condition raised by hitting a partial-evaluation time limit differs from a run-time error in that it leaves the query state intact on all participating nodes. This allows the timeout handling to fetch any accumulated aggregates.
Let us consider the query for the top 10 classes of things with "Shakespeare" in some literal. This is typical of the workload generated by the faceted browsing web service:
SPARQL DEFINE input:inference "yago" SELECT ?c COUNT (*) WHERE { ?s a ?c ; ?p ?o . ?o bif:contains "Shakespeare" } GROUP BY ?c ORDER BY DESC 2 LIMIT 10
On the first execution with an entirely cold cache, this times out after 2 seconds and returns:
?c                                        COUNT (*)
yago:class/yago/Entity100001740           566
yago:class/yago/PhysicalEntity100001930   452
yago:class/yago/Object100002684           452
yago:class/yago/Whole100003553            449
yago:class/yago/Organism100004475         375
yago:class/yago/LivingThing100004258      375
yago:class/yago/CausalAgent100007347      373
yago:class/yago/Person100007846           373
yago:class/yago/Abstraction100002137      150
yago:class/yago/Communicator109610660     125
The next repeat gets about double the counts, starting with 1291 entities.
With a warm cache, the query finishes in about 300 ms (4 core Xeon, Virtuoso 6 Cluster) and returns:
?c                                        COUNT (*)
yago:class/yago/Entity100001740           13329
yago:class/yago/PhysicalEntity100001930   10423
yago:class/yago/Object100002684           10408
yago:class/yago/Whole100003553            10210
yago:class/yago/LivingThing100004258      8868
yago:class/yago/Organism100004475         8868
yago:class/yago/CausalAgent100007347      8853
yago:class/yago/Person100007846           8853
yago:class/yago/Abstraction100002137      3284
yago:class/yago/Entertainer109616922      2356
Running from memory is, unsurprisingly, thousands of times faster than running from disk.
The query plan begins with the text search. The subjects with "Shakespeare" in some property get dispatched to the partition that holds their class. Since all partitions know the class hierarchy, the superclass inference runs in parallel, as does the aggregation of the group by. When all partitions have finished, the process coordinating the query fetches the partial aggregates, adds them up and sorts them by count.
If a timeout occurs, it will most likely occur while the classes of the text matches are being retrieved. When this happens, this part of the query is reset, but the aggregate states are left in place. The process coordinating the query then proceeds as if the aggregates had completed. If there are many levels of nested aggregates, each timeout terminates the innermost aggregation that is still accumulating results; thus a query is guaranteed to return in no more than n timeouts, where n is the number of nested aggregations or subqueries.
The Virtuoso Faceted Web Service is a general-purpose RDF query facility for facet-based browsing. It takes an XML description of the desired view and generates the reply as an XML tree containing the requested data. The user agent or a local web page can use XSLT to render this for the end user. The selection of facets and values is represented as an XML tree. The rationale is that such a representation is easier to process in an application than SPARQL source text or a parse tree of SPARQL, and it more compactly captures the specific subset of SPARQL needed for faceted browsing. All such queries internally generate SPARQL, and the generated SPARQL is returned with the results; one can therefore use it as a starting point for hand-crafted queries.
The description has query as its top-level element. Its child elements represent conditions pertaining to a single subject. A join is expressed with the property or propertyof element; this in turn has children stating conditions on a property of the first subject. property and propertyof elements can be nested to arbitrary depth, and many can occur inside one containing element. In this way, tree-shaped structures of joins can be expressed.
Expressing more complex relationships, such as intermediate grouping, subqueries, or arithmetic, requires writing the query in SPARQL. The XML format is meant for easy automatic composition of the queries needed for showing facets, not as a replacement for SPARQL.
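For instance, a description selecting persons who know someone matching the text "Kjetil" might nest a condition under a property element roughly like this (a hypothetical illustration assembled from the elements named above and in the examples below; the actual schema may differ in attribute details):
<query>
  <class iri="http://xmlns.com/foaf/0.1/Person" />
  <property iri="http://xmlns.com/foaf/0.1/knows">
    <text>Kjetil</text>
  </property>
</query>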
Consider composing a map of locations involved with Napoleon. Below we list the user actions and the resulting XML query descriptions.
The user types "napoleon" into the text search box; the initial view lists text matches:
<query inference="" same-as="" view3="" s-term="e" c-term="type"> <text>napoleon</text> <view type="text" limit="20" offset="" /> </query>
The user switches to a view of the classes of the matching subjects:
<query inference="" same-as="" view3="" s-term="e" c-term="type"> <text>napoleon</text> <view type="classes" limit="20" offset="0" location-prop="0" /> </query>
The user narrows the matches to the class yago:ontology/MilitaryConflict:
<query inference="" same-as="" view3="" s-term="e" c-term="type"> <text>napoleon</text> <view type="classes" limit="20" offset="0" location-prop="0" /> <class iri="yago:ontology/MilitaryConflict" /> </query>
The user further narrows the matches to the class yago:class/yago/NapoleonicWars:
<query inference="" same-as="" view3="" s-term="e" c-term="type"> <text>napoleon</text> <view type="classes" limit="20" offset="0" location-prop="0" /> <class iri="yago:ontology/MilitaryConflict" /> <class iri="yago:class/yago/NapoleonicWars" /> </query>
Finally, the user switches to the map view, accepting any property of the matches as the location:
<query inference="" same-as="" view3="" s-term="e" c-term="type"> <text>napoleon</text> <class iri="yago:ontology/MilitaryConflict" /> <class iri="yago:class/yago/NapoleonicWars" /> <view type="geo" limit="20" offset="0" location-prop="any" /> </query>
This last XML fragment corresponds to the following SPARQL query:
SPARQL SELECT ?location AS ?c1 ?lat1 AS ?c2 ?lng1 AS ?c3 WHERE { ?s1 ?s1textp ?o1 . FILTER ( bif:contains (?o1, '"Napoleon"') ) . ?s1 a <yago:ontology/MilitaryConflict> . ?s1 a <yago:class/yago/NapoleonicWars> . ?s1 ?anyloc ?location . ?location geo:lat ?lat1 ; geo:long ?lng1 } LIMIT 200 OFFSET 0
The query takes all subjects with some literal property containing "Napoleon", filters these for military conflicts and Napoleonic wars, then takes all objects related to them where the related object has a location. The map shows these objects at their locations.
A long-awaited addition to the LOD cloud is the Vocabulary of Interlinked Datasets (voiD). Virtuoso automatically generates voiD descriptions of the data sets it hosts: it provides an SQL function, rdf_void_gen, which returns a Turtle representation of a given graph's voiD statistics.
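For example (a sketch; the graph IRI is illustrative and the exact signature may vary between versions):
SQL> SELECT DB.DBA.RDF_VOID_GEN ('http://dbpedia.org');
The result is the voiD description of the graph as a Turtle string.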
The test system consists of two servers, each with two quad-core Xeon 5345 processors at 2.33 GHz, 16 GB RAM, and 4 disks. The machines are connected by two 1 Gbit Ethernet links. The software is Virtuoso 6 Cluster. The Virtuoso server is split into 16 partitions, 8 per machine; each partition is managed by a separate server process.
The test database has the following data sets:
Ontologies:
The database is 2.2 billion triples with 356 million distinct URIs.