Semantic Web Seminar

As the International Semantic Web Conference (ISWC 2016) was held in Kobe last week and researchers from abroad are visiting Japan, we have organized the following seminar at the National Institute of Informatics.

Executing SPARQL queries over Mapped Document Stores with SparqlMap-M
Jörg Unbehauen
AKSW, Leipzig University

Abstract: With the increasing adoption of NoSQL database systems
like MongoDB or CouchDB, more and more applications store
structured data according to a non-relational, document oriented
model. Exposing this structured data as Linked Data is currently
inhibited by a lack of standards as well as tools and requires the
implementation of custom solutions. While recent efforts aim at
expressing transformations of such data models into RDF in a
standardized manner, there is a lack of approaches which
facilitate SPARQL execution over mapped non-relational data
sources. With SparqlMap-M we show how dynamic SPARQL access to
non-relational data can be achieved. SparqlMap-M is an extension
to our SPARQL-to-SQL rewriter SparqlMap that performs a (partial)
transformation of SPARQL queries by using a relational abstraction
over a document store. Further, duplicate data in the document
store is used to reduce the number of joins and custom
optimizations are introduced. Our showcase scenario employs the
Berlin SPARQL Benchmark (BSBM) with different adaptations to a
document data model. We use this scenario to demonstrate the
viability of our approach and compare it to different MongoDB
setups and native SQL.
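
To make the join-reduction idea concrete, here is a hypothetical illustration (the prefixes, document shape, and mapping are invented for this sketch, not taken from the talk):

```sparql
# Assume MongoDB documents that embed orders inside each person, e.g.
#   { "_id": 1, "name": "Alice", "orders": [ { "product": "Book" } ] }
# A mapped store could answer a query such as:
PREFIX ex: <http://example.org/>
SELECT ?name ?product WHERE {
  ?person ex:name  ?name ;
          ex:order ?order .
  ?order  ex:product ?product .
}
# Because the embedded "orders" array already co-locates person and
# order data in one document, the ?person/?order join need not be
# computed across collections.
```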

Simplified RDB2RDF Mapping
Claus Stadler
AKSW, Leipzig University

Abstract: The combination of the advantages of widely used
relational databases and semantic technologies has attracted
significant research over the past decade. In particular, mapping
languages for the conversion of databases to RDF knowledge bases
have been developed and standardized in the form of R2RML. In this
article, we first review those mapping languages and then work
towards a unified formal model for them. Based on this, we
present the Sparqlification Mapping Language (SML), which provides
an intuitive way to declare mappings based on SQL views and SPARQL
construct queries. We show that SML has the same expressivity as
R2RML by enumerating the language features and showing the
correspondences, and we outline how one syntax can be converted
into the other. A user study conducted for this paper, juxtaposing
SML and R2RML, provides evidence that SML is a more compact syntax
which is easier to understand and read and thus lowers the barrier
to offer SPARQL access to relational databases.
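
A minimal sketch of what an SML mapping looks like, assuming a relational table person(id, name); the table, prefixes, and URIs are invented here, and the exact surface syntax may differ from the current SML release:

```
Prefix ex: <http://example.org/>

Create View person As
  Construct {
    ?s a ex:Person .
    ?s ex:name ?n .
  }
  With
    ?s = uri(concat("http://example.org/person/", ?id))
    ?n = plainLiteral(?name)
  From
    person
```

The view reads like a SPARQL CONSTRUCT template whose variables are bound to SQL columns, which is the source of the compactness claim compared to R2RML's Turtle-based vocabulary.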

D2RQ Mapper: Accelerating RDFization in the Life Science Domain
Yasunori Yamamoto
Database Center for Life Science

Abstract: D2RQ Mapper is a web application for editing mapping files for D2RQ,
a middleware that bridges Relational Databases (RDB) and the Resource
Description Framework (RDF). D2RQ Mapper supports exporting a mapping file in
either the D2RQ mapping language or the R2RML format. A D2RQ mapping file
defines, in the Turtle format, how data stored in an RDB is mapped to RDF, and
writing it by hand in a text editor is cumbersome. D2RQ Mapper assists you in
editing it by contextualizing input forms in the mapping language. We provide
a Docker image of D2RQ Mapper, so you can easily use it within your intranet.
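
For orientation, this is a minimal sketch of the kind of D2RQ mapping file the tool edits, assuming a hypothetical table people(id, name); the database settings and property choices are illustrative:

```turtle
@prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix map:  <#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

map:database a d2rq:Database ;
    d2rq:jdbcDSN "jdbc:mysql://localhost/example" ;
    d2rq:jdbcDriver "com.mysql.jdbc.Driver" .

# Each row of "people" becomes a foaf:Person resource.
map:People a d2rq:ClassMap ;
    d2rq:dataStorage map:database ;
    d2rq:uriPattern "person/@@people.id@@" ;
    d2rq:class foaf:Person .

# The "name" column becomes a foaf:name literal.
map:peopleName a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:People ;
    d2rq:property foaf:name ;
    d2rq:column "people.name" .
```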

Test-driven Evaluation of Linked Data Quality
Sebastian Hellmann
AKSW & KILT Competence Center, Institute for Applied Informatics (InfAI)

Abstract: Linked Open Data (LOD) comprises an unprecedented
volume of structured data on the Web. However, these datasets are
of varying quality ranging from extensively curated datasets to
crowd-sourced or extracted data of often relatively low
quality. We present a methodology for test-driven quality
assessment of Linked Data, which is inspired by test-driven
software development. We argue that vocabularies, ontologies, and
knowledge bases should be accompanied by a number of test-cases,
which help to ensure a basic level of quality. We present a
methodology for assessing the quality of linked data resources,
based on a formalization of bad smells and data quality
problems. Our formalization employs SPARQL query templates, which
are instantiated into concrete quality test queries. Based on an
extensive survey, we compile a comprehensive library of data
quality test patterns. We perform automatic test instantiation
based on schema constraints or semi-automatically enriched
schemata and allow the user to generate specific test
instantiations that are applicable to a schema or dataset. We
provide an extensive evaluation of five LOD datasets, manual test
instantiation for five schemas and automatic test instantiations
for all available schemata registered with LOV. One of the main
advantages of our approach is that domain-specific semantics can
be encoded in the data quality test cases, making it possible to
discover data quality problems beyond conventional quality
heuristics.
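
The pattern-and-instantiation idea can be sketched as follows; the placeholder syntax, prefix, property, and bound are invented for this example:

```sparql
# Pattern: values of a property must not exceed some bound.
# %%P1%% and %%V1%% are placeholders filled in at instantiation time.
SELECT ?s WHERE {
  ?s %%P1%% ?value .
  FILTER ( ?value > %%V1%% )
}

# A concrete instantiation for a hypothetical schema constraint,
# e.g. a person's height in centimetres:
PREFIX ex: <http://example.org/>
SELECT ?s WHERE {
  ?s ex:height ?value .
  FILTER ( ?value > 300 )
}
# Any bindings returned are test failures, i.e. suspicious resources.
```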


RDF Editing on the Web with REX
Claus Stadler
AKSW, Leipzig University

Abstract: While several tools for simplifying the task of
visualizing (SPARQL accessible) RDF data on the Web are available
today, there is a lack of corresponding tools for exploiting
standard HTML forms directly for RDF editing. The few related
existing systems roughly fall in the categories of (a)
applications that are not aimed at being reused as components, (b)
form generators, which automatically create forms from a given
schema -- possibly derived from instance data -- or (c) form
template processors which create forms from a manually created
specification. Furthermore, these systems usually come with their
own widget library, which can only be extended by wrapping
existing widgets. In this paper, we present the AngularJS-based
Rdf Edit eXtension (REX) system, which facilitates the enhancement
of standard HTML forms as well as many existing AngularJS widgets
with RDF editing support by means of a set of HTML attributes. We
demonstrate our system through the realization of several usage scenarios.
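
As a rough sketch of the approach, RDF editing support is attached to an ordinary HTML input via extra attributes; the attribute names and URIs below are invented for illustration and may not match REX's actual vocabulary:

```html
<!-- Hypothetical sketch: a plain text input enhanced to edit the
     foaf:name triple of a given subject resource. -->
<form>
  <input type="text"
         rex-subject="http://example.org/person/1"
         rex-predicate="http://xmlns.com/foaf/0.1/name" />
</form>
```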

Introduction to the NLP Interchange Format (NIF)
Sebastian Hellmann
AKSW & KILT Competence Center, Institute for Applied Informatics (InfAI)

Abstract: The Natural Language Processing Interchange Format (NIF)
is an RDF/OWL-based format that aims to achieve interoperability
between Natural Language Processing (NLP) tools, language
resources and annotations. The introduction will show its usage in
the FREME project: http://www.freme-project.eu/
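
As an illustration of the format, a NIF annotation linking a substring of a text to a DBpedia resource might look like the following; the document URI and example sentence are invented, while the vocabulary terms are taken from NIF Core and ITS 2.0:

```turtle
@prefix nif:    <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#> .
@prefix itsrdf: <http://www.w3.org/2005/11/its/rdf#> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .

# The whole text is a context; offsets are encoded in the URI.
<http://example.org/doc#char=0,16>
    a nif:Context, nif:String ;
    nif:isString "Leipzig is nice." .

# The substring "Leipzig" (characters 0-7) is annotated with an
# entity link to the DBpedia resource.
<http://example.org/doc#char=0,7>
    a nif:String ;
    nif:referenceContext <http://example.org/doc#char=0,16> ;
    nif:anchorOf "Leipzig" ;
    nif:beginIndex "0"^^xsd:nonNegativeInteger ;
    nif:endIndex "7"^^xsd:nonNegativeInteger ;
    itsrdf:taIdentRef <http://dbpedia.org/resource/Leipzig> .
```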

Challenges for DBpedia
Sebastian Hellmann
AKSW & KILT Competence Center, Institute for Applied Informatics (InfAI)

Abstract: The talk will summarize the current state of DBpedia and
then go into the technical challenges that DBpedia and its
community are facing in the future. This includes: Extraction of
article text and NLP, Extending DBpedia with further facts,
Wikidata, Identifier management, Metadata problems, and
Contributing links to DBpedia.

Application examples of DBpedia Japanese
Fumihiro Kato
National Institute of Informatics

Abstract: DBpedia Japanese has been widely used in the Japanese
Linked Open Data community since 2012. This talk will introduce
applications using DBpedia Japanese.
Thu Oct 27, 2016
2:30 PM - 5:20 PM JST
Venue Address
National Institute of Informatics, Rooms 1208-1210, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan