The web of data: how are we doing so far?
Elena Simperl
King’s College London
@esimperl
THE WEB CONFERENCE, APRIL 2021
The web has shaped our understanding of, and interactions with, data
• Answering factual questions
• Sharing data online
• Publishing data for others to use
• Creating datasets in collaboration
• Creating digital traces
• Labelling data for algorithms to use
(Source: Fensel, 2013)
The theory and practice of the web of data are different
We are living through a crucial moment in how data is published and used on the web
(Source: Hitzler, 2021)
European Data Portal
Technology, resources and support to increase the value of European open government data
Highlights of our work
Supporting the entire data value chain from publishing to reuse
Low uptake of linked data; limited vocabulary reuse; proprietary, non-dereferenceable vocabularies; reasonable metadata quality
Content metadata published as linked data; joint data model; data sharing framework; Europeana identifiers
25 million datasets (DCAT, schema.org) in summer 2020
There is a lot of annotated data online, especially about products, people and businesses
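Annotations of this kind typically use schema.org’s Dataset type embedded as JSON-LD, which is what dataset search engines crawl. A minimal sketch in Python; every value (dataset name, URL, licence) is an invented placeholder, purely for illustration:

```python
import json

# Minimal schema.org/Dataset annotation of the kind dataset search engines
# crawl; all values below are invented placeholders, not a real dataset.
dataset_annotation = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example air quality measurements",
    "description": "Hourly readings published as open data (illustrative).",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/air-quality.csv",  # placeholder
    }],
}

# Serialise to JSON-LD, ready to embed in a script tag of type
# "application/ld+json" on the dataset's landing page.
jsonld = json.dumps(dataset_annotation, indent=2)
print(jsonld)
```

The same metadata can equally be expressed in DCAT; schema.org markup is what makes the dataset visible to general-purpose web crawlers.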
Making portals more user-centric
Walker & Simperl, 2017
The ten guidelines
Organise for use of the datasets - rather than simply for publication
Promote use through data storytelling and community building, borrowing from open-source communities and other peer-production systems
Invest in discoverability best practices, borrowing from e-commerce and web search
Publish good quality metadata - to enhance reuse
Adopt standards to ensure interoperability
Co-locate tools so that a wider range of users can engage with the data
Link datasets to enhance value
Be accessible by offering options ranging from APIs to CSV downloads
Co-locate documentation - users should not need to be domain experts to understand the data
Be measurable - as a way to assess how well they are meeting users’ needs.
Operationalising the guidance
Literature review to develop five-star schemes that operationalise the indicators.
Application of the schemes to 10 open data portals at different maturity levels.
(Walker & Simperl, 2017)
Example: Organise for use
Each dataset is accompanied by a comprehensive descriptive record (going beyond a collection of structured metadata)
An extract of the data can be previewed (for sense making)
The portal provides recommendations for related datasets
The portal enables users to review/rate the datasets
Keywords from datasets are linked to other published datasets
Example: Promote for use
The portal is connected with social media to create a social distribution channel for open data.
The portal provides users with online support for feedback, to request/suggest the publication of new datasets, and when problems arise during use (e.g. contact form, discussion forum, FAQs, helpdesk, search tips, tutorials, demos).
The portal provides a way for users to keep informed of updates to the data (e.g. news feed).
Datasets are accompanied by links or resources that provide user guidance and support.
Examples of reuse (fictitious or real) are provided (e.g. information contributed by other users, last reuse, best reuse, data stories).
Example: Co-locate documentation
Supporting documentation does not exist.
Supporting documentation exists, but as a document found separately from the data.
Supporting documentation is found at the same time as the data (e.g. the link to the document is next to the link to the data in the search).
Supporting documentation can be immediately accessed from within the dataset, but it is not context sensitive (e.g. a link to the documentation or text contained within the dataset).
Supporting documentation can be immediately accessed from within the dataset and it is context sensitive, so that users can immediately access information about a specific item of concern (e.g. a link to a specific point in the documentation or the text contained within the dataset).
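The five statements above form an ordinal, five-star scale. A hypothetical sketch of how a portal audit might score a dataset page against it; the boolean flag names are invented, not part of the original scheme:

```python
def doc_colocation_level(has_docs: bool, found_with_data: bool,
                         in_dataset: bool, context_sensitive: bool) -> int:
    """Score a dataset page against the five 'co-locate documentation'
    levels (0 = no documentation, 4 = context-sensitive and in-dataset).
    The flags are hypothetical audit observations, highest level first."""
    if not has_docs:
        return 0
    if in_dataset and context_sensitive:
        return 4
    if in_dataset:
        return 3
    if found_with_data:
        return 2
    return 1  # documentation exists, but separately from the data

# Example: documentation linked next to the data in search results.
print(doc_colocation_level(True, True, False, False))  # → 2
```

Applying such a scorer across many portals is what makes maturity levels comparable.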
Varying open data maturity levels
‘Be discoverable’, ‘co-locate documentation’ and ‘be measurable’ are universally challenging
A lot of guidance available already
Is there any evidence that it works?
Be measurable: GitHub as a data platform
~1.4 million datasets (e.g. CSV, Excel) from ~65K repos
Map features from the literature to both dataset and repository features
Use engagement metrics as proxies for data reuse
Train a predictive model to see which publishing guidance leads to higher engagement values
Features considered: size, attributes, age, quality, documentation, reviews
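The pipeline above can be sketched in miniature: map each repository to a feature vector and fit a small predictive model against an engagement proxy. Everything below is synthetic and illustrative; the feature names, the engagement threshold and the model are invented stand-ins, not the study’s actual setup:

```python
import math
import random

random.seed(0)

def features(repo):
    # Invented feature names standing in for size, age and documentation.
    return [repo["size_kb"] / 1000.0,
            repo["age_days"] / 365.0,
            1.0 if repo["has_readme"] else 0.0]

# Synthetic data: repos with a README are far more likely to be engaged with.
repos = [{"size_kb": random.randint(1, 5000),
          "age_days": random.randint(30, 2000),
          "has_readme": random.random() < 0.5} for _ in range(200)]
labels = [1 if r["has_readme"] and random.random() < 0.9 else 0 for r in repos]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic (log) loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(100):
    for x, y in zip(map(features, repos), labels):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log loss w.r.t. the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# On this synthetic data the README weight (w[2]) should come out positive,
# mirroring the deck's point that co-located documentation is associated
# with higher engagement.
print(w)
```

Inspecting which feature weights are positive is the step the slide describes as seeing which publishing guidance leads to higher engagement values.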
Recommendations for publishers
Co-locate documentation:
◦ Informative, short text about the dataset
◦ Comprehensive README file in a structured form, with links to further information
Co-locate tools:
◦ Standard, processable file sizes for dataset distributions
◦ Openable with a standard configuration of a common library (such as pandas)
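“Openable with a standard configuration” can be checked mechanically. A self-contained sketch using Python’s stdlib csv module (pandas, which the slide names, is the same idea with `pandas.read_csv` and default arguments); the sample data is invented:

```python
import csv
import io

# Invented sample distribution; a real check would open the downloaded file.
sample = "city,pm25\nLondon,12.3\nParis,15.1\n"

# Parse with an entirely default configuration: comma delimiter, no custom
# quoting or encoding options. If parsing succeeds and every row has the
# same width, the file clears the "standard configuration" bar.
rows = list(csv.reader(io.StringIO(sample)))
header, data = rows[0], rows[1:]
is_rectangular = len({len(r) for r in rows}) == 1

print(header, len(data), is_rectangular)  # → ['city', 'pm25'] 2 True
```

Files that need bespoke delimiters, encodings or header handling fail this check and, by the recommendation above, cost their publishers reuse.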
Can people find the data they need?
Analysis of logs and data requests (2018)
• Four national open government data portals, 2.2 million queries (2013-2016), 1,500 data requests.
• Short queries that include temporal and location information.
• Explorative search.
• Native and external queries are topically different.
• Data requests offer more context about user intent.
Analysis of logs (2020-21)
844k sessions from 04/2018 to 06/2020, covering web search as well as native search sessions on the European Data Portal
Users search by location, provenance, format, licence, time frame and date, publishing date, location of publication and data schema
Mostly web search; web search and native search users have different information needs and different success rates
The dataset preview page is important in web search
Linking to stories and other content helps with traffic
Recommendations for publishers
• Two types of users
• Spatial and temporal queries
• Result presentation
• Quality reviews
• Data stories
• More logs needed!
Data documentation and sensemaking practices
(Source: Gregory et al., 2020)
Data work is teamwork
Open approaches and standards work best when solving actual problems. These problems are rarely about a set of technologies.
Conclusions
We are at a crucial moment in data availability and use, online and elsewhere
There is an increasing body of evidence about people’s data needs and about how data is published on the web
We don’t have links, and we don’t always have great business cases for creating and maintaining them on the open, decentralised web. In fact, we need better models to resource data publishing altogether
There are other data modalities, e.g. charts, which web technologies can help share responsibly
Metadata vocabularies are used where there is a clear business case
More documentation is needed to make data useful for others
Some data is missing, with serious consequences
Charts as an alternative to ‘raw’ data. Where are the links to the data?
Thank you
Talking Datasets: understanding data sensemaking behaviours. L Koesten, K Gregory, P Groth, E Simperl.
International Journal of Human-Computer Studies. 146:102562. 2021
Everything You Always Wanted to Know about a Dataset: Studies in Data Summarisation. L Koesten, E Simperl,
E Kacprzak, T Blount, J Tennison. International Journal of Human-Computer Studies. 2019
Collaborative Practices with Structured Data: Do Tools Support what Users Need? L Koesten, E Kacprzak, E
Simperl, J Tennison; ACM CHI Conference on Human Factors in Computing Systems, CHI 2019.
Dataset search: a survey. A Chapman, E Simperl, L Koesten, G Konstantinidis, LD Ibáñez, E Kacprzak, P Groth.
The International Journal on Very Large Data Bases, 2019.
Characterising dataset search — An analysis of search logs and data requests. E Kacprzak, L Koesten, LD
Ibáñez, T Blount, J Tennison, E Simperl; Journal of Web Semantics, 2018
Making sense of numerical data: semantic labelling of web tables. E Kacprzak, JM Giménez-García, A Piscopo, L Koesten, LD Ibáñez, J Tennison, E Simperl. European Knowledge Acquisition Workshop (pp. 163-178). Springer, 2018
The Trials and Tribulations of Working with Structured Data - a Study on Information Seeking Behaviour. L
Koesten, E Kacprzak, J Tennison, E Simperl. Proceedings of ACM CHI Conference on Human Factors in
Computing Systems, CHI 2017
Dataset Reuse: Toward Translating Principles to Practice. L Koesten, P Vougiouklis, E Simperl, P Groth - Patterns,
2020
Characterising Dataset Search on the European Data Portal. L Ibáñez, L Koesten, E Kacprzak, E Simperl. European Data Portal Analytical Report 18, 2020
Understanding Supply and Demand on the European Data Portal. L Ibáñez, E Simperl. European Data Portal
Analytical Report 19, 2020
The Future of Open Data Portals. J Walker, E Simperl. European Data Portal Analytical Report 8, 2017
Smart Rural: The Open Data Gap. J Walker, G Thuermer, E Simperl, L Carr. Data for Policy, 2020
