A fable of an Aggregator

June 13, 2020

Many parallel data computing tasks can be solved with one abstract data type (ADT). We will describe how an Aggregator does that by walking through a problem we want to solve with parallelism and uncovering the ideal properties of an ADT that enable us to do so. Relevance of Aggregations: The Desirable ADT. In the world of analytics and machine learning, data processing makes up a significant chunk of the plumbing required to do both. ... Read more


March 15, 2020

We are in the middle of a viral outbreak. There are many things that we're learning as we go along about this new coronavirus. In the meantime, health organizations from around the world are scrambling to learn as much as they can from the known cases. In Stockholm, where I live, the number of cases has been growing over the past two weeks. During this time, I have found many tools and dashboards that have been created to track the number of total infections. ... Read more

Understanding Data

April 3, 2018

Rich Metadata on Data. I have been learning many things about dealing with data at a large scale. At first, I kept using the term quality to describe the state of data. However, it quickly became clear that the term had various dimensions to it, and it could not summarise the issues one can observe. I have come to use the expression understanding data instead because (1) it captures the state I wish to describe and (2) it speaks to the scientific and functional purposes of that state. ... Read more

YelpSí: Visualizing Yelpers' Daily Activity

January 29, 2016

Find out which places Yelpers check in to most with YelpSí, a visualization tool that lets you explore Yelpers' past daily check-in activity across different cities in the US and Europe. It was built with Shiny, using the Yelp Dataset Challenge academic dataset.

Indexing UIMA Annotated Docs, with Solr

January 7, 2016

In this post, I'm going to walk you through the process of indexing UIMA annotated documents with Solr. In a previous post, Finding Movie Stars, I demonstrated how we can use UIMA to find and tag structured information in unstructured data. In most scenarios, once we have extracted that data, we want to be able to query it. To do this, we can put our data into a data store, be it an RDBMS, document store, graph database, or other. ... Read more

Finding Movie Stars: Named Entity Recognition with UIMA & OpenNLP

November 26, 2015

In this post, we are going to use the text analysis tools UIMA and OpenNLP to identify film personas, like directors and screenwriters, in a corpus of movie reviews. Warning: Working knowledge of Java is necessary for completing this guide. Estimated required time: ~60-90 minutes. Overview of Natural Language Processing. Since the 70s, experts and businesses have realised the potential that exists in gathering, storing, and processing information about their operations to add and create new value for their organisation and their customers. ... Read more

Connecting Data: multi-domain graphs

September 9, 2015

In a post where I talked about modelling data in graphs, I stated that graphs can be used to model different domains, and discover connections. In this post, we'll do an exercise in using multi-domain graphs to find relationships across them. An institution can have data from multiple sources, in different formats, with different structure and information. Some of that data, though, can be related. For example, we can have public data on civil marriages, employment records, and land/property ownership titles. ... Read more

12 Steps for a Performant Graph, with Neo4j

July 16, 2015

In recent posts, I wrote about data stores, specifically about choosing the right one for the right problem domain. I also wrote about modelling data in graphs, with Neo4j. In the last post, on modelling graphs, I promised to discuss how we can get good performance in Neo4j. I will be addressing that in this post, by presenting 12 steps you can follow to attain high performance when using Neo4j, especially in a large data volume setting. ... Read more

Modelling Graphs, with Neo4j

June 8, 2015

In an earlier post, I described a non-exhaustive taxonomy of data store types, as well as the types of problem domains each one was best suited for. In this post, I will address some approaches to modelling data in graph data stores, particularly with Neo4j. Graph data stores have been increasingly adopted over the past couple of years in several business domains, ranging from logistics to bio-informatics. Their power lies in their ability to model complex networks and tree structures, with data points ranging from hundreds to millions of nodes and edges. ... Read more