Knowledge bases (KBs) today contain millions of entities and facts. In some knowledge bases, the correctness of these facts has been evaluated. However, much less is known about their completeness, i.e., the proportion of real facts that the KBs cover. In this work, we investigate different signals to identify the areas where the knowledge base is complete. We show that we can combine these signals in a rule mining approach, which allows us to predict where the knowledge base is complete and where facts may be missing. We also show that completeness predictions can help other applications such as fact inference.

Completeness

In this work we study completeness in KBs for queries of the form:

SELECT ?object WHERE {<entity> <relation> ?object}

That is, we are interested in knowing, for a given entity and a given relation, whether the KB knows all the object values that hold in reality. We conducted our study on 10 relations from YAGO3 and 11 from Wikidata.

Completeness oracles

We can see a completeness oracle as a black box that, given an entity-relation pair from a KB, returns either "complete" or "unknown", depending on whether the oracle thinks the KB knows all the object values of the entity-relation pair. Formally, a completeness oracle is a binary relation defined from entities to relations in a KB. This oracle relation consists of all entity-relation pairs in the KB that are presumably complete. The golden oracle contains all the entity-relation pairs that are actually complete in the KB. We define the precision and recall of a completeness oracle on a relation with respect to the golden oracle.
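
As an illustration, precision and recall can be computed directly from two sets of entity-relation pairs. The following Python sketch uses our own hypothetical representation and is not part of the released material:

  # `predicted` is the set of entity-relation pairs the oracle labels "complete";
  # `gold` is the set of pairs that are actually complete (the golden oracle).

  def precision_recall(predicted, gold):
      """Precision: fraction of pairs labeled complete that are truly complete.
      Recall: fraction of truly complete pairs that the oracle labels complete."""
      tp = len(predicted & gold)
      precision = tp / len(predicted) if predicted else 0.0
      recall = tp / len(gold) if gold else 0.0
      return precision, recall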

Trivial completeness oracles

  • Closed World Assumption (CWA). Every entity-relation pair in the KB is trivially complete.
  • Partial Completeness Assumption (PCA). Entity-relation pairs with at least one object value are complete.
  • Cardinality (cardk). Entity-relation pairs with at least k object values are complete; card0 is the CWA, card1 is the PCA (see the sketch after this list).
  • Popularity. Popular entities (e.g., famous people) are complete.
  • No change. Entity-relation pairs that did not change with respect to an older version of the KB are complete.
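
The cardk family is easy to state in code. A minimal sketch, assuming a toy representation of the KB as a dictionary from entity-relation pairs to sets of object values (the names are ours, not part of the released data):

  # kb: dict mapping (entity, relation) -> set of object values (assumed format).
  kb = {("Barack_Obama", "hasChild"): {"Malia", "Sasha"}}

  def card_oracle(kb, k):
      """card_k: labels an entity-relation pair "complete" iff the KB
      already stores at least k object values for it."""
      return lambda entity, relation: len(kb.get((entity, relation), set())) >= k

  cwa = card_oracle(kb, 0)   # card_0: every pair is complete (CWA)
  pca = card_oracle(kb, 1)   # card_1: complete once one value is known (PCA)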

Oracles have an applicability set, i.e., the set of entity-relation pairs for which the oracle says "complete". For example, the applicability set of the PCA is the set of entity-relation pairs with at least one object value. To evaluate the performance of the oracles, we took a sample of the applicability set of each oracle for each relation. We then determined whether each entity-relation pair was complete or not in reality (that is, whether it is in the golden oracle). This procedure led to a set of completeness and incompleteness assertions for both YAGO and Wikidata. We provide them in TSV format, with the columns in the following order:

round-id, entity, completeness assertion, KB relation, object value #1, object value #2...

Here round-id is an integer in [0, 4]. Completeness assertion is either isComplete or isIncomplete, depending on whether the entity-relation pair was found to be complete (no more object values hold in reality) or incomplete (we found object values not present in the KB).
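
For illustration, a few lines of Python suffice to read these files; the file name below is a placeholder for one of the downloads:

  import csv

  # "assertions.tsv" stands in for one of the released assertion files.
  with open("assertions.tsv", newline="") as f:
      for row in csv.reader(f, delimiter="\t"):
          round_id, entity, assertion, relation = row[:4]
          objects = row[4:]                        # zero or more object values
          is_complete = (assertion == "isComplete")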

Data

  • Annotations for the cardk oracles (randomly sampled): YAGO, Wikidata.
  • Biased sample for the cardk oracles: YAGO, Wikidata (due to sparsity of some relations, we had to build a biased sample in order to have enough entities with object values).

Other completeness oracles

The class of an entity can give us hints about its completeness with respect to a relation. For example, YAGO defines the class of living people. For an entity, belonging to this class is a clear signal of completeness for the relation <diedIn>. We can therefore frame this notion as a being-in-the-class-Living-People completeness oracle, which has a certain precision and recall for a given relation. Since there are as many oracles of this type as there are classes in the KB (more than 200K in YAGO), we resort to rule mining to find the class oracles that are pertinent to a relation.

The same principle applies to having values for another relation. We found that movies for which the producer is known are normally complete in the relation director. More complex patterns can be conceived; e.g., knowing both the producer and the editor of a movie could be an even better signal. We also use rule mining to figure out which other relations can serve as signals of completeness. We call these star pattern oracles, as they resemble star-shaped queries, e.g., producer(x, z) ^ editor(x, z').
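
Both kinds of oracles boil down to simple lookups over the KB. A minimal Python sketch, with a toy KB and names of our own choosing:

  # Toy KB fragments (illustrative only).
  classes = {"Angela_Merkel": {"LivingPeople"}}
  facts = {("Casablanca", "producer"): {"Hal_B._Wallis"}}

  def class_oracle(kb_classes, cls):
      """Class oracle: labels an entity "complete" for the target relation
      (e.g. <diedIn>) iff the entity belongs to class `cls`."""
      return lambda entity: cls in kb_classes.get(entity, set())

  def star_oracle(kb_facts, signal_relations):
      """Star pattern oracle: labels an entity "complete" iff it has at least
      one object value for every relation in `signal_relations`."""
      return lambda entity: all(kb_facts.get((entity, r)) for r in signal_relations)

  died_in_complete = class_oracle(classes, "LivingPeople")        # for <diedIn>
  director_complete = star_oracle(facts, ["producer", "editor"])  # for director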

Learning completeness

We trained the AMIE rule mining system to learn completeness rules that combine the completeness oracles we introduced. For this purpose we used the completeness assertions in our samples. Examples of rules are:

  • #x.hasParent > 1 => isComplete(x, hasParent)
  • date_of_death(x, y) ^ #x.place_of_death < 1 => isIncomplete(x, place_of_death)

The system was trained on 80% of the completeness assertions (marked with round-ids 0 to 3) and tested on the remaining 20% (round-id 4). AMIE found more than 13K rules on YAGO and more than 1.6K on Wikidata. The quality of these rules is quantified by two metrics: the support is the number of entities for which the rule holds, that is, the absolute number of correct predictions; the confidence is the ratio of correct predictions (the support) to the total number of predictions the rule makes. Since our training set contains both completeness and incompleteness assertions, the learning system has explicit counter-examples.
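
To make the metrics concrete, the following sketch computes support and confidence for a rule of the cardk form over labeled assertions; the data structures are our own toy representation, not AMIE's input format:

  # Assertions as labeled entity-relation pairs: True for isComplete,
  # False for isIncomplete (toy data).
  assertions = [("e1", "hasParent", True), ("e2", "hasParent", False)]
  kb = {("e1", "hasParent"): {"p1", "p2"}, ("e2", "hasParent"): {"p1"}}

  def rule_quality(assertions, kb, relation, k):
      """Support and confidence of the rule  #x.<relation> >= k => isComplete(x, <relation>).
      Support: number of correct predictions; confidence: correct / all predictions."""
      fires = [(e, label) for e, r, label in assertions
               if r == relation and len(kb.get((e, r), set())) >= k]
      support = sum(1 for _, label in fires if label)
      confidence = support / len(fires) if fires else 0.0
      return support, confidence

  print(rule_quality(assertions, kb, "hasParent", 2))   # (1, 1.0)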

We provide those rules for download: YAGO rules, Wikidata rules.

Using completeness

Completeness assertions can be used as counter-examples for rule mining or data inference. If an inference system predicts a new object value for an entity-relation pair known to be complete, we can directly discard the prediction. To demonstrate this, we conducted an experiment where we mined logical rules on YAGO3 and used them to make predictions of the form r(x, y). We then used the AMIE oracle to predict completeness for the subjects and relations of the predictions (entity-relation pairs x, r) and discarded predictions on complete entity-relation pairs.
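
The pruning step itself is simple; a sketch, assuming a completeness oracle with the (hypothetical) interface complete(subject, relation):

  # predictions: (subject, relation, object) triples produced by rule mining.
  # complete(subject, relation) -> bool: the completeness oracle.

  def prune(predictions, complete):
      """Discards predictions whose subject-relation pair the oracle deems
      complete: if the KB already knows all objects, new values must be wrong."""
      return [(s, r, o) for (s, r, o) in predictions if not complete(s, r)]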

We evaluated the precision of the inference approach before and after using the completeness oracle for pruning. For the evaluation, we extracted a sample of 1219 predictions from the 1.05M predictions made by AMIE. The predictions in the sample were sent to crowd workers, who determined their correctness. In 39 cases the workers could not find an answer and said unknown. We removed those cases, leaving 1180 predictions. Then we used the AMIE completeness oracle to predict completeness and pruned those predictions that contradicted the oracle. The oracle pruned 783 predictions. Of those, 121 were wrongly pruned, i.e., the prediction was actually correct but the oracle said there were no more values for the subject and the relation. We observed that all the surviving predictions were indeed correct; that is, the oracle caught every incorrect prediction, achieving perfect recall on the sample. We provide the relevant data of this experiment:

  • Rules mined by AMIE on YAGO3, sorted by PCA confidence.
  • Sample of predictions. This TSV file contains a sample of size 1180 with the predictions mined by AMIE, sorted by confidence. The columns, in this order, are:
    Correct according to the crowd workers, AMIE's confidence, Subject, Relation, Object, Survived pruning (1=keep the prediction, 0=the completeness oracle pruned it)
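
The precision before and after pruning can be recomputed from this file. A sketch, with a placeholder file name and assuming the crowd label is encoded as 1 for a correct prediction (check the actual encoding in the file):

  import csv

  # "sample.tsv" stands in for the predictions sample described above.
  with open("sample.tsv", newline="") as f:
      rows = list(csv.reader(f, delimiter="\t"))

  correct = [row[0] == "1" for row in rows]    # column 0: crowd label
  survived = [row[5] == "1" for row in rows]   # column 5: 1=kept, 0=pruned

  precision_before = sum(correct) / len(rows)
  kept = [c for c, s in zip(correct, survived) if s]
  precision_after = sum(kept) / len(kept)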

Additional resources