
ComDensE : Combined Dense Embedding of Relation-aware and Common Features for Knowledge Graph Completion

 

Reason for Selecting This Paper

To read domestic Korean research on knowledge graphs.

Abstract

Real-world knowledge graphs (KG) are mostly incomplete. The problem of recovering missing relations, called KG completion, has recently become an active research area. Knowledge graph (KG) embedding, a low-dimensional representation of entities and relations, is the crucial technique for KG completion. Convolutional neural networks in models such as ConvE, SACN, InteractE, and RGCN have achieved recent successes. This paper takes a different architectural view and proposes ComDensE, which combines relation-aware and common features using dense neural networks. In the relation-aware feature extraction, we attempt to create a relational inductive bias by applying an encoding function specific to each relation. In the common feature extraction, we apply a common encoding function to all input embeddings. These encoding functions are implemented using dense layers in ComDensE. ComDensE achieves state-of-the-art performance in link prediction in terms of MRR and HIT@1 on FB15k-237, and HIT@1 on WN18RR, compared to previous baseline approaches. We conduct an extensive ablation study to examine the effects of the relation-aware layer and the common layer of ComDensE. Experimental results illustrate that the combined dense architecture as implemented in ComDensE achieves the best performance.

 

 

 

Link prediction is performed by applying KG embeddings in Euclidean and non-Euclidean spaces.

 

This paper questions whether convolution operations are actually appropriate for KG embedding.

 

Instead, it uses dense (fully connected) networks that combine relation-aware and common features.

 

Combined relation-aware and common features

A KG contains many different relation types.

Therefore, much as a CNN uses multiple filters, ComDensE applies a separate relation-aware encoding function to each relation to create a relational inductive bias, alongside a common encoding shared by all relations.

Combining the two layers shows a synergistic effect; a rough sketch of the combination follows.
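
A rough sketch of the combined feature extraction, in notation assumed here for illustration rather than copied from the paper: given a subject embedding e_s and a relation embedding e_r,

```latex
% Notation assumed for illustration: x is the concatenated input,
% W_r the relation-aware weight for relation r, W_c the shared weight,
% and f a nonlinearity such as ReLU.
\mathbf{x} = [\mathbf{e}_s \,\Vert\, \mathbf{e}_r], \qquad
\mathbf{h} = \big[\, f(W_r \mathbf{x}) \,\Vert\, f(W_c \mathbf{x}) \,\big]
```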

 

Dense vs Convolution.

Unlike images, the input to KG embedding models is low-dimensional.

Therefore, dense networks for KG embedding remain feasible in terms of parameter count, as the back-of-the-envelope comparison below suggests.
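
A quick comparison makes this concrete. The dimensions below are hypothetical, chosen only for illustration, not the paper's actual settings:

```python
# Rough parameter-count comparison for one encoding layer
# (dimensions are hypothetical, chosen only for illustration).
d = 200                      # embedding dimension
in_dim = 2 * d               # concatenated [subject; relation] input

# Dense (fully connected) layer mapping the 2d input to a width-n output:
n = 200
dense_params = in_dim * n + n             # weights + biases
print(f"dense layer: {dense_params:,} parameters")   # 80,200

# ConvE-style 2D convolution: 32 filters of size 3x3 over the reshaped input.
filters, k = 32, 3
conv_params = filters * (k * k) + filters  # weights + biases
print(f"conv layer:  {conv_params:,} parameters")    # 320
```

A dense layer costs far more parameters than a small convolution, but because the input is only a pair of low-dimensional embeddings rather than an image, the total remains small enough to experiment with freely.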

 

Width vs Depth.

Optimizing the network width improves performance.

Increasing the depth, by contrast, degrades performance in these dense networks.

 

 

This paper introduces ComDensE, a KG embedding architecture.

It is a KG embedding that combines dense layers performing relation-aware and common feature extraction.

 

Translational Distance Models

Embeddings for knowledge graph link prediction.

Entities and relations are embedded as low-dimensional vectors.
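
For example, TransE, the representative translational distance model, embeds entities and relations in the same vector space and scores a triple (h, r, t) by how closely the relation translates the head to the tail:

```latex
f(h, r, t) = -\,\lVert \mathbf{e}_h + \mathbf{e}_r - \mathbf{e}_t \rVert_{1/2}
```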

 

 

Common Feature Extraction Layer

Relation-aware Feature Extraction Layer

Projection to Embedding Space

Loss Function
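
A minimal PyTorch sketch of how these four components might fit together. This is a reading aid under assumptions (ConvE-style 1-N scoring with a sigmoid and binary cross-entropy loss); the layer names and sizes are mine, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComDensESketch(nn.Module):
    """Minimal sketch of the ComDensE idea; not the authors' code."""

    def __init__(self, num_entities, num_relations, d=200, width=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, d)
        self.rel = nn.Embedding(num_relations, d)
        # Relation-aware feature extraction: one dense layer per relation type.
        self.rel_aware = nn.ModuleList(
            [nn.Linear(2 * d, width) for _ in range(num_relations)]
        )
        # Common feature extraction: a single dense layer shared by all inputs.
        self.common = nn.Linear(2 * d, width)
        # Projection back to the embedding space for 1-N scoring.
        self.proj = nn.Linear(2 * width, d)

    def forward(self, s_idx, r_idx):
        # Concatenate subject and relation embeddings: x in R^{2d}.
        x = torch.cat([self.ent(s_idx), self.rel(r_idx)], dim=-1)
        # Relation-specific encoding (looped per sample for clarity;
        # a real implementation would batch samples by relation).
        h_rel = torch.stack(
            [self.rel_aware[int(r)](xi) for r, xi in zip(r_idx, x)]
        )
        h_com = self.common(x)                     # shared encoding
        h = F.relu(torch.cat([h_rel, h_com], dim=-1))
        z = self.proj(h)                           # back to R^d
        # Score against every entity embedding (1-N scoring, as in ConvE).
        return torch.sigmoid(z @ self.ent.weight.t())

# Training would minimize binary cross-entropy against multi-hot tail labels:
# loss = F.binary_cross_entropy(model(s, r), targets)
```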

 

 

Relation-specific inductive bias is important.

 

Full-sized filters for concatenated embeddings are effective.

 

Computational overhead is manageable.

 

Results

 

Ablation Study

Shared dense layer only: Effects of Widths

Increasing the width of the dense layer corresponds to increasing the number of CNN kernels (filters).

 

Shared dense layer only: Effects of Depths

 

Different relation-aware encoding functions