Interesting Concept Mining With Concept Lattice Convolutional Networks

Bibliographic Details
Main Authors: Mohamed Hamza Ibrahim, Rokia Missaoui, Pedro Henrique B. Ruas
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11027055/
Description
Summary: The extraction of meaningful conceptual structures is often a critical task in many scientific and engineering disciplines, as it enables a comprehensive analysis of complex data in terms of both context and content. In this paper, we introduce the Concept Lattice Convolutional Network ($\mathcal{LCN}$), an efficient semi-supervised learning approach to identify actionable concepts (i.e., interesting conceptual structures) based on a scalable convolutional neural network architecture that operates on concept lattices. The $\mathcal{LCN}$ captures diverse levels of global context by employing a message-passing mechanism that incorporates local structural and conceptual information within a lattice. It also employs parameter-sharing convolutional operations as conceptual filters to efficiently discern relevant concepts amidst the irrelevant ones. Moreover, it applies consistent aggregations that maintain local consistency of labeling across concepts in the lattice. Experiments on several datasets show that $\mathcal{LCN}$ can accurately identify actionable concepts and is at least three times faster than state-of-the-art exact interestingness indices.
ISSN: 2169-3536
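
Illustration: the abstract describes message passing over a concept lattice with parameter-sharing convolutional filters, but does not spell out the architecture. The sketch below is a minimal, hypothetical reading of that idea, not the authors' implementation; all names (lattice_conv, W_self, W_up, W_down) and design choices (mean aggregation over cover neighbours, ReLU) are assumptions for illustration only.

```python
# Hypothetical sketch of one lattice-convolution layer: each formal concept
# aggregates messages from its upper covers (more general concepts) and lower
# covers (more specific concepts) using shared weight matrices as "filters".
import numpy as np

def lattice_conv(X, up, down, W_self, W_up, W_down):
    """One message-passing step over a concept lattice.

    X      : (n, d) feature matrix, one row per formal concept
    up     : dict mapping concept index -> list of upper-neighbour indices
    down   : dict mapping concept index -> list of lower-neighbour indices
    W_*    : shared (d, d') weight matrices (the "conceptual filters")
    """
    n, _ = X.shape
    H = X @ W_self                          # self contribution
    for i in range(n):
        if up[i]:                           # messages from upper covers
            H[i] += np.mean(X[up[i]], axis=0) @ W_up
        if down[i]:                         # messages from lower covers
            H[i] += np.mean(X[down[i]], axis=0) @ W_down
    return np.maximum(H, 0.0)               # ReLU non-linearity

# Toy diamond lattice: 0 = top, 3 = bottom, 1 and 2 incomparable in between.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
up   = {0: [], 1: [0], 2: [0], 3: [1, 2]}
down = {0: [1, 2], 1: [3], 2: [3], 3: []}
W = [rng.normal(size=(5, 8)) * 0.1 for _ in range(3)]
H = lattice_conv(X, up, down, *W)
print(H.shape)  # (4, 8): one embedding per concept, ready for a label head
```

Stacking such layers would let each concept see progressively more global lattice context, and a final per-concept classifier could then score concepts as actionable or not, consistent with the semi-supervised setting the abstract describes.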