Semantic Tabular-to-Image Conversion and Contrastive Learning for Lightweight Intrusion Detection
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 박준영 | - |
| dc.contributor.author | 강건우 | - |
| dc.contributor.author | 이호인 | - |
| dc.contributor.author | 이승은 | - |
| dc.contributor.author | 박유랑 | - |
| dc.date.accessioned | 2026-04-02T00:49:47Z | - |
| dc.date.available | 2026-04-02T00:49:47Z | - |
| dc.date.created | 2026-04-01 | - |
| dc.date.issued | 2026-03 | - |
| dc.identifier.issn | 1598-849X | - |
| dc.identifier.uri | https://ir.ymlib.yonsei.ac.kr/handle/22282913/211711 | - |
| dc.description.abstract | With the proliferation of IoT emphasizing the need for high-performance Intrusion Detection Systems (IDS) in edge environments, deep learning-based IDS research is being actively pursued. However, existing approaches that feed tabular data directly into deep learning models are limited in their ability to capture the complex inherent relationships of network traffic. To address this, we propose a three-stage lightweight IDS framework that converts tabular data into images so that convolutional neural networks (CNNs) can be leveraged. (1) First, feature selection based on SHapley Additive exPlanations (SHAP) is performed to compress the data by retaining only critical features. (2) The selected data is then transformed into images using the LLM-categorized Vortex Feature Positioning (LVFP) technique, which reconstructs tabular data into CNN-optimized spatial patterns by assigning semantically categorized feature groups to RGB channels and rearranging them through vortex feature allocation. (3) Finally, we construct a lightweight CNN encoder and pre-train it on the converted images via contrastive learning to establish generalizable feature representations. In downstream evaluations on six IDS and IoT benchmark datasets, the proposed model outperforms existing models while using a minimal number of parameters. | - |
| dc.language | English | - |
| dc.publisher | 한국컴퓨터정보학회 (Korean Society of Computer Information) | - |
| dc.relation.isPartOf | 한국컴퓨터정보학회논문지 (Journal of the Korea Society of Computer and Information) | - |
| dc.title | Semantic Tabular-to-Image Conversion and Contrastive Learning for Lightweight Intrusion Detection | - |
| dc.title.alternative | 의미론적 정형-이미지 변환 및 대조 학습을 활용한 경량 침입 탐지 | - |
| dc.type | Article | - |
| dc.contributor.googleauthor | 박준영 | - |
| dc.contributor.googleauthor | 강건우 | - |
| dc.contributor.googleauthor | 이호인 | - |
| dc.contributor.googleauthor | 이승은 | - |
| dc.contributor.googleauthor | 박유랑 | - |
| dc.identifier.doi | 10.9708/jksci.2026.31.03.021 | - |
| dc.identifier.url | https://journal.kci.go.kr/jksci/archive/articleView?artiId=ART003317576 | - |
| dc.subject.keyword | Intrusion Detection System | - |
| dc.subject.keyword | Tabular-to-Image | - |
| dc.subject.keyword | Lightweight Model | - |
| dc.subject.keyword | LLM | - |
| dc.subject.keyword | Contrastive Learning | - |
| dc.subject.keyword | 침입 탐지 시스템 | - |
| dc.subject.keyword | 정형-이미지 변환 | - |
| dc.subject.keyword | 거대 언어 모델 | - |
| dc.subject.keyword | 대조 학습 | - |
| dc.contributor.affiliatedAuthor | 박유랑 | - |
| dc.citation.volume | 31 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 21 | - |
| dc.citation.endPage | 35 | - |
| dc.identifier.bibliographicCitation | 한국컴퓨터정보학회논문지, Vol.31(3) : 21-35, 2026-03 | - |
| dc.identifier.rimsid | 92343 | - |
| dc.type.rims | ART | - |
| dc.description.journalClass | 2 | - |
| dc.subject.keywordAuthor | Intrusion Detection System | - |
| dc.subject.keywordAuthor | Tabular-to-Image | - |
| dc.subject.keywordAuthor | Lightweight Model | - |
| dc.subject.keywordAuthor | LLM | - |
| dc.subject.keywordAuthor | Contrastive Learning | - |
| dc.subject.keywordAuthor | 침입 탐지 시스템 | - |
| dc.subject.keywordAuthor | 정형-이미지 변환 | - |
| dc.subject.keywordAuthor | 거대 언어 모델 | - |
| dc.subject.keywordAuthor | 대조 학습 | - |
| dc.type.docType | Y | - |
| dc.identifier.kciid | ART003317576 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
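
The three stages summarized in the abstract can be illustrated with short sketches. Stage 1 is SHAP-based feature selection: the sketch below ranks features by mean absolute SHAP value from a surrogate tree model and keeps the top-k. The synthetic data, the surrogate model, and the `top_k` budget are illustrative assumptions; the paper's actual selection procedure and feature budget are not given in this record.

```python
# Illustrative Stage 1: SHAP-based feature selection via a surrogate model.
# The data, surrogate, and top_k budget are placeholders, not the paper's setup.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for tabular network-flow features; a real pipeline would load
# one of the IDS/IoT benchmark datasets instead.
X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

surrogate = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(surrogate)
sv = explainer.shap_values(X)
if isinstance(sv, list):              # older SHAP: one (n, f) array per class
    sv = np.stack(sv, axis=-1)        # -> (n_samples, n_features, n_classes)

importance = np.abs(sv).mean(axis=0)  # mean |SHAP| per feature
if importance.ndim > 1:               # collapse the class axis if present
    importance = importance.mean(axis=-1)

top_k = 27                            # hypothetical budget; not from the paper
keep = np.argsort(importance)[::-1][:top_k]
X_selected = X[:, keep]               # compressed data handed to Stage 2
print(X_selected.shape)               # (1000, 27)
```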
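Stage 2 is the LVFP-style tabular-to-image conversion: semantically grouped features are assigned to RGB channels and arranged by vortex feature allocation. The sketch below is a minimal stand-in, assuming three feature groups and a simple center-out spiral as the "vortex" path; the paper's LLM-derived categorization and exact layout are not specified in this record.

```python
# Illustrative Stage 2: map semantically grouped features onto RGB channels
# along a center-out spiral ("vortex") path. The grouping and layout are
# stand-ins; the paper's LLM-based categorization is not reproduced here.
import numpy as np

def spiral_coords(n):
    """Visit the cells of an n x n grid in an outward spiral from the center."""
    r = c = n // 2
    coords = [(r, c)]
    step = 1
    while len(coords) < n * n:
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):  # right, down, left, up
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < n and 0 <= c < n:
                    coords.append((r, c))
            if dr != 0:               # lengthen the arm after down and up passes
                step += 1
    return coords[: n * n]

def to_image(groups, n=5):
    """Place each feature group on one RGB channel, with the most important
    features nearest the image center (earliest on the spiral path)."""
    img = np.zeros((n, n, 3), dtype=np.float32)
    path = spiral_coords(n)
    for ch, feats in enumerate(groups):
        for (r, c), v in zip(path, feats):
            img[r, c, ch] = v
    return img

# Example: 27 selected features split into three hypothetical semantic
# groups of 9 (e.g. flow-, packet-, and time-level statistics).
rng = np.random.default_rng(0)
groups = rng.random((3, 9))           # stand-in for normalized feature values
image = to_image(groups, n=5)         # (5, 5, 3) CNN-ready array
```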
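Stage 3 is contrastive pre-training of a lightweight CNN encoder on the converted images. The sketch below uses a SimCLR-style NT-Xent loss with a toy two-block encoder and noise-jitter views; the actual encoder architecture, augmentations, and loss configuration used in the paper are assumptions here.

```python
# Illustrative Stage 3: SimCLR-style contrastive pre-training of a small CNN.
# TinyEncoder, the noise-jitter views, and the temperature are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """A deliberately small CNN encoder with a projection head."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Sequential(nn.Linear(32, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.proj(self.features(x))

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss: (z1[i], z2[i]) are positive pairs, all others negatives."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2B, d)
    sim = (z @ z.t()) / tau                                # scaled cosine sims
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

# One toy optimization step on two noisy views of the same converted images.
enc = TinyEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
x = torch.rand(8, 3, 5, 5)            # batch of LVFP-style images from Stage 2
v1 = x + 0.05 * torch.randn_like(x)
v2 = x + 0.05 * torch.randn_like(x)
loss = nt_xent(enc(v1), enc(v2))
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.4f}")
```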