To improve the accuracy and reliability of LiDAR semantic segmentation, previous studies have introduced multi-modal approaches that exploit additional modalities, such as 2D RGB images, to provide complementary information. However, these methods increase data collection costs, sensor hardware requirements, power consumption, and computational complexity. We observe that multi-modal approaches improve the semantic alignment of 3D representations. Motivated by this observation, we propose a pseudo multi-modal approach. To this end, we introduce a novel class-label-driven method for constructing artificial 2D images. By leveraging the close semantic alignment between the image and text features of vision-language models, artificial 2D images are synthesized by arranging the text features of LiDAR class labels. During training, the semantic information encoded in these artificial 2D images enriches the 3D features through knowledge distillation. The proposed method significantly reduces the burden of training data collection and enables the 3D backbone network to learn semantic relationships more effectively. Extensive experiments on two benchmark datasets demonstrate that the proposed method improves performance by 2.2-3.5 mIoU over a baseline that uses only LiDAR data, achieving performance comparable to that of real multi-modal approaches.
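The sketch below is a minimal, hypothetical illustration of the idea described in the abstract, not the authors' implementation: per-pixel class labels (e.g., LiDAR labels projected onto a 2D grid) are replaced by the text embeddings of their class names to form an artificial 2D image in feature space, which then serves as a distillation target for projected 3D features. The function names, the random placeholder for the text encoder output, and the cosine-similarity distillation objective are all assumptions; in practice the text features would come from a frozen vision-language text encoder such as CLIP.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 20   # size of an assumed LiDAR segmentation label set
TEXT_DIM = 512     # assumed dimensionality of the text encoder's embeddings

# Placeholder for text features of class-label prompts ("car", "road", ...).
# In practice these would be produced by a frozen vision-language text encoder
# and L2-normalized so that image and text features live in the same space.
text_features = F.normalize(torch.randn(NUM_CLASSES, TEXT_DIM), dim=-1)

def build_artificial_image(label_map: torch.Tensor) -> torch.Tensor:
    """Arrange class-label text features into an artificial 2D image.

    label_map: (H, W) integer class indices, e.g., LiDAR semantic labels
               projected onto a range-view or camera-view grid (assumed layout).
    returns:   (TEXT_DIM, H, W) feature image, one text embedding per pixel.
    """
    h, w = label_map.shape
    img = text_features[label_map.view(-1)]          # (H*W, TEXT_DIM)
    return img.view(h, w, TEXT_DIM).permute(2, 0, 1)

def distillation_loss(feat_3d: torch.Tensor, feat_2d: torch.Tensor) -> torch.Tensor:
    """One plausible distillation objective: cosine similarity between
    projected 3D backbone features and the artificial 2D feature image."""
    feat_3d = F.normalize(feat_3d, dim=0)
    feat_2d = F.normalize(feat_2d, dim=0)
    return (1.0 - (feat_3d * feat_2d).sum(dim=0)).mean()

# Toy usage: a 64x512 label map and a matching projection of 3D features.
labels = torch.randint(0, NUM_CLASSES, (64, 512))
artificial_img = build_artificial_image(labels)      # (TEXT_DIM, 64, 512)
projected_3d = torch.randn(TEXT_DIM, 64, 512)        # stand-in for 3D backbone output
loss = distillation_loss(projected_3d, artificial_img)
print(loss.item())
```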