A study on the effectiveness of intermediate features in deep learning on facial expression recognition

DC Field: Value

dc.contributor.author: 유선국
dc.contributor.author: 오경택
dc.date.accessioned: 2023-11-07T08:13:55Z
dc.date.available: 2023-11-07T08:13:55Z
dc.date.issued: 2023-06
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/196616
dc.description.abstract: The purpose of this study is to evaluate the impact of intermediate features on facial expression recognition (FER) performance. To achieve this objective, intermediate features were extracted from the input images at specific layers (FM1~FM4) of a pre-trained network (ResNet-18). These extracted intermediate features and the original images were used as inputs to a vision transformer (ViT), and the FER performance was compared. When a single image was used as input, intermediate features extracted from FM2 yielded the best performance (training accuracy: 94.35%, testing accuracy: 75.51%). When the original image was used as input, the training accuracy was 91.32% and the testing accuracy was 74.68%. However, when the original image was combined with intermediate features as input, the best FER performance was achieved by combining the original image with FM2, FM3, and FM4 (training accuracy: 97.88%, testing accuracy: 79.21%). These results imply that incorporating intermediate features alongside the original image can lead to superior performance. The findings can be referenced when designing the preprocessing stages of a deep learning model for FER. By considering the effectiveness of intermediate features, practitioners can make informed decisions to enhance the performance of FER systems.
dc.description.statementOfResponsibility: open
dc.format: application/pdf
dc.relation.isPartOf: International Journal of Advanced Smart Convergence
dc.rights: CC BY-NC-ND 2.0 KR
dc.title: A study on the effectiveness of intermediate features in deep learning on facial expression recognition
dc.type: Article
dc.contributor.college: College of Medicine (의과대학)
dc.contributor.department: Dept. of Medical Engineering (의학공학교실)
dc.contributor.googleauthor: KyeongTeak Oh
dc.contributor.googleauthor: Sun K. Yoo
dc.identifier.doi: 10.7236/IJASC.2023.12.2.25
dc.contributor.localId: A02471
dc.subject.keyword: Intermediate Feature
dc.subject.keyword: Artificial Intelligence
dc.subject.keyword: Facial Expression Recognition
dc.contributor.alternativeName: Yoo, Sun Kook
dc.contributor.affiliatedAuthor: 유선국
dc.citation.volume: 12
dc.citation.number: 2
dc.citation.startPage: 25
dc.citation.endPage: 33
dc.identifier.bibliographicCitation: International Journal of Advanced Smart Convergence, Vol.12(2) : 25-33, 2023-06
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Medical Engineering (의학공학교실) > 1. Journal Papers
