A study on the effectiveness of intermediate features in deep learning on facial expression recognition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 유선국 | - |
dc.date.accessioned | 2023-11-07T08:13:55Z | - |
dc.date.available | 2023-11-07T08:13:55Z | - |
dc.date.issued | 2023-06 | - |
dc.identifier.uri | https://ir.ymlib.yonsei.ac.kr/handle/22282913/196616 | - |
dc.description.abstract | The purpose of this study is to evaluate the impact of intermediate features on facial expression recognition (FER) performance. To achieve this objective, intermediate features were extracted from the input images at specific layers (FM1~FM4) of a pre-trained network (ResNet-18). These extracted intermediate features and the original images were used as inputs to a vision transformer (ViT), and the FER performance was compared. As a result, when using a single image as input, intermediate features extracted from FM2 yielded the best performance (training accuracy: 94.35%, testing accuracy: 75.51%). When using the original image as input, the training accuracy was 91.32% and the testing accuracy was 74.68%. However, when combining the original image with intermediate features as input, the best FER performance was achieved by combining the original image with FM2, FM3, and FM4 (training accuracy: 97.88%, testing accuracy: 79.21%). These results imply that incorporating intermediate features alongside the original image can lead to superior performance. The findings can be referenced and utilized when designing the preprocessing stages of a deep learning model in FER. By considering the effectiveness of using intermediate features, practitioners can make informed decisions to enhance the performance of FER systems. | - |
dc.description.statementOfResponsibility | open | - |
dc.relation.isPartOf | International Journal of Advanced Smart Convergence | - |
dc.rights | CC BY-NC-ND 2.0 KR | - |
dc.title | A study on the effectiveness of intermediate features in deep learning on facial expression recognition | - |
dc.type | Article | - |
dc.contributor.college | College of Medicine (의과대학) | - |
dc.contributor.department | Dept. of Medical Engineering (의학공학교실) | - |
dc.contributor.googleauthor | KyeongTeak Oh | - |
dc.contributor.googleauthor | Sun K. Yoo | - |
dc.identifier.doi | 10.7236/IJASC.2023.12.2.25 | - |
dc.contributor.localId | A02471 | - |
dc.subject.keyword | Intermediate Feature | - |
dc.subject.keyword | Artificial Intelligence | - |
dc.subject.keyword | Facial Expression Recognition | - |
dc.contributor.alternativeName | Yoo, Sun Kook | - |
dc.contributor.affiliatedAuthor | 유선국 | - |
dc.citation.volume | 12 | - |
dc.citation.number | 2 | - |
dc.citation.startPage | 25 | - |
dc.citation.endPage | 33 | - |
dc.identifier.bibliographicCitation | International Journal of Advanced Smart Convergence, Vol.12(2) : 25-33, 2023-06 | - |