Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/5737
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Yussiff, Abdul-Lateef
dc.contributor.author: Suet-Peng, Yong
dc.contributor.author: Baharudin, Baharum B.
dc.date.accessioned: 2021-07-26T13:25:55Z
dc.date.available: 2021-07-26T13:25:55Z
dc.date.issued: 2016
dc.identifier.issn: 2310-5496
dc.identifier.uri: http://hdl.handle.net/123456789/5737
dc.description: 6 p.; ill. (en_US)
dc.description.abstract: One of the driving forces behind behavior recognition in video is the analysis of surveillance footage, in which humans are monitored and their actions are classified as either normal or a deviation from the norm. Local spatio-temporal features have gained attention as effective descriptors for action recognition in video, yet the use of texture as a local descriptor remains relatively unexplored. This paper presents work on human action recognition in video, proposing a fusion of appearance, motion and texture as the local descriptor for the bag-of-features model. Rigorous experiments were conducted on the recorded UTP dataset using the proposed descriptor. The average accuracy obtained was 85.92% for the fused descriptor, compared with 75.06% for the combination of shape and motion descriptors. The results show improved performance for the proposed descriptor over the combination of appearance and motion as the local descriptor of an interest point. (en_US)
dc.language.iso: en (en_US)
dc.subject: Human Action Recognition (en_US)
dc.subject: Video Representation (en_US)
dc.subject: School Surveillance (en_US)
dc.subject: Codebook Descriptor (en_US)
dc.title: Human Action Recognition in Surveillance Video of a Computer Laboratory (en_US)
dc.type: Article (en_US)
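
Note on the abstract above: this record does not specify the authors' actual appearance, motion and texture features, codebook size, or classifier, so the Python sketch below only illustrates one plausible reading of a fused local descriptor in a bag-of-features pipeline. The interest-point sampling, the stand-in descriptors (gradient-orientation histogram for appearance, temporal-difference histogram for motion, LBP-like code for texture), the patch and volume sizes, the codebook size K, and the use of scikit-learn's KMeans and LinearSVC are all illustrative assumptions, not the method reported in the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

PATCH = 16   # spatial patch size (assumed)
DEPTH = 5    # temporal extent of a local volume (assumed)
K = 100      # codebook size (assumed; not given in this record)

def sample_points(video, num_points=200, seed=0):
    """Randomly sample spatio-temporal locations (stand-in for a real interest-point detector)."""
    t, h, w = video.shape
    rng = np.random.default_rng(seed)
    ts = rng.integers(DEPTH // 2, t - DEPTH // 2, num_points)
    ys = rng.integers(PATCH // 2, h - PATCH // 2, num_points)
    xs = rng.integers(PATCH // 2, w - PATCH // 2, num_points)
    return list(zip(ts, ys, xs))

def local_volume(video, t, y, x):
    """Cut a DEPTH x PATCH x PATCH volume centred on an interest point."""
    return video[t - DEPTH // 2:t + DEPTH // 2 + 1,
                 y - PATCH // 2:y + PATCH // 2,
                 x - PATCH // 2:x + PATCH // 2].astype(np.float32)

def appearance_descriptor(vol, bins=9):
    """Gradient-orientation histogram of the centre frame (HOG-like stand-in)."""
    frame = vol[vol.shape[0] // 2]
    gy, gx = np.gradient(frame)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def motion_descriptor(vol, bins=9):
    """Histogram of temporal differences (stand-in for a flow-based motion descriptor)."""
    diff = np.diff(vol, axis=0)
    hist, _ = np.histogram(diff, bins=bins, range=(-255, 255))
    return hist / (hist.sum() + 1e-8)

def texture_descriptor(vol, bins=16):
    """Coarse LBP-style texture code on the centre frame (4 neighbours only)."""
    frame = vol[vol.shape[0] // 2]
    centre = frame[1:-1, 1:-1]
    code = ((frame[:-2, 1:-1] > centre).astype(np.uint8)
            | ((frame[2:, 1:-1] > centre).astype(np.uint8) << 1)
            | ((frame[1:-1, :-2] > centre).astype(np.uint8) << 2)
            | ((frame[1:-1, 2:] > centre).astype(np.uint8) << 3))
    hist, _ = np.histogram(code, bins=bins, range=(0, 16))
    return hist / (hist.sum() + 1e-8)

def fused_descriptors(video):
    """Concatenate appearance, motion and texture descriptors per interest point (early fusion)."""
    descs = []
    for t, y, x in sample_points(video):
        vol = local_volume(video, t, y, x)
        descs.append(np.concatenate([appearance_descriptor(vol),
                                     motion_descriptor(vol),
                                     texture_descriptor(vol)]))
    return np.array(descs)

def bag_of_features(videos, labels):
    """Build a codebook over all fused descriptors, encode each video as a word histogram, train an SVM."""
    per_video = [fused_descriptors(v) for v in videos]
    codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(np.vstack(per_video))
    hists = np.array([np.bincount(codebook.predict(d), minlength=K) / len(d)
                      for d in per_video])
    clf = LinearSVC().fit(hists, labels)
    return codebook, clf

# Hypothetical usage: train_videos is a list of (T, H, W) grayscale arrays,
# train_labels the corresponding action classes.
# codebook, clf = bag_of_features(train_videos, train_labels)

Here "fusion" is read as early concatenation of the three per-point descriptors before the codebook is built; a late-fusion variant (one codebook per descriptor type, concatenated histograms) would be an equally plausible interpretation of the abstract.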
Appears in Collections: Department of Chemistry

Files in This Item:
File: Human Action Recognition in Surveillance Video.pdf
Description: Article
Size: 4.75 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.