Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/5911
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Yussif, Abdul-Lateef
dc.contributor.author: Suet-Peng, Yong
dc.contributor.author: Baharudin, Baharum B.
dc.date.accessioned: 2021-08-18T11:22:27Z
dc.date.available: 2021-08-18T11:22:27Z
dc.date.issued: 2016
dc.identifier.issn: 2310-5496
dc.identifier.uri: http://hdl.handle.net/123456789/5911
dc.description: 6 p.: ill. [en_US]
dc.description.abstract: One of the driving forces of behavior recognition in video is the analysis of surveillance video, in which humans are monitored and their actions are classified as normal or a deviation from the norm. Local spatio-temporal features have gained attention as effective descriptors for action recognition in video, but the use of texture as a local descriptor remains relatively unexplored. In this paper, human action recognition in video is addressed by proposing a fusion of appearance, motion, and texture as the local descriptor for the bag-of-features model. Rigorous experiments were conducted on the recorded UTP dataset using the proposed descriptor. The average accuracy obtained was 85.92% for the fused descriptor, compared to 75.06% for the combined shape-and-motion descriptor. The results show improved performance of the proposed descriptor over the combination of appearance and motion as the local descriptor of an interest point. [en_US]
dc.language.iso: en [en_US]
dc.publisher: University of Cape Coast [en_US]
dc.subject: Human Action Recognition [en_US]
dc.subject: Video Representation [en_US]
dc.subject: School Surveillance [en_US]
dc.subject: Codebook Descriptor [en_US]
dc.title: Human action recognition in surveillance video of a computer laboratory [en_US]
dc.type: Article [en_US]
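The fused-descriptor bag-of-features representation described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the paper does not specify the exact appearance, motion, or texture features (HOG-, HOF-, and LBP-like dimensions below are assumptions), and the random codebook stands in for cluster centres that would normally come from k-means over training descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_descriptors(appearance, motion, texture):
    """Early fusion: concatenate the per-interest-point descriptors."""
    return np.concatenate([appearance, motion, texture], axis=1)

def bag_of_features(descriptors, codebook):
    """L1-normalised histogram of nearest-codeword assignments."""
    # pairwise distances: (n_points, n_codewords)
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy data: 50 interest points; descriptor sizes are illustrative guesses.
app = rng.normal(size=(50, 8))   # appearance (e.g. HOG-like)
mot = rng.normal(size=(50, 8))   # motion (e.g. HOF-like)
tex = rng.normal(size=(50, 4))   # texture (e.g. LBP-like)

fused = fuse_descriptors(app, mot, tex)              # shape (50, 20)
# Stand-in codebook: 10 descriptors sampled in place of k-means centres.
codebook = fused[rng.choice(50, size=10, replace=False)]
h = bag_of_features(fused, codebook)                 # video-level histogram
```

The resulting histogram `h` is the fixed-length video representation that a classifier would consume to label an action as normal or deviant.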
Appears in Collections:Department of Computer Science & Information Technology

Files in This Item:
File: Human Action Recognition in Surveillance Video.pdf
Description: Article
Size: 4.75 MB
Format: Adobe PDF (View/Open)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.