A low-resolution real-time face recognition using extreme learning machine and its variants

Ankit Rajpal, Khushwant Sehra, Anurag Mishra, Girija Chetty

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

The Extreme Learning Machine (ELM) is an excellent candidate for classification tasks owing to its exemplary performance as a generalized Single Layer Feed-forward Network (SLFN). In this regard, ELM has attracted widespread attention for tackling multiclass classification and regression problems with relative ease. Because the input weights and hidden layer biases are assigned randomly, ELM training times are observed to be on the order of seconds, compatible with real-time operation, whilst giving competitive performance. In this work, the suitability of ELM and its Online Sequential variant (OS-ELM) for recognition of unknown face samples is investigated under uncontrolled environments, including variations in pose and illumination. The presented face recognition approach is evaluated using four datasets: YALE, CMU, BIOID, and LFW. The Viola-Jones object detection framework is used to track the face features of a sample face in the spatial domain, with detection windows tuned to each dataset. The face features are then extracted with a Histogram of Oriented Gradients (HOG) descriptor, forming the feature vectors fed to the ELM and OS-ELM classifiers. The training time of the proposed face recognition approach ranges from milliseconds to seconds, with a computed time complexity of (Formula presented.), justifying the suitability of ELM and OS-ELM classifiers for real-time face recognition applications with high recognition rates.
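The pipeline summarised above (Viola-Jones face detection, HOG feature extraction, ELM classification) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes OpenCV's Haar-cascade detector and scikit-image's HOG descriptor, and the window size, HOG parameters, hidden-layer width, and the names extract_face_hog and ELM are hypothetical placeholders introduced here for the example.

import cv2
import numpy as np
from skimage.feature import hog


def extract_face_hog(image_path, window=(64, 64)):
    """Detect a face with Viola-Jones and describe it with HOG (illustrative settings)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                          # keep the first detection
    face = cv2.resize(gray[y:y + h, x:x + w], window)
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))


class ELM:
    """Basic ELM: random hidden layer, closed-form output weights."""

    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activation of the randomly projected inputs.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # Input weights and biases are drawn at random and never retrained;
        # only the output weights are solved, via the Moore-Penrose pseudoinverse.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        targets = np.eye(int(y.max()) + 1)[y]      # one-hot targets (y: integer labels)
        self.beta = np.linalg.pinv(self._hidden(X)) @ targets
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

The single least-squares solve for the output weights is what keeps ELM training in the millisecond-to-second range reported in the abstract; an online sequential variant would replace fit with a recursive least-squares update so new face samples can be absorbed in chunks without retraining from scratch.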

Original language: English
Pages (from-to): 456-471
Number of pages: 16
Journal: Imaging Science Journal
Volume: 71
Issue number: 5
Publication status: Published - 2023
