Maximal Margin Learning Vector Quantisation

Dat TRAN, Van Nguyen, Wanli MA

Research output: A Conference proceeding or a Chapter in Book › Conference contribution

Abstract

Kernel Generalised Learning Vector Quantisation (KGLVQ) was proposed to extend Generalised Learning Vector Quantisation into the kernel feature space to deal with complex class boundaries, and it yielded promising performance for complex classification tasks in pattern recognition. However, KGLVQ does not follow the maximal margin principle, which is crucial for kernel-based learning methods. In this paper we propose a maximal margin approach (MLVQ) to the KGLVQ algorithm. MLVQ inherits the merits of KGLVQ and also follows the maximal margin principle to improve the generalisation capability. Experiments performed on well-known data sets available in the UCI repository show promising classification results for the proposed method.
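For background, the GLVQ and KGLVQ methods in the abstract build on Kohonen's classic learning vector quantisation, where class prototypes are attracted to same-class samples and repelled from different-class samples. The sketch below shows only that basic LVQ1 update rule; it is not the paper's MLVQ algorithm, which additionally kernelises the prototypes and imposes a maximal-margin objective.

```python
# Minimal sketch of the classic LVQ1 prototype update (background only;
# NOT the MLVQ method of this paper, which adds kernelisation and a
# maximal-margin objective on top of this basic idea).

def euclidean(a, b):
    # plain Euclidean distance between two equal-length vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: move the winning prototype toward x if its class
    matches y, away from x otherwise. Returns the winner's index."""
    i = min(range(len(prototypes)), key=lambda k: euclidean(prototypes[k], x))
    sign = 1.0 if labels[i] == y else -1.0  # attract same class, repel other
    prototypes[i] = [p + sign * lr * (xj - p) for p, xj in zip(prototypes[i], x)]
    return i

# Toy usage: two 2-D prototypes, one per class; a class-0 sample attracts
# prototype 0 toward it.
protos = [[0.0, 0.0], [1.0, 1.0]]
labs = [0, 1]
winner = lvq1_step(protos, labs, [0.2, 0.1], 0)
```

GLVQ replaces this hard winner-take-all step with a differentiable cost over the nearest same-class and nearest different-class prototypes, which is the formulation KGLVQ carries into the kernel feature space.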
Original language: English
Title of host publication: The 2013 International Joint Conference on Neural Networks (IJCNN)
Editors: Plamen Angelov, Daniel Levine
Place of Publication: USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 1668-1673
Number of pages: 6
Volume: 1
ISBN (Print): 9781467361293
DOIs: 10.1109/IJCNN.2013.6706940
Publication status: Published - 2013
Event: 2013 International Joint Conference on Neural Networks (IJCNN) - Dallas, Texas, United States
Duration: 4 Aug 2013 - 9 Aug 2013

Conference

Conference: 2013 International Joint Conference on Neural Networks (IJCNN)
Country: United States
City: Dallas, Texas
Period: 4/08/13 - 9/08/13


Cite this

TRAN, D., Nguyen, V., & MA, W. (2013). Maximal Margin Learning Vector Quantisation. In P. Angelov, & D. Levine (Eds.), The 2013 International Joint Conference on Neural Networks (IJCNN) (Vol. 1, pp. 1668-1673). USA: IEEE, Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/IJCNN.2013.6706940
TRAN, Dat ; Nguyen, Van ; MA, Wanli. / Maximal Margin Learning Vector Quantisation. The 2013 International Joint Conference on Neural Networks (IJCNN). editor / Plamen Angelov ; Daniel Levine. Vol. 1 USA : IEEE, Institute of Electrical and Electronics Engineers, 2013. pp. 1668-1673
@inproceedings{ca3b25e010ec48c7a28fd8baf44b26f3,
title = "Maximal Margin Learning Vector Quantisation",
abstract = "Kernel Generalised Learning Vector Quantisation (KGLVQ) was proposed to extend Generalised Learning Vector Quantisation into the kernel feature space to deal with complex class boundaries, and it yielded promising performance for complex classification tasks in pattern recognition. However, KGLVQ does not follow the maximal margin principle, which is crucial for kernel-based learning methods. In this paper we propose a maximal margin approach (MLVQ) to the KGLVQ algorithm. MLVQ inherits the merits of KGLVQ and also follows the maximal margin principle to improve the generalisation capability. Experiments performed on well-known data sets available in the UCI repository show promising classification results for the proposed method.",
keywords = "Learning Vector Quantisation",
author = "Dat TRAN and Van Nguyen and Wanli MA",
year = "2013",
doi = "10.1109/IJCNN.2013.6706940",
language = "English",
isbn = "9781467361293",
volume = "1",
pages = "1668--1673",
editor = "Plamen Angelov and Daniel Levine",
booktitle = "The 2013 International Joint Conference on Neural Networks (IJCNN)",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",
address = "United States",
}

TRAN, D, Nguyen, V & MA, W 2013, Maximal Margin Learning Vector Quantisation. in P Angelov & D Levine (eds), The 2013 International Joint Conference on Neural Networks (IJCNN). vol. 1, IEEE, Institute of Electrical and Electronics Engineers, USA, pp. 1668-1673, 2013 International Joint Conference on Neural Networks (IJCNN), Texas, United States, 4/08/13. https://doi.org/10.1109/IJCNN.2013.6706940

Maximal Margin Learning Vector Quantisation. / TRAN, Dat; Nguyen, Van; MA, Wanli.

The 2013 International Joint Conference on Neural Networks (IJCNN). ed. / Plamen Angelov; Daniel Levine. Vol. 1 USA : IEEE, Institute of Electrical and Electronics Engineers, 2013. p. 1668-1673.

TY - GEN
T1 - Maximal Margin Learning Vector Quantisation
AU - TRAN, Dat
AU - Nguyen, Van
AU - MA, Wanli
PY - 2013
Y1 - 2013
N2 - Kernel Generalised Learning Vector Quantisation (KGLVQ) was proposed to extend Generalised Learning Vector Quantisation into the kernel feature space to deal with complex class boundaries, and it yielded promising performance for complex classification tasks in pattern recognition. However, KGLVQ does not follow the maximal margin principle, which is crucial for kernel-based learning methods. In this paper we propose a maximal margin approach (MLVQ) to the KGLVQ algorithm. MLVQ inherits the merits of KGLVQ and also follows the maximal margin principle to improve the generalisation capability. Experiments performed on well-known data sets available in the UCI repository show promising classification results for the proposed method.
AB - Kernel Generalised Learning Vector Quantisation (KGLVQ) was proposed to extend Generalised Learning Vector Quantisation into the kernel feature space to deal with complex class boundaries, and it yielded promising performance for complex classification tasks in pattern recognition. However, KGLVQ does not follow the maximal margin principle, which is crucial for kernel-based learning methods. In this paper we propose a maximal margin approach (MLVQ) to the KGLVQ algorithm. MLVQ inherits the merits of KGLVQ and also follows the maximal margin principle to improve the generalisation capability. Experiments performed on well-known data sets available in the UCI repository show promising classification results for the proposed method.
KW - Learning Vector Quantisation
U2 - 10.1109/IJCNN.2013.6706940
DO - 10.1109/IJCNN.2013.6706940
M3 - Conference contribution
SN - 9781467361293
VL - 1
SP - 1668
EP - 1673
BT - The 2013 International Joint Conference on Neural Networks (IJCNN)
A2 - Angelov, Plamen
A2 - Levine, Daniel
PB - IEEE, Institute of Electrical and Electronics Engineers
CY - USA
ER -

TRAN D, Nguyen V, MA W. Maximal Margin Learning Vector Quantisation. In Angelov P, Levine D, editors, The 2013 International Joint Conference on Neural Networks (IJCNN). Vol. 1. USA: IEEE, Institute of Electrical and Electronics Engineers. 2013. p. 1668-1673 https://doi.org/10.1109/IJCNN.2013.6706940