EEG-Based Person Authentication with Variational Universal Background Model

Huyen Tran, Dat Tran, Wanli Ma, Phuoc Nguyen

Research output: Conference contribution (chapter in book/proceedings), peer-reviewed


Silent speech is a convenient and natural way to perform person authentication, as users can imagine speaking their password instead of typing it. However, EEG signals contain inherent noise and complex variations, making it difficult to capture the correct information and to model uncertainty. We propose an EEG-based person authentication framework built on variational inference, which learns a simple latent representation for complex data. A variational universal background model is created by pooling the latent models of all users. A likelihood ratio of the claimed user's model to the background model is then used to test whether the claim is valid. Extensive experiments on three datasets show the advantages of our proposed framework.
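The decision rule described in the abstract can be sketched in a toy form: fit a simple latent model per user, pool all users into a universal background model (UBM), and accept a claim when the log-likelihood ratio of the claimed user's model to the UBM exceeds a threshold. The sketch below uses diagonal Gaussians over synthetic feature vectors as a stand-in for the paper's variational latent models; the data, model choice, and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    """Per-sample log-likelihood under a diagonal Gaussian (sum over features)."""
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var), axis=-1)

rng = np.random.default_rng(0)

# Hypothetical per-user latent features: each user is summarized by a
# Gaussian fitted to that user's feature vectors (a stand-in for the
# paper's variational latent models).
user_feats = {u: rng.normal(loc=u, scale=1.0, size=(200, 8)) for u in range(3)}
user_models = {u: (f.mean(axis=0), f.var(axis=0)) for u, f in user_feats.items()}

# Universal background model: pool every user's features into one model.
pooled = np.concatenate(list(user_feats.values()), axis=0)
ubm = (pooled.mean(axis=0), pooled.var(axis=0))

def verify(x, claimed_user, threshold=0.0):
    """Accept the identity claim if the average log-likelihood ratio
    (claimed user's model vs. background model) exceeds the threshold."""
    mean, var = user_models[claimed_user]
    scores = gaussian_loglik(x, mean, var) - gaussian_loglik(x, *ubm)
    return bool(np.mean(scores) > threshold)

# A genuine trial (samples drawn like user 2) vs. an impostor trial.
genuine = rng.normal(loc=2, scale=1.0, size=(20, 8))
impostor = rng.normal(loc=0, scale=1.0, size=(20, 8))
print(verify(genuine, claimed_user=2))   # expected: True
print(verify(impostor, claimed_user=2))  # expected: False
```

In practice the threshold would be tuned on held-out trials (e.g. for an equal error rate) rather than fixed at zero, and the per-user and background densities would come from the learned variational models rather than raw Gaussian fits.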

Original language: English
Title of host publication: Network and System Security - 13th International Conference, NSS 2019, Proceedings
Editors: Joseph K. Liu, Xinyi Huang
Place of publication: Netherlands
Number of pages: 15
ISBN (Electronic): 9783030369385
ISBN (Print): 9783030369378
Publication status: Published - 1 Jan 2019
Event: 13th International Conference on Network and System Security, NSS 2019 - Sapporo, Japan
Duration: 15 Dec 2019 - 18 Dec 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11928 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 13th International Conference on Network and System Security, NSS 2019

