
Seminar: Large-scale Support Vector Learning

Dr. Nguyen Duc Dung, Vice Director of the Institute of Information Technology, Vietnam Academy of Science and Technology, will give a talk on Large-scale Support Vector Learning.

The seminar will be held in the Dao tao room, 14th floor, FPT Building, Hanoi, from 14h00 to 16h00 on 21 April 2015.

Anyone interested is welcome.

Short Bio

  • Dung Duc Nguyen received his Bachelor's degree in mathematics in 1994.
  • He received his Master's and Ph.D. degrees in knowledge science from the Japan Advanced Institute of Science and Technology, Ishikawa, Japan, in 2003 and 2006, respectively.
  • He was a Research Engineer at KDDI Research and Development Laboratories Inc., Saitama, Japan.
  • He is now with the Institute of Information Technology, Vietnam Academy of Science and Technology, Ha Noi, Vietnam. His current research interests include machine learning, pattern recognition, and data mining.
  • Dr. Nguyen was awarded the Innovative Medal from the Youth Union of Vietnam in 1998 for developing the first Vietnamese optical character recognition software, and the Technical Support Achievement Award in 2008 for his contributions at KDDI Laboratories.
  • He is one of the leading experts on SVMs in Vietnam.

Outline

  • Efficiency is one of the main limitations of kernel-based methods and support vector machines (SVMs). With a large number of support vectors (SVs), an SVM places heavy demands on memory and computation time in both the training and testing phases (the decision function shown after this outline makes the dependence on the number of SVs explicit).
  • The first part of the seminar presents several methods to reduce the number of SVs included in SVM solutions. There are two main approaches: incrementally constructing a new SV set starting from the empty set, and decrementally reducing the original SV set (a minimal code sketch of the decremental idea follows this outline). The common objective is to minimize the number of SVs included in the SVM solution.
  • The second part of the seminar introduces our latest results in training SVMs on large data. The newly introduced CondensedSVM integrates SV condensation into the SVM training process. The integration not only helps reduce the memory and computation requirements of training, but also makes the trained SVM more compact and faster in the testing phase.
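
For background (standard kernel-SVM material, not specific to the talk), the dependence on the number of SVs comes from the decision function, which requires one kernel evaluation per support vector:

    f(x) = \mathrm{sign}\Big( \sum_{i=1}^{N_{SV}} \alpha_i y_i K(x_i, x) + b \Big)

so both the memory needed to store the model and the time needed to classify a point grow linearly with N_{SV}.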
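
The sketch below is not the speaker's algorithm; it is a minimal illustration of the decremental idea under assumptions made only for the example (a toy dataset, an RBF kernel with a fixed gamma, and a crude pruning rule that keeps the SVs with the largest dual coefficients). It shows how dropping SVs from a trained SVM trades test accuracy against prediction cost; the methods presented in the seminar pursue this trade-off far more carefully.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Hypothetical toy data; the talk concerns much larger datasets.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    gamma = 0.05  # assumed kernel width, chosen only for this illustration
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_tr, y_tr)
    sv = clf.support_vectors_          # support vectors
    coef = clf.dual_coef_.ravel()      # alpha_i * y_i for each SV
    b = clf.intercept_[0]

    def predict(sv, coef, b, X):
        # f(x) = sum_i coef_i * K(sv_i, x) + b: one kernel evaluation per SV,
        # so prediction cost is proportional to the number of SVs kept.
        scores = coef @ rbf_kernel(sv, X, gamma=gamma) + b
        return (scores > 0).astype(int)

    # Crude decremental rule: keep only the SVs with the largest |alpha_i * y_i|.
    order = np.argsort(-np.abs(coef))
    for keep in (len(coef), len(coef) // 2, len(coef) // 4):
        idx = order[:keep]
        acc = (predict(sv[idx], coef[idx], b, X_te) == y_te).mean()
        print(f"{keep:4d} SVs kept -> test accuracy {acc:.3f}")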