
A Novel DNN Training Framework via Data Sampling and Multi-Task Optimization

conference contribution
posted on 2024-07-11, 14:17 authored by Boyu Zhang, Kai Qin, Hong Pan, Timos Sellis
Conventional DNN training paradigms typically rely on one training set and one validation set, obtained by partitioning an annotated dataset available for training, namely the gross training set, in a certain way. The training set is used to train the model, while the validation set is used to estimate the generalization performance of the trained model as training proceeds, so as to avoid over-fitting. There exist two major issues in this training paradigm. Firstly, the validation set can hardly guarantee an unbiased estimate of the generalization performance due to potential mismatch with the test data. Secondly, training a DNN corresponds to solving a complex optimization problem, which is prone to getting trapped in inferior local optima and thus leads to undesired training results. To address these issues, we propose a novel DNN training framework. It generates multiple pairs of training and validation sets from the gross training set via random splitting, trains a DNN model of a pre-specified network structure on each pair while allowing the useful knowledge (e.g., promising network parameters) obtained from one model training process to be transferred to the other model training processes via multi-task optimization (a recently emerging optimization paradigm), and outputs the model, among all trained models, with the overall best performance across the validation sets from all pairs. The knowledge transfer mechanism featured in this new framework can not only enhance training effectiveness by helping a model training process escape from local optima, but also improve generalization performance via the implicit regularization that each model training process imposes on the others. We implement the proposed framework, parallelize the implementation on a GPU cluster, and apply it to train several widely used DNN models. Experimental results on several classification datasets of different nature demonstrate the superiority of the proposed framework over the conventional training paradigm.
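
The abstract describes the training loop only at a high level. Below is a minimal sketch of that loop in PyTorch, under stated assumptions: the number of pairs K, the split ratio, the hypothetical make_model() factory, and the transfer rule are all illustrative choices, not taken from the paper. In particular, the paper's actual multi-task optimization operator is not specified in the abstract, so a deliberately naive placeholder is used here (weaker peers periodically adopt the parameters of the currently best-performing peer).

```python
# A minimal sketch of the multi-pair training framework described above.
# Assumptions (not from the paper): K=3 pairs, an 80/20 split, SGD, and a
# naive knowledge-transfer rule standing in for multi-task optimization.
import copy
import random
import torch
from torch.utils.data import DataLoader, Subset

def random_split_pairs(gross_set, k, val_fraction=0.2):
    """Generate k (train, val) pairs by independent random splits."""
    n = len(gross_set)
    pairs = []
    for _ in range(k):
        idx = list(range(n))
        random.shuffle(idx)
        cut = int(n * (1 - val_fraction))
        pairs.append((Subset(gross_set, idx[:cut]), Subset(gross_set, idx[cut:])))
    return pairs

@torch.no_grad()
def accuracy(model, loader, device):
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

def train_framework(gross_set, make_model, k=3, epochs=20,
                    transfer_every=5, device="cpu"):
    pairs = random_split_pairs(gross_set, k)
    models = [make_model().to(device) for _ in range(k)]
    opts = [torch.optim.SGD(m.parameters(), lr=0.01, momentum=0.9) for m in models]
    train_loaders = [DataLoader(tr, batch_size=128, shuffle=True) for tr, _ in pairs]
    val_loaders = [DataLoader(va, batch_size=256) for _, va in pairs]
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(epochs):
        # Train each model on its own (training set, validation set) pair.
        for m, opt, loader in zip(models, opts, train_loaders):
            m.train()
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(m(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
        # Knowledge transfer (placeholder for the paper's multi-task
        # optimization mechanism): weaker peers adopt the parameters of the
        # best peer; subsequent training on different splits re-diversifies them.
        if (epoch + 1) % transfer_every == 0:
            scores = [accuracy(m, vl, device) for m, vl in zip(models, val_loaders)]
            best = scores.index(max(scores))
            for i in range(k):
                if i != best:
                    models[i].load_state_dict(copy.deepcopy(models[best].state_dict()))

    # Output the model with the best mean accuracy across ALL validation sets.
    mean_scores = [sum(accuracy(m, vl, device) for vl in val_loaders) / k
                   for m in models]
    return models[mean_scores.index(max(mean_scores))]
```

Evaluating every candidate model on all K validation sets, rather than only its own, reflects the selection criterion stated in the abstract; the independent training processes on the K splits are what the paper parallelizes across a GPU cluster.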

Funding

Identifying technological trajectories using machine learning algorithms

Australian Research Council

Data-driven Traffic Analytics for Incident Analysis and Management

Australian Research Council

Next-generation Intelligent Explorations of Geo-located Data

Australian Research Council

Available versions

PDF (Accepted manuscript)

ISBN

9781728169262

Journal title

Proceedings of the International Joint Conference on Neural Networks

Conference name

International Joint Conference on Neural Networks, IJCNN 2020; Virtual

Location

Glasgow

Start date

2020-07-19

End date

2020-07-24

Publisher

IEEE

Copyright statement

Copyright © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Language

eng
