Position Prediction as an Effective Pre-training Strategy
Authors: Shuangfei Zhai, Navdeep Jaitly, Tatiana Likhomanenko, Jason Ramapuram, Dan Busbridge, Walter Talbott, Chen Huang, Hanlin Goh, Joseph Yitan Cheng, Josh Susskind
Transformers have gained increasing popularity in a wide range of applications, including Natural Language Processing (NLP), Computer Vision, and Speech Recognition, because of their powerful representational capacity. However, harnessing this representational capacity effectively requires a large amount of data, strong regularization, or both, to mitigate overfitting. Recently, the power of the Transformer has been unlocked by self-supervised pretraining strategies based on masked autoencoders, which rely on reconstructing masked inputs, either directly or contrastively, from unmasked content. This pretraining strategy, used in BERT models in NLP, Wav2Vec models in Speech, and, recently, in MAE models in Vision, forces the model to learn about relationships between the content in different parts of the input using autoencoding-related objectives. In this paper, we propose a novel but surprisingly simple alternative to content reconstruction: predicting locations from content, without providing positional information for it. Doing so requires the Transformer to understand the positional relationships between different parts of the input from their content alone. This amounts to an efficient implementation where the pretext task is a classification problem among all possible positions for each input token. We experiment on both Vision and Speech benchmarks, where our approach brings improvements over strong supervised training baselines and is comparable to modern unsupervised/self-supervised pretraining methods. Our method also enables Transformers trained without position embeddings to outperform ones trained with full position information.
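To make the pretext task concrete, here is a minimal PyTorch sketch of the general idea described above: an encoder sees token content only (no position embeddings), and a per-token head classifies which of the N positions each token came from, trained with cross-entropy. This is an illustrative reading of the abstract, not the authors' released implementation; names such as PositionPredictionPretrainer and pretraining_loss are invented for the example, and details like shuffling tokens before encoding are assumptions.

```python
import torch
import torch.nn as nn

class PositionPredictionPretrainer(nn.Module):
    """Sketch of position prediction as a pretext task: the encoder
    receives content-only token embeddings (no positional encoding),
    and a linear head produces one logit per candidate position for
    every token."""

    def __init__(self, num_tokens, dim=256, depth=6, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # One logit per possible position for each input token.
        self.position_head = nn.Linear(dim, num_tokens)

    def forward(self, token_embeddings):
        # token_embeddings: (batch, num_tokens, dim), content only,
        # deliberately without any positional information.
        hidden = self.encoder(token_embeddings)
        return self.position_head(hidden)  # (batch, num_tokens, num_tokens)


def pretraining_loss(model, token_embeddings):
    """Cross-entropy between predicted and true position indices."""
    batch, num_tokens, _ = token_embeddings.shape
    # Shuffle tokens so their original positions must be inferred
    # from content alone (assumed detail for illustration).
    perm = torch.stack([torch.randperm(num_tokens) for _ in range(batch)])
    shuffled = torch.gather(
        token_embeddings, 1, perm.unsqueeze(-1).expand_as(token_embeddings)
    )
    logits = model(shuffled)  # (batch, num_tokens, num_tokens)
    # The target for each shuffled token is its original position index.
    return nn.functional.cross_entropy(
        logits.reshape(-1, num_tokens), perm.reshape(-1)
    )
```

Under this reading, the classification target space is simply the set of all token positions, which is why the task reduces to an efficient per-token classification problem rather than a reconstruction objective.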