XLNet

Explained: XLNet – Generalized Autoregressive Pretraining for Language Understanding

Researchers at Carnegie Mellon and Google Brain have rethought some of the techniques behind Google’s BERT machine learning model for natural language processing.

They propose a new approach called “XLNet.” Built on top of the popular “Transformer” architecture for language, it may offer a more direct way to model how language works.

XLNet is an exciting development in NLP, not only because of its results but because it shows there is still room for improvement in transfer learning for NLP.
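The “generalized autoregressive pretraining” in XLNet’s title refers to predicting tokens in a randomly sampled factorization order, rather than strictly left-to-right (classic autoregressive models) or via masked tokens (BERT). The snippet below is a minimal, illustrative Python sketch of that permutation idea only; the function name and structure are hypothetical and this is not XLNet’s actual implementation.

```python
import random

def permutation_lm_targets(tokens):
    """Illustrative sketch of permutation language modeling (hypothetical helper).

    A random factorization order is sampled, and each token is predicted
    conditioned only on the tokens that precede it in that order. This keeps
    an autoregressive objective while exposing the model to context from
    both sides of a token across different sampled orders.
    """
    order = list(range(len(tokens)))
    random.shuffle(order)  # one randomly sampled factorization order

    examples = []
    for i, pos in enumerate(order):
        context_positions = sorted(order[:i])           # positions visible at this step
        context = [tokens[p] for p in context_positions]
        examples.append((context, tokens[pos]))         # (visible context, target token)
    return examples

# Example: for ["new", "york", "is", "a", "city"], one sampled order might
# ask the model to predict "york" from {"is", "a", "city"} alone.
print(permutation_lm_targets(["new", "york", "is", "a", "city"]))
```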

Machine Learning Explained’s article, Paper Dissected: “XLNet: Generalized Autoregressive Pretraining for Language Understanding” Explained, offers a clear summary of arguably one of 2019’s most important developments in Natural Language Processing.

Further Reading

The original paper

Blog post on the Transformer

Blog post on ELMo

Blog post on BERT
