Glossary
Transformer: neural networks for automatic language processing
Archive REF: IN195 V1

Author: François YVON

Publication date: March 10, 2022 | Review date: November 20, 2024


5. Glossary

Annotation; Labeling

Annotated data is needed to guide supervised learning. Annotations can relate to a text, a sentence, or even isolated words; they can be linguistic in nature (morphological, syntactic, semantic), or represent the output of a processing task (e.g. the polarity of a text, or the semantic equivalence between two sentences).
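The granularities mentioned above can be illustrated with a few hypothetical annotated examples (all data below is invented for illustration): a text-level polarity label, word-level part-of-speech tags, and a sentence-pair equivalence judgment.

```python
# Text-level annotation: the polarity of a whole review.
text_example = {
    "text": "The film was a delight from start to finish.",
    "polarity": "positive",
}

# Word-level (morpho-syntactic) annotation: one part-of-speech tag per token.
token_example = {
    "tokens": ["The", "cat", "sleeps"],
    "pos_tags": ["DET", "NOUN", "VERB"],
}

# Sentence-pair annotation: semantic equivalence (paraphrase or not).
pair_example = {
    "sentence_1": "He bought a car.",
    "sentence_2": "He purchased an automobile.",
    "equivalent": True,
}

# Supervised learning consumes such (input, annotation) pairs;
# the annotation is the target the model learns to predict.
for example in (text_example, token_example, pair_example):
    print(sorted(example.keys()))
```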

Refining; Fine-tuning

A model trained for one task (e.g. language modeling) can be transferred to another task by continuing its training with different data or annotations: this is the refinement (fine-tuning) stage. In this way, the parameters of a model such as BERT can be specialized with a few examples from a sentiment analysis task. Model refinement is one method of transfer learning.
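The idea of fine-tuning can be sketched in miniature, without any real pretrained model: below, a toy bag-of-words scorer with fixed "pretrained" weights (a stand-in for a large model such as BERT) is further trained by gradient descent on a handful of invented sentiment examples. All words, weights, and data are hypothetical.

```python
import math

# "Pretrained" parameters: stand-ins for a large model's weights.
weights = {"great": 0.1, "awful": -0.1, "movie": 0.0}

def score(tokens):
    """Sum of word weights, squashed into a probability of positive polarity."""
    s = sum(weights.get(t, 0.0) for t in tokens)
    return 1.0 / (1.0 + math.exp(-s))

# A few annotated sentiment examples (label 1 = positive, 0 = negative).
finetune_data = [
    (["great", "movie"], 1),
    (["awful", "movie"], 0),
]

# Fine-tuning: continue gradient descent, now on the task-specific data.
lr = 1.0
for _ in range(200):
    for tokens, label in finetune_data:
        p = score(tokens)
        grad = p - label  # derivative of the log-loss w.r.t. the weight sum
        for t in tokens:
            if t in weights:
                weights[t] -= lr * grad

print(score(["great", "movie"]))  # close to 1 after fine-tuning
```

In a real setting the pretrained weights would come from self-supervised training on large corpora, and only a small labeled set is needed for the specialization step.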

Transfer Learning

Transfer learning...
