
18 Oct 2018 - BERT Large Language Model released

Description:

BERT, short for Bidirectional Encoder Representations from Transformers, is a deep learning model introduced by Google. Built on the Transformer encoder architecture, BERT excels at natural language processing (NLP) tasks because it captures long-range dependencies between words in both directions. It is pre-trained on a large dataset combining the BooksCorpus and English Wikipedia using two objectives. The first, masked language modeling (MLM), has BERT predict words that have been masked out of the input text, forcing it to learn each word's context. The second, next sentence prediction (NSP), trains BERT to judge whether two given sentences appear consecutively in the source text. Once pre-trained, BERT can be fine-tuned for tasks such as question answering, sentiment analysis, and determining the relationship between sentences.
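
To make the masked language modeling objective concrete, here is a minimal sketch using the Hugging Face transformers library and its bert-base-uncased checkpoint (both assumptions; the entry above does not name any specific implementation). BERT fills in the [MASK] placeholder using context from both sides of the gap:

```python
# A minimal sketch of BERT's masked language modeling (MLM) objective.
# Assumes the Hugging Face "transformers" library; the timeline entry
# itself does not reference a particular implementation.
from transformers import pipeline

# Load a pre-trained BERT model wrapped in a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the token hidden behind [MASK] from the words on both
# sides of it, which is the "bidirectional" part of the model's name.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```

The same pre-trained checkpoint is the starting point for fine-tuning: the downstream tasks listed above replace the MLM head with a task-specific output layer while reusing the learned encoder weights.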

Added to timeline:

Date:

18 Oct 2018
