Main Characteristics of Transformers
Positional Encoding
Attention
Self-Attention
Basics
"Transformers, explained: Understand the model behind GPT, BERT, and T5" by Dale Markowitz
Convolutional Neural Network (CNN): Vision
Recurrent Neural Network (RNN): Text
Transformers: Text and more
Positional Encoding
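Because a Transformer processes all tokens in parallel, it needs explicit position information. A minimal NumPy sketch of the sinusoidal positional encoding from the original Transformer paper (the function name and dimensions are illustrative):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: sine on even dimensions,
    cosine on odd dimensions. Returns shape (seq_len, d_model)."""
    positions = np.arange(seq_len)[:, np.newaxis]      # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]           # (1, d_model)
    # Each pair of dimensions shares one wavelength, from 2*pi up to 10000*2*pi
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions: cosine
    return pe

pe = positional_encoding(seq_len=50, d_model=16)
print(pe.shape)  # (50, 16)
```

The resulting matrix is simply added to the token embeddings, so each position gets a unique, smoothly varying signature.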
Attention
Self-Attention
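Self-attention lets every token weigh every other token in the same sequence when computing its new representation. A single-head, scaled dot-product sketch in plain NumPy (the weight matrices here are random placeholders; in a real model they are learned):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.
    x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # pairwise token similarity
    # Softmax over each row: every token's weights sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
print(out.shape, attn.shape)  # (4, 8) (4, 4)
```

Each row of `attn` shows how strongly one token attends to every token in the sequence; multi-head attention simply runs several such heads in parallel and concatenates their outputs.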
GPT (Generative Pre-trained Transformer): GPT-3, GPT-3.5, GPT-4
BERT (Bidirectional Encoder Representations from Transformers)
T5 (Text-to-Text Transfer Transformer)
RoBERTa (Robustly Optimized BERT Pretraining Approach)
The Model Hub is a platform for sharing and discovering pre-trained models, contributed by the AI community.
Spaces: discover ML apps made by the community
Hugging Face Pipelines
Cover common machine learning tasks (e.g. text classification, summarization, question answering)
Pre-built, easy-to-use abstractions (almost no code necessary)
Simplify the machine learning workflow
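A minimal usage sketch of a Hugging Face pipeline, assuming the `transformers` package is installed (the first call downloads a default model, and the exact label and score depend on that model):

```python
# Requires: pip install transformers
from transformers import pipeline

# A pipeline bundles tokenizer, model, and post-processing into one call
classifier = pipeline("sentiment-analysis")
result = classifier("Transformers make NLP tasks remarkably accessible.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Swapping the task string (e.g. `"summarization"`, `"question-answering"`) is usually all that is needed to switch to a different task.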
Assemble, configure, and deploy autonomous AI Agents in your browser.
https://github.com/microsoft/JARVIS
Jan Kirenz