Tag: pretraining
- Paper Review: CogVLM: Visual Expert for Pretrained Language Models (09 Nov 2023)
- Paper Review: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (15 Jun 2023)
- Paper Review: The effectiveness of MAE pre-pretraining for billion-scale pretraining (05 Jun 2023)
- Paper Review: DarkBERT: A Language Model for the Dark Side of the Internet (18 May 2023)
- Paper Review: DINOv2: Learning Robust Visual Features without Supervision (20 Apr 2023)
- Paper Review: NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion (25 Nov 2021)
- Paper Review: Efficient Visual Pretraining with Contrastive Detection (01 Sep 2021)
- Paper Review: CoAtNet: Marrying Convolution and Attention for All Data Sizes (10 Jun 2021)
- Paper Review: ByT5: Towards a token-free future with pre-trained byte-to-byte models (02 Jun 2021)
- Paper Review: Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence (21 May 2021)
- Paper Review: Are Pre-trained Convolutions Better than Pre-trained Transformers? (10 May 2021)
- Paper Review: LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval (21 Mar 2021)
- Paper Review: ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Networks (04 Jul 2020)
- Paper Review: VirTex: Learning Visual Representations from Textual Annotations (14 Jun 2020)