Title: The DNA dialect: a comprehensive guide to pretrained genomic language models
Authors: Veiner, Marcell; Supek, Fran
Date accessioned: 2026-01-27
Date available: 2026-01-27
Date issued: 2026-01-19
Date published: 2026-01-26
ISSN: 1744-4292
URI: https://hdl.handle.net/2445/226202
Type: info:eu-repo/semantics/article
Access rights: info:eu-repo/semantics/openAccess
Extent: 24 p.
Format: application/pdf
Language: English
Subjects: Slang; Title pages; Neuritis
Rights: cc-by, (c) Veiner, Marcell et al., 2026
License: https://creativecommons.org/licenses/by/4.0/

Abstract: Following their success in natural language processing and protein biology, pretrained large language models have started appearing in genomics in large numbers. These genomic language models (gLMs), trained on diverse DNA and RNA sequences, promise improved performance on a variety of downstream prediction and understanding tasks. In this review, we trace the rapid evolution of gLMs, analyze current trends, and offer an overview of their application in genomic research. We investigate each gLM component in detail, from training data curation to model architecture, and highlight the present trend of increasing model complexity. We review major benchmarking efforts, which suggest that no single model dominates and that task-specific design and pretraining data often outweigh general model scale or architecture. In addition, we discuss the requirements for making gLMs practically useful in genomic research. While several applications, ranging from genome annotation to DNA sequence generation, showcase the potential of gLMs, their use also highlights gaps and pitfalls that remain unresolved. This guide aims to equip researchers with a grounded understanding of gLM capabilities, limitations, and best practices for their effective use in genomics.