This paper describes the ImageCLEFmed 2020 Concept Detection Task. After first being proposed at ImageCLEF 2017, the medical task is in its 4th edition this year, as the automatic detection of concepts from medical images still remains a challenging task. In 2020, the format remained the same as in 2019, with a single sub-task. The concept detection task is part of the medical tasks, alongside the tuberculosis and visual question answering tasks. Similar to the 2019 edition, the data set focuses on radiology images rather than biomedical images in general, albeit with an increased number of images. The distributed images were extracted from the biomedical open access literature (PubMed Central). The development data consists of 65,753 training and 15,970 validation images. Each image has corresponding Unified Medical Language System (UMLS®) concepts that were extracted from the original article image captions. In this edition, additional imaging acquisition technique labels were included in the distributed data, which were adopted for pre-filtering steps, concept selection, and ensemble algorithms. Most approaches applied for the automatic detection of concepts were based on deep learning architectures. Long short-term memory (LSTM) recurrent neural networks (RNNs), adversarial auto-encoders, convolutional neural network (CNN) image encoders, and transfer learning-based multi-label classification models were adopted. The performances of the submitted models (best score 0.3940) were evaluated using F1-scores computed per image and averaged across all 3,534 test images.
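The evaluation measure mentioned above (a per-image F1-score averaged over the test set) can be illustrated with the minimal Python sketch below. It is an assumption-laden illustration, not the organizers' official scoring script: function names, the dictionary-based input format, and the handling of an empty prediction matched against an empty reference (scored as 1.0) are choices made here for clarity.

```python
def f1_per_image(predicted, reference):
    """F1 between predicted and ground-truth concept sets for one image."""
    predicted, reference = set(predicted), set(reference)
    if not predicted and not reference:
        return 1.0  # assumption: empty vs. empty counts as a perfect match
    tp = len(predicted & reference)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)


def mean_f1(predictions, references):
    """Average the per-image F1 over all test images (keyed by image id)."""
    scores = [f1_per_image(predictions.get(img_id, []), concepts)
              for img_id, concepts in references.items()]
    return sum(scores) / len(scores)


# Example with two images; the second prediction misses its concept entirely.
refs = {"img1": ["C0040405", "C0817096"], "img2": ["C0024485"]}
preds = {"img1": ["C0040405", "C0817096"], "img2": []}
print(mean_f1(preds, refs))  # 0.5
```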