To avoid data duplication and save hard drive space, we provide access to a selection of public datasets frequently used in AI/ML research. The datasets are available read-only under COMMON_DATASETS=/proj/common-datasets.
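In job scripts and code, reference the datasets through this variable rather than hard-coding the path. A minimal Python sketch, assuming COMMON_DATASETS is exported in your environment:

import os
from pathlib import Path

# Resolve the shared dataset root, falling back to the documented path
# if COMMON_DATASETS is not set in the environment.
root = Path(os.environ.get("COMMON_DATASETS", "/proj/common-datasets"))

# List the datasets currently available (the directory is read-only).
for entry in sorted(root.iterdir()):
    print(entry.name)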
Please refer to the List of Common Datasets on Berzelius for information on versioning and licensing.
Users are encouraged to contact us to request corrections, updates, or the addition of new datasets.
AlphaFold needs multiple genetic (sequence) databases to run:
The dataset is available under $COMMON_DATASETS/AlphaFold.
AlphaFold 3 needs multiple genetic (sequence) databases to run:
The dataset is available under $COMMON_DATASETS/AlphaFold3.
Due to Terms of Use limitations, you will need to obtain the model parameters yourself.
Argoverse is a publicly available dataset for autonomous driving research and development. It is widely used for tasks such as perception, prediction, motion forecasting, 3D object detection, and other aspects of self-driving car development.
Please send us an email confirming you agree to the Argoverse Terms of Use. Once you do this, we can grant you access to the dataset on Berzelius.
The dataset is available under $COMMON_DATASETS/Argoverse.
The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
The CIFAR-100 dataset has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class.
The dataset is available under $COMMON_DATASETS/CIFAR.
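For example, the CIFAR data can be loaded directly from the shared path with torchvision. This is a minimal sketch and assumes the directory contains the standard cifar-10-batches-py layout that torchvision expects; with download=False nothing is copied or re-downloaded:

import os
import torchvision

root = os.path.join(os.environ["COMMON_DATASETS"], "CIFAR")
# Load the CIFAR-10 splits straight from the read-only copy.
train_set = torchvision.datasets.CIFAR10(root=root, train=True, download=False)
test_set = torchvision.datasets.CIFAR10(root=root, train=False, download=False)
print(len(train_set), len(test_set))  # expected: 50000 10000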
The COCO (Common Objects in Context) dataset is a large-scale object detection, segmentation, and captioning dataset.
The dataset is available under $COMMON_DATASETS/COCO.
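As an illustration, the annotations can be read with the pycocotools API. The file name below (annotations/instances_val2017.json) is an assumption about the on-disk layout, so check the actual directory first:

import os
from pycocotools.coco import COCO

ann_file = os.path.join(os.environ["COMMON_DATASETS"], "COCO",
                        "annotations", "instances_val2017.json")  # assumed layout
coco = COCO(ann_file)                          # builds an index over the annotations
cat_ids = coco.getCatIds(catNms=["person"])    # look up category ids by name
img_ids = coco.getImgIds(catIds=cat_ids)       # images that contain that category
print(len(img_ids))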
DomainNet is a large, multi-domain dataset used for domain adaptation research in machine learning and computer vision. It is specifically designed to help researchers train models that can generalize across different visual domains.
The dataset is available under $COMMON_DATASETS/DomainNet.
Fashion-MNIST is a dataset of Zalando’s article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes.
The dataset is available under $COMMON_DATASETS/Fashion-MNIST.
ImageNet is a large and widely used dataset in the field of computer vision, particularly for image classification, object detection, and other visual recognition tasks. We provide the datasets for the ImageNet Large-scale Visual Recognition Challenge (ILSVRC) 2012, including:
We also provide the training and validation images in both LMDB and TFRecord formats.
Please open the ImageNet site, find the terms of use (http://image-net.org/download), copy them, replace the needed parts with your name, and send us an email including the terms with your name, thereby confirming that you agree to these terms. Once you do this, we can grant you access to the dataset on Berzelius.
The dataset is available under $COMMON_DATASETS/ImageNet.
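As a sketch, the extracted JPEG directories can be consumed with torchvision's ImageFolder. The train subdirectory name and the per-synset folder layout below are assumptions, so verify the structure under the shared path first:

import os
import torchvision
from torchvision import transforms

root = os.path.join(os.environ["COMMON_DATASETS"], "ImageNet", "train")  # assumed layout
preprocess = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])
# ImageFolder expects one subdirectory per class (the ILSVRC-2012 synset folders).
train_set = torchvision.datasets.ImageFolder(root=root, transform=preprocess)
print(len(train_set.classes))  # expected: 1000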
Imagenette is a subset of 10 easily classified classes from Imagenet (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, parachute).
Please open the ImageNet site, find the terms of use (http://image-net.org/download), copy them, replace the needed parts with your name, and send us an email including the terms with your name, thereby confirming that you agree to these terms. Once you do this, we can grant you access to the dataset on Berzelius.
The dataset is available under $COMMON_DATASETS/Imagenette.
The KITTI dataset is one of the most widely used datasets in autonomous driving research. It was created by the Karlsruhe Institute of Technology (KIT) and the Toyota Technological Institute at Chicago (TTI-C) and is designed for developing and evaluating autonomous vehicle algorithms, particularly for tasks such as 3D object detection, tracking, stereo vision, optical flow, and visual odometry.
The dataset is available under $COMMON_DATASETS/KITTI.
MNIST is a handwritten digit database used for image processing and machine learning algorithms.
Four files are available:
The dataset is available under $COMMON_DATASETS/MNIST.
The nuScenes dataset is a public large-scale dataset for autonomous driving developed by the team at Motional.
Please send us an email confirming you agree to the nuScenes Terms of Use. Once you do this, we can grant you access to the dataset on Berzelius.
The dataset is available under $COMMON_DATASETS/nuScenes.
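Once access is granted, the data can be explored with the nuscenes-devkit. A minimal sketch, assuming the full trainval split is laid out as the devkit expects:

import os
from nuscenes.nuscenes import NuScenes

dataroot = os.path.join(os.environ["COMMON_DATASETS"], "nuScenes")  # assumed layout
nusc = NuScenes(version="v1.0-trainval", dataroot=dataroot, verbose=True)
nusc.list_scenes()          # print a summary of all scenes
print(len(nusc.sample))     # number of annotated keyframes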
In addition to AlphaFold’s genetic databases, OpenFold requires the following datasets to run:
openfold_params
openfold_soloseq_params
mmseqs_dbs
alignment_data
alignment_data/alignment_dbs
pdb_data/data_caches
The dataset is available under $COMMON_DATASETS/OpenFold.
The Places365-Standard contains 1.8 million training images from 365 scene categories, which are used to train the Places365 CNNs. There are 50 images per category in the validation set and 900 images per category in the testing set. We provide the high-resolution images.
Please send us an email including the following terms with your name, thereby confirming that you agree to these terms. Once you do this, we can grant you access to the dataset on Berzelius.
Terms of use: by downloading the image data I, [Your Name], agree to the following terms:
1. I will use the data only for non-commercial research and educational purposes.
2. I will NOT distribute the above images.
3. Massachusetts Institute of Technology makes no representations or warranties regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose.
4. I accept full responsibility for my use of the data and shall defend and indemnify Massachusetts Institute of Technology, including its employees, officers and agents, against any and all claims arising from my use of the data, including but not limited to my use of any copies of copyrighted images that I may create from the data.
The dataset is available under $COMMON_DATASETS/Places.
SMHI IFCB plankton includes three datasets of plankton images manually annotated by phytoplankton experts at the Swedish Meteorological and Hydrological Institute (SMHI).
The dataset is available under $COMMON_DATASETS/SMHI-IFCB-Plankton.
The SYKE-plankton_IFCB_2022 dataset consists of approximately 63,000 images representing 50 different classes of phytoplankton, collected using the Imaging FlowCytobot (IFCB) from various locations in the Baltic Sea. These images were manually annotated by expert taxonomists and are used to develop and evaluate classification methods for phytoplankton recognition.
The dataset is available under $COMMON_DATASETS/SYKE-plankton_IFCB_2022.
The SYKE-plankton_IFCB_Utö_2021 dataset is a collection of approximately 150,000 images of phytoplankton, classified into 50 distinct categories, with an additional set of about 94,000 unclassifiable images. The dataset was collected using an Imaging FlowCytobot (IFCB) at the Utö Atmospheric and Marine Research Station in the Baltic Sea during 2021.
The dataset is available under $COMMON_DATASETS/SYKE-plankton_IFCB_Utö_2021.
The Waymo Open Dataset is a publicly available dataset provided by Waymo, focused on autonomous driving technology. This dataset is designed to advance research and development in the field of autonomous driving by providing high-quality, diverse, and large-scale data collected from Waymo’s fleet of autonomous vehicles.
To get access to the dataset, you need to:
Once you do this, send us an email and we can grant you access to the dataset on Berzelius.
The dataset is available under $COMMON_DATASETS/Waymo.
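After access is granted, the TFRecord segments can be parsed with the waymo-open-dataset package. A minimal sketch; the segment file name below is a placeholder, not an actual file on the system:

import os
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

# Placeholder file name; list the shared directory to find real segment files.
segment = os.path.join(os.environ["COMMON_DATASETS"], "Waymo", "segment-xxxx.tfrecord")
dataset = tf.data.TFRecordDataset(segment, compression_type="")
for data in dataset.take(1):
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(data.numpy()))   # decode one frame protobuf
    print(frame.context.name, len(frame.images))     # scene id and camera count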
WHOI-Plankton is a comprehensive dataset of annotated plankton images developed by researchers at the Woods Hole Oceanographic Institution (WHOI). The dataset contains over 3.5 million images of microscopic marine plankton, categorized into 103 classes. These images are used primarily for developing and evaluating visual recognition models in plankton classification.
The dataset is available under $COMMON_DATASETS/WHOI-Plankton.
The Zenseact Open Dataset (ZOD) is a large multi-modal autonomous driving (AD) dataset, created by researchers at Zenseact. It was collected over a 2-year period in 14 different European countries, using a fleet of vehicles equipped with a full sensor suite. The dataset consists of three subsets: Frames, Sequences, and Drives, designed to encompass both data diversity and support for spatiotemporal learning, sensor fusion, localization, and mapping.
Please send us an email confirming you agree to the Zenseact Open Dataset License. Once you do this, we can grant you access to the dataset on Berzelius.
The dataset is available under $COMMON_DATASETS/Zenseact-Open-Dataset.