A data scientist uses image datasets – collections of images integral to computer vision research and applications – to train image classification models to recognize patterns and objects, and to build algorithms that perform complex tasks such as object recognition and facial recognition.
This article explores the most popular open image datasets for computer vision, organized by type and task.
A data scientist teaches a machine to “see” and interpret images much as humans do.
Object-level annotations, created with bounding boxes or polygons, give object detection algorithms the means to locate (localize) and label objects in an image. Image data labeling provides the context a machine needs to learn.
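As a concrete illustration, here is a minimal sketch of what an object-level annotation might look like in code. The file name, labels, and coordinates are made up for the example; the boxes use the common [x_min, y_min, width, height] convention, which many tools convert to corner coordinates before drawing or computing overlaps.

```python
# A hypothetical object-level annotation: one image with two labeled boxes.
# Coordinates follow the [x_min, y_min, width, height] convention.
annotation = {
    "image": "street_scene_001.jpg",
    "objects": [
        {"label": "car",    "bbox": [34, 120, 200, 90]},
        {"label": "person", "bbox": [310, 95, 60, 160]},
    ],
}

def to_corners(bbox):
    """Convert [x_min, y_min, width, height] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

for obj in annotation["objects"]:
    print(obj["label"], to_corners(obj["bbox"]))
```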
Popular Open-Source Image Datasets
Open-source, free image datasets are vital for computer vision researchers and practitioners worldwide. These annotated datasets, each with its own characteristics, challenges, and applications, are used to benchmark new algorithms and models. Some well-known open-source image datasets released under Creative Commons licenses include:
CIFAR-10, a widely used image classification benchmark, has trained many state-of-the-art image classification models. The dataset consists of 60,000 32×32 color images in 10 classes, with 6,000 images per class, split into 50,000 training images (five training batches) and 10,000 test images (one test batch), each image carrying a class label.
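The official Python version of CIFAR-10 ships its batches as pickled dictionaries whose b'data' entry holds the raw pixel rows and whose b'labels' entry holds the class indices. A minimal loading sketch, assuming that documented layout (the reshaping to channels × height × width is one common choice, not the only one):

```python
import pickle

import numpy as np

def load_cifar_batch(path):
    """Load one CIFAR-10 batch file (the official Python pickle format).

    Each batch is a pickled dict with b'data' (uint8, one 3072-value row
    per image, stored as R, G, B planes) and b'labels' (a list of ints).
    """
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    images = batch[b"data"].reshape(-1, 3, 32, 32)  # N x channels x 32 x 32
    labels = batch[b"labels"]
    return images, labels
```

Calling `load_cifar_batch("data_batch_1")` on an extracted copy of the dataset would then yield the first training batch as a NumPy array plus its label list.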
The MNIST dataset contains 70,000 28×28-pixel grayscale images – 60,000 for training and 10,000 for testing – each representing a single handwritten digit from 0 to 9. MNIST has been used extensively for training and testing classification algorithms and often serves as a baseline for evaluating new machine learning models.
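MNIST is distributed in the simple IDX binary format: a big-endian header (magic number, image count, rows, columns) followed by one unsigned byte per pixel. A minimal parsing sketch, assuming an already-decompressed image file:

```python
import struct

def read_idx_images(raw):
    """Parse MNIST's IDX image format from raw (decompressed) bytes.

    Header: magic number 2051, image count, rows, cols as big-endian
    uint32 values, followed by one unsigned byte per pixel.
    """
    magic, count, rows, cols = struct.unpack(">IIII", raw[:16])
    if magic != 2051:
        raise ValueError("not an IDX image file")
    pixels = raw[16:]
    size = rows * cols
    # Return one bytes object of length rows*cols per image.
    return [pixels[i * size:(i + 1) * size] for i in range(count)]
```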
WordNet is a large digital database of English synsets – sets of cognitive synonyms – each expressing a distinct meaning.
The Street View House Numbers (SVHN) dataset, used for object detection and digit recognition, consists of over 600,000 RGB images of printed digits cropped from house and street numbers, including roughly 530,000 additional, somewhat easier samples provided as an extra training set alongside the standard training and test batches. The images, acquired from Google Street View, vary in lighting, scale, orientation, and background clutter. SVHN is often used as a baseline for digit recognition and localization tasks.
The PASCAL Visual Object Classes (VOC) 2012 dataset consists of images annotated with bounding boxes identifying object categories such as people, cars, and animals. Its more than 11,000 images span 20 object classes, labeled for the presence or absence of each class, and are divided into a training set, a validation set, and a test set reserved for private evaluation. PASCAL VOC has also been used for semantic segmentation and object category recognition.
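PASCAL VOC stores its bounding-box labels as one XML file per image, with each `<object>` element holding a class `<name>` and a `<bndbox>` with pixel corner coordinates. A minimal reading sketch, assuming integer coordinates as in the standard VOC files:

```python
import xml.etree.ElementTree as ET

def parse_voc(xml_text):
    """Extract (class name, (xmin, ymin, xmax, ymax)) pairs from a
    PASCAL VOC-style XML annotation string."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        coords = tuple(
            int(box.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax")
        )
        objects.append((name, coords))
    return objects
```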
Image annotation services produce the training data that machines learn from.
General Image Datasets
General image datasets contain many images, often spanning multiple categories and topics. They typically train machines for image recognition, classification, and segmentation. Popular general image datasets include ImageNet, OpenImages, and Flickr30k.
ImageNet, a widely used classification dataset organized according to the WordNet hierarchy, has over 14 million images across more than 20,000 categories. This detailed visual knowledge base has trained some of the most advanced image recognition algorithms, leading to many seminal breakthroughs in computer vision research.
OpenImages, another large image dataset, contains over 9 million images, many with 2D bounding-box annotations for object detection, segmentation, visual relationships, and classification. Released under a Creative Commons Attribution license and split into training, validation, and test sets, it includes annotations for over 6,000 object categories, making it a valuable resource for anyone developing complex computer vision models.
The Flickr30k image captioning dataset consists of 31,000 images paired with sentence-level caption annotations, and is often used for tasks combining natural language processing and computer vision, such as image captioning and visual question answering.
Large-scale image datasets, though valuable, pose several challenges. Their sheer size makes efficient storage and processing difficult, and their annotations can be complex, requiring specialized tools and significant computational resources. Balancing the number of examples across image categories is also hard. Together, these factors make them less practical for low-budget projects.
Specialized Image Datasets
Specialized image datasets cater to specific computer vision tasks, like facial recognition or object detection, and improve accuracy by providing focused training data. Some examples:
The CelebFaces Attributes dataset (CelebA), a large-scale face dataset, has over 200,000 celebrity images, each with 40 binary annotations covering facial attributes such as hair color and expression. Commonly used in facial recognition research, it trains deep learning models for face detection, facial landmark detection, and facial attribute prediction.
The Labeled Faces in the Wild (LFW) dataset contains over 13,000 web-collected images of 5,749 people, each face labeled with the identity of the person in the image. It trains deep-learning models for face verification and identification.
The COCO (Common Objects in Context) dataset, a large-scale object detection, segmentation, and captioning dataset, is widely used in image segmentation research. It contains over 330,000 images with more than 2.5 million labeled object instances across 80 object categories, and it trains state-of-the-art models for object recognition and instance segmentation.
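COCO distributes its labels as JSON files with top-level "images", "annotations", and "categories" lists, where each annotation references a category by ID. A minimal sketch of tallying labeled instances per category from such a file (the counting helper itself is just an illustration, not part of any COCO tooling):

```python
import json

def count_instances_per_category(coco_json_text):
    """Count labeled object instances per category name in a
    COCO-style annotation JSON string."""
    coco = json.loads(coco_json_text)
    id_to_name = {cat["id"]: cat["name"] for cat in coco["categories"]}
    counts = {}
    for ann in coco["annotations"]:
        name = id_to_name[ann["category_id"]]
        counts[name] = counts.get(name, 0) + 1
    return counts
```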
The KITTI image dataset specializes in autonomous driving research. It contains stereo video sequences of urban traffic scenes with precise 3D camera and Lidar calibration data for each scene. It is used to train models for tasks such as 3D object detection, semantic segmentation (which may use polygon annotation techniques), and monocular depth estimation.
Specialized image datasets, however, have limitations: they generalize poorly to other applications, and their task-specific requirements make them difficult to create and annotate.
Medical Image Datasets
Medical image datasets, used extensively in healthcare research, can potentially improve healthcare outcomes. These datasets contain images, and sometimes video sequences, from modalities such as X-ray, CT, and MRI. They train machines to assist doctors in diagnosing medical conditions.
MURA (Musculoskeletal Radiographs) is a popular medical image dataset. With over 40,000 bone X-rays labeled by radiologists, it is used to develop machine-learning models that diagnose musculoskeletal conditions such as fractures and arthritis.
The NIH Chest X-ray dataset contains over 100,000 chest X-rays labeled with 14 text-mined disease diagnoses. It is used to develop machine-learning models that assist doctors in diagnosing lung diseases such as pneumonia and tuberculosis.
Medical image datasets are crucial for healthcare research aimed at improving patient outcomes. They provide detailed, annotated images of organs, tissues, and structures, helping healthcare professionals make more accurate diagnoses and develop effective treatment plans.
NLP Image Datasets
Natural Language Processing (NLP) datasets are typically used for language translation and sentiment analysis. However, some of them also include images. These dataset images are valuable for training models to understand the relationship between language and visual information.
Visual Genome is a popular dataset connecting language and vision. Its more than 100,000 images, each annotated with its objects and their attributes and relationships, are used to develop machine learning models that automatically generate image captions and answer questions about visual content.
The Flickr30k dataset contains 31,000 images with captions crowdsourced from Amazon Mechanical Turk. It is used to train models that generate image captions automatically.
NLP datasets with images have the potential to revolutionize how machines understand and process visual information.
Miscellaneous Image Datasets
Miscellaneous image datasets serve research and development in specialized fields, offering unique challenges and opportunities for innovation in areas such as art, geography, and transportation. Sketchy is one example: with over 125,000 sketches of animals, furniture, and vehicles, it trains machine-learning models to recognize and classify sketches.
The Mapillary Vistas dataset, a large-scale street-level dataset, contains over 25,000 street-level images annotated across 66 object categories (124 in the research edition), of which 37 (70 in the research edition) carry detailed, instance-specific human annotations describing the objects depicted in each image. The dataset helps develop machines for autonomous driving and urban planning; autonomous driving often also uses 3D cuboid annotation to aid navigation in the real world.
Conclusion
Open-source image datasets are crucial for advancing computer vision. The availability of free, high-quality datasets (model performance depends heavily on image quantity and quality) allows more researchers and developers to enter the field, driving innovation and progress. As technology evolves, the need for diverse, specialized image datasets will only grow. It is therefore essential to continue sharing and collaborating on open-source image datasets to ensure progress in computer vision.
FAQs
What is the most famous computer vision dataset?
The most famous computer vision dataset is ImageNet, organized under the WordNet hierarchy.
What is the most used image dataset?
Arguably the most used image dataset is COCO (Common Objects in Context).
What is a good image dataset?
A good image dataset contains diverse, accurately annotated images in sufficient quantity for machines to learn from.
What is the alternative to the MNIST image dataset?
A common alternative to the MNIST dataset is Fashion-MNIST, which comprises 60,000 28×28 grayscale training images and a test set of 10,000 images, depicting clothing items across 10 fashion categories instead of handwritten digits.
Why is MNIST so popular?
MNIST is popular because it is a relatively simple image dataset successfully used to benchmark the performance of new machine learning algorithms in image recognition. It has also been used as a standard dataset for teaching machine learning concepts due to its simplicity and accessibility.
How is an image dataset stored?
Image datasets are typically stored as a collection of image files, each representing an individual image. The files, stored in formats such as JPEG, PNG, or BMP, may be organized into folders or directories based on criteria such as image category or label. In addition to the image files, metadata such as image size, color space, and label information may be stored in a separate file or database for efficient querying and analysis.
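To make the folder-per-label layout concrete, here is a minimal indexing sketch. It assumes a hypothetical layout like `root/cat/001.jpg`, where the parent folder name serves as the label; real pipelines usually layer shuffling, splitting, and lazy loading on top of such an index.

```python
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}

def index_dataset(root):
    """Build (path, label) pairs from a folder-per-class layout,
    e.g. root/cat/001.jpg -> (Path(...), "cat")."""
    samples = []
    for path in sorted(Path(root).rglob("*")):
        if path.suffix.lower() in IMAGE_EXTENSIONS:
            samples.append((path, path.parent.name))  # folder name is the label
    return samples
```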
How do I read an image from an image dataset?
You can use an image processing library such as OpenCV or Pillow in Python to read an image from a dataset. Typically, you would load an image file using a function like cv2.imread() or Image.open() and then perform any necessary preprocessing steps, such as resizing or normalization, before using the image for further analysis or model training.
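A minimal sketch of that workflow using Pillow, assuming the file is a standard RGB-convertible image; the 224×224 target size and the [0, 1] scaling are just common conventions, not requirements:

```python
from PIL import Image

def load_image(source, size=(224, 224)):
    """Open an image (file path or file-like object), convert to RGB,
    and resize to a fixed target size."""
    img = Image.open(source).convert("RGB")
    return img.resize(size)

def to_normalized_pixels(img):
    """Flatten an RGB image into pixel values scaled to [0, 1]."""
    return [value / 255.0 for pixel in img.getdata() for value in pixel]
```

With OpenCV the equivalent calls would be `cv2.imread()` followed by `cv2.resize()`, with the caveat that OpenCV loads channels in BGR order rather than RGB.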