Google releases millions of captioned images to help with AI training

Source: zdnet.com

Google just keeps developing new technologies that it can give to the public in the hope of changing their lives. Its latest offering is a dataset of close to nine million links to labeled images, which it hopes will help train computer vision systems.

The dataset is called Open Images, and it provides links to images on the web, each annotated with labels describing the objects that appear in the image. Most of the annotations are machine-populated, but the dataset also includes annotations validated by humans to remove false positives. The Google Research team says the dataset is large enough to enable researchers to train a deep neural network from scratch.

Source: research.googleblog.com
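To give a sense of how a release like this can be used, here is a minimal sketch of filtering the annotations down to human-verified labels. It assumes the annotations ship as a CSV file; the file name and the ImageID, LabelName, and Confidence columns are assumptions made for illustration, not the confirmed release format.

import pandas as pd

# Minimal sketch: keep only human-verified labels from an Open Images
# style annotations file. The file name and the ImageID / LabelName /
# Confidence columns are assumed for illustration, not a confirmed schema.
annotations = pd.read_csv("human_ann.csv")  # hypothetical file name

# Human validation flags false positives; keep confirmed labels only.
verified = annotations[annotations["Confidence"] == 1]

print(verified[["ImageID", "LabelName"]].head())

A filter like this is the kind of step that lets researchers choose between the larger machine-populated set and the smaller, cleaner human-validated subset when training.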

Google has not said exactly why it is releasing the dataset now, but some researchers argue that the advances taking place in computer vision would not be happening without large public datasets for training machine learning networks. Examples include ImageNet, which is said to contain about 14 million images, and Microsoft's COCO dataset for image recognition, segmentation, and captioning.

Without such public resources, automated image captioning and features such as the ability of Google's Allo to reply automatically to shared snapshots would not be readily available. The new dataset is the result of a partnership between Google, Carnegie Mellon University, and Cornell University.

Google also released a YouTube dataset named YouTube-8M, which contains about eight million videos with video-level labels and is meant to accelerate research in video understanding as well as advances in search and discovery. The dataset spans 500,000 hours of video; Google used deep learning to pre-extract features from billions of frames and compressed them to about 1.5 TB so that researchers can work with the data directly.
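As an illustration of what working with pre-extracted features might look like, here is a minimal sketch that reads video-level records with TensorFlow's tf.data API. The file name and the feature names and sizes ("video_id", "labels", a 1024-dimensional "mean_rgb" vector) are assumptions made for illustration, not the confirmed record schema.

import tensorflow as tf

# Minimal sketch: parse video-level YouTube-8M style records.
# The file name and the feature schema below are assumptions for
# illustration, not the confirmed release format.
feature_spec = {
    "video_id": tf.io.FixedLenFeature([], tf.string),
    "labels": tf.io.VarLenFeature(tf.int64),
    "mean_rgb": tf.io.FixedLenFeature([1024], tf.float32),
}

dataset = tf.data.TFRecordDataset("train-00000.tfrecord")  # hypothetical path
for raw_record in dataset.take(1):
    example = tf.io.parse_single_example(raw_record, feature_spec)
    print(example["video_id"].numpy(), example["mean_rgb"].shape)

Because the features arrive already extracted and compressed, a researcher only needs to stream records like these rather than decode and process the raw video frames.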

Google is trying to help researchers analyze this data even if they do not have the big-data infrastructure and high-powered computers needed to process it themselves.