Want to learn about the demographics of a particular neighborhood, such as political tendencies, average incomes and buying habits? Then look no further than the cars on the streets, which can serve as effective guides to the identities and behaviors of local residents. What’s more, by combining object-recognition technology with the Street View function in Google Earth and Google Maps, researchers can now use car-related data to gather detailed insights on neighborhoods throughout the country.
Researchers at Stanford University analyzed 50 million images and location data from Google Street View, according to The New York Times. The images were then scanned with object-recognition technology capable of distinguishing vehicles’ makes, models and years.
The Stanford team was then able to correlate the vehicle information with data from other sources. As a result, the researchers could predict factors such as local pollution levels and voting trends, with detail down to individual neighborhoods.
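As a rough illustration of this kind of correlation (not the Stanford team's actual method; every number, variable name and feature below is invented), one could measure how strongly a car-derived feature tracks a neighborhood-level outcome:

```python
# Hypothetical per-neighborhood data: the share of pickup trucks among
# detected vehicles, and the vote share for one party. All figures are
# invented for illustration only.
pickup_share = [0.12, 0.35, 0.08, 0.41, 0.27, 0.19]
vote_share   = [0.44, 0.61, 0.38, 0.67, 0.55, 0.49]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(pickup_share, vote_share)
print(f"correlation: {r:.2f}")
```

A strong correlation found on labeled training areas is what lets a model extrapolate predictions to neighborhoods where only the street imagery is available.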
Such insights could be invaluable to people who need to gather and analyze information about specific areas, such as researchers, marketers and political consultants.
The Stanford project was enabled by the advent of advanced object-recognition technology that can sift through millions of images and recognize specific objects contained in them.
“All of a sudden we can do the same kind of analysis on images that we have been able to do on text,” said Erez Lieberman Aiden, a computer scientist who heads a genomic research center at the Baylor College of Medicine, as reported by The New York Times. Lieberman Aiden served as an advisor on the Stanford project.
From the 50 million Google Street View images, the system identified 22 million cars, which were then sorted into more than 2,600 categories by make and model. This information was combined with location data covering more than 3,000 ZIP codes and 39,000 voting districts.
The researchers then hired hundreds of people to pick out and classify sample images of cars to train the object-recognition algorithm. This was the most challenging part of the project, requiring many hours of manual labeling.
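The workflow described here — humans label a small sample, the algorithm learns from it, and the trained model then classifies everything else — can be sketched with a toy nearest-centroid classifier. The data and category names below are invented, and the actual Stanford system used far more sophisticated deep-learning models operating on raw pixels:

```python
# Toy illustration of the label-then-classify workflow. Each "image" is
# reduced to a 2-D feature vector; real systems learn such features from
# pixels with deep neural networks. All data here is invented.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# Step 1: humans label a small training sample (feature -> car category).
labeled = {
    "sedan":  [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2)],
    "pickup": [(4.0, 5.0), (4.3, 4.7), (3.8, 5.1)],
}

# Step 2: "train" by computing one centroid per category.
centroids = {cat: centroid(pts) for cat, pts in labeled.items()}

def classify(point):
    """Assign an unlabeled feature vector to the nearest category centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda cat: dist2(point, centroids[cat]))

# Step 3: the trained model classifies the remaining images automatically.
print(classify((1.1, 2.1)))   # falls near the sedan cluster
print(classify((4.1, 4.9)))   # falls near the pickup cluster
```

The economics are the same at any scale: the expensive human labeling happens once, on a sample, and the trained model handles the remaining millions of images.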
Once the object-recognition algorithm was trained, it demonstrated remarkable speed and accuracy, classifying all the cars depicted in the 50 million images within two weeks. A human expert, who would need 10 seconds to analyze each image, would take more than 15 years, according to The New York Times.
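The 15-year figure follows from simple arithmetic, which can be checked directly:

```python
# Back-of-the-envelope check of the reported figure: 50 million images
# at 10 seconds each, versus a year of nonstop work.
images = 50_000_000
seconds_per_image = 10
seconds_per_year = 365 * 24 * 3600   # ignoring leap years

years = images * seconds_per_image / seconds_per_year
print(f"{years:.1f} years")  # → 15.9 years
```

That is nearly 16 years of uninterrupted, around-the-clock work, so the real gap versus a human working normal hours would be far larger still.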
Tyler Schulze is vice president, strategy & development at Veritone. He serves as general manager for developer partnerships, cognitive engine ecosystem, and media ingestion for the Veritone platform. Learn more about our platform and join the Veritone developer ecosystem today.