MLconf Online 2020
November 6, 2020, Online
RarePlanes: Exploring the Value of Synthetic Data from an Overhead Perspective

About the talk

RarePlanes is a unique open-source machine learning dataset from CosmiQ Works and AI.Reverie that incorporates both real and synthetically generated satellite imagery. The RarePlanes dataset specifically focuses on the value of AI.Reverie synthetic data for helping computer vision algorithms automatically detect aircraft and their attributes in satellite imagery. Although other synthetic/real combination datasets exist, RarePlanes is the largest openly available very-high-resolution dataset built to test the value of synthetic data from an overhead perspective. Previous research has shown that synthetic data can reduce the amount of real training data needed and potentially improve performance for many tasks in the computer vision domain. The real portion of the dataset consists of 253 Maxar WorldView-3 satellite scenes spanning 112 locations and 2,142 km^2 with 14,700 hand-annotated aircraft. The accompanying synthetic dataset is generated via AI.Reverie's novel simulation platform and features 50,000 synthetic satellite images with ~630,000 aircraft annotations. Both the real and synthetically generated aircraft feature 10 fine-grained attributes: aircraft length, wingspan, wing shape, wing position, wingspan class, propulsion, number of engines, number of vertical stabilizers, presence of canards, and aircraft role. The talk focuses on the dataset and on various experiments with it, which show the value of synthetic data for the task of detecting and classifying aircraft from an overhead perspective.
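
For readers who want to poke at the labels directly, the annotations are distributed as GeoJSON; below is a minimal sketch of loading them with geopandas. The file path is a placeholder and the attribute column names should be taken from the dataset's user guide, not from this example.

    # Minimal sketch: load a RarePlanes-style GeoJSON annotation file and inspect it.
    # The path below is a placeholder; see the official user guide for the real layout.
    import geopandas as gpd

    annotations = gpd.read_file("path/to/RarePlanes_annotations.geojson")  # hypothetical path
    print(len(annotations))           # number of annotated aircraft in this file
    print(list(annotations.columns))  # attribute fields (wingspan, propulsion, role, ...)
    print(annotations.head())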

About the speaker

Jake Shermeyer
Research Scientist at IQT CosmiQ Works

Jake is a researcher and geographer specializing in geospatial machine learning and computer vision. His research focuses on time series analysis, super-resolution, and object detection with overhead imagery. Jake is also leading SpaceNet 6, a sensor fusion challenge featuring both synthetic aperture radar and electro-optical remote sensing data for foundational mapping.


Transcript

Alright, great. Thank you for the introduction. Can everyone hear me? Yes? Great. So I'm Jake Shermeyer, a research scientist at CosmiQ Works, and RarePlanes is a research study we have been conducting over the past two years; I'm going to walk you through everything that entails. A bit about me: I work for CosmiQ Works, which really sits at the intersection of geospatial data, machine learning, and computer vision, doing mostly applied research projects. Our parent company is In-Q-Tel.

So, on to RarePlanes. The dataset is now the largest openly available dataset built to test the value of synthetic data for machine learning applications from an overhead perspective. This was really a unique collaboration between our lab, CosmiQ Works, and a company called AI.Reverie, which supplied the synthetic data for the dataset. In case you're not familiar with synthetic data for computer vision, AI.Reverie is in the business of generating it.

There are obviously different techniques for doing this; AI.Reverie relies on the Unreal Engine at its core, so it uses a gaming engine, and it is designed to procedurally generate many varieties of synthetic data with annotations already pre-baked in. That is truly important, because annotation is such an expense, and synthetic data can really help speed up your training pipeline and get you valuable data when you need it. The workflow shown here is an example in a traditional ground-level sense, rather than the overhead sense I'm going to walk us through, of how you can apply synthetic data.

In this initial setup, the model was mistaking all of these elephants for sheep or cows. When you add in some synthetic data and retrain your model, you're actually able to detect the elephants with greater accuracy. This really became the foundation for RarePlanes, because we wanted to flip this to the overhead space and investigate the value of synthetic data in a new modality: overhead satellite imagery.

To do this in the geospatial arena, we really wanted a dataset built to examine the value of synthetic data. The real imagery comes from Maxar, a satellite imagery provider, and specifically from their WorldView-3 sensor, which is the highest-resolution sensor commercially available right now. We curated a very large dataset of that imagery with hand annotations of aircraft, and then we built a similar synthetic dataset with AI.Reverie, shown on the bottom two rows here.

You can see the synthetic data is really quite realistic. The nice thing about the simulator is that you can have a variety of weather conditions, so you see things here like clouds and haze, and you can also change the background, setting the planes in snow or on grass, really to improve the diversity of your dataset and increase the generalizability of your models to new locations.

In terms of an outline, I'll give an overview of the dataset, which is openly available so you can download it today; cover how we labeled it; and then walk through our benchmarks and experimental results. The benchmarks really focus on rare object detection, and that is where the "Rare" in RarePlanes comes from: if you have limited training examples, can you boost them with synthetic data, and can that help you improve your results when working with this dataset? Here is a picture of the locations. We ended up with 253 unique satellite images spread across 112 locations in 22 countries.

We really tried to diversify, and we did a stratified random sample across different climate zones to try to maximize geographic diversity, which can be very important for training models that you would like to generalize to new areas. To summarize a bit: the real dataset has almost 15,000 aircraft (about 14,700), each with ten attributes that I'll talk about in a bit, and spans over two thousand square kilometers. The synthetic portion is much larger, featuring 50,000 synthetic images across 15 locations.
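
As a rough illustration of that stratified sampling idea (not the actual scene-selection code behind RarePlanes), candidate locations could be drawn per climate zone roughly as below; the column names and values are made up for the example.

    # Toy sketch of stratified random sampling of candidate locations by climate zone.
    import pandas as pd

    locations = pd.DataFrame({
        "location_id": range(12),
        "climate_zone": ["arid", "temperate", "tropical", "continental"] * 3,
    })

    # Draw an equal number of locations from each climate zone to spread the geography.
    sample = locations.groupby("climate_zone", group_keys=False).sample(n=2, random_state=0)
    print(sample)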

If you're wondering about the dots on the map, the blue ones are the real locations and the red ones are the synthetic, simulated areas. The synthetic portion includes over six hundred thousand aircraft labeled with the same ten attributes as the real dataset, and over 9,000 square kilometers of simulated area. The dataset is hosted via the AWS Open Data program: you can go download it from S3 today, as long as you have an AWS account and the command line configured, all of it for free, and I'll share some more resources on how to do that at the end.
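
As a sketch of what that download looks like in practice, here is an anonymous S3 listing and download with boto3; the bucket name and key prefix are placeholders, so check the RarePlanes user guide for the exact paths.

    # Sketch: list and download files from a public S3 bucket without credentials.
    # Bucket name and prefix are placeholders; use the paths from the official docs.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    resp = s3.list_objects_v2(Bucket="rareplanes", Prefix="real/", MaxKeys=10)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    # Once you know a key, pull it down locally:
    # s3.download_file("rareplanes", "real/metadata.csv", "metadata.csv")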

As I stated previously, we were really trying to maximize geographic diversity, with different types of snow, different backgrounds, and different look angles. If you look at the top right of this slide, you can see a slightly off-nadir look angle. These are just some of the complexities inherent in analyzing satellite imagery, and it can be a real challenge to build models that work well in diverse conditions. That was really the point here: trying to maximize that diversity in the hope that it would aid our ability to detect different types of aircraft from overhead.

We chose a very unique label taxonomy for this dataset. The reason was that we wanted something with lots of different attributes that could also be tied together to figure out what type of aircraft it is that you're labeling. We chose this diamond style because it lets you pull out two very important characteristics that can be used to define an aircraft type: the wingspan and the length of the plane.

If you measure from the second node of the diamond to the final node, that gives you the wingspan; if you measure from the first node to the third node at the back, that gives you the length of your aircraft in actual meters. Beyond those two, we have a number of other characteristics that I'll talk a little bit about, but the important thing is that each of these can be assigned to an aircraft.
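
A minimal sketch of that measurement, assuming the four diamond vertices come ordered nose, wingtip, tail, wingtip and that the coordinates are already projected into meters (both assumptions should be checked against the user guide):

    # Sketch: derive aircraft length and wingspan from a diamond annotation.
    # Assumes vertex order (nose, wingtip, tail, wingtip) and coordinates in meters.
    import math

    def diamond_dimensions(vertices):
        nose, wing_a, tail, wing_b = vertices
        length = math.dist(nose, tail)        # first node to third node
        wingspan = math.dist(wing_a, wing_b)  # second node to final node
        return length, wingspan

    # Toy example: a 30 m long aircraft with a 24 m wingspan.
    print(diamond_dimensions([(0.0, 15.0), (-12.0, 0.0), (0.0, -15.0), (12.0, 0.0)]))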

You can train models on these attributes to pull out different types of aircraft, different wing shapes or wing positions, or the type of propulsion. This slide is a hierarchical overview of our full taxonomy, showing how we ended up labeling everything and the full diversity present in the dataset. It's also important to understand that you can build custom classes with it.

If you wanted to find planes with certain types of engines or certain types of propulsion, you can combine any or all of these attributes into classes that suit your own purposes, which can be considerably more challenging than just finding a single aircraft type. Beyond the diamond annotations, we also provide bounding boxes, of course, for training object detection models; both the bounding boxes and the diamonds are available for the synthetic and the real data.
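
A small sketch of composing such a custom class by filtering on attributes; the column names and values are illustrative stand-ins, not the dataset's exact field names.

    # Sketch: define a custom class such as "twin-engine jet transports" by
    # combining attribute filters. Column names and values are illustrative only.
    import pandas as pd

    annotations = pd.DataFrame({
        "aircraft_id": [1, 2, 3, 4],
        "num_engines": [2, 4, 2, 1],
        "propulsion":  ["jet", "jet", "propeller", "propeller"],
        "role":        ["transport", "transport", "small civil", "small civil"],
    })

    custom_class = annotations[
        (annotations["num_engines"] == 2)
        & (annotations["propulsion"] == "jet")
        & (annotations["role"] == "transport")
    ]
    print(custom_class)  # rows that fall into the custom class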

Now for some of the basic detection tasks. A model trained on this dataset can pull out aircraft role, for example whether planes are large or medium-sized civil transport aircraft. Simple aircraft detection is also possible; we trained for these tasks and performed very well, around 98% for plain aircraft detection. So, impressive results so far. Here is an example from this past summer showing how you can ultimately detect the propulsion type and the number of engines for different types of planes; this is where we combined those two attributes together into joint classes.

The detections of propulsion type and number of engines are color-coded, so check the legend on the right-hand side if you're trying to figure out the color scheme. Looking at some of the results for number of engines and propulsion type, this chart shows the F1 score for each class on the y-axis, and on the x-axis we have basically a description of the aircraft.

You can really see here how performance typically scales with the number of training examples you have. We have a very rare class, three engines with jet propulsion, that only occurs 43 times in the dataset, and as a result we end up with a much lower F1 score; other classes score higher, and we're able to do those very well. This inspired us to investigate the value of synthetic data with modern object detection architectures.
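
For reference, per-class F1 scores like the ones on that chart can be computed with scikit-learn once detections have been matched to ground truth; this is a generic sketch with made-up class labels, not the evaluation code from the study.

    # Sketch: per-class F1 over matched detections, treated as a classification task.
    from sklearn.metrics import f1_score

    labels = ["2_engines_jet", "3_engines_jet", "4_engines_propeller"]
    y_true = ["2_engines_jet", "2_engines_jet", "3_engines_jet", "4_engines_propeller"]
    y_pred = ["2_engines_jet", "3_engines_jet", "3_engines_jet", "4_engines_propeller"]

    per_class_f1 = f1_score(y_true, y_pred, labels=labels, average=None)
    print(dict(zip(labels, per_class_f1)))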

Could we boost and augment our training data with synthetic imagery? We ran some experiments on this. The first training scheme we used was a very traditional transfer learning approach: pre-train on a large set of synthetic examples for about a quarter to a third of the total training time (much longer than that actually starts to harm model performance), then train on the full real dataset. This is a traditional approach for working with synthetic data.

What we found was that it was quite effective. Looking at these performance metrics: the blue bar is training on the full real dataset, the red bar is training on synthetic data only, and the yellow bar is training on just a small subset, about 10% of our real data, plus the full synthetic dataset. You can see that even with just 10% real data, augmenting with synthetic performs nearly as well as the full real dataset, so you can replace much of your real data with synthetic and see very minimal performance decline.
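
A minimal sketch of that two-stage schedule; the fraction, step counts, and function names are illustrative assumptions rather than the exact values used in the experiments.

    # Sketch: pre-train on synthetic imagery for roughly a quarter to a third of the
    # training budget, then switch to the full real dataset for the remainder.
    def training_phase(step, total_steps, synthetic_fraction=0.3):
        """Return which dataset to draw batches from at a given step."""
        return "synthetic" if step < int(total_steps * synthetic_fraction) else "real"

    total_steps = 100_000
    switch_step = int(total_steps * 0.3)
    for step in (0, switch_step - 1, switch_step, total_steps - 1):
        print(step, training_phase(step, total_steps))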

Moving into the rare object detection overview: why is rare object detection important? It is particularly important in the overhead domain, because you can never really guarantee, when a satellite passes over, that your objects of interest will actually be included in that collection. Say we wanted to find Beechcraft Starships: there are presently only about six active in the world, and only a couple of those actually appear in an airworthy status.

If we want to build an object detection model to find these, it's a challenging task, because there are only a handful in the world and building a representative training dataset would be nearly impossible. So can we augment with synthetic data, and does that help us improve our performance for detecting these objects, or really any other type of object? This is just one example, but you could think of this applying to any other domain in the computer vision space where you might have limited training examples.

This is our first performance metric. The y-axis is mean F1 score, and the x-axis is the number of training examples for each object; here we're just looking at real data alone. You can see that when we have ten or fewer training examples, performance sits around 0.05, a very poor result. As we gradually add more examples, performance obviously increases until it really starts to plateau; the multiple classes are aggregated together here.

We then leveraged synthetic data in such a way as to try to boost that performance. This is a somewhat different training scheme: we first train on all of our real data for maximum performance, then we fine-tune on just the synthetic examples of the classes of interest whose performance we want to boost, and then we finally train again on a portion of our real dataset that again features just some of those rare examples. So it is really just a quick bit of extra training.

This technique is valuable because it doesn't require as much data as the full real dataset, but it ends up being quite effective for the task of boosting rare object detection performance; it ends up being a pretty lightweight augmentation.
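
Sketched as a schedule, the three stages described above might be wired up like this; the stage names and the training stub are placeholders for whatever training routine you already have.

    # Sketch: three-stage fine-tuning for boosting rare classes.
    STAGES = [
        ("real_full",           "all real imagery, all classes"),
        ("synthetic_rare_only", "synthetic imagery of just the rare classes of interest"),
        ("real_rare_subset",    "a small slice of real imagery containing those rare classes"),
    ]

    def run_schedule(train_one_stage):
        """train_one_stage is any callable(stage_name) that fine-tunes the current weights."""
        for name, description in STAGES:
            print(f"Fine-tuning on {name}: {description}")
            train_one_stage(name)

    run_schedule(lambda name: None)  # swap the lambda for a real training routine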

This is the same graph from a few slides ago, but now we're testing performance when we add in the synthetic data, and interestingly enough you can see a very strong boost in our object detection metric: performance rises from roughly 0.05 to around 0.35. So there is a sizable boost at 30 or fewer training examples, but it gradually diminishes as training examples become more prevalent. In terms of resources, I do want to point out a few things.

The codebase that we created in coordination with AI.Reverie is hosted on GitHub, and it is really a preprocessing codebase to get you started. Working with geospatial data can be particularly challenging if you're not used to it, so this is a way to bridge that gap: you can modify the GeoJSONs, work with modern computer vision techniques, look at your data, and create custom classes. We have a full user guide on the CosmiQ Works website for RarePlanes, which also links to a paper.

The paper was just accepted at WACV 2021, the Winter Conference on Applications of Computer Vision, and will be published, I believe, in January; there is also a copy on arXiv right now that you can read today. The RarePlanes dataset itself sits in a public S3 bucket if you want to download it today. And these are our two respective organizations: AI.Reverie, which provided all of the synthetic data for the study, and CosmiQ Works.
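
As a flavor of the kind of preprocessing that codebase covers, deriving axis-aligned bounding boxes from polygon or diamond annotations might look like the generic geopandas sketch below; the file path is a placeholder, and the real repository handles this and much more.

    # Sketch: derive axis-aligned bounding boxes from polygon/diamond annotations.
    import geopandas as gpd

    gdf = gpd.read_file("path/to/annotations.geojson")  # hypothetical path
    boxes = gdf.geometry.bounds                         # columns: minx, miny, maxx, maxy
    gdf = gdf.join(boxes)
    print(gdf[["minx", "miny", "maxx", "maxy"]].head())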

With that, I will open it up to questions. Thanks so much for your attention; I appreciate it.

Thank you, Jake. We actually have a coffee break coming up, so this session will reconvene in about 15 minutes, on the quarter hour depending on your time zone. I see there aren't any questions in the chat for Jake, so I'm going to put forth one: since you were using transfer learning to basically leverage either the synthetic data or the real data for the airplane categories, do you think you could repurpose the same technique for other types of objects that one can identify from satellite imagery?

Good question. In short, no, we really haven't tried these techniques for bringing in synthetic data of anything other than aircraft, but we have tried it for numerous different tasks here, whether you're trying to boost generic aircraft detection or trying to identify the role of the aircraft or the number of engines.

Cool, thank you. Absolutely. A round of applause for Jake; I know one of the other speakers earlier today mentioned missing the applause from the audience, and unfortunately I cannot unmute everyone to provide it, but I'm assuming it's happening in their houses. I'll see everyone at the quarter hour.
