Take a dive into our technology and see our geospatial processing platform in action.
It allows you to interact with and visualize any pixel from our 20-petabyte archive at sub-second latency. Here we're seeing normal red, green, blue visual data from the Sentinel-2 constellation over Hong Kong, with clouds and other features visible in the image.
All of this is rendered on the fly. We can also, in the same interface, look at things like synthetic aperture radar data. Here we're looking at polarization fractions, and you can see that very bright, large metal objects like ships produce very strong radar returns.
Again, all of this is coming live from the platform; none of these layers are static or pre-configured. If I go in and modify, for example, the Sentinel-2 layer, I can change the bands from red, green, blue to shortwave infrared and near infrared, and when I do that the image repopulates.
It's all streaming back on the fly from our commercial cloud platform, which gives you a sense of the scale and speed of pulling data out of the system. Next I want to show what you can do with it. As I mentioned, our team of applied scientists often works very closely with our customers.
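To make the band swapping concrete, here is a minimal sketch of how a false-color composite could be assembled once band arrays are in hand. The `composite` function and band names are hypothetical, not the platform's API; it only shows the stack-and-stretch step that any such on-the-fly renderer performs.

```python
import numpy as np

def composite(bands, order=("swir1", "nir", "red")):
    """Stack named band arrays into an RGB-style composite.

    `bands` maps band name -> 2-D reflectance array. Any band order
    (e.g. SWIR/NIR/red instead of red/green/blue) can be fed through
    the same rendering path.
    """
    stack = np.stack([bands[name] for name in order], axis=-1)
    # Percentile stretch so the composite fills the display range.
    lo, hi = np.percentile(stack, (2, 98))
    return np.clip((stack - lo) / (hi - lo), 0.0, 1.0)

# Three fake 2x2 bands, just to exercise the function.
bands = {name: np.random.rand(2, 2) for name in ("swir1", "nir", "red")}
img = composite(bands)
```

Swapping the `order` tuple is all it takes to repopulate the view with a different band combination.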
I'm going to show a couple of examples focused around the Defender Pacific exercise: a TEL (transporter erector launcher) detector, an airplane detector, and a change detection model. If I click here, it pulls imagery back live from the platform.
This is a three-band, 1.5-meter image from Airbus, and what you're seeing is a model that was run to detect airplanes. The model was trained on half-meter resolution data and then applied to 1.5-meter data, so we miss a few detections here and there. Training on the same kind of imagery would perform better; we did it this way due to data availability.
If I zoom out and look at our TEL detector model, it pulls back Maxar panchromatic imagery. I find it useful to hit our auto-enhance here, which stretches the image better. As you can see, all of the TELs in here were detected. I should note that we ran a segmentation model, so each individual TEL had its own detection.
We then ran a clustering algorithm that groups them, because visualizing each of those small boxes individually would be difficult to read. The airplane model was developed out of Air Force Research Laboratory work, and the TEL detector was in service of a contract at USINDOPACOM through Alion and then Huntington Ingalls. Next, I'm going to show generic change detection.
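The grouping step described above can be sketched as a simple single-link clustering of detection centers. The function name and radius are illustrative, not the actual pipeline:

```python
def cluster_detections(centers, radius=100.0):
    """Greedy single-link clustering of detection centers (x, y).

    Merges any detection that falls within `radius` of an existing
    cluster member, so a dense field of small boxes renders as a few
    readable cluster outlines instead of hundreds of tiny ones.
    """
    clusters = []
    for x, y in centers:
        placed = False
        for members in clusters:
            if any((x - mx) ** 2 + (y - my) ** 2 <= radius ** 2
                   for mx, my in members):
                members.append((x, y))
                placed = True
                break
        if not placed:
            clusters.append([(x, y)])
    return clusters

# Two tight groups of detections far apart collapse to two clusters.
groups = cluster_detections([(0, 0), (10, 5), (500, 500), (505, 498)],
                            radius=50)
```

A greedy pass like this is order-dependent; a production system would more likely use something like DBSCAN, but the visualization effect is the same.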
I should emphasize the word generic: this is an algorithm that looks for any type of change over a given area. This is the Spratly Islands in the South China Sea. You could run a simple change detection algorithm that compares 2016 to 2021 to see what's changed, and that's fine, but the change detector shown in these blue colors works differently.
We ran it in a time-series mode: you start in 2016 and identify when each change happens, so the color represents when that change occurred. You can see some structures being built up in the bottom right here, and some up here as well.
This is, again, generic change, so it's also picking things up in the ocean; you would want to combine it with ancillary datasets or area filters. It provides a really unique tool to focus your eyes on what changed so that you can dive deeper into the data. Next, I want to show some recent work we did while we were out here.
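The time-series mode described above boils down to recording, per pixel, the first epoch at which the value departs from the baseline, then coloring by that epoch rather than by a binary changed/unchanged mask. A minimal NumPy sketch (the threshold and array shapes are illustrative):

```python
import numpy as np

def change_date(stack, threshold=0.2):
    """Per pixel, index of the first epoch that departs from the
    first (baseline) epoch by more than `threshold`; -1 if no change.

    `stack` is (time, rows, cols). Coloring the returned index instead
    of a plain changed/unchanged mask shows *when* each pixel changed.
    """
    diff = np.abs(stack - stack[0]) > threshold  # (time, rows, cols)
    changed = diff.any(axis=0)
    first = diff.argmax(axis=0)                  # first True along time
    return np.where(changed, first, -1)

# 3 epochs over a 1x2 scene: the right pixel jumps at epoch 2,
# the left pixel never changes.
stack = np.array([[[0.1, 0.1]],
                  [[0.1, 0.1]],
                  [[0.1, 0.9]]])
when = change_date(stack)
```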
You might recall a press release about a month ago, when a hundred new missile silos were found in the western desert of China. They were found in relatively high-resolution imagery by the Middlebury Institute of International Studies using Planet data.
We then tried to understand whether any datasets we have open access to would let us see the same evidence of those silos, and it turns out there is one. What you're seeing here is interferometric SAR data: we're looking at the coherence between two observations spaced about a week apart.
The dark areas are where coherence is low, and you see this network of what look to be roads. We believe this is due to trucks driving along those roads, displacing the dirt and changing the phase coherence of the signal that returns to the SAR satellite. It shows up clear as day.
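The coherence measure behind those dark roads is standard InSAR math: the normalized complex cross-correlation of two co-registered acquisitions. A toy NumPy version (a real processor would compute this in a small moving window over SLC data, not over a whole patch):

```python
import numpy as np

def coherence(s1, s2):
    """Coherence magnitude between two co-registered complex SAR patches.

    gamma = |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2).
    Identical phase between passes -> gamma near 1; disturbed ground
    (e.g. trucks churning dirt between passes) randomizes the phase ->
    gamma near 0, which renders as the dark pixels in the demo.
    """
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / den)

rng = np.random.default_rng(0)
stable = np.exp(1j * rng.uniform(0, 2 * np.pi, 256))    # unchanged ground
g_stable = coherence(stable, stable)                    # phase preserved
disturbed = np.exp(1j * rng.uniform(0, 2 * np.pi, 256)) # unrelated phase
g_lost = coherence(stable, disturbed)                   # phase scrambled
```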
Because we had such easy, fast access to all of this data, we then used the Descartes Labs platform and found a potential second site outside of Hami. Here's the first site on the bottom right and the second site on the top left; they look very visually similar.
That's a first indication that it might be a site as well. The real smoking gun, in our view, is the time series: all of this was built up in the last six months, and the two sites, the first one reported and the second one, were roughly coincident in time.
So even without access to that high-resolution imagery at the time, I'd still make the claim that the second site is very likely another missile silo site. We briefed this here back on July 21st, and a few days later it was reported in the New York Times.
This is a pickup on fas.org on the 26th, where the same group looked and also found these sites using that higher-resolution imagery. We think that's pretty interesting and exciting. The next step would be to take some of that imagery and build a model that goes out and detects all the sites we don't know about yet, including sites built in the past.
And when we start to see the initial indications of a new site being built, we can trigger an alert that says: this is an area of interest where something similar may be happening.
The last thing I'll show is our visual search, a combination of a deep neural network and large-scale geospatial satellite imagery. A few years ago (you can see this at search.descarteslabs.com) we ran a convolutional neural network over one-meter NAIP imagery across the United States.
That results in about a billion of these small tiles; as I move around, each orange square is one of those tiles. When you click on a tile, the system goes out and finds all of the visually similar tiles. We did not build a runway detector: we're taking an abstract feature space generated by that network, running it across the whole United States, and building infrastructure that lets us search very quickly across the country. You can see all the dots in the top left; it finds similar objects, and this one even jumped over to the Northeast. If we look at other features like golf courses, we can click on a golf course and it's very good at finding golf courses.
Again, this is not trained on anything in particular. If we look at football fields, it will find all of the football fields, or areas with that really intense red and green combination. Now think about combining this technology with the InSAR data we just showed, which was capable of seeing structural change out in the desert: you could start to build the same kinds of tools to find all of the objects relevant to your application.
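The search mechanics described above reduce to a nearest-neighbor lookup in an embedding space. A sketch with stand-in vectors (a real system would use per-tile CNN activations and an approximate-nearest-neighbor index to cover a billion tiles):

```python
import numpy as np

def most_similar(query_idx, embeddings, k=3):
    """Rank tiles by cosine similarity to a query tile's embedding.

    No runway or golf-course detector is involved: similarity is
    measured in an abstract feature space. The vectors here are
    stand-ins; a real system would use penultimate-layer CNN
    activations computed per tile.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ e[query_idx]
    order = np.argsort(-sims)
    return [int(i) for i in order if i != query_idx][:k]

# Tiles 0 and 1 share a feature direction; tile 2 is orthogonal,
# so a query on tile 0 should surface tile 1 first.
emb = np.array([[1.0, 0.1],
                [0.9, 0.2],
                [0.0, 1.0]])
hits = most_similar(0, emb, k=1)
```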
Another use case for this tool is building training data for very sparse datasets. Think about the TEL example I showed you: it was quite a feat to build that model, because there are so few known instances of those TELs. If we had processed that same imagery through this tool, we could click on one TEL and potentially find a number of others, building out a training dataset that would let you fine-tune an object detection model like the one I just showed you, which could then find all of them, or find new ones when they come online.
Discover how you can rapidly develop geospatial analytics with Descartes Labs Government.