✔️Analysis Ready Data at the Speed of Collection
✔️Always On Broad Area Search and Change Detection
✔️On-Demand Persistent, Global, Passive Surveillance
✔️Autonomous Tipping, Cueing, Indicators and Warnings
Access to data is rarely the problem for government agencies; having too much data is. The challenge with too much data is the time it takes to analyze it and develop quick insights, especially when timing is paramount.
Descartes Labs Government (DLG) provides rapid, continuous intelligence, delivering mission-critical insights in minutes rather than days or months. By combining environmental, human, and economic intelligence, we arm defense and intelligence agencies with near-real-time decision intelligence.
We have the technology and expertise to help organizations take on the hardest data-driven problems:
Take a dive into our technology and see our geospatial processing platform in action.
Descartes Labs Government is a subsidiary of Descartes Labs, which was spun out of Los Alamos National Laboratory in 2014. That legacy underpins how Descartes Labs Government became a decision engine for the US government. The company was founded by astrophysicists like our own Dr. Mike Warren, who developed leading-edge, high-performance computing for geospatial data with the ability to operate 100% in the cloud. This decision engine ingests and layers imagery of the entire globe and makes it searchable in milliseconds. Our birthright from Los Alamos, with its history of satellite research and development, gives us the perfect legacy.
The photo you see is of a 1960s Vela satellite and two scientists from Los Alamos National Laboratory, Richard Taschek and Jerry Connor. The Vela series satellites carried Los Alamos-designed and -built sensors for detecting X-rays, gamma rays, neutrons, and the natural background radiation in space. In fact, the first seamless mosaic of Landsat imagery was completed at Los Alamos National Laboratory in 1992. We understand the need for supercomputing, scale, speed, and managing data of all types.
The brief video you are about to see is presented by another of our co-founders from Los Alamos National Laboratory, Dr. Sam Skillman. He will give you a glimpse into this globally scalable, high-speed compute decision engine and its ease of use. Thank you.
Dr. Sam Skillman:
The platform allows you to interact with and visualize any pixel out of our 20-petabyte archive at sub-second latencies. Here we're seeing normal red, green, and blue visual data from the Sentinel-2 constellation over Hong Kong; you can see clouds and other aspects of the image, and all of this is rendered on the fly. In the same interface, we can also look at things like synthetic aperture radar data. Here we're looking at polarization fractions, and you can see that very bright, large metal objects like ships produce very strong signals in radar.
Again, all of this is live, coming from the platform; none of these layers are static or preconfigured. If I modify, for example, the Sentinel-2 layer, I can change the bands from red, green, and blue to shortwave infrared, infrared, and near infrared. When I do that, the image repopulates, again streaming on the fly from our commercial cloud platform. That gives you a sense of the scale and speed of pulling data back from the system. Next, I want to show you what you can do with it.
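The on-the-fly band recombination Dr. Skillman describes, swapping red/green/blue for a shortwave-infrared composite, amounts to restacking bands and restretching them for display. A minimal NumPy sketch (the band names and percentile stretch here are illustrative placeholders, not the platform's API; real Sentinel-2 bands have IDs like B04, B03, B02):

```python
import numpy as np

def composite(bands: dict, order: tuple) -> np.ndarray:
    """Stack the requested bands into a 3-channel display image.

    `bands` maps band names to 2-D float arrays of reflectance;
    `order` names the bands placed in the R, G, B display channels.
    """
    img = np.dstack([bands[name] for name in order])
    # Simple 2nd-98th percentile stretch so the image displays well.
    lo, hi = np.percentile(img, [2, 98])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Synthetic stand-in data: natural color vs. a false-color composite.
rng = np.random.default_rng(0)
names = ("red", "green", "blue", "swir2", "swir1", "nir")
bands = {b: rng.random((64, 64)) for b in names}
natural = composite(bands, ("red", "green", "blue"))
false_color = composite(bands, ("swir2", "swir1", "nir"))
```

The point is that no new layer needs to be precomputed: the same stored bands are simply recombined per request.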
As I mentioned, our team of applied scientists often works really closely with our customers, and I'm going to show a couple of examples here focused on the Defender Pacific exercise: a TEL (transporter erector launcher) detector, an airplane detector, and change detection. If I click here, I pull imagery back live from the platform. This is a three-band, 1.5-meter image from Airbus, and what you're seeing is a model that was run to detect airplanes.
The model was trained on half-meter resolution data and then applied to 1.5-meter imagery, so we miss a few detections here and there; if we had trained on the same kind of imagery, it would perform better. We did that due to data availability in this region. If I zoom out and look at our TEL detector model, it pulls back panchromatic imagery from Maxar. I find it useful to hit our little auto-enhance here, which stretches the image better.
You can see all of the TELs detected in here. I should note that we ran a segmentation model in which each individual TEL had its own detection, then ran a clustering algorithm that groups those detections so they are easy to see; if you visualized each of those small boxes individually, it would be difficult to understand. The airplane model was developed out of Air Force Research Laboratory work, and the TEL detector was in service of a contract at INDOPACOM, through Alion and then Huntington Ingalls. Here I'm going to show generic change detection, and I should emphasize the word generic: this is an algorithm that looks for any type of change over a given area. This is Spratly Island in the South China Sea. You could run a simple change detection algorithm that compares 2016 to 2021 and see what's changed; that's fine. The way we ran this, shown in these blue colors, was in a time-series mode, where you start in 2016 and identify when the change happens.
So the color represents when the change occurred. You can see in the bottom right here that there are some structures being built up, and some up here as well. This is, again, generic change detection, so it's also picking things up in the ocean; you would want to combine it with ancillary datasets or areas of interest to filter that out. It provides a really unique tool to focus your eyes on what changed, so that you can dive deeper into the data. Next, I want to show some recent work we did while we were out here. You might recall a press release about a month ago in which 100 new missile silos were found in the western desert of China.
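The time-series mode described above, which records when each pixel changed rather than just whether it changed, can be sketched in a few lines. This is an illustrative toy, not Descartes Labs' algorithm: a real pipeline would operate on calibrated, co-registered imagery with per-sensor thresholds.

```python
import numpy as np

def first_change_index(stack: np.ndarray, threshold: float) -> np.ndarray:
    """Per pixel, return the index of the first observation whose
    absolute difference from the previous one exceeds `threshold`,
    or -1 where no change was seen.  `stack` has shape (time, rows, cols)."""
    diffs = np.abs(np.diff(stack, axis=0)) > threshold  # (time-1, rows, cols)
    changed = diffs.any(axis=0)
    first = diffs.argmax(axis=0) + 1  # argmax returns the first True slot
    return np.where(changed, first, -1)

# Toy example: a 4-scene stack where one corner "builds up" at t=2.
stack = np.zeros((4, 8, 8))
stack[2:, :4, :4] = 1.0
idx = first_change_index(stack, threshold=0.5)
# idx is 2 in the built-up corner and -1 elsewhere; mapping idx onto a
# color ramp gives the "when did it change" rendering described above.
```

Coloring by `idx` instead of thresholding a single before/after pair is what lets an analyst see construction sequencing at a glance.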
The silos were found in relatively high-resolution Planet imagery by the Middlebury Institute of International Studies. We then tried to understand whether any datasets we have open access to would let us see the same evidence of those silos, and it turns out there is one. What you're seeing here is interferometric SAR data: we're looking at the coherence between two observations spaced about a week apart, and the dark areas are where there's low coherence. You see this network of what look to be roads; we believe this is due to trucks driving along those roads, displacing the dirt and changing the phase coherence of the signal that comes back to the SAR satellite. It shows up clear as day. Because we were looking at this data, and had access to all of it very easily and very quickly, we then used the DL platform and found a potential second site outside of Hami.
Here's the first site on the bottom right and the second site on the top left, and it looks very visually similar; that's the first indication that it might be a site as well. The real smoking gun, in our understanding, is that if you look at the time series, all of this was built up in the last six months, and both sites, the first one reported and the second, were roughly coincident in time. So even without access to high-resolution imagery at the time, I would still make the claim that it's very likely the second site is a second missile silo site. We briefed this at INDOPACOM back on July 21st, and a few days later it was actually reported in the New York Times. This is a pickup on fas.org on July 26th, where that same group looked and also found the site using higher-resolution imagery. We think that's pretty interesting and pretty exciting.
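The coherence signal that exposed the road network can be illustrated with a toy calculation. Interferometric coherence is the normalized correlation of two co-registered complex SAR observations: a surface whose scatterers sit undisturbed between passes stays near 1, while displaced dirt scrambles the phase and drives it toward 0. A minimal NumPy sketch with synthetic data, using a single window over the whole patch rather than the sliding window a real InSAR pipeline would use:

```python
import numpy as np

def coherence(s1: np.ndarray, s2: np.ndarray) -> float:
    """Interferometric coherence of two co-registered complex
    observations over one estimation window."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / den)

rng = np.random.default_rng(1)
amp = rng.uniform(0.5, 1.5, (32, 32))
phase = rng.uniform(0.0, 2.0 * np.pi, (32, 32))
pass1 = amp * np.exp(1j * phase)

# Undisturbed ground: same scatterers, just a constant phase offset.
pass2_stable = pass1 * np.exp(1j * 0.3)
# Disturbed ground (e.g. truck traffic): scatterer phases scrambled.
pass2_disturbed = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (32, 32)))

coh_stable = coherence(pass1, pass2_stable)      # near 1.0
coh_disturbed = coherence(pass1, pass2_disturbed)  # near 0
```

The roads show up dark precisely because they are the `pass2_disturbed` case: nothing changed optically, but the phase memory of the surface was erased.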
The next step would be to take some of that imagery and build a model that goes out and detects all the sites we don't know about yet, finds the sites that were built in the past, and, when we start to see the initial indications of a new site being built, triggers an alert saying this is an area of interest where something similar might be happening again. The last thing I'll show is GeoVisual Search, a combination of deep neural networks and large-scale geospatial satellite imagery. A few years ago, and you can see this at search.descarteslabs.com, we ran a deep convolutional neural network over one-meter NAIP imagery across the United States.
That results in about a billion of these small tiles; as I move around, each orange square is one of those tiles, and when you click on one, the system goes out and finds all of the visually similar tiles. We did not build a runway detector; we're taking an abstract feature space generated by that network, running it across the whole United States, and building the infrastructure that then lets us search very quickly across the whole country. You can see in the top left that it jumped over to the northeast and found similar objects. If we look at other features, like golf courses, we can click on a golf course and it's very good at finding golf courses, even though it's not trained on any particular thing. And if we look at football fields, it will find all the football fields, or areas with that really intense red-green signature.
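The search mechanics just described, embedding every tile once and then answering queries by nearest-neighbor lookup in that feature space, can be sketched as follows. The "embeddings" here are synthetic clusters standing in for CNN features, and a production system over a billion tiles would use an approximate-nearest-neighbor index rather than this brute-force comparison:

```python
import numpy as np

def most_similar(embeddings: np.ndarray, query_idx: int, k: int = 5) -> np.ndarray:
    """Indices of the k tiles whose embeddings are closest to the
    query tile's embedding, by cosine similarity."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ e[query_idx]          # cosine similarity to every tile
    order = np.argsort(-sims)        # most similar first
    return order[order != query_idx][:k]  # drop the query tile itself

# Toy "feature space": two clusters of tiles, e.g. tiles 0-49 look like
# runways and tiles 50-99 look like golf courses.
rng = np.random.default_rng(0)
runways = rng.normal(loc=1.0, scale=0.1, size=(50, 16))
golf = rng.normal(loc=-1.0, scale=0.1, size=(50, 16))
emb = np.vstack([runways, golf])
hits = most_similar(emb, query_idx=0, k=5)  # all hits fall in the runway cluster
```

No "runway class" exists anywhere in this code; similarity in the feature space alone is what groups the tiles, which is why the same index answers queries for golf courses, football fields, or anything else.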
Now think about combining this technology with the InSAR (interferometric SAR) data we just showed, which was capable of seeing structural change in the desert; you could start to build out the same types of tools to find all of the objects relevant to your application. Another useful use case for this tool is building out training data for very sparse datasets. Think about that TEL example I showed you: it was quite a feat to build that model, because there are so few actual known instances of those TELs.
If we had processed all of that same imagery through this tool, we could click on one of those TELs and potentially find a number of others, and start to build out a training dataset that would allow you to fine-tune an object detection model like the one I just showed you, then find all of them, or find new ones when they come online.
Hi, I'm Hector Cevallos, Head of Business Development here at Descartes Labs. For more information on how we can help solve your hardest data challenges, please contact us at firstname.lastname@example.org.
Feel free to connect with us on LinkedIn.