OUR TECHNOLOGY

A unique toolbox gathering the most advanced AI technologies to tackle the specific challenges of customers who need AI at scale, on premises, with the most stringent reliability requirements.

PERFORMANCE

Earthcube leverages proprietary cutting-edge solutions and fully customizable architectures to reach the performance levels required by mission-critical systems. Through a range of proprietary techniques, Earthcube's models achieve exceptionally high detection rates and identification performance.

Never miss anything

Satellite images are a very specific type of data to work with: an object of interest may be represented by only a few pixels, which makes it very hard to detect.

Despite this, high-performing algorithms are critical to meet the requirements of mission-critical systems. Failure is not an option, and the models must reach detection rates that are at least as good as the eye of an analyst, if not better.

Our deep learning team implements cutting-edge methods, such as CapsNet, in our AI framework as soon as they are published. This allows us to further increase the performance of our already high-performing models and deliver game-changing detection rates.

Example of cutting-edge methods implemented to offer state-of-the-art detection and counting of small ships
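For readers curious about what such a method looks like in practice, here is a minimal, illustrative sketch of the "squash" non-linearity at the heart of capsule networks (CapsNet). It is not Earthcube's implementation, just a toy example in PyTorch.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """Capsule 'squash' non-linearity: keeps the vector's orientation
    and maps its length into [0, 1) so it can act as a probability."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

# Toy example: a batch of 8 capsules with 16-dimensional pose vectors
capsules = torch.randn(8, 16)
print(squash(capsules).norm(dim=-1))  # all lengths are now below 1
```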

Automatically identify object types

Detection is critical, but it is identification that creates real value by adding intelligence to the context of the scene. Going beyond the mere detection of objects of interest, Earthcube's technology also classifies objects using a very precise and detailed methodology. This allows Earthcube's AI to automatically determine the type of an object of interest from among hundreds of possibilities.

It is important to evaluate the number of objects found in an image against the number normally expected, which makes counting them a necessity.

Yet it is sometimes hard to count in high-density areas, and detections then need to be accurate down to the pixel.

This is why we use attention blocks: they enable a better separation of the objects and thus increased counting accuracy.

Increase counting accuracy
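Earthcube's exact architecture is not public, but the idea of an attention block can be illustrated with a minimal squeeze-and-excitation style sketch in PyTorch; all names and sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Minimal squeeze-and-excitation style attention block: it
    re-weights feature channels so that densely packed objects are
    easier to separate in the detection head."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights

# Toy example: re-weight a 64-channel feature map
features = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(features).shape)  # torch.Size([2, 64, 32, 32])
```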

 

ON-PREMISE AUTONOMY

Our solutions have been designed to run on client premises where access for manual intervention is not possible.  The continued performance of our AI technology in autonomous environments is therefore critical.

Robustness

Our technology has been built for mission-critical applications. It is therefore crucial that it remains resilient when applied to challenging images or when faced with adversarial attacks.

Ensembling methods have been proven to improve final detection performance by fusing the outputs of individual detectors. They enable more robust detections, regardless of the diversity of the images.
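As a rough illustration of detector fusion (not the proprietary ensembling scheme itself), the toy sketch below pools the boxes and scores of several detectors and removes duplicates with non-maximum suppression.

```python
import torch
from torchvision.ops import nms

def fuse_detections(detections, iou_thresh=0.5):
    """Naive ensembling of several detectors: pool all predicted boxes
    and scores, then suppress duplicates with NMS.
    Each element of `detections` is a (boxes[N, 4], scores[N]) pair."""
    boxes = torch.cat([b for b, _ in detections], dim=0)
    scores = torch.cat([s for _, s in detections], dim=0)
    keep = nms(boxes, scores, iou_thresh)
    return boxes[keep], scores[keep]

# Toy example: fuse the outputs of two hypothetical ship detectors
det_a = (torch.tensor([[10.0, 10.0, 50.0, 50.0]]), torch.tensor([0.9]))
det_b = (torch.tensor([[12.0, 11.0, 51.0, 49.0], [200.0, 200.0, 240.0, 230.0]]),
         torch.tensor([0.8, 0.7]))
fused_boxes, fused_scores = fuse_detections([det_a, det_b])
print(fused_boxes, fused_scores)
```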

Explainability

It is critical to define rules when training the algorithms to detect a certain type of object, since these rules are required to explain the algorithms' output.

With methodologies such as the WEFT system (Wings, Engines, Fuselage, Tail), the definition of a plane is set in stone, which enables a much higher confidence indicator when detecting and classifying aircraft within an image.
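A hypothetical sketch of the idea: per-component scores for Wings, Engines, Fuselage and Tail are fused into an overall aircraft confidence, so every decision can be traced back to the parts that support it. The weights and function names below are illustrative assumptions, not Earthcube's actual rules.

```python
# Illustrative weights only; the real WEFT-based rules are not public.
WEFT_WEIGHTS = {"wings": 0.3, "engines": 0.2, "fuselage": 0.3, "tail": 0.2}

def aircraft_confidence(component_scores):
    """Weighted fusion of per-component scores (all in [0, 1])."""
    return sum(WEFT_WEIGHTS[name] * component_scores.get(name, 0.0)
               for name in WEFT_WEIGHTS)

scores = {"wings": 0.95, "engines": 0.80, "fuselage": 0.90, "tail": 0.85}
print(f"aircraft confidence: {aircraft_confidence(scores):.2f}")
for name, score in scores.items():
    print(f"  {name}: {score:.2f}")  # per-component explanation
```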

Our products are deployed on-premises, with seamless integration into existing third-party systems.
When analysts flag errors, the algorithms must be able to learn from them.

Our proprietary technologies are designed to apply continuous learning techniques. The algorithms can be retrained autonomously on proprietary data, without our intervention.

Continuous learning
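As a toy illustration of such a loop (not Earthcube's actual pipeline), the sketch below appends analyst-corrected samples to the on-premise dataset and fine-tunes a model on it; the model, data and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

def fine_tune(model, dataset, epochs=1, lr=1e-4):
    """Fine-tune a model on the (locally stored) training set."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for features, label in dataset:
            optimizer.zero_grad()
            loss_fn(model(features), label).backward()
            optimizer.step()
    return model

# Toy example: a tiny classifier and a few analyst-corrected samples
model = nn.Linear(8, 1)
dataset = [(torch.randn(8), torch.tensor([1.0]))]
analyst_corrections = [(torch.randn(8), torch.tensor([0.0]))]
dataset.extend(analyst_corrections)   # corrections never leave the premises
fine_tune(model, dataset)
```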

 

FRUGALITY

Our technology is very often used to detect objects that exist in only a very limited number of instances across a large number of images. To overcome this lack of training data, we apply various innovative building blocks within the model development process. Our in-house algorithm production pipeline combines different approaches, each of which contributes to an improvement in performance while minimizing the number of examples needed for training. Together they underpin the frugality of the system and enable our algorithms to learn with fewer labelled examples.

Synthetic data

An efficient way to increase the number of examples available for training is to generate synthetic images containing the objects to detect, integrating the simulated objects in a way that is credible to the algorithms.

Two key methods are currently used for image simulation. The first uses 3D engines to render images of objects. The second uses Generative Adversarial Networks (GANs), neural networks that generate the images used in training.

Example of the insertion of a plane within a satellite image using 3D
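A toy sketch of the insertion step, assuming a pre-rendered object chip and its opacity mask; real pipelines also match sensor resolution, lighting and noise so that the result is credible to the algorithms.

```python
import numpy as np

def composite(background, render, alpha, top, left):
    """Alpha-blend a rendered chip into a background at (top, left)."""
    h, w = render.shape[:2]
    patch = background[top:top + h, left:left + w].astype(float)
    blended = alpha[..., None] * render + (1 - alpha[..., None]) * patch
    out = background.copy()
    out[top:top + h, left:left + w] = blended.astype(background.dtype)
    return out

background = np.zeros((256, 256, 3), dtype=np.uint8)   # satellite tile
render = np.full((32, 32, 3), 200, dtype=np.uint8)     # rendered plane chip
alpha = np.ones((32, 32))                              # chip opacity mask
synthetic = composite(background, render, alpha, top=100, left=120)
print(synthetic.shape)
```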

Style transfer

Data augmentation techniques can create a greater number of training examples when very few real ones are available.

These techniques make use of random transformations applied to real data, thereby creating a broader diversity of examples.
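As a minimal illustration, the sketch below chains a few random transformations with torchvision; the actual transformations used in production are not public.

```python
from PIL import Image
from torchvision import transforms

# Each call to `augment` produces a different variant of the same rare example.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=90),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

image = Image.new("RGB", (128, 128))   # placeholder for a labelled image chip
variant = augment(image)
print(variant.size)
```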

We have developed approaches using even more advanced GAN features to transform the context of a scene. This introduces more diversity into the training data and, in turn, enhances the genericity of the algorithms.
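Conceptually, this works like a domain-translation generator that maps a labelled scene from one context to another (for example, desert to snow) while keeping the objects intact, so the same labels can be reused. The untrained toy network below only illustrates the shape of such a component; it is not Earthcube's model.

```python
import torch
import torch.nn as nn

class ContextTranslator(nn.Module):
    """Toy image-to-image generator: same spatial layout in, new 'style' out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

tile = torch.rand(1, 3, 128, 128)        # labelled scene in its original context
translated = ContextTranslator()(tile)   # same scene, new context (untrained toy)
print(translated.shape)
```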

The process of labelling data is manually intensive and can be very expensive, especially on premises.

Also of note is that not all labelled data brings the same value to the training process. Our proprietary AI framework prioritizes those images that will maximize the training outcome, even for rarely seen objects.
 

Active learning

The red zones correspond to the 'uncertainties' of the algorithm. This image is therefore selected for training, since it contains valuable information.
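A minimal sketch of this uncertainty-driven selection, assuming per-pixel class probabilities from a segmentation-style model: images with the highest average prediction entropy are the ones prioritized for labelling. All names below are illustrative, not Earthcube's framework.

```python
import torch

def prediction_entropy(probs, eps=1e-8):
    """Entropy of per-pixel class probabilities, averaged per image."""
    return -(probs * (probs + eps).log()).sum(dim=1).mean(dim=(1, 2))

def select_for_labelling(probs_per_image, k=2):
    """Return the indices of the k most uncertain images."""
    entropies = prediction_entropy(probs_per_image)
    return entropies.topk(k).indices

# Toy example: softmax maps for 5 images, 3 classes, 64x64 pixels
probs = torch.softmax(torch.randn(5, 3, 64, 64), dim=1)
print(select_for_labelling(probs, k=2))
```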