
Machine Learning Audit

Casting light into the dark


Building trustworthy AI 

Why do we care

AI is changing the world, and we believe it must serve people, not compel them. AI should be a trustworthy building block for society. How can we trust what we do not understand?

Moreover, the General Data Protection Regulation (GDPR), adopted in April 2016, gives EU citizens a right to an explanation of algorithmic decisions. For our society to embrace the AI revolution, we need control and mastery over AI.

This is why, at Disaitek, we are building the expertise to understand AI systems in action, explain their decisions, analyse their biases and test their robustness in adversarial settings.

Roots of an artificial disaster

FAIRNESS

AI can learn unwanted biases while solving a problem. In particular, when the area of application touches people's lives (medical, financial, legal ...), AI can pick up biases that are inappropriate to our society's standards. To be sure that an AI does not discriminate on unfair characteristics, its internal functioning must be studied. We develop techniques that reveal the basis on which an AI makes its predictions.
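To make this concrete, here is a minimal sketch of one such inspection technique, permutation feature importance, using scikit-learn; the dataset and model below are illustrative stand-ins, not our audit tooling.

    # Permutation feature importance: shuffle one feature at a time and
    # measure how much the model's test score drops. Features whose
    # shuffling hurts the most are the ones the model relies on.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    # Print the five features the model depends on most.
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")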

SECURITY
ROBUSTNESS

AI is typically not designed with defences against security threats or against inputs outside the learned distribution. We test AI in an adversarial setting to assess its robustness and its resistance to security threats.
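As an illustration of what such a test can look like, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch; model, images and labels are assumed placeholders, and inputs are assumed to lie in [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Perturb x in the direction that maximally increases the loss,
        # bounded by epsilon per input dimension.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    # Robust accuracy under attack (model, images, labels assumed to exist):
    # adv = fgsm_attack(model, images, labels)
    # robust_acc = (model(adv).argmax(dim=1) == labels).float().mean()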

CONFIDENTIALITY

When the training data is confidential, the AI itself becomes a vector of information leakage. Several attack methods make it possible to recover training samples and to steal model hyperparameters (intellectual property), compromising both your company's confidentiality and your customers'.
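For intuition, here is a minimal sketch of the simplest such attack, a confidence-threshold membership-inference test; the model and the threshold value are assumptions to be calibrated case by case.

    def membership_inference(model, records, threshold=0.9):
        # Overfit models tend to be unusually confident on the samples
        # they were trained on, so high confidence on a record suggests
        # it was part of the training set.
        confidence = model.predict_proba(records).max(axis=1)
        return confidence > threshold  # True = likely training member

    # Calibrate `threshold` on records known to be inside / outside
    # the training set before trusting the verdicts.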

We designed a comprehensive Map of Threats over AI, which lays out the subjects that need to be mastered before launching an AI.


Deploy your ML with trust and control

We created ML Audit, a consulting service designed to give you the tools to master your AI service. We analyse the training data and the trained model with state-of-the-art research methods to understand the model's decisions, biases and feature importance, and to explore its robustness from an attacker's perspective.

How do we proceed


  • We analyse your dataset and your ML to extract valuable information.

    • Feature correlation & distribution analysis, feature imbalance, feature selection ... (see the sketch after this list)

  • Based on this information, we decide together whether a corrective procedure on the ML or the dataset is needed.

    • Word embedding debiasing, neural network weight debiasing, ML decision explanation, countermeasures against adversarial samples, privacy leakage and data poisoning ...

  • We define with you the measure of success.

    • Balancing the tradeoff between corrective measures & performance.
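As a sketch of that first step, a feature correlation and label-imbalance check with pandas might look like this; the file path and the "label" column name are placeholders.

    import pandas as pd

    df = pd.read_csv("training_data.csv")  # placeholder path

    # Strongly correlated feature pairs are candidates for removal or
    # for closer fairness scrutiny (proxy variables).
    corr = df.corr(numeric_only=True).abs().stack()
    print(corr[(corr > 0.8) & (corr < 1.0)])

    # A skewed label distribution can hide biased behaviour behind a
    # good aggregate accuracy.
    print(df["label"].value_counts(normalize=True))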


When the work is done, you gain control and mastery over your ML, and you can deploy it trusting its decisions.

About ML Security

Disaitek was founded with a single mission: to use AI to bring knowledge, and to bring knowledge about AI.


We are an active player in the domains of AI fairness and bias, security and robustness. We were selected for the European AI Alliance to participate in the design of the European AI ethics guidelines and to help ensure the competitiveness of the European region in the burgeoning field of Artificial Intelligence. We also contribute to several AI open-source projects such as BERT, a large pre-trained model that captures deep information about the syntax and grammar of languages.


If you share our belief in building trustworthy AI and think we can help your business, contact us today. We will be thrilled to contribute to your success.

About

The Team: Where the passion begins

Anthony Graveline

Founder, CEO
Master of Civil Engineering

Passionate about artificial intelligence and computational neuroscience, Anthony has 20 years of experience in consulting and project management. He is in charge of the product roadmap and business development.

Grégory Châtel

Associate, CSO
PhD in Computer Science

Grégory is an active member of the Intel Innovator program. In that capacity, he has spoken at three meetups across Europe and written several blog posts on privacy and security concerns in machine learning.



Contact