London, UK -- (ReleaseWire) -- 08/10/2021 -- Chatterbox Labs are pleased to announce the availability of the eighth pillar in the AI Model Insights platform: Privacy.
The patented AI Model Insights platform sits within an Enterprise AI governance process (sometimes referred to as an Ethical AI, Trustworthy AI or Responsible AI process) and gives critical insights into any AI model. Privacy complements the seven existing pillars:
- Explain - Reverse engineers the AI model to surface the reasons behind its decisions
- Fairness - Assesses both the data and the AI model for unwanted bias, producing disparity metrics with respect to configurable sensitive attributes
- Vulnerabilities - Automatically profiles and detects exploitable weaknesses in the live AI model
- Testing - Benchmarks and assesses quality of service within AI models
- Actions - Assesses how the AI model recommends changing data to reach a goal or target
- Trace - Highlights drift and decay of an AI model
- Imitation - Demonstrates the risk of the AI model being imitated and its business logic compromised
Privacy
Chatterbox Labs' privacy pillar aims to demonstrate the risks associated with the re-identification of people within datasets. Datasets are shared within organisations for many reasons and are often stored for long periods of time. Data leaks may happen many years after a dataset was originally created or shared, and can go unnoticed for just as long.
It is therefore important to measure the privacy risks associated with these datasets, and also to verify that any de-identification (or privacy-preserving) techniques actually reduce those risks.
The metrics that Chatterbox Labs generate are built on the premise that, in the right context, individuals may be re-identified (with a certain level of probability) even after privacy-preserving techniques are applied.
The metrics reported are the Informed Private Information (IPI) Risk, the Uninformed Private Information (UPI) Risk, the List Generation Risk and the k-Anonymity levels. Because these metrics are computed on an organisation's own datasets, they can be produced both for existing AI projects that already exploit datasets for AI and for new AI projects that are still curating data.
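To illustrate one of these measures, the sketch below shows how a k-anonymity level can be computed for a tabular dataset using pandas. This is an illustrative example only, not Chatterbox Labs' implementation; the column names treated as quasi-identifiers are hypothetical.

```python
# Minimal sketch: computing the k-anonymity level of a tabular dataset.
# Illustrative only; not Chatterbox Labs' implementation. The quasi-identifier
# columns used below are hypothetical.
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Return k, the size of the smallest group of rows sharing the same
    combination of quasi-identifier values. A higher k means any single
    record is harder to single out, and therefore harder to re-identify."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return int(group_sizes.min())

if __name__ == "__main__":
    records = pd.DataFrame({
        "age_band":  ["30-39", "30-39", "40-49", "40-49", "40-49"],
        "post_code": ["SW1",   "SW1",   "SW1",   "SW1",   "SW1"],
        "diagnosis": ["A",     "B",     "A",     "A",     "B"],
    })
    # Only 'age_band' and 'post_code' are treated as quasi-identifiers here.
    print(k_anonymity(records, ["age_band", "post_code"]))  # -> 2
```

The same kind of check can be re-run after a de-identification step (for example, coarsening age bands or post codes) to verify that k has increased rather than decreased.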
Availability
Chatterbox Labs' AI Model Insights platform is always deployed on a customer's own infrastructure, typically via Docker. Existing customers can pull the update immediately. If you would like to find out more, please get in touch at this link.
About Chatterbox Labs
Chatterbox Labs are an enterprise AI software company focused on explaining, tracing, actioning, scoring bias, testing, detecting weaknesses, measuring privacy and identifying imitation risks in AI models, pre- and post-deployment.