We believe that good practice in AI and algorithmic business starts with a statement of principles. We offer nine. We can also certify against recognised principle sets or a company's own. The important thing is that these principles are owned at Board level.
We focus on principles that are linked to practice and can be evidenced. At least seven of the principles must be supported by technical tools in order to pass the certification audit. We are not prescriptive about which tools are used; we simply expect to see that they exist and are working.
The principles are shown below, including top-level descriptions. Our delivery team can go into as much detail as necessary, including examining source code, test results, tooling and datasets.

The company understands where AI is making decisions
The company effectively tracks and monitors where it uses AI to interact with its customers. It knows exactly what the AI system is doing, who has access to it and who is responsible for the decisions it makes.
Decisions made by AI are recorded and can be audited
The company keeps a record of the decisions made by its AI systems and what prompted those decisions. This shows exactly what decisions were made about each customer and with what information.
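As an illustration of the kind of technical tooling that could evidence this principle, the sketch below shows a minimal append-only decision log. The class, field names, and hash-chaining scheme are all hypothetical assumptions, not a certification requirement.

```python
import json
import hashlib
from datetime import datetime, timezone

class DecisionLog:
    """Hypothetical append-only record of AI decisions (illustrative only)."""

    def __init__(self):
        self.entries = []

    def record(self, customer_id, model_version, inputs, decision):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "customer_id": customer_id,
            "model_version": model_version,
            "inputs": inputs,        # the information the decision was based on
            "decision": decision,    # what was decided about this customer
        }
        # Chain a hash of the previous entry so tampering is detectable on audit.
        prev = self.entries[-1]["hash"] if self.entries else ""
        entry["hash"] = hashlib.sha256(
            (prev + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

log = DecisionLog()
log.record("cust-42", "credit-model-v3", {"income": 52000}, "approved")
```

A log like this shows, for each customer, what decision was made and with what information, and the hash chain lets an auditor verify the record has not been altered after the fact.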
People are accountable for decisions made by AI
The company must take ownership of any decisions made by its AI systems. It has demonstrated clear responsibility within the organisation for each implementation of AI, and all AI systems are required to operate within the same rules as human employees.
AI is tested (and trained) against representative data and does not display unjustifiable biases
The company has checked and tested that its AI systems are free from unintended and unethical biases. Evidence from testing and live operation shows fair results in the treatment of customers.
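One simple way such testing might be evidenced is a fairness check over test outcomes, such as the demographic parity gap sketched below. The metric, groups, and threshold are illustrative assumptions; real audits would use richer metrics and representative datasets.

```python
from collections import defaultdict

def approval_rates(outcomes):
    """outcomes: list of (group, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(outcomes):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy test data: group A approved 2/3 of the time, group B 1/3.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = parity_gap(sample)  # 2/3 - 1/3 = 1/3, large enough to flag for review
```

A gap above an agreed tolerance would prompt investigation into whether the difference is justifiable or reflects an unintended bias.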
AI balances the interests of people, business and the environment
The company has worked to reduce possible unintended consequences of its AI systems and optimised for more than just profit.  There is evidence that the company has considered the welfare of its customers, employees and the environment in defining the goals of the AI.
Decisions made by AI can be challenged by customers and staff
The company has a process for customers and staff to report unsatisfactory AI system performance. There is a plan in place for how to re-evaluate decisions and if necessary retrain the AI system to make better future decisions.
AI is monitored and safeguarded
The company ensures the quality of its AI through real-time monitoring and regular human reviews of system behaviour. It has plans in place to defend against and deal with any unexpected issues.
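A minimal sketch of one such real-time safeguard is shown below: a rolling window over live decisions that flags for human review when the approval rate drifts outside the band observed in testing. The class name, window size, and tolerance are all illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Hypothetical runtime safeguard: flag drift in the live approval rate."""

    def __init__(self, expected_rate, tolerance, window=100):
        self.expected = expected_rate
        self.tolerance = tolerance
        self.window = deque(maxlen=window)

    def observe(self, approved: bool) -> bool:
        """Record one decision; return True if human review is needed."""
        self.window.append(int(approved))
        rate = sum(self.window) / len(self.window)
        return abs(rate - self.expected) > self.tolerance

monitor = DriftMonitor(expected_rate=0.5, tolerance=0.2, window=10)
# Ten straight approvals push the rolling rate to 1.0, outside the band,
# so every observation triggers an alert.
alerts = [monitor.observe(True) for _ in range(10)]
```

In practice an alert would route the affected decisions to a human reviewer rather than simply returning a boolean.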
The company has the legal right to use the data its systems ingest
The company knows where the data used by its AI systems is coming from and going to. Data is handled within all legal requirements and there is a consent audit trail for the use of any customer's personal data.
Advanced intelligence is not encouraging addiction
Digital services can become addictive, and AI could, perhaps accidentally, encourage addiction to a product it is helping to sell. The company must have considered what addiction looks like for its products and services and taken steps to detect and prevent it.
reputable.AI is owned and operated by Reputable Ventures Ltd, UK 11292351, © Reputable Ventures Ltd.