reputable.ai addresses two pressing issues:
As customers, how do we know which businesses are using AI and algorithms constructively?
How do we know a company has got a grip on this complex area of technology management?
How do we know where to put our trust?
How do we know that an institution has an eye to fairness as well as efficiency?
Within a company or government body, how does the board know that decision processes are well managed?
Current governance structures assume that it is people who are held to account. How does this work in a world where algorithms routinely make decisions that affect customers and employees?
How does the board satisfy itself that it, as opposed to algorithms, is making the decisions that affect people?
The reputable.ai certification mark means that within the last 12 months we have audited the company's use of AI and algorithms, asking difficult and searching questions about how the company uses AI.
It means that the organisation has systems and processes in place that help ensure the constructive and well-managed use of AI and algorithms.
It's not about good intentions. It's about systematic practice throughout the software design-develop-operate process. And it's as much about people and responsibility as it is about technology.
At reputable.ai we look for evidence of best practice. We have an audit methodology that asks searching questions. We know, as they say, “what good looks like”.
Nobody can guarantee that every decision a company makes is a good decision. What we can say with confidence is that the companies that pass a reputable.ai certification audit have good methods in place for the management of AI.
We’ve listed the principles behind our certification process on this site, with detail on each. Our methodology also accommodates other recognised sets of principles. There are several further layers to our certification process, which we will discuss with any company considering a reputable.ai certification audit.