AI is not a single technology, but a diverse set of methods and tools continuously evolving in tandem with advances in data science, chip design, cloud services and end-user adoption. The most common examples of AI methods and tools include natural language processing, machine learning, deep learning, computer vision, conversational intelligence and neural networks.
Although there is a growing consensus on the need for AI to be ethical and trustworthy, the development of AI functionality is outpacing developers’ ability to ensure it is transparent, unbiased, secure, accurate and auditable. Organisations need to develop an AI governance model that embeds ethical design principles into AI projects and overlays existing technology governance structures.
Our latest report outlines EY’s trusted artificial intelligence framework and identifies the risks that can undermine trust in AI systems, as well as in the products, brands and reputations they support.
AI is coded to learn rather than to execute commands, and it has the unique ability to evolve dynamically and at speed through supervised or unsupervised learning. This evolution in technology offers great opportunities to enhance products and services and to change and shape financial markets the world over. However, as with all change, sound and transparent controls need to evolve at pace and in tandem with the technology, to ensure AI solutions are trusted to deliver their intended outcomes and do not harm consumers or financial markets.
The report explains how to proactively design trust into every facet of AI systems and elaborates on the five attributes necessary to sustain that trust:
For more on this topic, read our recent report on Intelligent Automation: the combination of AI and robotic process automation with digital enablers and human-in-the-loop processing. Got a question? Get in touch to discuss the potential impact on your business.