Chris Cormack and David Kelly
- Artificial Intelligence (AI) is at its core a stack of mathematical models drawn from statistics, calculus and algebra – however, there is more to what AI can achieve than the underlying mathematics
- People who make decisions off the back of a recommendation from an AI model really need to appreciate what it is doing – black boxes are only useful in aircraft (and those are actually orange); the model must be matched with a genuine understanding on the part of the individuals who apply it
- Users must appreciate that “garbage in, garbage out” still applies to AI – choosing the right data sets and features is more art than science and requires the right expertise
- AI should not be allowed to propagate in a lazy, unaccountable way – it requires critical supervision and monitoring at all times by those with detailed business and technical understanding, which in turn requires the right type of management in an organisation
- If you look hard enough, you will be able to find a correlated pattern of behaviour across a wide enough set of data – but correlation is not the same as causality (see the first sketch after this list)
- Predictions must always be probability-weighted and accompanied by heavy caveats – beware of those seeking absolutes amid uncertainty (see the second sketch after this list)
- Models that learn from history may cement unacceptable biases around gender, race and location – this creates reputational risk
- Putting pressure on the quants to perform will produce models that conform to management’s viewpoint, but these are likely to lead to unintended consequences or absurd predictions
- Understand that AI and Machine Learning will not, at this point, solve every problem in an organisation; beware of snake oil sold to grease the wheels – understand the value proposition.
- Be aware of the strategic motivation for building AI capability. Understand the ethical choices in terms of the impact on employees, customers and society at large
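
The correlation-versus-causality point can be made concrete with a few lines of code. The following is a minimal sketch using NumPy on purely synthetic data (the sample size, seed and variable names are illustrative assumptions, not anything from the talk): scan enough random “features” and one of them will appear correlated with a random “outcome” by chance alone.

```python
# A minimal sketch of how spurious correlations arise: with enough candidate
# features, some purely random column will correlate with the target by chance.
# Everything here is synthetic noise, so any "pattern" found is meaningless.
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples, n_features = 100, 2000          # illustrative sizes, chosen arbitrarily

target = rng.normal(size=n_samples)                   # random "outcome"
features = rng.normal(size=(n_samples, n_features))   # random "predictors"

# Correlation of each feature with the target
correlations = np.array([np.corrcoef(features[:, j], target)[0, 1]
                         for j in range(n_features)])

best = np.argmax(np.abs(correlations))
print(f"Best of {n_features} random features: feature {best}, "
      f"correlation {correlations[best]:+.2f}")
# With this many noise features the strongest correlation is typically
# around |r| ~ 0.3-0.4 -- a "pattern" with no causal content at all.
```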
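Similarly, the point about probability-weighted predictions can be illustrated with a small sketch. The example below assumes scikit-learn’s LogisticRegression and fabricated data purely for illustration; it contrasts a hard yes/no prediction with the underlying probability that ought to be reported alongside it, caveats and all.

```python
# A minimal sketch of reporting probability-weighted predictions rather than
# absolutes. The synthetic data and the implicit 0.5 threshold are
# illustrative assumptions, not a prescription.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
X = rng.normal(size=(500, 3))
# Outcome depends only weakly on the features, so predictions are genuinely uncertain
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

x_new = rng.normal(size=(1, 3))
hard_label = model.predict(x_new)[0]            # an absolute: 0 or 1
p_positive = model.predict_proba(x_new)[0, 1]   # the probability behind it

print(f"Hard prediction: {hard_label}")
print(f"Probability-weighted view: P(positive) = {p_positive:.2f}")
# Reporting the probability (with its caveats) is more honest than reporting
# only the hard 0/1 label that the threshold produces.
```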