I am joining a panel tomorrow at the AI-Summit in London, focused on practical Artificial Intelligence (AI) for business applications. I will be asked the question “What can Artificial Intelligence do for business?”, so by way of preparation I thought I should try to answer it here on my blog.
Perhaps we can break the question down: first considering the converse question of “what can’t AI do for business?”, even if AI’s cognitive potential matches or exceeds that of a human, and then discussing what AI can do for businesses practically today.

What would happen if we did succeed in developing AI with significant cognitive potential (of which IBM’s Watson provides a foretaste)? Let’s undertake a thought experiment. Imagine that we have AI software (Fred) capable of matching or exceeding human-level intelligence (cognitively defined), but which obviously remains locked inside the prison of its computer body. What would Fred miss that might limit his ability to help the business?

Firstly, much of business is about social relationships. Those attending the AI-Summit have decided that something is available there which cannot be gained as effectively by reading the Internet – perhaps the herd mentality of seeing what others are doing, perhaps the subtle clues, perhaps the serendipitous conversations, or perhaps the building of trust such that unwritten knowledge is shared. Fred would likely be absent from all this – even if he were given a robotic persona, it is unlikely it would fit in with the subtle social activity needed to navigate the drinks reception.

Second, Fred is necessarily backward-looking, gleaning his intelligence and predictive capacity by processing the vast informational traces of human existence available from the past (or present). Yet we humans, and business in general, are forward-looking – we live by imagined futures as much as remembered pasts. How well could Fred handle prediction when the world can change in an instant (remember the sad day of 9/11)? Perhaps quicker than us (processing the immediate tweets), but perhaps wrongly – not seeing the mood shifts, changes and immediate actions. Who knows?

My third point derives from the famous Hawthorne experiments, which showed that human behaviour changes when we are observed. Embedding Fred into an organisation will change the organisation’s social dynamic and so change the organisation. Perhaps people will stop talking where Fred can hear, or talk differently when they know he is watching. Perhaps they will be more risk-averse, worried that Fred would question the rationality of their decisions. Perhaps they will be more scientific – seeking to mimic Fred – and lose their aesthetic, intuitive ideas. Perhaps they will find it hard to challenge, debate and argue with Fred – debate that is necessary for businesses to arrive at decisions in the face of uncertainty. Or perhaps Fred will deny the wisdom of the crowd (Surowiecki, 2005) by over-representing one perspective, when the crowd may better reflect humans’ likely future response. Or perhaps, as Nicholas Carr suggests (Carr, 2014), Fred will prove so useful and intelligent that he dulls our interest in the business, erodes our attentiveness and deskills the CxOs in the organisation – just as flying on autopilot has been suggested to do for pilots.

Finally (and arguably most importantly, since those who believe in AI will likely dismiss the earlier points as simplistic, arguing that AI will overcome them by brute force of intelligence), Fred’s intelligence would be based on data gleaned from a human world, and “raw data is an oxymoron, data are always already cooked and never entirely raw” (Gitelman and Jackson, 2013, following Bowker, 2005 – cited in Kitchin, 2014).
Fred’s data is partial: decisions were made as to what was, and wasn’t, counted and recorded, and how it was recorded (Bowker & Star, 1999). Our data reflects our social world, and Fred is likely to over-estimate how benign (or how extreme) that representation is. While IBM’s Watson can reflect human knowledge in games such as Jeopardy, its limited ability to question the provenance of data without real human experience may limit its ability to act humanly – and in a world which continues to be dominated by humans this may be a problem. I had the pleasure of attending a talk two weeks ago by Prof Ross Koppel, who discusses this challenge in detail in relation to health-care payments data.

AI is also founded upon an ontology of scientific rationality – by far the most dominant ontological position today. This position argues that science, and statistical inference from data, presents the truth (a single unassailable truth at that). Such rationality denies human belief, superstition and irrationality – yet these continue to play a part in the way humans act and behave. Perhaps AI needs to explore these philosophical assumptions further, as Winograd and Flores famously did for AI three decades ago (Winograd & Flores, 1986).

Finally, when evaluating any new technology’s impact on business, we should be critical of “solutionism” – the idea that business problems will be solved by one silver bullet. Instead we should evaluate each technology through a range of relevant filters, asking questions about its likely economic, social and political distortions, and from this evaluate how it can truly add value to business.

In exploiting AI today, at its most basic, businesses should start by focusing on the low-hanging fruit. AI doesn’t have to be that intelligent to provide huge benefits. Consider how Robotic Process Automation (RPA) can help companies (e.g. O2) deal with their long tail of boring, repetitive processes (Willcocks & Lacity, 2016) – for example, “swivel chair” functions where people extract data from one system (e.g. email), undertake simple rule-based processing, then enter the output into a system of record such as an ERP (Willcocks & Lacity, 2016); a minimal sketch of such a function appears below. As these processes involve only a modicum of intelligence, and are repetitive and boring for humans, they offer cost opportunities (see Blue Prism as an example of this type of solution) – particularly as one estimate puts such automation at around $7,500 per year per full-time equivalent, compared with around $23,000 per year for an offshore salary (Willcocks and Lacity, 2016, quoting Operationalagility.com) – a saving of roughly two-thirds.

Obviously AI might move up the chain to deal with more significant business-process issues. Yet at each stage CxOs will need to show leadership, and IT departments will need specific skills, to ensure that the AI makes sensible decisions and reflects business practices. Business analysts will need to learn about AI so they can act as sensible teachers – identifying risks that the AI is unlikely to notice, and steering it to act sensibly. Finally, as the technology improves, organisational and business sociologists will be needed to wrestle with the challenges identified above.
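To make the “swivel chair” pattern concrete, here is a minimal Python sketch of such an automation. It is purely illustrative – the email template, field names, business rule and `erp_client` interface are all hypothetical assumptions of mine, not Blue Prism’s actual API or any specific product’s:

```python
import re

# Hypothetical swivel-chair automation: read an order-confirmation email,
# apply simple extraction rules, and post the result to a system of record.
# All names here (fields, thresholds, erp_client) are illustrative only.

ORDER_RE = re.compile(r"Order\s+#(?P<order_id>\d+)")
AMOUNT_RE = re.compile(r"Total:\s*\$(?P<amount>[\d,]+\.\d{2})")

def extract_order(email_body: str) -> dict:
    """Rule 1: pull out the structured fields a human would re-key by hand."""
    order = ORDER_RE.search(email_body)
    amount = AMOUNT_RE.search(email_body)
    if not (order and amount):
        raise ValueError("Email does not match the expected template")
    return {
        "order_id": order.group("order_id"),
        "amount": float(amount.group("amount").replace(",", "")),
    }

def process(email_body: str, erp_client) -> None:
    """Rule 2: route clean records to the ERP; escalate anything unusual."""
    record = extract_order(email_body)
    if record["amount"] > 10_000:            # assumed business rule: large
        erp_client.queue_for_review(record)  # orders still go to a human
    else:
        erp_client.create_sales_order(record)
```

The point of the sketch is how little “intelligence” is involved: the value comes from taking the human out of a repetitive re-keying loop, while the escalation rule keeps a person in the loop for anything the rules cannot confidently handle.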
© Will Venters
Bowker, G., & Star, S. L. (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.
Carr, N. (2014). The Glass Cage: Automation and Us. W. W. Norton & Company.
Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Sage.
Surowiecki, J. (2005). The Wisdom of Crowds. Anchor.
Willcocks, L., & Lacity, M. C. (2016). Service Automation: Robots and the Future of Work. Warwickshire, UK: Steve Brookes Publishing.
Winograd, T., & Flores, F. (1986). Understanding Computers and Cognition. Norwood, NJ: Ablex.

Written by Dr Will Venters