Popular discourse around “AI” has recently focused on the issue of ethics in the algorithm. Examples include discrimination (for instance, when deciding whether to offer someone credit or when setting an insurance premium), or the split-second decision of whether a self-driving vehicle should dodge an oncoming car by veering into a pedestrian.

My suspicion is that many of these issues are already present in algorithms that we use every day, but ones that we don’t call “artificially intelligent”.

First, I’ll briefly consider the issue of discrimination, and reiterate the general consensus that it isn’t the algorithm that’s biased, but rather the data that it learns from. A recent example was the finding that ethnic minorities pay a ‘penalty’ on their car insurance. I briefly looked at the analysis, and I suspect (but haven’t tested) that the difference may be due to the regions where different demographic groups live. Minority ethnic groups are more likely to live in cities, where there is more vehicle crime, hence the apparent ‘ethnic penalty’.
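To make the confounding argument concrete, here is a minimal simulation sketch with entirely invented numbers (not the insurance data referred to above): premiums are set purely by region, but because group membership is correlated with living in a city, a raw comparison between groups still shows a gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# All numbers here are invented purely to illustrate confounding.
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
p_city = np.where(group == "B", 0.8, 0.3)      # group B more likely to live in a city
city = rng.random(n) < p_city

# Premiums depend only on region (city vs. not), never on group membership.
premium = 400 + 150 * city + rng.normal(0, 30, n)

raw_gap = premium[group == "B"].mean() - premium[group == "A"].mean()
print(f"Raw gap (B - A): {raw_gap:.1f}")       # clearly non-zero, despite group-blind pricing

for name, mask in [("city", city), ("non-city", ~city)]:
    gap = premium[mask & (group == "B")].mean() - premium[mask & (group == "A")].mean()
    print(f"Gap within {name}: {gap:.1f}")      # approximately zero
```

In this toy setup the raw between-group gap is substantial, yet it vanishes once you compare within region, which is the pattern you would expect if region, rather than ethnicity, is doing the work.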

What do we do about this? Without looking into which latent variables explain the difference, one might simply correct for it by adjusting the premium to ensure that the sensitive variable (race) is no longer correlated with price. However, this ends up introducing new biases (potentially associated with variables that are unrecorded by the researcher or company). Personally, I also worry that current attempts to correct biases, e.g. in recruitment, could unwittingly discriminate against subpopulations that we aren’t measuring or detecting.
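A minimal sketch of what such a naive correction could look like, reusing the synthetic setup above (this illustrates the idea only; it is not a method any insurer is known to use): shift each group’s premiums so that price is no longer correlated with the sensitive variable, then look at who ends up paying for the adjustment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Same synthetic setup as before: premiums driven purely by region.
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
city = rng.random(n) < np.where(group == "B", 0.8, 0.3)
premium = 400 + 150 * city + rng.normal(0, 30, n)

# Naive 'correction': shift each group's premiums so the group means are equal,
# i.e. price is no longer correlated with the sensitive variable.
adjusted = premium.copy()
for g in ["A", "B"]:
    adjusted[group == g] -= premium[group == g].mean() - premium.mean()

# Who actually pays for the correction? Compare like-for-like risk (same region).
for g in ["A", "B"]:
    for name, mask in [("city", city), ("non-city", ~city)]:
        cell = (group == g) & mask
        change = (adjusted[cell] - premium[cell]).mean()
        print(f"group {g}, {name}: mean change {change:+.1f}")
```

In this toy example every member of one group pays a flat surcharge and every member of the other receives a flat discount, so two drivers with identical risk (same region) now pay different prices, and the cost of the correction lands on whichever subpopulations happen to sit within each group, whether or not anyone is measuring them.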

Second, life-and-death decisions are already being taken ‘automatically’ by algorithms. I was recently emailed by someone asking for examples of “near-term, real-life situations where AI will have to make unsupervised ethical choices…for the developing world.” Already, in the clinic, machines are providing decision support to doctors and consultants when diagnosing or prescribing treatment.

A very simple example from a developing country can be found on the nutrition unit of Mulago Hospital, Uganda. There, mothers arrive with malnourished children, and the children are assessed for treatment. This can be as simple as measuring a child’s mid-upper-arm circumference (MUAC) and comparing it to a given threshold to decide whether the child has Severe Acute Malnutrition (SAM) and thus will receive treatment. The nutritionists on the ward will also take into account other aspects of the child’s health (HIV, oedema, etc.), but, in effect, we are already following an algorithm.
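Written out as code, the decision rule is almost trivially simple. A minimal sketch, assuming the commonly cited WHO cut-off of a MUAC below 115 mm and treating oedema as an automatic referral; both are illustrative assumptions rather than a statement of Mulago’s exact protocol:

```python
# Illustrative sketch of a MUAC-based referral rule. The 115 mm cut-off is the
# commonly cited WHO threshold for Severe Acute Malnutrition, used here as an
# assumption rather than a description of any particular ward's protocol.
SAM_MUAC_THRESHOLD_MM = 115

def needs_sam_treatment(muac_mm: float, has_oedema: bool = False) -> bool:
    """Return True if the child should be referred for SAM treatment."""
    if has_oedema:           # bilateral pitting oedema treated as a sign of SAM
        return True
    return muac_mm < SAM_MUAC_THRESHOLD_MM

print(needs_sam_treatment(112.0))         # True: below threshold
print(needs_sam_treatment(120.0))         # False: above threshold
print(needs_sam_treatment(120.0, True))   # True: oedema overrides the measurement
```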

These algorithms have been developed through scientific observational studies investigating the outcomes of children with different MUAC measurements.
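For completeness, here is a hedged sketch of how such a cut-off might be derived from outcome data. The numbers are entirely synthetic, and Youden’s J is chosen only as one plausible criterion; the actual studies will have used more careful methodology.

```python
import numpy as np

rng = np.random.default_rng(2)

# Entirely synthetic (MUAC, poor-outcome) data, just to illustrate choosing a
# threshold from observational outcomes; not the real studies or real children.
muac = rng.normal(130, 15, 5000)                 # mm
p_poor = 1 / (1 + np.exp((muac - 115) / 5))      # risk rises as MUAC falls
poor_outcome = rng.random(5000) < p_poor

# Evaluate candidate thresholds: flag 'SAM' when MUAC < t, and pick the one
# maximising Youden's J (sensitivity + specificity - 1).
best_t, best_j = None, -1.0
for t in np.arange(100, 140, 1.0):
    flagged = muac < t
    sensitivity = flagged[poor_outcome].mean()
    specificity = (~flagged[~poor_outcome]).mean()
    j = sensitivity + specificity - 1
    if j > best_j:
        best_t, best_j = t, j

print(f"Chosen threshold: {best_t:.0f} mm (Youden's J = {best_j:.2f})")
```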

How is machine AI different? Foremost is the opacity of the decision making. In principle the process is similar – the machine has observed lots of training examples and has to make a decision about whether a child needs treatment. Unlike the human-implemented case, however, the exact reasons for a decision may be unclear.

For the last few years there has been considerable debate about the importance of interpretability, and concerns around the fragility of deep learning (in particular with respect to adversarial examples and the apparent sensitivity of networks to changes in the source distribution). Arguably it is unclear, from the practitioner’s point of view, why a given threshold is used even in the human-algorithm case.

In summary, we are already letting algorithms make life-or-death decisions, potentially without fully understanding the rationale behind a threshold. The process at the moment is simple, but as the dimensionality of the data increases the reasoning becomes increasingly opaque, and maybe it is this that raises concern around AI making such decisions?