Don’t Let Artificial Intelligence Get Under Your Skin


Our fears about the future often blind us to what technology is capable of doing in the present


Right now, you can do something magical if you own a smartphone.

With a few taps of your thumb, you can download a little program called SkinVision that uses the camera to take a picture of moles or other blemishes on your skin. That image is automatically sent, via a complicated relay system of cables and satellites, to a data server in Europe. There, a machine that has been trained on hundreds of thousands of pictures of melanomas over the last few years analyses the image, using a piece of code that employs a clever statistical technique called deep learning, running on a bank of custom-built integrated circuits originally designed for playing computer games.
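
SkinVision’s model and training pipeline are proprietary, so the sketch below is purely illustrative: it shows the general shape of this kind of deep-learning image classifier in Python, assuming a standard pretrained convolutional network. The three-class ‘risk head’ and its labels are our invention, not the company’s code.

```python
# Illustrative only: a convolutional network repurposed as a three-class
# risk classifier. A real system would first train the new final layer on
# hundreds of thousands of labelled melanoma images.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)  # hypothetical: low/medium/high
model.eval()

# Standard ImageNet-style preprocessing for the backbone above.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def assess(path: str) -> str:
    """Return a risk label for the photo at `path`."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():  # inference only, no training
        probabilities = model(image).softmax(dim=1).squeeze()
    return ["low", "medium", "high"][int(probabilities.argmax())]
```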

Some people call this technological voodoo artificial intelligence, but that’s not really an accurate description. Those involved in casting the spells usually prefer to call it machine learning. It’s a new thing. The type of phone you use to take the picture has only been around for about 12 years, and the ability of our machines to see what’s in that picture is less than seven years old. Computers have been able to recognise text and numbers for as long as we’ve had computers, but until recently images and video were beyond them. Now, though, they’re able to see in the same sense that they can read.

This ‘cognitive code’, a proprietary mathematical algorithm, calculates the fractal dimension of skin lesions and surrounding tissue, and builds its own map that reveals different growth patterns. The algorithm checks for irregularities in colour, texture and the shape of the lesion. Once all of that is done, it sends back its diagnosis: low, medium or high risk. And the accuracy is incredible. It correctly identifies dangerous melanomas 97% of the time, way better than family doctors (60%), dermatologists (75%) and even specialist dermatologists (92%). Oh, and all of this happens within 30 seconds of you pressing the button marked SEND.
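
The code itself is a trade secret, but fractal dimension is a standard, well-documented measure of how ragged a shape is. Here’s a minimal box-counting sketch, assuming we already have a binarised lesion mask (that binarisation step is our simplification, not part of any published SkinVision method):

```python
# Box-counting estimate of fractal dimension: count how many k-by-k boxes
# touch the lesion at several scales, then fit a line in log-log space.
import numpy as np

def box_count(mask: np.ndarray, k: int) -> int:
    """Number of k-by-k boxes containing at least one lesion pixel."""
    h, w = mask.shape
    return sum(
        mask[i:i + k, j:j + k].any()
        for i in range(0, h, k)
        for j in range(0, w, k)
    )

def fractal_dimension(mask: np.ndarray) -> float:
    """Slope of log(box count) against log(1 / box size)."""
    sizes = [2, 4, 8, 16]
    counts = [box_count(mask, k) for k in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope  # ~1 for a smooth outline, approaching 2 for a filled blob

mask = np.zeros((64, 64), dtype=bool)
mask[8:56, 8:56] = True  # a solid square "lesion"
print(fractal_dimension(mask))  # well above 1 (edge effects keep it below 2)
```

Irregular, space-filling growth pushes the estimate up, which makes it one cheap signal to combine with the colour, texture and shape checks described above.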

If, like us, you live in Australia or New Zealand, that’s pretty useful. We’ve got the highest rates of skin cancer in the world. It’s got nothing to do with the ozone hole. It’s because we’re closer to the sun during summer, and because both countries are heavily populated by pale humans whose ancestors came from places where the weather was pretty depressing. In Australia, more than 750,000 people are treated for one or more non-melanoma skin cancers every year, and by the age of 70, two out of every three people will have been diagnosed with some form of skin cancer. Our exhausted GPs have to spend a disproportionate amount of their time looking at people’s backs, and there aren’t enough skilled skin specialists to meet the demand.

Now though, a machine can do the job for you. It costs $50 for a year, and you can use it as many times as you like, on as many people as you like. You can perform checks on your kids or your grandmother or your housemates. In other words, for the equivalent of two and a half hours of work on the minimum wage, you have the ability right now to spot skin cancer more accurately than the world’s leading dermatologists. This is not a pilot or a test program. It’s a real-world application of AI (we’re using that term loosely) that has more than a million users around the world, and has already spotted 27,000 dangerous melanomas.

It’s a piece of code that’s saving real, human lives.

It’s just the beginning. The human race is at the start of a profound, technology-driven revolution in our ability to diagnose disease. There are machines that can now diagnose more than 50 different kinds of eye disease. In India, where radiologists are in short supply, a breast cancer-spotting algorithm is being used by 11 hospitals in five cities. The best lung-reading software in the world comes from Beijing, where a four-year-old startup has amassed more than a million scans from Chinese hospitals. Machines are now better than medical professionals at spotting brain haemorrhages, enlarged hearts, collapsed lungs, pneumonia, autism, Alzheimer’s and haematomas. And they do it in a fraction of the time.

It’s not just medicine. Plantix is an application that allows farmers to take a photo of their crops, beam it over to a server, and cross-reference it, using image recognition, against a database of crop species. Within a few minutes, an AI-powered analysis arrives back, informing them if their crops need more water, or if they need to cut back on fertiliser. It’s not a university project, it’s not a startup deck, it’s not a breathless article in a tech magazine. It’s a real-world application that’s in the field, being used by more than 620,000 farmers, many of whom have never heard of a place called Silicon Valley or a TV show called Black Mirror.
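
Plantix’s API isn’t public, so the endpoint URL and response fields below are hypothetical, but the photo-in, diagnosis-out loop the paragraph describes looks roughly like this in Python:

```python
# Hypothetical client: upload a crop photo, get a diagnosis back as JSON.
import requests

def diagnose_crop(image_path: str) -> dict:
    """POST a photo to an (imaginary) diagnosis endpoint, return the result."""
    with open(image_path, "rb") as f:
        response = requests.post(
            "https://api.example-crop-doctor.com/v1/diagnose",  # made up
            files={"image": f},
            timeout=300,  # the analysis can take a few minutes end to end
        )
    response.raise_for_status()
    return response.json()  # e.g. {"disease": ..., "advice": ...}

print(diagnose_crop("tomato_leaf.jpg"))
```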


When most people hear the words ‘artificial intelligence’ they don’t think of skin cancer or farming. They project their hopes, fears and political beliefs about technological change onto it, and hear whatever they want to hear. It’s a cultural Rorschach test. The term has been used so many times, in so many different contexts, that it’s lost whatever meaning it originally had. In this sense it’s similar to terms like ‘climate change’ or ‘inequality’, which originally had quite clear scientific or economic definitions but are now so emotionally charged that technical accuracy doesn’t matter any more.

If you’re a media company whose business model depends on getting people’s attention, this is great. The outsize influence of both Hollywood and Silicon Valley on our cultural landscape means that large swathes of the general public are used to the idea that artificial intelligence is either a terrifying new technology that threatens humanity, or a business innovation that’s going to ‘change everything.’ In any given week, you’ll come across robot cars forced to make terrible choices about killing old women, studies predicting unprecedented unemployment chaos, and Orwellian surveillance systems that undermine the social contract and amplify our worst behaviours.

This stuff is all lazy journalism. The trolley problem is an old philosophical thought experiment, not an engineering consideration. In capitalist economies, automation replaces tasks within jobs, not jobs themselves, allowing people to do more with less. This generally leads to net job creation. Hyperventilating articles about social credit systems and police surveillance are usually written by people who’ve never set foot inside China or a police station. Even facial recognition, the most terrifying of all AI applications, tends to be a lot less scary in real life. One of the most common applications of facial recognition in the US, for example, is catching shoplifters. At Target, Walmart and Lowe’s, it’s reduced theft in stores by as much as 30%. At LAX, Lufthansa uses facial recognition to identify passengers and allow them to board the plane within a few seconds. It’s not Big Brother. It’s a new, more efficient, more secure process that speeds up boarding times by 50%.

@matvelloso, telling it like it is

Of course, the invention of any new, powerful technology comes with concerns and challenges. Melvin Kranzberg’s famous maxim still applies: technology is neither good, nor bad, nor is it neutral. The problem of algorithmic bias, for example, is a real one. If left unchecked, algorithms can be opaque, biased and unaccountable tools wielded unfairly in the interests of institutional power. Facial-recognition algorithms, for instance, were created by mostly pale-skinned young men and trained on images of mostly pale-skinned young men. That means they’re less accurate on individuals who are older, female or have darker skin. That’s a serious flaw, especially if they’re being used in combination with something like the New York City Police Department’s gang database, which is overwhelmingly comprised of people of colour. It’s pretty obvious that this requires better regulation and oversight.
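
How do audits surface this kind of disparity in practice? Not with any exotic machinery: they disaggregate the error rate by demographic group instead of reporting one overall accuracy number. A minimal sketch, with made-up evaluation results:

```python
# Made-up evaluation data: (demographic group, was the prediction correct?)
from collections import defaultdict

results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += not correct

for group in totals:
    # A single overall accuracy figure would hide this gap entirely.
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")
```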

Even here though, there’s a pattern. Critics of these systems are happy to point out their potential flaws, but rarely ask how well those systems operate without algorithms. Rather than simply asking whether algorithms are flawed, we should be asking how their flaws compare with those of human beings. The dirty secret is that the humans the algorithms are replacing are significantly worse. Bias isn’t a computer thing. It’s a human thing. Far from amplifying our worst tendencies, the algorithms we’ve created are revealing them to us. They show us that our skin doctors make far more mistakes than we think, that we hire people who are like us rather than the ones who are best for the job, and that police databases are skewed by decades of all too human racism.

AI is obviously going to kill us all and replace us with paperclips. In the meantime, you can use it to spot skin cancer or diseases on your crops.

Remember, human nature doesn’t change. What does change is the technology. The algorithms are getting better as we realise that bias is a problem worth paying attention to. The tech companies are furiously scrambling to fix it. Microsoft, for example, has revised and expanded its facial-recognition datasets to include more skin tones, genders and ages, reducing error rates for darker-skinned men and women by up to 20 times, and for all women by nine times. Facebook, Google and Accenture have all built new tools to expose algorithmic bias. And IBM has just released a dataset of a million faces explicitly designed for diversity, where gender, for example, isn’t coded as a binary category but on a spectrum. These moves are part of a much wider, richer new wave of research in machine learning that takes the social and political consequences of algorithms far more seriously.

The spellcasters have been forced to consider the ethical implications of their work and the dangers of biased decision-making. That didn’t happen by magic. It happened because a lot of people cared enough to make an issue out of it, and told a compelling story, backed by evidence, that appealed to our innate sense of fairness. The challenge now is to go beyond simple trolley-style thought experiments or drum-bashing outrage about bias, and ask more sophisticated questions. What is the right balance of monitoring that allows for security but resists coercive surveillance? How should we shape the redistribution of gains from advanced technology so we don’t increase the divide between the haves and have-nots? How do we scale and automate education while still enabling creativity and independent thought to flourish?

We’ve just figured out a way to train our machines to do all sorts of magic. Now comes the hard part: working out the implications. Remember though, it’s a work in progress, and it’s getting better.

One skin cancer diagnosis at a time.

Angus Hervey