Various news stories went unnoticed last week due to the overwhelming coverage of the horrifying Boston bombings. One such story was the ongoing legal battle over US patients dying after undergoing robot-assisted surgery.
The procedure, which is usually completed without major complications, raises some interesting questions: who is responsible for a machine’s erroneous calculations or malfunctioning calibrations? Is it the hospital that bought the machine? The manufacturer that didn’t explain it properly? The surgical team actually operating it? Or is it simply a matter of bad luck?
These are some of the legal and moral issues we’ll have to get used to answering in the near future.
If Japan is any indication, robots will soon be used for everything from routine examinations and simple patient care to complex surgeries and long recovery treatments.
It’s pretty hard to predict people’s reactions here in Singapore, but one thing’s for sure – most people will start wondering how comfortable they are leaving their elderly parents or their newborns in the “hands” of robots.
This is perfectly normal, as sci-fi literature and movies have instilled in us a sense of potential threat from anything designed to follow orders strictly and efficiently, without considering moral or philosophical dilemmas.
As a recent New Yorker article on self-driving cars pointed out, it’s one thing to let robots do tasks we find inconvenient; it’s quite another to expect them to make rational and informed decisions.
If, for instance, my self-driving car calculates an imminent crash with an oncoming self-driving bus full of children, will it decide it’s better to veer off the road and risk killing me in order to avoid hurting the children? Or will it choose to stay the course based on the probability of the crash killing everyone involved? Will the car even be allowed to make such decisions? If so, who decides which priorities to give it? The driver? The manufacturer?
This sort of example shows that there’s nothing wrong with robots and machines per se.
We have, after all, automated many of our daily tasks, sometimes to the point that we don’t even think twice about it (when was the last time you reconsidered stepping into a lift, drawing cash from an ATM, or asking Siri for directions?).
The problem arises when the tasks undertaken require a degree of human understanding that current machines still don’t possess; in hospitals and the medical professions, this means empathy, hope, sensitivity, compassion, understanding, tact, and so on.
Moral considerations aside, the age-old question of the role of human workers/practitioners will be another issue to consider. Will the introduction of robot carers take jobs away from human carers? Will human surgeons be replaced by mechanical arms equipped with scalpels? Will we one day do away with doctors and just scan a chip embedded under our skin?
The answer is: yes and no.
Yes, eventually many jobs, including specialised jobs in the healthcare sector, will be done by sophisticated robots and machines. Some argue that it’s only a matter of time before robots take over most jobs, no matter how complex they may seem.
But no, that time is not yet here. For now, as advanced and complex as some robots and machines may be, they can only perform simple and repetitive tasks. They do not learn, they do not take the initiative, they do not choose, they do not decide.
For the next 50 years or so, humans will still use robots and machines as tools to make their jobs faster or easier, and will supervise the ones that do work on their own.
That’s why it’s important that we start preparing our workforce for this inevitable change: teaching workers how to best use technology, giving them the opportunity to upgrade their skills, and showing them the endless possibilities will be pivotal to how Singapore gets ahead in the coming robotic revolution.