The birth of the modern computer system was nothing less than revolutionary for businesses. New levels of efficiency and consistency suddenly became possible, with an immediate and significant effect on productivity not only in the technology sector but in everything from manufacturing to financial management.
Whilst for many this was a welcome development, others realised that it effectively launched a head-to-head contest between man and machine that shows no sign of relenting.
A common mantra in computing, particularly amongst those who work with computers routinely, is “rubbish in, rubbish out”. A computer simply makes decisions based on instructions given by a human being, and it can only process the data it has been given. It therefore follows that computers can’t be held responsible for failures born of flawed human input.
Does this lead us to a scenario where man and machine should strive to complement each other rather than compete? Perhaps so, and it’s something that’s particularly evident in the field of automation.
Psychologists have often noted that human beings are perfectly capable of reacting to stimuli such as sound, touch and visible changes, but that we’re not well tuned to waiting for a situation that may never actually occur. In short, we become distracted and, for want of a better word, bored. This is a topic explored frequently by those who investigate aviation disasters when pilot error is a suspected factor. During the investigation into the crash of Air France flight 447 in 2009, retired Boeing captain David Jenkins commented: “computers make great monitors for people, but people make poor monitors for computers.”
This was a response to the debate about how much control the captain of an aircraft should have over the flight technology and whether computer systems should have ultimate authority.
“Computers make great monitors for people, but people make poor monitors for computers.”
David Jenkins, Retired Boeing Pilot
Another flaw in human monitoring is objectivity. A computer typically makes a decision by constantly comparing its inputs against quantitative thresholds set in advance, a process that is consistent and infinitely repeatable. Human beings, in stark contrast, can’t make such objective decisions, which introduces ambiguity and inconsistency and has serious implications where personal safety is concerned. What one person perceives as safe may be dangerous to another; this wouldn’t be the case with an automated computer system.
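This kind of threshold-based decision can be sketched in a few lines of Python. The sensor names and limits below are purely illustrative, not taken from any real system; the point is that the same inputs always yield the same verdict.

```python
# Illustrative sketch: hypothetical readings compared against preset
# quantitative thresholds. Unlike a human judgement, the outcome is
# deterministic and infinitely repeatable for identical inputs.

SAFE_LIMITS = {"temperature_c": 90.0, "pressure_kpa": 350.0}  # assumed limits

def is_safe(readings: dict) -> bool:
    """Return True only if every reading is within its preset limit."""
    return all(readings[name] <= limit for name, limit in SAFE_LIMITS.items())

print(is_safe({"temperature_c": 72.5, "pressure_kpa": 301.0}))  # True
print(is_safe({"temperature_c": 95.1, "pressure_kpa": 301.0}))  # False
```

Run twice, a year apart, on the same readings, this check gives the same answer, which is precisely the consistency that human monitoring struggles to match.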
Whilst the human brain is nothing short of remarkable, keeping track of multiple streams of information simultaneously is something many of us find difficult even over short periods. A computer, on the other hand, has no such problem: it can handle multiple streams of information at any given time, and may foresee issues that a human wouldn’t notice.
Automation takes advantage of this capacity and objectivity, but the most productive way to view its impact is to conclude that it has simply changed the role of humans and made way for efficiency improvements.
An open-minded conclusion to this contentious topic is that the interaction between man and computer should be a partnership, not an adversarial contest. After all, a computer will only ever behave as a human has told it to.