
21 Jan 20


The Computer Misuse Act: an inhibitor to cyber security?


I am looking forward to the launch of the CLRNN (@CLRNNetwork) report, Reforming the Computer Misuse Act 1990, on January 22. I am a major contributor.


The key to understanding the Act is that from the outset it was designed to fill gaps in the existing legislation rather than to provide a comprehensive response to whatever you think "cybercrime" is. Most cybercrime can now be charged under existing legislation, including the Fraud Act 2006, the law on extortion/blackmail, the Data Protection Act 2018 and the various Terrorism Acts. The Computer Misuse Act is invoked as a primary means of prosecution only when none of these appears to be satisfactory. Indeed there are frequent occasions in which the Act has clearly been breached but where prosecutors decide not to pursue charges with any vigour, or at all, because success would be unlikely to alter the court's view of punishment in the event of conviction.


(This is one of the reasons why there appear to be so few prosecutions under the Computer Misuse Act, and why it may be misleading to treat convictions under the Act as reliable indicators of the extent of cybercrime.)


The main offences - unauthorised access, unauthorised access in pursuit of a further criminal offence, and unauthorised system impairment - do not appear to require substantive modification.


The main problem is that, although this was never the intention, the Act is an inhibitor to cyber security investigations and research. The reason is that the whole framework of the three main offences is built around the concept of "unauthorised". Authority to access a computer for any purpose can only be given by the owner of that computer or by someone clearly delegated on their behalf – s 17. So far, so apparently sensible. But these days computer systems are not self-contained stand-alone devices; they rely on a constant supply of external input, such as material from the web and data streams from other sources. Employees, sub-contractors and others may be granted remote access from their own devices. In addition, much of the processing may take place on devices not owned by the company or organisation using them – as with cloud services, outsourcing contracts and external archiving of email and other documents.


Where, then, are the boundaries between what an organisation can "authorise" and the outside world? The answer may or may not be found in complex contracts of service and supply. This creates a difficulty for those carrying out penetration testing (otherwise known as ethical hacking, and designed to look for weaknesses in an organisation's computer systems), and for those carrying out investigations and seeking to establish causes and identify those responsible. There is also a problem for researchers and academics, and for organisations offering threat intelligence. Threat intelligence, at its best, offers advice not only on generic threats but on new specific sources of threat, including hostile actors. Customers of threat intelligence use it to devise their own detailed security precautions.


All of these activities require investigators to look beyond the boundaries of any one specific computer system. At the moment the only organisations entitled to carry them out are law enforcement and the intelligence agencies, which benefit from specific "savings" in section 10 of the Act.


The public policy issue, therefore, is that under current law, law enforcement and the National Cyber Security Centre (NCSC), which is part of GCHQ, appear to be the only UK bodies that can carry out threat intelligence beyond a corporate boundary. This places a significant limit on the resources available to identify threats, and also on the range of threats investigated: law enforcement will concentrate on events likely to lead to criminal prosecution, while NCSC's central remit is state security. The current legal framework therefore runs in direct opposition to the repeatedly stated national policy of partnership working across the public and private sectors, effectively preventing industry from deploying its technical capabilities in pursuit of national cyber security objectives.


The answer appears to be some sort of "public interest" defence, but this would need to be very tightly defined so as not to be abused by recreational hackers.


The report also looks at issues of international jurisdiction, corporate liability (can organisations, as opposed to individuals, be charged under the Act?) and guidelines for the handling of young and "neurologically diverse" defendants.


I am very interested in responses to the detailed analyses and recommendations.


Peter Sommer