Lessons learned in automation security?

Assembly Automation

ISSN: 0144-5154

Article publication date: 12 April 2011


Citation

Piggin, R. (2011), "Lessons learned in automation security?", Assembly Automation, Vol. 31 No. 2. https://doi.org/10.1108/aa.2011.03331baa.002

Publisher: Emerald Group Publishing Limited

Copyright © 2011, Emerald Group Publishing Limited



Article Type: Viewpoint. From: Assembly Automation, Volume 31, Issue 2

The recent Stuxnet infections and revelations have opened the eyes of controls engineers and their IT colleagues, exposing an often glaring lack of appreciation of each other's domains.

Security of automation systems has not always been a priority, as evidenced by poor OS and software patch management, sometimes without patch testing or validation. Such practices reflect a mistaken belief that security can be achieved through obscurity, that systems are secure because they are isolated, or that IT is responsible for security. Consider the "unauthorised" or poorly implemented wireless network in an automation system. Such behaviours have caused significant economic losses, not only through loss of system availability but also through consequential losses, including intellectual property (process or application data), environmental and reputational damage. The hacking of the Modbus protocol and the development of Stuxnet demonstrate a willingness to target automation systems using publicly available information and industrial experience.

Stuxnet was the first publicly known worm to target industrial control systems (ICS). Its goal was to damage real-world industrial plants, not to disrupt abstract IT systems. The threat posed by Stuxnet has been portrayed as unique and beyond anything seen before. It has been likened to a weapon: a precisely targeted missile deploying a destructive payload. Stuxnet was aimed at ICS with the intention of reprogramming systems in a manner that would sabotage plants, while hiding the changes from programmers and users.

Since PCs used for control system programming are not normally connected to the internet, Stuxnet replicates via removable USB drives, exploiting a vulnerability that enables automatic execution (Figure 1). It then spreads across the LAN via a Windows print spooler vulnerability and a Windows server remote procedure call vulnerability. It copies itself to, and executes on, remote computers through network shares and Siemens WinCC database servers (SCADA software). It copies itself into Siemens Step 7 programmable logic controller (PLC) project files and executes when a project is loaded. It updates itself via peer-to-peer communication across a LAN. Stuxnet communicated with two command and control servers, originally located in Denmark and Malaysia, to enable code download and execution, including version updates and the ability to change command and control servers.

Figure 1 Stuxnet infection routes
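
The multi-vector spread can be made concrete with a short Python sketch that models these infection routes as a graph and walks them from a single carried-in USB drive. The machine names and topology here are hypothetical, chosen only to illustrate how a nominally isolated engineering PC can still be reached; this is a sketch of the idea, not Stuxnet's code.

    from collections import deque

    # Hypothetical plant network: nodes are machines, edges are the
    # propagation vectors described above (topology is illustrative).
    vectors = {
        "office_pc": [("lan_server", "print spooler / RPC exploit")],
        "lan_server": [("wincc_server", "network share / WinCC database")],
        "wincc_server": [("engineering_pc", "Step 7 project file")],
        "engineering_pc": [("plc", "infected Step 7 program download")],
        "usb_drive": [("engineering_pc", "USB auto-execution exploit")],
        "plc": [],
    }

    def reachable(start):
        """Breadth-first walk over the infection vectors from one foothold."""
        seen, order, queue = {start}, [], deque([start])
        while queue:
            node = queue.popleft()
            for target, vector in vectors.get(node, []):
                if target not in seen:
                    seen.add(target)
                    order.append((target, vector))
                    queue.append(target)
        return order

    # The engineering PC has no internet connection, yet one infected
    # USB drive is enough to reach the PLC.
    for host, via in reachable("usb_drive"):
        print(f"{host} compromised via {via}")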

Stuxnet fingerprints specific PLC configurations that use the Profibus industrial network for distributed I/O. If the fingerprint does not match the target configuration, Stuxnet remains benign. If it matches, the code on the Siemens PLCs is modified via the infected Step 7 programming software and the changes are hidden. The modified code prevents the original code from running as intended, causing the plant equipment to operate incorrectly and potentially sabotaging the system under control. This is achieved by interrupting the processing of code blocks, injecting network traffic on the Profibus network and modifying output bits of the PLC and distributed network I/O.
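
This fingerprint-gated behaviour amounts to a simple guard around the payload, as the following Python sketch shows. The configuration fields and target values are hypothetical placeholders, not Stuxnet's actual fingerprint, and the payload here only reports the decision.

    # Illustrative fingerprint gate: stay benign unless the discovered
    # configuration matches the target (field names and values are
    # hypothetical, not Stuxnet's actual fingerprint).
    TARGET_FINGERPRINT = {
        "cpu_family": "S7-300",
        "fieldbus": "Profibus",
        "io_modules": 6,
    }

    def matches_target(config):
        """Compare a discovered PLC configuration against the target."""
        return all(config.get(k) == v for k, v in TARGET_FINGERPRINT.items())

    def payload(config):
        if not matches_target(config):
            return "benign: fingerprint mismatch, no action taken"
        # On a match, the real worm modified code blocks and hid the
        # changes; this sketch only reports the decision.
        return "match: code blocks would be modified and concealed"

    print(payload({"cpu_family": "S7-400", "fieldbus": "Profibus", "io_modules": 6}))
    print(payload({"cpu_family": "S7-300", "fieldbus": "Profibus", "io_modules": 6}))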

Automation security is an issue even for organisations that do not consider their systems critical, but would regard loss of system availability as a risk. Nefarious intent might not be considered a real threat; the actions of well-intentioned, disgruntled or former employees may be more prominent risks, and viruses can cause havoc without being targeted at control systems at all. Stuxnet changed this: as the first virus to target PLCs, it led many IT journalists, unfamiliar with such systems, to cover it with little depth, often concentrating on the Trojan element and on distribution via Windows and database exploits while missing the unique real-world element.

According to the US National Institute of Standards and Technology (NIST) guide to ICS security, potential incidents may include:

  • Blocked or delayed flow of information through ICS networks, which could disrupt ICS operation.

  • Unauthorised changes to instructions, commands, or alarm thresholds, which could damage, disable, or shut down equipment, create environmental impacts, and/or endanger human life.

  • Inaccurate information sent to system operators, either to disguise unauthorised changes or to cause the operators to initiate inappropriate actions, which could have various negative effects.

  • ICS software or configuration settings modified, or ICS software infected with malware, which could have various negative effects.

  • Interference with the operation of safety systems, which could endanger human life.

A student of military history will tell you that the military identified lessons learned from past operations; however, such lessons were not always applied, often because "corporate memory" was lost, amongst other reasons. Nowadays, lessons are said to be identified, not learned, removing the default assumption. Those responsible for automation systems and security would be well advised to identify the appropriate lessons to be learned, whilst avoiding potentially damaging assumptions.

There are many sources of suitable advice, such as that from the UK Centre for the Protection of National Infrastructure, in a series of process control and SCADA security good-practice guidelines founded on three principles:

  1. Protect, detect and respond. It is important to be able to detect possible attacks and respond in an appropriate manner in order to minimise the effects.

  2. Defence in depth. No single security measure is foolproof, as vulnerabilities and weaknesses may be identified at any point in time. To reduce these risks, implementing multiple protection measures in series avoids single points of failure (see the sketch following this list).

  3. Technical, procedural and managerial protection measures. Technology is insufficient on its own to provide robust protection.
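
To see why measures in series help, consider a simple Python sketch that assumes, simplistically, that each layer fails independently: the probability that an attack bypasses every layer is the product of the per-layer bypass probabilities. The layer names and figures below are hypothetical, chosen only to illustrate the arithmetic, not measured effectiveness data.

    from math import prod

    # Hypothetical probabilities that an attack gets past each measure
    # (illustrative numbers only, not measured data).
    layers = {
        "firewall / network segmentation": 0.10,
        "host patching and hardening": 0.20,
        "application whitelisting": 0.15,
        "operator detection and response": 0.30,
    }

    # With independent layers in series, an attack succeeds only if it
    # bypasses every one of them.
    p_breach = prod(layers.values())
    print(f"Probability all layers are bypassed: {p_breach:.4%}")  # 0.0900%

    # Any single layer on its own is a single point of failure.
    for name, p in layers.items():
        print(f"{name} alone: {p:.0%} chance of bypass")

Even with each layer letting one attack in ten or worse through, the combined chance of a complete bypass in this illustration is below one in a thousand, whereas relying on any single layer leaves a 10-30 per cent exposure.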

Recommendations from NIST include:

  • Restricting physical access to the ICS network and devices.

  • Protecting individual ICS components from exploitation. This includes deploying security patches in as expeditious a manner as possible, after testing.

  • Disabling all unused ports and services.

  • Restricting ICS user privileges to only those that are required.

  • Tracking and monitoring audit trails, and using security controls such as anti-virus and file-integrity checking software, where feasible, to prevent, deter, detect and mitigate malware (a sketch of such an integrity check follows this list).

  • Maintaining functionality during adverse conditions. This involves designing the ICS so that each critical component has a redundant counterpart. Additionally, if a component fails, it should fail in a manner that does not generate unnecessary traffic on the ICS or other networks, or cause another problem elsewhere, such as a cascading event.

  • Restoring systems after an incident. Incidents are inevitable and an incident response plan is essential.
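
As a concrete instance of the file-integrity checking NIST recommends, the minimal Python sketch below hashes a set of monitored files and compares them against a stored baseline. The paths and baseline file name are placeholders; a production tool would also need to protect the baseline itself from tampering.

    import hashlib
    import json
    from pathlib import Path

    BASELINE = Path("integrity_baseline.json")  # placeholder path
    # Placeholder files to watch, e.g. PLC projects and HMI configuration.
    MONITORED = [Path("plc_project.s7p"), Path("hmi_config.xml")]

    def digest(path):
        """SHA-256 hash of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def record_baseline():
        """Snapshot the current hashes of the monitored files."""
        snapshot = {str(p): digest(p) for p in MONITORED}
        BASELINE.write_text(json.dumps(snapshot, indent=2))

    def check():
        """Return the monitored files whose contents have changed."""
        baseline = json.loads(BASELINE.read_text())
        return [str(p) for p in MONITORED if baseline.get(str(p)) != digest(p)]

    if __name__ == "__main__":
        if not BASELINE.exists():
            record_baseline()
        for changed in check():
            print(f"ALERT: {changed} differs from recorded baseline")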

Standards in this area are developing rapidly, including work by the US-based International Society of Automation (ISA), which has published ISA-99 Parts 1 and 2 dealing with industrial automation and control systems security. Part 1 serves as the foundation for all subsequent standards in the ISA-99 series. Part 3 will make recommendations for operating a manufacturing and control systems security program, and Part 4 will provide information on specific security requirements for manufacturing and control systems. Meanwhile, the IEC is also working on automation security standards and is incorporating the work being done in ISA. IEC 62443, industrial communication networks: network and system security, will form a series of standards. Part 1 covers concepts and models, Part 2 provides guidelines for establishing an industrial automation and control system security program and Part 3 describes security technologies for industrial automation and control systems.

The most significant lesson learned from Stuxnet concerns the human element: technology alone is not sufficient. The essential components are the procedural and management aspects of an automation security program, in combination with the necessary technical measures. The challenge is to develop a sustainable approach, with a continuous process of assessment, risk management, adjustment and review in light of emerging vulnerabilities, threats and consequences.

Richard Piggin, Network and Security Consultant, Northamptonshire, UK
