This incident began when backup mode was activated for the flight data processing system (FDPS) local area network at a major European airport. Aircraft already in the system were unaffected; however, some flight data was not displayed for aircraft entering the system. The ANSP's engineers worked to restore the system but could not determine the root cause of the malfunction. The airport continued to operate, but with capacity restrictions. A multi-party team was formed from the engineers, the system suppliers and their sub-contractors.
A similar, intermittent malfunction occurred two days after the first problems. However, normal levels of operation were resumed after some suspected network components had been replaced and the system seemed to have stabilised. As before, engineers continued to work on identifying the root causes. This task was complicated because it proved to be very difficult to replicate the observed symptoms. After 28 days without any subsequent problems, the malfunction occurred once again. The subsequent report into the failure noted that “the determination of capacity was based on continuous risk
assessments of the system performance and the technical and operational mitigations put in place to ensure safe operations”. As before, capacity was gradually increased once the system seemed to have stabilised. Additional
personnel joined the investigation team, but they still faced significant difficulties identifying the causes of the malfunction given that the FDPS local area network was still in operational use.
Monitoring systems were deployed and operational changes were introduced to ensure that aircraft were not in holding patterns during any potential future malfunction.
HUMAN FACTORS ISSUES OF SYSTEMS ENGINEERING
The increasing complexity of safety-critical applications has
led to the introduction of decision support tools in the
transportation and process industries. Automation has also
been introduced to support operator intervention in safety-critical applications. These innovations help to reduce overall operator workload and filter application data to maximise the finite cognitive and perceptual resources of system operators. However, these benefits do not come without a cost.
Increased computational support for the end-users of safety-critical applications leads to increased reliance on engineers to monitor and maintain automated systems and decision support tools. This paper argues that, by focussing on the end-users of complex applications, previous research has tended to neglect the demands being placed on systems engineers.
The argument is illustrated by discussing three recent accidents. The paper concludes by presenting a possible strategy for building and using highly automated systems, based on increased attention from management and regulators, improvements in competency and training for technical staff, sustained support for engineering team resource management, and the development of incident reporting systems for infrastructure failures. This paper represents preliminary work, on which we seek comments.