This report covers a recurring type of incident, but it is probably the worst outcome to date. The background issues leading to this incident are interesting. As with the previous incident, and most incidents, human factors are at the heart of it. The engineering error within this report has been examined in great detail.
Aircraft checks are often conducted overnight, at times when humans experience a well-known natural decrease in alertness. Rostering and the monitoring of roster patterns are as important as the individual's responsibility to manage their rest, to report when fatigued, and to be confident that they can do so without fear.
In this incident the engineers went to finish some work but approached the wrong aircraft. Despite there being several clues that this was the wrong aircraft, they had assumed in their minds that these clues were the result of other work that had not in fact taken place. The information provided to assist the engineers in locating the aircraft was more often than not incorrect and required updating, so the process was not very robust, leading to the wrong aircraft being attended to. This situation was not helped by the wrong aircraft location information being supplied initially, nor by the informal work-arounds then employed to correct the issue. Failing to note the aircraft markings, the engineers returned to an aircraft that, in common with the incident aircraft, was one of only two aircraft on which they were required to conduct a weekly check during that shift. The tech logs had been removed from the aircraft during this time; this fact may again have provided another clue that the wrong aircraft was being worked on.
It was later found that incidents in which engineers attend to the wrong aircraft were not uncommon, but because they had had no serious consequences they had not been reported. Had there been reports, the number of these events might have led to changes and improvements that could have prevented this incident.
The internal quality audit of the processes employed tended to look at outputs, such as the aircraft's physical condition after the check and the correct completion of paperwork, rather than the robustness of, and adherence to, the process itself. As a result, the audit failed to find the weaknesses and hazards created by the informal work-arounds employed by the engineers. This is an important lesson not just for any service provider's auditors, but also for regulators. Having a process may earn a tick on the audit checklist, but does the process work, and is it adhered to? Processes can be very difficult to create and manage, so it is important to verify that they actually provide the intended protection against errors and mistakes. If people find they need to work around a procedure, this needs to be highlighted.
The report contains many other points of interest. On the flight operations side, there was some discussion of the crew's actions, but this was not examined in as much depth. It would have been interesting to have examined the human factors in the handling of the incident as well, to draw out further lessons.
It is a good idea to invest some time when preparing training, safety meetings and briefings to include a review of such incidents; sometimes it is the small details that prove most useful to your operation. Doing this not only promotes awareness of potential hazards, but also provides an opportunity for discussion of any hazards or issues within your current operation. Matters that may not have been reported may well then be highlighted for review and possible action.
Read the full report on the accident to Airbus A319-131 at London Heathrow Airport: https://www.gov.uk/aaib-reports/aircraft-accident-report-1-2015-airbus-a319-131-g-euoe-24-may-2013.
ASSI is not responsible for the content of external websites.