
Standardization/Evaluation: Training's Evil Twin

In the wake of the Germanwings Flight 9525 crash, in which copilot Andreas Lubitz intentionally flew his Airbus A320-200 into the French Alps, killing all 150 people onboard, my previous Robservation focused on the aviation industry's need to develop and maintain robust selection and training programs. In addition to those two aspects, however, any organization that wants to succeed must also implement training's evil twin: Standardization/Evaluation. I say evil with tongue in cheek, but standardization goes beyond training in that it encompasses analysis, enforcement and, if necessary, punishment; punishment in that if you don't make the standard, you're finished.

In my 30 years of flying, standardization, while admittedly creating consternation and many a restless night, continues to serve as the second lynchpin of safety and mission effectiveness. In the flying world, standardization culminates in annual checkrides. During this evaluation, everything the pilot does is watched and graded by the evaluator, and having been on both sides of the checkride fence, I can tell you that, much like a Christmas present, it is always better to give than to receive.

What is evaluated on a checkride depends on the type of aircraft a pilot flies. Civilian and military requirements differ greatly, but while flying the NATO AWACS, for example, I received at least two checkrides per year. One was a four-hour simulator event from hell in which I was tested on dozens of normal and emergency procedures. The other consisted of flying the airplane plus a written or oral test on what we called Bold Face, or immediate action requirements, along with aircraft limitations. A sample (non-AWACS) Bold Face would be:

Two Engine Flameout

1. IGNITION switch . . . . . . . . ON

2. START PUMP switch . . . . . ON

You would have to regurgitate this exactly, and woe be to the Joe who flubbed either this or the limitations. No matter how well you may have performed in the jet, you would be given the opportunity to shine later in the month when you were forced to re-accomplish the entire checkride. Likewise, in the simulator checkride, you always had the opportunity to actually practice each of the Bold Face items.

Once I became an instructor pilot, annual instructor requirements had to be graded as well. Also, in the AWACS world, if you were air refueling qualified, you had to "hit" a tanker and stay on the boom for a specified amount of time. In other words, there was a lot to accomplish. Guys in the tactical world dropping bombs and the like have a whole other set of annual requirements to fulfill.

Each of these "tests" consisted of mission planning, ground operations, all phases of flight including take-offs, landings, approaches and air refueling, plus post-flight and debrief. In all, 30 or more items were watched by an evaluator pilot and then graded Level 1, 2 or 3 (pass, pass but needs work, or fail). A Level 3 in certain items such as "Judgment" meant an automatic failure for the entire ride. Failure for any reason also meant mandatory retraining and then a "recheck" consisting of re-accomplishing the failed event or possibly the entire checkride.
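For the curious, here is how that pass/fail logic might be sketched in a few lines of Python. The item names, the critical-item list and the recheck wording are simplified stand-ins of my own invention, not the actual grading regulation:

# Simplified sketch of checkride grading: Level 1 = pass, Level 2 = pass
# but needs work, Level 3 = fail. A Level 3 in a critical item such as
# "Judgment" fails the entire ride. The item list is invented for the example.

CRITICAL_ITEMS = {"Judgment"}

def grade_checkride(items):
    """items: dict mapping each graded item to its level (1, 2 or 3)."""
    failed = [name for name, level in items.items() if level == 3]
    if any(name in CRITICAL_ITEMS for name in failed):
        return "Fail", "re-accomplish entire checkride"
    if failed:
        return "Fail", "retrain and recheck: " + ", ".join(failed)
    return "Pass", "no recheck required"

print(grade_checkride({"Ground ops": 1, "Approach": 2, "Landing": 1}))
print(grade_checkride({"Ground ops": 1, "Judgment": 3, "Landing": 2}))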

In the airline world, it is much the same but, as you can imagine, not quite as intense: two days per year in the simulator and an oral exam on Bold Face items, limitations and walk-around slides, where we are shown pictures of the airplane and asked to identify different things in the photo.

As a Standardization and Evaluation pilot, my primary instrument outside of the checkride was the Trend Analysis Tool (TAT). It was here that every item graded on every checkride was logged. If I had 30 pilots in my squadron, some 60 annual checkrides or more would be reflected in the TAT. At a glance, I could see results for ground ops perhaps 60 times during the year, maybe 70 three-engine approaches to a missed approach, how many pilots flew 10 knots slow on approach, or how many pilots successfully hung onto the boom for the requisite amount of time during air refueling. If two out of 30 pilots demonstrated an inability to land the jet on centerline, for example, that would not show a problem, but 15 pilots failing the same task would indicate a trend requiring the training program to address the weakness.
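To put some rough arithmetic behind the idea, the short Python sketch below shows one way per-item results could be rolled up and flagged. The item name, the numbers and the 25-percent threshold are invented for illustration and are not the actual TAT logic:

# Rough illustration of trend analysis over logged checkride items.
# The item name, counts and 25% threshold are made up for the example.

from collections import defaultdict

def find_trends(checkrides, threshold=0.25):
    """checkrides: list of dicts mapping item name -> level (1, 2 or 3)."""
    attempts = defaultdict(int)
    failures = defaultdict(int)
    for ride in checkrides:
        for item, level in ride.items():
            attempts[item] += 1
            if level == 3:           # Level 3 = fail
                failures[item] += 1
    # Flag any item whose failure rate meets or exceeds the threshold.
    return {item: failures[item] / attempts[item]
            for item in attempts
            if failures[item] / attempts[item] >= threshold}

# 2 failures out of 30 centerline landings would not be flagged;
# 15 out of 30 is a trend the training program must address.
rides = ([{"Landing on centerline": 3}] * 15 +
         [{"Landing on centerline": 1}] * 15)
print(find_trends(rides))   # {'Landing on centerline': 0.5}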

This is where standardization and training go hand in hand. While training equips personnel to perform to a certain standard, it is standardization and evaluation that compel training to produce the desired effect. If an organization has an "excellent" training program, exactly how do you know? Without a method of testing, grading and evaluating personnel, you don't. To monitor the success of a training program, regardless of how painful it may be, there needs to be a method in place to evaluate personnel. Personally, this has never been a problem, since training and evaluation have been and continue to be a normal occurrence in the aviation world. But for some in other careers, the thought of being rigidly tested two to three times per year is a scary proposition. I have talked with people like teachers who cringe at the idea of being tested on a regular basis, especially when one's job hangs precariously in the balance. For us, it is a way of life.

This past January and February, Delta went through a Line Operations Safety Audit (LOSA). I was fortunate to be one of 50 LOSA observers. During the first seven weeks of the year, each of the observers accomplished a minimum of 12 safety audits. An observation consisted of the observer sitting in the cockpit and taking note of everything that occurred during a flight. Some flights were short, like Atlanta to Augusta; some were longer, like Atlanta to Los Angeles; and others were long international flights. For the most part, observers watched flights on the airplane we currently fly, but we also conducted a few on airplanes we had flown in the past.

The results of the LOSA will be available later this summer. But for us at the best airline in the country, it was an opportunity to look at our flight operations, identify threats and see how well our crews mitigated those threats. When I say threats, I don't mean folks with bombs, but external factors like weather, icing, rain, snow, gusty winds, ATC issues, unruly passengers, operational interruptions and the like. After an observation, it normally took me three to five hours to write it up. Although that seems like a lot of work, these observations increase our safety and highlight where our standardization may be lacking, which makes this type of safety audit indispensable. This is one of the many reasons our passengers should feel safe when riding in a Delta airplane.

While nobody will disagree that excellent training in any career field is paramount, many still unfortunately oppose standardization, mostly because of its potential punitive aspects. At the same time, however, the success of any operation cannot be left to a training program that has no way of verifying its own validity. Regardless of the profession, developing a program that embraces both training and standardization/evaluation is essential for success.

 
