Flight Deck Automation Issues
Home Page


This website has been upgraded and moved. See http://www.flightdeckautomation.com/fdai.aspx.

Synopsis: This website provides information about the human factors issues of commercial transport aircraft flight deck automation. This home page provides an overview, with links to more detailed information.

Keywords: flight deck, flightdeck, cockpit, aircraft, automation, issues, problems, concerns, flight management system, FMS, FMC, CDU, autopilot, autoflight system, AFS, electronic flight instrument system, EFIS, human factors, ergonomics, aviation safety.

Last update: 4 June 2003. See What's New below.



 
Ken Funk <funkk@engr.orst.edu>
Candy Suroteguh, Griffith Owen, Cherag Sukhia, Ranjit Kurup, Robert Wilson
Oregon State University
Department of Industrial and Manufacturing Engineering
Corvallis, Oregon, USA

Beth Lyall <Beth.Lyall@ResearchIntegrations.com>
Jennifer Wilson, Mary Niemczyk, Rebekah Vint
Research Integrations, Inc.
Tempe, Arizona, USA



 
This work is funded by the Federal Aviation Administration, Office of the Chief Scientific and Technical Advisor for Human Factors (AAR-100). Our technical monitors were John Zalenchak, Tom McCloy, and Eleana Edens. We gratefully acknowledge their encouragement and support.

Any opinions, conclusions, or recommendations expressed in this website are those of the authors and do not necessarily reflect the views of their employers (past or present) or of the Federal Aviation Administration.


Overview


Introduction: Background and Problem

Automation is the allocation of functions to machines that would otherwise be allocated to humans. The term is also used to refer to the machines which perform those functions. Flight deck automation, therefore, consists of machines on the commercial transport aircraft flight deck which perform functions otherwise performed by pilots. Current flight deck automation includes autopilots, flight management systems, electronic flight instrument systems, and warning and alerting systems.

Flight deck automation has generally been well received by pilots and the aviation industry, and accident rates for advanced technology aircraft are generally lower than those of comparable conventional aircraft. Nevertheless, with the advent of advanced technology, so-called "glass cockpit," commercial transport aircraft and the transfer of safety-critical functions away from human control, pilots, scientists, and aviation safety experts have expressed concerns about flight deck automation. For example, Wiener (1989) surveyed a group of pilots of advanced technology commercial transport aircraft and found significant concerns. Wise and his colleagues (1993) found similar concerns among pilots of advanced technology corporate aircraft. Based on incident and accident data, Billings (1991, 1996) cited problems with flight deck automation and proposed a more human-centered approach to design and use. Sarter and Woods (1992, 1994, 1995) have sought to further investigate and verify some of the concerns expressed by pilots and others in a series of studies exploring pilot interaction with automation.

It is widely recognized that flight deck automation human factors issues exist. However, until now no comprehensive list of such issues existed. The lack of such a list has prevented a full understanding of flight deck automation issues and a coordinated effort to address them with limited research, development, manufacturing, operational, and regulatory resources.


Objectives and General Approach

The objectives of our study were to
  1. develop a comprehensive list of flight deck automation human factors issues,
  2. compile a large body of data and other evidence related to those issues, and
  3. disseminate the issues and supporting data to the aviation research, development, manufacturing, operational, and regulatory communities.
Our general approach followed our objectives: Phase 1 addressed objective 1, Phase 2 addressed objective 2, and this website addresses objective 3. The rest of this page describes our methodology and provides links to details of our studies and results.


Phase 1: Identification of Possible Problems and Concerns

To identify flight deck automation issues, in Phase 1 of the study we compiled a list of possible problems with, or concerns about, flight deck automation, as expressed by pilots, scientists, engineers, and flight safety experts. We reviewed 960 source documents, including papers and articles from the scientific literature as well as the trade and popular press, accident reports, incident reports, questionnaires filled out by pilots and others, and documentation from our own analyses. In these source documents, we found more than 2,000 specific citations of 114 possible problems and concerns, which we organized into two taxonomies.
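To make the bookkeeping concrete, the sketch below (Python) shows one way such citation counts could be tallied. The record fields, document labels, and the second issue identifier are hypothetical illustrations, not the project's actual scheme.

  from collections import Counter, defaultdict

  # Each citation links one source document to one possible problem or concern.
  # Document labels and the "issueAAA" identifier are hypothetical placeholders.
  citations = [
      {"source_document": "journal paper 001", "issue_id": "issue095"},
      {"source_document": "accident report 014", "issue_id": "issue095"},
      {"source_document": "pilot questionnaire 017", "issue_id": "issueAAA"},
      # ... more than 2,000 citations drawn from 960 source documents
  ]

  citations_per_issue = Counter(c["issue_id"] for c in citations)
  sources_per_issue = defaultdict(set)
  for c in citations:
      sources_per_issue[c["issue_id"]].add(c["source_document"])

  # Most frequently cited possible problems and concerns.
  print(citations_per_issue.most_common(10))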

It is important to note that in Phase 1 we did not attempt to substantiate the claims made about automation problems. Rather, we merely identified and recorded people's perceptions of problems and their concerns about automation as a prelude to our Phase 2 work.

Phase 1 Details


Phase 2: Compilation of Evidence Related to Issues

In Phase 2 we located and recorded, from a wide variety of sources, evidence related to the possible problems and concerns identified in Phase 1. Because an issue is "[a] point of discussion, debate, or dispute ..." (Morris, 1969), we refer to these possible problems and concerns throughout this website as flight deck automation issues, or just issues, except where referring to the process and results of Phase 1. Associated with each issue is an issue identifier, a short label we use to refer to the issue concisely; an issue statement, which suggests that a problem may exist; and an abbreviated issue statement, a concise yet meaningful summary of the issue. For example:
 
issue identifier: issue095
issue statement: Pilots may not be able to tell what mode or state the automation is in, how it is configured, what it is doing, and how it will behave. This may lead to reduced situation awareness and errors.
abbreviated issue statement: mode awareness may be lacking
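As an illustration only, a single issue record could be represented along the following lines (Python; the class and field names are assumptions for this sketch, not the structure of our database).

  from dataclasses import dataclass

  @dataclass
  class Issue:
      """One flight deck automation issue; field names are illustrative."""
      identifier: str             # concise label, e.g. "issue095"
      statement: str              # suggests that a problem may exist
      abbreviated_statement: str  # concise yet meaningful summary

  issue095 = Issue(
      identifier="issue095",
      statement=("Pilots may not be able to tell what mode or state the "
                 "automation is in, how it is configured, what it is doing, "
                 "and how it will behave. This may lead to reduced situation "
                 "awareness and errors."),
      abbreviated_statement="mode awareness may be lacking",
  )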

The sources we reviewed for evidence included accident reports, documents describing incident report studies, and documents describing scientific experiments, surveys and other studies. We also conducted a survey of individuals with broad expertise related to human factors and flight deck automation. We reviewed these sources for data and other objective information related to an issue. For each instance of this evidence we qualitatively assessed the extent to which it supported one side of the issue or the other, and assigned a numeric strength rating between -5 and +5. We assigned a positive strength rating to evidence supporting that side of the issue suggested by its issue statement (supportive evidence) and a negative strength rating to evidence supporting the other side (contradictory evidence). Due to the nature of the sources we reviewed, we found mostly supportive evidence.
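A minimal sketch of how one instance of evidence and its strength rating might be recorded follows (the field names are assumptions for illustration, not our actual database fields).

  from dataclasses import dataclass

  @dataclass
  class Evidence:
      """One instance of evidence related to an issue; fields are illustrative."""
      issue_id: str   # e.g. "issue095"
      source: str     # accident report, incident study, experiment, survey, ...
      excerpt: str    # the passage or finding bearing on the issue
      strength: int   # -5 to +5

      def is_supportive(self) -> bool:
          # Positive strength supports the side suggested by the issue statement.
          return self.strength > 0

      def is_contradictory(self) -> bool:
          # Negative strength supports the other side of the issue.
          return self.strength < 0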

For each instance of evidence found, we recorded in a database the following:

During the process of collecting and recording evidence, we revised, updated, consolidated, and organized the issues, yielding 92 flight deck automation issues.

Phase 2 Details


Evidence from Experts

We conducted a survey of individuals with broad experience or knowledge related to human factors and flight deck automation. The participants included pilots of several automated aircraft types, university researchers, airline management pilots, industry designers and researchers, and government regulators and researchers. The survey requested general demographic information and then presented 114 statements, one for each of the problems and concerns identified in Phase 1. Each statement was presented as an unqualified assertion that a problem exists, for example, that pilots do lack mode awareness (see above). We asked the participants to rate their level of agreement that the assertion was true, to rate the criticality of the problem, and to provide the basis for their judgement (their own data, the data of others, personal opinion, etc.). We used their agreement ratings as evidence, and the sources they listed helped guide our review of papers and reports describing experiments, surveys, and other studies.
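As an illustration only, per-issue mean agreement and criticality ratings of the kind used later in the meta-analysis could be computed along these lines (the field names, rating values, and scale below are assumed, not taken from the survey instrument).

  from collections import defaultdict
  from statistics import mean

  # Hypothetical responses: one expert's ratings for one assertion.
  responses = [
      {"issue_id": "issue095", "agreement": 4, "criticality": 5, "basis": "own data"},
      {"issue_id": "issue095", "agreement": 3, "criticality": 4, "basis": "opinion"},
      {"issue_id": "issueAAA", "agreement": 2, "criticality": 2, "basis": "others' data"},
  ]

  by_issue = defaultdict(list)
  for response in responses:
      by_issue[response["issue_id"]].append(response)

  # Per-issue means, later used as evidence and as meta-analysis criteria.
  mean_agreement = {i: mean(r["agreement"] for r in rs) for i, rs in by_issue.items()}
  mean_criticality = {i: mean(r["criticality"] for r in rs) for i, rs in by_issue.items()}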

Expert Survey Details


Accident Evidence

We identified 34 aircraft accident reports we thought might contain evidence related to the flight deck automation issues. We were able to obtain 20 of these reports from the US National Transportation Safety Board and other national and international agencies that conduct accident investigations. We reviewed these reports, looking for statements by the investigating board identifying one or more of the flight deck automation issues as contributing to the accident. We found evidence related to flight deck automation issues in 17 of the 20 accident reports we reviewed. In addition to accident reports prepared by official investigating boards, we included several accident reviews in our study. These were reviews conducted by qualified individuals after the official investigations, which benefited from additional information and the perspective offered by the individual's field of technical expertise.

Accident Analysis Details


Incident Evidence

We reviewed eight studies of Aviation Safety Reporting System (ASRS) incident reports, including one we conducted ourselves. In each of the incident studies we reviewed, the investigators selected a set of incident reports from the larger ASRS database based on study-specific criteria, then reviewed the narratives for information identifying and/or describing automation-related issues. We reviewed the investigators' summaries and conclusions in search of evidence for the flight deck automation issues identified earlier in our study. We found evidence in three of the eight incident studies.

Incident Analysis Details


Evidence from Experiments, Surveys, and Other Studies

Based on our Phase 1 bibliography, recommendations from the experts who participated in our survey, and our review of recently published literature, we identified 63 studies of flight deck automation, including experiments, surveys, and other studies. We obtained documentation on each study in the form of papers, technical reports, and World Wide Web pages. We analyzed these documents and found evidence related to the flight deck automation issues in 54 of them.

Study Analysis Details


Evidence From Our Phase 1 Survey

In Phase 1 we conducted a broad survey of pilots, aviation safety experts, and others with knowledge about flight deck automation merely to identify possible problems and concerns. In Phase 2 we reviewed questionnaires returned by pilots. In 21 of them, the pilots provided not only citations of the problems and concerns, but also evidence related to the flight deck automation issues, which we recorded.

Phase 1 Survey Evidence Details


The Flight Deck Automation Issues Database

Most of the information obtained in this study was recorded in a Web-accessible, searchable database. The Flight Deck Automation Issues Database contains the following data:

Flight Deck Automation Issues Database Details


Meta-Analysis

To summarize the data collected in the Flight Deck Automation Issues study and to lay the groundwork for developing recommendations based on our findings, we performed a meta-analysis of all the data we collected. For each issue we compiled the number of citations of the issue collected in Phase 1, the number of instances of evidence collected in Phase 2 (supportive, contradictory, and total), the mean agreement rating given by our experts in the experts survey, the mean criticality rating given by the experts, and the sum of evidence excerpt strengths (i.e., a total "weight" of evidence for each issue). We then ranked the issues by each of these criteria to get different perspectives on the whole set of issues. Finally, to prioritize the issues for solutions and further research, we developed a composite ranking (which we called a "meta-ranking") of the issues based on multiple criteria: number of citations, expert agreement rating, expert criticality rating, and sum of strengths.
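The sketch below illustrates the meta-ranking idea (Python). The metric values, the issue identifiers other than issue095, and the choice to combine criteria by averaging per-criterion ranks are assumptions for illustration; our actual procedure is described on the Meta-Analysis Details page.

  # issue_id: (citations, mean agreement, mean criticality, sum of strengths)
  metrics = {
      "issue095": (120, 4.1, 4.5, 210.0),  # hypothetical values
      "issueAAA": (80, 3.6, 3.9, 150.0),   # hypothetical issue and values
      "issueBBB": (40, 2.8, 3.1, 60.0),    # hypothetical issue and values
  }

  NUM_CRITERIA = 4  # citations, agreement, criticality, sum of strengths

  def rank_by(criterion: int) -> dict:
      """Rank issues on one criterion; rank 1 = strongest indication of a problem."""
      ordered = sorted(metrics, key=lambda issue: metrics[issue][criterion], reverse=True)
      return {issue: position + 1 for position, issue in enumerate(ordered)}

  ranks_per_criterion = [rank_by(c) for c in range(NUM_CRITERIA)]

  # Composite "meta-rank": here taken as the average of the per-criterion ranks.
  meta_rank = {
      issue: sum(r[issue] for r in ranks_per_criterion) / NUM_CRITERIA
      for issue in metrics
  }

  for issue in sorted(meta_rank, key=meta_rank.get):
      print(issue, meta_rank[issue])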

Despite some caveats, we consider those issues with the greatest overall supportive evidence, and especially those ranking highest on multiple criteria, to be problems that require solutions. The issues with the greatest overall contradictory evidence (i.e., those with the lowest sums of strengths) do not appear to be significant problems. The issues that fall between these extremes require further study.

Meta-Analysis Details


Summary, Conclusions and an Invitation

The issues of flight deck automation are well documented and there is evidence related to most of them. In some cases, supportive evidence suggests that problems exist and require solutions. In other cases, the existence of both supportive and contradictory evidence makes the matter less clear, suggesting the need for further clarification. The list of flight deck automation human factors issues and related evidence we compiled in this study should be a valuable resource in the search for solutions and the further clarification of issues. This website makes that information available to the aviation research, development, manufacturing, operational, and regulatory communities. We invite you to use this website and to provide feedback on its contents and format in order to increase its usefulness in improving the safety and effectiveness of commercial air transportation.


Request for User Feedback

Our goal is to make this website a useful tool for the improvement of air transport safety and effectiveness. If you have comments, questions, criticisms, or suggestions about the content or format of this website, please send them to the appropriate web page authors or to the Flight Deck Automation Issues Website Team <fdai@engr.orst.edu>.


Acknowledgements

This work is funded by the Federal Aviation Administration, Office of the Chief Scientific and Technical Advisor for Human Factors (AAR-100). We gratefully acknowledge the many contributions of the two individuals from that office who have served as our technical monitor, originally John Zalenchak and currently Tom McCloy. We also thank our colleague Vic Riley, of Honeywell, Inc., who has assisted us at many stages of the work. Finally, we appreciate the cooperation of the many pilots, researchers, aviation safety professionals, and designers who participated in the research.


References

Billings, C.E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA TM 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C.E. (1996). Human-centered aviation automation: principles and guidelines (NASA TM 110381). Moffett Field, CA: NASA Ames Research Center.

Morris, W. (Ed.). (1969). The American Heritage dictionary of the English language. Boston: Houghton Mifflin.

Sarter, N.B., & Woods, D.D. (1992). Pilot interaction with cockpit automation: Operational experiences with the Flight Management System. International Journal of Aviation Psychology, 2(4), 303-321.

Sarter, N.B., & Woods, D.D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the Flight Management System. International Journal of Aviation Psychology, 4(1), 1-28.

Sarter, N.B., & Woods, D.D. (1995). 'How in the world did we ever get into that mode?' Mode error and awareness in supervisory control. Human Factors, 37(1), 5-19.

Wiener, E.L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (NASA CR 177528). Moffett Field, CA: NASA Ames Research Center.

Wise, J.A., Abbott, D.W., Tilden, D., Dyck, J.L., Guide, P.C., & Ryan, L. (1993, August 27). Automation in corporate aviation: Human factors issues (CAAR-15406-93-1). Daytona Beach, FL: Center for Aviation/Aerospace Research, Embry-Riddle Aeronautical University.


What's New

Following, in reverse chronological order (most recent first), are short descriptions of changes made to this page.

4 Jun 03 - Added a notice that this website has been upgraded and moved to http://www.flightdeckautomation.com/fdai.aspx.

2 Jun 99

12 Apr 99

12 Sep 97