Voting System Standards
This document is part of Agenda Document Number 01-62 on the agenda for consideration at the December 13, 2001, meeting of the Federal Election Commission.
This document provides an overview of the Voting System Standards (the
Standards), developed by the Federal Election Commission (FEC). This overview serves as a companion document for
understanding and interpreting both Volume I, the performance provisions of the Standards,
and Volume II, the testing specifications.
Background
The program to develop and implement performance and test Standards for electronic
voting equipment is over 25 years old. However,
national interest in this program has been renewed as a result of the 2000 Presidential
election.
In 1975, the National Bureau of Standards (now the National Institute of Standards and
Technology) and the Office of Federal Elections (the Office of Election
Administration's predecessor at the General Accounting Office) produced a joint
report, Effective Use of Computing Technology in
Vote Tallying. This report concluded that
a basic cause of computer-related election problems was the lack of appropriate technical
skills at the state and local level to develop or implement sophisticated Standards
against which voting system hardware and software could be tested. A subsequent Congressionally-authorized study
produced by the FEC and the National Bureau of Standards cited a significant number of
technical and managerial problems affecting the integrity of the vote counting process. The report detailed the need for a federal agency
to develop national performance Standards that could be used as a tool by state and local
election officials in the testing, certification, and procurement of computer-based voting
systems.
In 1984, Congress appropriated funds for the FEC to develop voluntary national
Standards for computer-based voting systems. During
this developmental period more than 130 participants, including state and local election
officials, independent technical experts, election system vendors, Congressional staff,
and other interested parties, attended numerous public hearings and reviewed the proposed
criteria for the draft Standards. Prior to
final issuance, the FEC published the draft Standards in the Federal Register and requested that all interested
parties submit formal comments. After
reviewing all responses and incorporating corrections and suitable suggestions, the FEC
formally approved the Performance and Test Standards
for Punchcard, Marksense and Direct Recording Electronic Voting Systems[1]
in January 1990.
The national testing effort is overseen by the National Association of State Election Directors' (NASED) Voting Systems Board, which is composed of election officials and
independent technical advisors (see attachment).[2] NASED has established a process for vendors to
submit their equipment to an Independent Test Authority (ITA) for evaluation against the
Standards. To date, Wyle Laboratories, Inc.,
CIBER, Inc., and SysTest Labs have been certified by NASED to serve as program ITAs for the
testing of hardware and the examination of software.[3]
Since NASED's testing program was initiated in 1994, more than 30 voting systems or
components of voting systems have gone through the NASED testing and qualification
process. In addition, many systems have
subsequently been certified at the state level using the Standards in conjunction with
functional and technical requirements developed by state and local policymakers to address
the specific needs of their jurisdictions.
As the qualification process matured and as qualified systems were used in the
field, the Voting Systems Board, in consultation with the ITAs, was able to identify
certain testing issues that needed to be resolved. Moreover,
rapid advancements in information and personal computer technologies have introduced new
voting system development and implementation scenarios not contemplated by the 1990
Standards.
In 1997, NASED briefed the FEC on the necessity for continued FEC involvement, citing the
importance of keeping the Standards current with the modern and emerging
technologies employed by voting system vendors. Following
a Requirements Analysis released in 1999, the Commission authorized the Office of Election
Administration to revise the Standards to reflect contemporary needs of the elections
community.
Issues Addressed by the Revised Standards
The primary goal of the Standards is to provide a mechanism for state and local election
officials to assure the public of the integrity of computer-based election systems; this
has remained unchanged since 1990. However,
the methods for achieving this goal have broadened over the last decade.
The revised Standards provide a common set of requirements across all voting
technologies, using technology-specific requirements only where essential to address the
specified technology's impact on voting accuracy, integrity, and reliability. The original Standards classified systems as
either Punchcard and Marksense (P&M) or Direct Recording Electronic (DRE) and defined
separate Standards for each technology. The
revised document revises this terminology to specify standards for two separate categories:
paper-based voting systems and DRE voting systems.
Paper-based systems encompass both punchcards and optically scanned ballots. Electronic systems include a broad range of DRE
systems, such as those that use touch screens and/or keyboards to record votes. In addition, voting systems that use electronic
ballots and transmit official vote data from the polling place to another location over a
public network are now designated as Public Network DRE Voting Systems and are subject to
the standards applicable to other DRE systems, and to requirements specific to systems
that use public network telecommunications.
Revised Performance Features
The revised Standards provide new or expanded coverage of the following functional
and technical system capabilities:
· Election Management Functions: Performance requirements are specified for components that define, develop, and maintain election databases; perform election definition and setup functions; format ballots; count votes; consolidate and report results; and maintain audit trails.
· Feedback to Voter: Performance requirements are defined for DRE systems and for paper-based precinct-count systems to provide direct feedback to the voter when an undervote or overvote is detected.
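The undervote/overvote check behind such feedback can be sketched in a few lines. The function and data shapes below are hypothetical illustrations, not drawn from the Standards:

```python
def check_ballot(selections, allowed):
    """Classify each contest on a ballot as an undervote, overvote, or valid.

    selections: contest name -> number of choices the voter marked
    allowed:    contest name -> maximum choices permitted
    (Hypothetical structures; a real system reads these from the ballot
    definition produced during election setup.)
    """
    feedback = {}
    for contest, marked in selections.items():
        limit = allowed[contest]
        if marked == 0:
            feedback[contest] = "undervote"
        elif marked > limit:
            feedback[contest] = "overvote"
        else:
            feedback[contest] = "ok"
    return feedback

# A precinct-count scanner or DRE would surface this result to the voter
# before the ballot is cast.
print(check_ballot({"Mayor": 0, "Council": 3}, {"Mayor": 1, "Council": 2}))
```

A DRE can apply this check interactively as votes are entered; a precinct-count paper system applies it when the ballot is scanned and offers the voter the chance to correct the ballot.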
· Accessibility: Performance requirements are defined so that voting systems can meet the specific needs of voters with disabilities. These requirements were developed by the Access Board, a federal agency responsible for developing accessibility standards, and are based on the accessibility standards for electronic and information technology established in 36 CFR Part 1194 - Electronic and Information Technology Accessibility Standards, which implement Section 508 of the Rehabilitation Act Amendments of 1998. The requirements provide common standards that must be met by all voting devices claiming accessibility, as well as specific standards for various types of DRE voting systems.
· Audit Trails: Performance requirements for audit trails are strengthened to address the full range of election management functions, including ballot definition and election programming.
· Telecommunications: Performance requirements are defined for hardware and software components of voting systems that transmit voting-related information using public telecommunications components. These requirements apply both to systems in which data is carried between devices at a single site and to systems in which data is carried between devices in two geographically distinct locations. Systems must be designed to provide the secure transfer of many distinct types of vote data, including lists of eligible voters, voter authentication information, ballot definition information, and vote transmission and tabulation information. Because existing technology cannot fully prevent unauthorized use of data, the Standards include blanket prohibitions against the communication or transfer of certain types of data via telecommunications under any circumstances.
· Broadcasting of Unofficial Results: Performance requirements are defined for the content and labeling of data provided to the media and other organizations (in reports, data files, or postings to official Web sites) prior to the canvass and certification of election results.
Revised Test Features
The revised Standards also provide a restructured and expanded description of the tests
performed by ITAs:
· Expanded Testing Standards: Additional tests are defined to address the expanded functional and technical requirements for voting systems.
· Stages in the Test Process: The test process is redefined in terms of pre-testing, testing, and post-testing activities.
· Distinction Between Initial Tests and Testing of Modifications to Previously Tested Systems: A voting system remains qualified as long as no modifications are made. Any changes to a system must be submitted to the appropriate ITA. The proper course of action for evaluating the implications of a modification, including the possibility of requiring additional testing, depends on the nature of the changes made by the vendor. The Standards define some criteria for determining the scope of testing for modifications, but the ITA has full discretion to evaluate these criteria against the modifications made to the system.
· Documentation Submitted by Vendors: The description of documentation provided by vendors as part of the Technical Data Package (TDP) is refined to support the collection of all information required by the ITAs to conduct the expanded testing.
Revised Organizational Features
The Standards have been reorganized and edited to better suit the needs of different user
groups and to improve readability. These changes include:
· Multiple Volumes: While the original Standards were published as a single document, the revision is divided into two distinct volumes. Volume I, Voting System Performance Standards, provides an introduction to the Standards. It describes the functional and technical requirements for voting systems and provides a summary of the ITAs' testing process. This volume is intended for a general audience, including the public, the press, state and local election officials, and prospective vendors, as well as the ITAs and current vendors already familiar with the Standards and the testing process. Volume II, Voting System Test Standards, is written specifically for jurisdictions purchasing a new system, vendors, and ITAs. This volume provides details of the test process, including the information to be submitted by the vendor to support testing, the development of test plans by the ITAs for initial system testing, the testing of modifications to the system, the conduct of system qualification tests by the ITAs, and the test reports generated by the ITAs.
· Standards, Guidelines, and Fundamental System Development Techniques: The revised Standards clearly identify individual elements as mandatory requirements or recommended guidelines. Requirements are designated in the Standards by the term "shall." The Standards no longer provide descriptions of basic professional system development and managerial techniques, which were included in the 1990 version of the Standards. However, they do provide references to common industry practices and require vendors to submit documentation of their processes for topics such as quality assurance and configuration management.
· Inclusion of Selected Test Procedure Details: Volume II of the Standards specifies the procedures for certain hardware tests of voting devices and vote counting devices. However, many tests of the hardware and software in a voting system cannot be developed without examining the design and configuration of the specific system seeking qualification. Because of this, the Standards give the ITAs wide latitude to develop and perform appropriate tests to fully evaluate a system against the Standards.
Issues Not Addressed by the Revised Standards
The revisions to the Standards do not provide sufficient guidance on a number of important
issues. Some of these issues are outside the scope of the Standards, some are only
partially addressed by the Standards, and some will be addressed in future modules of the
Standards. These issues include:
· Administrative Functions: The revised Standards do not address administrative and managerial practices outside the direct control of the vendor. Election officials have long recognized that adequate Standards and test criteria are only part of the formula for ensuring that votes are cast and counted accurately. The other key component, often overlooked in the rush to embrace technological solutions to election problems, is efficient and consistent administration and management. Effective administration at the local level requires the adoption and implementation of consistent and effective procedures for acquiring, securing, operating, and maintaining a voting system. Although the Standards mandate that vendors document many components of optimal managerial practices, the execution of such procedures is not included in a Standards document that focuses on the system itself.
· Integration with the Voter Registration Database: Local and statewide automated voter registration databases have become more common in recent years as election officials throughout the country attempt to harness innovations in network computing to address increasingly complex voter registration information requirements. In some instances, a voter registration database will contain many data fields common to other election administration applications, including campaign finance recording, election worker management, and the reporting of election results. Although many of these applications are co-dependent, the testing of the design and interface between the voting system and the voter registration database has been specifically excluded from this update of the Standards for practical reasons. First, because such a variety of databases and interfaces is in use among the various states and within the localities of each state, there is no practical and systematic way to test a voting system against all possible combinations and configurations. Second, many of the voting systems in use today still do not include an electronic interface with the voter registration database. When the majority of voting systems and voter registration databases become more seamlessly integrated, a module will be added to the Standards covering their performance, functionality, and testing.
· Commercial Off-the-Shelf (COTS) Products: Some voting systems use one or more readily available COTS hardware devices (such as card readers, printers, or personal computers) or software products (such as operating systems, programming language compilers, or database management systems). These devices and software products are exempted from certain portions of the qualification testing process so long as they are not modified in any manner for use in a voting system.
· Internet Voting: A recent report[4] conducted by the Internet Policy Institute and sponsored by the National Science Foundation in cooperation with the University of Maryland stated:
"Remote Internet voting systems pose significant risk to the integrity of the voting process and should not be fielded for use in public elections until substantial technical and social science issues have been addressed. The security risks associated with these systems are both numerous and pervasive and, in many cases, cannot be resolved using even today's most sophisticated technology."
The findings of this and other studies on Internet voting have led the FEC and NASED to conclude that controls cannot be developed at the present time to make remote Internet voting sufficiently risk-resistant to be confidently used by election officials and the voting public. Therefore, Standards cannot be written for the testing and qualification of these systems. However, the Standards contemplate the development of systems that integrate public telecommunications networks in the poll site setting. These voting systems are considered public network direct recording electronic (DRE) voting systems and must meet the same revised Standards for security, accuracy, and reliability as other similarly defined voting systems. Such systems must additionally meet requirements specific to systems that integrate certain telecommunications components.
· Detailed Human Interface and Usability Standards: Recent controversy over the design of the Presidential ballot in certain jurisdictions has highlighted the importance of ballot design and system usability to both election officials and the general public. As mentioned earlier, the revised Standards cover design and usability in detail as they pertain to disabled voters. Human interface and usability issues for the general voting public are addressed in standards for ballot formatting, which require vendors to have the capability of producing ballots with uniform allocation of space and fonts. However, the FEC recognizes that neither the original Standards nor the update does an adequate job of providing detailed test Standards for interface and usability. The FEC has begun development of the next module of the Standards, which will focus on interface and usability issues such as typography, layout, use of graphic elements, sequencing, screen flow (for electronic systems), language simplification, and user testing.
· Human Error Rate vs. System Error Rate: In the Standards, the term "error rate" applies to errors introduced by the system, not by a voter's actions, such as the failure to mark a ballot in accordance with instructions. The updated accuracy standard is defined as a ballot position error rate. The error rate applies to specific system functions, such as recording a vote, storing a vote, and consolidating votes into vote totals. Each location on a paper ballot card or electronic ballot image where a vote may be entered represents a ballot position. The Standards set two error rates:
1. Target error rate: a maximum of one error in 10,000,000 ballot positions, and
2. Testing error rate: a maximum acceptable rate in the test process of one error in 500,000 ballot positions.
This system error rate applies to data that is entered into the system in conformance with the
applicable instructions and specifications. Further research on human interface and
usability issues is needed to enable the development of Standards for error rates that
account for human error.
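The two published rates can be compared against a test result with simple arithmetic. The sketch below uses hypothetical function names, and the single-ratio pass/fail check is a simplification; the ITAs' actual acceptance procedure is statistical rather than a plain comparison:

```python
TARGET_RATE = 1 / 10_000_000   # target: one error per 10,000,000 ballot positions
TESTING_RATE = 1 / 500_000     # testing: maximum acceptable rate in the test process

def observed_rate(errors, positions):
    """Error rate per ballot position observed during a test run."""
    return errors / positions

def passes_test(errors, positions):
    """Simplified pass/fail check against the testing threshold.

    Illustrates the two published rates only; the Standards' actual
    acceptance procedure is statistical, not a single ratio.
    """
    return observed_rate(errors, positions) <= TESTING_RATE

# Hypothetical test runs: ~6.5e-7 per position passes the 2e-6 threshold,
# while 5e-6 per position does not.
print(passes_test(1, 1_548_289))   # True
print(passes_test(5, 1_000_000))   # False
```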
Summary of Content of Volume I
Volume I contains performance standards for electronic components of voting
systems. In addition to a glossary (Appendix A) and applicable references (Appendix B),
Volume I is divided into nine sections. Its introductory section addresses the following topics:
Objectives and usage of the Standards;
Development history for the initial Standards;
Update of the Standards;
Accessibility for individuals with disabilities;
Definitions of key terms;
Application of the Standards and test specifications; and
Outline of contents.
Overall Capabilities: These functional capabilities apply throughout the election process. They include security, accuracy, integrity, system auditability, election management system, vote tabulation, ballot counters, telecommunications, and data retention.
Pre-voting Capabilities: These functional capabilities are used to prepare the voting system for voting. They include ballot preparation, the preparation of election-specific software (including firmware), the production of ballots or ballot pages, the installation of ballots and ballot counting software (including firmware), and system and equipment tests.
Voting Capabilities: These functional capabilities include all operations conducted at the polling place by voters and officials including the generation of status messages.
Post-voting Capabilities: These functional capabilities apply after all votes have been cast. They include closing the polling place; obtaining reports by voting machine, polling place, and precinct; obtaining consolidated reports; and obtaining reports of audit trails.
Maintenance, Transportation and Storage Capabilities: These capabilities are necessary to maintain, transport, and store voting system equipment.
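The consolidation step in the post-voting capabilities above amounts to summing per-machine totals into polling place, precinct, and jurisdiction-wide reports. A minimal sketch follows; the data shape is hypothetical, and real systems consolidate from audited machine and precinct reports:

```python
from collections import Counter

def consolidate(machine_totals):
    """Combine per-machine vote totals into one consolidated report.

    machine_totals: a list of dicts, one per voting machine, mapping
    candidate -> votes (a hypothetical shape chosen for illustration).
    """
    totals = Counter()
    for report in machine_totals:
        totals.update(report)   # adds counts candidate by candidate
    return dict(totals)

# Two machines from the same hypothetical precinct:
precinct_reports = [{"Adams": 112, "Burr": 97}, {"Adams": 58, "Burr": 64}]
print(consolidate(precinct_reports))  # {'Adams': 170, 'Burr': 161}
```

The same summation applied at successive levels (machine, polling place, precinct, jurisdiction) yields the hierarchy of reports the Standards require.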
For each functional capability, common standards are specified.
In recognition of the diversity of voting systems, some of the standards have
additional requirements that apply only if the system incorporates certain functions (for
example, voting systems employing telecommunications to transmit voting data) or
configurations (for example, a central count component). Where system-specific standards
are appropriate, common standards are followed by standards applicable to specific
technologies (i.e., paper-based or DRE) or intended use (i.e., central or precinct count).
The requirement that voting systems provide access to individuals with disabilities is one of
the most significant additions to the Standards. The FEC has incorporated specifications
that were developed by the Access Board and are based on the accessibility Standards for
electronic and information technology established in 36 CFR Part 1194 - Electronic and
Information Technology Accessibility Standards, which implement Section 508 of the
Rehabilitation Act Amendments of 1998.
The hardware standards cover a broad range of voting equipment:
For paper ballots: printers, cards, boxes, transfer boxes, and readers;
For electronic systems: ballot displays, ballot recorders, precinct vote control units;
For voting devices: punching and marking devices and electronic recording devices;
Voting booths and enclosures;
Equipment used to prepare ballots, program elections, consolidate and report votes, and perform other elections management activities;
Fixed servers and removable electronic data storage media; and
Printers.
The Standards specify the minimum values for the relevant attributes of hardware, such as:
Accuracy;
Reliability;
Stability under normal environmental operating conditions and when equipment is in storage and transit;
Power requirements and ability to respond to interruptions of power supply;
Susceptibility to interference from static electricity and magnetic fields;
Product marking; and
Safety.
The requirements of this section apply to all software developed for use in voting systems,
including:
Software provided by the voting system vendor and its component suppliers; and
Software furnished by an external provider where the software is potentially used in any way during voting system operation.
The general standards in this section apply to software used to support the broad range of
voting system activities, including pre-voting, voting, and post-voting activities.
System-specific Standards are defined for ballot counting, vote processing, the creation
of an unalterable audit trail, and the generation of output reports and files. Voting
system software is also subject to the security requirements of Section 6.
This section addresses telecommunications hardware and software across a broad range of
technologies such as dial-up communications technologies, high-speed telecommunications
lines (public and private), cabling
technologies, communications routers, modems, modem drivers, channel service units
(CSU)/data service units (DSU), and dial-up networking applications software.
Additionally, this section applies to voting-related transmissions over public networks, such as those
provided by regional telephone companies and long distance carriers. This section also applies to private networks
regardless of whether the network is owned and operated by the election jurisdiction. For systems that transmit data over public
networks, this section applies to telecommunications components installed and operated at
settings supervised by election officials, such as polling places or central offices.
The security standards address the capability of a voting system to:
Establish and maintain controls that can ensure that accidents, inadvertent mistakes, and errors are minimized;
Protect the system from intentional manipulation and fraud;
Protect the system from malicious mischief;
Identify fraudulent or erroneous changes to the system; and
Protect secrecy in the voting process.
These Standards are intended to address a broad range of risks to the integrity of a voting
system. While it is not possible to identify
all potential risks, the Standards identify several types of risk that must be addressed,
including:
Unauthorized changes to system capabilities for defining ballot formats, casting and recording votes, calculating vote totals consistent with defined ballot formats, and reporting vote totals;
Alteration of voting system audit trails;
Altering a legitimately cast vote;
Preventing the recording of a legitimately cast vote;
Introducing data for a vote not cast by a registered voter;
Changing calculated vote totals;
Preventing access to vote data, including individual votes and vote totals, by unauthorized individuals; and
Preventing access to voter identification data and data for votes cast by the voter such that an individual can determine the content of specific votes cast by the voter.
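One common technique for making audit trail alteration detectable is to hash-chain successive entries, so that changing any earlier record breaks every later link. The Standards do not mandate this specific mechanism; the sketch below only illustrates the kind of tamper evidence involved:

```python
import hashlib

def append_entry(log, event):
    """Append an event to a tamper-evident audit trail.

    Each entry's hash covers the previous entry's hash plus the event
    text, so any later alteration breaks the chain. (A hypothetical
    sketch of one tamper-evidence technique.)
    """
    prev = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256((prev + event).encode()).hexdigest()
    log.append({"event": event, "hash": digest})

def verify(log):
    """Recompute the chain and report whether every link still matches."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev + entry["event"]).encode()).hexdigest()
        if expected != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "polls opened 07:00")
append_entry(log, "ballot cast")
print(verify(log))                       # True
log[0]["event"] = "polls opened 06:00"   # simulated tampering
print(verify(log))                       # False
```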
The quality assurance requirements address vendor practices for system development and manufacturing, including:
Development of procedures for identifying and procuring parts and raw materials of the requisite quality, and for their inspection, acceptance, and control.
Documentation of hardware and software development processes.
Identification and enforcement of all requirements for in-process inspection and testing that the manufacturer deems necessary to ensure proper fabrication and assembly of hardware, as well as installation and operation of software or firmware.
Procedures for maintaining all data and records required to document and verify the quality inspections and tests.
The requirements of this section address a broad set of record keeping, audit, and reporting
activities that include:
Identifying discrete system components;
Creating records of formal baselines of all components;
Creating records of later versions of components;
Controlling changes made to the system and its components;
Submitting new versions of the system to ITAs;
Releasing new versions of the system to customers;
Auditing the system, including its documentation, against configuration management records;
Controlling interfaces to other systems; and
Identifying tools used to build and maintain the system.
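The baseline and version records listed above can be illustrated with a small data structure. The record shape is hypothetical; actual configuration management records are defined by each vendor's documented practices:

```python
from dataclasses import dataclass, field

@dataclass
class ComponentRecord:
    """Configuration record for one voting system component.

    A hypothetical structure illustrating the activities above: a formal
    baseline plus an auditable history of later versions and the change
    that produced each one.
    """
    name: str
    baseline: str                                 # version of the formal baseline
    versions: list = field(default_factory=list)  # (version, change description)

    def release(self, version, change):
        """Record a new version and the change that produced it."""
        self.versions.append((version, change))

    def current(self):
        """Latest released version, or the baseline if none released."""
        return self.versions[-1][0] if self.versions else self.baseline

tabulator = ComponentRecord("vote-tabulator", baseline="1.0")
tabulator.release("1.1", "corrected report formatting")
print(tabulator.current())   # 1.1
```

An audit against such records checks that the components actually deployed match the versions on file and that every change is accounted for.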
The qualification test process is intended to discover errors that, should they occur in
actual election use, could result in failure to complete election operations in a
satisfactory manner. This section describes the scope of qualification testing, its
applicability to voting system components, the documentation that must be submitted by the
vendor, and the flow of the test process. This section also describes differences between
the test process for initial qualification testing of a system and the testing for
modifications and re-qualification after a qualified system has been modified.
The testing described in this section is performed by an ITA that is certified by NASED. The testing may be conducted by one or more ITAs
for a given system, depending on the nature of tests to be conducted and the expertise of
the certified ITA. The testing process
involves the assessment of:
Absolute correctness of all ballot processing software, for which no margin for error exists;
Operational accuracy in the recording and processing of voting data, as measured by the error rate articulated in Volume I, Section 3;
Operational failure or the number of unrecoverable failures under conditions simulating the intended storage, operation, transportation, and maintenance environments for voting systems, using an actual time-based period of processing test ballots;
System performance and function under normal and abnormal conditions; and
Completeness and accuracy of the system documentation and configuration management records to enable purchasing jurisdictions to effectively install, test, and operate the system.
Summary of Volume II Content
· Section 1 - Introduction: This section provides an overview of Volume II, addressing the following topics:
The objectives of Volume II;
The general contents of Volume II;
The qualification testing focus;
The qualification testing sequence;
The evolution of testing; and
The outline of contents.
· Section 2 - Technical Data Package: This section contains a description of the vendor documentation relating to the voting system that shall be submitted with the system as a precondition for qualification testing. These items are necessary to define the product and its method of operation; to provide the vendor's technical and test data supporting its claims of the system's functional capabilities and performance levels; and to document instructions and procedures governing system operation and field maintenance.
The Technical Data Package (TDP) must contain a complete description of the following
information about the system:
Overall system design, including subsystems, modules, and interfaces;
Specific functional capabilities;
Performance and design specifications;
Design constraints and compatibility requirements;
Personnel, equipment, and facilities necessary for system operation, maintenance, and logistical support;
Vendor practices for assuring system quality during the system's development and subsequent maintenance; and
Vendor practices for managing the configuration of the system during development
and for modifications to the system throughout its life-cycle.
· Section 3 - Functionality Testing: This section contains a description of the testing to be performed by the ITA to confirm the functional capabilities of a voting system submitted for qualification testing. It describes the scope and basis for functional testing and the general sequence of tests within the overall test process, and it provides guidance on testing for accessibility. It also discusses testing the functionality of systems that operate on personal computers.
· Section 4 - Hardware Testing: This section contains a description of the testing to be performed by the ITAs to confirm the proper functioning of the hardware components of a voting system submitted for qualification testing. This section requires ITAs to design and perform procedures that subject the voting system hardware to both operating and non-operating environmental tests.
Hardware testing begins with non-operating tests that require the use of an environmental test
facility. These are followed by operating
tests that are performed partly in an environmental facility and partly in a standard test
laboratory or shop environment. The
non-operating tests are intended to evaluate the ability of the system hardware to
withstand exposure to various environmental conditions incidental to voting system
storage, maintenance, and transportation. The
procedures are based on test methods contained in Military Standard (MIL-STD) 810D,
modified where appropriate, and include such tests as: bench handling, vibration, low and
high temperature, and humidity.
The operating tests involve running the system for an extended period of time under varying
temperatures and voltages. This ensures that
the hardware meets or exceeds the minimum requirements for reliability, data reading, and
processing accuracy contained in Section 3 of Volume I.
Although the procedure emphasizes equipment operability and data accuracy, it is
not an exhaustive evaluation of all system functions.
Moreover, the severity of the test conditions has in most cases been reduced from
that specified in the Military Standards to reflect commercial, rather than military,
practice.
· Section 5 - Software Testing: This section contains a description of the testing to be performed by the ITAs to confirm the proper functioning of the software components of a voting system submitted for qualification testing. It describes the scope and basis for software testing, the initial review of documentation to support software testing, and the review of voting system source code.
The software qualification tests encompass a number of interrelated examinations. The
primary objective is an in-depth examination of all ballot processing source code for
absolute logical correctness, for its modularity and overall construction, and for
conformance with the documentation provided by the vendor. Part of
this code examination will focus on the identification of hidden code. The code inspection will be followed by a series
of functional tests to verify the proper performance of all system functions controlled by
the software.
Conclusion
Almost eighty percent of the States have adopted the Standards. The Commission recommends that individual States
continue to decide how best to adopt and implement the Standards to aid in the procurement
of electronic voting systems. States are also
encouraged to develop and implement individual certification processes to make sure that
qualified voting systems can meet the unique and particular demands of the purchasing
jurisdiction.
As a whole, implementation of the original Standards, combined with NASED's national
testing program, has allowed election officials to be more confident than ever that the
voting systems they procure will work accurately and reliably. Although the requirements for voting systems and
the technologies used to build them have evolved over the past decade, the revised
Standards will close the gaps in the Standards for system performance and testing. In order to prevent technology gaps in the future,
the FEC and NASED are committed to making the Standards a living document capable of being
updated in an expedited manner to respond to constantly evolving technology. Such technological innovation should be embraced
in order to maintain a sophisticated and robust voting systems industry.
[1] This document is generally referred to as
the Voting Systems Standards.
[2] The FEC's Director of the Office of Election Administration and representatives from IEEE, Wyle Laboratories, SysTest, and Ciber serve as ex-officio members.
[3] NASED also continues to encourage other
qualified testing facilities to request certification as Independent Test Authorities.
[4] Report of the National Workshop on Internet Voting: Issues and Research Agenda, Internet Policy Institute, March 2001.