Information Security
The NSA Security Manual

[NOTE: This file was retyped from an anonymous photocopied submission. The authenticity of it was not verified.]

Security Guidelines

This handbook is designed to introduce you to some of the basic security principles and procedures with which all NSA employees must comply. It highlights some of your security responsibilities, and provides guidelines for answering questions you may be asked concerning your association with this Agency. Although you will be busy during the forthcoming weeks learning your job, meeting co-workers, and becoming accustomed to a new work environment, you are urged to become familiar with the security information contained in this handbook. Please note that a listing of telephone numbers is provided at the end of this handbook should you have any questions or concerns.

Introduction

In joining NSA you have been given an opportunity to participate in the activities of one of the most important intelligence organizations of the United States Government. At the same time, you have also assumed a trust which carries with it a most important individual responsibility: the safeguarding of sensitive information vital to the security of our nation. While it is impossible to estimate in actual dollars and cents the value of the work being conducted by this Agency, the information to which you will have access at NSA is without question critically important to the defense of the United States. Since this information may be useful only if it is kept secret, it requires a very special measure of protection. The specific nature of this protection is set forth in various Agency security regulations and directives. The total NSA Security Program, however, extends beyond these regulations. It is based upon the concept that security begins as a state of mind.
The program is designed to develop an appreciation of the need to protect information vital to the national defense, and to foster the development of a level of awareness which will make security more than routine compliance with regulations. At times, security practices and procedures cause personal inconvenience. They take time and effort and on occasion may make it necessary for you to voluntarily forego some of your usual personal prerogatives. But your compensation for the inconvenience is the knowledge that the work you are accomplishing at NSA, within a framework of sound security practices, contributes significantly to the defense and continued security of the United States of America.

I extend to you my very best wishes as you enter upon your chosen career or assignment with NSA.

Philip T. Pease
Director of Security

1 INITIAL SECURITY RESPONSIBILITIES

1.1 Anonymity

Perhaps one of the first security practices with which new NSA personnel should become acquainted is the practice of anonymity. In an open society such as ours, this practice is necessary because information which is generally available to the public is available also to hostile intelligence. Therefore, the Agency mission is best accomplished apart from public attention. Basically, anonymity means that NSA personnel are encouraged not to draw attention to themselves nor to their association with this Agency. NSA personnel are also cautioned neither to confirm nor deny any specific questions about NSA activities directed to them by individuals not affiliated with the Agency. The ramifications of the practice of anonymity are rather far reaching, and its success depends on the cooperation of all Agency personnel. Described below you will find some examples of situations that you may encounter concerning your employment and how you should cope with them.
Beyond the situations cited, your judgement and discretion will become the deciding factors in how you respond to questions about your employment.

1.2 Answering Questions About Your Employment

Certainly, you may tell your family and friends that you are employed at or assigned to the National Security Agency. There is no valid reason to deny them this information. However, you may not disclose to them any information concerning specific aspects of the Agency's mission, activities, and organization. You should also ask them not to publicize your association with NSA. Should strangers or casual acquaintances question you about your place of employment, an appropriate reply would be that you work for the Department of Defense. If questioned further as to where you are employed within the Department of Defense, you may reply, "NSA." When you inform someone that you work for NSA (or the Department of Defense) you may expect that the next question will be, "What do you do?" It is a good idea to anticipate this question and to formulate an appropriate answer. Do not act mysteriously about your employment, as that would only succeed in drawing more attention to yourself. If you are employed as a secretary, engineer, computer scientist, or in a clerical, administrative, technical, or other capacity identifiable by a general title which in no way indicates how your talents are being applied to the mission of the Agency, it is suggested that you state this general title. If you are employed as a linguist, you may say that you are a linguist, if necessary. However, you should not indicate the specific language(s) with which you are involved. Avoid using service specialty titles which tend to suggest or reveal the nature of the Agency's mission or specific aspects of your work.
These professional titles, such as cryptanalyst, signals collection officer, and intelligence research analyst, if given verbatim to an outsider, would likely generate further questions which may touch upon the classified aspects of your work. Therefore, in conversation with outsiders, it is suggested that such job titles be generalized. For example, you might indicate that you are a "research analyst." You may not, however, discuss the specific nature of your analytic work.

1.3 Answering Questions About Your Agency Training

During your career or assignment at NSA, there is a good chance that you will receive some type of job-related training. In many instances the nature of the training is not classified. However, in some situations the specialized training you receive will relate directly to sensitive Agency functions. In such cases, the nature of this training may not be discussed with persons outside of this Agency. If your training at the Agency includes language training, your explanation for the source of your linguistic knowledge should be that you obtained it while working for the Department of Defense. You should not draw undue attention to your language abilities, and you may not discuss how you apply your language skill at the Agency. If you are considering part-time employment which requires the use of language or technical skills similar to those required for the performance of your NSA assigned duties, you must report (in advance) the anticipated part-time work through your Staff Security Officer (SSO) to the Office of Security's Clearance Division (M55).

1.4 Verifying Your Employment

On occasion, personnel must provide information concerning their employment to credit institutions in connection with various types of applications for credit. In such situations you may state, if you are a civilian employee, that you are employed by NSA and indicate your pay grade or salary. Once again, generalize your job title.
If any further information is desired by persons or firms with whom you may be dealing, instruct them to request such information by correspondence addressed to: Director of Civilian Personnel, National Security Agency, Fort George G. Meade, Maryland 20755-6000. Military personnel should use their support group designator and address when indicating their current assignment. If you contemplate leaving NSA for employment elsewhere, you may be required to submit a resume/job application, or to participate in extensive employment interviews. In such circumstances, you should have your resume reviewed by the Classification Advisory Officer (CAO) assigned to your organization. Your CAO will ensure that any classified operational details of your duties have been excluded and will provide you with an unclassified job description. Should you leave the Agency before preparing such a resume, you may develop one and send it by registered mail to the NSA/CSS Information Policy Division (Q43) for review. Remember, your obligation to protect sensitive Agency information extends beyond your employment at NSA.

1.5 The Agency And Public News Media

From time to time you may find that the Agency is the topic of reports or articles appearing in public news media: newspapers, magazines, books, radio and TV. The NSA/CSS Information Policy Division (Q43) represents the Agency in matters involving the press and other media. This office serves as the Agency's official media center and is the Director's liaison office for public relations, both in the community and with other government agencies. The Information Policy Division must approve the release of all information for and about NSA, its mission, activities, and personnel. In order to protect sensitive aspects of Agency operations, NSA personnel must refrain from either confirming or denying any information concerning the Agency or its activities which may appear in the public media.
If you are asked about the activities of NSA, the best response is "no comment." You should then notify Q43 of the attempted inquiry. For the most part, public references to NSA are based upon educated guesses. The Agency does not normally make a practice of issuing public statements about its activities.

2 GENERAL RESPONSIBILITIES

2.1 Espionage And Terrorism

During your security indoctrination and throughout your NSA career you will become increasingly aware of the espionage and terrorist threat to the United States. Your vigilance is the best single defense in protecting NSA information, operations, facilities and people. Any information that comes to your attention that suggests to you the existence of, or potential for, espionage or terrorism against the U.S. or its allies must be promptly reported by you to the Office of Security. There should be no doubt in your mind about the reality of the threats. You are now affiliated with the most sensitive agency in government and are expected to exercise vigilance and common sense to protect NSA against these threats.

2.2 Classification

Originators of correspondence, communications, equipment, or documents within the Agency are responsible for ensuring that the proper classification, downgrading information and, when appropriate, proper caveat notations are assigned to such material. (This includes any handwritten notes which contain classified information.) The three levels of classification are Confidential, Secret and Top Secret. The NSA Classification Manual should be used as guidance in determining proper classification. If after review of this document you need assistance, contact the Classification Advisory Officer (CAO) assigned to your organization, or the Information Policy Division (Q43).

2.3 Need-To-Know

Classified information is disseminated only on a strict "need-to-know" basis.
The "need-to-know" policy means that classified information will be disseminated only to those individuals who, in addition to possessing a proper clearance, have a requirement to know this information in order to perform their official duties (need-to-know). No person is entitled to classified information solely by virtue of office, position, rank, or security clearance. All NSA personnel have the responsibility to assert the "need-to-know" policy as part of their responsibility to protect sensitive information. Determination of "need-to-know" is a supervisory responsibility. This means that if there is any doubt in your mind as to an individual's "need-to-know," you should always check with your supervisor before releasing any classified material under your control.

2.4 For Official Use Only

Separate from classified information is information or material marked "FOR OFFICIAL USE ONLY" (such as this handbook). This designation is used to identify that official information or material which, although unclassified, is exempt from the requirement for public disclosure of information concerning government activities and which, for a significant reason, should not be given general circulation. Each holder of "FOR OFFICIAL USE ONLY" (FOUO) information or material is authorized to disclose such information or material to persons in other departments or agencies of the Executive and Judicial branches when it is determined that the information or material is required to carry out a government function. The recipient must be advised that the information or material is not to be disclosed to the general public. Material which bears the "FOR OFFICIAL USE ONLY" caveat does not come under the regulations governing the protection of classified information. The unauthorized disclosure of information marked "FOR OFFICIAL USE ONLY" does not constitute an unauthorized disclosure of classified defense information.
However, Department of Defense and NSA regulations prohibit the unauthorized disclosure of information designated "FOR OFFICIAL USE ONLY." Appropriate administrative action will be taken to determine responsibility and to apply corrective and/or disciplinary measures in cases of unauthorized disclosure of information which bears the "FOR OFFICIAL USE ONLY" caveat. Reasonable care must be exercised in limiting the dissemination of "FOR OFFICIAL USE ONLY" information. While you may take this handbook home for further study, remember that it does contain "FOR OFFICIAL USE ONLY" information which should be protected.

2.5 Prepublication Review

All NSA personnel (employees, military assignees, and contractors) must submit for review any planned articles, books, speeches, resumes, or public statements that may contain classified, classifiable, NSA-derived, or unclassified protected information, e.g., information relating to the organization, mission, functions, or activities of NSA. Your obligation to protect this sensitive information is a lifetime one. Even when you resign, retire, or otherwise end your affiliation with NSA, you must submit this type of material for prepublication review. For additional details, contact the Information Policy Division (Q43) for an explanation of prepublication review procedures.

2.6 Personnel Security Responsibilities

Perhaps you can recall your initial impression upon entering an NSA facility. Like most people, you probably noticed the elaborate physical security safeguards: fences, concrete barriers, Security Protective Officers, identification badges, etc. While these measures provide a substantial degree of protection for the information housed within our buildings, they represent only a portion of the overall Agency security program. In fact, vast amounts of information leave our facilities daily in the minds of NSA personnel, and this is where our greatest vulnerability lies.
Experience has indicated that because of the vital information we work with at NSA, Agency personnel may become potential targets for hostile intelligence efforts. Special safeguards are therefore necessary to protect our personnel. Accordingly, the Agency has an extensive personnel security program which establishes internal policies and guidelines governing employee conduct and activities. These policies cover a variety of topics, all of which are designed to protect both you and the sensitive information you will gain through your work at NSA.

2.7 Association With Foreign Nationals

As a member of the U.S. Intelligence Community and by virtue of your access to sensitive information, you are a potential target for hostile intelligence activities carried out by or on behalf of citizens of foreign countries. A policy concerning association with foreign nationals has been established by the Agency to minimize the likelihood that its personnel might become subject to undue influence or duress or targets of hostile activities through foreign relationships. As an NSA affiliate, you are prohibited from initiating or maintaining associations (regardless of the nature and degree) with citizens or officials of communist-controlled, or other countries which pose a significant threat to the security of the United States and its interests. A comprehensive list of these designated countries is available from your Staff Security Officer or the Security Awareness Division. Any contact with citizens of these countries, no matter how brief or seemingly innocuous, must be reported as soon as possible to your Staff Security Officer (SSO). (Individuals designated as Staff Security Officers are assigned to every organization; a listing of Staff Security Officers can be found at the back of this handbook.) Additionally, close and continuing associations with any non-U.S. citizens which are characterized by ties of kinship, obligation, or affection are prohibited.
A waiver to this policy may be granted only under the most exceptional circumstances when there is a truly compelling need for an individual's services or skills and the security risk is negligible. In particular, a waiver must be granted in advance of a marriage to or cohabitation with a foreign national in order to retain one's access to NSA information. Accordingly, any intent to cohabitate with or marry a non-U.S. citizen must be reported immediately to your Staff Security Officer. If a waiver is granted, future reassignments both at headquarters and overseas may be affected. The marriage or intended marriage of an immediate family member (parents, siblings, children) to a foreign national must also be reported through your SSO to the Clearance Division (M55). Casual social associations with foreign nationals (other than those of the designated countries mentioned above) which arise from normal living and working arrangements in the community usually do not have to be reported. During the course of these casual social associations, you are encouraged to extend the usual social amenities. Do not act mysteriously or draw attention to yourself (and possibly to NSA) by displaying an unusually wary attitude. Naturally, your affiliation with the Agency and the nature of your work should not be discussed. Again, you should be careful not to allow these associations to become close and continuing to the extent that they are characterized by ties of kinship, obligation, or affection. If at any time you feel that a "casual" association is in any way suspicious, you should report this to your Staff Security Officer immediately. Whenever any doubt exists as to whether or not a situation should be reported or made a matter of record, you should decide in favor of reporting it. In this way, the situation can be evaluated on its own merits, and you can be advised as to your future course of action.
2.8 Correspondence With Foreign Nationals

NSA personnel are discouraged from initiating correspondence with individuals who are citizens of foreign countries. Correspondence with citizens of communist-controlled or other designated countries is prohibited. Casual social correspondence, including the "penpal" variety, with other foreign acquaintances is acceptable and need not be reported. If, however, this correspondence should escalate in its frequency or nature, you should report that through your Staff Security Officer to the Clearance Division (M55).

2.9 Embassy Visits

Since a significant percentage of all espionage activity is known to be conducted through foreign embassies, consulates, etc., Agency policy discourages visits to embassies, consulates or other official establishments of a foreign government. Each case, however, must be judged on the circumstances involved. Therefore, if you plan to visit a foreign embassy for any reason (even to obtain a visa), you must consult with, and obtain the prior approval of, your immediate supervisor and the Security Awareness Division (M56).

2.10 Amateur Radio Activities

Amateur radio (ham radio) activities are known to be exploited by hostile intelligence services to identify individuals with access to classified information; therefore, all licensed operators are expected to be familiar with NSA/CSS Regulation 100-1, "Operation of Amateur Radio Stations" (23 October 1986). The specific limitations on contacts with operators from communist and designated countries are of particular importance. If you are an amateur radio operator you should advise the Security Awareness Division (M56) of your amateur radio activities so that detailed guidance may be furnished to you.
2.11 Unofficial Foreign Travel

In order to further protect sensitive information from possible compromise resulting from terrorism, coercion, interrogation or capture of Agency personnel by hostile nations and/or terrorist groups, the Agency has established certain policies and procedures concerning unofficial foreign travel. All Agency personnel (civilian employees, military assignees, and contractors) who are planning unofficial foreign travel must have that travel approved by submitting a proposed itinerary to the Security Awareness Division (M56) at least 30 working days prior to their planned departure from the United States. Your itinerary should be submitted on Form K2579 (Unofficial Foreign Travel Request). This form provides space for noting the countries to be visited, mode of travel, and dates of departure and return. Your immediate supervisor must sign this form to indicate whether or not your proposed travel poses a risk to the sensitive information, activities, or projects of which you may have knowledge due to your current assignment. After your supervisor's assessment is made, this form should be forwarded to the Security Awareness Division (M56). Your itinerary will then be reviewed in light of the existing situation in the country or countries to be visited, and a decision for approval or disapproval will be based on this assessment. The purpose of this policy is to limit the risk of travel to areas of the world where a threat may exist to you and to your knowledge of classified Agency activities. In this context, travel to communist-controlled and other hazardous activity areas is prohibited. A listing of these hazardous activity areas can be found in Annex A of NSA/CSS Regulation No. 30-31, "Security Requirements for Foreign Travel" (12 June 1987).
From time to time, travel may also be prohibited to certain areas where the threat from hostile intelligence services, terrorism, criminal activity or insurgency poses an unacceptable risk to Agency employees and to the sensitive information they possess. Advance travel deposits made without prior Agency approval of the proposed travel may result in financial losses by the employee should the travel be disapproved, so it is important to obtain approval prior to committing yourself financially. Questions regarding which areas of the world currently pose a threat should be directed to the Security Awareness Division (M56). Unofficial foreign travel to Canada, the Bahamas, Bermuda, and Mexico does not require prior approval; however, this travel must still be reported using Form K2579. Travel to these areas may be reported after the fact. While you do not have to report your foreign travel once you have ended your affiliation with the Agency, you should be aware that the risk incurred in travelling to certain areas, from a personal safety and/or counterintelligence standpoint, remains high. The requirement to protect the classified information to which you have had access is a lifetime obligation.

2.12 Membership In Organizations

Within the United States there are numerous organizations with memberships ranging from a few to tens of thousands. While you may certainly participate in the activities of any reputable organization, membership in any international club or professional organization/activity with foreign members should be reported through your Staff Security Officer to the Clearance Division (M55). In most cases there are no security concerns or threats to our employees or affiliates. However, the Office of Security needs the opportunity to research the organization and to assess any possible risk to you and the information to which you have access.
In addition to exercising prudence in your choice of organizational affiliations, you should endeavor to avoid participation in public activities of a conspicuously controversial nature because such activities could focus undesirable attention upon you and the Agency. NSA employees may, however, participate in bona fide public affairs such as local politics, so long as such activities do not violate the provisions of the statutes and regulations which govern the political activities of all federal employees. Additional information may be obtained from your Personnel Representative.

2.13 Changes In Marital Status/Cohabitation/Names

All personnel, either employed by or assigned to NSA, must advise the Office of Security of any changes in their marital status (either marriage or divorce), cohabitation arrangements, or legal name changes. Such changes should be reported by completing NSA Form G1982 (Report of Marriage/Marital Status Change/Name Change), and following the instructions printed on the form.

2.14 Use And Abuse Of Drugs

It is the policy of the National Security Agency to prevent and eliminate the improper use of drugs by Agency employees and other personnel associated with the Agency. The term "drugs" includes all controlled drugs or substances identified and listed in the Controlled Substances Act of 1970, as amended, which includes but is not limited to: narcotics, depressants, stimulants, cocaine, hallucinogens and cannabis (marijuana, hashish, and hashish oil). The use of illegal drugs or the abuse of prescription drugs by persons employed by, assigned or detailed to the Agency may adversely affect the national security; may have a serious damaging effect on the user's safety and the safety of others; and may lead to criminal prosecution. Such use of drugs either within or outside Agency controlled facilities is prohibited.
2.15 Physical Security Policies

The physical security program at NSA provides protection for classified material and operations and ensures that only persons authorized access to the Agency's spaces and classified material are permitted such access. This program is concerned not only with the Agency's physical plant and facilities, but also with the internal and external procedures for safeguarding the Agency's classified material and activities. Therefore, physical security safeguards include Security Protective Officers, fences, concrete barriers, access control points, identification badges, safes, and the compartmentalization of physical spaces. While any one of these safeguards represents only a delay factor against attempts to gain unauthorized access to NSA spaces and material, the total combination of all these safeguards represents a formidable barrier against physical penetration of NSA. Working together with personnel security policies, they provide "security in depth."

The physical security program depends on interlocking procedures. The responsibility for carrying out many of these procedures rests with the individual. This means you, and every person employed by, assigned, or detailed to the Agency, must assume the responsibility for protecting classified material. Included in your responsibilities are: challenging visitors in operational areas; determining "need-to-know;" limiting classified conversations to approved areas; following established locking and checking procedures; properly using the secure and non-secure telephone systems; correctly wrapping and packaging classified data for transmittal; and placing classified waste in burn bags.

2.16 The NSA Badge

Even before you enter an NSA facility, you have a constant reminder of security: the NSA badge. Every person who enters an NSA installation is required to wear an authorized badge.
To enter most NSA facilities your badge must be inserted into an Access Control Terminal at a building entrance and you must enter your Personal Identification Number (PIN) on the terminal keyboard. In the absence of an Access Control Terminal, or when passing an internal security checkpoint, the badge should be held up for viewing by a Security Protective Officer. The badge must be displayed at all times while the individual remains within any NSA installation.

NSA badges must be clipped to a beaded neck chain. If necessary for the safety of those working in the area of electrical equipment or machinery, rubber tubing may be used to insulate the badge chain. For those Agency personnel working in proximity to other machinery or equipment, the clip may be used to attach the badge to the wearer's clothing, but it must also remain attached to the chain. After you leave an NSA installation, remove your badge from public view, thus avoiding publicizing your NSA affiliation. Your badge should be kept in a safe place which is convenient enough to ensure that you will be reminded to bring it with you to work. A good rule of thumb is to afford your badge the same protection you give your wallet or your credit cards. DO NOT write your Personal Identification Number on your badge.

If you plan to be away from the Agency for a period of more than 30 days, your badge should be left at the main Visitor Control Center which services your facility. Should you lose your badge, you must report the facts and circumstances immediately to the Security Operations Center (SOC) (963-3371s/688-6911b) so that your badge PIN can be deactivated in the Access Control Terminals. In the event that you forget your badge when reporting for duty, you may obtain a "non-retention" Temporary Badge at the main Visitor Control Center which serves your facility after a co-worker personally identifies you and your clearance has been verified.
Your badge is to be used as identification only within NSA facilities or other government installations where the NSA badge is recognized. Your badge should never be used outside of the NSA or other government facilities for the purpose of personal identification. You should obtain a Department of Defense identification card from the Civilian Welfare Fund (CWF) if you need to identify yourself as a government employee when applying for "government discounts" offered at various commercial establishments.

Your badge color indicates your particular affiliation with NSA and your level of clearance. Listed below are explanations of the badge colors you are most likely to see:

Green (*) - Fully cleared NSA employees and certain military assignees.

Orange (*) (or Gold) - Fully cleared representatives of other government agencies.

Black (*) - Fully cleared contractors or consultants.

Blue - Employees who are cleared to the SECRET level while awaiting completion of their processing for full (TS/SI) clearance. These Limited Interim Clearance (LIC) employees are restricted to certain activities while inside a secure area.

Red - Clearance level is not specified, so assume the holder is uncleared.

* - Fully cleared status means that the person has been cleared to the Top Secret (TS) level and indoctrinated for Special Intelligence (SI).

All badges with solid color backgrounds (permanent badges) are kept by individuals until their NSA employment or assignment ends. Striped badges ("non-retention" badges) are generally issued to visitors and are returned to the Security Protective Officer upon departure from an NSA facility.

2.17 Area Control

Within NSA installations there are generally two types of areas, Administrative and Secure. An Administrative Area is one in which storage of classified information is not authorized, and in which discussions of a classified nature are forbidden.
This type of area would include the corridors, restrooms, cafeterias, visitor control areas, credit union, barber shop, and drugstore. Since uncleared, non-NSA personnel are often present in these areas, all Agency personnel must ensure that no classified information is discussed in an Administrative Area. Classified information being transported within Agency facilities must be placed within envelopes, folders, briefcases, etc. to ensure that its contents or classification markings are not disclosed to unauthorized persons, or that materials are not inadvertently dropped en route.

The normal operational work spaces within an NSA facility are designated Secure Areas. These areas are approved for classified discussions and for the storage of classified material. Escorts must be provided if it is necessary for uncleared personnel (repairmen, etc.) to enter Secure Areas, and all personnel within the areas must be made aware of the presence of uncleared individuals. All unknown, unescorted visitors to Secure Areas should be immediately challenged by the personnel within the area, regardless of the visitors' clearance level (as indicated by their badge color).

The corridor doors of these areas must be locked with a deadbolt and all classified information in the area must be properly secured after normal working hours or whenever the area is unoccupied. When storing classified material, the most sensitive material must be stored in the most secure containers. Deadbolt keys for doors to these areas must be returned to the key desk at the end of the workday. For further information regarding Secure Areas, consult the Physical Security Division (M51) or your Staff Security Officer.

2.18 Items Treated As Classified

For purposes of transportation, storage and destruction, there are certain types of items which must be treated as classified even though they may not contain classified information.
Such items include carbon paper, vu-graphs, punched machine processing cards, punched paper tape, magnetic tape, computer floppy disks, film, and used typewriter ribbons. This special treatment is necessary since a visual examination does not readily reveal whether the items contain classified information.

2.19 Prohibited Items

Because of the potential security or safety hazards, certain items are prohibited under normal circumstances from being brought into or removed from any NSA installation. These items have been grouped into two general classes.

Class I prohibited items are those which constitute a threat to the safety and security of NSA/CSS personnel and facilities. Items in this category include:

1. Firearms and ammunition
2. Explosives, incendiary substances, radioactive materials, highly volatile materials, or other hazardous materials
3. Contraband or other illegal substances
4. Personally owned photographic or electronic equipment including microcomputers, reproduction or recording devices, televisions or radios

Prescribed electronic medical equipment is normally not prohibited, but requires coordination with the Physical Security Division (M51) prior to being brought into any NSA building.

Class II prohibited items are those owned by the government or contractors which constitute a threat to physical, technical, or TEMPEST security. Approval by designated organizational officials is required before these items can be brought into or removed from NSA facilities. Examples are:

1. Transmitting and receiving equipment
2. Recording equipment and media
3. Telephone equipment and attachments
4. Computing devices and terminals
5. Photographic equipment and film

A more detailed listing of examples of Prohibited Items may be obtained from your Staff Security Officer or the Physical Security Division (M51). Additionally, you may realize that other seemingly innocuous items are also restricted and should not be brought into any NSA facility.
Some of these items pose a technical threat; others must be treated as restricted since a visual inspection does not readily reveal whether they are classified. These items include:

1. Negatives from processed film; slides; vu-graphs
2. Magnetic media such as floppy disks, cassette tapes, and VCR videotapes
3. Remote control devices for telephone answering machines
4. Pagers

2.20 Exit Inspection

As you depart NSA facilities, you will note another physical security safeguard: the inspection of the materials you are carrying. This inspection of your materials, conducted by Security Protective Officers, is designed to preclude the inadvertent removal of classified material. It is limited to any articles that you are carrying out of the facility and may include letters, briefcases, newspapers, notebooks, magazines, gym bags, and other such items. Although this practice may involve some inconvenience, it is conducted in your best interest, as well as being a sound security practice. The inconvenience can be considerably reduced if you keep to a minimum the number of personal articles that you remove from the Agency.

2.21 Removal Of Material From NSA Spaces

The Agency maintains strict controls regarding the removal of material from its installations, particularly in the case of classified material. Only under very limited and official circumstances may classified material be removed from Agency spaces. When deemed necessary, specific authorization is required to permit an individual to hand carry classified material out of an NSA building to another Secure Area. Depending on the material and circumstances involved, there are several ways to accomplish this. A Courier Badge authorizes the wearer, for official purposes, to transport classified material, magnetic media, or Class II prohibited items between NSA facilities.
These badges, which are strictly controlled, are made available by the Physical Security Division (M51) only to those offices which have specific requirements justifying their use. An Annual Security Pass may be issued to individuals whose official duties require that they transport printed classified materials, information storage media, or Class II prohibited items to secure locations within the local area. Materials carried by an individual who displays this pass are subject to spot inspection by Security Protective Officers or other personnel from the Office of Security. It is not permissible to use an Annual Security Pass for personal convenience to circumvent inspection of your personal property by perimeter Security Protective Officers.

If you do not have access to a Courier Badge and you have not been issued an Annual Security Pass, you may obtain a One-Time Security Pass to remove classified materials/magnetic media or admit or remove prohibited items from an NSA installation. These passes may be obtained from designated personnel in your work element who have been given authority to issue them. The issuing official must also contact the Security Operations Center (SOC) to obtain approval for the admission or removal of a Class I prohibited item.

When there is an official need to remove government property which is not magnetic media, or a prohibited or classified item, a One-Time Property Pass is used. This type of pass (which is not a Security Pass) may be obtained from your element custodial property officer. A Property Pass is also to be used when an individual is removing personal property which might reasonably be mistaken for unclassified Government property. This pass is surrendered to the Security Protective Officer at the post where the material is being removed.
Use of this pass does not preclude inspection of the item at the perimeter control point by the Security Protective Officer or Security professionals to ensure that the pass is being used correctly.

2.22 External Protection Of Classified Information

On those occasions when an individual must personally transport classified material between locations outside of NSA facilities, the individual who is acting as the courier must ensure that the material receives adequate protection. Protective measures must include double wrapping and packaging of classified information, keeping the material under constant control, ensuring the presence of a second appropriately cleared person when necessary, and delivering the material to authorized persons only. If you are designated as a courier outside the local area, contact the Security Awareness Division (M56) for your courier briefing.

Even more basic than these procedures is the individual security responsibility to confine classified conversations to secure areas. Your home, car pool, and public places are not authorized areas to conduct classified discussions, even if everyone involved in the discussion possesses a proper clearance and "need-to-know." The possibility that a conversation could be overheard by unauthorized persons dictates the need to guard against classified discussions in non-secure areas. Classified information acquired during the course of your career or assignment to NSA may not be mentioned directly, indirectly, or by suggestion in personal diaries, records, or memoirs.

2.23 Reporting Loss Or Disclosure Of Classified Information

The extraordinary sensitivity of the NSA mission requires the prompt reporting of any known, suspected, or possible unauthorized disclosure of classified information, or the discovery that classified information may be lost, or is not being afforded proper protection.
Any information coming to your attention concerning the loss or unauthorized disclosure of classified information should be reported immediately to your supervisor, your Staff Security Officer, or the Security Operations Center (SOC).

2.24 Use Of Secure And Non-Secure Telephones

Two separate telephone systems have been installed in NSA facilities for use in the conduct of official Agency business: the secure telephone system (gray telephone) and the outside, non-secure telephone system (black telephone). All NSA personnel must ensure that use of either telephone system does not jeopardize the security of classified information.

The secure telephone system is authorized for discussion of classified information. Personnel receiving calls on the secure telephone may assume that the caller is authorized to use the system. However, you must ensure that the caller has a "need-to-know" the information you will be discussing.

The outside telephone system is only authorized for unclassified official Agency business calls. The discussion of classified information is not permitted on this system. Do not attempt to use "double-talk" in order to discuss classified information over the non-secure telephone system. In order to guard against the inadvertent transmission of classified information over a non-secure telephone, an individual using the black telephone in an area where classified activities are being conducted must caution other personnel in the area that the non-secure telephone is in use. Likewise, you should avoid using the non-secure telephone in the vicinity of a secure telephone which is also in use.

3 HELPFUL INFORMATION

3.1 Security Resources

In the fulfillment of your security responsibilities, you should be aware that there are many resources available to assist you. If you have any questions or concerns regarding security at NSA or your individual security responsibilities, your supervisor should be consulted.
Additionally, Staff Security Officers are appointed to the designated Agency elements to assist these organizations in carrying out their security responsibilities. There is a Staff Security Officer assigned to each organization; their phone numbers are listed at the back of this handbook. Staff Security Officers also provide guidance to and monitor the activities of Security Coordinators and Advisors (individuals who, in addition to their operational duties within their respective elements, assist element supervisors or managers in discharging security responsibilities).

Within the Office of Security, the Physical Security Division (M51) will offer you assistance in matters such as access control, security passes, clearance verification, combination locks, keys, identification badges, technical security, and the Security Protective Force. The Security Awareness Division (M56) provides security guidance and briefings regarding unofficial foreign travel, couriers, special access, TDY/PCS, and amateur radio activities. The Industrial and Field Security Division (M52) is available to provide security guidance concerning NSA contractor and field site matters.

The Security Operations Center (SOC) is operated by two Security Duty Officers (SDOs), 24 hours a day, 7 days a week. The SDO, representing the Office of Security, provides a complete range of security services to include direct communications with fire and rescue personnel for all Agency area facilities. The SDO is available to handle any physical or personnel problems that may arise, and if necessary, can direct you to the appropriate security office that can assist you. After normal business hours and on weekends and holidays, the SOC is the focal point for all security matters for all Agency personnel and facilities (to include Agency field sites and contractors). The SOC is located in Room 2A0120, OPS 2A building, and the phone numbers are 688-6911(b), 963-3371(s).
However, keep in mind that you may contact any individual or any division within the Office of Security directly. Do not hesitate to report any information which may affect the security of the Agency's mission, information, facilities or personnel.

3.2 Security-Related Services

In addition to Office of Security resources, there are a number of professional, security-related services available for assistance in answering your questions or providing the services which you require.

The Installations and Logistics Organization (L) maintains the system for the collection and destruction of classified waste, and is also responsible for the movement and scheduling of material via NSA couriers and the Defense Courier Service (DCS). Additionally, L monitors the proper addressing, marking, and packaging of classified material being transmitted outside of NSA; maintains records pertaining to receipt and transmission of controlled mail; and issues property passes for the removal of unclassified property.

The NSA Office of Medical Services (M7) has a staff of physicians, clinical psychologists and an alcoholism counselor. All are well trained to help individuals help themselves in dealing with their problems. Counseling services, with referrals to private mental health professionals when appropriate, are all available to NSA personnel. Appointments can be obtained by contacting M7 directly. When an individual refers himself/herself, the information discussed in the counseling sessions is regarded as privileged medical information and is retained exclusively in M7 unless it pertains to the national security.

Counseling interviews are conducted by the Office of Civilian Personnel (M3) with any civilian employee regarding both on- and off-the-job problems. M3 is also available to assist all personnel with personal problems seriously affecting themselves or members of their families.
In cases of serious physical or emotional illness, injury, hospitalization, or other personal emergencies, M3 informs concerned Agency elements and maintains liaison with family members in order to provide possible assistance. Similar counseling services are available to military assignees through Military Personnel (M2).

4 A FINAL NOTE

The information you have just read is designed to serve as a guide to assist you in the conduct of your security responsibilities. However, it by no means describes the extent of your obligation to protect information vital to the defense of our nation. Your knowledge of specific security regulations is part of a continuing process of education and experience. This handbook is designed to provide the foundation of this knowledge and serve as a guide to the development of an attitude of security awareness.

In the final analysis, security is an individual responsibility. As a participant in the activities of the National Security Agency organization, you are urged to be always mindful of the importance of the work being accomplished by NSA and of the unique sensitivity of the Agency's operations.
U/OO/122630-18 PP-18-0120 March 2018

NSA's Top Ten Cybersecurity Mitigation Strategies

NSA's Top Ten Mitigation Strategies counter a broad range of exploitation techniques used by Advanced Persistent Threat (APT) actors. NSA's mitigations set priorities for enterprise organizations to minimize mission impact. The mitigations also build upon the NIST Cybersecurity Framework functions to manage cybersecurity risk and promote a defense-in-depth security posture. The mitigation strategies are ranked by effectiveness against known APT tactics. Additional strategies and best practices will be required to mitigate the occurrence of new tactics. The cybersecurity functions are keyed as: Identify, Protect, Detect, Respond, Recover.

1. Update and Upgrade Software Immediately

Apply all available software updates, automate the process to the extent possible, and use an update service provided directly from the vendor. Automation is necessary because threat actors study patches and create exploits, often soon after a patch is released. These "N-day" exploits can be as damaging as a zero-day. Vendor updates must also be authentic; updates are typically signed and delivered over protected links to assure the integrity of the content. Without rapid and thorough patch application, threat actors can operate inside a defender's patch cycle.

2. Defend Privileges and Accounts

Assign privileges based on risk exposure and as required to maintain operations. Use a Privileged Access Management (PAM) solution to automate credential management and fine-grained access control. Another way to manage privilege is through tiered administrative access in which each higher tier provides additional access, but is limited to fewer personnel. Create procedures to securely reset credentials (e.g., passwords, tokens, tickets).
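The guidance leaves the mechanics of secure credential reset to the implementer. As one illustration only, not an NSA-prescribed procedure, a reset flow can hand the user a high-entropy one-time token while the server stores only its hash, so that a stolen credential database cannot be used to redeem outstanding resets. The function names below are hypothetical:

```python
import hashlib
import hmac
import secrets

def issue_reset_token():
    """Create a one-time credential-reset token.

    Returns the raw token (delivered to the user out of band) and the
    SHA-256 hash the server stores; the raw token is never persisted.
    """
    token = secrets.token_urlsafe(32)  # roughly 256 bits of entropy
    stored_hash = hashlib.sha256(token.encode()).hexdigest()
    return token, stored_hash

def redeem_reset_token(presented, stored_hash):
    """Check a presented token against the stored hash in constant time."""
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)
```

A production flow would additionally enforce expiry and single use by invalidating the stored hash on redemption; those details are omitted here.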
Privileged accounts and services must be controlled because threat actors continue to target administrator credentials to access high-value assets, and to move laterally through the network.

3. Enforce Signed Software Execution Policies

Use a modern operating system that enforces signed software execution policies for scripts, executables, device drivers, and system firmware. Maintain a list of trusted certificates to prevent and detect the use and injection of illegitimate executables. Execution policies, when used in conjunction with a secure boot capability, can assure system integrity. Application whitelisting should be used with signed software execution policies to provide greater control. Allowing unsigned software enables threat actors to gain a foothold and establish persistence through embedded malicious code.

4. Exercise a System Recovery Plan

Create, review, and exercise a system recovery plan to ensure the restoration of data as part of a comprehensive disaster recovery strategy. The plan must protect critical data, configurations, and logs to ensure continuity of operations in the face of unexpected events. For additional protection, backups should be encrypted, stored offsite, offline when possible, and support complete recovery and reconstitution of systems and devices. Perform periodic testing and evaluate the backup plan. Update the plan as necessary to accommodate the ever-changing network environment. A recovery plan is a necessary mitigation for natural disasters as well as malicious threats including ransomware.

5. Actively Manage Systems and Configurations

Take inventory of network devices and software. Remove unwanted, unneeded, or unexpected hardware and software from the network. Starting from a known baseline reduces the attack surface and establishes control of the operational environment. Thereafter, actively manage devices, applications, operating systems, and security configurations.
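At its core, the known-baseline idea in Strategy 5 reduces to a set comparison between what should be present and what is actually observed. The sketch below illustrates that comparison; the package names are placeholders, and a real inventory would come from asset-management or configuration-management tooling:

```python
def baseline_drift(baseline, observed):
    """Compare an observed inventory against a known-good baseline.

    Returns (unexpected, missing): items observed but absent from the
    baseline, and baseline items that are no longer observed.
    """
    baseline_set, observed_set = set(baseline), set(observed)
    unexpected = sorted(observed_set - baseline_set)
    missing = sorted(baseline_set - observed_set)
    return unexpected, missing

# Hypothetical example: a host's software list drifts from its baseline.
unexpected, missing = baseline_drift(
    baseline=["openssh-server", "nginx", "postgresql"],
    observed=["openssh-server", "nginx", "netcat"],
)
# unexpected == ["netcat"]; missing == ["postgresql"]
```

Run periodically against each host, anything in `unexpected` is a candidate for removal or investigation, directly supporting the "remove unwanted, unneeded or unexpected" guidance above.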
Active enterprise management ensures that systems can adapt to dynamic threat environments while scaling and streamlining administrative operations.

6. Continuously Hunt for Network Intrusions

Take proactive steps to detect, contain, and remove any malicious presence within the network. Enterprise organizations should assume that a compromise has taken place and use dedicated teams to continuously seek out, contain, and remove threat actors within the network. Passive detection mechanisms, such as logs, Security Information and Event Management (SIEM) products, Endpoint Detection and Response (EDR) solutions, and other data analytic capabilities are invaluable tools to find malicious or anomalous behaviors. Active pursuits should also include hunt operations and penetration testing using well documented incident response procedures to address any discovered breaches in security. Establishing proactive steps will transition the organization beyond basic detection methods, enabling real-time threat detection and remediation using a continuous monitoring and mitigation strategy.

7. Leverage Modern Hardware Security Features

Use hardware security features like Unified Extensible Firmware Interface (UEFI) Secure Boot, Trusted Platform Module (TPM), and hardware virtualization. Schedule older devices for a hardware refresh. Modern hardware features increase the integrity of the boot process, provide system attestation, and support features for high-risk application containment. Using a modern operating system on outdated hardware results in a reduced ability to protect the system, critical data, and user credentials from threat actors.

8. Segregate Networks Using Application-Aware Defenses

Segregate critical networks and services.
Deploy application-aware network defenses to block improperly formed traffic and restrict content, according to policy and legal authorizations. Traditional intrusion detection based on known-bad signatures is quickly decreasing in effectiveness due to encryption and obfuscation techniques. Threat actors hide malicious actions and remove data over common protocols, making the need for sophisticated, application-aware defensive mechanisms critical for modern network defenses.

9. Integrate Threat Reputation Services

Leverage multi-sourced threat reputation services for files, DNS, URLs, IPs, and email addresses. Reputation services assist in the detection and prevention of malicious events, allow for rapid global responses to threats, reduce exposure from known threats, and provide access to a much larger threat analysis and tipping capability than an organization can provide on its own. Emerging threats, whether targeted or global campaigns, occur faster than most organizations can handle, resulting in poor coverage of new threats. Multi-source reputation and information sharing services can provide a more timely and effective security posture against dynamic threat actors.

10. Transition to Multi-Factor Authentication

Prioritize protection for accounts with elevated privileges, remote access, and/or used on high value assets. Physical token-based authentication systems should be used to supplement knowledge-based factors such as passwords and PINs. Organizations should migrate away from single-factor authentication, such as password-based systems, which are subject to poor user choices and susceptible to credential theft, forgery, and reuse across multiple systems.

Disclaimer of Warranties and Endorsement

The information and opinions contained in this document are provided "as is" and without any warranties or guarantees.
Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement, recommendation, or favoring by the United States Government, and this guidance shall not be used for advertising or product endorsement purposes.

Contact Information

Client Requirements and General Cybersecurity Inquiries: Cybersecurity Requirements Center (CRC), 410-854-4200, email: Cybersecurity_Requests@nsa.gov
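As a supplement to Strategy 10 above: many hardware and software authentication tokens implement the time-based one-time password (TOTP) algorithm of RFC 6238, which is an HMAC over the number of 30-second intervals elapsed since the Unix epoch. The following is a minimal standard-library sketch for illustration; real deployments should use vetted libraries and provision shared secrets securely:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of time steps since the epoch."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)  # 8-byte big-endian step count
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test-vector key; at Unix time 59 the 6-digit code is "287082".
assert totp(b"12345678901234567890", for_time=59) == "287082"
```

Because the code depends only on the shared secret and the clock, the server recomputes it independently; a stolen password alone is not sufficient to authenticate, which is the property Strategy 10 is after.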
NATIONAL SECURITY AGENCY (NSA) SYSTEMS AND NETWORK ATTACK CENTER (SNAC) SECURITY GUIDES VERSUS KNOWN WORMS

THESIS

Matthew W. Sullivan, 2d Lt, USAF

AFIT/GIA/ENG/05-07

DEPARTMENT OF THE AIR FORCE, AIR UNIVERSITY, AIR FORCE INSTITUTE OF TECHNOLOGY, Wright-Patterson Air Force Base, Ohio

APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED

The views expressed in this thesis are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.

Presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training Command, in Partial Fulfillment of the Requirements for the Degree of Master of Science in Information Assurance.

Matthew W. Sullivan, BS, 2d Lt, USAF

March 2005

Approved:
/signed/ Rusty O. Baldwin, Ph.D. (Chairman)
/signed/ Richard A. Raines, Ph.D. (Member)
/signed/ Robert F. Mills, Ph.D. (Member)

Acknowledgments

I want to thank the many people who helped me make it through the thesis process. First, I would like to thank Dr. Baldwin for the many hours of editing and guidance that he gave me. I would also like to thank Mr. Lacey for quickly providing me all the equipment and software that I needed as well as all of his technical support. Captain Chaboya also truly helped me out with his expertise in debugging and provided some great insight into the world of hackers.
I also would like to thank my sponsors from the NSA for providing me with the CERT database of exploits. Last but not least, I would like to thank my wife, for putting up with the late hours and weekends that I put into this thesis.

Matthew W. Sullivan

Table of Contents

Acknowledgments
Table of Contents
List of Figures
List of Tables
Abstract
1 Introduction And Importance Of Research Topic
1.1 Outline Of Research Goals
1.2 Overview Of Research Document
2 Literature Review
2.1 Worms
2.2 Worms History And Cost
2.3 How Worms Work
2.4 Worms, Friend Or Foe?
2.5 Future of Worms
2.6 Case Studies
2.6.1 Code Red & Code Red II (July 2001)
2.6.2 Nimda (September 2001)
2.6.3 SQL Slammer (January 2003)
2.7 Types of Worm Preventions / Protection
2.7.1 Host-Based
2.7.2 Network-Based Solutions
2.7.3 Other Protections
2.8 NSA SNAC Guides
2.9 Exploits
2.9.1 OS Exploits
2.9.1.1 DCOM RPC Exploit
2.9.1.2 LSASS Exploit
2.9.2 Microsoft IIS Extended Unicode Directory Traversal Vulnerability
2.9.3 Outlook Exploit
2.9.4 Multipurpose Internet Mail Extensions (MIME) Header Exploit
2.10 Summary
3 Methodology
3.1 Goals And Hypothesis
3.2 Approach
3.3 System Boundaries
3.4 System Services
3.5 Workload
3.6 Performance Metrics
3.7 Parameters
3.8 Factors
3.9 Evaluation Technique
3.10 Experimental Design
3.11 Summary
4 Results
4.1 Operating System Exploit Results
4.1.1 DCOM RPC Exploit
4.1.2 LSASS Exploit
4.1.3 Operating System Exploit Summary
4.2 IIS Exploit Results
4.2.1 Microsoft IIS Extended Unicode Directory Traversal Vulnerability
4.2.2 Code Red Worm
4.2.3 Conclusions
4.3 SQL Server Exploit Results
4.3.1 SQL Slammer Worm
4.3.2 Conclusions
4.4 Internet Explorer (IE) / Email Exploits
4.4.1 Microsoft IE MIME Header Exploit Results
4.4.2 Microsoft IE / Outlook Exploit Results
4.4.3 Conclusions
4.5 Summary
5 Conclusions
5.1 Conclusions of Research
5.2 Significance of Research
5.3 Recommendations for Action
5.4 Summary
Bibliography

List of Figures

Figure 1: System Including IDS
Figure 2: System Under Test (SUT)

List of Tables

Table 1: Result of Exploit on Different Configurations
Table 2: NSA SNAC Guides Configuration Results

Abstract

Internet worms impact Internet security around the world even though there are many defenses to prevent the damage they inflict. The National Security Agency (NSA) Systems and Network Attack Center (SNAC) publishes in-depth configuration guides to protect networks from intrusion; however, the effectiveness of these guides in preventing the spread of worms hasn't been studied. This thesis establishes how well the NSA SNAC guides protect against various worms and exploits compared to Microsoft patches alone.
It also identifies the aspects of the configuration guidance that are most effective, in the absence of patches and updates, against network worm and e-mail virus attacks. The results from this thesis show that both the Microsoft patches and the NSA SNAC guides protect against all worms and exploits tested. The main difference is that the NSA SNAC guides protected as soon as they were applied, whereas the Microsoft patches needed to be written, distributed, and applied in order to work. The NSA SNAC guides also provided protection by changing the default permissions and passwords that some worms and exploits use to compromise a computer, and by removing extraneous packages that could contain undiscovered vulnerabilities.

NSA SNAC SECURITY GUIDES VS KNOWN WORMS

1 Introduction and Importance of Research Topic

Worms are similar to computer viruses in that they can destroy data on computers and networks, but they have the additional ability to spread and disrupt the network without human interaction. Worms have been spreading faster as Internet connectivity has increased, some worldwide in as little as 15 minutes. This gives little warning or time for defensive measures to be put in place. The research objective of this effort is to determine whether the National Security Agency (NSA) Systems and Network Attack Center (SNAC) security guides, alone, are effective protection against worms and viruses. Since the United States has become increasingly dependent on computer networks for both defense and commerce, small disruptions in these networks can cause both great distress and damage. Computer worms cost both money and man-hours to correct, wasting resources. Knowing how well, or which parts of, the NSA SNAC guides are effective can help to minimize the damage from worms and may protect systems in future attacks.
1.1 Outline of Research Goals

The goal of this thesis is to determine whether the National Security Agency (NSA) Systems and Network Attack Center (SNAC) security guides are effective protection against the infection and spread of worms. In addition, the aspects of the configuration guidance that are most effective in the absence of patches and updates are identified. Since Microsoft products are found on over 90% of desktop systems, 55% of servers [Thu03], and 53% of Fortune 1000 Internet web servers [Huc03], this research uses Microsoft-based operating systems and worms that attack those systems. Two LANs are used as a test bed, one with a default installation of the Windows operating system and the other with varying levels of protection, to determine how well the NSA SNAC guides protect the respective computers. The levels of protection are: initial setup; initial setup with current Microsoft patches installed; initial setup with only the NSA SNAC guides applied; and initial setup with both current Microsoft patches installed and the NSA SNAC guides applied. Worms are run against each of the levels of protection to determine which level of protection works best. These worms are selected based on whether they attack the operating system or applications.

1.2 Overview of Research Document

Chapter 2 is an introduction to the history of worms as well as an overview of how they work and some common attributes. It covers other ways to prevent worms from spreading, both host-based and network-based. An analysis of four worms, Code Red versions I and II, Nimda, and SQL Slammer, is also covered. The exploits tested in this thesis are also discussed. The chapter discusses current research on defeating worms.

Chapter 3 contains the methodology used to conduct the research. The goals are discussed as well as the approach to solve the problem. System boundaries, services, parameters, and factors are presented as well. The experimental design and the evaluation technique are also covered.
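The experimental design described above pairs each protection level with each exploit tested. A minimal sketch of that full-factorial test matrix follows; the level and exploit labels are illustrative stand-ins, not the thesis's exact identifiers:

```python
from itertools import product

# Hypothetical labels for the four protection levels described above.
levels = [
    "default install",
    "default + Microsoft patches",
    "default + NSA SNAC guides",
    "default + patches + SNAC guides",
]

# Exploits grouped by what they target (operating system vs. applications).
exploits = ["DCOM RPC", "LSASS", "IIS Unicode traversal", "SQL Slammer"]

# Every (level, exploit) pair is one experimental run.
test_matrix = list(product(levels, exploits))
print(len(test_matrix))  # 4 levels x 4 exploits = 16 runs
```

Enumerating the matrix this way makes it easy to confirm that no configuration/exploit combination is skipped.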
Chapter 4 presents the results of the experiments. The chapter also examines reasons for the exploits' failure or success with respect to the NSA SNAC guides. Several additional ways to secure the computers, other than what the NSA SNAC guides suggest, are examined.

Chapter 5 discusses what type of configuration protects best. The exploits that the NSA SNAC guides protect against are specified. The significance of these findings to the security community is also given. Recommendations are made on how to better protect computer systems against worms.

2 Literature Review

In this chapter, Internet worms and the financial costs they have incurred are discussed. The operation of a worm is explained by describing common traits they all share. Detailed descriptions of four current worms are presented: Code Red & Code Red II, Nimda, and Sapphire. The exploits used in this thesis are also explained. Methods to prevent worms from attacking and destroying networks are also discussed. Since proper configuration of a computer is an effective way to stop worms, the National Security Agency's (NSA) Systems and Network Attack Center (SNAC) "how to" guides are described.

2.1 Worms

The United States has become increasingly dependent on computer networks for both defense and commerce, and even small disruptions in these networks can cause great distress among their users. Computer worms have the ability to disrupt the network without the human interaction that viruses require. Worms are stand-alone programs that seek out vulnerable computers on the network, wasting both computing time and bandwidth. This chapter concentrates on worms and exploits written for Microsoft operating systems and products rather than on their UNIX counterparts for a number of reasons. The first reason is that Microsoft products are found on over 90% of desktop systems, 55% of servers [Thu03], and 53% of Fortune 1000 Internet web servers [Huc03].
The fact that the Microsoft OS runs on a common x86 architecture, while a UNIX OS runs on numerous platforms, allows worms to exploit more systems with a minimum of coding on the part of the hacker. Furthermore, since most users of Microsoft products have no formal security training, they form the group most vulnerable to worms. They also form a large group that, if combined, could mount a large distributed denial-of-service attack. Finally, the NSA SNAC guides that are the subject of this research are targeted at the Windows-based platform.

2.2 Worm History and Cost

Worms predate the Internet; they are named after a 1975 John Brunner story, The Shockwave Rider [Arc99]. The major defining characteristic of a worm is that it is self-contained and requires no interaction with a user to execute. This independent execution ability gives worms the ability to use a significant amount of network bandwidth. In the early 1980s, Xerox created user-independent processes that were used as helpful services, but some were poorly written and demonstrated the future danger of worms when they continuously rebooted infected computers [Arc99]. The first self-replicating, self-propagating worm was created by Robert Morris Jr. as part of his doctoral work in 1988. The Morris worm shut down the largest percentage of the Internet to date, nearly ten percent, and cost an estimated $10-100 million to clean up [Sul98]. This damage was completely unintentional. Errors in the code caused computers to be infected multiple times, spawning new processes that eventually brought infected computers to a halt. The CERT Coordination Center, a federally funded center of Internet security expertise, was formed as a direct result of the Morris worm's ability to do so much harm in such a short period. When the Morris worm was released, the Internet was largely homogeneous. This allowed the same worm to propagate through each server without alteration of the code.
Until Windows became the dominant OS, the Internet had a variety of operating systems (OSs) and platform architectures. Now, with the Microsoft OS controlling approximately 90% of desktops and 45% of servers, the Internet has returned to a relative state of homogeneity [Naz04]. Current worms usually take advantage of bugs and security holes to infiltrate networks. The SoBig and Blaster worms of 2003 resulted in the biggest cleanup and longest downtime thus far. The SoBig.F worm alone cost over $30 billion to clean up, and according to experts, the Blaster worm may have contributed to the failure of the eastern US power grid on August 14, 2003 [AdG03]. About 70% of South Korean users access the Internet using broadband, and in 2003 the SQL Slammer worm infected their top three Internet providers, which virtually brought the Korean Internet to a halt [AdG03]. With the increase of broadband Internet connections to home users, worm damage and propagation are expected to increase substantially. Given their ability to cause damage, it is clear emerging worms need to be stopped before they spread. The first step is to find out how worms actually work.

2.3 How Worms Work

Since they do not rely on user interaction, worms are programmed from the beginning with all the information they need to spread. To speed up the process of creating worms, most hackers exploit published security flaws with readily available patches to gain access to computers. Some even use the published flaw's code in their worm. Another type of worm, the so-called "zero-day" worm, is harder to prevent because it uses vulnerabilities that have not been identified by the security community and have no patches. This makes such worms much more difficult to stop. Even so, all worms share some basic characteristics: autonomy, replication, reconnaissance, attack, defense, command interface, and polymorphism [Tod03].
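The first four of these characteristics can be illustrated with a harmless, self-contained simulation over a toy population of hosts; nothing here touches a network, and all parameters are made-up values for demonstration:

```python
import random

def simulate_worm(n_hosts=100, vulnerable_frac=0.5, start=0, rounds=50, seed=1):
    """Toy model of a worm's core loop: reconnaissance (pick a target),
    attack (succeeds only if the target is vulnerable), and replication
    (a compromised target becomes another autonomous scanner).
    No real I/O is involved; hosts are just integers."""
    rng = random.Random(seed)
    vulnerable = {h for h in range(n_hosts) if rng.random() < vulnerable_frac}
    vulnerable.add(start)
    infected = {start}
    for _ in range(rounds):
        new = set()
        for host in infected:                 # autonomy: each copy acts alone
            target = rng.randrange(n_hosts)   # reconnaissance
            if target in vulnerable:          # attack succeeds on vulnerable hosts
                new.add(target)               # replication onto the target
        infected |= new
    return infected, vulnerable

infected, vulnerable = simulate_worm()
# Infection can never escape the vulnerable population.
assert infected <= vulnerable
```

The exponential character of the spread falls out of the loop structure: each newly infected host immediately begins scanning on its own.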
The first four characteristics, autonomy, replication, reconnaissance, and attack, are all present in modern worms. Autonomy is a fundamental ability in a worm since, once unleashed, a worm should spread without intervention. Replication is also a key trait, since a worm needs it to spread. Worms use reconnaissance to find other computers that have a vulnerability that can be exploited. When the worm attacks, it usually does so in a two-stage process. First, the worm exploits the vulnerability and loads itself onto the computer, and then it executes code to start the process of replicating from that computer [Tod03]. Modern worms use the last three characteristics, defense, command interface, and polymorphism, to increase their destructive ability. Modern worms use multiple attacks so they can exploit multiple vulnerabilities or different operating systems. In addition, they can take advantage of multiple vulnerabilities in multiple operating systems and report compromises to a central database. Once the worm loads itself on a vulnerable computer it must avoid detection using its defensive capabilities. It can change its process name to something obscure, like a critical system process. The worm could also disable detection systems or send decoy packets to make it hard to locate other infected computers. A worm may send an identical replica of its code or use polymorphism to send out a modified version. Worms use polymorphism so there is no single code signature to discover and block. This can be potentially devastating, since many worm filters use signatures and are rendered ineffective if each instance is different. There are four main reasons worms continue to be generated even though they produce a great deal of "noise" during intrusion: ease, penetration, coverage, and persistence [Naz01]. Worms are easy to generate because automation makes tasks easier. Writing a worm is not necessarily fast; it can, in fact, take a long time.
Worms can penetrate systems not only through effective code, but also through good fortune on the part of the attacker. Worms spread quickly due to their very nature; this coverage helps them persist over long periods of time, since some users do not patch their systems quickly or at all [Naz01]. Some of the exceptionally virulent worms, like Code Red and Nimda, have persisted on the network for months after patches became available, since worm writers have targeted broadband users of late. Each of these reasons makes worms a threat for the foreseeable future. The relative ease of writing a worm also ensures that they will be around for a while [Naz01].

2.4 Worms, Friend or Foe?

While most worms are used to cause damage, some worms have been used for productive tasks, albeit with mixed results. The Xerox worms of the early 1980s operated at night to balance daytime processing load for large tasks or to update system files. Unfortunately, these worms had some unintended consequences and caused computers to continually restart, showing that even well-meant worms could misbehave [Arc99]. The recent Nachi/Welchia worm loaded itself onto vulnerable Windows machines and attempted to patch the computer so that neither it nor the Blaster worm could affect it anymore. While both of these worms tried to help automate tasks, there were serious problems with bandwidth utilization and unintentional system misconfigurations that could have serious consequences. While all worms use network bandwidth and CPU time, some carry a payload to perform malicious actions on compromised computers, such as installing a Trojan horse, keystroke logger, or other types of spyware. Worms can also destroy files or other information unintentionally. Since even the best-intentioned worms have been shown to cause problems, it is best not to allow any worm to be transmitted across a network.
2.5 Future of Worms

Code Red and Nimda, which spread around the world in days, seem tame compared to the predictions concerning future worms. After the appearance of the Code Red worm, it was postulated that a future worm could take over the Internet within 15 minutes [Naz04]. This type of worm was dubbed the Warhol worm. Today's worms spread faster with less code, giving rise to the idea of the Warhol, or flash, worm. A flash worm could be achieved by scanning in advance for vulnerable machines and splitting up the worm distribution so that servers and network bandwidth are not overwhelmed [Naz04]. While today's worms have been troublesome in both cost of cleanup and wasted bandwidth, the future looks even worse. To date, worms have not been overly malicious; they have mainly wasted bandwidth and caused temporary denial of service. Future worms could carry devastating payloads that delete data, causing widespread damage. This is especially true for broadband users, the new target among worms, since they seldom make backups of their data. Worms frequently announce their terrorist or political agendas. The Code Red worm proclaimed "Hacked by Chinese" on the web pages it defaced. Future worms will likely try to spread the messages of groups, either by defacing web pages like the Code Red worm, causing some type of denial of service, or worse [ArR01]. Future worms will also likely target new areas. Peer-to-peer networks, such as Kazaa and BitTorrent, encourage the swapping of files among users. These could be used to spread worms through the exchange of tainted files. If embedded devices such as routers are attacked, entire networks could be taken off-line. Many other embedded devices, such as printers, home appliances, and broadband adapters, have become accessible from the web with little concern for their security vulnerabilities. Since these devices use firmware, upgrading them is difficult, if not impossible, making worms even more of a threat.
2.6 Case Studies

At this point, case studies of four particular worms are presented: Code Red & Code Red II, Nimda, and SQL Slammer.

2.6.1 Code Red & Code Red II (July 2001)

The Code Red worm uses a buffer overflow attack to gain access through Microsoft's Internet Information Services (IIS) Indexing Service Dynamic Link Library (DLL), which had a known vulnerability at the time. A patch to fix the vulnerability had been released a month earlier. Code Red spawns 100 threads, one trying to alter the main web page and the other 99 trying to find new computers to attack [Naz04]. CERT describes the attack as a three-step process. First, the worm tries to exploit a random computer with a buffer overflow on TCP port 80. Then, it changes the default web page on English-language machines to read, "HELLO! Welcome to http://www.worm.com! Hacked By Chinese!" Finally, the worm performs one of three actions depending on the day of the month: propagate to other machines, flood a fixed IP address to create a denial-of-service attack, or sleep. Additionally, the IIS attack sometimes results in root-level access to the compromised machine [CER02]. Code Red is one of the first worms to use the homogeneity of the Internet to spread with the same speed as the 1988 Morris worm. It also foreshadows information warfare with the politically motivated "Hacked by Chinese" slogan. Code Red was contained because of flaws in its random number generator code and the ability to fool it into thinking a computer was already infected [Naz04]. Code Red II fixed the flaw in the random number generator, resulting in a significant increase in the number of scans by the worm. The worm used TCP, so every instance of a Code Red worm had to wait for an explicit response from the computer it was attacking before it would continue, which prevented it from spreading faster.
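The random number generator flaw mentioned above can be illustrated in a few lines: seeding every worm instance with the same constant makes all copies scan an identical target list, so they re-hit the same hosts instead of covering new address space. Python's `random` module stands in here for whatever generator the worm actually used; the seeds are arbitrary:

```python
import random

def target_stream(seed, n=5):
    """Generate pseudo-random 32-bit target addresses from a seed.
    A hard-coded seed reproduces the same list in every worm instance."""
    rng = random.Random(seed)
    return [rng.randrange(2**32) for _ in range(n)]

# Two "instances" with the same hard-coded seed scan identical targets.
assert target_stream(0xC0DE) == target_stream(0xC0DE)
# Deriving the seed from per-instance state diversifies the scans.
assert target_stream(0xC0DE) != target_stream(0xBEEF)
```

This is why fixing the seeding in Code Red II produced such a sharp jump in distinct hosts scanned.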
While Code Red II used the same buffer overflow as the original Code Red, it used a probabilistic island-hopping approach instead of the less effective randomly generated IP addresses of its predecessor. This island-hopping approach treats network blocks as islands, and the worm focuses its attention on its local network before moving to another random destination network [Naz04]. It also creates an entry in the registry to flag the computer as compromised [CER01]. Finally, it generates backdoors on the compromised machines by loading the executable "cmd.exe" into executable script directories along with a Trojan horse copy of "explore.exe" that maps the computer's disk drives [Naz04].

2.6.2 Nimda (September 2001)

A little more than a month after Code Red II's release, Nimda appeared. Nimda used the same probabilistic island-hopping approach as Code Red II to infiltrate vulnerable servers. In contrast with other worms, Nimda uses multiple attack vectors to penetrate systems. In web server exploits, Nimda used backdoor shells from servers previously exploited by Code Red II and another exploit that allowed access to a computer's true root directory and the execution of arbitrary programs [Naz04]. It also exploited a vulnerability in the Microsoft email client that automatically ran a MIME-encoded readme.exe attachment [CER01a]. It spread over open network shares using MIME-encoded copies of itself that were automatically run if the preview option was enabled. Another web exploit uploads more exploits to an infected site. Since Nimda used many infection techniques, it has avoided complete removal and has remained largely active for many months after its first introduction to the network [Naz04].

2.6.3 SQL Slammer (January 2003)

SQL Slammer, also known as Sapphire and W32.Slammer, is the fastest-spreading worm to date.
Almost 90% of vulnerable computers were infected within 10 minutes on January 25, 2003, nearly an hour before anyone could even begin to protect against it [MPS03]. Five of the 13 root name servers and huge sections of the Internet went offline in the first 15 minutes of a relentless packet storm. Sapphire used a buffer overflow attack on Microsoft SQL Server 2000 and Desktop Engine 2000 software. The vulnerability had been known for six months, and a patch was available [CER03]. Due to improper software configurations, some victims did not even realize SQL was running [Bou03]. The Sapphire worm infected nearly 75 thousand hosts and reached its maximum scanning rate in three minutes. At that point, network bandwidth limitations began to limit its spread. Sapphire also caused airline flight cancellations, interference with elections, and ATM failures [MPS03]. This was the first worm to employ the concept of the Warhol worm; it was two orders of magnitude faster than Code Red. Luckily, Sapphire did not carry a malicious payload, or the effects would have been much more severe [MPS03]. The Sapphire worm used a buffer overflow exploit contained in a single UDP packet, as opposed to the TCP scans of Code Red and Nimda. Since it used UDP, it did not wait for a response and quickly consumed much of the available bandwidth. "Slammer's scanning technique is so aggressive that it quickly interferes with its own growth. Subsequent infections' contribution to its growth rate diminishes because those instances must compete with existing infections for scarce bandwidth. Thus, Slammer achieved its maximum Internet-wide scanning rate in minutes." [MPS03] Fortunately, there were three problems with Sapphire's random number generation code that helped limit the spread.
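The latency-bound versus bandwidth-bound difference noted above can be sketched with back-of-envelope arithmetic. The thread count, round-trip time, and uplink speed below are illustrative assumptions, not measured values; only the 376-byte payload size comes from the text:

```python
def scans_per_second_tcp(rtt_s, threads):
    """TCP scanning is latency-bound: each probe waits roughly one
    round trip for a handshake/response before moving on."""
    return threads * (1.0 / rtt_s)

def scans_per_second_udp(bandwidth_bps, packet_bytes):
    """UDP scanning is bandwidth-bound: fire-and-forget datagrams go
    out as fast as the link can carry them."""
    return bandwidth_bps / (packet_bytes * 8)

# Illustrative numbers: 100 scanning threads at 200 ms RTT vs. a 10 Mb/s
# uplink carrying Slammer's 376-byte payload (~404 bytes with headers).
tcp_rate = scans_per_second_tcp(rtt_s=0.2, threads=100)  # 500 scans/s
udp_rate = scans_per_second_udp(10_000_000, 404)         # ~3,094 scans/s
assert udp_rate > tcp_rate
```

Even with generous assumptions for the TCP side, the fire-and-forget scanner wins by a wide margin, which is the arithmetic behind Slammer's three-minute ramp-up.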
Further, the Internet community was better trained to stop the spread of worms after the prior outbreaks of Code Red and Nimda, and within an hour put in place UDP filters for 376-byte packets destined for port 1434 [CER03]. Additionally, port 1434 could easily be blocked; in contrast, blocking commonly used ports like 80 or 443 would effectively result in a denial of service that could have been catastrophic. The disturbing aspect of this incident is that the author of the Sapphire worm is described as having only decent programming skills. Much of the code was taken from the actual published exploit. This worm has now set the bar for future worms and is considered an alarming new standard. The fact that an average programmer can create the fastest-spreading worm in history shows that automated defenses are a necessity, since humans cannot respond in nearly enough time to protect online resources [MPS03].

2.7 Types of Worm Prevention / Protection

While it may seem that Internet worms are invincible, there are many network-based and host-based techniques that are effective against them. The host-based approach has much finer control, but the network approach is still needed to block the huge number of incoming packets that a worm can produce. While some of these methods require a great deal of preparation, they are well worth the effort when an especially rampant worm tries to invade a network. Active methods seek out and destroy worms.

2.7.1 Host-Based

There are many ways of preventing or slowing the spread of worms using a host-based approach, including firewalls and anti-virus software. A host-based firewall is a great tool to prevent the spread of a worm that has breached the larger network firewall. Firewalls, however, cannot block worms on ports that must remain open. Anti-virus software can remove worms from a machine, but requires constant updates of worm signatures.
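Signature-based filtering is, at bottom, pattern matching against known byte sequences, which is why it needs constant updates and why the polymorphism described earlier defeats it. A minimal sketch follows; the signature bytes are made-up stand-ins, not real worm signatures:

```python
# Hypothetical signature database: byte strings known to appear in worms.
SIGNATURES = [b"\x90\x90HACK", b"worm.com"]

def flagged(payload):
    """Flag a payload if it contains any known signature byte string."""
    return any(sig in payload for sig in SIGNATURES)

original = b"GET /default.ida?\x90\x90HACK HTTP/1.0"
# A one-byte polymorphic mutation slips past the unchanged filter,
# which is why each new variant forces another signature update.
mutated = original.replace(b"HACK", b"HACk")

assert flagged(original)
assert not flagged(mutated)
```

Real engines are far more sophisticated (wildcards, heuristics, emulation), but the arms race between mutation and signature updates has the same shape.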
Another problem with host-based firewalls and anti-virus software is the amount of time required to set them up [Naz04]. There are also potential problems with polymorphic worms that change their signatures, or quickly propagating worms that could overwhelm these tools. Other ways of preventing worms are to lower the privileges of software or to use sandboxing or cratering. If software is running at root level, any compromise could result in a worm gaining that level of privilege; therefore, running a process at a lower level would require extra steps to be taken for the worm to compromise a system. Sandboxing is another way of controlling worms. Sandboxing runs processes in a restricted region. While in this region, the worm is unable to elevate privileges or alter files outside of the region. Experts agree that sandboxing is too resource-intensive to be used effectively [Rob04]. Another novel way to stop worms is through cratering. Changing the access control lists for certain files a worm requires to run would render it ineffective [Lie03]. This solution was used against the 1988 HI.com worm, where experts recommended creating a file of the same name without read or write access [Naz04]. Misconfiguration of software seems to be one of the leading ways that worms exploit a system. Many software packages install unneeded routines by default. Systems can be made more secure by reducing the number of services offered. Most worms exploit vulnerabilities that have patches available. By installing current patches, worms would not be able to gain access to a system. Furthermore, most worms are released within one month of the patch's release [Naz04]. Proactively scanning a network to determine what services are offered on which ports, and installing patches for those services, is a good practice. Installing the latest patches, however, could cause downtime, and a patch could be incompatible with already installed software.
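The cratering idea described above can be demonstrated in a few lines on a POSIX system: pre-create the file the worm needs and strip every permission bit, so the worm can neither read, overwrite, nor execute it. The file name here is a placeholder for whatever path a particular worm writes to, and the sketch assumes Unix-style permission bits (Windows ACLs work differently):

```python
import os
import stat
import tempfile

workdir = tempfile.mkdtemp()
crater = os.path.join(workdir, "worm_payload.exe")  # hypothetical worm file name

# Occupy the name the worm expects, then remove all permissions.
open(crater, "w").close()
os.chmod(crater, 0)  # no read, write, or execute for anyone

assert stat.S_IMODE(os.stat(crater).st_mode) == 0
```

Because the name already exists and cannot be replaced or executed by an unprivileged process, the worm's install step fails even though the underlying vulnerability is still present.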
Another prevention technique observes host behavior to determine if the host has been compromised. There is a high learning curve with this method, since it must be customized to a particular network, but it can stop an infected host from spreading the worm any further. The problem with this solution is that it will not stop passive worms, worms that spread using the current usage patterns of the network [Naz04].

2.7.2 Network-Based Solutions

Network solutions should be used in conjunction with host-based solutions to form a better defense against worm-based attacks. Network solutions depend on both perimeter and subnet firewalls and on intrusion detection systems. These can be used alone or can be integrated for better protection [Naz04]. Perimeter firewalls prevent a worm from penetrating the network's outer layer, thereby protecting intranet resources. They can also prevent a worm from leaving an affected network. Subnet firewalls add an additional layer of protection in case the worm passes through the perimeter firewall. While firewalls cannot guarantee that a network is inaccessible to computers outside the firewall, they can protect the network behind the firewall's perimeter. An Intrusion Detection System (IDS) can detect worms. An IDS that creates rules for network firewalls could stop worms in their initial stages. However, firewalls could become overloaded with rules, causing a denial of service [Naz04]. A hardware solution called the Field-programmable Port Extender (FPX) scans 2.4 billion bits per second and drops any data deemed malicious [LMK04]. While this throughput is a sizable increase over traditional software firewalls, worm signatures must be constantly updated for it to be effective. TCP worms can be stopped by the LaBrea program. LaBrea looks for worms trying to connect to unused IP addresses on a network. The worm is "fooled" by a completed TCP three-way handshake, only to be put "on hold" by a connection that is kept open indefinitely.
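The tarpit behavior can be sketched on the loopback interface: the operating system completes the TCP handshake for a queued connection even before the listener calls accept, but because the tarpit never sends anything, the scanning side simply hangs on its read. A short timeout stands in here for the indefinite hold:

```python
import socket

# The "tarpit": a listener on an ephemeral loopback port that never
# accepts the connection or sends any data.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# The "worm" side: the three-way handshake completes (connect succeeds),
# but any read blocks until its timeout because the tarpit stays silent.
worm = socket.create_connection(("127.0.0.1", port), timeout=0.5)
stalled = False
try:
    worm.recv(1)
except socket.timeout:
    stalled = True

assert stalled
worm.close()
server.close()
```

A real worm thread stuck in that blocking read is a thread not scanning for new victims, which is the whole point of the technique.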
This virtually halts the worm by leaving its outgoing connections idle instead of looking for other hosts to infect [Lis03].

2.7.3 Other Protections

While host-based and network-based protections counter worms passively, other methods seek out worms and their networks to destroy them. These methods are controversial and legally questionable because they search through intranets much like the worm they are trying to fight. Some can also place a high load on a network that is already under strain from the spread of the worm. Some active attacks against spreading worms send messages to the infected machine to shut down, using the same attack as the worm. This slows the spread of a worm by shutting down machines that are replicating it. When a worm initiates a check to see if it has already infected a machine, another active approach sends out a false message to the worm that the computer has already been infected. This approach is quite time-consuming, depending on the number of computers in the network [Naz04]. Some worms use a central location to update their code. To attack the worm's host network itself, an inoperable module could be installed at this central update node. This inoperable module would spread to newly infected nodes, stopping the worm in its tracks. However, worm writers could easily defeat this by using public-key encryption to authenticate their updates [Naz04]. Another way to stop worms is to send out a worm to patch computers. Worms like Bagle and Netsky each install themselves and uninstall the other [Rob04]. The Welchia worm downloads Microsoft updates and attempts to unload the Blaster worm [Sym04]. Many factors must be considered when writing this type of worm. If this "good" worm has errors, it may cause a bigger problem than the original worm. The bandwidth the new worm uses compounds the problem, with a potential denial of service.
Finally, it is just as illegal to release a worm that "fixes" other computers as it is for a hacker to release the first malicious one [Naz04].

2.8 NSA SNAC Guides

In 2001, a Congressional oversight committee learned that over 155 separate government computer systems had been hacked. This led to the enforcement of established policies such as the Computer Security Act of 1987, which had a dual purpose. The first was to create a set of minimal security practices for Federal computer systems that contain sensitive information. The second was to assign responsibility for developing standards and guidelines to the National Institute of Standards and Technology (NIST), with guidance from the National Security Agency (NSA) [Cor01].

Many security vulnerabilities can be fixed by simply configuring the system properly. The NSA, working with the Defense Information Systems Agency (DISA), NIST, the FBI, the SANS Institute, the Center for Internet Security, and other vendors, has developed a set of benchmark security configuration guides to provide a "pre-flight checklist" of security settings [Wol03]. The NSA has recently declassified a group of documents it created to secure Microsoft and some UNIX operating systems and applications [NSA04]. These NSA Systems and Network Attack Center (SNAC) guides use a top-down approach to securing a computer and are broken into six broad categories: Application Guides, Database Server Guides, Operating System Guides, Router Guides, Supporting Document Guides, and Web Server Guides. The NSA SNAC guides give in-depth explanations of how to secure their respective categories as well as detailed instructions on how to perform those actions. The checklists at the end of the chapters are to the point and allow system administrators with in-depth knowledge of their systems to set up the computers quickly.
An example of a checklist entry from the Guide to the Secure Configuration and Administration of Microsoft Internet Information Services 5.0 [Wal02] is:

• Remove all NTFS permissions from the Inetpub directory, and assign only required access groups and accounts (i.e., remove Everyone; add WebUsers, WebAdmins, etc.)
• Establish a logical directory structure (i.e., separate static content, html, asp, scripts, and executables into different labeled directories)
• Set NTFS permissions on directory structures as required
• Delete/move all sample directories and scripts that execute the samples

The NSA's 60 Minute Network Security Guide [NSA02], part of the Supporting Document Guides section, provides an overview of security in both the Windows and UNIX environments. This guide defines the properties that make a good security policy. The most important aspect of a good policy is buy-in from all involved, which ensures that both the writers of the policy and those who implement it agree. The policy must have guidelines for implementation and be enforced with appropriate security tools [NSA02].

2.9 Exploits

The exploits used in this thesis are now discussed. Each exploit was selected to test the ability to compromise the OS and the selected services used.

2.9.1 OS Exploits

Worms like Blaster (August 2003) and Sasser (April 2004) scan randomly generated IP addresses, which makes it difficult to direct them at particular computers without extensive modification. Instead of using these worms, the actual exploits that the respective worms employed are used.

2.9.1.1 DCOM RPC Exploit

The Windows Distributed Component Object Model (DCOM) Remote Procedure Call (RPC) buffer overflow exploit is used by worms like MSBlaster. This exploit is described in Microsoft Security Bulletin MS03-026, originally posted on 16 July 2003 [Mic03]. An attacker can send a buffer overflow to ports 135, 139, 445, or other RPC-configured ports and gain system privileges for remote code execution.
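As a quick illustration of the attack surface involved, a simple TCP connect scan (a sketch, not part of the thesis toolset; the host and port list are whatever the caller supplies) shows whether these RPC ports are reachable from a given vantage point:

```python
import socket

RPC_PORTS = [135, 139, 445]  # endpoints targeted by the DCOM RPC overflow

def exposed_ports(host, ports=RPC_PORTS, timeout=1.0):
    """Return the subset of `ports` on `host` that accept a TCP connection.
    A host reachable on these ports from an untrusted network segment is a
    candidate target for DCOM-style exploits."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

A host that reports none of these ports open to an outside scanner is, at minimum, not exposing the DCOM RPC vector across that boundary.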
These ports are not intended to be used in a hostile environment and are normally blocked with either a hardware firewall or a software firewall such as the Internet Connection Firewall (ICF) built into Windows XP Professional. This experiment used the DCOM exploit written by Moore and analyzed by Wayne J. Freeman [Fre03], which sends the buffer overflow to port 135, where the RPC service improperly checks it. The malformed message is allowed to overflow the DCOM process and open a command shell on port 4444 with system-level privileges.

2.9.1.2 LSASS Exploit

The Local Security Authority Subsystem Service (LSASS) buffer overflow exploit is used by worms like Sasser, Korgo, Phatbot, Donk, and Bobax. The vulnerability is described in Microsoft Security Bulletin MS04-011 [Mic04]. The exploit attacks certain Active Directory service functions in LSASRV.DLL with a buffer overflow that causes the DsRolerUpgradeDownlevelServer function to write entries to the dcpromo.log file. It also lets the attacker remotely execute code of their choosing. The vulnerability was discovered by eEye Digital Security; this experiment uses exploit code written by Houseofdabus and analyzed by Travis Abrams [Abr04]. It connects to port 445 remotely and opens a port of the attacker's choosing on the vulnerable computer.

2.9.2 Microsoft IIS Extended Unicode Directory Traversal Vulnerability

The Unicode directory traversal exploit, discussed in Microsoft Security Bulletin MS00-078, is used by worms like Nimda. This exploit allows attackers to move out of the web root directory and access any file with basic Internet user permissions by replacing the forward or backward slash with its Unicode-encoded representation.

2.9.3 Outlook Exploit

To test Internet Explorer 6.0 on the Microsoft XP Professional computers, Georgi Guninski's security advisory #49 [Gun01] is used.
This exploit uses ActiveX to control the "Microsoft Outlook View Control," which permits access to and manipulation of the user's mail messages through Internet Explorer. It also allows the execution of arbitrary programs through Outlook's Application object.

2.9.4 Multipurpose Internet Mail Extensions (MIME) Header Exploit

The MIME type exploit described in Microsoft Security Bulletin MS01-020 [Mic01] is used by worms such as Klez, Bugbear, Mydoom, and Sobig. The original proof-of-concept code was written by Juan Carlos Garcia Cuartango. Microsoft Internet Explorer uses MIME to extend the functionality of Internet mail, allowing formats other than plain ASCII text. The MIME header is consulted only when evaluating whether an embedded file is potentially dangerous, not when the file is actually processed on the computer. When the embedded file's type is misrepresented, potentially dangerous code can be run on the vulnerable computer with the permissions of the current user [Mic01].

2.10 Summary

This chapter covered the background on Internet worms as well as the financial costs that have resulted from them. Some common traits of worms were described, as well as what the future holds for worms. Four worms were covered in detail: Code Red and Code Red II, Nimda, and Sapphire. Each exploit used in the thesis was also discussed, along with ways to stop worms from disrupting the network and the NSA SNAC guides.

3 Methodology

This chapter covers the goals and hypothesis of this thesis. It also covers the approach taken as well as the system boundaries.

3.1 Goals and Hypothesis

The intent of this thesis is to determine how well the NSA SNAC security guides protect the Windows 2000 Server and Windows XP Professional Workstation operating systems.
It also looks at protection of the following applications from selected Windows-based worms and exploits: Internet Information Services (IIS) 5.0, SQL Server 2000, and Exchange 2000/Outlook 2002/Outlook Express. This experiment also determines how well the NSA SNAC guides protect against worms on a newly installed operating system (OS) and applications, with and without recommended patches. It is expected that Microsoft patches protect against most of the chosen worms and exploits since they are written specifically to stop them. It is unknown how well the SNAC guides protect an initial setup with no patches.

3.2 Approach

The effectiveness of the SNAC guides is evaluated using two LANs connected by a Cisco 2600 router, as shown in Figure 1. In place of certain worms, the actual exploit is used because of the randomness of the worms' connections to other nodes.

Figure 1: System Including IDS

One LAN is the Infected LAN and serves as a launching point for the worm or actual exploit. This LAN uses only the initial setup of the Microsoft OS and applications in order to make sure that the worms can propagate without hindrance.
The other, initially Uninfected, LAN is used to determine how well the NSA SNAC guides protect against worm infection using four configurations:

1) Initial install from Microsoft CDs
2) Initial install and all current patches from the Microsoft Update website
3) Initial install with the NSA SNAC guides incorporated; no patches are installed except Service Pack 1, which is required to install Exchange 2000
4) Initial install, all Microsoft patches, and the NSA SNAC guides incorporated

3.3 System Boundaries

The system under test (SUT) is called the Worm Protection System and includes computers running the Windows 2000 Server operating system and computers running Windows XP Professional (see Figure 2).

Figure 2: System under Test (SUT)

It also includes the following Microsoft applications: Internet Information Services (IIS) 5.0, SQL Server 2000, and Exchange 2000/Outlook 2002/Outlook Express. The components under test (CUT) are the NSA SNAC security guideline settings and all the current Microsoft patches for these applications as well as for the OS. The scope of this experiment is limited to the NSA SNAC guides and current patches only; no other means of preventing worms, such as firewalls or packet filtering, are used.

3.4 System Services

This system provides one service: protection against network-propagated worms and exploits. There are two possible outcomes of this service: the system is vulnerable or the system is not vulnerable.
A system is vulnerable when a worm or exploit has executed its particular attack vector and has compromised the intended service on the target computer. A system is not vulnerable when the worm is unable to compromise the intended service. This research does not examine denial-of-service attacks.

3.5 Workload

The workload in this research consists of selected Windows-based worms, namely versions of the Code Red worms and the Slammer worm from the CERT/CC Artifact Catalog [build 528]. The following exploits are used: the Unicode Web Traversal exploit [Sec00], the Distributed Component Object Model (DCOM) Remote Procedure Call (RPC) exploit [Fre03], the Local Security Authority Subsystem Service (LSASS) exploit [Abr04], the MIME exploit described in Microsoft Security Bulletin (MSB) MS01-020 [Mic01], and the Outlook XP exploits described in Georgi Guninski's security advisory #49 [Gun01]. These worms and exploits are selected to test the ability to compromise the Windows OS and the selected services, IIS 5.0, SQL Server, and Exchange 2000/Outlook 2002, which the NSA SNAC guides are designed to protect.

3.6 Performance Metrics

The performance metrics are based on whether or not the computer is vulnerable to the exploit or worm. The outcome is either that the system is vulnerable or that it is not. Since this experiment only tests whether a particular system is vulnerable to exploits against the specific vector of attack, no other data is collected, such as how rapidly the worm spreads or how much bandwidth it uses.

3.7 Parameters

The system parameters for this experiment are listed below:

• Computer Setup: Each computer is loaded with an OS, Windows 2000 version 5.0.2195 or Windows XP Pro version 5.1.2600, and the appropriate applications: Active Directory/DNS, IIS 5.0, SQL Server 2000, and Exchange 2000/Outlook 2002
• Number of Computers: There are three computers on both the Infected and the Uninfected LAN.
These simulate an actual working environment with a Windows 2000 DNS server, a Windows 2000 Exchange/IIS/SQL server, and a Windows XP Professional client computer.
• Security Setup: The Infected LAN has an initial setup, while the Uninfected LAN has four security configurations.
• Worm/exploit entry points: Worms and exploits are released from the Infected LAN using the standard method of deployment explained in Chapter 4.

The worm/exploit workload parameters for this experiment are:

• Worm/exploit target of attack: OS and/or applications

3.8 Factors

The system factors and corresponding values for this experiment are:

• Security Configuration Setup:
1) Initial install from Microsoft CDs
2) Initial install and all current patches from the Microsoft Update website
3) Initial install, all Microsoft patches, and the NSA SNAC guides
4) Initial install with the NSA SNAC guides incorporated (no patches are installed except Service Pack 1, which is required to install Exchange 2000)

The following NSA SNAC guides are used in the configuration of the computers:

Guide to Secure Configuration and Administration of Microsoft Windows 2000 Certificate Services [Chr01]
Guide to Secure Configuration and Administration of Microsoft Windows 2000 Certificate Services (Checklist Format) [Chr01a]
Guide to Securing Microsoft Windows 2000 Group Policy [Han01]
Guide to Securing Microsoft Windows 2000 Group Policy: Security Configuration Tool Set [Han01a]
Guide to Securing Microsoft Windows 2000 Active Directory [SaR00]
Guide to Securing Microsoft Windows 2000 DNS [Ste01]
Guide to Securing Microsoft Windows 2000 File and Disk Resources [HaM02]
Guide to the Secure Configuration and Administration of Microsoft Exchange 2000 Version 1.2 [Pit03]
Guide to Secure Configuration and Administration of Microsoft Internet Information Services 5.0 [Wal02]
Guide to Secure Configuration and Administration of Microsoft SQL Server 2000 [ChH00]
Guide to Securing Microsoft Windows XP [BCH03]
Guide to Securing Microsoft Internet Explorer 5.5 Using Group Policy [Doe02]

The one divergence from the security setup is that the NSA SNAC guides call for all recent Microsoft patches to be installed. Only Service Pack 1 is applied to the initial setup with NSA SNAC guides, in order to test how well the NSA SNAC guides work alone, without Microsoft patches.

The workload factors are worm attack vectors; worms and exploits are selected to attack the following categories and are tested against each level of security setup:

o Operating Systems, Windows 2000 and Windows XP Pro: the DCOM RPC exploit [Fre03] and the LSASS exploit [Abr04]
o IIS 5.0: multiple Code Red worms from the CERT/CC Artifact Catalog and the Unicode Web Traversal exploit [Sec00]
o Exchange / Outlook XP: the MIME exploit described in MSB MS01-020 [Mic01] and the Outlook XP exploits [Gun01]
o SQL Server: the Slammer worm from the CERT/CC Artifact Catalog

3.9 Evaluation Technique

The hypothesis is tested by direct measurement of a real network. Currently there are no simulations that can directly model the vulnerabilities and their subsequent fixes with patches. Validation of the results is done by examining the computer for evidence of infection based on known results of an attack. Validation is performed for the worm or exploit on each node of the network. Every worm/exploit is run on an initial setup to make sure that it functions as expected. Each computer node is checked to make sure it is set up correctly in each configuration. The network is checked to make sure that it is sending and receiving packets correctly. Ethereal is used on each machine to verify that a specific worm is working correctly and that it traveled across the network.

3.10 Experimental Design

The experimental design for this research is a full factorial design with replications. This allows for the examination of every possible combination of workloads and configurations.
The numbers of factors, levels, and replications are:

• Number of computer configuration setups = 4
1) Initial install from Microsoft CDs
2) Initial install and all current patches from the Microsoft Update website
3) Initial install, all Microsoft patches, and the NSA SNAC guides
4) Initial install with the NSA SNAC guides incorporated (no patches are installed except Service Pack 1, which is required to install Exchange 2000)
• Number of replications = 2. The second replication is done to verify that the results are the same.
• Number of worm workloads (the number of computers on a LAN represents the number of computers that are susceptible to the particular exploit):
o DCOM RPC exploit = 3 computers on LAN * 4 setups = 12
o LSASS exploit = 3 computers on LAN * 4 setups = 12
o Unicode Web Traversal = 1 computer on LAN * 4 setups = 4
o MS01-020 exploit = 1 computer on LAN * 4 setups = 4
o Georgi Guninski's exploit = 1 computer on LAN * 4 setups = 4
o Slammer worm = 1 computer on LAN * 4 setups = 4
o Code Red versions = 1 computer on LAN * 4 setups = 4

Total number of experiments: (4) * (2) * (44) = 352

3.11 Summary

The experiments outlined in this chapter determine how well the NSA SNAC guides protect against specific worms and exploits compared to an initial setup or patched systems. The system boundaries are outlined as the computers involved, including their OS and applications, the NSA SNAC guides, and the current patches. It is expected that current Microsoft patches block worms and exploits better than the NSA SNAC guides since they are written specifically for them. The experiments performed and the data obtained from them are discussed in the next chapter.

4 Results

This chapter introduces each type of exploit and the results obtained with it. Each exploit is presented in Chapter 2, and the results are described below for each configuration.
Some alternative ways to protect against each exploit are also discussed here, and a short conclusion is provided in each section.

4.1 Operating System Exploit Results

4.1.1 DCOM RPC Exploit

Windows 2000 Server

The DCOM exploit is successful on the initial configuration of Windows 2000 Server, opening a command prompt in "C:\WINNT\System32". The exploit failed on all other configurations, with the exception of the initial install with NSA SNAC guides when Internet Protocol Security (IPSec) is turned off. When IPSec is used to block the vulnerable ports, the exploit is unsuccessful. Security Focus [Sec03] states that another way to protect against the exploit is to turn Distributed COM off, but notes that this can only be done on Windows 2000 with Service Pack 3 installed. The problem with this solution is that it could create problems with communication between the Active Directory/DNS server and the Exchange server, which are closely linked and need to communicate on these ports.

Windows XP Professional

The Windows XP Pro initial setup is also compromised by the exploit, which opens a command prompt in "C:\Windows\System32". The exploit failed on all other configurations, including the initial configuration with NSA SNAC guides when the built-in ICF is enabled. When the ICF is disabled, the DCOM RPC exploit is successful on this configuration.

4.1.2 LSASS Exploit

Windows 2000 Server

The Travis Abrams experiment used a Windows 2000 computer with Service Pack 3, whereas this experiment used Service Pack 1 on both the initial and the initial-with-NSA-SNAC-guides configurations and Service Pack 4 on the patched configurations. With only Service Pack 1 installed, the LSASS exploit restarts the computer after connecting. The LSASS exploit failed on the initial setup of the Active Directory Windows 2000 Server, but succeeded on the Exchange/IIS/SQL server. The exploit failed on all other Windows 2000 configurations.
The exploit did not work on the initial setup of Windows 2000 Server with only the NSA SNAC guides applied because the Local Security Policy "Additional restrictions for anonymous connections" setting is set to "No access without explicit anonymous permissions." This prevented the LSASS exploit from connecting to the NetBIOS null session. Security Focus recommends creating a read-only 'dcpromo.log' to stop this vulnerability [Sec04], which is why the exploit failed on the initial setup of the Windows 2000 Active Directory/DNS server. They also recommend TCP/IP filtering to block all un-initiated inbound TCP traffic to any port. TCP/IP filtering, however, may cause problems with the interaction between the Active Directory/DNS server and the Exchange server, which need to communicate over this port. Another approach is to stop the Server service; unfortunately, it is needed for IIS and Exchange administration to function correctly.

Windows XP Professional

The LSASS exploit succeeded on the Windows XP Pro computer with just an initial setup. The exploit failed on all other configurations. When the ICF is disabled on the initial configuration with NSA SNAC guides, the exploit succeeded. It succeeded even when the two Local Security Policies "Network Access: Do not allow anonymous enumeration of SAM accounts" and "Network Access: Do not allow anonymous enumeration of SAM accounts and shares" are enabled. A reason for this may be that "RestrictAnonymous = 2", which is present in Windows 2000 Server and fully prevents enumeration of users and shares [Cer02], is no longer a valid setting for Windows XP Professional.

4.1.3 Operating System Exploit Summary

The configurations with patches protected the computers since these patches are written specifically for the exploits. Note that all these patches were written after the exploits were discovered. The patches prevent the buffer overflows by altering the vulnerable code.
The NSA SNAC guides could not prevent inherent buffer overflow exploits against the operating system, but with IPSec enabled they could prevent the packets from reaching the computer. IPSec could also prevent an insider from attacking the NetBIOS ports, which are usually open behind a firewall.

4.2 IIS Exploit Results

4.2.1 Microsoft IIS Extended Unicode Directory Traversal Vulnerability

The exploit from Security Focus [Sec00] is used on the IIS 5.0 server in all configurations. The initial configuration is vulnerable to this exploit, while all other configurations are found to be secure.

4.2.2 Code Red Worm

The actual Code Red worm binaries, 'codered.D', 'red1.bin', 'red2a.bin', and 'red2b.bin', from the CERT/CC Artifact Catalog [build 528] database are used to test the IIS 5.0 server. The Code Red worm sends a buffer overflow to the Indexing Service DLL. Code Red exploited the 'Idq.dll' file because the script mappings for Internet Data Query (.idq) and Internet Data Administration (.ida) files are present. In this experiment, the binaries are sent to the Uninfected IIS server with NetCat on port 80. The initial configuration is vulnerable to each binary when tested; all other configurations prevented the exploit from working.

4.2.3 Conclusions

The NSA SNAC guides changed the IIS home directory so that it is on a drive separate from the operating system, preventing the Unicode traversal. The guides also rename common directories and eliminate unnecessary ones in case any of these are vulnerable. The NTFS file permissions are also changed so that minimal permissions are granted and "Guest" and "Everyone" are removed from the IIS directories. This prevents the "IUSR" account from having too much control over the IIS directories. The NSA SNAC guides also remove any unneeded script mappings to prevent any potential vulnerability in these ".dll" files from affecting the security of the web server.
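To illustrate why the traversal described above slips past naive filtering, the sketch below mimics a lenient UTF-8 decoder of the kind the vulnerable IIS used: the overlong two-byte sequence %c0%af decodes to '/', so a request containing no literal "../" still traverses upward once decoded. (This is an illustrative reconstruction, not IIS's actual code.)

```python
from urllib.parse import unquote_to_bytes

def lenient_utf8(raw):
    """Decode bytes while accepting overlong two-byte UTF-8 sequences
    (e.g. C0 AF -> '/'), as the vulnerable decoder did. Sketch: handles
    only the two-byte case relevant to MS00-078."""
    out, i = [], 0
    while i < len(raw):
        b = raw[i]
        if 0xC0 <= b <= 0xDF and i + 1 < len(raw) and 0x80 <= raw[i + 1] <= 0xBF:
            # Overlong forms are accepted here; a strict decoder would reject them.
            out.append(chr(((b & 0x1F) << 6) | (raw[i + 1] & 0x3F)))
            i += 2
        else:
            out.append(chr(b))
            i += 1
    return "".join(out)

url = "/scripts/..%c0%af..%c0%afwinnt/system32/cmd.exe"
assert "../" not in url  # a naive substring filter sees nothing wrong
decoded = lenient_utf8(unquote_to_bytes(url))
assert decoded == "/scripts/../../winnt/system32/cmd.exe"  # the decoded path escapes the web root
```

Strict decoding that rejects overlong forms closes this hole at the parser, while relocating the web root to a separate drive, as the SNAC guides direct, makes any traversal that does succeed land away from the system files.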
4.3 SQL Server Exploit Results

4.3.1 SQL Slammer Worm

To test Microsoft SQL Server 2000, the SQL Slammer 'worm.bin' binary from the CERT/CC Artifact Catalog [build 528] database is used. The SQL Slammer worm uses a buffer overflow against SQL Server 2000, as described in Chapter 2. The binary is sent to the SQL servers in each configuration using NetCat. This exploit is successful on the initial setup, but is unsuccessful on all other configurations.

4.3.2 Conclusions

The NSA SNAC guides recommend the use of Windows Authentication Mode. This prevented the worm from connecting to the server. Also, changing the port as the NSA SNAC guides suggest would make the server more difficult for a worm to find, requiring more code to locate and exploit it. In addition, the NSA SNAC guides recommend using IPSec to secure the server. This experiment did not use IPSec on the SQL server, but it would certainly add another substantial layer of security, as shown by the success of IPSec against the operating system exploits.

4.4 Internet Explorer (IE) / Email Exploits

4.4.1 Microsoft IE MIME Header Exploit Results

The MIME type exploit described in Microsoft Security Bulletin MS01-020 [Mic01] is used by worms such as Klez, Bugbear, Mydoom, and Sobig. The original proof-of-concept code was written by Juan Carlos Garcia Cuartango. The demonstration from Inside Security is used to test the Windows 2000 servers with Internet Explorer 5.0 [Ins01]. Since Internet Explorer 6.0, installed by default on Windows XP Professional, is not affected, it is not tested. This exploit uses an incorrectly configured MIME mapping and allows "foo.vbs" to run on the client, which writes a "test.txt" file to the C: drive. The initial configuration is vulnerable to this exploit and had the "test.txt" file written to its C: drive. The patched Windows 2000 configuration has Internet Explorer 6.0, so it is not vulnerable to the exploit.
The NSA SNAC guides configuration is vulnerable to this exploit when the "File download" security setting is enabled; when "File download" is disabled, the exploit fails to run the script, which prevents the creation of the "test.txt" file on the C: drive.

4.4.2 Microsoft IE / Outlook Exploit Results

On the initial configuration, this exploit deleted email from the user's Outlook and opened a command prompt able to execute any command. The patched system opened the Outlook mail in Internet Explorer, but it did not delete the mail or open the command window. The NSA SNAC guides configuration is not vulnerable with ActiveX disabled, and it neither opened Outlook emails nor a command prompt.

4.4.3 Conclusions

The patched configuration does nothing to disable ActiveX or file downloads, which could leave it open to other exploits. It does, however, protect against both of these exploits, although it still allows Internet Explorer to access Outlook's Application object. The NSA SNAC guides let system administrators choose whether to enable ActiveX and file downloads in the Internet Zone based on usability. While this is done to ensure functionality for end users, these tests show it is a risk to keep them enabled.

4.5 Summary

This chapter covers the results of the exploits in each experiment conducted. It also explains the exploits and how the NSA SNAC guides protected against them. Some alternative protection methods are also covered. Table 1 identifies how the four different security setups performed against each exploit or worm. The initial system configuration is vulnerable to all exploits. The NSA SNAC guides configuration as well as the patched system prevented the attacks.
Table 1: Result of Exploit on Different Configurations

Exploit/worm | Initial | Initial + NSA SNAC guides | Initial + Patches | Initial + Patches + NSA SNAC guides
DCOM RPC | System vulnerable | Exploit failed w/ IPSec or XP firewall; system vulnerable w/o IPSec or XP firewall | Exploit failed | Exploit failed
LSASS | System vulnerable | Exploit failed | Exploit failed | Exploit failed
Code Red | System vulnerable | Exploit failed | Exploit failed | Exploit failed
Unicode Traversal | System vulnerable | Exploit failed | Exploit failed | Exploit failed
SQL Slammer | System vulnerable | Exploit failed | Exploit failed | Exploit failed
Georgi Guninski's security advisory | System vulnerable | Exploit failed w/ ActiveX disabled | System partially vulnerable | Exploit failed w/ ActiveX disabled
MS01-020 (on IE 5.0) | System vulnerable | Exploit failed when file download disabled | Exploit failed | Exploit failed

5 Conclusions

This chapter presents the conclusions of the research. It compares the results of all configurations and gives the reasons for the results of the configuration using the NSA SNAC guides.

5.1 Conclusions of Research

While the NSA SNAC guides and the Microsoft patches are comparable in their protection against the exploits, as shown in Table 1, there are many factors to weigh when trying to determine which type of configuration is better. The most important is the window from when an exploit is discovered to when the system is protected. Another issue is what type of vulnerability the exploit is attacking. While the patched configuration protects about as well as the NSA SNAC guides configuration, there is a big difference in the timeliness of the fix to the vulnerability. The NSA SNAC guides are applied to the initial configuration, so the computers are protected as soon as they are put online. The patched systems, on the other hand, are vulnerable until the patch for the particular exploit is released and then installed on the computers.
Furthermore, patches do not secure passwords, change security settings, limit access, or remove extraneous packages that could contain undiscovered exploits. Some exploits rely on weak passwords, which patches do not fix. The NSA SNAC guides make sure that passwords meet complexity requirements as well as a 12-character minimum length.

The NSA SNAC guides limit which ports can be accessed by using IPSec or XP Professional's built-in firewall. This not only stops the known buffer overflow vulnerabilities, but could potentially stop new exploits from attacking these ports. It can also counter insider threats, since many organizations' NetBIOS ports are open behind their firewall. The NSA SNAC guides also protect applications whose ports cannot be closed, like IIS and SQL Server. The NSA SNAC guides recommended removing superfluous Internet Server Application Program Interface (ISAPI) filters as well as unused sample directories from IIS to prevent exploits. They also recommended that the web root directory be on a separate drive from the OS to prevent Unicode traversal exploits. The SQL Server should be moved to a non-standard port, which helps against worms that scan for the standard port. Further, the NSA SNAC guides recommend Windows Authentication Mode for the SQL Server, which uses the built-in security authentication of the Windows OS. Another recommendation is to use IPSec for these SQL services so that connections to the computer are limited, reducing exposure to possible exploits. The NSA SNAC guides also disable unneeded services to prevent exploitation, further reducing the listening ports and services that could be vulnerable. Given these facts and the results of the experiments, it is reasonably certain that the NSA SNAC guides provide better protection than Microsoft patches alone. The specific reasons that the NSA SNAC guides prevented the exploits from working are shown in Table 2.
Table 2: NSA SNAC Guides Configuration Results

Exploit/worm | Reason exploit failed on initial configuration with NSA SNAC guides
DCOM RPC | Windows 2000: IPSec blocked vulnerable ports; XP Professional: ICF blocked vulnerable ports
LSASS | Windows 2000: NetBIOS null session not allowed; XP Professional: ICF blocked vulnerable ports
Code Red | Removed vulnerable ISAPI filters
Unicode Traversal | Moved web directory to a separate drive
SQL Slammer | Only Windows Authentication Mode is used
Georgi Guninski's security advisory | Internet Explorer 6.0 security settings: ActiveX disabled
MS01-020 | Internet Explorer 5.0 security settings: file downloads disabled

5.2 Significance of Research

The results show that the NSA SNAC guides protect against a number of vulnerabilities as well as patches do, which allows administrators time to test patches. Some companies verify that Microsoft patches do not interfere with existing software, so using the NSA SNAC guides will help protect their computer systems during this validation period.

5.3 Recommendations for Action

While the NSA SNAC guides alone provide better protection than patches alone, it is not the intention of this experiment to persuade anyone to stop using patches. The NSA SNAC guides themselves advocate defense in depth. They recommend not only the use of patches, but also firewalls, virus-scanning software, and user education. While the NSA SNAC guides protected against all the attacks used in these experiments, there is no guarantee that they will protect against all vulnerabilities by themselves. Computers are best protected against vulnerabilities through constant re-evaluation of security practices. It should be a priority for the NSA to produce non-technical guidelines for securing Windows XP Home/Professional and Windows 2000, as well as common applications, since home users are now being targeted by many exploits. Non-technical home users need simple and concise checklists if the guidance is to be used at all.
The current NSA SNAC guides are in-depth documents written for knowledgeable system administrators; they would frustrate common users and consequently go unused.

5.4 Summary
This chapter covered the conclusions drawn from the results of the experiments. While the NSA SNAC guides appear to work as well as the Microsoft patches, it is not recommended that the NSA SNAC guides be used alone. The real strength of the NSA SNAC guides is that they promote defense in depth rather than relying on a single method of protection to defend against exploits.

REPORT DOCUMENTATION PAGE (Standard Form 298)
1. Report Date: 21-03-2005
2. Report Type: Master's Thesis
3. Dates Covered: April 2003 - March 2005
4. Title: National Security Agency (NSA) Systems and Network Attack Center (SNAC) Security Guides versus Known Worms
6. Author: Sullivan, Matthew W., 2d Lt, USAF
7. Performing Organization: Air Force Institute of Technology, Graduate School of Engineering and Management (AFIT/EN), 2950 Hobson Way, Building 640, WPAFB OH 45433-8865
8. Performing Organization Report Number: AFIT/GIA/ENG/05-07
9. Sponsoring/Monitoring Agency: Harley Parkes, CISSP, Chief, Operational Network Evaluations, National Security Agency, Fort George G. Meade, Maryland 20755-6000, (410) 854-6529
12. Distribution/Availability: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.
14. Abstract: Internet worms impact Internet security around the world even though many defenses exist to prevent the damage they inflict. The National Security Agency (NSA) Systems and Network Attack Center (SNAC) publishes in-depth configuration guides to protect networks from intrusion; however, the effectiveness of these guides in preventing the spread of worms has not been studied. This thesis establishes how well the NSA SNAC guides protect against various worms and exploits compared to Microsoft patches alone. It also identifies the aspects of the configuration guidance that are most effective, in the absence of patches and updates, against network worm and e-mail virus attacks. The results show that both the Microsoft patches and the NSA SNAC guides protect against all worms and exploits tested. The main difference is that the NSA SNAC guides protected as soon as they were applied, whereas the Microsoft patches had to be written, distributed, and applied before they could take effect. The NSA SNAC guides also provided protection by changing default permissions and passwords that some worms and exploits rely on, and by removing extraneous packages that could contain undiscovered exploits.
15. Subject Terms: Computer security, Computer viruses, Computer worms, NSA SNAC guides
16. Security Classification: Report U; Abstract U; This Page U. 17. Limitation of Abstract: UU. 18. Number of Pages: 60
19. Responsible Person: Dr. Rusty O. Baldwin, (937) 255-6565 ext 4445, rbaldwin@afit.edu
Standard Form 298 (Rev. 8-98), prescribed by ANSI Std. Z39-18
CNSS Secretariat (I42) . National Security Agency . 9800 Savage Road STE 6716 . Ft Meade MD 20755-6716 . (410) 854-6805 . FAX: (410) 854-6814 . nstissc@radium.ncsc.mil

FACT SHEET
CNSS Policy No. 15, Fact Sheet No. 1
National Policy on the Use of the Advanced Encryption Standard (AES) to Protect National Security Systems and National Security Information
June 2003
(This Fact Sheet is available at: www.nsstissc.gov)

Background
(1) Federal Information Processing Standard (FIPS) No. 197, dated 26 November 2001, promulgated and endorsed the Advanced Encryption Standard (AES) as the approved algorithm for protecting sensitive (unclassified) electronic data. Since that time, questions have arisen whether AES (or products in which AES is implemented) can or should be used to protect classified information and at what levels. Responsive to those questions, the National Security Agency (NSA) has conducted a review and analysis of AES and its applicability to the protection of national security systems and/or information. The policy guidance documented herein reflects the results of those efforts.

Introduction
(2) In the context of today's complex world and even more complex communicating environments, the need for protecting information takes on added importance and significance. The protection of information is not solely dependent on the mathematical strength of an algorithm that may be a part of a communications security device or a communications system, nor is the selection of that algorithm based only on the classification of the information to be protected. Many factors come into play in deciding what algorithm can or should be used to satisfy a particular requirement. These include:
- The quality of implementation of the algorithm in specific software, firmware, or hardware;
- Operational requirements associated with U.S. Government-approved key and key management activities;
- The uniqueness of the classified information to be protected; and/or
- Requirements for interoperability both domestically and internationally.
(3) The above realities dictate the adoption of a flexible and adaptable strategy that encourages the use of a mix of appropriately implemented NSA-developed algorithms and those available within the public domain.

Scope
(4) This policy is applicable to all U.S. Government Departments or Agencies that are considering the acquisition or use of products incorporating the Advanced Encryption Standard (AES) to satisfy Information Assurance (IA) requirements associated with the protection of national security systems and/or national security information.

Policy
(5) NSA-approved cryptography1 is required to protect (i.e., to provide confidentiality, authentication, non-repudiation, integrity, or to ensure system availability) national security systems and national security information at all classification levels.
(6) The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.
(7) Subject to policy and guidance for non-national security systems and information (e.g., FIPS 140-2), U.S. Government Departments and Agencies may wish to consider the use of security products that implement AES for IA applications where the protection of systems or information, although not classified, may nevertheless be critical to the conduct of organizational missions.
This would include critical infrastructure protection and homeland security activities as addressed in Executive Order 13231, Subject: Critical Infrastructure Protection in the Information Age (dated 16 October 2001), and Executive Order 13228, Subject: Homeland Security (dated 8 October 2001), respectively. Evaluations of products employing AES for these types of applications are subject to review and approval by the National Institute of Standards and Technology (NIST) in accordance with the requirements of Federal Information Processing Standard (FIPS) 140-2.

1 NSA-approved cryptography consists of an approved algorithm; an implementation that has been approved for the protection of classified information in a particular environment; and a supporting key management infrastructure.

Responsibilities
(8) U.S. Government Departments or Agencies desiring to use security products implementing AES to protect national security systems and/or information, or other mission critical information related to national security, should submit the details of their requirements to the Director, National Security Agency (ATTN: IA Directorate, V1) for review. NSA will employ established programs (e.g., NSA sponsored developments, the Commercial COMSEC Endorsement Program (CCEP), or the User Partnership Program) in developing and certifying AES security products for these requirements.
(9) The Director, National Security Agency shall:
- Review and approve all cryptographic implementations intended to protect national security systems and/or national security information.
- Provide advice and assistance to U.S. Government Departments and Agencies in identifying protection requirements and selecting the encryption algorithms and product implementations most appropriate to their needs.
(10) The Director, National Institute of Standards and Technology (NIST) shall provide advice and assistance to U.S.
Government Departments and Agencies regarding the use of AES for protecting sensitive (unclassified) electronic data.
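Paragraph (6) above amounts to a simple mapping from classification level to acceptable AES key lengths. The following sketch expresses that mapping; the function name and return convention are illustrative assumptions, and the check covers algorithm strength only, not the NSA certification the policy also requires.

```python
def aes_key_lengths_allowed(classification: str) -> tuple:
    """Key lengths (bits) of AES sufficient for a classification level
    under CNSS Policy No. 15: all three lengths up to SECRET, only
    192/256 for TOP SECRET.  (NSA review and certification of the
    implementing product is still required; this check covers
    algorithm strength only.)"""
    level = classification.upper()
    if level in ("CONFIDENTIAL", "SECRET"):
        return (128, 192, 256)
    if level == "TOP SECRET":
        return (192, 256)
    raise ValueError("unrecognized classification: " + classification)

print(aes_key_lengths_allowed("SECRET"))      # (128, 192, 256)
print(aes_key_lengths_allowed("Top Secret"))  # (192, 256)
```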
1. WHAT IS ELECTRONIC CASH?
We begin by carefully defining "electronic cash." This term is often applied to any electronic payment scheme that superficially resembles cash to the user. In fact, however, electronic cash is a specific kind of electronic payment scheme, defined by certain cryptographic properties. We now focus on these properties.
1.1 Electronic Payment
The term electronic commerce refers to any financial transaction involving the electronic transmission of information. The packets of information being transmitted are commonly called electronic tokens. One should not confuse the token, which is a sequence of bits, with the physical media used to store and transmit the information.
We will refer to the storage medium as a card since it commonly takes the form of a wallet-sized card made of plastic or cardboard. (Two obvious examples are credit cards and ATM cards.) However, the "card" could also be, e.g., a computer memory.
A particular kind of electronic commerce is that of electronic payment. An electronic payment protocol is a series of transactions, at the end of which a payment has been made, using a token issued by a third party. The most common example is that of credit cards when an electronic approval process is used. Note that our definition implies that neither payer nor payee issues the token.1
The electronic payment scenario assumes three kinds of players:2
• a payer or consumer, whom we will name Alice.
• a payee, such as a merchant. We will name the payee Bob.
• a financial network with whom both Alice and Bob have accounts. We will informally refer to the financial network as the Bank.
__________
1 In this sense, electronic payment differs from such systems as prepaid phone cards and subway fare cards, where the token is issued by the payee.
2 In 4.1, we will generalize this scenario when we discuss transfers.
________________________________________
1.2 Security of Electronic Payments
With the rise of telecommunications and the Internet, it is increasingly the case that electronic commerce takes place using a transmission medium not under the control of the financial system. It is therefore necessary to take steps to insure the security of the messages sent along such a medium.
The necessary security properties are:
• Privacy, or protection against eavesdropping. This is obviously of importance for transactions involving, e.g., credit card numbers sent on the Internet.
• User identification, or protection against impersonation. Clearly, any scheme for electronic commerce must require that a user knows with whom she is dealing (if only as an alias or credit card number).
• Message integrity, or protection against tampering or substitution. One must know that the recipient's copy of the message is the same as what was sent.
• Nonrepudiation, or protection against later denial of a transaction. This is clearly necessary for electronic commerce, for such things as digital receipts and payments.
The last three properties are collectively referred to as authenticity.
These security features can be achieved in several ways. The technique that is gaining widespread use is to employ an authentication infrastructure. In such a setup, privacy is attained by enciphering each message, using a private key known only to the sender and recipient. The authenticity features are attained via key management, e.g., the system of generating, distributing and storing the users' keys.
Key management is carried out using a certification authority, or a trusted agent who is responsible for confirming a user's identity. This is done for each user (including banks) who is issued a digital identity certificate. The certificate can be used whenever the user wishes to identify herself to another user. In addition, the certificates make it possible to set up a private key between users in a secure and authenticated way. This private key is then used to encrypt subsequent messages. This technique can be implemented to provide any or all of the above security features.
Although the authentication infrastructure may be separate from the electronic-commerce setup, its security is an essential component of the security of the electronic-commerce system. Without a trusted certification authority and a secure infrastructure, the above four security features cannot be achieved, and electronic commerce becomes impossible over an untrusted transmission medium.
We will assume throughout the remainder of this paper that some authentication infrastructure is in place, providing the four security features.
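The message-integrity property described above can be illustrated with a minimal sketch using Python's standard library: a message authentication code computed under a shared key detects tampering. The key value and function names are assumptions for the sketch, standing in for a session key established via the certificate-based key management the text describes; a real system would also encrypt.

```python
import hmac
import hashlib

# Shared secret, assumed to have been established through the
# certificate-based key management described in the text.
key = b"session-key-established-via-certificates"

def protect(message: bytes) -> tuple:
    """Attach an integrity tag to a message (encryption omitted)."""
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"pay Bob $10")
print(verify(msg, tag))                  # True: message intact
print(verify(b"pay Bob $1000000", tag))  # False: tampering detected
```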
________________________________________
1.3 Electronic Cash
We have defined privacy as protection against eavesdropping on one's communications. Some privacy advocates such as David Chaum (see [2],[3]), however, define the term far more expansively. To them, genuine "privacy" implies that one's history of purchases not be available for inspection by banks and credit card companies (and by extension the government). To achieve this, one needs not just privacy but anonymity. In particular, one needs
• payer anonymity during payment,
• payment untraceability so that the Bank cannot tell whose money is used in a particular payment.
These features are not available with credit cards. Indeed, the only conventional payment system offering them is cash. Thus Chaum and others have introduced electronic cash (or digital cash), an electronic payment system which offers both features. The sequence of events in an electronic cash payment is as follows:
• withdrawal, in which Alice transfers some of her wealth from her Bank account to her card.
• payment, in which Alice transfers money from her card to Bob's.
• deposit, in which Bob transfers the money he has received to his Bank account.
(See Figure 1.)
________________________________________
Figure 1. The three types of transactions in a basic electronic cash model.
________________________________________
These procedures can be implemented in either of two ways:
• On-line payment means that Bob calls the Bank and verifies the validity of Alice's token3 before accepting her payment and delivering his merchandise. (This resembles many of today's credit card transactions.)
• Off-line payment means that Bob submits Alice's electronic coin for verification and deposit sometime after the payment transaction is completed. (This method resembles how we make small purchases today by personal check.)
Note that with an on-line system, the payment and deposit are not separate steps. We will refer to on-line cash and off-line cash schemes, omitting the word "electronic" since there is no danger of confusion with paper cash.
__________
3 In the context of electronic cash, the token is usually called an electronic coin.
________________________________________
1.4 Counterfeiting
As in any payment system, there is the potential here for criminal abuse, with the intention either of cheating the financial system or using the payment mechanism to facilitate some other crime. We will discuss some of these problems in 5. However, the issue of counterfeiting must be considered here, since the payment protocols contain built-in protections against it.
There are two abuses of an electronic cash system analogous to counterfeiting of physical cash:
• Token forgery, or creating a valid-looking coin without making a corresponding Bank withdrawal.
• Multiple spending, or using the same token over again. Since an electronic coin consists of digital information, it is as valid-looking after it has been spent as it was before. (Multiple spending is also commonly called re-spending, double spending, and repeat spending.)
One can deal with counterfeiting by trying to prevent it from happening, or by trying to detect it after the fact in a way that identifies the culprit. Prevention clearly is preferable, all other things being equal.
Although it is tempting to imagine electronic cash systems in which the transmission and storage media are secure, there will certainly be applications where this is not the case. (An obvious example is the Internet, whose users are notoriously vulnerable to viruses and eavesdropping.) Thus we need techniques of dealing with counterfeiting other than physical security.
• To protect against token forgery, one relies on the usual authenticity functions of user identification and message integrity. (Note that the "user" being identified from the coin is the issuing Bank, not the anonymous spender.)
• To protect against multiple spending, the Bank maintains a database of spent electronic coins. Coins already in the database are to be rejected for deposit. If the payments are on-line, this will prevent multiple spending. If off-line, the best we can do is to detect when multiple spending has occurred. To protect the payee, it is then necessary to identify the payer. Thus it is necessary to disable the anonymity mechanism in the case of multiple spending.
The features of authenticity, anonymity, and multiple-spender exposure are achieved most conveniently using public-key cryptography. We will discuss how this is done in the next two chapters.
________________________________________
2. A CRYPTOGRAPHIC DESCRIPTION
In this chapter, we give a high-level description of electronic cash protocols in terms of basic authentication mechanisms. We begin by describing these mechanisms, which are based on public-key cryptography. We then build up the protocol gradually for ease of exposition. We start with a simplified scheme which provides no anonymity. We then incorporate the payment untraceability feature, and finally the payment anonymity property. The result will be a complete electronic cash protocol.
________________________________________
2.1 Public-Key Cryptographic Tools
We begin by discussing the basic public-key cryptographic techniques upon which the electronic cash implementations are based.
One-Way Functions. A one-way function is a correspondence between two sets which can be computed efficiently in one direction but not the other. In other words, the function phi is one-way if, given s in the domain of phi, it is easy to compute t = phi(s), but given only t, it is hard to find s. (The elements are typically numbers, but could also be, e.g., points on an elliptic curve; see [10].)
Key Pairs. If phi is a one-way function, then a key pair is a pair s, t related in some way via phi. We call s the secret key and t the public key. As the names imply, each user keeps his secret key to himself and makes his public key available to all. The secret key remains secret even when the public key is known, because the one-way property of phi insures that s cannot be computed from t.
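A toy instance of such a key pair uses modular exponentiation as the one-way function phi: computing t = g^s mod p is easy, while recovering s from t is the discrete logarithm problem. The specific modulus and generator below are assumptions chosen for illustration only.

```python
# Toy key pair built on modular exponentiation.  Computing
# t = g^s mod p is fast; recovering s from t (the discrete
# logarithm) is believed hard for well-chosen large p.
p = 2**127 - 1     # a Mersenne prime, used here purely for illustration
g = 3              # base; assumed suitable for this sketch

secret_key = 123456789                 # s: kept private by the user
public_key = pow(g, secret_key, p)     # t = phi(s): published freely

# Easy direction: anyone holding s can recompute t instantly.
assert public_key == pow(g, secret_key, p)
print(public_key)
```

Note that publishing `public_key` reveals nothing practical about `secret_key`, which is exactly the property the key-pair construction relies on.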
All public-key protocols use key pairs. For this reason, public-key cryptography is often called asymmetric cryptography. Conventional cryptography is often called symmetric cryptography, since one can both encrypt and decrypt with the private key but do neither without it.
Signature and Identification. In a public key system, a user identifies herself by proving that she knows her secret key without revealing it. This is done by performing some operation using the secret key which anyone can check or undo using the public key. This is called identification. If one uses a message as well as one's secret key, one is performing a digital signature on the message. The digital signature plays the same role as a handwritten signature: identifying the author of the message in a way which cannot be repudiated, and confirming the integrity of the message.
Secure Hashing. A hash function is a map from all possible strings of bits of any length to a bit string of fixed length. Such functions are often required to be collision-free: that is, it must be computationally difficult to find two inputs that hash to the same value. If a hash function is both one-way and collision-free, it is said to be a secure hash.
The most common use of secure hash functions is in digital signatures. Messages might come in any size, but a given public-key algorithm requires working in a set of fixed size. Thus one hashes the message and signs the secure hash rather than the message itself. The hash is required to be one-way to prevent signature forgery, i.e., constructing a valid-looking signature of a message without using the secret key.4 The hash must be collision-free to prevent repudiation, i.e., denying having signed one message by producing another message with the same hash.
__________
4 Note that token forgery is not the same thing as signature forgery. Forging the Bank's digital signature without knowing its secret key is one way of committing token forgery, but not the only way. A bank employee or hacker, for instance, could "borrow" the Bank's secret key and validly sign a token. This key compromise scenario is discussed in 5.3.
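The hash-then-sign procedure described above can be sketched with a textbook RSA key. The tiny classroom modulus below is an assumption for illustration and offers no real security; a real scheme signs the full-size digest rather than a reduced one.

```python
import hashlib

# Textbook hash-then-sign sketch: hash the message, then sign the
# (reduced) digest with a toy RSA key.  The modulus is the classic
# classroom example (n = 61 * 53) and is utterly insecure.
p, q = 61, 53
n = p * q                  # 3233
e, d = 17, 2753            # public / secret exponents: e*d = 1 mod 3120

def digest(message: bytes) -> int:
    # Reduce the SHA-256 digest into the signing set (mod n);
    # real schemes sign the full-size hash.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)       # uses the secret key d

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)  # public key only

sig = sign(b"transfer one coin to Bob")
print(verify(b"transfer one coin to Bob", sig))            # True
print(verify(b"transfer one coin to Bob", (sig + 1) % n))  # False: altered signature
```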
________________________________________
2.2 A Simplified Electronic Cash Protocol
We now present a simplified electronic cash system, without the anonymity features.
PROTOCOL 1: On-line electronic payment.
Withdrawal:
  Alice sends a withdrawal request to the Bank.
  Bank prepares an electronic coin and digitally signs it.
  Bank sends coin to Alice and debits her account.
Payment/Deposit:
  Alice gives Bob the coin.
  Bob contacts Bank5 and sends coin.
  Bank verifies the Bank's digital signature.
  Bank verifies that coin has not already been spent.
  Bank consults its withdrawal records to confirm Alice's withdrawal. (optional)
  Bank enters coin in spent-coin database.
  Bank credits Bob's account and informs Bob.
  Bob gives Alice the merchandise.
__________
5 One should keep in mind that the term "Bank" refers to the financial system that issues and clears the coins. For example, the Bank might be a credit card company, or the overall banking system. In the latter case, Alice and Bob might have separate banks. If that is so, then the "deposit" procedure is a little more complicated: Bob's bank contacts Alice's bank, "cashes in" the coin, and puts the money in Bob's account.
________________________________________
PROTOCOL 2: Off-line electronic payment.
Withdrawal:
  Alice sends a withdrawal request to the Bank.
  Bank prepares an electronic coin and digitally signs it.
  Bank sends coin to Alice and debits her account.
Payment:
  Alice gives Bob the coin.
  Bob verifies the Bank's digital signature. (optional)
  Bob gives Alice the merchandise.
Deposit:
  Bob sends coin to the Bank.
  Bank verifies the Bank's digital signature.
  Bank verifies that coin has not already been spent.
  Bank consults its withdrawal records to confirm Alice's withdrawal. (optional)
  Bank enters coin in spent-coin database.
  Bank credits Bob's account.
The above protocols use digital signatures to achieve authenticity. The authenticity features could have been achieved in other ways, but we need to use digital signatures to allow for the anonymity mechanisms we are about to add.
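The steps of PROTOCOL 1 above, including the spent-coin database check, can be simulated in a few lines. The class layout, coin encoding, and toy RSA key are illustrative assumptions, not a real Bank implementation.

```python
import hashlib

class Bank:
    """Toy model of PROTOCOL 1: the Bank signs coins on withdrawal and
    keeps a spent-coin database to reject double deposits.  The tiny
    textbook RSA key (n = 61 * 53) is illustrative only."""
    def __init__(self):
        self.n, self.e, self._d = 3233, 17, 2753   # classroom RSA key
        self.spent = set()                          # spent-coin database
        self.accounts = {"Alice": 100, "Bob": 0}

    def _h(self, serial: bytes) -> int:
        return int.from_bytes(hashlib.sha256(serial).digest(), "big") % self.n

    def withdraw(self, payer: str, serial: bytes):
        self.accounts[payer] -= 1                          # debit Alice
        return (serial, pow(self._h(serial), self._d, self.n))  # signed coin

    def deposit(self, payee: str, coin) -> bool:
        serial, sig = coin
        if pow(sig, self.e, self.n) != self._h(serial):
            return False                # Bank's signature does not verify
        if coin in self.spent:
            return False                # already in spent-coin database
        self.spent.add(coin)            # enter coin in spent-coin database
        self.accounts[payee] += 1       # credit Bob
        return True

bank = Bank()
coin = bank.withdraw("Alice", b"coin-0001")   # Alice withdraws a coin
print(bank.deposit("Bob", coin))              # True: first deposit accepted
print(bank.deposit("Bob", coin))              # False: multiple spending rejected
```

Because payment and deposit are a single on-line step here, the second deposit attempt is refused immediately, which is exactly the prevention property the text ascribes to on-line systems.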
114________________________________________
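As a concrete (and deliberately toy) illustration of the deposit-side checks in Protocols 1 and 2, the sketch below uses textbook RSA with tiny demo primes and a spent-coin set. All names and parameters are illustrative, not part of any deployed system:

```python
# Toy RSA key: N = p*q with small demo primes (far too small for real use).
p, q = 61, 53
N = p * q                        # 3233
phi = (p - 1) * (q - 1)          # 3120
v = 17                           # public verification exponent
s = pow(v, -1, phi)              # secret signing exponent (2753)

def bank_sign(coin):
    """Bank's signature with message recovery: coin^s (mod N)."""
    return pow(coin, s, N)

def bank_verify(sig):
    """Anyone can recover the coin from the signature: sig^v (mod N)."""
    return pow(sig, v, N)

spent = set()                    # Bank's spent-coin database

def deposit(sig):
    """Verify the signature, then check the spent-coin database."""
    coin = bank_verify(sig)
    if coin in spent:
        return "rejected: already spent"
    spent.add(coin)
    return "credited"

coin = 1234                      # Alice's coin (just a number here)
sig = bank_sign(coin)
print(deposit(sig))              # credited
print(deposit(sig))              # rejected: already spent
```

The same deposit routine serves both protocols; only the moment at which the Bank is contacted differs.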
2.3 Untraceable Electronic Payments
In this section, we modify the above protocols to include payment untraceability. For this, it is necessary that the Bank not be able to link a specific withdrawal with a specific deposit.6 This is accomplished using a special kind of digital signature called a blind signature.
We will give examples of blind signatures in 3.2, but for now we give only a high-level description. In the withdrawal step, the user changes the message to be signed using a random quantity. This step is called "blinding" the coin, and the random quantity is called the blinding factor. The Bank signs this random-looking text, and the user removes the blinding factor. The user now has a legitimate electronic coin signed by the Bank. The Bank will see this coin when it is submitted for deposit, but will not know who withdrew it, since the random blinding factors are unknown to the Bank. (Obviously, it will no longer be possible to do the checking of the withdrawal records that was an optional step in the first two protocols.)
Note that the Bank does not know what it is signing in the withdrawal step. This introduces the possibility that the Bank might be signing something other than what it is intending to sign. To prevent this, we specify that a Bank's digital signature by a given secret key is valid only as authorizing a withdrawal of a fixed amount. For example, the Bank could have one key for a $10 withdrawal, another for a $50 withdrawal, and so on.7
__________
6 In order to achieve either anonymity feature, it is of course necessary that the pool of electronic coins be a large one.
7 One could also broaden the concept of "blind signature" to include interactive protocols where both parties contribute random elements to the message to be signed. An example of this is the "randomized blind signature" occurring in the Ferguson scheme discussed in 3.3.
________________________________________
PROTOCOL 3: Untraceable On-line electronic payment.
Withdrawal:
  Alice creates an electronic coin and blinds it.
  Alice sends the blinded coin to the Bank with a withdrawal request.
  Bank digitally signs the blinded coin.
  Bank sends the signed blinded coin to Alice and debits her account.
  Alice unblinds the signed coin.
Payment/Deposit:
  Alice gives Bob the coin.
  Bob contacts Bank and sends coin.
  Bank verifies the Bank's digital signature.
  Bank verifies that coin has not already been spent.
  Bank enters coin in spent-coin database.
  Bank credits Bob's account and informs Bob.
  Bob gives Alice the merchandise.
________________________________________
PROTOCOL 4: Untraceable Off-line electronic payment.
Withdrawal:
  Alice creates an electronic coin and blinds it.
  Alice sends the blinded coin to the Bank with a withdrawal request.
  Bank digitally signs the blinded coin.
  Bank sends the signed blinded coin to Alice and debits her account.
  Alice unblinds the signed coin.
Payment:
  Alice gives Bob the coin.
  Bob verifies the Bank's digital signature. (optional)
  Bob gives Alice the merchandise.
Deposit:
  Bob sends coin to the Bank.
  Bank verifies the Bank's digital signature.
  Bank verifies that coin has not already been spent.
  Bank enters coin in spent-coin database.
  Bank credits Bob's account.
________________________________________
2.4 A Basic Electronic Cash Protocol
We now take the final step and modify our protocols to achieve payment anonymity. The ideal situation (from the point of view of privacy advocates) is that neither payer nor payee should know the identity of the other. This makes remote transactions using electronic cash totally anonymous: no one knows where Alice spends her money or who pays her.
It turns out that this is too much to ask: there is no way in such a scenario for the consumer to obtain a signed receipt. Thus we are forced to settle for payer anonymity.
If the payment is to be on-line, we can use Protocol 3 (implemented, of course, to allow for payer anonymity). In the off-line case, however, a new problem arises. If a merchant tries to deposit a previously spent coin, he will be turned down by the Bank, but neither he nor the Bank will know who the multiple spender was, since she was anonymous. Thus it is necessary for the Bank to be able to identify a multiple spender. This feature, however, should preserve anonymity for law-abiding users.
The solution is for the payment step to require the payer to have, in addition to her electronic coin, some sort of identifying information which she is to share with the payee. This information is split in such a way that any one piece reveals nothing about Alice's identity, but any two pieces are sufficient to fully identify her.
This information is created during the withdrawal step. The withdrawal protocol includes a step in which the Bank verifies that the information is there and corresponds to Alice and to the particular coin being created. (To preserve payer anonymity, the Bank will not actually see the information, only verify that it is there.) Alice carries the information along with the coin until she spends it.
At the payment step, Alice must reveal one piece of this information to Bob. (Thus only Alice can spend the coin, since only she knows the information.) This revealing is done using a challenge-response protocol. In such a protocol, Bob sends Alice a random "challenge" quantity and, in response, Alice returns a piece of identifying information. (The challenge quantity determines which piece she sends.) At the deposit step, the revealed piece is sent to the Bank along with the coin. If all goes as it should, the identifying information will never point to Alice. However, should she spend the coin twice, the Bank will eventually obtain two copies of the same coin, each with a piece of identifying information. Because of the randomness in the challenge-response protocol, these two pieces will be different. Thus the Bank will be able to identify her as the multiple spender. Since only she can dispense identifying information, we know that her coin was not copied and re-spent by someone else.
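One simple way to realize the "one piece reveals nothing, two pieces identify her" property is to hide the identity as the intercept of a random line modulo a prime, so that each challenge-response reveals one point on the line. This is only an illustrative sketch (the published schemes use more elaborate constructions); the modulus, identity encoding, and function names are assumptions:

```python
import random

P = 2**31 - 1          # a prime modulus (demo size only)
identity = 123456789   # Alice's identifying number (assumed encoding)
slope = random.randrange(1, P)   # fresh random slope for each coin

def respond(challenge):
    """Alice's response: the point on the line at x = challenge."""
    return (challenge, (slope * challenge + identity) % P)

def recover(pt1, pt2):
    """Bank combines two distinct responses to recover the intercept."""
    (x1, y1), (x2, y2) = pt1, pt2
    m = (y1 - y2) * pow(x1 - x2, -1, P) % P
    return (y1 - m * x1) % P

a = respond(42)        # first spending
b = respond(7777)      # second (double) spending
print(recover(a, b))   # 123456789 -- the multiple spender is identified
```

A single point is consistent with any intercept (one value for each possible slope), which is why one honest spending reveals nothing.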
________________________________________
PROTOCOL 5: Off-line cash.
Withdrawal:
  Alice creates an electronic coin, including identifying information.
  Alice blinds the coin.
  Alice sends the blinded coin to the Bank with a withdrawal request.
  Bank verifies that the identifying information is present.
  Bank digitally signs the blinded coin.
  Bank sends the signed blinded coin to Alice and debits her account.
  Alice unblinds the signed coin.
Payment:
  Alice gives Bob the coin.
  Bob verifies the Bank's digital signature.
  Bob sends Alice a challenge.
  Alice sends Bob a response (revealing one piece of identifying info).
  Bob verifies the response.
  Bob gives Alice the merchandise.
Deposit:
  Bob sends coin, challenge, and response to the Bank.
  Bank verifies the Bank's digital signature.
  Bank verifies that coin has not already been spent.
  Bank enters coin, challenge, and response in spent-coin database.
  Bank credits Bob's account.
Note that, in this protocol, Bob must verify the Bank's signature before giving Alice the merchandise. In this way, Bob can be sure that either he will be paid or he will learn Alice's identity as a multiple spender.
________________________________________
3. PROPOSED OFF-LINE IMPLEMENTATIONS
Having described electronic cash in a high-level way, we now wish to describe the specific implementations that have been proposed in the literature. Such implementations are for the off-line case; the on-line protocols are just simplifications of them. The first step is to discuss the various implementations of the public-key cryptographic tools we have described earlier.
________________________________________
3.1 Including Identifying Information
We must first be more specific about how to include (and access when necessary) the identifying information meant to catch multiple spenders. There are two ways of doing it: the cut-and-choose method and zero-knowledge proofs.
Cut and Choose. When Alice wishes to make a withdrawal, she first constructs and blinds a message consisting of K pairs of numbers, where K is large enough that an event with probability 2^-K will never happen in practice. These numbers have the property that one can identify Alice given both pieces of a pair, but unmatched pieces are useless. She then obtains the Bank's signature on this blinded message. (This is done in such a way that the Bank can check that the K pairs of numbers are present and have the required properties, despite the blinding.)
When Alice spends her coin with Bob, his challenge to her is a string of K random bits. For each bit, Alice sends the appropriate piece of the corresponding pair. For example, if the bit string starts 0110..., then Alice sends the first piece of the first pair, the second piece of the second pair, the second piece of the third pair, the first piece of the fourth pair, etc. When Bob deposits the coin at the Bank, he sends on these K pieces.
If Alice re-spends her coin, she is challenged a second time. Since each challenge is a random bit string, the new challenge is bound to disagree with the old one in at least one bit. Thus Alice will have to reveal the other piece of the corresponding pair. When the Bank receives the coin a second time, it takes the two pieces and combines them to reveal Alice's identity.
Although conceptually simple, this scheme is not very efficient, since each coin must be accompanied by 2K large numbers.
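The pairing idea can be sketched with XOR splitting: each pair is (pad, pad XOR identity), so either piece alone looks uniformly random, but the two pieces of any one pair combine to the identity. This is a simplification of the actual cut-and-choose constructions; the names and sizes are illustrative:

```python
import secrets

K = 20
identity = 0xA11CE                 # Alice's identity as an integer (illustrative)

# K pairs: (pad, pad XOR identity).
pairs = [(pad, pad ^ identity)
         for pad in (secrets.randbits(64) for _ in range(K))]

def respond(challenge_bits):
    """Reveal one piece of each pair, selected by the K challenge bits."""
    return [pairs[i][bit] for i, bit in enumerate(challenge_bits)]

c1 = [secrets.randbits(1) for _ in range(K)]
c2 = [secrets.randbits(1) for _ in range(K)]
while c2 == c1:                    # probability 2**-K; retry for the demo
    c2 = [secrets.randbits(1) for _ in range(K)]
r1, r2 = respond(c1), respond(c2)

# The two challenges differ in some bit; that pair reveals the identity.
i = next(j for j in range(K) if c1[j] != c2[j])
print(hex(r1[i] ^ r2[i]))          # 0xa11ce
```

In the real schemes the "pieces" are large numbers bound to the coin by the Bank's signature, but the identification logic is exactly this.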
Zero-Knowledge Proofs. The term zero-knowledge proof refers to any protocol in public-key cryptography that proves knowledge of some quantity without revealing it (or making it any easier to find). In this case, Alice creates a key pair such that the secret key points to her identity. (This is done in such a way that the Bank can check via the public key that the secret key in fact reveals her identity, despite the blinding.) In the payment protocol, she gives Bob the public key as part of the electronic coin. She then proves to Bob via a zero-knowledge proof that she possesses the corresponding secret key. If she responds to two distinct challenges, the identifying information can be put together to reveal the secret key and so her identity.
________________________________________
3.2 Authentication and Signature Techniques
Our next step is to describe the digital signatures that have been used in the implementations of the above protocols, and the techniques that have been used to include identifying information.
There are two kinds of digital signatures, and both kinds appear in electronic cash protocols. Suppose the signer has a key pair and a message M to be signed.
• Digital Signature with Message Recovery. For this kind of signature, we have a signing function S_SK using the secret key SK, and a verifying function V_PK using the public key PK. These functions are inverses, so that
(*) V_PK(S_SK(M)) = M.
The function V_PK is easy to implement, while S_SK is easy if one knows SK and difficult otherwise. Thus S_SK is said to have a trapdoor, or secret quantity that makes it possible to perform a cryptographic computation which is otherwise infeasible. The function V_PK is called a trapdoor one-way function, since it is a one-way function to anyone who does not know the trapdoor.
In this kind of scheme, the verifier receives the signed message S_SK(M) but not the original message text. The verifier then applies the verification function V_PK. This step both verifies the identity of the signer and, by (*), recovers the message text.
• Digital Signature with Appendix. In this kind of signature, the signer performs an operation on the message using his own secret key. The result is taken to be the signature of the message; it is sent along as an appendix to the message text. The verifier checks an equation involving the message, the appendix, and the signer's public key. If the equation checks, the verifier knows that the signer's secret key was used in generating the signature.
We now give specific algorithms.
RSA Signatures. The most well-known signature with message recovery is the RSA signature. Let N be a hard-to-factor integer. The secret signature key s and the public verification key v are exponents with the property that
M^(sv) = M (mod N)
for all messages M. Given v, it is easy to find s if one knows the factors of N but difficult otherwise. Thus the "v-th power (mod N)" map is a trapdoor one-way function. The signature of M is
C := M^s (mod N);
to recover the message (and verify the signature), one computes
M := C^v (mod N).
Blind RSA Signatures. The above scheme is easily blinded. Suppose that Alice wants the Bank to produce a blind signature of the message M. She generates a random number r and sends
r^v · M (mod N)
to the Bank to sign. The Bank does so, returning
r · M^s (mod N).
Alice then divides this result by r. The result is M^s (mod N), the Bank's signature of M, even though the Bank has never seen M.
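The blind RSA computation can be checked numerically. The sketch below uses the classic toy parameters p = 61, q = 53 (far too small for any real security):

```python
import math
import random

p, q = 61, 53
N = p * q                            # 3233
v = 17                               # public exponent
s = pow(v, -1, (p - 1) * (q - 1))    # secret exponent

M = 1234                             # the coin Alice wants signed

# Alice blinds: pick random r coprime to N, send r^v * M (mod N).
r = random.randrange(2, N)
while math.gcd(r, N) != 1:
    r = random.randrange(2, N)
blinded = (pow(r, v, N) * M) % N

# Bank signs the blinded message without ever seeing M.
signed_blinded = pow(blinded, s, N)  # equals r * M^s (mod N)

# Alice divides out r (multiplies by r^-1 mod N) to unblind.
signature = (signed_blinded * pow(r, -1, N)) % N

print(signature == pow(M, s, N))     # True: a valid signature on M
```

The identity behind the unblinding step is (r^v · M)^s = r^(vs) · M^s = r · M^s (mod N), since r^(vs) = r.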
The Schnorr Algorithms. The Schnorr family of algorithms includes an identification procedure and a signature with appendix. These algorithms are based on a zero-knowledge proof of possession of a secret key. Let p and q be large prime numbers with q dividing p - 1. Let g be a generator; that is, an integer between 1 and p such that
g^q = 1 (mod p).
If s is an integer (mod q), then the modular exponentiation operation on s is
phi : s -> g^s (mod p).
The inverse operation is called the discrete logarithm function and is denoted
log_g : t -> log_g t.
If p and q are properly chosen, then modular exponentiation is a one-way function. That is, it is computationally infeasible to find a discrete logarithm.
Now suppose we have a line
(**) y = mx + b
over the field of integers (mod q). A line can be described by giving its slope m and intercept b, but we will "hide" it as follows. Let
c = g^b (mod p),
n = g^m (mod p).
Then c and n give us the "shadow" of the line under phi. Knowing c and n doesn't give us the slope or intercept of the line, but it does enable us to determine whether a given point (x, y) is on the line. For if (x, y) satisfies (**), then it must also satisfy the relation
(***) g^y = n^x · c (mod p).
(Conversely, any point (x, y) satisfying (***) must be on the line.) The relationship (***) can be checked by anyone, since it involves only public quantities. Thus anyone can check whether a given point is on the line, but points on the line can only be generated by someone who knows the secret information.
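The shadow relation (***) can be verified with small numbers. The parameters below (p = 23, q = 11, g = 4) are chosen only so the arithmetic is easy to follow by hand:

```python
p, q = 23, 11          # q divides p - 1 (22 = 2 * 11)
g = 4                  # 4 has order 11 mod 23, i.e. 4^11 = 1 (mod 23)
m, b = 7, 3            # secret slope and intercept (mod q)

c = pow(g, b, p)       # shadow of the intercept
n = pow(g, m, p)       # shadow of the slope

x = 5
y = (m * x + b) % q    # the point (x, y) on the line y = mx + b (mod q)

# Relation (***): g^y = n^x * c (mod p), checkable from public data alone.
assert pow(g, y, p) == (pow(n, x, p) * c) % p
print("point verified via the shadow")
```

Note that y is reduced mod q, which is harmless because g has order q mod p.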
The basic Schnorr protocol is a zero-knowledge proof that one possesses a given secret quantity m. Let n be the corresponding public quantity. Suppose one user (the "prover") wants to convince another (the "verifier") that she knows m without revealing it. She does this by constructing a line (**) and sending its shadow to the verifier. The slope of the line is taken to be the secret quantity m, and the prover chooses the intercept at random, differently for each execution of the protocol. The protocol then proceeds as follows.
Schnorr proof of possession:
  1. Alice sends c (and n if necessary) to Bob.
  2. Bob sends Alice a "challenge" value of x.
  3. Alice responds with the value of y such that (x, y) is on the line.
  4. Bob verifies via (***) that (x, y) is on the line.
Bob now knows that he is speaking with someone who can generate points on the line. Thus this party must know the slope of the line, which is the secret quantity m.
An important feature of this protocol is that it can be performed only once per line. For if he knows any two points (x0, y0) and (x1, y1) on the line, the verifier can compute the slope of the line using the familiar "rise over run" formula
m = (y0 - y1) / (x0 - x1) (mod q),
and this slope is the secret quantity m. That is why a new intercept must be generated each time. We call this the two-points-on-a-line principle. This feature will be useful for electronic cash protocols, since we want to define a spending procedure which reveals nothing of a secret key if used once per coin, but reveals the key if a coin is spent twice.
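The two-points-on-a-line principle is easy to demonstrate: two challenge-response runs on the same line let the verifier compute the slope. A minimal sketch with a toy modulus:

```python
q = 11                                  # small demo modulus (prime)
m, b = 7, 3                             # prover's secret slope and intercept

def prove(x):
    """Prover's response to challenge x: the point (x, y) on the line."""
    return (x, (m * x + b) % q)

(x0, y0) = prove(2)                     # first challenge
(x1, y1) = prove(9)                     # second challenge on the SAME line
recovered = (y0 - y1) * pow(x0 - x1, -1, q) % q
print(recovered)                        # 7 -- the secret slope m
```

With a fresh random intercept per execution, a single run reveals nothing; only reuse of a line leaks the secret.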
Schnorr Identification. The above protocol can be used for identification of users in a network. Each user is issued a key pair, and each public key is advertised as belonging to a given user. To identify herself, a user need only prove that she knows her secret key. This can be done using the above zero-knowledge proof, since her public key is linked with her identity.
Schnorr Signature. It is easy to convert the Schnorr identification protocol into a digital signature scheme. Rather than receiving a challenge from an on-line verifier, the signer simply takes x to be a secure hash of the message and of the shadow of the line. This proves knowledge of his secret key in a way that links his key pair to the message.
Blind Schnorr Signature. Suppose that Alice wants to obtain a blind Schnorr signature for her coin, which she will spend with Bob. Alice generates random quantities (mod q) which describe a change of variables. This change of variables replaces the Bank's hidden line with another line, and the point on the Bank's line with a point on the new line. When Bob verifies the Bank's signature, he is checking the new point on the new line. The two lines have the same slope, so the Bank's signature remains valid. When the Bank receives the coin for deposit, it will see the protocol implemented on the new line, but it will not be able to link the coin with Alice's withdrawal, since only Alice knows the change of variables relating the two lines.
Chaum-Pedersen Signature. A variant of Schnorr's signature scheme given in [6] is used in electronic cash protocols. This modified scheme is a kind of "double Schnorr" scheme: it involves a single line and point but uses two shadows. This signature scheme can be blinded in a way similar to the ordinary Schnorr signature.
Implementations of the Schnorr Protocols. We have described the Schnorr algorithms in terms of integers modulo a prime p. The protocols, however, work in any setting in which the analogue of the discrete logarithm problem is difficult. An important example is that of elliptic curves (see [10]). Elliptic curve based protocols are much faster, and require the transmission of far less data, than non-elliptic protocols giving the same level of security.
________________________________________
3.3 Summary of Proposed Implementations
We can now present summaries of the main off-line cash schemes from the academic literature. There are three: those of Chaum-Fiat-Naor [4], Brands [1], and Ferguson [9].
Chaum-Fiat-Naor. This was the first electronic cash scheme, and it is the simplest conceptually. The Bank creates an electronic coin by performing a blind RSA signature on Alice's withdrawal request, after having verified interactively that Alice has included her identifying information on the coin. The prevention of multiple spending is accomplished by the cut-and-choose method. For this reason, the scheme is relatively inefficient.
Brands. Brands' scheme is Schnorr-based.8 Indeed, a Schnorr protocol is used twice: at withdrawal, the Bank performs a blind Chaum-Pedersen signature, and then Alice performs a Schnorr possession proof as the challenge-and-response part of the spending protocol.
The withdrawal step produces a coin which contains the Bank's signature, authenticating both Alice's identifying information and the shadow of the line to be used for the possession proof. This commits Alice to using that particular line in the spending step. If she re-spends the coin, she must use the same line twice, enabling the Bank to identify her.
The Brands scheme is considered by many to be the best of the three, for two reasons. First, it avoids the awkward cut-and-choose technique. Second, it is based only on the Schnorr protocols, and so it can be implemented in various settings such as elliptic curves.
Ferguson. Ferguson's scheme is RSA-based like Chaum-Fiat-Naor, but it uses the "two-points-on-a-line" principle like Brands. The signature it uses is not the blind RSA signature described above, but a variant called a randomized blind RSA signature. The ordinary blind RSA scheme has the drawback that the Bank has absolutely no idea what it is signing. As mentioned above, this is not a problem in the cut-and-choose case, but here it can allow a payer to defeat the mechanism for identifying multiple spenders. The randomized version avoids this problem by having both Alice and the Bank contribute random data to the message. The Bank still doesn't know what it is signing, but it knows that the data was not chosen maliciously.
The rest of the protocol is conceptually similar to Brands' scheme. The message to be signed by the Bank contains, in addition to the random data, the shadow of a line whose slope and intercept reveal Alice's identity. During payment, Alice reveals a point on this line; if she does so twice, the Bank can identify her.
Although Ferguson's scheme avoids the cut-and-choose technique, it is the most complicated of the three (due largely to the randomized blind RSA signature). Moreover, it cannot be implemented over elliptic curves since it is RSA-based.
__________
8 For ease of exposition, we give a simplified account of Brands' protocol.
________________________________________
4. OPTIONAL FEATURES OF OFF-LINE CASH
Much of the recent literature on off-line cash has focused on adding features to make it more convenient to use. In this chapter we will discuss two of these features.
________________________________________
4.1 Transferability
Transferability is a feature of paper cash that allows a user to spend a coin that he has just received in a payment, without having to contact the Bank in between. We refer to a payment as a transfer if the payee can use the received coin in a subsequent payment. A payment system is transferable if it allows at least one transfer per coin. Figure 2 shows a maximum-length path of a coin in a system which allows two transfers. The final payment is not considered a transfer because it must be deposited by the payee. Transferability would be a convenient feature for an off-line cash system because it requires less interaction with the Bank. (A transferable electronic cash system is off-line by definition, since on-line systems require communication with the Bank during each payment.)
________________________________________
Figure 2. A maximum-length path of a coin in a system which allows 2 transfers per coin.
________________________________________
Transferable systems have received little attention in the academic literature. The schemes presented in 3.3 are not transferable because the payee cannot use a received coin in another payment - his only options are to deposit it or to exchange it for new coins at the Bank. Any transferable electronic cash system has the property that the coin must "grow in size" (i.e., accumulate more bits) each time it is spent. This is because the coin must contain information about every person who has spent it, so that the Bank maintains the ability to identify multiple spenders. (See [5].) This growth makes it impossible to allow an unlimited number of transfers. The maximum number of transfers allowed in any given system will be limited by the allowable size of the coin.
There are other concerns with any transferable electronic cash system, even if the number of transfers per coin is limited and even if the anonymity property is removed. Until the coin is deposited, the only information available to the Bank is the identity of the individual who originally withdrew it. Any other transactions involving that withdrawal can only be reconstructed with the cooperation of each consecutive spender of the coin. This poses the same problems that paper cash poses for detecting money laundering and tax evasion: no records of the transactions are available.
In addition, each transfer delays detection of re-spent or forged coins. Multiple spending will not be noticed until two copies of the same coin are eventually deposited. By then it may be too late to catch the culprit, and many users may have accepted counterfeit coins. Therefore, detection of multiple spending after the fact may not provide a satisfactory solution for a transferable electronic cash system. A transferable system may need to rely on physical security to prevent multiple spending. (See 5.1.)
________________________________________
4.2 Divisibility
Suppose that Alice is enrolled in a non-transferable, off-line cash system, and she wants to purchase an item from Bob that costs, say, $4.99. If she happens to have electronic coins whose values add up to exactly $4.99, then she simply spends these coins. However, unless Alice has stored a large reserve of coins of each possible denomination, it is unlikely that she will have exact change for most purchases. She may not wish to keep such a large reserve of coins on hand for some of the same reasons that one doesn't carry around a large amount of cash: loss of interest and fear of the cash being stolen or lost. Another option is for Alice to withdraw a coin of the exact amount for each payment, but that requires interaction with the Bank, making the payment on-line from her point of view. A third option is for Bob to pay Alice the difference between her payment and the $4.99 purchase price. This puts the burden of having an exact payment on Bob, and also requires Alice to contact the Bank to deposit the "change."
A solution to Alice's dilemma is to use divisible coins: coins that can be "divided" into pieces whose total value is equal to the value of the original coin. This allows exact off-line payments to be made without the need to store a supply of coins of different denominations. Paper cash is obviously not divisible, but lack of divisibility is not as much of an inconvenience with paper cash because it is transferable. Coins received in one payment can be used again in the next payment, so the supply of different denominations is partially replenished with each transaction. (Imagine how quickly a cashier would run out of change if paper cash were not transferable and each payment were put in a separate bin set aside for the next bank deposit!)
Three divisible off-line cash schemes have been proposed, all achieving divisibility at the cost of a longer transaction time and additional storage. Eng and Okamoto's divisible scheme [7] is based on the "cut and choose" method. Okamoto's scheme [11] is much more efficient; it is based on Brands' scheme but will also work with Ferguson's scheme. The Okamoto-Ohta scheme [12] is the most efficient of the three, but also the most complicated. It relies on both the difficulty of factoring and the difficulty of computing discrete logarithms.
________________________________________
Figure 3. A binary tree for a divisible coin worth $4.00, with a minimum unit of $1.00. A $3.00 payment can be made by spending the shaded nodes. Node 1 cannot be used in a subsequent payment because it is an ancestor of nodes 2 and 6. Nodes 4 and 5 cannot be used because they are descendants of node 2. Node 3 cannot be used because it is an ancestor of node 6. Nodes 2 and 6 cannot be used more than once, so node 7 is the only node which can be spent in a subsequent payment.
________________________________________
All three of these schemes work by associating a binary tree with each coin of value $w. (See Figure 3.) Each node is assigned a monetary value as follows: the unique root node (the node at level 0) has value $w, the two nodes at level 1 each have value $w/2, the four nodes at level 2 each have value $w/4, etc. Therefore, if w = 2^l, then the tree has l + 1 levels, and the nodes at level j each have value $w/2^j. The leaves of the tree are the nodes at level l, and have the minimum unit of value.
To spend the entire amount of value $w, the root node is used. Amounts less than $w can be spent by spending a set of nodes whose values add up to the desired amount.
Initially, any whole dollar amount up to $w can be spent. Subsequent payments are made according to the following rules:
  1. Once a node is used, all its descendant and ancestor9 nodes cannot be used.
  2. No node can be used more than once.
These two rules ensure that no more than one node is used on any path from the root to a leaf. If these two rules are observed, then it will be impossible to spend more than the original value of the coin. If either of these rules is broken, then two nodes on the same path are used, and the information in the two corresponding payments can be combined to reveal the identity of the individual who over-spent, in the same way that the identity of a multiple spender is revealed.
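Rules 1 and 2 can be sketched as a small validity check, assuming the tree's nodes are numbered heap-style (root = 1, children of node i are 2i and 2i+1, which matches the numbering in Figure 3):

```python
def ancestors(i):
    """Yield the ancestors of node i up to the root (heap numbering)."""
    while i > 1:
        i //= 2
        yield i

def may_spend(node, used):
    """Rules 1 and 2: node unused, no ancestor used, no descendant used."""
    if node in used:                              # Rule 2
        return False
    if any(a in used for a in ancestors(node)):   # Rule 1 (ancestor used)
        return False
    return not any(node in ancestors(u) for u in used)  # Rule 1 (descendant used)

used = {2, 6}                  # the $3.00 payment from Figure 3
assert not may_spend(1, used)  # ancestor of nodes 2 and 6
assert not may_spend(4, used)  # descendant of node 2
assert not may_spend(3, used)  # ancestor of node 6
assert may_spend(7, used)      # the only spendable node left
print("rules check out")
```

The check reproduces exactly the situation described in the Figure 3 caption.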
More specifically, in the Eng/Okamoto and Okamoto schemes, each user has a secret value, s, which is linked to their identity (uncovering s will uncover their identity, but not vice versa). Each node i is assigned a secret value, t_i. Hence, each node i corresponds to a line
y = sx + t_i.
When a payment is made using a particular node n, t_i will be revealed for all nodes i that are ancestors of node n. Then the payee sends a challenge x1 and the payer responds with
y1 = s·x1 + t_n.
This reveals a point (x1, y1) on the line y = sx + t_n, but does not reveal the line itself. If the same node is spent twice, then responses to two independent challenges, x1 and x2, will reveal two points on the same line: (x1, y1) and (x2, y2). Then the secret value s can be recovered using the two-points-on-a-line principle described in 3.2.
If someone tries to overspend a coin, then two nodes in the same path will be used. Suppose that nodes n and m are on the same path, with node n farther from the root. Spending node n will reveal t_m, since node m is an ancestor of node n. Now if node m is also spent, then the response to a challenge x1 will be y1 = s·x1 + t_m. But t_m was revealed when node n was spent, so s·x1, and hence s, will be revealed. Therefore, spending two nodes in the same path will reveal the identity of the over-spender.
The Okamoto/Ohta divisible scheme also uses a binary tree with the same rules for using nodes to prevent multiple and over-spending, but when nodes are used improperly, a different technique is used to determine the identity of the spender. Instead of hiding the user's identifying secret in a line for which a point is revealed when a coin is spent, the user's identifying secret is hidden in the factorization of an RSA modulus. Spending the same node twice, or spending two nodes on the same path, will provide enough information for the Bank to factor the modulus (which is part of the coin) and then compute the user's secret identifying information.
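The over-spending catch just described can be sketched numerically; the modulus, node secrets, and variable names below are illustrative assumptions, not the schemes' actual parameters:

```python
q = 2**31 - 1                 # prime modulus (demo size)
s = 987654321                 # user's identifying secret
t = {1: 22222, 2: 11111}      # node secrets t_i (node 1 is the ancestor of node 2)

# Spending node 2 reveals t_1, the secret of its ancestor node 1.
revealed_t1 = t[1]

# Overspend: the user later spends ancestor node 1 and answers a challenge.
x1 = 31337
y1 = (s * x1 + t[1]) % q

# The Bank already knows t_1, so a single response suffices to solve for s.
recovered = (y1 - revealed_t1) * pow(x1, -1, q) % q
assert recovered == s
print("over-spender identified")
```

Spending the same node twice instead would leak s via two points on one line, as in 3.2.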
Although these three divisible schemes are untraceable, payments made from the same initial coin may be "linked" to each other, meaning that it is possible to tell if two payments came from the same coin and hence the same person. This does not reveal the payer's identity if both payments are valid (follow Rules 1 and 2, above), but revealing the payer's identity for one purchase would reveal that payer's identity for all other purchases made from the same initial coin.
These are three examples of off-line cash schemes that have divisible coins. Although providing divisibility complicates the protocol, it can be accomplished without forfeiting untraceability or the ability to detect improper spenders. The most efficient divisible scheme has a transaction time and required memory per coin proportional to the logarithm of N, where N is the total coin value divided by the value of the minimum divisible unit. More improvements in the efficiency of divisible schemes are expected, since the most recent improvement was presented in 1995.
__________
9 A descendant of node n is a node on the path from node n to a leaf. An ancestor of node n is a node on the path from node n to the root node.
5. SECURITY ISSUES
In this section we discuss some issues concerning the security of electronic cash. First, we discuss ways to help prevent multiple spending in off-line systems, and we describe the concept of wallet observers. We also discuss the consequences of an unexpected failure in the system's security. Finally, we describe a solution to some of the law enforcement problems that are created by anonymity.
5.1 Multiple Spending Prevention
In 1.3, we explained that multiple spending can be prevented in on-line payments by maintaining a database of spent electronic coins, but there is no cryptographic method for preventing an off-line coin from being spent more than once. Instead, off-line multiple spending is detected when the coin is deposited and compared to a database of spent coins. Even in anonymous, untraceable payment schemes, the identity of the multiple-spender can be revealed when the abuse is detected. Detection after the fact may be enough to discourage multiple spending in most cases, but it will not solve the problem. If someone were able to obtain an account under a false identity, or were willing to disappear after re-spending a large sum of money, they could successfully cheat the system.
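The deposit-time check can be sketched as a simple lookup against a spent-coin database (coin serials and transcripts here are placeholders, not a real coin format):

```python
# Minimal sketch of after-the-fact double-spend detection at deposit time.
spent_coins = {}   # serial number -> first deposit transcript

def deposit(serial, transcript):
    """Accept a coin unless its serial number has been seen before.
    On a repeat, both transcripts are available as evidence against the cheater."""
    if serial in spent_coins:
        return ("REJECT", spent_coins[serial], transcript)  # evidence pair
    spent_coins[serial] = transcript
    return ("ACCEPT", None, None)

print(deposit("coin-42", "payment to merchant A"))   # first deposit: accepted
print(deposit("coin-42", "payment to merchant B"))   # repeat: flagged
```

In an off-line scheme the two transcripts contain the two challenge/response pairs that, combined, reveal the spender's identity.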
One way to minimize the problem of multiple spending in an off-line system is to set an upper limit on the value of each payment. This would limit the financial losses to a given merchant due to accepting coins that have been previously deposited. However, this will not prevent someone from spending the same small coin many times in different places.
In order to prevent multiple spending in off-line payments, we need to rely on physical security. A "tamper-proof" card could prevent multiple spending by removing or disabling a coin once it is spent. Unfortunately, there is no such thing as a truly "tamper-proof" card. Instead, we will refer to a "tamper-resistant" card, which is physically constructed so that it is very difficult to modify its contents. This could be in the form of a smart card, a PC card10, or any storage device containing a tamper-resistant computer chip. This will prevent abuse in most cases, since the typical criminal will not have the resources to modify the card. Even with a tamper-resistant card, it is still essential to provide cryptographic security to prevent counterfeiting and to detect and identify multiple spenders in case the tamper-protection is somehow defeated. Also, setting limits on the value of off-line payments would reduce the cost-effectiveness of tampering with the card.
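As a minimal sketch of the "remove or disable a coin once it is spent" behavior (the class and its interface are hypothetical; a real card enforces this in tamper-resistant hardware, not software):

```python
# Sketch of the on-card rule a tamper-resistant device enforces:
# a coin is atomically handed over and erased, so the honest interface
# simply cannot produce it a second time.
class TamperResistantCard:
    def __init__(self, coins):
        self._coins = dict(coins)        # coin id -> coin data, hidden on-card
    def spend(self, coin_id):
        # pop() both returns the coin and deletes it from storage.
        coin = self._coins.pop(coin_id, None)
        if coin is None:
            raise ValueError("coin unavailable (already spent or unknown)")
        return coin

card = TamperResistantCard({"c1": {"value": 5}})
card.spend("c1")                         # succeeds once
try:
    card.spend("c1")                     # the card refuses a second spend
except ValueError as err:
    print(err)
```

The cryptographic identification of multiple spenders remains necessary as a backstop for the case where this physical barrier is defeated.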
Tamper-resistant cards can also provide personal security and privacy to the cardholder by making it difficult for adversaries to read or modify the information stored on the card (such as secret keys, algorithms, or records).
__________
10 Formerly PCMCIA, or Personal Computer Memory Card International Association.
5.2 Wallet Observers
All of the basic off-line cash schemes presented in 3.3 can cryptographically detect the identity of multiple spenders, but the only way to prevent off-line multiple spending is to use a tamper-resistant device such as a smart card. One drawback of this approach is that the user must put a great deal of trust in this device, since the user loses the ability to monitor information entering or leaving the card. It is conceivable that the tamper-resistant device could leak private information about the user without the user's knowledge.
Chaum and Pedersen [6] proposed the idea of embedding a tamper-resistant device into a user-controlled outer module in order to achieve the security benefits of a tamper-resistant device without requiring the user to trust the device. They call this combination an electronic wallet (see Figure 4). The outer module (such as a small hand-held computer or the user's PC) is accessible to the user. The inner module, which cannot be read or modified, is called the "observer." All information which enters or leaves the observer must pass through the outer module, allowing the user to monitor information that enters or leaves the card. However, the outer module cannot complete a transaction without the cooperation of the observer. This gives the observer the power to prevent the user from making transactions that it does not approve of, such as spending the same coin more than once.
Figure 4. An electronic wallet.
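The division of roles just described can be modeled as a toy sketch (all class and method names are invented, and the cryptography is elided): the observer refuses to release its part of a payment response for an already-spent coin, while the outer module logs every message that crosses the boundary.

```python
# Sketch of the wallet/observer split: the outer module relays and inspects
# all traffic, but cannot complete a payment without the observer's share.
class Observer:
    def __init__(self):
        self._spent = set()
    def approve_payment(self, coin_id, challenge):
        if coin_id in self._spent:
            raise PermissionError("observer refuses: coin already spent")
        self._spent.add(coin_id)
        return f"observer-share({coin_id},{challenge})"  # its part of the response

class Wallet:
    """User-controlled outer module; everything to/from the observer passes here."""
    def __init__(self, observer):
        self.observer = observer
        self.log = []   # the user can inspect all observer traffic
    def pay(self, coin_id, challenge):
        share = self.observer.approve_payment(coin_id, challenge)
        self.log.append((coin_id, challenge, share))
        return f"response({share} + user-share)"

wallet = Wallet(Observer())
wallet.pay("coin-7", 1234)          # first spend succeeds
try:
    wallet.pay("coin-7", 5678)      # second spend is blocked by the observer
except PermissionError as e:
    print(e)
```

Because the observer only ever emits messages through the wallet, the user can verify that nothing beyond the protocol's prescribed values leaves the device.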
Brands [1] and Ferguson [8] have both shown how to incorporate observers into their respective electronic cash schemes to prevent multiple spending. Brands' scheme incorporates observers in a much simpler and more efficient manner. In Brands' basic scheme, the user's secret key is incorporated into each of his coins. When a coin is spent, the spender uses his secret to create a valid response to a challenge from the payee. The payee will verify the response before accepting the payment. In Brands' scheme with wallet observers, this user secret is shared between the user and his observer. The combined secret is a modular sum of the two shares, so one share of the secret reveals no information about the combined secret. Cooperation of the user and the observer is necessary in order to create a valid response to a challenge during a payment transaction. This is accomplished without either the user or the observer revealing any information about its share of the secret to the other. It also prevents the observer from controlling the response; hence the observer cannot leak any information about the spender.
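A toy sketch of the additive secret sharing and joint response follows. The modulus q and the Schnorr-style response r = w + c·s are stand-ins for Brands' actual protocol, which has considerably more structure; the point is only that neither party's share ever leaves its side.

```python
# Additive sharing of the spending secret between user and observer:
# each computes its part of the response locally, and the sum is exactly
# what a single holder of the combined secret would have sent.
import secrets

q = (1 << 127) - 1   # stands in for the group order

# The secret is split additively: one share reveals nothing about s.
s_user = secrets.randbelow(q)
s_obs  = secrets.randbelow(q)
s      = (s_user + s_obs) % q

# Per-payment nonces are split the same way.
w_user = secrets.randbelow(q)
w_obs  = secrets.randbelow(q)

c = secrets.randbelow(q)   # payee's challenge

# Each party answers using only its own shares...
r_user = (w_user + c * s_user) % q
r_obs  = (w_obs  + c * s_obs)  % q

# ...and the combined response matches the single-secret computation.
r = (r_user + r_obs) % q
assert r == ((w_user + w_obs) + c * s) % q
```

Since r_obs is a fixed function of the challenge and the observer's own shares, the observer has no freedom to embed extra information in the response, which is why it cannot leak data about the spender.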
An observer could also be used to trace the user's transactions at a later time, since it can keep a record of all transactions in which it participates. However, this requires that the Bank (or whoever is doing the tracing) be able to obtain the observer and analyze it. Also, not all types of observers can be used to trace transactions. Brands and Ferguson both claim that they can incorporate observers into their schemes and still retain untraceability of the users' transactions, even if the observer used in the transactions has been obtained and can be analyzed.
5.3 Security Failures
Types of failures.
In any cryptographic system, there is some risk of a security failure. A security failure in an electronic cash system would result in the ability to forge or duplicate money. There are a number of different ways in which an electronic cash system could fail.
One of the most serious types of failure would be that the cryptography (the protocol or the underlying mathematics) does not provide the intended security.11 This could enable someone to create valid-looking coins without knowledge of an authorized bank's secret key, or to obtain valid secret keys without physical access to them. Anyone who is aware of the weakness could create coins that appear to come from a legitimate bank in the system.
Another serious type of failure could occur in a specific implementation of the system. For example, if the bank's random number generator is not a good one, one may be able to guess the secret random number and use it to compute the secret keys that are used to create electronic money.
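To illustrate this failure mode (not any real bank's key generation), suppose key material were derived from a deterministic generator seeded with a guessable value such as a timestamp; an attacker could then simply enumerate candidate seeds:

```python
# Illustration of the weak-RNG failure: a deterministic generator with a
# guessable seed collapses the key space to the seed space.
import random

def derive_key(seed):
    rng = random.Random(seed)          # deterministic given the seed
    return rng.getrandbits(128)        # stand-in for secret key generation

bank_seed = 1_700_000_123              # e.g. a Unix timestamp at key-generation time
bank_key = derive_key(bank_seed)

# An attacker who knows roughly when the key was made tries nearby timestamps:
recovered = None
for guess in range(1_700_000_000, 1_700_001_000):
    if derive_key(guess) == bank_key:
        recovered = guess
        break

assert derive_key(recovered) == bank_key   # ~1000 guesses sufficed
```

A 128-bit key generated this way offers only as much security as the number of plausible seeds, which is why implementations must use a cryptographically strong source of randomness.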
Even if the cryptography and the implementation are secure, the security could fail because of a physical compromise. If a computer hacker, thief, dishonest bank employee, or rogue state were to gain access to the bank's secret key, they could create counterfeit money. If they gained access to a user's secret key, they could spend that user's money. If they could modify the user's or bank's software, they could destroy the security of the system.
The above failure scenarios apply not only to the electronic cash system, but also to the underlying authentication infrastructure. Any form of electronic commerce depends heavily on the ability of users to trust the authentication mechanisms. So if, for example, an attacker could demonstrate a forgery of the certification authority's digital signature, it would undermine the users' trust in their ability to identify each other. Thus the certification authorities need to be secured as thoroughly as do the banks.
Consequences of a failure.
All three of the basic schemes described in this paper are anonymous, which makes it impossible for anyone to connect a deposited coin to the originating bank's withdrawal record of that coin. This property has serious consequences in the event of a security failure leading to token forgery. When a coin is submitted for deposit, it is impossible to determine if it is forged. Even the originating bank is unable to recognize its own coins, preventing detection of the compromise. It is conceivable that the compromise will not be detected until the bank realizes that the total value of deposits of its electronic cash exceeds the amount that it has created with a particular key. At this point the losses could be devastating.
After the key compromise is discovered, the bank will still be unable to distinguish valid coins from invalid ones since deposits and withdrawals cannot be linked. The bank would have to change its secret key and invalidate all coins which were signed with the compromised key. The bank can replace coins that have not yet been spent, but the validity of untraceable coins that have already been spent or deposited cannot be determined without cooperation of the payer. Payment untraceability prevents the Bank from determining the identity of the payer, and payer anonymity prevents even the payee from identifying the payer.
It is possible to minimize this damage by limiting the number of coins affected by a single compromise. This could be done by changing the Bank's public key at designated time intervals, or when the total value of coins issued by a single key exceeds a designated limit. However, this kind of compartmentation reduces the anonymity by shrinking the pool of withdrawals that could correspond to a particular deposit and vice versa.
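A minimal sketch of the key-rotation idea, with an invented issuance limit and key identifiers:

```python
# Retire a signing key once the value it has issued passes a limit, so that
# any single key compromise exposes at most ISSUE_LIMIT in coins.
from itertools import count

ISSUE_LIMIT = 1_000_000   # max total value signed with one key (illustrative)

class MintKeys:
    def __init__(self):
        self._ids = count(1)
        self.current_key = next(self._ids)
        self.issued = {self.current_key: 0}
    def sign_coin(self, value):
        if self.issued[self.current_key] + value > ISSUE_LIMIT:
            self.current_key = next(self._ids)   # rotate: start a new key/epoch
            self.issued[self.current_key] = 0
        self.issued[self.current_key] += value
        return (self.current_key, value)         # coin records its signing key

mint = MintKeys()
coins = [mint.sign_coin(300_000) for _ in range(5)]
# The first three coins are signed with key 1 (900,000 issued); the fourth
# would exceed the limit, so keys rotate and coins 4-5 carry key 2.
```

The trade-off stated above is visible here: every coin now names its key, so each key partitions the withdrawals into a smaller anonymity pool.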
__________
11 We are unaware of anything in the literature that would suggest this type of failure with the protocols discussed in this paper.
5.4 Restoring Traceability
The anonymity properties of electronic cash pose several law enforcement problems because they prevent withdrawals and deposits from being linked to each other. We explained in the previous section how this prevents detection of forged coins. Anonymity also makes it difficult to detect money laundering and tax evasion because there is no way to link the payer and payee. Finally, electronic cash paves the way for new versions of old crimes such as kidnapping and blackmail (see [13]) where money drops can now be carried out safely from the criminal's home computer.12
One way to minimize these concerns is to require large transactions or large numbers of transactions in a given time period to be traceable. This would make it more difficult to commit crimes involving large sums of cash. However, even a strict limit such as a maximum of $100 a day on withdrawals and deposits can add up quickly, especially if one can open several accounts, each with its own limit. Also, limiting the amount spent in a given time period would have to rely on a tamper-resistant device.
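The arithmetic behind this weakness is simple; assuming a hypothetical $100 daily limit per account:

```python
# Per-account daily limits add across accounts: someone who can open many
# accounts (e.g. under false identities) multiplies the effective limit.
DAILY_LIMIT = 100   # illustrative cap per account per day

class Account:
    def __init__(self):
        self.spent_today = 0
    def withdraw(self, amount):
        if self.spent_today + amount > DAILY_LIMIT:
            raise ValueError("daily limit exceeded")
        self.spent_today += amount
        return amount

# One account is capped at $100/day...
a = Account()
a.withdraw(100)

# ...but ten accounts move $1000/day under the same per-account rule.
accounts = [Account() for _ in range(10)]
total = sum(acct.withdraw(100) for acct in accounts)
assert total == 1000
```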
Another way to minimize these concerns is to provide a mechanism to restore traceability under certain conditions, such as a court order. Traceability can be separated into two types by its direction. Forward traceability is the ability to identify a deposit record (and hence the payee), given a withdrawal record (and hence the identity of the payer). In other words, if a search warrant is obtained for Alice, forward tracing will reveal where Alice has spent her cash. Backward traceability is the ability to identify a withdrawal record (and hence the payer), given a deposit record (and hence the identity of the payee). Backward tracing will reveal from whom Alice has been receiving payments.
A solution that conditionally restores both forward and backward traceability into the cut-and-choose scheme is presented by Stadler, Piveteau, and Camenisch in [14]. In the basic cut-and-choose scheme, an identifying number is associated with each withdrawal record and a different identifying number is associated with each deposit record, although there is no way to link these two records to each other. To provide a mechanism for restoring backward traceability, the withdrawal number (along with some other data which cannot be associated with the withdrawal) is encrypted with a commonly trusted entity's public key and incorporated into the coin itself. This encrypted withdrawal number is passed to the payee as part of the payment protocol, and then will be passed along to the bank when the coin is deposited by the payee. The payer performs the encryption during the withdrawal transaction, but the bank can ensure that the encryption was done properly. If the required conditions for tracing are met, the payment or deposit can be turned over to the trusted entity holding the secret key to decrypt the withdrawal number. This withdrawal number will allow the bank to access its withdrawal records, identifying the payer.
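A toy sketch of the backward-tracing escrow follows, using textbook RSA with small primes purely for illustration. Real systems would use proper key sizes and padding, and the bank's check that the encryption was performed correctly (a zero-knowledge proof in [14]) is omitted here.

```python
# Toy escrowed backward tracing: the withdrawal number travels inside the
# coin, encrypted under a trusted entity's public key; only that entity
# can decrypt it, e.g. under a court order.  Textbook RSA, tiny primes --
# never use parameters or (absent) padding like this in practice.

# Trusted entity's toy RSA key pair.
p_, q_ = 1_000_003, 1_000_033
n = p_ * q_
e = 65537
phi = (p_ - 1) * (q_ - 1)
d = pow(e, -1, phi)                    # trustee's private exponent

withdrawal_number = 123_456            # indexes the bank's withdrawal record

# Payer encrypts at withdrawal; the bank sees only the ciphertext.
escrow_field = pow(withdrawal_number, e, n)

# The ciphertext rides along through payment and deposit inside the coin...
coin = {"value": 100, "escrow": escrow_field}

# ...and only under the required conditions does the trustee decrypt:
traced = pow(coin["escrow"], d, n)
assert traced == withdrawal_number     # bank can now look up the payer
```

Forward tracing works symmetrically: the payer commits to an encrypted deposit number at withdrawal time, and the trustee can decrypt it from the withdrawal record.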
To provide a mechanism for restoring forward traceability, the payer must commit to a deposit number at the time that the coin is withdrawn. The payer encrypts this deposit number with a commonly trusted entity's public key (along with some other data that cannot be associated with the deposit) and must send this value to the bank as part of the withdrawal protocol. The bank is able to determine that the payer has not cheated, although it only sees the deposit number in encrypted form. If the required conditions for tracing are met, the withdrawal record can be turned over to the trusted entity holding the secret key to decrypt the deposit number. The bank can use this deposit number to identify the depositor (the payee).
Stadler et al. have shown that it is possible to provide a mechanism for restoring traceability in either or both directions. This can be used to provide users with anonymity, while solving many of the law enforcement problems that exist in a totally untraceable system. The ability to trace transactions in either direction can help law enforcement officials catch tax evaders and money launderers by revealing who has paid or has been paid by the suspected criminal. Electronic blackmailers can be caught because the deposit numbers of the victim's ill-gotten coins could be decrypted, identifying the blackmailer when the money is deposited.
The ability to restore traceability does not solve one very important law enforcement problem: detecting forged coins. Backward tracing will help identify a forged coin if a particular payment or deposit (or depositor) is under suspicion. In that case, backward tracing will reveal the withdrawal number, allowing the originating bank to locate its withdrawal record and verify the validity of the coin. However, if a forged coin makes its way into the system it may not be detected until the bank whose money is being counterfeited realizes that the total value of its electronic cash deposits using a particular key exceeds the values of its withdrawals. The only way to determine which deposits are genuine and which are forged would require obtaining permission to decrypt the withdrawal numbers for each and every deposit of electronic cash using the compromised key. This would violate the privacy that anonymous cash was designed to protect.
Unfortunately, the scheme of [14] is not efficient because it is based on the bulky cut-and-choose method. However, it may be possible to apply similar ideas to restore traceability in a more efficient electronic cash scheme.
__________
12 We will not focus on such crimes against individuals, concentrating instead on crimes against the Government, the banking system, and the national economy.
CONCLUSION
This report has described several innovative payment schemes which provide user anonymity and payment untraceability. These electronic cash schemes have cryptographic mechanisms in place to address the problems of multiple spending and token forgery. However, some serious concerns about the ability of an electronic cash system to recover from a security failure have been identified. Concerns about the impact of anonymity on money laundering and tax evasion have also been discussed.
Because it is simple to make an exact copy of an electronic coin, a secure electronic cash system must have a way to protect against multiple spending. If the system is implemented on-line, then multiple spending can be prevented by maintaining a database of spent coins and checking this list with each payment. If the system is implemented off-line, then there is no way to prevent multiple spending cryptographically, but it can be detected when the coins are deposited. Detection of multiple spending after the fact is only useful if the identity of the offender is revealed. Cryptographic solutions have been proposed that will reveal the identity of the multiple spender while preserving user anonymity otherwise.
Token forgery can be prevented in an electronic cash system as long as the cryptography is sound and securely implemented, the secret keys used to sign coins are not compromised, and integrity is maintained on the public keys. However, if there is a security flaw or a key compromise, the anonymity of electronic cash will delay detection of the problem. Even after the existence of a compromise is detected, the Bank will not be able to distinguish its own valid coins from forged ones. Since there is no way to guarantee that the Bank's secret keys will never be compromised, it is important to limit the damage that a compromise could inflict. This could be done by limiting the total value of coins issued with a particular key, but lowering these limits also reduces the anonymity of the system since there is a smaller pool of coins associated with each key.
The untraceability property of electronic cash creates problems in detecting money laundering and tax evasion because there is no way to link the payer and payee. To counter this problem, it is possible to design a system that has an option to restore traceability using an escrow mechanism. If certain conditions are met (such as a court order), a deposit or withdrawal record can be turned over to a commonly trusted entity who holds a key that can decrypt information connecting the deposit to a withdrawal or vice versa. This will identify the payer or payee in a particular transaction. However, this is not a solution to the token forgery problem because there may be no way to know which deposits are suspect. In that case, identifying forged coins would require turning over all of the Bank's deposit records to the trusted entity to have the withdrawal numbers decrypted.
We have also looked at two optional features of off-line electronic cash: transferability and divisibility. Because the size of an electronic coin must grow with each transfer, the number of transfers allowed per coin must be limited. Also, allowing transfers magnifies the problems of detecting counterfeit coins, money laundering, and tax evasion. Coins can be made divisible without losing any security or anonymity features, but at the expense of additional memory requirements and transaction time.
In conclusion, the potential risks in electronic commerce are magnified when anonymity is present. Anonymity creates the potential for large sums of counterfeit money to go undetected by preventing identification of forged coins. Anonymity also provides an avenue for laundering money and evading taxes that is difficult to combat without resorting to escrow mechanisms. Anonymity can be provided at varying levels, but increasing the level of anonymity also increases the potential damages. It is necessary to weigh the need for anonymity against these concerns. It may well be concluded that these problems are best avoided by using a secure electronic payment system that provides privacy, but not anonymity.