NCSA Firewall Policy Guide

1. INTRODUCTION

The Internet, the global network of computers that is the basis for
universal electronic mail, the World Wide Web, and numerous forms of
electronic commerce, has variously been described as bigger than the
personal computer, more significant than the printing press, and as
revolutionary as the discovery of fire. These days, the computer section of
every book store is crammed with Internet titles. Every new movie has a Web
site. Billboards and advertisements without URLs are becoming the exception.

Yet firewalls, which are designed to control the flow of information
between two networks, were being developed even before the world at large
had heard of "The Internet". Indeed, common sense says you should consider
using a firewall whenever you internetwork. This term refers to the process
of connecting two networks together. The result is referred to as an
"internet" without the capital 'I'. Typically, we reserve the term
"Internet" for the TCP/IP-based descendant of ARPAnet's marriage to CSnet in
1982, now serving tens of millions of users via hundreds of thousands of
host machines.

Internetworking. For computers to successfully communicate with each
other, they have to follow standards and observe rules or protocols. TCP/IP
stands for Transmission Control Protocol/Internet Protocol, the fundamental
protocol of the Internet. Although initial development of TCP/IP occurred
within a defense and government environment, it is important to note that it
was designed to be reliable, not secure. The intent was to develop a
protocol that is good at getting information to its destination, even if
different parts of the information have to travel different paths. However,
because this development took place within an environment of trust, with a
relatively small number of participants, many of whom were known to each
other, the security of data in transit, or of the internetwork connections
it traversed, was not a major concern.

Now the Internet has become global, with tens of millions of users,
almost all of whom are completely unknown to you. So it is no longer wise to
trust other computers or users on the Internet. But the Internet is not the
only place you will find "untrusted" computers. Think about any network that
you do not manage or control. To what extent can you trust it? Do you really
want to connect it to your network without any way of controlling the
traffic between the two? These days, whenever you connect your trusted
network to someone else's untrusted network, it is wise to place a firewall
of some kind between the two. This helps you keep insiders in and outsiders
out. For example, firewalls would be appropriate at points C and D in Figure
1, but may not be needed at points A and B.

Figure 1: The placement of firewalls

The idea is not to cut off communication at these points, but to control
it. This means controlling which users can pass data between the networks on
either side of the firewall, as well as the types of data they are allowed to
exchange. These principles apply at all levels of internetworking, from
small offices to corporate offices, from a couple of interconnected LANs to
corporate WANs, from Web surfing machines to electronic commerce servers.

Internet Risks. So what risks do you face when connecting networks to
each other or the Internet? A recent Ernst & Young survey found that four
out of five large organizations (those with more than 2,500 employees) are
running mission-critical applications on local area networks. Those LANs,
and the vital information they are processing, are increasingly threatened
by internetwork connections. For example, when NCSA studied a profile group
of 61 large organizations, the group reported 142 separate security-breach
and system-hacking incidents over a three-month period. IP spoofing, which can
be used to gain widespread access to an internal network, accounted for 49
of these encounters. Yet a recent Corporate Information Technology Policies
Survey conducted by the Chicago-based information technology law firm of
Gordon & Glickson revealed that less than half of respondents performed
routine security checks. Only 44% had the ability to track access to
sensitive data and only one third used any form of encryption.

At the same time 98% of these same companies provide access to the
Internet to some employees, 97% provide remote access to corporate networks,
61% host their own Web site, and 9 out of 10 permit some level of access to
commercial on-line services such as CompuServe. To this recipe for disaster
you can add another ingredient: the way that people in the Gordon & Glickson
survey dealt with access to the Internet. While 75% say they would like to
restrict access to some parts of the Internet, only 62% had policies
governing Internet access, 42% did not monitor employee Internet use and
only 30% actually applied access controls. Furthermore, fewer than two out
of five respondents said they imposed restrictions on downloading files from
third parties. No wonder that one out of six surveyed corporations reported
experiencing damage associated with Internet usage by employees (one in
eight reported legal claims arising from the use of information technology
by an employee).

The risks related to using the Internet range from public embarrassment,
when a Web site is defaced (as happened to the U.S. Department of Justice
and the Central Intelligence Agency in 1996) or internal correspondence is
revealed, to theft of trade or government secrets from a poorly protected
internal network. Risks include coordinated and systematic abuse of
computing resources, sometimes for mounting attacks on other sites
[Stall95a]. Consider the findings of the United States General Accounting
Office, which was asked by the Senate Committee on Governmental Affairs to
report on the current vulnerability of Department of Defense non-classified
computer systems. Here are some highlights:

"Unknown and unauthorized individuals are increasingly attacking and
gaining access to highly sensitive unclassified information on the
Department of Defense's computer systems... as many as 250,000 attacks last
year... successful 65 percent of the time.... At a minimum, these attacks
are a multi-million dollar nuisance to Defense. At worst, they are a serious
threat to national security. Attackers have seized control of entire Defense
systems... stolen, modified, and destroyed data and software... installed
unwanted files and "back doors" which circumvent normal system protection
and allow attackers unauthorized access in the future. They have shut down
and crashed entire systems and networks, denying service to users who depend
on automated systems to help meet critical missions. Numerous Defense
functions have been adversely affected, including weapons and supercomputer
research, logistics, finance, procurement, personnel management, military
health, and payroll."

Whether it is viruses, Trojan horses, or penetration of internal
networks, the most important factor affecting network security today is
clearly the Internet. If your network is connected to the Internet you have
a whole new set of problems, some of which make pre-existing problems worse.
If your network is not connected to the Internet, you are most likely facing
pressure to make that connection, even if it is merely a demand for
electronic mail. This pressure is so strong that some organizations find
that they are already connected to the Internet even though upper management
has not authorized any such connections.

Connecting to the Internet is a bit like opening the shades on the
office windows and letting in the full glare of the midday sun. Problems
with network security that were previously invisible are thrown into sharp
contrast. Unprotected guest accounts and obvious passwords might not have
been much of a problem when your network was only visible to insiders. But
if people manage to penetrate your network from the outside (something
experienced by at least one out of every six respondents in several recent
surveys) you can bet these weaknesses will be exploited. And news of such
vulnerabilities can spread through "the underground" at the speed of
electrons, leading to rapidly escalating attacks and system abuse.

Such incidents represent more than kids getting their kicks with modems.
Systematic and automated probing of new Internet connections is being
carried out by a shady cast of characters that includes hackers-for-hire,
information brokers, and foreign governments. One in five companies
responding to the annual Information Week/Ernst & Young Security Survey
admitted that intruders had broken into, or had tried to break into, their
corporate networks, via the Internet, during the preceding twelve months
[Info]. And most experts agree that the majority of break-ins go undetected.
For example, attacks by the Defense Information Systems Agency (DISA) on
38,000 Department of Defense computer systems had an 88% success rate but
were detected by fewer than one in twenty of the target organizations. Of
those organizations, only 5% actually reacted to the attack [Wash]. The
bottom line is that when you connect your network to another network, bad
things can happen.

Internetwork Protection. Firewalls come into the picture when any of the
networks that you are internetworking are untrusted. The Internet is always
assumed to be untrusted, but experience tells us that we really shouldn't
trust any network, even ones within our own company, unless we have full
assurance of their security status. In other words, if you are responsible
for the company's sales and marketing network you shouldn't just assume that
the company's production and inventory network is trustworthy, at least not
without some fairly strong assurances. Besides, can you really trust, or do
you even know about, all of the other networks that are connected to the
production and inventory network? This might sound paranoid, but that
doesn't mean it is unreasonable. An analogy might be a floppy disk handed to
you by a colleague. Even though you are assured it is virus-free, prudence
dictates that you scan it for viruses anyway.

So firewalls should be considered whenever you connect trusted networks
to untrusted networks. This means they are sometimes appropriate within an
organization, for example to control access between segments of a wide area
network, but they are almost always appropriate when you connect a company
network to the Internet. In a moment we will discuss how firewalls work and
the role they play in internetwork security.

Firewall Limitations. Information security professionals often find
themselves working against misconceptions and popular opinions formed from
incomplete data. Some of these opinions spring more from hope than fact,
such as the idea that internal network security problems can be solved
simply by deploying a firewall. It is true that firewalls deserve to be near the
top of the agenda for organizations who have, or are thinking about
creating, a connection between their network and another network. However,
firewalls are not the whole answer.

For a start, firewalls are not the answer to attacks behind the
firewall. The nature of firewall protection is perimeter defense [Amor].
Firewalls are not general-purpose access control systems and they are not
designed to control insiders abusing authorized access behind the firewall.
Information security surveys consistently report that more than half of all
incidents are insider attacks (many seasoned security professionals refer to
the 80/20 rule to describe the relative probability that a problem was
caused by insiders as opposed to outsiders).

Firewalls are not a solution to the malicious code problem. There are
two parts to this problem: viruses, self-replicating code that can cause
considerable disruption on networks as well as individual workstations; and
Trojan horses, programs pretending to be something they are not, such as
"password sniffers." To put this problem in perspective, the 1997 NCSA Virus
Study reports that virtually all North American companies and large
organizations have experienced virus infections. Some 90% of organizations
with more than 500 PCs experience, on average, at least one virus incident
per month. The cost of incidents averages over $8,000 and can run as high as
$100,000, with survey results indicating that the problem is getting worse
rather than better. New types of viruses which use macro languages are
spreading through shared documents, not programs. They can travel over the
Internet or through the World Wide Web as e-mail attachments. The Web itself
is a source of virus programs, which can be downloaded from a number of
sites. An additional complication is that many naive users allow their
e-mail program or their operating systems to load and interpret e-mail
attachments such as MS-Word documents or HTML files without scanning for
harmful code. The Web is also a potential path for Trojan code (e.g., Java
applets or ActiveX controls), which is a potentially serious problem for
distributed application technologies.

Some firewalls can be configured to check incoming code for signs of
viruses and Trojan horses; however, defenses, while helpful, are not
foolproof. As far as Trojan code is concerned, current defenses are
essentially limited to barring known programs, which leaves a big gap
through which new Trojan programs may slip. Furthermore, firewalls can only
be expected to address one aspect of the malicious-code problem. Many virus
infections still occur because people have introduced infected disks into
the network. A typical example is the traveling salesperson who returns with
an infected laptop which is then attached to the network and infects it.
Another classic is the maintenance engineer who uses an infected disk to
test machines. Proper anti-virus policies and procedures can reduce these
risks, but virus-scanning firewalls are only part of the answer.

Another fact lost in the hyperbole about the Internet is that many of
the hacking incidents reported by the media have very little to do with the
Internet itself. Indeed, one of the most widely used hacking techniques is
social engineering, which essentially means tricking someone, either in
person or over the telephone, into revealing something like their network
password. And even though many companies now have an Internet connection,
phone lines intended for data, such as remote maintenance lines and field
office access lines, are still popular as means of gaining access to
internal systems.

In other words, efforts to protect data from Internet threats should not
take place in a vacuum. It must be stressed there is little point installing
a firewall if you haven't addressed the infosec basics, like classifying and
labeling data according to its sensitivity, password protecting
workstations, enforcing anti-virus policies, and tracking removable media.
One effect of the Internet phenomenon has been to hold up a mirror to
internal networks. What a lot of companies see is not pretty. The problem of
securing desktop PCs was not adequately addressed before we cabled, some
might say cobbled, them together to form local area networks (LANs). The
problem of securing LANs was not solved before they became wide area
networks (WANs). These facts come back to haunt us as we rush toward GANs or
global area networks [Eward].

Other Internet Problems. Another oft-neglected security-related Internet
fact is that nobody owns the Internet. While the lack of ownership is
sometimes mentioned in articles about the Internet, the implications for
security, which are both positive as well as negative, are seldom
highlighted. The most obvious negative implication is that the Internet
includes some wild and lawless places. Traditionally a playground for
hackers, the Internet has no central authority. Despite recent rumblings
about "cyber-cops" from the U.S. Department of Justice, there is currently
no Internet police force and we are not likely to see one. The
trans-national nature of the Internet alone makes any such attempts at
policing problematic at best.

Ironically, this very lack of ownership has resulted in a growing
awareness of security. Because nobody owns the Internet, nobody is obliged
to minimize the risks associated with using it. Not so long ago, mainframe
makers assured users that their systems were safe and secure behind locked
doors. When personal computers first started appearing on corporate desktops
they were blasted as a security risk by some purveyors of big iron, but soon
these vendors were talking up their own PCs and talking less about security.
The trend continued as PCs came together as LANs.

With the exception of a few vendors selling security products, the lack
of talk about security persisted during the aggregation of LANs into WANs
and the enormous marketing push towards client/server solutions. But when
you start talking about the transition from WANs to Internet-based GANs, the
lack of security is well documented. We now hear major hardware and software
vendors talking publicly about Internet risks as they market their security
solutions, no longer obliged to overlook security issues. The potential
benefits of using the public Internet rather than dedicated private networks
are so financially compelling that few organizations feel they can afford to
turn their back on the Internet just because it is inherently insecure.


2. Defining Terms

So how do we define a firewall? Broadly speaking, it is a system or
group of systems that enforces an access control policy between two networks
[FAQ]. More specifically, a firewall is a collection of components or a
system that is placed between two networks and possesses the following
properties:

1. all traffic from inside to outside, and vice-versa, must pass through
it;

2. only authorized traffic, as defined by the local security policy, is
allowed to pass through it; and

3. the system itself is immune to penetration [Ches94].

As we said earlier, a firewall is a mechanism used to protect a trusted
network from an untrusted network; the two networks in question are
typically an organization's internal network (trusted) and the Internet
(untrusted). But there is nothing in the definition of a firewall that ties
the concept to the Internet (remember that we defined the Internet as the
global network of networks that communicates using TCP/IP and an internet as
any connected set of networks).

Internal Firewalls. Consider a manufacturing company that has different
networks for sales, marketing, payroll, accounting, production, and product
development. Over time, these have been connected because some users have
made a case for having access to more than one network. But it is probably
unnecessary and undesirable for all users to have access to all of these
networks. Although application level security may be used to protect
sensitive data in a wide area network that offers any-to-any connectivity,
segregation of networks by means of firewalls greatly reduces many of the
risks involved; in particular, firewalls can notably reduce the threat of
hacking between networks by insiders (44% of respondents to a
recent Infosecurity News/Yankee Group survey reported security compromises
by insiders). Insider hacking encompasses unauthorized or inappropriate
access to data and processing resources by employees, including authorized
users. It should be noted that the importance of insider abuse consistently
outranks that of external hacking in information security surveys.

Although the phenomenal growth of Internet connections has
understandably focused attention on Internet firewalls, modern business
practices continue to underline the importance of internal firewalls.
Consider mergers, acquisitions, reorganizations, outsourcing, joint
ventures, and strategic partnerships. In all but the most technologically
challenged industries these increasingly common occurrences have significant
internet implications. Suddenly, someone outside the organization needs
access to internal information. Multiple networks designed by different
people, according to different rules, are suddenly asked to trust each
other. In these circumstances, firewalls have an important role to play as a
mechanism to enforce an access-control policy between networks and to
protect trusted networks from those that are untrusted.

Gateways. Medieval towns were often surrounded by huge walls for
protection. Access to and from the town was possible only through a limited
number of large gates or gateways. As a digital version of this concept,
"gateway" is now an important term, often used as a synonym for, or in
conjunction with, "firewall"; that is, a point of control through which
network traffic must pass. Internet firewalls are often referred to as
secure Internet gateways [Wack].

More specifically, a gateway is a computer that provides relay services
between two networks. As you can see from Figure 2, a firewall may consist
of several different components, including filters or screens that block
transmission of certain classes of traffic. A gateway is a machine or set of
machines that provides relay services which complement the filters. Another
term illustrated in Figure 2 is "demilitarized zone" or "DMZ" [Ches94]. This
is an area or sub-network between the inside and outside networks that is
partially protected. One or more gateway machines may be located in the DMZ.
Exemplifying a traditional security concept, defense-in-depth, the outside
filter protects the gateway from attack, while the inside filter guards
against the consequences of a compromised gateway [Ches94].

Figure 2: Firewall schematics

3. Policy as the Key

Although it is helpful to diagram various configurations of filters and
gateways, it is imperative that we not lose sight of the broad definition of
a firewall as an implementation of security policy, not the totality of
security. A firewall is an approach to security; it helps implement a larger
security policy that defines the services and access to be permitted [Wack,
which is the basis for the rest of this section]. In other words, a firewall
is both policy and the implementation of that policy in terms of network
configuration, one or more host systems and routers, and other security
measures such as advanced authentication in place of static passwords. There
are two levels of network policy that directly influence the design, the
installation and the use of a firewall system:

Network Service Access Policy: a higher-level, issue-specific policy
which defines those services that will be allowed or explicitly denied from
the restricted network, plus the way in which these services will be used,
and the conditions for exceptions to this policy.

Firewall Design Policy: a lower-level policy which describes how the
firewall will actually go about restricting the access and filtering the
services as defined in the network service access policy.

Network Service Access Policy. While focusing on the restriction and use
of internetwork services, the network service access policy should also
include all other outside network access such as dial-in and SLIP/PPP
connections. This is important because of the "belt-and-bulge" effect, where
restrictions on one network service access can lead users to try others. For
example, if restricting access to the Internet via a gateway prevents Web
browsing, users are likely to create dial-up PPP connections in order to
obtain this service. Since these are non-sanctioned, ad hoc connections,
they are likely to be improperly secured while at the same time opening the
network to attack.

Network service-access policy should be an extension of a strong
site-security policy and an overall policy regarding the protection of
information resources in the organization. This includes everything from
document shredders to virus scanners, from remote access to floppy-disk tracking.
At the highest level, the overall organizational policy might state a few
broad principles. For example, the fictitious Megabank, Inc. might use the
following as a starting point for its information security policy:

A. Information is vital to the economic well-being of Megabank.

B. Every cost-effective effort will be made to ensure the
confidentiality, control, integrity, authenticity, availability and utility
of Megabank information.

C. Protecting the confidentiality, control, integrity, authenticity,
availability and utility of Megabank's information resources is a priority
for all Megabank employees at all levels of the company.

D. All information processing facilities belonging to Megabank will be
used only for authorized Megabank purposes.

Below this statement of principles come site-specific policies covering
physical access to the property, general access to information systems, and
specific access to services on those systems. The firewall's network
service-access policy is formulated at this level.

For a firewall to be successful, the network service-access policy
should be drafted before the firewall is implemented. The policy must be
realistic and sound. A realistic policy is one that provides a balance
between protecting the network from known risks while still providing users
reasonable access to network resources. If a firewall system denies or
restricts services, it usually requires the strength of the network
service-access policy to prevent the firewall's access controls from being
modified or circumvented on an ad hoc basis. Only a sound, management-backed
policy can provide this defense against user resistance. Here are two
typical and contrasting network service-access policies that a firewall can
implement:

- Allow no access to a site from the Internet, but allow access from the
site to the Internet;

or,

- Allow some access from the Internet, but only to selected systems such
as information servers and e-mail servers.

Firewalls often implement network service-access policies that allow
some users access from the Internet to selected internal hosts, but this
access would be granted only if necessary and only if it could be combined
with advanced authentication. In a moment we will look at network
service-access policy in more detail.

Firewall Design Policy. The firewall design policy is specific to the
firewall. It defines the rules used to implement the network service access
policy. This policy must be designed in relation to, and with full awareness
of, issues such as firewall capabilities and limitations, and the threats
and vulnerabilities associated with TCP/IP. As mentioned above, firewalls
generally implement one of two basic design policies:

- Permit any service unless it is expressly denied; or

- Deny any service unless it is expressly permitted.

Firewalls that implement the first policy (the permissive approach)
allow all services to pass into the site by default, with the exception of
those services that the service-access policy has identified as disallowed.
Firewalls that implement the second policy (the restrictive approach) deny
all services by default, but then pass those services that have been
identified as allowed. This second, restrictive, policy follows the classic
access model used in all areas of information security. The permissive first
policy is less desirable, since it offers more avenues for getting around
the firewall. For example, users could access new services not currently
denied (or even addressed) by the policy, or run denied services at
non-standard TCP/UDP ports that are not specifically denied by the policy.
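
To make the contrast concrete, here is a minimal sketch in Python (the
rule format and function names are illustrative, not any product's syntax)
showing that the two design policies can differ in nothing more than the
default action taken when no explicit rule matches:

    # Each rule pairs a match predicate with an action; first match wins.
    # The only difference between the two policies is the default action.

    def evaluate(packet, rules, default_action):
        """Return the action for a packet under an ordered rule list."""
        for matches, action in rules:
            if matches(packet):
                return action
        return default_action

    def permissive(packet, rules):
        # Permit any service unless it is expressly denied.
        return evaluate(packet, rules, "PERMIT")

    def restrictive(packet, rules):
        # Deny any service unless it is expressly permitted.
        return evaluate(packet, rules, "DENY")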

On the other hand, certain services, such as X Windows, FTP, Archie, and
RPC are difficult to filter [Chap92], [Ches94]. For this reason, they may be
better accommodated by a firewall that implements the permissive policy.
Also, while the restrictive policy is stronger and safer, it is more
difficult to implement and more restrictive for users; services such as
those just mentioned may have to be blocked or heavily curtailed. This is
where firewall design comes in. Certain firewalls can implement either
design policy. One particular design, the dual-homed gateway, is inherently
a "deny-all" restrictive firewall. But it is possible to locate systems
which require services that should not be passed through the firewall on
screened subnets, separated from other site systems. In other words,
depending on security and flexibility requirements, certain types of
firewall designs are more appropriate than others, making it extremely
important that you consider policy before implementing the firewall. Failure
to do so could result in the firewall failing to meet expectations.

Questions to Ask. To arrive at a firewall design policy and then
ultimately a firewall system that implements the policy, we recommend that
you start with the most secure firewall design policy; that is, deny all
services except those that are explicitly permitted. The policy designer
should then understand and document the following:

A. Which Internet services does the organization plan to use (for
example, TELNET, Mosaic, and NFS)?

B. Where will the services be used (for example, on a local basis,
across the Internet, dial-in from home, or from remote organizations)?

C. What additional needs, such as encryption or dial-in support, may be
supported?

D. What risks are associated with providing these services and access?

E. What is the cost, in terms of controls and impact on network
usability, of providing protection?

F. What assumptions are made about security versus usability (does
security automatically win out if a particular service is too risky or too
expensive to secure)?

Addressing these items is straightforward but highly iterative. For
example, a site may wish to use NFS across two remote sites, but the
restrictive design policy may not permit NFS. There are several possible
responses:

- If the risks associated with NFS are acceptable, the organization may
change the design policy to the less secure approach of permitting all
services except those specifically denied and then pass NFS through the
firewall to site systems.

- Alternatively, the site could obtain a firewall capable of locating
the systems that require NFS on a screened subnet, thus preserving the
restrictive design policy for the rest of the site systems.

- On the other hand, the risks of using NFS may be considered too great, in
which case NFS would have to be dropped from the list of services to use
remotely.

4. Policy in Practice

The aim of the preceding "what-if" stage of policy-making is to arrive
at a suitable network-service access policy and a firewall-design policy. To
assist in this process we have outlined some common issues that need to be
addressed in the policies associated with firewall use. We begin with some
specifics and then move on to more general considerations.

Packet Filtering. All firewalls perform some sort of IP packet
filtering, usually by means of a packet-filtering router. The router filters
packets as they pass between the router's interfaces, implementing a set of
rules based on your firewall policy. A packet-filtering router usually can
filter IP packets based on some or all of the following criteria:
- source IP address,
- destination IP address,
- TCP/UDP source port, and
- TCP/UDP destination port.

Not all packet-filtering routers can filter the source TCP/UDP port.
Some routers can examine at which of the router's network interfaces a
packet arrived and then use this as a further filtering criterion. Some UNIX
hosts provide packet-filtering capability, although most do not. Filtering
can be used to block connections from or to specific hosts or networks and
also to block connections to specific ports. A site might wish to block
connections from certain addresses, such as from hosts or sites that it
considers to be hostile or untrustworthy. Alternatively, a site may wish to
block connections from all addresses external to the site (with certain
exceptions, such as with SMTP for receiving e-mail).

Adding TCP or UDP port filtering to IP address filtering results in a
great deal of flexibility. Servers such as the TELNET daemon usually reside
at specific ports (port 23 for TELNET). If a firewall can block TCP or UDP
connections to or from specific ports, then one can implement policies that
call for certain types of connections to be made to specific hosts, but not
others. For example, a site may wish to block all incoming connections to
all hosts except for several firewall-related systems. At those systems,
the site may wish to allow only specific services, such as SMTP for one
system and TELNET or FTP connections to another system (see diagram in
Figure 3). With filtering on TCP or UDP ports, this policy can be
implemented in a straightforward fashion by a packet-filtering router or by
a host with packet-filtering capability.

Figure 3: Packet Filtering on TELNET and SMTP [Wack]

As an example of packet filtering, consider a policy to allow only
certain connections to a network of address 123.4.*.*. TELNET connections
will be allowed to only one host, 123.4.5.6, which may be the site's TELNET
application gateway, and SMTP connections will be allowed to two hosts,
123.4.5.7 and 123.4.5.8, which may be the site's two electronic mail
gateways. NNTP (Network News Transfer Protocol) is allowed only from the
site's NNTP feed system, 129.6.48.254, and only to the site's NNTP server,
123.4.5.9, and NTP (Network Time Protocol) is allowed to all hosts. All
other services and packets are to be blocked. For more detailed discussion
of this example see [Wack]. This is a very basic example of packet
filtering. Actual rules may permit more complex filtering and greater
flexibility.
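
To make this example concrete, the policy just described can be expressed
as an ordered rule set. The following Python sketch is illustrative (the
field layout and first-match loop are not any real router's configuration
language) and applies the restrictive deny-all default:

    # Rules for the 123.4.*.* example: (source, destination, protocol,
    # destination port, action). "*" is a wildcard. First match wins.
    RULES = [
        ("*",            "123.4.5.6", "tcp",  23, "PERMIT"),  # TELNET gateway
        ("*",            "123.4.5.7", "tcp",  25, "PERMIT"),  # mail gateway 1
        ("*",            "123.4.5.8", "tcp",  25, "PERMIT"),  # mail gateway 2
        ("129.6.48.254", "123.4.5.9", "tcp", 119, "PERMIT"),  # NNTP feed only
        ("*",            "*",         "udp", 123, "PERMIT"),  # NTP to all hosts
    ]

    def filter_packet(src, dst, proto, dport):
        """All packets not explicitly permitted are blocked."""
        for r_src, r_dst, r_proto, r_dport, action in RULES:
            if ((r_src == "*" or r_src == src) and
                    (r_dst == "*" or r_dst == dst) and
                    r_proto == proto and r_dport == dport):
                return action
        return "DENY"

    assert filter_packet("1.2.3.4", "123.4.5.6", "tcp", 23) == "PERMIT"
    assert filter_packet("1.2.3.4", "123.4.5.9", "tcp", 119) == "DENY"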

Policing Protocols. It is the network service-access policy that
determines which protocols and fields are filtered, that is, which systems
should have Internet access and what type of access to permit. The following
services are inherently vulnerable to abuse and are usually blocked at a
firewall from entering or leaving the site [Chap92], [Garf92]:

- tftp, port 69, trivial FTP, used for booting diskless workstations,
terminal servers and routers, can also be used to read any file on the
system if set up incorrectly.

- X Windows, OpenWindows, ports 6000+, port 2000, can leak information
from X window displays including all keystrokes (intruders can even gain
control of a server through the X-server).

- RPC, port 111, Remote Procedure Call services including NIS and NFS,
which can be used to steal system information such as passwords and read and
write to files.

- rlogin, rsh, and rexec, ports 513, 514, and 512, services which, if
improperly configured, can permit unauthorized access to accounts and
commands.

Other services, whether inherently dangerous or not, are usually
filtered and possibly restricted to only those systems that need them. These
would include:

- TELNET, port 23, often restricted to only certain systems.

- FTP, ports 20 and 21, like TELNET, often restricted to only certain
systems.

- SMTP, port 25, often restricted to a central e-mail server.

- RIP, port 520, routing information protocol, can be spoofed to
redirect packet routing.

- DNS, port 53, Domain Name Service zone transfers, contains names of
hosts and information about hosts that could be helpful to attackers, and
could be spoofed.

- UUCP, port 540, UNIX-to-UNIX CoPy, if improperly configured can be
used for unauthorized access.

- NNTP, port 119, Network News Transfer Protocol, for accessing and
reading network news.

- gopher, http, ports 70 and 80, information servers and client programs
for gopher and WWW clients, should be restricted to an application gateway
that contains proxy services.

Although some of these services, such as TELNET or FTP, are inherently
risky, blocking access to these services completely may be too drastic a
policy for many sites. However, at many sites, not all systems require
access to all services. For example, restricting TELNET or FTP access from
the Internet to only those systems that require the access can improve
security at no cost to user convenience. Services such as NNTP may seem to
pose little threat, but restricting these services only to those systems
that need them helps to create a cleaner network environment and reduces the
likelihood of exploitation from yet-to-be-discovered vulnerabilities and
threats.
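
One way to keep such a service policy manageable and auditable is to hold
it as data and generate filter rules from it. The following Python sketch
is hypothetical; the port numbers follow the lists above, but the host
names and the structure are placeholders:

    # Policy-as-data sketch for the services discussed above.
    BLOCKED = {            # inherently vulnerable: blocked at the firewall
        "tftp": 69, "openwindows": 2000, "rpc": 111,
        "rexec": 512, "rlogin": 513, "rsh": 514,
    }                      # (X Windows, ports 6000 and up, needs a port range)

    RESTRICTED = {         # filtered: permitted only to named systems
        "telnet": (23,  ["telnet-gw"]),
        "ftp":    (21,  ["ftp-gw"]),
        "smtp":   (25,  ["mailhub"]),
        "nntp":   (119, ["news-server"]),
        "http":   (80,  ["www-proxy"]),
    }

    def rules():
        """Yield (action, service, port, host) tuples for the rule base."""
        for name, port in sorted(BLOCKED.items()):
            yield ("DENY", name, port, "*")
        for name, (port, hosts) in sorted(RESTRICTED.items()):
            for host in hosts:
                yield ("PERMIT", name, port, host)

    for rule in rules():
        print(rule)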

System managers should also be aware that there are services that give
users access to FTP and other Internet services through e-mail. For
example, the Unix Mail Robot allows Web documents to be
retrieved through e-mail and at least two sites provide Internet services
such as FTP via e-mail. If especially tight security demands that such
access be restricted, there may be additional controls to impose on outgoing
and incoming e-mail.

Packet Filters as Firewalls. So, if a packet-filtering router is
configured to enforce the network service-access policy, is it a firewall?
If so, can it be an effective firewall? Yes, although some people might say
it depends on how effective you need your firewall to be. Some vendors might
then reply "Have you looked at our routers lately?" Certainly, there are
packet-filtering routers that meet Cheswick's three-point firewall definition cited
earlier and products from several router vendors have passed the NCSA's
firewall certification process. In the first edition of this guide we may
have given the impression that firewalls have "evolved" from packet filters
to application gateways and beyond. Although there are several categories of
firewall and some firewall technologies were developed quite recently, it
would be wrong to characterize this as evolution if this implied that newer
is necessarily better.

It is true that, historically, packet-filtering routers were hard to
configure and maintain, but improved software and user interfaces have made
these tasks easier. Packet-filtering rules are inherently complex to specify
and if no testing facility is provided for verifying the correctness of the
rules, other than by exhaustive testing by hand, then mistakes are more
likely. Again, some products have rectified this. Earlier routers lacked
logging capability, so that if the rule set were to let dangerous packets
through, they might not be detected until a break-in occurred. Clearly
it is helpful to add logging features to a packet-filtering router. One more
recent addition is internal address translation and masking, in which the
addresses of internal systems are hidden, so that only the router's address
is seen by the outside network. This allows unlimited use of internal IP
addresses that do not need to be sanctioned or assigned (thus avoiding the
current shortage of addresses).
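
As a toy illustration of translation and masking (the addresses and port
pool below are illustrative), outbound packets are rewritten to carry the
router's single external address, and a table keyed on the assigned
external port maps replies back to the hidden internal host:

    ROUTER_ADDR = "203.0.113.1"   # the only address the outside network sees
    _next_port = 40000            # pool of external ports handed out in turn
    _table = {}                   # external port -> (internal host, port)

    def translate_outbound(int_host, int_port):
        """Rewrite an outbound packet's source; remember the mapping."""
        global _next_port
        ext_port = _next_port
        _next_port += 1
        _table[ext_port] = (int_host, int_port)
        return ROUTER_ADDR, ext_port

    def translate_inbound(ext_port):
        """Map a reply back to the hidden internal host, if one exists."""
        return _table.get(ext_port)

    addr, port = translate_outbound("10.0.0.5", 1025)
    assert addr == ROUTER_ADDR
    assert translate_inbound(port) == ("10.0.0.5", 1025)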

When evaluating packet-filtering routers as firewalls you also have to
look at factors such as performance, suitability and cost. For a small
office network with limited Internet connectivity, a packet-filtering router
that has a good interface and suitable testing and logging features may be
entirely adequate. Furthermore, it will not slow down internetwork traffic,
since the processing of packets through filters has been optimized in router
designs for some time. Finally, since a router is required to make the
internetwork connection possible, using it as a firewall as well saves
money.

We do not wish to minimize the drawbacks to packet-filtering routers.
Even with a good interface, maintaining a rule set that is both effective
and appropriate is a serious task. Exceptions to rules will often need to be
made to allow certain types of access that normally would be blocked, but
exceptions to packet-filtering rules can make the filtering rules so complex
as to be unmanageable. For example, it is relatively straightforward to
specify a rule to block all inbound connections to port 23 (the TELNET
server). If exceptions are made, that is, if certain systems need to accept
TELNET connections directly, then a rule for each system may need to be
added (some packet-filtering systems do allow you to treat the sequential
order of the filter rules as being significant, which means you can have an
exception PERMIT to a specific system, followed by a DENY for all systems).
Sometimes the addition of certain rules may complicate the entire filtering
scheme. Some packet-filtering routers do not filter on the TCP/UDP source
port, which can make the filtering rule set more complex and can open up
holes in the filtering scheme.

Application Gateways. To counter some of the weaknesses associated with
packet-filtering routers, software applications were developed to forward
and filter connections for services such as TELNET and FTP. Such
applications are referred to as proxy services, while host machines running
the proxy services are referred to as application gateways. Working
together, application gateways and packet-filtering routers have the
potential to provide higher levels of security and flexibility than either
alone. For example, consider a site that blocks all incoming TELNET and FTP
connections using a packet-filtering router. The router allows TELNET and
FTP packets to go to one host only, the TELNET/FTP application gateway. A
user who wishes to connect inbound to a site system would have to connect
first to the application gateway, and then to the destination host, as
follows:

1. user first telnets to the application gateway and enters the name of
an internal host,

2. the gateway checks the user's source IP address and accepts or
rejects it according to any access criteria in place,

3. the user may need to authenticate herself (possibly using a one-time
password device),

4. the proxy service creates a TELNET connection between the gateway and
the internal host,

5. the proxy service then passes bytes between the two connections, and
finally

6. the application gateway logs the connection.
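
To make the relay steps concrete, here is a minimal Python sketch of steps
2, 4, 5, and 6; the authentication of step 3 is reduced to a comment, and
the allowed-source list is illustrative:

    import socket
    import threading

    ALLOWED_SOURCES = {"192.0.2.10"}          # illustrative access criterion

    def pump(src, dst):
        """Copy bytes one way until EOF (step 5)."""
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)  # pass the EOF along
            except OSError:
                pass

    def handle(client, client_addr, inner_host, inner_port):
        if client_addr[0] not in ALLOWED_SOURCES:      # step 2: source check
            client.close()
            return
        # step 3: one-time password or other authentication would go here
        inner = socket.create_connection((inner_host, inner_port))  # step 4
        threading.Thread(target=pump, args=(client, inner)).start()
        threading.Thread(target=pump, args=(inner, client)).start()
        print("relayed", client_addr, "->", (inner_host, inner_port))  # step 6

    # A real gateway would accept() connections on port 23 and call handle().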

You can see that proxy services allow only those services through for
which there is a proxy. In other words, if an application gateway contains
proxies for FTP and TELNET, then only FTP and TELNET may be allowed into the
protected subnet and all other services are completely blocked (see Figure
4). For some sites, this degree of security is important, as it guarantees
that only those services which are deemed trustworthy are allowed through
the firewall. It also prevents other untrusted services from being
implemented behind the backs of the firewall administrators.

Figure 4: Proxies on an application gateway

Another benefit to using proxy services is that the protocol can be
filtered. Some firewalls, for example, can filter FTP connections and deny
use of the FTP put command, which is useful if one wants to guarantee that
users cannot write to, say, an anonymous FTP server. Application gateways
have a number of general advantages over the default mode of permitting
application traffic directly to internal hosts. These include:

A. Information hiding, in which the names of internal systems need not
necessarily be made known via DNS to outside systems, since the application
gateway may be the only host whose name must be made known to outside
systems.

B. Robust authentication and logging, in which the application traffic
can be authenticated before it reaches internal hosts and can be logged more
effectively than if logged with standard host logging.

C. Cost-effectiveness, because third-party software or hardware for
authentication or logging need be located only at the application gateway.

D. Less-complex filtering rules, in which the rules at the
packet-filtering router will be less complex than they would be if the router
needed to filter application traffic and direct it to a number of specific
systems. The router need only allow application traffic destined for the
application gateway and reject the rest.

Of course, there is seldom gain without pain. In the case of
client-server protocols such as TELNET, application gateways require two
steps to connect inbound or outbound, which can be viewed as a disadvantage
or an advantage, depending on whether the modified clients make it easier to
use the firewall. A TELNET application gateway would not necessarily require
a modified TELNET client; however, it would require a modification in user
behavior: the user has to connect (but not login) to the firewall as opposed
to connecting directly to the host.

A modified TELNET client could make the firewall transparent by
permitting a user to specify the destination system (as opposed to the
firewall) in the TELNET command. The firewall would serve as the route to
the destination system and thereby intercept the connection, and then
perform additional steps as necessary such as querying for a one-time
password. Users don't have to change their behavior; however, the price is
that a modified client has to run on each system.

In addition to TELNET, application gateways are used generally for FTP
and e-mail, as well as for X Windows and some other services. Some FTP
application gateways include the capability to deny put and get commands to
specific hosts. For example, an outside user who has established an FTP
session (via the FTP application gateway) to an internal system such as an
anonymous FTP server might try to upload files to the server. The
application gateway can filter the FTP protocol and deny all puts to the
anonymous FTP server. This would ensure that nothing can be uploaded to the
server and would provide a higher degree of assurance than relying only on
the correct setting of file permissions at the anonymous FTP server.
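
A sketch of this kind of protocol-level check follows; the command list and
reply text are illustrative, and a real FTP gateway must handle far more of
the protocol. The proxy reads each command on the control channel and
refuses the upload commands behind "put":

    UPLOAD_COMMANDS = {"STOR", "STOU", "APPE"}  # the FTP commands behind "put"

    def filter_ftp_command(line, dest_host, no_put_hosts):
        """Return (forward, reply): pass the line on, or answer with an error."""
        command = line.strip().split(" ", 1)[0].upper()
        if command in UPLOAD_COMMANDS and dest_host in no_put_hosts:
            return False, "553 Uploads are not permitted through this gateway.\r\n"
        return True, None

    ok, reply = filter_ftp_command("STOR secrets.txt", "ftp.example.org",
                                   {"ftp.example.org"})
    assert not ok and reply.startswith("553")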

An e-mail application gateway serves to centralize e-mail collection and
distribution to internal hosts and users. To outside users, all internal
users would have e-mail addresses of the form user@emailhost where emailhost
is the name of the e-mail gateway. The gateway would accept mail from
outside users and then forward mail along to other internal systems as
necessary. Users sending e-mail from internal systems could send it directly
from their hosts, or in the case where internal system names are not known
outside the protected subnet, the mail would be sent to the application
gateway, which could then forward the mail to the destination host. Some
e-mail gateways use a more secure version of the sendmail program to accept
e-mail.

Packet Inspection. Some Internet firewalls use a combination of a
packet-filter screening computer or a hardware router for controlling the
lower layers of communication, and use application gateways for the enabled
applications. But there may be limits on transparency, flexibility, and
connectivity with this arrangement, which may also get expensive in terms of
setup, management, and expertise. Another approach is to inspect packets
rather than just filtering them; that is, to consider their contents as well
as their addresses. Firewalls of this type employ an inspection module,
applicable to all protocols, that understands data in the packet intended
for all other layers, from the network layer (IP headers) up to the
application layer. This strategy can provide context-sensitive security for
complex applications and may be more effective than technologies which only
have data in some of the layers available to them. For example, although
application gateways have access only to the application layer and routers
have access only to the lower layers, the packet-inspection approach
integrates the information gathered from all layers into a single inspection
point.

For a given unit of processing power, the inspection module can typically
handle packets faster than an application gateway can. This can translate
into positive cost implications (cheaper
hardware to run the firewall). Some inspection firewalls also take into
account the state of the connections they handle, so that, for example, a
legitimate incoming packet can be matched with the outbound request for that
packet and allowed in. Conversely, an incoming packet that is masquerading
as a response to a non-existent outbound request can be blocked. This takes
the so-called stateful inspection approach well beyond packet filtering.
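
The heart of the stateful idea fits in a few lines. In the following Python
sketch (field names are illustrative; real inspection modules also track
TCP flags, sequence numbers, and application state), an inbound packet is
admitted only if it is the mirror image of a recorded outbound request:

    # (protocol, internal addr, internal port, remote addr, remote port)
    _state = set()

    def note_outbound(proto, src, sport, dst, dport):
        """Record an outbound request as it leaves the protected network."""
        _state.add((proto, src, sport, dst, dport))

    def admit_inbound(proto, src, sport, dst, dport):
        """Admit a reply only if it matches a recorded outbound request."""
        return (proto, dst, dport, src, sport) in _state

    note_outbound("udp", "10.0.0.5", 1034, "198.51.100.2", 53)  # query out
    assert admit_inbound("udp", "198.51.100.2", 53, "10.0.0.5", 1034)
    assert not admit_inbound("udp", "198.51.100.9", 53, "10.0.0.5", 1034)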

Inspection firewalls can provide address translation and hiding, as well
as provide other services, such as virus scanning, that are also being added
to application gateway firewalls. However, some experts contend that,
because "allowed" packets travel through an inspection firewall, inspection
firewalls are less secure than application proxies, through which no
"allowed" packet passes (the packet is created by the proxy). Proponents of
inspection firewalls dismiss this argument but, as a prospective firewall
user, you should be aware of it. You should also be aware of another area of
debate, the speed with which different types of firewalls can respond to
changes.

Firewalls and Flexibility. You can see that any security policy which
deals with Internet access, Internet services, and network access in
general, has to be flexible because the Internet itself is in flux. Consider
how quickly World Wide Web traffic eclipsed FTP once Web browsers became
popular. Furthermore, the organization's needs may change as the Internet
offers new services and methods for doing business, such as secure
transactions and live video.

New protocols and services are emerging on the Internet. Using them may
benefit the organization, but they may result in new security concerns. For
example, within a few months of its release, hundreds of thousands of people
were using the RealAudio sound player for the Web, yet users behind
UDP-blocking firewalls found that the program wouldn't work properly,
leading some to remove restrictions from the port. Thus, a policy needs
flexibility to reflect and incorporate new concerns. But flexibility is also
important given the rapid changes in business today. New partnerships and
alliances may bring new network connections and new risks. Few organizations
are likely to remain static.

When it comes to adapting your firewall to changes, you will want to be
able to accommodate new applications as soon as possible. Proponents of
inspection firewalls argue that their technology provides the quickest way
to allow new services, for example an innovative video-conferencing
protocol, to pass through the firewall. They point out that such a service
would be denied by an application gateway until a proxy were available.
Advocates of the application-proxy approach counter in several ways. First,
they note that major vendors, such as Intel, now work very closely with
firewall vendors to make proxy applications readily available (a process
facilitated by NCSA's Firewall Product Developers' Consortium). Second, they
note that, although inspection firewalls can allow a new service very
easily, the inspection module will need to be fine-tuned to provide maximum
safety (we do not wish to enter this debate, but you need to be aware of
it). In practice, commercial firewalls may use a mixture of technologies to
accomplish their task. Packet filtering is often combined with packet
inspection and proxies for standard services.

Remote User Advanced Authentication Policy. Remote users are those who
initiate connections to a site's system from elsewhere on the Internet.
These connections could come from any location on the Internet, from dial-in
lines, or from authorized users traveling or working from home. All such
connections should use the advanced authentication service of the firewall
to access systems at the site. Policy should reflect that remote users may
not access systems through unauthorized modems placed behind the firewall.
There must be no exceptions to this policy, as it takes only one captured
password or one uncontrolled modem line to enable a backdoor around the
firewall.

Of course, such a policy has its drawbacks. For a start, there is the
increased user training for using advanced authentication measures. There is
added expense if remote users are supplied with authentication tokens or
smart cards, together with increased overhead in administering remote
access. But it does not make sense to install a firewall and at the same
time fail to control remote access, especially when secure authentication
tokens now cost less than $30 (US) per user per year (excluding host
software licenses).
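
To illustrate why one-time passwords defeat password capture, here is a
simplified Python sketch of the hash-chain scheme behind S/Key-style tokens
(not the implementation of any particular product): each login consumes one
link of the chain, and a captured password can neither be replayed nor used
to derive the next one.

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def make_chain(secret, n):
        """The user's token holds the chain; the server gets the last link."""
        chain = [secret]
        for _ in range(n):
            chain.append(h(chain[-1]))
        return chain

    class Server:
        def __init__(self, last_link):
            self.current = last_link
        def login(self, otp):
            if h(otp) == self.current:
                self.current = otp  # next login must present the prior link
                return True
            return False

    chain = make_chain(b"user secret", 100)
    server = Server(chain[-1])
    assert server.login(chain[-2])       # first login: present H^99(secret)
    assert not server.login(chain[-2])   # a replayed password is rejected
    assert server.login(chain[-3])       # the next link in the chain works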

Dial-in/out Policy. A useful feature for authorized users is to have
remote access to the systems when these users are not on site. A dial-in
capability allows them to access systems from locations where Internet
access is not available. However, dial-in capabilities add another avenue
for intruder access. Authorized users may also wish to have a dial-out
capability to access those systems that cannot be reached through the
Internet. These users need to recognize the vulnerabilities they may be
creating if they are careless with modem access. A dial-out capability may
easily become a dial-in capability if proper precautions are not taken.

The dial-in and dial-out capabilities should be considered in the design
of the firewall and incorporated into it. Forcing outside users to go
through the advanced authentication of the firewall should be strongly
reflected in policy. Policy can also prohibit the use of unauthorized modems
attached to host systems and to personal computers at the site if the modem
capability is offered through the firewall. A strong policy and effective
modem service may limit the number of unauthorized modems throughout the
site, thus limiting this dangerous vulnerability as well.

Remote Network Connections. In addition to dial-in/dial-out connections,
the use of Serial Line IP (SLIP) and Point-to-Point Protocol (PPP)
connections needs to be considered as part of the policy. Users could use
SLIP or PPP to create new network connections into a site protected by a
firewall. Such a connection is potentially a backdoor around the firewall,
and may be an even larger backdoor than a simple dial-in connection. It is
possible to locate dial-in capability so that dial-in connections have to
pass through the firewall. This sort of arrangement can also be used for
SLIP and PPP connections; however, this restriction would need to be set
forth in policy. As usual, the policy would have to be very strong with
regard to these connections (given that workers at NASA's Kennedy Space
Center can be suspended for introducing unauthorized floppy disks into the
workplace, it is reasonable for your policy to specify something equally
severe for setting up un-sanctioned Internet connections).

Information Server Policy A site that is providing public access to an
information server may want to incorporate this access into the firewall
design. Although the information server itself creates additional and
specific security concerns, the information server should not reduce the
existing security of the protected site. Policy should reflect the
philosophy that the security of the site shall not be compromised in order
to provide an information service. A typical example of this is a Web server
intended to provide access for Internet users. This machine may not need to
be behind the firewall at all. If the information dished up by the Web
server resides on that machine itself, rather than being drawn from systems
on the internal network, it can be operated as a "sacrificial lamb" with no
connections except an external one to the Internet. As long as the machine
is regularly backed up it can operate unencumbered by a firewall and simply
be restored from backups if it is attacked.

If you need to accept information submitted by users of a Web server,
for example, when they fill out a form on a Web page, then you will want to
protect that information and the channel by which it is communicated to your
internal network. Allowing such data to accumulate on the Web server itself
is risky because it could be compromised if the Web server were attacked.
Transmission of the information from the server to the internal network
needs to be via a firewall. The same policy applies to any valuable
information that you want to dish up via Web pages. If you are supplying
responses to questions that draw from internal databases then you should put
the systems that hold these databases behind a firewall. Several
configurations are shown in Figure 5.

Figure 5: Web servers relative to firewalls

One can make a useful distinction: information-server traffic -- the
traffic concerned with retrieving information from an organization's
information server -- is fundamentally different from other "conduct of
business" traffic such as e-mail (or other information-server traffic for
the purposes of business research). The two types of traffic have their own
risks and do not necessarily need to be mixed with each other. Screened
subnet and dual-homed gateway firewalls allow information servers to be
located on a screened subnet and in effect be isolated from other site
systems. This reduces the chance that an information server could be
compromised and then used to attack site systems.

Multiplication Problems Cheswick and Bellovin identify one area of
particular difficulty for firewall policy developers under the title of
joint ventures [Ches94]. This includes situations where two or more
companies agree to work together on a specific project while remaining
competitive in other areas. This produces policy dilemmas similar to those
that arise when companies wish to let support staff from vendors connect to
a site in order to diagnose problems, or when suppliers connect to share
ordering information. Here are some suggested policies:

1. Shared machines require protection from outsiders.
2. System administrators must assume that partners do not trust each
other 100%.
3. Users have a legitimate need for high-quality access to the shared
machines plus their respective home machines.

There are several solutions that can be implemented to address these
policies [Ches94], but this is an inherently challenging area of firewall
policy.

5. Specifying and Procuring a Firewall

Once the decision is made to use firewall technology to implement an
organization's security policy, the next step is to procure a firewall that
provides the appropriate level of protection and is cost-effective. We
cannot say what exact features a firewall should have to provide effective
implementation of your policies, but we can suggest that, in general, a
firewall should be able to do the following:

- Support a "deny all services except those specifically permitted"
design policy, even if that is not the policy used.

- Support your security policy, not impose one.

- Accommodate new services and needs if the security policy of the
organization changes.

- Contain advanced authentication measures, or contain the hooks for
installing advanced authentication measures.

- Employ filtering techniques to permit or deny services to specified
host systems as needed.

- Use an IP filtering language that is flexible, easy to program, and
able to filter on as many attributes as possible, including source and
destination IP address, protocol type, source and destination TCP/UDP
port, and inbound and outbound interface (a minimal illustration of such
rule matching follows this list).

- Use proxy services for services such as FTP and TELNET so that
advanced authentication measures can be employed and centralized at the
firewall.
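
To make the filtering point above concrete, here is a minimal sketch of
rule matching in Python. It is purely illustrative: the addresses, ports,
and rule layout are invented for the example, and real firewalls use their
own rule languages.

    import ipaddress

    # Each rule matches on (src, dst, protocol, dst_port, interface);
    # "*" is a wildcard. First match wins; the last rule denies all else.
    RULES = [
        ("*", "192.0.2.25", "tcp", 25, "inbound", "permit"),     # SMTP host
        ("192.0.2.0/24", "*", "tcp", 80, "outbound", "permit"),  # Web surfing
        ("*", "*", "*", "*", "*", "deny"),                       # default deny
    ]

    def matches(rule_val, pkt_val):
        if rule_val == "*":
            return True
        if isinstance(rule_val, str) and "/" in rule_val:        # CIDR block
            net = ipaddress.ip_network(rule_val)
            return ipaddress.ip_address(pkt_val) in net
        return rule_val == pkt_val

    def decide(src, dst, proto, port, iface):
        for rule in RULES:
            fields = (src, dst, proto, port, iface)
            if all(matches(r, p) for r, p in zip(rule[:5], fields)):
                return rule[5]
        return "deny"   # deny all services except those specifically permitted

    print(decide("203.0.113.7", "192.0.2.25", "tcp", 25, "inbound"))  # permit
    print(decide("203.0.113.7", "192.0.2.25", "tcp", 23, "inbound"))  # deny

Note how the final wildcard rule implements the "deny all services except
those specifically permitted" design policy mentioned earlier.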

Other Basic Functions It is also helpful, if services such as NNTP, X,
HTTP, or Gopher are required, for the firewall to contain the corresponding
proxy services. The firewall should also contain the ability to centralize
SMTP access, to reduce direct SMTP connections between site and remote
systems. This results in centralized handling of site e-mail. The firewall
should accommodate public access to the site, such that public information
servers can be protected by the firewall but can be segregated from site
systems that do not require public access.

The firewall should contain the ability to concentrate and filter
dial-in access. The firewall should contain mechanisms for logging traffic
and suspicious activity, and should contain mechanisms for log reduction so
that logs are readable and understandable. If the firewall requires an
operating system such as UNIX, a secured version of the operating system
should be part of the firewall, with other security tools as necessary to
ensure firewall host integrity. The operating system should have all patches
installed. Note that there is no reason for the firewall machine itself to
use the same operating system as your company network. Indeed, numerous
firewalls use their own proprietary operating system, optimized for
performance and security. However, it may be helpful for the management of
the firewall to take place on a system with a familiar operating system and
interface.

The firewall should be developed in such a manner that its strength and
correctness are verifiable. It should be simple in design so that it can be
understood and maintained. The firewall and any corresponding operating
system should be updated with patches and other bug fixes in a timely
manner. As mentioned in earlier discussion, the Internet is a constantly
changing network. New vulnerabilities can arise. New services and
enhancements to other services may represent potential difficulties for any
firewall installation. Therefore, flexibility to adapt to changing needs is
important, as is the process of staying current on new threats and
vulnerabilities. You may want to subscribe to some of the mailing lists that
we list on our Web site (www.ncsa.com) or consider a paid subscription to
reconnaissance services such as NCSA's IS/Recon.

Buy or Build? Some organizations have the capability to put together
their own firewalls using available software components and equipment, or
to write a firewall from scratch. At the same time, there are plenty of
vendors offering a wide range of services in firewall technology, from
providing the necessary hardware and software, to developing security policy
and to carrying out risk assessments, security reviews and security
training. Whether you buy or build, you start with a policy. If your
organization is having a hard time developing a policy, a consultant or
vendor may be able to expedite the process.

One of the advantages of building your own firewall is that in-house
personnel understand the specifics of the design and use of the firewall.
Such knowledge may not exist in-house with a vendor-supported firewall. On
the other hand, an in-house firewall can be expensive in terms of time
required to build and document the firewall, plus the time required for
maintaining the firewall and adding features to it as required. These costs
are easy to overlook. Organizations sometimes make the mistake of counting
only the costs for the equipment. If a true accounting is made for all costs
associated with building a firewall, it could prove more economical to
purchase from a vendor. Consideration of the following questions may help
your organization decide whether or not it has the resources to build and
operate a successful firewall:

A. How will the firewall be tested?

B. Who will verify that the firewall performs as expected?

C. Who will perform general maintenance of the firewall, such as backups
and repairs?

D. Who will install updates to the firewall, such as for new proxy
servers, new patches, and other enhancements?

E. Can security-related patches and problems be corrected in a timely
manner?

F. Who will perform user support and training?

Many vendors offer maintenance services along with firewall
installation; therefore, the organization should consider whether it has the
internal resources to perform the functions listed above. Finally, firewall
administration is a critical job role and should be afforded as much time as
possible. In small organizations, it may require less than a full-time
position; however, in such cases, it should take precedence over other
duties. The cost of a firewall should include the cost of administering the
firewall. A firewall can only be as effective as its administration. If the
firewall is not maintained properly, it may become insecure; it may permit
break-ins while providing an illusion that the site is still secure. Your
security policy should clearly reflect the importance of strong firewall
administration. Management should demonstrate this commitment in terms of
full-time personnel, proper funding for procurement and maintenance, and
other necessary resources.

A firewall is not an excuse to pay less attention to site system
administration. It is in fact the opposite: if a firewall is penetrated, a
poorly administered site could be wide-open to intrusions and resultant
damage. A firewall in no way reduces the need for highly skilled system
administration. At the same time, a firewall can permit a site to be
proactive in its system administration as opposed to reactive. Because the
firewall provides a barrier, sites can spend more time on system
administration duties and less time reacting to incidents and damage
control. It is recommended that sites:

- Standardize operating system versions and software to make
installation of patches and security fixes more manageable.

- Institute a program for efficient, site-wide installation of patches
and new software.

- Use services to assist in centralizing system administration, if this
will result in better administration and better security.

- Perform periodic scans and checks of host systems to detect common
vulnerabilities and errors in configuration.

Finally, you should ensure that a communications pathway exists between
system administrators and firewall/site security administrators to alert the
site about new security problems, alerts, patches, and other
security-related information.

6. Firewall Testing

When you have put time and money into a range of measures for protecting
your data it makes sense to test them. Ongoing testing to make sure the
firewall continues to work as intended is highly recommended (inspiration
for this discussion of firewall self-testing comes, in part, from the paper
by Dan Farmer and Wietse Venema, "Improving the Security of Your Site by
Breaking into It" [Farm93]). Consistent, periodic testing is an important
part of maintaining
effective firewall security. Placing trust in an un-validated
configuration, or in one which was only validated at installation, is
dangerous.
For example, any quick change to a firewall configuration to support a
special project or one-time access may have unforeseen effects on the
security of the entire configuration. While self-testing is no guarantee of
invulnerability, it ensures that the castle walls are not crumbling, the
gates are closed, and that the moat is full of water.

Black Box Testing -- What we look like from the outside What can we
learn by probing our firewall? This type of testing can be divided into two
parts: port scanning and on-the-wire observation. A port scanner
systematically works its way across all 65,536 possible service-connection
ports on a TCP/IP-connected host or device. This is a
valuable service to both the hacker-of-systems and to the
maintainer-of-systems, as it reveals what avenues of attack and legitimate
services may be available on the target.
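
As a minimal illustration of the technique, here is a TCP connect() scanner
sketched in Python. The address in the comment is a placeholder -- scan only
hosts you are authorized to test. Production tools are faster and stealthier,
but the principle is the same:

    import socket

    def tcp_scan(host, first=0, last=65535, timeout=0.5):
        """Attempt a TCP connection to every port; return those that answer."""
        open_ports = []
        for port in range(first, last + 1):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 == connection accepted
                open_ports.append(port)
            s.close()
        return open_ports

    # print(tcp_scan("192.0.2.1"))   # placeholder firewall/host address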

Standalone scanning software is widely available for Unix-based
platforms as well as for Windows workstations. Scanning capabilities are
also incorporated into automated testing tools (more on these later). The
useful part of scanning is accounting for what the scan turns up, and making
sure that only things which are authorized under policy are present.
Firewalls vary widely in how they implement services. Likewise, the
operating system platforms upon which they are installed have sundry
built-in or necessary services which cannot be disabled. However, it is
possible to establish some generic guidelines for self-test scanning:

a. Baseline scans Run a baseline scan on any configuration before
connecting it to the internet. Having pristine-state scans of firewalls and
any other network-attached devices is useful in tracking down things that
change -- due to causes legitimate or nefarious.
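
One simple way to act on this advice, assuming scan results are saved as
text files with one open port number per line (an assumption for the
example, not a prescribed format), is to diff each new scan against the
pristine-state record:

    def load_ports(path):
        """Read a saved scan: one open port number per line."""
        with open(path) as f:
            return {int(line) for line in f if line.strip()}

    baseline = load_ports("firewall_baseline.txt")   # pristine-state scan
    current = load_ports("firewall_today.txt")       # scan just taken

    print("newly open -- investigate:", sorted(current - baseline))
    print("no longer open:", sorted(baseline - current))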

b. Scan from multiple locations It makes sense to scan from numerous
locations. At a minimum these include scanning from:

- a "typical user" location on the protected network

- the DMZ (the network immediately outside of the firewall)

- a "foreign" external site (perhaps a dial-up ISP account)

Note that correlating output from these scans should provide a clear
picture of the network security policy. Differences between DMZ and foreign
scans may indicate either unwarranted trust of the DMZ, which should be
investigated, or the effect of filtering at the router. Scanning from inside
the protected net also provides a picture of what an attacker who managed to
dial-up, or gained physical access, would see. This often provides
interesting food for thought.

Some recommendations:

- Despite the time that it may take, ensure that scanning attempts all
ports from 0 to 65535. Stopping a scan at port 10000 (as some automatically
do) leaves a large section of unmonitored perimeter.

- Scan everything. Although it spills over into the policy area,
ensuring that the service offerings of all systems, even inside the
protected network, are recorded and understood contributes greatly to site
security. (see point b above concerning attackers inside the perimeter).

- Review the output. Scripting of scan tests is a great time saver;
however, reviewing the output for anomalies requires some clever automation
or a diligent administrator (organizations are urged to acquire one or the
other, if not both).

- A most pernicious work of Trojaneering is a tool which tells you that
things are just fine (regardless of what is actually there). Likewise having
baseline or periodic reports "adjusted" by unfriendlies has no good outcome.
Techniques for protecting tools range from read-only file systems and
removable media to air-gapped (unplugged from the network) systems and
beyond.

- You want to keep your test records out of harm's way. Once again,
removable media, or even paper copies, safely locked away, may prove to be
invaluable should things go awry.

On-the-Wire Observation In the not too distant past, network analyzers
were bulky, expensive devices which almost required a pilot's license to
operate properly. The top end of the product category is still expensive and
complex, but there are now cheap and even free products for monitoring
network traffic. Most network monitors provide a semi-interpreted
play-by-play of what is going past the monitoring system in real time. Note
that, as in the case of port scanners, the network monitor (commonly called
a "sniffer", though Sniffer is a trademark of Network General Corporation)
is a tool
which can be used effectively by security personnel and also, unfortunately,
by less honorably intentioned persons.

Network monitor software is available as part of some operating system
distributions (such as snoop which ships with Sun's Solaris). A popular free
product, tcpdump, is available as source code for many other Unix platforms.
There are also a number of PC-based products which work with packet drivers
and allow the use of retired 286 or 386 PCs as dedicated network monitoring
stations (a noble and cost-effective alternative to the
dust-collector-and-doorstop retirement plan).
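
For the curious, the heart of such a monitor is small. The sketch below
(Python on Linux, which exposes AF_PACKET raw sockets; it must be run as
root) prints the length and first bytes of every frame seen on the wire. It
is no replacement for tcpdump or snoop, merely an illustration of the
principle:

    import socket

    ETH_P_ALL = 0x0003   # ask the kernel for frames of every protocol
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                      socket.ntohs(ETH_P_ALL))

    while True:          # interrupt with Ctrl-C
        frame, _ = s.recvfrom(65535)
        # First 14 bytes are the Ethernet header (dst MAC, src MAC, type).
        print(len(frame), "bytes:", frame[:14].hex())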

Please note that the cheap-if-not-free network monitoring products are
generally applicable to non-switched Ethernet networks. If the configuration
under test utilizes less common network media, the type of monitoring
described may well require special equipment. Also note that a tutorial on
the details of TCP/IP diagnosis is beyond the scope of this document but a
number of works on the subject are available (see Reference section at the
end of this document).

a. "Quiet Wire" Observation It is important to be familiar with the
"signature" of the network when quiescent. In addition to the ports which
are active (ready to respond to incoming calls), services which broadcast
information are a concern worth noting. Depending upon the configuration and
point of connection, relative to hosts/router/firewall, different types of
traffic will be visible.

With the DMZ disconnected from the Internet router, and no clients
initiating traffic, a network monitor should see relatively little (if any)
traffic. While it is of little utility to save trace files in this mode, it
is very helpful to have notes of the type of broadcasts or session requests
seen, and which devices originate requests.

b. Control Testing As a control, it is useful to observe the trace of a
connection originating on internal clients leaving the firewall bound for an
external destination. The source address will vary depending upon the
particular firewall implementation, as some products cause all sessions to
appear to be originating from the firewall's own IP address. Products which
implement Network Address Translation (NAT) will, by definition, not allow
the client's true IP address to be seen on the DMZ.

Observing the behavior of inbound connections, both to provided services
and to services not permitted by the firewall, improves familiarity with how
normal production traffic on the network will appear. A worthwhile control
exercise is to observe a network monitor while running a port scan. Scanners
which run high-speed "linear" scans are particularly visible to network
monitor tools. Other more stealthy scanners insert delays between probes and
target ports in a semi-random order (depending upon the type of network
monitor being used, some scanning techniques are not visible).

c. "Live Wire" Observation On a busy network, a wide-open network
monitor can scroll information past at an incredible rate. The utility of
watching everything is limited, with a couple of exceptions. In the event
that something goes wrong at the ISP or backbone level, being able to note
the direction and type of traffic passing through (or to) the firewall is an
aid to diagnosis. Port scans, as mentioned earlier, may be observed as well.
In the event that the firewall's logging/alert system reports unusual
activity, it is very handy to have a network monitor available to observe,
decode, and possibly record it.

System Logging Verification - Is someone knocking at the door? It is
critical that the personnel supporting a firewall be familiar with the
outputs of the logging/alerts system. Configurability of logging/alert
facilities is a great help in reducing the amount of material to be reviewed
by the administrator; however, it is important to verify the kinds of events
logged by the system and the format in which they appear. Since logging
facilities vary considerably by product vendor, a meaningful universal list
of what ought to be logged is difficult to establish. The point is to
attempt violations of the security policy and to note how (or if) the
attempt is logged. Here are some tactics to try:

a. Log on to the FTP and Telnet proxies Try logging on to the FTP and
Telnet proxies (if the firewall has them and they are enabled) numerous
times using a non-existent user name. Note whether the failures are rolled
up into a single log entry or whether each attempt is logged separately. This
behavior reveals how brute-force password guessing attempts may show up in
the logs. Likewise note whether the IP address of the offender is listed.

Perform the same test from the DMZ, a foreign site, and the internal
network. On non-NAT systems attempt the test from a foreign network to the
IP address of the firewall located on the protected network.
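
A small script makes this test easy to repeat from each vantage point. The
sketch below (Python; the firewall address is a placeholder, and it assumes
an FTP proxy or service listening on the standard port 21) presents a
nonexistent user several times and prints the responses so they can be
matched against the log entries:

    import socket

    def failed_ftp_logins(host, attempts=5, user="nosuchuser"):
        """Generate repeated login failures to exercise the logging system."""
        for _ in range(attempts):
            s = socket.create_connection((host, 21), timeout=5)
            print(s.recv(1024))                    # banner, e.g. "220 ..."
            s.sendall(b"USER " + user.encode() + b"\r\n")
            print(s.recv(1024))                    # typically "331 ..."
            s.sendall(b"PASS wrong\r\n")
            print(s.recv(1024))                    # typically "530 ..."
            s.sendall(b"QUIT\r\n")
            s.close()

    # failed_ftp_logins("192.0.2.1")   # placeholder firewall address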

b. Test unsupported protocols Attempt to use a protocol not supported by
the security policy to connect to the firewall, possibly rlogin or TFTP.
Many systems reject such attempts without logging; however, on systems which
do log the attempts, it is a useful indicator. Perform the same test from
the DMZ, from a foreign site, and from the internal network.
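
This probe is also easy to script. The sketch below (Python; port 513 is
the rlogin service, and the address is a placeholder) distinguishes the
three behaviors you would then look for in the logs -- accepted, refused,
or silently dropped:

    import socket

    def probe(host, port=513, timeout=5):   # 513/tcp = rlogin
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "accepted -- service reachable, check against policy"
        except socket.timeout:
            return "no answer -- silently dropped or filtered"
        except ConnectionRefusedError:
            return "refused -- an RST came back"
        finally:
            s.close()

    # print(probe("192.0.2.1"))   # placeholder firewall address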

c. Test token authentication In configurations which utilize token
authenticators, it is helpful to note what happens when an invalid
challenge/response cycle occurs on a legitimate user account.

d. Test mail On configurations which have SMTP proxies, the following
dialogue should produce a number of log entries (lines beginning with a
numeric code are the system's responses; the exact wording varies by
implementation):

    telnet IP-OF-FIREWALL 25
    220 <SMTP banner>
    HELO foo.bar.com
    250 <greeting>
    WIZ
    500 <error message>
    DEBUG
    500 <error message, likely a further insulting one>
    QUIT

This dialogue is a blatant attempt to lie about one's mail address as well
as to exploit two hoary old back doors in the Sendmail SMTP software.
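
The same dialogue can be scripted so that it is trivial to repeat from the
DMZ, a foreign site, and the internal network. A minimal sketch (Python;
the address is a placeholder for IP-OF-FIREWALL):

    import socket

    def smtp_backdoor_test(host):
        """Replay the WIZ/DEBUG dialogue above and print each response."""
        s = socket.create_connection((host, 25), timeout=10)
        print(s.recv(1024).decode(errors="replace").strip())   # 220 banner
        for cmd in (b"HELO foo.bar.com", b"WIZ", b"DEBUG", b"QUIT"):
            s.sendall(cmd + b"\r\n")
            reply = s.recv(1024).decode(errors="replace").strip()
            print(cmd.decode(), "->", reply)
        s.close()

    # smtp_backdoor_test("192.0.2.1")   # substitute IP-OF-FIREWALL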

e. Automated tools Automated testing tools (see the Resources list at the
end of this section) will certainly exercise a firewall's logging
capability in the extreme, and
may generate such copious output as to swamp the firewall. If you have
access to such tools it is useful to note what shows up in the logs (or
possibly what the firewall does) during an all-out non-stealthy automated
probe.

Configuration Testing - What did we type for that parameter? The point
of configuration testing is to validate that the details of the
configuration were correctly entered when the firewall was installed. The
examples listed are by no means exhaustive, but should serve as a basis for
devising tests appropriate for the installed configuration.

Misplaced trust of systems due to misconfiguration is a common (and
dangerous) situation. In testing for misplaced trust, do not underestimate
the risk of trusting IP addresses located on the DMZ network. Trust of DMZ
IP addresses may increase susceptibility to address spoofing attacks, and
widens the zone of exposure should any other hosts/devices on the DMZ be
compromised.

a. SOCKS In a firewall configuration which uses SOCKS, the facility
incorporates trust of certain clients based on the IP address of origin.
SOCKS includes address-spoofing protection; however, misconfiguration can
result in external SOCKS clients being able to pass the firewall.

Using SOCKS-ified clients (such as Telnet and FTP) from a foreign
network, configure the SOCKS HOST parameter to point to the external IP
address of the firewall. Then attempt to access an IP address inside the
firewall using the client software. Referencing by IP is more reliable in
this situation than using a fully-qualified-domain-name. Attempt the same
test specifying the IP address of the firewall's interface on the protected
network as the SOCKS HOST. Repeat the test from the DMZ.
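
If you prefer to script the SOCKS checks, the third-party PySocks library
(not part of the toolset described in this guide; install with "pip install
PySocks") provides a SOCKS-capable socket. A sketch, with every address a
placeholder to be replaced with your own:

    import socks   # third-party PySocks library

    def socks_probe(socks_host, target_ip, target_port=23):
        """Try to reach target_ip through the firewall's SOCKS facility.
        Run from a foreign network, success against an internal address
        indicates a misconfigured (overly trusting) SOCKS setup."""
        s = socks.socksocket()
        s.set_proxy(socks.SOCKS5, socks_host, 1080)  # 1080: usual SOCKS port
        s.settimeout(10)
        try:
            s.connect((target_ip, target_port))
            return "passed the firewall -- investigate"
        except Exception as exc:
            return "blocked (%s)" % exc
        finally:
            s.close()

    # print(socks_probe("198.51.100.1", "192.0.2.10"))  # ext fw, int target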

b. HTTP Proxies In the same manner as the SOCKS test, configure a Web
browser to use the firewall's Web proxy (if one exists). Attempt to access a
Web server inside the protected network. Also attempt to access a Web server
outside the protected network. Success with either indicates a misconfigured
HTTP proxy. Attempt the same test specifying the firewall's IP address on
the protected network; also repeat the test from the DMZ.
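
A scripted version of the Web proxy check, using only the Python standard
library (the proxy and target addresses below are placeholders):

    import urllib.request

    def via_proxy(proxy_url, target_url, timeout=10):
        """Fetch target_url through the firewall's HTTP proxy. Run from
        outside, success against an internal server means trouble."""
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": proxy_url}))
        try:
            opener.open(target_url, timeout=timeout)
            return "proxy relayed the request -- check against policy"
        except Exception as exc:
            return "blocked (%s)" % exc

    # print(via_proxy("http://198.51.100.1:8080", "http://192.0.2.10/"))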

c. DNS Since numerous schemes for supporting DNS with varying degrees of
opacity are common, a full DNS test is highly site specific (and also policy
specific). Ensure that fully-qualified-domain-names (FQDNs) are resolvable
from the internal network. The ability to Telnet to an external site by IP
address, but not by FQDN, indicates a DNS configuration problem.
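
The FQDN-versus-IP symptom can be checked mechanically; a minimal sketch
(both the name and the address are placeholders to be replaced with a host
you know to be reachable):

    import socket

    def dns_check(fqdn="www.example.com", ip="198.51.100.80", port=80):
        """If the raw IP connects but the name will not resolve,
        suspect DNS configuration rather than routing or filtering."""
        try:
            print(fqdn, "resolves to", socket.gethostbyname(fqdn))
        except socket.gaierror:
            print(fqdn, "does not resolve -- likely a DNS problem")
        try:
            socket.create_connection((ip, port), timeout=5).close()
            print("direct connection to", ip, "works")
        except OSError:
            print("direct connection fails -- not (only) a DNS problem")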

d. User Services Though pedestrian, formally exercising all of the
end-user software in accessing external services is a reasonable validation
of which services the firewall actually passes to the outside (and
that users will be able to use the software with the expected results).
Ensuring that services provided (or denied) to external hosts function (or
fail) in accordance with policy is likewise helpful as a validation.

Third Party Validations In addition to self-testing, a range of security
assessment services are available from consulting and auditing firms. Third
party validation services offer a valuable cross-check of implemented
security policy, as well as providing additional assurance for customers,
stockholders or management. Options to consider when arranging for such
services include:

Scheduled/Periodic The idea of periodic testing is much like scheduled
tune-ups for a car: such tests make sure everything is running smoothly
and discover undesirable changes you might not otherwise have detected.

Unscheduled (surprise!) This type of test is designed to keep security
staff on their toes and is sometimes incorporated in broader company audits.
The idea is to see what happens when attacks take place unannounced and
without warning (just like in the real world).

Espionage/Social Engineering Anyone familiar with Ira Winkler's papers
and book on social engineering [Wink] will know that the best-laid plans of
network administrators and MIS professionals can often be undermined by poor
personnel practices and the natural human tendency to under-estimate the
duplicity of others. Tests in this realm should be considered if you have
any doubts about any of your people or suspect that any of your competitors
are developing a taste for espionage. However, such tests require extensive
and careful preparation to avoid damaging morale and even causing lawsuits
for entrapment and harassment.

Customized Testing You may have specialized needs that require custom
test scenarios. Try to select a third party that can demonstrate not only
the technical skills required to devise the test, but also a good
understanding of your business and the likely threats.

Resources - Bellcore: Pingware

- Dan Farmer: System Administrator's Tool for Analyzing Networks "SATAN"

- DoD's SPI Package (available only to DoD/DoE sites... you know who and
how)

- Internet Security Systems: Internet Scanner

7. Other Issues

This section covers several areas, such as the expanded security role of
firewalls, sources of additional information about commercial firewall
products, and security for home or small business office Internet
connections.

Firewall Expansion These days firewall vendors are busy adding new
security functions to their products. Detailed discussion of these functions
is beyond the scope of this guide, but they are worth mentioning as they may
affect your buying decision. There are four main functions:

A. Malicious code scanning -- checking internetwork traffic for viruses,
Trojan horses, and malicious applets written in ActiveX or Java.

B. Web surf monitoring -- recording which users travel to what Web sites
via the corporate Internet connection. This may be done to help determine or
enforce policies about use of company resources. Logs of Web surfing can
also be used to substantiate or refute claims of inappropriate activity.

C. Web surf filtering -- controlling employee access via the corporate
Internet connection to Web sites deemed inappropriate or unproductive.

D. Virtual private networks -- securing networks so that they can safely
communicate in private over the public Internet. This is done by strong
authentication of the connecting firewalls and encryption of all traffic
between them. Standards are emerging that will allow firewalls of different
brands to link together in VPNs. Without that, VPNs are only feasible
between firewalls of the same make.

When you are deciding between different commercial firewalls, the
ability to support one or more of the above may be important to you.

Product Information One of the problems you may encounter when you start
shopping for a firewall is a lack of standards in product literature. Of
course, it is quite reasonable for vendors to prepare marketing literature
that puts products in the best possible light and describes them in ways
that are appropriate to the company's design and sales philosophies.
However, if you look at other areas of hardware and software you will see
that some standards have emerged, both in terminology and the description of
features. For example, when a car brochure refers to brakes as being
anti-lock, or states there are dual air bags, we can expect these items to
fall within certain parameters (for example, we can expect the air bags to
be microprocessor-controlled supplemental restraint systems and not a pair
of toy balloons and a bicycle pump).

One of the first steps taken by the NCSA Firewall Product Developers'
Consortium after it was formed in July of 1995 was to back a solution to
this problem developed by Marcus Ranum and referred to as Firewall Product
Functional Summaries. The purpose of the firewall product functional summary
program is twofold:

- To provide a structured format in which vendors can describe the
distinguishing features and advantages of their products.

- To provide a structured format from which potential firewall customers
can compare and contrast the features and design principles of firewall
products.

In other words, the functional summaries provide product information to
potential firewall customers in a format that allows for meaningful
comparisons between products while still allowing for claims of product
uniqueness. The summary format used in the program was derived through an
open process including firewall vendors, agencies of the computer security
community, and the firewall customer community. This cooperative industry
effort, a voluntary program, was coordinated by Marcus Ranum of V-ONE, who
also copyrighted the summary format. Since 1995, NCSA has been collecting
Firewall Product Functional Summaries from members of the Firewall Product
Developers' Consortium and posting them on the NCSA Web site. Copies have
also been made available on the NCSA Firewall Buyer's Guide CD which can be
ordered from the NCSA bookstore. These Firewall Product Functional Summary
documents are well worth reading as part of your efforts to better
understand firewall technology since they give you added insight into the
various techniques and designs currently being deployed.

Firewalls for SOHO Users? Most firewall conferences and seminars focus
on the needs of large corporate users. But at such events we are frequently
asked the question "What about when I surf the Internet from home?" The
questioner is often a corporate IT professional who has occasion to work
from home, but many small businesses have similar concerns (the term SOHO is
widely used in Europe for the "Small Office Home Office" category of
computer user). The concerns of SOHO users arise from several different
Internet access scenarios.

To address the simplest scenario first, consider a dial-up or
dial-on-demand ISP account. When you need to send e-mail or surf the Web,
your PC's modem dials an Internet Service Provider and establishes a TCP/IP
connection that lasts until you log off or until the connection times out
from lack of use. Such connections almost always use something called
dynamic IP addressing, which means that the address of your computer on the
network (yes, your computer is part of the network for the duration of the
call) is randomly assigned from a group of numbers that belong to the ISP.
The effect is to make your computer a small blip on the Internet radar and
thus a relatively improbable target for attack.

For example, suppose someone broke into your machine during an Internet
session and stole an encrypted password file. If they cracked the file
off-line and wanted to get back to your machine to exploit the passwords
they would have a hard time finding it. Such an attack is not impossible to
conceive, but you have to consider the motivation to arrive at a risk factor
-- do you have anything on your computer that would make such an attack
worth the effort? For most home computers the answer is probably no. In the
case of a company laptop being used from home or a hotel room somewhere, the
answer might be closer to the affirmative, but the attack is still a
difficult one to mount and would become probable only if other types of
attack had been tried and thwarted.

8. References/Bibliography/Glossary

[Avol94] Frederick Avolio and Marcus Ranum. A Network Perimeter With
Secure Internet Access. In Internet Society Symposium on Network and
Distributed System Security, pages 109-119. Internet Society, February 2-4
1994.

[Bel89] Steven M. Bellovin. Security Problems in the TCP/IP Protocol
Suite. Computer Communication Review, 19 (2): 32-48, April 1989.

[Cerf93] Vinton Cerf. A National Information Infrastructure. Connexions,
June 1993.

[CERT94] Computer Emergency Response Team/Coordination Center. CA-94:
01, Ongoing Network Monitoring Attacks. Available from FIRST.ORG, file
pub/alerts/cert9401.txt, February 1994.

[Chap92] D. Brent Chapman. Network (In) Security Through IP Packet
Filtering. In USENIX Security Symposium III Proceedings, pages 63-76. USENIX
Association, September 14-16 1992.

[Chap95] D. Brent Chapman and Elizabeth D. Zwicky. Building Internet
Firewalls. O'Reilly & Associates, Sebastopol, CA, 1995.

[Ches94] William R. Cheswick and Steven M. Bellovin. Firewalls and
Internet Security. Addison-Wesley, Reading, MA, 1994.

[CIAC94a] Computer Incident Advisory Capability. Number e-07, unix
sendmail vulnerabilities update. Available from FIRST.ORG, file
pub/alerts/e-07.txt, January 1994.

[CIAC94b] Computer Incident Advisory Capability. Number e-09, network
monitoring attacks. Available from FIRST.ORG, file pub/alerts/e-09.txt,
February 1994.

[CIAC94c] Computer Incident Advisory Capability. Number e-14, wuarchive
ftpd trojan horse. Available from FIRST.ORG, file pub/alerts/e-14.txt,
February 1994.

[Com91a] Douglas E. Comer. Internetworking with TCP/IP: Principles,
Protocols, and Architecture. Prentice-Hall, Englewood Cliffs, NJ, 1991.

[Com91b] Douglas E. Comer and David L. Stevens. Internetworking with
TCP/IP: Design, Implementation, and Internals. Prentice-Hall, Englewood
Cliffs, NJ, 1991.

[Cur92] David Curry. UNIX System Security: A Guide for Users and System
Administrators. Addison-Wesley, Reading, MA, 1992.

[Farm93] Dan Farmer and Wietse Venema. Improving the security of your
site by breaking into it. Available from FTP.WIN.TUE.NL, file
/pub/security/admin-guide-to-cracking.101.Z, 1993.

[Ford94] Warwick Ford. Computer Communications Security. Prentice-Hall,
Englewood Cliffs, NJ, 1994.

[Garf92] Simpson Garfinkel and Gene Spafford. Practical UNIX Security.
O'Reilly and Associates, Inc., Sebastopol, CA, 1992.

[Haf91] Katie Hafner and John Markoff. Cyberpunk: Outlaws and Hackers on
the Computer Frontier. Simon and Schuster, New York, 1991.

[Hugh] Larry J. Hughes, Jr. Actually Useful Internet Security
Techniques. New Riders Publishing, 1995.

[Hunt92] Craig Hunt. TCP/IP Network Administration. O'Reilly and
Associates, Inc., Sebastopol, CA, 1992.

[NIST91a] NIST. Advanced Authentication Technology. CSL Bulletin,
National Institute of Standards and Technology, November 1991.

[NIST91b] NIST. Establishing a Computer Security Incident Response
Capability. Special Publication 800-3, National Institute of Standards and
Technology, January 1991.

[NIST93] NIST. Connecting to the Internet: Security Considerations. CSL
Bulletin, National Institute of Standards and Technology, July 1993.

[NIST94a] NIST. Guideline for the use of Advanced Authentication
Technology Alternatives. Federal Information Processing Standard 190,
National Institute of Standards and Technology, September 1994.

[NIST94b] NIST. Reducing the Risk of Internet Connection and Use. CSL
Bulletin, National Institute of Standards and Technology, May 1994.

[NIST94c] NIST. Security in Open Systems. Special Publication 800-7,
National Institute of Standards and Technology, September 1994.

[Oppl97] Rolf Oppliger, Internet Security: Firewalls and Beyond,
Communications of the ACM, May 1997, Vol 40. No. 5, page 92.

[Ran93] Marcus Ranum. Thinking About Firewalls. In SANS-II Conference,
April 1993.

[RFC1244] Paul Holbrook and Joyce Reynolds. RFC 1244: Site Security
Handbook. Prepared for the Internet Engineering Task Force, 1991.

[Stall] William Stallings, Peter Stephenson, and others. Implementing
Internet Security. New Riders Publishing, 1995.

[Siya] Karanjit Siyan and Chris Hare. Internet Firewalls and Network
Security. New Riders Publishing, 1995.

[Wash] Washington Technology, January 1995.

[Wack] John P. Wack and Lisa J. Carnahan, Keeping Your Site Comfortably
Secure: An Introduction to Internet Firewalls, NIST Special Publication
800-10, 1995.

[Wink] Ira Winkler. Corporate Espionage: What It Is, Why It Is Happening
in Your Company, What You Must Do About It. Prima Publishing, 1997.