Maximum Security:

A Hacker's Guide to Protecting Your Internet Site and Network



5

Is Security a Futile Endeavor?

Since Paul Baran first put pen to paper, Internet security has been a concern. Over the years, security by obscurity has become the prevailing attitude of the computing community.

This principle has not only been proven faulty, but it also goes against the original concepts of how security could evolve through discussion and open education. Even at the very birth of the Internet, open discussion on standards and methodology was strongly suggested. It was felt that this open discussion could foster important advances in the technology. Baran was well aware of this and articulated the principle concisely when, in The Paradox of the Secrecy About Secrecy: The Assumption of A Clear Dichotomy Between Classified and Unclassified Subject Matter, he wrote:

Without the freedom to expose the system proposal to widespread scrutiny by clever minds of diverse interests, is to increase the risk that significant points of potential weakness have been overlooked. A frank and open discussion here is to our advantage.

Security Through Obscurity

Security through obscurity has been defined and described in many different ways. One rather whimsical description, authored by a student named Jeff Breidenbach in his lively and engaging paper, Network Security Throughout the Ages, appears here:

The Net had a brilliant strategy called "Security through Obscurity." Don't let anyone fool you into thinking that this was done on purpose. The software has grown into such a tangled mess that nobody really knows how to use it. Befuddled engineers fervently hoped potential meddlers would be just as intimidated by the technical details as they were themselves.

Mr. Breidenbach might well be correct about this. Nevertheless, the standardized definition and description of security through obscurity can be obtained from any archive of the Jargon File, available at thousands of locations on the Internet. That definition is this:

alt. 'security by obscurity' n. A term applied by hackers to most OS vendors' favorite way of coping with security holes--namely, ignoring them, documenting neither any known holes nor the underlying security algorithms, trusting that nobody will find out about them and that people who do find out about them won't exploit them.

Regardless of which security philosophy you believe, three questions remain constant:

Why is the Internet insecure?
Does the Internet really need to be secure?
Can the Internet be secure?

Why Is the Internet Insecure?

The Internet is insecure for a variety of reasons, each of which I will discuss here in detail. Those factors include

Lack of education
The Internet's design
Proprietarism
The trickling down of technology
Human nature

Each of these factors contributes in some degree to the Internet's current lack of security.

Lack of Education

Do you believe that what you don't know can't hurt you? If you are charged with the responsibility of running an Internet server, you had better not believe it. Education is the single most important aspect of security, and one that has been sorely wanting.

I am not suggesting that a lack of education exists within institutions of higher learning or those organizations that perform security-related tasks. Rather, I am suggesting that security education rarely extends beyond those great bastions of computer-security science.

The Computer Emergency Response Team (CERT) is probably the Internet's best-known security organization. CERT generates security advisories and distributes them throughout the Internet community. These advisories address the latest known security vulnerabilities in a wide range of operating systems. CERT thus performs an extremely valuable service to the Internet. The CERT Coordination Center, established by ARPA in 1988, provides a centralized point for the reporting of and proactive response to all major security incidents. Since 1988, CERT has grown dramatically, and CERT centers have been established at various points across the globe.


Cross Reference: You can contact CERT at its WWW page (http://www.cert.org). There resides a database of vulnerabilities, various research papers (including extensive documentation on disaster survivability), and links to other important security resources.

CERT's 1995 annual report shows some very enlightening statistics. During 1995, CERT was informed of some 12,000 sites that had experienced some form of network-security violation. Of these, there were at least 732 known break-ins and an equal number of probes or other instances of suspicious activity.


Cross Reference: You can access CERT's 1995 annual report at http://www.cert.org/cert.report.95.html.

Think about that: some 12,000 affected sites, but only 732 known break-ins. This is so even though the GAO report examined earlier suggested that Defense computers alone are attacked as many as 250,000 times each year, and Dan Farmer's security survey reported that over 60 percent of all critical sites surveyed were vulnerable to some technique of network security breach. How can this be? Why aren't more incidents reported to CERT?


Cross Reference: Check out Dan Farmer's security survey at http://www.trouble.org/survey.

It might be because the majority of the Internet's servers are now maintained by individuals who have less-than-adequate security education. Many system administrators have never even heard of CERT. True, there are many security resources available on the Internet (many that point to CERT, in fact), but these may initially appear intimidating and overwhelming to those new to security. Moreover, many of the resources provide links to dated information.

An example is RFC 1244, the Site Security Handbook. At the time RFC 1244 was written, it comprised a collection of state-of-the-art information on security. As expressed in that document's editor's note: This FYI RFC is a first attempt at providing Internet users guidance on how to deal with security issues in the Internet. As such, this document is necessarily incomplete. There are some clear shortfalls; for example, this document focuses mostly on resources available in the United States. In the spirit of the Internet's `Request for Comments' series of notes, we encourage feedback from users of this handbook. In particular, those who utilize this document to craft their own policies and procedures.

This handbook is meant to be a starting place for further research and should be viewed as a useful resource, but not the final authority. Different organizations and jurisdictions will have different resources and rules. Talk to your local organizations, consult an informed lawyer, or consult with local and national law enforcement. These groups can help fill in the gaps that this document cannot hope to cover.

From 1991 until now, the Site Security Handbook has been an excellent place to start. Nevertheless, as Internet technology grows in leaps and bounds, such texts become rapidly outdated. Therefore, the new system administrator must keep up with the security technology that follows each such evolution. To do so is a difficult task.


Cross Reference: RFC 1244 is still a good study paper for a user new to security. It is available at many places on the Internet. One reliable server is at http://www.net.ohio-state.edu/hypertext/rfc1244/toc.html.

The Genesis of an Advisory

Advisories comprise the better part of time-based security information. When these come out, they are immediately very useful because they usually relate to an operating system or popular application now widely in use. As time goes on, however, such advisories become less important because people move on to new products. In this process, vendors are constantly updating their systems, eliminating holes along the way. Thus, an advisory is valuable for a set period of time (although, to be fair, this information may stay valuable for extended periods because some people insist on using older software and hardware, often for financial reasons).

An advisory begins with discovery. Someone, whether hacker, cracker, administrator, or user, discovers a hole. That hole is verified, and the resulting data is forwarded to security organizations, vendors, or other parties deemed suitable. This is the usual genesis of an advisory (a process explained in Chapter 2, "How This Book Will Help You"). Nevertheless, there is another way that holes are discovered.

Often, academic researchers discover a hole. An example, which you will review later, is the series of holes found within the Java programming language. These holes were primarily revealed--at least at first--by those at Princeton University's computer science labs. When such a hole is discovered, it is documented in excruciating detail. That is, researchers often author multipage documents detailing the hole, the reasons for it, and possible remedies.


Cross Reference: Java is a compiled language used to create interactive applications for use on the World Wide Web. The language was created by efforts at Sun Microsystems. It vaguely resembles C++. For more information about Java, visit the Java home page at http://java.sun.com/.

This information gets digested by other sources into an advisory, which is often no more than 100 lines. By the time the average, semi-security-literate user lays his or her hands on this information, it is limited and watered down.

Thus, redundancy of data on the Internet has its limitations. People continually rehash these security documents into different renditions, often highlighting different aspects of the same paper. Such digested revisions are available all over the Net. This helps distribute the information, true, but leaves serious researchers hungry. They must hunt, and that hunt can be a struggle. For example, there is no centralized place to acquire all such papers.

Equally, as I have explained, end-user documentation can be varied. Although there should be, there is no 12-volume set (with papers by Farmer, Venema, Bellovin, Spafford, Morris, Ranum, Klaus, Muffett, and so on) about Internet security that you can acquire at a local library or bookstore. More often, the average bookstore contains brief treatments of the subject (like this book, I suppose).

Couple these factors with the mind-set of the average system administrator. A human being only has so much time. Therefore, these individuals absorb what they can on the fly, applying methods learned through whatever sources they encounter.

The Dissemination of Information

For all these reasons, education in security is wanting. In the future, specialists must address this problem in a more practical fashion. There must be some suitable means of networking this information. To be fair, some organizations have attempted to do so, but many are forced to charge high prices for their hard-earned databases. The National Computer Security Association (NCSA) is one such organization. Its RECON division gathers some 70MB per day of hot and heavy security information. Its database is searchable and is available for a price, but that price is substantial.


Cross Reference: To learn more about NCSA RECON, examine its FAQ. NCSA's database offers advanced searching capabilities, and the information held there is definitely up-to-date. In short, it is a magnificent service. The FAQ is at http://www.isrecon.ncsa.com/public/faq/isrfaq.htm. You can also get a general description of what the service is by visiting http://www.isrecon.ncsa.com/docz/Brochure_Pages/effect.htm.

Many organizations do offer superb training in security and firewall technology. The price for such training varies, depending on the nature of the course, the individuals giving it, and so on. One good source for training is Lucent Technologies, which offers many courses on security.


Cross Reference: Lucent Technologies' WWW site can be found at http://www.attsa.com/.


NOTE: Appendix A, "How to Get More Information," contains a massive listing of security training resources as well as general information about where to acquire good security information.

Despite the availability of such training, today's average company is without a clue. In a captivating report (Why Safeguard Information?) from Åbo Akademi University in Finland, researcher Thomas Finne estimated that only 15 percent of all Finnish companies had an individual employed expressly for the purpose of information security. The researcher wrote:

The result of our investigation showed that the situation had got even worse; this is very alarming. Pesonen investigated the security in Finnish companies by sending out questionnaires to 453 companies with over 70 employees. The investigation showed that those made responsible for information security in the companies spent 14.5 percent of their working time on information security. In an investigation performed in the UK over 80 percent of the respondents claimed to have a department or individual responsible for information technology (IT) security.

The Brits made some extraordinary claims! "Of course we have an information security department. Doesn't everyone?" In reality, the percentage of companies that actually do is likely far lower. One survey conducted by the Computer Security Institute found that better than 50 percent of all survey participants didn't even have written security policies and procedures.

The Problems with PC-Based Operating Systems

It should be noted that in America, the increase in servers being maintained by those new to the Internet poses an additional education problem. Many of these individuals have used PC-based systems for the whole of their careers. PC-based operating systems and hardware were never designed for secure operation (although that is all about to change). Traditionally, PC users have had less-than-close contact with their vendors, except on issues relating to hardware and software configuration problems. This is not their fault. The PC community is market based and market driven. Vendors never sold the concept of security; they sold the concept of user friendliness, convenience, and standardization of applications. In these matters, vendors have excelled. The functionality of some PC-based applications is extraordinary.

Nonetheless, programmers are often brilliant in their coding and design of end-user applications but have poor security knowledge. Or, they may have some security knowledge but be unable to implement it because they cannot anticipate certain variables--the innumerable differences and subtleties involved with other applications that run on the same machine. These will undoubtedly be designed by different individuals and vendors, unknown to the programmer. It is not unusual for the combination of two third-party products to result in the partial compromise of a system's security. Similarly, applications intended to provide security can, when run on PC platforms, deteriorate or otherwise be rendered less secure. The typical example is the famous encryption utility Pretty Good Privacy (PGP) when used in the Microsoft Windows environment.

PGP

PGP operates by applying complex algorithms. These operations result in very high-level encryption. In some cases, if the user so specifies, PGP can provide military-level encryption to a home user. The system utilizes the public key/private key pair scenario. In this scenario, the user's secret key is protected by a passphrase, or secret code, which must be provided before a message can be signed or decrypted. The length of this passphrase may vary. Some people use the entire first line of a poem or literary text. Others use lines in a song or other phrases that they will not easily forget. In any event, this passphrase must be kept completely secret. If it is exposed, the encrypted data can be decrypted, altered, or otherwise accessed by unauthorized individuals.
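For purposes of illustration only, a typical exchange with a command-line version of PGP might look something like this (the file name and user ID are placeholders, and exact options vary between PGP releases):

pgp -e report.txt "Jane Doe"
pgp report.pgp

The first command encrypts report.txt with Jane Doe's public key, producing report.pgp. The second command, run by the recipient, decrypts that file; before doing so, PGP prompts for the recipient's secret passphrase. It is that passphrase, typed at the keyboard, that must remain absolutely secret.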

In its native state, compiled for MS-DOS, PGP operates in a command-line interface or from a DOS prompt. This in itself presents no security issue. The problem is that many people find this inconvenient and therefore use a front-end, or a Microsoft Windows-based application through which they access the PGP routines. When the user makes use of such a front-end, the passphrase gets written into the Windows swap file. If that swap file is permanent, the passphrase can later be retrieved by searching the swap file. I've tried this on several occasions with differently configured machines. With a 20MB swap file on an IBM-compatible DX66 sporting 8-16MB of RAM, this is a formidable task that will likely freeze the machine. This, too, depends on the utility you are using to do the search. Not surprisingly, the most effective utility for performing such a search is GREP.


NOTE: GREP is a utility that comes with many C language packages. It also comes stock on any UNIX distribution. GREP works in a way quite similar to the FIND.EXE command in DOS. Its purpose is to search specified files for a particular string of text. For example, to find the word SEARCH in all files with a *.C extension, you would issue the following command:

GREP SEARCH *.C

There are free versions of GREP available on the Internet for a variety of operating systems, including but not limited to UNIX, DOS, OS/2, and 32-bit Microsoft Windows environments.


In any event, the difficulty factor drops drastically when you use a machine with a processor faster than 100MHz and 32MB or more of RAM.
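If you want to see the problem for yourself, a crude test (the swap file name and path here are only examples; they differ between Windows versions and configurations) is to exit to a DOS prompt and search the swap file for a fragment of your own passphrase:

grep "fragment of my passphrase" C:\WINDOWS\WIN386.SWP

If GREP reports a match, your passphrase has been written to disk in the clear, and anyone who can read that file can recover it at leisure.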

My point is this: It is by no fault of the programmer of PGP that the passphrase gets caught in the swap. PGP is not flawed, nor are those platforms that use swapped memory. Nevertheless, platforms that use swapped memory are not secure and probably never will be.


Cross Reference: For more information about PGP, visit http://web.mit.edu/network/pgp.html. This is the MIT PGP distribution site for U.S. residents. PGP provides encryption powerful enough that certain versions are not available for export. Exporting such versions is a crime. The referenced site has much valuable information about PGP, including a FAQ, a discussion of file formats, pointers to books, and of course, the free distribution of the PGP software.

Thus, even when designing security products, programmers are often faced with unforeseen problems over which they can exert no control.


TIP: Techniques of secure programming (methods of programming that enhance security on a given platform) are becoming more popular. These assist the programmer in developing applications that at least won't weaken network security. Chapter 30, "Language, Extensions, and Security," addresses some secure programming techniques as well as problems generally associated with programming and security.

The Internet's Design

When engineers were set to the task of creating an open, fluid, and accessible Internet, their enthusiasm and craft were, alas, too potent. In this respect, the Internet is the most remarkable creation ever erected by humankind. There are dozens of ways to get a job done on the Internet; there are dozens of protocols with which to do it.

Are you having trouble retrieving a file via FTP? Can you retrieve it by electronic mail? What about over HTTP with a browser? Or maybe a Telnet-based BBS? How about Gopher? NFS? SMB? The list goes on.

Heterogeneous networking was once a dream. It is now a confusing, tangled mesh of internets around the globe. Each of the protocols mentioned forms one aspect of the modern Internet. Each also represents a little network of its own. Any machine running modern implementations of TCP/IP can utilize all of them and more. Security experts have for years been running back and forth before a dam of information and protocols, plugging the holes with their fingers. Crackers, meanwhile, come armed with icepicks, testing the dam here, there, and everywhere.

Part of the problem is in the Internet's basic design. Traditionally, most services on the Internet rely on the client/server model. The task before a cracker, therefore, is a limited one: Go to the heart of the service and crack that server.

I do not see that situation changing in the near future. Today, client/server programming is the most sought-after skill. The client/server model works effectively, and there is no viable replacement at this point.

There are other problems associated with the Internet's design, specifically related to the UNIX platform. One is access control and privileges. This is covered in detail in Chapter 17, "UNIX: The Big Kahuna," but I want to mention it here.

In UNIX, every process more or less has some level of privilege on the system. That is, these processes must have, at minimum, privilege to access the files they are to work on and the directories into which those files are deposited. In most cases, common processes and programs are already so configured by default at the time of the software's shipment. Beyond this, however, a system administrator may determine specific privilege schemes, depending on the needs of the situation. The system administrator is offered a wide variety of options in this regard. In short, system administrators are capable of restricting access to one, five, or 100 people. In addition, those people (or groups of people) can also be limited to certain types of access, such as read, write, execute, and so forth.

In addition to being complex (and therefore requiring experience on the part of the administrator), this system carries certain inherent security risks. One is that access privileges granted to a process or a user may allow more access than was originally intended. For example, a utility that requires any form of root access (the highest level of privilege) should be viewed with caution. If someone finds a flaw within that program and can effectively exploit it, that person will gain a high level of access. Note that strong access-control features have been integrated into the Windows NT operating system, so the phenomenon is not exclusive to UNIX. Novell NetWare also offers some very strong access-control features.
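As a brief illustration (the file, user, and group names here are hypothetical), an administrator might restrict a sensitive file this way:

chown root payroll.dat
chgrp accounting payroll.dat
chmod 640 payroll.dat

The chmod 640 setting gives the owner (root) read and write access, members of the accounting group read-only access, and everyone else no access at all; ls -l would display the permissions as -rw-r-----. A mistake at this level--an overly permissive mode, or a utility running with root privilege that need not--is exactly the kind of configuration error crackers hunt for.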

All these factors seriously influence the state of security on the Internet. There are clearly hundreds of little things to know about it. This extends into heterogeneous networking as well. A good system administrator should ideally have knowledge of at least three platforms. This brings us to another consideration: Because the Internet's design is so complex, the people who address its security charge substantial prices for their services. Thus, the complexity of the Internet also influences more concrete considerations.

There are other aspects of Internet design and composition that authors often cite as sources of insecurity. For example, the Net allows a certain amount of anonymity; this issue has good and bad aspects. The good aspect is that individuals who need to communicate anonymously can do so.

Anonymity on the Net

There are plenty of legitimate reasons for anonymous communication. One is that people living in totalitarian states can smuggle out news about human rights violations. (At least, this reason is regularly tossed around by media people. It is in vogue to say such things, even though the percentage of people using the Internet for this noble activity is incredibly small.) Nevertheless, there is no need to provide excuses for why anonymity should exist on the Internet. We do not need to justify it. After all, there is no reason why Americans should be forbidden from doing something on a public network that they can lawfully do at any other place. If human beings want to communicate anonymously, that is their right.

Most people use remailers to communicate anonymously. These are servers configured to accept and forward mail messages. During that process, the header and originating address are stripped from the message, thereby concealing its author and his or her location. In their place, the address of the anonymous remailer is inserted.
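As a purely fictitious illustration, a message that leaves your machine with a header line such as

From: jsmith@corp.example.com

might be delivered showing only

From: anonymous@remailer.example.net

with the original From: line, the originating host, and the rest of the identifying header trail stripped along the way.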


Cross Reference: To learn more about anonymous remailers, check out the FAQ at http://www.well.com/user/abacard/remail.html. This FAQ provides many useful links to other sites dealing with anonymous remailers.

Anonymous remailers (hereafter anon remailers) have been the subject of controversy in the past. Many people, particularly members of the establishment, feel that anon remailers undermine the security of the Internet. Some portray the situation as being darker than it really is:

By far the greatest threat to the commercial, economic and political viability of the Global Information Infrastructure will come from information terrorists... The introduction of Anonymous Re-mailers into the Internet has altered the capacity to balance attack and counter-attack, or crime and punishment.1


1Paul A. Strassmann, U.S. Military Academy, West Point; Senior Advisor, SAIC and William Marlow, Senior Vice President, Science Applications International Corporation (SAIC). January 28-30, 1996. Symposium on the Global Information Infrastructure: Information, Policy & International Infrastructure.

I should explain that the preceding document was delivered by individuals associated with the intelligence community. Intelligence community officials would naturally be opposed to anonymity, for it represents one threat to effective, domestic intelligence-gathering procedures. That is a given. Nevertheless, one occasionally sees even journalists making similar statements, such as this one by Walter S. Mossberg:

In many parts of the digital domain, you don't have to use your real name. It's often impossible to figure out the identity of a person making political claims...When these forums operate under the cloak of anonymity, it's no different from printing a newspaper in which the bylines are admittedly fake, and the letters to the editor are untraceable.

This is an interesting statement. For many years, the U.S. Supreme Court has been unwilling to require that political statements be accompanied by the identity of the author. This refusal is to ensure that free speech is not silenced. In early American history, pamphlets were distributed in this manner. Naturally, if everyone had to sign their name to such documents, potential protesters would be driven into the shadows. This is inconsistent with the concepts on which the country was founded.

To date, there has been no convincing argument for why anon remailers should not exist. Nevertheless, the subject remains engaging. One amusing exchange occurred during a hearing in Pennsylvania on the constitutionality of the Communications Decency Act, an act brought by forces in Congress that were vehemently opposed to pornographic images being placed on the Internet. The hearing occurred on March 22, 1996, before the Honorable Dolores K. Sloviter, Chief Judge, United States Court of Appeals for the Third Circuit. The case was American Civil Liberties Union, et al (plaintiffs) v. Janet Reno, the Attorney General of the United States. The discussion went as follows:

Q: Could you explain for the Court what Anonymous Remailers are?

A:
Yes, Anonymous Remailers and their -- and a related service called Pseudonymity Servers are computer services that privatize your identity in cyberspace. They allow individuals to, for example, post content for example to a Usenet News group or to send an E-mail without knowing the individual's true identity.

The difference between an anonymous remailer and a pseudonymity server is very important because an anonymous remailer provides what we might consider to be true anonymity to the individual because there would be no way to know on separate instances who the person was who was making the post or sending the e-mail.

But with a pseudonymity server, an individual can have what we consider to be a persistent presence in cyberspace, so you can have a pseudonym attached to your postings or your e-mails, but your true identity is not revealed. And these mechanisms allow people to communicate in cyberspace without revealing their true identities.

Q: I just have one question, Professor Hoffman, on this topic. You have not done any study or survey to sample the quantity or the amount of anonymous remailing on the Internet, correct?

A:
That's correct. I think by definition it's a very difficult problem to study because these are people who wish to remain anonymous and the people who provide these services wish to remain anonymous.

Indeed, the court was clearly faced with a catch-22. In any case, whatever one's position might be on anonymous remailers, they appear to be a permanent feature of the Internet. Programmers have developed remailer applications to run on almost any operating system, allowing the little guy to start a remailer with his PC.


Cross Reference: If you have more interest in anon remailers, visit http://www.cs.berkeley.edu/~raph/remailer-list.html. This site contains extensive information on these programs, as well as links to personal anon remailing packages and other software tools for use in implementing an anonymous remailer.

In the end, e-mail anonymity on the Internet has a negligible effect on real issues of Internet security. The days when one could exploit a hole by sending a simple e-mail message are long gone. Those making protracted arguments against anonymous e-mail are either nosy or outraged that someone can implement a procedure that they cannot. If e-mail anonymity is an issue at all, it is for those in national security. I readily admit that spies could benefit from anonymous remailers. In most other cases, however, the argument expends good energy that could be better spent elsewhere.

Proprietarism

Yes, another ism. Before I start ranting, I want to define this term as it applies here. Proprietarism is a practice undertaken by commercial vendors in which they attempt to inject into the Internet various forms of proprietary design. By doing so, they hope to create profits in an environment that has previously been free from commercial reign. It is the modern equivalent of colonialism plus capitalism, brought to the computer age and the Internet. It interferes with the Internet's security structure and defeats the Internet's capability to serve all individuals equally and effectively.

ActiveX

A good example of proprietarism in action is Microsoft Corporation's ActiveX technology.


Cross Reference: Those users unfamiliar with ActiveX technology should visit http://www.microsoft.com/activex/. Users who already have some experience with ActiveX should go directly to the Microsoft page that addresses the security features: http://www.microsoft.com/security/.

To understand the impact of ActiveX, a brief look at HTML would be instructive. HTML was an incredible breakthrough in Internet technology. Imagine the excitement of the researchers when they first tested it! It was (and still is) a standard by which any user, on any machine, anywhere in the world could view a document, and that document would look pretty much the same to any other user, similarly (or not similarly) situated. What an extraordinary breakthrough. It would release us forever from proprietary designs. Whether you used a Mac, an Alpha, an Amiga, a SPARC, an IBM compatible, or a tire hub (TRS-80, maybe?), you were in. You could see all the wonderful information available on the Net, just like the next guy. Not anymore.

ActiveX technology is a new method of presenting Web pages. It is designed to interface with Microsoft's Internet Explorer. If you don't have it, forget it. Most WWW pages designed with it will be nonfunctional for you either in whole or in part.

That situation may change, because Microsoft is pushing for ActiveX extensions to be included within the HTML standardization process. Nevertheless, such extensions (including scripting languages or even compiled languages) do alter the state of Internet security in a wide and encompassing way.

First, they introduce new and untried technologies that are proprietary in nature. Because they are proprietary, the technologies cannot be closely examined by the security community. Moreover, these are not cross-platform and therefore impose limitations on the Net, as opposed to heterogeneous solutions. To examine the problem firsthand, you may want to visit a page established by Kathleen A. Jackson, Team Leader, Division Security Office, Computing, Information, and Communications Division at the Los Alamos National Laboratory. Jackson points to key problems in ActiveX. On her WWW page, she writes:

...The second big problem with ActiveX is security. A program that downloads can do anything the programmer wants. It can reformat your hard drive or shut down your computer...

This issue is more extensively covered in a paper delivered by Simson Garfinkel at HotWired. When Microsoft was alerted to the problem, the solution was to recruit a company that creates digital signatures for ActiveX controls. Each control is supposed to be digitally signed by its programmer or creator. The company responsible for this digital signature scheme has every software publisher sign a software publisher's pledge, which is an agreement not to sign any software that contains malicious code. If a user surfs a page that contains an unsigned control, Microsoft's Internet Explorer puts up a warning message box that asks whether you want to accept the unsigned control.


Cross Reference: Find the paper delivered by Simson Garfinkel at HotWired at http://www.packet.com/packet/garfinkel/.

You cannot imagine how absurd this seems to security professionals. What is to prevent a software publisher from submitting malicious code, signed or unsigned, on any given Web site? If it is signed, does that guarantee that the control is safe? The Internet at large is therefore resigned to taking the software author or publisher at his or her word. This is impractical and unrealistic. And, although Microsoft and the company responsible for the signing initiative will readily offer assurances, what evidence is there that such signatures cannot be forged? More importantly, how many small-time programmers will bother to sign their controls? And lastly, how many users will refuse to accept an unsigned control? Most users confronted with the warning box have no idea what it means. All it represents to them is an obstruction that is preventing them from getting to a cool Web page.

There are now all manner of proprietary programs out there inhabiting the Internet. Few have been truly tested for security. I understand that this will become more prevalent and, to Microsoft's credit, ActiveX technology creates the most stunning WWW pages available on the Net. These pages have increased functionality, including drop-down boxes, menus, and other features that make surfing the Web a pleasure. Nevertheless, serious security studies need to be conducted before these technologies foster an entirely new frontier for those peddling malicious code, viruses, and code designed to circumvent security.


Cross Reference: To learn more about the HTML standardization process, visit the site of the World Wide Web Consortium (http://www.w3.org). If you already know a bit about the subject but want specifics about what types of HTML tags and extensions are supported, you should read W3C's activity statement on this issue (http://www.w3.org/pub/WWW/MarkUp/Activity). One interesting area of development is W3C's work on support for the disabled.

Proprietarism is a dangerous force on the Internet, and it's gaining ground quickly. To compound this problem, some of the proprietary products are excellent. It is therefore perfectly natural for users to gravitate toward these applications. Users are most concerned with functionality, not security. Therefore, the onus is on vendors, and this is a problem. If vendors ignore security hazards, there is nothing anyone can do. One cannot, for example, forbid insecure products from being sold on the market. That would be an unreasonable restraint of interstate commerce and grounds for an antitrust claim. Vendors certainly have every right to release whatever software they like, secure or not. At present, therefore, there is no solution to this problem.

Extensions, languages, or tags that probably warrant examination include

ActiveX
JavaScript
VBScript

JavaScript is owned by Netscape, and VBScript and ActiveX are owned by Microsoft. These languages are the weapons of the war between these two giants. I doubt that either company objectively realizes that there's a need for both technologies. For example, Netscape cannot shake Microsoft's hold on the desktop market. Equally, Microsoft cannot supply the UNIX world with products. The Internet would probably benefit greatly if these two titans buried the hatchet in something besides each other.

The Trickling Down of Technology

As discussed earlier, there is the problem of high-level technology trickling down from military, scientific, and security sources. Today, the average cracker has tools at his or her disposal that most security organizations use in their work. Moreover, the machines on which crackers use these tools are extremely powerful, therefore allowing faster and more efficient cracking.

Government agencies often supply links to advanced security tools. At these sites, the tools are often free. They number in the hundreds and encompass nearly every aspect of security. In addition to these tools, government and university sites also provide very technical information regarding security. For crackers who know how to mine such information, these resources are invaluable. Some key sites are listed in Table 5.1.

Table 5.1. Some major security sites for information and tools.

Site Address
Purdue University http://www.cs.purdue.edu//coast/archive/
Raptor Systems http://www.raptor.com/library/library.html
The Risks Forum http://catless.ncl.ac.uk/Risks
FIRST http://www.first.org/
DEFCON http://www.defcon.org/

The level of technical information at such sites is high. This is in contrast to many fringe sites that provide information of little practical value to the cracker. But not all fringe sites are so benign. Crackers have become organized, and they maintain a wide variety of servers on the Internet. These are typically established using free operating systems such as Linux or FreeBSD. Many such sites end up establishing a permanent wire to the Net. Others are more unreliable and may appear at different times via dynamic IP addresses. I should make it clear that not all fringe sites are cracking sites. Many are legitimate hacking stops that provide information freely to the Internet community as a service of sorts. In either case, both hackers and crackers have been known to create excellent Web sites with voluminous security information.

The majority of cracking and hacking sites are geared toward UNIX and IBM-compatible platforms. There is a noticeable absence of quality information for Macintosh users. In any event, in-depth security information is available on the Internet for any interested party to view.

So, the information is trafficked. There is no solution to this problem, and there shouldn't be. It would be unfair to halt the education of many earnest, responsible individuals for the malicious acts of a few. So advanced security information and tools will remain available.

Human Nature

We have arrived at the final (and probably most influential) force at work in weakening Internet security: human nature. Humans are, by nature, a lazy breed. To most users, the subject of Internet security is boring and tedious. They assume that the security of the Internet will be taken care of by experts.

To some degree, there is truth to this. If the average user's machine or network is compromised, who should care? They are the only ones who can suffer (as long as they are not connected to a network other than their own). The problem is, most will be connected to some other network. The Internet is one enterprise that truly relies on the strength of its weakest link. I have seen crackers work feverishly on a single machine when that machine was not their ultimate objective. Perhaps the machine had some trust relationship with another machine that was their ultimate objective. To crack a given region of cyberspace, crackers may often have to take alternate or unusual routes. If one workstation on the network is vulnerable, they are all potentially vulnerable as long as a relationship of trust exists.
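On UNIX systems, for instance, such trust is often expressed in something as small as an .rhosts file (the host and user names below are hypothetical):

trusted.corp.example.com jsmith

That single line permits jsmith on trusted.corp.example.com to use rlogin or rsh on the local machine without supplying a password. Crack the trusted workstation, and the machine that trusts it falls soon after.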

Also, you must think in terms of the smaller businesses because these will be the great majority. These businesses may not be able to withstand disaster in the same way that larger firms can. If you run a small business, when was the last time you performed a complete backup of all information on all your drives? Do you have a disaster-recovery plan? Many companies do not. This is an important point. I often get calls from companies that are about to establish permanent connectivity. Most of them are unprepared for emergencies.

There are two final aspects of human nature that influence the evolution of security on the Internet. Fear is one. Most companies are fearful of communicating with outsiders regarding security. For example, the majority of companies will not tell anyone if their security has been breached. When a Web site is cracked, it is front-page news; this cannot be avoided. When a system is cracked in some other way (with a different point of entry), press coverage (or any exposure) can usually be avoided. So, a company may simply move on, denying any incident, and secure its network as best it can. This deprives the security community of much-needed statistics and data.

The last human factor here is curiosity. Curiosity is a powerful facet of human nature that even the youngest child can understand. One of the most satisfying human experiences is discovery. Investigation and discovery are the things that life is really made of. We learn from the moment we are born until the moment we die, and along that road, every shred of information is useful. Crackers are not so hard to understand. It comes down to basics: Why is this door locked? Can I open it? As long as this aspect of human experience remains, the Internet may never be entirely secure. Oh, it will ultimately be secure enough for credit-card transactions and the like, but someone will always be there to crack it.

Does the Internet Really Need to Be Secure?

Yes. The Internet does need to be secure, and not simply for reasons of national security. Today, it is a matter of personal security. As more financial institutions gravitate to the Internet, America's financial future will depend on security. Many users may not be aware of how many financial institutions offer online banking. One year ago, this was a relatively uncommon phenomenon; by mid-1996, however, financial institutions across the country were offering such services to their customers.

The threat from lax security is more than just a financial one. Banking records are extremely personal and contain revealing information. Until the Internet is secure, this information is available to anyone with the technical prowess to crack a bank's online service. It hasn't happened yet (I assume), but it will.

Also, the Internet needs to be secure so that it does not degenerate into an avenue for domestic spying. Some law-enforcement organizations are already using Usenet spiders to narrow down the identities of militia members, militants, and other political undesirables. The statements made by such people on Usenet are archived away, you can be sure. This type of logging activity is not unlawful. There is no constitutional protection against it, any more than there is a constitutional right for someone to demand privacy when they scribble on a bathroom wall.

Private e-mail is a different matter, though. Law enforcement agents need a warrant to tap someone's Internet connection. To protect against monitoring that circumvents these procedures (which could become widespread), all users should at least be aware of the encryption products available, both free and commercial (I will discuss this and related issues in Part VII of this book, "The Law").

For all these reasons, the Internet must become secure.

Can the Internet Be Secure?

Yes. The Internet can be secure. But in order for that to happen, some serious changes must be made, including the heightening of public awareness of the problem. Most users still regard the Internet as a toy, an entertainment device that is good for a couple of hours on a rainy Sunday afternoon. That needs to change in coming years.

The Internet is likely the single most important advance of the century. Within a few years, it will be a powerful force in the lives of most Americans. So that this force may be overwhelmingly positive, Americans need to be properly informed.

Members of the media have certainly helped the situation, even though media coverage of the Internet isn't always painfully accurate. I have seen the rise of technology columns in newspapers throughout the country. Good technology writers are out there, trying to bring the important information home to their readers. I suspect that in the future, more newspapers will develop their own sections for Internet news, similar to those sections allocated for sports, local news, and human interest.

Equally, many users are security-aware, and that number is growing each day. As public education increases, vendors will meet the demand of their clientele.

Summary

In this chapter, I have established the following:

The Internet is insecure for several reasons, including lack of education, the Internet's design, proprietarism, the trickling down of technology, and human nature.
The Internet needs to be secure, for reasons of both personal and national security.
The Internet can be secure, but only if public awareness increases and vendors respond to that demand.

Those things having been established, I want to quickly examine the consequences of poor Internet security. Thus, in the next chapter, I will discuss Internet warfare. After covering that subject, I will venture into entirely new territory as we begin to explore the tools and techniques that are actually applied in Internet security.



