Fred Cohen <fc at all dot net> is president of Management Analytics in Hudson, Ohio, a consulting firm specializing in Net security. The firm operates the Info-Sec Heaven site at <http://all.net/> and publishes a monthly series of essays called "Internet Holes" [1] on information-security topics. The March essay [2] espoused a policy of "zero tolerance" for Net attacks:
> Take a zero-tolerance attitude toward investigating attempts to scan or
> enter your system. The idea that one attempt to guess a password or gain
> unauthorized entry is too small to bother with opens a giant hole. With
> modern attack tools, instead of scanning for a lot of services on one
> computer, I can scan for a few services at many computers. By staying
> below your incident detection threshold, an attacker can go after
> systems at will and without fear of recourse. With zero-tolerance, each
> questionable activity results in another message to the systems
> administrator at the site where the attack originates. Pretty soon, the
> activities will be seen as significant.
Apparently some twisted Netizen took this policy as a personal affront to his right to telnet wherever he damn well pleased. Over a period of several days, a shadowy band of crackers used a newly discovered vulnerability in URLs to enlist innocent collaborators in a denial-of-service attack. (The defenses of all.net proved more than ample.) Cohen wrote in comp.risks:
> ...there is a more basic flaw in the URLs used in the Internet that
> appears to make firewalls very weak prey for attackers and enables Web
> sites to launch highly distributed and hard-to-trace attacks. The basic
> flaw was published some weeks ago... and extensions have now been used
> to launch probes and attacks by the thousands from sites all over the
> net.
Cohen has posted a detailed and disturbing account [3] of the attack on all.net. Read it if you've ever wondered what it's like to be a system administrator under siege.
[1] <http://all.net/journal/netsec/top.html>
[2] <http://all.net/journal/netsec/9603.html>
[3] <http://all.net/journal/netsec/9604.html>
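Cohen's argument is that per-source counting lets a distributed scan stay invisible, while zero tolerance reports every probe. A minimal sketch of the difference, in Python (the log format, host names, and function name are invented for illustration, not Cohen's tooling):

```python
from collections import Counter

def flagged_sources(events, threshold):
    """Return the sources whose probe count reaches the alert threshold.

    events: (source_host, service) pairs, e.g. parsed from a connection log.
    """
    counts = Counter(src for src, _service in events)
    return {src for src, n in counts.items() if n >= threshold}

# A distributed scan: a single telnet probe from each of 50 hosts.
scan = [("attacker%d.example.com" % i, "telnet") for i in range(50)]

# A conventional threshold of, say, 5 probes per source sees nothing...
assert flagged_sources(scan, threshold=5) == set()

# ...while zero tolerance (threshold of 1) reports every source.
assert len(flagged_sources(scan, threshold=1)) == 50
```

The attacker's trick of staying "below your incident detection threshold" is exactly the first assertion; the second is the zero-tolerance answer to it.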
Webmasters: do you monitor your servers to see how fast they are serving pages to users? Do you then think you know something about the quality of the experience your users have when they visit your site? Allow Bernard Hughes <bernard at timedancer dot com> politely to differ. Hughes offers a Web service called OnTime Delivery that tracks and reports on the time it's taking your users to load your pages. From May to September 1995 he ran a test using 200 pages volunteered by respondents to Usenet postings. The results [4], posted last December, are somewhat counterintuitive. They lead to the conclusion that most of the variability in Web performance can be attributed to servers and their "pipes" -- the quality and speed of their network connections.
One finding: Web pages aren't delivered faster, in aggregate, at any particular time of day [5]. But for any single page, the time required to deliver it can range over a factor of 3 or 4 from one request to another [6]. Taken together, these results seem to exculpate Internet load and implicate servers as the main contributors to the variability we perceive on the Web. Another surprise: a 28.8 Kbps modem on the client end downloaded pages, on average, only 40% faster than one running at 14.4 Kbps [7]. Note that these results apply to Web browsing only, and would certainly look different if you timed other services such as FTP. The OnTime Delivery service costs $2 or less per URL per week; see [8]. Thanks to Frostie Sprout <frostie at wyoming dot com> for alerting the Apple Internet Users mailing list to this resource.
[4] <http://www.timedancer.com/Beta/>
[5] <http://www.timedancer.com/Beta/daily.html>
[6] <http://www.timedancer.com/Beta/spread.html>
[7] <http://www.timedancer.com/Beta/144v288.html>
[8] <http://www.timedancer.com/Forms/Subscription_Form2.html>
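Hughes's "factor of 3 or 4" spread is easy to quantify from repeated timings of a single page. A sketch in Python (the sample timings below are invented for illustration; OnTime Delivery's actual methodology is described at [4]):

```python
def spread(timings):
    """Ratio of slowest to fastest delivery time for one page.

    timings: load times in seconds for repeated requests of the same URL.
    """
    return max(timings) / min(timings)

# Hypothetical load times for one page over five requests.
samples = [2.1, 3.4, 8.2, 2.6, 7.9]
print(round(spread(samples), 1))  # -> 3.9: the slowest request took ~4x the fastest
```

A spread this large from a single client, with aggregate delivery times flat across the day, is what points the finger at the server end rather than at Internet load.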
Louis Slothouber <louis at starnine dot com> of StarNine Technologies, makers of the leading Macintosh Web server, has developed a mathematical model of Web server performance -- see the executive summary at [9] and the full paper at [10]. (Adobe Acrobat PDF and MS Word forms of the paper are available from [11].) The model reproduces the nonlinear behavior of servers under increasing load -- familiar to webmasters everywhere -- a fairly flat response leading up to a "wall." The model indicates that the wall's position is determined mostly by available network bandwidth and the average size of files served.
Some intriguing results: when network bandwidth is a bottleneck, doubling the server's speed results in only a slight improvement. Adding a second, identical server has no effect at all. But adding a second server that is slower than the first actually decreases performance.
[9] <http://louvx.biap.com/white-papers/performance/summary.html>
[10] <http://louvx.biap.com/white-papers/performance/overview.html>
[11] <http://louvx.biap.com/white-papers/default.html>
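Slothouber's paper models the server as a network of queues; the flat-then-wall shape can be illustrated with a much cruder single-queue (M/M/1) stand-in, where the service rate is bandwidth divided by average file size -- which is why those two quantities set the wall's position. All numbers below are invented:

```python
def response_time(arrival_rate, bandwidth_bytes_s, avg_file_bytes):
    """Mean response time of a single M/M/1 queue, in seconds.

    Service rate mu = bandwidth / average file size (requests per second);
    mean response time is 1 / (mu - lambda), valid only while lambda < mu.
    """
    mu = bandwidth_bytes_s / avg_file_bytes
    if arrival_rate >= mu:
        raise ValueError("offered load exceeds capacity")
    return 1.0 / (mu - arrival_rate)

# A 1.5 Mbit/s (~187 KB/s) pipe serving 8 KB files: mu is about 23 req/s.
for lam in (2, 10, 20, 23):
    # Response time stays nearly flat, then climbs steeply near mu.
    print(lam, round(response_time(lam, 187_000, 8_000), 3))
```

This is only a stand-in for the full model, but it shows the wall: at 2 requests/second the response time is a few hundredths of a second, while at 23 requests/second -- just under capacity -- it is dozens of times longer.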
Peter Flynn <webmaster at www dot ucc dot ie>, webmaster of University College, Cork, runs a Web-accessible acronym server [12] that has won Magellan 4-star and Point Top-5% awards. On my first visit I just had to see if the 16,252-entry database contained LFSUX; it didn't, so I added it. Thanks to Peter Langston <psl at acm dot org> for forwarding this marginally CDA-acceptable mnemonic from the alt.folklore.computers newsgroup:
> ...the PPC [Apple/Motorola/IBM PowerPC chip] architecture defines the
> instruction:
>
> Load Floating-point Single-precision indeXed with Update
>
> with the mnemonic "LFSUX". Whenever the Mac debugger... finds this in
> the disassembly, it adds the comment: "It's also a bitch, then you die."
Anu Garg at Case Western Reserve University offers an email interface (described at [13]) to services called Dictionary/by/Mail, Thesaurus/by/Mail, A.Word.A.Day, and Anagram/by/Mail. (For a Web-based anagram service see [14].) I use the thesaurus service often enough that I've aliased it in all of my Internet-visible Unix accounts.
[12] <http://www.ucc.ie/info/net/acronyms/acro.html>
[13] <http://www.ucc.ie/info/net/acronyms/mailserver.html#garg>
[14] <http://www.infobahn.com/pages/anagram.html>
Pushing HTML beyond the established standards, as both Netscape and Microsoft do, can be a two-edged sword. Feeling a bit snippy with Microsoft today, are we, sir? Like to take it out on the users, would we, sir? Don Reed <don at alcuin dot com> reveals an underhanded way to do that. Here's his response to a query on the Apple Internet Authoring mailing list:
>> I have to recreate a greek letter to use for a scientific article. Are
>> the HTML codes for the Greek symbols still in discussion by the WWW
>> steering committee?
> At present, the best solution is to tell people to use Microsoft Explorer
> to view it. Microsoft has added a FACE attribute to Netscape's FONT
> entity. The line would look something like "<FONT FACE=SYMBOL> text text."
> (Some Microsoft-hostile people put this line in their pages routinely.
> When an Explorer user sees their pages, they're all Greek.)
Apple Internet Users mailing list -- mail listproc@abs.apple.com with no subject and with the message: subscribe apple-internet-users Your Name
Apple Internet Authoring mailing list -- mail listproc@abs.apple.com with no subject and with the message: subscribe apple-internet-authoring Your Name