
7 Security Mechanisms

This chapter outlines typical mechanisms used to implement IT security: cryptography, access control lists, authentication, implementation of rules & policies, and availability mechanisms.

7.1 Cryptography & Digital Signatures

Cryptography is the translation of information (known as plaintext) into a coded form (known as ciphertext) using a key. Cryptography is mostly used to protect the privacy of information (i.e. to limit who can access the information).
In a strong cryptosystem, the original information (plaintext) can only be recovered by the use of the decryption key, so the plaintext information is protected from "prying eyes". A strong encryption algorithm is one which cannot easily be inverted on a supercomputer today (i.e. on the PC of 10 years' time). There are two principal methods of cryptography: shared key and public key cryptography.

Crypto References:

The reference book on Cryptography is [crypto1].

An article written by the author for SecurityPortal on Internationally Available Strong Crypto Products appeared in September 1999.

Discussions on key lengths: a study published January 1996, and an excellent article in Byte, May 1998, by Bruce Schneier [crypto2].

7.1.1 Shared (or symmetric) Key Cryptography

Both parties exchanging data share a key; this key (unknown to others) is used to encrypt the data before transmission on one side and to decrypt it on receipt on the other side. There are two kinds of symmetric ciphers: block ciphers (which encrypt a block of data at a time) and stream ciphers (which encrypt each bit, byte or word sequentially). Sample algorithms:

Advantages: Shared key algorithms are much faster than their public key counterparts.
Disadvantages: Both sides must know the same key and they must find a secure way of exchanging it (via a separate secure channel).

Typical applications: Encryption of information to protect privacy, e.g. local encryption of data files (where no transmission is required), data session encryption, banking systems (PIN encryption). 
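To illustrate the symmetric idea, here is a toy stream cipher sketch in Python: the keystream is derived by hashing the key, a nonce and a counter. This is an illustration only (no authentication, and SHA-256 used as an ad-hoc keystream generator), not a vetted cipher; the point is that one shared key both encrypts and decrypts.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Pseudo-random keystream from hashing key + nonce + counter (toy PRF)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the identical call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

key, nonce = b"key known to both parties", b"unique-per-message"
ciphertext = xor_cipher(key, nonce, b"attack at dawn")
assert xor_cipher(key, nonce, ciphertext) == b"attack at dawn"
```

Note how the decryption step is the same function with the same key, which is exactly why the key must be exchanged over a separate secure channel.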

7.1.2 Public Key Cryptography

Both parties have a private key and a public key. The private keys are known only to their owners, but the public keys are available to anyone (like telephone numbers). The sending party encrypts the message with the receiver's public key and the receiver decrypts with his own private key. This is possible due to the discovery by Diffie and Hellman (at Stanford University, autumn 1975) that algorithms can be developed which use one key for encryption and a different key for decryption. The public and private keys constitute a key pair.
The following public key crypto-systems are well known:

Patents: Both RSA and DH are patented in the U.S.; PKP (Public Key Partners) of Sunnyvale, CA holds the licensing rights. The DH patent has since expired (19.8.97) and the RSA patent (only valid in the USA) only holds until 2.9.00. Two patents valid until 2008 (from Schnorr and Kravitz) affect the DSS.
Strength: The public-key algorithms rely on difficult-to-solve mathematical problems, such as taking logarithms over finite fields (Diffie-Hellman) or factoring large numbers into primes (RSA), to create one-way functions. Such functions are much easier to calculate in one direction than in the other, making brute force decryption virtually impossible (with today's computing power and decent key sizes).
Newer techniques such as elliptic curves and mixture generators (e.g. RPK) promise faster public key systems.

Advantages of PK: Only the private key need be kept secret. No secret channels need exist for key exchange, since only public keys need be exchanged. However the public key must be transferred to the sender in such a way that he is absolutely sure that it is the correct public key! Public key cryptography also provides a method for digital signatures.
Disadvantages: Slow, due to the mathematical complexity of the algorithms.

Typical applications: Ensuring proof of origin, ensuring that only the receiver can decrypt the information, transmission of symmetric session keys. 
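The "transmission of symmetric session keys" case can be sketched with a toy Diffie-Hellman exchange. The numbers below are demo-sized for readability; real deployments use primes of 1024 bits or more.

```python
# Public parameters (demo-sized; real groups use 1024-bit or larger primes).
p, g = 23, 5

a, b = 6, 15                      # private keys, never transmitted

A = pow(g, a, p)                  # public keys, exchanged in the clear
B = pow(g, b, p)

# Each side combines its own private key with the peer's public key:
session_key_alice = pow(B, a, p)
session_key_bob = pow(A, b, p)
assert session_key_alice == session_key_bob   # same symmetric session key
```

An eavesdropper sees only p, g, A and B; recovering the session key from those requires solving the discrete logarithm problem mentioned above.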

7.1.3 Hashing / message digest

A hash function creates a fixed-length string from a block of data. If the function is one way, it is also called a message digest function. These (fast) functions analyse a message and produce a fixed-length digest which is practically unique: finding another message with an identical hash is very unlikely, even with very fast computers. There is no known feasible way of producing another message with the same digest. Such algorithms are normally used to create a signature for a message, which can be used to verify its integrity.

Advantages: Much faster than encryption, and the output is fixed length (so even a very large file produces a short digest, which is much more efficient for data transmission).

Typical applications: Many Internet servers provide MD5 digests for important files made available for downloading. Most digital signature systems and secure email systems use a digest function to ensure integrity.

An interesting variation on hashes is the Message Authentication Code (MAC), a hash function with a key. To create or verify the MAC, one must have the key. This is useful for verifying that hashes have not been tampered with during transmission. Two examples are HMAC (RFC 2104) and NMAC, based on SHA-1. 

7.1.4 Applying cryptography

Applications such as PGP, S/MIME, Secure RPC (and hence secure NFS & NIS+) and SKIP use a combination of public key and symmetric cryptography to ensure non-repudiation and privacy. Hashing algorithms are used for (fast) generation of signatures.

Encryption Strength

There are several possible weaknesses in a crypto system, and the strength of the system is the strength of the weakest link.

The following discussion concentrates on the issue of key lengths, but strong keys are useless if the other weak links are not addressed!

Computers are getting faster (computing power doubles about every 2 years), cheaper and better networked each year. All cryptographic algorithms are vulnerable to "brute force" attacks (trying all possible key combinations).

Symmetric (or shared key) algorithms:
In general, the key length determines the encryption strength of an algorithm, with the work to break it growing as roughly 2 to the power of the key length: 56-bit keys take 2^16 = 65,536 times longer to crack than 40-bit keys.
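The arithmetic behind that figure, spelled out:

```python
# Brute force must try up to 2**keylen keys, so each extra bit doubles the work.
work_ratio = 2 ** 56 // 2 ** 40   # = 2 ** 16
print(work_ratio)   # → 65536
```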

Most products come from the U.S. and are subject to U.S. export restrictions, currently either a 40bit limit or escrowing of keys.

Public (asymmetric) key algorithms:

Recommendations on key sizes: What is strong?

The encryption key size should be chosen, based on:

Attacker            Time span                                        Recommended key size
Curious hacker      Information must be protected for a few days     Public key  512 bits
                                                                     Shared key   40 bits
Curious hacker      Information must be protected for min. 2 years   Public key 1024 bits
                                                                     Shared key   60 bits
Large organisation  Information must be protected for min. 20 years  Public key 1568 bits
                                                                     Shared key   90 bits
Government          Information must be protected for min. 20 years  Public key 2048 bits
                                                                     Shared key  128 bits

Here we define strong encryption as that which uses key sizes greater than or equal to:

Public Key 1568 bits (for RSA, DH and ElGamal)
Shared key 90 bits

"Strong" for new encryption system such as Elliptical curve or Quantum cryptography is not defined here, as yet.

See also the reference section above.

Legal Issues / Export Restrictions

The "International Law Crypto Survey" of cryptographic laws and regulations throughout the world can be found at This is changing rapidly, particularly since Sept.'99.

The U.S. and certain other countries consider encryption to be a weapon and strictly control exports. This is crippling efforts to include encryption in applications, Internet services such as email, and operating systems.

In general the U.S. allows export of 40bit shared key systems and 512 bit public key systems.

Some countries (e.g. France) forbid encryption except when a key has been deposited in escrow (so that the legal authorities can listen to all communications if they need to).

Other countries allied to the U.S. (e.g. Germany, UK, Sweden, etc.) also enforce the U.S. restrictions by allowing strong encryption domestically, but restricting export of cryptographic devices.

The OECD made a set of recommendations on international cryptography in June 1997. Many countries have almost no restrictions, but some (especially European) countries are considering some kind of restriction on the use of cryptography in the future.

The only strong encryption software widely available internationally, known to the author of this document, comes from Australia, Finland, Ireland and Russia.

Digital Time-stamping Service (DTS)

A DTS issues a secure timestamp for a digital document.

Certificates, Certification Authorities (CA), PKI and Trusted Third Parties (TTP)

Certificates are digital documents binding the identity of an individual to his public key. They allow verification that a particular public key does in fact belong to the presumed owner. The ISO certificate standard is X.509 v3, which comprises: subject name, subject attributes, subject public key, validity dates, issuer name, certificate serial number and issuer signature. X.509 names are similar to X.400 mail addresses, but with a field for an Internet email address. The X.509 standard is used in S/MIME, SSL, S-HTTP, PEM and IPsec key management.

LDAP (Lightweight Directory Access Protocol) is an X.500-based directory service protocol, useful for certificate management. Certain secure email products such as PGP5 have built-in support for querying and updating LDAP servers.

Certificates are issued by the certification authority (CA). The CA is a trusted authority, who confirms the identity of users. The CA must have a trustworthy public key (i.e. very large) and its private key must be kept in a highly secure location. CAs can also exist in a hierarchy, in which lower-level CAs trust higher-level CAs.

Where sender and receiver must be absolutely sure of who their peer is, a CA is a possible solution. Another name for a CA is a Trusted Third Party (TTP). If both sides trust a common authority, this authority can be used to validate credentials from each side. E.g. the sender sends his public key, name (and other validating information) to the CA. The CA verifies this information as far as possible, adds its stamp to the packet and sends it to the receiver. The receiver can now be surer that the sender is who he says he is.

The problem with CAs is that you have to trust them! However, even banks have overcome that problem with the implementation of SWIFT, a worldwide financial transaction network.
Emergency File Access

A frequent requirement when protecting file confidentiality via encryption is emergency file access: if the file owner encrypts an important file and forgets the key, what happens? One solution is to create a second key and split it into five parts, such that any two of the five partial keys, when combined, can be used as a decryption key. The five partial keys can be kept by separate people, only to be used if the original owner is unable to decrypt the important file.
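Such a 2-of-5 split can be sketched as a threshold (Shamir-style) secret split over a prime field: the secret is the y-intercept of a random line, and any two points on the line recover it. This is an illustrative sketch, not PGP's actual implementation.

```python
import secrets

P = 2 ** 127 - 1   # a Mersenne prime; the field must be larger than the key

def split_2_of_5(secret_key: int):
    """Split into 5 shares; any 2 reconstruct (points on a random line)."""
    c = secrets.randbelow(P)                       # random slope
    return [(x, (secret_key + c * x) % P) for x in range(1, 6)]

def combine(s1, s2):
    """Lagrange interpolation at x = 0 from any two shares."""
    (x1, y1), (x2, y2) = s1, s2
    inv = pow(x2 - x1, -1, P)                      # modular inverse (Python 3.8+)
    return (y1 * x2 - y2 * x1) * inv % P

shares = split_2_of_5(123456789)
assert combine(shares[0], shares[3]) == 123456789  # any pair works
assert combine(shares[4], shares[1]) == 123456789
```

A single share reveals nothing about the secret, since every possible secret is consistent with one point on some line.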

The Windows version of PGP supports these key splitting functions.

Secure Data Transmission using Cryptography

Secure data transmission is the exchange of data in a secure manner over (presumed) insecure networks.

Secure data transmission is required for class systems or higher and can be divided into the following categories:

  1. Peer entity authentication: Both sides (users & processes) must identify & authenticate themselves, prior to the exchange of data.
  2. Data integrity: Data must remain complete during transmission. Unauthorised manipulation of user data, audit trail data and replay of transmissions shall be reliably identified as errors.
  3. Data confidentiality: Only authorised persons should be able to access the data. (e.g. end-to-end data encryption).
  4. Data origin authentication : Does the receiving process know who the data is coming from? For class systems, non repudiation of origin may be required: On receipt of data, it shall be possible to uniquely identify and authenticate the sender of the data. Has the receiver proof (e.g. digital signatures) of where information came from?
  5. Non repudiation of receipt : Has the sender proof that the information sent was received by the intended receiver?
  6. Access control : All information previously transmitted which can be used for unauthorised decryption shall be accessible only to authorised persons.

Secure data transmission is achieved by the use of cryptography. There are two principal cryptographic methods, public key and shared key. Normally a mixture of both is used for secure communication.

Using Cryptography for secure transmission
When choosing an authentication system, choose a signature function, encryption method and hash function that require comparable effort to break.
The encryption algorithms described in the previous section can be combined together to produce a system for secure data transmission (refer to the diagram below):

  1. Data integrity: MD5 digests are created on the data part of the message.
  2. Non repudiation of origin: Use public keys, i.e. the sender encrypts all or part of the message with his private key; for performance reasons, normally only the MD5 digest noted above is encrypted. The receiver (who decrypts with the sender's public key) is sure that the message comes from the correct sender, because only the sender knows his private key. The digest encrypted with the sender's private key is called a signature.
  3. Non repudiation of receipt: not covered here.
  4. Data confidentiality: Confidential parts of the message are encrypted. Shared key encryption is the most efficient method (for performance). Normally the shared key is calculated from information known to both sides, e.g. the sender uses his private key + the receiver's public key, and the receiver uses his private key + the sender's public key. Both can generate the same unique key due to the mathematical properties of public key algorithms (i.e. multiplying numbers raised to powers). This data encryption key is often called the session key (it is valid only for a particular session).
  5. Peer entity authentication: Where sender and receiver must be absolutely sure of who their peer is, a certification authority is a possible solution. If both sides trust a common authority, this authority can be used to validate credentials from each side. E.g. the sender sends his public key, name (and other validating information) to the authority. The authority verifies this information as far as possible, adds its stamp to the packet and sends it to the receiver. The receiver can now be surer that the sender is who he says he is. Similar encryption and hashing to the above would be applied to this data.
  6. Access control: not covered here (depends on implementation).
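Steps 1 and 2 (digest the data, then sign the digest with the private key) can be sketched with a toy RSA key pair. The numbers are demo-sized and the message is an invented example; a production signature scheme also needs padding and large keys.

```python
import hashlib

# Toy RSA key pair (demo-sized; real keys are 1024+ bits).
p, q = 61, 53
n = p * q            # 3233, the public modulus
e, d = 17, 2753      # public / private exponents, e*d = 46801 ≡ 1 (mod 3120)

message = b"wire transfer: 100 CHF"   # hypothetical example message
digest = int.from_bytes(hashlib.md5(message).digest(), "big") % n

signature = pow(digest, d, n)         # sender signs the digest with the private key

# Receiver: recompute the digest and compare with the "decrypted" signature.
assert pow(signature, e, n) == digest
```

Signing only the digest, rather than the whole message, is exactly the performance optimisation the text describes.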

The data is prepared for transmission:

After receipt, the data is decrypted:

Example systems using this approach: Sun's Secure RPC (hence NIS+, NFS) and SKIP; S/MIME isn't a million miles away either.

Using FTP for secure file exchange

Ftp is available as standard on many platforms, so you may find it a convenient method of transferring data between, say, a UNIX machine and an IBM mainframe. (Note: use SSH/SCP if you can, but SSH is not available on all platforms.) What needs to be done to improve ftp security?

  1. Ftp transfers username and password in clear text, therefore do not rely on password protection as the main security barrier.
  2. Encrypt files before transfer and decrypt after receipt (use private/public keys where proof of origin is required).
  3. Validate file integrity using MD5 digests.
  4. The ftp sources can be altered to remove unwanted commands (e.g. only allow get), change default ports or slightly change the protocol, all of which should increase security. The sources are freely available.
  5. SSH can also be used to setup PASV FTP over a secured tunnel (SSH2 includes an encrypted ftp client: recommended).
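Item 3 (integrity validation with MD5) takes only a few lines of Python; the file name and published digest below are hypothetical placeholders.

```python
import hashlib

def md5_digest(path: str, chunk_size: int = 65536) -> str:
    """Compute the MD5 hex digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# After the ftp transfer, compare against the digest published by the sender:
# if md5_digest("patch.tar") != published_digest: refuse the file.
```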

7.2 Sample encryption products/protocols

7.2.1 Encryption Products

This section has been reduced, since more up-to-date articles have been written for SecurityPortal which are more comprehensive on SSH and Crypto products. Please refer to these articles:

[1] Internationally Available Strong Crypto Products, sp/int_crypto.html, or the version on SecurityPortal.
[2] All about SSH, sp/ssh-part1.html, or the version on SecurityPortal.

7.2.2 Encryption Protocols


SSL (Secure Sockets Layer)

Netscape's Secure Sockets Layer is a "plug-in" socket layer (port 443 for HTTP with SSL) offering client & server authentication, integrity checking, compression and encryption. It is currently an Internet draft (not yet approved); see the TLS section below.
It is designed to fit on the transport layer in the TCP/IP stack (like Berkeley sockets), but below applications (such as telnet, ftp, HTTP). SSL was introduced in July 1994. It is designed for use in Internet WWW commerce applications, but also on LANs. Netscape Navigator and Microsoft Explorer both provide support for SSL V2 and V3 (Explorer 3.0, Navigator 3.0). Web servers supporting SSL3 include Apache & Netscape.

The client connects to the server and sends a list of supported encryption algorithms. The server replies with an algorithm name, his public key, a shared key and the hash algorithm name. The client can check whether the public key does belong to that server. The client generates a session key and sends it, encrypted with the server's public key, to the server. The server decodes the session key (with his private key) and uses it to encrypt data transmitted during the session. The client checks the server by sending a random string encrypted with the session key. The server confirms receipt.
The above authentication method can also be used by the server to authenticate the client, however it must have a public key for the client (not the case for WWW applications).
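In modern terms, the client side of this handshake (including the "check the public key belongs to that server" step) is available through Python's ssl module; a minimal sketch, with the host name left as a parameter:

```python
import socket
import ssl

# A default context verifies the server's certificate chain and hostname,
# i.e. the "does this public key really belong to that server?" step.
ctx = ssl.create_default_context()

def tls_version(host: str, port: int = 443) -> str:
    """Run the handshake described above and report the negotiated version."""
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# e.g. tls_version("www.example.com") once a network connection is available
```

The algorithm negotiation, session-key exchange and per-session encryption all happen inside wrap_socket(), exactly as the handshake description above outlines.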


  1. The SSL V2.0 specification.
  2. The SSL V3.0 specification.
  3. SSL-Talk Email discussion list: send an email with the *subject* being the single word SUBSCRIBE.
  4. Secure Sockets Layer Discussion List FAQ.
  5. For a discussion of SSL implementations, see [1].

TLS (Transport Layer Security)

In 1995, the IETF started work on the adoption of SSL as an Internet standard, known as TLS. A draft of the protocol was published in March 1997, based on SSL 3.0. Some differences are the use of HMAC instead of MD5 for integrity checking and a slightly different set of supported encryption algorithms.


PCT (Private Communication Technology)

Microsoft's Private Communication Technology (PCT) is aimed at replacing SSL. It is more general in nature: authentication and encryption negotiation are separate. It is used in Explorer 3.0. Coming from Microsoft, it is not compatible with anything else, but could become a standard.
Microsoft proposed a combined SSL/PCT implementation to the IETF (in April 1996) and to Netscape, to ensure compatible solutions for Internet commerce (reference link out of date & no replacement found).

S-HTTP (Secure HTTP)

S-HTTP is an extension of the HTTP protocol which can run on top of "normal" TCP/IP, developed by CommerceNet. It provides services for transaction confidentiality and works at the application layer, specifically for secure HTTP connections. CommerceNet is the CA in the current implementation.


E-Commerce

E-commerce requires secure methods for:

  1. secure payment for online buying/selling (see SET below)
  2. Privacy & integrity of payment details (e.g. credit card information). SSL is typically used to protect the privacy and integrity of the Online ordering session between client & merchant.

Sample products:

SET (Secure Electronic Transaction)

Secure Electronic Transaction is a set of protocols for electronic commerce proposed by VISA, MasterCard and American Express (since Feb. 1996). SET uses MIME to transport messages. SET 1.0 was released in May 1997 and can communicate across most media, not just TCP/IP.
Authentication: The server requests authorisation, the server key is authorised by a CA, keys are exchanged with the client and the transaction occurs. The SET digital certificate includes an account number and public key.

SET 2.0: Version 2.0 will feature a much-needed encryption-neutral architecture that encourages the development of faster (than RSA encryption used in SET 1.0) electronic-commerce applications. Vendors such as Certicom, Apple Computer, and RPK are all positioning themselves as alternatives to RSA. Elliptic Curve Cryptosystem (ECC) is a technology that is being pushed by both Certicom and Apple.



Secure DNS

The standard DNS has been extended to provide a parallel public key infrastructure, with each DNS domain having a public key. The domain key can be loaded at boot or securely transferred from the parent domain.
See also the new version of BIND and the relevant IETF charter.

Cryptanalysis & encryption crackers


A company called AccessData (Utah, phone 1-800-658-5199) sells a package for ~$200 that cracks the built-in encryption schemes used by WordPerfect, Lotus, Microsoft Office, ACT, Quattro Pro, Paradox, PKZIP and other products.
It doesn't simply guess passwords: it does real cryptanalysis.

7.3 Authentication

Authentication is the process of verifying the identity of a subject. A subject (also called a principal) can be a user, a machine or a process, i.e. a "network entity". Authentication uses something which is known to both sides, but not to others, i.e. something the subject is, has or knows. Hence this can be biometrics (fingerprints, retina patterns, hand shape/size, DNA patterns, handwriting, etc.), passphrases, passwords, one-time password lists, identity cards, smart-tokens, challenge-response lists etc. Some systems use a combination of the above.

The most common methods of strong authentication today consist of one-time password lists (paper), automatic password generators (smart tokens) and intelligent identity cards.

7.3.1 Summary of authentication mechanisms

There is no industry standard today. Many different efforts are underway. In particular the Federated Services API, GSS API and RADIUS seem like logical ways to interconnect the current incompatible systems, without requiring vendors to throw away their existing products. It is hard to imagine such an API offering more than basic functionality, however (since advanced functionality is not common to all products). The IETF also have a number of active Authentication groups:

For enterprise wide authentication and naming services DCE, NIS+ and NODS are the current main runners, with Microsoft's Active Directory service (planned for release with NT5) already generating interest for companies using NT Domains. Support for X.500 directory services will probably appear in most of these, allowing an interoperability gateway to be built. The fact that neither DCE nor NIS+ have been fully adopted in the PC client world is a pity, but perhaps reflects pricing and complexity problems.

SSH is a really impressive product for secure access to UNIX machines. It can use RSA, SecurID or UNIX user/password authentication.

For authentication across unsecured networks, proprietary (incompatible, expensive) encrypting firewalls using certificates or token based authentication are the current solution. Possible future acceptance of proposed standards such as SKIP or IPsec will, hopefully, provide long term interoperability.


Single Signon

Client/server applications run on many different types of systems, from IBM mainframes, VMS and UNIX to PCs. Unfortunately each of these systems has its own way of authenticating users. Database logins are normally not integrated with OS (user) logins. Usually a username and password identify a user to the system. If each system and application has its own logon process, the user is confronted with an array of (possibly) different usernames and passwords. This poses a real security risk, as the user may be tempted to write down all the different passwords, change them rarely, or use simple ones.

The ideal solution would be to provide a secure single signon, i.e. when a user logs on to a workstation on the network, his identity is established and can be shared with any system or application. Any user can sign on at any system anywhere and have the same name and password. The user needs to remember only one password. An even more secure signon can be achieved by using personnel ID cards to validate the user (via a card reader on each workstation) or via hand-held smartcards (with one-time passwords).

Achieving single signon is not an easy task in today's heterogeneous environment, but it would seem that Kerberos is the main contender with Sun's NIS+ also an option.

Firewalls & Authentication

Strong authentication relies (normally) on something the user knows (e.g. a password) and something the user has (e.g. a list, smart card). Applications must support the authentication mechanism (or it must be transparent to the application). The following is a sample of strong firewall authentication methods/products.

Strong authentication mechanisms on firewalls are very important if protocols such as Telnet, Rlogin or ftp (writeable) are to be allowed. TCP/IP has inherent security weaknesses (confidentiality, IP spoofing) and these need to be addressed in a strong authentication product. If keys are used, key distribution must be considered. 

No standards exist; each product has its own API and interoperability is often very difficult. Some firewall authentication servers can act as glue, allowing a common database to be used for different authentication products (an example is the Gauntlet authentication server). 

HTTP Basic Authentication

A basic authentication method is supported in HTTP.

Algorithm: A WWW client sends a request for a document which is protected by basic authentication. The server refuses access and sends code 401 together with header information indicating that basic authentication is required. The client presents the user with a dialog to input username and password, and passes these to the server. The server checks the username and password and sends the document back if they are correct.
Encryption: Very weak. The username and password are merely encoded with the base64 method (not encrypted). Documents are sent in clear text.
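A sketch of what the client actually sends, using the well-known RFC 2617 example credentials; note that base64 is an encoding, not encryption, and is trivially reversible:

```python
import base64

credentials = "Aladdin:open sesame"   # the RFC 2617 example user:password pair
token = base64.b64encode(credentials.encode()).decode()

header = "Authorization: Basic " + token
print(header)   # → Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==

# Anyone who sniffs the request recovers the password immediately:
assert base64.b64decode(token).decode() == credentials
```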

NT Domains / Lan Manager / SMB & NetBIOS /NetBeui /CIFS

NT's domains are an extension of (IBM/Microsoft) LAN Manager (LM) and are not hierarchical, but domain based, i.e. more suitable for separate LANs.
LM authentication has several dialects: PC NETWORK PROGRAM 1.0, MICROSOFT NETWORKS 3.0, DOS LM1.2X002, DOS LANMAN2.1, Windows for Workgroups 3.1a, NT LM 0.12, CIFS. The last two are the most interesting as they are used in NT4.

Authentication Products

Kerberos (+ DCE)

Kerberos is a secret-key network authentication service developed at MIT by Project Athena. It is used to authenticate requests for network resources in a distributed, real-time environment. DES (i.e. shared key) encryption and CRC/MD4/MD5 hashing algorithms are used. The source code is freely available (for the non-commercial version) and Kerberos runs on many different systems.

Kerberos requires a "security server" or Kerberos server (KDC) which acts as a certification authority, managing keys and tickets. This server maintains a database of secret keys for each principal (user or host), authenticates the identity of a principal who wishes to access secure network resources, and generates session keys when two users wish to communicate securely.

There are many versions of the Kerberos authentication system:V3 (MIT), V4 (commercial: Transarc, DEC) and V5 (in beta/RFC 1510, DCE, Sesame, NetCheque). BSDI is the only OS to bundle the Kerberos server. Solaris 2 bundles a Kerberos client, which among other things allows NFS to use Kerberos for authentication.

Microsoft intends to support a version of Kerberos in NT5; it remains to be seen how compatible it will be with existing versions.
Entegrity Solutions offer solutions for making DCE the core of enterprise security. PC-DCE interfaces to other non-Kerberos authentication systems such as SecurID and Entrust PKI Certificates. 

Kerberos is not without problems:


NIS+

NIS+ is a hierarchical enterprise-wide naming system, based on Secure RPC. In the default configuration it provides user, group and services naming, the automounter and key distribution. NIS+ can easily be extended to define customised tables.

NIS+ is an improved version of the UNIX de facto standard NIS (Network Information System, or yellow pages). NIS & NIS+ were developed by Sun. NIS is available on most UNIX platforms, but has very weak security. NIS+ is much more secure, but is only available on Sun's Solaris and, more recently, HP-UX and AIX.

Security is based on the use of Secure RPC, which in turn uses the Diffie-Hellman public key cryptosystem.



BoKS

BoKS is a full authentication/single signon package for PC and UNIX systems, made by DynaSoft in Sweden. DynaSoft is a 10-year-old company employing about 50 people. The following is an extract from their home page:

The BoKS concept has been developed and improved by DynaSoft since 1987. It is a comprehensive security solution covering areas such as access control, strong authentication, encryption, system monitoring, alarms and audit trails. BoKS functions in UNIX and DOS/Windows environments, offers high reliability and is ported to most UNIX platforms. BoKS can also be integrated with enterprise management systems such as Tivoli and database applications such as Oracle and Sybase.

BoKS can use Security Dynamics SecurID smart tokens. Although the author has little practical experience with BoKS, it seems to be in extensive use where high security is required. It runs on UNIX (SunOS, Solaris and HP-UX) and PCs (Win95 & NT versions were due to be introduced in late 1996). BoKS uses shared key encryption (40-bit DES outside the U.S., 56-bit DES in the U.S.).

Bellcore S/Key

S/Key is a one time password system from Bellcore. Public domain versions are also available. Features:
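The core S/Key idea is a hash chain: the server stores h^n(seed) and each login reveals the previous link in the chain. A sketch (using raw MD5 in place of S/Key's exact folding of the hash output, and an invented seed):

```python
import hashlib

def h(x: bytes) -> bytes:
    """One step of the hash chain (a simplification of S/Key's hash folding)."""
    return hashlib.md5(x).digest()

seed, n = b"passphrase+seed", 100

# Setup: the server stores only the end of the chain, h^100(seed).
server_stored = seed
for _ in range(n):
    server_stored = h(server_stored)

# Login: the client reveals the previous link, h^99(seed).
otp = seed
for _ in range(n - 1):
    otp = h(otp)

# The server hashes the offered password once and compares. A sniffer who
# captures h^99 cannot derive h^98, so a replayed password is useless.
assert h(otp) == server_stored
server_stored = otp        # the next login will require h^98(seed)
```

Each password is thus used exactly once, which is what defeats the clear-text sniffing problem noted for ftp and telnet.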

OPIE (One-time Passwords In Everything)

OPIE is a public domain release from the U.S. Naval Research Laboratory. It is an improved version of S/Key Version 1 which runs on POSIX-compliant UNIX-like systems and has the following features in addition to S/Key:

ACE Server (SecurID)

The SecurID system from Security Dynamics is one of the more established names on the market today. It works with most clients (UNIX, NT, VPN clients, terminal servers etc.) and many firewalls provide support for SecurID. The server which manages the user database and allows/refuses access is called ACE and is delivered only by Security Dynamics (whereas clients are delivered by several vendors). The author has used this system to provide secure remote access to hundreds of users on diverse clients.

The tokens are known as SecurID and are basically credit-card-sized microcomputers which generate a unique password every minute. In addition each user is attributed a 4-character PIN code (to protect against stolen cards). When a user logs on, he enters his PIN plus the current pass-code displayed by the SecurID token. The server contains the same algorithm and secret encryption key, allowing both sides to authenticate securely. Software tokens are available for Win95/NT, as are SecurID modems from Motorola. The tokens typically last 3 years.
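Conceptually, a time-synchronised token can be sketched as a keyed hash of the current minute. This is a hypothetical illustration of the idea only, NOT the proprietary SecurID algorithm:

```python
import hmac
import hashlib
import time

def token_code(secret: bytes, at: float) -> str:
    """A 6-digit code that changes every minute (illustrative algorithm)."""
    minute = int(at // 60)
    mac = hmac.new(secret, str(minute).encode(), hashlib.sha1).digest()
    return "%06d" % (int.from_bytes(mac[:4], "big") % 1_000_000)

secret = b"seed shared between token and server"   # assumed example seed
now = time.time()
code = token_code(secret, now)

# The server holds the same secret and clock, so it computes the same code;
# the user's PIN is prefixed on top to guard against stolen tokens.
assert code == token_code(secret, now)
```

Because the code depends on the clock, an intercepted value expires within the minute, which limits (though does not eliminate) the session-hijacking risk noted below.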

This form of authentication is strong, but there is a risk of a session being hijacked (for example, if the one-time password doesn't change often).

Cost: The Smart Token often costs about $60 (every 3 years), which may seem expensive when many users are involved, particularly when the server software can cost an additional $150 per user.
Stability: The author has been running an ACE server with 200 users for several years. While it can be quirky to set up, it is rock solid and has never crashed (on Solaris 2.5 or 2.7).

The server, which supports mirroring for high availability, runs on UNIX, with clients for virtually all platforms. ACE is configured via a Motif GUI that is certainly not perfect; a more usable GUI is available in the NT Remote Admin tool. ACE supports the NT/RAS, ARA, XTACACS and RADIUS authentication protocols.


Safeword

Safeword by Secure Computing is direct competition for ACE/SecurID. Its servers run on UNIX. It supports many authentication protocols, such as TACACS, TACACS+ and RADIUS.

Many token types are supported: Watchword, Cryptocard, DES Gold & Silver, Safeword Multi-sync and SofToken, and AssureNet Pathways SNK (SecureNet Keys).


Racal Guardata

This one-time password system from Racal Guardata is well-established competition for the SecurIDs. It works basically as follows:

Attacks could occur in the form of chosen-plaintext guessing. Racal Guardata also produces the Access Gateway.

Defender Security System

This system from AssureNet Pathways may be of interest to those using NT servers, since the server runs on NT (not UNIX, like most of the above). Features: authentication via ARA, NT/RAS and TACACS+; multiple servers are possible via database replication.

The tokens used are SecureNet Keys (SNK) hardware or software tokens. The challenge/response authentication uses DES, the PIN is never transmitted over the network and sensitive information is encrypted.

Remote Access Control Protocols

RADIUS (Remote Authentication Dial In User Service).

Merit Network and Livingston developed the RADIUS protocol for identification and authentication. There is an IETF working group defining a RADIUS standard.
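To give a flavour of the protocol: the RADIUS standard (RFC 2865) hides the PAP password in the Access-Request by XORing it with an MD5 digest of the shared secret and the per-request authenticator. A minimal sketch, assuming for brevity a password of at most 16 bytes (one block):

```python
import hashlib, os

def hide(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 User-Password hiding, single-block case: pad the password
    to 16 bytes and XOR it with MD5(shared_secret + request_authenticator)."""
    pad = hashlib.md5(secret + authenticator).digest()
    return bytes(p ^ k for p, k in zip(password.ljust(16, b"\x00"), pad))

secret = b"shared-with-the-NAS"      # known to NAS and RADIUS server only
authenticator = os.urandom(16)       # fresh random value per request

hidden = hide(b"user-password", secret, authenticator)
# XOR is its own inverse, so the server recovers the password the same way:
assert hide(hidden, secret, authenticator).rstrip(b"\x00") == b"user-password"
```

Note that this hiding depends entirely on the strength of the shared secret; it is obfuscation rather than strong encryption.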

XTACACS (Extended Terminal Access Controller Access System)

XTACACS is an enhancement of TACACS (Terminal Access Controller Access System), a UDP-based system from BBN which supports multiple protocols: SLIP/PPP, ARA, Telnet and EXEC.


TACACS+

TACACS+ is also an enhancement of TACACS (this time from Cisco), but is not compatible with XTACACS or TACACS. It allows authentication via S/Key, CHAP and PAP in addition to SLIP/PPP and Telnet. Authentication and authorisation are separated and may be individually enabled and configured.

PPP Authentication protocols: PAP, CHAP

PAP (Password Authentication Protocol) sends the username and password to the server in clear-text. The password database is stored in a weakly encrypted format. CHAP (Challenge Handshake Authentication Protocol) is a challenge/response exchange in which a fresh challenge is used at each login, so the password itself never crosses the wire. However, the password database is not encrypted. Some vendors offer variations of PAP and CHAP with enhancements, for example storing passwords in encrypted form in CHAP.
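The CHAP exchange is documented in RFC 1994: the response is an MD5 digest over the packet identifier, the shared secret and the server's challenge. A minimal sketch:

```python
import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP response: MD5(identifier + secret + challenge).
    The secret itself is never transmitted."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)           # server picks a fresh challenge each login
response = chap_response(0x01, b"shared-secret", challenge)

# The server recomputes the digest and compares -- which is why CHAP requires
# the server to hold the cleartext (or cleartext-equivalent) secret:
assert response == chap_response(0x01, b"shared-secret", challenge)
```

This illustrates the trade-off noted above: CHAP protects the wire but forces an unencrypted (or reversibly stored) password database on the server.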

Other Authentication methods

7.4 Access Control Lists (ACLs)

An ACL defines who (or what) can access (e.g. use, read, write, execute, delete or create) an object. Access Control Lists (ACLs) are the primary mechanism used to ensure data confidentiality and integrity. A system with discretionary access control can discern between users and manages an ACL for each object. If the ACL can be modified by a user (or data owner), this is discretionary access control. If the ACL is specified by the system and cannot be changed by the user, mandatory access control is being used. There are no standardised ACLs for access to OS services and applications in UNIX.
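The discretionary model described above can be sketched in a few lines: each object carries a list of (principal, permissions) entries, and only the owner may change it. (Names and methods here are illustrative, not any particular OS's API.)

```python
class ACLObject:
    """A toy object protected by a discretionary ACL."""

    def __init__(self, owner: str):
        self.owner = owner
        self.acl = {owner: {"read", "write", "delete"}}

    def grant(self, requester: str, principal: str, perms: set):
        # Discretionary: only the data owner may modify the ACL.
        if requester != self.owner:
            raise PermissionError("only the owner may change the ACL")
        self.acl.setdefault(principal, set()).update(perms)

    def check(self, principal: str, perm: str) -> bool:
        return perm in self.acl.get(principal, set())

doc = ACLObject("alice")
doc.grant("alice", "bob", {"read"})
assert doc.check("bob", "read") and not doc.check("bob", "write")
```

Under mandatory access control, by contrast, the `grant` step would be refused even for the owner: the ACL would be fixed by system policy.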

7.5 Mechanisms for implementing Rules & Policies

To secure a particular environment, mechanisms are required which allow the rules and policies to be implemented. Implementing rules and policies network wide on UNIX machines is not easy and often requires development of scripts. Another possibility is the use of a tool such as Tivoli, which is designed to implement rules and policies in a networked heterogeneous environment.

NT allows setting of some, but not all, rights per user. It takes a very different approach to UNIX in this area (see the chapter "NT"). Implementing rules and policies across a network of servers is not supported by standard utilities either.

7.6 Availability Mechanisms

7.6.1 Backup & Restore

Things to watch out for:

7.6.2 Environment

The computing environment can be protected with Air Conditioning, locked server rooms and UPS (220V protection).

7.6.3 Redundancy

Redundancy increases availability and may be implemented in hardware (RAID), in disk drivers or the OS (software RAID), or at the application/service level (e.g. replication, transaction monitors, backup domain controllers).

Application/service redundancy

This is often the cheapest and easiest to implement, where available. The principal problem is that few applications support this type of redundancy. Clients connecting to these servers automatically look for a backup or duplicate server if the primary is not available.

RAID / Mirroring

The classical method of increasing system availability is to duplicate one of the weakest parts of a computer: the disk. RAID (Redundant Array of Inexpensive Disks) is a de-facto standard defining how standard disks can be used to increase redundancy. The top RAID systems duplicate disks, disk controllers, power supplies and communication channels. The simplest RAID systems are software-only disk drivers which group disparate disks into a redundant set.
There are several RAID levels:

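As a flavour of how the parity-based RAID levels work: the parity block is the XOR of the data blocks, so any single lost block can be reconstructed from the survivors. A minimal sketch:

```python
def parity(blocks):
    """XOR a list of equal-length byte blocks together (RAID-5-style parity)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data "disks" plus one parity block:
data = [b"disk0data", b"disk1data", b"disk2data"]
p = parity(data)

# Disk 1 fails: XOR the surviving blocks with the parity to rebuild it.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Mirroring (RAID 1) is the degenerate case with a full copy instead of parity; striping with parity (RAID 5) trades one disk's worth of capacity for tolerance of any single-disk failure.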
Things to watch for in RAID systems:

System Redundancy

If applications do not provide built-in redundancy, special software (and perhaps hardware) can be installed on two systems to provide hot-standby functionality. The principle is as follows: both systems can access shared (high-availability, dual-ported) disks and have duplicate network connections. The backup machine monitors the primary constantly; if it notices that the primary is no longer functioning, it takes control of the shared disks, reconfigures itself to have the same network address as the primary and starts up the applications that were running on the master. Of course, this will only work with certain applications: e.g. if the primary crashes and its principal application trashes its configuration or data files in doing so, the backup server will not be able to start the application.
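The monitoring loop described above can be sketched as follows. The `check_primary` probe and the takeover steps are placeholders for whatever the HA product actually does (ping, service probe, disk takeover, IP takeover).

```python
import time

def monitor(check_primary, take_over, interval: float = 5.0, max_misses: int = 3):
    """Run on the backup: probe the primary; fail over after repeated misses."""
    misses = 0
    while True:
        if check_primary():              # e.g. heartbeat over a dedicated link
            misses = 0
        else:
            misses += 1
            if misses >= max_misses:     # avoid failing over on a single glitch
                take_over()              # mount shared disks, adopt the primary's
                return                   # network address, start its applications
        time.sleep(interval)

# Simulated run: one healthy heartbeat, then three consecutive misses.
events = []
beats = iter([True, False, False, False])
monitor(lambda: next(beats), lambda: events.append("takeover"), interval=0)
assert events == ["takeover"]
```

Requiring several consecutive misses before takeover is the usual defence against "split-brain", where both machines briefly believe they own the shared disks.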

An example of this is IBM's HACMP product, or Sun's HA cluster.

Full Hardware redundancy

Specialised computer systems offer complete redundancy in one system, i.e. CPU, memory, disks etc. are fully duplicated; a single point of failure should not exist. These systems often require specially adapted operating systems, cost a fortune and are rarely compatible with mainstream systems. Rarely used in the commercial arena, they are mostly reserved for military or special financial use.

An example is the Stratus line of systems.
