

4 Administration of Workstations

Managing workstations means automatically handling security patches, software updates, settings and software installation. When automatic handling fails or there are other problems, remote control is required. Many organizations still do some of these administrative tasks by walking to computers and accessing them locally. However, the ability to do these tasks remotely over the network, or to automate parts of them completely, can provide huge benefits.

In addition to administrative remote control, naive users sometimes need advice on how to use a program, and this can be done using shared sessions. In my view, shared sessions are not really needed in administration, and some administrators are used to them only because their platform lacks a decent method of remote control that does not interfere with normal usage.


4.1 Current solution

Currently, Helia is using the Microsoft Windows platform on all of its workstations, with software from various vendors. The operating system, updates and programs are installed by two methods: major updates are done by copying disk images with Norton Ghost, and minor updates are installed with a network login script (Helia IT Services 2003). While feature updates are just nice to have, security updates are mandatory to keep the network safe. In Windows, virus database updates are nearly as important as operating system security updates.

Copying an image means that a model workstation is built with all software installed and settings made. An image of its hard disk is then saved: every byte on the disk is written into one huge file, the image. This image is then copied to the hard disks of the target computers. Helia handles images with Norton Ghost. Imaging Windows has many downsides: computers must be almost identical, all previous information on the target computers is lost, and as a result updates end up being done only once or twice a year. Computers are rarely identical, so any difference in workstation configuration means software has to be installed manually after imaging. Installing different software configurations is sometimes mandated by different needs (e.g. Winha), but more often by licensing reasons (e.g. Creative CD Writer).

The network login script can install minor updates; virus protection databases are installed this way. A login script cannot install large software packages, because in Windows this often requires answering many questions and rebooting the computer. Helia IT Services is researching alternatives for automating software installation without overwriting the whole computer.

The Windows operating system has had some form of automatic installer since Windows 98, but the automatic installation only installs the operating system and practically no programs. I have created an automatic installation script for Windows 98, but it was not taken into production, as Ghost was used instead.


4.2 Methods of Software Installation and Update

Software installation and update is a basic task in computer administration. Even though larger organizations can probably anticipate the need for most of the software before rolling out workstations, continuous updating is required for security reasons. Installation methods differ in their level of automation, compatibility between programs and the possibility of uninstallation. The methods considered here are installing a binary with an installation wizard, compiling from source, using a package manager and using an automated package manager.


4.2.1 Installation Wizard

The installation wizard is the most popular method for installing programs in Windows. Some Linux programs use it too, such as the original OpenOffice binary installer. An installer is an executable program containing the installation logic and the software in compressed form. When run, the software is uncompressed and copied onto the system according to the user's answers to the installer's questions. Even though there are user interface guidelines provided by Microsoft, the implementation of installers varies a lot. Several third-party vendors have created software for building installers, for example InstallShield and the NullSoft installer. The installation wizard has the benefit that it is intuitively usable for naive users, who most likely double-click a file to execute it after download, and some find that easy questions can make the user feel in control. The downsides are many: installing is slow, it cannot be automated, most questions are unnecessary and reveal a lack of standardization, and uninstallation does not really work, as the system is not returned to its previous state. Usually, despite the many questions, the level of actual user control is still minimal.


4.2.2 Compile from Source

Some software, especially development versions, is distributed as source code. The user has maximum control over the software, and the resulting runnable binary program is optimized for the user's hardware. Even though the GNU make system makes it possible for any computer-literate user to compile software, some programs are harder to compile than others. Basically, a program is compiled by uncompressing the package and then typing ‘./configure && make’. Compiling software is not an efficient method if the software is to be distributed to many computers. Some of the benefits of compilation can be reached by compiling separate binaries for different architectures, so source code is usually just a supplement to other installation methods.
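
As a sketch, a typical GNU-style source build looks like the following; the package name is only an example, and some programs need extra options or libraries:

  # Unpack the source archive (the package name is only an example)
  tar xzf someprogram-1.0.tar.gz
  cd someprogram-1.0
  # Configure for this machine and compile
  ./configure && make
  # Install, usually as root; files normally go under /usr/local
  make install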


4.2.3 Package Manager

The problems of installation wizards led to the creation of dedicated programs for installing software, package managers. A package manager installs software packaged according to guidelines specific to that package manager. Strict guidelines make non-interactive installation possible, which makes it practical to install many packages at once. For example, installing 50 packages interactively, each asking 10 questions, would not be practical. Package managers also handle dependencies, typically by aborting the installation if required software libraries are missing.
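
For illustration, the basic non-interactive operations look like this on the two main package formats; the package names are placeholders:

  # RPM-based systems: install, query and remove a package
  rpm -ivh someprogram-1.0-1.i386.rpm
  rpm -q someprogram
  rpm -e someprogram
  # The equivalent dpkg commands on a Debian-based system
  dpkg -i someprogram_1.0-1_i386.deb
  dpkg -r someprogram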

The best known package managers include the RPM Package Manager (formerly Red Hat Package Manager) and the Debian package manager dpkg. The underlying package format is often the biggest difference between distributions. Packages created for one distribution can usually be recompiled to work in another distribution that uses the same packaging format. Distributions can be categorized by packaging format into RPM (Red Hat) based, DEB (Debian) based, and those using the distribution's own format. This categorization was discussed in detail in chapter “3.1 Distributions” and in Illustration 4.

A basic package manager handles the installation of a single piece of software or a single software library at a time. Typically, if a required software library is missing, a warning is given and the installation is aborted. The user must then find the missing library and start the installation again. This is commonly known as “dependency hell”. Because dependencies are not handled automatically, system updates cannot be fully automated with a basic package manager.

Programs could almost always be compiled so that they do not rely on external software. This requires that they do not use other programs as commands (system calls) and that all software libraries are included statically. Not using other programs goes against the toolbox principle (combining small programs to achieve big tasks) and results in duplicate coding effort and too little specialization in software. Compiling all binaries statically wastes RAM, as the same code is loaded separately for each statically compiled program. Static binaries also require the program vendor (or packager) to recompile the program each time any of the libraries needs updating.


4.2.4 Automated Package Managers

A fully automated package manager also handles dependencies, and thus fully automates the installation and updating of software. This means that a computer can update all of its programs nightly without user intervention. Automated package managers use normal package managers (such as RPM) to actually install the packages, so a well-defined package format (and thus a working basic package manager) is a prerequisite for an automated package manager.
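
As an illustration of the kind of unattended nightly update this makes possible, a short cron script could run the update; this is only a sketch, and the exact command depends on the tool chosen (yum is shown, apt-get would be analogous):

  #!/bin/sh
  # Example script placed in /etc/cron.daily/ on a Red Hat / Fedora system:
  # update all installed packages every night without asking questions
  yum -y update
  # On a Debian system the equivalent would be:
  # apt-get update && apt-get -y upgrade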

Automated package managers replace the user in all or some of these routine tasks:

  • Installing requirements
  • Removing conflicting software
  • Verifying package author
  • Listing what software is available
  • Locating latest versions of software
  • Downloading software before installation

To know what software is available and to receive installation packages, automated package managers use a client-server model. The server is a repository of packages offered for automatic installation, and the client is the computer being updated.
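
As a sketch of the client side, a yum client is pointed at a repository by adding a section to /etc/yum.conf; the section name and URL below are invented examples. An apt client lists its repositories in /etc/apt/sources.list in the same spirit.

  # Excerpt from a client's /etc/yum.conf; the repository name and URL are examples
  [updates]
  name=Example update repository
  baseurl=http://updates.example.org/fedora/1/i386/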


4.2.5 Proprietary Automated Package Managers

Proprietary package managers are typically specific to a single distribution, use undocumented protocols and are in some way controlled by a single entity. Most such systems have a small number of predefined repositories and do not allow users to create their own repositories.

Even though proprietary package managers solve many technical problems, their closed nature creates new risks: a single point of failure (vendor-controlled repositories), vendor lock-in and a limited selection of software.

Red Hat up2date used to be very popular, because it was bundled with the most popular distribution, and probably because it had a graphical user interface. It is quickly being obsoleted by the more advanced and more open yum. Even though up2date is still included in Fedora Core / Red Hat, the vendor now officially supports yum as well.

Microsoft Windows Update could be considered an automatic package manager too. However, its scope is very limited compared to the update systems in Linux, as Windows Update can only update some software from Microsoft. Thus, it is not an adequate solution for updating a typical Windows workstation. Microsoft controls the update repositories, and the Blaster worm exploited this fact to launch a denial of service attack against the Windows Update repository.


4.2.6 Free Automated Package Managers

Free automated package managers make it possible for anyone to create repositories (servers) for automated updates. For users, a lot more software is available and the single point of failure is eliminated. For organizations with many workstations, running their own repository makes it possible to test updates before rolling them out to all workstations.

The Advanced Packaging Tool apt was the first free automated package manager. Some years ago, the ability to upgrade every piece of software, including the core system and security updates, by just typing “apt-get update && apt-get upgrade” made the Debian distribution and apt unique. Even though most of apt's functionality has been ported to Red Hat, there are still some special cases where apt works best in its native system, Debian.

Yum, “Yellow Dog Updater, Modified”, was originally based on an updater for Yellow Dog Linux, a distribution made for Macintosh hardware. It was ported to Red Hat Linux 9 and became so popular that it is now officially included in Fedora Core 1. Even though yum and apt have a very similar user interface and basic mode of operation, their inner workings are different. Having taught many courses with both package managers and peeked through their code, I have noticed that yum fixes some shortcomings of apt on the Red Hat / Fedora platform. The code base of apt is a lot larger, because it is written in C++ and contains some Debian-only code. Unlike apt, yum uses rpm directly through its application programming interface (API), which leads to greater stability. Apt checks the authenticity of repositories; yum also checks individual packages. It is a lot easier to create a software repository with yum (yum-arch) than with apt. Finally, the RPM (installation package) of yum requires less work from the user.
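
As a sketch of how simple the server side is with yum-arch, packages are copied under an existing web server's document root and the header metadata is generated; the paths and server name are examples:

  # Copy the RPM packages to a directory served by the web server
  mkdir -p /var/www/html/repo
  cp *.rpm /var/www/html/repo/
  # Generate the header metadata that yum clients download
  yum-arch /var/www/html/repo
  # Clients then use baseurl=http://server.example.org/repo/ in their configuration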

Many other automated package managers exist. In my experience, they require user interaction more often than yum and apt, which makes them unsuitable for updating a large number of machines and has probably kept their popularity quite low. Many minor package managers are listed on Freshmeat.net, and the Red Hat Linux RPM Guide has reviewed some of them.


4.2.7 Recommended Method of Software Installation

Based on the analysis above, many years of experience with the most popular package management systems, and having administered repositories for both apt and yum, I recommend yum as the package management system for Helia. The main benefits of yum are its good integration with Red Hat / Fedora (the recommended distribution), stability, popularity (a large and growing user base), security, ease of use (especially on the server side) and its use of standard web servers.

Helia should begin by using public yum repositories. Packages tailored in Helia should be submitted to Fedora Extras or another popular repository for quality assurance. Later, if there are resources and a need to test updates before deployment, Helia could start its own repository. A suitable list of repositories is included as an appendix.


4.3 User authentication

To use a computer, users must be authenticated. Because most medium to large organizations have thousands of users, it would be rather inconvenient to manage user accounts on each workstation separately.

In the case organization, Helia, users are authenticated using Windows Active Directory. There are nearly 10 000 users. The Linux server myy.helia.fi (running Red Hat Advanced Server) also uses Windows Active Directory to authenticate users.

Benefits of Helia’s current solution:

  • Using a single user database is an obvious choice for a network of this size
  • The solution seems quite stable (it has been in use for a year and a half)

Shortcomings of Helia’s current solution:

  • Passwords cannot be changed from the main Linux server myy, so they cannot be changed by logging in remotely
  • Using Windows Active Directory lowers the security of the whole system to the Windows level
  • Active Directory uses proprietary extensions to standards, and Microsoft often changes the specifications, probably to keep the system incompatible with competing solutions.

Samba could connect Linux workstations directly to Helia's current solution, Windows Active Directory. Samba is a Linux server that provides native-like file shares and other services to Windows workstations. New features in Samba 3 include Active Directory support: Samba 3.0 is able to join an ADS realm as a member server and authenticate users using LDAP/Kerberos (Samba Team 2003). However, there is not much experience anywhere of authenticating Linux workstations against Active Directory, and as Microsoft can change its implementation at its whim, this solution could prove risky in the long run.
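
A rough sketch of what joining the realm would involve is shown below; the realm and domain controller names are placeholders, and the details should be checked against the Samba documentation:

  # Minimal smb.conf settings for ADS membership (realm and server are examples):
  #   security = ads
  #   realm = HELIA.FI
  #   password server = dc1.helia.fi
  # Join the realm with an account that is allowed to add machines
  net ads join -U Administrator
  # With winbindd running, verify the connection to the domain controller
  wbinfo -t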


4.3.1 Selection criteria

The user authentication method must provide

  • Password obfuscation (passwords must not be sent in clear text over the network)
  • Built-in support in the selected distribution
  • Linux and Windows support

The chosen authentication system should also meet the generic criteria for software choice laid out earlier. The system and its protocols should be fully documented and accepted as a standard. Previous in-house experience and ease of installation for IT support would make the rollout cheaper. The system should be widely interoperable to avoid multiple passwords. At least Windows and Linux workstation login should be supported, but authentication for web servers and proxies could be immediately useful too. Strong encryption with no known faults in implementation would be a requirement in an ideal world, but as the network has many other weaker points, any encryption that is relatively difficult to break is good enough at the moment.


4.3.2 NIS Yellow Pages

The Network Information Service or NIS is Sun Microsystems’ “Yellow Pages” (YP) client-server protocol for distributing system configuration data such as user and host names between computers on a computer network. (Wikipedia 2003)

Even though NIS was once a popular method for centralized authentication, it has now largely been replaced because of security concerns (Wikipedia 2003). Many distributions, including Red Hat, no longer cover setting up NIS in their basic documentation.


4.3.3 LDAP

The Lightweight Directory Access Protocol (LDAP) allows data to be stored on a central server and accessed by clients through encrypted SSL tunnels. LDAP could be used to store any data that is read often and written rarely, but here we are interested in using LDAP only to store user authentication data and contact information.
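
For illustration, a user entry in the directory could look roughly like the following LDIF sketch; the directory suffix, names and numbers are invented and the password hash is left out:

  # A hypothetical user entry combining inetOrgPerson (contact data) and posixAccount (login data)
  dn: uid=mmeikal,ou=People,dc=helia,dc=fi
  objectClass: inetOrgPerson
  objectClass: posixAccount
  cn: Maija Meikalainen
  sn: Meikalainen
  uid: mmeikal
  uidNumber: 10123
  gidNumber: 10123
  homeDirectory: /home/mmeikal
  loginShell: /bin/bash
  userPassword: {SSHA}...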

LDAP is in production use in many large educational organizations. Helsinki University is using LDAP to authenticate users in multiple systems, including the Mappi webmail, WebOodi course evaluation and gym reservations. Some systems already use authentication tickets to allow access to all services with a single login, and in the future there will be a single place for authentication. (Harjuniemi 2003)

Funet, the Finnish IT network for science, which provides Internet connectivity for Helia, is researching possibilities for providing LDAP services for its members (CSC 2003, Kanner 2002). Helia should follow Funet's development and attempt to build interoperable systems. System interoperability could improve service by letting users work with fewer passwords and access more systems. Costs could be saved by combining development efforts and by using the development work already done under CSC funding.

Helia should consider implementing LDAP too. Following the CSC-recommended LDAP schema “FunetEduPerson” (CSC 2003) would be an obvious first step towards compatible systems. Implementing LDAP on Linux is explained in the distributions' own documentation (for example Red Hat Inc 2003), and in more detail, in a distribution-independent way, in the LDAP HOWTO (Malère 2003).
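
On Red Hat / Fedora, the workstation side could be set up roughly as follows with the authconfig tool; the server address and base DN are placeholders, and the exact options should be checked against the documentation cited above:

  # Enable LDAP for both user information (NSS) and authentication (PAM), non-interactively
  authconfig --enableldap --enableldapauth \
      --ldapserver=ldap.helia.fi --ldapbasedn="dc=helia,dc=fi" --kickstart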

Even though Microsoft Active Directory supports access with the LDAP protocol, it might be dangerous to create a system where Microsoft components are in a critical role. Microsoft has a reputation for making surprising, obscure changes to protocols that make them incompatible with other vendors' products. On the other hand, the Active Directory LDAP implementation is quite standards compliant at the moment, which makes combining the systems easier.


4.3.4 Recommendation for Authenticating Users

Helia should use LDAP for authenticating users. Funet recommendations should be followed when they are compatible with existing systems. Practical implementation of LDAP in Helia requires more research.


4.4 Remote Control

Currently, remote control is done with a closed source system based on the Virtual Network Computing (VNC) protocol, with an unknown level of encryption. Remote control initiates a shared session on the target computer, so the computer cannot be used for other purposes while it is being remote controlled. Shared sessions might, however, make it easier to use remote control for giving users advice.

The most widely used remote control methods for Linux are the ssh command line connection and graphical connections over the X Window System or Virtual Network Computing (VNC). All of these methods should be secured. Ssh has encryption and two-way authentication built in, and the graphical remote control tools can be protected with an ssh-encrypted tunnel. It is obvious that only secure, encrypted communication methods are suitable for remote control; otherwise any attacker sniffing (eavesdropping on) traffic could gain administrative root access to the machines.


4.4.1 Virtual Network Computing (VNC)

Virtual Network Computing (VNC) is truly multiplatform, with both clients and servers for Linux, Windows and Macintosh, and all clients and servers are interoperable. VNC works by taking screenshots of the target computer and sending them, compressed, through the network. This makes it very slow for anything but a local area network. The interoperability between Linux and Windows could be useful in Helia, and graphical remote control might be easier than the command line when beginning administration.
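
As a sketch, a VNC session can be protected by tunnelling it through ssh; the host name is an example and a VNC server is assumed to be listening on display :1 (port 5901) on the target:

  # Forward a local port to the target's VNC server so the session is encrypted
  ssh -L 5901:localhost:5901 root@ws101.helia.fi
  # In another terminal, connect the viewer to the local end of the tunnel
  vncviewer localhost:1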


4.4.2 X Window System

The X Window System, the foundation of graphical interfaces in Linux and other POSIX systems, is designed for multiple users. Any computer running the X Window System can serve X terminals and allow many clients to log in graphically at the same time. The protocol is very efficient compared to VNC, as instead of screenshots it sends descriptions of the windows to draw and the text to write on the screen. Still, it is not usable over slow lines. It is trivial to create a secure tunnel for X Window System traffic, allowing secure graphical remote control of any computer that allows ssh access and has the X Window System installed. In fact, this tunnel is created automatically in most setups, and graphical programs can be run by typing their name on the ssh command line. Used this way, the X Window System does not draw the target computer's desktop but opens only the window of the remotely run program. Some see this as a benefit; others would use another program, such as Xnest, to draw the desktop.
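
For example, a single program can be started over a forwarded X connection like this; the host and program names are examples:

  # Run one graphical program on the workstation; its window opens on the local display
  ssh -X root@ws101.helia.fi xterm
  # Or log in interactively and start programs by typing their names in the remote shell
  ssh -X root@ws101.helia.fi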


4.4.3 SSH Secure Shell

In my experience, the command line is the most useful method of remote control. In Linux, anything can be changed from the command line, and mass execution of commands on multiple computers is only practical on the command line. If remote access is done from outside the network, scarce bandwidth makes graphical remote control impractical, but text mode still works. Command line ssh remote control works with tiny devices too, such as mobile phones and PDAs. The simplest method for mass execution is to list the target machines' names separated by whitespace and then run ssh in command mode inside a bash for-loop. Specialized tools for this also exist.
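
The for-loop could look like the following sketch; the host names and the command are examples, and public key authentication (see below) avoids typing a password for every machine:

  # Run the same administrative command on every workstation in the list
  for host in ws101 ws102 ws103; do
      ssh root@$host 'yum -y update'
  done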

A free ssh server, OpenSSH, is installed on Red Hat / Fedora Core by default, and it is also available for all POSIX (Unix and Linux like) platforms. A free ssh client is part of practically all Linux distributions, and all of Helia's workstations already have a closed source ssh client. As the OpenSSH server is installed on Fedora Core by default, enabling ssh requires only opening the ssh port and possibly tightening the OpenSSH configuration, for example by disabling the now obsolete ssh-1 protocol and allowing only the remote control user to log in through ssh.
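
A sketch of such a configuration is shown below: protocol 1 is disabled and ssh logins are limited to the remote control account (here root, with key-only login), after which the service is reloaded. The exact policy is a local decision.

  # /etc/ssh/sshd_config excerpt (example policy):
  #   Protocol 2
  #   PermitRootLogin without-password
  #   AllowUsers root
  # Apply the new configuration
  service sshd reload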

SSH allows remote logins either by knowing a password or by owning the secret key matching an installed public key. As OpenSSH is integrated with Pluggable Authentication Modules (PAM), it can use practically any method to authenticate users; for example, a one-time password system was briefly tested. To allow remote logins to workstations, a public key could be installed so that remote root access is allowed only to the owner of the corresponding secret key. A secret key is much harder to communicate to outside parties by mistake, as it is a text file full of unpronounceable gibberish, and it is more resistant to brute force attacks than a normal password in case a workstation is captured.
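
As a sketch, the key pair could be created and installed like this; the file names are examples, and in practice the public key would be installed as part of the workstation installation:

  # On the administrator's machine: generate a key pair (the secret key never leaves this machine)
  ssh-keygen -t dsa -f ~/.ssh/helia_admin
  # On each workstation: install the matching public key for the root account
  mkdir -p /root/.ssh
  cat helia_admin.pub >> /root/.ssh/authorized_keys
  chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys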


4.4.4 Remote Control Recommendation

Based on the comparison above, I suggest that workstations are remote controlled with SSH in command line mode by default, and that the X Window System through an SSH tunnel, without the desktop, is used for graphical remote control. Authentication should be done with a pre-installed public key, and commands could be mass executed with command line ssh.



