
Linux-Kongress 2003
10th International Linux System Technology Conference
October 14th to 16th, 2003 in Saarbrücken, Germany



The following papers will be presented at the Linux-Kongress on Wednesday, October 15 and Thursday, October 16. All talks are in English. (The original Call for Papers.)

A new binary alternatives system for Linux/Unix by Hans-Georg Eßer Wednesday 11:30

This paper presents a new concept for binary (program) alternatives that will aid Linux/Unix users in defining a generic default program for every task that can be handled by multiple programs. While the system hides the range of available programs, e.g. by launching the chosen default editor whatever editor command is used, it remains possible to launch any installed program explicitly.


Today, Linux users are faced with a huge number of applications for most common tasks; e.g. there are lots of editors, mail clients, word processors, web browsers, etc.

A user who knows which specific program he wants to start, and happens to know its binary name as well, has no problem launching it from the shell. However, many applications start helper applications, and those may default to a program that is not the user's tool of choice. In most cases it is possible to change the default behavior, but the necessary procedures vary considerably.

A new approach

In an OSNews article, Adam Scheinberg describes a collection of changes to Linux systems that would be needed to make Linux more accessible to new users:

"Perhaps the way to get system-wide default is to have a given directory, say, /system/commands, that appears to be the equivalent of /usr/local/bin or /usr/bin -- that contains the executables from the command line. Except, in our distro, the real files are kept in /system/bin. /system/commands is full of aliases. Then, when you change your default browser through our control panel -- all the known browser commands [...] change to aliases of your selected browser."

An implementation of this approach would have to move all binaries out of the directories in the search path and add a new standard executable directory containing placeholders for all moved binaries, implemented as links to the default programs.

This approach has some design flaws:

  • Moving a binary to a different location will break program calls by applications that use the original full path name.
  • Changing a standard application will require all links in /system/commands to be altered in order to reflect the change.
  • There is no central configuration file or database to administer the alternatives.

We present an approach that picks up Scheinberg's idea, but fixes these flaws and allows for user-friendly and logical configuration of the program alternatives and defaults.

Suggested solution

Our new framework, called the "Linux Alternatives framework", consists of five parts:

  • a concept of "program classes": an "editor" class, for example, consists of a list of programs and attributes for each of them;
  • a description of the possible operations (modifications) on these classes and their elements, and a specification of a command-line tool that handles these modifications;
  • a suggestion for a file system structure that allows these ideas to be implemented;
  • a specification for configuration files;
  • and finally a simple but working implementation.
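
As a sketch of how such a class database and its lookup operation could behave (the names, structure and API here are our own illustration, not the paper's actual specification):

```python
# Sketch of "program classes" with a central alternatives database.
# All names, the structure and the API are our own illustration.

class AlternativesDB:
    def __init__(self):
        # class name -> {"members": [...], "default": program}
        self.classes = {}

    def add_class(self, name):
        self.classes[name] = {"members": [], "default": None}

    def register(self, cls, program):
        entry = self.classes[cls]
        entry["members"].append(program)
        if entry["default"] is None:
            entry["default"] = program   # first registered program is the default

    def set_default(self, cls, program):
        if program not in self.classes[cls]["members"]:
            raise ValueError(f"{program} is not in class {cls}")
        self.classes[cls]["default"] = program

    def resolve(self, cls):
        # what a generic command such as "editor" would launch
        return self.classes[cls]["default"]

db = AlternativesDB()
db.add_class("editor")
for prog in ["vi", "emacs", "nano"]:
    db.register("editor", prog)
db.set_default("editor", "nano")
print(db.resolve("editor"))           # -> nano
```

On a real system, resolve() would be backed by links or wrapper scripts plus a central configuration file, so that a generic command such as "editor" launches the configured default while every registered program remains individually launchable.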

Hans-Georg Eßer

About the author:

Hans-Georg Eßer is editor-in-chief of the two German Linux publications LinuxUser and EasyLinux and the author of several Linux books.


Building Enterprise VPNs using Open Source Tools by Ken Bantoft Wednesday 11:30

As Linux scales its way into the enterprise, features like high availability and redundancy become more and more important.

This presentation deals with both the concepts and the implementation of building enterprise-grade VPNs, using various applications, protocols and tools to add redundancy at each layer:

  • Zebra (zebra-ng) (BGPv4)
  • GRE (Generic Routing Encapsulation)
  • FreeS/WAN (IPSec)
  • Heartbeat (from

We'll cover the built-in enterprise features of each application/protocol, and how to put them together into a high-availability setup.

Ken Bantoft

About the author:

Ken Bantoft started programming in 1988, and successfully avoided doing it as a full-time job until 2002. He opted instead to focus on Unix, networking, and, starting in 1996, Linux. He now spends his days doing systems integration and writing software for the Canadian biotech company MDS Proteomics. Beginning at OLS 2002, he started working alongside the FreeS/WAN project, integrating various patches into his own fork of their code. He now works directly on the project, as well as on several other open source efforts, including OpenZaurus/OpenEmbedded.

Ken's previous jobs include Team Leader of Linux Services at IBM Canada and Network Engineer for a large Canadian bank. His FreeS/WAN-related work is partially funded by Astaro AG, and the rest by insomnia. In his spare time, he can be found studying Oriental weapons in the company of his two cats.

The Linux Standard Base: the road to Linux compatibility by Mats Wichmann Wednesday 12:15

As the GNU/Linux operating system gained in popularity, worries that it could suffer from the fragmentation that hurt the UNIX industry drove prominent members of the open source community to convene the Linux Standard Base (LSB) project. In this paper we review the technology and concepts that make the LSB project work, the current status of the project, and examine challenges and future directions for the LSB.

The LSB is an application binary interface (ABI) specification and as such describes a set of programming interfaces and other services that will be available at link time and at run time. But a specification alone is not useful; we will examine how the LSB uses a three-pronged strategy of specification, test and implementation to arrive at a stable platform. We also look at the process by which the specification is developed, including community participation and voting rights earned through "sweat equity".

We next consider the progress the LSB has made at providing a binary compatibility contract between operating system and application, and discuss some current shortcomings and planned evolution. The developments underway for the forthcoming version 2.0 of the LSB include modularization of the specification to enable the development of usage profiles such as server, workstation, or embedded; and a full C++ ABI.

In the final sections, we describe the process by which individuals and groups can bring new candidates for standardization to the LSB project. Evaluating whether an existing project is ready for standardization requires answering questions about license openness, stability of interfaces, dependencies on other projects, general acceptance (i.e., is this project "best practice" in its field?), degree of support across distributions, demand for the feature to become a standard, and the availability of tests and of a sample conforming implementation to validate the specification that will be developed.

About the author:

Mats Wichmann has been a UNIX/Linux developer, consultant and trainer since 1981. At Intel since 2001, Mats works to enable Linux for large systems, a role which includes work as a core developer in the LSB project. Before coming to Intel he spent several years as a professional trainer with Learning Tree International and has developed Linux and Python courseware. He also spent several years on a previous successful binary standardization project, the MIPS ABI Group, where he was director of technology.

Additional author: Stuart Anderson, Free Standards Group

Kernel Level Security (again) by Philippe Biondi Wednesday 12:15

Security is a problem of trust. Having a system that offers services to the Internet and that can be trusted is very hard to achieve. Classical security models focus on the physical limits of the machine. We will see that it can be interesting to move the trust limit to the boundary between user space and kernel space, and that it is still possible to enforce a security policy from this trusted place.

We will see some practical ways to have that work in a modern monolithic kernel (Linux), with some small code examples.

We will also cover some other practical aspects with a review of implementations that exist for Linux kernels, with a focus on, and examples from, the Linux Security Modules (LSM) framework.

Philippe Biondi

About the author:

Philippe Biondi is co-author of LIDS and the author of Scapy and shellforge. He works as a security consultant for Arche.


What to expect in the Linux 2.6 kernel by Theodore Ts'o Wednesday 14:30

Linux 2.5 has been in the works for the past 18 months, and in that time many exciting changes have been made by the dedicated team of kernel developers. This talk begins with a short history of the Linux 2.4 kernel, what went well and not so well with its release, leading up to the start of the 2.5 development series in February 2002. We will discuss the various improvements made to the Linux 2.5 kernel, from better scalability on very large machines to better support for embedded applications. These improvements cover virtually every area of the kernel, especially the VM, scheduler, block I/O, sound, and module loader subsystems.

Ted Ts'o

About the author:

Theodore Ts'o has been a C/Unix developer since 1987, and a Linux kernel developer since September 1991. He led the development of Kerberos V5 at MIT for seven years, and is the primary author and maintainer of the ext2/ext3 filesystem utilities. Theodore currently serves on the board of the Free Standards Group and contributes to the development of the Linux Standard Base. Theodore has attended every Linux-Kongress since its founding, and is looking forward to attending this 10th Linux-Kongress.

The OpenAntivirus Project by Kurt Huwig, Rainer Link Wednesday 14:30

The OpenAntivirus project aims to deliver virus protection on an open source basis. The talk covers the core techniques used for virus detection and digital signatures on virus databases. A special focus is given to the implementation of on-access virus scanning on Samba servers (via the Samba VFS) using the Internet Content Adaptation Protocol (ICAP).
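
The signature-matching core of such a scanner, plus an integrity digest over the signature database, can be sketched in a few lines (the signatures and the flat database format are invented for illustration; this is not OpenAntivirus code):

```python
import hashlib

# Toy signature database: name -> byte pattern. The patterns and the
# flat-dictionary format are invented for illustration only.
SIGNATURES = {
    "Test-Signature-1": b"FAKE-VIRUS-PATTERN",
}

def db_digest(signatures):
    # Digest over the whole database, so clients can verify that the
    # signature file they downloaded is authentic and untampered.
    h = hashlib.sha256()
    for name in sorted(signatures):
        h.update(name.encode() + b"\0" + signatures[name] + b"\0")
    return h.hexdigest()

def scan(data, signatures):
    # Core detection technique: search the data for known byte patterns.
    return [name for name, pattern in signatures.items() if pattern in data]

clean = b"just an ordinary file"
infected = b"header FAKE-VIRUS-PATTERN trailer"
print(scan(clean, SIGNATURES))        # -> []
print(scan(infected, SIGNATURES))     # -> ['Test-Signature-1']
```

In an on-access setup, the scan step runs when a file is opened or written on the server, e.g. hooked in via the Samba VFS or requested over ICAP.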

Kurt Huwig

About the authors:

Kurt Huwig is a core developer of the OpenAntivirus project and managing director of iKu Systemhaus AG in Saarbrücken. He installed his first commercially used Linux server in 1996, created some small patches for the kernel and loves to code in Java.

Rainer Link received his diploma in "Computer Networking" from the University of Applied Sciences, Furtwangen, Germany, this year. He has been interested in computer viruses and anti-virus technology since 1991. In 1999 he joined the AMaViS (A Mail Virus Scanner) development team. Together with Howard Fuhs he founded the project in 2000.


Kernel Janitors: State of the Project by Arnaldo Carvalho de Melo Wednesday 15:15

The Kernel Janitors project has been cleaning up the kernel for quite some time. In this talk I'll present what has been done, tasks that kernel hackers have agreed to add to the TODO list, tools used to help in the process, 2.5 changes that need to be propagated through the tree, etc.

About the author:

I am one of the Conectiva founders and one of the lead developers of Conectiva Linux. I started out doing translations for pt_BR, then worked on the internationalization (i18n) of minicom, net-tools, fetchmail, util-linux, etc., then went on to more fun things: developing and maintaining a driver for an X.25 serial sync card (the cyclom2x, made by Cyclades), the IPX and LLC network stacks, and random fixes in the Linux kernel. Now I'm trying to get some time in my home lab to help other free software projects (kernel janitoring, NetBEUI, AppleTalk, Samba, etc.) and get back to the old, old days of actually having fun coding :-)


Strong Cryptography in the Linux Kernel by Jean-Luc Cooke, David Bryson Wednesday 15:15

In 2.5, strong cryptography has been incorporated into the kernel. This inclusion was the result of several motivating factors: removing duplicated code, harmonizing IPv6/IPSec, and the usual crypto-paranoia. The authors will present the history of the Crypto API, its current state, which kernel facilities are currently using it, which ones should be, plus new future applications including:

  1. hardware and assembly crypto drivers,
  2. kernel module code-signing,
  3. hardware random number generation,
  4. file system encryption, including swap space
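
The kernel Crypto API itself is in-kernel C code, but the services it offers - digests, ciphers, keyed hashes - are the familiar ones. Purely as a user-space analogy (the key and data below are made up for the example):

```python
import hashlib
import hmac

# User-space analogy of two such crypto services: a plain digest (e.g.
# for integrity checks) and a keyed digest (HMAC), the kind of primitive
# that applications such as code-signing build upon.
data = b"example module image"
digest = hashlib.sha1(data).hexdigest()
mac = hmac.new(b"secret-key", data, hashlib.sha1).hexdigest()
print(digest[:8], mac[:8])   # first bytes of each hex digest
```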

About the authors:

still to come...


Pursuing the APs to Checkpointing with UCLiK by Mark Foster Wednesday 16:30

Checkpointing is the technique of storing a running process's state in such a way that the process can be rolled back to, or restarted from, the point at which the checkpoint was taken. We believe the ideal checkpointing system can best be described by the three APs of checkpointing: Any Process on Any Platform at Any Point in time. UCLiK is a work in progress aimed at achieving two of the three APs: Any Process at Any Point in time. UCLiK is designed as a kernel module and thus operates at the system level. By checkpointing at the system level, we can achieve greater levels of transparency than other checkpointing systems. UCLiK is unlike many other checkpointing systems in that it provides complete transparency for the application programmer: it requires no modifications to the application programmer's code, no special checkpointing libraries to relink or recompile against, and no special programming language or compiler. Furthermore, UCLiK does not rely on logging to create checkpoints, and since it is designed as a kernel module, it does not require any modifications to the running kernel's code.

UCLiK works by stopping the execution of a process, saving that process's address space and kernel state to a file, and then terminating or continuing the execution of that process. By saving a checkpoint to a file, we can save it for a later restart or move it to another host for a restart. By moving the checkpoint file to another host, we achieve process migration. UCLiK inherits much of its framework from the work on CRAK. However, much of CRAK's support for items such as opened files, pipes, and sockets takes place at the user-level. UCLiK differs in that it provides support for these items at the kernel-level. In addition, UCLiK provides support for storing the contents of opened files in a checkpoint, identifying deleted and modified files during a restart, restoring the file pointer, and restoring a process's original PID. We have also developed a tool that allows one to restart a process in the terminal or pseudo-terminal of one's choice.

Future work includes, but is not limited to, adding support for temporary files, the ability to checkpoint only one end of a pipe, a PID reservation system, and support for migration of checkpointed sockets. Our goal with UCLiK is to provide a kernel-level checkpointing system that can checkpoint Any Process at Any Point in time. Such a system would not only support process migration but also be a great asset to the system administrator, who could checkpoint seemingly runaway processes rather than kill them, and checkpoint user processes before maintenance or preventive shutdowns.
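
UCLiK does all this transparently, at the kernel level, for whole processes. As a rough user-space analogy of the checkpoint/restart idea - save state to a file, later restore it and continue, possibly on another host - consider this sketch:

```python
import os
import pickle
import tempfile

# User-space analogy of checkpoint/restart: persist a computation's
# state to a file, then resume from exactly that point. UCLiK does
# the equivalent for whole processes, inside the kernel.

def step(state):
    state["i"] += 1
    state["total"] += state["i"]
    return state

state = {"i": 0, "total": 0}
for _ in range(3):
    state = step(state)               # run up to i == 3

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.bin")
with open(ckpt, "wb") as f:
    pickle.dump(state, f)             # take the checkpoint

with open(ckpt, "rb") as f:
    restored = pickle.load(f)         # restart from the checkpoint
for _ in range(2):
    restored = step(restored)         # continue where we left off

print(restored["i"], restored["total"])   # -> 5 15
```

Moving the checkpoint file to another machine before restoring it is exactly the process-migration scenario described above, except that UCLiK also has to capture the kernel-side state (open files, pipes, sockets, the PID) that no user-space program can serialize for itself.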

Mark Foster

About the author:

Mark Foster is currently working on his PhD in computer science at the University of Florida. His bachelor's and master's degrees were also in computer science. He has been a Linux user for about three years. He is interested in many aspects of computing, but especially checkpointing, fault tolerance, operating systems and security. Outside of computers, his main hobbies are running and movies.

OpenSC - Smart Cards on Linux by Olaf Kirch Wednesday 16:30

OpenSC is a framework for crypto smart cards on Linux, i.e. cards that can store keys and certificates, and can be used to safely compute digital signatures (and decrypt encrypted data) without exposing the key to the host operating system. These cards can be used in a number of applications, ranging from encrypted/signed email to authentication (single sign-on) to eGovernment applications.

OpenSC provides libraries and utilities to use smartcards that have an ISO 7816-3 compliant file system on them, including Gemplus cards, Cryptoflex, Starcos SPK, etc, as well as various USB tokens such as the Aladdin eToken and Rainbow iKey.

OpenSC comes with a PKCS#11 module that can be used with Netscape/Mozilla, an OpenSSL engine, a PAM module, etc. Patches for OpenSSH and FreeS/WAN are also available.

The talk gives some basic background on smart cards, introduces some of the relevant names you come across all the time, and demos some applications of OpenSC.

About the author:

Olaf Kirch underwent a rapid metamorphosis from Unix user to Minix user to Linux enthusiast, and has been involved with Linux since the days of the 0.97.3 kernel. He is the author of the Linux Network Administrator's Guide, has written and contributed to various smaller system utilities and services, and has been the principal NFS scapegoat for several years. He has also been very active in the Linux security community for a long time.

Olaf admits to having worked for Caldera. In 2002, he moved on to SuSE, where he is currently working for the security team, as well as doing various kernel related work, including NFS and IPv6.

VLANs and GVRP on Linux: quickly from specification to prototype by Pim Van Heuven Wednesday 17:15

The Generic VLAN Registration Protocol (GVRP) is an IEEE standard for setting up Virtual LANs (802.1Q, [1]). GVRP removes the burden of manually installing and maintaining VLANs from the network administrator's hands.

This talk presents how a working prototype of a GVRP switch for Linux can be built using a "writing code is good, but integrating existing code is much better" approach. The talk is split into two parts: the first describes the development of the signaling component of the GVRP network, the GVRP daemon; the second describes the Click router architecture and the modifications made to support VLANs.

The IEEE 802.1Q standard contains an example implementation of the GVRP protocol. This example code lacks the platform-dependent parts (encapsulation and decapsulation of GVRP messages, timers, logging, memory allocation, and VLAN switching and tagging). The first part of the presentation starts by explaining how the example code can be extracted from the IEEE specification using ghostview, ps2ascii and grep. It then proposes some design decisions to support the timers, the logging and the main event queue, and illustrates how Libnet and libpcap can be used to send and receive raw VLAN-tagged MAC frames.
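
The extraction step can be pictured as a simple line filter over the ps2ascii output (the sample text and the heuristic below are invented for illustration and are much cruder than what the authors describe):

```python
import re

# Filter the plain text produced by ps2ascii, keeping only lines that
# look like C code. The sample text, function names and matching rule
# are invented; the real spec text and extraction are more involved.
SPEC_TEXT = """\
The following procedure registers an attribute with GID.
void gvrp_register(Gid *gid, unsigned attribute)
{
    gid_register(gid, attribute);
}
The figure below shows the resulting state machine.
"""

CODE_LINE = re.compile(r"[{};]\s*$|^\s*(void|int|unsigned|if|for|while)\b")

code = [line for line in SPEC_TEXT.splitlines() if CODE_LINE.search(line)]
print(len(code))   # the four code lines survive, the prose is dropped
```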

The second part of the presentation introduces the Click modular router [2]. Click is a software architecture for building flexible and configurable routers and it consists of a Linux kernel patch and a user-level driver. A router can be configured by using a declarative language readable by humans and easily manipulated by tools. We present an example configuration for an Ethernet switch that supports VLANs. The configuration describes the elements used and their interconnections. An element is a basic packet processing module that implements a simple router function such as queuing or scheduling. By writing new elements it is very easy to extend a Click router. We developed a new element that supports VLAN tagging and switching. This new element combined with other standard Click elements makes the Click a fully VLAN compliant switch. This VLAN element was contributed back to the Click project.

The flexibility, the ease of configuration and extension of a Click router does not jeopardise its forwarding performance. To illustrate this the talk also presents measurements of the forwarding performance of a Click router compared to a standard Linux router [3].

In conclusion, this paper presents a number of techniques that can be used to reduce the time to develop a prototype of a networking protocol. This is illustrated with the GVRP protocol but these techniques can also be applied to other network protocols and are presented here because they might be useful for other developers.

[1] IEEE, "802.1Q - Virtual LANs",
[2] PDOS and ICIR,"The click modular router project",
[3] E. Kohler, R. Morris, B. Chen, J. Jannotti and M. F. Kaashoek, "The Click modular router", ACM Transactions on Computer Systems, Volume 18, Number 3, pp. 236-297, 2000.

About the author:

Pim Van Heuven received his Ph.D. in applied science (computer science) in June 2003. He currently works in the INTEC Broadband Communications Networks (IBCN) group and is starting his own Linux and open source focused company in parallel. His fields of expertise include open source, networking and web services. In 2001 he open-sourced the DiffServ over MPLS for Linux project and has maintained it ever since. He has published several papers, mainly on rerouting in IP networks.

Frederic Van Quickenborne specialises in Layer 2 Ethernet, especially QoS, VLANs, RSTP and MSTP. He is also working on different projects concerning video-streaming problems and implements them on the Click Modular Router toolkit.

Filip De Greve investigates the design of reliable Layer 2 networks (Ethernet, RPR, MPLS) and is responsible for the design of a tunnel set-up mechanism with QoS guarantees in Ethernet-based access networks in the PANEL project.

Brecht Vermeulen researches management frameworks for QoS networks, a.o. DiffServ, for which he has developed a CORBA based management framework that has been tested and prototyped on a network of some 10 Click based PC QoS routers.

The research interests of Steven Van den Berghe are IP Quality of Service, high- speed network monitoring and traffic engineering of IP networks. His focus is on measurement-based adaptive Traffic Engineering in a DiffServ/MPLS/MultiPath environment.

Filip De Turck's main research interests include scalable software architectures for telecommunication network and service management, performance evaluation and optimization of routing, admission control and traffic management in telecommunication systems.

Piet Demeester has been active in the research on broadband communication networks since 1992. He published over 400 papers and has been member of numerous technical program committees.

Using the OpenPGP Smartcard in a POSIX environment by Werner Koch Wednesday 17:15

Today's smartcards are ubiquitous and an important security feature of a lot of appliances - most notably cell phones. Despite what one might assume, their use on desktop computers is still not very widespread. I will look into the reasons why their dissemination is so slow and how this can be changed.

As a solution we have specified the OpenPGP card and are ready to sell reasonably priced cards to end users. GnuPG has been enhanced to utilize the cards, and along with that a new modularized crypto infrastructure is now available so that different applications can make use of the cards; the most important ones are obviously PAM and SSH key storage. This paper describes the system and its design and gives examples of how it can be utilized by other applications.

The OpenPGP card is an ISO 7816-4/-8 compliant application with an open specification, so that every vendor is able to produce such cards. In contrast to X.509 cards, a CA is not required: key validation issues are not defined and are left to the software using the card. For example, the keys can simply be used for SSH user authentication in the standard way with the authorized_keys file. The card is able to generate keys on-chip so that the private key will never be seen outside of the card; this provides very good resistance against key compromise through remote attacks or physical attacks by average attackers. Optionally the card supports key import, so that existing keys can be used and a backup facility can be implemented. Our current software supports the CT-API as well as PC/SC, so that most of it can easily be ported to non-POSIX systems to allow the use of one card across heterogeneous systems.

About the author:

Werner Koch was born in 1961; he is married and lives near Düsseldorf, Germany.

After school, alternative service and an apprenticeship as an electrician, he started to work as a software developer in 1985, while also studying computer science at the FH Dortmund. For several years he was with ABIT Software as principal designer of their software framework. In 1991 he began to work as a free-lance consultant and developer; he founded the Free Software company g10 Code in 2001.

Koch has been a radio amateur since the late seventies and became interested in software development at about the same time. Over the years he has worked on systems ranging from small CP/M systems to mainframes, languages from assembler to Smalltalk, and applications from drivers to financial analysis systems. He has used GNU/Linux as his main development platform since 1993, is the principal author of the GNU Privacy Guard, and is a founding member of the FSF-Europe, where he acts as German Vice-Chancellor.


Multiple Linux user databases and the samba way to merge them by Volker Lendecke Thursday 9:30

Unix system administrators have to take care to keep their user databases consistent across machines. Several mechanisms exist to make this possible or even easy, such as rsync of /etc/passwd, NIS and LDAP. If planned and done well, these mechanisms work fine even for a large number of machines and users.

But in the real world, diverging uid/gid allocations have to be maintained and often merged. Sad as it is to say, this is an area where the Unix world can learn from Windows NT: that system is capable of running more than one user database on a single machine via trust relationships.

Samba 3 provides two new features that might give a solution to this problem:

  • The support for NT-style trust relationships makes it possible to have different user databases available on a Samba domain using automatic winbind id allocation.
  • A centralized winbind id allocation scheme solves the problem that different machines, as domain members, might assign different user and group IDs to the domain users and groups.
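
The centralized allocation scheme boils down to a single shared SID-to-uid table with first-come, first-served assignment from a reserved range (a deliberate simplification with made-up SIDs; not Samba's actual idmap code):

```python
# Sketch of centralized id allocation: one shared SID-to-uid table,
# ids handed out first-come first-served from a reserved range, so
# every domain member resolves a given SID to the same uid. The SIDs
# and the range are made up; this is not Samba's idmap implementation.

class IdAllocator:
    def __init__(self, low=10000, high=20000):
        self.next_id = low
        self.high = high
        self.mapping = {}             # SID -> uid, shared by all members

    def sid_to_uid(self, sid):
        if sid not in self.mapping:
            if self.next_id > self.high:
                raise RuntimeError("id range exhausted")
            self.mapping[sid] = self.next_id
            self.next_id += 1
        return self.mapping[sid]

alloc = IdAllocator()
uid_a = alloc.sid_to_uid("S-1-5-21-1-2-3-1104")
uid_b = alloc.sid_to_uid("S-1-5-21-1-2-3-1105")
# A later lookup of the first SID (e.g. from another member server
# asking the central allocator) yields the same uid:
print(uid_a, uid_b, alloc.sid_to_uid("S-1-5-21-1-2-3-1104"))  # -> 10000 10001 10000
```

Without such a shared authority, each member server would run its own allocator and the same domain user could end up owning files under different uids on different machines.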

This talk will present the problems of merging different user databases, the way Samba can take part in the NT way of doing this, and how this might be a possible solution even for Unix-only shops.

About the author:

Volker Lendecke is a member of the Samba core development team and co-founder of Service Network GmbH in Göttingen, Germany. The first practical migrations to Samba 3 are done, so he may be able to give some valuable hints.

Scalable Network Programming by Felix von Leitner Thursday 9:30

This talk is about programming network servers that can handle 10,000 or more simultaneous connections. I will compare traditional methods with recent advances in the area, show latency graphs, and talk about the bottlenecks the programmer needs to be aware of.
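
A central technique for this class of server is readiness notification (epoll on Linux 2.6) driving a single event loop, instead of one thread or process per connection. A minimal sketch using Python's selectors module, which picks epoll where available (the talk itself concerns servers written in C; this only illustrates the pattern):

```python
import selectors
import socket

# One event loop, one thread, many connections: register sockets with
# the selector and act only on those reported ready. DefaultSelector
# uses epoll on Linux. A real server would also handle partial writes,
# errors and disconnects.
sel = selectors.DefaultSelector()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

client = socket.create_connection(listener.getsockname())
client.sendall(b"ping")

reply = None
while reply is None:
    for key, _ in sel.select(timeout=1):
        if key.fileobj is listener:
            conn, _ = listener.accept()        # a new connection is ready
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)      # this connection has data
            if data:
                key.fileobj.sendall(data)      # echo it back
                reply = client.recv(4096)
print(reply)                                   # -> b'ping'
```

The same loop structure scales to thousands of registered sockets, because the cost per iteration depends on the number of *ready* connections, not on the total number of connections.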

The target audience is people who already know how to write networking software under Unix/Linux and want to know about the scalability part. If you are a Linux evangelist in need of new or better arguments why Linux is great, this talk may also be for you.

About the author:

Felix is co-founder of a small security company called Code Blau.

He has been a Linux user since Linux 0.98 and likes to tinker with the high-performance and high-scalability aspects of code. His most important recent project is the diet libc, a small glibc replacement written to get more performance and better scalability out of his Linux machines.

umlsim - A UML-based simulator by Werner Almesberger Thursday 10:15

umlsim provides an environment that allows the use of regular Linux kernel or application code in event-driven simulations. It consists of an extension of user-mode Linux (UML) to deterministically control the flow of time as seen by the UML kernel and applications running under it, and a simulation control system that acts like a debugger, and that can be programmed in a Perl-like scripting language.

Since umlsim only requires minor changes to the UML kernel, simulations use the original kernel code and do not require code to be translated back and forth between the kernel and the simulation environment, as is frequently the case with other simulators. Furthermore, applications run under this UML kernel without any changes at all.

The scripting language of umlsim provides - besides most of the functionality found in languages like C or Perl - only the basic primitives for controlling processes (such as the UML kernel) and accessing their data. Higher-level elements, such as taps into kernel subsystems and components for model creation, are provided by libraries written in this scripting language.

One of the first uses of umlsim is to examine the behaviour of Linux TCP, but it will also be useful for many other applications in research and kernel development, including regression tests, examination of race conditions, validation of configuration scripts, and performance analysis.
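
The essential trick is the virtual clock: time jumps from event to event instead of following the wall clock, which makes runs deterministic and repeatable. A minimal event-driven scheduler of that kind (unrelated to umlsim's actual implementation) looks like:

```python
import heapq

# Minimal event-driven simulator: a virtual clock that jumps straight
# to the next scheduled event. Runs are fully deterministic, unlike
# anything driven by real elapsed time. Unrelated to umlsim internals.

class Simulator:
    def __init__(self):
        self.now = 0.0
        self.events = []              # heap of (time, seq, callback)
        self.seq = 0                  # tie-breaker for equal times

    def schedule(self, delay, callback):
        heapq.heappush(self.events, (self.now + delay, self.seq, callback))
        self.seq += 1

    def run(self):
        while self.events:
            self.now, _, callback = heapq.heappop(self.events)
            callback()

log = []
sim = Simulator()
sim.schedule(2.0, lambda: log.append(("timeout", sim.now)))
sim.schedule(0.5, lambda: log.append(("packet", sim.now)))
sim.run()
print(log)    # -> [('packet', 0.5), ('timeout', 2.0)]
```

In umlsim the "callbacks" are far richer - the controlled UML kernel itself runs between events - but the deterministic advance of simulated time is the same idea.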

This paper describes the design and implementation of umlsim, gives a brief overview of the scripting language, and concludes with a real-life usage example.

Werner Almesberger

About the author:

Werner Almesberger got hooked on Linux in the days of the 0.12 kernel, while studying computer science at ETH Zurich, and he has been hacking the kernel and related infrastructure components ever since, both as a recreational activity and as part of his work, first during his PhD in communications at EPF Lausanne, and later also in industry. Being a true Linux devout, he moved closer to the home of the penguins in 2002, and now lives in Buenos Aires, Argentina.

Contributions to Linux include the LILO boot loader, the initial RAM disk (initrd), the MS-DOS file system, some of the Linux port to the Psion S5 PDA, and much of the ATM code.


WvStreams: An Easier Way to World Domination by Dave Coombs Thursday 10:15

WvStreams is an open-source C++ networking library developed over the last few years with the following major goal: make coding easy, without sacrificing performance. Really.

How easy? With WvStreams, it took one weekend to write Tunnel Vision, a simple, secure VPN. With WvStreams, it took one evening to write Retchmail, the world's fastest and smartest POP3 mail retriever. These projects, along with WvStreams and our other open-source projects, are available for your enjoyment.

Our rule is: "Any amount of code ugliness is okay if it removes more ugliness than it adds." The internal parts of WvStreams are not for the faint of heart, but the code that uses the library ends up being very clean, easy to write, easy to understand, and (yes) still very fast.

This paper will quickly describe what areas of your life WvStreams can improve. I will show you simple TCP streams, encoders, buffers, crypto, lists and tables, task switching, Gtk and Qt integration, and other goodies.

I will then present spine-tingling examples that do crazy, ordinarily complicated things, using almost no lines of code, including a cheesy, SSL-enabled replacement for IRC that runs on both Linux and Windows.

Dave Coombs

About the author:

Dave Coombs co-founded NITI in 1997, with Avery Pennarun, in order to preclude any chance that university might not be quite stressful enough. Just kidding -- it was actually because of market research. They found that a lot of things really suck, and they wanted to fix some of them.

Since then, Dave has been involved with most of NITI's open-source projects. He's the primary author of WvDial, and also wrote large parts of WvStreams, the C++ networking library that makes it all possible. These days, he manages a gang of unruly programmers at NITI's R&D centre in Montreal, but he still sometimes finds time to do real work too.

Related links:

openMosix, a Linux Kernel Extension for Single System Image Clustering by Matt Rechenburg Thursday 11:30

Please note: this abstract was written by Moshe Bar, who recently had to cancel his talk; Matt's talk may differ from it. The openMosix Linux kernel extension turns networked computers into an SSI cluster. The openMosix Project is both Open Source and an open project. The open nature of our project brings good ideas and contributions from our users. We share our plans, accept code contributions, and value outside suggestions. This openness accelerates the project's development. openMosix creates a High Performance Computing (HPC) platform from commodity components.

openMosix is installed on each participating node of a LAN. Since all openMosix extensions are inside the kernel, applications automatically and transparently benefit from this distributed computing concept. There is no need to program those applications specifically for openMosix. The cluster behaves much as does a Symmetric Multi Processor (SMP), but this solution scales to well over a thousand nodes which can each be SMPs. Once installed and booted, the openMosix nodes see each other in the cluster and start exchanging information about their load level and resource usage. Processes originating from any one node, if that node is too busy compared to others, can migrate to any other node, repeatedly, during their lives. openMosix continuously attempts to optimize the resource allocation.
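The load-information exchange and migration decision described above can be sketched roughly as follows (a simplified illustration with hypothetical names and threshold; the real openMosix algorithm uses much richer resource statistics):

```c
#include <stddef.h>

/* Simplified sketch of the migration decision openMosix makes:
 * each node compares its own load against the loads its peers
 * report, and a process migrates when the imbalance exceeds a
 * threshold. Names and the threshold are illustrative only. */

#define MIGRATE_THRESHOLD 25  /* hypothetical imbalance threshold */

/* Return the index of the least-loaded peer if our own load exceeds
 * it by more than the threshold, or -1 if the process should stay. */
int pick_migration_target(int own_load, const int *peer_load, size_t n)
{
    size_t i, best = 0;

    if (n == 0)
        return -1;
    for (i = 1; i < n; i++)
        if (peer_load[i] < peer_load[best])
            best = i;
    if (own_load - peer_load[best] > MIGRATE_THRESHOLD)
        return (int)best;
    return -1;  /* cluster is balanced enough: stay home */
}
```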

The remainder of this paper discusses how this is done, what applications benefit the most and least, and what enhancements are planned to increase the number and type of applications that benefit.

About the author:

Matt Rechenburg agreed on short notice to take over this talk, so he did not have time to add his bio here.

Best practices in the development of IPv6 networking software by Mauro Tortonesi Thursday 11:30

The deployment of the new IPv6 Protocol creates new challenges for networking software designers and developers.

Not only does the next generation of networking applications have to support the new IPv6 protocol, it also has to be backward-compatible with the old, reliable and die-hard IPv4 protocol. Moreover, the applications must be very flexible and highly configurable in order to work in the wide range of mixed IPv4 and IPv6 environments that we will encounter during the transition to IPv6.

In most cases, all the complexity of the applications must be hidden from the end user, but, especially in the first period of the transition to IPv6, there will be cases in which the users must be allowed to have full control over the applications' behaviour, to ease debugging and deployment of services in mixed IPv4 and IPv6 environments.

To facilitate the complex task facing networking software designers and developers, and to speed up the process of porting software to IPv6, there is a need for an in-depth guide that shows a "best practice" approach to the development of modern IPv6-enabled networking applications.

The first section of this paper will provide a brief presentation of the IPv6 networking protocol, explaining why the switch from IPv4 to IPv6 is needed and what kinds of problems we will have to face during the transition to IPv6.

The next section of the paper will focus on the problem of porting the software to IPv6: the new BSD Socket API defined by RFC3493 and RFC3542 will be presented, with many significant examples of IPv6-compliant networking code.
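As a sketch of the RFC 3493 style of protocol-independent code such a porting guide deals with, a typical client connect loop looks roughly like this (`connect_to` is an illustrative helper, not taken from the paper, and error reporting is kept minimal):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

/* Protocol-independent client connect using the RFC 3493 API.
 * getaddrinfo() returns a list of candidate addresses (IPv6 and/or
 * IPv4, depending on DNS and local configuration); we try each in
 * turn, which is the core of IPv4/IPv6 backward compatibility. */
int connect_to(const char *host, const char *service)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, service, &hints, &res) != 0)
        return -1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                    /* success */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;  /* connected socket, or -1 */
}
```

Note that, unlike legacy gethostbyname() code, nothing here hard-codes an address family, so the same binary works in v4-only, v6-only, and mixed environments.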

The correct usage of GNU autoconf in the development of IPv6-enabled open source software is a key issue in achieving backward compatibility and portability for the applications, and will be addressed in detail in the following section of the paper.

Finally, the last part of the paper will present, as case studies, some of the software developed by the author in the last 3 years: netcat6 (an advanced IPv6-enabled netcat clone), libds6 (a library that contains a lot of routines to speed up the development of IPv6-enabled software), example-ipv6-package (a simple software package that shows the best practices in the development of IPv6-enabled software), and the IPv6 ports of wget and oftpd.

Mauro Tortonesi

About the author:

Mauro Tortonesi is a Ph.D. student at the University of Ferrara, where he received a degree in Electronics Engineering in fall 2002. Co-founder of the Ferrara Linux Users Group, Mauro has been a Linux activist since 1997.

He started working on the development of IPv6 software for the Linux operating system back in 1999, when he ported the well-known netkit-base and bsd-finger applications to the new networking protocol. In 2000, he founded the Project 6 initiative and gave his first talk about IPv6 at the Italian Linux Meeting in Bologna. In December 2002, together with Simone Piunno and Peter Bieringer, Mauro founded the Deep Space 6 initiative to promote the deployment of the new IPv6 protocol, especially targeting the Linux community. Deep Space 6 aims at being a reference website for users who are interested in the new IPv6 protocol and in its usage with the Linux operating system.

Related links:

MigSHM, Migrating Shared Memory on openMosix by Kris Buytaert Thursday 12:15

Migshm is a DSM (Distributed Shared Memory) patch for openMosix. It enables migration of processes that use shared memory on openMosix. DSM is being developed by the MAASK group from India. There have been some reports of Migshm enabling Apache to migrate to other nodes. This evolution opens up a completely new class of applications that can be clustered by openMosix.

The openMosix software package turns networked computers running GNU/Linux into a cluster. It automatically balances the load between different nodes of the cluster, and nodes can join or leave the running cluster without disruption of the service. The load is spread out among nodes according to their connection and CPU speeds.

Since openMosix is part of the kernel and maintains full compatibility with Linux, a user's programs, files, and other resources will all work as before without any further changes. The casual user will not notice the difference between a Linux and an openMosix system. To her, the whole cluster will function as one (fast) GNU/Linux system.

Currently, one of the main limitations of openMosix is that applications that use shared memory, as well as multi-threaded applications, do not migrate on the cluster. Hence these applications cannot benefit from the load-balancing features of openMosix. Migshm aims to fill this gap.

Migshm stands for Migration of shared memory. It is not a complete DSM as of now, but it is sufficient for shared memory applications to benefit from openMosix.

The idea behind this talk/paper is to find out the current state of MigSHM, which applications do benefit from this new step in openMosix, and how different applications react to it.

We will try to give the audience an overview of its current status and its limitations. MigSHM still needs many users to test this environment as thoroughly as possible.

About the author:

Kris Buytaert is a Linux, Security and Open Source consultant. He has consulting and development experience with multiple enterprise-level clients and government agencies. In addition to his technical experience, he is a team leader who likes to deliver his projects on time. He is a contributor to the Linux Documentation Project and the author of various technical publications.

He is currently the maintainer of the openMosix HOWTO and within this role he is actively testing and documenting the integration of openMosix with different platforms and applications.

Related links:

Linux on FPGAs by Peter De Schrijver Thursday 12:15

Programmable electronic components (FPGAs) have been part of the hardware engineer's toolbox for quite some time now. Recently, however, manufacturers started to include hard macro cells such as CPU cores in their FPGAs. This transforms the FPGA into a programmable System on Chip (SoC), which allows the designer to balance functionality between hardware and software for an optimal system implementation without needing extra components or board space. But it also creates new design challenges, as the FPGA design engineers (speaking Verilog or VHDL) and the software designers (speaking C) have to work more closely together to make this happen. We had an opportunity to port Linux to such a new FPGA platform based on the Xilinx Virtex II Pro. As always, it all turned out to be more involved than we expected, but it is here now and we learned a lot about working with the hardware people. This paper will start with a short introduction to FPGAs. We will then describe how we ported Linux to the Virtex II Pro and what kinds of problems we encountered along the way, and will conclude with some tips and advice.

About the author:

Peter De Schrijver was one of the first Linux adopters and kernel hackers in Belgium. He earned a Master in informatics at the KU Leuven and started working for Alcatel on large public telephony switching systems as a design and test engineer. He moved on to Alcatel research, where he worked on QoS in internet access networks. He co-authored RFC 3203 on DHCP reconfiguration. Today he is the CTO of Mind NV, a Belgian company focusing on free software operating systems for embedded applications.

Future directions in Linux HA by Lars Marowsky-Brée Thursday 14:00

The Linux HA project has made a lot of progress since 1999. Its main application, heartbeat, is probably one of the most widely deployed two-node failover solutions on Linux, and has proven to be very robust. Linux HA not only has a large user base; its modular nature has also attracted many developers. However, a lot of work remains to be done and some is even on-going (though patches are always accepted).

This talk will first briefly summarize the current status and then cover the areas of development: infrastructure enhancements (consensus cluster membership, messaging, barrier services), local resource management, the Cluster Resource Manager to allow heartbeat to exploit more than two nodes, the additions for telco users (Carrier Grade Linux), the adoption and first implementations of the Open Clustering Framework standards, and tighter integration with projects such as the distributed replicated block device, clustered EVMS, OpenGFS, LVS and others.

It will provide insight for potential contributors looking for a great project to work on.

Lars Marowsky-Brée

About the author:

Lars Marowsky-Brée currently works in the SuSE Labs on High Availability and Cluster related topics, ranging from Cluster Resource Management, Multipath IO to Cluster Administration.

Using Linux since 1994, his initial involvement with network operations (aka BOFH) provided him with lots of real-life experience about the various reasons for service outages and the one common factor. He soon began to appreciate the complexities in keeping a service running in the face of malicious software, possessed hardware, well-intentioned users and the world at large and loves to rant about it; this has kept him employed and invited to conferences ever since.

In early 2000, he jumped at the chance to work on Linux High Availability exclusively and joined SuSE. Being a natural pessimist, he enjoys this work a lot, and being a creative paranoid certainly helps.

Related links:

Technical requirements to use Linux for a honeynet by Ralf Spenneberg Thursday 14:00

This talk will describe the technical requirements for using Linux to set up a real honeynet. Specifically, two tools will be covered in this talk: snort-inline and Sebek2.

Snort-inline is an Intrusion Prevention System placed between the honeynet and the rest of the world. It runs only on Linux, using Netfilter. Each packet going through the Netfilter chains is queued to snort-inline running in userspace. Snort-inline is a modified Snort which may alert on, log, and drop the packet. By dropping packets, Snort can prevent anybody in the honeynet from attacking the outside.

Sebek2 is a tool for local data capture. It is a kernel module which captures all activity by the attacker and logs it over the network to a separate system. Sebek is available for Linux, Solaris and OpenBSD.

Ralf Spenneberg

About the author:

The author has used Linux since 1992 and has worked as a system administrator since 1994. During this time he has worked on numerous Windows, Linux and UNIX systems. For the last 5 years he has been working as a freelancer in the Linux/UNIX field. Most of the time he provides Linux/UNIX training. His specialty is network administration and security (firewalling, VPNs, intrusion detection). He has developed several training classes used by Red Hat and other IT training companies in Germany. He has spoken at several SANS conferences and even more UNIX/Linux-specific conferences. He was chosen to be a member of the program committee of the Linux-Kongress and the GUUG-Frühjahrsfachgespräch. Last year he published his first German book, "Intrusion Detection für Linux Server". Right now he is writing his second book, "VPNs mit Linux".

Related links:

Further directions in storage replication via IP networks by Philipp Reisner Thursday 14:45


High availability is a hot topic for service companies which offer their services via electronic networks. As Linux is in a position to gain market share in newly built systems, there is a growing interest in HA solutions for Linux-based systems.

Quite a number of conventional HA solutions are being brought to Linux these days, and most of them are based on a shared storage device.

A shared storage system is not only a single point of failure; it is usually quite expensive as well.

The alternative approach is to replicate the storage via off-the-shelf networking equipment to a stand-by machine. Currently DRBD and SteelEye's "Data Replication" product are available to build such solutions based on Linux.

This work describes and compares these two solutions.


While DRBD takes a monolithic approach, with most of the functionality integrated into a single module, SteelEye's approach is to combine the functionality of Linux's software RAID1 driver and the network block device (NBD) driver.

Integration with cluster managers

DRBD is an open design. A glue layer for the heartbeat cluster manager comes with the software.

SteelEye's "Data Replication" is a proprietary product that integrates the MD+RAID1 approach with their LifeKeeper product.

I will present a solution to use the MD+RAID1 approach with the heartbeat cluster manager.

I will compare the architecture, performance, and usability of these two approaches and give an outlook on the future plans of both.

About the author:

Graduated from the Vienna University of Technology in computer science in 2000.

Since November 2001 MD at LINBIT, a provider of professional Linux services with a focus on high availability clustering.

Apart from DRBD, he is the author of mergemem (a modification of Linux's memory management system).

Related links:

The Asterisk Open Source PBX by Mark Spencer Thursday 14:45

Asterisk is a "PBX, IVR and ACD in software". Running on Linux, it provides all the call handling that you'd expect a classical Private Branch eXchange (PBX) to do, but it does much more than that: Asterisk ("*", as in "match all") is designed to connect to all kinds of standards-based telephony systems, both classical analog or ISDN devices and Voice over IP systems. Asterisk comes with support for SIP, MGCP and H.323, as well as its own lightweight IAX protocol. Asterisk's switching core is able to act as a bridge between all those protocols, interconnecting otherwise incompatible telephony systems. With suitable hardware, these interconnection capabilities extend even to the classical analog and digital telephony networks.

In addition to this switching core, Asterisk provides a rich set of telephony applications ranging from simple music on hold to complex Interactive Voice Response (IVR) systems. There is even support for Text-to-Speech and an interface for speech recognition systems. So Asterisk encapsulates the features of a soft switch and an ISDN-POTS-SIP-MGCP-H.323 gateway. It can act as a VoIP proxy, voicemail server or IVR platform, and can provide call queueing functions for ACD applications. In addition to all the features and applications it already has, it is easily extensible by adding new channel drivers or application modules.

After presenting Asterisk's overall architecture, this paper details some of its unique features, such as the IAX protocol (which allows you to tunnel 100 concurrent calls through the bandwidth of a standard Primary Rate ISDN interface) or the Asterisk Gateway Interface (AGI), which allows calls to interact with Perl or even shell scripts in a manner similar to CGI.
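To illustrate the CGI-like AGI interaction mentioned above: Asterisk passes the script a block of "agi_name: value" header lines on stdin, terminated by a blank line, after which the script writes commands back on stdout. The helper below is a hypothetical sketch of parsing one such header line, not Asterisk's own code:

```c
#include <stdio.h>
#include <string.h>

/* Split one AGI header line of the form "agi_name: value" into its
 * name and value parts. The "agi_" prefix convention comes from the
 * AGI protocol; everything else here is illustrative. */
int parse_agi_header(const char *line, char *name, size_t nlen,
                     char *value, size_t vlen)
{
    const char *colon = strchr(line, ':');
    size_t n;

    if (colon == NULL || strncmp(line, "agi_", 4) != 0)
        return -1;                    /* not an AGI header line */
    n = (size_t)(colon - line);
    if (n >= nlen)
        return -1;                    /* name would not fit */
    memcpy(name, line, n);
    name[n] = '\0';
    colon++;
    while (*colon == ' ')
        colon++;                      /* skip the separating space */
    snprintf(value, vlen, "%s", colon);
    return 0;
}
```

A real AGI script would call this in a loop until the blank line, then emit commands such as SAY NUMBER and read the "200 result=..." replies.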

Mark Spencer

About the author:

Mark Spencer is president of Digium, an Open Source company selling low-cost telephony hardware designed specifically for the Linux operating system and the Asterisk PBX. He is best known as the original author of several Open Source projects including Asterisk, Gnophone, Gaim, and Cheops. He graduated in 2000 from Auburn University in Alabama with a bachelor's degree in Computer Engineering.

Related links:

Simple, Robust Software RAID for Linux 2.6 by Daniel Phillips Thursday 15:45

Linux's new Device Mapper subsystem provides efficient facilities for concatenating, striping and mirroring physical volumes into a single logical volume, but support for RAID level 5 is left to the existing Multiple Device subsystem. Though the Device Mapper and Multiple Device subsystems can be combined to work around this problem, this requires extra administration work, adds an extra level of processing overhead, and does not satisfy Device Mapper's original goal of simplicity. A new RAID plug-in for Device Mapper is introduced here to provide RAID5-like functionality for physical volume configurations consisting of 2^n + 1 disks, where each logical block is split across 2^n disks, and parity information is written to the remaining disk. This strategy resembles the old RAID 3 strategy, and avoids one of the greatest sources of complexity in RAID 5, which is the need to read before writing in order to update parity information. This in turn removes the need for an extra block cache in the RAID driver. With the IO path thus simplified, performance for largely sequential write loads can approach the combined IO bandwidth of the physical devices. Versus RAID 5, random IO loads generate higher seeking and lower rotational latency penalties, so random IO performance remains acceptable.
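The parity strategy described above can be shown in a small sketch (illustrative code, not the actual Device Mapper plug-in): because a write always supplies all 2^n data chunks of a logical block, the parity chunk is simply their XOR, with no need to read old data or old parity first.

```c
#include <stddef.h>
#include <stdint.h>

/* Compute the parity chunk for one logical block as the XOR of its
 * data chunks. Any lost chunk can later be reconstructed as the XOR
 * of the parity and the surviving chunks. */
void compute_parity(const uint8_t *const *chunks, size_t nchunks,
                    size_t chunk_size, uint8_t *parity)
{
    size_t i, j;

    for (j = 0; j < chunk_size; j++)
        parity[j] = 0;
    for (i = 0; i < nchunks; i++)
        for (j = 0; j < chunk_size; j++)
            parity[j] ^= chunks[i][j];
}
```

RAID 5, by contrast, must handle partial-stripe writes, where the new parity depends on data the driver does not have in hand, forcing the read-before-write cycle the paper avoids.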

Daniel Phillips

About the author:

Daniel Phillips wrote his first programs on punch cards in 1975, on a then-massive IBM 360/168 with four megabytes of memory. When the IBM PC appeared on the scene in 1981, he obtained one of the first, with two 160 KB floppy disks and 128 KB of memory installed. He then developed applications from interpreters to 3D graphics, robotics, and industrial control systems, using MS-DOS. The fall of MS-DOS coincided with the rise of Linux, and he became a member of the Linux kernel development community in 1998, as a specialist in filesystems and, later, virtual memory.

NX -- A New Stage For Network Desktop Computing by Kurt Pfeifle Thursday 15:45

NX is a new technology to make X connections work reliably and fast even over very slow and low-bandwidth links.

The techniques to make this work are threefold:

  • a very efficient compression of normal X traffic
  • a very intelligent caching mechanism for transferred data, with no second transfer of the same data and only differential transfers of similar data
  • a reduction of time-consuming X round-trips to nearly zero

X itself *could* be very fast over the network. In recent years, however, application programmers have tended to forget the art of developing X-efficient programs, thoughtlessly introducing a lot of round-trip-requiring calls into their code.

This is not the place to discuss that subject and Kurt is not an expert in this: it should suffice to say that a lot of apps "lazily" require the X server to store information that could otherwise be maintained at the X client side; this stored information is again retrieved by the app more than once, each time requiring a round-trip...

Over narrowband connections (like modems), each round-trip consumes 200-250 milliseconds. Round-trips therefore make GUI applications running over such links very unresponsive. A round-trip is triggered, e.g., when the remote application asks the local X server for the exact position of the mouse (where event handling by the application itself could ensure the same result). While the application waits on the Xlib call, it has to idle until the answer arrives. Another example is when the application and the X server need to exchange information about available X fonts.

NX installs "NX proxies" on each side of an X connection. The NX proxies speak native "X" with their respective X application (this is the role of the NX proxy at the remote end) or the X server (this is the role of NX proxy at the local end). These NX proxies cache most of the X protocol operations on their own sides, keeping the caches in sync. The NX proxies only transfer "differences" in respect to previous operations. The NX proxies apply a very efficient compressed encoding of the traffic between them, even translating X bitmaps to other more compact lossless or lossy image formats, like PNG or JPEG.
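The differential-transfer idea can be shown with a toy sketch (hypothetical names; the real NX encoding is far more elaborate, with per-message-type caches, compression and image recoding): if both proxies hold an identical cached copy of earlier data, only the bytes that differ from the cache need to cross the wire.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of differential transfer between synchronized caches:
 * count how many bytes of a fresh message differ from the cached
 * copy, i.e. how much data would actually be transmitted. */
size_t bytes_to_send(const uint8_t *cached, const uint8_t *fresh,
                     size_t len)
{
    size_t i, changed = 0;

    for (i = 0; i < len; i++)
        if (cached[i] != fresh[i])
            changed++;                /* only differences go out */
    return changed;
}
```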

The NX protocol is based on a modified X protocol, reducing the many X round-trips to nearly zero.

To give a few figures:

  • a Mozilla start-up alone produces nearly 6,000 round-trips and needs more than 7 minutes to complete over a 9,600 baud modem connection. With the help of NX, the round-trips are boiled down to a few dozen, and a start-up may take only 20 seconds over the same modem link!
  • a full-screen KDE session transfers 4.1 MByte of data over the wire if it is run over a vanilla remote X connection. Run it over NX, and the second start-up's data transfer volume is down to only 35 kByte! You can run KDE sessions over a 9,600 baud modem link and get a responsiveness better than TightVNC over a crossover cable hooking together two boxes only one yard apart.
  • the overall compression/speed gain is 70:1 (on average, across various applications), but can easily reach 200:1 and more for some applications, like Web browsing.

NX has a few more goodies built-in:

  • NX embodies additional capabilities:
    • it can connect to Windows Terminal Servers or Windows XP Professional boxes, using the RDP protocol,
    • ...and also to VNC servers, using the RFB protocol.
    In these cases the remote NX proxy uses additional "agents" which speak the RDP or RFB protocols to the remote "foreign" servers. The NX agents translate these other protocols into X primitives and hand them to the remote NX proxy, which transfers them down to the local NX client (applying, of course, its built-in compression and caching techniques). NX keeps bitmap updates in their foreign format and translates them into X bitmaps only on the X server side, thus incurring no penalty compared to a client speaking the native protocol. The NX encoding and compression of the resulting X protocol (using its usual algorithms) offer compression ratios ranging between 4:1 and 10:1 with respect to the native RDP or RFB protocols. NX derives its capabilities regarding these foreign protocols from "rdesktop" and "TightVNC" at the remote end, but uses an NX connection between the local and remote proxy.
  • NX can share files and printers between the local NX client machine (running the X server) and the remote NX server running applications (that is the X clients)
  • NX can tunnel multimedia and sound streams through the connection
  • NX can encrypt all traffic using SSH
  • NX can display not only remote "fullscreen" desktops, but even individual X applications in "single application window mode" on the local X server display (it makes for cute screenshots to put Konqueror or KMail on an MS Windows XP-based desktop this way)
  • NX utilizes the achievements of other OSS developers by plugging their components into its architecture: X11, SSH, Samba, rsync, Xnest, rdesktop, TightVNC, artsd, ESD...
  • NX servers don't install an additional daemon opening an additional port. NX clients connect to the standard SSH daemon of any given system (usually over port 22) and then start the "nxshell" (effectively starting the NX server and connecting to it). If an administrator has taken care to secure his SSH server, he has implicitly also taken care, to a large degree, of securing his NX installation.

NX is the starting point for a new understanding of network desktop computing. It makes it possible to connect to your own desktop, running your own applications and using your own data, from anywhere in the world, even over slow connections like GSM modems. In the near future we will "NX-connect" on a peer-to-peer basis to remote applications that run on Linux, Mac OS X, Solaris and Windows application servers somewhere in the world but are displayed on our local PDA. NX may soon define an X-based web, just like HTTP defined an HTML-based WWW.

NX certainly has the potential to take a big share of what now is one of the strongholds of Citrix MetaFrame/ICA and Microsoft Terminal Servers.

All the core NX components and libraries are licensed under the GPL and their source code is freely available. On top of this GPL-ed, self-developed code, the originators have built a proprietary, commercial product, called the NX Server and the NX Client.

This suite of software aims to be an open-standards-based replacement for costly solutions that use proprietary technologies. NX server and client software can effectively help with the adoption of Linux on the desktop. They offer corporations a valid Unix alternative to Microsoft's and Citrix's stronghold on thin-client computing. That's why they are also mentioned in the "Migrationsleitfaden" published by the German Ministry of the Interior. The originators have invited the Open Source Software community to develop their own, compatible versions of NX-based clients and servers. They promised their help and active support for that effort. They pledged not to use any different, superior libraries for their commercial product (unlike, e.g., some other hybrid OSS/closed-source projects -- why is it that in Kurt's mind CrossOver/WINE always appears as an example? ;-) and to always release future versions of these libraries under the GPL.

The talk will provide a practical demonstration of NX. At the beginning we will connect to an NX server in Italy and run the local presentation from there, displaying it in the conference hall to the audience. The demonstration will include a command-line-based connection as well as a GUI-based "NX Client" one. Prepare for some more surprises!

[ Kurt hopes that this presentation will help to find interested developers who then start to work on a free NX client and server (just as, a few years ago, his evangelism regarding CUPS helped to spawn the development of the KDEPrint GUI for CUPS.... ;-) ]

Kurt Pfeifle

About the author:

Kurt is working as a system engineer at Danka Deutschland GmbH.

His job includes consulting and training related to network printing and IPP (Internet Printing Protocol), and migrating heterogeneous networks to Linux print servers (with the help of CUPS and Samba).

He has been helping with the KDEPrint website and with user support in various newsgroups. He writes documentation related to printing and works as a beta tester for CUPS and KDE printing. He also wrote most of the documentation dealing with printing in the new Samba HOWTO Collection. In various newsgroups he actively helps users to solve their printing problems, minor and major ones.

When CUPS first appeared on the scene, it was largely ridiculed and not taken seriously by many Linux/Unix "oldies". "We don't need no new printing system. LPD is good enough for me!" was the tenor of many responses he encountered. Only slowly did the superiority of CUPS' architecture penetrate the minds of many.

Since he discovered NX in March this year, he has experienced several deja-vu moments. "We don't need no new X compression protocol. SSH with X forwarding and '-C' compression is good enough for me!" is what he heard even from his best friends when he told them about NX features. Only when he forcefully dragged them in front of an NX client box, connecting over a slow line through the internet to a Rome-based NX server, booting up a full-screen KDE session and showing them the astounding performance gains from NX's compression and caching of vanilla X and its elimination of round-trips, did he get them to instantly install NX themselves.

Currently he is busy with a book about CUPS (dealing with printing on Linux, Unix, Mac OS X and MS Windows). But he may start one on NX now too.... ;-)

Related links:

Comments or Questions? Mail to Last change: 2005-09-17