7th International Linux-Kongress · 20.-22.9.2000 · Erlangen/Germany
Abstracts

Firewall Technologies with Linux as Case Study

Wednesday, 20.9., 12:00-18:00, Room T1


Jos Vos <jos@xos.nl>

This tutorial presents an overview of generic firewall theory and techniques, as well as of the available implementations of these concepts for the freely available Linux operating system.

The first part covers firewalls in general. It provides the necessary background information, explains the most commonly used firewall terminology, and describes most of the currently known firewall concepts. Furthermore, the various firewall techniques that are available (like packet filtering and proxy servers operating on different levels) are explained and their pros and cons are discussed. Some important aspects of designing firewalls are explained using example firewall architectures.

The second part covers the firewall software available for the Linux operating system. A large number of different software packages for packet filtering and proxy services will be described, as well as auxiliary techniques and software, like network address translation, masquerading, virtual private networks, and various additional tools that can improve host and network security on Linux systems. This part includes an extensive introduction to ipchains, the Linux 2.2 packet filtering and masquerading software, but it also covers less well-known software packages that are often not included by default in Linux distributions. The tutorial will cover the new techniques and tools available in Linux 2.2, but it also addresses migration from Linux 2.0, as well as future directions.
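
To make the packet-filtering idea concrete, here is a minimal sketch in C of the rule-matching logic such filters perform. It is an illustration only, not the ipchains implementation, and all names in it are made up.

    /* First-match-wins packet filtering, reduced to its core. */
    #include <linux/ip.h>        /* struct iphdr */
    #include <netinet/in.h>      /* IPPROTO_* */
    #include <stdint.h>

    enum verdict { ACCEPT, DENY };

    struct rule {
        uint32_t src, src_mask;  /* source network, e.g. 10.0.0.0/8 */
        uint8_t  proto;          /* IPPROTO_TCP etc., 0 = any */
        enum verdict verdict;
    };

    static enum verdict filter(const struct iphdr *ip,
                               const struct rule *rules, int nrules)
    {
        for (int i = 0; i < nrules; i++) {
            if ((ip->saddr & rules[i].src_mask) != rules[i].src)
                continue;
            if (rules[i].proto && ip->protocol != rules[i].proto)
                continue;
            return rules[i].verdict;   /* first matching rule wins */
        }
        return DENY;                   /* "default deny" policy */
    }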

Target audience

This tutorial is aimed at system and network administrators, and other people involved with the design and implementation of network security policies. Although Linux (and, more generally, UNIX software) is used in the case study, the theory can also be used to build, understand, and maintain firewalls and security policies in general.

Speaker

Jos Vos <jos@xos.nl> is CEO and co-founder of X/OS Experts in Open Systems BV. He has more than 15 years of experience in research, development, and consulting related to UNIX systems software, Internet, and security. He is the author of ipfwadm and part of the firewall code in the Linux 2.0 kernel. His company X/OS releases its own LinuX/OS distribution, an extension of Red Hat Linux with a number of additional software packages, many of them related to security and encryption.

Linux Security for Programmers

Wednesday, 20.9., 12:00-18:00, Room T2


Olaf Kirch <okir@monad.swb.de>

For quite some time, the sendmail mail transport system was the epitome of bad security -- every couple of months, somebody would find a bug or misfeature that allowed local and occasionally even remote attackers to obtain root privilege. Then, Java came along and Eric Allman was off the hook for a while. Currently, the spotlight is on Microsoft because they're new to the multiuser/network game, and their large installed base makes them an interesting target.

Tomorrow, it might be Linux.

I have been actively involved in Linux security for over four years now: in the role of blundering programmer, as someone catching other people's blunders, and on various mailing lists. Linux's track record is no better or worse than that of most other operating systems, but there's definitely room for improvement.

The amazing news is that many programming mistakes that can lead to security problems seem to be impossible to eradicate. For instance, it has been a fairly well-known fact for quite some time that special care needs to be taken when executing another program from a setuid context. However, there are still programs being written today that do not take the proper precautions. The most recent case I came across was in May 1999, and I'm sure it won't be the last.

There are probably many reasons why this is so. Speaking from my own experience as a blundering programmer, we tend to develop a partial blindness when it comes to spotting trouble areas in our own code. We're so much in love with our elegant design that we're reluctant to anticipate all the ways in which an attacker might try to break it. In addition, most security pitfalls -- the notable exception being buffer overflows -- look harmless enough until you know how an attacker might exploit them. Finally, many people underestimate the cleverness of the cracker community. To most people, a cracker is a freaked-out 16-year-old kid running scrounged-up exploit programs they don't even half understand. However, there's a non-negligible proportion of creative crackers who are well-educated in Unix (in many cases thanks to Linux), and quite clever when it comes to finding the weak spots in your software.

The focus of this tutorial will be on traps we tend to fall into, and how to avoid them. It will roughly be divided into three parts, the first dealing with common and less common programming mistakes that can lead to security compromises. I'll try to include some hands-on examples of real security problems that have happened.

The second part will concentrate on design issues that help you create safer code, such as avoiding setuid programs, using helper applications, chroot jails, etc.
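
A hedged C sketch of two of these design techniques, confining a process to a chroot jail and irreversibly dropping root privileges, might look as follows (the jail path is a made-up example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t uid = getuid();   /* the invoking, unprivileged user */
        gid_t gid = getgid();

        /* chdir() first, then chroot(), so no directory handle
         * outside the jail survives. */
        if (chdir("/var/jail") < 0 || chroot("/var/jail") < 0) {
            perror("jail");
            return EXIT_FAILURE;
        }

        /* Drop the group first; setgid() would fail after setuid().
         * Using setuid() rather than seteuid() makes the drop
         * irreversible. */
        if (setgid(gid) < 0 || setuid(uid) < 0) {
            perror("drop privileges");
            return EXIT_FAILURE;
        }

        /* From here on we are an ordinary user inside the jail. */
        return EXIT_SUCCESS;
    }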

The third (and probably shortest) part will deal with using cryptography in your applications.

Unless there's strong demand for English, the tutorial will be held in German.

Author info:

Olaf Kirch has been a member of the Linux community since the 0.97 days. He wrote the Linux Network Administrator's Guide, maintained the NFS code for quite some time, and is currently employed by Caldera.


Linux Kernel Hacking

Wednesday, 20.9., 12:00-18:00, Room T3


Jes Sorensen <jes@linuxcare.com>

This tutorial will look at how the kernel is structured and provide general guidelines for kernel programming. In particular, it will focus on the issues you run into when writing device drivers. Among other things, it will cover the following:

  • Resource management (interrupts, device memory / I/O space, memory allocation, etc.; see the sketch below)
  • Device register and DMA memory access
  • Locking mechanisms, queues, timers etc.
  • User space access
  • Portability between architectures
  • SMP handling
  • General performance aspects
  • Kernel debugging

It is expected that the audience has a basic knowledge of Unix/Linux internals and is comfortable with C programming.
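
As a small taste of the resource-management topic, here is a sketch of claiming and releasing an interrupt line with the 2.4-era request_irq() interface; the device name and IRQ number are placeholders.

    #include <linux/sched.h>
    #include <linux/interrupt.h>

    #define MYDEV_IRQ 9                  /* example IRQ line */

    static void mydev_interrupt(int irq, void *dev_id,
                                struct pt_regs *regs)
    {
        /* acknowledge the device, schedule a bottom half, ... */
    }

    static int mydev_setup(void *dev)
    {
        /* Always check the return value: the line may be in use. */
        return request_irq(MYDEV_IRQ, mydev_interrupt,
                           SA_SHIRQ,       /* line may be shared */
                           "mydev", dev);  /* dev_id for free_irq() */
    }

    static void mydev_teardown(void *dev)
    {
        free_irq(MYDEV_IRQ, dev);          /* release on unload */
    }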

The author

Jes has been working on the Linux kernel for more than five years, the last three as the maintainer of Linux/m68k. He previously worked at the European Laboratory for Particle Physics (http://www.cern.ch/) on very high performance networking, Linux clusters, and Linux/ia64, which included writing Linux device drivers for Gigabit Ethernet and HIPPI (High Performance Parallel Interface, an 800 Mbit/sec supercomputer network). Jes now works for Linuxcare Inc. in Canada (http://www.linuxcare.com), where he continues to work on Linux/ia64, high-speed networking, and other low-level Linux issues.

Overview: Filesystems in Linux

Thursday, 21.9., 10:15-11:00, Hörsaal 7


Ted Ts'o <tytso@valinux.com>

The last 18 months have seen an absolute surge in the Linux filesystem space. Between the advent of the University of Minnesota's GFS, SGI's XFS, IBM's JFS, Hans Reiser's Reiserfs, the old reliable ext2 and its journaling follow-on ext3, it's enough to make one's head spin.

First, the speaker will provide a broad-brush overview of recent developments in the filesystem space and try to give a big picture of how they interrelate. The talk will also review the history of Linux filesystems to provide context for the more recent developments.

Finally, the speaker will make a daring attempt (without a safety net!) to make some predictions for what the future may bring in the arena of filesystem and storage developments in Linux.

Bio

Ted has been a Linux kernel hacker since almost the very beginning. His first project was implementing POSIX job control in the 0.10 Linux kernel. He is the author and maintainer of the Linux COM serial port driver and the Comtrol Rocketport driver. He architected and implemented Linux's tty layer. Outside of the kernel, he is the maintainer of the e2fsprogs utilities and the e2fsck filesystem consistency checker.

Theodore is currently participating in the Linux Standard Base efforts, and is chair of the technical board of Linux International. Outside of Linux, he was the development lead for Kerberos V5 at MIT for seven years, and is currently serving as IPSec working group chair in the Internet Engineering Task Force. He is currently employed by VA Linux Systems.


plex86 / FreeMWare

Thursday, 21.9., 11:30-12:15, Hörsaal 7


Kevin Lawton <kevin@mandrakesoft.com>

1. Introduction

Running multiple x86 operating systems concurrently has many uses. Many Linux users will undoubtedly look to virtualization to run Windows software on their Linux platform. There are many other uses for plex86 which will be explained.

Additionally, I will explain how virtualization in plex86 compares to other Open Source and commercial technologies and projects such as VMware, Wine, DOSEMU, bochs, Win4Lin, etc.

2. General overview of virtualization architecture.

Virtualizing an x86 operating system requires a virtual machine monitor which wraps itself around each 'guest' operating system, isolating it from the primary 'host' operating system. The virtual machine monitor exports enough functionality to each guest OS that it runs in its own 'virtual machine'.

A general overview will be given of the architecture and techniques used by the plex86 virtual machine monitor to achieve this virtual machine.

3. Virtualization challenges on the x86 architecture.

The x86 architecture poses some interesting challenges for creating a perfect virtual machine environment. While other architectures are virtualizable by design, the x86 CPU has a number of quirks which necessitate extra software intervention to create an isolated and correct virtual machine environment.

I will talk about some of the x86 shortcomings, and give an overview on the software techniques used by plex86 to overcome them.

4. Future optimizations

There are many ways to increase the performance of the guest OS running in the plex86 virtual machine. For instance, optimized guest OS drivers (for example a Windows video driver) can be written to speed up video access, and potentially access the native video hardware. A number of such optimizations will be discussed.

5. Question & Answer

About the Author

Kevin Lawton is an employee of MandrakeSoft (http://www.linux-mandrake.com/), creator of the leading Mandrake-Linux distribution. He worked on his previous project, 'bochs', a portable x86 PC emulation project, for 6.5 years. Bochs is now Open Source thanks to MandrakeSoft. His primary focus is now working on plex86.

For more information about plex86, check out the main web site at http://www.plex86.org/.


Video Conferencing

Thursday, 21.9., 11:30-12:15, Hörsaal 8


Marcus Meissner <marcus@jet.franken.de>

1. Introduction

While the low-level technology of video grabbing now works rather well on Linux, the user-level software usually does not go further than providing local display or simple webcam functionality.

With more and more bandwidth available to the home user, more bandwidth-intensive uses become possible. Not only is it now possible to receive video and audio transmissions; the common user can also create their own conferences.

However, several problems need to be addressed for conferencing over the Internet.

2. Transmitting Multimedia Streams as Packets

Cameras and microphones provide a continuous stream of data; the Internet, however, transmits packets which travel for an unknown amount of time and might get lost or reordered.

To address this problem, the Audio Video Transport Working Group (avt-wg) of the Internet Engineering Task Force has developed a protocol called the "Real-time Transport Protocol", or "RTP" for short.

This protocol suite defines how to split the continuous data stream into packets that honor the following concepts:

  • A packet is a self-contained part of the stream. The data can be decoded with only this packet, so the loss of a number of packets is not a problem.
  • Detection of misordered packets is possible.
  • The packet contains information that assigns the data to a specified source and a specified time on that source.

Packet payload layouts for RTP have been defined for various audio and video formats, ranging from raw audio to G.726-compressed audio, and from Motion JPEG to H.263 video compression.
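
The fixed RTP header (RFC 1889) realizes exactly these concepts and can be sketched as a C struct; note that real code must marshal the first two bytes bit by bit rather than rely on compiler bitfields:

    #include <stdint.h>

    struct rtp_header {
        uint8_t  vpxcc;      /* version (2 bits), padding, extension,
                                CSRC count */
        uint8_t  mpt;        /* marker bit + 7-bit payload type,
                                e.g. a G.726 or H.263 payload */
        uint16_t seq;        /* sequence number: detects loss and
                                misordering */
        uint32_t timestamp;  /* sampling instant of the first octet */
        uint32_t ssrc;       /* synchronisation source identifier */
        /* optional CSRC list and the payload follow */
    };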

3. Making large conferences possible

Another problem arises in a conference of more than two participants, where you do not want to multiply the number of packets sent by the number of participants.

To avoid this duplication, a method called "multicasting" is used, where a single packet is sent to a special address. When such a packet hits a router, it is forwarded in every direction where a receiver is listening on that special address. Several special routing protocols have been designed and implemented to support this.
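
On the receiver side this amounts to a few socket calls on Linux; a minimal sketch (group address and port are arbitrary examples):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int join_group(const char *group, unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            goto fail;

        /* Ask the kernel (and the routers) to deliver packets sent
         * to this group address, e.g. 224.2.0.1. */
        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr(group);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0)
            goto fail;

        return fd;   /* recvfrom() now yields the group's packets */
    fail:
        close(fd);
        return -1;
    }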

4. Conference Applications

Much of RTP and multicasting is already used by well-known commercial applications like Microsoft's NetMeeting and RealPlayer.

On the Open Source side there exist several implementations; the best known is the RTP suite originally developed at the Lawrence Berkeley Laboratory of the University of California, which consists of audio, video, text, and whiteboard RTP conferencing tools.

The talk will give an introduction to the problems of conferencing over the Internet, go into technical details of the RTP protocol suite and its implementation issues, introduce multicasting briefly, and conclude with a small demonstration of the above-mentioned application suite.


GNU/Linux on SuperH Project

Thursday, 21.9., 12:15-13:00, Hörsaal 7


Yutaka Niibe <gniibe@chroot.org>

SuperH is Hitachi's processor family for the embedded market. It is used in PDAs, video game machines, and the like. Examples are the HP Jornada (SH-3) and the SEGA Dreamcast (SH-4).

Last summer, I began the port of the Linux kernel to the SH-3. The initial port was accepted by Linus and included in 2.3.15.

Since then, many developers around the world have gathered together. It turned out that there were two independent ports for the SH-3 and another two for the SH-4. We cooperated to establish good and fast development. They have now been merged into one, resulting in a 2.4.x kernel which tries to support the full feature set of the Linux kernel.

We also ported the GNU C library; the port will be included in the forthcoming 2.2 release. Besides that, we hacked the GNU toolchain (i.e. GNU Binutils and the GNU C Compiler) to support shared libraries.

In the paper, I'll recount the short history of this project and the international cooperation, along with our experiences with technical issues. I'll also discuss the future of GNU/Linux in the embedded market.


ACE/TAO

Thursday, 21.9., 12:15-13:00, Hörsaal 8


Lothar Werzinger <werzinger.lothar@krones.de>

Multi-platform development using the open source framework ACE/TAO (http://www.cs.wustl.edu/~schmidt/ACE.html).

The talk includes a general overview of the abstractions in ACE that allow cross-platform and multi-platform development.

This includes

  1. Point out the benefits of using frameworks like ACE and the ACE ORB (TAO)
    • less error-prone than plain implementations
    • saves time by using high-quality, tested code
    • uses patterns to solve problems that occur in almost all network/distributed applications.
  2. Go into the details of ACE

    == Low-level abstractions (e.g. multithreading and concurrent-access primitives such as mutexes and semaphores)

    == Higher-level abstractions: the generally used patterns in ACE, like

    • wrapper facade
    • generic factory
    • acceptor/connector
    • reactor (sketched after this list)
    • ...

  3. Have a glance at the ACE ORB (TAO) as a well-performing, real-time-capable implementation of a CORBA ORB.
    • general introduction to TAO
    • Example: Krones has implemented distributed image processing for empty-container inspection in real time with the TAO ORB. The graphical user interface, written in Java, runs on Windows NT, with multiple image processing engines running on Linux.
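
ACE itself is a C++ framework; as a language-neutral illustration, the reactor pattern named above boils down to an event loop that demultiplexes readiness events to registered handlers, sketched here in plain C with select():

    #include <sys/select.h>

    #define MAX_HANDLERS 16

    struct handler {
        int fd;
        void (*handle_input)(int fd);   /* callback on readiness */
    };

    static struct handler handlers[MAX_HANDLERS];
    static int nhandlers;

    void reactor_register(int fd, void (*cb)(int))
    {
        handlers[nhandlers].fd = fd;
        handlers[nhandlers].handle_input = cb;
        nhandlers++;
    }

    void reactor_run(void)
    {
        for (;;) {
            fd_set rd;
            int maxfd = -1;
            FD_ZERO(&rd);
            for (int i = 0; i < nhandlers; i++) {
                FD_SET(handlers[i].fd, &rd);
                if (handlers[i].fd > maxfd)
                    maxfd = handlers[i].fd;
            }
            if (select(maxfd + 1, &rd, 0, 0, 0) < 0)
                break;
            /* The reactor decides who runs; the handlers decide
             * what happens. */
            for (int i = 0; i < nhandlers; i++)
                if (FD_ISSET(handlers[i].fd, &rd))
                    handlers[i].handle_input(handlers[i].fd);
        }
    }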

ALSA Sequencer System

Thursday, 21.9., 14:30-15:15, Hörsaal 7


Takashi Iwai <tiwai@suse.de>

With the recent growth of demand for audio support on Linux, the Advanced Linux Sound Architecture (ALSA)[1] has been improved day by day since its birth. The sequencer system is one of the most greatly enhanced features of the ALSA project. The design of the ALSA sequencer is based on the notion of multiple clients, as in MidiShare[2]. Each low-level hardware driver or sequencer application is represented as an independent client, while the sequencer core acts only as a dispatcher of events between clients. Multiple clients may exist on a system, so the problem seen with the Open Sound System (OSS)[3], where one application grabs the system exclusively, is avoided. All input and output is realized simply by sending and receiving events between clients. There is nothing but this event communication in the system. For example, even the OSS sequencer emulation is implemented as a normal sequencer client.

The basic roles of the sequencer core are routing (dispatching) and scheduling of events. An event can be delivered in various ways. It can be transmitted to an explicit destination, or multicast or broadcast to several clients at the same time. Such complicated routing is processed by the sequencer core, so clients don't have to take care of it. An event can even be delivered over the network through a special client. The sequencer is equipped with priority queues for scheduling events, which ensure that events are delivered in the right order at the scheduled time.

There are two types of clients: kernel and user-space clients. The former provides fast and lightweight event communication in kernel space, while the latter uses standard read/write syscalls in user space. Hardware-related controls like raw MIDI devices or wavetable synths are usually implemented as kernel modules. A kernel client is also a good fit for simple real-time jobs like event forwarding and filtering. On the other hand, a user-space client offers more flexible programmability without the restrictions of kernel code.

The ALSA sequencer employs a dual-time system consisting of both real time and MIDI clock time. The former corresponds to "wall-clock" time, represented in a standard sec/nsec unit. The latter is based on a virtual time unit with varying tempo. This dual-time mechanism makes it possible to synchronise a MIDI sequencer using the real-time unit.

The talk will (hopefully) also cover more advanced synchronisation of sequencers, which is now under development. Using this feature, the ALSA sequencer can be synchronised with external time sources from other devices via SMPTE or MTC time code.
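
To give a flavour of the client model, here is a minimal sequencer client sketched against today's alsa-lib API (the API current at the time of the talk differed in detail):

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_seq_t *seq;

        /* Become one more client among many; there is no exclusive
         * grab as with the old OSS sequencer. */
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
            return 1;
        snd_seq_set_client_name(seq, "demo-client");

        /* Ports are the endpoints between which events are routed. */
        int port = snd_seq_create_simple_port(seq, "in/out",
                SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_WRITE |
                SND_SEQ_PORT_CAP_SUBS_READ | SND_SEQ_PORT_CAP_SUBS_WRITE,
                SND_SEQ_PORT_TYPE_MIDI_GENERIC);
        if (port < 0)
            return 1;

        /* Blocking read: the sequencer core dispatches events
         * from other clients to us. */
        snd_seq_event_t *ev;
        while (snd_seq_event_input(seq, &ev) >= 0) {
            if (ev->type == SND_SEQ_EVENT_NOTEON)
                printf("note %d on channel %d\n",
                       ev->data.note.note, ev->data.note.channel);
        }
        snd_seq_close(seq);
        return 0;
    }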

[1] Advanced Linux Sound Architecture - http://www.alsa-project.org
[2] MidiShare - http://www.grame.fr/MidiShare/
[3] Open Sound System Free - http://www.linux.org.uk/OSS/


SCEZ - a smart card library

Thursday, 21.9., 14:30-15:15, Hörsaal 8


Matthias Bruestle <m@mbsks.franken.de>

This tutorial presents an overview of the smart card library SCEZ, its aims, its usage and its future.

SCEZ is a smart card library developed under GNU/Linux. The goal was to write a free, portable, small, and easy-to-use smart card library. The library interface was refined during development to see what works well and what does not. To reduce bloat, the library has been designed to be modular: when compiling it, one can choose which drivers are included. There are drivers for smart cards, e.g. the Giesecke & Devrient SmartCafe, and for card readers/terminals, e.g. the Dumb Mouse. The terminal drivers all share the same interface, so changing the reader does not require changes in the program. The smart card drivers, on the other hand, differ considerably, owing to the big differences between the smart cards of different manufacturers.
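
The shared terminal-driver interface can be pictured as a C table of function pointers; the names below are hypothetical and only illustrate the design, they are not SCEZ's actual identifiers:

    #include <stddef.h>

    struct terminal_driver {
        const char *name;                  /* e.g. "Dumb Mouse" */
        int (*open)(void **ctx, const char *device);
        int (*transmit)(void *ctx,         /* send APDU, get reply */
                        const unsigned char *cmd, size_t cmd_len,
                        unsigned char *resp, size_t *resp_len);
        int (*close)(void *ctx);
    };

    /* The program talks to whichever driver was selected; swapping
     * the reader means swapping the table, not the program. */
    int send_apdu(const struct terminal_driver *drv, void *ctx,
                  const unsigned char *cmd, size_t cmd_len,
                  unsigned char *resp, size_t *resp_len)
    {
        return drv->transmit(ctx, cmd, cmd_len, resp, resp_len);
    }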

The library has reached a relatively stable state. After some minor modifications and the completion of the documentation, version 1.0 will be released. This may already have happened by the time this tutorial is held at Linux-Kongress 2000.

The author

Matthias Bruestle is doing his PhD in Chemistry at the Computer Chemie Centrum in Erlangen and works as a programmer in the smart card area. He has been using GNU/Linux since 1993.


"Too little, too slow" - An introduction to memory management and an overview of Linux 2.5 MM.

Thursday, 21.9., 15:15-16:00, Hörsaal 7


Rik van Riel <riel@conectiva.com.br>

1. Introduction to memory management.

To understand memory management, one needs a good conceptual and quantitative overview of the memory hierarchy: the hierarchy of progressively smaller, faster, and more expensive types of memory that populates every computer today. Because of this (and because the talk is scheduled early in the morning), we'll start the talk with a 30-minute overview of the memory hierarchy and some of the concepts involved in memory management.

If you're already knowledgeable about memory management, you may want to skip the rest of this abstract and move on to section 2.

The speed differences between CPU and memory (memory is 25 to 100 times slower) and between memory and hard disk (disk is more than 100,000 times slower) are quite big. Because of this, the memory hierarchy poses some "interesting" performance problems that the operating system has to deal with.

The speed difference between CPU and memory is mainly masked by "cache"; cache is very fast memory and using it does not need support from the Operating System or application. However, there are some tricks the OS can perform to make it easier for the cache to do its job well and to raise system performance.

The speed difference between memory and hard disk is truly enormous. Furthermore, data on disk is saved permanently, so we need to store some of it in a way that lets us find it again after the computer is rebooted.

This means that we have to store the data in a "filesystem". I won't talk about how a filesystem works. The important part is that a filesystem works like an index where you have to look up where the data is.

The extra lookup means that the disk would be even slower, more than a million times as slow as the processor! The only reason that the system still runs reasonably fast is because of some memory management and filesystem tricks.

2. A look at Linux 2.5 VM

In this part we'll present some new ideas for Linux memory management. While current Linux memory management should be able to cope with most "normal" system loads just fine, it isn't as good as it could be and should be improved to handle extreme situations a bit better. The following ideas will be presented.

2.5 VM

In Linux 2.5 virtual memory management will see some considerable changes. One of the main problems with the current Linux memory management is that sometimes we cannot make a proper distinction between pages which are in use and pages which can be evicted from memory to make room for new data.

In order to improve that situation and make the VM subsystem more resilient against wildly variable VM loads, we will use ideas from various other operating systems to improve Linux memory management. The main page replacement routine will use the active, inactive, and scavenge (cache) lists as found in FreeBSD. This mechanism maintains a balance between used and old memory pages so there will always be "proper" pages around to swap. In addition, there will probably be things like dynamic and administrator-settable RSS limits, anti-hog code to prevent one user or process from hogging the machine and slowing down the rest of the system, and per-user memory accounting.

anti-hog code

The virtual memory management of most modern operating systems works under the assumption that every page is of equal importance, applying equal memory pressure to each page in the system. This can lead to the situation where one memory hog is running happily and touching all its pages all the time (since it is in memory it is fast) and the rest of the system is thrashing (and will continue to do so since it is running so slow that it won't get a chance to use its memory before the next pageout scan).

Since this is a very unfair situation that nobody wants to run into, and one that can cause very inefficient system use, we should abandon the idea that every page is equally important. There are a number of ideas that can improve this situation considerably. Two of these will be presented in this lecture: the simple anti-hog code that was experimented with in the 2.3 kernel, and dynamic RSS limits.

process suspension

When the memory load on the system is just too big (e.g. when the working set of all running processes no longer fits in memory), paging is no longer enough and something else needs to be done. The simplest solution is to suspend a process for a while so that the sum of all working set sizes is small enough to fit in memory.

The obvious questions arising with this solution are: which process(es) to suspend? For how long should they be suspended? How do we ensure fairness? How do we make sure that every process is able to get some work done? How do we make sure interactive performance isn't impacted too much?

The algorithm presented is a variation on the algorithm used by ITS (the Incompatible Timesharing System), where the system measures the throughput achieved times the memory used, averaged over time. Using this per-process number, the system can estimate how badly it is thrashing (do we need to suspend a process?) and make sure all processes receive fair treatment.


Finding Your True Self - PAM with Smartcards, Open but Secure

Thursday, 21.9., 15:15-16:00, Hörsaal 8


Lutz Behnke <behnke@trustcenter.de>
Holger Lehmann <holle@catworkx.de>

The use of SmartCards for authenticating users to computer resources enhances security by requiring two controlled factors: knowledge of a PIN and ownership of the SmartCard. It also allows a second party to perform the identification of the user separately from the use of the computing resource. We examine the use of Pluggable Authentication Modules (PAM) to allow easy integration of this technology into existing applications. We also look at the gains realized when building such an application from Open Source and Free Software components.
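
A skeletal sketch of such a PAM module is shown below. The entry points and the pam_get_user() call follow the standard PAM module API; the smartcard check itself is a made-up placeholder.

    #include <security/pam_modules.h>

    /* placeholder: card present and PIN verified? 0 = success */
    extern int smartcard_verify(const char *user);

    PAM_EXTERN int pam_sm_authenticate(pam_handle_t *pamh, int flags,
                                       int argc, const char **argv)
    {
        const char *user;

        if (pam_get_user(pamh, &user, NULL) != PAM_SUCCESS)
            return PAM_AUTH_ERR;

        /* Two factors: ownership of the card and knowledge of
         * the PIN. */
        if (smartcard_verify(user) != 0)
            return PAM_AUTH_ERR;

        return PAM_SUCCESS;
    }

    PAM_EXTERN int pam_sm_setcred(pam_handle_t *pamh, int flags,
                                  int argc, const char **argv)
    {
        return PAM_SUCCESS;   /* no credentials to establish */
    }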


Linux FailSafe

Thursday, 21.9., 16:30-17:15, Hörsaal 7


Lars Marowsky-Bree <lmb@suse.de>

The Linux FailSafe project is a community development effort led by SuSE and SGI to port SGI's field-proven IRIS FailSafe to Linux. FailSafe is now Open Source and thus the perfect foundation for a cross-platform high-availability solution.

FailSafe provides a full suite of high-availability capabilities. These include full N-node cluster membership and communication services, as well as application monitoring and failover capabilities, along with a set of nice GUI tools for administering and monitoring HA clusters.

The purpose of this talk is to present the architecture and design of FailSafe, cover possible applications, and give an update on the current status.


DRBD

Thursday, 21.9., 17:15-18:00, Hörsaal 7


Philipp Reisner <philipp@linuxfreak.com>

Hard disk mirroring (RAID1) is a well-known method of increasing the availability of servers. It prevents loss of data in the case of a hard disk failure. But mirroring inside a single machine does not contribute anything to availability if a component other than the hard disk fails. A short distance between the two hard disks also does not protect data from disasters like fire.

The first challenge is solved by so-called HA clusters, where the active server is backed up by a standby machine. Usually these clusters are equipped with shared disks. A shared disk is a hard disk which can be accessed from all nodes of a cluster; in an HA cluster it is usually a RAID set of disks. But in these clusters the distance between the disks is still very small, since the disks of the RAID set are located inside a single case.

DRBD is a device driver for Linux which allows you to build clusters with distributed mirrors, so-called "shared nothing" clustering. This architecture not only has the advantage that the physical distance between the two copies of the data can be orders of magnitude greater than with shared disks; it is also (orders of magnitude) cheaper than configurations with shared disks.

DRBD uses a TCP/IP connection for data mirroring and has three protocols, which offer different points in the guarantees-vs-performance tradeoff. DRBD supports automatic resynchronisation of the mirrors if mirroring was interrupted by an outage of the communication network. In order to offer good performance and support for journaling filesystems, it analyses write-after-write dependencies, which gives the disk scheduler maximum freedom to reorder blocks during writing without compromising the order imposed by the filesystem.

The device reaches between 50% and 98% of the maximum theoretical performance and it works well with heartbeat, the HA cluster management software for Linux.

You can find more information about DRBD at http://www.complang.tuwien.ac.at/reisner/drbd/.


PPP and IP tunneling over X.25

Friday, 22.9., 9:30-10:15, Hörsaal 7


Henner Eisen <eis@baty.hanse.de>

Tunneling IP traffic over the X.25 packet layer protocol was common practice in the early days of the Internet. Today it is hardly used.

This is going to change, because some Internet providers have announced support for AO/DI (Always On / Dynamic ISDN). AO/DI applies the multi-link PPP protocol, where a permanently connected low-bandwidth link uses the ISDN D-channel (additional B-channels can be added and removed dynamically). The D-channel PPP traffic is carried inside an "X.31 Case B" connection, which uses the X.25 network layer protocol and the LAPD data link protocol.

To implement this in Linux, a cheap method of tunneling PPP inside an X.25 connection is needed. The current 2.[234].x kernels support the X.25 protocol, but the PF_X25 protocol family only implements a socket interface.

An extension which supports PPP or other protocols on top of X.25 is currently being developed. It uses the new (Linux 2.4.x) ppp_generic implementation for the PPP layer.

In order to reduce complexity, X.25 connection control is handled outside the kernel via the usual socket API. A user space process (e.g. executed on behalf of pppd) creates a PF_X25 socket and connects it to the peer by means of the connect() or accept() system call. When the connection is established, the user space program attaches the data path of the socket to a ppp bundle by means of the PPPIOCATTACH ioctl.
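
Sketched in C from this description (the X.121 address is illustrative, and since the extension was still under development, the exact ioctl semantics follow the abstract rather than a released kernel):

    #include <linux/if_ppp.h>    /* PPPIOCATTACH */
    #include <linux/x25.h>       /* struct sockaddr_x25 */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>

    int attach_x25_to_ppp(const char *x121_addr, int ppp_unit)
    {
        int fd = socket(AF_X25, SOCK_SEQPACKET, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_x25 peer;
        memset(&peer, 0, sizeof(peer));
        peer.sx25_family = AF_X25;
        strncpy(peer.sx25_addr.x25_addr, x121_addr,
                sizeof(peer.sx25_addr.x25_addr) - 1);

        /* Connection control stays in user space... */
        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0)
            return -1;

        /* ...and only the established data path is handed to the
         * in-kernel ppp_generic layer. */
        if (ioctl(fd, PPPIOCATTACH, &ppp_unit) < 0)
            return -1;

        return fd;
    }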

After that, the kernel side implementation takes care of providing a struct ppp_channel interface to the ppp_generic layer. The socket's write-memory counter is used to flow-control the ppp layer if the X.25 send window is full.

In addition to a PPP upper layer, the implementation also supports tunneling IP (or other protocols) directly inside X.25 connections. This is implemented by means of special "socket tunnel" network interfaces, which can be attached to a connected socket in a similar manner.

As tunneling IP and PPP over other protocols might also be useful, the API provides a generic socket-tunnel paradigm which is not specific to one protocol family. The kernel-side implementation is a library approach which implements the core functionality independently of the protocol family. Thus, although the initial implementation is targeted at PF_X25, other protocol families might reuse and share most of the code.


Introduction to SLP and SLP Programming using OpenSLP

Friday, 22.9., 9:30-10:15, Hörsaal 8


Matthew Peterson <mpeterson@calderasystems.com>

Service Location Protocol (SLP) is an IETF standards track protocol that provides a framework to allow networking applications to discover the existence, location, and configuration of networked services in enterprise networks. Traditionally, in order to locate services on the network, users of network applications are required to provide the host name or network address of the machine that supplies a desired service. Ensuring that users and applications are supplied with the correct information has, in many cases, become an administration nightmare.

Early in the process of developing other management tools, Caldera Systems started the OpenSLP project as an effort to develop an open-source implementation of the IETF Service Location Protocol that would be suitable for commercial and non-commercial applications. Since that time, OpenSLP has grown in popularity among developers seeking a standardized solution to service location problems. SLP is already the standard service discovery protocol of the Solaris and NetWare operating systems. With exposure, it is hoped that SLP will become a standard part of Linux distributions as well.

The talk will cover the following topics:
  • The advantages and capabilities of SLP-enabled applications
  • Differences between SLP and other technologies (DHCP, DNS, and LDAP)
  • A brief introduction to the SLP protocol itself
  • A discussion of the OpenSLP implementation
  • An exercise in using the standardized SLP API (nitty-gritty coding stuff; see the sketch below)
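
A minimal discovery client against the standardized SLP API (RFC 2614) as implemented by OpenSLP might look like this; the service type string is just an example:

    #include <slp.h>
    #include <stdio.h>

    static SLPBoolean url_cb(SLPHandle h, const char *srvurl,
                             unsigned short lifetime, SLPError err,
                             void *cookie)
    {
        if (err == SLP_OK)
            printf("found: %s (lifetime %hu)\n", srvurl, lifetime);
        return SLP_TRUE;            /* keep receiving results */
    }

    int main(void)
    {
        SLPHandle h;
        SLPError err;

        if (SLPOpen(NULL, SLP_FALSE, &h) != SLP_OK) /* synchronous */
            return 1;

        /* Ask the network: who offers this service type? */
        err = SLPFindSrvs(h, "service:printer",
                          NULL /* scopes */, NULL /* filter */,
                          url_cb, NULL);
        SLPClose(h);
        return err == SLP_OK ? 0 : 1;
    }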

For more information, you are invited to visit the OpenSLP web site, http://www.openslp.org.


The netfilter framework in Linux 2.4

Friday, 22.9., 10:15-11:00, Hörsaal 7


Harald Welte <laforge@sunbeam.franken.de>

Linux 2.4 provides a sophisticated infrastructure, called netfilter, which is the basis for packet filtering, network address translation and packet mangling.

The whole firewalling implementation has been rewritten from scratch.

Netfilter is a clean, abstract, and well-defined interface to the network stack. It is easily extensible thanks to its modular design.

The presentation covers the following topics:

  • Netfilter concepts
    • Infrastructure provided by the network stack
    • IP tables
  • Packet filtering
    • The built-in matches and targets
    • Stateful firewalling (connection tracking)
  • Network address translation
    • Source NAT, destination NAT, masquerading, transparent proxying
  • Packet mangling
  • Queuing packets to userspace
  • Current work / future directions / netfilter-related projects
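
As a minimal illustration of the framework, a 2.4-era module can register a hook function that sees every packet at a chosen hook point and returns a verdict; this sketch merely accepts everything:

    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>

    static unsigned int my_hook(unsigned int hooknum,
                                struct sk_buff **skb,
                                const struct net_device *in,
                                const struct net_device *out,
                                int (*okfn)(struct sk_buff *))
    {
        /* inspect (*skb) here; returning NF_DROP would discard it */
        return NF_ACCEPT;
    }

    static struct nf_hook_ops my_ops = {
        { NULL, NULL },        /* list head, managed by netfilter */
        my_hook,
        PF_INET,
        NF_IP_PRE_ROUTING,     /* before the routing decision */
        NF_IP_PRI_FIRST,
    };

    int init_module(void)
    {
        return nf_register_hook(&my_ops);
    }

    void cleanup_module(void)
    {
        nf_unregister_hook(&my_ops);
    }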

LSB

Friday, 22.9., 10:15-11:00, Hörsaal 8


Ralf Flaxa <rf@caldera.de>

A talk about what the Linux Standard Base (LSB) is, why we need it, what the current state is, and when we expect LSB 1.0 to be released.

It will cover general high-level information about the spec, the test suite, and the implementation, as well as details in selected areas of the spec that are of most interest.


The /proc Filesystem

Friday, 22.9., 11:30-12:15, Hörsaal 7


Bodo Bauer <bbauer@turbolinux.com>

A part of the standard file system that deserves to be explained in more detail is the tree beginning with /proc. This file system has become the de facto method for Linux to expose process and system information. It is a nice example of the power of Linux's Virtual File System: a file system that does not really exist, neither in the /proc directory nor in its subdirectories. The content of files in /proc is generated by the kernel at the moment a user reads them, representing information about running processes and kernel internals. Some files are writable, which makes it possible to change kernel parameters at runtime.

This talk will give an introduction to the /proc filesystem and its relation to the Linux kernel. In the first part of the talk we will walk through the read-only part of /proc, which can be used to get important information about the running system. Special attention will be given to the following topics:

  • Investigating the properties of the pseudo file system /proc and its ability to provide information on the running Linux system
  • Examining /proc's structure
  • Uncovering various pieces of information about the kernel and the processes running on the system

In the second part of the talk we will see how /proc/sys can be used to alter kernel parameters during runtime:

  • Modifying kernel parameters by writing into files found in /proc/sys
  • Exploring the files which modify certain parameters
  • Review of the /proc/sys file tree
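
Both halves of the talk can be previewed in a few lines of C: reading a kernel-generated file, then changing a kernel parameter by writing under /proc/sys (the second part requires root privileges):

    #include <stdio.h>

    int main(void)
    {
        char line[256];

        /* Part 1: the content is generated by the kernel at the
         * moment we read it. */
        FILE *f = fopen("/proc/loadavg", "r");
        if (f) {
            if (fgets(line, sizeof(line), f))
                printf("load averages: %s", line);
            fclose(f);
        }

        /* Part 2: writable files under /proc/sys alter kernel
         * parameters at runtime. */
        f = fopen("/proc/sys/kernel/hostname", "w");
        if (f) {
            fputs("example-host\n", f);
            fclose(f);
        }
        return 0;
    }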

The author

Bodo Bauer is Principal Software Engineer for TurboLinux. In this position he is responsible for the TurboLinux Workstation and Server distributions and manages the development teams for these products.

Prior to this he was chief architect of the Zenguin Installer and co-founder of Zenguin, Inc. Bodo is a well known SuSE Linux developer, technical manager, and Linux spokesperson in North America and Europe. Bodo's work with SuSE spans over five years, with two years in the Bay Area and three years in Germany (he was hired as SuSE's third employee in 1994). Bodo played a key role in developing the SuSE Linux distribution, related software applications, and SuSE Germany's highly successful corporate consulting program.

Prior to joining SuSE, Bodo gained over four years of experience as a technical design specialist with Mikronik and EMS in Fuerth, Germany. Bodo has acted as a consultant and technical advisor to such companies as Siemens, Informix, and Oracle.

Bodo is a former member of LST (Linux Support Team), whose version of the Linux OS was later purchased by Caldera and is now the basis for the Caldera distribution. Bodo is also the author and co-author of numerous Linux-related articles in German and U.S. computer magazines and an upcoming book about SuSE Linux.


Beyond C++ - the KDE / Qt Object Model

Friday, 22.9., 11:30-12:15, Hörsaal 8


Matthias Ettrich <ettrich@trolltech.com>

The standard C++ object model provides very efficient runtime support for the object paradigm. On the negative side, its static nature shows inflexibility in certain problem domains. Graphical user interface programming is one example that requires both runtime efficiency and a high level of flexibility. Qt provides this by combining the speed of C++ with the flexibility of the Qt Object Model.

In addition to C++, Qt provides

  • a very powerful mechanism for seamless object communication dubbed "signals and slots",
  • queryable and designable object properties,
  • powerful events and event filters,
  • scoped string translation for internationalization,
  • sophisticated interval-driven timers that make it possible to elegantly integrate many tasks in an event-driven GUI,
  • hierarchical and queryable object trees that organize object ownership in a natural way, and
  • guarded pointers that are automatically set to null when the referenced object is destroyed, unlike normal C++ pointers, which become "dangling pointers" in that case.

KDE extends this model by exporting objects across process boundaries through the DCOP system (Desktop Communication Protocol). This makes it possible to access special external interfaces, object properties, or slots from different processes. The two main application domains for this are language-agnostic scripting and out-of-process controls.

In my talk, I'll present the object model and its implementation. Examples will demonstrate how the KDE / Qt C++ extensions make the language better suited for true component GUI programming.


Linux Performance Tuning

Friday, 22.9., 12:15-13:00, Hörsaal 7


Dave Jones <davej@suse.de>

Abstract: Since comparisons by companies such as Mindcraft of the performance differences between Microsoft Windows and Linux, many people have been discussing ways in which Linux can be tuned. The various adjustments that are possible for increased performance are described in numerous places, but the descriptions are muddled by contradictory information, bad examples, and a lack of centralisation. There are many areas we can adjust; this talk will cover some of these areas, the reasoning behind each gain, and the tools that can be used to adjust each area. Some of the main areas covered will include:

  • Direct kernel interface : /proc/sys tuning.
    • Memory management tuning.
    • Filesystem adjustments.
    • Performance related networking features.
  • Disk I/O
    • Filesystem tuning with fstab.
      • Including NFS/Samba tuning.
    • IDE tweaks with hdparm.
    • SCSI subsystem adjustments.
  • Hardware adjusting utilities.
    • CPU enhancements.
      • Tuning by example, Cyrix CPUs with Set6x86
        • Usage of the /dev/cpu/?/msr interface to access 'hidden' features.
    • Clockrate utilities
      • Motherboard bus speed adjustments, and the problems Linux faces.
      • XFree86 clock adjustments, gfx card specific adjustments.
  • PCI device optimisation.
    • Tricks with setpci and other PCI utilities.

The talk will conclude with a description of the rationale behind Powertweak, and why an 'all-purpose' tuning tool is considered a good thing for Linux.

Author:

Dave Jones is the author of Powertweak-Linux, the first general-purpose Linux performance-enhancing utility. Between contributing to other projects (including the Linux kernel), he is also one of the maintainers of the Linux tuning site at http://linuxperf.nl.linux.org


KHtml - a modern rendering engine in KDE

Friday, 22.9., 12:15-13:00, Hörsaal 8


Lars Knoll <lars@trolltech.com>

khtml is the new HTML rendering engine used by KDE 2. It has been rewritten from scratch in the last year and now supports almost all of HTML 4, CSS 1, and DOM Level 1. CSS 2 and DOM Level 2 are partly implemented, and JavaScript already works for a lot of web pages. The rendering engine aims at doing fast, incremental layout and at being able to deal with dynamic HTML.

The talk will give an introduction to the internal workings of the library and show how to integrate and use it in external applications.


System Resource Management

Friday, 22.9., 14:30-15:15, Hörsaal 7


Frank Klemm <pfk@schnecke.offl.uni-jena.de>

The talk covers the following questions:

What are system resources?

  • virtual memory, file descriptors, process slots, locks, sockets, ...
  • disk space, transfer rates, inodes, ...
  • network transfer rates, ports, ...
  • CPU time, time slices (RTOS)

Why are they limited?

  • primary limitations: limits of the given hardware (RAM, disk space, CPU time)
  • secondary limitations: weaknesses of algorithms, overruns of static tables, algorithms with poor scaling behaviour (n^2, n^3, 2^n); not hardware-related
  • limitations based on standards (a fixed number of bits in control structures)
  • intentional limitations: some of the secondary limitations are intentional, due to the lack of imagination of many developers (640 KB ought to be enough ...); some are implemented by additional code (example: quotas) to prevent global system crashes and lockups (see the next point)

Scopes of limitations (process, single system, user, file, partition, LAN) and the precedence among these scopes

Why should out-of-resource events be handled with care (damage minimization instead of maximization), rather than the way Linux handles them today?

Examples of useful strategies for handling out-of-resource events (the examples run in user mode since that is easier to implement, but for performance and reliability reasons this code should be part of the kernel)
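
As one concrete example of a per-process limit from the scopes above, a short C sketch using the standard getrlimit()/setrlimit() interface to inspect and lower a process's file-descriptor limit:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
            return 1;
        printf("fd limit: soft=%ld hard=%ld\n",
               (long)rl.rlim_cur, (long)rl.rlim_max);

        /* A process may lower its soft limit (or raise it up to
         * the hard limit) without special privilege; one way to
         * keep a runaway program from exhausting a shared
         * resource. */
        rl.rlim_cur = 64;
        if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
            return 1;
        return 0;
    }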


(no title available yet)

Friday, 22.9., 14:30-15:15, Hörsaal 8


Miguel de Icaza <miguel@helixcode.com>

(no abstract available yet)


Generic Modules - Gem

Friday, 22.9., 15:15-16:00, Hörsaal 7


Simon Pogarcic <sim@suse.de>

1. Abstract

In the past we worked on various projects which required some kind of 'door' between kernel and user space, and a manager to take care of hardware resources, memory allocations, crazy things like tons of physically contiguous RAM (sometimes required by older or exotic hardware), shared memory, resource-hungry processes, dirty experiments, and driver-specific software modules.

The best example of these problems, and one such project, was a 3D X server project, with the requirement to access peripheral hardware directly from user space (so-called Direct Rendering) and to feed the hardware with data from different clients, managing each client's current hardware context and the associated synchronisation issues.

Later projects showed that graphics drivers are not the only ones with the above requirements. So I tried to put things together in a more generic manner and developed a framework to address the above problems. The result of this development was Generic Module, or GEM, a software library which should meet the following goals:

  • provide methods for managing system resource access, not driver functionality

  • provide a kernel interface for plugging in optional driver modules

  • make it possible for bigger parts of the driver code to reside in user space

  • do not change the kernel - use already existing kernel capabilities

  • code should be object-oriented and easily extensible with new features

  • do not require superuser rights to access resources

  • address security and resource usage restriction issues

2. GEM Scheme

The following scheme shows one possible GEM configuration. In this example system we see 6 boards: 2 of hardware type 'A', 1 of hardware type 'B', and 3 of hardware type 'C'. For each hardware type we provide one Kernel Device Dependent Module (KDDM), with dynamically assigned minor numbers 1 to 3. Using a few system calls, the user-space interface libgem can get all the information necessary for direct hardware access and for initialization of device-specific resources.

Bonobo - the GNOME Component System

Friday, 22.9., 15:15-16:00, Hörsaal 8


Martin Baulig <baulig@suse.de>

Large-scale, monolithic software projects are hard to maintain, hard to reuse, and hard to extend. This is even harder for Free Software, due to the very high entry barrier.

Bonobo provides the base foundation for creating and implementing reusable software components in GNOME. These components export their functionality through well-defined interfaces and they're self-contained and replaceable at runtime.

This makes it very easy to glue large and complex software programs together with such small building blocks.

Bonobo Controls are a kind of super-widget. They look like normal widgets to the user, but they can be used in a very smart way for rapid software development in GUI builder applications.

In addition, such Controls can provide menus and toolbars which will be merged with the menus and toolbars of the application. This leads to perfect integration and consistency of these components.

Bonobo Compound Documents allow you to embed reusable software components in documents. The actual loading and saving of the document data is done transparently by Bonobo. The strict separation between Model and View makes it possible to use, for instance, the same spreadsheet in different applications at the same time.


The Difference is Open Source

Friday, 22.9., 16:00-17:00, Hörsaal 7


Dirk Hohndel <hohndel@suse.de>

Many people talk about the technical qualities of Linux, about its usefulness both for home and business use. About stability, performance and affordability. This talk won't mention much of that. Instead it will focus on what really makes Linux and the movement around it so different from other worlds. Why it is important that Linux is Open Source. Why it is important to the people who use Linux as well as to the people who make it. So this is an advocacy talk, and maybe a philosophical talk. It won't be technical, but it sure should be interesting and fun.

Bio

Dirk Hohndel is Chief Technology Officer of SuSE Linux AG. He also serves as Vice President of The XFree86 Project, Inc., a non-profit corporation that provides an Open Source implementation of the X Window System for PC Unix systems like Linux.

Prior to his current position at SuSE, Dirk was Unix Architect at Deutsche Bank AG, one of the leading global financial institutions. Before joining Deutsche Bank, Dirk was Senior Software Engineer for AIB Software Corporation (which was then acquired by PLATINUM technology).

Dirk Hohndel holds a Diploma in Mathematics and Computer Science from the University of Würzburg, Germany. He is married and lives near Frankfurt, Germany.

