
Linux-Kongress 2008
15th International Linux System Technology Conference
7.10.-10.10.2008 at the
University of Hamburg, Germany



Abstracts


Network Monitoring with Open Source Tools
by Timo Altenfeld, Wilhelm Dolle, Robin Schroeder and Christoph Wegener
Tuesday, 2008/10/07 10:00-18:00 and
Wednesday, 2008/10/08 10:00-18:00 (held in German)

The two-day tutorial "Network Monitoring with Open Source Tools" is aimed at experienced system administrators whose job is to maintain, monitor and optimise complex network environments. Participants should already have experience installing software on Linux and bring basic knowledge of the TCP/IP stack.

Over the course of the workshop we will build and discuss a Linux-based monitoring server with exemplary services. We will not only look at the purely technical aspects of network monitoring, but also outline and take into account the necessary organisational and legal framework. After the event, participants will be able to put what they have learned into practice on their own.

As our daily lives depend ever more on a working IT landscape, while the complexity of the required infrastructure grows rapidly at the same time, network management and network monitoring are becoming increasingly important. A number of complex and often very expensive commercial network monitoring tools exist. This workshop shows how comparable functionality can be achieved with specialised, free and open source programs.

Topics in detail / outline of the tutorial:

  • Organisational questions
    • Approaches to network monitoring
    • Business planning / business continuity / TCO
    • Why free and open source software?
    • The role of network monitoring in risk management (Basel II / Sarbanes-Oxley Act (SOX))
  • Legal aspects
  • Information gathering
    • Simple Network Management Protocol (SNMP)
  • Qualitative monitoring
    • Multi Router Traffic Grapher (MRTG)
    • Cacti and RRDTool
  • Availability monitoring
    • Nagios
  • Proactive monitoring, analysing log files
  • Network troubleshooting with Wireshark
  • Security monitoring
    • Host and network scanning with nmap
    • Nessus and open source alternatives

The material is presented in lecture style and deepened through practical exercises that participants carry out on their own machines.
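
Just to give a tiny, hedged taste of the kind of hands-on check covered in the availability monitoring block, the following Python snippet performs the sort of TCP reachability test that Nagios plugins do, using the standard Nagios exit codes (0 = OK, 2 = CRITICAL); host, port and timeout are illustrative assumptions, not part of the actual course material.

    #!/usr/bin/env python
    # Hedged sketch: a minimal Nagios-style availability check for a TCP service.
    # Host, port and timeout are illustrative assumptions.
    import socket
    import sys
    import time

    HOST, PORT, TIMEOUT = "www.example.org", 80, 5.0   # assumed values

    def check_tcp(host, port, timeout):
        start = time.time()
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            sock.close()
        except OSError as err:
            print("CRITICAL - %s:%d unreachable (%s)" % (host, port, err))
            return 2                      # Nagios exit code for CRITICAL
        elapsed = time.time() - start
        print("OK - %s:%d answered in %.3fs" % (host, port, elapsed))
        return 0                          # Nagios exit code for OK

    if __name__ == "__main__":
        sys.exit(check_tcp(HOST, PORT, TIMEOUT))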

Hardware requirements: Participants must bring a computer with a current Linux distribution -- users of other operating systems (*BSD or MacOS X) should get in touch with the speakers via contact@linux-kongress.org before the event.

About the speakers:

Christoph Wegener, CISA, CISM and CBP, holds a PhD in physics and has been working freelance on IT security and open source / Linux topics with wecon.it-consulting since 1999. He is the author of numerous technical articles, a reviewer for several publishers and a member of multiple programme committees. Since the beginning of 2005 he has also been working at the European competence centre for IT security (eurobits). In addition, he is a founding member of the working group on identity protection on the internet (a-i3) e.V. and serves on its board as well as on the board of the German Unix User Group (GUUG).

Wilhelm Dolle is Senior Security Consultant at HiSolutions AG, an information security and risk consulting company in Berlin, and has been working in IT security for many years. He is a CISA, CISM and CISSP as well as a BSI-licensed ISO 27001 / IT-Grundschutz auditor, and in earlier positions as department head and member of the management board of a medium-sized company he gained experience in IT security management, risk and security analyses and incident management. Wilhelm Dolle has written numerous articles for trade journals and holds teaching assignments at a university and a university of cooperative education.

Timo Altenfeld, certified IT specialist for system integration, is a system administrator at the Faculty of Physics and Astronomy of Ruhr-Universität Bochum. Since the beginning of his training he has been fascinated by open source tools and Linux, which he has been using in professional practice for quite some time to monitor a wide variety of systems. He has been working with Linux in diverse environments since 2003.

Robin Schröder, also a certified IT specialist for system integration, has been working in the administration of Ruhr-Universität Bochum since the beginning of 2006. He administers numerous UNIX, Linux and Windows systems there and monitors their state with various open source tools. He has been working with computers, Linux and networks since 1995.

Creating a Single Sign-On Infrastructure with Kerberos and LDAP
by Michael Weiser and Daniel Kobras
Tuesday, 2008/10/07 10:00-18:00 and
Wednesday, 2008/10/08 10:00-18:00 (held in German)

The two-day tutorial targets administrators of pure Linux and mixed Linux/Windows networks. Participants should be fluent in Linux administration and networking. Basic knowledge of symmetric and asymmetric cryptographic methods is recommended. Differing from the main conference language, all course material for this tutorial will be made available in German.

Each day starts with roughly two hours of theoretical introduction into the topics. The rest of each day consists of hands-on sessions in which participants build up their own single sign-on environments in a virtual test network.

Day 1: Kerberos - a cryptographic authentication service

Presentation: The Kerberos authentication service

The tutorial starts with a talk introducing Kerberos as an authentication service, its design goals and limitations, and how they are realised in version 5 of the Kerberos protocol.

The concept of single sign-on is key to Kerberos: A user should only have to type a password once at the beginning of a session, and be able to use all network services without further interaction. The presentation next describes the practical implementation of single sign-on, arising problems and how Kerberos manages to solve them.

Hands-On: Setting up a Kerberos realm

All participants team up in pairs. First, each team uses MIT Kerberos to configure a central authentication server, the so-called Key Distribution Center (KDC). Adapting the configuration of Pluggable Authentication Modules (PAM), they join a Linux workstation to their Kerberos realm, and optionally also integrate login of a virtual machine running Windows XP, using freely available tools. Tutorial day 1 closes with the participants "kerberising" an Apache web server, or an NFSv4 file system.

Day 2: LDAP - a hierarchical directory service

Presentation: Lightweight Directory Access Protocol (LDAP)

An overview of the developments leading to LDAP as a directory service marks the beginning of tutorial day 2. The presentation continues with the relationship between X.500 and LDAP, compares both concepts, and describes strengths and weaknesses of directories in general. It introduces the concepts of delegation and replication, and finally points out both possible fields of application as well as practical problems of LDAP as a directory service.

Hands-On: Setting up a directory

Leveraging Linux virtual machines again, participants install and configure an OpenLDAP-based directory server and exercise use and administration with graphical tools and on the command line.

To increase availability of the directory, they replicate the LDAP server using syncrepl. With suitable configuration of the Name Service Switch (NSS), the kerberised Linux workstation from tutorial day 1 now also retrieves all account information from LDAP. Subsequently, LDAP access itself is kerberised using the Simple Authentication and Security Layer (SASL), which ensures both confidentiality and integrity of the requested data.
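
For readers who want to poke at such a directory outside the course environment, here is a hedged sketch of an LDAP query using the Python ldap3 module; the server name, base DN, bind credentials and user id are made-up assumptions, and the tutorial itself works with the OpenLDAP command line and graphical tools instead.

    # Hedged sketch: querying a user's POSIX account data from an OpenLDAP
    # server with the Python ldap3 module. Hostname, DNs and credentials
    # below are assumptions for illustration only.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap://ldap.example.org", get_info=ALL)
    conn = Connection(server,
                      user="cn=admin,dc=example,dc=org",   # assumed admin DN
                      password="secret",                    # assumed password
                      auto_bind=True)

    # Look up the posixAccount entry for a single user and print a few attributes.
    conn.search("ou=people,dc=example,dc=org",
                "(&(objectClass=posixAccount)(uid=jdoe))",
                attributes=["uid", "uidNumber", "gidNumber", "homeDirectory"])
    for entry in conn.entries:
        print(entry.entry_dn, entry.uidNumber, entry.homeDirectory)
    conn.unbind()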

Combined, Kerberos as an authentication service, and LDAP as a directory service complement one another to form a secure and scalable infrastructure for unified user management, even in heterogeneous environments.

About the speakers: Michael Weiser studied computer science in Leipzig and Bolton (UK). In 1996, he started to administrate his first computer labs in education and production environments, comprising Linux, SGI IRIX, Sun Solaris, Novell Netware and Mac OS X hosts, with a focus on security. Later on in Bolton, HP-UX joined the team, and high availability became a new topic of interest. Since 2004, he has been working for science + computing ag on projects and workshops about LDAP, Kerberos, and integration into Active Directory, as well as in the field of high-performance computing.

Daniel Kobras got to know complex computer networks at Tuebingen University's department of theoretical astrophysics and computational physics. As both a freelance journalist and a developer for the Linux distribution Debian, he couldn't escape IT topics even in his spare time. The physicist started working for Tuebingen-based science + computing ag in 2007, focusing on topics in high-performance computing and heterogeneous network environments.

Building Your Own Highly Available Linux-Based Firewalls
by Jörg Jungermann and Maximilian Wilhelm
Tuesday, 2008/10/07 10:00-18:00 and
Wednesday, 2008/10/08 10:00-18:00 (held in German)

This tutorial is aimed at experienced system administrators who are responsible for the operation and security of smaller or larger networks. Interested participants should be comfortable with Linux and bring basic knowledge of the TCP/IP stack.

In this hands-on tutorial you will be guided step by step through building a highly available firewall based on (Debian GNU/)Linux. You will NOT learn in this course how your network or your firewall must be configured to secure your particular network -- that can only be decided case by case.

Every aspect of the workshop described below is introduced by a short presentation and then tried out in practice, each step working towards the next element of the overall design. Every participant will have a desktop PC available on which all tutorial content can be carried out. To configure an HA firewall consisting of two machines, participants will work in pairs. In addition, a laptop running Linux is required; it serves as a client in the network.

The tutorial starts with a short recap of essential network technologies and protocols such as TCP/IP, VLANs and bonding in order to establish a common level of knowledge. Basic knowledge of the Internet Protocol and how to handle it is assumed.

The next block deals with the Linux packet filter framework "netfilter". Besides the basic concepts of a packet filter, its architecture, capabilities and extensions are covered. The topic is deepened with practical examples; rudimentary netfilter knowledge is helpful.

The tutorial continues with an introduction to highly available systems. Practical matters are explained and implemented using heartbeat as an example, and common pitfalls are pointed out and discussed. At the end of this section you will already have a highly available firewall system in your hands.

For practical use of such a system it can be necessary that all existing network connections survive the failure of one firewall or the activation of a backup system. This is only possible if information about existing connections is synchronised between the active and the standby machine. The components required for stateful failover are introduced, set up and tested.

The last block of the workshop deals with managing the rule set of the newly built firewall system. Various tools for generating and synchronising the rules are presented, and their advantages and disadvantages are discussed.
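
As a flavour of the kind of helper scripts discussed in this block, here is a minimal, hedged sketch that generates a few netfilter rules from Python; the interface name, service list and policy are illustrative assumptions and not the tooling actually used in the tutorial.

    #!/usr/bin/env python
    # Hedged sketch: generating a tiny netfilter rule set from a service list.
    # Services and interface name are assumptions made for illustration.
    import subprocess

    ALLOWED_TCP = {"ssh": 22, "http": 80, "https": 443}   # assumed services
    WAN_IF = "eth0"                                        # assumed interface

    def run(cmd):
        print(" ".join(cmd))
        subprocess.check_call(cmd)

    def apply_rules():
        # Default policy: drop everything arriving on INPUT ...
        run(["iptables", "-P", "INPUT", "DROP"])
        # ... but keep established connections working,
        run(["iptables", "-A", "INPUT", "-m", "state",
             "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"])
        # and open the explicitly allowed services on the WAN interface.
        for name, port in sorted(ALLOWED_TCP.items()):
            run(["iptables", "-A", "INPUT", "-i", WAN_IF, "-p", "tcp",
                 "--dport", str(port), "-j", "ACCEPT"])

    if __name__ == "__main__":
        apply_rules()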

Depending on the time available and the participants' interests, the following additional topics are possible:

  • Scalability
  • Bridgewall concepts
  • Active-active setups
About the speakers:

Maximilian Wilhelm has been working for several years as a Linux system and network administrator at the Institute of Mathematics of the University of Paderborn. He is responsible for the network, the backup system and the Xen cluster. As hostmaster he built an HA firewall system there which has now been in use for several years. Alongside this he studies computer science and is currently working on his Bachelor's thesis.
In his spare time he enjoys playing the piano or roaming around with his camera.
Maximilian Wilhelm holds an LPIC-2 certificate.

Jörg Jungermann is a system administrator at the Institute of Mathematics of the University of Paderborn. His main areas of responsibility are the HA firewall system, the LaTeX installation and the Typo3 system. He is studying mathematics and computer science for a teaching degree in Paderborn.
In his spare time he breaks his bones playing handball or tinkers with embedded systems such as Lego(R) Mindstorms(R) or Linux-capable routers.
Jörg Jungermann holds an LPIC-1 certificate and may call himself FST.

Building a virtualization cluster based on Xen and iSCSI-SAN
by Thomas Groß
Tuesday, 2008/10/07 10:00-18:00 and
Wednesday, 2008/10/08 10:00-18:00

Day 1: Setting up an iSCSI SAN and the Xen-based cluster servers. Day 2: Using lax to set up IT infrastructure components on the cluster; setting up monitoring, notification and high availability.

Day 1

This day explains the steps to set up a virtualization cluster with standard Linux tools. A cluster consists of a storage system (iSCSI SAN) and 2 or more cluster servers. The cluster servers use Xen. My demo system consists of 3 standard PCs, one acting as the SAN and two as cluster servers. I will set everything up manually and explain the technologies from LVM2 via iSCSI up to Xen machines:

  • create logical volumes for swap and the root filesystem
  • copy an OS template into the root filesystem
  • customize the root filesystem (IP address, ...)
  • export these LVM block devices as iSCSI targets over the network
  • import the iSCSI targets on a cluster server
  • create a Xen control file for the virtual machine
We start and stop virtual machines and test live migration between cluster servers. We back up the virtual machines with LVM snapshot technology. Finally we take a look at the lax cluster scripts, which do all of this in one go.
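
The following hedged sketch strings the manual steps above together from Python via subprocess; volume group names, IP addresses and paths are assumptions, and the real lax cluster scripts are considerably more complete.

    #!/usr/bin/env python
    # Hedged sketch of the manual steps outlined above. Names and addresses
    # are assumptions; error handling is minimal on purpose.
    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.check_call(cmd, shell=True)

    # 1. Create logical volumes for the guest's root and swap (on the SAN box).
    run("lvcreate -L 8G -n vm1-root vg0")
    run("lvcreate -L 1G -n vm1-swap vg0")

    # 2. Copy an OS template into the root filesystem (template path assumed).
    run("mkfs.ext3 /dev/vg0/vm1-root")
    run("mount /dev/vg0/vm1-root /mnt && cp -a /srv/templates/sles10/* /mnt && umount /mnt")

    # 3. On the cluster server: discover and log in to the exported iSCSI targets
    #    (the volumes must already be exported on the SAN, e.g. via an iSCSI target daemon).
    run("iscsiadm -m discovery -t sendtargets -p 192.168.1.10")
    run("iscsiadm -m node --login")

    # 4. Start the guest from a prepared Xen control file.
    run("xm create /etc/xen/vm/vm1")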

Prerequisites:

Distro: openSUSE 11.0

Technologies: Xen, LVM2, iSCSI-target, Open-iSCSI, openssh

SAN: standard PC, e.g. Athlon 64 X2, 2 GB RAM, 250 GB SATA2, GBit LAN

Cluster servers: standard PCs, e.g. Athlon 64 X2, 4 GB RAM, 250 GB SATA2, GBit LAN

8-port GBit switch (low cost), cables

Participants should have Linux administration skills, especially on the command line, although we use YaST in some cases too. People who want to work on their own equipment should bring something equivalent. We do not compile Xen, we use the distribution's packages. That is why I prefer openSUSE, because Xen is well integrated there. You can use another distribution as well if the technologies above are available. Make sure your CPUs have the virtualization feature. For migration, the cluster servers' CPUs must be from the same family (Intel-Intel, AMD-AMD).

Day 2:

We use the (already set up) cluster and some lax tools to set up standard IT infrastructure components such as a name server (BIND 9), a DHCP server (dhcpd), a mail server (Postfix), an OpenVPN server, a wiki (DokuWiki), an Apache web server and maybe more. Furthermore we use lax to set up monitoring, notification and high availability both for machines and services.

About the speaker: Thomas Groß runs teegee, a small company specialising in Linux and open source software.

He studied information technology in Chemnitz, Germany. His main focus at work is Linux-based IT infrastructure, system and network administration. For a number of years he has been working for local companies on administration and software development tasks. He loves his children, mountain trekking, jogging, cycling and snowboarding, and hopes to run a marathon this year. Sometimes he prefers tactile information storage like the books of J.K. Rowling, J.R.R. Tolkien and Walter Moers.

Linux-HA (aka Heartbeat) v2 Setup and Administration
by Daniel Peeß
Tuesday, 2008/10/07 10:00-18:00

Linux-HA v2 is a very flexible solution for making services highly available. It is the first solution for Linux that provides features previously available only in commercial products, such as:

  • Multiple nodes (tested with up to 16 nodes, but this is not a hard limit).
  • Integrated into Linux; combinations with other Linux tools are possible.
  • Generic: Linux-HA works with every service that is usually started via the SysV init concept.
  • Interfaces, components and responsibilities are designed according to the Open Cluster Framework specifications.
  • Complex resource combinations like resource groups, resource clones and master/slave resources.
  • Complex constraint possibilities to decide where a resource has to run and how it should react in special cases.

But the fact that it is so highly flexible makes it also complex to manage. "With great power comes great responsibility".

The tutorial is divided into these sections:

1. Cluster Basics
Here we will talk about the different types of clusters, their advantages and disadvantages. Also part of this section is basic knowledge about cluster handling and problems that can occur in a cluster.

2. Linux-HA Basic Setup and Cluster Start
In this chapter we will see the prerequisites for getting Linux-HA up and running.

3. Linux-HA Design
The individual components that make up Linux-HA are shown in this chapter. The graphical interface that makes managing Linux-HA easier is demonstrated and used for a basic resource setup.

4. The Cluster Information Base (CIB)
A cluster-wide database that governs the behavior of Linux-HA. Correctly adding, deleting and changing entries for resources and constraints is central to managing the cluster.
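
To make "adding an entry" a bit more concrete, here is a hedged sketch that feeds a primitive IPaddr2 resource to cibadmin from Python; the resource id, IP address and the exact XML layout are assumptions for illustration and may differ from the CIB schema used in the course.

    #!/usr/bin/env python
    # Hedged sketch: adding a primitive IPaddr2 resource to the CIB.
    # Resource id, IP address and XML layout are assumptions. If your cibadmin
    # does not accept inline XML via -X, write the fragment to a file and pass
    # it with -x instead.
    import subprocess

    RESOURCE_XML = """
    <primitive id="res_ip_web" class="ocf" provider="heartbeat" type="IPaddr2">
      <instance_attributes id="res_ip_web_attrs">
        <attributes>
          <nvpair id="res_ip_web_ip" name="ip" value="192.168.100.50"/>
        </attributes>
      </instance_attributes>
    </primitive>
    """

    # -C creates a new object, -o resources selects the CIB section.
    subprocess.check_call(["cibadmin", "-C", "-o", "resources", "-X", RESOURCE_XML])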

5. Resources
Types of resource agents, their possibilities and their usage. The kinds of resources that can be used in Linux-HA (primitives, groups, clones and master/slave) are also explained.

6. Constraints
Rules that handle the relationships between resources, nodes and changes of the cluster state.

7. Complex Demonstration + Exercise

A four-node cluster setup: two nodes act as a highly available storage backend, the other two are our highly available web server frontends.

This tutorial targets experienced Linux administrators with solid command-line knowledge of system and service administration.

About the speaker: Daniel Peeß is an experienced trainer and consultant employed at B1 Systems GmbH, which focuses on virtualization and clusters. Daniel has specialized in building high-availability clusters with Heartbeat.

VoIP Jumpstarting - getting through the initial hurdles
by Heison Chak
Tuesday, 2008/10/07 10:00-18:00

This tutorial will cover VoIP from a sysadmin perspective: how one can leverage the potential of VoIP to meet business requirements. The tutorial is filled with examples, from deploying VoIP applications within a corporate environment to enabling communication for road warriors.

While Asterisk is the platform on which some of the discussion will be based, comparison with other VoIP products and platforms allows attendees to grasp practical VoIP knowledge and techniques -- dial-plan manipulation, traffic shaping, integration with legacy systems, conversation auditing and more.
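
As a small, hedged taste of scripting against Asterisk, the following snippet logs in to the Asterisk Manager Interface (AMI) over a plain socket and originates a call; host, credentials and channel names are assumptions, and AMI must be enabled in manager.conf for anything like this to work.

    #!/usr/bin/env python
    # Hedged sketch: talking to the Asterisk Manager Interface from Python.
    # Host, manager account and channel names below are assumptions.
    import socket

    HOST, PORT = "pbx.example.org", 5038        # 5038 is the default AMI port
    USER, SECRET = "monitor", "notreally"       # assumed manager account

    def send(sock, action, **kwargs):
        lines = ["Action: %s" % action] + ["%s: %s" % kv for kv in kwargs.items()]
        sock.sendall(("\r\n".join(lines) + "\r\n\r\n").encode("ascii"))

    sock = socket.create_connection((HOST, PORT), timeout=10)
    print(sock.recv(128).decode("ascii", "replace"))       # server banner
    send(sock, "Login", Username=USER, Secret=SECRET)
    # Originate a call from a SIP phone to an internal extension (names assumed).
    send(sock, "Originate", Channel="SIP/1000", Exten="2000",
         Context="internal", Priority="1")
    print(sock.recv(4096).decode("ascii", "replace"))
    sock.close()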

Intended Audience: Managers and system administrators involved in the evaluation, design, implementation of VoIP infrastructures. Participants do not need prior exposure to VoIP but should be familiar with networking principles.

About the speaker: Heison Chak is currently working at Avaya as a Post-Sales Engineer supporting IVR products running on SCO UNIX, Solaris and Linux platforms. Heison has been an active member of the Asterisk community and a frequent speaker on VoIP topics. His VoIP column in ;login: is well received.

DRBD + Heartbeat + Xen: HA Virtualization Environment
by Lars Ellenberg
Wednesday, 2008/10/08 10:00-18:00

A complete high-availability solution for virtualization environments.

  • Introduction, basic configuration and handling
    • DRBD
    • Heartbeat
    • Xen
  • Best Practices for Designing your Host Environment
    • eliminate Single Points of Failure
    • dedicated Storage Cluster (iSCSI/GNBD/NFS): Yes or No?
    • Availability vs. Complexity
  • Deploying Virtual Machines
    • from scratch
    • creating templates
  • Cluster Management
    • migration of guests
      • stop them, then start on some other node
      • freeze them, then unfreeze on some other node
      • live migration
    • failure scenarios (a small state-check sketch follows this outline)
    • upgrade scenarios
    • scaling options
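
Not part of the official material, but to give a flavour of the state checks that come up under failure scenarios, here is a hedged sketch that parses /proc/drbd and reports whether all resources are connected and up to date; the field layout is assumed to be that of the DRBD 8.x status line.

    #!/usr/bin/env python
    # Hedged sketch: check DRBD resource state by parsing /proc/drbd.
    # Assumes the DRBD 8.x "cs:... ds:..." status line format.
    import re
    import sys

    def drbd_states(path="/proc/drbd"):
        states = {}
        pattern = re.compile(r"^\s*(\d+):\s+cs:(\S+)\s+.*ds:(\S+)")
        with open(path) as proc:
            for line in proc:
                match = pattern.match(line)
                if match:
                    minor, cstate, dstate = match.groups()
                    states[int(minor)] = (cstate, dstate)
        return states

    if __name__ == "__main__":
        healthy = True
        for minor, (cstate, dstate) in sorted(drbd_states().items()):
            print("drbd%d: connection=%s disks=%s" % (minor, cstate, dstate))
            if cstate != "Connected" or not dstate.startswith("UpToDate"):
                healthy = False
        sys.exit(0 if healthy else 1)
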
About the speaker: Lars Ellenberg is one of the lead developers of DRBD and known as "helpdesk" on the drbd-user mailing list. Florian Haas, co-speaker on this tutorial, is known, among other things, for his "DRBD User's Guide" and his blog about DRBD and High Availability.

Linux @ Layer2
by Johannes Hubertz and Jens Link
Wednesday, 2008/10/08 10:00-18:00 (held in German)

A short excursion into the ISO/OSI world at the beginning illustrates the thought structures and patterns that are essential for understanding the rest.

The tutorial is primarily about Linux at layer 2. Besides using Linux as a bridge, e.g. for sniffing, or as a packet filter (ebtables), redundant server attachment (bonding) is discussed. The various spanning tree variants are covered as well. With VLANs, and Linux acting as a router between them, there is also a short excursion to layer 3.
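
As a hedged sketch of the kind of setup exercised in the practical parts, the following Python snippet creates a bridge and a tagged VLAN interface by calling the standard tools (bridge-utils, vconfig, iproute2); interface names and the VLAN id are assumptions for illustration.

    #!/usr/bin/env python
    # Hedged sketch: bridge and VLAN setup via the usual layer-2 tools.
    # Interface names and VLAN id are assumptions.
    import subprocess

    def run(*cmd):
        print(" ".join(cmd))
        subprocess.check_call(cmd)

    # Create a bridge and enslave two physical NICs to it.
    run("brctl", "addbr", "br0")
    run("brctl", "addif", "br0", "eth0")
    run("brctl", "addif", "br0", "eth1")
    run("ip", "link", "set", "br0", "up")

    # Add a tagged VLAN interface (VLAN id 100) on top of eth0.
    run("vconfig", "add", "eth0", "100")
    run("ip", "link", "set", "eth0.100", "up")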

Besides pure theory there are numerous exercises to deepen what has been learned. In the practical parts a number of useful tools are presented. There are also a few short excursions into the Cisco world.

The tutorial is aimed at Linux admins with good Linux knowledge and basic networking knowledge.

Participants are asked to bring laptops (running Linux), ideally with two Ethernet interfaces if available. If possible, VMware should be running on the machines so that more than one Linux instance can be started on a single piece of hardware.

About the speakers: Jens Link (http://www.quux.de) has been working in IT for more than 12 years. He is a freelance consultant; his focus is on complex networks (Cisco), firewalls (Linux, Cisco, Check Point) and network monitoring with open source tools.

From 1998 on, Johannes Hubertz used Linux at his employer for a number of sensitive tasks such as routing, DNS and server monitoring; Debian quickly became the favourite. In 2001 a low-cost firewall and VPN solution had to be developed, for his own use and for customers.

Since August 2005 he has been running his own GmbH, spreading Linux combined with security. Services around internet security are its programme, not least its own development, the 'simple security policy editor'.

Keynote: What is the Value of Open Source?
by James Bottomley
Thursday, 2008/10/09 09:30-10:30

Measuring value is a very subjective exercise: it can be philosophical (the values of the FSF's four freedoms), monetary (the value of Open Source is equal to what I paid for it), or anything in between (the value of open source to my company is the cost of the software it replaces less the price of keeping it running), and so on...

It has been said that communities are built on shared values. However, we will demonstrate for Open Source that this is incorrect: communities are in fact built on a shared appreciation of value, but the way a member of the community defines that value need have nothing in common with the way another member does it. This talk will explore these "disparate value" communities in Linux and give some tips on how to generate them, including the useful one of how to persuade your boss that he should be paying you to work more on open source.

About the speaker: James Bottomley is currently CTO of Hansen Partnership, Inc., a Director of the Linux Foundation and Chair of its Technical Advisory Board. He is Linux Kernel maintainer of the SCSI subsystem, the Linux Voyager port and the 53c700 driver. He has also made contributions to PA-RISC Linux development in the area of DMA/device model abstraction. He was born and grew up in the United Kingdom. He went to university at Cambridge in 1985 for both his undergraduate and doctoral degrees. He joined AT&T Bell Labs in 1995 to work on Distributed Lock Manager technology for clustering. In 1997 he moved to the LifeKeeper HA project. In 2000 he joined SteelEye Technology, Inc. as Software Architect and later as Vice President and CTO.

There can be only one - The unified x86 architecture
by Glauber Costa
Thursday, 2008/10/09 11:00-11:45

Once upon a time, during the 2007 Kernel Summit, it was decided, not without extensive discussion, that the kernel architecture trees for i386 and x86_64 would be better off as one.

One year later, the work has started and moved fast. But it is still far away from being finished, with some core areas still needing to be addressed.

What we see now is a hybrid world in which two close siblings march together until the traces of their differences completely vanish.

This talk will look back at the work so far, attempting to answer the main questions about it. Was it worth it? Are we in better shape than before? And more importantly, what are the main lessons that were learned, and how can they be used to increase the future quality of the Linux kernel as a whole?

Last, but not least, we'll discuss what there still is to be done, which areas still need attention, and what you can do to help.

About the speaker: Glauber is a software engineer who graduated from the University of Campinas, Brazil. Early in his studies (when he had plenty of time), he met Free Software for the first time.

In 2004, while still an undergrad, Glauber joined IBM's Linux Technology Center, as part of the first group gathered by the company in the country for that purpose. From that moment on, he never had any spare time left.

Since 2006, he's been working for Red Hat in the virtualization group, initially on the never-ending task of getting Xen ready for the RHEL5 release. During this time, he wrote code for the paravirt_ops framework for x86_64, lguest and KVM.

Nowadays, his time is split between pushing KVM forward and other general Linux issues, such as the x86 integration.

He's also known, whenever going to conferences abroad, to bring along a bottle of true and traditional Brazilian cachaça (a sugar-cane-based beverage), to spread Brazilian culture and get people drunk.

Scalable filesystems boosting Linux storage solutions
by Daniel Kobras
Thursday, 2008/10/09 11:00-11:45

Scalable clustered filesystems provide an efficient and convenient solution to today's ever-increasing storage demands. In general, the growth of network and storage systems lags behind pure CPU power, giving rise to bottlenecks that can only be circumvented with great--and costly--effort using traditional solutions. With these problems in mind, scalable clustered filesystems have been designed to dynamically adapt to current needs in performance, capacity, and reliability.

Driven by extreme demands on capacity and bandwidth in high-performance computing, several free and proprietary implementations have emerged, powering storage for the world's largest supercomputers. Their features, however, also make them a viable solution in smaller-scale environments: high scalability makes it possible to consolidate separate storage islands into a single filesystem namespace, thus simplifying workflows and data handling. Furthermore, clusters of NFS or CIFS fileservers backed by scalable filesystems allow custom-built alternatives to traditional NAS filers that are performant, cost-effective, and highly available.

This development shows many parallels to the rise of Linux compute clusters in the mid-90s, built to circumvent performance bottlenecks in single-system servers. Cluster middleware like batch systems or MPI was key to their success, as it enabled users to treat a group of nodes as a single entity. For today's storage clusters, the scalable, global filesystem provides the middleware component that forms a single service out of a collection of fileservers.

Filesystems like GFS, GPFS, or Lustre give Linux the potential to become a key platform for scalable storage solutions. This paper compares the different filesystem architectures and implementations on the market today and discusses their applicability in business environments as well as problem areas like data backup and restore.

About the speaker: Daniel Kobras works as a systems engineer for science+computing ag. Specialising in high-performance and technical computing, he designs and implements clustered storage solutions for customers primarily in the automotive industry.

Chasing the Penguin: Evolution and State of the Kernel
by Wolfgang Mauerer
Thursday, 2008/10/09 11:45-12:30

Linux kernel development is among the fastest-paced and most dynamic projects in the open source community. Vast amounts of high-quality code are constantly being contributed by a worldwide network of developers. Operating system development in general is not an easy topic to grasp, and the speed at which changes are absorbed into the Linux kernel makes following the overall development process a challenge. Nevertheless, as one of the most prestigious and interesting pieces of code out there, understanding the sources and tracking the development is a worthy goal for many people.

Essentially, I will concentrate on two topics in my talk:

On the one hand, interesting new components of the kernel developed during the last couple of releases will be discussed. This includes, for instance, the new CPU scheduler and the improved real-time capabilities of the kernel. The selected examples will also be used to illustrate techniques and introduce analysis tools that allow the kernel's evolution to be tracked. This includes the usual suspects like git, lxr, user-mode Linux, ddd, and other analysis tools. Additionally, I will show how to identify good entry points into the kernel for understanding its various parts, and how the above-mentioned tools can be used to identify the relevant data structures and their interrelations.
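
As a hedged illustration of the simplest of these techniques, the following snippet counts commits between two releases with git rev-list; the tag names and the file path are assumptions, and it must be run inside a clone of the mainline kernel tree.

    #!/usr/bin/env python
    # Hedged sketch: counting kernel commits between two release tags with git.
    # Tag names and path are assumptions made for illustration.
    import subprocess

    def commits(rev_range, path=None):
        cmd = ["git", "rev-list", rev_range]
        if path:
            cmd += ["--", path]
        out = subprocess.check_output(cmd)
        return len(out.splitlines())

    RANGE = "v2.6.25..v2.6.26"            # assumed release range
    print("all commits in %s: %d" % (RANGE, commits(RANGE)))
    print("scheduler commits:   %d" % commits(RANGE, "kernel/sched.c"))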

Since kernel development is not all about technology, I will also discuss some social aspects of the development process. Since this is the lighter part of the talk, I will pay special attention to some entertaining and/or funny and/or interesting discussions that arose on the Linux kernel mailing list in these contexts.

About the speaker: Wolfgang has been writing documentation on various Linux and Unix related topics for the last 10+ years, and has closely tracked kernel development during this time. His articles have been translated into six languages. He is the author of two German books, one on text processing with LaTeX, and one about the architecture of the Linux kernel. A translation of the latter into English has recently been sponsored by the Hewlett-Packard company, and a version on track with 2.6.26 will be published in September by Wrox/Wiley.

He is the co-founder of the 1998 internet startup mynetix for which he worked as technologist. Currently he works as a researcher in quantum information theory at a Max Planck Institute where he is interested in using Linux for scientific tasks, mostly numerical simulation of quantum systems and quantum programming languages.

Semantic Filesystems and Formal Concept Analysis: a PhD 5 years in the making
by Benjamin Martin
Thursday, 2008/10/09 11:45-12:30

My PhD thesis was on applying a branch of mathematics, Formal Concept Analysis (FCA), to semantic filesystems. The notion of a semantic filesystem was put forward over a decade ago and largely covers the metadata-heavy filesystem views and index+search interaction that form the core of projects like libferris, tracker, and beagle. Although rather risky, I took the route of performing applied research for my PhD, with the result that libferris currently supports not only conventional desktop search (ranked queries, boolean search on full text and metadata) but also desktop search driven by FCA.

FCA can be thought of as unsupervised machine learning, or natural clustering. Given a set of objects and a set of attributes, FCA generates a finite lattice that expresses the relationship of the objects with a minimal number of nodes. The benefits of using FCA for desktop search include graceful handling of over-specified queries (you can see how you might generalize your query) and the ability to form a navigation space rather than a flat list of results (other tools try to allow navigation through rough post-processing such as splitting text and images into categories). In the navigation space, refinements to the query associated with the node you are looking at are only offered if they actually exist in the filesystem. For example, if you are viewing Hamburg + 2005 + SLR in your photo collection, a refinement of Flash will only be offered if the collection includes SLR photos taken in Hamburg in 2005 where flash was used.
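
For readers unfamiliar with the formalism, the following is the standard textbook definition of FCA (after Ganter and Wille), included here only as background and not taken from the thesis itself.

    % A formal context is a triple (G, M, I) of objects G, attributes M and an
    % incidence relation I \subseteq G \times M. For A \subseteq G and B \subseteq M
    % the derivation operators are:
    \[
      A' = \{ m \in M \mid \forall g \in A : (g, m) \in I \}, \qquad
      B' = \{ g \in G \mid \forall m \in B : (g, m) \in I \}
    \]
    % A formal concept is a pair (A, B) with A' = B and B' = A. Ordered by
    % extent inclusion, the concepts form a complete lattice -- the navigation
    % space referred to above:
    \[
      (A_1, B_1) \le (A_2, B_2) \iff A_1 \subseteq A_2
    \]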

As search is generally applicable to things other than filesystems (the above example is tailored to KPhotoAlbum), I'd aim the talk at what FCA is, why it can help, what the complexity issues are, and how you can alleviate many of them to the point that FCA becomes useful on modern machines.

About the speaker: Ben Martin has been working on filesystems for more than 10 years. He completed his Ph.D. and now offers consulting services focused on Linux, libferris, filesystems, and search solutions.

Testing real-time Linux. What to test and how?
by Sripathi Kodi
Thursday, 2008/10/09 14:00-14:45

Real-time Preempt patches for the Linux kernel, also referred to as PREEMPT_RT patches, aim to remove or reduce non-preemptable code paths, resulting in reduced latencies in the kernel. These patches have made the standard Linux system a credible choice in soft real-time environments. As the number of users of real-time Linux grows, so does the need to test it for functionality and performance. There are certain differences between the testing needs of the mainline kernel and those of the real-time kernel. For example, testing latency variations is much more important on the real-time kernel, whereas tests measuring throughput are more important on the mainline kernel.
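
As a hedged, very rough illustration of what measuring latency variations means in practice, the following user-space snippet mimics the idea behind a cyclictest-style measurement; interval and iteration count are arbitrary assumptions, and the real test cases discussed here are considerably more careful.

    #!/usr/bin/env python
    # Hedged sketch: a crude wake-up latency probe. It repeatedly requests a
    # fixed sleep and records by how much each wake-up overshoots the request.
    # A Python process is of course far noisier than a dedicated C test.
    import time

    INTERVAL = 0.001          # request a 1 ms sleep (assumed)
    ITERATIONS = 10000        # assumed sample count

    latencies = []
    for _ in range(ITERATIONS):
        start = time.monotonic()
        time.sleep(INTERVAL)
        overshoot = time.monotonic() - start - INTERVAL
        latencies.append(overshoot * 1e6)   # microseconds

    latencies.sort()
    print("min %.1f us  avg %.1f us  max %.1f us" %
          (latencies[0], sum(latencies) / len(latencies), latencies[-1]))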

When we started using the real-time kernel a couple of years ago, we also started writing test cases for it. We started off with tests for functionality and latencies. Later on, we added some stress and performance tests as well. We put these tests on kernel.org (http://www.eu.kernel.org/pub/linux/kernel/people/dvhart/realtime/tests/) and they started attracting contributions from the community. Some of these contributions include support for the PowerPC architecture and a number of bug fixes. We recently integrated our tests into the LTP (http://linuxtestproject.org/) to increase their visibility and make them easy for others to use and contribute to. This has resulted in increased community activity in this area.

This paper will first talk about the things to test in a PREEMPT_RT kernel and how this differs from testing the mainline kernel. It will then explain how some system settings have a significant impact on the behavior of tests. The paper then offers a few tips on how to write tests for the real-time kernel. In the next part, the paper explains the functionality tested by the real-time test suite in LTP, using a few tests as examples. It also briefly covers a few other real-time tests that are not in LTP. Finally, it discusses stress and performance testing on the real-time kernel and explains the subtle differences between running these tests on mainline and real-time kernels.

About the speaker: Sripathi Kodi is a member of IBM's Linux Technology Center in Bangalore. He currently works on real-time Linux. Before starting to work on Linux, he worked on proprietary UNIX systems and Java Virtual Machines. He has spoken at various universities and FOSS events in India about open source software in general and real-time Linux in particular.

Server Consolidation with Xen Farming
by Ulrich Schwardmann
Thursday, 2008/10/09 14:00-14:45

Most consolidation concepts for servers are based on virtualisation of the underlying hardware components. This is usually done by operating system-level virtualization, paravirtualisation or virtual machines. All of these techniques gain their flexibility by allowing a more or less broad variety of operating systems to be run. On the other hand, this flexibility usually means individual administration of each of these systems.

But real server consolidation should be more comfortable for the administrator than merely virtualising the hardware: there should be an essential benefit in administration too. To our knowledge the only technique, apart from our proposal, that allows an update mechanism centralized at the file system level is the proprietary system Virtuozzo.

The aim of this paper is to show a way to centralise the update and installation mechanism for a group of Xen instances on a Xen server. The main idea behind this concept is borrowed from the technology of diskless clients. Like those diskless systems, the Xen clients all share a common filesystem that accommodates the main structure of the software components. Only the individual parts of the operating system and software components are held in a separate writable filesystem. In contrast to diskless clients, a real filesystem is used here, which can additionally hold the special software and data used on the individual client. This filesystem of course also contains all kinds of configuration and logging data.

This obviously means that software installation and updates on the central filesystem cannot be done by the client, but have to be done centrally on the so-called master of the farm. A major part of developing this farming concept was therefore deciding which possibilities the client administrators should have and which constraints follow from the central management. Our view was that client administrators should concentrate on the application aspects. They should focus on the configuration of the software components they need and use the underlying operating system as a secure and updated basis.

This farming concept started at GWDG as a feasibility study, but now hosts eight clients, four of them already in production. This paper describes the exact structure of the filesystem, the problem of data consistency, the details of the clients' boot process needed to make the whole filesystem available, and the state of the utility components for installing and administrating master and clients.

About the speaker: The author is a scientific staff member of the Gesellschaft für wissenschaftliche Datenverarbeitung Göttingen (GWDG) and works mainly in the field of scientific computing, since 2004 with a special focus on grid computing. The author's interest in operating systems and virtualization issues goes back to 1997, when he started a feasibility study to use the computers of GWDG's course classroom as one of the first parallel computer clusters in a German computing centre. This parallel system, after several hardware upgrades, has been operating ever since. It benefits cluster users as additional hardware and benefits course teachers through the flexibility of the virtualization. For the past three years the author has been using Xen virtualization for a couple of servers in the context of scientific, cluster and grid computing.

The Linux Scheduler, today and looking forward
by Dhaval Giani, Srivatsa Vaddagiri and Peter Zijlstra
Thursday, 2008/10/09 14:45-15:30

This paper describes the new changes in the Linux process scheduler. The new process scheduler introduces the concept of scheduler classes, with two main classes, SCHED_RT and SCHED_OTHER. The SCHED_OTHER class is handled by what is known as the Completely Fair Scheduler (CFS). We will look at the need for CFS and how it works, as well as how the new SMP load balancer and the new group scheduler work. The paper will also outline where the scheduler is headed in the future.
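
As rough background (not taken from the paper itself): CFS accounts CPU time as weighted virtual runtime and always runs the runnable task with the smallest vruntime, kept in a red-black tree. Schematically:

    % Hedged sketch of the CFS bookkeeping, not the paper's own notation:
    \[
      \Delta\mathit{vruntime}_i \;=\; \Delta t_{\mathrm{exec}} \cdot
          \frac{w_{\mathrm{nice}\,0}}{w_i}
    \]
    % where w_i is the load weight derived from task i's nice level, so heavier
    % (higher-priority) tasks accumulate vruntime more slowly and therefore
    % receive a proportionally larger share of CPU time.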

About the speakers: Dhaval works at IBM's Linux Technology Center at Bangalore. He has been a part of LTC since June 2007.

Dhaval is currently maintaining libcgroup, a library which provides APIs for applications to utilize the cgroups feature in Linux. He is involved in various activities around the libcgroup project. Dhaval has also been involved in the group scheduler development. His interests are in resource management and power management.

Srivatsa Vaddagiri has been with IBM India for over 12 years, where he has worked on a number of projects focusing mainly on Unix systems. Some of his significant contributions are CPU hotplug for the Linux kernel and group scheduling extensions to the Linux CPU scheduler. Currently he is in a customer-facing role at the Linux Technology Centre. He can be reached at vatsa@in.ibm.com.

Peter Zijlstra is a professional Linux kernel hacker who has made significant contributions to various core Linux subsystems including the VM and scheduler. He is one of the maintainers of the lockdep/lockstat infrastructure and an active contributor to the linux-rt effort. He is currently employed by Red Hat.

Virtualization cluster built with Xen and iSCSI-SAN
by Thomas Groß
Thursday, 2008/10/09 14:45-15:30

The session explains and shows laxCluster, our software to manage Xen-based virtualization systems, both clusters with an iSCSI SAN and standalone servers. A cluster consists of a storage system (iSCSI SAN) and 2 or more cluster servers. The storage is a standard Linux server, typically with local RAID systems or FC connections to external SAN servers. It can be used as a cluster server too if Xen is running there. The cluster servers are standard Linux servers with a Xen kernel. They need only a small local storage for the operating system and can even boot from a flash disk or a USB stick. Basically laxCluster is a collection of scripts around LVM2, iSCSI, Xen and OpenSSH that install a virtual machine from a template into a logical volume, configure the installation according to your needs, export/import the logical volumes via iSCSI across a network, create Xen control files, and run/stop/migrate/save/restore virtual machines. Although all of this is done by scripts, there are also Kommander (KDE) based GUI tools. laxCluster uses the lax infrastructure, such as the laxDB (OpenLDAP) and automatic-login OpenSSH channels to remote machines. The integration into the lax infrastructure furthermore allows monitoring and notification of cluster servers, virtual machines and services.

The solution is currently based on openSUSE 11.0 or SLES 10 SP2. The session includes a live demonstration, either via an internet/OpenVPN connection or on a local cluster. lax and laxCluster will be available in the openSUSE Build Service this autumn. See also: www.teegee.de -> Downloads -> Vorträge -> Xen-basiertes Cluster mit iSCSI-SAN (in English soon).

About the speaker: Thomas Groß runs teegee, a small company specialising in Linux and open source software. He studied information technology in Chemnitz, Germany. His main focus at work is Linux-based IT infrastructure, system and network administration. For a number of years he has been working for local companies on administration and software development tasks. He loves his children, mountain trekking, jogging, cycling and snowboarding, and hopes to run a marathon this year. Sometimes he prefers tactile information storage like the books of J.K. Rowling, J.R.R. Tolkien and Walter Moers.

High-Availability on Linux: The Future
by Lars Marowsky-Brée
Thursday, 2008/10/09 16:00-16:45

Linux has an abundance of clustering solutions; Red Hat's Cluster Suite and the Linux-HA stack are the most popular and ship with the major enterprise distributions. The idea of sharing code and standards has always existed, but with the two projects coming from different backgrounds, it has taken considerable time to make progress here.

The co-evolution of both user-space stacks, including the joining of the kernel work on GFS2 and OCFS2, has finally led to the point where more major components, such as the cluster infrastructure, are being shared.

There have also been organizational developments around the Linux-HA project, which may be of interest to the community.

Further, the Cluster Developer Summit 2008 is scheduled for the last week of September in Prague and will be attended by developers from the community, including Oracle, Red Hat, Linbit, and SuSE.

This presentation will discuss the latest developments, the current situation, and report from the summit.

About the speaker: Lars Marowsky-Brée works for the SuSE Labs department at Novell, focusing on storage and High-Availability topics. He enjoys thinking about and anticipating failure scenarios.

Scalable and Practical OpenSource Embedded System
by D. Jeff Dionne
Thursday, 2008/10/09 16:00-16:45

Embedded systems running a variant of Linux come in many shapes and sizes. From mass deployments of consumer products running uClinux kernels with a minimal userland to large Commercial Off-The-Shelf (COTS) embedded systems, the challenges are remarkably similar. We discuss the different hardware and software required to build a continuum of systems, from the point of view of economics, engineering and software. Having developed a set of requirements, we propose a portable and scalable Linux-based software stack and tool set which can be used to build such systems efficiently. We present a prototype development project using a first implementation of these specifications. Finally, we present our ideas on how community participation might be increased, to the benefit of both the commercial vendors of products containing open source components and the community.

Scope

  • System Characteristics:
    • CPU Families and Support
    • Software Stacks
    • Practical Systems: Example hardware and software stack in Commercial product
  • Cost Structures
    • Embedded Systems people need to care about Bill of Materials
    • From Concept to Components to Product
  • Design Examples (4: MIPS without MMU, MIPS with MMU, SPARC and SH3)
    • Design Specification
    • Chipset Specification
    • Linux Kernel support: Drivers and Porting
  • Towards a common software stack
    • Kernel tree (Great success has been achieved).
    • UserSpace architecture
      • Common Practice
      • A new package based, User Accessible approach
    • Embedded systems as a Platform
  • Community Building
    • OpenSourceEmbedded
    • Vendor Participation
    • Making the best use of Community and the tools the GPL gives us
About the speaker: D. Jeff Dionne is one of the original authors of uClinux (Linux for systems without an MMU), which directly spawned projects such as uClibc and set the stage for others. Jeff co-founded Rt-Control, an embedded systems company that built hardware and uClinux-based software products (eventually sold to Lineo, Inc.), and was CEO of Arcturus Networks, a network and VoIP equipment engineering house. Rt-Control and Arcturus Networks built dozens of complete embedded products containing Linux and uClinux on almost as many CPU architectures.

Most recently, Jeff is CEO of ANI Co. Ltd. Japan. As is true of CEOs in all small companies, Jeff leads product development hands on, specializing in the design of Linux based Consumer Electronics from concept through hardware engineering, software and ultimately production.

DRBD 9 + Devicemapper: The Future of (Linux) Block-Level Storage Replication
by Lars Ellenberg
Thursday, 2008/10/09 16:45-17:30

Summary:

The current architecture of DRBD has reached its limits. To cope with requested and anticipated usage scenarios, we are joining the device-mapper framework with a modular rewrite and extension of the current DRBD features.

This talk will not go into implementation details, since most of it is still under development, but will focus on the design ideas and the reasons behind them.

Coverage:

After a very brief summary and taxonomy of the What and Why of storage replication, I present the hooks and concepts to be implemented, and how to fit them into the existing devicemapper framework.

Heinz Mauelshagen recently created the dm-replicator framework, whose primary development focus is asynchronous replication using (dumb) iSCSI or FC WAN links, to allow for disaster recovery and maybe some "follow-the-sun" global storage model.

This framework, however, is sufficiently abstracted and modularized, thus flexible enough to add DRBD features for synchronous, Failover-HA suitable replication.

The additional benefit is that the various features can now easily be combined, so you can have storage with local failover HA, plus WAN replication to multiple DR sites, consistent resynchronization, time-shifted replication, rewindable storage, etc.

Features already present now in dm-replicator

  • one Primary, multiple Secondaries
  • consistent (keeping write order) update on "remote" targets
  • consistent multi-volume write ordering
  • per site link tunable replication lag to secondaries
  • unfortunately currently "dumb remotes" only, any remote needs to be visible as local block device already (typically iSCSI or FC)
  • no internal logic for data generation tracking, failover, or policy decisions
Features to be added:

  • make synchronous replication suitable for failover HA
  • add generic data generation ("monotonic storage time")
  • introduce concept of "write quorum"
  • "intelligent" transport links, more efficient asynchronous replication (scrubbing overwrites, shipping only compressed parity information, etc.)
  • time-shifting update on remote targets

As a by-product:

  • rewind in time, snapshot "after the fact": create a snapshot now - for a data set version from some hours ago.

About the speaker: Lars Ellenberg, a key DRBD developer for many years, is in charge of the further development of DRBD and its necessary integration into the generic devicemapper framework.

New Connection Manager for embedded Linux systems
by Marcel Holtmann
Thursday, 2008/10/09 16:45-17:30

The new Connection Manager for Linux is an attempt to create a generic infrastructure for managing network connections. The main goal is to make the new solution ready for embedded systems. The whole design is modeled to be slim and flexible. This is achieved via a fully plugin- and policy-based architecture. Connection Manager is the perfect solution for embedded systems like phones and tablets that run Linux and where Network Manager would be too big and complex.

The initial release has been made public as part of Moblin.org and includes support for Ethernet, WiFi and Bluetooth. Future releases will add support for Ultra-Wideband, GSM/UMTS and WiMAX.

About the speaker: Marcel Holtmann is the maintainer of the Bluetooth stack for Linux and works for the Open Source Technology Center at Intel.

Device Mapper Remote Replication Target
by Heinz Mauelshagen
Thursday, 2008/10/09 17:30-18:15

The device mapper, a general-purpose block mapping subsystem, has been part of the Linux kernel since the 2.5 development cycle and is shared by applications such as LVM2, dmraid, kpartx, multipathd etc. Details of any particular mapping (e.g. mirrored) are abstracted via pluggable mapping targets (i.e. kernel modules), enabling the addition of new mappings to a running system. A new target, which the author has been working on since late 2007, is the replicator target, addressing disaster recovery requirements in enterprises. It allows remote data replication of arbitrary groups of block devices as a single, write-ordered entity to multiple remote sites, and supports synchronous and asynchronous replication as well as fall-behind thresholds to switch from asynchronous to synchronous mode, among other features.
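
To make the idea of a pluggable mapping target concrete, here is a hedged sketch that sets up the simplest possible mapping, a linear target, with dmsetup; device name, size and backing device are assumptions and unrelated to the replicator target itself.

    #!/usr/bin/env python
    # Hedged sketch: create the simplest device-mapper mapping (a linear target)
    # with dmsetup. Name, size and backing device below are assumptions.
    import subprocess

    NAME = "demo-linear"
    SECTORS = 2097152                 # 1 GiB in 512-byte sectors
    BACKING = "/dev/sdb1"             # assumed backing device

    # Table format: <start> <length> <target type> <target args...>
    table = "0 %d linear %s 0" % (SECTORS, BACKING)
    subprocess.check_call(["dmsetup", "create", NAME, "--table", table])

    # The new mapping appears as /dev/mapper/demo-linear and can be used like
    # any other block device; "dmsetup remove demo-linear" tears it down again.
    subprocess.check_call(["dmsetup", "table", NAME])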

The talk will cover general requirements for data replication and the functionality being provided by this new target.

About the speaker: After his diploma in Electrical Engineering in 1986, the author worked on the development of distributed planning applications for the phone/ISDN network of Deutsche Telekom and in UNIX systems management in a large development center, where he started to develop Linux LVM in his spare time. In 2000 he joined Sistina Software, Inc., which allowed him to build a team and work full-time on Linux LVM development. Sistina was acquired by Red Hat, Inc. in January 2004. The author continues to work on LVM, Device Mapper and related topics, such as dmraid.

Enhancing Security of Linux-based Android Devices
by Aubrey-Derrick Schmidt, Hans-Gunther Schmidt, Kamer Ali Yüksel, Osman Kiraz, Dr. Seyit Ahmet Camptepe and Prof. Dr. Sahin Albayrak
Thursday, 2008/10/09 17:30-18:15

Our daily lives become more and more dependent upon smartphones due to their increased capabilities. Smartphone uses vary from payment systems to assisting elderly or disabled people. Security threats to these devices become more and more dangerous, since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level, and where third-party developers for the first time have the opportunity to develop kernel-based, low-level security tools.

Android quickly gained popularity among smartphone developers and even beyond, since it is based on Java on top of an "open" Linux, in contrast to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, which holds the greatest market share among all smartphone OSs, even closed critical APIs to common developers and introduced application certification. This was done because this OS was the main target of smartphone malware in the past. In fact, more than 290 pieces of malware designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy.

In this work, we present our current results in analyzing the security of Android smartphones with a focus on the Linux side. Our results are not limited to Android; they are also applicable to Linux-based smartphones such as the OpenMoko Neo FreeRunner. Our contribution in this work is three-fold. First, we analyze the Android framework and the Linux kernel to check the security functionality and tools employed, and to identify possible threats that Android devices may face. We survey well-accepted security mechanisms and tools which may address these threats. We provide detailed descriptions of how to adopt these security tools on the Android kernel, and provide an analysis of their overhead in terms of resource usage to assess their feasibility.

As open smartphones are released and may increase their market share, they may, similar to Symbian, attract the attention of malware writers. Therefore, our second contribution focuses on malware detection techniques at the kernel level. We test the applicability of existing signature-based and anomaly-based intrusion detection methods in the Android environment. We focus on monitoring events in the kernel; that is, identifying critical kernel, log file, file system and network activity events, and devising efficient mechanisms to monitor them in a resource-limited environment.

Our third contribution involves initial results of our malware detection mechanism based on system calls. We identified approximately 400 executables installed on the Linux side of Android. We perform a statistical analysis of the system calls used by these applications. The results of the analysis can be compared against newly installed applications to detect significant differences. Additionally, certain system calls, e.g. "chroot", may indicate malicious activity. Therefore, we present a metric for weighing the suspiciousness of these calls. Our results present a first step towards detecting malicious applications on Android-based devices.
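
The following hedged sketch illustrates the general idea of such a syscall census using strace's summary mode; the target binary and the "suspicious" set are assumptions for illustration only, not the authors' actual tooling.

    #!/usr/bin/env python
    # Hedged sketch: collect a per-syscall summary for an application with
    # "strace -c" and flag calls that rarely appear in benign programs.
    # The default target path and the suspicious set are assumptions.
    import subprocess
    import sys

    SUSPICIOUS = {"chroot", "ptrace", "mount", "init_module"}   # assumed set
    TARGET = sys.argv[1] if len(sys.argv) > 1 else "/system/bin/ping"  # assumed path

    # strace prints its summary table to stderr; the last column of each row
    # is the syscall name.
    result = subprocess.run(["strace", "-c", "-f", TARGET],
                            capture_output=True, text=True)
    seen = set()
    for line in result.stderr.splitlines():
        fields = line.split()
        if fields:
            seen.add(fields[-1])

    hits = sorted(SUSPICIOUS & seen)
    print("suspicious syscalls used by %s: %s" % (TARGET, ", ".join(hits) or "none"))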

About the speakers: Aubrey-Derrick Schmidt received his Diplom-Informatiker with main focus on artificial intelligence and communication systems from Technische Universität Berlin in 2006. He is a Ph.D. candidate in the department of electrical engineering and computer science at Technische Universität Berlin, where his research interests include smartphone security, monitoring, and intrusion detection.

Hans-Gunther Schmidt, currently enrolled as a computer science student at Technical University of Berlin, is a diploma thesis candidate within the Security Competence Center at DAI Labor (Technische Universität Berlin), under supervision of Prof. Dr.-Ing. habil Sahin Albayrak. With a professional background at Sun Microsystems' Mission Critical Solution Center (Solaris Networking Focus), his main interests lie in the security of current operating systems, preferably UNIX and Linux systems, distributed systems, network protocols and server deployment and administration.

Kamer Ali Yüksel is currently finishing his studies to obtain a B.S. degree in Computer Science and Engineering at Sabanci University. His main research areas are applications of artificial intelligence approaches, including multi-agent systems, machine learning, and distributed problem solving, to several problems of computer science ranging from crowd simulation to intrusion detection and prevention.

Osman Kiraz is currently a Bachelor of Science student in Computer Science and Engineering at Sabanci University. His current research interests are computer and network security, particularly intrusion detection systems for host-based agents, artificial intelligence and commonsense reasoning applications.

Dr. Seyit Ahmet Camptepe received the B.S. and M.S. degrees in Computer Engineering at Bogazici University, in 1996 and 2001 respectively, under the supervision of Prof. M. Ufuk Caglayan. He received the Ph.D. degree in Computer Science at Rensselaer Polytechnic Institute in 2007, under the supervision of Assoc. Prof. Bülent Yener. He is currently working as a senior researcher at DAI-Labor, Technische Universität Berlin, under the supervision of Prof. Dr.-Ing. habil Sahin Albayrak. His research interests include autonomous security, the economics of information security, malicious cryptography, key management, distributed systems security, attack modelling, detection, and prevention, social network analysis, and VoIP security.

Prof. Dr. Sahin Albayrak is the founder and scientific director of the Distributed Artificial Intelligence Laboratory. He received his Ph.D. from Technische Universität Berlin in 1992. Since July 2003, Prof. Albayrak has held the chair of Agent-Oriented Technologies (AOT) in Business Applications and Telecommunication at Technische Universität Berlin. He is a member of IEEE, ACM, GI, and AAAI.

Keynote: The Kernel Report
by Jonathan Corbet
Friday, 2008/10/10 09:30-10:30

The Linux kernel is at the core of any Linux system; the performance and capabilities of the kernel will, in the end, place an upper bound on what the system as a whole can do. This talk will review recent events in the kernel development community, discuss the current state of the kernel and the challenges it faces, and look forward to how the kernel may address those challenges. Attendees of any technical ability should gain a better understanding of how the kernel got to its current state and what can be expected in the near future.

About the speaker: Jonathan Corbet is a Linux kernel contributor, co-founder of LWN.net (and the author of its Kernel Page), and the lead author of Linux Device Drivers, Third Edition. He lives in Boulder, Colorado.
Extending Vyatta router to add Quality Of Service
by Stephen Hemminger
Friday, 2008/10/10 11:00-11:45

This paper describes how Quality of Service (QoS) features were added to the Vyatta Community Edition 4.0 [1]. The purpose is twofold: to describe the details of the interface and, more importantly, to use QoS as an example of how new services can be added to Vyatta software.

The key feature of Vyatta software is a command line user interface, FusionCLI, similar in functionality to proprietary router products. The CLI is an extension to GNU Bourne Again Shell (BASH) using templates and completion extensions to allow for simple scripting of services.

The challenge with QoS support is deciding how to allow enough functionality to support what the user wants without being overwhelming. In Vyatta, this is addressed by the creation of QoS policies. These policies were chosen to be familiar to network administrators. For the initial implementation, two selections are available: a fair-queue policy using the Stochastic Fair Queuing (SFQ) discipline, and a more complex traffic-shaper based on the Hierarchical Token Bucket (HTB) discipline.

The policies are implemented in the CLI as template nodes in a hierarchical directory tree. Each node has a template file describing what actions to perform for syntax validation, creation, deletion, and commit. For the QoS features, a set of Perl scripts is used to transform the actions into the underlying Traffic Control (TC) commands.
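
To give a feel for the kind of transformation these scripts perform, here is a minimal sketch, written in C rather than Vyatta's Perl, that turns a hypothetical traffic-shaper policy into tc commands for an HTB hierarchy; the interface name, rates and class numbering are illustrative assumptions, not Vyatta's actual template output.

/*
 * Minimal sketch (in C, not Vyatta's Perl) of turning a traffic-shaper
 * policy into tc commands based on the HTB discipline. The policy
 * fields and the generated class layout are illustrative assumptions.
 */
#include <stdio.h>

struct shaper_class {
    const char *name;     /* policy class name */
    const char *rate;     /* guaranteed rate   */
    const char *ceil;     /* borrowing ceiling */
    int id;               /* minor class id    */
};

static void emit(const char *cmd)
{
    /* A real implementation would execute the command and check the
     * exit status; here we only print it. */
    printf("%s\n", cmd);
}

int main(void)
{
    const char *dev = "eth0";                 /* assumed interface */
    struct shaper_class classes[] = {
        { "voip", "2mbit", "2mbit",  10 },
        { "bulk", "1mbit", "10mbit", 20 },
    };
    char cmd[256];

    /* Root HTB qdisc; unclassified traffic falls into class 1:20. */
    snprintf(cmd, sizeof cmd,
             "tc qdisc add dev %s root handle 1: htb default 20", dev);
    emit(cmd);

    snprintf(cmd, sizeof cmd,
             "tc class add dev %s parent 1: classid 1:1 htb rate 10mbit", dev);
    emit(cmd);

    for (size_t i = 0; i < sizeof classes / sizeof classes[0]; i++) {
        snprintf(cmd, sizeof cmd,
                 "tc class add dev %s parent 1:1 classid 1:%d htb rate %s ceil %s",
                 dev, classes[i].id, classes[i].rate, classes[i].ceil);
        emit(cmd);
    }

    /* A complete policy would also emit tc filter commands to classify
     * traffic into the classes created above. */
    return 0;
}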

Some of the issues uncovered in the development process were: missing packet classification for VLANs; difficulty in describing complex matching requirements; ordering problems during boot; conceptual gaps where the actual TC infrastructure "leaks out" into the user interface; and the usual set of bugs that needed fixing.

Hopefully, by reading this paper, others will be inspired to suggest further improvements, or, even better, to experiment with adding new features or with customizing the CLI for themselves.

1. Vyatta is a fully open-source routing platform based on Debian Linux. Vyatta has both subscription and community editions, similar to other Linux distributions (RHEL and Fedora).

About the speaker: Stephen has been involved with Linux kernel development for 7 years, mostly on TCP and network devices. He currently maintains the bridging and iproute2 utilities. Currently at Vyatta, he focuses on QoS, routing protocols, and performance. Stephen used to give more talks in his previous role as a fellow at OSDL, including at Linux Conference Europe 2007, Linux Conference Australia 2005, and a Linux-Kongress tutorial in 2004.
Samba status report
by Volker Lendecke
Friday, 2008/10/10 11:00-11:45

Samba 3.2 was released on July 1st, 2008. This talk will give an overview of the current status of Samba development and what can be expected in the near future.

Major new developments in Samba 3.2 are:

  • IPv6 support: Samba can now listen on IPv6 interfaces
  • Registry configuration: To make configuration for OEMs easier, Samba now provides a registry-based configuration method. This makes parsing and writing smb.conf files unnecessary and also enables remote configuration via the WINREG RPC interface.
  • Cluster support: Based on a POSIX cluster file system like GFS, OCFS or GPFS, Samba can now share the same file space via different nodes of the cluster and still maintain consistent CIFS semantics.
  • SMB transport encryption: NFSv4 has it, so we have to have it, too :-). The connection from smbclient to smbd can now encrypt the bulk data, and cifsfs is being extended to do so as well.
Future developments:
  • Samba 4 is making progress towards becoming an Active Directory domain controller, although it is still incomplete.
  • A new merge project has been started: merge the best parts of Samba 3 and Samba 4 into one build and thus provide an AD domain controller as well as a solid file and print server. This talk will show where the interfaces are and how we plan to cope with them.
About the speaker: Volker Lendecke is a member of the Samba Team and co-founder of SerNet GmbH in Göttingen, Germany.
Towards 10Gbps open-source routing
by Olof Hagsand, Bengt Görden and Robert Olsson
Friday, 2008/10/10 11:45-12:30

We present Linux performance results on selected PC hardware for IP packet forwarding at 10Gb/s speeds. In our experiments, we use Bifrost Linux on a multi-core NUMA PC architecture with multiple DMA channels, dual PCIe buses and 10GE network interface cards.

More specifically, the PC was a dual-socket system with two dual-core 3GHz AMD Opteron 2222 CPUs on a Tyan Thunder n6650W (S2915) motherboard. The network adapters were PCI Express x8 cards based on Intel's 82598 chipset.

Our experiments were divided into TX and forwarding experiments. The purpose of the TX experiments was to explore hardware capabilities, while the purpose of the forwarding experiments was to give realistic bandwidth and packet rate numbers.

In the transmission (TX) experiments, a bandwidth of 25.8 Gb/s and a packet rate of 10 Mp/s were achieved using four CPU cores and two PCIe buses. This provided us with an upper limit on IP forwarding.

In the first forwarding experiment, a single IP flow was forwarded using a single CPU. The experiment shows a forwarding rate of around 900 Kp/s, resulting in near wire-speed for larger packets but degrading bandwidth performance at smaller packet sizes.
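
As a back-of-the-envelope illustration (not a figure from the paper): at 900 Kp/s, maximum-size 1500-byte packets would correspond to about 900,000 × 1500 × 8 ≈ 10.8 Gb/s, i.e. more than a 10GE link can carry, so the link rather than the CPU is the limit; for minimum-size 64-byte packets the same packet rate yields only about 900,000 × 64 × 8 ≈ 0.46 Gb/s, which is why bandwidth degrades at small packet sizes even though the packet rate stays constant.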

Profiling was done for each experiment in order to get a detailed understanding of the CPU and code execution. The CPU spends a large part of its time in buffer handling. Input handling also seems to involve more work than forwarding and output.

Thereafter, a realistic multiple-flow stream with varying destination addresses and packet sizes was forwarded using a single CPU. 8K simultaneous flows were generated, where each flow consisted of 30 packets. A large FIB was also loaded and netfilter modules were enabled. The forwarding bandwidth was shown to be around 4.5Gb/s.

In the last experiment, multiple queues on the interface cards were used to dispatch different flows to four different CPUs. Our results indicate how the receive-side queues were evenly distributed over multiple CPU cores. This shows the potential of load balancing across multiple CPUs.

However, we identified a performance degradation when we used more than one CPU. By profiling, we detected an issue in the last part of the TX qdisc code, which is not fully ready for use in a parallel CPU environment.

When this remaining bottleneck is removed, we believe that we can exploit the full potential of multi-core, multi-queue forwarding in a Linux system, so that forwarding performance can be increased by adding more CPU cores, at least up to the point where bus and memory limitations appear. This would in principle allow us to forward realistic traffic at 10Gb/s wire-speed and beyond.

About the speaker: Olof Hagsand is an associate professor in grid and Internet technology at the School of Computer Science and Communication at the Royal Institute of Technology (KTH), Stockholm. He received his PhD in Telecommunication Networks and Systems from KTH in 1995, and has been active in both research and industry in the field of networking and router architectures. As a researcher, he worked at SICS for ten years, and has been at KTH since 2003. He also has industrial experience from networking companies including Dynarc, Prosilient, Xelerated, and Ericsson. At Dynarc, he was the chief software architect and led the software development of Dynarc's router products.
Samba's new registry based configuration
by Michael Adam
Friday, 2008/10/10 11:45-12:30

Starting with version 3.2.0, Samba offers a new configuration system that stores the configuration parameters in Samba's internal registry database. This backend can be chosen in combination with or as a substitute for the traditional text configuration backend (the smb.conf file).

The registry store has the advantage of being easily accessible programmatically, with the API providing locking and transactions. Furthermore, since the registry is stored as a TDB database, the new configuration backend is especially interesting for use in a clustered environment, where, with the help of ctdb, configuration changes are immediately distributed to all cluster nodes. (CTDB is the clustered implementation of TDB, see http://ctdb.samba.org.)
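
For readers unfamiliar with TDB, the following minimal sketch shows the plain key/value API that such a registry database builds on; the key name used here is purely illustrative and does not reflect Samba's actual registry layout.

/*
 * Minimal TDB usage sketch illustrating the kind of key/value store the
 * registry configuration lives in. The key and value are illustrative;
 * Samba's actual registry key layout differs.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <tdb.h>

static TDB_DATA make_data(const char *s)
{
    TDB_DATA d;
    d.dptr  = (unsigned char *)s;
    d.dsize = strlen(s) + 1;
    return d;
}

int main(void)
{
    /* Open (or create) a TDB file, just as registry.tdb is a TDB file. */
    struct tdb_context *tdb =
        tdb_open("example.tdb", 0, TDB_DEFAULT, O_RDWR | O_CREAT, 0600);
    if (!tdb) {
        perror("tdb_open");
        return 1;
    }

    /* Store one "parameter" under an illustrative key. */
    tdb_store(tdb, make_data("example/global/log level"),
              make_data("3"), TDB_REPLACE);

    /* Read it back; the caller owns (and must free) the returned dptr. */
    TDB_DATA val = tdb_fetch(tdb, make_data("example/global/log level"));
    if (val.dptr) {
        printf("log level = %s\n", (char *)val.dptr);
        free(val.dptr);
    }

    tdb_close(tdb);
    return 0;
}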

The fact that the registry is used (instead of an arbitrary TDB database) makes the configuration available remotely over the WINREG RPC interface without further effort. That is, those who like it can now edit Samba's configuration with regedit.exe from a Windows workstation...

The registry backend has been introduced together with a new interface abstraction layer for accessing Samba's configuration, the libsmbconf library. The command line program "net conf" is an example application using this interface. It provides a dedicated tool for reading and writing Samba's registry configuration.

This talk gives a demonstration of the use of the registry configuration; in particular, the net conf command line interface is presented. If time permits, the effects of using registry configuration in conjunction with ctdb will be demonstrated. Further, the API and implementation of the libsmbconf library are addressed. The netdomjoingui is presented as an example of a special-purpose application that changes parts of the Samba configuration via libsmbconf. Finally, an outlook on current and future development is given.

About the speaker: Born in 1972, I studied Mathematics and Computer Science in Göttingen and Bonn. I have been working as a senior consultant and software engineer at SerNet GmbH in Göttingen since 2002. I have been working on Samba code since 2006 and became a member of the Samba Team in 2007. My main contributions to Samba include the registry-based configuration system, libsmbconf, the registry code rewrite, POSIX ACL file system implementations as VFS modules, VFS API cleanups, the winbindd cache validation code, and enhancements to the Samba build system. I co-authored the second edition of "Samba3 für Unix/Linux-Administratoren" (in German) together with Volker Lendecke and others.
Open Source Routing in High-Speed Production Use
by Robert Olsson, Hans Wassen and Emil Pedersen
Friday, 2008/10/10 14:00-14:45

For almost 10 years we have used open source routers in mission-critical networking to bring Internet connectivity to many tens of thousands of users. Uppsala University is one of the largest universities in Sweden and it is well connected: it currently uses four Gigabit connections (including two for its student network UpUnet-S) towards our ISP (SUNET), and a production 10G connection is planned. Uptime for our users is close to 100% due to the testing and verification efforts and also due to the redundancy of the dual access. This makes it possible to replace and upgrade core routers without loss of connectivity.

Reporting this success does not mean it has been simple or without effort. It requires skill and planning, and the network managers must understand issues like packet budget and bandwidth needs and be able to match them to the equipment and routers used. It is important to understand traffic patterns and routing protocols, and of course how to operate and monitor the routers. Hardware selection and testing, to find robust and high-performing hardware in combination with good device driver and kernel support, is also a time-consuming process.

The major core routers use full BGP with our ISP (SUNET) and local peering, and OSPF is used as the IGP. Both IPv4 and IPv6 are in production. The student network uses authenticated network access. Over time we have identified areas of work, collaborated with Linux network developers, and contributed work in different areas.

A breakthrough in performance is expected due to hardware-assisted support, such as the hardware classifiers and multiple RX and TX rings found on new interface cards, together with new kernel support for multiple queues.

About the speaker: The author is experienced in IP routing, network design, and network and equipment testing. He has collaborated on and contributed to various parts of the Linux networking code such as pktgen, fib_trie and NAPI. He is also moderator for Bifrost Linux, a distribution specifically targeted at networking, and coordinator for the Bifrost user community.
mISDN continued
by Karsten Keil
Friday, 2008/10/10 14:00-14:45

At the 11th Linux-Kongress I presented the new modular ISDN driver architecture to the audience. After some years of experiments and a lot of different implementations of the architecture, I have finally released the modular ISDN driver v2.0 into kernel 2.6.27.

The new driver uses sockets to communicate with upper layers in user space. Only the hardware abstraction layer and the ITU layer-2 ISDN protocol (Q.921) remain in the kernel; all other protocol layers are implemented in user space, including CAPI 2.0 support.

It has support for a DSP framework for audio processing, so it is optimized for voice-handling applications like a PBX. mISDN also has a layer-1-over-IP module, so you can easily export the functionality of your ISDN card via your LAN or into virtual machines. It can be used with different open source projects to build Voice over IP gateways (e.g. pbx4linux, Asterisk).

I will give an overview of the current mISDN driver layout and its components, and show how to set up a simple PBX with it.

About the speaker: Karsten Keil started developing ISDN drivers for Linux in 1995. He is the maintainer of the ISDN subsystem in the Linux kernel. Since 1999 he has been working for SuSE Labs as a kernel engineer in the networking area.
Latency reducing TCP modifications for thin-stream interactive applications
by Andreas Petlund
Friday, 2008/10/10 14:45-15:30

A wide range of Internet-based services that use reliable transport protocols display what we call thin-stream properties. This means that the application sends data at such a low rate that the retransmission mechanisms of the transport protocol are not fully effective. In time-dependent scenarios (like online games, control systems or some sensor networks) where the user experience depends on the data delivery latency, packet loss can be devastating for the service quality. Extreme latencies are caused by TCP's dependency on the arrival of new data from the application to trigger retransmissions effectively through fast retransmit instead of waiting for long timeouts.

In order to reduce application-layer latency when packets are lost, we have implemented modifications to the TCP retransmission mechanisms in the Linux kernel. We have also implemented a bundling mechanism that introduces redundancy in order to preempt the experience of packet loss. In short, if the kernel detects a thin stream, we trade a small amount of bandwidth for latency reduction and apply:

Removal of exponential backoff: To prevent an exponential increase in retransmission delay for a repeatedly lost packet, we remove the exponential factor.

Faster Fast Retransmit: Instead of waiting for 3 duplicate acknowledgments before sending a fast retransmission, we retransmit after receiving only one.

Redundant Data Bundling: We copy (bundle) data from the unacknowledged packets in the send buffer into the next packet if space is available.

These enhancements are applied only if the stream is detected as thin. This is accomplished by defining thresholds for packet size and packets in flight. Also, we consider the redundancy introduced by our mechanisms acceptable because the streams are so thin that normal congestion mechanisms do not come into effect.
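
A minimal sketch of the detection idea follows; the threshold values and structure fields are illustrative assumptions, not necessarily those used in the actual kernel patch.

/*
 * Minimal sketch of thin-stream detection as described above.
 * Threshold values and field names are illustrative assumptions,
 * not the actual Linux kernel patch.
 */
#include <stdbool.h>
#include <stdio.h>

#define THIN_MAX_PACKETS_IN_FLIGHT 4    /* assumed threshold */
#define THIN_MAX_PACKET_SIZE       512  /* bytes, assumed threshold */

struct stream_state {
    unsigned int packets_in_flight;     /* unacknowledged segments */
    unsigned int avg_packet_size;       /* bytes per segment */
};

/* A stream is treated as "thin" when it has so few small packets in
 * flight that fast retransmit (3 dupACKs) is unlikely to trigger. */
static bool stream_is_thin(const struct stream_state *s)
{
    return s->packets_in_flight < THIN_MAX_PACKETS_IN_FLIGHT &&
           s->avg_packet_size   < THIN_MAX_PACKET_SIZE;
}

int main(void)
{
    struct stream_state game = { .packets_in_flight = 1,  .avg_packet_size = 120  };
    struct stream_state bulk = { .packets_in_flight = 40, .avg_packet_size = 1448 };

    /* Only thin streams would get the modified behaviour: no exponential
     * backoff, retransmit on the first dupACK, and bundling of
     * unacknowledged data into new segments. */
    printf("game stream thin: %s\n", stream_is_thin(&game) ? "yes" : "no");
    printf("bulk stream thin: %s\n", stream_is_thin(&bulk) ? "yes" : "no");
    return 0;
}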

We have implemented these changes in the Linux kernel (2.6.23.8), and have tested the modifications on a wide range of different thin-stream applications (Skype, BZFlag, SSH, ...) under varying network conditions. The modifications are made available as a patch. Our results show that applications which use TCP for interactive time-dependent traffic will experience a reduction in both maximum and average latency, giving the users quicker feedback to their interactions.

The availability of this kind of mechanism will give Linux an edge when it comes to providing customizability for interactive network services. The quickly growing market for Linux gaming may benefit from lowered latency. As an example, most of the large MMORPGs today use TCP (like World of Warcraft and Age of Conan), and several multimedia applications (like Skype) fall back to TCP if UDP is blocked.

The talk will outline the different aspects of using TCP for interactive time-dependent applications. We will present statistics from network traces of a range of thin-stream applications. Our mechanisms will be described and compared to other related TCP modifications. We will also present results from the extensive tests performed using our modifications, and we can demonstrate the modifications as well.

About the speaker: Andreas Petlund received his B.Sc. in informatics in 2003 and his M.Sc. in 2005, both at the University of Oslo. He is currently a Ph.D.-student at Simula Research Laboratory / University of Oslo. His main research interests include network protocol optimization for time-dependent thin streams, operating systems optimizations and hardware offloading.
Complete and comprehensive service management built purely on open source
by Michael Kienle
Friday, 2008/10/10 14:45-15:30

Today's increasing competition demands a modern IT organisation that focuses on enabling tightly integrated business processes built on top of IT infrastructure and applications. Stability and reliability are therefore becoming more and more crucial for a successful IT organisation, especially when these criteria are measured on an objective basis, such as service level agreements. Meeting these requirements is not an easy job for today's IT managers. Luckily enough, there are plenty of commercial toolsets and software suites available: system management for monitoring the status and performance of the IT, ticketing systems for tracking troubleshooting progress, configuration databases to streamline and achieve compliance, etc. Unfortunately, licensing these tools is rather expensive, and many suites are inflexible and hard to tailor to one's own internal processes and requirements, which consequently can only be solved by spending even more money and resources.

The all-dominant question is: are there alternative solutions built purely on open source? Open source in general has without contradiction proven itself mature, even for the demands of enterprise customers, but can it really substitute these complex and mission-critical solutions to achieve better flexibility and deeper insight, and to save resources and money?

This contribution points out possible open source solutions for coping with these requirements. It starts with a big-picture vision of a complete and comprehensive service management framework based on open source building blocks (Nagios, OTRS, I-do-IT). Subsequently, each block is briefly introduced. Based on a wealth of experience in enterprise projects, a valuation from a practitioner's viewpoint is presented, together with some recommendations to avoid the usual pitfalls in complex open source projects.

About the speaker: Michael Kienle, born 1969, joined it-novum GmbH as managing director in 2003. Before that, he held various management positions at different companies in the IT and telecommunications industry. It-novum is a mid-sized consulting company and well-known open source technology leader with 50 employees and more than 6 million in turnover, focusing primarily on enterprise customers. Its core competencies include, amongst others, large-scale open source based system management projects for monitoring heterogeneous and complex IT infrastructure and applications, with regard to integration into processes like ticketing systems, ITIL, configuration databases, service level management, etc. Michael Kienle's first experience with Linux was back in the early 90s, when installing Linux was much more than just inserting an auto-start installation CD into the tray.
The Evolution of Java(TM) software on GNU/Linux
by Dalibor Topic
Friday, 2008/10/10 15:30-16:15

The inclusion of OpenJDK 6 into the core of Fedora, Debian, OpenSuse and Ubuntu has enabled millions of GNU/Linux users to easily obtain the latest version of the Java SE platform. This has changed the packaging landscape for Java software, since ISVs and distro builders can now rely on the Java platform being available out of the box. Planned enhancements to the Java programming language aim to further simplify packaging by making Java software more modular and more explicit about its dependencies.

In this talk we'll describe the mechanisms that make GNU/Linux package management scale to thousands of packages and millions of users; share lessons learned from packaging Sun's major open source Java projects like OpenJDK, NetBeans and Glassfish; and explain how the enhancements planned for the next release of the Java platform can make things better for everyone: developers, distro builders and users.

About the speaker: Dalibor Topic lives and works in Hamburg, Germany, as Sun's Java F/OSS Ambassador. In collaboration with the OpenJDK community and GNU/Linux distributions, he is currently occupied with accelerating the evolution of Java(TM) programming language based open source software stacks on GNU/Linux and other platforms.
Taking GPGME to New Horizons
by Werner Koch
Friday, 2008/10/10 15:30-16:15

GnuPG Made Easy (GPGME) was originally developed to make the integration of encryption features into mail software easier by providing a simple and consistent API. Over time, GPGME has turned into a versatile library to control almost all aspects of OpenPGP and X.509/CMS based cryptography.

The programming interface of GPGME is easy to learn and resembles commonly used programming patterns. GPGME is portable and allows for changes in the actual encryption backend without changing the applications.
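
As an illustration of that pattern, a minimal sketch of encrypting a buffer with GPGME might look as follows; error handling is abbreviated and the recipient fingerprint is a placeholder.

/*
 * Minimal GPGME sketch: armor-encrypt a buffer to one recipient.
 * Error handling is abbreviated and the fingerprint is a placeholder.
 * Typical build: cc gpgme-demo.c $(gpgme-config --cflags --libs)
 */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <gpgme.h>

int main(void)
{
    const char *msg = "hello, world\n";
    gpgme_ctx_t ctx;
    gpgme_data_t in, out;
    gpgme_key_t keys[2] = { NULL, NULL };

    gpgme_check_version(NULL);                 /* mandatory library init */
    gpgme_new(&ctx);
    gpgme_set_protocol(ctx, GPGME_PROTOCOL_OpenPGP);
    gpgme_set_armor(ctx, 1);

    /* Placeholder fingerprint -- replace with a real recipient key. */
    gpgme_get_key(ctx, "0123456789ABCDEF0123456789ABCDEF01234567",
                  &keys[0], 0);

    gpgme_data_new_from_mem(&in, msg, strlen(msg), 0);
    gpgme_data_new(&out);

    if (gpgme_op_encrypt(ctx, keys, GPGME_ENCRYPT_ALWAYS_TRUST,
                         in, out) == GPG_ERR_NO_ERROR) {
        char buf[512];
        ssize_t n;
        gpgme_data_seek(out, 0, SEEK_SET);     /* rewind the ciphertext */
        while ((n = gpgme_data_read(out, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
    }

    if (keys[0])
        gpgme_key_unref(keys[0]);
    gpgme_data_release(in);
    gpgme_data_release(out);
    gpgme_release(ctx);
    return 0;
}

The same context/data pattern applies to signing, verification and key listing, which is part of what makes extending the library towards protocols such as Secure Shell or TLS plausible.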

Given this proven API, it is worth asking why GPGME should be used only for mail software and data encryption tools. Why not help other applications use other aspects of cryptography? In particular, why not integrate support for online protocols like Secure Shell or GNUTLS? Thinking a bit about these questions, it quickly turns out that they can be addressed with minimal enhancements to the current API.

Further, the key management functions of GPGME could also be used by other programs, in particular with Secure Shell and TLS. There would be no more need to have keys floating around in several files, all in different formats, all with their own key management functions, and all slightly different.

The envisioned new GPGME will provide a portable and consistent crypto API for users, developers and those who want to evaluate the software. It will also make the entire crypto ecosystem much less complex by following the old crypto guideline to "put all your keys into one basket and guard them carefully".

This talk will present an overview of GPGME, sketch the planned and already implemented enhancements, discuss interesting implementation details and briefly show some usage scenarios.

About the speaker: Werner Koch, born 1961, married and living near Düsseldorf.

After school, alternative service and an apprenticeship as an electrician, he worked as a software developer while also studying applied computer science. He is the founder and general manager of g10 Code, a company specializing in the development of Free Software based security applications.

Werner has been a radio amateur since the late seventies and became interested in software development at about the same time. He has worked on systems ranging from CP/M systems to mainframes, languages from assembler to Smalltalk, and applications from drivers to financial analysis systems. He is a long-time GNU/Linux developer and the principal author of the GNU Privacy Guard.

Closing Note: Mobile Linux
by Dirk Hohndel
Friday, 2008/10/10 16:45-17:30

Moblin.org is an open source project to create a Linux OS targeted at and tuned for mobile devices. Dirk will discuss some of the design philosophies, current status and goals for this project.

About the speaker: Dirk Hohndel is Chief Linux and Open Source Technologist at Intel Corporation. He has been an active developer and contributor in the Linux space since its earliest days. Among other roles, he worked as Chief Technology Officer of SuSE and as Vice President of The XFree86 Project, Inc. Dirk joined Intel in 2001. He works in the Software and Solutions Group, focusing on the technology direction of Intel's Open Source Technology Center and guiding Intel's engagements in open source. He is an active contributor to many open source projects and organizations, various program committees and advisory boards. Dirk holds a Diploma in Mathematics and Computer Science from the University of Würzburg, Germany. He lives in Portland, OR.

Comments or Questions? Mail to contact@linux-kongress.org Last change: 2008-10-01