
Linux-Kongress
The International Linux System Technology Conference



Abstracts


Network Monitoring with Open Source Tools
by Timo Altenfeld, Wilhelm Dolle, Robin Schroeder and Christoph Wegener
Tuesday, 2009/10/27 10:00-18:00 and
Wednesday, 2009/10/28 10:00-18:00 (German)
The two-day tutorial "Network Monitoring with Open Source Tools" is aimed at experienced system administrators whose job is to maintain, monitor, and optimize complex network environments. Participants should already have experience installing software on Linux and bring basic knowledge of the TCP/IP stack.

During the workshop we will build and discuss a Linux-based monitoring server with exemplary services. We will not only cover the purely technical aspects of network monitoring, but also outline the organizational and legal framework that has to be taken into account. After the event, participants will be able to put what they have learned into practice on their own.

As our daily lives grow ever more dependent on a working IT landscape, and as the complexity of the required infrastructure rapidly increases, network management and network monitoring are becoming ever more important. A number of complex and often very expensive commercial monitoring tools exist. This workshop shows how to achieve comparable functionality with specialized free and open-source programs.

Topics in detail / outline of the tutorial:

  • Organizational issues
    • Approaches to network monitoring
    • Business planning / business continuity / TCO: why free and open-source software?
    • The role of network monitoring in risk management (Basel II / Sarbanes-Oxley Act (SOX))
  • Legal aspects
  • Information gathering
  • Simple Network Management Protocol (SNMP)
    • Qualitative monitoring
    • Multi Router Traffic Grapher (MRTG)
    • Cacti and RRDTool
  • Availability monitoring
    • Nagios
  • Proactive monitoring, log-file analysis
  • Troubleshooting networks with Wireshark
    • Security monitoring
    • Host and network scanning with nmap
    • Nessus and open-source alternatives
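To give a flavour of the SNMP-based information gathering covered in the topic list, the Net-SNMP command-line tools can query a device's interface counters. This is a hypothetical sketch: the host name and community string are placeholders, not part of the tutorial materials, and a reachable SNMP agent is assumed.

```shell
# List the interfaces an agent exposes, to find the index numbers:
snmpwalk -v2c -c public router.example.org IF-MIB::ifDescr

# Read the inbound byte counter of interface index 2:
snmpget -v2c -c public router.example.org IF-MIB::ifInOctets.2
```

Tools such as MRTG and Cacti poll exactly these counters periodically and graph the deltas with RRDTool.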
The material is presented lecture-style and reinforced with hands-on exercises on the participants' own machines.

Hardware requirements: Participants must bring a computer with a current Linux distribution; users of other operating systems (*BSD or Mac OS X) should contact the speakers via contact@linux-kongress.org before the event.

About the speakers: Timo Altenfeld, a certified IT specialist for system integration, is a system administrator at the Faculty of Physics and Astronomy of Ruhr-Universität Bochum. Open-source tools and Linux have fascinated him since the start of his training, and he has long used them professionally to monitor Linux, Windows, and Solaris systems. He has worked with Linux in a variety of environments since 2003 and is currently studying business informatics at FOM Essen.

Wilhelm Dolle heads the Security Management division as Business Field Manager at HiSolutions AG, a Berlin consultancy for information security and risk management. He is a CISA, CISM, and CISSP, a licensed IT-Grundschutz/ISO 27001 and BS 25999 auditor, and has gathered extensive experience in security management, risk and security analysis, and incident management. He has written numerous technical articles and holds teaching appointments at several universities and a vocational academy.

Robin Schröder, a certified IT specialist for system integration, has worked since 2006 in the "IT Systems, Software Integration" department of the administration of Ruhr-Universität Bochum. He administers numerous Linux, Solaris, and Windows systems there and monitors application operations with various open-source tools. He has been working with computers, Linux, and networks since 1995.

Christoph Wegener, CISA, CISM, and CBP, holds a PhD in physics and has been freelancing on IT security and open source with wecon.it-consulting since 1999. He has written many technical articles, reviews for several publishers, and serves on multiple program committees. Since early 2005 he has also worked at the European competence center for IT security (eurobits), where he is active in IT-security training. He is a founding member of the working group on identity protection on the Internet (a-i3) and serves on its board as well as on the board of the German Unix User Group (GUUG).

Linux im Netzwerk (Linux in the Network)
by Johannes Hubertz, Jens Link and Thomas Martens
Tuesday, 2009/10/27 10:00-18:00 and
Wednesday, 2009/10/28 10:00-18:00 (German)
Target audience: This tutorial is aimed at Linux administrators with solid Linux skills and basic networking knowledge.

Contents/outline:

This tutorial is devoted to some of the lesser-known networking features of Linux: bridging, VLANs, bonding, dynamic routing, and QoS.

A short excursion into the ISO/OSI world at the beginning illustrates the mental models and patterns that are essential for the rest of the tutorial. Even though TCP/IP works differently, a bit of theory beyond one's own horizon is useful.

The first part of the tutorial focuses on layer 2. Besides using Linux as a bridge, e.g. for sniffing or as a packet filter, the redundant connection of servers (bonding) is discussed. VLANs, and Linux routing between them, then lead gradually over to layer 3.
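A minimal sketch of these layer-2 features, using the classic userland tools of that era (all commands require root; interface and bridge names are placeholders invented for the example):

```shell
# Use Linux as a bridge, e.g. for sniffing or packet filtering:
brctl addbr br0
brctl addif br0 eth0

# Create a VLAN sub-interface (VLAN id 10) on a trunk port:
vconfig add eth2 10
ip link set eth2.10 up

# Redundant server uplink via bonding (active-backup over two NICs):
modprobe bonding mode=active-backup
ifenslave bond0 eth1 eth3
```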

The second part covers the basics of routing on Linux and the use of Quagga for dynamic routing protocols, with a focus on OSPF and BGP.

A third part introduces the basics of Quality of Service (QoS) and shows how to implement simple QoS rules on Linux.
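A simple QoS rule set of the kind meant here could be sketched with HTB as follows (interface, rates, and port are placeholders; requires root):

```shell
# Root HTB qdisc; unclassified traffic goes to class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1:  classid 1:1  htb rate 10mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 6mbit ceil 10mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 4mbit ceil 10mbit

# Steer interactive ssh traffic (port 22) into the privileged class:
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dport 22 0xffff flowid 1:10
```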

In addition to the theory there are numerous exercises to reinforce the material, during which many useful tools are introduced. There are also a few short excursions into the Cisco world, so that Linux and Cisco can be compared directly.

Technical requirements:

Participants are asked to bring laptops (running Linux), ideally with two Ethernet interfaces. However, no guarantee can be given for the contents of their hard disks. If possible, VMware should be installed so that more than one router can run on a single machine.

About the speakers: Jens Link (http://www.quux.de) has worked in IT for more than 12 years. He is a freelance consultant focusing on complex networks (Cisco), firewalls (Linux, Cisco, Check Point), and network monitoring with open-source tools. Johannes Hubertz started using Linux at his employer in 1998 for sensitive tasks such as routing, DNS, and server monitoring; Debian quickly became his favorite. In 2001 a cost-effective firewall and VPN solution had to be developed, for his own use and for customers.

Since August 2005 he has run his own GmbH, spreading Linux combined with security. Services around Internet security are its program, not least the in-house development 'simple security policy editor'.

Thomas Martens has worked in IT for 8 years, mainly building and maintaining networks and administering servers.

Building a highly available virtualization cluster based on iSCSI storage and Xen
by Thomas Groß
Tuesday, 2009/10/27 10:00-18:00 and
Wednesday, 2009/10/28 10:00-18:00 (German)
The structure of the cluster: 2 iSCSI servers, 2 cluster servers, 2 switches.

Day 1: Setting up the cluster. Step by step we set up a highly available cluster using standard Linux tools.

1. Preparation: summary of the technologies iSCSI, Xen, bonding, DRBD, heartbeat1, and LVM.

2. Build the cluster:
  • connect the 2 cluster servers and the 2 storage servers via 2 switches (redundant, bonding)
  • configure Xen
  • configure the iSCSI server (target) and client (initiator)
  • configure DRBD and heartbeat1 between the iSCSI storage machines
  • see what happens if one storage machine crashes
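For illustration, the DRBD replication between the two storage machines could be described by a resource definition along these lines (hostnames, devices, and addresses are invented for the example; DRBD 8.x drbd.conf syntax):

```
resource iscsi-store {
  protocol C;                      # synchronous replication
  on storage1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.1:7788;
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.2:7788;
    meta-disk internal;
  }
}
```

heartbeat1 then decides which of the two nodes is primary and exports the DRBD device as an iSCSI target.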

3. Create and manage virtual machines:
  • set up a system to install virtual machines from templates or installation media
  • use logical volumes as containers for virtual machines
  • create and configure virtual machines
  • create iSCSI targets for virtual machines and use them on the cluster servers
  • create Xen control files; start, stop, and migrate virtual machines

Day 2: Managing the cluster. We set up the LAX administration system to manage servers, virtual machines, and services. Though it is possible to manage the cluster with bare Linux commands, this is not comfortable for everyday use. LAX offers a couple of concepts, scripts, and GUI tools to come to the administrator's rescue. Furthermore, we set up monitoring, notification, and high availability for both machines and services.

1. Preparation - summary of LAX

2. Manage the cluster:
  • scripts to manage virtual machines (create, start, stop, migrate, change)
  • concept and scripts to save/restore virtual machines via LVM snapshots
  • GUI tools
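The LVM snapshot approach to saving a running virtual machine can be sketched as follows (volume group and LV names are placeholders; requires root):

```shell
# Freeze a point-in-time view of the VM's disk volume:
lvcreate --snapshot --size 1G --name vm1-snap /dev/vg0/vm1

# Copy the frozen image away while the VM keeps running:
dd if=/dev/vg0/vm1-snap of=/backup/vm1.img bs=1M

# Drop the snapshot once the backup is complete:
lvremove -f /dev/vg0/vm1-snap
```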

3. Organize high availability:
  • set up monitoring and notification for both machines and services
  • automatically restart services and virtual machines
  • plan the distribution of virtual machines across the cluster
  • automatically reorganize the cluster in case of a crash

In the time left (if there is any) we can discuss how to set up an active-active cluster (each machine has its own DRBD device), organize the administrator's workbench with a KDE4 desktop, and discuss the use of InfiniBand between the cluster and storage servers. Participants should have Linux administration skills, including on the command line.

About the speaker: Thomas Groß runs teegee, a company specialized in Linux and open-source software. He studied information technology in Chemnitz, Germany. His main focus at work is Linux-based IT infrastructure and system and network administration. He loves his children, mountain trekking, jogging, cycling, and snowboarding. Sometimes he prefers tactile information storage such as the books of J. K. Rowling, J. R. R. Tolkien, and Walter Moers.
A Linux Kernel Safari
by Wolfgang Mauerer
Tuesday, 2009/10/27 10:00-18:00
The reasons to gain an understanding of the Linux kernel are nowadays manifold and not only focused on kernel development: acquaintance with the fundamental system layer is essential for understanding a wide variety of system and application issues, from performance tracing to architectural design topics. However, the complexity and size of the sources make this a difficult task. This tutorial provides a tool-centric safari through the Linux kernel: the most important parts and components are introduced and examined in hands-on experiments, using numerous tools that are essential for quick and efficient development and analysis.
   In particular, the topics are:
  • Structure and organisation of the kernel sources
  • Important concepts and data structures: Standard algorithms (list management, locking, hashing, ...), the task network, memory management, scheduling
  • Using Qemu for development and debugging
  • Information sources within the kernel
  • Tracking data structure networks
  • Examining kernel behaviour
  • Analysing crashes and faults
Tools introduced and used during the tutorial:
  • LTTng and ftrace
  • GDB for kernel debugging [optionally also KGDB]
  • [optionally latencytop and powertop]
  • Qemu and UML as test foundation
  • LXR, cscope for source code tracking
  • git basics for users, qgit
About the speaker: Wolfgang Mauerer has been writing documentation on various Linux and Unix related topics for the last 10+ years, and has closely tracked Linux kernel development during this time. He is the author of books on text processing with LaTeX and on the architecture of the Linux kernel, and has written numerous papers and articles. After his PhD in quantum information theory at a Max Planck Institute, where he was mostly interested in using Linux for scientific tasks (numerical simulation of quantum systems and quantum programming languages), he joined Siemens CT Research and Technologies, where he currently deals with virtualisation and real-time topics.
High-Availability Clustering with OpenAIS and Pacemaker
by Lars Marowsky-Brée
Tuesday, 2009/10/27 10:00-18:00
This tutorial describes the new community-based high-availability clustering stack, based on OpenAIS as the cluster infrastructure layer and Pacemaker as the resource manager, as jointly developed by Oracle, Red Hat, and Novell. (For those with experience on the legacy Linux-HA stack, differences and improvements to that version will be covered.)
   The first half introduces the basic concepts, software components, hardware requirements, fencing considerations, and discusses deployment options (and limitations), as well as covering the configuration.
   In the second half of the day, an example cluster will be set up step-by-step from scratch (on pre-installed Linux systems), working from the basic package install to a configuration with replicated storage (using DRBD), clustered logical volume management, the in-kernel DLM, OCFS2 as a cluster-aware file system, and a virtual guest as a monitored resource on top, including live migration. A discussion on monitoring and trouble-shooting of cluster environments, common problems, and how to report bugs will lead to a glimpse of future plans. The day will be concluded with questions and answers.

The audience should be familiar with general Linux administration tasks.
About the speaker: Lars Marowsky-Brée has been with the Linux high-availability community for a decade. He has worked with and contributed to the Linux FailSafe, Linux Virtual Server, Linux-HA, heartbeat, drbd, OCFS2, DLM, OpenAIS, and Pacemaker projects. He was the original architect behind the heartbeat cluster resource manager, which eventually led to Pacemaker. His passion is unifying the Linux clustering solutions, a project which has greatly advanced in the last year. He is a well-known speaker at Linux-oriented conferences, and has been speaking at Linux-Kongress with an availability rating of one nine over ten years. As a principal engineer at Novell and SUSE, his current role is HA/Storage architect for the SLE 11 HA Extension. In the past, he spent 3 years as a lead for a Linux kernel team, as a senior consultant, and as an engineer.
IKEv2-based Virtual Private Networks using strongSwan
by Andreas Steffen
Tuesday, 2009/10/27 10:00-18:00
This tutorial is targeted at sysadmins who want to deploy and operate an IPsec-based VPN on a medium to large scale (100 up to several 1000 users), either in a pure Linux or a mixed Linux / Windows / Mac OS X / FreeBSD environment. The following topics will be treated:
  • Introduction to the native Linux 2.6 IPsec stack and its interactions with Netfilter
  • How to install and monitor IPv4 and IPv6 IPsec policies and security associations, as well as IPsec-policy-based iptables rules
  • Advantages of the improved IKEv2 Internet Key Exchange protocol (RFC 4306) compared to the old IKEv1 standard; presentation of the main IKEv2 features
  • Introduction to the modular, object-oriented strongSwan architecture: how to add and configure various crypto, management-interface, and database plugins; how to define and write plugins of your own; running automatic integrity and crypto self-tests
  • User authentication based on pre-shared keys, X.509 certificates, and various IKEv2 EAP methods; using a FreeRADIUS or Windows Active Directory server to centrally manage user credentials via the strongSwan EAP-RADIUS plugin
  • Using an SQLite or MySQL database to store and manage VPN connection information and virtual IP address leases
  • Performance tuning, high availability, and load sharing on strongSwan gateways running up to 20,000 concurrent VPN connections
  • Seamless integration of strongSwan into the Linux desktop using the strongSwan NetworkManager applet: start and stop a VPN connection to your home network with a simple mouse click
  • Interoperability with the Windows 7 Agile VPN client using IKEv2 and the Mobility and Multihoming protocol (MOBIKE)
  • Porting strongSwan to Mac OS X and FreeBSD; general interoperability questions
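To give an impression of the configuration involved, a minimal IKEv2 road-warrior connection in strongSwan's ipsec.conf might look like this (addresses, subnets, and certificate names are invented for the example):

```
conn ikev2-roadwarrior
    keyexchange=ikev2
    left=192.0.2.1                # gateway's own address
    leftcert=gatewayCert.pem
    leftsubnet=10.1.0.0/16        # network made available to clients
    right=%any
    rightsourceip=10.3.0.0/24     # virtual IP pool handed out to clients
    auto=add
```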
About the speaker: Andreas Steffen is a professor for Security and Communications and head of the Institute for Internet Technologies and Applications (ITA) at the Hochschule für Technik Rapperswil (HSR) in Switzerland. As the contributor of the X.509 patch to the famous FreeS/WAN project and later founder of the Open Source strongSwan VPN project, he has been involved in the design, development, and testing of IPsec-based applications for the last ten years.
Building and Maintaining RPM Packages
by Jos Vos
Tuesday, 2009/10/27 10:00-18:00
Introduction

In this tutorial attendees will learn how to create, modify and use RPM packages. The RPM Package Management system (RPM) is used for package management on most Linux distributions. It can also be used for package management on other UNIX systems and for packaging non-free (binary) software.

The theory from this tutorial will apply to all RPM-based Linux distributions, but Fedora and Red Hat Enterprise Linux will be used as the reference environment.

Contents

General software packaging theory will be provided as a start, followed by the history and basics of the RPM packaging system.

The headers and sections of an RPM spec file will be discussed. Hints and tricks will be given for each section to enhance the quality of the target package, including the use of macros, adapting software for installing it in an alternative root directory, ensuring correct file ownerships and attributes, the proper use of pre/post (un)installation and "trigger" scripts, and how to deal with package-specific users and init scripts.
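As a point of reference, a skeleton spec file showing the sections discussed might look like this (the package name and contents are invented for the example):

```
Name:           mytool
Version:        1.0
Release:        1%{?dist}
Summary:        Example command-line tool
License:        GPLv2+
Source0:        %{name}-%{version}.tar.gz
BuildRequires:  gcc

%description
A minimal example package.

%prep
%setup -q

%build
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/mytool

%changelog
* Tue Oct 27 2009 Jane Packager <jane@example.org> - 1.0-1
- Initial package
```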

Package dependencies and conflicts will be covered, as well as some ways to tweak the automatically generated dependencies, if needed.

Installing files in the proper place requires knowledge of the Filesystem Hierarchy Standard (FHS), hence the basics of the FHS will be discussed.

Besides RPM itself, build environments around RPM, such as mock, will also be shown.

The tutorial will also show how to properly package binary software, often done for internal system management purposes, and shed light on some of the issues involved, including some legal aspects related to packaging non-free software.

Package repositories and dependency resolution: complementary to RPM, software exists for resolving dependencies, such as yum, up2date, and apt-rpm. This software and the corresponding package repositories will be discussed.

Using RPM on non-Linux systems: although primarily used on Linux systems, RPM can also be used to package software for other (free or commercial) UNIX-like systems. Some aspects of using RPM on non-RPM systems will be discussed.

Besides the theory, several issues will be illustrated with live demonstrations.

Target Audience

The tutorial is targeted at system administrators and software developers who want to create or modify RPM packages, or who want detailed insight into how RPM packages are built and can best be used.

The attendees need no prior knowledge of RPM, although some basic knowledge of using software packages (as a system administrator using RPM/yum, apt/dpkg, etc.) would be helpful.

About the speaker: Jos Vos is CEO and co-founder of X/OS Experts in Open Systems BV. He has 25 years of experience in research, development and consulting -- mostly relating to UNIX systems software, Internet, and security.

His operating system of choice since 1994 is Linux. In the Linux community he is best known for writing ipfwadm and part of the firewall code in the 2.0 kernel. Using RPM since 1996, he is known to almost never install software without "RPM-ifying" it. He also participated in the design of RPM's trigger scripts, later implemented by Red Hat.

His company X/OS delivers open, standards-based solutions and services. Products include Linux consulting and support services, custom-built firewall/VPN appliances with embedded Linux, high-availability cluster solutions and Linux-based Point-of-Sale products.

Deploying VoIP - Identifying and avoiding pitfalls
by Heison Chak
Wednesday, 2009/10/28 10:00-18:00
This tutorial will cover best practices, along with proper testing and deployment strategy, needed to keep a VoIP deployment from becoming a target of user complaints and a management headache. Voice-quality issues, challenges in maintaining system availability, and typical user concerns are some of the topics that will be covered. Besides a SIP tutorial and a recipe of DOs and DON'Ts of VoIP, attendees will come away with troubleshooting skills and the ability to identify the source of problems. The tutorial is SIP-centric and applies to anything from Asterisk to Broadsoft. It is aimed at managers, system administrators, and developers involved in the evaluation, design, and implementation of VoIP infrastructures and applications, and will benefit those building their own platform as well as system integrators of commercial products. Participants need a good understanding of VoIP and familiarity with networking principles.
About the speaker: Heison Chak is currently working for Leonid Consulting LLC as a Senior Consultant, providing Broadsoft consulting as well as managing products to enhance business processes, provisioning, and fraud detection for major telcos. Heison has been an active member of the Asterisk community and a frequent speaker on VoIP topics. His VoIP column in ;login: is well received.
Keynote: Linux and Open Source in 2010 and Beyond
by Theodore Ts'o (CTO Linux Foundation)
Thursday, 2009/10/29 09:45-10:45
What will the future bring for Linux and Open Source? Some companies have decreased their investment in Linux as a result of the economic downturn that started in the fall of 2008, and because Linux has already captured many of the "easy wins" in the enterprise server market. Some Linux and Open Source developers have been laid off as a result and have had to look for work at other companies. However, other companies have found new opportunities in new markets, especially in mobile and cloud computing. This talk will discuss how the economy, as well as changes in technology such as the move towards multiple cores, solid-state disks, and low-powered CPUs, will impact the Linux, Open Source, and Free Software movements.

About the speaker: Theodore Ts'o was the first North American Linux kernel developer, and organizes the annual Linux Kernel Developers' Summit, which brings together the top 75 Linux kernel developers from all over the world for a face-to-face meeting. He was a founding board member of the Free Standards Group, and was chair of that organization until it merged with OSDL to form the Linux Foundation. He is one of the core maintainers of the ext2, ext3, and ext4 file systems, and is the primary author and maintainer of e2fsprogs, the user-space utilities for the ext2/3/4 file systems. At IBM, Theodore served as the architect for the Real-Time Linux development team. Theodore is currently on assignment with the Linux Foundation, where he serves as its Chief Technology Officer.
QEMU - The building block of Open Source Virtualization
by Glauber Costa
Thursday, 2009/10/29 11:15-12:00
The QEMU project is an open source system emulator and code translator available for a variety of platforms. For a long while, this was all it used to be: a great, but niche-confined piece of software. With the dawn of virtualization systems, QEMU was raised to a more central role, gaining the attention of a much broader community. Today, QEMU is used in most open source virtualization projects, to a greater or lesser degree, including Xen and KVM. This basically means that wherever you find Linux virtualizing something, you'll see QEMU. Raising a piece of software that was never meant for this task, and never really had a design with things like security in mind, to that level was a challenge of its own. In this talk, I will discuss where we are today and where we are heading, including a deep explanation of QEMU's structure (or lack thereof). I will cover the history of what happened to the project and its messy organization from the very moment a herd of kernel hackers started to flock in there. If the talk gets boring, I will sing and dance Michael Jackson songs.
About the speaker: Glauber is a software engineer who graduated from the University of Campinas, Brazil. In the early days of his studies (when he had plenty of time), he met Free Software for the first time. In 2004, while still an undergrad, Glauber joined IBM's Linux Technology Center as part of the first group gathered by the company in the country for that purpose. From that moment on, he never had any spare time left. Since 2006, he has been working for Red Hat in the virtualization group, initially on the never-ending task of getting Xen ready for the RHEL5 release. During this time, he wrote code for the paravirt_ops framework for x86_64, lguest, and KVM. Currently, Glauber maintains the QEMU packages for Fedora and upstream QEMU's stable branch, and splits the rest of his time between pushing KVM forward and other general Linux and QEMU issues.
Compiler Optimization Survey
by Felix von Leitner
Thursday, 2009/10/29 11:15-12:00
This talk shows how well current-generation compilers optimize code by looking at the code generated for several small pieces of C code. Programmers tend to write bad code because they think the compiler will then generate better code, which the talk will show to be untrue. The talk will make the argument that having a good optimizing compiler is thus a security feature, because it allows programmers to write code that is more obviously correct.
About the speaker: Felix is the author of several open source projects, the best known of which are dietlibc (a small libc for embedded platforms or to save memory), tinyldap (a simple LDAP server), gatling (a very scalable httpd), and minit (an advanced /sbin/init replacement). He has given talks at Linux-Kongress before: in 2001 (about dietlibc), 2003 (scalable networking), 2004 (about minit), and 2006 (a filesystem benchmark survey).
View-OS: Change your View on Virtualization.
by Renzo Davoli and Michael Goldweber
Thursday, 2009/10/29 12:00-12:45
The View-OS concept is defined as allowing each process to have its own view of its "execution environment": its own view of the file system, networking support, user-id, system name, etc. Umview and kmview are two proof-of-concept implementations of the View-OS concept. Technically, umview and kmview are user-level, system-call-based, partial, modular virtual machine monitors. They virtualize a subset of the kernel requests (system calls), depending upon which umview (or kmview) modules have been loaded and upon which kind of virtualization the user configured. The effectiveness of this approach is best illustrated by some examples of what umview (or kmview) can accomplish.

  • Virtual installation/upgrade of software: re-mount your root file system using the copy-on-write facility of the View-OS module (viewfs), and run the command to install/upgrade software.
    $ um_add_module viewfs
    $ mount -t viewfs -o mincow,except=/tmp,vstat /tmp/newroot /
    $ viewsu
    # aptitude install mynewsoftware
  • Run several browser/ssh clients or other TCP/IP-based networking clients, each using its own VPN.
    $ um_add_service umnet
    $ mount -t umnetlwipv6 none /dev/net/default
    $ ip link set vd0 up
    $ ip addr add 10.1.2.3/24 dev vd0
    $ ssh 10.1.2.1
  • Mount a private filesystem, even filesystem types not natively supported by your kernel.
    $ um_add_service umfuse
    $ mount -t umfuseext2 filesystemimage /mnt
    $ mount -t umfusestrangefilesystem strangeimage /mnt2
  • Partition a filesystem image and mount its partitions as if they were real.
    $ um_add_service umdev
    $ um_add_service umfuse
    $ viewsu
    # dd of=/tmp/diskimage bs=1024 count=0 seek=1024000
    # mount -t umdevmbr /tmp/diskimage /dev/hda
    # fdisk /dev/hda
    ....
    # mkfs.ext2 /dev/hda1
    # mount -t umfuseext2 /dev/hda1 /mnt
  • Add a virtual device (possibly one not supported by your kernel) and use it with your favorite software.
  • Create a ramdisk and use it.
  • Change the pace of "proper time" for a process, making it slower or faster than the time measured by other processes.
  • Change the name/type of your machine.
  • Change your uid/gid.
  • Create setuid files and device special files on virtual filesystems.
  • Do what fakeroot/fakeroot-ng does.
  • Set up a chroot cage.

Since umview and kmview run at user level and do not need root access, there is no risk of harm to your system. Umview differs from kmview in that umview is implemented as a user-level virtualization, hence it is as "dangerous" for your host machine as running User-Mode Linux. Kmview requires utrace, a special kernel module, which provides faster and more complete support for the user's virtualizations.
About the speakers: Renzo Davoli and Michael Goldweber are joined by a long-lasting friendship and collaboration. Renzo is with the Department of Computer Science, University of Bologna, Italy; Michael is with the Department of Mathematics and Computer Science, Xavier University, Cincinnati, OH, USA. They share common interests in advanced teaching methods for operating systems and networking, and in everything virtual: virtual machines, virtual networking, etc. Naturally, they are also interested in the intersection of the two topics: teaching operating systems and networking by using emulators and virtual environments. Renzo and Michael run the Virtualsquare lab, a distributed, cooperative initiative that has contributed a number of projects to libre software: VDE (Virtual Distributed Ethernet), uMPS (micro MPS emulator), PureLibc, and View-OS.
A generic architecture and extension of eCryptfs
by André Osterhues
Thursday, 2009/10/29 12:00-12:45
eCryptfs is a modular, stackable file-system encryption layer for the current Linux kernel. Unlike container encryption mechanisms such as dm-crypt or TrueCrypt, which provide a form of offline protection, eCryptfs performs encryption at the file level. This brings several advantages, especially for the maintenance of file servers to which multiple users have access. In addition, eCryptfs may be used on top of a container encryption, thereby combining the advantages of both technologies. However, eCryptfs relies on certain assumptions regarding the trustworthiness and awareness of its users and administrators. Most current real-world scenarios trust at least the administrator: even with an encrypted file system, the administrator can access all files once the user has mounted his encrypted file system. Unfortunately, these assumptions are not appropriate with regard to modern privacy expectations and legal restrictions. In this contribution we therefore discuss a generic architecture for enhancing the security properties of eCryptfs by integrating some additional features. These additions include a scheme for sharing an encryption key between multiple users (a "secret sharing scheme"), which can also be used for emergency file access. Furthermore, strong cryptography is enabled by the use of smartcards, thereby clearly prohibiting any unauthorized access to the user's private key. Last but not least, a new Linux Security Module is integrated into the kernel, providing a stronger separation of the root user account. All these new features are integrated behind a user-friendly interface. Our architecture aims to provide general flexibility towards further enhancements of eCryptfs and the current implementation itself. A proof-of-concept implementation has already been achieved and is currently being tested and further improved.
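The key-sharing idea can be illustrated in miniature. The sketch below is not the scheme proposed in this work (which uses a threshold secret sharing scheme); it shows only the simplest 2-of-2 instance in plain POSIX shell, splitting a key by byte-wise XOR so that neither share alone reveals anything about it:

```shell
#!/bin/sh
# Toy 2-of-2 secret sharing: split a hex key into two shares by
# byte-wise XOR. Neither share alone reveals anything about the key;
# XOR-ing both shares recovers it. (The paper proposes a threshold
# scheme; this is only the simplest possible instance of the idea.)
set -eu

xor_hex() {  # xor_hex HEX1 HEX2 -> byte-wise XOR, as hex
    a=$1; b=$2; out=""
    while [ -n "$a" ]; do
        ah=${a%"${a#??}"}; bh=${b%"${b#??}"}   # leading byte of each
        out=$out$(printf '%02x' $(( 0x$ah ^ 0x$bh )))
        a=${a#??}; b=${b#??}
    done
    echo "$out"
}

key=00112233aabbccdd                                  # demo key
share1=$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')  # random share
share2=$(xor_hex "$key" "$share1")                    # second share
recovered=$(xor_hex "$share1" "$share2")
echo "recovered key: $recovered"
```

Because share1 is uniformly random, share2 is too, so an attacker holding either one learns nothing; emergency access simply requires both share holders to cooperate.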
About the speaker: Dipl.-Inf. André Osterhues (escrypt GmbH). 1994-2005: Computer Science, Universität Dortmund; 2005-2008: graduate student, Lehrstuhl für Systemsicherheit, Ruhr-Universität Bochum (Prof. Sadeghi); 2008-now: Security Engineer at escrypt GmbH.
Linux multi-core scalability
by Andi Kleen
Thursday, 2009/10/29 14:15-15:00
The future of computing is multi-core. Massively multi-core. But how does the Linux kernel cope with it? This paper takes a look at Linux kernel scalability on many-core systems under various workloads and discusses some known bottlenecks. The primary focus will be on kernel scalability. It will start with a short introduction to scalability tuning on modern systems and then look at scalability data for some workloads utilizing many CPUs on a modern Linux kernel. Some mitigation strategies for dealing with non-scaling software will also be discussed.
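To get an intuition for the kind of measurement behind such scalability data, here is a toy probe (my sketch, not the speaker's benchmark; plain POSIX shell, GNU date assumed for nanosecond timestamps): it times the same CPU-bound job with a growing number of parallel workers. On well-scaling hardware and kernel, wall time stays roughly flat until the core count is exhausted.

```shell
#!/bin/sh
# Toy scalability probe: run N copies of a purely user-space CPU
# burner in parallel and report wall time per N. A rising curve
# before the core count is reached hints at a bottleneck.
set -eu

work() {
    i=0
    while [ "$i" -lt 100000 ]; do i=$((i + 1)); done
}

results=""
for n in 1 2 4; do
    t0=$(date +%s%N)
    j=0
    while [ "$j" -lt "$n" ]; do
        work &
        j=$((j + 1))
    done
    wait    # wait for all background workers
    t1=$(date +%s%N)
    results="$results workers=$n:$(( (t1 - t0) / 1000000 ))ms"
done
echo "$results"
```

Real kernel scalability work replaces the user-space burner with syscall- or lock-heavy workloads, since those are what expose shared kernel data structures.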
About the speaker: Andi Kleen has worked on the Linux kernel longer than he can remember now. Originally he worked on networking, later on various other areas. He spent several years maintaining the x86-64 port and later the i386 architecture too. He has also worked on NUMA, RAS, scalability and some other areas. He is working for Intel's Open Source Technology Center and lives in Bremen, Germany.
Samba status report
by Volker Lendecke
Thursday, 2009/10/29 14:15-15:00
Samba is a project under continuous, steady development. This talk will give an overview of the current status in the different areas of development. Hot topics at the moment are: * SMB2. The new version of the SMB protocol introduced with Windows Vista is right now being implemented inside Samba3. The talk will give a status update on which parts of SMB2 work and what is missing. * Cluster support, which is pretty stable by now. The talk will give an update on current deployments, in particular which file systems clustered Samba has been tested on by the time of the conference. * AD support with the "Franky" approach to merging Samba3 and Samba4, which is under very active development right now. The talk will give an overview of completed tasks and what still needs to be done. The conference takes place directly after the SNIA Storage Developer Conference in Santa Clara, California, where the Samba Team and many other CIFS and storage vendors meet. I expect some slides for this talk will contain material from that conference that I don't know about yet.
About the speaker: Volker Lendecke is one of the main developers of Samba and co-founder of SerNet GmbH in Göttingen, Germany.
Real-Time performance comparisons and improvements between 2.6 Linux Kernels
by Gazment Gerdeci
Thursday, 2009/10/29 15:00-15:45
Tuning the Linux kernel for greater performance depends on several factors; one of them is the environment in which the kernel will be used. A real-time environment requires specific modifications applied directly to the kernel, and the progress of these modifications runs in parallel with the kernel releases. The patches developed for real-time environments adjust the kernel to work for real-time tasks. This work discusses the nature of real-time systems and the system parameters that affect performance, and highlights the core improvements in performance and responsiveness provided by the 2.6 kernel releases. Preemption latency, which directly affects the execution of real-time tasks, will be discussed; it is estimated for different kernel releases under defined audio and video streaming workloads. The results, based on the "Realfeel" benchmark, are presented in graphs for each release with the respective conclusions, showing the differences and improvements made across these releases with different compilers and threading policies.
About the speaker: Gazment Gerdeci holds a Bachelor's degree in Computer Engineering from the Polytechnic University of Tirana and is currently in the second year of his Master's degree.
dmraid update
by Heinz Mauelshagen
Thursday, 2009/10/29 15:00-15:45
The dmraid tool for Linux supports various ATARAID and DDF1 software RAID solutions utilizing the device-mapper runtime and its mapping targets (RAID0, RAID1, ...). The talk covers ATARAID and device-mapper basics and recent enhancements to the dmraid software that allow for RAID set creation, deletion, rebuild and device event monitoring.
About the speaker: After his diploma in Electrical Engineering in 1986, the author worked on the development of distributed planning applications for the phone/ISDN network of Deutsche Telekom and in UNIX systems management in a large development center, where he started to develop Linux LVM in his spare time. In 2000 he joined Sistina Software, Inc., which allowed him to build a team and work full-time on Linux LVM development. Sistina was acquired by Red Hat Inc. in January 2004. The author continues to work on LVM, Device Mapper and related topics, such as dmraid.
The Good, the Bad, and the Ugly? Structure and Trends of Open Unix Kernels
by Wolfgang Mauerer
Thursday, 2009/10/29 16:15-17:00
From a user's point of view, the kernel underneath a Unix-like system is not of direct interest almost all of the time. Nevertheless, many different kernels that do not share a significant amount of code evolved during the last two decades. Historical, legal and philosophical reasons obviously contributed a great deal to this situation, but there are also distinct technical aspects to the problem. In this talk, we analyse differences and similarities of various open source kernels (Linux, OpenSolaris and the BSD family). We first concentrate on quantitative source metrics to analyse the structure and complexity of the code, which allows for a comparison of static features of each approach that is unbiased to the best possible extent. Additionally, we study the dynamics of development to evaluate how fast new features enter the kernels, and at which rate refactoring of the code takes place. This provides some interesting comparisons between the workings of the different communities. The second part of the talk deals with architectural issues: the design of the systems is surveyed, in particular with respect to core kernel features like process and memory management, architecture/platform support and application domains, but also with respect to developer features like kernel tracing. Areas with large commonalities and strong differences are identified, and we discuss the technical, legal and social reasons from which the different approaches originate. Finally, we close with an overview of recent developments in each of the kernels that could be of reciprocal interest.
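As a rough sketch of the quantitative side (not the speaker's actual toolchain), static size and development-rate metrics of this kind can be pulled from any git-managed kernel tree with a few shell commands:

```shell
# Collect two quick metrics for a git checkout given as $1:
#   - total lines across tracked files (a crude size/complexity proxy)
#   - commits per month (a crude development-rate proxy)
repo_metrics() {
    repo=$1
    total=$(git -C "$repo" ls-files -z |
            xargs -0 -r cat 2>/dev/null | wc -l)
    echo "tracked lines: $total"
    echo "commits per month:"
    git -C "$repo" log --date=format:'%Y-%m' --pretty='%ad' |
        sort | uniq -c | sort -k2
}
```

For example, `repo_metrics ~/src/linux` would print the line count of the tracked sources followed by a per-month commit histogram; serious studies would add per-subsystem breakdowns and complexity measures on top.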
About the speaker: Wolfgang Mauerer has been writing documentation on various Linux and Unix related topics for the last 10+ years, and has closely tracked Linux kernel development during this time. He is the author of books on text processing with LaTeX and about the architecture of the Linux kernel, and has written numerous papers and articles. After his PhD in quantum information theory at a Max Planck Institute, where he was mostly interested in using Linux for scientific tasks (numerical simulation of quantum systems and quantum programming languages), he joined Siemens CT Research and Technologies, where he currently deals with virtualisation and real-time topics.
Userspace Application Tracing
by Jan Blunck
Thursday, 2009/10/29 16:15-17:00
Today, multiple tracing infrastructures are available in the Linux kernel. Most of them are focused on tracing the execution of kernel code; a solution capable of tracing both userspace and kernel code has been missing. A new userspace tracing solution fills this gap and integrates seamlessly into the existing kernel traces generated by LTTng. The presentation gives a short overview of the technology behind the userspace tracing solution. Afterwards I'll give some basic examples of how to use the tracers with your own applications.
About the speaker: Jan is an engineer working for Novell, taking care of the Linux kernel. He studied electrical engineering at Technische Universität Hamburg-Harburg and specialized in computer engineering. His contributions to Linux development range from a USENET newsreader through device drivers to Linux VFS development. He lives in Nuremberg with his wife and son.
Fighting regressions with git bisect
by Christian Couder
Thursday, 2009/10/29 17:00-17:45

"git bisect" enables software users and developers to easily find the commit that introduced a regression. This is done by performing a binary search between a known good and a known bad commit. In the manual mode, at each step, a commit is checked out, and the user is asked to test the current state of the code. This can be automated with "git bisect run", so that a simple script or a command instead of the user can test the current state. For example it's very easy to bisect broken builds using "git bisect run make". Some people out there are very happy with automated bisection, because it saves them a lot of time, it makes it easy and worthwhile for them to improve their test suite, and overall it efficiently improves software quality. Sometimes there can be some untestable commit that may prevent from testing. For that case the "git bisect skip" command can be used. In case the best bisection point is in the middle of many untestable commits, it will try to find a bisection point away from these untestable commits. This can be used by a "git bisect run" script as well as when manually bisecting. But in the latter case "git bisect visualize" and then "git checkout" can be used instead when the user needs more control. Recent work on "git replace" may provide a better solution to this problem in the long run. Some other nice features of "git bisect" are how it works when the "good" and the "bad" commits are not on the same branch. And there are the "git bisect log" and "git bisect replay" subcommands to respectively show and replay a bisection log. In the end, bisecting can be an important part of a software quality process.

About the speaker: Christian Couder has been a Git developer since June 2006 and has been working especially on "git bisect" since March 2007. He developed the "git bisect run" and "git bisect skip" subcommands. This year he worked on "git replace" and on porting some important parts of "git bisect" from shell to C, and he is currently working on porting "git rebase --interactive" to C. Before that, from 1999 to 2002, he was a KDevelop developer.
System call tracing overhead
by Jörg Zinke
Thursday, 2009/10/29 17:00-17:45
System call tracing is a commonly used technique for debuggers and for application control programs that enforce security policies. Usually a tracing process attaches itself to other processes via a kernel-based interface in order to intercept system calls of the traced processes and enforce security policies on them. Intercepting system calls usually involves additional overhead, which results in longer run times for system calls. This additional overhead can be ignored for the purpose of debugging but should be considered for security-enforcing applications and other kinds of applications. This paper presents comparisons and performance measurements for two different current system call tracing implementations. The goal of the performance measurements is to determine the additional overhead of intercepting system calls. There are three approaches to system call interception: * Kernel-based system call interception, implemented through a modified system kernel and transparent to the userspace application. * Using a modified system library to replace system calls of the applications. This can be achieved mostly transparently and dynamically for operating systems and applications using shared libraries and preload mechanisms. * Using a tracing process and debugging interfaces like ptrace and systrace for system call interception of applications (likewise transparent). The third approach seems to be the slowest one, since it depends on context switches between the tracing process and the application. But since it is easy to use, provides a lot of flexibility (compared to the other two approaches) and is commonly used by debuggers and security policy enforcement applications, it is the only approach considered in the determination of the overhead in this paper.
Since the authors of this paper are primarily interested in the overhead of system call tracing on Linux and OpenBSD, the paper focuses on the available kernel implementations on these platforms, namely ptrace and systrace. There are also tracers available for kernel process debugging (for example ktrace) which are out of the scope of this paper. Furthermore, this paper focuses on microbenchmarks to determine the overhead of the system call interception itself, not the cost of the analysis performed by the tracing process. This primary interest results from current research related to self-adapting server load balancing on these platforms.
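A crude version of such a microbenchmark can be run with stock tools. The sketch below (my illustration, not the paper's methodology; it assumes GNU date and, optionally, strace, and measures wall time only) times a syscall-heavy command bare and then under ptrace-based tracing:

```shell
#!/bin/sh
# Rough probe of ptrace-based tracing overhead: time a syscall-heavy
# command (100k one-byte read()/write() pairs via dd) bare, then under
# strace. The traced run is typically several times slower, since every
# syscall entry and exit context-switches to the tracer. strace may be
# missing or blocked (e.g. in containers), so the script degrades
# gracefully in that case.
set -u
storm='dd if=/dev/zero of=/dev/null bs=1 count=100000'

t0=$(date +%s%N); sh -c "$storm" 2>/dev/null; t1=$(date +%s%N)
bare_ms=$(( (t1 - t0) / 1000000 ))
echo "bare:   ${bare_ms} ms"

if strace -o /dev/null true 2>/dev/null; then
    t0=$(date +%s%N)
    strace -f -o /dev/null sh -c "$storm" 2>/dev/null
    t1=$(date +%s%N)
    echo "traced: $(( (t1 - t0) / 1000000 )) ms"
else
    echo "traced: (strace unavailable, skipped)"
fi
```

The paper's measurements are of course far more careful; this only makes the order-of-magnitude cost of the context-switch-per-syscall design tangible.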
About the speaker: Jörg Zinke received a graduate degree in computer science with a focus on intelligent systems from Brandenburg University of Applied Sciences in Germany in 2004, followed by an MSc degree in computer science from the University of Potsdam in 2007. Currently he is an external PhD student at the chair of Operating Systems and Distributed Systems at the University of Potsdam. His main research interests are server load balancing and providing scalable and fault-tolerant services through efficient and self-adapting load balancing algorithms, especially in Linux and Unix environments. Furthermore he works as a system administrator for Magna International Inc. and plays drums in various bands.
Keynote by
N.N.
Friday, 2009/10/30 09:45-10:45
About the speaker:
Ext4, btrfs and the others
by Jan Kara
Friday, 2009/10/30 11:15-12:00
In recent years, quite a lot has happened in the Linux filesystem scene. Storage is becoming ever larger, solid state disks are becoming common, computers are joined into clusters sharing storage... This brings new challenges to the filesystem area, and new filesystems are being developed to tackle them. In this talk I will present the design of and compare two general purpose filesystems under development: ext4 and btrfs. They are considered the most likely successors of current filesystems such as ext3 or reiserfs. Ext4 is a direct successor of the ext3 filesystem. It shares with ext3 the basic disk layout design, although there are notable differences in the disk format, such as extents support, support for 48-bit block numbers and uninitialized block groups. Compared with ext3, it supports quite a few new features such as delayed allocation, online defragmentation, and journal checksumming. It also features a new block allocator, supports the fallocate system call and generally achieves better performance numbers than ext3. Another big advantage of ext4 is that the code in the current kernel is already quite stable and thus usable on a desktop or a server. Btrfs is a new filesystem designed from scratch. It borrows from reiserfs the idea of storing every filesystem object in one big tree, but otherwise its design is different. It supports copy-on-write, allowing the creation of file snapshots at the filesystem level, checksumming of both data and metadata, efficient storage of small files, online defragmentation, special handling of solid-state disks and much more. The disadvantage of btrfs is that its disk format is not yet finalized and thus it is not yet intended for wider use. Other filesystems are also being developed in Linux. I will briefly explore the following:
  • OCFS2 - a filesystem designed to run on several machines sharing a common storage (connected via SAS or Fiberchannel)
  • UBIFS - a filesystem designed for raw flash devices
  • Reiser4 - a general purpose filesystem; a successor of the original reiserfs
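Several of the ext4 on-disk features mentioned above can be inspected directly with e2fsprogs, and since mke2fs happily works on a file-backed image, no root privileges are needed. A small sketch (assuming mkfs.ext4 and dumpe2fs are installed):

```shell
#!/bin/sh
# Create a small file-backed ext4 image and list its feature flags.
# Depending on the e2fsprogs version, expect to see "extent" plus
# either "uninit_bg" or the newer "metadata_csum" among them.
# Works on a regular file, so no root privileges are needed.
set -eu
img=$(mktemp)
# 64 MiB sparse image.
dd if=/dev/zero of="$img" bs=1 count=0 seek=$((64 * 1024 * 1024)) 2>/dev/null
mkfs.ext4 -q -F "$img"
features=$(dumpe2fs -h "$img" 2>/dev/null | grep '^Filesystem features')
echo "$features"
rm -f "$img"
```

Comparing this output against the same procedure with `-t ext3` makes the disk-format differences discussed in the talk concrete.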
About the speaker: Jan Kara is a software engineer at SUSE Labs, Novell. He is an active developer in the filesystem area of Linux kernel and a maintainer of disk quotas, UDF filesystem, and journaling block layer.
EDE, a light desktop environment, description and best practices
by Sanel Zukan
Friday, 2009/10/30 11:15-12:00
EDE (Equinox Desktop Environment) is a graphical desktop for X Window systems licensed under the GNU GPL. It is based on FLTK. The main features of EDE are speed and responsiveness, low resource usage, usability, and a familiar look and feel. EDE is one of the fastest desktop environments around and is suitable for embedded devices, old computers and constrained operating systems; EDE even runs on Xbox and Minix. Besides a short presentation of EDE and its goals, this paper presents the techniques used behind the scenes. We use C++ conservatively, but without dropping crucial C++ features such as the object-oriented and generic programming approaches. With this we achieve small executables, short compile times and fast start-up. I will also briefly mention the FLTK project.
About the speaker: Sanel Zukan has been working on the EDE project for the last six years, mostly in his spare time. By day he is a network administrator at the Faculty of Science in Bosnia and Herzegovina, where he has a few exams left to get a degree in mathematics.
Speeding up file system checks in ext4
by Ted Ts'o
Friday, 2009/10/30 12:00-12:45
The Fourth Extended File System, or ext4, is the latest descendant of a line of file systems that were designed specifically for Linux, starting with the Second Extended File System, or ext2, and more recently the Third Extended File System, more commonly known as ext3. The ext4 file system has many new features, such as extents, fine-grained time stamps, and many others. The time it takes to check and repair a file system is critically important to system administrators, since a check can prevent a critical server from being returned to operation for hours, days or even weeks. As disk sizes become larger, the efficiency of a file system's consistency checker becomes more and more important. This paper describes the changes made to ext4's file system layout and block allocation algorithms which improved the time it takes to check an ext4 file system by a factor of 9 to 10 compared to an equivalent ext3 file system. Some of the techniques used include avoiding disk seeks by grouping the inode table and allocation bitmaps together, and reserving parts of the disk for directory blocks so they can be grouped together. In addition, the number of blocks that need to be read by e2fsck has been reduced by the use of extent-mapped inodes, which use many fewer metadata blocks than indirect block-mapped inodes, and by per-inode-table high watermarks that avoid reading unused inode table blocks.
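The effect is easy to poke at with e2fsprogs on file-backed images. The following toy comparison (my sketch, not the paper's benchmark: the filesystems are empty, so it mostly exercises the metadata-scanning difference that the uninitialized-block-group feature enables) times a full forced check on equally sized ext3 and ext4 images:

```shell
#!/bin/sh
# Compare full e2fsck time on equally sized, empty ext3 vs ext4
# images. ext4's uninit_bg/extent features let e2fsck skip unused
# inode-table blocks, so the ext4 pass typically reads far less
# metadata. Runs on sparse regular files, so no root is required.
# GNU date is assumed for nanosecond timestamps.
set -eu

mk_and_check() {
    img=$(mktemp)
    # 512 MiB sparse image.
    dd if=/dev/zero of="$img" bs=1 count=0 seek=$((512 * 1024 * 1024)) 2>/dev/null
    mke2fs -q -t "$1" -F "$img"
    t0=$(date +%s%N)
    e2fsck -f -n "$img" >/dev/null
    t1=$(date +%s%N)
    echo "$1: $(( (t1 - t0) / 1000000 )) ms"
    rm -f "$img"
}

r3=$(mk_and_check ext3)
r4=$(mk_and_check ext4)
printf '%s\n%s\n' "$r3" "$r4"
```

The 9-10x figure in the paper comes from populated filesystems on real disks, where the seek-avoiding layout changes matter as much as the skipped inode tables.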
About the speaker: Theodore Ts'o is the first North American Linux kernel developer and organizes the annual Linux Kernel Developers' Summit, which brings together the top 75 Linux kernel developers from all over the world for a face-to-face meeting. He was a founding board member of the Free Standards Group and was chair of that organization until it merged with OSDL to form the Linux Foundation. He is one of the core maintainers of the ext2, ext3, and ext4 file systems, and is the primary author and maintainer of e2fsprogs, the user space utilities for the ext2/3/4 file systems. At IBM, Theodore served as the architect for the Real-Time Linux development team. Theodore is currently on assignment with the Linux Foundation, where he serves as its Chief Technology Officer.
LAX - a toolset for network administration
by Thomas Groß
Friday, 2009/10/30 14:00-14:45
LAX is a collection of tools for the administration of mainly Linux/Unix-based networks. LAX runs on a separate server: the LAX server starts the administration tasks and collects the resulting data. LAX consists of a database to store the network structure (OpenLDAP), a lot of scripts, and technologies to act on remote hosts (OpenSSH autologin channels, SNMP, Windows RPC) and to store data (OpenLDAP, PostgreSQL). The LAX scripts, today mainly written in bash or Python, map to typical administration tasks. They are organized in modules such as "cert" (X.509 certificates), "openvpn" or "vx" (virtualization cluster). A simple development environment helps administrators create their own LAX scripts. A runtime environment ensures that designated users can use the LAX scripts and act on remote hosts and services across the OpenSSH channels. Further, there is a transaction system to run operations in parallel on many hosts. A monitoring and notification/reaction system, which checks the state of network objects, can trigger activities such as notification or service restart. The administrators work at a preconfigured KDE desktop serving as a portal to the network. SuperKaramba widgets (KDE3) or plasmoids (KDE4) represent network objects on the desktop, showing their state and offering quick access to configuration tools. In summary, LAX can be used for inventory, monitoring, controlling and visualization of network objects. Though the main focus is on Linux/Unix systems, other operating systems can be integrated. Today LAX is available from the teegee website, but hosting at a popular open source directory and on the openSUSE Build Service, as well as preconfigured machines at SUSE Studio, are planned.
About the speaker: Thomas Groß runs teegee, a company which specializes in Linux and open source software. He studied information technology in Chemnitz, Germany. His main focus at work is Linux-based IT infrastructure, system and network administration. He loves his children, mountain trekking, jogging, cycling and snowboarding. Sometimes he prefers tactile information storage, like the books of J. K. Rowling, J. R. R. Tolkien and Walter Moers.
State of the Union (Mount)
by Jan Blunck
Friday, 2009/10/30 14:00-14:45
This talk gives an overview of the different approaches available for filesystem namespace unification, aka union mounts, and their status today. Furthermore, there are certain scenarios where full-blown union mounts are not necessary; best-practice examples for these use cases are given.
About the speaker: Jan is an engineer working for Novell, taking care of the Linux kernel. He studied electrical engineering at Technische Universität Hamburg-Harburg and specialized in computer engineering. His contributions to Linux development range from a USENET newsreader through device drivers to Linux VFS development. He lives in Nuremberg with his wife and son.
OpenBSC: GSM network-side protocol stack on top of Linux
by Harald Welte
Friday, 2009/10/30 14:00-14:45
The OpenBSC project is a userspace-only implementation of a network-side GSM protocol stack on top of the Linux kernel. It uses the mISDN kernel subsystem for the physical E1 link to a GSM Base Transceiver Station (BTS) and implements the A-bis layer 2 protocol as well as the various layer 3 protocols of GSM. This includes the functionality typically performed by the Base Station Controller (BSC), Mobile Switching Center (MSC) and Home Location Register (HLR). OpenBSC marks one of the first steps of Free and Open Source software into the GSM communications protocols, despite those protocols being specified in publicly available documents and used by billions of devices around the planet.
About the speaker: Harald Welte is a freelancer, consultant, enthusiast, freedom fighter and hacker who has been working with Free Software (and particularly the Linux kernel) since 1995. His first major code contribution to the kernel was within the netfilter/iptables packet filter. He has started a number of other Free Software and Free Hardware projects, mainly related to RFID, such as librfid, OpenMRTD, OpenBeacon, OpenPCD and OpenPICC. During 2006 and 2007 Harald was co-founder of OpenMoko, where he served as Lead System Architect for the world's first 100% open, Free Software based mobile phone. Aside from his technical contributions, Harald has been pioneering the legal enforcement of the GNU GPL license as part of his gpl-violations.org project. More than 150 cases of inappropriate use of GPL-licensed code by commercial companies have been resolved as part of this effort, both in and out of court. He has received the 2007 "FSF Award for the Advancement of Free Software" and the 2008 "Google/O'Reilly Open Source Award: Defender of Rights". In 2008, Harald started to work on Free Software on the GSM protocol side, both for passive sniffing and protocol analysis, as well as on an actual network-side GSM stack implementation called OpenBSC. He is currently in the early design phase for the hardware and software of a Free Software based GSM baseband side. Harald is currently working as "Open Source Liaison" for the Taiwanese CPU, chipset and peripheral design house VIA, helping them to understand how to attain the best possible Free Software support for their components. He continues to operate his consulting business hmw-consulting.
Valgrind your filesystem
by Jörn Engel
Friday, 2009/10/30 14:45-15:30
Do you pine for the nice days when men were men and wrote their own filesystems? Well, today even lesser mortals can have a stab and notice two inconvenient truths: 1. Kernel hacking is hard. 2. Filesystem hacking doubly so. So a particularly lazy hacker decided to go shopping instead. His eyes caught sight of something called Valgrind which was supposed to be insanely cool for finding bugs. Awesome. But there was a catch. Valgrind runs in userspace and on userspace programs. Filesystems usually run inside the kernel. So we have the wrong tool for the job. But wait. Why not run the filesystem in userspace instead? After all, that just requires rewriting a teeny tiny amount of vfs and mm code. And once finished, we can run any (well, at least one) filesystem in userspace and get those embarrassing results before someone else notices. Rather unexpectedly, the resulting 14k lines of code actually work and allow testcases to be written. Not so unexpectedly, a number of bugs were discovered that your presenter would rather forget about.
About the speaker:
Freeswitch application server: Define complex Voice applications within a day.
by Peter Steinbach
Friday, 2009/10/30 14:45-15:30
Freeswitch is a modular, carrier-grade open source telephony system which runs on Linux, Mac OS X, Windows and Solaris. Beyond acting as a VoIP and ISDN switching platform, it offers PBX functionality for large PBX installations. Future demands in the industry are covered by supporting 48 kHz voice codecs, full encryption of calls (SIPS/SRTP), speech recognition (ASR), text-to-speech (TTS) and additional voice protocols like H.323, Jabber, GoogleTalk, Skype and IAX2. The telefaks* application server for Freeswitch is a new milestone in creating voice applications. It acts as a middleware that reduces the complexity of creating voice applications without limiting the scope of possible services. IVRs and call-center applications are designed by drawing workflows in a graphical user interface. Workflows are tested on a web interface before being taken into production. Ajax push services from Freeswitch to the web browser enable new possibilities for interaction with voice applications.
About the speaker: Peter Steinbach is co-founder of Telefaks*, a company which is focused on the integration of VoIP solutions and voice applications. Before founding his own company he was an IT manager and sales manager in worldwide leading telecommunication service companies. His company is located near Frankfurt, Germany and has a strong emphasis on integrating open source solutions into existing IT infrastructures.
Improving disk I/O performance on Linux
by Håvard Espeland
Friday, 2009/10/30 15:45-16:30
The completely fair queueing (CFQ) algorithm allocates the same amount of input/output (I/O) time to all queued processes of the same priority. When requesting data from a disk, this can lead to differences in throughput between processes, depending on how much disk seeking happens within a process's timeslice and on the placement of the data on the disk. To improve support for real-time processes requiring a fixed bandwidth, we have implemented a new priority class with quality of service (QoS) support for bandwidth and deadline requirements in CFQ. This new class provides enhanced real-time support while improving overall performance compared to the existing classes. As an example, experiments show that the bandwidth-based queue was able to serve 24 DVD-quality video streams, while the CFQ-RT queue managed to serve 19 streams without deadline misses.

Current in-kernel disk schedulers, albeit generally efficient, fail to optimize sequential multi-file operations like traversing a large file tree, since the application only reads one file at a time before processing it. We have investigated a user-level I/O request sorting approach to reduce inter-file disk arm movements. This is achieved by allowing applications to utilize the placement of inodes and disk blocks to make a one-sweep schedule for all file I/Os requested by a process, i.e., data placement information is read first, before issuing the low-level I/O requests to the storage system. Our experiments with a modified version of the tar archiving utility show reduced disk arm movements and large performance improvements. As an example, archiving the Linux kernel tree took 82.5 s using GNU tar, while our modified tar completed in 17.9 s.
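The one-sweep idea can be approximated in plain shell (a toy stand-in for the modified tar, invented for illustration: it orders reads by inode number, whereas the real implementation uses actual data placement information):

```shell
#!/bin/sh
# Read every regular file under the directory $1 in inode-number
# order instead of directory order: a cheap userspace approximation
# of one-sweep scheduling. Inode numbers only correlate with block
# placement; precise placement would come from FIBMAP/FIEMAP.
# GNU find and xargs are assumed; filenames containing newlines
# are not handled.
read_in_inode_order() {
    find "$1" -type f -printf '%i %p\n' |
        sort -n |
        cut -d' ' -f2- |
        xargs -r -d '\n' cat > /dev/null
}
```

For example, `read_in_inode_order /usr/share/doc` reads the same bytes as a recursive cat, but on a rotating disk with a cold cache the inode ordering tends to turn scattered seeks into something closer to a single sweep.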

About the speaker: Håvard Espeland is a Norwegian PhD student affiliated with the University of Oslo and Simula Research Laboratory. He received his Master's degree in 2008 on the subject of utilizing heterogeneous architectures, with a focus on the STI Cell Broadband Engine. His main research topic is parallel processing.
Async Programming in Samba
by Volker Lendecke
Friday, 2009/10/30 15:45-16:30
Many parts of Samba have to talk to other network services to perform their tasks: smbclient naturally needs to connect to a file server, winbind has to connect to domain controllers, and even smbd with an LDAP backend needs to connect to other servers. When talking to network services, Samba needs to cope with the fact that these can block indefinitely. Several methods of dealing with these blocking requests have been proposed and tested. Samba has now settled on a programming model based on a typical event library, extended with concepts for handling complex sequences of blocking events as easily as possible. This developer-oriented talk gives an introduction to the tevent_req-based programming style toward which Samba is converging, together with a walkthrough of some real-life examples from the Samba source code.
About the speaker: Volker Lendecke is one of the main Samba developers and a co-founder of SerNet GmbH in Göttingen, Germany.
Pre-silicon software development of Linux MTD drivers
by Gernot Hoyler
Friday, 2009/10/30 16:30-17:15
Dynamics in flash memory architectures have never been greater: new technologies are hitting the market with increasing frequency, customer design times are shrinking, and time to market becomes ever more important. Customers need fully tested software drivers at the same time they get their hands on the first hardware samples. After a brief review of the classic device-driver development method, a simulator-based approach is presented and discussed in more detail. The Linux MTD stack is well suited for this kind of development: because it properly encapsulates all I/O, it can run on simulated hardware without major changes. This advantage was utilized for the development of a new chip driver for Spansion flash memory devices that work with a reduced command set. Although silicon was not yet available, the corresponding driver could be written and its functionality verified. Implementation details are presented and architectural challenges of the simulator interface in the kernel are discussed, for example how to handle memory-access I/O operations that used to be non-blocking but become blocking when a simulator is used. For this case, a transparent solution is given that does not require any changes in the upper MTD layers. Finally, the solution is demonstrated live, showing the whole MTD stack running on a simulated flash memory device.
About the speaker: Gernot Hoyler holds a PhD degree in Electrical Engineering from the University of Erlangen-Nuremberg. He has more than 10 years of industry experience in system-level software development. Before joining Spansion's systems engineering team he worked for Silicon Graphics and other hardware manufacturers, developing several device drivers for Linux and Unix during this time.
Open-Source ERP-Solutions - a new way for SMEs
by Falk Neubert
Friday, 2009/10/30 16:30-17:15
The presentation will highlight the latest developments in open-source ERP solutions. The first part presents the current ERP solutions and identifies their advantages and disadvantages, followed by an overview of the functionality of the different solutions:

  • production module
  • purchasing module
  • sales module
  • master data
  • project management
  • human resources module
  • finance module
  • stock management
  • and more

The second part addresses the implementation of an ERP solution: What must be considered when selecting an open-source ERP solution? What organizational characteristics arise? What role does the community play in the implementation of ERP solutions? The last part of the presentation shows an example of an open-source ERP solution based on OpenERP: the company Nemus GmbH has introduced a new ERP solution, and a short online presentation will demonstrate the flexibility of a modern open-source ERP solution.
About the speaker: Falk Neubert is managing director of ecoservice (Hannover) and a research associate at the University of Osnabrück. He is responsible for the research project "ERP solutions based on open source", which is funded by the Federal Ministry of Economics and Technology. For 10 years he has advised companies on questions of electronic commerce and business. He is also a member of RECO (Regional Competence Center for Electronic Commerce, Osnabrück).

Comments or Questions? Mail to contact@linux-kongress.org Last change: 2009-10-23