
Linux-Kongress 2010
17th International Linux System Technology Conference
September 21-24, 2010
Georg Simon Ohm University Nuremberg / Germany



Abstracts


Tutorials/Training | Technical Sessions
Network Monitoring with Open Source Tools
by Timo Altenfeld, Wilhelm Dolle, Robin Schroeder and Christoph Wegener
Tuesday, 2010/09/21 10:00-18:00 and
Wednesday, 2010/09/22 10:00-18:00 German
The two-day tutorial "Network Monitoring with Open Source Tools" is aimed at experienced system administrators whose job is to maintain, monitor and optimize complex network environments. Participants should already have experience installing programs on Linux and bring basic knowledge of the TCP/IP stack.

In the course of the workshop, the setup of a Linux-based monitoring server with exemplary services will be demonstrated and discussed. We will not only look at the purely technical aspects of network monitoring, but also outline and take into account the fundamentals of the necessary organizational and legal framework. After the event, participants will be able to put the knowledge gained into practice on their own.

With our daily lives depending ever more on a working IT landscape, and with the complexity of the required infrastructure growing rapidly at the same time, network management and network monitoring are becoming increasingly important. A number of complex and often very expensive commercial tools exist for network monitoring. This workshop shows how equivalent functionality can be achieved with specialized, free and open source programs.

Topics in detail / outline of the tutorial:

  • Organizational issues
    • Options for network monitoring
    • Business planning / business continuity / TCO: why free and open source software?
    • The role of network monitoring in risk management (Basel II / Sarbanes-Oxley Act (SOX))
  • Legal aspects
  • Information gathering
  • Simple Network Management Protocol (SNMP)
    • Qualitative monitoring
    • Multi Router Traffic Grapher (MRTG)
    • Munin and RRDTool
  • Availability monitoring
    • Nagios
  • Proactive monitoring, evaluating log files
  • Troubleshooting in networks with Wireshark
    • Security monitoring
    • Host and network scanning with nmap
    • Nessus and open source alternatives
The content is presented in lecture style and deepened through hands-on exercises that participants carry out on their own machines.
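
To give a flavour of the availability-monitoring part, the following is a minimal sketch of a Nagios-style check plugin in Python; host, port and timeout are placeholder values, and a real setup would normally rely on the existing Nagios plugins rather than a hand-written check.

    #!/usr/bin/env python
    import socket
    import sys

    # Nagios interprets plugin exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
    HOST, PORT, TIMEOUT = "www.example.org", 80, 5.0   # placeholders

    def main():
        try:
            sock = socket.create_connection((HOST, PORT), timeout=TIMEOUT)
            sock.close()
            print(f"OK - {HOST}:{PORT} is reachable")
            return 0
        except OSError as err:
            print(f"CRITICAL - {HOST}:{PORT} not reachable: {err}")
            return 2

    if __name__ == "__main__":
        sys.exit(main())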

Hardware requirements: Participants must bring a computer with a current Linux distribution – users of other operating systems (*BSD or MacOS X) should get in touch with the speakers via contact@linux-kongress.org before the event.

About the speakers: Timo Altenfeld, an IT specialist for system integration (Fachinformatiker für Systemintegration), is a system administrator at the Faculty of Physics and Astronomy of the Ruhr-Universität Bochum. Open source tools and Linux have fascinated him since the beginning of his training, and he has been using them professionally for quite some time to monitor Linux, Windows and Solaris systems. He has been working with Linux in various environments since 2003. He is also currently studying business informatics at the FOM Essen.

Wilhelm Dolle heads the Security Management business unit as Business Field Manager at HiSolutions AG, a consultancy for information security and risk management in Berlin. He is a CISA, CISM and CISSP as well as a licensed IT-Grundschutz/ISO 27001 and BS 25999 auditor, and has gained extensive experience in security management, risk and security analyses, and incident management. Wilhelm Dolle has authored numerous technical articles and holds teaching assignments at several universities and a Berufsakademie.

Robin Schröder, an IT specialist for system integration, has been working since 2006 in the "IT Systems, Software Integration" department of the administration of the Ruhr-Universität Bochum. He administers numerous Linux, Solaris and Windows systems there and monitors application operations with various open source tools. He has been working with computers, Linux and networks since 1995.

Christoph Wegener, CISA, CISM and CBP, holds a PhD in physics and has been working freelance on IT security and open source topics with wecon.it-consulting since 1999. He is the author of many technical articles, a reviewer for several publishers and a member of several program committees. Since the beginning of 2005 he has also been working at the European competence center for IT security (eurobits) and is active in IT security education. He is furthermore a founding member of the working group on identity protection on the internet (a-i3) and a board member both there and of the German Unix User Group (GUUG).

Providing Cross-Platform File Services Securely
by Michael Weiser, Daniel Kobras
Tuesday, 2010/09/21 10:00-18:00 and
Wednesday, 2010/09/22 10:00-18:00 German
This tutorial covers the implementation of secure file services for Unix, Linux and Windows clients. It is aimed at administrators who offer file services over CIFS or NFS in their environments and now want to increase the security of these services.
The following scenarios are considered:
  1. This scenario starts from an existing infrastructure based on Active Directory in which file services are offered over CIFS and NFSv3. The goal is the migration to NFSv4 with GSS as the security flavor. Participants learn how to set up a Kerberized NFSv4 service integrated with the Active Directory.
  2. The starting point in this scenario is a pure open source infrastructure with OpenLDAP and a Samba 3 domain. CIFS and NFSv3 file services already exist here as well. The goal is likewise to replace NFSv3 with secure NFSv4.
    While the Active Directory of scenario 1 already includes a complete Kerberos infrastructure, here it still has to be built, with as little effort as possible. Participants learn how the Samba domain can be extended into a Kerberos realm without any migration effort, using the Heimdal Kerberos implementation.
  3. New infrastructure with OpenAFS: In the third scenario, participants do without CIFS and NFS and instead build an alternative solution with OpenAFS. This opens up the possibility of a unified file service for Unix, Linux and Windows clients. Compared with scenarios 1 and 2, the migration effort is higher, but the Kerberos infrastructure from scenario 1 or 2 can also be used for OpenAFS. The security of the individual variants differs with respect to data integrity, data confidentiality and authentication. Together with the participants, the individual solutions are analysed and evaluated against these criteria over the course of the workshop.

Prerequisites: basic know-how in Linux network administration. Basic knowledge of Kerberos, LDAP, Samba and NFS is recommended.

Computers brought along should meet the following requirements:
  • a freshly installed, non-production Linux of a current distribution that may be extensively reconfigured
  • Debian 5 recommended
  • current Ubuntu, OpenSuSE and Fedora possible
  • other distributions at the participant's own risk (compiling missing software may be necessary, e.g. the Heimdal KDC on Fedora - we will assist with this)
  • the possibility to install additional distribution packages (installation CDs or online repositories via the network)
  • optionally a freshly installed, non-production Windows XP, Vista or 7 (Professional/Ultimate, no Home Edition)
  • Either one or both systems may run virtualized; they then need direct access to the network (bridged mode).
  • On request we can provide virtual machines with Debian 5 and Windows 7 Ultimate. This requires bringing a current, installed and working VMware Player, Server or Workstation and knowing how to use it.
About the speakers: Daniel Kobras works as a systems engineer at science+computing ag in Tübingen, where he works among other things on scalable storage solutions for customers in the automotive industry.

Michael Weiser has been supporting projects and workshops on LDAP, Kerberos, AD integration and high-performance computing at science+computing ag since 2004.

SELinux - How to live with it?
by Toshaan Bharvani
Tuesday, 2010/09/21 10:00-18:00 German
Security-Enhanced Linux is disabled in most cases because most people do not take the time to understand how to work with SELinux. Keeping SELinux enabled, however, increases security: all applications are confined, so even if an intruder were to break in, only that application would be affected. In RHEL, CentOS or Fedora most applications come with predefined SELinux policies that can be adjusted, and other applications can easily be added with the integrated tools, allowing you to run any custom application. The presentation explains what SELinux is, how it works, how to use the predefined policies and how to create custom policies. This tutorial explains how to set SELinux booleans correctly for the default applications and how to configure SELinux for some of the most popular services, making them behave better or making SELinux accept their behaviour as needed.
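
As a small illustration of inspecting the SELinux state and booleans programmatically, here is a Python sketch; the selinuxfs paths and the boolean file format are assumptions that may differ between distributions, and the usual way to do this is with the getenforce/getsebool tools.

    import os

    # Assumption: selinuxfs is mounted at /sys/fs/selinux on newer systems
    # and at /selinux on older RHEL/Fedora releases.
    CANDIDATES = ["/sys/fs/selinux", "/selinux"]

    def selinux_root():
        for path in CANDIDATES:
            if os.path.isfile(os.path.join(path, "enforce")):
                return path
        return None

    def main():
        root = selinux_root()
        if root is None:
            print("SELinux appears to be disabled or selinuxfs is not mounted")
            return
        with open(os.path.join(root, "enforce")) as f:
            mode = "enforcing" if f.read().strip() == "1" else "permissive"
        print("SELinux mode:", mode)
        booleans = os.path.join(root, "booleans")
        if os.path.isdir(booleans):
            for name in sorted(os.listdir(booleans))[:10]:  # show a few booleans
                with open(os.path.join(booleans, name)) as f:
                    current = f.read().split()[0]  # file holds "current pending"
                print(f"  {name} = {current}")

    if __name__ == "__main__":
        main()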

For this tutorial the following is required:

  • laptop with SELinux
  • some basic knowledge of Linux
  • some basic knowledge of security

About the speaker: Toshaan has been in the IT business for a long time. He is an IT consultant, spending most of his time implementing and promoting Linux in medium-sized enterprises. By education he holds a Master in Business Engineering in Management Information Systems with a major in Information Technology. He combines this technical knowledge with his academic background in the business management of IT. He is currently busy with projects around collaboration systems, virtualization, and ERP, CRM and integration work. Another main area of interest is security in its broader sense.
Qemu and the Open virtualization stack
by Glauber Costa
Tuesday, 2010/09/21 10:00-18:00

This is a proposed one-day tutorial.

In this tutorial, I will cover the basic usage of QEMU, the building block of open source virtualization, as well as some advanced options: how to manually migrate virtual machines with low downtime, how to debug guests, and how to play with firmware, from emulation to virtualization with the KVM kernel driver for Linux. Attendees of this tutorial will learn how to get the most out of QEMU.

I will also cover QEMU's new remote protocol, QMP, and some of the tools that are commonly used in the open virtualization stack, such as libvirt and libguestfs.
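
As an illustration of the QMP idea, here is a minimal Python sketch that talks JSON to a QEMU monitor socket; the socket path is a placeholder and assumes QEMU was started with something like -qmp unix:/tmp/qmp.sock,server,nowait.

    import json
    import socket

    SOCK = "/tmp/qmp.sock"   # placeholder path

    def qmp_command(sock_file, cmd):
        """Send one QMP command and return the parsed reply."""
        sock_file.write(json.dumps(cmd) + "\r\n")
        sock_file.flush()
        while True:
            reply = json.loads(sock_file.readline())
            # Skip asynchronous events; a real client would queue them.
            if "event" not in reply:
                return reply

    def main():
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(SOCK)
        f = s.makefile("rw")
        greeting = json.loads(f.readline())      # server greets with {"QMP": {...}}
        print("QMP version:", greeting["QMP"]["version"])
        print(qmp_command(f, {"execute": "qmp_capabilities"}))  # leave capability mode
        print(qmp_command(f, {"execute": "query-status"}))      # e.g. running/paused

    if __name__ == "__main__":
        main()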

About the speaker: Glauber is a computer engineer who graduated from the University of Campinas, Brazil, and holds an MSc in Computer Science from the same university. He has been working for four years at Red Hat in the virtualization group, initially on the never-ending task of getting Xen ready for the RHEL5 release. During this time, he wrote code for the paravirt_ops framework for x86_64, lguest and KVM. As of 2009, he was listed in the Linux Foundation's "Who Writes Linux" report as one of the top 30 developers of the year. Currently, Glauber works on preparing KVM for RHEL6 and on general virtual timekeeping problems.
Zen and the art of High-Availability clustering
by Lars Marowsky-Brée
Wednesday, 2010/09/22 10:00-18:00

For more than a decade, Linux has been part of the data center; starting from grassroots initiatives, it has begun to take over critical systems where dependability is key to the business, or where even lives are at stake.

Several open source projects - and some proprietary legacy products - exist to augment Linux with high-availability functionality. This OSS technology has come a long way since its smallish beginnings in 1998, and standardization of the HA cluster stack (based on corosync and pacemaker) is finally starting to emerge across all major distributions; combined with the many other facets - RAID, multipath, DIF, networking, file systems, instrumentation, code review - Linux arguably has one of the most powerful environments for building dependable computing services. Many conference tutorials, presentations and company trainings have been devoted to this topic, and today even a significant body of documentation - from online resources to books - exists.

It is thus somewhat surprising to witness some of the painful problems that arise when actual people try to build actual systems from these components and follow through on the teachings.

This tutorial discusses the other side of the coin: drawing on experience and anecdotes from building and developing HA systems in all roles since their very beginnings on Linux, it will help set realistic expectations: what NOT to try, when clustering might just be a bad idea, how to triage and debug some case studies, how to do it better, and when and how to ask for help.

It covers all aspects of building, maintaining, and supporting HA systems: processes, architecture and design, configuration, writing your own resource agents, networking, storage - what we have seen people do and would rather have them stop. Hopefully, this will help the audience to fail better in their own endeavors.

The tutorial addresses mostly administrators, but also developers.

About the speaker:

Lars Marowsky-Brée joined the Linux community in 1994 and has been working on high-availability topics since 1999, on projects ranging from Linux-HA, heartbeat, LVS, Pacemaker and FailSafe to in-kernel MPIO, and in a broad range of roles: administrator, consultant, contributor, and project lead. He has been on the receiving end of many support escalations.

He joined SuSE in the spring of 2000, and currently serves as Architect for Storage and High-Availability, and project manager for the SUSE Linux Enterprise 11 High-Availability Extension.

A Linux Kernel & Tools Safari
by Wolfgang Mauerer
Wednesday, 2010/09/22 10:00-18:00

The reasons to gain an understanding of the Linux kernel are nowadays manifold and not only directly focused on kernel development: acquaintance with the fundamental system layer is essential for understanding a wide variety of system and application issues, from performance tracing to architectural design topics. However, the complexity and size of the sources make this a difficult task. This tutorial provides a tool-centric safari through the Linux kernel: the most important parts and components are introduced and examined with hands-on experiments utilising numerous tools that are essential for quick and efficient development and analysis. Special emphasis is also placed on the recently introduced perf toolbox that allows for gaining a detailed understanding of combined kernel/userland issues.
In particular, the topics are:

  • Structure and organisation of the kernel sources
  • Important concepts and data structures: Standard algorithms (list management, locking, hashing, ...), the task network, memory management, scheduling
    • Using Qemu for development and debugging
    • Information sources within the kernel
    • Tracking data structure networks
    • Examining kernel behaviour
    • Understanding how kernel time is spent on behalf of userland programs
  • Tools introduced and used during the tutorial:
    • ftrace and LTTng
    • perf [optionally including kvm interaction]
    • GDB and kgdb for kernel debugging
    • Qemu and UML as test foundation
    • LXR, cscope for source code tracking
    • git basics for users, qgit

Participants are expected to bring a laptop that can either run a self-compiled kernel or a (provided) qemu virtual machine image.

About the speaker: Wolfgang Mauerer has been writing documentation on various Linux and Unix related topics for the last 10+ years, and has closely tracked Linux kernel development during this time. He is the author of books on text processing with LaTeX and on the architecture of the Linux kernel, and has written numerous papers and articles. With the recent publication of "深入Linux内核架构" he finally managed to be unable to read a single letter of what he has written.

After getting a PhD in quantum information theory from the Max Planck Institute for the Science of Light (where he was interested in using Linux for scientific tasks like numerical simulation of quantum systems and quantum programming languages), he joined Siemens Corporate Research and Technologies, where he currently deals with virtualisation (libvirt, qemu, kvm), real-time (xenomai, ipipe) and other low-level work.
Porting of IPv4 Applications to IPv4/IPv6 dual-stack
by Owen DeLong
Wednesday, 2010/09/22 14:00-18:00
A review of the need for IPv6 support in client-server applications followed by a review of the author's methodology for making these changes with running-code examples in C, Perl, and Python.
IPv6 is probably the single biggest code-update required since Y2K. Given the far reaching implications of this transition and the speed with which it must occur, this is a very timely subject.
The author first wrote a simple client and server application in native IPv4 in each language. Subsequently, he ported each of those clients and servers to dual-stack and now shows the differences in the resulting code and provides tips, techniques, and lessons learned from the exercise so that other developers may expedite their porting processes.
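
Not the author's code, but a minimal Python sketch of the common dual-stack client pattern based on getaddrinfo(); host and port are placeholders.

    import socket

    # getaddrinfo() returns both IPv6 and IPv4 addresses for the host,
    # and the client simply tries them in order until one connects.
    def connect(host="example.net", port=7777):
        last_err = None
        for family, socktype, proto, _name, addr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                s = socket.socket(family, socktype, proto)
                s.connect(addr)
                return s
            except OSError as err:
                last_err = err
        raise last_err

    if __name__ == "__main__":
        s = connect()
        print("connected via", "IPv6" if s.family == socket.AF_INET6 else "IPv4")
        s.close()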

For attendees a laptop is recommended.
About the speaker: Owen DeLong is an IPv6 Evangelist at Hurricane Electric and a member of the ARIN Advisory Council. In these roles, he is keenly aware of the dwindling IPv4 resource pools and the need to get as much of the internet as possible to add IPv6 capabilities to their networks prior to IPv4 runout.
Request Tracker: From Setup to Processes and Workflows
by Torsten Brumm and Björn Schulz
Wednesday, 2010/09/22 10:00-18:00
The one-day tutorial "RT from setup to the first workflow" is aimed at beginners and advanced RT admins who would like to get more out of RT. The main goal of this tutorial is to explain installation, configuration and development with RT, based on daily business examples from KN and DESY. Participants should have basic Linux and Perl knowledge.
During this workshop we explain how to install RT from scratch and how to install some of the main RT modules, with an explanation of their usage, followed by a large development part that conveys the basic knowledge of how to create workflows and implement business processes on top of RT.

Details:
  • Request Tracker basics
    • Request Tracker features
    • Request Tracker functionalities
    • Request Tracker extensions
  • Installation
    • Preparation of the Linux systems
    • Installation of the needed Perl modules
    • Installation of RT
    • Initial setup of RT
    • Setup of the mail and web server for RT
  • Development basics
    • Introduction to RT development
    • Objects within RT
    • Scripts, templates and modules
    • Creating your own modules
    • Creating your own RT web pages
  • Workflows
    • Workflow basics
    • Creating a simple approval process
    • Creating a complex process for Incident and Change Management
About the speakers: Bjoern Schulz has been working as an RT admin at DESY in Hamburg since 2000. He is responsible for the user helpdesk and RT development. Bjoern supports about 400 RT users in their daily business with RT.

Torsten Brumm has been responsible for the largest RT installation worldwide, at Kuehne + Nagel (KN) Corporate IT, since 2001. Torsten supports more than 60,000 users in their daily work with RT. He is also responsible for the creation of all the Kuehne + Nagel workflows and business processes and for the development based on RT.
KEYNOTE – Kernel development: how it goes wrong and why you should be a part of it anyway
by Jon Corbet
Thursday, 2010/09/23 09:15-10:15
The Linux kernel is one of the most successful software projects in history, routinely integrating tens of thousands of changes from thousands of developers every year. Like any large community, the kernel has its ups and downs, and that has led to some spectacular failures over the years. This talk will take a look at episodes where things have gone wrong and come to some conclusions about how difficulties can be avoided and why kernel development is worth the trouble despite the occasional hazard. Participants should expect to leave informed and ready to send in patches.
About the speaker: Jonathan Corbet is a Linux kernel contributor, co-founder of LWN.net (and the author of its Kernel Page), and the lead author of Linux Device Drivers, Third Edition. He lives in Boulder, Colorado, USA.
OsmocomBB: Protocol stack and baseband firmware for GSM mobile phones
by Harald Welte
Thursday, 2010/09/23 10:45-11:30
The OsmocomBB project[2] is a Free Software implementation of the GSM protocol stack running on a mobile phone. Layer 1 is written to run directly on the telephone hardware, while layers 2 and 3 run as an application program on a Linux system.

For decades, the cellular industry, comprising cellphone chipset makers and network operators, has kept its hardware and system-level software as well as its GSM protocol stack implementations closed. As a result, it was never possible to send arbitrary data at the lower levels of the GSM protocol stack. Existing phones only allow application-level data to be user-supplied, such as SMS messages, IP over GPRS or circuit-switched data (CSD).

Using OsmocomBB, the Free Software enthusiast as well as the security researcher finally has a tool equivalent to an Ethernet card in the TCP/IP protocol world: a simple transceiver that will send arbitrary protocol messages to a GSM network.

By the time Linux-Kongress 2010 is held, OsmocomBB is expected to have progressed to a level where it can make actual phone calls on any GSM network.

[1] http://laforge.gnumonks.org/
[2] http://bb.osmocom.org/

About the speaker: Harald Welte is a freelancer, consultant, enthusiast, freedom fighter and hacker who has been working with Free Software (and particularly the Linux kernel) since 1995. His first major code contribution to the kernel was within the netfilter/iptables packet filter.

He has started a number of other Free Software and Free Hardware projects, mainly related to RFID, such as librfid, OpenMRTD, OpenBeacon, OpenPCD and OpenPICC. During 2006 and 2007 Harald became co-founder of OpenMoko, where he served as Lead System Architect for the world's first 100% open, Free Software based mobile phone.

Aside from his technical contributions, Harald has been pioneering the legal enforcement of the GNU GPL license as part of his gpl-violations.org project. More than 150 cases of inappropriate use of GPL-licensed code by commercial companies have been resolved as part of this effort, both in and out of court. He has received the 2007 "FSF Award for the Advancement of Free Software" and the 2008 "Google/O'Reilly Open Source Award: Defender of Rights".

In 2008, Harald started to work on Free Software on the GSM protocol side, both for passive sniffing and protocol analysis and for an actual network-side GSM stack implementation called OpenBSC. He is currently in the early design phase for the hardware and software of a Free Software based GSM baseband side.

He continues to operate his consulting business hmw-consulting.

What's up in Kernel-Land
by Thorsten Leemhuis
Thursday, 2010/09/23 10:45-11:30
What the developers of the Linux kernel and software close to it work on might not seem very significant at first sight, but in fact it is very important, as that is where a lot of new technologies and drivers get developed and brought into shape before they show up in major Linux distributions. This presentation will give an overview of recent happenings in kernel-land and what to expect in the near future. It will include an overview of what's in Linux 2.6.35, what will show up in 2.6.36, and what improvements are currently under discussion for later inclusion. In that scope, some aspects of the Linux development process will be explained and some challenges outlined that the kernel and its developers currently face. Attendees of any technical ability should gain a better understanding of where kernel development stands and what to expect in the near future.
About the speaker: Thorsten works as an editor for the Hannover based Heise Zeitschriften Verlag and writes for its print magazine c't and the online publications heise online and heise open. He has a strong interest in PC hardware and Linux, which in turn led to the "Kernel Log" -- a column that gives a concise overview of the most important things that the kernel developers are doing or discussing. The column also covers topics that are close to the kernel, such as graphics drivers (Xorg, Mesa, ...) and Linux's "plumbing layer".
Design and implementation of a DECT network stack for Linux
by Patrick McHardy
Thursday, 2010/09/23 11:30-12:15
DECT (Digital Enhanced Cordless Telecommunications) is an ETSI standard for digital portable phones and data terminals. This talk gives an introduction to the various layers of the DECT protocols and presents the design and implementation of an open source DECT network stack for Linux.
About the speaker: Patrick McHardy is a freelance hacker and consultant who has been working mainly on the Linux kernel for the past 10 years. He is the current chairman and maintainer of the netfilter project, but also works on all other components of the Linux networking stack. He is the primary author of the DECT network stack for Linux. Patrick lives in Freiburg, Germany.
Deploying OpenOffice.org - Installation and Configuration in a Corporate Network
by Florian Effenberger
Thursday, 2010/09/23 11:30-12:15
OpenOffice.org is the leading open source productivity suite, covering all aspects of daily office work. More and more companies are migrating to OpenOffice.org, not only because of its open file format and its free license. Network administrators, however, often face issues when deploying the program in their corporate network, which requires maintaining a sensible default configuration adapted to their users' needs. Most people don't know that OpenOffice.org comes with broad support for deployment: ready-to-use packages can be deployed on Linux and Windows out of the box with any existing software distribution, while configuration is handled by XML-based configuration files that come with LDAP support and are platform-independent. Settings can be maintained on a per-user or on a network-wide basis, while critical options can be locked and unwanted features can be removed to follow the company's security policy. This talk shows examples of how OpenOffice.org can be deployed within a corporate network and gives insight into its configuration options. It is intended for beginners with near-to-zero knowledge of deployment, as well as for experienced system administrators with heterogeneous networks.
About the speaker: Florian Effenberger has been an open source evangelist for many years. He is lead of the international OpenOffice.org marketing project as well as a member of the management board of the non-profit OpenOffice.org Deutschland e.V. He has ten years' experience of designing enterprise and educational computer networks, including software deployment based on free software. He is also a frequent contributor to a variety of professional magazines worldwide on topics such as free software, open standards and legal matters.
IEEE 802.15.4 stack for Linux
by Dmitry Eremin-Solenikov
Thursday, 2010/09/23 13:45-14:30
Wireless personal area networks (WPANs) are used to convey information over relatively short distances. Unlike wireless local area networks (WLANs), connections effected via WPANs involve little or no infrastructure. This feature allows small, power-efficient, inexpensive solutions to be implemented for a wide range of devices. Typical use cases for WPANs are distributed sensor or metering networks, "Smart Home" setups, smart remote controls, etc. WPANs are now also starting to be used for industrial applications: factory automation, robots, etc.

Currently, networks based on the IEEE 802.15.4 standard have become the de facto standard for most WPAN installations. Most transceiver vendors provide an IEEE 802.15.4 stack for at least one MCU family. For a long time Linux was left without any support for WPANs. Finally, during the 2.6.31 kernel development cycle, Siemens started pushing support for IEEE 802.15.4 networks into the Linux kernel.

In this talk, I'd like to talk about the current state of the LR-WPAN stack and its future directions.

About the speaker: Dmitry is an engineer at Siemens CT Research and Technologies, where he cares about porting Linux to various strange platforms. His current research interests are in the field of prospective network protocols. In the Linux/open source world, he is a maintainer/contributor for several boards, maintainer of the LR-WPAN (IEEE 802.15.4) stack, and a contributor to several embedded Linux distributions (like OpenEmbedded and SLIND), Qemu and several other projects.
Desktop virtualization with spice
by Gerd Hoffmann
Thursday, 2010/09/23 13:45-14:30
This talk will give an overview of the spice remote desktop protocol and the components it needs: What is spice? What is qxl? What features does spice have and how does it work? How does spice integrate with qemu? What changed recently? What is currently being worked on? What are the future plans?
This will be a largely technical talk for developers.
About the speaker: Gerd Hoffmann is working on virtualization. He started a few years back with User Mode Linux. Later the focus shifted to Xen. Nowadays he is working on qemu and kvm for the Red Hat emerging technologies group. Currently he is working on spice support for qemu. Other recent QEMU work areas include the device tree (qdev), VNC and SCSI. Gerd has given various talks at virtualization-focused conferences (Xen Summit, KVM Forum) and at German Linux conferences (LinuxTag, Linux-Kongress).
Wifi 802.11n standard support in Linux
by Vladimir Botka
Thursday, 2010/09/23 14:30-15:15
In October 2009 the IEEE approved and published the 802.11n (11n) standard, which defines the high-throughput extension to the 802.11 standard. Using this extension, wireless adapters can achieve throughput of up to 300 Mbps based on physical-layer data rates of 600 Mbps. In the introduction we present an overview of the physical-layer diversity techniques, frame aggregation and channel bonding mechanisms, together with Multiple Input Multiple Output (MIMO) and multiple-antenna features. To implement the 11n standard in the kernel, a new unified wireless extension to the wlan drivers has been developed. We explain how the wlan drivers are configured to use the new extension. This extension enables the wlan driver to communicate with the firmware of an 11n-enabled adapter on one side, and enables user-land applications to configure the wlan driver on the other. We then describe the functionality of the related user-land applications: the iw utility, which enables the user to manipulate wireless devices and their configuration, the crda central regulatory domain agent, and the wireless regulatory database. We then introduce specific changes to the configuration of openSUSE: modifications to the traditional ifup/ifdown scripts are explained and new sysconfig variables are introduced. Finally, a practical example of the configuration is given, together with a short introduction to debugging techniques.
About the speaker: The author is a member of the team that creates customized SUSE Linux Enterprise Desktop (SLED) distributions for specific PC configurations, updated with customized packages for the specific hardware. He is responsible for the configuration and functionality of the drivers for Wifi adapters, Bluetooth adapters and the related user-land applications.
Architecture of the Kernel-based Virtual Machine (KVM)
by Jan Kiszka
Thursday, 2010/09/23 14:30-15:15
In the past years, the Kernel-based Virtual Machine (KVM) has gained increasing popularity as a native virtualization solution for Linux. In this presentation, we will try to provide a deeper insight into its latest architecture.

We will first of all explain the execution model of KVM virtual machines and how this maps to the (most prominent) user space part of the hypervisor stack: QEMU. We will describe the KVM specific submodules in QEMU and provide an update on the merge status of the qemu-kvm tree into upstream QEMU.

We will then shed light on the design of the KVM kernel part: how it manages the hardware virtualization extensions on x86 CPUs, how it still works in their absence on other architectures, which platform devices are virtualized and why some of them reside in the KVM module, how device pass-through is realized, what kind of smart optimizations have been applied recently to reduce the virtualization overhead, why KVM can host itself or other hypervisors, and more.

Our goal is to provide an up-to-date overview of the KVM design that can help to understand its interface and implementation, either to optimize complex KVM installations, to debug tricky problems, or extend its functionality.

About the speaker: Jan Kiszka is working as consultant and software engineer in the competence center for embedded Linux at Siemens Corporate Technology. He is supporting Siemens sectors and external customers to adapt and enhance Open Source as platform for their products. His current focus is on virtualization, specifically for embedded use cases, real-time systems, and development tools. For customer projects and in parts of his spare time, he is involved in various Open Source projects, including KVM and QEMU.
Control and forwarding plane separation on an open-source router
by Olof Hagsand
Thursday, 2010/09/23 15:45-16:30
In previous work we have shown how open-source routers on new PC hardware allow for forwarding speeds of 10Gb/s and above. In this work we extend the applicability of those results by showing how the new 10G interface classification techniques can be used to separate packet forwarding from control plane operation.

The objective of such a separation is to allow for robust control plane operation, so that routing and management can continue uninterrupted in the case of overload or denial-of-service attacks. The aim is to dedicate a single CPU core to routing protocols, ssh and other services, and let the remaining cores process forwarding traffic. For this, incoming control traffic needs to be identified and filtered at the interface card level and dispatched to the control CPU core via DMA, while the remaining traffic is load-balanced over the forwarding cores.

Many new interface cards have chipsets with advanced classification capabilities, motivated by advances in virtualization and multicore architectures. We have chosen to study the Intel 82599 10Gb/s controller and the Linux ixgbe driver. The 82599 has several mechanisms to control packet classification, including Receive Side Scaling (RSS), Flow Director, and ntuple filters. Other interface cards on the market use generic TCAMs providing similar functionality.

The approach we used was to implicitly configure the Flow Director through outgoing control traffic, so that return flows aimed at the control plane were identified and could be directed to the control processor. Flows not destined for the control processor were load-balanced among the forwarding cores using RSS. We found this to be a simple and straightforward approach, and we present results that verify this method. However, we have seen some overload scenarios where packet drops occur in hardware before classification, which needs to be analyzed further.

During the project we also explored some of the hardware capabilities of new buses (PCIe Gen 2). We discovered that with optimal settings we could transmit (DMA) 93 Gbit/s using 1500-byte packets.

About the speaker: Olof Hagsand is an associate professor in Internet technology at the School of Computer Science and Communication at the Royal Institute of Technology (KTH), Stockholm. He received his PhD in Telecommunication Networks and Systems from KTH in 1995 and has been active both in research and industry in the fields of networking, end-to-end measurement and router architectures.

Additional authors: Robert Olsson, Uppsala University; Jens Laas, Uppsala University; Bengt Görden, KTH

Virtual Machine timekeeping
by Glauber Costa
Thursday, 2010/09/23 15:45-16:30
Keeping accurate time is a hard problem, and in virtual machines it is even more so. It only gets worse if low overhead is also a requirement. In this talk, we'll cover the basics of this problem, comparing all the clock devices that a kvm guest can use, and techniques to mitigate the problem on both the guest and the host. We'll see what the hypervisor does, or can do, to export a reasonable TSC to the guest. We'll also cover the latest state of the kvm paravirtual clocksource device.
About the speaker: Glauber is a computer engineer who graduated from the University of Campinas, Brazil, and holds an MSc in Computer Science from the same university. He has been working for four years at Red Hat in the virtualization group, initially on the never-ending task of getting Xen ready for the RHEL5 release. During this time, he wrote code for the paravirt_ops framework for x86_64, lguest and KVM. As of 2009, he was listed in the Linux Foundation's "Who Writes Linux" report as one of the top 30 developers of the year. Currently, Glauber works on preparing KVM for RHEL6 and on general virtual timekeeping problems.
Elliptics network - a distributed hash table, design and implementation
by Evgeniy Polyakov
Thursday, 2010/09/23 16:30-17:15
The purpose of the Elliptics network storage is to allow users to access a set of physically distributed servers through a flat addressing model in a decentralized network environment. Key/value distributed storage provides an efficient method of accessing data with a limited set of constraints. As proof that such functionality is useful in real-life scenarios, we present a practical implementation of a DHT storage server with modular IO backends on top of common filesystems or a database, and various frontends ranging from a POSIX interface to an HTTP access mode. We will discuss the limitations of the distributed hash table approach and compare them to the functionality provided by centralized storage systems, namely high-performance data access and high availability in a fault-prone environment. Based on the practical results and the flexibility of the implemented storage model, we will highlight possible new functionality and the ways it could be implemented in the discussed system.
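
To illustrate the flat "key maps to node" idea behind a DHT, here is a generic consistent-hashing sketch in Python; it is not Elliptics' actual routing algorithm, and the server names are made up.

    import hashlib
    from bisect import bisect

    SERVERS = ["node-a:1025", "node-b:1025", "node-c:1025"]  # hypothetical nodes

    def key_id(key: str) -> int:
        """Map a flat key into a numeric identifier space."""
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def ring(servers):
        """Place each server at a point on the same identifier circle."""
        return sorted((key_id(s), s) for s in servers)

    def locate(key: str, points):
        """Pick the first server clockwise from the key's position."""
        ids = [p[0] for p in points]
        idx = bisect(ids, key_id(key)) % len(points)
        return points[idx][1]

    if __name__ == "__main__":
        points = ring(SERVERS)
        for k in ("photo-001", "photo-002", "log/2010-09-23"):
            print(k, "->", locate(k, points))
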
About the speaker: Evgeniy is a Linux kernel enthusiast with a main interest in network and storage technologies.
KVM on Server Class PowerPC
by Alexander Graf
Thursday, 2010/09/23 16:30-17:15
Virtualization feels complicated. On x86 it even took an addition to the instruction set to make it fast, easily workable and fit for production use.

But what about the other platforms? Do other architectures have the same technical issues x86 has? Do they already have extensions to support virtualization? Are those necessary? Might virtualization actually be simple in the end?

This talk will give a technical introduction into the functionality of virtualization on non-embedded PowerPC processors using KVM, explaining how memory management and world switches work here.

About the speaker: Alexander started working for Novell about three years ago. Since then he has worked on fancy things like mkinitrd, qemu and KVM.

Whenever something really useful comes to his mind, he tends to implement it. Among his more well-known projects are Mac OS X virtualization, the modular SUSE initrd and nested SVM. He is also the official maintainer of KVM for PowerPC and Qemu for S390x.

On the publicly invisible side he worked on SUSE Studio backend parts, keeping appliance building secure and fast.

Scalability Layer hits the Internet Stack
by Martin Sustrik
Thursday, 2010/09/23 17:15-18:00
The current Internet stack addresses the issues of data transport, address location, network robustness etc.; however, it doesn't deal with scaling the applications written on top of it. There is no layer that would allow you to write your application and then painlessly scale it without the need to modify the code.

The 0MQ project (and the theoretical framework it is based on) attempts to provide just that. Stacked on top of TCP (or another transport layer protocol), it allows you to use a Berkeley-socket-like API to talk to multiple applications in parallel while remaining agnostic about their actual number and the individual TCP connections. Write a simple application with two communicating endpoints and scale it to Google-size dimensions by adding more endpoints and intermediary nodes, creating sophisticated topologies and routing algorithms, without modifying a single line of code.
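
A minimal sketch of the Berkeley-socket-like API using the pyzmq bindings; the endpoint and message contents are placeholders, and error handling is omitted.

    import zmq  # pyzmq bindings for 0MQ; an external dependency

    # REQ/REP is the simplest 0MQ pattern: the REQ socket can later be
    # pointed at a broker or at many REP endpoints without changing the code.
    def server(endpoint="tcp://*:5555"):
        ctx = zmq.Context()
        rep = ctx.socket(zmq.REP)
        rep.bind(endpoint)
        while True:
            request = rep.recv()          # blocks until a request arrives
            rep.send(b"echo: " + request)

    def client(endpoint="tcp://localhost:5555"):
        ctx = zmq.Context()
        req = ctx.socket(zmq.REQ)
        req.connect(endpoint)
        req.send(b"hello")
        print(req.recv())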

The talk will give some examples of the technology, then focus on basic concepts making the scalability possible.

About the speaker: Martin Sustrik has been working in the business messaging area for a decade. He works in standardisation bodies (AMQP), has co-authored two business messaging solutions (0MQ and OpenAMQ) and has written a number of papers on the topic.
megasas: An efficient SCSI HBA emulation for KVM/Qemu
by Hannes Reinecke
Thursday, 2010/09/23 17:15-18:00
In this talk I'll be presenting the 'megasas' SCSI HBA emulation for KVM.

Currently KVM only emulates a parallel SCSI HBA, which incurs a heavy performance/processing penalty, as the driver has to emulate the entire SCSI parallel protocol exactly, plus any device quirks the original hardware has.

The megaraid SAS controller, in contrast, uses a high-level, protocol-independent 'frame' interface, which avoids the time-consuming protocol emulation. Using this interface we can efficiently pass I/O between guest and host, taking the scatter-gather lists from the guest and passing them to the block driver backend. With this emulation we can forward SCSI devices from the host to the guest without modifications, allowing applications and systems to be moved transparently from a physical system to a virtual one. And Windows 7 installs out of the box, too.

About the speaker: Studied physics with a main focus on image processing in Heidelberg from 1990 until 1997, followed by a PhD at Edinburgh's Heriot-Watt University in 2000. Worked as a sysadmin during his studies, mainly at the Mathematical Institute in Heidelberg. Now working for SUSE Linux Products GmbH as a senior engineer with a focus on storage and mainframe support.

Linux addict since the earliest days (0.95); contributed various patches to get Linux up and running. Main points of interest are storage, (i)SCSI, and multipathing. And S/390, naturally. Plus occasionally maintaining the aic79xx driver. Recently got involved with KVM and SR-IOV.

The New Linux 'perf' Tools
by Arnaldo Melo
Friday, 2010/09/24 09:30-10:15
The perf events infrastructure is fast becoming the unifying channel for hardware and software performance analysis.

Modern CPUs have hardware dedicated to counting events associated with performance, special registers that allow pinpointing hotspots that can possibly be optimized.

Recent developments in the Linux kernel explore these features, solving problems found in previous attempts, such as OProfile.

The reuse of Linux kernel code in user space applications is part of the experiment of shipping user space tools in the kernel repository.

Lessons learned from this experience and future directions on providing code designed to be used, unmodified, in the kernel proper and on the user space tools in the tools/ directory will be discussed.

The goal is to reduce the barrier for coding in the kernel and in the user space tools, presenting the kernel developers with source code that looks like kernel code, using the familiar list, rbtree, error reporting, hash facilities and others as the perf experiment matures.

The following tools, among others that are in the works, will be presented:

perf record

Record specified events for the whole system, a CPU or some specific workload (threads started from the record session) for later, possibly offline, analysis.

perf report

Post processor that reads files created with 'perf record' and shows the results sorted as specified by the user (comm, DSO name, symbol name, etc), using simple stdio-based output or via a TUI that is integrated with annotation and callchain tree browsing.

perf annotate

Shows source code and/or binary disassembly prefixed by percentage of hits per instruction/source code line.

perf diff

Compares multiple recorded perf.data files to show differences between different versions of the workloads being analysed, or differences resulting from changes in app/kernel-specific tuning knobs.

perf probe

Dynamic probing using processor breakpoints, working with the other aspects of the perf infrastructure (callchains, report tools)

perf test

Regression testing infrastructure for the perf kernel and user space libraries infrastructure.

perf trace

Attempt to connect ftrace and perf.

Also discussed will be build-ids, cookies that are becoming widely available in modern Linux-based OSes and that can be used to uniquely identify a binary, and their use in providing offline report and annotation that works across multiple platforms.

About the speaker: Arnaldo has been involved with Linux since his University days, back in 1994.

One of the Conectiva founders, he was one of the lead developers of Conectiva Linux.

Worked on internationalization (i18n) of many free software tools.

Developed and maintained several Linux kernel drivers.

Maintained several legacy network protocols such as IPX, LLC and Appletalk.

Reworked the Linux kernel TCP stack so that lots of non TCP specific code could then be reused by other transport level protocols.

Developed the first DCCP (Datagram Congestion Control Protocol) protocol implementation to be present in a mainstream kernel.

He is the author of the dwarves set of tools that includes pahole, a tool for analysing data structures, used to optimize software such as the GNU libc and GCC, and used by the CERN Atlas project, among many others.

Now works for Red Hat in the Real Time group, working on tooling, and is one of the maintainers and major code contributors of the Linux kernel perf events tools.

Shared snapshots
by Mikulas Patocka
Friday, 2010/09/24 09:30-10:15
Shared snapshots enable the administrator to take many snapshots of the same logical volume. With existing non-shared snapshots, multiple snapshots are possible, but they are implemented very inefficiently --- a write to the logical volume copies the data to every snapshot separately, consuming disk space and I/O bandwidth. The new shared snapshots implement this efficiently: a write to the logical volume copies data to just one place, possibly sharing this piece of data among a large number of snapshots.

Shared snapshots enable continuous recording of system activity --- if a snapshot is created for example every hour, the administrator can access old system states at hourly intervals and analyze the changes.

Shared snapshots can save disk space in virtualized environments, the administrator can create one master image and clone this image for many virtual machines using shared snapshots. The data that is not being written by the virtual machines will occupy a shared space. Snapshots of snapshots are supported, so that snapshots of individual machines can be taken as well.

Shared snapshots can also be used to implement thin provisioning. In big virtualized environments, the provider may allocate larger volumes to the virtual machines than the physically present disk capacity. When a guest tries to write to a segment in the virtual machine, the system automatically allocates the segment and assigns it to the virtual machine.

Internally, shared snapshots are implemented as a B+tree with the block number and a range of snapshot IDs as the key. Because the storage uses ranges of snapshot IDs, it is scalable to an arbitrary number of shared snapshots. Consistency after a crash is provided by log-structured write semantics.
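
The keying scheme can be illustrated with a toy lookup over (block, snapshot-ID range) entries in Python; this only mimics the key-ordering idea, not the on-disk B+tree format, and all values are invented.

    from bisect import bisect_right

    # Each entry says: for this block, snapshots with IDs in [lo, hi] share
    # the data stored at `location`.  Entries are kept sorted by (block, lo),
    # the way a B+tree would order its keys.
    ENTRIES = [
        ((7, 1, 4), "chunk-A"),   # block 7, snapshot IDs 1..4
        ((7, 5, 9), "chunk-B"),   # block 7, snapshot IDs 5..9
        ((8, 1, 9), "chunk-C"),
    ]

    def lookup(block, snap_id):
        keys = [e[0][:2] for e in ENTRIES]            # (block, lo) pairs
        idx = bisect_right(keys, (block, snap_id)) - 1
        if idx >= 0:
            (blk, lo, hi), location = ENTRIES[idx]
            if blk == block and lo <= snap_id <= hi:
                return location
        return None                                    # not copied: read the origin

    print(lookup(7, 6))    # -> chunk-B
    print(lookup(7, 12))   # -> None
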

About the speaker: My name is Mikulas Patocka.

I work at Red Hat in the LVM team. I work mainly on snapshots, and I provide bug fixes for the old non-shared snapshot code. I developed the snapshot merging feature (it is already present in the current kernel and userspace tools) and now I am working on shared snapshots.

In my free time, I work on other non-commercial open source projects.

libtcr - Threaded Coroutine Library
by Philipp Reisner
Friday, 2010/09/24 10:45-11:30
The threaded coroutine library (libtcr) is a framework for writing network server applications. Its purpose is to enable developers to write servers making use of multiple CPUs/cores to handle requests of a single connection in a pipelined fashion. The library's core is a parallel main event loop to dispatch events on file descriptors based on the epoll(4), eventfd(2), and timerfd(2) system calls. For additional ease of use, lightweight user space thread switching is also a part of this framework. It requires a 2.6.25 or later Linux kernel.

Apart from its main purpose of writing pipelined applications with CPU cache locality in mind, the library also offers statements like parallel_for(;;) and parallel {}. Those statements allow a programming model in which potentially thousands of potentially parallel execution units may be used by the programmer.

The library maps those execution units (coroutines) to the available CPUs of the system, effectively creating an n:m threading model on top of the existing thread scheduling in Linux.
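
libtcr itself is a C library; as a rough single-threaded illustration of the epoll-style dispatch at its core, here is a Python sketch using the selectors module (the coroutine switching and the n:m mapping onto worker threads are not shown). The port number is a placeholder.

    import selectors
    import socket

    # One selector dispatches readiness events to per-connection handlers.
    sel = selectors.DefaultSelector()

    def accept(server_sock):
        conn, _addr = server_sock.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, echo)

    def echo(conn):
        data = conn.recv(4096)
        if data:
            conn.sendall(data)          # echo the request back
        else:
            sel.unregister(conn)
            conn.close()

    def main(port=9000):
        server = socket.socket()
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", port))
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ, accept)
        while True:
            for key, _events in sel.select():
                key.data(key.fileobj)   # registered callback: accept() or echo()

    if __name__ == "__main__":
        main()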

Libtcr is available under the terms of LGPL 2.1.

About the speaker: Philipp Reisner is CTO of LINBIT Information Technologies GmbH in Vienna. He was born in 1975 in Vienna, Austria. During his studies of computer science at the Technical University of Vienna (TU Wien), he developed the cluster solution DRBD®, which is meanwhile successfully used around the globe. DRBD was accepted into mainline Linux with the 2.6.33 release. Philipp is an internationally renowned OSS specialist, kernel programmer and eminent lecturer on high availability at international Linux events. He has given presentations at numerous Linux-Kongresses in Germany, at NLUUG and UKUUG conferences, at MySQL conferences, at LinuxCon 2009, and at several other conferences.
Tracking filesystem modifications
by Jan Kára
Friday, 2010/09/24 10:45-11:30
The problem of tracking file and directory changes (or generally any filesystem changes) appears in a host of applications. Filesystem backup, transparent caching, desktop search --- all these applications need to find what has changed in a given set of files as effectively as possible. In the paper we survey several current and historical frameworks to tackle the problem in Linux and introduce a new approach to this problem named recursive modification time proposed by the author.

The oldest way of solving the problem is a brute-force directory scan during which we compare file modification times to the time of the last scan. The advantages of this method are that it works for any filesystem and is reasonably easy to implement. The disadvantage is that we have to read the whole directory tree. We start our survey by describing the dnotify framework, which was the first approach to mitigate this problem. Next we describe the inotify framework, which was developed to replace dnotify because of dnotify's usability problems --- the most painful arising from the fact that to watch for changes in a directory, the directory has to be open. To end our survey we introduce fanotify, the newest notification framework, which was developed mainly to address the needs of antivirus scanners.
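
The brute-force baseline described above is easy to sketch in Python; the root directory and the persisted last-scan timestamp are placeholders.

    import os

    # Walk the tree and compare each file's modification time to the time of
    # the previous scan.  `last_scan` would be persisted between runs.
    def changed_files(root, last_scan):
        changed = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.stat(path).st_mtime > last_scan:
                        changed.append(path)
                except FileNotFoundError:
                    pass                  # file vanished between walk and stat
        return changed

    if __name__ == "__main__":
        import time
        print(changed_files("/etc", time.time() - 3600))  # changed in the last hour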

All the above-mentioned notification frameworks are event based --- i.e., when some modification happens, a listening application receives an event. This requires the application (or some dedicated daemon) to run while the changes are happening, although often an immediate reaction to the change is not needed. Also, if for some reason the application is not running while changes happen, the changes are lost. Another disadvantage of the current frameworks is that they require specifying all the directories in which changes should be watched. In a situation where lots of directories are involved, setting up all the watches takes a considerable amount of time.

To address these issues we propose a recursive modification time framework. The framework associates a persistent timestamp with each directory, which makes it possible to reliably decide whether anything in the subtree of this directory has changed. We also explain how this feature allows for a quite efficient iteration over the modified files in a given subtree. In the end we present some performance numbers for a sample use case.

About the speaker: Jan Kára is a software engineer at SUSE Labs, Novell. He got his PhD in theoretical computer science from Charles University in Prague. Currently he is an active developer in the filesystem area of the Linux kernel and the maintainer of disk quotas, the UDF filesystem, and the journaling block layer.
Universal Function Call Tracing
by Olaf Dabrunz
Friday, 2010/09/24 11:30-12:15
What do you do when a computer program fails?

You will probably get help when you write a bug report. But what happens when the developer cannot reproduce the bug?

Wouldn't it be great if you could simply see what the program is doing?

Function call tracing (FCT) does exactly this. It shows you which functions the program is executing.
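
As an analogy only (fctrace targets compiled programs, not Python), the same kind of information can be obtained for Python code with the interpreter's own tracing hook:

    import sys

    # Print every function call with its source location, similar in spirit
    # to what a function call tracer shows for compiled code.
    def tracer(frame, event, arg):
        if event == "call":
            code = frame.f_code
            print(f"call {code.co_name} ({code.co_filename}:{frame.f_lineno})")
        return tracer

    def helper(x):
        return x * 2

    def work():
        return helper(3) + helper(4)

    if __name__ == "__main__":
        sys.settrace(tracer)
        work()
        sys.settrace(None)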

There are already some solutions that provide function call tracing at some level, e.g. for the system call interface and for the internal kernel functions. But these tools come with limitations which make it difficult to use them in a hassle-free way, anytime and anywhere. And, they only work on some types of software (e.g. the kernel) or on some interface functions.

Wouldn't it be great to always have a single tool available that shows all function calls that you would like to see, even across program boundaries and into the kernel and back?

Fctrace is a tool under development that could provide this.

This talk gives an overview about the available tracing technologies and tools, their purposes and limitations. Then it shows how fctrace can overcome the limitations to provide a universal function call tracing tool. It can easily

  • be made available for all processors and platforms,
  • trace all kinds of compiled programs, including libraries and the kernel, and
  • work without any special support from the traced programs -- just installing fctrace is enough.
Finally, the current state of the project and the next steps are presented.
About the speaker: Olaf Dabrunz studied informatics, physics and Japanology at the University of Hamburg, Germany, and Japanology at both Waseda University and Sophia University in Tokyo. From 1997 he worked as a freelance security and systems consultant on several projects with IBM and EDS/Systematics. From 2003 to 2009 he worked on business computing for SUSE Linux, mainly on the IBM PowerPC-based distribution, the boot process, development tools, and the realtime kernel.
Log2fs or how to achieve 150,000 IO/s
by Jörn Engel
Friday, 2010/09/24 11:30-12:15
Being an open source developer sometimes yields unexpected rewards like free hardware in the mailbox. Having thus received a card capable of doing 150,000 IO/s on raw flash, there came a sobering moment: after adding logfs to the equation, only 50,000 IO/s remained. Still quite decent when compared to a venerable hard disk, but relying on such dodgy comparisons was hardly solace.

A quick investigation found that this and other performance problems were caused by the compression support in logfs. Improving matters requires significant changes to the filesystem format - significant enough to warrant giving the result a new name and, more importantly, to avoid introducing many new bugs into an existing filesystem.

About the speaker: Jörn Engel is a freelance kernel hacker. He has been involved in many areas, but focussed on LogFS for the last several years. While usually locked away in his underground lair, planning world domination - or at least domination of the flash storage sector - he does occasionally spend time to socialize at conferences and meet other supervillains.
systemd
by Lennart Poettering
Friday, 2010/09/24 13:30-14:15
systemd is a system and session manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit.
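
As an illustration of one of the mechanisms mentioned above, socket activation means that the init system creates and owns the listening socket and passes it to the daemon when needed. A minimal sketch of the daemon side using the sd-daemon convenience API is shown below (illustrative only; the header path and linking details depend on how sd-daemon is shipped on the system).

    /* Daemon side of socket activation: systemd creates the listening
       socket and passes it in as fd SD_LISTEN_FDS_START; the daemon only
       accepts connections on it. Error handling is kept minimal. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <systemd/sd-daemon.h>

    int main(void)
    {
        int n = sd_listen_fds(0);   /* how many fds did systemd pass us? */

        if (n != 1) {
            fprintf(stderr, "expected exactly one socket, got %d\n", n);
            return 1;
        }

        for (;;) {
            int c = accept(SD_LISTEN_FDS_START, NULL, NULL);
            const char msg[] = "hello from a socket-activated service\n";

            if (c < 0)
                continue;
            write(c, msg, sizeof(msg) - 1);
            close(c);
        }
    }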

In this presentation I will explain what systemd is, what it does, and why it is a substantial improvement over what existed before. I'll compare it in detail with the Upstart init system developed by Canonical, and explain why we chose to replace it instead of extending it. We'll discuss the pros and cons of both systems on a technical level.

About the speaker: Lennart Poettering works for Red Hat in the desktop group and wrote various components of the modern Linux userspace infrastructure such as Avahi and PulseAudio and is involved with various other projects. Together with Kay Sievers he designed and implemented systemd. He lives in Berlin, Germany. In his spare time he's an avid photographer, and sometimes even gets his pictures published in Siberian airline magazines.
OFS: An Offline File System based on FUSE
by Tobias Jähnel
Friday, 2010/09/24 13:30-14:15
Even with increasingly pervasive wireless networks, mobile users will not always be connected to network file servers. The cost of mobile phone connectivity and corporate security settings that do not allow remote access are only two reasons why a road warrior cannot access her data on the go.

This is where the Offline File System OFS comes into play. It is a Linux project that addresses the common problem of synchronizing data between server and notebook. OFS is not only a synchronization tool, but a full-featured mountable file system.

Nevertheless, OFS is not a complete file system implementation; instead, it extends virtually any networked file system by adding an offline layer between user applications and the networked file system. OFS's implementation relies solely on that offline layer on the client and does not require any modifications to the file server.

OFS is based on the Filesystem in Userspace (FUSE) framework and does not contain any kernel modules. OFS receives file operations via the FUSE device and redirects them to the previously mounted remote file system. Accessing the offline cache works in the same way; in this case, the underlying file system is located on the local hard drive. Depending on the availability of the server, OFS decides whether to redirect file operations to the mounted remote file system or to the local cache.
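
The redirection principle can be pictured with a stripped-down FUSE filesystem (this is not OFS code; the two backend paths and the online check are placeholders): every request is forwarded either to the remote mount or to a local cache directory.

    /* Not OFS itself -- a stripped-down illustration of the redirection
       principle: forward each request to the remote mountpoint while the
       server is reachable, otherwise to a local cache copy. */
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define REMOTE "/mnt/remote"     /* previously mounted network file system */
    #define CACHE  "/var/cache/ofs"  /* local copy used while offline */

    static int online = 1;           /* real code would probe the server here */

    static void backend(char *out, size_t len, const char *path)
    {
        snprintf(out, len, "%s%s", online ? REMOTE : CACHE, path);
    }

    static int ofs_getattr(const char *path, struct stat *st)
    {
        char real[4096];
        backend(real, sizeof(real), path);
        return lstat(real, st) < 0 ? -errno : 0;
    }

    static int ofs_open(const char *path, struct fuse_file_info *fi)
    {
        char real[4096];
        int fd;

        backend(real, sizeof(real), path);
        fd = open(real, fi->flags);
        if (fd < 0)
            return -errno;
        fi->fh = fd;
        return 0;
    }

    static int ofs_read(const char *path, char *buf, size_t size, off_t off,
                        struct fuse_file_info *fi)
    {
        int res = pread(fi->fh, buf, size, off);
        return res < 0 ? -errno : res;
    }

    static struct fuse_operations ofs_ops = {
        .getattr = ofs_getattr,
        .open    = ofs_open,
        .read    = ofs_read,
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &ofs_ops, NULL);
    }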

To reintegrate changes made in offline mode, all changes are logged and replayed once the connection to the remote file system is re-established. Since OFS is a client-only file system, an "update runner" needs to traverse the remote file system to detect changes.

OFS has been developed at the Georg Simon Ohm University Nuremberg and is also known as "Ohm File System".

About the speaker: Tobias Jähnel (www.jonmedia.net) earned his German diploma (a four-year degree) and a master's degree in computer science between 2003 and 2008 at the Georg Simon Ohm University Nuremberg.

His long-standing interest in Linux and networking motivated his diploma thesis on offline file systems. Besides an in-depth analysis of common network file systems focusing on disconnected-operation capabilities, he developed a novel theoretical approach to an offline file system. During his master studies, he implemented this approach together with two fellow students and released the project to the open source community.

After finishing his master's degree he joined the automotive industry, working for Elektrobit Automotive (www.elektrobit.com). He is engaged in high-level software engineering as well as low-level and embedded systems programming. His experience includes protocols and applications based on TCP/IP, CAN and FlexRay networks.

Additional speaker and author of the paper: Prof. Dr. Peter Trommler (Peter.Trommler@ohm-hochschule.de)

mcelog: Memory error handling in user space
by Andi Kleen
Friday, 2010/09/24 14:15-15:00
Server and HPC systems contain more and more memory to hold ever-growing data sets. But with more and larger DIMMs and more transistors in them, combined with larger clusters of systems, there are also more memory errors.

Modern server systems generally have ECC memory and other ways to detect and correct memory errors as far as possible in hardware. When the hardware corrects an error it generates a corrected-error event. These events can also be used by specialized software to prevent future failures.

mcelog is a user space backend for handling and reporting hardware errors that can use trends in corrected errors to implement specific error prevention algorithms. This includes offlining memory areas with too many corrected errors, triggering events for administrators, and more.
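
The offlining idea can be sketched roughly as follows (an illustration of the general approach, not mcelog's actual implementation; the soft_offline_page sysfs file assumed here is provided by kernels with hwpoison support, and the threshold is an arbitrary example value).

    /* Illustration of threshold-based page offlining, not mcelog's code.
       Corrected errors are counted per physical page; once a page exceeds
       the threshold, its address is written to the kernel's
       soft_offline_page interface (assumed available via hwpoison). */
    #include <stdio.h>
    #include <stdint.h>

    #define THRESHOLD 10      /* example: corrected errors tolerated per page */
    #define MAX_PAGES 1024

    static struct { uint64_t page; unsigned count; } table[MAX_PAGES];

    /* Ask the kernel to migrate data away from the page and stop using it. */
    static int soft_offline(uint64_t page)
    {
        FILE *f = fopen("/sys/devices/system/memory/soft_offline_page", "w");
        if (!f)
            return -1;
        fprintf(f, "%#llx\n", (unsigned long long)page);
        return fclose(f);
    }

    /* Called for every corrected-error event at physical address addr. */
    void corrected_error(uint64_t addr)
    {
        uint64_t page = addr & ~0xfffULL;   /* assume 4 KiB pages */
        int i, free_slot = -1;

        for (i = 0; i < MAX_PAGES; i++) {
            if (table[i].count && table[i].page == page)
                break;
            if (!table[i].count && free_slot < 0)
                free_slot = i;
        }
        if (i == MAX_PAGES) {
            if (free_slot < 0)
                return;                     /* table full: degrade gracefully */
            i = free_slot;
            table[i].page = page;
        }
        if (++table[i].count > THRESHOLD) {
            printf("too many corrected errors on page %#llx, offlining\n",
                   (unsigned long long)page);
            soft_offline(page);
            table[i].count = 0;
        }
    }

    int main(void)
    {
        /* Feed a synthetic burst of corrected errors at the same address. */
        int i;
        for (i = 0; i < 12; i++)
            corrected_error(0x12345678ULL);
        return 0;
    }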

This talk gives an overview of the algorithms and usage models of mcelog and describes what it can and cannot do. It describes how mcelog interacts with the kernel and what user interfaces are available.

In addition, some non-memory errors handled by mcelog are discussed, including core errors and generic errors.

About the speaker: Andi Kleen has worked on the Linux kernel for longer than he can remember. Originally he worked on networking, later on various other areas. He spent several years maintaining the x86-64 port and later the i386 architecture too. Andi also worked on NUMA, RAS, scalability and some other areas. He currently works for Intel's Open Source Technology Center and lives in Bremen, Germany.
Divide and conquer: Shared disk cluster file systems shipped with Linux
by Udo Seidel
Friday, 2010/09/24 14:15-15:00
If more than one server needs to access the same data at the same time, shared file systems are the way to go. Probably the oldest approach is that of the so-called network file systems; popular representatives are the Network File System (NFS) and the Andrew File System (AFS). Of younger age (10+ years) are the cluster file systems, which are based on shared storage. Each of the traditional Unix derivatives has or had its own implementation: AdvFS for Tru64, CXFS for IRIX, GPFS from IBM, and more. Since release 2.6.16 the Oracle Cluster File System 2 (OCFS2) has been part of the vanilla Linux kernel; Red Hat's Global File System (GFS) followed from 2.6.19 onwards. Since GFS and OCFS2 are shipped with the big enterprise Linux distributions, one can consider them data center ready. However, the question is which one to choose, and different boundary conditions, both technical and business, must be taken into account. On a high level both file systems have a similar architecture: there is a shared disk, either via a classical Fibre Channel SAN or via iSCSI; a cluster setup is needed as a framework for the file system; and data consistency is achieved via file locking and I/O fencing. But a detailed look shows significant differences. Also, the choice of the Linux distribution strongly influences which cluster file system options one is left with.

The talk will provide an overview and a comparison of the two shared disk cluster file systems shipped with the Linux kernel.

About the speaker: Dr. Udo Seidel started out as a teacher of Mathematics and Physics and has been a Linux fan since 1996. After his PhD studies on semiconductor clusters he started working as a trainer for Linux and Solaris. He then moved into the compute cluster field and worked as a system administrator in the automotive industry. His tasks were, on the one hand, the setup and support of the Linux, HP-UX and IRIX workstations for the pre- and postprocessing of computational results; on the other hand, he and his team mates were in charge of the Linux and HP-UX compute clusters themselves. Since 2006 he has worked for Amadeus Data Processing in Erding, where he leads a team of Unix/Linux system administrators responsible for the internal infrastructure services and the hosting business. In his leisure time he writes articles about Linux and related topics, plays badminton, and loves inline skating in the summer and skiing in the winter.
Resource Management in Linux with Control Groups (cgroups)
by Stefan Seyfried
Friday, 2010/09/24 15:30-16:15

Control groups have been implemented in the Linux kernel since version 2.6.24 and are part of current GNU/Linux distributions. Control groups make it possible to group processes and to manipulate their properties with the help of control group subsystems. They represent a new and consistent way to distribute system resources among processes in a systematic fashion.
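
In practice a control group is managed through a small filesystem interface. The following minimal sketch creates a group, lowers its CPU weight and moves the current process into it (the mount point /cgroup/cpu is an assumption -- the cpu controller may be mounted elsewhere, or not at all, on a given system).

    /* Minimal sketch of the cgroup filesystem interface: create a group,
       give it a lower CPU weight and move the current process into it. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%s\n", value);
        return fclose(f);
    }

    int main(void)
    {
        char pid[32];

        /* A group is just a directory below the mounted controller. */
        mkdir("/cgroup/cpu/demo", 0755);

        /* Halve the default CPU weight (1024) for members of the group. */
        write_file("/cgroup/cpu/demo/cpu.shares", "512");

        /* Move the current process into the group. */
        snprintf(pid, sizeof(pid), "%d", (int)getpid());
        return write_file("/cgroup/cpu/demo/tasks", pid) ? 1 : 0;
    }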

The presentation gives a quick overview of the topic, explains the buzzword cgroups and what is behind it, and shows which possibilities exist. Growing system resources and features such as NUMA make resource management in GNU/Linux more and more important.

About the speaker: Stefan Seyfried has been working full-time in various Linux areas for ten years. He started as a system administrator at SUSE Linux GmbH in Nuremberg. In 2004 he became a developer for mobile devices, hardware enablement and system integration, learning to analyze and eliminate problems everywhere from the boot loader to the desktop. In 2009 he worked as a developer for wireless technologies at Sphairon Access Systems, and since 2010 he has been supporting B1 Systems GmbH as a consultant and developer. When he is not virtualizing servers or finding solutions for tricky problems, he spends his spare time on miscellaneous embedded Linux systems.
Porting of IPv4 Applications to IPv4/IPv6 dual-stack
by Owen DeLong
Friday, 2010/09/24 15:30-16:15
A review of the need for IPv6 support in client-server applications, followed by a review of the author's methodology for making these changes, with running-code examples in C, Perl, and Python.

IPv6 is probably the single biggest code update required since Y2K. Given the far-reaching implications of this transition and the speed with which it must occur, this is a very timely subject.

The author first wrote a simple client and server application in native IPv4 in each language. Subsequently, he ported each of those clients and servers to dual-stack and now shows the differences in the resulting code and provides tips, techniques, and lessons learned from the exercise so that other developers may expedite their porting processes.
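
In C, the heart of such a port is typically the move from gethostbyname() and a hard-coded struct sockaddr_in to the protocol-agnostic getaddrinfo() loop. A minimal dual-stack client connect might look like the sketch below (illustrative only, not the example code from the paper).

    /* Minimal dual-stack TCP connect using getaddrinfo(): the same code
       works for IPv4 and IPv6 because no address family is hard-coded. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int dual_stack_connect(const char *host, const char *service)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6, whatever resolves */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, service, &hints, &res) != 0)
            return -1;

        /* Try each returned address until one connect() succeeds. */
        for (ai = res; ai; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;   /* connected socket, or -1 on failure */
    }

    int main(int argc, char **argv)
    {
        int fd = dual_stack_connect(argc > 1 ? argv[1] : "localhost",
                                    argc > 2 ? argv[2] : "80");
        if (fd < 0) {
            fprintf(stderr, "connection failed\n");
            return 1;
        }
        close(fd);
        return 0;
    }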

About the speaker: Owen DeLong is an IPv6 Evangelist at Hurricane Electric and a member of the ARIN Advisory Council. In these roles, he is keenly aware of the dwindling IPv4 resource pools and the need to get as much of the internet as possible to add IPv6 capabilities to their networks prior to IPv4 runout.
The userspace solution for control groups
by Dhaval Giani
Friday, 2010/09/24 16:15-17:00
For a few years now Linux has had support for arbitrary grouping of processes and for applying certain operations to those groups, such as CPU bandwidth control, memory limiting and I/O bandwidth control, in the form of control groups.

Userspace developers are interested in exploiting these new capabilities, for example for process classification and process tracking. There have been various challenges in these attempts, not the least of which is the virtual filesystem interface that control groups provide.

The control group filesystem, being an in-memory filesystem, is not persistent. This means it needs to be configured nearly every time it is mounted, at the very least on every boot. Control groups, due to their design, also provide a great deal of flexibility in their use. This, however, brings about a number of issues for a programmer, the most important being that the programmer may now need to be aware of the internals of other subsystems, even though their application does not use them.

To make life easier for programmers, libcgroup was created. libcgroup provides features such as a usable API, persistent configuration, and a classification engine. It falls short, however, in other aspects, such as making programming entirely subsystem independent.
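
To illustrate the API side, attaching the current task to an existing group could look roughly like the sketch below (a rough sketch of the libcgroup C API; exact calls, semantics and error handling should be checked against the libcgroup documentation, and the group "demo" is assumed to already exist).

    /* Rough sketch of the libcgroup API as an alternative to poking the
       cgroup filesystem directly; link with -lcgroup. */
    #include <stdio.h>
    #include <libcgroup.h>

    int main(void)
    {
        struct cgroup *cg;
        int ret;

        ret = cgroup_init();              /* discover the mounted hierarchies */
        if (ret) {
            fprintf(stderr, "cgroup_init failed: %d\n", ret);
            return 1;
        }

        cg = cgroup_new_cgroup("demo");   /* in-memory handle for group "demo" */
        if (!cg)
            return 1;
        cgroup_add_controller(cg, "cpu"); /* we care about the cpu subsystem */

        ret = cgroup_attach_task(cg);     /* move the current task into "demo" */
        if (ret)
            fprintf(stderr, "attach failed: %d\n", ret);

        cgroup_free(&cg);
        return ret ? 1 : 0;
    }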

This paper looks at the authors' success in providing some of the missing features through libcgroup, at the complexities of features yet to be implemented, such as subsystem independence, and at their plans for tackling these issues.

About the speaker: Dhaval Giani is with the RETIS lab at Scuola Superiore Sant'Anna in Pisa, where he is involved in research on real-time systems, more specifically real-time schedulers and QoS. He maintains the libcgroup project along with a few other developers.

In the past, he has been involved in the group scheduling extensions of the CFS scheduler and has helped out a bit with the attempt to provide hard limits for the group scheduler.

Jan Safranek works as a developer and package maintainer at Red Hat, Inc., mainly on system management software and protocols (SNMP, IPMI). He is part of the Net-SNMP core team and is involved in other projects such as libcgroup.

In the past he worked as a software engineer, tester and architect in the telco industry.

rsyslog: going up from 40K messages per second to 250K
by Rainer Gerhards
Friday, 2010/09/24 16:15-17:00
Within just a two-year period, rsyslog has become the default syslogd on all leading Linux distributions. Rsyslog was developed to provide a flexible syslog solution scalable from low-end systems to busy datacenters. Besides feature richness and flexibility, one of the project's core goals is to provide support for massively parallel systems and high message rates. Early versions of rsyslog v4 could process around 40,000 syslog messages per second (mps), while later versions of v4 went up to around 100,000 mps. The v5 engine has been further improved, with some users claiming processing rates around 250,000 mps.

One of the core components inside rsyslog is the queue manager. It is the component that handles almost all multiprocessing while messages traverse the rsyslog engine. Among other things, the queue manager manages worker thread pools and synchronization between workers. In order to achieve the desired performance improvements, various optimizations to the queue manager and its helper entities were made. The most important optimizations were design changes.
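
The general pattern behind such a queue manager -- a bounded in-memory queue feeding a pool of worker threads, with each worker dequeuing messages in batches to cut down on lock operations per message -- can be sketched as follows (a generic illustration of the pattern, not rsyslog code).

    /* Generic illustration of a bounded queue feeding a worker pool, with
       batch dequeueing to reduce lock operations per message.
       Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define QSIZE   1024
    #define BATCH   64
    #define WORKERS 4

    static const char *queue[QSIZE];
    static int head, tail, count;
    static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;

    /* Producer side: called by the input threads for every message. */
    static void enqueue(const char *msg)
    {
        pthread_mutex_lock(&lock);
        while (count == QSIZE)
            pthread_cond_wait(&nonfull, &lock);
        queue[tail] = msg;
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    /* Worker: grab up to BATCH messages under one lock acquisition, then
       process them outside the lock. */
    static void *worker(void *arg)
    {
        const char *batch[BATCH];

        for (;;) {
            int n = 0, i;

            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&nonempty, &lock);
            while (count > 0 && n < BATCH) {
                batch[n++] = queue[head];
                head = (head + 1) % QSIZE;
                count--;
            }
            pthread_cond_broadcast(&nonfull);
            pthread_mutex_unlock(&lock);

            for (i = 0; i < n; i++)
                printf("processed: %s\n", batch[i]);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[WORKERS];
        int i;

        for (i = 0; i < WORKERS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < 100; i++)
            enqueue("a syslog message");
        pthread_join(tid[0], NULL);   /* workers run forever in this sketch */
        return 0;
    }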

In this talk, we will provide insight into the relevant parts of the rsyslog architecture and describe where the initial design failed. We will describe those changes that really made a difference and include notes on all the tiny bits that must not be overlooked. Rsyslog is a quite active project and we are far from being satisfied with the performance level we have reached; we sincerely believe there is lots of room for improvement. We will also describe some of the new ideas currently being considered and how we expect them to affect overall performance. And, of course, we will also describe known shortcomings of the current v5 engine. Last, but not least, we will mention which changes in the traditional user perception of syslog message processing, namely with respect to message sequence, are vitally necessary to achieve high message rates. And we will, of course, describe why these somewhat frightening-sounding changes are not a real departure from traditional processing.

We consider this talk to be important because many applications, at least in userland, are trapped in old perceptions and single-threaded paradigms. We hope that by providing insight into the good and evil of rsyslog and its evolution, we may help others move their programs to multi-threaded paradigms. We would like to warn that some of our findings sound pretty basic, yet they are still common design problems that are overlooked often enough.

About the speaker: Rainer Gerhards is the founder of Adiscon GmbH, a German consultancy and software house. He created the rsyslog project in 2004 and has been its main contributor up to today. Since 1981 he has been developing system-level or close-to-system-level software on various platforms, including mainframes, Windows and Linux. He also provides consulting in the system infrastructure area. Over the past 15 years, Mr. Gerhards has become a renowned syslog expert. He participated in the IETF syslog standardization effort, where he used rsyslog as a test bed for some of the standards.
From Source Code to Packages for Various Distributions
by Andreas Jaeger
Thursday, 2010/09/23 10:45-11:00
We'd like to present our experiences with building packages for multiple distributions with multiple packaging formats using the openSUSE Build Service. This is a great way for FOSS projects to deliver binary packages to their users and to get feedback about them.
Further, we'd like to explain how the Build Service can be used to build not only single packages but also software stacks and complete distributions. FOSS projects are collaborative - and the Build Service supports collaboration between developer groups in multiple ways, from working together on packages, to building against the binaries of other projects, to automatically applying changes to source code, to a request and review system for offering changes to different groups.
We are currently implementing support that gives users of external software repositories such as CPAN, SourceForge or QtCreator an easy and automated way to build software for multiple distributions and architectures, thus bringing collaboration from the level of packagers and distribution creators to the original software authors.
We also want to show how the Build Service is used by other developers, researchers, ISVs and larger companies that develop their own software or Linux OS - and how they collaborate.
About the speaker: Andreas Jaeger, openSUSE Program Manager for Novell, has been contributing to Linux for over a decade. He ported the GNU C Library to x86-64 and led the development effort to port Linux to the new 64-bit x86-64 architecture. He was responsible as project manager for the openSUSE distributions and is now part of the openSUSE team, where he is also involved with the openSUSE Build Service.

Comments or Questions? Mail to contact@linux-kongress.org Last change: 2010-09-22