Rule Set Based Access Control (RSBAC) Security Extension
Wednesday, Nov 28, 13:00-17:30, Room 1
Amon Ott, Compuniverse
You can find details about RSBAC at http://www.rsbac.org.
Wednesday, Nov 28, 13:00-17:30, Room 2
Ralf Spenneberg
Linux and IPSEC: Building and administering an interoperable VPN
This short course will cover the basics of the implementation of a virtual private network using Linux and FreeS/Wan.
The need for virtual private networks grows as more business is done over the internet. Many vendors address this need, but few people know about the alternative open source solutions.
In this talk I will start with a brief overview of the IPSEC protocol and the different vendor implementations. I will emphasize the different key management mechanisms and their interoperability with Linux FreeS/Wan.
In the second part, the steps to obtain a fully working Linux FreeS/Wan installation will be covered. The patching, compilation and installation of the Linux kernel and the userspace tools will be explained.
The implementation of an automatically keyed IPSEC tunnel will be shown. After explaining the configuration, the tunnel will be started and tested. Tools for troubleshooting will be demonstrated.
Modern DNS servers have the ability to provide PKI information. The use of this information for authentication in IPSEC tunnels will be shown, and the above tunnel will be reimplemented using public keys provided by a bind9 DNS server. I will explain the configuration changes in FreeS/Wan and the implementation of the resource record in the bind9 DNS server.
Since VPN solutions are often used to allow travelling salesmen to connect securely to the intranet, a roadwarrior scenario will be explained and implemented. This scenario will allow numerous clients to connect to the intranet. The clients will be authenticated using public keys and will use strong encryption. As an example, the implementation of a client using PGPNet/Windows98 and X.509 certificates will be demonstrated.
The placement of the VPN gateway is crucial. Therefore we will discuss the different possibilities for placing the VPN gateway and for implementing stateful firewall rules using iptables to secure the VPN gateway and the intranet.
The intended audience of this short course is Linux/UNIX administrators wishing to implement an interoperable VPN solution using Linux. Experience with other virtual private networking software such as PPTP is helpful but not mandatory. Knowledge of Linux and the TCP/IP protocol is required.
Firewalling with Linux 2.4.x using netfilter/iptables
Wednesday, Nov 28, 13:00-17:30, Room 3
Paul "Rusty" Russell
One of the major advantages of the new Linux 2.4.x kernel series is the new packet filtering / NAT / packet mangling subsystem, called iptables.
Iptables is the successor to ipchains and ipfwadm from the 2.2 and 2.0 kernels. Major new features are stateful firewalling, extensibility and better NAT (Network Address Translation) support.
The tutorial will be presented by two of the netfilter core team members, Paul "Rusty" Russell and Harald Welte.
Wednesday, Nov 28, 13:00-17:30, Room 4
Jes Sorensen
This tutorial will look at how the kernel is structured and provide general guidelines for kernel programming. In particular it will focus on the issues you run into when writing device drivers. The tutorial will, amongst other things, cover the following:
It is expected that the audience has a basic knowledge of Unix / Linux internals and is comfortable with C programming. The tutorial will focus on the upcoming 2.5 development kernel series.
About the author
Jes has been working on the Linux kernel for more than seven years, the last three as the maintainer of Linux/m68k. He used to work at the European Laboratory for Particle Physics (http://www.cern.ch/), where he worked on very high performance networking, Linux clusters and Linux/ia64. This included writing Linux device drivers for Gigabit Ethernet and HIPPI (High Performance Parallel Interface, an 800 Mbit/sec supercomputer network). Later Jes worked for Linuxcare Inc. in Canada in their Research Group. Jes now works for Wild Open Source Inc. in Canada (http://www.wildopensource.com), where he continues to work on Linux/ia64, high speed networking and other low level Linux issues.
A Decade of Linux: Linux Past, Present and Future
Thursday, Nov 29, 10:15-11:00, Room CC1
Theodore Ts'o, Thunking Systems
This talk will reflect on the past ten years of Linux, starting with a retrospective of Linux history, so we know where we have come from, moving on to an examination of where the Linux and Open Source communities are today, and concluding with the challenges we need to face in the future.
Some of the topics discussed will include Open Source and Legal Issues, Open Source and Business Models, the need to make Open Source Software easier to use, and the relationship between Open Source and Proprietary Software.
Future directions in Linux Packet Filtering (What we've learned...)
Thursday, Nov 29, 11:30-12:15, Room CC1
Paul "Rusty" Russell
One of the major advantages of the 'new' Linux 2.4.x kernel series is the new packet filtering / NAT / packet mangling subsystem, called iptables.
Despite the huge amount of time spent on the design and development of this versatile, extensible infrastructure for packet filtering, there are still a number of unresolved issues left.
Now that the netfilter/iptables implementation is in widespread use, the remaining issues have to be taken care of. Some can be addressed during 2.4.x; others do not seem viable before 2.5.x, mostly because they conflict with structures inherent in the current design.
At least we can promise that the future of Linux firewalling will break with a long-lasting tradition: starting with kernel 1.2.x, every major kernel version has had a new firewalling subsystem, with new concepts to be understood by administrators and new command-line interfaces to be learned.
The 2.5.x kernel will not introduce such a new interface. The current netfilter/iptables infrastructure will undergo internal changes and extensions, but in particular the iptables command will not change, neither in syntax nor in semantics.
Stay tuned for the latest news from the netfilter developer workshop taking place exactly two days before Linux Kongress.
The talk will be presented by two of the netfilter core team members, Paul "Rusty" Russell and Harald Welte.
FAI - Fully Automatic Installation of Debian GNU/Linux
Thursday, Nov 29, 11:30-12:15, Room CC2
Thomas Lange, Universität zu Köln
Installing an operating system can be tedious if you have to install more than one computer. For each installation, you have to answer many questions about the configuration of the installation, and each question has to be answered over and over again for every computer. Would you like to type the root password fifty times when installing a medium-sized Beowulf cluster? This type of installation does not scale at all.
FAI is a tool for the fully automatic installation of Debian GNU/Linux. It is possible to install and configure the whole operating system and the applications without any manual interaction. FAI is thus a scalable method for installing many computers at once.
If you take a brand new computer, the local hard disks will be partitioned, and filesystems will be created and mounted. Then the operating system software and all desired applications are installed on the new system. Finally, the system is configured to meet your local needs.
This is all done using a central configuration space on an install server. Using the class concept of FAI, multiple hosts can share the same parts of the configuration. So, no configuration information has to be defined twice. The installation can be highly customized by defining functions which are easily hooked into the installation process.
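The class concept described above can be sketched in a few lines of Python. This is illustrative only: FAI itself stores configuration in plain files per class, and all class and setting names below are invented. The point is simply that later classes override earlier ones, so no setting has to be defined twice:

```python
def resolve_config(host_classes, class_configs):
    """Merge per-class settings in order: later classes override
    earlier ones, so shared values are defined exactly once."""
    config = {}
    for cls in host_classes:
        config.update(class_configs.get(cls, {}))
    return config

# Hypothetical classes for one cluster node.
class_configs = {
    "DEFAULT": {"timezone": "Europe/Berlin", "install_server": "faiserver"},
    "BEOWULF": {"extra_packages": ["lam-runtime", "rsh-client"]},
    "NODE42":  {"hostname": "node42"},
}

print(resolve_config(["DEFAULT", "BEOWULF", "NODE42"], class_configs))
```

All fifty cluster nodes would share DEFAULT and BEOWULF, with only a tiny per-host class on top.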
The homepage for FAI is http://www.informatik.uni-koeln.de/fai/.
About the Author
Thomas Lange has been a system administrator since 1992 at the Institute of Computer Science at the University of Cologne. Most of his work is done by shell and Perl scripts, which handle the installation and administration of several Sun Solaris workstation clusters. Since 1999 he has been developing the automatic installation for Debian GNU/Linux.
Class Based Queueing & Iptables for packet slicing & dicing
Thursday, Nov 29, 12:15-13:00, Room CC1
Bert Hubert, PowerDNS/Netherlabs
Linux contains very powerful traffic management techniques. It implements full Class Based Queueing semantics, which allow the operator to manage and partition traffic beyond what even ATM and Frame Relay offer.
Class Based Queueing is a powerful concept which is badly documented in the Linux world (and outside), possibly because few CBQ implementations exist. This is also due to kernel geniuses who concentrate on coding, not on ease of use.
In this presentation Bert Hubert will outline the basics of CBQ, and the Linux version - something that most probably has not been done before.
Also mentioned in this presentation: interaction with iptables, NAT, policy routing.
Focus is on the semantics of CBQ and how to use it in the real world. Real world examples include usage of Linux CBQ by a United Nations mission.
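To give a feel for the semantics the talk covers, here is a toy Python model of the central CBQ idea: each class has a guaranteed rate, and capacity left idle by one class can be borrowed by the others in proportion to their guarantees. This is only an illustration of the sharing principle, not the kernel's actual CBQ estimator:

```python
def share_bandwidth(link_rate, classes):
    """Toy model of CBQ sharing: every class gets its guaranteed rate,
    and capacity left idle by one class is lent to the others in
    proportion to their guarantees.

    classes maps name -> (guaranteed_rate, wanted_rate)."""
    granted = {name: min(want, rate) for name, (rate, want) in classes.items()}
    spare = link_rate - sum(granted.values())
    borrowers = {n for n, (rate, want) in classes.items() if want > granted[n]}
    while spare > 1e-9 and borrowers:
        total_guarantee = sum(classes[n][0] for n in borrowers)
        handed_out = 0.0
        still_hungry = set()
        for n in borrowers:
            rate, want = classes[n]
            offer = spare * rate / total_guarantee   # proportional lending
            take = min(offer, want - granted[n])
            granted[n] += take
            handed_out += take
            if want - granted[n] > 1e-9:
                still_hungry.add(n)
        spare -= handed_out
        if handed_out < 1e-9:
            break
        borrowers = still_hungry
    return granted

# 10 Mbit/s link: web guaranteed 6, mail 2, bulk 2; mail is mostly
# idle, so web and bulk borrow its unused share in a 6:2 ratio.
print(share_bandwidth(10.0, {"web": (6.0, 9.0),
                             "mail": (2.0, 0.5),
                             "bulk": (2.0, 4.0)}))
```

In the real kernel this partitioning is configured with the tc tool on a per-interface class hierarchy, but the arithmetic of guarantees and borrowing is the part administrators need to internalize.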
Optimising the Idle Loop
Thursday, Nov 29, 12:15-13:00, Room CC2
Stephen Rothwell, IBM OzLabs - Linux Technology Centre
While auditing the running processes on my laptop, I came to the realisation that knowing when particular files are updated is a common problem for many daemons. In all the cases I noticed, the solution used is to poll the files and/or directories containing them using the (f)stat(2) system call. This results in several processes on an otherwise idle system waking up at intervals varying from a few minutes right down to 1 second and stat(2)ing files.
In the past this has seemed a reasonable solution as using up idle time would not have appeared to be a problem and there was no other way to do the job anyway. Today, however, with laptops proliferating and battery technology not keeping up with the power requirements of modern CPUs (and all the gadgets we "need" installed), maximising the idle time on a machine can be very important. Also, we now have the tools necessary to signal asynchronous events when files and directories are modified.
This paper will discuss alternate methods, including directory notification and file leases, that can be used to eliminate the polling loops in some common programs and the effects that these can have on the power consumption of laptops (and other computers for the benefit of the Californians in the audience :-)).
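The contrast with polling can be illustrated with Linux's directory notification interface, which is reachable from Python's fcntl module. The sketch below (Linux-specific, and a simplification of what the daemons mentioned above would do) registers for dnotify events on a directory and is woken by a SIGIO signal instead of stat(2)ing in a loop; file leases (fcntl F_SETLEASE) provide a similar push-style mechanism for individual files:

```python
import fcntl
import os
import signal
import tempfile
import time

events = []

def on_dir_change(signum, frame):
    # SIGIO arrives when something in the watched directory changes.
    events.append(time.time())

signal.signal(signal.SIGIO, on_dir_change)

watch_dir = tempfile.mkdtemp()
fd = os.open(watch_dir, os.O_RDONLY)

# Ask the kernel to raise SIGIO on file creation/modification in the
# directory; DN_MULTISHOT keeps the registration active after the
# first event (dnotify, available since Linux 2.4).
fcntl.fcntl(fd, fcntl.F_NOTIFY,
            fcntl.DN_CREATE | fcntl.DN_MODIFY | fcntl.DN_MULTISHOT)

# Trigger an event: create a file in the watched directory.
with open(os.path.join(watch_dir, "hello.txt"), "w") as f:
    f.write("hi")

# Wait briefly for signal delivery -- note there is no stat() loop
# hammering the filesystem; an idle process would simply sleep.
deadline = time.time() + 2
while not events and time.time() < deadline:
    time.sleep(0.01)

print("change detected:", len(events) > 0)
```

A daemon written this way stays blocked until the kernel tells it something changed, which is exactly the property that lets an otherwise idle laptop stay idle.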
About the Author
Stephen Rothwell has been interested in and using Linux since 1991. He is the kernel maintainer for the Advanced Power Management driver and Directory Notification code and has an interest in the file leases implementation.
Network Emulation with Netfilter
Thursday, Nov 29, 14:15-15:00, Room CC1
Fábio Olivé Leite, Conectiva S.A.
Taisy Silva Weber, II/UFRGS
This talk will present the great flexibility and extensibility of Netfilter, and especially how it can be extended with very untraditional modules for packet matching and mangling, turning it into a quite powerful communication fault injection and network emulation tool. Building on the experience obtained with the ComFIRM tool for the 2.2 kernel, a set of Netfilter modules called ComFIRMv2 is about to be released by the author, coupled with a graphical interface that can manipulate those modules in order to create several specific or arbitrary network failure scenarios.
Such controlled failure scenarios can then be used to validate robust network applications that need communication protocols with characteristics such as reliability, atomicity and ordering. Experimental validation is a key part of robust and secure network application design, but such tools are neither abundant nor easy to find.
It is expected that the ComFIRMv2 tools will be used to raise the quality of current and forthcoming open source distributed/cluster computing applications. The ComFIRM tool, on which this new tool builds, was created as part of the main author's MSc course in the Informatics Institute of the Federal University of Rio Grande do Sul, Brazil (II/UFRGS), under the supervision of Taisy Silva Weber.
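The kind of fault injection described above can be sketched in a few lines of Python. This toy injector is purely illustrative (ComFIRMv2 itself operates on real packets inside the Netfilter hooks): it probabilistically drops, duplicates or corrupts packets so that a protocol's reliability and ordering guarantees can be exercised:

```python
import random

def inject_faults(packets, drop_p=0.1, dup_p=0.05, corrupt_p=0.05, seed=42):
    """Toy communication-fault injector: each packet is dropped,
    duplicated or corrupted with the given probabilities. A fixed
    seed makes a failure scenario reproducible."""
    rng = random.Random(seed)
    out = []
    for pkt in packets:
        r = rng.random()
        if r < drop_p:
            continue                        # packet lost
        if r < drop_p + dup_p:
            out.extend([pkt, pkt])          # packet duplicated
            continue
        if r < drop_p + dup_p + corrupt_p:
            out.append(pkt[:-1] + b"?")     # last byte corrupted
            continue
        out.append(pkt)                     # delivered unharmed
    return out

pkts = [b"msg%d" % i for i in range(10)]
print(len(inject_faults(pkts)))
```

Running a protocol's test suite against such a deliberately hostile channel is what turns "should be reliable" into an experimentally validated claim.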
About the authors
Fábio Olivé Leite is a member of the Conectiva High Availability development team. He is currently finishing an MSc course on Fault Tolerance, and has a BSc degree in Computer Science as well as a Technician degree in Industrial Electronics. He has published a few works on Fault Tolerance and Distributed Computing, and enjoys working with reliable communication, clusters and other distributed cool stuff.
Taisy Silva Weber, Doctor in Informatics from the University of Karlsruhe, Germany, is an Associate Professor in the Informatics Institute of UFRGS, Brazil, and advised the work on ComFIRM.
Linux as an embedded platform
Thursday, Nov 29, 14:15-15:00, Room CC2
Peter De Schrijver, Mind Linux Solutions
Linux is rapidly becoming an attractive alternative to existing embedded OSes. The paper will start by describing what a typical embedded system looks like and in which parts of the system Linux can be used as an operating system. It will then proceed by explaining how to set up a development platform. The next step is to adapt the kernel to the target platform. This mainly consists of writing (or porting) a bootloader, adapting the kernel to hardware specifics such as memory layout, interrupt controllers and timers, and writing device drivers for custom hardware.
Using Linux in an environment which has a limited amount of RAM, and boots or runs from flash, requires some special care in setting up the system. There are a number of approaches, varying from running everything from RAM using a ramdisk to using execute-in-place and a flash filesystem to run as much as possible from ROM. The appropriate choice is based on the amount of RAM you have, how many writes are necessary, what kind of write access the application needs, and so on. The paper will investigate these options and try to define some guidelines on when to choose what.
About the author
Peter De Schrijver has been working on various bits of the Linux kernel for quite some time now. Probably best known is his work on the token ring code and device drivers. But he also contributed code to the Linux/PPC and Linux/m68k projects. He worked three years on a large public switching system for Alcatel. His main areas of expertise were device drivers and time critical code which handled speech path setup and teardown. He worked about two years on broadband internet access systems at Alcatel and co-authored the recent RFC on DHCP reconfiguration. Today he works on Linux for embedded systems, combining his experience with Linux and embedded systems.
Wireless networking with Linux and IEEE 802.11b
Thursday, Nov 29, 15:00-15:45, Room CC1
David Gibson, IBM
Most of the common 802.11b cards are supported under Linux, but the support is not entirely mature and consolidated. The two most common types of card (both based on Intersil's Prism II chipset, but using different firmwares) are each supported by several different drivers. However, each driver supports a different subset of any particular card's functionality. Only a few drivers support some of the more specialised functionality of the wireless devices, such as acting as an AP or passively monitoring a wireless network. Progress is slowed by the fact that the chipset and firmware vendors have provided little information to open source developers about the specifications of the MAC controller and its firmware.
This paper will cover the state of support for the 802.11 protocol and 802.11b devices under Linux. In particular, the internals of the existing kernel code will be discussed, with a particular focus on the "orinoco" device driver.
About the author:
David Gibson is an employee of the IBM Linux Technology Center, working from Canberra, Australia. He is the author and maintainer of the orinoco driver for Prism II based 802.11b cards using the Lucent/Agere, Intersil or Symbol firmware. He has also done some work on embedded PPC machines, ramfs (as included in the -ac kernel tree), and a userspace implementation of checkpoint/resume.
Thursday, Nov 29, 15:00-15:45, Room CC2
Felix von Leitner, Convergence
Standards conformance, scalability and a broad and rich API have made the GNU C Library the first choice for all major Linux distributions. However, this comes at the price of a high memory and disk footprint, which is an important concern for embedded systems and boot and rescue floppies.
The diet libc is a new libc for Linux that specifically aims for a low memory and disk footprint. While vital for embedded systems, this also benefits servers greatly; especially servers with many small processes (mostly email, ftp and http servers) will require much less memory to operate at peak efficiency.
This talk will also briefly show a few spin-off projects like an init, a getty+login, and rewritten versions of common shell utilities like cp, ln, chown and ls.
The diet libc project can be found at http://www.fefe.de/dietlibc/.
Thursday, Nov 29, 17:00-17:45, Room CC1
Jeff Dike
User-mode Linux (UML) is the port of Linux to Linux. It implements a virtual Linux machine running in a set of Linux processes. It is a full-blown Linux kernel, implementing its own scheduler and VM system. It relies on the host Linux kernel only for emulating hardware. UML has a full set of virtual devices, including consoles and serial lines, a block driver, and a network driver. It runs the same binaries as the host, and is capable of executing almost anything that will run on the host (exceptions are things which are highly hardware-specific, such as emulators and installation procedures).
UML virtualizes system calls by using the system call interception facility of ptrace, virtual memory and separate address spaces with mmap and mprotect, uses the context switching of the host kernel to implement its own process context switches, and implements hardware faults with Linux signals. The console and serial line drivers can be attached to almost anything on the host which makes sense, including traditional pseudo-terminals, pts terminals, xterms, virtual consoles, and already-opened file descriptors. The network driver supports both networking with the host through slip, Ethertap, and TUN/TAP devices and completely virtual networking through either a switch daemon or a multicast network. The block driver can be attached to files, usually containing a filesystem image or a swap signature, and to device nodes such as full disks, partitions, CD-ROMs, and floppies to provide access to the host's devices.
UML has a number of applications as a virtual machine, including kernel debugging and development, jailing and sandboxing, virtual hosting, education, and experimenting with new kernels, new distributions, and new types of networking. More recently, it has become apparent that UML has potential beyond being a simple virtual machine. It is possible to spread a single UML instance across multiple physical hosts, creating a new kind of cluster. It is also possible to use UML to turn the Linux kernel into a normal application library. Applications linking against this UML library would gain a number of interesting capabilities that were not available before.
The paper will describe the design and implementation of UML, including recent additions and work that remains to be done. It will also talk about the applications of UML, both the ones that are currently being done and those that I expect it to have in the future.
About the author(s)
Jeff Dike is the author and maintainer of User-mode Linux. He is an MIT graduate, a former DECcie, and currently hacks away in rural NH, USA.
Friday, Nov 30, 9:30-10:15, Room CC1
Theodore Ts'o, Thunking Systems
Stephen Tweedie, Red Hat
This paper will discuss existing features in the ext2 and ext3 filesystems, and planned improvements to these filesystems. In some cases (directory indexing, tail merging, relaxed metadata layout, extended attributes/access control lists) the enhancements exist in patches that need to be integrated into the mainline sources. In other cases (extent maps, V2 inode structure, high watermarking/contiguous block allocation), the design of the enhancements will be laid out.
There will also be discussion of how the dependencies between the various enhancements allow development in parallel, where the dependencies require serialization in the development process, and how the codebase can be factored to allow for easier development of new filesystem features.
Lire: integrated analysis of all your int(er|ra)net services
Friday, Nov 30, 9:30-10:15, Room CC2
Egon Willighagen, Stichting LogReport Foundation
Joost van Baal
Stichting LogReport Foundation is an organization that wants to stimulate and facilitate the use of the information found in log files. One of its projects is the development of an integrated tool to analyze log files for many inter- and intranet services, like DNS, Email, FTP, and WWW, and provide the manager with a one-click interface to the information in those logfiles. The tool under development is called Lire.
The Lire project is being developed as an interface to the log files. And not just one, but as an interface to the log files of all services running on an internet server. Or more than one server, for that matter. And it is not limited to just one program: you can use several mail programs and all get analyzed. The output of a Lire analysis gives a transparent overview of the latest activities of your services.
Lire makes this possible by using a modular and pluggable architecture with XML as one of its key technologies. Plugins can be added for new log formats and new services. Plugins can also be added to report the information you want to see. And plugins can be added to give you the output format you like most, whether that is plain text, RTF, DocBook or PDF. Lire's architecture makes all this possible.
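The plugin idea can be sketched as follows. This is a hypothetical Python illustration (Lire's real plugin interface and all names below are different/invented): new log formats register themselves with the core, which can then report on any supported service without modification:

```python
PARSERS = {}

def log_parser(fmt):
    """Registration decorator: the core never changes; support for a
    new log format is just another registered parser."""
    def register(fn):
        PARSERS[fmt] = fn
        return fn
    return register

@log_parser("common")          # simplified Apache common log format
def parse_common(line):
    host = line.split(" ", 1)[0]
    return {"service": "www", "host": host}

@log_parser("syslog-mail")     # simplified mail-server syslog line
def parse_mail(line):
    return {"service": "email", "host": line.split()[3]}

def summarize(lines, fmt):
    """Count events per service -- a stand-in for Lire's reports."""
    counts = {}
    for line in lines:
        record = PARSERS[fmt](line)
        counts[record["service"]] = counts.get(record["service"], 0) + 1
    return counts

print(summarize(['1.2.3.4 - - [28/Nov/2001] "GET / HTTP/1.0"'], "common"))
```

The same separation applies on the output side: report and format plugins consume the normalized records, so DocBook versus PDF output is a plugin choice, not a core change.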
DRBD on Linux 2.4.x
Friday, Nov 30, 10:15-11:00, Room CC1
Philipp Reisner, LINBIT
HA Clusters (in Fail-over configuration) are common practice when a server with mirrored disks (RAID) cannot offer the desired availability for the application. HA Clusters can offer better availability because they can mask failures other than hard disk failures.
DRBD is a device driver for Linux which allows you to build clusters with distributed mirrors, so-called "shared nothing" clustering. This architecture not only has the advantage that the physical distance between the two copies of the data can be magnitudes greater than with RAID sets, it is also (magnitudes) cheaper than clusters with shared disks.
In order to provide shared-disk-like semantics to applications like databases and journaling file systems, DRBD has to guarantee correct order of write operations and correct resynchronisation in case of recovery after degraded operation.
DRBD analyses write-after-write dependencies on the sending node to give the disk scheduler on the standby node the maximum freedom in writing the blocks to the storage subsystem, without compromising the write order imposed by the file system.
At cluster restart DRBD uses generation counters to find the node with the up-to-date data, and to determine if the standby node's disk needs to be resynchronised to the primary node.
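The generation-counter idea can be illustrated with a small Python sketch. This is a toy model; DRBD's actual on-disk counters and comparison rules are more involved. Each node keeps a tuple of counters bumped on events such as becoming primary or restarting after degraded operation, and at cluster restart the tuples are compared to pick the resynchronisation direction:

```python
def pick_sync_direction(local_gen, peer_gen):
    """Compare generation-counter tuples at cluster restart:
    equal tuples mean both copies are current; otherwise the
    lexicographically larger tuple identifies the newer data."""
    if local_gen == peer_gen:
        return None                      # data identical, no resync needed
    if peer_gen > local_gen:             # tuple comparison: newer peer wins
        return "resync from peer"
    return "resync to peer"

print(pick_sync_direction((3, 1), (3, 1)))
print(pick_sync_direction((2, 1), (3, 1)))
```

The important property is that the decision is made from persistent metadata alone, with no need to compare the block devices themselves.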
While DRBD has been available for Linux 2.2.x based systems, the upcoming release will support Linux 2.4.x. With the new kernels, LVM devices and RAID devices are supported, and improved throughput is possible since Linux's block device interface has been generalised.
DRBD-based clusters have been used in production by various organisations since June 2000. They are used to build highly available databases, file servers and mail servers.
About the author
Philipp Reisner graduated from the Vienna University of Technology in computer science in 2000. Since graduation he has been employed at the biggest provider of professional Linux services in Austria: CUBiT IT Solutions.
Since November 2001 he has been CEO at LINBIT, a provider of professional Linux services with a focus on high availability clustering.
Apart from DRBD, he is the author of mergemem, a modification of Linux's memory management system.
TimeWalker, a tool to visualize eventdata
Friday, Nov 30, 10:15-11:00, Room CC2
Theo de Ridder, Prometa Ratum bv
Many systems produce huge amounts of timestamped data (events), like logs from system calls, time-series from network monitoring or transactions from database applications.
In practice, eventdata is often thrown away without any inspection. Some of the main reasons are: waste of resources, poor data formats, non-scalability of traditional tools, and the lack of an adequate visual instrument.
However, throwing away eventdata unseen implies losing essential information needed to discover cause-effect relations within (un)wanted or (un)expected system behaviour. TimeWalker is a tool that makes preservation and disclosure of the historical details contained in eventdata attractive and feasible.
The implementation and user interface are made very flexible and portable by using wxPython and C. The first release of TimeWalker will become available (under a GPL licence) in November 2001 for Win32 and Linux. In the first release, TimeWalker will work smoothly with about 500,000 records in memory that represent individual events or aggregated events collected from much larger (GB) datasets.
Data handling
TimeWalker unifies arbitrary event formats into a format that is suitable for very fast storage, aggregation and transformation.
Aggregation is the process of compressing arbitrary event lists into a fixed-time interval sequence containing a single composite record in each interval. With user-specified expressions, important correlations can be preserved during aggregation. Aggregated records can be transformed with user-specified expressions into values to be plotted.
The clean syntax and semantics of Python are used for all expressions at the user level. Some specific internal techniques are used to improve the performance of the produced byte-code drastically.
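The aggregation step described above can be sketched in ordinary Python. This is an illustration of the concept, not TimeWalker's implementation: events are bucketed into fixed-time intervals, and a user-specified expression compresses each bucket into one composite record:

```python
from collections import defaultdict

def aggregate(events, interval, expression):
    """Compress a list of (timestamp, value) events into one composite
    record per fixed-time interval. The 'expression' argument plays
    the role of TimeWalker's user-specified Python expression."""
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[int(ts // interval) * interval].append(value)
    return {start: expression(values)
            for start, values in sorted(buckets.items())}

# Five events over three minutes, aggregated per 60-second interval.
events = [(1, 10), (2, 30), (65, 5), (70, 7), (130, 1)]
per_minute = aggregate(events, 60, lambda vs: {"count": len(vs), "max": max(vs)})
print(per_minute)
```

Because the expression runs per bucket, counts, maxima or any correlation the user cares about can be preserved even as millions of raw events collapse into a plottable sequence.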
Visualizing techniques
TimeWalker uses an innovative technique for information-visualisation along the time-axis that enables simultaneous presentation of context and detail of eventdata in a range from 40 years down to 5 minutes. The technique is based on a sliding hierarchical ZoomLens that shows a bundle of multiple beams with predefined (quarter, week, day, hour, 5 min) time-scales. The zoomlens can be shifted by hand or by starting an animation.
The graphical user interface as a whole is carefully designed for quick pattern recognition by a regular user. Each part has a fixed place, there is no scrolling, the information density is high, scaling and coloring are automatic, and there is (almost) no static or redundant (textual) information.
Apart from the graphical main window there are also a number of frames for browsing or manipulating configuration data, metadata, raw data, documentation, and even (parts of) the reflective running environment.
There is a general data collector, with derivations available for some common data formats like syslog. Experience has shown that creating and testing a new collector can be done within one day.
The use of expressions for aggregations and transformations is in itself not complicated, but making the right choices requires domain knowledge as well as some experience with the resulting visual effects.
TimeWalker aims to support visual data mining of huge amounts of non-filtered eventdata. It should be considered a multi-focal looking glass, complementary to the limitations of the usual spreadsheet way of (statistical) data reduction.
About the author
Having played all possible roles during 30 years in the software engineering landscape, the author ended up painting enduring and esthetic software patterns using Python as his pencil.
Lustre: The intergalactic file system for the national labs?
Friday, Nov 30, 11:30-12:15, Room CC1
Peter J. Braam, Cluster File Systems
The National Labs and the NSA are seeing data centers with tens of thousands of servers, petabytes of storage and requirements to move hundreds of gigabytes over the wire each second. And they are not alone: the genomics, broadcasting and movie industries are following with a similar set of requirements.
The Lustre project appears to offer answers here. It started as an exploration of new file system designs between Seagate and the author. Subsequent work with the National Labs gave much further direction and input to the project, to handle both general purpose file sharing needs and the specialized computational needs of scientific computation.
The underlying technology is an overhaul of the storage stack into a smarter, object-oriented model. On-drive computing and a programmable storage interface are two of the main features. In the area of cluster file systems, we looked at new locking techniques. Prototypes of several components were implemented on Linux.
In this talk we will give an overview of those aspects of Lustre's file system handling that are fundamentally different from what is currently available. This involves scalable file I/O, parallel I/O handling and the metadata protocol. These features will support extremely high I/O throughput and provide server offload.
In the metadata protocol, the adaptive behaviour under server load, with operation-based locking and write-back caching, will be explained in detail. This protocol will allow very large numbers of clients to operate on the same directory, but also allow clients to proceed with aggressive write-back caching when contention is low.
Rule Set Based Access Control (RSBAC) Security Extension
Friday, Nov 30, 11:30-12:15, Room CC2
Amon Ott, Compuniverse
Extended Abstract
The Rule Set Based Access Control (RSBAC) system is an open source security extension to current Linux kernels, which has been under continuous development for several years. The current stable version 1.1.2 was released on 27 August 2001.
RSBAC was designed according to the Generalized Framework for Access Control (GFAC) to overcome the deficiencies of access control in standard Linux systems, and to make a flexible combination of security models as well as proper access logging possible.
Access control is divided into enforcement, decision and data structures, and all access modes are grouped into abstract request types. Also, the controlled object types include interprocess communication as well as devices (not only device special files).
The abstraction makes the framework and the existing model implementations easily portable to other operating systems.
Among the nine access control models currently included are well known ones, like MAC/Bell-LaPadula, as well as new models which have been specially designed for *nix server needs. In particular, the complex and powerful Role Compatibility model and the Access Control Lists model provide fine grained control over all objects in the system, while the Authorization model easily controls the user IDs used by all programs.
Installation requires a kernel patch, RSBAC configuration and a recompile. The complete set of administration tools contains a range of menus for most tasks.
Practical experience shows the system to be fast and stable in production use, which is one reason for its growing acceptance. There are already two Linux distributions with RSBAC included, and a lot of server systems running it.
In the next major release, 1.2.0, real network access control will be provided, and the whole access control data handling subsystem will be restructured and optimized.
About the author
Amon Ott
Globally Distributed Content
Friday, Nov 30, 12:15-13:00, Room CC1
Horms (Simon Horman), Verge Networks (http://vergenet.net/)
About the author(s)
Horms (Simon Horman) is a senior software engineer at VA Linux Systems, Australia, working on load balancing and high availability projects, having recently transferred from VA Linux Systems, USA. Prior to this he was the senior technician at Zip World, an ISP in Sydney, Australia. He moved to Zip World after a stint at Open Systems Integrators, where he implemented the network management system for the Optus cable TV network. For his honours thesis in computer science at the University of New South Wales he worked on using genetic algorithms to schedule the university examination timetable. His main interest is computer networks and in particular how they make information accessible to people.
Proposal of a standard token API for Linux
Friday, Nov 30, 12:15-13:00, Room CC2Matthias Bruestle
What is a token? A token is a personal cryptographic device that we can easily carry around with us. The most common token is the smart card, which fits into a wallet like a credit card.
Currently the usage of these tokens under Linux is very limited, but many applications could profit from such a token, e.g. GnuPG, OpenSSL, PAM or encrypted filesystems. The speaker has analysed about a dozen applications and will provide an overview.
Some of these applications support smart cards, but most implementations are hacks: one application supports one type of card with one reader, and a second application supports another type of card with another reader. Using a standard API would be a great simplification. An application developer would only have to target this API, and the application could instantly use all cards and readers for which someone has written a driver for this API.
It would be possible to develop a new API, but this is a lengthy process. It may also turn out, three applications and five drivers later, that something important has been missed. On the other hand, there are already APIs for interfacing with tokens, such as the CSPs of the Microsoft Crypto API. In the speaker's opinion, the best of the existing APIs for this purpose would be PKCS #11. It has been in use for some years and is very flexible, but it is complex. The advantages, drawbacks and properties of PKCS #11 will be presented in more detail in this talk.
About the authorMatthias Bruestle is doing his Ph.D. thesis in computer chemistry and works as a consultant and developer in the smart card field.
Actual Status and Future Strands of the Linux LVM
Friday, Nov 30, 14:15-15:00, Room CC1Heinz Mauelshagen, Sistina Software, Inc.
Tracking 2.5 - what the kernel hackers are up to
Friday, Nov 30, 14:15-15:00, Room CC2Jonathan Corbet, LWN.net
The 2.5 kernel development series will, presumably, have been underway for some time when the Linux-Kongress is held in November. This talk will cover the developments which have been included so far, or which look to be included shortly. User-visible changes will be discussed, but there will also be considerable emphasis on internal API changes of interest to kernel developers.
About the author(s)
Jonathan Corbet is a co-founder of LWN.net, and has been the author of LWN's kernel coverage since the beginning; he thus keeps a closer eye than many on what the kernel developers are working on. He is a coauthor of Linux Device Drivers, Second Edition, published by O'Reilly & Associates in July, 2001, and released to the community under the GNU Free Documentation License. Mr. Corbet lives in Boulder, Colorado, USA, but makes regular pilgrimages to Italy to visit his in-laws.
Speech Processing with VRIO and Linux
Friday, Nov 30, 15:00-15:45, Room CC1Dieter Kranzlmueller, GUP Linz
Ingo Hackl, GUP Linz
The omnipresence of computers in our everyday life is steadily increasing. Driven by many research and industrial efforts in ubiquitous and pervasive computing, and the general tendency of exponential growth in computing, lots of features and benefits for humankind and society are promised for the near future. Yet, compared to science fiction literature and films, the interface between the human user and the machine is still lagging behind. While SF imagines a natural interaction with the computer comparable to everyday human-to-human conversation - as e.g. in Arthur C. Clarke's "2001: A Space Odyssey" between the HAL 9000 computer and the human astronauts - today most actual interaction is still based on some kind of mechanical device, e.g. mouse or keyboard.
This deficit is addressed by the VRIO appliance, a combination of hardware and software tools for integrating speech processing into arbitrary applications. Using a black-box approach, VRIO can be integrated into any computer system infrastructure by connecting it to a network, integrating it via its application programming interface, and configuring it for processing. The total costs of the system have been kept to a minimum by relying on commodity off-the-shelf components and open source software, such as the Linux operating system and IBM's ViaVoice speech processing software development kit. The result is an interesting speech processing prototype at affordable cost, which even exceeds our expectations in terms of performance and usability.
Practical examples of VRIO's application include an installation in the CAVE environment and a demonstration of "natural system administration". The former demonstrates the use of speech processing in Virtual Reality, where traditional interaction is often difficult due to the multiple dimensions and the specialized input devices. The latter shows the potential of speech processing for arbitrary tasks of everyday life, even though the example is situated in a computing environment. Many more practical applications of speech processing are imaginable, some of which are currently being investigated. While VRIO may seem a niche product at present, its affordable price and flexibility may be its key success factor, which in turn could only be achieved by relying on a Linux/open source solution.
About the author(s)
Dieter Kranzlmueller is an assistant at the GUP Linz, Joh. Kepler University Linz. His research interests are parallel computing and computer graphics. His first encounter with Linux was ten years ago, when he switched from SCO Xenix on an Intel 80286 to one of the pre-1.0 Linux kernels.
Ingo Hackl is studying computer science at the Joh. Kepler University Linz, and started to work on the VRIO speech processing appliance during his required practical course in January 2001. Having used Linux for quite some time, he found the idea for VRIO quite appealing.
OST, The home entertainment software platform
Friday, Nov 30, 15:00-15:45, Room CC2Johan Scott, Nokia
OST (Open Standards Terminal)
The Home Entertainment Software Platform
Today, no widely accepted open standard or free implementation for a home entertainment platform exists. The Open Standards Terminal (OST), hosted at ostdev.net, is an attempt to create such a standard for digital TV, gaming and IP-enabled services based on open source components. This is needed so that application developers can take advantage of the new possibilities of converging media and technology without being limited by the current standards of digital TV or the embedding problems of desktop systems. Devices that can take advantage of the OST platform are embedded devices whose constraints make desktop or server software inappropriate, such as set-top boxes, internet terminals, game consoles and PDAs.
The OST platform is built from different open source projects and aims for compliance with, and flexibility towards, new open standards. The main open source components are the Linux operating system, the XFree86 window system and the Mozilla web browser framework. Standards that typical OST-enabled devices aim to comply with include web standards, Java, DVB (digital video broadcasting) and MHP, a Java-based interactive TV application standard.
The basic concepts that the OST platform introduces are abstract applications, modules, and the navigator, an implementation of the user interface to application management.
The implementation of a specific type of application is handled through an interface for an abstract set of applications, i.e. a middleware for a specific application format. This entity is called an application environment, and it implements the API translation from the specific application API to the OST application framework and module APIs. The application environments are designed to coexist simultaneously; this way arbitrary application standards can be supported, e.g. different types of interactive TV standards, even those that were not designed for coexistence. The general principle for resource management on the OST is that the navigator grants actions or resolves conflicts, so that it is always able to define the behavior of the system.
Examples of application environments are native applications, Mozilla-based applications and MHP-compliant applications. The Mozilla application environment hosts applications built on the Mozilla framework, like web browser applications, while the MHP application environment implements the MHP middleware specification.
The modules in the OST are dynamic servers that provide extended functionality and a native API to applications and application environments as shared libraries. They are needed for several reasons: to provide a high-level abstraction of the hardware, to share information and resources between applications, and to integrate those resources into the OST resource management framework. DVB devices, for example, need modules that support these tasks.
In short, the OST technology has been conceived as a way to converge the desktop computer world and the high-end digital TV world, with a platform that brings together the best of the two and provides a powerful application abstraction.
Linux, the next steps
Friday, Nov 30, 16:00-17:00, Room CC1Jon "maddog" Hall
A lot has happened in the past year. Linux companies have disappeared or become significantly smaller, yet reports state that the number of Linux servers, supercomputers and embedded systems is growing rapidly.
Maddog will explain why this phenomenon is happening, and what the next steps should be to move Linux into the marketplace.
|Martin Schulte||Last updated: Wednesday, 2003-12-24 12:00:00 CET|