The Internet is an insecure place. Many of the protocols used on the Internet do not provide any security. Tools to "sniff" passwords off the network are in common use by system crackers. Thus, applications which send an unencrypted password over the network are extremely vulnerable. Worse yet, other client/server applications rely on the client program to be "honest" about the identity of the user who is using it. Other applications rely on the client to restrict its activities to those which it is allowed to do, with no other enforcement by the server.
Some sites attempt to use firewalls to solve their network security problems. Unfortunately, firewalls assume that "the bad guys" are on the outside, which is often a very bad assumption. Most of the really damaging incidents of computer crime are carried out by insiders. Firewalls also have a significant disadvantage in that they restrict how your users can use the Internet. In many situations, these restrictions are simply unrealistic and unacceptable. Yet firewalls do have a place in some situations. However, it is very important that you understand their limitations.
This course will provide you with an understanding of the real-world network security issues that you will need in order to build a secure network and secure client/server applications. It will cover Kerberos, public key cryptography, firewalls and other security techniques.
Theodore Ts'o has been working with computers for the past 20 years, and has been using the Internet for the last 10 years. A graduate of the Massachusetts Institute of Technology, Theodore is a member of the Internet Engineering Task Force, where he currently serves on the Security Area Directorate, helping to shape the security architecture for the Internet. He is currently leading the development team at MIT working on Kerberos, a distributed network security protocol used on the Internet.
In addition to his network security work, Theodore is also one of the core Linux kernel developers, having started using Linux back in 1991, during the days of the 0.10 kernel. Today, he is responsible for the tty drivers and the serial layer, and he is also the maintainer of the filesystem utilities package for the ext2 filesystem. He is also the maintainer of the tsx-11.mit.edu ftp server, and is a member of the technical board of Linux International.
The tutorial is aimed at ISDN beginners as well as at those with some initial experience who are now also interested in the further configuration of the overall system (e.g. sendmail).
The tutorial is practice-oriented. Not every fundamental concept and feature will be discussed in detail; instead, after the tutorial the participant will have a correspondingly configured machine, or at least the foundations for one.
The tutorial uses the S.u.S.E. Linux 5.2 distribution; see http://www.suse.de/. Other distributions (Debian, RedHat, ...) can of course be used as well. If needed, the necessary scripts will be installed. See
Participants should have basic Linux knowledge. Anyone bringing their own machine (which we strongly recommend) should already have completed the base installation successfully.
Furthermore, a supported ISDN card should be installed. An AVM Fritz classic, for example, is recommended. See http://www.suse.de/Support/sdb/isdn.html for a list of supported cards.
The following task will be solved: a Linux machine with an ISDN card is to become an Internet access machine (IZR). Whenever a connection is requested, the machine automatically dials in to the Internet service provider (ISP) and establishes the network connection transparently. Users at this workstation then have full access to the Internet and can use, for example, WWW and FTP services. The mail system is set up so that e-mail is exchanged automatically whenever a connection is established.
Since this is a dial-up line, special attention is paid to keeping telephone costs as low as possible while still providing full Internet access.
To keep a clear thread, the following assumptions are made; they hold for most private users (but also for small companies that only use a private Internet connection):
These prerequisites apply, for example, to T-Online or Personal-Eunet.
In addition, security-related questions, problems with dynamic IP numbers, and the connection of a local network to the IZR will be discussed.
This tutorial will introduce the various firewalling techniques and then provide a tour of their implementations for Linux.
We will start with a general introduction to firewalls, discussing the various types of threats and defenses, general architecture issues, etc. After this, we will delve into the several alternative kernel-level implementations of packet filters, network address translation, etc. The differences and commonalities will be discussed, and a general indication of future directions given. Then, we will have a look at some of the user-level software for Linux which builds on these low-level facilities to provide the actual useful functionality of a firewall system, e.g. (transparent) proxy servers, Socks, the various administration tools, etc.
The tutorial is meant to give a somewhat technically oriented view of firewalls on Linux, focussing on technical implementation aspects, design strengths and weaknesses of alternatives and future developments, more than on general firewall usage.
The use of the Linux model of software development and the "Open Source" approach promises to reduce the cost of operating system development, but there are other, more hidden factors that also affect the cost of computing and might reduce this cost to near zero. If this is true, who can benefit from this cost reduction, and how?
Bio:
Jon "maddog" Hall is a Senior Manager in Digital's UNIX Software Group. Jon has been in the UNIX group for fifteen years as an engineer, Product Manager and Marketing Manager. Prior to Digital, Jon was a Senior Systems Administrator in Bell Laboratories' UNIX group, so he has been programming and using UNIX for over 19 years.
In addition to Jon's work with Digital UNIX, Jon is also Executive Director of Linux International, a non-profit organization dedicated to promoting the use of Linux, a freely distributable re-implementation of the UNIX operating system. Digital is the first system vendor to join Linux International, and is a Corporate Sponsoring Member. Jon is directly responsible for the port of Linux to the Alpha processor.
Jon started his career programming on large IBM mainframes in Basic Assembly Language, but his career improved dramatically when he was introduced to Digital's PDP-11 line of computers as chairman of the Computer Science Department at Hartford State Technical College. There he spent four glorious years teaching students the value of designing good algorithms, writing good code, and living an honorable life. He has also been known to enjoy discussing aspects of computer science over pizza and beer with said students.
maddog (as his students named him, and as he likes to be called) has his MS in Computer Science from RPI, his BS in Commerce and Engineering from Drexel University, and in his spare time is writing the business plan for his retirement business:
There are a number of places on a Unix system where program invocations cross security boundaries. Examples include the cron daemon, man programs which cache manpages, printing subsystems that try to avoid copying files, mail delivery to user mailboxes, WWW CGI scripts provided by ordinary users, rsh/rlogin, user-mount, and many others.
At the moment this is usually handled using setuid programs, which has a number of disadvantages: you can't easily draw the security boundary where you want it; the setuid program really has to be written in C (with attendant buffer overrun problems et al); and sanitising the environment (not just the environment variables, but things like umasks, ulimits, and all the other guff that children inherit) to make it safe to execute is very difficult.
I present a replacement which works as follows: you design your program so that the security boundary is exactly at a program invocation, and replace the setuid call with a call via the new `userv'[1] facility. userv is responsible for properly enforcing the security boundary, so that the called parts of your program can trust their PATH, ulimits, controlling tty, et al. userv has to be secure, but it only has to be written once. Furthermore, the client/server model means you don't have to worry so much about resetting all the many inherited properties of processes.
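To make the difficulty concrete, here is a minimal, illustrative C++ sketch of the housekeeping a conventional setuid helper has to do by hand before it can safely execute anything on behalf of a less trusted caller; the function name and the exact list of resets are invented for this example and are not taken from userv, which performs this kind of cleanup once, centrally, on the server side.

    // Illustrative only -- not userv code.  Everything the caller can set up
    // (umask, resource limits, signal dispositions, stray file descriptors,
    // environment variables) is untrusted and has to be reset explicitly.
    #include <unistd.h>
    #include <signal.h>
    #include <sys/stat.h>
    #include <sys/resource.h>

    static void sanitize_and_exec(const char *prog, char *const argv[])
    {
        umask(022);                              // the caller's umask is untrusted

        struct rlimit nocore = { 0, 0 };
        setrlimit(RLIMIT_CORE, &nocore);         // ...and so are its rlimits
        // (a real helper would reset every limit it relies on)

        for (int sig = 1; sig < NSIG; ++sig)     // inherited signal dispositions
            signal(sig, SIG_DFL);

        for (int fd = 3; fd < 256; ++fd)         // stray file descriptors
            close(fd);

        // Discard the caller's environment entirely and build a known-good one.
        char env_path[] = "PATH=/usr/bin:/bin";
        char env_ifs[]  = "IFS= \t\n";
        char *const envp[] = { env_path, env_ifs, 0 };

        execve(prog, argv, envp);
        _exit(127);                              // exec failed
    }

Even this list is incomplete (controlling tty, working directory, and so on), which is exactly the argument for enforcing the boundary in one audited place rather than in every setuid program.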
In many of the Unix subsystems where calls cross security boundaries userv can be used to avoid having to write security-critical code as C programs which run largely in an untrusted environment.
For example, a number of security problems have been found in traditional `cron' implementations. A userv-based cron would not need to run as root, because it doesn't have to have the ability to run arbitrary commands as any user, only to be able to invoke crontab entries; nor does the `system' part have to be written in C.
A userv-based caching man program would not need to trust the user to format manpages correctly before putting them in the cache; nor would it have to trust the user to supply manpages which do not contain dangerous troff directives.
Other programs such as rsh/rlogin which need closer interaction with the user are not suitable for userv, because userv isolates them too much.
In my talk I'll describe the aims, structure and (briefly) usage of userv, and will present example designs for userv-based cron, lpr and manpage cache programs. I'll discuss userv's strengths, weaknesses and limitations.
[1] userv is short for `user services', and is pronounced `you-serve'.
Design [and implementation] of a free, portable library, written in C++, that will provide a set of data structures and routines for the development of game applications.
Description - Portability
This library will consist of three sets of classes:
Code written for 1) and 2) will be platform independent and will follow the rules of portable coding. It will be written in ANSI C++ and will compile with all GCC variants (including DJGPP and linux-gcc).
Game development
A game will be described by a system of objects derived from the standard library objects and designed by the user of the library. Startup and execution will take place under the control of a kernel-like library object, which will be aware of hardware events and will send messages to the game objects. Objects will take life from the 'ticking' of the kernel's clock and do what they are programmed to do by the user of the library (and author of the game).
Since all the hard stuff will be handled by the library, a game programmer just has to find a pretty way of describing the era of the game so that it can be controlled by the library (of course, this is not always easy work).
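The library itself is planned in C++. Purely to illustrate the tick/message model described above, the following sketch shows a kernel-like object driving registered game objects; all class and method names here are invented for the example and are not the library's actual interface.

    // Illustrative sketch only -- not the proposed library's real API.
    // A kernel-like object turns hardware events and clock ticks into
    // messages delivered to every registered game object.
    #include <cstddef>
    #include <vector>

    struct Event { int type; int data; };        // e.g. key press, timer expiry

    class GameObject {
    public:
        virtual ~GameObject() {}
        virtual void tick() = 0;                 // called once per kernel tick
        virtual void handle(const Event &) {}    // hardware events, if wanted
    };

    class Kernel {
        std::vector<GameObject *> objects;
    public:
        void attach(GameObject *o) { objects.push_back(o); }

        // One iteration of the main loop: deliver pending events, then let
        // every object "take life" from the tick of the kernel's clock.
        void run_once(const std::vector<Event> &pending) {
            for (std::size_t i = 0; i < objects.size(); ++i) {
                for (std::size_t j = 0; j < pending.size(); ++j)
                    objects[i]->handle(pending[j]);
                objects[i]->tick();
            }
        }
    };

The game author would derive their own objects from such base classes and put the game logic into the tick and event handlers, while the kernel object hides the hardware.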
Data for the game (like images, sounds, music files etc) will be loaded and handled either by the kernel (with a little script) or by the user (with his code).
Data files will be stored in large archive files which will be handled automatically with library functions.
A look at library classes
Abstract classes: SCREEN, KBD, MOUSE, JOY, SND, SAMPLE, MUSIC...
Classes: GAME, WIN, THING, THINGS, OBJS, ARCHIVE ...
Classes: PIC, FILM, PAL, FNT, SPRITE, CONTROL, FORM ...
IEEE-1394 (FireWire) is a high-performance but low-cost serial bus. It was designed as the technology to supersede SCSI, to connect consumer multimedia devices (digital camcorders, video recorders, ...) and printers, and to serve as a base for industrial applications. There are also projects to make IP over 1394 possible.
The Linux IEEE-1394 (FireWire) Driver Development Project wants to add 1394 support to the fantastic Linux operating system. We want to create a structure which allows easy plug-in of new hardware drivers (mainly PCI-to-1394 host adapters) as well as plug-in of high-level services (e.g. video, disks, printing, etc.).
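As a purely hypothetical illustration of such a plug-in structure (the project's real interfaces may look completely different, and every name below is made up), low-level host adapter drivers and high-level services could register small operation tables with a 1394 core, which then routes packets between them:

    // Hypothetical sketch of a plug-in layering for a 1394 subsystem;
    // host_ops, protocol_ops and the register functions are invented names.
    #include <cstddef>

    struct host_ops {                 // supplied by a host adapter driver
        const char *name;
        int (*transmit)(const void *packet, std::size_t len);
        int (*bus_reset)();
    };

    struct protocol_ops {             // supplied by a high-level service
        const char *name;             // (video, disks, printing, ...)
        void (*receive)(const void *packet, std::size_t len);
        void (*bus_reset_notify)();
    };

    static const host_ops     *hosts[8];
    static const protocol_ops *protocols[8];
    static int nhosts, nprotocols;

    int register_host(const host_ops *ops)
    {
        if (nhosts == 8) return -1;
        hosts[nhosts++] = ops;         // the core now knows how to reach the card
        return 0;
    }

    int register_protocol(const protocol_ops *ops)
    {
        if (nprotocols == 8) return -1;
        protocols[nprotocols++] = ops; // and where to deliver incoming packets
        return 0;
    }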
Work is in progress, and I hope a first beta release will be available in March. By June it should be running perfectly :)
Please see http://www.edu.uni-klu.ac.at/~epirker/ieee1394.html for further reference.
Thanks to the KDE Desktop Environment Project there is finally some movement inside the freeware community regarding graphical user interfaces. The ergonomic standards for graphical applications are rising, and an increasing number of users is no longer satisfied with a chaotically patched-together desktop. I realized that very clearly with another project of mine: the LyX document processor, a kind of word processor based on the LaTeX typesetting system. Only six months after the foundation of the KDE Desktop Project I received more and more emails from long-term LyX users "complaining" that their word processor no longer fit into their new desktop and was becoming more and more difficult to use. I could not believe that at first, until I created a larger document with my own program after a break of several months. And really: after only one year of KDE usage it was remarkably more difficult for me to get used again to the different behaviour of apparently well-known controls, especially scrollbars and pulldown menus. "The forces I set free..." had turned against me; that must not be! Things also began to bother me which I had considered entirely normal before: configuration by editing a complex file named lyxrc with a text editor, complex dialog popups instead of dynamic toolbars, missing session management, missing drag'n'drop, and all those further KDE benefits. In addition, the toolkit used by LyX, named xforms, differed very much from Motif in terms of usage, and therefore also from the OS/2 workplace shell, the Macintosh and MS-Windows. This was no real problem in the times of chaotic X desktops (Athena is even more unusable), but those times are luckily over.
Everything cried out for a port! And due to the urging of Kalle Dallheimer, who was looking for a comfortable word processor for his wife, we found ourselves sitting in front of two X terminals. Equipped with a couple of cans of caffeinated chocolate (too much coffee is bad for the stomach), we set ourselves a limit of 24 hours for a port.
A first "grep" over the sources, which at 80,000 lines of C++ are not exactly small, was not very encouraging: over 50 complex dialogs and far more than 5000 calls to xforms functions. It was obvious that the trivial method of porting --- replacing all xforms-related code with KDE/Qt code and recompiling --- was neither possible within 24 hours nor very amusing. The mere thought of having to debug several thousand lines of new GUI code at once after a couple of days of hard work took the last bit of fun away. And fun should be the most important motivation to create free software.
The ideal way to cope with such a huge porting task is obvious: simply put one foot in front of the other. That means replacing function call after function call, dialog after dialog, without breaking the functionality of the whole. This model of porting is similar to completely renovating a house while you still live in it. In a word: the solution is multi-toolkit programming.
In fact, only a few lines of modified code made it possible to add new Qt GUI controls alongside the existing xforms ones. LyX simply used both toolkits at the same time!
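How can two toolkits coexist in one running program? One common trick, sketched below, is to let one toolkit keep the main loop and to pump the other toolkit's pending events from an idle hook. This is only a rough illustration of the idea, not the actual LyX code, and it assumes XForms' fl_set_idle_callback()/fl_do_forms() and Qt's QApplication::processEvents():

    // Rough sketch of multi-toolkit coexistence (not the actual LyX port):
    // XForms keeps its main loop, and Qt's queued events are processed from
    // an XForms idle callback, so controls from both toolkits stay alive.
    #include <forms.h>                   // XForms
    #include <qapplication.h>            // Qt

    static int pump_qt(XEvent *, void *)
    {
        qApp->processEvents();           // let Qt handle whatever is pending
        return 0;
    }

    int main(int argc, char **argv)
    {
        QApplication qt_app(argc, argv);           // second toolkit, same display
        fl_initialize(&argc, argv, "LyX", 0, 0);   // first toolkit keeps the loop

        // ... create xforms forms and Qt dialogs side by side here ...

        fl_set_idle_callback(pump_qt, 0);          // poll Qt whenever XForms idles
        fl_do_forms();                             // XForms main loop
        return 0;
    }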
In my talk I will try to explain how and why something like this works:
On the one hand, multi-toolkit programming has several advantages for the porting of applications: (a) it is easier, (b) it can be done within a larger timeframe, (c) it can be distributed across many shoulders. All these are important points for free software development. On the other hand, it becomes possible to borrow controls and functionality from other toolkits. An example might be a GTK application which wants to use the KApplication class from the KDE Desktop Environment to get access to a comfortable database, desktop configuration events, session management, the icon loader and much more.
My investigation will cover the popular (= many applications available) toolkits like Qt, Tk, Motif, Athena and XForms. I might also have a look at a framework like wxWindows and --- with regard to a possible Gimp-to-KDE port --- GTK.
The target audience of the talk is mainly programmers who already have some X11 experience. But everyone else may hopefully gain some useful knowledge about the X Window System and event-driven programming in general as well.
In October 1997, Caldera, Inc. announced the Caldera Open Administration System (COAS). COAS provides Linux users with a robust, flexible, easy-to-use system for administering the Linux operating system. In the past, Linux provided several tools for system administration, but it did not provide a complete administration system. With COAS, not only will users get a complete administration system, but they will get a system that is flexible, powerful, and easy to use.
GOALS
To improve Linux administration, Caldera organized the COAS project. The goals of the project are to create:
Modular Administration System
Because users may only want to use certain parts of the administration system, and because a complete administration system will take months to develop, the system is modular. With a modular architecture, at any time users can plug in the parts of the system they want to use. For example, users who do not remember the syntax for editing the printcap or crontab configuration files may want to plug in modules to help them. Conversely, these same users may not want to use a module for changing a user's name.
When users plug in the parts of the system they want to use, there is no additional setup required. Users also do not have to worry about system performance when using the COAS system. System performance remains high, regardless of the number of modules used.
Flexible, Powerful Administration Modules
To make the modules useful to novice and advanced Linux users, the administration system is full-featured and flexible. The system includes built-in intelligence, backwards compatibility, direct access to configuration data, control of local administrative preferences, and actual data view.
Easy-to-Use Administration Modules
The new system is also easy to use. Each module has the same "look and feel," includes extensive online help, and can be easily translated into other languages.
DEVELOPMENT
Caldera is currently working on several administration modules that will be ready to use early in 1998. Caldera is also encouraging Linux developers worldwide to develop modules for the project. Depending on the developers' choices, the COAS modules will be available to the Linux community under the GNU General Public License. The modules are also compatible with any Linux distribution. That way, all Linux users can benefit from the COAS project, regardless of the Linux distribution they are using.
Working together as a Linux community will propel the Linux operating system into the next century. In addition, working together on the Caldera Open Administration System will give worldwide Linux users what they expect: a complete Linux administration system that is flexible, powerful, and easy to use.
FOR MORE INFORMATION
For details about the COAS project, see http://www.coas.org/ on the Caldera web site. This site currently provides an SDK to download (including the COAS source code), technical details about the project, and an e-mail forum for questions and discussion. From this site, you can download the latest COAS modules.
The Linux/Microcontroller project is a port of Linux 2.0 to systems without a Memory Management Unit. Currently, only Motorola MC68000 derivatives are supported, but the techniques and much of the code are equally applicable to any architecture. The 3Com PalmPilot with a TRG SuperPilot Board and a custom boot loader was the first system to successfully boot.
The initial port of Linux to systems without MMU was done during the last week of January 1998 by The Silver Hammer Group Ltd's programming team, Kenneth Albanowski and D. Jeff Dionne.
Since then, the port has been run on 68360 and 68332 embedded systems, and even on an Atari ST. A small shell and set of utilities are available, and even networking is running.
SHGL is an engineering company that develops hardware, firmware and software. Running Linux on our hardware is a natural progression from our home-grown soft real-time kernel. TCP/IP support alone is worth the extra 1/5 Meg of memory required. With RAM and FLASH memory prices falling, many applications can afford a controller with sufficient resources to run Linux.
Linux running on your next toaster? Who knows, but certainly many industrial and commercial products could benefit from the services provided by an embedded uClinux kernel. With a simple GUI layer, uClinux could be useful in PDAs or next-generation cell phones.
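One consequence of the missing MMU is worth a concrete illustration: without copy-on-write there is no ordinary fork(), so programs for such systems spawn children with vfork() followed almost immediately by an exec. The helper below is only a generic sketch of that pattern, not code from the project:

    // Spawning a child on an MMU-less system: no copy-on-write means no
    // ordinary fork(); the child borrows the parent's address space (and the
    // parent sleeps) until it calls execv() or _exit().
    #include <unistd.h>
    #include <sys/wait.h>

    int run(const char *prog, char *const argv[])
    {
        pid_t pid = vfork();             // fork() is unavailable without an MMU
        if (pid == 0) {
            execv(prog, argv);           // must exec (or _exit) right away
            _exit(127);                  // never return from a vfork()ed child
        }
        if (pid < 0)
            return -1;

        int status = 0;
        waitpid(pid, &status, 0);
        return status;
    }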
Topics to be covered in the seminar:
Bio:
Jeff Dionne is an engineer with The Silver Hammer Group Ltd. He has been a member of the Linux community since Linux 0.99.4. He has thirteen years of engineering and programming experience. Jeff lives near Toronto, Ontario.
Kenneth Albanowski is a new addition to the Silver Hammer Group's programming team. He has been an independent programmer for the last 5 years, and has contributed to the Linux, Perl and Palm Pilot communities for the last few years. Kenneth lives near Princeton, New Jersey.
An increasingly forgotten skill in today's computer industry is the ability to optimize low-end machines as well as large servers. In recent times, even most entry-level desktops are configured with fully cache-coherent bus architectures, megabytes of level 2 cache, and lots of memory. Very few systems today possess architectural attributes which typically plagued the OS optimizer to no end just a short time ago.
This is unfortunate, and surprisingly the techniques which prove beneficial to these low-end systems also help tremendously on more modern machines, and vice versa. Some of the problems are precisely the same, if viewed from the proper perspective.
In this light, it is instructive to view side by side the optimization strategies used on ports to machines with both kinds of characteristics, and also to look at what the relative payoff was in each case. In some cases a particular optimization was prudent on only one of the two classes of machines, but this turned out not to be the common case.
This talk will demonstrate these issues in depth, using two ports of the Linux kernel on which the author has done extensive hardware-specific optimization work. In particular it will analyze the UltraSparc port and, more recently, the port of Linux to the first-generation microserver product released by Cobalt Microserver, which is MIPS-based.
ABSTRACT:
Generic IP Firewall Chains (`ipchains') is a more flexible replacement for the normal Linux IP Firewalling (`ipfw'). The principle is similar, but it overcomes some of the limitations of the linear approach used by ipfw, by effectively adding control-flow rules. The work consists of a user-space tool to replace `ipfwadm' and a kernel-space replacement for ipfw.
The aim was to allow a more intuitive organisation of firewall rules, which will hopefully reduce the number of mistakes made in setting up complex systems. It also allows for extremely complex rulesets, which can be slow under ipfw's simple linear traversal.
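The gain over a flat ruleset is easiest to see in a small conceptual sketch. This is only an illustration of the control-flow idea, not the kernel's actual data structures: a rule can either give a verdict or jump into a user-defined chain, so whole groups of rules are skipped whenever their guarding rule does not match.

    // Conceptual illustration of rule chains (invented types, not ipchains
    // kernel code).  A matching rule either decides the packet's fate or
    // dives into a sub-chain; non-matching packets never see that sub-chain.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct Packet { std::string src; std::string proto; };

    struct Chain;

    struct Rule {
        bool (*match)(const Packet &);
        const char *verdict;             // "ACCEPT", "DENY", or 0 to jump
        const Chain *jump;               // target chain when verdict is 0
    };

    struct Chain {
        std::vector<Rule> rules;
        const char *policy;              // built-in chains decide; user chains
    };                                   // may leave this 0 ("return to caller")

    const char *traverse(const Chain &c, const Packet &p)
    {
        for (std::size_t i = 0; i < c.rules.size(); ++i) {
            if (!c.rules[i].match(p))
                continue;
            if (c.rules[i].verdict)
                return c.rules[i].verdict;
            if (const char *v = traverse(*c.rules[i].jump, p))
                return v;                // sub-chain decided; otherwise go on
        }
        return c.policy;
    }

With a flat list every packet pays for every rule; with chains a single guarding rule (say, "packets arriving on the external interface") is the only cost for traffic that does not concern the rules grouped behind it.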
The talk will cover the firewalling method used in Linux, the features we included, the testing suite, known bugs and warts, and the `ipfwadm' wrapper script which provides backwards compatibility with ipfw. It will also cover the iterative design decisions, including some initial ideas which were rejected, and the great feedback (and patches) obtained from users. Possible future directions will also be discussed.
It is assumed that the audience is familiar with packet filtering concepts, preferably with knowledge of basic usage of the ipfwadm program.
More details can be found on the Generic IP Firewall Chains homepage at
ABOUT THE AUTHOR:
I'm one of the authors of the Generic IP Firewall Chains kernel patch (Michael Neuling is the other one). I am a consultant, and run my business on Linux. I have worked for various large organisations in Australia in roles ranging from team leader, to firewall tester, to system administrator, to OO developer.
In 1992 the Internet Engineering Task Force (IETF) began the development of standards that define an architecture, a set of protocols, and a set of security services to secure the Internet Protocol (IP). The protocols include two IP encapsulation protocols, the Encapsulating Security Payload (ESP) and the Authentication Header (AH), and a dynamic key management protocol, the Internet Key Exchange (IKE). Through these protocols, confidentiality, authentication, and data integrity are applied to each secured IP packet. IPsec is the term typically applied to the ESP and AH protocols and is also the name of the IETF Working Group that is developing the specifications for the IP Security Architecture, ESP, AH, and IKE.
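For readers new to the packet formats, the fixed portions of the two encapsulation headers are small. The structs below are only an illustration of the on-the-wire layout as specified in RFC 2402 (AH) and RFC 2406 (ESP); they are not taken from the Cerberus sources:

    // Illustrative layout of the fixed AH and ESP header fields
    // (RFC 2402 / RFC 2406); not code from the Cerberus implementation.
    #include <stdint.h>

    struct ah_header {                 // Authentication Header
        uint8_t  next_header;          // protocol of the payload that follows
        uint8_t  payload_len;          // AH length in 32-bit words, minus 2
        uint16_t reserved;
        uint32_t spi;                  // Security Parameters Index
        uint32_t sequence;             // anti-replay sequence number
        // variable-length Integrity Check Value (ICV) follows
    };

    struct esp_header {                // Encapsulating Security Payload
        uint32_t spi;                  // selects the security association
        uint32_t sequence;             // anti-replay sequence number
        // encrypted payload, padding, pad length, next-header byte and
        // optional authentication data follow
    };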
At this time, there are several commercial and publicly available IPsec implementations being developed. The US National Institute of Standards and Technology (NIST) has developed a Linux reference implementation of IPsec called Cerberus. Cerberus is currently being used to provide a better understanding of the IPsec specifications, to assist in testing other IPsec implementations, and to provide secure communication between home and corporate offices. Cerberus is a functionally complete implementation of the ESP and AH protocols and was implemented to run either as a direct component of the Linux 2.1 kernel or as a run-time module. Implementation details on the Cerberus implementation can be found at http://www.antd.nist.gov/cerberus.
The talk will start with a brief overview of IPsec. The remainder of the talk will cover a detailed description of Cerberus, including the architecture, design details, coding decisions, and other ideas which may be included in future versions of Cerberus.
It is assumed that the audience is familiar with basic security service concepts and general TCP/IP concepts. It is helpful, but not necessary, for the audience to be familiar with the core pieces of the Linux 2.1 kernel networking code.
Rob Glenn has been working with the IETF IPsec Working Group since 1993. He is the technical lead on the Internet Protocol Security Program at NIST (see http://www.antd.nist.gov/antd/html/security.html for more information). He is a co-author of several of the IETF IPsec WG Internet Drafts and RFCs. He is also the primary developer of two separate Unix IPsec implementations, the first of which was a BSD implementation based on early IPsec specifications.