OpenAFS installation on Debian

Posted by docelic on Mon 4 Aug 2008 at 10:58

The purpose of this article is to give you a straight-forward, Debian-friendly way of installing and configuring OpenAFS 1.4.x, the recommended production version of OpenAFS for UNIX. By the end of this guide, you will have a functional OpenAFS installation that will complete our solution for secure, centralized network logins with shared home directories.


The newest version of this article can be found at http://techpubs.spinlocksolutions.com/dklar/afs.html.

Table of Contents

Introduction
The role of AFS within a network
Glue layer: integrating AFS with system software
PAM
Conventions
OpenAFS installation
OpenAFS kernel module
OpenAFS client
OpenAFS server
Post-configuration
OpenAFS concepts
File layout
File listing and information
Read and write file paths
File reading and writing
Creating users
Creating and mounting volumes
Setting permissions
Volume quotas
Serving metadata
Metadata via LDAP
Metadata via libnss-afs
Metadata test
PAM configuration
/etc/pam.d/common-auth
/etc/pam.d/common-session
Conclusion

Introduction

The AFS distributed filesystem is a service that has traditionally captivated the interest of system administrators and advanced users, but its high entry barrier and infrastructure requirements have kept many from using it.

AFS has already been the topic of numerous publications. Here, we will present only the necessary summary; enough information to establish the context and achieve practical results.

You do not need to follow any external links; however, the links have been provided both throughout the article and listed all together at the end, to serve as pointers to more precise technical treatment of individual topics.

AFS was started at Carnegie Mellon University in the early 1980s, in order to easily share file data between people and departments. The system became known as the Andrew File System, or AFS, in recognition of Andrew Carnegie and Andrew Mellon, the primary benefactors of CMU. Later, AFS was supported and developed as a product by Transarc Corporation (now IBM Pittsburgh Labs). IBM branched the source of the AFS product, and made a copy of the source available for community development and maintenance. They called the release OpenAFS, which is practically the only "variant" of AFS used today for new installations.

The amount of important information related to AFS is orders of magnitude larger than that of, say, Kerberos or LDAP. It isn't possible to write a practical OpenAFS Guide without skipping some of the major AFS concepts and taking shortcuts in reaching the final objective. However, we compensate for this by introducing you to the whole OpenAFS idea, helping you achieve practical results quickly, and setting you on your way to expanding your OpenAFS knowledge using other quality resources.

AFS relies on Kerberos for authentication. A working Kerberos environment is the necessary prerequisite, and the instructions on setting it up are found in another article from the series, the MIT Kerberos 5 Guide.

Furthermore, in a centralized network login solution, user metadata (Unix user and group IDs, GECOS information, home directories, preferred shells, etc.) needs to be shared in a network-aware way as well. This metadata can be served using LDAP or libnss-afs. In general, LDAP is standalone and flexible, and covered in another article from the series, the OpenLDAP Guide. libnss-afs is simple, depends on the use of AFS, and is covered in this Guide.

The role of AFS within a network

AFS' primary purpose is to serve files over the network in a robust, efficient, reliable and fault-tolerant way.

Its secondary purpose may be to serve user meta information through libnss-afs, unless you choose OpenLDAP for the purpose as explained in another article from the series, the OpenLDAP Guide.

While the idea of a distributed file system is not unique, let's quickly identify some of the AFS specifics:

  • AFS offers a client-server architecture for transparent file access in a common namespace (/afs/) anywhere on the network. This principle has been nicely illustrated by one of the early AFS slogans "Wherever you go, there you are!".

  • AFS uses Kerberos 5 as an authentication mechanism, and without a valid Kerberos ticket and an AFS token, it is virtually impossible to gain any privileged access to the AFS data space, even if you happen to be the server or network administrator.

  • A user's AFS identity is not in any way related to traditional system usernames or other data; the AFS Protection Database (PTS) is a stand-alone database of AFS usernames and groups. However, since Kerberos 5 is used as an authentication mechanism, provision is made to automatically "map" Kerberos principal names onto PTS entries.

  • AFS does not "export" existing data partitions to the network in a way that NFS does. AFS requires partitions to be dedicated to AFS use only. On AFS partitions, one creates "volumes" which represent basic client-accessible units and hold files and directories. These volumes and volume quotas are "virtual" entities, they do not affect physical disk partitions and disk blocks like system partitions do.

  • AFS volumes reside on AFS server partitions. Each AFS server can have up to 256 partitions of arbitrary size and unlimited number of volumes on them. A volume cannot span multiple partitions — the size of the partition implies the maximum data size contained in any of its volumes. (If AFS partitions composed of multiple physical partitions are a requirement, Logical Volume Manager or other OS-level functionality can be used to construct such partitions.)

  • To become conveniently accessible, AFS volumes are usually "mounted" somewhere under the AFS namespace (/afs/). These "mounts" are handled internally in AFS — they are not affected by client or server reboots and they do not correspond to Unix mount points. The only Unix mount point defined in AFS is for the /afs/ directory itself.

  • AFS supports a far more elaborate and convenient permissions system (AFS ACL) than the traditional Unix "rwx" modes. These ACLs are set on directories and apply to all contained files. Each directory can hold up to 20 ACL entries. The ACLs may refer to users and groups, and even "supergroups" (groups within groups) to a maximum depth of 5. IP-based ACLs are available as well, for the rare cases where you might have no other option; they work on the principle of adding IPs to groups, and then using group names in ACL rules. Newly-created directories automatically inherit the ACL of the parent directory.

    As of summer 2008, there have been plans to support file-based ACLs as well, as part of the Google Summer of Code initiative, but this has not yet become a usable functionality. (Actually, the implementation was supposedly working, but the project got stuck in trying to provide compatibility for old AFS clients).

  • AFS is available for a broad range of architectures and software platforms. You can obtain an up-to-date AFS release for all Linux platforms, Microsoft Windows, IBM AIX, HP/UX, SGI Irix, MacOS X and Sun Solaris.

  • OpenAFS comes in two main releases, 1.4.x and 1.6.x. The 1.4 "maintenance" release is the recommended production version for Unix and MacOS platforms. The 1.6 "features" release is the recommended production version for Microsoft Windows. The client versions are completely interoperable and you can freely mix 1.4 and 1.6 clients.

    Actually, OpenAFS 1.6 client functionality is pretty stable on all platforms and 1.6 clients should start getting deployed in non-critical environments. Debian OpenAFS 1.6 packages are available in the "unstable" branch.

  • The OpenAFS website and documentation may seem out of date at first, but they do contain all the information you need.

  • AFS is enterprise-grade and mature. Books about AFS written 10 or 15 years ago are still authoritative today, plus/minus the inevitable architectural changes and improvements.

You can find the complete AFS documentation at the OpenAFS website. After grasping the basic concepts, your most helpful resources will be quick help options supported in all commands, such as in fs help, vos help, pts help or bos help and the Unix manpages.

Glue layer: integrating AFS with system software

PAM

On all GNU/Linux-based platforms, Linux-PAM is available for service-specific authentication configuration. Linux-PAM is an implementation of PAM ("Pluggable Authentication Modules") from Sun Microsystems.

Network services, instead of having hard-coded authentication interfaces and decision methods, invoke PAM through a standard, pre-defined interface. It is then up to PAM to perform any and all authentication-related work, and report the result back to the application.

Exactly how PAM reaches the decision is none of the service's business. In traditional set-ups, that is most often done by asking and verifying usernames and passwords. In advanced networks, that could be Kerberos tickets and AFS tokens.

PAM will allow for inclusion of OpenAFS into the authentication path of all services. After typing in your password, it will be possible to verify the password against the Kerberos database and automatically obtain the Kerberos ticket and AFS token, without having to run kinit and aklog manually.

You can find the proper introduction (and complete documentation) on the Linux-PAM website. Pay special attention to the PAM Configuration File Syntax page. Also take a look at the Linux-PAM(7) and pam(7) manual pages.

Conventions

It's quite frustrating when you cannot follow the instructions found in a piece of documentation. Let's agree on a few points before getting down to work:

  • Our platform of choice, where we will demonstrate a practical setup, will be Debian GNU/Linux.

  • Install sudo. Sudo is a program that will allow you to carry out system administrator tasks from your normal user account. All the examples in this article requiring root privileges use sudo, so you will be able to copy-paste them to your shell.

    su -c 'apt-get install sudo'
    

    If asked for a password, type in the root user's password.

    To configure sudo, add the following line to your /etc/sudoers, replacing $USERNAME with your login name:

    $USERNAME ALL=(ALL) NOPASSWD: ALL
    

  • Debian packages installed during the procedure will ask us a series of questions through the so-called debconf interface. To configure debconf to a known state, run:

    sudo dpkg-reconfigure debconf
    

    When asked, answer interface=Dialog and priority=low.

  • Monitoring log files is crucial for detecting problems. The straightforward, catch-all approach is to open a terminal and run:

    cd /var/log; sudo tail -F daemon.log sulog user.log auth.log debug kern.log syslog dmesg messages \
      kerberos/{krb5kdc,kadmin,krb5lib}.log openafs/{Bos,File,Pt,Salvage,VL,Volser}Log
    

    The command will keep printing log messages to the screen as they arrive.

  • Our test system will be called monarch.spinlock.hr and have an IP address of 192.168.7.12. Both the server and the client will be installed on the same machine. However, to differentiate between client and server roles, the client will be referred to as monarch.spinlock.hr and the server as afs1.spinlock.hr. The following addition will be made to /etc/hosts to completely support this scheme:

    192.168.7.12	monarch.spinlock.hr monarch krb1.spinlock.hr krb1 ldap1.spinlock.hr ldap1 afs1.spinlock.hr afs1
    
    

OpenAFS installation

The most meaningful way to access data in AFS is through an AFS client. That means you will need the OpenAFS client installed and the kernel module built and running on all AFS systems, including servers.

Since the AFS client is mandatory and it depends on the kernel module, the whole installation procedure of an OpenAFS server will consist of the OpenAFS kernel module, OpenAFS client and OpenAFS server, in that order.

Great, so let's roll.

OpenAFS kernel module

Building the OpenAFS kernel module today is very simple. There are basically two methods available: module-assistant (older) and DKMS (newer). Both offer a simple and elegant way to avoid dealing with any of the complexities behind the scenes.

OpenAFS kernel module with DKMS

DKMS is a framework for generating Linux kernel modules. It can rebuild modules automatically as new kernel versions are installed. This mechanism was invented for Linux by the Dell Linux Engineering Team back in 2003, and has since seen widespread use.

To get the OpenAFS module going with DKMS, here's what you do:

sudo apt-get install build-essential dkms linux-headers-`uname -r`
sudo apt-get install openafs-modules-dkms

It's that easy, and you're done. What's best, this method is maintenance-free — provided that there are no compile-time errors, the OpenAFS module will be automatically compiled when needed for all kernel versions you happen to be running, be it upgrades or downgrades.
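If you want to confirm that the module was actually built and registered for your running kernel, a quick sanity check is the following (the exact output format varies between DKMS versions):

dkms status openafs
modinfo openafs | head -3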

OpenAFS kernel module with module-assistant

To get it going with module-assistant, here's what you do:

sudo apt-get install module-assistant
sudo m-a prepare openafs
sudo m-a a-i openafs

As with the DKMS approach, you're already done, with just one difference: module-assistant does not rebuild the kernel module automatically on kernel changes, so you'll need to run sudo m-a a-i openafs manually every time you boot into a new kernel version.

OpenAFS client

After the kernel module is installed, we can proceed with installing the OpenAFS client:

sudo apt-get install openafs-{client,krb5}

Debconf answers for reference:

AFS cell this workstation belongs to: spinlock.hr
# (Your domain name in lowercase, matching the Kerberos realm in uppercase)

Size of AFS cache in kB? 4000000

# (Default value is 50000 for 50 MB, but you can greatly increase the
# size on modern systems to a few gigabytes, with 20000000 (20 GB) being
# the upper reasonable limit. The example above uses 4 GB)

Run Openafs client now and at boot? No
# (It is important to say NO at this point, or the client will try to 
# start without the servers in place for the cell it belongs to!)

Look up AFS cells in DNS? Yes

Encrypt authenticated traffic with AFS fileserver? No
# (OpenAFS client can encrypt the communication with the fileserver. The
# performance hit is not too great to refrain from using encryption, but
# generally, disable it on local and trusted-connection clients, and enable
# it on clients using remote/insecure channels)

Dynamically generate the contents of /afs? Yes

Use fakestat to avoid hangs when listing /afs? Yes

DB server host names for your home cell: afs1
# (Before continuing, make sure you've edited your DNS configuration or 
# /etc/hosts file as mentioned above in the section "Conventions", and that
# the command 'ping afs1' really does successfully ping your server)

Client AFS cache

The OpenAFS cache directory on AFS clients is /var/cache/openafs/. As we've said, this includes your AFS servers too, as they all have the AFS client software installed.

The cache directory must be on an Ext 2, 3, or 4 partition.

In addition, make sure you never run out of space assigned to the OpenAFS cache; OpenAFS doesn't handle that condition gracefully.

The best way to satisfy both requirements is to mount a dedicated partition onto /var/cache/openafs/, and make its size match the size of the AFS cache that was specified above.

If you have a physical partition available, create an Ext filesystem on it and add it to /etc/fstab as usual:

sudo mkfs.ext4 /dev/my-cache-partition
sudo sh -c "echo '/dev/my-cache-partition /var/cache/openafs ext4 defaults 0 2' >> /etc/fstab"

If you do not have a physical partition available, you can create the partition in a file; here's an example for the size of 4 GB we've already used above for the "AFS cache size" value:

cd /var/cache
sudo dd if=/dev/zero of=openafs.img bs=10M count=410   # (~4.1 GB partition)
sudo mkfs.ext4 openafs.img
sudo sh -c "echo '/var/cache/openafs.img /var/cache/openafs ext4 defaults,loop 0 2' >> /etc/fstab"
sudo tune2fs -c 0 -i 0 -m 0 openafs.img

To verify that the Ext cache partition has been created successfully and can be mounted, run:

sudo mount /var/cache/openafs
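If the mount succeeded, the dedicated cache partition and its size will show up in the usual place (sizes will of course vary with your setup):

df -h /var/cache/openafs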

OpenAFS server

Now that the kernel module and the AFS client are ready, we can proceed with the last step — installing the OpenAFS server.

sudo apt-get install openafs-{fileserver,dbserver}

Debconf answers for reference:

Cell this server serves files for: spinlock.hr

That one was easy, wasn't it? Let's follow with the configuration part to get it running:

AFS key (the Kerberos principal)

As Kerberos introduces mutual authentication of users and services, we need to create a Kerberos principal for our AFS service.

Strictly speaking, in Kerberos you would typically create one key per-host per-service, but since OpenAFS uses a single key for the entire cell, we must create just one key, and that key will be shared by all OpenAFS cell servers.

(The transcript below assumes you've set up Kerberos and the Kerberos policy named "service" as explained in the MIT Kerberos 5 Guide; if you did not, do so right now as Kerberos is the necessary prerequisite.)

sudo rm -f /tmp/afs.keytab

sudo kadmin.local

Authenticating as principal root/admin@SPINLOCK.HR with password.

kadmin.local:  addprinc -policy service -randkey -e des-cbc-crc:normal afs/spinlock.hr
Principal "afs/spinlock.hr@SPINLOCK.HR" created.

kadmin.local:  ktadd -k /tmp/afs.keytab -norandkey -e des-cbc-crc:normal afs/spinlock.hr
Entry for principal afs/spinlock.hr@SPINLOCK.HR with kvno 1, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/tmp/afs.keytab.

kadmin.local:  quit

Once the key's been created and exported to file /tmp/afs.keytab as shown, we need to load it into the AFS KeyFile. Note that the number "1" in the following command is the key version number, which has to match KVNO reported in the 'ktadd' step above.

sudo asetkey add 1 /tmp/afs.keytab afs/spinlock.hr

To verify the key has been loaded and that there is only one key in the AFS KeyFile, run bos listkeys:

sudo bos listkeys afs1 -localauth

key 1 has cksum 2035850286
Keys last changed on Tue Jun 24 14:04:02 2008.
All done.

Now that's nice!

(On a side note, you can also remove a key from the KeyFile. In case there's something wrong and you want to do that, run bos help for a list of available commands and bos help removekey for the specific command you'll want to use.)
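For illustration only, removing the key with version number 1 would look something like the following; this is a sketch showing the syntax, not something to run on the key we've just added:

sudo bos removekey afs1 -kvno 1 -localauth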

AFS enctype DES

AFS uses a single-DES Kerberos key, which is considered to be "weak crypto" by newer versions of Kerberos. Specifically, in Kerberos version 1.7 it is still allowed by default, but can be disabled by specifying "allow_weak_crypto = false" in /etc/krb5.conf. In Kerberos 1.8, it is disabled by default and must be explicitly enabled.

Enabling "weak crypto" for OpenAFS is done according to the following rules:

If the machine is a Kerberos KDC server running Kerberos >= 1.7, then weak_crypto must be explicitly allowed, or the server will not offer single-DES to clients. (To allow it, edit /etc/krb5.conf, section "[libdefaults]", and explicitly set "allow_weak_crypto = true". Remember to restart krb5-kdc with sudo invoke-rc.d krb5-kdc restart).

If the machine is a Kerberos/AFS client, then weak_crypto may or may not need to be enabled, depending on the versions of both Kerberos and OpenAFS installed on it:

  • Kerberos <= 1.6: no adjustment necessary

  • Kerberos 1.7 OR pre-releases of Kerberos 1.8 OR OpenAFS < 1.4.12: edit /etc/krb5.conf, section "[libdefaults]", and explicitly set "allow_weak_crypto = true":

    
    [libdefaults]
      ...
      allow_weak_crypto = true
      ...
    

    (If this machine also runs the KDC, remember to restart krb5-kdc with sudo invoke-rc.d krb5-kdc restart.)

  • Kerberos >= 1.8 (NOT 1.8 pre-releases) AND OpenAFS >= 1.4.12: no adjustment necessary

    (MIT Kerberos added a hook to let OpenAFS selectively enable weak crypto for aklog, and OpenAFS 1.4.12 uses that hook.)
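Following these rules, a quick way to confirm that your Kerberos and OpenAFS combination can obtain the single-DES AFS service ticket (the same ticket aklog will request later) is to ask for it explicitly with kvno from the krb5-user package; if weak crypto is still disabled, this request fails with an enctype-related error:

kinit root/admin
kvno afs/spinlock.hr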

Note

Things regarding AFS use of "weak" keys are changing for the better: the "strong" AES encryption already works, but is not yet part of a standard release. This guide will be updated as soon as things become usable out of the box.

AFS (vice) partitions

As we've hinted in the introduction, AFS works by using its own dedicated partitions. Each server can have up to 256 partitions which should be mounted to directories named /vicepXX/, where "XX" is the partition "number" going from 'a' to 'z' and from 'aa' to 'iv'.

In a simple scenario, we will have only one partition /vicepa/. While different underlying filesystems are supported, we will assume /vicepa/ has been formatted as some version of the Ext filesystem (4, 3 or 2).

The same notes as for the OpenAFS client cache directory apply — it is advisable to have /vicepa mounted as a partition, although you can get away without it.

Here's the list of the three possible setups:

1) If you have a physical partition available, create an Ext filesystem on it and add it to /etc/fstab as usual:

sudo mkfs.ext4 /dev/my-vice-partition
sudo sh -c "echo '/dev/my-vice-partition /vicepa ext4 defaults 0 2' >> /etc/fstab"

2) If you do not have a physical partition available, you can create the partition in a file; here's an example for the size of 10 GB:

cd /home
sudo dd if=/dev/zero of=vicepa.img bs=100M count=100   # (10 GB partition)
sudo mkfs.ext4 vicepa.img
sudo sh -c "echo '/home/vicepa.img /vicepa ext4 defaults,loop 0 2' >> /etc/fstab"
sudo tune2fs -c 0 -i 0 -m 0 vicepa.img

To verify that the Ext vice partition has been created successfully and can be mounted, run:

sudo mkdir -p /vicepa
sudo mount /vicepa

3) If you insist on not using any partition mounted on /vicepa, that'll work too because AFS does not use its own low-level format for the partitions — it saves data to vice partitions on the file level. (As said in the introduction, that data is structured in a way meaningful only to AFS, but it is there in the filesystem, and you are able to browse around it using cd and ls).

To make OpenAFS honor and use such a vice directory that is not mounted as a separate partition, create a file named AlwaysAttach in it:

sudo mkdir -p /vicepa
sudo touch /vicepa/AlwaysAttach

Creating a new cell

Now that we've installed the software components that make up the OpenAFS server and that we've taken care of the pre-configuration steps, we can create an actual AFS cell.

Let's run afs-newcell:

sudo afs-newcell

                            Prerequisites

In order to set up a new AFS cell, you must meet the following:

1) You need a working Kerberos realm with Kerberos4 support.  You
   should install Heimdal with KTH Kerberos compatibility or MIT
   Kerberos 5.

2) You need to create the single-DES AFS key and load it into
   /etc/openafs/server/KeyFile.  If your cell's name is the same as
   your Kerberos realm then create a principal called afs.  Otherwise,
   create a principal called afs/cellname in your realm.  The cell
   name should be all lower case, unlike Kerberos realms which are all
   upper case.  You can use asetkey from the openafs-krb5 package, or
   if you used AFS3 salt to create the key, the bos addkey command.

3) This machine should have a filesystem mounted on /vicepa.  If you
   do not have a free partition, then create a large file by using dd
   to extract bytes from /dev/zero.  Create a filesystem on this file
   and mount it using -oloop.

4) You will need an administrative principal created in a Kerberos
   realm.  This principal will be added to susers and
   system:administrators and thus will be able to run administrative
   commands.  Generally the user is a root or admin instance of some
   administrative user.  For example if jruser is an administrator then
   it would be reasonable to create jruser/admin (or jruser/root) and
   specify that as the user to be added in this script.

5) The AFS client must not be running on this workstation.  It will be
   at the end of this script.

Do you meet these requirements? [y/n] y

If the fileserver is not running, this may hang for 30 seconds.
/etc/init.d/openafs-fileserver stop

What administrative principal should be used? root/admin

/etc/openafs/server/CellServDB already exists, renaming to .old
/etc/init.d/openafs-fileserver start
Starting OpenAFS BOS server: bosserver.
bos adduser afs.spinlock.hr root -localauth

Creating initial protection database.  This will print some errors
about an id already existing and a bad ubik magic.  These errors can
be safely ignored.

pt_util: /var/lib/openafs/db/prdb.DB0: Bad UBIK_MAGIC. Is 0 should be 354545
Ubik Version is: 2.0

bos create afs1.spinlock.hr ptserver simple /usr/lib/openafs/ptserver -localauth
bos create afs1.spinlock.hr vlserver simple /usr/lib/openafs/vlserver -localauth
bos create afs1.spinlock.hr fs fs -cmd '/usr/lib/openafs/fileserver -p 23 -busyat 600 \
  -rxpck 400 -s 1200 -l 1200 -cb 65535 -b 240 -vc 1200' -cmd /usr/lib/openafs/volserver \
  -cmd /usr/lib/openafs/salvager -localauth
bos setrestart afs1.spinlock.hr -time never -general -localauth
Waiting for database elections: done.
vos create afs1.spinlock.hr a root.afs -localauth
Volume 536870915 created on partition /vicepa of afs.spinlock.hr

/etc/init.d/openafs-client force-start
Starting AFS services: afsd.
afsd: All AFS daemons started.

Now, get tokens as root/admin in the spinlock.hr cell.
Then, run afs-rootvol to create the root volume.

Now that our AFS cell is created, remember we've said volumes are the basic units accessible by AFS clients? By convention, each AFS cell creates the first volume called root.afs.

Note

Well, strictly speaking, files in a volume can legitimately be accessed without mounting a volume. It's not as convenient, so you will almost always want to mount the volume first, but keep in mind that unmounting a volume does not equal making the files inaccessible — a volume becomes really inaccessible only if you clear its toplevel ACL.

According to the advice printed at the end of afs-newcell run, we need to first obtain the AFS administrator token:

sudo su # (We want to switch to the root user)

kinit root/admin

Password for root/admin@SPINLOCK.HR: PASSWORD

aklog

When running aklog, you might see the following error:


aklog: Couldn't get spinlock.hr AFS tickets:
aklog: unknown RPC error (-1765328184) while getting AFS tickets

It simply means that you did not enable option "allow_weak_crypto" in Kerberos config; see the section called “AFS enctype DES” above for a solution to the problem.

To verify that you hold the Kerberos ticket and AFS token, you may run the following:

klist -5f

Ticket cache: FILE:/tmp/krb5cc_1116
Default principal: root/admin@SPINLOCK.HR

Valid starting     Expires            Service principal
02/09/10 17:18:18  02/10/10 03:18:18  krbtgt/SPINLOCK.HR@SPINLOCK.HR
        renew until 02/10/10 17:18:16, Flags: FPRIA
02/09/10 17:18:18  02/10/10 03:18:18  afs/spinlock.hr@SPINLOCK.HR
        renew until 02/10/10 17:18:16, Flags: FPRAT

tokens

Tokens held by the Cache Manager:

User's (AFS ID 1) tokens for afs@spinlock.hr [Expires Feb 10 03:18]
   --End of list--

Now, with a successful kinit and aklog in place, we can run afs-rootvol:

afs-rootvol

                            Prerequisites

In order to set up the root.afs volume, you must meet the following
pre-conditions:

1) The cell must be configured, running a database server with a
   volume location and protection server.  The afs-newcell script will
   set up these services.

2) You must be logged into the cell with tokens in for a user in
   system:administrators and with a principal that is in the UserList
   file of the servers in the cell.

3) You need a fileserver in the cell with partitions mounted and a
   root.afs volume created.  Presumably, it has no volumes on it,
   although the script will work so long as nothing besides root.afs
   exists.  The afs-newcell script will set up the file server.

4) The AFS client must be running pointed at the new cell.
Do you meet these conditions? (y/n) y

You will need to select a server (hostname) and AFS partition on which to
create the root volumes.

What AFS Server should volumes be placed on? afs1
What partition? [a] a

vos create afs1 a root.cell -localauth
Volume 536870918 created on partition /vicepa of afs1
fs sa /afs system:anyuser rl
fs mkm /afs/spinlock.hr root.cell -cell spinlock.hr -fast || true
fs mkm /afs/grand.central.org root.cell -cell grand.central.org -fast || true
.....
.....
.....
.....
.....
fs sa /afs/spinlock.hr system:anyuser rl
fs mkm /afs/.spinlock.hr root.cell -cell spinlock.hr -rw
fs mkm /afs/.root.afs root.afs -rw
vos create afs1 a user -localauth
Volume 536870921 created on partition /vicepa of afs1
fs mkm /afs/spinlock.hr/user user 
fs sa /afs/spinlock.hr/user system:anyuser rl
vos create afs1 a service -localauth
Volume 536870924 created on partition /vicepa of afs1
fs mkm /afs/spinlock.hr/service service 
fs sa /afs/spinlock.hr/service system:anyuser rl
ln -s spinlock.hr /afs/spinlock
ln -s .spinlock.hr /afs/.spinlock
vos addsite afs1 a root.afs -localauth
Added replication site afs1 /vicepa for volume root.afs
vos addsite afs1 a root.cell -localauth
Added replication site afs1 /vicepa for volume root.cell
vos release root.afs -localauth
Released volume root.afs successfully
vos release root.cell -localauth
Released volume root.cell successfully

Post-configuration

If you remember, during the AFS installation phase we answered "No" to the question "Run OpenAFS client now and at boot?". The AFS init script simply won't run the client as long as client startup is disabled in the config file, even if you invoke sudo invoke-rc.d openafs-client start manually. (You'd have to invoke sudo invoke-rc.d openafs-client force-start, but that is not what happens during a regular boot.) So we have to enable the client in /etc/openafs/afs.conf.client by replacing AFS_CLIENT=false with AFS_CLIENT=true:

sudo perl -pi -e's/AFS_CLIENT=false/AFS_CLIENT=true/' /etc/openafs/afs.conf.client
sudo invoke-rc.d openafs-client restart
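To verify that the client started and attached itself to the right cell, two standard fs subcommands are handy:

fs wscell
fs checkservers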

Now let's drop any tokens or tickets that we may have initialized, to continue with a clean slate:

unlog; kdestroy

And at this point, you've got yourself one helluva OpenAFS cell up and running!

The following sections provide some more information on OpenAFS and its usage, but your installation and configuration phases have been completed.

OpenAFS concepts

File layout

While the whole point of AFS is in accessing files from remote workstations, remember that all AFS servers are also regular AFS clients and you can use them to browse the files just as fine. So let's explain the AFS directory structure a bit and then use our just-installed machine to look at the actual contents of the /afs/ directory.

As we've hinted in the section called “Introduction”, AFS uses a global namespace. That means all AFS sites are instantly accessible from /afs/ as if they were local directories, and all files have a unique AFS path. For example, file /afs/spinlock.hr/service/test will always be /afs/spinlock.hr/service/test, no matter the client, operating system, local policy, connection type or geographical location.

In order to avoid clashes in this global AFS namespace, by convention, each cell's "AFS root" starts with /afs/domain.name/.

Beneath it, the afs-rootvol script automatically created two directories, service/ and user/. The latter is where the users' home directories should go, usually hashed to two levels, such as /afs/spinlock.hr/user/r/ro/root/.

File listing and information

Let's list /afs/ directory contents to verify what we've just said about AFS cells and their mount points:

cd /afs

ls | head
1ts.org
acm-csuf.org
acm.uiuc.edu
ams.cern.ch
andrew.cmu.edu
anl.gov
asu.edu
athena.mit.edu
atlass01.physik.uni-bonn.de
atlas.umich.edu

ls | wc -l

189

The 189 directories were automatically created by the afs-rootvol script, but you can create additional and remove existing mount points (AFS mount points) at will.

With the above said, we can predict that AFS has created our own directory in /afs/spinlock.hr/. This directory is only automatically visible within the local cell and is not seen by the world in the ls /afs listing (because you have not asked for its inclusion in the global "CellServDB" file). Its default invisibility, however, does not make it inaccessible: supposing that you have a functioning network link, and that your cell name and server hostnames are known, your cell is reachable from the Internet.
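As a quick check of what your own client knows, fs listcells prints every cell the Cache Manager is aware of; our cell will be listed because we named afs1 as the database server during client configuration:

fs listcells | grep -i spinlock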

Now that we're in AFS land, we can quickly get some more AFS-specific information on /afs/spinlock.hr/:

fs lsm /afs/spinlock.hr

'/afs/spinlock.hr' is a mount point for volume '#spinlock.hr:root.cell'

fs lv /afs/spinlock.hr

File /afs/spinlock.hr (536870919.1.1) contained in volume 536870919
Volume status for vid = 536870919 named root.cell.readonly
Current disk quota is 5000
Current blocks used are 4
The partition has 763818364 blocks available out of 912596444

The output above is showing a cell setup with 1 TB of AFS storage — the block size in OpenAFS is 1 KB.

Note

Most of the AFS fs subcommands operate only on directory names that do not end with a dot (".") or a slash ("/"). For example, the above fs lsm /afs/spinlock.hr would not work if it was called with /afs/spinlock.hr/. Likewise, it is not possible to call fs lsm . or fs lsm ./; use fs lsm $PWD for the equivalent.

Read and write file paths

Each time you mount a volume, you can mount it read-write or read-only.

Read-write mounts are simple — reads and writes are done through the same filesystem path, such as /afs/spinlock.hr/service/testfile, and are always served by the AFS server on which the volume resides.

Read-only mounts make things interesting — volumes may have up to 8 read-only replicas and clients will retrieve files from the "best" source. However, that brings two specifics:
First, as the read-only mount is read-only by definition, a different file path (prefixed with a dot), such as /afs/.spinlock.hr/service/testfile, must be used when access in a read-write fashion is required.
Second, any change of data in the read-write tree won't show up in the read-only tree until you "release" the volume contents with the vos release command.

As said, read-write paths for read-only mounts are prefixed by a leading dot. Let's verify this:

fs lsm /afs/spinlock.hr

'/afs/spinlock.hr' is a mount point for volume '#spinlock.hr:root.cell'

fs lsm /afs/.spinlock.hr

'/afs/.spinlock.hr' is a mount point for volume '%spinlock.hr:root.cell'
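As a practical note on the release step mentioned above: after changing data through the read-write path, you publish the changes to the read-only replicas with vos release. A minimal example, assuming you hold admin tokens and using the two volumes that afs-rootvol replicated earlier:

vos release root.cell
vos release root.afs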

File reading and writing

Equipped with the above information, let's visit /afs/spinlock.hr/, look around, and then try to read and write files.

cd /afs/spinlock.hr

ls -al

total 14
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 .
drwxrwxrwx 2 root root 8192 2008-06-25 02:05 ..
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 service
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 user

echo TEST > testfile

-bash: testfile: Read-only file system

cd /afs/.spinlock.hr

echo TEST > testfile

-bash: testfile: Permission denied

Good. The first write was denied because we were in the read-only AFS mount point. The second write was denied because we had not yet obtained the Kerberos/AFS identity that carries the necessary write privilege.

Now let's list access permissions (AFS ACL) for the directory, and then obtain AFS admin privileges that will allow us to write files. Note that we first establish our Kerberos identity using kinit, and then obtain the matching AFS token using aklog. Aklog obtains a token automatically and without further prompts, on the basis of the existing Kerberos ticket.

cd /afs/.spinlock.hr

fs la .

Access list for . is
Normal rights:
  system:administrators rlidwka
  system:anyuser rl

kinit root/admin; aklog

Password for root/admin@SPINLOCK.HR: PASSWORD

klist -5

Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@SPINLOCK.HR

Valid starting     Expires            Service principal
06/29/08 19:38:05  06/30/08 05:38:05  krbtgt/SPINLOCK.HR@SPINLOCK.HR
        renew until 06/30/08 19:38:05
06/29/08 19:38:12  06/30/08 05:38:05  afs@SPINLOCK.HR
        renew until 06/30/08 19:38:05

tokens

Tokens held by the Cache Manager:

User's (AFS ID 1) tokens for afs@spinlock.hr [Expires Jun 30 05:38]
   --End of list--

At this point, writing the file succeeds:


echo TEST > testfile

cat testfile

TEST

rm testfile

Creating users

As we've seen in previous chapters, to obtain read or write privilege in AFS, you authenticate to Kerberos using kinit and then to AFS using aklog.

We're dealing with two separate authentication databases here — the Kerberos database, and the AFS "Protection Database" or PTS.

That means all users have to exist in both Kerberos and AFS if they want to access the AFS data space in an authenticated fashion. The only reason we did not have to add the root/admin user to the AFS PTS is that this was done automatically by virtue of afs-newcell.

So let's add a regular AFS user. We're going to add user "mirko", which should already exist in Kerberos if you've followed the MIT Kerberos 5 Guide, section "Creating first unprivileged principal". Make sure you hold the administrator Kerberos ticket and AFS token, and then execute:

pts createuser mirko 20000

User mirko has id 20000

You will notice that Kerberos and AFS do not require any use of sudo. (Actually, we do use sudo to invoke Kerberos' kadmin.local, but only because we want to access the local Kerberos database directly by opening the on-disk database file.) Kerberos and AFS privileges are determined solely by the tickets and tokens one has obtained; they have nothing to do with traditional Unix privileges and are not tied to particular usernames or IDs.
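If you want to inspect the new PTS entry, two quick read-only checks (run while still holding the admin token) are:

pts examine mirko
pts membership mirko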

Creating and mounting volumes

Now that we have a regular user "mirko" created in both Kerberos and AFS, we want to create an AFS data volume that will correspond to this user and be "mounted" in the location of the user's home directory in AFS.

This is an established AFS practice — every user gets a separate volume, mounted in the AFS space as their home directory. Depending on specific uses, further volumes might also be created for the user and mounted somewhere under their toplevel home directory, or even somewhere else inside the cell file structure.

Make sure you still hold the administrator Kerberos ticket and AFS token, and then execute:

vos create afs1 a user.mirko 200000

Volume 536997357 created on partition /vicepa of afs1


vos examine user.mirko

user.mirko                        536997357 RW          2 K  On-line
    afs1.spinlock.hr /vicepa 
    RWrite  536997357 ROnly          0 Backup          0 
    MaxQuota     200000 K 
    Creation    Sun Jun 29 18:06:43 2008
    Copy        Sun Jun 29 18:06:43 2008
    Backup      Never
    Last Update Never

    RWrite: 536997357

    number of sites -> 1
       server afs1.spinlock.hr partition /vicepa RW Site 

Having the volume, let's mount it to a proper location. We will use a "hashed" directory structure with two sublevels, so that the person's home directory will be in /afs/spinlock.hr/user/p/pe/person/ (instead of directly in user/person/). Follow this AFS convention and you will be able to use libnss-afs and 3rd party management scripts without modification.

cd /afs/spinlock.hr/user

mkdir -p m/mi

fs mkm m/mi/mirko user.mirko -rw

Let's view the volume and directory information:


fs lsm m/mi/mirko

'm/mi/mirko' is a mount point for volume '#user.mirko'


fs lv m/mi/mirko

File m/mi/mirko (536997357.1.1) contained in volume 536997357

Volume status for vid = 536997357 named user.mirko
Current disk quota is 200000
Current blocks used are 2
The partition has 85448567 blocks available out of 140861236

Setting permissions

Let's view the permissions on the new directory and allow user full access:


fs la m/mi/mirko

Access list for m/mi/mirko is
Normal rights:
  system:administrators rlidwka

fs sa m/mi/mirko mirko all

fs la !:2

Access list for m/mi/mirko is
Normal rights:
  system:administrators rlidwka
  mirko rlidwka

(On a side note, the "!:2" above is a Bash construct that will insert the 3rd word from the previous line. Expanded, that line should read "fs la m/mi/mirko")

Now switch to user mirko and verify you've got access to the designated home directory:

unlog; kdestroy

kinit mirko; aklog

Password for mirko@SPINLOCK.HR: PASSWORD

cd /afs/spinlock.hr/user/m/mi/mirko

echo IT WORKS > test

cat test

IT WORKS

Volume quotas

AFS volumes have a concept of "volume quota", or the maximum amount of data a volume can hold before denying further writes with the appropriate "Quota exceeded" error. It's important to know that AFS volumes do not take a predefined amount of disk space as physical disk partitions do; you can create thousands of volumes, and they only take as much space as there is actual data on them. Likewise, AFS volume quotas are just limits that do not affect volume size except "capping" the maximum size of data a volume can store.

Let's list volume data size quota and increase it from the default 5 MB to 100 MB:

cd /afs/.spinlock.hr

fs lq

Volume Name                   Quota      Used %Used   Partition
root.cell                      5000        28    1%         38% 

fs sq . 100000

fs lq

Volume Name                   Quota      Used %Used   Partition
root.cell                    100000        28    1%         38% 

Serving metadata

The material covered so far in the MIT Kerberos 5 Guide and this OpenAFS Guide has gotten us to a point where we can create users in Kerberos and AFS, create and mount users' data volumes, authenticate using kinit and aklog, and read and write files in the users' volumes with full permissions.

In other words, it seems as if we're a step away from our goal — a true networked and secure solution for centralized logins with exported home directories.

There's one final thing missing, and it's the support for serving user "metadata". As explained in the section called “Introduction”, metadata will come from either LDAP or libnss-afs.

If you've followed and implemented the setup described in the OpenLDAP Guide, you already have the metadata taken care of. However, let's say a few words about it anyway to broaden our horizons.

Collectively, metadata is the information traditionally found in system files /etc/passwd, /etc/group and /etc/shadow.

The metadata necessary for a successful user login includes four elements: Unix user ID, Unix group ID, home directory and the desired shell.

But let's take a look at the complete list of common user metadata and note, in parentheses, which software components can store each item:

  • Username (all)

  • Password (Kerberos or LDAP — but storing passwords in LDAP is out of our scope)

  • User ID (LDAP or libnss-afs)

  • Group ID (LDAP)

  • GECOS information (LDAP)

  • Home directory (LDAP or libnss-afs)

  • Preferred shell (LDAP or libnss-afs)

  • Group membership (LDAP)

  • Password aging (Kerberos)

You may notice LDAP seems like a "superset" of libnss-afs. And it really is, which can be an advantage or a disadvantage, depending on the situation. Here's why:

LDAP is a standalone solution that can be used to create network infrastructures based on the "magic trio" — Kerberos, LDAP and AFS. It is flexible and can serve arbitrary user and system information besides the necessary metadata. Can you think of a few examples of how this would be useful? On a lower level, you could use LDAP to store extra group membership information or per-user host access information; on a higher level, you could use LDAP to store a person's image, birth date, or a shared calendar available to all user applications. However, this flexibility comes at the cost of administering yet another separate database. (Kerberos, AFS and LDAP each have their own database, and you have to keep them in sync; without proper tools, this could become a burden.)

libnss-afs, on the other hand, is an AFS-dependent module that serves the metadata out of the AFS PTS database. It is simple, and limited: the structure of the PTS database is such that only certain information can be stored there, and nothing else. For fields that cannot be represented in PTS, libnss-afs outputs a "one size fits all" default value. For example, as there is no space for GECOS information in the PTS, everyone's GECOS is set to their username; as there is no group ID, everyone's group ID is set to group 65534 (nogroup); and as there is no home directory, everyone's home directory is set to /afs/cell.name/user/u/us/user/. libnss-afs may suit those who prefer simplified administration over flexibility.

In this Guide, both the LDAP and the libnss-afs approaches are explained. Moving from libnss-afs to LDAP is easy; if in doubt, pick libnss-afs as the simpler start-up solution.

Metadata via LDAP

A complete LDAP setup is explained in another article from the series, the OpenLDAP Guide. If you have followed and implemented the procedure, especially the part about modifying /etc/nsswitch.conf, then there's only one thing that should be done here — you should modify users' entries in LDAP to make their home directories point to AFS instead of to /home/.

Actually, you can symlink /home/ to AFS, and then no change in LDAP will be necessary. One benefit of this approach is that /home/ looks familiar to everyone. One drawback is that you need to symlink that directory to AFS on all machines where users will be logging in.

To create the symlinks, use:

sudo mv /home /home,old
sudo ln -sf /afs/.spinlock.hr/user /home
sudo ln -sf /afs/spinlock.hr/user /rhome

To literally change users' home directories in LDAP (to point to /afs/spinlock.hr/user/u/us/user/), construct an LDIF file and apply it with ldapmodify.

Here's an example for user mirko (who should already exist in your LDAP directory if you've followed the OpenLDAP Guide). Save the following as /tmp/homechange.ldif:

dn: uid=mirko,ou=people,dc=spinlock,dc=hr
changetype: modify
replace: homeDirectory
homeDirectory: /afs/spinlock.hr/user/m/mi/mirko

And apply using:

ldapmodify -c -x -D cn=admin,dc=spinlock,dc=hr -W -f /tmp/homechange.ldif
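To confirm the change was applied, you can read the attribute back; a minimal check, assuming your directory allows anonymous reads of homeDirectory (otherwise add the -D/-W options as above):

ldapsearch -x -LLL -b dc=spinlock,dc=hr uid=mirko homeDirectory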

Metadata via libnss-afs

As said, libnss-afs is an AFS-dependent approach to serving metadata, so it only makes sense to describe it in the context of the OpenAFS Guide.

Adam Megacz created libnss-afs on the basis of Frank Burkhardt's libnss-ptdb, which in turn was created on the basis of Todd M. Lewis' nss_pts. The primary motivation for libnss-afs has been the use at HCoop, the first non-profit corporation offering public AFS hosting and accounts.

Good. Let's move on to the technical setup:

libnss-afs must run in combination with nscd, which caches the replies from the AFS ptserver, so let's install nscd:

sudo apt-get install nscd

Once nscd is installed, edit its config file, /etc/nscd.conf to include the following:

enable-cache hosts no
persistent passwd no
persistent group no
persistent hosts  no

(Note that all of the above lines already exist in /etc/nscd.conf, although the formatting of the file is a bit strange and finding them is an exercise for your eyes. So you should not add these lines; just locate them in the config file and turn the appropriate "yes" values to "no".)

Then, restart nscd as usual with sudo invoke-rc.d nscd restart.

The libnss-afs homepage is at the libnss-afs project page, and the software can be downloaded as a Gitweb tarball. Once the tarball is unpacked, the Debian package can be built and installed by cd-ing into the libnss-afs/ directory and running:

sudo apt-get install libopenafs-dev

sudo dpkg-buildpackage

sudo dpkg -i ../libnss-afs*deb

After libnss-afs is installed, let's modify the existing lines in /etc/nsswitch.conf to look like the following:

passwd:  afs files
group:   afs files
shadow:  files

Metadata test

We are ready to test metadata retrieval:

sudo nscd -i passwd

id mirko

uid=20000(mirko) gid=65534(nogroup) groups=65534(nogroup)

getent passwd mirko

mirko:x:20000:65534:mirko:/afs/spinlock.hr/user/m/mi/mirko:/bin/bash

PAM configuration

The final step in this article pertains to integrating OpenAFS into the system authentication procedure. We want Kerberos ticket and OpenAFS token to be issued for users as they log in, without the need to run kinit and aklog manually after login.

Let's install the necessary OpenAFS PAM module:

sudo apt-get install libpam-afs-session

To minimize the chance of locking yourself out of the system during PAM configuration phase, ensure right now that you have at least one root terminal window open and a copy of the files available before starting on PAM configuration changes. To do so, execute the following in a cleanly started shell and leave the terminal open:

sudo su -
cd /etc
cp -a pam.d pam.d,orig

Note

If you break logins with an invalid PAM configuration, the above will allow you to simply revert to a known-good state by using the open root terminal and executing:

cp -a pam.d,orig/* pam.d/

After you've edited your PAM configuration as shown below, restart the services you will be connecting to. This isn't strictly necessary, but it ensures that the services will re-read the PAM configuration and not use any cached information.
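For example, if you will be testing logins over SSH (assuming the openssh-server package is installed), restart it the usual Debian way:

sudo invoke-rc.d ssh restart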

/etc/pam.d/common-auth

auth    sufficient        pam_unix.so nullok_secure
auth    sufficient        pam_krb5.so use_first_pass
auth    optional          pam_afs_session.so program=/usr/bin/aklog
auth    required          pam_deny.so

/etc/pam.d/common-session

session required pam_limits.so
session optional pam_krb5.so
session optional pam_unix.so
session optional pam_afs_session.so program=/usr/bin/aklog

Conclusion

At this point, you have a functional AFS site. Users, once created in the system, can log in and access their files anywhere on the network.

You can rely either on system login or on manually running kinit; aklog to obtain the Kerberos ticket and AFS token.

Once the token is obtained, you can access the protected AFS data space.

With the good foundation we've built, please refer to other available resources for further information on AFS.

For commercial consultation and infrastructure-based networks containing AFS, contact Spinlock Solutions or organizations listed on the OpenAFS support page.

The newest version of this article can always be found at http://techpubs.spinlocksolutions.com/dklar/afs.html.



Davor Ocelic
http://www.spinlocksolutions.com/


Copyright (C) 2008-2012 Davor Ocelic, <docelic@spinlocksolutions.com>
Spinlock Solutions, http://www.spinlocksolutions.com/

This documentation is free; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

It is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

 

 


Posted by liotier (212.85.xx.xx) on Mon 4 Aug 2008 at 13:18
I had been curious about AFS for quite a while, but I found it a bit intimidating - this walkthrough article makes it much clearer, thank you!

Now that I have a better idea of what setting up AFS entails, I still wonder about how relevant it is for small networks compared to Samba. For larger networks there is no question that AFS is superior, but the initial cost is spread over a large user base. I love AFS's constant paths, security and WAN suitability - but having to set up the whole directory/Kerberos/AFS stack for less than a dozen hosts seems a bit excessive compared to Samba's simplicity, even accounting for Samba's limitations.

Does anyone here have experience of AFS in a SOHO environment? I would love to hear about it - and I'm sure plenty of enthusiasts and small network administrators would too.


Posted by paulgear (203.206.xx.xx) on Wed 6 Aug 2008 at 09:03
Chalk me up for a "me too" on this one. From the article, it seems like it would be too much administrative overhead to deploy AFS for even a 10-user small business.


Posted by AJxn (77.53.xx.xx) on Thu 7 Aug 2008 at 13:52
Check what is a "one time setup" and what has to be done for each machine.
Then you have the possibility to really calculate the administrative overhead for each machine. You also get better security than SAMBA, which also has to be worth something.


Posted by docelic (78.134.xx.xx) on Thu 7 Aug 2008 at 16:11
The overhead of AFS should be viewed in the context of a distributed filesystem and the features it gives you. The picture probably isn't completely clear as, like I mention in the text, I've skipped enormous amounts of "material" to get to a practical guide instead of a long prose-like article.

The overhead we're talking about here is mostly administrative and not user-side.
So I would say the question is just whether your 10 small installations justify the time and effort put into learning OpenAFS.
But once you do get to know how it works and how to set it up, it is great for big and small installations alike.


Posted by docelic (78.134.xx.xx) on Thu 7 Aug 2008 at 16:15
Maybe worth mentioning, on machines where you want to run the client without any server elements, the setup procedure is quite simple (basically just something like module-assistant a-i openafs && apt-get install openafs-client && adjusting PAM setup).


Posted by Anonymous (67.140.xx.xx) on Sun 17 Aug 2008 at 05:49
Quote:

"The transcript below assumes you've set up Kerberos and created policy service as explained in the MIT Kerberos 5 Guide;"

That URL doesn't describe how to create the policy service. You'll need to hit the Debian GNU Kerberos setup guide here for an explanation: http://techpubs.spinlocksolutions.com/dklar/kerberos.html .


Posted by docelic (78.134.xx.xx) on Sun 17 Aug 2008 at 12:10
Right, policy creation was added in later versions of the Kerberos guide; thanks for the note.
Also, the "-policy service" part can just as well be omitted, and it won't make any noticeable difference.


Posted by Anonymous (82.233.xx.xx) on Sat 23 Aug 2008 at 10:58
I have been playing with AFS for a few days and can't get better performance than 10 MB/s on a gigabit network. It is also quite hard to find any help on AFS "generally", google only has poor answers.
I also tried to find some benchmark of the filesystem and surprisingly could not find any ...
So, what kind of performance should I achieve on a gigabit network with a raw dd write ?


Posted by docelic (78.134.xx.xx) on Sat 23 Aug 2008 at 12:50
For general OpenAFS help, quickest resource is IRC channel #openafs on the FreeNode IRC network (irc.gnu.org for example).

Note that AFS locally caches all retrieved data, and doesn't spend any resources on checking whether the data is valid or expired, because the notification is done in a "push" manner by the server.

Once your data is in the cache, performance is comparable to local disk reads.
Acceptable cache sizes go anywhere from as little as 50 MB to as much as 20 GB.


Posted by Anonymous (24.81.xx.xx) on Mon 25 Aug 2008 at 00:29
Does anyone have any tips on backing up an OpenAFS partition? This system looks very interesting to me, however, I do need to keep offline/offsite backups of the data housed on the systems i would like to convert.

Does anyone have any suggestions, pointers, or links I should take a look at?


Posted by docelic (78.134.xx.xx) on Mon 25 Aug 2008 at 21:32
There are a few options:

First, in case you didn't know, each volume can have up to 8 read-only replicas at other servers in the network. So if you replicate a volume in a few places, and some of them go down, no work is needed as other replicas and the main volume will still be accessible. If the main (read-write) volume goes down, however, then you can quickly convert one of the replicas from read-only to read-write, and the operations may resume. (replicas are created using "vos addsite", and this method assumes that you periodically "release" written data onto replicas using "vos release" command).

Another AFS-specific solution is to use vos backup and vos backupsys tools which create backup volumes in AFS (If your volume is named "abc", it creates "abc.backup"). AFS has built-in support for this with "bos create" command, and this is, of course, a solution for the case where you want to protect from user error, not hardware failures.

But if you want to back up files in the traditional way, so that you have them on a separate backup medium, there are a couple options as well:

1) create user system:backup (or similar) and give it permission on all directories you want to back up.. then have a script which authenticates as that user, and backs up all files it can access in your afs space. (This is simple, but has the drawback of not being AFS-aware, so you have to back up afs-specific data (volume names and volume infos, mount points, etc.) separately).

2) You can use the "dump" option of the "vos" command to dump volumes. You can load them back with vos restore.

3) You can use http://www.amanda.org/ (GPL, possibly with amanda-afs plugin) or http://www.teradactyl.net/ TiBS (commercial) backup solution to also dump files on a per-volume level.

4) You can use "Backup PC" software (GPL, http://backuppc.sourceforge.net/) and the AFS addon for it (http://www.physics.unc.edu/~stephen/BackupPC4AFS/).

Dumping on volume-level is perfectly alright as long as you have at least one AFS cell available into which you can restore the volumes to access the backup files. (That can be the same cell whose files you are restoring, or a separate cell - which you can create just for the purpose of restore).

But if you want access to individual files at any time directly from the backup, then you'll probably want to back up on the file level. And, of course, keep AFS-specific info backed up separately so that you can restore the cell to an identical layout as before the crash.


Posted by docelic (78.134.xx.xx) on Mon 25 Aug 2008 at 21:34
When I mention "bos create" for backup volumes, that's when you want to automate the backup by creating something like a cron job in AFS.
But to create individual backups, you use "vos backup" of course.


Posted by Anonymous (82.225.xx.xx) on Sat 17 Jan 2009 at 15:41
If it's not done, afs-newcell will fail to create any volume on it.
So if you don't have a separate partition to mount on /vicepa, you'll have to mount a file as a loopback device on it


Posted by Anonymous (88.162.xx.xx) on Mon 9 Nov 2009 at 16:39
Seconded. A bind mount to an empty directory like /var/lib/openafs/vicepa works, however.


Posted by Anonymous (141.35.xx.xx) on Mon 29 Nov 2010 at 13:57
Anyone with root access on a client (host) can simply copy a Kerberos ticket to gain access to that user's data on the server -- we successfully tried that years ago.
You are always trusting the host too. None of the mentioned technologies changes anything -- neither "virtually" nor practically.
The article is plain wrong regarding the security of AFS and Kerberos, and in that regard hazardous to any beginner.

Apart from that it's a good starting point for a running AFS setup in a trusted environment.

