Weblogs for dkg
gpg gets it absolutely right by not asking users this question by default. People should not be enabling this option.
Some background: gpg's --ask-cert-level option allows the user who is making an OpenPGP identity certification to indicate just how sure they are of the identity they are certifying. The user's choice is then mapped into four levels of OpenPGP certification of a User ID and Public-Key packet, which i'll refer to by their signature type identifiers in the OpenPGP spec:
- 0x10: Generic certification
- The issuer of this certification does not make any particular assertion as to how well the certifier has checked that the owner of the key is in fact the person described by the User ID.
- 0x11: Persona certification
- The issuer of this certification has not done any verification of the claim that the owner of this key is the User ID specified.
- 0x12: Casual certification
- The issuer of this certification has done some casual verification of the claim of identity.
- 0x13: Positive certification
- The issuer of this certification has done substantial verification of the claim of identity.
Most OpenPGP implementations make their "key signatures" as 0x10 certifications. Some implementations can issue 0x11-0x13 certifications, but few differentiate between the types.
By default (if --ask-cert-level is not supplied), gpg issues certificates ("signs keys") using 0x10 (generic) certifications, with the exception of self-sigs, which are made as type 0x13 (positive).
When interpreting certifications, gpg does distinguish between different certifications in one particular way: 0x11 (persona) certifications are ignored; other certifications are not. (users can change this cutoff with the --min-cert-level option, but it's not clear why they would want to do so).
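That interpretation rule is simple enough to sketch as a toy model (the code and names below are my own illustration, not gpg's actual implementation):

```python
# Toy model of the four OpenPGP certification types and gpg's default
# filtering (persona certifications are disregarded). This is an
# illustration, not gpg's actual code; gpg's --min-cert-level defaults
# to 2, and generic (level 0) certifications are always accepted.

CERT_TYPES = {
    0x10: "generic",
    0x11: "persona",
    0x12: "casual",
    0x13: "positive",
}

def counts_toward_validity(sig_type: int, min_cert_level: int = 2) -> bool:
    """Mimic gpg's cutoff: ignore certifications whose level (1-3) is
    below min_cert_level, but always accept generic (level 0) ones."""
    level = sig_type - 0x10
    if level == 0:
        return True
    return level >= min_cert_level

# With the defaults, only 0x11 (persona) is ignored:
assert [t for t in sorted(CERT_TYPES) if not counts_toward_validity(t)] == [0x11]
```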
So there is no functional gain in declaring the difference between a "normal" certification and a "positive" one, even if there were a well-defined standard by which to assess the difference between the "generic" and "casual" or "positive" levels; and if you're going to make a "persona" certification, you might as well not make one at all.
And it gets worse: the problem is not just that such an indication is functionally useless. Encouraging people to make these kinds of assertions leads to leaks of a more-detailed social graph than the default blanket 0x13-for-self-sigs, 0x10-for-everyone-else policy would allow.
A richer public social graph means more data that can feed the ravenous and growing appetite of the advertising-and-surveillance regimes. i find these regimes troubling. I admit that people often leak much more information than this indication of "how well do you know X" via tools like Facebook, but that's no excuse to encourage them to leak still more or to acclimatize people to the idea that the details of their personal relationships should by default be public knowledge.
Lastly, the more we keep the OpenPGP network of identity certifications (a.k.a. the "web of trust") simple, the easier it is to make sensible and comprehensible and predictable inferences from the network about whether a key really does belong to a given user. Minimizing the complexity and difficulty of deciding to make a certification helps people streamline their signing processes and reduces the amount of cognitive overhead people spend just building the network in the first place.
Jane Q. Public <email@example.com>

This is clean, clear, and unambiguous.
However, some tools (gpg, enigmail among others) ask the user to provide a "Comment:" field when they are choosing a new User ID (e.g. when making a new key). These UI prompts are evil. The savvy user knows to avoid entering anything in this field, so that they can end up with a User ID like the one above. The user who provides something here (perhaps even something inconsequential like "I like strawberries", due to not being sure what should go in this little box) will instead end up with a User ID like:
Jane Q. Public (I like strawberries) <firstname.lastname@example.org>

This is bad. This means that Jane is asking the people who certify her key+userid to certify whether she actually likes strawberries (how could they know? what if she changes her mind? should they revoke their certifications?) and anywhere that she is referred to by name will include this mention of strawberries. This is not Jane's identity, and it doesn't belong in an OpenPGP User ID packet.
Furthermore, since User IDs are atomic, if Jane wants to change the comment field (but leave her name and e-mail address the same), she will instead need to create a new User ID, publish it, get everyone who has certified her old key+userid to certify the key+newuserid, and then revoke the old one.
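The atomicity problem is easy to see if you sketch how the conventional "Name (Comment) <email>" string is put together (a toy illustration; the helper function and the address are mine, not from any OpenPGP library):

```python
# Toy illustration of why the comment is a trap: the conventional
# "Name (Comment) <email>" form is a single atomic string inside the
# OpenPGP User ID packet. The helper below is my own invention, not
# part of any OpenPGP library.

def make_user_id(name: str, email: str, comment: str = "") -> str:
    if comment:
        return f"{name} ({comment}) <{email}>"
    return f"{name} <{email}>"

with_comment = make_user_id("Jane Q. Public", "jane@example.org", "I like strawberries")
without = make_user_id("Jane Q. Public", "jane@example.org")

# Same person, same address, but different strings: certifications made
# over the first User ID say nothing about the second one.
assert with_comment == "Jane Q. Public (I like strawberries) <jane@example.org>"
assert with_comment != without
```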
It is difficult already to help people understand and participate in the certification network that forms the backbone of OpenPGP's so-called "web of trust". These bogus comment fields make an already-difficult task harder. And all because of strawberries!
Tools like enigmail and gpg should not expose the "Comment:" field to users who are generating keys or choosing new User IDs. If they feel it absolutely must be present for some weird corner case that 0.1% of their users will have, they could require that the user enters some sort of "expert mode" before prompting the user to do something that is likely to be a mistake.
There is almost no legitimate reason for anyone to use this field. Let's go through some examples of how people actually use it, taken from some keys i have lying around (identifying marks have been changed to protect the innocent who were duped by this bad UI choice, but you can probably find similar examples on the public keyserver network if you want to hunt around):
- domain repetition
John Q. Public (Debian) <jqpublic@debian.org>

We know you're with debian already from the @debian.org address. If the comment is meant to contrast with your other address (firstname.lastname@example.org), so that people know where to send you debian-related e-mail, it is still not necessary.
Lest you think i'm just calling out debian developers, people with @ubuntu.com addresses and (Ubuntu) comments (as well as @example.edu addresses and (Example University) comments and @example.com addresses and (Example Corp) comments) are out there too.
- nicknames already evident
John Q. Public (Johnny) <email@example.com>
John Q. Public (wackydude) <firstname.lastname@example.org>

Again, the information these comments provide offers no clear disambiguation from the info already contained in the name and e-mail address, and just muddies the water about what the people who certify this identity should actually be trying to verify before they make their certification.
- where you work or study

John Q. Public (Work) <email@example.com>

If John's correspondents know that he works for Example Corp, then "Work" isn't helpful to them, because they already know this is the address they write to him at work. If they don't know that, then they probably aren't writing to him at work, so they don't need this comment either. The same problem appears (for example) with literal comments of (School) next to an @example.edu address.
- This is my nth try at this crazy system!
John Q. Public (This is my second key) <firstname.lastname@example.org>
John Q. Public (This is my primary key) <email@example.com>
John Q. Public (No wait really use this one) <firstname.lastname@example.org>

OpenPGP is confusing, and it can be tricky to get it right. We all know :) This is still not part of John's identity. If you want to designate a key as your preferred key, keep it up-to-date, get people to certify it, and revoke or expire your old keys. People who care can look at the timestamps on your keys and tell which ones are the most recent ones. You do have a revocation certificate for your key handy just in case you lose it, right?
- Don't use this key
John Q. Public (Old key, do not use) <email@example.com>
John Q. Public (Please only use this through September 2004) <firstname.lastname@example.org>

This kind of sentiment is better expressed by revoking the key in question or setting an expiration time on the key or User ID self-sig directly. This sentiment is not part of John's identity, and shouldn't be included as though it were.
- plain confusion

John Q. Public (none) <email@example.com>

sigh. This is clearly someone getting mixed up by the UI.
- I use strong crypto!
John Q. Public (3092 bits of RSA) <firstname.lastname@example.org>

This comment refers to the strength of the key material, or the algorithms preferred by the user. Since the User ID is associated with the key material already, people who care about this information can get it from the key directly. This is also not part of the user's actual identity.
- "no comment"
John Q. Public (no comment) <email@example.com>

This is actually not uncommon (search a keyserver for "no comment" and some will reply "too many matches!"). It shows that the user is witty and can think on their feet (at least once), but it is still not part of the user's identity.
I'm sure that legitimate uses for the comment field exist. I've even seen one or two of them. But the fact that one or two cases exist does not excuse the fact that the overwhelming majority of these comments in OpenPGP User IDs are a mistake, caused only by bad UI design that prompts people to put something (anything!) in the empty box (or on the command prompt, depending on your preference).
And this mistake is one of the thousand papercuts that inhibits the robust growth of the OpenPGP certification network that some people call the "web of trust". Let's avoid them so we can focus on the other 999 papercuts.
Please don't use comments in your OpenPGP User ID. And if you make a user interface for OpenPGP that prompts the user to decide on a new User ID, please don't include a prompt for "Comment" unless the user has already certified that they are really and truly a special special snowflake.
I saw an ad on the side of a bus recently. It shows a smiling, attractive man, with text next to him saying something like "I told 9000 people what smartphone to buy".
What happened here?
- A TV channel bought an ad on the side of a bus
- trying to demonstrate to other advertisers
- about how good their viewers are at providing advertising-by-proxy
- on services that themselves are mostly advertising platforms
- to sell devices that are themselves often used for advertising delivery.
And almost all of these steps count as positive economic activity when we try to measure whether the US economy is healthy.
I am depressed by this tremendous waste of time and effort.
This post is about something i made successfully with free software (and some non-software crafting): I made a Woolly Mammoth for my nephew!
I documented the pattern (with pictures!) that i came up with using Inkscape (and used markdown, pandoc, emacs, pdftk, and other free software in the process). i've also published the source for the pattern via git if you want to modify it:
git clone git://lair.fifthhorseman.net/~dkg/woolly
Writing up the documentation makes me realize that i don't know of any software tools designed specifically for facilitating fabric/craft construction. Some interesting software ideas:
- Make 3-D models showing the partly assembled pieces, derived from the flat pattern. Maybe something like blender would be good for this?
- Take a 3D-modeled form and produce some candidate patterns for cutting and sewing? This seems like it is an interesting theoretical problem: given a set of (marked?) 3D surfaces and a set of approximation constraints, have the tool come up with a reasonable set of 2D patterns that could be cut and assembled using a set of standard operations into something close to the 3D shape.
- a "pattern lint checker" (maybe an inkscape extension?) that would let you mark certain segments of an SVG as related to other segments (i.e. the two sides of a seam), and could give you warnings when one side was longer than the other (within some level of tolerance)
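That last idea, the seam-length warning, could start out as something this simple (a sketch with plain numbers standing in for measured SVG path lengths; the function name is my own invention):

```python
# Sketch of a "pattern lint" seam check: warn when the two sides of a
# seam differ in length beyond a tolerance. In a real tool the lengths
# would be measured from the SVG paths marked as a seam pair; here
# they're plain numbers.
from typing import Optional

def lint_seam(side_a: float, side_b: float, tolerance: float = 0.02) -> Optional[str]:
    """Return a warning if the seam sides mismatch by more than
    `tolerance` (a fraction of the longer side), else None."""
    longer = max(side_a, side_b)
    if longer == 0:
        return None
    mismatch = abs(side_a - side_b) / longer
    if mismatch > tolerance:
        return f"seam mismatch: {side_a} vs {side_b} ({mismatch:.1%} off)"
    return None

assert lint_seam(100.0, 100.5) is None       # within the 2% default tolerance
assert lint_seam(100.0, 90.0) is not None    # 10% mismatch triggers a warning
```

An inkscape extension would wrap something like this around path-length measurement and a UI for pairing up seam edges.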
Anyone have any ideas?
Well, this time, the power supply broke. As in, dead, no lights, no fan, no nothing. No problem, though, the disk is still good, and i've got a spare machine lying around; and the spare is actually superior hardware to the old machine so it'll be an upgrade in addition to a fix. Nice! So i transplant the disk and fire up the new chassis.
But WinXP fails to boot with a lovely "0x0000007b" BSOD. The internet tells me that this might mean it can't find its own disk. OK, pop into the new chassis' BIOS, tell it to run the SATA ports in "legacy IDE" mode, and try again.
Now we get a "0x0000007e" BSOD. Some digging on the 'net makes me think it's now complaining about the graphics driver. Hmm. Well, i figure i can probably work around that by installing new drivers from Safe Mode. So i reboot into Safe Mode.
Success! It boots to the login screen in Safe Mode. And, mirabile dictu, i happen to know the Administrator password. I put it in, and get a message that this Windows installation isn't "activated" yet -- presumably because the hardware has changed out from under it. And by the way, i'm not allowed to log in via safe mode until it's activated. So please reboot to "normal" Windows and activate it first.
Except, of course, the whole reason i'm booting into safe mode was because normal Windows gives a BSOD. Grrrr. Who thought up this particular lovely catch-22?
OK, change tactics. Scavenging the scrap bin turns up a machine with a failed mainboard, but a power supply with all the right leads. It's rated for about 80W less than the old machine's failed supply, but i figure if i rip out the DVD-burner and the floppy drive maybe it will hold. Oh, and the replacement power supply doesn't physically fit the old chassis, so it hangs halfway out the back and sort of rattles around a bit. I sacrifice the rest of the scrap machine, rip out its power supply, stuff the power supply into the old chassis, swap the original disk back in, and ... it boots successfully, finally.
That was the shorter version of the story :P
So now my colleague has a horrible mess of a frankencomputer which is more likely to fail again in the future, instead of a nice shiny upgrade. Why? Because Microsoft's need to control the flow of software takes priority over the needs of their users.
This is what you get when you let Marketing and BizDev drive your technical decisions.
Do i still need to explain why i prefer free software?
printmimestructure reads a message from stdin, and prints a visualisation of its structure, like this:
0 dkg@alice:~$ printmimestructure < 'Maildir/cur/1269025522.M338697P12023.monkey,S=6459,W=6963:2,Sa'
└┬╴multipart/signed 6546 bytes
 ├─╴text/plain inline 895 bytes
 └─╴application/pgp-signature inline [signature.asc] 836 bytes
0 dkg@alice:~$

You can fetch it with git if you like:
git clone git://lair.fifthhorseman.net/~dkg/printmimestructure

It feels silly to treat this ~30 line script as its own project, but i don't know of another simple tool that does this. If you know of one, or of something similar, i'd love to hear about it in the comments (or by sending me e-mail if you prefer).
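For the curious, the core idea fits in a few lines of Python using the standard email module. This is my own quick approximation, not the actual printmimestructure script, and its output format is simpler:

```python
# A quick sketch of a MIME-structure printer using only the Python
# standard library. This is NOT the actual printmimestructure script;
# the function name and the plain-indentation output are my own
# simplification.
import email
from email.message import Message

def mime_structure(msg: Message, depth: int = 0) -> list:
    """Return one line per MIME part, indented by nesting depth."""
    lines = ["  " * depth + msg.get_content_type()]
    if msg.is_multipart():
        for part in msg.get_payload():
            lines.extend(mime_structure(part, depth + 1))
    return lines

raw = (
    "MIME-Version: 1.0\n"
    'Content-Type: multipart/signed; boundary=XX; protocol="application/pgp-signature"\n'
    "\n"
    "--XX\n"
    "Content-Type: text/plain\n"
    "\n"
    "hello\n"
    "--XX\n"
    "Content-Type: application/pgp-signature\n"
    "\n"
    "fake signature\n"
    "--XX--\n"
)
print("\n".join(mime_structure(email.message_from_string(raw))))
```

The real script also reports part sizes, dispositions, and filenames, which the email module exposes as well.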
If it's useful for others, I'd be happy to contribute printmimestructure to a project of like-minded tools. Does such a project exist? Or if people think it would be handy to have in debian, i can also package it up, though that feels like it might be overkill.
and oh yeah, as always: bug reports, suggestions, complaints, and patches are welcome :)
I read Russ Allbery's posts about Aaron and "slacktivism" with much appreciation. I had been ambivalent about signing the whitehouse.gov petition asking for the removal of the prosecutor for overreach, because I generally distrust the effectiveness of online petitions (and offline petitions, for that matter). But Russ's analysis convinced me to go ahead and sign it. The petition is concrete, clear (despite wanting a grammatical proofread), and actionable.
For people willing to go beyond petition signing to civil disobedience, The Aaron Swartz Memorial JSTOR Liberator is an option. It makes it straightforward to potentially violate the onerous JSTOR terms of service by re-publishing a public-domain article from JSTOR to archive.org, where it will be accessible to anyone directly.
As someone who builds and maintains information/communications infrastructure, i have very mixed feelings about most online civil disobedience, since it often takes the form of a Distributed Denial of Service (DDoS) attack of some sort. DDoS attacks of public services are notoriously difficult to defend against without having huge resources to throw at the problem, so encouraging participation in a DDoS often feels a little bit like handing out cans of gasoline when you know that everyone is living in a house of straw.
However, the JSTOR Liberator is not a DDoS at all -- it's simply a facilitation of people breaking the JSTOR Terms of Service (ToS), some of the same terms that Aaron was facing charges for violating. So it is a well-targeted way to demonstrate that the prosecutions were overreaching.
I wanted to take issue with one of Russ' statements, though. In his second post about the situation, Russ wrote:
Social activism and political disobedience are important and often valuable things, but performing your social activism using other people's stuff is just rude. I think it can be a forgivable rudeness; people can get caught up in the moment and not realize what they're doing. But it's still rude, and it's still not the way to go about civil disobedience.

While i generally agree with Russ' thoughtful consideration of consent, I have to take issue with this elevation of some sort of hyper-extended property right over the moral agency that drives civil disobedience.
To use someone else's property for the sake of a just cause without damaging the property or depriving the owner of its use is not "forgivable rudeness" -- it's forgivable, laudable even, because it is just. And the person using the property doesn't need to be "caught up in the moment and not realize what they're doing" for it to be acceptable.
Civil disobedience often involves putting some level of inconvenience or discomfort on other people, including innocent people. It might be the friends and family of the activist who have to deal with the jail time; it might be the drivers stuck in a traffic jam caused by a demonstration; it might be the people forced to shop elsewhere because the store's doors are barricaded by protestors.
All of these people could be troubled by the civil disobedience more than MIT's network users and admins were troubled by Aaron's protest, and that doesn't make the protests described worse or "not the way to go about civil disobedience." The trouble highlights a more significant injustice, and in its troubling way does what it can to help right it.
Aaron was a troublemaker, and a good one. He will be missed.
Angela Starita wrote:
I'd like to save my work in a location where I can access it from any computer. I'm wary of using the mechanisms provided by Google and Apple. Can you suggest another service?

Here's my reply:
I think you're right to be wary of the big cloud providers, who have a tendency to inspect your data to profile you, to participate in arbitrary surveillance regimes, and to try to sell your eyeballs to advertisers.
Caveat: You have to trust the client machine too
But it's also worth remembering that the network service provider is not the only source of risk. If you really mean "accessing your data from any computer", that means the computer you're using to access the data can do whatever it wants with it. That is, you need to trust both the operator of these "cloud" services, *and* the administrator/operating system of the client computer you're using to access your data. For example, if you log into any "secure" account from a terminal in a web café, that leaves you vulnerable to the admins of the web café (and, in the rather-common case of sloppily-administered web terminals, vulnerable to the previous user(s) of the terminal as well).
Option 0: Portable physical storage
One way to have your data so that you can access it from "any computer" is to not rely on the network at all, but rather to carry a high-capacity MicroSD card (and USB adapter) around with you (you'll probably want to format the card with a widely-understood filesystem like FAT32 instead of NTFS or HFS+ or ext4, which are only understood by some of the major operating systems, but not all).
Almost every computer these days has either a microSD slot or a USB port, even computers that are not connected to any network. This approach also means that you don't have to rely on someone else to manage servers that keep your data available all the time.
Note that going the microSD route doesn't remove the caveat about needing to trust the client workstation you're using, and it has another consideration:
You'd be responsible for your own backup in the case of hardware failure. You're responsible for your own backup in the case of online storage too, of course -- but the better online companies are probably better equipped than most of us to deal with hardware failure. OTOH, they're also susceptible to some data loss scenarios that we aren't as individual humans (e.g. the company might go bankrupt, or get bought by a competitor who wants to terminate the service, or have a malicious employee who decides to take revenge). Backup of a MicroSD card isn't particularly hard, though: just get a USB stick that's the same size, and regularly duplicate the contents of the MicroSD card to the USB stick.
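That duplication step can be as simple as a recursive copy (a sketch; the mount-point paths are hypothetical examples, not anything your system is guaranteed to use):

```python
# Sketch of the "duplicate the MicroSD card onto a USB stick" backup
# step. The mount-point paths in the example call are hypothetical;
# use wherever your card and stick actually appear on your system.
import shutil

def duplicate(card_mount: str, stick_mount: str) -> None:
    """Recursively copy everything from the card to the stick,
    overwriting files that already exist on the stick."""
    shutil.copytree(card_mount, stick_mount, dirs_exist_ok=True)

# e.g. duplicate("/media/card", "/media/stick")
```

A plain copy like this never deletes anything from the stick; a fuller backup routine might also prune files that no longer exist on the card.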
One last consideration is storage size -- MicroSD cards are currently limited to 32GB or 64GB. If you have significantly more data than that, this approach might not be possible, or you might need to switch to a USB hard disk, which would limit your ability to use the data on computers that don't have a USB port (such as some smartphones).
Option 1: Proprietary service providers
If you don't think this portable physical storage option is the right choice for you, a couple of proprietary service providers (Wuala and SpiderOak) offer some flavor of "cloud" storage while claiming not to look at the contents of your data.
I'm not particularly happy with either of those, though, in part because the local client software they want you to run is proprietary, so there's no way to verify that they are actually unable to access the contents of your data. But i'd be a lot happier with either wuala or spideroak than i would be with google drive, dropbox, or iCloud.
Option 2: What i really want
I'm much more excited about the network-accessible, free-software, privacy-sensitive network-based storage tool known as git-annex assistant. The project is spearheaded by Joey Hess, who is one of the most skilled and thoughtful software developers i know of.
"assistant" (and git-annex, from which it derives) has the advantage of being pretty agnostic about the backend service (many plugins for many different cloud providers) and allows you to encrypt your data locally before sending it to the remote provider. This also means you can put your encrypted data in more than one provider, so that if one of the providers fails for some reason, you can be relatively sure that you have another copy available.
But "assistant" won't be ready for Windows or Android for several months (builds are available for Linux and Mac OS X now), so i don't know if it meets the criterion for "accessible from any computer". And, of course, even with the encryption capabilities, the old caveat about needing to trust the local client machine still applies.
However, during a recent upgrade, something wanted to pull in pulseaudio, which in turn wanted to pull in libasound2-plugins, and i distractedly (foolishly) let it. With that package installed, after an mpd restart, the CPU was completely thrashed (100% utilization) and music only played in stutters of 1 second interrupted by a couple seconds of silence. igor was unusable for its intended purpose.
Getting rid of pulseaudio was my first attempt to fix the stuttering, but the problem remained even after pulse was all gone and mpd was restarted.
Then i did a little search of which packages had been freshly installed in the recent run:
grep ' install .* <none> ' /var/log/dpkg.log

and used that to pick out the offending package.
After purging libasound2-plugins and restarting mpd, the igor is back in action.
Lesson learned: on low-overhead machines, don't allow apt to install recommends!
echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf

And it should go without saying, but sometimes i get sloppy: i need to pay closer attention during an "apt-get dist-upgrade".
Alas, i can find no documentation about how to change the default page margins system-wide for either Oo.o or LibreOffice. Surely this is something that can be done without a recompile. What am i missing?