if captured — but that doesn’t help you if Brad Pitt and his men in skirts skewer him with an arrow before he knows what’s hit him.

So you encipher your message with something like ROT-13, where every character is rotated halfway through the alphabet. They used to do this with non-worksafe material on Usenet, back when anyone on Usenet cared about worksafe-ness — A becomes N, B becomes O, C becomes P, and so forth. To decipher, you just add 13 more, so N goes back to A, O to B, yadda yadda.
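In code terms, a minimal sketch of that rotation looks something like this (plain Python, standard library only; the function name and the sample message are just illustrative):

    def rot13(text):
        out = []
        for ch in text:
            if "a" <= ch <= "z":
                out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
            elif "A" <= ch <= "Z":
                out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
            else:
                out.append(ch)  # spaces, digits and punctuation pass through untouched
        return "".join(out)

    scrambled = rot13("ATTACK AT DAWN")            # "NGGNPX NG QNJA"
    assert rot13(scrambled) == "ATTACK AT DAWN"    # rotating twice gets you home

The cipher and its inverse are the same operation, which is exactly why it's such a lame secret: anyone who knows the trick can undo it.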

Well, this is pretty lame: as soon as anyone figures out your algorithm, your secret is g0nez0red.

So if you’re Caesar, you spend a lot of time worrying about keeping the existence of your messengers and their payloads secret. Get that? You’re Augustus and you need to send a message to Brad without Caceous (a word I’m reliably informed means “cheese-like, or pertaining to cheese”) getting his hands on it. You give the message to Diatomaceous, the fleetest runner in the empire, and you encipher it with ROT-13 and send him out of the garrison in the pitchest hour of the night, making sure no one knows that you’ve sent it out. Caceous has spies everywhere, in the garrison and staked out on the road, and if one of them puts an arrow through Diatomaceous, they’ll have their hands on the message, and then if they figure out the cipher, you’re b0rked. So the existence of the message is a secret. The cipher is a secret. The ciphertext is a secret. That’s a lot of secrets, and the more secrets you’ve got, the less secure you are, especially if any of those secrets are shared. Shared secrets aren’t really all that secret any longer.

Time passes, stuff happens, and then Tesla invents the radio and Marconi takes credit for it. This is both good news and bad news for crypto: on the one hand, your messages can get to anywhere with a receiver and an antenna, which is great for the brave fifth columnists working behind the enemy lines. On the other hand, anyone with an antenna can listen in on the message, which means that it’s no longer practical to keep the existence of the message a secret. Any time Adolf sends a message to Berlin, he can assume Churchill overhears it.

Which is OK, because now we have computers — big, bulky, primitive mechanical computers, but computers still. Computers are machines for rearranging numbers, and so scientists on both sides engage in a fiendish competition to invent the cleverest method they can for rearranging numerically represented text so that the other side can't unscramble it. The existence of the message isn't a secret anymore, but the cipher is.

But this is still too many secrets. If Bobby intercepts one of Adolf’s Enigma machines, he can give Churchill all kinds of intelligence. I mean, this was good news for Churchill and us, but bad news for Adolf. And at the end of the day, it’s bad news for anyone who wants to keep a secret.

Enter keys: a cipher that uses a key is still more secure. Even if the cipher is disclosed, even if the ciphertext is intercepted, without the key (or a break), the message is secret. Post-war, this is doubly important as we begin to realize what I think of as Schneier’s Law: “any person can invent a security system so clever that she or he can’t think of how to break it.” This means that the only experimental methodology for discovering if you’ve made mistakes in your cipher is to tell all the smart people you can about it and ask them to think of ways to break it. Without this critical step, you’ll eventually end up living in a fool’s paradise, where your attacker has broken your cipher ages ago and is quietly decrypting all her intercepts of your messages, snickering at you.

Best of all, there’s only one secret: the key. And with dual-key crypto it becomes a lot easier for Alice and Bob to keep their keys secret from Carol, even if they’ve never met. So long as Alice and Bob can keep their keys secret, they can assume that Carol won’t gain access to their cleartext messages, even though she has access to the cipher and the ciphertext. Conveniently enough, the keys are the shortest and simplest of the secrets, too: hence even easier to keep away from Carol. Hooray for Bob and Alice.
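For the curious, here's a toy sketch of how dual-key (public-key) arithmetic lets Bob publish one half of his keypair while keeping the other half secret. This is RSA-style math with absurdly small, made-up numbers; real systems use keys hundreds of digits long plus padding schemes, so treat it strictly as an illustration (the modular inverse via pow needs Python 3.8+):

    p, q = 61, 53                        # Bob's secret primes
    n = p * q                            # 3233, published as part of Bob's public key
    e = 17                               # public exponent, also published
    d = pow(e, -1, (p - 1) * (q - 1))    # 2753, Bob's private exponent, never shared

    m = 65                               # Alice's cleartext, encoded as a number below n
    c = pow(m, e, n)                     # Alice enciphers with Bob's public key
    assert pow(c, d, n) == m             # only the holder of d can recover m

Carol can watch n, e and c go by on the wire, but without d (or a way to factor n) she's stuck, and the only thing Bob ever had to keep secret was d.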

Now, let’s apply this to DRM.

In DRM, the attacker is also the recipient. It's not Alice and Bob and Carol, it's just Alice and Bob. Alice sells Bob a DVD. She sells Bob a DVD player. The DVD has a movie on it — say, Pirates of the Caribbean — and it's enciphered with an algorithm called CSS — Content Scramble System. The DVD player has a CSS unscrambler.

Now, let’s take stock of what’s a secret here: the cipher is well-known. The ciphertext is most assuredly in enemy hands, arrr. So what? As long as the key is secret from the attacker, we’re golden.

But there’s the rub. Alice wants Bob to buy Pirates of the Caribbean from her. Bob will only buy Pirates of the Caribbean if he can descramble the CSS-encrypted VOB — video object — on his DVD player. Otherwise, the disc is only useful to Bob as a drinks-coaster. So Alice has to provide Bob — the attacker — with the key, the cipher and the ciphertext.

Hilarity ensues.

DRM systems are usually broken in minutes, sometimes days. Rarely, months. It’s not because the people who think them up are stupid. It’s not because the people who break them are smart. It’s not because there’s a flaw in the algorithms. At the end of the day, all DRM systems share a common vulnerability: they provide their attackers with ciphertext, the cipher and the key. At this point, the secret isn’t a secret anymore.
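A deliberately tiny model makes the structural problem obvious. Everything here is made up: XOR stands in for CSS, the key is invented, and no real player works this way, but the shape of the problem is the same:

    PLAYER_KEY = bytes.fromhex("0f1e2d3c4b")   # hypothetical key baked into every shipped player

    def descramble(vob, key=PLAYER_KEY):
        # XOR stream as a stand-in cipher; applying it twice is the identity
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(vob))

    disc = descramble(b"PIRATES OF THE CARIBBEAN")           # Alice presses the scrambled disc
    assert descramble(disc) == b"PIRATES OF THE CARIBBEAN"   # Bob's player plays it fine

    # But Bob owns the player, so Bob owns PLAYER_KEY too. Nothing stops him
    # from calling descramble() himself and saving the cleartext to disk.

Obfuscation can make the key harder to dig out of the hardware or the binary, but it has to be in there somewhere, or the disc would never play.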

—

2. DRM systems are bad for society

Raise your hand if you’re thinking something like, “But DRM doesn’t have to be proof against smart attackers, only average individuals! It’s like a speedbump!”

Put your hand down.

This is a fallacy for two reasons: one technical, and one social. They’re both bad for society, though.

Here’s the technical reason: I don’t need to be a cracker to break your DRM. I only need to know how to search Google, or Kazaa, or any of the other general-purpose search tools for the cleartext that someone smarter than me has extracted.

Raise your hand if you’re thinking something like, “But NGSCB can solve this problem: we’ll lock the secrets up on the logic board and goop it all up with epoxy.”

Put your hand down.

Raise your hand if you’re a co-author of the Darknet paper.

Everyone in the first group, meet the co-authors of the Darknet paper. This is a paper that says, among other things, that DRM will fail for this very reason. Put your hands down, guys.

Here’s the social reason that DRM fails: keeping an honest user honest is like keeping a tall user tall. DRM vendors tell us that their technology is meant to be proof against average users, not organized criminal gangs like the Ukrainian pirates who stamp out millions of high-quality counterfeits. It’s not meant to be proof against sophisticated college kids. It’s not meant to be proof against anyone who knows how to edit her registry, or hold down the shift key at the right moment, or use a search engine. At the end of the day, the user DRM is meant to defend against is the most unsophisticated and least capable among us.

Here’s a true story about a user I know who was stopped by DRM. She’s smart, college educated, and knows nothing about electronics. She has three kids. She has a DVD in the living room and an old VHS deck in the kids’ playroom. One day, she brought home the Toy Story DVD for the kids. That’s a substantial investment, and given the generally jam-smeared character of everything the kids get their paws on, she decided to tape the DVD off to VHS and give that to the kids — that way she could make a fresh VHS copy when the first one went south. She cabled her DVD into her VHS and pressed play on the DVD and record on the VCR and waited.

Before I go farther, I want us all to stop a moment and marvel at this. Here is someone who is practically technophobic, but who was able to construct a mental model of sufficient accuracy that she figured out that she could connect her cables in the right order and dub her digital disc off to analog tape. I imagine that everyone in this room is the front-line tech support for someone in her or his family: wouldn’t it be great if all our non-geek friends and relatives were this clever and imaginative?

I also want to point out that this is the proverbial honest user. She’s not making a copy for the next door neighbors. She’s not making a copy and selling it on a blanket on Canal Street. She’s not ripping it to her hard-drive, DivX encoding it and putting it in her Kazaa sharepoint. She’s doing something honest — moving it from one format to another. She’s home taping.

Except she fails. There's a DRM system called Macrovision embedded — by law — in every VCR that messes with the vertical blanking interval in the signal and causes any tape made in this fashion to fail. Macrovision can be defeated for about $10 with a gadget readily available on eBay. But our infringer doesn't know that. She's "honest." Technically unsophisticated. Not stupid, mind you — just naive.

The Darknet paper addresses this possibility, and even predicts what this person will do in the long run: she'll find out about Kazaa, and the next time she wants to get a movie for the kids, she'll download it from the net and burn it for them.

In order to delay that day for as long as possible, our lawmakers and big rightsholder interests have come up with a disastrous policy called anticircumvention.

Here’s how anticircumvention works: if you put a lock — an access control — around a copyrighted work, it is illegal to break that lock. It’s illegal to make a tool that breaks that lock. It’s illegal to tell someone how to make that tool. One court even held it illegal to tell someone where she can find out how to make that tool.

Remember Schneier's Law? Anyone can come up with a security system so clever that he can't see its flaws. The only way to find the flaws in security is to disclose the system's workings and invite public feedback. But now we live in a world where any cipher used to fence off a copyrighted work is off-limits to that kind of feedback. That's something that a Princeton engineering prof named Ed Felten and his team discovered when he submitted a paper to an academic conference on the failings in the Secure Digital Music Initiative, a watermarking scheme proposed by the recording industry. The RIAA responded by threatening to sue his ass if he tried it. We fought them, because Ed is the kind of client that impact litigators love: unimpeachable and clean-cut. The RIAA folded. Lucky Ed. Maybe the next guy isn't so lucky.

Matter of fact, the next guy wasn’t. Dmitry Sklyarov is a Russian programmer who gave a talk at a hacker con in Vegas on the failings in Adobe’s e-book locks. The FBI threw him in the slam for 30 days. He copped a plea, went home to Russia, and the Russian equivalent of the State Department issued a blanket warning to its researchers
