Cryptography and Content Protection

One common application of cryptography is to prevent copies, or at least digital copies, of computer programs, music, pictures, or movies from being made.

Since these works cannot be used while in encrypted form, however, anything protected in this fashion still needs to be accompanied by all the information needed to decrypt it, either as it is distributed or in the device on which it will legitimately be played or used. Thus, it appears that someone attempting to overcome such protection will always have an alternative to cryptanalysis as a means of attack: prying the key out of wherever it is hidden.

However, if a key is hidden inside the circuitry of a microchip, prying it out requires specialized equipment; that, in itself, would be more reassuring if many hackers weren't college students with access to well-equipped laboratories, but the military also uses various techniques to make such extraction more difficult, such as painting chips with chemicals that will catch fire if exposed to the air. Because this limitation does mean that no content protection method can be technically perfect, it is not surprising, whether or not one approves of it, that industries relying on copyright have asked for, and in many cases received (as with the Digital Millennium Copyright Act in the United States), specific legal protection of content protection schemes, making it illegal both to attempt to defeat them and to reveal the hidden keys to others once they are found.

To allow a protected movie or song, for example, to be played on a computer without the decrypted content ever having to travel along the computer's buses, one idea that has been advanced, and which does seem necessary, is to put the decryption inside each output device, such as inside video cards, sound cards, and printers (so that you can print a copy of a book without being able to access its text in machine-readable form).

Software, if protected by encryption, could be protected in two different ways. It could be distributed with a dongle that decrypts an important part of the software, preventing copying entirely. Or the encryption could use a key jointly derived from the user's serial number or name and a corresponding secret value: the two together would produce the constant key with which the software is encrypted on a CD-ROM, but it would be made difficult to find and use this key directly, so that unauthorized copies would normally identify their source.
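As a rough sketch of how that second approach might work, consider the following; the use of XOR to combine the two values, the toy hash function, and the names are assumptions made purely for illustration, not a description of any actual product.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define KEY_BYTES 16

/* Toy one-way function of the user's name, standing in for whatever
 * function a vendor would really use. */
static void toy_hash(const char *name, uint8_t out[KEY_BYTES])
{
    memset(out, 0x5A, KEY_BYTES);
    for (size_t i = 0; name[i] != '\0'; i++)
        out[i % KEY_BYTES] = (uint8_t)(out[i % KEY_BYTES] * 31u + (uint8_t)name[i]);
}

/* Vendor side: given the constant master key with which the CD-ROM is
 * encrypted, compute the secret value to ship with this user's copy. */
static void make_user_secret(const char *name, const uint8_t master[KEY_BYTES],
                             uint8_t secret[KEY_BYTES])
{
    uint8_t h[KEY_BYTES];
    toy_hash(name, h);
    for (int i = 0; i < KEY_BYTES; i++)
        secret[i] = h[i] ^ master[i];       /* differs for every user */
}

/* User side: the name and the secret value together reproduce the same
 * constant master key, so every copy decrypts identically, yet the pair
 * shipped with each copy identifies its source. */
static void recover_master(const char *name, const uint8_t secret[KEY_BYTES],
                           uint8_t master[KEY_BYTES])
{
    uint8_t h[KEY_BYTES];
    toy_hash(name, h);
    for (int i = 0; i < KEY_BYTES; i++)
        master[i] = h[i] ^ secret[i];
}

int main(void)
{
    uint8_t master[KEY_BYTES] = "0123456789ABCDE";  /* the constant key */
    uint8_t secret[KEY_BYTES], recovered[KEY_BYTES];

    make_user_secret("Alice Example", master, secret);
    recover_master("Alice Example", secret, recovered);

    printf("keys match: %s\n",
           memcmp(master, recovered, KEY_BYTES) == 0 ? "yes" : "no");
    return 0;
}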

I remember that, some years ago, there was a news story about a new microprocessor that had, built into it, the capability of running programs that were encrypted. Actually, two chips had this feature; they were the NEC V25 Software Guard and the NEC V35 Software Guard. These chips were 8086-compatible chips; the V35 (which also existed in a plain form without this feature), in addition, had features that allowed it to address 16 Megabytes of RAM with a 24-bit address, but in a simpler fashion than that which later became the standard with Intel's 80286 chip.

The encryption provided was, however, somewhat limited. Customers could specify a 256-byte translation table, and when the chip was executing encrypted software, this table was used to decrypt the first opcode byte of instructions.

Since the address portion of an instruction usually appears in the clear on the address bus in a later cycle, it made sense not to encrypt it; doing so would only have provided a window into the translation table for anyone able to monitor the computer's bus.
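A minimal sketch of that kind of substitution, with the table contents and the fetch model as illustrative assumptions rather than a description of NEC's actual implementation, might look like this:

#include <stdint.h>

typedef struct {
    uint8_t table[256];   /* customer-specified translation table        */
    int     enabled;      /* nonzero while executing encrypted software  */
} software_guard;

/* Only a byte known to be the first byte of an opcode passes through the
 * table; address fields and other bytes are fetched unchanged. */
uint8_t guard_fetch(const software_guard *g, uint8_t byte,
                    int is_first_opcode_byte)
{
    if (g->enabled && is_first_opcode_byte)
        return g->table[byte];
    return byte;
}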

One could imagine slightly enhancing this kind of encryption, while keeping its time requirements comparable to those involved in address calculation:

Here, bytes being fetched by the CPU go through two translation tables or S-boxes, and in between are XORed with a quantity calculated from the least significant two bytes of the address from which they were fetched.

Four different S-boxes are present in each position. Another table, not shown in the diagram, would determine which S-box is to be used for various types of memory access, and it might look something like this:

00000 First opcode byte                   00 00 00 00
00001 Other opcode bytes                  01 01 01 01
00010 8-bit displacement                X
00011 Address field                     X
00100 (not used)                        X
00101 one-byte data                       10 10 10 10
00110 16-bit data, first byte             11 11 11 11
00111 16-bit data, second byte            00 01 10 11
01000 32-bit integer, first byte          01 10 11 00
01001 32-bit integer, second byte         10 11 00 01
01010 32-bit integer, third byte          11 00 01 10
01011 32-bit integer, fourth byte         11 10 01 00
01100 32-bit floating, first byte         10 01 00 11
01101 32-bit floating, second byte        01 00 11 10
01110 32-bit floating, third byte         00 11 10 01
01111 32-bit floating, fourth byte        00 11 00 11

11000 64-bit floating, first byte         01 10 01 10
11001 64-bit floating, second byte        10 01 10 01
...

so there would be nine bits for each entry, one turning off encryption, the other eight specifying the four S-boxes to use. One could add another two bits, so that the two XOR steps shown in the diagram could individually be switched to addition.
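The following sketch shows how such a per-byte decryption step might be organized. Since the diagram is not reproduced here, the exact role of the four two-bit fields in each entry is an assumption: in this reading, two of them select the S-boxes before and after the XOR on the data byte, and two select S-boxes applied to the low-order address bytes when forming the XOR mask; the extra bits that would switch the XOR steps to addition are omitted.

#include <stdint.h>

typedef struct {
    uint8_t sbox1[4][256];   /* four first-stage S-boxes on the data byte   */
    uint8_t sbox2[4][256];   /* four second-stage S-boxes on the data byte  */
    uint8_t abox_lo[4][256]; /* S-boxes applied to the low address byte     */
    uint8_t abox_hi[4][256]; /* S-boxes applied to the high address byte    */
} enc_tables;

typedef struct {             /* nine bits per access type, as in the text   */
    unsigned off   : 1;      /* 1 = pass the byte through unencrypted       */
    unsigned sel1  : 2;      /* first-stage S-box selector                  */
    unsigned sel_l : 2;      /* selector for the low-address-byte S-box     */
    unsigned sel_h : 2;      /* selector for the high-address-byte S-box    */
    unsigned sel2  : 2;      /* second-stage S-box selector                 */
} access_entry;

uint8_t decrypt_fetch(const enc_tables *t, access_entry e,
                      uint16_t addr_low16, uint8_t byte)
{
    if (e.off)
        return byte;

    /* Quantity derived from the two least significant address bytes. */
    uint8_t mask = t->abox_lo[e.sel_l][addr_low16 & 0xFF] ^
                   t->abox_hi[e.sel_h][addr_low16 >> 8];

    /* First S-box, XOR with the address-derived quantity, second S-box. */
    return t->sbox2[e.sel2][t->sbox1[e.sel1][byte] ^ mask];
}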

To allow a standard part to be used, the chip could contain the ability to do public-key cryptography, so that it could load in the contents for all these tables from the outside.

But even with the additional complications shown, it seems like quite a mismatch to start off by using something as powerful as public-key cryptography, and then protect software with such an elementary type of cryptography.

So, instead of (or in addition to) using the chipmaker's public key to encrypt S-boxes for use in this elementary fashion, it ought to be used to allow decryption of executable code, which, in decrypted form, would be kept in memory on the chip itself, and not allowed to leave there.

The program so decrypted could be a small one, including a key, which would then serve to conventionally decrypt, by whatever algorithm it implemented, additional program code to be placed in this internal memory as well. This would reduce the amount of dedicated encryption hardware needed on the chip, but might create problems in connection with what I propose below.
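The two-stage loading just described might be organized roughly as follows; the function names, the fixed size of the internal code area, and the toy placeholder ciphers are illustrative assumptions only.

#include <stddef.h>
#include <stdint.h>

#define INTERNAL_CODE_BYTES 8192
#define LOADER_BYTES        256

static uint8_t internal_code[INTERNAL_CODE_BYTES]; /* on-chip; never leaves the die */
static uint8_t loader[LOADER_BYTES];                /* also on-chip                  */

/* Toy stand-ins: a real chip would use its embedded private key and a real
 * symmetric cipher, not XOR with a constant or with a repeating key. */
static void chip_private_key_decrypt(const uint8_t *in, size_t len, uint8_t *out)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ 0xA5;
}

static void symmetric_decrypt(const uint8_t *key, size_t keylen,
                              const uint8_t *in, size_t len, uint8_t *out)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ key[i % keylen];
}

/* Stage 1: a small block, encrypted to the chipmaker's public key, yields a
 * loader whose first 16 bytes are taken here to be a conventional key.
 * Stage 2: that key decrypts the bulk of the program directly into on-chip
 * memory, so the plaintext code never appears on the external bus.  (On a
 * real chip the decrypted loader would itself execute from internal memory
 * and perform stage 2; here the two stages are simply shown in sequence.) */
int load_protected_program(const uint8_t *pk_block, size_t pk_len,
                           const uint8_t *bulk_block, size_t bulk_len)
{
    if (pk_len > LOADER_BYTES || pk_len < 16 || bulk_len > INTERNAL_CODE_BYTES)
        return -1;

    chip_private_key_decrypt(pk_block, pk_len, loader);
    symmetric_decrypt(loader, 16, bulk_block, bulk_len, internal_code);
    return 0;
}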

Decrypting a program by a secure algorithm, and only storing the result inside the microprocessor chip for use, would be quite secure.

But this raises another issue.

Do we allow every software maker to protect its own software in this fashion? Or will making use of the mechanism be restricted to large, respected companies that the chipmaker will trust to abide by a non-disclosure agreement?

Using public-key cryptography would mean that the chipmaker could disclose the public key corresponding to the private key built into every chip without compromising the security. But what happens when writers of viruses and trojan-horse programs use it to protect their efforts? Of course, the chipmaker would use its knowledge of its private key to assist efforts to combat viruses, but this would still allow such code to be far more damaging, and harder to detect.

In a USENET post, I proposed a scheme that would allow a facility of this nature to be made openly available and yet have additional protection against misuse.

The essence of the scheme was that the chip would also contain a secret symmetric key belonging to the computer's owner, and would refuse to execute protected code unless that code had additionally been encrypted under this key. Hence, the only way that a program containing encrypted parts could successfully execute on a user's computer would be if that user had activated the program, using a utility to superencrypt that program's encrypted parts with his own personal key.

This would be a fair approach to content protection: it would provide a level playing field for software writers, and it would also leave the user in control of his own computer, since he could decide which programs he will trust to execute in encrypted form on it.

Note that this proposal requires on-chip symmetric encryption capability, to handle the user's key. Programs to be loaded into protected memory using this encryption might also be required to be superencrypted with the user's key, in addition to requiring this for the block encrypted with public-key techniques.

(There is no need to require this for programs which are not decrypted once and loaded into chip-internal memory, but are instead executed from regular memory using the simple scheme illustrated in the diagram above, once the block containing the S-boxes for the program has been activated. Although it is much less secure, it might be thought useful to include this kind of ability on a chip that runs secured software, so that the whole of a program can be protected to some degree, providing an additional nuisance to hackers, while the small pieces of the program loaded into the internal memory of the chip are protected with more advanced encryption. Possibly also useful would be a secondary user key, used to activate programs which are only allowed to use the multiple S-box method of external protection, and which are not loaded even in part into the chip's internal memory.)
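A sketch of the basic activation step is given below; the function names are illustrative, and a toy XOR operation stands in for whatever symmetric cipher the chip would actually contain.

#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for the on-chip symmetric cipher (a real chip would use a
 * proper block or stream cipher, not XOR with a repeating key). */
static void user_key_crypt(const uint8_t key[16], uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % 16];
}

/* Activation utility, run by the user: superencrypt the already-encrypted
 * block distributed with the program, using the user's personal key. */
void activate_program(const uint8_t user_key[16], uint8_t *block, size_t len)
{
    user_key_crypt(user_key, block, len);
}

/* On the chip: strip the user-key layer first; only then is the inner,
 * publisher-applied encryption handled as described earlier.  A block that
 * was never activated by this user decrypts to garbage and will not run. */
void chip_accept_block(const uint8_t user_key[16], uint8_t *block, size_t len)
{
    user_key_crypt(user_key, block, len);
    /* ... inner decryption into on-chip memory proceeds from here ... */
}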

But even this would not be a foolproof way of preventing a protected program from accepting other programs as protected in a fashion that bypasses the requirement of explicit user activation, since a program could always be loaded, in the form of P-code, into an on-chip data area: a program which is to be hidden needs the ability to work with data in private as well. This is particularly likely to be a problem if the computer's operating system makes use of this protection. If the operating system were instead activated with the type of secondary user key proposed above, so that it was protected only by the simple scheme in the illustration, it would have no direct access to the internal memory; but even that would not stop it from accepting programs written in encrypted P-code for execution.

Also note that a protected program, using either type of protection, would have to be treated like an interrupt service routine by the computer, so that it could only be called at the entry points explicitly specified when it was loaded. However, that does not mean that such programs should be privileged; limiting those externally protected to being user-mode programs, and further limiting those executing on-chip to access to a fixed area of memory, so that they can only serve as computational subroutines, is another way to combat misuse of the security feature, although, again, it is not foolproof.

