Walnut/Secure Distributed Computing/Capabilities

Capabilities

E uses capability-based security to supply both strong security and broad flexibility without incurring performance penalties. Capabilities can be thought of as the programming equivalent of physical keys, as described in the metaphors below.

To be more specific, E uses object-capabilities, a form of capability security with strengths not found in several weaker "capability" systems, as discussed in Paradigm Regained.

Principle of Least Authority (POLA)

When you buy a gallon of milk at the local 7-11, do you hand the cashier your wallet and say, "take what you want and give me the rest back?" Of course not. In handing the cashier exact change rather than your wallet, you are using the Principle of Least Authority, or POLA: you are giving the cashier no more authority than he needs. POLA is a simple, obvious, crucial best-practice for secure interactions. The only people who do not understand the importance of POLA are credit card companies (who really do tell you to give that far-off Internet site all your credit, and hand back what they don't want), and computer security gurus who tell you to use more passwords.
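In code, POLA is the difference between passing a whole object and passing just the sliver of authority the other party needs. Here is a minimal sketch in Java (not from the original text; all names are invented for illustration):

    // POLA in miniature: the cashier never sees the wallet, only a
    // payment carved out of it for the exact price.
    final class Wallet {
        private long cents;

        Wallet(long cents) { this.cents = cents; }

        // Mints a payment for exactly `amount`; the wallet keeps the rest.
        Payment pay(long amount) {
            if (amount < 0 || amount > cents) {
                throw new IllegalArgumentException("bad amount");
            }
            cents -= amount;
            return new Payment(amount);
        }
    }

    // All the cashier ever holds: authority over these cents, no more.
    final class Payment {
        private final long cents;

        Payment(long cents) { this.cents = cents; }

        long amount() { return cents; }
    }

Handing the cashier the Wallet would grant every future purchase as well; handing him a Payment grants exactly one.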

Children with your ID badge

Suppose all security in the physical world were based on ID badges and ID readers. At your home you might put an ID reader on your door, another on your CD cabinet, and another on your gun vault. Suppose further you had to depend on 4-year-old children to fetch your CDs for you when you were at the office. How would you do it? You would hand your ID badge to the child, and the child could then go through the front door and get into the CD cabinet. Of course, the child with your ID badge could also go into the gun vault. Most of the children would most of the time go to the CD cabinet, but once in a while one would pick up a gun, with lamentable results.

Keys

In the real physical world, if you had to depend on children to fetch CDs, you would not use an ID badge. Instead you would use keys. You would give the child a key to the front door, and a key to the CD cabinet. You would not give the child a key to the gun vault.

All current popular operating systems that have any security at all use the ID badge model. Windows, Linux, and Unix share this fundamental flaw; none comes anywhere close to enabling POLA. The programming languages we use are just as bad or worse. Java at least has a security model, but it too is based on the ID badge system--an ID badge system so difficult to understand that in practice no one uses anything except the default settings (sandbox-default with mostly-no-authority, or executing-app with total-authority).

The "children" are the applications we run. In blissful unawareness, we give our ID badges to the programs automatically when we start them. The CD cabinet is the data a particular application should work on. The gun vault is the sensitive data to which that particular application should absolutely not have access. The children that always run to get a gun are computer viruses like the Love Bug.

In computerese, ID badge readers are called "access control lists". Keys are called "capabilities". The basic idea of capability security is to bring the revolutionary concept of an ordinary door key to computing.
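The contrast shows up directly in code. A sketch in Java, with hypothetical names: the badge version asks who you are, while in the key version, holding a reference to the key is the authority.

    import java.util.Set;

    // ID-badge (access control list) style: the cabinet checks the
    // caller's identity - and the same badge also opens the gun vault.
    final class BadgeCheckedCabinet {
        private final Set<String> allowedBadges;

        BadgeCheckedCabinet(Set<String> allowedBadges) {
            this.allowedBadges = allowedBadges;
        }

        String fetchCd(String badgeId, String title) {
            if (!allowedBadges.contains(badgeId)) {
                throw new SecurityException("badge not on the list");
            }
            return "CD: " + title;
        }
    }

    // Key (capability) style: whoever holds a reference to this object
    // can fetch CDs, and can do nothing else. No identity, no list.
    final class CdCabinetKey {
        String fetchCd(String title) {
            return "CD: " + title;
        }
    }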

Melissa

Let us look at an example, in a computing context, of how keys - that is, capabilities - would change security.

Consider the Melissa virus, now ancient but still remembered in each new generation of viruses that reuse its strategy. Melissa comes to you as an email attachment. When you open it, it reads your address book, then sends itself - using your email system, your email address, and your good reputation - to the people listed therein. You only had to make one easy-to-make mistake to set off this sequence: you had to run the executable file found as an attachment, sent (apparently) by someone you knew well and trusted fully.

Suppose your mail system were written in a capability-secure programming language, and suppose it responded to a double-click on an attachment by trying to run the attachment as an emaker. The attachment would have to request a capability for each special power it needed. So Melissa, upon starting up, would first find itself required to ask you, "Can I read your address book?" Since you received the message from a trusted friend, perhaps you would say yes - neither Melissa nor anything else can hurt you just by reading your address book. But this would be an unusual request from an email message, and it should reasonably put you on guard.

Next, Melissa would have to ask you, "Can I have a direct connection to the Internet?" At this point only the most naive user would fail to realize that this email message, no matter how strong the claim that it came from a friend, is up to no good purpose. You would say "No!"

And that would be the end of Melissa, of all the recent viruses like her, and of all the similar viruses yet to come. No fuss, no muss. They would never rate a mention in the news. Further discussion of locally running untrusted code, as in this example, can be found later under Mobile Code.
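To make the scenario concrete, here is a sketch - in Java rather than E, with invented interfaces - of a launcher that runs an attachment with no ambient authority, so that every power arrives as an explicit, user-granted argument:

    import java.util.Collections;
    import java.util.List;

    // Each power an attachment might want is a separate capability.
    interface AddressBook { List<String> contacts(); }
    interface NetConnection { void send(String to, String body); }

    // Untrusted attachments implement this; they start with nothing.
    interface Attachment { void run(AddressBook book, NetConnection net); }

    final class AttachmentLauncher {
        // Deny-by-default stand-ins, handed out when the user says "No!"
        private static final AddressBook NO_BOOK = Collections::emptyList;
        private static final NetConnection NO_NET = (to, body) -> {
            throw new SecurityException("network access was not granted");
        };

        static void open(Attachment a,
                         boolean grantBook, AddressBook book,
                         boolean grantNet, NetConnection net) {
            // The attachment receives only what the user approved; a
            // "No!" to the network question ends Melissa on the spot.
            a.run(grantBook ? book : NO_BOOK,
                  grantNet ? net : NO_NET);
        }
    }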

Before we get to mobile code, we first discuss securing applications in a distributed context, i.e., protecting your distributed software system both from total strangers and from questionable participants, even though different parts of your program run on different machines flung widely across the Internet (or across your intranet, as the case may be). This is the immediate topic.

Language underpinnings for capabilities

A programming language must get a few fundamental concepts right in order to support capability discipline. We mention these here.

Memory Safety: Reach objects through references, not pointers

Pointer arithmetic is, to put it bluntly, a security catastrophe. Given pointer arithmetic, any module of code can snoop through vast regions of memory looking for interesting objects. C and C++ could never support a capability system. Java, Smalltalk, and Scheme, on the other hand, got this part of capability discipline right.
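One facet of this, sketched in Java: a reference is not a number, and stepping past the edge of an array fails loudly instead of quietly exposing whatever lives next door in memory.

    public final class BoundsDemo {
        public static void main(String[] args) {
            int[] cells = new int[4];
            try {
                // In C, cells[10] would silently read adjacent memory;
                // here the runtime refuses, so references stay unforgeable.
                System.out.println(cells[10]);
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("access checked and rejected: " + e);
            }
        }
    }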

Object Encapsulation

In a capability language you cannot reach inside an object for its instance variables. Java, Smalltalk, and Scheme pass this test as well.

In JavaScript, as a counterexample, all instance variables are public. This is occasionally convenient but shatters any security hopes you might have. JavaScript is a relatively safe language only because the language as a whole is so thoroughly crippled. We consider it safe, but not secure: security requires not only safety but also the power to get your work done - POLA means having enough authority as well as not having too much. By this definition, the Java applet sandbox is mostly safe, but not at all secure. And a Java applet that has been allowed to run under a weaker security regime because it was "signed" is neither safe nor secure.
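Encapsulation is what makes a reference behave like a key. A minimal Java sketch, loosely modeled on the classic capability money example (names hypothetical):

    // The balance is reachable only through the purse's own methods;
    // holding a reference to the purse is holding the key to it.
    final class Purse {
        private long balance;   // invisible from outside

        Purse(long initial) {
            if (initial < 0) throw new IllegalArgumentException("negative");
            balance = initial;
        }

        long balance() { return balance; }

        // Moves `amount` from `source` into this purse; money is
        // neither created nor destroyed.
        void deposit(long amount, Purse source) {
            if (amount < 0 || source.balance < amount) {
                throw new IllegalArgumentException("bad transfer");
            }
            source.balance -= amount;
            balance += amount;
        }
    }

With public instance variables, as in JavaScript, any code holding the purse could simply assign to balance, and both the invariant and the security would be gone.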

No static methods that grant authority, no static mutable state

In a capability system, the only source of positive authority for an object should be the references that the object holds.

Java fails here, along with Smalltalk and Scheme. A famous example of the trouble static mutable state can cause appeared in Java 1.0 (corrected in 1.1 - an upward-compatibility break, but one affecting a feature so rarely used that they could get away with it). The object System.out, to which people routinely sent print messages, was replaceable. A programmer in the middle of a bad hair day could easily swap in his own object, reading everything everyone else wrote and preventing anyone else from seeing their own output.
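The flavor of the problem is still visible in modern Java through System.setOut (introduced in 1.1). A runnable sketch of the bad-hair-day scenario:

    import java.io.OutputStream;
    import java.io.PrintStream;

    // Static mutable state as ambient authority: any code that can
    // name System can intercept or suppress everyone else's output.
    public final class OutputHijack {
        public static void main(String[] args) {
            PrintStream original = System.out;

            // From here on, every byte sent to "standard out" is dropped.
            System.setOut(new PrintStream(new OutputStream() {
                @Override public void write(int b) { /* swallowed */ }
            }));

            System.out.println("this line silently vanishes");

            System.setOut(original);   // put the world back
            System.out.println("output restored");
        }
    }

In a capability system there is no global System.out to seize; code that needs to write output is handed a reference to a stream, and can affect only that stream.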

Carefully design the API so that capabilities do not leak

You can get everything else right, but if the APIs for a language were designed without consideration of security, the capability nature of the system is seriously flawed. Let us consider an example in Java. Suppose you had an analysis program that presents graphs based on the contents of an existing spreadsheet. The program needs read access to that one spreadsheet file; it needs nothing else in your file system. In Java, then, we would grant the application an InputStream.

Unfortunately, an InputStream in Java leaks authority. In this example, the recipient could "cast down" to a FileInputStream; from that, the recipient could get the File, and from the File a write stream and a whole file path - in the end, total access to and control over the entire directory system.
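To fix this problem for a single object in a single application, you could write your own wrapper that doesn't leak: a facade that forwards reads and exposes nothing else, so there is no FileInputStream for the recipient to cast down to. A minimal sketch, with hypothetical names:

    import java.io.IOException;
    import java.io.InputStream;

    // Grants read authority over one stream and nothing more. The
    // wrapped InputStream never escapes, so no downcast can recover
    // the underlying file or its path.
    final class ReadOnlySource {
        private final InputStream in;   // never handed out

        ReadOnlySource(InputStream in) {
            this.in = in;
        }

        int read(byte[] buffer) throws IOException {
            return in.read(buffer);
        }

        void close() throws IOException {
            in.close();
        }
    }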

This single-object strategy does not scale, however: requiring the programmer to write replacements for every object in the API to achieve security will result in few secure programs (just as requiring the programmer to write his own bitmap handlers and GUI widgets would result in few point-and-click programs). To really fix this security problem in Java, you would have to rewrite the entire java.io package, wrecking backward compatibility as a side effect. Without that kind of massive, serious repair, it is always easier to create a breach than to create a secure interaction. With an infrastructure actively trying to harm you, what chance do you really have?
