Security in Java

  Though we could in principle use any of several mobile code technologies, we will base our analysis on the properties of Java. Java is a good choice for several reasons: it is widely used and analyzed in real systems, and full source code is available to study and modify.

Java uses programming language mechanisms to enforce memory safety. The JVM enforces the Java language's type safety, preventing programs from accessing memory or calling methods without authorization [28]. Existing JVM implementations also enforce a simple ``sandbox'' security model which prohibits untrusted code from using any sensitive system services.

The sandbox model is easy to understand, but it prevents many kinds of useful programs from being written. All file system access is forbidden, and network access is allowed only to the host from which the applet originated. While the sandbox successfully prevents untrusted applets from stealing or destroying users' files or snooping around their networks, it also makes it impossible to write, say, a replacement for the user's local word processor or other common tools that rely on more general networking and file system access.
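The two sandbox rules above can be captured in a toy policy sketch. This is illustrative only, not the JDK API; the class and method names are invented for this example.

```java
// A toy sketch of the sandbox policy described above: untrusted applets
// get no file access at all, and network access only to their origin host.
// All names here are illustrative, not part of the real JDK.
public class SandboxSketch {
    /** File access: the sandbox forbids it entirely for untrusted code. */
    static boolean fileAccessAllowed(boolean untrusted) {
        return !untrusted;
    }

    /** Network access: untrusted code may connect only to its origin host. */
    static boolean connectAllowed(boolean untrusted,
                                  String originHost, String targetHost) {
        return !untrusted || originHost.equals(targetHost);
    }

    public static void main(String[] args) {
        System.out.println(fileAccessAllowed(true));                            // false
        System.out.println(connectAllowed(true, "example.com", "example.com")); // true
        System.out.println(connectAllowed(true, "example.com", "other.net"));   // false
    }
}
```

Note that the policy is all-or-nothing per category: there is no way to express, for example, "this applet may read one particular directory."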

Traditional security in Java has focused on two separate, fixed security policies. Local code, loaded from specific directories on the same machine as the JVM, is completely trusted. Remote code, loaded across a network connection from an arbitrary source, is completely untrusted.

Since local code and remote code can co-exist in the same JVM, and can in fact call each other, the system needs a way to determine whether a sensitive call, such as a network or file system access, is executing ``locally'' or ``remotely.'' Traditional JVMs have two inherent properties used to make these checks:

- Every class records the ClassLoader that loaded it; all trusted local code is loaded by a single, distinguished system ClassLoader, while remote code is loaded by other ClassLoaders.

- The JVM can inspect the call stack at runtime and determine, for each frame, which class (and hence which ClassLoader) it belongs to.

Together, these two JVM implementation properties allow the security system to search for remote code on the call stack. If a ClassLoader other than the special system ClassLoader appears on the call stack, then a policy for untrusted remote code is applied. Otherwise, a policy for trusted local code is used.
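The stack search described above can be sketched as follows. Here the call stack is modeled as a list of ClassLoader tags, with null standing in for the system ClassLoader; the names are illustrative, not the actual JDK API.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical model of the traditional JVM check: each call-stack frame
// is tagged with the ClassLoader that loaded its class (null = the trusted
// system ClassLoader). Remote code is present iff any frame's loader is
// non-null.
public class StackInspection {
    /** Returns true if any frame on the (simulated) stack came from a
     *  non-system ClassLoader, i.e., from untrusted remote code. */
    static boolean remoteCodeOnStack(List<String> frameLoaders) {
        for (String loader : frameLoaders) {
            if (loader != null) return true; // non-system loader => untrusted
        }
        return false;                        // all frames are trusted local code
    }

    public static void main(String[] args) {
        // All-local stack: the trusted policy applies.
        System.out.println(remoteCodeOnStack(
            Arrays.asList((String) null, (String) null)));          // false
        // A single applet frame taints the whole stack.
        System.out.println(remoteCodeOnStack(
            Arrays.asList(null, "AppletClassLoader", null)));       // true
    }
}
```

The key design choice is that one untrusted frame anywhere on the stack is enough to trigger the restrictive policy.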

To enforce these policies, all the potentially dangerous methods in the system were designed to call a centralized SecurityManager class which checks if the action requested is allowed (using the mechanism described above), and throws an exception if remote code is found on the call stack. The SecurityManager is meant to implement a reference monitor [25,32] -- always invoked, tamperproof, and easily verifiable for correctness.
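The reference-monitor pattern described above can be sketched as follows. The real JDK SecurityManager consults the actual call stack; here a toy manager is parameterized by a flag standing in for ``remote code found on the stack,'' and all names are illustrative.

```java
// A minimal sketch of the reference-monitor pattern: every dangerous
// method consults a centralized security manager, which throws a
// SecurityException to deny the operation. Names are illustrative.
public class ReferenceMonitorSketch {
    static class ToySecurityManager {
        private final boolean remoteCodeOnStack;
        ToySecurityManager(boolean remoteCodeOnStack) {
            this.remoteCodeOnStack = remoteCodeOnStack;
        }
        /** Invoked before every sensitive operation; throws to deny. */
        void checkRead(String file) {
            if (remoteCodeOnStack)
                throw new SecurityException("untrusted code may not read " + file);
        }
    }

    /** A "dangerous" system method: it must consult the monitor first. */
    static String readFile(ToySecurityManager sm, String file) {
        sm.checkRead(file);                   // reference monitor: always invoked
        return "<contents of " + file + ">";  // (real I/O elided)
    }

    public static void main(String[] args) {
        System.out.println(readFile(new ToySecurityManager(false), "/etc/motd"));
        try {
            readFile(new ToySecurityManager(true), "/etc/passwd");
        } catch (SecurityException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```

The ``always invoked'' property depends on every dangerous method remembering to make the check; nothing in the language forces it, which is one reason verifiability matters.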

In practice, this design proved insufficient. First, when an application written in Java (e.g., the HotJava Web browser) wishes to run applets within itself, the low-level file system and networking code cannot easily distinguish direct calls from an applet from system functions safely running on behalf of an applet. Sun's JDK 1.0 and JDK 1.1 included specific hacks to support this with hard-coded ``ClassLoader depths'' (measuring the number of stack frames between the low-level system code and the applet code).
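The fragility of hard-coded ClassLoader depths can be illustrated with a sketch. As before, the stack is modeled as a list of ClassLoader tags (null = system code); the expected depth and all names are invented for this example and are not the real JDK values.

```java
import java.util.Arrays;
import java.util.List;

// A sketch of the JDK 1.0/1.1 "ClassLoader depth" hack: the check counts
// how many stack frames sit between the low-level system code and the
// first applet frame, and compares that count to a hard-coded constant.
public class LoaderDepthSketch {
    /** Index of the first frame loaded by a non-system ClassLoader,
     *  or -1 if the stack is entirely trusted local code. */
    static int classLoaderDepth(List<String> frameLoaders) {
        for (int i = 0; i < frameLoaders.size(); i++)
            if (frameLoaders.get(i) != null) return i;
        return -1;
    }

    // Hard-coded assumption: applet code sits exactly two frames above
    // the checking system code. (Illustrative value.)
    static final int EXPECTED_DEPTH = 2;

    /** The fragile check: a direct applet call is recognized only if the
     *  applet frame appears at exactly the expected depth. */
    static boolean directAppletCall(List<String> frameLoaders) {
        return classLoaderDepth(frameLoaders) == EXPECTED_DEPTH;
    }

    public static void main(String[] args) {
        // Checking frame, one system helper, then the applet: detected.
        System.out.println(directAppletCall(
            Arrays.asList(null, null, "AppletClassLoader")));        // true
        // One extra trusted helper frame silently defeats the check.
        System.out.println(directAppletCall(
            Arrays.asList(null, null, null, "AppletClassLoader")));  // false
    }
}
```

As the second call shows, any refactoring that adds or removes a trusted stack frame changes the depth and silently breaks the check, which is why such hard-coded depths were considered a hack.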

In addition to a number of security-related bugs in the first implementations [8], many developers complained that the sandbox policy, applied equally to all applets, was too inflexible to implement many desirable ``real'' applications. The systems presented here can all distinguish between different ``sources'' of programs and provide appropriate policies for each of them.

Dan Wallach