Information Security Conference (ISC 2008) Day 3

The last day of the conference again featured several talks on cryptography, hash functions, and related topics. However, there were a few talks on system security and authentication, too.

Xuhua Ding: Proxy Re-Signatures in the Standard Model

This talk discussed signature schemes with a proxy that re-signs messages. In proxy re-signature schemes, the proxy cannot sign arbitrary messages; instead, it needs an "allowance" from the original signer to do so. The signatures are indistinguishable, i.e., one cannot tell whether party A (the original signer) or party B (the proxy) has signed the message. Technically, this works as follows: besides the usual KeyGen, Sign, and Verify, there are two additional operations, ReKey and ReSign. The ReKey operation takes as input the asymmetric key pairs of A and B and outputs a new key for B. This key transforms A's signature into one of B. The ReSign operation takes this key and the old signature, and outputs a new signature that can be verified with the same public key.
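
The resulting interface can be sketched as follows. This is only a toy model of the dataflow, not the paper's construction and not secure (real schemes of this kind are pairing-based); I simply use sigma = H(m)^sk in a small prime-order group so that raising A's signature to rk = sk_B/sk_A turns it into B's signature:

```python
import hashlib

P, Q = 2039, 1019  # toy safe-prime setting: p = 2q + 1, squares mod p have order q

def H(msg: bytes) -> int:
    # Hash the message into the order-q subgroup by squaring the digest mod p.
    return pow(int.from_bytes(hashlib.sha256(msg).digest(), "big") % P, 2, P)

def sign(sk: int, msg: bytes) -> int:
    return pow(H(msg), sk, P)                   # sigma = H(m)^sk

def rekey(sk_a: int, sk_b: int) -> int:
    return (sk_b * pow(sk_a, -1, Q)) % Q        # rk = sk_B / sk_A mod q

def resign(rk: int, sigma_a: int) -> int:
    return pow(sigma_a, rk, P)                  # sigma_A^rk = H(m)^sk_B

sk_a, sk_b = 123, 456                           # A's and B's (toy) secret keys
msg = b"message signed by A"
sigma_a = sign(sk_a, msg)
sigma_b = resign(rekey(sk_a, sk_b), sigma_a)
assert sigma_b == sign(sk_b, msg)               # indistinguishable from B signing
```

Note that in a real scheme the proxy never learns the secret keys (ReKey involves the delegating parties), and verification uses only the public key.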

A key recovery attack on an existing scheme in the random oracle model was shown, and a new scheme (Homomorphic Compartment Signatures) was presented, which the authors claim is secure in the standard model. (slides)

I wonder whether these proxy re-signature schemes could be used for signing credentials of a virtual TPM (vTPM)? Since the vTPM does not have a vendor certificate for its endorsement key (vEK), it cannot request certificates for its attestation identity keys (vAIK) directly. But what if the hardware TPM signed the vAIK of the vTPM with its AIK, and a proxy had the transformation key to make vAIK and AIK signatures indistinguishable? The vTPM could sign vPCRs with its vAIK, and the proxy could transform the signature as if it had been signed by the real TPM, hence letting verifiers use the AIK certificates to verify the signature!? Well, just a rough idea...

David Champagne: The Reduced Address Space (RAS) for Application Memory Authentication

This talk presented a new method for application memory authentication, i.e., a procedure by which an application can verify that what it reads from memory is what it wrote there before. The approach assumes that the CPU and the application are trusted, and that an on-chip engine can authenticate the initial state of the application (David mentioned TPM, XOM, AEGIS, SP, and SecureBlue as related work). Existing approaches use hash trees to verify memory integrity: the data blocks and the hash tree are in off-chip RAM, while the root hash is kept on-chip. Hash trees over the physical address space (PAS) are insecure because of a so-called "branch splicing attack" (with an untrusted OS, data blocks can be substituted via page table corruption). Hash trees over the virtual address space (VAS) are impractical because they are too wide.

In the proposed RAS tree, the data blocks (leaf nodes) cover only the address ranges of memory regions actually in use (code, data, heap, and stack). When new memory pages are touched, a partial tree is constructed and the RAS tree is expanded, i.e., the old tree is "merged" with the partial one.
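
The hash-tree mechanism behind these schemes can be sketched with a plain binary Merkle tree (my own minimal sketch; the RAS-specific partial-tree merging and the hardware units are omitted):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Build a binary Merkle tree bottom-up; returns the levels, leaves first."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]   # pad an odd level by duplicating the last node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

# The root hash would live on-chip; blocks and inner nodes sit in untrusted RAM.
blocks = [b"code", b"data", b"heap", b"stack"]
root = build_tree(blocks)[-1][0]

def verify(block, index, levels, root):
    """Check a read of `block` at leaf `index` against the on-chip root."""
    node = h(block)
    for level in levels[:-1]:
        sib = index ^ 1
        sib_hash = level[sib] if sib < len(level) else level[index]
        node = h(node + sib_hash) if index % 2 == 0 else h(sib_hash + node)
        index //= 2
    return node == root

assert verify(b"heap", 2, build_tree(blocks), root)
assert not verify(b"tampered", 2, build_tree(blocks), root)
```

Only the root needs trusted on-chip storage; each read is checked against it with a logarithmic number of hashes.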

RAS trees are resistant against the branch splicing attack, but they require additional hardware: a Tree Management Unit (TMU) and a hash engine. The TMU is located between the TLB/cache and the bus controller. The prototype does not yet support shared memory (e.g., necessary for shared libraries). (slides)

While the approach of RAS trees is very interesting and the technique behind it seems sound, the underlying assumption of running the application on an untrusted operating system is somewhat strange and artificial. So, the application can now verify the integrity of its memory. But in reality, an application needs a lot of services from the OS, e.g., I/O, libraries, and resources like files. If the OS is untrusted, why care about memory integrity? The application would not be able to communicate its results to any other subject (processes or the user), because it would need the services of the OS to do so, and those are untrusted by assumption. This seems usable only for a limited range of applications (e.g., on special-purpose devices, embedded systems, or the like).

Ersin Uzun: HAPADEP - Human-Assisted Pure Audio Device Pairing

This talk presented the HAPADEP system, a way to pair devices (e.g., a Bluetooth phone and headset) via audio that a human can control. The public keys for the cryptographic pairing are encoded as audio streams, and played and recorded on each device. In the verification phase, the devices play an audio encoding of the digest of the exchanged keys, and the user has to compare the audio samples. What the user hears can be either a melody or a (grammatically correct, but nonsensical) English sentence. Results of a (small) usability study showed that sentences are more convenient for the verification phase. (slides)

Cormac Herley: One-Time Password Access without Changing the Server

This was about web authentication using a proxy. The proxy has a set of symmetric encryption keys ek1,...,ekn. The user has to compute encrypted passwords E(pwd,ek1),...,E(pwd,ekn) on a trusted machine and store the resulting list, e.g., on a mobile phone. The proxy has a fixed address (urrsa.com) and works via redirected URLs (e.g., paypal.urrsa.com), so no proxy configuration is needed in the web browser. The user enters an encrypted password in the web browser, which sends it to the proxy. The proxy decrypts the password and inserts the clear-text password into the original login site.
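
The pre-computed list might look roughly like this (my own toy sketch: the XOR-with-HMAC keystream is just a stand-in for whatever symmetric cipher the real system uses, and the key count and password are made up):

```python
import hashlib, hmac, os

def xor_stream(key: bytes, msg: bytes) -> bytes:
    """Toy symmetric cipher: XOR with an HMAC-SHA256 keystream (stand-in only).
    Encryption and decryption are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(msg):
        stream += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(msg, stream))

# The proxy holds n keys; the user pre-computes n one-time encrypted passwords
# on a trusted machine and carries the list (e.g., on a phone).
keys = [os.urandom(32) for _ in range(5)]          # ek1..ekn, shared with the proxy
pwd = b"correct horse battery staple"
otp_list = [xor_stream(ek, pwd) for ek in keys]    # the list the user carries

# On login i, the user types otp_list[i] into the browser; the proxy recovers
# the clear-text password and forwards it to the real site:
recovered = xor_stream(keys[2], otp_list[2])
assert recovered == pwd
```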

The approach assumes DNS works correctly (i.e., there is no protection against DNS poisoning). Moreover, it does not bind the passwords/keys to any URLs or SSL certificates. The proxy just decrypts the password and sends it to the web site configured in the mapping, e.g., paypal.urrsa.com maps to the www.paypal.com website. Users have a transparent usage experience, except that all (security-sensitive) URLs are now of the form *.urrsa.com. (slides)

Cormac Herley: Can "Something You Know" be Saved?

Cormac Herley gave another talk, this time questioning whether there is a fundamental problem with challenge-response protocols for web authentication. Under the attack model that an adversary can observe anything on a PC (e.g., due to malware or a keylogger), and can observe login attempts many times, it follows that it is generally not a good idea to enter passwords in clear-text on untrusted machines. Instead, users should perform challenge-response protocols in which they do not reveal the secret: they enter a value computed by a response function that takes the secret and a challenge as input. However, such schemes are constrained by the human capacity for memorizing bits and doing computations in one's head.

Within these constraints, the talk analyzed the effects of modifying the parameters of generic challenge-response protocols, e.g., the number of secret bits that enter each response bit. This leads to a generic brute-force attack: secrets that are close (differing only in a few bits relevant for responses) have close responses, which makes it easy to find values that are close to the secret. In other words, the adversary does not need to know the whole secret, but only those bits that are relevant to compute valid responses. (slides)
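
The effect can be reproduced with a toy scheme of my own (not the one from the talk): each response is the XOR of two secret bits, i.e., something a user could compute mentally. Because close candidate secrets agree with the observations on close fractions of the responses, a simple greedy bit-flipping search finds the secret:

```python
import itertools, random

random.seed(1)
n = 12
secret = [random.randrange(2) for _ in range(n)]

# Toy scheme (my stand-in, not the paper's): the response to challenge {i, j}
# is the XOR of two secret bits -- human-computable, but very "local".
challenges = list(itertools.combinations(range(n), 2))
observed = [(c, secret[c[0]] ^ secret[c[1]]) for c in challenges]  # malware's log

def score(cand):
    """How many observed responses the candidate secret reproduces."""
    return sum((cand[i] ^ cand[j]) == r for (i, j), r in observed)

# Close candidates score close to the secret, so greedy bit-flipping works:
cand = [random.randrange(2) for _ in range(n)]
improved = True
while improved:
    improved = False
    for i in range(n):
        flipped = cand[:i] + [1 - cand[i]] + cand[i + 1:]
        if score(flipped) > score(cand):
            cand, improved = flipped, True

# XOR-of-two-bits responses determine the secret only up to global complement,
# since flipping every bit leaves all two-bit XORs unchanged.
assert cand == secret or cand == [1 - b for b in secret]
```

With such low-weight response functions the attacker never needs to search the full keyspace, which is exactly the point of the talk.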


Information Security Conference (ISC 2008) Day 2

The second day had only cryptanalysis talks on the agenda, so I decided to do some other work. In the afternoon, there was a tour to the Taiwan National Palace Museum. On the ISC08 website you can find photos from the museum tour. In the evening, there was the gala banquet, of which you can also have a look at some photos.


Information Security Conference (ISC 2008) Day 1

The 11th Information Security Conference (ISC 2008) was held in Taipei, Taiwan. This is a short summary of some presentations I attended.

Marcel Winandy: Property-Based TPM Virtualization

This was actually my presentation. See my older post and my slides.

Endre Bangerter: A Demonstrative Ad Hoc Attestation System

The proposal is to use a trusted device for ad hoc attestation of computing platforms, i.e., showing the user "PC is ok" or "PC is not ok". It is a server-based approach: the server sends a remote procedure call (RPC) to the PC, and the PC displays flickering barcodes on the screen. The trusted device is held in front of the screen and receives the RPC, i.e., it decodes the barcode. Finally, the device displays whether the PC is OK or not.

The decision the device displays is actually based on a remote attestation performed between the server and the PC. The trusted device is just used as a local "trusted display" for the remote server. For each attestation, the flickering barcode is different (i.e., it includes a counter value) to prevent simple replay attacks. (slides)

Hans Löhr: Property-Based Attestation without a Trusted Third Party

This is an improved protocol for property-based attestation. Instead of having a Trusted Third Party (TTP) issue certificates for properties, the verifier has an a-priori list of configurations. The attestee creates a proof that its configuration is within this defined list, without revealing which exact configuration it has. The proof is based on group signatures (a ring signature scheme) and does not reveal the secret key used to sign the commitment. (slides)

Xuhua Ding: An Efficient PIR Construction Using Trusted Hardware

A paper about private information retrieval. It improves the reshuffling of the database from O(n) to O(sqrt(n)). Records are colored black and white. On each query, two records of different colors are fetched. Retrieved records are colored black. Shuffling is done only on the black ("touched") records. (slides)

Charalampos Papamanthou: Athos - Efficient Authentication of Outsourced File Systems

Outsourced file systems are stored on a server. The server is completely untrusted (i.e., there is no trusted hardware on the server side). File accesses are queries to the server, accompanied by a "proof" of authenticity, both for the file system content and its hierarchy. This proof is based on cryptographic hashing and uses authenticated skip lists and authenticated dynamic trees. It is an efficient scheme (similar to Merkle hash trees): the client only has to maintain O(1) trusted storage, and query operations take O(k log n) time. (slides)


Property-Based TPM Virtualization

This is the title of a paper I have written together with Ahmad-Reza Sadeghi and Christian Stüble. We will present it at the 11th Information Security Conference (ISC 2008) in Taipei, Taiwan. I will also give a presentation about it at the HGI Seminar at Ruhr-University Bochum on 10th July 2008.

Virtualization and hypervisors enable useful and cost-efficient means to manage IT infrastructure, especially migration of virtual machines (VMs) between hardware platforms. A challenge in this context is the virtualization of hardware security modules like the Trusted Platform Module (TPM), since the intended purpose of TPMs is to securely link software and the underlying hardware. Existing solutions for TPM virtualization, however, have various shortcomings that hinder deployment to a wide range of useful scenarios. In our paper, we address these shortcomings by presenting a flexible and privacy-preserving design of a virtual TPM that, in contrast to existing solutions, supports different approaches for measuring the platform's state and for key generation, and uses property-based attestation mechanisms to support software updates and VM migration. Our solution improves the maintainability and applicability of hypervisors supporting hardware security modules like the TPM. The following figure shows the design.

For each VM that needs a vTPM, there is a separate vTPM instance. We assume the underlying hypervisor to protect the internal state and operations of each vTPM from any unauthorized access. The main building blocks of our vTPM are the following: PropertyManagement represents the virtual PCRs and manages different mechanisms to store and read measurement values; KeyManagement is responsible for creating and loading keys; vTPMPolicy holds the user-defined policy of the vTPM instance, defining which properties are going to be revealed during an attestation operation; CryptographicFunctions provide monotonic counters, random number generation, hashing, etc.; MigrationController is responsible for migrating the vTPM to another platform.

To improve flexible migration and to preserve the availability of sealed data after migration or software updates, an essential step is to support other measurement strategies. Applying property-based measurement and attestation to a vTPM allows much more flexibility in the choice of the hypervisor and for easier updates of applications -- a VM can still use sealed data or run attestation procedures if the properties of the programs remain the same.

Our vTPM design is based on a plug-in-like architecture for various vPCR extension strategies. Each extension strategy is realized by a PropertyProvider module implementing a different translation function. A translation function translates measurements (i.e., hash values of program binaries) into property representations. Each PropertyProvider has its own vector of virtual PCRs, so there is a matrix of vPCR values for each vTPM. This allows us to choose, according to the vTPM policy, which PropertyProvider to use for particular sealing or attestation operations.
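
As an illustration, the vPCR matrix idea can be sketched like this (a minimal sketch with hypothetical names and a hypothetical certificate table, not our actual implementation):

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

# Hypothetical property certificates: two different binaries share one property.
CERT = {H(b"app-v1"): b"trusted-app", H(b"app-v2"): b"trusted-app"}

class PropertyProvider:
    """One vPCR vector plus a translation function (measurement -> property)."""
    def __init__(self, translate, num_pcrs=4):
        self.translate = translate
        self.vpcrs = [b"\x00" * 32] * num_pcrs

    def extend(self, index, measurement):
        self.vpcrs[index] = H(self.vpcrs[index] + self.translate(measurement))

def fresh_matrix():
    # The vTPM holds one vPCR vector per provider -- a matrix of vPCR values.
    return {"binary": PropertyProvider(lambda m: m),          # TPM-style hash chain
            "property": PropertyProvider(lambda m: CERT.get(m, m))}

m1, m2 = fresh_matrix(), fresh_matrix()
m1["binary"].extend(0, H(b"app-v1")); m1["property"].extend(0, H(b"app-v1"))
m2["binary"].extend(0, H(b"app-v2")); m2["property"].extend(0, H(b"app-v2"))

# After a software update (v1 -> v2), the binary vPCRs differ, but the
# property-based vPCRs still match, so sealed data stays available.
assert m1["binary"].vpcrs[0] != m2["binary"].vpcrs[0]
assert m1["property"].vpcrs[0] == m2["property"].vpcrs[0]
```

The binary provider behaves like ordinary PCR extension, while the property provider maps both program versions to the same property representation.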

Depending on the implementation of the PropertyProvider, we can realize property-based sealing and property-based attestation without any change to the interface of the vTPM from the perspective of the associated VM. This enables the availability of protected data and cryptographic keys of the vTPM after migrating to another platform that provides the same security properties but may have a different binary implementation. TPM-enabled applications executed in a VM can directly profit from this flexibility without the need for modification.



Tools for Maintaining a Personal Research Journal

Keeping a journal of research activities, especially when studying for a PhD, is generally considered a good idea (see for example desJardins' guide How to Succeed in Graduate School). Such a journal helps to organize ideas, to record the progress of research, and to develop new ideas. On the web, you can also find other guidelines with more examples of what to include in the journal, e.g., Notes on the Personal Research Notebook / Journal.

While I used such a journal concept intuitively during my diploma thesis, I wrote it on separate sheets of paper, which I later transformed into the written thesis. This worked out very well at that time and on that project. However, now I want a tool that can automate time-consuming things like searching and copy&paste. Today, I have several subprojects and small parts which are sometimes (at least at the beginning) quite unrelated. Using a paper notebook as a journal would not be very efficient. So, I wondered which software tools would work as a research journal for a PhD.

I have tried several tools, from simple text files to the journal and todo list functions in KDE Kontact. But the information is still scattered across several files on my disk in different formats (text files, LaTeX files, OpenOffice files, pictures, etc.). I need something that can combine everything and provides a fast search and some kind of sorting function (like tagging in Web 2.0 applications).

Finally, I have found two applications that are suitable for this task. The first is Journler for Mac OS X. This is exactly what I needed. You can enter journal entries in chronological order and add tags, pictures, URLs, PDFs, whatever. When you click on a tag, Journler automatically shows a list of all entries with this tag. And of course, it makes use of the fast search engine of Leopard to quickly scan your entries for keywords. This is great!

Unfortunately, I have to use a PC laptop at work, so I can't use Journler there. But I have something similar: BasKet Note Pads for KDE on Linux. It has similar functionality, and I have started to use it. One good feature I noticed: you can import notes from KNotes and plain text files (now it pays off to have used text files! *g*).