Xuhua Ding: Proxy Re-Signatures in the Standard Model

This talk discussed signature schemes with a proxy that re-signs messages. In proxy re-signature schemes, the proxy cannot sign arbitrary messages; instead, it needs an "allowance" from the original signer to do so. The signatures are indistinguishable, i.e., one cannot tell whether party A (the original signer) or party B (the proxy) signed the message. Technically, this works as follows: besides the usual KeyGen, Sign, and Verify, there are two additional operations, ReKey and ReSign. The ReKey operation takes as input the asymmetric key pairs of A and B and outputs a new key for B. This key transforms A's signatures into B's. The ReSign operation takes this key and the old signature, and outputs a new signature which can be verified by the same public key.
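The five operations can be sketched as a toy mock (all names are mine, and HMAC stands in for a real signature scheme): it shows only the dataflow and the indistinguishability goal, not any security property. In particular, this toy rekey simply carries B's key, so the proxy could sign arbitrary messages, which the real scheme explicitly prevents.

```python
import hashlib
import hmac

# Toy mock of the proxy re-signature interface. ReSign turns A's
# signature on a message into one that verifies under B's key and is
# byte-identical to B's own signature (the indistinguishability goal).
# NOT secure: the toy rekey leaks sk_b to the proxy.

def keygen(seed: bytes) -> bytes:
    return hashlib.sha256(seed).digest()        # toy secret key

def sign(sk: bytes, msg: bytes) -> bytes:
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(sk: bytes, msg: bytes, sig: bytes) -> bool:
    # A real scheme verifies against a public key; the toy reuses sk.
    return hmac.compare_digest(sign(sk, msg), sig)

def rekey(sk_a: bytes, sk_b: bytes) -> bytes:
    # The real ReKey outputs a key that transforms A-signatures into
    # B-signatures without revealing sk_b; the toy just carries sk_b.
    return sk_b

def resign(rk: bytes, msg: bytes, sig_a: bytes) -> bytes:
    return sign(rk, msg)

sk_a, sk_b = keygen(b"alice"), keygen(b"bob")
msg = b"some message"
sig_b = resign(rekey(sk_a, sk_b), msg, sign(sk_a, msg))
assert verify(sk_b, msg, sig_b)       # verifies under B's key
assert sig_b == sign(sk_b, msg)       # indistinguishable from B's own signature
```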
A key recovery attack on an existing scheme in the random oracle model was shown, and a new scheme (Homomorphic Compartment Signatures) was presented, which the authors claim to be secure in the standard model. (slides)
I wonder whether these proxy re-signature schemes could be used for signing credentials of a virtual TPM (vTPM). Since the vTPM does not have a vendor certificate for its endorsement key (vEK), it cannot request certificates for its attestation identity keys (vAIK) directly. But what if the hardware TPM signed the vAIK of the vTPM with its AIK, and a proxy had the transformation key to make vAIK and AIK signatures indistinguishable? The vTPM could sign vPCRs with its vAIK, and the proxy could transform the signature as if it had been signed by the real TPM, hence letting verifiers use the AIK certificates to verify the signature!? Well, just a rough idea...
David Champagne: The Reduced Address Space (RAS) for Application Memory Authentication

This talk presented a new method for application memory authentication, i.e., the procedure by which an application can verify that what it reads from memory is what it wrote there before. The approach assumes that the CPU and the application are trusted, and that an on-chip engine can authenticate the initial state of the application (David mentioned TPM, XOM, AEGIS, SP, and SecureBlue as related work). Existing approaches use hash trees to verify memory integrity: the data blocks and the hash tree reside in off-chip RAM, while the root hash is kept on-chip. Hash trees over the physical address space (PAS) are insecure because of a so-called "branch splicing attack" (with an untrusted OS, data blocks can be substituted via page table corruption). Hash trees over the virtual address space (VAS) are impractical because they are too wide.
In the proposed RAS tree, the leaf nodes cover only the address ranges of memory regions actually in use (contents of code, data, heap, and stack). When new memory pages are touched, a partial tree is constructed and the RAS tree is expanded, i.e., the old tree is "merged" with the partial one.
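The underlying hash-tree idea can be illustrated with a minimal sketch (block size, RAS layout, and the tree-merging step are all simplified away): leaf hashes over data blocks, inner hashes over children, and only the root hash kept "on chip". Tampering with a block then fails verification against the root.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(blocks):
    # Bottom-up construction; assumes a power-of-two number of blocks.
    level = [h(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree                       # tree[-1][0] is the on-chip root hash

def verify_block(blocks, tree, i):
    # Recompute the path from leaf i up to the stored root.
    node = h(blocks[i])
    for lvl in tree[:-1]:
        sib = i ^ 1
        node = h(node, lvl[sib]) if sib > i else h(lvl[sib], node)
        i //= 2
    return node == tree[-1][0]

blocks = [b"code", b"data", b"heap", b"stck"]
tree = build_tree(blocks)
assert verify_block(blocks, tree, 2)
blocks[2] = b"evil"                   # an attacker substitutes a memory block
assert not verify_block(blocks, tree, 2)
```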
RAS trees are resistant against the branch splicing attack, but they require additional hardware: a Tree Management Unit (TMU) and a hash engine. The TMU sits between the TLB/cache and the bus controller. The prototype does not yet support shared memory (e.g., necessary for shared libraries). (slides)
While the approach of RAS trees is very interesting and the technique behind it seems sound, the underlying assumption of running the application on an untrusted operating system is somewhat strange and artificial. So the application can now verify the integrity of its memory. But in reality, an application needs a lot of services from the OS, e.g., I/O, libraries, and resources like files. If the OS is untrusted, why care about memory integrity? The application would not be able to communicate its operational results to any other subject (processes or the user), because for that it would need the services of the OS, which are untrusted by assumption. This seems usable only for a limited range of applications (e.g., on special-purpose devices, embedded systems, or the like).
Ersin Uzun: HAPADEP - Human-Assisted Pure Audio Device Pairing

This talk presented the HAPADEP system, a way to pair devices (e.g., a Bluetooth phone and headset) via audio that a human can monitor. The public keys for the cryptographic pairing are encoded as audio streams, then played and recorded on each device. In the verification phase, the devices play an audio encoding of the digest of the exchanged keys, and the user has to compare the audio samples. What the user hears can be either a melody or a (grammatically correct, but nonsensical) English sentence. Results of a (small) usability study showed that sentences are more convenient for the verification phase. (slides)
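The verification phase can be sketched roughly as follows (the 16-entry word list and 4-word length are made-up parameters; HAPADEP itself renders the digest as a melody or a full English sentence): both devices hash the exchanged keys and render the digest as something the user can compare by ear.

```python
import hashlib

# Hypothetical 16-word encoding alphabet (not HAPADEP's actual encoding).
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliett", "kilo", "lima",
         "mike", "november", "oscar", "papa"]

def digest_words(key_a: bytes, key_b: bytes, n: int = 4):
    # Hash the exchanged public keys and map digest bytes to words.
    d = hashlib.sha256(key_a + key_b).digest()
    return [WORDS[b % len(WORDS)] for b in d[:n]]

# Both devices compute the same sequence iff they saw the same keys,
# so a man-in-the-middle almost surely produces an audible mismatch.
assert digest_words(b"pkA", b"pkB") == digest_words(b"pkA", b"pkB")
```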
Cormac Herley: One-Time Password Access without Changing the Server

This talk was about web authentication using a proxy. The proxy has a set of symmetric encryption keys ek1,...,ekn. On a trusted machine, the user computes the encrypted passwords E(pwd,ek1),...,E(pwd,ekn) and stores the resulting list, e.g., on a mobile phone. The proxy redirects URLs (e.g., paypal.urrsa.com) to avoid any proxy configuration in the web browser; hence, the proxy has a fixed address (urrsa.com). The user enters an encrypted password in the web browser, which sends it to the proxy. The proxy decrypts the password and inserts the clear-text password into the original login site.
The approach assumes that DNS works correctly (i.e., there is no protection against DNS poisoning). Moreover, it does not bind the passwords/keys to any URLs or SSL certificates. The proxy simply decrypts the password and sends it to the web site configured in the mapping, e.g., paypal.urrsa.com to www.paypal.com. Users get a transparent usage experience, except that all (security-sensitive) URLs are now of the form *.urrsa.com. (slides)
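The encrypted-password list and the proxy's role can be sketched with a toy cipher (a keyed XOR stream stands in for whatever encryption the real system uses; the key setup and names are my assumptions):

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher (keystream = SHA-256 of the key); only for
    # illustrating the dataflow, not a real cipher. Limits data to 32 bytes.
    stream = hashlib.sha256(key).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

toy_decrypt = toy_encrypt          # XOR is its own inverse

# Setup on a trusted machine: one ciphertext per proxy key ek1..ekn,
# carried around on e.g. a mobile phone.
proxy_keys = [hashlib.sha256(bytes([i])).digest() for i in range(3)]
otp_list = [toy_encrypt(b"secret", k) for k in proxy_keys]

# Login i: the user types otp_list[i] into the untrusted browser; only
# the proxy, which holds ek_i, can recover and forward the clear text.
i = 1
recovered = toy_decrypt(otp_list[i], proxy_keys[i])
assert recovered == b"secret"
```

Note that a keylogger on the untrusted machine captures only `otp_list[i]`, which (if each entry is used once) is useless for future logins.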
Cormac Herley: Can "Something You Know" be Saved?

Cormac Herley gave another talk, this time questioning whether there is a fundamental problem with challenge-response protocols for web authentication. Under the attack model that an adversary can observe anything on a PC (e.g., due to malware or a keylogger), and can observe many login attempts, it follows that it is generally not a good idea to enter passwords in clear-text on untrusted machines. Instead, users should perform challenge-response protocols in which they do not reveal the secret: they enter a value computed by a response function that takes the secret and a challenge as input. However, such schemes are constrained by human capabilities: how many bits can be memorized, and what computations can be done in one's head.
Framed by these constraints, the talk analyzed the effect of modifying parameters of generic challenge-response protocols, e.g., the number of secret bits needed for each response bit. This leads to a generic brute-force attack: secrets that are close (i.e., differ only in a few bits relevant for responses) yield close responses, which makes it easy to find values close to the secret. In other words, the adversary does not need to know the whole secret, but only those bits that are relevant for computing valid responses. (slides)
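A toy model of this attack (the response function and all parameters are my assumptions, not the talk's formalization): suppose each response is a parity over only k = 8 "relevant" bits of a 32-bit secret, reflecting the limit on mental computation. The adversary who has observed enough (challenge, response) pairs then brute-forces only those k bits instead of the full secret space.

```python
import itertools
import random

random.seed(1)
n, k = 32, 8
secret = [random.randint(0, 1) for _ in range(n)]
relevant = range(k)        # positions the response function actually reads

def respond(bits, challenge):
    # Parity of the challenge-selected bits among the relevant positions.
    return sum(bits[i] & challenge[i] for i in relevant) % 2

# Adversary observes 40 logins on the compromised PC.
observed = []
for _ in range(40):
    ch = [random.randint(0, 1) for _ in range(n)]
    observed.append((ch, respond(secret, ch)))

# Brute force only the k relevant bits: 2^8 = 256 candidates instead of
# 2^32 full secrets. Survivors reproduce every observed response.
candidates = [c for c in itertools.product((0, 1), repeat=k)
              if all(respond(c, ch) == r for ch, r in observed)]
assert tuple(secret[:k]) in candidates   # relevant bits recovered
```

The other 24 secret bits never influence any response, so the adversary can log in without ever learning them, which is the point of the attack.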