Sign-offs

Posted on Sep 4, 2022

In computer trust, there are a few “roots of all evil”, a few “fruits of the poisonous tree”. For some, that’s Knuth’s “premature optimization”; but for most (defensive) security researchers it’s that nagging reminder in your head that, unless you’re the son or daughter of Magneto and an AI, you can’t know what code the CPU is actually running. Because your TODO list for being certain of the code that’s running is exhausting:

  • Make sure the source code doesn’t contain malicious code.
  • Check that there are no covert attacks hidden in the source code, such as those described in “Trojan Source: Invisible Vulnerabilities”.
  • Verify that the dependencies are not compromised. (links to supply chain attacks on npm, Maven, etc.)
  • Make sure the development environment is not compromised; the dependencies, the source code, or the final artifact could be changed without the developer knowing.
  • Make sure the compiler has not been modified to target this particular project, as Ken Thompson’s “Reflections on Trusting Trust” describes.
  • Make sure the environment that executes the program received the correct code and configuration.
  • Make sure the execution is not being altered by run-time modifications in the executing environment, for example via LD_PRELOAD.
  • Make sure the CPU microcode has not been modified to alter the execution of your program
  • Make sure there are no hardware devices with DMA access that could compromise the integrity of the execution
  • Wear a tinfoil hat and execute all of this inside a Faraday cage to prevent Van Eck phreaking.
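To make the run-time-modification item concrete: the same class of attack that LD_PRELOAD enables for native binaries can be sketched in Python as plain monkey-patching. This is an illustrative toy, not a real attack tool; the point is that a checker which trusts its runtime’s primitives can be subverted without a single change to its own source:

```python
import hashlib

def verify(data: bytes, expected_hex: str) -> bool:
    """Naive integrity check: compare the SHA-256 of data to an expected digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

# An attacker who controls the runtime (cf. LD_PRELOAD for native code)
# can swap out the primitive underneath the check:
class _FakeHash:
    def __init__(self, _data=b""):
        pass
    def hexdigest(self):
        return "0" * 64  # always "matches" a digest of the attacker's choosing

_real_sha256 = hashlib.sha256
hashlib.sha256 = _FakeHash        # runtime interposition, no source changes

print(verify(b"malicious payload", "0" * 64))  # True: the check is fooled

hashlib.sha256 = _real_sha256     # restore the genuine primitive
```

The native equivalent would be a shared object overriding a libc function, loaded via `LD_PRELOAD=./evil.so ./program` — same idea, different layer.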

Mitigations

What are the current (or possible) alternatives and strategies to overcome these things?

  • Reproducible builds: by allowing anyone to recreate the exact same binary from a set of source files, a trojan in the compiler or the source code would have to be present in every independent builder’s environment — and that would be very hard to maintain over a long enough period of time.
  • Linter tools and static analysis: static analysis of binaries can reveal manipulations similar to the ones Thompson feared. In particular, analyzing the complexity of an application can help detect obfuscation, which in open source code almost always means trouble. Kolmogorov complexity estimates and other measures of entropy could be used for this.
  • Countering Trusting Trust: in this 2006 essay, Bruce Schneier demystifies the Thompson trojan by walking through a series of steps (David A. Wheeler’s “diverse double-compiling”) that would neuter the attack described in the 1984 Turing Award lecture.
  • Reproducible environments: a new paradigm in software development is that of reproducible environments for building and modifying existing software. Nix and NixOS in particular are driving most of the experimentation in this area, after the moderate success — but weak enforcement of reproducibility — achieved by Docker and OCI containers.
  • Blockchain execution: even though it’s becoming “unpopular” to talk about blockchains, they really do provide a genuinely new take on computation: you can be economically certain that some calculation and update of a global database took place exactly as you programmed it (bugs included). This was never possible before (you would have to be “Magneto” to know the state of a CPU’s registers at the moment of execution), and on top of that you now get a receipt of execution and extreme transparency into the process by which the change was made.
  • Zero-knowledge proofs: a friendly build system could accompany the compilation of source code with a proof of faithful execution of the compilation circuit. This could bring about a new age of gamification and optimization of build mechanisms. Imagine a global database of libraries that get matched to achieve exactly some result, reduced to the minimum Kolmogorov complexity for the platform on which you’re trying to achieve that goal. There are limits to ZKPs: they can’t prove, for example, that the current execution environment has not been altered, so things such as blockchains will still be useful for coordinating multi-party computation.
  • Tamper-proof hardware: the final frontier is hardware, so protecting hardware from tampering and alteration is required to keep a fairly certain sense that software is behaving as intended. The great thing about blockchain execution, zero-knowledge proofs, and reproducible builds is that they make the compilation step (or execution process) independent of the hardware and of the particular software conditions in which it was created.
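The verification half of reproducible builds is mechanically simple: independent builders publish their artifacts (or just their digests) and anyone compares them bit for bit. A minimal sketch in Python — the file names and the simulated “builders” are purely illustrative:

```python
import hashlib
import tempfile
from pathlib import Path

def artifact_digest(path: Path) -> str:
    """SHA-256 of a build artifact: the unit of comparison in reproducible builds."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def builds_match(paths: list[Path]) -> bool:
    """True when every independently produced artifact is bit-for-bit identical."""
    return len({artifact_digest(p) for p in paths}) == 1

# Demo: two builders producing the same bytes, plus one tampered copy.
with tempfile.TemporaryDirectory() as tmp:
    a = Path(tmp, "builder_a.bin"); a.write_bytes(b"\x7fELF identical build output")
    b = Path(tmp, "builder_b.bin"); b.write_bytes(b"\x7fELF identical build output")
    c = Path(tmp, "tampered.bin");  c.write_bytes(b"\x7fELF backdoored build output")
    print(builds_match([a, b]))  # True
    print(builds_match([a, c]))  # False
```

The hard part, of course, is not this comparison but making the build itself deterministic (timestamps, paths, locales) so that honest builders actually converge on the same bytes.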
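The complexity-analysis idea can also be approximated cheaply: Kolmogorov complexity itself is uncomputable, but the compressed size of a byte string is a standard computable upper bound, so a compression ratio near 1.0 flags high-entropy (packed or obfuscated) content. A minimal sketch in Python; the function name and thresholds are mine, not from any real analysis tool:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: a crude, computable stand-in for
    Kolmogorov complexity. Structured code compresses well; packed or
    obfuscated blobs barely compress at all."""
    if not data:
        return 0.0
    return len(zlib.compress(data, 9)) / len(data)

readable = b"def add(a, b):\n    return a + b\n" * 50   # repetitive, structured
obfuscated = os.urandom(len(readable))                   # stand-in for a packed payload

print(compression_ratio(readable))    # well below 1.0
print(compression_ratio(obfuscated))  # near (or slightly above) 1.0
```

A real detector would compare a file against the typical ratio for its ecosystem rather than use an absolute cutoff, but the signal is the same.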

And what else could we do?

Signoffs, a social alternative

Some of these approaches have a one-honest-party-trumps-all property: analyzing the dependencies can be done by N people, any of whom may be compromised. But what if just 1 of those N is honest?
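That one-honest-reviewer rule can be sketched in a few lines of Python. Everything here is illustrative: the reviewer-name-to-signed-digest map stands in for real cryptographic signatures, and the names are hypothetical:

```python
import hashlib

def accept(artifact: bytes, signoffs: dict[str, str], trusted: set[str]) -> bool:
    """Accept an artifact if at least one reviewer we personally trust
    has signed off on its exact digest. One honest sign-off suffices."""
    digest = hashlib.sha256(artifact).hexdigest()
    return any(signoffs.get(reviewer) == digest for reviewer in trusted)

artifact = b"release-1.0 binary"
digest = hashlib.sha256(artifact).hexdigest()
signoffs = {"alice": digest, "mallory": "f" * 64}  # mallory signed something else

print(accept(artifact, signoffs, trusted={"alice", "bob"}))  # True: alice vouched
print(accept(artifact, signoffs, trusted={"mallory"}))       # False
```

The interesting design question is who curates the `trusted` set — which is exactly where the social layer discussed below comes in.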

Social trust in reviews could be leveraged to create a global consensus on intervention-free software that just does what you expect it to do. Hardware could also fall into this category. Nations, and their intelligence agencies, would be on a level playing field, for game-theoretic reasons: unless you’re a lot further ahead than your competitors, you have more to gain from being able to reasonably trust the software than from spending resources distrusting it.

This also applies to companies leveraging open source software in their internal practices. If a company decides to use a certain piece of software, it is now in for a legal and technical review of that software, and that might entail spending millions of dollars on due diligence. Those millions could instead be donated “to a public good cause”, in the same sense that standards bodies are tasked with creating a software/language/ecosystem/protocol that benefits everyone. Sometimes, when a company has a clear lead on a technology, it gets to set the standards. But this same power can be abused, as we have seen with Google’s strong hold on TC39 and the W3C overall.

It’s politics all the way down, until you reach families

After all is said and done, what’s left? Some technologies triumphed over others. One executive got arrested for embezzlement. In the end, we should probably just trust nature and the natural course of things: doing the best we can with the best we know, and competing against that which we don’t believe could survive over centuries. To me, that means making sure as many humans as possible understand knowledge, data, and information better than their parents did.

Also, apparently, this is already a thing in Rust, with packages like cargo-crev or cargo-auditable.

Cheers,
Claude