Rage Against the Die-ing of the Locked Silicon
How the tiniest explosives have been laying the foundation for your subjugation
It is a persistent and troubling issue at every open source conference I’ve been to, and indeed among almost all software developers I’ve met: software folks, and folks in general, badly misunderstand the fundamental influences in the computer hardware ecosystem. For far too long the work of the software engineer and the computer hardware engineer have been entirely disjoint. This is largely a result of how locked down hardware vendors have become in the past two decades. As a result, the semantics of the two disciplines differ in ways that require significant remediation before a meaningful discussion can happen between these engineers[1]. My hope is that my words can go some way toward changing that, both for the layman and for the disciplines themselves.
Just ask a hardware engineer and a software engineer what a cache is: you will get very different yet strangely overlapping answers. I believe this sort of difference is why putting a hardware and a software engineer in the same room is often not a profitable thing for a company to do; they don’t speak the same language. There is something most hardware engineers understand that most software engineers don’t, though, and it’s a harmful blind spot. Software folks generally don’t grasp what is actually possible in hardware, nor how easy it is for the hardware engineer to keep a secret from the user of the hardware they design.
This is done with a technique most software people have never even heard of, let alone the average person. That technique is called logic locking[2], and it’s in every modern computer today. It never ceases to amaze me how many times I have to introduce this concept to my fellow software engineers. In the simplest terms, it is achieved by embedding a secret key into the silicon of your computer. When talking with software people I’m often confronted with something along the lines of “that’s impossible”. And unless you are careful to read between the lines of a software engineering curriculum, that is indeed what it teaches you. Most computer software courses teach their students that if their hardware is not physically secured (i.e. actually guarded against a real person physically tampering with it) then you can extract any information from it that you want. This is patently false, yet most software engineers assume it in all the systems they build.
The software engineer is mostly right in that assumption, though: with enough time and resources, a skilled hardware hacker can extract just about any information from a computing device that they can physically muck around with, as long as the information being sought was not placed there during the manufacturing process.
So how is this embedding of secret keys in silicon accomplished? For all my rambling, it’s actually incredibly simple: a little over a decade ago, material scientists invented an adhesive that creates a tiny explosion when you tear it off and is impervious to any form of modern imaging. I call it flashing-glue.
This adhesive-explosive-that-is-impervious-to-any-modern-imaging-techniques, when torn from the silicon, leaves no trace of what was underneath. And with these secret keys one can lock and obfuscate the internals of all modern processor chips, and every modern company is doing this (at least, I don’t know of a single exception).
In practice, the most common method used today for these logic locking schemes is known as EPIC (or the End of Piracy for Integrated Circuits)[3]. However, despite the bold claim in its title, there has been at least one anonymously published paper that describes a systematic way to break EPIC, given the proper time and resources[4]. Its claims are bold as well, but they seem credible, though I can’t say I’ve verified them myself, nor do I know anyone who has. And from Bitcoin to EPIC’s end, there sure are a lot of quality works in computer science being published anonymously. Dare I say more than in any other field. I think it’s worth asking… why?
A naive explanation of how this is done can be summarized as throwing a bunch of XOR gates into your design, cleverly attached to the flashing-glued key embedded in the silicon. With the correct key bit on one input, each key gate passes its signal through unchanged; with a wrong bit, internal signals flip and the chip computes garbage. The XOR function is used because it is the only logical primitive which preserves entropy.
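To make the XOR trick concrete, here is a toy sketch in Python. The function names and the 2-bit key are mine and purely illustrative; real schemes like EPIC insert key gates into a gate-level netlist at design time, not into a software function.

```python
# Toy model of XOR-based logic locking (illustrative only).

SECRET_KEY = (1, 0)  # hypothetical 2-bit key baked into the silicon


def original_circuit(a: int, b: int, c: int) -> int:
    """What the designer intends: a 1-bit majority function."""
    return (a & b) | (b & c) | (a & c)


def locked_circuit(a: int, b: int, c: int, key: tuple) -> int:
    """The shipped design, with key gates inserted on two internal wires.

    Where the correct key bit is 1 an XNOR is used, where it is 0 an XOR,
    so the right key cancels out and restores original_circuit; any wrong
    key flips internal signals and corrupts the output for some inputs.
    """
    k0, k1 = key
    w1 = ((a & b) ^ k0) ^ 1  # XNOR with k0: transparent only when k0 == 1
    w2 = (b & c) ^ k1        # XOR with k1: transparent only when k1 == 0
    return w1 | w2 | (a & c)


inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# The correct key reproduces the intended function on every input...
assert all(locked_circuit(a, b, c, SECRET_KEY) == original_circuit(a, b, c)
           for a, b, c in inputs)
# ...while a wrong key gets at least one input wrong.
assert any(locked_circuit(a, b, c, (0, 0)) != original_circuit(a, b, c)
           for a, b, c in inputs)
```

Of course, reading SECRET_KEY out of this sketch is trivial because it sits in source code; the point of the flashing-glue is precisely that, on real silicon, any attempt to image or tear out those key bits destroys them.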
References:
[0] Andrew “Bunnie” Huang, RISC-V International. (2017). Keynote Address: Impedance Matching Expectations Between RISC-V and the Open Hardware Community. YouTube. Retrieved August 7, 2022, from youtu.be/zXwy65d_tu8
[1] John Hennessy and David Patterson, ACM. (2018). A New Golden Age for Computer Architecture: Domain Specific Hardware Software Co-design, Enhanced Security, Open Instruction Sets, And Agile Chip Development. YouTube. Retrieved August 7, 2022, from youtu.be/3LVeEjsn8Ts?t=346.
Jeffrey Mogul (HP), Andrew Baumann (Microsoft), Timothy Roscoe, and Livio Soares. (2011). Mind the Gap: Reconnecting Architecture and OS Research. Retrieved January 29, 2023, from https://www.usenix.org/legacy/events/hotos/tech/final_files/Mogul.pdf
[2] Muhammad Yasin, Jeyavijayan Rajendran, and Ozgur Sinanoglu. (2019). A brief history of logic locking. Analog Circuits and Signal Processing, 17–31. DOI:http://dx.doi.org/10.1007/978-3-030-15334-2_2
[3] Jarrod A. Roy, Farinaz Koushanfar, and Igor L. Markov. (2008). EPIC: Ending Piracy of Integrated Circuits. Design, Automation and Test in Europe. DOI:http://dx.doi.org/10.1109/date.2008.4484823
[4] Anon. The End of Logic Locking? A Critical View on the Security of Logic Locking.