
Cybersecurity with New Paradigms

The Reason Why Cybersecurity Must Embrace New Paradigms

Erland Wittkoetter, Ph.D., Aug. 29th, 2022

“Cybersecurity is built on shaky ground; it cannot be trusted.” Why say that? Cybersecurity protects us from bad software, yet we cannot blindly trust software code, even when it comes from honest developers. Auditors can validate or certify software, but too much uncertainty remains. How can we be sure that the latest versions of the program libraries or compilers we use are sufficiently validated or certified? How do we know the environment in which software is being used is safe? For high-security applications, entire operating systems (OS) must be audited and certified. But how could an OS know that it is being nefariously misused? Operational measures are established to protect secure systems from covert misuse; these often depend on trusted sys-admins working in tandem, because a single person cannot be trusted. More cybersecurity measures could improve security, but the nagging, unanswerable questions remain: Is the software really safe? Are there hidden vulnerabilities that could be exploited? Can we trust the humans using or administering the software? Cybersecurity should make our security trustworthy, or it is a failure.

Books on cybersecurity are honest when they say: “There is no perfect security.” This belief is not just a nice meme; it appears to be a fact: humans can switch off all or part of the security, and then what? Would the data still be safe? And who would do that? In short: there is no perfect security.

Overwhelmingly, we have been conditioned to believe that perfect security cannot be achieved. But is there a hidden law of nature that prevents it? Does the Halting Problem conclusively answer this question? Hardly: it is a simplified model of computation that does not include external (security) constraints. And why should cybersecurity depend on unreliable human involvement? Why should much better security not be available as a cheaper retrofit?

Example: We take elevators to skyscrapers’ top floors and enjoy the views. But how would we feel about doing that if building engineers discussed structural flaws or equipment failures in multi-story buildings as if they were normal? On June 24th, 2021, Champlain Towers South, a 12-story beachfront condominium in Surfside, Florida, near Miami, collapsed. In recent decades, buildings have been retrofitted to prepare California for the next Big One. We continuously improve building codes and technologies because we believe that being in a building should not be a death sentence. Buildings and bridges are not expected to collapse nowadays, not even during natural disasters. Flaws or errors carry severe consequences for the responsible parties: unsound buildings cannot be insured, or governmental oversight shuts them down. Let’s compare this situation with the state of security in software.

We don’t even know how many flaws are in our software. We allow vandals, i.e., external attackers, to damage our property, and in most cases we even let them get away, often with money. Cybercrime damage runs to hundreds of billions of dollars, likely over a trillion annually, with a rising trend. Compare this to zero-tolerance aviation or nuclear safety. Aviation and nuclear security are not perfect, but they address their problems aggressively: they do not wait until people die before acting, many problems are handled proactively and preventatively through rules and regulations, and redundancy is the norm, not the exception. Frankly, cybersecurity’s performance is almost embarrassing for an industry making billions annually with cybersecurity products. Business seems to remain good as long as the underlying problems are not solved comprehensively. This situation would be absurd in any other security business with reasonable safety expectations. Would we accept cybersecurity’s level of vulnerabilities in our food supply or in regular product safety?

As a bare minimum, we must be able to trust security-related algorithms and data. We would be much safer if local code vulnerabilities could not be exploited. Imagine how much safer we could be if unknown software were not allowed into a device’s RAM, keeping the main CPU out of reach of suspicious code. If we knew whenever crypto-keys or crypto-devices were stolen or misused, our trust in the integrity of installed software and received data would immediately increase. Instead, we must distrust our software, the Internet, and every CPU/OS we use. Cybersecurity already assumes that nothing on a computer is safe from being modified or misused by attackers. We cannot trust our devices, yet we must use them; this is unsustainable.

Making software’s integrity trustworthy by increasing its resistance to covert or untraceable modification is a way to make algorithms and security much more dependable. For some applications, we need near-perfect security. “Perfect” usually means flawless: we cannot do any better. Because we can never know whether measures could be improved further, we need not call it perfect; near-perfect is sufficient.

Five measures outside current cybersecurity’s toolbox and paradigms could significantly improve its performance: (1) Separate and isolate security-relevant algorithms from regular (versatile/dynamic) ones; this puts security outside the CPU’s reach, so regular software cannot modify it. (2) Accept no unknown algorithms in RAM or available to the CPU, i.e., only binary hash-coded, white- or graylisted apps/scripts are allowed in RAM. (3) Protect all crypto-keys from cleartext exposure; a key is compromised once it appears in cleartext in an unprotected CPU, and the underlying hardware type is then flagged as flawed. (4) Use inter-locked, inter-guarding security that detects modification or misuse of relevant security units (Multi-Unit Security). (5) Automate proactive, preventative, low-cost/effort security processes, including auto-reporting and investigating every detected anomaly.
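Measure (2) can be illustrated with a minimal sketch, assuming a hypothetical whitelist of approved binary hashes that would, in a real deployment, live in storage outside the main CPU’s reach; all names here are illustrative, not a concrete design:

```python
import hashlib

# Hypothetical whitelist of SHA-256 hashes of approved binaries.
# The placeholder entry below is the well-known hash of an empty file.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_load(path: str) -> bool:
    """Allow a binary into RAM only if its hash is whitelisted."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in chunks so large binaries need not fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in APPROVED_HASHES
```

Any binary whose hash is not on the list, including a whitelisted binary that was modified by a single byte, would simply never reach RAM.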

From power distribution, we know the concept of circuit breakers and fuses. These ideas are not used sufficiently in cybersecurity. With task separation/isolation (like a fuse) enforced via a non-bypassable data bus, we could enforce simplification in security deployments: because tasks are isolated, attackers’ further or possible capabilities can safely be ignored, avoiding hard-to-analyze complexity explosions.
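The fuse analogy can be sketched as a simple circuit breaker; this is a hypothetical illustration of the pattern, not a concrete design: after repeated failed integrity checks, the unit trips and isolates itself, just as a fuse blows rather than letting a fault propagate.

```python
# Minimal circuit-breaker sketch: after too many failed checks,
# the "fuse" trips and refuses all further operations.
class SecurityFuse:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.tripped = False

    def record(self, check_passed: bool) -> None:
        """Record the outcome of an integrity check."""
        if check_passed:
            return
        self.failures += 1
        if self.failures >= self.max_failures:
            self.tripped = True  # isolate the unit, like a blown fuse

    def allow(self) -> bool:
        """Operations are permitted only while the fuse is intact."""
        return not self.tripped
```

Note that the fuse does not reset itself: like its electrical counterpart, restoring service is a deliberate, external act, not something an attacker can trigger.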

As a firm criterion for determining whether a security system is trustworthy, we should have a simple test: can a security component irrefutably prove that it is a dedicated, unmodified hardware unit and not a perfect software simulation or cloned hardware? This benchmark is likely where the rubber meets the road.
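A sketch of what such a proof could look like at the protocol level: a nonce-based challenge-response in which the unit signs a fresh challenge with a key assumed to be locked inside dedicated hardware. The HMAC construction and all names are illustrative assumptions; in pure software the key could be copied, which is exactly the point of the benchmark above.

```python
import hmac, hashlib, secrets

# Stand-in for a secret that, in real hardware, would never leave the unit.
DEVICE_KEY = secrets.token_bytes(32)

def unit_respond(nonce: bytes) -> bytes:
    # Inside the (assumed) hardware unit: sign the fresh challenge.
    return hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()

def verifier_check(nonce: bytes, response: bytes, expected_key: bytes) -> bool:
    # The verifier recomputes the expected response and compares in
    # constant time to avoid timing side channels.
    expected = hmac.new(expected_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The fresh nonce defeats replayed recordings; what it cannot defeat on its own is key extraction, which is why the article argues the key must be anchored in separate, tamper-resistant hardware.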

Trust is not based on a few audits or certifications. Instead, trust in security is based on never-ending scrutiny of all security-related, open-sourced feature implementations. Scrutinized open-sourced security solutions should be considered as safe as, or safer than, certified ones. All security units must be updatable, but updates must not happen covertly or in isolation. Automated Multi-Unit Security guarantees that attackers who try to change the security are stopped, while flaws or weak spots, once detected, are fixed and deployed quickly.
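One way to read Multi-Unit Security is as mutual guarding: paired units each hold the known-good hash of the other’s code image, so a covert change in one unit is flagged by its peer. The class and method names below are hypothetical, a sketch of the idea rather than a protocol:

```python
import hashlib

def digest(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

class GuardedUnit:
    """A security unit that watches over a paired peer unit."""

    def __init__(self, code: bytes):
        self.code = code
        self.peer = None
        self.peer_hash = None

    def pair_with(self, other: "GuardedUnit") -> None:
        # Each unit records the known-good hash of the other's code.
        self.peer, other.peer = other, self
        self.peer_hash, other.peer_hash = digest(other.code), digest(self.code)

    def peer_intact(self) -> bool:
        # A covert modification of the peer shows up as a hash mismatch.
        return digest(self.peer.code) == self.peer_hash
```

An attacker would have to modify all interlocked units simultaneously and consistently, which is exactly the bar such a scheme is meant to raise; legitimate updates would be applied to all units together, never covertly or in isolation.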

Fully automated tools would seek direct or indirect evidence of possible security breaches, using all deployed security components as sensors. If required, vulnerabilities could be automatically deactivated until fixed, or used as criteria for finding new malware. Using data on what we expect from software or data-exchange activities, any deviation from these expectations in whitelisted apps is reported.
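Such deviation-from-expectation reporting can be sketched with a hypothetical per-app profile of expected network endpoints; the profile contents and app names are illustrative:

```python
# Hypothetical behavior profile: which endpoints each whitelisted
# app is expected to contact. Anything outside the profile is an anomaly.
EXPECTED_ENDPOINTS = {
    "updater": {"updates.example.com"},
}

def check_activity(app: str, contacted: set) -> list:
    """Return an auto-report line for each endpoint outside the app's profile."""
    unexpected = contacted - EXPECTED_ENDPOINTS.get(app, set())
    return [f"ANOMALY: {app} contacted {host}" for host in sorted(unexpected)]
```

The same pattern generalizes to file access, spawned processes, or any other observable behavior: the profile says what is expected, and everything else is reported and investigated automatically rather than silently tolerated.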

The transformation to fully trustworthy, near-perfect security is not without challenges, but it seems doable in three steps. To roll it out quickly, i.e., within a few months, we could start with a software-only retrofit solution against today’s cyber-criminals who use malware, spyware, ransomware, and backdoors. Why so fast? Step 1 is based on a proven approach to eliminating rootkits (HookSafe/hypervisors).

But computer security must then grow significantly in its capabilities, i.e., hold up against an off-the-charts smart, adaptable AI. An adversary at this level would likely exploit the inherent weakness of every software-only security solution: modifying isolated systems to steal crypto-keys. Crypto-keys and crypto-devices cannot be protected without hardware retrofits (like external USB security sticks for IT devices). In the final step, we should have permanent, separate security hardware units within HDDs/SSDs and network interfaces.

Cybersecurity should guide, help, and protect people, with their inherent (often unavoidable) vulnerabilities, shielding them more reliably from being scammed. Humans should not be involved operationally in low-level tasks that can be fully automated; otherwise, security is less reliable and trustworthy.

As the first goal of computer security, we should end nation-states’ ability to wage cyberwar, which is doable quickly. This goal is a call to stop cyber-warfare once and for all via an open-sourced grassroots development project that also facilitates defense against more capable cyber adversaries.

More info: