No Go * (Star)

FAQ – Misc. Questions (verbose)


Q C1: Who is No-Go-* competing with?

  • Short A: No tech or business has dared to call for an end to cyberwar (yet)
  • Longer A: Many technologies are trying to limit damages from malware, ransomware, spyware, and backdoors; there is an entire industry around that topic.
    Currently, no technology or company claims that it can eliminate damage from all types of malware. Antivirus programs take the edge off malware and ransomware, but they are reactive, not proactive. Firewalls are used to reduce problems from spyware and backdoors. But no technology or business currently dares to call for an end to cyberwar.
    Also, no comprehensive security technology deals with adversaries that use reverse code engineering for their attacks. Modifying SSL/TLS code to steal crypto keys from the modified applications is a known technique and could already be done by trojans. Setting aside multi-factor authentication, competitors should have proposals against these threats, but to our knowledge there are none.
Tag/Link to A:

Q C2: Does something similar to No-Go-Security already exist?

  • Short A: Not to our knowledge; please get in touch with us if you own relevant IP
  • Longer A: Not to the best of our knowledge. But we would not be surprised if similar proposals had already been published. We did some Google searches and even searched the USPTO (US Patent Office). So far, we are unaware of (relevant) “prior art”.
    No-Go-* has certainly not invented white-, gray-, or blacklisting using binary hashcodes of files, and the basic idea of combining them might be known; but we have not seen them combined to defeat malware. If No-Go’s solutions already exist as “prior art”, we are interested. Please let us know.
    Technology is a collaborative endeavor. IP is being developed in large quantities, and identifying relevant claims is difficult and labor-intensive. We don’t want to ignore any relevant IP. So if you own (potentially) relevant IP, please get in touch with us and help us understand your claims.
Tag/Link to A:


Q C3: Is cryptography used, and is it up to the task?

  • Short A: Yes, and No. Cryptography isn’t doing enough against stolen or misused keys
  • Longer A: Cryptography was pushed forward by hot- and cold-war efforts. It was designed to protect messages over wire or radio. Public/private keys allow the secure exchange of encryption/decryption keys. With quantum computation, some of these methods are more easily broken than anticipated when they were developed 40 or 50 years ago.
    Additionally, encryption and decryption are standard operations on PCs, actually, on almost every IT device. The standard algorithms are all published, and the code used is often open source.
    Today’s problem is that all crypto-operations happen in or on untrusted systems, and keys are extremely difficult to manage safely. Additionally, damage from compromised keys is significant and often detected too late.
    Even if en-/decryption happens in protected CPUs, how can we be sure that messages are authentic if encryption is so easy for malware to misuse? Multi-factor authentication is recommended because cybersecurity doesn’t trust SSL/TLS; session keys could be stolen by malware.
    Cryptography took a while to accept the threats from quantum computation. But it seems it has not sufficiently accepted the threat from malware. Crypto keys could be stolen, and crypto devices misused, without leaving traces.
    Also, public keys are announced based on assumptions that are no longer valid. We don’t need humans to check whether a public key and the data in its certificate are valid. Nor do we need to detail the data from certificate witnesses confirming the legitimacy of a key; that could be done in the background. Because we know local software is unreliable, we can be certain attackers would obtain the public data anyway.
    No-Go-* introduces the concept of key safes. Keys are never exposed in cleartext (not even public ones); we refer to them via their unique hashcodes. Also, encryption happens only in protected CPUs. If a key is used in a simulation on an unprotected CPU, that simulation must be detectable. Finally, multiple methods should nonetheless actively check whether keys were stolen.
Tag/Link to A:

Q C4: Is No-Go-Security secure against quantum computation?

  • Short A: Yes, very likely
  • Longer A: The problem is that we don’t know the future and its technical capabilities. We don’t even know enough about the limits of quantum computation yet. The reason No-Go-Security is secure is that No-Go doesn’t allow any crypto key, not even a public key, to be published in cleartext. No-Go-Security always refers to public keys via unique hashcodes created from the key data. And full hashcodes are not even exchanged in cleartext. If necessary, only a partial hashcode is used to route messages to the relevant key safe that contains the corresponding key; it is processed in protected CPUs.
    The key exchange is negotiated between systems that won’t share the methods or patterns used; everything is encrypted based on a common set of public keys, provided during the manufacturing of the key safes, which are used to obtain more keys from trustworthy key repositories. Therefore, from the outside, no one can know what method, key type, key length, or message pattern is used, or when it is being changed. No side-channel attack should reveal whether the key length is 512 bits or any other value. Whether quantum computation could be a successful tool under these circumstances is questionable.
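The hashcode referencing described above can be pictured with a minimal sketch. This is illustrative only, not No-Go’s actual scheme: the function names, the choice of SHA-256, and the 8-character partial length are all our assumptions.

```python
import hashlib

def key_reference(public_key_bytes: bytes) -> str:
    """Unique hashcode derived from the key data; used in place of the
    key itself, which is never published in cleartext. (Assumption:
    SHA-256 stands in for whatever hash No-Go would actually use.)"""
    return hashlib.sha256(public_key_bytes).hexdigest()

def routing_partial(public_key_bytes: bytes, n_hex_chars: int = 8) -> str:
    """A partial of the hashcode: enough to route a message to the key
    safe holding the corresponding key, without exchanging the full
    hashcode (let alone the key) in cleartext."""
    return key_reference(public_key_bytes)[:n_hex_chars]
```

The partial deliberately reveals only a prefix; the full reference and the key itself stay inside the protected CPU of the key safe.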
Tag/Link to A:

Q C5: Why is cybersecurity not better in securing users?

  • Short A: Cybersecurity is complex; it is considered essential but treated as a sideshow
  • Longer A: Cybersecurity is much more than defending computer systems from malware, ransomware, spyware, or backdoors. A standard textbook, “Security Engineering: A Guide to Building Dependable Distributed Systems” (3rd ed., Dec. 2020) by Ross J. Anderson, is 1,200 pages thick, and it does not even cover threats from reverse code engineering by malware.
    Cryptography is the bedrock of cybersecurity. It makes fundamental assumptions: keys are not (systematically) stolen, only computationally broken; crypto devices are known, standardized, and vulnerable, but they must be trusted once deployed. These assumptions are outdated.
    As a possibly biased observation, the mood among cybersecurity professionals can be characterized as discouraged, demoralized, or even depressed. This should not come as a surprise: there is huge annual damage from cybercrime; there is a constant stream of new attack tools and methods; the number of zero-day vulnerabilities is unknown (a few thousand, or millions?); and the complexity and variety of what could go wrong or turn malicious in cybersecurity is overwhelming. Technical complexity, combined with evidence of how many vulnerabilities were created in ignorance or in an irresponsible, reckless manner, has contributed to a mindset that solving (all) security problems (comprehensively) is impossible.
    However, in electronics, we have fuses and circuit breakers. We could simplify security with these concepts (see that answer), i.e., separate security-related operations from regular ones and thereby protect security features. Code changes to single-purpose units are easily detectable.
Tag/Link to A:

Q C6: What makes security in other sectors so much more successful?

  • Short A: Security in other sectors is more confident due to proactive toolsets
  • Longer A: We have security in food, drugs, and product liability. Security is promised in construction (buildings), air traffic, and nuclear safety. All these security domains have one thing in common: they know that every failure of the security they provide has legal consequences.
    Successful security has clearly defined and established expectations. Therefore, we expect elevators not to fail and to be maintained. We do not have perfect security, not even in aviation or nuclear safety. But these security sectors share a common mindset: precautions are taken, and additional redundancies are deployed, before people are harmed.
    The laws of nature are more predictable than human attackers. But security fails in only a finite number of (possible) ways. In regular security, certain options or failures are made impossible or extremely unlikely with measures from security toolboxes; they prevent complexity explosions.
    Cybersecurity does not have enough complexity-reducing, proactive tools. When security operations are protected from malicious modification, these protections must themselves be protected, and so forth. Cybersecurity does not yet set strictly enforceable, non-bypassable separation rules. But it could start creating proactive tools to simplify the defense of security measures.
Tag/Link to A:

Q C7: Who is more exposed in a cyberwar?

  • Short A: … that is probably changing over time …
  • Longer A: Once detected or revealed, vulnerabilities and malware can be removed relatively quickly. The problem is the uncertainty, covertness, and sneakiness of cyber weapons against vulnerable systems. Cyberwar actions are intended to be painful. The US government/CISA has named 16 infrastructure sectors as vital for national security: Chemical, Commercial Facilities, Communications, Critical Manufacturing, Dams, Defense Industrial Base, Emergency Services, Energy, Financial Services, Food and Agriculture, Government Facilities, Healthcare/Public Health, Information Technology, Nuclear Reactors/Materials/Waste, Transportation, Water/Wastewater Systems. Actions by other nations against them could be considered an act of war.
    Computer systems are vulnerable, but it is still challenging to know what could be done to create harm. Reconnaissance and planning for cyberwar attacks are labor-intensive. However, these steps, including analysis, can be (fully) automated, and soon a new level of precision, intention, and proportionality will be achievable with AI (for all war parties). If we do nothing, the cyber domain will become a battlefield with unpredictable outcomes that could lie outside what is expected from conventional military force comparisons; to name just one word: logistics.
Tag/Link to A:

Q C8: Will governments or the military push back on No-Go-*?

  • Short A: Probably not
  • Longer A: Governments and the military strategize based on what is available to them. It’s speculation, but they will likely accept that the cyber domain has become irrelevant as a battlefield. However, they will not accept No-Go’s claims at face value; instead, supported by cybersecurity, they will likely put a lot of R&D into finding new vulnerabilities; this would (clearly) strengthen No-Go-Security. However, we should not forget that computer hardware is vulnerable to electromagnetic pulses.
Tag/Link to A:

Q C9: Who may resist the changes from No-Go-*?

  • Short A: … we will see …
  • Longer A: This is a question for which we have almost no answer. We would like to assume that there is unanimous consent on ending cyberwar capabilities. But we should not bet on that. People started jobs or careers in activities that are now prone to being automated, expecting job stability. However, threats from other vulnerabilities will give them opportunities to continue their careers.
Tag/Link to A:

Q C10: What is the opinion of cybersecurity professionals about No-Go-Security?

  • Short A: It’s new and untested, but we have received important hints (thanks)
  • Longer A: For most IT professionals, No-Go is an untested idea. More importantly, the founder received valuable feedback on many details. Security professionals do acknowledge the flaws in how cybersecurity is done compared to other security sectors. Experts can speak for themselves about whether it is worth turning the No-Go-Security ideas into products.
Tag/Link to A:

Q C11: Why don’t you care about software vulnerabilities?

  • Short A: We care, but not as much once we have No-Go-Security
  • Longer A: Vulnerabilities require a trigger, and potentially a malicious follow-up, to be dangerous. Even if content is a trigger, content alone does not create damage unless it is an exploit in the form of a simple backdoor within an existing app.
    An exploit is usually an app or a script; both are binary-hashcoded and white-, gray-, or blacklisted. If whitelisted, we would (temporarily and involuntarily) “ignore” the exploit. However, for the manufacturer/developer, this could have severe consequences for their reputation once the exploit is used maliciously. If the exploit is graylisted, some users will use it unknowingly; still, they have decided to accept the risk of graylisted apps. The Content Watchdog or Network Watchdog will be used to limit the damage. Users will receive additional tools from No-Go to detect and report malicious software behavior easily.
Tag/Link to A:

Term clarifications

Q C12: Is promising no damage from malware a flawed statement?

  • Short A: Not really. Damage is something that should be avoided/prevented
  • Longer A: We define damage as harm (to something) caused by someone else that leads to detectable devaluation, reduced usefulness, or broken normal functions. Because we set no threshold, every change could arguably lead to (some) damage. We acknowledge that argument, but damage from malware is more than minor; it is considered significant enough to be prevented by security measures; otherwise, it would not be called malware.
Tag/Link to A:

Q C13: What is security in comparison to safety?

  • Short A: Both terms mean freedom from harm, threat, and danger
  • Longer A: We use security and safety often interchangeably: freedom from harm, threat, and danger. Still, there is a difference between the terms. Security relates to group efforts to protect us. Safety relates more to the personal feeling of being free from harm and danger.
Tag/Link to A:

Q C14: What is the difference between white- and graylisting?

  • Short A: Whitelisting is based on authorized data, graylisting on detected patterns and statistics
  • Longer A: Binary hashcodes of apps, scripts, and data-exchange protocols must be white-, gray-, or blacklisted. Blacklisted hashcodes have lost their right to be executed or used automatically. Hashcodes have additional data associated with them. If these additional data are authorized disclosures from software developers, manufacturers, or web-resource operators, then we call the hashcode whitelisted. The hashcode is graylisted if the additional data were generated by algorithms or statistical inference.
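One way to picture the distinction is the following minimal sketch. The registry layout and names like `register_whitelisted` are our invention for illustration, not No-Go’s actual API; SHA-256 stands in for the binary hashcode.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ListEntry:
    status: str                    # "white", "gray", or "black"
    metadata: dict = field(default_factory=dict)

registry = {}  # hashcode -> ListEntry (hypothetical lookup structure)

def hashcode(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

def register_whitelisted(binary: bytes, disclosure: dict) -> None:
    # Whitelisted: metadata is an authorized disclosure from the
    # developer, manufacturer, or web-resource operator.
    registry[hashcode(binary)] = ListEntry("white", disclosure)

def register_graylisted(binary: bytes, inferred: dict) -> None:
    # Graylisted: metadata was generated by algorithms or
    # statistical inference, not an authorized disclosure.
    registry[hashcode(binary)] = ListEntry("gray", inferred)

def classify(binary: bytes) -> str:
    entry = registry.get(hashcode(binary))
    return entry.status if entry else "unknown"
```

The point of the sketch is that white and gray differ only in the provenance of the associated data, not in the hashing mechanism itself.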
Tag/Link to A:

Q C15: What are deviations or anomalies?

  • Short A: They are results that were not expected from disclosures, patterns, or predictions
  • Longer A: We provide additional data for white- and graylisted hashcodes (see previous answer). Watchdogs use these data to check whether software activities are suspicious. The expectation is that every software operation does what software developers/manufacturers or web-resource operators have disclosed, and nothing more. If that expectation is violated, the deviation is considered an anomaly or threat. Activities of graylisted apps or data-exchange operations are modeled via patterns or predictive models. Surprises are handled similarly, and users are asked for their preferred response (ignore or cancel the operation). These responses are statistically analyzed and turned into auto-responses; users have the option to correct them later.
Tag/Link to A:

Q C16: Why is proactive security so much better than reactive?

  • Short A: Proactive measures prevent damage early
  • Longer A: In reactive security, previously unknown threats can create damage. If damage detection is sensitive enough, the new threat is then blacklisted. Software that does not match an entry on the blacklist is accepted and executed. In fairness, there are methods to get potential viruses or malware onto the blacklist without letting them cause damage. But a blacklist is not proactive, despite such additional features. If users start unknown code and the system pauses, e.g., to get a confirmation, then we would have a proactive feature. The question is: is that effective enough? I.e., is the same (unknown) software blocked in other circumstances before being executed? Could it be loaded into RAM, made executable in RAM, and then used covertly? Some of the answers depend on the operating system used.
    A bit more technical: what about loading a harmless app into RAM and having another app modify the binary executable in RAM by inserting malicious attack code? A reactive solution tries to detect and remove the app making these changes. Proactively, we would prevent code modification of executables. If changes are made to non-executable code, then we prevent that code from being made executable while in RAM. Instead, we would demand that modified code be stored first and reloaded into RAM as executable only after being hashcoded and checked against the white-, gray-, and blacklists. These operations are relevant for developers; they could get special components that allow them to perform these operations overtly, not hidden as an attacker would.
    Partial proactive features are not sufficient. No-Go-Malware uses whitelisting with cached hashcodes, applied to every executable (including scripts/macros) before it can be executed. Also, we must prevent modifications to executables in RAM, a rule that must extend to the CPU’s cache.
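A minimal sketch of such a hashcode gate follows. The cache layout, the `may_execute` name, the graylist handling, and the use of SHA-256 are assumptions for illustration only; the real mechanism would sit below the OS, not in application code.

```python
import hashlib

# Hypothetical cache of already-checked hashcodes and their list
# status ("white", "gray", or "black").
hashcode_cache = {}

def may_execute(executable: bytes, user_accepts_gray: bool = False) -> bool:
    """Gate applied to every executable (including scripts/macros)
    before it runs: compute the binary hashcode, look up its list
    status, and refuse anything blacklisted or unknown."""
    h = hashlib.sha256(executable).hexdigest()
    status = hashcode_cache.get(h, "unknown")
    if status == "white":
        return True
    if status == "gray":
        # Graylisted code runs only if the user accepted the risk.
        return user_accepts_gray
    return False  # blacklisted or unknown code never runs
```

Because the check runs before execution (not after damage is observed), it is proactive in the sense used above; the cache exists only so the hash lookup does not slow every launch.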
    What if code modification is part of software’s regular operation mode? (see that answer).
Tag/Link to A:

Q C17: Why is prevention in security important?

  • Short A: Prevention (in No-Go) expects damage and prepares us to deal with its consequences
  • Longer A: Proactive security is already prevention. But if we mention prevention separately, we refer to features that help us when malicious software has fallen through the cracks and is about to create damage. In these situations, we want systems to prevent damage; this is No-Go’s understanding of prevention.
    Separate prevention is important for two reasons: first, it provides redundancy in case the primary protection against malware fails; second, it independently detects failures of the main security measure.
Tag/Link to A:

Q C18: What is the advantage of independent circuit breakers?

  • Short A: Preventing damage and gaining time for additional actions
  • Longer A: Circuit breakers in software are currently useless. They would be part of the operating system, which means they could be deactivated or manipulated, or the attack would adapt to the underlying detection filter. E.g., stopping software on a blacklist is a (dependent) circuit breaker. So is delaying further login attempts after false passwords are entered.
    But the problem with current security methods is the protection of the security measures themselves, which requires its own security protection, which might be vulnerable, and so on. Circuit breakers require full independence; if this independence is breached in simple, single-purpose units, it can be detected much more easily and reliably than in complex CPU/OS environments.
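For illustration, here is a software analogue of a tripping breaker, using the failed-login example above. In No-Go’s terms such logic would live in an independent, single-purpose unit rather than inside the OS; the class name, threshold, and cooldown are our invention.

```python
import time

class CircuitBreaker:
    """Trips after `threshold` failures and blocks further attempts
    for `cooldown` seconds, preventing damage and buying time for
    additional actions (a sketch, not No-Go's implementation)."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.tripped_at = None

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.threshold:
            self.tripped_at = time.monotonic()  # breaker trips

    def allows(self) -> bool:
        if self.tripped_at is None:
            return True
        if time.monotonic() - self.tripped_at >= self.cooldown:
            # Cooldown elapsed: reset the breaker.
            self.tripped_at = None
            self.failures = 0
            return True
        return False
```

The decisive property is not the logic itself, which is trivial, but where it runs: in a unit the protected system cannot modify.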
Tag/Link to A:

Q C19: What does it mean: it is impossible?

  • Short A: Impossible is a strong prediction; it is dangerous to overgeneralize impossibility
  • Longer A: From the laws of nature, we get impossibility statements. A person cannot physically be in two different places simultaneously. What about mixed liquids? Can they be separated over time? Entropy cannot naturally decrease; it only increases with time. However, this is less convincing: two liquids, water and oil, separate over time naturally. What about water and alcohol? It depends on the temperature. Still, in chemistry, there are energetically impossible reactions. In physics, chemistry, and math, we can make impossibility statements. In reverse, if something is not impossible, it is just a matter of engineering, and it can be done.
    For computer science, it is much more difficult to make impossibility statements. If we had them and they were true, we could save money and time otherwise spent trying to develop impossible solutions. There are a few impossibility proofs (without naming concrete examples); the model chosen for the proof already contained an impossibility of some kind. Is this a selection bias?
    E.g., in the Turing model, whether an arbitrary program terminates cannot generally be decided (the halting problem), so the correctness of an algorithm cannot generally be proven. But in less general situations, it is possible to use algorithms to validate the correctness of an algorithm. Then we could ask: which algorithms can we validate automatically? As a formal math problem, this question might not be answerable.
    So, is eliminating damage from malware of any kind impossible? Is it impossible to stop the waging of cyberwar via retrofitted legacy equipment? Could we give cyber defenders a sustainable advantage over attackers? Many have opinions on these questions based on experience with cybersecurity.
    No-Go-* has changed a few operational paradigms and is now claiming it can be done.
Tag/Link to A:


New Question

Our answers are not written in stone or intended to be our final word. If you disagree or can think of a better answer, please don’t hesitate to contact us, with the question code reference in the subject line.