No Go * (Star)

FAQ – No-Go Product (verbose)

Products

Q B1: What are the essential components that you want to develop?

  • Short A: Facilitating proactive/preventative security technology and educating fellow engineers
  • Longer A: No-Go-* will develop multiple technologies and components against cyberwar and later against vulnerabilities from super-smart AI.
    There is a difference between technology, i.e., our basic capabilities, and “essential components”, which are more related to product capabilities and features.
    Regarding (facilitating) technologies, we will make hooksafe-/hypervisor-type technologies work for us. Additionally, we need independent, non-bypassable hardware watchdog components within a device’s data buses (PCI, SATA, USB, etc.), built on a simple (i.e., feature-bare-bones) standard CPU that we can adapt to our specific needs (like the open-source RISC-V CPU template). We will also need to improve and adapt USB to some additional security demands without making it incompatible. Hardware-based crypto-key management for protecting encryption/decryption is another essential technology.
    Regarding product capabilities, we will need watchdogs (for executables, user content, and network activities) as hardware, software-only, or a combination of hard- and software components. All watchdog types have a low-level (hypervisor-based) component on the main CPU. The integrity of this software must continuously be validated. This software will also have more specific features for each watchdog type.
    Additionally, No-Go-* will have server-based services that are related to watchdogs, software developers/manufacturers, and web-resource operators.
    Beyond technology or product features, we want to focus on education. No-Go-* is making it part of its core mission to educate engineers and technicians to apply the No-Go-Security approach to their applications. No-Go-* requires application developers’ cooperation to deal efficiently with scripts/macros (i.e., fileless malware) and key management issues for SSL/TLS.
Tag/Link to A: NoGoStar.com/faqb_v/#B1

Q B2: How do you know you can stop malware?

  • Short A: Restricted access to CPU for known apps only (via cached, whitelisted hashcodes)
  • Longer A: Software is published (cloned) code; it might be customized but is rarely personalized. We treat manipulated apps as malware when we detect a deviation from expected hashcode values. With additional enhancement data (received from external servers), these cached hashcodes are used by watchdogs that can react to every unexpected (security-related) software behavior.
    The idea is to whitelist apps and accept their (security-relevant) activities after they are disclosed to us. We trust developers, i.e., we assume they do not include malicious features and that they warn us truthfully about potential misuse of their features (e.g., via interfaces/scripts). With developers’/manufacturers’ reputations on the line, we can deter them from turning rogue.
    Therefore, we establish trust in apps before we allow them access to RAM. With No-Go-Malware, no unknown app/script, certainly no malware, could make it to the CPU (surely not undetected). Its white-/graylisting stops known and unknown malware. Checking for undisclosed actions by software is an additional redundancy against known software that could covertly act maliciously.
Tag/Link to A: NoGoStar.com/faqb_v/#B2
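The gatekeeping described in B2 can be sketched in a few lines of Python. This is only an illustrative sketch: the cache names, the listing rules, and the use of SHA-256 are our assumptions for the example, not the actual No-Go-* implementation.

```python
import hashlib

# Hypothetical caches; in No-Go-* these would be fed by external servers.
WHITELIST = set()   # hashcodes registered by developers/manufacturers
GRAYLIST = set()    # hashcodes seen in installation contexts elsewhere

def register(app_bytes: bytes) -> str:
    """Developer-side registration: publish the app's hashcode."""
    digest = hashlib.sha256(app_bytes).hexdigest()
    WHITELIST.add(digest)
    return digest

def admit(app_bytes: bytes) -> str:
    """Watchdog-side check before code is allowed into RAM."""
    digest = hashlib.sha256(app_bytes).hexdigest()
    if digest in WHITELIST:
        return "whitelisted"   # known, registered code
    if digest in GRAYLIST:
        return "graylisted"    # unregistered, but seen elsewhere
    return "blocked"           # unknown code never reaches the CPU
```

In the real design, the caches would be refreshed from external servers rather than populated locally.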

Q B3: How do you know you can stop ransomware or data sabotaging?

  • Short A: No malware – and rapid recovery features in case of damage
  • Longer A: Ransomware is malware that primarily sabotages user data. As malware, it is repelled from entering RAM (and being used by the CPU); therefore, it would already be stopped proactively. But some ransomware may still slip through the cracks. What we would need is a Content-Watchdog solution that prevents damage. Restoration from backups is slow; the main damage would likely come from lost time. No-Go-Ransomware proposes multiple protection layers that would help via additional early warning and rapid recovery from damage (i.e., in a few seconds or minutes, not hours or days).
    No-Go-Ransomware is designed for rapid, near-zero damage repair. Attackers would waste their time and resources trying to damage or extort No-Go protected users.
Tag/Link to A: NoGoStar.com/faqb_v/#B3
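As a toy illustration of the rapid-recovery idea: the pre-write shadow-copy mechanism below is a hypothetical sketch, not the actual No-Go-Ransomware design.

```python
class ContentWatchdog:
    """Hypothetical sketch: keep a pre-write shadow of each file so that
    sabotaged content can be rolled back in seconds, not hours."""

    def __init__(self):
        self.files = {}     # live content, keyed by path
        self.shadows = {}   # last pre-write copy, taken before each write

    def write(self, path: str, data: bytes) -> None:
        # Snapshot the current version before the (possibly malicious) write.
        if path in self.files:
            self.shadows[path] = self.files[path]
        self.files[path] = data

    def restore(self, path: str) -> bytes:
        """Rapid repair: roll back to the pre-write shadow."""
        if path in self.shadows:
            self.files[path] = self.shadows[path]
        return self.files[path]
```

A real Content-Watchdog would keep such repair data outside the reach of the main OS, so ransomware cannot sabotage the shadows themselves.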

Q B4: How do you know you can stop spyware?

  • Short A: No malware – more transparency over exchanged data
  • Longer A: Spyware is malware that tries to get user data off-premise to remote locations; as malware, it is proactively repelled from entering RAM (and being used by the CPU). No-Go-Spyware is a device firewall solution that checks whether data exchange operations are misused (e.g., piggybacked on or routed through backdoors). No-Go-* considers software developers as partners. The same applies to website or web-resource providers. They are businesses with (good) reputations that they must protect. No-Go users are notified if they are about to use network resources that could not be fully checked for vulnerabilities (i.e., backdoor or spyware utilization).
    Most online activities follow detectable and predictable data exchange patterns. But sometimes, we would need additional (confidential) disclosures to confirm that the data exchange is normal, not nefarious. Because changes to the data exchange protocol are (made) detectable, disclosures must be updated (via dev-tools) and added to an archive. Companies providing this transparency open themselves up to independent scrutiny from audits, the public, or courts.
    No-Go-Spyware and its Network-Watchdogs are designed to deter rule-breakers by exposing them.
Tag/Link to A: NoGoStar.com/faqb_v/#B4

Q B5: How do you know you can stop backdoors?

  • Short A: No malware – better detection of anomalies and deterrence (for rule breakers)
  • Longer A: A backdoor is a covert software feature (receiving unexplained data). No-Go-Spyware uses its Network-Watchdog to detect all types of unpredicted data operations.
    We know it will work because we can detect unexpected or suspicious online activities and deter developers, manufacturers, or website/web-resource operators from using them because of devastating consequences to their reputation.
Tag/Link to A: NoGoStar.com/faqb_v/#B5

Q B6: Are there limitations to anticipated deliverables?

  • Short A: Initially, yes; but they can be fixed when security is solidified
  • Longer A: The first anticipated solution is a software-only version of a non-bypassable watchdog. This retrofit solution has unavoidable limitations.
    The software is implemented below the device’s current operating system as a low-level (type 1) hypervisor. This solution will likely be sufficient against criminals and nation-states. But initial No-Go-Tools all have an inherent flaw: they are software-only. Software is vulnerable to modification by relentless, advanced adversaries that probe for weaknesses, like super-smart AI. Such an AI could be a master of reverse-engineering code and steal insufficiently protected crypto-keys. If it removes all (data) traces, we would not even detect that this adversary controls our security.
    Therefore, a semi-soft/hardware solution with key protection is required sooner rather than later.
    An additional limitation initially comes from legacy IoT devices. Later, hardware-based retrofit solutions could protect legacy IoT as well. However, the full impact of these short-term IoT limitations on cyberwar scenarios is not yet sufficiently studied and understood.
Tag/Link to A: NoGoStar.com/faqb_v/#B6

Our answers are not written in stone or intended to be our final word. If you disagree or think of a better answer, please don’t hesitate to contact us with the question code reference. If you think we missed a question, please send Feedback/New Question.

Complementary

Q B7: What happens to firewalls when No-Go-* watchdogs are accepted?

  • Short A: Firewalls (i.e., security-conscious routers) remain important
  • Longer A: Firewalls are security-conscious routers for defining an intranet. They are often considered a boundary between a (safer) intranet and an unsafe Internet. No-Go-* is not making that distinction. Without No-Go-Tools on all devices, the intranet is as insecure as the Internet.
    The Network-Watchdog absorbs the role of the device’s firewall entirely. The resulting network security is more reliable than having a software firewall. But users’ network security involves situational adaptations, which is more than protection from technical/tool vulnerabilities.
    In particular, companies have more specific data vulnerabilities that require competent configuration. Intranet-defining firewalls are overwhelmed when it comes to detecting (advanced) low-level threats; they are better used to enforce additional network security rules for all devices.
    However, in the short-term, intranet traffic filters are still required to deal with legacy IoT threats.
Tag/Link to A: NoGoStar.com/faqb_v/#B7

Q B8: Are antivirus solutions required when No-Go-* solutions are available?

  • Short A: Cybersecurity is much more than Anti-Virus – these solutions remain important
  • Longer A: Antivirus products (AVPs) are based on the concept of blocking blacklisted apps or blacklisted (code) signatures found in apps. Binary hashcoding is likely used to find modified apps with viruses. But AVPs struggle, for different reasons, with white- or graylisting. AVPs are not interested in knowing much about security-related details within examined or checked software. They use certificates related to installers, legitimizing manufacturers as recipients based on a few administrative steps. But these steps are not enough to create a system of accountability and reputation for software developers and manufacturers.
    AVPs’ malware detection is redundant with what No-Go-* would provide. No-Go would prevent AVPs from doing their job – watchdogs stop malware files from being loaded into the device’s RAM. Also, quarantining apps would be rejected as a non-permitted task.
    The detection of viruses or malware is already a free software feature. AVPs are usually apps solving users’ security problems, from phishing to spam. AVP providers are assumed to be amenable and cooperative in using external white-, gray-, or blacklisted hashcodes to replace their binary signatures. AVPs could build additional services on the basic No-Go features.
    Cybersecurity is diverse, complex, and demanding; AVPs should provide automated solutions for users who need urgent (digital) protection from their individual (often unavoidable) vulnerabilities. Damage from malware, ransomware, or spyware cannot explain the full scope of annual cybercrime damage: hundreds of billions of dollars, even over $1 trillion in other estimates. Much damage is likely due to deception and dishonesty, not the failure of a specific tool. Cybersecurity should reduce the cybercrime rate and damage significantly.
Tag/Link to A: NoGoStar.com/faqb_v/#B8

Q B9: Are backups required when data sabotaging is stopped?

  • Short A: Not required for security, but for hardware failures
  • Longer A: Data backups are highly recommended. Storage hardware is not failsafe. The only problem with depending on backups for repairing data sabotages is that restoring content from external media takes a lot of time.
Tag/Link to A: NoGoStar.com/faqb_v/#B9

Our answers are not written in stone or intended to be our final word. If you disagree or think of a better answer, please don’t hesitate to contact us with the question code reference. If you think we missed a question, please send Feedback/New Question.

Features

Q B10: Is No-Go-* limiting existing computer or network capabilities?

  • Short A: No
  • Longer A: In simple terms, the design goal is to have all good software running smoothly while all bad stuff is blocked. This goal requires human judgment and potentially even controversial decisions. Within No-Go, we are not making decisions permanent. We will ensure that even borderline software remains usable for legitimate reasons. But we should have the right to require additional protections against misuse. Protecting vulnerable (including presumed innocent) users is more important than commercial interests or success. Some features will require court orders, and we must facilitate them to the best of our abilities.

Tag/Link to A: NoGoStar.com/faqb_v/#B10

Q B11: Why is No-Go-Security a proactive, preventative solution?

  • Short A: Security threats should not get close to CPUs – redundancy if whitelisting has failed
  • Longer A: Proactive security in No-Go-* is primarily based on knowing and trusting the code that enters RAM (and is thereby executed on a device’s CPU). No-Go-* uses cached binary hashcodes of apps to check if the code was registered by the software developer/manufacturer (in that case, we call it whitelisted). If code is not registered, was it detected in software installation contexts on other machines? If yes, the app is assumed to be acceptable and called graylisted.
    Exploits for vulnerabilities cannot make it into RAM covertly because even scripts/macros are hashcoded and white- or graylisted. Apps or scripts are always associated with someone, and being a source of malware or exploits will lead to a crashed reputation.
    No-Go’s security is preventative by pausing actions or quarantining files when operations deviate from expectations. Additionally, No-Go-* expects protection failures and is redundantly prepared to mitigate damage. It tries to detect failures, even late; at the same time, it has created repair data early to fix damage, so that repair can be initiated automatically when problems are detected.
Tag/Link to A: NoGoStar.com/faqb_v/#B11
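The graylisting step (promoting unregistered code that was seen in installation contexts on other machines) could be sketched as a server-side tally. The threshold and function names are illustrative assumptions, not part of the No-Go-* specification.

```python
from collections import Counter

# Hypothetical server-side tally: how many machines reported a given
# unregistered hashcode in a software-installation context.
install_sightings = Counter()
GRAYLIST_THRESHOLD = 3  # illustrative value, not from the source

def report_installation(hashcode: str) -> None:
    """A machine reports seeing this hashcode during an installation."""
    install_sightings[hashcode] += 1

def classify(hashcode: str, whitelist: set) -> str:
    """Registered code is whitelisted; widely seen code is graylisted."""
    if hashcode in whitelist:
        return "whitelisted"
    if install_sightings[hashcode] >= GRAYLIST_THRESHOLD:
        return "graylisted"
    return "unknown"
```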

Q B12: Can No-Go-* deliver perfect security?

  • Short A: No, but near-perfect based on protection, prevention, and auto-detection of failures
  • Longer A: Perfect means flawless, i.e., it cannot be done/implemented any better. Because we can never know whether a better implementation exists, we use the term near-perfect instead.
    Also, when discussing (near-)perfect security, we must acknowledge the extended context in which security is being implemented, which could expose (previously ignored) flaws in security. There are, seemingly, threats we cannot control (undermining our security). Examples are compromised crypto units, a compromised OS in which the crypto units operate, or human operators who switch off essential security components. But these circumstances are avoidable.
    Likely, near-perfect security cannot be realized under all circumstances or for all applications. But that does not mean we cannot build it for a well-defined (real-world) application.
    Example: Near-perfect key protection would never expose keys in cleartext or allow misuse of keys. Therefore, keys processed in cleartext on unprotected CPUs, or in crypto units accessible by the main OS, should be flagged as compromised; such device types are useless for processing secrets. Additionally, if near-perfectly protected keys are revealed to the outside (against all expectations), then we still expect near-perfect security to detect (every) misuse. This goal implies that every simulated crypto-unit is directly detectable.
    Systems that depend on human involvement cannot be near-perfect; i.e., even tasks like investigating suspicious anomalies should be automated. Why? What happens if humans fail in their tasks? There are more aspects to consider, but getting to near-perfect security is doable and essential; we must trust crypto-key protection.
Tag/Link to A: NoGoStar.com/faqb_v/#B12

Q B13: What threats can No-Go-Security adapt to automatically?

  • Short A: Security is based on auto-adapting, closed feedback loops
  • Longer A: Additional, non-bypassable Executable-Watchdog, Content-Watchdog, or Network-Watchdog components regularly check local software behavior. The data these watchdogs use are cached locally. They are generated via models from observing these apps or from (voluntarily) disclosed data from developers/manufacturers. Security is built as an auto-adapting, closed feedback loop using provided or extracted security-related capability data to detect anomalous software actions automatically.
    Which operations are considered a threat depends on the software. Developers disclose software capabilities; these are then (all) accepted and not further questioned. A threat is a deviation from disclosed capabilities. Watchdogs detect these deviations easily and trigger reporting and investigations, possibly leading to data updates. If non-disclosed features show malicious intent, then the developer’s/manufacturer’s reputation is ruined.
    For graylisted apps, the process is more technical. But the goal is to have accurate patterns and (prediction) models that app behavior must pass before software actions are accepted; if this test fails, i.e., a problem is detected, it is reported as a suspicious anomaly for possible investigations.
Tag/Link to A: NoGoStar.com/faqb_v/#B13
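The core test of the feedback loop, deviation from disclosed capabilities, might look like this in a minimal sketch (the disclosure table and return values are assumptions for illustration):

```python
# Hypothetical disclosure table: capabilities each developer registered.
DISCLOSED = {
    "editor.app": {"read_file", "write_file"},
}

def check_operation(app: str, operation: str) -> str:
    """Accept disclosed operations; flag any deviation as a threat."""
    if operation in DISCLOSED.get(app, set()):
        return "accepted"
    # A deviation from disclosed capabilities triggers reporting and
    # a possible investigation, per the feedback loop described above.
    return "suspicious-anomaly"
```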

Q B14: Does No-Go-Security have blindspots, i.e., does it fail to detect malicious actions?

  • Short A: Yes – but we help to detect them (late) and deter attackers from exploiting users through them
  • Longer A: Yes, No-Go-* has blindspots. No-Go-* trusts that registered developers’ software does not contain intentionally malicious code. If apps’ disclosed abilities are used as a cover-up for malicious features, then No-Go-* cannot detect that automatically.
    Over time, it can be expected that customers or competitors will detect this. A few options are already considered to support an earlier detection.
    Additionally (and a bit more technically), overly broad patterns or prediction models for graylisted apps or data exchange protocols lead to the acceptance of false negatives. However, the quality of the used patterns/prediction models is not transparent to attackers, and not every detected deviation from patterns or models is called out or leads to pauses. Therefore, even advanced attackers are uncertain whether their attack was detected and whether they are being investigated.
Tag/Link to A: NoGoStar.com/faqb_v/#B14

Q B15: Could No-Go-Security adapt to entirely new threats?

  • Short A: Yes, but we should hold our horses
  • Longer A: Yes, the No-Go-Approach makes sense in a few more scenarios. But it is better to hold back on problems that are not already calling for security solutions. Creating demand for advanced security against super-smart AI will come later with education so that deciders understand their vulnerability better.
Tag/Link to A: NoGoStar.com/faqb_v/#B15

Q B16: Can No-Go-Security be updated?

  • Short A: Yes; and updates cannot be exploited in or for attacks
  • Longer A: Yes, all No-Go-Solutions can be updated. But these updates cannot be done covertly by attackers. There are multiple confirmation steps required before new software is accepted. Each confirmation request tells close-by (neighboring) instances and remote servers that a security device is changing its operating software to the newest (confirmed) version. If software changes are part of an attack, the security device is deactivated, multiple reports are generated, and an investigation is started immediately.
    Developers of new security software would have special (registered) instances that absorb some of these reports and calls for investigations. The use of these instances is tracked so that their utilization within an attack is likely detectable. Most importantly, the reputations of developers or manufacturers involved in attacks would crash irreversibly.
Tag/Link to A: NoGoStar.com/faqb_v/#B16
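The confirmation flow described in B16 could be sketched as follows. The number of required confirmations and the report strings are illustrative assumptions, not the actual No-Go protocol.

```python
class SecurityDevice:
    """Sketch of the confirmation flow: an update activates only after
    enough neighbor/server acknowledgments; an attack deactivates the
    device and triggers reports (counts and strings are illustrative)."""

    REQUIRED_CONFIRMATIONS = 2  # e.g., one neighbor + one remote server

    def __init__(self):
        self.version = "1.0"
        self.pending = None
        self.confirmations = set()
        self.active = True

    def propose_update(self, new_version: str) -> None:
        self.pending = new_version
        self.confirmations.clear()

    def confirm(self, confirmer: str) -> None:
        if self.pending is None:
            return
        self.confirmations.add(confirmer)
        if len(self.confirmations) >= self.REQUIRED_CONFIRMATIONS:
            self.version = self.pending
            self.pending = None

    def report_attack(self) -> list:
        # If the software change is part of an attack, the device is
        # deactivated and an investigation starts immediately.
        self.active = False
        return ["deactivated", "report-generated", "investigation-started"]
```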

Q B17: How fast can No-Go-Security adapt to new threats?

  • Short A: No-Go-* is proactive – users are protected from damage; no worries about threats
  • Longer A: Because of proactive and preventative security, damage from malware is unlikely. The worst case is a delay in some actions.
    Detecting a new threat depends on the type of anomaly.
    If No-Go-* detects a suspicious anomaly, it initiates a pause (e.g., of a file upload). Suspicious operations should not create damage before software developers/manufacturers have had a chance to respond. These clarifications create new additional data related to (whitelisted) apps, which is distributed after the next regular update query and finally used by the local watchdog. Some data could be distributed faster, in particular to release paused activities.
    It is more technical for graylisted apps or data exchange/network patterns, but patterns and prediction models are updated through operations that humans will not notice. No-Go-* continuously optimizes its false-positive and false-negative reporting.
Tag/Link to A: NoGoStar.com/faqb_v/#B17

Q B18: What if software (already) modifies its own software code?

  • Short A: We need a different type of proactive security around self-modifiable software
  • Longer A: We are aware of software operation technologies, used, e.g., in Samsung’s smart assistant technology (Bixby), that can change their own code. Conceptually, this is a potentially dangerous technology. How could we prevent or detect misuse of Bixby’s underlying capabilities? Currently, it is not fully dynamic in modifying itself. It is likely that future ASI, running on our devices, will be conceptually similar to Bixby. Therefore, banning this approach early would take away the unique chance Bixby gives us to develop proactive security for software to which we can’t apply our white- or graylisting methods.
    A possible approach is to have Bixby-type software operating in special Virtual Machines (VM). These VMs would include some additional features, including guardrails, that could allow us to independently audit self-modifying code for potentially malicious activities. If we use isolated VMs, we could reduce our exposure to activities that endanger the integrity of regular apps while processing our user data.
Tag/Link to A: NoGoStar.com/faqb_v/#B18

Q B19: Does No-Go-Security have single points of failure?

  • Short A: No; the chosen architecture is self-adapting, self-healing, and fault-tolerant
  • Longer A: No-Go intends to implement features with a self-adapting, even self-healing, fault-tolerant architecture. Because all security code can be updated safely, and keys are stored in protected hardware, the location or level of data redundancy does not need to be made transparent to humans. No-Go-Security will be reliable, adaptable, and scalable. As long as devices have power or are connected to networks, devices and their data are protected against attackers.
    Connections to the network are tested constantly by apps requesting updates or updated data, so flaws within data distribution can be detected early. Every delay in receiving data, under all conceivable, even extreme, conditions, is measured and analyzed; it is used to optimize the connection redundancy and reliability of every linked component.
    Out of concern that humans could fail to do their designated tasks reliably and immediately, humans should only define or manage high-level security goals. Humans are excluded from interfering with operational and most configurational aspects of No-Go-Security.
Tag/Link to A: NoGoStar.com/faqb_v/#B19
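The constant connection testing mentioned in B19 could be sketched as a delay monitor attached to update queries. The median-based threshold and parameter values are illustrative assumptions.

```python
import statistics

class LinkMonitor:
    """Sketch: every update query doubles as a connectivity probe; the
    collected delays surface data-distribution flaws early (the
    median-based threshold below is an illustrative assumption)."""

    def __init__(self, alert_factor: float = 3.0, min_samples: int = 5):
        self.delays = []
        self.alert_factor = alert_factor
        self.min_samples = min_samples

    def record(self, delay_s: float) -> bool:
        """Record a query delay; return True if it is anomalously slow."""
        suspicious = (
            len(self.delays) >= self.min_samples
            and delay_s > self.alert_factor * statistics.median(self.delays)
        )
        self.delays.append(delay_s)
        return suspicious
```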

Q B20: Why should we use RISC, not CISC, for security?

  • Short A: We must be able to detect malware in hardware
  • Longer A: RISC (reduced instruction set computer) CPUs have around 30-50 basic instructions, while 32-bit CISC (complex instruction set computer) CPUs have about 1,500 and 64-bit ones over 2,000 different instructions. Additionally, hyper-threading, different caching levels, security rings, micro-code, and other efficiency- or performance-increasing technologies have made CISC extremely complex. So, could we trust that CISC’s complexity is not being misused, i.e., used against us by an ASI? Also, do we have, or could we have, reliable tools to detect hardware-based malware hidden by an ASI within multiple layers of billions of transistors across different CPU versions?
    RISC is not simple, but with RISC-V, we have an open-source template that developers and experts could analyze and validate from design (every step) to final production to check if ASI has inserted some hardware-based malware. Variation or efficiency/performance improvements are detrimental to security. With as much simplicity and standardization as possible, we have a much better chance to create CPUs that we could use as comprehensively validated and trusted building blocks within our watchdogs and encryption/decryption units.
Tag/Link to A: NoGoStar.com/faqb_v/#B20

Q B21: Is No-Go-Security incompatible with any hardware?

  • Short A: No-Go-* has a no-device-left-behind policy; problems are too early to predict
  • Longer A: It is too early to say what hardware cannot be supported. There are a lot of legacy hardware solutions out there. It is impossible to support everything from the beginning. However, No-Go will educate professional and hobbyist engineers to support as much hardware as possible.
Tag/Link to A: NoGoStar.com/faqb_v/#B21

Q B22: Is No-Go-Security incompatible with any software?

  • Short A: No; but some software should/must be adapted to No-Go-Security
  • Longer A: Some software features, e.g., dealing with security and crypto keys, should be adapted to No-Go-Security sooner or later. Also, low-level software that tries to manipulate files, folders, or filesystems (including existing antivirus solutions) will not work as expected anymore – they will need to be adapted. It is possible to give some features a grace period to adapt to No-Go-Security. We recommend or even facilitate using Virtual Machines if software features use certain low-level capabilities, including modification of their own binary code.
Tag/Link to A: NoGoStar.com/faqb_v/#B22

Q B23: Why do you require software developers/manufacturers to register?

  • Short A: Software is critical; other sectors (medical, financial) already have self-regulation
  • Longer A: Software is essential to our technical civilization. We depend on developers, and we must trust them. At least we must identify developers who should not be trusted. Having a good reputation is an important motivator. Receiving that reputation will depend solely on someone’s actions and is not based on popularity or subjective ratings.
    For the protection of the public, we already have self-regulation among medical doctors, lawyers, financial advisers, and many more business sectors. We have regulations on whom we should trust. The registration of software developers is not about their performance or even whether someone involuntarily left vulnerabilities in their products. Software vulnerabilities are normal and not worth being singled out for trust. However, if someone were to intentionally and covertly insert malware features, including exploits of vulnerabilities, this behavior must have severe consequences.
Tag/Link to A: NoGoStar.com/faqb_v/#B23

Q B24: Are you expecting too much (info) from software developers/manufacturers?

  • Short A: No. We offer developers/manufacturers an easy way to improve their reputation
  • Longer A: Developers would follow a simple checklist with easy questions. Most answers are not confidential, but some might be, like which 3rd-party components were used. Their answers would be used to reliably notify manufacturers about security issues with the components they use. More importantly, with registration, developers and manufacturers have a tool to gain the trust of their customers and users based on their actions. Being registered is simply in their best interest. Most follow-up reporting should be done with automated features in development tools before deploying new/updated software.
Tag/Link to A: NoGoStar.com/faqb_v/#B24

Q B25: What happens to non-registered Software, Developers, or Manufacturers?

  • Short A: It’s being made transparent to users; they can decide
  • Longer A: As consumers and customers, we don’t care about a particular individual developer, only whether there are reasons for suspicion. For trusting a product, we need to know if a developer is involved in cybercrime or ransom fraud. They won’t tell us about their past, but it will be detected if their involvement is ongoing. Using non-registered software on a computer is a decision made by users/customers. No-Go will make this transparent. Registered software is whitelisted and generally more trustworthy because people stand openly behind their deliverables. No-Go continuously validates whether the software stays within the limits set by developers’/manufacturers’ disclosures.
Tag/Link to A: NoGoStar.com/faqb_v/#B25

Q B26: Do you expect web-resource operators to participate voluntarily?

  • Short A: No, not all – potentially not even many; but businesses will likely see an advantage
  • Longer A: Many websites or resource operators are businesses. It would make sense for them to participate if they don’t have anything to hide. However, participation won’t be required for most websites anyway. It can be expected that most security-relevant features are detected and validated automatically. Every unannounced change will be detected. Whether or not these operators want it, all sent or received data are under scrutiny by client-side soft-/hardware. How consumers will react to businesses not being transparent is unknown at this point. However, avoidable scandals are expected to convince (more) companies to participate voluntarily.
Tag/Link to A: NoGoStar.com/faqb_v/#B26

Q B27: How do you check content within encrypted (SSL/TLS) messages?

  • Short A: We create an accepted man-in-the-middle instance in the local Network-Watchdog
  • Longer A: Session keys from SSL/TLS are currently managed in the device’s RAM. In RAM, they might be additionally protected, but they are vulnerable to attackers during the entire session. In No-Go, if regular software uses keys, it will generate or receive them and quickly move them out of the vulnerable RAM. Actually, it replaces them with another key (handled within the regular software) that is irrelevant/useless to the outside.
    Essentially, key negotiations are still done by the regular software, while session keys’ content encryption is done via an accepted and fully controlled man-in-the-middle instance.
    SSL/TLS uses public keys in cleartext; therefore, it should be intentionally incompatible with No-Go’s key-safe and encryption. Hardware-based Network Watchdogs should not support inferior data encryption that can’t be protected against advanced adversaries.
Tag/Link to A: NoGoStar.com/faqb_v/#B27
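The key-replacement idea in B27 can be illustrated with a toy key-safe. The surrogate-handle scheme and the XOR placeholder cipher are assumptions for the sketch; a real watchdog would perform the actual TLS record encryption in protected hardware.

```python
import os

class NetworkWatchdogKeySafe:
    """Illustrative sketch of the 'accepted man in the middle': the real
    session key moves out of app-visible memory into the watchdog's
    key-safe; the app keeps only a surrogate handle that is useless to
    an attacker reading RAM."""

    def __init__(self):
        self._safe = {}  # real keys, held only inside the watchdog

    def deposit(self, session_key: bytes) -> bytes:
        """Swap the negotiated session key for an opaque surrogate."""
        surrogate = os.urandom(16)
        self._safe[surrogate] = session_key
        return surrogate

    def process(self, surrogate: bytes, data: bytes) -> bytes:
        """En-/decrypt inside the watchdog. The XOR here is only a
        placeholder for the real TLS record encryption in hardware."""
        key = self._safe[surrogate]
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Key negotiation would remain in the regular software; only the resulting session key is swapped out, matching the division of labor described above.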

Q B28: How will No-Go deal with filesystem/stateful info managed in RAM?

  • Short A: Watchdogs are transparent to all CPU/OS operations
  • Longer A: This is a low-level technical issue. The watchdog hardware is transparent to the CPU/OS, i.e., a watchdog is not detectable when it receives or sends data. No-Go’s watchdogs do not require the OS to be changed; some additional (low-level) software will be installed. For interpreting certain requests (like directly addressing data within a filesystem), the watchdog caches relevant information and uses it to validate whether requests require interventions.

Tag/Link to A: NoGoStar.com/faqb_v/#B28
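The caching of filesystem information mentioned in B28 might be sketched like this (the block-level granularity and method names are illustrative assumptions):

```python
class FilesystemShadowCache:
    """Sketch: the watchdog mirrors filesystem metadata it observes, so
    requests that address data directly can be validated without OS
    changes (block granularity and names are illustrative)."""

    def __init__(self):
        self.block_owner = {}  # block address -> owning file path

    def observe_allocation(self, block: int, path: str) -> None:
        """Learn ownership by watching regular filesystem traffic."""
        self.block_owner[block] = path

    def validate_direct_write(self, block: int, claimed_path: str) -> bool:
        """Intervene if a raw write targets a block owned by another file."""
        return self.block_owner.get(block) == claimed_path
```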

Q B29: Do you check attack patterns fine-grained or coarse-grained?

  • Short A: Both: fine-grained when we know the attack method; coarse-grained for unknown methods
  • Longer A: The problem with fine-grained checks for malicious modifications is that they are computationally intensive when we don’t know what we are looking for.
    No-Go-Security makes an important assumption: software is sufficiently tested by manufacturers. Software could still have bugs, and users should be able to report them easily to manufacturers. Also, if we know that an app, e.g., stores files, we don’t check if that software could intentionally manipulate these files. But if a file format is standardized, we could check if some piggybacking happened, i.e., if hidden information was inserted. That would be a fine-grained security validation.
    A coarse-grained security check validates, e.g., whether an app uses the network although its developers did not disclose that feature. We assume that, because of hashcoding, software cannot be manipulated locally. Developers or manufacturers must have inserted these features and are responsible for the integrity and quality of the delivered product. Manufacturers could receive a grace period for updating feature disclosures before their reputation rating is updated.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B29
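A coarse-grained check of the kind described above can be sketched in a few lines. The function name, the feature labels, and the 30-day grace period are illustrative assumptions only; the real policy would be set elsewhere.

```python
from datetime import date, timedelta

GRACE = timedelta(days=30)  # assumed grace period, for illustration

def coarse_check(disclosed, observed, first_seen, today):
    """Compare disclosed features against observed behavior.
    Returns 'ok', 'grace', or 'reputation-hit' (hypothetical labels)."""
    undisclosed = observed - disclosed
    if not undisclosed:
        return "ok"
    if today - first_seen <= GRACE:
        return "grace"       # manufacturer may still update the disclosure
    return "reputation-hit"  # undisclosed behavior persists past the grace period

verdict = coarse_check(
    disclosed={"filesystem"},
    observed={"filesystem", "network"},  # app used the network undeclared
    first_seen=date(2024, 1, 1),
    today=date(2024, 3, 1),
)
```

Note that the check never inspects what the app does with the network, only that it uses a capability nobody disclosed; that is what keeps the coarse-grained path computationally cheap.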

Our answers are not written in stone or intended to be our final word. If you disagree or think of a better answer, please don’t hesitate to contact us with the question code reference. If you think we have missed a question, please send Feedback/New Question.

Concerns

Q B30: Will No-Go-Security slow down protected devices?

  • Short A: Probably not that much. But security always has some impact
  • Longer A: There is a small performance price to pay. It occurs at the initiation of a transaction, not during the transaction itself. Most of the heavy lifting could be done by watchdog hardware, or by a dedicated CPU core if the watchdog is a software-only implementation.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B30

Q B31: Do we need No-Go-Security on all machines?

  • Short A: No. Cyberwar can be stopped with a smaller footprint; with ASI, it’s different
  • Longer A: Cyberwar doesn’t happen on all consumer devices yet. Cyberweapons try to be selective in their targets and use regular devices as a springboard. Institutions, including computers serving infrastructure or companies, must be better prepared than consumers. But if No-Go-Security incapacitates malware, why not let more users benefit from that progress?
    Consumer devices could become holdouts for an ASI after its emergence or its escape from the control of whoever started it. If we want ASI on regular consumer devices (as loyal assistants or companions to humans), we need No-Go-Security on all devices. Only then could we reliably switch off any unsafe, out-of-control ASI.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B31

Q B32: Do you use or facilitate surveillance?

  • Short A: Not for security; but we must support court-ordered warrants for limited surveillance
  • Longer A: Surveillance is not being used or supported as a security tool. However, No-Go-* must accept that perfect communication encryption is not in society’s best interests. No-Go-* will agree to accept court-ordered warrants leading to limited surveillance. Governments or parents (protecting their children) should be allowed to receive session keys under predefined conditions. The session keys are not provided in cleartext; procedures around sharing session keys will not make the entire system vulnerable to criminal misuse.
    Independent (court) supervision of surveillance seems to be a sound system under all conceivable circumstances.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B32

Q B33: Are there some goals that you are doubtful about achieving?

  • Short A: … well, the no-device-left-behind promise is a challenge
  • Longer A: All proposed solutions and their features are based on relatively simple components combined in a new way. Many of these solutions have been studied in (academic) papers; they will be available on arxiv.org and in other publications.
    Because of our no-device-left-behind promise, we do not know if hypervisor solutions can always be implemented retroactively (e.g., on operating systems that are 30+ years old). But other No-Go-Solutions, potentially with less redundancy, could be used to make (really old) legacy equipment more secure.
    We admire do-it-yourselfers and hobbyists for keeping the knowledge of old technologies alive and supporting them, even if this is not good business.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B33

Q B34: How certain are you to deliver on your full promises?

  • Short A: Very certain on developing capabilities; cautiously optimistic on a broad deployment
  • Longer A: The initial promise is to stop nation-states from waging cyberwar using malware, ransomware, spyware, and backdoors. Delivering on that promise contains two steps: (1) development and (2) deployment.
    Re (1): We are confident that a software-only hooksafe-type solution can accomplish all anticipated features. The same applies to the technologies that solidify this victory with a combined soft- and hardware solution.
    Re (2): Global deployment is a tall order; it requires that the private sector is on board. Under that assumption, we could conceivably reach a relatively high penetration rate among consumer devices (50% or more) fairly quickly. Legacy systems are a problem. We could be more optimistic if the media and governments were on board as well.
    However, we are less confident about industrial computer systems because there is so much diversity and customization. Even if governmental regulators push manufacturers and operators harder to solve their security exposures, it may take several years. This complexity needs to be addressed by many contributors; this is the reason why the No-Go-Community must be about education and no-device-left-behind (and why this policy is good business).
    There is a chance that simplified hardware solutions, with fewer features against advanced ASI adversaries, could be used to protect computer systems within critical infrastructure. Experts are required to determine if there is an acceptable trade-off between a fast but less perfect hardware-security solution (against cyberwar consequences) or waiting for the standard hardware solution designed to deal with ASI.
    Legacy IoT is a separate problem. No-Go-* will later provide several technologies to solve it, but it is unlikely to be solved quickly. That is, IoT is not (fully) solvable with a software-only No-Go-Solution.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B34

Q B35: Could No-Go-*’s development effort be done in vain?

  • Short A: Unlikely if we can deliver on promises; if not, there are still useful outcomes
  • Longer A: If No-Go-* delivers on its goals and promises, the answer is clearly no. Answering this question does not mean we are preparing for failure.
    One of the biggest contributions to cybersecurity is transforming cybersecurity’s current paradigm: don’t trust developers and their products. If we increase trust in software and developers by registering them and having them self-regulate, that is huge progress on its own.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B35

Q B36: How much should regular users care about No-Go-Security™?

  • Short A: Users should not worry about basic security; instead, be protected against cybercrime
  • Longer A: A good comparison is the use of an elevator. Its security (including maintenance) is simply guaranteed by product liability. Failures are not allowed and always have consequences. Security is a background feature; there is no reason to discuss or even notice it. It is No-Go’s vision that basic security becomes part of IT’s foundation.
    Still, users often have digital vulnerabilities; some are unavoidable, like valuable or personal data. Security must be configured to protect them. Instead of being concerned about having encryption keys stolen by malware, users should focus more on not being tricked by dishonesty or deception.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B36

Q B37: Could No-Go-*™ educate attackers?

  • Short A: Yes, that is a valid concern, but it applies to every technology
  • Longer A: Educating attackers is an unavoidable side-effect. However, we hope the attackers have a gray hat. Or that we can catch them.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B37

Q B38: Would you share negative news about No-Go-*?

  • Short A: Yes, there is no positive or negative news – it’s just progress
  • Longer A: Facts are what they are. Therefore, there is no good or bad news. Actually, bad news doesn’t need to remain bad; it is often the source of ideas and even breakthroughs. With truthful transparency, we can make much better progress. We may also attract new expertise and engineering talent who already know what to do.
    Return
Tag/Link to A: NoGoStar.com/faqb_v/#B38

CONTACT US

Feedback
New Questions

Our answers are not written in stone or intended to be our final word. If you disagree or think of a better answer, please don’t hesitate to contact us with the question code reference within the subject line.