So, you’ve decided you want to be foreign intelligence agency proof. Boy, is that a losing battle for your average office.

In this threat model, coercion, physical destruction of property, and even torture are on the table (and undetected methods aren’t off the table, either).

Coercion-proof security is far too expensive to be in scope for businesses with limited funds. Deterring attackers unwilling to be coercive… that might be within your bounds.

Your threat model considers moderately advanced, but not overly dystopian, intrusions. If CIA agents hold your Swiss, NYC, and London offices hostage at the same time, all bets are off. You won’t be inspecting chips here, but you will be employing hardware distrust (laptop and router independently secure the connection out of the building) and self-controlled BIOS software (certs, netboot, 2FA…).

Why consider this in a model? Because it illustrates the power of a two-man rule and of reducing unilateral control over a system. If there’s any one person who can be held at gunpoint to turn over a key, a foreign intelligence agency has an incentive to get to that person. Heck, if any one person can act without detection, there’s an incentive to recruit them INTO a foreign intelligence agency.

It also means that no model is completely safe once you consider supply-chain MITM attacks (not covered by Resource Public Key Infrastructure) or useful idiots. There’s no guarantee of security without training the population and bringing security out into the sunlight. If you don’t preserve the knowledge of pentesting in a younger generation… you may as well be IoT toast.

Diagramming this would be fun, but it’s complicated, yo. Gotta figure out the minimum number of separate pieces (other than the 24/7 guard: 3 shifts, 2 guards, overlap for sick days…). Gosh… how much redundancy do you need? Anyway… a diagram could help. I’ll make one soon.

0. Hardware

You’ve installed security hinges, security astragals, and weatherproofing-style security devices to block through-gap and under-door tools. You use secure locks with cameras to record access. You’ve even installed alarm systems that need to be disarmed within a certain timeframe.

Your valuables are in tamperproof safes bolted to the floor (maybe they’re even fireproof). You understand your efforts only make attacks take longer. Your backups are in separate locations.

Your workstation uses screen protectors and laptop locks. Your security system, laptop, phone, and datacenter stay in 24/7 contact with each other across two communication channels (cell/wire and Tor/clearnet). You’re notified of every access, and you can’t delete access logs.
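A minimal sketch of what “can’t delete access logs” could look like in practice: a hash-chained log, where each entry commits to the previous one, so removing or editing an entry breaks the chain. The file path is hypothetical; a real deployment would also replicate entries to an independent, append-only store.

```python
import hashlib
import json
import time

LOG_PATH = "access.log"  # hypothetical path; in practice this lives on a remote, append-only store

def append_entry(event: str) -> None:
    """Append an access event, chained to the hash of the previous entry."""
    prev_hash = "0" * 64
    try:
        with open(LOG_PATH, "rb") as f:
            last_line = f.read().splitlines()[-1]
            prev_hash = json.loads(last_line)["hash"]
    except (FileNotFoundError, IndexError):
        pass  # empty log: start the chain from the all-zero hash

    entry = {"ts": time.time(), "event": event, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry, sort_keys=True) + "\n")

def verify_chain() -> bool:
    """Recompute the chain; any deleted or edited entry breaks verification."""
    prev_hash = "0" * 64
    with open(LOG_PATH) as f:
        for line in f:
            entry = json.loads(line)
            claimed = entry.pop("hash")
            if entry["prev"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest() != claimed:
                return False
            prev_hash = claimed
    return True
```

Having an independent party run `verify_chain()` is what turns “we log everything” into “we can show nobody trimmed the log.”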

You know about common lock bypasses, bad lock designs, and keyed-alike systems (see DEF CON…). You understand that if you don’t make time for maintenance, it will make time for you.

You regularly have a team inspect locks, ceiling tiles, and cables for signs of intrusion, bugging, and tapping. Your network shuts off unused ports, and you work to scout out rogue devices (Pwn Plug, etc.) and hardware keyloggers. Simply having armed guards isn’t the same as regular, detailed inspection of your hardware. Trust, but verify.
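One small, hedged example of scouting for rogue devices: diff the kernel’s neighbor table against your hardware inventory. The MAC allowlist below is made up, and MACs are trivially spoofed, so treat this as a tripwire for lazy implants, not a substitute for 802.1X and switch-port security.

```python
import subprocess

# Hypothetical allowlist: MAC addresses your inventory says belong on this segment.
KNOWN_MACS = {
    "aa:bb:cc:dd:ee:01",
    "aa:bb:cc:dd:ee:02",
}

def seen_macs() -> set[str]:
    """Parse the Linux neighbor table (`ip neigh show`) for MACs currently seen."""
    out = subprocess.run(["ip", "neigh", "show"], capture_output=True, text=True, check=True).stdout
    macs = set()
    for line in out.splitlines():
        fields = line.split()
        if "lladdr" in fields:
            macs.add(fields[fields.index("lladdr") + 1].lower())
    return macs

if __name__ == "__main__":
    unknown = seen_macs() - KNOWN_MACS
    for mac in sorted(unknown):
        print(f"ALERT: unrecognized device on the network: {mac}")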

1. Electronics

Your BIOS has been flashed, BY YOUR SECURITY TEAM, to a known-good state; then all JTAG header pins are destroyed and covered in tamper-evident epoxy. Your computer case (e.g., seams, screws, snaps) is coated in glitter nail polish.

Your BIOS uses only authenticated bootloaders, you’ve configured it with ONLY your own signing keys, and it combines U2F, password, and biometric ID where possible. It only allows USBs to boot if they’re signed, or it fails over to a secure boot server that YOUR ORG controls.

The disks are LUKS-formatted, with backup key slots stored off-site and an on-site key that requires two personnel’s tokens and passwords to unlock (a third option is an on-site administrator cooperating with a remote administrator in a Shamir scheme… but that has its weaknesses; really it should be all three). If possible, your bootloader is BOTH encrypted AND signed (rather than just signed).
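As a sketch of the two-person unlock, here’s one way to derive a single LUKS key from two operators’ token-plus-password pairs, so neither can open the volume alone. The device path and mapper name are hypothetical, and it assumes the key slot was enrolled with the same derived value; a real deployment would prefer a proper KDF and hardware tokens over SHA-256 of typed secrets.

```python
import getpass
import hashlib
import subprocess

DEVICE = "/dev/sdb2"      # hypothetical encrypted volume
MAPPING = "securedata"    # hypothetical mapper name

def operator_secret(label: str) -> bytes:
    """Each operator contributes a hardware-token value plus a password."""
    token = getpass.getpass(f"{label} token output: ")
    password = getpass.getpass(f"{label} password: ")
    return hashlib.sha256((token + password).encode()).digest()

def derive_key() -> bytes:
    # Neither operator's half is enough on its own; the LUKS key slot must have
    # been enrolled with this same derived value.
    a = operator_secret("Operator A")
    b = operator_secret("Operator B")
    return hashlib.sha256(a + b).digest()

if __name__ == "__main__":
    key = derive_key()
    # cryptsetup reads the key material from stdin with `--key-file -`
    subprocess.run(
        ["cryptsetup", "open", "--key-file", "-", DEVICE, MAPPING],
        input=key,
        check=True,
    )
```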

You keep constant custody of your hardware, using secure document bags.

Your RFID/NFC chips use challenge-response to prevent skimming, but unless you pay for programmable storage, the vendor might have access to your private keys.

You employ SIGINT, and monitor for cellular and Wi-Fi signals on the premises. While jamming them isn’t allowed, guests check in their cellphones at the entrance and receive an SOS communication device instead. Employees either follow this as well, or power off their devices. Passive shielding pouches may be used for additional security (like those Yondr pouches, but with HF/LF-blocking fleece).

2. Software

Your OS employs U2F with Kerberos access control and rotation policies, as well as webcam/geolocation recording on login. You stream biometric keystroke data during the live session and silently lock access when the biometrics don’t match. For convenience you employ proximity bracelets (NFC/RFID), as well as webcam proximity detection, to automatically lock the device.
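A sketch of the proximity lock, assuming a Linux desktop with systemd-logind: poll whatever your badge or bracelet reader exposes and lock the session when the token goes missing. `token_present()` is a hypothetical stand-in for your actual reader API.

```python
import subprocess
import time

def token_present() -> bool:
    """Stand-in for your real NFC/BLE reader check; hypothetical, replace with
    whatever your badge or bracelet hardware exposes."""
    raise NotImplementedError

def lock_session() -> None:
    # systemd-logind locks the calling user's graphical session
    subprocess.run(["loginctl", "lock-session"], check=False)

def watch(poll_seconds: int = 5, grace_polls: int = 2) -> None:
    """Lock the screen after the token has been absent for a short grace period."""
    misses = 0
    while True:
        misses = 0 if token_present() else misses + 1
        if misses >= grace_polls:
            lock_session()
            misses = 0
        time.sleep(poll_seconds)
```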

In addition to LUKS, user directories are protected by transparent home-directory encryption. Secure secrets are unlocked by a keyring only when code needs to be signed or authorized. Remote top-secret information is kept in memory, and not written to the SSDs.
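For the keyring part, a minimal sketch using the third-party `keyring` package (which fronts the OS keyring, e.g. GNOME Keyring or the macOS Keychain): the secret is fetched only inside the signing call and never cached. The service and account names, and the HMAC stand-in for the real signing operation, are assumptions.

```python
import hashlib
import hmac
import keyring  # pip install keyring; talks to the OS keyring

SERVICE = "code-signing"   # hypothetical keyring entry
ACCOUNT = "release-bot"    # hypothetical account name

def sign(payload: bytes) -> str:
    """Pull the secret out of the keyring only for the duration of this call."""
    secret = keyring.get_password(SERVICE, ACCOUNT)
    if secret is None:
        raise RuntimeError("signing secret not present in the keyring")
    try:
        # HMAC stands in for whatever your real signing operation is
        return hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    finally:
        del secret  # don't keep a long-lived reference around
```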

Applications are backed by point-in-time, continuously versioned backups, with clients capable of connection pooling and swarm-consensus peering.

SELinux/AppArmor is used on a hardened, minimized system (maybe you’re running HardenedBSD and you know what you’re doing?). Your system uses a VPN (WireGuard), fail2ban, and honeypots. Your install chain verifies signatures attesting to the software’s authenticity from the build server.

Your system employs IDS on the DNS, package, and MTA servers. Applications trying to use another provider raise red flags. Canarytokens and modified documents provide monitoring for both external compromise and internal-agent exfiltration.
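Self-hosting the canary side of this is small enough to sketch: plant documents and fake keys that phone home to unique URLs, and alert on any hit (the hosted Canarytokens service works on the same idea). The token paths, port, and decoys named below are hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical planted tokens: unique path -> where the decoy was left
PLANTED = {
    "/c/7f3a9b": "fake_payroll.xlsx left in the finance share",
    "/c/2d41c0": "decoy AWS key in the internal wiki",
}

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        hit = PLANTED.get(self.path)
        if hit:
            # Replace with your real alerting (pager, SOC queue, ...)
            print(f"CANARY TRIPPED: {hit} from {self.client_address[0]}")
        self.send_response(404)  # look boring either way
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```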

Your AV scans on access (blocking until the scan finishes) as well as daily. You employ file-integrity checks against read-only media, and send suspect files to a trusted AV server. Your kernel enables advanced security audit logging. And yes, you employ anti-malware and work to thwart obfuscated and living-off-the-land attacks. Someone reviews why your machine is doing what it’s doing (in some sort of SOC).
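The file-integrity piece is essentially what AIDE or Tripwire do; a toy version is easy to sketch. The baseline file is assumed to live on read-only media, and the paths are whatever you decide to watch.

```python
import hashlib
import json
import os

def hash_tree(root: str) -> dict[str, str]:
    """SHA-256 of every file under `root`."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digests[path] = h.hexdigest()
    return digests

def compare(baseline_path: str, root: str) -> list[str]:
    """Baseline lives on read-only media; anything added, removed, or changed is reported."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    current = hash_tree(root)
    changed = [p for p in current if baseline.get(p) != current[p]]
    removed = [p for p in baseline if p not in current]
    return changed + removed
```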

You employ no-read-up and no-write-down restrictions. All systems use scripts that pass an approval process, from testing onward, to be able to operate independently, an approval that takes a two-man rule. Normal separation of roles applies.
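A toy illustration of those restrictions in the Bell-LaPadula style, with made-up label names: subjects may read at or below their clearance and write at or above it.

```python
# Toy Bell-LaPadula-style check: "no read up", "no write down". Labels are made up.
LEVELS = {"public": 0, "internal": 1, "secret": 2, "top-secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "internal")          # reading down is fine
assert not can_read("internal", "secret")      # no read up
assert not can_write("top-secret", "public")   # no write down
```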

Your system employs constant backups, vulnerability scanning, & phishing simulations. You’re mean to your code/cluster, and constantly test ways into it. You’ve disclosed your security posture (and boy, it’s an expensive one), and you still don’t give out all the details (randomized guard rotations, canaries, honeypots).

For some operations, the OS employs a TPM, or even RAM-mounted files for secure access (for example, your datacenter, where two Shamir secrets are combined and applied in a token).

You fund research into the software systems you use (like finding attacks on BIOS based LUKS).

For 3rd-party sites that possibly collect metrics on your users, you’ve sworn off physical access and use a headless browser to swap API keys and reset passwords (this might work well enough for Cloudflare (geolocation data/access logs), bank accounts, and payroll sites (employee SSNs/addresses))… oh, and you have NDAs with them requiring breach disclosure, and you attempt phishing through your vendors to see whether your vendor accounts can be compromised.
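A hedged sketch of the headless-browser rotation using Playwright (assumed installed via `pip install playwright` plus `playwright install chromium`). The URL, selectors, and environment variable names are entirely hypothetical; every vendor dashboard is different, and this kind of automation breaks whenever the vendor redesigns its pages.

```python
import os
from playwright.sync_api import sync_playwright

def rotate_vendor_password(new_password: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://vendor.example.com/login")               # hypothetical URL
        page.fill("#username", os.environ["VENDOR_USER"])           # hypothetical selectors
        page.fill("#password", os.environ["VENDOR_OLD_PASSWORD"])
        page.click("#login-button")
        page.goto("https://vendor.example.com/settings/password")   # hypothetical URL
        page.fill("#current-password", os.environ["VENDOR_OLD_PASSWORD"])
        page.fill("#new-password", new_password)
        page.click("#save")
        browser.close()
```

The point isn’t the specific selectors; it’s that rotation becomes a scheduled, logged job instead of something one admin does by hand with a password they get to remember.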

And yes, you use good login ops (U2F, 2FA, recovery codes, password managers) wherever possible.

Really, you can’t expect your vendors to be playing at the same level you are.

3. Facilities

You’ve built (or moved into) a secure facility with a 100 ft standoff distance (against bomb threats and EMSEC attacks). It employs bollards, rotating wedges for stopping power, and choke points, as well as bulletproof guard stations (before the parking lot), and mirrors to check the undersides of vehicles.

There’s a bulletproof front desk in a lobby you need to be buzzed into. People pass through a metal detector before reaching the lobby, and pass through an asset-metal detector or short-wave scanner before entering the office. The office includes a mantrap.

You’ve included a contained air supply, fire extinguishers, and first aid. Power can be supplied from generators/battery banks. You keep backup keys in a secured location.

You’ve moved away from having people work from home. If they do, they have an armed guard (and possibly a 35k K9 or a second armed guard) and a self-destruct panic button. For very specific and vulnerable cases, you employ executive bodyguards and armed couriers. If they’re intercepted, the keys to the data they transport are destroyed.

Employees are trained in self-defense, disaster response, counterintelligence, and coercion resistance. They know how to handle threats to themselves, and they act as eyes for your organization to prevent intrusion.

When facilities are built, they’re adversarially protected and inspected, the same as your hardware. Multiple independent geographic regions verify that the other regions have been built correctly to withstand tampering.

4. People

No lone work is allowed in the office. Dual tokens are needed for the machines, and authentication terminals are placed too far apart for a single person to operate them.

You’ve isolated the manufacture, operation, authorization, recording, custody, and audit of your systems. Your system employs sequential separation (two signatures of approval), individual separation, spatial separation, and factorial separation, to the best degree possible.
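A sketch of the two-signatures-of-approval part, using Ed25519 via the third-party `cryptography` package: an action proceeds only when two distinct enrolled approvers have signed the same request. The approver registry is hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical registry: approver name -> enrolled public key
APPROVERS: dict[str, Ed25519PublicKey] = {}

def approved(request: bytes, signatures: dict[str, bytes]) -> bool:
    """Count how many distinct enrolled approvers have validly signed the request."""
    valid = set()
    for name, sig in signatures.items():
        key = APPROVERS.get(name)
        if key is None:
            continue
        try:
            key.verify(sig, request)
            valid.add(name)
        except InvalidSignature:
            pass
    return len(valid) >= 2  # two-man rule: one approver is never enough
```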

Your security guards don’t have rack keys, and the guy who flashes stuff on the rack doesn’t have the signing key.

Full root access is only present at hardware intake, and thus the flashing of the BIOS and software is overseen by two independently protected parties (i.e., either guard will prevent the other guard from interfering). The device remains in an observed, mutually adversarially protected state (like ballots) until decommissioning.

Emergency root access, if it must be implemented, will only be possible if it successfully raises an alarm at at least one other geographically separate, governmentally/organizationally independent location.

Everyone is trained to know that even if it’s their superior asking, they’re not allowed to grant access to a system (see military datacenter pentesting).

Where two-man systems are used, they operate as Shamir groups (M of N) that can tolerate the death of a party. Automated systems, upon duress, will invalidate keys and re-key the Shamir parameters as needed. Knowing that the software operating on the final Shamir group will have full access to the token means it must be highly protected in its final state (i.e., the machine that takes authorization from 3 admins and performs the actual signature on a system patch). In addition to Shamir credentials on a secure server, secured credentials on the users’ machines (public keys propagated and invalidated automatically) are also required (bitcoin-style multi-sig).
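For reference, the M-of-N part fits in a few lines; this is a toy Shamir split/combine over a prime field, for illustration only (use a reviewed library and hardware-backed shares in production).

```python
import secrets

PRIME = 2**127 - 1  # Mersenne prime; the secret must be smaller than this

def split(secret: int, threshold: int, shares: int) -> list[tuple[int, int]]:
    """Create `shares` points; any `threshold` of them recover the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, shares + 1)]

def combine(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _yj) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, threshold=3, shares=5)
assert combine(shares[:3]) == 123456789   # any 3 of the 5 admins suffice
assert combine(shares[1:4]) == 123456789
```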

Every person knows that an independent group of people has read access to the logs of everything they’ve ever done. Every person in a physical location knows there’s a gun protecting him, and a gun watching to make sure he doesn’t stab the agency in the back.

You’ve considered counter-HUMINT, weighing betrayal due to Money, Ideology, Compromise/Coercion, and Ego (MICE). You employ several systems of offensive counterintelligence/espionage.

Important people receive a body guard rotation (you see, this is getting a bit excessive).

In the event a breach is found, potentially unshielded data (e.g., log leaks) should automatically be given higher restrictions and stricter ACLs. Given that any employee of the corp working on that patch will know it’s in a vulnerable state, heightened security should be the default.

In addition, your independent auditing party should be as secure, and as competent, as you are. They should know when the changes you propose are bullshit.

The auditing party would have access, under the NDA, to the specific details of an attack, but as discussed in the Bonus section, you should disclose to your non-tech-savvy users when you expect your tech-savvy observers to know of a vuln you’re racing to patch. Vague alerts (like incorrect username OR password) are your friend here, as are solid whistleblowing audit servers.
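The vague-alert idea is simple enough to show: the login path returns one generic message whether the username or the password was wrong, so an attacker probing for valid accounts learns nothing. The user store, static salt, and fast hash below are all placeholders; a real system would use per-user salts and a slow password hash like Argon2.

```python
import hashlib
import hmac

USERS = {  # hypothetical store: username -> salted password hash
    "alice": hashlib.sha256(b"salt" + b"correct horse").hexdigest(),
}

GENERIC_ERROR = "Incorrect username or password."

def login(username: str, password: str) -> str:
    # Use a dummy hash for unknown users so the work done is similar either way.
    expected = USERS.get(username, "0" * 64)
    provided = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    if username in USERS and hmac.compare_digest(expected, provided):
        return "OK"
    return GENERIC_ERROR  # same answer for wrong user and wrong password
```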

Conclusion

Security is tough. Employing defense in depth, and regularly testing your blue team (“the goal isn’t to look like a badass, but to get caught” - Deviant Ollam), helps in this regard.

At the end of the day, you want intrusions to be slow, noisy, difficult to plan, and embarrassingly public. In addition, you’ll want to log enough information that an insider will believe they will be identified and caught. You’ll want multiple systems to overlap and catch an intruder in more than one way.

But even this doesn’t cover the other elements to the problem.

  • How do your users know they’re talking to the real bunker-encased team?
  • You don’t have electron-microscope access into your chips, or NIST-scale budgets for evaluating crypto
  • Your company might still be evil (i.e., installing spyware, selling user data, using DRM, engaging in price discrimination, financially censoring porn, obeying authoritarians…)
  • Your systems might be secure, but are you respecting the user’s power of choice? (Think right to repair, free software, open source everything, cipherpunk, radical transparency, time well spent, and redecentralize, among others)
  • You’re not saying anything about the business viability, accessibility, security UX, or maintainability of your system, not to mention the PR of it all
  • Body snatchers and coercion will always come, if not for your employee who’s guarded by a bodyguard as he sleeps, then for the bodyguard when he has to sleep

Really, all this buys is further standoff distance. If you’re not always at DEFCON 2, then something’s gonna get you.

And maybe you don’t have the money to implement a system like that.

Really, the best you can do is to make a commitment on some of it.

How do you even sleep at night, anyway?

Bonus

Per Kerckhoffs’s principle, we understand that any attacker must be assumed to be an insider. They won’t necessarily follow the same attack patterns as someone performing initial discovery of an org (i.e., they won’t fall for breadcrumbs).

That’s not to say that you can’t use both. Spam-prevention and anti-SEO-abuse techniques often employ a mix of disclosed and undisclosed defenses. Both approaches should be used to minimize 1) the predictability of all detection measures and 2) reliance on the system’s internals staying private. Regardless, if there are limitations you don’t address, your customers need to know.

Publicly explaining how you keep your systems secure opens you to criticism and allows you to maintain accountability with customers. This is balanced with a 24/7 read-only auditor who knows if something’s happened to your system (internal actor, breach, security vulnerability), won’t let you hide or exploit it, and will force you to vaguely acknowledge it and repair the system. Similar to warrant canaries, it should be hard to prevent patching a system once it’s been whistle-blown within your internal NDA.

This comes bundled with all sorts of problems around publicly testing a patch (someone will reverse the patch), so it’s ethical to explain the vulnerability and its impact to your non-tech-savvy customers at the same time you’re reasonably certain an attacker has full knowledge of it.

Security is never perfect, and you need to be honest with your customers about how much you care. Like an SLA, you need specific, measurable goals, and ways to measure and prove you’re meeting them, or would be, in the event of a compromise.

In the event of a breach, providing proof that you aimed for top-notch security and did everything in your power shows good openness and realistic expectations.

If possible, you’d force mitigations to take place down your user chain (i.e., as the vendor of a product, force users to upgrade the software), in addition to taking full responsibility for finding accommodations (identity theft protection, backups, AV)… then again, this might be getting carried away.

In any event, the amount of information any one person has should include false breadcrumbs, and the things they control should be limited and logged.

Coercion-proof security is far too expensive to be in scope for businesses with limited funds. Deterring attackers unwilling to be coercive… that might be within your bounds.

Securing your barracks with armed guards? That’s probably only relevant if what you’re guarding is part of a triad.

Convincing your bank to enable U2F? Good luck.