Cambridge Cyber Conference

Yesterday, I had the privilege of attending the Aspen Institute’s Cambridge Cyber Conference. It was put on by CNBC and the Aspen Institute, and was held at the Edward M. Kennedy Institute for the US Senate. It certainly exceeded my expectations.

I don’t think I want to type up all the copious notes I took, but here are nine key takeaways:

Cyber is a noun now

This always makes me squirm. But when industry CISOs and the White House Cybersecurity guru use “cyber” as a noun, I guess I can’t drag my heels and oppose it anymore.

The government is still uneasy with encryption

Several officials from government agencies talked about encryption. They agree that it’s essential to our security and privacy, and they say they advocate its use. They also insist they don’t advocate for backdoors.

What they do want, though, is a way for companies to be able to bypass encryption with a warrant. I’m not sure how bypassable encryption isn’t, by definition, a backdoor. They complain, not entirely unreasonably, about “warrant-proof encryption,” which they argue has no analogue in the hardware world. (A safe can be drilled, or even blasted with dynamite.)

Nation-state actors are a growing threat

I don’t know the source, but it was stated that North Korea has stolen over a billion dollars through exploits. Russia has been influencing other countries’ elections online. And then there’s China…

Security threats no longer come from dudes sitting in their basement. National governments—including our own—have teams dedicated to offensive cyber. (See, it’s a noun!) While there’s been a call for something like a “Cyber Geneva Convention” to set boundaries—such as not targeting individuals or companies—none exists yet.

Almost by definition, these adversaries are quite well-financed and organized.

The “crown jewels” aren’t always the target

North Korea’s attack against Sony over The Interview caught a lot of people by surprise. We’re used to attacks being against the obvious targets, and the ones we guard carefully.

Leaking emails has become a common tactic, too. Just look at Hillary or Macron—or, again, Sony.

And remember the Target breach? The attackers got in through an Internet-connected HVAC system.

You need a plan!

Rod Rosenstein, Deputy Attorney General at the DOJ, gave a short speech before lunch. Talking about the threat of cyberattacks, he remarked: “If you think it won’t happen to you, you are probably wrong.”

Of course you should do everything you can to prevent an attack in the first place. But speaker after speaker emphasized the importance of planning your response to a compromise ahead of time.

What was interesting to me is that the plan is not a particularly technical thing. Of course it should involve forensics and then audits to ensure whatever was exploited is fixed throughout your company. But the plan is arguably a Crisis Management function, not a technical runbook. It’s critical that it outline communications—both to the media and between departments.

Equifax came up again in this discussion. While their breach was enormous and arguably irresponsible, what set it apart was that their response to the breach was utterly incompetent. Executives sold off stock; public statements came too late; sleazy releases of liability were forced upon those who tried to find out if they had been compromised. If they had had a plan in place that did things like freeze stock trading, and get top executives in charge of prompt and honest communication, the incident would have been less of a dumpster fire.

You need to follow the plan!

An executive from Booz Allen Hamilton talked about working with clients on drills and rehearsals. An astonishing number of them, under pressure from the incident, completely failed to follow their plan. He likened it to youth versus professional soccer: at the youth games, all the players would chase the ball wherever it went, which strikes me as a really apt analogy. What’s needed, of course, is to get to the level of the pros, in which everyone knows their role and follows it.

The way to get there isn’t “if something happens, remember the plan!” It’s, as myriad speakers emphasized, something acquired through practice. Across industries, across functions, everyone insists that companies should regularly practice their incident response.

We need a societal shift, not security policy

“Societal shift” may not be the right term, but security can’t remain just an IT policy. It needs to be something everyone in the company is aware of and committed to.

Someone at my small lunch session compared this to company policies on sexual harassment or diversity training. A company should have a sexual harassment policy, sure, but it doesn’t mean much on its own as a written document. The way we moved the needle there was to make awareness of sexual harassment and how to prevent it absolutely mandatory, and violation unacceptable.

Having security policies is only a small step in the right direction. You need to make it a part of your culture before it will be successful.

The government wants to collaborate with industry

As an individual, and perhaps a naïve one, I had figured that reporting cybercrime to law enforcement was pretty useless, except for enormous exploits.

It was pointed out that about 80% of our infrastructure—things like the power grid or telecom—is run by private industry. The government has substantial resources they can bring to bear, and they implored us to bring them in. Whether it’s sharing knowledge of prevention or having law enforcement investigate attacks, they want to work with us.

Basic “hygiene” remains important

Equifax’s breach is thought to have exploited a software vulnerability months after the fix was released. Microsoft still struggles to get everyone to accept the security fixes they try to push out. These sorts of “We fixed that ages ago!” exploits continue to happen with alarming frequency.

A major financial firm talked about their campaign of red-teamed phishing emails. Initially, something like half of recipients fell for it. Education was put in place, and the campaign was repeated periodically. The number of employees clicking got down to about 4%.

But 4% is still unacceptably high. In a massive company, that’s still thousands of employees putting the company at risk. (They apparently ran with the “putting the company at risk” bit, and made clear that they’re willing to start letting go of employees who repeatedly fail to take the most basic of security precautions.)

macOS / OS X disk compression

While reading about App Nap and the fun tricks Apple employs to improve battery life, I happened across something I never knew: the HFS+ filesystem supports transparent compression. It seems Apple intended this for shipping system files, not for user files: it’s virtually impossible to even figure out whether it’s enabled, much less find a checkbox to simply turn it on for the filesystem.

But, enter afsctool. (Also available through Homebrew.) Wanting to try it out, I for some reason decided to try to compress the 1.5GB git repo I spend most of my time in at work. Let’s look at compression stats (-v, indirectly) for PHP files (-t php) and then the whole directory:

$ afsctool -vt php .
/Users/Matthew.Wagner/workrepo/.:

File content type: public.php-script
File extension(s): php
Number of HFS+ compressed files: 4895
Total number of files: 4904
File(s) size (uncompressed; reported size by Mac OS 10.6+ Finder): 22705078 bytes / 34.8 MB (megabytes) / 33.2 MiB (mebibytes)
File(s) size (compressed - decmpfs xattr; reported size by Mac OS 10.0-10.5 Finder): 1616066 bytes / 2.1 MB (megabytes) / 2 MiB (mebibytes)
File(s) size (compressed): 5948389 bytes / 6.4 MB (megabytes) / 6.1 MiB (mebibytes)
Compression savings: 73.8%
Approximate total file(s) size (files + file overhead): 9050906 bytes / 9.1 MB (megabytes) / 8.6 MiB (mebibytes)

Number of HFS+ compressed files: 17951
Total number of files: 21797
Total number of folders: 3706
Total number of items (number of files + number of folders): 25503
Folder size (uncompressed; reported size by Mac OS 10.6+ Finder): 1697530893 bytes / 1.75 GB (gigabytes) / 1.63 GiB (gibibytes)
Folder size (compressed - decmpfs xattr; reported size by Mac OS 10.0-10.5 Finder): 1615203435 bytes / 1.62 GB (gigabytes) / 1.51 GiB (gibibytes)
Folder size (compressed): 1628829083 bytes / 1.64 GB (gigabytes) / 1.52 GiB (gibibytes)
Compression savings: 4.0%
Approximate total folder size (files + file overhead + folder overhead): 1648262029 bytes / 1.65 GB (gigabytes) / 1.54 GiB (gibibytes)

The results really shouldn’t be surprising. The .git/ files are already stored compressed, so there’s not much to be gained there, hence an overall savings of only 4% in the repo. PHP files averaged a 73.8% reduction in size thanks to compression… saving me approximately 30MB on a 512GB disk. Hardly worthwhile, and I have to imagine this is going to come bite me down the road. (Why would I even think that compressing stuff I use hundreds of times a day was a good idea?!) afsctool -d will decompress a directory, though, so, assuming you don’t corrupt anything, it’s easy enough to roll back things you compressed for no good reason. (You could also use something like afsctool -c -s10 dirname/ to skip files unless they can be compressed by more than 10%, to avoid doing what I did and “compressing” already-compressed files.)

As I mentioned at the beginning, I see no way to enable this for newly created files. You can compress the contents of an existing directory, but new files written there won’t benefit. It’s possible I haven’t found it yet, but it really feels like compression was intended for Apple’s use at install time.
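For what it’s worth, if you want to spot-check whether an individual file ended up compressed, BSD ls on macOS can show file flags with -O, and my understanding is that HFS+ compressed files carry a compressed flag. The filename, sizes, and dates below are made up for illustration:

```
$ ls -lO some_file.php
-rw-r--r--  1 matt  staff  compressed  4096 Oct 31 12:00 some_file.php
```

Running afsctool -v against a single file gives a fuller per-file report, along the lines of the listings above.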

With that in mind, here was a perhaps-more-reasonable thing to do:

# afsctool -s 10 -vc /Applications/Adobe\ Photoshop\ CC\ 2015
/Applications/Adobe Photoshop CC 2015:
Number of HFS+ compressed files: 7286
Total number of files: 8218
Total number of folders: 2499
Total number of items (number of files + number of folders): 10717
Folder size (uncompressed; reported size by Mac OS 10.6+ Finder): 1887395578 bytes / 1.91 GB (gigabytes) / 1.78 GiB (gibibytes)
Folder size (compressed - decmpfs xattr; reported size by Mac OS 10.0-10.5 Finder): 1030642518 bytes / 1.04 GB (gigabytes) / 991.6 MiB (mebibytes)
Folder size (compressed): 1035466738 bytes / 1.04 GB (gigabytes) / 996.2 MiB (mebibytes)
Compression savings: 45.1%
Approximate total folder size (files + file overhead + folder overhead): 1049476316 bytes / 1.05 GB (gigabytes) / 1000.9 MiB (mebibytes)

I shrank Photoshop by 45%. Note that some things, like Apple’s iMovie, are already compressed. (Also: why is iMovie on my work laptop?!) It seems like most non-Apple applications are uncompressed, and on average drop somewhere close to 50% in size.

Of course, it’s very clear that, armed with this new hammer, every directory is a nail. Compressing rarely-used applications like Photoshop is probably reasonable; attempting to compress my working git repo was clearly not. Still, a neat tool for your toolbox.

5 indispensable bash tricks

Don’t mind the lame Buzzfeed title… Here are a few handy bash tricks and tips that people either use every day or never knew existed. Hopefully I can help move some of you into the first camp!

Introductory notes

A few of these commands involve working with bash history. On the advice of a coworker, I dropped this in my .bashrc to keep tons of history:

HISTSIZE=100000 # keep 100k commands in a session history (memory)
HISTFILESIZE=200000 # store 200k commands in my history file (on disk)

Disk space is cheap, as is memory. The number of times (prior to this change) that I wanted a command that had aged out of my bash history is much greater than the number of times I’ve found bash cumbersome because my history file is almost 1MB in size (when I have a 500GB SSD and 16GB RAM in my 2-year-old laptop).
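While you’re in there, a few related history settings play nicely with the above. These are standard bash options; treat this as a sketch of what I find useful, not a prescription:

```shell
# Append to the history file on exit instead of overwriting it,
# so multiple terminals don't clobber each other's history
shopt -s histappend

# Don't record consecutive duplicates, or commands that start with a space
HISTCONTROL=ignoredups:ignorespace

# Timestamp entries; `history` output becomes far easier to interpret later
HISTTIMEFORMAT='%F %T  '
```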

Meta key

A number of bash commands reference a Meta key. In general, on a Mac, the Escape key fills that role. On Linux, it’s generally the Alt key. You can change that, but if you’ve done so, you don’t need me to tell you about it. My examples will use Esc for these commands, but if you’re on a Linux box, you will likely want to substitute Alt.

Esc-. | Insert last argument

Described in the docs as insert-last-argument (M-., M-_), this keyboard shortcut will spit out the last argument to the previous command.

Example usage:

$ mkdir -p long/directory/name/that/would_suck_to/type
$ cd Esc .

The Esc + . will be expanded into long/directory/name/that/would_suck_to/type.

Note that Esc + _ is bound to the same function, but is a bit tougher to type.

Ctrl+R | Reverse history search

This one is tough to explain, but magical. Have you ever hit the up arrow a bunch of times to scroll through history, trying to find something you ran recently? Ctrl + r will open up an interactive search, or reverse-i-search in bash parlance.

Recently used vim on a file with a long filename? Press Ctrl + r and start typing vim. The most recent command matching vim will be shown. Keep typing to make your search more specific, or press Ctrl + r again to scroll to the next-newest match. When you find what you want, press Enter to run it, or the right arrow to start moving the cursor through the command. (Or something like Ctrl + e to jump to the end of the line.)

If you want to be really nutty, you can start adding comments to the ends of your commands. vim /etc/X11/xorg.conf # fix video settings will let a Ctrl + r search for video find it, for example. I’ve been known to throw in random keywords I think I might try looking for later on.

cd - | Return to previous directory

pushd and popd are awesome and you should use them. But sometimes you forget. bash has got your back. cd - will return you to the previous directory you were in. (This is stored in the OLDPWD environment variable.)
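A quick sketch of the round trip, using /tmp and /etc as stand-ins for wherever you actually were:

```shell
cd /tmp          # work somewhere for a while
cd /etc          # hop away; bash saves /tmp in OLDPWD
echo "$OLDPWD"   # prints /tmp
cd -             # bash prints the directory and takes you back
pwd              # prints /tmp
```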

git checkout - | Switch back to the previous branch

If you use git, you’ll be delighted to know that it does something similar. git checkout - will check out the previous branch you were on. I’m often bad at cleaning up topic branches, and will git checkout master to do some catching up, and then realize I don’t remember what my topic branch was called. Sure, it would probably take me all of 30 seconds to figure it out, but checking out - is so much easier.

!! | Re-run the previous command

!! will re-run the command you just ran. Why not just hit the up arrow? Because !! can be combined with other commands. The most common usage:

$ cat /root/whatever
Permission denied
$ sudo !!
sudo cat /root/whatever
whatever


Hope you learned something useful! What other neat tricks should I know about?

Counting open files by process

A site I host is offline, throwing the error “Too many open files.” The obvious solution would be to bounce the webserver to release all the file handles, but I wanted to figure out what was using all of them and see if I could figure out why they were leaking in the first place.

I had a few hunches, so I ran lsof -p PID against a few suspect processes. But none of them had an excessive number of files open. After a couple of minutes of guessing, I realized this was stupid, and set out to script things.

I hacked this quick-and-dirty script together:


pids = Dir.entries('/proc/').select { |p| p.match(/\A\d+\z/) }
puts "Found #{pids.size} processes..."
pfsmap = {}
pids.each do |pid|
  begin
    files = Dir.entries("/proc/#{pid}/fd").size
    # cmdline is NUL-delimited; turn the separators into spaces
    cmdline = File.read("/proc/#{pid}/cmdline").tr("\0", " ").strip
  rescue SystemCallError
    next # process exited mid-scan, or we lack permission to read it
  end
  pfsmap[pid] = {
    :files => files,
    :name => cmdline
  }
end

# Sort ascending by FD count so the worst offenders print last
pfsmap.sort_by { |_pid, info| info[:files] }.each do |pid, info|
  puts "#{info[:files]}\t#{pid}\t#{info[:name]}"
end

There’s got to be a better way to get a process list from procfs than regexp-matching the directories in /proc that are purely numeric. But I do that, and, for each process, count how many entries are in /proc/PID/fd and sort by that. So that the output isn’t just a giant mess of numbers, I also read /proc/PID/cmdline.
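For comparison, here’s roughly the same idea as a quick shell pipeline. Same caveats apply: it assumes a Linux-style /proc, and processes you can’t read just show up with a count of 0:

```shell
# For each numeric /proc entry, print "<fd count>  <pid>  <command>",
# sorted so the biggest consumers land at the bottom
for p in /proc/[0-9]*; do
  n=$(ls "$p/fd" 2>/dev/null | wc -l)
  cmd=$(tr '\0' ' ' < "$p/cmdline" 2>/dev/null)
  printf '%s\t%s\t%s\n' "$n" "${p#/proc/}" "$cmd"
done | sort -n
```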

This is hardly a polished script, but it did the job — it identified a script that was hitting the default 1024 FD limit. I was then able to lsof that and find… that they’re all UNIX sockets, so it’s anyone’s guess what they go to. So I just rolled Apache like a chump. Oh well. Maybe it’ll help someone else—or maybe someone knows of a less-ugly way to do some of this?