Cambridge Cyber Conference

Yesterday, I had the privilege of attending the Cambridge Cyber Conference, put on by CNBC and the Aspen Institute and held at the Edward M. Kennedy Institute for the US Senate. It certainly exceeded my expectations.

I don’t think I want to type up all the copious notes I took, but here are nine key takeaways:

Cyber is a noun now

This always makes me squirm. But when industry CISOs and the White House Cybersecurity guru use “cyber” as a noun, I guess I can’t drag my heels and oppose it anymore.

The government is still uneasy with encryption

Several officials from government agencies talked about encryption. They agree that it’s essential to our security and privacy, and they say they advocate its use. They also say they don’t advocate for backdoors.

What they do want, though, is a way for companies to be able to bypass encryption with a warrant. I’m not sure how bypassable encryption isn’t, by definition, a backdoor. They complain, not entirely unreasonably, about “warrant-proof encryption,” which they argue has no analogue in the hardware world. (A safe can be drilled, or even blasted with dynamite.)

Nation-state actors are a growing threat

I don’t know the source, but it was stated that North Korea has stolen over a billion dollars through exploits. Russia has been influencing other countries’ elections online. And then there’s China…

Security threats no longer come from dudes sitting in their basement. National governments—including our own—have teams dedicated to offensive cyber. (See, it’s a noun!) While there’s been a call for something like a “Cyber Geneva Convention” to set boundaries—such as not targeting individuals or companies—none exists yet.

Almost by definition, these adversaries are quite well-financed and organized.

The “crown jewels” aren’t always the target

North Korea’s attack against Sony over The Interview caught a lot of people by surprise. We’re used to attacks being against the obvious targets, and the ones we guard carefully.

Leaking emails has become a common tactic, too. Just look at Hillary or Macron—or, again, Sony.

And remember the Target breach? They got in through the Internet-connected HVAC system.

You need a plan!

Rod Rosenstein, Deputy Attorney General at the DOJ, gave a short speech before lunch. Talking about the threat of cyberattacks, he remarked: “If you think it won’t happen to you, you are probably wrong.”

Of course you should seek to do everything you can to prevent an attack in the first place. But speaker after speaker emphasized the importance of planning your response to a compromise ahead of time.

What was interesting to me is that the plan is not a particularly technical thing. Of course it should involve forensics and then audits to ensure whatever was exploited is fixed throughout your company. But the plan is arguably a Crisis Management function, not a technical runbook. It’s critical that it outline communications—both to the media and between departments.

Equifax came up again in this discussion. While their breach was enormous and arguably irresponsible, what set it apart was that their response to the breach was utterly incompetent. Executives sold off stock; public statements came too late; sleazy releases of liability were forced upon those who tried to find out if they had been compromised. If they had had a plan in place that did things like freeze stock trading and put top executives in charge of prompt, honest communication, the incident would have been less of a dumpster fire.

You need to follow the plan!

An executive from Booz Allen Hamilton talked about working with clients on drills and rehearsals. An astonishing number of them, under pressure from the incident, completely failed to follow their plan. He likened it to youth versus professional soccer: at the youth games, all the players would chase the ball wherever it went, which strikes me as a really apt analogy. What’s needed, of course, is to get to the level of the pros, in which everyone knows their role and follows it.

The way to get there isn’t “if something happens, remember the plan!” It’s something that, as myriad speakers emphasized, comes only from practice. Across industries and functions, everyone insisted that companies should regularly practice their incident response.

We need a societal shift, not security policy

“Societal shift” may not be the right phrase, but security can’t remain just an IT policy. It needs to be something everyone in the company is aware of and committed to.

Someone at my small lunch session compared this to company policies on sexual harassment or diversity training. A company should have a sexual harassment policy, sure, but it doesn’t mean much on its own as a written document. The way we moved the needle there was to make awareness of sexual harassment and how to prevent it absolutely mandatory, and violation unacceptable.

Having security policies is only a small step in the right direction. You need to make it a part of your culture before it will be successful.

The government wants to collaborate with industry

As an individual, and perhaps a naïve one, I figured that, aside from enormous exploits, reporting cybercrime to law enforcement was pretty useless.

It was pointed out that about 80% of our infrastructure—things like the power grid or telecom—is run by private industry. The government has substantial resources they can bring to bear, and they implored us to bring them in. Whether it’s sharing knowledge of prevention or having law enforcement investigate attacks, they want to work with us.

Basic “hygiene” remains important

Equifax’s breach is thought to have exploited a software vulnerability months after the fix was released. Microsoft still struggles to get everyone to accept the security fixes it tries to push out. These sorts of “We fixed that ages ago!” exploits continue to happen with alarming frequency.

A major financial firm talked about their campaign of red-teamed phishing emails. Initially, something like half of recipients fell for it. Education was put in place, and the campaign was repeated periodically. The number of employees clicking got down to about 4%.

But 4% is still unacceptably high. In a massive company, that’s still thousands of employees putting the company at risk. (They apparently ran with the “putting the company at risk” bit, and made clear that they’re willing to start letting go of employees who repeatedly fail to take the most basic of security precautions.)

Quick-start with Gluster on AWS

I wanted to play around with Gluster a bit, and EC2 has gotten cheap enough that it makes sense to spin up a few instances. My goal is simple: set up Gluster running on two servers in different regions, and see how everything works between them. This is in no way a production-ready guide, or even necessarily good practice. But I found the official guides lacking and confusing. (For reference, they have a Really, Really Quick Start Guide and also one tailored to EC2; both took some tweaking.) Here’s what I did:

  • Start two EC2 instances. I used “Amazon Linux” on a t2.micro, and started one each in Sydney and Oregon. (Using different regions is in no way required; I’m doing that because I’m specifically curious how it will behave in that case.)
  • Configure the security groups from the outset. Every node needs access to every other node on the following ports (this was different for older versions; a CLI sketch follows the list):
    • TCP and UDP 111 (portmap)
    • TCP 49152
    • TCP 24007-24008
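If you’d rather script the security group rules than click through the console, something like the following should work with the AWS CLI. This is only a rough sketch, not what I actually ran: the group ID is a placeholder, and in practice you’d restrict the source to the other node’s address or security group rather than 0.0.0.0/0.
# Rough sketch (not from my actual session): open the Gluster ports on a security group.
# sg-12345678 is a placeholder; restrict --cidr to the peer's address in real use.
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 111 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol udp --port 111 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 49152 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 24007-24008 --cidr 0.0.0.0/0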
  • Create a 5GB (or whatever you like, really) EBS volume for each instance; attach them. This will be our ‘brick’ that Gluster uses.
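Creating and attaching that volume can also be scripted. Here’s a rough AWS CLI sketch, where the availability zone, volume ID, and instance ID are placeholders to swap for your own:
# Rough sketch: create a 5GB EBS volume and attach it to the instance as /dev/sdf.
# The zone, volume ID, and instance ID below are placeholders.
aws ec2 create-volume --size 5 --availability-zone us-west-2a
aws ec2 attach-volume --volume-id vol-12345678 --instance-id i-12345678 --device /dev/sdf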
  • Pop this in /etc/yum.repos.d/glusterfs-epel.repo:
# Place this file in your /etc/yum.repos.d/ directory

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=0
  • sudo yum install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server. This should pull in the necessary dependencies.
  • Now, set up those volumes:
    • sudo fdisk /dev/sdf (or whatever it was attached as); create a partition spanning the disk
    • Create a filesystem on it; I used sudo mkfs.ext4 /dev/sdf1 for now
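For what it’s worth, those two steps can be scripted rather than done interactively. An approximate sketch (I went through fdisk by hand; note also that on some instance types the disk may show up as /dev/xvdf rather than /dev/sdf):
# Approximate sketch: one partition spanning the disk, then ext4 on it.
# The piped answers drive fdisk: new (n), primary (p), partition 1, default start/end, write (w).
echo -e "n\np\n1\n\n\nw" | sudo fdisk /dev/sdf
sudo mkfs.ext4 /dev/sdf1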
  • Create a mountpoint; mount
sudo mkdir -p /exports/sdf1
sudo mount /dev/sdf1 /exports/sdf1
sudo mkdir -p /exports/sdf1/brick
  • Edit /etc/fstab and add the appropriate line, like:
/dev/sdf1   /exports/sdf1 ext4  defaults        0   0
  • Start gluster on each node; sudo service glusterd start
  • Peer probing… This tripped me up big time. The only way I got this to work was by creating fake hostnames for each box in /etc/hosts. I used gluster01 and gluster02 for names. /etc/hosts mapped gluster01 to 127.0.0.1 on gluster01, and gluster02 to 127.0.0.1 on gluster02. Then, from one node (it doesn’t matter which), probe the other by the hostname you just set up, as sketched below. You don’t need to repeat this from the other host; they’ll see each other.
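To make that concrete, here’s roughly what I believe it takes. The IP below is a placeholder, and I’m assuming each node’s /etc/hosts also maps the other node’s name to an address it can actually reach; otherwise the probe has nothing to resolve:
# /etc/hosts on gluster01 (sketch; 203.0.113.20 stands in for gluster02's reachable IP):
#   127.0.0.1      gluster01
#   203.0.113.20   gluster02
# Then, from gluster01 only:
sudo gluster peer probe gluster02
sudo gluster peer status   # should list gluster02 as a connected peer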
  • Create the volume with a replica count of 2 (one copy per node), on one of them:
sudo gluster volume create test1 rep 2 gluster01:/exports/sdf1/brick gluster02:/exports/sdf1/brick

This will fail miserably if you didn’t get the hostname thing right. You can’t do it by public IP, and you can’t directly use localhost. If it works right, you’ll see “volume create: test1: success: please start the volume to access data”. So, let’s do that.

  • sudo gluster volume start test1 (you can then inspect it with sudo gluster volume status)
  • Now, mount it. On each box, sudo mkdir /mnt/storage. Then, on each box, mount it with a reference to one of the Gluster nodes: sudo mount -t glusterfs gluster01:test1 /mnt/storage (you could use gluster01:test1 or gluster02:test1; either will find the right volume). This may take a bit if it’s going across oceans.
  • cd into /mnt/storage, create a file, and see that it appears on the other. Magic! (A minimal check is sketched below.)
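To spell out that last check, something like this works (the filename is arbitrary):
# On gluster01:
echo "hello from gluster01" | sudo tee /mnt/storage/hello.txt
# On gluster02, shortly after:
cat /mnt/storage/hello.txt   # should print the same line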

Please keep in mind that this was the bare minimum for a cobbled-together test, and is surely not a good production setup.

Also, replicating Gluster between Sydney and Oregon is horribly slow. Don’t do that! Even when it’s not across continents, Gluster doesn’t do well across a WAN.

TripleO / Ironic Meetup in Sunnyvale

Last week a group of developers working on the OpenStack projects TripleO and Ironic convened in the sunny vale of Sunnyvale for a mid-cycle meetup.

Yahoo! Sunnyvale Campus

My focus was primarily on Ironic, though lots of discussion about TripleO happened. (Here is some Tuskar documentation, for example.) I thought it would be worthwhile to quickly summarize my experiences:

  • About 40 people turned out, including some really bright folks from HP, Yahoo!, Mirantis, Rackspace, and Red Hat. (And surely some others that I’ve temporarily forgotten—sorry!) Just meeting everyone I’ve been working with online was pretty valuable.
  • A whole ton of patches got rapidly tested and merged, since sitting in the same room instead of being on separate continents made it much more efficient. In fact, a lot of patches got written and merged.
  • We hit feature freeze Tuesday. On Monday, -2’s were given to bigger patches to ensure that we had time to review everything. The -2 will be lifted once development for Juno opens up. Some of the things bumped include:
  • Because of feature freeze across projects, the Ironic driver for Nova was temporarily copied into Ironic’s source tree so we can work on it there.
  • As described in the same email linked above, a lot of work went into extending CI coverage for Ironic, though it hasn’t yet landed. This test integration will be necessary to graduate from incubation.
  • We also identified end-user documentation as an important task, one which is both required to graduate incubation and as something that can be done during feature freeze in addition to bugfixes. This Etherpad tries to outline what’s required.
  • A lot of whiteboarding was done around a ramdisk agent for Ironic. The idea is that nodes can boot up a richer agent to support more advanced configuration and lifecycle management. The link here goes to the introduction of a pretty interesting thread.