North Korea’s Internet Presence

The Boston Globe reports that North Korea is entirely offline.

Two quotes struck me. The first:

The country officially has 1,024 Internet protocol addresses, although the actual number may be somewhat higher. By comparison, the United States has billions of addresses.

That’s… quite few. Plenty of tiny hosting companies have more substantial netblocks.
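For scale, 1,024 addresses is what a single /22 holds (assuming the block is contiguous); the arithmetic:

```shell
# A /22 leaves 32 - 22 = 10 host bits, so it covers 2^10 addresses.
prefix=22
echo $(( 2 ** (32 - prefix) ))   # prints 1024
```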

CloudFlare, an Internet company based in San Francisco, confirmed Monday that North Korea’s Internet access was “toast.” A large number of connections had been withdrawn, “showing that the North Korean network has gone away,” Matthew Prince, CloudFlare’s founder, wrote in an email.

“Withdrawn” was interesting terminology to me, making me think that their routers had withdrawn their routes from the Internet / stopped advertising them. That could be caused by an attack, but the prefixes disappearing from the global routing table is slightly more extreme than their routers simply failing to pass traffic. So I wondered: what network(s) does North Korea have, and what happened to them? Let’s find out!

North Korea’s Address Space

This is a great page, listing the known networks assigned to North Korea. (It also contains an interesting scan of their IP space, albeit from a while ago.) According to that site, there are three netblocks:

  • (the block of 1024 IPs the article mentions), owned by North Korea
  • from China Unicom (not China Unicorn as my eyes read every time)
  • from a satellite provider

The first is the official one that they control, and the other two are delegated from other carriers’ IP space.

The main netblock is “toast”

To borrow the term from the CloudFlare quote, their main netblock is “toast.” Taking a look at various looking glasses, the network doesn’t exist in the global routing table:

  • Cogent’s looking glass: “% Network not in table”
  • HE: “None of the BGP4 routes match the display condition”
  • nLayer/GTT: “No route found.”

The other two networks are still in the routing table, but that’s unsurprising since they’re managed by other ISPs. North Korea’s main netblock has disappeared from the Internet routing tables entirely.

.kp is offline

The .kp TLD has two nameservers, and they’re both in the vanished block:

;; QUESTION SECTION:
;kp.                IN  NS

;; ANSWER SECTION:
kp.         172800  IN  NS
kp.         172800  IN  NS

;; ADDITIONAL SECTION:
            172800  IN  A
            172800  IN  A

(As an aside, I had a hard time hosting my own DNS for a .com domain because I was supposed to have two nameservers on separate /24s. Here is a TLD that doesn’t meet that requirement.)
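That /24 check is simple enough to script; two IPv4 addresses sit in the same /24 exactly when their first three octets match. (The addresses below are documentation-range examples, not the real .kp nameserver IPs.)

```shell
# Succeeds when both IPv4 addresses share a /24, i.e. everything
# before the final octet is identical.
same_slash24() {
  [ "${1%.*}" = "${2%.*}" ]
}

same_slash24 192.0.2.10 192.0.2.20 && echo "same /24"          # prints "same /24"
same_slash24 192.0.2.10 198.51.100.20 || echo "different /24s" # prints "different /24s"
```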

So, other than anything already cached, nothing in .kp can possibly resolve right now.
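Worth noting how long “already cached” can last: the NS records above carry a TTL of 172800 seconds, so a resolver that honors the full TTL could keep answering for up to two days after the network vanished:

```shell
ttl=172800                      # TTL on the kp. NS records, in seconds
echo "$(( ttl / 3600 )) hours"  # prints "48 hours"
echo "$(( ttl / 86400 )) days"  # prints "2 days"
```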

North Korean websites

As an aside, here is a list of every .kp domain I can find in existence:

  • (The website of state airline, Air Koryo)
  • (The website of the Committee for Cultural Relations with Foreign Countries)
  • (The website of the Korean Central News Agency)
  • (The website of the Pyongyang Film Festival)
  • (The official North Korean governmental portal, Naenara)
  • (The website of the Rodong Sinmun newspaper)
  • (The website of shortwave station Voice of Korea)

Descriptions, where present, come from the .kp Wikipedia page. My list comes from Wikipedia and a private crawler. (They’re not linked because none of them could resolve right now.)

This site has another list.

Quick-start with Gluster on AWS

I wanted to play around with Gluster a bit, and EC2 has gotten cheap enough that it makes sense to spin up a few instances. My goal is simple: set up Gluster running on two servers in different regions, and see how everything works between them. This is in no way a production-ready guide, or even necessarily good practice, but I found the official guides lacking and confusing. (For reference, they have a Really, Really Quick Start Guide and also one tailored to EC2. Both took some tweaking.) Here’s what I did:

  • Start two EC2 instances. I used “Amazon Linux” on a t2.micro, and started one each in Sydney and Oregon. (Using different regions is in no way required; I’m doing that because I’m specifically curious how it will behave in that case.)
  • Configure the security groups from the outset. Every node needs access to every other node on the following ports (these were different for older versions):
    • TCP and UDP 111 (portmap)
    • TCP 49152 (one port per brick, counting up from 49152)
    • TCP 24007-24008
  • Create a 5GB (or whatever you like, really) EBS volume for each instance; attach them. This will be our ‘brick’ that Gluster uses.
  • Pop this in /etc/yum.repos.d/glusterfs-epel.repo:
# Place this file in your /etc/yum.repos.d/ directory

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=0
  • sudo yum install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server. This should pull in the necessary dependencies.
  • Now, set up those volumes:
    • sudo fdisk /dev/sdf (or whatever it was attached as); create a partition spanning the disk
    • Create a filesystem on it; I used sudo mkfs.ext4 /dev/sdf1 for now
  • Create a mountpoint; mount the new filesystem on it:
sudo mkdir -p /exports/sdf1
sudo mount /dev/sdf1 /exports/sdf1
sudo mkdir -p /exports/sdf1/brick
  • Edit /etc/fstab and add the appropriate line, like:
/dev/sdf1   /exports/sdf1 ext4  defaults        0   0
  • Start gluster on each node; sudo service glusterd start
  • Peer detection… This tripped me up big time. The only way I got this to work was by creating fake hostnames for each box in /etc/hosts. I used gluster01 and gluster02 for names, with each box’s /etc/hosts mapping the names to the appropriate IPs. Then, from one node (it doesn’t matter which), probe the other by the hostname you just created: sudo gluster peer probe gluster02. You don’t need to repeat this from the other host; they’ll see each other.
  • Create the volume with replication level 2 (one copy per node), on one of them:
sudo gluster volume create test1 rep 2 gluster01:/exports/sdf1/brick gluster02:/exports/sdf1/brick

This will fail miserably if you didn’t get the hostname thing right. You can’t do it by public IP, and you can’t directly use localhost. If it works right, you’ll see “volume create: test1: success: please start the volume to access data”. So, let’s do that.

  • sudo gluster volume start test1 (you can then inspect it with sudo gluster volume status)
  • Now, mount it. On each box: sudo mkdir /mnt/storage. Then, on each box, mount it with a reference to one of the Gluster nodes: sudo mount -t glusterfs gluster01:test1 /mnt/storage (either gluster01:test1 or gluster02:test1 will find the right volume). This may take a bit if it’s going across oceans.
  • cd into /mnt/storage, create a file, and see that it appears on the other box. Magic!
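If you want to rehearse the brick-preparation steps without a real EBS volume, mkfs will happily format a file-backed image. A quick sketch (paths are arbitrary; this obviously doesn’t replace the real /dev/sdf steps above):

```shell
# Create a 32MB file to stand in for the EBS volume...
dd if=/dev/zero of=/tmp/fake-brick.img bs=1M count=32 status=none
# ...and put an ext4 filesystem directly on it (-F skips the
# "this is not a block device" confirmation prompt).
mkfs.ext4 -q -F /tmp/fake-brick.img
# 'file' should now identify it as ext4 filesystem data.
file /tmp/fake-brick.img
```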

Please keep in mind that this was the bare minimum for a cobbled-together test, and is surely not a good production setup.

Also, replicating Gluster between Sydney and Oregon is horribly slow. Don’t do that! Even when it’s not across continents, Gluster doesn’t do well across a WAN.

Lazy distro mirrors with squid

I have a problem that I think a lot of fellow developers probably have: enough computers (or virtual machines!) running the same operating system version(s) that I would benefit from a local mirror of them, but not so many systems that it’s actually reasonable for me to run a full mirror, which would entail rsyncing a bunch of content daily, much of which may be packages I would never use. And using a proxy server isn’t terribly practical: with a bunch of semi-round-robin mirrors, it’s likely that two systems would pull the same package from different mirrors, and a proxy server would have no way of knowing (ahead of time) that the two documents were actually the same.

What I wanted for a long time was a “lazy” mirror — something that would appear to my systems as a full mirror, but would act more as a proxy. When a client installed a particular version of a particular package for the first time, it would fetch the package from a “real” mirror and then cache it for a long time. Subsequent requests for the same package from my “mirror” would be served from cache. I was convinced that this was impossible to do with a proxy server. Worse, I wanted to mirror multiple repos — Fedora and CentOS and EPEL, and maybe even Ubuntu. There’s no way squid can do that.

I was wrong. squid is pretty awesome. We just pull a few tricks:

  • Instead of using squid as a traditional proxy server that listens on port 3128, use it as a reverse proxy / accelerator that listens on port 80. (This is, incidentally, what sites like Wikipedia do.)
  • Abuse (er, massage) the refresh_pattern rules to cache RPM files (etc.) for a very long time. Normally it is an awful, awful idea for proxy servers to interfere with the Cache-Control / Expires headers that sites serve. But in the case of a mirror, we know that any update to a package will necessarily bump the version number in the URL. Ergo, we can pretty safely cache RPMs indefinitely.
  • Set up name-based virtual hosting with squid, so that centos-mirror.lan and fedora-mirror.lan can point to different mirrors.

Two other important steps involve setting up cache_dir reasonably (by default, at least in the packages on CentOS 6, squid will only cache data in RAM), and bumping up maximum_object_size from the default of 4MB.

Here is the relevant section of my squid.conf. (The “irrelevant” section of my squid.conf is a bunch of acl lines that I haven’t really customized and can probably be deleted.)

# Listen on port 80, not 3128
# 'accel' tells squid that it's a reverse proxy
# 'defaultsite' sets the hostname that will be used if none is provided
# 'vhost' tells squid that it'll use name-based virtual hosting. I'm not
#   sure if this is actually needed.
http_port 80 accel defaultsite=mirror.lowell.lan vhost

# Create a disk-based cache of up to 10GB in size:
# (10000 is the size in MB. 16 and 256 seem to set how many subdirectories
#  are created, and are default values.)
cache_dir ufs /var/spool/squid 10000 16 256

# Use the LFUDA cache eviction policy -- Least Frequently Used, with
#  Dynamic Aging.
# It's more important to me to keep bigger files in cache than to keep
# more, smaller files -- I am optimizing for bandwidth savings, not latency.
cache_replacement_policy heap LFUDA

# Do unholy things with refresh_pattern.
# The top two are new lines, and probably aren't everything you would ever
# want to cache -- I don't account for VM images, .deb files, etc.
# They're cached for 129600 minutes, which is 90 days.
# refresh-ims and override-expire are described in the configuration here:
# but basically, refresh-ims makes squid check with the backend server
# when someone does a conditional get, to be cautious.
# override-expire lets us override the specified expiry time. (This is
#  illegal per the RFC, but works for our specific purposes.)
# You will probably want to tune this part.
refresh_pattern -i \.rpm$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern -i \.iso$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

# This is OH SO IMPORTANT: squid defaults to not caching objects over
# 4MB, which may be a reasonable default, but is awful behavior on our
# pseudo-mirror. Let's make it 4GB:
maximum_object_size 4096 MB

# Now, let's set up several mirrors. These work sort of like Apache
# name-based virtual hosts -- you get different content depending on
# which hostname you use in your request, even on the same IP. This lets
# us mirror more than one distro on the same machine.

# cache_peer is used here to set an upstream origin server:
#   '' is the hostname of the mirror I connect to.
#   'parent' tells squid that this is a 'parent' server, not a peer
#    '80 0' sets the HTTP port (80) and ICP port (0)
#    'no-query' stops ICP queries, which should only be used between squid servers
#    'originserver' tells squid that this is a server that originates content,
#      not another squid server.
#    'name=as6453' tags it with a name we use on the next line.
# cache_peer_domain is used for virtual hosting.
#    'as6453' is the name we set on the previous line (for cache_peer)
#    subsequent words are virtual hostnames it answers to. (This particular
#     mirror has Fedora and Debian content mirrored.) These are the hostnames
#     you set up and will use to access content.
# Taken together, these two lines tell squid that, when it gets a request for
#  content on fedora-mirror.lowell.lan or debian-mirror.lowell.lan, it should
#  route the request to and cache the result.
cache_peer parent 80 0 no-query originserver name=as6453
cache_peer_domain as6453 fedora-mirror.lowell.lan debian-mirror.lowell.lan

# Another, for CentOS:
cache_peer parent 80 0 no-query originserver name=harvard
cache_peer_domain harvard centos-mirror.lowell.lan

You will really want to customize this. The two mirrors above happen to be geographically close to me and very fast, but that might not be true for you. Check out the CentOS mirror list and Fedora mirror list to find something close by. (And perhaps fetch a file or two with wget to check speeds.) And I’m reasonably confident that you don’t have a lowell.lan domain in your home.

If you can find one mirror that has all the distros you need, you don’t need to bother with virtual hosts.

You can edit the respective repos in /etc/yum.repos.d/ to point to the hostnames you set up. Pay attention to whether your upstream mirror’s directory layout matches the URL structure the repo file expects.
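As an illustration, a stanza in a CentOS .repo file pointed at the pseudo-mirror might look like this; the baseurl path here is an assumption, and has to match wherever your chosen upstream mirror actually keeps its tree:

```ini
[base]
name=CentOS-$releasever - Base (via local squid pseudo-mirror)
baseurl=http://centos-mirror.lowell.lan/centos/$releasever/os/$basearch/
gpgcheck=1
enabled=1
```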

You can just drop the hostnames in /etc/hosts if you don’t have a home DNS server, e.g.:

 fedora-mirror.lowell.lan centos-mirror.lowell.lan

Fixing X11 forwarding request failed on channel 0

I see this error a lot:

mawagner ~ $ ssh
's password:
X11 forwarding request failed on channel 0

There are a lot of forum posts about people having this problem, but surprisingly few good answers. It took me forever to run down what was causing it in my case.

This gist is the closest I’ve seen to a big-picture look at what’s going on, but it still didn’t directly solve my problem. My solution came from this page (in French). But the short version is this: that error can be caused by a lot of things.

Here’s a list of things I would check, in order of ease:

  1. Once ssh’ed into the remote machine, is an “xauth” command available for you to run? You don’t even have to run it; does it exist at all? This was my problem, and it was fixed with yum install xorg-x11-xauth (or the equivalent for non-yum systems).
  2. Pop open /etc/ssh/sshd_config. Make sure that the X11Forwarding line is set to “yes”. Otherwise, X forwarding will absolutely not work. (You may wish to read the sshd_config manpage, as there are some security implications to enabling this—though if you’re reading this, you obviously want to use this feature.)
  3. Others have reported needing to change X11UseLocalhost to no, but I haven’t found this to be necessary. The man page suggests that this is only needed for “some older X11 clients”. Most of the posts I’ve seen about this setting seem to involve people changing settings randomly hoping to hit on a working solution, so don’t change this or X11DisplayOffset unless nothing else works.
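For reference, here is the sshd_config fragment implied by the checklist above; the commented-out lines show the defaults you probably shouldn’t touch:

```
# Relevant /etc/ssh/sshd_config lines for X11 forwarding
X11Forwarding yes
#X11UseLocalhost yes    # default; only change for some older X11 clients
#X11DisplayOffset 10    # default; rarely needs changing
```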

Remember that after changing sshd_config, you must restart sshd (service sshd restart). This should not impact existing sessions. You can also run sshd -t first to check the config for errors before restarting. (And I always open a new tab in my terminal and test login there before logging out of the existing session, to make sure sshd came up cleanly so I don’t accidentally lock myself out.)

Another recommendation (which I totally missed until I had solved the issue) was to add the -v flag when starting your ssh session, to enable verbose logging. This might indicate the problem.

If none of the above solves your problem, this page (also linked above) was very helpful. Hopefully this post will help someone, though. (Please note that I am by no means an expert on X11 forwarding; I just happen to have fixed it in my case and wanted to write up what I did, for my future self or for others.)