How to telnet to port 443 to test HTTPS sites

When testing websites, it’s sometimes handy to just telnet google.com 80 and speak HTTP by hand. But, of course, that won’t work for HTTPS sites, because the server expects you to set up an SSL connection first, not send plain HTTP commands.

It turns out you have two options (at least), but neither involves using telnet.

Use openssl

This is the more common one: use the s_client subcommand of the OpenSSL CLI tool.

Matthew.Wagner ~ $ openssl s_client -connect ma.ttwagner.com:443
CONNECTED(00000003)
depth=1 /C=BE/O=GlobalSign nv-sa/CN=GlobalSign Organization Validation CA - G2
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
...

This will output a ton of SSL information, but then you’re in the equivalent of a telnet session, tunneled through a secure connection. You can use your classic dialog, e.g.,

GET / HTTP/1.1
Host: ma.ttwagner.com

Hit Enter twice (a blank line ends the headers), and you’ll get the expected response.
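
One aside: if the server hosts multiple sites behind SNI, you may also need to pass -servername so it presents the right certificate, along the lines of:

openssl s_client -connect ma.ttwagner.com:443 -servername ma.ttwagner.com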

Use gnutls-cli

Thanks to a fellow Matt, at bearfruit.org, for this great suggestion. If you’ve got gnutls installed, it’s even easier:

Matthew.Wagner ~ $ gnutls-cli login.yahoo.com
Processed 237 CA certificate(s).
Resolving 'login.yahoo.com'...
Connecting to '98.139.21.169:443'...
Cannot connect to 98.139.21.169:443: Operation timed out

(The “Cannot connect” error isn’t a problem with gnutls-cli itself. I was confirming that my browser wasn’t insane, and that login.yahoo.com really was down at the time.)

gnutls-cli is present on my Mac, but not on a CentOS box. There, it’s provided by gnutls-utils.
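
If it’s missing, installing it should be as simple as:

sudo yum install gnutls-utils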

Converting a UNIX timestamp to normal time

You can, pretty easily, get a UNIX timestamp (seconds since the epoch, January 1, 1970, at 0:00:00 UTC) with date +%s. But how the heck do you do it in reverse? Given a timestamp like 1415744430, how do you convert that to a more conventional format?

My answer has always been “use a tool online”, or to pull up an interactive shell in something like Ruby. But there has to be a better way, right?

There is! But, it’s not standard.

On a Linux box (GNU coreutils)


$ date -d @1415744430
Tue Nov 11 22:20:30 UTC 2014

The leading @ is important.

On a Mac OS X box (BSD-derived)


$ date -r 1415744430
Tue Nov 11 17:20:30 EST 2014

So, there you have it, standardization be damned.
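
As an aside, both variants accept the usual strftime-style format strings if you want the output in a particular shape (GNU first, then BSD):

$ date -d @1415744430 '+%Y-%m-%d %H:%M:%S'
2014-11-11 22:20:30
$ date -r 1415744430 '+%Y-%m-%d %H:%M:%S'
2014-11-11 17:20:30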

Setting up replication to an RDS instance

This is basically the official RDS documentation rephrased in a way that makes sense to my brain. These steps will take data from a “normal” MySQL server (e.g., one you installed yourself on an EC2 instance), import it into an RDS instance, and then enable replication. Amazon’s instructions are correct, but they caused me a good bit of confusion and didn’t prepare me for some gotchas.

You’ll have two instances, which I’ll refer to as such:

  • Master, the non-RDS instance (Amazon calls this the “Replication Source”)
  • Slave, the RDS instance which will pull data from the master

Launch a slave RDS instance

This one is normal. Log into AWS, and start up an RDS instance. Amazon says that you should not enable multi-AZ support until the import is complete. I missed that detail, and importing my trivial (one row in one table, for testing) database went fine. They’re probably right, though. Don’t forget the credentials you create! For this post, I used ‘dbuser’ as a username, and ‘dbpassword’ as a password. (Obviously, use something better in the real world.)

Make sure to get security groups / VPC ACLs right. I put them in the same VPC, and just enabled 3306 all around and it was good. They have more detailed instructions in the docs.
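
For what it’s worth, opening 3306 within the VPC from the CLI looks roughly like this (the security group ID and CIDR below are made-up placeholders):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 10.0.0.0/16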

Configure the master

You’ll need to do several things on the master:

Enable binlogs and set a server-id

MySQL requires that a binary log (binlog) be used before replication is possible. You also need to set a server-id parameter, with a unique ID.

I just dropped this in the [mysqld] section of /etc/mysql.conf:

log-bin=mysql-bin
server-id=101

If this is the only master involved, the exact value of server-id doesn’t really matter; it just needs to be set, and unique among the servers replicating with each other.

You need to service mysqld restart for this to apply.
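
To sanity-check that binlogs are actually being written after the restart, something like this should report a current binlog file and position:

mysql -u root -p -e "SHOW MASTER STATUS"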

Add a replication user

This one wasn’t abundantly clear to me. You need to add a replication user to the master, which the slave will use.

You’ll want the following two statements (this example is taken directly from the MySQL docs):

CREATE USER 'repl'@'%.mydomain.com' IDENTIFIED BY 'slavepass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.mydomain.com';

Obviously, customize the hostname part. I just used ‘%’ because I was doing a POC test in a VPC, but that should be locked down for anything real.

Export a DB dump

Use mysqldump to create a snapshot.

I just wanted to copy one database, so I ran something like this:

mysqldump -u root -p --databases test_db1 --master-data > test_db1.dump

That will prompt for a password, and then write a dump of the database to test_db1.dump. Next, we’ll import this.
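
One aside before importing: the dump records the master’s binlog file name and position near the top; it’s worth jotting those down, since we’ll need them when we configure replication later. A quick way to see them:

grep -m 1 'CHANGE MASTER' test_db1.dump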

Import the dump to RDS

Hopefully by now the RDS instance has come online. Test that you can connect to it over MySQL. (Note: you cannot ssh into the RDS node. It only exposes MySQL as a service.)

We now want to import that database dump, and then we can start replication. But first, we need to tweak one thing in the dump we just created!

With --master-data, a line like this is written near the top of the dump file:

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;

I had to remove that line, or else I got this error:

Access denied; you need (at least one of) the SUPER privilege(s) for this operation

With that fixed, it’s time to import the data. The thing that’s not necessarily intuitive is that you want to run the MySQL client from your existing database server, and use -h to specify a remote hostname. You can’t ssh to the RDS instance and run it locally, because they don’t have ssh enabled. Here’s the command I used:

mysql -u dbuser -p -h REDACTED.us-east-1.rds.amazonaws.com < test_db1.dump
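
A quick way to confirm the import landed (using the same placeholder endpoint):

mysql -u dbuser -p -h REDACTED.us-east-1.rds.amazonaws.com -e "SHOW TABLES IN test_db1"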

Enable replication

With the old database imported on RDS, it’s time to enable replication to get it to sync up with anything since the dump was taken, and then stay current. Since we don’t have ssh access, Amazon gives us a few custom procedures in MySQL we can run.

Connect to MySQL on your RDS slave (e.g., mysql -u dbuser -p -h REDACTED.us-east-1.rds.amazonaws.com or whatever).

In that MySQL shell, use their mysql.rds_set_external_master procedure. It takes the master’s address (the replication source, not the RDS endpoint), the replication user’s credentials, and the binlog file name and position from the CHANGE MASTER line in the dump. Run something like this (read the docs for more details):

CALL mysql.rds_set_external_master (
'your-master-hostname-or-ip',
3306,
'repl',
'slavepass',
'mysql-bin.000001',
107,
0
);

It’s important to note that you need to use the credentials for the replication user you created, not the normal admin credentials.

Once that’s configured, start replication, with mysql.rds_start_replication. That one is much simpler, as it doesn’t take any arguments:

CALL mysql.rds_start_replication;

Then, you can run SHOW SLAVE STATUS\G to view the replication status. If all went well, there will be no errors. Yay! You can skip replication errors with another procedure they implement, mysql.rds_skip_repl_error, though ideally that won’t be necessary.

At this point, data inserted to the master should show up on the slave automatically. (Don’t insert rows into the slave yet, or you’ll end up with a real mess!)
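
As a quick smoke test (assuming, for illustration, that test_db1 has a table t1 with a single integer column), insert a row on the master and read it back from the slave:

# On the master:
mysql -u root -p -e "INSERT INTO test_db1.t1 VALUES (42)"
# From anywhere that can reach the RDS endpoint:
mysql -u dbuser -p -h REDACTED.us-east-1.rds.amazonaws.com -e "SELECT * FROM test_db1.t1"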

Promote the slave

Amazon provides those instructions for the purposes of importing a database, then cutting over to use the RDS node as a master. When the RDS slave is cut over and your application is ready, you can stop replication, decommission the master, and start using the RDS slave as your master.

There are two procedures you’ll be interested in here: mysql.rds_stop_replication, and mysql.rds_reset_external_master to unset the master information. Remember to clean up security groups, the old master, etc.
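
A rough sketch of that cutover (same placeholder endpoint and credentials as above), once the slave is fully caught up and your application has moved over:

mysql -u dbuser -p -h REDACTED.us-east-1.rds.amazonaws.com -e "CALL mysql.rds_stop_replication"
mysql -u dbuser -p -h REDACTED.us-east-1.rds.amazonaws.com -e "CALL mysql.rds_reset_external_master"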

Quick-start with Gluster on AWS

I wanted to play around with Gluster a bit, and EC2 has gotten cheap enough that it makes sense to spin up a few instances. My goal is simple: set up Gluster running on two servers in different regions, and see how everything works between them. This is in no way a production-ready guide, or even necessarily good practice. But I found the official guides lacking and confusing. (For reference, they have a Really, Really Quick Start Guide and also one tailored to EC2. Both took some tweaking.) Here’s what I did:

  • Start two EC2 instances. I used “Amazon Linux” on a t2.micro, and started one each in Sydney and Oregon. (Using different regions is in no way required; I’m doing that because I’m specifically curious how it will behave in that case.)
  • Configure the security groups from the outset. Every node needs access to every other node on the following ports (this was different for older versions):
    • TCP and UDP 111 (portmap)
    • TCP 49152
    • TCP 24007-24008
  • Create a 5GB (or whatever you like, really) EBS volume for each instance; attach them. This will be our ‘brick’ that Gluster uses.
  • Pop this in /etc/yum.repos.d/glusterfs-epel.repo:
# Place this file in your /etc/yum.repos.d/ directory

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=0
  • sudo yum install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server. This should pull in the necessary dependencies.
  • Now, set up those volumes:
    • sudo fdisk /dev/sdf (or whatever it was attached as); create a partition spanning the disk
    • Create a filesystem on it; I used sudo mkfs.ext4 /dev/sdf1 for now
  • Create a mountpoint; mount
sudo mkdir -p /exports/sdf1
sudo mount /dev/sdf1 /exports/sdf1
sudo mkdir -p /exports/sdf1/brick
  • Edit /etc/fstab and add the appropriate line, like:
/dev/sdf1   /exports/sdf1 ext4  defaults        0   0
  • Start gluster on each node; sudo service glusterd start
  • Peer detection… This tripped me up big time. The only way I got this to work was by creating fake hostnames for each box in /etc/hosts. I used gluster01 and gluster02 for names. On each box, /etc/hosts mapped its own name to 127.0.0.1 (and the other box’s name to an IP it could reach). Then, from one node (it doesn’t matter which), probe the other by the hostname you just set up, e.g. sudo gluster peer probe gluster02. You don’t need to repeat this from the other host; they’ll see each other.
  • Create the volume, with a replica count of 2 (one copy on each node), on one of them:
sudo gluster volume create test1 rep 2 gluster01:/exports/sdf1/brick gluster02:/exports/sdf1/brick

This will fail miserably if you didn’t get the hostname thing right. You can’t do it by public IP, and you can’t directly use localhost. If it works right, you’ll see “volume create: test1: success: please start the volume to access data”. So, let’s do that.

  • sudo gluster volume start test1 (you can then inspect it with sudo gluster volume status)
  • Now, mount it. On each box, sudo mkdir /mnt/storage. Then, on each box, mount it with a reference to one of the Gluster nodes: sudo mount -t glusterfs gluster01:test1 /mnt/storage (either gluster01:test1 or gluster02:test1 will find the right volume). This may take a bit if it’s going across oceans.
  • cd into /mnt/storage on one box, create a file, and see that it appears on the other (see the sketch below). Magic!
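
A minimal version of that last check, assuming the hostnames from above:

# On gluster01:
echo "hello from gluster01" | sudo tee /mnt/storage/hello.txt
# On gluster02, the file should show up immediately:
cat /mnt/storage/hello.txt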

Please keep in mind that this was the bare minimum for a cobbled-together test, and is surely not a good production setup.

Also, replicating Gluster between Sydney and Oregon is horribly slow. Don’t do that! Even when it’s not across continents, Gluster doesn’t do well across a WAN.