Always Get Better

5 Ways to Keep Your Web Server Secure

September 19th, 2017

Equifax recently revealed that it was hacked, exposing the personal information of over 143 million people. You may not be sitting on such identity-theft-rich material, but keeping your server secure is an absolute must for any business. Fortunately, it really isn’t very hard to achieve and maintain a decent level of protection.

1. Hire a Competent Developer

Cloud computing makes web servers super accessible to everyone; unfortunately, that means it’s really easy to get a website running and come away with a false sense of security, thinking everything is right when it’s not. A lot of developers claim they can do it all for you when all they really know is how to install Apache – not how to lock it down.

A good giveaway: if your developer can’t get by without a GUI tool or web interface to set up and administer the website, they have no business on your server. These are great tools for setting up a local development environment, but they take shortcuts to make things more convenient for the user – not to improve security.

So if your guy doesn’t have deep command-line knowledge and the ability to work without these tools, fine. He may still be a great developer who can build you a secure website following all the security best practices. He just has no business touching your web server; have someone else set up the “live” environment.

2. Lock Down Ports

When you’re setting up a web server, lots of supporting programs get started that don’t directly affect your website. Things like email, ICMP, DNS, NTP and DHCP are important to keep the system running but have no business leaving the local network. Anything you don’t absolutely need to reach from outside should be locked down – no access except from inside the server.

Web servers like Apache and nginx are specifically designed to withstand being used as attack vectors to control your system – and even they get compromised routinely. MySQL doesn’t stand a chance; don’t open it to the outside world… ever.
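As a sketch (assuming a Linux host managed with iptables – adjust the ports to your stack), a minimal ruleset in iptables-restore format that leaves only SSH and web traffic reachable from outside:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# always allow loopback and established connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# the only ports the outside world needs: SSH and HTTP(S)
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

Load it with iptables-restore and everything else – MySQL included – stays reachable only from the machine itself.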

3. Separate Database Servers

It’s super common to find database servers configured so badly that they become a major security hole. On MySQL, many people never learn to create users with anything narrower than “GRANT ALL PRIVILEGES ON x.* TO y@z;”. Since the SQL server itself often runs with elevated system access, it only takes a single unsecured query to let an attacker create files wherever they want on your server.
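As an illustration (the database, user and host names here are made up), a tighter alternative grants only the statements the app actually uses, and only from the web server’s address:

```sql
-- instead of GRANT ALL PRIVILEGES ON x.* TO y@z;
CREATE USER 'webapp'@'10.0.0.5' IDENTIFIED BY 'a-long-random-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'webapp'@'10.0.0.5';
FLUSH PRIVILEGES;
```

No FILE, no SUPER, no access to other schemas – a compromised query can read and write rows, but it can’t touch the filesystem.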

The easiest way to prevent this from affecting your websites is to move SQL to another server. Not only do you get the bonus of having a machine configured exclusively for web work and another exclusively for DB work, but bad things happening on one won’t mean bad things happening on the other.

4. Keep Up With Software Patches

If you want to keep your server secure, apply updates as soon as vendors release them.

In a world full of zero-day exploits, any software with an unapplied security update is a definite risk – one that may already be part of a malware package being sold in some dark corner of the Internet.

Don’t be a victim, keep your server secure by keeping it up to date.
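You don’t even have to do it by hand. On Debian/Ubuntu, for example, installing the unattended-upgrades package and dropping in this config fragment (a sketch; exact paths vary by distro) applies security patches automatically:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```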

5. Enforce User Permissions

One of the most compelling reasons to use Linux traditionally has been the strong separation between services using user permissions. Even Windows Server supports it these days.

If you’re using PHP, don’t use Apache’s mod_php – use php-fpm, and set up your pools to give each website its own user. Again, it’s all about compartmentalization. Bad things will happen, and a good sysadmin makes sure the damage those bad things do is contained to a small area.
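A minimal pool definition might look like this (the site and user names are placeholders, and the pool.d path depends on your PHP version):

```ini
; /etc/php/7.2/fpm/pool.d/example.com.conf – one pool, one user, per site
[example.com]
user = example_com
group = example_com
listen = /run/php/example.com.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

If one site gets compromised, the attacker is stuck with that pool’s user and can’t read the other sites’ files.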

BONUS #6: Keep Your Server Secure by Never Logging In

Never allow yourself or anyone else to log into the web server.

There’s no reason for it. If you need to deploy a website update, use a CI tool like Jenkins. If you got rooted, trash the server and launch a new one.

People make mistakes, people forget about config changes, people can’t be trusted. You don’t need to worry about password schemes, RSA keys, iptables goofs or a million other common problems if you just acknowledge the human risk factor and remove it from the equation.

When we move to programmed servers, we make it easier to bring new people on board, faster to verify and test new software versions, more repeatable in the case of a data failure, and more secure across our entire network. Don’t make humans do the work of computers, automate all the things!

Reverse Proxy with IIS and Elastic Beanstalk

September 4th, 2017

Suppose your main website is a .NET/IIS stack running on AWS Elastic Beanstalk, and you decided to add a WordPress blog to the mix. Instead of having http://blog.yourdomain.com and http://www.yourdomain.com, you want to host the blog from a subdirectory at http://www.yourdomain.com/blog. There are a few ways you can go about this; this post will go through how to set up a reverse proxy to a dedicated blog server.

You have two options:

1. Host WordPress in IIS – Not fun. You need to configure IIS to run PHP, host a MySQL database, and manage your WordPress file directory and updates in an environment where user uploads and core files can get trashed at any minute when EB rolls out a new server. It’s possible to run an amazing HA WordPress install on Elastic Beanstalk (topic for another post), but in a subdirectory directly hosted in Windows/IIS? Not for the faint of heart.

2. Host WordPress on a Linux server somewhere else and reverse proxy – Let’s do this instead.

Basically you set up WordPress and get it to look how you want, configure it to respond to your domain, then configure IIS to fetch your blog URLs and return them whenever a request comes through to the blog subdirectory on your main site. Every other directory is served from your .NET app as normal.

Reverse Proxy a WordPress blog with IIS

Important: The final step to this is the most important bit if you’re running on Elastic Beanstalk. Make sure you follow it in order to actually enable IIS’ proxy module, which does not come pre-installed on Elastic Beanstalk’s AMI.

Configure WordPress to Respond to Your Domain

Add this section to the bottom of wp-config.php, just before the require_once for wp-settings.php. This tells WordPress to ignore its default server settings and use your website as its base path.

// Override the detected host so WordPress builds URLs for the main domain
$hostname = 'www.yourdomain.com';
$_SERVER['HTTP_HOST'] = $hostname;
// Serve the blog from the /blog subdirectory of the proxied domain
define('WP_HOME', 'https://www.yourdomain.com/blog');
define('WP_SITEURL', 'https://www.yourdomain.com/blog');
$current_url = 'https://' . $hostname . $_SERVER['REQUEST_URI'];

Configure IIS Paths

The reverse proxy makes use of the URL Rewrite module in IIS. To turn this on for your blog, add the following to the <system.webServer> section of your web.config file:

 <rewrite>
   <rules>
     <rule name="Reverse Proxy to blog" stopProcessing="true">
       <match url="^blog/(.*)" />
       <action type="Rewrite" url="https://blog.yourdomain.com/{R:1}" />
     </rule>
   </rules>
 </rewrite>

This tells IIS to rewrite all requests beginning with “blog/” to your WordPress blog. IIS will reach out to your blog server as if it were the requestor, fetch the page, and return it as if the blog were hosted from within IIS itself. The {R:1} variable carries forward any path information to the blog – so your theme files, user uploads and static assets will all pass through.
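If you want to convince yourself the pattern behaves as described, you can exercise it outside IIS – here with JavaScript’s regex engine standing in (a sketch; IIS uses .NET regular expressions, but this particular pattern works the same in both):

```javascript
// {R:1} corresponds to the first capture group of the match pattern
var match = 'blog/wp-content/uploads/header.png'.match(/^blog\/(.*)/);
var target = 'https://blog.yourdomain.com/' + match[1];
console.log(target); // https://blog.yourdomain.com/wp-content/uploads/header.png
```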

If you deploy your site and try to access your blog page now, it won’t work. You’ll see a ‘404’ response code even though the rules are definitely set up properly. The final step to this is enabling the Application Request Routing module on IIS – this is not enabled by default in Elastic Beanstalk’s version of Windows.

Enabling the Reverse Proxy Module on Elastic Beanstalk

You could Remote Desktop into your web server and manually enable the ARR module, but it would stop working the next time your environment flips, the server gets reloaded (which can happen even if you are doing all-at-once deployments and not adding or removing machines), or nodes get added to your environment.

We need to make sure the module gets installed and checked every time you deploy your files, so it’s always present and available to use even when new machines come online.

To do that, we’ll use the .ebextensions scripting hooks to download, install and configure ARR every time a deploy runs.

1. Download the ARR Installer

Download the 64-bit ARR installer (from here) and upload it to S3 so it is available to your VM when it boots. We stage it on S3 instead of pulling directly off Microsoft’s servers because we can’t rely on outside links being available when we need to deploy, and if our VM happens to be inside a VPC without a NAT we can use Amazon’s internal S3 endpoints without needing to configure anything more advanced on the network.

2. Add an ebextension hook

In your .ebextensions folder (at the root of your unzipped deploy package), add a new config file (install-arr.config) to instruct Elastic Beanstalk how to install the extension:

packages:
  msi:
    ApplicationRequestRouting: "pathtoyourinstallerons3"

commands:
  01_enablearr:
    command: "C:\\Windows\\system32\\inetsrv\\appcmd.exe set config  -section:system.webServer/proxy /enabled:True  /commit:apphost"

The packages/msi lines tell Elastic Beanstalk to download and run the installer. Since you won’t be physically present when that happens, the script will automatically accept all the license agreements and silently run.

The appcmd command instructs IIS to enable the reverse proxy module, which turns your rewrite instructions into actual reverse proxy commands. Now if you visit www.yourdomain.com/blog/, you will see your WordPress blog.

Bonus: Trailing Slashes

If you visit www.yourdomain.com/blog without a trailing slash, you won’t see the blog. You don’t want to start the rewrite rule at this level because your reverse proxy will try to access blog.yourdomain.com// (with a double slash) for all dependent resources, which can cause problems with WordPress’ routing.

For this case, I like to add the trailing slash at the IIS level. Every time someone comes to yourdomain.com/blog, they get a clean permanent (301) redirect to yourdomain.com/blog/. Just add this rule to the <rules> section of your web.config:

<rule name="Add trailing slash to blog reverse proxy" stopProcessing="true">
  <match url="^(blog)$" />
  <action type="Redirect" redirectType="Permanent" url="{R:1}/" />
</rule>

The regular expression in the match url tells the web server this should only apply to the “blog” path, that is, nothing before or after the word “blog”. This lets you have “blog” in other parts of your URL without accidentally redirecting valid pages.

Automate Squarespace Development with Expect

March 9th, 2017

We have a love/hate relationship with Squarespace’s developer platform. I think they really need to decide whether they’re a design company or a web hosting platform and stop doing 50% of each. But anyway…

Sometime in the last year they added a local development server that really helps speed up the process of building things for their platform. It doesn’t fully replace the “make change and deploy live” workflow, but it is super useful for testing basic layout updates for side effects.

Since I use a password generator, logging into my Squarespace account to do any development work involves cracking open 1Password and copy/pasting into the terminal. After a few weeks of that I decided there had to be a better way, so I created this shell script to take care of the setup.


#!/usr/bin/expect
# Start the local dev server and answer its login prompts automatically.
# Replace the site address and credentials with your own.
spawn squarespace-server squarespacewebsiteaddress --auth
expect "Squarespace email:"
send "squarespace username\r"
expect "Squarespace password:"
send "password\r"
interact

Hope this helps out someone who is as lazy as I am.

Speed up Deployments in Azure Web Apps

March 4th, 2017

Is it possible to speed up deployments in Azure Web Apps? Without resorting to temporary fixes?

My biggest pain point with Azure is the length of time it takes to push changes. My e-commerce project has a simple architecture with a smallish number of dependencies but takes over 35 minutes to complete a build. Although there’s no downtime during the process, you don’t want to sit around waiting. Fortunately it’s easy to speed up deployments in Azure.

Problem: Starting from a Bad Place

Here’s what I’m dealing with: 35 minutes to push each commit.

35 minutes – yikes! Keep in mind, this is a simple website: .NET Core MVC6 with some NuGet dependencies. On a local system it takes about 3 minutes to go from a fresh clone to a running app.

When you factor in configuration differences and the things you need to check off when setting up a live environment, that 35 minutes makes this project pretty unwieldy.

The problem stems from the way Azure deploys apps to the service fabric. Everything in your app is shared with mapped network folders which is why things like session state and file uploads are available to every instance. That’s great if you have a complicated stateful app setup, but we’re building totally RESTful services (right?) so we don’t need that feature.

Building from a network drive is a problem for web apps because our npm and bower repositories generate tons of tiny files that absolutely suck to synchronize across a network drive.

Solution: Speed Up Deployments by Skipping the Network

Instead of waiting for the same files to synchronize across the network, use the Git repository on the local machine. This skips the network latency and does all the deployment work on the local machine.

Using local cache for Git deployments

In your Application Settings, add a key for SCM_REPOSITORY_PATH with the value D:\local\repository.

Now my deployments are 10 minutes – a big improvement.

Store the Right Things

10 minutes is a big improvement but I know we can do better. Looking through my project I find 100MB of content images that should really be part of a CDN. Now my repo is only 8MB and the deployment time is slashed again.

5 minutes – now that is a respectable build time.

 

Serverless Contact Forms with AWS Lambda

October 27th, 2016

Servers are a pain to run. They break, get hacked, need updating, and generally demand your constant attention, or that site you posted two years ago won’t work when you need to make a change. Static sites are a beautiful dream, but what do you do when you need user input? You don’t want to use a third-party service just to collect the occasional contact form from your visitors, and running a whole web server to handle it defeats the purpose of creating a static site in the first place. What you need are serverless contact forms.

Architecture

I use Wufoo forms for most of my static sites but today I’m switching to Lambdas. To start, I have a static Jekyll site uploaded to S3. I have a CloudFront distribution set up for edge-caching and SSL. Now I’m going to add a contact form that reaches through AWS’ API Gateway to execute a node.js script hosted on Lambda.

Here is the general data flow:

Serverless Contact Forms Architecture

Email Script

I first write a simple Node.js script that validates the contact form parameters, then pumps the message into the SendGrid API. You can swap in your preferred client – AWS SES is a popular one and a nice way to keep everything under one umbrella.

Setting Up the Lambda

First create a blank template.

Serverless Contact Forms start with a blank lambda

Next create an API Gateway trigger for the Lambda. I set my security to ‘Open’ because I am allowing anonymous traffic to send contact forms.

Serverless Contact Forms API Gateway Settings

Finally, upload the contact form script. Since I’ve used libraries other than Amazon’s own I need to zip everything up and send it as an archive.

Serverless Contact Forms Lambda Setup

Configuring the API Gateway for Serverless Contact Forms

Now that the Lambda function has been created it’s time to open it to the outside world. Go to the ‘API Gateway’ service, find your endpoint, and choose Actions -> Create Method. Add a POST method pointing to your Lambda function’s name.

Serverless Contact Forms API Gateway Setup

You need to enable CORS because you will be using AJAX to submit the form, and while we’re testing it out this is a cross-domain request.


Then deploy the API. The default environment is prod, which is good enough for our purposes.


Finally, you see the endpoint URL you will use for your form.


Serverless Contact Forms

That’s the backend all set up. Now you can build out the contact form your visitors will use to contact you. Here’s mine – it’s super ugly, but it gets the point across.

When you hit submit, the send function executes, posting the email address and message to the endpoint and alerting the result. Nothing fancy, and obviously you would need to code in all the proper validation and UI niceties to make this pretty.
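The submit handler can be as simple as this sketch (ENDPOINT is a placeholder for the URL API Gateway gave you; the payload field names are assumptions matching the Lambda side):

```javascript
// Hypothetical endpoint – substitute your API Gateway URL
var ENDPOINT = 'https://example.execute-api.us-east-1.amazonaws.com/prod/contact';

// Build the JSON body the Lambda expects
function buildPayload(email, message) {
  return JSON.stringify({ email: email, message: message });
}

function send(email, message) {
  // fetch is available in modern browsers (and Node 18+)
  return fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildPayload(email, message)
  })
    .then(function (res) { return res.text(); })
    .then(function (text) { alert(text); });
}
```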

CloudFront Configuration

I can get into the static site setup later, but for now I’m working from an existing distribution.

Since CORS is all set up you could use the endpoint as-is and just launch your contact form, but that’s not as elegant as posting to the same domain as the form itself. You can achieve this illusion because CloudFront is able to forward POST requests now.

Add the origin for the API to the distribution.


Tell CloudFront to forward (and not cache!) your contact form by adding a behaviour for the path where you want to host the form. The path name should match your API resource name:


For this to work, make sure you choose the Allowed HTTP Methods option that includes POST. Setting the Forward Headers option to ALL causes CloudFront to use the origin cache settings (no cache), which lets more than one person use your contact form.

Profit

Overall this feels like it is a longer process than it should be. The first time I did this it took almost 5 hours, but now that I know the process I’m sure next time will be a lot faster. Figuring out the right folders and permission settings (for CORS) was the most finicky part of the process, but the API Gateway has an informative test tool that helped a lot.

This is only going to get better, and Lambda is a fantastic cost-effective tool for replacing on-demand tasks where you would once need to spin up and support a whole server. Serverless contact forms are definitely the way to go if your site is otherwise static.

HTTPS on Static Sites using Cloudfront

February 5th, 2016

It seems like every time I log into my AWS account there are a ton of new services waiting to be discovered. (Not to mention dozens in early preview that don’t even show up in the main list.) I feel like keeping up with what’s going on in that space is becoming a full-time job all by itself. The most exciting new update is the Certificate Manager – pushing us one step closer to “https everywhere”.

S3’s static website hosting is a fascinating service. If you are able to design your site entirely using third party APIs (or even better, none at all), it’s possible to fully host your site through CloudFront. Think unlimited scalability, super fast edge-based response times. A parenting blog I manage as a static site (using Jekyll) has consistent response times all around the world. Improving response times for off-continent visitors has definitely had a big impact on session times.

HTTPS and Static Sites

Amazon has done an incredible job making certificate provisioning easy. To add a new SSL certificate to your Cloudfront-based static site:

1. Go into your Distribution
2. Under the ‘General’ tab, click ‘Edit’
3. Choose ‘Custom SSL Certificate’ and click the button “Request an ACM Certificate”
4. Make sure to type both your naked domain and “www.” variant in the certificate
5. After confirming the certificate, go back to your distribution’s general settings and select it
6. Wait 15 minutes and profit(?!)

Of course, if you’re starting a new site the same steps mostly apply – you can add a certificate right from the creation wizard.

What about cost? FREE. You do not have to pay a thing for SSL certificates using this service.

Redirect Non-HTTPS Traffic to HTTPS

The next thing you should think about is redirecting all non-HTTPS traffic to HTTPS (in CloudFront, set your cache behaviour’s Viewer Protocol Policy to “Redirect HTTP to HTTPS”). Yours is a professional site, or at least should look like it was created by a professional, and having that little lock icon next to your address says “I’ve thought this through and know what I’m doing”. Anyone can toss up a website; not everyone is advanced enough to know about secure sites.

Error on Existing Sites

If you’re already running a (non-HTTPS) website through Cloudfront using S3 websites, the default origin behaviour is to “Match Viewer” when it comes to HTTPS. S3 sites do not support HTTPS so you will get a big ugly error.

To fix it:

1. Go to your Distribution
2. Under the “Origins” tab, choose your S3 origin and press the “Edit” button
3. Find “Origin Protocol Policy” and choose “HTTP Only”

While you’re there, make sure SSLv3 is NOT selected under “Origin SSL Protocols”. SSLv3 is bad-news-bears insecure – we don’t want that!

Limitations

The biggest drawback to Cloudfront’s SSL solution is that you are pretty much stuck with Server Name Indication (SNI), sharing your SSL connection with hundreds of other sites. For most people this shouldn’t be a problem, but if your audience is stuck in 2005 and hasn’t upgraded to Vista or better by now, their browsers are not going to be able to access your SSL site this way.

Cloudfront offers IP-based SSL for $600 a month, which is overkill unless your company has seriously deep pockets (and seriously, if your goal is free SSL certificates you aren’t going down this route). If SNI is out of the question for you, your best bet is an Elastic Load Balancer pointing at a small server – you still get the benefit of free SSL certificates that never need maintenance, and you get your own IP address.

Malware in the iTunes Store – 500M Users Affected

November 12th, 2015

People rely on their app stores to provide safe, high-quality software for their phones and tablets. With iTunes in particular, we trust that Apple’s strict review standards will raise the bar for quality and protect us from the shady apps and trojans we might find on file-sharing services and third-party app stores. What happens when that trust is misplaced?

A bunch of popular apps on Apple’s App Store shipped with malware in September. “Popular”, of course, refers to apps used by more than 500 million iPhone users, primarily in China, where the malware originated. Some apps, like WeChat, are also used in the west, but by and large the affected programs have names written in hanzi that most of my readers would not be able to pronounce, let alone install on their devices.

What does the malware do?

First off, what can you expect to happen if your phone gets infected?

This post from MacRumors summarizes things pretty well, but the highlights are:

  • Sends device information, phone number and ID to C2 (command-and-control) servers
  • Shows fake system alerts phishing for passwords
  • Accesses the clipboard (including stealing secrets from your password manager)
  • Hijacks URL opening

We’re not talking about phone-takeover threat levels, which is a testament to the sandboxed environment apps run in. These functions are available to pretty much any developer — you’re not supposed to use the device ID but you can get it if you want it.

How Did Malware Get On the App Store?

The first thing that most people are asking is: how did this happen? With Apple’s notoriously stringent app review process, how did apps with this vulnerability get through?

As it turns out, developers chose to download copies of Xcode from file-sharing networks rather than directly from Apple. Xcode is a huge download (larger than 3GB), so downloading directly from Apple can take a long time. To save time, these developers thought it would be a good idea to trust an unsigned version rather than the one from Apple.

How Did Malware Get Through the Walled Garden’s Gates?

But what about the review process? The UUID-gathering portion alone should have been enough to raise red flags, right?

Not so, according to people who know.

Although the App Store performs a cursory check of submitted apps for private API usage, the XcodeGhost files were added as a dynamically loaded bundle to the compiled app bundle – in other words, not subject to the code scanning… pretty sneaky.

How to Protect Yourself

How can you protect yourself when the store you rely on for your apps is poisoned?

Not much, unfortunately. Other than inspecting every single network connection, there is not a lot you can do to detect this kind of activity on your device.

Installing app updates as they become available is the best thing you can do for your phone. Although an attack on this scale was made possible by the single-provider model of the App Store, it’s also true that having the App Store as the centre of the ecosystem is the best way to quickly push out fixes for problematic apps as they are discovered.