Always Get Better

Archive for the ‘internet’ Category

5 Ways to Keep Your Web Server Secure

Tuesday, September 19th, 2017

Equifax recently revealed that they were hacked, exposing the personal information of over 143 million people. You may not be sitting on such identity-theft-rich material, but keeping your server secure is absolutely a must for any business. Fortunately, it really isn’t very hard to achieve and maintain a decent level of protection.

1. Hire a Competent Developer

Cloud computing makes web servers super accessible to everyone; unfortunately, that means it’s really easy to get a website running and develop a false sense of security, thinking everything is fine when it’s not. A lot of developers claim they can do it all for you when all they really know is how to install Apache, not how to lock it down.

A good giveaway: if your developer can’t get by without a GUI tool or web interface to set up and administer the website, they don’t have any business on your server. These are great tools for setting up a local development environment, but they take shortcuts to make things more convenient for the user – not to improve security.

So if your guy doesn’t have deep command-line knowledge and the ability to work without these tools, fine – he’s still a great developer, and he can build you a secure website following all the security best practices. He just doesn’t have any business touching your web server; have someone else set up the “live” environment.

2. Lock Down Ports

When you’re setting up a web server, lots of supporting programs get started that don’t directly affect your website. Things like email, ICMP, DNS, time and DHCP are important to keep the system running but have no business leaving the local network. Everything that you don’t absolutely need to access should be locked down. No access except from inside the server.

Web servers like Apache and nginx are specifically designed to face the public internet without becoming attack vectors that hand over control of your system, and even they get compromised routinely. MySQL stands no chance at all – don’t open it to the outside world… ever.
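As a rough sketch of what that looks like in practice (assuming an Ubuntu server with ufw installed – adapt to whatever firewall you use), everything except SSH and web traffic stays closed:

# Nothing else (MySQL, DNS, time, DHCP) gets a rule, so it stays unreachable from outside
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH for administration
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
sudo ufw enable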

3. Separate Database Servers

It’s super common to find database servers configured so badly they become a major security hole. On MySQL, most people don’t know any better way to add users than “GRANT ALL PRIVILEGES ON x.* TO y@z;”. Since the SQL server itself is often running with elevated system access, it only takes a single unsecured query to let attackers create files wherever they want on your server.

The easiest way to prevent this from affecting your websites is to move SQL to another server. Not only do you get the bonus of having a machine configured exclusively for web work and another exclusively for DB work, but bad things happening on one won’t mean bad things happening on the other.
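For illustration only (the database name, user, password and host below are made up), a tighter version of that grant gives the web application a dedicated account that can only connect from the web server and can only do day-to-day data work:

mysql -u root -p <<'SQL'
-- dedicated account, reachable only from the web server's private address
CREATE USER 'myapp'@'10.0.0.5' IDENTIFIED BY 'use-a-long-random-password';
-- no GRANT ALL: just the privileges the application actually needs, on its own schema
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'myapp'@'10.0.0.5';
FLUSH PRIVILEGES;
SQL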

4. Keep Up With Software Patches

If you want to keep your server secure, install updates promptly whenever vendors release them.

In a world full of zero-day exploits, any software with an outstanding security update is a known risk – the vulnerability it patches may already be part of a malware package being sold in some dark corner of the Internet.

Don’t be a victim: keep your server secure by keeping it up to date.
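On a Debian or Ubuntu server (a sketch – other distributions have their own equivalents), you can even have security patches install themselves:

sudo apt-get update
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades  # turn on automatic security updates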

5. Enforce User Permissions

One of the most compelling reasons to use Linux traditionally has been the strong separation between services using user permissions. Even Windows Server supports it these days.

If you’re using PHP, don’t use Apache’s mod_php; use php-fpm and set up your pools to give each website its own user. Again, it’s all about compartmentalization. Bad things will happen, and a good sysadmin makes sure the damage those bad things do stays contained to a small area.
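A minimal sketch of a per-site pool, assuming PHP 7.0 from Ubuntu’s packages (the site name, paths and service name vary by version and distribution):

sudo tee /etc/php/7.0/fpm/pool.d/example.conf > /dev/null <<'EOF'
[example]
; every site runs as its own system user so a compromise stays contained
user = example
group = example
listen = /run/php/example.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 5
EOF
sudo systemctl restart php7.0-fpm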

BONUS #6: Keep Your Server Secure by Never Logging In

Never allow yourself or anyone else to log into the web server.

There’s no reason for it. If you need to deploy a website update, use a CI tool like Jenkins. If you got rooted, trash the server and launch a new one.

People make mistakes, people forget about config changes, people can’t be trusted. You don’t need to worry about password schemes, RSA keys, iptables goofs or any of a million other common problems if you just acknowledge the human risk factor and remove it from the equation.

When we move to programmed servers, bringing new people on board gets easier, verifying and testing new software versions gets faster, recovering from a data failure becomes repeatable, and our entire network becomes more secure. Don’t make humans do the work of computers – automate all the things!
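To give a flavour of that approach (the AMI ID and bootstrap script here are placeholders, not a recipe), a replacement web server can be launched entirely from a script, so nobody ever has to shell in and configure it by hand:

# bootstrap.sh installs and configures everything unattended on first boot
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --user-data file://bootstrap.sh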

HTTPS on Static Sites using Cloudfront

Friday, February 5th, 2016

It seems like every time I log into my AWS account there are a ton of new services waiting to be discovered. (Not to mention dozens in early preview that don’t even show up in the main list.) I feel like keeping up with what’s going on in that space is becoming a full-time job all by itself. The most exciting new update is the Certificate Manager – pushing us one step closer to “https everywhere”.

S3’s static website hosting is a fascinating service. If you are able to design your site entirely using third party APIs (or even better, none at all), it’s possible to fully host your site through CloudFront. Think unlimited scalability, super fast edge-based response times. A parenting blog I manage as a static site (using Jekyll) has consistent response times all around the world. Improving response times for off-continent visitors has definitely had a big impact on session times.

HTTPS and Static Sites

Amazon has done an incredible job making certificate provisioning easy. To add a new SSL certificate to your Cloudfront-based static site:

1. Go into your Distribution
2. Under the ‘General’ tab, click ‘Edit’
3. Choose ‘Custom SSL Certificate’ and click the button “Request an ACM Certificate”
4. Make sure to type both your naked domain and “www.” variant in the certificate
5. After confirming the certificate, go back to your distribution’s general settings and select it
6. Wait 15 minutes and profit(?!)

Of course, if you’re starting a new site the same steps mostly apply – you can add a certificate right from the creation wizard.
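If you prefer the command line, the same request can be made through the AWS CLI (domain names here are placeholders; certificates used by CloudFront must be requested in us-east-1, and Amazon will email you to confirm ownership):

aws acm request-certificate \
  --domain-name example.com \
  --subject-alternative-names www.example.com \
  --region us-east-1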

What about cost? FREE. You do not have to pay a thing for SSL certificates using this service.

Redirect Non-HTTPS Traffic to HTTPS

The next thing you should think about doing is redirecting all non-HTTPS traffic to HTTPS. Yours is a professional site, or at least it should look like it was created by a professional. Having that little lock icon next to your address says “I’ve thought this through and know what I’m doing”. Anyone can toss up a website; not everyone is advanced enough to serve it securely.
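In CloudFront this is the behaviour’s “Viewer Protocol Policy” – set it to “Redirect HTTP to HTTPS”. Once it’s in place, a quick way to confirm it’s working (the domain is a placeholder):

curl -I http://example.com/   # expect a 301 redirect pointing at the https:// address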

Error on Existing Sites

If you’re already running a (non-HTTPS) website through CloudFront using S3 websites, the default origin behaviour is to “Match Viewer” when it comes to HTTPS. S3 website endpoints do not support HTTPS, so you will get a big ugly error.

To fix it:

1. Go to your Distribution
2. Under the “Origins” tab, choose your S3 origin and press the “Edit” button
3. Find “Origin Protocol Policy” and choose “HTTP Only”

While you’re there, make sure SSLv3 is NOT selected under “Origin SSL Protocols”. SSLv3 is hopelessly insecure – we don’t want that!

Limitations

The biggest drawback to CloudFront’s SSL solution is that you are pretty much stuck with Server Name Indication (SNI), sharing an IP address with hundreds of other sites. For most people this shouldn’t be a problem, but if your audience is stuck in 2005 and hasn’t upgraded to Vista or better by now, their browsers are not going to be able to access your SSL site this way.

CloudFront offers dedicated-IP SSL for $600 a month, which is overkill unless your company has seriously deep pockets (seriously, if your goal is free SSL certificates you aren’t going down that route). If SNI is out of the question for you, your best bet is an Elastic Load Balancer pointing at a small server – you still get the benefit of free SSL certificates that never need maintenance, and you get your own IP address.

Tweeting with Buffer

Monday, May 27th, 2013

Glen Canyon Bridge & Dam, Page, Arizona
I continue to have an on-and-off relationship with Twitter. It’s been fun to talk with other developers and reach people directly, but a huge part of using the network is sorting signal from noise in the echo chamber. It doesn’t make sense to sit on Twitter all day trying to respond to everything; work needs to be done too!

Then there’s my reading. I read a lot. And I run into all kinds of cool stuff I want to share, and Twitter is the most natural place to share it, but of course that always ends up with Saturdays where I dump four dozen links in the span of a few hours… I hate it when other people do that, so rather than spamming everyone who follows me I’ve pretty much stopped sharing.

Until now.

Buffer to Spread Around the Outbursts
I found an app called Buffer (bufferapp.com) that sits in front of your Twitter account and collects your tweets into a “buffer”, then sends them out on a schedule. So you can have a backlog of messages trickle out slowly over a day instead of shoving them all out at once.

So my workflow with Twitter now is to monitor it (using Growl, of course) and have conversations where I can. I’ve met some incredible people using Twitter and made more than a few fun connections, and hope to keep building that over time. Whenever I read something interesting I’ll drop it into Buffer, and anyone who is interested can see those links without getting spammed all at once. I think it’s win-win.

Present in More Time Zones
At first, nights were lonely when I came out west – since 9pm for me is midnight for friends back home, things got pretty quiet fast. I’ve since made more friends on the west coast, but I came away with a fresh appreciation of how easy it is to get disconnected from our core tribes because of time zones.

Since I started using Buffer I’ve noticed more activity from my contacts in Europe and Australia. Of course I’m asleep when Buffer sends out one of my stored tweets at 3am, but sometimes it’s sparked conversations I’m able to pick up when I wake up in the morning. Although there is a high latency in those communications, I feel more connected than ever to some old friends who I might not have otherwise interacted with so frequently.

In the End, Connections Matter Most
The strongest takeaway theme that seems to be cropping up again and again lately has been the difference between technology and communication. It’s very easy, especially coming from a technical background, to fall in love with a design, a language, a piece of software. The magic comes from the conversations that get enabled by these advances. There’s no reason to put up a web site or build an application if it doesn’t solve some problem – if we build something for the sake of doing it, are we building something that will last?

Observations From Mobile Development

Monday, September 10th, 2012

With just a single mobile release under my belt now, I’m hardly what you might call an expert on the subject. What I can say for certain is the past year has been an eye opener in terms of understanding the capabilities and limitations of mobile platforms in general.

So having gone from “reading about” to “doing” mobile development, these are some of the “aha” moments I’ve had:

Design for Mobile First
The biggest revelation: Even if you’re not setting out to develop a mobile application, your design absolutely must start from the point of view of a handheld.

Think about it for a second – how much information can you fit on a 3.5″ screen? Not much! So when you design for that form factor you have to go through a savage trimming exercise – everything from type, to layout, to navigation must communicate its intent in as tiny a space as possible. In other words, there’s no avoiding the need to communicate your message effectively.

When you build all of your applications this way, mobile or not, it’s hard not to come up with a user experience that has laser focus and goes straight for the mark. Once you have a bare minimum experience, now you can augment it with additional navigation and information (including advertising) for larger form factors like desktop and tablets.

Don’t Fret the Framework
Just as in web development, frameworks are powerful tools that come with sometimes painful trade-offs. Especially when you’re getting started, don’t worry about which framework you choose – just pick the one that caters most to the style of development you’re already familiar with and jump in. If that means Adobe Air for ActionScript, cool. If it means PhoneGap for JavaScript, great.

Most of the new devices coming onto the market have more than enough memory and processing horsepower to handle the massive extra overhead incurred through cross-platform development tools. If you’re starting a new project or iterating a prototype, don’t hesitate to jump on board a tool that will get you to a product faster. This is one of those areas where the short term gain is worth the longer term pain…

Native Wins
We’ve known since the 80s, when developers had to release for a boatload of PC platforms – IBM, Commodore, Amiga, Tandy, etc. – that software written directly for a particular platform outperforms and outsells similar software written for a generic platform and ported across to others. The same holds true now, even though our cross-platform tools are far more advanced and our globally-usable apps are much higher in quality than what we could produce 30 years ago.

Some of the compelling reasons why you would want to take on the expense of building, testing and maintaining your app natively:

  • The UI of your application will integrate seamlessly into the underlying operating system – iOS widgets look like they belong, and Android layouts behave consistently with other applications
  • Raw speed – you don’t need to go through someone else’s API shim to reach the underlying operating system features, and you don’t have to bolt on custom native extensions since all the code is already native; every CPU cycle is devoted to your application, resulting in much higher performance, particularly for graphics-intensive applications
  • Operating system features – each mobile operating system has its own paradigm and set of best practices, which cross-platform tools gloss over to give you, the developer, a consistent interface. So your application misses the subtleties of the user’s platform – for example, Android uses Activities as its interaction model, but the Adobe Air framework squashes that instead of encouraging developers to program in an Activity-centric way

In other words, cross-platform tools exist in order to give developers the best experience, not to give the user the best experience. Your customer doesn’t care if your app is available on Android, Windows, iPhone, Playbook and WebOS if all they have is an iPhone.

I believe cross-platform tools are the best way to get your project off the ground and usable fast, but right from the beginning you need to be thinking about converting your application to native code in order to optimize the experience for your customers.

Market Fragmentation
I bought an Android phone and have been enjoying developing for it. But I don’t think I would enjoy developing for Android at large because of the massive number of devices and form factors I would need to support. This is where Apple has an edge – although the learning curve for Objective-C is steeper, once I have an iOS application I know it will run on an iPod Touch, an iPhone or an iPad. Not only that, but my guess is people are more likely to spend small amounts of money on app purchases, since they’ve been trained to do so by years of iTunes.

Backward Compatibility
Moving to the web was a huge advantage in terms of software support, because if you ever found a bug in your program you could patch it and release the fix without requiring anything from your users. No one has to upgrade a web page.

This isn’t true of mobile applications – ignoring mobile web applications – once someone downloads your app they may be slow to upgrade to newer versions, or they may never upgrade at all. This means any bugs that ship with your bundle are going to remain in the wild forever. If you have any kind of server-based presence, your server code needs to handle requests from all of those old app versions – so you need to make sure you get it right, and have good filtering and upgrade mechanisms in place.

Choosing a Platform
One thing that held me back from diving into mobile development was my hesitation to start. This is totally my fault – instead of just programming for WebOS when I had the Palm Pre, I thought it would be better and more accessible to use a more open JavaScript toolset so I could deploy my app to other phones. But really, what would have been the point? I only had a Palm Pre to run my software on, and I definitely wasn’t going to buy new hardware to test other versions. Instead of getting locked into analysis paralysis I should have just started programming for the Pre right away and transferred those skills to a more mainstream platform later.

So if you don’t have a smartphone, go get one – it will change your life, or at least the way you interact with your phone. Then start building apps for it. That’s all it takes to get into the game. Don’t wait another second.

Adobe Acquires Typekit

Monday, October 3rd, 2011

Today Adobe announced its acquisition of Typekit, a web font hosting service that allows designers to use any typeface with their sites rather than relying on standard “safe” font families.

This is an interesting development. Adobe’s Flash player already supports proprietary fonts, which suggests the company is looking at alternative technologies for its future development. Obviously Flash will remain relevant for some time to come, but as competitors increasingly jump onto the HTML5 bandwagon, Adobe is wise to expand its arsenal of standards-compliant technology.

How to Upgrade Firefox using Ubuntu

Saturday, August 6th, 2011
Mochila Firefox
Creative Commons License photo credit: jmerelo

So I got tired of using Firefox 3.6 on my Ubuntu machine and decided to upgrade to the newest version (5.0). It’s understandable that the package maintainers responsible for Ubuntu don’t put bleeding-edge releases in the distribution, due to the possibility of introducing unstable elements into the user experience. But Firefox 4 has been out for months, and the migration to 5 is well underway.

Fortunately, it couldn’t be much easier to get the newest official release using our good friend apt.

In a terminal window, add the Mozilla team’s stable Firefox repository by issuing the following command:


sudo add-apt-repository ppa:mozillateam/firefox-stable

Next, perform an update to get the package listing, and upgrade to install the newest browser:


sudo apt-get update
sudo apt-get upgrade

That’s it – you’re done! Your shortcuts are even updated, and any bookmarks or open tabs you might have had on the go are carried forward.
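If you want to confirm the upgrade took, ask Firefox for its version from the same terminal:

firefox --version   # should now report version 5.x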

I was pleasantly surprised at how easy this process was.

Surviving Cloud Failures

Saturday, April 23rd, 2011
Fixed
Creative Commons License photo credit: Don Fulano

Amazon is in the news today for the failure its Elastic Block Store (EBS) service suffered, resulting in loss of service and/or extreme latency for hundreds of sites, including some of its largest customers like Foursquare and reddit. AWS has been widely regarded as the most stable and overall leader of the cloud providers, so it was a great shock to many observers that it could suffer such a large failure.

I don’t find the failure surprising – what’s surprising is that it hasn’t happened before now. It underscores my message that cloud computing is not magical but is in fact an abstraction over very real hardware. There are bound to be flaws and issues just as with “real” hosting options; the difference is that the end customer has less control over the hardware, hosting and networking environment.

Not every business can afford the overhead of a large dedicated solution, so what to do?

Spread the Load
The key is redundancy. Start by spreading your content across the internet rather than relying on a single server to cough up everything your visitors need. Things like content delivery networks (CDNs) will reduce the incoming load on the server and help it stay online.

How can we tell if a website is offloading the right amount of content? Perform regular speed testing and identify problem areas using tools like YSlow.
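For a quick spot check from the command line (the URL is a placeholder), curl will tell you how long a request takes and how much it downloads:

curl -o /dev/null -s -w 'ttfb: %{time_starttransfer}s  total: %{time_total}s  bytes: %{size_download}\n' https://example.com/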

Redundancy! Eliminate Single Points of Failure
Whenever you have a single system servicing part of your application, you expose the entire application to failure.

For example, suppose you have four Apache servers and a load balancer sending equal traffic to each. If one of the Apache servers fails, the other three are able to compensate for the loss with no downtime for your visitors. But what happens if the load balancer fails? Even though all four web servers are in fine working order, your site is knocked offline.

Some systems are difficult to cluster: replication schemes in the various SQL servers are a huge drain on performance – newer solutions like MySQL Cluster or DrizzleDB aim to solve this problem, but at extra expense in terms of configuration and application design.

The key to successful redundancy is scripting your systems so that failures can be recovered from quickly and automatically. Having a hot spare in the group isn’t useful if you need to reach an administrator at 4am to activate it – by that point you’ve already lost your overseas customers for the day.
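As a toy sketch of that idea (the hostnames and the promotion command are placeholders – real failover needs much more care), a health check run from cron can promote the hot spare instead of waking anyone up:

#!/bin/bash
# if the primary stops answering, promote the standby automatically
if ! curl -sf --max-time 5 http://primary.internal/healthcheck > /dev/null; then
  ssh standby.internal 'sudo /usr/local/bin/promote-to-primary'
fi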

Twilio has an excellent summary of the engineering process that goes into creating a scalable cloud-ready infrastructure.

Avoid the Cloud? Never
Despite some public failures, cloud computing has not suffered any kind of lasting blow. Large organizations will always want their own private non-cloud hosting, and small sites will always be looking for an inexpensive VPS. The middle tier, which is serviced by the cloud, will continue to see cost savings that greatly outweigh any physical hosting option available at that level.

Because of the low server cost, cloud computing gives smart customers the freedom to build the necessary redundancy without breaking the bank. Not only does this pay off big time when catastrophic failures happen, there are longer-term benefits too: improved overall response times for end users even when the hosting is working well.