Always Get Better

Archive for the ‘Hardware’ Category

5 Ways to Keep Your Web Server Secure

Tuesday, September 19th, 2017

Equifax recently revealed that they were hacked and exposed the personal information of over 143 million people. You may not be sitting on such identity-theft rich material, but keeping your server secure is absolutely a must for any business. Fortunately it really isn’t very hard to achieve and maintain a decent level of protection.

1. Hire a Competent Developer

Cloud computing makes web servers super accessible to everyone; unfortunately, that means it’s really easy to get a website running and develop a false sense of security, thinking everything is right when it’s not. A lot of developers claim they can do it all for you when all they really know is how to install Apache – not how to lock it down.

A good giveaway: if your developer can’t get by without using a GUI tool or web interface to set up and administer the website, they don’t have any business on your server. These are great tools for setting up a local development environment, but they take shortcuts to make things more convenient for the user – not to improve security.

So if your guy doesn’t have deep command-line knowledge and the ability to work without these tools, fine. He’s still a great developer, and he can build you a secure website following all the security best practices. He just doesn’t have any business touching your web server; have someone else set up the “live” environment.

2. Lock Down Ports

When you’re setting up a web server, lots of supporting programs get started that don’t directly affect your website. Things like email, ICMP, DNS, time and DHCP are important to keep the system running but have no business leaving the local network. Anything you don’t absolutely need to reach from outside should be locked down: no access except from inside the server.

Web servers like Apache and nginx are specifically designed to resist being used as attack vectors to control your system, and even they get compromised routinely. MySQL has no chance at all – don’t open it to the outside world… ever.
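A minimal sketch of that lockdown, assuming an Ubuntu-style box with ufw available (swap in iptables or your provider’s firewall if that’s what you use), looks something like this:

# deny everything inbound by default, allow everything outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# only expose what the outside world genuinely needs
sudo ufw allow 22/tcp    # SSH - consider restricting this to your own IP
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS

sudo ufw enable
sudo ufw status verbose

Everything else – MySQL, mail, DNS – stays reachable only from localhost or your private network.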

3. Separate Database Servers

It’s super common to find database servers improperly configured so they become a major security hole. On MySQL, most people don’t know any better than to add users with “GRANT ALL PRIVILEGES ON x.* TO y@z;”. Since the SQL server itself is often running with elevated system access, it only takes a single unsecured query to let people create files wherever they want on your server.

The easiest way to prevent this from affecting your websites is to move SQL to another server. Not only do you get the bonus of having a machine configured exclusively for web work and another exclusively for DB work, but bad things happening on one won’t mean bad things happening on the other.
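Wherever the database ends up living, scope the grants properly too. A sketch – the database name, user, password and web server IP below are made-up placeholders:

-- create an application user that can only connect from the web server
CREATE USER 'myapp'@'10.0.0.5' IDENTIFIED BY 'a-long-random-password';

-- grant only the day-to-day privileges the application needs, on its own schema
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp_db.* TO 'myapp'@'10.0.0.5';

Pair that with a bind-address setting in my.cnf so MySQL only listens on the private interface, never on a public IP.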

4. Keep Up With Software Patches

If you want to keep your server secure, apply updates promptly whenever vendors release patches for their software.

In a world full of zero-day exploits, any software with an outstanding security update is definitely a risk – the hole it fixes may already be part of a malware package being sold in some dark corner of the Internet.

Don’t be a victim: keep your server secure by keeping it up to date.
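On Debian or Ubuntu systems, one low-effort way to stay current is to apply what’s outstanding and then let the stock unattended-upgrades package handle security patches automatically – a sketch, assuming that package fits your needs:

# apply everything currently outstanding
sudo apt-get update && sudo apt-get upgrade

# install automatic security updates going forward
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades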

5. Enforce User Permissions

One of the most compelling reasons to use Linux traditionally has been the strong separation between services using user permissions. Even Windows Server supports it these days.

If you’re using PHP, don’t use Apache’s mod_php – use php-fpm. Set up your pools to give each website its own user. Again, it’s all about compartmentalization. Bad things will happen, and a good sysadmin makes sure the damage done by those bad things gets contained to a small area.
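A rough sketch of what that looks like in a php-fpm pool file – the site name, user and socket path are placeholders, and the file location varies by distribution:

; /etc/php-fpm.d/mysite.conf (path varies by distribution)
[mysite]
user = mysite
group = mysite
listen = /var/run/php-fpm-mysite.sock
listen.owner = nginx
listen.group = nginx
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

If the “mysite” pool is ever compromised, the attacker is stuck with that one user’s files instead of every site on the box.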

BONUS #6: Keep Your Server Secure by Never Logging In

Never allow yourself or anyone else to log into the web server.

There’s no reason for it. If you need to deploy a website update, use a CI tool like Jenkins. If you get rooted, trash the server and launch a new one.

People make mistakes, people forget about config changes, people can’t be trusted. You don’t need to worry about password schemes, RSA keys, iptables goofs or any of a million other common problems if you just acknowledge the human risk factor and remove it from the equation.

When we move to programmatically provisioned servers, bringing new people on board gets easier, verifying and testing new software versions gets faster, recovering from a data failure becomes repeatable, and our entire network becomes more secure. Don’t make humans do the work of computers – automate all the things!
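“Automating all the things” can start as small as a cloud-init user-data file handed to your provider when the server is launched – the packages and commands below are purely illustrative placeholders:

#cloud-config
package_update: true
packages:
  - nginx
  - php-fpm
runcmd:
  # pull the site from version control and start serving it
  - git clone https://example.com/mysite.git /var/www/mysite
  - systemctl restart nginx

The server builds itself the same way every time, and nobody ever needs to SSH in to poke at it.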

Pint-Sized Mobile Devices

Thursday, April 28th, 2011

Lately I’m fascinated by the Dell Streak and am thinking it would be fun to get one. But really, what is the use case? Do tablets have a place in everyday life?

The tablet computer is an interesting device because it fills that no-man’s land between the phone and the computer, and has so much potential as a conveyor of meta experience to users of the other platforms.

The problem so far is no manufacturer has provided compelling reasons why we should be buying tablets and adding them into our routines. It’s almost as if everyone is blindly following Apple’s lead regardless of whether doing so makes sense. Now we have a flood of iPad-like devices, where the original iPad is, let’s be honest, interesting but not really useful.

My prediction: There is going to be a killer app for tablets and it is going to be immediately obvious to everyone why we need to use this medium. In the meantime, developers with mobile programming skill are going to be writing their own ticket – so learn how to build apps for Windows Phone, Android, iOS, or (to a lesser extent) WebOS.

Protect Your SSH Server with RSA Keys

Tuesday, April 26th, 2011

If it’s possible to log into your web server over SSH with a username and password, you may not be properly secured. Even if root access is impossible, a username and password combination can be broken with brute force; once your server has been compromised it’s only a matter of time before a rootkit installation attempt is successful.

Even password-less RSA keys provide better protection than a password because they are long cryptographic keys that cannot be guessed from a dictionary. Although a brute-force attack against an RSA key is still possible, it requires a much more sophisticated attacker and many more attempts. As encryption technologies improve, key lengths need to increase; but even a key with no passphrase attached is way more secure than a human-typed password.
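Setting this up takes a couple of minutes. A sketch – run the first two commands from your own machine, and treat the server address as a placeholder:

# generate a key pair on your workstation (add a passphrase if you like)
ssh-keygen -t rsa -b 4096 -C "you@example.com"

# copy the public key to the server
ssh-copy-id user@your-server.example.com

# then, on the server, disable password logins in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PermitRootLogin no
# ...and reload the SSH daemon
sudo service ssh reload

Make sure your key-based login works before you turn off password authentication, or you can lock yourself out.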

Backup Through Time

Friday, April 15th, 2011

No matter what I do, I never feel fully covered against a disastrous data loss. Despite paranoid backup strategies across many different kinds of media, there is always something missing. I haven’t had a hard drive failure yet, but I know it’s a matter of when, not if, it will happen.

If you haven’t had a chance to check out Apple’s Time Machine, you need to do yourself a favour and look it up right now. Time Machine is an incredibly well-put-together backup package that automatically saves snapshots of your entire hard drive. Because the HFS+ filesystem can hard-link directories as well as files, Time Machine is able to track incremental changes against your file tree so you can move forward and backward through time in the history of your computer. A single saved file that you might have lost is now accessible to you regardless of your regular backup regime.

I run my Time Machine backups from a USB hard drive. A solid state drive would probably be a sturdier choice, but my USB drive gives me conveniently small backup media and extremely fast access speeds – I’m happy with this setup and haven’t lost any data yet. Because Time Machine copies my entire system and keeps a version of my computer through time, I feel confident that if either my computer’s hard drive or my USB drive were to fail, I would not suffer any long-term data loss.

What are your backup rituals?

Cloud Computing Is Not Magical

Tuesday, April 5th, 2011

Back in 2009 I was tired of hearing the phrases “cloud computing” and “in the cloud”. These days I’m so numb to their meaninglessness that it doesn’t even faze me anymore. Somewhere along the way marketers took over the internet and ‘social media’ became a job position.

So what do I have against cloud computing? Would I rather build servers, deal with co-location, and suffer massive downtimes in order to change hardware specs? Of course not.

Let’s not lose sight of the big picture: virtualized servers are still servers. From a remote perspective the management is all the same and from a hardware perspective you still need to be responsible for your data in the event of a catastrophic failure.

While I am a huge proponent of “cloud” providers like Rackspace (heck I host all of my web sites on Cloud Server instances), let’s call a spade a spade: there is nothing magical about servers in the cloud, they are just virtualized instances running on a massively powerful hardware architecture.

Why go with virtualization over a dedicated box? Virtual servers are cheap – I don’t need to incur the startup costs that I would with a dedicated server. For a small business this is a huge deal. Larger businesses with intense data needs will always get the most security from a dedicated solution, but for everything from tiny sites to very large applications the virtualized route is the ticket. Add more servers, remove them, reconfigure: you don’t get that kind of flexibility from traditional server hosting.

Long live cloud computing; but the name has to go. Did the term come from network diagrams where the Internet was represented as a cloud? I don’t think it’s a particularly clever analogy to consider your business assets living as disembodied entities “somewhere” in the networking cloud.

We’re fighting a losing battle if we believe we’re going to get the marketers to back off the internet now. But on the tech side let’s keep calling it what it is and try not to let the marketing buzz cloud our opinion of the technologies we use.

Give Apache a Break with nginx

Saturday, September 25th, 2010

One of the things I’ve learned about Apache is that, as good as it is, it suffers from its monolithic “do-everything” nature. The modules and tuning required for effective operation just don’t fit into a lean, quick package. That said, I find it beats the alternatives hands-down when it comes to running web applications of any complexity.

One very simple and effective way to improve Apache’s performance is to off-load static content to another server. Since Apache spawns a new process (and the memory allocation that goes with it) for every request, it is pretty wasteful to serve your images, CSS, JavaScript and similar files this way.

For larger applications, we could run a second web server that makes use of a single-threaded polling process like lighttpd, or use a content delivery network to move our content physically closer to our users for even more speed.

For smaller applications and organizations, we can use nginx as a proxy to serve static content to our visitors. This can be a separate physical web server, or it can be a service running on the same server as Apache. I use the same-server proxy approach at alwaysgetbetter.com, and it has made a significant difference in my server load and memory usage.

To start, change Apache’s settings so it listens on a separate port rather than the default port 80 – port 8080 in the example below.
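On a stock Apache install that usually means adjusting the Listen directive and the site’s VirtualHost – the file paths vary by distribution, so treat this as a sketch:

# e.g. /etc/apache2/ports.conf
Listen 127.0.0.1:8080

# in the site's VirtualHost definition
<VirtualHost 127.0.0.1:8080>
    ServerName mysite.com
    DocumentRoot /var/www/html
</VirtualHost>

Binding to 127.0.0.1 means Apache is only reachable through the nginx proxy, never directly from outside.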

Next, set up nginx to listen on the default HTTP port 80. We will let nginx decide whether each request should be served directly from the hard drive, or whether it should pass through to Apache.

The config file for nginx looks like this:

server {
    listen 80;
    server_name mysite.com www.mysite.com;

    access_log /var/log/nginx/website-access.log;
    error_log /var/log/nginx/website-error.log;

    # serve static files directly from disk
    location ~* \.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|pdf|ppt|txt|tar|wav|bmp|rtf|js|mp3|avi|mov)$ {
        root /var/www/html;
        expires 30d;
    }

    # pass requests for dynamic content to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 128m;
        client_body_buffer_size 256k;
        proxy_connect_timeout 60;
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        proxy_buffer_size 4k;
        proxy_buffers 32 256k;
        proxy_busy_buffers_size 512k;
        proxy_temp_file_write_size 512k;
    }
}

It is also possible to serve PHP and other dynamic content using nginx, but for our purposes it makes a lot more sense to use Apache for scripting and nginx as the web-facing proxy.

Using a Cell Phone as Backup Internet

Monday, May 24th, 2010

Since we live in the country and rely on line-of-sight Internet for our connectivity, I’ve been increasingly frustrated with service quality and uptime problems. There are a lot of reasons I want to move to a denser population area, but access to a proper Internet connection is high on my list.

My phone has turned out to be a decent alternative; using instructions I found online I was able to re-purpose my Palm Pre as a WiFi router. It’s still not broadband but it gives me a way to check my email when my Xplornet fixed wireless (often) fails.

Although Bell Canada supports tethering with their smartphone plans, they don’t go out of their way to make it obvious how to do it. My Tether turned out to be worth the cost, even though there is a free version you can use if you don’t mind playing with the settings.