Always Get Better

Archive for the ‘Operating Systems’ Category

5 Ways to Keep Your Web Server Secure

Tuesday, September 19th, 2017

Equifax recently revealed that it was hacked, exposing the personal information of over 143 million people. You may not be sitting on such identity-theft-rich material, but keeping your server secure is an absolute must for any business. Fortunately it really isn’t very hard to achieve and maintain a decent level of protection.

1. Hire a Competent Developer

Cloud computing makes web servers super accessible to everyone; unfortunately that means it’s really easy to get a website running and develop a false sense of security, thinking everything is right when it’s not. A lot of developers claim they can do it all for you when all they really know is how to install Apache – not how to lock it down.

A good giveaway: if your developer can’t get by without using a GUI tool or web interface to set up and administer the website, they have no business on your server. These are great tools for setting up a local development environment, but they take shortcuts to make things more convenient for the user – not to improve security.

So if your guy doesn’t have deep command-line knowledge and the ability to work without these tools, fine. He’s still a great developer, and he can build you a secure website following all the security best practices. He just doesn’t have any business touching your web server; have someone else set up the “live” environment.

2. Lock Down Ports

When you’re setting up a web server, lots of supporting programs get started that don’t directly affect your website. Things like email, ICMP, DNS, time (NTP) and DHCP are important for keeping the system running, but they have no business leaving the local network. Everything you don’t absolutely need to reach from outside should be locked down: no access except from inside the server.

Web servers like Apache and nginx are specifically designed to stop people from using them as attack vectors to control your system, and even they get compromised routinely. MySQL has no chance at all – don’t open it to the outside world… ever.
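
On Ubuntu, a sketch of that policy using ufw might look like this – assuming SSH and the web server are the only things the outside world should ever reach:

# deny everything inbound by default, allow all outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# the only ports the outside world needs
sudo ufw allow 22/tcp    # SSH – consider limiting this to your own IP
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS

sudo ufw enable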

3. Separate Database Servers

It’s super common to find database servers improperly configured in ways that turn them into a major security hole. On MySQL, most people don’t know any better way to add users than “GRANT ALL PRIVILEGES ON x.* TO y@z;”. Since the SQL server itself often runs with elevated system access, it only takes a single unsecured query to let an attacker create files wherever they want on your server.
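
A better habit is one narrowly-scoped user per application. Here’s a sketch using hypothetical names – a database called shop, an app user shop_app, and a web server at 10.0.0.4:

mysql -u root -p <<'SQL'
-- one user per application, locked to the web server's address,
-- with only the privileges the app actually needs
CREATE USER 'shop_app'@'10.0.0.4' IDENTIFIED BY 'use-a-long-random-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON shop.* TO 'shop_app'@'10.0.0.4';
SQL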

The easiest way to keep a database compromise from affecting your websites is to move SQL to another server. Not only do you get the bonus of having one machine configured exclusively for web work and another exclusively for DB work, but bad things happening on one won’t mean bad things happening on the other.
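
Whether or not a second machine is in the budget, make sure MySQL only listens on the private network. A minimal my.cnf sketch, assuming the database server’s private address is 10.0.0.5:

# /etc/mysql/my.cnf (snippet) – bind to the private interface only,
# never to a public address
[mysqld]
bind-address = 10.0.0.5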

4. Keep Up With Software Patches

If you want to keep your server secure, keep it updated: apply vendors’ software patches as soon as they are released.

In a world full of zero-day exploits, any software with an outstanding security update is a known risk – the vulnerability it fixes may already be part of a malware package being sold in some dark corner of the Internet.

Don’t be a victim, keep your server secure by keeping it up to date.
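
On Ubuntu you don’t even have to remember to do this by hand – the stock unattended-upgrades package will install security updates automatically:

# install and enable automatic security updates
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades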

5. Enforce User Permissions

One of the most compelling reasons to use Linux has traditionally been the strong separation between services that user permissions provide. Even Windows Server supports this these days.

If you’re using PHP, don’t use Apache’s mod_php – use php-fpm, and set up your pools to give each website its own user. Again, it’s all about compartmentalization: bad things will happen, and a good sysadmin makes sure the damage those bad things do is contained to a small area.
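
As a sketch, a per-site pool file might look like this – assuming a site called example with a matching system user (paths vary by distribution and PHP version):

; /etc/php/fpm/pool.d/example.conf
[example]
; each site runs as its own user so a compromise stays contained
user = example
group = example

; private socket for this site's web server vhost
listen = /var/run/php-fpm-example.sock
listen.owner = www-data
listen.group = www-data

; modest process limits for a small site
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4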

BONUS #6: Keep Your Server Secure by Never Logging In

Never allow yourself or anyone else to log into the web server.

There’s no reason for it. If you need to deploy a website update, use a CI tool like Jenkins. If you get rooted, trash the server and launch a new one.

People make mistakes, people forget about config changes, people can’t be trusted. You don’t need to worry about password schemes, RSA keys, iptables goofs or a million other common problems if you just acknowledge the human risk factor and remove it from the equation.

When we move to programmed servers, we make it easier to bring new people on board, faster to verify and test new software versions, more repeatable in the case of a data failure, and more secure across our entire network. Don’t make humans do the work of computers, automate all the things!

Speed up Deployments in Azure Web Apps

Saturday, March 4th, 2017

Is it possible to speed up deployments in Azure Web Apps without resorting to temporary fixes?

My biggest pain point with Azure is the length of time it takes to push changes. My e-commerce project has a simple architecture with a smallish number of dependencies but takes over 35 minutes to complete a build. Although there’s no downtime during the process, you don’t want to sit around waiting. Fortunately it’s easy to speed up deployments in Azure.

Problem: Starting from a Bad Place

Here’s what I’m dealing with: 35 minutes to push each commit.

35 minutes – yikes! Time to speed up deployments.

Keep in mind – this is a simple website: .NET Core MVC6 with some NuGet dependencies. On a local system it takes about 3 minutes to go from a fresh clone to a running app.

When you factor in configuration differences and the things you need to check off when setting up a live environment, that 35 minutes makes this project pretty unwieldy.

The problem stems from the way Azure deploys apps to the service fabric. Everything in your app is shared via mapped network folders, which is why things like session state and file uploads are available to every instance. That’s great if you have a complicated stateful app, but we’re building totally RESTful services (right?) so we don’t need that feature.

Building from a network drive is a problem for web apps because our npm and bower repositories generate tons of tiny files that absolutely suck to synchronize across a network drive.

Solution: Speed Up Deployments by Skipping the Network

Instead of waiting for the same files to synchronize across the network, build from the Git repository on the local machine. This skips the network latency and does all the deployment work locally.

Using local cache for Git deployments

In your Application Settings, add a key for SCM_REPOSITORY_PATH with the value D:\local\repository.
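
That setting can also be scripted – a sketch using the cross-platform Azure CLI, with placeholder resource group and app names:

az webapp config appsettings set \
  --resource-group MyResourceGroup \
  --name my-web-app \
  --settings SCM_REPOSITORY_PATH="D:\local\repository"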

Now my deployments are 10 minutes – a big improvement.

Store the Right Things

10 minutes is a big improvement, but I know we can do better. Looking through my project I found 100MB of content images that should really be served from a CDN. After moving them out, my repo is only 8MB and the deployment time is slashed again.

5 minutes – now that is a respectable build time.

 

Malware in the iTunes Store – 500M Users Affected

Thursday, November 12th, 2015

Malware found in Xcode (XcodeGhost)

People rely on their app stores to provide safe, high-quality software for their phones and tablets. With iTunes in particular, we trust that Apple’s strict review standards will raise the bar for quality and protect us from the shady apps and trojans we might find on file-sharing services and third-party app stores. What happens when that trust is misplaced?

A bunch of popular apps on Apple’s App Store shipped with malware in September. “Popular”, of course, means apps used by more than 500 million iPhone users, primarily in China where the malware originated. Some apps, like WeChat, are also used in the west, but by and large the affected programs have names written in hanzi that most of my readers would not be able to pronounce, let alone install on their devices.

What does the malware do?

First off, what can you expect to happen if your phone gets infected?

This post from MacRumors summarizes things pretty well, but the highlights are:

  • Sends device information, phone number and device ID to C2 (command-and-control) servers
  • Shows fake system alerts that phish for passwords
  • Reads the clipboard (including stealing secrets from your password manager)
  • Hijacks URL opening

We’re not talking about phone-takeover threat levels, which is a testament to the sandboxed environment apps run in. These functions are available to pretty much any developer — you’re not supposed to use the device ID but you can get it if you want it.

How Did Malware Get On the App Store?

The first thing most people are asking is: how did this happen? With Apple’s notoriously stringent app review process, how did apps carrying this malware get through?

As it turns out, developers chose to download copies of Xcode from file-sharing networks rather than directly from Apple. Xcode is a huge download (larger than 3GB), so pulling it from Apple’s servers can take a long time. To save time, these developers decided to trust an unsigned copy rather than the official one.
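
If you’re a developer, you can ask Gatekeeper whether your copy of Xcode is genuinely signed by Apple – essentially the check Apple recommended after the incident, assuming Xcode lives in /Applications:

# verify Xcode's signature; a genuine copy reports "accepted"
# with an Apple source, a tampered one does not
spctl --assess --verbose /Applications/Xcode.app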

How Did Malware Get Through the Walled Garden’s Gates?

But what about the review process? The UUID-gathering behaviour alone should have been enough to raise red flags, right?

Not so, according to people who know.

Although the App Store performs a cursory check of submitted apps for private API usage, the XcodeGhost code was added as a dynamically loaded bundle inside the compiled app bundle — in other words, not subject to the code scanning… pretty sneaky.

How to Protect Yourself

How can you protect yourself when the store you rely on for your apps is poisoned?

There isn’t much you can do, unfortunately. Short of inspecting every single network connection, there are not a lot of ways to detect this kind of activity on your device.

Installing app updates as they become available is the best thing you can do for your phone. Although an attack on this scale was made possible by the single-provider model of the App Store, it’s also true that having the App Store as the centre of the ecosystem is the best way to quickly push out fixes for problematic apps as they are discovered.

Setting up WordPress with nginx and FastCGI

Monday, January 30th, 2012

All web site owners should feel a burning need for speed. Studies have shown that visitors who wait more than 2 or 3 seconds for content to appear are likely to leave without allowing the page to fully load. This is particularly bad if you’re trying to run a web site that relies on visitors to generate some kind of income – content is king, but speed keeps the king’s coffers flowing.

If your website isn’t the fastest it can be, you can take some comfort in the fact that the majority of the “top” web sites also suffer from page load times pushing up into the 10 second range (have you BEEN to Amazon lately?). But do take the time to download YSlow today and use its suggestions to start making radical improvements.

I’ve been very interested in web server performance because the server is the first leg of the web page’s journey to the end user. The speed of execution at the server level can make or break the user’s experience by controlling the amount of ‘lag time’ between the web page request and visible activity in the web browser. We want our server to send page data as quickly as possible so the browser can begin rendering it and downloading supporting files.

Not long ago, I described my web stack and explained why I moved away from the “safe” Apache server solution in favour of nginx. Since nginx doesn’t have a PHP module, I had to use PHP’s FastCGI server (PHP-FPM) with nginx as a reverse proxy. Additionally, I used memcached to store sessions rather than writing them to disk.

Here are the configuration steps I took to realize this stack:

1. Memcached Sessions
Using memcached for sessions gives me slightly better performance on my Rackspace VM because reading and writing in memory is hugely faster than reading and writing to a virtualized disk. I went into a lot more detail about this last April when I wrote about how to use memcached as a session handler in PHP.
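
The php.ini side of that setup is only two lines – a sketch assuming the pecl memcache extension and a memcached daemon on the default port:

; php.ini – keep sessions in memcached instead of on the virtualized disk
session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211"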

2. PHP FPM
The newest Ubuntu distributions have a php5-fpm package that installs PHP5 FastCGI and an init.d script for it. Once installed, you can tweak your php.ini settings to suit, depending on your system’s configuration. (Maybe we can get into this another time.)
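
Installation is a one-liner, and the bundled init.d script gives you the usual service controls:

# install PHP5 FastCGI and its init script
sudo apt-get install php5-fpm

# restart after changing php.ini or pool settings
sudo service php5-fpm restart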

3. Nginx
Once PHP-FPM was installed, I created a site entry that passes PHP requests forward to the FastCGI server while serving other files directly. Since the majority of my static content (CSS, JavaScript, images) has already been moved to a content delivery network, nginx has very little actual work to do.


server {
    listen 80;
    server_name sitename.com www.sitename.com;

    access_log /var/log/nginx/sitename-access.log;
    error_log /var/log/nginx/sitename-error.log;

    # serve static files
    location / {
        root /www/sitename.com/html;
        index index.php index.html index.htm;

        # serve static files that exist without
        # running the other rewrite tests
        if (-f $request_filename) {
            expires 30d;
            break;
        }

        # send all non-existing file or directory requests to index.php
        if (!-e $request_filename) {
            rewrite ^(.+)$ /index.php?q=$1 last;
        }
    }

    # hand PHP requests to the PHP-FPM FastCGI server
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /www/sitename.com/html$fastcgi_script_name;
        include fastcgi_params;
    }
}

The fastcgi_param setting controls which script is executed, based on the root path of the site being accessed. All of the request parameters are passed through to PHP, and once the configuration was up and running I didn’t miss Apache one little bit.

Improvements
My next step will be to put a Varnish server in front of nginx. Since the majority of my site traffic comes from search engine results – visitors who aren’t registered with the site and don’t need freshly-generated content – Varnish can step in and serve a fully cached version of my pages from memory far faster than FastCGI can render the WordPress code. I’ll experiment with this setup in the coming months and post my results.

HP Releases Enyo 2.0

Wednesday, January 25th, 2012

Now that WebOS is being made open source, HP has released a new version of the Enyo JavaScript framework. Whereas the first version of the framework only supported WebKit-based environments (like the HP TouchPad, Safari or Chrome), the new version adds support for Firefox and IE9 as well. Developers who created apps with the old framework will have to wait a little longer before all of the widgets and controls from Enyo 1.0 are ported over.

What does this mean for app developers? Now that Enyo is open source, applications built on the platform will run on Android and iOS. But it’s not a disruptive technology – both Android and iOS have supported HTML5 applications for quite a while, and HP will be competing against mature frameworks like jQuery Mobile.

As a WebOS enthusiast I am definitely going to put some time into continuing my explorations of Enyo, but it’s getting harder and harder to justify the investment. My Pre is getting pretty old at this point, and hardware manufacturers have yet to express interest in making new devices to take advantage of WebOS. If I end up switching to Android with my next hardware purchase, it’s going to shift my priorities away from Enyo and its brethren.

How to Upgrade Firefox using Ubuntu

Saturday, August 6th, 2011
Mochila Firefox
Creative Commons License photo credit: jmerelo

So I got tired of using Firefox 3.6 on my Ubuntu machine and decided to upgrade to the newest version (5.0). It’s understandable that the package maintainers responsible for Ubuntu don’t put bleeding-edge releases in the distribution, given the possibility of introducing unstable elements into the user experience. But Firefox 4 has been out for months, and the migration to 5 is well underway.

Fortunately, it couldn’t be much easier to get the newest official release using our good friend apt-get.

In a terminal window, add the Mozilla team’s stable Firefox repository by issuing the following command:


sudo add-apt-repository ppa:mozillateam/firefox-stable

Next, perform an update to get the package listing, and upgrade to install the newest browser:


sudo apt-get update
sudo apt-get upgrade

That’s it – you’re done! Your shortcuts are even updated, and any bookmarks or open tabs you might have had on the go are carried forward.

I was pleasantly surprised at how easy this process was.

MySQL Pulled From Mac OS X Lion

Friday, August 5th, 2011

Apple-based LAMP developers be warned: the new version of OS X does not include MySQL, which was formerly part of the developer tools shipped with the operating system. In its place, look for deliciously Oracle-free PostgreSQL. Of course, developers can and will continue to download and install MySQL themselves, but the out-of-box experience going forward will be with PostgreSQL.

Although MySQL is still an extremely popular database, Oracle’s presence in the MySQL world has put a chill over business users considering the product as the backbone of their data solutions. Other databases with similar purposes exist, but none have the deep community boasted by PostgreSQL.