Always Get Better

Posts Tagged ‘networking’

Speed Up Page Load By Tricking the Browser

Saturday, November 25th, 2017

Netiquette is built into web browsers. When a page is downloaded, the browser will only fetch a limited number of files at a time from any one hostname – by default, two in older browsers and only a few more in current ones. So if there are 12 CSS and JS files, you’ll only get a couple at a time until they’ve all loaded.

The good news is you can trick browsers into doing more at a time.

Enter subdomains.

Offload your CSS to css.yoursite.com and your JavaScript files to js.yoursite.com. Because the limit applies per hostname, your website will now load six files at a time (two CSS files, two JavaScript files, and two HTML/image files from your main site).

It doesn’t matter that each subdomain points to the same website. Web browsers count connections by hostname and nothing more.
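As a rough sketch of how this might look in practice, here is a small Python helper that routes static asset paths to those subdomains when pages are rendered. The css.yoursite.com and js.yoursite.com hostnames come from the example above; the function name and the sample file list are purely illustrative.

```python
from urllib.parse import urljoin

# Map asset types to the subdomains suggested above. Both subdomains
# can point at the same server; the browser only cares that the
# hostnames differ.
ASSET_HOSTS = {
    ".css": "https://css.yoursite.com/",
    ".js": "https://js.yoursite.com/",
}

def shard_asset_url(path: str) -> str:
    """Return an absolute URL for a static asset, routed through the
    subdomain that matches its file extension."""
    for extension, host in ASSET_HOSTS.items():
        if path.endswith(extension):
            return urljoin(host, path.lstrip("/"))
    # Images, HTML and everything else stay on the main hostname.
    return urljoin("https://www.yoursite.com/", path.lstrip("/"))

if __name__ == "__main__":
    for asset in ("/styles/site.css", "/scripts/app.js", "/img/logo.png"):
        print(shard_asset_url(asset))
```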

HTTP/2 is supposed to eliminate this problem, of course, but in the meantime doing this helps when you can’t crunch down your files any more.

Azure Table Storage vs Azure SQL

Sunday, April 24th, 2011

There is a lot of debate among newcomers to Azure over whether to use Azure SQL or the non-relational Table Storage service for data persistence. Azure itself does not make the differences very clear, so let’s take a closer look at each option and try to understand the implications of each.

Azure SQL
Azure SQL is very similar to SQL Server Express. It is meant as a replacement for SQL Server Standard and Enterprise, but the feature set is not there yet. Reporting services, in particular, are in community technology preview (CTP) phase and at the time of writing are not ready for prime time.

Programmatically, Azure SQL is very similar to SQL in existing web applications; any ODBC classes you already wrote will work directly on Azure with no changes.
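To illustrate how ordinary that is, here is a minimal sketch using Python’s pyodbc module against an Azure SQL database. The server name, credentials, driver string, and the Orders table are all placeholders – the same code would run unchanged against a regular SQL Server instance, which is exactly the point.

```python
import pyodbc

# Placeholder connection details -- substitute your own Azure SQL server,
# database and credentials. The driver name depends on what is installed
# locally.
CONNECTION_STRING = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=yourdatabase;"
    "UID=youruser;"
    "PWD=yourpassword;"
)

def recent_orders():
    """Run an ordinary T-SQL query -- nothing here is Azure-specific."""
    with pyodbc.connect(CONNECTION_STRING) as connection:
        cursor = connection.cursor()
        cursor.execute(
            "SELECT TOP 10 OrderId, CreatedAt FROM Orders ORDER BY CreatedAt DESC"
        )
        return cursor.fetchall()
```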

Size and cost are the major limitations of Azure SQL in its current incarnation. The largest supported database size is 50GB, which runs for $499/month. Any database larger than 50GB would need to be split across multiple instances – this requires sharding at the application level and is not easy surgery to perform.

Table Storage
Table Storage uses key-value pairs to retrieve data stored on Azure’s disk system, similar to the way memcache and MySQL work together to serve requests quickly. Each storage account supports 100TB of data at incredibly cheap rates ($0.15/GB).

Working with Table Storage means accessing it directly from your application, in a different way than you may be accustomed to with SQL. Going all-in ties you to the Azure platform – which is probably not a problem if you’re already developing for Azure, since you’ll likely be trying to squeeze every ounce of performance out of the platform anyway.

Table Storage does not support foreign key references, joins, or most of the other SQL features we usually rely on. It is up to the programmer to compensate by creating wide, de-normalized tables and building lists in memory. If you’re already building clustered applications, this is not a new design pattern – developers typically want to cache data in this manner anyway.
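Here is a sketch of that key-value access pattern, written against the current azure-data-tables Python package purely for illustration (the SDKs available when this post was written looked different). The table name, keys, and properties are invented for the example.

```python
from azure.data.tables import TableServiceClient

# Placeholder connection string -- in practice this comes from your
# Azure storage account configuration.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=example;AccountKey=secret;"

service = TableServiceClient.from_connection_string(CONNECTION_STRING)
table = service.create_table_if_not_exists("Customers")

# Every entity is addressed by a (PartitionKey, RowKey) pair -- the "key"
# half of the key-value model. The remaining properties form one wide,
# de-normalized row, since there are no joins to lean on.
table.upsert_entity({
    "PartitionKey": "canada",
    "RowKey": "customer-1001",
    "Name": "Alice Example",
    "City": "Ottawa",
    "LifetimeValue": 1234.56,
})

# Point lookups by key are the fast path.
entity = table.get_entity(partition_key="canada", row_key="customer-1001")
print(entity["Name"], entity["City"])
```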

Besides the larger storage limits, Table Storage affords us automatic failover. Microsoft’s SLA guarantees we will always be able to access the data, which is accomplished by replicating everything across at least three nodes. Compared to self-managing replication and failover with the SQL service, this is a huge advantage, as it keeps the complexity out of our hands.

Difference in Evolution
If Azure SQL seems somewhat stunted compared to Table Storage, it’s not an accident: it is a newcomer that was not planned during the original construction of the Azure platform. Microsoft carefully considered the high-availability trends in application development and found that the NoSQL approach would scale most easily on its platform. Developer outrage prompted the company to develop Azure SQL, and its service offerings are improving rapidly.

Depending on your storage needs, the best course of action may be to store the bulk of your data cheaply in Table Storage and use SQL to index everything. Searching the SQL database will be incredibly fast, and can be done in parallel with any loads against the persistent tables – everybody wins.
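A hedged sketch of what that split might look like, reusing the client types from the snippets above: the full record goes to Table Storage, while only the searchable fields go to a hypothetical DocumentIndex table in SQL. All of the names here are illustrative.

```python
from azure.data.tables import TableClient
import pyodbc

def save_document(table: TableClient, sql: pyodbc.Connection, doc: dict) -> None:
    """Store the full record cheaply in Table Storage and keep a slim,
    searchable index row in Azure SQL. `doc` is assumed to be a flat dict
    of simple values with 'id', 'category' and 'title' keys."""
    # The wide, de-normalized record lives in Table Storage.
    table.upsert_entity({
        "PartitionKey": doc["category"],
        "RowKey": doc["id"],
        **doc,
    })
    # Only the fields we need to search on go into SQL.
    cursor = sql.cursor()
    cursor.execute(
        "INSERT INTO DocumentIndex (DocId, Category, Title) VALUES (?, ?, ?)",
        doc["id"], doc["category"], doc["title"],
    )
    sql.commit()
```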

Accelerate Your Site with a Content Delivery Network

Thursday, April 21st, 2011

The best way to keep visitors engaged in your website is by delivering your experience in as little time as possible. The average visitor will only stick around for a few seconds, so it is important to get them interacting with your content fast. The first thing to check for, of course, is any bottlenecks in the initial page generation. Once the web page is being generated quickly, we can turn our attention to the next biggest culprit: the connection to your client.

Downloading files directly from a web server is costly, even if you’re using an efficient server like nginx for static files.

A content delivery network (CDN) can help speed up the process by storing your content in data centres around the world so they get served to your visitors from locations that are physically close to them. This results in fewer network hops which makes the files download faster, and reduces the overall load on your web server so you can focus on doing more interesting dynamic application stuff.

At one point, CDN services were only available to companies with deep pockets and huge websites, but these days anyone can set up and use an inexpensive service with their regular hosting provider.

Check with your host to see if they offer a content delivery solution. The two providers I use for my blog, Media Temple and Rackspace, both have excellent services. If you are running a WordPress site, check out W3 Total Cache, which provides an all-in-one package for managing your files and optimizing the overall speed of your site.

Tracking Down Website Speed Problems

Tuesday, April 19th, 2011
Photo credit: “Tortoise” by Eric Kilby (Creative Commons)

Why is my website loading so slowly?!?

There are a few common culprits behind website speed issues. When diagnosing problems, the best bet is to start at the worst performers and move up. Some suggestions, in order from slowest to fastest, are:

1. Internet Traffic
If your web page is downloading anything over the internet during each page request, stop right now. This is the most expensive operation you can perform. Example: Downloading a photo from Flickr and loading it into memory in order to determine its width and height dimensions.

2. Network Traffic
Local network traffic is generally very fast, but still involves transmitting information outside your computer. In some cases, such as web clusters with a shared session cache, the network performance cost is worth it for the overall application.

3. Database
Databases are fast, particularly when the data you need is already in the database’s own memory cache – which you generally can’t control. When paired with a key-value memory store like memcache, the majority of your database calls can come straight from memory instead (see the cache-aside sketch after this list).

4. Disk I/O
Even with the incredible access times found in today’s hard drives, reading and writing from disk is an expensive operation (which is why databases lose points here, except for their memory caching abilities). Sometimes reading from disk is still the better choice – YMMV.

5. Script Caching
Implement an opcode cache like XCache (PHP). This keeps your code in compiled bytecode form, which is much faster to execute since scripts don’t have to be re-parsed and recompiled on every request.
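As promised in point 3, here is a minimal cache-aside sketch using the pymemcache package. The key name and the expensive_lookup stand-in (which could just as easily be the Flickr image fetch from point 1) are assumptions for the example, not anything prescribed above; it also assumes a memcached instance running locally.

```python
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumes a local memcached instance

def expensive_lookup(key: str) -> dict:
    """Stand-in for a slow operation: a database query, or fetching a remote
    image just to read its dimensions (point 1 above)."""
    return {"width": 800, "height": 600}

def cached_lookup(key: str, ttl_seconds: int = 300) -> dict:
    """Classic cache-aside: try memory first, fall back to the slow path,
    then populate the cache for the next request."""
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    value = expensive_lookup(key)
    cache.set(key, json.dumps(value), expire=ttl_seconds)
    return value

print(cached_lookup("flickr:12345:dimensions"))
```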

Command and Control Social Media

Friday, April 8th, 2011

From a branding perspective, social media is about joining the conversation rather than trying to constantly send out broadcasts. Any idea worth discussing is already being talked about – if you ignore social media you aren’t just failing to get your message out into the wild; you are, in fact, allowing your voice to be absent from the existing discussion. There is a seismic shift occurring in the way brands and their respective owners are thinking about engaging their target audience. It isn’t good enough to just get the message out anymore – more attention is being placed into measuring the effectiveness of that message.

This isn’t a new idea; in fact, people have been talking about brands for as long as brands have existed. It’s well known that behind every customer who speaks up about their disappointment or service problem are ten others who simply switched to a different supplier. Figuring out what people are saying “on the street” and reacting to improve based on customer expectations isn’t a new concept; Facebook, Twitter and the blogosphere are only tools that make this much easier – they did not invent the conversation. So what’s the big deal?

The difference we are seeing today is the easy access to information that was not present before. Employees at all levels of the organization have access to the same outside data, the same instant feedback on everything being done. Often the worker at the lowest level has a better sense of customer feelings than the decision-making upper management does – this has always been true, of course, so why the sudden magnification?

I believe we are seeing a generational change in business and mindset that is putting people ahead of function. Call it Generation X (over-workers to a fault) passing the torch over to Generation Y (family-focused individuals). In the next several years we are going to see a greater focus toward grassroots-based marketing efforts and a continuation of the trend toward niche-based services alongside the dismantling of mainstream distribution channels.

How to control this? Don’t. Serve the customer and listen to their feedback. The same ingredients that have always made businesses successful are still in place; the difference is that it is now easier than ever to hear that feedback, and to hear it quickly.

Tethering the Internet, Week One

Saturday, May 29th, 2010

So I’ve been tethering my phone and using it as a backup Internet connection for just over a week now and so far I have been pretty happy with the results.

Using Xplornet as my primary connection and my cell phone tethered to my computer via USB, I’m actually able to get fairly reliable service – the computer switches to whichever connection happens to have access to the Internet.

This could work…

I see that Bell is now offering a 2Mbps modem for rural residents. I’d like to try that as an alternative to Xplornet – maybe I’ll be able to drop my contract in March and have reliable net.

The Fragile World Internet

Sunday, March 1st, 2009
Photo credit: “internet” by kalleboo (Creative Commons)

In December 2008, a “fault” in three of the undersea cables under the Mediterranean Sea denied Internet service to thousands of subscribers in Egypt, India and the Middle East.

It’s hard to explain to people how the Internet connects together, especially to users in North America who have a hard time picturing the world beyond our own shores. Communications don’t happen by magic – cables are laid all around the world by commercial interests. Since much of the worldwide traffic is routed through hubs in the United States, American users rarely notice cable-induced outages.

Across the ocean, however, the Internet is more susceptible to damage. Regional links are expensive to maintain when much of the outgoing traffic is bound for North America anyway. The result is a small number of backbone connections servicing major routes across the Atlantic and Pacific oceans.