Coeus Blue Managed Web Hosting

866.847.8171

Broken USPS international shipping in Magento

Last night, when USPS updated their API, they opted to change the method names used in the API again. As a result, First Class for international no longer shows up for Magento sites using their API. To fix this and re-enable international First Class, add the text "First-Class Package International Service" to the file app/code/core/Mage/Usa/etc/config.xml in two places, around line 188 in recent versions. Both sections are comma-separated lists of method names. Once you add it there and clear your cache, go into the admin, open the config section under Shipping Methods, and in the USPS section the new shipping method will appear in the allowed-methods list. Select it, save, clear your cache again, and you should be back in business.
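As a rough sketch of the edit, it amounts to appending the new method name to each of the two existing comma-separated lists. The element name below is a placeholder, not the real tag; match whatever the two lists are called near line 188 in your version:

```xml
<!-- app/code/core/Mage/Usa/etc/config.xml
     <method_list> is a placeholder for each of the two real elements
     that hold comma-separated USPS method names -->
<method_list>Existing Method One,Existing Method Two,First-Class Package International Service</method_list>
```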

Amazon EC2 outage, not for us

While sites like Reddit and Foursquare experienced nearly a day of downtime and are still having issues, our hosting clients experienced a total of 11 minutes of downtime related to the Amazon issues. There was an additional 15 minutes of total downtime in the middle of the night while engineers worked to ensure continued stability in the midst of hardware failures. The affected site had only a single database server; no site with a pair of replicated database servers experienced any downtime at all.

Enterprise levels of uptime and performance come from the design and support, not the hardware. The system at the heart of the Amazon failures has been their network virtual disk product, EBS. The sites which experienced substantial downtime all used that storage for the main drives of their systems without RAID. This is poor design that has nothing to do with the cloud: it puts a single point of failure across your environment.

Our environment assumes every system will fail at some point and works to mitigate that as much as possible. We do utilize the EBS product, but only in the context of RAID, and we do not use it for the operating system or core services of the machine. The machines in our enterprise systems share no single points of failure, and the loss of one or more machines, or even full availability zones, does not pose major problems.

Failures of this nature are going to happen. I don't care if you host on Amazon, Rackspace, Microsoft, a traditional datacenter, or the server under your desk: failures happen. Core routers die, backhoes cut cables, systems guaranteed to always be up go down. The key is to design your environment with ways to deal with every possibility you can think of, and engineers with the experience and creativity to deal with the other million things you didn't.

Nginx for Magento Multisite

I have been receiving some questions lately about multisite configuration when using nginx. Normally a pair of variables is set using Apache's SetEnv in the .htaccess file, but nginx neither supports nor reads those directives. The method below works for both Magento Community and Magento Enterprise.

The configuration overview found at MagentoCommerce is a reasonable start for a single-site configuration but punts on the multisite question. They point out:

The “MAGE_RUN_CODE” and “MAGE_RUN_TYPE” are for multi-store installations, each DOMAIN that represents a store should have that store code instead of “default” (line #53).

but not what needs to be done with it. You could duplicate your configuration for each store code, but the easier way to handle the store code is to build a map on the host name and switch on that map in the configuration.


map $http_host $magesite {
    www.site1.com site1storecode;
    www.site2.com site2storecode;
}

server {
    <snip>
    location / {
        if ($request_uri ~* "\.(ico|css|js|gif|jpe?g|png)$") {
            access_log off;
            expires max;
        }

        try_files $uri $uri/ @magento;
        include fastcgi_params;
        fastcgi_param MAGE_RUN_TYPE store;
        fastcgi_param MAGE_RUN_CODE $magesite;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS $fastcgi_https;
        fastcgi_pass fastcgiUpstreamProvider;
    }

    location @magento {
        # set expire headers
        if ($request_uri ~* "\.(ico|css|js|gif|jpe?g|png)$") {
            access_log off;
            expires max;
        }

        include fastcgi_params;
        fastcgi_param MAGE_RUN_TYPE store;
        fastcgi_param MAGE_RUN_CODE $magesite;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_param SCRIPT_NAME /index.php;
        fastcgi_param HTTPS $fastcgi_https;
        fastcgi_pass fastcgiUpstreamProvider;
    }
}

Another common question is how to avoid duplicating configuration for secure and insecure site sections. In the above configuration, the fastcgi_param HTTPS $fastcgi_https; lines in the location blocks tell FastCGI whether the connection is secure. Using those and switching on the scheme with:

       map $scheme $fastcgi_https {
            default off;
            https on;
        }

you can combine the http and https sections. This change does require a recent version of nginx, at least newer than the version in the CentOS base repo as of this writing. In the server section, modify your listen lines to:

                listen       80;
                listen  443  default  ssl; 

and add the normal ssl_ configuration to the server block. This saves you from duplicating much of your configuration, and the technique can grant you more control over other things as well. For many of our clients we switch the fastcgi_pass parameter to split out admin functions from front-end functions. You can then raise the timeouts and resource limits for admin to the levels large jobs require without letting the front-end site overtax your servers.
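One way to sketch that admin/front-end split is with a dedicated upstream and a location match for admin URLs. The upstream names, ports, and the admin path pattern below are all illustrative, not a drop-in configuration:

```nginx
# Two php-fpm pools: the admin pool is tuned with longer limits
upstream frontend_php { server 127.0.0.1:9000; }
upstream admin_php    { server 127.0.0.1:9001; }

server {
    # <snip> the rest of the server block from above

    # Admin traffic goes to the pool whose limits allow long-running jobs
    location ~ ^/(index\.php/)?admin {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_param SCRIPT_NAME /index.php;
        fastcgi_read_timeout 600s;
        fastcgi_pass admin_php;
    }
}
```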

Dynamic Infrastructure Scaling – AKA Auto-Scaling

Cloud computing represents a major leap forward in enterprise application hosting. In the past, hosting a profitable e-commerce web site required substantial capital investment in hardware that sat idle more often than it was utilized serving content to customers. We developed something called dynamic infrastructure scaling to eliminate that waste and expense. As traffic to a site increases and additional capacity is required, more servers are brought online to meet the demand. As load subsides, servers are ramped back down. We call it auto-scaling and believe it can save online retailers money without affecting customer experience. Why? Because the site gets the capacity it needs for busy days and record-breaking holiday sales, and an inexpensive portion of that capacity at off-peak times.


How does auto-scaling work?


Once we understand the particular needs of an online business, we analyze the information gathered by our advanced monitoring suite alongside run-time automatic server configuration. With that in hand, our engineers design a set of rules that control how and when servers are added to or removed from the farm. Because the servers are not physical hardware, but a configuration-and-image combination designed for an individual business, additional servers can be added in minutes.

Auto-scaling that fits your business


The big providers have auto-scaling features, but they differ from our Dynamic Infrastructure Scaling. Their solutions require a lot of configuration: instructions that tell the service how to start a server, rules telling it which metrics to monitor, and actions to take when a rule crosses an established threshold. We've found three problems with this method.

First, there is the complexity of the configuration itself; subtle mistakes can too often result in a configuration that doesn't work at all, or, even worse, launches more or fewer instances than you intend. At Coeus Blue, we manage the complexity of the configuration for our clients.

Second, what if your business logic doesn't lend itself to the rules the interface allows? For example, if a back-end system in your office is experiencing problems, adding more web servers that place load on that system will only make things worse. By working with many clients over the years, we have developed tools with flexibility built in to work with the business rather than against it. We design logic based on our clients' business specs.

The third problem occurs when load subsides and servers ramp down. Determining when traffic levels can safely be handled by fewer servers requires historical data, not monitoring alone. Shutting down servers without affecting customers who are trying to check out requires more than just turning the instance off. We designed our auto-scaling solution to address both concerns. Our shutdown logic evaluates the site infrastructure's current metrics as well as trend data to make recommendations. That trend-based logic, combined with advanced load balancing techniques, lets us determine the optimal time to remove a server and prevent any impact to site customers.
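As an illustration of trend-aware ramp-down logic of this kind, here is a simplified sketch. The function name, the 30% headroom figure, and the thresholds are all hypothetical, not our production rules:

```python
def should_scale_down(current_load, history, capacity_per_server, servers,
                      min_servers=2):
    """Decide whether one server can safely be removed from the farm.

    current_load: requests/sec right now
    history: recent requests/sec samples, oldest first (the trend data)
    All thresholds here are illustrative.
    """
    if servers <= min_servers:
        return False  # never shrink below the redundancy floor

    # Capacity after removing one server, keeping 30% headroom free.
    reduced_capacity = capacity_per_server * (servers - 1) * 0.7

    if current_load > reduced_capacity:
        return False  # current traffic alone still needs this server

    # Trend check: do not ramp down while load is climbing.
    if len(history) >= 2 and history[-1] > history[0]:
        return False

    # Only scale down if the whole recent window fits on fewer servers.
    return max(history, default=current_load) <= reduced_capacity
```

In practice the decision would also coordinate with the load balancer to drain active sessions before the instance is stopped, so in-flight checkouts are unaffected.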

Do we fit your business? Let’s find out.



As a managed service hosting provider we give our clients a turnkey solution that keeps their web sites working. Yes, you can find a vendor with the tools that allow you to build cloud instances, add and remove servers, even automate some of these tasks. The difference? We don’t believe in just providing busy business owners with a set of tools.

We provide a fully functional, fully supported, dynamically scaling web hosting solution. There’s no need for your staff to learn the tools. There is no reason for you to suffer through the headaches and ‘gotchas’. Patching, backups, security? We do it. Launching a new product with a big marketing push? No problem. Unlike other providers, there’s no complicated documentation to read, no counter-intuitive interface to learn. When you need something changed you call, speak with a skilled engineer who takes care of it for you, and quickly get back to running your business.

There’s a big difference between tech support and engineering. We understand that when you need site help, you need an engineer; a call to tech support is not enough. If this sounds like the kind of help your business can use, call me at 866.847.8171.

Very fast Magento caching with APC and memcached

There is no single piece of documentation covering Magento's caching options in local.xml, and many of the posts that do exist today have not been updated in some time. We have had a great deal of success using the following in a no-disk-cache configuration.

<config>
    <global>
        <SNIP OUT INSTALL DETAILS ETC>
        <cache>
            <backend>Apc</backend>
            <slow_backend>Memcached</slow_backend>
            <fast_backend>Apc</fast_backend>
            <slow_backend_options>
                <servers><!-- The code supports using more than 1 server but it seems to hurt performance -->
                    <server>
                        <host><![CDATA[127.0.0.1]]></host>
                        <port><![CDATA[11211]]></port>
                        <persistent><![CDATA[1]]></persistent>
                    </server>
                </servers>
                <compression><![CDATA[]]></compression>
                <cache_dir><![CDATA[]]></cache_dir>
                <hashed_directory_level><![CDATA[]]></hashed_directory_level>
                <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
                <file_name_prefix><![CDATA[]]></file_name_prefix>
            </slow_backend_options>

            <memcached>
                <servers>
                    <server>
                        <host><![CDATA[127.0.0.1]]></host>
                        <port><![CDATA[11211]]></port>
                        <persistent><![CDATA[1]]></persistent>
                    </server>
                </servers>
                <compression><![CDATA[]]></compression>
                <cache_dir><![CDATA[]]></cache_dir>
                <hashed_directory_level><![CDATA[]]></hashed_directory_level>
                <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
                <file_name_prefix><![CDATA[]]></file_name_prefix>
            </memcached>
        </cache>
        <SNIP OUT RESOURCES SECTION>
    </global>
    <SNIP OUT ADMIN CONFIG SECTION>
</config>

A busy site will use a few hundred megabytes of APC cache and less than 100 MB of memcached cache. At very high traffic levels memcached cannot keep up as the fast backend, which is why this two-tier method is used.

This has been tested fairly heavily using nginx and php-fpm (both 5.2 with the patch and 5.3 native). You will need to use shared memory in APC to see the most gain.
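As a rough illustration, APC's shared-memory size is set in php.ini (or an apc.ini include); the values below are only a starting point and should be tuned to your measured cache footprint:

```ini
; Illustrative APC settings; size shm to your actual cache usage
apc.enabled = 1
apc.shm_size = 256M   ; older APC builds take this value in MB without the suffix
apc.stat = 1
```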

If you have any questions or problems please feel free to post them.
