Managed Cloud Blog

New report shows IT decision-makers using the cloud to store sensitive data despite high perceived risk

A new report by Vormetric, summarized today in Forbes, says that while close to 60% of IT decision-makers are placing sensitive data in the cloud, nearly 89% feel at least somewhat vulnerable to an insider attack, and 46% believe cloud environments are the storage location at greatest risk of a data breach.

In addition, concern over data breaches is eclipsing concern over achieving and maintaining compliance, and the greatest perceived source of data breaches is insider threats, to which 93% of organizations felt vulnerable.

44% of North American organizations have suffered a serious data breach or failed a compliance audit in the last 12 months.

The study results show that behavior with respect to data security is not in line with perceived risk, nor with actual risk, which on average trailed perceived risk only slightly.  While there are many potential reasons for this misalignment, in my experience the major causes among our prospective clients are:

  • Perceived low cost of cloud hosting does not include the actual costs of maintaining data security
  • Business pressures coupled with the ease of provisioning cloud resources take priority over security
  • Security costs are high (both from a labor and software/service licensing perspective) and not coming down significantly
  • There is confusion about what security measures are necessary to prevent actual threats

Matching the trends in the study, ENKI is seeing an increasing number of prospective clients who prioritize protection from data breaches, and meeting insurance requirements for breach protection, above regulatory compliance, especially organizations that keep sensitive data not covered by compliance mandates.

Our security offerings align closely with those the study identified as being most important to IT decision-makers:

  • 55% asked for encryption of data with enterprise key control, which ENKI provides as our inexpensive SecurVault service
  • 52% want service level commitments and liability terms for a data breach, which ENKI provides as part of our BAA or contracts
  • 48% desire explicit security descriptions and compliance commitments, which ENKI provides as part of our PrimaCare Gold Compliance services

We have assembled a suite of compliance tools and services that can be tailored to meet your exact requirements, whether those are meeting compliance mandates or defending against the particular threats you are concerned about.  Coupled with our operations services, we can also reduce the number of people in your organization touching your cloud infrastructure who may have motives to improperly access privileged data.  While we are an infrastructure cloud provider, we have realized that many of our clients need operations services (outsourced IT services) that are security-aware and can offload the challenges of meeting and maintaining security requirements from your team.  Overall, we feel confident that we can provide a secure cloud solution for your application hosting needs and work with your team to achieve your overall security goals.

Please contact us if this approach sounds interesting to you!


How ENKI can help with your compliance needs

We have recently been seeing a great deal of interest among our cloud hosting customers and prospects in security and compliance, in particular HIPAA and PCI requirements.  The recent revelation that 80 million healthcare-related records were stolen from Anthem should only increase this interest!  The important thing to remember is that while security and compliance are not the same thing, they have a purpose in common: safeguarding your business and its customers from threats to data security and application uptime.

Beyond requiring best practices for securing your application and your clients’ data, compliance regulations also address reputation and responsibility.  Certification lets you build a reputation for security that allows clients and partners to trust you, and it helps assign responsibility in the event of a breach, since the regulatory agency requiring your compliance certification is usually empowered to fine you or shut down your application or web site if it can be shown that the break-in resulted from being out of compliance with the associated requirements.

ENKI can help to ensure the security of your cloud hosting while providing many of the necessary building blocks for meeting compliance requirements.  Unfortunately, no hosting provider can guarantee that your business is fully compliant with HIPAA or PCI, because the requirements extend beyond hosting to your application code and internal company processes.  However, ENKI's cloud infrastructure and consulting services eliminate much of the complexity and specialized knowledge required to make sure your hosted application is compliant.

  • ENKI’s cloud is more secure and compliance-ready than your private datacenter.   Our 9 years of providing secure cloud hosting have given us the experience and know-how to manage our datacenters and your hosted environments for exceptional security.  Learn more on our security page.
  • We are experts on compliance and security, having hosted and managed highly secure and compliant applications for customers in a variety of industries including healthcare, finance, and government.   Our compliance-oriented support services are designed to deploy and manage security measures as well as advise you on how to cost-effectively achieve compliance.  Our standard BAA is one of the most comprehensive in the industry.
  • ENKI offers support packages with a variety of best of breed compliance-oriented technical security measures including Web Application Firewalls, 360 Degree Data Encryption, Site Security Scanning, Intrusion Detection, File Integrity Monitoring, Log File Analysis, and secure VPN access.

Security need not be expensive: data encryption can be deployed for as little as $40 per server and would have prevented the kind of damage Anthem recently experienced, especially since our SecurVault encryption manages your keys so that they're not stored on your servers.

We have found that the bulk of our compliance clients can benefit from our security and compliance expertise; many have come to us with serious but easily-addressed security holes that we’ve been able to close with our services.  Over the next few weeks, I'll be posting some blog articles about the technical countermeasures that ENKI offers to protect your data and applications.

Please contact us for a free evaluation of your HIPAA or PCI hosting needs.


Will an infrastructure provider's HIPAA certifications help my application be HIPAA compliant?

HIPAA is a strange beast, in that it has very few specific requirements but holds the Covered Entity and/or its associates responsible for using best practices to secure data.  If a breach occurs, an examiner will determine responsibility based on how completely the Covered Entity and its business associates followed best practices.  Unfortunately, best practices are a “cultural” as well as technical philosophy that evolves over time.  The current set of expected best practices for Technical Safeguards of hosted applications is generally accepted to be storage encryption, external security scanning for externally visible applications, encrypted communications (HTTPS/VPN), Web Application Firewalls for externally visible applications, secure backups, and potentially IDS (intrusion detection systems).  Additionally, for Administrative Safeguards, automated file system change monitoring, log file monitoring, and automated change management with approvals are best practices to ensure that the application is administered securely.  Certainly, specific applications may not have a threat surface that requires all of them, but HIPAA requires that decisions not to follow best practices be documented and explained as part of the security plan.

So will a certification, such as HYTRUST, help you achieve HIPAA compliance with your hosted application?


Unfortunately, if you look at what HIPAA “requires” – which is control over PHI (protected/private health information) at all stages of its management by the Covered Entity – there is no certification that will ensure that a Covered Entity is HIPAA compliant other than a full audit, because every process, program, server, application, job, person, and vendor that touches the data must be compliant.  Essentially, an entity’s HIPAA responsibility is tied to the amount of control the entity has, so your typical infrastructure service provider – such as Amazon for example, which gives the clients full control over their infrastructure – cannot take much responsibility at all, no matter what certificate may be proffered.


Because of this, any certification of Infrastructure-as-a-Service is almost meaningless, since anything the infrastructure service provider does to ensure data safety can only be necessary, not sufficient, for compliance.  Some infrastructure providers are starting to offer a certification called HYTRUST, but from a practical standpoint it offers no additional assurance of compliance, since the clients of such providers control the servers and the application.  Instead, for clients who want assurance of compliance, ENKI has chosen to offer a full suite of automated security controls coupled with application management that complies with the HIPAA Security Rule’s best practices, including full change management.  This service offering allows us to guarantee compliance – only of the hosting, of course – backed with up to $2M of liability coverage.

Since what your clients ultimately want is data security – so that HIPAA issues never come up – one of the best options for assuring them that their data is secure is the report of an external security scanning service.  ENKI offers the well-respected AlertLogic scanning service, which also includes intrusion detection – satisfying the clients’ desire to know their data is secure and their systems compliant, and, should a breach occur, the IDS supports the HIPAA Security Rule’s notification requirements.

For an overview of ENKI's HIPAA compliant hosting, please go to our HIPAA intro page.


Spending too much on the cloud? Tackle your cloud sprawl

I consistently hear from ENKI's enterprise cloud prospects that they have had bad experiences with some of our competitors because they feel they're spending too much on cloud services compared to their expectations.  It turns out that the root cause of the excess expenditure is "cloud sprawl": the unplanned use of cloud resources that are not providing an economic benefit to the cloud client.  As a result, managers and executives at cloud customers feel they have lost control over their cloud expenditures, and often blame the cloud, the cloud provider, or even the technology (including virtualization).

The most common causes of cloud sprawl we've come across are:

  • Use of cloud in lab or test environments where there is no defined completion date/time for the use of the resources, so they simply remain on
  • Control of cloud resources by individuals, particularly in development organizations, who use them for personal projects that are not directly associated with production services
  • Lack of centralized control over cloud spend because nobody on the client side is tasked with controlling cloud costs, often coupled with a lack of adequate usage reporting from the cloud vendor
  • Separation of expense control and cloud provisioning roles
  • Lack of automated process on the part of the cloud vendor for implementing resource control policies
  • Lack of client cloud use policies

It's been pretty clear from our discussions with prospects that both the cloud provider and consumer must collaborate to control cloud sprawl, especially since the root cause of the sprawl in many cases is client employees ordering cloud resources without any supervision.

ENKI has taken steps to enable our clients to control cloud sprawl, including:

  • Enhanced billing for improved clarity and detail
  • Online portal showing current usage rates and statuses for resources (Customers: check the link at the bottom of our home page)
  • Online portal allowing role-based control over resources outside of the provisioning process
  • Custom automation for select customers to enable us to support their resource control policies
  • Automated discounting for increased usage levels



Getting the basics right: stopping RDP attacks

ZDNet has a great article out today about how security still relies on taking some simple, easy steps - or fails because they weren't taken despite deploying sophisticated defenses.  In this case, it's Microsoft RDP (Remote Desktop Protocol), which our Windows customers usually use to get administrative access to their virtual machines, so security is critical.  RDP is secure - as long as you use it securely!

For securing RDP, the article makes some simple recommendations:

  • Use complex passwords, especially for accounts with administrator access
  • Consider disabling the Administrator account and using a different account name for that access
  • Set the system to lock a user out for a period of time after some number of failed login attempts. Numerous group policies for these rules have been in Windows for a long time
  • Require two-factor authentication, especially for administrator access
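The lockout rule in the third bullet can be sketched as a small state machine (a toy Python model of the policy logic only; the threshold and lockout window below are illustrative, and a real deployment would use Windows' built-in Account Lockout Policy settings rather than application code):

```python
import time

class LockoutPolicy:
    """Toy model of an account-lockout rule: lock an account after
    `threshold` consecutive failed logins, for `lockout_seconds`."""

    def __init__(self, threshold=5, lockout_seconds=900):
        self.threshold = threshold
        self.lockout_seconds = lockout_seconds
        self.failures = {}      # account -> consecutive failure count
        self.locked_until = {}  # account -> unlock timestamp

    def is_locked(self, account, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(account, 0) > now

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        if self.is_locked(account, now):
            return
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.threshold:
            # Lock the account and reset the failure counter
            self.locked_until[account] = now + self.lockout_seconds
            self.failures[account] = 0

    def record_success(self, account):
        self.failures[account] = 0
```

The point of the model is the shape of the rule: a small threshold plus a timed lockout makes online password guessing against RDP impractically slow.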

If you are an ENKI customer and need help with these steps, or consultation on the security of your cloud deployment, please contact our support services.  ENKI specializes in providing secure cloud installations for compliant applications that must meet HIPAA, PCI, or other regulations.


How ENKI's on-demand instance sizing can save you up to 25% off Amazon or Rackspace clouds

While we like to compare the total cost of operating your application in our cloud versus the big guys - Amazon and Rackspace - our prospective clients often focus on the per-hour infrastructure costs.  We're currently running a 20% off special on equivalent resource pricing compared to Amazon - a particularly good deal since they just lowered their prices.  But from a resource cost perspective, ENKI is an even better choice, because you simply need fewer resources - and therefore less money - to run your application in our cloud.

How is this possible?  The answer is that ENKI's VMware-based cloud allows you to allocate the computing resources you need to your applications on an instance (VM) by instance basis.  If your instance needs 10 GB of RAM, for example, you won't have to buy a 15 GB instance to run it.  On average, our customers allocate 25% fewer resources than if they were forced into Amazon's fixed-size instances, saving themselves that percentage of cost.  This savings comes before other additional savings, like not having to allocate standby instances if you need rapid failover, because our VMware cloud instances are not ephemeral (disappearing if the underlying hardware fails).

How did I come up with the 25%?  First, I assumed that cloud customers "walk in the door" of a cloud provider with a certain instance size in mind that they need to run their application, which I call the "demanded" instance size.  If they go to Amazon, they'll have to pick one of AWS's instances, which has to be larger than the size they demand in order to get the performance or reliability they expect.  Second, I assumed that the distribution of demanded instance sizes is flat, meaning that if 100 customers decided to create a new instance, they'd demand sizes spread randomly between close to zero resources and the maximum that AWS offers.  It follows that between any two AWS instance sizes, the customers who pick the larger one on average demand a size halfway between the two.  (Otherwise there would have to be some factor driving them to magically "need" the sizes AWS has chosen to offer.)  Working from these assumptions, I created the following table:

AWS instance    Overallocation (GB)         Percent
size (GB)       (next size up minus         overallocation
                average size demanded)
 1.7             0.85                       50.00%
 3.75            1.025                      27.33%
 7.5             1.875                      25.00%
15               3.75                       25.00%
30               7.5                        25.00%
57.95           15                          25.88%

So, on the average, customers will save over 25% on resources at ENKI, leading to equivalent cost savings.
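The arithmetic behind the table can be reproduced in a few lines (a sketch under the same flat-demand assumption; I use a round 60 GB for the largest size, so the last row differs slightly from the table above):

```python
def overallocation(sizes):
    """Assume demanded sizes are spread uniformly between the next size
    down and this size, so the average demand is the midpoint; the
    overallocation is the purchased size minus that average demand."""
    rows, prev = [], 0.0
    for size in sizes:
        avg_demanded = (prev + size) / 2
        over = size - avg_demanded
        rows.append((size, over, 100 * over / size))
        prev = size
    return rows

for size, over, pct in overallocation([1.7, 3.75, 7.5, 15, 30, 60]):
    print(f"{size:6.2f} GB  overallocated by {over:6.3f} GB  ({pct:5.2f}%)")
```

Because each size is roughly double the previous one, the overallocation settles at (size − size/2)/2, i.e. 25% of the purchased size.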

I've shown this table to some "devil's advocates" who retorted, "But that means that clients on average have a bigger safety cushion against running out of resources at AWS!"  A little thought shows this is not correct: clients still have to decide how big an instance they need, including a safety margin, before they buy an instance at AWS or ENKI.  After that decision, they have their final requirement, which is used to pick their desired instance size - or to allocate resources at ENKI.  So the table above still applies.

This savings calculation also applies at Rackspace, or any fixed-size instance provider, though the exact percentage may vary.  In order to minimize the number of different instance sizes (for their own efficiency as well as their clients' sanity), AWS increases instance sizes by doubling the next smaller size.  This keeps the percentage error (and savings) between demanded and provided instances large.  If a cloud provider's instance sizes increased linearly, the savings would be smaller, though still respectable.

If you want to learn more about how ENKI compares to AWS, click on this line.

If you want to learn more about how ENKI compares to Rackspace, click on this line.


Can ENKI's Managed Cloud Services Make Your Company More Competitive?

I'd like to share with you the most powerful benefit of ENKI's high performance managed cloud computing services for SaaS providers such as yourself: increased organizational effectiveness. Over the last 6 years of the cloud revolution, the self-service model of cloud computing has enabled companies to simplify their IT teams and reduce the costs of delivering their services to their clients. Many of those cost reductions have come from eliminating skilled IT operations staffing, whose duties have typically fallen upon staff less skilled in IT systems management. However, research has consistently shown that superior IT capability is associated with higher profit ratios and lower cost ratios for businesses.  (Read one compelling study here.)

As a result, we often see SaaS providers struggling with managing their own datacenters or cloud deployments, including:

  • Inability to staff and retain IT operations teams with skilled professionals
  • Not knowing or benchmarking their IT capability compared to competitors
  • Problems with uptime, performance, or security of the application deployment
  • Problems with overall systems architecture that result in constant firefighting and inability to deploy desired SaaS capabilities
  • Costly and distracting split focus of management between product and operations
  • Having to maintain a full-sized NOC team for 24x7 response even though the workload is much smaller

To solve these problems and bring our clients the true pay-as-you-go benefits of the cloud, ENKI has developed PrimaCare, an outsourced operations service to complement our high performance Virtual Private Datacenters. With PrimaCare, you get the benefits of a seasoned IT operations team without the expense and management headaches of building your own 24x7 NOC and operations organization, including:

  • Pay only for the services you need instead of a large IT team, with charges scaled based on the size of your cloud deployment.
  • Avoid common problems which can lead to downtime or data loss, such as excessively large databases, undersized virtual machines, unmonitored failures, or missing backups.
  • Get advice from the team that built Netsuite's million-user datacenter to help you with adapting your architecture to scale up, addressing failover or uptime requirements, or planning for meeting security compliance.
  • 24x7 response to production issues, including hands-off corrective actions based on our complete runbook of maintenance procedures for your site.
  • Integration with your development and management teams through regular contact, enabling flawlessly executed software releases or marketing campaigns.

ENKI was founded to assist SaaS providers in offering fast, reliable, and secure services to their clients without the necessity for expensive and hard to find expert IT capability. If this is of interest, click on the Contact Us link above to arrange a free consultation on how ENKI can provide you and your application superior support and personal, expert service. You have our commitment to provide a total solution that meets your business and technical requirements.


Making Big Data work in the public cloud

An increasing number of ENKI's clients are monetizing their transactional data by storing large amounts of it permanently for the revenue opportunities that come from mining that data.  Because of this, they are starting to run two databases: one for high speed transactional work and the other for data mining queries on their data warehouse.  However, making both of these databases work well in the cloud requires radically different skills and sometimes even different technologies, and we see our clients struggling with the challenge, especially if they are short on database administration skills or have administrators used to smaller deployments.

To quickly review, transactional database performance management in the cloud comes down to three critical points: query efficiency, cache size, and IO speed.  Aside from query optimization, the other points are discussed in "Optimizing Machine Performance and Costs in a Provisioned Storage Performance Cloud."  Query optimization is critical: you must have an application DBA on staff or on call if adjusting IO performance and caching doesn't seem to help.  In fact, we saw this as we migrated customers to our new standard storage system in our latest datacenter: a number of them showed little of the expected increase in application performance because they were still running inefficient queries that were maxing out I/O on the virtual machines.  The access time to storage had decreased by 30% and the throughput had increased by at least 3x, but their queries - being so dependent on access time due to all the random disk accesses they created, and not using cache efficiently - only sped up slightly.  To help these customers, we placed our provisioned-IOPS storage tier onto an array from Nimble Storage, which optimizes access time by caching a large amount of data in SSD storage.  Even so, these clients would be best served by query optimization to avoid unnecessary IO performance charges.

Big data operations follow a different rule.   Typically big data queries will sift through large portions of a database looking for patterns.  This requires more I/O throughput to complete the query (often MUCH more!) but is less dependent on access times since accesses are serial, perfect for rotating disk storage.   Cache, so important for transactional database loads, is almost irrelevant because the amounts of data are so large that they cannot be cached in virtual machine memory either due to cost or hardware architecture limitations.   Instead, both the database and the storage - as well as the cloud itself - must be optimized for passing large amounts of data through the VM for analysis.   To make this possible, the entire ENKI cloud runs on dual 10GbE links that connect servers to storage, and we offer very wide stripe RAID 10 with 15kRPM drives for massive data transfer speeds to the VM.
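The contrast between the two workloads can be made concrete with a back-of-the-envelope disk-time model (an illustrative sketch only; the seek times, throughputs, and query shapes below are invented numbers, not measurements from our cloud):

```python
def query_time_s(seeks, bytes_read, seek_ms, throughput_mb_s):
    """Rough model: total time = random-access time + sequential transfer time."""
    return seeks * seek_ms / 1000 + bytes_read / (throughput_mb_s * 1e6)

# Transactional query: many random accesses, little data moved.
oltp = query_time_s(seeks=20_000, bytes_read=50e6, seek_ms=5, throughput_mb_s=200)

# Warehouse scan: a handful of seeks, a huge sequential read.
scan = query_time_s(seeks=50, bytes_read=200e9, seek_ms=5, throughput_mb_s=200)

# Triple the throughput: the seek-bound transactional query barely moves,
# while the scan speeds up by almost 3x.
oltp_fast = query_time_s(20_000, 50e6, 5, 600)
scan_fast = query_time_s(50, 200e9, 5, 600)
```

This is the migration story above in miniature: faster throughput does nothing for a query dominated by random access time, and everything for a serial scan.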

In order to optimize data access so that big data queries utilize this kind of storage throughput effectively, the database must actually perform serial data access when a query is run.  For a relational database, the queries and the schema must be tuned to allow this.  One of the key requirements (as with transactional queries) is to set the indexes up properly so that they can be cached or quickly retrieved - which means they have to be small.  If very fast storage is available, putting the index on it may significantly improve overall response time.  Also, the database VM must have enough RAM to cache the indexes if your performance plan requires it.  And the data being accessed must be stored in the largest possible chunks to maximize sequential read performance.

This is why many big data projects have turned to NoSQL databases like Mongo, which retrieve larger objects using relatively small indexes.  However, the choice of database also depends on the application architecture and the data itself, since practical use of an object database for data warehousing may require grouping or agglomerating smaller chunks of data from the transactional records in such a way that they can no longer be individually retrieved.  If an object database is not appropriate, traditional techniques for reducing complexity and file size have to be applied to the SQL database to keep it maintainable and reliable, including sharding and spreading storage provisioning across multiple storage arrays for both performance and reliability.  ENKI can accommodate these requirements.

The one area to watch out for when using object storage directly is aggregate throughput: some object storage systems simply can't deliver the aggregate throughput of block/file based storage, especially if writes are involved, due to the common object store architecture of storing data on multiple separate hardware systems.
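The "indexes must be small enough to cache" point reduces to simple sizing arithmetic (a hypothetical back-of-the-envelope sketch; the row count, key width, and overhead factor are made-up examples, not recommendations):

```python
def index_ram_gb(rows, key_bytes, overhead=2.5):
    """Rough B-tree index footprint: total key bytes times an overhead
    factor for pointers and partially filled pages."""
    return rows * key_bytes * overhead / 1e9

# e.g. 2 billion rows with a 16-byte composite key:
needed = index_ram_gb(rows=2_000_000_000, key_bytes=16)

# Won't fit in a 64 GB cache - shrink the key, shard, or add RAM.
fits_in_64gb = needed <= 64
```

Running this kind of estimate before provisioning the database VM tells you whether your performance plan's "indexes in RAM" assumption is actually achievable.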

Finally, some cloud providers offer pure SSD storage - usually within the hardware hosting the cloud instance.   While the speed on this SSD is hard to equal, the challenges of keeping the SSD current from longer-term object storage as well as flushing the SSD regularly to the object storage if writes occur, are not solved automatically and would require your application or a third-party file system to be configured for maintaining the local SSD "cache."  In addition, these SSD deployments are in clouds with ephemeral instances, so your application will have to take into account that a server failure will cause incomplete results to be returned from a query, or even cause the query to crash.  On-instance SSD may pay off for the largest big data projects but for the bulk of 10-100 terabyte deployments, the techniques above should prove adequate.


The NSA has hacked Google's Cloud - and how to protect your cloud assets.

Recent revelations from security contractor Edward Snowden have shown us that the NSA is snooping on our communications. Now an article in the Washington Post shows that the NSA has successfully broken into Google's cloud - the network of servers and datacenters that handles mail, video, phone calls, user data, and other private information.  Google engineers exploded in profanity when they saw that the NSA had gained access to internal systems to which they had specifically tried to deny it.  And it appears the NSA is taking large volumes of information.

The Washington Post article includes a basic network diagram showing how Google's cloud works.  Like all public clouds today, information travels between servers unsecured, relying on either physical security or security at the "edge" of the cloud to keep prying eyes away from private data.  The clever NSA engineers even put a smiley face to show that it's only secured at the edge - allowing the NSA to snoop on internal communications wherever the connections are not physically secured, such as links between different datacenters.  

To me, it is more than a little surprising that Google's internal cloud is not secured internally, since it is completely within their control, and since, with multiple datacenters, the links between servers are not physically secured.

All public clouds offer essentially the same level of security from determined hackers that Google does: very little.  Public cloud providers have a higher responsibility to their customers to help them secure their data and systems, since often the customer isn't aware of the potential security limitations of the public cloud.  So far, like Google, they've relied on physical security, but with resources spread across multiple datacenters, that is no longer enough, since the NSA can attack the physical connections directly, as it has with Google.

The general solution to protecting your data at rest and in motion is to buy or build your own virtual private cloud out of the building blocks that your cloud provider offers, unless your provider has certified methods for ensuring the security of data at rest and in motion.  In the last couple of years, as ENKI has taken on HIPAA compliant clients, we've added quite a few of our own security offerings that make it much easier to build your own virtual private cloud; if your data resides entirely within the ENKI cloud, you get a virtual private cloud by default.  I've created a list below showing the needed steps and our certified offerings that will let you protect your data.  Hopefully, Google is doing the same thing and acting on it.

Storage Encryption.  You will want to make sure the data you store is unreadable to hackers.  Not only that, but it should be protected on its way to the storage system (something Amazon and other cloud providers offering encrypted storage don't do).  ENKI offers SecurVault, a technology from High Cloud Security that replaces the storage driver on your virtual machine and won't let anything leave the VM for the storage system unless it's encrypted.  If your storage system is in another datacenter - or you don't know where it is - the NSA still can't get at it.

Secure Administrative Access.  If you use your servers from a remote location - pretty much required with the cloud - you'll want to make sure that your communications with them are encrypted.  As a user, the web provides SSL/HTTPS encryption, which the diagram in the Washington Post article seems to imply the NSA doesn't want to bother with (they'd rather go for unencrypted data!).  As an administrator, however, you'll need a solution for control/admin access.  ENKI offers EasyVPN, a software-based VPN that secures the link directly from your server to your accessing computer.

Secure Inter-Server Communication.  If your cloud servers are in the same datacenter, physical security has generally been considered sufficient to ensure your data in motion is safe.  But is it safe from other cloud users?  The solution is hardware VLANs, which only let servers authorized in a VLAN group talk to each other.  ENKI's cloud uses hardware VLANs for security.  But what if you have two servers in different datacenters?  The NSA will draw a line on their diagram connecting them, with a smiley face!  The solution is to encrypt this link as well.  You can set up an SSL connection between the servers, which ENKI does for its clients based on their security plans.  Or you can find out if your cloud provider offers an encrypted link between datacenters - essentially a VPN for your VLANs.  We offer this as well, on request.
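An SSL link between servers in different datacenters can be sketched with Python's standard ssl module (a minimal example; the certificate paths in the comment are placeholders, and a real deployment would verify against a private CA with mutual authentication per the security plan):

```python
import ssl

def make_link_context(ca_file=None):
    """Client-side TLS context for an encrypted server-to-server link:
    verify the peer's certificate and require a modern protocol version."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # For mutual authentication, each side would also present its own cert:
    # ctx.load_cert_chain("server.pem", "server.key")  # placeholder paths
    return ctx

ctx = make_link_context()
```

With a context like this wrapping the inter-datacenter socket, the link between servers no longer relies on physical security alone.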

Other Places To Look for Data Leaks.  Because the internet is so connected, and it's much easier, faster, and cheaper to create new applications that talk to other applications to build value, you should look out carefully for data connections that don't appear in your architecture diagrams and make sure they're encrypted:

  • Backup services for off-server/off-cloud backups
  • Bulk email services (assuming you want them protected)
  • Financial services (those taking credit card payments are already encrypted due to PCI)
  • Marketing/referral/partner relationships and services that include personally identifiable data transmission
  • Links to social media services
  • Remote storage services not offered by your cloud provider
  • ... and many more
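A first-pass audit of those integrations can be as simple as flagging any configured endpoint that isn't using TLS.  A hypothetical sketch (the URLs are illustrative stand-ins for the categories above):

```python
from urllib.parse import urlparse

def unencrypted_endpoints(endpoints):
    """Return the configured integration URLs that don't use TLS."""
    return [url for url in endpoints if urlparse(url).scheme != "https"]

# Hypothetical integration list drawn from the categories above
integrations = [
    "https://backup.example.com/api",            # off-cloud backup service
    "http://mail-blast.example.net/send",        # flagged: plaintext bulk email
    "https://payments.example.com/charge",       # PCI already mandates TLS here
]
```

This only catches the obvious cases, of course; connections tunneled inside other protocols still need to be traced by hand.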

How to use Virtual Private Datacenters for cloud security

An increasing percentage of ENKI’s customer base is asking for security on the scale required by SOX, PCI or HIPAA, even if the data they’re securing doesn’t fall under those regulations.  In essence, they need to certify to their users/clients that their data will be safe from accidental or intentional loss.  This level of security requires that the application be carefully designed to prevent data leakage, and that the data center it resides in meet regulatory requirements and comply with the application’s security architecture.   A true VPDC can support these deep security requirements in many ways, including:

  • Dividing the application up into separate security domains, each served by its own secured VLAN (a necessary private networking component of a true VPDC).  For example, the presentation web servers reside on a lower-security network, while the application servers are accessed by them through a firewall that prevents any accesses that do not come from the application residing on the web servers (such as command shell or database connections).  Similarly, the database servers can be protected by yet another firewall and VLAN.
  • Creating a management network for secured servers that only allows management traffic and is connected to a VPN concentrator accessed by administrative staff.
  • Creating a separate storage network with encryption appliances on it to offer encrypted datastores to any server that needs them, for example with ENKI’s SecurVault service.
  • Deploying the exact active firewall, load balancer, global load balancer, VPN concentrator, directory server, backup application, or security appliance that meets your requirements without worrying about virtualization or network compatibility, since they will run correctly in a true VPDC.
  • Deploying Citrix XenApp or VMWare ThinApp VDI within the VPDC to add a layer of security to your application by presenting only the user interface to the end user, and hiding the actual application behind an additional layer of network security.  
  • Since every action that affects the VPDC is tracked through the management interface, HIPAA and PCI compliance requirements for change control, access control, and logging are met for the entire deployment (if coupled with standard in-VM logging and access control methods.)
  • Enabling and supporting the implementation of best practices from security advisory sites like Darkreading’s Compliance Center.
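The security-domain layering in the first bullet can be modeled as an explicit allow-list of flows between VLANs.  This is an illustrative model (the domain names and ports are hypothetical, not ENKI's actual firewall rules):

```python
# Each (source domain, destination domain) pair lists the only ports allowed
# through the firewall between those VLANs; everything else is dropped.
ALLOWED_FLOWS = {
    ("internet", "web"): {443},
    ("web", "app"):      {8443},   # app tier reachable only on its service port
    ("app", "db"):       {5432},   # no shell or database access from the web tier
    ("mgmt", "web"):     {22},     # management network reaches every tier,
    ("mgmt", "app"):     {22},     # but only for admin traffic over the VPN
    ("mgmt", "db"):      {22},
}

def is_allowed(src, dst, port):
    """Default-deny policy: a flow is legal only if explicitly listed."""
    return port in ALLOWED_FLOWS.get((src, dst), set())
```

The value of writing the policy down this way is that "web servers cannot open a database connection" becomes a checkable property rather than a hope.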

This article is a summary of one section of our VPDC White Paper, which includes information on cloud virtual private datacenters and how to use them for security compliance. You can receive a copy for free by clicking here.

Get Full White Paper


Optimizing Machine Performance and Costs in a Provisioned Storage Performance Cloud

As of the first of November, ENKI will begin offering provisioned-performance storage (PPS), which allows you to decide how much storage performance you need for your virtual machines (or volumes) by requesting a guaranteed level of performance.   Although a number of cloud providers offer provisioned-performance storage as an option, many customers don't fully understand the advantages that it offers for trading off your costs versus application performance for storage-intensive applications such as databases.

Prior to provisioned-performance storage, or with some PPS offerings that cannot actually guarantee a large range of storage performance, the cloud customer basically had only one variable to adjust when application response time was dependent on storage performance: the allocation of memory to the database or perhaps to a caching application.   By storing more of the critical data the application needed regularly (called the "working set") in memory, accesses to storage could be avoided, allowing the speed of the application to be decoupled from storage speeds.   However, this is a very expensive approach, since large databases would require large amounts of memory.

With PPS, the customer now has the ability to vary storage speed as well.   The resulting speed of the application will depend proportionally on the amount of storage performance (usually measured in IOPS, or input/output operations per second).   However, adjusting memory size will still have an effect and in many cases should be the first thing that you try.   The reason for this is that if your application's working set doesn't fit into memory, then most of the requests for data that it creates will require a storage access.   Since storage is many times slower than memory, this will cause your application to lose a large portion of its potential performance.
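The effect of a working set that doesn't fit is easy to quantify.  With illustrative latencies (roughly 0.1 microseconds for RAM and 200 microseconds for networked storage; your numbers will differ), the expected access time is just a weighted average:

```python
def avg_access_us(hit_ratio, ram_us=0.1, storage_us=200.0):
    # Expected data-access time in microseconds: cache hits are served from
    # memory, misses require a storage I/O.  Latencies here are illustrative.
    return hit_ratio * ram_us + (1.0 - hit_ratio) * storage_us
```

Even at a 99% hit ratio, `avg_access_us(0.99)` is about 2.1 microseconds, roughly 21 times slower than pure memory access; at 90% it balloons to about 20 microseconds.  This is why the miss rate, not the raw storage speed, usually dominates application performance.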

I found the following quote on the web recently:

"A well designed web application should serve 99% of its queries out of memory"

What this means is that you must determine your working set size by using appropriate diagnostic tools on your database, and then allocate enough memory to keep that working set completely in your server.   It also means that you need to design your database schema to partition your data in such a way that a working set can be extracted from the total amount of data you are storing.   This may not be practical for some applications (especially "Big Data" applications where much of a very large database must be scanned to generate the answers required) but I think it's a good general rule.
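That rule of thumb translates directly into a provisioning estimate.  If you know your query rate, your cache hit ratio, and roughly how many storage operations a cache miss costs, you can estimate the IOPS level to provision (a simplified model; real workloads mix reads, writes, and variable I/O sizes):

```python
def required_iops(queries_per_sec, hit_ratio, ios_per_miss=2.0):
    # Only cache misses reach storage, and each miss costs a few I/Os
    # (index lookup plus data fetch; ios_per_miss is an assumption).
    return queries_per_sec * (1.0 - hit_ratio) * ios_per_miss
```

At 1,000 queries per second and the 99% rule of thumb, storage sees only 10 misses per second; at a 90% hit ratio it must absorb ten times that, which is exactly the kind of difference that decides how much provisioned performance you need to buy.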

[Chart: Memory vs. Storage Performance Tradeoff]

In the chart above, we see a typical storage speed/memory optimization curve.   On the right, the memory is being increased and results in a sudden increase in application performance as the entire working set is now cached on the server.   Increasing memory beyond that point only increases application performance slightly, since it caches less frequently-used data.

On the left, the storage performance is being increased.  Here, a linear increase in storage performance results in a similar linear increase in application performance.   A good practice is to choose a level of storage performance that results in acceptable application performance after you have found the amount of memory that is required to store your working set.  

These rules are general, but also universal: you can't beat this effect by dividing your storage into smaller chunks, and certainly not by increasing CPU speed on your database instance.


Monetizing SaaS Customer Behavior Based On Infrastructure Costs

I was speaking this morning to Victoria Pointer of MeetTheBoss about monetizing customer behavior for SaaS services, and we got into an interesting discussion about connecting that monetization to costing, which is something I know a bit about for cloud hosting.   The issue that came up was monetizing SaaS services for end-user convenience and efficiency based upon the behavior of those end-users.

Clearly, most SaaS services will claim that they are all about increasing end-user efficiency and convenience (otherwise, why would they expect to displace legacy applications?)  However, the creators of any SaaS application will come to a point where they have to choose between providing the most useful service to their clients and providing profit to their investors.   At the intersection of these choices is the cost of their development and infrastructure.   Setting aside development costs for the sake of this blog, and also setting aside software efficiency and other cost factors (which are better covered in the Controlling Cloud Costs whitepaper I wrote, downloadable at the upper right of this page), the interesting question is what the best approach is to monetizing end-user behaviors that cost the SaaS provider a lot of money in infrastructure charges.

One case study is a service I'm intimately familiar with, which is NetSuite.   Both from my time working there and now my time as a customer, I see that NetSuite has chosen to restrict the cost-incurring behaviors of clients so that they don't impact the overall business by hogging resources on shared servers.    NetSuite will limit the run time of user-submitted customizations, slow the processing of mass record updates, and delay report generation until off-hours.    This is a reasonable approach, considering that critical portions of their infrastructure are not easily scalable to adapt to changing loads.  It guarantees a minimum service level for all clients and avoids breaches of SLA.   It also saves NetSuite a lot of money because it doesn't have to build out its infrastructure to handle unpredictable peak loads.

However, is this also a monetization opportunity?   I think so.   If a SaaS provider offers convenience and efficiency, then why not offer to charge clients based on their resource usage, rather than limiting that usage?   They'd get their reports, mailings, CAD drawing refreshes, project plan recomputations, etc. much more quickly, and that would increase satisfaction as well.   This is not much different than electrical time-of-use billing where you pay more if you insist on drying your clothes in the middle of the day: you're paying for convenience.  And the provider gets a new revenue stream that can be used to build out the extra infrastructure to satisfy this demand, plus a profit.   Customers are used to tiered services (ask anyone who uses Comcast!) and if the offer is sufficiently compelling, they'll pay for it.   What you have to watch out for is not slowing or dumbing down your base level of service so that it gives you a bad name or opens a door for a competitor.

So, to make monetization with respect to infrastructure usage work, you have a few tasks ahead of you:

  1. Determine what user behaviors eat up a lot of infrastructure resources
  2. Determine if those behaviors can be monetized (how much will someone pay to speed them up or get more comprehensive service?)
  3. Determine how to adjust the design of your software to allow variable allocation of resources to clients based on their billing plan
  4. Determine how you will meter or bill for the extra resource usage, or if you will simply allow more usage for a fixed additional price.   Analyze the resource costing and determine the pricing for the new tier.
  5. Plan how to communicate and sell the new tier of service so that it looks desirable without alienating your current client base.
  6. Look for ways to automatically upsell by measuring user behavior and suggesting the appropriate elevated tier of service that will appeal to them.
  7. Make the changes to the software design to support the new service tier
  8. Resolve any systems issues so that your platform (be it cloud or your own datacenter) can allocate new resources on demand.   With respect to cloud (which because of its scalability and pay-per-use is ideal for this approach) make sure your provider can allocate the resources when you need them and how the mechanism for doing so will work.  (This is another entire blog article or more on its own, of course!)
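Step 4's metering question can be sketched with a simple overage model.  The tier names, quotas, and rates here are hypothetical, purely to show the shape of the calculation:

```python
TIERS = {
    # name: (monthly base $, included CPU-hours, $/CPU-hour overage)
    "standard": (100.0, 200, None),   # None: usage beyond the quota is capped
    "priority": (250.0, 500, 0.12),   # overage is metered instead of throttled
}

def monthly_charge(tier, cpu_hours):
    """Bill base price plus metered overage (capped tiers have no overage)."""
    base, included, overage_rate = TIERS[tier]
    if overage_rate is None:
        return base                       # usage is limited, never billed
    extra = max(0, cpu_hours - included)
    return base + extra * overage_rate

def suggest_tier(cpu_hours):
    # Upsell logic (step 6): recommend "priority" once a capped customer
    # starts hitting the included allowance of the standard tier.
    return "priority" if cpu_hours >= TIERS["standard"][1] else "standard"
```

The capped tier corresponds to the NetSuite-style approach of restricting behavior; the metered tier is the monetization alternative argued for above, where the customer pays for convenience instead of waiting.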

12 Ways To Make Your Cloud Usage More Efficient

Application efficiency is determined by how much (or little) computing resource is used to serve the application’s customers. It should be unnecessary to point out that deploying an inefficient application to the cloud will cost you more to host.  However, there are subtle pitfalls that can result in a very inefficient application.  Inefficiency can come from many places, but you often have a lot of control over it.  Below is a list of common inefficiencies we have found with our startup and growth clients that, if addressed (or avoided), can save you a great deal in cloud deployment costs.

Inefficient software.   Look for functions or methods that consume most of the CPU time: they will also consume most of your cloud cost. 

Watch out for single-threading.  A number of our clients have written apps that force certain operations to run through a section of code that is written so that only one copy can be running on the server at a time.  On a multi-core server (virtual machine), this effectively throws all but one of the cores away, even though you’re still paying for them!
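The cost of that serialized section is captured by Amdahl's law: if a fraction s of the work must run single-threaded, n cores can never give you a speedup better than 1/(s + (1-s)/n).  A quick sketch:

```python
def effective_speedup(cores, serial_fraction):
    # Amdahl's law: the single-threaded section caps the speedup no matter
    # how many cores (that you're still paying for) are available.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)
```

With everything funneled through one lock (serial_fraction near 1.0), an 8-core VM performs like a 1-core VM; even 10% serialization limits 8 cores to roughly a 4.7x speedup, so you're paying for nearly half the machine to idle.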

Database inefficiencies.  Back when we worked at NetSuite, we found that at times the application could be sped up by 10x by redesigning the SQL queries sent to the database, which we spent many a sleepless night toiling over.  Another pitfall we’ve seen is bad database architecture: one client used 15,000 tables for a small application when all the data could have fit in one table, making the application consume over 100x the resources it actually needed.

Inlining compute-intensive code.  By making the user wait for expensive operations like transcoding video, you place a performance demand on the process that will require allocating a lot of extra resources to it.  Instead, you should separate the compute-intensive code from UI functions that affect the users’ perception of your site’s performance.
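One common way to take such work out of the request path is a background job queue: the request handler enqueues the job and returns immediately, and a worker processes it asynchronously.  A minimal in-process sketch with Python's standard library (a production system would use a persistent queue or message broker instead):

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    # Runs the expensive operation (e.g. video transcoding) off the request
    # path, so the user never waits on it.
    while True:
        job = jobs.get()
        if job is None:                  # sentinel: shut the worker down
            break
        results.append(("transcoded", job))  # stand-in for the slow work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The request handler just enqueues and returns immediately:
jobs.put("video-1.mp4")
jobs.join()   # only this demo waits; a real handler would return to the user
```

The user-facing code stays fast and predictable, and the compute-heavy tier can be sized (and scaled) independently of the web tier.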

Inlining web services calls to other web applications.  Much like compute-intensive code, calls across the internet to get or send data to other applications can be time consuming, especially since you often don’t have control over the performance of the remote application or delays in reaching it.

Bundling too many different code modules onto the same server.  This can cause a long compute operation initiated by one customer to slow the responses for others, requiring you to order much larger servers than necessary.

Not all customers are alike.  For many apps, a small percentage of the customer base uses most of the resources.  Find a way to segregate these customers into their own infrastructure and charge them more. It never makes sense (despite being practiced by some large internet companies like NetSuite or Google) to restrict your customers’ usage of your system to save on resources, because it drives them away.

Watch your IOWait.  Some cloud service providers’ architecture simply cannot keep up with the input/output demands of your application, with the result that your servers sit waiting for data to move to or from storage or other servers.  Check your IOWait percentage: a server that spends fraction w of its time in IOWait delivers only (1 - w) of its computing power, so you are effectively paying 1/(1 - w) times what the useful work should cost.
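Put concretely, the overprovisioning factor that IOWait imposes is:

```python
def iowait_overprovision_factor(iowait_fraction):
    # A server stuck in IOWait 20% of the time delivers only 80% of its
    # compute, so you effectively pay 1 / 0.8 = 1.25x for the useful work.
    return 1.0 / (1.0 - iowait_fraction)
```

At 50% IOWait you are paying double for every unit of actual computation, which is usually a sign to fix the storage bottleneck rather than add servers.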

Choose your operating system carefully.  We love Windows for its ease of use, unified software development environment, and enterprise features.  But it eats a lot of memory just to get started.  If your instances would be much smaller under Linux, or if you have many instances, you will save big time by not using Windows, even if you don’t take the licensing costs into account.  

Don’t over-utilize your instances.  It may be counterintuitive that using your cloud servers at less than 100% will save you money, but researchers at the University of Aachen in Germany found that underutilizing your cloud servers in Amazon EC2 could actually increase performance enough that it more than offset the cost of adding extra instances to achieve a desired performance level.

Turn off unused servers.  At this point you’re probably chuckling to yourself, but recent studies show that even mid-sized organizations don’t aggressively turn off unused cloud servers, resulting in much higher than expected costs.

Use swap. If your instance runs out of memory 0.1% of the time, it often makes more sense to add swap space to prevent it from crashing than to upsize the instance to avoid this eventuality.

This article is a summary of one section of our White Paper, "Controlling Cloud Costs", that includes solutions to these problems, and which you can receive for free by clicking here.

Get Full White Paper


Comparing Amazon AWS Pricing to ENKI: A Real-World Case Study Showing 33% Savings

We recently completed a cost comparison between ENKI and Amazon AWS, starting with the monthly bill of one of our mid-sized customers, a social media/SaaS application company.  Their application consumes 51 virtual machine instances, 78 cores, and 236 GB of RAM.   In order to get an apples-to-apples comparison, we looked at the resource settings on each instance and found the AWS instance type that meets or exceeds both the RAM and CPU in order to determine the equivalent pricing.  We did this comparison for all the client's instances for both reserved and on-demand instances.  

ENKI's pricing ended up being 33% lower than Amazon AWS for this client, while delivering better performance and uptime due to our enterprise architecture.   If the client had chosen reserved instances, ENKI would have been 43% lower. The case study is available as a PDF for download.

Get Case Study


Crash or Aha! - two different philosophies on how to rightsize your cloud deployment.

Adjusting the size of your deployment to match your business needs is the most obvious tool you have to control costs.  Allocate too large a server or too many servers, and you’re going to waste money.  Allocate too little, and your clients will think there is something wrong with your service and look elsewhere because of slow performance or crashing.  It may be obvious that you want to avoid crashes which lead to downtime, but many of our clients have economized, and continue to economize, on resources to the point that their customer base is continuously alienated.  It’s convenient to blame the cloud provider for inadequate performance, but since the cloud gives you control over how many resources you allocate, the decisions that lead to poor performance or uptime are ultimately in your hands.  Losing clients will inevitably cost more than paying a bit more for resources.

But how much do you need?

Unfortunately most new cloud customers have no idea what the appropriate resource allocation is for their application out of the gate because they haven’t had the chance to measure real-world usage. This presents a choice at initial deployment which we like to call “Aha vs.  Crash” (please see our white paper on controlling cloud costs).  You can choose to learn as much about your application as possible (“Aha!”), or you can choose to minimize resources for short-term savings, which will inevitably result in downtime (“Crash!”) – if only to resize and restart your instance.

We recommend the “Aha” approach of oversizing your cloud deployment initially to avoid crashing, and then measuring it with an appropriate monitoring tool under real or simulated loads to get that all-important ratio of resources to demand at your chosen level of performance.  Because cloud resources can be adjusted down as well as up, you aren’t locked into overpaying for long periods of time, only until you have your data.  After that, you can monitor usage, adjust resources based on measured load, and decide how you will scale up with demand, making adjustments as needed over time.  I call this "adaptive allocation."  By planning ahead, you can schedule downtime with your users for adjusting resource levels, making the “Aha” approach even more appealing.
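The adaptive allocation loop boils down to: measure peak utilization over a window, then resize to the peak plus a safety headroom.  A sketch (the 30% headroom is an illustrative choice, not a universal rule):

```python
import math

def recommend_cores(peak_core_utilization, headroom=0.30):
    # Size the instance to the measured peak plus headroom, so normal growth
    # doesn't cause a crash before the next review.  Headroom is a judgment
    # call; 30% here is just an example.
    return max(1, math.ceil(peak_core_utilization * (1.0 + headroom)))
```

After an "Aha" period on an oversized 16-core instance, a measured peak of 5.2 cores would suggest resizing down to 7 cores: still comfortably above peak, but no longer paying for nine idle cores.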

Or, you can install some auto-scaling functions to adjust your resource levels based on measured loads.  However, there are plenty of gotchas with autoscaling as well, which can result in either ongoing crashes or expensive overallocation of resources.  I'll cover these in another blog article.

This article is a summary of one section of our White Paper, "Controlling Cloud Costs", that includes solutions to these problems, and which you can receive for free by clicking below.

Get Full White Paper


The economics and future of PaaS

I was having a discussion the other night with a friend and potential future client who is growing a startup with a bright future.   He's currently hosted on Heroku and wondering how he's going to increase his uptime and get better control over his software deployment as he grows.   The emails and chats went back and forth for a while and I realized we were writing a blog about PaaS in the process.  I wanted to share it with you...

He asked me why I thought Heroku's reliability was so dependent on Amazon's reliability, because his goal is to surpass 4-nines of uptime and he's already seen that the Heroku platform suffers from Amazon's far lower reliability (see my blog about Amazon reliability).   I'd never really thought about it, just assuming that Heroku didn't know how to do it.   But I think the real reason is market forces that demotivate such a solution.  I have seen how we often get customers who want 5-nines of uptime, but when they see that it will cost them 2-4x in cloud resources compared to the base reliability of our offering, they suddenly drop the requirement.  Or maybe they go to another provider that offers a "100% uptime" guarantee but actually delivers 3+ nines.

So, there's an upper bound to cost that many cloud buyers have in their minds, which has been successfully set by Amazon.  I am, of course, speaking of the cloud buyers to whom Heroku appeals: lean startups and enterprise departments.  These customers don't want to build the human infrastructure to provide their own PaaS out of cloud infrastructure and open-source software - which is essentially what Heroku has done, but tied together with a very nice user interface.  I compare Heroku PaaS pricing to Amazon because every larger Heroku user complains about the pricing in comparison to Amazon: they're always aware of the cost they're paying for that convenience.   Because of this upper bound, Heroku cannot reasonably sell a high-reliability infrastructure on top of Amazon, something that is inherently possible to a large (but not complete) degree, though again few users have done so, as evidenced by the moans and wails that issue with every major Amazon failure.

Another limiting factor for Heroku (or other PaaS providers) is that PaaS, while convenient especially if you don't have your own IT staff, provides only a limited subset of what an in-house IT group can do with respect to incident response, systems design, accommodating application architecture, etc.   At some point in an application's maturity, it becomes almost imperative that people are involved with maintaining it, especially tuning the deployment to match the requirements of the application.  This limits commercial PaaS to clients that are early on the maturity curve, or steadfastly determined not to hire IT staffing.  And with the advent of third-party PaaS tools like Standing Cloud or CliQr, you can make your own PaaS out of anyone's cloud (though once again, there's a fee for using the tool as a service.)   These new tools are adding quite a few management features, but ultimately they don't eliminate the need for trained system administrators on complex deployments.   I've seen our customers rely on similar features in CPanel or other management tools and back themselves into painful corners where their app couldn't be restarted without rebuilding the server.

As a result of these forces, plus infrastructure cloud providers slowly adding PaaS features, I see a limited future for add-on PaaS services like Heroku.  

But for now, PaaS, like Heroku and others, are a great way to launch an app into the cloud for the first time and run it mostly worry-free until it actually gets a lot of traction.  At that point, you'll need to decide how you want to involve the human element in managing the deployment - either because PaaS costs are higher than a small dedicated IT team, or because you need the flexibility of an IT team in adapting your deployment to your application.   ENKI was created to offer an alternative to building that IT Team, while still paying on a pay-as-you-go basis much like Heroku.


Cloud 2.0 - the DevOps Revolution

I was recently at a content marketing seminar where I ran into Dave Nielsen, the co-founder of Cloud Camp.   We got to talking as we are always wont to do and he told me about his svDevOps meetup and the devops camps that he's been helping to organize.   As usual we realized we've been thinking the same way: the next phase of cloud computing is bringing the ease of use and cost savings of infrastructure as a service to IT as a service, and DevOps (integration of development and operations) is the next frontier.   

Our founder and CEO, Dave, wrote a paper a few years ago, "Why Cloud Computing Will Never Be Free," in which he asserted that the bulk of the costs of cloud computing are in the IT administration (not the computing itself) and that services will be the wave of the future that will make cloud infrastructure usable.  Here we are 3 years later and cloud providers are still focusing on dollars per instance-hour (or gigabyte-hour as the case may be) but not on the true cost of operations.   This is what the true Cloud 2.0 revolution will transcend.  And DevOps, both as a field of knowledge and a green field for automation, will bring that transcendence.   This is why we've been honing ENKI's services to truly deliver a "concierge cloud computing" experience to our clients.   I believe this is the only approach that will deliver cloud deep into the enterprise - or companies that think like enterprises - rather than simply conquering the perimeter.



SSD Storage And The Cloud: Are Reliability AND Speed Both Possible?

Storage has always been a major challenge for ENKI in building a high-performance cloud: how do we achieve reliability AND speed?  

For worry-free reliability, we have consistently chosen not to place storage on the compute instances, unlike Amazon or Rackspace's sliced-dedicated-server clouds.  If the storage is on the instance, any speed gains are offset by the possibility of data loss if the instance fails.   On the other hand, concentrating the storage demands of many instances onto a common storage infrastructure gets you persistent instances with full failover restartability, but it requires the centralized storage to be very high speed and connected to the instances over fast networking.  In other words, expensive.  To solve these problems, we've so far chosen Infiniband networking coupled with SANs that accelerate access to storage using SSD caching for commonly-used data, offering a large fraction of full-SSD performance without the price.

But now, Amazon and other cloud providers are upping the available storage speed in their clouds by placing SSD storage into their servers.  Do these increased speeds - necessary for today's cloud database and transactional loads - warrant building in even further incentives to decentralize storage?   I don't think so.  

First of all, today's applications - in order to reach the highest levels of transaction processing speed - are highly parallelized, meaning they run on multiple servers that have to exchange data in real-time.   So "landlocking" their data on the SSD inside a server actually serves to slow down the application, unless the storage can be fully fragmented (often called "sharded.")   In fact, with many applications the necessity for high speed synchronization between servers becomes so extreme that the networking speeds have to approach the memory access speeds to allow applications to scale linearly.   Very few cloud providers are putting that kind of networking in place because it's expensive.  

Second, placing SSD on the server doesn't solve the failover problem.  In fact it makes it worse in practice, because even more of the client's access-speed-critical data will be placed on the server.   The cloud provider who places SSD on the server is essentially dangling an irresistible treat in front of their customers, tempting them to leap off a dangerous cliff of unreliability.

There have been a few proposed distributed storage architectures which maintain access-critical data in local SSD cache on the server, but over time, these solutions have all become unidirectional storage products, used for media servers and such, because they didn't synchronize data between the separate local caches.   No products have yet been offered that provide distributed cache coherency for storage, mainly because customers with a full Infiniband or similar network structure tying their clouds together just haven't existed.

I think the solution to offering cloud with true enterprise-grade performance still lies with centralized storage: making SSD available as a shared resource, accessed over dedicated, fast networking.    This is the approach that we are going to be offering for our new Santa Clara datacenter cloud cluster in Q2 '13.    It also offers the additional benefit that the SAN can dynamically move the data to the most appropriate storage type (disks or SSD) depending on load, which reduces the overall cost of SSD storage for the cloud customer.


Want to change clouds? Introducing Concierge Onboarding Services

We recently met with one of our biggest boosters, a Fortune 500 CEO who has always been enthusiastic about ENKI's services.   He pointed out that many of his friends and former clients are unhappy with their choice of cloud providers, but are simply too busy, overworked, overwhelmed, and frazzled to even consider moving to another provider.   He recommended that we emphasize our capability to make migration to ENKI's cloud services, or onboarding new clients, effortless.

To that end, we decided to call out what we can do under the name, Concierge Onboarding.  Concierge, because like a real concierge, we take care of all the details of the onboarding process for you, including migration.  To address the objections that he brought up (and many of our prospects bring up) that keep people from changing cloud providers, we made the onboarding process guaranteed - both in time and money.    

Concierge Onboarding will cost you a fixed amount, agreed in advance.  

Concierge Onboarding will take a known amount of time which we will estimate in advance.  Of course, things can happen or you can add more to our plate, and it might take longer.  But we won't charge you anything - not for cloud computing resources or labor - if we can't make our initial estimate (unless of course you ask us to do more.)

Check out our cheeky page about it and let us know what you think!


Should you fire your cloud vendor?

I just read, "Firing A Cloud Vendor" by Chris Nerney in ChannelproNetwork, with great interest.  This topic matters to me because most of our clients come from other cloud vendors, or at least have had bad experiences with cloud computing at some point, and of course I'd like ENKI to learn from these experiences. Chris' assertion is that the primary reason to fire your cloud vendor is if they breach SLA, while it's the client's responsibility to make sure the SLA is comprehensive and business-specific.   While I agree with both of these points, the reasons to fire a cloud vendor are usually present when you first choose one.  There are two important observations that our experience has shown us make a critical difference in what cloud vendor you choose, as well as in whether firing the vendor will actually solve any problems you're experiencing.   These two observations boil down to "know thyself" and "know thy application."

The first observation is that the cloud client must know whether their business and team require an intimate or a hands-off relationship with the cloud vendor. Making the wrong choice at the beginning will inevitably leave you unhappy with the vendor because your expectations won't be met. If the vendor is highly hands-off, with only a web provisioning portal for cloud management and support staff who don't hold context with you or understand your application, you should adjust your expectations accordingly. In particular, you cannot expect a hands-off cloud vendor to be responsible for software crashes or misconfigurations on your servers, so you have to be able to administer them yourself. In the drive for the lowest per-hour compute pricing, there are many hands-off vendors out there, and if you have the internal IT skills to completely manage your own servers, they are a viable option. In this case, the SLA discussion is quite simple: did they meet their uptime promises, or any other promises relating to performance? Data loss, because of the self-management requirement, is always going to be at least partly your responsibility and not a sole reason for firing the vendor. The exception, of course, is if you discover that a hands-off vendor was the wrong choice for you. In that case, you should be careful that your contract allows you an early exit from your commitments. Many low-cost cloud vendors require no commitment, but this simply mirrors the fact that they have no commitment to you either!

On the other hand, if the cloud vendor offers full management (what we call "operations services"), then measuring their performance requires - as Chris points out - a detailed SLA that you negotiate with them, describing how their services react to exceptional conditions or to work orders that are necessary for your service to perform reliably. IT is very complex, and problems that violate your SLA requirements will inevitably occur from time to time, but as Chris's article points out, if this happens regularly, something is wrong with the vendor or the relationship. If the vendor responds quickly to SLA violations and provides an effective plan to correct them, then you are getting better service than most enterprises get from a hand-picked internal IT team (as I've seen in my time at companies large and small!). However, it's incumbent upon you to develop an intimate relationship with their team: let them know in advance of any requirements you have, like large spikes in load; meet regularly with their team; and, if possible, share documents (a "runbook") that define how your infrastructure should be managed. With such documents, the SLA becomes a requirement on the vendor for how well the runbook is managed and followed, which is much easier than enumerating every problem and response in the contract.
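To make the runbook idea concrete, here is a minimal sketch of what a single runbook entry might look like. Everything in it - the service name, thresholds, and escalation steps - is hypothetical, and real runbooks vary widely in format; the point is only that it captures, in writing, what the vendor should do when something happens.

```yaml
# Hypothetical runbook entry (illustrative only - not a real client system).
service: example-web-app
alert: high_5xx_error_rate
threshold: "more than 2% of requests failing over 5 minutes"
response:
  - check: load balancer health-check status for all application servers
  - action: restart the application process on any failing node
  - escalate: page the client's on-call engineer if errors persist 15 minutes
maintenance:
  - weekly: apply OS security patches during the agreed maintenance window
change_control: all other changes require a client-approved work order
```

With an entry like this in place, an SLA discussion becomes "did the vendor follow the runbook, and how quickly?" rather than an argument over whose fault an outage was.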

Whether you choose a hands-off or a hands-on cloud vendor, my experience is that the best predictor of satisfaction with a cloud vendor - or any vendor, actually - is the intimacy of the relationship you have with them. So even if you want a hands-off relationship, it may make sense to choose a vendor with a documented customer support process, 24x7 support, technical account managers, dedicated sales staff, and, if your business warrants it, access to their executives.

The second observation is that if you don't understand your application well, you will have problems with your cloud vendor. The dividing line between your responsibility to create a cloud-friendly, reliable application and the cloud vendor's responsibility to create a reliable platform to host it is never clear. If the application crashes repeatedly, performs poorly, or suffers "inexplicable" errors, the cause could be the cloud vendor's infrastructure, but it is more likely within your application or the configuration of your servers. We've found that about 80% of our clients' downtime is due to application or configuration problems. There are well-known "problem" applications, including WordPress (which crashes regularly if used as a web application rather than just a blogging or content management platform) and MySQL (which can lose data if configured incorrectly). If you have one of these apps, your cloud vendor cannot be held responsible for your downtime, but a good hands-on vendor will sit down with you, do a failure analysis, and make recommendations that you can use to improve your system's reliability. These recommendations may include changing technology, or changing testing and coding practices if you write the software yourself, so starting the relationship with the cloud vendor early in your product cycle can avoid many problems.
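As an example of the kind of MySQL misconfiguration that causes data loss, durability settings are a common culprit. The fragment below is a sketch of a safety-oriented `my.cnf`; the variables are real MySQL/InnoDB settings, but the right values depend on your workload, so treat this as a starting point for a conversation with your vendor or DBA, not a prescription.

```ini
# my.cnf (sketch) - durability-oriented InnoDB settings.
# These values trade some write performance for crash safety.
[mysqld]
# Flush the redo log to disk at every transaction commit (the safe
# setting). Values of 0 or 2 are faster but can silently lose up to
# about a second of committed transactions on a crash or power failure.
innodb_flush_log_at_trx_commit = 1

# Sync the binary log at every commit so replication and point-in-time
# recovery stay consistent with the data files after a crash.
sync_binlog = 1
```

You can check the live values on a running server with `SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';` - a surprising number of "inexplicable" data-loss incidents trace back to someone having relaxed these settings to chase benchmark numbers.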

If you understand your needs and the limitations of your application, and choose a cloud vendor accordingly, you should be able to avoid having to "fire" them in most of the situations that bring customers to us - and even in a few of the situations where we have had to part ways with clients.
