Whether you've just been tasked with finding a new hosting platform for your company's website, or you've just left the corporate world to build a “better mousetrap,” you will face many decisions about how to host your website or applications. When researching hosting options, it doesn't take long to identify dedicated servers and cloud servers as the two most popular choices. The question then becomes: how do you decide which is better for your specific application or business model? In this article, we compare five key factors to consider when choosing a server.
Dedicated and Cloud Servers both perform the same basic functions: they receive requests for the information they store, process those requests, and return that information to the user. While seemingly straightforward, the differences in how these options handle those functions can greatly affect implementation time, the user experience, and your bottom line.
Configuration Differences
As we've previously covered, Dedicated Servers and Cloud Servers are widely used by companies that need reliability and performance (see our Hosting Comparison article). Thanks to a more robust systems architecture, they can usually handle significantly more traffic, provide faster response times, and ensure greater application resiliency than shared or VPS hosting. This is achieved through the configuration of the physical server or, in the case of cloud, the underlying hypervisor. Let's review these configuration differences:
A dedicated server is a self-contained physical unit that includes all of the hardware a business needs to host its product. As the name implies, the unit is “dedicated” to a single tenant, allowing for maximum control and configurability. The processor, memory, and disk storage are chosen during the initial setup, and additional memory and disks can be added later as long as there are available slots or bays.
A cloud server, by contrast, is one of many virtual environments hosted on a physical machine. Cloud platforms tend to allocate storage on a large SAN or a clustered storage system such as Ceph, so the virtual machine data and hosted data are decentralized; this accommodates multiple cloud environments on the same physical server and allows state migration in the event of a failure. A hypervisor installed on the physical host handles partitioning it into differently sized cloud servers (virtual machines) and manages the physical resources allotted to each one, such as RAM, storage space, and processor cores.
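To make the hypervisor's role concrete, here is a minimal sketch of how a virtual machine can be defined with a fixed allotment of RAM and vCPUs, assuming a KVM host with the libvirt Python bindings installed; the VM name and sizes are purely illustrative:

```python
# Minimal sketch: carving a cloud server (VM) out of a physical host's resources
# with libvirt on KVM. The name and sizes are illustrative only; a production
# definition would also include disk, network, and boot devices.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>cloud-server-01</name>
  <memory unit='GiB'>8</memory>     <!-- RAM allotted to this tenant -->
  <vcpu>4</vcpu>                    <!-- processor cores allotted -->
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
domain = conn.defineXML(DOMAIN_XML)     # register the VM definition
domain.create()                         # boot the virtual machine
conn.close()
```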
Five Dedicated / Cloud Server Comparison Areas
The configuration differences between dedicated servers and cloud servers are clear. Here are five categories where these differences become apparent.
1. Performance
Data Transfer Speed
Dedicated servers typically store and process data locally. Because of this proximity, there is very little delay in retrieving and processing information when a request is made. This gives dedicated servers an edge when milliseconds and microseconds count, such as in heavy computation or high-frequency financial transactions.
Cloud servers, on the other hand, need to fetch data from the SAN, so each request has to traverse the backend infrastructure before it can be processed. Once the data is returned, it still has to be routed by the hypervisor to the allotted processor before it can be handled. This extra round trip to the SAN, plus the additional processing time, introduces latency that wouldn't otherwise be present.
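As a rough illustration rather than a formal benchmark, you can observe this difference by timing small reads from a locally attached disk versus a SAN- or network-backed mount. The file paths below are hypothetical:

```python
# Rough sketch: compare read latency of a file on local disk vs. a SAN/NFS-backed
# mount. Paths are hypothetical; run each several times to smooth out caching.
import time

def read_latency_ms(path, block_size=4096):
    """Time a single small read from the start of a file, in milliseconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read(block_size)
    return (time.perf_counter() - start) * 1000

print("local disk :", read_latency_ms("/var/data/sample.bin"), "ms")
print("SAN-backed :", read_latency_ms("/mnt/san/sample.bin"), "ms")
```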
Processing
Multiple cloud servers are typically housed on a single physical server, so processor cores need to be managed carefully to avoid performance degradation. This management is done by the hypervisor, an application built specifically to divide physical server resources among the underlying cloud servers. Because of the way most hypervisors allocate resources, this can add another layer of latency: every request must be scheduled and placed into a queue before it is executed.
Dedicated servers, by definition, have processors devoted to the application or website hosted on the server. They do not need to queue requests unless all processing power is already in use, which allows the greatest level of flexibility and capability. For this reason, many enterprise-level systems engineers choose dedicated servers for CPU-intensive tasks while using cloud servers for other workloads.
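On Linux, one way to see the hypervisor's scheduling overhead from inside a cloud server is to watch “steal” time, the CPU time the hypervisor hands to other tenants. The snippet below is a simplified sketch of that check:

```python
# Sketch: estimate hypervisor "steal" time on a Linux cloud server by sampling
# /proc/stat. Steal is CPU time the hypervisor scheduled away to other tenants;
# on a dedicated server this value stays at (or near) zero.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]          # aggregate "cpu" line
    return [int(x) for x in fields]

def steal_percent(interval=1.0):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    delta = [a - b for a, b in zip(after, before)]
    total = sum(delta)
    steal = delta[7] if len(delta) > 7 else 0      # 8th field is steal time
    return 100.0 * steal / total if total else 0.0

print(f"CPU steal over 1s: {steal_percent():.2f}%")
```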
Networking
Cloud servers provide advanced flexibility and scalability thanks to their decentralized data storage and shared nature. While sharing works well for many resources, sharing a physical network interface puts a tenant at risk of bandwidth throttling when other tenants on the server are using the same interface. Many hosting providers offer the option of provisioning a dedicated network interface card (NIC) to a cloud server, which is recommended if you need the maximum available bandwidth. However, dedicated NICs can be costly because of the complexity involved in implementing them.
Dedicated servers are not at risk of throttling caused by a shared environment, since their network interfaces are dedicated to the hosted application. Networking is also far simpler with dedicated servers, which introduces fewer points of failure.
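If you suspect shared-NIC contention, a crude spot check is to push a known amount of data to a listener you control on another machine (for example, a netcat listener that discards its input). The host, port, and payload size in this sketch are hypothetical:

```python
# Crude sketch: measure throughput to a listener you control to spot-check
# whether a shared NIC is being throttled. Host, port, and payload size are
# hypothetical; results also depend on the network path between the machines.
import socket
import time

def throughput_mib_per_sec(host, port, payload_mib=64):
    chunk = b"\x00" * (1024 * 1024)                 # 1 MiB per send
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        for _ in range(payload_mib):
            sock.sendall(chunk)
        elapsed = time.perf_counter() - start
    return payload_mib / elapsed

print(throughput_mib_per_sec("192.0.2.10", 5001), "MiB/s")
```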
2. Scalability
Storage
Cloud server storage expansion is virtually limitless, provided the vendor is using a recent hypervisor and operating system. Because the SAN-backed storage lives off-host, additional space can be provisioned without touching the cloud server itself, so cloud storage expansion will not usually incur downtime. This makes cloud servers a clear fit for high-profile or unproven products that may require massive, instant scalability.
Dedicated servers have limited storage capacity, bounded by the number of drive bays or DAS arrays physically available on the server. Additional storage can be added only if there are open bays. Adding drives to open bays can generally be accomplished with a modern RAID controller, its associated memory module and battery, and an underlying LVM volume layout. Additional DAS arrays, however, are rarely hot-swappable and will require an outage to add. This downtime can be avoided, but doing so takes a significant amount of preparation and generally requires maintaining multiple copies of critical application data in a multi-server setup.
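For reference, once a new drive is physically seated in an open bay, growing an existing LVM-backed volume typically follows the sequence sketched below. The device, volume group, and logical volume names are hypothetical, and the last step assumes an ext4 filesystem; verify every name against your own layout before running anything:

```python
# Sketch of the typical LVM expansion sequence after a new drive is added to an
# open bay. Device, volume group, and logical volume names are hypothetical;
# the final step assumes ext4 (use xfs_growfs for XFS instead).
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate", "/dev/sdb"])                          # initialize the new drive
run(["vgextend", "vg0", "/dev/sdb"])                   # add it to the volume group
run(["lvextend", "-l", "+100%FREE", "/dev/vg0/data"])  # grow the logical volume
run(["resize2fs", "/dev/vg0/data"])                    # grow the filesystem online
```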
Processing
Cloud server customers are limited to the processor speeds and cloud node types their hosting provider offers. While additional cores can be provisioned to a cloud tenant, limits may be hit depending on the node's occupancy and the resources already allocated on it, which can constrain large-scale hosts within a cloud environment. That said, if cores are available on the server, they can be provisioned instantly.
Dedicated servers cannot change their processors without a maintenance window. If additional processing capability is needed, a site will either need to be migrated to a completely different server (see point #3) or be networked with another dedicated server to help manage rapid platform growth.
3. Migration
Cloud server resources can be provisioned instantly and are limited only by the underlying host or node. However, large expansions will require scale-out planning that leverages multiple cloud servers or a migration to a dedicated or hybrid cloud architecture.
Dedicated server migrations face many of the same limitations. In both cases, the downtime is a side effect of transferring the operating system and data from the old physical server to the new one.
Seamless migration is achievable in both cases, but it requires a significant investment in time and resource planning. The new solution should account for both current and future growth and provide an effective scalability plan. The old and new solutions will need to run concurrently until the “switch is flipped” and the new server(s) take over, and the old server(s) should then be kept as a backup for a short time to confirm that the new platform is performing within operational expectations.
4. Systems Administration / Operational Differences
Cloud server planning and operation have considerably different implications than their dedicated counterparts. While scaling is generally faster and has less impact on operations, it has a much lower ceiling of capability, so the limitations of the cloud environment need to be analyzed and planned for. Cloud servers do let you focus on solutions automation (e.g. Docker, Kubernetes, Puppet, Chef) and optimize your server usage for cost and efficiency. Today, that level of automation is much harder to achieve with “one size fits all” dedicated server providers that do not tailor products to your needs.
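As a small illustration of what this automation can look like in practice, the sketch below uses the official Kubernetes Python client to scale a workload on demand; the deployment name and namespace are hypothetical:

```python
# Sketch: scaling a workload with the Kubernetes Python client, the kind of
# solutions automation cloud environments make easy. The deployment name and
# namespace are hypothetical; a working kubeconfig is assumed.
from kubernetes import client, config

def scale_deployment(name, namespace, replicas):
    config.load_kube_config()                        # use the local kubeconfig
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},       # set the desired replica count
    )

scale_deployment("web-frontend", "production", replicas=6)
```

The same call could be wired to a traffic or cost metric so capacity follows demand automatically.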
At Adaptive Data Networks, we can assist you in implementing and maintaining an environment of any size. We strive to provide a tailored solution rather than cookie-cutter options, and as a result we have extensive experience with many CI/CD and systems operations tools such as Puppet and Chef.
Dedicated servers generally require a broader understanding of systems administration, as you may be responsible for monitoring your own hardware. You also need a comprehensive understanding of your load profile to avoid over- or underestimating your server's processing and storage requirements. Scaling the system and infrastructure requires a joint effort with your provider, and upgrades and maintenance demand careful planning and engineering to prevent downtime.
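A lightweight starting point for building that load profile is periodic sampling of CPU, memory, and disk headroom, for example with the psutil library; the 80% warning threshold below is an arbitrary placeholder:

```python
# Sketch: periodic capacity sampling with psutil to build a load profile and
# catch a dedicated server approaching its limits. The 80% threshold is a
# placeholder; in practice these numbers would feed your monitoring system.
import psutil

def capacity_snapshot():
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

for metric, value in capacity_snapshot().items():
    flag = "WARN" if value > 80 else "ok"
    print(f"{metric:15s} {value:5.1f}%  {flag}")
```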
Providers usually offer managed support, such as Adaptive Data Networks' Adaptive Support, to help IT professionals with server management.
5. Price
Both cloud servers and dedicated servers have options that can make their cost profiles vary widely. In our discussion of networking, we mentioned that dedicated network interfaces for cloud servers can be a valuable, albeit expensive, option. Likewise, dedicated servers can be outfitted with terabytes of memory, NVMe disks, 10/25/100GbE network cards, and countless other hardware options that increase the cost. Cloud servers generally have a cost advantage at the lower end of the spectrum but tend to lose their cost efficiency at scale, while a dedicated server has a higher entry cost but provides more reliable and cost-efficient scaling as your product grows.
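As a purely illustrative exercise, you can model the crossover point where a flat-rate dedicated server becomes cheaper than per-resource cloud pricing. Every number below is invented for the sake of the example, not a quote from any provider:

```python
# Purely illustrative cost model: every number here is hypothetical, not a
# quote. It shows how per-resource cloud pricing can overtake a flat dedicated
# rate as a workload grows.
CLOUD_PER_VCPU = 25.0     # $/month per vCPU (hypothetical)
CLOUD_PER_GB_RAM = 5.0    # $/month per GB of RAM (hypothetical)
DEDICATED_FLAT = 400.0    # $/month for a fixed 16-core / 64 GB box (hypothetical)

def cloud_monthly(vcpus, ram_gb):
    return vcpus * CLOUD_PER_VCPU + ram_gb * CLOUD_PER_GB_RAM

for vcpus, ram_gb in [(2, 8), (4, 16), (8, 32), (16, 64)]:
    cloud = cloud_monthly(vcpus, ram_gb)
    cheaper = "cloud" if cloud < DEDICATED_FLAT else "dedicated"
    print(f"{vcpus:2d} vCPU / {ram_gb:3d} GB: cloud ${cloud:6.0f} "
          f"vs dedicated ${DEDICATED_FLAT:.0f} -> {cheaper}")
```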
Conclusion
Both Dedicated and Cloud Servers receive requests, process them, and return information to the user. The architectural differences between the two affect how they handle those requests: dedicated servers excel at raw performance, while cloud servers are easier to scale. The power of a dedicated server needs to be deftly wielded and controlled to take full advantage of it, while a cloud server offers more flexibility in how it is used and can be more cost effective.
While there is no “one size fits all” solution to hosting, an analysis of your business's needs and expected growth will help guide you in making that decision.
It is imperative to ensure your hosting provider has robust offerings and reliable support that can scale with your company's growth. Adaptive Data Networks' certified IT professionals draw from a wide range of consulting, hosting, and IT experience to provide the service and resources that will meet and exceed your needs.