The previous article outlines the basic concepts behind cloud computing, explains its technicalities and shows the differences between the various service models. This text explores the real benefits of outsourcing these processes to the cloud.
The amount of computing power a company needs to perform its tasks efficiently is not constant over time. You can observe this even on a personal computer: even if you buy a device with a considerable power reserve, sooner or later you will find it insufficient. In other words, you purchase an expensive piece of equipment whose capabilities you use only to a small extent – until one day you realise you could do with a more powerful machine.
This is not an issue with cloud computing, since many providers let you adjust the volume of resources you use. During periods of lower traffic, maintaining the IT infrastructure is cheaper because less computing power is consumed – and, when necessary, more memory or more processor cores can be brought online at little cost. Stackmine’s migration of an automotive wholesaler’s sales platform to the AWS cloud is a good example. Thanks to auto-scalable infrastructure built on Elastic Load Balancing (ELB), the Client no longer needs to worry about computing power – capacity is adjusted dynamically.
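The idea behind such dynamic adjustment can be sketched in a few lines of Python. This is an illustrative model of target-tracking scaling, not Stackmine’s actual configuration: the function name, the 60% CPU target and the fleet limits are all assumptions made for the example.

```python
import math

def desired_capacity(current_instances: int, avg_cpu: float,
                     target_cpu: float = 60.0,
                     min_instances: int = 1, max_instances: int = 10) -> int:
    """Return how many instances the fleet should scale to.

    Capacity grows (or shrinks) in proportion to how far the observed
    metric sits above (or below) the target, clamped to fleet limits.
    """
    if current_instances == 0:
        return min_instances
    raw = current_instances * (avg_cpu / target_cpu)
    return max(min_instances, min(max_instances, math.ceil(raw)))

# Quiet period: low load, so fewer machines are kept running (and paid for).
print(desired_capacity(4, avg_cpu=20.0))  # → 2
# Traffic spike: the fleet grows to absorb the extra load.
print(desired_capacity(4, avg_cpu=95.0))  # → 7
```

In a real AWS setup this decision is made by the platform itself based on metrics such as CPU utilisation or request count; the point of the sketch is only to show why you pay for less capacity when traffic is low.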
Equipment wears out over time and, according to Murphy’s Law, a major failure will occur at the worst possible moment. When data is processed on company-owned computers and servers, this can pose a serious problem, so it is crucial that any faults are fixed as quickly as possible. That means employing specialists who are on call around the clock. Some larger companies opt for redundancy – spare machines that can take over from the ones temporarily out of service. But what if the damage is caused by an external factor, such as a power supply problem? At the end of May 2022, the transformer station at the Bełchatów power plant failed, shutting down 10 of its 11 energy blocks. To guard against the risk of a blackout you can install your own generators, but this is a costly investment, and computers consume huge amounts of energy.
By using cloud computing, you can gracefully sidestep these risks. The calculations are performed on large server farms, which means the data is backed up in multiple secure locations, and the computing power you use is provided not by independent units but by several interconnected modules of one giant machine. This is, of course, a major simplification, but it captures the essence of cloud computing. When the power goes out for some reason, back-up supplies kick in – and you don’t pay millions for their installation and maintenance, because the cost of operating them is only a fraction of the subscription price. If a disk fails, or any other hardware failure occurs, it is not a problem: other machines take over the tasks, and since they all work together constantly, you won’t even notice the moment of the switchover. On the software side, cloud technologies enable the automation of many processes, streamlining work. For the aforementioned migration of the sales platform to the AWS cloud, the Stackmine team used Amazon Aurora, so the size of the database is adjusted dynamically as required. Deployment of the application was automated, which means less work and stress for the Client.
This is a somewhat controversial aspect. Indeed, if you simply compare the list prices of hardware from official distributors, renting cloud capacity over several years appears more expensive than owning your own server. That comparison, however, omits the additional costs that are easy to overlook at first.
First of all, as mentioned above, hardware needs to be replaced over time with newer, more efficient equipment, and reselling machines that are several years old will not cover the cost of buying new ones. Bear in mind, too, that it is good practice to keep a backup of data and spare computing capacity in case of failure or a sudden, unforeseen increase in demand.
In-house data processing also means providing space for the machines. That could mean renting or building your own server room, which significantly increases costs right from the start. With cloud computing, all a company needs is an office and a fast internet connection.
Having your own data processing centre means that you have to pay for your electricity supply – and, as I mentioned above, computers have quite an appetite for it. In addition, electricity prices in Poland are relatively high and this trend will not be reversed in the years to come, as it is directly linked to the high carbon intensity and low efficiency of our energy sector.
Finally, when you pay a subscription for cloud computing services such as AWS, you get technical support from experts who make sure the hardware works as it should. With in-house data processing, you have to hire such specialists yourself. As you can see, in the big picture cloud computing turns out to be a cheaper (and certainly more financially predictable) alternative with stable pricing.
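The cost argument above can be made concrete with a toy calculation. All figures here are made-up placeholders, not real prices or Stackmine data; the point is only that the naive comparison (hardware price alone) misses the running costs listed in the preceding paragraphs.

```python
def on_prem_annual_cost(hardware_price: float, lifetime_years: float,
                        space_rent: float, electricity: float,
                        staff: float) -> float:
    """Yearly total cost of ownership: amortised hardware plus the
    running costs that a simple price comparison leaves out."""
    return hardware_price / lifetime_years + space_rent + electricity + staff

# Hypothetical numbers, chosen purely for illustration.
hardware_only = 40_000 / 4          # what the naive comparison looks at
full_cost = on_prem_annual_cost(
    hardware_price=40_000, lifetime_years=4,
    space_rent=12_000, electricity=8_000, staff=60_000)

print(hardware_only)  # 10000.0 per year – looks cheap on paper
print(full_cost)      # 90000.0 per year once hidden costs are included
```

Whether a cloud subscription beats the on-premises total depends on your actual workload and local prices, but this is the comparison to make – not hardware price versus subscription fee.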
The computers used for cloud computing are not the same as desktop machines: you cannot simply install the software you already use and work from a remote desktop. Many systems need to be rewritten in a platform-appropriate way. This may raise concerns about taking the leap, but fear not – you are on the Stackmine website! We have the experience, the knowledge and a number of successful cloud migrations to our credit. Simply write to us and we will do the rest.