Risks and Issues of AIaaS

Large IT companies such as Amazon, Google, Apple, IBM and Microsoft are aware that AI will determine the dominant operating system of the future, and so they compete to dominate the business of providing AI services through Cloud Computing.

Quantifying the potential financial rewards is difficult, but for the leading artificial intelligence providers it is an unprecedented situation. AI could double the size of the cloud market to around 211 billion euros in the next few years, according to Rajen Sheth, senior director of product management at Google’s Cloud AI unit. Given the nature of machine learning, the more data a system receives, the better the decisions it makes, which fosters customer loyalty to its provider.

Complexity and Resources

Despite its great impact on society and the enormous demand for it, AI and its requirements are difficult to understand because of their complexity, and businesses and professionals are not prepared for its large-scale exploitation. These circumstances especially affect the most traditional companies, which are at a disadvantage amid disruptive transformation. To ensure a fair, affordable and agile integration of AI into society, benefiting both companies and their customers, specific tools are needed to address these entry barriers.

Many of these problems have already been described at length elsewhere.

A common way to solve this type of problem in the IT world is outsourcing. Given the enormous expectations and potential of AI, and this recurring trend among IT customers around the world, one would expect a rapid proliferation of cloud services specializing in AI.

Cloud Computing

Cloud Computing enables access to services from anywhere with an Internet connection and has introduced successful business models such as IaaS, PaaS and SaaS (Infrastructure, Platform and Software as a Service), which simplify the provisioning and use of technology.

By clustering and virtualising hardware – that is, making several computers work together as if they were one or making a single computer act as many, respectively – it is possible to abstract applications from the real hardware underlying the system and dynamically adjust the resources dedicated to the execution of a service: this is the most efficient form of on-demand computing.
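
To make the idea of elasticity more concrete, the sketch below shows a minimal, hypothetical scaling rule; it is not any provider's real API, just an illustration of how a platform might dynamically adjust the resources dedicated to a service as its load varies.

    # Minimal sketch of an elastic scaling rule (hypothetical, not a real provider API).
    # The platform measures load and adjusts the number of virtual instances serving
    # an AI workload, so the customer never has to reason about the physical hardware.

    def desired_replicas(current_replicas: int,
                         avg_cpu_utilization: float,
                         target_utilization: float = 0.6,
                         min_replicas: int = 1,
                         max_replicas: int = 32) -> int:
        """Return how many identical virtual instances should be running."""
        if avg_cpu_utilization <= 0:
            return min_replicas
        # Proportional rule: resize the fleet so utilization approaches the target.
        wanted = round(current_replicas * avg_cpu_utilization / target_utilization)
        return max(min_replicas, min(max_replicas, wanted))

    # Example: 4 instances running at 90% CPU lead the rule to ask for 6 instances.
    print(desired_replicas(current_replicas=4, avg_cpu_utilization=0.9))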

For these reasons, moving AI systems to the cloud looks like an ideal solution:

– First, they centralize knowledge, making systems smarter as they are used and therefore more efficient and effective.

– Second, their elasticity allows users to abstract away the hardware requirements these systems need in order to function.

For providers the bet is a safe one, but for customers, taking AI services to the Cloud can be a double-edged sword.

Outsourcing as a solution

Most cloud-based AI service companies employ algorithms, systems, architectures and infrastructure with proprietary aspects that serve the interests of the provider and not necessarily those of the customer. These circumstances also extend to the hardware, such as Google’s TPUs, with the consequence that the software built on it cannot easily be ported.

We live in an era in which the tendency is to outsource services, delegating to others the management and operation of systems that are increasingly critical for the business, while ignoring the medium- and long-term impact of the resulting loss of know-how. In addition, in many cases the impossibility of porting solutions to other providers, due to the proprietary aspects of the technology, can lead to a scenario of captivity and vendor lock-in.

This tendency affects AI in particular, where these circumstances are aggravated by its own particularities. We must not forget the enormous effort required to train customized models, and how sensitive their replacement becomes when those models feed into, or directly carry out, decision making.

This lock-in can also work against the general interest of customers and users, not only because of the corresponding loss of digital sovereignty for companies, but also because it perpetuates and strengthens the market position of the main Cloud service providers, in a market dominated by economies of scale.

Economy of scale

Providing Cloud infrastructure is a highly complex task involving enormous installations, with thousands of microprocessors housed in huge data centers: it can only be done with heavy capital investment, within the reach of a few industry giants.

For example, Google spent about $30 billion on its Cloud infrastructure in 2017. A fair estimate of the provisioning cost would be around half a million dollars per 32 square meters. With these figures, the scale of the investment becomes easier to understand: a typical Data Center occupies around 10,000 to 20,000 square meters.
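
As a quick back-of-the-envelope check of those figures (using only the numbers quoted above, so the result is indicative rather than authoritative), the provisioning cost of a single facility already runs into the hundreds of millions of dollars:

    # Rough provisioning-cost estimate built only from the figures quoted above.
    cost_per_block = 500_000   # dollars per 32 m^2 of data-center floor
    block_area = 32            # square meters

    for floor_area in (10_000, 20_000):   # typical data-center footprint, in m^2
        estimate = floor_area / block_area * cost_per_block
        print(f"{floor_area:>6} m^2  ->  ~${estimate / 1e6:,.0f} million")

    # Prints roughly $156 million for 10,000 m^2 and $312 million for 20,000 m^2,
    # before counting the hardware itself or its periodic refresh.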

Of course, these gigantic investments are undertaken because they are profitable according to well-studied business plans. The strong and growing demand for cloud services pays them back quickly, and most of these costs are passed on to the service without discouraging its use.

However, the high barriers to entry into the cloud services market allow those already positioned to establish strong control. To be more exact, according to 2018 figures, 60% of the cloud services market share belongs to IBM, Microsoft, Google and Amazon, while the next ten players together account for less than 20%. This concentration may affect the service itself, since these providers are able to set market policies.

Impact of Cooling

Cloud computing is also not exempt from technical problems that affect its operational cost:

Current statistics indicate that around 3% of the energy generated by humankind is consumed by the world’s Data Centers, even though they occupy less than 0.01% of the Earth’s surface. All the hardware needed to implement this architecture is concentrated in specific installations, where the heat dissipated by the machines accumulates and concentrates, so that a large part of that energy ends up being used by the cooling systems.

Cooling is the only way to guarantee stable temperature and humidity in these huge hangars, where hundreds or thousands of computers are concentrated, working at full capacity and releasing heat. Cooling keeps the hardware within its technical specifications and lets it run smoothly in the right performance range. If the temperature is not kept under control, the hardware becomes inefficient, its performance drops, energy consumption soars, and incidents caused by overheating damage multiply.
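
To get a feel for how much of a facility’s energy never reaches the computing hardware, the sketch below uses the industry-standard Power Usage Effectiveness (PUE) metric, the ratio of total facility energy to IT equipment energy; the PUE values are assumed, illustrative figures rather than numbers taken from this article.

    # Share of a data center's energy spent on cooling, power distribution and other
    # overheads, derived from PUE = total facility energy / IT equipment energy.
    def overhead_share(pue: float) -> float:
        """Fraction of total energy that does not reach the IT equipment."""
        return (pue - 1.0) / pue

    for pue in (1.1, 1.6, 2.0):   # assumed illustrative values, from efficient to poor
        print(f"PUE {pue:.1f}: {overhead_share(pue):.0%} of the energy is overhead")

    # Prints 9%, 38% and 50% respectively; cooling is the dominant part of that overhead.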

Such circumstances do not only matter from the point of view of operational cost (OPEX). When designing and deploying these centres, privileged locations can be sought that offer a certain degree of sustainability and prevention, at the price of increasing capital expenditure (CAPEX) on logistics and supply. For example, there are Data Centers located in the Pacific Ocean (Microsoft Natick) or in Antarctica (Ice Cube Lab).

All these factors make it difficult to form a fair view of the true economic cost of these services based exclusively on their technical costs.

In a scenario where such investment were channelled into using existing resources rather than creating dedicated ones, there would be no investment to recoup, and prices would reflect only the actual use made of the resource.

DainProject

DAIN is a next-generation artificial intelligence platform: a decentralized, geo-dispersed public computing network governed through blockchain and specialized in addressing and solving artificial intelligence problems.
