Ubi-Flex-Cloud: ubiquitous flexible cloud computing: status quo and research imperatives

Akhilesh S Thyagaturu (Smart Edge Commercial Division (SECD), Intel Corp, Chandler, Arizona, USA)
Giang Nguyen (Chair of Haptic Communication Systems, CeTI, Technische Universität Dresden, Dresden, Germany)
Bhaskar Prasad Rimal (The Beacom College of Computer and Cyber Sciences, Dakota State University, Madison, South Dakota, USA)
Martin Reisslein (School of Electrical, Computer, and Energy Engineering, Ira A Fulton Schools of Engineering, Arizona State University, Tempe, Arizona, USA)

Applied Computing and Informatics

ISSN: 2634-1964

Article publication date: 19 May 2022

Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Citation

Thyagaturu, A.S., Nguyen, G., Rimal, B.P. and Reisslein, M. (2022), "Ubi-Flex-Cloud: ubiquitous flexible cloud computing: status quo and research imperatives", Applied Computing and Informatics, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/ACI-02-2022-0029

Publisher

Emerald Publishing Limited

Copyright © 2022, Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein

License

Published in Applied Computing and Informatics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Cloud computing has heralded tremendous advances in applied computing and informatics around the world [1, 2]. Modern societies depend on reliable, secure cloud computing for a wide range of critical functions, including education through virtual learning environments [3–5] and health care [6–8], both for classical health care topics, such as heart health [9], as well as newly emerging diseases, such as Covid-19 [10–13]. Moreover, the influence that social media exert on people in conjunction with advanced cloud computing models enables sophisticated cyber influence campaigns for a wide range of purposes, ranging from public health awareness to military conflicts [14–18]. Also, the ongoing roll-out of fifth generation wireless systems (5G) will enable a new range of use cases that require low-latency communication and compute processing [19, 20], e.g. for tactile internet and human-in-the-loop systems [21, 22].

Effective cloud computing is a key enabler for this wide range of societal functions and therefore deserves close attention and further research and development so as to broadly support the advance of civilization. Past cloud computing research and development has mainly focused on reliably and efficiently providing vast computing resources in large data centers. While large data centers will likely remain important and should be further optimized, we identify two newly emerging dimensions of cloud computing research that will likely become highly important in the near- to mid-term future in the applied computing and informatics domain: Flexibility and Ubiquity. With flexibility, which we refer to as Flex-Cloud, we mean the flexibility to scale the capabilities of a given cloud computing system, e.g. to scale from a small-scale private cloud to a large-scale public cloud, as well as the flexibility to scale the performance and reliability of a given cloud computing system by varying the boundary of software- vs. hardware-based computing.

With ubiquity, which we refer to as Ubi-Cloud, we mean the continuous cloud computing support of end-user applications and end devices that are mobile across a wide range of varying spatial locations and with a wide range of network connectivities, which are often based on wireless communication. Low-latency cloud computing support is often vital for these mobile applications that may support a wide range of critical tasks, e.g. the control of autonomous vehicles or industrial production plants [19–22].

Cloud computing has been surveyed from a wide range of perspectives. Overviews of the basic principles and terminologies of cloud computing have been provided in [23, 24], while the perspective of fog computing has been covered in [25]. Scheduling mechanisms for cloud computing have been surveyed in [26, 27] and related load balancing mechanisms have been surveyed in [28, 29]. General nature-inspired optimization mechanisms for cloud computing have been surveyed in [30]. The communications technologies enabling cloud computing have been covered in [31], while other surveys have covered mechanisms related to security [32], fault tolerance mechanisms [33] and energy efficiency [34]. A few surveys have covered specific cloud computing application domains, such as health care [35, 36] and the Internet of Things (IoT) [37]. Our review paper is orthogonal to the existing cloud computing review and survey articles in that we focus on the aspects of flexibility and ubiquity in the cloud computing services, which to the best of our knowledge have not previously been covered.

This review paper presents two prominent focus areas of the Flex-Cloud concept, namely the flexible scaling of computing in private and public clouds in Section 2 as well as hardware to software flexibility in Section 3. Next, Section 4 covers the Ubi-Cloud concept of cloud computing across the network edge region, in the physical vicinity of the end-users. Section 5 covers the cloud computing support mechanisms specifically for end-user mobility. Each section describes the current state of the art, and outlines research imperatives for the further development of the respective dimensions of applied cloud computing. Overarching conclusions and future research directions are provided in Section 6.

2. Flex-Cloud: scaling of computing in private and public cloud

2.1 Background and review of existing approaches

This section focuses on the Flex-Cloud concept of scaling a given cloud computing system for a given organization or set of computing tasks; whereby the scaling occurs “in-place” in the sense that mobility or edge networks are not considered in this section. Rather, this section focuses on approaches for flexibly scaling the computing power of a given cloud computing system up and down, as well as approaches to flexibly utilize either private or public clouds, both with flexibility for the scaling of the computing power as well as other vital metrics, such as availability and cost.

The need for a Flex-Cloud dimension in cloud computing originates from the rapid changes of the needs for applied computing and information technology (IT) support in today’s typical organizations. Organizations may grow, re-structure, or shrink and the cloud computing infrastructures and platforms should continuously support the development and operations in an organization throughout such changes. Dynamic changes in an organization may imply changing requirements for a wide range of applied computing resources, such as on-demand virtual machines (VMs), development platforms and production platforms. In order to satisfy these needs for flexibility, cloud computing infrastructures that are resilient and elastic should be available on-demand. While cloud computing as a fundamental concept can in principle be configured to provide low-cost, elastic platforms for development and operations tasks [2, 24, 38], doing so flexibly and over a wide range of scales still poses significant challenges.

One important aspect of the Flex-Cloud dimension is the flexible scaling from private cloud computing to public cloud computing, and vice versa. Traditional public clouds, such as Amazon Web Services (AWS) and Microsoft Azure, are proprietary black-box clouds that are provided by distant data centers that scale to enormous sizes [39]. While these public clouds can provide excellent reliability and elastic scaling of the subscribed cloud computing resources, they force users to relinquish full control over the data that are to be processed. However, some data-related processes in an organization may require that the cloud computing is conducted on-site, e.g. due to compliance requirements for on-site data retention. Also, concerns about ease of administration with full control of the specifics of the data warehousing and processing may lead to a desire for operating a private cloud system on-site.

Recent research has resulted in management frameworks that employ the OpenStack platform for flexibly provisioning private cloud systems [40–45]. OpenStack controls different types of resources that are typically represented as nodes to provide cloud services. For instance, the OpenStack compute service Nova permits the creation of VM instances on demand [46]. These VMs can then be utilized to provide Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) for various departments in an organization.
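
As a concrete illustration of such on-demand provisioning, the following Python sketch boots a Nova VM instance through the openstacksdk client; the cloud profile name and the image, flavor and network identifiers are placeholder assumptions that would have to match the organization's own OpenStack deployment.

```python
# Minimal sketch of on-demand VM provisioning with the OpenStack SDK.
# The cloud entry "private-cloud" and the image/flavor/network names are
# placeholders (assumptions); they must exist in the local OpenStack setup.
import openstack

conn = openstack.connect(cloud="private-cloud")  # credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private-net")

server = conn.compute.create_server(
    name="dev-platform-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# Block until the instance is ACTIVE, then report its status.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```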

Aside from these technology aspects there are important economic considerations related to the flexible scaling of cloud computing [47, 48]. Generally, a sharing economy of computational resources can be achieved by a cloud service provider that serves the aggregate of computing demands from a collection of users (customers). Further, cooperation between cloud service providers that jointly decide on federation policies can maximize the total federation profit [49]. The economies of scale achieved by large cloud service providers or the cooperation of cloud service providers generally drive down the unit cost of computing [50]. From a user perspective, narrow considerations of the rates that are charged by cloud service providers make the offloading of specific services, such as e-mail services [51], medical record keeping [52], or educational services [53], appear to be quite cost-effective, especially over short time horizons, e.g. 1–3 years [54], and if complex regulatory requirements are considered [55].

However, these considerations of the economies of scale of the charging rates do not necessarily mean that outsourcing the computing to a distant public cloud service is the best solution for any enterprise from an economic perspective. Public cloud services may be the best option at some point in an enterprise’s lifetime and under specific market conditions [56]. Detailed cost analyses of the total cost of ownership (TCO) of cloud computing services versus on-premise computing over long time horizons of 4–10 years indicate that depending on the usage scenarios, offloading to a remote cloud service may cost significantly more than keeping the computation services on a local on-premises cloud [54, 57].
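
The long-horizon trade-off can be illustrated with a back-of-the-envelope TCO comparison in Python; all cost figures below are purely hypothetical placeholders for illustration and are not taken from the cited studies [54, 57].

```python
# Hypothetical back-of-the-envelope TCO comparison (all cost figures are
# illustrative placeholders, not values from the cited studies [54, 57]).
def cloud_tco(monthly_fee, egress_per_month, years):
    """Recurring public-cloud cost over the planning horizon."""
    return (monthly_fee + egress_per_month) * 12 * years

def on_prem_tco(capex, admin_per_year, power_per_year, years, refresh_years=5):
    """Up-front hardware plus operations, with periodic hardware refresh."""
    refreshes = max(0, (years - 1) // refresh_years)
    return capex * (1 + refreshes) + (admin_per_year + power_per_year) * years

for horizon in (3, 5, 10):
    cloud = cloud_tco(monthly_fee=6000, egress_per_month=1000, years=horizon)
    local = on_prem_tco(capex=150000, admin_per_year=25000,
                        power_per_year=10000, years=horizon)
    cheaper = "public cloud" if cloud < local else "on-premises"
    print(f"{horizon:2d} years: cloud ${cloud:,} vs on-prem ${local:,} -> {cheaper}")
```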

2.2 Future research and development directions

While the recent research on private clouds has provided flexible scaling mechanisms for a given private cloud system, the transition and inter-operation between private and public clouds is in its infancy [58, 59]. Future research needs to examine interoperability mechanisms that permit a well-controlled, seamless inter-operation of private and public clouds. The management mechanisms for scaling up from a private cloud to a public cloud and, in reverse, from a public cloud to a private cloud need to be thoroughly studied.

Also, high levels of availability as well as privacy and security play increasingly important roles both in private clouds and in inter-operating private and public clouds. Future research needs to examine strategies for ensuring high availability, e.g. clustering strategies. Furthermore, strategies that ensure the security and privacy of documents and data to the highest levels of trustworthiness, safeguard them against a multitude of malicious attacks [60–63] and comply with applicable regional regulations, e.g. in Europe [64], need to be researched in detail.

In addition, flexible massive data processing capabilities with high levels of availability are required for the emerging digital twin (DT) concept. A DT is an integrated multiphysics, multiscale and probabilistic simulation of a system that uses high-fidelity physical models, sensor updates and historic data [65]. The twinning process is supported by the continuous interaction, communication and synchronization between the DT and the physical (real-life) twin as well as its surrounding physical environment [66]. High-fidelity physical DT models require massive data processing and high availability, which likely require the seamless inter-operation of public and private clouds.

Core principles of today's cloud computing are ubiquity and availability, regardless of the underlying communication networks. However, if mission-critical and latency-sensitive constraints, as well as the quality of experience and resilient cloud service requirements [67], are not met, there is an economic loss. Therefore, a hybrid cloud architecture envisioned in [68] with software-defined intelligence, e.g. dynamic workload aggregation and network capacity planning, could be a promising option for future cloud designs. Note that cloud economics is complex due to several parameters, e.g. performance, dedicated/shared resources, business agility, business resilience and business strategies. A three-tier market model of marketplace users and cloud providers has strived to capture these complexities [69]. However, comprehensive studies are needed to understand: 1) how profitable software as a service (SaaS) providers, which shoulder more computing management responsibilities, are compared to PaaS or IaaS providers [70, 71], 2) how the interplay among these different cloud service paradigms can be comprehensively modeled in terms of provider profitability and performance, and 3) how a customer can optimally trade off the TCO as well as the performance levels and feature availabilities of these cloud service paradigms versus on-premise computing.

3. Flex-Cloud: software vs. hardware based computing

3.1 Background and review of existing approaches

This section focuses on the Flex-Cloud concept of scaling from the computing of functions on general-purpose computers in software to the computing with hardware acceleration, including the transition between these two computing paradigms. While large data centers are typically based on computing in software on general-purpose compute servers, smaller cloud systems that may be faced with specialized tasks are increasingly considered for hardware acceleration. For instance, cloud computing systems in an edge computing setting may be tasked with highly demanding specific functions that relate to the processing of wireless communication signals. The aspects of ubiquity of edge computing settings are the focus of Section 4. The present section focuses on the flexible scaling of the computing in a given cloud computing system, which may be operating at a specific location in an edge computing setting, from software- to hardware-based computing, and vice versa.

An extensive set of recent studies have explored strategies for accelerating the computation of a variety of specific functions, e.g. functions relating to communication signals and neural networks, on general-purpose computers [72, 73]. The existing studies have mainly focused on strategies for accelerating isolated specific aspects of central processing unit (CPU) processing as well as memory accesses and input/output to the computing platforms and infrastructures.

3.2 Future research and development directions

As data computing loads typically arrive at the cloud computing nodes via packet-switched communication networks, future research needs to examine how to interface the flexible range of software- and hardware-based computing approaches with high-speed low-latency data packet input-output frameworks. Recent fast packet processing frameworks are typically based on the Data Plane Development Kit (DPDK) as well as eXpress Data Path (XDP) and extended Berkeley Packet Filter (eBPF) techniques to speed up the input and output of data packets from the network interfaces as well as the data packet processing in software [74–78]. Future research needs to find flexible ways to interface data packets rapidly with both conventional software processing modules and hardware acceleration modules. Also, compression techniques for reducing the overhead of the packet protocol headers [79] should be integrated into the novel flexible high-speed data packet processing frameworks.
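
As a small numerical illustration of the header-overhead issue, the following Python sketch computes the payload efficiency of packets with and without header compression; the assumed sizes (a 40-byte uncompressed IPv4/UDP/RTP header compressed to roughly 3 bytes) are typical illustrative values, not results from [79].

```python
# Illustrative payload-efficiency calculation for small packets with and
# without header compression. Sizes are assumptions (40-byte IPv4/UDP/RTP
# header, ~3-byte compressed header), not results from the cited studies.
def efficiency(payload_bytes, header_bytes):
    return payload_bytes / (payload_bytes + header_bytes)

for payload in (20, 60, 160, 1400):
    plain = efficiency(payload, header_bytes=40)
    compressed = efficiency(payload, header_bytes=3)
    print(f"{payload:5d}-byte payload: "
          f"{plain:5.1%} efficiency uncompressed, "
          f"{compressed:5.1%} with header compression")
```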

Traditional cloud computing has always been about meeting the application demands, and to this end, the over-provisioning of resources, replication of data and stand-by operations have been standard techniques to meet the service level agreements (SLAs). An important direction that has emerged recently is to optimize the overall transactions in the data centers and cloud-native functions to improve the energy efficiency. Generally, computing in hardware is more energy efficient than computing in software [72, 73]. Future research needs to develop and evaluate green energy computing and power saving techniques that account for the energy consumption of hardware and software computing, and these energy-saving aspects could become components of future “green SLAs”.
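
The following Python sketch illustrates the kind of accounting that such green SLAs would entail: the estimated energy of serving a given task load with a software path versus a hardware-accelerated path is checked against an assumed hourly energy budget; all per-task energy figures and the budget are hypothetical.

```python
# Hypothetical "green SLA" check: estimate the energy of serving a workload
# with and without hardware offload and verify it against an energy budget.
# All per-task energy figures and the budget are illustrative assumptions.
TASKS_PER_HOUR = 500_000
ENERGY_PER_TASK_J = {"software": 0.80, "hardware-accelerated": 0.15}
GREEN_SLA_KWH_PER_HOUR = 0.05    # assumed hourly energy budget in the SLA

for path, joules in ENERGY_PER_TASK_J.items():
    kwh = TASKS_PER_HOUR * joules / 3.6e6          # 1 kWh = 3.6e6 J
    ok = "meets" if kwh <= GREEN_SLA_KWH_PER_HOUR else "violates"
    print(f"{path:22s}: {kwh:.3f} kWh/h -> {ok} the green SLA")
```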

While cloud computing in a central data center provides large-scale flexibility, edge cloud computing is targeted toward low-latency and power-efficient approaches due to the proximity to the user applications. However, the orchestration of cloud-native applications from a data center cloud to edge cloud locations (nodes) is challenging due to the large geographical distribution of edge cloud nodes and the typically heterogeneous access network characteristics. In addition, managing the resource allocation of the edge cloud infrastructure so as to ensure reliability, as well as remotely monitoring the platform and network resources, is challenging and requires extensive future research. For instance, a workload that needs to be instantiated on an edge cloud node typically has a smaller set of available choices for specialized hardware and platform components compared to the large set of available choices in a central data center [72, 73]. As a result, cloud applications may need specific adaptations to execute efficiently on the smaller set of edge cloud node hardware and platform components. Future research needs to develop and evaluate such adaptation mechanisms so as to provide flexible, efficient hardware- and software-supported computing both in large-resource data center clouds and in edge clouds with restricted sets of available hardware and platform components.

4. Ubi-Cloud: computing at the network edge

4.1 Background and review of existing approaches

This section focuses on the ubiquitous nature of cloud computing at the network edge, i.e. in the space between the end-users and the backbone of the Internet. Since a large proportion of the Internet end-users are connected via wireless links to the Internet, we consider wireless networks as a typical first-hop toward the backbone of the Internet. Roughly speaking, wireless networks, such as the common fourth and fifth generation wireless systems (4G, 5G), consist of a wireless fronthaul that connects end-users via wireless communication to a radio node, e.g. a cellular base station. The radio node can be connected with a wide variety of (wireless or wired) technologies via a gateway over the so-called backhaul to the core network, e.g. the evolved packet core (EPC) in 4G systems and the 5G packet core (5GC) in 5G systems. The core network, in turn, connects to the Internet at large.

Recent research has examined the resource allocations across these different stages (layers) of wireless systems, i.e. the allocation of computation and communication resources to the radio, gateway and core network nodes, as well as to intermediate switching and gateway nodes that relay and process the traffic along the wireless end-user to Internet-at-large path. In particular, recent studies have explored the benefits of employing the software-defined networking (SDN) paradigm, which features separate control and data planes, i.e. the control is logically separated from the plane that transports and processes the actual data packets [80–82]. The studies found that the judicious sharing of the computation resources along the backhaul path can reduce the peak demands for computational resources in so-called multi-access edge computing (MEC, also known as mobile edge computing) nodes [83–86].
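
The multiplexing gain from such sharing can be illustrated with a short Python simulation: the peak of the aggregate demand of several MEC nodes is typically well below the sum of the individual peak demands. The demand traces below are synthetic and serve only as an illustration.

```python
# Illustration of statistical multiplexing of compute demands across MEC
# nodes: the peak of the aggregate is usually far below the sum of peaks.
# The demand traces are synthetic (log-normal), purely for illustration.
import numpy as np

rng = np.random.default_rng(seed=1)
num_nodes, num_slots = 8, 10_000
demands = rng.lognormal(mean=2.0, sigma=0.8, size=(num_nodes, num_slots))

sum_of_peaks = demands.max(axis=1).sum()          # provision each node alone
peak_of_sum = demands.sum(axis=0).max()           # provision a shared pool

print(f"sum of per-node peaks : {sum_of_peaks:8.1f} compute units")
print(f"peak of pooled demand : {peak_of_sum:8.1f} compute units")
print(f"multiplexing gain     : {sum_of_peaks / peak_of_sum:.2f}x")
```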

A critical aspect of ubiquitous cloud computing services is to ensure the integrity of the data transmitted over wireless channels, which may drop or corrupt data packets [87]. Recent research has developed network coding techniques that invest computational complexity in order to enable the recovery of dropped or corrupted data packets without complicated synchronization or signaling. The so-called random linear network coding (RLNC) solves a matrix inversion and multiplication problem to recover the data packets [88–92]. The computational challenges of RLNC can be addressed with efficient computation strategies on multicore processors [93, 94], or through innovative coding strategies that reduce the computing demands through sparse coding structures [95, 96].
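
To make the matrix character of RLNC decoding concrete, the following self-contained Python sketch implements random linear coding over GF(2): coded packets are random XOR combinations of the source packets, and the receiver recovers the sources by Gaussian elimination once it has collected enough linearly independent coded packets. Practical RLNC schemes [88–92] typically operate over larger Galois fields and add sliding-window or sparsity structure; this is only a minimal illustration.

```python
# Minimal RLNC illustration over GF(2): encoding XORs random subsets of the
# source packets; decoding performs Gaussian elimination over GF(2).
# Real RLNC codes use larger fields; this is only a sketch.
import numpy as np

rng = np.random.default_rng(seed=7)
K, PKT_LEN = 4, 8                                   # source packets, bytes each
source = rng.integers(0, 256, size=(K, PKT_LEN), dtype=np.uint8)

def encode(num_coded):
    """Generate coded packets as random GF(2) combinations of the sources."""
    coeffs = rng.integers(0, 2, size=(num_coded, K), dtype=np.uint8)
    payloads = np.zeros((num_coded, PKT_LEN), dtype=np.uint8)
    for i, row in enumerate(coeffs):
        for j in range(K):
            if row[j]:
                payloads[i] ^= source[j]
    return coeffs, payloads

def decode(coeffs, payloads):
    """Gauss-Jordan elimination over GF(2); returns None if rank < K."""
    A = coeffs.copy(); P = payloads.copy()
    row = 0
    for col in range(K):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None                              # not enough independence
        A[[row, pivot]] = A[[pivot, row]]; P[[row, pivot]] = P[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]; P[r] ^= P[row]
        row += 1
    return P[:K]

coeffs, payloads = encode(num_coded=K + 2)           # small coding redundancy
recovered = decode(coeffs, payloads)
print("decoded correctly:",
      recovered is not None and np.array_equal(recovered, source))
```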

Fiber-wireless (FiWi) access networks combine the high capacity, scalability and reliability of optical fiber networks with the flexibility and ubiquity of wireless networks to provide broadband services for mobile users as well as fixed subscribers [97]. The concept of integrating cloud computing and edge computing into the backhaul network of wireless access networks has been studied [68, 97]. Results show that integrating cloud and edge computing into backhaul networks is a promising solution for 5G to provide ultra-low latency and ultra-high bandwidth at the edge of the networks [98].

4.2 Future research and development directions

The increasing trend toward ever more demanding computation applications on untethered end-devices poses a wide range of challenging problems for future research and development along the Ubi-Cloud dimension and specifically toward the goal of ubiquitous distributed cloud computing that is highly responsive to user demands, yet dispersed over the layers (stages) of the wireless network systems. The distributed nature of the computing units and the signaling delays between the units make task and resource allocation highly challenging. Typically, classical centralized allocation algorithms are too slow to adapt to highly dynamic task load variations. A promising direction is therefore to allow local regions some autonomy for fast-paced decisions and to coordinate with a central controller over longer time horizons [99]. Federated learning, which exchanges limited learning parameter sets among multiple distributed agents that apply local learning and decision making to optimize allocations, can be one potential avenue for addressing this challenging problem [100–103]. More generally, the integration of machine learning techniques with ubiquitous cloud computing at the network edge [104–108] presents new workloads for Ubi-Cloud infrastructures, but also novel mechanisms for optimizing the provisioning and operation of such Ubi-Clouds. Both these workload characteristics and the optimization mechanisms need to be thoroughly researched.
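
A minimal Python sketch of the federated learning idea follows: each edge agent takes a few local gradient steps on its private data, and a coordinator only averages the exchanged parameter vectors (in the spirit of federated averaging); the linear regression task and the synthetic data are illustrative assumptions.

```python
# Minimal federated-averaging sketch: edge agents fit a shared linear model
# on private synthetic data and exchange only parameter vectors, not data.
import numpy as np

rng = np.random.default_rng(seed=3)
true_w = np.array([2.0, -1.0, 0.5])

def make_client(n):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 80, 120)]   # unequal local data sets
w_global = np.zeros(3)

for _ in range(50):                                  # communication rounds
    local_ws, sizes = [], []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(5):                           # local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w); sizes.append(len(y))
    # Coordinator: data-size-weighted average of the local parameters only.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("federated estimate:", np.round(w_global, 3), "true:", true_w)
```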

An emerging communications paradigm that is well aligned with the private clouds in Section 2 is the paradigm of 5G campus networks [109–112]. Conventionally, cellular wireless networks are connected to the public Internet via gateway nodes. In contrast, campus networks operate in complete isolation from the public Internet and are therefore well-suited for scenarios that require all communications and data to remain strictly on-site. 5G campus networks can operate in a so-called standalone (SA) mode that obviates the need to operate a legacy 4G long-term evolution (LTE) network (for the control plane) in conjunction with a 5G network; rather, in the SA mode, control and data proceed from a 5G wireless end-device via a 5G new radio (NR) base station to a 5G packet core (5GC). Future research needs to examine the efficient inter-operation between a private cloud (based, for instance, on OpenStack) and 5G campus networks. Depending on the campus layout, computing nodes may be distributed at the locations of the 5G NR base stations or throughout the network infrastructure that connects the 5G NR base stations with the 5GC. Importantly, the 5GC processing is based on cloud-native microservices that can be flexibly processed in cloud computing units [113, 114].

A related research challenge is to efficiently allocate cloud computing resources along the continuum from central data centers to the computing resources in the end-devices [115, 116] to efficiently support specific highly demanding applications. For instance, wireless sensor networks collect vast amounts of sensing data, while the relevant data that are extracted from the sensed data stream are typically very small in size. Through judicious placement of computing nodes along the network paths that collect the data, the transmitted data volume could potentially be reduced significantly [117–119]. Similarly, the management of green energy supplies [120] and the integration of cloud computing with the management of electric vehicles and their charging stations pose novel challenges for ubiquitous cloud computing [121–123].

The Ubi-Cloud should accommodate highly diverse network access methods spanning from terrestrial wireless to wired optical to non-terrestrial (e.g. satellite) connectivity. Each network access method has different characteristics that directly impact the cloud computing. For instance, non-terrestrial networks involving satellite connectivity have long delays due to the signal propagation between ground stations and low-orbit satellites. Advanced communication technologies for the Ubi-Cloud include millimeter waves in 5G, Terahertz waves in 6G, and free space optical communication, which enable very high-bandwidth and low-latency links for large-scale data transactions. However, these links require precise tuning and calibration of transmission and reception radio units to maintain a strict line-of-sight (LoS), as well as stable and static transceivers. These large-bandwidth links are not only used for wireless access but also for backhaul connectivity, which is referred to as integrated access and backhaul (IAB) technology. Cloud computing in central data centers that are reached over unstable backhaul links can cause reliability issues, and network protocols should be adapted for the variable link characteristics. One future research direction is to devise a hybrid data center cloud–edge cloud computing method: during stable backhaul link operation, the computing is conducted in a central data center; however, during periods of unstable backhaul link operation, the computing is temporarily conducted in edge cloud nodes. Thus, temporary backhaul link outages can be tolerated by the hybrid data center cloud–edge cloud computing by designing the applications and the cloud computing orchestration such that end-applications can still interact with the edge cloud applications (which are reached over the still functioning wireless fronthaul) when the backhaul connectivity to the data center cloud is compromised.
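
The following Python sketch outlines the envisioned hybrid placement policy under simple assumptions: an orchestrator periodically probes the backhaul and steers new tasks to the central data center while the backhaul is stable, falling back to the edge node otherwise; the probe structure and the thresholds are hypothetical.

```python
# Hypothetical sketch of the hybrid data-center/edge placement policy:
# route tasks to the central cloud while the backhaul is healthy, otherwise
# fall back to the nearby edge node. Probe values and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class BackhaulProbe:
    latency_ms: float     # measured round-trip time over the backhaul
    loss_rate: float      # measured packet loss ratio

def select_target(probe: BackhaulProbe,
                  max_latency_ms: float = 50.0,
                  max_loss: float = 0.02) -> str:
    """Return where the next task should be placed."""
    stable = probe.latency_ms <= max_latency_ms and probe.loss_rate <= max_loss
    return "data-center-cloud" if stable else "edge-node"

# Example probe sequence: the backhaul degrades and the policy fails over.
for probe in (BackhaulProbe(22.0, 0.001),
              BackhaulProbe(35.0, 0.015),
              BackhaulProbe(180.0, 0.12)):
    print(probe, "->", select_target(probe))
```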

In-network computing enables edge cloud computing on network nodes, such as switches, routers, and gateways. Traditional in-network computing includes caching, storage, and network filtering. Advanced in-network computing applications may involve machine learning techniques to detect traffic characteristics, e.g. for intrusion detection, to identify abnormal traffic patterns, for deep packet inspection of the data packet payload, as well as for dynamic encryption and decryption. In-network computing reduces the overall computing required at the edge cloud nodes and in the data center cloud nodes, e.g. if encrypted flows are decrypted at the gateway feeding into the last communication hop before the destination node, then decryption can be avoided at the destination node. Thus, large-scale cloud and edge applications could be separated into small functional units. Some small functional units can be executed on in-network nodes, thereby reducing the load on the edge and data center cloud nodes. Importantly, in-network computing could execute some small functional units with dedicated application-specific integrated circuit (ASIC) hardware accelerators (essentially as a form of the Flex-Cloud hardware processing principle, see Section 3) at line-rates, and thus reduce the overall end-to-end data processing latency (compared to software based execution at an edge or data center cloud node without an ASIC accelerator). Future research needs to thoroughly examine the trade-offs and operational mechanisms, e.g. the orchestration mechanisms, for in-network computing versus the computing at edge and data center cloud nodes.
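
A simple Python sketch of such a function-to-tier mapping follows; the list of functional units, the set of ASIC-supported units and the per-tier latency figures are illustrative assumptions.

```python
# Hypothetical mapping of small functional units to in-network, edge, or
# data-center execution. The function list, the set of ASIC-supported
# functions, and the latency figures are illustrative assumptions.
ASIC_SUPPORTED = {"decrypt", "header-filter"}        # line-rate in-network units
LATENCY_MS = {"in-network": 0.05, "edge": 2.0, "data-center": 25.0}

def place_unit(name, needs_state=False):
    """Place a functional unit on the lowest-latency feasible tier."""
    if name in ASIC_SUPPORTED and not needs_state:
        return "in-network"                          # executed at line rate
    return "edge" if not needs_state else "data-center"

pipeline = [("header-filter", False), ("decrypt", False),
            ("intrusion-detect", False), ("analytics-aggregate", True)]

total = 0.0
for unit, needs_state in pipeline:
    tier = place_unit(unit, needs_state)
    total += LATENCY_MS[tier]
    print(f"{unit:20s} -> {tier:12s} (+{LATENCY_MS[tier]} ms)")
print(f"estimated end-to-end processing latency: {total:.2f} ms")
```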

Decentralized energy trading based on blockchain and distributed ledger technology (DLT) is an emerging area that has been studied in the context of distributed cloud computing. However, blockchain and DLT have not yet been widely studied in the context of the energy sector. Generally, blockchains are not well suited for handling massive computations or for efficiently running consensus algorithms; moreover, blockchains consume massive amounts of energy. Two-tier cloud computing [68, 98] could potentially provide an avenue for efficiently trading decentralized energy and needs to be examined in detail in future research.

Aside from these technological challenges, edge cloud computing poses substantial economic and public policy challenges. Depending on national policies and regulations, the development, operation and management of edge cloud computing infrastructures may be the responsibility of cloud computing providers, telecommunications infrastructure operators, or separate commercial or governmental organizations. Reliable edge cloud computing infrastructures require proper investments and revenue sharing to be economically viable and these economic and public policy aspects need to be thoroughly examined in future research.

5. Ubi-Cloud: computing for mobile users

5.1 Background and review of existing approaches

This section focuses on the Ubi-Cloud aspect of seamlessly supporting mobile end-users with cloud computing services. Low-latency application computing for mobile users typically requires that the compute processes are migrated among the distributed computing nodes in the edge cloud computing infrastructure to follow and stay in close physical vicinity of the mobile users [124]. A wide variety of modern computing applications are specifically geared toward mobile users [125], e.g. mobile crowd sensing and object detection [126, 127], as well as control and management information systems for a wide variety of industrial, cyber-physical, and vehicle traffic systems [128–131].

The migration of complete containers hosting the computing applications is highly demanding and typically only realistic with some hardware acceleration [132]. Therefore, recent research has focused on developing techniques that only transfer the necessary state information of the computing applications [124, 133].

5.2 Future research and development directions

The localization of end-devices through wireless technologies, such as 5G, has been gaining interest for delivering location-based services. Data center and edge cloud applications can effectively deliver computing services to end-devices based on location-aware computing. Mobile end-devices often change their location, forcing the location-based services to adapt to the location changes. State-of-the-art localization techniques not only provide the current location of an end-device, but also help to predict the user movements and to perform the necessary service modifications to serve the applications on the end-device before the location actually changes. Future research needs to rigorously examine and refine these advanced techniques of location estimation and tracking of end-devices to ensure reliable low-latency service delivery through the efficient transfer of the necessary state information to the appropriate edge cloud computing nodes and through the effective adaptation of the location-based services based on the predicted future location.
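
The proactive adaptation described above can be illustrated with a simple Python sketch: the predicted next position of a mobile user (here, a plain linear extrapolation of recent positions) determines the edge node to which the application state should be transferred ahead of time; the coordinates and edge node placement are illustrative assumptions.

```python
# Illustrative proactive edge-node selection: predict the user's next
# position by linear extrapolation and pre-select the closest edge node
# for state transfer. Coordinates and node locations are assumptions.
import math

EDGE_NODES = {"edge-A": (0.0, 0.0), "edge-B": (1.0, 0.0), "edge-C": (1.0, 1.0)}

def predict_next(positions):
    """Linear extrapolation from the last two observed (x, y) positions."""
    (x1, y1), (x2, y2) = positions[-2], positions[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def closest_node(point):
    return min(EDGE_NODES, key=lambda n: math.dist(point, EDGE_NODES[n]))

trace = [(0.3, 0.0), (0.5, 0.2), (0.7, 0.4)]        # recent user positions
predicted = predict_next(trace)
current_node = closest_node(trace[-1])
next_node = closest_node(predicted)

if next_node != current_node:
    print(f"user near {current_node}; predicted position {predicted} ->"
          f" pre-transfer application state to {next_node}")
else:
    print(f"predicted position remains closest to {current_node};"
          f" no state transfer needed")
```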

6. Conclusions and outlook

We have introduced the Ubi-Flex-Cloud concept consisting of the two dimensions of cloud computing research and development focused on (i) the flexible scaling of the cloud computing capabilities and features, and (ii) the ubiquity of the cloud computing services. We have outlined topics for future research and development to address critical challenges along the Flex-Cloud and Ubi-Cloud dimensions.

An important overarching future research challenge is to make the various individual research advances along the Flex-Cloud and Ubi-Cloud dimensions compatible with each other. More specifically, an important overarching research imperative is to develop, evaluate and refine integration strategies that unify the various Flex-Cloud and Ubi-Cloud advances into cohesively integrated functioning Ubi-Flex-Cloud systems that achieve the dual goals of flexibility and ubiquity. An important additional overarching future research direction is to explore and evaluate optimization mechanisms for the Ubi-Flex-Cloud. Recent research has employed several strategies, including simulations [134], decision trees [135, 136], as well as nature-inspired strategies [137]. For the complex configuration optimizations in Ubi-Flex-Cloud settings, nature-inspired heuristics may be promising due to their simplicity and wide adaptability. Several recent nature-inspired approaches, e.g. [138–143], could be explored for configuring Ubi-Flex-Clouds in future research, possibly in hybrid approaches with other strategies, such as decision trees.
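
As a minimal illustration of how such heuristics can search a cloud configuration space, the following Python sketch runs a generic population-based search (a simple stand-in for the cited nature-inspired algorithms, not an implementation of any specific one of them) over a toy cost model that trades off latency against rental cost; the cost model and parameter ranges are illustrative assumptions.

```python
# Generic population-based heuristic search over a toy Ubi-Flex-Cloud
# configuration space (number of edge nodes, VM size). This is a stand-in
# for nature-inspired algorithms in general, not a specific one of them,
# and the cost model is an illustrative assumption.
import random

random.seed(42)

def cost(config):
    """Toy objective: weighted sum of estimated latency and rental cost."""
    edge_nodes, vm_cores = config
    latency = 100.0 / (edge_nodes * vm_cores)        # more capacity -> lower latency
    rent = 3.0 * edge_nodes + 1.5 * vm_cores         # more capacity -> higher cost
    return latency + rent

def random_config():
    return (random.randint(1, 10), random.choice([1, 2, 4, 8, 16]))

def mutate(config):
    edge_nodes, _ = config
    edge_nodes = min(10, max(1, edge_nodes + random.choice([-1, 0, 1])))
    return (edge_nodes, random.choice([1, 2, 4, 8, 16]))

# Evolve a small population: keep the best half, mutate it to refill.
population = [random_config() for _ in range(20)]
for _ in range(50):
    population.sort(key=cost)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = min(population, key=cost)
print(f"best configuration: {best[0]} edge nodes, "
      f"{best[1]}-core VMs, cost {cost(best):.2f}")
```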

References

1.Mitropoulos S, Douligeris C. Why and how informatics and applied computing can still create structural changes and competitive advantage. Appl Comput Inform. 2022. in print.

2.Rimal BP, Lumb I. The rise of cloud computing in the era of emerging networked society. Cloud Comp. Springer; 2017. 3-25.

3.Borse Y, Gokhale S. Cloud computing platform for education system: a review. Int J Comp Appl. 2019; 177(9): 41-5.

4.Monsalve-Pulido J, Aguilar J, Montoya E, Salazar C. Autonomous recommender system architecture for virtual learning environments. Appl Comput Inform. 2022. in print.

5.Mustafa A. The personalization of e-learning systems with the contrast of strategic knowledge and learner's learning preferences: an investigatory analysis. Appl Comput Inform. 2020; 17(1): 153-67.

6.Li X, Lu Y, Fu X, Qi Y. Building the Internet of Things platform for smart maternal healthcare services with wearable devices and cloud computing. Fut Gen Computer Sys. 2021; 118: 282-96.

7.Nannia L, Ghidoni S, Brahnam S. Ensemble of convolutional neural networks for bioimage classification. Appl Comput Inform. 2020; 17(1): 19-35.

8.Raghavan A, Demircioglu MA, Taeihagh A. Public health innovation through cloud adoption: a comparative analysis of drivers and barriers in Japan, South Korea, and Singapore. Int J Environ Res Public Health. 2021; 18(1): 334.1-334.30.

9.Shashikant R, Chetankumar P. Predictive model of cardiac arrest in smokers using machine learning technique based on Heart Rate Variability parameter. Appl Comp Inform. 2022. in print.

10.Battineni G, Chintalapudi N, Amenta F. SARS-CoV-2 epidemic calculation in Italy by SEIR compartmental models. Appl Comput Inform. 2022. in print.

11.Battineni G, Chintalapudi N, Amenta F. Forecasting of COVID-19 epidemic size in four high hitting nations (USA, Brazil, India and Russia) by Fb-Prophet machine learning model. Appl Comput Inform. 2022. in print.

12.Vaishnav PK, Sharma S, Sharma P. Analytical review analysis for screening COVID-19 disease. Int J Mod Res. 2021; 1(1): 22-9.

13.Vyas P, Reisslein M, Rimal BP, Vyas G, Basyal GP, Muzumdar P. Automated classification of societal sentiments on Twitter with machine learning. IEEE Trans Techn Soc. 2022. in print.

14.Chiang CP, Wang SJ, Chen YS. Manipulating cyber army in pilot case forensics on social media. J Supercomputing. 2022; 78: 7749-67.

15.Couretas JM. Cyber security and defense for analysis and targeting. An introduction to cyber analysis and targeting. Springer. 2022: 119-50.

16.Hui PM, Yang KC, Torres-Lugo C, Monroe Z, McCarty M, Serrette BD, Pentchev V, Menczer F. BotSlayer: real-time detection of bot amplification on twitter. J Open Source Softw. 2019; 4(42): 1706.1-1706.4.

17.Johnson N, Turnbull B, Maher T, Reisslein M. Semantically modeling cyber influence campaigns (CICs): ontology model and case studies. IEEE Access. 2020; 9: 9365-82.

18.Johnson N, Turnbull B, Reisslein M, Moustafa N. CNA-TCC: campaign network attribute based thematic campaign classification. IEEE Trans Comput Soc Syst. 2022. in print.

19.Hoeschele T, Dietzel C, Kopp D, Fitzek FH, Reisslein M. Importance of internet exchange point (IXP) infrastructure for 5G: estimating the impact of 5G use cases. Telecommunications Policy. 2021; 45(3): 102091.1-18.

20.Navarro-Ortiz J, Romero-Diaz P, Sendra S, Ameigeiras P, Ramos-Munoz JJ, Lopez-Soler JM. A survey on 5G usage scenarios and traffic models. IEEE CommST. 2020; 22(2): 905-29.

21.Fitzek F, Li SC, Speidel S, Strufe T, Simsek M, Reisslein M. Tactile internet: with human-in-the-loop. Cambridge, MA: Academic Press; 2021.

22.Jolfaei A, Usman M, Roveri M, Sheng M, Palaniswami M, Kant K. Guest editorial: computational intelligence for human-in-the-loop cyber physical systems. IEEE Trans Emerging Top Comput Intell. 2022; 6(1): 2-5.

23.Rashid A, Chaturvedi A. Cloud computing characteristics and services: a brief review. Int J Computer Sci Eng. 2019; 7(2): 421-6.

24.Rimal BP, Choi E, Lumb I. A taxonomy and survey of cloud computing systems. Proc. IEEE Fifth Int. Joint Conference on INC, IMS and IDC. 2009: 44-51.

25.Naha RK, Garg S, Georgakopoulos D, Jayaraman PP, Gao L, Xiang Y, Ranjan R. Fog computing: survey of trends, architectures, requirements, and research directions. IEEE Access. 2018; 6: 47980-48009.

26.Arunarani A, Manjula D, Sugumaran V. Task scheduling techniques in cloud computing: a literature survey. Future Generation Computer Syst. 2019; 91: 407-15.

27.Kumar M, Sharma SC, Goel A, Singh SP. A comprehensive survey for scheduling techniques in cloud computing. J Netw Computer Appl. 2019; 143: 1-33.

28.Kumar P, Kumar R. Issues and challenges of load balancing techniques in cloud computing: a survey. ACM Comput Surv (CSUR). 2019; 51(6): 1-35.

29.Shafiq DA, Jhanjhi N, Abdullah A. Load balancing techniques in cloud computing environment: a review. J King Saud University-Computer Inf Sci; 2021.

30.Yahia HS, Zeebaree S, Sadeeq M, Salim N, Kak SF, Adel A, Salih AA, Hussein HA. Comprehensive survey for cloud computing based nature-inspired algorithms optimization scheduling. Asian J Res Comp. Sci. 2021; 8(2): 1-16.

31.Dizdarević J, Carpio F, Jukan A, Masip-Bruin X. A survey of communication protocols for internet of things and related challenges of fog and cloud computing integration. ACM Comput Surv (Csur). 2019; 51(6): 1-29.

32.Tabrizchi H, Kuchaki Rafsanjani M. A survey on security challenges in cloud computing: issues, threats, and solutions. The J Supercomputing. 2020; 76(12): 9493-532.

33.Kumari P, Kaur P. A survey of fault tolerance in cloud computing. J King Saud University-Computer Inf Sci. 2021; 33(10): 1159-76.

34.Mastelic T, Oleksiak A, Claussen H, Brandic I, Pierson JM, Vasilakos AV. Cloud computing: survey on energy efficiency. ACM Comput Surv (CSUR). 2014; 47(2): 1-36.

35.Ali O, Shrestha A, Soar J, Wamba SF. Cloud computing-enabled healthcare opportunities, issues, and applications: a systematic review. Int J Inf Management. 2018; 43: 146-58.

36.Dang LM, Piran M, Han D, Min K, Moon H, et al. A survey on internet of things and cloud computing for healthcare. Electronics. 2019; 8(7): 768.1-768.49.

37.Sadeeq MM, Abdulkareem NM, Zeebaree SR, Ahmed DM, Sami AS, Zebari RR. IoT and cloud computing issues, challenges and opportunities: a review. Qubahan Acad J. 2021; 1(2): 1-7.

38.Hwang K, Dongarra J, Fox GC. Distributed and cloud computing: from parallel processing to the Internet of Things. Burlington, MA: Morgan Kaufmann; 2013.

39.Foster I, Gannon DB. Cloud computing for science and engineering. Cambridge, MA: MIT Press; 2017.

40.Bhatia G, Al Sulti IH. CASCloud: an open source private cloud for higher education. Proc. IEEE International Arab Conference on Information Technology (ACIT); 2019: p. 14-20.

41.Heuchert S, Rimal BP, Reisslein M, Wang Y. Design of a small-scale and failure-resistant IaaS cloud using OpenStack. Appl Comput Inform. 2022. in print.

42.Hosamani N, Albur N, Yaji P, Mulla MM, Narayan D. Elastic provisioning of Hadoop clusters on OpenStack private cloud. Proc. IEEE ICCCNT. 2020: p. 1-7.

43.Prameela P, Gadagi P, Gudi R, Patil S, Narayan D. Energy-efficient VM management in OpenStack-based private cloud. Adv Comp Netw Commun. 2020; 1: 541-56.

44.Pyati M, Narayan D, Kengond S. Energy-efficient and dynamic consolidation of virtual machines in OpenStack-based private cloud. Proced Computer Sci. 2020; 171: 2343-52.

45.Suriansyah MI, Mulyana I, Sanger JB, Winata S. Compute function analysis utilizing IAAS private cloud computing service model in Packstack development. ILKOM Jurnal Ilmiah. 2021; 13(1): 10-16.

46.OpenStack. OpenStack docs: OpenStack compute (Nova); 2020. [cited 2020 Sep 27]. Available from: https://docs.openstack.org/nova/latest/.

47.Kim WC, Jo O. Cost-optimized configuration of computing instances for large sized cloud systems. ICT Express. 2017; 3(3): 107-10.

48.Rosati P, Fowley F, Pahl C, Taibi D, Lynn T. Right scaling for right pricing: a case study on total cost of ownership measurement for cloud migration. Proc. Int. Conf. on Cloud Computing and Services Science. Springer; 2018. p. 190-214.

49.Darzanos G, Koutsopoulos I, Stamoulis GD. Cloud federations: economics, games and benefits. IEEE/ACM Trans Networking. 2019; 27(5): 2111-24.

50.Power B, Weinman J. Revenue growth is the primary benefit of the cloud. IEEE Cloud Comput. 2018; 5(4): 89-94.

51.Dey RK, Roy S, Bose R, Sarddar D. Assessing commercial viability of migrating on-premise mailing infrastructure to cloud. Int J Grid Distrib Comput. 2021; 14: 1-10.

52.Ryu AJ, Magnuson DR, Kingsley TC. Why Mayo Clinic is embracing the cloud and what this means for clinicians and researchers. Mayo Clinic Proc Innov Qual Outcomes. 2021; 5(6): 969-73.

53.Nayar KB, Kumar V. Cost benefit analysis of cloud computing in education. Int J Bus Inf Sys. 2018; 27(2): 205-21.

54.Fisher C. Cloud versus on-premise computing. Am J Industr Business Management. 2018; 8(09): 1991-2006.

55.Pugh C. Regulatory compliance and total cost influence on the adoption of cloud technology: a quantitative study. Ph.D. dissertation. Capella University; 2021.

56.Rimal BP, Jukan A, Katsaros D, Goeleven Y. Architectural requirements for cloud computing systems: an enterprise cloud approach. J Grid Comput. 2011; 9(1): 3-26.

57.Makhlouf R. Cloudy transaction costs: a dive into cloud computing economics. J Cloud Comput. 2020; 9(1): 1-11.

58.Ghosh BC, Bhartia T, Addya SK, Chakraborty S. Leveraging public-private blockchain interoperability for closed consortium interfacing. Proc IEEE INFOCOM. 2021: 1-10.

59.Ramalingam C, Mohan P. An efficient applications cloud interoperability framework using I-Anfis. Sym. 2021; 13(2): 268.

60.Abidin SSZ, Husin MH. Improving accessibility and security on document management system: a Malaysian case study. Appl Comput Inform. 2020; 16(1-2): 137-54.

61.Ari AAA, Ngangmo OK, Titouna C, Thiare O, Mohamadou A, Gueroui AM, et al. Enabling privacy and security in Cloud of Things: architecture, applications, security & privacy challenges. Appl Comput Inform. 2022. in print.

62.Sobecki A, Szymański J, Gil D, Mora H. Framework for integration decentralized and untrusted multi-vendor IoMT environments. IEEE Access. 2020; 8: 108102-108112.

63.Thorat C, Inamdar V. Implementation of new hybrid lightweight cryptosystem. Appl Comp Inform. 2020; 16(1-2): 195-206.

64.Braud A, Fromentoux G, Radier B, Le Grand O. The road to European digital sovereignty with Gaia-X and IDSA. IEEE Netw. 2021; 35(2): 4-5.

65.Glaessgen E, Stargel D. The digital twin paradigm for future NASA and US Air Force vehicles. Proc Adapt Struct Conf. 2012: 1-14.

66.Barricelli BR, Casiraghi E, Fogli D. A survey on digital twin: definitions, characteristics, applications, and design implications. IEEE Access. 2019; 7: 167653-167671.

67.Wagner B, Sood A. Economics of resilient cloud services. Proc. IEEE Int. Conf. on Software Quality, Reliability and Security Companion (QRS-C). 2016. 368-74.

68.Rimal BP, Pham Van D, Maier M. Mobile-edge computing versus centralized cloud computing over a converged FiWi access network. IEEE Trans Netw Serv Management. 2017; 14(3): 498-513.

69.Anselmi J, Ardagna D, Lui JC, Wierman A, Xu Y, Yang Z. The economics of the cloud. ACM Trans Model Perform Eval. Comp. Sys. (Tompecs). 2017; 2(4): 1-23.

70.Lee I. Pricing and profit management models for SaaS providers and IaaS providers. J Theor Appl Electr Commerce Res. 2021; 16(4): 859-73.

71.Wulf F, Lindner T, Strahringer S, Westner M. IaaS, PaaS, or SaaS? the why of cloud computing delivery model selection: vignettes on the post-adoption of cloud computing. Proc HI Int Conf Sys Sci. 2021: 6285-94.

72.Linguaglossa L, Lange S, Pontarelli S, Rétvári G, Rossi D, Zinner T, Bifulco R, Jarschel M, Bianchi G. Survey of performance acceleration techniques for network function virtualization. Proc IEEE. 2019; 107(4): 746-64.

73.Shantharama P, Thyagaturu AS, Reisslein M. Hardware-accelerated platforms and infrastructures for network functions: a survey of enabling technologies and research studies. IEEE Access. 2020; 8: 132021-132085.

74.Di Girolamo S, Kurth A, Calotoiu A, Benz T, Schneider T, Beránek J, Benini L, Hoefler T. A RISC-V in-network accelerator for flexible high-performance low-power packet processing. Proc ACM/IEEE Ann Int Symp Comp Arch. 2021; 958-71.

75.Gallo M, Finamore A, Simon G, Rossi D. FENXI: fast in-network analytics. Proc. IEEE/ACM SEC; 2021. 1-14.

76.Osiński T, Tarasiuk H, Chaignon P, Kossakowski M. A runtime-enabled P4 extension to the open vswitch packet processing pipeline. IEEE Trans Netw Svc Managmt. 2021; 18(3): 2832-45.

77.Xiang Z, Gabriel F, Urbano E, Nguyen GT, Reisslein M, Fitzek FH. Reducing latency in virtual machines: enabling tactile internet for human-machine co-working. IEEE J Selected Areas Commun. 2019; 37(5): 1098-116.

78.Xiang Z, Höweler M, You D, Reisslein M, Fitzek FH. X-MAN: a non-intrusive power manager for energy-adaptive cloud-native network functions. IEEE Trans Netw Svc Managmt. 2022. in print.

79.Tömösközi M, Reisslein M, Fitzek FH. Packet header compression: a principle-based survey of standards and recent research studies. IEEE Commun Surv Tut. 2022; 24(1): 698-740.

80.Costa-Perez X, Garcia-Saavedra A, Li X, Deiss T, De La Oliva A, Di Giglio A, Iovanna P, Moored A. 5G-Crosshaul: an SDN/NFV integrated fronthaul/backhaul transport network architecture. IEEE Wireless Commun. 2017; 24(1): 38-45.

81.Shantharama P, Thyagaturu AS, Karakoc N, Ferrari L, Reisslein M, Scaglione A. LayBack: SDN management of multi-access edge computing (MEC) for network access services and radio resource sharing. IEEE Access. 2018; 6: 57545-57561.

82.Thyagaturu AS, Dashti Y, Reisslein M. SDN-based smart gateways (Sm-GWs) for multi-operator small cell network management. IEEE Trans Netw Svc Managmt. 2016; 13(4): 740-53.

83.Doan TV, Nguyen GT, Reisslein M, Fitzek FH. SAP: subchain-aware NFV service placement in mobile edge cloud. IEEE TNSM. 2022. in print.

84.Garcia-Saavedra A, Costa-Perez X, Leith DJ, Iosifidis G. FluidRAN: optimized vRAN/MEC orchestration. Proc. IEEE INFOCOM. 2018: 2366-74.

85.Karakoç N, Scaglione A, Nedić A, Reisslein M. Multi-layer decomposition of network utility maximization problems. IEEE/ACM Trans Networking. 2020; 28(5): 2077-91.

86.Tong X, Tian L, Zhang Z, Sun Q, Wang Y. Statistical multiplexing gain analysis of computing resources for C-RAN with Alpha-Stable model. Proc IEEE WCNC. 2021: 1-6.

87.Rakús M, Farkaš P, Páleník T. Modeling of mobile channels using TIMS in IT education. Appl Comp Inform. 2022. in print.

88.Cohen A, Esfahanizadeh H, Sousa B, Vilela JP, Luís M, Raposo D, Michel F, Sargento S, Médard M. Bringing network coding into SDN: architectural study for meshed heterogeneous communications. IEEE Commun Mag. 2021; 59(4): 37-43.

89.Gabriel F, Wunderlich S, Pandi S, Fitzek FH, Reisslein M. Caterpillar RLNC with feedback (CRLNC-FB): reducing delay in selective repeat ARQ through coding. IEEE Access. 2018; 6: 44787-44802.

90.Park J, Cho DH. Separated random linear network coding based on cooperative medium access control. IEEE Networking Lett. 2021; 3(2): 66-9.

91.Tasdemir E, Nguyen V, Nguyen GT, Fitzek FH, Reisslein M. FSW: Fulcrum sliding window coding for low-latency communication. IEEE Access; 2022. in print.

92.Wunderlich S, Gabriel F, Pandi S, Fitzek FH, Reisslein M. Caterpillar RLNC (CRLNC): a practical finite sliding window RLNC approach. IEEE Access. 2017; 5: 20183-20197.

93.Shin H, Park JS. Optimizing random network coding for multimedia content distribution over smartphones. Multimedia Tools Appl. 2017; 76(19): 19379-19395.

94.Wunderlich S, Fitzek FH, Reisslein M. Progressive multicore RLNC decoding with online DAG scheduling. IEEE Access. 2019; 7: 161184-161200.

95.Nguyen V, Tasdemir E, Nguyen GT, Lucani DE, Fitzek FH, Reisslein M. DSEP Fulcrum: dynamic sparsity and expansion packets for Fulcrum network coding. IEEE Access. 2020; 8: 78293-78314.

96.Tasdemir E, Tömösközi M, Cabrera JA, Gabriel F, You D, Fitzek FH, Reisslein M. SpaRec: sparse systematic RLNC recoding in multi-hop networks. IEEE Access. 2021; 9: 168567-168586.

97.Rimal BP, Pham Van D, Maier M. Cloudlet enhanced fiber-wireless access networks for mobile-edge computing. IEEE Trans Wireless Commun. 2017; 16(6): 3601-18.

98.Rimal BP, Maier M, Satyanarayanan M. Experimental testbed for edge computing in fiber-wireless broadband access networks. IEEE Commun Mag. 2018; 56(8): 160-7.

99.Karakoç N, Scaglione A, Reisslein M, Wu R. Federated edge network utility maximization for a multi-server system: algorithm and convergence. IEEE/ACM Trans Networking. 2022. in print.

100.Xia Q, Ye W, Tao Z, Wu J, Li Q. A survey of federated learning for edge computing: research problems and solutions. High-Confidence Comput. 2021; 1(1): 100008.

101.Xiao H, Zhao J, Pei Q, Feng J, Liu L, Shi W. Vehicle selection and resource optimization for federated learning in vehicular edge computing. IEEE Trans Intel Transp Sys. 2022. in print.

102.Xu C, Liu S, Yang Z, Huang Y, Wong KK. Learning rate optimization for federated learning exploiting over-the-air computation. IEEE J Sel Areas Commun. 2021; 39(12): 3742-56.

103.Yu R, Li P. Toward resource-efficient federated learning in mobile edge computing. IEEE Netw. 2021; 35(1): 148-55.

104.Baccarelli E, Scarpiniti M, Momenzadeh A, Ahrabi SS. Learning-in-the-fog (LiFo): deep learning meets fog computing for the minimum-energy distributed early-exit of inference in delay-critical IoT realms. IEEE Access. 2021; 9: 25716-25757.

105.Chen J, Ran X. Deep learning with edge computing: a review. Proc IEEE. 2019; 107(8): 1655-74.

106.Lyu L, Bezdek JC, He X, Jin J. Fog-embedded deep learning for the internet of things. IEEE Trans Ind Inform. 2019; 15(7): 4206-15.

107.Sobecki A, Szymański J, Gil D, Mora H. Deep learning in the fog. Int J Distr Sensor Networks. 2019; 15(8): 1-17.

108.Wang X, Han Y, Leung VC, Niyato D, Yan X, Chen X. Convergence of edge computing and deep learning: a comprehensive survey. IEEE Comm Sur Tut. 2020; 22(2): 869-904.

109.Bektas C, Schüler C, Falkenberg R, Gorczak P, Böcker S, Wietfeld C. On the benefits of demand-based planning and configuration of private 5G networks. Proc IEEE VNC. 2021: 158-61.

110.Kulkarni V, Walia J, Hämmäinen H, Yrjölä S, Matinmikko-Blue M, Jurva R. Local 5G services on campus premises: scenarios for a make 5G or buy 5G decision. Digital Pol Regul Governance. 2021; 23(4): 337-54.

111.Rischke J, Sossalla P, Itting S, Fitzek FH, Reisslein M. 5G campus networks: a first measurement study. IEEE Access. 2021; 9: 121786-121803.

112.Soós G, Ficzere D, Seres T, Veress S, Németh I. Business opportunities and evaluation of non-public 5G cellular networks–a survey. Infocommun J. 2020; 12(3): 31-8.

113.Sheoran A, Fahmy S, Cao L, Sharma P. AI-driven provisioning in the 5G core. IEEE Internet Comp. 2021; 25(2): 18-25.

114.Sheoran A, Fahmy S, Sharma P, Modi N. Procedure-driven deployment support for the microservice era. In: Proc. 22nd ACM Int. Middleware Conference: Industrial Track. 2021: 23-9.

115.Ferrer AJ, Marquès JM, Jorba J. Towards the decentralised cloud: survey on approaches and challenges for mobile, ad hoc, and edge computing. ACM Comp Surv. 2019; 51(6): 1-36.

116.You D, Doan TV, Torre R, Mehrabi M, Kropp A, Nguyen V, Salah H, Nguyen GT, Fitzek FH. Fog computing as an enabler for immersive media: service scenarios and research opportunities. IEEE Access. 2019; 7: 65797-65810.

117.Al-kahtani MS, Karim L, Khan N. ODCR: energy efficient and reliable density clustered-based routing protocol for emergency sensor applications. Appl Comput Inform. 2022. in print.

118.El-Sayed WM, El-Bakry HM, El-Sayed SM. Integrated data reduction model in wireless sensor networks. Appl Comput Inform. 2022. in print.

119.Somauroo A, Bassoo V. Energy-efficient genetic algorithm variants of PEGASIS for 3D wireless sensor networks. Appl Comput Inform. 2022. in print.

120.Pinho T, Coelho J, Oliveira P, Oliveira B, Marques A, Rasinmäki J, Moreira A, Veiga G, Boaventura-Cunha J. Routing and schedule simulation of a biomass energy supply chain through SimPy simulation package. Appl Comp Inform. 2020; 17(1): 36-52.

121.Kong C, Rimal BP, Reisslein M, Maier M, Bayram IS, Devetsikiotis M. Cloud-based charging management of heterogeneous electric vehicles in a network of charging stations: price incentive vs. capacity expansion. IEEE Trans Serv Comp. 2022. in print.

122.Madhankumar S, Dharshini S, Vignesh NR, Amrutha P, Dhanaselvam J. Cloud computing-based Li-Ion Battery-BMS design for constant DC load applications. Soft computing for security applications. Springer; 2022. 299-312.

123.Qian KJ, Gu SF, Yang YB. Operation controllable index optimization of virtual power plant with electric vehicle based on 5G technology and cloud computing platform. Proc. Int. Conf. on Adv. Hybrid Information Processing. Springer; 2021. p. 156-68.

124.Doan TV, Nguyen GT, Reisslein M, Fitzek FH. FAST: flexible and low-latency state transfer in mobile edge computing. IEEE Access. 2021; 9: 115315-115334.

125.Mushtaq Z, Wahid A. Revised approach for the prediction of functional size of mobile application. Appl Comput Inform. 2022. in print.

126.Mahalingam T, Subramoniam M. A robust single and multiple moving object detection, tracking and classification. Appl Comput Inform. 2020; 17(1): 2-18.

127.Owoh N, Singh M. Security analysis of mobile crowd sensing applications. Appl Comp Inform. 2022; 18(1-2): 2-21.

128.Mitropoulos S, Mitsis C, Valacheas P, Douligeris C. An online emergency medical management information system using mobile computing. Appl Comput Inform. 2022. in print.

129.Ribes VS, Mora H, Sobecki A, Gimeno FJM. Mobile cloud computing architecture for massively parallelizable geometric computation. Comput Industry. 2020; 123: 103336.1-103336.12.

130.Sales DC, Becker LB, Koliver C. The systems architecture ontology (SAO): an ontology-based design method for cyber–physical systems. Appl Comput Inform. 2022. in print.

131.Sheriff F. ELMOPP: an application of graph theory and machine learning to traffic light coordination. Appl Comput Inform. 2022. in print.

132.Shantharama P, Thyagaturu AS, Yatavelli A, Lalwaney P, Reisslein M, Tkachuk G, Pullin EJ. Hardware acceleration for container migration on resource-constrained platforms. IEEE Access. 2020; 8: 175070-175085.

133.Liu L, Xu H, Niu Z, Wang P, Han D. U-HAUL: efficient state migration in NFV. Proc. ACM SIGOPS. 2016: 1-8.

134.Manner J, Endreß M, Böhm S, Wirtz G. Optimizing cloud function configuration via local simulations. Proc. IEEE 14th Int. Conf. on Cloud Comp. (CLOUD). 2021: 168-78.

135.Bilal M, Serafini M, Canini M, Rodrigues R. Do the best cloud configurations grow on trees? an experimental evaluation of black box algorithms for optimizing cloud workloads. Proc VLDB Endowment. 2020; 13(12): 2563-75.

136.Bilal M, Canini M, Rodrigues R. Finding the right cloud configuration for analytics clusters. Proc. 11th ACM Symposium on Cloud Computing. 2020: 208-22.

137.Kaur G, Kaur K. An adaptive firefly algorithm for load balancing in cloud computing. Proc Int Conf Soft Comp Prob Solv. 2017: 63-72.

138.Dhiman G, Kumar V. Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Adv Eng Softw. 2017; 114: 48-70.

139.Dhiman G, Kumar V. Emperor penguin optimizer: a bio-inspired algorithm for engineering problems. Knowledge-Based Sys. 2018; 159: 20-50.

140.Dhiman G, Kaur A. STOA: a bio-inspired based optimization algorithm for industrial engineering problems. Eng Appl Artif Intelligence. 2019; 82: 148-74.

141.Dhiman G. ESA: a hybrid bio-inspired metaheuristic optimization approach for engineering problems. Eng Comput. 2021; 37(1): 323-53.

142.Dhiman G, Oliva D, Kaur A, Singh KK, Vimal S, Sharma A, Cengiz K. BEPO: a novel binary emperor penguin optimizer for automatic feature selection. Know.-Bas Sys. 2021; 211: 106560.

143.Kaur S, Awasthi LK, Sangal A, Dhiman G. Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization. Eng Appl AI. 2020; 90: 103541.

Acknowledgements

Funding: This research paper has been supported in part by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) as part of Germany’s Excellence Strategy – EXC 2050/1 – Project ID 390696704 – Cluster of Excellence “Centre for Tactile Internet with Human-in-the-Loop” (CeTI) of Technische Universität Dresden and by the State of South Dakota Board of Regents Competitive Research Grant.

Corresponding author

Martin Reisslein can be contacted at: reisslein@asu.edu
