Design and orchestration of scalable, event-driven serverless data pipelines for internet of things (IoT) applications
Date
2024-09-19
Abstract
With the ever-increasing use of Internet of Things (IoT) devices, there has been a massive influx of raw data. Managing such data involves complex tasks, including acquiring data from diverse devices in various formats, performing operations such as filtering and transformation, and applying machine learning. Effectively managing the flow and lifecycle of such data presents a significant challenge. To achieve low latency and other Quality of Service (QoS) metrics, edge and fog computing models are increasingly being adopted over cloud-based IoT data processing, which adds complexity to dynamically executing data analysis tasks across varying distances and on heterogeneous hardware.
One approach for realizing IoT data processing is to use monolithic containerized applications that combine all data operations into a single container. Such containers can be migrated across the IoT continuum (edge, fog, cloud) to optimize QoS metrics. However, monolithic containers can introduce challenges and complexity when developing data-driven IoT applications that require effective data management, and further problems arise in ensuring seamless end-to-end connectivity and scaling data operations at a granular level. Other existing solutions, such as large data processing clusters (e.g., Apache Flink or Spark) and off-the-shelf tools, can be unreliable given the resource constraints of edge and fog devices and the event-driven nature of IoT applications.
The hypothesis is that this can be simplified by adopting serverless computing and data pipelines. With serverless computing, data analytics tasks can be created as individually scalable virtual functions and executed in an event-driven manner.
Data pipelines enable composing individual data processing tasks into a large distributed data flow. By combining both models, serverless data pipelines (SDPs) can be created, in which serverless functions serve as pipeline tasks and are seamlessly invoked as data moves through the pipeline. Serverless functions can easily be deployed in cloud, edge, or fog environments, while data pipeline technologies handle data transport, routing, and function invocation, as the sketch below illustrates.
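As an illustration of the idea (a minimal sketch, not code from the thesis), the following Python fragment shows one MQTT-based pipeline stage: a function that is invoked whenever a message arrives on its input topic, applies a simple filtering task, and forwards the result to the next stage's topic. The broker address, topic names, and threshold are hypothetical, and the paho-mqtt 1.x client API is assumed.

```python
# Minimal sketch of an MQTT-based serverless pipeline stage.
# Assumes paho-mqtt 1.x (pip install "paho-mqtt<2"); the broker address,
# topic names, and filtering threshold are hypothetical examples.
import json

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"    # hypothetical edge/fog MQTT broker
IN_TOPIC = "pipeline/raw"        # the previous stage publishes here
OUT_TOPIC = "pipeline/filtered"  # the next stage subscribes here

def filter_reading(reading: dict) -> dict | None:
    """Pipeline task: drop implausible sensor values (simple filtering)."""
    if not -40.0 <= reading.get("temperature", 0.0) <= 85.0:
        return None
    return reading

def on_message(client, userdata, msg):
    # Event-driven invocation: this callback runs once per incoming message.
    result = filter_reading(json.loads(msg.payload))
    if result is not None:
        client.publish(OUT_TOPIC, json.dumps(result))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(IN_TOPIC)
client.loop_forever()  # block and process events as they arrive
```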
The goal of this thesis is to address critical aspects of data processing in IoT environments, focusing on the transition from containers to serverless architectures. It first analyses the bottlenecks in traditional monolithic container-based approaches to IoT data processing. It then explores the application of serverless computing in IoT environments as a potential solution to the challenges identified with monolithic architectures. Finally, it assesses the scalability of serverless data processing frameworks in managing stochastic IoT workloads.
This thesis makes three contributions. The first is a novel simulator and framework for container orchestration in IoT environments, along with a gradient-based backpropagation approach (GOBI and GOBI*) for scheduling, which outperforms existing schedulers. The second contribution comprises three design approaches for SDPs and an analysis of their suitability for various IoT applications: SDPs based on standard Data Flow Tools (DFTs) are unsuitable for compute-intensive tasks such as video processing but are efficient for bandwidth-intensive applications; Object Storage Service (OSS) based SDPs are better suited to compute-intensive tasks; and MQTT-based SDPs suit latency-sensitive tasks but not compute- or bandwidth-intensive ones, owing to their higher CPU and memory utilization. The third contribution is a suitability analysis of reactive autoscaling mechanisms for SDPs under four workload patterns: for compute-intensive tasks, resource-based scaling works effectively under jump, steady, spike, and fluctuating workloads, while for tasks with short execution times, workload-based scaling suits all four patterns. Two illustrative sketches of the OSS-based design and the two scaling policies follow.
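To make the OSS-based design concrete, here is a hypothetical sketch (again, not taken from the thesis) of a pipeline stage that is triggered by object-creation events and writes its output back to object storage. It assumes a MinIO server and the MinIO Python SDK; the endpoint, credentials, and bucket names are illustrative, and both buckets are assumed to exist.

```python
# Hypothetical sketch of an OSS-based SDP stage using the MinIO Python SDK.
# Endpoint, credentials, and bucket names are illustrative examples only.
import io

from minio import Minio

client = Minio("minio.example.com:9000",  # hypothetical OSS endpoint
               access_key="pipeline", secret_key="secret", secure=False)

IN_BUCKET, OUT_BUCKET = "raw-frames", "processed-frames"  # assumed to exist

def process(payload: bytes) -> bytes:
    """Pipeline task: placeholder for a compute-intensive operation."""
    return payload.upper()  # stand-in for e.g. video-frame analysis

# Event-driven invocation: block until objects are created, process each one.
for event in client.listen_bucket_notification(IN_BUCKET,
                                               events=["s3:ObjectCreated:*"]):
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        obj = client.get_object(IN_BUCKET, key)
        try:
            result = process(obj.read())
        finally:
            obj.close()
            obj.release_conn()
        client.put_object(OUT_BUCKET, key, io.BytesIO(result), len(result))
```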
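Similarly, the difference between the two reactive autoscaling policies can be sketched as follows. This is a simplification rather than the thesis's actual mechanism: both functions apply the proportional rule desired = ceil(current * metric / target) popularized by the Kubernetes Horizontal Pod Autoscaler, and all targets and limits are illustrative.

```python
# Hypothetical sketch of two reactive autoscaling policies for SDP stages.
# Both apply the proportional rule desired = ceil(current * metric / target),
# as popularized by the Kubernetes Horizontal Pod Autoscaler.
import math

def resource_based(replicas: int, avg_cpu: float,
                   target_cpu: float = 0.7, max_replicas: int = 50) -> int:
    """Scale on average CPU utilization (0.0-1.0) across replicas;
    suits compute-intensive tasks, where CPU tracks the real load."""
    desired = math.ceil(replicas * avg_cpu / target_cpu)
    return max(1, min(desired, max_replicas))

def workload_based(replicas: int, pending_events: int,
                   target_per_replica: int = 20, max_replicas: int = 50) -> int:
    """Scale on queued invocations per replica; suits short-running tasks,
    where CPU barely moves but queues grow under bursty load."""
    # With a per-replica queue target the current count cancels out:
    # ceil(replicas * (pending / replicas) / target) == ceil(pending / target).
    desired = math.ceil(pending_events / target_per_replica)
    return max(1, min(desired, max_replicas))

# A spike of 900 queued events barely moves CPU for short-running tasks,
# so only the workload-based policy reacts to it.
print(resource_based(replicas=4, avg_cpu=0.35))        # -> 2 (scales down)
print(workload_based(replicas=4, pending_events=900))  # -> 45 (scales up)
```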
Overall, this thesis addresses the complexities and challenges of IoT data processing in the shift from monolithic container architectures to serverless computing models. Its contributions help IoT developers select the most suitable data processing mechanism, considering factors such as available computing resources, bandwidth, energy consumption, and latency, while meeting sensitive QoS requirements.