Large-Scale Feature Extraction from Web Data (Suuremahuline tunnusehõive veebiandmetest)
Date: 2018
Abstract
Data available on the web evolves over time, and the way it is represented changes as well. Linked data has made information on the web machine-readable. In this thesis we develop a proof-of-concept pipeline that extracts linked data from web crawls and performs feature extraction on it. The end goal of the pipeline is to provide input for machine learning models used for company credit scoring. Our use case focuses on extracting linked data about products and connecting each product to the company that offers it. The solution attempts to detect whether two products from different websites are the same, so that a single representation can be used for both. Information about companies and their products is represented as a graph, on which we calculate network metrics. Network metrics computed over multiple web crawls form time series that show how the graph changes over time; we then compute derivatives over these time-series values as features. The pipeline is designed to handle terabytes of data and built with scalability in mind: we use Apache Spark to process large volumes of data quickly and to be ready should the input data grow 100-fold.
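As a rough illustration of the final steps described above, the PySpark sketch below computes one simple network metric (a company's distinct-product degree) per crawl snapshot and then a first difference, a discrete derivative, over the resulting time series. The schema (crawl, company, product) and the toy rows are assumptions made for illustration only; the thesis pipeline's actual metrics and data model are not reproduced here.

# Minimal sketch, assuming a (crawl, company, product) edge schema;
# not the thesis's actual code or schema.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("feature-extraction-sketch").getOrCreate()

# Each row: one company-product edge observed in one web crawl snapshot.
# Hypothetical toy data standing in for the extracted linked-data graph.
edges = spark.createDataFrame(
    [("2018-01", "acme", "p1"), ("2018-01", "acme", "p2"),
     ("2018-02", "acme", "p1"), ("2018-02", "acme", "p2"),
     ("2018-02", "acme", "p3")],
    ["crawl", "company", "product"],
)

# One network metric per snapshot: how many distinct products a company offers.
degree = (edges.groupBy("crawl", "company")
               .agg(F.countDistinct("product").alias("degree")))

# Time-series feature: change in degree between consecutive crawls.
w = Window.partitionBy("company").orderBy("crawl")
features = degree.withColumn(
    "degree_delta", F.col("degree") - F.lag("degree", 1).over(w))

features.show()

Because every step is expressed as Spark DataFrame transformations, the same code scales from the toy rows above to terabyte-scale crawl data simply by pointing the input at a distributed source, which reflects the scalability goal stated in the abstract.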