Browse by Date, starting from "2013-06-11"

Now showing 1 - 10 of 10
  • On the usage of support vector machines for the short-term price movement prediction in intra-day trading
    (Tartu Ülikool, 2013-06-11) Mušnikov, Vassili; Kangro, Raul, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    The aim of the current thesis is to research the prediction of future stock prices using an implementation of support vector machines, to find possible technical solutions and to interpret the obtained results. To consider the problem of forecasting future stock prices over a short period of time, the market data of the British multinational telecommunications company Vodafone Group Plc and the British-Swedish multinational pharmaceutical and biologics company AstraZeneca Plc are used to fit the models and to verify how good their predictive power is. The capabilities of the R packages e1071 and kernlab are used in the thesis. The implementation of the predictions in trading algorithms is not considered, as it is not relevant to the thesis. The thesis consists of three chapters. The first chapter is dedicated to support vector machines, because this particular method is used in developing the prediction algorithms. For a better understanding of the principle of this method, certain fundamentals are explained: the first chapter introduces what machine learning is, explains finding the regression function by using support vector machines and mentions the problems which may arise while finding the regression function. The concept of regression estimation is explained with theoretical and graphical examples. The second chapter is dedicated to kernels, because they make it possible to use non-linear functions as regression functions. In this chapter the classification of kernels is introduced; in addition, it is explained why the use of kernel functions simplifies finding the regression function. A short overview of the technical capabilities of the R packages is also given in the second chapter. Finally, cross-validation, a statistical method for evaluating and comparing learning algorithms, is briefly discussed. Unlike the first two chapters, which give a theoretical overview, the third chapter is the practical part of the thesis. It presents the application of support vector machines to short-term price movement prediction in intra-day trading. The price prediction algorithm is explained and the data used are described in this chapter. Since similar data are involved, the author also presents a comparison with the master's thesis of Andrei Orlov [1]. In addition, at the end of the thesis the reader can find the appendices, which consist of the data frame, a diagram explaining the relations between the functions in the algorithm's code, the code for the figures and a CD containing the code of the algorithm.
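    A minimal sketch of the kind of eps-regression fit described in the abstract above, using the e1071 package named there; the synthetic return series, the single-lag predictor and the small tuning grid are illustrative assumptions, not the thesis's actual data or setup.
    # Sketch: eps-regression with e1071::svm on lagged returns (illustrative data).
    library(e1071)
    set.seed(1)
    ret  <- diff(log(cumprod(1 + rnorm(500, 0, 0.001)) * 100))  # synthetic intra-day returns
    lag1 <- head(ret, -1)                                       # predictor: previous return
    y    <- tail(ret, -1)                                       # target: next return
    dat  <- data.frame(y = y, lag1 = lag1)
    train <- dat[1:400, ]
    test  <- dat[401:nrow(dat), ]
    # Radial-kernel eps-regression; cost and gamma chosen by a small grid search.
    tuned <- tune.svm(y ~ lag1, data = train,
                      gamma = 10^(-2:0), cost = 10^(0:2), epsilon = 0.1)
    fit   <- tuned$best.model
    pred  <- predict(fit, newdata = test)
    mean(sign(pred) == sign(test$y))   # direction-of-movement hit rate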
  • Predictions by non-invertible ARMA models
    (Tartu Ülikool, 2013-06-11) Vaselāns, Agris; Kangro, Raul, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    In time series analysis, some of the most widely used prediction models are of the so-called autoregressive moving average (ARMA) type. These models are well studied in numerous monographs and research papers. One of the basic assumptions used in the derivation of the prediction equations is the invertibility of the underlying process. Usually invertibility is assumed as a prerequisite and very little attention is paid to the forecasting of non-invertible processes. Recent papers [1, pp. 227-229] show that nowadays more and more researchers consider and examine the case where the underlying process does not satisfy the invertibility condition. Non-invertible processes were also studied quite a long time ago, but they have become an object of interest due to new applications in the sciences (signal detection, financial analysis); the rapid development of computer science and of computational possibilities also contributes to the growing interest in such processes. In a basic time series course non-invertible processes are usually discussed only briefly, but globally the interest in such processes is increasing. The aims of this thesis are therefore: 1) to investigate theoretically the questions related to predicting further values of non-invertible ARMA processes; 2) to carry out computer simulations and compare different methods. To cover these aims, both theoretical and simulation studies are provided. In the beginning we give a very short introduction and the necessary background on stationary ARMA processes needed for the definition of the non-invertibility of an ARMA process. We proceed with another natural assumption used in the derivation of the prediction equations: the assumption of Gaussian distributed random variables (innovations) gives some advantages and simplifies the derivation of the prediction equations also in the case of a non-invertible process. We briefly discuss these gains, which are presented as a useful collection of consecutive theorems leading to the minimum mean square error predictor in the case of a non-invertible process with Gaussian distributed data. To extend our studies of non-invertible processes we continue with a non-Gaussian, non-invertible process. This situation requires a more specific analysis, which is provided by a case study of a non-invertible moving average MA(1) process with uniformly distributed innovations (error process). The thesis consists of three main sections with suitable subsections. In the first section the basic concept of non-invertibility is given. The second section is dedicated to the forecasting of an ARMA process: the derivation of the prediction equations in the case of an invertible process is given, then the derivation of the forecast of a non-invertible process with Gaussian distributed data is described, and the section concludes with the derivation of the minimum mean square error predictor in the case of a non-invertible process with a uniformly distributed innovation series. The results are illustrated with computer simulations and corresponding graphs. In the last section a real-world application is considered and the corresponding results are given. Since all sections require some computational work and appropriate programming, a collection of suitable code written in the R language [2] and scripts for the open-source mathematical software system Sage [3], which provides the symbolic calculations needed for this thesis, can be found in the appendix.
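    A small R illustration of the kind of simulation study described above: simulating an MA(1) process with a non-invertible coefficient (|theta| > 1) and forecasting it with a fitted model. The coefficient value and sample size are arbitrary choices for the sketch, not those used in the thesis.
    # Sketch: simulate a non-invertible MA(1), x_t = e_t + theta * e_{t-1} with |theta| > 1,
    # then fit an MA(1) model and produce forecasts.
    set.seed(2)
    theta <- 2.5                       # |theta| > 1  =>  non-invertible representation
    n     <- 600
    e     <- rnorm(n + 1)
    x     <- e[-1] + theta * e[-(n + 1)]
    fit <- arima(x, order = c(0, 0, 1), include.mean = FALSE)
    fit$coef                           # note: arima() reports the invertible reparametrisation (ma1 is approximately 1/theta)
    predict(fit, n.ahead = 5)$pred     # minimum MSE forecasts under the fitted (Gaussian) model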
  • Stochastic reserving methods in non-life insurance
    (Tartu Ülikool, 2013-06-11) Tee, Liivika; Käärik, Meelis, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    The aim of the present thesis is to describe the classical basic chain-ladder method and several stochastic methods. The thesis is set out as follows. The first section starts with the notation and basic results. It is followed by an overview and description of the chain-ladder technique. The section continues with Mack's stochastic model, where the model assumptions and the results for calculating the variability are given. Section 2 provides an introduction to stochastic models based on generalized linear models (GLM). The discussion starts with the (over-dispersed) Poisson model, and since there are several models linked to the Poisson model, these models are examined as well. The stochastic models are introduced together with the ideas behind their construction, and since the main focus is on estimating the likely variability of the estimate, the results for the prediction errors are given. In Section 3 the models considered in the previous sections are compared. As Mack's distribution-free model and the Poisson model are considered chain-ladder "type" methods, it is important to point out the main differences between these models. The comparison leads to the well-known fact that Mack's distribution-free model is regarded as the stochastic model underlying the chain-ladder method. In addition, a discussion of possible negative increments and of how the proposed methods deal with them is provided. The last section presents a practical reserving approach: the theoretical results of the previous sections are applied to a practical numerical problem, and the reserve estimates and their mean square errors (and standard errors) of prediction are found for the Mack, over-dispersed Poisson, log-normal and Gamma models.
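    For readers unfamiliar with the GLM formulation mentioned above, here is a minimal base-R sketch of an over-dispersed Poisson reserving model on a tiny made-up incremental run-off triangle; the triangle values are invented purely for illustration and are not from the thesis.
    # Sketch: over-dispersed Poisson chain-ladder GLM on a made-up 4x4 incremental triangle.
    tri <- data.frame(
      origin = rep(1:4, times = 4:1),
      dev    = c(1:4, 1:3, 1:2, 1),
      inc    = c(100, 60, 30, 10,
                 110, 65, 33,
                 120, 70,
                 130)
    )
    fit <- glm(inc ~ factor(origin) + factor(dev),
               family = quasipoisson(link = "log"), data = tri)
    summary(fit)$dispersion            # estimated over-dispersion parameter phi
    # Future (lower-triangle) cells and the implied reserve estimate.
    future <- expand.grid(origin = 1:4, dev = 1:4)
    future <- future[future$origin + future$dev > 5, ]
    sum(predict(fit, newdata = future, type = "response"))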
  • Modelling and short-term forecasting of the price of electricity in Finland
    (Tartu Ülikool, 2013-06-11) Niidumaa, Anni; Kangro, Raul, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    With the transition to an open electricity market, forecasting electricity prices has become extremely important both for electricity producers and for large consumers. Because of the importance of the topic, it has been studied quite extensively and various complex models have been proposed that try to take into account all kinds of influencing factors, such as the transmission capacities of the interconnections between different countries, the start-up costs of power plants, the effects of legislation, the shares of different types of power plants and much more. At the same time, complex models are worth using only if the forecasts obtained with them are considerably more accurate than those obtainable with simple models. The aim of this master's thesis is to investigate how well the behaviour of electricity prices can be described, and forecast over a short horizon (up to a year ahead), with various fairly simple models. Since there is still too little data for Estonia, we focus on forecasting Finnish electricity prices. The thesis is divided into four chapters. The first chapter introduces and characterises the nature of the electricity market and the formation of prices both on a fixed-price market and on an open market. The main components of electricity production and the main generation methods are also presented. The second chapter introduces the terminology used in the subsequent chapters. The third chapter considers forecasting Finnish electricity prices on the basis of past prices only, using ARIMA-type models. In addition to fitting the models directly to the original data, forecasting with ARIMA-type models fitted to log-transformed prices is also studied, and the approaches are compared using various metrics. The fourth chapter examines possibilities for improving the electricity price forecasts when additional data on monthly average temperatures and precipitation are used. The modelling in this thesis is done with the software package R, which provides tools for fitting time series models, plotting the results and checking the adequacy of the fitted models.
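    A brief base-R sketch of the workflow described above: fitting ARIMA models to a price series and to its logarithm and comparing short-term forecasts. The simulated daily series and the chosen model orders are placeholders, not the Finnish data or the orders selected in the thesis.
    # Sketch: ARIMA on original vs. log-transformed prices (simulated placeholder series).
    set.seed(3)
    price <- 30 + arima.sim(model = list(ar = 0.8), n = 400, sd = 2)  # placeholder "daily price"
    train <- window(price, end = 370)
    test  <- window(price, start = 371)
    fit_raw <- arima(train,      order = c(1, 0, 1))
    fit_log <- arima(log(train), order = c(1, 0, 1))
    h        <- length(test)
    pred_raw <- predict(fit_raw, n.ahead = h)$pred
    pred_log <- exp(predict(fit_log, n.ahead = h)$pred)   # back-transform (bias correction omitted)
    c(MAE_raw = mean(abs(pred_raw - test)),
      MAE_log = mean(abs(pred_log - test)))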
  • Taxpayers’ Index
    (Tartu Ülikool, 2013-06-11) Kupatadze, Givi; Pärna, Kalev, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    In modern everyday life, public finance plays the main role in the world of finance, and therefore its efficient management and transparency are becoming crucial for the prosperity and well-being of each country. In turn, a cornerstone of public finance is taxation, and therefore it is essential for individuals, as taxpayers, to have not only their inborn rights but also their fiscal rights well protected. Because public finance stands on the shoulders of the taxpayers, there is a fundamental need to protect taxpayers' fiscal rights, which mainly consist of two parts: 1) how justly and efficiently tax money is collected, and 2) how transparently and efficiently tax money is spent. Developing a model that tries to measure how well taxpayers' fiscal rights are protected is a big challenge, but at the same time, if the endeavour is successful, the resulting model could contribute greatly to the protection of taxpayers' fiscal rights in any particular country. The aim of this MA thesis is therefore to develop a Taxpayers' Index that measures how well taxpayers' fiscal rights are protected from the legal point of view, and also to discuss a particular mathematical model that helps to analyse the information obtained from the Taxpayers' Index.
  • Estimating the truncation error in the case of solving one dimensional Black-Scholes equation
    (Tartu Ülikool, 2013-06-11) Mehlomakulu, Babalwa; Kangro, Raul, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    In the early 1970s, Fischer Black and Myron Scholes made a breakthrough by deriving a differential equation that must be satisfied by the price of any derivative security dependent on a non-dividend-paying stock. They used the equation to obtain the values of European call and put options on the stock. Options are now traded on many different exchanges throughout the world and are very popular instruments for both speculation and risk management. There are several approaches to option pricing, but we only consider the partial differential equation (PDE) approach, where option prices are expressed as solutions of certain partial differential equations. These equations are specified over an infinite (unbounded) region and usually cannot be solved exactly. Most numerical methods for solving partial differential equations require the region to be finite, so before applying numerical methods the problem is changed from an infinite to a finite region. The aim of this thesis is to study the error caused by this change; we do that by estimating the error at the boundaries and using these estimates to obtain pointwise error bounds inside the domain, followed by numerical verification. The structure of the thesis is as follows. Chapter one provides a brief introduction to option pricing and includes the necessary results. In chapter two we give a definition of the maximum principle for backward parabolic equations and prove some lemmas based on this principle that will be useful throughout the thesis. We further outline ways of obtaining estimates with the aid of these lemmas. In chapter three we obtain estimates at the truncation boundaries for both call and put options. In chapter four we use the estimates of the previous chapter to find estimates inside the region. In chapter five we demonstrate how the estimates are used when pricing concrete put and call options and verify their validity by numerically computing the values of the solution of the truncated problem.
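    As a point of reference for the numerical verification mentioned above, the closed-form Black-Scholes prices that a truncated-domain PDE solution would be checked against can be computed directly. The sketch below is a generic R implementation of the standard formulas with arbitrarily chosen parameters; it is not the thesis's finite-difference solver or error estimate.
    # Sketch: closed-form Black-Scholes call/put prices, usable as a benchmark
    # for a numerical solution of the (truncated) pricing PDE.
    bs_price <- function(S, K, r, sigma, tau, type = c("call", "put")) {
      type <- match.arg(type)
      d1 <- (log(S / K) + (r + sigma^2 / 2) * tau) / (sigma * sqrt(tau))
      d2 <- d1 - sigma * sqrt(tau)
      if (type == "call") S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
      else                K * exp(-r * tau) * pnorm(-d2) - S * pnorm(-d1)
    }
    # Arbitrary illustrative parameters.
    bs_price(S = 100, K = 100, r = 0.03, sigma = 0.2, tau = 0.5, type = "put")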
  • The bootstrap method in non-life insurance reserve estimation
    (Tartu Ülikool, 2013-06-11) Viin, Rauno; Käärik, Meelis, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    The aim of this thesis is to describe the application of the bootstrap method to reserve estimation and to point out and explain the various choices that deserve attention when the bootstrap method is used. The master's thesis is divided into five chapters. The first chapter explains in more detail why reserve estimation is needed and gives an overview of the idea of the chain-ladder method. The second chapter introduces resampling methods and focuses on describing the bootstrap method. The third chapter combines the first two, explaining in more detail how the bootstrap method can be used for estimating claims reserves; in addition, several important choices that may affect the results obtained with the bootstrap method are pointed out. The fourth chapter compares the analytically derived prediction error with the prediction error obtained by the bootstrap method, and in the final chapter the bootstrap method described in the previous chapters is applied to practical problems. Reserve estimation is currently a very topical subject in connection with the Solvency II directive coming into force at the beginning of 2014, since in the new risk-based Solvency II framework the assessment of reserve risk and the corresponding interval estimates are among the most important tasks.
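    A condensed R sketch of the residual bootstrap idea described above, applied to an over-dispersed Poisson chain-ladder GLM on a toy incremental triangle. The triangle, the number of replications and the omission of the process-error simulation step are simplifying assumptions made for illustration only.
    # Sketch: Pearson-residual bootstrap of an over-dispersed Poisson reserving GLM
    # (estimation error only; the process-error step is omitted for brevity).
    tri <- data.frame(
      origin = rep(1:3, times = 3:1),
      dev    = c(1:3, 1:2, 1),
      inc    = c(90, 50, 20,
                 95, 55,
                 100)
    )
    fit   <- glm(inc ~ factor(origin) + factor(dev), family = quasipoisson, data = tri)
    mu    <- fitted(fit)
    r_p   <- (tri$inc - mu) / sqrt(mu)                 # (unscaled) Pearson residuals
    lower <- subset(expand.grid(origin = 1:3, dev = 1:3), origin + dev > 4)
    reserves <- replicate(999, {
      inc_star <- pmax(mu + sqrt(mu) * sample(r_p, replace = TRUE), 0)  # pseudo-triangle
      fit_star <- glm(inc_star ~ factor(origin) + factor(dev),
                      family = quasipoisson, data = tri)
      sum(predict(fit_star, newdata = lower, type = "response"))
    })
    quantile(reserves, c(0.5, 0.75, 0.95))             # bootstrap reserve distribution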
  • Imputation of genetic markers
    (Tartu Ülikool, 2013-06-11) Iljašenko, Tatjana; Möls, Märt, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    The main aim of this thesis is to check the quality of genetic markers imputed with the IMPUTE2 program and the quality of the quality scores reported by the program. To achieve this, three different imputation runs are carried out, the first of which is performed under what could be called ideal conditions. A random sample is drawn from the haplotypes of the 1000 Genomes Project, and some of the genetic marker values are removed from the sampled haplotypes. The missing marker values are then imputed, using as the reference panel the original full data set without the sampled haplotypes. The aim of the second imputation is to impute the values of missing markers in genotyped Estonian data, using European reference haplotypes (the data collected within the 1000 Genomes Project). Again, the 1000 Genomes Project data are used as the reference panel, but this time a certain set of genetic markers is left out in order to bring the marker list of the reference panel into agreement with the genotyped Estonian data. The sample is now formed from the genotyped Estonian data, from which a set of markers is removed; the removed marker values are then imputed back with the help of the prepared European haplotype data. The task of the third imputation is to predict Estonian genetic data using as the reference panel genotyped Estonian data consisting of the sequenced data of 49 individuals, that is, the determined nucleotide sequences of their DNA [3]. For the sample, the data of 15 individuals are randomly selected from the 49 and a set of markers is removed from them; the reference panel is left with 34 individuals. The Estonian data come from the Estonian Genome Center (Eesti Geenivaramu). The first and second chapters of the thesis give a detailed overview of the imputation process and the related concepts, and of the method used in the IMPUTE2 program. The third chapter analyses the imputation results and their quality. The fourth chapter introduces the methods used in analysing the imputation quality scores and examines the reliability of the quality scores reported by IMPUTE2.
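    IMPUTE2 itself is run from the command line, so no invocation is reproduced here; instead, the R sketch below shows the kind of quality check described above: comparing masked true genotypes with imputed dosages via per-marker squared correlation and overall concordance. Both matrices are randomly generated placeholders, not real imputation output.
    # Sketch: comparing masked true genotypes (0/1/2) with "imputed" dosages;
    # both matrices are random placeholders standing in for real data.
    set.seed(4)
    n_ind <- 100; n_snp <- 50
    truth  <- matrix(rbinom(n_ind * n_snp, size = 2, prob = 0.3), n_ind, n_snp)
    dosage <- pmin(pmax(truth + rnorm(n_ind * n_snp, sd = 0.4), 0), 2)   # noisy stand-in for imputed dosages
    r2_per_snp  <- sapply(seq_len(n_snp), function(j) cor(truth[, j], dosage[, j])^2)
    concordance <- mean(round(dosage) == truth)                          # best-guess genotype concordance
    summary(r2_per_snp)
    concordance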
  • Meta-analysis models describing the variability of genetic marker effects
    (Tartu Ülikool, 2013-06-11) Klement, Riho; Möls, Märt, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    Supported by ever-advancing scientific achievements, more and more studies are carried out all over the world in almost every field of life. It is clear that a study covering all the people of the world cannot be organised; it is, however, possible to study the same thing in many different groups of people. The result is findings and associations that apply to some smaller population. The methodology used often differs considerably, and consequently the studies have different estimation precision. The results may also be affected by peculiarities of the country in which a study is carried out. Meta-analysis is used so that all such similar study results can be generalised. The term meta-analysis was defined in 1976 by Gene Glass as the statistical analysis of the results of a large number of individual studies with the aim of summarising them. Meta-analysis techniques were, however, used much earlier: Karl Pearson (1904) applied a method for combining the correlation coefficients found in typhoid vaccine studies, and Leonard Henry Caleb Tippett (1931) and Ronald Fisher (1932) presented methods for combining p-values. This thesis introduces methods suitable for carrying out a meta-analysis and uses them to study the effects of genetic markers associated with human height. It is examined why the effect of the same genetic marker appears to differ between studies and whether the estimated effects of different markers may also be correlated. The master's thesis consists of three chapters. The first chapter gives an overview of the theoretical background used in meta-analysis. The second chapter focuses on describing the data available for the practical analysis. The last chapter describes the course of the analysis and the results obtained from it. The software package R was used to carry out the analysis and to produce the illustrative figures; the program code written by the author is given in Appendix 2 at the end of the thesis. The thesis was written using the typesetting program MiKTeX.
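    A minimal base-R sketch of a random-effects meta-analysis of per-study effect estimates, using the DerSimonian-Laird estimator of the between-study variance; the effect sizes and standard errors below are invented, not the height-marker data analysed in the thesis.
    # Sketch: DerSimonian-Laird random-effects meta-analysis of invented per-study
    # effect estimates (yi) and their standard errors (sei).
    yi  <- c(0.042, 0.031, 0.055, 0.020, 0.048)
    sei <- c(0.010, 0.015, 0.012, 0.020, 0.011)
    vi  <- sei^2
    w_fe  <- 1 / vi
    mu_fe <- sum(w_fe * yi) / sum(w_fe)                       # fixed-effect estimate
    Q     <- sum(w_fe * (yi - mu_fe)^2)                       # heterogeneity statistic
    k     <- length(yi)
    tau2  <- max(0, (Q - (k - 1)) / (sum(w_fe) - sum(w_fe^2) / sum(w_fe)))
    w_re  <- 1 / (vi + tau2)
    mu_re <- sum(w_re * yi) / sum(w_re)                       # random-effects estimate
    se_re <- sqrt(1 / sum(w_re))
    c(estimate = mu_re, se = se_re, tau2 = tau2,
      p_heterogeneity = pchisq(Q, df = k - 1, lower.tail = FALSE))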
  • Overdispersed models for claim count distribution
    (Tartu Ülikool, 2013-06-11) Carsten, Frazier Henry; Käärik, Meelis, supervisor; Tartu Ülikool. Matemaatika-informaatikateaduskond; Tartu Ülikool. Matemaatilise statistika instituut
    Constructing models to predict future loss events is a fundamental duty of actuaries. However, large amounts of information are needed to derive such a model. When considering many similar data points (e.g., similar insurance policies or individual claims), it is reasonable to create a collective risk model, which deals with all of these policies/claims together, rather than treating each one separately. By forming a collective risk model, it is possible to assess the expected activity of each individual policy. This information can then be used to calculate premiums (see, e.g., Gray & Pitts, 2012). There are several classical models that are commonly used to model the number of claims in a given time period. This thesis is primarily concerned with the Poisson model, but will also consider the Negative Binomial model and, to a lesser extent, the Binomial model. We will derive properties for each of these models, both in the case when all insurance policies cover the same time period, and when they cover different time periods. The primary focus of this thesis is overdispersion, which occurs when the observed variance of the data in a model is greater than would be expected, given the model parameters. We consider several possible treatments for overdispersion, particularly those that apply to the Poisson model. First, we attempt to generalize the Poisson model by adding an overdispersion parameter (see, e.g., Käärik & Kaasik, 2012). Next, we search for ways to convert an overdispersed Poisson model to a Negative Binomial model. We will derive some basic properties (such as expectation, variance, and additivity properties) for all of the models mentioned above. Finally, results of this thesis are explored in a practical sense, by attempting to fit computer-generated data into an overdispersed Poisson framework.
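    A short R sketch contrasting the treatments mentioned above: a quasi-Poisson fit with an explicit over-dispersion parameter and a Negative Binomial fit, applied to simulated claim counts. The Poisson-gamma data generation and the intercept-only model are illustrative simplifications, not the thesis's actual setup.
    # Sketch: over-dispersed claim counts handled via quasi-Poisson (dispersion parameter)
    # and via a Negative Binomial model; data are simulated as a Poisson-gamma mixture.
    library(MASS)
    set.seed(5)
    n      <- 2000
    lambda <- rgamma(n, shape = 2, rate = 2 / 0.3)   # heterogeneous claim intensities, mean 0.3
    claims <- rpois(n, lambda)                        # observed claim counts
    c(mean = mean(claims), var = var(claims))         # variance exceeds the mean => overdispersion
    qp <- glm(claims ~ 1, family = quasipoisson)
    summary(qp)$dispersion                            # estimated over-dispersion parameter
    nb <- glm.nb(claims ~ 1)                          # Negative Binomial alternative (MASS)
    nb$theta                                          # estimated size/shape parameter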
