# A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining

Serdica Journal of Computing (2014)

- Volume: 8, Issue: 2, pages 111-136
- ISSN: 1312-6555

## Abstract

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p small n data sets for instance require a different set of tools from the large n small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity in the sense of Ockham’s razor non-plurality principle of parsimony tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
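The abstract's point that large p, small n data sets call for a different toolkit (regularization in particular) can be sketched in a few lines. This is not code from the paper; it is a minimal numpy illustration, with an arbitrarily chosen regularization strength, of why ordinary least squares breaks down when p > n while ridge regression remains well-posed.

```python
# Minimal sketch (not from the paper): with p > n, X^T X is rank-deficient,
# so ordinary least squares has no unique solution. Adding a ridge penalty
# (lambda * I) restores invertibility and yields a unique estimate.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 200                               # far more features than samples
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]     # sparse ground truth (assumed)
y = X @ true_w + 0.1 * rng.standard_normal(n)

lam = 1.0                                     # regularization strength (assumed)
# Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Rank of X^T X is at most n = 20, far below p = 200: OLS is ill-posed here.
print("rank of X^T X:", np.linalg.matrix_rank(X.T @ X))
print("ridge estimate shape:", w_ridge.shape)
```

The penalized system `X^T X + lam * I` is positive definite for any lam > 0, which is exactly the "Regularization/Penalization" entry in the taxonomy of tools the abstract lists.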

## How to cite

Fokoue, Ernest. "A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining." Serdica Journal of Computing 8.2 (2014): 111-136. <http://eudml.org/doc/269898>.

```bibtex
@article{Fokoue2014,
  author    = {Fokoue, Ernest},
  title     = {A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining},
  journal   = {Serdica Journal of Computing},
  year      = {2014},
  volume    = {8},
  number    = {2},
  pages     = {111-136},
  publisher = {Institute of Mathematics and Informatics, Bulgarian Academy of Sciences},
  language  = {eng},
  url       = {http://eudml.org/doc/269898},
  keywords  = {Massive Data; Taxonomy; Parsimony; Sparsity; Regularization; Penalization; Compression; Reduction; Selection; Kernelization; Hybridization; Parallelization; Aggregation; Randomization; Sequentialization; Cross Validation; Subsampling; Bias-Variance Trade-off; Generalization; Prediction Error},
  abstract  = {Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p small n data sets for instance require a different set of tools from the large n small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity in the sense of Ockham’s razor non-plurality principle of parsimony tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.},
}
```

```text
TY  - JOUR
AU  - Fokoue, Ernest
TI  - A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining
JO  - Serdica Journal of Computing
PY  - 2014
PB  - Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
VL  - 8
IS  - 2
SP  - 111
EP  - 136
AB  - Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p small n data sets for instance require a different set of tools from the large n small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity in the sense of Ockham’s razor non-plurality principle of parsimony tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
LA  - eng
KW  - Massive Data; Taxonomy; Parsimony; Sparsity; Regularization; Penalization; Compression; Reduction; Selection; Kernelization; Hybridization; Parallelization; Aggregation; Randomization; Sequentialization; Cross Validation; Subsampling; Bias-Variance Trade-off; Generalization; Prediction Error
UR  - http://eudml.org/doc/269898
ER  -
```
