The quantity of data doubles every two years. To date, over three zettabytes have been produced, i.e. 3,000,000,000,000,000,000,000 bytes. According to IDC (http://ow.ly/Ao5v7), the volume of data will rise to 40 zettabytes by 2020. It is understandable that users fearfully shy away from this flood of information. The IT industry that fuels this flood has developed solutions to calm it. They suggest that these very large, diverse and fast-growing amounts of data can be controlled. These products are subsumed under the generic term Big Data. Relationships, meanings and patterns can be evaluated with special programs. From now on, the experts who generate value for the enterprise only need to know what has to be done in order to produce benefits through Big Data.
What is new is that not only internal data can be interpreted, but all attainable data, independently of its format. A prerequisite for the benefit is the ability of the business departments to formulate the right questions correctly. The technical realization by the IT department then follows. IT thereby returns to its old function of evaluating data. The procedure consists of three steps.
- Formulating questions
The business specialists have to specify their information needs. First, questions are formulated (In which regions, which products? Our own and those of competitors?). Then the sources are determined, as well as the time and period for the evaluation.
- Processing the data
The experts of the IT department, the so-called data scientists, take over the questions and translate them into technical IT specifications: accessible databases, security-relevant aspects, data formats, compatibility, etc. From these, the programs that produce the results are derived, using SQL, NoSQL, analytics, visualizations, etc. The outcomes are eventually compiled and delivered to the business experts. This procedure resembles early data processing and its batch programs. However, close cooperation between the business experts and the IT department takes place in each phase. In addition, the possibilities for evaluation have evolved significantly over the years.
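As a minimal sketch of this translation step, the business question "Which products sold how well in which region?" might become a SQL evaluation like the following. The sales table, its columns and the figures are purely hypothetical, and a real setup would query an existing database rather than an in-memory one:

```python
import sqlite3

# Hypothetical data source: a small sales table with region,
# product and revenue columns (all names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("North", "A", 120.0), ("North", "B", 80.0),
     ("South", "A", 200.0), ("South", "B", 50.0)],
)

# The business question translated into SQL: total revenue
# per region and product.
rows = conn.execute(
    "SELECT region, product, SUM(revenue) FROM sales "
    "GROUP BY region, product ORDER BY region, product"
).fetchall()

# A simple tabular report for the business department.
for region, product, total in rows:
    print(f"{region:<6} {product} {total:8.2f}")
```

The point of the sketch is the division of labor: the business side supplies the question and the grouping dimensions, while IT supplies the data access and the query.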
- Using the outcomes
It is crucial to interpret the results correctly. For this purpose, it is important to produce reports that deliver correct, up-to-date, comparable, understandable and comprehensible results. Measures are then developed on this basis.
Over time, this procedure creates many results that frequently cover different time frames and pursue diverse intentions. Since the data sources also change at great speed, you have to accept, like it or not, that reports have only a short half-life. For the users this means that
- they must always be open to new insights,
- outcomes have a short shelf life,
- measures have to be taken at shorter intervals, and
- results have to be forgotten more quickly,
in order to make room for new findings. Numbers, data and facts thereby take on a new weight: their timeliness and interplay change quickly, while a qualified interpretation adapts only with difficulty.
Bottom line: Big Data permits the processing of all available internal and external data. This requires an appropriate IT infrastructure and, above all, the ability to formulate questions clearly and translate them into IT activities. Thus the importance of the IT department and of an efficient IT infrastructure rises. The benefit that results from Big Data requires a new way of dealing with the ephemeral insights that come from numbers, data and facts. The basis for decision-making has to be renewed regularly.