Analyze Big Data to better manage your network

Big Data is sometimes dismissed as the latest industry fad, but that does not mean the technology cannot make the essential transition from information to knowledge. Fortunately, a network administrator has several ways to cut through the volume of Big Data, keep a cool head, and turn it into a network operations tool.

In network applications, the key to successful Big Data exploitation is to focus on the issues, not the data points.

When it comes to network administration, Big Data – often a huge reservoir of information about traffic and devices drawn from interfaces and standard management systems – is collected from probes deployed at different points in the network, as well as from network-layer software installed on client and server devices. When presented within the framework of a standard administration system, some of this information fits common management practices for fault, configuration, accounting, performance and security – the so-called FCAPS model. However, most companies are unable to correlate data from client/server equipment or probes with current operational activities. That is where Big Data analytics comes in.

The single most critical element in taking advantage of network Big Data is ensuring accurate time synchronization across all data elements. Network administration is all about momentary conditions and the juxtaposition of events; in analytical terms, losing temporal synchronization means losing context entirely. If all data is timestamped from a common clock source, synchronization should be adequate. If not, it may be useful to inject synchronization events at the Big Data collection points to align the timing of all records at regular intervals.
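As a rough illustration, here is a minimal Python sketch (using pandas) of that alignment step. The collector names, clock offsets, and column names are assumptions for the example, not part of any standard management tool:

```python
import pandas as pd

def align_to_common_timeline(frames, offsets, interval="1s"):
    """Correct each collector's clock skew, then snap every record
    onto a shared fixed-width timeline so records can be correlated."""
    aligned = []
    for name, df in frames.items():
        df = df.copy()
        # Apply the measured offset between this collector's clock and
        # the reference clock (e.g. derived from an injected sync event).
        df["timestamp"] = df["timestamp"] + pd.Timedelta(offsets[name])
        # Bucket records into regular intervals on the common timeline.
        df["bucket"] = df["timestamp"].dt.floor(interval)
        df["source"] = name
        aligned.append(df)
    return pd.concat(aligned, ignore_index=True)

# Hypothetical usage: two probes, one of which runs 140 ms behind.
# records = align_to_common_timeline(
#     {"probe_a": df_a, "probe_b": df_b},
#     offsets={"probe_a": "0ms", "probe_b": "140ms"})
```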

Correlate events to locate network problems

Once the timing of events can be accurately correlated, the next step is to map this common timeline to network problems. Problem information may come from the existing FCAPS process, from user complaints, or from client/server telemetry. The last source – telemetry – can also capture information about the quality of the interaction, such as response time, along with network performance data measuring packet loss and delay (derived from TCP window sizes, for example). This mapping then lets the Big Data analytics explore the correlation between these problem points and the metrics collected in the period before each problem occurred.
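A simple sketch of that "look backward from each problem" step might resemble the following (Python/pandas; the five-minute lookback and the column names are illustrative assumptions):

```python
import pandas as pd

def metrics_before_problems(metrics, problems, lookback="5min"):
    """For each reported problem, pull the metric samples from the
    window immediately preceding it."""
    windows = []
    for _, p in problems.iterrows():
        start = p["timestamp"] - pd.Timedelta(lookback)
        mask = (metrics["timestamp"] >= start) & \
               (metrics["timestamp"] < p["timestamp"])
        window = metrics[mask].copy()
        # Tag each sample with the problem it precedes so later
        # analysis can group pre-problem conditions per incident.
        window["problem_id"] = p["problem_id"]
        windows.append(window)
    return pd.concat(windows, ignore_index=True)
```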

This kind of Big Data evaluation can be particularly valuable for root-cause analysis of network problems – an analysis that is more often than not impossible by other means. Network state changes quickly, so administrators often chase problems from one location to another without ever being able to inspect the relevant element at the moment an incident occurs. Big Data analytics can correlate thousands (or millions) of data items with identified problem points to find matches, and those matches can then be traced back to the underlying causes through further analysis.
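One crude way to surface candidate causes from those matches is to measure how far each metric's pre-problem behavior deviates from its overall behavior. The sketch below assumes the DataFrames produced by the previous examples; the scoring formula is just one plausible effect-size measure:

```python
def rank_candidate_causes(all_metrics, pre_problem, columns):
    """Score each metric by how far its pre-problem mean sits from its
    overall mean, in units of overall standard deviation; the largest
    shifts point at candidate root causes."""
    scores = {}
    for col in columns:
        overall, before = all_metrics[col], pre_problem[col]
        scores[col] = abs(before.mean() - overall.mean()) / (overall.std() or 1.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```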

Identify normal operating conditions

Another strategy for applying Big Data to network problems is to use it to establish a baseline for normal network conditions. If the previous step – matching problem points to the common Big Data timeline – is done properly, it will also reveal the periods in which there were no problems. Analyzing the network data collected during these "quiet" periods lets the administrator establish a baseline of normal network behavior, and quantify what "normal" means in terms of the masses of data collected.
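Concretely, a baseline can be as simple as per-metric summary statistics over the problem-free buckets. This is a hedged sketch under the same assumptions as the earlier examples; problem_buckets and the column list are hypothetical inputs:

```python
def build_baseline(records, problem_buckets, columns):
    """Summarize each metric over buckets with no reported problems;
    the result quantifies what 'normal' means for this network."""
    quiet = records[~records["bucket"].isin(problem_buckets)]
    # One row of means and one row of standard deviations per metric.
    return quiet[columns].agg(["mean", "std"])
```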

At this stage, the established baseline can be used to analyze periods of network operation that are not flagged as problematic, but that do not conclusively match normal operating behavior either. Most experienced network administrators know that networks sometimes become unstable without any outright failure or complaint. In addition, the network, overall demand, or the state of server resources may exhibit conditions that affect network operation. Baseline data can help identify any or all of these conditions.
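Deviation from the baseline can be scored with something as simple as per-metric z-scores; buckets that are neither failing nor clearly normal tend to show up as moderate, sustained scores. A minimal sketch, assuming the baseline produced above:

```python
def deviation_scores(sample, baseline):
    """Z-score a bucket's metrics against the baseline: values near 0
    are normal, large values are anomalous, and sustained moderate
    values mark the 'unstable but not failing' gray zone."""
    return (sample - baseline.loc["mean"]) / baseline.loc["std"]
```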

Big Data analysis can help identify new ways of addressing certain network conditions

If there is one behavior worth looking for, it is the case where the metrics indicate a network state that fails to generate a problem report even though it faithfully reproduces a known problem period. Here, the goal is to use the metrics to identify what is likely to have mitigated the expected problem – an identification that could sharpen your root-cause analysis, or suggest other ways of remedying the offending condition.
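One plausible way to hunt for these "silent near-misses" is to compare each time bucket's metric profile against a typical problem signature and flag close matches that produced no report. Cosine similarity is just one illustrative choice here, and none of these names come from a standard tool:

```python
import numpy as np

def silent_near_misses(bucket_vectors, problem_signature, reported, threshold=0.9):
    """Flag time buckets whose metric profile closely resembles the
    typical problem signature but produced no problem report."""
    hits = []
    for bucket, vec in bucket_vectors.items():
        # Cosine similarity between this bucket and the problem signature.
        sim = np.dot(vec, problem_signature) / (
            np.linalg.norm(vec) * np.linalg.norm(problem_signature))
        if sim >= threshold and bucket not in reported:
            hits.append((bucket, sim))
    return sorted(hits, key=lambda h: h[1], reverse=True)
```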

Another component to consider is how resources are affected by a network, application, or server event, or by a change in user traffic load. When a major change occurs in one of these areas, the network should respond in a predictable manner. For example, a significant increase in application traffic generally produces a visible increase in response time and, among other problems, a higher rate of dropped packets.

When these behaviors occur without a major change in traffic, they suggest that resources are overloaded. Conversely, if a significant change in traffic occurs without an equivalent increase in response time or packet loss, the network may be oversized. In that case, some reduction in capacity can be tolerated, opening the way to a lower operating budget.
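A toy classification of those two situations might compare the relative changes directly. The percentage thresholds below are illustrative assumptions, not industry standards:

```python
def capacity_signal(traffic_delta_pct, response_delta_pct, loss_delta_pct):
    """Crude reading of capacity state from relative changes in
    traffic, response time, and packet loss."""
    if traffic_delta_pct < 5 and (response_delta_pct > 20 or loss_delta_pct > 20):
        return "degradation without traffic growth: resources likely overloaded"
    if traffic_delta_pct > 30 and response_delta_pct < 5 and loss_delta_pct < 5:
        return "large traffic swing absorbed: network may be oversized"
    return "behavior consistent with current sizing"
```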

Focus only on actionable conditions

One last tip: when it comes to data, avoid going fishing for problems. Some administrators will comb through Big Data reports for unusual behavior patterns even when there is no evidence that those behaviors can be associated with any of the processes or tasks outlined above. You may simply discover that users lean on their apps and the network more heavily at certain times of day – something that is usually easy to confirm: just take a look around the office!

The key to successfully exploiting Big Data analysis in network applications is to focus on the issues, not the data points. Information about the state of the network – whether consistent and normal, or aberrant and problematic – is only relevant once it is classified. Unclassifiable states are difficult to turn into action items, which makes it hard to justify the analysis time required to detect and manage them. Administering a network is not easy, and to prove itself a viable tool, Big Data must make that work easier, not harder. To stay on track, focus on actionable data.
