Use the granular data collection feature to analyze second-level data.
Ops Center Analyzer provides numerous reports on system and user resources. Typically, system administrators manage resources for application data, and dashboard reports provide sufficient information about the system resources (such as volumes) that support a widely used application. When a bottleneck occurs, system administrators look for data spikes. However, this data might not be available on the dashboard because of the time delay between monitoring and reporting. Although dashboard report data is collected and delivered within minutes of the events, in this situation system administrators require second-level data. This data is collected at intervals of seconds, providing the granularity necessary to search for data spikes caused by issues such as:
- Port saturation
- MPB utilization
- High response times for volumes
Depending on how granular data collection is run, the resulting CSV files contain performance statistics for all ports, volumes, and processor (MP) data, as well as activity-rate rankings for each processor allocated to an MP blade and for the volumes of that storage system.
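As a sketch of how you might scan this CSV output for data spikes, the following Python example flags rows whose value far exceeds the average for that resource. The column names (`timestamp`, `resource`, `value`) and the sample data are illustrative assumptions, not the actual Ops Center Analyzer CSV layout.

```python
import csv
import io

# Hypothetical second-level CSV output; the real column names and
# layout produced by granular data collection may differ.
SAMPLE_CSV = """timestamp,resource,value
2024-01-01 13:00:01,Port-1A,120
2024-01-01 13:00:02,Port-1A,130
2024-01-01 13:00:03,Port-1A,900
2024-01-01 13:00:04,Port-1A,125
"""

def find_spikes(csv_text, factor=2.0):
    """Return rows whose value exceeds factor * mean for that resource."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    by_resource = {}
    for r in rows:
        by_resource.setdefault(r["resource"], []).append(float(r["value"]))
    means = {k: sum(v) / len(v) for k, v in by_resource.items()}
    return [r for r in rows if float(r["value"]) > factor * means[r["resource"]]]

spikes = find_spikes(SAMPLE_CSV)
# The 13:00:03 sample stands out as a spike in this sample data.
```

Because the data is second-level, even a spike lasting only a few seconds appears as one or more distinct rows rather than being averaged away.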
Persistent performance issue
A critical event occurs multiple times throughout the day, but you do not know which storage resource component is affecting the user resource, such as an application. The following workflow shows the stages of analysis you follow to determine the root cause:
- Start searching through candidate volumes or the volume listed in an SLO warning.
- Analyze the E2E View to identify the resources associated with the Consumer.
- Analyze the Sparkline view to identify the spike in the trend patterns.
- Identify affected resources to determine the bottleneck resource.
- Run granular data collection for that resource, for example, from 15 minutes to an hour.
- Analyze CSV output.
Typically, this use case requires that you run granular data collection for a longer duration since the spikes might not occur predictably.
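The "identify affected resources" step in the workflow above can be sketched as ranking each monitored resource by how far its peak deviates from its average, so the strongest spike candidate surfaces first. The resource names and values below are illustrative assumptions, not actual collected metrics.

```python
# Hypothetical per-resource samples gathered during the incident window;
# names and values are illustrative, not Ops Center Analyzer output.
samples = {
    "Vol-001": [5, 6, 5, 7, 6],
    "Port-1A": [40, 42, 41, 39, 40],
    "MP-10":   [30, 31, 95, 30, 29],  # contains one large spike
}

def spike_ratio(values):
    """Peak-to-average ratio: higher means a sharper spike."""
    return max(values) / (sum(values) / len(values))

# Sort resources so the most spike-prone candidate comes first.
ranking = sorted(samples, key=lambda r: spike_ratio(samples[r]), reverse=True)
# ranking[0] is the bottleneck candidate to target with granular collection.
```

A simple peak-to-average ratio is enough for a first pass; once a candidate is identified, running granular data collection on that resource confirms whether the spike recurs.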
Performance issue that occurs consistently during a specific time
If you notice a performance problem in storage resources consistently at the same time, for example, from 1 to 3 PM on weekdays, you can run granular data collection on various components for the duration of the period in which the problem occurs (in this example, two hours).
In this use case, you typically run granular data collection many times for shorter periods.
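When analyzing repeated collections for a recurring time window, it helps to filter the collected samples to just that window (1 to 3 PM in the example above) before comparing runs. The tuple layout and timestamps below are illustrative assumptions about the collected data, not the actual CSV format.

```python
from datetime import datetime, time

# Hypothetical (timestamp, value) samples from one collection run;
# the real granular CSV layout may differ.
rows = [
    ("2024-05-06 12:59:58", 110.0),
    ("2024-05-06 13:15:02", 480.0),
    ("2024-05-06 14:40:30", 510.0),
    ("2024-05-06 15:00:01", 105.0),
]

# The recurring problem window: 1 PM (inclusive) to 3 PM (exclusive).
WINDOW_START, WINDOW_END = time(13, 0), time(15, 0)

def in_window(ts):
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").time()
    return WINDOW_START <= t < WINDOW_END

# Keep only the samples that fall inside the problem window.
window_rows = [(ts, v) for ts, v in rows if in_window(ts)]
```

Repeating this filter over each day's collection lets you compare the same two-hour window across runs and confirm that the elevated values recur only during that period.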