Collecting granular data

Ops Center Analyzer User Guide

Version
11.0.x
Part Number
MK-99ANA002-06

The Granular Data Collection tool enables you to collect data in intervals as short as one second for analysis.

When you need to detect spikes in trend patterns, data collected at intervals of minutes might not provide sufficient granularity. To collect data at intervals of seconds, invoke the Granular Data Collection tool, which writes its output to CSV files. You can import this data into a spreadsheet or charting application for graphical analysis of data spikes in the trend patterns.

  • Verify the following OS support:
    • The Ops Center Analyzer server is running Linux OS.
    • The RAID Agent (or Tuning Manager - Agent for RAID) host server is running a version of Linux OS supported by Ops Center Analyzer.
  • The target storage system is a model in the VSP 5000 series, VSP E series, VSP F series, VSP G series, VSP N series, VSP, or HUS VM family.
  • The version of Tuning Manager - Agent for RAID is 8.5.1 or later.
  1. Select a storage resource or storage component, and then select the Execute Action option from the menu.
    During analysis, select any of the following resources and components:
    • storage system
    • volume
    • port
    • processor
    • pool
    • parity group
    • cache
    Note: Other action templates similar to the Granular Data Collection tool might appear in the Execute Action window.
  2. On the Execute Action window, select Granular Data Collection. To run the tool on a specified volume, select Granular Data Collection with Volume. After selecting either tool, specify the following arguments and click Execute Command.
    All arguments are optional except where indicated that they are mandatory.
    1. Interval of data collection, in seconds (1-60):
      -intervalInSec
      Default value: 1
    2. Period of data collection, in seconds (1-3600), for one file:
      -periodInSec
      Default value: 300
    3. Number of times (1-24) data collection runs:
      -repeatTimes
      Default value: 1
    4. Starting time of data collection, using any time expression supported by the Linux date command, such as 13:00, tomorrow 13:00, 1 PM, or now:
      -startTime
      Default value: now
    5. Timezone offset (-2359 to -0000 and +0000 to +2359) between the user client and UTC:
      -timezone
      Default value: +0000
    6. (Mandatory) The DKC model name of the target storage system:
      -model
      Default value: Specified automatically by the selected resource.
    7. (Mandatory) Serial number of the target storage system:
      -serialNumber
      Default value: Specified automatically by the selected resource.
    8. Number of the volume that you selected at the entry point. This parameter appears only when the action is Granular Data Collection with Volume:
      -ldev
      Default value: Specified automatically by the selected resource.
      You can run granular data collection on multiple volumes (128 maximum) by separating volume numbers with commas.
    9. The folder name that is the destination folder of the output files:
      -pathLabel
      Default value: result
    Example of arguments in one string:

    -intervalInSec 1 -periodInSec 300 -repeatTimes 1 -startTime "now" -timezone "+0000" -model "VSP G1000" -serialNumber "345678" -ldev "01:23:45" -pathLabel "result"
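Because -startTime accepts any expression that the Linux date command understands, you can validate a candidate expression before running the action. A minimal sketch, assuming GNU coreutils date (standard on the supported Linux platforms); the expression shown is only an example:

```shell
# Validate a candidate -startTime expression with GNU date.
start="tomorrow 13:00"
date -d "$start" +"%Y-%m-%d %H:%M"   # exits nonzero if the expression is invalid

# The -timezone argument is the client's offset from UTC; GNU date prints it:
date +%z   # for example, +0900
```

If the first command reports an error, the tool would likewise reject the expression.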

    Guidelines for granular data collection:

    • For the first data point, the acquisition period is less than one second; if no I/O occurs in that period, the performance value is 0.
    • Granular data collection cannot be canceled after you run the command.
    • Only one instance of the granular data collection command can run at a time.
    • Running granular data collection might affect the performance of storage resources if overused. When running granular data collection, monitor the following:
      • The length of time during which the MP blades and MP units are used exclusively depends on the number of volumes for which data is collected and on the collection interval. Therefore, keep the number of volumes to a minimum.
      • Because a small collection interval increases the load on the MP blade and MP unit, specify an appropriate value for the collection interval option. If a load problem on the MP blade or MP unit persists, consider allocating the command device that the RAID Agent (or Tuning Manager - Agent for RAID) accesses to other MP blades or MP units.
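To gauge the load before a run, it helps to work out how long the run lasts and how many samples it produces. A quick sizing sketch with hypothetical argument values; the one-file-per-repetition relationship follows from -periodInSec being the period of data "for one file":

```shell
# Hypothetical argument values for sizing a run:
INTERVAL=1    # -intervalInSec: seconds between samples
PERIOD=300    # -periodInSec: seconds of data per output file
REPEAT=3      # -repeatTimes: number of collection runs

echo "samples per file:  $((PERIOD / INTERVAL))"   # 300
echo "total run length:  $((PERIOD * REPEAT)) s"   # 900 s
echo "output files:      $REPEAT"
```

A shorter interval or more volumes raises the MP blade/MP unit load for the whole run length, which is why the guidelines above recommend keeping both to a minimum.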
  3. View the results of the granular data collection task in CSV files located in the following directory: /Ops-Center-Analyzer-installation directory/Analytics/webapps/webroot/Analytics/result/GranularData/start_date/label/storage_id/secdata/yyyymmdd/.
    Where:
    • start_date is the date that data collection starts, including the time zone offset, in yyyymmdd[+/-]hhmm format.
    • label is the label defined by the user in the pathLabel option.
    • storage_id is the target storage system identifier: model-name_serial-number.
    • yyyymmdd is the date that the granular data collection is run.

    Output path example:

    /Ops-Center-Analyzer-installation directory/webapps/webroot/Analytics/result/GranularData/20170119+0000/result/VSP_G1000_345678/secdata/20170119/
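The output directory can be assembled from the option values you supplied. A sketch, assuming a hypothetical installation directory of /opt/hitachi/Analyzer (substitute your actual installation directory) and the example values above:

```shell
# Hypothetical install location; replace with your actual installation directory.
BASE="/opt/hitachi/Analyzer/Analytics/webapps/webroot/Analytics/result/GranularData"
START_DATE="20170119+0000"     # collection start date with time zone offset
LABEL="result"                 # value of the -pathLabel option
STORAGE_ID="VSP_G1000_345678"  # model-name_serial-number
RUN_DATE="20170119"            # date the collection ran

CSV_DIR="$BASE/$START_DATE/$LABEL/$STORAGE_ID/secdata/$RUN_DATE"
echo "$CSV_DIR"
# Inspect the results once the run completes: ls "$CSV_DIR"/*.csv
```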

    You can also view the results on a web browser:

    http://host-name or IP-address of Ops-Center-Analyzer:port of Ops-Center-Analyzer server/Analytics/result/GranularData/index.html

    Output web address example:

    http://172.0.0.1:22015/Analytics/result/GranularData/20170119+0000/result/VSP_G1000_345678/secdata/20170119/index.html

  4. CSV output is not deleted automatically. You can delete it manually. You can also set up routine file deletion, as shown in the following example.

Delete output older than 3 days, running daily at 1:00 PM:

    $ crontab -e
    0 13 * * * product installation folder/Analytics/sample/rmOldExportData.sh 3
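If the bundled rmOldExportData.sh script does not suit your environment, an ordinary find command can serve the same purpose. A runnable sketch on a throwaway directory, assuming GNU findutils and coreutils; in practice the target would be the GranularData output directory:

```shell
# Demo on a temporary directory so the sketch is safe to run anywhere.
RESULT_DIR="$(mktemp -d)"
touch "$RESULT_DIR/old.csv" "$RESULT_DIR/new.csv"
touch -d "4 days ago" "$RESULT_DIR/old.csv"   # backdate so -mtime +3 matches

# Delete CSV files older than 3 days:
find "$RESULT_DIR" -type f -name '*.csv' -mtime +3 -delete

ls "$RESULT_DIR"   # only new.csv remains
```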