How many tests were performed on which product?
What is the percentage of failed tests?
Which test steps have mainly led to failure?
On which workpiece carrier do most test items fail?
When was the last failure in test step X in product Y on system Z?
In which product types do which failures occur more frequently?
How good are my measurements (Cp, Cpk and other MSA relevant values)?
How large is the dispersion of the measured values?
Are reasonable limits defined for measurement X?
Are there trends in the measured values?
How long does a test execution (min, max, average) take overall?
Are there similarities or dependencies between certain measurements?
Which measurements might cause problems?
IRS Report Analyzer answers all these questions with just a few mouse clicks!
Have you ever had to create an MSA report for the acceptance of a plant and been handed an Excel template? You have probably spent many minutes or even hours creating the evaluation of a single criterion. With IRS Report Analyzer you can import, filter, group and analyze ALL measurements, including PDF reports, in less than five minutes.
The software is available in several languages. The screenshots in this user manual were taken from the English version. The names of windows, buttons, etc. are therefore also given in English in this manual and may differ from the actual representation in the respective language.
Windows 7, 8, 10 (32/64 bit)
4GB RAM minimum, 8GB recommended
.NET 4.6.2 or higher
To install the program, run the installer and follow the instructions.
The software must be activated with a valid license before it can be used. For a period of up to 60 days the software can be tested in trial mode. The trial mode is identical to the licensed version in terms of functionality; only the created exports (PDF, Word, etc.) are watermarked.
A valid license key is required to unlock the software. The license key must be activated once on the IRS server.
The documentation for software activation can be found here.
The analysis capabilities stand or fall with the available data. To get the most out of the analysis, reports should include at least the following information:
Per test execution:
Serial number
PartNr (or Product ID)
Date of test execution
overall result
Ideally, the reports contain further metadata as well as user-defined fields describing product properties or the test environment. You can filter and group by this data later. This information can easily be added to the test sequence using the special test steps where needed. In the program directory you will find examples and instructions on how to optimize your test sequences for the Report Analyzer. This is not a technical requirement for operation, but it improves the evaluation possibilities enormously!
Per test step:
Measurement ID
Name of the measurement
Measurement value
Data type
Limits
Timestamp
Execution time
At the very least, however, you should know the conventions for forming the Measurement ID, which is used to identify measurements in the Report Analyzer. For this purpose, the respective step name is preceded by a unique identifier in square brackets, e.g. instead of
Voltage Input Channel 1
=> [ADC_0010] Voltage Input Channel 1
In the Report Analyzer, this results in:
Measurement ID: ADC_0010
Measurement Name: Voltage Input Channel 1
Content and length of this ID are arbitrary. However, IRS recommends a short combination of an acronym for the respective test sequence and a consecutive number with sufficient gaps left for later additions.
The use of the ID is not technically required, but if the Measurement ID is missing, the rather cryptic TestStand Step ID is used as a substitute to guarantee uniqueness (e.g. ID#:OgNqkZFG6RGN9ggAJw99EB).
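To illustrate the convention, here is a minimal Python sketch (purely illustrative, not the parser actually used by the Report Analyzer) that splits the bracketed identifier off a step name:

    import re

    def split_measurement_id(step_name: str):
        """Split an optional leading "[ID]" prefix off a step name."""
        match = re.match(r"\s*\[(?P<id>[^\]]+)\]\s*(?P<name>.*)", step_name)
        if match:
            return match.group("id"), match.group("name")
        # No bracketed prefix: the cryptic TestStand Step ID would be used instead.
        return None, step_name.strip()

    print(split_measurement_id("[ADC_0010] Voltage Input Channel 1"))
    # -> ('ADC_0010', 'Voltage Input Channel 1')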
Please also refer to the enclosed, more detailed information about the IRS Report Format and to the example sequences in TestStand format mentioned above.
The following figure shows the user interface of the Report Analyzer.
The following actions can be performed on the start screen:
Change to main screen
Load data record
Save record
Change application settings (see App Settings)
Change display settings (see View Settings)
Change export settings (see Export Settings)
Contact support
The start page offers the following information and setting options:
Application version number
Used license key
Link to this documentation
Link to the releases
With this function, all loaded and pre-filtered executions, including the defined groups, can be stored as a single data set. Changes such as applied Measurement Filters are also taken into account. If a data set is needed again and again, this can speed up the work considerably. The original reports are not needed for this. The analysis data and graphs are not part of the data set; they are recalculated when needed.
A previously saved data set can be loaded either by clicking the corresponding Open button or by using the list of recent projects.
Note: The current analysis parameters always apply! If the data set was saved using other parameters, a note appears with the deviating settings. The subsequent calculation is nevertheless performed with the current parameters!
The arrangement and size of all windows or panels can be adjusted by the user as desired. To do this, grab a panel by its title bar and, keeping the mouse button held down, drag the mouse pointer to the corresponding docking symbol in the target area. The panels can be arranged on top of each other, side by side or overlapping.
The panel arrangement is saved when the program is closed. When the program is restarted, the panels are automatically arranged as last configured.
The arrangement of the panels can be reset to a standard layout using the "Reset Layout" button in the main toolbar:
In most tables, only some of the available columns are displayed. Additional columns can be shown or hidden user-specifically.
By right-clicking on the table header line and selecting the Show Column Chooser entry, a dialog opens in which the columns to be displayed can be selected.
This allows the active columns to be set via checkboxes and their order to be changed by drag & drop. This mechanism applies to all tables available in the program.
By left-clicking on the header of a column, the table is sorted in ascending or descending order by the selected column. A triangle symbol indicates the sorting direction.
For large tables or data sets, individual rows can be filtered user-specifically. This function is especially useful for preselecting and defining groups.
The tables can be filtered in three different ways:
by values (single column)
by rules (single column)
Filter Editor (multiple columns)
Filter by values (single column)
If you move the mouse over a column header, the filter symbol appears. Left-clicking on it opens a menu with an active tab Filter Values, which lists all possible values of this column. The user can now select the desired values by clicking into the checkbox. The search function in the upper part of the window can be used to narrow down the selection. The functionality is similar to that in Microsoft Excel.
Filter by rules (single column)
If you move the mouse over a column header, the filter symbol appears. Clicking on it opens a menu in which you have to switch to the left tab Filter Rules. In this tab you can set a rule for filtering the column.
Filter Editor (multiple columns)
The Filter Editor can be used to create extensive filter expressions. It is opened by right-clicking on the table header line and selecting the Filter Editor.
With the help of the filter editor several filter operations can be created and linked by boolean operations.
Note: If a filter is active, the filter expression is displayed in the footer of the table. By clicking on the pencil symbol, it can be edited. The filter is removed by clicking on the X.
Similar to the conditional formatting in Excel, cells of a table can be specially displayed according to certain rules. For example, cells can be colored or bars or symbols can be displayed. The dialog for configuring conditional formatting can be accessed by right-clicking on the header of a column and selecting the Conditional Formatting option.
Note: The adjustments to the table display described in this chapter are saved when the program is closed. When the Report Analyzer is restarted, these settings are applied automatically. Under certain circumstances, records may not be displayed simply because a saved filter is still active.
The main screen consists of only two areas:
Tool bar
Tabs
Tabs represent certain workflow steps and guide the operator through the program:
Import
Grouping
The following tabs are available if import and grouping tasks are completed:
History and Statistics
Error Distribution
Graph Comparison
Similarity Analysis
Trend Analysis
Watchlist
A tab consists of several areas and usually contains its own menu bar. In this menu bar, tab-specific buttons are displayed which the user can use to call up special functions. For example, the display can be changed, the help can be called up or the data export can be started.
In the tab Grouping the loaded data set can be filtered and grouped according to properties of the executions. The groups formed here are the basis for further analysis.
Groups are a central topic in the Report Analyzer. They are used to compare data sets according to arbitrarily defined criteria. These can be different product types, but also different users or characteristics of the test item or the test environment. In MSA analysis, groups are given additional significance in certain cases.
The prerequisite for this is that there is also data that allows grouping. Please refer to the section on optimizing report data (adding header information)!
For all created groups, a statistical analysis is performed for each measurement. Some analysis tools allow the results of the groups to be compared.
Even if you have not defined any special properties, there are still many standard fields that support grouping and filtering (e.g. SeriesNr, BatchNr, SocketNr, TestPlan, StationId, Execution Time, and many more). However, we strongly recommend that you add additional fields to the test sequence to allow for more targeted filtering. The Report Analyzer can also group by self-assigned properties! (see examples "Nominal Current", "Nominal Frequency", "Number of Poles", etc.).
Even if no structuring of the data set is desired, at least one group must still be created so that the program can perform the analyses. In this case, simply click Add Group(s) without filtering to create a default group.
The figure below shows the layout of the Grouping Tab. This consists of three sections:
Upper half: Overview of all available Executions, either as a flat list or in groups, depending on how they are arranged. Here you can select the executions that should be created as new group(s).
Bottom left: Overview of all already created groups (Active Groups). These can be moved, renamed or deleted. A subsequent addition of executions is not possible.
Bottom right: Overview of all executions of the group selected on the left. If necessary, executions can be deleted here. Results will be recalculated.
Procedure
The upper view is configured as desired using grouping (drag a column header into the grouping area above the table header) or filtering
Via the Add Group(s) button, all groups or executions selected via checkbox are added as new group(s). If nothing is selected, all currently visible executions and groups are added.
In the list of active groups, the group name can be adjusted (F2) and the order can be changed. The analyses are later executed, displayed and structured in the export in the order defined here. The order can also be changed later. The selected groups are removed via the "Trash" button after confirmation.
A (re)calculation of the analysis is started automatically when leaving the grouping page.
Simplified group naming can be activated in the application settings. In this case, only the values without the column names are used as groups. (e.g. instead of "ProductId: xyz" only "xyz").
Groups can be added or deleted later. The analysis and graphs will then be recalculated. If the corresponding group is no longer available, it will be removed from the graph.
Individual executions can be selectively deleted from already created groups. When leaving the grouping page, a new analysis will be started.
Not all executions have to be assigned to a group.
Only executions in groups are analyzed.
An execution can also be assigned to several groups and will then be considered in both groups during the analysis.
Attention: When the program is closed, all filter settings are saved and automatically applied when the program is restarted. If a filter was set when creating a group, it may happen that all executions of the next loaded data set are hidden by this filter. If you remove the filter, the data becomes visible again.
This procedure is recommended for the following scenarios:
No grouping necessary (corresponds to the creation of exactly one group).
Grouping by time ranges
Complex groupings that cannot be realized via automated grouping
Procedure
The executions are filtered so that only the required entries are visible. Only these entries are included in the group
Click on Add Group(s) to create a group with a default name (e.g. Group_0)
With F2 the group can be renamed
Steps 1-3 can be repeated as often as you like to create multiple groups
The fastest way to structure the data set is to use automated grouping. To do this, simply drag the column header to be grouped into the grouping area above the table header.
Procedure
Drag and drop the desired column header into the empty area above the table ("Drag a column header here to group by that column.")
The program now groups the data according to the values of the column you selected. The table immediately displays all the groups that have been formed as a preview. Use the arrow symbol to view the contents of the group.
Click Add Group and the groups are created with a default name and added to the list under Active Groups. The group names can be renamed with F2 if necessary.
Tip: A combined grouping is also possible. Simply drag another column header into the grouping area and the second level of the grouping is immediately visible in the preview:
Automatic groupings can also be filtered, allowing you to combine both approaches. For example, you can group by ProductID and then use filters to select only some of the automatically created groups:
The button Assignments allows an analysis of the distribution of executions to groups.
A table lists all executions and informs about the number of assignments and the assigned groups to an execution. This makes it easy to detect wrong or unwanted multiple assignments or even forgotten groups.
For this purpose the table offers additional predefined filters:
All: Shows all imported executions
Duplicates: Displays only those executions that are assigned to multiple groups
Unrelated: Displays only those executions that are not assigned to a group
In the main view of the Grouping Tab there is also the possibility to remove unassigned executions and thus free memory. To do this, just click on Cleanup.
Creating groups can be very time-consuming, especially creating complex filter expressions using the Filter Editor. Therefore, it is possible to save the current grouping visible in the Executions as a filter set and to load and apply this set again later.
Apply filter set If a filter set is to be applied, it only needs to be selected from the drop-down list. The settings take effect immediately. Existing other filters are reset.
Manage Filter Sets With the button Manage Filter-Sets the filter sets can be managed:
Export: The selected filter set is exported as an XML file and can, for example, be copied to other computers
Import: A previously exported XML filter set can be imported
Delete: The selected filter set is removed from the list
Click on this icon to switch to the start screen
Clicking on this icon resets the display and arranges the windows in the default layout
The button Assignments can be used to display which executions are not assigned to any group or are assigned to several groups (see ).
Here, the groups are formed individually via the filter settings. The filter is set as required (see chapter ) and the visible executions are added as a group. The process is repeated until all desired groups are present.
Save filter set To save, simply click on the Save Filter Set button and enter the name of the set. The set will automatically be saved in the app data and displayed in the drop down list.
The workflow basically consists of the following steps:
import data
structure (group) data
evaluate data
export evaluation
In the program, this workflow is structured by arranging the program tabs from left to right.
Import data The measurement protocols can be imported from different sources. Special plugins are provided for this purpose. All available plugins are visible in the tab Import under Import Sources and can be opened by clicking on the button.
Structure data (group) In order to be able to analyze large data sets in a meaningful way, it may be necessary to structure the data set. This structuring can be done in the tab Grouping. To structure the dataset, filters can be set and various groups can be formed. A later performed analysis will always be executed on all defined groups. The results of the groups can be compared (graphical + tabular) afterwards.
At least one group must be created. This can contain e.g. all available data.
Analyze data There are different ways to analyze the data. A separate tab is available for each type of analysis. The following tabs are available for data analysis:
History and Statistics: Display of the measured value histories, calculation of statistical key figures and display of the distribution of the measured values.
Error Distribution: Calculation and display of the group specific error distribution and comparison of the error distribution with other groups
Graph Comparison: Display and comparison of any measured value curves in a diagram
Similarity Analysis: Calculation of the similarity of selected measured values based on cross correlation
Trend Analysis: Recognition of trends in measured value curves and calculation of compensation curves.
Within each analysis tab the respective configuration options and evaluations are displayed.
Export analysis Each analysis method has one or more specific export formats. These are listed in the top line of the respective tab. Depending on the analysis, overall overviews, tables, graphs or detailed reports can be created.
This module shows the statistical analysis for each measurement, visualizes the history of the corresponding measured values and provides a histogram. The statistical key figures are calculated for each measurement for all groups to allow the comparison of measurements within different groups. The history diagram also allows the graphical comparison between groups. The results can be easily exported in different formats.
Select the desired measurement and group. The view is adjusted accordingly.
To create an MSA type 1 report, press Report Wizard.
Note: Please note that the diagrams and tables are located in dock panels, which can be arranged dynamically on the screen and may be hidden.
This table shows the statistical analysis values (MSA type 1) for the selected group and measurement. The page supports two different views: Group First / Measurement First (Default). Depending on the selected view, the table shows either a comparison of the groups based on the measurement or a list of all measurements for the selected group.
Meaning of the columns:
Count: Total number of measurements
CountValid: Number of measurements without NaN or INF values
CountError: Number of measurements with step status "Error"
CountDone: Number of measurements with step status "Done"
CountFail: Number of measurements with step status "Failed"
CountPassed: Number of measurements with step status "Passed"
CountTerminated: Number of measurements with step status "Terminated"
Avg (or Xg): Arithmetic average of all measurements; SUM(measurements) / Count, NaN excluded
Min: Minimum value of all measurements, NaN excluded
Max: Maximum value of all measurements, NaN excluded
LSL: Lower specification limit, taken from the test step (if not unique, the highest lower limit found is used as a surrogate value)
USL: Upper specification limit, taken from the test step (if not unique, the lowest upper limit found is used as a surrogate value)
T: Tolerance; USL - LSL
Xm: Nominal value; average of LSL and USL
Sg: Sigma; corrected sample standard deviation (using N-1)
Bi: Bias / systematic error; Xg - Xm
Cg: Repeatability / gage capability; (CgNumerator * T) / (CgDenominator * Sg)
Cgk: Gage capability (biased); (CgkNumerator * T - Abs(Bi)) / (CgkDenominator * Sg)
Cp: Capability of process; T / (6 * Sg)
Cpk: Critical process capability; Min((USL - Avg) / (3 * Sg), (Avg - LSL) / (3 * Sg))
%EV: Reliability - Equipment Variation; ((EvPercNumerator * Sg) / T) * 100
%K: (2 * (Xm - Xg) / T) * 100
%RE: Resolution; (RE / T) * 100, where RE is the minimum distance between any two measurements (but not 0), NaN excluded
Attention: There is no uniform calculation rule for the values Cg, Cgk and %EV. They are therefore calculated using configurable parameters (shown in italics)! Make sure that the selected parameters correspond to the specifications of your employer / your application! The values can be changed in the application settings. The changes take effect immediately, a recalculation is not necessary.
Here is a short overview of the usual settings for Cg and Cgk in different standards and companies. Data without guarantee! In case of doubt, please ask the company for the parameters used!
MSA 3:2002: Cg = (0.2 * T) / (5.15 * Sg); Cgk = (0.1 * T - |Bi|) / (2.575 * Sg); Cg >= 1.33
GM, Bosch, MSA 4:2010: Cg = (0.2 * T) / (6 * Sg); Cgk = (0.1 * T - |Bi|) / (3 * Sg); Cg >= 1.33
BMW, Q-DAS GmbH, VW / Audi, VDA 5 (09/2010): Cg = (0.2 * T) / (4 * Sg); Cgk = (0.1 * T - |Bi|) / (2 * Sg); Cg >= 1.33
Ford: Cg = (0.15 * T) / (6 * Sg); Cgk = (0.1 * T - |Bi|) / (3 * Sg); Cg >= 1.0
Ford: Cg = (0.15 * σProcess) / Sg; Cgk = (0.45 * σProcess - |Bi|) / (3 * Sg); Cg >= 1.0
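To illustrate how these key figures relate to the configurable parameters, here is a minimal Python sketch (an illustration only, not the Report Analyzer's actual implementation; the assumed parameter defaults correspond to the MSA 4:2010 row above):

    import statistics

    def gage_capability(values, lsl, usl,
                        cg_num=0.2, cg_den=6.0,     # CgNumerator / CgDenominator (MSA 4:2010)
                        cgk_num=0.1, cgk_den=3.0):  # CgkNumerator / CgkDenominator (MSA 4:2010)
        """Compute Cg, Cgk, Cp and Cpk for one measurement series."""
        valid = [v for v in values if v == v]        # drop NaN values
        avg = sum(valid) / len(valid)                # Xg
        sg = statistics.stdev(valid)                 # corrected sample standard deviation (N-1)
        t = usl - lsl                                # tolerance T
        xm = (lsl + usl) / 2                         # nominal value
        bi = avg - xm                                # bias
        return {
            "Cg":  (cg_num * t) / (cg_den * sg),
            "Cgk": (cgk_num * t - abs(bi)) / (cgk_den * sg),
            "Cp":  t / (6 * sg),
            "Cpk": min((usl - avg) / (3 * sg), (avg - lsl) / (3 * sg)),
        }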
This view shows all measurement values available for the selected measurement in a dynamic diagram. The values can be arranged by timestamp, global index, index group or serial number.
Certain elements can be shown or hidden. These include markers, the diagram legend and the limit values. The area between the lower specification limit and the upper specification limit can be filled optionally. Also the limit errors can be highlighted to get a quick overview of all errors.
The diagram also supports various views, display options and X-axes:
Single: Only graph for the selected group is shown
All-in-one: The graphs of all available groups are superimposed. Here it is recommended to switch off the limit value display. With the checkboxes of the legend you can hide unwanted graphs
Stacked horizontally: The graphs of all groups are displayed side by side in their own diagrams
Stacked vertical: The graphs of all groups are displayed one below the other in their own diagrams
With the mouse wheel or by dragging an area with the mouse the shown section of the graph can be enlarged / zoomed. After zooming, the visible section can also be moved with the mouse. For "stacked" views, the zoom is applied to all diagrams simultaneously. By clicking the "Reset Zoom" button the view can be reset to 100%.
Display options
Lines (directly connected points)
Points
Bars
Spline (Interpolated Curve)
Steps (points connected as steps)
X-Axis Options
Index Global: X-axis is oriented according to the index position of the execution in a global comparison (sorted by execution start)
Index Group: X-axis is oriented according to the index position of the execution within the selected group (sorted by execution start)
Timestamp: X-axis represents the absolute time
Serial Nr: Here the points are sorted according to the serial number of the test object
Watchlist Function Clicking on a measured value in the diagram opens a detail table that displays all available information for the corresponding test step and test execution. Within this window, an execution can be enriched with comments and added to the watchlist to mark it for further analysis. The corresponding report can also be opened.
Representation
The histogram diagram shows the statistical distribution of the measured values of the selected measurement. Value ranges are automatically grouped, and the number of measured values that lie within each range is displayed as a bar. Within these bars, the values are additionally grouped by status: Passed (green) and Failed (red).
The groups are formed around the mean value in the range +/-6 sigma. The number of groups can be defined in the application settings. If the number is set to 0, the system calculates a reasonable value depending on the number of measured values.
The histogram also shows the expected normal distribution around the arithmetic mean (Xg/Avg), as well as the critical limits +/-3 sigma and, if possible, the limits LSL and USL and the nominal value (Xm). The latter depend on whether the limits within the measurement series are unambiguous and valid.
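As a rough sketch of this binning (an illustrative assumption, not the exact algorithm used by the program):

    import math

    def histogram_bins(values, n_bins=0):
        """Form bins around the mean in the range mean +/- 6 sigma."""
        mean = sum(values) / len(values)
        sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
        if n_bins == 0:                    # 0 = pick a reasonable number automatically
            n_bins = max(10, int(math.sqrt(len(values))))
        lo, hi = mean - 6 * sigma, mean + 6 * sigma
        width = (hi - lo) / n_bins
        edges = [lo + i * width for i in range(n_bins + 1)]
        counts = [0] * n_bins
        for v in values:
            if lo <= v <= hi:              # extreme values end up in the special gray bars
                counts[min(int((v - lo) / width), n_bins - 1)] += 1
        return edges, counts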
Treatment of inconsistent limits
If the limits within the series differ from each other, it is usually not possible to calculate values that depend on the lower and upper specification limits (Nominal, T, Cg, Cgk, Cp, Cpk, %EV, ...). In this case, the narrowest occurring limits are used as a substitute for calculation and the values are displayed in the table in italics. The found value ranges are displayed in the histogram as bands, which mark the largest and smallest occurring limit values:
Treatment of extreme values
Values outside the +/- 7-sigma range are visualized in the histogram in two special groups as gray bars. Values contained here (independent of the bar width) then lie outside the range of Xg +/-7 sigma:
This view shows all measured values in a sortable and filterable data table. The values can be sorted by clicking on a column header. Additionally, each column header has a symbol for filtering the values. In the lower area, sum information such as Min, Max, Avg is offered for selected columns.
This button starts a dialog that helps to select the contained measurements and groups for the report.
The selected composition can also be saved and reloaded. Click on Show Preview to open the preview of the report. From here, the Export button can be used to save the file in various target formats (PDF, Word, Excel, HTML, CSV).
Pressing this button opens the currently selected measurement including all measured values, diagram and histogram as preview without further configuration. With the Export button, the document can be saved in various formats (PDF, Word, Excel, HTML, CSV).
The Export Table button allows the selected measurement to be exported in CSV format.
In the History & Statistics module, graphs for the same measurement from different groups can already be compared. The "Graph Comparison" module also offers the possibility to compare any measurements within the same group. This is helpful, for example, if the measurements are similar to each other, but were taken at different measurement points.
Procedure
Select the desired group and click on New YY Diagram
Rename the title if necessary:
Drag the desired measurement from the "ID" column into the graph near the Y-axis. When a solid green vertical bar appears, release the mouse button. The measurement series is added to the graph:
Repeat step 3 until all desired graphs are included in the diagram:
After adding three measurement series, our example now looks like this:
Each graph has an additional Y-axis. If all values are in a similar range, you should activate Single Y-Axis:
For an X-Y diagram, proceed in the same way, but here one measurement is drawn to the X-axis and a second to the Y-axis. The resulting X/Y values are entered as points:
The trend analysis can show whether a trend can be identified when a series of measurements is viewed over the long term. Since trends usually develop very slowly due to wear, material aging or mechanical effects, it is necessary to use either very large amounts of data or suitable samples over a longer period of time.
Tip: When using very large data sets, it is advisable to use suitable import filters with only the relevant measurement IDs to keep the memory requirements small!
In the settings, the desired group is set first. In the Test Step Selection section, you can optionally hide entries of failed tests so that bad parts do not falsify the result. After all, the aim here is to detect the influence of the system itself.
In the section Floating Mean the window size for the floating average is set. A minimum of 20 or 10% of the total number is recommended (e.g. 100 for 1000 measurements)
Afterwards, the parameters whose trend is to be analyzed are selected. A curve is calculated for each value selected here and displayed in the "Floating Mean" panel. Note: The window may be in the background by default.
Calculate All: all available measurements are calculated
Calculate Selected: only the measurements selected in the upper part are calculated
For each calculation, one result row is entered in the "Trend Analysis" table. When clicking on the respective row, the corresponding graphs are displayed.
History (Panel) This graph shows the course of the measured values (raw data) as well as a linear interpolation and an interpolation with a polynomial of adjustable degree. From this, an overall trend can already be read off if necessary.
Floating Mean (Panel) This graph shows the smoothed course of selected metrics of the measurement. The window size of the moving average can be adjusted.
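The floating mean itself is an ordinary moving average; the following short Python sketch (illustrative only, with an assumed window size) shows the idea:

    def floating_mean(values, window=20):
        """Simple moving average; a window of at least 20 or 10% of the data is recommended."""
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    # e.g. for 1000 measurements a window size of 100 would be used
    smoothed = floating_mean(list(range(1000)), window=100)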
Tip: If value ranges of the graphs do not fit together, deactivate individual graphs by clicking into the legend, if necessary, to achieve a better representation.
This analysis method allows you to view the error distribution of a group and compare it with other groups.
The error distribution is always calculated per group in two stages:
Stage 1: The "Result" entry of an execution is considered, i.e. the overall result of an execution
Stage 2: The "Status" entry of a test step is considered, i.e. the partial result of an execution
The following figure shows the start page of the "Error Distribution" analysis method
The error distribution of the executions (1st level) of each group is displayed in a diagram. If you click on a diagram, you get to the Details view and get the additional information about the error distribution of the test steps (2nd level). In addition, the failures between the groups can then be compared.
Note: With the Import Filter, test steps can be filtered from an execution. Thereby it can happen that a test step with the status "Fail" is removed and only "Pass" entries are left. However, the overall result is not changed by the Report Analyzer, so it remains on "Fail".
The detail view of the error distribution is structured as follows:
On the left is the selection of the group to be viewed
In the middle you see an overview of all test steps ("Measurements")
Below you can switch between two panels: "Diagrams" and "Executions".
This panel lists all test steps contained in the test executions of the respective group. The measurement ID, the step name and the description are displayed. It is also possible to see how often a step was performed and the status "Pass", "Fail", "Error" or "Terminated". This value is also set in relation to the total number of steps performed and displayed as a percentage value.
An important value for root cause analysis is the "First error" column or its percentage value. This indicates in how many test executions this measurement was the first failed step. With a filter > 0 on this column and descending sorting, it is very easy to create a list of the top causes of failure in each group:
Please note in this context: The first failed test step is not necessarily the step that led to the overall result or termination, even if this is normally the case. Similarly, "Failed" steps do not necessarily lead to a failed overall result of the test execution. This depends on the respective step settings and cannot be seen from the report.
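As an illustration of how the "First error" count can be understood (a simplified, self-contained Python sketch, not the program's implementation; each execution is assumed to be a list of (measurement ID, status) tuples in execution order):

    from collections import Counter

    def first_error_counts(executions):
        """Count, per measurement ID, how often it was the first failed step of an execution."""
        counts = Counter()
        for steps in executions:
            for measurement_id, status in steps:
                if status in ("Failed", "Error"):
                    counts[measurement_id] += 1
                    break                  # only the first Failed/Error step counts
        return counts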
The tables can be sorted and filtered, and conditional formatting can also be defined. Most of these settings are also used for the export.
When selecting a row in "Measurements", the affected test executions are displayed in the "Executions" table and the corresponding diagrams are updated.
This panel is divided into three areas. In the left area, a diagram is displayed which shows the status of all steps performed (as shown in the "Measurements"). In the middle, the results of all defined groups are compared in percent. Each group is displayed as a separate bar, so that a comparison across groups is quickly possible. If you move the mouse pointer over a bar, the group name and further information is displayed. The right area is identical to the middle area, but the values are shown in absolute numbers instead of percentages. This also allows you to quickly determine whether, for example, groups have the same workload or whether errors occur group-specifically or across groups.
If the operator is not only interested in the distribution but also wants to take a closer look at individual logs, e.g. the failed ones, this can be done in the "Executions" panel. This panel lists all executions that contain the selected test step. Various information is displayed, such as the serial number. The test report can be opened and viewed by double-clicking on a line. In the case of repeated measurements, it is thus possible to determine, for example, whether a failure occurs only with a single assembly or batch.
There are three possibilities to export the analysis data of the error distribution:
Report Wizard
Table export
Windows clipboard
Report wizard
Select the desired groups and measurements which should be included in the report:
Click on preview
A preview is then automatically generated for each selected group, which contains the following information:
Complete overview of all executions (1st part)
Complete overview of all currently visible test steps per selected group (2nd part)
As soon as the preview is completely generated, the report can be saved in various formats. This can be done with the button "Export..." or the arrow below the button.
Report current view
The overview of all test steps is exported in the same way as the "Measurements" table is displayed in the Report Analyzer. I.e. filter, sorting and conditional formatting is taken 1:1 from the display.
Table Export
In the displayed dialog, the user selects which groups and test steps are to be exported. This export method saves the table entries of the "Measurements" panel for each selected measurement line by line in a text file. This can then be opened in Excel, for example, and further processed as required. In addition, the following settings can be configured:
File extension
Column separator
Decimal point separator
Output folder
File name
Click on "Export" to create the desired file.
Windows clipboard
In each table, the operator has the possibility to select cells or rows and copy them to the Windows clipboard by pressing CTRL+C. The data can then be pasted into a text file or Excel spreadsheet. With the shortcut CTRL+A the complete table can be selected. For example, the data can be exported from the panel "Executions".
The watchlist lists all entries and comments added via the corresponding watch function, see for example the comment function in .
The comments are also saved when the data is saved as a data set.
A double click on an entry (or click on the button "Show Report") opens the corresponding test report.
Click on "Delete" to delete the selected entry.
Here basic settings for the application are made, which can influence both the display and the calculation.
Significant Digits: Sets the number of significant digits if numbers are rounded or formatted for display or in the report
Use simplified group names: Concerns the automatic naming of groups. If activated, the name of the grouped column is omitted and only the value itself is used as name of the group, e.g. instead of "Product ID: XYZ" only "XYZ". Depending on the source of the grouping in the report, this will be easier to read. If the grouping is unclear, a descriptive name can also be helpful
Use group name as default graph title: If activated, the "Graph Comparison" module uses the group name as default for the graph name. Otherwise the name remains empty and must be set by the user
Defines the parameters for the calculation of some process parameters. When changes are made, the formulas are automatically updated to visualize the effects. See also the section Measurement Analysis Table!
Histogram Groups: Sets the number of groups used for histogram generation. If "0" is entered here, the program will automatically calculate a reasonable number based on the number of measurement data
The sections History and Statistics and Graph Comparison of the View Settings define preferences for the view in the respective modules.
Theme: Changes the display of the whole application (dark / light)
Graphs
Visual Groups: Specifies the maximum number of groups that are visualized in the graph at the same time. If there are more groups, only the number set here will be displayed in graphs. This has no influence on the number of groups in the tables.
Here the default paths for various export files are set.
You can also define your own logo for the report, which will replace the IRS logo
The similarity analysis shows linear relationships between at least two measurements. For this purpose the correlation coefficients are calculated according to Karl Pearson and displayed in a matrix. The correlation factors have values between -1.0 (direct negative correlation, red) and 1.0 (direct positive correlation, green). The cells are colored with a gradient based on their value. The stronger the correlation deviates from 0 (transparent), the stronger the cell is colored.
The correlation can be calculated for a selected group between selected or all measurements. The selected measurements are plotted as columns. For each existing measurement the factors are then calculated in one row. The selection therefore only affects the columns. The rows are always created over all measurements.
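For reference, here is a minimal Python sketch of the Pearson correlation coefficient between two measurement series (illustrative only, not the program's implementation):

    import math

    def pearson(x, y):
        """Pearson correlation coefficient between two equally long series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # values near +1.0 are colored green, values near -1.0 red
    print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0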
Export All Values: All visible results are written to the selected file in CSV format according to the settings
Export Selected Values: Only the selected rows of the result table(!) are written to the selected file in CSV format according to the settings
The built-in Report Viewer now has an alternative "Profiler" view:
A new column "First failure" shows how often the displayed measurement was the first failed test step of a test execution. This enables much better root cause analysis than the mere number of failed measurements, because previously it was not clear whether a failed step was the cause or merely a subsequent error.
The new report options added in 1.7 influence the content of the imported reports. A small warning symbol now indicates that the import options are active. If no options are active, the symbol disappears again.
(Breaking Change!) From now on, all available aggregated measurements are consistently used for the calculation of statistical values, even if they contain NaN. Previous versions had filtered / ignored these values, which could lead to incorrect statistical values in individual cases. In V1.8 and above, such measurements will consistently return NaN (Not a Number) in the corresponding statistical values instead of a potentially incorrect value! In principle, it is not advisable to use incorrect measurements for an MSA report at all.
Please note that this option can only be used globally and may have an undesirable effect on the measurements! In the worst case, different measurements (e.g. a series of different channel measurements) are actually mapped in the same measurement! Normally, automatic indexing ensures that each measured value is evaluated individually. In certain cases, however, this may not lead to the desired result and the measured values cannot be compared in the desired way.
Example use cases:
Measurements are deliberately repeated in the test to enable averaging. Without this option, it would still be individual measurements
For an MSA report, a measurement was repeated 50 times in a loop instead of creating 50 individual reports as recommended. With this option, the measured values can still be used
In principle, this option is a special case handling. It should normally be deactivated!
The new option to remove the indices can cause several measurements to coincide at the same point in the previous display. Previously, the measurements were displayed according to their report position in the group or globally, or according to timestamp. The new axis offers the option of simply listing all measurement values found for the measurement one after the other, regardless of their report source. The list is displayed in ascending order, so the execution sequence is always retained.
In individual cases, it can take a long time to display the search result when searching directories, e.g. when selecting very large network folders. Previously, this process could not be interrupted. As of version 1.8, the search can now be aborted with the ESC key.
Previously, reports that had already been loaded were ignored if the same ID was loaded during a new import. However, as content may be changed during or after import (Measurement Filter), it was previously only possible to completely reload the content if the report had previously been deleted. From V1.8, a report that has already been loaded is completely replaced when it is re-imported. The report is retained in groups that have already been created. Statistical values are recalculated.
The number format set in the graphical curve comparison is now also taken into account in the table
LSL (lower limit) was not marked with *) if substitute values were used
Some lists were not updated correctly when changing the measurement while a filter was active
The list of failed imports was not displayed correctly
filter by date
remove steps by status (done / passed / failed / error)
keep first failed or error step
remove additional results
remove infotext
add custom header data
Reporting possible via context menu: new context menu "Show statistics" for manual selection of test executions in many views (Import, Grouping, Error Distribution)
Selected tests (use checkbox) can now be added to existing groups via Drag & Drop
Selected tests (use checkbox) can now be added to existing groups via new 'Add to Group' button
Performance has been significantly improved in many places, especially in the area of group management
Please note that all used runtime libraries are compiled at the first start. This may take a few minutes. The process requires administrator rights.
Subsequent program starts should be noticeably faster.
This allows measurements to be filtered according to whether the core values have been changed. In this case, either manually changed limits or automatically generated substitute values in the case of non-uniform limits are considered as changes. Note: The column is hidden by default, but can be activated in the column chooser dialog (right mouse button on a column header)
Using the context menu, almost all tables can now be exported in many different formats. In most cases, visual formats are also exported, but this depends on the selected output format. The selection of columns is based on the current view (with some exceptions). To start the export of a table open the context menu with the right mouse button click on "Export":
A dialog with preview opens. Here you can adjust the page format and choose an output format:
Alternatively, you can copy the contents of the table to the clipboard and paste it e.g. into an Excel spreadsheet (also via the context menu, see above)
Unwanted measurements can now be selected and deleted directly from the measured value table. To do this, the respective measured values are selected (if necessary, with the help of the shift or control key) and then the DEL key is pressed. After a confirmation prompt, the corresponding measurements are then removed.
To delete a measurement directly from the graph panel proceed as follows:
Click on the measurement point (the points must be visible and the zoom level must not be too small if there are many measurement points).
The detail view opens (right panel)
Click here on "Remove measurement" and confirm the security prompt
To make the changes visible immediately, click the "Refresh" button and re-select the measurement to update the graph.
Attention: In both methods the corresponding test executions including ALL test steps are removed from ALL groups and the import list!
Important: To avoid unnecessary waiting times due to sometimes time-consuming recalculations of the analysis values after the deletion process, a new analysis is not performed automatically, but must be triggered either via the "Refresh" button or by switching to another module (e.g. error distribution)!
The mentioned columns are now colored by default according to the following rules:
Values up to 1: red
Values up to 2: yellow
The selected threshold value of 2 for "good" may be overly critical and can be adjusted if necessary (see below).
Background: Unfortunately, decimal values have to be entered in the currently selected number format, i.e. with either a comma or a decimal point as separator. Therefore, we cannot preset this setting with e.g. "1.66". In addition, this avoids bad values being overlooked depending on the selected formula settings. All in all, the default setting serves more as an indication of the possibilities of conditional formatting, which you are welcome to adjust to your needs.
For this setting to take effect, the default layout may need to be reset. Alternatively, or to adjust the limits, this behavior can be created or changed manually if necessary:
right click on a column header
"Conditional Formatting" => "Manage Rules"
"New rule", or "Edit rule".
Customize the rule according to your needs, e.g. change the limit value
The import area is displayed by selecting the Import tab and consists of three sub-areas:
Menu bar / Toolbar
Import sources
Imported Executions
The Report Analyzer allows the import from different data sources. For each data source type individual plugins are available. Depending on the version, the number of available plugins can vary. Additional plugins can be used for example to import data from CSV files or databases. The imported data is displayed in the Imported Executions area.
All available import plugins are listed here. Each plugin appears as a button, with an additional button for specific settings where applicable. Clicking the button opens the plugin dialog, which initiates the respective import process.
All test protocols imported by the plugin are displayed in the list of imported executions.
IRS XML Plugin
The IRS XML Import Plugin is a file-based import plugin that can load IRS test reports.
Note: The IRS report format is also the data model that Report Analyzer works with internally. Since the data does not need to be converted, the import process is usually faster than from foreign formats, also the generated data is smaller and supports compression methods if needed. Ask for the IRS Report Plugin!
The plugin supports the following files:
Uncompressed files in IRS format: *.xml, *.irp
Compressed files in IRS format: *.gz, *.irpz
Appended Reports (multiple executions within one test report)
In the import dialog the files to be imported are selected. A report can contain one or any number of executions. All executions contained in the report file will be imported automatically.
Procedure
Select the appropriate file filter (default: *.irp). The selected filter is automatically saved and will be kept until the next change.
Select the base path using the Browse button ... or select a path from the dropdown menu of recently selected paths. The Refresh button refreshes the file list in the left pane if necessary and lists all subfolders and files starting from the base path.
The files to be imported are selected via the checkboxes in front of the respective folder or file name. You can select complete folders incl. subfolders or single files
(optional) Configure Import Options to optimize RAM usage or add custom header information
Click on Import to start the import process.
Import with search filter The dialog offers a search field, which can be used to search within file names. The previous selection of files is retained! When clicking on whole folders in search mode, however, only the displayed files are added or removed. This is e.g. very helpful to filter only successful (or only failed) tests already during import!
Import via Drag & Drop At the bottom of the dialog there is a "Drop Area". Drag files or folders to this area to add them automatically to the selection. Attention: Search terms or the current file filter will be ignored! If the selection is successful, the number of detected files will be updated. A list is not displayed.
Starting the import process By clicking on "Import" an attempt is made to load all selected files. During the import it is checked if the Execution ID of the report has already been loaded. In this case the file will be ignored. After the import process is finished, a summary is displayed:
After confirmation the dialog closes and the newly imported reports are listed.
Tip: The plugin can be opened and executed multiple times to import data from different sources
Import options
As of version 1.7, the reports to be imported can already be filtered or post-processed during the import. This allows an optimized use of memory by discarding information that is not needed for evaluation, such as info texts, additional results or "Done" steps. In addition, missing header attributes can now be added later during loading, allowing for better grouping in the evaluation. It is also possible to replace or correct existing header attributes.
Please note that the report must first be fully loaded before post-processing can take place. Therefore the import options always need some additional computing time!
Date If activated, test executions that are not within the specified time period are discarded (the reports are still loaded first!).
Content
Remove 'Additional Results': removes additional data added by "Additional Results" or a check mark at "Log" in TestStand. Such data is only visible in the report, but cannot be analyzed by the Report Analyzer. Removing it can cause a significant reduction in data size
Remove "Infotext": Removes the TestStand report text. Depending on the type of test, this can also save considerable space
Status. Here the test steps can be removed by status. If "Keep first error step" is selected, the first "Failed" or "Error" step is kept in any case. In this way, for example, an optimized error distribution can be analyzed, which only contains the "Failed" step that led to the test failure.
Header Missing header data can be added here to allow subsequent grouping. Existing header values can also be corrected here.
Settings With the checkmark "Save as default" the current settings will be used for future imports
IRS Test Report Structure For better understanding, the general data structure of an IRS report is shown here, as it is then processed by the Report Analyzer. However, the Report Analyzer only uses the "Execution" level and below after the import:
File
    Report
        Execution
            Attribute (Meta Data)
            Attribute
            TestStep
            TestStep
                Attribute (Additional Results)
            TestStep
        Execution
            Attribute (Meta Data)
            Attribute
            TestStep
            ...
    ...
Execution Properties An execution contains both defined property fields (Properties) and individually usable attributes. The predefined properties are supported in all tables. Since version 1.4.23, the grouping page can also be used to filter and group by attributes.
Step Properties Additional attributes for steps are displayed in the Details window and in the report. However, they cannot be used for filtering or grouping.
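As a rough illustration of this hierarchy (a simplified sketch with assumed field names, not the actual IRS data model), the structure could be modeled like this:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class TestStep:
        measurement_id: str                 # e.g. "ADC_0010"
        name: str
        status: str                         # Passed / Failed / Error / Done / Terminated
        value: float = float("nan")
        attributes: Dict[str, str] = field(default_factory=dict)   # additional results

    @dataclass
    class Execution:
        serial_number: str
        result: str                         # overall result
        attributes: Dict[str, str] = field(default_factory=dict)   # meta data / header fields
        steps: List[TestStep] = field(default_factory=list)

    @dataclass
    class Report:
        executions: List[Execution] = field(default_factory=list)  # appended reports possible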
NI TestStand Plugin
Until version 1.4.3, only reports in the native IRS XML format could be used. Therefore the IRS XML TestStand plugin had to be installed to generate this specific format. Even though it still offers advantages for the measurement data analysis due to its compactness, this format is no longer a mandatory requirement!
Starting with version 1.5, the new NI TestStand plugin offers the possibility to directly read the standard formats NI XML, ATML5 and ATML6 used by TestStand!
The used format is automatically recognized by the plugin and the corresponding import type (XML / ATML5 / ATML6) is selected. If the file type is known, it can be set explicitly in the plugin settings, which can slightly speed up the import process.
Otherwise the handling does not differ from the IRS XML plugin.
The Imported Executions window displays all the imported executions. The table contains meta information such as the test time, the serial number and the overall result.
Info: An Execution is a test run and contains all executed test steps. A report can contain several executions.
Viewing and grouping
Double-click on an execution to open it in the Report Viewer and view the report. Measurement filters already applied are also taken into account.
As already described in the section "Filtering", the executions can be filtered by properties. The set of all executions that match the filter criteria remains.
Executions can be grouped in this view as well. To do so, drag the column header into the grouping area. However, this grouping only serves the current display and has no influence on the later necessary grouping.
Menu bar
In the menu bar an import filter can be created, loaded and applied.
Select and delete unwanted executions
It is often useful (e.g. due to aborted tests or during the commissioning phase) to remove certain executions from the data set before the data is analyzed.
Press CTRL+A followed by DEL to remove all currently visible executions from memory. For example, a filter can be used to display all executions with status "Error" and "Terminated" and then remove them all at once.
Single or multiple Executions can be removed from the table by left clicking (+ CTRL key for multiple selection) and pressing the DEL key
Tip: After the cleanup is complete, all executions can be combined as a data set in a single file. The function "Save record as..." on the start page serves this purpose.
It is not possible to load the same execution multiple times. Already loaded executions are recognized by their unique ID and skipped. Note: When reloading, the existing execution is NOT changed. Steps removed by measurement filters will not come back in this case! To actually reload an execution, it must be removed from the list first!
Changes to the execution list lead to a recalculation of the results. However, already created groups remain as long as they contain at least one execution. Deleting executions may also invalidate already created graphs if the measurements they use no longer exist afterwards. Therefore, groups should only be created after the import process is finished.
Export execution reports
To create an overall report for multiple test executions, select the desired rows and press "Export". A dialog will open with various options to show/hide specific content components. Configure the desired view and select the desired export format using the "Export" drop-down button:
The import process can be repeated as often as you like. Thus the data set can be composed of several sources. For example, it can be composed of files that are imported from different folders or drives. It is also possible to assemble the data set from different sources, e.g. a combination of files and database entries. Complex data sets from different sources can be exported and later reloaded with the function Save data set. This saves you from having to make another time-consuming data selection.
Existing data sets can be continuously extended by importing additional executions and can be used for a long-term evaluation, for example. After adding the new executions, the data set can be saved again.
To integrate a custom plugin into the software, it must be placed in the subfolder 'Plugins' in the installation directory. After a restart it will be recognized automatically.
When importing data into the Report Analyzer, all test steps of an execution are imported by default. During subsequent analysis, each imported test step is analyzed. Since the executions often contain considerably more steps than are of interest for the analysis, the executions can be reduced to the relevant test steps by so-called import filters.
The import filter is a list of Measurement IDs. All steps whose Measurement ID is found in the list are kept; the rest are discarded. The filter can be applied to an already loaded list or directly during the import process.
Tip: The Import Filter is a simple text file. If necessary, it can be edited with any text editor or Excel. One Measurement ID is defined per line.
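To illustrate the mechanism, here is a minimal Python sketch (not the product's implementation; the step dictionary layout and the IDs are assumptions) that loads such a filter file and keeps only the listed steps:

```python
# Illustrative sketch only -- not the Report Analyzer implementation.
# A filter file simply lists one Measurement ID per line, e.g.:
#   ADC_0010
#   ADC_0020
#   CAN_0100

def load_import_filter(path: str) -> set:
    """Return the set of Measurement IDs listed in the filter file."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def apply_import_filter(steps: list, allowed_ids: set) -> list:
    """Keep only steps whose Measurement ID is contained in the filter."""
    return [step for step in steps if step["measurement_id"] in allowed_ids]

# Hypothetical usage with an in-memory execution:
allowed = {"ADC_0010", "ADC_0020", "CAN_0100"}        # as if loaded from the file
steps = [
    {"measurement_id": "ADC_0010", "value": 3.29},    # kept
    {"measurement_id": "TMP_0001", "value": 24.7},    # discarded
]
print(apply_import_filter(steps, allowed))
```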
Advantages:
The clarity is improved, irrelevant measurement steps are removed from the lists
The processing speed for a large number of executions is increased
Memory requirements are reduced (both in RAM and when saving data sets)
Downside:
Accidentally removed steps can only be restored by removing the execution from the list and re-importing it
The status of the execution is maintained. This can lead to inconsistent displays and interpretation problems, e.g. if the filter has removed all erroneous steps, but the overall result is still "Failed" or "Error".
A test step is always identified by its unique Measurement ID.
Create Import Filter
To create the import filter, select the Create Import-Filter button in the Import tab of the menu bar.
Note: The Create Import Filter dialog offers all known test steps. This assumes that data has already been imported
All available test steps are displayed on the left side of the window. Drag all test steps that should be included in the filter to the right side of the window.
Using the input field, the list of test steps can be restricted to the search term entered:
After all test steps have been selected, the filter is saved by clicking the Export Filter button and the dialog can be closed.
Load import filter
The Open Import Filter button loads an import filter (without applying it yet). There are two ways to use the loaded filter:
Apply filter once
The previously loaded filter is applied by the Apply button. The program processes all previously imported executions and discards all test steps that are not listed in the import filter. Depending on the number of loaded executions this process can take some time. With this method the test executions are first loaded completely into memory and then the unneeded test steps are removed.
Apply filter during import
If the filter criteria are already known and the filter file already exists, filtering can take place during the import process. To do so, select the Active button before the import. The filter function is then activated and the loaded filter is applied while importing new executions. This method is recommended when importing large data sets where not all test steps are required but a large number of executions must be processed, e.g. for trend analysis. Since unneeded steps are discarded immediately, memory is freed already during the import process.
Tip: With a double click on an Execution, the Report Viewer is opened. After applying the filter, only the test steps that were listed in the Import Filter are contained there.
Tip: The overall result of the Report is not changed. For example, if "Failed" test steps were removed from the report and only "Passed" steps are contained, the overall result will still be "Failed".
The profiler is an alternative view of the test report and is displayed using the Report Viewer. The dialog opens by double-clicking on a single test execution in almost any suitable list (import, grouping, error distribution, etc.). You can access the profiler via the second tab at the top left:
Test steps from the report are displayed graphically in the order in which they occur, with the start time on the time axis. The length of the bar symbolizes the execution time of the test step. The colors have the following meaning:
Blue: A test step with status "Done", or a group total
Red: A test step with status "Failed"
Green: A test step with status "Passed"
Dark blue: A test step with status "Terminated"
Orange: A test step with status "Error"
Yellow: There is no data in the report for this section (gap)
The test steps are organized in groups according to the information in the report.
The functions of the toolbar in detail:
Export: Provides the option to export the current view to various output formats. Note the page formatting options
Expand: All groups and (non-filtered) steps are displayed
Collapse: Only the group overviews are displayed
Zoom out: Decrease current zoom level
Zoom all: Sets the limits so that all contained test steps fit between the left and right margins (even those not currently visible)
Follow: Align top entry to left margin, keep zoom (see below)
Zoom Auto: Fit vertically visible area horizontally (see below)
Zoom In: Increase current zoom level
Show groups: steps in the same group are highlighted with a blue rectangle
Show gaps: time periods not covered by the report are displayed as yellow pseudo-steps
Show results: Shows the column with the step result
Show value: Shows the column with the measured value
Show description: Shows the column with the step limits
Show details: Shows the column with info text and additional results
Min duration: The slider activates a filter for the minimum step length displayed (see below)
The view offers various tools to make navigation easier:
Follow: The entries are moved horizontally during vertical scrolling so that the first entry remains aligned at the top left. In contrast to Zoom Auto, the zoom level is retained. This mode is particularly suitable for comparing execution times.
Zoom Auto: The currently visible entries are fitted exactly into the available area so that the complete selected section is always visible. Note that this can impair comparability because the zoom level changes continuously.
Show gaps: If this function is activated, the viewer creates pseudo entries for detected gaps between the end of one step and the start of the next. This can help to identify sections with longer execution times that would otherwise remain undetected due to missing entries in the report.
Note: The internally used IRS report does not save any information about the call hierarchy. The names of the sequences or sequence calls (depending on the configuration) are used as the "group". The position of the group in the sequence therefore says nothing about the call depth
In many places, the corresponding test report can be displayed by double-clicking in the table. The report viewer now contains a second tab "Profiler" at the top left, which can be used to display the progress of the test steps over time. For details, see the corresponding chapter.
A new checkbox option "Remove Measurement ID Indices" has been added to the dialog. During import, it removes all indices that the reporting automatically adds for multiple measurements: Meas_A[1], Meas_A[2] ... Meas_A[10] become simply Meas_A. In this way, several identical or similar measurements can be treated as the same measurement.
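As a minimal sketch of this merge rule (illustrative only; the product's exact matching may differ), a trailing index in square brackets is stripped from the Measurement ID:

```python
import re

def strip_measurement_index(measurement_id: str) -> str:
    """Remove a trailing [n] index, e.g. 'Meas_A[10]' -> 'Meas_A'."""
    return re.sub(r"\[\d+\]$", "", measurement_id)

# IDs without an index pass through unchanged.
assert strip_measurement_index("Meas_A[1]") == "Meas_A"
assert strip_measurement_index("Meas_A[10]") == "Meas_A"
assert strip_measurement_index("Meas_B") == "Meas_B"
```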
New dialog for all import plugins:
New module/tab with statistics about FPY, all contained test executions, steps and attributes.
This slider can be used to filter very short steps from the view. The set value acts like a "normal" column filter and is displayed at the bottom left. If the range is not sufficient, the same effect can be achieved using the standard >
The IRS Report Analyzer is an application for viewing, filtering and analyzing measurement reports.
The data from many test protocols are grouped and aggregated based on the individual measurements that occur. For each measurement, the course over time (history), the distribution of values within the defined groups (histogram) and statistical values can thus be determined, which provide information about the quality of the measurements and visualize how the measured values develop over time.
With only a few mouse clicks, complete MSA reports of all measurements of all groups can be generated as PDF files.
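For orientation, the kind of statistical values meant here can be sketched with the usual textbook formulas. This is not the product's implementation; the actual analysis parameters are configurable and may follow different standards:

```python
from statistics import mean, stdev

def basic_process_stats(values, lsl, usl):
    """Textbook Cp/Cpk calculation -- for orientation only."""
    mu = mean(values)
    sigma = stdev(values)                              # sample standard deviation
    return {
        "mean": mu,
        "sigma": sigma,
        "Cp":  (usl - lsl) / (6 * sigma),              # potential capability
        "Cpk": min(usl - mu, mu - lsl) / (3 * sigma),  # capability incl. centering
    }

print(basic_process_stats([3.29, 3.31, 3.30, 3.28, 3.32], lsl=3.20, usl=3.40))
```

The following report formats can be imported: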
IRS XML Report (IRP)
NI TestStand XML Report
NI TestStand ATML5 Report
NI TestStand ATML6 Report
If you need support for the import of special formats, please do not hesitate to contact us. We will gladly make you an offer. You can also inform us about feature requests/bugs directly or via the support form within the software.
Importing reports from various sources
Filtering according to specific criteria
Viewing the reports
Create and manage import filters
Export multiple executions in a PDF report
Creation and comparison of groups according to any criteria or filters
Filter settings can be saved as a set
Automatic creation of metrics of all occurring measurements (MSA type 1 analysis)
Analysis parameters adaptable for different standards and procedures
Visualization of the value course incl. 3-sigma limits and specification limits
Support of variable limit values
Visualization of the accumulation of measured values (histogram)
Visualization of the status distribution (Pass/Fail) per value range in the histogram
Display of all corresponding single measured values including status
Display of the source report if required
Simultaneous display of measurement series of all groups (overlay or stacked)
REPORTING AND EXPORT
Export single or multiple test reports as PDF, Word, Excel or HTML
Storage and reuse of report presets
Adaptation of the names of the individual measurements for the report
Bookmark function for commenting and reviewing conspicuous measurement points
MSA (Type 1) Analysis
Report of the statistical analysis values of selected measurements and groups including value history, histogram, data table and serial numbers
Individual PDF report of the current screen view
Graphical comparison of groups based on their status values (Pie Chart)
Listing of errors per measurement
Visualization of the top sources of error
Comparison of the error frequency over different groups
Export of the data as CSV
Graphical comparison of different measurements within a group
Report of the current view as PDF
Comparison of the correlation of selected measurements of a group
Visualization of the correlation as a color scale
Export as CSV
Display of the moving average of selected quality indicators for each measurement
Adjustable window size
GENERAL
Languages: German / English
Themes: Dark / Light
Additional Plugins
Platform: Windows 7/8/10 32/64-bit, .NET >= 4.5
Single user license, bound to computer
RAM: 4GB min, 8GB recommended
Added
Report Viewer: A new "Profiler" dialog displays the chronological sequence of a test graphically and also shows gaps in the report. It is provided as an additional tab in the report viewer dialog.
Report Viewer: Now allows to switch between loaded (in memory) and original content (in case content was modified by import options or filters)
Current X-axis settings are now adopted in the history report. It now also supports SerialNr and TimeStamp XAxis mode.
A new "1st Failure" column in the error distribution shows how often the measurement was the first failure (or error) in the test. Sorting or filtering by this column allows much better analysis of the cause of the test failure.
Active import options are now indicated by a small "warning" icon in all file import dialogs
Import file browser directory scan can now be aborted by pressing ESC
New import option to merge repeated (indexed) measurements by removing the index from the measurement ID
New XAxis Type IndexMeasurement in history chart (required in case of multiple identical measurements in same execution)
Changed
BREAKING CHANGE: Analysis is now always based on ALL loaded measurements, including NaN values. In previous versions, NaN values were ignored in some statistical calculations. This might lead to more invalid values and different calculations compared to previous versions
TrendAnalysis now uses IndexMeasurement instead of IndexGroup as XAxis for graph
History & Statistics Report Wizard: Measurement selection is now kept if the search field is used, allowing incremental selection
Re-Importing existing reports will now replace the loaded data (also in groups) instead of skipping it. This allows to reload already filtered data.
Improve stylesheet of report viewer
ReportAnalyzer.exe is now signed
Fixed
File Import: Fixed empty list of failed imports Dialog
File Import: Pressing Cancel after completed import caused an exception
Graph comparison: table ignored global formatting (number of significant digits)
Error Distribution: List of measurements was not refreshed when the group was changed and a non-matching filter was active
History & Statistics: LSL was not marked with *) even when it used a substituted value
Page "Grouping" was empty in case of duplicate or invalid entries in report header
Fix error in graph comparison if the measurement of an existing graph was filtered out, then an execution was deleted from the group and the data was recalculated
Removing all executions from a group caused an error if the group was selected in a module. Empty groups are now automatically removed
Added
New import options for all import plugins
filter by date
remove steps by status (done / passed / failed / error)
keep first failed or error step
remove additional results
remove infotext
add custom header data
Grouping
Grouping: Selected tests (use checkbox) can now be added to existing groups via Drag & Drop
Grouping: Selected tests (use checkbox) can now be added to existing groups via new 'Add to Group' button
Group Statistics
New module/tab 'Group Statistics' with statistics about FPY, all contained test executions, steps and attributes. Reporting possible via context menu
New context menu "Show statistics" for manual selection of test executions in many views (Import, Grouping, Error Distribution)
New statistics for whole data set on start page
Changed
Changed initial size of some dialogs to fit also for smaller displays or higher scaling values
Changed target framework to .NET 4.8! Note that .NET 4.8 must be installed to run this version of Report Analyzer
Update used IRS reporting DLL to 1.46.6
Fixed
Statistics report settings did not work properly (number of measurements selection)
Multiple user options (export type, separators etc.) were not correctly evaluated in measurement and statistics table export due to a change of the dialog type
Zero values caused a division by zero in certain cases in table export when rounding to significant digits, resulting in a NaN value instead of 0
Fixed
Fixed an issue where application would hang when grouping by a column which contains colons ":" in its values, while the application setting "use simplified group names" is activated (affects all previous versions)
Changed
Updated reporting dll to 1.46.4. Now supports single sided limits and treats PassFail tests as numeric (0/1)
Changed
New installer type, now offering options to select import plugins. It is recommended to uninstall older versions beforehand! Settings will be kept!
Startup speed improvement
Significant speed improvement in group assignments dialog for large data sets
Significant speed improvements in group management page for large data sets (create groups, delete executions from groups)
Speed improvement for graph creation in case of very large data sets
Speed improvement in error distribution calculation
Speed improvement in all status count calculations
Limit strip generation significantly improved
One sided limits can now be displayed
Non-constant or modified limits are now displayed as a dashed line in the graph
Added
Export context menu: Almost any visible table content can now be exported into various formats or to the clipboard using the context menu
Remove selected executions directly from measurement values table
Remove selected execution directly from measurement info panel (graph)
Predefined conditional formatting of Cg, Cgk, Cp and Cpk columns (see documentation on how to customize this)
New "Deviated Limits" column in analysis table to identify and filter rows with inconsistent or modified limits (hidden by default)
History & Statistics: Group name and measurement ID are now added to graph title
Fixed
Fixed dysfunctional dialog for column selection when too many attributes were found (buttons did not work)
Fixed dialog function for measurement group management
Fixed wrong calculation of alternative limits when limits are inconsistent and new USL would be less than LSL
Fixed wrong display "Loading..." when saving a data set
Fixed wrong order of measurements in trend analysis value tables when including failed measurements
Plugins directory was evaluated incorrectly when the working directory differed from the application directory
XY and YY comparison graphs could crash when no data was assigned and a group was deleted or updated
Removed
Fill Limit Strips: this option is no longer available due to new limit draw method
Fixed
Fixed a bug where recalculating the measurement lists caused a crash when the number of measurements was reduced after removing existing executions
Fixed
(Re-)Enabled resizing of the fixed-size report viewer dialog and support for multiple (non-modal) report viewer dialogs
Fixed
Could not open multiple instances of Report Analyzer
Calling Cleanup in grouping caused an application error
Fixed dialog handling. Dialogs now stay correctly in front of the application when switching back to ReportAnalyzer
Only measurements that are available in the respective group are now listed in the groups
Changed
Updated Reporting DLL to 1.46.3
Updated Licensing DLL to 2.1.1
Updated NLog package to 4.7.11
Fixed
Fixed wrong order of measurements in analysis report export and "Group First" view
Measurement list is now cleaned up from measurements which are no longer part of any execution after deleting executions
Fixed wrong font in Avg and Unit column of analysis report export
Changed
Huge performance improvement when deleting test executions which are contained in existing groups
Added
Import
New import module "NI TestStand" adds the ability to directly read standard TestStand XML, ATML5 and ATML6 reports - with automatic format detection!
New import dialog: Report directories are now displayed in a File Explorer like tree structure
Executions can now be added and removed from Import page without causing the deletion of all groups. Measurements are recalculated, groups and graphs are kept where possible
Allow grouping on import page
Added "# Steps" column
Improved performance when deleting executions
New PDF export of loaded executions. Multiple executions can now be exported as a single report file with preview and various export formats
Grouping
Grouping is now possible with almost any field in grouping view
Now supports selective adding of current view by checkboxes. Groups or even executions can be selected individually. It is no longer required to add the whole view (but still possible)
Added "# Steps" column
Executions can now be removed from a group, measurements are automatically recalculated. Graphs are kept where possible
New user comment feature for groups. The comment is also optionally added (default:true) to PDF export as a group description
Removed groups now correctly trigger updates in all analysis pages
Removing groups no longer deletes all graphs
Analysis Engine
Restructured and optimized MeasurementAnalysis Engine
Improved calculation performance
Added support for almost all existing TestStand comparison operators including all EQT types (except inverted style limits)
Allowed more data types in report than just "Numeric"
History and Statistics
Analysis tab pages are now initially disabled until a valid grouping exists
Support for inconsistent limits in measurements. In this case, the narrowest of all valid measurement limits (LSL.max ... USL.min) is now used as a "worst case surrogate" instead of displaying "NaN"
Highlighting of values calculated from modified limits in views and PDF report (due to manual modifications or usage of surrogate values)
MeasurementValues: Added "Status" column and color formatting
MeasurementValues: Added "Unit" property to MeasurementValue
New "CountValid" column in MeasurementAnalysis table
Added missing formulas to report appendix
Changed header names in history and analysis group data table (%EV, %K, %RE)
Renamed limit columns to USL and LSL
Changed report default measurement count from All to First100
Improved default layouts and default displayed columns
History & Statistics: Histogram
New logic now creates a valid histogram in almost any situation
Now each bar also displays status count (pass/fail) for each group
Improved display of axis labels, crosshair label, value labels
Number of history bars now supports auto calculation when set to 0
Changed default bar range to Mean +/- 6 Sigma
Added special bars for "out-of-range" values capturing everything beyond mean +/- 7*sigma range
Added support for inconsistent limits using vertical strips in histogram
Overall performance improvements
History & Statistics: Graph
New resampling logic to support larger number of points (tested with ~80.000 executions)
Enabled zooming history graph even when in stacked view
Enabled scrollbars for history graph
Enabled checkboxes in legend for all multi-chart modes to enable/disable individual graphs
Keep graph checkbox selection in legend when changing measurement
Now contains PartNumber information in tooltip display
Now displays scrollbars with zooming capabilities
Limits display is automatically switched off for series with more than 5.000 points for performance reasons
Watchlist
A notifier is shown as feedback when a new entry is added
Allowed checkboxes for multiple charts modes to disable single series
Added index property to TestExecutionGroup (allows explicit filtering of single groups)
Error Distribution
Selected group now stays in visible area
Immediate recalculation when changing test execution collection or groups
Fixed an error in export
Graph Comparison
Graphs are updated automatically when executions of used groups are deleted
Series use dynamic resampling for large numbers of measurements to improve performance
Similarity Analysis
Improved performance
Changed single-color, fixed-threshold formatting to a two-color scale that also shows negative correlations in red
Disabled sorting in similarity analysis correlation matrix
General
Improved startup speed
Improved logging
Added local documentation as pdf instead of zipped html page
Improved documentation, added missing chapters and many screenshots
PDF help file now available in German and English
Added FilterPanel to many tables
Added sample sequence files and reports to demonstrate how to optimize reports for IRS Report Analyzer
Loading and saving data sets no longer uses a temp file, which can dramatically improve performance in certain situations
PDF properties for current view exports are now preconfigured
Removed ExecOffset column from all views (internal field, not relevant for user)
Added ExecTime column in all test execution related tables
Added OrderId column
Fixed
Fixed handling of invalid or missing export paths
NaN values are now ignored for %RE calculation
Histogram: Fixed view range of histogram and allowed zooming
Histogram: fixed bar width calculation
Fixed several redundant refresh issues which could have performance impacts
Fixed a bug where additional columns dialog would appear in an endless loop
Added missing reprocessing of existing group data after loading of data set (.gz)
Fixed wrong calculation of Bi (inverted sign)
Fixed potential performance issue with NLog (target was not async)
Fixed various table update/refreshing issues
Warning dialog when loading a saved dataset now only appears if any execution exists in analysis group
Fixed performance issues in similarity analysis when correlation value was NaN (now uses 0)
Fixed
grouping was ignoring the filtering of columns
Fixed
The XSL import plugin was missing from the previous release
Fixed
File import dialog did not load when recently used directory doesn't exist anymore
Inconsistent third party reference led to a crash on application start
Added
Execution View: added a button to open a preview window where a specific execution can be exported to a PDF file
Import View: added a button to bulk export all selected test executions to a PDF file
Added unit to measurement list view and the default report overview page
Fixed
Minor bugfix for the new group by user-defined header feature
Changed
Graph Comparison: Only the selected graph can be exported to PDF
Changed
The splash screen can be minimized
Updated irs licensing
Set properties by default on Export as PDF
Added
It is now also possible to filter and group data by user-defined header fields
Added support for multiple folders and subfolders when doing drag & drop import
Added
Files can be added to the import dialog via drag&drop
Added a new setting to allow a more compact default group naming
Added a new setting to use the group name for the default graph title
Fixed
Deleting a group could occasionally crash the application
Reset measurement search string after each new analysis to fix binding issues (history & statistics)
Improved selection mechanism for history diagram steps
Fixed the "Show report"- and "Add to Watchlist"-button with the history & statistics step view
Changed
Show "missing groups"-alert only once for each view
Removed
Removed "Prefer 32-bit"
Added
Added plugins to start view
Added execution time to grouping view
Changed
Updated irs licensing
Updated reporting dll
Changed
Updated irs licensing
Set result to "unknown" for executions where the result is null
Max visual group value can be configured at the settings
Fixed
Fixed filtering for several views (select first visible item when a filter changed)
Refresh measurements grid control when a group changed (trend analysis)
Added
Error distribution now has a column for "Done"
Added execution time to history & statistics
Added appendix to default history & statistics export
Added product ids to default history & statistics export
Changed
Enhanced selection for data grids
Changed
Changed "WinUI" message box to "DX" message box
Trend analysis: reselect previously selected measurement after calculation
Updated IRS licensing
Fixed
Floating mean did not filter according to test step selection
Removed
License key from startup view
Added
Import XML data with XSL transformation
Import LLT data from csv files or from database
Refresh file import dialog with F5
Fixed
Minor bugfixes
Changed
Minor UI enhancements
Added
Similarity analysis export
Changed
Minor UI enhancements
Added
New export: error distribution default export
New export: error distribution group comparison
New analysis value: % RE
Separate dialog for export settings
History & statistics default export: show/hide IDs
History & statistics default export: new parameter to add additional custom information
Measurement filter can now already be applied during import
Changed
Moved significant digits from view settings to app settings
Optimizations for chart coloring
Better visibility of the import status at file import view
Removed
Export path and logo path from history & statistics default export (can only be set at export settings now)
Changed
Updated licensing DLL
Fixed
Installer
Added
Load recently used datasets from home screen
App settings: separate default path for each export
History and statistics default export can be generated via command line
Graph comparison default export
Changed
Using significant digits from settings at the reports
Fixed
Several fixes
Added
Table headers to every report page in case a table spreads over multiple pages
Save button to app settings and app view settings
History & Statistics: Export Table
Changed
Restructured grouping layout
Grouping: moved save and load filter-set buttons to toolbar
Fixed
Wrong axis color at graph comparison when limits are shown
Opening a new analysis dataset did not work properly when another dataset had already been loaded
Ignoring NaN measurements at graph comparison
Removed
Navigation buttons from graph comparison charts
Added
Light theme
Custom window for unhandled exceptions which allows the user to save an error log
Translations for enums
Saving and loading of analysis datasets
History & Statistics: Export Current
Changed
Overworked navigation and toolbar
Overworked start screen
Resetting custom layout when the application was updated
Several icons
Fixed
Several fixes and improvements
Removed
Unused references
Initial Release
NI TestStand offers an excellent built-in report system. Using the reports, all relevant step details and parameters as well as the test sequence can be reproduced in detail at a later date if required.
While this depth of detail is helpful for debugging, the file size of such reports makes them less suitable for archiving, and the complexity of the data structure makes them poorly suited for machine evaluation. Since practically any data type can occur, extracting the actually relevant measured values is sometimes tedious.
In production, however, it is primarily the measured values that are of interest, rather than the test sequence. The IRS XML Report was designed to provide a standardized data structure for the most frequently required data types and easy access to the measured values in a compact format. In this form, the data can then be easily analyzed by the IRS Report Analyzer.
Compact format
Generic structure
Support for user-defined header data
Available as TestStand Report Plugin (contact IRS)
Storage as XML (if required also automatically packed in GZip format, approx. 90% space saving)
.NET API (.NET 4.6.1)
HTML preview
Direct support of custom stylesheets (XSL) for customized preview or special output format
Multi-instance capable
TestStand Report preview
Support for asynchronous processing
Standalone operation as assembly possible, e.g. for processing or generating reports from LabVIEW or other programming languages
Works in all plugin-enabled TestStand versions from 2013, 32-bit or 64-bit
Dynamic generation of the storage path using expressions based on report properties (e.g. serial number, date, status, custom fields)
Plugin callbacks for dynamic configuration on UUT basis (if required)
Configuration dialog for TestStand Report Options
Viewer program for direct opening with HTML view
CSV Writer as configurable extension for generating simple overviews in CSV format (requires IRS Report Plugin)
Data base for IRS Report Analyzer
Used by many IRS customers for over 10 years as standard format
IRS Reporting was not designed as a complete replacement or alternative for NI Reporting. Rather, its purpose is to provide a compact extract of measured values from a complex data structure as a flat list. To achieve this, some conceptual compromises are necessary, resulting in the following rules and limitations. However, these are essentially the same restrictions that apply to MES systems:
Supported data types for measured values:
Numeric data
String
Boolean
Arrays are not supported
The following contents are excluded from the report
Steps with status "Done" (exception: Additional Results were added)
Flow control steps
Sequence Calls. Only the calling step is suppressed; the test steps it contains are of course added to the list. The name of the sequence is noted in the report as "Group".
Custom step types that differ significantly from the standard steps in the data structure may not be recognized correctly
On-the-fly reporting is not supported
Basically, IRS Reporting works like NI Reporting on the basis of the ResultList arrays. Steps that are not included there cannot be processed.
Additional Result StepTypes(!) generate one step per Additional Result with the value of the added variable in the report (if the data type is supported). This can be used to provide arbitrary data for evaluation in the Report Analyzer, even if no (limit or pass/fail) test is associated with it.
Multiple Numeric Limit Tests generate one step per measurement. The name of the measurement is used as the measurement name. The Measurement ID gets an additional index
Additional Results in Tests (Numeric Limit Tests, Pass/Fail Tests, etc.) are added as attributes to the Step. These are listed e.g. in the HTML report as info text. An evaluation in the report manager, however, is not possible.
Since the data is largely transformed directly from the TestStand Report, the same rules apply to the assignment of limits as in TestStand: One-sided (LT/LE/GT/GE) or individual limits (EQ/NE) are ALWAYS written to the Lower Limit, regardless of the actual meaning. This rule is unfortunately system-dependent and must be considered if necessary with the representation! In the Report Viewer or the HTML Report Preview this assignment is corrected in the HTML Stylesheet. A corresponding demo stylesheet can be found in the examples. Note: For this reason the Reporting API contains an auxiliary class (NumericLimits), which calculates correct numeric values for upper and lower limit, tolerance, nominal value etc. from the step data using the comparison operator for all conceivable cases, as far as the situation permits.
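The correction described above could look roughly like the following sketch. It is purely illustrative and not the NumericLimits class itself; the operator names follow common TestStand comparison types, and the function and parameter names are assumptions:

```python
def resolve_limits(comp_op: str, stored_lower, stored_upper=None):
    """
    Illustrative only -- NOT the IRS NumericLimits class.  In the raw report,
    one-sided (LT/LE/GT/GE) and individual (EQ/NE) limits always end up in the
    Lower Limit field; this helper reinterprets them as (LSL, USL).
    """
    op = comp_op.upper()
    if op in ("GELE", "GELT", "GTLE", "GTLT"):    # two-sided comparisons
        return stored_lower, stored_upper
    if op in ("GE", "GT"):                        # genuine lower bound only
        return stored_lower, None
    if op in ("LE", "LT"):                        # stored "lower" is really the upper bound
        return None, stored_lower
    if op in ("EQ", "NE"):                        # single reference value
        return stored_lower, stored_lower
    return None, None                             # e.g. log-only steps without comparison

print(resolve_limits("LT", 5.0))         # (None, 5.0)
print(resolve_limits("GELE", 1.0, 5.0))  # (1.0, 5.0)
```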
In order to achieve the best possible result in terms of statistical evaluation, the following additional simple measures are strongly recommended. These have no influence on the test execution itself and are also compatible with TestStand Standard Reporting:
Each test step in the report is assigned a (ideally unique) Measurement ID. This allows the results of different test runs to be cumulated based on this ID, regardless of the position in the report. Since such an identification is unfortunately not provided as a TestStand feature, IRS has defined a special syntax convention for step names here. If the step name begins with a string enclosed in square or curly brackets, this string is used as the measurement ID. The structure of the ID is arbitrary regarding content and length. The following part is used as Measurement Name.
Examples
"[ADC_0200] Check ADC input value" =>. => Measurement ID: ADC_200 => Measurement Name: "Check ADC input value"
"{Setup CAN} Configure CAN Communication Interface" =>. => Measurement ID: Setup CAN => Measurement Name: "Configure CAN Communication"
In order to provide the report with more meta information than is possible via the built-in TestStand mechanisms, a special test step is inserted. The location of the step is irrelevant. It is only necessary to ensure that the data ends up in the ResultList. Therefore, the step cannot be used in callbacks such as PreUUT, where Result Recording is disabled by default.
Procedure:
Add a new step of type Additional Results. Recommendation: place it in the Setup group of the MainSequence and, if further header data arises during the test, also in the Cleanup.
Rename the step to: REPORT HEADER DATA. The spelling (incl. upper/lower case and spaces) is important and must not differ, otherwise the step will not be recognized! Each Additional Result of this step is used as a header entry. There are a number of predefined fixed fields, which get their own XML element in the report. All other entries that do not correspond to a reserved fixed field are translated into dynamic header attributes (XML element "Attr") in the report.
Available predefined (fixed) field names:
OrderId
ProductId
ProductType
TestMode
TestType
TestTitle
TestAuthor
TestRevision
SwRevision
HwRevision
CarrierId
AdapterId
FixtureId
PanelId
DataCode
Logfile
Comment
The spelling is important. Otherwise, in case of deviations, the reserved field is not used, but a similarly named attribute is created.
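As a small illustration of this rule (not the plugin implementation; case-sensitive matching is assumed based on the spelling note above):

```python
# Illustrative classification of REPORT HEADER DATA entries -- not the plugin code.
FIXED_FIELDS = {
    "OrderId", "ProductId", "ProductType", "TestMode", "TestType", "TestTitle",
    "TestAuthor", "TestRevision", "SwRevision", "HwRevision", "CarrierId",
    "AdapterId", "FixtureId", "PanelId", "DataCode", "Logfile", "Comment",
}

def classify_header_entry(name: str) -> str:
    """Exact spelling matches become fixed fields, everything else a dynamic attribute."""
    return "fixed field" if name in FIXED_FIELDS else 'dynamic attribute ("Attr")'

print(classify_header_entry("OrderId"))   # fixed field
print(classify_header_entry("orderid"))   # dynamic attribute ("Attr") -- spelling deviates
print(classify_header_entry("OvenTemp"))  # dynamic attribute ("Attr") -- user-defined entry
```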
Important: The following data is read directly from the TestStand environment and therefore need not (or should not) be created as Additional Results:
Execution ID (GUID, generated by the system if required or taken from the ATML report)
Start time
Execution time
Result (Status)
UUT SerialNr
UUT PartNr
TestSocket Nr
Batch SerialNr
Station Id
User
Testplan (name of the sequence file containing the entry point of the execution)
Infotext
Error Code
Error Message