Introduction
With MUS Evaluation, the objective is to project the findings from the sample back to the population. After creating the sample and, if applicable, its high value items file using MUS Sample Planning, the user is expected to export the data file(s) to Excel. Once the sampled items have been audited, each file should have two columns: one with the recorded amounts and one with the true, audited amounts. The user may take advantage of the RECORDED_AMT and AUDIT_AMT columns, which are created for this purpose. Once the auditing of the sample is complete, the user can import the data file(s) back into ADA to perform MUS Evaluation.
MUS Evaluation can be used on low and high error rate samples. With low error rate samples, either the Stringer Bound or Cell Evaluation projects a most likely error point estimate with an upper error precision limit based on the taintings in the sample. A tainting is the absolute value of the difference between the recorded and audited amounts, divided by the recorded amount. With high error rate samples, PPS Evaluation employs the Classical Variable PPS ratio estimator, which uses the ratio of audited amounts to recorded amounts to make projections to the population.
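To make these two quantities concrete, here is a minimal Python sketch, assuming the audited sample has been loaded into a pandas DataFrame with the RECORDED_AMT and AUDIT_AMT columns described above (the data values are hypothetical):

```python
import pandas as pd

# Hypothetical audited sample with recorded and audited amounts
sample = pd.DataFrame({
    "RECORDED_AMT": [1000.0, 250.0, 4000.0],
    "AUDIT_AMT":    [900.0, 250.0, 4400.0],
})

# Tainting: absolute difference between recorded and audited amounts,
# divided by the recorded amount (used by Stringer Bound and Cell Evaluation)
sample["TAINTING"] = (
    (sample["RECORDED_AMT"] - sample["AUDIT_AMT"]).abs() / sample["RECORDED_AMT"]
)

# Ratio of audited to recorded amounts (the basis of the Classical
# Variable PPS ratio estimator used for high error rate samples)
ratio = sample["AUDIT_AMT"].sum() / sample["RECORDED_AMT"].sum()
```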
To get started, open the Spec file created when you extracted the sample and click Monetary Unit > Evaluate. Using the Spec file is ideal, but not required. The program uses the information in the Spec file to locate the sample file and high value file and automatically fills in all the information needed to perform the evaluation. It looks in the current project folder for files with the same names as those created in planning and recorded in the Spec file, followed by a hyphen (-), which ADA appends when importing an Excel worksheet. For example, if during planning the sample name (the Output Filename entry) is “MUS Sample” and a high values file was created (automatically named “MUS Sample_HV”), then exporting these to Excel results in the Excel files “MUS Sample.xlsx” and “MUS Sample_HV.xlsx”, each with a worksheet named “Sheet1”. After importing, the files will be called “MUS Sample-Sheet1” and “MUS Sample_HV-Sheet1”. The sheet names themselves do not matter, only that the hyphen is there. Note that for this to work, the user must not change the Output Filename Prefix entry during Excel import; simply accept the default.
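To illustrate the naming convention, the sketch below derives the imported file names from the planning Output Filename; the function is purely illustrative and not part of ADA:

```python
def imported_name(output_filename: str, sheet: str = "Sheet1") -> str:
    # ADA appends a hyphen and the worksheet name when importing from Excel
    return f"{output_filename}-{sheet}"

print(imported_name("MUS Sample"))     # MUS Sample-Sheet1
print(imported_name("MUS Sample_HV"))  # MUS Sample_HV-Sheet1
```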
A Spec file is not required to perform MUS Sample Evaluation. If you do not have a Spec file, you can open the sample file (not the high value items file) and click Monetary Unit > Evaluate. Alternatively, you can have nothing open and click Monetary Unit > Evaluate. If starting with the sample file, only some information will be filled in. If no file is provided when invoking the module, then a blank dialog will appear.
Formats Supported by MUS Sample Evaluation
ADA’s MUS Sample Evaluation utility evaluates samples stored as Parquet data, the format ADA uses for storing data files.
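If you want to inspect a data file outside of ADA, Parquet files can be read with standard tools. For example, with pandas (the file name here is hypothetical):

```python
import pandas as pd

# Read an ADA data file from the project folder and preview the audit columns
df = pd.read_parquet("MUS Sample-Sheet1.parquet")
print(df[["RECORDED_AMT", "AUDIT_AMT"]].head())
```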
Using the Dialog
The dialog box consists of five sections: Sample Specification File, Sample File, High Values File, Evaluation Options and Output Name.
Sample Specification File. This is optional and can be used to conveniently fill in the information in the other sections. This file is the _Spec file created when using ADA’s MUS Sample Planning module. If other sections are edited and then the Spec file is chosen, the program will automatically overwrite the changes in the other sections with the applicable information in the Spec file.
Sample File. This is required. Once provided, the user will need to choose the appropriate recorded amount and audited amount columns.
High Values File. This is optional and whether one exists depends on how the sample was planned and extracted in MUS Sample Planning. If provided, the user will need to choose the appropriate recorded amount and audited amount columns.
Evaluation Options. Consists of two sections: Evaluation Method and Evaluation Inputs.
Evaluation Method. This section breaks down options into a low error rate group and a high error rate option.
Stringer Bound. The Stringer Bound produces a more conservative upper error limit than Cell Evaluation, meaning the Stringer Bound upper error limit is slightly higher, giving it a slightly greater chance of breaching the materiality goal. A sketch of the standard computation follows these method descriptions.
Cell Evaluation. An evaluation method intended to be used secondarily if the conservatism of the Stringer Bound evaluation is suspected to be the cause of a materiality breach.
PPS Evaluation. Employs the Classical Variable PPS ratio estimator to estimate a high error rate sample.
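This documentation does not reproduce ADA's internal formulas, but the textbook Stringer Bound can be sketched as follows: the basic precision is the zero-error Poisson confidence factor scaled by the sampling interval, and each tainting, ranked from largest to smallest, is weighted by the incremental confidence factor. This is only an illustration of the standard method under those assumptions, not ADA's exact implementation:

```python
from scipy.stats import chi2

def poisson_upper_factor(n_errors: int, confidence: float) -> float:
    # One-sided upper confidence factor for a Poisson mean given n_errors
    return chi2.ppf(confidence, 2 * (n_errors + 1)) / 2

def stringer_bound(taintings: list[float], sampling_interval: float,
                   confidence: float = 0.95) -> float:
    # Basic Precision factor: zero errors observed
    bound = poisson_upper_factor(0, confidence)
    # Rank taintings from largest to smallest; each adds its incremental factor
    for i, t in enumerate(sorted(taintings, reverse=True), start=1):
        increment = (poisson_upper_factor(i, confidence)
                     - poisson_upper_factor(i - 1, confidence))
        bound += increment * t
    return bound * sampling_interval

# Example: three overstatement taintings, 50,000 sampling interval
print(stringer_bound([1.0, 0.4, 0.1], sampling_interval=50_000))
```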
Evaluation Inputs. This section collects the numeric inputs for the evaluation; depending on the entry, it may apply to the low error rate methods, to PPS Evaluation, or to both.
Sampled Population Total Value. The “Sampled Population” is the population from which the sample was drawn. If high value items were moved to a separate file, this was done prior to sampling, so this total value should be the original population value minus the value of the high value items.
Confidence Level. Defines the Beta risk for the sampling evaluation, which is the risk of not finding material error with the sample when it exists in the population. The Beta risk is the complement of the Confidence Level (i.e. 100% – C.L.%). This defines a one-sided confidence level for both Stringer Bound and Cell Evaluation. For PPS Evaluation, the user has the option of selecting Two-Sided or a one-sided Upper confidence interval for evaluation.
Error Size Limit. Sets the upper bound for an individual item tainting, which is the percentage difference between recorded and audited amounts. Most applications of MUS assume the most an amount could be in error is 100% and this entry should probably not be changed. However, if evaluation finds that the error size limit has been breached, then the user will be notified and the Error Size Limit will be increased automatically to account for error taintings greater than 100%.
Output Name. Name your results data file. It will be saved in the project folder with a .parquet extension. Here, the output file will be named MUS Demo Evaluation.parquet.
Interpretation of Results
After generating the results files, it is expected that the user will export them to Excel, which can easily be done with Quick Export. There the user has complete control over the presentation of the results and can easily include the output in reports and papers. The following views of the output are shown in Excel with no formatting applied other than zooming out to fit them on screen for the screenshots.
For low error samples, an evaluation will produce two files. The first, which we will call the Results Overview, uses the Output Name exactly and contains the overview of the results. The second, which we will call the Detailed Calculations, uses the Output Name followed by a _DetailedCalcs suffix.
Here is an example of a Results Overview when using Stringer Bound evaluation:
The calculations for the Basic Precision, Gross Most Likely Error, Precision Gap Widening and Gross Upper Error Limit are shown in the Detailed Calculations.
- The overstatement Net Most Likely Error is the overstatement Gross Most Likely Error minus the understatement Gross Most Likely Error.
- The understatement Net Most Likely Error is the understatement Gross Most Likely Error minus the overstatement Gross Most Likely Error.
- The overstatement Net Upper Error Limit is the overstatement Gross Upper Error Limit minus the understatement Gross Most Likely Error.
- The understatement Net Upper Error Limit is the understatement Gross Upper Error Limit minus the overstatement Gross Most Likely Error.
Any error found in the High Value Results section is added straight to the Gross Most Likely Error. Note that these errors are not projected and do not change the Basic Precision or Precision Gap Widening.
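A minimal sketch of this netting arithmetic, including the direct (unprojected) addition of a high value error; all figures are hypothetical:

```python
# Hypothetical projected figures from the sampled (non high value) population
gross_mle_over, gross_mle_under = 1_200_000.0, 300_000.0
precision_over, precision_under = 1_300_000.0, 600_000.0  # BP + PGW per direction

# A high value error is added straight to the Gross Most Likely Error;
# it is not projected and does not change Basic Precision or
# Precision Gap Widening
gross_mle_over += 50_000.0

# Gross Upper Error Limit per direction
gross_uel_over = gross_mle_over + precision_over
gross_uel_under = gross_mle_under + precision_under

# Net figures: offset each gross figure by the opposite-direction Gross MLE
net_mle_over = gross_mle_over - gross_mle_under     # overstatement Net MLE
net_mle_under = gross_mle_under - gross_mle_over    # understatement Net MLE
net_uel_over = gross_uel_over - gross_mle_under     # overstatement Net UEL
net_uel_under = gross_uel_under - gross_mle_over    # understatement Net UEL
```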
The Statistics section offers the following statistics:
- Total Achieved Precision is also called Total Precision and Absolute Precision. It is the sum of Basic Precision and Precision Gap Widening, or equivalently the Gross Upper Error Limit minus the Gross Most Likely Error, calculated separately for overstatements and understatements.
- Total Audited Amount is the recorded population total amount (given in cell F5, Population Value with High Values) adjusted by the Net Most Likely Error. For understatements the Net MLE is added to the population total and for overstatements the Net MLE is subtracted from the population total.
- Achieved Relative Precision is defined as the Total Achieved Precision divided by the Total Audited Amount.
- Two-sided Confidence Intervals are provided since the Overstatement and Understatement columns provide upper error precision limits that are one-sided limits. The last three rows provide a two-sided confidence interval with the Net Most Likely Error as the point estimate between the lower and upper error precision limits. The two-sided Confidence Level equals 1 – 2 * (1 – one-sided Confidence Level). Note that a 95% confidence level entry when running the evaluation results in a 90% two-sided confidence interval; to obtain a 95% two-sided confidence interval, use 97.5% as the confidence level entry.
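Continuing the hypothetical figures from the netting sketch above, the statistics in this list can be expressed as follows:

```python
# Figures carried over from the netting sketch (hypothetical values)
gross_mle_over, gross_uel_over = 1_250_000.0, 2_550_000.0
net_mle_over, net_mle_under = 950_000.0, -950_000.0
population_with_hv = 100_000_000.0   # cell F5, Population Value with High Values

# Total Achieved Precision: Gross UEL minus Gross MLE (equivalently BP + PGW)
total_precision_over = gross_uel_over - gross_mle_over

# Total Audited Amount: Net MLE subtracted for overstatements
# (it would be added for understatements)
total_audited_over = population_with_hv - net_mle_over

# Achieved Relative Precision
relative_precision_over = total_precision_over / total_audited_over

# Two-sided confidence level implied by the one-sided entry:
# a 95% entry yields a 90% two-sided interval
one_sided_cl = 0.95
two_sided_cl = 1 - 2 * (1 - one_sided_cl)   # 0.90
```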
And here are the Detailed Calculations for Stringer Bound Evaluation:
For completeness, here is the same sample evaluated with Cell Evaluation. The Results Overview has the same layout as the one for Stringer Bound Evaluation. The Net Upper Error Limit for overstatements is 44,292,313.27, which is less than (or not as conservative as) the Stringer Bound limit of 44,506,565.80. With the materiality threshold set at 44,480,657.54, as it was for this sample, Cell Evaluation shows that the Stringer Bound’s conservatism is responsible for the breach of materiality. The sample errors yielded 105% total tainting, just barely exceeding the 104% allowable tainting calculated during planning (see the Spec File section of the MUS Sample Planning documentation).
And here are the Detailed Calculations for Cell Evaluation:
For high error samples, PPS Evaluation should be used. PPS Evaluation does not include a Detailed Calculations file, just a Results Overview. This shows the results of a two-sided PPS Evaluation. A one-sided Upper PPS Evaluation will produce similar results, but without the Lower Error Limit. This result provides a confidence interval around the Most Likely Error with the Lower and Upper Error Limits.
The Statistics section offers the following statistics:
- Total Achieved Precision is also called Total Precision and Absolute Precision. It is the difference between the point estimate (Most Likely Error) and the upper and lower precision limits (Upper Error Limit and Lower Error Limit).
- Total Audited Amount is the recorded population total amount (given in cell F5, Population Value with High Values) adjusted by adding the Most Likely Error. A negative Most Likely Error suggests the population is overstated and the amount should be adjusted down.
- Achieved Relative Precision is defined as the Total Achieved Precision divided by the Total Audited Amount.
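A minimal sketch of the ratio projection and these statistics; the figures are hypothetical, and ADA's variance calculation for the error limits is not reproduced here:

```python
import pandas as pd

# Hypothetical audited sample from a high error rate population
sample = pd.DataFrame({
    "RECORDED_AMT": [1000.0, 250.0, 4000.0, 800.0],
    "AUDIT_AMT":    [700.0, 275.0, 3600.0, 500.0],
})
population_with_hv = 100_000_000.0   # recorded population total (cell F5)

# Classical Variable PPS ratio estimator: project the audited/recorded ratio
ratio = sample["AUDIT_AMT"].sum() / sample["RECORDED_AMT"].sum()
most_likely_error = ratio * population_with_hv - population_with_hv

# Suppose the evaluation reports limits this far from the point estimate
lower_error_limit = most_likely_error - 4_000_000.0
upper_error_limit = most_likely_error + 4_000_000.0

# Total Achieved Precision: distance from the point estimate to each limit
precision_upper = upper_error_limit - most_likely_error
precision_lower = most_likely_error - lower_error_limit

# Total Audited Amount: population adjusted by adding the Most Likely Error
# (a negative MLE means the population is overstated and adjusts downward)
total_audited = population_with_hv + most_likely_error

# Achieved Relative Precision
relative_precision = precision_upper / total_audited
```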
Questions
If you have questions about ADA software or you would like to know about purchasing custom ADA analytics, wonderful! Please call us at (864) 625-2524, and we’ll be happy to help.