
How to Evaluate an Attribute Sample with ADA

Introduction

With Attribute Sampling Evaluation, the objective is to project the sample findings to the population. The sample error rate is the point estimate, that is, the most likely error rate for the population. The Evaluation module calculates this along with one-sided and two-sided confidence intervals. At least one error is required to obtain two-sided confidence intervals. The width of the confidence intervals reflects the precision of the sample, which is governed by the sample size: larger samples are more precise, and the more precise the estimate, the narrower the confidence intervals. The confidence intervals allow the user to judge how reliable the point estimate is.

After auditing a sample for correctness, the user should have a count of the number of records in the sample that exhibit an error. An item either has an error or does not. There is no degree of error or partial error in attribute sampling.

The Attribute Sampling module uses the hypergeometric cumulative distribution function (CDF) for all of its planning and evaluation calculations, as it is the most appropriate statistical sampling distribution when sampling without replacement. The sample should have been drawn with unstratified, uniform random sampling, in which each item in the population has an equal chance of selection. Using an attribute table or calculation with a stratified sample is not strictly valid because the tables and calculations assume unrestricted random sampling.
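To illustrate why the hypergeometric distribution is the right model for sampling without replacement, the minimal sketch below (not ADA's code; it assumes SciPy is available and uses made-up figures) computes the probability of finding a given number of sample errors or fewer under a hypothesized count of errors in the population:

```python
from scipy.stats import hypergeom

# Hypothetical figures, for illustration only.
population_size = 5000      # records in the population
sample_size = 100           # records drawn without replacement
hypothesized_errors = 250   # assumed errors in the population (a 5% rate)
observed_errors = 2         # errors actually found in the sample

# SciPy's hypergeometric argument order: (k, population size, successes, draws).
p_at_most_k = hypergeom.cdf(observed_errors, population_size,
                            hypothesized_errors, sample_size)
print(f"P(2 or fewer sample errors | 250 population errors) = {p_at_most_k:.4f}")
```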

Formats Supported by Attribute Sampling Evaluation

ADA’s Attribute Sampling Evaluation utility does not operate on a data file; it works entirely from user inputs.

[Figure: attribute sample evaluation dialog]

Using the Dialog

The dialog box consists of two sections: Evaluation Inputs and Evaluation Results. The entries in the Evaluation Inputs section are used to calculate the point estimate and confidence intervals, which appear in the Evaluation Results section after the user clicks Calculate.

Evaluation Inputs. This section requires the Population Size, the Sample Size, the Number of Sample Errors, and the Confidence Level. Once these are provided, the user can click Calculate to obtain the point estimate and confidence intervals in the Evaluation Results section.

Population Size. The number of items or records in the population data file.

Sample Size. The number of items or records in the sample. Must be less than or equal to the Population Size.

Number of Sample Errors. The number of control deviations, errors, failures, etc. in the sample. Must be less than or equal to the Sample Size.

Confidence Level. Defines the Beta risk for the sampling evaluation, which is the risk of not finding material error with the sample when it exists in the population. The Beta risk is the complement of the Confidence Level (i.e. 100% – C.L.%). The hypergeometric CDF calculates the confidence intervals so that the likelihood of not finding material error with the sample when it exists in the population is less than or equal to this Beta risk.
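As a concrete sketch of the input constraints described above, the check below derives the Beta risk from the Confidence Level and enforces the relationships among the four entries; the function name and figures are hypothetical illustrations, not part of ADA:

```python
def check_evaluation_inputs(population_size, sample_size, sample_errors, confidence_level):
    """Enforce the Evaluation Inputs constraints and return the Beta risk."""
    if not 0 < sample_size <= population_size:
        raise ValueError("Sample Size must be positive and no larger than the Population Size")
    if not 0 <= sample_errors <= sample_size:
        raise ValueError("Number of Sample Errors must be between 0 and the Sample Size")
    if not 0 < confidence_level < 1:
        raise ValueError("Confidence Level must be a proportion between 0 and 1, e.g. 0.95")
    # The Beta risk is the complement of the Confidence Level (100% - C.L.%).
    return 1.0 - confidence_level

beta_risk = check_evaluation_inputs(5000, 100, 2, 0.95)  # hypothetical entries
print(f"Beta risk = {beta_risk:.0%}")                    # -> Beta risk = 5%
```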

[Figure: attribute sample evaluation results]

Evaluation Results. The results of the evaluation based on the Evaluation Inputs entries; they are updated when the user clicks Calculate.

Sample Error Rate (Point Estimate). The Number of Sample Errors divided by the Sample Size. This is the unbiased point estimate of the population’s error rate.

Confidence Intervals. Presents the upper and lower precision limits around the point estimate for one-sided and two-sided confidence intervals.

One-Sided Upper Limit. Provides the point estimate and the upper limit. A one-sided upper limit considers only error rates above the point estimate. The upper limit of the one-sided confidence interval is less than the upper limit of the two-sided confidence interval because all of the Beta risk (i.e. 100% – Confidence Level %) is assigned to the upper end.

Two-Sided Limit. At least one error is required to calculate the two-sided limit. The upper limit of the two-sided confidence interval is greater than the upper limit of the one-sided confidence interval because half of the Beta risk (i.e. 100% – Confidence Level %) is assigned to the upper end and half is assigned to the lower end.
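The sketch below shows one way exact (hypergeometric) confidence limits can be computed from the four inputs, assuming SciPy is available. It is an illustration of the general technique, not ADA's implementation, and its boundary conventions may differ slightly from the module's results; the function names and figures are hypothetical. All of the Beta risk is placed in the upper tail for the one-sided limit, and half in each tail for the two-sided limits.

```python
from scipy.stats import hypergeom

def upper_error_limit(pop_size, sample_size, sample_errors, risk):
    """Largest population error count m for which observing `sample_errors`
    or fewer errors in the sample still has probability >= `risk`."""
    limit = sample_errors
    max_errors = pop_size - (sample_size - sample_errors)  # the clean sample items must also exist in the population
    for m in range(sample_errors, max_errors + 1):  # linear scan for clarity; a bisection search would be faster
        # SciPy argument order: cdf(k, population size, successes in population, draws)
        if hypergeom.cdf(sample_errors, pop_size, m, sample_size) >= risk:
            limit = m
        else:
            break  # P(X <= k) only decreases as m grows, so we can stop
    return limit

def lower_error_limit(pop_size, sample_size, sample_errors, risk):
    """Smallest population error count m for which observing `sample_errors`
    or more errors in the sample still has probability >= `risk`."""
    max_errors = pop_size - (sample_size - sample_errors)
    for m in range(sample_errors, max_errors + 1):
        # sf(k - 1) = P(X >= k); it increases as m grows
        if hypergeom.sf(sample_errors - 1, pop_size, m, sample_size) >= risk:
            return m
    return sample_errors

# Hypothetical inputs, for illustration only.
N, n, k, confidence = 5000, 100, 2, 0.95
beta = 1.0 - confidence

point_estimate  = k / n                                      # sample error rate
one_sided_upper = upper_error_limit(N, n, k, beta) / N       # all Beta risk on the upper tail
two_sided_upper = upper_error_limit(N, n, k, beta / 2) / N   # half the Beta risk per tail
two_sided_lower = lower_error_limit(N, n, k, beta / 2) / N

print(f"Point estimate:        {point_estimate:.2%}")
print(f"One-sided upper limit: {one_sided_upper:.2%}")
print(f"Two-sided interval:    [{two_sided_lower:.2%}, {two_sided_upper:.2%}]")
```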

Questions

If you have questions about ADA software or you would like to know about purchasing custom ADA analytics, wonderful! Please call us at (864) 625-2524, and we’ll be happy to help.


ADA Help Contents

ADA Overview
How to Append Data with ADA
How to Detect Duplicates with ADA
How to Evaluate a Monetary Unit Sample with ADA
How to Evaluate a Variable Sample with ADA
How to Evaluate an Attribute Sample with ADA
How to Filter Data with ADA
How to Generate Summary Statistics with ADA
How to Import Data with ADA
How to Join Data with ADA
How to Manage Columns with ADA
How to Perform Error Assurance with ADA
How to Plan a Monetary Unit Sample with ADA
How to Plan an Attribute Sample with ADA
How to Plan and Extract a Classical Variable Sample with ADA
How to Quick Export Data with ADA
How to Random Sample with ADA
How to Sort Data with ADA
How to Summarize Data with ADA
How to Write Criteria with ADA
