…the sample population at the landowner's discretion. Using owner permission as an up-front screen for site selection automatically selects for landowners who are at the very least confident they are rule compliant, while deselecting landowners who knowingly are not. This is analogous to the IRS (Internal Revenue Service) only conducting audits on the segment of the U.S. population (personal taxpayers and corporations) who volunteer to be audited. Given a choice, how many taxpayers and corporations would choose to be audited, and how would that selection process bias tax audit results? Without State agency authority to access and audit forest practices on unbiased, randomly selected harvest units at the agency's discretion, ODF's Compliance Audit results are biased in the direction of full compliance while grossly underrepresenting non-compliance. The Compliance Audit report admits this to some extent in stating that:

"Analysis focused on implementation of Forest Practices Act rules and potential or actual impacts to resources. Without a full enforcement investigation and legal decision on compliance, the agency considers outcomes as apparent rates of compliance or non-compliance, although for readability the word 'apparent' is not used but implied."

Moreover, PNI (private non-industrial) landowners may not know the difference between a "full enforcement investigation and legal decision on compliance" and being audited when it comes to volunteering. Both non-responses and those contacted but unwilling to volunteer for the audit would also contribute to biased results. Would you volunteer knowing the state carries legal enforcement authority for gross violations?

In Washington state (WAC 222-08-160(4)), the Compliance Monitoring Program must answer whether forest practices are being conducted in compliance with the rules and must provide statistically sound biennial audits and reports to the Forest Practices Board. Further, "Sample size estimation is based on attaining a margin of error of +/- 5% for the statewide compliance proportion for riparian and road activities." See https://www.dnr.wa.gov/publications/fp_cm_program_design.pdf

To guard against landowner-biased results, the WA protocol for Compliance Monitoring directs: "Give the landowner a notification call with the dates that you will be reviewing their application," and "The landowner may attend the assessment; and they can clarify elements of the FPA. However, they cannot be part of the decision-making process for determining the compliance of their activities." https://www.dnr.wa.gov/publications/fp_cm_program_protocols.pdf

2. Statistical Reporting Methods for Aggregation Artificially Inflate Results

Tables 3, 4, and 5 depict artificially inflated compliance rates resulting from a method that is fatally flawed: dividing the total number of non-compliance findings validated "on the ground" by the total number (thousands in many cases) of potential rule applications that were never visited, then reporting the result as percent compliance (Table 3, "overall compliance," row one: 625 noncompliance findings against 25,600 applications, reported as 98% compliance). This is analogous to state law enforcement officers detecting 200 out of 1,000 motorists speeding on a major interstate (e.g., over 60 mph) and then dividing the 200 that were noncompliant by the 100,000 motorists who used the same interstate but were never checked for speed (1 - 200/100,000 = 99.8% compliance), instead of by the 1,000 they did check for speed (1 - 200/1,000 = 80% compliance, 20% non-compliance).
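To make the denominator problem concrete, the two computations can be set side by side. The short sketch below is illustrative only: the motorist figures come from the interstate analogy above, and the 625 / 25,600 figures are the Table 3 "overall compliance" numbers quoted from the ODF report.

    # Contrast the two compliance-rate computations discussed above.
    def apparent_compliance(noncompliant: int, denominator: int) -> float:
        """Compliance rate (%) implied by a given choice of denominator."""
        return 100.0 * (1 - noncompliant / denominator)

    # Interstate analogy: 200 speeders found among 1,000 motorists checked,
    # out of 100,000 motorists who used the interstate but were never checked.
    checked, speeders, all_motorists = 1_000, 200, 100_000

    print(apparent_compliance(speeders, all_motorists))  # 99.8% -- inflated
    print(apparent_compliance(speeders, checked))        # 80.0% -- correct

    # ODF Table 3, "overall compliance" row: 625 noncompliance findings
    # divided into 25,600 rule applications, most never field-checked.
    print(apparent_compliance(625, 25_600))              # ~97.6%, reported as 98%

The only difference between the 99.8% and 80% figures is the denominator; nothing about the population of unchecked motorists (or unvisited rule applications) was ever measured.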
The Report goes on using the same flawed method when breaking down overall compliance rates by administrative area (Table 3), landowner class (Table 4), and rule division (Table 5), all of which are statistically inaccurate and misleading. Only after a sample size has been determined adequate for each forest practices rule (by a qualified statistician) can one infer the results to the general population being sampled, and never by dividing the number of sites actually checked for compliance on the ground by the total number of potential applications for that rule (see above).

If ODF's goal is to report statistically robust results on compliance rates for forest practices rules, or other sub-categories (e.g., harvest units), they must first and foremost determine the adequate field sample size(s) required to infer to the general population not field sampled, as opposed to ODF's flawed method of dividing the field-sampled population by the total number of FPAs that were not field sampled (e.g., using thousands of unvisited FPAs as the denominator). The ODF report gives a couple of examples of how this approach should work, although the sample size is too small to statistically infer to the larger population, by simply reporting the total number of sites that were actually field-checked for compliance monitoring. In reporting on culvert sizing to the 50-year flow, on page 15 ODF reports:

[Page 15 excerpt on culvert sizing to the 50-year flow]

However, ODF lists compliance for harvest units at "99%" because once again the field-sampled units were inappropriately divided by the total number of harvest units, including those that were not field checked.

Table 6, listing compliance by rule division, exemplifies how compliance rates become artificially inflated by placing the number of rule applications that were never field checked for compliance in the denominator (thousands in some cases) with noncompliance in the numerator (1-384):

[Table 6. Compliance by rule division.]

A statistically sound forest practice compliance monitoring program would consult with qualified statisticians to determine adequate sample sizes and confidence intervals (e.g., 95%) for specific rules being tested, and for other related categories (e.g., harvest unit, rule division), before attempting to infer the results to the larger landscape. And under no conditions of which I am aware has a statistician ever recommended dividing the actual field-sampled population by the total number of rule applications that were never visited, for the reasons stated above.
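For reference, the sample-size determination such a statistician would make is a standard calculation. A minimal sketch follows, assuming the usual normal-approximation formula for a proportion with a finite population correction; the +/- 5% margin and 95% confidence mirror the Washington targets cited above, while the conservative p = 0.5 variance assumption and the 25,600-application population are my illustrative inputs, not figures from either program.

    import math

    def sample_size(margin: float, population: int, p: float = 0.5,
                    z: float = 1.96) -> int:
        """Sites needed to estimate a compliance proportion to +/- `margin`
        at ~95% confidence (z = 1.96), with finite population correction.
        p = 0.5 is the conservative (largest-variance) assumption."""
        n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
        n = n0 / (1 + (n0 - 1) / population)        # finite population correction
        return math.ceil(n)

    # Illustrative only: +/- 5% margin against a population the size of the
    # ~25,600 rule applications cited in Table 3 of the ODF report.
    print(sample_size(0.05, 25_600))   # ~379 randomly selected, field-checked sites

Under these assumptions, roughly 379 randomly selected, field-checked sites would support a statewide estimate to +/- 5%; no arrangement of the thousands of unvisited applications in a denominator can substitute for that random field sample.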