How to use
- Enter TP, FP, TN, and FN from one binary classification result.
- Optionally rename the positive and negative labels to match your test, model, or screening workflow.
- Read sensitivity beside specificity so misses and false alarms stay visible at the same time.
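The two rates read from those inputs can be sketched in a few lines. This is a minimal illustration of the standard formulas, not the page's own code; the function names are made up for clarity.

```python
# Sensitivity and specificity from one confusion matrix's counts.
# Names are illustrative, not the calculator's internals.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of actual positives that are found."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of actual negatives kept free of false alarms."""
    return tn / (tn + fp)

# Example counts: 80 hits, 20 misses, 90 correct rejections, 10 false alarms.
print(sensitivity(tp=80, fn=20))  # 0.8
print(specificity(tn=90, fp=10))  # 0.9
```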
Wave 7 classification metrics
Detection rate and rejection rate at one threshold
This page stays focused on one operating point. It is useful when teams already chose a cutoff and need a clear readout of misses, false alarms, and class balance.
Inputs
Run a calculation to review sensitivity and specificity from one binary result set.
Sensitivity and specificity answer different failure modes
Sensitivity asks how often actual positives are correctly identified. Specificity asks how often actual negatives are protected from false alarms. A threshold can look operationally strong on one side while still failing badly on the other, so these rates should be reported together.
When to stay on this page
Stay here when the threshold is already chosen and the next task is to explain misses, false alarms, and base rate in plain language. This is often the right page for screening policies, diagnostic summaries, and one-cutoff model reviews.
When to move to predictive values
If the next question is what a positive or negative call means to the end user, move to NPV & PPV. Predictive values add prevalence and are often easier to explain in decision notes.
Frequently asked questions
When should I use this page instead of ROC AUC?
Use this page when one threshold is already fixed and you want to explain sensitivity and specificity at that operating point. ROC AUC is for score ranking across many possible thresholds.
How is specificity related to false positive rate?
They move in opposite directions. False positive rate equals one minus specificity, so higher specificity means fewer false alarms among actual negatives.
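The identity holds exactly because both rates share the same denominator, the count of actual negatives. A quick check, with assumed counts:

```python
# FPR and specificity are two views of the same column of the confusion
# matrix (actual negatives), so FPR == 1 - specificity by construction.
tn, fp = 90, 10
specificity = tn / (tn + fp)  # 0.9
fpr = fp / (fp + tn)          # 0.1
assert abs(fpr - (1 - specificity)) < 1e-12
```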
Why should prevalence still be shown?
Sensitivity and specificity do not depend on prevalence directly, but the base rate still matters when you explain how hard the task is and when you connect these rates to predictive values.
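That connection to predictive values is Bayes' rule: the same sensitivity and specificity can yield very different positive predictive values at different base rates. The numbers below are illustrative assumptions.

```python
# PPV from sensitivity, specificity, and prevalence (Bayes' rule).
# With sens = spec = 0.9, the meaning of a positive call changes
# dramatically between a balanced task and a rare-positive task.
sens, spec = 0.9, 0.9
for prevalence in (0.5, 0.01):
    ppv = (sens * prevalence) / (
        sens * prevalence + (1 - spec) * (1 - prevalence)
    )
    print(prevalence, round(ppv, 3))  # 0.5 → 0.9, then 0.01 → 0.083
```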
Does the share URL include my counts or labels?
No. The share URL stores only lightweight settings such as decimal places. Counts and custom labels stay in your browser.