
Statistics / Classification

MCC Calculator


Enter TP, FP, TN, and FN to calculate Matthews correlation coefficient, balanced accuracy, and the supporting rates at one binary classification threshold.

Use this page when plain accuracy is not enough, especially with class imbalance. If you need threshold-by-threshold behavior, continue to the ROC AUC page.

How to use

  1. Enter TP, FP, TN, and FN from one binary classification result.
  2. Optionally rename the positive and negative labels to match your workflow.
  3. Read MCC beside balanced accuracy so you can see both overall agreement and class-balance effects.
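The quantities in the steps above can also be computed by hand. A minimal Python sketch, assuming the standard definitions of MCC and balanced accuracy (the function name is illustrative, not part of this tool):

```python
import math

def mcc_and_balanced_accuracy(tp, fp, tn, fn):
    """Compute MCC and balanced accuracy from one confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else float("nan")
    recall = tp / (tp + fn)           # sensitivity, true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    balanced_accuracy = (recall + specificity) / 2
    return mcc, balanced_accuracy
```

A perfect classifier (no false positives or false negatives) yields 1.0 for both metrics.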

Wave 6 classification metrics

One-number summary for all four confusion-matrix cells

MCC reacts to TP, FP, TN, and FN at the same time. It is often a cleaner headline than plain accuracy when the positive class is rare or when false alarms and misses both matter.


MCC is a correlation-style metric for the confusion matrix

MCC answers a different question from plain accuracy. Accuracy asks how many cases were right overall. MCC asks whether predictions and true labels move together across all four confusion-matrix cells. That makes it more resilient when one class is much rarer than the other.
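"Correlation-style" can be taken literally: MCC equals the Pearson correlation between the 0/1 prediction vector and the 0/1 label vector. A small sketch with made-up example vectors (TP=2, FP=1, TN=2, FN=1) illustrating the equivalence:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical labels and predictions as 0/1 vectors.
truth = [1, 1, 1, 0, 0, 0]
pred  = [1, 1, 0, 1, 0, 0]

# Same result from the confusion-matrix counts: TP=2, FP=1, TN=2, FN=1.
tp, fp, tn, fn = 2, 1, 2, 1
mcc = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
# pearson(truth, pred) and mcc are both 1/3 here.
```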

When to prefer MCC

Prefer MCC when a single summary number must reflect both false alarms and misses, or when one class is small enough that plain accuracy can look comforting while the minority class is handled poorly.
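To see how plain accuracy can look comforting while MCC does not, consider a made-up imbalanced result with 10 positives among 1000 cases, where the classifier catches only one positive:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient; NaN when a margin is zero."""
    d = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / d if d else float("nan")

# Hypothetical imbalanced result: 10 actual positives in 1000 cases.
tp, fp, tn, fn = 1, 9, 981, 9
accuracy = (tp + tn) / (tp + fp + tn + fn)   # 0.982 -- looks comforting
score = mcc(tp, fp, tn, fn)                  # about 0.09 -- weak real signal
```

Accuracy reports 98.2% while MCC is near zero, because the minority class is handled poorly.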

What to do if MCC is undefined

If one row or column total collapses to zero, MCC is undefined. In that case, keep the raw counts visible and report recall, specificity, and balanced accuracy directly.
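The fallback above can be sketched as a guard on the four margin totals; the `safe_mcc` helper name is illustrative:

```python
import math

def safe_mcc(tp, fp, tn, fn):
    """Return MCC, or None when a row or column total is zero (undefined)."""
    margins = [tp + fp, tp + fn, tn + fp, tn + fn]
    if 0 in margins:
        return None
    return (tp * tn - fp * fn) / math.sqrt(math.prod(margins))

# No predicted positives: MCC is undefined, so report the rates directly.
tp, fp, tn, fn = 0, 0, 95, 5
if safe_mcc(tp, fp, tn, fn) is None:
    recall = tp / (tp + fn)                         # 0.0
    specificity = tn / (tn + fp)                    # 1.0
    balanced_accuracy = (recall + specificity) / 2  # 0.5
```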

Frequently asked questions

When is MCC more useful than plain accuracy?

MCC is usually more useful when classes are imbalanced or when you want one summary metric that responds to all four cells of the confusion matrix. Plain accuracy can look high even when minority-class behavior is poor.

What range does MCC use?

MCC ranges from -1 to 1. A value near 1 means strong agreement with the true labels, a value near 0 means the predictions carry little signal beyond chance, and a negative value means the predictions are anti-correlated with the true labels.

Why can MCC be undefined?

MCC becomes undefined when one margin of the contingency table collapses to zero, such as no actual positives or no predicted negatives. In that case, keep the raw counts visible and use recall or specificity directly.

Does the share URL include my counts or labels?

No. The share URL stores only lightweight settings such as decimal places. Counts and custom labels stay in your browser.
