"Our AI is 99% accurate."

Every time you see this in a trading service's marketing, the same thing is happening behind it: someone has either changed the definition of "accurate" until it means nothing, or they're lying outright. Both are common.

This post is about why a real model — even a great one — can't honestly claim 99% accuracy on trading signals.

The math problem

A trading signal is profitable when, in expectation, the wins outweigh the losses. The formula is:

Expectancy = (Win Rate × Average Winner) − (Loss Rate × Average Loser)

A signal with a 99% win rate at 1:1 R:R has an expectancy of (0.99 × 1R) − (0.01 × 1R) = 0.98R per trade, which is fantastic. Over 100 trades a year that's 98R of profit; compounded at even 5% risk per trade, it would roughly 100x your account.
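In code, the expectancy formula is a one-liner. A minimal sketch; the function name is mine, not from any service's API:

```python
def expectancy(win_rate: float, avg_win_r: float, avg_loss_r: float = 1.0) -> float:
    """Expected profit per trade in R (multiples of the amount risked)."""
    return win_rate * avg_win_r - (1 - win_rate) * avg_loss_r

# The claimed signal: 99% win rate at 1:1 R:R
print(round(expectancy(0.99, 1.0), 2))  # 0.98

# An elite real-world edge: 60% win rate at 1:2 R:R
print(round(expectancy(0.60, 2.0), 2))  # 0.8
```

Note that even the "merely" 60%-accurate signal keeps most of the claimed signal's per-trade expectancy; the gap between the two is marketing, not economics.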

This level of edge does not exist in liquid markets. If it did, it would be arbitraged out within hours. Hedge funds spending hundreds of millions on infrastructure can't reliably hit 60% win rate at 1:2 R:R. A retail signal service is not going to beat them.

So when someone claims 99%, one of three things is true:

Option 1: they're playing definition games. They're calling a trade "accurate" if it ever ticks 0.1% in their direction before reversing. By that definition, almost any signal in any volatile market is "accurate" at some point.

Option 2: they're cherry-picking the timeframe. "99% accurate over the last week" — sure, in a strong trend, almost any LONG signal works. The 99% number doesn't survive a regime change.

Option 3: they're outright lying. They've never tracked their actual results. They invented the number.

In all three cases, the claim tells you the service is selling marketing, not edge.

Why honest models cap their confidence

The TradeVelocity engine caps confidence at 90% by design. Every signal — even the highest-grade A+ — is labeled with confidence ≤ 90%, never higher.

Why?

Because no model is ever truly 100% sure. A model that claims 99% certainty has either underfit (it's missing something important) or overfit (it's memorized the training data and will fail on new market conditions). Either way, the high-confidence number is a sign of poor calibration, not strong edge.

A well-calibrated model does the opposite: it acknowledges uncertainty. When our engine says 90% confidence on an A+ signal, we mean: in the long run, about 9 out of 10 of these trades will work. That implies 1 out of 10 will fail — and that's not a bug, it's the honest expectation.
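Calibration in this sense is checkable from a signal log: group trades by stated confidence and compare it to the realized win rate. A minimal sketch, assuming you have (stated confidence, won/lost) pairs; the helper name and the toy log are mine:

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: iterable of (stated_confidence, won) pairs.
    Returns realized win rate per confidence bucket. For a
    well-calibrated model, each bucket's realized rate should
    be close to its stated confidence."""
    buckets = defaultdict(list)
    for conf, won in predictions:
        buckets[round(conf, 1)].append(won)
    return {conf: sum(outcomes) / len(outcomes)
            for conf, outcomes in sorted(buckets.items())}

# Hypothetical log: ten 90%-confidence signals, nine of which won
log = [(0.9, True)] * 9 + [(0.9, False)]
print(calibration_table(log))  # {0.9: 0.9}
```

A model claiming 99% that realizes 70% would show up immediately in a table like this, which is exactly why the services making that claim never publish one.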

The losing trades are not anomalies. They are the price of having a real edge.

The marketing lie costs traders money

The reason this matters isn't pedantic — it's that "99% accurate" claims directly cause new traders to size up their positions beyond what they can afford to lose.

The thinking is: "if it's 99% accurate, I can risk 10% per trade — I'll only lose 1 in 100." So they put 10% of their account on a single signal. The 1-in-100 loss happens on trade 8. Their account is down 10%. They double their size to "make it back." The next loss costs 20% of what's left, leaving them at a 28% drawdown. They quit.

The advertised 99% caused the over-sizing. The over-sizing caused the blow-up. The signal service blames the trader for "not following our risk management" — but their marketing literally caused the risk-management failure.

A trader who knew the real win rate (say, 60%) would have sized at 1-2% per trade. Same losing streak, much smaller drawdown, account survives, edge has time to play out.
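The gap in outcomes is pure compounding arithmetic. A sketch, with hypothetical risk sequences; each entry is the fraction of current equity risked on a trade that then loses:

```python
def drawdown(risks):
    """Fraction of the account lost after a run of full losses,
    where risks[i] is the fraction of current equity risked on
    the i-th losing trade."""
    equity = 1.0
    for r in risks:
        equity *= 1 - r
    return 1 - equity

# Over-sized path: 10% risk, then doubled to 20% after the first loss
print(round(drawdown([0.10, 0.20]), 2))  # 0.28

# Same two losses at a steady 2% risk per trade
print(round(drawdown([0.02, 0.02]), 2))  # 0.04
```

Same signals, same losing streak: one account needs a 39% gain just to get back to even, the other needs about 4%.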

How to read confidence numbers correctly

A well-published confidence number tells you two things:

  1. Roughly how often the signal works in the long run.
  2. How aggressively to size your position.

A 90% confidence signal at 1:2 R:R is structurally better than a 70% signal at 1:1 R:R (1.7R versus 0.4R expectancy per trade), but both are profitable. You shouldn't size them identically.

Conversely, a 99% claim with no R:R disclosure tells you nothing useful — and is probably wrong about the 99% anyway.
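The post doesn't prescribe a sizing rule, but one standard formula for turning a win rate plus R:R into a position size is the Kelly criterion. A sketch for illustration only; full Kelly is widely considered far too aggressive for trading, and practitioners who use it at all typically trade a small fraction of it:

```python
def kelly_fraction(win_rate: float, rr: float) -> float:
    """Kelly criterion for a trade that wins `rr` R against a 1R loss:
    f* = p - (1 - p) / rr. This is the full-Kelly fraction; risking
    a small fraction of it (e.g. 1/10th) is the common practice."""
    return win_rate - (1 - win_rate) / rr

print(round(kelly_fraction(0.90, 2.0), 2))  # 0.85
print(round(kelly_fraction(0.70, 1.0), 2))  # 0.4
```

The point is not the exact numbers but the shape: the two signals above imply very different sizes, so any service quoting a confidence number without an R:R is withholding half the sizing input.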

What to do when you see "99% accurate"

Three steps:

  1. Ask for the trade history. If they can't show every trade, the 99% is a marketing number.
  2. Ask for the R:R. A "99% accurate" signal with 0.2R winners stays profitable only while losses are capped: at a 99% win rate, a single uncapped loser above ~20R per hundred trades erases the edge — and the services that "never lose" are exactly the ones holding losers without a stop. High accuracy with tiny winners and huge losers is the most common scam pattern.
  3. Ask what their worst week was. If their answer is "we don't have losing weeks," they're lying. If they have a real number, they're at least tracking their results honestly.
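The R:R check in step 2 is quick arithmetic with the expectancy formula from the top of the post. The loser sizes below are hypothetical, chosen to show the tipping point:

```python
def expectancy(win_rate, avg_win_r, avg_loss_r):
    """Expected profit per trade in R."""
    return win_rate * avg_win_r - (1 - win_rate) * avg_loss_r

# "99% accurate" with 0.2R winners and a real 1R stop: still positive
print(round(expectancy(0.99, 0.2, 1.0), 3))   # 0.188

# Same win rate, but the rare loser runs uncapped to 25R: negative
print(round(expectancy(0.99, 0.2, 25.0), 3))  # -0.052
```

This is why the trade history and the R:R have to be read together: accuracy alone says nothing about the size of the losses that the 1% contains.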

The legitimate services say: "Past performance doesn't guarantee future results. Trading involves risk. We have an honest 60-70% win rate at 1:2 R:R and that produces positive expectancy over many trades — but individual trades can absolutely lose."

That's not exciting marketing. It's also the truth. The exciting marketing is the lie.

Where TradeVelocity stands

Our confidence is capped at 90%. Always. Even on A+ grade signals.

Our published performance page shows every closed trade with a 24-hour delay — including losses. The cumulative R curve has visible drawdowns. The win rate is transparently calculated.

We don't claim 99% accuracy because no honest model can. The fact that we cap at 90% is the reason you should trust the rest of our numbers.