Practical answers about signals, scoring, TP/SL, expiry, alerts, and scanner behavior.
Signal ratings are escalation tiers gated by minimum-confidence and minimum-authenticity thresholds. Higher tiers require stronger confluence, so STRONG BUY is the most selective.
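A minimal sketch of such a tier ladder. The tier names come from the answer above, but the numeric thresholds here are purely illustrative assumptions, not the app's actual values:

```python
def signal_tier(confidence: float, authenticity: float) -> str:
    """Map 0-1 scores to an escalation tier; each tier needs BOTH floors met."""
    tiers = [
        ("STRONG BUY", 0.85, 0.80),  # most selective: high confidence AND authenticity
        ("BUY",        0.70, 0.60),
        ("WATCH",      0.55, 0.40),
    ]
    for name, min_conf, min_auth in tiers:
        if confidence >= min_conf and authenticity >= min_auth:
            return name
    return "NO SIGNAL"
```

Because the list is checked top-down, a setup that narrowly misses the STRONG BUY floors still falls through to the next tier it qualifies for.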
AI Confidence is the model’s conviction that the setup is attractive given recent behavior. It’s not a guarantee—use it with authenticity and risk management.
It’s a quality filter intended to reduce thin-liquidity spikes and noisy moves. Higher authenticity generally means better “real participation” characteristics.
Your code applies a time-decay to confidence as the signal ages, so older signals naturally lose edge and urgency.
Take-profits are auto-scaled off the entry price and a capped expected-jump factor, then scaled by multipliers of 1.0×, 1.5×, and 2.0× for the successive targets. For SHORT setups, the math flips direction.
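The arithmetic above can be sketched directly. The multipliers (1.0, 1.5, 2.0) and the direction flip come from the answer; the cap value and function shape are assumptions for illustration:

```python
def take_profits(entry: float, expected_jump_pct: float, side: str,
                 cap_pct: float = 15.0) -> list[float]:
    """TP1-TP3 from entry and a capped expected jump, per the multipliers above."""
    jump = min(expected_jump_pct, cap_pct) / 100.0  # cap keeps targets realistic
    sign = 1.0 if side == "LONG" else -1.0          # SHORT flips direction
    return [round(entry * (1 + sign * jump * m), 4) for m in (1.0, 1.5, 2.0)]
```

For a LONG at 100 with a 10% expected jump this yields 110 / 115 / 120; the same setup as a SHORT yields 90 / 85 / 80.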
The app infers instrument class (e.g., EQUITY vs MICROCAP vs LEVERAGED_LIKE) using price, expected jump, and volatility. That classification caps the expected move and can disable TP2/TP3 for lower-confidence leveraged-like names.
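A sketch of that classification step. The class names (EQUITY, MICROCAP, LEVERAGED_LIKE) come from the answer above; every threshold here is a hypothetical placeholder, not the app's real logic:

```python
def infer_class(price: float, expected_jump_pct: float, volatility_pct: float) -> str:
    """Heuristic instrument-class inference; thresholds are illustrative assumptions."""
    if price < 5.0:
        return "MICROCAP"
    if expected_jump_pct > 8.0 or volatility_pct > 6.0:
        return "LEVERAGED_LIKE"
    return "EQUITY"

def enabled_targets(klass: str, confidence: float) -> dict:
    """Lower-confidence leveraged-like names lose TP2/TP3, as described above."""
    risky = klass == "LEVERAGED_LIKE" and confidence < 0.75
    return {"TP1": True, "TP2": not risky, "TP3": not risky}
```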
It’s a suggested scale-out plan that depends on whether TP2/TP3 are enabled.
Expected Jump is the predicted move size (percentage) used to derive targets. It’s bounded for realism depending on the inferred instrument class.
“Same day” signals are treated as intraday opportunities. The dashboard calculates time remaining until market close (UTC) and marks them expired after close.
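The countdown can be sketched as a simple UTC clock comparison. The close hour used here (20:00 UTC, roughly a US cash-session close) is an assumption for illustration:

```python
from datetime import datetime, timezone

def intraday_hours_left(now: datetime, close_hour_utc: int = 20) -> float:
    """Hours until market close in UTC; 0.0 means the same-day signal is expired."""
    close = now.replace(hour=close_hour_utc, minute=0, second=0, microsecond=0)
    remaining = (close - now).total_seconds() / 3600.0
    return max(0.0, remaining)
```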
If the signal isn’t tagged “Same day,” it’s treated as a multi-day setup and won’t show intraday expiry countdown behavior.
Alerts are sent only if TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID are configured. If either is blank, alert sending is silently skipped.
There’s an alert cooldown window to prevent spam. A symbol can be blocked from alerting again until the cooldown has passed.
Your scanner fetches daily bars per symbol using a Polygon API key from environment variables. If the key is missing/empty, you’ll typically see empty results or failures upstream.
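A minimal sketch of that fetch, showing the empty-result behavior when the key is missing. The URL follows the shape of Polygon's daily aggregates endpoint, and the env-var name POLYGON_API_KEY is an assumption:

```python
import json
import os
import urllib.request

def fetch_daily_bars(symbol: str, start: str, end: str) -> list[dict]:
    """Fetch daily aggregate bars for one symbol; dates are YYYY-MM-DD strings."""
    api_key = os.environ.get("POLYGON_API_KEY", "")
    if not api_key:
        return []  # missing/empty key => empty results, as described above
    url = (f"https://api.polygon.io/v2/aggs/ticker/{symbol}/range/1/day/"
           f"{start}/{end}?adjusted=true&apiKey={api_key}")
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp).get("results", [])
```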
Most common reasons:
Repeated signals are explicitly blocked via DB-persisted de-dup rules (a lookback window in days plus a minimum-hours gap), and a symbol only re-arms once price has moved meaningfully.
Progress is computed from processed symbols / total universe. ETA is estimated from symbols/sec and remaining symbols. Speed depends on API latency, your worker count, and symbol universe size.
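The two formulas in that answer are simple enough to show directly; this sketch just makes the division-by-zero guards explicit:

```python
def progress_and_eta(processed: int, total: int,
                     elapsed_sec: float) -> tuple[float, float]:
    """Return (progress fraction, ETA in seconds) from throughput so far."""
    progress = processed / total if total else 0.0
    rate = processed / elapsed_sec if elapsed_sec > 0 else 0.0  # symbols/sec
    eta = (total - processed) / rate if rate > 0 else float("inf")
    return progress, eta
```

Since the rate is measured over the whole run so far, the ETA naturally absorbs API latency and worker-count effects instead of needing them as inputs.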
Streamlit re-runs the script, so your scanner uses a shared dict + lock to update progress safely across threads without UI race conditions.
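The shared-dict-plus-lock pattern looks roughly like this (a generic threading sketch; the field names are placeholders, and the Streamlit side would simply call `snapshot()` on each rerun):

```python
import threading

progress = {"done": 0, "total": 0}
_lock = threading.Lock()

def mark_done(n: int = 1) -> None:
    """Worker threads update shared progress under the lock."""
    with _lock:
        progress["done"] += n

def snapshot() -> dict:
    """The UI reads a copy, so a rerun never sees a half-updated dict."""
    with _lock:
        return dict(progress)
```

Copying inside `snapshot()` matters: handing the UI the live dict would reintroduce exactly the race conditions the lock is there to prevent.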
It ranks symbols by the average of AI confidence multiplied by authenticity (scaled), and also displays sample counts and win rate over a recent lookback window.
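A sketch of that ranking score. The confidence × authenticity average comes from the answer above; the 0–100 scaling factor and input shape are assumptions:

```python
def rank_symbols(history: dict[str, list[tuple[float, float]]]) -> list[tuple[str, float]]:
    """Score each symbol as mean(confidence * authenticity), scaled to 0-100."""
    scores = {}
    for sym, rows in history.items():  # rows: (confidence, authenticity) pairs
        if rows:
            scores[sym] = sum(c * a for c, a in rows) / len(rows) * 100.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```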
That message typically appears when background threads interact with Streamlit runtime context. It’s often harmless, but it can indicate code running outside the Streamlit execution context.