AltVector — CEO Q&A Brief

Boardroom-ready answers for NBFC leadership conversations — designed for clarity, governance, and measured outcomes.

How to use: Keep responses to ~20–40 seconds. Land the principle, then offer the pilot. Avoid jargon; emphasise controls (caps, stop‑loss, audit trail).

Related pages: Credibility & Governance  •  Partner Handoff  •  Value Ledger

Questions leaders actually ask

Q1. What exactly are you selling—models, consulting, or a product?
We sell a decision layer: a small set of portfolio rules that turn your existing data into actionable lending policies — approval, pricing, and guardrails — with measurable outcomes. In a pilot we deliver it as a contained package (policy logic + cohort definitions + measurement). After proof, it becomes an API/service that plugs into your stack.
Q2. How is this different from our existing risk scorecards / ML models?
Scorecards and ML models predict risk. We focus on what happens when you change a decision: which actions reliably improve outcomes, for which cohorts, and under what limits. The output is not just a score — it is a policy you can execute, plus a measured trade‑off curve and a safe operating point to deploy.
Q3. What’s the fastest pilot that proves value?
The fastest proof is usually False‑Negative Revival: identify good borrowers hiding inside rejects, approve them under tight caps, and track performance versus a matched baseline. It is operationally simple, shows a clean ‘lift story’, and translates directly into growth without loosening standards.
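For the technical diligence that usually follows this answer, here is a minimal sketch of how a capped revival cohort could be selected. The field names, score floor, and cap are illustrative placeholders, not a committed specification:

```python
import pandas as pd

# Illustrative thresholds; real values are agreed jointly before the pilot.
REVIVAL_SCORE_FLOOR = 0.72   # hypothetical minimum score to qualify a reject
COHORT_CAP = 500             # hypothetical hard cap on revived approvals

def select_revival_cohort(rejects: pd.DataFrame) -> pd.DataFrame:
    """Pick the strongest rejected applicants, never exceeding the cap."""
    eligible = rejects[rejects["score"] >= REVIVAL_SCORE_FLOOR]
    # Best-scored applicants first; the cap bounds the downside by design.
    return eligible.nlargest(COHORT_CAP, "score")
```

Performance of this cohort is then tracked against a matched baseline of comparable approved borrowers.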
Q4. What success metrics will you commit to in a pilot?
We commit to jointly defined primary metrics — for example: approval lift at a fixed bad‑rate ceiling, collection rate improvement, or loss reduction — plus secondary metrics like ROA/ROE proxies, early delinquency, and vintage stability. We report the full trade‑off curve and recommend the safest operating point, rather than a single headline number.
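To make “approval lift at a fixed bad‑rate ceiling” concrete, the sketch below sweeps approval thresholds, records the resulting trade‑off curve, and picks the safest operating point. The 3% ceiling and all names are assumptions for illustration only:

```python
import numpy as np

def tradeoff_curve(scores: np.ndarray, is_bad: np.ndarray, thresholds):
    """For each approval threshold, record (threshold, approval_rate, bad_rate)."""
    points = []
    for t in thresholds:
        approved = scores >= t                      # who this policy approves
        approval_rate = float(approved.mean())
        bad_rate = float(is_bad[approved].mean()) if approved.any() else 0.0
        points.append((t, approval_rate, bad_rate))
    return points

def safest_operating_point(points, bad_rate_ceiling: float = 0.03):
    """Highest approval rate whose bad rate stays under the agreed ceiling."""
    feasible = [p for p in points if p[2] <= bad_rate_ceiling]
    return max(feasible, key=lambda p: p[1]) if feasible else None
```

Reporting the whole curve rather than one point is what lets the board choose its own risk appetite.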
Q5. How do you control downside risk while experimenting?
We run with guardrails by design: cohort caps, channel/branch whitelists, exposure limits, early‑bucket monitors, and stop‑loss triggers. If any metric crosses an agreed threshold, the pilot pauses automatically or reverts to baseline. This makes experimentation controlled, reversible, and audit‑friendly.
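A minimal sketch of what such guardrails look like in configuration form, assuming placeholder limits that would be agreed jointly:

```python
# Every limit below is an illustrative assumption, not a default we ship.
GUARDRAILS = {
    "cohort_cap": 500,                      # max pilot approvals
    "exposure_limit_inr": 25_000_000,       # total pilot exposure ceiling
    "branch_whitelist": ["BR-014", "BR-027"],
    "stop_loss": {
        "early_delinquency_30dpd": 0.05,    # pause above 5% early 30+ DPD
        "bad_rate": 0.03,                   # revert to baseline above 3%
    },
}

def stop_loss_action(metrics: dict) -> str:
    """Map monitored metrics to 'continue', 'pause', or 'revert'."""
    triggers = GUARDRAILS["stop_loss"]
    if metrics["bad_rate"] > triggers["bad_rate"]:
        return "revert"                     # fall back to baseline policy
    if metrics["early_delinquency_30dpd"] > triggers["early_delinquency_30dpd"]:
        return "pause"                      # hold new approvals, review
    return "continue"
```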
Q6. What data do you need, and how invasive is the integration?
For a pilot, a secure extract is usually enough: application and rejection history, key borrower and product fields, pricing, collections outcomes, and timestamps. We start with batch files so your core systems stay untouched. API integration comes after value is proven and governance is agreed.
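To make “secure extract” concrete, a typical column list might look like the sketch below; the exact fields are agreed per institution, and raw PII is not required:

```python
# Illustrative batch-extract schema; field names are placeholders.
EXTRACT_COLUMNS = [
    "application_id",      # unique application reference
    "decision",            # approved / rejected
    "decision_ts",         # decision timestamp
    "product_code",        # product identifier
    "ticket_size",         # requested or sanctioned amount
    "price_apr",           # pricing at decision
    "borrower_segment",    # coarse borrower attributes, not raw PII
    "collections_status",  # latest bucket or resolution outcome
]
```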
Q7. How do you ensure governance, auditability, and regulator comfort?
Everything is versioned: policy logic, cohort definitions, thresholds, and roll‑out configuration. Each decision can carry an audit_id and a human‑readable rationale, so reviews are straightforward. We keep a decision log, a measurement log, and a rollback plan — the same ingredients regulators and internal audit expect.
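For audit teams, here is one illustrative shape of a decision‑log entry carrying an audit_id and rationale; the field names are assumptions, not a fixed schema:

```python
import uuid
from datetime import datetime, timezone

def audit_record(application_id: str, policy_version: str,
                 action: str, rationale: str) -> dict:
    """Build one decision-log entry; this schema is illustrative only."""
    return {
        "audit_id": str(uuid.uuid4()),        # unique handle for reviews
        "application_id": application_id,
        "policy_version": policy_version,     # versioned policy logic
        "action": action,                     # e.g. "approve_capped"
        "rationale": rationale,               # human-readable reason
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
```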
Q8. What happens when the environment changes (drift)?
We watch drift the way a credit team does: performance by cohort, channel, and time, plus leading indicators like early delinquency and acceptance mix. When drift appears, we tighten caps, freeze learning, or revert to a safe policy until stability returns. The system is designed to degrade safely, not silently.
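A minimal sketch of how observed drift could map to a graduated, reversible response, assuming a placeholder tolerance:

```python
def drift_action(baseline_bad_rate: float, recent_bad_rate: float,
                 tolerance: float = 0.01) -> str:
    """Map cohort drift to a graduated response.

    The tolerance is a placeholder; real triggers come from the agreed
    guardrail configuration.
    """
    drift = recent_bad_rate - baseline_bad_rate
    if drift > 2 * tolerance:
        return "revert_to_safe_policy"       # degrade safely, not silently
    if drift > tolerance:
        return "tighten_caps_and_freeze_learning"
    return "continue_monitoring"
```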
Q9. How long to see results, and what’s the typical timeline?
A pragmatic timeline is 6–8 weeks for a controlled pilot: define outcomes and baselines, prepare data extract, run analysis, configure guardrails, and start a capped roll‑out. Early indicators show within weeks; vintage confirmation follows your normal reporting rhythm.
Q10. What is the commercial model—fees, success share, or both?
We can structure it as a fixed pilot fee, an implementation fee, a success share, or a hybrid. In practice, leadership prefers a model that aligns incentives and is easy to audit: a modest fixed component plus a clearly defined upside share tied to verified portfolio deltas.
Q11. What if the pilot doesn’t show lift?
Then we stop and we do not scale. The pilot is intentionally small and reversible, so the cost of being wrong is bounded. A no‑lift outcome is still useful because it prevents an uncontrolled roll‑out and clarifies where the constraint really is (data, policy, channel mix, or operations).
Q12. Who owns the IP and what do you retain?
You retain your data and outcomes. We deliver your policy artifacts — cohort logic, thresholds, guardrails, and measurement spec — and we retain only reusable tooling and non‑portfolio IP. If you want, the policy artifacts for your book can be contractually treated as your deliverable.

A simple close

Suggested close: “Let’s run one controlled pilot — measured, capped, and auditable — then scale only if the lift is real.”

Notes

This page is written for executive discussions and intentionally stays practical. All examples are illustrative; final metrics, thresholds, and commercial terms should be agreed jointly for each institution.