
AI in Finance & Fraud Detection

March 30, 2026 · Wasil Zafar · 32 min read

Real-time fraud scoring in under 100ms, SHAP-powered credit decisions compliant with ECOA, and trading signals that navigate volatile markets — learn the ML systems that power modern financial services and the regulatory frameworks that govern them.

Table of Contents

  1. Introduction: AI in Financial Services
  2. Real-Time Fraud Detection
  3. Credit Scoring & Explainability
  4. Algorithmic Trading Signals
  5. Financial AI Applications
  6. Model Risk Management
  7. Regulatory Compliance
  8. Exercises

Introduction: AI in Financial Services

Series Context: This is Part 15 of 24. Parts 1–14 covered foundations through healthcare AI. Finance is arguably ML's most mature domain — banks have deployed statistical models for credit decisioning since the 1950s and machine learning since the 1990s. This is where you encounter ML at the highest scale and regulatory scrutiny.


Financial institutions process trillions of dollars of transactions daily. Machine learning operates at every layer: deciding whether to approve a $200 grocery purchase in 50 milliseconds, determining a mortgage applicant's creditworthiness in seconds, and executing equity trades faster than a human can blink. Finance is where the rubber meets the road for real-time, high-stakes ML at scale.

The Numbers: Global payment card fraud losses exceed $32 billion annually. ML-based fraud detection systems prevent an estimated 60–70% of these losses — preventing $20B+ in fraud per year. A single percentage point improvement in fraud detection translates to hundreds of millions of dollars in prevented losses at major issuers.

Why Finance Leads in ML Adoption

Financial services adopted ML earlier and more deeply than most industries, for several reasons:

  • Data abundance: Banks have decades of labeled transaction data — fraudulent and legitimate — at massive scale. This is ML's most valuable input.
  • Clear objective functions: Maximize expected profit, minimize fraud losses, optimize risk-adjusted returns. Financial goals translate naturally into loss functions.
  • High ROI: Even a 0.1% improvement in credit model performance translates to millions of dollars at scale. The business case is always clear.
  • Regulatory pressure: Paradoxically, heavy regulation (SR 11-7, ECOA, Basel III) has forced financial institutions to build mature model governance practices that benefit ML adoption quality.
  • Competitive moats: Better models mean lower fraud losses, better risk pricing, and higher profitability. ML is a competitive differentiator, not just a cost center.

The Explainability Imperative

Regulatory Reality

You Must Be Able to Explain Every Decision

ECOA (Equal Credit Opportunity Act) and Regulation B require that when a credit application is denied, the lender must provide specific reasons in writing. "The neural network said no" is not a legally acceptable explanation.

This is why SHAP (SHapley Additive exPlanations) has become the standard explainability tool in financial ML. It provides theoretically grounded, feature-level attributions for any model's prediction — enabling the specific, human-readable adverse action notices that regulators require.

Beyond credit, similar requirements apply in insurance (GDPR Article 22 in the EU), housing (Fair Housing Act), and employment screening (EEOC guidelines).

Real-Time Fraud Detection

Payment fraud detection is one of ML's most demanding real-world applications. Every card transaction must be scored in under 100 milliseconds — including feature engineering, model inference, and risk decisioning — before the merchant's terminal times out. At Visa's scale, that's 65,000 transactions per second.

The Signal in Fraud Detection

Raw transaction data (amount, merchant, timestamp) has limited predictive power. The real signal comes from behavioral patterns and velocity features — comparing the current transaction against a user's historical behavior:

  • How does this transaction amount compare to the user's average?
  • How many transactions has the user made in the last hour?
  • Is the merchant country different from the user's home country?
  • Does the device fingerprint match the user's known devices?
  • Has this merchant recently had an elevated fraud rate?

Fraud Detection Pipeline (Code Example 1)

The following pipeline implements an ensemble of gradient boosting and isolation forest — an approach in the spirit of commercial systems like Stripe Radar and Mastercard Decision Intelligence.

import time
import numpy as np
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler

# Real-time fraud detection: inspired by Stripe Radar and Mastercard Decision Intelligence

class FraudDetectionPipeline:
    def __init__(self):
        self.scaler = StandardScaler()
        self.model = GradientBoostingClassifier(
            n_estimators=300, max_depth=6, learning_rate=0.05,
            subsample=0.8, random_state=42
        )
        self.isolation_forest = IsolationForest(contamination=0.01, random_state=42)

    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        """Fit the scaler, classifier, and anomaly detector on historical labeled transactions."""
        X_scaled = self.scaler.fit_transform(X)
        self.model.fit(X_scaled, y)
        self.isolation_forest.fit(X_scaled)

    def engineer_features(self, transaction: dict) -> np.ndarray:
        """Create behavioral and velocity features — the real signal in fraud detection."""
        return np.array([
            transaction['amount'],
            transaction['hour_of_day'],
            transaction['is_weekend'],
            transaction['merchant_category'],
            transaction['user_txn_count_1h'],      # velocity: txns in last hour
            transaction['user_amount_sum_24h'],    # velocity: spend in 24h
            transaction['card_country_mismatch'],  # geo anomaly
            transaction['device_fingerprint_match'],
            transaction['user_avg_txn_amount'] / max(transaction['amount'], 1),  # ratio feature
            transaction['merchant_fraud_rate_30d'] # merchant risk score
        ])

    def predict(self, transaction: dict) -> dict:
        start = time.perf_counter()
        features = self.engineer_features(transaction)
        features_scaled = self.scaler.transform(features.reshape(1, -1))

        fraud_proba = self.model.predict_proba(features_scaled)[0, 1]
        anomaly_score = self.isolation_forest.decision_function(features_scaled)[0]

        # Combined scoring: gradient boosting + anomaly detection ensemble.
        # decision_function returns roughly [-0.5, 0.5], with lower = more anomalous,
        # so map it to [0, 1] where 1 = most anomalous.
        anomaly_component = float(np.clip(0.5 - anomaly_score, 0.0, 1.0))
        combined_score = 0.7 * fraud_proba + 0.3 * anomaly_component

        return {
            "fraud_probability": round(float(fraud_proba), 4),
            "anomaly_score": round(float(anomaly_score), 4),
            "risk_level": "HIGH" if combined_score > 0.7 else "MEDIUM" if combined_score > 0.3 else "LOW",
            "action": "BLOCK" if combined_score > 0.85 else "REVIEW" if combined_score > 0.5 else "APPROVE",
            "latency_ms": round((time.perf_counter() - start) * 1000, 1)  # target: < 100ms
        }

Velocity Features: The Most Powerful Signal

Feature Engineering Deep Dive

Why Velocity Features Win

Fraudsters typically operate in bursts — they test a stolen card with a small transaction, then rapidly make large purchases across multiple merchants before the card is blocked. Velocity features capture this pattern:

  • Count velocity: Number of transactions in 1 minute / 5 minutes / 1 hour / 24 hours. A user making 15 transactions in 5 minutes is anomalous.
  • Amount velocity: Total spend in 1 hour / 24 hours / 7 days. Sudden high spend is a strong signal.
  • Merchant velocity: Number of distinct merchants in last hour. Fraudsters often try multiple merchants quickly.
  • Geographic velocity: Distance between consecutive transaction locations. 100 miles in 10 minutes is physically impossible.

In practice, velocity features account for 40–60% of total feature importance in trained fraud models. Yet they require real-time feature stores (Redis, Apache Flink) to compute — a significant infrastructure investment.
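The mechanics of a sliding-window velocity store can be sketched in a few lines. This is a simplified in-memory stand-in for the Redis/Flink feature stores mentioned above — the `VelocityTracker` class and its field names are illustrative, not from any production system:

```python
import time
from collections import defaultdict, deque

class VelocityTracker:
    """In-memory sliding-window velocity features (production systems use Redis/Flink)."""

    def __init__(self):
        self.events = defaultdict(deque)  # user_id -> deque of (timestamp, amount)

    def record(self, user_id, amount, ts=None):
        self.events[user_id].append((ts if ts is not None else time.time(), amount))

    def features(self, user_id, window_s=3600, now=None):
        now = now if now is not None else time.time()
        q = self.events[user_id]
        while q and q[0][0] < now - window_s:
            q.popleft()  # evict events that fell outside the window
        return {"txn_count": len(q), "amount_sum": sum(a for _, a in q)}
```

Because events arrive in time order, eviction from the left of the deque is amortized O(1) per transaction — the property that makes count and amount velocities computable within a real-time latency budget.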

Handling Extreme Class Imbalance

Fraud rates are typically 0.1–0.5% of all transactions. Standard accuracy metrics are meaningless — a model predicting "legitimate" for every transaction achieves 99.9% accuracy while catching zero fraud. Key techniques:

  • Precision-Recall AUC: The primary metric for imbalanced classification. Fraud systems optimize for high recall (catch the fraud) at acceptable precision (false positive rate).
  • Class weights: Weight the minority (fraud) class more heavily in the loss function. class_weight='balanced' in sklearn automates this.
  • SMOTE: Synthetic Minority Over-sampling Technique — generate synthetic fraud examples by interpolating in feature space. Useful for training, not evaluation.
  • Threshold calibration: Adjust the decision threshold rather than using 0.5. A threshold of 0.2 catches more fraud but generates more false positives. Business cost analysis determines the optimal threshold.

The False Positive Problem: For every 1,000 transactions, a 1% false positive rate means 10 legitimate transactions are incorrectly flagged. Multiply by 65,000 TPS at Visa scale: 650 legitimate transactions per second being blocked. Each false positive is a frustrated customer, a potential chargeback dispute, and brand damage. False positive rate is just as important as fraud detection rate.
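Threshold calibration against a business cost model can be sketched as a simple grid search. The $500 fraud cost and $2 false-positive cost mirror the figures used in the exercises at the end of this post; the function name and threshold grid are illustrative:

```python
import numpy as np

def optimal_threshold(y_true, fraud_scores, cost_fn=500.0, cost_fp=2.0):
    """Pick the decision threshold that minimizes expected business cost.

    cost_fn: average loss from a missed fraud (false negative).
    cost_fp: average cost of blocking a legitimate transaction (false positive).
    """
    thresholds = np.linspace(0.01, 0.99, 99)
    costs = []
    for t in thresholds:
        flagged = fraud_scores >= t
        missed_fraud = np.sum((y_true == 1) & ~flagged)   # false negatives
        blocked_legit = np.sum((y_true == 0) & flagged)   # false positives
        costs.append(missed_fraud * cost_fn + blocked_legit * cost_fp)
    return thresholds[int(np.argmin(costs))]
```

Because a missed fraud costs ~250x more than a blocked legitimate transaction in this cost model, the optimal threshold lands well below the naive 0.5 — exactly the asymmetry the bullet list above describes.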

Credit Scoring & Explainability

Credit scoring is the canonical ML-in-production problem. FICO scores have existed since 1989. Modern ML-based credit scorecards replace logistic regression with gradient-boosted trees or neural networks — but the regulatory requirements for explainability and fairness remain unchanged.

The Modern Credit Scorecard

Traditional scorecards used additive point systems — each factor contributes a fixed number of points. Modern ML scorecards use gradient-boosted trees (XGBoost, LightGBM) for better discrimination, but must remain explainable for compliance.

Key features in credit models:

  • Payment history (35%): Late payments, delinquencies, bankruptcies.
  • Utilization (30%): Revolving credit used / total available credit.
  • Credit age (15%): Average age of accounts, age of oldest account.
  • Mix (10%): Types of credit (installment, revolving, mortgage).
  • Inquiries (10%): Recent hard credit pulls (each suggests new credit-seeking behavior).
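The traditional additive point system mentioned above can be made concrete with a toy scorecard. The point weights, base score, and field names here are invented for illustration — real scorecards derive their points statistically from historical default data:

```python
# A toy additive point-system scorecard (illustrative weights, not a real FICO model)
BASE_SCORE = 600

def scorecard(applicant: dict) -> int:
    points = BASE_SCORE
    points += -25 * applicant["late_payments_24m"]            # payment history penalty
    points += -int(100 * applicant["revolving_utilization"])  # utilization penalty
    points += min(applicant["credit_age_years"] * 5, 50)      # credit age bonus, capped
    points += -10 * applicant["hard_inquiries_12m"]           # recent inquiries penalty
    return max(300, min(850, points))                         # clamp to the familiar range
```

The appeal of this form is that each factor's contribution is directly legible — the explainability that ML scorecards must recover through tools like SHAP.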

SHAP-Powered Credit Decisions (Code Example 2)

SHAP provides theoretically grounded, per-feature attributions that satisfy the adverse action notice requirements of ECOA and Regulation B.

import numpy as np
import pandas as pd
import shap
import xgboost as xgb

# Credit scorecard with regulatory-required explanations.
# Adverse action notices (USA: ECOA / Regulation B) require explaining adverse credit decisions.
# Assumes X_train, y_train, X_test are preloaded DataFrames with the columns below.

features = ['annual_income', 'debt_to_income', 'credit_history_months',
            'num_late_payments', 'revolving_utilization', 'num_open_accounts',
            'employment_years', 'home_ownership_encoded']

# Train gradient-boosted credit model
model = xgb.XGBClassifier(n_estimators=200, max_depth=5, learning_rate=0.05,
                          eval_metric='auc', random_state=42)
model.fit(X_train, y_train)

# SHAP explanations — required for fair lending compliance
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

def generate_adverse_action(applicant_features: pd.Series, shap_vals: np.ndarray) -> str:
    """Generate an ECOA-compliant adverse action notice for a declined application."""
    # Top 4 negative factors (those reducing creditworthiness)
    feature_impacts = list(zip(features, shap_vals))
    adverse_factors = sorted(feature_impacts, key=lambda x: x[1])[:4]  # most negative

    reasons = {
        'debt_to_income': 'Debt burden relative to income is too high',
        'num_late_payments': 'History of late payments on existing accounts',
        'revolving_utilization': 'High utilization of available revolving credit',
        'credit_history_months': 'Insufficient length of credit history'
    }

    notice = "CREDIT DECISION: Application Declined\nReasons for adverse action:\n"
    for factor, impact in adverse_factors:
        notice += f"• {reasons.get(factor, factor)} (impact: {impact:.3f})\n"
    return notice

print(generate_adverse_action(X_test.iloc[0], shap_values[0]))

Fair Lending: Disparate Impact Analysis

Compliance Critical

What Is Disparate Impact?

A facially neutral model can still discriminate if its outcomes disproportionately affect protected classes (race, sex, national origin, religion, age). This is called disparate impact — and it is illegal under ECOA and the Fair Housing Act regardless of intent.

Testing for disparate impact:

  1. 4/5 Rule (EEOC Guideline): If the approval rate for a protected group is less than 80% of the rate for the group with the highest approval rate, disparate impact is indicated.
  2. Statistical significance tests: Z-test or chi-square to determine whether observed differences are statistically significant.
  3. Regression control: Regress the outcome on protected class membership, controlling for legitimate creditworthiness factors. A significant coefficient indicates disparate treatment.

If disparate impact is found, the lender must demonstrate business necessity (the model is predictive and no less discriminatory alternative exists) or remediate the model.
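The 4/5 rule in step 1 translates directly into code. This is a minimal sketch — the input format (group mapped to approved/total counts) is an assumption for illustration:

```python
def four_fifths_check(approvals):
    """approvals: {group_name: (n_approved, n_total)}.

    Returns a dict flagging groups whose approval rate falls below 80% of
    the highest group's rate — the EEOC 4/5 rule indicator of disparate impact.
    """
    rates = {g: approved / total for g, (approved, total) in approvals.items()}
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}
```

A flagged group does not prove discrimination by itself — it triggers the statistical significance testing and business-necessity analysis described in steps 2 and 3.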

Algorithmic Trading Signals

Algorithmic trading uses ML to identify patterns in market data that predict short-term price movements. It is simultaneously one of the most lucrative and most challenging applications of ML — markets are highly efficient, adversarial, and non-stationary.

Important Disclaimer: The following is for educational purposes only. Past performance does not guarantee future results. Algorithmic trading involves substantial risk of loss. Real trading systems require significant additional work including execution cost modeling, slippage analysis, market impact modeling, and live paper trading before deployment with real capital.

Technical Indicator Feature Engineering (Code Example 3)

The following function computes a comprehensive set of technical indicators used as ML features for predicting 5-day forward returns.

import numpy as np
import pandas as pd

# Technical indicator-based ML trading signal
# Note: Past performance does not guarantee future results. For educational purposes only.

def compute_rsi(close: pd.Series, window: int = 14) -> pd.Series:
    """Relative Strength Index: momentum oscillator in [0, 100]."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

def compute_atr(df: pd.DataFrame, window: int = 14) -> pd.Series:
    """Average True Range: volatility measure from high/low/close."""
    true_range = pd.concat([
        df['high'] - df['low'],
        (df['high'] - df['close'].shift()).abs(),
        (df['low'] - df['close'].shift()).abs()
    ], axis=1).max(axis=1)
    return true_range.rolling(window).mean()

def compute_features(df: pd.DataFrame) -> pd.DataFrame:
    """Compute technical indicators as ML features."""
    df = df.copy()
    # Price-based features
    df['returns_1d'] = df['close'].pct_change(1)
    df['returns_5d'] = df['close'].pct_change(5)
    df['returns_20d'] = df['close'].pct_change(20)

    # Momentum
    df['rsi_14'] = compute_rsi(df['close'], 14)  # RSI oscillator
    df['macd'] = df['close'].ewm(span=12).mean() - df['close'].ewm(span=26).mean()
    df['macd_signal'] = df['macd'].ewm(span=9).mean()

    # Volatility
    df['atr_14'] = compute_atr(df, 14)  # Average True Range
    df['realized_vol_20d'] = df['returns_1d'].rolling(20).std() * np.sqrt(252)

    # Volume features
    df['volume_ratio'] = df['volume'] / df['volume'].rolling(20).mean()
    df['obv_change'] = df['close'].diff().apply(np.sign) * df['volume']  # signed volume (OBV increment)

    return df.dropna()

# Predict 5-day forward return direction (binary classification)
# Feature importance: realized_vol (0.18), RSI (0.15), returns_5d (0.12), MACD (0.11)
# Backtest Sharpe: 0.82 (2015-2023) — execution costs and slippage can erode this significantly
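The 5-day forward-return label itself deserves care: it must look forward via `shift(-horizon)` while the features look only backward, or the model trains on the future. A minimal sketch (function name is illustrative):

```python
import pandas as pd

def make_labels(df: pd.DataFrame, horizon: int = 5) -> pd.Series:
    """Binary label: 1 if close rises over the next `horizon` trading days.

    The last `horizon` rows have no forward price and label to 0 here;
    in practice they should be dropped before training.
    """
    forward_return = df['close'].shift(-horizon) / df['close'] - 1
    return (forward_return > 0).astype(int)
```

Note the asymmetry: `shift(-5)` on the label is the only place future data may appear — any negative shift inside `compute_features` would be the lookahead bias discussed below.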

The Backtesting Trap

Critical Warning

Why Most Backtests Lie

Backtesting — testing a strategy on historical data — is essential but notoriously misleading. Common failure modes:

  • Lookahead bias: Using future information to make past predictions. Subtle and devastating — e.g., using end-of-day closing prices to make intraday decisions.
  • Survivorship bias: Testing only on stocks that still exist today. Companies that went bankrupt are excluded, inflating backtest returns.
  • Overfitting to historical data: With enough features and hyperparameter tuning, any strategy can produce a perfect backtest. Walk-forward validation and out-of-sample holdout are essential.
  • Transaction costs: Bid-ask spreads, commissions, and market impact can easily consume 50–100% of a strategy's gross returns on small-cap or illiquid securities.
  • Regime change: A strategy that worked in low-volatility 2017 may fail catastrophically in high-volatility 2020. Markets are non-stationary.
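The walk-forward validation mentioned above can be sketched with scikit-learn's `TimeSeriesSplit`, which never lets the training window see the future. The synthetic demo data (random features, alternating labels) carries no real signal and is purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit

def walk_forward_scores(model, X, y, n_splits=5):
    """Train only on the past, evaluate on the strictly-later test fold."""
    scores = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(X):
        model.fit(X[train_idx], y[train_idx])          # past only
        scores.append(model.score(X[test_idx], y[test_idx]))  # future only
    return scores

# Synthetic demo: random features, alternating labels (no learnable signal)
rng = np.random.RandomState(0)
X = rng.randn(120, 2)
y = np.arange(120) % 2
print(walk_forward_scores(LogisticRegression(), X, y))
```

On signal-free data like this, per-fold accuracy should hover near 0.5 — a useful sanity check: a walk-forward score far above chance on shuffled labels usually means information is leaking across the time boundary.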

Alternative Data in Finance

Beyond traditional market data (OHLCV, fundamentals), ML models increasingly incorporate alternative data:

  • Satellite imagery: Counting cars in Walmart parking lots to predict earnings before announcement. Orbital Insight and RS Metrics pioneered this.
  • Credit card transactions: Aggregated, anonymized spending data showing revenue trends before official earnings reports.
  • NLP on earnings calls: Tone analysis of CEO language on earnings calls — negative tone predicts underperformance (Loughran-McDonald dictionary).
  • Social media sentiment: Twitter/Reddit sentiment correlated with meme stock momentum. GameStop (2021) showed the limits of purely quantitative models.
  • Job postings: Companies hiring AI engineers signal future AI investment; companies with high turnover signal operational problems.

Financial AI Applications Overview

The table below maps the major AI application areas in financial services to their technical requirements, regulatory oversight, and real-world implementations.

| Application | AI Technique | Latency Requirement | Regulatory Body | Real-World Examples |
| --- | --- | --- | --- | --- |
| Fraud Detection | GBM, Isolation Forest, Neural Networks | < 100ms (hard constraint) | CFPB, FTC, card networks | Stripe Radar, Mastercard DI, Feedzai |
| Credit Scoring | XGBoost, Logistic Regression, Neural Net | Seconds (batch OK for origination) | CFPB, OCC, ECOA, Reg B | FICO Score, Upstart, Zest AI |
| Algorithmic Trading | LSTM, Transformer, Gradient Boosting | Microseconds (HFT) to minutes (swing) | SEC, FINRA, CFTC | Two Sigma, Renaissance, DE Shaw |
| Risk Management | Monte Carlo, VaR models, Stress testing | Minutes to hours (batch) | Federal Reserve, OCC, Basel | Internal bank risk systems |
| RegTech / Compliance | NLP, Graph Neural Networks, Rules + ML | Real-time to daily batch | FinCEN, OFAC, FINRA | NICE Actimize, ComplyAdvantage |
| Robo-Advisory | MPT, RL, NLP (chatbot layer) | Seconds (portfolio rebalancing) | SEC (RIA), FINRA | Betterment, Wealthfront, Schwab Intelligent |

Model Risk Management: SR 11-7

SR 11-7 (Supervisory Guidance on Model Risk Management), issued by the Federal Reserve and OCC in 2011, is the foundational framework for financial model governance. Every bank with federal oversight must comply. It defines what a "model" is, how it must be validated, and what documentation is required.

The SR 11-7 Model Definition

A "model" under SR 11-7 is any quantitative method that applies statistical, economic, financial, or mathematical theories to transform inputs into outputs for decision-making. This includes:

  • Credit scoring models
  • Fraud detection systems
  • AML (anti-money laundering) models
  • Stress testing models
  • Trading risk models (VaR, Greeks)
  • Increasingly: LLM-based systems used in customer service or financial advice

Financial AI Model Risk Tiers

| Risk Tier | Definition | Validation Frequency | Documentation Required | Examples |
| --- | --- | --- | --- | --- |
| High (Material) | High financial impact if wrong; used for material decisions; systemic risk | Annual or upon material change | Full model documentation, conceptual soundness review, outcomes analysis, sensitivity testing, challenger model | Credit origination, stress testing (DFAST), market risk VaR |
| Moderate | Moderate impact; decision-support (human override available) | Biennial, or upon significant change | Model documentation, outcomes monitoring, benchmark comparison | Fraud triage, collection scoring, pricing models |
| Low | Limited financial impact; well-understood; regulatory scrutiny low | Every 3 years or event-driven | Basic documentation, performance monitoring | Internal reporting models, marketing segmentation |
| Non-Model | Calculation tools, lookup tables, expert judgment — does not meet model definition | N/A (not in model inventory) | Not required under SR 11-7 | Interest calculators, static lookup tables, spreadsheet formulas |

Model Validation

Three Pillars of SR 11-7 Validation

  • Conceptual Soundness: Is the model's theoretical foundation appropriate? Are the assumptions reasonable and documented? Are the statistical techniques correctly applied?
  • Ongoing Monitoring: Is the model performing as expected in production? Are input data distributions stable? Are population shift or drift alerts in place?
  • Outcomes Analysis: Do model predictions match observed outcomes? For credit: do predicted default rates match actual default rates by score band (a "calibration" test)?

Independence Requirement: SR 11-7 requires that model validation be conducted by a function independent of model development. This has created an entire industry of third-party model validators.
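The calibration test in the outcomes-analysis pillar can be sketched as a per-band comparison of predicted versus observed default rates. The function name and synthetic inputs below are illustrative; a real validation compares model PDs against realized defaults by origination vintage:

```python
import numpy as np
import pandas as pd

def calibration_by_band(predicted_pd, defaulted, n_bands=5):
    """Outcomes analysis: predicted vs. observed default rates per score band.

    predicted_pd: model's predicted probabilities of default.
    defaulted:    0/1 observed outcomes.
    """
    df = pd.DataFrame({"pd": predicted_pd, "defaulted": defaulted})
    # qcut builds equal-population score bands, mirroring scorecard band reports
    df["band"] = pd.qcut(df["pd"], n_bands, labels=False, duplicates="drop")
    return df.groupby("band").agg(
        predicted_rate=("pd", "mean"),
        observed_rate=("defaulted", "mean"),
        n=("pd", "size"),
    )
```

A well-calibrated model shows predicted and observed rates tracking each other across bands; a systematic gap in any band is exactly the finding an independent validator would escalate.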

Regulatory Compliance

Financial AI operates at the intersection of multiple regulatory regimes simultaneously. A single credit model at a US bank must comply with ECOA, FCRA, fair lending regulations, SR 11-7, and (if the bank has EU customers) GDPR Article 22. Understanding the landscape is essential.

ECOA & Fair Lending

US Regulation

Equal Credit Opportunity Act Requirements

  • Adverse Action Notices: When credit is denied, modified, or terminated, the lender must provide specific reasons within 30 days. These must be the actual factors that most influenced the decision — not generic boilerplate. SHAP enables this.
  • Prohibited Bases: Credit decisions cannot be based on race, color, religion, national origin, sex, marital status, age, or receipt of public assistance — even through proxy variables.
  • Proxy Variable Risk: ML models trained on historical data may learn proxies for protected characteristics (e.g., zip code as a proxy for race). This is illegal even if the protected class variable itself is not in the model.
  • Model Documentation: Regulators (CFPB, OCC) can examine model documentation, training data, feature importance, and testing results. "The algorithm is proprietary" is not an adequate response to a regulatory examination.

GDPR Article 22: Automated Decision-Making

For EU consumers, GDPR Article 22 grants the "right not to be subject to a decision based solely on automated processing" when that decision produces legal or similarly significant effects. For financial AI:

  • Fully automated credit decisions trigger Article 22 obligations.
  • Lenders must offer the ability to have a human review automated decisions.
  • The data subject has the right to an explanation of the decision's logic and the right to contest it.
  • The right to explanation under GDPR is more expansive than ECOA adverse action notices — it covers the logic of the model, not just the top factors.

CCPA & State Privacy Laws

The California Consumer Privacy Act (CCPA) and its successor CPRA grant California residents rights over their personal data used in financial AI:

  • Right to know what data is collected and how it is used.
  • Right to delete personal data (subject to financial recordkeeping requirements).
  • Right to opt out of the "sale" of personal information.
  • Right to non-discrimination for exercising these rights.

The Patchwork Problem: By 2026, over 15 US states have passed or are considering comprehensive privacy laws, each with different requirements. Building a compliant financial AI system now requires a privacy-by-design approach that accommodates the most restrictive requirements across all jurisdictions where customers reside — not just the jurisdiction where the company is incorporated.

Exercises & Practice

Financial ML rewards both technical depth and regulatory literacy. These exercises build both alongside hands-on model development.

Beginner

Exercise 1: Fraud Detection Baseline & the Imbalance Problem

Download the Kaggle Credit Card Fraud Dataset (284,807 transactions, 492 fraud cases — 0.17% fraud rate). Train a logistic regression classifier using default settings. Record: (1) overall accuracy, (2) precision on fraud class, (3) recall on fraud class. Now explain why 99.8% accuracy is a terrible metric here. Compute the Area Under Precision-Recall Curve (AUPRC). How does it compare to AUC-ROC? Which metric better reflects real business value?

Bonus: Plot the cost curve — if a fraudulent transaction costs the bank $500 on average and a false positive costs $2 in customer service costs, what decision threshold minimizes expected cost?

Intermediate

Exercise 2: XGBoost with SHAP Explanations

Train an XGBoost classifier on the same Kaggle fraud dataset with proper train/validation/test splits and threshold calibration. Generate SHAP values for the test set. Then: (1) Plot the SHAP summary plot — which features drive fraud predictions globally? (2) For 5 transactions that were false positives (predicted fraud but legitimate), examine their individual SHAP waterfall plots. Can you understand why the model flagged them? (3) For 5 true positives (correctly identified fraud), which features were most suspicious? (4) Write a natural-language explanation for each false positive as you would explain it to a customer who called to complain.

Tools: XGBoost, SHAP library, Matplotlib.

Advanced

Exercise 3: End-to-End Fraud Detection Pipeline

Design and implement a complete fraud detection pipeline on the Kaggle dataset:

  1. Feature engineering: Engineer at least 5 new velocity/ratio features beyond the original PCA components (simulate velocity by treating V1-V5 as time-ordered within each day).
  2. Model training: Train an ensemble: XGBoost + Isolation Forest. Calibrate the ensemble weights.
  3. Threshold calibration: Using the business cost model from Exercise 1, find the optimal decision threshold.
  4. SHAP explanations: For each transaction scored as HIGH risk, generate a structured explanation with top 3 contributing features.
  5. Adverse action notice: For declined legitimate-looking transactions, generate a formatted adverse action notice (even though real fraud systems don't send these — this simulates the credit use case).

Test the complete pipeline on 100 randomly sampled transactions. Report: mean inference time, precision/recall at optimal threshold, and qualitative assessment of explanation quality.

Financial AI Deployment Assessment

Document your financial AI model for governance, regulatory review, and model risk management.

Technology