Product guide

How to use Flash for employee operations

This guide walks you through the full journey: creating a project, preparing your CSV, running the model, locking your schema, and growing your dataset over time.

Flash in one page: the basic flow

Flash is built around projects. Each project represents a focused operational question – for example, “Where is burnout highest in my hospital?” or “Which employees are most at risk of leaving?”

  1. Create a project – Give it a name, pick a domain (e.g. healthcare), and define a high-level goal.
  2. Prepare your employee CSV – Start with a template, copy a few rows from your HR/people data, and clean obvious issues.
  3. Upload & run the model – Upload your CSV, run the pipeline, and review risk scores and insights.
  4. Lock the schema & append new data – When you’re happy with the structure, lock the dataset. Future uploads append new rows and keep IDs consistent.

You can always come back to this page from the navigation bar if you need a refresher.

1. Creating a project

Head to the Projects page and click New project.

    Project name – short, descriptive label (e.g. “ICU Burnout – Q3”).
    Domain – the operational area the project covers (e.g. healthcare).
    Goal – the high-level question you want the model to help answer.

Once a project exists, you can upload data, run the model, and lock the schema whenever you’re ready.

2. Preparing your employee CSV

Flash ingests your historical employee data to surface risk, burnout signals, and operational hotspots. The easiest path is to start from one of the templates and paste in fields from your HR or workforce system.

CSV templates

Choose a template that matches how much data you have available right now.

Minimal template (fast)

Perfect for first uploads or when you only have core HR fields handy.

EmployeeID, JobRole, TenureMonths, Age, Salary, BurnoutScore, WorkLifeScore, PerformanceScore, HoursWorkedAvg, LeftCompany

Standard template (full power)

Recommended for pilots and production. Adds richer context for stronger signals and copilot guidance.

All minimal columns, plus: EmployeeName, Department, TrainingsCompleted, ParentalLeave, 401kMatch, MedicalCopay, CoverageAvailable, CoverageGaps, PTOUsageRate, OvertimeFlag, RestDaysTaken

Key columns – what they mean

EmployeeID – Unique identifier from your HRIS. Keeps employees consistent across uploads.

JobRole – Title/role like “ICU RN” or “Respiratory Therapist”. Helps benchmark risk by role.

TenureMonths – Time employed (months). New staff often carry different risk profiles.

BurnoutScore – 0–10 burnout or stress indicator (higher = more burned out).

WorkLifeScore – 0–10 work–life balance score (higher = healthier balance).

PerformanceScore – 0–10 overall performance rating.

HoursWorkedAvg – Average weekly hours over the recent period. Flags overload patterns.

LeftCompany – 0 = still employed, 1 = left. Optional for brand-new predictions but powerful for training.
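
If you want to sanity-check these ranges and the uniqueness of EmployeeID before uploading, a quick local check is enough. A minimal sketch, assuming pandas and a placeholder file name employees.csv for your prepared CSV:

import pandas as pd

df = pd.read_csv("employees.csv")  # placeholder path for your prepared CSV

# BurnoutScore, WorkLifeScore and PerformanceScore are 0-10 scales.
for col in ["BurnoutScore", "WorkLifeScore", "PerformanceScore"]:
    out_of_range = df[~df[col].between(0, 10)]
    if not out_of_range.empty:
        print(f"{col}: {len(out_of_range)} rows outside the 0-10 range")

# LeftCompany, when present, should be 0 (still employed) or 1 (left).
if "LeftCompany" in df.columns:
    bad = df[~df["LeftCompany"].isin([0, 1])]
    if not bad.empty:
        print(f"LeftCompany: {len(bad)} rows that are not 0 or 1")

# EmployeeID must be unique so employees stay consistent across uploads.
duplicates = df[df["EmployeeID"].duplicated()]
if not duplicates.empty:
    print(f"EmployeeID: {len(duplicates)} duplicate identifiers")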

Example rows

Minimal template

EmployeeID,JobRole,TenureMonths,Age,Salary,BurnoutScore,WorkLifeScore,PerformanceScore,HoursWorkedAvg,LeftCompany
RN-001,Registered Nurse,24,34,78000,6,7,8,42,0
RN-002,ICU RN,6,29,72000,8,4,7,50,1

Standard template

EmployeeID,EmployeeName,JobRole,Department,TenureMonths,Age,Salary,BurnoutScore,WorkLifeScore,PerformanceScore,HoursWorkedAvg,TrainingsCompleted,ParentalLeave,401kMatch,MedicalCopay,CoverageAvailable,CoverageGaps,PTOUsageRate,OvertimeFlag,RestDaysTaken,LeftCompany
RN-001,"Alex Kim","Registered Nurse","ICU",24,34,78000,6,7,8,42,3,0,3,20,1,2,0.4,1,6,0

Start with a handful of rows, upload, and iterate — you can grow the dataset over time.
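
If your HR system exports these fields under different names, a few lines of scripting will rename and trim them to the minimal template. A rough sketch, assuming pandas; the source column names and the file names (hr_export.csv, flash_minimal.csv) are placeholders to adjust for your own system:

import pandas as pd

# Placeholder export from your HR system; map the source column names
# below to whatever your HRIS actually produces.
hr = pd.read_csv("hr_export.csv")

minimal = hr.rename(columns={
    "employee_id": "EmployeeID",
    "job_title": "JobRole",
    "tenure_months": "TenureMonths",
    "age": "Age",
    "annual_salary": "Salary",
    "burnout_score": "BurnoutScore",
    "work_life_score": "WorkLifeScore",
    "performance_score": "PerformanceScore",
    "avg_weekly_hours": "HoursWorkedAvg",
    "left_company": "LeftCompany",
})[[
    "EmployeeID", "JobRole", "TenureMonths", "Age", "Salary",
    "BurnoutScore", "WorkLifeScore", "PerformanceScore",
    "HoursWorkedAvg", "LeftCompany",
]]

minimal.to_csv("flash_minimal.csv", index=False)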

3. Upload your CSV and run the model

Inside a project, scroll to the Upload data section and upload your CSV.

  • Start with a small sample (e.g. 50–200 employees) to validate that columns line up correctly (see the sketch after this list).
  • Once uploaded, click Run model to score the dataset.
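
If your full export covers thousands of employees, you can slice off that first sample in a couple of lines. A minimal sketch, again assuming pandas and the placeholder file names used above:

import pandas as pd

df = pd.read_csv("flash_minimal.csv")  # placeholder name from the previous step

# Reproducible sample of up to 200 employees for a first validation upload.
sample = df.sample(n=min(200, len(df)), random_state=42)
sample.to_csv("flash_sample.csv", index=False)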

After a run completes, you’ll see risk scores, summary statistics, and distributions in the project overview.

4. Locking the schema & appending new data

As soon as you’re comfortable with the column structure of your dataset, you can lock the schema for that project.

  • When a project is unlocked, each upload is treated as a separate experimental dataset.
  • When you lock the schema, the current dataset becomes the canonical dataset.
  • Future uploads must match the locked schema and will be merged into the canonical dataset by EmployeeID, updating existing employees and adding new ones.

This lets you keep a single, growing view of your workforce while preserving consistent model behavior over time.
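
Flash performs this merge for you once the schema is locked, but if it helps to see what merging by EmployeeID means in practice, here is a rough local equivalent (a sketch assuming pandas; the file names are placeholders):

import pandas as pd

canonical = pd.read_csv("canonical_dataset.csv")
new_upload = pd.read_csv("new_upload.csv")

# Uploads must match the locked schema exactly.
assert list(new_upload.columns) == list(canonical.columns)

# Rows sharing an EmployeeID replace the existing employee;
# new IDs are appended as new employees.
merged = (
    pd.concat([canonical, new_upload], ignore_index=True)
    .drop_duplicates(subset="EmployeeID", keep="last")
    .reset_index(drop=True)
)
merged.to_csv("canonical_dataset.csv", index=False)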

5. Reading scores and insights

After a run, Flash surfaces scores and summaries that help you move from data to decisions.

  • Risk scores – each row receives a probability-like risk score (e.g. the likelihood of leaving).
  • Population overview – total employees, average tenure, and basic workforce stats.
  • Wellbeing metrics – average burnout, work–life balance, and hours worked where available.
  • Distributions – risk by department and tenure bands, so you can spot hotspots quickly (see the sketch after this list).
  • Copilot insights – the agent summarizes what’s most interesting about this run and suggests next questions to ask.
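
If you export a run’s scored rows, you can also reproduce a simple version of the department and tenure breakdowns yourself. A rough sketch, assuming pandas and an export that contains Department, TenureMonths, and a risk column; the file name and the RiskScore column name are placeholders:

import pandas as pd

scored = pd.read_csv("scored_employees.csv")  # placeholder export of a run

# Average risk by department, highest first.
print(scored.groupby("Department")["RiskScore"].mean().sort_values(ascending=False))

# Average risk by tenure band (months).
bands = pd.cut(scored["TenureMonths"], bins=[0, 6, 12, 24, float("inf")],
               labels=["0-6", "6-12", "12-24", "24+"], include_lowest=True)
print(scored.groupby(bands, observed=True)["RiskScore"].mean())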

6. Managing projects over time

As you experiment with different datasets, you may want to archive older projects or keep only a core set of active ones.

  • Active vs archived – your Projects page lets you switch between active and archived views.
  • Archive – hides a project from the default list without deleting data or runs.
  • Unarchive – restores a project back to active if you want to keep working with it.

Archiving is a soft delete: you can always bring a project back later if you decide to continue that experiment.

If you get stuck at any point, start with a tiny dataset (even 10–20 rows), run the model, and iterate. Flash is designed to work well with small pilots and then grow with you as you add more data and more projects.