AI reports

ARGUS generates a structured Mission After-Action Report from the raw data of any completed operation. This page covers the media and data that feed the report and how to iterate on it. For the full workflow — opening, exporting, sharing — see Mission report.

The report is produced by the Mission After-Action Report dialog, opened from the operation detail view. Generation runs in phases and each phase lights up in order in the progress strip.

Report sections and their sources

The generator pulls from every data stream the operation wrote. Each section in the final document corresponds to a specific slice of that data.

  • Assessment banner: Computed — SUCCESS / PARTIAL / FAILURE derived from task completion, incident flags, and vehicle status.
  • Summary: LLM narrative over mission metadata, operation type, deployed assets, and the chat timeline.
  • Stats: Task counts (tasksCompleted / tasksTotal), asset counts, distance covered, mission duration.
  • Timeline: Chat messages, flag drops, workflow transitions, mission state changes, checklist completions. Rendered as a chronological strip.
  • Tasks: operations/{id}/tasks — one sub-section per task with completion state and assignee.
  • Assets deployed: assetMarkers at mission start, final positions, and total track distance per asset.
  • Media highlights: Photos and video flagged during the mission, items manually attached via the Attach to report action in Media search, and automatically surfaced moments (high-confidence detections, long PTT bursts).
  • Flags and detections: Map flags with author and note, plus AI detections with class, confidence, and thumbnail.
  • Comms transcript: Speech-to-text over recorded radio traffic, speaker-labelled by asset callsign.
  • Incidents: Emergency-stop events, lost-link events, obstacle-avoidance aborts, and any manually logged incidents.
  • Weather: Snapshot taken from the operation's weather service at start and end.
  • Recommendations: LLM-generated lessons learned, scoped to the patterns in the data above.
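The assessment banner is the one section that is computed rather than generated. The exact rule set is internal to ARGUS; the sketch below is illustrative only, with hypothetical field names and thresholds, to show the shape of a derivation from task completion, incident flags, and vehicle status.

```typescript
// Hypothetical sketch of the assessment-banner derivation.
// Field names and thresholds are assumptions, not the ARGUS rule set.
type Assessment = "SUCCESS" | "PARTIAL" | "FAILURE";

interface OperationOutcome {
  tasksCompleted: number;
  tasksTotal: number;
  incidentFlags: number; // emergency stops, lost links, aborts
  vehiclesLost: number;  // assets not recovered at mission end
}

function deriveAssessment(o: OperationOutcome): Assessment {
  // A lost vehicle dominates everything else.
  if (o.vehiclesLost > 0) return "FAILURE";
  const completion = o.tasksTotal === 0 ? 0 : o.tasksCompleted / o.tasksTotal;
  // Clean run: every task done, no incidents.
  if (completion === 1 && o.incidentFlags === 0) return "SUCCESS";
  // Majority of tasks done counts as a partial result here.
  if (completion >= 0.5) return "PARTIAL";
  return "FAILURE";
}
```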

Regenerating a single section

Once the report is visible, each section has an inline Regenerate this section button in the editor. Clicking it re-runs only the LLM pass for that section; the underlying data is left untouched. Useful when:

  • The narrative missed a detail you care about — edit the prompt hint for that section and re-run.
  • A task was re-classified as complete after the report was generated — regenerate Tasks and Summary.
  • You tuned Media highlights by attaching or detaching items in Media search and want the report to reflect the new selection.

Per-section regeneration preserves manual edits in other sections. Full report regeneration is a separate button and will overwrite all manual edits, so it prompts for confirmation.
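The contract described above can be pictured as a map over the report's sections: only the targeted section is re-run, everything else passes through untouched. This is a minimal sketch with hypothetical types, not the ARGUS API.

```typescript
// Illustrative sketch of per-section regeneration.
// ReportSection, regenerateSection, and llmPass are hypothetical names.
interface ReportSection {
  name: string;
  content: string;
  manuallyEdited: boolean;
}

function regenerateSection(
  report: ReportSection[],
  target: string,
  llmPass: (sectionName: string) => string,
): ReportSection[] {
  return report.map((s) =>
    s.name === target
      ? // Only the targeted section gets a fresh LLM pass; its previous
        // manual edits are overwritten by design.
        { ...s, content: llmPass(s.name), manuallyEdited: false }
      : // Every other section, including its manual edits, is kept as-is.
        s,
  );
}
```

Full report regeneration would be the degenerate case where every section is a target, which is why it warrants a confirmation prompt.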

Editing manually

Every section is rich-text after generation. Typical workflow:

  1. Generate.
  2. Review each section.
  3. Regenerate the weak ones.
  4. Edit inline to remove PII, tighten language, and add operator notes.
  5. Export to PDF (see Mission report).

What the generator needs

The dialog surfaces a dedicated error state when prerequisites are missing. Most common:

  • OpenAI API key not configured — fix in Settings → App settings under AI Copilot. The dialog points directly at that section in its error message.
  • No chat or task data — nothing to summarise. Usually means the operation was opened but never ran.
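A precondition gate of this kind amounts to collecting human-readable errors before any generation starts. The sketch below is hypothetical (the Prereqs shape and function name are assumptions), but it mirrors the two checks listed above.

```typescript
// Illustrative prerequisite check; messages paraphrase the dialog's errors.
// The Prereqs shape and checkPrereqs name are assumptions, not ARGUS code.
interface Prereqs {
  apiKeyConfigured: boolean;
  chatMessages: number;
  tasks: number;
}

function checkPrereqs(p: Prereqs): string[] {
  const errors: string[] = [];
  if (!p.apiKeyConfigured) {
    errors.push(
      "OpenAI API key not configured. Fix in Settings > App settings under AI Copilot.",
    );
  }
  // Generation needs at least one of the two narrative sources.
  if (p.chatMessages === 0 && p.tasks === 0) {
    errors.push("No chat or task data. Nothing to summarise.");
  }
  return errors;
}
```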

Any result in Media search has an Attach to report action. Attached items feed the Media highlights section on the next regenerate. You can also detach an item from within the report editor — it flips back to an unflagged state in the search index so it does not resurface on the next auto-highlight pass.
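The selection behaviour described above can be summarised as a filter with three routes into Media highlights: flagged during the mission, manually attached, or auto-surfaced over a confidence bar. The sketch below uses hypothetical field names and an assumed threshold; it is not the ARGUS data model.

```typescript
// Hypothetical sketch of Media highlights selection.
// MediaItem fields and the 0.8 threshold are illustrative assumptions.
interface MediaItem {
  id: string;
  flagged: boolean;   // flagged by an operator during the mission
  attached: boolean;  // via the Attach to report action in Media search
  confidence: number; // best AI detection confidence for this item
}

function selectHighlights(items: MediaItem[], minConfidence = 0.8): MediaItem[] {
  // Flagged and attached items always make the cut; auto-surfaced moments
  // must clear the confidence bar. Detaching an item clears both flags,
  // so it can only return via a fresh high-confidence detection.
  return items.filter(
    (i) => i.attached || i.flagged || i.confidence >= minConfidence,
  );
}
```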