AI reports
ARGUS generates a structured Mission After-Action Report from the raw data of any completed operation. This page covers the media and data streams that feed the report and how to iterate on the result. For the full workflow — opening, exporting, sharing — see Mission report.
The report is produced by the Mission After-Action Report dialog, opened from the operation detail view. Generation runs in phases, and each phase lights up in order in the progress strip.
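The phased run can be pictured as a simple ordered loop that reports progress before each step. This is a minimal sketch; the phase names and the `run_report` / `on_progress` hooks are illustrative assumptions, not the actual pipeline API.

```python
# Hypothetical phase names -- the real pipeline's phases are not documented here.
PHASES = ["collect data", "analyse", "draft sections", "render"]

def run_report(phases, run_phase, on_progress):
    """Run each generation phase in order, signalling progress before each one."""
    for index, phase in enumerate(phases):
        on_progress(phase, index)  # e.g. light up this phase in the progress strip
        run_phase(phase)

lit = []
run_report(PHASES, run_phase=lambda p: None, on_progress=lambda p, i: lit.append(p))
```

Because progress is signalled before each phase runs, the strip lights up strictly in order even if a later phase fails.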
Report sections and their sources
The generator pulls from every data stream the operation wrote. Each section in the final document corresponds to a specific slice of that data.
| Section | Sourced from |
|---|---|
| Assessment banner | Computed — SUCCESS / PARTIAL / FAILURE derived from task completion, incident flags, and vehicle status. |
| Summary | LLM narrative over mission metadata, operation type, deployed assets, and the chat timeline. |
| Stats | Task counts (tasksCompleted / tasksTotal), asset counts, distance covered, mission duration. |
| Timeline | Chat messages, flag drops, workflow transitions, mission state changes, checklist completions. Rendered as a chronological strip. |
| Tasks | operations/{id}/tasks — one sub-section per task with completion state and assignee. |
| Assets deployed | assetMarkers at mission start, final positions, and total track distance per asset. |
| Media highlights | Photos and video flagged during the mission, items manually Attached to report via Media search, and automatically-surfaced moments (high-confidence detections, long PTT bursts). |
| Flags and detections | Map flags with author and note, plus AI detections with class, confidence, and thumbnail. |
| Comms transcript | Speech-to-text over recorded radio traffic, speaker-labelled by asset callsign. |
| Incidents | Emergency-stop events, lost-link events, obstacle-avoidance aborts, and any manually logged incidents. |
| Weather | Snapshot taken from the operation’s weather service at start and end. |
| Recommendations | LLM-generated lessons-learned, scoped to the patterns in the data above. |
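The assessment banner is the only fully computed section. A sketch of how SUCCESS / PARTIAL / FAILURE might be derived from the three inputs the table names is below; the thresholds and function signature are illustrative assumptions, not ARGUS's documented rules.

```python
def derive_assessment(tasks_completed, tasks_total, incidents, vehicles_down):
    """Derive the banner from task completion, incident flags, and vehicle status.

    Thresholds here are assumptions for illustration only.
    """
    if tasks_total == 0:
        return "FAILURE"  # nothing ran, nothing to assess
    completion = tasks_completed / tasks_total
    if completion == 1.0 and incidents == 0 and vehicles_down == 0:
        return "SUCCESS"
    if completion >= 0.5 and vehicles_down == 0:
        return "PARTIAL"
    return "FAILURE"
```

For example, an operation that completed all tasks cleanly maps to SUCCESS, while one that lost a vehicle maps to FAILURE regardless of task count.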
Regenerating a single section
Once the report is visible, each section has an inline Regenerate this section button in the editor. Clicking it re-runs only the LLM pass for that section — the underlying data sample is unchanged. Useful when:
- The narrative missed a detail you care about — edit the prompt hint for that section and re-run.
- A task was re-classified as complete after the report was generated — regenerate Tasks and Summary.
- You tuned Media highlights by attaching or detaching items in Media search and want the report to reflect the new selection.
Per-section regeneration preserves manual edits in other sections. Full report regeneration is a separate button and will overwrite all manual edits, so it prompts for confirmation.
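The key property — regenerating one section leaves every other section, including manual edits, untouched — can be sketched as replacing a single entry in the report while copying the rest through unchanged. The `regenerate_section` helper and the section keys are hypothetical, not the actual editor API.

```python
def regenerate_section(report, section, llm_pass):
    """Re-run the LLM pass for one section only; all other sections,
    including any manual edits, are carried over verbatim."""
    updated = dict(report)
    updated[section] = llm_pass(section)
    return updated

report = {
    "Summary": "LLM draft",
    "Tasks": "stale - task 3 was re-classified after generation",
    "Incidents": "manually edited by the operator",
}
report = regenerate_section(report, "Tasks", lambda s: "fresh LLM output for " + s)
```

A full regeneration, by contrast, would rebuild every entry, which is why it overwrites manual edits and asks for confirmation first.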
Editing manually
Every section is rich-text after generation. Typical workflow:
- Generate.
- Review each section.
- Regenerate the weak ones.
- Edit inline to remove PII, tighten language, add operator notes.
- Export to PDF (see Mission report).
What the generator needs
The dialog surfaces a dedicated error state when prerequisites are missing. Most common:
- OpenAI API key not configured — fix in Settings → App settings under AI Copilot. The dialog points directly at that section in its error message.
- No chat or task data — nothing to summarise. Usually means the operation was opened but never ran.
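A prerequisite check like the one the dialog performs can be sketched as collecting every missing requirement into a list of human-readable errors. The settings keys and messages here are assumptions for illustration; the real dialog's internals are not documented.

```python
def missing_prerequisites(settings, operation):
    """Return one message per missing prerequisite; empty list means ready."""
    errors = []
    if not settings.get("openai_api_key"):
        errors.append(
            "OpenAI API key not configured (Settings > App settings > AI Copilot)"
        )
    if not operation.get("chat_messages") and not operation.get("tasks"):
        errors.append("No chat or task data - the operation was opened but never ran")
    return errors
```

Collecting all failures at once, rather than stopping at the first, lets the dialog show every fix the operator needs in a single error state.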
Attaching media from search
Any result in Media search has an Attach to report action. Attached items feed the Media highlights section on the next regenerate. You can also detach an item from within the report editor — it flips back to an unflagged state in the search index so it does not resurface on the next auto-highlight pass.
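The attach/detach behaviour amounts to toggling two flags on the search-index entry: whether the item feeds Media highlights, and whether it counts as flagged for the auto-highlight pass. The field names below are hypothetical; this is a sketch of the described semantics, not the actual index schema.

```python
def attach_to_report(index, media_id):
    """Mark a Media search result for inclusion in Media highlights."""
    index[media_id]["attached_to_report"] = True
    index[media_id]["flagged"] = True

def detach_from_report(index, media_id):
    """Detach from the report editor: flip back to unflagged so the item
    does not resurface on the next auto-highlight pass."""
    index[media_id]["attached_to_report"] = False
    index[media_id]["flagged"] = False

index = {"photo-17": {"attached_to_report": False, "flagged": False}}
attach_to_report(index, "photo-17")
detach_from_report(index, "photo-17")
```

After the detach, the item is indistinguishable from one that was never flagged, which is what keeps it out of later auto-highlight passes.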