Most conversations about AI automation assume the work happens in the cloud. You connect a tool, give it API access, it pulls data, something happens on a server somewhere, and a result comes back. That model works well for a lot of tasks. But there is a whole category of business work where it does not: regulated industries, client data that cannot leave a local machine, consultancies with strict data handling policies, and anyone who needs to run reliably without an internet connection.
The AGC reporting tool I built for a consultancy client sits in that category. The brief was straightforward: take a CSV export from their internal system, run it through a set of calculations, apply their branded Word template, and produce a finished client-ready report. The whole thing had to run locally, with no cloud dependency, triggered by a non-technical person in three steps or fewer.
The reporting problem nobody talks about
The consultancy was producing monthly performance reports for each of their clients. The raw data came from their internal system as a CSV: rows of figures, dates, and reference codes. To turn that into a report, someone had to open Excel, run several calculations manually, then copy the results into a Word document, apply formatting, adjust the layout, write a summary paragraph, and save it as a PDF.
Per report, this took between 45 minutes and 90 minutes depending on complexity. They were producing 12 to 15 reports each month. That is up to 22.5 hours of work that followed an identical pattern every time, where the only genuine judgment required was in the summary paragraph at the end.
The rest was mechanical. The calculations were always the same. The template was always the same. The formatting was always the same. The only thing that changed was the numbers.
This is the type of work that is most dangerous to leave manual: it is repetitive enough that attention lapses, errors creep in, and the person doing it finds it progressively harder to care about doing it well. And it is visible to clients, so errors have direct consequences.
Why offline matters
The first instinct for many people approaching this problem would be to reach for a cloud-based automation platform: connect the CSV export to a Make.com workflow, push it through a template service, get a document back. That works if the data is not sensitive.
In this case, the CSV contained client financial data. Sending it to a third-party cloud service required a data processing agreement, a security review, and sign-off from the client's compliance team. The timeline for that process was measured in months. The timeline for building a local Python tool was measured in days.
Offline tools are underrated in the automation conversation. They are faster to deploy in regulated contexts, easier to audit, cheaper to run long-term (no per-task fees, no subscription tiers), and simpler to debug when something goes wrong because everything happens on a machine you control.
How the tool is built
The stack is deliberately minimal:
- Python 3 with pandas for reading and processing the CSV
- python-docx for generating the Word document from a template
- Anthropic Python SDK for generating the summary commentary (optional: runs only when an API key is present)
- A simple batch file (Windows) that the user double-clicks to trigger the whole process
The user's workflow is three steps: export the CSV from their internal system, drop it into a designated folder, double-click the batch file. The script picks up the latest CSV in that folder automatically. No command line, no configuration, no choices to make.
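The "latest CSV" pickup can be sketched in a few lines of standard-library Python. The folder name is illustrative, not the client's actual path:

```python
from pathlib import Path

def latest_csv(folder: Path) -> Path:
    """Return the most recently modified CSV in the input folder."""
    candidates = sorted(folder.glob("*.csv"), key=lambda p: p.stat().st_mtime)
    if not candidates:
        # Fail loudly rather than silently producing no report.
        raise FileNotFoundError(f"No CSV found in {folder}")
    return candidates[-1]
```

Sorting by modification time rather than filename means the user never has to name the export in a particular way: whatever they dropped in most recently is what gets processed.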
The output appears in a separate output folder with the client name and month in the filename, ready to be sent.
From CSV to Word in practice
The CSV processing step uses pandas to read the export, validate the expected columns are present (and raise a clear error if they are not), run the standard calculations (totals, percentage changes, period comparisons), and produce a clean dictionary of values keyed by the variable names used in the Word template.
The calculations themselves are not complex. The work is in defining them precisely and making them robust to common CSV quirks: trailing spaces in column names, mixed date formats, rows that represent subtotals the script needs to skip. Every data pipeline has these edge cases. The first version of the script handles the clean case. Production robustness comes from running it against real exports and patching what breaks.
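A minimal sketch of the validation-and-calculation step, handling the quirks described above. The column names ("Client", "Revenue", "Period") and the subtotal-row convention are illustrative assumptions, not the consultancy's actual schema:

```python
import pandas as pd

REQUIRED = {"Client", "Revenue", "Period"}

def process(df: pd.DataFrame) -> dict:
    # Normalise a common CSV quirk: trailing spaces in column names.
    df = df.rename(columns=lambda c: c.strip())
    missing = REQUIRED - set(df.columns)
    if missing:
        # Clear error beats a cryptic KeyError three steps later.
        raise ValueError(f"CSV is missing expected columns: {sorted(missing)}")
    # Skip rows that represent subtotals rather than data.
    df = df[df["Client"].str.lower() != "subtotal"]
    this_month = df[df["Period"] == "current"]["Revenue"].sum()
    last_month = df[df["Period"] == "previous"]["Revenue"].sum()
    delta = this_month - last_month
    pct = (delta / last_month * 100) if last_month else 0.0
    # Keys match the placeholder names used in the Word template.
    return {
        "revenue_current": this_month,
        "revenue_previous": last_month,
        "revenue_delta": delta,
        "revenue_pct_change": round(pct, 1),
    }
```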
After processing, the script has a clean set of values: this month's figures, last month's figures, the delta between them, the percentage change, and any flags triggered by values outside expected ranges.
The template layer
The Word template is the consultancy's existing branded document, with placeholder variables inserted where the generated values should appear. python-docx opens the template, finds each placeholder, and replaces it with the calculated value, preserving all the formatting (fonts, colours, table styles, header layout) that the template already has.
This is the right way to approach document generation for a client-facing tool. The consultancy owns the template. When they want to change the layout, adjust colours, or add a section, they edit the Word file directly. They do not need to touch the Python code. The script treats the template as a black box: put values in, get a formatted document out.
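The placeholder pass can be sketched as below. It assumes python-docx-style paragraph objects (each exposing a mutable `.text`) and an illustrative `{{name}}` marker syntax; note that a production version edits individual runs rather than replacing `.text` wholesale, because reassigning `.text` in python-docx resets run-level character formatting:

```python
def fill_placeholders(paragraphs, values: dict) -> None:
    """Replace {{key}} markers in-place across a document's paragraphs."""
    for para in paragraphs:
        text = para.text
        for key, value in values.items():
            marker = "{{" + key + "}}"
            if marker in text:
                text = text.replace(marker, str(value))
        if text != para.text:
            # Simplification: a real version preserves per-run formatting.
            para.text = text
```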
Tables are handled separately: the script iterates over client data rows and writes them into a pre-formatted table in the template, row by row, using the same column structure and cell styles already defined in the Word file.
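The row-writing loop, sketched against a python-docx-style table object (`table.add_row()` returning a row whose `.cells` have settable `.text`):

```python
def write_rows(table, records):
    """Append data rows to a pre-formatted template table."""
    for record in records:
        cells = table.add_row().cells
        for cell, value in zip(cells, record):
            # New rows inherit the table style defined in the Word file,
            # so only the cell text needs setting here.
            cell.text = str(value)
```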
Adding AI commentary
The summary paragraph at the end of the report was the one part the team did not want to fully automate. They wanted to review it before it went to a client. The solution: generate a draft and make it easy to edit before the final PDF export.
When an Anthropic API key is configured, the script sends the processed figures to Claude with a system prompt that describes the consultancy's reporting style and instructs it to write a 3 to 4 sentence summary: what changed this month, what is notable, what warrants attention. The draft appears in the Word document in a clearly marked section so the reviewer knows where to look.
When no API key is present, the section is left with a placeholder: "Summary pending review." The rest of the report is complete and correct regardless.
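The optional AI step can be sketched as follows. The model name and prompt wording are placeholders; the real system prompt encodes the consultancy's reporting style:

```python
import os

PLACEHOLDER = "Summary pending review."

def draft_summary(figures: dict) -> str:
    """Return an AI-drafted summary, or a placeholder if no key is set."""
    api_key = os.environ.get("ANTHROPIC_API_KEY")
    if not api_key:
        # The report is complete without the AI layer.
        return PLACEHOLDER
    import anthropic  # imported lazily so the tool runs without the SDK
    client = anthropic.Anthropic(api_key=api_key)
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=300,
        system="Write a 3-4 sentence monthly report summary in the firm's style.",
        messages=[{"role": "user", "content": f"This month's figures: {figures}"}],
    )
    return message.content[0].text
```

Guarding on the environment variable at the top keeps the failure mode graceful: a missing key degrades to the placeholder rather than crashing the run.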
This makes the AI layer genuinely optional. The core value of the tool (eliminating the mechanical formatting work) does not depend on it. The AI speeds up the one remaining manual task rather than being a required component.
What the client receives
From the consultancy's side, the finished Word document is reviewed, the summary paragraph is adjusted if needed, and it is saved as a PDF directly from Word. Total time from CSV export to ready-to-send PDF: under 10 minutes. Previous time: 45 to 90 minutes.
From the end client's side, the report looks identical to what they always received. Same branding, same layout, same structure. No visible sign that anything changed in how it was produced. That invisibility is the benchmark: a good automation does not look like an automation to the person receiving the output.
The consultancy is now running this tool across all 12 to 15 monthly reports. The time saving compounds. More importantly, the error rate on the calculations is zero, because the same code runs every time and has been validated against the real data.
Where this pattern applies
The CSV-to-Word pattern is not specific to this consultancy. The same architecture applies wherever you have structured data exports and a fixed-format document that needs to present them:
- Accountants producing monthly management accounts from bookkeeping exports
- Property managers generating tenancy reports from property management software
- Agencies producing campaign performance summaries from ad platform exports
- Any service business that delivers a regular document to clients and formats it by hand each time
The build is faster than most people expect because the template already exists (it is whatever the business currently uses), the calculations are already defined (they are what the person currently does manually), and the data source is already structured (it is a CSV export). The script is the bridge between them.
You can see the full case study on the AGC LinkedIn Reporting Tool page. If you run a service business that produces regular client-facing documents from structured data, the AI Automation Systems service covers exactly this type of build.