KAPE, Explained: History, Installation, Real-World Usage, and DFIR Impact
1. TL;DR
KAPE (Kroll Artifact Parser and Extractor) is a forensic triage tool designed for speed and efficiency. Its core function is a two-step process: first, Targets collect forensically relevant files (e.g., event logs, registry hives, browser history) from a source system. Second, Modules process these collected files, running parsers (like Eric Zimmerman’s EZ Tools) to convert raw data into human-readable formats like CSV or JSON. KAPE is invaluable for incident responders who need to quickly gather and analyze evidence from live Windows systems or mounted forensic images. It standardizes collection, accelerates analysis, and helps prioritize which systems need deeper investigation, making it a staple for any 3 a.m. investigation.
2. Background, History & Origins
KAPE was created by veteran DFIR practitioner and tool developer Eric Zimmerman. Frustrated by the time-consuming and often inconsistent nature of manual artifact collection, Zimmerman developed KAPE with a clear design philosophy: automate the collection and initial processing of forensic artifacts to get to analysis faster. The goal was to create a tool that was fast, portable, flexible, and extensible.
Timeline:
- c. 2018: Eric Zimmerman begins development, building on his experience creating his suite of individual command-line parsers (EZ Tools).
- February 2019: KAPE is publicly released. It quickly gains traction in the DFIR community for its simple yet powerful Target/Module concept.
- March 2019: Zimmerman introduces the GKAPE graphical user interface with KAPE version 0.8.2.0, making the tool more accessible to practitioners who prefer GUIs over command-line interfaces.
- 2020: Kroll (formerly Duff & Phelps) acquires Zimmerman’s company. KAPE becomes a Kroll-supported tool, ensuring its continued development and maintenance. The licensing model is formalized, distinguishing between free personal/educational/government use and paid commercial use.
- 2021: KAPE Targets and Modules are standardized to a documentation template format, and the !!ToolSync module is introduced for automated synchronization.
- 2022-2025: Ongoing quarterly updates continue to enhance KAPE’s capabilities, including PowerShell-based modules, MFTECmd resident file dumping, and community-driven improvements via the KAPE-EZToolsAncillaryUpdater PowerShell script.
The core definitions for what KAPE collects and processes—the Targets and Modules—are maintained as open-source text files in a community-driven GitHub repository, KapeFiles. This allows the DFIR community to contribute new artifact definitions and parsing logic, keeping the tool current with the latest forensic discoveries.
KAPE is heavily featured in DFIR training from institutions like the SANS Institute, solidifying its place as an industry-standard tool.
3. What KAPE Is: Concepts & Architecture
KAPE operates on a simple but powerful dual-pass model. Understanding this is key to using it effectively.
[Source System] ---> kape.exe (using Targets) ---> [Collected Raw Artifacts] ---> kape.exe (using Modules) ---> [Processed, Parsable Output]
(Live C:\ or (1. Collection) (e.g., .evtx, NTUSER.dat) (2. Processing) (e.g., CSV, JSON, TXT)
Mounted E:)
Core Definitions
- Targets: A Target is a file (.tkape) that defines what to collect. It’s a simple YAML text file containing paths to files and directories on the source system. Examples include registry hives, event logs, browser data, LNK files, and prefetch files. KAPE comes with dozens of pre-made Targets, which can be combined. Since 2021, all Targets follow a standardized template that includes documentation for each artifact.
  - Example Targets: !BasicCollection, KapeTriage, BrowserHistory.
- Modules: A Module is a file (.mkape) that defines how to process collected data. It specifies which command-line tool (e.g., RECmd.exe for registry parsing, EvtxECmd.exe for event logs) to run against which type of artifact. Modules point to the executables that do the actual parsing. Modern modules can also leverage PowerShell scripts for enhanced functionality.
  - Example Modules: !EZParser (runs the entire suite of EZ Tools), RegistryExplorer-Timeline, !!ToolSync (automated synchronization).
GKAPE vs. kape.exe
- kape.exe: The command-line engine. It’s fast, scriptable, and ideal for automation, remote execution, and experienced practitioners. All functionality is exposed through CLI flags.
- gkape.exe: The Graphical User Interface (GUI) wrapper around kape.exe. It’s excellent for beginners, for building complex command lines without memorizing flags, and for interactive, one-off collections on a local machine. Under the hood, it constructs and executes a kape.exe command. GKAPE also includes an editor for Target and Module configurations with automatic validation.
Directory Structure & Output
When you run KAPE, it uses a standardized directory structure. Assuming your output directory is C:\DFIR\Case01:
- Target Source (--tsource): The drive you are collecting from (e.g., C: for a live system, E: for a mounted image).
- Target Destination (--tdest): Where KAPE copies the raw artifacts. KAPE mirrors the source directory structure here. For example, a file from C:\Windows\System32\config\SAM would be copied to C:\DFIR\Case01\C\Windows\System32\config\SAM (see the sketch after this list).
- Module Destination (--mdest): Where the output from the Modules (the parsed data) is stored. This is where you’ll find your CSVs, JSONs, and text files ready for analysis. The folder structure within --mdest is organized by the type of artifact processed (e.g., Registry, EventLogs, FileSystem).
- Logs: KAPE generates detailed execution logs in the same directory where kape.exe is run, typically in a _LOGS subfolder. These are crucial for troubleshooting and documenting your collection process.
- Hashing/Verification: KAPE can automatically hash collected files using MD5, SHA1, SHA256, or all three (--hash). This is vital for maintaining the chain of custody. By default, it creates a CSV log of files processed and their hashes.
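To make this concrete, here is a rough sketch of how the output could be laid out after a collection and a module run. The --mdest path and the exact subfolder names are illustrative and will vary with the Targets and Modules chosen.

C:\DFIR\Case01\              <-- --tdest: raw artifacts, mirroring the source tree
  C\
    Windows\
      System32\
        config\
          SAM
C:\DFIR\Case01_Parsed\       <-- --mdest (illustrative separate folder): parsed output
  EventLogs\
  FileSystem\
  Registry\
C:\Tools\KAPE\_LOGS\         <-- execution logs, created where kape.exe was run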
Syncing KapeFiles and Tools
The definitions for Targets and Modules live in the community-maintained KapeFiles GitHub repository. To ensure you’re collecting the latest and greatest artifacts, you must keep your local copy in sync. KAPE has built-in commands for this, and the !!ToolSync module automates synchronization of KAPE Targets/Modules, EvtxECmd Maps, SQLECmd Maps, and RECmd Batch Files.
4. Installation & Updates
Prerequisites
- OS: Windows 7 or newer (Windows 10/11 recommended).
- .NET Framework: Version 4.6.2 or higher is typically required for KAPE and the underlying EZ Tools. Modern Windows versions usually have this.
- Permissions: You must run KAPE with Administrator privileges to access system files like the SAM hive or live registry hives.
Installation
- Download the latest version of KAPE from the official Kroll page.
- Extract the .zip file to a simple, trusted location. Avoid C:\Program Files or user-profile directories. A good choice is C:\Tools\KAPE or directly on your trusted USB analysis drive (e.g., D:\KAPE).
- The extracted folder will contain kape.exe, gkape.exe, and subdirectories for Modules and Targets.
RISK: Anti-Virus/EDR Interference. AV/EDR solutions may flag KAPE or its associated parsers as malicious because they access sensitive system files (e.g., registry hives, NTDS.dit). It is critical to have a process for creating temporary exclusions or running KAPE in a detection-only mode if authorized. Always coordinate with the security operations team before running collection tools on a production endpoint.
Updating KAPE and KapeFiles
There are multiple components to update: the KAPE binaries, the KapeFiles definitions, and the EZ Tools binaries.
- Updating KAPE Binaries: Download the latest zip from the Kroll site and replace your existing kape.exe and gkape.exe files.
- Updating Targets & Modules (Essential): Open an Administrator PowerShell or Command Prompt, navigate to your KAPE directory, and run the sync command:
# Navigate to your KAPE directory
cd C:\Tools\KAPE
# Sync with the KapeFiles GitHub repository
.\kape.exe --sync
This command connects to the GitHub repo, downloads the latest Target and Module files, and updates your local .\Targets and .\Modules directories. Run this frequently to stay current.
- Automated Tool Synchronization: Use the !!ToolSync module to automatically update KAPE Targets/Modules, EvtxECmd Maps, SQLECmd Maps, and RECmd Batch Files while connected to the internet.
- EZ Tools Binary Updates: For comprehensive updates, including the EZ Tools binaries in the .\KAPE\Modules\bin directory, use the community-maintained KAPE-EZToolsAncillaryUpdater PowerShell script available on GitHub.
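The !!ToolSync module itself is invoked like any other module; a minimal sketch, mirroring the cheat-sheet entry later in this article (the destination paths are placeholders you would adjust):

# Run the automated synchronization module (requires internet access)
.\kape.exe --tdest D:\Case\C_collect --module !!ToolSync --mdest D:\Case\sync_output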
5. Quick Start: Hands-On Triage
Here are copy-pasteable commands for common scenarios. Assume your case output is going to D:\Cases\CASE-001.
Example 1: Rapid Live Triage (Collect + Parse)
This is the most common use case. We’ll collect a standard set of triage artifacts from the live C: drive, including from Volume Shadow Copies, then immediately parse them.
# Step 1: Collect triage artifacts from C: including VSCs. Output to D:\Cases\CASE-001\Triage_Collection
C:\Tools\KAPE\kape.exe --tsource C: --target KapeTriage --tdest D:\Cases\CASE-001\Triage_Collection --vss
# Step 2: Parse the collected data. Output to D:\Cases\CASE-001\Triage_Parsed. Zip the parsed results.
C:\Tools\KAPE\kape.exe --tdest D:\Cases\CASE-001\Triage_Collection --module !EZParser --mdest D:\Cases\CASE-001\Triage_Parsed --zip Triage_Results
- --tsource C:: Target Source is the local C: drive.
- --target KapeTriage: Use the KapeTriage.tkape Target, which is a compound target that includes many high-value artifacts.
- --tdest ...: Target Destination is where the raw artifacts will be copied.
- --vss: Process Volume Shadow Copies. This is crucial for finding deleted files or previous versions of files.
- --module !EZParser: Use the compound !EZParser.mkape Module, which runs the entire suite of EZ Tools against the collected data.
- --mdest ...: Module Destination is where the parsed CSVs/JSONs will be written.
- --zip Triage_Results: Create a zip file named Triage_Results.zip containing the --mdest directory.
Example 2: Triage on a Mounted Forensic Image
Imagine you mounted a forensic image (E01, VHDX, etc.) as the E: drive.
# Collect from the mounted image on E:
C:\Tools\KAPE\kape.exe --tsource E: --target KapeTriage --tdest D:\Cases\CASE-001\E_Image_Collection
# Parse the collected data
C:\Tools\KAPE\kape.exe --tdest D:\Cases\CASE-001\E_Image_Collection --module !EZParser --mdest D:\Cases\CASE-001\E_Image_Parsed
The workflow is identical, just the --tsource changes.
Example 3: Using GKAPE (GUI)
- Launch gkape.exe as Administrator.
- Target Options:
  - Set Target source to C:.
  - Check Process VSCs.
  - Set Target destination to your case folder (e.g., D:\Cases\CASE-001\Triage_Collection_GUI).
  - In the Targets list on the right, search for and check KapeTriage.
- Module Options:
  - Check Use Module options.
  - Set Module source to the same Target destination you just set.
  - Set Module destination to your parsed output folder (e.g., D:\Cases\CASE-001\Triage_Parsed_GUI).
  - In the Modules list on the right, search for and check !EZParser.
- At the bottom right, click Execute!.
- GKAPE will show you the exact kape.exe command it’s building and running, along with a live log; a sketch of that command follows. This is a great way to learn the CLI syntax.
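With the options above, the assembled command would look roughly like this (a sketch; the exact flags and ordering GKAPE emits may differ, and --msource here stands in for the Module source field):

kape.exe --tsource C: --target KapeTriage --tdest D:\Cases\CASE-001\Triage_Collection_GUI --vss --msource D:\Cases\CASE-001\Triage_Collection_GUI --module !EZParser --mdest D:\Cases\CASE-001\Triage_Parsed_GUI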
6. Real-World Use Cases & Playbooks
Ransomware Triage
- Goal: Quickly identify initial access, lateral movement, and execution artifacts before the system is wiped or encrypted further.
- Targets to Use: KapeTriage, RDPBitmapCache, EventLogs, ScheduledTasks, Amcache, SRUM.
- Why: This combination grabs evidence of execution (Amcache, SRUM), persistence (ScheduledTasks), remote access (EventLogs for RDP, RDPBitmapCache), and general user activity. Time is critical, and this set provides the highest value in the shortest time; a sketch command follows this list.
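Combining those Targets in a single pass (comma-separated, no spaces, as noted in the cheat sheet) could look like this; paths follow the earlier examples:

# Collect ransomware-triage artifacts from the live C: drive, including VSCs
C:\Tools\KAPE\kape.exe --tsource C: --target KapeTriage,RDPBitmapCache,EventLogs,ScheduledTasks,Amcache,SRUM --tdest D:\Cases\CASE-001\Ransomware_Collection --vss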
Lateral Movement Investigation
- Goal: Determine how an attacker moved from one host to another.
- Targets to Use: EventLogs (especially Security, System, PowerShell, TerminalServices), ScheduledTasks, Services, LNKFilesAndJumpLists, PowerShellConsoleHistory.
- Why: These artifacts directly show logons (local and remote), service creation (e.g., PsExec), scheduled task abuse (e.g., at/schtasks), and commands executed by the threat actor.
Browser Forensics & Data Exfiltration
- Goal: Investigate suspect browsing activity or find evidence of data being uploaded to cloud services.
- Targets to Use: BrowserHistory (a compound Target for Chrome, Firefox, Edge, etc.), WebCacheV01.dat (for Edge/IE), FileSystem (for $LogFile and $MFT to find file uploads/deletions).
- Modules to Use: The !EZParser module will run tools like SQLECmd (for SQLite-based browser databases) and MFTECmd (for the $MFT).
- Workflow: The resulting CSVs can be loaded into Timeline Explorer to filter for activity around the time of the suspected exfiltration, looking for visits to file sharing sites, webmail, etc.
Enterprise-Scale Collection
CAUTION: Running tools remotely at scale requires proper authorization, testing, and operational security. Use EDR remote shell or approved remote admin tools only.
- Staging: Place KAPE on a network share that endpoints can access (e.g., \\fileserver\DFIR_Tools\KAPE).
- Execution: Use an EDR’s live response feature, PowerShell Remoting, or PsExec to execute kape.exe on the remote endpoint.
- Command: The key is to direct output back to a central collection share.
# Example using PsExec (ensure you have rights and authorization)
psexec.exe \\REMOTE-HOST -s -c -f C:\Tools\KAPE\kape.exe --tsource C: --target KapeTriage --tdest \\fileserver\DFIR_Cases\CASE-001\REMOTE-HOST\Collection --mdest \\fileserver\DFIR_Cases\CASE-001\REMOTE-HOST\Parsed --zip
- -s: Run as SYSTEM.
- -c -f: Copy kape.exe to the remote host temporarily and force overwrite.
A PowerShell Remoting alternative is sketched below.
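Where PowerShell Remoting is the approved channel instead of PsExec, a minimal sketch follows; the host name, shares, and credential handling (including the double-hop problem when writing to a remote share) are assumptions to adapt to your environment.

# Run KAPE on a remote endpoint via PowerShell Remoting, sending output to a central share
Invoke-Command -ComputerName REMOTE-HOST -ScriptBlock {
    & '\\fileserver\DFIR_Tools\KAPE\kape.exe' --tsource C: --target KapeTriage `
        --tdest '\\fileserver\DFIR_Cases\CASE-001\REMOTE-HOST\Collection' `
        --module '!EZParser' --mdest '\\fileserver\DFIR_Cases\CASE-001\REMOTE-HOST\Parsed'
}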
Piping KAPE Output to Other Tools
The structured CSV output from KAPE is designed for analysis in other tools:
- Timeline Explorer: Eric Zimmerman’s companion tool, Timeline Explorer, is purpose-built to load, filter, and analyze the CSV/JSON output from KAPE modules.
- SIEM/Data Lakes: The CSV and JSON outputs can be easily ingested by Splunk, Elasticsearch, or other log analysis platforms. You can build dashboards and run large-scale analytics across collections from hundreds of hosts.
- Timesketch: An open-source collaborative forensic timeline analysis tool. KAPE’s output can be processed into a timeline format and uploaded for collaborative analysis.
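For quick ad-hoc filtering outside those platforms, the parsed CSVs can also be sliced with PowerShell. A minimal sketch; the CSV file name is illustrative and column layouts vary by module and tool version:

# Load a module-output CSV and keep only rows mentioning a keyword of interest
$rows = Import-Csv 'D:\Cases\CASE-001\Triage_Parsed\EventLogs\EvtxECmd_Output.csv'   # hypothetical file name
$rows | Where-Object { $_.PSObject.Properties.Value -match 'psexec' } |
    Export-Csv 'D:\Cases\CASE-001\Triage_Parsed\psexec_hits.csv' -NoTypeInformation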
7. Advanced Topics
Creating a Custom Target
Let’s say you need to collect all log files from a custom application located in C:\ProgramData\SuperApp\Logs.
- Create a new text file in your KAPE\Targets\Custom directory named SuperAppLogs.tkape.
- Add the following content, following the standardized template format (generate a new GUID for the Id):

Description: Collects all log files from the custom SuperApp
Author: Your Name
Version: 1.0
Id: 219d3f11-9a7c-4a3d-a2f4-3e9a11ac898d
RecreateDirectories: true
Targets:
    -
        Name: SuperApp Logs
        Category: ApplicationLogs
        Path: C:\ProgramData\SuperApp\Logs\
        Recursive: true
        FileMask: '*.log'

# Documentation
# https://superapp.com/docs/logging
# Custom application logs for forensic analysis

- Now you can use --target SuperAppLogs in your command line. Recursive: true tells KAPE to walk all subdirectories beneath the path.
Creating a Custom Module
Imagine you have a special Python script, superlog_parser.py, that parses the logs you just collected.
- Create a file named SuperAppParser.mkape in KAPE\Modules\Custom.
- Add the following content, following the standardized template format (generate a new GUID for the Id):

Description: Runs superlog_parser.py against collected SuperApp logs
Category: SuperApp
Author: Your Name
Version: 1.0
Id: 5f1b5d1a-f3c2-4e8b-8a9a-9e8c7b6a5d4e
# Assumes python.exe is in the PATH. The executable and script can also live in a
# subfolder of Modules, e.g. KAPE\Modules\bin\SuperApp.
FileMask: '*.log'
Processors:
    -
        Executable: python.exe
        CommandLine: superlog_parser.py --input %sourceDirectory% --output %destinationDirectory%
        ExportFormat: txt

# Documentation
# Custom parser for the SuperApp log format; requires Python 3.x in PATH

%sourceDirectory% and %destinationDirectory% are special variables KAPE replaces with the appropriate paths at runtime.
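With both custom files in place, the familiar two-step pattern from the Quick Start applies; a sketch with placeholder case paths:

# Step 1: Collect SuperApp logs with the custom Target
C:\Tools\KAPE\kape.exe --tsource C: --target SuperAppLogs --tdest D:\Cases\CASE-001\SuperApp_Collection
# Step 2: Parse them with the custom Module
C:\Tools\KAPE\kape.exe --tdest D:\Cases\CASE-001\SuperApp_Collection --module SuperAppParser --mdest D:\Cases\CASE-001\SuperApp_Parsed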
Advanced MFTECmd Features
Since 2022, MFTECmd includes the ability to dump resident files from the $MFT. KAPE includes dedicated modules for this functionality:
- MFTECmd_$MFT_Residents: Dumps resident files to a separate folder (typically 30-80MB of recovered files).
- MFTECmd_$J_$MFT: Processes both $J (USN Journal) and $MFT files simultaneously for comprehensive filesystem analysis.
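Running the resident-file module against an existing collection looks roughly like this (a sketch; if you run it from PowerShell, quote the module name so the $ is not expanded as a variable):

# Dump resident files from a previously collected $MFT
C:\Tools\KAPE\kape.exe --tdest D:\Cases\CASE-001\Triage_Collection --module MFTECmd_$MFT_Residents --mdest D:\Cases\CASE-001\MFT_Residents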
Performance Tuning
- I/O is the Bottleneck: KAPE is I/O-bound. Running it from and writing to a fast SSD or NVMe drive will be significantly faster than a spinning disk or a slow network share.
- Use Temporary Directories: The --temp flag allows you to specify a temporary directory. If your destination is a slow network drive, setting --temp to a local SSD can speed up intermediate processing.
- Concurrency: KAPE is multi-threaded by default and is very efficient. You generally don’t need to tune these settings.
Error Handling
- Check the Logs: The console output and the log files in the _LOGS directory are your best friends. They will contain details about files that couldn’t be copied (due to locks or permissions) or modules that failed to run.
- File Locks: If collecting from a live system, some files may be locked by the OS (e.g., active registry hives). KAPE automatically tries to access them via raw disk access or by targeting the VSS, which usually succeeds. If not, the logs will show a “locked” status.
- Permissions: “Access Denied” errors mean you didn’t run KAPE with sufficient privileges. Ensure you’re using an Administrator shell.
8. Licensing & Ethical Use
KAPE’s licensing model is straightforward but critical to understand. It is detailed on Kroll’s website and in the EULA presented upon first run.
KAPE Solo (Free): KAPE is free for:
- Personal, non-commercial use (individual research, personal projects, educational use)
- Federal, state, local, or international government agencies (including law enforcement) for any purpose, including training
- Students and educational institutions for learning, training, research, or development functions
- Internal company use (organizations using KAPE on their own systems)
Commercial/Enterprise Use (Paid): Any use of KAPE for third-party commercial purposes requires a commercial license from Kroll. This includes:
- DFIR consultants using KAPE on client engagements
- Use on third-party networks as part of paid engagements
- Any commercial services involving KAPE
The official license agreement provides the authoritative terms. As a practitioner, it is your responsibility to ensure you and your organization are properly licensed for your use case. Using the free version for paid consulting work on third-party systems is a violation of the license agreement.
9. Ecosystem & Community Impact
KAPE fundamentally changed DFIR triage. Before KAPE, artifact collection was often a manual process involving custom scripts or remembering dozens of file paths, leading to inconsistency. KAPE introduced a standardized, community-driven approach.
- Speed to Analysis: By automating the “collect and parse” workflow, KAPE reduces the time from initial host access to actionable intelligence from hours to minutes.
- Standardization: The community-maintained KapeFiles repository has become a de-facto standard for which artifacts matter in a Windows investigation. New forensic artifacts discovered by researchers are quickly added as Targets.
- Interplay with EZ Tools: KAPE is the perfect orchestrator for Eric Zimmerman’s suite of parsers (RECmd, MFTECmd, EvtxECmd, etc.). It automates feeding the right data to the right tool.
- Democratization: KAPE’s ease of use (especially GKAPE) lowered the barrier to entry for performing competent forensic triage, empowering junior analysts and blue teamers.
- Community Resources: Resources like AboutDFIR, DFRWS workshops, and countless blog posts have popularized KAPE best practices and real-world case studies.
The tool has been recognized multiple times, with Eric Zimmerman winning Forensic 4:cast DFIR Investigator of the Year awards and the tool being featured prominently in SANS training curricula.
10. Comparison & Positioning
KAPE is a triage and collection orchestrator, not an all-in-one DFIR platform.
- vs. Velociraptor: Velociraptor is a full-featured endpoint monitoring and response platform. It can collect artifacts like KAPE, but it excels at continuous monitoring, enterprise-wide hunting, and live system interrogation using the VQL query language. KAPE is a point-in-time collection tool, often used after a tool like Velociraptor gets you on the box. They are complementary.
- vs. Plaso/log2timeline: Plaso’s goal is to create a “super timeline” of all activity on a system. KAPE can be used to collect the files that Plaso would then process. KAPE’s modules provide targeted, faster parsing into discrete CSVs, whereas Plaso provides a holistic but more time-intensive timeline database.
- Strengths: Speed, simplicity, community-driven artifact definitions, tight integration with EZ Tools, standardized template format, automated synchronization capabilities.
- Limitations: It is primarily a Windows-focused tool. It is not an agent and has no real-time monitoring capabilities. Collection is point-in-time.
Live-Response Risks: Always remember that running any tool on a live system, including KAPE, alters it. KAPE minimizes its footprint but still writes logs and uses system resources. Always follow your organization’s procedures for evidence handling and live response.
11. Cheat Sheet
| Command / Recipe | Notes |
|---|---|
| ./kape.exe --sync | Update KapeFiles. Run this first and frequently. |
| ./kape.exe --tsource C: --target KapeTriage --tdest D:\Case\C_collect --vss | Collect triage artifacts from live C: drive, including Volume Shadow Copies. |
| ./kape.exe --tsource E: --target !SANS_Triage --tdest D:\Case\E_collect | Collect from mounted image E: using the SANS Triage profile. |
| ./kape.exe --tdest D:\Case\C_collect --module !EZParser --mdest D:\Case\C_parsed | Parse collected data with all EZ Tools, outputting CSVs/JSONs to the parsed folder. |
| ./kape.exe --tdest D:\Case\C_collect --module !!ToolSync --mdest D:\Case\sync_output | Run the automated tool synchronization module to update all definitions. |
| ./kape.exe ... --zip MyCase | Add --zip <name> to a module run to zip the module destination folder. |
| ./kape.exe ... --hash sha256 | During collection (--tsource), hash all collected files with SHA256 (md5 and sha1 are also supported). |
| ./kape.exe --tlist | List all available Targets. |
| ./kape.exe --mlist | List all available Modules. |
| ./kape.exe --target KapeTriage,BrowserHistory,CustomLogs | Combine multiple Targets by separating them with a comma (no spaces). |
| ./kape.exe --tsource \\REMOTE-PC\C$ ... | Collect from a remote admin share (requires permissions and caution). |
| ./kape.exe --gui | Force launch of GKAPE from the command line. |
Quick Picker Tables:
| Common Targets | Description |
|---|---|
| KapeTriage | A large collection of high-value artifacts. |
| !BasicCollection | Core system files (Registry, EVTX, MFT). |
| BrowserHistory | All major browser artifacts. |
| EventLogs | Windows Event Logs from System32\winevt\Logs. |
| LNKFilesAndJumpLists | Evidence of file opening and program execution. |
| Amcache / Shimcache | Application execution and compatibility artifacts. |
| SRUM | System Resource Usage Monitor database. |
| SQLDatabases | Aggregate all SQLite databases for SQLECmd parsing. |
| Common Modules | Description |
|---|---|
| !EZParser | Runs all EZ Tools parsers. Your default choice. |
| !!ToolSync | Automated synchronization of all KAPE definitions. |
| EvtxECmd | Specifically parses EVTX files. |
| RECmd | Parses Registry hives. |
| MFTECmd | Parses the $MFT file. |
| MFTECmd_$MFT_Residents | Dumps resident files from the $MFT (typically 30-80 MB). |
| TimelineExplorer | Generates a file for direct loading into that tool. |
12. FAQ
Do I need admin rights?
- Yes. To access critical system artifacts and Volume Shadow Copies, you must run kape.exe or gkape.exe from a shell with Administrator privileges.
Can I run KAPE from a USB drive?
- Yes, this is a very common and recommended practice. Place your entire KAPE folder on a trusted, write-protected (if possible) USB drive for your IR kit.
What if the source drive is encrypted with BitLocker?
- KAPE needs access to the file system. You must unlock the BitLocker volume first. For a live system, this is usually already done. For a forensic image, you will need the recovery key or password to mount and unlock the volume before pointing KAPE to it.
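A minimal sketch of unlocking an attached BitLocker volume with the built-in manage-bde utility before collection (drive letter and recovery key are placeholders):

# Unlock the BitLocker volume, then point KAPE's --tsource at it
manage-bde -unlock E: -RecoveryPassword <48-digit recovery key>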
How do I update KapeFiles and tools?
- Open an Administrator command prompt, cd to your KAPE directory, and run .\kape.exe --sync for basic updates. Use the !!ToolSync module for comprehensive synchronization, or use the community KAPE-EZToolsAncillaryUpdater PowerShell script for complete binary and ancillary file updates.
What if my Antivirus/EDR quarantines KAPE or a parser?
- This is a common issue. You must work with your security team to create an authorized exception for the toolset. Never blindly trust a tool; always download KAPE from the official Kroll source. Document the business justification and ensure proper approval processes are followed.
Can KAPE collect from Linux or macOS?
- No. KAPE is designed specifically for Windows file systems and artifacts. Its Targets and Modules are Windows-centric. For other OSes, use tools like the Velociraptor agent, osquery, or platform-specific collection scripts.
How do I detect timestomping with KAPE?
- Use the $MFT Target to collect the Master File Table, then run the MFTECmd_$MFT Module. In Timeline Explorer, look for the SI<FN field (when checked, indicates potential timestomping) and the uSec Zeros field (zeroed microseconds can indicate manipulation).
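That workflow, end to end, looks roughly like this (a sketch with placeholder case paths; quote the $ names if running from PowerShell):

# Collect the $MFT, then parse it for timestomping review in Timeline Explorer
C:\Tools\KAPE\kape.exe --tsource C: --target $MFT --tdest D:\Cases\CASE-001\MFT_Collection
C:\Tools\KAPE\kape.exe --tdest D:\Cases\CASE-001\MFT_Collection --module MFTECmd_$MFT --mdest D:\Cases\CASE-001\MFT_Parsed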
What’s the difference between compound targets and regular targets?
- Compound targets (like KapeTriage or !BasicCollection) are collections of multiple individual targets. They’re designed to gather comprehensive artifact sets for common use cases. The ! prefix typically indicates a compound target that includes many sub-targets.
13. References
[1] EricZimmerman, KapeFiles GitHub Repository. GitHub. Accessed August 23, 2025, from https://github.com/EricZimmerman/KapeFiles
[2] Kroll, KAPE: Kroll Artifact Parser and Extractor. Kroll. Accessed August 23, 2025, from https://www.kroll.com/en/services/cyber-risk/incident-response-services/kape
[3] Kroll, KAPE End User License Agreement. Kroll. Accessed August 23, 2025, from https://www.kroll.com/en/end-user-license-agreement
[4] SANS Institute, KAPE Tool Page. SANS Institute. Accessed August 23, 2025, from https://www.sans.org/tools/kape
[5] Zimmerman, E., Exploring KAPE’s Graphical User Interface. Kroll. March 5, 2019. Accessed August 23, 2025, from https://www.kroll.com/en/insights/publications/cyber/exploring-kapes-graphical-user-interface
[6] Rathbun, A., KAPE Quarterly Update - Q1 2021. Kroll. April 22, 2021. Accessed August 23, 2025, from https://www.kroll.com/en/insights/publications/cyber/kape-quarterly-update-q1-2021
[7] AboutDFIR, KAPE - The Definitive Compendium Project. AboutDFIR. August 28, 2022. Accessed August 23, 2025, from https://aboutdfir.com/toolsandartifacts/windows/kape/
[8] Rathbun, A., Detecting and Analyzing Timestomping with KAPE. Kroll. June 13, 2022. Accessed August 23, 2025, from https://www.kroll.com/en/publications/cyber/anti-forensic-tactics/detecting-analyzing-timestomping-with-kape
[9] Zimmerman, E., About EZ Tools. Zimmerman’s Forensics Tools. Accessed August 23, 2025, from https://ericzimmerman.github.io/
[10] GitHub, KAPE-EZToolsAncillaryUpdater PowerShell Script. Community-maintained tool for automated KAPE and EZ Tools updates.
[11] DFRWS, KAPE: What’s all the buzz about? Digital Forensics Research Workshop. November 18, 2019. Accessed August 23, 2025, from https://dfrws.org/presentation/kape-whats-all-the-buzz-about/
[12] Zimmerman, E., Eric Zimmerman Profile. Kroll. Accessed August 23, 2025, from https://www.kroll.com/en/our-team/eric-zimmerman