Sunday, June 15, 2014

Digital Forensics Tools Bookmarks

We want to share a list of bookmarks for hardware and software tools used in Digital Forensics acquisition and analysis. The bookmark file is in Mozilla Firefox format, so it can be imported directly.

You can download the file from

If you are interested in adding a tool to our list, please contact me at mattia @

Friday, March 28, 2014

mimikatz offline addendum

I must admit I did not expect so many acknowledgments for writing the volatility mimikatz plugin. I want to say thanks to all the people who tweeted, emailed me, and so on: it is just a piece of the puzzle, and the big pieces are those from volatility and from mimikatz.

First, I want to say thanks to Andrew Case, for the support and for having tweeted about the plugin: probably all those acks are because Andrew is an uber-well-known DFIR expert! Then I want to say thanks to Kristinn Gudjonsson, my favorite plaso “harsh” reviewer, who spotted some “devil” (you wrote it! ;) issues in my code, such as the multiple inheritance I used… lol, I will fix it! Last but not least I want to once again say thanks to Benjamin aka gentilkiwi, who wrote me an e-mail congratulating me on the plugin.

With this post, I want to point out some features of mimikatz that I had not considered in the first instance.

mimikatz can work offline

In the previous post I wrote “Mimikatz is "normally" used on live Windows, where it injects itself inside the lsass and then it does a lot of stuffs”. That is not entirely true: since July 2012, mimikatz has used memory reading, and this is a key point. Moreover, mimikatz deals with minidumps, and mimilib with full dumps/minidumps. Let's start with the first reference.

mimikatz minidump

Probably this is the best approach during a pentest: do not send mimikatz to the target; use (for example) Sysinternals procdump. Then create a crash dump of the lsass process (pay attention to specify the right parameters) and retrieve it to your machine.
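As a sketch (assuming procdump is already on the target; check the flags against your procdump version, -ma asks for a full memory dump):

```
:: full memory dump of lsass, written to lsass.dmp
procdump.exe -accepteula -ma lsass.exe lsass.dmp
```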

Once you have the crash dump, you can load it in mimikatz using just two commands (!!):

sekurlsa::minidump <name of the lsass crash dump file>
sekurlsa::logonPasswords

You’ll get all the info! Awesome!

Just a quick note: use mimikatz on a platform of the same major version and same architecture as the original dump. The following image comes from his blog.

But mimikatz has another great ODI capability, as pointed out in the following post (2nd reference):

mimikatz with RAM and hiberfil

In my previous post I asked “How to do the same during post-mortem ultra-died forensics?”. Well, you can use mimikatz if you have a Windows OS! How? Benjamin explained it, and I followed his instructions to get the job done.

First, you have to convert your memory dump or hiberfil to a Windows crash dump: you can do this with the immense volatility or with Matthieu Suiche’s memory tools (bin2dmp and hibr2dmp).
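For example (a sketch: plugin names and flags as I recall them, double-check them against your versions):

```
# with volatility: the profile must match the target OS
python vol.py -f memory.raw --profile=Win7SP1x86 raw2dmp -O memory.dmp

# or, for a hibernation file, with Matthieu Suiche's tool
hibr2dmp hiberfil.sys memory.dmp
```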

Then launch windbg (better if with the right architecture, x86 or x64, depending on your target) and load the target crash dump (note: I changed the target to a Windows 7 SP1 x86).

At this point you have to load – guess what? – mimikatz, and specifically mimilib.dll. It will even provide the instructions for the next steps!

Follow the instructions (red square in the next figure, pay attention to symbols) et… voilà! Logged users’ credentials.
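From memory, the windbg side looks roughly like this (a sketch; the EPROCESS address placeholder must be filled in from the !process output, and symbols must resolve correctly):

```
.load mimilib.dll
$$ find the lsass process
!process 0 0 lsass.exe
$$ switch into its context, then run mimikatz
.process /r /p <lsass EPROCESS address>
!mimikatz
```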

You can even work with VMware vmem files! Let’s say that’s awesome! Finally, some considerations.

mimikatz or volatility? mimikatz AND volatility!

Finally, you can achieve the same result directly with mimikatz and without volatility. Which is the best approach? It depends: currently mimikatz+minidump are Windows only, so if you are working from another OS, volatility plus the mimikatz plugin is the way, unless you use virtualization. Besides that, consider that the engine (I mean signatures and data structures) is the same: I have an idea to add, and I will share it with Benjamin, so they should stay aligned. If on Windows, it’s up to the user.

some instructions

Some people wrote to me asking how to use the mimikatz volatility plugin. Remember, it’s a PoC; anyway, this is how I’m using it:

·         python 2.7
·         volatility >= 2.3 (python source, not binaries); I use trunk code (svn checkout volatility)
·         volatility dependencies
·         mimikatz plugin: copy the “” in <volatility directory>/volatility/plugins
·         mimikatz plugin python dependencies
·         a memory dump? =)
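With everything in place, the plugin is invoked like any other volatility plugin (a sketch: file name and profile are examples, use the ones matching your dump):

```
python vol.py -f memory.raw --profile=Win7SP1x86 mimikatz
```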

keep updated

Currently the volatility plugin lacks several features with respect to mimikatz: I will post when major updates are ready; meanwhile you can check the source code here:

Have fun!

Wednesday, March 26, 2014

et voilà le mimikatz offline

In one of my recent cases, I needed to recover a Windows user password: I had different OSes with various levels of cryptography, mainly at file level. Usually I think it's a good approach to recover as many hints as possible, to derive a scheme and/or to find a way to access the data.


I like to call it ODI (Offensive Digital Investigations; in Italian "odi" means hear, find out). I remember an old case where I got 500+ strongly encrypted archives... too many without a password catalog. I searched for the weakest protection and found three zip-crypto (not a strong protection) archives: I cracked them in a few days and was then able to derive the schema to access all of them. I was lucky.

This time I felt that the Windows user password was the... key. Usually the dirty work is done with rainbow tables, but no way: I was unable to crack the Windows 7 user password.


I don't remember exactly why I was playing with mimikatz (hem, coff coff) but I had a dream: mimikatz offline... why not? For the few guys who do not know what mimikatz is, this is the site: suffice to say that it's an awesome piece of work by Gentil Kiwi, who did a deep reverse engineering of the lsass process and discovered how to extract plaintext credentials from it. Mimikatz is "normally" used on a live Windows system, where it injects itself inside lsass and then does a lot of stuff, not only getting logged users' credentials.

it's a matter of RAM

How to do the same during post-mortem ultra-died forensics? First, usually you don't have a RAM dump (don't pull the plug! don't pull THAT plug!... too late...) but you could get the hiberfil! The hibernation file is like an easter egg: you can't bet on it; it could be corrupted, it could be too old, and so on. But, if you're lucky, you'll get your RAM dump. Tell me the first word that comes to mind when speaking about RAM? volatility.

volatility + old-old-style approach

I got the RAM. I got volatility. I got mimikatz. I didn't get the password. There was something to do, and the first thing is to say uber-thanks to Gentil Kiwi, who published the mimikatz source code. By digging inside that code I got the anchors he found as entry points for lsass and its authentication packages. So I started by dumping lsass memory, the lsasrv module and the wdigest module: then I used the mimikatz anchors and moved inside lsass, finding what I was looking for (tools used: volatility, HxD, Notepad++, calc. Definitely oooold school, apart from volatility...). So I got the user name, the domain, the encrypted password, the 3DES key and its IV: a bit of python... et voilà. Uh, a fair password! (I forgot: I drank a good beer...).

mimikatz offline

Dumping processes and modules and moving around in the hex view is not always comfortable, and it's quite slow. After two rounds of refactoring, I wrote the mimikatz offline plugin for volatility, which automates the previous steps without dumping anything apart from user credentials! It's a PoC which supports only the wdigest authentication package, on Windows Vista and 7, both x86 and x64. You can find it on hotoloti, as usual.


I'm planning to add more authentication packages and other stuff to the plugin, but for now I had to freeze a bit, since I'm having fun (and losing sleep) with another hot topic I will share as soon as possible. Basically this is the desired roadmap for the plugin: an external review of the high-level design; a consideration of the plugin vs non-plugin approach; what about rekall; adding authentication packages; testing; what else?

Windows password cracking? No thanks, I quit

Throw away those rainbow tables! Throw away dictionaries! You can get the password in a few seconds! Sounds cool, doesn't it? Unfortunately it's not always the case, but this is another possibility to consider when you need credentials. Odi and happy hunting.

Tuesday, December 3, 2013

3minutesOf: a bit of X-Ways and RAID

Some days ago I was working on four images coming from a QNAP storage: four disks whose partitions were used to build up RAID volumes. "No problem" I said to myself, knowing that QNAPs are *nix based and that XWF (X-Ways Forensics) is so powerful that I would not need to switch to Linux.

Which RAID?

That's true, but you need to instruct XWF about which type and parameters the RAID is using. Easy again: let's find the raidtab configuration file. Here it is:

 raiddev /dev/md0
    raid-level               0
    nr-raid-disks            4
    nr-spare-disks           0
    chunk-size               4
    persistent-superblock    1
    device                   /dev/sda3
    raid-disk                0
    device                   /dev/sdb3
    raid-disk                1
    device                   /dev/sdc3
    raid-disk                2
    device                   /dev/sdd3
    raid-disk                3

The third partition of each disk is used inside a level-0 RAID (striping) with a chunk size of 4KB (the chunk size is expressed in kilobytes, as the man says), so 8 sectors (assuming 512... bla bla bla).

chunk-size size
Sets the stripe size to size kilobytes. Has to be a power of 2 and has a compilation-time maximum of 4M. (MAX_CHUNK_SIZE in the kernel driver) typical values are anything from 4k to 128k, the best value should be determined by experimenting on a given array, alot depends on the SCSI and disk configuration.

Moreover, from the first disk (the only one that appeared to contain a file system structure) I got an EXT4 volume. Ok, let XWF rebuild the RAID and inspect the volume.

But... at this point XWF reported many errors about wrong inodes... hum, something weird there... The first doubt is usually about the stripe size, under the assumption that the RAID type is correct. I got that info from the only configuration file available, so where is the issue?


Following the maccO razor (the worst and most complicated idea) I thought of exploring the ext4 fs structure to see where the Group Descriptors jump from the first disk's volume to the second: but, luckily, since it was easier, I opened the raidtab file again and spotted the persistent-superblock configuration value. From the man:

persistent-superblock 0/1
newly created RAID arrays should use a persistent superblock. A persistent superblock is a small disk area allocated at the end of each RAID device, this helps the kernel to safely detect RAID devices even if disks have been moved between SCSI controllers. It can be used for RAID0/LINEAR arrays too, to protect against accidental disk mixups. (the kernel will either correctly reorder disks, or will refuse to start up an array if something has happened to any member disk. Of course for the 'fail-safe' RAID variants (RAID1/RAID5) spares are activated if any disk fails.) Every member disk/partition/device has a superblock, which carries all information necessary to start up the whole array. (for autodetection to work all the 'member' RAID partitions should be marked type 0xfd via fdisk) The superblock is not visible in the final RAID array and cannot be destroyed accidentally through usage of the md device files, all RAID data content is available for filesystem use.

That's interesting: first because, if images (or disks) were wrongly labeled, you can reorder the disks. Moreover, you get a lot of information to verify the RAID type, the RAID UUID and... the chunk-size. Where is this data stored? It's intuitive, but from the linux raid source code (thank God it's OSS!) and from RAID_superblock_formats, I got all the information needed to explore the RAID superblock.

let's template it

Exploring the hexadecimal is great, but it's better if you can create something that displays the data... and XWF has templates! So, based on the raid source code, I made a little template to get the following output:


You know where to apply the template ("The superblock is 4K long and is written into a 64K aligned block that starts at least 64K and less than 128K from the end of the device") and you know the magic value. So, in my case, the stripe size (chunk size) is 128 sectors... and XWF was able to complete its work.
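That location rule can be turned into a quick computation (a sketch mirroring the superblock placement logic in the kernel md driver; the names are mine):

```python
MD_RESERVED_BYTES = 64 * 1024  # 64K

def md090_superblock_offset(device_size_bytes):
    """Byte offset of a v0.90 RAID superblock: the 64K-aligned
    block that starts at least 64K and less than 128K from the
    end of the device."""
    aligned_end = device_size_bytes & ~(MD_RESERVED_BYTES - 1)
    return aligned_end - MD_RESERVED_BYTES

# e.g. a 1 GiB member partition
print(md090_superblock_offset(1024**3))  # 1073676288 (0x3FFF0000)
```

Seek to that offset in the member partition, check the magic value, and apply the template there.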

Here is the template: cut and paste it into a tpl file and use it with XWF. Note that only version 0.90 is "supported".

template "RAID superblock version 0.90"

// Template by Francesco "fpi" Picasso
// Tweet me @dfirfpi

description "To be applied to RAID superblock"
applies_to disk
requires 0x0 FC4E2BA9
requires 0x4 00000000
requires 0x8 5A000000

    hexadecimal uint32 "Signature: 0xA92B4EFC"
    uint32    "major version (want 0!)"
    uint32    "minor version (want 90!)"

    section "Generic Information"
    uint32     "patch level"
    uint32    "section words len"
    hexadecimal uint32    "Raid UUID0 (1)"
    time_t    "Creation time"
    uint32    "RAID level"
    uint32    "Size of individual disk"
    uint32    "Number of disks"
    uint32    "Fully functional disks"
    uint32    "Preferred min MD device"
    uint32    "Persistent superblock"
    hexadecimal uint32    "Raid UUID1 (2)"
    hexadecimal uint32    "Raid UUID2 (3)"
    hexadecimal uint32    "Raid UUID3 (4)"
    move 64

    section "Generic state information"
    time_t     "Superblock update time"
    hexadecimal uint32    "State bitmask"
    uint32    "Active disks"
    uint32    "Working disks"
    uint32    "Failed disks"
    uint32    "Spare disks"
    hexadecimal uint32 "Superblock checksum"
    int64    "Superblock update count"
    int64    "Checkpoint update count"
    uint32    "Recovery sector count"
    int64    "(v>90) reshape position"
    uint32    "(v>90) new level"
    uint32    "(v>90) delta disks"
    uint32    "(v>90) new layout"
    uint32    "(v>90) new chunk size bytes"
    move 56

    section "Personality Information"
    uint32    "Array physical layout"
    uint32    "Chunk size (bytes)"
    uint32    "LV root PV"
    uint32    "LV root block"
    move 240

    section "RAID disks descriptors (first 6 of 27)"
        uint32 "Disk~ number"
        uint32 "Disk~ major"
        uint32 "Disk~ minor"
        uint32 "Disk~ raid disk"
        hexadecimal uint32 "Disk~ state"
        move 108
    move 2688

    section "Disk descriptor"
    uint32    "Number"
    uint32    "Major"
    uint32    "Minor"
    uint32    "Raid disk"
    hexadecimal uint32 "State"
    move 108


that's all!

Thursday, July 5, 2012

wtmp timeline efforts

In DFIR activities, timelines are often decisive in understanding what happened (lots of refs here). Luckily Kristinn Gudjonsson provided the community with the great log2timeline tool (here, from now on l2t) that, along with the invaluable Brian Carrier's SleuthKit, gives a (temporal) order to chaos. But l2t does not currently consider the valuable artifacts coming from wtmp/btmp files on Linux systems.

wtmp (utmp? btmp!)

For a rapid introduction to those files let's see what wikipedia says about them: "utmp, wtmp, btmp and variants such as utmpx, wtmpx and btmpx are files on Unix-like systems that keeps track of all logins and logouts to the system. The utmp file keeps track of the current login state of each user. The wtmp file records all logins and logouts history. The btmp file records failed login attempts. The utmp, wtmp and btmp files were never a part of any official Unix standard, such as Single UNIX Specification, while utmpx and corresponding APIs are part of it". Here we are.
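On a live system (or against exported files) the standard way to read these artifacts is the last family of tools, e.g.:

```
last -f /var/log/wtmp      # login/logout history
lastb -f /var/log/btmp     # failed logins (typically needs root)
```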

Having this information included in the timeline of (for example) a compromised Linux server can help a lot in answering the who part of the canonical DFIR questions. Moreover, keeping track of every registered login/logout, and of who is currently logged in, should be really useful. Indeed, it's preferable to have a quite verbose timeline to prune/filter during analysis than not to have these logs included (and be obliged to manually correlate the various outputs, when spotted).


wtmp, btmp and utmp (not considered here, but it should be considered during live/memory analysis) share a common format. Since the target OS is Linux, the format is found inside the wtmp.h include file or, easier, in the utmp(5) man. An excerpt follows:

           /* Values for ut_type field, below */

           #define EMPTY         0 /* Record does not contain valid info
                                      (formerly known as UT_UNKNOWN on Linux) */
           #define RUN_LVL       1 /* Change in system run-level (see
                                      init(8)) */
           #define BOOT_TIME     2 /* Time of system boot (in ut_tv) */
           #define NEW_TIME      3 /* Time after system clock change
                                      (in ut_tv) */
           #define OLD_TIME      4 /* Time before system clock change
                                      (in ut_tv) */
           #define INIT_PROCESS  5 /* Process spawned by init(8) */
           #define LOGIN_PROCESS 6 /* Session leader process for user login */
           #define USER_PROCESS  7 /* Normal process */
           #define DEAD_PROCESS  8 /* Terminated process */
           #define ACCOUNTING    9 /* Not implemented */

           #define UT_LINESIZE      32
           #define UT_NAMESIZE      32
           #define UT_HOSTSIZE     256

           struct exit_status {              /* Type for ut_exit, below */
               short int e_termination;      /* Process termination status */
               short int e_exit;             /* Process exit status */
           };

           struct utmp {
               short   ut_type;              /* Type of record */
               pid_t   ut_pid;               /* PID of login process */
               char    ut_line[UT_LINESIZE]; /* Device name of tty - "/dev/" */
               char    ut_id[4];             /* Terminal name suffix,
                                                or inittab(5) ID */
               char    ut_user[UT_NAMESIZE]; /* Username */
               char    ut_host[UT_HOSTSIZE]; /* Hostname for remote login, or
                                                kernel version for run-level
                                                messages */
               struct  exit_status ut_exit;  /* Exit status of a process
                                                marked as DEAD_PROCESS; not
                                                used by Linux init(8) */
               /* The ut_session and ut_tv fields must be the same size when
                  compiled 32- and 64-bit.  This allows data files and shared
                  memory to be shared between 32- and 64-bit applications. */

           #if __WORDSIZE == 64 && defined __WORDSIZE_COMPAT32
               int32_t ut_session;           /* Session ID (getsid(2)),
                                                used for windowing */
               struct {
                   int32_t tv_sec;           /* Seconds */
                   int32_t tv_usec;          /* Microseconds */
               } ut_tv;                      /* Time entry was made */
           #else
               long   ut_session;            /* Session ID */
               struct timeval ut_tv;         /* Time entry was made */
           #endif

               int32_t ut_addr_v6[4];        /* Internet address of remote
                                                host; IPv4 address uses
                                                just ut_addr_v6[0] */
               char __unused[20];            /* Reserved for future use */
           };

Despite the semantics, it should be easy to parse the data from these files: but when you go delving into something, what often happens is that few things are straightforward...

Alignment oddity?

The following is an example of the first entry in a wtmp file, where an entry is an instance of struct utmp.

00000000  02 00 00 00 00 00 00 00  7e 00 00 00 00 00 00 00  |........~.......|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  00 00 00 00 00 00 00 00  7e 7e 00 00 72 65 62 6f  |........~~..rebo|
00000030  6f 74 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |ot..............|
00000040  00 00 00 00 00 00 00 00  00 00 00 00 32 2e 36 2e  |............2.6.|
00000050  33 36 2e 66 75 66 66 61  2e 78 38 36 5f 36 34 00  |36.fuffa.x86_64.|
00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000080  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000100  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000110  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000120  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000140  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000150  00 00 00 00 85 a8 d6 4e  a9 f2 09 00 00 00 00 00  |.......N........|
00000160  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000170  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

Possibly something weird "happened", but when summing the sizeof of every "struct utmp" field the result was 382 bytes, instead of the 384 easily guessed from hex-viewing. So there is some alignment which is currently not obvious to me. Recalling the first lines of the previous example

00000000  02 00 00 00 00 00 00 00  7e 00 00 00 00 00 00 00  |........~.......|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  00 00 00 00 00 00 00 00  7e 7e 00 00 72 65 62 6f  |........~~..rebo|

the ut_line string (char[32]) should start at the seventh byte, but it actually starts at the ninth: we have two "unknown" preceding bytes. Actually I did not clarify this issue (more testing needed; I rely on the community or on more free time to come...), but I considered ut_type to be 4 bytes long, and so the first line becomes

00000000  02 00 00 00 00 00 00 00  7e 00 00 00 00 00 00 00  |........~.......|

with ut_type=2 and ut_pid=0. Be advised that this guess could be wrong; anyway, in my cases I did not face any problem when getting ut_types.
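The resulting 384-byte layout (ut_type padded to 4 bytes, 32-bit time fields) can be sketched with Python's struct module; field names follow utmp(5), while the padding choice is my own guess as described above:

```python
import struct

# ut_type+pad, ut_pid, ut_line, ut_id, ut_user, ut_host,
# ut_exit (2 shorts), ut_session, ut_tv (sec, usec),
# ut_addr_v6[4], __unused[20]
UTMP_FMT = "<ii32s4s32s256shhiii4i20s"
UTMP_SIZE = struct.calcsize(UTMP_FMT)  # 384 bytes per record

def parse_utmp_record(raw):
    """Unpack one 384-byte wtmp/btmp/utmp record into a dict."""
    f = struct.unpack(UTMP_FMT, raw)
    cstr = lambda b: b.split(b"\x00", 1)[0].decode("ascii", "replace")
    return {
        "ut_type": f[0], "ut_pid": f[1],
        "ut_line": cstr(f[2]), "ut_id": cstr(f[3]),
        "ut_user": cstr(f[4]), "ut_host": cstr(f[5]),
        "tv_sec": f[9], "tv_usec": f[10],
    }

# rebuild the "reboot" record from the hexdump above and parse it back
rec = struct.pack(UTMP_FMT, 2, 0, b"~", b"~~", b"reboot",
                  b"2.6.36.fuffa.x86_64", 0, 0, 0,
                  0x4ed6a885, 651945, 0, 0, 0, 0, b"")
print(parse_utmp_record(rec)["ut_user"])  # reboot
```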

going further

Having all fields parsed from the file, it was time to extract and summarize the login/logout entries. If you think that wtmp entries contain simple, (almost) human readable information, you're wrong: or, probably better, once you know how, well, it's simple. To achieve the result, another read of the wtmp man is needed, with the aid of a last Linux command implementation (one here). The following C code belongs to last.c:

    for(i = listnr - 1; i >= 0; i--) {
        bp = utl+i;
        /*
         * if the terminal line is '~', the machine stopped.
         * see utmp(5) for more info.
         */
        if (!strncmp(bp->ut_line, "~", LMAX)) {
            /*
             * utmp(5) also mentions that the user
             * name should be 'shutdown' or 'reboot'.
             * Not checking the name causes e.g. runlevel
             * changes to be displayed as 'crash'. -thaele
             */
            if (!strncmp(bp->ut_user, "reboot", NMAX) ||
            !strncmp(bp->ut_user, "shutdown", NMAX)) {
            /* everybody just logged out */
            for (T = ttylist; T; T = T->next)
                T->logout = -bp->ut_time;

            currentout = -bp->ut_time;
            crmsg = (strncmp(bp->ut_name, "shutdown", NMAX)
                ? "crash" : "down ");
            if (!bp->ut_name[0])
            (void)strcpy(bp->ut_name, "reboot");
            if (want(bp, NO)) {
            ct = utmp_ctime(bp);
            if(bp->ut_type != LOGIN_PROCESS) {
            if (maxrec && !--maxrec)
I did not follow exactly what I found in last.c, but I used it to cross-check my steps: in other words, I made some tests and observations to write the code, then I checked the results against last's output. From my point of view this was more formative than a rough language porting. I made a switch statement on ut_type looking for: "User Process", which is a user login on the ut_line from the ut_host specified; "End Process", which is a user logout if the corresponding ut_line registered a login; "Run Level", which could be a shutdown; "Boot Time", which could be a reboot. It can happen that wtmp registers a boot without a preceding shutdown (a crash?): in these cases the script PURGEs the still-logged-in users, so you cannot be assured that the users' work times are right (they should be smaller). Regarding btmp files, they are simpler, since they contain only one entry type, which represents a failed login.
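The switch described above can be sketched as follows (a simplified Python rendition of the logic, not the actual Perl script; the ut_type constants come from utmp(5)):

```python
# ut_type values from utmp(5)
RUN_LVL, BOOT_TIME = 1, 2
USER_PROCESS, DEAD_PROCESS = 7, 8

def track_sessions(records):
    """records: dicts with ut_type, ut_line, ut_user, ut_host, tv_sec."""
    logged = {}   # ut_line -> (user, host, login time)
    events = []
    for r in records:
        t = r["ut_type"]
        if t == USER_PROCESS:                      # login on ut_line
            logged[r["ut_line"]] = (r["ut_user"], r["ut_host"], r["tv_sec"])
            events.append(("login", r["ut_user"], r["ut_line"], r["tv_sec"]))
        elif t == DEAD_PROCESS and r["ut_line"] in logged:
            user, _, _ = logged.pop(r["ut_line"])  # logout closes the login
            events.append(("logout", user, r["ut_line"], r["tv_sec"]))
        elif t in (RUN_LVL, BOOT_TIME):            # shutdown or (re)boot:
            logged.clear()                         # purge still-logged users
            events.append(("system", r["ut_user"], r["ut_line"], r["tv_sec"]))
    return events
```

A logout without a matching login is silently dropped here; the real script, as noted, also reports who is still logged in at each event time.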

scripting... (and timeline)

I do not want to get stuck on technical details, since the Perl script is open source and can be downloaded from hotoloti (download here). What makes this script different from the last tool is that it's able to generate a Sleuthkit mactime v3.x timeline of logins and logouts (the so-called body file), and that body file can be added to other body files (like the one coming from Sleuthkit fls) to get a more complete timeline of events. Moreover, the script shows not only logins/logouts but also who is currently logged in at each event time; an example follows (not in mactime output format):

type = [0x0007] User Process
pid = 5192 [0x1448]
line = pts/7
user = root
host =
tv_sec  = 1335961904 (Wed May  2 12:31:44 2012)
tv_usec = 780918
ut_addr_v6 string = 157574295
ut_addr_v6 IPV4 =
NOTE = LOGIN  ( logged in on line=pts/7 now=root@foo.it_pts/6 gino@:1101.0_pts/0 root@foo.it_pts/7 root@:1100.0_pts/3 )
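As a sketch of the timeline side (my own minimal rendition, assuming the mactime v3.x body format MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, with the event time placed in the mtime column):

```python
def wtmp_body_line(event, user, line, host, tv_sec):
    """Render a wtmp event as a Sleuthkit mactime v3.x body line."""
    name = "[wtmp] {} {} on {} from {}".format(event, user, line, host or "-")
    # MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
    return "0|{}|0|0|0|0|0|0|{}|0|0".format(name, tv_sec)

print(wtmp_body_line("login", "root", "pts/7", "", 1335961904))
# 0|[wtmp] login root on pts/7 from -|0|0|0|0|0|0|1335961904|0|0
```

Lines like this can simply be concatenated to an fls body file before running mactime.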

I made a template for X-Ways Forensics too ("a template is a dialog box that provides means for editing custom data structures in a more comfortable and error-preventing way than raw hex editing does", info here), even if this great DFIR (more-than-a) tool already has the capability to understand and parse wtmp files.

Log2Timeline (I forgot selinux)

I felt that the script was too isolated and too unmanageable, so I wondered if it could be useful to expand the effort into something more "shareable" and useful. What, if not l2t? I wrote to Kristinn Gudjonsson to ask if it could be useful to have such a script included: moments after my email, Kristinn provided suggestions and instructions on what to do. Result: Log2Timeline has a new input module called utmp (Fast&Furious collaboration)! Currently the script is hosted in the experimental branch and is subject to testing and revision. Feel free to download, test and send feedback.

I forgot SELinux (the wiki helps here). SELinux creates audit logs like "/var/log/audit/audit.log" which were not covered by l2t input modules. They are quite simple to parse, so I wrote another l2t input module called (guess what?!) selinux: another useful source to be included in timelines. Again, it's hosted in the experimental branch of Log2Timeline.


This post and those scripts were born from a case involving a compromised Linux server with an EXT4 file system. During the analysis I experienced how many benefits can come out of DFIR sharing. Without Log2Timeline and the Sleuthkit it would have been much harder to get the job done. Moreover, I want to thank Simson Garfinkel and Kevin Fairbanks for the Sleuthkit version that includes EXT4 support (originally made by Willi Ballenthin here) and for the prompt support when facing an fls body file issue (one more field with respect to the mactime v3.x format; be sure to download the latest Ext4_Dev branch). Finally, I learned a lot from the SANS blog EXT4 series written by Hal Pomeranz, first part here. Following these great examples and trying to "repay", here is something hopefully useful to the community.