Hashcat - Utils (Hashcat Wiki)
hashcat-utils
Description
hashcat-utils is a set of small utilities that are useful in advanced password cracking.
Each of these utils is designed to perform one specific function.
Since they all work with STDIN and STDOUT, you can chain them together.
hashcat-utils is released [https://github.com/hashcat/hashcat-utils/] as open source software under the MIT license [https://github.com/hashcat/hashcat-utils/blob/master/LICENSE].
Download
The programs are available for Linux and Windows, on both 32-bit and 64-bit architectures, as well as .app binaries for 64-bit OSX/macOS.
hashcat-utils does not have a dedicated homepage, but this download link always has the latest release:
▪ hashcat-utils [https://github.com/hashcat/hashcat-utils/releases/]
List of Utilities
cap2hccapx
Tool used to generate .hccapx files from network capture files (.cap or .pcap) in order to crack WPA/WPA2 authentications. The .hccapx files are used as input by hash type -m 2500 = WPA/WPA2.
The additional options allow you to specify a network name (ESSID) to filter out unwanted networks, and to give cap2hccapx a hint about the name of a network (ESSID) and the MAC address of the access point (BSSID) if no beacon was captured.
Syntax:
$ ./cap2hccapx.bin
usage: ./cap2hccapx.bin input.pcap output.hccapx [filter by essid] [additional network essid:bssid]
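For example, to convert a capture while keeping only one network (capture.pcap, output.hccapx and MyWifi are placeholder names):
$ ./cap2hccapx.bin capture.pcap output.hccapx MyWifi
The resulting file can then be fed to hashcat with -m 2500.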
cleanup-rules
Strips rules from STDIN that are not compatible with a specified platform.
Syntax:
Example ('<' is a rules directive that only works with legacy CPU hashcat):
$ cat dirty.rules
l
<3
combinator
Syntax:
$ ./combinator.bin
usage: ./combinator.bin file1 file2
Each word from file2 is appended to each word from file1 and then printed to STDOUT.
Since the program needs to rewind the files multiple times, it cannot work with STDIN and requires real files.
The alternative would be to store the content of both files in memory. However, in hash cracking we usually work with huge files, so the size of the files does matter.
$ cat two.list
nes
tor
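Running the list against itself shows the append behavior:
$ ./combinator.bin two.list two.list
nesnes
nestor
tornes
tortor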
combinator3
Like combinator, but accepts three files as input, producing the combination of all three lists as output.
$ cat three.list
nes
tor
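Supplying the same list three times yields all 2×2×2 = 8 three-way combinations (if your build objects to reusing the same file, copy the list to separate files, as described for combinatorX below):
$ ./combinator3.bin three.list three.list three.list
nesnesnes
nesnestor
nestornes
nestortor
tornesnes
tornestor
tortornes
tortortor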
combinatorX
An expanded combinator tool that can combine up to eight elements, with custom separators between each element, and
with session / restore support and other useful flags.
Note that combinatorX cannot use the same direct file on disk for more than two lists. This is due to how the files are read
from disk at a low level. A workaround for this is to copy the lists to separate files.
Simple example:
$ cat list1.txt
the
quick
$ cat list2.txt
brown
fox
$ cat list3.txt
jumped
over
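Combining these three lists in order (invoked with one file option per list; see the program's usage output for the exact flag names, which are not reproduced here) yields the eight candidates:
thebrownjumped
thebrownover
thefoxjumped
thefoxover
quickbrownjumped
quickbrownover
quickfoxjumped
quickfoxover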
combipow
Produces all unique combinations of the words in a given wordlist, preserving their original order and using each word at most once per candidate.
$ cat wordlist
a
b
c
XYZ
123
$ combipow wordlist
a
b
ab
c
ac
bc
abc
XYZ
aXYZ
bXYZ
abXYZ
cXYZ
acXYZ
bcXYZ
abcXYZ
123
a123
b123
ab123
c123
ac123
bc123
abc123
XYZ123
aXYZ123
bXYZ123
abXYZ123
cXYZ123
acXYZ123
bcXYZ123
abcXYZ123
cpu_rules
TBD
ct3_to_ntlm
Syntax:
There are two different versions for NetNTLMv1 - one with ESS, and one without.
cutb
This program (new in hashcat-utils-0.6) is designed to cut up a wordlist (read from STDIN) for use in a combinator attack. If you notice that passwords in a particular dump tend to share a common prefix or suffix length, this program cuts that specific prefix or suffix off the existing words in a list and passes the result to STDOUT.
Syntax:
$ ./cutb.bin
usage: ./cutb.bin offset [length] < infile > outfile
$ cat wordlist
apple1234
theman
fastcars
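Assuming the offset counts from the start of each word (a negative offset would count from the end) and that the selected substring is what gets printed, taking the first four characters would look like:
$ ./cutb.bin 0 4 < wordlist
appl
them
fast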
Suggested use: feed the resulting prefixes or suffixes into one side of a combinator attack (-a 1).
deskey_to_ntlm
TBD
Syntax:
expander
Each word going into STDIN is parsed and split into all of its single chars, mutated and reconstructed, and then sent to STDOUT.
The reconstruction generates all possible patterns of the input word by applying the following iterations:
▪ All possible lengths of the patterns within a maximum of 4 (defined in LEN_MAX macro, which you can increase in
the source).
▪ All possible offsets of the word.
▪ Shifting the word to the right until a full cycle.
▪ Shifting the word to the left until a full cycle.
Example:
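A short input word illustrates the idea. If the iterations behave as described above, the deduplicated output should amount to every substring (up to length 4) of every rotation of the word:
$ printf 'pass\n' | ./expander.bin | sort -u
a
as
ass
assp
p
pa
pas
pass
s
sp
spa
spas
ss
ssp
sspa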
export_potfile
Attempts to extract plaintexts from potfile records containing mixed hash types. Operates on STDIN. Assumes the last colon-separated field is the plaintext and that any plaintext containing colons is HEX-encoded.
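A typical invocation (file names here are just placeholders) would redirect the potfile in and collect the plaintexts from STDOUT:
$ ./export_potfile.bin < hashcat.potfile > plains.txt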
gate
The wordlist going into STDIN is split into equal sections, and one section is passed to STDOUT based on the values you specify. The point of splitting is to distribute the generated workload.
For example, if you have an i7 CPU and want to use your dictionary with a program that is unable to handle multiple cores, you can use gate to split your dictionary into multiple smaller pieces and then run that program in multiple instances.
Syntax:
$ ./gate.bin
usage: ./gate.bin mod offset < infile > outfile
▪ The mod value is the number of pieces you want to split your dictionary into.
▪ The offset value selects which of the sections is written to STDOUT.
$ cat numbers
1
2
3
4
5
6
7
8
9
10
11
12
13
14
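Assuming sections are assigned round-robin and offsets count from 0, splitting the file above three ways looks like:
$ ./gate.bin 3 0 < numbers
1
4
7
10
13
$ ./gate.bin 3 1 < numbers
2
5
8
11
14
Each instance of your single-core program then gets one slice.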
generate-rules
Generates a given number of random rules: the first argument is the rule count, the second the seed for the random generator.
$ ./generate-rules.bin 10 42
$ $} z3
*61 t
l
o2*
L2
*6B *98 D1
x0A f x32
s^L
s[5 s'#
swU }
hcstatgen
A tool used to generate .hcstat files for use with older hashcat's --markov-hcstat parameter, and with the statsprocessor.
NOTE: The output generated by hcstatgen is no longer supported by current hashcat, and it lacks support for longer passwords (up to length 256). Use hcstat2gen instead.
Syntax:
Nothing much else to say here. Each outfile will be exactly 32.1MB in size.
hcstat2gen
A tool for generating custom Markov statistics, for use (after LZMA compression) with hashcat's --markov-hcstat
(soon to be --markov-hcstat2) parameter.
To conserve space, hashcat now expects hcstat2 files to be compressed as LZMA. If the file is not compressed, you will
see a “Could not uncompress data” error.
Syntax:
Each raw outfile should be about 132MB in size (with variable size after compression).
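For the compression step, plain xz can produce the LZMA container hashcat expects (the file name is just an example):
$ xz --format=lzma --keep my.hcstat2
This leaves my.hcstat2.lzma next to the raw file.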
keyspace
len
Each word going into STDIN is parsed for its length and passed to STDOUT if it matches a specified word-length range.
Syntax:
$ cat dict
1
123
test
pass
hello
world
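Assuming the two arguments are the minimum and maximum length, keeping only the four-character words would look like:
$ ./len.bin 4 4 < dict
test
pass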
mli2
mli2 merges two lists into one. Like rli2, it requires both lists to be sorted (in LC_ALL=C order).
Syntax:
Example:
$ cat w1.txt
123
1234
999
aceofspades
cards
password
veryfast
$ cat w2.txt
123
1234
999
extra
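Assuming the two files are passed as the only arguments and output goes to STDOUT, merging the sorted lists yields their union in LC_ALL=C order:
$ ./mli2.bin w1.txt w2.txt
123
1234
999
aceofspades
cards
extra
password
veryfast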
If you use mli2 on unsorted lists, you will get unmerged results.
If you use mli2 on sorted but non-uniq'd lists, you will get sorted but non-uniq'd results.
morph
morph generates insertion rules for the most frequent chains of characters from the dictionary that you provide, on a per-position basis.
Syntax:
▪ depth = Determines how many of the “top” chains you want. For example, 10 would give you the top 10 (in fact it seems to start counting at 0, so 10 would give the top 11).
▪ width = Maximum length of the chain. With 3, for example, you will get up to 3 rules per line for the most frequent 3-letter chains.
▪ pos_min = Minimum position where the insertion rule will be generated. For example, 5 means rules will only insert the string from position 5 onward.
▪ pos_max = Maximum position where the insertion rule will be generated. For example, 10 means the inserted string ends at position 10 at the latest.
permute
Each word going into STDIN is parsed and run through “The Countdown QuickPerm Algorithm” by Phillip Paul Fuchs (see: https://permuteweb.tchs.info [Internet Archive: https://web.archive.org/web/20140409134808/http://permuteweb.tchs.info/]).
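Assuming the usual STDIN/STDOUT convention of these tools, a three-character word yields all six orderings (sorted here, since QuickPerm emits them in its own order):
$ printf 'abc\n' | ./permute.bin | sort
abc
acb
bac
bca
cab
cba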
permute_exist
TBD
Syntax:
prepare
Due to the nature of the permutation algorithm itself, the input words “BCA” and “CAB” would produce exactly the same
password candidates.
The best way to sort out these “dupes” is to reconstruct the input word reordered by the ASCII value of each char of the
word:
Input:
1. B ⇒ 0x42
2. C ⇒ 0x43
3. A ⇒ 0x41
Output: ABC
Input:
1. C ⇒ 0x43
2. A ⇒ 0x41
3. B ⇒ 0x42
Output: ABC
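The statistics below come from rockyou.txt; assuming the usual STDIN/STDOUT convention of these tools, the .prep file could have been produced with:
$ ./prepare.bin < rockyou.txt > rockyou.txt.prep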
$ wc -l rockyou.txt
14344391 rockyou.txt
$ wc -l rockyou.txt.prep
9375751 rockyou.txt.prep
Sorted out 4968640 words (34.6%) which would produce dupes in permutation attack.
remaining
Syntax:
▪ A wordlist (search): each word is matched against the other wordlist. Don't make this too big; it is cached in memory.
▪ A wordlist (base): prints what remains after the search word has been “subtracted”. This is something like rockyou.txt, or better.
There is a high chance of creating duplicates, so you need to sort -u the result yourself.
The result is ideal for use in the -a 1 combinator attack mode. You may want to run two attacks: one with the search words on the left and the remaining parts on the right, and one the other way around.
Example:
Base wordlist:
isajack3935
jackysch_5131
HBjackas5
mom1jackhopes
Search wordlist:
jack
jacky
… produces candidates:
isa
3935
ysch_5131
HB
as5
mom1
hopes
sch_5131
req
req-exclude
req-include
Each word going into STDIN is parsed and passed to STDOUT if it matches specified password group criteria.
Sometimes you know that a password must include a lower-case char, an upper-case char and a digit to satisfy a specific password policy. Candidates that cannot match this policy will definitely not result in a cracked password, so we should skip them.
This program is not very complex and cannot fully match all common password policy criteria, but it does provide a little help.
LOWER 1 abcdefghijklmnopqrstuvwxyz
UPPER 2 ABCDEFGHIJKLMNOPQRSTUVWXYZ
DIGIT 4 0123456789
To configure a password group out of the single entries you just add the item numbers of all the single entries together.
For example, if you want to pass to STDOUT only words containing at least one lower-case char and at least one digit, you look up the table: “lower” is “1” and “digit” is “4”, which added together makes “5”.
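Assuming the group number is the tool's sole argument (check its usage output), filtering for that lower + digit group would look like:
$ ./req-include.bin 5 < dict > filtered.txt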
rli
rli compares a single file against one or more other files and removes all duplicates:
$ ./rli.bin
usage: rli infile outfile removefiles...
Example, w1.txt:
password
123
cards
999
aceofspades
1234
veryfast
And w2.txt:
123
999
1234
Running rli per the usage line above:
$ ./rli.bin w1.txt OUT_FiLE.txt w2.txt
OUT_FiLE.txt:
password
cards
aceofspades
veryfast
rli can be very useful to clean your dicts and to have one unique set of dictionaries.
But the dictionary size cannot exceed host memory size. Read rli2 below for large files.
rli2
Unlike rli, rli2 is not limited by memory. But it requires infile and removefile to be sorted (in LC_ALL=C order) and uniqued beforehand; otherwise it won't work as it should.
For example, using the w1.txt and w2.txt files from above, if we run:
$ ./rli2.bin w1.txt w2.txt
password
123
cards
999
aceofspades
1234
veryfast
It simply echoes all of w1.txt, because the inputs were not sorted. Sorting and uniquing both files first (output file names here are just examples):
$ LC_ALL=C sort -u w1.txt > w1-sorted.txt
$ LC_ALL=C sort -u w2.txt > w2-sorted.txt
And running:
$ ./rli2.bin w1-sorted.txt w2-sorted.txt
Will do it accurately:
aceofspades
cards
password
veryfast
Note that rli2 can't handle multiple remove files. And if you haven't already noticed, rli2 outputs to STDOUT, not to a file; you can always redirect to a file to work around that.
rules_optimize
TBD
seprule
TBD
splitlen
oclHashcat has a very specific way of loading dictionaries, unlike CPU hashcat. The best way to organize your dictionaries for use with oclHashcat is to sort the words of your dictionary by length into separate files, in a specific directory, and then run oclHashcat in directory mode.
Syntax:
$ ./splitlen.bin
usage: ./splitlen.bin outdir < infile
$ mkdir ldicts
$ ./splitlen.bin ldicts < rockyou.txt
Results in:
$ ls -l ldicts/
total 129460
(one file per word length found in rockyou.txt)
NOTE: splitlen does not append; it overwrites the files in the outdir. That's why you should use empty directories.
strip-bsn
strip-bsr
tmesis
tmesis takes a wordlist and produces insertion rules that insert each word of the wordlist at preset positions.
For example, the word ‘password’ will create insertion rules that insert ‘password’ at positions 0 through F (15), which mutates the string ‘123456’ as follows:
password123456
1password23456
12password3456
123password456
1234password56
12345password6
123456password
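In hashcat rule syntax, iNX inserts character X at position N, so the rule behind the position-1 mutation above would look like this (a sketch; the exact rules tmesis emits may be formatted differently):
i1p i2a i3s i4s i5w i6o i7r i8d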
Hints:
* Use tmesis to create rules to attack hashlists that come from the same source. Run an initial analysis on the cracked passwords, collect the top 10-20 words that appear in the passwords, and use tmesis to generate rules from them.
* tmesis does not handle multibyte Unicode characters as single characters, but rather as individual bytes. This means it can be used to insert multibyte characters as well.
tmesis-dynamic
Syntax:
tmesis-dynamic takes two wordlists and produces a new one, using a user-defined substring as a “key”.
In each word of wordlist 1 that contains the key, the key substring is replaced by each word of wordlist 2.
$ cat wordlist1.txt
isajack3935
jackysch_5131
HBjackas5
mom1jackhopes
$ cat wordlist2.txt
123456
password
jill
hashcat
… produces, when supplied with the key “jack”, one candidate per wordlist2 entry for each key match. The first input word alone expands to:
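isa1234563935
isapassword3935
isajill3935
isahashcat3935
The other three words expand likewise, for 16 candidates in total.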
topmorph
TBD
Syntax:
Limitations
Some programs from hashcat-utils enforce a minimum and maximum allowed word-length range (as in the “len” example above):
#define LEN_MIN 1
#define LEN_MAX 64
You can change them and then recompile hashcat-utils. However, we usually do not need longer plain words in password cracking.
Except where otherwise noted, content on this wiki is licensed under the following license: Public Domain [http://creativecommons.org/licenses/publicdomain/]