Implement bfac tricks into url_fuzzer #17434
base: master
Conversation
w3af/plugins/crawl/url_fuzzer.py
Outdated
'.gzip', '.bzip2', '.inc', '.zip', '.rar', '.jar', '.java',
'.class', '.properties', '.bak', '.bak1', '.bkp', '.back',
'.backup', '.backup1', '.old', '.old1', '.$$$'
_appendables = ('~', '~~', '_', '.', '.tar.gz', '.gz', '.7z', '.cab',
Now that these lists are getting larger, we run the risk of duplicating an item in them.
Let's change the type from tuple (current) to set (desired). By doing so we make sure that even if we add duplicates in the variable definition, the set will eliminate them.
Just change _appendables = (...) to _appendables = {...}.
The same change should be applied to all other class attributes that define lists like this in the url_fuzzer class.
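A minimal sketch of the suggested change, using only a subset of the items from the snippet above for illustration:

    # A set literal instead of a tuple makes duplicated entries harmless:
    # the set keeps a single copy of each item.
    _appendables = {'~', '.tar.gz', '.gz', '.7z', '.cab',
                    '.gzip', '.bzip2', '.inc', '.zip', '.rar',
                    '.bak', '.backup', '.old', '.$$$'}

    # Writing the same suffix twice still results in a single entry
    assert len({'.bak', '.old', '.bak'}) == 2

One thing to keep in mind is that a set, unlike a tuple, has no defined iteration order, which only matters if the order of the generated test URLs is significant.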
Done.
w3af/plugins/crawl/url_fuzzer.py
Outdated
if filename:
    domain = url.get_domain_path()
    url_string = domain.url_string
    name = filename[:filename.rfind(u'.')]
What happens here when the input URL is http://w3af.org/foobar (there is no dot in the filename)?
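For reference, a standalone illustration of the edge case being asked about (plain Python, not the plugin code):

    filename = u'foobar'
    # rfind() returns -1 when there is no dot, so the slice becomes
    # filename[:-1] and silently drops the last character.
    name = filename[:filename.rfind(u'.')]
    print(name)  # prints 'fooba' instead of 'foobar'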
See the answer below.
w3af/plugins/crawl/url_fuzzer.py
Outdated
mutate_name_testing = (
    url_string + '#' + filename + '#',
    url_string + name + ' (copy).' + extension,
And also, following the same example above where the filename has no "name": what happens in these lines when extension is empty? Does it make sense to send these tests?
These cases should be investigated and handled.
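A quick illustration of what one of those candidates looks like when the extension is empty (the values below are hypothetical, not taken from the plugin):

    url_string = 'http://w3af.org/'
    name, extension = 'foobar', ''
    print(url_string + name + ' (copy).' + extension)
    # -> 'http://w3af.org/foobar (copy).'  (trailing dot, probably not a useful test)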
Yes, it wasn't handling these files correctly. I made some modifications and I believe it handles them well now.
Here are the tests I did and the results: https://pastebin.com/WG5szsnB (GitHub's file upload system isn't working right now, so I put the output on Pastebin).
w3af/plugins/crawl/url_fuzzer.py
Outdated
)

for change in mutate_name_testing:
    newurl = URL(change)
Just in case, do a try/except around the URL instance creation; we never know what could go wrong when crafting these URLs.
I'm not sure if you want the try/except in w3af, or if it was just to find out what is generating all the http://localhost/ entries in the list of fuzzable requests.
I did this:

    try:
        newurl = URL(change)
        yield newurl
    except ValueError as exception:
        om.out.information('Error while generating a new URL: '
                           + str(exception))

And nothing wrong came from it during all my tests.
Let me know if I should add the exception handling to the pull request.
Yes, the code you show in the comment is what I wanted, just in case... you never know...
The only change I would propose to that code is to move the yield newurl into the else: section.
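A minimal sketch of the proposed structure, as the body of the existing generator method (URL and om are the names already used by the plugin; nothing else is assumed):

    for change in mutate_name_testing:
        try:
            newurl = URL(change)
        except ValueError as exception:
            om.out.information('Error while generating a new URL: %s'
                               % exception)
        else:
            # only yield URLs that were created without errors
            yield newurl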
I took a quick look at the output.txt file and noticed these lines:
This is what you said was strange, right? I also think it is strange, but I can't understand why it happens... Could you share the w3af script you are using to run the scan, and a bit more about the target web application? Thanks!
Yes, this is it. I haven't looked into what causes it yet. I'm using w3af with these modifications (my fork's branch url_fuzzer-bfac-tricks). It only appears when the function _mutate_file_name is called. I tested it with two servers and found the same result in both.
See #16271
Hey, here is a first try.
What I added:
- New patterns in appendables, backup_exts and file_types
- A new set of strings to prepend (like Copy_of_) and _mutate_by_prepending to generate new URLs from this set
- _mutate_file_name to generate changes on the name of the file, like: index.html -> index-backup.html, index (copy).html, .index.html.swp, etc.
I checked that the new patterns don't create duplicates of existing URLs.
I don't know if the function _mutate_file_name is the best way to do it. I used the same technique as bfac; what do you think about this function? (A rough sketch of the general idea is included after this comment.)
I performed some tests and everything works properly, but I found something strange. The plugin finds all 26 URLs it could find on the server, but the list of fuzzable requests contains http://localhost/ several times. See the attached file output.txt for the list of files on the test server and the w3af output.
This bug has something to do with the new function _mutate_file_name. It found 6 new URLs and there are 6 more http://localhost/ entries in the fuzzable requests. If the plugin doesn't call the function, the result is good. I'm sure that the function doesn't create this URL directly. I'll work on it.
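For reference, a rough standalone sketch of the kind of file-name mutations described in this pull request. The function name, the exact patterns and the splitting logic below are illustrative assumptions, not the actual plugin code:

    def mutate_file_name(url_string, filename):
        # Split 'index.html' into 'index' and '.html'; a filename without
        # a dot keeps an empty extension instead of losing its last character.
        dot = filename.rfind('.')
        if dot > 0:
            name, ext = filename[:dot], filename[dot:]
        else:
            name, ext = filename, ''

        # Backup/copy naming patterns commonly left behind by editors and users
        candidates = {url_string + '#' + filename + '#',
                      url_string + '.' + filename + '.swp',
                      url_string + name + '-backup' + ext,
                      url_string + name + ' (copy)' + ext}

        for candidate in candidates:
            yield candidate

For url_string 'http://localhost/' and filename 'index.html' this yields candidates such as http://localhost/index-backup.html, http://localhost/index (copy).html and http://localhost/.index.html.swp.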