How to debug when getting no output or errors? #159
Comments
Hey, it's likely that the URL you're crawling just doesn't have any links, or the links are generated dynamically on the client side, in which case hakrawler might not detect them. Try doing a curl of the same URL and seeing what it returns. If you give me the URL you're trying to crawl, I can give you more info.
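The curl check suggested above can be sketched as follows. The URL is a placeholder for whatever target you are crawling, and the grep pattern is only a rough illustration of what a static link looks like in raw HTML (hakrawler's own extraction is more thorough):

```shell
# Placeholder URL -- substitute the target you're crawling.
# If this prints nothing, the page serves no static links for
# hakrawler to find (they may be injected client-side by JS).
curl -sL "https://example.com/" | grep -Eo 'href="[^"]*"'

# The same rough extraction on an inline sample, to show what
# a static link looks like in the raw HTML:
printf '<a href="/about">About</a> <a href="/blog">Blog</a>\n' \
  | grep -Eo 'href="[^"]*"'
```

If the curl output contains links but hakrawler stays silent, the problem is elsewhere (redirects, auth, TLS); if it contains none, the links are most likely rendered by JavaScript and a crawler that executes JS is needed instead.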
Hi Hakluke, I tested the sample links. One more thing: can you add a flag to pass the URL directly, rather than just piping it in? That way the tool is more independent.
Hi Hakluke, never mind the previous comment: those sample links don't exist; their correct versions do not contain "www" in the domain name.
Hello. First, I appreciate your script and the hard work you've put into this project that we can all use. I think it's great.
I have been struggling to crawl a website. I've read the README over and over and tried all the flags to see if anything would fix the lack of output/crawling.
What can I do to see what's actually going wrong? I'm not familiar with Go specifically. Would a cookie help, and if so, which cookie would I want to input? What's the best way to obtain a cookie I could use (for any attempt; figured I'd ask)?
Sorry to bother you; I know this might be a no-brainer for a more experienced programmer. If you could point me in the right direction, or let me know if I'm missing something, I'd be really grateful!
Thanks so much.
I'm running this on the most up-to-date Kali, installed with Go (I used the apt-installed Go at first, then quickly figured out how to run the newer one from the official install, so both are installed).
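On the cookie question: hakrawler's README documents a flag for supplying custom request headers, which is the usual way to pass a session cookie (the flag name below is taken from the README; verify it against `hakrawler --help` on your installed version). A minimal sketch with a placeholder URL, letting curl capture whatever cookies the site sets and then replaying them:

```shell
# Placeholder URL -- substitute your target. Let curl collect any
# cookies the server sets into a Netscape-format cookie jar:
curl -s -c /tmp/hak_cookies.txt "https://example.com/" > /dev/null

# Flatten the jar into a single "name=value; " string; in a jar
# line, field 6 is the cookie name and field 7 is its value.
cookie=$(awk '!/^#/ && NF >= 7 {printf "%s=%s; ", $6, $7}' /tmp/hak_cookies.txt)

# Replay the cookies while crawling (flag name per the README --
# check your version's help output):
command -v hakrawler > /dev/null \
  && echo "https://example.com" | hakrawler -h "Cookie: ${cookie}"
```

If the session cookie only exists after logging in through a browser, copy the `Cookie` request header from the browser's developer tools (Network tab, any request to the site) and pass that string to the same flag directly.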