Add link to sites.md in readme and merge with master

pull/22/head
anto-christo 5 years ago
commit a3ff122d9d

.gitignore vendored

@@ -1,3 +1,10 @@
# Jupyter Notebook
.ipynb_checkpoints
*.ipynb
# Output files, except requirements.txt
*.txt
!requirements.txt
# Comma-Separated Values (CSV) Reports
*.csv

@@ -1,13 +1,15 @@
# Sherlock
> Find usernames across over 75 social networks
> Find usernames across [social networks](https://github.com/sdushantha/sherlock/blob/master/sites.md)
<p align="center">
<img src="sherlock_preview.png">
<img src="preview.png">
</p>
## Installation
**NOTE**: Python 3.6 or higher is required.
```bash
# clone the repo
$ git clone https://github.com/sdushantha/sherlock.git
@@ -20,87 +22,30 @@ $ pip3 install -r requirements.txt
```
## Usage
Just run ```python3 sherlock.py```
All of the accounts found will be stored in a text file with their username (e.g. ```user123.txt```)
## List of Social Networks
1. [500px](https://500px.com)
2. [About.me](https://about.me)
3. [AngelList](https://angel.co)
4. [BLIP.fm](https://blip.fm)
5. [Bandcamp](https://www.bandcamp.com)
6. [Behance](https://www.behance.net)
7. [BitBucket](https://bitbucket.org)
8. [Blogger](https://www.blogspot.com)
9. [BuzzFeed](https://buzzfeed.com)
10. [Canva](https://www.canva.com)
11. [CashMe](https://cash.me)
12. [Codecademy](https://www.codecademy.com)
13. [Codementor](https://www.codementor.io)
14. [ColourLovers](https://www.colourlovers.com)
15. [Contently](https://www.contently.com)
16. [DailyMotion](https://www.dailymotion.com)
17. [Designspiration](https://www.designspiration.net)
18. [DeviantART](https://www.deviantart.com)
19. [Disqus](https://disqus.com)
20. [Dribbble](https://dribbble.com)
21. [Ebay](https://www.ebay.com)
22. [Ello](https://ello.co)
23. [Etsy](https://www.etsy.com)
24. [Facebook](https://www.facebook.com)
25. [Flickr](https://www.flickr.com)
26. [Flipboard](https://flipboard.com)
27. [Fotolog](https://fotolog.com)
28. [Foursquare](https://foursquare.com)
29. [GitHub](https://www.github.com)
30. [GoodReads](https://www.goodreads.com)
31. [Google Plus](https://plus.google.com)
32. [Gravatar](http://en.gravatar.com)
33. [Gumroad](https://www.gumroad.com)
34. [HackerNews](https://news.ycombinator.com)
35. [HackerOne](https://hackerone.com)
36. [Houzz](https://houzz.com)
37. [IFTTT](https://www.ifttt.com)
38. [Imgur](https://imgur.com)
39. [Instagram](https://www.instagram.com)
40. [Instructables](https://www.instructables.com)
41. [Keybase](https://keybase.io)
42. [Kongregate](https://www.kongregate.com)
43. [LiveJournal](https://www.livejournal.com)
44. [Medium](https://medium.com)
45. [MixCloud](https://www.mixcloud.com)
46. [Newgrounds](https://www.newgrounds.com)
47. [Pastebin](https://pastebin.com)
48. [Patreon](https://www.patreon.com)
49. [Pexels](https://www.pexels.com)
50. [Pinterest](https://www.pinterest.com)
51. [Reddit](https://www.reddit.com)
52. [ReverbNation](https://www.reverbnation.com)
53. [Roblox](https://www.roblox.com)
54. [Scribd](https://www.scribd.com)
55. [Slack](https://www.slack.com)
56. [SlideShare](https://slideshare.net)
57. [SoundCloud](https://soundcloud.com)
58. [Spotify](https://open.spotify.com)
59. [Steam](https://steamcommunity.com)
60. [Tinder](https://www.gotinder.com)
61. [Trakt](https://www.trakt.tv)
62. [Trip](https://www.trip.skyscanner.com)
63. [TripAdvisor](https://tripadvisor.com)
64. [Twitter](https://www.twitter.com)
65. [Unsplash](https://unsplash.com)
66. [VK](https://vk.com)
67. [VSCO](https://vsco.co)
68. [Vimeo](https://vimeo.com)
69. [Wattpad](https://www.wattpad.com)
70. [We Heart It](https://weheartit.com)
71. [WordPress](https://www.wordpress.com)
72. [YouTube](https://www.youtube.com)
73. [devRant](https://devrant.com)
74. [iMGSRC.RU](https://imgsrc.ru)
75. [last.fm](https://last.fm)
```bash
$ python3 sherlock.py --help
usage: sherlock.py [-h] [--version] [--verbose] [--quiet] [--csv] [--tor] [--unique-tor]
USERNAMES [USERNAMES ...]
Sherlock: Find Usernames Across Social Networks (Version 0.1.0)
positional arguments:
USERNAMES One or more usernames to check with social networks.
optional arguments:
-h, --help show this help message and exit
--version Display version information and dependencies.
--verbose, -v, -d, --debug
Display extra debugging information.
--quiet, -q Disable debugging information (Default Option).
--csv Create Comma-Separated Values (CSV) File.
--tor, -t Make requests over TOR; increases runtime; requires TOR to be installed and in system path.
--unique-tor, -u Make requests over TOR with new TOR circuit after each request; increases runtime; requires TOR to be installed and in system path.
```
For example, run ```python3 sherlock.py user123```, and all of the accounts
found will be stored in a text file with the username (e.g. ```user123.txt```).
## License
MIT License

@@ -1,360 +1,462 @@
{
"Instagram": {
"url": "https://www.instagram.com/{}",
"urlMain": "https://www.instagram.com/",
"errorType": "message",
"errorMsg": "The link you followed may be broken"
},
"Twitter": {
"url": "https://www.twitter.com/{}",
"urlMain": "https://www.twitter.com/",
"errorType": "message",
"errorMsg": "page doesnt exist"
},
"Facebook": {
"url": "https://www.facebook.com/{}",
"urlMain": "https://www.facebook.com/",
"errorType": "status_code"
},
"YouTube": {
"url": "https://www.youtube.com/{}",
"urlMain": "https://www.youtube.com/",
"errorType": "message",
"errorMsg": "Not Found"
},
"Blogger": {
"url": "https://{}.blogspot.com",
"urlMain": "https://www.blogger.com/",
"errorType": "status_code",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"Google Plus": {
"url": "https://plus.google.com/+{}",
"urlMain": "https://plus.google.com/",
"errorType": "status_code"
},
"Reddit": {
"url": "https://www.reddit.com/user/{}",
"urlMain": "https://www.reddit.com/",
"errorType": "message",
"errorMsg":"page not found"
},
"Pinterest": {
"url": "https://www.pinterest.com/{}",
"urlMain": "https://www.pinterest.com/",
"errorType": "response_url",
"errorUrl": "https://www.pinterest.com/?show_error=true"
},
"GitHub": {
"url": "https://www.github.com/{}",
"urlMain": "https://www.github.com/",
"errorType": "status_code",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z0-9](?:[a-zA-Z0-9]|-(?=[a-zA-Z0-9])){0,38}$"
},
"Steam": {
"url": "https://steamcommunity.com/id/{}",
"urlMain": "https://steamcommunity.com/",
"errorType": "message",
"errorMsg": "The specified profile could not be found"
},
"Vimeo": {
"url": "https://vimeo.com/{}",
"urlMain": "https://vimeo.com/",
"errorType": "message",
"errorMsg": "404 Not Found"
},
"SoundCloud": {
"url": "https://soundcloud.com/{}",
"urlMain": "https://soundcloud.com/",
"errorType": "status_code"
},
"Disqus": {
"url": "https://disqus.com/{}",
"urlMain": "https://disqus.com/",
"errorType": "status_code"
},
"Medium": {
"url": "https://medium.com/@{}",
"urlMain": "https://medium.com/",
"errorType": "status_code"
},
"DeviantART": {
"url": "https://{}.deviantart.com",
"urlMain": "https://deviantart.com",
"errorType": "status_code",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"VK": {
"url": "https://vk.com/{}",
"urlMain": "https://vk.com/",
"errorType": "status_code"
},
"About.me": {
"url": "https://about.me/{}",
"urlMain": "https://about.me/",
"errorType": "status_code"
},
"Imgur": {
"url": "https://imgur.com/user/{}",
"urlMain": "https://imgur.com/",
"errorType": "status_code"
},
"Flipboard": {
"url": "https://flipboard.com/@{}",
"urlMain": "https://flipboard.com/",
"errorType": "message",
"errorMsg": "loading"
},
"SlideShare": {
"url": "https://slideshare.net/{}",
"urlMain": "https://slideshare.net/",
"errorType": "status_code"
},
"Fotolog": {
"url": "https://fotolog.com/{}",
"urlMain": "https://fotolog.com/",
"errorType": "status_code"
},
"Spotify": {
"url": "https://open.spotify.com/user/{}",
"urlMain": "https://open.spotify.com/",
"errorType": "status_code"
},
"MixCloud": {
"url": "https://www.mixcloud.com/{}",
"urlMain": "https://www.mixcloud.com/",
"errorType": "message",
"errorMsg": "Page Not Found"
},
"Scribd": {
"url": "https://www.scribd.com/{}",
"urlMain": "https://www.scribd.com/",
"errorType": "message",
"errorMsg": "Page not found"
},
"Patreon": {
"url": "https://www.patreon.com/{}",
"urlMain": "https://www.patreon.com/",
"errorType": "status_code"
},
"BitBucket": {
"url": "https://bitbucket.org/{}",
"urlMain": "https://bitbucket.org/",
"errorType": "status_code"
},
"Roblox": {
"url": "https://www.roblox.com/user.aspx?username={}",
"urlMain": "https://www.roblox.com/",
"errorType": "message",
"errorMsg": "Page cannot be found or no longer exists"
},
"Gravatar": {
"url": "http://en.gravatar.com/{}",
"urlMain": "http://en.gravatar.com/",
"errorType": "status_code"
},
"iMGSRC.RU": {
"url": "https://imgsrc.ru/main/user.php?user={}",
"urlMain": "https://imgsrc.ru/",
"errorType": "response_url",
"errorUrl": "https://imgsrc.ru/"
},
"DailyMotion": {
"url": "https://www.dailymotion.com/{}",
"urlMain": "https://www.dailymotion.com/",
"errorType": "message",
"errorMsg": "Page not found"
},
"Etsy": {
"url": "https://www.etsy.com/shop/{}",
"urlMain": "https://www.etsy.com/",
"errorType": "status_code"
},
"CashMe": {
"url": "https://cash.me/{}",
"urlMain": "https://cash.me/",
"errorType": "status_code"
},
"Behance": {
"url": "https://www.behance.net/{}",
"urlMain": "https://www.behance.net/",
"errorType": "message",
"errorMsg": "Oops! We cant find that page."
},
"GoodReads": {
"url": "https://www.goodreads.com/{}",
"urlMain": "https://www.goodreads.com/",
"errorType": "status_code"
},
"Instructables": {
"url": "https://www.instructables.com/member/{}",
"urlMain": "https://www.instructables.com/",
"errorType": "message",
"errorMsg": "404: We're sorry, things break sometimes"
},
"Keybase": {
"url": "https://keybase.io/{}",
"urlMain": "https://keybase.io/",
"errorType": "status_code"
},
"Kongregate": {
"url": "https://www.kongregate.com/accounts/{}",
"urlMain": "https://www.kongregate.com/",
"errorType": "message",
"errorMsg": "Sorry, no account with that name was found.",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"LiveJournal": {
"url": "https://{}.livejournal.com",
"urlMain": "https://www.livejournal.com/",
"errorType": "message",
"errorMsg": "Unknown Journal",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"VSCO": {
"url": "https://vsco.co/{}",
"urlMain": "https://vsco.co/",
"errorType": "status_code"
},
"AngelList": {
"url": "https://angel.co/{}",
"urlMain": "https://angel.co/",
"errorType": "message",
"errorMsg": "We couldn't find what you were looking for."
},
"last.fm": {
"url": "https://last.fm/user/{}",
"urlMain": "https://last.fm/",
"errorType": "message",
"errorMsg": "Whoops! Sorry, but this page doesn't exist."
},
"Dribbble": {
"url": "https://dribbble.com/{}",
"urlMain": "https://dribbble.com/",
"errorType": "message",
"errorMsg": "Whoops, that page is gone.",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"Codecademy": {
"url": "https://www.codecademy.com/{}",
"urlMain": "https://www.codecademy.com/",
"errorType": "message",
"errorMsg": "404 error"
},
"Pastebin": {
"url": "https://pastebin.com/u/{}",
"urlMain": "https://pastebin.com/",
"errorType": "response_url",
"errorUrl": "https://pastebin.com/index"
},
"Foursquare": {
"url": "https://foursquare.com/{}",
"urlMain": "https://foursquare.com/",
"errorType": "status_code"
},
"Gumroad": {
"url": "https://www.gumroad.com/{}",
"urlMain": "https://www.gumroad.com/",
"errorType": "message",
"errorMsg": "Page not found."
},
"Newgrounds": {
"url": "https://{}.newgrounds.com",
"urlMain": "https://newgrounds.com",
"errorType": "status_code",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"Wattpad": {
"url": "https://www.wattpad.com/user/{}",
"urlMain": "https://www.wattpad.com/",
"errorType": "message",
"errorMsg": "This page seems to be missing..."
},
"Canva": {
"url": "https://www.canva.com/{}",
"urlMain": "https://www.canva.com/",
"errorType": "message",
"errorMsg": "Not found (404)"
},
"Trakt": {
"url": "https://www.trakt.tv/users/{}",
"urlMain": "https://www.trakt.tv/",
"errorType": "message",
"errorMsg": "404"
},
"500px": {
"url": "https://500px.com/{}",
"urlMain": "https://500px.com/",
"errorType": "message",
"errorMsg": "Sorry, no such page."
},
"BuzzFeed": {
"url": "https://buzzfeed.com/{}",
"urlMain": "https://buzzfeed.com/",
"errorType": "message",
"errorMsg": "We can't find the page you're looking for."
},
"TripAdvisor": {
"url": "https://tripadvisor.com/members/{}",
"urlMain": "https://tripadvisor.com/",
"errorType": "message",
"errorMsg": "This page is on vacation…"
},
"Contently": {
"url": "https://{}.contently.com/",
"urlMain": "https://contently.com/",
"errorType": "message",
"errorMsg": "We can't find that page!",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"Houzz": {
"url": "https://houzz.com/user/{}",
"urlMain": "https://houzz.com/",
"errorType": "message",
"errorMsg": "The page you requested was not found."
},
"BLIP.fm": {
"url": "https://blip.fm/{}",
"urlMain": "https://blip.fm/",
"errorType": "message",
"errorMsg": "Page Not Found"
},
"HackerNews": {
"url": "https://news.ycombinator.com/user?id={}",
"urlMain": "https://news.ycombinator.com/",
"errorType": "message",
"errorMsg": "No such user."
},
"Codementor": {
"url": "https://www.codementor.io/{}",
"urlMain": "https://www.codementor.io/",
"errorType": "message",
"errorMsg": "404"
},
"ReverbNation": {
"url": "https://www.reverbnation.com/{}",
"urlMain": "https://www.reverbnation.com/",
"errorType": "message",
"errorMsg": "Sorry, we couldn't find that page"
},
"Designspiration": {
"url": "https://www.designspiration.net/{}",
"urlMain": "https://www.designspiration.net/",
"errorType": "message",
"errorMsg": "Content Not Found"
},
"Bandcamp": {
"url": "https://www.bandcamp.com/{}",
"urlMain": "https://www.bandcamp.com/",
"errorType": "message",
"errorMsg": "Sorry, that something isnt here"
},
"ColourLovers": {
"url": "https://www.colourlovers.com/love/{}",
"urlMain": "https://www.colourlovers.com/",
"errorType": "message",
"errorMsg": "Page Not Loved"
},
"IFTTT": {
"url": "https://www.ifttt.com/p/{}",
"urlMain": "https://www.ifttt.com/",
"errorType": "message",
"errorMsg": "The requested page or file does not exist"
},
"Ebay": {
"url": "https://www.ebay.com/usr/{}",
"urlMain": "https://www.ebay.com/",
"errorType": "message",
"errorMsg": "The User ID you entered was not found"
},
"Slack": {
"url": "https://{}.slack.com",
"urlMain": "https://slack.com",
"errorType": "status_code",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"Trip": {
"url": "https://www.trip.skyscanner.com/user/{}",
"urlMain": "https://www.trip.skyscanner.com/",
"errorType": "message",
"errorMsg": "Page not found"
},
"Ello": {
"url": "https://ello.co/{}",
"urlMain": "https://ello.co/",
"errorType": "message",
"errorMsg": "We couldn't find the page you're looking for"
},
"HackerOne": {
"url": "https://hackerone.com/{}",
"urlMain": "https://hackerone.com/",
"errorType": "message",
"errorMsg": "Page not found"
},
"Tinder": {
"url": "https://www.gotinder.com/@{}",
"urlMain": "https://tinder.com/",
"errorType": "message",
"errorMsg": "Looking for Someone?"
},
"We Heart It": {
"url": "https://weheartit.com/{}",
"urlMain": "https://weheartit.com/",
"errorType": "message",
"errorMsg": "Oops! You've landed on a moving target!"
},
"Flickr": {
"url": "https://www.flickr.com/people/{}",
"urlMain": "https://www.flickr.com/",
"errorType": "status_code"
},
"WordPress": {
"url": "https://{}.wordpress.com",
"urlMain": "https://wordpress.com",
"errorType": "response_url",
"errorUrl": "wordpress.com/typo/?subdomain=",
"noPeriod": "True",
"regexCheck": "^[a-zA-Z][a-zA-Z0-9_-]*$"
},
"Unsplash": {
"url": "https://unsplash.com/@{}",
"urlMain": "https://unsplash.com/",
"errorType": "status_code"
},
"Pexels": {
"url": "https://www.pexels.com/@{}",
"urlMain": "https://www.pexels.com/",
"errorType": "message",
"errorMsg": "Ouch, something went wrong!"
},
"devRant": {
"url": "https://devrant.com/users/{}",
"urlMain": "https://devrant.com/",
"errorType": "response_url",
"errorUrl": "https://devrant.com/"
},
"MyAnimeList": {
"url": "https://myanimelist.net/profile/{}",
"urlMain": "https://myanimelist.net/",
"errorType": "status_code"
},
"ImageShack": {
"url": "https://imageshack.us/user/{}",
"urlMain": "https://imageshack.us/",
"errorType": "response_url",
"errorUrl": "https://imageshack.us/"
},
"Badoo": {
"url": "https://badoo.com/profile/{}",
"urlMain": "https://badoo.com/",
"errorType": "status_code"
},
"MeetMe": {
"url": "https://www.meetme.com/{}",
"urlMain": "https://www.meetme.com/",
"errorType": "response_url",
"errorUrl": "https://www.meetme.com/"
},
"Quora": {
"url": "https://www.quora.com/profile/{}",
"urlMain": "https://www.quora.com/",
"errorType": "status_code"
}
}
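Each entry above follows a small schema: `url` is a username template, `errorType` selects one of three detection strategies (`message`, `status_code`, `response_url`), and the optional `regexCheck` restricts the allowed username alphabet. A minimal sketch of how such an entry could be interpreted — a hypothetical helper, not code from this PR:

```python
import re

def username_exists(entry, username, status_code, body, final_url):
    """Interpret one data.json-style entry against a (hypothetical) HTTP response.

    status_code, body, and final_url stand in for the status code, text,
    and post-redirect URL of a real requests.Response object.
    """
    # Some sites restrict the allowed username alphabet via regexCheck.
    regex = entry.get("regexCheck")
    if regex and re.search(regex, username) is None:
        return "illegal"
    if entry["errorType"] == "message":
        # Absence of the known error phrase means the profile page rendered.
        return "no" if entry["errorMsg"] in body else "yes"
    if entry["errorType"] == "status_code":
        # Anything other than a 404 is treated as an existing account.
        return "no" if status_code == 404 else "yes"
    if entry["errorType"] == "response_url":
        # Landing on the known error URL after redirects means no account.
        return "no" if entry["errorUrl"] in final_url else "yes"
    return "error"
```

For instance, a 404 under the `status_code` strategy maps to `"no"`, matching the `Not Found!` branch in `sherlock.py`.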

Binary file not shown (after: 92 KiB).

@@ -1 +1,2 @@
requests
torrequest
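The new `torrequest` dependency powers the `--tor`/`--unique-tor` options. The object-swap pattern used in `make_request` — where either a `TorRequest` session or the plain `requests` module serves the `.get()` call — can be sketched as follows (assumes the `torrequest` package and a running Tor service for the Tor path; the fallback works without either):

```python
import requests
try:
    from torrequest import TorRequest
except ImportError:
    # torrequest (and Tor itself) may not be installed; fall back gracefully.
    TorRequest = None

def get_session(tor=False, unique_tor=False):
    """Return a TorRequest session if requested and available, else requests.

    Both objects expose a compatible .get(url, headers=...) call, which is
    what lets make_request treat them interchangeably.
    """
    if (tor or unique_tor) and TorRequest is not None:
        return TorRequest()  # routes traffic through the local Tor proxy
    return requests          # the module itself exposes .get()
```

With `unique_tor`, the real code additionally calls `reset_identity()` after each request to obtain a fresh Tor circuit.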

@@ -1,10 +1,21 @@
"""Sherlock: Find Usernames Across Social Networks Module
This module contains the main logic to search for usernames at social
networks.
"""
import requests
import json
import os
import sys
import argparse
import re
import csv
from argparse import ArgumentParser, RawDescriptionHelpFormatter
import platform
from torrequest import TorRequest
module_name = "Sherlock: Find Usernames Across Social Networks"
__version__ = "0.1.0"
DEBUG = False
# TODO: fix tumblr
@@ -12,41 +23,57 @@ def write_to_file(url, fname):
with open(fname, "a") as f:
f.write(url+"\n")
def print_error(err, errstr, var, debug = False):
if debug:
print (f"\033[37;1m[\033[91;1m-\033[37;1m]\033[91;1m {errstr}\033[93;1m {err}")
print(f"\033[37;1m[\033[91;1m-\033[37;1m]\033[91;1m {errstr}\033[93;1m {err}")
else:
print (f"\033[37;1m[\033[91;1m-\033[37;1m]\033[91;1m {errstr}\033[93;1m {var}")
print(f"\033[37;1m[\033[91;1m-\033[37;1m]\033[91;1m {errstr}\033[93;1m {var}")
def make_request(url, headers, error_type, social_network):
def make_request(url, headers, error_type, social_network, verbose=False, tor=False, unique_tor=False):
r = TorRequest() if (tor or unique_tor) else requests
try:
r = requests.get(url, headers=headers)
if r.status_code:
return r, error_type
rsp = r.get(url, headers=headers)
if unique_tor:
r.reset_identity()
if rsp.status_code:
return rsp, error_type
except requests.exceptions.HTTPError as errh:
print_error(errh, "HTTP Error:", social_network, DEBUG)
print_error(errh, "HTTP Error:", social_network, verbose)
except requests.exceptions.ConnectionError as errc:
print_error(errc, "Error Connecting:", social_network, DEBUG)
print_error(errc, "Error Connecting:", social_network, verbose)
except requests.exceptions.Timeout as errt:
print_error(errt, "Timeout Error:", social_network, DEBUG)
print_error(errt, "Timeout Error:", social_network, verbose)
except requests.exceptions.RequestException as err:
print_error(err, "Unknown error:", social_network, DEBUG)
print_error(err, "Unknown error:", social_network, verbose)
return None, ""
def sherlock(username, verbose=False, tor=False, unique_tor=False):
"""Run Sherlock Analysis.
Checks for existence of username on various social media sites.
def sherlock(username):
# Not sure why, but the banner messes up if I put it into one print function
print("\033[37;1m .\"\"\"-.")
print("\033[37;1m / \\")
print("\033[37;1m ____ _ _ _ | _..--'-.")
print("\033[37;1m/ ___|| |__ ___ _ __| | ___ ___| |__ >.`__.-\"\"\;\"`")
print("\033[37;1m\___ \| '_ \ / _ \ '__| |/ _ \ / __| |/ / / /( ^\\")
print("\033[37;1m ___) | | | | __/ | | | (_) | (__| < '-`) =|-.")
print("\033[37;1m|____/|_| |_|\___|_| |_|\___/ \___|_|\_\ /`--.'--' \ .-.")
print("\033[37;1m .'`-._ `.\ | J /")
print("\033[37;1m / `--.| \__/\033[0m")
print()
Keyword Arguments:
username -- String indicating username that report
should be created against.
verbose -- Boolean indicating whether to give verbose output.
tor -- Boolean indicating whether to use a tor circuit for the requests.
unique_tor -- Boolean indicating whether to use a new tor circuit for each request.
Return Value:
Dictionary containing results from report. Key of dictionary is the name
of the social network site, and the value is another dictionary with
the following keys:
url_main: URL of main site.
url_user: URL of user on site (if account exists).
exists: String indicating results of test for account existence.
http_status: HTTP status code of query which checked for existence on
site.
response_text: Text that came back from request. May be None if
there was an HTTP error when checking for existence.
"""
fname = username+".txt"
if os.path.isfile(fname):
@@ -57,66 +84,179 @@ def sherlock(username):
raw = open("data.json", "r", encoding="utf-8")
data = json.load(raw)
# User agent is needed because some sites do not
# return the correct information because they think that
# we are a bot
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:55.0) Gecko/20100101 Firefox/55.0'
}
# Results from analysis of all sites
results_total = {}
for social_network in data:
# Results from analysis of this specific site
results_site = {}
# Record URL of main site
results_site['url_main'] = data.get(social_network).get("urlMain")
# URL of user on site (if it exists)
url = data.get(social_network).get("url").format(username)
results_site['url_user'] = url
error_type = data.get(social_network).get("errorType")
cant_have_period = data.get(social_network).get("noPeriod")
if ("." in username) and (cant_have_period == "True"):
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m User Name Not Allowed!".format(social_network))
continue
r, error_type = make_request(url=url, headers=headers, error_type=error_type, social_network=social_network)
if error_type == "message":
error = data.get(social_network).get("errorMsg")
# Checks if the error message is in the HTML
if not error in r.text:
print("\033[37;1m[\033[92;1m+\033[37;1m]\033[92;1m {}:\033[0m".format(social_network), url)
write_to_file(url, fname)
else:
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Not Found!".format(social_network))
elif error_type == "status_code":
# Checks if the status code of the response is 404
if not r.status_code == 404:
print("\033[37;1m[\033[92;1m+\033[37;1m]\033[92;1m {}:\033[0m".format(social_network), url)
write_to_file(url, fname)
else:
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Not Found!".format(social_network))
elif error_type == "response_url":
error = data.get(social_network).get("errorUrl")
# Checks if the redirect url is the same as the one defined in data.json
if not error in r.url:
print("\033[37;1m[\033[92;1m+\033[37;1m]\033[92;1m {}:\033[0m".format(social_network), url)
write_to_file(url, fname)
else:
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Not Found!".format(social_network))
elif error_type == "":
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Error!".format(social_network))
regex_check = data.get(social_network).get("regexCheck")
# Default data in case there are any failures in doing a request.
http_status = "?"
response_text = ""
if regex_check and re.search(regex_check, username) is None:
#No need to do the check at the site: this user name is not allowed.
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Illegal Username Format For This Site!".format(social_network))
exists = "illegal"
else:
r, error_type = make_request(url=url, headers=headers, error_type=error_type, social_network=social_network, verbose=verbose, tor=tor, unique_tor=unique_tor)
# Attempt to get request information
try:
http_status = r.status_code
except:
pass
try:
response_text = r.text.encode(r.encoding)
except:
pass
if error_type == "message":
error = data.get(social_network).get("errorMsg")
# Checks if the error message is in the HTML
if not error in r.text:
print("\033[37;1m[\033[92;1m+\033[37;1m]\033[92;1m {}:\033[0m".format(social_network), url)
write_to_file(url, fname)
exists = "yes"
else:
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Not Found!".format(social_network))
exists = "no"
elif error_type == "status_code":
# Checks if the status code of the response is 404
if not r.status_code == 404:
print("\033[37;1m[\033[92;1m+\033[37;1m]\033[92;1m {}:\033[0m".format(social_network), url)
write_to_file(url, fname)
exists = "yes"
else:
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Not Found!".format(social_network))
exists = "no"
elif error_type == "response_url":
error = data.get(social_network).get("errorUrl")
# Checks if the redirect url is the same as the one defined in data.json
if not error in r.url:
print("\033[37;1m[\033[92;1m+\033[37;1m]\033[92;1m {}:\033[0m".format(social_network), url)
write_to_file(url, fname)
exists = "yes"
else:
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Not Found!".format(social_network))
exists = "no"
elif error_type == "":
print("\033[37;1m[\033[91;1m-\033[37;1m]\033[92;1m {}:\033[93;1m Error!".format(social_network))
exists = "error"
# Save exists flag
results_site['exists'] = exists
# Save results from request
results_site['http_status'] = http_status
results_site['response_text'] = response_text
# Add this site's results into final dictionary with all of the other results.
results_total[social_network] = results_site
print("\033[1;92m[\033[0m\033[1;77m*\033[0m\033[1;92m] Saved: \033[37;1m{}\033[0m".format(username+".txt"))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('username', help='check services with given username')
parser.add_argument("-d", '--debug', help="enable debug mode", action="store_true")
return results_total
def main():
version_string = f"%(prog)s {__version__}\n" + \
f"{requests.__description__}: {requests.__version__}\n" + \
f"Python: {platform.python_version()}"
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
description=f"{module_name} (Version {__version__})"
)
parser.add_argument("--version",
action="version", version=version_string,
help="Display version information and dependencies."
)
parser.add_argument("--verbose", "-v", "-d", "--debug",
action="store_true", dest="verbose", default=False,
help="Display extra debugging information."
)
parser.add_argument("--quiet", "-q",
action="store_false", dest="verbose",
help="Disable debugging information (Default Option)."
)
parser.add_argument("--tor", "-t",
action="store_true", dest="tor", default=False,
help="Make requests over TOR; increases runtime; requires TOR to be installed and in system path.")
parser.add_argument("--unique-tor", "-u",
action="store_true", dest="unique_tor", default=False,
help="Make requests over TOR with new TOR circuit after each request; increases runtime; requires TOR to be installed and in system path.")
parser.add_argument("--csv",
action="store_true", dest="csv", default=False,
help="Create Comma-Separated Values (CSV) File."
)
parser.add_argument("username",
nargs='+', metavar='USERNAMES',
action="store",
help="One or more usernames to check with social networks."
)
args = parser.parse_args()
if args.debug:
DEBUG = True
if args.username:
sherlock(args.username)
# Banner
print(
"""\033[37;1m .\"\"\"-.
\033[37;1m / \\
\033[37;1m ____ _ _ _ | _..--'-.
\033[37;1m/ ___|| |__ ___ _ __| | ___ ___| |__ >.`__.-\"\"\;\"`
\033[37;1m\___ \| '_ \ / _ \ '__| |/ _ \ / __| |/ / / /( ^\\
\033[37;1m ___) | | | | __/ | | | (_) | (__| < '-`) =|-.
\033[37;1m|____/|_| |_|\___|_| |_|\___/ \___|_|\_\ /`--.'--' \ .-.
\033[37;1m .'`-._ `.\ | J /
\033[37;1m / `--.| \__/\033[0m""")
if args.tor or args.unique_tor:
print("Warning: some websites might refuse connections over TOR, so using this option might increase connection errors.")
# Run report on all specified users.
for username in args.username:
print()
results = sherlock(username, verbose=args.verbose, tor=args.tor, unique_tor=args.unique_tor)
if args.csv == True:
with open(username + ".csv", "w", newline='') as csv_report:
writer = csv.writer(csv_report)
writer.writerow(['username',
'name',
'url_main',
'url_user',
'exists',
'http_status'
]
)
for site in results:
writer.writerow([username,
site,
results[site]['url_main'],
results[site]['url_user'],
results[site]['exists'],
results[site]['http_status']
]
)
if __name__ == "__main__":
main()
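The per-site dictionaries returned by `sherlock()` line up one-to-one with the CSV columns written in `main()`. A self-contained sketch of that mapping, using a fabricated result in place of real network output:

```python
import csv
import io

# Fabricated example of what sherlock() returns for one username.
results = {
    "GitHub": {
        "url_main": "https://www.github.com/",
        "url_user": "https://www.github.com/user123",
        "exists": "yes",
        "http_status": 200,
        "response_text": b"...",
    },
}

# Mirror of the CSV-writing loop in main(), against an in-memory buffer.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["username", "name", "url_main", "url_user", "exists", "http_status"])
for site, info in results.items():
    writer.writerow(["user123", site, info["url_main"], info["url_user"],
                     info["exists"], info["http_status"]])

print(buf.getvalue())
```

Note that `response_text` is collected in the results but deliberately left out of the CSV, which keeps the report to one short row per site.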

Binary file not shown (before: 137 KiB).
