It’s been more than a year since the last major (and initial) release of creepy. A year that had everything: from nasty comments and ill-founded criticism to appreciation, shout-outs and positive feedback. Everything from random pull requests (only one, sadly) to “is this project dead or what, dammit?” Everything from 400+ downloads per day (for a short period) to the infamous Twitter key revocation issue that rendered the application useless for a couple of weeks (this deserves a post of its own, but I fear it will remain one of the posts that “I should have written”).
When I wrapped my mind around the idea of an application like creepy, I had two explicit goals:
1) Raise awareness. Publicity came, both good and bad, and I got a little more involved personally than I would have imagined/wanted, but looking back over the past 13 months I feel that I succeeded in that. References in mainstream media (TV, newspapers, radio) and of course blogs/Twitter gave the project enough exposure to send the message across. I have no metrics, but I think it was a good scare for social network fanatics and a wake-up call for people to take their locational privacy a little more seriously. Or at least a good step towards it. Or at least that’s what I want to believe.
2) A useful tool for information gathering. I have had some good feedback and use cases from OSINT people and police agencies across the globe, but not so much from penetration testers. So, if you do use the tool, drop me a line, tell me if it has been useful, submit a feature request.
Bah, anyway, enough with the retrospection. Without further delay, I give you creepy 0.2.
Release notes:
The source code is already on GitHub, so feel free to try it from there. The Windows binaries and deb packages should follow shortly!
- More responsive user interface. Previous versions handled the information retrieval (tweets, photos, etc.) and the geolocation extraction as a single event. This made people uncomfortable, since it could take a while and there was no way to know whether the application was still gathering data or had simply hung. That is why I separated retrieval from extraction: users now get feedback that creepy has finished retrieving tweets before it starts analyzing them.
- FASTER. I used the multiprocessing module to make the geolocation extraction process “threaded”. This made the whole application a lot, lot faster.
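To give a feel for the idea (this is a hedged sketch, not creepy’s actual code — `extract_coords` and the coordinate regex are made-up stand-ins for the real extraction logic), handing each tweet’s text to a pool of worker processes lets the extraction step run in parallel instead of one tweet at a time:

```python
# Sketch: parallel geolocation extraction with multiprocessing.Pool.
# extract_coords is a hypothetical stand-in for creepy's extraction step.
import re
from multiprocessing import Pool

COORD_RE = re.compile(r"(-?\d+\.\d+),\s*(-?\d+\.\d+)")

def extract_coords(text):
    """Return a (lat, lon) tuple if the text embeds coordinates, else None."""
    m = COORD_RE.search(text)
    return (float(m.group(1)), float(m.group(2))) if m else None

if __name__ == "__main__":
    tweets = [
        "having coffee at 37.9838, 23.7275",
        "no location here",
        "landed! 51.4700, -0.4543",
    ]
    # Each tweet is analyzed in a separate worker process
    with Pool(processes=2) as pool:
        results = pool.map(extract_coords, tweets)
    print(results)
```

Since each tweet is independent of the others, the work splits cleanly across processes, which is why the speed-up is so noticeable.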
New Features:
- Support for t.co links. A while ago, Twitter started replacing all links with t.co links. There was no support for parsing these links, so links to image hosting sites were being ignored.
- Connected to the above, configurable exclusion of t.co link parsing. Since all links on Twitter are t.co links, with no further information about them, creepy has to fetch them, follow the redirects and end up at the target website before it can determine whether it is a site (Twitpic, yfrog, etc.) from which it can gather geolocation information. As @thomzee mentioned on Twitter, the API actually returns the expanded URL. Oh well, someone didn’t go through the API docs well enough.. I will update the code, but I think the option is still valid, as extracting geolocation information from external links is by far the most time-consuming part. So, there is an option to ignore all links in the tweets and get geolocation information only from Twitter itself.
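The decision at the end of that redirect chain boils down to a hostname check. A minimal sketch (the host list and helper name are illustrative, not creepy’s actual code):

```python
# Sketch: after following a t.co redirect, decide whether the final URL
# is on an image host we know how to extract geolocation data from.
# IMAGE_HOSTS is a hypothetical, deliberately short example list.
from urllib.parse import urlparse

IMAGE_HOSTS = {"twitpic.com", "yfrog.com"}

def is_image_host(url):
    """Return True if the URL's hostname is a known image-hosting site."""
    host = urlparse(url).hostname or ""
    return host.lower() in IMAGE_HOSTS

print(is_image_host("http://twitpic.com/abc123"))  # → True
print(is_image_host("http://example.com/page"))    # → False
```

The expensive part is not this check but the network round-trips to resolve each t.co link, which is why skipping link parsing entirely remains a worthwhile option.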
- Control over the pagination of results from Twitter. Many people had big problems with Twitter 502 responses (service unavailable). Although this should not be happening (at least not that often), I did some testing and realized it can be improved using tweepy’s Cursor controls over result pagination. Earlier versions were hardcoded to fetch pages of 200 results, which is still the default value (and the maximum Twitter allows), but if you get 502s often, you might want to try lower numbers via the configuration file.
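The idea behind the setting, in a self-contained sketch (the `paginate` helper stands in for a tweepy Cursor page request; it is illustrative, not creepy’s or tweepy’s real code):

```python
# Sketch: fetch results in configurable page sizes, so a flaky backend
# (frequent 502s) can be worked around by asking for smaller pages.
def paginate(items, page_size):
    """Yield successive pages of at most page_size items."""
    for start in range(0, len(items), page_size):
        yield items[start:start + page_size]

tweets = list(range(450))            # pretend these are 450 tweets
pages = list(paginate(tweets, 200))  # the old hard-coded default
print([len(p) for p in pages])       # → [200, 200, 50]
```

Smaller pages mean more requests overall, but each one is cheaper for the server, so a transient 502 costs you less retried work.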
- Last but not least, my favorite! Creepy is good at aggregating information, but what can you do with it afterwards? I thought about that a lot and realized that I should somehow allow the data that creepy gathers to be used easily with other security tools. The first step in this direction is integration with the Social Engineer Toolkit. I am pretty sure there is no need for introductions, but if you haven’t heard of it, go to secmaniac and give the beast that Dave has put together a try. Creepy 0.2 allows you to create templates for SET’s spear-phishing attack vector. What if, instead of generic emails, you could create personalized ones that put you in the target’s comfort zone and greatly raise the chances of him/her opening the attachment? Creepy uses a meta-template file format very similar to the one SET itself uses for storing mail templates. The addition is that creepy lets you insert specific geolocation and date/time information into the generic templates. Creepy comes with 3 templates you can use, and you are of course encouraged to create your own, since this attack vector is rather specialized. Let’s take a look at one of them to see how it works:
# Author: Ioannis Kakavas
#
# Email about photos
#
SUBJECT="Is that really you?"
BODY="Hey man, \n I got these pictures of you in an email from a stranger last night. This guy claimed that you were in @area@, sometime around @hour@ on @date@. It looks like he is stalking you or something.. Take a look at them and tell me if it's really you, I got kind of spooked...\n Best, \n"
The @area@ and @hour@ in this case are placeholders, and they are just two of the available ones, which include the following: @formatted_address@, @area@, @username@, @realname@, @date@, @time@, @ampm@, @datetime@, @month@, @day@, @year@, @hour@, @minutes@.
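The substitution itself is straightforward. Here is a minimal sketch of how @placeholder@ filling could work (hypothetical helper and made-up example values, not creepy’s actual implementation):

```python
# Sketch: replace @key@ markers in a template with concrete values.
def fill_template(template, values):
    """Replace each @key@ marker with the corresponding value."""
    for key, value in values.items():
        template = template.replace("@%s@" % key, str(value))
    return template

body = "This guy claimed that you were in @area@, sometime around @hour@ on @date@."
print(fill_template(body, {"area": "Chania", "hour": "21", "date": "2011-09-14"}))
```

Creepy does the equivalent with the real location and timestamp of the selected point before dropping the result into SET’s template directory.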
So, when you right-click in the location list in creepy, select “Export template for Social Engineer Toolkit” and pick the specific template from the drop-down list, creepy will parse the template, replace the placeholders with the actual information about that time and place, and save it in SET’s template directory. The included templates are there to get you going and give an example of what you can do, but the possibilities are endless!! If people find it useful, I will create a new repository where we can share templates. So go play with the new feature, and please do come back with bugs and feature requests for how it could be made better.
- Working on creepy now made me realize that the code is slowly getting close to unmaintainable. I need to fix that now, before it grows into a spaghetti-code monster that I will have trouble debugging or adding new functionality to. That’s why the next major release will be purely about redesigning and refactoring the application, without any added functionality.
- Secondly, I will try to drop GTK and move to Qt. This will allow for a truly cross-platform application (Mac OS X included). I have already started working on it, and I dare say I like Qt better visually 😉
- An Android version is in the plans, but not the immediate ones. I don’t know how useful it will be, but I think it will be cool.
Need for help:
This one-man-team thing is not progressing very well. 13 months for a major release is a bit too much, plus I hardly have time for bug fixing in the meantime. I work full time, free time is not plentiful, and there are oh so many nicer things in life apart from code that I need to enjoy 🙂 So if you feel like joining the team, drop me a mail or a tweet and let’s have a talk about it.
That’s that for now, go play and let me know how it went 😉