Archive for the ‘privacy’ Category

creepy 0.2 or “Your SET was cool, but now it’s creepy too”

March 29, 2012

It’s been more than a year since the last major (and initial) release of creepy. A year that had everything: from nasty comments and ill-founded criticism to appreciation, shout-outs and positive feedback. Everything from random pull requests (only one, sadly enough) to “is this project dead or what, dammit?” Everything from 400+ downloads per day (for a short period) to the infamous Twitter key revocation issue that rendered the application useless for a couple of weeks (that deserves a post of its own, but I fear it will remain one of the posts I “should have written”).

When I wrapped my mind around the idea of an application like creepy, I had two explicit goals:

1) Raise awareness. Publicity came, both good and bad, and I got a little more personally involved than I would have imagined or wanted, but looking back over the past 13 months I feel that I succeeded. References in mainstream media (TV, newspapers, radio) and of course blogs/Twitter gave the project enough exposure to send the message across. I have no metrics, but I think it was a good scare for social network fanatics and a wake-up call for people to take their locational privacy a little more seriously. Or at least a good step towards it. Or at least that’s what I want to believe.

2) Useful tool for information gathering. I have had some good feedback and use cases from OSINT people and police agencies across the globe, but not so much from penetration testers. So, if you do use the tool, drop me a line, tell me if it has been useful, or submit a feature request.

Bah, anyway, enough with the retrospection. Without further delay, I give you creepy 0.2.

Release notes:

Availability:

The source code is already on GitHub, so feel free to try it from there. The Windows binaries and .deb packages should follow shortly!

Improvements:

  • More responsive user interface. Previous versions handled the information retrieval (tweets, photos…) and the geolocation extraction as a single event. This made people uncomfortable, since it could take a while and there was no way to know whether the application was still gathering data or had simply hung. That is why I separated the retrieval from the extraction. Now users get feedback that creepy has finished retrieving tweets before it starts analyzing them.
  • FASTER. I used the multiprocessing module to make the geolocation extraction process “threaded”. This made the whole application a lot, lot faster (a rough sketch of the idea follows below).
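
Roughly, the idea behind the speed-up looks like this. It is a minimal sketch and not creepy’s actual code; extract_location() is a hypothetical stand-in for the real per-tweet / per-photo extraction logic.

# A minimal sketch of the idea (not creepy's actual code): fan the
# geolocation extraction out over a pool of worker processes.
from multiprocessing import Pool

def extract_location(item):
    # Hypothetical stand-in: parse tweet coordinates, EXIF tags, etc.
    # and return (latitude, longitude) or None if nothing was found.
    return item.get('coordinates')

def extract_all(items, workers=4):
    pool = Pool(processes=workers)
    try:
        # map() blocks until every item is processed, but the items are
        # handled in parallel across the worker processes.
        results = pool.map(extract_location, items)
    finally:
        pool.close()
        pool.join()
    return [r for r in results if r is not None]

if __name__ == '__main__':
    tweets = [{'coordinates': (59.35, 18.07)}, {'coordinates': None}]
    print(extract_all(tweets))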

New Features:

    • Support for t.co links. A while ago, Twitter started replacing all links with t.co links. There was no support for parsing these links, so links to image hosting sites were being ignored.
    • Connected to the above, configurable exclusion of t.co link parsing. Since all links on Twitter are t.co links with no further information about them, creepy has to fetch them, follow the redirects and end up at the target website before it can determine whether it is a site (twitpic, yfrog etc.) from which it can gather geolocation information. As @thomzee mentioned on Twitter, the API actually returns the expanded URL. Oh well, someone didn’t go through the API well enough… I will update the code, but I think the need for the option is still valid, as extracting geolocation information from external links is by far the most time-consuming part. So, there is an option to ignore all links in the tweets and get geolocation information only from Twitter itself.
    • Control over the pagination of the results from Twitter. Many people had big problems with Twitter 502 responses (service unavailable). Although this should not be happening (at least not that often), I did some testing and realized it can be improved using tweepy’s Cursor object and its controls over result pagination. The earlier versions were hardcoded to fetch pages of 200 results, which is still the default value (and the maximum that Twitter allows), but if you get 502s often, you might want to play with lower numbers using the configuration file (a sketch follows at the end of this section).
    • Last but not least, my favorite! Creepy is good at aggregating information, but what can you do with it afterwards? I was thinking about that a lot and realized that I should somehow allow the data that creepy gathers to be used easily with other security tools. The first step in this direction is integration with the Social Engineer Toolkit. I am pretty sure there is no need for introductions, but if you haven’t heard about it, go to secmaniac and give the beast that Dave has put together a try. Creepy 0.2 allows you to create templates for SET’s spear-phishing attack vector. What if, instead of generic emails, you could create personalized ones that would put you in the target’s comfort zone and greatly raise the chances of him/her opening the attachment? Creepy uses a meta-template file format very similar to the one that SET itself uses for storing mail templates. The addition is that creepy allows you to insert specific geolocation and date/time information into the generic templates. Creepy comes with 3 templates you can use, and you are of course encouraged to create your own, since this attack vector is rather specialized. Let’s take a look at one of them to see how it works:
# Author: Ioannis Kakavas
#
# Email about photos
#
SUBJECT="Is that really you ?"

BODY="Hey man, \n I got this pictures of you in an email from a stranger last night. This guy claimed that you were in @area@ , sometime around @hour@ on @date@". It looks like he is stalking you or something.. Take a look at them and tell me if its really you, I got kind of spooked...\n Best, \n

The @area@ and @hour@ here are placeholders, and they are just two of the available ones, which include the following: @formatted_address@, @area@, @username@, @realname@, @date@, @time@, @ampm@, @datetime@, @month@, @day@, @year@, @hour@, @minutes@. So when you right-click in the location list in creepy, select “Export template for Social Engineer Toolkit” and pick the specific template from the drop-down list, creepy will parse the template, replace the placeholders with the actual information about the specific time and place, and save it in SET’s template directory. The included templates are here to get you going and give an example of what you can do, but the possibilities are endless! If people find it useful, I will create a new repository where we can share templates. So go play with the new feature, and please do come back with bugs and feature requests for how this could be made better.
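
For the curious, the placeholder substitution itself is simple. The snippet below is a minimal sketch of the idea, not creepy’s actual implementation; the template text and the values are made up for illustration.

# Minimal sketch of the placeholder substitution idea -- not creepy's
# actual implementation. In creepy the values come from the location
# you right-clicked in the list.
import re

TEMPLATE = ('SUBJECT="Is that really you ?"\n'
            'BODY="... you were in @area@, sometime around @hour@ on @date@ ..."\n')

def render_template(template_text, values):
    # Replace every @placeholder@ with its value; placeholders without
    # a value are left untouched.
    def substitute(match):
        return str(values.get(match.group(1), match.group(0)))
    return re.sub(r'@(\w+)@', substitute, template_text)

print(render_template(TEMPLATE, {'area': 'Lappis, Stockholm',
                                 'hour': '23',
                                 'date': '2012-03-29'}))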
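
And for the pagination option mentioned earlier, here is a minimal sketch using tweepy’s Cursor, again not creepy’s actual code. The credentials and screen name are placeholders; count is the page size that the configuration file lets you lower if 502s keep showing up.

# Minimal sketch (not creepy's actual code) of paging through a user's
# timeline with tweepy's Cursor. Credentials and screen name are
# placeholders.
import tweepy

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth)

tweets = []
# Smaller count values mean more, smaller requests -- worth trying if
# you keep running into 502 responses. 200 is the maximum Twitter allows.
for page in tweepy.Cursor(api.user_timeline, screen_name='some_user',
                          count=100).pages():
    tweets.extend(page)
print('Retrieved %d tweets' % len(tweets))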

Roadmap:

  • Working on creepy now made me realize that the code is slowly getting close to unmaintainable. I need to fix that now, before it grows into a spaghetti-code monster that I will have trouble debugging or adding new functionality to. That’s why the next major release will be only about redesigning and refactoring the application, without any added functionality.
  • Secondly, I will try to drop GTK and move to Qt. This will allow for a truly cross-platform application (Mac OS X included). I have already started working on it, and I dare say I like Qt better visually 😉
  • An Android version is in the plans, but not the immediate ones. I don’t know how useful it will be, but I think it will be cool.

Need for help:

This one-man-team thing is not progressing very well. Thirteen months for a major release is a bit too much, plus I hardly have time for bug fixing in the meanwhile. I work full time, free time is not plentiful, and there are oh so many more nice things in life apart from code that I need to enjoy 🙂 So if you feel like joining the team, drop me a mail or a tweet and let’s have a talk about it.

That’s that for now, go play and let me know how it went 😉


Honey, I canceled the laundry. – No factor authentication

August 8, 2011

How would you feel if you couldn’t wash your clothes? Like, never? Well, there’s a (web)app for that!!!

Let’s take things from the start: this post applies to people living in student accommodation offered by SSSB (Stiftelsen Stockholms Studentbostäder) in Stockholm, Sweden. Well, most of them actually, specifically the buildings where the electronic lock and booking system provided by Aptus has already been installed.

The situation: In the aforementioned housing establishments, external (and some internal) ordinary door locks have been replaced with electronic proximity readers and keys. Those proximity keys are also used for booking (laundry) services offered by the same company. Basically, when one needs to book a slot in the laundry rooms, he/she accesses the control unit, uses his/her proximity key to activate it and books the desired time slot. As advertised in the company brochures: “Communication between control unit and booking board is encrypted using 32-bit keys.” I’ll leave out the discussion about cloning proximity keys/cards, as it is irrelevant to the point of this post. For the time being, let’s all assume that the proximity keys are clone-proof, the 32-bit key sufficient, and the communication tamper-proof as implied.

The problem: SSSB, trying to be tech-savvy and helpful, offers another way to access the booking system, provided again by the same company. It is a web application built on ASP.NET where users can log in and manage their bookings (book, cancel, view) without having to physically access the installed control unit. Hm, so what is the problem, you might ask? Login credentials. The Aptus portal uses a username/password authentication system which, although not without all the potential password-related problems, can be considered a safe practice. I copy Wikipedia’s wording: “A password is a secret word or string of characters that is used for authentication, to prove identity or gain access to a resource (example: an access code is a type of password). The password should be kept secret from those not allowed access.” (emphasis is mine) Well, SSSB went a bit overboard in their attempt to make things easy for users, providing the username and password themselves. What’s worse? The username and the password are the same string. What’s worst? The password is not secret: it is publicly available in many cases, and in the worst case easily deduced.

The string used as username and password in the system is the object number of the apartment (Hyresobjekt), which is an 11-digit string of the form abcd-efgh-xyz. The abcd part is the 4-digit code of the housing area (for example, Lappkärrsberget has 7404, Jerum has 1106, etc.). The efgh part is somehow (not in a consistent manner, from what I’ve seen) derived from the street number of the building and the floor number of the room (an example room has 1308 because the address is xxxxxx 13 and it is on the seventh floor; the rooms on the sixth floor have 1307, and so on and so forth). Lastly, the xyz part is derived from the room number within the floor. Some correspond to the actual room number, so if the room number is 11 the code is 011; some are derived from some older ordering, I guess. My room’s xyz part doesn’t correspond to my room number, but it does correspond to my kitchen cupboard’s number, which I suppose is a left-over from previous numbering schemes.

Taking into consideration that a room’s object number is publicly available on SSSB’s website when the room is open for bidding, and that especially in some periods, like August, SSSB updates the available rooms every 3 days, it shouldn’t be really difficult to deduce all the possible object numbers for all the apartments on SSSB premises. Worst case scenario, with only the abcd part available for each housing area, one could fire up THC Hydra and get the valid object numbers from the successful logins. ***I’m not suggesting that you should go and do that***.

Sure, it’s not a life-threatening issue, but it just goes to show how easily sophisticated access control systems can be circumvented due to bad design and implementation decisions. Your neighbor had a party the night before your exam? Well, no laundry for him for the next month! Or worse, consider an automated script changing laundry booking times every hour for all the students living in SSSB housing (that was up to 7,000 rooms in 2000; SSSB doesn’t have updated statistics, but I guess it’s fair to argue that it is more than 10,000 people). Really not convenient. 10,000 students walking around with dirty clothes in trendy and fashion-victim Stockholm shouldn’t be that much fun! One can go a bit further into the paranoia zone and claim that valuable information about the whereabouts of a tenant can be deduced from the laundry bookings (time of day that he/she is at home, etc.).

The solution: The solution isn’t that hard to implement. SSSB already has an authentication system for its website and a general SSSB account, based on the tenant’s personal number and a password. How hard can it be to connect that to the booking portal? I contacted SSSB in October 2010, but after the kind reply from the helpdesk thanking me for my thoughts and assuring me that they would be forwarded to the people managing the system, nothing has been done, so I guess it’s fair to come out with the issue and let all interested parties know. I, for one, am already irritated enough by my bookings moving time slots “by themselves” often enough.

Keep clean 😉

Harvesting google profiles

May 19, 2011

Some minutes ago, I saw an interesting tweet from Mikko H. Hypponen saying that he had found out that all (yes, as in ALL: 35,513,445) Google profile addresses can be retrieved from a single XML file. I looked through it and, yep, he was quite right.

Well, all this information is going to be useful somehow, right? Right. In case it gets removed, here is a simple way to harvest it before that happens:

#!/usr/bin/env python
# Python 2 script: download every sitemap file listed in Google's
# public profiles sitemap index.

import urllib
from BeautifulSoup import BeautifulStoneSoup as bs

# The sitemap index is an XML file whose <loc> entries point to the
# individual sitemap files containing the profile URLs.
xml = bs(urllib.urlopen('http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml').read())
for i in xml.findAll('loc'):
    try:
        # Save each sitemap file locally, named after the last part of its
        # URL (everything after 'http://www.gstatic.com/s2/sitemaps/').
        urllib.urlretrieve(i.text, i.text[35:])
        print 'Downloaded %s' % i.text[35:]
    except Exception:
        print '%s could not be retrieved' % i.text
print 'All done'

That’s it: save it, run it and wait 🙂 Not that I used it, but I calculate that you get around 1.7 GB worth of profile links.
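
If you want a rough sanity check of what you fetched, something like the snippet below counts the harvested profile URLs. It assumes the downloaded files are plain-text sitemaps with one URL per line, saved in the current directory with names starting with “sitemap”; adjust if they turn out to be XML or named differently.

# Rough sanity check: count profile URLs in the downloaded sitemap files.
# Assumes plain-text sitemaps, one URL per line, named sitemap*.
import glob

total = 0
for path in glob.glob('sitemap*'):
    with open(path) as f:
        total += sum(1 for line in f if line.strip())
print('Found %d profile URLs' % total)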

Well, the juicy part is obviously the harvesting of the information from the profiles themselves. People are mentioning on Twitter that Google has been aware of this for a long time, or at least should be. Thoughts about the potential implications of that harvesting will follow in a blog post to come.

Introducing creepy …

February 3, 2011

or “The birdy told me where you’ve been”.

Creepy – A geolocation information aggregator

Background:

Well, privacy concerns with regard to information shared across social networking platforms are not something new. At the same time, they don’t get old either. I will spare you the links session; Google can fetch all the papers and articles you’ll ever need. Location awareness is rolled out not only in platforms created for that reason (Foursquare, Gowalla) but also in Facebook (call me Places) and Twitter (the location feature(?)). Moreover, as was shown here almost a year ago by Johannes B. Ullrich (@johullrich), users tend to (even unknowingly) share their location via EXIF tags in the pictures they share with everyone on image hosting services. Then came PleaseRobMe and iCanStalkYou, which really helped to make the point clear.

What:

Creepy is a geolocation aggregator. It searches for geolocation information that a user has shared publicly on the social networking platforms he/she uses. In the released version, Twitter, Flickr, Foursquare (through Twitter) and a list of image hosting services are supported. You feed creepy the Twitter username and/or Flickr ID of the user and it retrieves all the locations the user has shared. Locations are determined by:

  • Location information on Twitter
  • Foursquare check-ins
  • EXIF tags from pictures uploaded to a number of image hosting services and posted to Twitter
  • Geolocation information from photos posted on Flickr

Locations are presented as a list and are also shown on an embedded map (courtesy of the awesome osmgpsmap widget). For each location, the context is also presented (i.e. the text that the user tweeted). Features include automatic caching of discovered information (retrieved tweets, determined locations) in order to minimize API calls to Twitter and Flickr, and a (not so nice) GUI in PyGTK.
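
To give an idea of the EXIF part, the sketch below shows how GPS coordinates can be read out of a geotagged JPEG with the Python Imaging Library. It is not creepy’s actual code, the file name is hypothetical, and the tag handling is simplified.

# Minimal sketch (not creepy's actual code) of pulling GPS coordinates
# out of a JPEG's EXIF data with the Python Imaging Library.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def to_float(r):
    # EXIF rationals are (numerator, denominator) tuples in old PIL
    try:
        return float(r[0]) / float(r[1])
    except TypeError:
        return float(r)  # newer Pillow exposes a rational object instead

def gps_from_jpeg(path):
    exif = Image.open(path)._getexif() or {}
    gps = {}
    # Find the GPSInfo block and translate its numeric tag ids to names
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == 'GPSInfo':
            gps = dict((GPSTAGS.get(t, t), v) for t, v in value.items())
    if 'GPSLatitude' not in gps or 'GPSLongitude' not in gps:
        return None
    # Convert degrees/minutes/seconds to decimal degrees
    lat = sum(to_float(x) / f for x, f in zip(gps['GPSLatitude'], (1, 60, 3600.0)))
    lon = sum(to_float(x) / f for x, f in zip(gps['GPSLongitude'], (1, 60, 3600.0)))
    if gps.get('GPSLatitudeRef') == 'S':
        lat = -lat
    if gps.get('GPSLongitudeRef') == 'W':
        lon = -lon
    return lat, lon

print(gps_from_jpeg('some_photo.jpg'))  # hypothetical geotagged photo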

Why:

Well, I have had the idea since I first read the article by Dr. Ullrich. Then came icanstalkyou and pleaserobme, but they were not exactly what I was thinking about.

OK, the goal is twofold. First, to raise awareness. By making the process of retrieving and analyzing all the location-specific information that users share easy and automated, I hope to make clear how easy it is for someone to stalk you, rob you, find out where you’ve been and why, etc. It’s not worth rewriting how one can defend himself and control the information he is sharing, so I’ll provide a link to the instructions posted on icanstalkyou about disabling geotagging on smartphones, and see here for how to enable/disable the location feature in Twitter.

The second goal is to create a tool to add to your social engineering toolbox. The ethics are a bit blurry, so I just want to state that I do not endorse stalking or any other use of creepy for malicious purposes. What the app does is aggregate information that is already publicly shared. If you find that useful, you’re welcome to use it 🙂

Screenshots:

Here are some screenshots of creepy in use


Where:

You can find creepy on GitHub. I can count on the fact that there are many bugs lurking around, so use the tool, find them, and I promise I’ll fix them soon enough. Enjoy!