According to Liespotting author Pamela Meyer, we live in a sea of deception, lying and being lied to dozens if not hundreds of times per day. However, you can learn to spot liars and get to the truth. She explains some of the statistics on lying as well as techniques to spot lies in this 20-minute TED talk. Worth a viewing for all Paranoid Peeps.
An interesting post from www.social-engineer.org asks whether an attacker with a visible handicap (real or simulated) has an advantage in getting your users to give up sensitive information… the available research seems to indicate that it does… worth a read!
Here is a textbook description of what companies should NOT do when someone privately reports a security vulnerability in their publicly available web site which is chock full of PII…
A couple of observations about the article…
The guy who found and reported the vulnerability was a customer of the firm in question and seems to have done everything in an above-board manner.
It sounds like the vulnerability involved changing a single parameter in a URL in order to access another customer’s account. Whoever designed/wrote that application needs some serious re-edumacation at the very least. Maybe these are the folks who should be paying to fix the vulnerability.
I’m not sure why they are demanding the researcher’s computer. The nature of the vulnerability would make it extremely easy to make sure he did not access additional PII by simply reading the web server logs.
I’ll bet that plenty of people at this organization are wishing that this incident never hit the news. Had they simply thanked the researcher and fixed the bug, their customers and business would have been protected and they would not have gotten such a public flogging. If I were a customer of theirs, I’d be wondering about the rest of their information security right about now.
So, to sum things up… WTF!
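For the curious, the flaw described above sounds like a textbook insecure direct object reference (IDOR): the application trusts whatever account ID shows up in the URL. A minimal sketch of the broken pattern and the fix, using made-up account data and function names (not the actual application):

```python
# Hypothetical account store -- stands in for the real application's database.
ACCOUNTS = {
    "1001": {"owner": "alice", "ssn": "***-**-1111"},
    "1002": {"owner": "bob",   "ssn": "***-**-2222"},
}

def get_account_broken(account_id):
    """The broken pattern: serve whatever ID arrives in the URL.
    Changing ?account=1001 to ?account=1002 exposes another customer."""
    return ACCOUNTS.get(account_id)

def get_account_fixed(account_id, logged_in_user):
    """The fix: verify the requester actually owns the record."""
    record = ACCOUNTS.get(account_id)
    if record is None or record["owner"] != logged_in_user:
        return None  # in a real web framework, return a 403 instead
    return record
```

The fix is one ownership check per request, which is part of what makes the vendor's response so baffling.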
The latest edition of Microsoft’s Security Intelligence Report provides some interesting analysis as to how computers get infected with malware. Microsoft’s dataset is pretty large, comprising some 600 million computers equipped with Microsoft’s Malicious Software Removal Tool (MSRT) which reports details of malware infections back to the mother ship in Redmond. The numbers hold some important lessons for security professionals.
Don’t get your knickers in a twist about zero-day exploits. While the press loves a good zero-day story, only 0.12% of the infections seen by Microsoft used unpatched vulnerabilities. Zero-day vulnerabilities are valuable commodities which attackers will not waste on run-of-the-mill cyberattacks. Don’t center your anti-malware program on the latest zero-day vulnerability of the week.
Vulnerabilities are sooo last year – your users are the weakest link. Only about 6% of malware infections seen by Microsoft were the result of vulnerability exploitation. In contrast, almost half of all malware infections in the study required the user to take an action (clicking a link, running a program, opening an attachment, etc.) in order for the infection to be successful. In most cases, no vulnerability was used – the user simply gave the malware permission to run. Spending some time and effort edumacating your users to be skeptical and think before they click that link or open that attachment has the potential to significantly reduce your malware attack surface.
You still need to keep software up to date. Testing and installing patches from Microsoft and other vendors will protect your systems from the 7% of attacks which use exploits to worm their way in (get it?) to your systems. This is a small portion of the malware threat, but once you get patching and updating to be part of your normal automated business processes, it is a low touch, low cost addition to your malware defenses.
Filtering and monitoring your outbound web traffic is a must – if malware is unable to download code, connect to command and control servers or exfiltrate data, the threat it poses is greatly reduced. Keep your filter lists up to date with the latest known malware URLs – the subscription fees are a small price to pay for preventing access to the malweb in the first place.
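The core of that filtering step is a simple lookup: does the destination host (or any parent domain) appear on the blocklist? A toy sketch of the check, with a made-up blocklist; real deployments rely on a proxy appliance and subscription feeds:

```python
from urllib.parse import urlparse

# Made-up blocklist entries -- in practice this comes from a subscription feed.
BLOCKED_DOMAINS = {"evil-c2.example", "malware-drop.example"}

def allow_outbound(url):
    """Return False if the URL's host, or any parent domain, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check "cdn.bad.example", then "bad.example", then "example".
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS
                   for i in range(len(parts)))
```

Checking parent domains matters because malware distributors spin up throwaway subdomains faster than any list can track them individually.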
Monitoring your network traffic, proxy logs, and changes to the services running on your hosts for strange patterns can pay off big time. Since we can’t count on signatures to find every type of malware you may encounter, look for strange behavior for the early warning signs.
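One cheap behavioral heuristic along these lines: flag destinations that only one or two internal hosts ever talk to, since legitimate services tend to be contacted broadly while command-and-control beacons are not. A toy sketch, assuming a hypothetical proxy log reduced to (client, destination) pairs:

```python
from collections import defaultdict

def rare_destinations(log_entries, max_clients=1):
    """Return destinations contacted by at most max_clients internal hosts --
    a crude 'rare domain' early-warning heuristic, not a real detector."""
    clients_per_domain = defaultdict(set)
    for client, domain in log_entries:
        clients_per_domain[domain].add(client)
    return sorted(d for d, clients in clients_per_domain.items()
                  if len(clients) <= max_clients)

# Hypothetical proxy log: three hosts hit a vendor update site,
# one lone host beacons somewhere nobody else goes.
log = [
    ("10.0.0.5", "update.vendor.example"),
    ("10.0.0.6", "update.vendor.example"),
    ("10.0.0.7", "update.vendor.example"),
    ("10.0.0.9", "odd-beacon.example"),
]
```

Heuristics like this generate false positives, so treat the output as a starting point for a human to investigate, not an alarm.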
I found Microsoft’s analysis of the malware problem to be pretty interesting and I am looking forward to reviewing the rest of the Security Intelligence Report for nuggets of wisdom – I’ll post more soon!
Sometimes I feel like I’m selling elephant repellent:
I identify a particular species of elephant (for example, compromise of our networks due to spearphish delivered email).
I find examples of this particular elephant showing up on the networks of similar organizations.
I try to calculate the damage which said elephant would cause (which nearly always includes hard-to-quantify types of damage to things like “reputation” and “trust”).
I run some tests to show that, yes, some of our users would in fact happily open the gates of the village to this particular elephant by clicking on just about any link emailed to them.
I then look for some sort of elephant repellent – a policy, a procedure, education, some technology or a combination of the above to keep said pachyderm from rampaging through our village.
Of course, elephant repellent is not free… there is a cost in productivity, usability, share of user attention, or cold hard cash. If the risk to cost ratio seems right, I take action, spraying elephant repellent all around the village. Time passes. No elephants show up, I proudly announce the success of this particular elephant repellent and start looking for the next elephant to repel. Of course, the question remains as to whether the lack of elephantine activity in the village is due to the repellent, well, repelling, or whether the elephants never would have shown up at the village gates in the first place. (Or whether the elephants will get clever, show up next week, and trample the place in spite of my efforts.)
Elephants come in a variety of sizes. Some of them can rampage through the village and leave a wide path of destruction. Other elephants sound scary, but end up being more mouse like in their impact. If you ring the elephant alarm every day, the villagers (in particular, the village elders) are going to pay less attention as time goes on. Elephants are also unpredictable – sometimes they show up, other times, they pass your village by and trample the village next door. You gotta pick your elephants. I guess that is part of the “art” side of infosec (anticipating howls of protest from the quantitative guys on this).
At least Infosec people don’t usually have to deal with elephants which kill people – let’s say, a devastating earthquake. The stakes are, of course, very high in these cases and the village elders can get very angry when these elephants make it through the village gates. In fact, six seismologists and a government official are currently on trial for manslaughter in Italy for failing to predict an earthquake which struck the L’Aquila region in April 2009. Yes, you read that right… While this episode may be an outlier, it does point out the rising expectations of all sorts of village elders (both corporate and governmental) as to the risk experts’ ability to make very accurate predictions of risks – expectations which may not be possible to achieve. Call it the “CSI effect” – we are used to seeing all sorts of cool technology providing definite answers to questions and we have come to expect that all questions can be answered in this way.
We as Infosec professionals have to strike a balance between the quantitative and qualitative approaches to choosing which elephants to worry about. To add to the problem, some of us (particularly in highly regulated industries like finance) are given a set of elephants which we must repel by regulators and other stakeholders. These “default elephants” may pose less risk to the village than other, less famous, elephants, but we have to divert resources (and repellent) to deal with them in order to stay in business.
So… the takeaway? We need to share best practices for spotting, measuring and evaluating risk from both a qualitative and quantitative point of view. Organizations like the FS-ISAC (and other industry ISACS) where we can share information in confidence with our peers are a great place to do this. We need to up the level of information sharing in these fora – while it is great to get lists of bad IP addresses and URLs, I’d also like to see more (anonymous) sharing of stories about risks and repellents. The more people looking at the elephant and reporting on what it did when it visited their villages the better picture we can put together.
You might want to watch this video before trusting your valuables to a hotel room “safe…”
The evil unchanged default password strikes again!
According to a study published by Danish security vendor CSIS…
When a Microsoft Windows machine gets infected by viruses/malware it does so mainly because users forget to update the Java JRE, Adobe Reader/Acrobat and Adobe Flash
Most users (and many IT folks) don’t really think too much about these “helper” programs, even though they are installed on almost all workstations in our environments. This makes sense, as users almost never run these programs knowingly – they get executed in the background when web pages are visited or documents are viewed. Users do get reminders when new updates are available, but how often do your users take the time to let the updates install and reboot their systems? Rolling these updates out is a pain in the nether regions, but the payoff (protection against 80% plus of the most commonly used attack vectors) is high. Buy your IT guys and gals a beer and get this terrible trio on your periodic update schedule. And remember to let users know when they need to update their personal systems…
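If you want to see how far behind your fleet is, the comparison is mechanical: parse each installed version string and compare it against the current vendor release. A toy sketch; the version numbers below are placeholders, not real release numbers, so feed it your own inventory:

```python
def version_tuple(v):
    """Turn '10.1.3' into (10, 1, 3) so versions compare numerically,
    not as strings (where '10' would sort before '9')."""
    return tuple(int(part) for part in v.split("."))

def outdated(installed, required_minimums):
    """Return the names of products running below the required version."""
    return sorted(name for name, ver in installed.items()
                  if version_tuple(ver) < version_tuple(required_minimums[name]))

# Placeholder inventory for one workstation -- not real release numbers.
inventory = {"Java JRE": "1.6.20", "Adobe Reader": "9.4", "Flash": "10.3"}
minimums  = {"Java JRE": "1.6.29", "Adobe Reader": "9.4", "Flash": "11.0"}
```

Run something like this against your asset inventory and the gap between “users get update reminders” and “updates are actually installed” becomes painfully visible.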