Due to some last minute stuff at work, I am sitting here in rainy, hot and humid NYC rather than getting my geek on out in Vegas with my trusty security minions and the folks at BSides and Defcon. I am really bummed – it sounds like some really good stuff is going on in Sin City this week. I am monitoring the Twitter-sphere, Blogo-sphere, and Newso-sphere for interesting items coming out of Vegas by my fellow security bloggers and will post some links to their coverage over the next few days. This pesky day job thing can really get in the way of the really important stuff, like attending cons. Dang.
Sounds like Adobe is planning to take action to make Reader a less attractive target for hackers. According to a report out today, the maker of the ubiquitous document rendering software will release a new version of Reader which “sandboxes” PDF documents in a restricted environment while they are read. This will mean that if the file contains malicious code, that code will be trapped in a virtual jail and will be unable to access the underlying operating system for its nefarious purposes. Similar technology is used in Google’s Chrome browser (my personal favorite) and Microsoft Office 2010. The first version will just block writes to the host computer, but later versions will also control other operations from PDFs. While this is not a cure-all, it sounds like a great step forward and will provide another layer of defense from evil PDFs.
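The architecture behind this kind of sandbox is a “broker” pattern: the rendering code never touches the operating system directly, but has to ask a trusted broker process, which enforces policy. Here’s a toy Python sketch of that split — the class and function names are my own invention for illustration, not Adobe’s actual design:

```python
class BrokeredFileSystem:
    """The 'broker': the only component allowed to touch the real OS.
    The first sandboxed Reader is said to block writes, so model that."""
    def __init__(self, allow_writes: bool = False):
        self.allow_writes = allow_writes

    def request_write(self, path: str, data: bytes) -> None:
        if not self.allow_writes:
            raise PermissionError(f"sandbox policy: write to {path} denied")
        # (a real broker would perform the write on the renderer's behalf)

def render_pdf(document: bytes, fs: BrokeredFileSystem) -> int:
    """The 'sandboxed' renderer: it never touches the OS itself, only
    asks the broker. A malicious write attempt is simply refused."""
    if b"/Launch" in document:                  # toy stand-in for exploit code
        fs.request_write("C:\\evil.exe", b"payload")
    return document.count(b"/Page")             # toy 'rendering' result

fs = BrokeredFileSystem()
print(render_pdf(b"%PDF /Page /Page", fs))      # a benign file renders fine
try:
    render_pdf(b"%PDF /Launch", fs)             # a hostile file is contained
except PermissionError as e:
    print(e)
```

The key point is that even if the renderer is fully compromised by a malicious PDF, everything it can do to the host is limited to what the broker’s policy permits.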
In other sandbox news, Dell’s KACE systems management division released a free tool which combines the Mozilla Firefox browser with Adobe Flash and Acrobat Reader in a virtualized package, allowing web browsing to take place within a sandbox isolated from the rest of the Windows environment. They also offer a management appliance (not free) which allows enterprises to deploy and manage Secure Browsers on hundreds or thousands of computers. I have not yet had a chance to play with this tool, but it looks promising.
Some new developments in the Siemens SCADA trojan story…
It turns out that the trojan uses a well-known default password to log in to the back-end MySQL database used by Siemens’ software. Worse, Siemens has told users of the software (factories, power plants and the like) NOT to change the database password, as doing so would cause the software to stop working. A fix is forthcoming, but plant operators are likely to have an anxious few days (or more?) until a solution is available.
A second version of the trojan program has been detected on the Interwebs. The new variant also seems to be targeting SCADA systems, and is also signed with a code-signing certificate (this time from Taiwan-based JMicron Technology Corp, which has offices in the same location as the firm whose cert was appropriated for the first version of the worm).
The whole default password thing is just plain embarrassing… this is a problem from another era, which should be an unpleasant memory by now. It seems like it would be easy to eliminate this problem programmatically by creating a unique database password (derived from the license key and a secret, maybe?) by default when the software is installed. Or at least require the installing user to enter a password during installation. SCADA systems control the technological backbone of our civilization (power, water, sewage, manufacturing) and deserve better security than this.
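A sketch of what I mean — derive the password per installation, so that a leaked “default” is worthless against any other site. This is my own toy scheme, not anything Siemens actually does:

```python
import hashlib
import hmac
import os

def derive_db_password(license_key: str, install_secret: bytes) -> str:
    """Derive a per-installation database password from the product
    license key and a random secret generated by the installer."""
    digest = hmac.new(install_secret, license_key.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:20]   # truncate to a manageable account password

# The installer would generate the secret once and store it with the app.
secret_a = os.urandom(16)
secret_b = os.urandom(16)
pw_a = derive_db_password("LICENSE-1234-ABCD", secret_a)
pw_b = derive_db_password("LICENSE-1234-ABCD", secret_b)
assert pw_a != pw_b   # same product, different sites, different passwords
```

The application can re-derive the password whenever it needs to connect, so nothing changes for the end user — but the hardcoded shared credential disappears.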
As far as the underlying vulnerability used to spread the Stuxnet code goes, we are still at risk – Microsoft has not yet released a patch, and while the major antivirus vendors have released signatures which detect the SCADA worm, it is only a matter of time before we start seeing other, new malware using this vector to spread. For the time being, using a Group Policy Object to prevent executables from launching from drives other than C: might be the best way to protect your networks.
Over the past few days, reports of a new attack against Windows-based SCADA systems (the computer software which controls power plants, water treatment facilities and other parts of the critical infrastructure) have been making the rounds of the security blogosphere. While the payload carried in the new attacks is aimed squarely at these vital control systems (specifically Siemens’ WinCC + S7 SCADA product), the vulnerability used to deliver it looks like it could be quite dangerous to all Windows XP, Server 200x, Vista and 7 users. The previously unknown flaw allows arbitrary code to be executed simply by browsing to a folder containing a specially crafted .lnk file. In the attacks seen to date, the malware attempts to access information from the control system, suggesting that it is meant to aid in corporate espionage or in reconnaissance of electrical power distribution systems for purposes unknown, but probably nefarious.
In addition to raising the spectre of an attack against critical infrastructure, this series of attacks also provides makers of all sorts of malware targeting corporate and personal systems with a new 0-day vector for infection. The flaw can be exploited by getting users to browse a USB drive, a Windows file share or a WebDAV file share. It appears to work even when AutoRun is disabled, and to bypass Windows 7’s UAC protections as well. If I were a malware author, I would be all over this as a way to get my creation installed on as many machines as possible before Microsoft issues a fix.
Microsoft is aware of the problem and has issued a tech bulletin with workarounds that are pretty unworkable for most corporate environments. According to a blog post by Chester Wisniewski of Sophos, one way to effectively combat this attack in a corporate environment is to set up a GPO (Group Policy Object) which prevents executables from running from drives other than the C: drive. This may be the best way to respond to this threat until Microsoft issues a patch, hopefully before the next Patch Tuesday.
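The decision logic of such a path rule is simple enough to sketch. Here’s a toy model of the allow/deny check — in practice the control is a Software Restriction Policy configured in the GPO, not code you write yourself:

```python
from pathlib import PureWindowsPath

ALLOWED_DRIVES = {"C:"}   # only executables on the system drive may run

def execution_allowed(exe_path: str) -> bool:
    """Model the effect of a 'disallow everything outside C:' path rule."""
    drive = PureWindowsPath(exe_path).drive.upper()
    return drive in ALLOWED_DRIVES

assert execution_allowed(r"C:\Program Files\App\app.exe")
assert not execution_allowed(r"E:\autorun\payload.exe")       # USB stick
assert not execution_allowed(r"\\server\share\payload.exe")   # file share
```

Note that a UNC path has no drive letter at all, so the same rule also blocks executables launched from network shares — which covers two of the three infection vectors mentioned above.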
The malware arms race goes on…
The Russian spy ring seems to be the gift that just keeps on giving in terms of blog fuel.
First… if this story is to be believed, one of the spies set himself up as a consultant, talking to companies about their plans for a post-oil economy (a subject of interest to fossil fuel producers such as Russia) and pitching a software package to help companies model the effects of future events on their businesses. Since this software would be installed on customer networks, it could be used as a vector to plant spyware on clients’ computers.
Another report reveals that a Russian man who may be linked to the spy ring and who was recently deported had worked at Microsoft as a software tester both as an intern and as a full time employee. He worked in Redmond for less than a year, and Microsoft claims that no software was compromised. Hmmm. I hope the boys and girls are putting in some serious overtime looking at what this guy had access to.
If true, these stories point to a new face of state-sponsored espionage – one focused on the private sector, which is much less prepared to protect secrets that are important both to individual businesses and to the critical infrastructure. Another good reason for security folks to join their local InfraGard chapter and learn more about protecting their businesses (and their country) against corporate espionage.
One of the revelations from the recent capture of a number of deep cover Russian spies here in the US was that they used steganography (the concealment of data within innocuous-looking files) to hide and transmit secret documents and messages to their handlers. Steganography is one of those techniques which gets talked about a lot at security conferences, but which has not seemed to play a major role in news of security breaches. This seems a bit odd to me – stego looks like a great way to exfiltrate information in plain sight. By embedding ill-gotten data in vacation pictures posted to Flickr or Facebook, spies (corporate or otherwise) can create very low-risk electronic dead drops with a few mouse clicks. Unlike encryption, stego does not leave suspicious-looking encrypted files behind, just innocent-looking pictures or songs. And the software needed to create stego-protected files is readily available on the Net. So why (other than some articles about Al Qaeda reportedly using stego to embed secret information in internet images) do we not hear more about this technique? I have a couple of hypotheses:
Attackers are using stego, but they are not getting caught. Detecting files with steganographically hidden content is very difficult, requiring specialized knowledge and tools that most enterprises and forensic examiners don’t have access to.
Attackers aren’t using stego because they don’t need to. So many organizations have no handle on what information is leaving their networks that attackers don’t feel any need to go to the trouble of hiding the information they are swiping. Or they are using really low-tech methods to get the data out of the organization, like printing, or fax, or this.
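To show just how little machinery stego requires, here’s a toy least-significant-bit embedder over raw bytes. Real tools operate on actual image formats like PNG or BMP, but the principle is the same: each hidden bit changes a pixel byte by at most 1, which is invisible to the eye.

```python
def embed(cover: bytearray, secret: bytes) -> bytearray:
    """Hide `secret` in the least significant bits of `cover`
    (e.g. raw pixel bytes). Requires len(cover) >= 8 * len(secret)."""
    out = bytearray(cover)
    for i, byte in enumerate(secret):
        for bit in range(8):
            idx = i * 8 + bit
            out[idx] = (out[idx] & 0xFE) | ((byte >> bit) & 1)
    return out

def extract(stego: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes from the LSBs."""
    secret = bytearray()
    for i in range(length):
        byte = 0
        for bit in range(8):
            byte |= (stego[i * 8 + bit] & 1) << bit
        secret.append(byte)
    return bytes(secret)

pixels = bytearray(range(256)) * 4        # stand-in for raw image data
stego = embed(pixels, b"dead drop")
assert extract(stego, 9) == b"dead drop"
```

Twenty-odd lines of code and a Flickr account, and you have an electronic dead drop — which is exactly why the lack of stego in breach reports puzzles me.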
Is stego a real threat to the enterprise? I am not sure. But its availability underlines the need to build a security culture in your organization and to use both technological and non-technical means to detect potential problems. Stego seems like a tool which insiders would be predisposed to use – and detecting insider threats takes both technology and plain old vigilance. There is some excellent information on detecting insider threats available from the CERT team – it should be on your reading list.
This post was inspired by Kai Axford’s (Accretive Solutions) great presentation at today’s New York Metro InfraGard meeting.
OK, call me a cold war relic, but I find the recent revelation that Microsoft has provided the source code for Windows, SQL Server, and Office to the Russian FSB (the spies formerly known as the KGB) as well as to the Chinese government quite disturbing. As recent events prove, Russia is still actively engaged in espionage against the US public and private sectors. We know that the Chinese People’s Liberation Army is actively building an offensive cyber capability and that they use technology to suppress free expression in their country. Microsoft’s disclosures have been going on since 2002, as part of a program under which Microsoft has supplied source code for its products to a number of countries as well as NATO.
It does not take too much imagination to conjure up visions of Russian or Chinese government security researchers finding zero-day exploits to allow their paymasters to craft undetectable malware which is then placed on US government and private sector computers. Such an attack would be a cost effective, low risk way to gather more information in a day than the recently unmasked spy ring was able to collect over a decade. It takes even less imagination to envision the Chinese government using their access to Windows source code to build more efficient tools to monitor and muzzle those who dare to speak out against the Communist Party.
This incident raises a number of interesting questions.
Is Microsoft (a company born in America, whose success was built on the US market, and which benefits from tax breaks funded by US taxpayers) right to provide access to the source code of products which are the underpinnings of all sorts of critical infrastructure to nations which are actively engaged in espionage against the US and whom we may meet on the cyber battlefield of the future? It seems to me that this is sort of like hiring a company to build a fort and then allowing them to sell the plans to your adversaries.
Should Microsoft’s products have some sort of special status which recognizes them as part of the US critical infrastructure? After all, Microsoft has been allowed to gain what is basically a monopoly in the US market for operating systems and other key software. Does this engender a responsibility on their part to act in accordance with US national interests? I think it does.
Microsoft hasn’t done anything illegal here. It would be nice if they felt a need to protect the critical infrastructure of their country, but as a private entity with no laws or regulations to prevent their actions, they made the logical business decision to share the source code in order to gain better access to the Russian and Chinese markets. However, their choice is a bum deal for the rest of us, who will have to deal with the repercussions of this decision while Microsoft reaps the profits. We need to tell our legislators that it is time to take a fresh look at what we ask of companies like Microsoft and Cisco, whom we have allowed to develop monopolies on key parts of the nation’s critical infrastructure. In the conflicts yet to come, cyberspace will play a key role – and Microsoft has sold the plans for the fort to potential adversaries.
Interested in Enterprise Rights Management? In the New York City metro area? Free on July 14th? New York Metro InfraGard is putting on an ERM seminar which looks really worthwhile. I think that ERM is going to be a key tool for security professionals over the next year or two as new mobile devices, as well as devices owned by employees and business partners become more and more integrated with our businesses. I’m planning to be there and look forward to meeting some readers!
Here’s an interesting story that bears some watching… security researcher Sean O’Neill claims to have reverse engineered the proprietary encryption which Skype uses to protect voice, video and IM communications on its network. This work, while impressive, does not mean that Skype’s encryption has been broken, since knowing the details of an encryption algorithm does not allow you to decrypt data unless you can also derive the keys used to encrypt it. However, there are some reports that O’Neill’s code has already been used to launch spam attacks on Skype users. I am sure that intelligence and law enforcement agencies all over the world are quite interested in how this all turns out, as they have complained in the past that Skype provides criminals, terrorists and other ne’er-do-wells with un-wiretap-able communications. O’Neill plans to provide more information on his work at the Chaos Computer Congress in December.
In the meantime, I plan to continue using Skype without too much worry. Of course, I’ll think twice about using it for coordinating the global tentacles of my evil plan for world domination, but I see no reason to avoid Skype for personal and business communications right now. Stay tuned.
Friday’s Wall Street Journal featured a page 1 article (unfortunately behind a subscription paywall – less detailed but free coverage here, but you can get the full WSJ article by searching Google News for “HSBC data theft”) on a massive theft of private banking client data from HSBC. The thief was… wait for it… an HSBC infosec employee whose job it was to improve the security of the systems and databases holding that data. Said employee then shopped the data around to a number of European tax authorities as well as to competing banks. When the French police raided his parents’ home in France as part of the investigation into the theft, the data was turned over to the French tax people, resulting in collection of 1 billion euros from les tax evadeurs. Now the French tax people are sharing this treasure trove of data with their colleagues in other countries, who also expect to collect lots of back taxes.
Of course, the guy at the center of this claims he was not in it for the money – he wanted to point out flaws in HSBC security, or to help catch tax evaders, or he was working for intelligence services. (He can’t seem to decide which story to go with…) In any event, he denies any illegal activity and states that he copied the data to his personal computers and offsite servers as part of his normal work. HSBC states that it is against company policy to copy such data to non-HSBC computers.
The story is quite interesting and raises a number of questions for security pros, organizations and law enforcement (as well as folks who like to stash their cash out of sight of the tax man).
Is France’s use of the ill-gotten data, and its further distribution of what is in effect stolen property, a legitimate tool for government authorities? While there is a social good in collecting these taxes from rich tax evaders, is that benefit outweighed by the message it sends vis-à-vis the rule of law?
Why was this very sensitive data not protected by some sort of DLP solution, or even just old-fashioned auditing and log review on the database server? All it would have taken to detect this crime was someone looking at a log and noticing this guy running SELECT * against a sensitive database.
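Even a crude log-review script would have raised a flag here. A minimal sketch — the log format, table names and usernames below are hypothetical, not anything from the HSBC case:

```python
import re

# Hypothetical query-log line format: "timestamp user database statement"
BULK_READ = re.compile(r"\bSELECT\s+\*\s+FROM\s+(clients|accounts)\b",
                       re.IGNORECASE)

def flag_bulk_reads(log_lines):
    """Return (user, statement) pairs that dump an entire sensitive table."""
    hits = []
    for line in log_lines:
        timestamp, user, database, statement = line.split(" ", 3)
        if BULK_READ.search(statement):
            hits.append((user, statement))
    return hits

log = [
    "2010-07-09T10:02:11 appsvc prod SELECT name FROM clients WHERE id = 42",
    "2010-07-09T10:05:37 secadmin prod SELECT * FROM clients",
]
print(flag_bulk_reads(log))   # [('secadmin', 'SELECT * FROM clients')]
```

Routine, narrow application queries pass through; a wholesale table dump by an administrative account gets surfaced for a human to look at. Nothing fancy – just someone actually watching.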
Why did this employee even have access to this data? I can’t see how his job function (in a properly designed technical and procedural environment) required the ability to view and copy database information. Changes to and testing of security for that database should have been done in a separate QA environment using test data, then staged to production by another party.
My final question is one for the security community… Where does our fiduciary duty to our employers end, and where does our responsibility as citizens start? In this case, I think that the HSBC employee was clearly in the wrong. HSBC was offering a service to its clients which is perfectly legal under Swiss law. The users of the service had a responsibility to report their income to their tax authorities under the current regime. If the employee had a problem with the world of private banking, he should have gotten into a new line of work rather than resorting to theft. As for his claimed pure motives, I would have a lot less trouble believing him had he not shopped the data to competing banks. I’d also point out that it would have been reasonable for him to expect some sort of remuneration from the tax authorities for his “aid” in collecting lost revenue. His stories just don’t add up.
It is important to note that this is not a problem unique to HSBC – the lapses that led to this data theft are extremely common across all industries. Heck, even the US military has data stolen through loopholes in data protection policies (and Lady Gaga).
This case is a great learning opportunity for security and risk professionals – organizations need to remember that security personnel are human and need to have appropriate controls placed on their systems access as well. In most organizations, the Internal Audit group can provide this oversight. Smaller organizations may need to resort to periodic reviews of internal security by an external consultant. In any case, make sure someone is watching the watchers!
Update 2010-07-10 – Just noticed that US tax authorities are “ramping up” their investigation into whether HSBC marketed tax evasion services to US clients. Now, if they did engage in this activity, shame on them. However, even if the allegations are found to be true, that still does not excuse a data theft by a person in a position of trust. Had the employee involved simply contacted the authorities with his concerns, they could have obtained the data through legal channels. And his shopping the data to competitors still sticks in my craw.