ceo, cfo, pants on fire?

A recently published research paper entitled “Detecting Deceptive Discussions in Conference Calls” provides an interesting look at lies and the liars who tell them (in this case, company CEOs and CFOs) as well as a peek into the future of lie detection in general.

For this paper, the researchers decided to look at a group of statements by CEOs and CFOs in quarterly earnings conference calls held with investors.  They specifically wanted to focus on the times when fibs may have been told on these calls.   In order to find these occasions, they looked for cases where companies had to restate their financial results after the calls, or had to disclose other information such as material weaknesses in controls, late filings, changes to auditors, or form 8-K filings.

The researchers got hold of all available transcripts for US quarterly earnings conference calls between 2003 and 2007.  The transcripts were formatted in XML, making them a lot easier to parse.  Next, they broke down the transcripts, ignoring the “Management Discussion” parts, which are presumed to be heavily scripted and vetted by legal, investor relations, marketing and other corporate types before a word is uttered.  That left the “Question and Answer” parts of the calls, which tend to be more spontaneous (and hence provide more opportunities for questions that invite prevarication).  Finally, the statements of the CEOs and CFOs were isolated for analysis.  The researchers presumed that the CEO and CFO would know the true state of the company, thus providing them with opportunities to fib to investors during the Q&A.
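The paper doesn’t publish its transcript schema, but the kind of filtering described above is easy to picture.  Here’s a minimal sketch in Python – the element and attribute names (`section`, `statement`, `speaker`, the `qa` type) are invented for illustration, not the researchers’ actual format:

```python
import xml.etree.ElementTree as ET

# Hypothetical transcript schema -- the paper's actual XML format is not
# public, so the element and attribute names here are made up.
SAMPLE = """
<call>
  <section type="qa">
    <statement speaker="CEO">As you know, the outlook is fabulous.</statement>
    <statement speaker="Analyst">Can you quantify that?</statement>
    <statement speaker="CFO">We are certain margins will improve.</statement>
  </section>
  <section type="management-discussion">
    <statement speaker="CEO">Scripted, lawyer-vetted remarks...</statement>
  </section>
</call>
"""

def executive_qa_statements(xml_text, speakers=("CEO", "CFO")):
    """Return Q&A statements made by the given speakers, skipping the
    scripted management-discussion section."""
    root = ET.fromstring(xml_text)
    out = []
    for section in root.findall("section"):
        if section.get("type") != "qa":
            continue  # ignore the scripted portions of the call
        for stmt in section.findall("statement"):
            if stmt.get("speaker") in speakers:
                out.append((stmt.get("speaker"), stmt.text.strip()))
    return out

print(executive_qa_statements(SAMPLE))
```

The point is just that once the transcripts are in XML, isolating the spontaneous CEO/CFO remarks is a few lines of code – the hard part is the linguistic analysis that follows.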

After analyzing the data, they found that when executives fib…

  • They make more references to audience or general knowledge – “As you know…”
  • They use more words linked to extreme positive emotions – “The outlook for the company is fabulous!”
  • They make fewer references to shareholder value and value creation
  • CEOs in particular make fewer references to themselves and use more third person and impersonal pronouns.  They also use fewer words indicating non-extreme positive emotions as well as fewer “hesitation words” and words indicating certainty.
  • Interestingly, when CFOs tell lies, they tend to use more words indicating certainty
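Findings like these boil down to counting words by category.  As a toy illustration (the word lists below are ad hoc placeholders – the study used far richer psycholinguistic dictionaries), a tally of the categories mentioned above might look like:

```python
# Ad hoc word lists for illustration only -- not the paper's actual lexicons.
EXTREME_POSITIVE = {"fabulous", "fantastic", "superb", "unbelievable"}
SELF_REFERENCE = {"i", "me", "my", "mine", "myself"}
AUDIENCE_REFERENCE = {"you", "your", "yours"}

def category_counts(statement):
    """Count how many words in a statement fall into each category."""
    words = [w.strip(".,!?\u201c\u201d\"'").lower() for w in statement.split()]
    return {
        "extreme_positive": sum(w in EXTREME_POSITIVE for w in words),
        "self_reference": sum(w in SELF_REFERENCE for w in words),
        "audience_reference": sum(w in AUDIENCE_REFERENCE for w in words),
    }

print(category_counts("As you know, the outlook for the company is fabulous!"))
```

Run on the suspiciously cheerful example sentence, it flags one extreme-positive word, one audience reference, and no self references – exactly the profile the researchers associate with executive fibbing.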

While the study itself was fascinating reading, I also appreciated the authors’ summary of the different perspectives on deception noted by other researchers in the field.

From an emotional perspective, deceivers are thought to feel guilty about their deceptions and have a fear of being caught in their lies.  This leads to negative emotions, and a negative affect.  According to this perspective, deceivers will make more negative comments, use more general terms and avoid referring to themselves.  Their statements will tend to be short, indirect and evasive.

Taking a cognitive perspective on deception highlights the fact that it takes mental energy to lie and keep one’s story straight.  This perspective suggests that deceptive statements will use more general terms and lack specific details.  Again, the deceiver will avoid referring to themselves and will avoid mentioning personal experiences.  Statements will tend to be shorter, to minimize the amount of keeping track that the deceiver must perform to make their narrative consistent.

Looking at deception from an attempted control perspective focuses on the deceiver’s efforts to avoid making statements which would expose their lies.  This perspective also expects deceptive statements to have non specific language, few self references, and short statements with little real detail.  The deceiver may inject irrelevant information into his or her statements to distract their audience.  If the deceiver is well prepared, there will be more specific information, and fewer of the natural hesitations found in normal speech.   This perspective also looks for lexical diversity as an indicator of deception; people telling the truth tend to repeat themselves, while deceivers seem to use a more varied vocabulary.  Maybe this is why it is interesting to listen to storytellers and other “professional deceivers.”

The final perspective on deception is that of lack of embracement – in this approach, the deceiver feels uncomfortable telling a lie and appears to lack conviction in what they are saying, mainly due to the fact that their claims are not in line with their experience.  Again, speaking in generalities, few self references and short answers would be expected from a deceiver operating under this framework.

I had a few takeaways from this paper:

It gave me a rational basis for the “gut feelings” we have when deciding whether a person is telling us the truth or not.  I will be a lot more conscious of the structure and content of statements when making these evaluations.

I also see this type of research, when combined with technologies such as pervasive digital recording and speech recognition, as possibly marking the beginning of a time when many of the statements we make will be automatically dissected, analyzed and evaluated (possibly in real time) to indicate whether they are truthful or deceptive.  Like any other lie detection technology, this must be used with a clear understanding of its limitations.  A few years ago, we were told that voice stress analysis would make it possible for our phones to tell us when someone is lying in real time; the technology has not lived up to the hype.  A lot more research needs to be done here, but I think we are going to be hearing a lot more about this topic in the future.

I mean, would I lie to you?


password strength take 2

A few days ago, I posted on the subject of password strength… and then I saw some new research on the issue from Georgia Tech which adds some additional paranoia to the password issue.   According to the folks from the Peachtree State, recent advances in repurposing the Graphics Processing Units (GPUs) on computer graphics cards put some serious computing power in the hands of password crackers:

“Designed to handle the ever-growing demands of computer games, today’s top GPUs can process information at the rate of nearly two teraflops (a teraflop is a trillion floating-point operations per second). To put that in perspective, in the year 2000 the world’s fastest supercomputer, a cluster of linked machines costing $110 million, operated at slightly more than seven teraflops.”


The bad news is that these easily harnessed teraflops make it possible that passwords shorter than 12 characters could be brute forced quickly enough for attackers to make use of them.  Now, as I mentioned in the previous post, well designed systems should implement some mitigating factors to prevent brute forcing from working, the most important of which is intruder lockout and alerting.   However, the attacker going after offline data such as encrypted files could make use of brute force attacks.  And it is important to remember that many current attacks depend on keystroke loggers – once the attacker has a logger on your system, the length and complexity of your password does not matter any more.
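The arithmetic behind the 12-character threshold is worth seeing.  Here’s a back-of-envelope sketch using the article’s ~2 teraflop figure, with the big simplifying assumption that one guess costs one floating-point operation (real password hashes cost far more, so these are worst-case numbers):

```python
# Back-of-envelope brute-force timing. Assumes (unrealistically, in the
# attacker's favor) that one password guess costs one operation.
GUESSES_PER_SECOND = 2e12  # ~2 teraflops, per the quoted GPU figure

def years_to_exhaust(charset_size, length):
    """Years to try every password of the given length over the charset."""
    keyspace = charset_size ** length
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

# 95 = printable ASCII characters
for n in (8, 10, 12):
    print(f"{n} chars: {years_to_exhaust(95, n):.6g} years")
```

Under these assumptions, an 8-character password over the full printable ASCII set falls in under an hour, 10 characters in about a year, and 12 characters takes millennia – which is exactly why length buys so much more safety than complexity alone.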

In the end, my recommendations stay the same – run (and update) anti malware software on your machine, and use different, well constructed passwords for every site you visit (LastPass is a great way to keep track of these).  It is amazing how many people use the same passwords for different sites… c’mon people – let’s make the bad guys work just a little bit!


the great helium shortage of 2035?

It turns out that helium is important for more than party balloons and making our voices high and squeaky… and that we may run out of the stuff in spite of the fact that it is the second most abundant element in the universe (after hydrogen).   Amongst atomic element number 2’s many uses are cryogenics (required for MRI scans) and the manufacture of semiconductors, optic fiber and liquid crystal displays.  Here on Earth, there is a finite supply of helium, half of which sits in the US Government’s Federal Helium Program stockpiles.  In 1996, the US Congress decided to mandate that the entire stockpile be sold off by 2015.  The result?  Bargain basement helium prices which encourage waste.  Many of the applications for helium can be designed to recapture and reuse the gas, but since the stuff is so cheap, there is no incentive for users to manage the supplies in a sane manner.  As a result, we could run out of the gas within 25 years.

Currently, there is no commercially viable way to make more helium – our supplies here on Earth are the result of radioactive decay, and extracting helium from the air would result in prices many thousands of times higher than today (think $100 for a single party balloon).  And I shudder to think how much a big screen TV would cost in a helium poor world (now we are talking an emergency the public can understand).

It seems to me that Congress screwed up here, but we still have time to fix the problem – simply raise the price of helium to a point where it makes sense to conserve the stuff.   The need for helium is going to grow over the coming years, and we are setting ourselves up for a totally avoidable problem – time to write the congress-creatures…


this conversation may be recorded, just cause i wanna…

From the US Federal Courts (via ThreatLevel)… it turns out that recording a conversation on your iPhone (and I assume any other device capable of making such recordings) without the permission of the other person you are recording is not a violation of the Wiretap Act unless you plan to use the recording for “nefarious purposes.”  (The court did not weigh in on whether secretly recording your conversation makes you obnoxious, however.)   Now, in order to do this legally, you must be one of the participants in the conversation, triggering the “one party permission” exception to the law.  One of the interesting (and somewhat unsettling) statements made in the opinion was that a person having a conversation with another person in their own kitchen did not have a “reasonable expectation of privacy.”  It was also noted that one does not need to have been invited to participate in the conversation being recorded to be considered a participant allowed to record.

Recording conversations is getting easier and easier as more of our devices include the hardware, software and storage needed.  Products such as LiveScribe’s Echo Smartpen and iPad apps such as Audiotorium add a productivity bonus, allowing recordings to be quickly tied to written notes.   This decision seems to remove the last legal barrier to people unilaterally recording their conversations for later reference – or to make sure that the person they are talking to cannot later claim they said something different or were misunderstood.

My takeaways from this:

I think we are going to see a lot more personal use of recording devices in the coming years… storage is cheap and the ability to index and search recordings is only going to get better.   The idea of having a permanent record of your normal daily interactions for later review will become more mainstream.  While this has some advantages (“You did so promise to have my home renovations done in 30 days, shady contractor… and here you are saying it”), it also has the potential to change the dynamics of conversations.  Will this make us more careful in choosing our words?  (Probably not, but it will make it more entertaining to trip people up with their own words.   I hope my wife is not reading this…)

Forensics to prove that a voice on a recording belongs to a specific person already exist; they will become more of an issue (and profit center) as more recordings are used in civil cases.  I wonder if geotagging of recordings will also play a role here… if you checked in to FourSquare at the same time and place as my recording of a conversation with you, does this make the conversation more attributable to you?

This seems to present a dilemma for corporate security professionals.  Recording conversations can be a great memory aid and productivity enhancer, however, how can we know that those same recordings (probably on devices not owned by the organization) will be stored and handled securely?   There is also the question of the effect of such recordings on corporate culture – will people be willing to share ideas and opinions freely knowing that their words may be recorded for posterity?  It seems to me that organizations need to make a conscious decision as to whether to allow recordings of meetings and conversations to be made on their premises – and if the answer is “no,” to make the policy known to employees and visitors.


under the sea…

A while back, I did a post about the global undersea communications network which forms the underpinning of the global Internet.  Here’s a great way to get an idea of how your data gets from point A to point B:

“Greg’s Cable Map is an attempt to consolidate all the available information about the undersea communications infrastructure. The initial data was harvested from Wikipedia, and further information was gathered by simply googling and transcribing as much data as possible into a useful format, namely a rich geocoded format. I hope you find the resource useful and any constructive criticism is welcome.”


154 killed by malware?

Did a malware infection play a part in killing 154 people in the crash of Spanair 5022 at Madrid’s Barajas airport?  According to a story in Spanish newspaper El Pais, quite possibly.   Investigators have found that the computer system used to track maintenance faults in Spanair’s jet fleet was infected with “Trojan Horse” programs, causing it to fail on the day of the crash.  Had the system been up and running, maintenance and flight crews would have possibly received an alert that the aircraft for flight 5022 had experienced repeated technical problems in the days leading up to the crash which should have led to the plane being grounded.   Now, to be fair, Spanair was also under investigation for taking too long to enter fault information into the computer system in the first place, so the malware infection may be just one factor in the cause of the crash.

It is amazing to me that a critical maintenance system would be:

  • Allowed to become infected with malware – was the machine running up to date anti-malware software?  Was the machine allowed to connect to the Internet or use USB storage devices?  It seems to me that a system which is so critical to safety needs to be isolated from the Internet, or at least run within a virtualized sandbox protected from other processes.
  • Not configured for redundancy – why was there no backup for this system?  Hardware fails.  Software fails.  The unexpected happens.  Having a backup system might have saved 154 lives.

I wonder how many other safety critical systems are out there running on improperly secured platforms… IT and InfoSec professionals in industries which deal in life and death have a responsibility to think about the possibility of life safety related impacts from what would be annoying incidents in other industries.  Had Spanair followed some very basic InfoSec and IT best practices, 154 lives might have been saved.


so long, SAS70!

SAS70 season - the most wonderful time of the year!

Since 1992, many organizations have relied on SAS70 audit reports to determine whether their service providers’ controls are appropriately designed and effectively implemented. 

In the information security field, the SAS70 has become the unofficial standard for how service providers provide assurance to their customers that their systems are safe and secure.  This is not what the SAS70 was originally designed to do – SAS70 reports are supposed to focus on financial matters, but the people have spoken.

Since my employer is a service provider, we conduct an annual SAS70 Type 2 audit with an external audit partner.  We then provide the resulting report to our customers’ information security and risk teams to help convince them that they can trust us with their sensitive information.  This is pretty typical for the industry.

It is important to remember that the SAS70 standard simply describes the format for the report produced by the external auditor.  There is no such thing as being “SAS70 Certified.”  It is even more important to realize that the scope of the SAS70 report (in the form of the list of controls to be tested) is pretty much up to the organization being audited.  If the organization does not want to test certain controls, they can simply leave them out of the report.  For this reason, when evaluating a SAS70 report from a vendor, you need to read carefully – what is not said in the report is probably even more important than what is included.  (very zen, don’t you think?)

The new SSAE 16/ISAE 3402 standards, which took effect in June of this year, are meant to replace the SAS70.  SSAE 16 is the US version of the standard, while ISAE 3402 is the international version.   These new frameworks do provide some improvements over SAS70 for those evaluating outsourcers.  I am pleased with two of the changes in particular:

  • While the SAS70 was centered around the description of controls, SSAE 16/ISAE 3402 adds a requirement to include a description of the system being audited.  This description must include transaction flows as well as significant non-transaction events.  While this is going to mean more reading for the recipient of a report, it also provides the recipient with much better context to evaluate whether the controls described later on are sufficient.


  • The new standard also requires organizations to do a risk assessment and make sure that the controls that are reviewed address the risks to the system described.  The service provider does not have to include the risk review in the report, but the auditor will be looking for risk/control linkages during the review process.  I think that this is a good thing, especially for smaller service providers who may not have a mature enterprise risk management function, as it will force management to take stock of risks in a systematic way.

While these are welcome changes, they do not really address what I think is the main problem with SAS70 as an information security assurance tool – the lack of common objective criteria to be assessed.   The approach that I have taken in the SAS70 that I am responsible for is to map out all of the controls described in the report against the ISO 27002 standard, which provides “established guidelines and general principles for initiating, implementing, maintaining, and improving information security management within an organization.”  While my shop has not gone through the formal certification process for 27002, I feel that using the standard as a framework provides the readers of our SAS70 with some additional assurance that we are including all of the relevant infosec controls.
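The mapping exercise itself is simple once the controls are written down.  As a minimal sketch (the control names and the abbreviated domain list below are illustrative, not taken from a real report or from the full ISO 27002 control catalog), you can map each report control to a domain and then check which domains have no coverage at all:

```python
# Illustrative only: a real ISO 27002 mapping covers many more domains,
# and these control descriptions are invented for the example.
ISO_27002_DOMAINS = {
    "access control", "cryptography", "physical security",
    "operations security", "incident management",
}

CONTROL_MAP = {
    "User provisioning reviewed quarterly": "access control",
    "Data center badge access logged": "physical security",
    "Backups encrypted at rest": "cryptography",
}

def uncovered_domains(control_map, domains):
    """Return the domains that no control in the report maps to."""
    return sorted(domains - set(control_map.values()))

print(uncovered_domains(CONTROL_MAP, ISO_27002_DOMAINS))
```

The gaps that fall out of a check like this are exactly the “what is not said in the report” that a careful SAS70 reader should be hunting for.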

So, as we bid a (somewhat) fond farewell to SAS70, I look forward (somewhat) to our new, improved minty fresh SSAE 16 reports.  My company will be doing our first SSAE 16 early in 2011 – I’ll report back on how the process differs from what we have done to date.


google and the government

The US Federal Government has given Google the FISMA certification needed to allow government agencies to outsource their (non secret) email and calendar systems to the search giant’s cloud data centers.  In order to get the feds’ stamp of approval, Google had to set up dedicated servers located in the continental United States for government data and have a third party perform an assessment of whether Google’s security practices were in alignment with FISMA, the Federal Information Security Management Act, which sets standards for security on government systems.  Apparently, the documentation provided by Google to back up their application ran to over 1500 pages.

So, does this mean that since the cloud is secure enough for Uncle Sam, all of us in the private sector can ditch our Exchange servers and move to the cloud?  I’m not yet convinced. 


  • As private sector users, our data doesn’t get its own servers located in the US and presumably shielded from the great unwashed masses of the Internet and watched very carefully by a dedicated security team.


  • Seeing the FISMA evaluation report would help the private sector determine whether the testing performed meets our requirements for security.  Google currently offers the report documentation only to government organizations considering moving to Google Apps. 

I love the idea of being able to outsource non core functions like email and calendaring – the cost savings are very compelling.  But before making that kind of decision, I’d have to see a lot more disclosure from Google on their security practices.  I would also want some sort of assurance that my organization’s email would not be used by Google’s mighty data analysis machines for purposes other than providing services to my company.  The Googlers are great at mining the data they have for profit… I am not sure that I would want to add my corporate email (or my government’s email) to their ever expanding database.  

I still need a lot of convincing that this is a good idea.


truecrypt – disk encryption for everyone


It is amazing how much (sensitive) information we can now carry around every day.  I have 8 gigs of all sorts of interesting stuff on a flash drive on my key ring, and hundreds of gigs on my laptop.  Keeping that data out of the hands of evildoers should I lose my keys or have my laptop stolen is really important to me.


That’s why TrueCrypt is one of my favorite open source software products – it provides full disk encryption for Windows, Mac OS X and Linux systems at an unbeatable price point (free).  One of its nice features is that you can create a fully encrypted flash drive (or hard drive) on, say, a Windows system and then take that device and use it on a Mac or Linux system with TrueCrypt installed – quite handy for those of us who use different operating systems on a regular basis.  For the most paranoid amongst us, you can even set up hidden encrypted volumes within encrypted volumes to further shield your data from prying eyes.  Version 7 of this vital part of my personal information security toolkit was released back in July, and adds the ability to have volumes automount when they are connected to the computer as well as protection of crash dump and hibernation files on Windows 7 systems.  If you haven’t had a chance to play with TrueCrypt, give it a try today!
