Making Sure Your Security Advice and Decisions Are Relevant
Perhaps the most challenging and exciting aspect of information security is the need to account for business context when making decisions. One way to do this is to determine the unique strengths of the company—its competitive advantages—so you can frame risk conversations accordingly.
Economic Moats to Safeguard the Business
Gunnar Peterson discussed aspects of this concept using the notion of economic moats. According to Morningstar, an economic moat “refers to how likely a company is to keep competitors at bay for an extended period.” This term is similar to what others might call a sustainable competitive advantage. Just like a moat helps safeguard the castle from attackers, an economic moat contributes towards protecting the business from competitors.
Companies have different economic moats, and those without a sustainable competitive advantage tend to stagnate. Gunnar outlined several types of moats highlighted by Morningstar, including low operational costs, intangible assets (a strong brand, patents, etc.), and high switching costs (customers tend to stay).
Relate Security Risks to Economic Moats
What are your organization’s economic moats? If you don’t know what capabilities help the company protect or expand its market share, find out. This knowledge will help you make informed security decisions and will allow you to be a more persuasive participant in risk discussions. As Gunnar pointed out, “the two most important things in infosec are identifying what kind of moat your business has and then defending that moat.”
Information security professionals often complain that executives ignore their advice. There could be many reasons for this. One explanation might be that you are presenting your concerns or recommendations in the wrong business context. You’re more likely to be heard if you relate the risks to an economic moat relevant to your company.
A common approach to emphasizing the importance of information security is based on the notion that a data breach can tarnish the company’s brand. In many cases, the reality shows that the business doesn’t actually suffer in the long term, and in some cases the attention brought by the breach could actually help the company. However, even if the company might suffer in the short term, an argument based on brand tarnishing could fall on deaf ears if the organization doesn’t consider its brand a competitive advantage.
Security in Support of Sustainable Competitive Advantages
A company whose economic moat is its brand will spend considerable effort to protect its brand equity. For organizations like that, the brand-tarnishing argument might be effective and could be a good way to justify security funding. However, companies that have other moats won’t care as much about safeguarding their brands.
For instance, consider a firm whose economic moat is tied to low costs due to its operational expertise and supplier relationships. A good context for making security decisions in this organization might be its efforts to protect proprietary details related to internal and supplier logistics. Threats to this moat will likely capture executives’ attention.
Another organization whose moat is its proprietary intellectual property will want to hear your thoughts on protecting such trade secrets. Alternatively, if a firm sees its time-to-market as a competitive advantage, it will want to know about the security risks that could slow it down and prevent the next timely release of its product.
An economic moat might protect the company from competitors, but it could be eroded by internal factors such as a security breach. Understand your company’s economic moats. Use them to frame security decisions and to ensure that your infosec advice is relevant to the company’s business objectives and strategies.
When characterizing the ill effects of malicious software, it’s too easy to focus on the malware itself, forgetting that behind this tool are people who create, use and benefit from it. The best way to understand the threat of malware is to consider it within the larger ecosystem of computer fraud, espionage and other crime.
The Tip of a Spear
I define malware as code that is used to perform malicious actions. This implies that whether a program is malicious depends not so much on its capabilities but, instead, on how the attacker uses it.
Sometimes malware is compared to the tip of a spear—an analogy that rings true in many ways, because it reminds us that there is a person on the other end of the spear. This implies that information security professionals aren’t fighting malware per se. Instead, our efforts contribute towards defending against the individuals, companies and countries that use malware to achieve their objectives.
Understanding the Context
Without the work of personnel who handle the technical aspects of malware infections, malware-empowered threat actors would be unencumbered. Yet these tactical tasks need to be informed by a strategic perspective on the motivations and operations of the people who create, distribute and profit from malware.
To deal with malware-enabled threats, organizations should know how to detect, contain and eradicate infections, but we cannot stop there. We also need to understand the larger context of the incident. We won’t be able to accomplish this until we can see beyond the malicious tools to understand the perspective of our adversaries. The who is no less important than the what.
Participating in the Eternal Cycle of Cybersecurity
When engaged in a fight, it’s natural to ask yourself whether you are winning or losing. However, in the context of cybersecurity, this question might not make sense, because it presupposes that the state of winning exists.
Maintaining the Equilibrium
Every day, new people and transactions appear online, making the digital world more attractive to criminals. Miscreants fund malicious software and attack operations, so they can achieve financial, political and other objectives. Security practitioners respond to evolving online threats; the attackers adjust their tactics, the defenders tweak their approaches, attackers regroup, and so on and so forth.
Defenders sometimes feel that the attackers are innovating at a pace that outstrips our ability to defend sensitive data and computer infrastructure. Such conclusions tend to be based on emotions and subjective observations, and they often lead to questions about which party is winning the fight. Defining our objectives in terms of winning or losing might not be practical.
The Eternal and Vicious Cycle
My perspective on the dynamics between cyber attackers and defenders aligns with the ecological metaphor that Lamont Wood described in the article “Malware: War Without End.” He referred to it as “an eternal cycle between prey and predator, and the goal is not victory but equilibrium.” It’s unlikely that this cycle will end and that either party will “win.”
When I spoke with Lamont, I suggested that attackers work to bypass our defenses and the defenders respond as part of the cycle. If attackers get in too easily, we are spending too little on defense. If we are blocking 100% of the attacks, we are probably spending too much on defense.
The digital ecosystem as a whole continues to thrive, because it benefits both its legitimate users and the criminals that act as parasites within it. However, individual participants in this ecosystem can find themselves at a disadvantage and suffer losses. Complacency is risky for any party, because each must constantly apply energy to maintain the equilibrium.
If our goal is to “win” the fight against cyber criminals, we don’t stand a chance, in part because there will always be more threats to combat. It might be more useful to define our objectives in terms of maintaining an equilibrium between the defenders and the attackers. This way, we can help our organizations excel in the contaminated world of the Internet.
The future of information security is intertwined with the evolution of IT at large and the associated business and consumer trends. It’s worth taking the time to understand these dynamics to define a path for your professional development. How is the industry evolving and what role will you play?
Key Security Trends
Rich Mogull’s write-up on infosec trends offers an excellent framework for peeking 7-10 years into the future. Rich highlights key factors related to: hypersegregation, operationalization of security, incident response, software-defined security, active defense and closing the action loop. Read his article to understand these trends, then come back to consider how they might affect and inform your career development plans.
I won’t get into every trend that Rich described, but I’d like to share my thoughts on how some of these factors offer professional development opportunities for information security and IT professionals. Operationalization of security might be a good place to start.
IT Operations Professionals
As Rich points out, today infosec personnel “still performs many rote tasks that don’t actually require security expertise.” He predicts that security teams will divest themselves “of many responsibilities for network security and monitoring, identity and access management,” etc.
If you’re an IT operations professional who has no interest in specializing in security, you can expand your expertise so that you can take on some of the tasks performed by security personnel today. This might be a natural expansion of what you’re doing already. Moreover, consider what skills you need to possess to automate as many of these responsibilities as possible, allowing your organization to lower costs and improve quality of IT operations and helping you maintain your own sanity.
Information Security Professionals
If you’re an infosec person looking to grow in this field, consider what responsibilities will remain with security professionals. A security person might lack some of the expertise of his operations-focused IT colleagues, but presumably he is better at understanding security. This includes the knowledge of attack and defense tactics, the dynamics of incident response, security architecture and patterns, etc. These are some of the areas where you should focus your professional development efforts.
How do we design and validate the security of a network where every node is segregated from the others? How do we assist the organization in living through a security incident cycle that could span days, but sometimes spans years? How do we oversee and validate safeguards when most aspects of the IT infrastructure and applications have been virtualized and can be accessed via an API? What deception tactics could be employed to deter, slow down and detect intruders?
These are some of the questions, grounded in Rich’s trends, that infosec professionals should be able to answer, as they consider how to best contribute to their organization’s success in the future.
Asking the Right Questions
Do your best to project the future of industry trends. Based on these, consider what questions an employer might need answered 3, 7, 10 years from now. You might not know the answers to these questions yet, but the questions can guide you in drafting a professional development plan that will be right for you.
How to Get a Windows XP Mode Virtual Machine on Windows 8.1
Despite its age, Windows XP is useful to have in your IT lab, for instance if you need to experiment with older software or study malware. Microsoft distributes a Windows XP virtual machine called Windows XP Mode, which you can download if you’re running Windows 7, as I explained earlier.
First, download Windows XP Mode from Microsoft. You’ll need to go through the validation wizard to confirm you’re running a licensed copy of Windows. Though designed to look for Windows 7, it appears to accept Windows 8.1 as well. After the validation completes, you’ll be able to download the Windows XP Mode installation file.
Once you’ve downloaded the Windows XP Mode installation file, don’t run it. Instead, explore its contents using a decompressing utility such as WinRAR or 7-Zip. Inside that file, go into the “sources” directory and extract the file called “xpm”.
The file “xpm” is another compressed archive, whose contents you can navigate using a tool such as 7-Zip. Extract from the “xpm” archive the file called “VirtualXPVHD” and rename it to something like “VirtualXP.vhd”.
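The extraction steps above can also be scripted. Here is a minimal sketch that drives the 7-Zip command-line tool from Python; it assumes `7z` is on the PATH, and the installer filename used in the usage comment is a placeholder for whatever Microsoft names the download:

```python
import subprocess
from pathlib import Path

def extract_xp_vhd(installer: Path, out: Path = Path("VirtualXP.vhd")) -> Path:
    """Extract the VirtualXPVHD disk image from the Windows XP Mode installer."""
    if not installer.is_file():
        raise FileNotFoundError(f"installer not found: {installer}")
    # The installer is a self-extracting archive; pull out sources/xpm first.
    subprocess.run(["7z", "e", str(installer), "sources/xpm"], check=True)
    # xpm is itself an archive that contains the VirtualXPVHD file.
    subprocess.run(["7z", "e", "xpm", "VirtualXPVHD"], check=True)
    # Give the disk image a .vhd extension so virtualization tools recognize it.
    Path("VirtualXPVHD").rename(out)
    return out

# Example usage (hypothetical installer filename):
# extract_xp_vhd(Path("WindowsXPMode_en-us.exe"))
```

This is just an automation convenience; performing the same steps interactively in 7-Zip or WinRAR, as described above, works equally well.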
This VHD file represents the virtual hard disk of the Windows XP system. You can use VirtualBox to create a virtual machine out of it. To do this, use the VirtualBox wizard for creating a new virtual machine and select “Use an existing virtual hard drive file” when prompted.
To create a VMware virtual machine out of the VHD file you’ll first need to convert it to the VMDK format, which VMware uses to represent virtual disks. The most convenient way to do this might be to use WinImage. If following this approach, select “Convert Virtual Hard Disk image…” from the Disk menu in WinImage, then select “Create Dynamically Expanding Virtual Hard Disk”. When prompted to save the file, select the VMware VMDK format and name the output file something like “VirtualXP.vmdk”.
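If you prefer a command-line alternative to WinImage, the qemu-img utility (a substitute tool, not the approach described above) can perform the same VHD-to-VMDK conversion. A sketch, assuming qemu-img is installed:

```python
import subprocess
from pathlib import Path

def vhd_to_vmdk(src: Path, dst: Path) -> Path:
    """Convert a VHD disk image to VMDK format using qemu-img."""
    if not src.is_file():
        raise FileNotFoundError(f"source image not found: {src}")
    # 'vpc' is qemu-img's name for the Microsoft VHD image format.
    subprocess.run(
        ["qemu-img", "convert", "-f", "vpc", "-O", "vmdk", str(src), str(dst)],
        check=True,
    )
    return dst

# Example usage:
# vhd_to_vmdk(Path("VirtualXP.vhd"), Path("VirtualXP.vmdk"))
```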
Once the VMDK file has been saved, you can create a VMware virtual machine out of it by using VMware Workstation. Go to File > New Virtual Machine. Select “Custom (advanced)” when prompted for the configuration type. Accept defaults as you navigate through the wizard. When prompted to select a disk, select “Use an existing virtual disk” and point the tool to the VirtualXP.vmdk file.
Once the VMware virtual machine has been created, launch it, then go through the Windows XP setup wizard within the new virtual machine the same way you would do it for a regular Windows XP system. After Windows XP setup is done, install VMware tools into the Windows XP virtual machine you just created. Take a snapshot of your virtual machine, in case it breaks.
Windows XP installed into the virtual machine in that manner might need to be activated with Microsoft within 30 days of the installation. Be sure to understand and comply with the applicable software licensing agreements.
Sometimes people share passwords. This practice might stem from the lack of support for unique user accounts in some applications. Even more importantly, the reasons for password-sharing have to do with convenience and social norms. Technologists are starting to recognize the opportunity to account for these real-world practices in their products.
I detailed password-sharing practices in a post on the future of user account access sharing and presented several examples. I looked mostly at consumer and small business situations, not necessarily at heavily-controlled enterprise applications. For instance, adults frequently share logon credentials to their Netflix accounts among a small group of people. This saves on Netflix service fees.
Netflix seems to not only tolerate, but even encourage this practice. The service now supports multiple user profiles as part of a single account. One set of logon credentials is actually expected to be used by multiple individuals! A security purist would not approve. Yet this product management decision recognized users’ lack of interest in safeguarding their Netflix accounts from friends and family members.
Rather than pretending that logon credential-sharing doesn’t exist, Netflix accepted that its users will share accounts and built the features its customers wanted. (The fact that any Netflix user profile can cancel the service helps constrain the potential size of the group who can access the account.)
For other examples, consider a small office environment within which individuals trust each other enough to share passwords to applications and websites used by the company, such as Pandora, UPS, Mailchimp, etc. Preaching to these people about the importance of individual logon credentials is usually a waste of time. This is why I hailed the emergence of password vault applications such as Mitro, which make it easier for individuals to share passwords while offering some security safeguards that don’t exist when using Post-it notes for this purpose.
Let’s project into the future a bit. According to danah boyd, teens often share passwords with friends as a sign of trust. This practice is akin to giving a person you trust the combination code to your school locker. Since teens will inevitably become adults, product managers and security architects will eventually have to account for such practices in their applications.
Customer-focused organizations will need to balance the desire to protect people from themselves with the need to give customers what they want. Some say that passwords will eventually go away and be replaced by biometrics and other technologies. Perhaps. In any case, people’s desire and sometimes legitimate need to share account access will persist. Security professionals and product managers will need to figure out how to provide such capabilities while accounting for meaningful risks.
Potential Security Applications of the iPhone 5S M7 Motion Coprocessor
In addition to the fingerprint scanner, there is another new hardware component in iPhone 5S that security-minded folks might get excited about. I’m referring to the M7 motion coprocessor. According to Apple, the chip continuously measures motion data “from the accelerometer, gyroscope and compass to offload work” from the main processor. Here’s why this is interesting.
Although M7 is being positioned as an enabler for enhanced fitness apps, there might be interesting security applications for the precision and energy-efficiency capabilities of the coprocessor. For instance, consider the possibility for continuous and seamless user authentication. As I proposed earlier:
Continuous user authentication could occur transparently by spotting anomalies in the way the user interacts with the system. Such methods could avoid interrupting the user unless the system begins to doubt the person’s identity.
Another example of an authentication approach that could make use of the phone’s motion sensors is SilentSense, which tracks “the unique patterns of pressure, duration and fingertip size and position” exhibited by the phone’s user. The goal of this technology is to spot when the phone is being used by an unexpected person.
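To illustrate the general idea (this is not SilentSense’s actual algorithm), a continuous-authentication scheme could baseline a user’s typical sensor readings and flag values that deviate sharply. A minimal sketch:

```python
import statistics

def train_baseline(readings):
    """Summarize a user's typical sensor readings,
    e.g. accelerometer magnitudes recorded while walking."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomalous(reading, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the mean."""
    mean, sd = baseline
    return abs(reading - mean) > threshold * sd

# A real system would combine richer features (pressure, touch size, gait
# cadence) and a trained classifier rather than a single z-score threshold.
```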
As nice as the iPhone 5S fingerprint reader is, authenticating users through walking patterns could be even more user-friendly and offer the unique benefits of continuous validation that one-time authentication doesn’t provide. (BTW, the new Moto X phone dedicates a hardware component to “context-sensing,” which seems to support motion-awareness functions.)
While Apple isn’t planning to grant app developers access to the fingerprint reader (a.k.a. “Touch ID sensor”) according to AllThingsD, access to M7 will be available using the expanded Core Motion API. This would allow third-party developers to come up with innovative ways of utilizing M7 motion coprocessor capabilities that go beyond fitness-related applications, assuming Apple’s App Store guidelines will allow this.
Who knows, perhaps your favorite phone security app might soon issue an alert if it detects that an impostor is carrying your phone, giving you an opportunity to wipe the device or take other corrective action. Just an idea.
Teaching Malware Analysis and the Expanding Corpus of Knowledge
Over the years, the set of skills needed to analyze malware has been expanding. After all, software is becoming more sophisticated and powerful, regardless of whether it is being used for benign or malicious purposes. The expertise needed to understand malicious programs has been growing in complexity to keep up with the threats.
My perspective on this progression is based on the reverse-engineering malware course I’ve been teaching at SANS Institute. Allow me to indulge in a brief retrospective on this course, which I launched over a decade ago and which was recently expanded.
Starting to Teach Malware Analysis
My first presentation on the topic of malware analysis was at the SANSFIRE conference in 2001 in Washington, DC, I think. That was one of my first professional speaking gigs. SANS was willing to give me a shot, thanks to Stephen Northcutt, but I wasn’t yet a part of the faculty. My 2.5-hour session promised to:
Discuss “tools and techniques useful for understanding inner workings of malware such as viruses, worms, and trojans. We describe an approach to setting up inexpensive and flexible lab environment using virtual workstation software such as VMWare, and demonstrate the process of reverse engineering a trojan using a range of system monitoring tools in conjunction with a disassembler and a debugger.”
I had 96 slides. Malware analysis knowledge wasn’t yet prevalent in the general community outside of antivirus companies, which were keeping their expertise close to the chest. Fortunately, there was only so much one needed to know to analyze mainstream samples of the day.
Worried that evening session attendees would have a hard time staying alert after a full day of classes, I handed out chocolate-covered coffee beans, which I got from McNulty’s shop in New York.
Expanding the Reverse-Engineering Course
A year later, I expanded the course to two evening sessions. It included 198 slides and hands-on labs. I was on the SANS faculty list! Slava Frid, who helped me with disassembly, was the TA. My lab included Windows NT and 2000 virtual machines. Some students had Windows 98 and ME. SoftICE was my favorite debugger. My concluding slide said:
Too many variables to research without assistance
Ask colleagues, search Web sites, mailing lists, virus databases
Share your findings via personal Web sites, incidents and malware mailing lists
That advice applies today, though one of the wonderful changes in the community from those days is a much larger set of forums and blogs focused on malware analysis techniques.
By 2004, the course was two days long and covered additional reversing approaches and browser malware. In 2008 it expanded to four days, with Mike Murr contributing materials that dove into code-level analysis of compiled executables. Pedro Bueno, Jim Shewmaker and Bojan Zdrnja shared their insights on packers and obfuscators.
In 2010, the course expanded to five days, incorporating contributions by Jim Clausing and Bojan Zdrnja. The new materials covered malicious document analysis and memory forensics. I released the first version of REMnux, a Linux distro for assisting malware analysts with reverse-engineering malicious software.
Around that time the course was officially categorized as a forensics discipline by SANS and was brought into the organization’s computer forensics curriculum thanks to the efforts of Rob Lee.
The most recent development related to the course is the expansion from five to six days. Thanks to the efforts of Jake Williams, the students are now able to reinforce what they’ve learned and fine-tune their skills by spending a day solving practical capture-the-flag challenges. The challenges are built using the NetWars tournament platform. It’s a fun game. For more about this expansion, see Jake’s blog and tune into his recorded webcast for a sneak peek at the challenges.
It’s exciting to see the community of malware analysts increase as the corpus of our knowledge on this topic continues to expand. Thanks to all the individuals who have helped me grow as a part of this field and to everyone who takes the time to share their expertise with the community. There’s always more for us to learn, so keep at it.
Researching Scams Helps Understand Human Vulnerabilities
Researching online scams opens a window into the world of deceit, offering a glimpse into human vulnerabilities that scammers exploit to further their interests at the victims’ expense. These social-engineering tactics are fascinating, because sometimes they work even when the person suspects that they are being manipulated.
Here are examples of several social-engineering principles I’ve seen utilized as part of online scams:
Crafting messages that reference victims’ locations, so that the context of the message seems personally relevant. For instance, a website that sold a fraudulent “home income” kit customized its text to mention the visitor’s city or town. In another example, variants of the Waledac worm generated a fake news report about an explosion in the victim’s locale.
Relying on people’s self-interest by crafting situations that exploit victims’ propensity towards vanity. For instance, “profile spy” scams convinced people to click on links by promising the ability to see who has been looking at their Facebook profiles. In another example, “rogue antivirus” scammers employed SEO techniques in the hopes of luring people to malicious websites when they searched for their own names.
Taking advantage of people’s need to feel secure by positioning malicious actions as security measures. For instance, numerous "rogue antivirus" scams have tricked people into installing malware while believing they were installing security tools. In another example, a website presented a fake security warning to entice the visitor into downloading “security updates.”
Appealing to character traits such as greed or honesty to catch victims’ interest and prompt them to act. For instance, the “home income” kit promised easy money for working from home. In another example, the check overpayment scammer counted on the victims’ honesty to compel them to forward along the balance of the funds they received.
Miscreants know how to exploit weaknesses in the human psyche. Potential victims should understand their own vulnerabilities. This way, they might notice when they’re being social-engineered before the scam has a chance to run its course.
Looking for Anomalies in Check Overpayment Scam Correspondence
After agreeing to purchase the item or service you are selling, the remote buyer offers a check for an amount larger than you requested, with a polite request that you forward the difference to the buyer’s friend or colleague. You deposit the check, wait for it to clear, and send the funds to the designated party. What could possibly go wrong?
This is the gist of the check overpayment scam, which debuted on the Internet around 2002 according to Snopes.com. In 2004, the FTC warned about initial variations of the scam, which began with the scammer agreeing to buy the item advertised online by the victim. A modern variation starts as a response to an ad for private lessons, placed on a site such as Craigslist:
"How are you doing today?I want a private lessons for my daughter,Mary. Mary is a 13 year old girl, home schooled and she is ready to learn. I would like the lessons to be in your home/studio. Please I want to know your policy with regard to the fees,cancellations, and make-up lessons."
How might you determine whether the sender has malicious intent? The lack of spaces after some punctuation marks seems strange. Also, a legitimate message would have probably included some details specific to the lessons being requested. The scammer probably excluded such details, because he or she used the same text for multiple victims, who offer different services. The message above didn’t include the recipient on the To: line, supporting the theory that the scammer sent it to multiple people.
Concerned recipients might look up some of the unusual phrases from this message, in which case they’d see numerous reports about scams that include phrases from the text above.
If the victim responds to the original email, the scammer continues to weave the web of deceit:
"I’m a single parent who always want the best for my daughter and I would be more than happy if you can handle Mary very well for me. I would have loved to bring Mary for "meet and greet"interviews before the lessons commence, which I think is a normal way to book for lessons but am in Honolulu,Hawaii right now for a new job appointment. So,it will not be possible for me to come for the meeting. Mary will be coming for the lessons from my cousin’s location which is very close to you. Although,Mary and my cousin are currently in the UK and I want to finalize the arrangement for the lessons before they come back to the United States because that was my promise to Mary before they left for the UK."
"As for the payment, I want to pay for the three month lessons upfront which is $780 and I’m paying through a certified cashier’s check.Hope this is okay with you?"
I wouldn’t point to any specific aspect of this message as malicious, but the tone feels a bit off, even for a person who might be a non-native English speaker. The phrase “handle Mary very well for me” is strange. Also, it seems unusual to pay for private lessons using a cashier’s check. In addition, the message has more details about Mary and the sender’s travel than one would expect in this context. The scammer is probably doing this to justify a request that will be presented to the victim later.
If the recipient writes back, the scammer responds:
"My cousin will get in touch with you for the final lessons arrangement immediately they are back from the UK. I will want you to handle Mary very well for me because she is all I have left ever since her mother’s death four years ago. Being a single parent, It’s not easy but I believe God is on my side."
Several aspects of this message might trigger a careful reader’s anomaly detector. Again we see the strange phrase “handle Mary very well.” Also, the sender offers extraneous details that don’t belong in the context of this email discussion, perhaps attempting to establish an emotional connection on the basis of a death in the family and religion. There is more:
"I also want to let you know that the payment will be more than the cost for the three month lessons. So, as soon as you receive the check, I will like you to deduct the money that accrues to the cost of the lessons and you will assist me to send the rest balance to my cousin. This remaining balance is meant for Mary and my cousin’s flight expenses down to the USA, and also to buy the necessary materials needed for the lessons. I think,I should be able to trust you with this money?"
Now we see the classic element of the check overpayment scam. The sender is appealing to the recipient’s honesty and perhaps hopes to build upon the emotional response evoked in the earlier paragraph.
Snopes.com explains that the scam works because the FTC “requires banks to make money from cashier’s, certified, or teller’s checks available in one to five days. Consequently, funds from checks that might not be good are often released into payees’ accounts long before the checks have been honored by their issuing banks. High quality forgeries can be bounced back and forth between banks for weeks before anyone catches on to their being worthless.”
Watch out for situations where a buyer is willing to overpay and asks for your help in forwarding the remainder of the funds to someone else. Also, look for anomalies that might indicate malicious intent, such as strange tone, unusual phrases, unnecessary details and uncommon punctuation. If you notice such aspects of the text, research them on the web before deciding whether to continue your correspondence.
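Some of these anomalies can even be checked mechanically. For example, the missing-space-after-punctuation pattern noted earlier is easy to flag with a few lines of code; this is just an illustrative heuristic, nothing more than a rough signal:

```python
import re

def punctuation_anomalies(text):
    """Return spots where punctuation is immediately followed by a letter,
    e.g. 'today?I want' or 'my daughter,Mary' -- a pattern that often shows
    up in hastily mass-produced scam messages."""
    return re.findall(r"[.,?!;][A-Za-z]", text)

# First sentence of the scam message quoted above:
sample = "How are you doing today?I want a private lessons for my daughter,Mary."
print(punctuation_anomalies(sample))
```

Note that legitimate abbreviations such as "e.g." or "U.S." would trigger false positives, so a real filter would need a whitelist; the point is only that such textual oddities are amenable to automated screening.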
How the Digital Certificates Ecosystem is Being Strengthened
Given the many ways in which digital certificates can be misused and the severe repercussions of such incidents, several initiatives have been launched to strengthen the ecosystem within which the certs are issued, validated and utilized. This is the start of what I hope will be a slew of projects and security improvements that will gradually gain a foothold in enterprise and personal environments.
Current efforts to improve the state of the web’s Public Key Infrastructure (PKI) include:
Operating systems and software are becoming more mindful of the need to maintain up-to-date lists of revoked certificates. For example, while Internet Explorer on Windows XP enabled only code-signing revocation checking by default, Windows Vista and later also check by default for revoked server certificates used in SSL/TLS connections, according to Websense. Microsoft has also enhanced its mechanism for automatically distributing updates to the list of revoked certificates.
EFF launched the SSL Observatory project to catalog the SSL/TLS certificates used by websites and facilitate the “search for vulnerabilities, document the practices of Certificate Authorities, and aid researchers interested in the web’s encryption infrastructure.” Such initiatives might assist in understanding the scope and nature of the issues that affect HTTPS browsing.
Google launched the Certificate Transparency project, aimed at strengthening the PKI ecosystem by “providing a publicly accessible place for issued certificates to be published.” According to the initial proposal, the project’s primary goal is to make it difficult for malicious or careless CAs to “issue a certificate for a domain without the knowledge of the owner of that domain.”
Of the efforts to strengthen the web’s PKI environment, the pinning of the certificates or the associated public keys seems most promising.
Many information security practices are based on the principle of denying access by default, unless there is an explicit need to grant access. For instance, most network firewalls only allow specific traffic, instead of allowing all ports and blocking only risky ones. Soon, we might need to exercise the same degree of control over digital certificates trusted by our systems. The tools available to us for accomplishing this are still awkward and immature. This will change.
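In that default-deny spirit, here’s a minimal sketch of certificate pinning in Python. The pinned fingerprint would be recorded out-of-band over a connection you trust; the function and variable names here are illustrative, not taken from any particular tool:

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def fetch_server_cert(host: str, port: int = 443) -> bytes:
    """Retrieve the certificate a server presents, in DER form."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def is_pinned(der_cert: bytes, pinned_fingerprint: str) -> bool:
    """Deny by default: accept only the one certificate we expect,
    regardless of which CAs the operating system happens to trust."""
    return cert_fingerprint(der_cert) == pinned_fingerprint.lower()
```

A client would call fetch_server_cert() once over a trusted network to record the pin, then refuse any later connection for which is_pinned() returns False.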
Online communication and data sharing practices rely heavily on digital certificates to encrypt data as well as authenticate systems and people. There have been many discussions about the cracks starting to develop in the certificate-based Public Key Infrastructure (PKI) on the web. Let’s consider how the certs are typically used and misused to prepare for exploring ways in which the certificate ecosystem can be strengthened.
How Digital Certificates are Used
Digital certificates are integral to cryptographic systems that are based on public/private key pairs, usually in the context of a PKI scheme. Wikipedia explains that a certificate binds a public key to an identity. Microsoft details the role that a Certificate Authority (CA) plays in a PKI scenario:
"The certification authority validates your information and then issues your digital certificate. The digital certificate contains information about who the certificate was issued to, as well as the certifying authority that issued it. Additionally, some certifying authorities may themselves be certified by a hierarchy of one or more certifying authorities, and this information is also part of the certificate."
HTTPS communications are safeguarded by SSL/TLS certificates, which help authenticate the server and sometimes the client, as well as encrypt the traffic exchanged between them. Digital certificates also play a critical role in signing software, which helps determine the source and authenticity of the program when deciding whether to trust it. Certificates could also be used as the basis for securing VPN and Wi-Fi connections.
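To make the “issued to / issued by” binding concrete, here’s a sketch that summarizes a peer certificate in the nested-tuple dictionary form returned by Python’s ssl.SSLSocket.getpeercert(); the sample certificate values below are made up:

```python
def describe_cert(cert: dict) -> dict:
    """Flatten the subject and issuer RDN tuples that
    ssl.getpeercert() returns into plain dictionaries."""
    def flatten(rdns):
        return {key: value for rdn in rdns for (key, value) in rdn}
    return {
        "issued_to": flatten(cert.get("subject", ())),
        "issued_by": flatten(cert.get("issuer", ())),
        "expires": cert.get("notAfter"),
    }

# Hypothetical certificate, shaped like ssl.getpeercert() output.
sample = {
    "subject": ((("commonName", "www.example.com"),),),
    "issuer": ((("organizationName", "Example CA"),),
               (("commonName", "Example CA Root"),)),
    "notAfter": "Jan  1 00:00:00 2026 GMT",
}

summary = describe_cert(sample)
```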
Misusing Digital Certificates
In recent years, the PKI environment within which digital certificates are created, maintained and used began showing its weaknesses. We’ve witnessed several ways in which the certs have been misused in attacks on individuals, enterprises and government organizations:
Man-in-the-middle (MITM) attacks abused certificates to intercept SSL/TLS traffic. Software rarely complains when a server’s SSL/TLS certificate has been signed by a trusted, but unexpected CA. For example, one person noticed that when connecting to Gmail via IMAP/SSL from a hotel, the server’s certificate was signed by an entity other than Google. A similar technique was allegedly used by Nokia, presumably for legitimate purposes, to decrypt phones’ HTTPS activities. There are probably many MITM instances where the traffic was captured illegitimately; unfortunately, such situations are often hard to detect.
These were just some examples of real-world incidents where digital certificates were misused. In a follow-up post, I discuss the initiatives aimed at strengthening the PKI ecosystem within which the certificates are issued, validated and utilized. Read on.
What Happens After You've Set Up Google Inactive Account Manager?
Google’s Inactive Account Manager allows you to designate a person to be notified if your Google account has been inactive for 3 or more months. You can also elect for Google to share your data with that contact at that time. Let’s take a look at what happens after the inactivity period is reached and consider what security precautions you might need to take when using this feature.
Activating Google’s Inactive Account Manager
To better understand how Inactive Account Manager works, I activated this feature for a test account, designating myself as the trusted contact. To do this, I followed prompts to provide Google with the contact’s email address and phone number. I also directed Google to “share my data with this contact,” in addition to notifying the designee about inactivity of my test account:
I also specified the subject and contents of the email message that Google would send to the trusted contact when the account becomes inactive. In addition, I provided a mobile phone number where Google would send text message alerts regarding the inactive account.
I told Google to “Remind me that Inactive Account Manager is enabled.” I chose not to ask Google to delete the account after it expires.
Then, I waited 3 months, avoiding the use of my test account. I wanted to see what would happen once it reached the state of inactivity.
Initial Google Account Inactivity Alerts
A month before the Google account was going to enter the expired state, my test account received an initial alert using email and SMS:
The email message explained that to avoid expiration, the user needed to sign into Google and edit Inactive Account Manager settings. Google sent another set of alerts 11 days later, then a week after that, and again 5 days after that.
Google Account Becoming Inactive
On the day when the account was set to enter the state of inactivity, Google sent the user an alert via email and SMS, stating that the timeout set for the account has expired. The message explained that “Inactive Account Manager has notified your trusted contacts and shared data with them.”
Despite the urgent tone of the message, Google waited two more days (a grace period?) prior to notifying the trusted contact about expiration. At that time, I received an email message with the subject and text that was set up by my test user when activating Inactive Account Manager:
Google’s message provided a link to obtain the data from the expired account and stated that I have been given access to download it over the next 3 months.
Accessing the Inactive User Account Data
After I clicked the “Download data” link, I was presented with a message explaining that before I download the data, I have to verify my identity by confirming that I received a verification code via SMS or a call. Google directed the code to the number designated when setting up Inactive Account Manager:
After supplying the verification code, I was able to download the expired account’s data:
The data was available as a Zip file archive, similar to how Google provides it via the Google Takeout service.
Security Notes on Google’s Inactive Account Manager
I retained access to the expired account’s data for about 24 hours after my test user logged into Google and reactivated the account. The data that I could access during that time was a snapshot in time, taken on the date when the account expired. The export didn’t include new data generated by the reactivated account afterwards.
After the test user logged into the account, there was no warning about the trusted contact having accessed the data when the account expired. If you enable Google’s Inactive Account Manager and the account reaches the inactivity state, you should assume that the designated person exported your data.
A day after reactivating the inactive account, my test user received a notice stating that the timeout period was reset and that trusted contacts no longer had access to the data. Indeed, at that point clicking the “Download data” link took me to an “Invalid link” page:
As you can see, Google built safeguards to avoid the expired account’s data being shared accidentally or becoming available to the wrong party. This process entails sending several email and SMS notifications, which could be misused by attackers to create phishing scams.
If you or people in your organization rely on Google accounts, educate them about the flow of information associated with Inactive Account Manager and remind them to interact solely with Google’s SSL-enabled websites regarding this feature.
I’d like to define the term honeypot persona as a fake online identity established to deceive scammers and other attackers. If this notion interests you, take a look at the article where I proposed using honeypot personas to safeguard user accounts and data. If you haven’t read that note yet, go ahead, I’ll wait…
In that article, I wrote that:
"Using decoys to protect online identities might be an overkill for most people at the moment. However, as attack tactics evolve, employing deception in this manner could be beneficial. As technology matures, so will our ability to establish realistic online personas that deceive our adversaries."
Online attackers have many advantages over potential victims, making it hard to defend enterprise IT resources and personal data. In such situations, diversion tactics might help the defenders balance the scales by slowing down and helping to detect attackers.
I’ve outlined my recommendations for the role that honeypots can play as part of a modern IT infrastructure earlier. Now, I’m also suggesting that honeypot personas, which could also be called decoy personas, might be effective at confusing, misdirecting, slowing down and helping detect online adversaries. For example,
"A decoy profile [on a social networking or another site] could purposefully expose some inaccurate information, while the person’s real profile would be more carefully concealed using the site’s privacy settings."
I define the term honeypot as an item that is designed to be desired by an adversary. In this light, a honeypot persona exhibits characteristics that might be attractive to online attackers, deflecting malicious activities and potentially warning the real person who carries the same name as the decoy that he or she might be targeted soon.
Why Organizations Don't Prepare for Information Security Incidents
Despite the general agreement that being prepared for an information security incident decreases the pain of dealing with the intrusion, few companies seem to plan for the eventuality of a data breach. Understanding why organizations don’t prepare for incident response might help us come up with advice that goes beyond merely emphasizing the need for an incident response plan.
Do Organizations Understand the Threats?
One explanation for the lack of preparedness suggests that organizations simply don’t know that it’s a dangerous online world out there and, therefore, might not even understand what security incidents are about. This is probably incorrect. A Symantec study of small and midsize businesses found that the SMBs were aware of online threats. More than half “stated that malware would reduce productivity while the infected systems were repaired and 36 percent recognized that hackers could gain access to proprietary information.” If SMBs understand this, so do the larger enterprises.
Do Organizations Recognize That They Might Be Attacked?
As indicated above, companies seem to possess a general awareness of the threats and understand that a data breach could have adverse effects. The problem might be that these firms don’t believe they will be attacked.
Indeed, the Symantec study mentioned above found that SMBs thought that, due to their smaller size, they were less likely to be attacked. In reality, Symantec’s data showed that SMBs were more likely to be targeted than large enterprises and, therefore, underestimated the risk.
A University of Pennsylvania study of teenage smokers observed a similar bias:
"In keeping with other research findings, young smokers in this study estimated their own personal risks differently from risks to smokers in general."
Perhaps there is an underlying psychological tendency among individuals to be optimistic when assessing risks, thinking they are less likely to experience a negative event than others in their peer group.
Do Organizations Foresee the Pain of Dealing with the Incident?
There’s another explanation for organizations not thinking in advance about security incidents, even if they are aware of threats and recognize that they might be attacked. It has to do with the weak understanding of the effect an incident can have on the organization. Without this understanding, the company would fail to see the need to prepare in advance and wouldn’t know what dealing with incidents might be like.
In the UPenn smoking study mentioned above, some teens acknowledged that they were at an increased risk of developing cancer if they continued to smoke. However, they failed to understand the horrible effect that cancer, if it occurs, could have on their lives. They underestimated the deadliness of this disease. The researchers concluded:
"Young smokers understand that smoking is likely to shorten a person’s life, but do not have a clear idea of the number of years involved."
It’s quite possible that organizations similarly underestimate the disruptive effects of an information security incident. At least in the short term, the affected organization will find itself suspending some aspects of its routine, switching into an incident handling mode that is costly and stressful.
Persuading Organizations to Prepare for Information Security Incidents
These studies illustrate the challenges we encounter when assessing risks. If we are to persuade companies to consider how they will handle information security incidents before the event takes place, we need to consider several reasons why they might not care to think about this in advance:
The lack of basic understanding of threats and repercussions of a data breach
The belief that the organization will not have to deal with an infosec incident
The inability to foresee the challenges associated with responding to an incident
Technical and Political Boundaries of Security Assessments
Agreeing on the scope of a security assessment, such as a penetration test, is easier said than done. Define the scope too narrowly, and you will miss the vulnerabilities another attacker could have exploited. On the other hand, an overly wide or ambiguous scope can unnecessarily increase the cost of the assessment and put the careers of people involved with the project at risk.
Jake Williams’s recent blog post Penetration Testing Scope - Murky Waters Ahead! highlights the challenges associated with determining what may be examined by the tester. For instance, if the assessment is supposed to exclude web applications, should a PHP vulnerability that allows remote code execution be excluded as well?
Defining the scope is as much a technical issue as it is a political one. Many assessments focus exclusively on “web applications” or only on “network services,” instead of combining the two efforts, because these resources are usually maintained by different teams. As a result, the funding for these assessments comes from different sources and different people will be blamed for security problems.
Unfortunately, this means that many penetration tests fail to adequately mimic a typical attacker’s actions, because the scope of the assessment is artificially constrained.
One way to determine whether a particular vulnerability is in scope is to ask the client whether they are the ones responsible for patching that vulnerability. That’s not satisfying, I know. The good news is that the assessor can still provide value by asking questions about the vulnerability, explaining its significance and highlighting the need for someone within the client’s organization to take a look at it.
A well thought-out penetration test will define the rules of engagement in a realistic manner without drawing these artificial boundaries. A consultant should strive to define the rules of engagement and the scope in this “fused” manner when selling the assessment. Similarly, organizations should combine these types of testing efforts into a single engagement, even if it means that two different teams need to collaborate and pool their budgets.
In a follow up to his post, Jake pointed out that talking to “the customer is ultimately the correct way to obtain ultimate resolution on the issue.” He explained that they report all issues they “find (even if they happen to be out of scope). … this provides a real value added to the client.”
5 Tech Trends That Explain the Evolution of Online Threats
Information security professionals need to keep an eye on the always-evolving cyber threat landscape. Accomplishing this involves understanding how changes in people’s use of technology influence the opportunities and techniques pursued by criminals online. Below are 5 tech trends that have affected the evolution of threats.
Mainstream adoption of the Internet into daily activities. The Internet has become so interwoven into our lives that we often don’t notice when activities make use of Internet-connected resources. Technology that allows people and businesses to utilize Internet connectivity has become so convenient that even non-technical people, old and young, are able to harness the power of the web. As a result:
The increase in numbers of non-techies present on and accessible via the Internet made social engineering more fruitful. It’s often easier to target people who aren’t technology specialists.
Simplification of user interfaces, necessitated by the need to service non-techies, eliminated some of the details that could assist people in spotting malicious activities or intentions.
Commerce and other critical activities moved online, so the criminals followed. To paraphrase the famous saying, criminals are online “because that’s where the money is.”
The increase in usefulness and popularity of mobile devices. Powerful pocket-sized computers with always-on Internet connectivity, also known as phones, have become so common that we rarely make a distinction between a regular and a “smart” phone. Overall, mobile devices have become as integral to the modern way of life as glasses, wallets and shoes. As a result:
The critical role of mobile devices, which act as wallets, authentication tools and communication portals, made them attractive targets. A criminal with access to someone’s mobile device has significant insights into and control over the victim’s life.
User interface limitations of small screens conceal visual elements that could aid people in making informed information security decisions. Mobile apps often omit security indicators such as SSL icons that have become staples of the traditional desktop browsing experience.
The use of personal devices for work purposes (BYOD) increased the attack surface available to criminals looking to compromise information security safeguards of enterprises. Attackers can use employee-owned mobile devices as portals into the organization’s network, systems and applications.
The popularity and acceptance of online social networking. While initially seen as serving the needs of niche groups, websites such as Twitter, Facebook and LinkedIn have been joined by numerous others to support new ways in which people socialize online. Social networking sites have become the backbone of modern interactions. As a result:
The ease with which people can be reached through online social networks provided criminals with easy access to potential victims. While people might conceal their email addresses, they often allow strangers to contact them through online social networks.
The curation culture of online social networks, which encourages people to share links to videos, articles and other items of interest, provided scammers and malware operators convenient ways to distribute malicious links.
The wealth of personal data available on people’s social networking profiles provided criminals with the details for executing targeted attacks and social engineering scams.
The connectivity between “physical” and “virtual” worlds. Objects, tools and other constructs (e.g., thermostats, industrial control systems, home automation devices) in the “physical” world are increasingly connected to the web, giving rise to the concept of the “Internet of things.” As a result:
The popularity of digital currencies, such as Bitcoin, and game currencies, such as World of Warcraft gold, offered criminals new financial targets and monetization schemes that took them beyond standard currencies such as the dollar, pound and euro.
The ease of connectivity between VoIP and traditional telephone networks gave rise to new forms of telephone-based scams and denial-of-service attacks (TDoS) that target companies’ phone systems.
The addition of online access features to sensors such as video cameras provided attackers with new ways to observe victims remotely, compromising privacy and exposing people and organizations to espionage and other risks.
The acceptance of cloud computing. The use of external, virtualized and/or outsourced IT resources has gained mainstream adoption for not only personal, but also enterprise applications. The cloud is permeating all aspects of modern life. It is becoming increasingly difficult and unnecessary to make a distinction between traditional and cloud-based technologies. As a result:
Consolidated data stores outside of the traditional security perimeter of the individual’s PC or the organization’s network established attractive targets. For instance, by compromising the email database of a mass-marketing service provider, an attacker can gain access to information useful for further criminal activities.
Greater reliance on third-party service providers blurred the line between the roles and responsibilities related to safeguarding data. With each party assuming that the other provides information security oversight and governance, the vulnerabilities available to attackers have increased in number.
The proliferation of online cloud-based services has increased the number of passwords that people need to manage, raising the likelihood that people will select easy-to-remember and, therefore, easy-to-guess logon credentials.
Though I’ve broken out technology trends as distinct observations, they are interrelated within a system that comprises the modern way of life, which incorporates phones, social exchanges, interconnectedness and cloud services into its very fabric. Similarly, the trends in attack strategies, targets and rewards are intertwined to create the reality that infosec professionals need to understand and safeguard.
Digital Forensics and InfoSec Career Advice From Across the Web
Sometimes people ask me for career advice related to information security in general and, more specifically, digital forensics and incident response. I’ve written a few articles on this topic, as did many other respected professionals. Below are pointers to some of these tips.
What Anomalies Trigger The LinkedIn Sign-In Verification Challenge?
LinkedIn prompts users to take additional steps when it determines that the logon attempt is unusual. What activities does LinkedIn consider suspicious? This isn’t well documented, but here are a few possibilities.
According to LinkedIn, the service presents a security challenge when the user attempts to sign in “from an unfamiliar location or device” or when the service detects “suspicious web activity.” In this case, the user might be emailed a verification link or presented with a CAPTCHA challenge.
The security challenge could come up when the user accesses LinkedIn from a new country. In this case, the person would see:
“This sign-in attempt seems unusual for you. As a security precaution, please check your email to verify this sign-in attempt.”
The email message will explain, “Someone just tried to sign in to your LinkedIn account from an unfamiliar location, so we want to make sure it’s really you.” The email will specify the IP address and the country where the attempt originated. The recipient will be advised to click a button to verify the sign-in attempt or click another link to change the password.
Watch out, scammers might misuse this text for phishing!
LinkedIn also presents the verification prompt after an extended absence according to one report on Twitter. Another sighting on Twitter suggests that LinkedIn might be checking for frequent login/logout actions from a single location, though specifics of this logic are a bit unclear.
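LinkedIn hasn’t published its detection logic, but a toy version of the “unfamiliar location or device” heuristic might look like the sketch below; the field names and data shapes are invented for illustration:

```python
def should_challenge(history: dict, attempt: dict) -> bool:
    """Toy heuristic, not LinkedIn's actual logic: challenge any
    sign-in from a country or device the account hasn't used before."""
    return (attempt["country"] not in history["countries"]
            or attempt["device"] not in history["devices"])

# A user who has only ever signed in from the US on one laptop:
history = {"countries": {"US"}, "devices": {"laptop-chrome"}}
```

A real implementation would also weigh factors like sign-in frequency and session behavior, which is consistent with the anecdotal reports above.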
To reduce the likelihood that the sign-in verification prompt will come up, LinkedIn recommends against signing out “each time you use LinkedIn during the day.” Strangely, the service also suggests that “you sign out at the end of each day.” (I doubt that’s very practical advice.)
It’s great to see that LinkedIn has been taking measures to strengthen its authentication practices!
Attributing Cyberattack Activities to a Group in India
There is much we can learn about coordinated online activities of skilled attackers with nation-state affiliations. The following two write-ups provide a wealth of information about one such attack group, which has been targeting organizations in South Asia over the past few years and appears to reside in India:
According to these reports, the group engaged in industrial espionage and spying on political activists. The victims resided in many countries, but Pakistan stood out as the most targeted location. The attackers relied on spear phishing to gain initial access to the targeted environment. The emails were thematically appropriate to the targets and included malicious documents that exploited unpatched vulnerabilities. Some of the malware was digitally signed.
The analysts attributed these cyberattack activities to a specific source by examining:
Types and locations of the targeted organizations
Categories and contents of the data pursued by the attackers
Contents of decoy documents used for spear phishing
Debug path and other strings embedded in the malicious programs
Code-signing certificate details
Domain registration records of the systems used by the attackers
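Several of these indicators, such as debug paths and other embedded strings, can be pulled from samples mechanically. The reports don’t describe the analysts’ tooling; this is just a generic sketch of the technique, with a made-up PDB path standing in for a real artifact:

```python
import re

def extract_strings(blob: bytes, min_len: int = 6) -> list:
    """Return printable ASCII runs of at least min_len bytes,
    the kind of output the classic 'strings' utility produces."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [match.decode("ascii") for match in re.findall(pattern, blob)]

# Hypothetical sample bytes containing a telltale PDB debug path.
sample = (b"\x00\x05MZ\x90"
          + rb"d:\projects\dropper\Release\dropper.pdb"
          + b"\x00\xff")
```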
As a result, Norman and Shadowserver researchers concluded that the attackers apparently operated from India “and have been conducting attacks against business, government and political organizations.” Similarly, ESET analysts concluded “that the entire campaign originates from India.”
In addition, Norman and Shadowserver researchers concluded that the malicious software used in these campaigns was created by multiple software developers who were “tasked with specific malware deliverances.” The developers collaborated, “working on separate subprojects, but apparently not using a centralized source control system.”
Automating Static Malware Analysis With MASTIFF: MASTIFF is an open source framework for automating static malware analysis. This tool, created by Tyler Hudak, determines the type of file that is being analyzed and then applies only the static analysis techniques that are appropriate for that file type. MASTIFF offers a useful way to perform triage on a large set of suspicious files. Extra: See my MASTIFF demo as part of the recorded What’s New in REMnux v4 for Malware Analysis webcast.
Tools for Examining XOR Obfuscation for Malware Analysis: There are numerous ways of concealing sensitive data and code within malicious files and programs. Fortunately, attackers use one particular XOR-based technique very frequently, because it offers sufficient protection and is simple to implement. Here’s a look at several tools for deobfuscating XOR-encoded data during static malware analysis. Extra: Experiment with Thomas Chopitea’s unXOR tool.
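The single-byte variant of this technique is easy to brute-force, which is essentially what such tools do under the hood. A minimal sketch, searching for an expected plaintext marker under every possible key (the URL below is made up):

```python
def xor_decode(data: bytes, key: int) -> bytes:
    """XOR every byte of the input with a single-byte key."""
    return bytes(b ^ key for b in data)

def brute_force_xor(data: bytes, marker: bytes) -> list:
    """Try all 256 single-byte keys; return the keys under which
    the expected plaintext marker (e.g. b'http') appears."""
    return [key for key in range(256) if marker in xor_decode(data, key)]

# Obfuscate a hypothetical C2 URL with key 0x5A, then recover the key.
hidden = xor_decode(b"GET http://c2.example/config", 0x5A)
```

Because XOR is its own inverse, the same xor_decode() function both obfuscates and deobfuscates the data.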
Some organizations encountered advanced persistent threat (APT) attacks over 5 years ago, earlier than most of us. Because of the types of data they process, these initial APT victims were exposed to carefully-orchestrated, espionage-motivated attacks before such attacks spread to a wider range of targets.
Now, half a decade later, might be the time to look at the attacks that the initial APT victims are fighting nowadays to forecast the threats that will eventually reach other companies. I am wondering:
Will traditional APT actors eventually disengage from early APT targets, perhaps after obtaining the necessary data, finding the cost of maintaining presence too high, or deciding to focus on easier-to-attack victims? Have they done this already?
Will APT groups remain engaged, but drastically change tactics according to new goals and in response to new defensive elements? How have these tactics changed in the recent years?
What can we learn by treating initial APT targets as predictors of threat dynamics that will eventually affect a broader set of victims? What attacks are effective today against the organizations that had the time and skills to adapt to initial APT tactics?
It’s hard to answer these questions without first-hand access to the companies that witnessed the first wave of APT attacks. Furthermore, the dilution of the term APT by marketing departments makes it harder to differentiate reliable APT insights, such as what Mandiant has been publishing, from generic APT-themed sales collateral peppered throughout the web.
Based on public information and observations, I suspect the threat landscape over the next few years will involve:
These are just conjectures. I don’t have the answers to the questions I posed above; however, I thought I’d at least ask them and explore the idea of looking at early APT targets’ current state to anticipate advanced threats that will later affect other organizations.
Free Recorded Malware Forensics and Analysis Webcasts
In the field of IT in general and digital forensics in particular, you become obsolete the moment you stop learning. Here are several free recorded webcasts related to reverse-engineering and malware analysis that will help you keep your skills up to date:
New Release of REMnux Linux Distro for Malware Analysis
I’m pleased to announce the release of version 4 of the REMnux Linux distribution for reverse-engineering malicious software. The new version includes a variety of new malware analysis tools and updates the utilities that have already been present on the distro.
REMnux is now available as an Open Virtualization Format (OVF/OVA) file for improved compatibility with virtualization software, including VMware and VirtualBox. (Here’s how to easily install the REMnux virtual appliance.) A proprietary VMware file is also available. You can also get REMnux as an ISO image of a Live CD.
Key updates to existing tools and components:
Core system: Upgraded the underlying Ubuntu OS components and packages; increased default RAM of the virtual appliance to 512MB; replaced OpenJDK with Oracle Java 7 runtime.
Memory analysis: Updated Volatility to version 2.2.
Apple explains that “two-step verification is an optional security feature for your Apple ID.” To activate it, sign into My Apple ID on Apple’s website and go to the Password and Security area. You will then have the ability to specify which “trusted devices” associated with your Apple ID you wish to use as the second authentication token.
When designating a trusted device, such as an iPhone or an iPad, Apple will send a 4-digit verification code, which will pop up on the device almost instantaneously. You’ll need to enter the code on Apple’s website to confirm that you’re in possession of the device.
Once you’ve enabled two-step verification, you’ll need to verify that you still have the device whenever you log in to the My Apple ID website, when you “make an iTunes, App Store, or iBookstore purchase from a new device” or when you attempt to “get Apple ID-related support from Apple.”
For example, after signing into the My Apple ID website with your username and password, you’ll be presented with the prompt to “verify your identity” using one of the enrolled devices.
A pop-up like this will appear on the designated trusted device:
If your device is locked when the code is delivered, you will need to unlock it before seeing the code. The overall experience is a bit more streamlined than what Google uses, because Google requires the user to install and then activate the Google Authenticator app on the mobile device.
Receiving the code requires an active data connection. If your iPhone doesn’t have a data connection but can receive SMS, Apple can send the verification code to a verified phone number via SMS instead. To take advantage of this feature, you need to verify the phone number through the My Apple ID website.
When activating the two-step verification option, Apple automatically generates a Recovery Key, which can be used as an authentication token if you lose access to a trusted device:
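Apple hasn’t published the mechanics behind these verification codes, but the flow it describes can be illustrated with a short sketch, assuming a randomly generated, short-lived 4-digit code checked with a constant-time comparison. This is an illustration of the general pattern, not Apple’s actual implementation:

```python
import hmac
import secrets

def generate_verification_code() -> str:
    """Produce a random 4-digit code, zero-padded (e.g. "0042")."""
    return f"{secrets.randbelow(10_000):04d}"

def verify_code(expected: str, submitted: str) -> bool:
    """Compare the codes in constant time to resist timing attacks."""
    return hmac.compare_digest(expected, submitted)
```

A real deployment would also expire the code after a few minutes and limit the number of verification attempts, since a 4-digit code offers only 10,000 possibilities.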
Google, Apple and, to some extent, Facebook now give users the option of strengthening their account authentication process. It’s only a matter of time before other industry giants, such as Twitter, jump in. As stronger authentication becomes the norm, perhaps we’ll see some innovation in making it more reliable and convenient for end users.
Indicators of Compromise Entering the Mainstream Enterprise?
The need to define custom, incident-specific signatures is slowly gaining traction in the mainstream enterprise. A few years ago this concept, often called Indicators of Compromise (IOCs), was mostly discussed by government organizations and defense contractors who were coming to terms with Advanced Persistent Threat (APT) attacks.
Mandiant began popularizing the term IOC around 2007. Kris Kendall’s paper Practical Malware Analysis mentioned IOCs in the context of malware reversing at Black Hat DC 2007. For a precursor to this, see Kevin Mandia’s Foreign Attacks on Corporate America slides from Black Hat Federal 2006. At the time, few organizations saw the need to go beyond antivirus-based detection by analyzing the adversary’s artifacts to define custom host-level signatures.
Now, several years later, the term IOC is pretty well-known in the infosec industry. More companies are adding malware and related analysis skills to incident response teams. As Jake Williams put it, such firms know how to examine new malware and extract IOCs. “These are then fed back into the system and scans are repeated until no new malware is found.” Automated analysis products from vendors such as Norman, Mandiant, FireEye and HBGary are being increasingly positioned as IR triage-enablers.
That said, the knowledge and skills for deriving and using IOCs are far from mainstream. Anton Chuvakin highlighted the distinction between security haves and have-nots along the lines of this capability. The haves know how to reverse-engineer malware to “extract the IOCs FAST (or get those IOCs shared with you by trusted friends) and then look for them on other systems.”
IOC techniques haven’t entered the mainstream just yet. But we’re heading in that direction, as more people attain forensics skills and as more tools become available for defining and making use of such custom, incident-specific signatures.
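At its simplest, using host-level IOCs means sweeping systems for the artifacts extracted during analysis. The following sketch conveys the idea; the filenames and hash below are hypothetical placeholders, not real indicators:

```python
import hashlib
from pathlib import Path

# Hypothetical host-level indicators extracted during malware analysis.
IOC_MD5_HASHES = {"0123456789abcdef0123456789abcdef"}
IOC_FILENAMES = {"svch0st.exe", "winhelp32.dll"}

def scan_directory(root: str) -> list[Path]:
    """Return files whose name or MD5 hash matches a known indicator."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.name.lower() in IOC_FILENAMES:
            hits.append(path)
        elif hashlib.md5(path.read_bytes()).hexdigest() in IOC_MD5_HASHES:
            hits.append(path)
    return hits
```

Dedicated IOC formats such as OpenIOC, and the tools built around them, express much richer logic (registry keys, mutexes, boolean conditions) than this file-oriented sketch.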
To learn how to define and make use of IOCs, take a look at:
Hiring a Software Engineering Manager in Dallas, TX
Update: This position has been filled.
I’m looking for a software engineering manager to join my team at NCR in Dallas, TX. This person leads the effort to develop and maintain software that addresses our customers’ information technology needs. To accomplish this, the manager motivates team members and oversees their activities in the context of Agile-inspired development practices.
Some of the required skills and proficiency levels include:
Experience managing a software engineering team
Past experience developing applications using C, C++, C#/.NET or Java
Experience in overseeing the development of mission-critical software projects from design to completion
A cultural fit that allows the person and the team to have fun and be productive
Are you such a person or do you know someone like this?
I had the pleasure of speaking with Jake Williams, my colleague at SANS Institute, about his perspective on various malware analysis and reverse-engineering topics. You can read the interview in three parts:
Part 1: Getting into digital forensics, crafting strong malware analysis reports and making use of the analyst’s findings
Part 2: Acting upon malware analyst’s findings and the role of indicators of compromise (IOCs) in the incident response effort
Part 3: Various approaches to malware analysis, including behavioral, dynamic, static and memory forensics
Jake is highly experienced in this space and shared helpful insights in the interview above. Jake will be teaching FOR610: Reverse-Engineering Malware on several occasions at SANS this year.
Beyond Logins: Continuous and Seamless User Authentication
User authentication is usually discussed in the context of the person’s initial interactions with the system—a safeguard often implemented by a classic login screen. However, one-time validation of the user’s identity is becoming insufficient for modern devices and applications that process sensitive data. Such situations might benefit from a seamless authentication approach that incorporates continuous verification of the user’s identity.
Initial attempts at continuous user authentication can be seen in security policies that lock the user’s workstation after a period of inactivity or settings demanding that mobile phone users enter their PIN every few minutes. These traditional security measures annoy people and leave much room for innovation.
Continuous user authentication could occur transparently by spotting anomalies in how the user interacts with the system. Such methods could avoid interrupting the user unless the system begins to doubt the person’s identity. For instance, the user’s web application activities could be continuously scrutinized for deviations from normal workflow and UI interaction patterns. Similarly, a mobile phone could regularly examine the user’s bio-signs to spot an impostor.
The notion of continuous and seamless authentication isn’t new; however, it has yet to enter mainstream computing in a meaningful way. Here are a few examples of what might be feasible:
Users of modern web applications and mobile devices demand strong security measures that don’t get in the way of normal activities. Continuous user authentication could help fulfill such seemingly unattainable demands by passively tracking relevant sensors and metrics, getting in the way only after observing an anomaly that exceeds a reasonable threshold.
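The passive approach described above can be sketched as follows, using keystroke timing as a stand-in for whatever behavioral metric the system tracks. The window size and z-score limit are illustrative values, not tuned thresholds:

```python
from collections import deque
from statistics import mean, stdev

class ContinuousAuthenticator:
    """Passively score a behavioral metric (e.g. keystroke interval in ms)
    against the legitimate user's baseline; challenge the user only when
    the anomaly is sustained, not on a single outlier."""

    def __init__(self, baseline: list[float], window: int = 10, z_limit: float = 3.0):
        self.mu = mean(baseline)        # typical value for this user
        self.sigma = stdev(baseline)    # typical variability
        self.recent = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, value: float) -> bool:
        """Record one measurement; return True if re-authentication is needed."""
        z = abs(value - self.mu) / self.sigma
        self.recent.append(z)
        # Require the *average* recent deviation to be high, so one odd
        # measurement doesn't interrupt the user.
        return len(self.recent) == self.recent.maxlen and mean(self.recent) > self.z_limit
```

A production system would track several metrics at once and adapt the baseline over time, but the core idea stays the same: stay silent until the evidence of an impostor accumulates.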
Creative Options for Better Authentication of Mobile Phone Users
If you think your mobile phone is already deeply embedded in your life, consider the critical role it will have in just a few years. As the importance and sensitivity of the data handled by mobile phones increase, so do the repercussions of the devices falling into unauthorized hands. Manufacturers and app developers will need to implement creative ways of authenticating legitimate phone users without relying on awkward passwords and PINs.
Here are a few creative options for determining whether an authorized person is using the phone:
The authentication factors above might not work on their own, but they could be combined with each other to reach the right balance between false positives and false negatives.
For additional context, the authentication decision could account for the expected bio-pattern of the legitimate user, such as the heart rate range that could be obtained using activity trackers that integrate with phones, such as FuelBand, Fitbit or UP. The phone could also pay attention to the user’s breathing patterns, in the style of the Breathing Zone iPhone app. The decision could also incorporate the person’s expected physical location and activities (e.g., jogging); for an example of how a phone can “predict” the user’s activities, see the Google Now app.
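One way to combine such factors is score-level fusion: each factor reports a confidence between 0 and 1, and the factors are weighted by their assumed reliability. The factor names, weights and threshold below are made-up illustrations; a real system would calibrate them against measured false-accept and false-reject rates:

```python
# Hypothetical factors and reliability weights (weights sum to 1.0).
FACTOR_WEIGHTS = {
    "voice_match": 0.4,
    "gait_pattern": 0.25,
    "typing_cadence": 0.2,
    "known_location": 0.15,
}

def is_authorized(scores: dict[str, float], threshold: float = 0.7) -> bool:
    """Combine per-factor confidence scores into one authentication decision."""
    combined = sum(FACTOR_WEIGHTS[name] * score for name, score in scores.items())
    return combined >= threshold
```

Raising the threshold reduces false accepts at the cost of more false rejects; weighting lets a strong factor (such as a voice match) compensate for a weak contextual signal.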
Innovative authentication options are gradually becoming available for mobile phones. More will come to light over the next few years. In the next decade, we’ll see authentication mechanisms that effortlessly tie the bio-measured identity and context with the phone’s hardware and software functions. In some ways, it will be hard to distinguish between the mobile device and its user.
Public accounts of intrusions conducted or supported by state actors highlight the importance that military organizations are placing on cyber warfare. Those without access to privileged information have been debating when “real-world” warfare will find its way to the Internet, without realizing that such activities have been ongoing for at least several years.
Intrusions initiated by nation states against companies and governments of other countries are motivated by political and economic reasons, much like the traditional form of warfare. My hypothesis is that a country looking to safeguard its own cyber interests has to engage in a systemic campaign to compromise IT assets of its adversaries. The logical goal of such offensive operations is the state of mutually-assured destruction that deters each party in the conflict from taking advantage of the IT assets it compromised.
Here’s why I believe this might be the case:
There is presently no practical way to defend IT infrastructure of any nation state against intrusions, be they commercial or government assets. If there was, we wouldn’t be experiencing so many breaches.
As a result, a country needs to assume that an adversarial nation state will be able to successfully compromise a significant number of the country’s critical IT assets. Many of these intrusions will be undetected.
Therefore, the country will need to find a way to deter the adversary from taking aggressive action against a significant number of the IT assets it illicitly controls.
One way to accomplish this is for the country to compromise a meaningful amount of the adversary’s critical IT infrastructure, creating the situation of a mutually-assured destruction.
The idea of mutually-assured destruction in cyberspace isn’t novel. It was brought up at an RSA Conference panel in February 2012. According to Threatpost’s article discussing that panel:
"Deterrence will play an important role in avoiding conflict, as it did in the Cold War with Russia. The Chinese military appreciates that both it and the U.S. have cyber offensive capabilities and defensive vulnerabilities - ‘big stones, and plate glass windows,’ said Lewis. ‘We’re back to mutually assured destruction.’"
A June 2012 article in the New York Times discusses several cyber warfare initiatives that appear to have been conducted by the U.S. and highlights some of the challenges of achieving cyber warfare dominance and reaching the state of mutually-assured destruction.
Nations with the interest, expertise and budget to conduct offensive cyber activities are probably busy hacking each other to avoid being outpaced in this process by their adversaries. They are doing this to achieve the state of mutually-assured destruction as a way of deterring each other from launching a full-scale cyber war. Just a theory.
“Be doubly vigilant after a physical break-in. Don’t just look for what’s missing, but what might have been left behind.”—Paul Ducklin, discussing the practice of some cyber-criminals to install a keylogger after breaking into victims’ offices and stores.
It’s unusual for information security professionals to work in a group that directly generates revenue instead of being a cost center. Many find working within a cost center hard, in part because when it is time to cut costs, infosec budgets are among the first to go. Product management provides an opportunity for infosec pros to work in a profit center for a change. (There are others, such as consulting and sales.)
From my perspective, the primary goal of product management is to define product capabilities and drive product adoption. Sometimes this view on product management is called product development.
Defining product capabilities entails working closely with customers to understand and anticipate their needs. It also requires understanding the company’s strengths and weaknesses related to the market as well as the competitive landscape.
Driving product adoption involves those steps that help the product find its way to its consumers. This usually requires understanding the company’s channel and partnerships, unless the product is sold directly. It also involves regular customer interactions and some aspects of marketing.
In the world of information security, a product might be a hardware gadget, such as a network tap, a piece of software such as an anti-malware tool, or a service, such as a managed security offering. Sometimes it is a combination of these categories.
Here are the types of tasks a product manager might be asked to perform to support the objectives outlined above:
Define a strategy for the product’s evolution to support business and customer needs.
Create specifications, prioritize requirements and maintain a roadmap of the features being developed.
Manage the process of making the product available to customers.
Act as a subject matter expert for the product’s capabilities in pre- and post-sales discussions.
Collaborate with the engineering team building the product to clarify requirements and specifications.
Allowing Gullible Victims to Self-Select in Online Attacks
Cormac Herley’s paper Why do Nigerian Scammers Say They are from Nigeria? explains how some purposefully-lame scam emails are advantageous to the attacker. Such messages allow the scammer to avoid victims who will consume valuable time, but will turn out to be too savvy to fall for the scam. Herley explains that by initiating contact using a blatantly fraudulent email “that repels all but the most gullible, the scammer gets the most promising marks to self-select.”
This motivates some scammers to send messages that are easily identified as fraudulent by many people, yet succeed at catching the more gullible portion of the population. An excerpt from one such example:
"We are top officials of the Federal Government Contract Review Panel who are interested in importation of goods into our country with funds which are presently trapped in Nigeria. In order to commence this business we solicit your assistance to enable us RECEIVE the said trapped funds ABROAD."
An article in The Economist on this subject quotes Basil Udotai, a former cybersecurity director of Nigeria’s National Security Adviser: “There are more non-Nigerian scammers claiming [to be] Nigerian than ever reported.” One motive for this might be “Nigeria’s dreadful reputation for corruption that makes the strange tales of dodgy lawyers, sudden death and orphaned fortunes seem plausible in the first place.”
Allowing victims to self-select as being vulnerable might be useful for online attacks and scams that involve social engineering and require human involvement on the attacker’s part. They also seem most appropriate for mass-scale attacks, where a small percentage of gullible people produces a sufficiently large set of likely targets.
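Herley frames this as a classification problem: every reply costs the scammer effort, and only gullible respondents pay off. A small sketch illustrates why a blatant message can out-earn a polished one; all of the numbers below are made up for illustration:

```python
def expected_profit(recipients: int, reply_rate: float,
                    gullible_fraction: float, payoff: float,
                    cost_per_reply: float) -> float:
    """Profit = payoff from gullible repliers minus effort spent on all repliers."""
    replies = recipients * reply_rate
    victims = replies * gullible_fraction
    return victims * payoff - replies * cost_per_reply

# A blatant "Nigerian" email draws few replies, but nearly all are viable marks:
blatant = expected_profit(1_000_000, 0.0001, 0.9, 2000.0, 50.0)
# A polished email draws more replies, most of which waste the scammer's time:
polished = expected_profit(1_000_000, 0.001, 0.05, 2000.0, 50.0)
```

With these illustrative inputs, the blatant message is the more profitable campaign even though it attracts a tenth of the replies, because the scammer spends effort only on promising marks.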
Self-selecting victims by using blatantly malicious communications also might be useful for some penetration testing and targeted attack scenarios. A human-powered attack will want to focus on people most likely to assist the attacker. Moreover, the attacker might conceal his true sophistication by purposefully appearing amateurish.
So perhaps the next time you come across a poorly-worded email scam, filled with all-uppercase letters, typos, grandiose titles and financial promises, you won’t laugh at the naive message. The scammer might be so clever that his apparent incompetence is a charade.
There are many reasons why business managers seem to ignore the risks brought forth by information security professionals. I outlined six of them in an earlier post. In this note, I’d like to add another possible explanation: the endowment effect, which affects how humans value their possessions.
"You can also create value by investing time and effort into something (hence why we cherish those scraggly scarves we knit ourselves) or by knowing that someone else has (gifts fall under this category)."
Information security professionals experience a sense of ownership for the data they safeguard. Therefore, the endowment effect might bias us towards overestimating the value of this data. Business managers are somewhat removed from the data by layers of applications and business processes and aren’t affected by the bias to the same degree.
In other words, business managers might value the data less than infosec professionals do. This would contribute to the disagreement regarding the level of risk associated with the security of the data.
If information security professionals are, indeed, irrationally influenced by the endowment effect, what can we do about it? Alternatively, when persuading business managers to agree with our perspective, how might we influence them to experience the endowment effect to the same extent?
How Malicious Code Can Run in Microsoft Office Documents
One of the most effective methods of compromising computer security, especially as part of a targeted attack, involves emailing the victim a malicious Microsoft Office document. Even though the notion of a document originally involved non-executable data, attackers found ways to cause Microsoft Office to execute code embedded within the document. Below are 4 of the most popular techniques used to accomplish this.
Support for executing code that’s embedded as a VBA macro is built into Microsoft Office. Once the victim opens the document and allows macros to run, this code can run arbitrary commands on the user’s system, including those that launch programs and interact over the network. The penetration testing tool Metasploit makes it relatively straightforward to generate a payload that attackers could embed in an Office file as a VBA macro. (See one example by Chris Patten.)
Such macros can be included in “legacy” binary formats (.doc, .xls, .ppt) and in modern XML-formatted documents supported by Microsoft Office 2007 and higher. In either case, Office will present the user with a security warning, stating that macros have been disabled and offering to “enable content.” Social engineering techniques can persuade the victim to click the button that will allow the embedded macro to run and infect the system.
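Defenders can take advantage of the modern format’s structure: an XML-formatted Office document is a ZIP archive, and VBA macro code is stored in a part named vbaProject.bin. A minimal check for the presence of macros (this applies to the XML-based formats only, not the legacy binary ones) might look like:

```python
import zipfile

def contains_vba_macros(path: str) -> bool:
    """Report whether an XML-formatted Office file (.docm, .xlsm, etc.)
    carries an embedded VBA project. These documents are ZIP archives,
    and macro code lives in a part named vbaProject.bin."""
    with zipfile.ZipFile(path) as doc:
        return any(name.endswith("vbaProject.bin") for name in doc.namelist())
```

A mail gateway or triage script could use such a check to flag macro-bearing attachments for closer inspection before they reach users.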
Payload of a Microsoft Office Exploit
Another way to execute malicious code as part of an Office document involves exploiting vulnerabilities in a Microsoft Office application. The exploit is designed to trick the targeted application into executing the attacker’s payload, which is usually concealed within the Office document as shellcode.
In this case, Microsoft Office has to be exploited to execute the attacker’s code. This is in contrast to the previous scenario, where the attacker takes advantage of macros, supported by Microsoft Office as a feature. For instance, vulnerability CVE-2012-0141, announced in May 2012, could allow the attacker to craft a malicious Excel file to include an exploit that would “take complete control of an affected system.”
Embedded Flash Program
Embedding a Flash program inside an Office document provides attackers yet another way to run malicious code on the victim’s system. In this case, the code within the Flash object runs as soon as the victim opens the document, without any warnings and without relying on exploits. This code is still subject to security restrictions imposed by Flash Player, so to perform escalated actions the code would need to exploit a vulnerability in Flash Player.
Attackers can embed Flash objects in Office documents using automated tools or manual steps. In one real-world attack, the embedded Flash object instructed Flash Player to download and play an MP4 file that was designed to exploit the CVE-2012-0754 vulnerability in Flash Player, announced in February 2012. This allowed the attacker to infect the victim’s system with a malicious Windows executable (trojan).
These are some of the techniques that intruders have used to execute code in Microsoft Office documents to compromise the system. The attacker could directly take advantage of a vulnerability in the targeted Office application. In other cases, the attacker uses functionality provided by Microsoft Office to either trick the user into allowing the malicious code to run (VBA macros) or to use a weakness in Office settings to run code that exploits vulnerabilities in other applications (Flash Player).
Confusing the Padlock and the Favicon in the Web Browser
Web browser makers are continuing to change how they display two visual elements that people have been taking for granted: the padlock that designates an HTTPS connection and the favicon that acts as the thumbnail of the website’s visual identity. These changes are aimed at minimizing the risk that a favicon that looks like a padlock will instill a false sense of security.
Users of web browsers have gotten accustomed to looking for the padlock image as part of the URL to determine whether the connection is “secure.” The browsers have typically displayed the lock for HTTPS connections where the SSL certificate was properly validated. Most non-geeky people don’t know what aspect of security the lock is supposed to signify, but they have been trained to rely on it as a symbol of online safety.
Webmasters can specify a favicon, a tiny image that web browsers display in the URL bar to reinforce the site’s digital identity. Unfortunately, computer attackers can display favicons that look like padlocks, fooling victims into thinking that they are using an SSL-encrypted and authenticated connection. One tool that can automate such an attack is sslstrip, and this category of attacks is described in the Black Hat presentation by the tool’s author.
Microsoft’s Internet Explorer (v9) displays both the padlock and favicon in the URL bar. The favicon is on the left of the URL and—assuming the connection is using HTTPS with a valid certificate—the padlock is on the right:
A malicious webmaster or an attacker who can interfere with the web session may be able to display a padlock favicon even for HTTP or an invalid HTTPS connection, fooling the victim into thinking that the connection is “secure.” The distinction of where the trustworthy lock should exist in Internet Explorer (to the right of the URL, not to the left) is likely to be lost on most people.
Google Chrome doesn’t display the favicon as part of the URL, showing it only on tabs and bookmarks. Chrome displays the lock icon in the URL bar (in several variations) for connections that use HTTPS.
Because Chrome doesn’t display the favicon on the URL bar, this browser’s users are being conditioned to place greater trust in the padlock image displayed next to the URL. This is good.
Mozilla’s Firefox stopped displaying the padlock icon for HTTPS connections starting in version 4 of the browser, phasing it out in favor of the Site Identity Button. The current production (v12) and beta (v13) releases of Firefox still display the favicon in the browser’s URL bar and on the tab:
In this setup, a website that doesn’t use HTTPS can display a favicon that looks like a padlock in the URL bar, fooling victims who associate the lock with safety into thinking that the connection is “secure.”
To help address this risk, the current nightly build of Firefox (v14) no longer displays the favicon in the URL, showing it only on tabs, bookmarks and Awesome bar suggestions, according to Firefox developer Jared Wein. This version of Firefox adds the padlock to the Site Identity Button, while preventing webmasters from placing a false lock icon in the URL bar:
The approach to eliminating the favicon from the URL bar and displaying the padlock there for valid HTTPS connections is consistent with the behavior of Google Chrome. It’s encouraging to see the two browsers using compatible visual approaches that are designed to minimize user confusion.
Fortunately, the behavior of Opera is consistent with how Firefox and Chrome display locks and favicons. Opera displays favicons on tabs and bookmarks, while displaying the padlock on the URL bar for appropriate HTTPS connections:
Unfortunately, Safari displays the padlock and favicon on the URL bar, as well as on tabs and bookmarks in a manner consistent with Internet Explorer. In other words, the favicon is displayed to the left of the URL and the lock, when appropriate, is shown to the right of the URL.
There’s room for continuing to improve the visual indications that browsers present to users for making security-related decisions. Internet Explorer and Safari appear behind the times in their treatment of favicons, because they display these images in the URL bar. In the meantime, individuals who educate non-technical people in web safety practices should consider how to best explain the various security indicators in the URL bar. None of these are easy undertakings.
I published the slides to my presentation “How attackers use social engineering to bypass your defenses,” which shows numerous examples of real-world social engineering attacks. These materials are designed to help you improve the relevance of your security awareness training and to adjust your data defenses by revisiting your perspective of the threat landscape. They cover techniques such as:
The use of alternative channels of communication
Focus on personally-relevant messages
The principle of social compliance in potential victims
People’s reliance on security mechanisms
Why bother breaking down the door if you can simply ask the person inside to let you in? Social engineering works, both during penetration testing and as part of real-world attacks. This briefing explores how attackers are using social engineering to compromise defenses. It presents specific and concrete examples of how social engineering techniques succeeded at bypassing information security defenses.
Are Anxious People More Vigilant in Information Security?
Common wisdom suggests that anxious individuals are better at spotting danger than those with more mellow personalities. However, research by Tahl Frenkel and Yair Bar-Haim indicates that the opposite may be true: People with nonanxious personalities might be more skilled at spotting the early signs of trouble. This finding could highlight the type of people best suited for information security jobs.
Spotting Fearful Faces on Photographs
Some of the individuals selected for the study possessed anxious personality traits according to the State-Trait Anxiety Inventory scale, while others were nonanxious. The participants were shown photographs of a face that exhibited a progressive degree of fearfulness. The researchers measured how early in the progression the participants could detect fear on the photos.
"As expected anxious participants needed significantly less stimulus fear intensity for conscious fear detection," researchers discovered. However, only non-anxious participants began exhibiting early signs of fear detection before consciously recognizing fear on the photograph. The Scientific American Mind’s March 2012 issue clarified:
"The brains of anxious subjects barely responded to the images until the frightened face had reached a certain obvious threshold, at which point their brains leapt into action as though caught off guard. Meanwhile nonanxious respondents showed increasing brain activity earlier in the exercise, which built up subtly with each increasingly fearful face."
The researchers concluded that anxious people might lack the ability to detect threats in a granular manner and “therefore might face threats with no prior warning signal—further contributing to their already heightened anxiety level and perhaps associated with their enhanced baseline threat vigilance.”
Early Detection of Threats in Information Security
If there is a stereotype of an information security professional, it is sure to include anxious characteristics, such as concerns regarding threats, distrust and perhaps a degree of paranoia. These traits allow us to recognize the signs of danger, building defenses in anticipation of risks and also responding to the situation when the defenses fail.
Yet, those professionals who are calm and nonanxious might be better at spotting early warning signs of an intrusion before it escalates into a major breach. This skill is similar to the ability to detect the subtle signs of fear when looking at a photograph of a face. If this is true, then I wonder whether such individuals trust their instincts and have the time to begin investigating the potential problem early enough.
Here’s a listing of my 4 favorite on-line security articles, papers and blog posts that I read in the past week. This week I’m featuring a series of articles that profile Anonymous that were written by Josh Corman and Brian Martin: