What to Do About Password-Sharing?


Sometimes people share passwords. This practice might stem from the lack of support for unique user accounts in some applications. Even more importantly, the reasons for password-sharing have to do with convenience and social norms. Technologists are starting to recognize the opportunity to account for these real-world practices in their products.

I detailed password-sharing practices in a post on the future of user account access sharing and presented several examples. I looked mostly at consumer and small business situations, not necessarily at heavily-controlled enterprise applications. For instance, adults frequently share logon credentials to their Netflix accounts among a small group of people. This saves on Netflix service fees.

Netflix seems to not only tolerate, but even encourage this practice. The service now supports multiple user profiles as part of a single account. One set of logon credentials is actually expected to be used by multiple individuals! A security purist would not approve. Yet, this product management decision recognized users’ lack of interest in safeguarding their Netflix accounts from friends and family members.

Rather than pretending that logon credential-sharing doesn’t exist, Netflix accepted that its users will share accounts and built the features its customers wanted. (The fact that any Netflix user profile can cancel the service helps constrain the potential size of the group who can access the account.)

For other examples, consider a small office environment within which individuals trust each other enough to share passwords to applications and websites used by the company, such as Pandora, UPS, Mailchimp, etc. Preaching to these people about the importance of individual logon credentials is usually a waste of time. This is why I hailed the emergence of password vault applications such as Mitro, which make it easier for individuals to share passwords while offering some security safeguards that don’t exist when using Post-it notes for this purpose.

Let’s project into the future a bit. According to danah boyd, teens often share passwords with friends as a sign of trust. This practice is akin to giving a person you trust the combination code to your school locker. Since teens will inevitably become adults, product managers and security architects will eventually have to account for such practices in their applications.

Customer-focused organizations will need to balance the desire to protect people from themselves with the need to give customers what they want. Some say that passwords will eventually go away and be replaced by biometrics and other technologies. Perhaps. In any case, people’s desire and sometimes legitimate need to share account access will persist. Security professionals and product managers will need to figure out how to provide such capabilities while accounting for meaningful risks.

Lenny Zeltser

Potential Security Applications of the iPhone 5S M7 Motion Coprocessor


In addition to the fingerprint scanner, there is another new hardware component in iPhone 5S that security-minded folks might get excited about. I’m referring to the M7 motion coprocessor. According to Apple, the chip continuously measures motion data “from the accelerometer, gyroscope and compass to offload work” from the main processor. Here’s why this is interesting.

Although M7 is being positioned as an enabler for enhanced fitness apps, there might be interesting security applications for the precision and energy-efficiency capabilities of the coprocessor. For instance, consider the possibility for continuous and seamless user authentication. As I proposed earlier:

Continuous user authentication could occur transparently by spotting anomalies in how the user interacts with the system. Such methods could avoid interrupting the user unless the system begins to doubt the person’s identity.

An app utilizing M7 could help identify the unique walking pattern of the phone’s user. This authentication approach is described in a paper titled Pace Independent Human Identification Using Cell Phone Accelerometer Dynamics. Without M7, this methodology might have been impractical due to battery drain and the lack of sufficiently accurate measurements.

Another example of an authentication approach that could make use of the phone’s motion sensors is exhibited by SilentSense, which tracks “the unique patterns of pressure, duration and fingertip size and position” exhibited by the phone’s user. The goal of this technology is to spot when the phone is being used by an unexpected person.
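The gait-based approaches above boil down to comparing motion features against an enrolled baseline for the legitimate user. Here is a minimal, illustrative sketch in Python that uses synthetic accelerometer data rather than real M7 readings; the feature set and scoring are simplifications of what the cited papers describe, not their actual algorithms:

```python
import math
import random
import statistics

def gait_features(samples, rate_hz=50):
    """Extract crude gait features from accelerometer-magnitude samples:
    mean level, variability, and cadence (mean-crossings per second)."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a - mean) * (b - mean) < 0
    )
    cadence = crossings / (len(samples) / rate_hz)
    return (mean, stdev, cadence)

def anomaly_score(enrolled, observed):
    """Normalized distance between enrolled and observed feature tuples;
    0 means identical, larger values mean more suspicious."""
    return sum(abs(e - o) / (abs(e) + 1e-9)
               for e, o in zip(enrolled, observed))

def walk(step_hz, amp, n=500, rate=50, seed=0):
    """Synthesize a noisy accelerometer-magnitude trace for a walker."""
    rnd = random.Random(seed)
    return [1.0 + amp * math.sin(2 * math.pi * step_hz * i / rate)
            + rnd.gauss(0, 0.02) for i in range(n)]

owner = gait_features(walk(step_hz=1.8, amp=0.30))
same_person = gait_features(walk(step_hz=1.8, amp=0.30, seed=1))
imposter = gait_features(walk(step_hz=2.4, amp=0.55, seed=2))

# The imposter's stride frequency and intensity differ from the baseline.
assert anomaly_score(owner, same_person) < anomaly_score(owner, imposter)
```

A real system would use richer features and a trained classifier, but the structure is the same: enroll a baseline, score incoming motion data against it, and challenge the user only when the score drifts too high.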

As nice as the iPhone 5S fingerprint reader is, authenticating users through walking patterns could be even more user-friendly and offer the unique benefit of continuous validation that one-time authentication doesn’t provide. (BTW, the new Moto X phone dedicates a hardware component to “context-sensing,” which seems to support motion-awareness functions.)

While Apple isn’t planning to grant app developers access to the fingerprint reader (a.k.a. “Touch ID sensor”) according to AllThingsD, access to M7 will be available using the expanded Core Motion API. This would allow third-party developers to come up with innovative ways of utilizing M7 motion coprocessor capabilities that go beyond fitness-related applications, assuming Apple’s App Store guidelines will allow this.

Who knows, perhaps your favorite phone security app might soon issue an alert if it detects that an impostor is carrying your phone, giving you an opportunity to wipe the device or take other corrective action. Just an idea.

Lenny Zeltser

Teaching Malware Analysis and the Expanding Corpus of Knowledge


Over the years, the set of skills needed to analyze malware has been expanding. After all, software is becoming more sophisticated and powerful, regardless of whether it is being used for benign or malicious purposes. The expertise needed to understand malicious programs has been growing in complexity to keep up with the threats.

My perspective on this progression is based on the reverse-engineering malware course I’ve been teaching at SANS Institute. Allow me to indulge in a brief retrospective on this course, which I launched over a decade ago and which was recently expanded.

Starting to Teach Malware Analysis

My first presentation on the topic of malware analysis was at the SANSFIRE conference in 2001 in Washington, DC, I think. That was one of my first professional speaking gigs. SANS was willing to give me a shot, thanks to Stephen Northcutt, but I wasn’t yet a part of the faculty. My 2.5-hour session promised to:

Discuss “tools and techniques useful for understanding inner workings of malware such as viruses, worms, and trojans. We describe an approach to setting up inexpensive and flexible lab environment using virtual workstation software such as VMWare, and demonstrate the process of reverse engineering a trojan using a range of system monitoring tools in conjunction with a disassembler and a debugger.”

I had 96 slides. Malware analysis knowledge wasn’t yet prevalent in the general community outside of antivirus companies, which were keeping their expertise close to the chest. Fortunately, there was only so much one needed to know to analyze mainstream samples of the day.

Worried that evening session attendees would have a hard time staying alert after a full day of classes, I handed out chocolate-covered coffee beans, which I got from McNulty’s shop in New York.

Expanding the Reverse-Engineering Course

A year later, I expanded the course to two evening sessions. It included 198 slides and hands-on labs. I was on the SANS faculty list! Slava Frid, who helped me with disassembly, was the TA. My lab included Windows NT and 2000 virtual machines. Some students had Windows 98 and ME. SoftICE was my favorite debugger. My concluding slide said:

  • Too many variables to research without assistance
  • Ask colleagues, search Web sites, mailing lists, virus databases
  • Share your findings via personal Web sites, incidents and malware mailing lists

That advice applies today, though one of the wonderful changes in the community from those days is a much larger set of forums and blogs focused on malware analysis techniques.

By 2004, the course was two days long and covered additional reversing approaches and browser malware. In 2008 it expanded to four days, with Mike Murr contributing materials that dove into code-level analysis of compiled executables. Pedro Bueno, Jim Shewmaker and Bojan Zdrnja shared their insights on packers and obfuscators.

In 2010, the course expanded to five days, incorporating contributions by Jim Clausing and Bojan Zdrnja. The new materials covered malicious document analysis and memory forensics. I released the first version of REMnux, a Linux distro for assisting malware analysts with reverse-engineering malicious software.

Around that time the course was officially categorized as a forensics discipline by SANS and was brought into the organization’s computer forensics curriculum thanks to the efforts of Rob Lee.

Recent Course Expansion: Malware Analysis Tournament

The most recent development related to the course is the expansion from five to six days. Thanks to the efforts of Jake Williams, the students are now able to reinforce what they’ve learned and fine-tune their skills by spending a day solving practical capture-the-flag challenges. The challenges are built using the NetWars tournament platform. It’s a fun game. For more about this expansion, see Jake’s blog and tune into his recorded webcast for a sneak peek at the challenges.

It’s exciting to see the community of malware analysts increase as the corpus of our knowledge on this topic continues to expand. Thanks to all the individuals who have helped me grow as a part of this field and to everyone who takes the time to share their expertise with the community. There’s always more for us to learn, so keep at it.

Lenny Zeltser

Researching Scams Helps Understand Human Vulnerabilities


Researching online scams opens a window into the world of deceit, offering a glimpse into human vulnerabilities that scammers exploit to further their interests at the victims’ expense. These social-engineering tactics are fascinating, because sometimes they work even when the person suspects that they are being manipulated.

Here are examples of 7 social engineering principles I’ve seen utilized as part of online scams:

Miscreants know how to exploit weaknesses in the human psyche. Potential victims should understand their own vulnerabilities. This way, they might notice when they’re being social-engineered before the scam has a chance to run its course.

Lenny Zeltser

Looking for Anomalies in Check Overpayment Scam Correspondence


After agreeing to purchase the item or service you are selling, the remote buyer offers a check for an amount larger than you requested, with a polite request that you forward the difference to the buyer’s friend or colleague. You deposit the check, wait for it to clear, and send the funds to the designated party. What could possibly go wrong?

This is the gist of the check overpayment scam, which debuted on the Internet around 2002 according to Snopes.com. In 2004, the FTC warned about initial variations of the scam, which began with the scammer agreeing to buy the item advertised online by the victim. A modern variation starts as a response to an ad for private lessons, placed on a site such as Craigslist:

"How are you doing today?I want a private lessons for my daughter,Mary. Mary is a 13 year old girl, home schooled and she is ready to learn. I would like the lessons to be in your home/studio. Please I want to know your policy with regard to the fees,cancellations, and make-up lessons."

How might you determine whether the sender has malicious intent? The lack of spaces after some punctuation marks seems strange. Also, a legitimate message would have probably included some details specific to the lessons being requested. The scammer probably excluded such details, because he or she used the same text for multiple victims, who offer different services. The message above didn’t include the recipient on the To: line, supporting the theory that the scammer sent it to multiple people.

Concerned recipients might look up some of the unusual phrases from this message, in which case they’d see numerous reports about scams that include phrases from the text above. (1, 2, 3, 4, 5, etc.)
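Heuristics like these can be automated. The sketch below, in Python, flags missing spaces after punctuation and matches the text against a short phrase list; the phrases and the regular expression are illustrative assumptions for this example, not a vetted ruleset:

```python
import re

# Illustrative phrases reported in overpayment-scam messages; a real
# filter would draw on a maintained corpus, not this short sample list.
SUSPECT_PHRASES = [
    "i want a private lessons",
    "handle mary very well",
    "certified cashier's check",
]

def scam_signals(message):
    """Return a list of human-readable anomaly flags for a message."""
    flags = []
    text = message.lower()
    # A letter immediately after a comma, period or question mark,
    # i.e. no space following the punctuation mark.
    if re.search(r"[a-z][.,?][a-z]", text):
        flags.append("missing space after punctuation")
    for phrase in SUSPECT_PHRASES:
        if phrase in text:
            flags.append(f"known scam phrase: {phrase!r}")
    return flags

msg = ("How are you doing today?I want a private lessons for my "
       "daughter,Mary.")
print(scam_signals(msg))
```

Running this on the first scam email above flags both the punctuation anomaly and a known phrase, while a normally punctuated message produces no flags.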

If the victim responds to the original email, the scammer continues to weave the web of deceit:

"I’m a single parent who always want the best for my daughter and I would be more than happy if you can handle Mary very well for me. I would have loved to bring Mary for "meet and greet"interviews before the lessons commence, which I think is a normal way to book for lessons but am in Honolulu,Hawaii right now for a new job appointment. So,it will not be possible for me to come for the meeting. Mary will be coming for the lessons from my cousin’s location which is very close to you. Although,Mary and my cousin are currently in the UK and I want to finalize the arrangement for the lessons before they come back to the United States because that was my promise to Mary before they left for the UK."

"As for the payment, I want to pay for the three month lessons upfront which is $780 and I’m paying through a certified cashier’s check.Hope this is okay with you?"

I wouldn’t point to any specific aspect of this message as malicious, but the tone feels a bit off, even for a person who might be a non-native English speaker. The phrase “handle Mary very well for me” is strange. Also, it seems unusual to pay for private lessons using a cashier’s check. In addition, the message has more details about Mary and the sender’s travel than one would expect in this context. The scammer is probably doing this to justify a request that will be presented to the victim later.

If the recipient writes back, the scammer responds:

"My cousin will get in touch with you for the final lessons arrangement immediately they are back from the UK. I will want you to handle Mary very well for me because she is all I have left ever since her mother’s death four years ago. Being a single parent, It’s not easy but I believe God is on my side."

Several aspects of this message might trigger a careful reader’s anomaly detector. Again we see the strange phrase “handle Mary very well.” Also, the sender offers extraneous details that don’t belong in the context of this email discussion, perhaps attempting to establish an emotional connection on the basis of death in the family and religion. There is more:

"I also want to let you know that the payment will be more than the cost for the three month lessons. So, as soon as you receive the check, I will like you to deduct the money that accrues to the cost of the lessons and you will assist me to send the rest balance to my cousin. This remaining balance is meant for Mary and my cousin’s flight expenses down to the USA, and also to buy the necessary materials needed for the lessons. I think,I should be able to trust you with this money?"

Now we see the classic element of the check overpayment scam. The sender is appealing to the recipient’s honesty and perhaps hopes to build upon the emotional response evoked in the earlier paragraph.

Snopes.com explains that the scam works because the FTC “requires banks to make money from cashier’s, certified, or teller’s checks available in one to five days. Consequently, funds from checks that might not be good are often released into payees’ accounts long before the checks have been honored by their issuing banks. High quality forgeries can be bounced back and forth between banks for weeks before anyone catches on to their being worthless.”

Watch out for situations where a buyer is willing to overpay and asks for your help in forwarding the remainder of the funds to someone else. Also, look for anomalies that might indicate malicious intent, such as strange tone, unusual phrases, unnecessary details and uncommon punctuation. If you notice such aspects of the text, research them on the web before deciding whether to continue your correspondence.


Lenny Zeltser

How the Digital Certificates Ecosystem is Being Strengthened


Given the many ways in which digital certificates can be misused and the severe repercussions of such incidents, several initiatives have been launched to strengthen the ecosystem within which the certs are issued, validated and utilized. This is the start of what I hope will be a slew of projects and security improvements that will gradually gain a foothold in enterprise and personal environments.

Current efforts to improve the state of the web’s Public Key Infrastructure (PKI) include:

  • Operating systems and software are becoming more mindful of the need to maintain up-to-date lists of revoked certificates. For example, while Internet Explorer on Windows XP enabled only code-signing revocation checking by default, Vista and later versions also check by default for revoked server certificates used in SSL/TLS connections, according to Websense. Microsoft has also enhanced its mechanism for automatically distributing updates to the listing of revoked certificates.
  • EFF launched the SSL Observatory project to catalog the SSL/TLS certificates used by websites and to facilitate the “search for vulnerabilities, document the practices of Certificate Authorities, and aid researchers interested in the web’s encryption infrastructure.” Such initiatives might assist in understanding the scope and nature of issues that affect HTTPS browsing.
  • Google launched the Certificate Transparency project, aimed at strengthening the PKI ecosystem by “providing a publicly accessible place for issued certificates to be published.” According to the initial proposal, the project’s primary goal is to make it difficult for malicious or careless CAs to “issue a certificate for a domain without the knowledge of the owner of that domain.”

Of the efforts to strengthen the web’s PKI environment, pinning certificates or their associated public keys seems most promising.
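Pinning amounts to a default-deny check: accept a certificate only if its fingerprint matches one recorded in advance. Here is a minimal sketch in Python; the certificate bytes below are placeholders, and in practice the DER data would come from the live connection (for example, via the standard `ssl` module):

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def matches_pin(der_cert: bytes, pinned: set) -> bool:
    """Default-deny: accept only certificates whose fingerprint
    was explicitly pinned in advance."""
    return fingerprint(der_cert) in pinned

# In practice the DER bytes would come from the live connection, e.g.:
#   pem = ssl.get_server_certificate(("example.com", 443))
#   der = ssl.PEM_cert_to_DER_cert(pem)
# Here we simulate with placeholder bytes.
der = b"\x30\x82 placeholder certificate bytes"
pins = {fingerprint(der)}  # recorded during a trusted first contact

assert matches_pin(der, pins)                # expected cert: accepted
assert not matches_pin(b"other cert", pins)  # unexpected cert: rejected
```

Real pinning implementations typically hash the SubjectPublicKeyInfo rather than the whole certificate, so that a renewed certificate with the same key still passes, but the default-deny comparison is the same.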

Many information security practices are based on the principle of denying access by default, unless there is an explicit need to grant access. For instance, most network firewalls only allow specific traffic, instead of allowing all ports and blocking only risky ones. Soon, we might need to exercise the same degree of control over digital certificates trusted by our systems. The tools available to us for accomplishing this are still awkward and immature. This will change.

Lenny Zeltser

How Digital Certificates Are Used and Misused


Online communication and data sharing practices rely heavily on digital certificates to encrypt data as well as authenticate systems and people. There have been many discussions about the cracks starting to develop in the certificate-based Public Key Infrastructure (PKI) on the web. Let’s consider how the certs are typically used and misused to prepare for exploring ways in which the certificate ecosystem can be strengthened.

How Digital Certificates are Used

Digital certificates are integral to cryptographic systems that are based on public/private key pairs, usually in the context of a PKI scheme. Wikipedia explains that a certificate binds a public key to an identity. Microsoft details the role that a Certificate Authority (CA) plays in a PKI scenario:

"The certification authority validates your information and then issues your digital certificate. The digital certificate contains information about who the certificate was issued to, as well as the certifying authority that issued it. Additionally, some certifying authorities may themselves be certified by a hierarchy of one or more certifying authorities, and this information is also part of the certificate."

HTTPS communications are safeguarded by SSL/TLS certificates, which help authenticate the server and sometimes the client, as well as encrypt the traffic exchanged between them. Digital certificates also play a critical role in signing software, which helps determine the source and authenticity of the program when deciding whether to trust it. Certificates could also be used as the basis for securing VPN and Wi-Fi connections.
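To make the binding between a public key and an identity concrete, the sketch below summarizes the identity fields of a certificate, using the nested-tuple dictionary format that Python’s `ssl` module returns from `SSLSocket.getpeercert()`; the sample certificate data here is invented for illustration:

```python
def summarize_cert(cert):
    """Summarize who a certificate was issued to and by whom, given
    the dict format produced by ssl.SSLSocket.getpeercert()."""
    def flatten(name):  # ((('commonName', 'x'),), ...) -> {'commonName': 'x'}
        return {key: value for rdn in name for (key, value) in rdn}
    subject = flatten(cert["subject"])
    issuer = flatten(cert["issuer"])
    return {
        "issued_to": subject.get("commonName"),
        "issued_by": issuer.get("commonName"),
        "valid_until": cert.get("notAfter"),
    }

# Hypothetical certificate data mimicking getpeercert() output.
sample = {
    "subject": ((("commonName", "www.example.com"),),),
    "issuer": ((("organizationName", "Example CA"),),
               (("commonName", "Example CA Root"),)),
    "notAfter": "Jan  1 00:00:00 2026 GMT",
}
print(summarize_cert(sample))
```

The summary mirrors the Microsoft description quoted above: the certificate names the party it was issued to, the authority that issued it, and the validity period during which the binding holds.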

Misusing Digital Certificates

In recent years, the PKI environment within which digital certificates are created, maintained and used began showing its weaknesses. We’ve witnessed several ways in which the certs have been misused in attacks on individuals, enterprises and government organizations:

  • Man-in-the-middle (MITM) attacks abused certificates to intercept SSL/TLS traffic. Software rarely complains when a server’s SSL/TLS certificate has been signed by a trusted, but unexpected CA. For example, one person noticed that when connecting to Gmail via IMAP/SSL from a hotel, the server’s certificate was signed by an entity other than Google. A similar technique was allegedly used by Nokia, presumably for legitimate purposes, to decrypt phones’ HTTPS activities. There are probably many MITM instances where the traffic was captured illegitimately; unfortunately, such situations are often hard to detect.
  • Malware installed illegitimate certificates, configuring infected systems to trust them. For instance, a malicious Browser Helper Object (BHO) installed a fake Verisign cert as a Trusted Root Certificate Authority after infecting the system to eliminate security warnings. In another example, spyware acted as a local proxy for SSL/TLS traffic and installed a rogue certificate to conceal this behavior. Installing a fake root CA certificate on the compromised system can also assist with phishing scams, because it allows the attacker to set up a fake domain that uses SSL/TLS and passes certificate validation steps.

These were just some examples of real-world incidents where digital certificates were misused. In a follow-up post, I discuss the initiatives aimed at strengthening the PKI ecosystem within which the certificates are issued, validated and utilized. Read on.

Lenny Zeltser

What Happens After You’ve Set Up Google Inactive Account Manager?

Google’s Inactive Account Manager allows you to designate a person to be notified if your Google account has been inactive for 3 or more months. You can also elect for Google to share your data with that contact at that time. Let’s take a look at what happens after the inactivity period is reached and consider what security precautions you might need to take when using this feature.

Activating Google’s Inactive Account Manager

To better understand how Inactive Account Manager works, I activated this feature for a test account, designating myself as the trusted contact. To do this, I followed prompts to provide Google with the contact’s email address and phone number. I also directed Google to “share my data with this contact,” in addition to notifying the designee about inactivity of my test account:



I also specified the subject and contents of the email message that Google would send to the trusted contact when the account becomes inactive. In addition, I provided a mobile phone number where Google would send text message alerts regarding the inactive account.

I told Google to “Remind me that Inactive Account Manager is enabled.” I chose not to ask Google to delete the account after it expires.

Then, I waited 3 months, avoiding the use of my test account. I wanted to see what would happen once it reached the state of inactivity.

Initial Google Account Inactivity Alerts

A month before the Google account was going to enter the expired state, my test account received an initial alert using email and SMS:



The email message explained that to avoid expiration, the user needed to sign into Google and edit Inactive Account Manager settings. Google sent another set of alerts 11 days later, then a week after that, and again 5 days after that.

Google Account Becoming Inactive

On the day when the account was set to enter the state of inactivity, Google sent the user an alert via email and SMS, stating that the timeout set for the account had expired. The message explained that “Inactive Account Manager has notified your trusted contacts and shared data with them.”


Despite the urgent tone of the message, Google waited two more days (a grace period?) prior to notifying the trusted contact about expiration. At that time, I received an email message with the subject and text that was set up by my test user when activating Inactive Account Manager:


Google’s message provided a link to obtain the data from the expired account and stated that I had been given access to download it over the next 3 months.

Accessing the Inactive User Account Data

After I clicked the “Download data” link, I was presented with a message explaining that before I download the data, I have to verify my identity by confirming that I received a verification code via SMS or a call. Google directed the code to the number designated when setting up Inactive Account Manager:


After supplying the verification code, I was able to download the expired account’s data:


The data was available as a Zip file archive, similar to how Google provides user data through the Google Takeout service.

Security Notes on Google’s Inactive Account Manager

I retained access to the expired account’s data for about 24 hours after my test user logged into Google and reactivated the account. The data that I could access during that time was a snapshot in time, taken on the date when the account expired. The export didn’t include new data generated by the reactivated account afterwards.

After the test user logged into the account, there was no warning about the trusted contact having accessed the data when the account expired. If you enable Google’s Inactive Account Manager and the account reaches the inactivity state, you should assume that the designated person exported your data.

A day after reactivating the inactive account, my test user received a notice stating that the timeout period was reset and that trusted contacts no longer had access to the data. Indeed, at that point clicking the “Download data” link took me to an “Invalid link” page:

As you can see, Google built safeguards to prevent the expired account’s data from being shared accidentally or becoming available to the wrong party. This process entails sending several email and SMS notifications, which could be misused by attackers to create phishing scams.

If you or people in your organization rely on Google accounts, educate them about the flow of information associated with Inactive Account Manager and remind them to interact solely with Google’s SSL-enabled websites regarding this feature.

Lenny Zeltser

The Notion of a Honeypot Persona


I’d like to define the term honeypot persona as a fake online identity established to deceive scammers and other attackers. If this notion interests you, take a look at the article where I proposed using honeypot personas to safeguard user accounts and data. If you haven’t read that note yet, go ahead, I’ll wait…

In that article, I wrote that:

"Using decoys to protect online identities might be an overkill for most people at the moment. However, as attack tactics evolve, employing deception in this manner could be beneficial. As technology matures, so will our ability to establish realistic online personas that deceive our adversaries."

Online attackers have many advantages over potential victims, making it hard to defend enterprise IT resources and personal data. In such situations, diversion tactics might help the defenders balance the scales by slowing down and helping to detect attackers.

I’ve outlined my recommendations for the role that honeypots can play as part of a modern IT infrastructure earlier. Now, I’m also suggesting that honeypot personas, which could also be called decoy personas, might be effective at confusing, misdirecting, slowing down and helping detect online adversaries. For example,

"A decoy profile [on a social networking or another site] could purposefully expose some inaccurate information, while the person’s real profile would be more carefully concealed using the site’s privacy settings."

I define the term honeypot as an item that is designed to be desired by an adversary. In this light, a honeypot persona exhibits characteristics that might be attractive to online attackers, deflecting malicious activities and potentially warning the real person who carries the same name as the decoy that he or she might be targeted soon.

Lenny Zeltser

Why Organizations Don’t Prepare for Information Security Incidents


Despite the general agreement that being prepared for an information security incident decreases the pain of dealing with the intrusion, few companies seem to plan for the eventuality of a data breach. Understanding why organizations don’t prepare for incident response might help us come up with advice that goes beyond merely emphasizing the need for an incident response plan.

Do Organizations Understand the Threats?

One explanation for the lack of preparedness suggests that organizations simply don’t know that it’s a dangerous online world out there and, therefore, might not even understand what security incidents are about. This is probably incorrect. A Symantec study of small and midsize businesses found that the SMBs were aware of online threats. More than half “stated that malware would reduce productivity while the infected systems were repaired and 36 percent recognized that hackers could gain access to proprietary information.” If SMBs understand this, so do the larger enterprises.

Do Organizations Recognize That They Might Be Attacked?

As indicated above, companies seem to possess a general awareness of the threats and understand that a data breach could have adverse effects. The problem might be that these firms don’t believe they will be attacked.

Indeed, the Symantec study mentioned above found that SMBs thought that, due to their smaller size, they were less likely to be attacked. In reality, Symantec’s data showed that SMBs were more likely to be targeted than large enterprises and, therefore, underestimated the risk.

This situation is a bit reminiscent of the University of Pennsylvania’s study that examined teens’ perception of the risks associated with smoking. The teenagers knew about the link between smoking and cancer; however, they believed that they personally would be able to “avoid the health consequences of smoking.” The researchers noted:

"In keeping with other research findings, young smokers in this study estimated their own personal risks differently from risks to smokers in general."

Perhaps there is an underlying psychological tendency among individuals to be optimistic when assessing risks, thinking they are less likely to experience a negative event than others in their peer group.

Do Organizations Foresee the Pain of Dealing with the Incident?

There’s another explanation for organizations not thinking in advance about security incidents, even if they are aware of threats and recognize that they might be attacked. It has to do with the weak understanding of the effect an incident can have on the organization. Without this understanding, the company would fail to see the need to prepare in advance and wouldn’t know what dealing with incidents might be like.

In the UPenn smoking study mentioned above, some teens acknowledged that they were at an increased risk of developing cancer if they continued to smoke. However, they failed to understand the horrible effect that cancer, if it occurs, could have on their lives. They underestimated the deadliness of this disease. The researchers concluded:

"Young smokers understand that smoking is likely to shorten a person’s life, but do not have a clear idea of the number of years involved."

It’s quite possible that organizations similarly underestimate the disruptive effects of an information security incident. At least in the short term, the affected organization will find itself suspending some aspects of its routine, switching into an incident handling mode that is costly and stressful.

Persuading Organizations to Prepare for Information Security Incidents

The studies act as illustrations of the challenges we encounter when assessing risks. If we are to attempt persuading companies to consider how they will handle information security incidents before the event takes place, we need to consider several reasons why they might not care to think about this in advance:

  • The lack of basic understanding of threats and repercussions of a data breach
  • The belief that their organization will not have to deal with an infosec incident
  • The inability to foresee the challenges associated with responding to an incident

Try as we may, we should also recognize that human nature leans towards being reactive rather than proactive. This is one of the reasons I’ve prepared various incident handling cheat sheets and outlined some advice in a presentation titled How to Respond to an Unexpected Incident.

Lenny Zeltser