Researching online scams opens a window into the world of deceit, offering a glimpse into human vulnerabilities that scammers exploit to further their interests at the victims’ expense. These social-engineering tactics are fascinating, because sometimes they work even when the person suspects that they are being manipulated.
Here are examples of 7 social engineering principles I’ve seen utilized as part of online scams:
Miscreants know how to exploit weaknesses in the human psyche. Potential victims should understand their own vulnerabilities. This way, they might notice when they're being social-engineered before the scam can run its course. If this topic interests you, you might also like the following posts:
After agreeing to purchase the item or service you are selling, the remote buyer offers a check for an amount larger than you requested, with a polite request that you forward the difference to the buyer’s friend or colleague. You deposit the check, wait for it to clear, and send the funds to the designated party. What could possibly go wrong?
This is the gist of the check overpayment scam, which debuted on the Internet around 2002 according to Snopes.com. In 2004, the FTC warned about initial variations of the scam, which began with the scammer agreeing to buy an item the victim advertised online. A modern variation starts as a response to an ad for private lessons, placed on a site such as Craigslist:
"How are you doing today?I want a private lessons for my daughter,Mary. Mary is a 13 year old girl, home schooled and she is ready to learn. I would like the lessons to be in your home/studio. Please I want to know your policy with regard to the fees,cancellations, and make-up lessons."
How might you determine whether the sender has malicious intent? The lack of spaces after some punctuation marks seems strange. Also, a legitimate message would probably have included some details specific to the lessons being requested. The scammer probably omitted such details because he or she reused the same text for multiple victims who offer different services. The message above didn't include the recipient on the To: line, supporting the theory that the scammer sent it to multiple people.
If the victim responds to the original email, the scammer continues to weave the web of deceit:
"I’m a single parent who always want the best for my daughter and I would be more than happy if you can handle Mary very well for me. I would have loved to bring Mary for "meet and greet"interviews before the lessons commence, which I think is a normal way to book for lessons but am in Honolulu,Hawaii right now for a new job appointment. So,it will not be possible for me to come for the meeting. Mary will be coming for the lessons from my cousin’s location which is very close to you. Although,Mary and my cousin are currently in the UK and I want to finalize the arrangement for the lessons before they come back to the United States because that was my promise to Mary before they left for the UK."
"As for the payment, I want to pay for the three month lessons upfront which is $780 and I’m paying through a certified cashier’s check.Hope this is okay with you?"
I wouldn’t point to any specific aspect of this message as malicious, but the tone feels a bit off, even for a person who might be a non-native English speaker. The phrase “handle Mary very well for me” is strange. Also, it seems unusual to pay for private lessons using a cashier’s check. In addition, the message has more details about Mary and the sender’s travel than one would expect in this context. The scammer is probably doing this to justify a request that will be presented to the victim later.
If the recipient writes back, the scammer responds:
"My cousin will get in touch with you for the final lessons arrangement immediately they are back from the UK. I will want you to handle Mary very well for me because she is all I have left ever since her mother’s death four years ago. Being a single parent, It’s not easy but I believe God is on my side."
Several aspects of this message might trigger a careful reader's anomaly detector. Again we see the strange phrase “handle Mary very well.” Also, the sender offers extraneous details that don't belong in the context of this email discussion, perhaps attempting to establish an emotional connection on the basis of a death in the family and religion. There is more:
"I also want to let you know that the payment will be more than the cost for the three month lessons. So, as soon as you receive the check, I will like you to deduct the money that accrues to the cost of the lessons and you will assist me to send the rest balance to my cousin. This remaining balance is meant for Mary and my cousin’s flight expenses down to the USA, and also to buy the necessary materials needed for the lessons. I think,I should be able to trust you with this money?"
Now we see the classic element of the check overpayment scam. The sender is appealing to the recipient’s honesty and perhaps hopes to build upon the emotional response evoked in the earlier paragraph.
Snopes.com explains that the scam works because the FTC “requires banks to make money from cashier’s, certified, or teller’s checks available in one to five days. Consequently, funds from checks that might not be good are often released into payees’ accounts long before the checks have been honored by their issuing banks. High quality forgeries can be bounced back and forth between banks for weeks before anyone catches on to their being worthless.”
Watch out for situations where a buyer is willing to overpay and asks for your help in forwarding the remainder of the funds to someone else. Also, look for anomalies that might indicate malicious intent, such as strange tone, unusual phrases, unnecessary details and uncommon punctuation. If you notice such aspects of the text, research them on the web before deciding whether to continue your correspondence.
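The textual anomalies described above lend themselves to simple automated checks. The following Python sketch is purely illustrative, using my own ad-hoc patterns rather than any real spam filter; it merely flags the kinds of indicators discussed in this post and would produce false positives in practice:

```python
import re

# Illustrative heuristics only -- these mirror the anomalies discussed
# above (missing space after punctuation, cashier's-check payment,
# requests to forward excess funds); they are not a real spam filter.
RED_FLAGS = [
    (re.compile(r"[,.!?][A-Za-z]"), "no space after punctuation"),
    (re.compile(r"cashier'?s check", re.I), "payment by cashier's check"),
    (re.compile(r"(send|forward).{0,40}(rest|balance|difference)", re.I),
     "asked to forward excess funds"),
]

def scan_message(text: str) -> list:
    """Return descriptions of scam indicators found in the message."""
    return [label for pattern, label in RED_FLAGS if pattern.search(text)]

msg = ("I want a private lessons for my daughter,Mary. "
       "I'm paying through a certified cashier's check. "
       "Please send the rest balance to my cousin.")
print(scan_message(msg))  # all three indicators fire for this message
```

A hit on one of these patterns proves nothing by itself; it is simply a cue to slow down and research the message before continuing the correspondence.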
Given the many ways in which digital certificates can be misused and the severe repercussions of such incidents, several initiatives have been launched to strengthen the ecosystem within which the certs are issued, validated and utilized. This is the start of what I hope will be a slew of projects and security improvements that will gradually gain a foothold in enterprise and personal environments.
Current efforts to improve the state of the web’s Public Key Infrastructure (PKI) include:
Of the efforts to strengthen the web's PKI environment, pinning certificates or their associated public keys seems the most promising.
Many information security practices are based on the principle of denying access by default, unless there is an explicit need to grant access. For instance, most network firewalls only allow specific traffic, instead of allowing all ports and blocking only risky ones. Soon, we might need to exercise the same degree of control over digital certificates trusted by our systems. The tools available to us for accomplishing this are still awkward and immature. This will change.
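Certificate pinning applies this default-deny principle directly: instead of trusting anything a CA hierarchy signs, the client accepts only certificates on an explicit allowlist. The Python sketch below is illustrative only; the pinned fingerprint is a placeholder value, and real deployments typically pin a hash of the SubjectPublicKeyInfo rather than of the whole certificate:

```python
import hashlib

# Hypothetical allowlist: SHA-256 fingerprints (hex) of the only
# certificates this client will accept -- deny by default.
PINNED_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def is_pinned(der_cert: bytes) -> bool:
    """Accept the certificate only if its fingerprint is on the allowlist."""
    return fingerprint(der_cert) in PINNED_FINGERPRINTS

# In a real client, the DER bytes would come from something like
# ssl.SSLSocket.getpeercert(binary_form=True) after the TLS handshake.
print(is_pinned(b"test"))                   # True: matches the placeholder pin
print(is_pinned(b"some other certificate")) # False: rejected by default
```

The operational burden of this approach is keeping the pin set current as certificates are rotated, which is one reason the tooling in this space still feels awkward.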
Online communication and data sharing practices rely heavily on digital certificates to encrypt data as well as authenticate systems and people. There have been many discussions about the cracks starting to develop in the certificate-based Public Key Infrastructure (PKI) on the web. Let’s consider how the certs are typically used and misused to prepare for exploring ways in which the certificate ecosystem can be strengthened.
How Digital Certificates are Used
Digital certificates are integral to cryptographic systems that are based on public/private key pairs, usually in the context of a PKI scheme. Wikipedia explains that a certificate binds a public key to an identity. Microsoft details the role that a Certificate Authority (CA) plays in a PKI scenario:
"The certification authority validates your information and then issues your digital certificate. The digital certificate contains information about who the certificate was issued to, as well as the certifying authority that issued it. Additionally, some certifying authorities may themselves be certified by a hierarchy of one or more certifying authorities, and this information is also part of the certificate."
HTTPS communications are safeguarded by SSL/TLS certificates, which help authenticate the server and sometimes the client, as well as encrypt the traffic exchanged between them. Digital certificates also play a critical role in signing software, which helps determine the source and authenticity of the program when deciding whether to trust it. Certificates could also be used as the basis for securing VPN and Wi-Fi connections.
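To make the server-authentication role concrete, here is a small Python sketch that inspects the kind of parsed certificate data the standard library's `ssl.getpeercert()` returns. The certificate values are hypothetical, and the checks are a simplification of real TLS validation (no chain or signature verification):

```python
import ssl
import time

def check_cert(cert: dict, hostname: str, now: float) -> list:
    """Return a list of problems found in a parsed certificate dict
    (same shape as ssl.getpeercert() returns). Empty list = looks OK."""
    problems = []
    # 1. Validity window: notBefore/notAfter are strings like
    #    "Jan 15 00:00:00 2023 GMT".
    if now < ssl.cert_time_to_seconds(cert["notBefore"]):
        problems.append("certificate not yet valid")
    if now > ssl.cert_time_to_seconds(cert["notAfter"]):
        problems.append("certificate expired")
    # 2. Identity binding: the hostname must appear in subjectAltName.
    san = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
    if hostname not in san:
        problems.append(f"hostname {hostname!r} not in subjectAltName {san}")
    return problems

# Hypothetical certificate for illustration only.
cert = {
    "notBefore": "Jan 15 00:00:00 2023 GMT",
    "notAfter": "Jan 15 00:00:00 2033 GMT",
    "subjectAltName": (("DNS", "example.com"), ("DNS", "www.example.com")),
}
print(check_cert(cert, "example.com", time.time()))
print(check_cert(cert, "evil.example.net", time.time()))
```

These are essentially the same validity and identity-binding checks a browser performs during an HTTPS handshake, minus the verification of the CA signature chain discussed above.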
Misusing Digital Certificates
In recent years, the PKI environment within which digital certificates are created, maintained and used has begun showing its weaknesses. We’ve witnessed several ways in which the certs have been misused in attacks on individuals, enterprises and government organizations:
These were just some examples of real-world incidents where digital certificates were misused. In a follow-up post, I discuss the initiatives aimed at strengthening the PKI ecosystem within which the certificates are issued, validated and utilized. Read on.
Google’s Inactive Account Manager allows you to designate a person to be notified if your Google account has been inactive for 3 or more months. You can also elect for Google to share your data with that contact at that time. Let’s take a look at what happens after the inactivity period is reached and consider what security precautions you might need to take when using this feature.
Activating Google’s Inactive Account Manager
To better understand how Inactive Account Manager works, I activated this feature for a test account, designating myself as the trusted contact. To do this, I followed prompts to provide Google with the contact’s email address and phone number. I also directed Google to “share my data with this contact,” in addition to notifying the designee about inactivity of my test account:
I also specified the subject and contents of the email message that Google would send to the trusted contact when the account becomes inactive. In addition, I provided a mobile phone number where Google would send text message alerts regarding the inactive account.
I told Google to “Remind me that Inactive Account Manager is enabled.” I chose not to ask Google to delete the account after it expires.
Then, I waited 3 months, avoiding the use of my test account. I wanted to see what would happen once it reached the state of inactivity.
Initial Google Account Inactivity Alerts
A month before the Google account was going to enter the expired state, my test account received an initial alert using email and SMS:
The email message explained that to avoid expiration, the user needed to sign into Google and edit Inactive Account Manager settings. Google sent another set of alerts 11 days later, then a week after that, and again 5 days after that.
Google Account Becoming Inactive
On the day when the account was set to enter the state of inactivity, Google sent the user an alert via email and SMS, stating that the timeout set for the account had expired. The message explained that “Inactive Account Manager has notified your trusted contacts and shared data with them.”
Despite the urgent tone of the message, Google waited two more days (a grace period?) prior to notifying the trusted contact about expiration. At that time, I received an email message with the subject and text that was set up by my test user when activating Inactive Account Manager:
Google’s message provided a link to obtain the data from the expired account and stated that I had been given access to download it over the next 3 months.
Accessing the Inactive User Account Data
After I clicked the “Download data” link, I was presented with a message explaining that before I download the data, I have to verify my identity by confirming that I received a verification code via SMS or a call. Google directed the code to the number designated when setting up Inactive Account Manager:
After supplying the verification code, I was able to download the expired account’s data:
The data was available as a Zip archive, similar to how Google provides it via the Google Takeout service.
Security Notes on Google’s Inactive Account Manager
I retained access to the expired account’s data for about 24 hours after my test user logged into Google and reactivated the account. The data that I could access during that time was a snapshot in time, taken on the date when the account expired. The export didn’t include new data generated by the reactivated account afterwards.
After the test user logged into the account, there was no warning that the trusted contact had accessed the data when the account expired. If you enable Google’s Inactive Account Manager and your account reaches the inactivity state, you should assume that the designated person exported your data.
A day after reactivating the inactive account, my test user received a notice stating that the timeout period was reset and that trusted contacts no longer had access to the data. Indeed, at that point clicking the “Download data” link took me to an “Invalid link” page:
As you can see, Google built safeguards to prevent the expired account’s data from being shared accidentally or becoming available to the wrong party. The process entails sending several email and SMS notifications, which attackers could mimic to create phishing scams.
If you or people in your organization rely on Google accounts, educate them about the flow of information associated with Inactive Account Manager and remind them to interact solely with Google’s SSL-enabled websites regarding this feature.
I’d like to define the term honeypot persona as a fake online identity established to deceive scammers and other attackers. If this notion interests you, take a look at the article where I proposed using honeypot personas to safeguard user accounts and data. If you haven’t read that note yet, go ahead, I’ll wait…
In that article, I wrote that:
"Using decoys to protect online identities might be an overkill for most people at the moment. However, as attack tactics evolve, employing deception in this manner could be beneficial. As technology matures, so will our ability to establish realistic online personas that deceive our adversaries."
Online attackers have many advantages over potential victims, making it hard to defend enterprise IT resources and personal data. In such situations, diversion tactics might help the defenders balance the scales by slowing down and helping to detect attackers.
Earlier, I outlined my recommendations for the role that honeypots can play as part of a modern IT infrastructure. Now, I’m also suggesting that honeypot personas, which could also be called decoy personas, might be effective at confusing, misdirecting, slowing down and helping detect online adversaries. For example:
"A decoy profile [on a social networking or another site] could purposefully expose some inaccurate information, while the person’s real profile would be more carefully concealed using the site’s privacy settings."
I define the term honeypot as an item that is designed to be desired by an adversary. In this light, a honeypot persona exhibits characteristics that might be attractive to online attackers, deflecting malicious activities and potentially warning the real person who carries the same name as the decoy that he or she might be targeted soon.
Despite the general agreement that being prepared for an information security incident decreases the pain of dealing with the intrusion, few companies seem to plan for the eventuality of a data breach. Understanding why organizations don’t prepare for incident response might help us come up with advice that goes beyond merely emphasizing the need for an incident response plan.
Do Organizations Understand the Threats?
One explanation for the lack of preparedness suggests that organizations simply don’t know that it’s a dangerous online world out there and, therefore, might not even understand what security incidents are about. This is probably incorrect. A Symantec study of small and midsize businesses found that the SMBs were aware of online threats. More than half “stated that malware would reduce productivity while the infected systems were repaired and 36 percent recognized that hackers could gain access to proprietary information.” If SMBs understand this, so do the larger enterprises.
Do Organizations Recognize That They Might Be Attacked?
As indicated above, companies seem to possess a general awareness of the threats and understand that a data breach could have adverse effects. The problem might be that these firms don’t believe they will be attacked.
Indeed, the Symantec study mentioned above found that SMBs thought that, due to their smaller size, they were less likely to be attacked. In reality, Symantec’s data showed that SMBs were more likely to be targeted than large enterprises and, therefore, underestimated the risk.
This situation is a bit reminiscent of the University of Pennsylvania’s study that examined teens’ perception of the risks associated with smoking. The teenagers knew about the link between smoking and cancer; however, they believed that they personally would be able to “avoid the health consequences of smoking.” The researchers noted:
"In keeping with other research findings, young smokers in this study estimated their own personal risks differently from risks to smokers in general."
Perhaps there is an underlying psychological tendency among individuals to be optimistic when assessing risks, thinking they are less likely to experience a negative event than others in their peer group.
Do Organizations Foresee the Pain of Dealing with the Incident?
There’s another explanation for organizations not thinking in advance about security incidents, even if they are aware of threats and recognize that they might be attacked. It has to do with the weak understanding of the effect an incident can have on the organization. Without this understanding, the company would fail to see the need to prepare in advance and wouldn’t know what dealing with incidents might be like.
In the UPenn smoking study mentioned above, some teens acknowledged that they were at an increased risk of developing cancer if they continued to smoke. However, they failed to understand the horrible effect that cancer, if it occurs, could have on their lives. They underestimated the deadliness of this disease. The researchers concluded:
"Young smokers understand that smoking is likely to shorten a person’s life, but do not have a clear idea of the number of years involved."
It’s quite possible that organizations similarly underestimate the disruptive effects of an information security incident. At least in the short term, the breached organization will find itself suspending some aspects of its routine, switching into an incident-handling mode that is costly and stressful.
Persuading Organizations to Prepare for Information Security Incidents
These studies illustrate the challenges we encounter when assessing risks. If we are to attempt persuading companies to consider how they will handle information security incidents before the event takes place, we need to consider several reasons why they might not care to think about this in advance:
Try as we may, we should also recognize that human nature leans towards reacting to events rather than preparing for them proactively. This is one of the reasons I’ve prepared various incident handling cheat sheets and outlined some advice in a presentation titled How to Respond to an Unexpected Incident.
Agreeing on the scope of a security assessment, such as a penetration test, is easier said than done. Define the scope too narrowly, and you will miss the vulnerabilities another attacker could have exploited. On the other hand, an overly wide or ambiguous scope can unnecessarily increase the cost of the assessment and put at risk the careers of people involved with the project.
Jake Williams’ recent blog post Penetration Testing Scope - Murky Waters Ahead! highlights the challenges associated with determining what may be examined by the tester. For instance, if the assessment is supposed to exclude web applications, should a PHP vulnerability that allows remote code execution be excluded as well?
Defining the scope is as much a political issue as a technical one. Many assessments focus exclusively on “web applications” or only on “network services,” instead of combining the two efforts, because these resources are usually maintained by different teams. As a result, the funding for these assessments comes from different sources, and different people will be blamed for security problems.
Unfortunately, this means that many penetration tests fail to adequately mimic a typical attacker’s actions, because the scope of the assessment is artificially constrained.
One way to determine whether a particular vulnerability is in scope is to ask the client whether they are the ones responsible for patching that vulnerability. That’s not satisfying, I know. The good news is that the assessor can still provide value by asking questions about the vulnerability, explaining its significance and highlighting the need for someone within the client’s organization to take a look at it.
A well thought-out penetration test will define the rules of engagement in a realistic manner without drawing these artificial boundaries. A consultant should strive to define the rules of engagement and the scope in this “fused” manner when selling the assessment. Similarly, organizations should combine these types of testing efforts into a single engagement, even if it means that two different teams need to collaborate and pool their budgets.
In a follow-up to his post, Jake pointed out that talking to “the customer is ultimately the correct way to obtain ultimate resolution on the issue.” He explained that they report all issues they “find (even if they happen to be out of scope). … this provides a real value added to the client.”
Information security professionals need to keep an eye on the always-evolving cyber threat landscape. Accomplishing this involves understanding how changes in people’s use of technology influence the opportunities and techniques pursued by criminals online. Below are 5 tech trends that have affected the evolution of threats.
Mainstream adoption of the Internet into daily activities. The Internet has become so interwoven into our lives that we often don’t notice when activities make use of Internet-connected resources. Technology that allows people and businesses to utilize Internet connectivity has become so convenient that even non-technical people, old and young, are able to harness the power of the web. As a result:
The increase in usefulness and popularity of mobile devices. Powerful pocket-sized computers with always-on Internet connectivity, also known as phones, have become so common that we rarely make a distinction between a regular and a “smart” phone. Overall, mobile devices have become as integral to the modern way of life as glasses, wallets and shoes. As a result:
The popularity and acceptance of online social networking. While initially seen as serving the needs of niche groups, websites such as Twitter, Facebook and LinkedIn have been joined by numerous others to support new ways in which people socialize online. Social networking sites have become the backbone of modern interactions. As a result:
The connectivity between “physical” and “virtual” worlds. Objects, tools and other constructs (e.g., thermostats, industrial control systems, home automation devices) in the “physical” world are increasingly connected to the web, giving rise to the concept of the “Internet of things.” As a result:
The acceptance of cloud computing. The use of external, virtualized and/or outsourced IT resources has gained mainstream adoption for not only personal, but also enterprise applications. The cloud is permeating all aspects of modern life. It is becoming increasingly difficult and unnecessary to make a distinction between traditional and cloud-based technologies. As a result:
Though I’ve broken out technology trends as distinct observations, they are interrelated within a system that comprises the modern way of life, which incorporates phones, social exchanges, interconnectedness and cloud services into its very fabric. Similarly, the trends in attack strategies, targets and rewards are intertwined to create the reality that infosec professionals need to understand and safeguard.
Sometimes people ask me for career advice related to information security in general and, more specifically, digital forensics and incident response. I’ve written a few articles on this topic, as did many other respected professionals. Below are pointers to some of these tips.
Digital forensics in general:
Specific to malware analysis:
Broader IT and information security career tips:
I’m sure I missed many other excellent articles with practical career tips for digital forensics and related fields. If you’d like to recommend your favorite references, kindly leave a comment.