Despite its age, Windows XP is useful to have in your IT lab, for instance if you need to experiment with older software or study malware. Microsoft distributes a Windows XP virtual machine called Windows XP Mode, which you can download if you’re running Windows 7, as I explained earlier.
If you’re using Windows 8 or 8.1, you can still get the Windows XP virtual machine, but it requires a bit more work. The initial steps below are based on the instructions documented on the Redmond Pie blog.
First, download Windows XP Mode from Microsoft. You’ll need to go through the validation wizard to confirm you’re running a licensed copy of Windows. Though designed to look for Windows 7, it appears to accept Windows 8.1 as well. After the validation completes, you’ll be able to download the Windows XP Mode installation file.
Once you’ve downloaded the Windows XP Mode installation file, don’t run it. Instead, explore its contents using a decompressing utility such as WinRAR or 7-Zip. Inside that file, go into the “sources” directory and extract the file called “xpm”.
The file “xpm” is another compressed archive, whose contents you can navigate using a tool such as 7-Zip. Extract from the “xpm” archive the file called “VirtualXPVHD” and rename it to something like “VirtualXP.vhd”.
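For readers who prefer the command line, the same extraction can be scripted with 7-Zip’s console tool. This is only a sketch: it assumes the installer is named WindowsXPMode_en-us.exe (the name varies by locale) and that the 7z binary is on the PATH. The GUI steps described above work just as well.

```shell
# Sketch: extracting the Windows XP virtual disk with the 7-Zip CLI.
# The installer filename is an assumption; adjust it for your download.
INSTALLER="WindowsXPMode_en-us.exe"

if ! command -v 7z >/dev/null 2>&1 || [ ! -f "$INSTALLER" ]; then
    echo "7-Zip or the installer is not available; nothing to do."
else
    # Pull the "xpm" archive out of the installer's "sources" directory.
    7z e "$INSTALLER" sources/xpm
    # "xpm" is itself an archive; extract the raw disk image from it.
    7z e xpm VirtualXPVHD
    # Give the disk image a .vhd extension so virtualization tools recognize it.
    mv VirtualXPVHD VirtualXP.vhd
fi
```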
This VHD file represents the virtual hard disk of the Windows XP system. You can use VirtualBox to create a virtual machine out of it. To do this, use the VirtualBox wizard for creating a new virtual machine and select “Use an existing virtual hard drive file” when prompted.
To create a VMware virtual machine out of the VHD file you’ll first need to convert it to the VMDK format, which VMware uses to represent virtual disks. The most convenient way to do this might be to use WinImage. If following this approach, select “Convert Virtual Hard Disk image…” from the Disk menu in WinImage, then select “Create Dynamically Expanding Virtual Hard Disk”. When prompted to save the file, select the VMware VMDK format and name the output file something like “VirtualXP.vmdk”.
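If you don’t have WinImage, the free qemu-img utility can perform an equivalent conversion. This is an alternative to the WinImage approach described above, not part of the original steps; it assumes qemu-img is installed and the VHD has been renamed to VirtualXP.vhd.

```shell
# Alternative sketch: converting the VHD to VMDK with qemu-img.
SRC="VirtualXP.vhd"
DST="VirtualXP.vmdk"

if ! command -v qemu-img >/dev/null 2>&1 || [ ! -f "$SRC" ]; then
    echo "qemu-img or $SRC is not available; skipping conversion."
else
    # -f vpc reads the Microsoft VHD format; -O vmdk writes a VMware disk.
    qemu-img convert -f vpc -O vmdk "$SRC" "$DST"
fi
```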
Once the VMDK file has been saved, you can create a VMware virtual machine out of it by using VMware Workstation. Go to File > New Virtual Machine. Select “Custom (advanced)” when prompted for the configuration type. Accept defaults as you navigate through the wizard. When prompted to select a disk, select “Use an existing virtual disk” and point the tool to the VirtualXP.vmdk file.
Once the VMware virtual machine has been created, launch it, then go through the Windows XP setup wizard within the new virtual machine the same way you would do it for a regular Windows XP system. After Windows XP setup is done, install VMware tools into the Windows XP virtual machine you just created. Take a snapshot of your virtual machine, in case it breaks.
Windows XP installed into the virtual machine in that manner might need to be activated with Microsoft within 30 days of the installation. Be sure to understand and comply with the applicable software licensing agreements.
If this topic is interesting to you, take a look at my Reverse-Engineering Malware course. Other related items:
— Lenny Zeltser
Sometimes people share passwords. This practice might stem from the lack of support for unique user accounts in some applications. Even more importantly, the reasons for password-sharing have to do with convenience and social norms. Technologists are starting to recognize the opportunity to account for these real-world practices in their products.
I detailed password-sharing practices in a post on the future of user account access sharing and presented several examples. I looked mostly at consumer and small business situations, not necessarily at heavily-controlled enterprise applications. For instance, adults frequently share logon credentials to their Netflix accounts among a small group of people. This saves on Netflix service fees.
Netflix seems to not only tolerate, but even encourage this practice. The service now supports multiple user profiles as part of a single account. One set of logon credentials is actually expected to be used by multiple individuals! A security purist would not approve. Yet, this product management decision recognized users’ lack of interest in safeguarding their Netflix accounts from friends and family members.
Rather than pretending that logon credential-sharing doesn’t exist, Netflix accepted that its users will share accounts and built the features its customers wanted. (The fact that any Netflix user profile can cancel the service helps constrain the potential size of the group who can access the account.)
For other examples, consider a small office environment within which individuals trust each other enough to share passwords to applications and websites used by the company, such as Pandora, UPS, Mailchimp, etc. Preaching to these people about the importance of individual logon credentials is usually a waste of time. This is why I hailed the emergence of password vault applications such as Mitro, which make it easier for individuals to share passwords while offering some security safeguards that don’t exist when using Post-it notes for this purpose.
Let’s project into the future a bit. According to danah boyd, teens often share passwords with friends as a sign of trust. This practice is akin to giving a person you trust the combination code to your school locker. Since teens will inevitably become adults, product managers and security architects will eventually have to account for such practices in their applications.
Customer-focused organizations will need to balance the desire to protect people from themselves with the need to give customers what they want. Some say that passwords will eventually go away and be replaced by biometrics and other technologies. Perhaps. In any case, people’s desire and sometimes legitimate need to share account access will persist. Security professionals and product managers will need to figure out how to provide such capabilities while accounting for meaningful risks.
— Lenny Zeltser
In addition to the fingerprint scanner, there is another new hardware component in iPhone 5S that security-minded folks might get excited about. I’m referring to the M7 motion coprocessor. According to Apple, the chip continuously measures motion data “from the accelerometer, gyroscope and compass to offload work” from the main processor. Here’s why this is interesting.
Although M7 is being positioned as an enabler for enhanced fitness apps, there might be interesting security applications for the precision and energy-efficiency capabilities of the coprocessor. For instance, consider the possibility for continuous and seamless user authentication. As I proposed earlier:
Continuous user authentication could occur transparently by spotting anomalies in the way the user interacts with the system. Such methods could avoid interrupting the user unless the system begins to doubt the person’s identity.
An app utilizing M7 could help identify the unique walking pattern of the phone’s user. This authentication approach is described in a paper titled Pace Independent Human Identification Using Cell Phone Accelerometer Dynamics. Without M7, this methodology might have been impractical due to battery drain and the lack of accurate measurements.
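To make the idea of gait-based matching concrete, here is a toy sketch. It is not the algorithm from the paper: the features (mean and spread of acceleration magnitude) and the tolerance value are my own illustrative assumptions, meant only to show the general enroll-then-compare shape of such authentication.

```python
# Toy sketch of gait-based authentication; features and threshold are
# illustrative assumptions, not the method from the cited paper.
import math

def gait_signature(samples):
    """Reduce (x, y, z) accelerometer readings to simple summary statistics."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return (mean, math.sqrt(var))

def matches(enrolled, candidate, tolerance=0.3):
    """Accept only if the candidate's motion statistics are close to the profile."""
    return all(abs(a - b) <= tolerance for a, b in zip(enrolled, candidate))

# Enrollment: the owner's walking pattern (made-up readings, in m/s^2).
owner = gait_signature([(0.1, 0.2, 9.8), (0.3, 0.1, 10.2), (0.2, 0.4, 9.6)])

# A later sample from the same walker should match...
sample_ok = gait_signature([(0.2, 0.1, 9.9), (0.2, 0.3, 10.1), (0.1, 0.3, 9.7)])
print(matches(owner, sample_ok))   # expected: True

# ...while a very different motion profile should not.
sample_bad = gait_signature([(3.0, 4.0, 2.0), (5.0, 1.0, 3.0), (4.0, 4.0, 1.0)])
print(matches(owner, sample_bad))  # expected: False
```

A real implementation would compare richer features, such as stride frequency and per-axis dynamics, over much longer windows.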
Another example of an authentication approach that could make use of the phone’s motion sensors is exhibited by SilentSense, which tracks “the unique patterns of pressure, duration and fingertip size and position” exhibited by the phone’s user. The goal of this technology is to spot when the phone is being used by an unexpected person.
As nice as the iPhone 5S fingerprint reader is, authenticating users through walking patterns could be even more user-friendly and offer the unique benefits of continuous validation that one-time authentication doesn’t provide. (BTW, the new Moto X phone dedicates a hardware component to “context-sensing,” which seems to support motion-awareness functions.)
While Apple isn’t planning to grant app developers access to the fingerprint reader (a.k.a. “Touch ID sensor”) according to AllThingsD, access to M7 will be available using the expanded Core Motion API. This would allow third-party developers to come up with innovative ways of utilizing M7 motion coprocessor capabilities that go beyond fitness-related applications, assuming Apple’s App Store guidelines will allow this.
Who knows, perhaps your favorite phone security app might soon issue an alert if it detects that an impostor is carrying your phone, giving you an opportunity to wipe the device or take other corrective action. Just an idea.
— Lenny Zeltser
Over the years, the set of skills needed to analyze malware has been expanding. After all, software is becoming more sophisticated and powerful, regardless of whether it is being used for benign or malicious purposes. The expertise needed to understand malicious programs has been growing in complexity to keep up with the threats.
My perspective on this progression is based on the reverse-engineering malware course I’ve been teaching at SANS Institute. Allow me to indulge in a brief retrospective on this course, which I launched over a decade ago and which was recently expanded.
Starting to Teach Malware Analysis
My first presentation on the topic of malware analysis was at the SANSFIRE conference in 2001 in Washington, DC, I think. That was one of my first professional speaking gigs. SANS was willing to give me a shot, thanks to Stephen Northcutt, but I wasn’t yet a part of the faculty. My 2.5-hour session promised to:
Discuss “tools and techniques useful for understanding inner workings of malware such as viruses, worms, and trojans. We describe an approach to setting up inexpensive and flexible lab environment using virtual workstation software such as VMWare, and demonstrate the process of reverse engineering a trojan using a range of system monitoring tools in conjunction with a disassembler and a debugger.”
I had 96 slides. Malware analysis knowledge wasn’t yet prevalent in the general community outside of antivirus companies, which were keeping their expertise close to the chest. Fortunately, there was only so much one needed to know to analyze mainstream samples of the day.
Worried that evening session attendees would have a hard time staying alert after a full day of classes, I handed out chocolate-covered coffee beans, which I got from McNulty’s shop in New York.
Expanding the Reverse-Engineering Course
A year later, I expanded the course to two evening sessions. It included 198 slides and hands-on labs. I was on the SANS faculty list! Slava Frid, who helped me with disassembly, was the TA. My lab included Windows NT and 2000 virtual machines. Some students had Windows 98 and ME. SoftICE was my favorite debugger. My concluding slide said:
That advice applies today, though one of the wonderful changes in the community from those days is a much larger set of forums and blogs focused on malware analysis techniques.
By 2004, the course was two days long and covered additional reversing approaches and browser malware. In 2008 it expanded to four days, with Mike Murr contributing materials that dove into code-level analysis of compiled executables. Pedro Bueno, Jim Shewmaker and Bojan Zdrnja shared their insights on packers and obfuscators.
In 2010, the course expanded to 5 days, incorporating contributions by Jim Clausing and Bojan Zdrnja. The new materials covered malicious document analysis and memory forensics. I released the first version of REMnux, a Linux distro for assisting malware analysts with reverse-engineering malicious software.
Around that time the course was officially categorized as a forensics discipline by SANS and was brought into the organization’s computer forensics curriculum thanks to the efforts of Rob Lee.
Recent Course Expansion: Malware Analysis Tournament
The most recent development related to the course is the expansion from five to six days. Thanks to the efforts of Jake Williams, the students are now able to reinforce what they’ve learned and fine-tune their skills by spending a day solving practical capture-the-flag challenges. The challenges are built using the NetWars tournament platform. It’s a fun game. For more about this expansion, see Jake’s blog and tune into his recorded webcast for a sneak peek at the challenges.
It’s exciting to see the community of malware analysts increase as the corpus of our knowledge on this topic continues to expand. Thanks to all the individuals who have helped me grow as a part of this field and to everyone who takes the time to share their expertise with the community. There’s always more for us to learn, so keep at it.
— Lenny Zeltser
Researching online scams opens a window into the world of deceit, offering a glimpse into human vulnerabilities that scammers exploit to further their interests at the victims’ expense. These social-engineering tactics are fascinating, because sometimes they work even when the person suspects that they are being manipulated.
Here are examples of 7 social engineering principles I’ve seen utilized as part of online scams:
Miscreants know how to exploit weaknesses in the human psyche. Potential victims should understand their own vulnerabilities. This way, they might notice when they’re being social-engineered before the scam has a chance to run its course. If this topic interests you, you might also like the following posts:
— Lenny Zeltser
After agreeing to purchase the item or service you are selling, the remote buyer offers a check for an amount larger than you requested, with a polite request that you forward the difference to the buyer’s friend or colleague. You deposit the check, wait for it to clear, and send the funds to the designated party. What could possibly go wrong?
This is the gist of the check overpayment scam, which debuted on the Internet around 2002 according to Snopes.com. In 2004, the FTC warned about initial variations of the scam, which began with the scammer agreeing to buy the item advertised online by the victim. A modern variation starts as a response to an ad for private lessons, placed on a site such as Craigslist:
"How are you doing today?I want a private lessons for my daughter,Mary. Mary is a 13 year old girl, home schooled and she is ready to learn. I would like the lessons to be in your home/studio. Please I want to know your policy with regard to the fees,cancellations, and make-up lessons."
How might you determine whether the sender has malicious intent? The lack of spaces after some punctuation marks seems strange. Also, a legitimate message would have probably included some details specific to the lessons being requested. The scammer probably excluded such details, because he or she used the same text for multiple victims, who offer different services. The message above didn’t include the recipient on the To: line, supporting the theory that the scammer sent it to multiple people.
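The anomalies described above lend themselves to simple automated checks. The following sketch is illustrative only, not a real spam filter; the specific heuristics, keywords and flag names are my own assumptions.

```python
# Sketch: flagging textual anomalies like those described above.
import re

def scam_red_flags(text):
    """Return a list of weak signals suggesting a mass-mailed scam message."""
    flags = []
    # Punctuation immediately followed by a letter, e.g. "today?I" or "daughter,Mary".
    if re.search(r'[.,?!][A-Za-z]', text):
        flags.append("missing space after punctuation")
    # A lesson request with no mention of what kind of lesson is another weak signal
    # (the subject list here is an arbitrary illustration).
    if "lessons" in text.lower() and not re.search(r'piano|guitar|math|tennis', text.lower()):
        flags.append("no detail about the type of lessons")
    return flags

msg = ("How are you doing today?I want a private lessons for my daughter,Mary. "
       "Please I want to know your policy with regard to the fees,cancellations.")
print(scam_red_flags(msg))
```

No single signal proves malicious intent; as noted above, it is the accumulation of such anomalies that should prompt further research.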
Concerned recipients might look up some of the unusual phrases from this message, in which case they’d see numerous reports about scams that include phrases from the text above. (1, 2, 3, 4, 5, etc.)
If the victim responds to the original email, the scammer continues to weave the web of deceit:
"I’m a single parent who always want the best for my daughter and I would be more than happy if you can handle Mary very well for me. I would have loved to bring Mary for "meet and greet"interviews before the lessons commence, which I think is a normal way to book for lessons but am in Honolulu,Hawaii right now for a new job appointment. So,it will not be possible for me to come for the meeting. Mary will be coming for the lessons from my cousin’s location which is very close to you. Although,Mary and my cousin are currently in the UK and I want to finalize the arrangement for the lessons before they come back to the United States because that was my promise to Mary before they left for the UK."
"As for the payment, I want to pay for the three month lessons upfront which is $780 and I’m paying through a certified cashier’s check.Hope this is okay with you?"
I wouldn’t point to any specific aspect of this message as malicious, but the tone feels a bit off, even for a person who might be a non-native English speaker. The phrase “handle Mary very well for me” is strange. Also, it seems unusual to pay for private lessons using a cashier’s check. In addition, the message has more details about Mary and the sender’s travel than one would expect in this context. The scammer is probably doing this to justify a request that will be presented to the victim later.
If the recipient writes back, the scammer responds:
"My cousin will get in touch with you for the final lessons arrangement immediately they are back from the UK. I will want you to handle Mary very well for me because she is all I have left ever since her mother’s death four years ago. Being a single parent, It’s not easy but I believe God is on my side."
Several aspects of this message might trigger a careful reader’s anomaly detector. Again we see the strange phrase “handle Mary very well.” Also, the sender offers extraneous details that don’t belong in the context of this email discussion, perhaps attempting to establish an emotional connection on the basis of a death in the family and religion. There is more:
"I also want to let you know that the payment will be more than the cost for the three month lessons. So, as soon as you receive the check, I will like you to deduct the money that accrues to the cost of the lessons and you will assist me to send the rest balance to my cousin. This remaining balance is meant for Mary and my cousin’s flight expenses down to the USA, and also to buy the necessary materials needed for the lessons. I think,I should be able to trust you with this money?"
Now we see the classic element of the check overpayment scam. The sender is appealing to the recipient’s honesty and perhaps hopes to build upon the emotional response evoked in the earlier paragraph.
Snopes.com explains that the scam works because the FTC “requires banks to make money from cashier’s, certified, or teller’s checks available in one to five days. Consequently, funds from checks that might not be good are often released into payees’ accounts long before the checks have been honored by their issuing banks. High quality forgeries can be bounced back and forth between banks for weeks before anyone catches on to their being worthless.”
Watch out for situations where a buyer is willing to overpay and asks for your help in forwarding the remainder of the funds to someone else. Also, look for anomalies that might indicate malicious intent, such as strange tone, unusual phrases, unnecessary details and uncommon punctuation. If you notice such aspects of the text, research them on the web before deciding whether to continue your correspondence.
— Lenny Zeltser
Given the many ways in which digital certificates can be misused and the severe repercussions of such incidents, several initiatives have been launched to strengthen the ecosystem within which the certs are issued, validated and utilized. This is the start of what I hope will be a slew of projects and security improvements that will gradually gain a foothold in enterprise and personal environments.
Current efforts to improve the state of the web’s Public Key Infrastructure (PKI) include:
Of the efforts to strengthen the web’s PKI environment, the pinning of the certificates or the associated public keys seems most promising.
Many information security practices are based on the principle of denying access by default, unless there is an explicit need to grant access. For instance, most network firewalls only allow specific traffic, instead of allowing all ports and blocking only risky ones. Soon, we might need to exercise the same degree of control over digital certificates trusted by our systems. The tools available to us for accomplishing this are still awkward and immature. This will change.
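Certificate or public-key pinning, mentioned above, is exactly this default-deny principle applied to certificates: trust nothing unless it matches an explicit allow list. Here is a minimal sketch of what a pin check amounts to; the key bytes and pin set are made up for illustration.

```python
# Sketch of public-key pinning as a default-deny control: trust only
# certificates whose public-key hash appears on an explicit allow list.
import hashlib

# Pins are hashes of known-good public keys (made-up bytes for illustration;
# real pins cover the DER-encoded SubjectPublicKeyInfo structure).
PINNED_KEY_HASHES = {
    hashlib.sha256(b"example-server-public-key").hexdigest(),
}

def is_trusted(public_key_bytes):
    """Deny by default; allow only keys that match a stored pin."""
    return hashlib.sha256(public_key_bytes).hexdigest() in PINNED_KEY_HASHES

print(is_trusted(b"example-server-public-key"))  # pinned key -> True
print(is_trusted(b"rogue-ca-issued-key"))        # anything else -> False
```

Note the contrast with the traditional CA model: a rogue certificate issued by an otherwise-trusted CA would pass standard chain validation, but it fails the pin check because its public key is not on the list.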
— Lenny Zeltser
Online communication and data sharing practices rely heavily on digital certificates to encrypt data as well as authenticate systems and people. There have been many discussions about the cracks starting to develop in the certificate-based Public Key Infrastructure (PKI) on the web. Let’s consider how the certs are typically used and misused to prepare for exploring ways in which the certificate ecosystem can be strengthened.
How Digital Certificates are Used
Digital certificates are integral to cryptographic systems that are based on public/private key pairs, usually in the context of a PKI scheme. Wikipedia explains that a certificate binds a public key to an identity. Microsoft details the role that a Certificate Authority (CA) plays in a PKI scenario:
"The certification authority validates your information and then issues your digital certificate. The digital certificate contains information about who the certificate was issued to, as well as the certifying authority that issued it. Additionally, some certifying authorities may themselves be certified by a hierarchy of one or more certifying authorities, and this information is also part of the certificate."
HTTPS communications are safeguarded by SSL/TLS certificates, which help authenticate the server and sometimes the client, as well as encrypt the traffic exchanged between them. Digital certificates also play a critical role in signing software, which helps determine the source and authenticity of the program when deciding whether to trust it. Certificates could also be used as the basis for securing VPN and Wi-Fi connections.
Misusing Digital Certificates
In recent years, the PKI environment within which digital certificates are created, maintained and used has begun showing its weaknesses. We’ve witnessed several ways in which the certs have been misused in attacks on individuals, enterprises and government organizations:
These were just some examples of real-world incidents where digital certificates were misused. In a follow-up post, I discuss the initiatives aimed at strengthening the PKI ecosystem within which the certificates are issued, validated and utilized. Read on.
— Lenny Zeltser
Google’s Inactive Account Manager allows you to designate a person to be notified if your Google account has been inactive for 3 or more months. You can also elect for Google to share your data with that contact at that time. Let’s take a look at what happens after the inactivity period is reached and consider what security precautions you might need to take when using this feature.
Activating Google’s Inactive Account Manager
To better understand how Inactive Account Manager works, I activated this feature for a test account, designating myself as the trusted contact. To do this, I followed prompts to provide Google with the contact’s email address and phone number. I also directed Google to “share my data with this contact,” in addition to notifying the designee about inactivity of my test account:
I also specified the subject and contents of the email message that Google would send to the trusted contact when the account becomes inactive. In addition, I provided a mobile phone number where Google would send text message alerts regarding the inactive account.
I told Google to “Remind me that Inactive Account Manager is enabled.” I chose not to ask Google to delete the account after it expires.
Then, I waited 3 months, avoiding the use of my test account. I wanted to see what would happen once it reached the state of inactivity.
Initial Google Account Inactivity Alerts
A month before the Google account was going to enter the expired state, my test account received an initial alert using email and SMS:
The email message explained that to avoid expiration, the user needed to sign into Google and edit Inactive Account Manager settings. Google sent another set of alerts 11 days later, then a week after that, and again 5 days after that.
Google Account Becoming Inactive
On the day when the account was set to enter the state of inactivity, Google sent the user an alert via email and SMS, stating that the timeout set for the account had expired. The message explained that “Inactive Account Manager has notified your trusted contacts and shared data with them.”
Despite the urgent tone of the message, Google waited two more days (a grace period?) prior to notifying the trusted contact about expiration. At that time, I received an email message with the subject and text that was set up by my test user when activating Inactive Account Manager:
Google’s message provided a link to obtain the data from the expired account and stated that I had been given access to download it over the next 3 months.
Accessing the Inactive User Account Data
After I clicked the “Download data” link, I was presented with a message explaining that before I download the data, I have to verify my identity by confirming that I received a verification code via SMS or a call. Google directed the code to the number designated when setting up Inactive Account Manager:
After supplying the verification code, I was able to download the expired account’s data:
The data was available as a Zip file archive, similar to how Google provides it through the Google Takeout service.
Security Notes on Google’s Inactive Account Manager
I retained access to the expired account’s data for about 24 hours after my test user logged into Google and reactivated the account. The data that I could access during that time was a snapshot in time, taken on the date when the account expired. The export didn’t include new data generated by the reactivated account afterwards.
After the test user logged into the account, there was no warning that the trusted contact had accessed the data when the account expired. If you enable Google’s Inactive Account Manager and your account reaches the inactivity state, you should assume that the designated person exported your data.
A day after reactivating the inactive account, my test user received a notice stating that the timeout period was reset and that trusted contacts no longer had access to the data. Indeed, at that point clicking the “Download data” link took me to an “Invalid link” page:
As you can see, Google built safeguards to prevent the expired account’s data from being shared accidentally or becoming available to the wrong party. The process entails sending several email and SMS notifications, which attackers could mimic to create phishing scams.
If you or people in your organization rely on Google accounts, educate them about the flow of information associated with Inactive Account Manager and remind them to interact solely with Google’s SSL-enabled websites regarding this feature.
— Lenny Zeltser
I’d like to define the term honeypot persona as a fake online identity established to deceive scammers and other attackers. If this notion interests you, take a look at the article where I proposed using honeypot personas to safeguard user accounts and data. If you haven’t read that note yet, go ahead, I’ll wait…
In that article, I wrote that:
"Using decoys to protect online identities might be an overkill for most people at the moment. However, as attack tactics evolve, employing deception in this manner could be beneficial. As technology matures, so will our ability to establish realistic online personas that deceive our adversaries."
Online attackers have many advantages over potential victims, making it hard to defend enterprise IT resources and personal data. In such situations, diversion tactics might help the defenders balance the scales by slowing down attackers and helping to detect them.
I’ve outlined my recommendations for the role that honeypots can play as part of a modern IT infrastructure earlier. Now, I’m also suggesting that honeypot personas, which could also be called decoy personas, might be effective at confusing, misdirecting, slowing down and helping detect online adversaries. For example,
"A decoy profile [on a social networking or another site] could purposefully expose some inaccurate information, while the person’s real profile would be more carefully concealed using the site’s privacy settings."
I define the term honeypot as an item that is designed to be desired by an adversary. In this light, a honeypot persona exhibits characteristics that might be attractive to online attackers, deflecting malicious activities and potentially warning the real person who carries the same name as the decoy that he or she might be targeted soon.
— Lenny Zeltser