I set up a brand new web server to see what type of connections it will receive. Since the server had no “production” purpose, all attempts to access it could be considered suspicious at best. Such requests are associated with scans, probes and other malicious activities that tend to blend into the background of web traffic. Here’s what I observed.
An Internet-Mapping Experiment by PDR Labs
The web server began receiving the following unexpected HTTP requests once or twice per day:
HEAD / HTTP/1.1
User-Agent: Cloud mapping experiment. Contact email@example.com
These connection attempts stood out because the HTTP requests were missing the “Accept” header and specified the server’s IP address, rather than its hostname, in the “Host” field (not shown here). This pattern is typical of bots.
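To illustrate, here is a minimal Python sketch of the heuristic described above: flag a request if it lacks an “Accept” header or carries an IP literal, rather than a hostname, in the “Host” header. The function and its inputs are illustrative assumptions, not a production bot detector.

```python
import re

# Matches an IPv4 literal (optionally with a port) in the Host header.
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}(:\d+)?$")

def looks_like_bot(headers):
    """headers: dict mapping lower-cased header names to values."""
    if "accept" not in headers:          # browsers virtually always send Accept
        return True
    host = headers.get("host", "")
    if IP_RE.match(host):                # bots often target the raw IP address
        return True
    return False

print(looks_like_bot({"host": "203.0.113.5",
                      "user-agent": "Cloud mapping experiment."}))  # True
print(looks_like_bot({"host": "example.com",
                      "accept": "text/html"}))                      # False
```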
Searching the web for “pdrlabs.net” led to www.pdrlabs.net, which contained a bare-bones page stating:
"We are conducting an ongoing experiment to map the Internet in its entirety. Our crawling is not malicious in intent and does nothing more than attempt the connection; no further information is mined."
These connections originated from different IP addresses, all of which were hosted at Amazon Elastic Compute Cloud (EC2). These included 22.214.171.124, 126.96.36.199, 188.8.131.52, 184.108.40.206, 220.127.116.11 and 18.104.22.168.
I didn’t find any other suspicious connections associated with these IPs so I am not too worried about this activity. Still, what are PDR Labs up to and who is behind this project? Perhaps some day these secrets will be revealed to us.
Scans for Open Web Proxies
Another set of anomalous requests, unrelated to the connections above, looked like this:
GET http:// hotel.qunar. com/render/hoteldiv.jsp?&__jscallback=XQScript_4 HTTP/1.1
Referer: http:// hotel.qunar. com/
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
These requests stood out because the client attempted to retrieve a page from hotel.qunar.com, which was unrelated to my web server. Such connections, regardless of the third-party URL they attempt to retrieve, tend to be scans for open proxies. If my web server were configured as an open proxy, it would retrieve the requested URL and present it to the client.
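Such proxy probes are easy to flag programmatically, because a normal client sends an origin-form request target ("GET /path"), while a request aimed at a proxy uses an absolute URI. A minimal sketch, with an assumed helper name:

```python
def is_proxy_probe(request_line):
    """Return True if the HTTP request line targets an absolute URI,
    which is how clients address forward proxies."""
    try:
        method, target, _version = request_line.split()
    except ValueError:
        return False  # malformed request line; not a proxy-style request
    return target.lower().startswith(("http://", "https://"))

print(is_proxy_probe("GET http://hotel.qunar.com/render/hoteldiv.jsp HTTP/1.1"))  # True
print(is_proxy_probe("GET /index.html HTTP/1.1"))                                  # False
```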
According to the Httpd Wiki, such open proxies could be misused to “manipulate pay-per-click ad systems, to add comment or link-spam to someone else’s site, or just to do something nasty without being detected.” Open proxies are also used to bypass corporate or government access restrictions.
I observed these connections roughly every other day. They originated from different IP addresses, all of which were registered in China. These included 22.214.171.124, 126.96.36.199 and 188.8.131.52.
Why do these scans use the hotel.qunar.com URL for their tests? I doubt the person behind them is intent on finding a way to make anonymous hotel reservations through this site. Any URL would do. However, hotel.qunar.com is specifically mentioned as an example in the onlineProxy.js tool:
* a proxy with totoro, to test online page.
step1: totoro -R http:// 10.211.55. 2:9998/proxy?target=hotel.qunar.com -a mocha
step2: this proxy, request the target url, add mocha script and case to response
step3: response the added html to totoro server
This tool is a module for Totoro, which is a free, “simple and stable cross-browser testing tool.” Perhaps the scanner was implemented by using Totoro and onlineProxy.js, with the person behind it using the example above when launching the scans. Another mystery of the web unraveled!
This wasn’t the only set of proxy connections that the server encountered. Another probe came from 184.108.40.206, which attempted to retrieve:
GET http:// www. k2proxy. com//hello.html
The connecting client specified the following User-Agent string: “Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)”. The connection came from a system that, according to the Spamhaus CBL, was infected with Torpig malware. The K2 proxy website, authored in Chinese, seems to be an effort to locate and document open proxies and appears to be maintained by firstname.lastname@example.org.
Yet another proxy probe came from 220.127.116.11, an IP address classified as being potentially malicious by Project Honey Pot:
GET http://www.baidu.com/ HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)
A couple of seconds before submitting this HTTP request, the attacking system also attempted to connect to the server on TCP ports 135 and 1433, which are associated with Microsoft RPC and Microsoft SQL Server activity, respectively.
Probes from Potentially-Infected Systems
Let’s move on to another unusual set of connections. Approximately every other day, the web server received the following request:
HEAD / HTTP/1.0
These connections stood out because they were missing all other headers typically present in an HTTP connection. The requests came from different IPs, which included 18.104.22.168, 22.214.171.124, 126.96.36.199 and 188.8.131.52. These IPs were located in the US, Japan and Taiwan.
Several of these IP addresses were flagged on the Spamhaus Composite Blocking List (CBL) as being associated with infected hosts. According to CBL, some of these systems were running Gameover Zeus and Hesperbot malware. Perhaps these bots were directed to scan the web looking for web servers to infect—I’m not sure, but if you have promising theories, please let me know.
Scans for phpMyAdmin Vulnerabilities
The web server also saw several requests associated with User-Agent “ZmEu”. They looked like this:
GET /MyAdmin/scripts/setup.php HTTP/1.1
Accept-Encoding: gzip, deflate
These connections stood out because they attempted to access PHP pages not present on the server and specified an unusual User-Agent. Also, they provided a “Host:” header (not shown here) that specified the web server’s IP address, rather than its hostname.
These probes came from 184.108.40.206 in Bulgaria. According to the Spamhaus CBL, this IP was associated with Gameover Zeus malware. The infected system attempted to access pages used by phpMyAdmin, a popular MySQL administration tool. The scanner looked for vulnerabilities in phpMyAdmin that it could exploit.
According to Phil Riesch, the User-Agent “ZmEu” is associated with “a security tool used for discovering security holes” in phpMyAdmin. Older web probes associated with this tool included a reference to its potential origin and pointed to a now-defunct website:
Made by ZmEu @ WhiteHat Team - www. whitehat.ro
Someone seemed to be using a bot network to scan for vulnerable phpMyAdmin systems, though the reference to “ZmEu” could have been added regardless of whether that was the tool that the attacker actually employed.
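Probes like these can be spotted in web server logs with a simple filter. The sketch below assumes an Apache/nginx combined log format; the scanner User-Agents and probe paths are illustrative examples, not an exhaustive list:

```python
import re

SCANNER_AGENTS = ("ZmEu", "zgrab", "masscan")
PROBE_PATHS = ("/MyAdmin/scripts/setup.php", "/phpmyadmin/scripts/setup.php")

# Extracts method, path and User-Agent from a combined-format log line.
LOG_RE = re.compile(r'"(?P<method>\S+) (?P<path>\S+) [^"]*".*"(?P<agent>[^"]*)"$')

def find_probes(lines):
    hits = []
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        if m.group("path") in PROBE_PATHS or \
           any(a in m.group("agent") for a in SCANNER_AGENTS):
            hits.append(line)
    return hits

sample = ['1.2.3.4 - - [01/Jun/2014] "GET /MyAdmin/scripts/setup.php HTTP/1.1" '
          '404 0 "-" "ZmEu"']
print(len(find_probes(sample)))  # 1
```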
This completes the overview of the suspicious activities I observed recently on a brand new web server that should not have seen any connections. Such probes are easy to notice on a non-production system like this one. On most real servers, they probably go unnoticed, blending into the noise that comprises today’s Internet traffic.
It’s my pleasure to announce the availability of version 5 of REMnux, a Linux distribution popular among malware analysts. The new release adds lots of exciting free tools for examining malicious software. It also updates many of the utilities that have already been present in the distro. Here is a listing of the tools added to REMnux v5.
Examine Browser Malware
Examine Document Files
Extract and Decode Artifacts
Handle Network Interactions
Process Multiple Samples
Examine File Properties and Contents
Investigate Linux Malware
In addition to the newly-installed tools above, REMnux v5 includes updates to core OS components as well as numerous other utilities present in earlier versions of the distro, including Volatility, peepdf, Network Miner, OfficeMalScanner, MASTIFF, ProcDOT and others. For a full listing of REMnux v5 tools, see the XLSX spreadsheet or the XMind mind map.
A huge thank you to David Westcott, who set up and upgraded many of the packages available as part of REMnux v5, thoroughly tested them and helped with the documentation. I’m also very grateful to the beta testers who reviewed early versions of this release. As always, thank you to the developers of the malware analysis tools that I am able to include as part of REMnux.
You can download the new version from REMnux.org. It’s available as a virtual appliance in VMware and OVF/OVA formats, as well as an ISO image of a live CD.
P.S. I expect the next major REMnux release to be based on a Long Term Support (LTS) version of Ubuntu and employ a modular package architecture to support incremental updates.
"You Have Been Selected for Family Resettlement to Australia," began the email that included the seal of the Embassy of Australia. "You are among the list of nominated for 2014 resettlement visa to Australia." The signature line claimed that the message had been sent by Hon Thomas Smith and came from "Australia Immigration Section <email@example.com>."
This was a scam, of course.
"What do I need to do?" I responded, curious what might come next. Hon Thomas Smith responded within a few hours, this time from firstname.lastname@example.org.
Request for Personal Information
The message attempted to mimic the letterhead of the Australian Department of Immigration and Citizenship and welcomed me “to Australia visa office.” It explained that:
"every year certain number of people are selected through our electronic ballot system for resettlement by Australia Government as part of support to Countries regarded as war zone area."
The miscreant requested that I submit a scanned copy of my travel passport, a recent photo and my phone number. In addition, I was to email a scanned white paper sheet with my fingerprints on it.
The email message included a PDF attachment that claimed to be Visa Form File/10121L-2014, which requested details such as date of birth, mother’s name and address. The PDF file didn’t have an exploit, as far as I can tell, and was merely designed as a place where the scammer’s target could conveniently provide personal information.
The scammer was pursuing this information probably with the goal of performing identity theft. Also, future interactions with the scammer would probably include a request for money to process the bogus application.
Free Sub-Domain Registration
The domain from which the scammer sent the application, immigrationsection.com.au.pn, is considered malicious by some security companies, according to VirusTotal. It redirects web visitors to www-dot-popnic-dot-com, which some sources consider malicious.
Popnic-dot-com seems to be a front for Unionic-dot-com, which provides free domain registration, email forwarding, web hosting, URL forwarding, etc. under unusual TLDs such as .tc, .mn, .ms and others. More specifically, it offers registration under second-level domains that resemble TLDs assigned to major countries, such as .uk.pn, .us.pn, .ca.pn, .au.pn and others. No wonder it’s attractive to scammers, who want a domain that at first glance seems legitimate.
With the increasing variety of TLDs available, scammers will have an easier job selecting domain names that catch the victims’ attention or evoke trust. Regardless of the domain used by the sender of the email message, if the offer sounds too good to be true and involves supplying sensitive information, it’s probably a scam.
If you are looking to get started with malware analysis, tune into the webcast series I recorded to illustrate key tools and techniques for examining malicious software:
Since the best way to learn malware analysis involves practice, I am happy to provide you with malware samples from each of these webcasts. Just send me an email after you’ve watched the webcast and confirm that you will be taking precautions to properly isolate your laboratory environment.
Examining malicious software involves a variety of tasks, some simpler than others. These efforts can be grouped into stages based on the nature of the associated malware analysis techniques. Layered on top of each other, these stages form a pyramid that grows upwards in complexity. The closer you get to the top, the more burdensome the effort and the less common the skill set.
The easiest way to assess the nature of a suspicious file is to scan it using fully-automated tools, some of which are available as commercial products and some as free ones. These utilities are designed to quickly assess what the specimen might do if it ran on a system. They typically produce reports with details such as the registry keys used by the malicious program, its mutex values, file activity, network traffic, etc.
Fully-automated tools usually don’t provide as much insight as a human analyst would obtain when examining the specimen in a more manual fashion. However, they contribute to the incident response process by rapidly handling vast amounts of malware, allowing the analyst (whose time is relatively expensive) to focus on the cases that truly require a human’s attention.
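As one example of tapping into such automated scanning, the sketch below builds a query against VirusTotal’s public API (v2 at the time of writing) to retrieve an existing report for a file hash. The endpoint and parameter names follow VirusTotal’s published v2 API; a free API key is required, and the helper function name is my own:

```python
import urllib.parse

def vt_report_request(file_hash, api_key):
    """Build the URL for a VirusTotal v2 file-report lookup.
    The caller performs the actual network request."""
    params = urllib.parse.urlencode({"apikey": api_key, "resource": file_hash})
    return "https://www.virustotal.com/vtapi/v2/file/report?" + params

# Example (needs network access and a valid API key):
# import json, urllib.request
# with urllib.request.urlopen(vt_report_request(sha256, key)) as resp:
#     report = json.loads(resp.read().decode())
#     print(report.get("positives"), "of", report.get("total"), "engines flagged it")
```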
Static Properties Analysis
An analyst interested in taking a closer look at the suspicious file might proceed by examining its static properties. Such details can be obtained relatively quickly, because they don’t involve running the potentially-malicious program. Static properties include the strings embedded into the file, header details, hashes, embedded resources, packer signatures, meta data such as the creation date, etc.
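Many of these static properties can be gathered with a few lines of standard-library Python. The following is a minimal sketch, mimicking the Unix `strings` tool for the embedded-strings portion; the function name and the four-character minimum string length are arbitrary choices:

```python
import hashlib
import re

def static_properties(path, min_len=4):
    """Collect basic static properties: size, hashes and printable
    ASCII strings at least min_len characters long."""
    with open(path, "rb") as f:
        data = f.read()
    return {
        "size": len(data),
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "strings": re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data),
    }

# Example:
# props = static_properties("suspicious.exe")
# print(props["sha256"], len(props["strings"]))
```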
Looking at static properties can sometimes be sufficient for defining basic indicators of compromise. This process also helps determine whether the analyst should take a closer look at the specimen using more comprehensive techniques and where to focus the subsequent steps. Analyzing static properties is useful as part of the incident triage effort.
VirusTotal is an example of an excellent online tool whose output includes the file’s static properties. For a look at some free utilities you can run locally in your lab, see my posts Analyzing Static Properties of Suspicious Files on Windows and Examining XOR Obfuscation for Malware Analysis.
Interactive Behavior Analysis
After using automated tools and examining static properties of the file, as well as taking into account the overall context of the investigation, the analyst might decide to take a closer look at the specimen. This often entails infecting an isolated laboratory system with the malicious program to observe its behavior.
Behavioral analysis involves examining how the sample runs in the lab to understand its registry, file system, process and network activities. Understanding how the program uses memory (e.g., performing memory forensics) can bring additional insights. This malware analysis stage is especially fruitful when the researcher interacts with the malicious program, rather than passively observing the specimen.
The analyst might observe that the specimen attempts to connect to a particular host, which is not accessible in the isolated lab. The researcher could mimic the system in the lab and repeat the experiment to see what the malicious program would do after it is able to connect. For example, if the specimen uses the host as a command and control (C2) server, the analyst may be able to learn about the specimen by simulating the attacker’s C2 activities. This approach of molding the lab to evoke additional behavioral characteristics applies to files, registry keys and other dependencies that the specimen might have.
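As a concrete illustration of mimicking such a dependency, here is a bare-bones fake “C2 server” built with Python’s standard library. Pointing the lab’s DNS at the host running it lets the specimen’s beacon arrive somewhere that answers; the canned “OK” reply is a placeholder, since the real response format depends entirely on the malware family being studied:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeC2Handler(BaseHTTPRequestHandler):
    def _reply(self):
        # Log what the specimen sent, then answer with a placeholder body.
        print("beacon:", self.command, self.path, self.headers.get("User-Agent"))
        body = b"OK"  # substitute whatever protocol the specimen expects
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    do_GET = do_POST = _reply

# To run in the lab (binding port 80 typically requires privileges):
# HTTPServer(("0.0.0.0", 80), FakeC2Handler).serve_forever()
```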
Being able to exercise this level of control over the specimen in a properly-orchestrated lab is what differentiates this stage from fully-automated analysis tasks. Interacting with malware in creative ways is more time-consuming and complicated than running fully-automated tools. It generally requires more skills than performing the earlier tasks in the pyramid.
For additional insights related to interactive behavior analysis, see my post Virtualized Network Isolation for a Malware Analysis Lab, my recorded webcast Intro to Behavioral Analysis of Malicious Software and Part 3 of Jake Williams’ Tips on Malware Analysis and Reverse-Engineering.
Manual Code Reversing
Reverse-engineering the code that comprises the specimen can add valuable insights to the findings available after completing interactive behavior analysis. Some characteristics of the specimen are simply impractical to exercise and examine without studying the code; these are the insights that only manual code reversing can provide.
Manual code reversing involves the use of a disassembler and a debugger, which could be aided by a decompiler and a variety of plugins and specialized tools that automate some aspects of these efforts. Memory forensics can assist at this stage of the pyramid as well.
Reversing code can take a lot of time and requires a skill set that is relatively rare. For this reason, many malware investigations don’t dig into the code. However, knowing how to perform at least some code reversing steps greatly increases the analyst’s view into the nature of the malicious program.
To get a sense for basic aspects of code-level reverse engineering in the context of other malware analysis stages, tune into my recorded webcast Introduction to Malware Analysis. For a closer look at manual code reversing, read Dennis Yurichev’s e-book Reverse Engineering for Beginners.
Combining Malware Analysis Stages
The process of examining malicious software involves several stages, which could be listed in the order of increasing complexity and represented as a pyramid. However, viewing these stages as discrete and sequential steps over-simplifies the malware analysis process. In most cases, different types of analysis tasks are intertwined, with the insights gathered in one stage informing efforts conducted in another. Perhaps the stages are better represented by a “wash, rinse, repeat” cycle that is interrupted only when the analyst runs out of time.
If you’re interested in this topic, check out the malware analysis course I teach at SANS Institute.
Perhaps the most challenging and exciting aspect of information security is the need to account for business context when making decisions. One way to do this is to determine the unique strengths of the company—its competitive advantages—so you can frame risk conversations accordingly.
Economic Moats to Safeguard the Business
Gunnar Peterson discussed aspects of this concept using the notion of economic moats. According to Morningstar, an economic moat “refers to how likely a company is to keep competitors at bay for an extended period.” This term is similar to what others might call a sustainable competitive advantage. Just like a moat helps safeguard the castle from attackers, an economic moat contributes towards protecting the business from competitors.
Companies have different economic moats and those without a sustainable competitive advantage tend to stagnate. Gunnar outlined several types of moats highlighted by Morningstar, including: Low operational costs, intangible assets (strong brand, patents, etc.), high switching costs (customers tend to stay), etc.
Relate Security Risks to Economic Moats
What are your organization’s economic moats? If you don’t know what capabilities help the company protect or expand its market share, find out. This knowledge will help you make informed security decisions and will allow you to be a more persuasive participant in risk discussions. As Gunnar pointed out, “the two most important things in infosec are identifying what kind of moat your business has and then defending that moat.”
Information security professionals often complain that executives ignore their advice. There could be many reasons for this. One explanation might be that you are presenting your concerns or recommendations in the wrong business context. You’re more likely to be heard if you relate the risks to an economic moat relevant to your company.
A common approach to emphasizing the importance of information security is based on the notion that a data breach can tarnish the company’s brand. In many cases, the reality shows that the business doesn’t actually suffer in the long term, and in some cases the attention brought by the breach could actually help the company. However, even if the company might suffer in the short term, an argument based on brand tarnishing could fall on deaf ears if the organization doesn’t consider its brand a competitive advantage.
Security in Support of Sustainable Competitive Advantages
A company whose economic moat is its brand will spend considerable effort to protect its brand equity. For organizations like that, the brand-tarnishing argument might be effective and could be a good way to justify security funding. However, companies that have other moats won’t care as much about safeguarding their brands.
For instance, consider a firm whose economic moat is tied to low costs due to its operational expertise and supplier relationships. A good context for making security decisions in this organization might be its efforts to protect proprietary details related to internal and supplier logistics. Threats to this moat will likely capture executives’ attention.
Another organization whose moat is its proprietary intellectual property will want to hear your thoughts on protecting such trade secrets. Alternatively, if a firm sees its time-to-market as a competitive advantage, it will want to know about the security risks that could slow it down and prevent the next timely release of its product.
An economic moat might protect the company from competitors, but it could be eroded by internal factors such as a security breach. Understand your company’s economic moats. Use them to frame security decisions and to ensure that your infosec advice is relevant to the company’s business objectives and strategies.
When characterizing ill-effects of malicious software, it’s too easy to focus on malware itself, forgetting that behind this tool are people that create, use and benefit from it. The best way to understand the threat of malware is to consider it within the larger ecosystem of computer fraud, espionage and other crime.
A Tip of a Spear
I define malware as code that is used to perform malicious actions. This implies that whether a program is malicious depends not so much on its capabilities but, instead, on how the attacker uses it.
Sometimes malware is compared to a tip of a spear—an analogy that rings true in many ways, because it reminds us that there is a person on the other end of the spear. This implies that information security professionals aren’t fighting malware per se. Instead, our efforts contribute towards defending against individuals, companies and countries that use malware to achieve their objectives.
Understanding the Context
Without the work of personnel that handles technical aspects of malware infections, the malware-empowered threat actors would be unencumbered. Yet, these tactical tasks need to be informed by a strategic perspective on the motivations and operations of the individuals that create, distribute and profit from malware.
To deal with malware-enabled threats, organizations should know how to detect, contain and eradicate infections, but we cannot stop there. We also need to understand the larger context of the incident. We won’t be able to accomplish this until we can see beyond the malicious tools to understand the perspective of our adversaries. The who is no less important than the what.
When engaged in a fight, it’s natural to ask yourself whether you are winning or losing. However, in the context of cybersecurity, this question might not make sense, because it presupposes that the state of winning exists.
Maintaining the Equilibrium
Every day, new people and transactions appear online, making the digital world more attractive to criminals. Miscreants fund malicious software and attack operations, so they can achieve financial, political and other objectives. Security practitioners respond to evolving online threats; the attackers adjust their tactics, the defenders tweak their approaches, attackers regroup, and so on and so forth.
Defenders sometimes feel that the attackers are innovating at a pace that outstrips our ability to defend sensitive data and computer infrastructure. Such impressions tend to be based on emotions and subjective observations, and they often lead to questions about which party is winning the fight. Defining our objectives in terms of winning or losing might not be practical.
The Eternal and Vicious Cycle
My perspective on the dynamics between cyber attackers and defenders aligns with the ecological metaphor that Lamont Wood described in an article Malware: War Without End. He referred to it as “an eternal cycle between prey and predator, and the goal is not victory but equilibrium.” It’s unlikely that this cycle will end and that either party will “win.”
When I spoke with Lamont, I suggested that attackers work to bypass our defenses and the defenders respond as part of the cycle. If attackers get in too easily, we are not spending enough on defense. If we are blocking 100% of the attacks, we are probably spending too much on defense.
The digital ecosystem as a whole continues to thrive, because it benefits both its legitimate users and the criminals that act as parasites within it. However, individual participants in this ecosystem could find themselves at a disadvantage and suffer losses. Complacency is risky for any participant, because each party must constantly apply energy to maintain the equilibrium.
If our goal is to “win” the fight against cyber criminals, we don’t stand a chance, in part because there will always be more threats to combat. It might be more useful to define our objectives in terms of maintaining an equilibrium between the defenders and the attackers. This way, we can help our organizations excel in the contaminated world of the Internet.
The future of information security is intertwined with the evolution of IT at large and the associated business and consumer trends. It’s worth taking the time to understand these dynamics to define a path for your professional development. How is the industry evolving and what role will you play?
Key Security Trends
Rich Mogull’s write-up on infosec trends offers an excellent framework for peeking 7-10 years into the future. Rich highlights key factors related to: hypersegregation, operationalization of security, incident response, software-defined security, active defense and closing the action loop. Read his article to understand these trends, then come back to consider how they might affect and inform your career development plans.
I won’t get into every trend that Rich described, but I’d like to share my thoughts on how some of these factors offer professional development opportunities for information security and IT professionals. Operationalization of security might be a good place to start.
IT Operations Professionals
As Rich points out, today infosec personnel “still performs many rote tasks that don’t actually require security expertise.” He predicts that security teams will divest themselves “of many responsibilities for network security and monitoring, identity and access management,” etc.
If you’re an IT operations professional who has no interest in specializing in security, you can expand your expertise so that you can take on some of the tasks performed by security personnel today. This might be a natural expansion of what you’re doing already. Moreover, consider what skills you need to possess to automate as many of these responsibilities as possible, allowing your organization to lower costs and improve quality of IT operations and helping you maintain your own sanity.
Information Security Professionals
If you’re an infosec person looking to grow in this field, consider what responsibilities will remain with security professionals. A security person might lack some of the expertise of his operations-focused IT colleagues, but presumably he is better at understanding security. This includes the knowledge of attack and defense tactics, the dynamics of incident response, security architecture and patterns, etc. These are some of the areas where you should focus your professional development efforts.
How to design and validate security of a network where every node is segregated from each other? How to assist the organization in living through a security incident cycle that could span days, but sometimes spans years? How to oversee and validate safeguards when most aspects of the IT infrastructure and applications have been virtualized and could be accessed via an API? What deception tactics could be employed to deter, slow down and detect intruders?
These are some of the questions, grounded in Rich’s trends, that infosec professionals should be able to answer, as they consider how to best contribute to their organization’s success in the future.
Asking the Right Questions
Do your best to project the future of industry trends. Based on these, consider what questions an employer might need answered 3, 7, 10 years from now. You might not know the answers to these questions yet, but the questions can guide you in drafting a professional development plan that will be right for you.