Posts tagged security

Security of Third-Party Keyboard Apps on Mobile Devices

Major mobile device platforms allow users to replace built-in keyboard apps with third-party alternatives, which have the potential to capture, leak and misuse the keystroke data they process. Before enabling such apps, users should understand the security repercussions of third-party keyboards, along with the safeguards implemented by their developers.

Third-party keyboards received a boost of attention when Apple made it possible to implement such apps on iOS 8, though this capability has existed on Android for a while. iOS places greater restrictions on keyboards than Android does; however, even Apple cannot control what keyboard developers do with keystroke data if users allow these apps to communicate over the network.

Granting Network Access to the Keyboard App

The primary security concerns related to keyboard applications are associated with their ability to transmit the user’s keystrokes and potentially other sensitive data to developers’ servers. On iOS, this is possible if the user grants the app “full access,” as encouraged or required by the application.

image

The notion of “full access” is equivalent to the term “network access” that Apple’s App Extension Programming Guide discusses when explaining how to build a custom keyboard. According to the guide, network access is disabled by default. Once enabled by the user, this capability allows the keyboard to access “Location Services and Address Book, with user permission,” “send keystrokes and other input events for server-side processing,” and perform additional actions that would otherwise be unavailable.

image

A legitimate keyboard app might send keystrokes to its server to perform intelligent analysis that is best conducted in the “cloud,” such as predicting upcoming letters, storing user typing patterns or performing large-scale lexicon analytics. Some third-party keyboards will function without their users granting them “full access”; however, this will disable some of the features that might have drawn the users to the app in the first place.

Guidelines for Networked Keyboard Apps

Apple encourages keyboard developers to “establish and maintain user trust” by understanding and following “privacy best practices” and the associated iOS program requirements, guidelines and license agreements. For instance, if the keyboard sends keystroke data to the provider’s server, Apple asks the developer to not store the data “except to provide services that are obvious to the user.”

Presumably, Apple’s app review process catches blatant violations of such guidelines; however, once the keyboard transmits user data off the phone, there is little Apple can do to assess or enforce developers’ security measures. (I haven’t been able to locate keyboard-related security guidelines for the Android ecosystem, but I suspect they are not stronger than those outlined by Apple.)

Users of third-party keyboards should not count on mobile OS providers to enforce security practices on server-side keyboard data processing. Instead, they should decide whether they trust the keyboard developer with their data, perhaps on the basis of their perception of the developer’s brand and public security statements.

Security and Privacy Statements from Keyboard Developers

I looked at the security and privacy statements published by the developers of three popular keyboard apps that are available for Android and iOS devices: SwiftKey, Fleksy and Swype. The developers differ in how they explain the way they safeguard users’ data.

SwiftKey

SwiftKey publishes a high-level overview of its data security and privacy practices. For users who opt into the SwiftKey Cloud feature of the product, the company collects “information concerning the words and phrases” that users utilize. The company calls this Language Modeling Data and explains that it is used to provide users with “personalization, prediction synchronization and backup.” Fields that website or app developers designate as denoting a password or payment information are excluded from such collection.

SwiftKey states that the data transferred to SwiftKey Cloud is transmitted to its servers “over encrypted channels” and is stored in a “fully encrypted” manner. The company also points out that its data protection practices are governed by “stringent EU privacy protection laws” and mentions Safe Harbor principles. It also states that data of users who don’t enable SwiftKey Cloud is not transferred off their devices.

Users of the SwiftKey iOS app are instructed to grant the keyboard full access. Once this is accomplished, the user has the option of enabling SwiftKey Cloud “to enhance SwiftKey’s learning” by signing in with their Facebook or Gmail credentials. To discover the security implications of enabling this feature, the user has to dig into the details behind the “Privacy policy” and “Find out more” links.

image

Unfortunately, SwiftKey on iOS doesn’t function at all until the user grants it full access. The user needs to trust that SwiftKey won’t send data off the mobile device unless SwiftKey Cloud is enabled.

Fleksy

Like SwiftKey, Fleksy uses the term Language Modeling Data in its Privacy Policy, explaining that it refers to data “such as common phrases or words that you use when typing with Fleksy.” According to the policy, the collection of this data is “disabled by default” and “is not enabled during the installation process.” Like SwiftKey, Fleksy pledges to encrypt the data when transmitting and storing it and to “not log data entered into password or credit card fields.”

Fleksy states that it doesn’t collect Language Modeling Data unless the user opts into supplying such data to Fleksy. However, I saw no mention of Language Modeling Data or the associated security repercussions in the Fleksy iOS app. Instead, I was encouraged to enable “Customization” by granting the keyboard full access; the app stated that “keyboard apps require full access to sync your preferences.”

image

Swype

Swype is another popular keyboard for mobile devices, developed by Nuance. Unfortunately, I could not find any information about this app’s security practices beyond a knowledge base article for its Android keyboard, which stated that the company “does not collect personal information such as credit card numbers” and “suppresses all intelligent learning and personal dictionary addition functionality when used in a password field.” The company’s privacy policy contains generic statements such as:

"Nuance may share Personal Information within Nuance to fulfill its obligations to you and operate its business consistent with this Privacy Policy and applicable data protection law."

Conclusions and Implications

Third-party keyboards for mobile devices offer features that improve upon some aspects of built-in keyboards. However, to enjoy such innovations, the users generally need to grant keyboard apps network access. This means that the users need to accept the following risks:

  • Users need to be comfortable with keyboard developers collecting Language Modeling Data and storing it on their servers, without fully understanding the meaning of this term or its privacy and security implications. The keyboard developers generally acknowledge collecting such data, though they offer almost no details regarding how it is safeguarded beyond referring to “encryption.”
  • Users need to trust the keyboard developer not to capture keystrokes and other sensitive data beyond Language Modeling Data. Such capture could happen deliberately in a malicious keyboard app or accidentally in an otherwise benign one. In either case, the keyboard could act as a powerful keylogger for the mobile device.

Users might assume that the guardians of their mobile OS, be they Google, Apple, Samsung, etc., will protect them from malicious or accidental misuse of keystroke data and network access. However, such firms have no direct control over what happens once the data leaves the mobile device.

Keyboard developers should provide additional details about their security practices to reassure users and consider how their apps might provide innovative features in a manner that minimizes users’ risk and the apps’ need for network access. It’s unclear why SwiftKey cannot function (at least on iOS) without network access, why Fleksy doesn’t make it easier for users of its app to understand when data is sent to the company’s servers and why Nuance doesn’t discuss any of these topics for its Swype keyboard on its website.

Hey, that’s just my perspective. What’s yours?

Lenny Zeltser

Morse-Style Tap Codes for Mobile Authentication

Authenticating legitimate users of mobile devices in a manner that is both reliable and convenient is hard, which is why companies continue to experiment with approaches that try to balance security and convenience. The freshly-minted Coin app asks users to authenticate by touching the phone’s screen to enter a sequence of short and long taps in the style of Morse code.

When starting the Coin app for the first time, the user needs to log in using the traditional username/password pair. After that, the app asks the person to select a private Tap Code, which is “a Morse code-like sequence that you’ll use each time you open the Coin app. The Tap Code is a series of 6 taps, short or long.”

image

Anticipating the need to train people how to properly specify a Tap Code, the app requires users to practice entering a Tap Code before selecting their own, requesting that they “get a feel for long and short taps by practicing the pattern above.” Sadly, I failed several practice tests before eventually tapping in the right sequence. Tapping short and long codes is harder than you might think!

When the Coin app launches, it prompts the user to enter the “six element tap code.” The general approach of providing a quick way to authenticate to the app in lieu of typing the full password is present in many financial and other applications. However, those apps generally ask the user to enter a numeric PIN, rather than to tap a Morse-style code.

image

As a user of the Coin app, I struggled with the taps. Perhaps my thumb lacked the dexterity, or maybe my brain was intentionally sabotaging the experience to avoid the concentration that remembering and entering my tap sequence requires. Also, I kept forgetting my Tap Code.

When I failed to enter the correct sequence three times in a row, the app prompted for a full password as a way of deterring Tap Code guessing.
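
To make the mechanics concrete, here’s a rough sketch in Python of the flow described above: a six-element code of short and long taps, with a fallback to the full password after three failed attempts. This is purely illustrative, not Coin’s actual code, and the function names are made up.

# Illustrative sketch only, not Coin's implementation. A tap code is modeled
# as a sequence of six elements, each either "short" or "long".
MAX_ATTEMPTS = 3   # after this many failures, fall back to the full password

def authenticate(stored_code, read_taps, ask_password):
    """Return True if the user authenticates via tap code or full password."""
    for _ in range(MAX_ATTEMPTS):
        attempt = read_taps()          # e.g., ["short", "long", ...], length 6
        if attempt == stored_code:
            return True
    # Too many wrong tap sequences: require the full password instead,
    # which deters guessing of the short tap code.
    return ask_password()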

image

I applaud Coin’s attempt to provide an alternative to traditional PIN-based shortcut authentication approaches. Unfortunately, I find that the uniqueness of its Tap Code method negatively impacts the user experience, at least in the company’s present implementation of the mechanism. Though not as sexy, numeric PINs are familiar to people and work reasonably well. Authentication approaches based on gestures might be more familiar to users, too, especially on Android devices. 

If innovative approaches to mobile authentication interest you, check out these articles:

Lenny Zeltser

Internet Noise and Malicious Requests to a New Web Server

I set up a brand new web server to see what type of connections it would receive. Since the server had no “production” purpose, all attempts to access it could be considered suspicious at best. Such requests are associated with scans, probes and other malicious activities that tend to blend into the background of web traffic. Here’s what I observed.

An Internet-Mapping Experiment by PDR Labs

The web server began receiving the following unexpected HTTP requests once or twice per day:

HEAD / HTTP/1.1
Accept-Encoding: identity
User-Agent: Cloud mapping experiment. Contact research@pdrlabs.net

These connection attempts stood out because the HTTP requests were missing the “Accept” header and included the server’s IP address, rather than its hostname, in the “Host” field (not shown here). This tends to occur with bots.
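
If you want to flag such requests programmatically, here’s a minimal Python sketch of that heuristic (my own illustration, not a published tool); it assumes you’ve already parsed the request headers into a dictionary with lowercase names.

import re

# Flag HTTP requests that look bot-like: no Accept header, or a bare IP
# address (rather than a hostname) in the Host header.
IPV4 = re.compile(r"^\d{1,3}(\.\d{1,3}){3}(:\d+)?$")

def looks_bot_like(headers):
    """headers: dict mapping lowercase header names to values."""
    if "accept" not in headers:
        return True
    return bool(IPV4.match(headers.get("host", "")))

# Example: looks_bot_like({"user-agent": "Cloud mapping experiment...",
#                          "host": "203.0.113.5"}) returns True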

Searching the web for “pdrlabs.net” led to www.pdrlabs.net, which contained a bare-bones page stating:

"We are conducting an ongoing experiment to map the Internet in its entirety. Our crawling is not malicious in intent and does nothing more than attempt the connection; no further information is mined."

These connections originated from different IP addresses, all of which were hosted at Amazon Elastic Compute Cloud (EC2). These included 75.101.197.159, 23.20.195.127, 23.22.8.3, 54.196.118.77, 23.22.13.205, 54.90.93.247, 54.87.77.99, 54.89.82.208 and 107.22.50.242.

I didn’t find any other suspicious connections associated with these IPs so I am not too worried about this activity. Still, what are PDR Labs up to and who is behind this project? Perhaps some day these secrets will be revealed to us.

Scans for Open Web Proxies

Another set of anomalous requests, unrelated to the connections above, looked like this:

GET http:// hotel.qunar. com/render/hoteldiv.jsp?&__jscallback=XQScript_4 HTTP/1.1
Accept-Encoding: gzip,deflate,sdch
Referer: http:// hotel.qunar. com/
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
Host: hotel.qunar.com

These requests stood out because the client attempted to retrieve a page from hotel.qunar.com, which was unrelated to my web server. Such connections, regardless of the third-party URL they attempt to retrieve, tend to be scans for open proxies. If my web server were configured as an open proxy, it would retrieve the requested URL and present it to the client.
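
Here’s a small Python sketch of how one might spot such open-proxy probes in a request log: it flags request lines that use an absolute URI whose host isn’t one of the server’s own names. The MY_HOSTS value is an assumption you’d adjust for your server; this is an illustration, not a hardened detector.

from urllib.parse import urlsplit

# Open-proxy probes typically send an absolute URI ("GET http://other-site/...")
# whose host is not one of ours.
MY_HOSTS = {"www.example.org", "example.org"}   # assumption: your server's names

def is_proxy_probe(request_line):
    try:
        method, target, version = request_line.split()
    except ValueError:
        return False
    if not target.lower().startswith(("http://", "https://")):
        return False                    # ordinary origin-form request
    return (urlsplit(target).hostname or "") not in MY_HOSTS

# is_proxy_probe("GET http://hotel.qunar.com/render/hoteldiv.jsp HTTP/1.1") -> True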

According to the Httpd Wiki, such open proxies could be misused to “manipulate pay-per-click ad systems, to add comment or link-spam to someone else’s site, or just to do something nasty without being detected.” Open proxies are also used to bypass corporate or government access restrictions.

I observed these connections roughly every other day. They originated from different IP addresses, all of which were registered in China. These included 114.232.150.176, 114.231.131.97 and 115.29.146.211.

Why do these scans use the hotel.qunar.com URL for their tests? I doubt the person behind them is intent on finding a way to make anonymous hotel reservations through this site. Any URL would do. However, hotel.qunar.com is specifically mentioned as an example in the onlineProxy.js tool:

/**
* a proxy with totoro, to test online page.
*
step1: totoro -R http:// 10.211.55. 2:9998/proxy?target=hotel.qunar.com -a mocha
step2: this proxy, request the target url, add mocha script and case to response
step3: response the added html to totoro server
*
*/

This tool is a module for Totoro, which is a free, “simple and stable cross-browser testing tool.” Perhaps the scanner was implemented by using Totoro and onlineProxy.js, with the person behind it using the example above when launching the scans. Another mystery of the web unraveled!

This wasn’t the only set of proxy connections that the server encountered. Another probe came from 122.226.223.69, which attempted to retrieve:

GET http:// www. k2proxy. com//hello.html

The connecting client specified the following User-Agent string: “Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)”. The connection came from a system that, according to the Spamhaus CBL, was infected with Torpig malware. The K2 proxy website, authored in Chinese, seems to be an effort to locate and document open proxies and appears to be maintained by zngohaha117@hotmail.com.

Yet another proxy probe came from 63.246.129.40, an IP address classified as being potentially malicious by Project Honey Pot:

GET http://www.baidu.com/ HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)

A couple of seconds before submitting this HTTP request, the attacking system also attempted to connect to the server on TCP ports 135 and 1433, which are associated with Microsoft RPC and Microsoft SQL Server activity, respectively.

Probes from Potentially-Infected Systems

Let’s move on to another unusual set of connections. Approximately every other day, the web server received the following request:

HEAD / HTTP/1.0

These connections stood out because they were missing all other headers typically present in an HTTP connection. The requests came from different IPs, which included 162.242.146.223, 184.173.172.150, 103.3.189.14 and 210.17.38.217. These IPs were located in the US, Japan and Taiwan.

Several of these IP addresses were flagged on the Spamhaus Composite Blocking List (CBL) as being associated with infected hosts. According to the CBL, some of these systems were running Gameover Zeus and Hesperbot malware. Perhaps these bots were directed to scan the web looking for web servers to infect; I’m not sure, but if you have promising theories, please let me know.

Scans for phpMyAdmin Vulnerabilities

The web server also saw several requests associated with User-Agent “ZmEu”. They looked like this:

GET /MyAdmin/scripts/setup.php HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: ZmEu

These connections stood out because they attempted to access PHP pages not present on the server and specified an unusual User-Agent. Also, they provided a “Host:” header (not shown here) that specified the web server’s IP address, rather than its hostname.

These probes came from 95.111.68.120 in Bulgaria. According to the Spamhaus CBL, this IP was associated with Gameover Zeus malware. The infected system attempted to access pages used by phpMyAdmin, a popular MySQL administration tool. The scanner looked for vulnerabilities in phpMyAdmin that it could exploit.
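
If you’d like to hunt for similar probes in your own access logs, a simple Python filter along these lines would do; the path list is just a sample of common phpMyAdmin locations, not an exhaustive signature.

# Flag access-log lines that look like phpMyAdmin scanner probes: either the
# "ZmEu" User-Agent or requests for setup.php under phpMyAdmin-style paths.
SUSPECT_PATHS = ("/phpmyadmin/scripts/setup.php",
                 "/myadmin/scripts/setup.php",
                 "/pma/scripts/setup.php")

def is_phpmyadmin_probe(log_line):
    line = log_line.lower()
    return "zmeu" in line or any(path in line for path in SUSPECT_PATHS)

# Works on Apache/nginx combined-format lines, e.g.:
# is_phpmyadmin_probe('95.111.68.120 - - [...] "GET /MyAdmin/scripts/setup.php ..." "ZmEu"') -> True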

According to Phil Riesch, User-Agent “ZmEu” is used by “a security tool used for discovering security holes” in phpMyAdmin. Older web probes associated with this tool included a reference to its potential origin and pointed to a now-defunct website:

Made by ZmEu @ WhiteHat Team - www. whitehat.ro

Someone seemed to be using a bot network to scan for vulnerable phpMyAdmin systems, though the reference to “ZmEu” could have been added regardless of whether that was the tool that the attacker actually employed.

This completes the overview of the suspicious activities I observed recently on a brand new web server that should not have seen any connections. Such probes are easy to notice on a non-production system like that. On most real servers, they probably go unnoticed, blending into the noise that comprises today’s Internet traffic.

Lenny Zeltser

New Release of REMnux Linux Distro for Malware Analysis

image

It’s my pleasure to announce the availability of version 5 of REMnux, a Linux distribution popular among malware analysts. The new release adds lots of exciting free tools for examining malicious software. It also updates many of the utilities that have already been present in the distro. Here is a listing of the tools added to REMnux v5.

Examine Browser Malware

  • Thug: Honeyclient for investigating suspicious websites
  • mitmproxy: Intercept, modify, replay and save HTTP and HTTPS traffic
  • Automater: Look up URL/Domain, IP and MD5 hash details
  • Java Cache IDX Parser: Examine Java IDX files
  • JSDetox: Decode obfuscated JavaScript
  • ExtractScripts: Extract JavaScript scripts from an HTML file

Examine Document Files

  • AnalyzePDF: Examine a malicious PDF file
  • Pdfobjflow: Visualize the output from pdf-parser
  • officeparser: Extract embedded files and macros from office documents

Extract and Decode Artifacts

  • unXOR: Guess a XOR key via known-plaintext attacks (see the sketch after this list)
  • XORStrings: Locate and decode XOR-obfuscated strings
  • ex_pe_xor: Carve out single-byte XOR encoded executables from files
  • Balbuzard: Extract and decode suspicious patterns from malicious files
  • Foremost: Carve contents of files
  • Scalpel: Carve contents of files
  • strdeobj: Extract and decode strings defined as arrays
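
To illustrate the known-plaintext idea behind tools such as unXOR (referenced in the list above), here’s a tiny Python sketch: it brute-forces all 256 single-byte XOR keys and keeps the ones that reveal a string you expect to find in the decoded data. It’s a toy version, not the actual tool.

# Brute-force single-byte XOR keys and report those that reveal a known
# plaintext (e.g., b"This program" for PE files or b"http" for URLs).
def guess_xor_keys(data, known_plaintext=b"http"):
    hits = []
    for key in range(256):
        decoded = bytes(b ^ key for b in data)
        if known_plaintext in decoded:
            hits.append((key, decoded))
    return hits

# Example: guess_xor_keys(open("suspect.bin", "rb").read(), b"This program")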

Handle Network Interactions

Process Multiple Samples

  • Maltrieve: Retrieve malware from malicious sites
  • Ragpicker: Malware crawler with analysis and reporting functionality
  • Viper: Store, classify and investigate suspicious binary files

Examine File Properties and Contents

  • YaraGenerator: Generate Yara rules for designated files
  • Yara Editor: Create and modify Yara rules
  • IOCextractor: Extract indicators of compromise from a text report file
  • Hash Identifier: Identify the type of hash being examined
  • nsrllookup: Look up file hashes on an NSRL database server
  • totalhash: Look up a suspicious file hash in the totalhash.com database

Investigate Linux Malware

  • Sysdig: Track and examine local system activities on a Linux system
  • Unhide: Find local hidden processes or connections on a Linux system
  • Bokken: Interactive static malware analysis tool
  • Vivisect: Statically examine and emulate the execution of binary files
  • Evan’s Debugger (EDB): Interactively disassemble and debug ELF binary files

Other Tools

In addition to the newly-installed tools above, REMnux v5 includes updates to core OS components as well as numerous other utilities present in earlier versions of the distro, including Volatility, peepdf, Network Miner, OfficeMalScanner, MASTIFF, ProcDOT and others. For a full listing of REMnux v5 tools, see the XLSX spreadsheet or the XMind mind map.

A huge thank you to David Westcott, who set up and upgraded many of the packages available as part of REMnux v5, thoroughly tested them and helped with the documentation. I’m also very grateful to the beta testers who reviewed early versions of this release. As always, thank you to the developers of the malware analysis tools that I am able to include as part of REMnux.

You can download the new version from REMnux.org. It’s available as a virtual appliance in VMware and OVF/OVA formats, as well as an ISO image of a live CD.

Lenny Zeltser

P.S. I expect the next major REMnux release to be based on a Long Term Support (LTS) version of Ubuntu and employ a modular package architecture to support incremental updates.

Scammers in Action: Domain Names and Family Resettlement to Australia

image

"You Have Been Selected for Family Resettlement to Australia," began the email that included the seal of the Embassy of Australia. "You are among the list of nominated for 2014 resettlement visa to Australia." The signature line claimed that the message had been sent by Hon Thomas Smith and came from "Australia Immigration Section <australiaimmigrationsection@gmail.com>."

This was a scam, of course.

"What do I need to do?" I responded, curious what might come next. Hon Thomas Smith responded within a few hours, this time from australia@immigrationsection.com.au.pn.

Request for Personal Information

The message attempted to mimic the letterhead of the Australian Department of Immigration and Citizenship and welcomed me “to Australia visa office.” It explained that:

"every year certain number of people are selected through our electronic ballot system for resettlement by Australia Government as part of support to Countries regarded as war zone area."

The miscreant requested that I submit a scanned copy of my travel passport, a recent photo and my phone number. In addition, I was to email a scanned white paper sheet with my fingerprints on it.

The email message included a PDF attachment that claimed to be Visa Form File/10121L-2014, which requested details such as date of birth, mother’s name and address. The PDF file didn’t have an exploit, as far as I can tell, and was merely designed as a place where the scammer’s target could conveniently provide personal information.

image

The scammer was probably pursuing this information with the goal of committing identity theft. Future interactions with the scammer would also likely include a request for money to process the bogus application.

Free Sub-Domain Registration

The domain from which the scammer sent the application, immigrationsection.com.au.pn, is considered malicious by some security companies, according to VirusTotal. It redirects web visitors to www-dot-popnic-dot-com, which some sources consider malicious.

image

Popnic-dot-com seems to be a front for Unionic-dot-com, which provides free domain registration, email forwarding, web hosting, URL forwarding, etc. under unusual TLDs such as .tc, .mn, .ms and others. More specifically, it offers registration under second-level domains that resemble TLDs assigned to major countries, such as .uk.pn, .us.pn, .ca.pn, .au.pn and others. No wonder it’s attractive to scammers, who want a domain that at first glance seems legitimate.
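
As a rough illustration of how one might spot such lookalike domains, here’s a small Python heuristic; the suffix list is just a sample I chose for the example, and the approach is deliberately simplistic.

# Flag domains that embed a country-style suffix (e.g., ".com.au") in front
# of an unrelated TLD, as in "immigrationsection.com.au.pn".
LOOKALIKE_SUFFIXES = (".com.au", ".co.uk", ".gov.uk", ".com.cn", ".co.jp")

def looks_like_tld_spoof(domain):
    labels = domain.lower().rstrip(".").split(".")
    for i in range(1, len(labels) - 1):        # stop before the real TLD
        if "." + ".".join(labels[i:-1]) in LOOKALIKE_SUFFIXES:
            return True
    return False

# looks_like_tld_spoof("immigrationsection.com.au.pn") -> True
# looks_like_tld_spoof("example.com.au")               -> False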

With the increasing variety of TLDs available, scammers will have an easier job selecting domain names that catch the victims’ attention or evoke trust. Regardless of the domain used by the sender of the email message, if the offer sounds too good to be true and involves supplying sensitive information, it’s probably a scam.

Lenny Zeltser

A Series of Introductory Malware Analysis Webcasts

image

If you are looking to get started with malware analysis, tune into the webcast series I recorded to illustrate key tools and techniques for examining malicious software:

Since the best way to learn malware analysis involves practice, I am happy to provide you with malware samples from each of these webcasts. Just send me an email after you’ve watched the webcast and confirm that you will be taking precautions to properly isolate your laboratory environment.

Lenny Zeltser

Mastering 4 Stages of Malware Analysis

image

Examining malicious software involves a variety of tasks, some simpler than others. These efforts can be grouped into stages based on the nature of the associated malware analysis techniques. Layered on top of each other, these stages form a pyramid that grows upwards in complexity. The closer you get to the top, the more burdensome the effort and the less common the skill set.

Fully-Automated Analysis

The easiest way to assess the nature of a suspicious file is to scan it using fully-automated tools, some of which are available as commercial products and some as free ones. These utilities are designed to quickly assess what the specimen might do if it ran on a system. They typically produce reports with details such as the registry keys used by the malicious program, its mutex values, file activity, network traffic, etc.

Fully-automated tools usually don’t provide as much insight as a human analyst would obtain when examining the specimen in a more manual fashion. However, they contribute to the incident response process by rapidly handling vast amounts of malware, allowing the analyst (whose time is relatively expensive) to focus on the cases that truly require a human’s attention.

For a listing of free services and tools that can perform automated analysis, see my lists of Toolkits for Automating Malware Analysis and Automated Malware Analysis Services.

Static Properties Analysis

An analyst interested in taking a closer look at the suspicious file might proceed by examining its static properties. Such details can be obtained relatively quickly, because they don’t involve running the potentially-malicious program. Static properties include the strings embedded into the file, header details, hashes, embedded resources, packer signatures, metadata such as the creation date, etc.
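
For a feel of what this looks like in practice, here’s a minimal Python sketch that pulls a few static properties (size, hashes and embedded ASCII strings) from a file without running it. Real tools report much more, such as header fields and resources; this is just an illustration.

import hashlib
import re

def static_properties(path, min_len=6):
    """Collect a few static properties of a file without executing it."""
    data = open(path, "rb").read()
    return {
        "size": len(data),
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        # first 50 printable ASCII strings of at least min_len characters
        "strings": re.findall(rb"[ -~]{%d,}" % min_len, data)[:50],
    }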

Looking at static properties can sometimes be sufficient for defining basic indicators of compromise. This process also helps determine whether the analyst should take a closer look at the specimen using more comprehensive techniques and where to focus the subsequent steps. Analyzing static properties is useful as part of the incident triage effort.

VirusTotal is an example of an excellent online tool whose output includes the file’s static properties. For a look at some free utilities you can run locally in your lab, see my posts Analyzing Static Properties of Suspicious Files on Windows and Examining XOR Obfuscation for Malware Analysis.

Interactive Behavior Analysis

After using automated tools and examining static properties of the file, as well as taking into account the overall context of the investigation, the analyst might decide to take a closer look at the specimen. This often entails infecting an isolated laboratory system with the malicious program to observe its behavior.

Behavioral analysis involves examining how the sample runs in the lab to understand its registry, file system, process and network activities. Understanding how the program uses memory (e.g., performing memory forensics) can bring additional insights. This malware analysis stage is especially fruitful when the researcher interacts with the malicious program, rather than passively observing the specimen.

The analyst might observe that the specimen attempts to connect to a particular host, which is not accessible in the isolated lab. The researcher could mimic that system in the lab and repeat the experiment to see what the malicious program would do once it is able to connect. For example, if the specimen uses the host as a command and control (C2) server, the analyst may be able to learn about the specimen by simulating the attacker’s C2 activities. This approach of molding the lab to evoke additional behavioral characteristics applies to files, registry keys and other dependencies that the specimen might have.
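
As a simple illustration of molding the lab in this way, the following Python sketch stands in for an unreachable host the specimen tries to contact: it accepts the connection, records whatever the malware sends and returns an empty HTTP response. Purpose-built tools such as INetSim do this far more thoroughly; the port is an assumption you’d adjust to match the specimen’s behavior.

import socket

HOST, PORT = "0.0.0.0", 8080      # assumption: the specimen connects over TCP/8080

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    while True:
        conn, peer = srv.accept()
        with conn:
            print("Connection from", peer)
            print(conn.recv(4096))    # capture what the malware sends
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")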

Being able to exercise this level of control over the specimen in a properly-orchestrated lab is what differentiates this stage from fully-automated analysis tasks. Interacting with malware in creative ways is more time-consuming and complicated than running fully-automated tools. It generally requires more skills than performing the earlier tasks in the pyramid.

For additional insights related to interactive behavior analysis, see my post Virtualized Network Isolation for a Malware Analysis Lab, my recorded webcast Intro to Behavioral Analysis of Malicious Software and Part 3 of Jake Williams’ Tips on Malware Analysis and Reverse-Engineering.

Manual Code Reversing

Reverse-engineering the code that comprises the specimen can add valuable insights to the findings available after completing interactive behavior analysis. Some characteristics of the specimen are simply impractical to exercise and examine without looking at the code. Insights that only manual code reversing can provide include:

  • Decoding encrypted data stored or transferred by the sample;
  • Determining the logic of the malicious program’s domain generation algorithm (a sketch follows this list);
  • Understanding other capabilities of the sample that didn’t exhibit themselves during behavior analysis.
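
To show what reconstructing a domain generation algorithm might yield, here’s a purely hypothetical DGA written in Python. It is invented for illustration and doesn’t correspond to any real malware family; the point is that recovering logic like this from the code lets defenders predict and block upcoming domains.

import hashlib
from datetime import date

def generate_domains(seed, day, count=5):
    """Hypothetical DGA: derive pseudo-random domains from a seed and date."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".info")
    return domains

# generate_domains("examplebot", date(2014, 7, 1)) -> five 12-character .info domains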

Manual code reversing involves the use of a disassembler and a debugger, which could be aided by a decompiler and a variety of plugins and specialized tools that automate some aspects of these efforts. Memory forensics can assist at this stage of the pyramid as well.

Reversing code can take a lot of time and requires a skill set that is relatively rare. For this reason, many malware investigations don’t dig into the code. However, knowing how to perform at least some code reversing steps greatly increases the analyst’s view into the nature of the malicious program.

To get a sense for basic aspects of code-level reverse engineering in the context of other malware analysis stages, tune into my recorded webcast Introduction to Malware Analysis. For a closer look at manual code reversing, read Dennis Yurichev’s e-book Reverse Engineering for Beginners.

Combining Malware Analysis Stages

The process of examining malicious software involves several stages, which could be listed in the order of increasing complexity and represented as a pyramid. However, viewing these stages as discrete and sequential steps oversimplifies the malware analysis process. In most cases, different types of analysis tasks are intertwined, with the insights gathered in one stage informing efforts conducted in another. Perhaps the stages could be represented as a “wash, rinse, repeat” cycle that is interrupted only when the analyst runs out of time.

If you’re interested in this topic, check out the malware analysis course I teach at SANS Institute.

Lenny Zeltser

P.S. The pyramid presented in this post is based on a similar diagram by Alissa Torres (@sibertor). Also, Andrés Velázquez (@cibercrimen) translated this article into Spanish!

Making Sure Your Security Advice and Decisions Are Relevant

image

Perhaps the most challenging and exciting aspect of information security is the need to account for business context when making decisions. One way to do this is to determine the unique strengths of the company—its competitive advantages—so you can frame risk conversations accordingly.

Economic Moats to Safeguard the Business

Gunnar Peterson discussed aspects of this concept using the notion of economic moats. According to Morningstar, an economic moat “refers to how likely a company is to keep competitors at bay for an extended period.” This term is similar to what others might call a sustainable competitive advantage. Just like a moat helps safeguard the castle from attackers, an economic moat contributes towards protecting the business from competitors.

Companies have different economic moats, and those without a sustainable competitive advantage tend to stagnate. Gunnar outlined several types of moats highlighted by Morningstar, including low operational costs, intangible assets (a strong brand, patents, etc.) and high switching costs (customers tend to stay).

Relate Security Risks to Economic Moats

What are your organization’s economic moats? If you don’t know what capabilities help the company protect or expand its market share, find out. This knowledge will help you make informed security decisions and will allow you to be a more persuasive participant in risk discussions. As Gunnar pointed out, “the two most important things in infosec are identifying what kind of moat your business has and then defending that moat.”

Information security professionals often complain that executives ignore their advice. There could be many reasons for this. One explanation might be that you are presenting your concerns or recommendations in the wrong business context. You’re more likely to be heard if you relate the risks to an economic moat relevant to your company.

A common approach to emphasizing the importance of information security is based on the notion that a data breach can tarnish the company’s brand. In many cases, the reality shows that the business doesn’t actually suffer in the long term, and in some cases the attention brought by the breach could actually help the company. However, even if the company might suffer in the short term, an argument based on brand tarnishing could fall on deaf ears if the organization doesn’t consider its brand a competitive advantage.

Security in Support of Sustainable Competitive Advantages

A company whose economic moat is its brand will spend considerable effort to protect its brand equity. For organizations like that, the brand-tarnishing argument might be effective and could be a good way to justify security funding. However, companies that have other moats won’t care as much about safeguarding their brands.

For instance, consider a firm whose economic moat is tied to low costs due to its operational expertise and supplier relationships. A good context for making security decisions in this organization might be its efforts to protect proprietary details related to internal and supplier logistics. Threats to this moat will likely capture executives’ attention.

Another organization whose moat is its proprietary intellectual property will want to hear your thoughts on protecting such trade secrets. Alternatively, if a firm sees its time-to-market as a competitive advantage, it will want to know about the security risks that could slow it down and prevent the next timely release of its product.

An economic moat might protect the company from competitors, but it could be eroded by internal factors such as a security breach. Understand your company’s economic moats. Use them to frame security decisions and to ensure that your infosec advice is relevant to the company’s business objectives and strategies.

Lenny Zeltser

Malware: Whom or What Are We Fighting?

When characterizing ill-effects of malicious software, it’s too easy to focus on malware itself, forgetting that behind this tool are people that create, use and benefit from it. The best way to understand the threat of malware is to consider it within the larger ecosystem of computer fraud, espionage and other crime.

A Tip of a Spear

I define malware as code that is used to perform malicious actions. This implies that whether a program is malicious depends not so much on its capabilities but, instead, on how the attacker uses it.

Sometimes malware is compared to a tip of a spear—an analogy that rings true in many ways, because it reminds us that there is a person on the other end of the spear. This implies that information security professionals aren’t fighting malware per se. Instead, our efforts contribute towards defending against individuals, companies and countries that use malware to achieve their objectives.

Understanding the Context

Without the work of personnel that handles technical aspects of malware infections, the malware-empowered threat actors would be unencumbered. Yet, these tactical tasks need to be informed by a strategic perspective on the motivations and operations of the individuals that create, distribute and profit from malware.

To deal with malware-enabled threats, organizations should know how to detect, contain and eradicate infections, but we cannot stop there. We also need to understand the larger context of the incident. We won’t be able to accomplish this until we can see beyond the malicious tools to understand the perspective of our adversaries. The who is no less important than the what.

Lenny Zeltser