The curious case of app collusion

MalwareResearch · 3 min read · Aug 30, 2021

I like analogies, both for learning and for teaching. I especially enjoy an analogy that draws parallels from a beloved intellect or fan favourite. In one of my previous posts, I discussed an Asimovian take on computer security; let us continue that discussion from where we left off and apply it to analysing a modern-day malware paradigm.

What is app collusion?

The standard dictionary definition of collusion is a "secret agreement or cooperation especially for an illegal or deceitful purpose". Extending this to the world of malware research, app collusion is a situation where a malware author splits the malicious functionality into multiple parts based on the "privileges" (permissions) each part requires, and then deploys that group of apps in the target environment instead of a single application.

For example, consider a malicious app that reads your contacts and exfiltrates them to its server. An app like this generally needs two capabilities: access to the internet and access to the contacts. If, instead, the author deploys two applications, one that can communicate over the internet but cannot read contacts, and another that can read contacts but cannot access the internet, the two apps can collude on the target device and accomplish the same task while looking far less suspicious to an ordinary user.
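To make that split concrete, here is a minimal sketch in Python, simulating the idea rather than showing real Android code: a hypothetical `contacts_app` holds only the contact-reading capability and drops data into a shared channel, while a hypothetical `network_app` holds only the internet capability and forwards whatever it finds there. The queue stands in for a real inter-app communication mechanism such as broadcasts or shared storage.

```python
from queue import Queue

# Illustrative covert channel standing in for a real inter-app
# mechanism (broadcast intents, shared storage, content providers, ...).
shared_channel = Queue()

def contacts_app(channel):
    """Holds READ_CONTACTS but no INTERNET: it only reads and drops data."""
    contacts = ["alice:+100200300", "bob:+400500600"]  # pretend contact list
    for entry in contacts:
        channel.put(entry)

def network_app(channel):
    """Holds INTERNET but no READ_CONTACTS: it only forwards what it finds."""
    while not channel.empty():
        entry = channel.get()
        # A real sample would POST this to its server; here we just print it.
        print(f"exfiltrating: {entry}")

contacts_app(shared_channel)
network_app(shared_channel)
```

Neither function on its own combines the two capabilities; only their cooperation over the shared channel does.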

So the basic idea is this: viewed individually, each app will not be considered malicious, but viewed as part of the group of apps, it will be.

That is an achievement for a malware author, as it provides an excellent cloaking mechanism for their malicious operations against primitive, per-app scanning.
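The blind spot can be sketched as a toy permission scanner. Assume a hypothetical rule that flags any app holding both READ_CONTACTS and INTERNET: it passes each colluding app in isolation, while a group view over the communicating pair does trigger it. The app names and the rule here are illustrative.

```python
# Hypothetical permission sets for the two colluding apps.
apps = {
    "contacts_reader": {"READ_CONTACTS"},
    "net_uploader": {"INTERNET"},
}
SUSPICIOUS = {"READ_CONTACTS", "INTERNET"}

# Primitive per-app scan: neither app alone holds the full suspicious set.
for name, perms in apps.items():
    print(name, "malicious?", SUSPICIOUS <= perms)   # False, False

# Group view: the union over the communicating pair does hold it.
combined = set().union(*apps.values())
print("pair malicious?", SUSPICIOUS <= combined)     # True
```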

1957 — Back to the future — Asimov’s Robot tetralogy

In the second novel of the Robot series (The Naked Sun), Asimov paints a picture of a malicious roboticist who uses what is essentially app collusion to sneak in a malicious activity that would otherwise have been rejected for violating the First Law of Robotics. As you know, the First Law states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

That means any direct order from the malicious roboticist to harm a human would be refused by a robot. So the roboticist deploys exactly what is now known as app collusion: he splits the malicious act of poisoning a person into two activities. First, he has one of his robots mix something into a glass of milk. Later, he orders a second robot to serve "a" glass of milk to the victim. Can you recall the central idea behind app collusion?

So the basic idea is this: viewed individually, each app will not be considered malicious, but viewed as part of the group of apps, it will be.

Similarly, if the two orders given to the two different robots were viewed as a single order, it would be malicious and would be refused. But executed as two separate orders by two different robots, it is carried out without objection. A robot would not question mixing something into a beverage as long as it is an isolated activity; had it known the glass would be served to a human, it would have asked additional questions about the nature of what was mixed in.

How to solve it?

Well, context is our friend. As discussed in some of my previous posts here and here, context plays a vital role in causal inference. Consider a closed Euler walk over a knowledge graph: it would readily detect collusion of this sort, much like Baley, the protagonist of the Asimovian universe, uses context-aware deduction to infer maliciousness.
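As a rough illustration of that idea rather than a definitive method, one could encode apps, permissions, and inter-app channels as a small knowledge graph and look for a walk that connects a sensitive source to a network sink through a shared channel. The graph schema, the node names, and the use of networkx below are assumptions made for the sketch.

```python
import networkx as nx

# Toy knowledge graph: nodes are data sources/sinks, apps, and channels.
G = nx.DiGraph()
G.add_edge("CONTACTS", "contacts_reader", relation="read_by")
G.add_edge("contacts_reader", "shared_channel", relation="writes_to")
G.add_edge("shared_channel", "net_uploader", relation="read_by")
G.add_edge("net_uploader", "INTERNET", relation="sends_to")

# Context-aware check: does sensitive data have a walk to the network
# that crosses more than one app? If so, flag the chain as collusion.
for path in nx.all_simple_paths(G, "CONTACTS", "INTERNET"):
    apps_on_path = {n for n in path if n in ("contacts_reader", "net_uploader")}
    if len(apps_on_path) > 1:
        print("possible collusion chain:", " -> ".join(path))
```

The point is the same one Baley relies on: no single node looks dangerous, but the walk through the graph, taken as a whole, does.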

I hope you have enjoyed reading this. See you in the next blog!
