On the effectiveness of random testing for Android: Or how I learned to stop worrying and love the Monkey

Document Type

Conference Proceeding

Publication Date

5-28-2018

Abstract

Random testing of Android apps is attractive due to its ease of use and scalability, but its effectiveness is open to question. Prior studies have shown that Monkey, a simple approach and tool for random testing of Android apps, is surprisingly effective, "beating" much more sophisticated tools by achieving higher coverage. We study how Monkey's parameters affect code coverage (at the class, method, block, and line levels) and set out to answer several research questions centered on improving the effectiveness of Monkey-based random testing for Android and on how it compares with manual exploration. First, we show that random stress testing via Monkey is extremely efficient (85 seconds on average) and effective at crashing apps, including 15 widely-used apps that have millions (or even billions) of installs. Second, we vary Monkey's event distribution to change app behavior and measure the resulting coverage; we find that, except for isolated cases, altering Monkey's default event distribution is unlikely to lead to higher coverage. Third, we manually explore 62 apps and compare the resulting coverage; we find that coverage achieved via manual exploration is just 2-3% higher than that achieved via Monkey exploration. Finally, our analysis shows that coarse-grained coverage is highly indicative of fine-grained coverage; hence coarse-grained coverage, which imposes low collection overhead, hits a sweet spot between performance and accuracy.
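
The kind of experiment the abstract describes (stress testing an app and varying Monkey's event distribution) uses Monkey's standard command-line flags (-p, -s, --throttle, -v, --pct-touch and the other --pct-* options). Below is a minimal Python sketch of such a run, not the authors' actual harness; it assumes a connected device with adb on the PATH, and the package name com.example.app, seed, and event counts are illustrative placeholders, not values from the paper.

import subprocess

def run_monkey(package, events=500, seed=42, throttle_ms=100, pct_touch=None):
    """Fire `events` pseudo-random UI events at `package`; return Monkey's log."""
    cmd = [
        "adb", "shell", "monkey",
        "-p", package,                   # restrict events to this app
        "-s", str(seed),                 # fixed seed for a reproducible event stream
        "--throttle", str(throttle_ms),  # delay between events, in milliseconds
        "-v",                            # verbose logging
    ]
    if pct_touch is not None:
        # Override Monkey's default event distribution; the other --pct-* flags
        # (e.g., --pct-motion, --pct-appswitch) adjust the remaining event types.
        cmd += ["--pct-touch", str(pct_touch)]
    cmd.append(str(events))
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    log = run_monkey("com.example.app", events=500, pct_touch=70)
    # Monkey prints a "// CRASH:" block when the app under test crashes.
    print("crashed" if "// CRASH" in log else "survived")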

Identifier

85051234181 (Scopus)

ISBN

978-1-4503-5743-2

Publication Title

Proceedings - International Conference on Software Engineering

External Full Text Location

https://doi.org/10.1145/3194733.3194742

ISSN

0270-5257

First Page

34

Last Page

37
