In the previous article about multi-platform mobile applications, we focused on the creation process. We explained hardware, system, and ideological differences. Of course, it’s not a comprehensive guide but rather a simplified overview. Its purpose was to make readers aware that mobile operating systems differ on many levels, and the most visible one—the logo on the smartphone case—is just the tip of the iceberg.
As one might guess, if an application is meant to function on multiple platforms, it should be tested on each of them.
The previous article also touched on the diversity of available programming languages. For multi-platform applications, the usual solution is to choose universal tools that let the program’s logic be shared across different operating systems. This is where the first challenge for testers appears.
An application might function on one platform but fail to install, let alone run, on another. If such an obvious error occurs, it’s merely a matter of reaching its source and fixing the faulty code. However, what happens when an application seemingly works, most of its functions can be triggered, and it returns results more or less smoothly? What if the program runs on a virtual machine but crashes on a real device? Is the issue related to communication between hardware and the application interface, between the interface and underlying processes, or perhaps the same engine behaves differently depending on the platform it’s launched on?
By 2015, over 24,000 distinct Android device models had already been produced. In 2023 alone, 16 smartphones and a tablet were announced under the Redmi brand. It’s worth noting that this is just one of the brands associated with Xiaomi: apart from these, four models were announced under the POCO label, along with eight smartphones and three tablets under the main Xiaomi brand.
Of course, a modern mobile application doesn’t need to support every known mobile device model. However, it’s essential to ensure that it functions properly at least on the most popular smartphones, so testers need to check the program independently on numerous devices. With the iPhone it’s somewhat easier: even if the application is expected to work correctly on models that haven’t received software updates for two years, the total number of configurations still amounts to 32. As a reminder, brands associated with Xiaomi announced 28 different smartphones in 2023 alone.
In reality, various smart software and design tricks help reduce the amount of work for testers. For instance, if a reference smartphone (such as a QRD, a Qualcomm Reference Design device) can handle the tested application, there’s no need to test every model with the same SoC. If a weaker processor manages the program, it’s reasonable to assume that a more powerful unit based on the same microarchitecture won’t encounter issues.
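The reduction described above can be sketched as a simple grouping exercise: keep only the weakest device per SoC and drop the rest from the matrix. The device names, SoC groupings, and performance scores below are purely illustrative assumptions, not a real test plan.

```python
# Illustrative sketch: shrinking a device test matrix by SoC.
# All device data below is made up for the example.

from collections import defaultdict

DEVICES = [
    # (model, soc, relative_performance_score)
    ("Redmi Note 12", "Snapdragon 685", 40),
    ("POCO X5", "Snapdragon 695", 55),
    ("Xiaomi 13", "Snapdragon 8 Gen 2", 100),
    ("Xiaomi 13 Pro", "Snapdragon 8 Gen 2", 105),
    ("Redmi 12", "Helio G88", 30),
    ("Redmi Pad", "Helio G99", 45),
]

def reduced_test_matrix(devices):
    """Keep only the weakest device per SoC: if the app runs there,
    stronger units on the same chip are assumed to cope as well."""
    by_soc = defaultdict(list)
    for model, soc, score in devices:
        by_soc[soc].append((score, model))
    return sorted(min(models)[1] for models in by_soc.values())

print(reduced_test_matrix(DEVICES))
```

With the sample data, six devices collapse into five test targets; the gain grows quickly when many models share one chip, as they do in real product lines.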
Almost anyone who has ever glanced at the comments section below any article about smartphones on any tech blog has likely witnessed the eternal battle between iOS and Android fans. Needless to say, in the eyes of enthusiasts of both platforms, the system they use is flawless. Except, that’s not true.
An essential challenge in testing multi-platform applications is having a certain familiarity with the issues and limitations of different solutions. Such knowledge is crucial, as it allows testers to pay particular attention to subtle aspects of how an application functions. Take, for example, the handling of the BLIK payment system on Xiaomi smartphones with the MIUI overlay. Specific restrictions caused banking applications to close automatically when minimized, and since paying with BLIK requires switching between the merchant’s app or browser and the banking app, it was impossible to use this payment method for online purchases made from the smartphone.
The problem didn’t affect the entire market of devices operating on Android but only the hardware from one manufacturer. Unfortunately, such errors can go unnoticed if application testing isn’t carried out on a sufficiently large and diverse group of devices.
This directly stems from the points mentioned above. Universal code doesn’t solve the problems caused by differences between systems, such as varied hardware communication capabilities. While Android traditionally grants apps relatively broad access to hardware and shared storage, iOS apps operate within a strict sandbox. This means that during testing, special attention should be paid not only to what happens inside the app and how it interacts with the user, but also to how data is exchanged between the app and the operating system or hardware.
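The sandbox difference can be made concrete with a minimal sketch of platform-aware storage resolution in shared code. The platform names and paths below are simplified assumptions for illustration; real apps would use the respective platform APIs, and recent Android versions further constrain shared storage through scoped-storage rules.

```python
# Minimal sketch, assuming a simplified model of the two platforms:
# iOS apps may only write inside their sandbox container, while Android
# apps can (with permission) also reach shared external storage.

def writable_locations(platform: str, sandbox_dir: str) -> list:
    if platform == "ios":
        # Sandbox only: nothing outside the app container is writable.
        return [sandbox_dir]
    if platform == "android":
        # App-private directory plus shared storage (subject to runtime
        # permissions and scoped-storage rules on recent versions).
        return [sandbox_dir, "/storage/emulated/0/Download"]
    raise ValueError("unknown platform: " + platform)

print(writable_locations("ios", "/var/containers/AppSandbox"))
print(writable_locations("android", "/data/data/com.example.app/files"))
```

The practical consequence for testing: each branch must be verified separately, because behavior confirmed on one platform tells you nothing about the sandbox rules of the other.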
A good example illustrating the issue is audio-related software. Applications performing the same tasks and running equally fast may still hit a problem when communicating with the system’s audio engine. Its configuration depends on the hardware, the operating system, and decisions made individually by each smartphone manufacturer, so the same program may run more or less efficiently on hardware with similar specifications, depending on the logo on the back of the device. To some extent this can be mitigated: sometimes adjusting the code is enough, while other times it requires publishing specific instructions for users or even blocking installation on devices from certain manufacturers.
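One common code-side mitigation is gating a workaround on the device manufacturer. On Android the manufacturer string would come from `android.os.Build.MANUFACTURER`; in this hedged sketch it is passed in as a plain string so the logic stays runnable anywhere. The manufacturer list and the audio-buffer "workaround" are hypothetical examples, not documented fixes for any real device.

```python
# Hedged sketch: applying an audio workaround only on manufacturers whose
# audio stack (in this hypothetical scenario) glitches with small buffers.
# On a real Android app the string would be android.os.Build.MANUFACTURER.

AFFECTED_MANUFACTURERS = {"xiaomi", "redmi", "poco"}  # example list only

def audio_buffer_frames(manufacturer: str, default_frames: int = 256) -> int:
    """Return a larger, safer audio buffer on affected manufacturers;
    keep the low-latency default everywhere else."""
    if manufacturer.strip().lower() in AFFECTED_MANUFACTURERS:
        return default_frames * 4  # trade latency for stability
    return default_frames

print(audio_buffer_frames("Xiaomi"))  # larger buffer on affected hardware
print(audio_buffer_frames("Google"))  # low-latency default elsewhere
```

The same gating pattern covers the harsher options from the paragraph above: instead of changing a buffer size, the check could display manufacturer-specific instructions or refuse to run at all.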
So what is the greatest challenge in testing multi-platform applications? Is it the necessity of possessing extensive knowledge about the hardware of modern smartphones? Understanding precisely how different operating systems function? Undoubtedly, those matter. Nevertheless, it seems that the most significant challenge lies in the awareness of the fragmentation hidden beneath the guise of the iOS and Android duopoly, and in understanding that not all errors can be easily fixed.