
13 posts tagged with "android"


· 8 min read
Kevin Skrei

Lessons Learned From Logging

Intro

Writing software can be extremely complex. This complexity can sometimes make finding and resolving issues incredibly challenging. At Arccos Golf, we often run into these kinds of problems. One of the unique things about golf is that a round of golf is like a snowflake or a fingerprint: no two are alike. A golfer could play the same course every day without ever replicating a round identically.

Thus, trying to write software rules about the golf domain inevitably leads to bugs. And since no two rounds of golf are the same, reproducing an issue a user encounters on the golf course is nearly impossible. So, what have we done to track down some of these issues? You guessed it: logging.

Logging has proven to be an indispensable tool in my workflow, especially when developing new features. This article walks through the key questions that shape a successful logging strategy, along with some considerations around performance. It concludes with a few case studies from Arccos Golf that demonstrate how logging has been instrumental in resolving real-world bugs.


Figure 1: The Arccos app showing several shot detection modes available (Phone & Link)

Should you log?

When trying to track down a bug or build a new feature, and you're considering logging, the first question to ask yourself is, “Is logging the right choice?”. There are many things to consider when deciding whether to add logging to a particular feature. Some of those considerations are:

  1. Do I have any other tools at my disposal that could be useful if this feature fails? This could be an analytics platform that tracks screen views or captures metadata in some other way besides traditional logging.
  2. Will adding logging harm the user in any way? This includes privacy, security, and performance.
  3. How will I actually get or view these logs if something goes wrong?

What to log?

Logging should be strategic, focusing on areas that yield the most significant insights.

  1. Identifying Critical Workflows: Determine which parts of your app are crucial for both users and your business. For instance, in a finance app, logging transaction processes is key.
  2. Focusing on Error-Prone Areas: Analyze past incidents and pinpoint sections of your app that are more susceptible to errors. For example, areas with complex database interactions or integrations with 3rd party SDKs might require more intensive logging.

What About Performance?

One of the primary challenges with logging is its impact on performance, a concern that becomes more pronounced when dealing with extensive string creation. To mitigate this, consider the following tips:

  1. Method Calls in Logs: Be wary of incorporating method calls within your log statements. These methods can be opaque, masking the complexity or time-consuming operations they perform internally.
  2. Log Sparingly: Practice judicious logging. Over-logging, particularly in loops, can severely degrade performance. Log only what is essential for debugging or monitoring.
  3. Asynchronous Logging: If your logging involves file operations or third-party libraries, always ensure that these tasks are executed on a background thread, thus preserving the main thread's responsiveness and application performance.

Implementing these strategies will help you strike a balance between obtaining valuable insights from logs and maintaining optimal application performance. I have found that you develop an intuition about what to log the more you practice and learn about the intricacies of your system.
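
To make the first tip concrete, here is a minimal sketch of lambda-based logging on Android. The logDebug wrapper and buildFullCartSnapshot() are hypothetical names used only for illustration; the point is that the expensive call runs only when the log level is actually enabled.

    import android.util.Log

    // Eager: the (potentially expensive) call runs even when DEBUG logging is disabled.
    // Log.d(TAG, "Cart state: ${buildFullCartSnapshot()}")

    // Lazy: the message lambda is evaluated only if the level is enabled.
    inline fun logDebug(tag: String, message: () -> String) {
        if (Log.isLoggable(tag, Log.DEBUG)) {
            Log.d(tag, message())
        }
    }

    // Usage:
    // logDebug(TAG) { "Cart state: ${buildFullCartSnapshot()}" }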

How Do I Access The Logs?

The most straightforward way to access your application's logs is to use a third-party tool like Shipbook, which offers the convenience of remote, real-time access to your logs.

Finally, I wanted to showcase a few stories illustrating how logging has helped us solve real-world production issues, along with some lessons learned about logging performance.

The 15-Minute Mystery

Our Android mobile app faced an intriguing issue. We noticed conflicting user feedback reports: one showed normal satisfaction, while another indicated a significant drop. The key difference? The latter report included golf rounds shorter than 15 minutes.

Upon investigating these brief rounds, we found that their feedback was much lower than usual. But why? There were no clear patterns related to device type or OS.

The trail of breadcrumbs started when we examined user comments on their rounds, many of which mentioned, "No shots were detected." Diving into the logs of these short rounds, a pattern quickly emerged. We repeatedly saw this line in the logs:

    [2023-12-01 14:20:09.322] DEBUG: Shot detected with ID: XXX but no user location was found at given shot time

This means we detected that a user took a golf shot, but we didn't know where they were on the course, so we couldn't place the shot at a particular location. This was unusual because we had seen log lines like this in our location provider, which requests the phone's GPS location:

    [2023-12-01 14:20:08.983] VERBOSE: Received GPS location from system with valid coordinates

So, we were clearly receiving location updates at regular intervals, but we couldn't associate them with the moment a shot was taken. After some further analysis, we discovered this line:

    [2023-12-01 14:20:09.321] VERBOSE: Attempting to locate location for timestamp XXX but requested source: “link” history is empty. Current counts: [“phone”:60, “link”:0]

We have a layer above our location providers that serves locations depending on which shot detection mode the user selected (either their Phone or their external hardware device, “Link”). It was attempting to find a location for “Link” even though all of these rounds should have been in Phone shot detection mode. Finally, we located this log line:

    [2023-12-01 14:14:33.455] DEBUG: Starting new round with ID: XXX and shot detection mode: Link … { metadata: { “linkConnected”: false, linkFirmwareVersion: null }... }

Once we analyzed this log line, the problem became immediately obvious: the app was starting the round with the incorrect shot detection mode. Some rounds were started in Link shot detection mode even though Phone was selected in the UI (Figure 2).


Figure 2: The Arccos app showing a round of golf being played and tracked

We eventually identified the issue: it was caused by changes in our upgrade-path code that affected users with certain firmware versions and prior generations of our Link product. Thankfully, this build was early in its incremental rollout and we were able to patch it quickly.

This experience highlighted the crucial role of widespread, effective logging in mobile app development. It allowed us to quickly identify and fix the issue, reinforcing the importance of comprehensive testing and attentive log analysis.

When Too Much Detail Backfires

Dealing with hardware is especially difficult because you can rarely get information off the hardware device easily. We often rely on verbose logging during development to diagnose communication issues between hardware and software. This approach seemed foolproof as we added a new feature to our app, implementing detailed logging that captured every byte of data exchanged with the hardware of our new Link Pro product. In the controlled environment of our office, everything functioned seamlessly in our iOS app.

While testing on the course, our app faced an unforeseen adversary: it began to get killed by the operating system. The culprit? Excessive CPU usage. Our iOS engineer, armed with profiling tools, discovered a significant CPU spike during data sync with the external device. Our initial assumption was straightforward: perhaps we were syncing too much data too quickly.

To test this theory, we modified the app to sync data less aggressively. This change did reduce app terminations, but it was a compromise, not a solution. We wanted to offer our users a real-time experience without interruptions. Digging deeper into the profiling data, we uncovered the true source of our problem. It wasn't the Bluetooth communication overloading the CPU; it was our own verbose logging.

The moment we disabled this extensive logging, the CPU usage dropped dramatically, bringing it back to acceptable levels. This incident was a stark reminder of how even well-intentioned features, like detailed logging, can have unintended consequences on app performance. We decided to use a remote feature flag, paired with a developer setting, so we could toggle detailed verbose logging of the complete data transfer only when necessary.
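
As a rough sketch of that idea (the object, flag names, and tag below are hypothetical, not our production code), the byte-level logging can be gated behind both a remote flag and a local developer setting:

    import android.util.Log

    object TransferLogger {
        // Wire these to your remote-config provider and a hidden developer setting.
        var remoteVerboseEnabled: Boolean = false
        var developerVerboseEnabled: Boolean = false

        fun logPacket(bytes: ByteArray) {
            // Only hex-dump the packet when both flags are on; the string is never built otherwise.
            if (remoteVerboseEnabled && developerVerboseEnabled) {
                Log.v("LinkTransfer", bytes.joinToString(" ") { "%02x".format(it) })
            }
        }
    }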

Through this experience, we learned a valuable lesson: the importance of balancing the need for detailed information with the impact on app performance. In the world of mobile app development, sometimes less is more. This insight not only helped us optimize our Link Pro product but also shaped our approach to future feature development, ensuring that we maintain the delicate balance between functionality and efficiency.

Afterword

In conclusion, our experiences at Arccos Golf have demonstrated the invaluable role of logging in software development. Through it, we’ve successfully navigated the complexities of writing golf software, turning unpredictable challenges into opportunities for improvement. Tools like Shipbook have been instrumental in this journey, offering the ease and flexibility for effective log management. I hope I’ve illustrated that logging is more than just a troubleshooting tool; it's a crucial aspect of understanding and enhancing application performance and user experience.

· 10 min read
Petros Efthymiou

Android Performance Optimization Series - Memory RAM

Introduction

In our previous article, we explored the fundamentals of Android performance optimization, focusing on CPU and battery. This second article delves deeper into the crucial aspect of RAM optimization, examining strategies for profiling and managing memory usage effectively to enhance your app's performance and user experience.

By implementing the practical techniques presented here, you can ensure your app utilizes system resources efficiently, delivering a smooth, responsive experience for your users.

RAM (Random Access Memory) is the primary memory of an Android device, acting as a temporary workspace for storing data actively used by applications.

Why RAM Optimization Matters

RAM optimization is essential for several reasons:

  1. Improved Performance:

    RAM is the primary workspace for active app data, and efficient RAM management ensures that your app doesn't consume excessive resources. This leads to several benefits:

    • Increased responsiveness and fewer ANRs: If the device runs out of memory, the application may become unresponsive and appear stuck. The OS may, at that point, choose to forcefully free some memory, but the UX is already jeopardized.
    • Reduced Scrolling Lag: Efficient RAM usage prevents bottlenecks that can cause scrolling to become sluggish or unresponsive, enhancing the overall user experience.
    • Smoother Animations and User Interface: RAM optimization allows your app to render animations and transitions smoothly, ensuring a responsive and engaging user experience.
  2. Reduced Crashes:

    Memory leaks occur when memory that is no longer needed remains allocated, leading to performance degradation and potential crashes. By memory leaks, we mean objects the app no longer uses but that the JVM garbage collector cannot release, because a reference to them is still held somewhere in our code.

    An example is launching a coroutine to fetch data for a screen without using the ViewModel scope. If you navigate away from that screen and it gets destroyed, the coroutine keeps running because it isn't tied to the lifecycle of that screen's ViewModel.

    By implementing proper memory management practices, you can prevent these leaks and maintain system stability.

  3. Extended Battery Life:

    When apps consume excessive RAM, the system needs to constantly reload data from storage, which can drain the battery. RAM optimization helps conserve battery life:

    • Reduced Memory Thrashing: Efficient memory management minimizes the need for frequent garbage collection, which can impact battery performance.
    • Lower Background Activity: By using resources efficiently, your app reduces the need for background activities that consume battery power. By Background Activity, we refer to any kind of asynchronous data retrieval or processing that is not directly related to the current user action.
    • Optimized Data Storage: Use data compression and caching techniques to reduce the amount of data stored in RAM, minimizing battery consumption.

    By prioritizing RAM optimization, you can create a high-performing app that not only delivers a smooth user experience but also extends battery life and contributes to a more efficient overall system experience for your users.

Memory Profiling: Unveiling Memory Usage Patterns

Effective RAM optimization requires a deep understanding of how your app utilizes memory. Memory profiling tools provide valuable insights into memory usage patterns, enabling you to identify potential bottlenecks and optimize memory allocation.

Android Studio's built-in Memory Profiler is a powerful tool for analyzing your app's memory footprint. It allows you to monitor memory usage over time, identify memory allocation spikes, and track the lifecycle of objects. By analyzing heap dumps, you can pinpoint memory leaks and understand which objects are consuming excessive memory.

How to profile memory usage in Android

Realtime Memory Tracking

Monitor the app's memory usage in real time to identify spikes and trends. To profile RAM usage in Android, open your project in Android Studio, search for “profiler” in the search bar, and click the respective option.

profiler

Now the Android profiler is attached to your running application. You can see it in the bottom pane of Android Studio. The initial view captures the CPU (top) and the memory (bottom) usage.

cpu and memory

You can see the CPU and memory usage over time (bottom), broken down by Activity. In our case, we first opened a LoginActivity that consumed certain resources, and then, after the login at 00:47, we switched to the MainActivity. We had a spike in CPU usage at the moment of transition, but the RAM usage remained stable. Also, as you can see, the current state of the LoginActivity is “stopped - saved” while the MainActivity is active.

For more on CPU usage, you can refer to the previous article in the series. Since this article focuses on RAM, let's switch to the dedicated memory view and take the CPU out of the tracked metrics. To do this, click on System Trace.

system trace

And on the top right, click on the “MEMORY” tab.

memory

Now you can see a detailed view of the memory consumption per category:

memory detail

Again, you can track the transition of the Activities on the top, but now we get a more detailed RAM graphic that indicates where the RAM is being used. We get the total memory consumption, which is 152 MB, and then we can see that:

  • Java and the JVM are consuming 19.2 MB
  • Native: 34.6 MB (this refers to C/C++ objects)
  • Android Graphics: 0
  • The stack: 1.1 MB
  • Code execution: 66.3 MB
  • Others: 30.7 MB

Two more helpful things to note:

  1. If you look at the top, you can see some pink dots. These represent user taps in the application. The prolonged ones indicate long presses or scrolling through a list. In my case, I was scrolling through a list, which is why you can see spikes in memory usage at those time frames. Scrolling through extensive lists is memory-intensive.
  2. The line at the top that represents the activity lifecycle contains some gray spots. Those represent switching between different fragments. Depending on how much memory each Fragment consumes, you may notice memory spikes at those time frames as well.

Heap Dump Analysis

Besides real-time memory profiling, you can capture heap dumps at different points in the app's lifecycle to analyze the allocation and retention of objects. Identify objects that remain allocated even when no longer needed, indicating potential memory leaks.

In order to do this, you can select the “Capture heap dump” option and click “Record”.

This will capture a snapshot of the heap and all the active objects that consume memory. What normally helps me navigate the heap dump is to click “Arrange by package” and then expand the package name of my application to see which of the objects I control consume the most memory.

heap dump

In this view, you can see how much memory each package is using per memory category, and if you expand the packages, you will see the detailed memory consumption per object. You can play around with this tool to find the view that best helps you understand where your memory is consumed.

The heap dump, as we explained, is a snapshot of the app that contains all the information about how memory is currently consumed. You also have the option to record either native (C/C++) or Java/Kotlin allocations over time by using the options below.

allocation options

Personally, I use the real-time memory tracking to get an idea about how my apps consume memory over time or the Heap Dump when I need very detailed information about the current memory usage per package and class.

Leak Canary

Another helpful tool for catching memory leaks in an Android app is the LeakCanary library. We can integrate it very easily by adding the respective dependency to our app's build.gradle.

dependencies {
    // debugImplementation because LeakCanary should only run in debug builds.
    debugImplementation 'com.squareup.leakcanary:leakcanary-android:3.0-alpha-1'
}

No further code is needed. Now, when the library detects a memory leak, it will show a notification and capture a heap dump to help us see what leaked and what caused it.

Leak Canary

I strongly recommend using Leak Canary in your app.

Memory Optimization Techniques

Effective RAM optimization involves a combination of measures and strategies.

  1. Avoid memory leaks with coroutines' structured concurrency. In the previous section, we explained how to detect memory leaks. Let's now see how to avoid them. Most memory leaks are caused by background work that is no longer required but still referenced. The most effective way to prevent this is by using coroutines' structured concurrency (see the sketch after this list).
    Make sure to replace all background-work mechanisms, such as AsyncTask, RxKotlin, etc., with coroutines, and tie the work to the appropriate coroutine scope. When the work is related to a screen, tie it to its ViewModel's lifecycle by using the ViewModel scope. This way, the work will be canceled when the ViewModel is destroyed. Avoid using GlobalScope, and if you do, make sure you cancel the work when it's no longer needed.

  2. Build efficient lazy loading lists with Jetpack Compose lazy column or view holder pattern. Extensive lists consume a lot of memory, especially if you load all the items at once. Currently, the most memory-efficient list mechanism is the Jetpack Compose Lazy Column; for more info, please refer to our respective article. The second most efficient way is the recycler view combined with the view holder pattern. The lazy loading technique can be extended to more objects besides lists.

  3. Minimize Unused Resources: Carefully manage the resources your app consumes, particularly images and background services. Use appropriate image formats, such as WebP or PNG, and optimize image dimensions to reduce file size.

  4. Optimize Animation Usage: Animations can be resource-intensive. Use animations sparingly and optimize them for efficiency to minimize memory usage.

  5. Utilize Dependency Injection Frameworks: Dependency injection frameworks like Hilt or Dagger 2 can help manage and reuse objects efficiently, reducing memory usage. Through their scope mechanisms, these frameworks provide an easy way to allow only a single instance of an object, which avoids filling memory with unnecessary objects.

  6. Be Mindful of External Libraries: Carefully select and use external libraries. Some libraries may introduce unnecessary resource overhead.

By implementing these memory optimization techniques, you can ensure your Android app consistently delivers a smooth, responsive user experience while utilizing system resources efficiently.
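
Here is the sketch referenced in the first technique above: a minimal, hypothetical example of tying background work to a ViewModel's lifecycle with viewModelScope (ProfileRepository and its fetch call are illustrative names, not a real API).

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.launch

// Illustrative repository abstraction; any suspending data source works the same way.
interface ProfileRepository {
    suspend fun fetchProfile(): String
}

class ProfileViewModel(private val repository: ProfileRepository) : ViewModel() {

    fun loadProfile() {
        // Launched in viewModelScope: cancelled automatically when the ViewModel
        // is cleared, so the coroutine cannot outlive the screen and leak it.
        viewModelScope.launch {
            val profile = repository.fetchProfile()
            // ... update UI state with the result, e.g. expose it via StateFlow
        }
    }
}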

Conclusion

In this second article of the series, we dove deep into RAM optimization. We first saw how to profile memory usage and detect memory leaks, and then we discussed optimization techniques.

Effective RAM optimization is a crucial aspect of developing high-performing Android apps. By implementing the strategies discussed in this article, you can significantly enhance your app's memory management, reducing memory leaks, improving performance, and extending battery life. Shipbook’s remote logging capabilities are also a helpful tool to track down issues.

Remember, continuous monitoring and optimization are essential for maintaining a top-notch user experience.

· 16 min read
Boris Nikolov

 Kotlin Multiplatform Mobile including Android and iOS

Introduction to Kotlin Multiplatform Mobile

Understanding Kotlin Multiplatform Mobile

What is KMM?

Kotlin Multiplatform Mobile is an extension of the Kotlin programming language that enables the sharing of code between different platforms, including Android and iOS. Unlike traditional cross-platform frameworks that rely on a common runtime, KMM allows developers to write platform-specific code while sharing business logic and other non-UI code.

Key Advantages of KMM

  1. Code Reusability: With KMM, you can write and maintain a single codebase for your business logic, reducing duplication and ensuring consistency across platforms.
  2. Native Performance: KMM leverages the native capabilities of each platform, providing performance comparable to writing platform-specific code. Your KMM code is compiled to platform-specific code before it runs on a device, so users ultimately get native-level performance.
  3. Interoperability: KMM seamlessly integrates with existing codebases and libraries, allowing developers to leverage platform-specific features when needed.
  4. Incremental Adoption: You can introduce KMM gradually into your projects, starting with shared modules and gradually expanding as needed.

KMM vs. Flutter

While KMM and Flutter do have a lot in common in terms of functionality and end result, they have very different approaches to reaching it:

  1. Programming language - KMM uses Kotlin, a language known for its conciseness, safety features, and strong null-safety. Flutter, on the other hand, uses Dart, a language developed by Google and specifically targeted at building UIs through a reactive programming model.
  2. Architecture - KMM focuses on sharing business logic between platforms and encourages a modular architecture that combines shared core business-logic modules with platform-specific UI implementations. Flutter embraces a reactive, declarative UI framework with a widget-based architecture. The entire UI in Flutter is expressed as a hierarchy of widgets, without a clear separation between business logic and UI.
  3. UI Framework - KMM doesn’t have a UI framework of its own, but rather leverages native UI frameworks like Jetpack Compose for Android and SwiftUI for iOS. Flutter proposes a custom UI framework that is equipped with a rich set of customisable widgets. The UI is rendered via the Skia graphics engine which is aimed at delivering a consistent look and feel across all supported platforms.
  4. Community and ecosystem - KMM is actively developed by JetBrains and has been gaining a lot of traction since inception by drawing many benefits from the Kotlin community. Flutter is maintained by Google and has a large and active community. It’s constantly growing its ecosystem of packages and plugins.
  5. Integration with native code - KMM seamlessly integrates with native codebases making its adoption effortless. Flutter relies on a platform channel mechanism to communicate with native code. It can invoke platform-specific functionality, but requires additional setup.
  6. Performance - Kotlin compiles to native code, providing near-native performance. Flutter uses a custom rendering engine (Skia) and introduces an additional layer between the app and the platform, potentially affecting performance in graphic-intensive applications.
  7. Platform support - KMM currently supports Android and iOS devices with planned support for other platforms in the future. Flutter has a broader range of supported platforms including Android, iOS, web, desktop (yet in experimental stage) and embedded devices.

The choice between KMM and Flutter remains largely subjective, depending on language and architecture preferences, specific project requirements, and, of course, personal choice.

Creating a New KMM Project

Creating a new KMM project is a straightforward process:

  1. Open Android Studio:
    • Select "Create New Project."
    • Choose the "Kotlin Multiplatform App" template.
  2. Configure Project Settings:
    • Provide a project name, package name, and choose a location for your project.
  3. Configure Platforms:
    • Choose names for the platform-specific and shared modules (Android, iOS and shared).
    • Configure the Kotlin version for each platform module.
  4. Finish:
    • Click "Finish" to let Android Studio set up your KMM project.

If you don’t see the “Kotlin Multiplatform App” template then open Settings > Plugins, type “Kotlin Multiplatform Mobile”, install the plugin and restart your IDE.

Kotlin Multiplatform Mobile plugin in the IDE

Project Structure and Organization

Understanding the structure of a KMM project is crucial for efficient development:

MyKMMApp
|-- shared
|   |-- src
|       |-- commonMain
|       |-- androidMain
|       |-- iosMain
|-- androidApp
|-- iosApp
  • shared: Contains code shared between Android and iOS.
  • commonMain: Shared code that can be used on both platforms.
  • androidMain: Platform-specific code for Android.
  • iosMain: Platform-specific code for iOS.
  • androidApp: Android-specific module containing code and resources specific to the Android platform.
  • iosApp: iOS-specific module containing code and resources specific to the iOS platform.

Shared Code Basics: Writing Platform-Agnostic Logic

Now that you have your Kotlin Multiplatform Mobile (KMM) project set up, it's time to dive into the heart of KMM development—writing shared code. In this chapter, we'll explore the fundamentals of creating platform-agnostic logic that can be used seamlessly across Android and iOS.

Identifying Common Code Components

The essence of KMM lies in identifying and isolating the components of your code that can be shared between platforms. Common code components typically include:

  • Business Logic: The core functionality of your application that is independent of the user interface or platform.
  • Data Models: Definitions for your application's data structures that remain consistent across platforms.
  • Utilities: Helper functions and utilities that don't rely on platform-specific APIs.

Identifying these shared components sets the foundation for maximizing code reuse and maintaining a consistent behavior across different platforms.

Writing Business Logic in Shared Modules

In your KMM project, the commonMain module is where you'll write the majority of your shared code. Here's a simple example illustrating a shared class with business logic:

// shared/src/commonMain/kotlin/com.example.mykmmapp/Calculator.kt

package com.example.mykmmapp

class Calculator {
    fun add(a: Int, b: Int): Int {
        return a + b
    }

    fun multiply(a: Int, b: Int): Int {
        return a * b
    }
}

In this example, the Calculator class provides basic mathematical operations and can be used across both Android and iOS platforms.
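
For context, here is a hypothetical usage sketch showing the shared Calculator being called from the androidApp module (the Activity and file path below are illustrative, not part of the template):

// androidApp/src/main/java/com/example/mykmmapp/android/MainActivity.kt

package com.example.mykmmapp.android

import android.os.Bundle
import android.util.Log
import androidx.appcompat.app.AppCompatActivity
import com.example.mykmmapp.Calculator

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Shared business logic, written once in commonMain and reused here.
        val sum = Calculator().add(3, 4)
        Log.d("MyKMMApp", "3 + 4 = $sum")
    }
}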

Ensuring Platform Independence

While writing shared code, it's crucial to avoid dependencies on platform-specific APIs. Instead, use Kotlin's expect/actual mechanism to provide platform-specific implementations where necessary.

Here's an example illustrating the use of expect/actual for platform-specific logging. To stay consistent across platforms, it's recommended to use the same logging provider on both, for example Shipbook's logger, which provides the required dependencies for both platforms. For the sake of simplicity, the example below uses the native logger of each platform.

Code in shared module:

// shared/src/commonMain/kotlin/com.example.mykmmapp/Logger.kt

package com.example.mykmmapp

expect class Logger() {
    fun log(message: String)
}

Code in Android’s module:

// shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidLogger.kt

package com.example.mykmmapp

actual class Logger actual constructor() {
    actual fun log(message: String) {
        android.util.Log.d("MyKMMApp", message)
    }
}

Code in iOS’s module:

// shared/src/iosMain/kotlin/com.example.mykmmapp/IOSLogger.kt

package com.example.mykmmapp

import platform.Foundation.NSLog

actual class Logger actual constructor() {
    actual fun log(message: String) {
        NSLog("MyKMMApp: %@", message)
    }
}

By employing expect/actual declarations, you ensure that the shared code can utilize platform-specific features without compromising the platform independence of the core logic.

Platform-Specific Code: Adapting for Android

Now that you've laid the groundwork with shared code, it's time to explore the intricacies of adapting your Kotlin Multiplatform Mobile (KMM) project for the Android platform.

Leveraging Platform-Specific APIs

One of the advantages of KMM is the ability to seamlessly integrate with platform-specific APIs. In Android development, you can use the Android-specific APIs in the androidMain module. Here's an example of using the Android Toast API:

// shared/src/androidMain/kotlin/com.example.mykmmapp/Toaster.kt

package com.example.mykmmapp

import android.content.Context
import android.widget.Toast

// A plain Android-only class in androidMain; no expect declaration is needed
// because it is consumed only from Android code.
class Toaster(private val context: Context) {
    fun showToast(message: String) {
        Toast.makeText(context, message, Toast.LENGTH_SHORT).show()
    }
}

In this example, the Toaster class is designed to display Toast messages on Android. The class takes an Android Context as a parameter, allowing it to interact with Android-specific features.

Managing Platform-Specific Dependencies

When working with platform-specific code, it's common to have dependencies that are specific to each platform. KMM provides a mechanism to manage platform-specific dependencies using the expect and actual declarations. For example, if you need a platform-specific library for Android, you can declare the expected behavior in the shared module and provide the actual implementation in the Android module.

Here is a shared class and function intended to fetch data from an online source making a HTTP request:

// shared/src/commonMain/kotlin/com.example.mykmmapp/NetworkClient.kt

package com.example.mykmmapp

expect class NetworkClient() {
    suspend fun fetchData(): String
}

Android-specific implementation:

//shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidNetworkClient.kt

package com.example.mykmmapp

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import okhttp3.OkHttpClient
import okhttp3.Request

actual class NetworkClient actual constructor() {
    private val client = OkHttpClient()

    actual suspend fun fetchData(): String = withContext(Dispatchers.IO) {
        // OkHttp's execute() is blocking, so run it on the IO dispatcher.
        val request = Request.Builder()
            .url("https://api.example.com/data")
            .build()

        val response = client.newCall(request).execute()
        response.body?.string() ?: "Error fetching data"
    }
}

In this example, the NetworkClient class is declared with expect in the shared module, and the Android-specific implementation is provided in the androidMain module using the OkHttp library.

Building UI with Kotlin Multiplatform

User interfaces play a pivotal role in mobile applications, and with Kotlin Multiplatform Mobile (KMM), you can create shared UI components that work seamlessly across Android and iOS. In this chapter, we'll explore the basics of building UI with KMM, creating shared UI components, and handling platform-specific UI differences.

Overview of KMM UI Capabilities

KMM provides a unified approach to UI development, allowing you to share code for common UI elements while accommodating platform-specific nuances. The shared UI code resides in the “commonMain” module, and platform-specific adaptations are made in the “androidMain” and “iosMain” modules. A more convenient but more advanced approach to designing shared components is to use a multiplatform composition tool such as JetBrains' Compose Multiplatform. While still young, it already provides a powerful way to write UI logic that is reusable across many platforms:

  • Android (including Jetpack Compose, hence the name “Compose Multiplatform”)
  • iOS (currently in Alpha, and unfortunately without support for SwiftUI)
  • Desktop (Windows, Mac, and Linux)
  • Web (still in the experimental stage)

Creating Shared UI Components

Let's consider a simple example of creating a shared button component:

// shared/src/commonMain/kotlin/com.example.mykmmapp/Button.kt

package com.example.mykmmapp

expect class Button(text: String) {
    fun render(): Any
}

In this example, the Button class is declared with expect in the shared module, and the actual rendering implementation is provided in the platform-specific modules.

Android Implementation

// shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidButton.kt

package com.example.mykmmapp

// Alias avoids a name clash between android.widget.Button and our actual class Button.
import android.widget.Button as AndroidWidgetButton

actual class Button actual constructor(private val text: String) {
    actual fun render(): Any {
        // AndroidContext.appContext is a hypothetical holder for the application Context.
        val button = AndroidWidgetButton(AndroidContext.appContext)
        button.text = text
        return button
    }
}

iOS Implementation

// shared/src/iosMain/kotlin/com.example.mykmmapp/IOSButton.kt

package com.example.mykmmapp

import platform.UIKit.UIButton
import platform.UIKit.UIControlStateNormal

actual class Button actual constructor(private val text: String) {
    actual fun render(): Any {
        val button = UIButton()
        button.setTitle(text, UIControlStateNormal)
        return button
    }
}

In these platform-specific implementations, we use Android's “Button” and iOS's “UIButton” to render the button with the specified text.

Storing Platform-Specific Resources

To manage platform-specific resources such as layouts or styles, you can utilize the “androidMain/res” and “iosMain/resources” directories. This allows you to tailor the UI experience for each platform without duplicating code.

Interoperability: Bridging the Gap Between Kotlin and Native Code

Kotlin Multiplatform Mobile (KMM) doesn't exist in isolation; it seamlessly integrates with native code on each platform, allowing you to leverage platform-specific libraries and functionalities. In this chapter, we'll explore the intricacies of interoperability, incorporating platform-specific libraries, communicating between shared and platform-specific code, and addressing data serialization/deserialization challenges.

Incorporating Platform-Specific Libraries

One of the strengths of KMM is its ability to integrate with existing platform-specific libraries. This allows you to leverage the rich ecosystems of Android and iOS while maintaining a shared codebase. Let's consider an example where we integrate an Android-specific library for image loading.

Shared Code Interface

// shared/src/commonMain/kotlin/com.example.mykmmapp/ImageLoader.kt

package com.example.mykmmapp

expect class ImageLoader() {
    fun loadImage(url: String): Any
}

Android Implementation

// shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidImageLoader.kt

package com.example.mykmmapp

import android.widget.ImageView
import com.bumptech.glide.Glide

actual class ImageLoader actual constructor() {
    actual fun loadImage(url: String): Any {
        // AndroidContext.appContext is a hypothetical holder for the application Context.
        val imageView = ImageView(AndroidContext.appContext)
        Glide.with(AndroidContext.appContext).load(url).into(imageView)
        return imageView
    }
}

In this example, we've integrated the popular Glide library on Android to load images. The ImageLoader class is declared with expect in the shared module, and the actual implementation uses Glide in the Android-specific module.

Communicating Between Shared and Platform-Specific Code

Effective communication between shared and platform-specific code is crucial for building cohesive applications. KMM provides mechanisms for achieving this, including the use of interfaces, callbacks, and delegation.

Callbacks and Delegation

// shared/src/commonMain/kotlin/com.example.mykmmapp/CallbackListener.kt

package com.example.mykmmapp

interface CallbackListener {
    fun onResult(data: String)
}

Usage in Android-specific module

//shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidCallbackHandler.kt

package com.example.mykmmapp

// A plain class in androidMain; it only uses the shared CallbackListener interface,
// so no expect/actual declaration is required.
class AndroidCallbackHandler {
    private var callback: CallbackListener? = null

    fun setCallback(callback: CallbackListener) {
        this.callback = callback
    }

    fun performCallback(data: String) {
        callback?.onResult(data)
    }
}

In this example, the “AndroidCallbackHandler” class in the Android-specific module utilizes the shared callback interface and acts as an intermediary for callback communication between shared code and Android-specific code.

Handling Data Serialization/Deserialization

When dealing with shared data models, KMM provides tools for efficient data serialization and deserialization. The “kotlinx.serialization” library simplifies the process of converting objects to and from JSON, facilitating seamless communication between shared and platform-specific code.

Add Serialization Dependency

Ensure that your shared module has the kotlinx.serialization dependency added to its “build.gradle.kts” or “build.gradle” file:

// Note: the kotlinx.serialization Gradle plugin must also be applied to the module.
commonMain {
    dependencies {
        implementation "org.jetbrains.kotlinx:kotlinx-serialization-json:1.3.0"
    }
}

Define Serializable Data Class:

Create a data class that represents the structure of your serialized data. Annotate it with “@Serializable”:

// shared/src/commonMain/kotlin/com.example.mykmmapp/User.kt

package com.example.mykmmapp

import kotlinx.serialization.Serializable

@Serializable
data class User(val id: Int, val name: String, val email: String)

Serialize Data to JSON:

Use the “Json.encodeToString” function to serialize an object to JSON:

// shared/src/commonMain/kotlin/com.example.mykmmapp/UserService.kt

package com.example.mykmmapp

import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

class UserService {
    fun getUserJson(user: User): String {
        return Json.encodeToString(user)
    }
}

Deserialize JSON to Object:

Use the “Json.decodeFromString” function to deserialize JSON to an object:

// shared/src/commonMain/kotlin/com.example.mykmmapp/UserService.kt

package com.example.mykmmapp

import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

class UserService {
    fun getUserFromJson(json: String): User {
        return Json.decodeFromString(json)
    }
}

Debugging and Testing in a Kotlin Multiplatform Project

Debugging and testing are critical aspects of the software development lifecycle, ensuring the reliability and quality of your Kotlin Multiplatform Mobile (KMM) project. In this chapter, we'll explore strategies for debugging shared code, writing tests for shared and platform-specific code, and running tests on Android.

Writing Tests for Shared Code

Testing shared code is crucial for ensuring its correctness and reliability. KMM supports writing tests that can be executed on both Android and iOS platforms. The “kotlin.test” framework is commonly used for writing tests in the shared module.

Sample Test in the Shared Module

// shared/src/commonTest/kotlin/com.example.mykmmapp/CalculatorTest.kt

package com.example.mykmmapp

import kotlin.test.Test
import kotlin.test.assertEquals

class CalculatorTest {
    @Test
    fun testAddition() {
        val calculator = Calculator()
        val result = calculator.add(3, 4)
        assertEquals(7, result)
    }

    @Test
    fun testMultiplication() {
        val calculator = Calculator()
        val result = calculator.multiply(2, 5)
        assertEquals(10, result)
    }
}

Running Tests on Android

Running tests on Android and iOS involves using Android Studio's and Xcode's testing tools. Ensure that your Android and iOS test configurations are set up correctly, and then execute your tests as you would with standard Android and iOS tests.

Testing Platform-Specific Code

While shared code tests focus on business logic, platform-specific code tests ensure the correct behavior of platform-specific implementations. Write tests for Android and iOS code using their respective testing frameworks.

Android Unit Test Example

// shared/src/androidTest/kotlin/com.example.mykmmapp/AndroidImageLoaderTest.kt

package com.example.mykmmapp

import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Test
import org.junit.runner.RunWith
import kotlin.test.assertTrue

@RunWith(AndroidJUnit4::class)
class AndroidImageLoaderTest {
    @Test
    fun testImageLoading() {
        val imageLoader = ImageLoader()
        val imageView = imageLoader.loadImage("https://example.com/image.jpg")
        assertTrue(imageView is android.widget.ImageView)
    }
}

iOS Unit Test Example

// iosApp/iosAppTests/IosImageLoaderTest.swift (XCTest runs in the iOS app's test target)

import XCTest
import UIKit
import MyKmmApp // Assuming this is your Kotlin Multiplatform module name

class IosImageLoaderTest: XCTestCase {

    func testImageLoading() {
        let imageLoader = ImageLoader()
        let imageView = imageLoader.loadImage(url: "https://example.com/image.jpg")
        XCTAssertTrue(imageView is UIImageView)
    }
}

Integrating Kotlin Multiplatform Mobile with Existing Android Projects

Integrating Kotlin Multiplatform Mobile (KMM) with existing Android projects allows you to gradually adopt cross-platform development while leveraging your current codebase. In this chapter, we'll explore the process of adding KMM modules to existing projects, sharing code between new and existing modules, and managing dependencies.

Adding KMM Modules to Existing Projects

  1. Add KMM Module

    • Navigate to "File" > "New" > "New Module..."
    • Choose "Kotlin Multiplatform Shared Module"
    • Follow the prompts to configure the module settings.
  2. Configure Dependencies

    Ensure that your Android module and KMM module are appropriately configured to share code and dependencies. Update the settings.gradle and build.gradle files as needed.

    // settings.gradle

    include ':app', ':shared', ':kmmModule'

    // app/build.gradle

    dependencies {
        implementation project(":shared")
        implementation project(":kmmModule")
    }
  3. Sharing Code

    You can now share code between the Android module and the KMM module. Place common code in the “commonMain” source set of the KMM module.

    // shared/src/commonMain/kotlin/com.example.mykmmapp/CommonCode.kt

    package com.example.mykmmapp

    fun commonFunction() {
        println("This function is shared between Android and KMM.")
    }
  4. Run and Test

    Run your Android project, ensuring that the shared code functions correctly on both platforms.

Managing Dependencies

Shared Dependencies

Ensure that dependencies required by shared code are included in the KMM module's “build.gradle.kts” file.

// shared/build.gradle.kts

kotlin {
    android()
    ios()
    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.0")
                // Add other shared dependencies
            }
        }
    }
}

Platform-Specific Dependencies

For platform-specific dependencies, declare them in the respective source sets.

// shared/build.gradle.kts

kotlin {
    android()
    ios()
    sourceSets {
        val androidMain by getting {
            dependencies {
                implementation("com.squareup.okhttp3:okhttp:4.9.0")
                // Add other Android-specific dependencies
            }
        }
        val iosMain by getting {
            dependencies {
                // Add iOS-specific dependencies
            }
        }
    }
}

Conclusion

As we conclude our exploration of Kotlin Multiplatform Mobile (KMM), it's evident that this technology has emerged as a powerful solution for cross-platform mobile app development. By seamlessly bridging the gap between Android and iOS, KMM empowers developers to build robust applications with efficiency and code reusability at its core.

Kotlin Multiplatform Mobile stands as a testament to the evolving landscape of mobile app development. By embracing the principles of code reusability, adaptability, and continuous improvement, you are well-equipped to navigate the complexities of cross-platform development.

· 11 min read
Petros Efthymiou

Android Performance Optimization Series- Battery & CPU

Introduction

In the dynamic world of Android app development, performance is crucial in order to meet the growing user expectations. Users demand smooth, responsive, and battery-efficient experiences, and they won't hesitate to uninstall apps that fall short. As developers, it's our responsibility to ensure our Android applications are not just functional but also performant.

We will be posting an exclusive series of articles where we go deep into the realm of Android performance profiling and optimization! Over the next few blog posts, we'll embark on an enlightening journey to demystify the Android apps’ performance. In this comprehensive series, we'll touch on the critical aspects of CPU usage, battery consumption, memory management, and UI optimization. Whether you're a seasoned developer seeking to fine-tune your app or a newcomer eager to master the art of Android optimization, this series is your roadmap to achieving peak performance. Get ready to unleash the full potential of your Android applications! 🚀

The Importance of Performance Optimization

Performance optimization isn't merely a luxury; it's a necessity. Beyond satisfying your users, there are several reasons to prioritize performance optimization in Android app development:

  1. User Retention: Performance issues, such as laggy UIs and slow load times, frustrate users and lead to high uninstall rates. An optimized app is more likely to retain and engage its user base.
  2. Market Competition: The landscape of mobile applications is crowded, and competition is fierce. An app that outperforms its peers has a clear advantage, which often translates to better ratings and more downloads.
  3. Battery Efficiency: Mobile device batteries are finite resources. An inefficient app can quickly drain a user's battery, leading to negative reviews and uninstalls. Optimal performance can significantly extend battery life.
  4. Resource Utilization: Efficient apps consume fewer system resources, such as CPU and memory. This, in turn, benefits the entire ecosystem by reducing strain on the device and enhancing the user experience across all apps.

In this article, we will explore battery consumption and CPU usage profiling and optimization. These two aspects are closely related. High CPU usage also leads to high battery consumption.

Understanding CPU Usage and Battery Consumption

Let’s first make sure we are on the same page regarding what we mean by the terms CPU Usage and Battery Consumption.

CPU Usage

The Central Processing Unit (CPU) is the brain of any computing device, including smartphones. CPU usage, in the context of Android app performance, refers to the percentage of the CPU's processing power that your app consumes. High CPU usage can lead to sluggish performance, increased power consumption, and a less responsive user interface: when the CPU cannot keep up with the work it is given, response times suffer.

Monitoring CPU usage is crucial for several reasons:

  • Responsiveness: High CPU usage can cause your app to become unresponsive. Monitoring CPU usage allows you to identify performance bottlenecks and optimize your code for a smoother user experience.
  • Battery Life: As we already explained, excessive CPU usage can quickly drain a device's battery. By reducing CPU load, you can extend the device's battery life, leading to happier users.

Battery Consumption

Battery consumption is a key concern for mobile users. Apps that consume excessive battery are likely to be uninstalled or used sparingly. Here is why tracking battery consumption is essential:

  • User Retention: Excessive battery consumption is a major annoyance for users. By reducing your app's power consumption, you increase the likelihood of user retention.

I personally tend to uninstall apps that are very battery-demanding.

Profiling Battery Consumption and CPU usage

The skill to identify performance issues is arguably more important than the skill to optimize. Just as the read-code-to-write-code ratio is estimated at about 10 to 1, we should spend more time identifying performance issues than optimizing. At first this sounds strange, but it makes a lot of sense. Nowadays, even mobile devices have become quite powerful and can handle heavy-duty tasks effectively. Furthermore, performance optimization often leads to code that is harder to read and reason about. Therefore, we shouldn't spend time optimizing code that has little to no effect on the real-world performance our users experience. We must, though, always keep an eye out for serious performance problems we are not aware of. The Android Profiler is an excellent tool for that!

Android Profiler

In order to start profiling an app, we first need to run the application from Android Studio in an emulator or a real device. When you have the app running, click the “Profiler” tab at the bottom of Android Studio:

profiler

Then, you need to locate the device on which you are running your app and click the “plus” icon to start a new profiler session. Find your app (debuggable process) and click on it.

debuggable process

Monitoring CPU Usage and Battery Consumption

Once you select your application, you are going to see something like the screenshot below. The top section indicates the percentage of CPU usage, and the bottom section the memory that our application is using.

cpu and memory

We are going to ignore the memory section for now, as this article focuses on CPU and battery. If we start using our app and navigate from screen to screen, we will notice the CPU usage increasing. In particular, when scrolling an extensive list that uses pagination, we can see the CPU usage climb well above 50%. This happens because of the multiple network requests to fetch the next items, as well as the lazy calculation of the UI items.

The pink dots at the top indicate the clicks we are doing inside the app.

clicks

Now, please click on the System Trace Link. The system trace initially has 2 tabs, one for the CPU and one for memory. Please click on CPU, and you will be able to track the CPU usage in even greater detail.

detailed cpu and memory

The green color indicates the CPU usage by our application, while the gray color indicates CPU usage by external factors such as the OS or other apps that may be running in the background. We can also see the number of threads that are currently active.

To track the battery usage, select the system trace option on the left of the screen and start recording.

recording

You can now use your app and perform the actions that you are interested in profiling, like navigating inside the app or scrolling a list. Once you are done, click stop recording, and you will get a full profiling report. On the top of the screen, you can see the CPU and, at the bottom, the energy profiler with the battery consumption.

full profiling report

  • Capacity: the remaining battery percentage (%).
  • Charge: the remaining battery charge in microampere-hours (µAh).
  • Current: the instantaneous current draw in microamperes (µA).

Personally, though, I prefer to focus on CPU usage, which I find more helpful and straightforward. As a rule of thumb, high CPU usage means high battery consumption.

Besides the CPU, other factors contribute to battery consumption, such as GPU usage, sensors, GPS, or camera usage. Unfortunately, on most devices we cannot get a detailed report because they don't support the On Device Power Rails Monitor (ODPM). A few devices, such as the Pixel 6 or Pixel 7, do support it, and on those the energy profiler can give us a full battery usage report to understand further where we consume battery.

On Device Power Rails Monitor

Another great way to understand if your application is consuming too much battery is to simply use it as a user and check the system settings report that indicates your app’s battery consumption over time.

We now clearly understand how to profile our app’s CPU usage and battery consumption, either during runtime or by recording and storing usage reports. Let’s move on to the next section, where we will learn certain optimization techniques.

Optimization

The general rule to optimize both CPU usage and battery consumption is to avoid any unnecessary work. When we optimize CPU usage, we also optimize battery consumption and vice-versa. The difference is that in terms of CPU usage, we must avoid “doing all the work at once” which will overload it and cause performance issues, while battery consumption is about how much work we do over time.

Below, we will present certain areas that can overload the CPU and cause high battery drainage.

Precalculations

We often precalculate information, anticipating that we will need to display it later. We do it so that the information is available to the user instantly and the user doesn't have to wait for it. In many cases, though, the user never navigates to the anticipated area and the information is never displayed, resulting in wasted CPU work and battery drain.

  • Try to avoid prefetching data with multiple network requests at application startup unless it's really necessary. This can overload your CPU, resulting in sluggish application startup, as well as unnecessarily drain the battery.
  • Avoid precalculating list elements. Use either the RecyclerView combined with the view holder pattern or the Jetpack Compose LazyColumn. Those components are performance-optimized and only create items when the user is about to see them. API pagination is also a great technique to avoid prefetching an extensive amount of data.
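
As a minimal illustration of the second point, here is a hypothetical Compose list that builds rows lazily; nothing is composed for items the user never scrolls to (the ArticleList name and the plain Text rows are illustrative only).

import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

@Composable
fun ArticleList(titles: List<String>) {
    // LazyColumn composes each row only when it is about to become visible,
    // so off-screen items cost neither CPU nor memory up front.
    LazyColumn {
        items(titles) { title ->
            Text(text = title)
        }
    }
}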

Background Services

Background services are essential for tasks that need to run continuously or periodically, even when your app is not in the foreground. However, they can also be significant contributors to CPU usage and battery drain.

Optimization Strategies:

  • Scheduled Alarms: Utilize the AlarmManager to schedule tasks at specific intervals rather than running them continuously. This allows your app to minimize background processing time and conserve battery.
  • WorkManager: For periodic and deferrable tasks, use WorkManager. It efficiently manages background work, respecting device battery optimization features and network constraints.
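
A minimal sketch of the WorkManager approach, assuming the androidx.work (KTX) dependency; SyncWorker and the "sync" work name are hypothetical placeholders for your own deferrable task.

import android.content.Context
import androidx.work.Constraints
import androidx.work.ExistingPeriodicWorkPolicy
import androidx.work.NetworkType
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters
import java.util.concurrent.TimeUnit

// Hypothetical worker that performs the batched sync.
class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... perform the deferrable sync here
        return Result.success()
    }
}

fun schedulePeriodicSync(context: Context) {
    val request = PeriodicWorkRequestBuilder<SyncWorker>(6, TimeUnit.HOURS)
        .setConstraints(
            Constraints.Builder()
                .setRequiredNetworkType(NetworkType.UNMETERED) // e.g. Wi-Fi only
                .setRequiresBatteryNotLow(true)
                .build()
        )
        .build()

    // KEEP avoids rescheduling (and rerunning) the work on every app launch.
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "sync", ExistingPeriodicWorkPolicy.KEEP, request
    )
}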

Wake Locks

A wake lock allows your app to keep the device awake, which can significantly impact battery life if used excessively.

Optimization Strategies:

  • Use Wake Locks Sparingly: Only use wake locks when necessary, and release them as soon as the task is completed. Prolonged use of wake locks can prevent the device from entering low-power modes.
  • AlarmManager: In scenarios where you need to wake the device periodically, consider using the AlarmManager to schedule tasks instead of a continuous wake lock.
  • JobScheduler or WorkManager: These tools can be used to schedule tasks efficiently without the need for a persistent wake lock.

Location-Based Services

Location-based services, such as GPS and network-based location tracking, can have a significant impact on CPU usage and battery consumption, especially if they're continuously running.

Optimization Strategies:

  • Location Updates: Request location updates at longer intervals or adaptive intervals based on the user's current location. High-frequency updates consume more battery.
  • Geofencing: Utilize geofencing to trigger location-based actions when the user enters or exits defined areas. Geofencing is more efficient than continuous location tracking.
  • Fused Location Provider: Use the Fused Location Provider, which combines data from various sources and optimizes location requests. It reduces the need for the GPS chip, which consumes more power.
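Here is a minimal sketch that combines the last two points: requesting location updates from the Fused Location Provider with balanced power priority and a long interval. It assumes the play-services-location dependency (version 21 or newer) and that the location permission has already been granted; the interval values are illustrative:

import android.annotation.SuppressLint
import android.content.Context
import android.os.Looper
import com.google.android.gms.location.LocationCallback
import com.google.android.gms.location.LocationRequest
import com.google.android.gms.location.LocationResult
import com.google.android.gms.location.LocationServices
import com.google.android.gms.location.Priority

@SuppressLint("MissingPermission") // Assumes location permission was requested elsewhere.
fun startBatterySavingLocationUpdates(context: Context) {
    val client = LocationServices.getFusedLocationProviderClient(context)

    // Balanced power priority avoids keeping the GPS chip constantly active.
    val request = LocationRequest.Builder(Priority.PRIORITY_BALANCED_POWER_ACCURACY, 10 * 60 * 1000L)
        .setMinUpdateIntervalMillis(5 * 60 * 1000L)
        .build()

    val callback = object : LocationCallback() {
        override fun onLocationResult(result: LocationResult) {
            result.lastLocation?.let { location ->
                // Use the location update here.
            }
        }
    }

    client.requestLocationUpdates(request, callback, Looper.getMainLooper())
}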

Battery and CPU Efficient Network Requests

Network requests can have a significant impact on device resource usage.

Optimization Strategies:

  • Batch Requests: Minimize the number of network requests by batching multiple requests into one. This reduces the frequency of radio usage, which is a significant battery consumer.
  • Network Constraints: Use tools like WorkManager, which respect network constraints. Schedule network-related work when the device is on Wi-Fi or has an unmetered connection, reducing cellular data usage (a sketch follows after this list).
  • Background Sync: If your app needs periodic data synchronization, schedule these tasks at intervals that minimize battery impact.
  • Optimize Payload Size: Minimize the size of data payloads exchanged with the server. Smaller payloads lead to shorter network activity, reducing battery usage.
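As a minimal sketch of the network-constraints point above, the snippet below defers an upload until the device is on an unmetered connection. The UploadWorker class is an illustrative assumption:

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

class UploadWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // Perform the upload here; WorkManager only runs this once the constraints are met.
        return Result.success()
    }
}

fun scheduleUpload(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED) // Wait for Wi-Fi or another unmetered connection.
        .build()

    val request = OneTimeWorkRequestBuilder<UploadWorker>()
        .setConstraints(constraints)
        .build()

    WorkManager.getInstance(context).enqueue(request)
}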

Database queries

Similarly to network requests, when we use a local database for caching or other purposes, we should be mindful of how we query it. Database queries consume both CPU and battery and should be optimized with the same techniques as network requests.

By implementing these optimization strategies, you can ensure that your app is more energy-efficient and less likely to experience lag during usage.

Conclusion

In the first blog post of the optimization series, we took a deep dive into CPU usage and battery optimization. We learned how to use the Android Studio profiler effectively to identify potential performance issues, as well as optimization techniques to mitigate them.

Remember to “profile often but optimize rarely and only when it’s truly required.”

Stay tuned for the rest of the Android optimization series, where we will touch on the critical aspects of memory and UI optimization.

· 13 min read
Petros Efthymiou

Biometric Authentication in Android

Introduction

In today's digital landscape, security and user experience are paramount considerations for developers creating Android applications. Biometric authentication, a revolutionary advancement in mobile security, has emerged as a pivotal solution that addresses both security concerns and user convenience. With the rise of data breaches and the increasing dependency on mobile devices for various transactions, implementing robust authentication mechanisms is non-negotiable.

Biometric authentication is a cutting-edge method that leverages the unique physiological and behavioral characteristics of an individual to grant access to applications and sensitive data. Instead of relying solely on traditional methods like PINs or passwords, biometric authentication harnesses distinctive traits such as fingerprints, facial features, and iris patterns to verify a user's identity.

Advantages of Biometric Authentication

  1. Enhanced Security: Biometric authentication offers a higher level of security compared to traditional methods. Unlike passwords or PINs, which can be forgotten, shared, or hacked, biometric characteristics are unique to each individual. That said, biometrics have security gaps of their own, such as authentication false positives caused by poor device hardware, but those can be mitigated, as we will see later. Another way to bypass biometrics is malicious fingerprint capture (from photos or other methods) to impersonate the user.

  2. User Convenience: One of the standout benefits of biometric authentication is its ease of use. Users no longer need to remember complex passwords or worry about typing errors. A simple touch of a finger or a glance at the camera is all it takes to gain access. This frictionless experience not only reduces user frustration but also encourages secure behavior.

  3. Seamless Interaction: Biometric authentication seamlessly integrates into the user's natural interaction with the device. It eliminates the need to switch between apps to retrieve passwords or codes, streamlining the user journey and increasing overall efficiency.

  4. Reduced Friction: Traditional authentication methods often lead to abandoned sign-up or login processes due to the cumbersome nature of password entry. Biometric authentication reduces this friction, leading to higher user engagement and retention rates.

  5. Multifactor Authentication: Many modern devices support multifactor authentication, combining biometric traits with other factors such as PINs or tokens. This layered approach further enhances security by adding an extra barrier against unauthorized access.

In this step-by-step guide, we will explore how to implement biometric authentication in Android applications using the power of Jetpack Compose. To read more about Jetpack Compose you may visit our article. By combining the capabilities of Jetpack Compose with the Android Biometric API, developers can craft applications that prioritize security and provide a seamless and delightful user experience.

In the following sections, we will walk through the process of integrating biometric authentication into an Android app using Jetpack Compose. We will cover various aspects such as understanding the Biometric API, preparing the project, implementing different biometric modalities, and ensuring security best practices.

Stay tuned as we embark on this journey to create more secure, user-centric, and innovative Android applications with the power of biometric authentication and Jetpack Compose.

Understanding Biometric Authentication

Android devices offer several biometric modalities, each with its own set of characteristics and advantages.

Fingerprint Authentication:

Fingerprint authentication is one of the most widely recognized biometric methods. It relies on capturing and analyzing the distinctive patterns in a user's fingerprints. As every individual has unique ridge patterns and minutiae points at their fingertips, fingerprint authentication offers a high level of accuracy and security. Android devices equipped with fingerprint sensors enable users to unlock their devices, authorize transactions, and access sensitive apps simply by placing their registered finger on the sensor. This method has gained significant popularity due to its ease of use and quick recognition.

Face Recognition:

Face recognition involves capturing and analyzing a user's facial features to establish identity. It works by detecting key facial landmarks and comparing them with registered data. The minimum hardware requirement is a high-resolution camera with sufficient resolution and quality to detect facial features accurately. To enhance security, some phones carry depth sensors that create a 3D depth map of the user's face, or, even better, an infrared camera that enables iris recognition. With only a front-facing camera, the device is considered to provide weak biometric authentication.

Face recognition is convenient and non-intrusive, providing a seamless user experience. However, it's important to note that lighting conditions and angle variations can impact its accuracy.

Iris Recognition:

Iris recognition is a highly secure biometric method that involves capturing and analyzing the unique patterns in a user's iris, which is the colored part of the eye surrounding the pupil. Like fingerprints, iris patterns are distinct to each individual and remain stable over time. This method offers a higher degree of accuracy and security due to the complexity of the iris patterns. While iris recognition may require specific hardware, it provides a robust solution for applications that demand stringent security measures.

The Role of Biometric Authentication in App Security:

Biometric authentication plays a crucial role in enhancing the security of sensitive app functionalities. While traditional authentication methods like passwords can be compromised through hacking, phishing, or even user negligence, biometric traits are inherent and difficult to replicate. By incorporating biometric authentication as an additional security layer, apps can ensure that only authorized individuals gain access to critical features, sensitive data, and financial transactions.

For instance, financial apps can use biometric authentication to authorize high-value transactions, ensuring that even if a user's device is stolen, unauthorized transactions cannot be carried out without the user's biometric input. Similarly, healthcare apps can use biometrics to secure patient records and medical data, safeguarding sensitive information from unauthorized access.

The significance of biometric authentication extends beyond security. By reducing the need for complex passwords and PINs, biometrics offer a seamless and user-friendly experience, contributing to higher user engagement and satisfaction. Users are more likely to adopt apps that prioritize both security and convenience.

As we proceed through this step-by-step guide, we will explore how to harness the power of Jetpack Compose to integrate biometric authentication seamlessly into your Android apps. By combining the strength of biometric modalities with the modern UI capabilities of Jetpack Compose, you'll be able to create applications that are not only secure but also delightful to use. Stay with us as we dive deeper into the implementation details and unlock the potential of biometric authentication in your Android projects.

Prerequisites

Before diving into the implementation of biometric authentication in your Android app using Jetpack Compose, there are several prerequisites that you need to ensure are in place. These prerequisites ensure that your app can effectively utilize the Biometric API and provide a seamless and secure user experience.

Minimum SDK Version:

To implement biometric authentication, your app should have a minimum SDK version of 23 (Android 6.0, Marshmallow) or higher, as the Biometric API was introduced in this version.

Hardware Requirements:

The availability of biometric authentication methods depends on the hardware capabilities of the user's device, such as:

  • Fingerprint sensor for fingerprint authentication.
  • Front-facing Camera for facial recognition.
  • Infrared camera for iris recognition.

Ensure that your app gracefully handles scenarios where the required hardware is not available on the device.

Setting Up Biometric Authentication and Jetpack Compose

Now that we've covered the prerequisites, it's time to set up your Android project for biometric authentication using the Android Biometric API and Jetpack Compose. This section will guide you through adding the necessary permissions and dependencies to your project, ensuring that you're well-equipped to integrate biometric authentication seamlessly into your app.

  1. Adding Permissions:

Depending on the biometric modality you plan to use, you may need to add specific permissions to your app's AndroidManifest.xml file. For example, if you intend to use face recognition, you must request CAMERA permission to access the front-facing camera:

<uses-permission android:name="android.permission.CAMERA" />

Make sure to request permissions at runtime if your app targets Android 6.0 (Marshmallow) or higher. You can use the AndroidX Activity or Fragment libraries to handle permission requests effectively.
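As a minimal sketch of such a runtime request, the snippet below uses the AndroidX Activity Result API to ask for the CAMERA permission. The FaceAuthActivity name and the callback bodies are illustrative assumptions:

import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class FaceAuthActivity : AppCompatActivity() {

    private val cameraPermissionLauncher =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                // Proceed with face recognition.
            } else {
                // Fall back to another authentication method or explain why the permission is needed.
            }
        }

    private fun ensureCameraPermission() {
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.CAMERA
        ) == PackageManager.PERMISSION_GRANTED

        if (!alreadyGranted) {
            cameraPermissionLauncher.launch(Manifest.permission.CAMERA)
        }
    }
}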

  2. Adding Dependencies:

To begin implementing biometric authentication using the Android Biometric API and Jetpack Compose, you must add the required dependencies to your app's build.gradle file. We'll be using the Biometric API to interact with biometric hardware and the Jetpack Compose libraries for UI creation.

In your app's build.gradle file, add the following dependencies:

android {
    // ...
    buildFeatures {
        compose true
    }

    composeOptions {
        kotlinCompilerExtensionVersion "1.5.1"
    }
}

dependencies {
    // ...
    implementation "androidx.compose.ui:ui:1.4.3"
    implementation "androidx.compose.material:material:1.4.3" // Check for the latest version
    implementation "androidx.activity:activity-compose:1.7.2"
    implementation "androidx.biometric:biometric:1.2.0-alpha05"
}

The androidx.compose and androidx.activity:activity-compose are required for building the user interface using Jetpack Compose.

The androidx.biometric:biometric dependency provides access to the Android Biometric API, which is essential for implementing biometric authentication.

Checking Biometric Device Compatibility

Now, let’s start implementing the actual solution. As we are using Jetpack Compose we will create a MainActivity and add our Composables to it.

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            BiometricAuthenticationScreen()
        }
    }
}

We now need to implement the BiometricAuthenticationScreen Composable that will be responsible for the actual biometric authentication.

@Composable
fun BiometricAuthenticationScreen() {
    val context = LocalContext.current as FragmentActivity
    val biometricManager = BiometricManager.from(context)
    val canAuthenticateWithBiometrics =
        when (biometricManager.canAuthenticate(BiometricManager.Authenticators.BIOMETRIC_STRONG)) {
            BiometricManager.BIOMETRIC_SUCCESS -> true
            else -> {
                Log.e("TAG", "Device does not support strong biometric authentication")
                false
            }
        }

    Surface(color = MaterialTheme.colors.background) {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            if (canAuthenticateWithBiometrics) {
                //TODO perform biometric authentication
            } else {
                Text(text = "Biometric authentication is not available on this device.")
            }
        }
    }
}

We have implemented a simple Composable that first uses the BiometricManager to check whether biometric authentication is available on this device, and stores the result in a boolean. As we explained earlier, there are devices, particularly older ones, that do not support any fingerprint, face, or iris authentication.

In our implementation, we log those cases and present a text on the screen that informs the user. In a real-world app, we would probably want to redirect the user to a username-and-password authentication screen instead.

Implementing Biometric Authentication

Let’s proceed with implementing the biometric authentication. First of all, we will create a button Composable that will appear on the screen when the device supports biometric authentication.

@Composable
fun BiometricButton(
    onClick: () -> Unit,
    text: String
) {
    Button(
        onClick = onClick,
        modifier = Modifier.padding(8.dp)
    ) {
        Text(text = text)
    }
}

Now we will implement the authenticateWithBiometric function.

fun authenticateWithBiometric(context: FragmentActivity) {
    val executor = context.mainExecutor
    val biometricPrompt = BiometricPrompt(
        context,
        executor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                //TODO handle authentication success, proceed to HomeScreen
                Log.d("TAG", "Authentication successful!!!")
            }

            override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                Log.e("TAG", "onAuthenticationError")
                //TODO Handle authentication errors.
            }

            override fun onAuthenticationFailed() {
                Log.e("TAG", "onAuthenticationFailed")
                //TODO Handle authentication failures.
            }
        })

    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Biometric Authentication")
        .setDescription("Place your finger on the sensor or look at the front camera to authenticate.")
        .setNegativeButtonText("Cancel")
        .setAllowedAuthenticators(BiometricManager.Authenticators.BIOMETRIC_STRONG)
        .build()

    biometricPrompt.authenticate(promptInfo)
}

Initially, we create a BiometricPrompt with the respective callbacks that decide what happens in each case. The onAuthenticationSucceeded callback is called when the authentication is successful; here you probably want to fire an Intent for your HomeActivity or present your HomeScreen Composable. Personally, I prefer to separate the pre-authentication app from the post-authentication app with a separate activity.

After we create the BiometricPrompt, we also create the PromptInfo that defines the options and text that will be presented to the user when they trigger biometric authentication.

Then we define what authenticators we want to allow. Here we are requesting the BIOMETRIC_STRONG type of authentication. This includes:

  1. Fingerprint authentication.
  2. Face recognition with IRIS detection.
  3. Face recognition with a 3D depth sensor.

As we mentioned earlier, a device that only carries a front-facing camera cannot perform strong biometric authentication. The OS automatically picks the strong authentication option that is available on the current device (fingerprint or face recognition). Usually, devices don't carry more than one strong biometric sensor, as that would unnecessarily increase their cost.

Finally, we call the authenticate function on the biometricPrompt to trigger the actual authentication popup.

In order to finalize the implementation, we need to display the BiometricButton on devices that support biometric authentication. Replace the //TODO perform biometric authentication comment with BiometricButton(...):

    Surface(color = MaterialTheme.colors.background) {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            if (canAuthenticateWithBiometrics) {
                BiometricButton(
                    onClick = {
                        authenticateWithBiometric(context)
                    },
                    text = "Authenticate with Biometric"
                )
            } else {
                Text(text = "Biometric authentication is not available on this device.")
            }
        }
    }

The implementation is complete! You can now build and install the app on a device that supports biometrics and perform the authentication!

Biometric Authentication Error Handling

Let’s now discuss the error handling of biometric authentication.

Both onAuthenticationError and onAuthenticationFailed are callback methods of the BiometricPrompt.AuthenticationCallback class. These methods are invoked based on different scenarios during the biometric authentication process.

onAuthenticationError Method:

The onAuthenticationError method is called when an error occurs during the biometric authentication process. This could include various types of errors, such as the user clicking the cancel button, sensor errors, hardware issues, or other unexpected conditions that prevent successful authentication. The method receives two parameters:

  1. errorCode: An integer code representing the specific error that occurred. This code can be used to identify the nature of the error.
  2. errString: A human-readable error message that provides additional details about the error.
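A minimal sketch of distinguishing a few common error codes is shown below; it is a drop-in replacement for the onAuthenticationError stub in the callback we created earlier. Which errors you handle, and how, is app-specific, so the branches are illustrative:

override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
    when (errorCode) {
        BiometricPrompt.ERROR_USER_CANCELED,
        BiometricPrompt.ERROR_NEGATIVE_BUTTON -> {
            // The user dismissed the prompt; offer an alternative sign-in path.
        }
        BiometricPrompt.ERROR_LOCKOUT,
        BiometricPrompt.ERROR_LOCKOUT_PERMANENT -> {
            // Too many failed attempts; fall back to device credentials or password login.
        }
        else -> Log.e("TAG", "Authentication error $errorCode: $errString")
    }
}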

onAuthenticationFailed Method:

The onAuthenticationFailed method is called when the biometric authentication process fails to recognize the biometric data provided by the user. This can occur when the biometric data presented to the sensor does not match any enrolled biometric template. It's important to note that this callback is not invoked for every unsuccessful attempt; it's specifically for cases where the biometric data provided cannot be matched to any registered data.

Similar to the onAuthenticationError method, the onAuthenticationFailed method should be used to handle authentication failures by implementing appropriate logic.

In summary, onAuthenticationError is called when there's an error during the authentication process, and onAuthenticationFailed is called when the provided biometric data cannot be matched to any registered data. Both methods are essential for creating a comprehensive biometric authentication experience that informs users about errors and failures and guides them through the authentication process.

Conclusion

As we conclude this step-by-step guide on implementing biometric authentication in Android with Jetpack Compose, we've explored the fusion of cutting-edge security measures and user-centric design principles. Biometric authentication has emerged as a formidable solution that not only enhances the security of your Android applications but also elevates the user experience to new heights.

By harnessing the power of biometric modalities such as fingerprint, face recognition, and iris authentication, developers can provide users with a seamless and secure way to access sensitive features, authenticate transactions, and interact with confidential data. The integration of Jetpack Compose further amplifies the potential, enabling the creation of intuitive and visually appealing user interfaces that align with modern design trends.

Shipbook provides awesome remote logging capabilities that can help you identify, debug, and fix critical authentication errors at the time they appear!

Thank you for joining us on this exploration of biometric authentication with Jetpack Compose. As technology continues to evolve, we encourage you to stay curious, experiment, and continually enhance your skills to build exceptional and secure experiences for Android users worldwide.

· 9 min read

RecyclerView Vs ListView

Introduction

RecyclerView and ListView are two popular options for displaying long lists of data within an Android application. Both are subclasses of the ViewGroup class and can be used to display scrollable lists. However, they differ in features, capabilities, and implementation.

The process of implementing both may seem pretty similar. For example:

  • You get a list of data
  • You create an adapter
  • You find the view that should display the list
  • You set the adapter on that view

ListView was one of the earliest components introduced in Android development for displaying a scrollable list of items. Although it provided basic functionality and ease of implementation, it had its limitations, especially when it came to handling large data sets and customizing the appearance and behavior of the list.

As Android applications evolved and the need for more sophisticated list management became apparent, RecyclerView was introduced as a more versatile and efficient solution for displaying lists. As a developer, it's essential to understand the key differences between ListView and RecyclerView to appreciate their respective advantages and disadvantages.

In this article, we'll explore the key differences between RecyclerView and ListView and give you a good understanding of when to use what and how and also appreciate why RecyclerView came into existence over ListView.

ListView

ListView was introduced in Android 1.0 and has been around since then. ListView was the go-to solution for displaying lists of data before RecyclerView was introduced.

One of the biggest advantages of using a ListView is that it's simpler to implement and easier to use. Here is an example of how simply a ListView can be implemented in Android.

main activity

Link to snippet
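Since the snippet above is only linked, here is a hedged sketch of what such a minimal setup might look like: a list of strings displayed with the built-in ArrayAdapter. The layout and view IDs are illustrative assumptions:

import android.os.Bundle
import android.widget.ArrayAdapter
import android.widget.ListView
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val items = (1..1000).map { "Item $it" }

        val listView = findViewById<ListView>(R.id.listView)
        // ArrayAdapter maps each string to a simple built-in row layout.
        listView.adapter = ArrayAdapter(this, android.R.layout.simple_list_item_1, items)
    }
}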

As you can see, the code is pretty simple and straightforward compared to a RecyclerView implementation, which requires writing custom adapter and ViewHolder classes.

If you ask any Android developer about the difference between the two, they will say something like: “ListView is still available and can be a good solution for displaying smaller lists of data. But as the complexity of the app increases, ListView might not be the best solution for managing and displaying large amounts of data.” Let's try to understand why.

To implement anything a little more complex than a simple list of strings, it's good practice to write our own Adapter class, whose responsibility is to map the data to the positioned view as we scroll through the list.

Let’s write our own adapter class instead of a simple ArrayAdapter for the above snippet.

list adapter

Link to snippet
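Again, the snippet itself is only linked, so below is a hedged reconstruction of what such a custom adapter typically looks like: a BaseAdapter whose getView() reuses convertView with a simple null check. The RowItem model, the row layout, and its four child views are illustrative assumptions:

import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.BaseAdapter
import android.widget.ImageView
import android.widget.TextView

data class RowItem(val iconRes: Int, val title: String, val subtitle: String, val detail: String)

class MyListAdapter(private val items: List<RowItem>) : BaseAdapter() {

    override fun getCount(): Int = items.size
    override fun getItem(position: Int): Any = items[position]
    override fun getItemId(position: Int): Long = position.toLong()

    override fun getView(position: Int, convertView: View?, parent: ViewGroup): View {
        // Reuse the recycled view if the ListView hands us one; inflate only when necessary.
        val rowView = convertView ?: LayoutInflater.from(parent.context)
            .inflate(R.layout.row_item, parent, false)

        // Four findViewById() calls per bound row: the cost the ViewHolder pattern removes.
        val icon = rowView.findViewById<ImageView>(R.id.icon)
        val title = rowView.findViewById<TextView>(R.id.title)
        val subtitle = rowView.findViewById<TextView>(R.id.subtitle)
        val detail = rowView.findViewById<TextView>(R.id.detail)

        val item = items[position]
        icon.setImageResource(item.iconRes)
        title.text = item.title
        subtitle.text = item.subtitle
        detail.text = item.detail

        return rowView
    }
}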

At a high level, the getView function does the following:

  • Gets each view item from the listview
  • Find references to its child views
  • Sets the correct data to those views depending upon the position
  • Returns the created view item.

For each row item in a 1,000-item list, we don't have to create 1,000 different views; we can repopulate and reuse the same set of views with different data depending on the position in the list. This can be a major performance boost, as we save a lot of memory for a large list. This is called view recycling, and it is a major building block of RecyclerView, which we will see in a while. Here is a representation of how view recycling works.

Figure: A representation of how view recycling works

Now we have recycled the views with a simple null check and saved memory, but if we look inside the getView() function, we can see that we are still finding references to the child views with findViewById() calls.

Depending on how many child views there are (in my example code there are 4), for each item in the list we are calling findViewById() 4 times.

Hence, for a 1,000-item list, there will be 4,000 findViewById() calls, even though we have optimized the way the row item views are created. To help fix this problem for large lists, the ViewHolder pattern comes into play.

ViewHolder Pattern in Android

The ViewHolder pattern was created in Android to improve the performance of ListViews (and other AdapterView subclasses) by reducing the number of calls to findViewById().

When a ListView is scrolled, new views are created as needed to display the list items that become visible. Each time a new view is created, the findViewById() method is called to find the views in the layout and create references to them. This process can be slow, especially for complex layouts with many views, and at the same time the instantiated view references are kept in memory for the whole list, which grows in direct proportion to the size of the list you are rendering.

The ViewHolder pattern addresses this performance issue by caching references to the views in the layout. When a view is recycled (i.e., reused for a different list item), the ViewHolder can simply update the views with new data, rather than having to call findViewById() again.

Implementing ViewHolder Pattern in our ListView

Let's implement our ViewHolder class inside the MyListAdapter class.

MyListAdapter class

Code Snippet
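The linked snippet isn't embedded here either, so here is a hedged sketch of the same adapter's getView(), now caching the child-view references in a ViewHolder stored as the row view's tag. It builds on the illustrative MyListAdapter sketch above; findViewById() now runs only when a row is first inflated:

override fun getView(position: Int, convertView: View?, parent: ViewGroup): View {
    val rowView: View
    val holder: ViewHolder

    if (convertView == null) {
        rowView = LayoutInflater.from(parent.context).inflate(R.layout.row_item, parent, false)
        holder = ViewHolder(
            rowView.findViewById(R.id.icon),
            rowView.findViewById(R.id.title),
            rowView.findViewById(R.id.subtitle),
            rowView.findViewById(R.id.detail)
        )
        rowView.tag = holder // Cache the references on the view itself.
    } else {
        rowView = convertView
        holder = convertView.tag as ViewHolder // Reuse the cached references.
    }

    val item = items[position]
    holder.icon.setImageResource(item.iconRes)
    holder.title.text = item.title
    holder.subtitle.text = item.subtitle
    holder.detail.text = item.detail

    return rowView
}

private class ViewHolder(
    val icon: ImageView,
    val title: TextView,
    val subtitle: TextView,
    val detail: TextView
)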

With the above-mentioned changes, we have created a structure that lets us:

  • Reuse the view for each item in the list instead of creating a new one for every item.
  • Reduce the number of findViewById() calls, which, for complex layouts and long lists, can significantly degrade the app's performance.

These are the two key optimizations that RecyclerView provides to developers as part of its structure, apart from its other customization features.

Drawbacks of Using ListView

  • Inefficient scrolling due to inefficient memory usage out of the box.
  • Less flexibility to customize how the list items should be positioned.
  • Can only implement a vertically scrolling list.
  • Implementing animations can be hard and complex out of the box.
  • Only offers notifyDataSetChanged(), which is an inefficient way to handle updates.

RecyclerView

RecyclerView was introduced in Android 5.0 Lollipop as an upgrade over the ListView. It is designed to be more flexible and efficient, allowing developers to create complex layouts with minimal effort.

It uses "recycling" out of the box which we have seen above. It also has more flexible layout options, allowing you to create different types of lists with ease and also provides various methods to handle data set changes efficiently.

Let’s use RecyclerView instead of ListView in our above implementation.

RecyclerView
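The RecyclerView version isn't embedded above, so here is a hedged sketch of what it typically looks like; it reuses the illustrative RowItem model from earlier. onCreateViewHolder inflates a row, onBindViewHolder fills it with data, and getItemCount reports the size:

import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.ImageView
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

class MyRecyclerAdapter(private val items: List<RowItem>) :
    RecyclerView.Adapter<MyRecyclerAdapter.RowViewHolder>() {

    class RowViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
        val icon: ImageView = itemView.findViewById(R.id.icon)
        val title: TextView = itemView.findViewById(R.id.title)
        val subtitle: TextView = itemView.findViewById(R.id.subtitle)
        val detail: TextView = itemView.findViewById(R.id.detail)
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RowViewHolder {
        val view = LayoutInflater.from(parent.context).inflate(R.layout.row_item, parent, false)
        return RowViewHolder(view)
    }

    override fun onBindViewHolder(holder: RowViewHolder, position: Int) {
        val item = items[position]
        holder.icon.setImageResource(item.iconRes)
        holder.title.text = item.title
        holder.subtitle.text = item.subtitle
        holder.detail.text = item.detail
    }

    override fun getItemCount(): Int = items.size
}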

As you can see, there are multiple functions to override instead of just the single getView() function of ArrayAdapter, which makes implementing a RecyclerView less beginner-friendly than a ListView. It can also feel like overkill for the simplest lists in Android.

Benefits of Using RecyclerView

  • The major advantage of RecyclerView is its performance. It uses the ViewHolder pattern out of the box, which reuses views from the RecyclerView pool and avoids constantly inflating or creating new views. This reduces the memory consumption of displaying a long list compared to ListView and hence improves performance.

  • With a LayoutManager you can define how you want your list to be laid out: linearly, in a grid, horizontally, or vertically, rather than only vertically as in a ListView.

  • RecyclerView also offers a lot of customization features over ListView that make it easier to work with. For example, it supports drag-and-drop to rearrange items in the list, and item-swipe gestures for actions like deleting or archiving items. Below is example code showing how easy it is to extend the functionality to add drag-and-drop and swipe gestures.

// Set up the RecyclerView with a LinearLayoutManager and an adapter
recyclerView.layoutManager = LinearLayoutManager(this)
adapter = ItemAdapter(createItemList())
recyclerView.adapter = adapter

// Add support for drag and drop
val itemTouchHelper = ItemTouchHelper(object : ItemTouchHelper.Callback() {
override fun getMovementFlags(
recyclerView: RecyclerView,
viewHolder: RecyclerView.ViewHolder
): Int {
// Set the movement flags for drag and drop and swipe-to-dismiss
val dragFlags = ItemTouchHelper.UP or ItemTouchHelper.DOWN
val swipeFlags = ItemTouchHelper.START or ItemTouchHelper.END
return makeMovementFlags(dragFlags, swipeFlags)
}

override fun onMove(
recyclerView: RecyclerView,
viewHolder: RecyclerView.ViewHolder,
target: RecyclerView.ViewHolder
): Boolean {
// Swap the items in the adapter when dragged and dropped
adapter.swapItems(viewHolder.adapterPosition, target.adapterPosition)
return true
}

override fun onSwiped(viewHolder: RecyclerView.ViewHolder, direction: Int) {
// Remove the item from the adapter when swiped to dismiss
adapter.removeItem(viewHolder.adapterPosition)
}
})

// Attach the ItemTouchHelper to the RecyclerView
itemTouchHelper.attachToRecyclerView(recyclerView)

  • Implementing animations is pretty simple in RecyclerView and can be done by simply setting the itemAnimator as shown below:
val itemAnimator: RecyclerView.ItemAnimator = DefaultItemAnimator()
recyclerView.itemAnimator = itemAnimator

Best Practices to keep in mind with RecyclerView

To ensure the best results, developers should follow best practices when working with RecyclerView and ListView. For example:

  • Use item animations sparingly, as too many animations can lead to janky performance.

  • To update the UI with a RecyclerView, we can use the notifyItemInserted(), notifyItemRemoved(), or notifyItemChanged() methods, which tell the adapter that specific items have changed and only those positions need to be refreshed. Used irresponsibly, though, they can lead to redundant rebuilds of the list and introduce unwanted bugs. A small sketch of targeted updates follows below.
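A minimal sketch, assuming these functions live inside an adapter that backs its rows with a MutableList<RowItem> named items. Only the affected position is refreshed, instead of rebuilding the whole list with notifyDataSetChanged():

fun addItem(position: Int, item: RowItem) {
    items.add(position, item)
    notifyItemInserted(position)
}

fun removeItem(position: Int) {
    items.removeAt(position)
    notifyItemRemoved(position)
}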

Conclusion

In this article, we started off by implementing a simple list using ListView and then added things that don't come out of the box with ListView, such as view recycling and the ViewHolder pattern, to make it more memory-efficient, only to run into the limited customization options that ListView offers.

Then we implemented the same list with RecyclerView, which has developers apply view recycling and the ViewHolder pattern out of the box, making lists efficient, customizable, and performant from the start, which explains RecyclerView's popularity as a solution in the Android community.

· 11 min read
Petros Efthymiou

From Android Views to Jetpack Compose

Jetpack Compose and why it matters

Jetpack Compose is a revolutionary UI toolkit introduced by Google for building native Android applications. Unlike traditional Android Views, Jetpack Compose adopts a declarative approach to UI development, allowing developers to create user interfaces using composable functions.

This paradigm shift simplifies UI development by eliminating the need for complex view hierarchies and manual view updates. With Jetpack Compose, developers can express the desired UI state and let the framework handle the rendering and updating automatically. This results in cleaner and more readable code, improved productivity, and faster UI development cycles.

Jetpack Compose offers a modern and intuitive way to build UIs, enabling developers to create beautiful, responsive, and highly interactive Android applications with ease. Its importance lies in providing a more efficient and enjoyable development experience, enabling developers to focus on crafting exceptional user experiences while reducing boilerplate code and increasing code maintainability.

And the cherry on top? No more Android Fragments! We all had our fair share of pain trying to comprehend and debug the complex Fragment lifecycle. With Jetpack Compose, we can put an end to it! That’s right, Composables can take the Fragments’ place as reusable UI components that are tied up to an Activity.

Declarative UI building is the way that all front-facing applications are moving towards. It was first introduced by React in 2013. After its successful adoption in the web, it later moved to cross platform mobile platforms such as React Native and Flutter. Realizing its advantages, both native platforms, Android and iOS, have recently made a similar move by introducing Jetpack Compose and SwiftUI. Soon all other UI-creating tools will be a thing of the past.

Understanding RecyclerView and its Limitations

RecyclerView has long been a popular component in Android app development for efficiently displaying lists and grids. It offers flexibility and performance optimizations by recycling views as users scroll through the list, reducing memory consumption and improving scrolling smoothness. However, RecyclerView also comes with its limitations. Managing view recycling, implementing complex adapter logic, and supporting different view types for diverse list items can often lead to boilerplate code and increased development effort.

Additionally, RecyclerView lacks built-in support for animations and complex layout transitions, making it challenging to create dynamic and visually engaging user interfaces. These limitations have prompted developers to seek alternative solutions that offer a more streamlined and intuitive approach to building user interfaces. The Jetpack Compose Column and Lazy Column are coming to the rescue.

Analyzing the Existing RecyclerView Implementation

We are creating an application that fetches a list of playlists and displays them on the screen. The initial implementation is based on Android Fragment and Recycler View. Let's take a closer look at the code structure and components involved:

class PlaylistFragment : Fragment() {

    private val viewModel: PlaylistViewModel by viewModels()

    @Inject
    lateinit var playlistAdapter: PlaylistAdapter

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        // Inflate the layout for this fragment
        val view = inflater.inflate(R.layout.fragment_playlist, container, false)

        val playlistsRecyclerView: RecyclerView = view.findViewById(R.id.recyclerView)
        playlistsRecyclerView.layoutManager = LinearLayoutManager(requireContext())
        playlistsRecyclerView.adapter = playlistAdapter

        lifecycleScope.launchWhenStarted {
            viewModel.playlists.collect { playlists ->
                playlistAdapter.submitList(playlists)
            }
        }

        return view
    }
}

Our Fragment depends on the ViewModel, which exposes a Kotlin StateFlow that emits a list of playlists. We observe this StateFlow using the collect method, and upon receiving the updated list, we populate the RecyclerView with the playlist items by calling submitList. The RecyclerView is set up with a custom adapter that extends the RecyclerView Adapter and holds a list of playlists as its data source.

Below is the respective code for the RecyclerView Adapter:

class PlaylistAdapter : RecyclerView.Adapter<PlaylistAdapter.PlaylistViewHolder>() {

private var playlistItems: List<Playlist> = emptyList()

override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PlaylistViewHolder {
val itemView = LayoutInflater.from(parent.context)
.inflate(R.layout.item_playlist, parent, false)
return PlaylistViewHolder(itemView)
}

override fun onBindViewHolder(holder: PlaylistViewHolder, position: Int) {
val playlist = playlistItems[position]
holder.bind(playlist)
}

override fun getItemCount(): Int {
return playlistItems.size
}

inner class PlaylistViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
private val titleTextView: TextView = itemView.findViewById(R.id.titleTextView)
private val descriptionTextView: TextView = itemView.findViewById(R.id.descriptionTextView)

fun bind(playlist: Playlist) {
titleTextView.text = playlist.title
descriptionTextView.text = playlist.description
}
}

fun submitList(playlists: List<Playlist>) {
playlistItems = playlists
notifyDataSetChanged()
}
}

Within the adapter, we override the necessary methods, such as onCreateViewHolder, onBindViewHolder, and getItemCount to handle view creation, data binding, and determining the item count respectively. The item layout XML file defines the visual representation of each playlist item, containing the necessary views and bindings.

As we explained earlier, RecyclerView implementations require a lot of boilerplate and repetitive code.

Jetpack Compose Column vs Lazy Column

Before we jump into improving our implementation with Jetpack Compose, let’s discuss the differences between the Column and LazyColumn components.

In Jetpack Compose, both Column and LazyColumn are composable functions used to display vertical lists of UI elements. The primary difference lies in their behavior and performance optimization. The Column is suitable for a small number of items or when the entire list can fit on the screen. It lays out all its children regardless of whether they are currently visible on the screen, which may lead to performance issues with large lists. For short lists, rendering the items from the start offers increased performance.

On the other hand, LazyColumn is optimized for handling large lists efficiently. It loads only the visible items on the screen and recycles the off-screen items, similar to the traditional RecyclerView. This approach reduces memory consumption and enhances scrolling performance for long lists. Therefore, LazyColumn is the preferred choice when dealing with extensive datasets or dynamic content, ensuring a smooth and responsive user experience.

Setting Up Jetpack Compose in the Project

In order to use Jetpack Compose in our project, we need to complete the following setup steps:

Step 1: Add the Jetpack Compose dependency in build.gradle

plugins {
    id 'com.android.application'
    id 'kotlin-android'
}

android {
    // ...
    buildFeatures {
        compose true // Enable Jetpack Compose
    }

    composeOptions {
        kotlinCompilerExtensionVersion = "$version"
    }
    // ...
}

dependencies {
    implementation "androidx.compose.ui:ui:$compose_version" // Check for the latest version
    implementation "androidx.compose.material:material:$material_version" // Check for the latest version
    implementation "androidx.activity:activity-compose:$compose_version" // Check for the latest version
    // ...
}

Step 2: Jetpack Compose itself needs no application-level initialization. Any app-wide configuration, such as disabling dark mode, can go in your Application class's onCreate method.

class MyApplication : Application() {
override fun onCreate() {
super.onCreate()
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO) // Optional: Disable dark mode
}
}

You can now start adding Composables inside your MainActivity and leverage the power of Jetpack Compose!

class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
//TODO add a composable
}
}
}

Migrating RecyclerView to Lazy Column

Jetpack Compose belongs to the declarative UI family. In declarative UI, we receive the state of the data that needs to be displayed, and we programmatically create the views. The views are immutable, and their state cannot change. Every time the data state changes, everything is redrawn on the screen, and the views are recreated from scratch. In practice, behind the scenes there are smart diffing mechanisms that avoid redrawing elements whose data hasn't changed. But we, as developers, should write code as if everything is redrawn when the data changes.

Let’s see how we can refactor the playlists screen with Jetpack Compose.

As we promised earlier, with Jetpack Compose we can get rid of Android Fragments. Everything in Jetpack Compose, from a whole screen to a small UI element, is a composable. Composables are functions instead of objects. This reflects one of the paradigm shifts that declarative UI introduces: we are moving towards stateless functional programming instead of stateful object-oriented programming.

Let’s start by replacing our PlaylistFragment with a screen composable.

@Composable
fun PlaylistScreen(viewModel: PlaylistViewModel) {
val playlists by viewModel.playlists.collectAsState()

LazyColumn {
items(playlists) { playlist ->
PlaylistItem(playlist = playlist)
}
}
}

The PlaylistScreen composable represents the screen where the playlists are displayed. It collects the playlists from the PlaylistViewModel using collectAsState, so the composable recomposes automatically whenever the playlist data changes. The main component in PlaylistScreen is the LazyColumn, which is the Jetpack Compose equivalent of RecyclerView. It handles view recycling and renders only the visible items on the screen. Every time the playlists StateFlow emits another result, the PlaylistScreen composable automatically recomposes, and the UI is redrawn with the updated data.

Each list item is described by the composable below:

@Composable
fun PlaylistItem(playlist: Playlist) {
// Custom composable for rendering an individual playlist item
Column(
modifier = Modifier
.fillMaxWidth()
.padding(16.dp)
) {
Text(
text = playlist.title,
style = TextStyle(fontWeight = FontWeight.Bold, fontSize = 18.sp)
)
Spacer(modifier = Modifier.height(8.dp))
Text(text = playlist.description)
}
}

The PlaylistItem composable represents an individual playlist item. We use a Column composable to stack the title and description texts vertically. We apply styling and padding.

With Jetpack Compose's LazyColumn, we achieve a more concise and declarative way of displaying the list of playlists without the need for a separate adapter or view holder logic. The composable functions automatically handle the UI rendering and updates based on the provided state. This refactoring results in cleaner, more reusable, and more maintainable code, making UI development more intuitive and efficient. Furthermore, we don't have to handle the Fragment's complex lifecycle while retaining the benefit of reusable UI components.


Figure: The playlist with Compose's LazyColumn

Handling Clicks

Handling clicks in the Jetpack Compose Column component is very easy: we simply add the clickable modifier and call the code that we want to execute when the respective list item is clicked. Inside the click handler we have access to the selected playlist's model data.

 @Composable
fun PlaylistItem(playlist: Playlist) {
// Custom composable for rendering an individual playlist item
Column(
modifier = Modifier
.fillMaxWidth()
.clickable { /* Handle item click here */ }
.padding(16.dp)
) {
Text(
text = playlist.title,
style = TextStyle(fontWeight = FontWeight.Bold, fontSize = 18.sp)
)
Spacer(modifier = Modifier.height(8.dp))
Text(text = playlist.description)
}
}

Testing

As good engineers, we should always include automated tests that verify that our code works correctly. With Jetpack Compose, UI testing is much easier than before. Let’s see how we can test the PlaylistScreen after we migrate it to Jetpack Compose.

@ExperimentalCoroutinesApi
@get:Rule
val composeTestRule = createComposeRule()

@OptIn(ExperimentalCoroutinesApi::class)
@Test
fun playlistScreen_RenderList_Success() {
// Dummy data for testing
val playlists = listOf(
Playlist("Playlist 1", "Description 1"),
Playlist("Playlist 2", "Description 2"),
Playlist("Playlist 3", "Description 3")
)

// Create a TestCoroutineDispatcher to be used with Dispatchers.Main
val testDispatcher = TestCoroutineDispatcher()
val testCoroutineScope = TestCoroutineScope(testDispatcher)

// Launch the composable with TestCoroutineScope
testCoroutineScope.launch {
composeTestRule.setContent {
PlaylistScreen(viewModel = PlaylistViewModel(playlists))
}
}

// Wait for recomposition
composeTestRule.waitForIdle()

// Check if each playlist item is rendered correctly
playlists.forEach { playlist ->
composeTestRule.onNode(hasText(playlist.title)).assertIsDisplayed()
composeTestRule.onNode(hasText(playlist.description)).assertIsDisplayed()
}
}

In this test, we use the createComposeRule to set up the Compose test rule. We also create a TestCoroutineDispatcher and a TestCoroutineScope to simulate the background coroutine execution. Then, we launch the PlaylistScreen composable with dummy data for testing. After the recomposition, we use onNode to check if each playlist item's title and description are correctly displayed. Note that since we are testing UI, this is an instrumentation test that must be placed under the androidTest folder.

Let’s now see how we can test the PlaylistItem in isolation:

@get:Rule
val composeTestRule = createComposeRule()

@Test
fun playlistItem_Render_Success() {
val playlist = Playlist("Playlist 1", "Description 1")

composeTestRule.setContent {
PlaylistItem(playlist = playlist)
}

composeTestRule.onNode(hasText(playlist.title)).assertIsDisplayed()
composeTestRule.onNode(hasText(playlist.description)).assertIsDisplayed()
}

In this test, we use the createComposeRule to set up the Compose test rule. We then render the PlaylistItem composable with a dummy Playlist object. After rendering, we use onNode to check if the playlist title and description are correctly displayed.

These automated tests use Jetpack Compose's testing libraries to verify if the PlaylistScreen and PlaylistItem composables render as expected. They help ensure that the UI is correctly displayed and the appropriate data is rendered, providing confidence in the correctness of your composable functions. Remember to import the necessary dependencies and adapt the test code to your specific project setup.

Conclusion

Declarative UI is the future on both web and mobile platforms. All major players have already adopted it, and it looks like all the other UI generation tools will eventually become deprecated.

It introduces a paradigm shift in building UI where the views are immutable and their state cannot change. When the data state changes, the views are recreated from scratch to display the updated data.

Declarative UI building and Jetpack Compose specifically offer advantages such as simpler code that is easier to read, write and maintain. As a bonus, we can get rid of Fragments while maintaining the advantage of reusable UI components.

Shipbook offers fantastic Jetpack Compose debugging capabilities. You can add logs to monitor any UI rendering errors. Those will enable you to track, trace and fix every issue efficiently and effectively.

The sooner you start getting your hands on it, the better!

· 8 min read
Nikita Lazarev-Zubov

ConstraintLayout

Even though Jetpack Compose has become the recommended tool for building Android applications’ UI, the vast majority of applications still use traditional layout modes and their XML-based syntax. Android SDK provides us with many layout options. Some are already obsolete, but others remain popular and are widely used, including the newest offering: ConstraintLayout. Before we assess which options are actually effective, let’s briefly review the basics of the Android layout system.

Android Layout Basics

The fundamental building block of UI in Android is the View class, which represents a rectangular area on the screen. It’s also a base class for specific views like Button and ImageView. On top of them are ViewGroups—special Views that are used as containers for other views. ViewGroup is also the base class for various layout classes.

Android offers multiple layout options, including RelativeLayout, FrameLayout, and LinearLayout. However, back in 2018, ConstraintLayout was introduced, presumably, to rule them all. But does it live up to the hype? Let’s find out by looking at an example.

Android Layout Example

Let’s pretend ConstraintLayout doesn’t exist and build a UI for the login screen of our Layout Guru application using only pre-ConstraintLayout options.

Old Ways

Here’s what we’re going to build:


Figure 1: Layout Guru’s login screen

The view that we're going to implement consists of two pairs of input fields and text labels centered on the screen. According to the specification, each field takes up 60% of the screen width, and the text occupies the rest of the width. The application's logo is centered above the fields and uses 70% of the width. The “Sign In” button is positioned directly below the bottom input field and aligned to the right side of the screen.

Let’s start with one of the input text fields. The most straightforward way to implement it is with a horizontal LinearLayout. The layout_weight attribute will help us to set the desired width distribution. Here’s the layout’s XML:

    <LinearLayout 
xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="horizontal"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:weightSum="1">

<TextView
android:id="@+id/emailInputTitle"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="0.4"
android:text="@string/email_address"
android:textColor="@color/black" />

<EditText
android:id="@+id/emailInputField"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="0.6"
android:inputType="textEmailAddress"
android:autofillHints="Email"
android:hint="@string/email_address"
android:backgroundTint="@color/black" />

</LinearLayout>

The second input is similar, but uses a different inputType’s value. Both inputs can be wrapped with a vertical LinearLayout:

    <LinearLayout 
xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="wrap_content">

<include layout="@layout/email_field"/>
<include layout="@layout/password_field" />

</LinearLayout>

Finally, let’s combine the input fields with the rest of UI elements in a single RelativeLayout. For the first step of this process, we can add inputs to the layout and center them:

    <include
layout="@layout/login_form"
android:id="@+id/login_form"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_centerInParent="true" />

Then, we can add the “Sign In” button below the inputs, and align it to the right side of the screen:

    <Button
style="?android:attr/borderlessButtonStyle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:layout_below="@+id/login_form"
android:layout_alignParentEnd="true"
android:backgroundTint="@color/white"
android:text="@string/sign_in"
android:textColor="@color/black" />

The trickiest part, though, is the logo. Putting it above the inputs is easy, but there’s no straightforward way to make it take only 70% of the width of the screen using RelativeLayout. One way to achieve this is to put the image inside another LinearLayout, which has a convenient way of manipulating its child views’ weight (but doesn’t provide a way to position elements relative to each other):

    <LinearLayout
android:orientation="horizontal"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:gravity="center_horizontal"
android:layout_above="@+id/login_form"
android:weightSum="1">

<ImageView
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="0.7"
android:src="@drawable/logo"
android:contentDescription="@string/layout_guru" />

</LinearLayout>

And here’s an outline of the resulting XML:

    <RelativeLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_marginStart="10dp"
        android:layout_marginEnd="10dp">

        <LinearLayout ... >
            <ImageView ... />
        </LinearLayout>

        <include ... />

        <Button ... />

    </RelativeLayout>

Looking at the result, we can already draw one important conclusion: even simple pieces of UI require a lot of code and mixing-and-matching of various layout types.

ConstraintLayout

Let’s look at how the same screen could be implemented using ConstraintLayout.

This time, let’s start by putting two EditTexts and two TextViews in the center of the screen, and placing them relative to one another exactly as we did before using a combination of multiple LinearLayouts. Because the text input fields are higher than their text labels, we constrain the top one to the parent’s top, the bottom one to the parent’s bottom, and combine them into a packed chain. This will make them centered vertically as a whole. Then, the text fields can be aligned to the inputs’ baselines. This is the corresponding XML snippet:

    <TextView
android:id="@+id/emailInputTitle"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:text="@string/email_address"
android:textColor="@color/black"
app:layout_constraintBaseline_toBaselineOf="@id/emailInputField"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintWidth_percent="0.4" />

<EditText
android:id="@+id/emailInputField"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:autofillHints="Email"
android:backgroundTint="@color/black"
android:hint="@string/email_address"
android:inputType="textEmailAddress"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@+id/passwordInputField"
app:layout_constraintStart_toEndOf="@id/emailInputTitle"
app:layout_constraintVertical_chainStyle="packed"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintWidth_percent="0.6" />

<TextView
android:id="@+id/passwordInputTitle"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:text="@string/password"
android:textColor="@color/black"
app:layout_constraintBaseline_toBaselineOf="@id/passwordInputField"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintWidth_percent="0.4" />

<EditText
android:id="@+id/passwordInputField"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:autofillHints="Password"
android:backgroundTint="@color/black"
android:hint="@string/password"
android:inputType="textPassword"
app:layout_constraintTop_toBottomOf="@+id/emailInputField"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintStart_toEndOf="@id/passwordInputTitle"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintWidth_percent="0.6" />

The rest of the work is fairly straightforward. The image can be pinned to the top of the parent and to the top of the topmost input field. The relative width can be provided using the layout_constraintWidth_percent attribute:

    <ImageView
android:layout_width="0dp"
android:layout_height="wrap_content"
android:src="@drawable/logo"
android:contentDescription="@string/layout_guru"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@id/emailInputField"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintWidth_percent="0.7" />

Positioning of the Button is simple as well:

    <Button
style="?android:attr/borderlessButtonStyle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:backgroundTint="@color/white"
android:text="@string/sign_in"
android:textColor="@color/black"
app:layout_constraintTop_toBottomOf="@id/passwordInputField"
app:layout_constraintEnd_toEndOf="parent"/>

An outline of the resulting layout is self-explanatory:

    <androidx.constraintlayout.widget.ConstraintLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_marginStart="10dp"
        android:layout_marginEnd="10dp">

        <ImageView ... />

        <TextView ... />
        <EditText ... />

        <TextView ... />
        <EditText ... />

        <Button ... />

    </androidx.constraintlayout.widget.ConstraintLayout>

So, coming back to the original question—does ConstraintLayout replace other layouts? There is no doubt that one can build a complicated UI by means of ConstraintLayout alone. Looking at the resulting code, some might still prefer the traditional containers as being (arguably) easier to modularize and reuse, but a relatively complicated UI can be built with a simpler structure and less code. The more sophisticated the UI, the more evident this becomes, which only confirms the conclusion from the previous section.

Another advantage of ConstraintLayout is that it’s more straightforward to work with when building UI in Android Studio’s visual design tools instead of writing the XML by hand.

Before we jump to conclusions, though, let’s look at another important metric: performance.

Layout Rendering Performance

Android provides us with useful developer tools that can help to measure rendering efficiency, one of which is Profile GPU Rendering. The output of the tool for each layout implementation will look something like this:

GPU Rendering GPU Rendering ConstraintLayout

Figure 2: Profile GPU Rendering output for the two layouts, with ConstraintLayout on the right

The ConstraintLayout option, on the right, shows slightly shorter bars and fewer red spikes, which translates to less rendering overhead per frame.

Let’s also look at the output from another tool—Debug GPU Overdraw:

GPU Overdraw GPU Overdraw ConstraintLayout

Figure 3: Debug GPU Overdraw output for the two layouts, with ConstraintLayout again on the right

The results are, again, very similar, but the RelativeLayout/LinearLayout version (on the left) has more purple areas—which indicate areas that were redrawn once—and even one small green area indicating two redraws.

Although the difference between the two layouts appears insignificant at first glance, in real-world situations with a more complicated user interface, the penalty can easily become noticeable and result in choppy animations and visible delays. Let’s explore why that’s the case.

Double Taxation

The phenomenon of slower rendering of nested layouts is widely referred to in the Android community as double taxation. While the system renders the view hierarchy, it iterates over the elements multiple times before finalizing the size and position of each view: at the first pass, the layout system calculates each child’s position and size based on the child’s layout parameters; after that, the system makes another iteration, taking into account the layout parameters of the parent layout. The more levels of hierarchy, the bigger the overhead. The problem applies to RelativeLayout, horizontal LinearLayout, and GridLayout.
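
To make the extra measure passes visible, you can override onMeasure() in a custom view and log how many times it is called while the screen is laid out. This is only an illustrative sketch; the MeasureCountingTextView class and the log tag are hypothetical names, not part of the layouts above:

import android.content.Context
import android.util.AttributeSet
import android.util.Log
import androidx.appcompat.widget.AppCompatTextView

// Hypothetical debug view: counts and logs every measure pass, so you can see
// how often a parent such as RelativeLayout re-measures its children during
// a single layout traversal.
class MeasureCountingTextView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null
) : AppCompatTextView(context, attrs) {

    private var measureCount = 0

    override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
        measureCount++
        Log.d("MeasureCount", "onMeasure() call #$measureCount")
        super.onMeasure(widthMeasureSpec, heightMeasureSpec)
    }
}

Dropping this view into both versions of the screen lets you compare in Logcat how many measure passes each approach triggers.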

If performance problems with rendering begin to occur, one of the first things to try is eliminating nested layouts wherever possible. Another potential way to experience an improvement is to switch to ConstraintLayout, which is cheaper in terms of underlying calculation because of its “flat” nature.

Conclusion

When choosing between the newer ConstraintLayout and other, more “traditional” alternatives, several factors should be considered. First of all, it’s true that ConstraintLayout can serve as a universal solution for any type of UI. Additionally, for truly complicated user interfaces, ConstraintLayout can be a more lightweight and performant solution. On the other hand, in very simple cases where LinearLayout would provide a more straightforward solution, ConstraintLayout might be overkill.

Logging

If you need to log information related to rendering, Android has an interface called ViewTreeObserver.OnDrawListener that can be easily put to use together with a system to collect and store your log messages remotely, such as Shipbook.
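
A rough sketch of how that could look (the logDrawPasses helper name is made up here; in a real app you would throttle the output and forward it to your remote log collector, since draw callbacks fire on every frame):

import android.util.Log
import android.view.View
import android.view.ViewTreeObserver

// Minimal sketch: log every draw pass of a given view.
fun logDrawPasses(view: View) {
    val listener = ViewTreeObserver.OnDrawListener {
        Log.v("Rendering", "Draw pass for view with id=${view.id}")
    }
    view.viewTreeObserver.addOnDrawListener(listener)
}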

· 9 min read
Nikita Lazarev-Zubov

Exception Handling

The first version of Java was released in 1995 based on the great idea of WORA (“write once, run anywhere”) and a syntax similar to C++ but simpler and human-friendly. One notable language invention was checked exceptions—a model that later was often criticized.

Let’s see if checked exceptions are really that harmful and look at what’s being used instead in contemporary programming languages, such as Kotlin and Swift.

Good Ol’ Java Way

Java has two types of exceptions, checked and unchecked. The latter are runtime failures, errors that the program is not supposed to recover from. One of the most notable examples is the notorious NullPointerException.

The fact that the exception is unchecked doesn’t mean you can’t handle it:

Object object = null;
try {
    System.out.println(object.hashCode());
} catch (NullPointerException npe) {
    System.out.println("Caught!");
}

The difference between a checked and an unchecked exception is that if a method can throw the former, it must be included in the method’s declaration:

void throwCustomException() throws CustomException {
    throw new CustomException();
}

static class CustomException extends Exception { }

The compiler will make sure that it’s handled, sooner or later. The developer must wrap the throwCustomException() call in a try-catch block:

try {
    throwCustomException();
} catch (CustomException e) {
    System.out.println(e.getMessage());
}

Or pass it further:

void rethrowCustomException() throws CustomException {
    throwCustomException();
}

What’s Wrong with the Model

Checked exceptions are criticized for forcing people to explicitly deal with every declared exception, even if it’s known to be impossible. This results in a large number of boilerplate try-catch blocks, the only purpose of which is to silence the compiler.

Programmers tend to work around checked exceptions by either declaring the method with the most general exception:

void throwCustomException() throws Exception {
    if (Calendar.getInstance().get(Calendar.DAY_OF_MONTH) % 2 == 0) {
        throw new EvenDayException();
    } else {
        throw new OddDayException();
    }
}

Or handling it using a single catch-clause (also known as Pokémon exception handling):

void throwCustomException()
        throws EvenDayException, OddDayException {
    // ...
}

try {
    throwCustomException();
} catch (Exception e) {
    System.out.println(e.getMessage());
}

Both ways lead to a potentially dangerous situation in which all possible exceptions are sifted together, including ones that are not supposed to be dismissed. Error-handling blocks of code also become meaningless or fictitious, if not empty.

Even if all exceptions are meticulously dealt with, public methods swarm with various throws declarations. This means all abstraction levels are aware of all exceptions that are thrown around them, compromising the principle of information hiding.

In some parts of the system, where multiple throwing APIs meet, a problem with scalability might emerge. You call one API that raises one exception, then call another that raises two more, and so on, until the method must deal with more exceptions than it reasonably can. Consider a method that must deal with these two:

void throwsDaysExceptions() throws EvenDayException, OddDayException {
    // ...
}

void throwsYearsExceptions() throws LeapYearException {
    // ...
}

It's doomed to have more exception-handling code than business logic:

void handleDate() {
    try {
        throwsDaysExceptions();
    } catch (EvenDayException e) {
        // ...
    } catch (OddDayException e) {
        // ...
    }
    try {
        throwsYearsExceptions();
    } catch (LeapYearException e) {
        // ...
    }
}

And finally, the checked exception approach is claimed to have a problem with versioning. Namely, adding a new exception to the throws section of a method declaration breaks client code. Consider the throwing method from the example above. If you add another exception to its throws declaration, the client code will stop compiling:

void throwException()
        throws EvenDayException, OddDayException, LeapYearException {
    // ...
}

try {
    throwException(); // Unhandled exception: LeapYearException
} catch (EvenDayException e) {
    // ...
} catch (OddDayException e) {
    // ...
}

The Kotlin Way

Sixteen years after Java was first released, in 2011, Kotlin was born from the efforts of JetBrains, a Czech company founded by three Russian software engineers. The new programming language aimed to become a modern alternative to Java, mitigating all its known flaws.

I don’t know of any programming language that followed Java in implementing checked exceptions, Kotlin included, despite the fact that it targets the JVM. In Kotlin, you can throw and catch exceptions much as in Java, but you’re not required to declare an exception in a method’s signature. (In fact, you can’t):

class CustomException : Exception()

fun throwCustomException() {
    throw CustomException()
}

fun rethrowCustomException() {
    try {
        throwCustomException()
    } catch (e: CustomException) {
        println(e.message)
    }
}

Even catching is up to the programmer:

fun rethrowCustomException() {
    throwCustomException() // No compilation errors.
}

For interoperability with Java (and some other programming languages), Kotlin introduced the @Throws annotation. Although it’s optional and purely informative on the Kotlin side, it’s needed if Java code is expected to catch the exception thrown by a Kotlin method:

@Throws(CustomException::class)
fun throwCustomException() {
    throw CustomException()
}

From One Extreme to Another

It may seem that programmers can finally breathe easy, but, personally, I think that by solving the original problem, this new approach—Kotlin’s exceptions model—creates another. Unscrupulous developers are free to entirely ignore all possible exceptions. Nothing stops them from quickly wrapping a handful of exceptions with a try-catch expression and shipping the result to their end users, with a prayer. Otherwise, exceptions that were never covered are simply discovered by end users.

Even if you’re a disciplined engineer, you’re not safe: neither the compiler nor the API will alert you to exceptions lurking inside. There’s no reliable way to make sure that all possible errors are being properly handled.

You can only guard yourself within your own code, patiently annotating your methods with @Throws. Though even in this case the compiler will tell you nothing, and it’s easy to forget one exception or another.

The Swift Way

Swift first appeared publicly a little later, in 2014. And again, we saw something new. The error-handling model itself lies somewhere between Java’s and Kotlin’s, but how it works together with the language’s optionals is incredible. But first things first.

Of course, Swift has runtime, “unchecked”, errors—an array index out of range, a force-unwrapped optional value turned out to be nil, etc. But unlike Java or Kotlin, you can’t catch them in Swift. This makes sense since runtime exceptions can only happen because of a programming mistake, or intentionally (for instance, by calling fatalError()).

The rest of the exceptions are errors that are explicitly thrown in code. All methods that throw anything must be marked with the throws keyword, and all code that calls such methods must either handle errors or propagate them further. Looks familiar, doesn’t it? But there’s a catch.

Fly in the Ointment

Let’s look at an example from above:

func throwError() throws {
    if (Calendar.current.component(.day, from: Date()) % 2 == 0) {
        throw EvenDayError()
    } else {
        throw OddDayError()
    }
}

As you can see, you don’t declare specific errors that a method can throw; you’re only required to mark it as throwing something. The consequence of this is that you, again, don’t really know what to catch.

Unfortunately, the code below won’t compile:

do {
    /*
     Errors thrown from here are not handled because the enclosing
     catch is not exhaustive
     */
    try throwError()
} catch is EvenDayError {
    print(String(describing: EvenDayError.self))
} catch is OddDayError {
    print(String(describing: OddDayError.self))
}

You always have to add Pokémon handling:

do {
    try throwError()
} catch is EvenDayError {
    print(String(describing: EvenDayError.self))
} catch is OddDayError {
    print(String(describing: OddDayError.self))
} catch {
    print(error)
}

In fact, the Swift compiler doesn’t care about specific error types that you try to catch. You can even add a handler for something entirely irrelevant:

do {
    try throwError()
} catch is EvenDayError {
    print(String(describing: EvenDayError.self))
} catch is IrrelevantError {
    print(String(describing: IrrelevantError.self))
} catch {
    print(error)
}

Or you can have only one default catch block that covers everything:

do {
    try throwError()
} catch {
    print(error)
}

Another bad thing about the approach is that, without a workaround, you can’t handle some errors and let others propagate. The only way to implement such behavior is to catch the error you’re interested in propagating and throw it again:

func rethrow() throws {
    do {
        try throwError()
    } catch is EvenDayError {
        throw EvenDayError() // Here's the trick.
    } catch is IrrelevantError {
        print(String(describing: IrrelevantError.self))
    } catch {
        print(error)
    }
}

Ointment

In my opinion, Swift’s strongest merit is its optionals system, which cooperates with all aspects of the language. If you don’t care about thrown errors, instead of fictitious catch-blocks, you can always write try?. Execution of the method will stop the moment an error is thrown, without propagating it further:

try? throwError()

If you’re feeling bold, you can use try! instead of try?. It lets you omit the do-catch block, but the program will crash at runtime if an error is actually thrown:

try! throwError()

This mechanism also allows converting a throwing call into a value: try? will give you an optional, whereas try! has an effect similar to force-unwrapping:

func intOrError() throws -> Int {
    // ...
}

let optionalInt = try? intOrError() // Optional(Int)
let dangerousCall = try! intOrError() // Int or die!

Conclusion

Personally, I find Kotlin’s way, ahem, a failure. I can understand why Kotlin developers decided not to follow Java in its way of checked exceptions, but ignoring exceptions entirely, without a hint of static checks, is too much.

On the other hand, is the Java way really that harmful? No mechanism can defend software from undisciplined programmers. Even the best idea can be distorted and misused. But applying Java’s principles as designed can lead to good results.

Connecting two levels of abstraction, you can catch errors from one level and re-throw new types of errors to propagate them to the next level. You can catch several types of errors, “combine” them into a single, higher-level one, and throw it for further handling. This can help mitigate problems with encapsulation and scalability. For instance:

void throwCustomException() throws CustomException {
    try {
        throwDayException();
    } catch (EvenDayException | OddDayException e) {
        throw new CustomException();
    }
}

What Java has lacked from the very beginning is Swift’s optionality system and a syntax that binds exception handling to optional values. Coupled with fully static checks of thrown exceptions, I believe this would make a very strong model that could satisfy even the grouchiest programmers. Although it would require breaking changes in any of the aforementioned languages, I personally believe it would be a game-changing improvement in code safety.
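
As a small aside, Kotlin’s standard library already lets you approximate Swift’s try? and try! with runCatching, although nothing forces a caller to use it. A hedged sketch, reusing a hypothetical even-day exception in the spirit of the examples above:

import java.util.Calendar

class EvenDayException : Exception()

// Hypothetical throwing function, mirroring Swift's intOrError().
fun intOrError(): Int {
    if (Calendar.getInstance().get(Calendar.DAY_OF_MONTH) % 2 == 0) {
        throw EvenDayException()
    }
    return 42
}

fun main() {
    // Roughly Swift's `try? intOrError()`: a failure becomes null.
    val optionalInt: Int? = runCatching { intOrError() }.getOrNull()

    // Roughly Swift's `try! intOrError()`: a failure is rethrown.
    val dangerousCall: Int = runCatching { intOrError() }.getOrThrow()

    println("$optionalInt / $dangerousCall")
}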

And if you want to improve your app stability right now, Shipbook is already here for you! It proactively inspects your app, catches exceptions and allows you to analyze them even before your users stumble upon the problem.

· 13 min read
Donald Le

Unit Testing in Android Development

Introduction

Unit testing entails the testing of the smallest parts of software, such as methods or classes. The main role of unit testing is to make sure the isolated part works as expected without integrating with third-party software, databases, or any dependency. To achieve this, software developers implement multiple testing techniques, like using stubs, mocks, dummies, and spies.

This post will show you why you should perform unit testing and how to implement it in your Android development project.

Benefits of Unit Testing

Unit testing allows you to catch software bugs early in the software development process, instead of QA finding them during integration or end-to-end testing, or, even worse, in the production environment. Moreover, as you develop your product, more features are added, meaning integration tests and end-to-end tests alone cannot cover all the corner cases. With unit testing, more corner cases are covered, which ensures your product meets the expected quality.

Benefits of Test-Driven Development (TDD)

Unit testing often goes along with the test-driven development (TDD) methodology, where developers first write the test, then write the feature code. At first, the tests will fail because the feature is not yet implemented. When the feature code is implemented, the tests will become green.
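
For example, in a Kotlin project the first step might look like the test below, written before the production function exists (the names here are made up for illustration). It fails at first and turns green once isPalindrome() is implemented:

import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

// Step 1 (red): written first, while isPalindrome() does not exist yet
// or still returns the wrong result.
class PalindromeTest {
    @Test
    fun recognizesPalindromes() {
        assertTrue(isPalindrome("level"))
        assertFalse(isPalindrome("movie"))
    }
}

// Step 2 (green): the simplest implementation that makes the test pass.
fun isPalindrome(text: String): Boolean = text == text.reversed()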

The huge benefit of TDD is that a software team can make sure the product is built to meet the expected requirements, as demonstrated by the tests. Moreover, because developers write the tests first, they need to spend more time thinking about the product and what features it has to cover; this way, the product being built tends to have a higher quality.

Also, writing tests before writing product code will prevent developers from needing to refactor the code just to be able to write tests for it. For example, in the Go language, if the developers do not implement code with an interface, it’s very hard to write tests later on.

Example Application to Demonstrate Unit Testing

To better understand how to apply proper testing techniques for Android applications, let’s get your hands dirty by building a real application and then write tests for it. The application will show a list of popular movies for users to choose from as suggestions for their weekly movie night. Check out this GitHub repository for the full application code.

After opening the application, users will see a list of popular movies:

The movie suggestion application shown on a virtual device

Figure 1: The movie suggestion application shown on a virtual device

You can then tap on a movie for details like its plot summary and cast:

Details for the movie “Black Rock”

Figure 2: Details for the movie “Black Rock”

Unit Testing (Local Testing)

The unit tests of our application will be run by a popular test runner called JUnit, a unit-testing framework for JVM languages like Java and Kotlin. If you’re not familiar with JUnit, you can learn more about it here. It helps you structure your tests: what needs to be done first, what should be done last to clean up data, and which data should be collected for the test report.

An Example of a Simple Unit Test

Okay, now let’s write an example unit test for the application.

We have the MovieValidator class in the utils package, which has the function isValidMovie:

import android.text.Editable
import android.text.TextWatcher
import java.util.regex.Pattern

class MovieValidator : TextWatcher {

    internal var isValid = false

    override fun afterTextChanged(editableText: Editable) {
        isValid = isValidMovie(editableText)
    }

    override fun beforeTextChanged(s: CharSequence, start: Int, count: Int, after: Int) = Unit

    override fun onTextChanged(s: CharSequence, start: Int, before: Int, count: Int) = Unit

    companion object {
        private val MOVIE_PATTERN = Pattern.compile("^[a-zA-Z]+(?:[\\s-][a-zA-Z]+)*\$")

        fun isValidMovie(movie: CharSequence?): Boolean {
            return movie != null && MOVIE_PATTERN.matcher(movie).matches()
        }
    }
}

To write the unit test for the function isValidMovie, we will first create a test class called MovieValidatorTest in the test folder. Then, we will need to import the MovieValidator class to test the isValidMovie function inside it.

The MovieValidatorTest will look like the following:

import com.fernandocejas.sample.core.functional.MovieValidator
import mu.KotlinLogging
import org.junit.After
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Before
import org.junit.Test

class MovieValidatorTest {

    private val logger = KotlinLogging.logger {}

    @Before
    fun setUp() {
        logger.info { "Starting the isValidMovie test" }
    }

    @Test
    fun isValidMovie() {
        assertTrue(MovieValidator.isValidMovie("The lord of the rings"))
        assertFalse(MovieValidator.isValidMovie("name@email"))
    }

    @After
    fun tearDown() {
        logger.info { "Finishing the isValidMovie test" }
    }
}

In the test file above, we implemented one test case to check the validity of the movie name. We also applied the Before and After annotations to add logging information, so that we know when the test is about to start and when it has finished.

The Before and After annotations help us structure our test scenario better. The method annotated with Before is executed before every test, and the method annotated with After is executed after every test. Developers often use these to set up data for tests and then clean it up after testing is complete.

Note: In order to install the logging library, we need to add the following dependency to our Gradle configuration file.

implementation 'io.github.microutils:kotlin-logging-jvm:2.0.11'

When we run the test, we will see results as below:

Tests passed for movie validator test case

Figure 3: Tests passed for movie validator test case

The example unit test we just went over is very simple. But in real-world applications, you’ll need to deal with all kinds of dependencies and third-party APIs. How can we write tests for functions that interact with third-party dependencies?

When implementing unit testing, the best practice is to not deal with the real thing, like the real database, the real response from another API that we take as input for the function, or any other third-party dependency. The reason for this is that in unit testing we want to isolate the tests so that each one exercises a single unit. We could test against the real database or third-party dependencies, but this would lead to flaky tests. Instead, we’ll use “test doubles,” objects that stand in for the real objects when we implement the test. There are five types of test doubles: fake, dummy, stub, spy, and mock.

In this article, we’ll review the stub and mock types and use them for our example application; a minimal hand-rolled sketch of both follows the list below.

  • Stubs provide fake data to the test.
  • Mocks check whether the expectation of the unit we are testing has been met.
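
Before reaching for a library, here is a minimal hand-rolled illustration of the difference. The GreetingRepository interface is hypothetical and not part of the sample app:

interface GreetingRepository {
    fun greetingFor(name: String): String
}

// Stub: only feeds canned data into the code under test.
class StubGreetingRepository : GreetingRepository {
    override fun greetingFor(name: String) = "Hello, $name"
}

// Mock: also records interactions so the test can verify expectations afterwards.
class MockGreetingRepository : GreetingRepository {
    val receivedNames = mutableListOf<String>()

    override fun greetingFor(name: String): String {
        receivedNames += name
        return "Hello, $name"
    }
}

In practice you rarely write these classes by hand; a mocking library generates them for you, which is what we do next with MockK.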

How to Create Stubs and Mocks in a Sample Project

To better understand how to use a stub and a mock, let’s apply these techniques for writing unit tests in our movie suggestion app using MockK.

MockK is a well-known mocking library for Kotlin, with native support for the language. Users who are fond of Kotlin’s syntactic sugar will still be able to enjoy it with MockK. Moreover, since classes and properties in Kotlin are final by default, mocking with Mockito is considerably hard in Kotlin; with MockK, users don’t have to deal with that challenge. To learn more about the benefits of using MockK over Mockito, check out this article.

To include the MockK library in your Android project, we need to add this line into the build.gradle.kts file:

testImplementation(TestLibraries.mockk)

The TestLibraries.mockk value and its version are defined in Dependencies.kt, in the TestLibraries and Versions objects respectively:

const val mockk = "io.mockk:mockk:${Versions.mockk}" // in the TestLibraries object
const val mockk = "1.10.0"                           // in the Versions object

And that’s it.

So, let’s say we’re trying to test the class GetMovieDetails.

Initially, we usually implement the code without dependency injection like the following:

class GetMovieDetails : UseCase<MovieDetails, Params>() {

    private val moviesRepository = MoviesRepository()

    override fun run(params: Params) = moviesRepository.movieDetails(params.id)

    data class Params(val id: Int)
}

The MoviesRepository class is defined below:

class MoviesRepository {

    lateinit var context: Context
    lateinit var retrofit: Retrofit

    private val networkHandler = NetworkHandler(context)
    private val service = MoviesService(retrofit)

    fun movieDetails(movieId: Int): Either<Failure, MovieDetails> {
        return when (networkHandler.isNetworkAvailable()) {
            true -> request(
                service.movieDetails(movieId),
                { it.toMovieDetails() },
                MovieDetailsEntity.empty
            )
            false -> Left(NetworkConnection)
        }
    }
}

But writing code like this makes unit testing this class nearly impossible, since we cannot mock the MoviesRepository dependency. Technically, we could still write tests, but we’d need to use the real movie database, which would lead to slower tests and couple them to third-party dependencies. Moreover, the problem with third-party dependencies is that they might fail for reasons that have nothing to do with our code.

The best practice when it comes to writing code that can be tested is applying dependency injection, which you can learn more about here.

First, we need to change the MoviesRepository class into an interface. The code for the MoviesRepository interface looks like this:

interface MoviesRepository {

    fun movies(): Either<Failure, List<Movie>>

    fun movieDetails(movieId: Int): Either<Failure, MovieDetails>

    class Network @Inject constructor(
        private val networkHandler: NetworkHandler,
        private val service: MoviesService
    ) : MoviesRepository {

        override fun movieDetails(movieId: Int): Either<Failure, MovieDetails> {
            return when (networkHandler.isNetworkAvailable()) {
                true -> request(
                    service.movieDetails(movieId),
                    { it.toMovieDetails() },
                    MovieDetailsEntity.empty
                )
                false -> Left(NetworkConnection)
            }
        }

        // ...
    }
}

Then, the GetMovieDetails class will be written as below, with MoviesRepository injected through its constructor:

class GetMovieDetails @Inject constructor(
    private val moviesRepository: MoviesRepository
) : UseCase<MovieDetails, Params>() {

    override fun run(params: Params) = moviesRepository.movieDetails(params.id)

    data class Params(val id: Int)
}

In order to test this class without calling the real database, we need to mock the MoviesRepository class using MockK:

@MockK private lateinit var moviesRepository: MoviesRepository

The test function for the movieDetails function will be written as below:

class GetMovieDetailsTest : UnitTest() {

    private lateinit var getMovieDetails: GetMovieDetails

    @MockK private lateinit var moviesRepository: MoviesRepository

    @Before fun setUp() {
        getMovieDetails = GetMovieDetails(moviesRepository)
        every { moviesRepository.movieDetails(MOVIE_ID) } returns Right(MovieDetails.empty)
    }

    @Test fun `should get data from repository`() {
        getMovieDetails.run(GetMovieDetails.Params(MOVIE_ID))
        verify(exactly = 1) { moviesRepository.movieDetails(MOVIE_ID) }
    }

    companion object {
        private const val MOVIE_ID = 1
    }
}

In the setUp step, annotated with @Before, we initialize the getMovieDetails variable.

Then, in the test function, we call the run function with GetMovieDetails.Params(MOVIE_ID) as input. After that, we use the verify function, provided by MockK, to check whether the call was actually made exactly one time.

Now, we will run the test to see whether it works. To run the test in Android Studio, click the green run button next to the test method:

Log for the unit test run when testing GetMovieDetails class

Figure 4: Log for the unit test run when testing GetMovieDetails class

Advantages and Disadvantages of Unit Testing

With unit tests in place, we can be confident that our logic behaves as expected and that we will be notified if any change breaks it. In addition, unit tests run blazingly fast. Still, we can’t be sure that users can interact with the application as we expect.

That’s where UI testing comes into play.

UI Testing (Instrumentation Testing)

Traditionally, automated end-to-end testing is done in a black-box way, meaning we create a separate project for end-to-end testing of the application. We need to find locators for the elements in our application and interact with them through a framework such as Appium or UIAutomator. However, this approach is more time-consuming, since we have to redefine the locators of the elements in our application; also, Appium is pretty slow when interacting with a real mobile application.

To avoid the drawbacks of Appium, we’ll write instrumentation tests with the help of the Espresso and AndroidX test frameworks.

How to Implement UI Testing in a Project

Let’s say we want to check whether the movie list button is shown and is clickable.

The MoviesActivity is defined as follows:

class MoviesActivity : BaseActivity() {

    companion object {
        fun callingIntent(context: Context) = Intent(context, MoviesActivity::class.java)
    }

    override fun fragment() = MoviesFragment()
}

The actual logic and how the movies page is rendered is defined in the MoviesFragment class:

@AndroidEntryPoint
class MoviesFragment : BaseFragment() {
    // ...

    private fun loadMoviesList() {
        emptyView.invisible()
        movieList.visible()
        showProgress()
        moviesViewModel.loadMovies()
    }

    private fun renderMoviesList(movies: List<MovieView>?) {
        moviesAdapter.collection = movies.orEmpty()
        hideProgress()
    }

    // ...
}

The test class will be written like the following:

class MainApplicationTest {

    @get:Rule
    val mActivityRule = ActivityTestRule(MoviesActivity::class.java, true, false)

    @Before
    fun setUp() {
        mActivityRule.launchActivity(null)
        Intents.init()
    }

    @After
    fun tearDown() {
        Intents.release()
    }

    @Test
    fun clickMovieListButton() {
        val movieListButton = onView(withId(R.id.movieList))
        movieListButton.perform(click())
        val moviePoster = onView(withId(R.id.moviePoster))
        moviePoster.check(matches(isDisplayed()))
    }
}

In the test class, we need to specify the activity of the application we want to run, in this case, MoviesActivity.

  @get:Rule
val mActivityRule = ActivityTestRule(MoviesActivity::class.java, true, false)

Before the test is run, the activity will be initialized.

    @Before
    fun setUp() {
        mActivityRule.launchActivity(null)
        Intents.init()
    }

Then, after the test is done, we release the Espresso Intents state that was initialized in setUp().

    @After
    fun tearDown() {
        Intents.release()
    }

For the test itself, we find the movieList element, click on it, and then check that the movie poster is displayed.

    @Test
    fun clickMovieListButton() {
        val movieListButton = onView(withId(R.id.movieList))
        movieListButton.perform(click())
        val moviePoster = onView(withId(R.id.moviePoster))
        moviePoster.check(matches(isDisplayed()))
    }

After running the test by clicking on the green button, we can see the test has passed:

Test result for instrumentation testing

Figure 5: Test result for instrumentation testing

Advantages and Disadvantages of Instrumentation Tests

So, with instrumentation tests, we can be confident that users can interact with the UI and the functionalities work as expected per our business requirements. And the speed is pretty amazing.

But the drawback of instrumentation tests is that after every change in production code, you will need to change the test code since the test is affected by both the user interface and the business logic.

Conclusion

Creating a working Android application is not a hard task. But to be able to create a high-quality application that’s reliable over time is very difficult. You need to run a lot of tests, from unit tests and integration tests to end-to-end tests. Each test has its own role to play in the success of your product. Creating tests not only ensures high quality, but also gives developers the confidence they need to add new features later on without worrying that new code will break existing functionality. So make sure you implement all of them before releasing your application on the market.

Still, writing tests is a daunting task, so you also need to take your time implementing them. Moreover, debugging tests to know why they failed requires much time and effort too. If you’re having a hard time debugging your tests, or even get stuck in them, check out Shipbook, a logging platform that can help you quickly debug issues in your tests. Shipbook provides numerous resources and documents to help you test your applications, along with logs to easily discover the root cause of that bug you’re struggling with.

· 13 min read
Yossi Elkrief
Elisha Sterngold


Interview with Mobile Lead at Nike, Author, Google Developers Group Beer Sheva Cofounder, Yossi Elkrief

Thank you for being with us today Yossi, would you like to begin with sharing a little bit about your position at Nike, and what you do?

I joined Nike a bit more than two years ago. I am head of mobile development in the Digital Studio of Innovation. It is a bit different from regular app development, but we still work closely with all the teams at WHQ, Nike headquarters in the US, as well as in Europe, China, and Japan. We really work across the globe, and we do some pretty cool things in the realm of innovation. We develop new technologies and try to find ways to harness new technologies or algorithms to help Nike provide the best possible service to our consumers.

I have experience in mobile for the past 13, almost 14 years now. I’ve been involved in Android development since their very first Beta release, even a bit before that. I also worked on iOS throughout the years, and I’ve been involved in a couple of pretty large consumer based companies and startups.

At Nike we have a few apps, such as: Nike Training Club (NTC), Nike Running Club (NRC), and the Nike app made for consumers, where you can purchase Nike’s products.

We work with all of those teams and other teams within Nike, on various apps as well as in-house apps that are specific creations of our studio, where we work on creating new innovative features for Nike.

One major project that is currently rolling out is Nike Fit, recently launched in China and Japan. Nike Fit is aimed at helping people shop for Nike shoes online, and hopefully for other apparel in the near future.

How is it working for Nike, as a clothing company, with a background of working mainly for tech companies?

Nike is a company with so much more technology than people realize. We are not just a shoe company or a fashion company.

Our mission is to bring inspiration and innovation to every athlete1 in the world.

We use a tremendous amount of technology to transform a piece of fabric into a piece in the collection of the Nike brand. Nike may be more faceforward than companies that I’ve worked for in the past, but there is a vast array of technologies that we work with in Nike, or work on building upon, to make Nike the choice brand for our customers, now and in the future.

One of the highest priorities at Nike is the athlete consumers. Because Nike is a brand that is specifically designed and geared toward athletes. We therefore try to keep all of Nike’s athletes at the forefront in terms of their importance to the company. Consumer facing, most of Nike’s products are not the apps. All of my previous experiences in app companies or technical companies that provide a service are pretty different from what I focus on now at Nike. So everything we do at Nike, all the services we provide, are to help serve athletes in their day to day activities, whether this be in sports for professional athletes, or for people with hobbies like running, football, or cycling and so on.

Everything I focus on has to do with providing athletes with better service while choosing their shoes, pants, or all the equipment they need, and that Nike provides so they can best utilize their skills.

Can you tell us a bit about what went into writing your book “Android 6 Essentials”? Do you feel that writing the book improved your own skills as a developer?

I write quite a lot. I don’t get to write as many technical manuals as I’d like, but I do write quite a few documentations, technical documents, and blog posts. Writing the book was a different process, but I really wanted to engage a technical audience, as this audience is very different from that of a poem, or story, which is less for use and more for enjoyment.

Writing the book made me a better person in general, because I was working full time in my position at the company I was with at the time, and then, on top of all of my regular responsibilities, in order to keep to schedule and hit all of the milestones and points that I wanted to cover in my book, I had to be very organised and devoted to the project. I had to juggle work, and family, and all of my other responsibilities as well, so I divided my time to make sure I could meet all of my goals. The process was really quite fun because in the end I had something that I built and created from scratch.

I would recommend it, because it gives you an added value that no one else will have, and in the end you have a final product that you can show someone, and say that it was your creation. I think the whole process makes you a better developer, and it helps you understand technology better, because you need to understand technology at a level and to a degree of depth in order to then explain it in writing to someone else.

You also took part in co-founding Google Developers Group Beer Sheva, which is also about sharing knowledge and bettering yourselves as developers, can you tell us a little about that process?

The main aim of Google Developers Group is sharing knowledge. When we share knowledge we can learn from everyone. Even if I built each of the pieces of a machine myself, when I share it with someone else, they can always bring to light something that I was unaware of; some new and interesting way of using it. Sharing with people helps more than just the basic value of assistance. Finding a group of peers that share the same desire or passion for technology, knowledge, and information, this is a key concept in growth, for everyone in general.

On that note, we are seeing an interesting trend in development: even though mobile apps are becoming increasingly more complex, the industry has succeeded in reducing the occurrence of crashes. Is that your experience as well and if so, what are, in your eyes, the main reasons for this shift?

It’s really a two part answer.

Firstly, both Google and Apple are providing a lot more information, and are focusing a lot more on user experience in terms of crashes, app not responding, bad reviews etc. Users are more likely to write a good review if you provide more information, or create a better service with more value for them. Consumers in general are more interested in using the same app, the same experience, if they love it. So they will happily provide you with more information so that you can solve its issues, and keep using your app rather than trying something new. We call them Power Users or Advanced Users. With their help, we can keep the app updated and solve issues faster.

The second part of the answer is that all of the tools, ID integrations, shared knowledge, documentation, has been vastly improved. People understand now that they need to provide a service that runs smoothly with as little interference as possible for the user and they do their part to make sure that these issues remain as low as possible in the apps. We want a crash rate lower than 0.1%. So we work 90% of our time to build an infrastructure that will remain robust and maintain top quality, with a negligible amount of crashes, exceptions, and app issues, in general, that will harm and affect the user experience.

Do you believe that all bugs should always be fixed? If not, do you have ways of defining which ones do not need to be fixed?

As a perfectionist, yes, we want to solve all of the app issues. But in terms of real life, we work with a simple process. We look at the impact of the bug. How many users are being impacted? What is the extent to the impact? What does the user have to do in order to use the service? Is it just a simple work around or is it preventing the user from using an important part of the app?

Do you close insignificant issues, or are they kept open in a back office somewhere?

No, so we are very careful and organized about all of the issues that we have in the system. We document every issue with as much information as possible. Sometimes you can fix an issue with dependency and provide a new version for some dependencies and then because of all the interactions of the code versions you have some issues being solved even though you didn’t do anything. So for example, this doesn’t happen much, but sometimes we have issues in the backlog that can remain unsolved for more than a month.

What is your view on QA teams? Some companies have come out saying that they don’t use QA teams and instead move that responsibility to the developer team. Do you believe that there should be a QA Team?

I believe that companies should have a Quality Assurance team, which is sometimes also called QE, Quality Engineering. I think as a developer, working on various platforms, when you implement a new feature or service, give or take on the architecture of the technology, the actual issue can be quite difficult to find. This requires a different point of view than the developer. When you develop or write the code, you have a different point of view in mind then users often have when it comes to using the app. 90% of the time users will actually often behave differently than developers anticipated when writing the code. So when we design the feature, sometimes we need to understand a bit better how users will interact. We have a product team that we involve and engage on an hourly basis. The same goes for QA. We use QA in our Innovation Studio as well, but the same goes for our apps. We are constantly engaging QA to see how to both resolve issues and understand better how the user will interact with the app.

What is your position on Android Unit testing: How are the benefits compared to the efforts?

With testing in general, some will say it's not necessary at all and will just rely on QA. I don't side with either. I think it is a mix. You don't need to unit test every line of code. I think that is excessive. Understanding the architecture is more important than unit testing. It's more important to understand how and why the pieces of the puzzle interact, and to understand why to choose one flow over another, than to just unit test every function. Sometimes pieces of the puzzle are better understood with unit testing, but it is not necessary to unit test everything. That said, the majority of our code does undergo UI and UX testing.

What do you think about the fact that with Kotlin, you don’t state the exception in the function, this is unlike Java or Swift, which both require it. Which approach do you prefer?

I think for each platform there are different methods of working. Both approaches are fine with me. I think the Kotlin approach for Android, or for Kotlin in general, gives the developer more responsibility as to what can go wrong. You need to better understand the code and the reasons behind what can go wrong with exceptions when working with Kotlin. You can solve it using annotations and documentation, but in general people need to understand that if something can go wrong it will. They need to understand then how to solve it within the runtime code that they are writing, or building. If you are using an API, then API documentation will provide you with a bit more knowledge as to what is happening under the surface, and in terms of architecture, yes you need to know that when using an API function call or whichever function you are using within your own classes, you still need to interact with them properly, so it drives you to write better code handling for all exceptions.

Do you feel the fragmentation of devices or versions in Android is a real difficulty?

Yes, we see different behaviors across devices and different versions, and making sure that the app runs smoothly across all platforms can be a bit rough. But even so, it is a lot better than what we had in the past. I hope that as we progress in time, more and more devices will be upgraded to use an API level that is safer to use, and will mitigate fragmentation. Right now, some of the features that we are building, for example API 24 and above, have major progress in comparison to API 21 and above.

As a final question, which feature would you dream that Android or Kotlin would have?

I never thought of that, because, a week ago I would say camera issues on Android. But a month ago I would say, running computer vision in AI on Android on different devices. Camera issues are due to different hardwares. Google is doing a relatively good job in trying to enforce a certain level of compliance and testing on all devices. You have quite a few tests the device has to pass both in hardware and in API. But we still see many devices attempt to bypass, or give false results to the tests.

I would say giving us support for actual devices as far back as five to seven years, instead of three, and giving an all around better camera experience over all devices.

Thank you very much to Yossi Elkrief for your time and expertise!


Shipbook gives you the power to remotely gather, search and analyze your user logs and exceptions in the cloud, on a per-user & session basis.

Footnotes

  1. If you have a body, you are an athlete.

· 8 min read
Uditha Maduranga

in app purchases image by mudassar iqbal from pixabay

Introduction

In-app purchases are a common way for developers to release a free application and then give users options to upgrade from within the app. Google Play in-app purchases are the simplest solution for selling digital products or content in Android apps. Therefore, many app developers who are looking to sell digital goods, or offer premium memberships to users, use the Google Play in-app billing process for smooth and easy checkouts.

· 3 min read
Elisha Sterngold

Log Severity Intro

log severity ruler gif

This subject may sound boring. Don’t all programmers know which log severity should be used? The answer is that people are not logging their apps in a systematic way. There are several guides that explain the levels, but they usually just define them. I’ll try to help you decide which log level should be used, and I’ll give examples so that you will be able to copy them into your app. I’m going to list the severities of Android.
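
As a quick reference before that breakdown, here is a compact sketch of the severities the Android SDK itself exposes through android.util.Log, from least to most severe (the tag and messages are just placeholders):

import android.util.Log

private const val TAG = "SeverityDemo"

fun logAtEverySeverity() {
    Log.v(TAG, "VERBOSE: very detailed tracing, usually disabled in release builds")
    Log.d(TAG, "DEBUG: information that helps while developing a feature")
    Log.i(TAG, "INFO: expected but noteworthy application events")
    Log.w(TAG, "WARN: something unexpected that the app can recover from")
    Log.e(TAG, "ERROR: a failure that needs attention")
    Log.wtf(TAG, "ASSERT: a condition that should never happen")
}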