
15 posts tagged with "android"


· 12 min read
Petros Efthymiou

Android Performance Optimization Series - UI Rendering

Introduction

Welcome to the third and final article of the Android Performance Optimization series! In the first article, we explored the fundamentals of performance optimization, focusing on CPU and battery. In the second article, we deep-dived into the crucial topic of RAM memory optimization and memory leaks. Now, it’s time to focus on UI optimization and rendering efficiency.

Traditionally, Android UIs were built with XML layouts; in recent years, Google followed the industry trend of declarative UI and released Jetpack Compose. Even though new projects tend to adopt Jetpack Compose, a large number of Android apps are still based on the XML approach. Therefore, this article includes optimization techniques for both.

We will start with techniques that apply to both approaches, continue with XML-specific ones, and finally focus on optimizing Jetpack Compose UIs.

By implementing the practical techniques presented here, you can ensure your app delivers a smooth, responsive user experience.

Common techniques

Avoid UI Overdraw

Overdraw happens when the same pixel on the screen is drawn multiple times within a single frame. This is common with overlapping UI elements. While the system needs to render elements in a specific order for transparency effects, excessive overdraw wastes GPU power and can slow down rendering and hurt responsiveness.

With XML, we can introduce UI overdraw when stacking views inside a FrameLayout; with Jetpack Compose, when layering composables on top of each other, for example on a Surface.

To identify and fix overdraw, enable developer options and use the overdraw visualization tool. This will highlight areas where pixels are being drawn unnecessarily, allowing you to optimize your UI layout and element usage for better performance.

In order to enable the overdraw visualization tool, open up your device or emulator and navigate to the developer options. Enable the option Debug GPU overdraw.
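If you prefer the command line, the same visualization can be toggled through system properties with adb; this mirrors the developer-options switch:

```shell
# Turn the GPU overdraw visualization on
adb shell setprop debug.hwui.overdraw show

# Turn it back off
adb shell setprop debug.hwui.overdraw false
```
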

Debug GPU overdraw

Now, you can run your application, and you will notice all the overdraw areas based on the color code. For example, in the screenshot below, the app bar is drawn on top of the screen, and we can see there is an overdraw. The same is happening with the app bar options.

app bar options

Furthermore, if we drag the pull-to-refresh component, we will see the emulator indicating an overdrawn element.


Obviously, you can't avoid all overdraw; some cases, like this one, exist by design. But you can identify and fix the unintended ones.

Use animations sparingly

Animations are resource intensive. And while they can add polish to your app, it's crucial to use them sparingly. Excessive animations can overwhelm users and strain system resources. Think of them as sprinkles on a cupcake - a little adds delight, but too much can overpower the taste. Use animations strategically to highlight key actions or guide users through a process, but prioritize clarity and performance over constant movement.

Avoid processing in the UI thread

This is probably the most important technique for building a responsive application and avoiding ANRs. An ANR (Application Not Responding) occurs when you keep the main thread busy for too long. In those cases, the OS prompts the user to kill the application. Other than an outright crash, this is the worst possible UX.

Heavy data processing, as well as tasks like HTTP requests and database queries, must always happen on background threads. There are several ways to perform those tasks in the background, from older mechanisms like background services and AsyncTask to modern techniques like Reactive Programming (RxJava) or Kotlin coroutines.

Regardless of which one you choose to perform your background work, the important thing is to avoid doing it in the Main thread.
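As a minimal, framework-free sketch in plain Kotlin (using a standard-library thread rather than any Android API, with hypothetical helper names), the pattern is: do the heavy work off the calling thread, then hand only the result back:

```kotlin
import kotlin.concurrent.thread

// Hypothetical stand-ins: heavyComputation() represents an HTTP call or DB query,
// and the results list represents state that the UI layer observes.
fun heavyComputation(): Int = (1..1000).sum()

val results = mutableListOf<Int>()

// On Android you would typically use Kotlin coroutines (e.g. Dispatchers.IO)
// instead of raw threads; the principle is the same.
val worker = thread {
    val result = heavyComputation() // runs off the calling thread
    results.add(result)
}
worker.join() // demo only: block until done so we can inspect the result
println(results) // prints [500500]
```

The key point is that the expensive call never runs on the thread that owns the UI; only the finished result crosses back.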

Profile the Hardware UI rendering

Unlike CPU, RAM, and battery consumption, UI rendering performance cannot be monitored from Android Studio. Instead, we need to go to the developer options of our device or emulator and enable the Profile HWUI rendering option.


Here I prefer the option On screen as bars. Once you select this option, a bar chart is overlaid on your screen.


You can interpret the bar chart as follows:

  • For each visible application, the tool displays a graph.
  • Each vertical bar along the horizontal axis represents a frame, and the height of each vertical bar represents the amount of time the frame took to render (in milliseconds).
  • The horizontal green line represents 16.67 milliseconds. To achieve 60 frames per second, which ensures a smooth UX, the vertical bar for each frame needs to stay below this line. Any time a bar surpasses it, there may be pauses in the animations.
  • The tool highlights frames that exceed the 16.67 millisecond threshold by making the corresponding bar wider and less transparent.
  • Each bar has colored components that map to a stage in the rendering pipeline. The number of components varies depending on the API level of the device.
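The 16.67 ms threshold is simply the frame budget implied by the refresh rate (1000 ms divided by 60 frames). On higher-refresh-rate displays the budget shrinks accordingly; a quick sanity check in plain Kotlin:

```kotlin
// Frame budget in milliseconds for a given display refresh rate.
fun frameBudgetMs(refreshRateHz: Double): Double = 1000.0 / refreshRateHz

fun main() {
    println(frameBudgetMs(60.0))  // about 16.67 ms, the green line in the HWUI graph
    println(frameBudgetMs(90.0))  // about 11.11 ms on a 90 Hz display
    println(frameBudgetMs(120.0)) // about 8.33 ms on a 120 Hz display
}
```
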

For example, in my application the tallest bar, the first one, represented the application startup. A second tall bar appeared in the middle when I navigated from one screen to another, which caused a rendering overload.

Using this tool, you can identify the most GPU resource-heavy screens and transitions and start focusing on optimizing those.

For more info regarding the HWUI profiling, you can visit the official documentation here: https://developer.android.com/topic/performance/rendering/inspect-gpu-rendering

XML UI Optimization

Now, let’s focus on a few techniques that will help you optimize the XML-based Android UIs.

Flatten View Hierarchy

A deep view hierarchy with lots of nested layouts can lead to performance issues in your Android app. A complex hierarchy forces the system to measure and lay out views in a nested fashion. Flattening the hierarchy reduces these nested calculations, leading to faster rendering and smoother UI updates.
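To see why depth hurts, consider the "double taxation" problem: some layouts have to measure each child more than once per pass. A toy model in plain Kotlin (illustrative only, not how the framework actually counts measure calls) shows how the work compounds with nesting:

```kotlin
// If every nesting level measures its children `measuresPerChild` times,
// the total number of measure calls grows exponentially with depth.
fun measureCalls(depth: Int, measuresPerChild: Int = 2): Long {
    var calls = 1L
    repeat(depth) { calls *= measuresPerChild }
    return calls
}

fun main() {
    println(measureCalls(1))  // 2
    println(measureCalls(5))  // 32
    println(measureCalls(10)) // 1024: why flattening the hierarchy matters
}
```
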

Furthermore, a simpler view hierarchy is easier to understand and debug. This saves development time and makes it easier to identify and fix layout issues.

ConstraintLayout excels at creating complex UIs with a flat view hierarchy. Unlike approaches that rely on nesting ViewGroups (for example, combining several LinearLayouts), ConstraintLayout allows you to position views directly relative to each other or to the parent layout using constraints. This eliminates unnecessary nesting, resulting in a simpler and more efficient layout structure. The reduced complexity translates to faster rendering times and a smoother user experience, especially on devices with less powerful hardware. Additionally, ConstraintLayout's visual editor in Android Studio makes it intuitive to define these relationships between views, streamlining the UI development process.

For more information about Constraint Layout you can check the following article: https://blog.shipbook.io/constraintlayout

Make use of the View Stub

Not all sections of your UI are needed right away. Imagine a comment section that only appears when a user taps a "show comments" button. Most apps implement this using the View visibility attribute.

There's actually a more performant option called ViewStub. It acts as a placeholder in your layout, taking up zero space. When needed, you can inflate a complex layout (like the comment section) into the ViewStub's place. This keeps your initial UI load faster and smoother, and only inflates resource-intensive views when absolutely necessary. This improves both performance and memory usage in your Android app.

<ViewStub
    android:id="@+id/stub"
    android:inflatedId="@+id/subTree"
    android:layout="@layout/mySubTree"
    android:layout_width="120dip"
    android:layout_height="40dip" />

Of course, not every element that changes visibility during its lifecycle needs to be a ViewStub. ViewStubs currently don't support the merge tag and can't be inflated more than once. This element is best used for views that may not appear at all, such as error messages or advertising banners.

Recycler View and View Holder Pattern

Using the RecyclerView with the ViewHolder pattern is crucial for efficient and optimized handling of large datasets in Android applications. The ViewHolder pattern enhances performance by recycling and reusing existing views, thus minimizing the overhead of creating new view instances. This approach significantly reduces memory usage and improves scrolling performance, especially when dealing with long lists or grids. By binding data to reusable ViewHolder objects, RecyclerView ensures smooth and responsive UI interactions while dynamically adapting to changes in dataset size. Ultimately, implementing the RecyclerView with the ViewHolder pattern is not just a best practice but a fundamental strategy for delivering high-performance and scalable user interfaces in Android apps.
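The recycling idea can be sketched without any Android classes. The names below are hypothetical; the point is that only as many "views" are created as fit on screen, no matter how long the list is:

```kotlin
// A stand-in for an inflated row view; creating one is the expensive part.
class RowView { var boundText: String = "" }

class RecyclingPool {
    var created = 0
        private set
    private val pool = mutableListOf<RowView>()

    // Reuse a pooled view when one is free; "inflate" only when the pool is empty.
    fun obtain(): RowView = pool.removeLastOrNull() ?: RowView().also { created++ }

    fun recycle(view: RowView) { pool.add(view) }
}

// Simulate scrolling `itemCount` items through a window of `visibleRows` rows.
fun simulateScroll(itemCount: Int, visibleRows: Int): Int {
    val pool = RecyclingPool()
    val onScreen = ArrayDeque<RowView>()
    repeat(itemCount) { i ->
        if (onScreen.size == visibleRows) pool.recycle(onScreen.removeFirst()) // row scrolls off
        onScreen.addLast(pool.obtain().apply { boundText = "Item $i" })
    }
    return pool.created
}

fun main() {
    println(simulateScroll(itemCount = 1000, visibleRows = 5)) // prints 5, not 1000
}
```

This is the essence of what RecyclerView does: view creation cost is bounded by the screen, and only the cheap bind step runs per item.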

For more info on this subject, you can refer to the following article: https://blog.shipbook.io/recyclerview-vs-listview

Jetpack Compose Optimization

Now, let's move our focus to Jetpack Compose. Compose is inherently built to be more performant than XML; that's one of the reasons for the declarative UI paradigm shift across all platforms. When a screen element changes, declarative frameworks avoid redrawing the whole screen: they try to keep everything as is and redraw only the changed element.

Notice the keyword there — “try”. Compose will trigger recomposition when snapshot state changes and skip any composables that haven’t changed. Importantly though, a composable will only be skipped if Compose can be sure that none of the parameters of a composable have been updated. Otherwise, if Compose can’t be sure, it will always be recomposed when its parent composable is recomposed. If Compose didn’t do this, it would be very hard to diagnose bugs with recomposition not triggering. It is much better to be correct and slightly less performant than incorrect but slightly faster.
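Conceptually, skipping behaves like caching keyed on a composable's inputs: if the parameters are provably unchanged, the previous output is reused. A loose, framework-free analogy (hypothetical names, not Compose's actual implementation):

```kotlin
// Loose analogy for recomposition skipping: reuse the previous output
// when the input is provably equal to the last one seen.
class SkippingRenderer {
    private var lastInput: Any? = null
    private var lastOutput: String? = null
    var renders = 0
        private set

    fun compose(input: Any): String {
        val cached = lastOutput
        if (cached != null && input == lastInput) return cached // "skipped"
        renders++
        return "rendered:$input".also { lastInput = input; lastOutput = it }
    }
}

fun main() {
    val renderer = SkippingRenderer()
    renderer.compose("Playlist A")
    renderer.compose("Playlist A") // equal input, so the work is skipped
    renderer.compose("Playlist B")
    println(renderer.renders) // prints 2
}
```

Note the requirement baked into the cache check: equality must be trustworthy. That is exactly why Compose only skips when it can prove the parameters are stable.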

You can see how many times a composable has been recomposed or skipped using Android Studio's Layout Inspector, which shows recomposition counts next to each composable.


This way, you can identify which composables keep getting recomposed and may potentially be optimized, as we will show below.

Skippable UI Elements

The Compose compiler tries, at compile time, to identify which composable elements are skippable, meaning that if their own data hasn't changed, they don't need to be redrawn on the screen. Clearly, the more skippable components you have on your screens, the more performant your UI is going to be, as it avoids redrawing unchanged elements.

So the question is, how can you make your Composables skippable? The answer is simple: Immutability!

Is the following Composable skippable?

@Composable
private fun PlaylistRow(
    playlist: Playlist
) {
    Column(Modifier.padding(8.dp)) {
        Text(
            text = playlist.name,
            style = MaterialTheme.typography.bodySmall,
            color = Color.Gray,
        )
        Text(
            text = playlist.length.toString(),
            style = MaterialTheme.typography.bodyLarge,
        )
    }
}

The answer is we can’t tell unless we study the Playlist model.

With the following playlist model, is our Composable skippable? What do you think?

data class Playlist(
    val id: String,
    val name: String,
    var length: Int
)

The answer is no, because length is a mutable property that might change without Jetpack Compose knowing.

We can make our PlaylistRow skippable by turning length into an immutable value, changing var to val.

data class Playlist(
    val id: String,
    val name: String,
    val length: Int
)

Now if we change our Playlist model as below, will our Playlist row still be skippable or not?

data class Playlist(
    val id: String,
    val name: String,
    val length: Int,
    val songs: List<Song>
)

data class Song(
    val id: String,
    val name: String
)

The answer is no, because a Kotlin List is not guaranteed to be immutable. It is read-only at compile time, but the underlying data can still be changed, and the Compose compiler is not going to take any risks.
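You can demonstrate this distinction in plain Kotlin: a List is only a read-only view, and the data underneath can still change through another reference:

```kotlin
val backing = mutableListOf("Song A")
val readOnlyView: List<String> = backing // read-only at compile time, not immutable

backing.add("Song B") // mutate through the original reference

println(readOnlyView.size) // prints 2: the "read-only" view changed underneath
```

Since Compose cannot rule this out for an arbitrary List parameter, it has to treat the composable as non-skippable.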

Instead of List, use a kotlinx immutable collection:

data class Playlist(
    val id: String,
    val name: String,
    val length: Int,
    val songs: ImmutableList<Song>
)

Version 1.2 of the Compose compiler includes support for Kotlinx Immutable Collections. These collections are guaranteed to be immutable and will be inferred as such by the compiler. This library is still in alpha, though, so expect possible changes to its API. You should evaluate if this is acceptable for your project.

Finally, you can annotate your model with the @Stable annotation if you are certain that it is stable. But this can be dangerous: it instructs the Compose compiler to treat a model as stable, and the composables that use it as skippable, even if the compiler would otherwise infer it to be unstable.

It's dangerous because the values of the object may change without Compose noticing, so the UI may keep showing the old values, leading to subtle bugs. Annotating a class overrides what the compiler inferred about it. In this way, it is similar to the !! operator in Kotlin.

For debugging the stability of your composables you can run the following task:

./gradlew assembleRelease -PcomposeCompilerReports=true

Open up the composables.txt file and you will see all of your composable functions for that module and each will be marked with whether they are skippable and the stability of their parameters.

restartable scheme("[androidx.compose.ui.UiComposable]") fun DisplayPlaylists(
    stable index: Int
    unstable playlists: List<Playlist>
    stable onPlaylistClick: Function1<Long, Unit>
    stable modifier: Modifier? = @static Companion
)

Lazy Column

Similar to what we saw in the XML approach, Compose also has a mechanism for optimizing large lists: the LazyColumn component. LazyColumn is optimized for displaying large datasets, as it only composes and lays out the items currently visible on screen. We have a wonderful article that explains the differences between Column and LazyColumn.
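The difference is the classic eager-versus-lazy trade-off, which you can see in plain Kotlin with List versus Sequence (an analogy for Column versus LazyColumn, not Compose code):

```kotlin
var eagerWork = 0
var lazyWork = 0

val items = (1..10_000).toList()

// Eager: every element is transformed up front, like Column composing all children.
val eagerRows = items.map { eagerWork++; "Row $it" }

// Lazy: work happens only for the items actually consumed,
// like LazyColumn composing only the visible rows.
val visibleRows = items.asSequence().map { lazyWork++; "Row $it" }.take(20).toList()

println(eagerWork) // prints 10000
println(lazyWork)  // prints 20
```
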

Conclusion

In this series of articles, we analyzed how you can profile your app in order to identify performance issues with

  1. Battery
  2. CPU
  3. RAM memory
  4. UI rendering

We also explained optimization techniques that you can include in your toolset in order to resolve those issues.

What I would like you to keep from this series is that you should be profiling much more than optimizing. Premature optimization will slow down your team and product without providing much value.

Profile often, optimize when necessary.

· 25 min read
Boris Nikolov

Benefits of Hilt for Dependency Injection in Android App Development

Chapter 1. Introduction to Dependency Injection (DI)

Dependency Injection (DI) is a software design pattern commonly used in object-oriented programming and particularly prevalent in Android app development. It's a fundamental concept that aims to decouple classes from their dependencies, making them more modular, testable, and maintainable.

What is Dependency Injection?

Dependency Injection is a technique where the dependencies of a class are provided from outside the class rather than being created internally. In simpler terms, instead of a class creating its own dependencies, they are "injected" into the class from an external source.

Why Dependency Injection?

The primary motivation behind using Dependency Injection is to improve the modularity and flexibility of software components. By decoupling classes from their dependencies, DI makes it easier to replace, extend, and test individual components without affecting the rest of the system.
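The pattern itself needs no framework. In plain Kotlin (with illustrative names), constructor injection is simply passing dependencies in from outside:

```kotlin
interface Engine { fun start(): String }

class PetrolEngine : Engine {
    override fun start() = "petrol engine started"
}

// Without DI: the class builds its own dependency and is welded to one implementation.
class CarWithoutDi {
    private val engine = PetrolEngine()
    fun drive() = engine.start()
}

// With DI: the dependency is injected from outside, so any Engine will do.
class Car(private val engine: Engine) {
    fun drive() = engine.start()
}

fun main() {
    val car = Car(PetrolEngine()) // production wiring
    println(car.drive())          // prints "petrol engine started"

    val fake = object : Engine {  // test wiring, no mocking library needed
        override fun start() = "fake engine"
    }
    println(Car(fake).drive())    // prints "fake engine"
}
```

Frameworks like Hilt automate the wiring in `main` here; the classes themselves stay plain.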

Key Concepts of Dependency Injection

  1. Inversion of Control (IoC)

    Dependency Injection is often associated with the concept of Inversion of Control (IoC), where the control of object creation and lifecycle management is inverted from the class itself to an external entity, typically a framework or container. IoC containers, such as Dagger or Hilt in Android, manage the instantiation and dependency resolution of classes, reducing the coupling between components.

  2. Dependency Inversion Principle (DIP)

    Dependency Injection follows the Dependency Inversion Principle, a key tenet of object-oriented design, which states that high-level modules should not depend on low-level modules but rather both should depend on abstractions. DI allows dependencies to be defined by interfaces or abstract classes, promoting loose coupling between components and facilitating easier substitution of implementations.

Benefits of Dependency Injection

Improved Testability

DI simplifies the process of testing by allowing dependencies to be easily mocked or replaced with test doubles. Components can be tested in isolation, leading to more reliable and maintainable unit tests.

Modular Design

DI promotes a modular architecture by reducing the tight coupling between classes. Components become more reusable and interchangeable, leading to a more flexible and scalable codebase.

Simplified Dependency Management

By centralizing the management of dependencies, DI frameworks handle the instantiation and configuration of objects, reducing the complexity of manual dependency management. This leads to cleaner and more readable code, as the creation of dependencies is abstracted away from the business logic.

Let’s have a look at the following example. Suppose we have an Android app that displays a list of tasks, and we want to test the TaskListViewModel class responsible for managing tasks. First, let's define the TaskRepository interface and its implementation:

interface TaskRepository {
    fun getTasks(): List<Task>
    // Other methods for managing tasks
}

class TaskRepositoryImpl @Inject constructor() : TaskRepository {
    override fun getTasks(): List<Task> {
        // Retrieve tasks from a data source (e.g., database, network)
        return emptyList() // placeholder
    }
    // Implement other methods
}

Next, let's create the TaskListViewModel class, which depends on TaskRepository:

@HiltViewModel
class TaskListViewModel @Inject constructor(
    private val taskRepository: TaskRepository
) : ViewModel() {

    private val _tasks = MutableLiveData<List<Task>>()
    val tasks: LiveData<List<Task>> = _tasks

    init {
        loadTasks()
    }

    private fun loadTasks() {
        viewModelScope.launch {
            _tasks.value = taskRepository.getTasks()
        }
    }
}

Now, let's write a unit test for the TaskListViewModel class using Hilt for dependency injection:

@HiltAndroidTest
class TaskListViewModelTest {

    @get:Rule
    var hiltRule = HiltAndroidRule(this)

    // @BindValue replaces the TaskRepository binding with a Mockito mock
    // for every test in this class
    @BindValue
    @JvmField
    val testTaskRepository: TaskRepository = mock(TaskRepository::class.java)

    @Before
    fun setUp() {
        hiltRule.inject()
    }

    @Test
    fun testLoadTasks() {
        // Arrange: stub the repository before creating the ViewModel,
        // because it loads tasks in its init block
        val mockTasks = listOf(Task("Task 1"), Task("Task 2"))
        `when`(testTaskRepository.getTasks()).thenReturn(mockTasks)

        // Act
        val viewModel = TaskListViewModel(testTaskRepository)

        // Assert
        assertEquals(mockTasks, viewModel.tasks.value)
    }
}

This example illustrates the 3 benefits listed above:

  1. Improved testability - the DI mechanism allows us to easily mock the TaskRepository class and configure its output, so our tests can verify that the ViewModel behaves correctly for specific input from the mocked TaskRepository.

  2. Modular design - the constructor injection used for TaskRepositoryImpl and TaskListViewModel lets us build a hierarchy of components that are embedded in one another, and swap any of them for an alternative implementation without updating the chain above or below. For example, we can inject any implementation of TaskRepository, as long as it conforms to the interface, without changing how TaskListViewModel uses it.

  3. Simplified dependency management - Hilt boils the whole instantiation hassle down to a simple @Inject annotation, which takes care of creating a new instance and feeding it the required dependencies.
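Benefit 1 in particular does not even require a mocking library. A framework-free sketch of the same idea (FakeTaskRepository and TaskListPresenter are hypothetical names standing in for the article's classes):

```kotlin
data class Task(val name: String)

interface TaskRepository { fun getTasks(): List<Task> }

// A hand-written fake: returns canned data instead of hitting a database or network.
class FakeTaskRepository(private val canned: List<Task>) : TaskRepository {
    override fun getTasks() = canned
}

// Stripped-down stand-in for the ViewModel: it only depends on the interface.
class TaskListPresenter(private val repository: TaskRepository) {
    fun loadTasks(): List<Task> = repository.getTasks()
}

fun main() {
    val fake = FakeTaskRepository(listOf(Task("Task 1"), Task("Task 2")))
    val presenter = TaskListPresenter(fake)
    println(presenter.loadTasks().map { it.name }) // prints [Task 1, Task 2]
}
```

Because the presenter only sees the TaskRepository interface, swapping the real implementation for the fake requires no changes to the class under test.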

Dependency Injection is a powerful design pattern that enhances the flexibility, testability, and maintainability of software systems. In the context of Android development, DI frameworks like Hilt are indispensable tools for managing dependencies and building robust, scalable apps.

Chapter 2. What is Hilt?

Introduction to Hilt

Hilt is a dependency injection library for Android built on top of Dagger, developed by Google as part of the Android Jetpack libraries. Dagger is a dependency injection framework for Kotlin and Java applications. It helps manage dependencies by automatically providing and managing instances of classes that your application needs. Hilt aims to simplify the implementation of dependency injection in Android apps by providing a set of predefined components and annotations tailored specifically for Android development.

Key Features of Hilt

Integration with Android Components

Hilt seamlessly integrates with Android framework components such as activities, fragments, services, and view models. It provides annotations like @AndroidEntryPoint to mark Android components for injection, simplifying the process of integrating DI into these components.

Simplified Setup

Hilt reduces the setup overhead required to use Dagger for dependency injection in Android projects. Developers no longer need to define custom Dagger components and modules; instead, Hilt generates them automatically based on annotations and conventions.

Annotation-Based Configuration

Hilt uses annotations extensively to configure dependency injection in Android apps. Annotations like @HiltAndroidApp, @Singleton, @ActivityScoped, and @HiltViewModel provide a declarative way to define the scope and lifecycle of dependencies.

Compile-Time Safety

Similar to Dagger, Hilt performs dependency resolution and validation at compile time, ensuring correctness and type safety. This helps catch dependency-related errors early in the development process, reducing the likelihood of runtime issues.

Seamless Integration with Jetpack Libraries

Hilt is designed to work seamlessly with other Jetpack libraries, such as ViewModel, LiveData, and WorkManager. It provides built-in support for injecting dependencies into these components, further simplifying the development of Android apps using Jetpack architecture components.

How Hilt Works

The example below is implemented on a clean, standard new project created via Android Studio's template.

Adding Hilt to your project

First, we need to add the required dependencies for Dagger Hilt to our project. This is done by adding the following code to the relevant sections of the project's app-level build.gradle file:

// The Hilt Gradle plugin must also be declared in the project-level build.gradle:
// plugins {
//     id 'com.google.dagger.hilt.android' version '2.50' apply false
// }

plugins {
    id 'kotlin-kapt'
    id 'com.google.dagger.hilt.android'
}

dependencies {
    implementation "com.google.dagger:hilt-android:2.50"
    kapt "com.google.dagger:hilt-compiler:2.50"
}

// Allow references to generated code
kapt {
    correctErrorTypes true
}

Annotating our Application class

Now we need to annotate our Application class with the relevant annotation. @HiltAndroidApp tells Hilt to generate a base class for our application that serves the dependencies to our Android classes. This is done in the following way:

@HiltAndroidApp
class MyApplication : Application() {
    // other application related logic
}

Defining a module

A module is a class that serves the dependencies we need, when we need them. We can define a module by creating a new class and annotating it with @Module. After that, in this module class, we implement methods that provide the necessary dependencies. This is what it looks like:

@Module
@InstallIn(SingletonComponent::class)
object AppModule {
    @Provides
    fun appModuleDependency(): AppModuleDependency {
        return AppModuleDependencyImpl()
    }
}

In this example, we defined a module called AppModule that provides a dependency called AppModuleDependency. We also implemented a method called appModuleDependency() that creates and returns an instance of AppModuleDependencyImpl.

Injecting dependencies into Android classes

To inject a dependency into a class we need to annotate this class with @AndroidEntryPoint. This would tell Hilt that it needs to generate the code required to inject dependencies into this class. This is how we do this:

Suppose we have a simple Android app with an activity that displays a list of tasks. We want to use Hilt for dependency injection in our activity to provide instances of ViewModel and other dependencies.

@AndroidEntryPoint
class TaskListActivity : AppCompatActivity() {

    // Hilt ViewModels are obtained via the viewModels() delegate
    private val viewModel: TaskListViewModel by viewModels()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_task_list)

        // ViewModel is automatically created and injected by Hilt
        viewModel.tasks.observe(this) { tasks ->
            // Update UI with the list of tasks
        }
    }
}

In this example, TaskListActivity is annotated with @AndroidEntryPoint to indicate that Hilt should perform dependency injection on this activity. This annotation tells Hilt to generate a component and inject dependencies into this activity at runtime. The TaskListViewModel is injected into TaskListActivity using Hilt's automatic injection feature. Additionally, if TaskListViewModel itself has dependencies, they can be injected using constructor injection, and Hilt will handle their instantiation and injection automatically.

If we hadn't used DI for injecting the ViewModel, its initialization would have looked something like this (presuming TaskListViewModel uses a repository to fetch the information and a utility class to parse the list of tasks and return it properly formatted and sorted):

class TaskListActivity : AppCompatActivity() {

    private lateinit var viewModel: TaskListViewModel

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_task_list)

        val tasksRepository = TasksRepository()
        val tasksFormatter = TasksFormatter()
        viewModel = TaskListViewModel(tasksRepository, tasksFormatter)
        viewModel.tasks.observe(this) { tasks ->
            // Update UI with the list of tasks
        }
    }
}

As you can see, without DI we are responsible for initializing all the dependencies required by TaskListViewModel and then providing them to its constructor. You can imagine how messy the code might get if TaskListViewModel required more dependencies, or if its dependencies had sub-dependencies that needed to be initialized beforehand.

Chapter 3. Benefits of Hilt

Hilt, as a dependency injection library for Android development, offers several benefits that enhance the developer experience, improve code quality, and streamline the development process.

Ease of use

Hilt significantly simplifies the setup and usage of dependency injection in Android projects compared to manual configuration with Dagger. Developers no longer need to define custom Dagger components, modules, and subcomponents; instead, they can rely on Hilt's annotations and conventions to handle much of the setup automatically. This reduces the learning curve for developers new to DI and allows them to focus more on writing application logic rather than dealing with DI configuration details.

Reduced boilerplate code

One of the primary benefits of Hilt is its ability to reduce boilerplate code associated with Dagger-based dependency injection. Hilt generates much of the repetitive code required for Dagger setup, including components, modules, and builders, based on annotations and conventions. This not only saves developers time and effort but also leads to cleaner, more concise codebases with fewer manual dependencies to manage.

Compile-Time Safety

Hilt, like Dagger, performs dependency resolution and validation at compile time, ensuring correctness and type safety. By detecting dependency-related errors early in the development process, Hilt helps prevent runtime issues and facilitates smoother debugging. Developers can rely on compile-time checks to catch mistakes such as missing bindings, circular dependencies, or incorrect scope annotations, leading to more robust and stable Android apps.

Integration with Jetpack Libraries

Hilt is designed to seamlessly integrate with other Android Jetpack libraries and components, such as ViewModel, LiveData, and Room. It provides built-in support for injecting dependencies into these components, simplifying the implementation of recommended Android app architectures. Developers can leverage Hilt's annotations and conventions to ensure consistency and compatibility across their entire app architecture, promoting maintainability and scalability.

Scoping and Lifecycle Management

Hilt offers built-in support for scoping and managing the lifecycle of dependencies, ensuring that objects are created and destroyed appropriately based on their scope. Developers can use annotations like @Singleton, @ActivityScoped, or @ViewModelScoped to define the scope of dependencies and let Hilt handle their lifecycle automatically. This helps prevent memory leaks, optimize resource usage, and improve performance in Android apps.

Testing Support

Hilt simplifies testing by providing utilities for injecting test doubles and managing dependencies in test environments. Developers can annotate their test classes with @HiltAndroidTest and use @BindValue or @Module to provide dependencies specific to their test scenarios. This makes it easier to write comprehensive unit tests and integration tests for Android apps, leading to higher code coverage and better overall test quality.

Chapter 4. Scoping and Lifecycle Management

Scoping and lifecycle management are crucial aspects of dependency injection in Android app development. They ensure that objects are created, reused, and destroyed appropriately, optimizing resource usage and preventing memory leaks. In this chapter, we'll explore how Hilt handles scoping and lifecycle management of dependencies in Android apps.

Understanding Scopes in Hilt

Scoping refers to the lifespan of objects managed by Hilt. By defining scopes for dependencies, developers can control when objects are created and destroyed, ensuring that they exist for the appropriate duration and are available when needed.

Singleton Scope

The @Singleton scope in Hilt ensures that a single instance of a dependency is shared across the entire application. Objects annotated with @Singleton are created when the application starts and are reused throughout its lifespan. This scope is typically used for dependencies that are expensive to create or need to be shared globally.

Suppose we have a logging utility class called ShipbookLogger that is used throughout our Android application to log messages to various destinations such as the console, file, and remote server. We want to ensure that there is only one instance of ShipbookLogger created and shared across all components of our application to maintain consistency and optimize resource usage. First, let's define our “ShipbookLogger” class:

@Singleton
class ShipbookLogger @Inject constructor() {
    fun log(message: String) {
        // Implementation of logging logic
        println("Logging message: $message")
    }
}

In this example, we annotate the ShipbookLogger class with @Singleton to indicate that it should be treated as a singleton and only one instance should be created by Hilt and shared across the entire application.

Now, let's use the ShipbookLogger class in various parts of our application. For example, in an activity:

@AndroidEntryPoint
class MainActivity : AppCompatActivity() {

    @Inject
    lateinit var logger: ShipbookLogger

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        logger.log("MainActivity started")
    }
}

In this activity, we inject the ShipbookLogger instance using Hilt's @Inject annotation. Since we've annotated the ShipbookLogger class with @Singleton, Hilt will provide the same instance of ShipbookLogger to all components requesting it throughout the application. Similarly, we can inject the ShipbookLogger instance into other components such as fragments, view models, services, etc., and be assured that they all share the same instance.

Activity Scope

The @ActivityScoped scope in Hilt ties a dependency to the lifecycle of an activity. An object annotated with @ActivityScoped is created at most once per activity instance and released when the activity is destroyed. This scope is useful for dependencies that are specific to an activity and should be cleaned up when the activity is no longer in use. @ActivityScoped bindings are also available to all subcomponents within the activity, such as fragments.

Suppose we have an application that supports various interchangeable themes, each with its own unique configuration. The light and dark themes are represented by separate activities, LightThemeActivity and DarkThemeActivity respectively. The configuration state of each theme is tracked by a ThemeStateManager.

@ActivityScoped
class ThemeStateManager @Inject constructor() {
    private var themeConfig: ThemeConfig? = null

    fun setSelectedThemeConfig(themeConfig: ThemeConfig) {
        this.themeConfig = themeConfig
    }

    fun getSelectedThemeConfig(): ThemeConfig? {
        return themeConfig
    }
}

In this example, ThemeStateManager is annotated with @ActivityScoped to indicate that there should be one instance of this class per activity instance. This ensures that each instance of LightThemeActivity and DarkThemeActivity has its own ThemeStateManager instance, allowing them to maintain separate states of the ThemeConfig for the different themes.

Fragment Scope

The @FragmentScoped scope in Hilt ties a dependency to the lifecycle of a fragment. An object annotated with @FragmentScoped is created at most once per fragment instance and released when the fragment is destroyed. This scope is similar to @ActivityScoped but applies to fragments instead of activities.

Suppose we have a note-taking app where users can create and edit notes. We want to allow users to open multiple instances of the note editor (NoteEditorFragment) simultaneously, each with its own independent state.

@FragmentScoped
class NoteManager @Inject constructor() {
    private val noteContentMap: MutableMap<Int, String> = mutableMapOf()

    fun saveNoteContent(noteId: Int, content: String) {
        noteContentMap[noteId] = content
    }

    fun getNoteContent(noteId: Int): String? {
        return noteContentMap[noteId]
    }

    fun deleteNoteContent(noteId: Int) {
        noteContentMap.remove(noteId)
    }
}

The NoteManager class can be injected into multiple instances of NoteEditorFragment within the app to manage the state of individual notes. Each NoteEditorFragment instance can use its associated NoteManager to save, retrieve, and delete note content independently.
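To make the difference between these lifetimes concrete outside of Hilt, here is a tiny hand-rolled sketch (all names are hypothetical and for illustration only): a singleton provider always returns the same instance, while a scoped provider returns one instance per owner and releases it when that owner is destroyed.

```kotlin
// Hypothetical stand-ins for Hilt-managed types.
class Logger       // @Singleton-like: one per app
class ThemeState   // @ActivityScoped-like: one per activity

object AppContainer {
    // Singleton lifetime: created once, shared everywhere.
    val logger: Logger by lazy { Logger() }

    // Scoped lifetime: one instance per owner (e.g., per activity instance).
    private val themeStates = mutableMapOf<Any, ThemeState>()
    fun themeStateFor(owner: Any): ThemeState =
        themeStates.getOrPut(owner) { ThemeState() }

    // Called when an owner is destroyed, releasing its scoped objects.
    fun onOwnerDestroyed(owner: Any) { themeStates.remove(owner) }
}

fun main() {
    val activityA = Any()
    val activityB = Any()

    // Same singleton everywhere; distinct scoped instances per owner.
    println(AppContainer.logger === AppContainer.logger)
    println(AppContainer.themeStateFor(activityA) === AppContainer.themeStateFor(activityA))
    println(AppContainer.themeStateFor(activityA) === AppContainer.themeStateFor(activityB))
}
```

Hilt does this bookkeeping for you: the map lookup corresponds to the generated activity component, and `onOwnerDestroyed` corresponds to the component being torn down with the activity.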

Benefits of Scoping in Hilt

Resource Optimization

By defining appropriate scopes for dependencies, developers can optimize resource usage and prevent memory leaks. Scoped dependencies are created and destroyed as needed, ensuring that resources are released when no longer in use.

Lifecycle Awareness

Scoped dependencies in Hilt are aware of the Android component's lifecycle they're associated with, whether it's an activity, fragment, or application. This ensures that objects are cleaned up properly when their associated component is destroyed, reducing the risk of memory leaks and improving app stability.

Modularization

Scoping allows developers to modularize their codebase and encapsulate dependencies within specific components or features of the app. This promotes code reuse, maintainability, and separation of concerns, making it easier to reason about and maintain the app architecture.

Chapter 5. Testing with Hilt

Testing is a critical aspect of software development, ensuring that code behaves as expected and meets the requirements. Hilt, with its testing support, simplifies the process of writing comprehensive unit tests and integration tests for Android apps.

Overview of Testing with Hilt

Hilt provides utilities and annotations to support testing in Android apps, allowing developers to inject dependencies and manage test environments effectively. Dependency injection (DI) lets developers focus their tests on the crucial aspects of their business logic by abstracting away mandatory but unrelated component setup: for example, mocking database connections or remote data sources and configuring them with specific behavior. With Hilt, developers can write tests that cover various aspects of their app's functionality, including unit tests for individual components and integration tests for larger app features. The following examples showcase abstracting an authentication mechanism and a remote data source, allowing developers to focus on validating only the steps that follow from specific outcomes.

Unit Testing with Hilt

Unit testing involves testing individual units or components of code in isolation, typically using mock objects or test doubles for dependencies. Hilt simplifies unit testing by providing utilities to inject mock dependencies into classes under test.

Using @BindValue Annotation

The @BindValue annotation in Hilt allows developers to provide mock implementations of dependencies for testing purposes. By annotating a field or parameter with @BindValue in a test class, developers can replace the actual dependency with a mock object or test double.

An example unit test with Hilt, with the full setup of the dependencies involved.

An interface representing the authentication repository:

interface AuthRepository {
    fun login(email: String, password: String): Boolean
}

A test class implementing this interface to simulate authentication (the real implementation would verify the user's credentials through an API call):

class TestAuthRepositoryImpl : AuthRepository {
    override fun login(email: String, password: String): Boolean {
        // Simulate authentication logic
        return email == "[email protected]" && password == "password"
    }
}

The class that handles login logic, using the authentication repository:

class LoginManager @Inject constructor(private val authRepository: AuthRepository) {
    fun loginUser(email: String, password: String): Boolean {
        return authRepository.login(email, password)
    }
}

The unit test class, using Hilt:

@HiltAndroidTest
class ExampleUnitTest {

    @get:Rule
    var hiltRule = HiltAndroidRule(this)

    // @BindValue replaces the real AuthRepository binding with this Mockito
    // mock for the duration of the test. The field must be initialized before
    // injection, so we create the mock inline.
    @BindValue @JvmField
    val authRepository: AuthRepository = mock(AuthRepository::class.java)

    @Inject
    lateinit var loginManager: LoginManager

    @Before
    fun setup() {
        hiltRule.inject()
    }

    @Test
    fun testLoginSuccess() {
        // Stub the authentication repository to return true for valid credentials
        `when`(authRepository.login(anyString(), anyString())).thenReturn(true)

        // Test login functionality with valid credentials
        val result = loginManager.loginUser("[email protected]", "password")

        // Verify that login was successful
        assertTrue(result)
    }

    @Test
    fun testLoginFailure() {
        // Stub the authentication repository to return false for invalid credentials
        `when`(authRepository.login(anyString(), anyString())).thenReturn(false)

        // Test login functionality with invalid credentials
        val result = loginManager.loginUser("[email protected]", "password")

        // Verify that login failed
        assertFalse(result)
    }
}

Integration Testing with Hilt

Integration testing involves testing the interactions between different components or features of an app. Hilt simplifies integration testing by providing utilities to initialize test environments and inject dependencies into Android components.

Using @HiltAndroidTest Annotation

The @HiltAndroidTest annotation in Hilt marks a test class as an Android instrumentation test and allows Hilt to initialize the test environment with dependency injection capabilities. Test classes annotated with @HiltAndroidTest can inject dependencies into Android components such as activities, fragments, and view models.

An example integration test with Hilt, with the full setup of the dependencies involved.

An interface representing an abstract data repository:

interface DataRepository {
    suspend fun fetchData(): List<Item>
}

A test class implementing the repository, simulating a fetch from a remote source (the real implementation would serve this data from an actual backend):

class TestRemoteDataRepository : DataRepository {
    override suspend fun fetchData(): List<Item> {
        // Simulate fetching data from a remote server
        return listOf(Item("Item 1"), Item("Item 2"), Item("Item 3"))
    }
}

A ViewModel class using this data repository to fetch and expose data (note that the older @ViewModelInject annotation has been deprecated in favor of @HiltViewModel with a plain @Inject constructor):

@HiltViewModel
class MainViewModel @Inject constructor(private val dataRepository: DataRepository) : ViewModel() {
    private val _items = MutableLiveData<List<Item>>()
    val items: LiveData<List<Item>> = _items

    init {
        viewModelScope.launch {
            _items.value = dataRepository.fetchData()
        }
    }
}

An Activity observing the ViewModel and displaying the exposed list of items:

@AndroidEntryPoint
class MainActivity : AppCompatActivity() {
    private val viewModel: MainViewModel by viewModels()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        viewModel.items.observe(this, Observer { items ->
            // Update UI with the list of items
            // For example, populate RecyclerView with items
        })
    }
}

The test case validating the behavior of the components:

@HiltAndroidTest
class ExampleIntegrationTest {

    @get:Rule
    var hiltRule = HiltAndroidRule(this)

    @Inject
    lateinit var dataRepository: DataRepository

    @Before
    fun setUp() {
        hiltRule.inject()
    }

    @Test
    fun testActivityBehavior() {
        // Verify that the activity is created without crashing
        val scenario = launchActivity<MainActivity>()
        scenario.onActivity { activity ->
            assertNotNull(activity)
            // Add more assertions to test activity behavior if needed
        }
    }

    @Test
    fun testViewModelBehavior() {
        // Hilt ViewModels cannot be field-injected into a test directly, so we
        // construct the ViewModel with the injected repository.
        val viewModel = MainViewModel(dataRepository)

        // getOrAwaitValue() is a commonly used LiveData testing extension
        // (see the official Android architecture samples).
        val items = viewModel.items.getOrAwaitValue()
        assertNotNull(items)
        assertTrue(items.isNotEmpty())
        // Add more assertions to test ViewModel behavior if needed
    }
}

Best Practices for Testing with Hilt

When writing tests with Hilt, developers should adhere to the following best practices:

Use Mock Objects for Dependencies

In unit tests, use mock objects or test doubles to simulate the behavior of dependencies and isolate the code under test. This allows developers to verify the functionality of individual components in isolation without relying on real dependencies.

Keep Tests Fast and Independent

Make sure tests are fast-running and independent of each other to facilitate quick feedback and maintainability. Minimize dependencies between tests and use techniques like test parallelization to speed up test execution.
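As one hedged example, parallel execution of JVM unit tests can be enabled in a module's Gradle Kotlin DSL build script (the fork count below is an illustrative choice, not a recommendation):

```kotlin
// Module-level build.gradle.kts: run JVM unit tests in parallel forks.
tasks.withType<Test>().configureEach {
    maxParallelForks = (Runtime.getRuntime().availableProcessors() / 2).coerceAtLeast(1)
}
```

Keep in mind that parallel tests must be truly independent; shared mutable state or a shared Hilt test component between test classes will make parallel runs flaky.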

Chapter 6. Performance Considerations

In Android app development, performance is a critical aspect that directly impacts user experience and app usability. When using Hilt for dependency injection, it's essential to consider performance implications to ensure that the app remains responsive and efficient. In this chapter, we'll explore various performance considerations when using Hilt in Android apps.

Generated Code and Reflection Overhead

A common performance concern with dependency injection frameworks is reflection. Unlike purely reflection-based frameworks (such as Guice or the original Dagger 1), Hilt and Dagger 2 generate their components and injection code at compile time via annotation processing, so very little reflection happens at runtime. The costs that remain are the size of the generated code and the work of building the object graph during initialization, which can still affect app startup time and memory usage, especially on older devices or devices with limited resources.

Mitigation Strategies

  • Proguard/R8 Optimization: Enable R8 (or Proguard) shrinking, obfuscation, and optimization to reduce the size of the generated code and remove unused code paths. This helps minimize the impact of the generated DI code on app startup time and memory footprint. To enable shrinking, obfuscation, and optimization, include the following in your module-level build script:

    android {
        buildTypes {
            getByName("release") {
                // Enables code shrinking, obfuscation, and optimization for only
                // your project's release build type. Make sure to use a build
                // variant with `isDebuggable=false`.
                isMinifyEnabled = true
            }
        }
        ...
    }
  • Compile-Time Code Generation: Dagger's annotation processor already generates static component implementations ahead of time, avoiding most runtime reflection. Keeping bindings resolvable at compile time, rather than relying on reflective or dynamic lookups, results in faster startup times and improved performance.

  • Minimize Component Size: Keep Dagger/Hilt component sizes small by avoiding unnecessary dependencies and modularizing your codebase. Smaller components mean less generated code to load and less work during initialization, leading to faster startup times and reduced memory overhead.

Eager Initialization

Another performance consideration with Hilt is how dependencies are initialized. Eagerly initializing many dependencies at startup (for example, by injecting them all into the Application class) can lead to unnecessary overhead, especially if certain dependencies are rarely used or only needed in specific scenarios.

Mitigation Strategies

  • Lazy Loading: Use lazy initialization techniques to defer the creation of dependencies until they are actually needed. This can help reduce startup time and memory usage by delaying the instantiation of less critical dependencies until they are requested by the app.

    class MainActivity : AppCompatActivity() {
        private val retrofitService: RetrofitService by lazy {
            Retrofit.Builder()
                .baseUrl(BASE_URL)
                .addConverterFactory(GsonConverterFactory.create())
                .build()
                .create(RetrofitService::class.java)
        }

        // Rest of the code
    }
  • Custom Scoping: Implement custom scoping mechanisms to control the lifecycle of dependencies more granularly. By defining custom scopes for different parts of the app, developers can ensure that dependencies are initialized only when required and released when no longer needed, minimizing resource usage and improving performance.
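Kotlin's `lazy` delegate, used in the Retrofit snippet above, is what defers construction until first access. A minimal framework-free JVM sketch of the same behavior:

```kotlin
class ExpensiveService {
    init { println("ExpensiveService created") }
    fun ping() = "pong"
}

class Holder {
    // Not constructed until first access.
    val service: ExpensiveService by lazy { ExpensiveService() }
}

fun main() {
    val holder = Holder()
    println("Holder ready")          // no service constructed yet
    println(holder.service.ping())   // first access triggers construction
}
```

Note that "Holder ready" prints before "ExpensiveService created": the holder exists, but the expensive object is only built when it is first needed.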

Memory Management

Effective memory management is crucial for maintaining optimal app performance, particularly on resource-constrained devices such as older smartphones or tablets. With dependency injection, it's important to ensure that objects are appropriately garbage-collected when no longer in use to prevent memory leaks and excessive memory consumption.

Mitigation Strategies

  • Scoped Lifecycle Management: Leverage Hilt's built-in support for scoping and lifecycle management to control the lifespan of dependencies. By associating dependencies with specific scopes (e.g., activity scope, fragment scope), developers can ensure that objects are cleaned up when their associated component is destroyed, reducing the risk of memory leaks.

  • Weak References: Consider using weak references for long-lived dependencies or objects that need to be accessed across different parts of the app. Weak references allow objects to be garbage-collected when they are no longer strongly referenced, helping to free up memory and prevent memory leaks.

    val person = Person("Boris")
    val personWeakReference = WeakReference<Person>(person)
    // If the system runs the garbage collector at some point, the "person" object
    // may be collected (once no strong references remain) in order to free memory.
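A runnable JVM sketch of this idea (the reference is cleared manually here to stand in for a garbage-collection pass, since GC timing is nondeterministic):

```kotlin
import java.lang.ref.WeakReference

data class Person(val name: String)

fun main() {
    var person: Person? = Person("Boris")
    val personRef = WeakReference(person)

    // While a strong reference exists, the weak reference still resolves.
    println(personRef.get()?.name)

    person = null        // drop the strong reference
    personRef.clear()    // simulate the collector reclaiming the object
    println(personRef.get())
}
```

In real code you never call `clear()` yourself; the collector does it once no strong references remain, which is exactly why a weak reference cannot by itself keep an object alive and cause a leak.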

Testing Impact

When considering performance, it's also essential to evaluate the impact of Hilt on testing. While dependency injection frameworks like Hilt facilitate testing by providing utilities for injecting test doubles and managing dependencies in test environments, they can also introduce overhead in test setup and execution.

Mitigation Strategies

  • Isolation of Test Scenarios: Identify and isolate critical test scenarios that require dependency injection and focus on optimizing the performance of these tests. Use Hilt's testing support to provide mock or test double implementations of dependencies and avoid unnecessary overhead in test setup.
  • Test Suite Optimization: Optimize test suites by grouping tests with similar dependencies and minimizing the number of redundant injections. Consider using dependency injection frameworks' features such as test modules or custom test scopes to streamline test setup and reduce overhead.

Chapter 7. Conclusion

Dependency injection is a powerful technique in Android app development for managing dependencies, improving code maintainability, and facilitating testing. With the introduction of Hilt, developers now have a streamlined and developer-friendly solution for implementing dependency injection in their Android apps.

As developers continue to build complex and feature-rich Android apps, tools like Hilt play a crucial role in ensuring code quality, scalability, and maintainability. By adopting Hilt in their projects, developers can leverage the benefits of dependency injection while minimizing the associated overhead and complexity. And the more feature-rich your app grows, the more the need for an adequate logging tool arises. This is where Shipbook steps in to help alleviate the pain around constantly digging into the logs during debugging.

· 8 min read
Kevin Skrei

Lessons Learned From Logging

Intro

Writing software can be extremely complex. This complexity can sometimes make finding and resolving issues incredibly challenging. At Arccos Golf, we often run into these kinds of problems. One of the unique things about golf is that a round of golf is like a snowflake or a fingerprint, no two are alike. A golfer could play the same course every day without ever replicating a round identically.

Thus, trying to write software rules about the golf domain inevitably leads to bugs. And since no two rounds of golf are the same, trying to reproduce a user issue they encounter on the golf course is nearly impossible. So, what have we done to attempt to track down some of these issues? You guessed it, logging.

Logging has proven to be an indispensable tool in my workflow, especially when developing new features. This article aims to guide you through key questions that shape a successful logging strategy along with some considerations around performance. Concluding with practical insights, it features a few case studies from Arccos Golf, demonstrating how logging has been instrumental in resolving real-world bugs.


Figure 1: The Arccos app showing several shot detection modes available (Phone & Link)

Should you log?

When trying to track down a bug or build a new feature, and you're considering logging, the first question to ask yourself is: "Is logging the right choice?" There are many things to consider when deciding whether to add logging to a particular feature. Some of those considerations are:

  1. Do I have any other tools at my disposal that could be useful if this feature fails? This could be an analytics platform that shows screen views and perhaps captures metadata in some other way besides traditional logging.
  2. Will adding logging harm the user in any way? This includes privacy, security, and performance.
  3. How will I actually get or view these logs if something goes wrong?

What to log?

Logging should be strategic, focusing on areas that yield the most significant insights.

  1. Identifying Critical Workflows: Determine which parts of your app are crucial for both users and your business. For instance, in a finance app, logging transaction processes is key.
  2. Focusing on Error-Prone Areas: Analyze past incidents and pinpoint sections of your app that are more susceptible to errors. For example, areas with complex database interactions or integrations with 3rd party SDKs might require more intensive logging.

What About Performance?

One of the primary challenges with logging is its impact on performance, a concern that becomes more pronounced when dealing with extensive string creation. To mitigate this, consider the following tips:

  1. Method Calls in Logs: Be wary of incorporating method calls within your log statements. These methods can be opaque, masking the complexity or time-consuming operations they perform internally.
  2. Log Sparingly: Practice judicious logging. Over-logging, particularly in loops, can severely degrade performance. Log only what is essential for debugging or monitoring.
  3. Asynchronous Logging: If your logging involves file operations or third-party libraries, always ensure that these tasks are executed on a background thread, thus preserving the main thread's responsiveness and application performance.

Implementing these strategies will help you strike a balance between obtaining valuable insights from logs and maintaining optimal application performance. I have found that you develop an intuition about what to log the more you practice and learn about the intricacies of your system.
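The asynchronous-logging tip above can be sketched with a minimal hand-rolled logger (a hypothetical class, not Shipbook's API): callers only enqueue a message, and a single background thread performs the slow write.

```kotlin
import java.util.concurrent.LinkedBlockingQueue

// Minimal sketch: callers enqueue cheaply; one daemon thread does the slow I/O.
class AsyncLogger(private val write: (String) -> Unit) {
    private val queue = LinkedBlockingQueue<String>()
    private val worker = Thread {
        while (true) {
            val msg = queue.take()
            if (msg == POISON) break
            write(msg)   // file/network work stays off the main thread
        }
    }.apply { isDaemon = true; start() }

    // Cheap for callers: the queue is unbounded, so put() never blocks.
    fun log(message: String) { queue.put(message) }

    // Flush remaining messages, then stop the worker.
    fun shutdown() { queue.put(POISON); worker.join() }

    private companion object { const val POISON = "\u0000STOP" }
}

fun main() {
    val logger = AsyncLogger { println(it) }   // stand-in for a slow file write
    logger.log("round started")
    logger.log("shot detected")
    logger.shutdown()   // flush before exiting
}
```

A production logger would also bound the queue and decide what to do under backpressure (drop, sample, or block), which is part of the "log sparingly" trade-off discussed above.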

How Do I Access The Logs?

The most straightforward method to access your application's logs is to use a third-party tool like Shipbook, which offers the convenience of remote, real-time access to your logs.

Finally, I wanted to showcase a few stories illustrating how logging has helped us solve real-world production issues, along with some lessons learned about logging performance.

The 15-Minute Mystery

Our Android mobile app faced an intriguing issue. We noticed conflicting user feedback reports: one showed normal satisfaction, while another indicated a significant drop. The key difference? The latter report included golf rounds shorter than 15 minutes.

Upon investigating these brief rounds, we found that their feedback was much lower than usual. But why? There were no clear patterns related to device type or OS.

The trail of breadcrumbs started when we examined user comments on their rounds, many of which mentioned, "No shots were detected." Diving into the logs of these short rounds, a pattern quickly emerged. We repeatedly saw this line in the logs:

    [2023-12-01 14:20:09.322] DEBUG: Shot detected with ID: XXX but no user location was found at given shot time

This means we detected that a user took a golf shot, but we didn't know where on earth they were, so we couldn't place the shot at a particular location. This was unusual, because we had seen log lines like this in our location provider, which requests the phone's GPS location:

    [2023-12-01 14:20:08.983] VERBOSE: Received GPS location from system with valid coordinates

So we were clearly receiving location updates at regular intervals, but we couldn't associate them with a shot when one was taken by the user. After some further analysis, we discovered this line:

    [2023-12-01 14:20:09.321] VERBOSE: Attempting to locate location for timestamp XXX but requested source: “link” history is empty. Current counts: [“phone”:60, “link”:0]

We have a layer above our location providers that handles serving locations depending on which method the user selected for shot detection mode (either their Phone or their external hardware device “Link”). It was attempting to find a location for “Link” even though all of these rounds should have been in phone shot detection mode. Finally, we located this log line:

    [2023-12-01 14:14:33.455] DEBUG: Starting new round with ID: XXX and shot detection mode: Link … { metadata: { “linkConnected”: false, linkFirmwareVersion: null }... }

Once we analyzed this log line it became immediately obvious - The app was starting the round with the incorrect shot detection mode. Some rounds were started with shot detection mode of Link even if the Phone was selected in the UI (Figure 2).


Figure 2: The Arccos app showing a round of golf being played and tracked

We eventually identified the issue and it was due to some changes in our upgrade pathing code if users had certain firmware and prior generations of our Link product. Thankfully, this build was early in its incremental rollout and we were able to patch it quickly.

This experience highlighted the crucial role of widespread effective logging in mobile app development. It allowed us to quickly identify and fix an issue, reinforcing the importance of comprehensive testing and attentive log analysis.

When Too Much Detail Backfires

Dealing with hardware is especially difficult, given that you can rarely get information off of the hardware device easily. We often rely on verbose logging during the development phase to diagnose communication issues between hardware and software. This approach seemed foolproof as we added a new feature to our app, implementing detailed logging to capture every byte of data exchanged with the hardware of our new Link Pro product. In the controlled environment of our office, everything functioned seamlessly in our iOS app.

While testing on the course, our app faced an unforeseen adversary: it began to get killed by the operating system. The culprit? Excessive CPU usage. Our iOS engineer, armed with profiling tools, discovered a significant CPU spike during data sync with the external device. Our initial assumption was straightforward: perhaps we were syncing too much data too quickly.

To test this theory, we modified the app to sync data less aggressively. This change did reduce app terminations, but it was a compromise, not a solution. We wanted to offer our users real-time experience without interruptions. Digging deeper into the profiling data, we uncovered the true source of our problem. It wasn't the Bluetooth communication overloading the CPU; it was our own verbose logging.

The moment we disabled this extensive logging, the CPU usage dropped dramatically, bringing it back to acceptable levels. This incident was a stark reminder of how even well-intentioned features, like detailed logging, can have unintended consequences on app performance. We decided to use a remote feature flag paired with a developer setting to be able to toggle detailed verbose logging of the complete data transfer only when necessary.

Through this experience, we learned a valuable lesson: the importance of balancing the need for detailed information with the impact on app performance. In the world of mobile app development, sometimes less is more. This insight not only helped us optimize our Link Pro product but also shaped our approach to future feature development, ensuring that we maintain the delicate balance between functionality and efficiency.

Afterword

In conclusion, our experiences at Arccos Golf have demonstrated the invaluable role of logging in software development. Through it, we’ve successfully navigated the complexities of writing golf software, turning unpredictable challenges into opportunities for improvement. Tools like Shipbook have been instrumental in this journey, offering the ease and flexibility for effective log management. I hope I’ve illustrated that logging is more than just a troubleshooting tool; it's a crucial aspect of understanding and enhancing application performance and user experience.

· 10 min read
Petros Efthymiou

Android Performance Optimization Series - Memory RAM

Introduction

In our previous article, we explored the fundamentals of Android performance optimization, focusing on CPU and battery. This second article delves deeper into the crucial aspect of RAM optimization, examining strategies for profiling and managing memory usage effectively to enhance your app's performance and user experience.

By implementing the practical techniques presented here, you can ensure your app utilizes system resources efficiently, delivering a smooth, responsive experience for your users.

RAM (Random Access Memory) is the primary memory of an Android device, acting as a temporary workspace for storing data actively used by applications.

Why RAM Optimization Matters

RAM optimization is essential for several reasons:

  1. Improved Performance:

    RAM is the primary workspace for active app data, and efficient RAM management ensures that your app doesn't consume excessive resources. This leads to several benefits:

    • Increased responsiveness and fewer ANRs (Application Not Responding errors): if the device runs out of memory, the application may become unresponsive and appear stuck. The OS may, at that point, choose to free some memory forcefully, but the UX is already jeopardized.
    • Reduced Scrolling Lag: Efficient RAM usage prevents bottlenecks that can cause scrolling to become sluggish or unresponsive, enhancing the overall user experience.
    • Smoother Animations and User Interface: RAM optimization allows your app to render animations and transitions smoothly, ensuring a responsive and engaging user experience.
  2. Reduced Crashes:

    Memory leaks occur when unused memory remains allocated, leading to performance degradation and potential crashes.

    By memory leaks, we mean objects that are no longer used by the app, but which the JVM garbage collector cannot release because we have kept a reference to them somewhere in our code.

    An example is firing a coroutine to fetch information for a screen without using the ViewModel scope. If you then navigate away from that screen and it gets destroyed, the coroutine will not be cancelled, as it is not tied to the lifecycle of that screen's ViewModel.

    By implementing proper memory management practices, you can prevent these leaks and maintain system stability.

  3. Extended Battery Life:

    When apps consume excessive RAM, the system needs to constantly reload data from storage, which can drain the battery. RAM optimization helps conserve battery life:

    • Reduced Memory Thrashing: Efficient memory management minimizes the need for frequent garbage collection, which can impact battery performance.
    • Lower Background Activity: By using resources efficiently, your app reduces the need for background activities that consume battery power. By Background Activity, we refer to any kind of asynchronous data retrieval or processing that is not directly related to the current user action.
    • Optimized Data Storage: Use data compression and caching techniques to reduce the amount of data stored in RAM, minimizing battery consumption.

    By prioritizing RAM optimization, you can create a high-performing app that not only delivers a smooth user experience but also extends battery life and contributes to a more efficient overall system experience for your users.
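The coroutine-scope leak described above can be reduced to a small kotlinx.coroutines sketch (assuming kotlinx-coroutines is on the classpath; the names are illustrative): a job launched in a scope tied to the screen is cancelled with it, while one launched in an application-wide scope keeps running and keeps its captured references alive.

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // Stand-in for viewModelScope: cancelled when the screen is destroyed.
    val screenScope = CoroutineScope(Dispatchers.Default + SupervisorJob())

    val tiedJob = screenScope.launch { delay(60_000) }    // long-running "fetch"
    val leakedJob = GlobalScope.launch { delay(60_000) }  // outlives the screen

    screenScope.cancel()   // "screen destroyed"
    tiedJob.join()

    println("tied cancelled: ${tiedJob.isCancelled}")
    println("leaked still active: ${leakedJob.isActive}")
    leakedJob.cancel()     // clean up the sketch
}
```

The leaked job would hold on to anything it captured (views, contexts, large lists) until it finished on its own, which is exactly the kind of retention the Memory Profiler's heap dumps reveal.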

Memory Profiling: Unveiling Memory Usage Patterns

Effective RAM optimization requires a deep understanding of how your app utilizes memory. Memory profiling tools provide valuable insights into memory usage patterns, enabling you to identify potential bottlenecks and optimize memory allocation.

Android Studio's built-in Memory Profiler is a powerful tool for analyzing your app's memory footprint. It allows you to monitor memory usage over time, identify memory allocation spikes, and track the lifecycle of objects. By analyzing heap dumps, you can pinpoint memory leaks and understand which objects are consuming excessive memory.

How to profile memory usage in Android

Realtime Memory Tracking

Monitor the app's memory usage in real time to identify spikes and trends. In order to profile RAM memory usage in Android, you need to open your project in Android Studio. In the search bar, search for “profiler” and click on the respective option.

profiler

Now, the Android profiler has been attached to your running application. You can see it at the bottom view of Android Studio. The initial view is capturing the CPU (top) and the memory (bottom) usage.

cpu and memory

You can see the CPU and MEMORY usage over time (bottom) consumed by each Activity. In our case, we first opened a LoginActivity that consumed certain resources, and then, after the login at 00:47, we switched to the MainActivity. There was a spike in CPU usage at the moment of transition, but the RAM usage remained stable. Also, note that the LoginActivity is now in the stopped state (with its state saved) while the MainActivity is active.

For more on CPU usage, you can refer to the previous article in the series. Since this article focuses on RAM, let’s switch to the dedicated memory view and remove the CPU from the tracked metrics. To do this, click on “System Trace”

system trace

And on the top right, click on the “MEMORY” tab.

memory

Now you can see a detailed view of the memory consumption per category:

memory detail

Again, you can track the transition of the Activities at the top, but now we get a more detailed RAM graph that indicates where the RAM is being used. The total memory consumption is 152 MB, broken down as follows:

  • Java and JVM: 19.2 MB
  • Native: 34.6 MB (this refers to C/C++ objects)
  • Android Graphics: 0 MB
  • Stack: 1.1 MB
  • Code execution: 66.3 MB
  • Other: 30.7 MB

These categories add up to the total (19.2 + 34.6 + 1.1 + 66.3 + 30.7 ≈ 152 MB).

Two more helpful things to note:

  1. If you look at the top, you can see some pink dots. These represent user clicks in the application. The prolonged ones indicate long presses or scrolling through a list. In my case, I was scrolling through a list, which is why you can see spikes in memory usage in those time frames. Scrolling through extensive lists is memory-consuming.
  2. The line at the top that represents the activity lifecycle contains some gray spots. These represent switching between different fragments. Depending on how much memory each Fragment consumes, you may notice memory spikes in those time frames as well.

Heap Dump Analysis

Besides real-time memory profiling, you can capture heap dumps at different points in the app's lifecycle to analyze the allocation and retention of objects. Identify objects that remain allocated even when no longer needed, indicating potential memory leaks.

In order to do this, you can select the “Capture heap dump” option and click “Record”.

This will capture the current snapshot of the heap and all the active objects that consume memory. What normally helps me navigate through the memory dump is to click “Arrange by package” and then expand on the package name of my application in order to see which of the objects I control consumes the most memory.

heap dump

In this view, you can see how much memory each package is using per memory category, and if you expand on the packages, you will see the detailed memory consumption per object. You can play a bit around with this tool in order to find the view that best suits you to understand where your memory is consumed.

The heap dump, as we explained, is a snapshot of the app that contains all the information about how memory is currently consumed. You also have the option to record the usage of either native (C/C++) or Java/Kotlin allocations over time by using the options below.

allocation options

Personally, I use the real-time memory tracking to get an idea about how my apps consume memory over time or the Heap Dump when I need very detailed information about the current memory usage per package and class.

Leak Canary

Another helpful tool for catching memory leaks in an Android app is the LeakCanary library. We can integrate it very easily by adding the respective dependency to our app’s build.gradle:

dependencies {
    // debugImplementation because LeakCanary should only run in debug builds.
    debugImplementation 'com.squareup.leakcanary:leakcanary-android:3.0-alpha-1'
}

No further code is needed. Now, when the library detects a memory leak, it pops a notification and captures a heap dump to help us determine what leaked and what caused it.

Leak Canary

I strongly recommend using Leak Canary in your app.

Memory Optimization Techniques

Effective RAM optimization involves a combination of measures and strategies.

  1. Avoid memory leaks with Coroutines’ structured concurrency. In the previous section, we explained how to detect memory leaks. Let’s now see how to avoid them. Most memory leaks are caused by background work that is no longer required but still referenced. The most effective way to prevent this is by using Coroutines’ structured concurrency.
    Make sure to replace all background work mechanisms, such as AsyncTask, RxKotlin, etc., with coroutines and tie the work to the appropriate coroutine scope. When the work is related to a screen, tie it to its ViewModel’s lifecycle by using the viewModelScope. This way, the work will be canceled when the ViewModel is destroyed. Avoid using GlobalScope, and if you do, make sure you cancel the work when it’s no longer needed.

  2. Build efficient lazy loading lists with Jetpack Compose lazy column or view holder pattern. Extensive lists consume a lot of memory, especially if you load all the items at once. Currently, the most memory-efficient list mechanism is the Jetpack Compose Lazy Column; for more info, please refer to our respective article. The second most efficient way is the recycler view combined with the view holder pattern. The lazy loading technique can be extended to more objects besides lists.

  3. Minimize Unused Resources: Carefully manage the resources your app consumes, particularly images and background services. Use appropriate image formats, such as WebP or PNG, and optimize image dimensions to reduce file size.

  4. Optimize Animation Usage: Animations can be resource-intensive. Use animations sparingly and optimize them for efficiency to minimize memory usage.

  5. Utilize Dependency Injection Frameworks: Dependency injection frameworks like Hilt or Dagger 2 can help manage and reuse objects efficiently, reducing memory usage. Through their scoping mechanisms, these frameworks provide an easy way to keep only a single instance of an object. By allowing only a single instance, we avoid loading memory with unnecessary objects.

  6. Be Mindful of External Libraries: Carefully select and use external libraries. Some libraries may introduce unnecessary resource overhead.
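The structured-concurrency idea in point 1 — background work is owned by a scope, and destroying the owner cancels the work — can be illustrated with a hand-rolled, standard-library-only sketch. In real code you would use kotlinx.coroutines and viewModelScope; `MiniScope` below is a made-up name for illustration only.

```kotlin
import java.util.concurrent.Executors

// Illustrative stand-in for a lifecycle-aware coroutine scope:
// background work is owned by the scope, and clearing the scope
// cancels whatever is still running or queued.
class MiniScope {
    private val executor = Executors.newSingleThreadExecutor()

    fun launch(block: () -> Unit) {
        executor.execute { block() }
    }

    // Analogous to ViewModel.onCleared(): interrupt and drop pending work.
    fun clear() {
        executor.shutdownNow()
    }
}

fun main() {
    val scope = MiniScope()
    scope.launch {
        Thread.sleep(50)
        println("finished work")
    }
    scope.launch {
        try {
            Thread.sleep(10_000) // simulates a long-running request
            println("long-running work")
        } catch (e: InterruptedException) {
            println("work cancelled") // the owner was destroyed first
        }
    }
    Thread.sleep(200) // the first task finishes; the second is still sleeping
    scope.clear()     // "screen destroyed": pending work is interrupted
}
```

Work that is not tied to an owner this way keeps running (and keeps its references alive) after the screen is gone — which is exactly the leak pattern LeakCanary reports.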

By implementing these memory optimization techniques, you can ensure your Android app consistently delivers a smooth, responsive user experience while utilizing system resources efficiently.

Conclusion

In this second article of the series, we dove deep into RAM optimization. We first saw how to profile memory usage and detect memory leaks, and then we discussed optimization techniques.

Effective RAM optimization is a crucial aspect of developing high-performing Android apps. By implementing the strategies discussed in this article, you can significantly enhance your app's memory management, reducing memory leaks, improving performance, and extending battery life. Shipbook’s remote logging capabilities are also a helpful tool to track down issues.

Remember, continuous monitoring and optimization are essential for maintaining a top-notch user experience.

· 16 min read
Boris Nikolov

 Kotlin Multiplatform Mobile including Android and iOS

Introduction to Kotlin Multiplatform Mobile

Understanding Kotlin Multiplatform Mobile

What is KMM?

Kotlin Multiplatform Mobile is an extension of the Kotlin programming language that enables the sharing of code between different platforms, including Android and iOS. Unlike traditional cross-platform frameworks that rely on a common runtime, KMM allows developers to write platform-specific code while sharing business logic and other non-UI code.

Key Advantages of KMM

  1. Code Reusability: With KMM, you can write and maintain a single codebase for your business logic, reducing duplication and ensuring consistency across platforms.
  2. Native Performance: KMM leverages the native capabilities of each platform, providing performance comparable to writing platform-specific code. All your KMM code is compiled to platform-specific code before it runs on a device, so users get full native performance.
  3. Interoperability: KMM seamlessly integrates with existing codebases and libraries, allowing developers to leverage platform-specific features when needed.
  4. Incremental Adoption: You can introduce KMM gradually into your projects, starting with shared modules and gradually expanding as needed.

KMM vs. Flutter

While KMM and Flutter do have a lot in common in terms of functionality and end result, they have very different approaches to reaching it:

  1. Programming language - KMM uses Kotlin, a language known for its conciseness, safety features, and strong null-safety. Flutter, on the other hand, uses Dart, a language developed by Google and specifically targeted at building UIs through a reactive programming model.
  2. Architecture - KMM focuses on sharing business logic between platforms and encourages a modular architecture that combines shared core business-logic modules with platform-specific UI implementations. Flutter embraces a reactive and declarative UI framework with a widget-based architecture. The entire UI in Flutter is expressed as a hierarchy of widgets and doesn’t have a clear separation between business logic and UI.
  3. UI Framework - KMM doesn’t have a UI framework of its own, but rather leverages native UI frameworks like Jetpack Compose for Android and SwiftUI for iOS. Flutter proposes a custom UI framework that is equipped with a rich set of customisable widgets. The UI is rendered via the Skia graphics engine which is aimed at delivering a consistent look and feel across all supported platforms.
  4. Community and ecosystem - KMM is actively developed by JetBrains and has been gaining a lot of traction since inception by drawing many benefits from the Kotlin community. Flutter is maintained by Google and has a large and active community. It’s constantly growing its ecosystem of packages and plugins.
  5. Integration with native code - KMM seamlessly integrates with native codebases making its adoption effortless. Flutter relies on a platform channel mechanism to communicate with native code. It can invoke platform-specific functionality, but requires additional setup.
  6. Performance - Kotlin compiles to native code, providing near-native performance. Flutter uses a custom rendering engine (Skia) and introduces an additional layer between the app and the platform, potentially affecting performance in graphic-intensive applications.
  7. Platform support - KMM currently supports Android and iOS devices with planned support for other platforms in the future. Flutter has a broader range of supported platforms including Android, iOS, web, desktop (yet in experimental stage) and embedded devices.

The choice between KMM and Flutter remains mostly subjective, depending on language and architecture preferences, specific project requirements, and, of course, personal choice.

Creating a New KMM Project

Creating a new KMM project is a straightforward process:

  1. Open Android Studio:
    • Select "Create New Project."
    • Choose the "Kotlin Multiplatform App" template.
  2. Configure Project Settings:
    • Provide a project name, package name, and choose a location for your project.
  3. Configure Platforms:
    • Choose names for the platform-specific and shared modules (Android, iOS and shared).
    • Configure the Kotlin version for each platform module.
  4. Finish:
    • Click "Finish" to let Android Studio set up your KMM project.

If you don’t see the “Kotlin Multiplatform App” template then open Settings > Plugins, type “Kotlin Multiplatform Mobile”, install the plugin and restart your IDE.

Kotlin Multiplatform Mobile plugin IDE

Project Structure and Organization

Understanding the structure of a KMM project is crucial for efficient development:

MyKMMApp
|-- shared
|   |-- src
|       |-- commonMain
|       |-- androidMain
|       |-- iosMain
|-- androidApp
|-- iosApp
  • shared: Contains code shared between Android and iOS.
  • commonMain: Shared code that can be used on both platforms.
  • androidMain: Platform-specific code for Android.
  • iosMain: Platform-specific code for iOS.
  • androidApp: Android-specific module containing code and resources specific to the Android platform.
  • iosApp: iOS-specific module containing code and resources specific to the iOS platform.
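Note that Gradle typically only knows about the Android and shared modules; the iosApp directory is built from Xcode. A settings file matching the tree above might look roughly like this (module names are assumptions based on the structure shown):

```kotlin
// settings.gradle.kts — illustrative; iosApp is not a Gradle module,
// it is opened and built from Xcode.
rootProject.name = "MyKMMApp"
include(":androidApp")
include(":shared")
```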

Shared Code Basics: Writing Platform-Agnostic Logic

Now that you have your Kotlin Multiplatform Mobile (KMM) project set up, it's time to dive into the heart of KMM development—writing shared code. In this chapter, we'll explore the fundamentals of creating platform-agnostic logic that can be used seamlessly across Android and iOS.

Identifying Common Code Components

The essence of KMM lies in identifying and isolating the components of your code that can be shared between platforms. Common code components typically include:

  • Business Logic: The core functionality of your application that is independent of the user interface or platform.
  • Data Models: Definitions for your application's data structures that remain consistent across platforms.
  • Utilities: Helper functions and utilities that don't rely on platform-specific APIs.

Identifying these shared components sets the foundation for maximizing code reuse and maintaining a consistent behavior across different platforms.

Writing Business Logic in Shared Modules

In your KMM project, the commonMain module is where you'll write the majority of your shared code. Here's a simple example illustrating a shared class with business logic:

// shared/src/commonMain/kotlin/com.example.mykmmapp/Calculator.kt

package com.example.mykmmapp

class Calculator {
    fun add(a: Int, b: Int): Int {
        return a + b
    }

    fun multiply(a: Int, b: Int): Int {
        return a * b
    }
}

In this example, the Calculator class provides basic mathematical operations and can be used across both Android and iOS platforms.
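Because Calculator lives in commonMain, calling it is ordinary Kotlin on either platform. A quick usage sketch (the class is repeated here so the snippet is self-contained):

```kotlin
// The shared class is used like any other Kotlin class.
class Calculator {
    fun add(a: Int, b: Int): Int = a + b
    fun multiply(a: Int, b: Int): Int = a * b
}

fun main() {
    val calculator = Calculator()
    println(calculator.add(3, 4))      // prints 7
    println(calculator.multiply(2, 5)) // prints 10
}
```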

Ensuring Platform Independence

While writing shared code, it's crucial to avoid dependencies on platform-specific APIs. Instead, use Kotlin's expect/actual mechanism to provide platform-specific implementations where necessary.

Here's an example illustrating the use of expect/actual for platform-specific logging. To stay consistent across platforms, it’s recommended to use the same logging provider on both — for example, Shipbook’s logger, which provides the required dependencies for both platforms. For the sake of simplicity, the example below uses the native logger of each platform.

Code in shared module:

// shared/src/commonMain/kotlin/com.example.mykmmapp/Logger.kt

package com.example.mykmmapp

expect class Logger() {
    fun log(message: String)
}

Code in Android’s module:

// shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidLogger.kt

package com.example.mykmmapp

actual class Logger actual constructor() {
    actual fun log(message: String) {
        android.util.Log.d("MyKMMApp", message)
    }
}

Code in iOS’s module:

// shared/src/iosMain/kotlin/com.example.mykmmapp/IOSLogger.kt

package com.example.mykmmapp

import platform.Foundation.NSLog

actual class Logger actual constructor() {
    actual fun log(message: String) {
        NSLog("MyKMMApp: %@", message)
    }
}

By employing expect/actual declarations, you ensure that the shared code can utilize platform-specific features without compromising the platform independence of the core logic.
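A rough plain-Kotlin analogy may help: expect/actual behaves like an interface with exactly one implementation per platform, except that the implementation is chosen at compile time rather than at runtime. The class names below are illustrative, not part of the KMM API:

```kotlin
// Plain-Kotlin analogy for expect/actual: one declaration,
// one implementation selected per "platform".
interface Logger {
    fun log(message: String)
}

class AndroidLogger : Logger {
    override fun log(message: String) = println("Logcat: $message")
}

class IosLogger : Logger {
    override fun log(message: String) = println("NSLog: $message")
}

// Shared code depends only on the declaration, never on a concrete platform.
fun greet(logger: Logger) {
    logger.log("Hello from shared code")
}

fun main() {
    greet(AndroidLogger())
}
```

The key difference from a real interface is that expect/actual adds no runtime dispatch: the compiler substitutes the platform’s actual declaration directly.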

Platform-Specific Code: Adapting for Android

Now that you've laid the groundwork with shared code, it's time to explore the intricacies of adapting your Kotlin Multiplatform Mobile (KMM) project for the Android platform.

Leveraging Platform-Specific APIs

One of the advantages of KMM is the ability to seamlessly integrate with platform-specific APIs. In Android development, you can use the Android-specific APIs in the androidMain module. Here's an example of using the Android Toast API:

// shared/src/androidMain/kotlin/com.example.mykmmapp/Toaster.kt

package com.example.mykmmapp

import android.content.Context
import android.widget.Toast

actual class Toaster(private val context: Context) {
    actual fun showToast(message: String) {
        Toast.makeText(context, message, Toast.LENGTH_SHORT).show()
    }
}

In this example, the Toaster class is designed to display Toast messages on Android. The class takes an Android Context as a parameter, allowing it to interact with Android-specific features.

Managing Platform-Specific Dependencies

When working with platform-specific code, it's common to have dependencies that are specific to each platform. KMM provides a mechanism to manage platform-specific dependencies using the expect and actual declarations. For example, if you need a platform-specific library for Android, you can declare the expected behavior in the shared module and provide the actual implementation in the Android module.

Here is a shared class and function intended to fetch data from an online source by making an HTTP request:

// shared/src/commonMain/kotlin/com.example.mykmmapp/NetworkClient.kt

package com.example.mykmmapp

expect class NetworkClient() {
    suspend fun fetchData(): String
}

Android-specific implementation:

//shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidNetworkClient.kt

package com.example.mykmmapp

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import okhttp3.OkHttpClient
import okhttp3.Request

actual class NetworkClient actual constructor() {
    private val client = OkHttpClient()

    actual suspend fun fetchData(): String = withContext(Dispatchers.IO) {
        // OkHttp's execute() is blocking, so run it on the IO dispatcher.
        val request = Request.Builder()
            .url("https://api.example.com/data")
            .build()

        val response = client.newCall(request).execute()
        response.body?.string() ?: "Error fetching data"
    }
}

In this example, the NetworkClient interface is declared in the shared module, and the Android-specific implementation is provided in the androidMain module using the OkHttp library.

Building UI with Kotlin Multiplatform

User interfaces play a pivotal role in mobile applications, and with Kotlin Multiplatform Mobile (KMM), you can create shared UI components that work seamlessly across Android and iOS. In this chapter, we'll explore the basics of building UI with KMM, creating shared UI components, and handling platform-specific UI differences.

Overview of KMM UI Capabilities

KMM provides a unified approach to UI development, allowing you to share code for common UI elements while accommodating platform-specific nuances. The shared UI code resides in the “commonMain” module, and platform-specific adaptations are made in the “androidMain” and “iosMain” modules. A more convenient but more advanced approach to designing shared components is to use a multiplatform composer tool, like the one provided by JetBrains named Compose Multiplatform. While still young in its development, it already provides a powerful approach to writing UI logic reusable on many platforms:

  • Android (including Jetpack Compose, hence the name “Compose Multiplatform”)
  • iOS (currently in Alpha, but unfortunately without support for SwiftUI)
  • Desktop (Windows, Mac, and Linux)
  • Web (still in the Experimental stage)

Creating Shared UI Components

Let's consider a simple example of creating a shared button component:

// shared/src/commonMain/kotlin/com.example.mykmmapp/Button.kt

package com.example.mykmmapp

expect class Button(text: String) {
    fun render(): Any
}

In this example, the Button class is declared in the shared module (render() returns Any so that each platform can return its own widget type), and the actual rendering implementation is provided in the platform-specific modules.

Android Implementation

// shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidButton.kt

package com.example.mykmmapp

actual class Button actual constructor(private val text: String) {
    actual fun render(): Any {
        // android.widget.Button is fully qualified to avoid clashing
        // with this shared Button class.
        val button = android.widget.Button(AndroidContext.appContext)
        button.text = text
        return button
    }
}

iOS Implementation

// shared/src/iosMain/kotlin/com.example.mykmmapp/IOSButton.kt

package com.example.mykmmapp

import platform.UIKit.UIButton
import platform.UIKit.UIControlStateNormal

actual class Button actual constructor(private val text: String) {
    actual fun render(): Any {
        val button = UIButton()
        button.setTitle(text, UIControlStateNormal)
        return button
    }
}

In these platform-specific implementations, we use Android's “Button” and iOS's “UIButton” to render the button with the specified text.

Storing Platform-Specific Resources

To manage platform-specific resources such as layouts or styles, you can utilize the “androidMain/res” and “iosMain/resources” directories. This allows you to tailor the UI experience for each platform without duplicating code.

Interoperability: Bridging the Gap Between Kotlin and Native Code

Kotlin Multiplatform Mobile (KMM) doesn't exist in isolation; it seamlessly integrates with native code on each platform, allowing you to leverage platform-specific libraries and functionalities. In this chapter, we'll explore the intricacies of interoperability, incorporating platform-specific libraries, communicating between shared and platform-specific code, and addressing data serialization/deserialization challenges.

Incorporating Platform-Specific Libraries

One of the strengths of KMM is its ability to integrate with existing platform-specific libraries. This allows you to leverage the rich ecosystems of Android and iOS while maintaining a shared codebase. Let's consider an example where we integrate an Android-specific library for image loading.

Shared Code Interface

// shared/src/commonMain/kotlin/com.example.mykmmapp/ImageLoader.kt

package com.example.mykmmapp

expect class ImageLoader() {
    fun loadImage(url: String): Any
}

Android Implementation

// shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidImageLoader.kt

package com.example.mykmmapp

import android.widget.ImageView
import com.bumptech.glide.Glide

actual class ImageLoader actual constructor() {
    actual fun loadImage(url: String): Any {
        val imageView = ImageView(AndroidContext.appContext)
        Glide.with(AndroidContext.appContext).load(url).into(imageView)
        return imageView
    }
}

In this example, we've integrated the popular Glide library on Android to load images. The ImageLoader class is declared in the shared module, and the actual implementation uses Glide in the Android-specific module.

Communicating Between Shared and Platform-Specific Code

Effective communication between shared and platform-specific code is crucial for building cohesive applications. KMM provides mechanisms for achieving this, including the use of interfaces, callbacks, and delegation.

Callbacks and Delegation

// shared/src/commonMain/kotlin/com.example.mykmmapp/CallbackListener.kt

package com.example.mykmmapp

interface CallbackListener {
    fun onResult(data: String)
}

Usage in Android-specific module

//shared/src/androidMain/kotlin/com.example.mykmmapp/AndroidCallbackHandler.kt

package com.example.mykmmapp

// Note: no "actual" modifier — this class has no expect counterpart.
class AndroidCallbackHandler {
    private var callback: CallbackListener? = null

    fun setCallback(callback: CallbackListener) {
        this.callback = callback
    }

    fun performCallback(data: String) {
        callback?.onResult(data)
    }
}

In this example, the “AndroidCallbackHandler” class in the Android-specific module utilizes the shared callback interface and acts as an intermediary for callback communication between shared code and Android-specific code.
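A self-contained sketch of that round trip — shared code registers a listener and platform code delivers the result. `CallbackHandler` here is a simplified, hypothetical stand-in for the article's AndroidCallbackHandler:

```kotlin
// The shared callback contract.
interface CallbackListener {
    fun onResult(data: String)
}

// Simplified stand-in for the platform-side handler.
class CallbackHandler {
    private var callback: CallbackListener? = null

    fun setCallback(callback: CallbackListener) {
        this.callback = callback
    }

    fun performCallback(data: String) {
        callback?.onResult(data)
    }
}

fun main() {
    val handler = CallbackHandler()
    // Shared code registers a listener...
    handler.setCallback(object : CallbackListener {
        override fun onResult(data: String) = println("shared code got: $data")
    })
    // ...and platform code delivers a result when it is ready.
    handler.performCallback("42")
}
```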

Handling Data Serialization/Deserialization

When dealing with shared data models, KMM provides tools for efficient data serialization and deserialization. The “kotlinx.serialization” library simplifies the process of converting objects to and from JSON, facilitating seamless communication between shared and platform-specific code.

Add Serialization Dependency

Ensure that your shared module has the kotlinx.serialization dependency added to its “build.gradle.kts” or “build.gradle” file:

// Remember to also apply the kotlinx.serialization Gradle plugin.
commonMain {
    dependencies {
        implementation "org.jetbrains.kotlinx:kotlinx-serialization-json:1.3.0"
    }
}

Define Serializable Data Class:

Create a data class that represents the structure of your serialized data. Annotate it with “@Serializable”:

// shared/src/commonMain/kotlin/com.example.mykmmapp/User.kt

package com.example.mykmmapp

import kotlinx.serialization.Serializable

@Serializable
data class User(val id: Int, val name: String, val email: String)

Serialize Data to JSON:

Use the “Json.encodeToString” function to serialize an object to JSON:

// shared/src/commonMain/kotlin/com.example.mykmmapp/UserService.kt

package com.example.mykmmapp

import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

class UserService {
    fun getUserJson(user: User): String {
        return Json.encodeToString(user)
    }
}

Deserialize JSON to Object:

Use the “Json.decodeFromString” function to deserialize JSON to an object:

// shared/src/commonMain/kotlin/com.example.mykmmapp/UserService.kt

package com.example.mykmmapp

import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

class UserService {
    fun getUserFromJson(json: String): User {
        return Json.decodeFromString(json)
    }
}

Debugging and Testing in a Kotlin Multiplatform Project

Debugging and testing are critical aspects of the software development lifecycle, ensuring the reliability and quality of your Kotlin Multiplatform Mobile (KMM) project. In this chapter, we'll explore strategies for debugging shared code, writing tests for shared and platform-specific code, and running tests on Android.

Writing Tests for Shared Code

Testing shared code is crucial for ensuring its correctness and reliability. KMM supports writing tests that can be executed on both Android and iOS platforms. The “kotlin.test” framework is commonly used for writing tests in the shared module.

Sample Test in the Shared Module

// shared/src/commonTest/kotlin/com.example.mykmmapp/CalculatorTest.kt

package com.example.mykmmapp

import kotlin.test.Test
import kotlin.test.assertEquals

class CalculatorTest {
    @Test
    fun testAddition() {
        val calculator = Calculator()
        val result = calculator.add(3, 4)
        assertEquals(7, result)
    }

    @Test
    fun testMultiplication() {
        val calculator = Calculator()
        val result = calculator.multiply(2, 5)
        assertEquals(10, result)
    }
}

Running Tests on Android and iOS

Running tests on Android and iOS involves using Android Studio's and Xcode’s testing tools. Ensure that your Android and iOS test configurations are set up correctly, and then execute your tests as you would with standard Android and iOS tests.

Testing Platform-Specific Code

While shared code tests focus on business logic, platform-specific code tests ensure the correct behavior of platform-specific implementations. Write tests for Android and iOS code using their respective testing frameworks.

Android Unit Test Example

// shared/src/androidTest/kotlin/com.example.mykmmapp/AndroidImageLoaderTest.kt

package com.example.mykmmapp

import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Test
import org.junit.runner.RunWith
import kotlin.test.assertTrue

@RunWith(AndroidJUnit4::class)
class AndroidImageLoaderTest {
    @Test
    fun testImageLoading() {
        val imageLoader = ImageLoader()
        val imageView = imageLoader.loadImage("https://example.com/image.jpg")
        assertTrue(imageView is android.widget.ImageView)
    }
}

iOS Unit Test Example

// Swift test in the iOS app target, e.g. iosApp/iosAppTests/IosImageLoaderTest.swift

import XCTest
import MyKmmApp // Assuming this is your Kotlin Multiplatform module name

class IosImageLoaderTest: XCTestCase {

    func testImageLoading() {
        let imageLoader = ImageLoader()
        let imageView = imageLoader.loadImage("https://example.com/image.jpg")
        XCTAssertTrue(imageView is UIImageView)
    }
}

Integrating Kotlin Multiplatform Mobile with Existing Android Projects

Integrating Kotlin Multiplatform Mobile (KMM) with existing Android projects allows you to gradually adopt cross-platform development while leveraging your current codebase. In this chapter, we'll explore the process of adding KMM modules to existing projects, sharing code between new and existing modules, and managing dependencies.

Adding KMM Modules to Existing Projects

  1. Add KMM Module

    • Navigate to "File" > "New" > "New Module..."
    • Choose "Kotlin Multiplatform Shared Module"
    • Follow the prompts to configure the module settings.
  2. Configure Dependencies

    Ensure that your Android module and KMM module are appropriately configured to share code and dependencies. Update the settings.gradle and build.gradle files as needed.

    // settings.gradle

    include ':app', ':shared', ':kmmModule'

    // app/build.gradle

    dependencies {
        implementation project(":shared")
        implementation project(":kmmModule")
    }
  3. Sharing Code

    You can now share code between the Android module and the KMM module. Place common code in the “commonMain” source set of the KMM module.

    // shared/src/commonMain/kotlin/com.example.mykmmapp/CommonCode.kt

    package com.example.mykmmapp

    fun commonFunction() {
        println("This function is shared between Android and KMM.")
    }
  4. Run and Test

    Run your Android project, ensuring that the shared code functions correctly on both platforms.

Managing Dependencies

Shared Dependencies

Ensure that dependencies required by shared code are included in the KMM module's “build.gradle.kts” file.

// shared/build.gradle.kts

kotlin {
    android()
    ios()
    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.0")
                // Add other shared dependencies
            }
        }
    }
}

Platform-Specific Dependencies

For platform-specific dependencies, declare them in the respective source sets.

// shared/build.gradle.kts

kotlin {
    android()
    ios()
    sourceSets {
        val androidMain by getting {
            dependencies {
                implementation("com.squareup.okhttp3:okhttp:4.9.0")
                // Add other Android-specific dependencies
            }
        }
        val iosMain by getting {
            dependencies {
                // Add iOS-specific dependencies
            }
        }
    }
}

Conclusion

As we conclude our exploration of Kotlin Multiplatform Mobile (KMM), it's evident that this technology has emerged as a powerful solution for cross-platform mobile app development. By seamlessly bridging the gap between Android and iOS, KMM empowers developers to build robust applications with efficiency and code reusability at its core.

Kotlin Multiplatform Mobile stands as a testament to the evolving landscape of mobile app development. By embracing the principles of code reusability, adaptability, and continuous improvement, you are well-equipped to navigate the complexities of cross-platform development.

· 11 min read
Petros Efthymiou

Android Performance Optimization Series - Battery & CPU

Introduction

In the dynamic world of Android app development, performance is crucial in order to meet the growing user expectations. Users demand smooth, responsive, and battery-efficient experiences, and they won't hesitate to uninstall apps that fall short. As developers, it's our responsibility to ensure our Android applications are not just functional but also performant.

We will be posting an exclusive series of articles where we go deep into the realm of Android performance profiling and optimization! Over the next few blog posts, we'll embark on an enlightening journey to demystify the Android apps’ performance. In this comprehensive series, we'll touch on the critical aspects of CPU usage, battery consumption, memory management, and UI optimization. Whether you're a seasoned developer seeking to fine-tune your app or a newcomer eager to master the art of Android optimization, this series is your roadmap to achieving peak performance. Get ready to unleash the full potential of your Android applications! 🚀

The Importance of Performance Optimization

Performance optimization isn't merely a luxury; it's a necessity. Beyond satisfying your users, there are several reasons to prioritize performance optimization in Android app development:

  1. User Retention: Performance issues, such as laggy UIs and slow load times, frustrate users and lead to high uninstall rates. An optimized app is more likely to retain and engage its user base.
  2. Market Competition: The landscape of mobile applications is crowded, and competition is fierce. An app that outperforms its peers has a clear advantage, which often translates to better ratings and more downloads.
  3. Battery Efficiency: Mobile device batteries are finite resources. An inefficient app can quickly drain a user's battery, leading to negative reviews and uninstalls. Optimal performance can significantly extend battery life.
  4. Resource Utilization: Efficient apps consume fewer system resources, such as CPU and memory. This, in turn, benefits the entire ecosystem by reducing strain on the device and enhancing the user experience across all apps.

In this article, we will explore battery consumption and CPU usage profiling and optimization. These two aspects are closely related. High CPU usage also leads to high battery consumption.

Understanding CPU Usage and Battery Consumption

Let’s first make sure we are on the same page regarding what we mean by the terms CPU Usage and Battery Consumption.

CPU Usage

The Central Processing Unit (CPU) is the brain of any computing device, including smartphones. CPU usage in the context of Android app performance refers to the percentage of the CPU's processing power that your app consumes. High CPU usage can lead to sluggish performance, increased power consumption, and a less responsive user interface. This happens because the CPU cannot keep up with the workload, which results in slow response times.

Monitoring CPU usage is crucial for several reasons:

  • Responsiveness: High CPU usage can cause your app to become unresponsive. Monitoring CPU usage allows you to identify performance bottlenecks and optimize your code for a smoother user experience.
  • Battery Life: As we already explained, excessive CPU usage can quickly drain a device's battery. By reducing CPU load, you can extend the device's battery life, leading to happier users.

Battery Consumption

Battery consumption is a key concern for mobile users. Apps that consume excessive battery are likely to be uninstalled or used sparingly. Here is why tracking battery consumption is essential:

  • User Retention: Excessive battery consumption is a major annoyance for users. By reducing your app's power consumption, you increase the likelihood of user retention.

I personally tend to uninstall apps that are very battery-demanding.

Profiling Battery Consumption and CPU usage

The skill to identify performance issues is arguably more important than the skill to optimize. In the same way that the read-to-write code ratio is estimated at about 10 to 1, we should spend more time identifying performance issues than optimizing. At first this sounds odd, but it actually makes a lot of sense. Nowadays, even mobile devices have become quite powerful and can handle heavy-duty tasks effectively. Furthermore, performance optimization often leads to code that is harder to read and reason about. Therefore, we shouldn’t spend time optimizing code that has little to no effect on the real-world performance our users experience. We must, though, always keep an eye out for serious performance holes we are not aware of. The Android Profiler is an excellent tool for that!

Android Profiler

In order to start profiling an app, we first need to run the application from Android Studio in an emulator or a real device. When you have the app running, click the “Profiler” tab at the bottom of Android Studio:

profiler

Then, you need to locate the device on which you are running your app and click the “plus” icon to start a new profiler session. Find your app (debuggable process) and click on it.

debuggable process

Monitoring CPU Usage and Battery Consumption

Once you select your application, you are going to see something like the screenshot below. The top section indicates the percentage of CPU usage, and the bottom section the memory that our application is using.

cpu and memory

We are going to ignore the memory section for now, as this article focuses on CPU and battery. If we start using our app and navigate from screen to screen, we will notice the CPU usage increasing. In particular, when scrolling an extensive list that uses pagination, we can see the CPU usage climbing well above 50%. This happens because of the multiple network requests that fetch the next items, as well as the lazy calculation of the UI items.

The pink dots at the top indicate the taps we perform inside the app.

clicks

Now, click on the System Trace link. The system trace initially has two tabs, one for the CPU and one for memory. Click on CPU, and you will be able to track the CPU usage in even greater detail.

detailed cpu and memory

The green color indicates the CPU usage by our application, while the gray color indicates CPU usage by external factors such as the OS or other apps that may run in the background. We can also see the number of threads that are currently active.

In order to track the battery usage, select the System Trace option on the left of the screen and start recording.

recording

You can now use your app and perform the actions that you are interested in profiling, like navigating inside the app or scrolling a list. Once you are done, click stop recording, and you will get a full profiling report. At the top of the screen, you can see the CPU usage and, at the bottom, the energy profiler with the battery consumption.

full profiling report

The “Capacity” represents the remaining battery percentage (%).

The “Charge” represents the remaining battery charge in microampere-hours (µAh).

The “Current” represents the instantaneous current in microamperes (µA).

Personally, though, I prefer to focus on CPU usage, which I find more helpful and straightforward. As a rule of thumb, high CPU usage means high battery consumption.

Besides the CPU, though, other factors contribute to battery consumption, such as GPU usage, sensors, GPS, or camera usage. Unfortunately, on most devices we are unable to get a detailed report, as they don’t support the “On Device Power Rails Monitor” (ODPM). A few devices, such as the Pixel 6 or Pixel 7, do support it, and the energy profiler there can give us the full battery usage report to understand further where we consume battery.

On Device Power Rails Monitor

Another great way to understand if your application is consuming too much battery is to simply use it as a user and check the system settings report that indicates your app’s battery consumption over time.

We now clearly understand how to profile our app’s CPU usage and battery consumption, either during runtime or by recording and storing usage reports. Let’s move on to the next section, where we will learn certain optimization techniques.

Optimization

The general rule for optimizing both CPU usage and battery consumption is to avoid any unnecessary work. When we optimize CPU usage, we also optimize battery consumption and vice versa. The difference is that for CPU usage we must avoid “doing all the work at once”, which overloads it and causes performance issues, while battery consumption is about how much work we do over time.

Below, we present certain areas that can overload the CPU and cause high battery drain.

Precalculations

We often precalculate information, anticipating that we will need to display it later. We do it so that the information is available to the user instantly, and the user doesn’t have to wait for it. In many cases, though, the user will never navigate to the anticipated area, and the information won’t be displayed, resulting in wasted CPU work and battery drain.

  • Try to avoid prefetching data with multiple network requests at application startup unless it’s really necessary. This can both overload your CPU, resulting in a sluggish application startup, and unnecessarily drain the battery.
  • Avoid precalculating list elements. Use either the RecyclerView combined with the ViewHolder pattern or the Jetpack Compose LazyColumn. These components are performance-optimized and only create the items when the user is about to see them. API pagination is also a great technique to avoid prefetching an extensive amount of data.
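As a minimal sketch of the lazy approach in Jetpack Compose: LazyColumn only composes the items that are visible (or about to become visible), so nothing is precalculated for the rest of the list. The `Article` model and `ArticleList` composable here are illustrative, not from a real project.

```kotlin
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// Hypothetical list item model.
data class Article(val id: Long, val title: String)

@Composable
fun ArticleList(articles: List<Article>) {
    // Items are composed lazily; off-screen rows cost no CPU.
    LazyColumn {
        items(articles, key = { it.id }) { article ->
            Text(text = article.title)
        }
    }
}
```

Providing a stable `key` also lets Compose skip recomposition of unchanged rows when the list updates.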

Background Services

Background services are essential for tasks that need to run continuously or periodically, even when your app is not in the foreground. However, they can also be significant contributors to CPU usage and battery drain.

Optimization Strategies:

  • Scheduled Alarms: Utilize the AlarmManager to schedule tasks at specific intervals rather than running them continuously. This allows your app to minimize background processing time and conserve battery.
  • WorkManager: For periodic and deferrable tasks, use WorkManager. It efficiently manages background work, respecting device battery optimization features and network constraints.
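The WorkManager approach above can be sketched as follows; `SyncWorker` and the work name are hypothetical placeholders for your own task.

```kotlin
import android.content.Context
import androidx.work.ExistingPeriodicWorkPolicy
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters
import java.util.concurrent.TimeUnit

// A hypothetical worker that performs a periodic background task.
class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... perform the deferrable work here ...
        return Result.success()
    }
}

fun schedulePeriodicSync(context: Context) {
    // Run at most once every 6 hours; WorkManager batches and defers
    // execution to respect Doze mode and battery optimizations.
    val request = PeriodicWorkRequestBuilder<SyncWorker>(6, TimeUnit.HOURS).build()
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "periodic-sync", ExistingPeriodicWorkPolicy.KEEP, request
    )
}
```

Using `enqueueUniquePeriodicWork` with `KEEP` ensures the task is not scheduled twice if the app re-runs the setup code.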

Wake Locks

A wake lock allows your app to keep the device awake, which can significantly impact battery life if used excessively.

Optimization Strategies:

  • Use Wake Locks Sparingly: Only use wake locks when necessary, and release them as soon as the task is completed. Prolonged use of wake locks can prevent the device from entering low-power modes.
  • AlarmManager: In scenarios where you need to wake the device periodically, consider using the AlarmManager to schedule tasks instead of a continuous wake lock.
  • JobScheduler or WorkManager: These tools can be used to schedule tasks efficiently without the need for a persistent wake lock.
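The "use sparingly, release promptly" rule can be sketched like this: acquire the wake lock with a timeout and release it in a `finally` block so it can never outlive the task. The tag and the `doCriticalWork` callback are illustrative.

```kotlin
import android.content.Context
import android.os.PowerManager

fun runWithWakeLock(context: Context, doCriticalWork: () -> Unit) {
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val wakeLock = powerManager.newWakeLock(
        PowerManager.PARTIAL_WAKE_LOCK, "myapp:critical-task"
    )
    // Safety timeout: the lock auto-releases after 10 minutes even
    // if the release below is somehow never reached.
    wakeLock.acquire(10 * 60 * 1000L)
    try {
        doCriticalWork()
    } finally {
        // Release as soon as the work finishes so the device can
        // enter low-power modes again.
        if (wakeLock.isHeld) wakeLock.release()
    }
}
```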

Location-Based Services

Location-based services, such as GPS and network-based location tracking, can have a significant impact on CPU usage and battery consumption, especially if they're continuously running.

Optimization Strategies:

  • Location Updates: Request location updates at longer intervals or adaptive intervals based on the user's current location. High-frequency updates consume more battery.
  • Geofencing: Utilize geofencing to trigger location-based actions when the user enters or exits defined areas. Geofencing is more efficient than continuous location tracking.
  • Fused Location Provider: Use the Fused Location Provider, which combines data from various sources and optimizes location requests. It reduces the need for the GPS chip, which consumes more power.
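A battery-friendly location request using the Fused Location Provider might look like the sketch below. It assumes the `play-services-location` dependency is on the classpath and location permission has already been granted; the interval value is illustrative.

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import com.google.android.gms.location.LocationCallback
import com.google.android.gms.location.LocationRequest
import com.google.android.gms.location.LocationServices
import com.google.android.gms.location.Priority

@SuppressLint("MissingPermission") // assumes permission was granted earlier
fun requestBatteryFriendlyLocation(context: Context, callback: LocationCallback) {
    val client = LocationServices.getFusedLocationProviderClient(context)
    // Balanced accuracy avoids keeping the GPS chip on, and a long
    // interval keeps the update frequency (and battery cost) low.
    val request = LocationRequest.Builder(
        Priority.PRIORITY_BALANCED_POWER_ACCURACY,
        10 * 60 * 1000L // at most one update every 10 minutes
    ).build()
    client.requestLocationUpdates(request, callback, null)
}
```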

Battery and CPU Efficient Network Requests

Network requests can significantly impact device resource usage.

Optimization Strategies:

  • Batch Requests: Minimize the number of network requests by batching multiple requests into one. This reduces the frequency of radio usage, which is a significant battery consumer.
  • Network Constraints: Use tools like WorkManager, which respect network constraints. Schedule network-related work when the device is on Wi-Fi or when it has an unmetered connection, reducing cellular data usage.
  • Background Sync: If your app needs periodic data synchronization, schedule these tasks at intervals that minimize battery impact.
  • Optimize Payload Size: Minimize the size of data payloads exchanged with the server. Smaller payloads lead to shorter network activity, reducing battery usage.
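The network-constraint strategy above can be sketched with WorkManager constraints; `UploadWorker` is a hypothetical placeholder for your own batched network task.

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// A hypothetical worker that uploads batched data in one request.
class UploadWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... perform the batched upload here ...
        return Result.success()
    }
}

fun scheduleUpload(context: Context) {
    // The work only runs on an unmetered connection (e.g., Wi-Fi)
    // and while the battery is not low.
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED)
        .setRequiresBatteryNotLow(true)
        .build()
    val request = OneTimeWorkRequestBuilder<UploadWorker>()
        .setConstraints(constraints)
        .build()
    WorkManager.getInstance(context).enqueue(request)
}
```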

Database queries

Similarly to Network requests, when we utilize a local database for data caching or other purposes, we should be mindful of its usage. Database queries consume both CPU and battery and should be optimized with the same techniques as the network requests.

By implementing these optimization strategies, you can ensure that your app is more energy-efficient and less likely to experience lag during usage.

Conclusion

In the first blog post of the optimization series, we deep-dived into the CPU usage and battery optimization topics. We learned how to effectively use the Android studio profiler to identify potential performance issues as well as optimization techniques to mitigate potential issues.

Remember to “profile often but optimize rarely and only when it’s truly required”.

Stay tuned for the rest of the Android optimization series, where we will touch on the critical aspects of memory and UI optimization.

· 13 min read
Petros Efthymiou

Biometric Authentication in Android

Introduction

In today's digital landscape, security and user experience are paramount considerations for developers creating Android applications. Biometric authentication, a revolutionary advancement in mobile security, has emerged as a pivotal solution that addresses both security concerns and user convenience. With the rise of data breaches and the increasing dependency on mobile devices for various transactions, implementing robust authentication mechanisms is non-negotiable.

Biometric authentication is a cutting-edge method that leverages the unique physiological and behavioral characteristics of an individual to grant access to applications and sensitive data. Instead of relying solely on traditional methods like PINs or passwords, biometric authentication harnesses distinctive traits such as fingerprints, facial features, and iris patterns to verify a user's identity.

Advantages of Biometric Authentication

  1. Enhanced Security: Biometric authentication offers a higher level of security compared to traditional methods. Unlike passwords or PINs, which can be forgotten, shared, or hacked, biometric characteristics are unique to each individual. That said, biometrics have security gaps as well, such as authentication false positives due to poor device hardware, but those can be mitigated, as we will see later. Another way to bypass them is malicious fingerprint capturing (via photos or other methods) to impersonate the user.

  2. User Convenience: One of the standout benefits of biometric authentication is its ease of use. Users no longer need to remember complex passwords or worry about typing errors. A simple touch of a finger or a glance at the camera is all it takes to gain access. This frictionless experience not only reduces user frustration but also encourages secure behavior.

  3. Seamless Interaction: Biometric authentication seamlessly integrates into the user's natural interaction with the device. It eliminates the need to switch between apps to retrieve passwords or codes, streamlining the user journey and increasing overall efficiency.

  4. Reduced Friction: Traditional authentication methods often lead to abandoned sign-up or login processes due to the cumbersome nature of password entry. Biometric authentication reduces this friction, leading to higher user engagement and retention rates.

  5. Multifactor Authentication: Many modern devices support multifactor authentication, combining biometric traits with other factors such as PINs or tokens. This layered approach further enhances security by adding an extra barrier against unauthorized access.

In this step-by-step guide, we will explore how to implement biometric authentication in Android applications using the power of Jetpack Compose. To read more about Jetpack Compose you may visit our article. By combining the capabilities of Jetpack Compose with the Android Biometric API, developers can craft applications that prioritize security and provide a seamless and delightful user experience.

In the following sections, we will walk through the process of integrating biometric authentication into an Android app using Jetpack Compose. We will cover various aspects such as understanding the Biometric API, preparing the project, implementing different biometric modalities, and ensuring security best practices.

Stay tuned as we embark on this journey to create more secure, user-centric, and innovative Android applications with the power of biometric authentication and Jetpack Compose.

Understanding Biometric Authentication

Android devices offer several biometric modalities, each with its own set of characteristics and advantages.

Fingerprint Authentication:

Fingerprint authentication is one of the most widely recognized biometric methods. It relies on capturing and analyzing the distinctive patterns in a user's fingerprints. As every individual has unique ridge patterns and minutiae points at their fingertips, fingerprint authentication offers a high level of accuracy and security. Android devices equipped with fingerprint sensors enable users to unlock their devices, authorize transactions, and access sensitive apps simply by placing their registered finger on the sensor. This method has gained significant popularity due to its ease of use and quick recognition.

Face Recognition:

Face recognition involves capturing and analyzing a user's facial features to establish identity. It works by detecting key facial landmarks and comparing them with registered data. The minimum hardware requirement is a high-resolution camera with sufficient quality to detect facial features accurately. To enhance security, some phones carry depth sensors that create a 3D depth map of the user's face, or, even better, an infrared camera that enables iris recognition. With only a front-facing camera, the device is considered to have weak biometric authentication.

Face recognition is convenient and non-intrusive, providing a seamless user experience. However, it's important to note that lighting conditions and angle variations can impact its accuracy.

Iris Recognition:

Iris recognition is a highly secure biometric method that involves capturing and analyzing the unique patterns in a user's iris, which is the colored part of the eye surrounding the pupil. Like fingerprints, iris patterns are distinct to each individual and remain stable over time. This method offers a higher degree of accuracy and security due to the complexity of the iris patterns. While iris recognition may require specific hardware, it provides a robust solution for applications that demand stringent security measures.

The Role of Biometric Authentication in App Security:

Biometric authentication plays a crucial role in enhancing the security of sensitive app functionalities. While traditional authentication methods like passwords can be compromised through hacking, phishing, or even user negligence, biometric traits are inherent and difficult to replicate. By incorporating biometric authentication as an additional security layer, apps can ensure that only authorized individuals gain access to critical features, sensitive data, and financial transactions.

For instance, financial apps can use biometric authentication to authorize high-value transactions, ensuring that even if a user's device is stolen, unauthorized transactions cannot be carried out without the user's biometric input. Similarly, healthcare apps can use biometrics to secure patient records and medical data, safeguarding sensitive information from unauthorized access.

The significance of biometric authentication extends beyond security. By reducing the need for complex passwords and PINs, biometrics offer a seamless and user-friendly experience, contributing to higher user engagement and satisfaction. Users are more likely to adopt apps that prioritize both security and convenience.

As we proceed through this step-by-step guide, we will explore how to harness the power of Jetpack Compose to integrate biometric authentication seamlessly into your Android apps. By combining the strength of biometric modalities with the modern UI capabilities of Jetpack Compose, you'll be able to create applications that are not only secure but also delightful to use. Stay with us as we dive deeper into the implementation details and unlock the potential of biometric authentication in your Android projects.

Prerequisites

Before diving into the implementation of biometric authentication in your Android app using Jetpack Compose, there are several prerequisites that you need to ensure are in place. These prerequisites ensure that your app can effectively utilize the Biometric API and provide a seamless and secure user experience.

Minimum SDK Version:

To implement biometric authentication, your app should have a minimum SDK version of 23 (Android 6.0, Marshmallow) or higher, as the Biometric API was introduced in this version.

Hardware Requirements:

The availability of biometric authentication methods depends on the hardware capabilities of the user's device, such as:

  • Fingerprint sensor for fingerprint authentication.
  • Front-facing Camera for facial recognition.
  • Infrared camera for iris recognition.

Ensure that your app gracefully handles scenarios where the required hardware is not available on the device.

Setting Up Biometric Authentication and Jetpack Compose

Now that we've covered the prerequisites, it's time to set up your Android project for biometric authentication using the Android Biometric API and Jetpack Compose. This section will guide you through adding the necessary permissions and dependencies to your project, ensuring that you're well-equipped to integrate biometric authentication seamlessly into your app.

  1. Adding Permissions:

Depending on the biometric modality you plan to use, you may need to add specific permissions to your app's AndroidManifest.xml file. For example, if you intend to use face recognition, you must request CAMERA permission to access the front-facing camera:

<uses-permission android:name="android.permission.CAMERA" />

Make sure to request permissions at runtime if your app targets Android 6.0 (Marshmallow) or higher. You can use the AndroidX Activity or Fragment libraries to handle permission requests effectively.
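A runtime permission request with the activity-compose launcher can be sketched as below; the composable name and button text are illustrative.

```kotlin
import android.Manifest
import androidx.activity.compose.rememberLauncherForActivityResult
import androidx.activity.result.contract.ActivityResultContracts
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

@Composable
fun CameraPermissionButton(onGranted: () -> Unit) {
    // The launcher delivers the user's decision as a Boolean.
    val launcher = rememberLauncherForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { granted -> if (granted) onGranted() }

    Button(onClick = { launcher.launch(Manifest.permission.CAMERA) }) {
        Text("Enable camera for face recognition")
    }
}
```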

  2. Adding Dependencies:

To begin implementing biometric authentication using the Android Biometric API and Jetpack Compose, you must add the required dependencies to your app's build.gradle file. We'll be using the Biometric API to interact with biometric hardware and the Jetpack Compose libraries for UI creation.

In your app's build.gradle file, add the following dependencies:

android {
    // ...
    buildFeatures {
        compose true
    }

    composeOptions {
        kotlinCompilerExtensionVersion "1.5.1"
    }
}

dependencies {
    // ...
    implementation "androidx.compose.ui:ui:1.4.3"
    implementation "androidx.compose.material:material:1.4.3" // Check for the latest version
    implementation "androidx.activity:activity-compose:1.7.2"
    implementation "androidx.biometric:biometric:1.2.0-alpha05"
}

The androidx.compose and androidx.activity:activity-compose dependencies are required for building the user interface using Jetpack Compose.

The androidx.biometric:biometric dependency provides access to the Android Biometric API, which is essential for implementing biometric authentication.

Checking Biometric Device Compatibility

Now, let’s start implementing the actual solution. As we are using Jetpack Compose, we will create a MainActivity and add our Composables to it.

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            BiometricAuthenticationScreen()
        }
    }
}

We now need to implement the BiometricAuthenticationScreen Composable that will be responsible for the actual biometric authentication.

@Composable
fun BiometricAuthenticationScreen() {
    val context = LocalContext.current as FragmentActivity
    val biometricManager = BiometricManager.from(context)
    val canAuthenticateWithBiometrics =
        when (biometricManager.canAuthenticate(BiometricManager.Authenticators.BIOMETRIC_STRONG)) {
            BiometricManager.BIOMETRIC_SUCCESS -> true
            else -> {
                Log.e("TAG", "Device does not support strong biometric authentication")
                false
            }
        }

    Surface(color = MaterialTheme.colors.background) {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            if (canAuthenticateWithBiometrics) {
                // TODO perform biometric authentication
            } else {
                Text(text = "Biometric authentication is not available on this device.")
            }
        }
    }
}

We have implemented a simple Composable that uses the BiometricManager to identify whether biometric authentication is available on the device, storing the result in a boolean value. As we explained earlier, there are devices, particularly older ones, that do not support any fingerprint, face, or iris authentication.

In our implementation, we log those cases and present a text on the screen that informs the user. In a real-world app, we would probably want to redirect the user to a username-password authentication screen instead.

Implementing Biometric Authentication

Let’s proceed with implementing the biometric authentication. First of all, we will create a button Composable that will appear on the screen when the device supports biometric authentication.

@Composable
fun BiometricButton(
    onClick: () -> Unit,
    text: String
) {
    Button(
        onClick = onClick,
        modifier = Modifier.padding(8.dp)
    ) {
        Text(text = text)
    }
}

Now we will implement the authenticate with biometric function.

fun authenticateWithBiometric(context: FragmentActivity) {
    val executor = context.mainExecutor
    val biometricPrompt = BiometricPrompt(
        context,
        executor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // TODO handle authentication success, proceed to HomeScreen
                Log.d("TAG", "Authentication successful!!!")
            }

            override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                Log.e("TAG", "onAuthenticationError")
                // TODO Handle authentication errors.
            }

            override fun onAuthenticationFailed() {
                Log.e("TAG", "onAuthenticationFailed")
                // TODO Handle authentication failures.
            }
        })

    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Biometric Authentication")
        .setDescription("Place your finger on the sensor or look at the front camera to authenticate.")
        .setNegativeButtonText("Cancel")
        .setAllowedAuthenticators(BiometricManager.Authenticators.BIOMETRIC_STRONG)
        .build()

    biometricPrompt.authenticate(promptInfo)
}

Initially, we are creating a BiometricPrompt with the respective callbacks that decide what happens on each occasion. The onAuthenticationSucceeded callback is called when the authentication is successful; here you probably want to include an Intent for your HomeActivity or present your HomeScreen Composable. Personally, I prefer to separate the pre-authentication app from the post-authentication app with a separate activity.

After we create the BiometricPrompt, we also create the PromptInfo that defines the options and text that will be presented to the user when they trigger the biometric authentication.

Then we define which authenticators we want to allow. Here we are requesting the BIOMETRIC_STRONG type of authentication, which includes:

  1. Fingerprint authentication.
  2. Face recognition with IRIS detection.
  3. Face recognition with a 3D depth sensor.

As we mentioned earlier, a device that only carries a front-facing camera cannot provide strong biometric authentication. The OS automatically picks the strong authentication option that is available on the current device (fingerprint or face recognition). Usually, devices don’t carry more than one strong biometric sensor, as this would unnecessarily increase their cost.

Finally, we call the authenticate function on the biometricPrompt to trigger the actual authentication popup.

In order to finalize the implementation, we need to display the BiometricButton on devices that support biometric authentication. Replace the //TODO perform biometric authentication with BiometricButton(...):

Surface(color = MaterialTheme.colors.background) {
    Column(
        modifier = Modifier.fillMaxSize(),
        horizontalAlignment = Alignment.CenterHorizontally,
        verticalArrangement = Arrangement.Center
    ) {
        if (canAuthenticateWithBiometrics) {
            BiometricButton(
                onClick = {
                    authenticateWithBiometric(context)
                },
                text = "Authenticate with Biometric"
            )
        } else {
            Text(text = "Biometric authentication is not available on this device.")
        }
    }
}

The implementation is complete! You can now build and install the app on a device that supports biometrics and perform the authentication!

Biometric Authentication Error Handling

Let’s now discuss the error handling of biometric authentication.

Both onAuthenticationError and onAuthenticationFailed are callback methods of the BiometricPrompt.AuthenticationCallback class. These methods are invoked based on different scenarios during the biometric authentication process.

onAuthenticationError Method:

The onAuthenticationError method is called when an error occurs during the biometric authentication process. This could include various types of errors, such as the user clicking the cancel button, sensor errors, hardware issues, or other unexpected conditions that prevent successful authentication. The method receives two parameters:

  1. errorCode: An integer code representing the specific error that occurred. This code can be used to identify the nature of the error.
  2. errString: A human-readable error message that provides additional details about the error.
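A sketch of reacting to common error codes, using constants that the androidx.biometric BiometricPrompt class exposes; the log messages and fallback suggestions are illustrative.

```kotlin
import android.util.Log
import androidx.biometric.BiometricPrompt

// Could be called from inside onAuthenticationError(errorCode, errString).
fun handleAuthError(errorCode: Int, errString: CharSequence) {
    when (errorCode) {
        BiometricPrompt.ERROR_USER_CANCELED,
        BiometricPrompt.ERROR_NEGATIVE_BUTTON ->
            Log.d("TAG", "User dismissed the prompt")
        BiometricPrompt.ERROR_LOCKOUT,
        BiometricPrompt.ERROR_LOCKOUT_PERMANENT ->
            Log.w("TAG", "Too many attempts; offer password login instead")
        else ->
            Log.e("TAG", "Authentication error $errorCode: $errString")
    }
}
```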

onAuthenticationFailed Method:

The onAuthenticationFailed method is called when the biometric authentication process fails to recognize the biometric data provided by the user. This can occur when the biometric data presented to the sensor does not match any enrolled biometric template. It's important to note that this callback is not invoked for every unsuccessful attempt; it's specifically for cases where the biometric data provided cannot be matched to any registered data.

Similar to the onAuthenticationError method, the onAuthenticationFailed method should be used to handle authentication failures by implementing appropriate logic.

In summary, onAuthenticationError is called when there's an error during the authentication process, and onAuthenticationFailed is called when the provided biometric data cannot be matched to any registered data. Both methods are essential for creating a comprehensive biometric authentication experience that informs users about errors and failures and guides them through the authentication process.

Conclusion

As we conclude this step-by-step guide on implementing biometric authentication in Android with Jetpack Compose, we've explored the fusion of cutting-edge security measures and user-centric design principles. Biometric authentication has emerged as a formidable solution that not only enhances the security of your Android applications but also elevates the user experience to new heights.

By harnessing the power of biometric modalities such as fingerprint, face recognition, and iris authentication, developers can provide users with a seamless and secure way to access sensitive features, authenticate transactions, and interact with confidential data. The integration of Jetpack Compose further amplifies the potential, enabling the creation of intuitive and visually appealing user interfaces that align with modern design trends.

Shipbook provides awesome remote logging capabilities that can help you identify, debug, and fix critical authentication errors at the time they appear!

Thank you for joining us on this exploration of biometric authentication with Jetpack Compose. As technology continues to evolve, we encourage you to stay curious, experiment, and continually enhance your skills to build exceptional and secure experiences for Android users worldwide.

· 9 min read

RecyclerView Vs ListView

Introduction

RecyclerView and ListView are two popular options for displaying long lists of data within an Android application. Both are subclasses of the ViewGroup class and can be used to display scrollable lists. However, they have different features, capabilities, and implementations.

The process of implementing both may seem pretty similar. For example:

  • You get a list of data
  • You create an adapter
  • You find the view that will display the list
  • You set the adapter on that view

ListView was one of the earliest components introduced in Android development for displaying a scrollable list of items. Although it provided basic functionality and ease of implementation, it had its limitations, especially when it came to handling large data sets and customizing the appearance and behavior of the list.

As Android applications evolved and the need for more sophisticated list management became apparent, RecyclerView was introduced as a more versatile and efficient solution for displaying lists. As a developer, it's essential to understand the key differences between ListView and RecyclerView to appreciate their respective advantages and disadvantages.

In this article, we'll explore the key differences between RecyclerView and ListView and give you a good understanding of when to use what and how and also appreciate why RecyclerView came into existence over ListView.

ListView

ListView was introduced in Android 1.0 and has been around since then. ListView was the go-to solution for displaying lists of data before RecyclerView was introduced.

One of the biggest advantages of using a ListView is that it's simpler to implement and easier to use. Here is an example of how simply a ListView can be implemented in Android.

main activity

Link to snippet
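The linked snippet isn't reproduced here, so below is a minimal sketch of what such an activity might look like. The class name, layout file, and `listView` id are illustrative assumptions, not taken from the linked snippet:

```kotlin
import android.os.Bundle
import android.widget.ArrayAdapter
import android.widget.ListView
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity: names and layout ids are illustrative
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val items = listOf("Item 1", "Item 2", "Item 3")

        // ArrayAdapter maps each string to a built-in single-line row layout
        val adapter = ArrayAdapter(this, android.R.layout.simple_list_item_1, items)
        findViewById<ListView>(R.id.listView).adapter = adapter
    }
}
```

A single `ArrayAdapter` line is all the adapter logic needed for a flat list of strings.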

As you can see, the code is pretty simple and straightforward compared to a RecyclerView implementation, which requires custom adapter and ViewHolder classes.

If you ask any Android developer about the difference between the two, they would say something like “ListView is still available and can be a good solution for displaying smaller lists of data. But, as the complexity of the app increases, the ListView might not be the best solution for managing and displaying large amounts of data.” Let’s try to understand why.

To implement anything a little more complex than a simple list of Strings, it’s good practice to write our own Adapter class, whose responsibility is to map the data to the positioned view as we scroll through the list.

Let’s write our own adapter class instead of a simple ArrayAdapter for the above snippet.

list adapter

Link to snippet
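Again, the linked snippet isn't shown inline; here is a hedged sketch of such an adapter, including the `convertView` null check discussed next. The `Item` model, row layout, and its four child-view ids are assumptions made for illustration:

```kotlin
import android.content.Context
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.BaseAdapter
import android.widget.ImageView
import android.widget.TextView

// Illustrative data model (not from the linked snippet)
data class Item(val iconRes: Int, val title: String, val subtitle: String, val timestamp: String)

class MyListAdapter(
    private val context: Context,
    private val items: List<Item>
) : BaseAdapter() {

    override fun getCount(): Int = items.size
    override fun getItem(position: Int): Any = items[position]
    override fun getItemId(position: Int): Long = position.toLong()

    override fun getView(position: Int, convertView: View?, parent: ViewGroup): View {
        // Reuse a recycled view if one is available, otherwise inflate a new one
        val rowView = convertView ?: LayoutInflater.from(context)
            .inflate(R.layout.row_item, parent, false)

        val item = items[position]
        // Four findViewById() calls for every row that gets bound
        rowView.findViewById<ImageView>(R.id.icon).setImageResource(item.iconRes)
        rowView.findViewById<TextView>(R.id.title).text = item.title
        rowView.findViewById<TextView>(R.id.subtitle).text = item.subtitle
        rowView.findViewById<TextView>(R.id.timestamp).text = item.timestamp

        return rowView
    }
}
```

The `convertView ?: inflate(...)` expression is the view-recycling step: a non-null `convertView` is an off-screen row handed back for reuse.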

The getView function, at a high level, does the following:

  • Gets each view item from the ListView
  • Finds references to its child views
  • Sets the correct data on those views depending on the position
  • Returns the created view item.

For each row item in a 1000-item list, we don’t have to create 1000 different views; we can repopulate and reuse the same set of views with different data depending on the position in the list. This can be a major performance boost, as we save a significant amount of memory for a large list. This is called view recycling and is a major building block of RecyclerView, which we will see in a while. Here is a representation of how view recycling works.

graph

Now, we have recycled the views with a simple null check and saved memory, but if we look inside the getView() function we can see that we are finding the references to the child views with findViewById() calls.

Depending on how many child views there are (in my example code there are 4), for each item in the list we are calling findViewById() 4 times.

Hence, for a 1000-item list there will be 4000 findViewById() calls, even though we have optimized the way the row item views are initialized. To help fix this problem for large lists, the ViewHolder pattern comes into play.

ViewHolder Pattern in Android

The ViewHolder pattern was created in Android to improve the performance of ListViews (and other AdapterView subclasses) by reducing the number of calls to findViewById().

When a ListView is scrolled, new views are created as needed to display the list items that become visible. Each time a new view is created, the findViewById() method is called to find the views in the layout and create references to them. This process can be slow, especially for complex layouts with many views. At the same time, the instantiated view references are kept in memory for the whole list, and their number grows in direct proportion to the size of the list you are rendering.

The ViewHolder pattern addresses this performance issue by caching references to the views in the layout. When a view is recycled (i.e., reused for a different list item), the ViewHolder can simply update the views with new data, rather than having to call findViewById() again.

Implementing ViewHolder Pattern in our ListView

Let’s implement our ViewHolder class inside the MyListAdapter class.

MyListAdapter class

Code Snippet
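The linked code isn't reproduced here; a sketch of how the getView function might change with a ViewHolder is shown below. The view ids and `Item` model carry over from the earlier illustrative adapter and are assumptions, not the article's actual snippet:

```kotlin
// Inside MyListAdapter: cache child-view references in a ViewHolder stored in the row's tag
override fun getView(position: Int, convertView: View?, parent: ViewGroup): View {
    val rowView: View
    val holder: ViewHolder

    if (convertView == null) {
        rowView = LayoutInflater.from(context).inflate(R.layout.row_item, parent, false)
        // findViewById() runs only once per *created* view; references are cached
        holder = ViewHolder(
            rowView.findViewById(R.id.icon),
            rowView.findViewById(R.id.title),
            rowView.findViewById(R.id.subtitle),
            rowView.findViewById(R.id.timestamp)
        )
        rowView.tag = holder
    } else {
        rowView = convertView
        holder = rowView.tag as ViewHolder // Recycled view: reuse the cached references
    }

    val item = items[position]
    holder.icon.setImageResource(item.iconRes)
    holder.title.text = item.title
    holder.subtitle.text = item.subtitle
    holder.timestamp.text = item.timestamp
    return rowView
}

private class ViewHolder(
    val icon: ImageView,
    val title: TextView,
    val subtitle: TextView,
    val timestamp: TextView
)
```

With this change, findViewById() runs only for the handful of views that are actually created, not for every one of the 1000 bindings.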

With the above-mentioned changes, we have created a structure to:

  • Reuse the view for each item in the list instead of creating a new one for every item.
  • Reduce the number of findViewById() calls which, for complex layouts and lists with many items, can significantly degrade the performance of the app.

These are the two key optimizations that RecyclerView provides to developers as part of its structure, apart from its other customization features.

Drawbacks of Using ListView

  • Inefficient scrolling due to inefficient memory usage out of the box
  • Less flexibility to customize how the list items should be positioned.
  • Can only implement a vertically scrolling list.
  • Implementing animations can be hard and complex out of the box
  • Only offers notifyDataSetChanged() which is an inefficient way to handle updates.

RecyclerView

RecyclerView was introduced in Android 5.0 Lollipop as an upgrade over the ListView. It is designed to be more flexible and efficient, allowing developers to create complex layouts with minimal effort.

It uses "recycling" out of the box which we have seen above. It also has more flexible layout options, allowing you to create different types of lists with ease and also provides various methods to handle data set changes efficiently.

Let’s use RecyclerView instead of ListView in our above implementation.

RecyclerView
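The referenced implementation isn't shown inline; a minimal sketch of the equivalent RecyclerView adapter follows. The `ItemAdapter` name and use of the built-in single-line layout are assumptions made to mirror the earlier ListView example:

```kotlin
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

// Illustrative adapter mirroring the ListView example with a list of strings
class ItemAdapter(private val items: List<String>) :
    RecyclerView.Adapter<ItemAdapter.ItemViewHolder>() {

    // The ViewHolder is mandatory: RecyclerView enforces cached child-view references
    class ItemViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        val title: TextView = view.findViewById(android.R.id.text1)
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ItemViewHolder {
        // Called only when a brand-new row view is needed
        val view = LayoutInflater.from(parent.context)
            .inflate(android.R.layout.simple_list_item_1, parent, false)
        return ItemViewHolder(view)
    }

    override fun onBindViewHolder(holder: ItemViewHolder, position: Int) {
        // Called for every row, new or recycled, to bind the data
        holder.title.text = items[position]
    }

    override fun getItemCount(): Int = items.size
}
```

Note how creation (`onCreateViewHolder`) and binding (`onBindViewHolder`) are split into separate callbacks, which is exactly the recycling structure we hand-rolled for ListView above.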

As you can see, there are multiple functions to override instead of just the single getView() function of an ArrayAdapter, which makes RecyclerView less beginner-friendly than ListView. It can also feel like overkill for the simplest of lists in Android.

Benefits of Using RecyclerView

  • The major advantage of RecyclerView is its performance. It uses a view holder pattern out of the box, which reuses views from the RecyclerView pool and prevents the need to constantly inflate or create new views. This reduces the memory consumption of displaying a long list compared to ListViews and hence improves performance.

  • With a LayoutManager you can define how you want your list to be laid out: linearly in a vertical or horizontal direction, or in a grid, rather than only vertically as in a ListView.

  • RecyclerView also offers a lot of customization features over ListView that make it easier to work with. For example, it supports drag-and-drop to rearrange items in the list, and item swiping gestures for features like deleting or archiving items. Below is example code showing how easy it is to add these gestures.

// Set up the RecyclerView with a LinearLayoutManager and an adapter
recyclerView.layoutManager = LinearLayoutManager(this)
adapter = ItemAdapter(createItemList())
recyclerView.adapter = adapter

// Add support for drag and drop
val itemTouchHelper = ItemTouchHelper(object : ItemTouchHelper.Callback() {
    override fun getMovementFlags(
        recyclerView: RecyclerView,
        viewHolder: RecyclerView.ViewHolder
    ): Int {
        // Set the movement flags for drag and drop and swipe-to-dismiss
        val dragFlags = ItemTouchHelper.UP or ItemTouchHelper.DOWN
        val swipeFlags = ItemTouchHelper.START or ItemTouchHelper.END
        return makeMovementFlags(dragFlags, swipeFlags)
    }

    override fun onMove(
        recyclerView: RecyclerView,
        viewHolder: RecyclerView.ViewHolder,
        target: RecyclerView.ViewHolder
    ): Boolean {
        // Swap the items in the adapter when dragged and dropped
        adapter.swapItems(viewHolder.adapterPosition, target.adapterPosition)
        return true
    }

    override fun onSwiped(viewHolder: RecyclerView.ViewHolder, direction: Int) {
        // Remove the item from the adapter when swiped to dismiss
        adapter.removeItem(viewHolder.adapterPosition)
    }
})

// Attach the ItemTouchHelper to the RecyclerView
itemTouchHelper.attachToRecyclerView(recyclerView)

  • Implementing animations is pretty simple in RecyclerView and can be done by simply setting the itemAnimator as shown below:
val itemAnimator: RecyclerView.ItemAnimator = DefaultItemAnimator()
recyclerView.itemAnimator = itemAnimator

Best Practices to keep in mind with RecyclerView

To ensure the best results, developers should follow best practices when working with RecyclerView and ListView. For example:

  • Use item animations sparingly, as too many animations can lead to janky performance.

  • To update the UI with a RecyclerView, use the notifyItemInserted(), notifyItemRemoved(), or notifyItemChanged() methods, which tell the adapter that the data has changed and the list needs to be refreshed. Used irresponsibly, however, they can lead to redundant rebuilds of the list and introduce unwanted bugs.
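As a quick sketch of the granular approach, the adapter above could expose mutation helpers like the following (assuming its backing list is made mutable; these helpers are illustrative, not from the article's snippets):

```kotlin
// Inside a RecyclerView.Adapter with `private val items: MutableList<String>`
fun addItem(position: Int, item: String) {
    items.add(position, item)
    // Only the affected row is inserted and animated; the rest are untouched
    notifyItemInserted(position)
}

fun removeItem(position: Int) {
    items.removeAt(position)
    notifyItemRemoved(position)
}
```

Compare this with notifyDataSetChanged(), which rebinds every visible row and discards animation information.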

Conclusion

In this article, we started off by implementing a simple list using ListView and added capabilities that don’t come out of the box with ListView, like view recycling and the ViewHolder pattern, to make it more memory efficient, only to run into the limits of the customization ListView offers.

Then we implemented the same list with RecyclerView, which enforces view recycling and the ViewHolder pattern out of the box, making lists efficient, customizable, and performant by default and explaining RecyclerView's popularity in the Android community.

· 11 min read
Petros Efthymiou

From Android Views to Jetpack Compose

Jetpack Compose and why it matters

Jetpack Compose is a revolutionary UI toolkit introduced by Google for building native Android applications. Unlike traditional Android Views, Jetpack Compose adopts a declarative approach to UI development, allowing developers to create user interfaces using composable functions.

This paradigm shift simplifies UI development by eliminating the need for complex view hierarchies and manual view updates. With Jetpack Compose, developers can express the desired UI state and let the framework handle the rendering and updating automatically. This results in cleaner and more readable code, improved productivity, and faster UI development cycles.

Jetpack Compose offers a modern and intuitive way to build UIs, enabling developers to create beautiful, responsive, and highly interactive Android applications with ease. Its importance lies in providing a more efficient and enjoyable development experience, enabling developers to focus on crafting exceptional user experiences while reducing boilerplate code and increasing code maintainability.

And the cherry on top? No more Android Fragments! We all had our fair share of pain trying to comprehend and debug the complex Fragment lifecycle. With Jetpack Compose, we can put an end to it! That’s right, Composables can take the Fragments’ place as reusable UI components that are tied up to an Activity.

Declarative UI building is the direction that all front-facing applications are moving towards. It was first introduced by React in 2013. After its successful adoption on the web, it later moved to cross-platform mobile frameworks such as React Native and Flutter. Realizing its advantages, both native platforms, Android and iOS, have recently made a similar move by introducing Jetpack Compose and SwiftUI. Soon all other UI-building tools will be a thing of the past.

Understanding RecyclerView and its Limitations

RecyclerView has long been a popular component in Android app development for efficiently displaying lists and grids. It offers flexibility and performance optimizations by recycling views as users scroll through the list, reducing memory consumption and improving scrolling smoothness. However, RecyclerView also comes with its limitations. Managing view recycling, implementing complex adapter logic, and supporting different view types for diverse list items can often lead to boilerplate code and increased development effort.

Additionally, RecyclerView lacks built-in support for animations and complex layout transitions, making it challenging to create dynamic and visually engaging user interfaces. These limitations have prompted developers to seek alternative solutions that offer a more streamlined and intuitive approach to building user interfaces. The Jetpack Compose Column and Lazy Column are coming to the rescue.

Analyzing the Existing RecyclerView Implementation

We are creating an application that fetches a list of playlists and displays them on the screen. The initial implementation is based on Android Fragment and Recycler View. Let's take a closer look at the code structure and components involved:

class PlaylistFragment : Fragment() {

    private val viewModel: PlaylistViewModel by viewModels()

    @Inject
    lateinit var playlistAdapter: PlaylistAdapter

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        // Inflate the layout for this fragment
        val view = inflater.inflate(R.layout.fragment_playlist, container, false)

        val playlistsRecyclerView: RecyclerView = view.findViewById(R.id.recyclerView)
        playlistsRecyclerView.layoutManager = LinearLayoutManager(requireContext())
        playlistsRecyclerView.adapter = playlistAdapter

        lifecycleScope.launchWhenStarted {
            viewModel.playlists.collect { playlists ->
                playlistAdapter.submitList(playlists)
            }
        }

        return view
    }
}

Our Fragment depends on the ViewModel, which exposes a Kotlin StateFlow that emits a list of playlists. We observe this StateFlow using the collect method, and upon receiving the updated list, we populate the RecyclerView with the playlist items by calling submitList. The RecyclerView is set up with a custom adapter that extends the RecyclerView Adapter and holds a list of playlists as its data source.

Below is the respective code for the RecyclerView Adapter:

class PlaylistAdapter : RecyclerView.Adapter<PlaylistAdapter.PlaylistViewHolder>() {

    private var playlistItems: List<Playlist> = emptyList()

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PlaylistViewHolder {
        val itemView = LayoutInflater.from(parent.context)
            .inflate(R.layout.item_playlist, parent, false)
        return PlaylistViewHolder(itemView)
    }

    override fun onBindViewHolder(holder: PlaylistViewHolder, position: Int) {
        val playlist = playlistItems[position]
        holder.bind(playlist)
    }

    override fun getItemCount(): Int {
        return playlistItems.size
    }

    inner class PlaylistViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
        private val titleTextView: TextView = itemView.findViewById(R.id.titleTextView)
        private val descriptionTextView: TextView = itemView.findViewById(R.id.descriptionTextView)

        fun bind(playlist: Playlist) {
            titleTextView.text = playlist.title
            descriptionTextView.text = playlist.description
        }
    }

    fun submitList(playlists: List<Playlist>) {
        playlistItems = playlists
        notifyDataSetChanged()
    }
}

Within the adapter, we override the necessary methods, such as onCreateViewHolder, onBindViewHolder, and getItemCount to handle view creation, data binding, and determining the item count respectively. The item layout XML file defines the visual representation of each playlist item, containing the necessary views and bindings.

As we explained earlier, RecyclerView implementations require a lot of boilerplate and repetitive code.

Jetpack Compose Column vs Lazy Column

Before we jump into improving our implementation with Jetpack Compose, let’s discuss the differences between the Column and LazyColumn components.

In Jetpack Compose, both Column and LazyColumn are composable functions used to display vertical lists of UI elements. The primary difference lies in their behavior and performance optimization. The Column is suitable for a small number of items or when the entire list can fit on the screen. It lays out all its children regardless of whether they are currently visible on the screen, which may lead to performance issues with large lists. For short lists, rendering the items from the start offers increased performance.

On the other hand, LazyColumn is optimized for handling large lists efficiently. It loads only the visible items on the screen and recycles the off-screen items, similar to the traditional RecyclerView. This approach reduces memory consumption and enhances scrolling performance for long lists. Therefore, LazyColumn is the preferred choice when dealing with extensive datasets or dynamic content, ensuring a smooth and responsive user experience.
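To make the contrast concrete, here is a minimal sketch of the two side by side; both render the same strings, and the composable names are illustrative:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

@Composable
fun ShortList(names: List<String>) {
    // Column composes every child up front: fine for a handful of items
    Column {
        names.forEach { name ->
            Text(text = name)
        }
    }
}

@Composable
fun LongList(names: List<String>) {
    // LazyColumn composes only the visible items and recycles the rest
    LazyColumn {
        items(names) { name ->
            Text(text = name)
        }
    }
}
```

Note that LazyColumn takes its children through a DSL (`items`) rather than plain composable calls, which is what lets it defer composition of off-screen rows.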

Setting Up Jetpack Compose in the Project

In order to use Jetpack Compose in our project, we need to complete the following setup steps:

Step 1: Add the Jetpack Compose dependency in build.gradle

plugins {
    id 'com.android.application'
    id 'kotlin-android'
}

android {
    // ...
    buildFeatures {
        compose true // Enable Jetpack Compose
    }

    composeOptions {
        kotlinCompilerExtensionVersion = "$version"
    }
    // ...
}

dependencies {
    implementation "androidx.compose.ui:ui:$compose_version" // Check for the latest version
    implementation "androidx.compose.material:material:$material_version" // Check for the latest version
    implementation "androidx.activity:activity-compose:$compose_version" // Check for the latest version
    // ...
}

Step 2: No explicit runtime initialization is required; Jetpack Compose is ready to use once the Gradle configuration above is in place. Your Application class can stay as it is, and remains the place for app-wide configuration, for example:

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO) // Optional: Disable dark mode
    }
}

You can now start adding Composables inside your MainActivity and leverage the power of Jetpack Compose!

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            //TODO add a composable
        }
    }
}

Migrating RecyclerView to Lazy Column

Jetpack Compose belongs to the declarative UI family. In declarative UI, we receive the state of the data that needs to be displayed, and we programmatically create the views. The views are immutable, and their state cannot change. Every time the data state changes, everything is redrawn on the screen and the views are recreated from scratch. Practically, behind the scenes, there are smart diffing mechanisms that don’t redraw elements whose data hasn’t changed. But we, as developers, must write code as if everything is redrawn when the data changes.

Let’s see how we can refactor the playlists screen with Jetpack Compose.

As we promised earlier, with Jetpack Compose we can get rid of Android Fragments. Everything in Jetpack Compose, from a whole screen to a small UI element, is a composable. Composables are functions instead of objects. This reflects one of the paradigm shifts that declarative UI introduces: we are moving towards stateless functional programming instead of stateful object-oriented programming.

Let’s start by replacing our PlaylistFragment with a screen composable.

@Composable
fun PlaylistScreen(viewModel: PlaylistViewModel) {
    val playlists by viewModel.playlists.collectAsState()

    LazyColumn {
        items(playlists) { playlist ->
            PlaylistItem(playlist = playlist)
        }
    }
}

The PlaylistScreen composable represents the screen where the playlists are displayed. It collects the playlists from the PlaylistViewModel using collectAsState, so the composable automatically recomposes whenever the playlist data changes. The main component in the PlaylistScreen is the LazyColumn, which is the Jetpack Compose equivalent of RecyclerView. It handles view recycling and renders only the visible items on the screen. Every time the playlists StateFlow emits another result, the PlaylistScreen composable function will automatically recompose, and the UI will be redrawn with the updated data.

Each list item is described by the composable below:

@Composable
fun PlaylistItem(playlist: Playlist) {
    // Custom composable for rendering an individual playlist item
    Column(
        modifier = Modifier
            .fillMaxWidth()
            .padding(16.dp)
    ) {
        Text(
            text = playlist.title,
            style = TextStyle(fontWeight = FontWeight.Bold, fontSize = 18.sp)
        )
        Spacer(modifier = Modifier.height(8.dp))
        Text(text = playlist.description)
    }
}

The PlaylistItem composable represents an individual playlist item. We use a Column composable to stack the title and description texts vertically. We apply styling and padding.

With Jetpack Compose's LazyColumn, we achieve a more concise and declarative way of displaying the list of playlists without the need for a separate adapter or view holder logic. The composable functions automatically handle the UI rendering and updates based on the provided state. This refactoring results in cleaner, more reusable, and more maintainable code, making UI development more intuitive and efficient. Furthermore, we don’t have to handle the Fragment’s complex lifecycle while retaining the benefit of reusable UI components.


Figure: The playlist with Compose's LazyColumn

Handling Clicks

Handling clicks in the Jetpack Compose Column component is super easy: we simply add the ‘clickable’ modifier and call the code that we want to execute when the respective list item is clicked. Inside the composable we have access to the selected playlist model.

@Composable
fun PlaylistItem(playlist: Playlist) {
    // Custom composable for rendering an individual playlist item
    Column(
        modifier = Modifier
            .fillMaxWidth()
            .clickable { /* Handle item click here */ }
            .padding(16.dp)
    ) {
        Text(
            text = playlist.title,
            style = TextStyle(fontWeight = FontWeight.Bold, fontSize = 18.sp)
        )
        Spacer(modifier = Modifier.height(8.dp))
        Text(text = playlist.description)
    }
}

Testing

As good engineers, we should always include automated tests that verify that our code works correctly. With Jetpack Compose, UI testing is much easier than before. Let’s see how we can test the PlaylistScreen after we migrate it to Jetpack Compose.

@ExperimentalCoroutinesApi
@get:Rule
val composeTestRule = createComposeRule()

@OptIn(ExperimentalCoroutinesApi::class)
@Test
fun playlistScreen_RenderList_Success() {
    // Dummy data for testing
    val playlists = listOf(
        Playlist("Playlist 1", "Description 1"),
        Playlist("Playlist 2", "Description 2"),
        Playlist("Playlist 3", "Description 3")
    )

    // Create a TestCoroutineDispatcher to be used with Dispatchers.Main
    val testDispatcher = TestCoroutineDispatcher()
    val testCoroutineScope = TestCoroutineScope(testDispatcher)

    // Launch the composable with TestCoroutineScope
    testCoroutineScope.launch {
        composeTestRule.setContent {
            PlaylistScreen(viewModel = PlaylistViewModel(playlists))
        }
    }

    // Wait for recomposition
    composeTestRule.waitForIdle()

    // Check if each playlist item is rendered correctly
    playlists.forEach { playlist ->
        composeTestRule.onNode(hasText(playlist.title)).assertIsDisplayed()
        composeTestRule.onNode(hasText(playlist.description)).assertIsDisplayed()
    }
}

In this test, we use the createComposeRule to set up the Compose test rule. We also create a TestCoroutineDispatcher and a TestCoroutineScope to simulate the background coroutine execution. Then, we launch the PlaylistScreen composable with dummy data for testing. After the recomposition, we use onNode to check if each playlist item title and description is correctly displayed. Note that since we are testing UI, this is an instrumentation test that must be placed under the androidTest folder.

Let’s now see how we can test the PlaylistItem in isolation:

@get:Rule
val composeTestRule = createComposeRule()

@Test
fun playlistItem_Render_Success() {
    val playlist = Playlist("Playlist 1", "Description 1")

    composeTestRule.setContent {
        PlaylistItem(playlist = playlist)
    }

    composeTestRule.onNode(hasText(playlist.title)).assertIsDisplayed()
    composeTestRule.onNode(hasText(playlist.description)).assertIsDisplayed()
}

In this test, we use the createComposeRule to set up the Compose test rule. We then render the PlaylistItem composable with a dummy Playlist object. After rendering, we use onNode to check if the playlist title and description are correctly displayed.

These automated tests use Jetpack Compose's testing libraries to verify if the PlaylistScreen and PlaylistItem composables render as expected. They help ensure that the UI is correctly displayed and the appropriate data is rendered, providing confidence in the correctness of your composable functions. Remember to import the necessary dependencies and adapt the test code to your specific project setup.

Conclusion

Declarative UI is the future both in the web and mobile platforms. All major players have already adopted it, and it looks like all the other UI generation tools will eventually become deprecated.

It introduces a paradigm shift in building the UI, where the views are immutable and their state cannot change. When the data state changes, the views are recreated from scratch to display the updated data.

Declarative UI building and Jetpack Compose specifically offer advantages such as simpler code that is easier to read, write and maintain. As a bonus, we can get rid of Fragments while maintaining the advantage of reusable UI components.

Shipbook offers fantastic Jetpack Compose debugging capabilities. You can add logs to monitor any UI rendering errors. Those will enable you to track, trace and fix every issue efficiently and effectively.

The sooner you start getting your hands on it, the better!

· 8 min read
Nikita Lazarev-Zubov

ConstraintLayout

Even though Jetpack Compose has become the recommended tool for building Android applications’ UI, the vast majority of applications still use traditional layout modes and their XML-based syntax. Android SDK provides us with many layout options. Some are already obsolete, but others remain popular and are widely used, including the newest offering: ConstraintLayout. Before we assess which options are actually effective, let’s briefly review the basics of the Android layout system.

Android Layout Basics

The fundamental building block of UI in Android is the View class, which represents a rectangular area on the screen. It’s also a base class for specific views like Button and ImageView. On top of them are ViewGroups—special Views that are used as containers for other views. ViewGroup is also the base class for various layout classes.

Android offers multiple layout options, including RelativeLayout, FrameLayout, and LinearLayout. However, back in 2016, ConstraintLayout was introduced, presumably, to rule them all. But does it live up to the hype? Let’s find out by looking at an example.

Android Layout Example

Let’s pretend ConstraintLayout doesn’t exist and build a UI for the login screen of our Layout Guru application using only pre-ConstraintLayout options.

Old Ways

Here’s what we’re going to build:

Layout Guru’s login screen

Figure 1: Layout Guru’s login screen

The view that we’re going to implement consists of two pairs of input fields and text labels centered on the screen. According to the specification, each field takes up 60% of the screen width, and the text label occupies the rest of the width. The application’s logo is centered above the fields and uses 70% of the width. The “Sign In” button is positioned directly below the bottom input field and aligned to the right side of the screen.

Let’s start with one of the input text fields. The most straightforward way to implement it is with a horizontal LinearLayout. The layout_weight attribute will help us to set the desired width distribution. Here’s the layout’s XML:

    <LinearLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="horizontal"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:weightSum="1">

        <TextView
            android:id="@+id/emailInputTitle"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="0.4"
            android:text="@string/email_address"
            android:textColor="@color/black" />

        <EditText
            android:id="@+id/emailInputField"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="0.6"
            android:inputType="textEmailAddress"
            android:autofillHints="Email"
            android:hint="@string/email_address"
            android:backgroundTint="@color/black" />

    </LinearLayout>

The second input is similar, but uses a different inputType value. Both inputs can be wrapped with a vertical LinearLayout:

    <LinearLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="wrap_content">

        <include layout="@layout/email_field" />
        <include layout="@layout/password_field" />

    </LinearLayout>

Finally, let’s combine the input fields with the rest of UI elements in a single RelativeLayout. For the first step of this process, we can add inputs to the layout and center them:

    <include
        layout="@layout/login_form"
        android:id="@+id/login_form"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true" />

Then, we can add the “Sign In” button below the inputs, and align it to the right side of the screen:

    <Button
        style="?android:attr/borderlessButtonStyle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="10dp"
        android:layout_below="@+id/login_form"
        android:layout_alignParentEnd="true"
        android:backgroundTint="@color/white"
        android:text="@string/sign_in"
        android:textColor="@color/black" />

The trickiest part, though, is the logo. Putting it above the inputs is easy, but there’s no straightforward way to make it take only 70% of the width of the screen using RelativeLayout. One way to achieve this is to put the image inside another LinearLayout, which has a convenient way of manipulating its child views’ weight (but doesn’t provide a way to position elements relative to each other):

    <LinearLayout
        android:orientation="horizontal"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:gravity="center_horizontal"
        android:layout_above="@+id/login_form"
        android:weightSum="1">

        <ImageView
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="0.7"
            android:src="@drawable/logo"
            android:contentDescription="@string/layout_guru" />

    </LinearLayout>

And here’s an outline of the resulting XML:

    <RelativeLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_marginStart="10dp"
        android:layout_marginEnd="10dp">

        <LinearLayout
            <!--...-->>
            <ImageView
                <!--...-->>
        </LinearLayout>

        <include
            <!--...-->>

        <Button
            <!--...-->>

    </RelativeLayout>

Looking at the result, we can already draw one important conclusion: even simple pieces of UI require a lot of code and mixing-and-matching of various layout types.

ConstraintLayout

Let’s look at how the same screen could be implemented using ConstraintLayout.

This time, let’s start by putting two EditTexts and two TextViews in the center of the screen, and placing them relative to one another exactly as we did before using a combination of multiple LinearLayouts. Because the text input fields are taller than their text labels, we constrain the top one to the parent’s top, the bottom one to the parent’s bottom, and combine them into a packed chain. This will make them centered vertically as a whole. Then, the text labels can be aligned to the inputs’ baselines. This is the corresponding XML snippet:

    <TextView
        android:id="@+id/emailInputTitle"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:text="@string/email_address"
        android:textColor="@color/black"
        app:layout_constraintBaseline_toBaselineOf="@id/emailInputField"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintWidth_percent="0.4" />

    <EditText
        android:id="@+id/emailInputField"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:autofillHints="Email"
        android:backgroundTint="@color/black"
        android:hint="@string/email_address"
        android:inputType="textEmailAddress"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toTopOf="@+id/passwordInputField"
        app:layout_constraintStart_toEndOf="@id/emailInputTitle"
        app:layout_constraintVertical_chainStyle="packed"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintWidth_percent="0.6" />

    <TextView
        android:id="@+id/passwordInputTitle"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:text="@string/password"
        android:textColor="@color/black"
        app:layout_constraintBaseline_toBaselineOf="@id/passwordInputField"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintWidth_percent="0.4" />

    <EditText
        android:id="@+id/passwordInputField"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:autofillHints="Password"
        android:backgroundTint="@color/black"
        android:hint="@string/password"
        android:inputType="textPassword"
        app:layout_constraintTop_toBottomOf="@+id/emailInputField"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toEndOf="@id/passwordInputTitle"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintWidth_percent="0.6" />

The rest of the work is fairly straightforward. The image can be constrained between the top of the parent and the top of the topmost input field. The relative width can be provided using the layout_constraintWidth_percent attribute:

    <ImageView
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:src="@drawable/logo"
        android:contentDescription="@string/layout_guru"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toTopOf="@id/emailInputField"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintWidth_percent="0.7" />

Positioning of the Button is simple as well:

    <Button
        style="?android:attr/borderlessButtonStyle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="10dp"
        android:backgroundTint="@color/white"
        android:text="@string/sign_in"
        android:textColor="@color/black"
        app:layout_constraintTop_toBottomOf="@id/passwordInputField"
        app:layout_constraintEnd_toEndOf="parent" />

An outline of the resulting layout is self explanatory:

    <androidx.constraintlayout.widget.ConstraintLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_marginStart="10dp"
        android:layout_marginEnd="10dp">

        <ImageView ... />

        <TextView ... />
        <EditText ... />

        <TextView ... />
        <EditText ... />

        <Button ... />

    </androidx.constraintlayout.widget.ConstraintLayout>

So, coming back to the original question: does ConstraintLayout replace other layouts? No doubt one can build a complicated UI by means of ConstraintLayout alone. And although some might prefer the traditional options as (arguably) easier to modularize and reuse, a relatively complicated UI can be built more simply and with less code. The more sophisticated the UI, the more evident this becomes. This only confirms the conclusion from the previous section.

Another advantage of ConstraintLayout is that it’s more convenient to build with Android Studio’s visual layout editor, as opposed to writing the XML by hand.

Before we jump to conclusions, though, let’s look at another important metric: performance.

Layout Rendering Performance

Android provides us with useful developer tools that can help to measure rendering efficiency, one of which is Profile GPU Rendering. The output of the tool for each layout implementation will look something like this:


Figure 2: Profile GPU Rendering output for the two layouts, with ConstraintLayout on the right

The ConstraintLayout option, on the right, produces a slightly shorter graph with fewer red spikes, which translates to less rendering overhead.

Let’s also look at the output from another tool—Debug GPU Overdraw:


Figure 3: Debug GPU Overdraw output for the two layouts, with ConstraintLayout again on the right

The results are, again, very similar, but the RelativeLayout/LinearLayout version (on the left) has more purple areas, which indicate pixels that were redrawn once, and even one small green area indicating two redraws.

Although the difference between the two layouts appears insignificant at first glance, in real-world situations with a more complicated user interface the penalty can easily become noticeable and result in choppy animations and visible delays. Let’s explore why that’s the case.

Double Taxation

The phenomenon of slower rendering of nested layouts is widely referred to in the Android community as double taxation. While the system renders the view hierarchy, it iterates over the elements multiple times before finalizing the size and position of each view:

  • In the first pass, the layout system calculates each child’s position and size based on the child’s layout parameters.
  • After that, the system makes another iteration, this time taking the layout parameters of the parent layout into account.

The more levels of hierarchy, the bigger the overhead. The problem notably affects RelativeLayout, LinearLayout when it uses layout_weight, and certain GridLayout configurations.

If rendering performance problems begin to occur, one of the first things to try is eliminating nested layouts wherever possible. Another potential improvement is switching to ConstraintLayout, which is cheaper in terms of underlying calculations because of its “flat” nature.
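To get an intuition for the cost, here is a toy Java model of how a second measure pass at every level compounds with nesting. This is an illustration only: the function, class name, and numbers are invented, and this is not Android’s real measurement code.

```java
// Toy cost model of double taxation (an illustration, NOT real Android code):
// a parent that measures its subtree twice per layout pass doubles the work
// of everything below it, so the cost grows multiplicatively with nesting.
public class MeasureCostDemo {

    // Number of measure() calls for a hierarchy `depth` levels deep, where
    // every container holds `children` subtrees and measures each of them
    // `passes` times (2 for a double-taxing layout such as RelativeLayout).
    static long measureCalls(int depth, int children, int passes) {
        if (depth == 0) {
            return 1; // a leaf view is measured once
        }
        long subtree = measureCalls(depth - 1, children, passes);
        return 1 + (long) passes * children * subtree;
    }

    public static void main(String[] args) {
        // A flat, single-container screen with 10 views:
        System.out.println(measureCalls(1, 10, 2)); // 21 calls
        // The same kind of views spread over three nested double-taxing containers:
        System.out.println(measureCalls(3, 10, 2)); // 8421 calls
    }
}
```

Real numbers depend on the actual layouts involved, but the shape of the growth is why flattening the hierarchy, or switching to ConstraintLayout, pays off.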

Conclusion

While choosing between the newer ConstraintLayout and other, more “traditional” alternatives, several factors should be considered. First of all, it's true that ConstraintLayout can turn into a universal solution for any type of UI. Additionally, for truly complicated user interfaces, ConstraintLayout can be a more lightweight and performant solution. On the other hand, in very simple cases where LinearLayout would provide a more straightforward solution, ConstraintLayout might be overkill.

Logging

If you need to log information related to rendering, Android has an interface called ViewTreeObserver.OnDrawListener that can be easily put to use together with a system to collect and store your log messages remotely, such as Shipbook.

· 9 min read
Nikita Lazarev-Zubov

Exception Handling

The first version of Java was released in 1995, built on the great idea of WORA (“write once, run anywhere”) and a syntax similar to C++ but simpler and more human-friendly. One notable language invention was checked exceptions, a model that was later often criticized.

Let’s see if checked exceptions are really that harmful and look at what’s being used instead in contemporary programming languages, such as Kotlin and Swift.

Good Ol’ Java Way

Java has two types of exceptions, checked and unchecked. The latter are runtime failures, errors that the program is not supposed to recover from. One of the most notable examples is the notorious NullPointerException.

The fact that the exception is unchecked doesn’t mean you can’t handle it:

Object object = null;
try {
    System.out.println(object.hashCode());
} catch (NullPointerException npe) {
    System.out.println("Caught!");
}

The difference is that a checked exception, if thrown by a method, must be declared in that method’s signature:

void throwCustomException() throws CustomException {
    throw new CustomException();
}

static class CustomException extends Exception { }

The compiler will make sure that it’s handled, sooner or later. The developer must either wrap the call to throwCustomException() in a try-catch block:

try {
    throwCustomException();
} catch (CustomException e) {
    System.out.println(e.getMessage());
}

Or pass it further:

void rethrowCustomException() throws CustomException {
    throwCustomException();
}

What’s Wrong with the Model

Checked exceptions are criticized for forcing people to explicitly deal with every declared exception, even if it’s known to be impossible. This results in a large amount of boilerplate try-catch blocks, the only purpose of which is to silence the compiler.

Programmers tend to work around checked exceptions by either declaring the method with the most general exception:

void throwCustomException() throws Exception {
    if (Calendar.getInstance().get(Calendar.DAY_OF_MONTH) % 2 == 0) {
        throw new EvenDayException();
    } else {
        throw new OddDayException();
    }
}

Or handling it using a single catch-clause (also known as Pokémon exception handling):

void throwCustomException()
        throws EvenDayException, OddDayException {
    // ...
}

try {
    throwCustomException();
} catch (Exception e) {
    System.out.println(e.getMessage());
}

Both ways lead to a potentially dangerous situation in which all possible exceptions are lumped together, including those that should never be dismissed. Error-handling blocks of code also become meaningless and fictitious, if not empty.
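To make the danger concrete, here is a small Java sketch (the parseConfig method and its rules are invented for this illustration) of how a catch-all clause silences a genuine programming bug together with the exception it was actually written for:

```java
// Sketch of how a broad catch block can mask an unrelated bug. The catch is
// meant for the checked CustomException, but catch (Exception e) also swallows
// the NullPointerException caused by the programming mistake inside.
public class BroadCatchDemo {

    static class CustomException extends Exception { }

    static String parseConfig(String raw) {
        try {
            if (raw == null) {
                // Programming bug: the input should have been validated instead.
                throw new NullPointerException("raw config is null");
            }
            if (raw.isEmpty()) {
                throw new CustomException();
            }
            return raw.trim();
        } catch (Exception e) {  // Pokémon handling: catches them all
            return "default";    // the NPE is silently dismissed, too
        }
    }

    public static void main(String[] args) {
        System.out.println(parseConfig(null));    // "default" -- the bug is hidden
        System.out.println(parseConfig(" ok "));  // "ok"
    }
}
```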

Even if all exceptions are meticulously dealt with, public methods swarm with throws declarations. This means all abstraction levels are aware of all exceptions thrown beneath them, compromising the principle of information hiding.

In some parts of the system, where multiple throwing APIs meet, a problem with scalability might emerge. You call one API that raises one exception, then call another that raises two more, and so on, until the method must deal with more exceptions than it reasonably can. Consider a method that must deal with these two:

void throwsDaysExceptions() throws EvenDayException, OddDayException {
    // …
}

void throwsYearsExceptions() throws LeapYearException {
    // …
}

It's doomed to have more exception-handling code than business logic:

void handleDate() {
    try {
        throwsDaysExceptions();
    } catch (EvenDayException e) {
        // ...
    } catch (OddDayException e) {
        // ...
    }
    try {
        throwsYearsExceptions();
    } catch (LeapYearException e) {
        // ...
    }
}

And finally, the checked exception approach is claimed to have a problem with versioning. Namely, adding a new exception to the throws section of a method declaration breaks client code. Consider the throwing method from the example above. If you add another exception to its throws declaration, the client code will stop compiling:

void throwsDaysExceptions()
        throws EvenDayException, OddDayException, LeapYearException {
    // ...
}

try {
    // Unhandled exception: LeapYearException
    throwsDaysExceptions();
} catch (EvenDayException e) {
    // ...
} catch (OddDayException e) {
    // ...
}

The Kotlin Way

Sixteen years after Java was first released, in 2011, Kotlin was born from the efforts of JetBrains, a Czech company founded by three Russian software engineers. The new programming language aimed to become a modern alternative to Java, mitigating all its known flaws.

I don’t know of any programming language that followed Java in implementing checked exceptions, Kotlin included, despite the fact that it targets the JVM. In Kotlin, you can throw and catch exceptions similarly to Java, but you’re not required to declare an exception in a method’s signature. (In fact, you can’t):

class CustomException : Exception()

fun throwCustomException() {
    throw CustomException()
}

fun rethrowCustomException() {
    try {
        throwCustomException()
    } catch (e: CustomException) {
        println(e.message)
    }
}

Even catching is up to the programmer:

fun rethrowCustomException() {
    throwCustomException() // No compilation errors.
}

For interoperability with Java (and some other programming languages), Kotlin introduced the @Throws annotation. Although it’s optional and purely informative, it’s required for calling a throwing Kotlin method in Java:

@Throws(CustomException::class)
fun throwCustomException() {
    throw CustomException()
}

From One Extreme to Another

It may seem that programmers can finally breathe easy, but, personally, I think that by solving the original problem, this new approach (Kotlin’s exceptions model) creates another. Unscrupulous developers are free to ignore all possible exceptions entirely. Nothing stops them from quickly wrapping a handful of exceptions in a try-catch expression and shipping the result to their end users, with a prayer. Otherwise, unhandled exceptions will simply be discovered by the end users.

Even if you’re a disciplined engineer, you’re not safe: neither the compiler nor the API will alert you to exceptions lurking inside. There’s no reliable way to make sure that all possible errors are being properly handled.

You can only guard yourself against your own code by patiently annotating your methods with @Throws. Even then, the compiler will tell you nothing, and it’s easy to forget one exception or another.

The Swift Way

Swift first appeared publicly a little later, in 2014. And again, we saw something new. The error-handling model itself lies somewhere between Java’s and Kotlin’s, but how it works together with the language’s optionals is incredible. But first things first.

Of course, Swift has runtime, “unchecked”, errors—an array index out of range, a force-unwrapped optional value turned out to be nil, etc. But unlike Java or Kotlin, you can’t catch them in Swift. This makes sense since runtime exceptions can only happen because of a programming mistake, or intentionally (for instance, by calling fatalError()).

The rest of exceptions are errors that are explicitly thrown in code. All methods that throw anything must be marked with the throws keyword, and all code that calls such methods must either handle errors or propagate them further. Looks familiar, doesn’t it? But there’s a catch.

Fly in the Ointment

Let’s look at an example from above:

func throwError() throws {
    if (Calendar.current.component(.day, from: Date()) % 2 == 0) {
        throw EvenDayError()
    } else {
        throw OddDayError()
    }
}

As you can see, you don’t declare specific errors that a method can throw; you’re only required to mark it as throwing something. The consequence of this is that you, again, don’t really know what to catch.

Unfortunately, the code below won’t compile:

do {
    /*
     Errors thrown from here are not handled because the enclosing
     catch is not exhaustive
     */
    try throwError()
} catch is EvenDayError {
    print(String(describing: EvenDayError.self))
} catch is OddDayError {
    print(String(describing: OddDayError.self))
}

You always have to add Pokémon handling:

do {
    try throwError()
} catch is EvenDayError {
    print(String(describing: EvenDayError.self))
} catch is OddDayError {
    print(String(describing: OddDayError.self))
} catch {
    print(error)
}

In fact, the Swift compiler doesn’t care about specific error types that you try to catch. You can even add a handler for something entirely irrelevant:

do {
    try throwError()
} catch is EvenDayError {
    print(String(describing: EvenDayError.self))
} catch is IrrelevantError {
    print(String(describing: IrrelevantError.self))
} catch {
    print(error)
}

Or you can have only one default catch block that covers everything:

do {
    try throwError()
} catch {
    print(error)
}

Another drawback of this approach is that, without a workaround, you can’t catch one error and propagate another. The only way to implement such behavior is to catch the error you’re interested in and throw it again:

func rethrow() throws {
    do {
        try throwError()
    } catch is EvenDayError {
        throw EvenDayError() // Here's the trick.
    } catch is IrrelevantError {
        print(String(describing: IrrelevantError.self))
    } catch {
        print(error)
    }
}

Ointment

In my opinion, Swift’s strongest merit is its optionals system, which cooperates with all aspects of the language. If you don’t care about thrown errors, instead of writing fictitious catch blocks you can always write try?. Execution of the method will stop the moment the error is thrown, without propagating it further:

try? throwError()

If you’re feeling bold, you can use try! instead of try?. It will crash at runtime if an error is actually thrown, but it lets you omit the do-catch block:

try! throwError()

This method also allows converting a throwing call to a value. try? will give you an optional one, whereas try! has an effect similar to force-unwrapping:

func intOrError() throws -> Int {
// …
}

let optionalInt = try? intOrError() // Optional(Int)
let dangerousCall = try! intOrError() // Int or die!

Conclusion

Personally, I find Kotlin’s way, ahem, a failure. I can understand why Kotlin developers decided not to follow Java in its way of checked exceptions, but ignoring exceptions entirely, without a hint of static checks, is too much.

On the other hand, is the Java way really that harmful? No mechanism can defend software from undisciplined programmers. Even the best idea can be distorted and misused. But applying Java’s principles as designed can lead to good results.

Connecting two levels of abstraction, you can catch errors from one level and re-throw new types of errors to propagate them to the next level. You can catch several types of errors, “combine” them into one another, and throw them for further handling. This can help mitigate problems with encapsulation and scalability. For instance:

void throwCustomException() throws CustomException {
    try {
        throwDayException();
    } catch (EvenDayException | OddDayException e) {
        throw new CustomException();
    }
}

What Java lacked from the very beginning is Swift’s optionality system and a syntax that binds exception handling to optional values. Coupled with entirely static checks of thrown exceptions, I believe this would make a very strong model that could satisfy even the grouchiest programmers. Although adopting it in any of the aforementioned languages would require breaking changes, I personally believe it would be a game-changing improvement in code safety.

And if you want to improve your app stability right now, Shipbook is already here for you! It proactively inspects your app, catches exceptions and allows you to analyze them even before your users stumble upon the problem.

· 13 min read
Donald Le

Unit Testing in Android Development

Introduction

Unit testing entails the testing of the smallest parts of software, such as methods or classes. The main role of unit testing is to make sure the isolated part works as expected without integrating with third-party software, databases, or any dependency. To achieve this, software developers implement multiple testing techniques, like using stubs, mocks, dummies, and spies.

This post will show you why you should perform unit testing and how to implement it in your Android development project.

Benefits of Unit Testing

Unit testing allows you to catch software bugs early in the development process, instead of QA finding them during integration or end-to-end testing, or, even worse, in the production environment. Moreover, as you develop your product, more features are added, meaning integration tests and end-to-end tests alone cannot cover all the corner cases. With unit testing, more corner cases are covered, which ensures your product meets the expected quality.

Benefits of Test-Driven Development (TDD)

Unit testing often goes along with the test-driven development (TDD) methodology, where developers first write the test, then write the feature code. At first, the tests fail because the feature is not yet implemented. Once the feature code is in place, the tests pass.

The huge benefit of TDD is that a software team can make sure the product is built and will meet the expected requirements, as demonstrated by the tests. Moreover, because developers write the tests first, they need to spend more time thinking about the product and what features the product has to cover; this way, the product being built will tend to have a higher quality.

Also, writing tests before writing product code will prevent developers from needing to refactor the code just to be able to write tests for it. For example, in the Go language, if the developers do not implement code with an interface, it’s very hard to write tests later on.

Example Application to Demonstrate Unit Testing

To better understand how to apply proper testing techniques for Android applications, let’s get your hands dirty by building a real application and then write tests for it. The application will show a list of popular movies for users to choose from as suggestions for their weekly movie night. Check out this GitHub repository for the full application code.

After opening the application, users will see a list of popular movies:


Figure 1: The movie suggestion application shown on a virtual device

You can then tap on a movie for details like its plot summary and cast:


Figure 2: Details for the movie “Black Rock”

Unit Testing (Local Testing)

The unit tests of our application will be run by JUnit, a popular unit-testing framework for JVM languages like Java and Kotlin. If you’re not familiar with JUnit, you can learn more about it here. It helps you structure your tests: what needs to be done first, what is done last to clean up data, and which data should be collected for the test report.

An Example of a Simple Unit Test

Okay, now let’s write an example unit test for the application.

We have the MovieValidator class in the utils package, which has the function isValidMovie:

import android.text.Editable
import android.text.TextWatcher
import java.util.regex.Pattern

class MovieValidator : TextWatcher {

    internal var isValid = false

    override fun afterTextChanged(editableText: Editable) {
        isValid = isValidMovie(editableText)
    }

    override fun beforeTextChanged(s: CharSequence, start: Int, count: Int, after: Int) = Unit

    override fun onTextChanged(s: CharSequence, start: Int, before: Int, count: Int) = Unit

    companion object {
        private val MOVIE_PATTERN = Pattern.compile("^[a-zA-Z]+(?:[\\s-][a-zA-Z]+)*\$")

        fun isValidMovie(movie: CharSequence?): Boolean {
            return movie != null && MOVIE_PATTERN.matcher(movie).matches()
        }
    }
}
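Since the validation logic boils down to a regular expression, it can be exercised in isolation. Below is a self-contained Java equivalent of isValidMovie built around the same pattern (Kotlin’s \$ is only string-escape syntax; the regex itself is identical): one or more alphabetic words separated by single spaces or hyphens.

```java
import java.util.regex.Pattern;

// Standalone check of the movie-name pattern used by MovieValidator.
public class MoviePatternDemo {

    private static final Pattern MOVIE_PATTERN =
            Pattern.compile("^[a-zA-Z]+(?:[\\s-][a-zA-Z]+)*$");

    static boolean isValidMovie(CharSequence movie) {
        return movie != null && MOVIE_PATTERN.matcher(movie).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidMovie("The lord of the rings")); // true
        System.out.println(isValidMovie("Spider-Man"));            // true
        System.out.println(isValidMovie("name@email"));            // false: '@' not allowed
        System.out.println(isValidMovie("2001"));                  // false: digits not allowed
    }
}
```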

To write the unit test for the function isValidMovie, we will first create a test class called MovieValidatorTest in the test folder. Then, we will need to import the MovieValidator class to test the isValidMovie in it.

The MovieValidatorTest will look like the following:

import com.fernandocejas.sample.core.functional.MovieValidator
import mu.KotlinLogging
import org.junit.After
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Before
import org.junit.Test

class MovieValidatorTest {

    private val logger = KotlinLogging.logger {}

    @Before
    fun setUp() {
        logger.info { "Starting the isValidMovie test" }
    }

    @Test
    fun isValidMovie() {
        assertTrue(MovieValidator.isValidMovie("The lord of the rings"))
        assertFalse(MovieValidator.isValidMovie("name@email"))
    }

    @After
    fun tearDown() {
        logger.info { "Finishing the isValidMovie test" }
    }
}

In the test file above, we implemented one test case that checks the validity of a movie name. We also applied the @Before and @After annotations to add logging, so that we know when the test is about to start and when it is about to finish.

The @Before and @After annotations help us structure our test scenarios better. The method annotated with @Before is executed before every test, and the method annotated with @After is executed after every test. Developers often use these to set up test data and to clean it up after testing is complete.

Note: in order to use the logging library, we need to add the following dependency to our Gradle configuration file:

implementation 'io.github.microutils:kotlin-logging-jvm:2.0.11'

When we run the test, we will see results as below:


Figure 3: Tests passed for movie validator test case

The example unit test we just went over is very simple. But in real-world applications, you’ll need to deal with all kinds of dependencies and third-party APIs. How can we write tests for functions that interact with third-party dependencies?

When implementing unit testing, the best practice is to not deal with the real thing: the real database, the real response from another API that the function takes as input, or any other third-party dependency. The reason is that in unit testing we want to isolate the tests so that each test exercises exactly one unit. We could test against the real database or the third-party dependencies, but that would make the tests flaky. Instead, we’ll use “test doubles,” objects that stand in for the real objects when we implement the test. There are five types of test doubles: fake, dummy, stub, spy, and mock.

In this article, we’ll review the stub and mock types and use them for our example application.

  • Stubs provide fake data to the test.
  • Mocks check whether the expectation of the unit we are testing has been met.
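The distinction is easiest to see with hand-rolled doubles. In the following sketch, the MovieSource interface and both implementations are invented for illustration (they are not part of the sample app): the stub only feeds canned data in, while the mock records how it was used.

```java
import java.util.List;

// Hand-rolled illustration of the stub/mock distinction.
public class TestDoublesDemo {

    interface MovieSource {
        List<String> popularMovies();
    }

    // Stub: returns fake data so the unit under test has something to consume.
    static class StubMovieSource implements MovieSource {
        public List<String> popularMovies() {
            return List.of("Black Rock", "The Lord of the Rings");
        }
    }

    // Mock: records interactions so the test can verify expectations.
    static class MockMovieSource implements MovieSource {
        int calls = 0;
        public List<String> popularMovies() {
            calls++;
            return List.of();
        }
    }

    // Unit under test: counts the movies provided by its source.
    static int countMovies(MovieSource source) {
        return source.popularMovies().size();
    }

    public static void main(String[] args) {
        // With the stub we assert on the returned state:
        System.out.println(countMovies(new StubMovieSource())); // 2

        // With the mock we assert on the interaction:
        MockMovieSource mock = new MockMovieSource();
        countMovies(mock);
        System.out.println(mock.calls); // 1
    }
}
```

Libraries like MockK generate such doubles for you, but the underlying idea is exactly this.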

How to Create Stubs and Mocks in a Sample Project

To better understand how to use a stub and a mock, let’s apply these techniques for writing unit tests in our movie suggestion app using MockK.

MockK is a well-known mocking library that provides native support for the Kotlin language, so users who are fond of Kotlin’s syntactic sugar can keep enjoying it with MockK. Moreover, since classes and properties in Kotlin are final by default, mocking with Mockito is considerably harder; with MockK, users won’t have to deal with that challenge. To learn more about the benefits of using MockK over Mockito, check out this article.

To include the MockK library in an Android project, we need to add this line to the build.gradle.kts file:

testImplementation(TestLibraries.mockk)

The TestLibraries.mockk value is defined in Dependencies.kt, with the version number kept in a separate Versions object:

object Versions {
    const val mockk = "1.10.0"
}

object TestLibraries {
    const val mockk = "io.mockk:mockk:${Versions.mockk}"
}

And that’s it.

So, let’s say we’re trying to test the class GetMovieDetails.

Initially, we usually implement the code without dependency injection like the following:

class GetMovieDetails : UseCase<MovieDetails, Params>() {

    private val moviesRepository = MoviesRepository()

    override fun run(params: Params) = moviesRepository.movieDetails(params.id)

    data class Params(val id: Int)
}

The MoviesRepository class is defined as below:

class MoviesRepository {

    lateinit var context: Context
    lateinit var retrofit: Retrofit

    private val networkHandler = NetworkHandler(context)
    private val service = MoviesService(retrofit)

    fun movieDetails(movieId: Int): Either<Failure, MovieDetails> {
        return when (networkHandler.isNetworkAvailable()) {
            true -> request(
                service.movieDetails(movieId),
                { it.toMovieDetails() },
                MovieDetailsEntity.empty
            )
            false -> Left(NetworkConnection)
        }
    }
}

But writing code like this makes unit testing the class impossible, since we cannot mock the dependencies of the MoviesRepository class. Well, technically we could still write unit tests, but we’d need to use the real movie database, which would lead to slower tests and couple them to third-party dependencies. Moreover, the problem with third-party dependencies is that they might fail for reasons that have nothing to do with our code.

The best practice when it comes to writing code that can be tested is applying dependency injection, which you can learn more about here.
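As a minimal sketch of the idea in plain Java (the class names mirror the app’s, but the code here is simplified and hypothetical), the dependency arrives through the constructor, so a test can hand in a fake:

```java
// Minimal constructor-injection sketch: the use case depends on an interface,
// so tests can pass in a fake instead of a network-backed implementation.
public class InjectionDemo {

    interface MoviesRepository {
        String movieDetails(int movieId);
    }

    // The dependency is injected through the constructor instead of being
    // constructed inside the class, so it can be replaced in tests.
    static class GetMovieDetails {
        private final MoviesRepository repository;

        GetMovieDetails(MoviesRepository repository) {
            this.repository = repository;
        }

        String run(int movieId) {
            return repository.movieDetails(movieId);
        }
    }

    public static void main(String[] args) {
        // Production would pass a real repository; a test passes a fake.
        MoviesRepository fake = movieId -> "details for movie " + movieId;
        GetMovieDetails useCase = new GetMovieDetails(fake);
        System.out.println(useCase.run(1)); // details for movie 1
    }
}
```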

First, we need to change the MoviesRepository class into an interface. The code will be changed as below:

interface MoviesRepository {

    fun movies(): Either<Failure, List<Movie>>

    fun movieDetails(movieId: Int): Either<Failure, MovieDetails>

    class Network
    @Inject constructor(
        private val networkHandler: NetworkHandler,
        private val service: MoviesService
    ) : MoviesRepository {

        override fun movieDetails(movieId: Int): Either<Failure, MovieDetails> {
            return when (networkHandler.isNetworkAvailable()) {
                true -> request(
                    service.movieDetails(movieId),
                    { it.toMovieDetails() },
                    MovieDetailsEntity.empty
                )
                false -> Left(NetworkConnection)
            }
        }

        // ...
    }
}

Then, the class GetMovieDetails will be written as below, with MoviesRepository injected through its constructor:

class GetMovieDetails
@Inject constructor(
    private val moviesRepository: MoviesRepository
) : UseCase<MovieDetails, GetMovieDetails.Params>() {

    override fun run(params: Params) = moviesRepository.movieDetails(params.id)

    data class Params(val id: Int)
}

In order to test this class without calling the real database, we need to mock the MoviesRepository interface using MockK:

@MockK private lateinit var moviesRepository: MoviesRepository

The test function for the movieDetails function will be written as below:

class GetMovieDetailsTest : UnitTest() {

    private lateinit var getMovieDetails: GetMovieDetails

    @MockK private lateinit var moviesRepository: MoviesRepository

    @Before fun setUp() {
        getMovieDetails = GetMovieDetails(moviesRepository)
        every { moviesRepository.movieDetails(MOVIE_ID) } returns Right(MovieDetails.empty)
    }

    @Test fun `should get data from repository`() {
        getMovieDetails.run(GetMovieDetails.Params(MOVIE_ID))
        verify(exactly = 1) { moviesRepository.movieDetails(MOVIE_ID) }
    }

    companion object {
        private const val MOVIE_ID = 1
    }
}

In the setUp step, annotated with @Before, we initialize the getMovieDetails variable and stub the repository’s response.

Then, in the test function, we call the run function with GetMovieDetails.Params(MOVIE_ID) as input. After that, we use the verify function provided by MockK to check that the call was actually made exactly one time.

Now, we will run the test to see whether it works or not. To run the test in Android Studio, click on the green button on the test method:


Figure 4: Log for the unit test run when testing GetMovieDetails class

Advantages and Disadvantages of Unit Testing

With unit tests in place, we can be confident that our logic behaves as expected, and we will be notified if any change breaks it. In addition, unit tests run blazingly fast. Still, we can’t be sure that users can interact with the application as we expect.

That’s where UI testing comes into play.

UI Testing (Instrumentation Testing)

Traditionally, automated end-to-end testing is done in a black-box manner: we create a separate project for the automated end-to-end tests, find locators for the elements in our application, and interact with them via a framework such as Appium or UIAutomator. However, this approach is time-consuming, since we have to redefine the locators of the elements in our application; also, Appium is pretty slow when interacting with a real mobile application.

To address the drawbacks of Appium, we’ll implement instrumentation tests with the help of the Espresso and AndroidX testing frameworks.

How to Implement UI Testing in a Project

Let’s say we want to check whether the movie list button is shown and is clickable.

The MoviesActivity is defined as follows: