

What is new in Android P — ImageDecoder & AnimatedImageDrawable

Photo by Lastly Creative on Unsplash (image source)

This article was first published by Mariusz Dąbrowski in the App’n’roll Publication on Medium. This is part of a series of articles about the new APIs available in Android 9.0:

  1. BiometricPrompt

  2. ImageDecoder and AnimatedImageDrawable

  3. PrecomputedText

The sample application with source code used in this article can be found here.

Android P new image API

Images are a very important part of every mobile application. However, handling them from the Android developer's perspective is not an easy task. They can come from different sources (assets, files, web, …) and they can have different formats (jpg, png, …). No wonder then that most developers use Glide, Picasso or another third-party library in order to simplify decoding and displaying images.

The release of the new Android P is a good time for every developer to rethink their approach, because two powerful classes were introduced: ImageDecoder and AnimatedImageDrawable.

The first one allows us to create a drawable or bitmap from different sources (a file, byte buffer, resource, URI or asset file). The second one extends the Drawable class and is capable of displaying GIF and WEBP animations.

Displaying images the old way

Before Android P, the framework allowed us to decode and display images, but the API was not unified. Depending on what we wanted to get, we needed to use static methods from either the Drawable or the BitmapFactory class:

// drawable from file
Drawable.createFromPath(pathName)

// drawable from asset (or stream)
Drawable.createFromStream(context.assets.open(assetFileName), "")

// bitmap from file
BitmapFactory.decodeFile(pathName)

// bitmap from asset (or stream)
BitmapFactory.decodeStream(context.assets.open(assetFileName))

// bitmap from byte array
BitmapFactory.decodeByteArray(data, offset, length, opts)

Quite a mess but at least displaying is simple:

// display drawable
imageView.setImageDrawable(drawable)

// display bitmap
imageView.setImageBitmap(bitmap)

Displaying images the new way

With the new ImageDecoder, the decoding process is divided into two steps. First, a Source object needs to be created:

// source from file
ImageDecoder.createSource(file)

// source from byte buffer
ImageDecoder.createSource(byteBuffer)

// source from resource
ImageDecoder.createSource(resources, resId)

// source from URI
ImageDecoder.createSource(contentResolver, uri)

// source from asset file
ImageDecoder.createSource(assetManager, assetFileName)

Next, the Source object can be decoded into a drawable or a bitmap:

// create bitmap
val bitmap = ImageDecoder.decodeBitmap(source)

// create drawable
val drawable = ImageDecoder.decodeDrawable(source)

A Source object can be created on any thread because it only captures values. Also, one Source object can be reused for multiple drawable/bitmap decoding processes, and it can even be accessed from different threads.
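To illustrate the reuse described above, here is a minimal sketch (the asset name "cat.jpg" and the surrounding Context are assumptions for the example, not part of the API):

```kotlin
// One Source, created once from an asset, backing two independent decodes.
val source = ImageDecoder.createSource(context.assets, "cat.jpg")

// A small thumbnail, resized while decoding…
val thumbnail = ImageDecoder.decodeBitmap(source) { decoder, info, _ ->
    decoder.setTargetSize(info.size.width / 4, info.size.height / 4)
}

// …and a full-size drawable from the very same Source object.
val fullSize = ImageDecoder.decodeDrawable(source)
```

Each decode call produces a new, independent result, which is what makes sharing one Source across threads safe.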

A full example of how to load image files from assets:

val assets = context.assets
val assetFileName = "cat.jpg"

val source = ImageDecoder.createSource(assets, assetFileName)
val drawable = ImageDecoder.decodeDrawable(source)



Displaying animated images

Before Android P, GIF and WEBP were not directly supported. Using the BitmapFactory.decode...() or Drawable.create...() methods on an animated file led only to a static image with the first frame of the animation. Apps were able to display GIFs using the Glide library or using the Movie class, as described here.

With the new AnimatedImageDrawable class, displaying animated GIFs and WEBPs is as simple as displaying static images, all thanks to ImageDecoder. The steps are the same:

  1. Create source object from an animated file (or ByteArray, URI, …)

  2. Decode source object into drawable

ImageDecoder will recognise the type of the provided source to decode: for static images it will create a BitmapDrawable and for animated images an AnimatedImageDrawable. All that is left to do is start the animation.

Here is a full example of how to load an animated file (GIF or WEBP) from assets:

val assets = context.assets
val assetFileName = "cat.gif"

val source = ImageDecoder.createSource(assets, assetFileName)
val drawable = ImageDecoder.decodeDrawable(source)

imageView.setImageDrawable(drawable)
if (drawable is AnimatedImageDrawable) {
    drawable.start()
}


We can also decode the source of an animated image into a bitmap, but then we will receive only the first frame of the animation.
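This first-frame behaviour can be handy, for example when generating a static thumbnail for an animation. A minimal sketch (the asset name "cat.gif" is a placeholder):

```kotlin
// decodeBitmap on an animated source yields a static bitmap
// containing only the first frame of the animation.
val source = ImageDecoder.createSource(context.assets, "cat.gif")
val firstFrame = ImageDecoder.decodeBitmap(source)

// Display it like any other bitmap, e.g. as a list thumbnail.
imageView.setImageBitmap(firstFrame)
```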


Image transformations

What is really cool about ImageDecoder is the way we can implement image transformations.

When invoking the decodeDrawable(...) or decodeBitmap(...) method, we can pass an OnHeaderDecodedListener object as a second parameter. This interface has only one callback method:

  • onHeaderDecoded(decoder, info, source), invoked when the image header is decoded and the size of the image is known

Parameters description:

  • decoder – allows us to change the default settings of the decoding process

  • info – information about the encoded image

  • source – the source object

On the decoder object we can also set a custom PostProcessor, which gives us direct access to the Canvas of an image or animation.

Examples of different transformations:

// crop transformation
ImageDecoder.OnHeaderDecodedListener { decoder, info, source ->
    val size = 100
    val centerX = info.size.width / 2
    val centerY = info.size.height / 2
    decoder.crop = Rect(
        centerX - size,
        centerY - size,
        centerX + size,
        centerY + size)
}

// resize transformation
ImageDecoder.OnHeaderDecodedListener { decoder, info, source ->
    decoder.setTargetSize(info.size.width / 2, info.size.height / 2)
}

// round corners transformation - using PostProcessor
ImageDecoder.OnHeaderDecodedListener { decoder, info, source ->
    val path = Path().apply {
        fillType = Path.FillType.INVERSE_EVEN_ODD
    }
    val paint = Paint().apply {
        isAntiAlias = true
        color = Color.TRANSPARENT
        xfermode = PorterDuffXfermode(PorterDuff.Mode.SRC)
    }
    decoder.setPostProcessor { canvas ->
        val width = canvas.width.toFloat()
        val height = canvas.height.toFloat()
        val direction = Path.Direction.CW
        path.addRoundRect(0f, 0f, width, height, 40f, 40f, direction)
        canvas.drawPath(path, paint)
        PixelFormat.TRANSLUCENT
    }
}


Transformations can be applied both to static images and to animated GIFs or WEBPs. When a custom PostProcessor is used with an AnimatedImageDrawable, the OnHeaderDecodedListener callback is invoked only once, but the post processor's drawing method is applied to every frame of the animation.


Error handling

The new API also enables us to easily detect errors in the decoding process. We need to use OnHeaderDecodedListener and, inside its callback, set an OnPartialImageListener on the decoder object:

ImageDecoder.OnHeaderDecodedListener { decoder, info, source ->
    decoder.setOnPartialImageListener { decodeException ->
        true
    }
    ...
}

This partial image listener has one callback method, which will be invoked if the decoded image is incomplete, contains an error, or an exception occurs during the process. The type of error can be read from the decodeException.error field and the possible values are SOURCE_EXCEPTION, SOURCE_INCOMPLETE and SOURCE_MALFORMED_DATA.

The callback method should return a Boolean:

  • false – the default; aborts the decoding process and throws a java.io.IOException

  • true – creates a partially decoded image; the remaining lines will be blank

If we choose to display a partially decoded image, then all its transformations will work as expected.


Although error handling works well for static images, there are some problems with animated formats. When I tried to display a malformed GIF, the animation stopped on the first incomplete frame. Invalid WEBP files, on the other hand, crashed the application with an exception:

android.graphics.ImageDecoder$DecodeException: Failed to create image decoder with message 'invalid input'Input contained an error.


Threading

It is good practice not to run the image creation process on the main thread because, in the case of bigger images, it can freeze the UI. Instead, a worker thread should be used. The simplest way to do this is with Kotlin coroutines:

val assets = context.assets
val assetFileName = "cat.gif"
val listener = ImageDecoder.OnHeaderDecodedListener { decoder, _, _ ->
    decoder.setOnPartialImageListener { decodeException ->
        true
    }
}

GlobalScope.launch(Dispatchers.Default) { // worker thread
    val source = ImageDecoder.createSource(assets, assetFileName)
    val drawable = ImageDecoder.decodeDrawable(source, listener)
    GlobalScope.launch(Dispatchers.Main) { // UI thread
        imageView.setImageDrawable(drawable)
        if (drawable is AnimatedImageDrawable) {
            drawable.start()
        }
    }
}


API unification is always a good thing, so introducing these new components is definitely a step in the right direction towards making developers' work easier. Image transformations can now be implemented more easily than with the Glide or Picasso libraries. As for adding support for GIF and WEBP – well, better late than never.

Sadly, both ImageDecoder and AnimatedImageDrawable are only available on Android P and above, so in order to use them on older devices we need to wait for the release of a suitable component in the compat library.
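Until such a compat component arrives, the usual workaround is to gate the new API behind a runtime SDK check and fall back to the old methods. A minimal sketch (the function name loadDrawable and the asset-based loading are illustrative assumptions):

```kotlin
// Sketch only: uses ImageDecoder on API 28+ and falls back
// to the pre-P Drawable API on older devices.
fun loadDrawable(context: Context, assetFileName: String): Drawable? =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        val source = ImageDecoder.createSource(context.assets, assetFileName)
        ImageDecoder.decodeDrawable(source)
    } else {
        // Older devices: static first frame only for animated files.
        Drawable.createFromStream(context.assets.open(assetFileName), null)
    }
```

Note that on the fallback path an animated GIF or WEBP will still render as a static image, so apps targeting older devices may want to keep Glide or the Movie class around for animations.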

If you’ve found this article useful please don’t hesitate to share it and if you have any further questions, feel free to comment below.
