Create a spotlight effect with CameraX and Jetpack Compose | by Jolanda Verhoef | Android Developers | Jan, 2025

Part 3 of Unlocking the Power of CameraX in Jetpack Compose

Hi there! Welcome back to our series on CameraX and Jetpack Compose. In the previous posts, we covered the fundamentals of setting up a camera preview and added tap-to-focus functionality.

  • 🧱 Part 1: Building a basic camera preview using the new camera-compose artifact. We covered permission handling and basic integration.
  • 👆 Part 2: Using the Compose gesture system, graphics, and coroutines to implement a visual tap-to-focus.
  • 🔦 Part 3 (this post): Exploring how to overlay Compose UI elements on top of your camera preview for a richer user experience.
  • 📂 Part 4: Using adaptive APIs and the Compose animation framework to smoothly animate to and from tabletop mode on foldable phones.

In this post, we'll dive into something a bit more visually engaging: implementing a spotlight effect on top of our camera preview, using face detection as the basis for the effect. Why, you ask? I'm not sure. But it sure looks cool 🙂. And, more importantly, it demonstrates how we can easily translate sensor coordinates into UI coordinates, allowing us to use them in Compose!

First, let's modify the CameraPreviewViewModel to enable face detection. We'll use the Camera2Interop API, which allows us to interact with the underlying Camera2 API from CameraX. This gives us the opportunity to use camera features that aren't exposed by CameraX directly. We need to make the following changes:

  • Create a StateFlow that contains the face bounds as a list of Rects.
  • Set the STATISTICS_FACE_DETECT_MODE capture request option to FULL, which enables face detection.
  • Set a CaptureCallback to get the face information from the capture result.
class CameraPreviewViewModel : ViewModel() {
    ...
    private val _sensorFaceRects = MutableStateFlow(listOf<Rect>())
    val sensorFaceRects: StateFlow<List<Rect>> = _sensorFaceRects.asStateFlow()

    private val cameraPreviewUseCase = Preview.Builder()
        .apply {
            Camera2Interop.Extender(this)
                .setCaptureRequestOption(
                    CaptureRequest.STATISTICS_FACE_DETECT_MODE,
                    CaptureRequest.STATISTICS_FACE_DETECT_MODE_FULL
                )
                .setSessionCaptureCallback(object : CameraCaptureSession.CaptureCallback() {
                    override fun onCaptureCompleted(
                        session: CameraCaptureSession,
                        request: CaptureRequest,
                        result: TotalCaptureResult
                    ) {
                        super.onCaptureCompleted(session, request, result)
                        result.get(CaptureResult.STATISTICS_FACES)
                            ?.map { face -> face.bounds.toComposeRect() }
                            ?.toList()
                            ?.let { faces -> _sensorFaceRects.update { faces } }
                    }
                })
        }
        .build().apply {
            ...
        }
}

With these changes in place, our view model now emits a list of Rect objects representing the bounding boxes of detected faces in sensor coordinates.
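Note that face detection isn't supported to the same degree on every device. Before relying on it, you can check which modes the camera reports. Here's a minimal sketch of such a check (not from the original post; the helper name is illustrative), using the Camera2CameraInfo interop:

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata
import androidx.camera.camera2.interop.Camera2CameraInfo
import androidx.camera.camera2.interop.ExperimentalCamera2Interop
import androidx.camera.core.CameraInfo

// Illustrative helper: true if the camera reports FULL face detection support.
@OptIn(ExperimentalCamera2Interop::class)
fun supportsFullFaceDetection(cameraInfo: CameraInfo): Boolean {
    val availableModes = Camera2CameraInfo.from(cameraInfo)
        .getCameraCharacteristic(
            CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES
        )
    return availableModes?.contains(CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL) == true
}

You could call this with the CameraInfo of the bound camera and, for example, fall back to STATISTICS_FACE_DETECT_MODE_SIMPLE (which returns face bounds but no landmarks) when FULL isn't available.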

The bounding boxes of detected faces that we saved in the last section use coordinates in the sensor coordinate system. To draw the bounding boxes in our UI, we need to transform these coordinates so that they're correct in the Compose coordinate system. We need to:

  • Transform the sensor coordinates into preview buffer coordinates
  • Transform the preview buffer coordinates into Compose UI coordinates

These transformations are done using transformation matrices, and each step has its own matrix: the SurfaceRequest.TransformationInfo gives us a sensorToBufferTransform matrix for the first step, and the CoordinateTransformer from the viewfinder gives us a transformMatrix for the second.

We can create a helper method that does the transformation for us:

private fun List<Rect>.transformToUiCoords(
    transformationInfo: SurfaceRequest.TransformationInfo?,
    uiToBufferCoordinateTransformer: MutableCoordinateTransformer
): List<Rect> = this.map { sensorRect ->
    val bufferToUiTransformMatrix = Matrix().apply {
        setFrom(uiToBufferCoordinateTransformer.transformMatrix)
        invert()
    }

    val sensorToBufferTransformMatrix = Matrix().apply {
        transformationInfo?.let {
            setFrom(it.sensorToBufferTransform)
        }
    }

    val bufferRect = sensorToBufferTransformMatrix.map(sensorRect)
    val uiRect = bufferToUiTransformMatrix.map(bufferRect)

    uiRect
}

  • We iterate through the list of detected faces, and for each face execute the transformation.
  • The CoordinateTransformer.transformMatrix that we get from our CameraXViewfinder transforms coordinates from UI to buffer coordinates by default. In our case, we want the matrix to work the other way around, transforming buffer coordinates into UI coordinates. Therefore, we use the invert() method to invert the matrix.
  • We first transform the face from sensor coordinates to buffer coordinates using the sensorToBufferTransformMatrix, and then transform those buffer coordinates to UI coordinates using the bufferToUiTransformMatrix.
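As a side note, the two matrices in the helper don't depend on the face being mapped, so they're rebuilt needlessly on every iteration. A variant of the helper (a sketch, not from the original post) could combine them into a single sensor-to-UI matrix once and reuse it for every face:

private fun List<Rect>.transformToUiCoords(
    transformationInfo: SurfaceRequest.TransformationInfo?,
    uiToBufferCoordinateTransformer: MutableCoordinateTransformer
): List<Rect> {
    // Convert the sensor -> buffer transform once.
    val sensorToBufferTransformMatrix = Matrix().apply {
        transformationInfo?.let { setFrom(it.sensorToBufferTransform) }
    }
    // Start from buffer -> UI (the inverted UI -> buffer matrix), then
    // post-multiply by sensor -> buffer so each point is mapped
    // sensor -> buffer -> UI in a single map() call.
    val sensorToUiTransformMatrix = Matrix().apply {
        setFrom(uiToBufferCoordinateTransformer.transformMatrix)
        invert()
        timesAssign(sensorToBufferTransformMatrix)
    }
    return map { sensorRect -> sensorToUiTransformMatrix.map(sensorRect) }
}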

Now, let's update the CameraPreviewContent composable to draw the spotlight effect. We'll use a Canvas composable to draw a gradient mask over the preview, keeping the detected faces visible:

@Composable
fun CameraPreviewContent(
    viewModel: CameraPreviewViewModel,
    modifier: Modifier = Modifier,
    lifecycleOwner: LifecycleOwner = LocalLifecycleOwner.current
) {
    val surfaceRequest by viewModel.surfaceRequest.collectAsStateWithLifecycle()
    val sensorFaceRects by viewModel.sensorFaceRects.collectAsStateWithLifecycle()
    val transformationInfo by
        produceState<SurfaceRequest.TransformationInfo?>(null, surfaceRequest) {
            try {
                surfaceRequest?.setTransformationInfoListener(Runnable::run) { transformationInfo ->
                    value = transformationInfo
                }
                awaitCancellation()
            } finally {
                surfaceRequest?.clearTransformationInfoListener()
            }
        }
    val shouldSpotlightFaces by remember {
        derivedStateOf { sensorFaceRects.isNotEmpty() && transformationInfo != null }
    }
    val spotlightColor = Color(0xDDE60991)
    ..

    surfaceRequest?.let { request ->
        val coordinateTransformer = remember { MutableCoordinateTransformer() }
        CameraXViewfinder(
            surfaceRequest = request,
            coordinateTransformer = coordinateTransformer,
            modifier = ..
        )

        AnimatedVisibility(shouldSpotlightFaces, enter = fadeIn(), exit = fadeOut()) {
            Canvas(Modifier.fillMaxSize()) {
                val uiFaceRects = sensorFaceRects.transformToUiCoords(
                    transformationInfo = transformationInfo,
                    uiToBufferCoordinateTransformer = coordinateTransformer
                )

                // Fill the whole space with the color
                drawRect(spotlightColor)
                // Then extract each face and make it transparent
                uiFaceRects.forEach { faceRect ->
                    drawRect(
                        Brush.radialGradient(
                            0.4f to Color.Black, 1f to Color.Transparent,
                            center = faceRect.center,
                            radius = faceRect.minDimension * 2f,
                        ),
                        blendMode = BlendMode.DstOut
                    )
                }
            }
        }
    }
}

Here's how it works:

  • We collect the list of faces from the view model.
  • To make sure we're not recomposing the whole screen every time the list of detected faces changes, we use derivedStateOf to keep track of whether any faces are detected at all. This is then used with AnimatedVisibility to animate the colored overlay in and out.
  • The surfaceRequest contains the information we need to transform sensor coordinates into buffer coordinates in the SurfaceRequest.TransformationInfo. We use the produceState function to set up a listener on the surface request, and clear that listener when the composable leaves the composition tree.
  • We use a Canvas to draw a translucent pink rectangle that covers the whole screen.
  • We defer reading the sensorFaceRects variable until we're inside the Canvas draw block, and only then transform the coordinates into UI coordinates.
  • We iterate over the detected faces, and for each face we draw a radial gradient that makes the inside of the face rectangle transparent.
  • We use BlendMode.DstOut to cut the gradient out of the pink rectangle: this blend mode keeps the destination only where the source is transparent, so the opaque center of each gradient punches a clear hole that softly fades back into the pink overlay, creating the spotlight effect.
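The .. placeholders in the snippet above stand in for code covered in the earlier parts of the series. Most importantly, the preview only produces a surface request once the use case is bound to the camera; assuming the bindToCamera suspend function from Part 1, the binding inside this composable would look roughly like this:

val context = LocalContext.current
LaunchedEffect(lifecycleOwner) {
    // Bind the preview use case for as long as this composable is in composition.
    viewModel.bindToCamera(context.applicationContext, lifecycleOwner)
}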

Note: When you switch the camera to DEFAULT_FRONT_CAMERA, you'll notice that the spotlight is mirrored! This is a known issue, tracked in the Google Issue Tracker.
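Until that issue is resolved, one possible workaround (an untested sketch, not from the original post) is to mirror the transformed rectangles horizontally within the drawing area when the front-facing camera is active, for example inside the Canvas draw block where size is available:

// Hypothetical front-camera workaround: flip each UI rect around the
// vertical center line of the drawing area.
val mirroredFaceRects = uiFaceRects.map { rect ->
    Rect(
        left = size.width - rect.right,
        top = rect.top,
        right = size.width - rect.left,
        bottom = rect.bottom
    )
}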

With this code, we have a fully functional spotlight effect that highlights detected faces. You can find the full code snippet here.

This effect is just the beginning. By using the power of Compose, you can create a myriad of visually stunning camera experiences. Being able to transform sensor and buffer coordinates into Compose UI coordinates and back means we can use all of Compose's UI features and integrate them seamlessly with the underlying camera system. With animations, advanced UI graphics, simple UI state management, and full gesture control, your imagination is the limit!

In the final post of the series, we'll dive into how to use adaptive APIs and the Compose animation framework to seamlessly transition between different camera UIs on foldable devices. Stay tuned!
